IEEE Communications Magazine
July 2011, Vol. 49, No. 7
www.comsoc.org
Free ComSoc IPv6 Tutorial
See Page 1
IEEE Standards for Wireless
Network and Service Management
Future Internet Architectures
Free ComSoc Articles on 3GPP
See Page 3
A Publication of the IEEE Communications Society
Introducing the Cisco® Carrier Packet Transport (CPT) System, the industry’s first
standards-based, Packet Optical Transport System (P-OTS) that unifies packet and
transport technologies using Multiprotocol Label Switching-Transport Profile (MPLS-TP).
Enhance your service offerings and increase profitability
Learn more about the Cisco Carrier Packet Transport System and view the
Current Analysis report at www.cisco.com/go/cpt.
New 2011 tutorial
BUILDING A COMPREHENSIVE IPv6 TRANSITION STRATEGY
To maintain business continuity, service providers must now accelerate
their transition to Internet Protocol Version 6 (IPv6).
Deploying IPv6 requires a well-constructed network design, a detailed
deployment plan, and thorough testing to ensure compatibility with
existing network characteristics. Depending on the primary drivers for
IPv6 deployment and the state of the current IPv4 network, Service
Providers may choose different approaches for integrating IPv6 into
their networks. This tutorial is intended to help Service Providers build a
comprehensive IPv6 transition strategy.
Chris Metz, Cisco
FREE ACCESS SPONSORED BY
Brought to you by
For other sponsor opportunities,
please contact Eric Levine,
Associate Publisher
Phone: 212-705-8920,
E-mail: [email protected]
Director of Magazines
Andrzej Jajszczyk, AGH U. of Sci. & Tech. (Poland)
Editor-in-Chief
Steve Gorshe, PMC-Sierra, Inc. (USA)
Associate Editor-in-Chief
Sean Moore, Centripetal Networks (USA)
Senior Technical Editors
Tom Chen, Swansea University (UK)
Nim Cheung, ASTRI (China)
Nelson Fonseca, State Univ. of Campinas (Brazil)
Peter T. S. Yum, The Chinese U. Hong Kong (China)
Technical Editors
Sonia Aissa, Univ. of Quebec (Canada)
Mohammed Atiquzzaman, U. of Oklahoma (USA)
Paolo Bellavista, DEIS (Italy)
Tee-Hiang Cheng, Nanyang Tech. U. (Rep. Singapore)
Sudhir S. Dixit, Hewlett-Packard Labs India (India)
Stefano Galli, ASSIA, Inc. (USA)
Joan Garcia-Haro, Poly. U. of Cartagena (Spain)
Admela Jukan, Tech. Univ. Carolo-Wilhelmina zu
Braunschweig (Germany)
Vimal Kumar Khanna, mCalibre Technologies (India)
Janusz Konrad, Boston University (USA)
Deep Medhi, Univ. of Missouri-Kansas City (USA)
Nader F. Mir, San Jose State Univ. (USA)
Amitabh Mishra, Johns Hopkins University (USA)
Seshadri Mohan, University of Arkansas (USA)
Glenn Parsons, Ericsson Canada (Canada)
Joel Rodrigues, Univ. of Beira Interior (Portugal)
Jungwoo Ryoo, The Penn. State Univ.-Altoona (USA)
Hady Salloum, Stevens Institute of Tech. (USA)
Antonio Sánchez Esguevillas, Telefonica (Spain)
Dan Keun Sung, Korea Adv. Inst. Sci. & Tech. (Korea)
Danny Tsang, Hong Kong U. of Sci. & Tech. (Hong Kong)
Chonggang Wang, InterDigital Commun., LLC (USA)
Alexander M. Wyglinski, Worcester Poly. Institute (USA)
Series Editors
Ad Hoc and Sensor Networks
Edoardo Biagioni, U. of Hawaii, Manoa (USA)
Silvia Giordano, Univ. of App. Sci. (Switzerland)
Automotive Networking and Applications
Wai Chen, Telcordia Technologies, Inc (USA)
Luca Delgrossi, Mercedes-Benz R&D N.A. (USA)
Timo Kosch, BMW Group (Germany)
Tadao Saito, University of Tokyo (Japan)
Consumer Communications and Networking
Madjid Merabti, Liverpool John Moores U. (UK)
Mario Kolberg, University of Stirling (UK)
Stan Moyer, Telcordia (USA)
Design & Implementation
Sean Moore, Avaya (USA)
Salvatore Loreto, Ericsson Research (Finland)
Integrated Circuits for Communications
Charles Chien (USA)
Zhiwei Xu, SST Communication Inc. (USA)
Stephen Molloy, Qualcomm (USA)
Network and Service Management Series
George Pavlou, U. of Surrey (UK)
Aiko Pras, U. of Twente (The Netherlands)
Networking Testing Series
Yingdar Lin, National Chiao Tung University (Taiwan)
Erica Johnson, University of New Hampshire (USA)
Tom McBeath, Spirent Communications Inc. (USA)
Eduardo Joo, Empirix Inc. (USA)
Topics in Optical Communications
Hideo Kuwahara, Fujitsu Laboratories, Ltd. (Japan)
Osman Gebizlioglu, Telcordia Technologies (USA)
John Spencer, Optelian (USA)
Vijay Jain, Verizon (USA)
Topics in Radio Communications
Joseph B. Evans, U. of Kansas (USA)
Zoran Zvonar, MediaTek (USA)
Standards
Yoichi Maeda, NTT Adv. Tech. Corp. (Japan)
Mostafa Hashem Sherif, AT&T (USA)
Columns
Book Reviews
Piotr Cholda, AGH U. of Sci. & Tech. (Poland)
History of Communications
Steve Weinstein (USA)
Regulatory and Policy Issues
J. Scott Marcus, WIK (Germany)
Jon M. Peha, Carnegie Mellon U. (USA)
Technology Leaders' Forum
Steve Weinstein (USA)
Very Large Projects
Ken Young, Telcordia Technologies (USA)
Publications Staff
Joseph Milizzo, Assistant Publisher
Eric Levine, Associate Publisher
Susan Lange, Online Production Manager
Jennifer Porcello, Production Specialist
Catherine Kemelmacher, Associate Editor
FUTURE INTERNET ARCHITECTURES:
DESIGN AND DEPLOYMENT PERSPECTIVES
GUEST EDITORS: RAJ JAIN, ARJAN DURRESI, AND SUBHARTHI PAUL
24 GUEST EDITORIAL

26 A SURVEY OF THE RESEARCH ON FUTURE INTERNET ARCHITECTURES
The current Internet, which was designed over 40 years ago, is facing unprecedented challenges in many aspects, especially in the commercial context.
JIANLI PAN, SUBHARTHI PAUL, AND RAJ JAIN

38 LOCI OF COMPETITION FOR FUTURE INTERNET ARCHITECTURES
Designing for competition is an important consideration for the design of future Internet architectures. Network architects should systematically consider the loci of competition in any proposed network architecture.
JOHN CHUANG

44 BIOLOGICAL PRINCIPLES FOR FUTURE INTERNET ARCHITECTURE DESIGN
The Internet has evolved to accommodate unexpected diversity in services and applications. This trend will continue. The architecture of the new-generation Internet must be designed in a dynamic, modular, and adaptive way. Features like these can often be observed in biological processes that serve as inspiration for designing new cooperative architectural concepts.
SASITHARAN BALASUBRAMANIAM, KENJI LEIBNITZ, PIETRO LIO', DMITRI BOTVICH, AND MASAYUKI MURATA

54 ENABLING FUTURE INTERNET RESEARCH: THE FEDERICA CASE
The authors provide a comprehensive overview of the state-of-the-art research projects that have been using the virtual infrastructure slices of FEDERICA in order to validate their research concepts, even when they are disruptive to the testbed's infrastructure, to obtain results in realistic network environments.
PETER SZEGEDI, JORDI FERRER RIERA, JOAN A. GARCIA-ESPIN, MARKUS HIDELL, PETER SJÖDIN, PEHR SÖDERMAN, MARCO RUFFINI, DONAL O'MAHONY, ANDREA BIANCO, LUCA GIRAUDO, MIGUEL PONCE DE LEON, GEMMA POWER, CRISTINA CERVELLO-PASTOR, VICTOR LOPEZ, AND SUSANNE NAEGELE-JACKSON

62 CONTENT, CONNECTIVITY, AND CLOUD: INGREDIENTS FOR THE NETWORK OF THE FUTURE
A new network architecture for the Internet needs ingredients from three approaches: information-centric networking, cloud computing integrated with networking, and open connectivity.
BENGT AHLGREN, PEDRO A. ARANDA, PROSPER CHEMOUIL, SARA OUESLATI, LUIS M. CORREIA, HOLGER KARL, MICHAEL SÖLLNER, AND ANNIKKI WELIN

71 PEARL: A PROGRAMMABLE VIRTUAL ROUTER PLATFORM
The authors present the design and implementation of PEARL, a programmable virtual router platform with relatively high performance.
GAOGANG XIE, PENG HE, HONGTAO GUAN, ZHENYU LI, YINGKE XIE, LAYONG LUO, JIANHUA ZHANG, YONGGONG WANG, AND KAVÉ SALAMATIAN
TOPICS IN NETWORK AND SERVICE MANAGEMENT
SERIES EDITORS: GEORGE PAVLOU AND AIKO PRAS
78 SERIES EDITORIAL

80 TOWARD DECENTRALIZED PROBABILISTIC MANAGEMENT
The authors discuss the potential of decentralized probabilistic management and its impact on management operations, and illustrate the paradigm by three example solutions for real-time monitoring and anomaly detection.
ALBERTO GONZALEZ PRIETO, DANIEL GILLBLAD, REBECCA STEINERT, AND AVI MIRON

88 NETWORK RESILIENCE: A SYSTEMATIC APPROACH
Aspects of network resilience, such as the application of fault-tolerant systems techniques to optical switching, have been studied and applied to great effect. However, networks, and the Internet in particular, are still vulnerable.
PAUL SMITH, DAVID HUTCHISON, JAMES P. G. STERBENZ, MARCUS SCHÖLLER, ALI FESSI, MERKOURIS KARALIOPOULOS, CHIDUNG LAC, AND BERNHARD PLATTNER

98 A SURVEY OF VIRTUAL LAN USAGE IN CAMPUS NETWORKS
VLANs are widely used in today's enterprise networks to improve Ethernet scalability and support network policies. However, manuals and textbooks offer very little information about how VLANs are actually used in practice.
MINLAN YU, JENNIFER REXFORD, XIN SUN, SANJAY RAO, AND NICK FEAMSTER

104 TOWARD FINE-GRAINED TRAFFIC CLASSIFICATION
The authors propose a fine-grained traffic classification scheme based on the analysis of existing classification methodologies.
BYUNGCHUL PARK, JAMES WON-KI HONG, AND YOUNG J. WON
http://www.comsoc.org/techfocus
presented by IEEE Communications Society
A FREE ONLINE OFFER from the IEEE COMMUNICATIONS DIGITAL LIBRARY
3GPP TECHNOLOGY FOCUS
LIMITED TIME OFFER!
Enjoy free access to full content from IEEE Communications Society
publications and conferences. This is the only site with free access from the
IEEE Communications Digital Library on 3GPP.
Conference papers were originally presented at IEEE Communications Society conferences, including IEEE GLOBECOM, ICC, PIMRC, DYSPAN, and WCNC; articles originally appeared in IEEE Communications Society magazines and journals.
Free 3GPP articles and conference papers include:
Synchronization and cell search in 3GPP LTE systems
QoS Architecture for the 3GPP IETF-Based Evolved Packet Core
Network-based mobility management in the evolved 3GPP core network
3GPP Machine-to-Machine Communications
Handover between mobile WiMAX and 3GPP UTRAN
Assessing 3GPP LTE-Advanced as IMT-Advanced Technology
Robust channel estimation and detection in 3GPP-LTE
3GPP LTE downlink system performance
Downlink MIMO with frequency-domain packet scheduling for 3GPP LTE
Coexistence studies for 3GPP LTE with other mobile systems
QoS control in the 3GPP evolved packet system
Combating timing asynchronism in relay transmission for 3GPP LTE uplink
…plus many more papers from the IEEE Communications Society; all with
limited time free access! Go to http://www.comsoc.org/techfocus
free access
compliments of
For other sponsor opportunities,
please contact Eric Levine, Associate Publisher
Phone: 212-705-8920,
E-mail: [email protected]
2011 Communications Society
Elected Officers
Byeong Gi Lee, President
Vijay Bhargava, President-Elect
Mark Karol, VP–Technical Activities
Khaled B. Letaief, VP–Conferences
Sergio Benedetto, VP–Member Relations
Leonard Cimini, VP–Publications
Members-at-Large
Class of 2011
Robert Fish, Joseph Evans
Nelson Fonseca, Michele Zorzi
Class of 2012
Stefano Bregni, V. Chan
Iwao Sasase, Sarah K. Wilson
Class of 2013
Gerhard Fettweis, Stefano Galli
Robert Shapiro, Moe Win
2011 IEEE Officers
Moshe Kam, President
Gordon W. Day, President-Elect
Roger D. Pollard, Secretary
Harold L. Flescher, Treasurer
Pedro A. Ray, Past-President
E. James Prendergast, Executive Director
Nim Cheung, Director, Division III
IEEE COMMUNICATIONS MAGAZINE (ISSN 0163-6804) is published monthly by The Institute of Electrical and Electronics Engineers, Inc. Headquarters address: IEEE, 3 Park Avenue, 17th Floor, New York, NY 10016-5997, USA; tel: +1-212-705-8900; http://www.comsoc.org/ci. Responsibility for the contents rests upon authors of signed articles and not the IEEE or its members. Unless otherwise specified, the IEEE neither endorses nor sanctions any positions or actions espoused in IEEE Communications Magazine.

ANNUAL SUBSCRIPTION: $27 per year print subscription. $16 per year digital subscription. Non-member print subscription: $400. Single copy price is $25.

EDITORIAL CORRESPONDENCE: Address to: Editor-in-Chief, Steve Gorshe, PMC-Sierra, Inc., 10565 S.W. Nimbus Avenue, Portland, OR 97223; tel: +(503) ...-7440, e-mail: [email protected].

COPYRIGHT AND REPRINT PERMISSIONS: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy beyond the limits of U.S. Copyright law for private use of patrons: those post-1977 articles that carry a code on the bottom of the first page provided the per copy fee indicated in the code is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923. For other copying, reprint, or republication permission, write to Director, Publishing Services, at IEEE Headquarters. All rights reserved. Copyright © 2011 by The Institute of Electrical and Electronics Engineers, Inc.

POSTMASTER: Send address changes to IEEE Communications Magazine, IEEE, 445 Hoes Lane, Piscataway, NJ 08855-1331. GST Registration No. 125634188. Printed in USA. Periodicals postage paid at New York, NY and at additional mailing offices. Canadian Post International Publications Mail (Canadian Distribution) Sales Agreement No. 40030962. Return undeliverable Canadian addresses to: Frontier, PO Box 1051, 1031 Helena Street, Fort Erie, ON L2A 6C7.

SUBSCRIPTIONS, orders, address changes: IEEE Service Center, 445 Hoes Lane, Piscataway, NJ 08855-1331, USA; tel: +1-732-981-0060; e-mail: [email protected].

SUBMISSIONS: The magazine welcomes tutorial or survey articles that span the breadth of communications. Submissions will normally be approximately 4500 words, with few mathematical formulas, accompanied by up to six figures and/or tables, with up to 10 carefully selected references. Electronic submissions are preferred, and should be submitted through Manuscript Central: http://mc.manuscriptcentral.com/commag-ieee. Instructions can be found at http://dl.comsoc.org/livepubs/ci1/info/sub_guidelines.html. For further information contact Sean Moore, Associate Editor-in-Chief ([email protected]). All submissions will be peer reviewed.

ADVERTISING: Advertising is accepted at the discretion of the publisher. Address correspondence to: Advertising Manager, IEEE Communications Magazine, 3 Park Avenue, 17th Floor, New York, NY 10016.

RECENT IEEE STANDARDS FOR WIRELESS DATA COMMUNICATIONS
STANDARDS SERIES EDITORS: MOSTAFA HASHEM SHERIF AND YOICHI MAEDA

112 SERIES EDITORIAL

114 IEEE 802.15.3C: THE FIRST IEEE WIRELESS STANDARD FOR DATA RATES OVER 1 GB/S
The authors explain the important features of IEEE 802.15.3c, the first wireless standard from IEEE in the 60-GHz (millimeter wave) band, and its development.
TUNCER BAYKAS, CHIN-SEAN SUM, ZHOU LAN, JUNYI WANG, M. AZIZUR RAHMAN, HIROSHI HARADA, AND SHUZO KATO

122 OVERVIEW OF FEMTOCELL SUPPORT IN ADVANCED WIMAX SYSTEMS
The authors provide an update on novel concepts and mechanisms for femtocell support in the network architecture and air interface that have been adopted into the WiMAX Forum network specifications and the IEEE 802.16m specification.
YING LI, ANDREAS MAEDER, LINGHANG FAN, ANSHUMAN NIGAM, AND JOEY CHOU

132 SMART UTILITY NETWORKS IN TV WHITE SPACE
The authors present an overview of the background, technology, regulation, and standardization in the course of deploying smart utility networks (SUNs) in TV white space (TVWS) communications, two wireless technologies currently receiving overwhelming interest in the wireless industry and academia.
CHIN-SEAN SUM, HIROSHI HARADA, FUMIHIDE KOJIMA, ZHOU LAN, AND RYUHEI FUNADA

ACCEPTED FROM OPEN CALL

140 ADVANCES IN MODE-STIRRED REVERBERATION CHAMBERS FOR WIRELESS COMMUNICATION PERFORMANCE EVALUATION
The authors highlight recent advances in the development of second-generation mode-stirred chambers for wireless communications performance evaluation.
MIGUEL Á. GARCIA-FERNANDEZ, JUAN D. SANCHEZ-HEREDIA, ANTONIO M. MARTINEZ-GONZALEZ, DAVID A. SANCHEZ-HERNANDEZ, AND JUAN F. VALENZUELA-VALDÉS

148 SYSTEM-LEVEL SIMULATION METHODOLOGY AND PLATFORM FOR MOBILE CELLULAR SYSTEMS
The authors propose a general unified simulation methodology for different cellular systems.
LI CHEN, WENWEN CHEN, BIN WANG, XIN ZHANG, HONGYANG CHEN, AND DACHENG YANG

156 LAYER 3 WIRELESS MESH NETWORKS: MOBILITY MANAGEMENT ISSUES
Wireless Mesh Networks (WMNs) may be broadly classified into two categories: Layer 2 and Layer 3 WMNs. The authors focus on the Layer 3 WMN, which provides the same service interfaces and functionalities to the conventional mobile host (MH) as the conventional wireless local area network.
KENICHI MASE

164 PREAMBLE DESIGN, SYSTEM ACQUISITION, AND DETERMINATION IN MODERN OFDMA CELLULAR COMMUNICATIONS: AN OVERVIEW
The wide choices of deployment parameters in next generation wireless communication systems present significant challenges in preamble and system acquisition design. The authors address these challenges, as well as the solutions provided by the next generation wireless standards.
MICHAEL MAO WANG, AVNEESH AGRAWAL, AAMOD KHANDEKAR, AND SANDEEP AEDUDODLA

176 EVALUATING STRATEGIES FOR EVOLUTION OF PASSIVE OPTICAL NETWORKS
The authors study the requirements for optimal migration toward higher bandwidth per user, and examine scenarios and cost-effective solutions for PON evolution.
MARILET DE ANDRADE, GLEN KRAMER, LENA WOSINSKA, JIAJIA CHEN, SEBASTIÀ SALLENT, AND BISWANATH MUKHERJEE

185 ON ASSURING END-TO-END QOE IN NEXT GENERATION NETWORKS: CHALLENGES AND A POSSIBLE SOLUTION
The authors discuss challenges and a possible solution for optimizing end-to-end QoE in Next Generation Networks.
JINGJING ZHANG AND NIRWAN ANSARI

6 President's Page
12 Conference Report/ICC 2011
16 Conference Calendar
17 Society News
18 New Products
19 Global Communications Newsletter
192 Advertisers' Index
TECHNICAL CONFERENCE March 4–8, 2012
EXPOSITION March 6–8, 2012
LOS ANGELES CONVENTION CENTER
Los Angeles, California, USA
CALL FOR PAPERS
SUBMISSION DEADLINE: OCTOBER 6, 2011, 12:00 NOON EDT (16:00 GMT)
SUBMIT YOUR RESEARCH TO THE WORLD’S LARGEST
CONFERENCE FOR ADVANCING OPTICAL SOLUTIONS
IN TELECOM, DATACOM, COMPUTING AND MORE
Scan QR Code with your smartphone for more information
STUDENTS: Enter the Corning Outstanding Student Paper Competition
for Your Chance to Win $1,500!
WWW.OFCNFOEC.ORG/SUBMISSIONS
THE PRESIDENT’S PAGE
“GLOBAL COMSOC” EMBRACING THE GLOBE
The "Golden Triangle" vision, as introduced in the January 2010 President's Page, supports transformation of our IEEE Communications Society (ComSoc) into a truly global, vital and high-value professional society. The three vertices of the "Golden Triangle," namely Globalization, Young Leaders, and Industry, are the three fundamental concepts of transformation. Globalization enables utilization of the best talent, education, training and cultural values among our members from around the world. Young Leaders are the future of our Society. Interestingly, the rapidly growing countries, fueling the drive for Globalization, have higher percentages of their work force under 30 years of age. Industry implements technology in products and services, making these available to user communities around the world, including rural areas and the expanding consumer base around the world. All three vertices in the "Golden Triangle," therefore, center around Globalization in the development and operations of our Communications Society.

In this President's Page we will introduce ComSoc's efforts to achieve our globalization goal, the programs we support for our global membership, and how we plan to enhance our existing successful programs to become more global. The contributors to this page include: Shri Goyal, ComSoc Director-Membership Programs Development; Roberto Saracco, Director-Societal Relations; Naoaki Yamanaka, Director-Asia Pacific Region; Tariq Durrani, Director-Europe, Middle East, and Africa Region; Jose David Cely, Director-Latin America Region; and Gabriel Jakobson, Director-North America Region.

Recognizing its importance, special emphasis has long been placed on globalization in the Communications Society. ComSoc has been establishing its global footprint since at least the 1990s. In 1994, Maurizio Decina, the first non-U.S. based ComSoc President, coined the phrase "Global Communications Society" to concisely embody the future direction of ComSoc. Since then ComSoc has been transforming from a primarily US-centric Society (before the Bell System divestiture in 1984) into a global Society of today with a majority of members residing outside the US.

ComSoc's globalization has been successful in membership, publications, conferences, and technical activities. In parallel with that, we have also been promoting globalization in sharing ComSoc's leadership roles among different regional members. ComSoc's Board of Governors (BoG) includes four Regional Directors, and Presidents and Vice Presidents were elected from all four ComSoc Regions. We are also working toward globally balanced representation by Members-at-Large (MaL) in the BoG.

Shri Goyal received his M.Sc. degree in Electronics from Allahabad University in India and the Ph.D. in Electrical Engineering from North Carolina State University. Shri worked 25 plus years at GTE (now Verizon) Laboratories and was Dean of the College of Technology & Management in St. Petersburg, Florida until recently. Shri has organized numerous global symposia, including NOMS and IM. Shri served as the Director of Meetings and Conferences (2004-6). Currently he is ComSoc's Director of Membership Programs Development, embracing activities that serve our members and Chapters worldwide. Shri is an IEEE Fellow.

Roberto Saracco is the Director of the Telecom Italia Future Centre, where he is leading a group of researchers in the analysis of the impact of technology evolution on the telecommunications business. He has participated in a number of EU groups, including the 2020 Visionary Group and the Internet 2020 Group. He is a longstanding member of IEEE/ComSoc and has volunteered in several positions. Currently he is the Director of Sister and Related Societies and the Chair of ComSoc's 2020 Committee.

Naoaki Yamanaka received his B.E., M.E., and Ph.D. degrees from Keio University, Japan. He joined Nippon Telegraph and Telephone Corporation (NTT) in 1983, where he
conducted research and development on high-speed switching
systems and technologies. Currently he is a Professor in the
Information and Computer Science Department of Keio University. Yamanaka is ComSoc’s Director of the Asia Pacific
Region and a Board member of the IEEE CPMT Society. He
is a Fellow of the IEEE and a Fellow of the IEICE. He has
published more than 122 journal and transaction articles, 82
international conference papers, and 174 patents, including 17
international patents.
Tariq Durrani has been a Research Professor in Electronic & Electrical Engineering, University of Strathclyde,
Glasgow, UK, since 1982. He was Department Head in
1990-94, and University Deputy Principal (Provost equivalent) in 2000-06. Tariq has held various visiting appointments around the world, including at Princeton University
and the University of Southern California. He has
authored/co-authored more than 350 papers and six books.
His research interests cover Wireless Communications Systems and Signal Processing. He served as President of the
IEEE Signal Processing Society (1994-95) and President of
the IEEE Engineering Management Society (2006-07). Currently he is ComSoc’s Regional Director for Europe, Middle
East and Africa, and the IEEE Vice President for Educational Activities (also chairing the IEEE Educational Activities Board).
Jose David Cely C. received his B.S. degree from the Universidad Distrital Francisco Jose de Caldas in Bogota, Colombia. He worked for the Telecommunications Research Center
of Colombia CINTEL, TELECOM Colombia, and the Universidad Catolica de Colombia. Currently he works for Universidad Distrital. He has served in several appointed and
elected positions in IEEE and ComSoc, and actively participated in organizing LATINCOM 2009 and LATINCOM
2010. Currently he is ComSoc’s Director of the Latin America
Region. He received ComSoc’s Latin America Region Distinguished Service Award in 2009 and the IEEE MGA’s Achievement Award in 2010.
Gabriel Jakobson is Chief Scientist of Altusys Corp., a consulting company specializing in situation management technologies for defense, cyber security, and enterprise
management applications. He received his Ph.D. in Computer
Science from the Institute of Cybernetics, Estonia, and an
Honorary Degree of Doctor Honorius Causa from Tallinn
University of Technology, Estonia. Within ComSoc Gabe is a
Distinguished Lecturer who has given lectures in more than
20 countries, the Director of the North America Region, the
Vice-Chair of the Tactical Communications and Operations
Technical Committee, and the Chair of the Sub-Committee
on Situation Management.
COMSOC'S GLOBALIZATION INITIATIVES

Rapid globalization of communications and information technology services has been changing the demographics of ComSoc's membership. Over the past 10 years the proportion of ComSoc membership from outside of its North America Region has increased from 44% to more than 59% (see Figure 1). It is natural that fast developing regions are assuming an increasingly heavier role in various ComSoc activities.

Figure 1. Geographic trends in ComSoc membership 2000-2010.
As a world-wide organization ComSoc operates four global
regions: North America Region (NAR), Latin America Region
(LAR), Europe/Middle East/Africa Region (EMEAR), and
Asia Pacific Region (APR), with a Regional Director (RD)
representing each region. Our over 50,000 members in the
four ComSoc Regions find their local technical home in 207
ComSoc Chapters. Our global members are served by our
ComSoc headquarters in New York and a remote office in Singapore, collocated with the IEEE Singapore office. More satellite offices are being pursued in China, Russia, and India.
Over the years, ComSoc has made efforts to engage members from all regions in its leadership and governance. Today
various operational units in ComSoc make “open calls” for
appointments when leadership positions are available and
encourage all Regions to nominate eligible candidates.
ComSoc has had a fairly global representation in its leadership positions (e.g. in boards, committees, and councils) and
at the “grass roots” throughout its conferences, publications,
and technical activities. In addition, ComSoc has been eagerly
seeking, through its ad hoc Nomination and Election Process
Committee, improved methods to encourage election of
Members-at-Large to the Board of Governors (BoG) better in
line with membership demographics.
GOLD Meeting at Globecom 2010 in Miami, Florida.
The young professionals are the future of ComSoc. The
Graduates of the Last Decade (GOLD) program, initiated by
the IEEE Member and Geographic Activities Board (MGAB),
provides an excellent framework for engaging young professional communities. ComSoc works closely with the IEEE
MGAB and supports the GOLD program through members
it appoints to its key councils and boards. We also hold
GOLD sessions at our flagship and portfolio conferences,
making special efforts to encourage young members’ participation. Through our mentoring, young members can take
more significant leadership roles. Recently at IEEE GLOBECOM 2010 in Miami, we organized a special event by engaging young members from all four ComSoc Regions (see photo
above). In this special session, they shared their experiences
and discussed pros and cons about jobs and careers in
academia and industry.
GLOBALIZED MEMBER SERVICES AND PROGRAMS
ComSoc offers a wide variety of programs to serve its
members. Increasingly, these programs are being customized
and tailored to serve the specific needs of local Regions and
countries. Some key member services and programs are highlighted in the following sections.
Chapter Funding and Recognition: ComSoc chapters have
been growing steadily in numbers and activities. Ten new
chapters were formed in 2010, including several in the developing countries in the Asia Pacific Region. Each chapter is
funded on a yearly basis with the amount depending on the
previous year’s activities. The chapters with distinguished
activities are recognized by the Chapter Achievement Award
(CAA). One chapter is selected from each of the four ComSoc Regions for the CAA, and the best performing chapter
among those four CAA recipients is additionally recognized
as the Chapter of the Year.
Distinguished Lecturers/Speakers Programs: As a service
to local chapters, ComSoc operates Distinguished Lecturer
Tours (DLT) and individual Distinguished Speakers programs. These programs are arranged in response to requests
from one or more chapter chairs, regional directors, or individual members. Usually up to five lecture tours are supported in each of the four regions every year. Distinguished
Lecturers (DL) are selected by the DL Selection Committee,
chaired by the Vice Chair of the Technical Activities Council
(TAC). DLs are encouraged to network and exchange ideas
with local chapter members while making technical lectures
during DLT visits.
Student Travel Grants (STG): ComSoc provides Student
Travel Grants to help IEEE Student Members attend major
ComSoc conferences. The grants are limited in number and offered to students who meet the eligibility requirements. Some conferences have additional sources for travel grants; e.g., INFOCOM has the NSF program, which supports
travel for students studying at a US college or university. For
more information on the STG program, please visit
http://www.comsoc.org/about/documents/pp/4.1.5.
Young Professionals Program: Within the framework of
the IEEE GOLD program, ComSoc’s Young Professionals
Program is designed to engage and support young member
initiatives and leadership development. In support of this program, a Young Professional Web portal (YPW) was created
and linked to ComSoc's website; the portal focuses on career, education, on-line networking, and engaging activities of interest to members. Please visit http://committees.comsoc.org/ypc/
for more information and participation.
Industry Outreach Programs: The rapid transformation
of the communications industry in the world’s developing
as well as developed countries and their quest for technical
excellence are creating an ever-increasing demand for networking and exchange of ideas and technologies. In
response to such demands ComSoc proactively sets a
framework to engage and support communications industry
initiatives in these economies. The Society’s Corporate
Patron Program and Industry Now Program are two representative programs that were developed to support these
efforts.
Corporate Patron Program (CPP): CPP is a package of
ComSoc products and services that are specifically customized
and bundled to meet the needs of each participant company.
The program provides an opportunity for industry leaders to
reach out to ComSoc’s influential members through exposure
across the ComSoc web site, publications, and conferences. It
enables companies to leverage ComSoc products and services
through a package of customized discounts.
Industry Now Program (INP): INP is designed to promote
industry participation in ComSoc activities around the world
by offering companies and their employees the opportunity to
use the values that ComSoc creates by working with professionals around the world. It is specifically tailored for industry
organizations in the fast developing regions of the world. The
program offers the option of customizing packages to address
both geographic and company-specific needs, included in special membership packages.
Industry Day: Recently, to jump-start IEEE’s interactions
with global industry, ComSoc, in partnership with the IEEE
and MGAB, organized an Industry Day event in Bangalore,
India (http://ieee-industry.org/india/) with participation of over
400 delegates from 150 companies, government and academic institutions (photo below).
Opening session of India Industry Day.
Industry Services Program (ISP): ISP, a new program, is
being developed to meet the special needs of industry members in different countries. This program will offer mentoring
service for industry members throughout their careers; information filtering services for the practicing engineer; unbiased
product information summaries; business guidance services;
and job search assistance services. Those component services
will be designed, tested, and rolled out in the near future.
SISTER SOCIETIES WORLD-WIDE
The ComSoc Sister Society program was one of the first
tangible efforts by our Society to reach out to the global community of professionals who share our interests and values.
Through establishing such relationships, we are able to generate interest in our activities, and in some cases formally collaborate on conference and publication activities (e.g., the
Journal of Communications and Networks with our Sister Society in Korea, and the China Communications Magazine with
our Sister Society in China). In many parts of the world, Sister
Society colleagues are also active in ComSoc Chapters. This
provides an opportunity for even more synergy across the professional communities. The “ComSoc/KICS Exemplary Global
Service Award" jointly created by ComSoc and KICS (Korea Information and Communications Society) to recognize those
who contributed to the exchange of technology and networking globally is in itself an excellent example of inter-Society
collaboration and synergy.
ComSoc is now connected to 30 Societies around the world
and has a significant international footprint (see photo at the
top of the next page). There are numerous reasons for having
established these links, including the possibility of creating a
much larger community (globally our Sister Society members
exceed 500,000 engineers, scientists, and other professionals)
and the fact that each Society, in a way, brings a unique perspective that enriches the ComSoc ecosystem.
World map of ComSoc’s Sister Societies.
Attendees at the AP-RCCC in Kyoto, Japan.
Over the years we have developed important programs for
cooperation with Sister Societies. For example, we jointly
organize conferences, where ComSoc is ready to provide
keynote speakers and tutorials by leveraging its member base.
This increases attendance by bringing international participation to our Sister Society conferences and bringing local attendance to ComSoc’s conferences. In addition, these provide
forums for interaction and networking among those with
broader interests.
Through ComSoc’s services, we advertise the activities of
the Sister Societies in IEEE Communications Magazine and on
the ComSoc web site. Papers published in IEEE Communications Magazine can be reprinted in Sister Society publications
and, in some cases (e.g., two Sister Societies in China), with
translation and republication in local languages.
Recently, the ComSoc BoG took three significant steps to
advance the Society’s relationships with its Sister Societies: (a)
provide a link to ComSoc chapters in the geographical area of
a Sister Society by appointing a local ComSoc representative
in the Sister Society; (b) extend Sister Societies’ participation
in areas such as Africa and Australia where the membership
coverage is limited; and (c) insert in renewal agreements a
specific goal that has to be achieved during the renewal period to make our relationship measurable and more effective in
meeting cross-Society needs. We expect that these advancements will all be implemented by the end of this year.
COMSOC ASIA PACIFIC REGION (APR)
APR has 41 ComSoc chapters and has over 10,000 IEEE
members. It covers a geographical area stretching from South
Korea and Japan in the north-east to New Zealand in the
south, and Pakistan in the west, and has diverse and dynamic
cultural backgrounds.
APR has a very well organized operational structure centered around its Asia Pacific Board (APB), and a large number of active volunteers. The APB operational structure
includes a Director, two Vice Directors, and five Committees,
namely: Technical Affairs Committee (TAC), Meetings and
Conferences Committee (MCC), Information Services Committee (ISC), Membership Development Committee (MDC),
and Chapters Coordination Committee (CCC). This APB
structure, closely supported by ComSoc’s Singapore Office,
very effectively propels APR activities.
APR has been very energetic in hosting major international conferences and has organized a number of annual regional
conferences. Being dynamic and the fastest developing
Region, it has shown significant growth in membership, chapters, conferences, and DLT/DSP visits. To keep its members
informed, the region regularly publishes a newsletter, AP
Newsletter, and contributes to ComSoc's Global Communications Newsletter (GCN).

In 1998, APR was the first ComSoc Region to create the Asia Pacific Young Researcher Award. It is presented to selected young researchers who have demonstrated distinguished performances and very active participation in ComSoc publications and conferences over the last three years.

APR hosted an Asia Pacific-Regional Chapter Chairs Congress (AP-RCCC) in Kyoto in June, co-located with ICC 2011 (see photo above).
COMSOC EUROPE, MIDDLE EAST, AND AFRICA REGION
(EMEAR)
Geographically, EMEAR covers the largest area of the four
ComSoc Regions, extending from the Azores to Vladivostok,
and from Aalborg to Cape Town. The Region has more than
10,000 ComSoc members. The volunteers in the Region are
active in every aspect of ComSoc interest, including various
member services, highly effective chapters, and successful,
high quality conferences.
EMEAR includes 49 chapters, spanning a very wide geographical area. The chapters provide support and services to their members through regular meetings, topical
lectures, DLT/DSP visits, and industry engagements. The
number of chapters has been steadily increasing, with the
addition of two to three new chapters each year. In 2010 the
Chapter Achievement Award of EMEAR was presented to
the Russia Siberia (Tomsk) Chapter.
In 2010 EMEAR hosted a number of conferences on communications, including the IEEE International Conference on
Communications (ICC 2010) in Cape Town, South Africa.
This conference was successful in providing participants with a
unique cultural experience and in connecting with industry
and academic institutions and student organizations in the
Region.
In 2010 EMEAR established a ComSoc Young Researcher
Award to recognize talented young members who have made
significant original research contributions and actively participated in ComSoc publication and conference activities over
the last three years. Dr. Joao Barros, at the University of
Porto, Portugal, was selected as the first recipient of the
award.
COMSOC LATIN AMERICA REGION (LAR)
LAR is currently in a growth phase for its activities and
membership. ComSoc is well represented throughout the
Latin American continent in all countries from Mexico to
Argentina with 24 chapters. Regular conferences at a national
level are organized by its chapters in partnership with IEEE
Sections, local universities and industry.
IEEE LATINCOM is an annual communications conference organized by ComSoc’s Latin America Board (LAB). It
is the largest communications conference in LAR and features significant international participation. LATINCOM not only serves as a major forum for technical interchange in LAR but also offers an opportunity for networking and sharing of cultural experiences. LATINCOM 2011 was held in Bogota, Colombia and will be held in Belem, Brazil in 2012.

Volunteer leaders at LA-RCCC 2011, held in Cancun.

This year LAR hosted the LA Regional Chapter Chairs Congress (RCCC) in Cancun, Mexico, on March 30-31, co-located with WCNC (see photo above) and the BoG's Operating Committee meeting. RCCC provided an opportunity to re-establish the relationship among chapters in LAR, and to conveniently meet with ComSoc's leadership and staff. During the congress, an LAR strategic plan and a workable action plan were developed.
COMSOC NORTH AMERICA REGION (NAR)
NAR is the largest ComSoc Region with almost 20,000
members in 93 chapters in the U.S. and Canada, which is
about 40% of the overall ComSoc organization. The size of chapters varies broadly from the Santa Clara Valley Chapter with
1,100 members to small chapters with 30-40 members.
The leadership body of NAR, the North America Board
(NAB), was created in 2008 with 10 members representing all
NAR areas. Each board member plays two roles: representation of his/her own chapter, and leadership responsibilities to
support the whole NAR, which includes all activities related
to student, DLT/DSP, industry relations, membership development, information services, and GOLD programs.
For years, the DLT has been a most demanding program
in NAR. It has successfully stimulated cooperation among
multiple chapters and helped membership development. In
recent years, an increasing number of lecturers have been
coming to NAR from other Regions. Such a trend directly
serves ComSoc’s globalization goal and also serves our local
members by providing additional value through cultural diversity. In the spirit of our Golden Triangle initiative, many of
our regional distinguished lecturers and ComSoc officers have
been delivering lectures in different Regions around the
globe. NAR uses the North America-Regional Chapters
Chairs Congress (NA-RCCC) as a forum for exchange among
all constituent chapters.
As ComSoc continues its globalization reach to form the "Global ComSoc" village, NAR will continue to contribute as the Region with the longest history and a wealth of expertise.

FUTURE DIRECTIONS
We are proud of the ComSoc globalization accomplishments that have progressed far ahead of other Societies and
have been leading the way within the IEEE. The geographic
stretch of ComSoc’s membership and its world-wide activities
is certainly making ComSoc a global Society, truly deserving
the name “Global ComSoc.” The contributions of our members to ComSoc’s publications and conferences as well as their
participation in technical and regional activities of APR and
EMEAR, and the rapidly growing efforts by LAR, have
already substantially improved the global balance across
regions. In parallel, ComSoc’s global coverage has been steadily expanding, reaching 207 regional chapters and collaborating
with 30 Sister Societies. In response to that growth we have
diversified our membership programs so that they can weave
the global networks and activities of our members worldwide.
Globalization may be defined by two levels. With globalization at the membership level, or Level-1 Globalization,
reached, we are now working to step up to the next level,
Level-2 Globalization, where globalization is realized on the
Society's leadership team, including the Board of Governors and
the Society’s various councils, boards, and committees.
Achieving balanced regional representation for ComSoc’s
leadership team is very important for assuring ComSoc’s
growth in the global era, by enabling ComSoc to operate in a truly global manner and make good use of global
cultures and values. ComSoc has already succeeded in establishing a basic level of regional balance by appointing Regional Directors to its Board of Governors. The next step is to
move forward toward a balanced representation of its Members-at-Large (MaL), who are voting members of the ComSoc
BoG (in addition to other elected officers, such as the President and Vice Presidents). In practice, it can be realized by
electing three MaLs in APR, three MaLs in EMEAR, one
MaL in LAR, and five MaLs in NAR, in approximate proportion to the number of ComSoc members in those regions.
Since 2009, ComSoc’s BoG has been working toward this
through its ad hoc Nomination and Election Process Committee. We anticipate Level-2 Globalization will be a reality within the next few years.
The 21st century has opened a new era in our ongoing
quest for a better and more humane world. While physical
divides still exist in some parts of the world, disparity in economic, social and political life has a chance, for the first time
in the history of mankind, to be patched through “communications.” Global communications in the digital convergence
era will offer an affordable means to communicate among all
different sectors of humanity, regardless of where people live.
Leveraging our technical strength, creativity, and collaboration among our “Global ComSoc” membership, we now have
the opportunity to capitalize on this movement and serve our
members and humanity at large. We invite you to join us in
meeting this most important challenge.
The trend towards All-IP networks and increased demand for data services prompted the 3GPP to assess the implications for UMTS and High Speed Packet Access (HSPA) technologies. Even though these technologies will be highly competitive in the near term, to ensure competitiveness over a longer time frame, the 3GPP realized the need for a long-term evolution of the radio-access technology and an optimization of the packet core network. Research, development and standardization in these areas have led to the definition of Evolved UTRAN (E-UTRAN) and Evolved Packet Core (EPC) specifications.

This presentation provides the latest technical developments and business/market potential for what promises to be the most prominent wireless technology ever to be deployed.

FREE ACCESS sponsored by
Brought to you by
For other sponsor opportunities, please contact Eric Levine, Associate Publisher
Phone: 212-705-8920,
E-mail: [email protected]
CONFERENCE REPORT
IEEE ICC 2011 EXPLORES GLOBAL ADVANCE OF
WIRELESS SYSTEM AND NETWORKING COMMUNICATIONS IN KYOTO, JAPAN
Themed the “Source of Innovation: Back to the Origin,”
the IEEE International Conference on Communications (ICC
2011) recently concluded its latest annual event with over
1,800 international experts participating in the presentation of
nearly 1,200 technical papers, symposia, tutorials and workshops dedicated to the ongoing advance of next-generation
voice, data, and multimedia services, and theory and technologies for wireless and optical communications.
Held in the fabled city of Kyoto, which reigned as Japan’s
capital for 1,200 years and is recognized throughout the world
as the cultural heart of Japan, IEEE ICC 2011 began on Sunday, June 5 with the first of two full days of workshops and
tutorials detailing the latest research in topics ranging from
“Visible Light Communications,” “Practical Solutions for
Cognitive Radio,” and “Next Generation Broadband Access”
to “Heterogeneous Networks (HETnet),” “Smart Grid Communications” and “Game Theory and Resource Allocation for
Future Wireless Systems.”
The following morning, the conference’s comprehensive
symposium and business forum officially commenced with
introductions from IEEE ICC 2011 General Chair Noritaka
Uji, Executive Chair Koichi Asatani, IEEE ComSoc President
Byeong Gi Lee and TPC Chair and IEICE-CS President
Kazuo Hagimoto. In addition to exploring the event’s goal of
enhancing life through the proliferation of next wave information communications, each speaker expressed their individual
admiration for the Japanese people, who were so deeply
affected by the recent tragedies befalling the nation just a few
months ago. As a result, the entire forum joined in a moment
of silence to honor the loved ones lost during the devastating
March 11 event.
Immediately afterwards, Ryuji Yamada, President & CEO
of NTT DOCOMO, INC. detailed his company’s “Actions for
Growth” in the Japanese marketplace as well as its “Response
to the Great East Japan Earthquake” in the conference’s first
keynote address. This included moving rapidly and steadily to
restore the capabilities of 4,900 base stations to pre-disaster
levels by April 30 and placing a high priority on the use of radio signal transmission systems, satellite circuits and high-performance antennas to supply service to the many areas surrounding the Fukushima Daiichi Nuclear Plant. As for the
future, Yamada spoke about the growing proliferation of
smart phones in a Japanese marketplace that already “leads
the world in the adoption of 3G mobile communications services” and his company’s “aim to sell six million units of smart
phones in 2011," which will be armed with various first-of-kind services including real-time translation capabilities.
Following this presentation, Professor Maurizio Decina of
the Politecnico di Milano in Milan, Italy, addressed his vision
of “Future Networks & Services” in a world that currently
includes 5.3 billion mobile subscriptions and trillions of dollars in revenues. In the future, Decina envisions a “flatter and
much more densely interconnected Internet” comprised of
mobile devices completing the “billions of simultaneous transactions” needed to mesh up personal data, preferences and
devices in real-world time. This includes the introduction of
cheap, easy and convenient on-demand services that “knows
you and what is around you,” “learns what you like,” “discovers things relevant to you” and “filters out the irrelevant.”
At the end of these keynotes, attendees were then invited to
attend any or all of the 1,000 technical paper presentations and
business forums designed to explore the full array of information security, wireless networking, communication theory, signal
processing and cognitive radio issues over the next three days.
Among them was the hosting of nearly one dozen high-level
executive panels that started with Business Forum Co-Chair
Chi-Ming Chen moderating the opening discussion of “Open
Innovation and Standardization Toward Next Generation Visual Networks” and “Dependable Wireless Communications.”
Tokumichi Murakami of the Mitsubishi Electric Corp. in
Japan began this session with his presentation on “High Efficiency Video Coding for Smart Phone to Super High Vision.”
In addition to declaring the “2010s as the era of the smart
phone, which will accelerate the personal use of visual contents and give rise to a new wave of content explosion,”
Murakami spoke about the need and effort to develop new
video coding standards targeting compression performance in order to realize the full capabilities of future services such as
SHV/UHD and Mobile HD.
Afterwards, Professor Ryuji Kohno of the Medical ICT
Center at Yokohama National University, continued the discussion of future technologies by speaking about “Future
M2M for Medicine, Vehicle, Robot, Energy and others.” His
vision also detailed the development of “safe and secure
infrastructures based on advanced ICT” that support “the
intelligent traffic control of energy, money, vehicles and medical flows.” As an example, Prof. Kohno cited the ongoing
development of Body Area Networks (BAN) as a new method
for improving efficiencies, reducing regional gaps and offering
real-time medical care through the tele-metering of vital signs
and tele-control of medical equipment and devices.
Throughout Monday afternoon, IEEE ICC 2011 continued
its high-level agenda with several General Industry Business
Forums dedicated to next generation wireless advances and
applications. Moderator Stan McClellan of Texas State University initiated the panel on “Standards, Technologies and
Platforms for Emerging Smart Grid Deployments,” which
highlighted various perspectives on grid stability and the
numerous challenges facing the modern use of Smart Grid
technologies ranging from interoperability to the development
of competing standards and complex architectural paradigms.
Following this forum, Stefano Bregni of the Politecnico di
Milano in Italy led a second panel on “Next Generation
Access Networks (NGAN): Ultrabroadband Infrastructures
and Services.” Through this seminar, panelists examined the
deployment of optical fiber access technologies in Europe and
Asia with reference to the adoption of specific network architectures, services and regulations.
At the end of the day, Yoshihito Sakurai of Hitachi Ltd. in Japan moderated the session entitled “Communication Technology for Smart Grid & Its Standardization,” which explored the newest cases, requirements, and architectures for implementing smart grid communications that provide real-time information to international markets, while greatly reducing costs and saving energy on multiple levels.
On Tuesday morning, IEEE ICC 2011 began with a commemorative ceremony led by IEEE ICC 2011 Executive Chair
Prof. Koichi Asatani, who welcomed numerous Japanese and
fellow IEEE dignitaries to the day’s events. This included
Vice Governor of Kyoto Prefecture Shuichi Yamauchi, Vice
Mayor of Kyoto City Fumihiko Yuki, the President of the Science Council of Japan Prof. Ichiro Kanazawa, the President of
IEICE Communications Society Kazuo Hagimoto, the President of IEEE Communications Society Prof. Byeong Gi Lee,
His Imperial Highness Prince Akishino and the General Chair
of IEEE ICC 2011 Noritaka Uji.
Following these introductions, Mr. Uji, Prof. Kanazawa, and His Imperial Highness addressed the forum, offering their special thanks to all IEEE ICC 2011 participants for their attendance and expressing their deepest sympathies to the many individuals and families who recently suffered devastating losses. They also highlighted the conference objectives and the overall goal of enriching lives through the deployment of the latest ICT applications, services, and innovations.
Afterwards, Prof. Asatani read a message from the Prime Minister of Japan, who also welcomed all attendees and cited the worldwide advance of electronic and information communications as a key means of significantly reducing the horrific effects of potential future global disasters.
Dr. Toshitaka Tsuda of Fujitsu Laboratories Limited then
opened the day’s educational agenda by offering his gratitude
to all the overseas attendees, who traveled to the beautiful
and peaceful city of Kyoto for IEEE ICC 2011. Following
these remarks, Dr. Tsuda spoke about the new “ICT Paradigm
Shift and the Trend of Communications Technology.” This
includes moving toward “Human Centric Systems” and “Human Centric Intelligent Societies” that are specifically designed to transform large amounts of data acquired through sensor networks into knowledge that initiates social and business changes. According to Dr. Tsuda, this shift is currently being realized by the ongoing adoption of innovative systems that place individuals and their surrounding business environments at the center of learning networks that constantly evolve to meet new requirements.
Immediately after the keynote address of Dr. Tsuda, representatives of IEEE ComSoc and IEEE ICC 2011 highlighted
the morning’s proceedings with the presentation of numerous
industry, association and conference awards signifying the outstanding achievements and dedication to excellence of the
international recipients.
Senior IEEE and IEEE ComSoc representatives offered
the Marconi Prize Paper Award to Li Ping Qian, Angela
Yingjun Zhang & Jianwei Huang; Stephen O. Rice Award to
Shi Jin, Matthew R. McKay, Xiqi Gao & Iain B. Collings;
Fred Ellersick Prize to Dusit Niyato, Ekram Hossain & Zhu
Han; Heinrich Hertz Award to Illsoo Sohn & Jeffrey G.
Andrews; Leonard Abraham Prize to Watcharapan Suwansantisuk, Marco Chiani & Moe Z. Win; William Bennett Prize to
Murali Kodialam, T. V. Lakshman, James B. Orlin & Sudipta
Sengupta; Outstanding Paper on New Communication Topics
to Maria Gorlatova, Peter Kinget, Ioannis (John) Kymissis,
Dan Rubenstein, Xiaodong Wang & Gil Zussman; Best Tutorial Paper Award to Steven Gringeri, Bert Basch, Vishnu
Shukla, Roman Egorov & Tiejun J. Xia; and the Journal of
Communications & Networks (JCN) Best Paper Award
offered by the KICS society and cosponsored by ComSoc to
Changhun Bae and Wayne E. Stark.
Others honored throughout Tuesday’s ceremony included Suk-Chae Lee, who received the Distinguished Industry Leader Award; H. Vincent Poor, who received the Eric E. Sumner Award; Moe Win, who was named the
Kiyo Tomiyasu Award winner; Andreas F. Molisch, Larry J.
Greenstein & Mansoor Shafi, who were all granted the Donald G. Fink Prize Paper Award; and Mounir Hamdi, Yasutaka
Ogawa, Yunghsiang S. Han, Marco Chiani, Kwang Bok Lee,
Kiho Kim, John Sadowsky & Robert Heath, who were all
named 2011 IEEE Fellows.
Furthermore, the Communications Society/Information
Theory Joint Paper Award was presented to two separate
entries. The first was provided to Matthieu Bloch, Joao Barros, Miguel R. D. Rodrigues, and Steven W. McLaughlin for the paper titled “Wireless Information-Theoretic Security,” while the second was given to Giuseppe Caire, Nihar Jindal, Mari Kobayashi & Niranjay Ravindran for their paper on “Multiuser MIMO Achievable Rates with Downlink Training and Channel State Feedback.”
The awards presentations then concluded with the IEEE
ICC 2011 Awards for Best Papers. These included:
•Prathapasinghe Dharmawansa & Matthew R. McKay of
Hong Kong University of Science and Technology and Peter J.
Smith of the University of Canterbury, New Zealand for their
presentation on Analysis of the Level Crossing Rates for Ordered
Random Processes in the Communication Theory category
•Ya-Feng Liu & Yu-Hong Dai of the Chinese Academy of
Sciences and Zhi-Quan Luo of the University of Minnesota for
their submission on Max-Min Fairness Linear Transceiver
Design for a Multi-User MIMO Interference Channel within
the conference’s Signal Processing for Communications section
•Shuping Gong & Husheng Li of the University of Tennessee, Lifeng Lai of the University of Arkansas and Robert
C. Qiu of the Tennessee Technological University for their
entry titled Decoding the ‘Nature Encoded’ Messages for Distributed Energy Generation Control in Microgrid in the Wireless Communications category
•Mingwei Wu & Pooi-Yuen Kam of the National University of Singapore for their submission on ARQ with Packet-Error-Outage-Probability QoS Measure under the Wireless
Communications heading
•Xiangyun Zhou & Are Hjørungnes of the University of
Oslo, Norway and Dusit Niyato of the Nanyang Technological
University in Singapore for their submission titled How Much
Training is Needed Against Smart Jamming? in the Wireless
Communications area
•Dejun Yang, Xi Fang & Guoliang Xue of Arizona State
University for their paper on OPRA: Optimal Relay Assignment for Capacity Maximization in Cooperative Networks in
the area of Wireless Networking
•Chengyi Gao, Hakki Cankaya, Ankitkumar Patel & Jason
P. Jue of the University of Texas, Dallas and Xi Wang, Qiong
Zhang, Paparao Palacharla & Motoyoshi Sekiya of Fujitsu
Laboratories of America, Inc., USA for their submission on
Survivable Impairment-aware Traffic Grooming and Regenerator Placement with Dedicated Connection Level Protection
in the Optical Networks and Systems section
•Hossam Sharara of the University of Maryland and
Cedric Westphal, Svetlana Radosavac & Ulas Can Kozat of
DoCoMo Labs USA for the entry titled Utilizing Social Influence in Content Distribution Networks in the Next Generation Networking and Internet category
•Jesus Alonso-Zarate & Christos Verikoukis of the Centre
Tecnologic de Telecomunicacions de Catalunya, Spain and
Luis Alonso of the Universitat Politecnica de Catalunya,
Spain and Eirini Stavrou, Adamantia Stamou & Pantelis
Angelidis of the University of Western Macedonia, Greece for
their entry on Energy-Efficiency Evaluation of a Medium
Access Control Protocol for Cooperative ARQ in the area of
Communications QoS, Reliability and Modeling
•Jalel Ben-othman & Bashir Yahya of the Université de
Versailles, France and Lynda Mokdad of the Université de
Paris 12, France for their submission on An Energy Efficient
Priority-based QoS MAC Protocol for Wireless Sensor Networks in the Ad Hoc, Sensor and Mesh Networking area
•Jin Tang & Yu Cheng of the Illinois Institute of Technology, USA for the paper titled Quick Detection of Stealthy SIP
Flooding Attacks in VoIP Networks within the conference’s
Communication and Information System Security category
Following the morning’s keynote and commemorative ceremony, Business Forum Co-Chair Chi-Ming Chen moderated the first of several Business Forums dedicated to the development of the latest broadband technologies and their rapid worldwide deployment. In his address on “Telecom Transformation and Challenges,” Shyue-Ching Lu, CEO of Chunghwa Telecom in Taiwan, opened the session by stating how networked readiness has become a significant index for rating a nation’s competitiveness. He then noted the steady paradigm shifts in all technological areas, including e-government, smart transportation, i-education, healthcare, and disaster prevention, which are currently doubling the amount of global data traffic every two years while increasing network energy efficiency by only 10 to 20 percent annually.
As a result, Shyue-Ching Lu prominently spoke about the need for initiatives, such as those offered by the GreenTouch Consortium, that are actively promoting the development and implementation of architectures and technologies that will yield a 1,000-fold improvement in network energy efficiency over
the next five years. This includes the digital convergence of
smart devices and cloud services that provide centralized
monitoring and management for power, air conditioning,
lighting and pumping systems in facilities ranging from commercial buildings and factories to hospitals and schools.
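As a rough back-of-the-envelope illustration of the scale of these figures (assuming traffic exactly doubles every two years and that the quoted efficiency gains apply uniformly, which is a simplification), traffic doubling every two years corresponds to roughly 41 percent annual growth, so even a 10 to 20 percent yearly efficiency gain leaves total network energy consumption rising by roughly 13 to 27 percent per year, while a 1,000-fold efficiency improvement in five years implies nearly quadrupling efficiency every year:
\[
2^{1/2} \approx 1.41, \qquad 1.41 \times (1 - 0.10) \approx 1.27, \qquad 1.41 \times (1 - 0.20) \approx 1.13, \qquad 1000^{1/5} \approx 3.98 .
\]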
Afterwards, Toshitaka Tsuda of Fujitsu Laboratories Limited, Japan furthered the comments made during his earlier
address by speaking about “R&D for Green and Human Centric Networks” as well as the requirements necessary to
accommodate the increasing demand for bandwidth and green
communications services. He also explored the rapid adoption
and innovation of network technologies that are proactively
transforming business processes and creating vast pools of knowledge that support humans across wide geographic areas and spontaneous smart transmissions.
Ibrahim Gedeon, IEEE ICC 2012 General Chair & CTO
of Telus Canada, then concluded the morning business session
by addressing the need to continually transform businesses
through leveraged technology convergence practices. According to Gedeon, what is needed is a holistic view of convergence based on end-to-end business models that improve
end-user experiences because, as he put it, “one thing is clear”: end users care most about the value of their experience and the amount they have to pay for it.
After a brief break on Tuesday afternoon, Business Forum Co-Chair Tetsuya Yokotani began his third Business Forum of the conference by introducing Yoshiharu Shimatani of KDDI, Japan to session attendees. In his presentation, Shimatani spoke at length about “KDDI’s Vision of Innovative, Seamless and Eco-friendly ICT Platforms” and the need to effectively alleviate traffic congestion through the coordination of seamless multi-networks that support robust and sophisticated anytime, anywhere applications, built on innovative ICT platforms and cloud services that drive the newest e-health, e-education, e-environment, and e-disaster prevention infrastructures.
Tetsuya Yuge of Softbank Mobile Corporation, Japan, then followed this presentation by highlighting his company’s “Wireless Broadband Strategy for Growing Mobile Data Traffic,” which addresses the prevalent use of smart phones in the Japanese marketplace and the societal benefits to its businesses and culture. Young Ky Kim of Samsung Electronics, Korea, extended the theme by exploring the “Crossroads in the Middle of the Mobile Big Bang Era” and the mobile industry of the future.
Throughout the rest of Tuesday and Wednesday, IEEE
ICC 2011 proceeded with the presentation of numerous other
General Business Forums provided by the representatives of
leading worldwide corporations and academic institutions such
as Hitachi, Mitsubishi, NTT, Create-Net, DOCOMO, the University of Zurich, Politecnico di Milano, the University of Geneva, UC Berkeley, and the Auckland University of Technology in New Zealand. During these sessions, experts from each institution readily shared their views and research in areas that included “Scientific Wireless Sensor Network Testbeds: Growth & Impact,” “Green ICT,” “Business Showstoppers of Cognitive Radio Technologies,” “Business Strategies of Sustainable Growth on Broadband Telecommunication Markets,” and “eHealth Support in the Future Internet.”
On Thursday, June 9, IEEE ICC 2011 concluded the conference’s extensive five-day agenda with a second day of tutorials and workshops dedicated to key communications topics such as “MIMO Detection for Emerging Wireless Standards,” “Wireless Without Batteries,” “Beyond IMT-Advanced: The Next 10 Years,” “Green Communications & Energy Efficiency,” “Advances in Mobile Networking,” and “Embedding the Real World in the Future Internet.”
As for IEEE ICC 2012, planning has already begun for the next event, to be held June 10–15 in Ottawa, Canada. Themed “CONNECT • COMMUNICATE • COLLABORATE,” the conference is currently accepting original papers detailing the latest advances in wide-ranging communications areas, with a submission deadline of September 6, 2011. For more information on this premier annual conference, including “Call for Papers” details, please visit http://www.ieee-icc.org/2012. All interested parties are also invited to use the conference’s Facebook and Twitter pages to share thoughts about IEEE ICC experiences and upcoming attendance and scheduling plans with peers and colleagues.
This presentation from the recent IEEE Consumer Communications & Networking Conference discusses emerging technology for next-generation television and video applications and services. International standards (High Efficiency Video Coding, stereoscopic 3D) for the deployment of new services are covered, along with IPTV, dynamic adaptive streaming over HTTP for Internet video delivery, and 1080p50/60 and ultra-high-resolution television.
CONFERENCE CALENDAR
JULY
♦ IEEE HPSR 2011 - 12th IEEE Int’l. Conference on High Performance Switching and Routing, 4-6 July
Cartagena, Spain.
http://www.ieee-hpsr.org/
• OECC 2011 - 16th Opto-Electronics and Communications Conference, 4-8 July
Kaohsiung, Taiwan.
http://www.oecc2011.org/
♦ IEEE ICME 2011 - 2011 IEEE Int’l. Conference on Multimedia and Expo, 11-15 July
Barcelona, Spain.
http://www.icme2011.org/
AUGUST
• ICCCN 2011 - Int’l. Conference on Computer Communications and Networks 2011, 1-4 Aug.
Maui, Hawaii.
http://www.icccn.org/ICCCN11/
♦ ATC 2011 - 2011 Int’l. Conference on Advanced Technologies for Communications, 3-5 Aug.
Da Nang City, Vietnam.
http://rev-conf.org/
• ICADIWT 2011 - 4th Int’l. Conference on the Applications of Digital Information and Web Technologies, 4-6 Aug.
Stevens Point, WI.
http://www.dirf.org/DIWT/
• GIIS 2011 - Global Information Infrastructure Symposium, 4-7 Aug.
Da Nang City, Vietnam.
http://www.giis2011.org/GIIS2011/index.htm
• ITST 2011 - 11th Int’l. Conference on ITS Telecommunications, 23-25 Aug.
St. Petersburg, Russia.
http://www.itst2011.org/
♦ IEEE P2P 2011 - IEEE Int’l. Conference on Peer-to-Peer Computing, 31 Aug.-2 Sept.
Tokyo, Japan.
http://p2p11.org/
♦ IEEE EDOC 2011 - 15th IEEE Int’l. Enterprise Distributed Object Computing Conference, 31 Aug.-2 Sept.
Helsinki, Finland.
http://edoc2011.cs.helsinki.fi/edoc2011/
• FITCE 2011 - 50th FITCE Congress - ICT: Bridging the Ever Shifting Digital Divide, 31 Aug.-3 Sept.
Palermo, Italy.
http://www.fitce2011.org/
SEPTEMBER
• ITC 23 2011 - 2011 Int’l. Teletraffic Congress, 6-8 Sept.
San Francisco, CA.
http://www.itc-conference.org/2011
♦ IEEE PIMRC 2011 - 22nd IEEE Int’l. Symposium on Personal, Indoor and Mobile Radio Communications, 11-14 Sept.
Toronto, Canada.
http://www.ieee-pimrc.org/2011/
• ICUWB 2011 - 2011 IEEE Int’l. Conference on Ultra-Wideband, 14-16 Sept.
Bologna, Italy.
http://www.icuwb2011.org/
• ICCCT 2011 - 2nd Int’l. Conference on Computer and Communication Technology, 15-17 Sept.
Allahabad, India.
http://www.mnnit.ac.in/iccct2011/
• SoftCOM 2011 - Int’l. Conference on Software, Telecommunications and Computer Networks, 15-17 Sept.
Split, Croatia.
http://marjan.fesb.hr/SoftCOM/2011/index.html
♦ IEEE DSA 2011 - IEEE Dynamic Spectrum Access Workshop, 19 Sept.
http://www.ieee-dsa.org/IEEE_DSA_2011.html
• APNOMS 2011 - 13th Asia-Pacific Network Operations and Management Symposium, 21-23 Sept.
Taipei, Taiwan.
http://apnoms2011.cht.com.tw/Home.html
♦ IEEE GreenCom 2011 - Online Conference, 26-29 Sept.
Virtual.
http://www.ieee-greencom.org/
• ICTC 2011 - Int’l. Conference on ICT Convergence 2011, 28-30 Sept.
Seoul, Korea.
http://www.ictc2011.org/main/
OCTOBER
• WPMC 2011 - 14th Int’l. Symposium on Wireless Personal Multimedia Communications, 3-7 Oct.
Brest, France.
http://www.wpmc2011.org/
• ICIN 2011 - 2011 15th Int’l. Conference on Intelligence in Next Generation Networks, 4-7 Oct.
Berlin, Germany.
http://www.icin.biz/
• LANOMS 2011 - 7th Latin American Network Operations and Management Symposium, 10-11 Oct.
Quito, Ecuador.
http://www.lanoms.org/2011/
♦ IEEE CCW 2011 - 2011 IEEE Annual Computer Communications Workshop, 10-12 Oct.
Hyannis, MA.
http://committees.comsoc.org/tccc/ccw/2011/
• DRCN 2011 - 8th Int’l. Workshop on Design of Reliable Communication Networks, 10-12 Oct.
Krakow, Poland.
http://www.drcn2011.net/index.html
• IEEE ICWITS 2011 - 2011 IEEE Int’l. Conference on Wireless Information Technology and Systems, 10-13 Oct.
Beijing, China.
http://icwits.ee.tsinghua.edu.cn/index.html
♦ IEEE SmartGridComm 2011 - IEEE Int’l. Conference on Smart Grid Communications, 17-20 Oct.
Brussels, Belgium.
http://www.ieee-smartgridcomm.org/2011/
♦ IEEE LANMAN 2011 - 18th IEEE Workshop on Local & Metropolitan Area Networks, 20-21 Oct.
Chapel Hill, North Carolina.
http://www.ieee-lanman.org/
• LATINCOM 2011 - IEEE Latin American Conference on Communications 2011, 24-26 Oct.
Belém, Pará, Brazil.
http://www.ieee-latincom.ufpa.br/
• ITU WT 2011 - Technical Symposium at 40th ITU Telecom World 2011, 24-27 Oct.
Geneva, Switzerland.
http://www.itu.int/WORLD2011/index.html
♦ Communications Society portfolio events are indicated with a diamond before the listing; • Communications Society technically co-sponsored conferences are indicated with a bullet before the listing. Individuals with information about upcoming conferences, calls for papers, meeting announcements, and meeting reports should send this information to: IEEE Communications Society, 3 Park Avenue, 17th Floor, New York, NY 10016; e-mail: [email protected]; fax: +1-212-705-8996. Items submitted for publication will be included on a space-available basis.
SOCIETY NEWS
COMSOC 2011 ELECTION
TAKE TIME TO VOTE
Ballots were mailed and emails with election information were sent 31 May 2011 to all Higher Grade* IEEE
Communications Society Members and Affiliates
(excluding Students) whose memberships were effective
prior to 1 May 2011.
To cast your ballot electronically you will need your IEEE Web Account username/password, which is the same account information used to access IEEE online services such as renewing your membership, myIEEE, and Xplore. If you do not recall your web account information, you may go to www.ieee.org/web/accounts to recover it. You may also email [email protected] or call +1 800 678 4333 (USA/Canada) or +1 732 981 0060 (Worldwide).
If you do not receive an email from [email protected] on 31 May 2011 or a paper ballot by 30 June, but you feel your membership was valid before 1 May 2011, you may e-mail [email protected] or call +1 732 562 3904 to check your member status.
(Provide your member number, full name, and address.)
Please note IEEE Policy (Section 14.1) below stating
IEEE mailing lists should not be used for electioneering
in connection with any office within the IEEE:
IEEE membership mailing lists, whether obtained
through IEEE Headquarters or through any IEEE
organizational unit, may be used only in connection
with normal IEEE sponsored activities and may be
used only for such purposes as are permitted under
the New York Not-For-Profit Corporation Law.
They may not be used for electioneering in connection with any office within the IEEE, or for political
purposes, or for commercial promotion, except as
explicitly authorized … .
See details at http://www.ieee.org/web/aboutus/whatis/policies/p14-1.html.
Voting for this election closes 26 July 2011 at 4:00
p.m. EDT! Please vote!
*Includes Graduate Student Members
Open Call from the IEEE Communications Society
Are you enthusiastic? Have you performed quality reviews for technical periodicals? Demonstrated solid technical accomplishments? Have a reputation for high ethical standards and for reliability?
You may be ready ...
The IEEE Communications Society is looking
for volunteers who are interested in becoming
part of a prestigious Communications Society
editorial board.
Duties include: A commitment to handle at
least two manuscripts per month; arrange for
three reviews or more in a timely fashion; and
the ability to make firm and fair decisions.
Qualifications: Subject matter expertise,
editing experience, technical program
committee experience; references,
representative papers.
Apply at: www.comsoc.org/editor
The decision to appoint an editor rests with the Editor-in-Chief of
the journal/magazine. Please note that it will not be possible to
send individual acknowledgments to all applicants.
NEW PRODUCTS
EDITED BY ERIC LEVINE
COMPLETE MIPI M-PHY TEST SUITE
Agilent Technologies Inc.
Agilent Technologies has introduced
a comprehensive MIPI M-PHY test
solution for mobile computing customers. The Agilent solution suite helps
design engineers turn on, debug and
validate all layers of their M-PHY
devices, including physical and protocol
layers, at speeds up to 5.8 Gb/sec.
The Mobile Industry Processor
Interface (MIPI) Alliance is finalizing
the M-PHY specification to allow development of faster, more reliable high-speed interfaces for mobile devices.
M-PHY technology supports a broad
range of applications, including interfaces for monitors, cameras, audio and
video equipment, memory, power management and communication between
baseband and RFIC components.
The Agilent solution consists of
oscilloscopes, protocol analyzers and
exercisers, and bit error-rate testers
(BERTs) using custom M-PHY stimulus software. Each instrument comes
with custom M-PHY-ready software to
support design teams through the entire
product design process.
The Infiniium 90000 X-Series oscilloscope offers real-time bandwidth of up to 32 GHz with a 30-GHz probing system. With low noise and a low jitter measurement floor, the scope ensures superior accuracy and is ideal for MIPI M-PHY transmitter conformance testing at speeds up to Gear 3.
Using 90000 X-Series oscilloscopes
gives engineers increased confidence in
their MIPI M-PHY product performance and increases design margins.
Accurate and automated MIPI M-PHY receiver testing is supported by Agilent’s high-performance ParBERT 81250A for multi-lane testing and J-BERT N4903B for single-lane testing.
Although the MIPI M-PHY receiver
test specifications have not been finalized yet, engineers can use these bit
error-ratio testers for accurate M-PHY
receiver tolerance testing in a pattern
generator or full BERT configuration
in conjunction with N5181/2A, E4438C
and 81150A signal generators.
http://www.agilent.com
REMCOM ANNOUNCES UPDATE TO
XFDTD EM SIMULATION SOFTWARE
Remcom Inc.
Remcom is offering an updated version of its electromagnetic simulation
software, XFdtd Release 7 (XF7),
which includes several geometric modeling additions and new performance
18
Communications
IEEE
efficiencies. The upgrade, which
updates the software to Release 7.2,
contains enhancements that simplify
and speed overall usability:
•Additional modeling capabilities that
enable more precise control over cut
geometries when sketching complex
models.
•Simplified sampling interval settings
for Frequencies of Interest (DFT).
•Simulations using averaged materials
now exploit XStream GPU acceleration technology.
•XFSolver intelligently records which
XStream devices are being used for
simulations, allowing multiple simultaneous XStream simulations on a
single machine.
•Performance of Broadband Far-Zone (previously known as Transient Far-Zone) computations has been dramatically improved.
•Additional efficiencies in CAD import,
particularly for large models with
many assemblies.
XF7 is available in both Pro and
Bio-Pro versions. Both include XStream
GPU acceleration, 32- or 64-bit analysis
module, geometric modeler and postprocessor, shared memory multiprocessor (MPM) at eight cores, and a
comprehensive variety of 3D CAD
import modules. The Bio-Pro version
also includes SAR capability and high
fidelity human body meshes.
http://www.remcom.com
BROADBAND QUADRATURE
MODULATORS FOR CELLULAR
INFRASTRUCTURE MARKET
Skyworks Solutions, Inc.
Skyworks has introduced three wideband quadrature modulators for cellular infrastructure and high-performance radio link applications. Skyworks’ modulators are the latest additions to its wireless infrastructure portfolio and are designed to support the world’s leading 3G and 4G base station providers.
These new, fixed gain quadrature
modulators deliver excellent phase
accuracy and amplitude balance
enabling high performance for a variety
of multi-carrier communication systems.
In addition, Skyworks’ new modulators
have greater than 500 megahertz (MHz)
3dB modulation bandwidth, a low noise
floor, and a wide operating frequency
range that support multiband designs
and network requirements.
The SKY73077 (for 1500 to 2700 MHz), SKY73078 (for 500 to 1500 MHz), and SKY73092 (for 400 to 6000 MHz) quadrature modulators offer high linearity and excellent I/Q phase accuracy and amplitude balance, making the devices ideal for use in high-performance communication systems.
The modulators accept two differential
baseband inputs and a single-ended
local oscillator, and generate a singleended RF output.
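As a quick conceptual aside (a textbook idealization rather than anything taken from the Skyworks data sheets), an ideal quadrature modulator combines the in-phase and quadrature baseband inputs I(t) and Q(t) with a local oscillator at frequency f_LO to form the RF output
\[
s_{\mathrm{RF}}(t) = I(t)\,\cos(2\pi f_{\mathrm{LO}} t) \;-\; Q(t)\,\sin(2\pi f_{\mathrm{LO}} t),
\]
which is why the I/Q phase accuracy and amplitude balance highlighted above translate directly into carrier and unwanted-sideband suppression in multi-carrier systems.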
http://www.skyworksinc.com
LTE PROTOCOL SIMULATOR
GL Communications Inc.
GL Communications has released
the LTE Protocol Simulator, a software
application for emulating LTE interfaces. LTE is an all-IP infrastructure with service priority built in: audio and video are given priority. All necessities
like IP address, authentication, and
security are validated. Instant resources
over RF (the air) and IP (internal network) are made available depending on
what the user is attempting to do. Also,
LTE is designed for compatibility with
older 2G and 3G mobile systems.
GL has released LTE Protocol Simulation for several interfaces, currently S1-MME and eGTP (S5/S8 and S11). Its Message Automation & Protocol Simulation (MAPS™ - LTE-S1 and LTE eGTP) is designed for LTE testing. It can simulate eNodeB (also called Evolved NodeB) and MME (Mobility Management Entity) entities, as well as other interfaces.
MAPS™ - LTE-S1 can act either as
eNodeB or as MME and simulate the
other entity. It can generate any LTE S1 messages in an automated, interactive, and scripted fashion. Possible
applications include:
•Simulate up to 500 smartphones (UEs) powering up and down
•Authenticate and confirm security procedures
•QoS requests for greater or lesser bandwidth
•Temporary addressing management for mobility and security
LTE-S1 Testing Features
•Simulates eNodeB and MME
•Supports LTE control plane
•Generates signaling for hundreds of UEs
•Handles retransmissions
•Generates and processes valid and invalid S1/NAS messages
•Supports message templates for both S1-AP and NAS messages and customization of the field values
•Allows defining variables for various
fields of the selected message type
•Ready-to-use scripts for quick testing
•Supports customization of call flow
and messages
•Supports scripted call generation and
automated call reception
http://www.gl.com
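For readers unfamiliar with the S1 control plane such a tool exercises, the toy sketch below (plain Python, purely illustrative, and unrelated to GL's MAPS scripting interface; all class and field names are simplified assumptions) shows the flavor of the first exchange an LTE signaling simulator automates: an eNodeB registering with an MME through an S1 Setup procedure.

# Toy illustration only (not GL's MAPS API): an eNodeB registering with an
# MME through a simplified S1 Setup Request/Response exchange.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class S1SetupRequest:
    global_enb_id: int
    supported_tas: List[str]      # tracking areas advertised by the eNodeB

@dataclass
class S1SetupResponse:
    mme_name: str
    served_gummeis: List[str]
    relative_mme_capacity: int

class MME:
    """Simplified MME that accepts or rejects S1 Setup Requests."""
    def __init__(self, name: str = "toy-mme"):
        self.name = name
        self.known_enbs: Dict[int, List[str]] = {}

    def handle(self, msg: S1SetupRequest) -> S1SetupResponse:
        # Accept any eNodeB that advertises at least one tracking area.
        if not msg.supported_tas:
            raise ValueError("S1 Setup rejected: no tracking areas")
        self.known_enbs[msg.global_enb_id] = msg.supported_tas
        return S1SetupResponse(self.name, ["001-01-0001"], 50)

class ENodeB:
    """Simplified eNodeB that brings up its S1 link to an MME."""
    def __init__(self, enb_id: int, tas: List[str]):
        self.enb_id = enb_id
        self.tas = tas

    def attach_to(self, mme: MME) -> S1SetupResponse:
        resp = mme.handle(S1SetupRequest(self.enb_id, self.tas))
        print(f"eNodeB 0x{self.enb_id:X} connected to {resp.mme_name}")
        return resp

if __name__ == "__main__":
    ENodeB(enb_id=0x19B, tas=["TA-1"]).attach_to(MME())

A real simulator layers authentication, bearer setup, and retransmission handling on top of this kind of scripted exchange, which is what the feature list above refers to.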
Global Communications Newsletter
July 2011
4G Mobile Licenses Under Auction in Spain
By F. Falcone Lanas and Paula Pérez Gómez, Public University of Navarra and University of Vigo, Spain
The Ministry of Industry announced in February 2010 [1] a
plan to collect between €1500 and €2000 million with the auction of new frequency bands for mobile telephony. The bid for
a total of 310 MHz in different frequency bands will be done
by a mixed formula: 90 percent through auctions and 10 percent by competition.
The available spectrum space is free as a result of the rearrangement of the frequency portion related to the previous
generation of mobile telephony (GSM) and the spectrum
vacated by the switch-off of analog TV services. Of the
amount raised, €800 million will go to arrange the frequencies
now occupied by digital terrestrial television (DTT), the so-called digital dividend in the range from 470 to 862 MHz.
Several management approaches toward spectrum allocation are possible, the auction being among the preferred ones
[2]. Minister Miguel Sebastián pointed out that the mixed procedure was chosen to add transparency to the process as well as to increase tax revenue. The Telecommunications Market Commission (TMC) noted in its
report on the rearrangement of the spectrum that this auction
system is more transparent than the tender-based one used in
2000 and 2005.
The bid for 310 MHz is the most important spectrum restructuring undertaken in Spain, comparable to similar processes in Europe. The Ministry of Industry explained that the
system wants to prioritize the investment of operators in the
sector for the benefit of citizens and society with a large and
direct impact on employment and productivity. Thanks to this
formula, the process is expected to mobilize an investment of €1200 million and create 40,000 jobs.
The Ministry of Industry already has the Electronic Auction Platform (EAP) software to be used by interested parties
to bid in a process expected to begin by July 2011 and to last
as long as bids continue to rise.
The government has limited access to the frequencies by the traditional mobile operators (Movistar, Vodafone, and Orange), establishing an upper limit that prevents any operator from grabbing a majority of the spectrum, in line with the TMC indications, so as not to harm competition or lock out new rivals. More than 30 companies have shown interest in the consultation regarding the process, far more than the number of telecommunications operators in the country.
The use of this bidding system is being pioneered in Spain,
although some companies have experience in auctions held in
other countries, such as Telefonica, which participated in the
auction held in Germany in 2010 via its subsidiary O2 and paid
€1380 million for a set of frequency blocks: two in 800 MHz,
one in 2 GHz, and four in 2.6 GHz.
With the allocation of additional spectrum to be held in
the second quarter of 2011, the government follows the exam-
ple of other countries to promote competition and enable new
services for fourth-generation mobile radio by increasing the
space available by 70 percent.
In this rearrangement the 800 MHz band would be bid on in an auction; in the 900 MHz band, the block of 5 MHz will be awarded by competition with availability last year, whereas the two blocks of 5 and 4.8 MHz were to be auctioned this year but will be available in 2015. The 1800 MHz band would also be awarded by competition to be available in 2011, while the new 2.6 GHz band would be bid on at auction and also be available last year.
The industry spent months awaiting the completion of the
process to undertake the reorganization of the radio spectrum
that would allow telecom operators to have more resources to
grow, mainly in mobile broadband services.
With this spectral reorganization, the regulation that allows the use of these frequency bands by mobile operators is put into practice, so the 900 MHz band, which has been available only to GSM, may be used for broadband services; new frequency bands, such as 800 MHz and 2.6 GHz, are also incorporated. These frequencies will allow better coverage with less
investment, but would not be available until 2015. They will
also ensure coverage for ultra-fast mobile broadband to 98
percent of the population, thus facilitating the achievement of
the objectives of the Digital Agenda for Europe [3] by 2020
and so strongly contribute to reducing the digital divide.
In 2010, radio frequencies in Spain generated a business of €22,000 million, which will grow further to meet the demand for
mobile broadband. In Spain there are 54.3 million mobile
lines (116.3 lines per 100 inhabitants), 92.5 percent owned by
Telefonica (TEF.MC), Vodafone (VOD.L), and Orange
(FTE.PA).
Yoigo, owned by TeliaSonera (TLSN.ST), has a market share of 3.9 percent, and 3.5 percent is held by a dozen mobile virtual network operators (which have no network of their own).
The explosion of data services via mobile phones due to
the proliferation of new devices like smart phones, laptops,
and tablets, which require mobile Internet connection, has
made those frequencies the desired target for all companies
who fear the collapse of their networks. These indicators
turned the Spanish auction into a promising business for
international investors.
Telecommunications operators interested in radio frequency bands of 800 MHz, 900 MHz, and 2.6 GHz had until 7
June to submit applications and participate in the auction process, which began before 30 June 2011.
Orange has committed to investing a total of €433 million in the competition for frequencies of 900 MHz, while Yoigo has offered a total of €300 million for the bands of 900 and 1800
(Continued on Newsletter page 4)
Fourth IEEE Lebanon Communications Workshop (IEEE LCW ’10)
Attracts More than 250 Participants
By Zaher Dawy, IEEE ComSoc Lebanon Chapter Chair
The IEEE Communications Society Lebanon Chapter
organized the Fourth IEEE Lebanon Communications Workshop 2010 (IEEE LCW ’10) on 18 December at the Faculty of
Engineering, Lebanese University (LU). The general theme of
the workshop was Communications Security with focus on
state-of-the-art topics related to network and wireless security.
The IEEE LCW workshop is the biggest telecom event that is
organized at the national level on an annual basis. This year,
it attracted around 275 participants including telecom engineers, telecom executives, university professors, and engineering students. The program started with an opening session,
followed by two technical sessions.
The IEEE Communications Society Lebanon Chapter
chair, Dr. Zaher Dawy (Professor, American University of
Beirut), opened with a welcoming speech, expressing thanks for the strong participation and the distinguished speakers from Lebanon
and abroad. He pointed out that IEEE LCW ’10 is instrumental in achieving the IEEE ComSoc Lebanon Chapter’s objectives: to contribute to telecom advancement on a national
level, to add to telecom awareness, and to strengthen ties
between the academic and industrial worlds. He concluded by
thanking the Workshop’s corporate supporters for their generous contributions: National Instruments Arabia, MTC
Touch, Mada Communications, Data Consult, Alfa managed
by Orascom Telecom, and Terranet.
The opening session included a presentation by Dr. Imad Elhajj (Professor, American University of Beirut) about the IEEE Lebanon Section activities highlighting the professional benefits of IEEE membership, a speech by Dr. Imad Hoballah (Acting Chairman and CEO, Lebanese Telecom Regulatory Authority) on the importance of cyber security awareness at the national level, and a welcome speech by Dr. Mohamed Zoaeter (Dean of the Faculty of Engineering, LU).
Attendees at IEEE LCW ’10.
The technical program included featured presentations by
distinguished invited speakers from the United Nations Interregional Crime and Justice Research Institute (UNICRI) on Cyberwar and Information Warfare by Mr. Raoul Chiesa (Senior Advisor, Strategic Alliances and Cybercrime Issues); Cisco on Borderless Network Security by Mr. Joseph Hanna (Business
(Continued on Newsletter page 4)
New European Research Initiative on Techno-Economic Regulatory
Framework for Cognitive Radio/Software Defined Radio
By Arturas Medeisis, Chair of COST-TERRA, Vilnius Gediminas Technical University, Lithuania
A new joint research initiative has recently been set up in
Europe within the framework of the European Cooperation
for Science and Technology (COST) to address the issue of a
techno-economic regulatory framework for cognitive
radio/software defined radio (CR/SDR). The initiative, formally known as COST Action IC0905 TERRA (COST-TERRA), is organized as a kind of “think tank” with regular
networking meetings. Its planned activities span to May 2014.
At the time of writing, the COST-TERRA network included researchers from 19 European countries representing
academia, industry and regulators. Recently, members from
other parts of the world have started joining the action. The
first non-European membership was by Communications
Research Centre of Canada, now being followed by the Meraka Institute of CSIR South Africa and a couple of US institutions that are considering joining as well. COST-TERRA also
established institutional liaison with bodies like CEPT (association of European regulators), European Telecommunications
Standards Institute (ETSI), IEEE DySPAN (former SCC41)
and the Wireless Innovation Forum (former SDR Forum).
The meetings of COST-TERRA present an excellent
opportunity for researchers as well as various players in the
field to come for lively brainstorming sessions on the subject
of developing regulatory policies for cognitive radio. The most
recent meeting took place in Lisbon, Portugal, on 19–21 January 2011, and was hosted by the Instituto de Telecomunicações.
The meeting was attended by 38 participants, and consisted
of both regular sessions to present the latest research in the field as well as panel discussions dedicated to hot issues such as regulatory policy for TV white space devices and developments with the CR/SDR-related agenda item for the ITU World Radiocommunication Conference of 2012.
Participants of the COST-TERRA meeting in Lisbon.
The next meeting of COST-TERRA will take place 20–22
June 2011 in Brussels, and, in addition to regular sessions, will
feature a public workshop on the afternoon of 22 June. It is to
be noted that COST-TERRA meetings have an open participation policy; therefore, any researchers from around the
world who work on developments of regulatory policies for
(Continued on Newsletter page 4)
Report on the 12th International Conference on
Communication Systems (ICCS 2010)
By Michael Ong, Institute for Infocomm Research, Singapore
The 12th International Conference on
Communication Systems (ICCS 2010) was
held at the Furama Riverfront Hotel, Singapore from 17–19 November 2010.
This is a biennial series of conferences
co-organized by the National University
of Singapore (NUS) and the Institute for
Infocomm Research (I2R), Singapore,
and technically sponsored by IEEE Communications Society, IEEE Singapore
Communications Chapter, and IEEE Singapore Vehicular Technology Chapter.
ICCS 2010 was a three-day event comprising a keynote speech by a leading
academic researcher, technical oral and
poster presentations, and tutorials by
industrial experts. This event continues to
provide opportunities for researchers and
practitioners to share their experience
and ideas in the field of communications
engineering and systems.
The conference showcased a technical program consisting of 10 oral sessions, two poster sessions, five special
sessions, and three tutorials covering
many exciting aspects of wireless communications, optical communications,
devices, and new emerging technologies.
In particular, the five special sessions
consisted of 25 invited papers, addressing the latest developments in the fields
of broadband mobile communications,
cognitive and cooperative communications, energy harvesting and sustainable
communications, optical communica-
Clockwise from upper left: General
Co-Chair Dr. Liang Yingchang at the
opening ceremony, 18 November
2010; delegates touring the Singapore
River on the way to the banquet dinner at the Asian Civilisation Museum;
Dr. Sun Sumei, TPC Chair, introduced Prof. Fumiyuki Adachi for the
keynote speech; local Arrangement
Chair Dr Manjeet Singh toasting during the banquet dinner at the Asian
Civilisation Museum.
(Continued on Newsletter page 4)
Highlights from the World Telecommunications Congress
By Kohei Shiomoto, NTT, Japan
The World Telecommunications Congress (WTC) builds on
the traditions of quality, timeliness, and open interaction originating in the International Switching Symposium (ISS) and
International Symposium on Subscriber Loops and Services
(ISSLS). WTC brings together leading experts from industry,
academia, and government to present the latest advances in
technology and to exchange views on the future of telecommunications. After the merger of ISS and ISSLS, WTC has
been held every two years: WTC 2006 and 2008 were held in
Budapest, Hungary, and New Orleans, Louisiana (WTC 2008
was held in conjunction with IEEE GLOBECOM 2008).
The last WTC was held in Vienna, Austria, on 13–14
September, 2010. In keynote sessions, six talks were presented
by high-profile speakers from Europe, the United States, and
Japan. The Congress was opened by Dr. Rüdiger Köster (T-Mobile Austria GmbH) with a presentation entitled “Telecommunications: The Infrastructure for the 21st Century.” Mag.
Dr. August Reschreiter (Austrian Federal Ministry of Transport, Innovation and Technology) presented the talk “The Governmental Challenges in Light of Next Generation Networks.”
And Ruprecht Niepold (European Commission, DG INFSO)
presented his views on “The Digital Agenda for Europe — The Policy and Regulatory Perspective.”
Opening session of WTC 2010.
On day two, Prof. Dale Hatfield (University of Colorado)
presented “The Role of Self-Regulation in the Next Generation
Network.” Dr. Kou Miyake (NTT, Japan) presented the talk
“FTTH/NGN Service Deployment Strategy in NTT.” And Prof.
Dr.-Ing. habil. Jörg Lange (Nokia Siemens Networks, Germany)
(Continued on Newsletter page 4)
ICCS 2010/continued from page 3
tions, and Underwater Communications. The conference also
featured a keynote speech by IEEE Fellow Prof Fumiyuki
Adachi from Tohoku University, Japan, titled “Challenges for
Gigabit Wireless Pipe.” There were close to 300 submissions
from more than 40 countries to the open call for papers, and
152 papers were accepted for presentation at the conference
after a rigorous and challenging technical review process.
For more information, please refer to iccs-2010.org/index.htm or to Dr. Michael Ong at [email protected].
HIGHLIGHTS FROM WTC 2010/continued from page 3
gave a presentation entitled “Convergent Charging,
Billing and Care – The Increasing Importance of Online Cost
Control for Post-Paid Subscribers.”
The regular technical sessions encompassed a wide variety
of timely topics in telecommunications: Future Mobile, Network & Service Management, NGN, Mobile Ad Hoc, QoS,
Ambient Assisted Living, Optical Networks, Mobile Access and
PANs, Future Internet, Regulatory and Policy Issues, Service
and Applications, and Security. From among these quality papers, the best paper award and the best presentation award were selected. Dr.
Florian Winkler (NEC Europe Ltd., Germany) received the
best paper award for his paper entitled “Driving Mobile eCommerce Services Using Identity Management.” Dr. Erwin P.
Rathgeb (Universität Duisburg-Essen, Germany) received the
best presentation award for his lecture on “Security in the Net
—Why Everything Used to Be Better, Bad Things Happen
Today and the Future Looks Bright.”
WTC 2010 presented 54 talks in 12 sessions and was very
successful, attracting participants from Germany, Austria,
Japan, Italy, Poland, Hungary, Saudi Arabia, the United States,
France, Spain, Turkey, Korea, Australia, the United Kingdom,
Belgium, and Taiwan.
The next WTC will be held in Miyazaki, Japan, on 5–6
March, 2012. WTC 2012 is sponsored by the IEICE and is technically co-sponsored by the VDE/ITG and the IEEE Communications Society. Please visit http://www.wtc2012.jp for details.
See you in Japan!
LEBANON WORKSHOP/continued from page 2
Development Manager); Secunet/ISN-Technologies on Highly
Secure Ethernet Encryption Concepts by Mr. Michael Frings
(Regional Sales Director); Bank of America on Open Trust
Frameworks for Online Security by Dr. Abbie Barbir (VP,
Senior Security Architect); UN-ESCWA on Cyber Security
Awareness by Dr. Nibal Idlebi (Chief of the ICT Applications
Section); LU Faculty of Law on Regulatory Aspects of Cyber
Security by Prof. Mona Al-Achkar; and Alfa on GSM Security
Challenges by Mr. Issam El-Hajal (Head of Release and Portfolio Management Unit).
The workshop activities included two coffee breaks and a
lunch break which provided participants with the opportunity
to network among each other and to follow up on the discussions with the specialist invited speakers. The IEEE Communications Society Lebanon Chapter Executive Committee has
initiated planning activities for IEEE LCW ’11, which will take
place in November 2011 with a focus on emergency communications.
For more information about IEEE ComSoc Lebanon
Chapter activities, check http://ewh.ieee.org/r8/lebanon/com
4G MOBILE LICENSES/continued from page 1
MHz. Some virtual operators will create alliances to avoid losing this opportunity and thus address the investment power of the big operators.
Further Readings
[1] http://www.mityc.es
[2] M. Cave, C. Doyle, and W. Webb, Essentials of Modern Spectrum Management, Cambridge University Press, 2007.
[3] http://ec.europa.eu/information_society/digital-agenda
Global Communications Newsletter
www.comsoc.org/pubs/gcn
STEFANO BREGNI, Editor
Politecnico di Milano - Dept. of Electronics and Information
Piazza Leonardo da Vinci 32, 20133 MILANO MI, Italy
Ph.: +39-02-2399.3503 - Fax: +39-02-2399.3413
Email: [email protected], [email protected]
IEEE COMMUNICATIONS SOCIETY
KHALED B. LETAIEF, VICE-PRESIDENT CONFERENCES
SERGIO BENEDETTO, VICE-PRESIDENT MEMBER RELATIONS
JOSÉ-DAVID CELY, DIRECTOR OF LA REGION
GABE JAKOBSON, DIRECTOR OF NA REGION
TARIQ DURRANI, DIRECTOR OF EAME REGION
NAOAKI YAMANAKA, DIRECTOR OF AP REGION
ROBERTO SARACCO, DIRECTOR OF SISTER AND RELATED SOCIETIES
REGIONAL CORRESPONDENTS WHO CONTRIBUTED TO THIS ISSUE
ANA ALEJOS, SPAIN/USA ([email protected])
JOEL RODRIGUES, PORTUGAL ([email protected])
KOHEI SHIOMOTO, JAPAN ([email protected])
EWELL TAN, SINGAPORE ([email protected])
A publication of the IEEE Communications Society
EUROPEAN RESEARCH/continued from page 2
CR/SDR technologies are welcome to come and present their
research or simply listen in and join in the debates.
Early research directions within COST-TERRA focused
around the analysis and categorization of known CR/SDR use
scenarios and business cases. Three parallel threads will be
pursued for the time being:
•CR/SDR deployment scenarios
•CR/SDR coexistence studies
•Economic aspects of CR/SDR regulation
Later, the fourth research area will be activated to deal
with the impact assessment of CR/SDR regulation.
For more information on the aims, work programme, and
ongoing results of COST-TERRA, please visit the action’s
website at http://www.cost-terra.org/.
EUROPE’S PREMIER
MICROWAVE, RF, WIRELESS
AND RADAR EVENT
European Microwave Week is the largest event dedicated to RF,
Microwave, Radar and Wireless Technologies in Europe.
Capitalising on the success of the previous shows, the event
promises growth in the number of visitors and delegates.
EuMW2011 will provide:
• 7,500 sqm of gross exhibition space
• 5,000 key visitors from around the globe
• 1,700 - 2,000 conference delegates
• In excess of 250 exhibitors
Running alongside the exhibition are 3 separate, but
complementary Conferences:
• The 6th European Microwave Integrated Circuits Conference (EuMIC)
• The 41st European Microwave Conference (EuMC)
• The 8th European Radar Conference (EuRAD)
Plus a one day Defence and Security Conference
Interested in exhibiting? Book online NOW!
www.eumweek.com
For further information, please contact:
Richard Vaughan
Horizon House Publications Ltd.
16 Sussex Street, London SW1V 4RW, UK
E: [email protected]
Tel: +44 20 7596 8742
Fax: +44 20 7596 8749
Kristen Anderson
Horizon House Publications Inc.
685 Canton Street, Norwood, MA 02062, USA
E: [email protected]
Tel: +1 781 769 9750
Fax: +1 781 769 5037
GUEST EDITORIAL
FUTURE INTERNET ARCHITECTURES:
DESIGN AND DEPLOYMENT PERSPECTIVES
Raj Jain, Arjan Durresi, and Subharthi Paul
The last 40 years of research have matured packet switching technology into a key communication primitive. Its key use context, the Internet, has been phenomenally successful. From its humble beginning as a research
switching technology as a key communication primitive. Its key use context, the Internet, has been phenomenally successful. From its humble beginning as a research
network, it has evolved into a critical infrastructure for the
development of businesses, societies, and nations. The
Internet’s most popular application, the World Wide Web,
has powered the present information age that has accelerated progress in all areas. There is no doubt that a lot has
been achieved. Yet as we look toward the future, a very
different set of research challenges present themselves.
These challenges originate primarily from the “responsibilities” of handling an elite infrastructure, the “burden” of
satisfying popular expectations, and catering to the
“change” in its use context.
The future Internet needs to cater to the responsibilities
of a critical infrastructure. Security, energy efficiency, and
performance guarantee are the primary issues. Also, the
future Internet needs to live up to its “near-magical” perception of communication capabilities. It needs to be able
to scale to billions of nodes and also provide support for
the diversified requirements of next-generation applications. The original architecture of the Internet and its communication protocols were not designed for such
requirements. Moreover, the use context for which the
original Internet was designed has changed considerably.
We have adapted to these changes through incremental
modifications to the original architecture. On the one hand
these changes have helped sustain the growth of the Internet, while on the other they have increasingly made the Internet
architecture brittle and non-deterministic. Thus, the basic
underlying principles that have been instrumental in the
Internet’s success need to be revisited and possibly redefined in light of future requirements.
The networking research community has taken up the task
for designing the architecture for the future Internet. Initially
started as part of the FIND and GENI programs by the
National Science Foundation (NSF) in the United States,
future Internet research is now a key agenda for all leading
research agencies around the world including the European
Union, Japan, and China.
The goal of this feature topic is to present some interesting
design and deployment perspectives on future Internet
research. We received 48 papers. Six of these have been
selected for publication in this issue. True to the spirit of
diversified future Internet design, all six articles address different design and research areas. The topic should be interesting reading, with each article providing a new and fresh
perspective on the design space.
“A Survey of the Research on Future Internet Architectures” provides a concise and informative survey of the various next-generation Internet design initiatives around the
world. The article is background reading for those who want a
high-level look into the research landscape of diversified projects with very different objectives and design approaches.
This article provides pointers for further research into specific
projects and ideas to an interested reader. While we did not
receive any papers from the four winners of the NSF Future
Internet Architecture (FIA) competition, or any of the NSF
GENI participants, this article provides brief insights into
those projects. The reviews of this article were handled directly by Dr. Steve Gorshe, Editor-in-Chief of IEEE Communications Magazine.
The article “Loci of Competition for Future Internet
Architectures” proposes a new design principle for the future
Internet that advocates “designing for competition.” This
design principle is rooted in economics and represents a relevant interdisciplinary research area for future Internet design.
The article identifies various “loci” in the design space that
should allow for multiple competitive providers to coexist.
This will prevent monopolies. Future design choices and innovations shall evolve more naturally based on market forces.
The key challenges are to locate the proper loci across the
horizontal and vertical design space that do not unnecessarily
make the architecture too complex, and manage the interaction between the different loci to provide a seamless communication infrastructure. Clearly, the current Internet was not
designed from the perspective of future commercial use. As a
result, policy enforcements, security, and accountability across
interorganizational boundaries have perennially been the
problem areas for the current design. The “design for competition” principle will hopefully set the right economic circumstances for the design of the future Internet.
Another interesting interdisciplinary research area that
might potentially contribute new ideas to the design of a
robust, self-managed, and naturally evolving future Internet
design is biology. “Biological Principles for Future Internet
Architecture Design” maps biological principles to network
architectures. Biological systems present different models for
intercommunications among elements to implement a self-sustaining system. These mechanisms may be modeled to design
a robust and self-managed control plane for the future Internet. The article also provides a possible way to implement
these abstract principles on actual networks. The key insight
presented is that to correctly map a biological intercommunication mechanism to an equivalent networking mechanism, it
is necessary to consider the scale and administrative control
boundaries of the intercommunication scenario.
One of the primary components of next-generation Internet research is a testbed platform. Designing testbeds for at-scale experimentation is itself an independent area of
research. The article “Enabling Future Internet Research:
The FEDERICA Case” presents a discussion on the experiences with building and managing the FEDERICA testbed.
Apart from a discussion on the testbed itself, its features, and
the different technologies it uses, the article presents a list of
projects that run on the testbed and how FEDERICA supports their diversified experimental contexts. While there is a
lot of literature on testbed technologies, their properties, and
their unique features, this article is organized differently, so that the end user can better appreciate the different features of the testbed through real use case examples.
The next-generation Internet design
space is highly diversified across different design philosophies, principles, and
technologies. “Content, Connectivity,
and Cloud: Ingredients for the Network
of the Future” provides an integrated
design framework from three of the
most promising components of the next-generation Internet design space: information-centric network design, cloud
computing, and open connectivity. The
high point of this article is that it provides a lot of insight into each of these
design space components and also integrates them into a coherent framework.
A key area of research for next-generation network technologies is the programmable data plane. There are two
aspects to this research. The first aspect
is the set of control and management
protocols that support the programmability of the underlying data plane. The
second aspect is the system-level design
of the high-performance programmable
data plane itself. “PEARL: A Programmable Virtual Router Platform”
presents a system-level design of a programmable data plane with discussion
of both the hardware and software platforms.
We would like to thank Editor-in-Chief Dr. Steve Gorshe and the IEEE
publication staff for their assistance
with this feature topic. Special thanks to
the 253 reviewers who took time to provide
detailed comments. We are also thankful to the authors who submitted their
articles for this special issue.
BIOGRAPHIES
R AJ J AIN [F’93] ([email protected])
is a Fellow of ACM, and a winner of the
ACM SIGCOMM Test of Time award, CDAC-ACCS Foundation Award 2009,
and Hind Rattan 2011 award. He is currently a professor of computer science and engineering at Washington University in St. Louis. Previously, he
was one of the co-founders of Nayna Networks, Inc., a next-generation
telecommunications systems company in San Jose, California. He was a
senior consulting engineer at Digital Equipment Corporation in Littleton,
Massachusetts, and then a professor of computer and information sciences
at Ohio State University, Columbus. He is the author of Art of Computer
Systems Performance Analysis, which won the 1991 Best Advanced How-to
Book, Systems award from the Computer Press Association. His fourth
book, entitled High-Performance TCP/IP: Concepts, Issues, and Solutions,
was published by Prentice Hall in November 2003. He recently co-edited
Quality of Service Architectures for Wireless Networks: Performance Metrics
and Management, published in April 2010.
ARJAN DURRESI [S’03] ([email protected])
is currently an associate professor
of computer and information science at Indiana University Purdue University Indianapolis (IUPUI). He was a senior software analyst at Telesoft Inc. in
Rome, Italy; then a research scientist in computer and information sciences
at Ohio State University; and an assistant professor of computer science at
Louisiana State University, Baton Rouge. He has authored over 70 articles in
journals and more than 140 in conferences in the fields of networking and
security. He is the founder of several international workshops, including
Heterogeneous Wireless Networks — HWISE in 2005, Advances in Information Security — WAIS in 2007, Bio and Intelligent Computing — BICOM
2008, and Trustworthy Computing — TWC in 2010.
SUBHARTHI PAUL received his B.S. degree from the University of Delhi, India,
and his Master’s degree in software engineering from Jadavpur University,
Kolkata, India. He is presently a doctoral student in computer science and
engineering at Washington University, St. Louis, Missouri. His primary
research interests are in the area of future Internet architectures.
FUTURE INTERNET ARCHITECTURES: DESIGN AND
DEPLOYMENT PERSPECTIVES
A Survey of the Research on
Future Internet Architectures
Jianli Pan, Subharthi Paul, and Raj Jain, Washington University
ABSTRACT
The current Internet, which was designed
over 40 years ago, is facing unprecedented challenges in many aspects, especially in the commercial context. The emerging demands for
security, mobility, content distribution, and so on are difficult to meet through incremental changes and ad hoc patches. New clean-slate architecture
designs based on new design principles are
expected to address these challenges. In this
survey article, we investigate the key research
topics in the area of future Internet architecture. Many ongoing research projects from
the United States, the European Union, Japan,
China, and other places are introduced and discussed. We aim to draw an overall picture of
the current research progress on the future
Internet architecture.
INTRODUCTION
(This work was supported in part by a grant from Intel Corporation and NSF CISE Grant #1019119.)
The Internet has evolved from an academic network to a broad commercial platform. It has
become an integral and indispensable part of
our daily life, economic operation, and society.
However, many technical and non-technical
challenges have emerged during this process,
which call for potential new Internet architectures. Technically, the current Internet was
designed over 40 years ago with certain design
principles. Its continuing success has been hindered by more and more sophisticated network
attacks due to the lack of security embedded in
the original architecture. Also, IP’s narrow waist
means that the core architecture is hard to modify, and new functions have to be implemented
through myopic and clumsy ad hoc patches on
top of the existing architecture. Moreover, it has
become extremely difficult to support the ever
increasing demands for security, performance
reliability, social content distribution, mobility,
and so on through such incremental changes. As
a result, a clean-slate architecture design
paradigm has been suggested by the research
community to build the future Internet. From a
non-technical aspect, commercial usage requires
fine-grained security enforcement as opposed to
the current “perimeter-based” enforcement.
Security needs to be an inherent feature and
integral part of the architecture. Also, there is a
significant demand to transform the Internet
from a simple “host-to-host” packet delivery
paradigm into a more diverse paradigm built
around the data, content, and users instead of
the machines. All of the above challenges have
led to the research on future Internet architectures.
Future Internet architecture is not a single
improvement on a specific topic or goal. A clean-slate solution on a specific topic may assume the
other parts of the architecture to be fixed and
unchanged. Thus, assembling different clean-slate solutions targeting different aspects will not
necessarily lead to a new Internet architecture.
Instead, it has to be an overall redesign of the
whole architecture, taking all the issues (security,
mobility, performance reliability, etc.) into consideration. It also needs to be evolvable and flexible to accommodate future changes. Most
previous clean-slate projects were focused on
individual topics. Through a collaborative and
comprehensive approach, the lessons learned
and research results obtained from these individual efforts can be used to build a holistic Internet architecture.
Another important aspect of future Internet
architecture research is the experimentation
testbeds for new architectures. The current
Internet is owned and controlled by multiple
stakeholders who may not be willing to expose
their networks to the risk of experimentation. So
the other goal of future Internet architecture
research is to explore open virtual large-scale
testbeds without affecting existing services. New
architectures can be tested, validated, and
improved by running on such testbeds before
they are deployed in the real world.
In summary, there are three consecutive steps
leading toward a working future Internet architecture:
Step 1: Innovations in various aspects of the
Internet
Step 2: Collaborative projects putting multiple
innovations into an overall networking architecture
Step 3: Testbeds for real-scale experimentation
It may take a few rounds or spirals to work out a
future Internet architecture that can fit all the
requirements.
Future Internet research efforts may be classified based on their technical and geographical
diversity. While some of the projects target
individual topics, others aim at holistic architectures by creating collaboration and synergy
among individual projects. Research programs
specifically aimed at the design of the future
Internet have been set up in different countries
around the globe, including the United States,
the European Union (EU), Japan, and China.
The geographical grouping reflects the different approaches and structures of these research programs. While dividing the
projects by their major topics is also possible,
due to the holistic architecture goals, different
projects may have some overlap.
Over the past few years’ future Internet
research has gathered enormous momentum as
evidenced by the large number of research projects in this area. In this article, primarily based
on the geographical diversity, we present a short
survey limited in scope to a subset of representative projects and discuss their approaches, major
features, and potential impact on the future.
We discuss the key research topics and design
goals for the future Internet architectures.
Research projects in the United States, European Union, and Asian countries are discussed
in detail, respectively. Some of our discussions
and perspectives on future Internet architectures
are included later. Finally, a summary concludes
the article.
KEY RESEARCH TOPICS
In this section, we discuss some key research
topics that are being addressed by different
research projects.
Content- or data-oriented paradigms: Today’s
Internet builds around the “narrow waist” of IP,
which brings the elegance of diverse design
above and below IP, but also makes it hard to
change the IP layer to adapt to future requirements. Since the primary usage of today's Internet has changed from host-to-host
communication to content distribution, it is
desirable to change the architecture’s narrow
waist from IP to the data or content distribution.
Several research projects are based on this idea.
This category of new paradigms introduces challenges in data and content security and privacy,
scalability of naming and aggregation, compatibility and co-working with IP, and efficiency of
the new paradigm.
Mobility and ubiquitous access to networks:
The Internet is experiencing a significant shift
from PC-based computing to mobile computing.
Mobility has become the key driver for the future
Internet. Convergence demands are increasing
among heterogeneous networks such as cellular,
IP, and wireless ad hoc or sensor networks that
have different technical standards and business
models. Treating mobility as the norm rather than an exception in the architecture potentially enriches the future Internet architecture with innovative
scenarios and applications. Many collaborative
research projects in academia and industry are
pursuing such research topics with great interest.
These projects also face challenges such as how
to trade off mobility with scalability, security,
and privacy protection of mobile users, mobile
endpoint resource usage optimization, and so
on.
Cloud-computing-centric architectures:
Migrating storage and computation into the
“cloud” and creating a “computing utility” is a
trend that demands new Internet services and
applications. It creates new ways to provide
global-scale resource provisioning in a "utility-like" manner. Data centers are the key components of such new architectures. It is important
to create secure, trustworthy, extensible, and
robust architecture to interconnect data, control, and management planes of data centers.
The cloud computing perspective has attracted
considerable research effort and industry projects toward these goals. A major technical
challenge is how to guarantee the trustworthiness of users while maintaining persistent service availability.
Security: Security was added into the original
Internet as an additional overlay instead of an
inherent part of the Internet architecture. Now
security has become an important design goal
for the future Internet architecture. The research
is related to both the technical context and the
economic and public policy context. From the
technical aspect, it has to provide multiple granularities (encryption, authentication, authorization, etc.) for any potential use case. Also, it
needs to be open and extensible to future new
security-related solutions. From the non-technical aspect, it should ensure a trustworthy interface among the participants (e.g., users,
infrastructure providers, and content providers).
There are many research projects and working
groups related to security. The challenges on this
topic are very diverse, and multiple participants
make the issue complicated.
Experimental testbeds: As mentioned earlier,
developing new Internet architectures requires
large-scale testbeds. Currently, testbed research
includes multiple testbeds with different virtualization technologies, and the federation and
coordination among these testbeds. Research
organizations from the United States, European
Union, and Asia have initiated several programs
related to the research and implementation of
large-scale testbeds. These projects explore challenges related to large-scale hardware, software,
distributed system test and maintenance, security
and robustness, coordination, openness, and
extensibility.
Besides these typical research topics, there
are several others, including but not limited to
networked multimedia; “smart dust,” also called
the “Internet of things”; and Internet services
architecture. However, note that in this survey,
we are not trying to enumerate all the possible
topics and corresponding research projects.
Instead, we focus on a representative subset and
discuss a few important ongoing research projects.
Due to length limitations, we are not able to
enumerate all the references for the projects discussed below. However, we do have a longer survey [18], which includes a more complete
reference list for further reading.
Table 1. U.S. projects and clusters on the future Internet.
FIA: NDN, MobilityFirst, NEBULA, XIA, etc.
FIND: CABO, DAMS, Maestro, NetServ, RNA, SISS, etc. (more than 47 total)
GENI: Spiral 1 (5 clusters: DETER with 1 project, PlanetLab with 7 projects, ProtoGENI with 5 projects, ORCA with 4 projects, ORBIT with 2 projects, plus 8 unclassified projects and 2 analysis projects); Spiral 2 (over 60 active projects as of 2009*); Spiral 3 (about 100 active projects as of 2011*)
* GENI design and prototyping projects can last for more than one spiral.
RESEARCH PROJECTS FROM THE
UNITED STATES
Research programs on future Internet architecture in the United States are administered by the
National Science Foundation (NSF) directorate
for Computer and Information Science and
Engineering (CISE).
FIA AND FIND
The Future Internet Architecture (FIA) program [1] of the National Science Foundation
(NSF) is built on the previous program, Future
Internet Design (FIND) [2]. FIND funded about
50 research projects on all kinds of design
aspects of the future Internet. FIA is the next
phase to pull together the ideas into groups of
overall architecture proposals. There are four
such collaborative architecture groups funded
under this program, and we introduce them
here. Table 1 illustrates the overall research
projects from the United States, including FIA
and FIND.
Named Data Networking (NDN) — The
Named Data Networking (NDN) [3] project is
led by the University of California, Los Angeles
with participation from about 10 universities and
research institutes in the United States. The initial idea of the project can be traced to the concept of content-centric networks (CCNs) by Ted
Nelson in the 1970s. After that, several projects
such as TRIAD at Stanford and DONA from
the University of California at Berkeley were
carried out exploring the topic. In 2009 Xerox
Palo Alto Research Center (PARC) released the
CCNx project led by Van Jacobson, who is also
one of the technical leaders of the NDN project.
The basic argument of the NDN project is
that the primary usage of the current Internet
has changed from end-to-end packet delivery to
a content-centric model. The current Internet,
which is a “client-server” model, is facing challenges in supporting secure content-oriented
functionality. In this information dissemination
model, the network is “transparent” and just
forwarding data (i.e., it is “content-unaware”).
Due to this unawareness, multiple copies of the
same data are sent between endpoints on the
network again and again without any traffic
optimization on the network’s part. The NDN
uses a different model that enables the network
to focus on “what” (contents) rather than
“where” (addresses). The data are named
instead of their location (IP addresses). Data
become the first-class entities in NDN. Instead
of trying to secure the transmission channel or
data path through encryption, NDN tries to
secure the content by naming the data through
a security-enhanced method. This approach
allows separating trust in data from trust
between hosts and servers, which can potentially
enable content caching on the network side to
optimize traffic. Figure 1 is a simple illustration
of the goal of NDN to build a “narrow waist”
around content chunks instead of IP.
Figure 1. The new "narrow waist" of NDN (right) compared to the current Internet (left).
NDN has several key research issues. The
first one is how to find the data, or how the data
are named and organized to ensure fast data
lookup and delivery. The proposed idea is to
name the content by a hierarchical “name tree”
which is scalable and easy to retrieve. The second research issue is data security and trustworthiness. NDN proposes to secure the data
directly instead of securing the data “containers”
such as files, hosts, and network connections.
The contents are signed by public keys. The
third issue is the scaling of NDN. NDN names
are longer than IP addresses, but the hierarchical structure helps the efficiency of lookup and
global accessibility of the data.
NDN addresses these issues while working to resolve the challenges in
routing scalability, security and trust models, fast
data forwarding and delivery, content protection
and privacy, and an underlying theory supporting
the design.
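To make the hierarchical naming idea concrete, the following minimal sketch (our own Python illustration; the name components, prefixes, and "faces" are invented examples, not NDN's actual naming scheme, wire format, or forwarding code) resolves a content name by longest-prefix match over a name-keyed forwarding table:

```python
# Toy name-based forwarding in the spirit of NDN's hierarchical names.
# The names and faces below are hypothetical examples for illustration only.

def components(name):
    """Split a hierarchical content name such as /wustl/videos/demo.mpg/chunk/3."""
    return tuple(c for c in name.split("/") if c)

# A toy forwarding table: name prefix -> outgoing "face" (next hop).
fib = {
    ("wustl",): "face-1",
    ("wustl", "videos"): "face-2",
    ("parc", "papers"): "face-3",
}

def longest_prefix_match(name):
    """Return the face registered for the longest matching name prefix, if any."""
    parts = components(name)
    for length in range(len(parts), 0, -1):
        face = fib.get(parts[:length])
        if face is not None:
            return face
    return None  # no route toward this content name

print(longest_prefix_match("/wustl/videos/demo.mpg/chunk/3"))  # -> face-2
print(longest_prefix_match("/parc/papers/ccn.pdf"))            # -> face-3
```

Aggregation by name prefix, rather than by address prefix, is what the scalability argument above rests on; in NDN itself the lookup is driven by Interest packets and the returned data carry signatures.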
MobilityFirst — The MobilityFirst [4] project
is led by Rutgers University with seven other
universities. The basic motivation of MobilityFirst is that the current Internet is designed for
interconnecting fixed endpoints. It fails to
address the trend of dramatically increasing
demands of mobile devices and services. The
change in Internet usage and demand is also a key driver for providing mobility at the architectural level in the future Internet. For the near
term, MobilityFirst aims to address the cellular
convergence trend motivated by the huge
mobile population of 4 to 5 billion cellular
devices; it also provides mobile peer-to-peer
(P2P) and infostation (delay-tolerant network
[DTN]) application services which offer robustness in case of link/network disconnection. For
the long term, MobilityFirst has
the ambition of connecting millions of cars via
vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) modes, which involve capabilities such as location services, georouting, and
reliable multicast. Ultimately, it will introduce a
pervasive system to interface human beings
with the physical world, and build a future
Internet around people.
The challenges addressed by MobilityFirst
include stronger security and trust requirements
due to open wireless access, dynamic association,
privacy concerns, and greater chance of network
failure. MobilityFirst targets a clean-slate design
directly addressing mobility such that the fixed
Internet will be a special case of the general
design. MobilityFirst builds the “narrow waist”
of the protocol stack around several protocols:
• Global name resolution and routing service
• Storage-aware (DTN-like) routing protocol
• Hop-by-hop segmented transport
• Service and management application programming interfaces (APIs)
The DTN-like routing protocol is integrated with
the use of self-certifying public key addresses for
inherent trustworthiness. Functionalities such as
context- and location-aware services fit into the
architecture naturally. An overview of the MobilityFirst architecture is shown in Fig. 2. It shows
all the building blocks mentioned above and how
they work together.
Figure 2. MobilityFirst architecture.
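The separation of names from addresses can be illustrated with a small sketch (our own Python illustration; the GUID derivation and the resolution-service interface are assumptions for the example, not MobilityFirst's actual protocol): a self-certifying identifier is derived from an endpoint's public key, and a global name resolution service tracks the identifier's current attachment points, which are looked up late, at delivery time.

```python
# Toy global name resolution in the spirit of MobilityFirst's name/address split.
# The GUID format and the service API are simplifying assumptions, not the real design.
import hashlib

def guid_from_public_key(public_key):
    """Self-certifying identifier: a hash of the endpoint's public key."""
    return hashlib.sha256(public_key).hexdigest()[:40]

class NameResolutionService:
    def __init__(self):
        self._table = {}  # GUID -> set of current network addresses

    def update(self, guid, address):
        """Called when a device attaches to (or moves to) a new network."""
        self._table.setdefault(guid, set()).add(address)

    def remove(self, guid, address):
        self._table.get(guid, set()).discard(address)

    def resolve(self, guid):
        """Late binding: look up the GUID's current addresses at delivery time."""
        return sorted(self._table.get(guid, set()))

gnrs = NameResolutionService()
phone = guid_from_public_key(b"example-public-key-bytes")
gnrs.update(phone, "wifi:192.0.2.17")
gnrs.update(phone, "lte:198.51.100.9")   # a multihomed device
gnrs.remove(phone, "wifi:192.0.2.17")    # the device leaves the Wi-Fi network
print(gnrs.resolve(phone))               # -> ['lte:198.51.100.9']
```

Because the identifier never changes when the device moves, in-network elements can hold data and re-resolve the address later, which is consistent with the storage-aware routing idea listed above.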
Some typical research challenges of MobilityFirst include:
• Trade-off between mobility and scalability
• Content caching and opportunistic data
delivery
• Higher security and privacy requirements
• Robustness and fault tolerance
NEBULA — NEBULA [5] is another FIA project focused on building a cloud-computing-centric network architecture. It is led by the
University of Pennsylvania with 11 other universities. NEBULA envisions the future Internet
consisting of a highly available and extensible
core network interconnecting data centers to
provide utility-like services. Multiple cloud providers can use replication by themselves. Clouds
comply with the agreement for mobile “roaming” users to connect to the nearest data center
with a variety of access mechanisms such as
wired and wireless links. NEBULA aims to
design the cloud service embedded with security
and trustworthiness, high service availability and
reliability, integration of data centers and
routers, evolvability, and economic and regulatory viability.
NEBULA design principles include:
• Reliable and high-speed core interconnecting data centers
• Parallel paths between data centers and
core routers
• Secure in both access and transit
• A policy-based path selection mechanism
• Authentication enforced during connection
establishment
With these design principles in mind, the NEBULA future Internet architecture consists of the
following key parts:
• The NEBULA data plane (NDP), which
establishes policy-compliant paths with flexible access control and defense mechanisms
against availability attacks
• NEBULA virtual and extensible networking
techniques (NVENT), which is a control
plane providing access to application-selectable service and network abstractions
such as redundancy, consistency, and policy
routing
• The NEBULA core (NCore), which redundantly interconnects data centers with ultra-high-availability routers
NVENT offers control plane security with policy-selectable network abstraction, including multipath routing and use of new networks. NDP
involves a novel approach for network path
establishment and policy-controlled trustworthy
paths establishment among NEBULA routers.
Figure 3 shows the NEBULA architecture comprising the NDP, NVENT, and NCore, and shows how they interact with each other.
Figure 3. NEBULA architecture components and their interactions.
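As a loose illustration of what policy-compliant path selection might look like (a simplified sketch of our own; the path representation and the policy form are assumptions, not NEBULA's actual NDP mechanism), a path is admitted only if it traverses every required provider and avoids every prohibited one:

```python
# Toy policy-compliant path selection, loosely in the spirit of NEBULA's NDP.
# Provider names and the policy format are invented for this illustration.

# Candidate paths from an access network to a data center, as provider lists.
candidate_paths = [
    ["access-A", "transit-X", "ncore-1", "datacenter-1"],
    ["access-A", "transit-Y", "ncore-2", "datacenter-1"],
    ["access-A", "transit-Z", "datacenter-1"],
]

def compliant(path, must_traverse, must_avoid):
    """A path is acceptable if it crosses every required provider and no banned one."""
    hops = set(path)
    return set(must_traverse).issubset(hops) and set(must_avoid).isdisjoint(hops)

def select_path(paths, must_traverse=(), must_avoid=()):
    """Return the first candidate that satisfies the policy, or None to refuse service."""
    for path in paths:
        if compliant(path, must_traverse, must_avoid):
            return path
    return None

# Policy: traffic must cross the highly available core (ncore-1) and avoid transit-Y.
print(select_path(candidate_paths, must_traverse={"ncore-1"}, must_avoid={"transit-Y"}))
# -> ['access-A', 'transit-X', 'ncore-1', 'datacenter-1']
```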
eXpressive Internet Architecture (XIA) —
Expressive Internet Architecture (XIA) [6] is
also one of the four projects from the NSF FIA
program, and was initiated by Carnegie Mellon
University collaborating with two other universities. As we observe, most of the research projects on future Internet architectures realize the
importance of security and consider their architecture carefully to avoid the flaws of the original Internet design. However, XIA directly and
explicitly targets the security issue within its
design.
There are three key ideas in the XIA architecture:
• Define a rich set of building blocks or communication entities as network principals
including hosts, services, contents, and
future additional entities.
• It is embedded with intrinsic security by
using self-certifying identifiers for all principals for integrity and accountability properties.
• A pervasive “narrow waist” (not limited to
the host-based communication as in the
current Internet) for all key functions,
including access to principals, interaction
among stakeholders, and trust management; it aims to provide interoperability at
all levels in the system, not just packet forwarding.
Figure 4. XIA components and interactions.
The XIA components and their interactions
are illustrated in Fig. 4. The core of the XIA is
the Expressive Internet Protocol (XIP) supporting communication between various types of
principals. Three typical XIA principal types are content, host (defined by "who"), and service
(defined by what it does). They are open to
future extension. Each type of principal has a
narrow waist that defines the minimal functionality required for interoperability. Principals talk
to each other using expressive identifiers (XIDs),
which are 160-bit identifiers identifying hosts,
pieces of content, or services. The XIDs are
basically self-certifying identifiers taking advantage of cryptographic hash technology. By using
XIDs, content retrieval no longer relies on a particular host, service, or network path.
XIP can then support future functions as a
diverse set of services. For low-level services, it
uses a path-segment-based network architecture
(named Tapa in their previous work) as the
basic building block; at a higher level, it builds services for content transfer and caching, and a service for secure content provenance. XIA also
needs various trustworthy mechanisms and provides network availability even when under
attack. Finally, XIA defines explicit interfaces
between network actors with different roles and
goals.
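The intrinsic-security property of self-certifying identifiers can be sketched in a few lines (our own illustration; the SHA-1-based encoding below simply mirrors the 160-bit length mentioned above and is not XIA's actual XID format): because an identifier is a hash of the thing it names, any retrieved copy can be verified against the identifier itself, independent of which host, cache, or path supplied it.

```python
# Sketch of self-certifying identifiers in the spirit of XIA's XIDs.
# The encodings and principal types here are simplified assumptions.
import hashlib

def content_xid(data):
    """A 160-bit content identifier: the SHA-1 digest of the content itself."""
    return hashlib.sha1(data).hexdigest()

def host_xid(public_key):
    """A 160-bit host identifier: the SHA-1 digest of the host's public key."""
    return hashlib.sha1(public_key).hexdigest()

def verify_content(xid, data):
    """Integrity check: a copy is genuine if it hashes back to its identifier,
    regardless of which host, cache, or path supplied it."""
    return content_xid(data) == xid

chunk = b"an example content chunk"
xid = content_xid(chunk)
print(verify_content(xid, chunk))               # True: copy matches its identifier
print(verify_content(xid, b"a tampered copy"))  # False: integrity violation detected
```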
GLOBAL ENVIRONMENT FOR
NETWORK INNOVATIONS (GENI)
GENI [7] is a collaborative program supported
by NSF aimed at providing a global large-scale
experimental testbed for future Internet architecture test and validation. Started in 2005, it
has attracted broad interest and participation
from both academia and industry. Besides its initial support from existing projects on a dedicated
backbone network infrastructure, it also aims to
attract other infrastructure platforms to participate in the federation; the control framework provides these participating networks with users and operating environments, and observes, measures, and records the resulting experimental outcomes. So generally, GENI is different
from common testbeds in that it is a general-purpose large-scale facility that puts no limits on
the network architectures, services, and applications to be evaluated; it aims to allow clean-slate
designs to experiment with real users under real
conditions.
The key idea of GENI is to build multiple
virtualized slices out of the substrate for resource
sharing and experiments. It contains two key
pieces:
• Physical network substrates that are expandable building block components
• A global control and management framework that assembles the building blocks
together into a coherent facility
Thus, intuitively two kinds of activities will be
involved in GENI testbeds: one is deploying a
prototype testbed federating different small and
medium ones together (e.g., the OpenFlow
testbed for campus networks [8]); the other is to
run observable, controllable, and recordable
experiments on it.
There are several working groups concentrating on different areas, such as the control framework working group; GENI experiment workflow
and service working group; campus/operation,
management, integration, and security working
group; and instrumentation and management
working group.
The GENI generic control framework consists of several subsystems and corresponding
basic entities:
• Aggregate and components
• Clearinghouse
• Research organizations, including researchers and experiment tools
• Experiment support service
• “Opt-in” end users
• GENI operation and management
Clearinghouses from different organizations and
places (e.g., those from the United States and
European Union) can be connected through federation. By doing this, GENI not only federates
with identical “GENI-like” systems, but also with
any other system if they comply with a clearly
defined and relatively narrow set of interfaces
for federation. With these entities and subsystems, “slices” can be created on top of the
shared substrate for miscellaneous researcher-defined specific experiments, and end users can
“opt in” onto the GENI testbed accordingly.
GENI’s research and implementation plan
consists of multiple continuous spirals (currently
in spiral 3). Each spiral lasts for 12 months. Spiral 1 ran from 2008 to 2009; spiral 2 ran from
2009 to 2010; spiral 3 started in 2011. In spiral 1,
the primary goals were to demonstrate one or
more early prototypes of the GENI control
framework and end-to-end slice operation across
multiple technologies; there were five competing
approaches to the GENI control framework,
called “clusters.”
Cluster A was the Trial Integration Environment based on DETER (TIED) control framework focusing on federation, trust, and security.
It was a one-project cluster based on the CyberDefense Technology Experimental Research
(DETER) control framework by the University
of Southern California (USC)/ISI, which is an
individual “mini-GENI” testbed to demonstrate
federated and coordinated network provisioning.
Cluster A particularly aimed to provide usability
across multiple communities through federation.
The project delivered software “fedd” as the
implementation of the TIED federation architecture providing dynamic and on-demand federation, and interoperability across ProtoGENI,
GENIAPI, and non-GENI aggregates. It included
an Attribute Based Access Control (ABAC)
mechanism for large-scale distributed systems. It
created a federation with two other projects:
StarBED in Japan and ProtoGENI in the United States.
Cluster B was a control framework based on
PlanetLab implemented by Princeton University
emphasizing experiments with virtualized
machines over the Internet. By the end of spiral
2, it included at least 12 projects from different
universities and research institutes. The results
of these projects are to be integrated into the
PlanetLab testbed. PlanetLab provided “GENIwrapper” code for independent development of
an aggregate manager (AM) for Internet entities. A special “lightweight” protocol was introduced to interface PlanetLab and OpenFlow
equipment. Through these mechanisms, other
projects in the cluster can design their own substrates and component managers with different
capacities and features.
Cluster C was the ProtoGENI control framework by the University of Utah based on Emulab, emphasizing network control and
management. By the end of spiral 2, it consisted
of at least 20 projects. The cluster integrated
these existing and under-construction systems to
provide key GENI functions. The integration
included four key components: a backbone based
on Internet2; sliceable and programmable PCs
and NetFPGA cards; and subnets of wireless
and wired edge clusters. Cluster C so far is the
largest set of integrated projects in GENI.
Cluster D was Open Resource Control Architecture (ORCA) from Duke University and
RENCI focusing on resource allocation and
integration of sensor networks. By the end of
spiral 2, it consisted of five projects. ORCA tried
to include optical resources from the existing
Metro-Scale Optical Testbed (BEN). Different
from other clusters, the ORCA implementation
included the integration of wireless/sensor prototypes. It maintains a clearinghouse for the
testbeds under the ORCA control framework
through which it connects to the national backbone and is available to external researchers.
Cluster E was Open-Access Research Testbed
for Next-Generation Wireless Networks
(ORBIT) by Rutgers University focusing on
mobile and wireless testbed networks. It included three projects by the end of spiral 2. The
basic ORBIT did not include a full clearinghouse implementation. Cluster E tried to
research how mobile and wireless work can
affect and possibly be merged into the GENI
architecture. WiMAX is one of the wireless network prototypes in this cluster.
A more detailed description of the clusters
and their specific approaches and corresponding
features can be found in our previous survey
[18]. Even more details can be found from GENI
project websites and wikis [7].
We can see that spirals 1 and 2 integrated a very wide variety of testbeds into the GENI control framework. Spiral 2 was the second phase, aiming to move toward continuous experimentation. Key developments include improved integration of GENI prototypes; architecture, tools, and services enabling experiment instrumentation; interoperability across GENI prototypes; and researcher identity management. In spiral 3, the goal is to coordinate the design and deployment of a first GENI Instrumentation and Measurement Architecture. Supporting experimental use of GENI and making it easier to use are also key goals. Also, more backbone services and participants are expected to join the GENI framework in this spiral.
Another notable and unique characteristic offered by GENI is that instrumentation and measurement support have been designed into the system from the beginning, since the ultimate goal of GENI is to provide an open and extensible testbed for experimentation with various new Internet architectures.
RESEARCH PROJECTS FROM THE EUROPEAN UNION AND ASIA
The European Union has also initiated a bundle of research projects on future Internet architectures. In this section, we introduce the research organized under the European Seventh Framework Programme (FP7), along with that in Japan and China.
EUROPEAN UNION
The European Future Internet Assembly [19] (abbreviated FIA, as in the United States) is a collaboration between projects under FP7 on future Internet research. Currently, the FIA brings together about 150 projects that are part of FP7. These projects have a wide coverage, including the network of the future, cloud computing, the Internet of services, trustworthy information and communication technology (ICT), networked media and search systems, socio-economic aspects of the future Internet, application domains, and Future Internet Research and Experimentation (FIRE) [10]. The FIA maintains a European Future Internet Portal [20],
which is an important web portal for sharing
information and interaction among the participating projects. Multiple FIA working groups
have been formed to encourage collaboration
among projects.
Around 90 of these projects were
launched following the calls of FP7 under the
“Network of the Future” Objective 1.1. They can
be divided into three clusters: “Future Internet
Technologies (FI),” “Converged and Optical
Networks (CaON),” and “Radio Access and
Spectrum (RAS).” The total research funding
since 2008 is over €390 million. A subset of the
projects is shown in Table 2.
Table 2. EU research projects on future Internet.
Future Architectures and Technologies: 4WARD, TRILOGY, EIFFEL, SPARC, SENSEI, Socrates, CHANGE, PSIRP, etc.
Services, Software, and Virtualization: ALERT, FAST, PLAY, S-Cube, SLA@SOI, VISION Cloud, etc.
Network Media: 3DLife, COAST, COMET, FutureNEM, nextMEDIA, P2P-Next, etc.
Internet of Things: ASPIRE, COIN, CuteLoop, SYNERGY, etc.
Trustworthiness: ABC4Trust, AVANTSSAR, ECRYPT II, MASTER, uTRUSTit, etc.
Testbeds: FIRE, N4C, OPNEX, OneLAB2, PII, WISEBED, G-Lab, etc.
Others: HYDRA, INSPIRE, SOCIALNETS, etc.
A significant trait of the “Network of the
Future” [17] is that the research projects cover a
very wide range of topics and a number of commercial organizations, including traditional
telecommunication companies, participate in the
research consortiums. Since there are a large
number of projects, we selected a few representative ones and explain them in some detail.
They are all under FP7 based on a series of
design objectives categorized by the ICT challenge #1 of building “Pervasive and Trusted
Network and Service Infrastructure.”
Due to the large number of projects, for the
architecture research in this article, we selected
a project named 4WARD (Architecture and
Design for the Future Internet), and for the
testbed we selected FIRE. We selected them because FIRE is often deemed the European counterpart to GENI, and because 4WARD aims at a general architectural redesign of the Internet that we feel is representative of the rest; it also involves a large number of participating institutions.
In the following, we discuss these two projects briefly.
4WARD — 4WARD [9] is an EU FP7 project
on designing a future Internet architecture led
primarily by an industry consortium. The funding is over 45 million dollars for a 2-year period.
The key 4WARD design goals are:
• To create a new “network of information”
paradigm in which information objects have
their own identity and do not need to be
32
Communications
IEEE
bound to hosts (somewhat similar to the
goal of the NDN project)
• To design the network path to be an active
unit that can control itself and provide
resilience and failover, mobility, and secure
data transmission
• To devise “default-on” management capability that is an intrinsic part of the network
itself
• To provide dependable instantiation and
interoperation of different networks on a
single infrastructure.
Thus, on the one hand, 4WARD promotes the
innovations needed to improve a single network
architecture; on the other hand, it enables multiple specialized network architectures to work
together in an overall framework. There are five
task components in the 4WARD research:
• A general architecture and framework
• Dynamic mechanisms for securely sharing
resources in virtual networks
• “Default-on” network management system;
a communication path architecture with
multipath and mobility support
• Architecture for information-oriented networks
Note that 4WARD is one of many projects
under the FP7 framework on future Internet
architecture research. Readers can find more
information on a complete list of the projects
from [19]. Some typical projects focusing on different aspects of future architecture are listed in
Table 2.
Future Internet Research and Experimentation (FIRE) — FIRE [10] is one of the European Union’s research projects on testbeds and
is like a counterpart of GENI in the United
States. FIRE was started in 2006 in FP6 and has
continued through several consecutive cycles of
funding. FIRE involves efforts from both industry and academia. It is currently in its “third
wave” focusing on providing federation and sustainability between 2011 and 2012. Note that the
FIRE project’s research is built on the previous
work on the GEANT2 (Gigabit European Academic Networking Technology) project [11],
which is the infrastructure testbed connecting
over 3000 research organizations in Europe.
IEEE Communications Magazine • July 2011
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
A
BEMaGS
F
Communications
IEEE
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
FIRE has two interrelated dimensions:
• To support long-term experimentally driven
research on new paradigms and concepts
and architectures for the future Internet
• To build a large-scale experimentation facility by gradually federating existing and
future emerging testbeds
FIRE also expects not only to change the Internet in technical aspects but also in socio-economic terms by treating socio-economic
requirements in parallel with technical requirements.
A major goal of FIRE is federation, which by
definition is to unify different self-governing
testbeds by a central control entity under a common set of objectives. With this goal in mind,
the FIRE project can be clustered in a layered
way as depicted in Fig. 5. As shown in the figure,
it contains three basic clusters. The top-level
cluster consists of a bundle of novel individual
architectures for routing and transferring data.
The bottom cluster consists of projects providing
support for federation. In the middle is the federation cluster, which consists of the existing
testbeds to be federated. These existing small
and medium-sized testbeds can be federated
gradually to meet the requirements for emerging
future Internet technologies. Documents describing these sub-testbeds can be found on the FIRE
project website [10].
Figure 5. FIRE clustering of projects.
ASIA
Asian countries such as Japan and China also
have projects on future Internet architectures.
Japan — Japan has broad collaborations with
both the United States and European Union
regarding future Internet research. It participates
in PlanetLab in the United States, and the
testbed in Japan is also federated with the German G-Lab facility. The Japanese research program on future Internet architecture is called
New Generation Network (NWGN) sponsored
by the Japan National Institute of Information
and Communications Technology (NICT). The
Japanese research community defines the clean-slate architecture design as "new generation" and
the general IP-based converged design as “next
generation” (NXGN) design. NWGN started in
June 2010 and expects to change the network
technologies and Internet community with broad
impact in both the short term (to 2015) and long
term (to 2050). Like the projects in the United
States and European Union, NWGN consists of
a series of sub-projects collaborated on by
academia and industry. The sub-projects range
from architecture designs, testbed designs, virtualization laboratories, and wireless testbeds to
data-centric networking, service-oriented networks, advanced mobility management over network virtualization, and green computing. Rather
than enumerating all projects, we briefly discuss
the architecture project AKARI [12] and the
testbed projects JGN2plus [13] and JGN-X (JGN
stands for Japan Gigabit Network). The reason
we selected these projects is similar to the reason
we selected FIA and GENI. AKARI is so far the
biggest architectural research project in Japan;
JGN2plus and JGN-X are the testbed research
counterparts to GENI and FIRE.
AKARI: AKARI means “a small light in the
darkness.” The goal of AKARI is a clean-slate
approach to design a network architecture of
the future based on three key design principles:
• “Crystal synthesis,” which means to keep
the architecture design simple even when
integrating different functions
• “Reality connected,” which separates the
physical and logical structures
• “Sustainable and evolutional,” which means
it should embed the “self-*” properties
(self-organizing, self-distributed, self-emergent, etc.), and be flexible and open to
future changes
AKARI is supposed to assemble five subarchitecture models into a blueprint for NWGN:
• An integrated subarchitecture based on a
layered model with cross-layer collaboration; logical identity separate from the data
plane (a kind of ID/locator split structure)
• A subarchitecture that simplifies the layered
model by reducing duplicated functions in
lower layers
• A subarchitecture for quality of service
(QoS) guarantee and multicast
• A subarchitecture to connect heterogeneous
networks through virtualization
• A mobile access subarchitecture for sensor
information distribution and regional adaptive services
AKARI is currently in the process of a proof-of-concept design and expects to produce a blueprint
in 2011. Through systematic testbed construction
and experimentations, it aims to establish a new
architecture ready for public deployment by
2016.
JGN2plus and JGN-X: JGN2plus is the
nationwide testbed for applications and networks
in Japan, and also the testbed for international
federation. It includes broad collaboration from
both industry and academia. It evolved as JGN,
migrated to JGN II in 2004, and then to
JGN2plus in 2008. From 2011, the testbed is
under JGN-X, which aims to be the real
NWGN testbed to deploy and validate AKARI
research results. JGN2plus provides four kinds
of services:
IEEE Communications Magazine • July 2011
Communications
A
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
33
A
BEMaGS
F
Communications
IEEE
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
Any architecture that
requires investment
without immediate
payoff is bound to
fail. Of course, the
payoff will increase
as the deployment
of the new
technology increases,
economies of scale
reduce the cost, and
eventually the old
architecture
deployed base will
diminish and
disappear.
34
Communications
IEEE
• Layer 3 (L3) IP connection
• L2 Ethernet connection
• Optical testbed
• Overlay service platform provisioning
There are also five subtopics in the research
of JGN2plus:
• NWGN service platform fundamental technologies
• NWGN service testbed federation technology
• Middleware and application of lightpath
NWGN
• Component establishment technologies for
NWGN operation
• Verification of new technologies for international operation
JGN2plus expects to create collaboration among
industry, academia, and government for NWGN
experiments. It also aims to contribute to human
resource development in the ICT area via these
experiments.
China — The research projects on future Internet in China are mostly under the 863 Program,
973 Program, and "12th Five-Year Plan Projects" administered by the Ministry of Science
and Technology (MOST) of China. Currently
there are several ongoing research projects,
which include:
• New Generation Trustworthy Networks
(from 2007 to 2010)
• New Generation Network Architectures
(from 2009 to 2013)
• Future Internet Architectures (from 2011 to
2015)
Project 1 was still IP-based network research instead
of a clean-slate future Internet. It consists of
research sub-projects on new network architecture, next generation broadcasting (NGB), new
network services, a national testbed for new
generation networks and services, new routing/
switching technology, a new optical transmission network, and low-cost hybrid access equipment.
Besides the research projects on future Internet architecture, there are also ongoing research
projects for building a China Next Generation
Internet (CNGI) testbed. It is based on the previous infrastructure network testbed of the
China Education and Research Network (CERNET [14] and CERNET2 [15]) and the China
Science and Technology Network (CSTNET). A
terabit optical, terabit WDM, terabit router plus
IPTV testbed, called 3T-NET, was also announced in July 2009 as NGB. The testbed
projects are mostly industry-oriented, with specific interest in IPv6-related protocols and applications.
We observed that the current future Internet
architecture research in China leans heavily
toward IPv6-related testbeds, which is relatively
short-term. To some extent, it reveals the pain
China felt due to the collision between the
extreme shortage of IPv4 address space in
China and the ever-expanding demands from
increasing customers and novel services.
Longer-term projects on innovative architectural research are still in their infancy
compared to those of the United States and
European Union.
A
BEMaGS
F
DISCUSSIONS AND PERSPECTIVES
Having presented a variety of research projects,
we find that there are several issues worth discussing. In this section, we give our perspective
regarding these issues. Of course, there is no
agreement among researchers regarding these
perspectives, and none is implied.
Clean-slate vs. evolutionary: Clean-slate
designs impose no restrictions or assumptions
on the architectural design. The key idea is not
to be subjected to the limitations of the existing
Internet architecture. It is also called “new generation” by Japanese and Chinese researchers.
While the architectures can be revolutionary,
their implementation has to be evolutionary.
Today, the Internet connects billions of nodes
and has millions of applications that have been
developed over the last 40 years. We believe
any new architecture should be designed with
this reality in mind; otherwise, it is bound to
fail. Legacy nodes and applications should be
able to communicate over the new architecture
without change (with adapter nodes at the
boundary), and new nodes and applications
should similarly be able to communicate over
the existing Internet architecture. Of course,
the services available to such users will be an
intersection of those offered by both architectures. Also, the new architecture may provide
adaptation facilities for legacy devices at their
boundary points. Various versions of Ethernet
are good examples of such backward compatibility. Some variations of IP are potential examples of missing this principle.
New architecture deployment will start in a
very small scale compared to the current Internet. These early adopters should have economic
incentives for change. Any architecture that
requires investment without immediate payoff is
bound to fail. Of course, the payoff will increase
as the deployment of the new technology increases, economies of scale reduce the cost, and eventually the deployed base of the old architecture will
diminish and disappear.
Integration of security, mobility, and other
functionalities: It is well understood and agreed
that security, mobility, self-organization, disruption tolerance, and so on are some of the key
required features for the future Internet. However, most of the projects, even the collaborative ones such as those in the FIA program, put more
emphasis on a specific attribute or a specific set
of problems. It seems to be a tough problem to
handle many challenges in a single architecture
design. Currently, collaborative projects such as FIA are trying to integrate miscellaneous previous research results into a coherent architecture while balancing some of the issues.
Although different projects have different
emphases, it is beneficial to create such diversity and allow a set of integrated architectures
to potentially compete in the future. However,
we believe that there is still a long way to go
before there is a next-generation architecture
unifying these different lines of designs. For
example, we observe that the four U.S. FIA
projects concentrate on four different specific
issues. Self-certifying and hash-based addresses
are effective tools for security. However, security needs much more consideration on both
micro and macro scopes. Content- and information-centric features are also important trends,
but how to integrate these differing requirements and resulting architectures is still a pending problem. We expect that more integration
research will be required when such issues
emerge in the future. It is therefore desirable
for different projects to create some synergy for
the integration process.
Architectures built around people instead of machines: It is widely recognized that the usage pattern of the Internet has changed, and the trend of building future Internet architectures around content, data, and users appears justifiable and promising. Changes in design goals naturally lead to changes in design principles, and different patterns may emerge without further synthesis. Existing projects on future Internet architectures sort out different principles according to their own design emphases. From our perspective, it is essential to form a systematic and comprehensive theory during the research process rather than designing based only on experience. It may take several iterations between theoretical refinement and practical experience to achieve a sound architecture, and we believe more research in this area would be valuable for future Internet research.
Interfaces among stakeholders: Future Internet architectures are required to provide extensible, flexible, and explicit interfaces among multiple stakeholders (users, Internet service providers, application service providers, data owners, and governments) to allow interaction and to enforce policies and even laws. A typical example is Facebook, which creates a complex situation for data, privacy, and social relationships. Societal and economic components have become indispensable factors in the future Internet. The transition from the academic Internet to a multifunctional, business-involved future Internet places much higher requirements on architectural support to regulate and balance the interests of all stakeholders, in both technical and non-technical aspects. The deep merging of the Internet into everyone’s daily life has made such efforts increasingly urgent and important. From our perspective, significant research efforts are still needed in aspects such as economics, society, and law.
Experimental facilities: Most of the current
testbeds for future Internet architecture research
in different countries are results of previous
research projects not related to future Internet
architectures. The networks use different technologies and have different capabilities.
Although the federation efforts are meaningful,
they may be restricted in both manageability and
capability by such diversity. Testbeds from different countries are also generally tailored or specialized for the architectural design projects of
those countries, with different features and
emphases. Federation and creating synergy
among such testbeds may be challenging. From
our perspective, such challenges also mean a
valuable opportunity for research on sharing and
virtualization over diverse platforms.
Service delivery networks: The key trend
driving the growth of the Internet over the last
decade is the profusion of services over the
Internet. Google, Facebook, YouTube, and similar services form the bulk of Internet traffic.
Cloud computing and the proliferation of mobile devices have led to further growth in services
over the Internet. Therefore, Internet 3.0 [16],
which is a project in which the authors of this
article are involved, includes developing an open
and secure service delivery network (SDN) architecture. This will allow telecommunication carriers to offer SDN services that can be used by
many application service providers (ASPs). For
example, an ASP wanting to use multiple cloud
computing centers could use it to set up its own
worldwide application-specific network and customize it by a rule-based delegation mechanism.
These rules will allow ASPs to share an SDN
and achieve the features required for widely distributed services, such as load balancing, fault
tolerance, replication, multihoming, mobility,
and strong security, customized for their application. One way to summarize this point is that
service delivery should form the narrow waist of
the Internet (Fig. 1), and content and IP are
special cases of service delivery.
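To make the rule-based delegation idea concrete, here is a minimal Python sketch of how an ASP might attach policies to its slice of a shared service delivery network; all class and field names are hypothetical rather than taken from the Internet 3.0 design.

```python
# Hypothetical sketch of rule-based delegation in a shared service
# delivery network (SDN). Names and fields are illustrative only;
# they are not drawn from the Internet 3.0 project itself.
from dataclasses import dataclass, field


@dataclass
class DelegationRule:
    """A policy an ASP delegates to the SDN operator for its own slice."""
    match: dict        # e.g., {"service": "video", "region": "EU"}
    action: str        # e.g., "replicate", "reroute", "load_balance"
    parameters: dict = field(default_factory=dict)


@dataclass
class SDNSlice:
    """Per-ASP view of the shared service delivery network."""
    asp_name: str
    rules: list = field(default_factory=list)

    def add_rule(self, rule: DelegationRule) -> None:
        self.rules.append(rule)

    def decide(self, request: dict) -> str:
        """Return the first action whose match conditions fit the request."""
        for rule in self.rules:
            if all(request.get(k) == v for k, v in rule.match.items()):
                return rule.action
        return "default_forward"


# Example: an ASP using multiple cloud centers customizes its slice.
slice_ = SDNSlice("example-asp")
slice_.add_rule(DelegationRule({"service": "video", "region": "EU"},
                               "replicate", {"copies": 2}))
print(slice_.decide({"service": "video", "region": "EU"}))  # replicate
```

The point is only that delegation can be expressed as declarative rules that the carrier evaluates on the ASP’s behalf.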
SUMMARY
In this article, we present a survey of the current
research efforts on future Internet architectures.
It is not meant to be a complete enumeration of
all such projects. Instead, we focus on a series of
representative research projects. Research programs and efforts from the United States, European Union, and Asia are discussed. By doing this, we hope to draw an approximate overall picture of the current state of research in this area.
REFERENCES
[1] NSF Future Internet Architecture Project, http://www.nets-fia.net/.
[2] NSF NeTS FIND Initiative, http://www.nets-find.net.
[3] Named Data Networking Project, http://www.named-data.net.
[4] MobilityFirst Future Internet Architecture Project, http://mobilityfirst.winlab.rutgers.edu/.
[5] NEBULA Project, http://nebula.cis.upenn.edu.
[6] eXpressive Internet Architecture Project, http://www.cs.cmu.edu/~xia/.
[7] Global Environment for Network Innovations (GENI) Project, http://www.geni.net/.
[8] OpenFlow Switch Consortium, http://www.openflowswitch.org/.
[9] The FP7 4WARD Project, http://www.4ward-project.eu/.
[10] FIRE: Future Internet Research and Experimentation, http://cordis.europa.eu/fp7/ict/fire/.
[11] GEANT2 Project, http://www.geant2.net/.
[12] AKARI Architecture Design Project, http://akari-project.nict.go.jp/eng/index2.htm.
[13] JGN2plus - Advanced Testbed Network for R&D, http://www.jgn.nict.go.jp/english/index.html.
[14] China Education and Research Network, http://www.edu.cn/english/.
[15] CERNET2 Project, http://www.cernet2.edu.cn/index_en.htm.
[16] Internet 3.0 Project, http://www1.cse.wustl.edu/~jain/research/index.html.
[17] The Network of the Future Projects of EU FP7, http://cordis.europa.eu/fp7/ict/future-networks/home_en.html.
[18] S. Paul, J. Pan, and R. Jain, “Architectures for the Future Networks and the Next Generation Internet: A Survey,” Comp. Commun., vol. 34, no. 1, Jan. 2011, pp. 2–42.
[19] Future Internet Assembly, http://www.future-internet.eu/home/future-internet-assembly.html.
BIOGRAPHIES
JIANLI PAN [S] ([email protected]) received his B.E. in 2001 from Nanjing University of Posts and Telecommunications (NUPT), China, and his M.S. in 2004 from Beijing University of Posts and Telecommunications (BUPT), China. He is currently a Ph.D. student in the Department of Computer Science and Engineering at Washington University in Saint Louis, Missouri. His current research is on future Internet architecture and related topics such as routing scalability, mobility, multihoming, and Internet evolution. His recent research interests also include green building in the networking context.

SUBHARTHI PAUL [S] ([email protected]) received his B.S. degree from the University of Delhi, India, and his Master’s degree in software engineering from Jadavpur University, Kolkata, India. He is presently a doctoral student in the Department of Computer Science and Engineering at Washington University. His primary research interests are in the area of future Internet architectures.

RAJ JAIN [F] ([email protected]) is a Fellow of ACM, a winner of the ACM SIGCOMM Test of Time award and the CDAC-ACCS Foundation Award 2009, and ranks among the top 50 in Citeseer’s list of Most Cited Authors in Computer Science. He is currently a professor in the Department of Computer Science and Engineering at Washington University. Previously, he was one of the co-founders of Nayna Networks, Inc., a next-generation telecommunications systems company in San Jose, California. He was a senior consulting engineer at Digital Equipment Corporation in Littleton, Massachusetts, and then a professor of computer and information sciences at Ohio State University, Columbus. He is the author of Art of Computer Systems Performance Analysis, which won the 1991 Best Advanced How-to Book, Systems award from the Computer Press Association. His fourth book, High-Performance TCP/IP: Concepts, Issues, and Solutions, was published by Prentice Hall in November 2003. Recently, he co-edited Quality of Service Architectures for Wireless Networks: Performance Metrics and Management, published in April 2010.
FUTURE INTERNET ARCHITECTURES: DESIGN AND
DEPLOYMENT PERSPECTIVES
Loci of Competition for
Future Internet Architectures
John Chuang, UC Berkeley
ABSTRACT
Designing for competition is an important
consideration for the design of future Internet
architectures. Network architects should systematically consider the loci of competition in any
proposed network architecture. To be economically sustainable, network architectures should
encourage competition within each locus, anticipate and manage the interactions between the
loci, and be adaptable to evolution in the loci.
Given the longevity of network architectures relative to network technologies and applications, it
is important to ensure that competition is not
unnecessarily foreclosed at any particular locus
of competition.
DESIGN FOR COMPETITION
In contemplating future Internet architectures,
the networking community has identified economic viability as a key architectural requirement,
along with other requirements such as security,
scalability, and manageability. This is in recognition of the fact that any global-scale distributed
communications infrastructure requires significant
capital investments, and incentives must exist for
the network owners to invest in new facilities and
services in a sustainable fashion.
It is widely accepted that competition is
important for promoting the long-term economic
viability of the network. This is because competition imposes market discipline on the network
operators and service providers, providing them
with incentives for continual innovation and
investment in their facilities. Conversely, given a
lack of competition, a monopoly provider may
become complacent and fail to invest in the
long-term health of the network. However, competition does not occur automatically in a network. Therefore, “design for competition” must
be an important design principle for any future
Internet architecture.
Designing for competition can also promote
the security of a network by encouraging diversity in all levels of the architecture. Robust competition preempts the vulnerabilities of a
monoculture [1], including the increased likelihood of correlated failures or cascade failures.
Diversity should be sought not just in hardware
and software levels, but also in the operators of
networks, the providers of network services, and
the policies for network management and control.
As a design principle, design for competition
is similar to but different from the “design for
choice” principle as articulated by Clark et al. in
their influential tussles paper [2]. By designing
for choice, Clark et al. refer to the ability of the
architecture “to permit different players to
express their preferences.” The design for competition principle extends this to the ability of
the architecture to permit different players to
express their preferences for services by different
providers. For example, in thinking about architectural support for user-directed routing, the
design for choice principle might suggest that
route diversity be made available to end users by
a provider over its network. The design for competition principle, on the other hand, might lead
to a stronger requirement that the end users can
choose routes offered by multiple providers
and/or over multiple networks.
It should be clear that design for competition
does not imply the mandating of competition.
Instead, it is an argument against the unnecessary foreclosure of competition. Design for competition means the architecture should be
designed to allow for the possibility of supporting multiple providers of a given service, even if
the option of having multiple actual providers is
not exercised from the start. This is because a
network architecture usually outlives the network technologies and network applications.
Sometimes, competition may not be feasible or
desirable at a particular locus of the architecture. For example, the current generation of network technologies may be subject to strong
economies of scale, such that the cost inefficiencies due to competition dominate the benefits
obtained from market competition. However,
the introduction of new technologies or the
adoption of new applications may lead to a reevaluation of the merits of actual competition at
that locus at a future point in time. Put another
way, a network architecture that is designed for
competition would allow innovation, in the form
of new entrants and/or new services, to easily be
introduced into the network, to compete with
existing offerings in the network.
LOCI OF COMPETITION
Designing for competition in a network architecture is not a straightforward exercise. In facilitating a competitive networking environment, one
needs to systematically think through the what,
where, who (by whom and for whom), when, and
how of supporting competition.
Network architects can begin by identifying the
potential loci of competition in the architecture,
that is, candidate locations in a network architecture where a multiplicity of service providers can
be supported. Based on the characteristics of prevailing technologies and uses, the architects can
also gauge the level of competition that may arise
organically for each locus. For example, loci subject to high fixed costs and low marginal costs
(i.e., strong economies of scale), strong network
effects, or high switching costs should be considered competition-challenged; that is, they tend to
face greater obstacles to competition.
In the event that a design calls for the preclusion of choice or competition at a particular
locus, this decision needs to be clearly justified.
In effect, a monopoly is being hard-coded into
the architecture.
Recognizing that “loci are not silos,” network
architects must also anticipate and understand
the economic relationships that may exist
between the loci of competition. For example, if
a firm operates in two vertically related loci (i.e.,
vertical integration), its market power in one
locus may influence the level of competition in
the other locus. As another example, two adjacent loci may both be dominated by a small number of firms (i.e., bilateral oligopolies), leading to
strategic behavior that may impede innovation on
both sides. A critical job of the network architects is to define and develop interfaces to manage and lubricate both the technical and market
transactions between the loci.
LOCI PAST, PRESENT, AND FUTURE
In the early days of telephony in the United
States, with AT&T controlling the long distance
lines, the local loops, and even the customer
premises equipment (CPE) leased to customers,
there was effectively only a single locus of competition. A potential competitor would have had
to construct an alternate nationwide network
capable of providing end-to-end service in order
to compete effectively against AT&T. In reality,
strong economies of scale meant it was economically infeasible for such an alternate network to
be deployed. Hence, AT&T remained a dominant monopoly, and had to be held in check
through regulation for many decades. However,
starting in the 1960s and 1970s, supply-side and
demand-side changes, in the form of improvements in communication technologies and
increasing business demands for telephony services, set the stage for multiple loci of competition to emerge in the telephony network
architecture. In particular, the adoption of FCC
Part 68 rules in 1975 and the divestiture of
AT&T in 1984 made concrete the existence of
three separate (but vertically related) loci: CPE,
local service, and long distance service. Through
explicit intervention by the executive and judicial
branches of the government, firms could then compete in each locus individually, and the lowering of entry barriers led to significant innovations in
each locus. Even so, the loci remain vertically
related, and under the 1996 Telecommunications
Act, firms that are dominant in one locus can
compete in other loci. Therefore, government
oversight remains necessary to this day.
For the current Internet architecture, multiple
loci of competition can also be identified. In the
topological dimension, the architecture includes
long distance (backbone) networks, local access
networks, and end hosts. In addition, the layering
principle on which the Internet is built means
that separate loci of competition can also be
identified for the physical, network, and application layers of the Internet. Further evolution of
the architecture in support of video distribution,
cloud computing, and other emerging applications may lead to additional loci of competition
at datacenters, content distribution networks
(CDNs), authentication authorities, and so on.
In contemplating future Internet architectures, it is not unreasonable to expect some loci
of competition to look just like those of the current architecture, and some to be entirely different. In addition to devices, conduits, services,
applications, and data, even the network architectures themselves can be potential loci of competition. The case has been made, on multiple
occasions, that the incumbency of the current
IPv4 network architecture has impeded innovation in the network itself [3]. Through network
virtualization techniques, it is possible to allow
multiple network architectures to be concurrently
deployed over a shared physical infrastructure, to
compete with one another. Then end users can
vote with their feet (or with their data packets)
for their favorite, be it IPv4, IPv6, IPvN, or some
other network architecture or protocol suite.
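A minimal sketch, assuming a hypothetical per-packet slice tag, of how a virtualized substrate could let users “vote with their data packets” among concurrently deployed architectures; the slice names and handlers are illustrative only.

```python
# Hypothetical sketch of "voting with data packets": a shared substrate
# dispatches each packet to whichever virtualized architecture stack its
# slice tag names. Slice names and handlers are illustrative only.
def handle_ipv4(payload: bytes) -> str:
    return "delivered over IPv4 slice"


def handle_ipv6(payload: bytes) -> str:
    return "delivered over IPv6 slice"


SLICES = {"ipv4": handle_ipv4, "ipv6": handle_ipv6}


def dispatch(packet: dict) -> str:
    """Route a packet to the virtual network stack its slice tag selects."""
    handler = SLICES.get(packet["slice"])
    if handler is None:
        raise ValueError(f"no such architecture slice: {packet['slice']}")
    return handler(packet["payload"])


print(dispatch({"slice": "ipv6", "payload": b"hello"}))
```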
A 2 × 2 EXAMPLE
Network architectures, present and future, can
have a large number of loci. As an illustrative
example, let us consider a simple network with
four loci of competition organized along two
dimensions. In the topological dimension, we can
separate the network into the edge (E) and the
core (C). The edge provides access to end hosts,
and is commonly referred to as “the last mile” or
“the local loop.” The core provides connectivity
between the edge networks over long distances,
and is commonly referred to as “the backbone”.
In the layering dimension, we can view the network as consisting of a physical layer (P) that
carries bits over a physical infrastructure, and a
logical layer (L) that sits on top of the physical
layer and provides network services to end users.
With this simple 2 × 2 matrix (Table 1), we can
articulate four different loci of competition.
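As an illustrative aid (the attribute values are invented, not drawn from the article), the 2 × 2 matrix can be encoded as a small data structure so an architect can flag which loci are likely to be competition-challenged:

```python
# Illustrative encoding of the 2 x 2 loci-of-competition matrix (Table 1).
# The attribute values are hypothetical, chosen only to show how an
# architect might reason systematically about each locus.
LOCI = {
    ("physical", "edge"): {"fixed_cost": "high", "switching_cost": "high"},
    ("physical", "core"): {"fixed_cost": "high", "switching_cost": "medium"},
    ("logical", "edge"):  {"fixed_cost": "low",  "switching_cost": "medium"},
    ("logical", "core"):  {"fixed_cost": "low",  "switching_cost": "low"},
}


def competition_challenged(attrs: dict) -> bool:
    """A locus with strong scale economies or high switching costs
    tends to face greater obstacles to competition."""
    return attrs["fixed_cost"] == "high" or attrs["switching_cost"] == "high"


for (layer, topo), attrs in LOCI.items():
    flag = "competition-challenged" if competition_challenged(attrs) else "competitive"
    print(f"{layer}-{topo}: {flag}")
```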
In the physical-edge (P-E) cell of the matrix,
competition is manifested in the form of multiple
physical connections (wired or wireless) to the
end hosts. Telecommunications regulators use
the term facilities-based competition to describe
competition in this locus. The build-out of new
facilities typically involves high up-front costs for
trenching, erection of towers, and/or acquisition
of radio spectrum. Therefore, the barriers to
entry are usually high, and the number of parallel
facilities is usually limited. However, once the
Table 1. Loci of competition in a stylized 2 × 2 network architecture:
Logical (L) × Edge (E): number of access service providers.
Logical (L) × Core (C): number and coverage of transit service providers.
Physical (P) × Edge (E): number of wired and wireless “last-mile” conduits to each end host.
Physical (P) × Core (C): number and coverage of wide-area physical conduits.
facilities are built, the marginal cost of delivering
data is very low, so this part of the industry is
said to exhibit strong economies of scale.
In the logical-edge (L-E) cell of the matrix,
multiple firms may compete in providing network
access services to end users. The providers may
offer services over different physical networks or
over the same physical network. The providers
may offer similar services (e.g., IPv4) and compete on quality and/or cost, or may offer different
services for different segments of the market (e.g.,
virtual private networking [VPN] for enterprises).
It is not uncommon to find a single firm operating in both the P-E and L-E cells of the matrix: a
firm offers network service to end users over a
physical infrastructure it owns and operates.
In the physical-core (P-C) cell of the matrix,
we can have multiple firms who deploy physical
infrastructure for wide-area bit transport, such as
fiber, submarine cables, terrestrial radio, and
satellite links. To the extent that there is overlap
in geographic coverage, the firms may compete
for business with one another.
Finally, in the logical-core (L-C) cell of the
matrix, firms might compete with one another for
transit customers while at the same time interconnecting with one another to provide global connectivity for their customers. Once again, it is possible,
although not necessary, for a firm to be operating
in the P-C and L-C cells of the matrix simultaneously. For that matter, it is even possible for a firm
to be operating in all four cells of the matrix.
The presence of competition at these four loci
directly translates into choice available to the end
users. First, they may choose among different P-E
links based on requirements for mobility, bandwidth, and considerations of cost. End users may
even choose to multihome (i.e., connect to multiple
physical networks at the same time) for availability
or redundancy reasons. The end users can also
choose among different L-E providers, with the services offered over different physical networks or
over the same physical network. Furthermore, if the
network architecture supports some form of user-directed routing, the end users may be able to exercise choice between different L-C and possibly even
P-C options. In telephony, for example, consumers
can select long-distance carriers on a per-call basis
today by dialing additional prefixes. In sum, the end
user can, in theory, exercise (or delegate) choice in
all four cells of the matrix.
LOCI ARE NOT SILOS
While we can identify multiple loci of competition in a network architecture, it would be a mistake to think that we can design for competition
one locus at a time, ignoring any interactions
between the loci. In reality, there are many different ways through which competition in one
locus can affect that in another locus.
SUBSTITUTES
First, consumer choice in one locus may sometimes serve as a substitute for choice in another
locus. For example, if the network architecture
seeks to promote route diversity, it could do so
through the support of multihoming (first-hop diversity), and/or through the support of user-directed routing (ith-hop diversity). Multihoming
requires P-E (and possibly L-E) diversity, while
user-directed routing is often targeted at L-C
(and possibly P-C) diversity. To the extent that
end users can access meaningful routing alternatives, multihoming and user-directed routing are
imperfect substitutes, so a network architect can
weigh the relative costs and benefits of supporting
one or both mechanisms in his/her architectural
design, and justify his/her design accordingly.
P-E competition (facilities-based competition)
and L-E competition (e.g., through open access
regulation) is another good example of potential
substitutes in the architecture. In the 1990s, when
facilities-based competition was considered weak
in the United States, regulators and legislators
pushed for L-E competition by requiring the
physical network operators to open up access to
their facilities (e.g., through unbundled network
elements) to competing service providers. As a
result, competitive local exchange carriers
(CLECs) were able to offer alternative L-E service to consumers by gaining access to the existing
local loop from the incumbent local exchange carriers (ILECs) such as the Baby Bells, rather than
build out their own physical network. In response,
the ILECs put up non-technical obstacles to
access their facilities (e.g., switches in central
offices, cell towers) in their efforts to frustrate
competition in the L-E locus. In retrospect, one
can view the CLEC experiment as a failure: open
access may relieve the pressure for facilities-based
competition, but it is not an effective substitute
for P-E competition, as a monopoly P-E operator
can still tilt the playing field in the L-E locus.
VERTICAL INTEGRATION
A network architecture should not preclude a single firm from operating in more than one locus of
the competition matrix. A firm may be vertically
integrated by operating in both the L-E and P-E
loci (e.g., Comcast), in both the L-C and P-C loci
(e.g., Level 3), or even in all four loci (e.g.,
AT&T). Once vertical integration comes into
play, however, a network architect needs to be
cognizant of the competition dynamics that arise.
The benefit of vertical integration derives
from the fact that an integrated firm (e.g., one
operating in both L-E and P-E) may provide service to consumers more cheaply than two independent firms operating in L-E and P-E
separately. This economies of scope cost savings
may come from a variety of sources (e.g., joint
facilities operation, billing, customer service).
For example, in Fig. 1b, the vertically integrated
provider may be able to provide the same service
at a lower cost, and therefore offer it at a lower
price, than two separate providers at the logical
and physical layers.
To the extent that economies of scope are
strong, vertical integration is socially desirable,
and the industry structure may be dominated by
vertically integrated firms, as illustrated in Fig.
1c. From a consumer’s perspective, there is still
effective choice in providers. However, this
industry structure may discourage innovation at
one of the loci. For example, a new firm with an
innovation in the L-E locus may not be able to
offer the new service unless it also enters the P-E market, which may have a high entry barrier.
Furthermore, vertical integration can be detrimental to competition, and therefore social welfare, if it allows a firm to use its dominant position
(i.e., market power) in one locus to compete
unfairly in another locus. Figure 1d offers an
example where a vertically integrated firm enjoys
a monopoly position in the P-E market, and has
the ability to exercise its monopoly power to frustrate competition in the L-E market.
In our earlier example, the Baby Bells (e.g.,
Pacific Bell, Bell Atlantic) were vertically integrated ILECs that held monopoly positions in
their respective P-E markets, and were accused
of unfairly competing against the CLECs in the
L-E market.
In an even earlier example, AT&T prior to
the 1984 divestiture was vertically integrated
across the local and long-distance telephony markets, and was accused of using its monopoly
power in the local telephony market to compete
unfairly against MCI, Sprint, and other competitors in the long-distance telephony market. Even
after the competing long-distance carriers successfully sued to gain access to interconnection
with AT&T’s local networks, AT&T was still able
to tilt the playing field in its favor by effecting
hidden subsidies from its local service to its long-distance service. It eventually took antitrust
action by the U.S. Department of Justice to force
a “vertical disintegration” of AT&T, in 1984, into
separate firms for local and long-distance service.
The story continues in the 1990s and 2000s,
when the digitization of voice and video, together
with the widespread availability of mobile data networks, meant that local telephony operators no
longer had a monopoly in the P-E market. Consequently, anti-trust objections to vertical integration
no longer hold, and vertical re-integration occurred.
Given the historical evolution of the industry,
it should be clear that the current market structure should not be assumed as the final equilibrium state of the industry. While multiple
competitors occupy the P-E market today, continuing technological advances and/or changing
consumer requirements may once again shift this
locus toward a monopolistic market in the future.
TYING
The designation of AT&T as the exclusive carrier in
the United States for the iconic Apple iPhone when
it was first launched in 2007 is a notable recent
example of tying. By selling one product or service
only in conjunction with another product or service,
tying can lead to competition dynamics in two loci.
Similar to vertical integration, tying is not a major
concern if consumers have choices in alternative
products and services in both loci, or if potential
entrants do not face high entry barriers into either
locus. In the case of iPhone/AT&T, consumers have
Figure 1. Examples of vertical industry structures: a) no vertical integration — users choose logical layer (green) and physical layer (white) providers separately; b) with vertical integration, users can either choose logical and physical layer providers separately, or choose an integrated provider; c) all firms vertically integrated — users choose one integrated provider; there is a higher entry barrier for new entrants; d) competition at the logical layer with monopoly at the physical layer — the user has no choice in physical layer service; a vertically integrated firm has control over whether or how the user has choice in logical layer service.
ample alternatives for both device and carrier, so
tying is not considered problematic from an antitrust perspective. On the other hand, should one or
both of the tying firms enjoy sufficient market power
in their loci, the practice may in fact be enjoined by
antitrust laws (e.g., the Sherman Antitrust Act and
the Clayton Act in the United States).
For the 700 MHz spectrum auction in the
United States in 2008, the FCC adopted “open
device” and “open application” rules for the
highly sought-after “C” Block. The rules effectively preclude Verizon, the eventual auction winner, from engaging in tying of devices or
applications to services offered over its 700 MHz
network. Given the ability of the winner to build
out a high-quality nationwide network using the
spectrum, the open rules were believed to
encourage greater innovation in the loci of both
mobile devices and applications.
DELEGATION, GATEKEEPING, AND
NETWORK NEUTRALITY
Both design for choice and design for competition principles stipulate that users be permitted
to express their preferences for services. In a
general sense, this may be realized by an interface that allows users to make selections in each
of the loci of competition in the architecture.
Some users or applications will clearly benefit
from the ability to exercise full control over the
selection of services in all loci. However, it is not
clear that, even in a simple 2 × 2 architecture, most
or many users will actually wish to make separate,
explicit selections for each of the loci. Instead, it is
more realistic to expect a typical user to:
• Select a service provider in one of the loci,
relying on the service provider to select services in the other loci
• Delegate the service selection decisions to a third-party broker, who will compose a service package out of options in the various loci to match the preferences of the user

Figure 2. Examples of delegation and gatekeeping. Selections made by user, broker, and gatekeeper are labeled U, B, and G, respectively: a) mix and match — user selects logical (green) and physical (white) layer services; b) delegation — user delegates choice to a third-party broker; c) physical gatekeeper — user selects physical layer service, which in turn selects logical layer services; d) logical gatekeeper — user selects logical layer service, which in turn selects physical layer services.
Figure 2 illustrates, for a simple two-layer
network architecture example, how delegation
may be differently realized. Figure 2a shows, in
the absence of delegation, how a user will have
to explicitly select providers in both the physical
and logical layers. In Fig. 2b, the user selects a
third-party broker, and delegates the selection of
physical and logical services to the broker. In
Fig. 2c, the user selects a physical layer provider,
and relies on the latter to select logical layer services on its behalf. In this case, the physical layer
provider serves as the gatekeeper to logical layer
services. Finally, in Fig. 2d, the user selects a
logical layer provider, and relies on the latter to
select physical layer services on its behalf. The
logical layer provider serves as the gatekeeper to
the physical layer services.
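A minimal sketch of the four delegation patterns of Fig. 2, assuming a hypothetical interface that records which party selects each layer’s service; the mode names are invented for illustration.

```python
# Minimal sketch of the four delegation patterns in Fig. 2. The enum
# values and mode names are hypothetical; the point is only that an
# architectural interface can record *who* selects each layer's service.
from enum import Enum


class Selector(Enum):
    USER = "U"
    BROKER = "B"
    GATEKEEPER = "G"


def service_composition(mode: str) -> dict:
    """Return which party selects the logical and physical layer services."""
    patterns = {
        "mix_and_match":       {"logical": Selector.USER, "physical": Selector.USER},
        "broker_delegation":   {"logical": Selector.BROKER, "physical": Selector.BROKER},
        "physical_gatekeeper": {"logical": Selector.GATEKEEPER, "physical": Selector.USER},
        "logical_gatekeeper":  {"logical": Selector.USER, "physical": Selector.GATEKEEPER},
    }
    return patterns[mode]


# In the physical gatekeeper pattern the user picks the physical provider,
# which then picks logical layer services on the user's behalf.
print(service_composition("physical_gatekeeper"))
```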
Given the complexity and dynamism of large
distributed networks, service providers or third-party brokers are naturally in a much better position to collect and act on low-level network
information to make service composition decisions
on behalf of their users. However, is there any difference, from a competition or innovation perspective, between the various forms of delegation?
The answer is yes. If there is a relative difference in strength of competition in the two loci,
allowing the firms in the less competitive locus to
serve as gatekeepers controlling access to services
in the other locus will result in reduced competition and innovation in both loci [4]. Typically,
competition is weaker in the physical layer
because of the necessity of significant up-front
capital investments. In this case, the physical gatekeeper model of delegation in Fig. 2c would be
the least desirable from an economic perspective.
Architecturally, this means that it is important for
the interface to allow end users or third party
brokers to explicitly select logical layer services,
independent of the physical layer provider. This
will prevent a physical gatekeeper from becoming
an obstacle between innovative logical layer services and the users who desire them. Architectural proposals like ROSE (Routing as a Service)
[5], CABO [6], and Cabernet [7] offer possible
paths for the network to move away from a physical gatekeeper model, which currently dominates
the Internet, to a model based on logical gatekeepers and/or delegation to third-party brokers.
Generalizing to more than two loci, we should
ensure that the network architecture does not
allow a competition-challenged locus to assume
a gatekeeping role over any other locus. This is
entirely consistent with, and perhaps provides a
fundamental economic argument for, the principle of network neutrality. If the physical access
network locus is the competition-challenged
locus, then we must prevent the dominant firms
in this locus from having the ability to discriminate against different offerings in the applications, services, content, and devices loci.
In the absence of explicit prohibition, architecturally or otherwise, we should expect firms in the
competition-challenged loci to exercise their market power over service selection and become de
facto gatekeepers. At the same time, these firms
have no incentives whatsoever to embrace
changes in the architecture that may shift the
power of service selection to other loci. Therefore, a network architecture cannot afford to be
silent on this issue. One possible design choice is
to foreclose, at the architectural level, the possibility of gatekeeping by the competition-challenged loci. This leaves open the possibility of
gatekeeping by other more competitive loci. However, if we are uncertain of the long-run relative
competitiveness of different loci in the architecture, an alternate approach may be to explicitly
disallow gatekeeping by any locus, and to rely on
delegation to independent third-party brokers.
EVOLUTION
Network architectures evolve over time, and so
do the loci of competition.
The evolution can occur in several ways.
First, the level of competition in a given locus
may change over time. Consolidation via horizontal mergers may change a competitive locus
to an oligopolistic or even a monopolistic one.
Conversely, new entrants with novel technologies
or business models may introduce competition
to a locus. As we have seen in the preceding discussions, these changes can affect levels of competition in adjacent loci as well.
Second, new technologies and services may
create new loci of competition that did not previously exist and/or destroy old ones. In the process, the adjacency of loci may also be rearranged.
In the context of the current Internet architecture, we have seen an emergence of content distribution networks (CDNs) and large content
providers (LCPs, e.g., Google) as major sources
of interdomain traffic [8]. We can consider the
provision of these services as new loci of competition. Furthermore, the datacenters for these
CDN/LCP servers are being deployed with direct
connections to the consumer access networks,
bypassing the backbone networks altogether.
Consequently, the vertical relationship between
the oligopolistic local access locus and the competitive long-distance locus, traditionally subject
to scrutiny by telecom economists and regulators,
may be supplanted by the bilateral oligopolistic
relationship between the local access and the
CDN/LCP loci. Traditionally, bilateral oligopolies
are characterized by long-term contracts negotiated between players with strong market power.
This has important implications to innovation in
both of the loci. The “paid peering” agreements
that are currently being negotiated between local
access networks and large content providers are
precisely the type of long-term contracts that will
shape the industry for years to come.
At the same time, new loci may also create
new opportunities for vertical expansion by existing firms and redefine tussle boundaries. For
example, backbone operator Level 3 expanded
into the CDN locus and began to deliver video
streams for customers like Netflix. This led to a
dispute between Level 3 and Comcast in late
2010 regarding compensation for Comcast’s carriage of Netflix traffic. At issue is whether Comcast and Level 3 are two Tier-1 networks with
settlement-free peering agreements, or if Level 3
is a CDN terminating traffic within the access
networks of Comcast. This ongoing dispute highlights the challenge in characterizing the business relationship between two firms when both
are vertically integrated across different but
overlapping loci of competition.
MINIMIZE SWITCHING COSTS
In addition to ensuring that a multiplicity of providers can be supported at each locus, there is
another important role to be played by the architect. It is to influence how often, and how easily,
the choice of providers can be exercised at each
locus. Specifically, the lower the cost of switching providers, the easier it is for service consumers to try out new innovative services, and
the more competitive a locus can be.
In the search engine market, for example, Hal
Varian contends that Google has every incentive to
continue to innovate because “competition is only
one click away” [9]. So, should a network architecture allow choices to be made on a per-year basis,
per-application basis, or per-packet basis?
The appropriate timescale of choice will likely differ for each locus, depending on the nature
of service in question and the amount of overhead incurred by the service provider each time
a customer adds or drops the service. Extra
attention should be given to those loci that are
inherently more competition-challenged (e.g.,
due to strong scale economies). For example,
number portability proved to be a particularly
effective way to reduce switching costs in the
local access locus of the telephony network
architecture. Developments in software-defined
radio technologies, as another example, offer the
prospect of dramatically lowering switching costs
for the wireless local access locus.
Importantly, the objective should be to minimize switching costs on both the customer side
and the provider side. If we focus on only the
former but ignore the latter, the provider will
have greater incentive to manage its own
incurred costs by discouraging switching behavior altogether. In particular, the provider can
create an artificial switching cost to the customers by employing long-term service contracts
with early termination penalties. Such a strategy
of contractual lock-in may serve the provider’s
goal of churn reduction, but it can also dampen
competition significantly.
TAKEAWAYS
Let us close with a summary of main takeaways
from this article for network architects who are
contemplating the design of future Internet
architectures.
First, the architecture should ensure that a
multiplicity of providers can be supported at
each locus. We do not want to unnecessarily
foreclose competition anywhere in the network.
Second, we need to recognize that architecture outlasts technologies and applications. The
level of competition in each locus may change
over time, and we cannot anticipate or dictate
the number of choices in each locus.
Third, in choosing designs that facilitate competition at each locus, we want to pay particular attention
to those loci that are naturally competition-challenged due to stronger economies of scale, etc. In
general, minimizing switching costs can be an effective way to promote competition in any locus.
Finally, remembering that “loci are not silos,”
we cannot design for competition one locus at a
time. Instead, the network architecture must facilitate robust competition in the face of a wide range of
possible strategic interactions between providers
in different loci, as well as providers that straddle
multiple loci. Interfaces must be carefully developed to manage and lubricate both the technical
and market transactions between the loci.
REFERENCES
[1] D. Geer et al., “Cyberinsecurity: The Cost of Monopoly,”
Comp. and Commun. Industry Assn. Report, 2003.
[2] D. Clark et al., “Tussles in Cyberspace: Defining Tomorrow’s Internet,” IEEE/ACM Trans. Net., vol. 13, no. 3,
June 2005, pp. 462–75.
[3] T. Anderson et al., “Overcoming the Internet Impasse
Through Virtualization,” Computer, vol. 38, no. 4, Apr.
2005, pp. 31–41.
[4] P. Laskowski and J. Chuang, “Innovations and Upgrades
in Virtualized Network Architectures,” Proc. Wksp. Economics of Networks, Systems, and Computation, 2010.
[5] K. Lakshminarayanan, I. Stoica, and S. Shenker, “Routing as a Service,” UC Berkeley EECS tech. rep. UCB/CSD-04-1327, 2004.
[6] N. Feamster, L. Gao, and J. Rexford, “How to Lease the
Internet in Your Spare Time,” ACM Computer Commun.
Rev., Jan. 2006.
[7] Y. Zhu et al., “Cabernet: Connectivity Architecture for
Better Network Services,” Proc. Wksp. Rearchitecting
the Internet, 2008.
[8] C. Labovitz et al., “Internet Inter-Domain Traffic,” Proc.
ACM SIGCOMM, 2010.
[9] H. Varian, “The Economics of Internet Search,” Angelo
Costa Lecture delivered in Rome, Feb. 2007.
BIOGRAPHY
JOHN CHUANG ([email protected]) is a professor
in the School of Information at the University of California
at Berkeley, with an affiliate appointment in the Department of Electrical Engineering and Computer Science. His
research interests are in economics of network architectures, economics of information security, incentives for
peer-to-peer systems, and ICT for development. He received
his Ph.D. in engineering and public policy from Carnegie
Mellon University, his M.S.E.E. from Stanford University,
and graduated summa cum laude in electrical engineering
from the University of Southern California.
FUTURE INTERNET ARCHITECTURES: DESIGN AND
DEPLOYMENT PERSPECTIVES
Biological Principles for Future Internet
Architecture Design
Sasitharan Balasubramaniam, Waterford Institute of Technology
Kenji Leibnitz, National Institute of Information and Communications Technology and Osaka University
Pietro Lio’, University of Cambridge
Dmitri Botvich, Waterford Institute of Technology
Masayuki Murata, Osaka University
ABSTRACT
Currently, a large number of activities on
Internet redesign are being discussed in the
research community. While today’s Internet was
initially planned as a datagram-oriented communication network among research facilities, it
has grown and evolved to accommodate unexpected diversity in services and applications. For
the future Internet this trend is anticipated to
continue even more. Such developments
demand that the architecture of the new-generation Internet be designed in a dynamic, modular, and adaptive way. Features like these can
often be observed in biological processes that
serve as inspiration for designing new cooperative architectural concepts. Our contribution in
this article is twofold. First, unlike previous discussions on biologically inspired network control
mechanisms, we do not limit ourselves to a single method, but consider ecosystems and coexisting environments of entities that can
cooperate based on biological principles. Second, we illustrate our grand view by not only
taking inspiration from biology in the design
process, but also sketching a possible way to
implement biologically driven control in a future
Internet architecture.
INTRODUCTION
The Internet has transformed and changed our
lives in many aspects over recent years, where
the increase in popular and sophisticated services continues to attract users. However, the
current Internet infrastructure, which was built
mainly for conservative data traffic usage, is
approaching its limit. This has led the research
community to investigate solutions toward the
future Internet within various project initiatives
(e.g., GENI, FIND, AKARI, FIRE) [1]. The
motivation for this research development is
largely driven by the behavioral changes and
needs of users toward the Internet. While the
original Internet was designed mainly to allow
users to exchange information (mostly to support research and work), today we see highly
diverse sets of services, many of which are used
daily to enhance our quality of life. These services range from information gathering mechanisms tailored to our personal and societal
needs, to support for various social problems, as
well as entertainment.
In order to fully support such diverse services, the future Internet will require new architectural and protocol designs. This new
architectural design would need to integrate
highly intelligent processes to improve the network’s
robustness, scalability, efficiency, and reliability.
This is particularly crucial because the number
of devices in the future is expected to drastically
increase. One approach to provide this capability
that communications researchers have recently
started investigating is through bio-inspired processes [2–4]. Biologically inspired mechanisms
have been applied in recent years to diverse
types of networks (e.g., sensors, wireless, fixed,
services). However, from the viewpoint of the
future Internet, current bio-inspired approaches
are only a first step toward realizing a fully functional system. The main reason is that most of these bio-inspired solutions
have only tackled specific problems for a particular type of network. In order to realize the full
potential of bio-inspired solutions for the future
Internet, these disparate solutions need to be
designed to function in a fully integrated manner.
In this article, we aim at paving the way for
this vision to become a reality. We first summarize some of the key requirements of the future
Internet, in particular from the core network
infrastructure perspective. This is then followed
by discussions on relevant properties found in
biological processes that enable multiple organisms and systems to coexist in an ecosystem,
where our aim is to combine various bio-inspired
network control mechanisms within the future
Internet. Our proposal is built on two intrinsic
properties of biological systems, which includes a
fully integrated system of systems within an
organism, as well as organisms that can coexist
in an ecosystem. The article illustrates how our
idea could be augmented with some existing proposed architectures for the future Internet. Lastly, we present our grand vision of future
communication networks that are directly driven
by biological systems, taking the concept of biologically inspired networking to a new level.
REQUIREMENTS OF THE
FUTURE INTERNET
While the future Internet will cover various different types of agendas, we only focus in this
article on the core network infrastructure of the
Internet. In the future, we can anticipate greater
heterogeneity, coexistence, and cooperation
among different types of networks. This can
range from different content and service distribution networks to virtual networks that all
operate over the same physical network. At the
same time, communication networks of the
future will require emergent properties embedded directly into the networks. This is often
dubbed self-* properties in the literature (self-organization, self-management, etc.). In this
section we list the requirements expected in our
view of the future Internet.
VIRTUALIZATION AND
ADAPTIVE RESOURCE MANAGEMENT
A crucial component of communication networks is resource management, and its efficient
usage will determine the quality of service (QoS)
delivered to end users. Before the Internet
gained its current popularity, single network providers usually owned the communication infrastructures. However, this situation is slowly
transforming into a new business model, where a
distinction between network (infrastructure) providers and service providers is becoming apparent. This is usually referred to as virtual networks,
where service providers lease the resources they
need from network providers and are allowed to
have a certain control over usage of these
resources. The increased flexibility means that
service providers may configure their virtual network according to the services they are offering,
while the network provider needs to safeguard
the fair usage of the network. However, the
dynamics of services may change over short
timescales, leading to the need for dynamic
resource subscription policies from the network
provider. Another crucial requirement of the
future Internet is the adaptive usage of resources
through efficient routing. One important
research agenda is the need for scalable, robust,
and distributed routing applicable to large-scale
networks. The majority of current routing solutions are based on optimization methods, where
prior knowledge of traffic demand exists, and
the demand does not change frequently. Performing routing this way is ideal if reconfigura-
tions are only required over long timescales.
However, as the number of services increases
and evolves at a fast pace, more reactive and
intelligent routing mechanisms are required.
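As one hedged example of the reactive, distributed routing called for here, the sketch below uses a generic ant-colony-style pheromone table of the kind often cited in bio-inspired routing work; it is illustrative only, not a mechanism proposed by the authors, and all constants are arbitrary.

```python
# Hedged sketch of an ant-colony-style, distributed next-hop choice of the
# kind bio-inspired routing work often uses; illustrative only.
import random

# Per-destination pheromone table kept at a single router (hypothetical values).
pheromone = {"next_hop_A": 1.0, "next_hop_B": 1.0, "next_hop_C": 1.0}
EVAPORATION = 0.05   # forget stale information over time
REINFORCEMENT = 0.5  # reward paths that recently performed well


def choose_next_hop() -> str:
    """Pick a next hop probabilistically in proportion to its pheromone level."""
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for hop, level in pheromone.items():
        cumulative += level
        if r <= cumulative:
            return hop
    return hop  # fallback for floating-point edge cases


def update(hop: str, delay_ms: float) -> None:
    """Evaporate all entries, then reinforce the used hop more strongly
    the lower the measured delay was."""
    for h in pheromone:
        pheromone[h] *= (1.0 - EVAPORATION)
    pheromone[hop] += REINFORCEMENT / max(delay_ms, 1.0)


hop = choose_next_hop()
update(hop, delay_ms=20.0)
print(hop, pheromone)
```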
ENERGY EFFICIENCY
As the Internet’s popularity has increased, so has its
supporting information and communications
technology (ICT) infrastructure. This ranges
from increases in data centers to host services,
network access technologies, as well as end-user
devices. Overall, this has led to a steadily increasing consumption of energy to operate the infrastructure. As the traffic volume in the future is
anticipated to increase, this in turn will also lead
to a higher amount of energy consumption by
networking equipment. A common approach
toward saving energy today is switching devices
off or putting them into sleep state. However,
with the large number of nodes anticipated in
the future Internet, this process should be performed in a collaborative manner, while ensuring that end users’ requirements are met.
FLEXIBLE AND EVOLVABLE INFRASTRUCTURE
A major factor behind the need to redesign the Internet is the fact that the original Internet was designed mainly to accommodate data traffic with stable traffic patterns. However, it is neither feasible nor practical to perform a complete redesign of the Internet each time new requirements or drastic technological or social changes arise that do not fit the
current architecture. Therefore, the design of
the future Internet should include a sustainable
infrastructure that is able to support evolvability.
This should enable new protocols to be introduced with minimal conflict to existing ones. At
the same time, the design of architectures and
protocols should be made in a modular way,
where protocol components can have cross-layer
interactions. The evolvability of the future Internet should also allow for a certain degree of
openness, where protocols with the same functionalities can be deployed by various entities to
suit their own needs; but these protocols must be
able to coexist with each other and minimize any
possible conflicts. For example, different service
providers should be able to deploy their own
routing algorithm that best suits their customers’
QoS/quality of experience (QoE) requirements.
SERVICE-ORIENTED PROVISIONING
A key point that has attracted users to the Internet is the continual development and provisioning of new and more advanced services (e.g., rich
multimedia content). It is expected that this
trend will also continue in the future. Therefore,
as a multitude of new services start to flood into
the Internet, it is essential that these services are
autonomous and capable of exhibiting self-*
properties, similar to the requirements for network devices of the future. These properties
should allow services to autonomously discover
and combine with other services in an efficient
and distributed manner. At the same time,
deployment of these services should not be
restricted to end systems, but may also be
embedded into network routers.
Figure 1. Hierarchical ecosystem in a biological system (internal regulatory, cognitive, sensory, and communication systems within an organism, spanning the molecular scale, organism scale, society, and ecosystem).

This section discussed some of the require-
ments for designing a sustainable future Internet
that is able to handle current and new challenges. Its main features are that the network
should support virtual networks and allow each
virtual network to adaptively subscribe resources
from underlying networks, have self-* properties
for managing itself, enable energy efficiency, and
support a diverse set of services in a flexible way.
Similar characteristics can often be found in
biology, and the following section discusses some
biological processes that may serve as inspiration
to meet these requirements.
BASIC MECHANISMS OF
BIOLOGICAL COOPERATION
Biological systems have remarkable capabilities
of resilience and adaptability. These capabilities
are found in various biological organisms, ranging from microorganisms to flocks of animals
and even human society. Over billions of years,
this resilience and adaptability has evolved to
suit the changes of the environment, and for this
very reason bio-inspired techniques have provided inspiration for communication network
researchers [2–4]. In particular, there are two
especially appealing aspects of biological systems
that could be beneficial in designing architectures of the future Internet. First, biological systems are always composed of a multitude of
protocols that combine various processes to control different elements of an organism. Second,
biological systems as a whole exhibit a hierarchical ecosystem structure that allows various
organisms and systems to coexist.
Figure 1 illustrates an example of both these
aspects, presenting an abstract layered view of
internal functionalities within organisms, composed of an internal regulatory system, a cognitive system, a sensory system, and a
communication system. In the remainder of this
section we describe some key features of this
abstract model and its importance in allowing
biological systems to coexist in the manner in
which they do.
INTERNAL REGULATORY SYSTEMS
In order for biological systems to maintain stability and survive through age, there is a need
for self-regulation to balance the system and
maintain constant conditions in the face of external and internal perturbations. One example of
this self-regulation process found in organisms is
homeostasis. There are a number of different
homeostasis processes ranging from thermoregulation to blood glucose regulation. Homeostasis
requires the integration of information from different parts of the body, as well as the analysis
and forecast of resources. Another example of
an internal regulatory system within an organism
is the immune system, which is able to fend off
non-self invader cells. These biological mechanisms can serve as good inspiration in the design
of self-management and self-regulation in communication networks, since they operate efficiently and are robust without centralized
control.
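As a rough analogy only (not a mechanism proposed in this article), homeostatic regulation can be pictured as a negative feedback loop that keeps an observed quantity near a set point despite perturbations; the sketch below applies the same idea to a node utilization target, with the gains and the disturbance model being illustrative assumptions.

```python
# Illustrative negative-feedback regulator in the spirit of homeostasis:
# the controller nudges an admission rate so that observed utilization stays
# near a set point despite external disturbances. Gains and the disturbance
# model are assumptions for illustration only.
import random

def regulate(set_point=0.7, gain=0.5, steps=20):
    rate = 0.5          # fraction of offered traffic currently admitted
    for t in range(steps):
        disturbance = random.uniform(-0.1, 0.1)          # external perturbation
        utilization = min(1.0, max(0.0, rate + disturbance))
        error = set_point - utilization                  # deviation from set point
        rate = min(1.0, max(0.0, rate + gain * error))   # corrective adjustment
        print(f"t={t:2d} utilization={utilization:.2f} admission rate={rate:.2f}")

if __name__ == "__main__":
    regulate()
```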
BIOLOGICAL SENSORY SYSTEMS
Organisms possess a number of sophisticated
sensory systems to maintain internal balance.
These systems receive their external inputs from
sensors and propagate the stimulus through a
complex hierarchical network to various components within an organism. Examples of sensory
systems are the central and peripheral nervous
systems in the human body. Another example is
the lateral line, which is a sensing organ found in
aquatic organisms. Sensory systems interconnected through the nervous system or lateral line
provide a medium for coordinating and transmitting signals between various parts of the body.
Insight on where to locate processing units within a communication network can be gained from
observing the structure of the nervous system
(e.g., the location of ganglia serving as hubs
between the peripheral and central nervous systems).
BIOLOGICAL COMMUNICATION AND SIGNALING
A key property of biological systems is the ability
for entities to communicate and signal between
each other. This form of signaling can come in
various forms, ranging from speech to chemical
signaling. Signaling is required for synchronization between organisms. At the microorganism
level, reaction-diffusion describes morphogenesis, the process whereby chemicals are released and diffused between cells during tissue development, which explains the patterns of stripes or spots on animal coats. Another example is quorum sensing, in which cells signal among
each other and cooperate in the face of environmental changes. All of these processes are cooperative, and different entities communicate in a
fully distributed way among each other. In most
cases the organism is unaware of the emergent
outcome of synchronization in the whole system.
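To make the gradient idea concrete, the following sketch numerically diffuses a substance released at one source cell along a chain of cells; it is a simplified one-dimensional discretization used only for illustration (the coefficient, grid size, and constant source are assumptions), not the reaction-diffusion model applied later in the article.

```python
# Simplified 1-D diffusion sketch: a substance released at one end spreads
# along a chain of cells, forming a concentration gradient that downstream
# mechanisms (e.g., chemotaxis) could follow. Coefficient, grid size, and the
# constant source are illustrative assumptions.

def diffuse(cells=10, steps=200, d=0.2, release=1.0):
    conc = [0.0] * cells
    for _ in range(steps):
        conc[0] = release                      # constant release at the source cell
        nxt = conc[:]
        for i in range(1, cells - 1):          # discrete Laplacian update
            nxt[i] = conc[i] + d * (conc[i - 1] - 2 * conc[i] + conc[i + 1])
        nxt[-1] = conc[-1] + d * (conc[-2] - conc[-1])   # open boundary at the far end
        conc = nxt
    return conc

if __name__ == "__main__":
    for i, c in enumerate(diffuse()):
        print(f"cell {i}: concentration {c:.3f}")
```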
Societies are formed between various organisms through such signaling and communication
processes. They are the very foundation that
allows organisms to function collectively and
exhibit various self-* mechanisms in order to
coordinate various tasks. During migration,
flocks of birds use signaling between members in
the pack to rotate the leader of the flock to balance energy expenditure of each bird due to
wind resistance. Social insects, such as ants and
bees, show a high degree of cooperation, and
perform division of labor and self-organization
while foraging [5].
INTERACTING POPULATION DYNAMICS
Population models, such as predator-prey interactions, competition, and symbiosis, describe the
interactions between different species while
coexisting within a common space or ecosystem.
In predator-prey, the predators are the dominant of two interacting species and feed on the
prey. On the other hand, symbiosis occurs when
both species coexist and mutually benefit in their
growth, while under competition both populations mutually inhibit each other. The population dynamics in the ecosystem determines
whether the system is able to maintain balance
among competing species. Attaining balanced
coexistence among heterogeneous populations
(networks, services) in a common ecosystem is
one of the goals of our proposal.
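The competition case can be made concrete with the classical Lotka-Volterra competition equations; the sketch below integrates them numerically with illustrative parameters chosen by us (not taken from the article) and shows whether two populations, for example two protocols sharing network resources, settle into coexistence.

```python
# Classical Lotka-Volterra competition model, integrated with forward Euler.
# x and y could stand for two protocols or services competing for the same
# resources. Growth rates, capacities, and competition coefficients are
# illustrative assumptions.

def compete(x=0.1, y=0.1, r1=1.0, r2=0.8, k1=1.0, k2=1.0,
            a12=0.5, a21=0.6, dt=0.01, steps=5000):
    for _ in range(steps):
        dx = r1 * x * (1 - (x + a12 * y) / k1)
        dy = r2 * y * (1 - (y + a21 * x) / k2)
        x, y = max(0.0, x + dt * dx), max(0.0, y + dt * dy)
    return x, y

if __name__ == "__main__":
    x, y = compete()
    print(f"equilibrium populations: x={x:.3f}, y={y:.3f}")
    # With a12 and a21 below 1 both populations persist (coexistence); raising
    # a21 above 1 while a12 stays below 1 drives y toward exclusion.
```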
FUTURE INTERNET ARCHITECTURE
In this section we discuss how we could possibly
map the biological mechanisms of the previous
section to example architectures that have been
proposed for the core infrastructure of the future
Internet. These architectures are the Services
Integration, Control, and Optimization (SILO)
architecture [1] and Information Transfer Data
Services (ITDS) [1], although other architectures
can also be treated in a similar way. Our aim is
to use an ecosystem model, as shown in Fig. 1,
as a basis to ensure that biological processes can
simultaneously coexist with the other processes
that are involved in its environment.
EXTENSION OF THE SILO ARCHITECTURE
The aim of the SILO architecture is to create a
modularized architecture for the future Internet
that can:
• Create building blocks from fine-grained
services
• Allow these building blocks to be combined
in order to support complex communication
tasks
• Allow cross-layer interactions between different services
Based on these aspects, the aim is to allow
the application layer to select the most appropriate services to support its needs. A positive
aspect of the SILO architecture is its ability to
create modular architectures based on the capabilities and resources of the end devices. For
example, in resource-constrained devices such as
sensors, only vital services are embedded directly
into the device to ensure minimum energy consumption, while other services are loaded on
demand.
For these reasons, SILO is very suitable for
our proposed bio-inspired future Internet architecture, where each biological process can represent a SILO service. Similar to the SILO
solution, the processes invoked by the biological
mechanisms depend on rules and constraints
that govern the relationship between the processes. Therefore, each biological process will
have a description that includes:
1. The specific function it performs (e.g., routing)
2. The key requirements of those functionalities from external parameters (e.g., measurements)
3. The interfaces that are compatible with
other processes
In the case of 1, this allows applications to
determine the appropriate biological process
that meets their requirements in the deployed environment. Such characteristics include the overhead
of the protocols and delay incurred when the
process is applied to a specific topology. At the
same time, parameters are defined for each of
these processes to maintain behavioral constraints, which could be applied through policies.
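One possible (purely illustrative) way to encode such a description is sketched below: each bio-inspired process advertises the function it performs, the external measurements it requires, the interfaces it is compatible with, and the tunable "knobs" mentioned later in this section. The field names and example values are our assumptions, not a defined SILO API.

```python
# Hypothetical descriptor for a bio-inspired SILO-style service. The field
# names and example values are illustrative assumptions; SILO itself does not
# define this exact structure.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class BioProcessDescriptor:
    function: str                     # e.g., "routing"
    requirements: List[str]           # external measurements the process needs
    interfaces: List[str]             # other processes it can interoperate with
    knobs: Dict[str, float] = field(default_factory=dict)       # tunable parameters
    policy_constraints: Dict[str, float] = field(default_factory=dict)

chemotaxis_routing = BioProcessDescriptor(
    function="routing",
    requirements=["node load", "link load", "hop count"],
    interfaces=["reaction-diffusion", "attractor-selection"],
    knobs={"gradient_sensitivity": 0.8},
    policy_constraints={"max_signaling_rate_per_s": 10.0},
)

if __name__ == "__main__":
    print(chemotaxis_routing)
```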
The SILO architecture currently uses a control agent that determines the services to be
composed together. In a similar fashion, a control agent can determine the most appropriate
biological process to support the type of application. Figure 2 illustrates an example of the bio-inspired SILO architecture and its application to
a virtual network above a physical network. The
figure also illustrates the protocol stack and the
different processes fitting into the stack. The
paths along the virtual network are selected
using a noise-driven internal regulatory mechanism known as attractor-selection, an internal
regulation system found in E. coli cells [6]. The
underlying network uses the reaction-diffusion
mechanism for signaling between the nodes, and
the routing process is based on chemotaxis [7],
which is a motility mechanism used by microorganisms to follow a chemical gradient found in the environment. Also, similar to the original SILO
services, each process is equipped with “knobs”
to allow external tuning of parameters.
The layered protocol shows how the different
processes in the routing and overlay path layers
interact with each other. The attractor-selection
mechanism is defined through a number of states
and is driven by noise. Internal regulation
changes the state due to external influences; that
is, this internal regulation controls the bandwidth resource for the virtual path. Once a certain threshold is exceeded, attractor-selection
interacts with the underlying network, which
uses chemotaxis to discover new routes. The
chemotaxis mechanism is a distributed routing
mechanism that selects the path node by node
from the source to the destination following the
highest gradient. The gradient is formed through
the node-to-node interaction of the reaction-diffusion process [7].
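The following sketch gives a highly simplified, hedged rendering of noise-driven attractor selection for overlay path choice, loosely inspired by the mechanism in [6]: while the currently selected path satisfies the QoS goal, an "activity" value stays high and suppresses the noise term; when QoS degrades, activity drops and noise lets the state wander to another attractor (path). The dynamics, QoS model, and all parameters here are simplified assumptions, not the exact model of [6] or [7].

```python
# Simplified, noise-driven attractor-selection sketch for overlay path choice,
# loosely inspired by the mechanism described above. The dynamics, the QoS
# model, and all parameters are illustrative assumptions.
import random

def attractor_selection(qos_of_path, paths=2, steps=600, noise=0.15, dt=0.1):
    m = [0.5] * paths                            # state value per candidate overlay path
    for t in range(steps):
        current = max(range(paths), key=lambda i: m[i])
        activity = qos_of_path(current, t)       # 1.0 = QoS satisfied, near 0 = congested
        for i in range(paths):
            target = 1.0 if i == current else 0.0
            drift = activity * (target - m[i])   # high activity pins the current attractor
            m[i] += dt * drift + (1.0 - activity) * random.gauss(0.0, noise)
            m[i] = min(1.0, max(0.0, m[i]))
    return max(range(paths), key=lambda i: m[i])

if __name__ == "__main__":
    # Path 0 satisfies the QoS goal until step 400, when congestion sets in (cf. Fig. 3).
    def qos(path, t):
        if path == 0:
            return 1.0 if t < 400 else 0.1
        return 0.9
    print("selected path:", attractor_selection(qos))
```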
An example of the interaction between two
bio-inspired control mechanisms is shown in Fig.
3. The figure shows how paths are discovered
through the gradients using the chemotaxis model for both the primary and secondary paths.

Figure 2. Bio-inspired SILO architecture (application layer; overlay network with noise-driven attractor-selection states, an objective, knobs, and a measurement interface for QoS, SRC, and DST; underlay network with chemotaxis route discovery and reaction-diffusion signaling of node load, link load, and hop count; physical layer).
Initially, attractor-selection determines the overlay path 1 (red line) to be chosen. Since the QoS
requirements are fulfilled, the system remains in
a stable state and is kept at the dynamic attractor for path 1 despite small fluctuations. At time
step t = 400 congestion occurs, leading to the
path becoming unstable, which in turn decreases
the QoS of the overlay layer. The path is then
switched from path 1 to path 2 (blue line), which
offers greater stability, and at time step t = 600
the system becomes stable again. This simple
example shows how different dynamic biologically inspired control schemes can symbiotically
cooperate in reacting to congestion and changes
of traffic conditions at various timescales. Coming back to our ecosystem model presented in
Fig. 1, we can see that the different biological
mechanisms are applied at the molecular level
(attractor-selection and chemotaxis), and are
able to coexist within an organism.
Figure 4 shows how the concept of modularity and openness can be realized within the bio-inspired future Internet. Two routing
mechanisms (ant-based routing [5] and chemotaxis) have been applied to the underlying networks. Each routing algorithm consumes a
certain quantity of resources, but both mechanisms can symbiotically coexist in the network,
supporting the requirement of openness. The
most suitable routing mechanism may depend on different objectives (e.g., scalability, timeliness of route discovery, reaction to dynamics, energy or signaling overhead). Therefore, through such symbiotic relationships, new protocols can be added and updated over time.
EXTENSION OF THE ITDS ARCHITECTURE
Another example is the bio-inspired ITDS, which
focuses on service provisioning as a key requirement. The aim of ITDS is to have transfer of
48
Communications
IEEE
information in the underlying network rather
than just raw data as is currently done. This proposal is realized through sets of services that are
embedded into the routers to perform application-based processing. This is far from the traditional approach, which only allows end hosts to
have service intelligence while the underlying
network is used for forwarding packets. Whereas
the original ITDS only handles the application
layer requirements, we extend this by allowing
embedded processes into the network layer as
well. This in turn allows the network to be highly
adaptive and support evolving services.
An example of bio-inspired ITDS and its corresponding protocol stack are shown in Fig. 5. In
this figure we show a combination of two services on two virtualized planes, the security and
multimedia planes. On the security plane we
have a reliability function that works in sequence
with the privacy function, where the reliability
function is based on an immune system mechanism. At the multimedia plane, we consider a
codec service that works in conjunction with a
caching service that can extract data to serve
various end users. We assume that each of these
services is able to migrate from node to node.
Since one of the objectives of the future Internet
is energy efficiency, we also have an embedded
service that measures energy output within a
node. Cooperative signaling is also performed
between the nodes to permit certain nodes to be
switched off while others take the burden of the
traffic to minimize overall energy consumption.
Cooperative signaling is performed using the
quorum sensing process, and the energy efficiency service is based on the internal thermoregulation process of an organism. When we map this
back to the ecosystem model of Fig. 1, we can
see that this example expands further from the
example used for bio-inspired SILO. In this
example, our bio-inspired processes in the underlying network are based on mechanisms found at
Figure 3. Illustration of cooperative adaptation between attractor-selection and chemotaxis gradient-based routing (chemotaxis gradients over the links between source and destination for the primary and secondary paths, and path priority over time steps showing attractor selection switching the primary path when congestion occurs).
the molecular level, while the processes used to
manage the security and multimedia plane, as
well as their interactions, are based on internal
regulatory system of an organism. So the example shows how processes at the microorganism
level can coexist with processes at the organism
level, representing a virtual network operating
over a physical network.
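A hedged sketch of the quorum-sensing-style cooperation described above is given below: each node advertises its load, and a node powers down only when enough lightly loaded neighbors signal that they can jointly absorb its traffic. The thresholds, the signal format, and the decision rule are illustrative assumptions, not the mechanism of any specific ITDS implementation.

```python
# Illustrative quorum-sensing-style sleep decision: a node counts "I have
# spare capacity" signals from its neighbors and switches off only when a
# quorum of them can jointly absorb its traffic. Thresholds and the signal
# format are assumptions for illustration.

def can_sleep(own_load, neighbor_loads, capacity=1.0, quorum=2, margin=0.2):
    """Return True if enough neighbors together can take over own_load."""
    volunteers = [capacity - l for l in neighbor_loads
                  if capacity - l >= margin]          # neighbors signaling spare capacity
    volunteers.sort(reverse=True)
    return (len(volunteers) >= quorum and
            sum(volunteers[:quorum]) >= own_load)

if __name__ == "__main__":
    print(can_sleep(own_load=0.5, neighbor_loads=[0.3, 0.4, 0.9]))   # True: node may sleep
    print(can_sleep(own_load=0.5, neighbor_loads=[0.9, 0.95, 0.9]))  # False: stay awake
```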
BIOLOGICALLY-DRIVEN
FUTURE NETWORKS
While biologically inspired mechanisms have
provided increased capability and adaptability
for communication networks, a major challenge
is understanding a biological process and developing an appropriate algorithm. At the same
time, most bio-inspired algorithm designers use
refined biological processes that omit various
hidden functionalities, even though these functionalities may solve foreseeable future problems.
Therefore, a viable alternative to the current
approaches is to allow systems to be directly
driven by biological systems — or as we term it
biologically driven future networks — bypassing
the step of using artificial bio-inspired algorithms. An example of our proposed concept is
illustrated in Fig. 6. In this example a cell may
represent a virtualized overlay network, and in
the event of changes in the overlay network
environment, this triggers feedback into the cell
culture. This feedback may lead to mitosis,
where the output from the mitosis process can
be filtered back to the overlay to reconfigure the
overlay network into two virtual networks. This
vision may be one possible solution toward the
development of network devices for the future
Internet, particularly to cope with increased and unknown complexities. As shown in the figure, a biological culture of microorganisms could directly drive the behavior of the underlying network, and when a change is experienced in the physical network, a feedback process could manually be injected to change the environment of the biological culture. From a practical perspective, we need to limit this vision to the use of microorganisms. In essence, this allows us to harness and directly exploit communication processes at the nano/molecular scale of biological systems [8] to control communication systems. Directly using biological systems to drive other systems has been investigated previously. For example, in [9] slime mould was used to design the Greater Tokyo railway network, while in [10] slime mould was again used to design the U.S. road networks. However, we believe that a similar concept could also be extended to the real-time management of communication networks. Through this approach, a new methodology of tackling problems in future networks can be devised with the concept of biological software/hardware co-design (where biological cultures could be interfaced to software and hardware systems). A major benefit would be that there is no requirement for defining a complex protocol and designing countermeasures for all possible kinds of communication network scenarios.

Figure 4. Illustration of evolvable protocol for SILO architecture (homeostasis, chemotaxis, ant-routing, and shortest path coexisting symbiotically between the application and physical layers).

However, there are still a number of challenges before such an approach could finally be realized. First, biological systems require a favorable environment in terms of nutrients and temperature to be cultivated, cells may die faster than they reproduce, and they may react differently or uncontrollably, all of which must be catered for. It may also lead to new security challenges. At the same time, synchronization between the biological organism and operations within the physical network will be a challenge, since biological processes at the microorganism level can take hours to show some effects. New software and hardware design will also be required, where hardware modules must be able to house the biological culture, and the network conditions must be fed back to the biological environment.
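Purely as a thought experiment matching the vision above (and not describing any existing interface), the sketch below shows the shape such a software/biology co-design loop might take: network changes are translated into a stimulus for the culture, and an observed event such as mitosis is mapped back onto a reconfiguration of the virtual overlay. Every function here is a hypothetical placeholder.

```python
# Hypothetical control loop for a biologically driven network: the functions
# below are placeholders for an interface that does not exist today; they only
# illustrate the feedback structure described in the text.

def read_overlay_state():
    """Placeholder: measure load/changes in the virtualized overlay network."""
    return {"load": 0.95, "topology_change": True}

def stimulate_culture(state):
    """Placeholder: translate network conditions into a stimulus (e.g., a
    nutrient or temperature change) applied to the cell culture."""
    return "stimulus-applied" if state["load"] > 0.9 else "no-stimulus"

def observe_culture():
    """Placeholder: observe the culture's response, e.g., whether mitosis occurred."""
    return {"mitosis": True}

def reconfigure_overlay(response):
    """Placeholder: map a biological event back onto the overlay, e.g., split
    one virtual network into two when mitosis is observed."""
    if response["mitosis"]:
        return ["virtual network 1", "virtual network 2"]
    return ["virtual network 1"]

if __name__ == "__main__":
    state = read_overlay_state()
    stimulate_culture(state)
    print(reconfigure_overlay(observe_culture()))
```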
CONCLUSION
In this article we discuss approaches to support
the design of the future Internet architecture
by making use of biological mechanisms. A
Figure 5. Bio-inspired ITDS (a multimedia plane with video codec and caching services, a security plane with reliability and privacy services, an autonomic nervous system and immune system managing the planes, and network/link/physical layers using chemotaxis, quorum sensing, and thermoregulation).
Figure 6. Biologically driven future networks (a culture of cells or microorganisms interfaced to the communication network, with feedback between the virtual overlay of virtualized networks and the underlying network).
large number of different bio-inspired methods
have previously been proposed for enabling
communication networks to exhibit self-* capabilities, where the majority of these solutions
have only focused on individual specific mechanisms to solve a particular problem. However,
from the future Internet perspective, we need
to take a step back to see how these existing
bio-inspired solutions can be integrated to
meet the many requirements of the future
Internet. In this article we outline possible
solutions for integrating biologically inspired
processes into the future Internet architecture
to support this need. Specifically, we illustrate
how this could be augmented with two existing
architectures proposed for the future Internet,
SILO and ITDS. While applying biologically
inspired methods may improve the robustness,
adaptability, and evolvability of a new Internet
design, our grand vision is communication networks of the future that can be directly driven
by biological systems.
ACKNOWLEDGMENT
The authors would like to thank Tokuko
Haraguchi and Yuji Chikashige of NICT, Japan,
for permitting usage of the HeLa cell images.
The authors wish to acknowledge the following
funding support: Science Foundation Ireland via
the “Federated, Autonomic Management of
End-to-End Communications Services” (grant
no. 08/SRC/I1403), Science Foundation Ireland
via “A Biologically Inspired Framework Supporting Network Management for the Future
Internet” (grant no. 09/SIRG/I1643), and EU
FP7 grant “RECOGNITION: Relevance and
Cognition for Self-Awareness in a Content-Centric Internet.”
REFERENCES
[1] S. Paul, J. Pan, and R. Jain, “Architectures for the Future
Networks and the Next Generation Internet: A Survey,”
Comp. Commun., vol. 34, no. 1, 15 Jan. 2011, pp. 2–42.
[2] M. Meisel, V. Pappas, and L. Zhang, “A Taxonomy of Biologically Inspired Research in Computer Networking,”
Comp. Networks, vol. 54, no. 6, Apr. 2010, pp. 901–16.
[3] F. Dressler and O. B. Akan, “Bio-Inspired Networking:
From Theory to Practice,” IEEE Commun. Mag., vol. 48,
no. 11, Nov. 2010, pp. 176–83.
[4] P. Lio’ and D. Verma, “Biologically Inspired Networking,”
IEEE Network, vol. 24, no. 3, May/June 2010, p. 4.
[5] G. Di Caro and M. Dorigo, “AntNet: Distributed Stigmergetic Control for Communication Networks,” J. Artificial Intelligence Research, vol. 9, 1998, pp. 317–65.
[6] K. Leibnitz, N. Wakamiya, and M. Murata, “Biologically Inspired Self-Adaptive Multi-Path Routing in Overlay
Networks,” Commun. ACM, vol. 49, no. 3, Mar. 2006,
pp. 62–67.
[7] S. Balasubramaniam et al., “Parameterised Gradient
Based Routing for the Future Internet,” Proc. IEEE
Advanced Information Networking and Application
(AINA), Bradford, U.K., 2009.
[8] I. F. Akyildiz, F. Brunetti, and C. Blazquez, “Nanonetworks: A New Communication Paradigm,” Computer
Networks, vol. 52, June 2008, pp. 2260–79.
[9] A. Tero et al., “Rules for Biologically Inspired Adaptive
Network Design,” Science, vol. 327, no. 5964, Jan.
2010, pp. 439–42.
[10] A. Adamatzky, “Physarum Machines: Computers from
Slime Mould,” World Scientific Series on Nonlinear Science, Series A, 2010.
BIOGRAPHIES
SASITHARAN BALASUBRAMANIAM ([email protected])
________ received his
Bachelor (electrical and electronic engineering) and Ph.D.
degrees from the University of Queensland in 1998 and
2005, respectively, and Master’s (computer and communication engineering) degree in 1999 from Queensland University of Technology. He joined the Telecommunication
Software and Systems Group (TSSG), Waterford Institute of
Technology, Ireland, right after completion of his Ph.D. He
is currently the manager for the Bio-Inspired Network
research unit at the TSSG. He has worked on a number of
Irish funded (e.g., Science Foundation Ireland, PRTLI) and
EU projects. His research interests include the bio-inspired future Internet as well as molecular communications.
KENJI LEIBNITZ ([email protected])
_____________ received his Master
and Ph.D. degrees in information science from the University of Würzburg, Germany, where he was also a research
fellow at the Institute of Computer Science. In May 2004,
he joined Osaka University, Japan, as a Postdoctoral
Researcher and from 2006 until March 2010 a Specially
Appointed Associate Professor at the Graduate School of
Information Science and Technology. Since April 2010 he has been a
senior researcher at the Brain ICT Laboratory of the National Institute of Information and Communications Technology in Kobe, Japan, and an invited associate professor at
Osaka University. His research interests are in modeling and
performance analysis of communication networks, especially the application of biologically inspired mechanisms to
self-organization in future networks.
DMITRI BOTVICH ([email protected])
__________ received his Bachelor’s
and Ph.D. degrees in mathematics from Moscow State University, Faculty of Mechanics and Mathematics, Russia, in
1980 and 1984, respectively. He is currently the chief scientist of the Scientific and Technical Board at the Telecommunication Software and Systems Group, Waterford
Institute of Technology. He currently leads the PRTLI
FutureComm project at the TSSG, and has coordinated and
worked in a number of EU and Science Foundation Ireland
projects. He has published over 100 papers in conferences
and journals, and currently supervises seven Ph.D. students.
His research interests include bio-inspired autonomic network management, security, trust management, wireless
networking, queuing theory, optimization methods, and
mathematical physics.
P IETRO L IO ’ ([email protected])
__________ is a senior lecturer and a
member of the Artificial Intelligence Division of the Computer Laboratory, University of Cambridge. He has an
interdisciplinary approach to research and teaching due
to the fact that he holds a Ph.D. in complex systems and
nonlinear dynamics (School of Informatics, Department of
Engineering of the University of Firenze, Italy) and a Ph.D.
in (theoretical) genetics (University of Pavia, Italy). He
supervises six Ph.D. students and teaches the following
courses: Bioinformatics (Computer Laboratory), Modeling
in System Biology (System Biology Tripos, Department of
Biochemistry), Models and Methods in Genomics (MPHIL
Comp Biology — Department of Mathematics), and 4G1
System Biology (Department of Engineering). His main
interests are in investigating relationships and effectiveness of different biosystems modeling methodologies
(particularly multiscale approaches), modeling infectious diseases dynamics, and bio-inspired communications and technology.
MASAYUKI MURATA [M] ([email protected])
______________ received
M.E. and D.E. degrees in information and computer science
from Osaka University, Japan, in 1984 and 1988, respectively. In April 1984 he joined the Tokyo Research Laboratory, IBM Japan, as a researcher. From September 1987 to
January 1989 he was an assistant professor with the Computation Center, Osaka University. In February 1989 he
moved to the Department of Information and Computer
Sciences, Faculty of Engineering Science, Osaka University.
In April he became a professor at the Cybermedia Center,
Osaka University, and is now with the Graduate School of
Information Science and Technology, Osaka University since
April 2004. He has more than 400 papers in international
and domestic journals and conferences. His research interests include computer communication networks, performance modeling, and evaluation. He is a member of ACM
and IEICE.
CONTINUING EDUCATION
FOR COMMUNICATIONS PROFESSIONALS
- Increase your problem-solving skills
- Review and expand on what you already know
- Quickly become familiar with the latest advancements, regulations, & standards
- Learn from experts who focus on practical application
- Earn CEUs to meet your professional development requirement
COMSOC TRAINING EVENTS
Wireless Communications Engineering - Current Practice
Virtual One-Day Course
Instructor: Javan Erfanian
Wed, July 20, 2011 - 9:00am - 4:30pm EDT
LTE for the Wireless Engineering Practitioner:
Fundamentals & Applications
Virtual One-Day Course
Instructor: Luis Blanco
Wed, August 3, 2011 - 9:00am - 4:30pm EDT
To learn more, visit comsoc.org/training
FUTURE INTERNET ARCHITECTURES: DESIGN AND
DEPLOYMENT PERSPECTIVES
Enabling Future Internet Research:
The FEDERICA Case
Peter Szegedi, Trans-European Research and Education Networking Association
Jordi Ferrer Riera and Joan A. García-Espín, Fundació i2CAT
Markus Hidell, Peter Sjödin, and Pehr Söderman, KTH Royal Institute of Technology
Marco Ruffini and Donal O’Mahony, Trinity College Dublin
Andrea Bianco and Luca Giraudo, Politecnico di Torino
Miguel Ponce de Leon and Gemma Power, Waterford Institute of Technology
Cristina Cervelló-Pastor, Universitat Politècnica de Catalunya
Víctor López, Universidad Autónoma de Madrid
Susanne Naegele-Jackson, Friedrich-Alexander University of Erlangen-Nuremberg
ABSTRACT
The Internet, undoubtedly, is the most influential technical invention of the 20th century
that affects and constantly changes all aspects
of our day-to-day lives. Although it
is hard to predict its long-term consequences,
the potential future of the Internet definitely
relies on future Internet research. Prior to
every development and deployment project, an
extensive and comprehensive research study
must be performed in order to design, model,
analyze, and evaluate all impacts of the new
initiative on the existing environment. Taking
the ever-growing size of the Internet and the
increasing complexity of novel Internet-based
applications and services into account, the
evaluation and validation of new ideas cannot
be effectively carried out over local test beds
and small experimental networks. The gap
which exists between the small-scale pilots in
academic and research test beds and the real-size validations and actual deployments in production networks can be bridged by using
virtual infrastructures. FEDERICA is one of
the facilities, based on virtualization capabilities in both network and computing resources,
which creates custom-made virtual environments and makes them available for Future
Internet Researchers. This article provides a
comprehensive overview of the state-of-the-art
research projects that have been using the virtual infrastructure slices of FEDERICA in
order to validate their research concepts, even
when they are disruptive to the test bed’s infrastructure, to obtain results in realistic network
environments.
INTRODUCTION
The European Community co-funded project
FEDERICA (Federated E-Infrastructure Dedicated to European Researchers Innovating in
Computing Network Architectures) [1] supports
the development of the future Internet by definition. The project consortium is an optimal mixture of research institutes, universities, and
industrial partners in order to foster cross-sector
collaboration. Although the two-and-a-half-year
project officially ended on 30 October 2010,
after a four-month extension, the infrastructure
and its services are still up and running thanks to
the voluntary efforts of the European Research
and Education Networking (NREN) organizations, which are actively participating in the
operation of the infrastructure.
FEDERICA’s architecture has been designed
in such a way that allows users to request “virtual slices” of the infrastructure that are completely isolated from each other and behave exactly as
if they were physical infrastructure from the
point of view of the users’ experiments. The
major objective of the deployment of such an
infrastructure is twofold. On one hand, FEDERICA allows extensive research on virtualization
based architecture itself. Experiments on various
resource virtualization platforms, virtual resource
descriptions, monitoring and management procedures, virtual network slice compositions, isolations or federations can be performed. On the
other hand, the virtual slices of FEDERICA
enable multidisciplinary research on future
Internet within the slices.
This article is organized as follows. After a
brief summary of the FEDERICA virtualization
capable infrastructure and its services, we provide an overview of similar initiatives around the
world emphasizing the importance of federated
facilities. Then, we discuss the special features of
FEDERICA that make the infrastructure unique
(as of today) by way of supporting a wide variety
of future Internet research activities, maybe
leading to new paradigms. As FEDERICA is
highly user-centric, we introduce the broad spectrum of its user community by looking at the
actual users and use cases. The contribution of
FEDERICA user experiments to the whole
Internet research space is also illustrated in this
article that finally concludes with the project’s
perspective for the future.
Figure 1. FEDERICA infrastructure, the substrate topology (1 Gb/s circuits forming core and non-core links among sites at KTH, NORDUnet, HEAnet, PSNC, DFN, CESNET, HUNGARNET, SWITCH, GARR, RedIRIS, i2CAT, FCCN, GRNET, and ICCS).
FEDERICA AS A SERVICE
FEDERICA project participants have designed
and deployed a virtualization capable infrastructure substrate, including programmable highend routers, multi-protocol switches, and PC-based
virtualization capable nodes, on a pan-European
footprint (Fig. 1). The physical network topology
is composed of 13 sites, 4 core and 9 non-core
nodes, connected by 19 point-to-point links. On
top of this substrate, virtual infrastructure slices
can be created, which realize whatever topologies
are requested by the users. The facility is also connected to the public Internet, thereby enabling
easy access from any location and type of connectivity (wireless or fixed line) [1].
Virtualization is a key concept having a profound influence on technology. In its network-oriented
meaning, it corresponds to the creation of multiple virtual networks on top of a physical infrastructure. The network virtualization can be
performed at most layers of the ISO/OSI stack,
from the data link to the network layer. In its
system-oriented sense the process is even faster
and more intriguing, leading to multiple operating systems with varying tasks running on the
same hardware platform.
The service architecture of FEDERICA follows the Infrastructure as a Service (IaaS)
paradigm. IaaS, in principle, is the common
delivery of hardware (e.g., server, storage, network), and associated software (e.g., operating
systems virtualization technology, file systems) as
a service. It is an evolution of traditional hosting
that does not require any long-term commitment
and allows users to provision resources on
demand. Amazon Web Services Elastic Compute
Cloud (EC2) and Secure Storage Service (S3)
are examples of commercial IaaS offerings [2].
FEDERICA has two major services:
• The provisioning of best effort or QoS IP
slices with the option of both preconfigured
and unconfigured resources (i.e., routing protocols, operation systems, QoS parameters)
• The provisioning of raw resources in terms
of data pipes (i.e., native Ethernet links)
and unconfigured resources (i.e., empty virtual machines, clear routing tables, no network protocols)
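To make the two service types above concrete, the sketch below shows one possible way a user might describe a requested slice; FEDERICA's actual request format is not defined in this article, so the structure and field names are hypothetical illustrations only.

```python
# Hypothetical slice-request description covering the two FEDERICA service
# types listed above. The structure and field names are illustrative
# assumptions, not FEDERICA's actual request format.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SliceRequest:
    name: str
    service_type: str                  # "configured-ip" or "raw-resources"
    nodes: int
    links: List[Tuple[int, int]]       # requested virtual topology (node index pairs)
    qos_mbps: Optional[int] = None     # bandwidth guarantee, if a QoS slice is wanted
    preconfigured: bool = True         # False = empty VMs, clear routing tables

requests = [
    SliceRequest("best-effort-ip", "configured-ip", nodes=4,
                 links=[(0, 1), (1, 2), (2, 3)], preconfigured=True),
    SliceRequest("raw-ethernet", "raw-resources", nodes=3,
                 links=[(0, 1), (1, 2)], qos_mbps=500, preconfigured=False),
]

if __name__ == "__main__":
    for r in requests:
        print(r)
```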
The current FEDERICA services may naturally evolve toward Platform as a Service (where
various preconfigured images and network scenarios can be provided from a repository), and
Application as a Service (where a set of applica-
tions can be preselected by users). This may eventually lead to a complete portfolio of cloud services, where all of these services share simplicity in access, typically through a web interface, simple scalability, and ease of use.

FEDERATED FACILITIES WORLDWIDE

FEDERICA does not operate in isolation. There are initiatives all over the world, which are of particular relevance and represent the natural set of peers for liaison and collaboration with FEDERICA.

United States — The Global Environment for Network Innovation (GENI) initiative [3] in the United States is an activity that aims at creating a unique virtual laboratory for at-scale networking experimentation. GENI is in its third phase, called spiral 3, of exploratory rapid prototyping with about 50 experiments and facilities operational. Work in spiral 3 should improve the integration, operations, tools, and documentation of GENI prototypes to lower the barriers of use. User projects have to collectively agree on a federation architecture to create the GENI environment spanning multiple facilities. The FEDERICA project has obtained excellent interactions with the GENI project office and some GENI experiments (e.g., ProtoGENI, GpENI) through regular meetings.

Japan — The AKARI Architecture Design Project in Japan [4] aims to implement a new generation network by 2015, developing a network architecture and creating a network design based on that architecture. The philosophy is to pursue an ideal solution by researching new network architectures from a clean slate without being impeded by existing constraints. Some prototyping projects have been started at the University of Tokyo in collaboration with the National Institute of Information and Communications Technology (NICT).
Figure 2. User segmentation results on the exploited unique features of FEDERICA (percentage of submitted slice requests and of the potential user community requesting configured resources with an IP best-effort slice, configured resources with an IP QoS slice, unconfigured resources with IP connectivity, and raw resources and “data pipes”).
Europe — In Europe, the FIRE initiative [5]
from the European Commission funds various
infrastructure projects (e.g., Panlab, Wisebed,
OneLab) with which FEDERICA already has
strong collaboration. The federation of OneLab
and FEDERICA facilities is an excellent example in terms of combining guaranteed and nonguaranteed resources. FEDERICA can offer
resources to OneLab that have full access to the
network layers and OneLab can provide access
to a larger set of distributed computing
resources. In recent years, OneLab has
developed an architecture to federate different
domains based on PlanetLab that is called Slice
Federation Architecture (SFA) [6]. SFA enables
FEDERICA and similar infrastructures to
become a worldwide federated facility supporting collective efforts on future Internet research.
UNIQUE FEATURES OF FEDERICA
FEDERICA can be easily compared with PlanetLab [7]. PlanetLab started in Princeton, New
Jersey, and is a well-known global research facility based on a large set of end hosts on top of
the public Internet. The hosts are running a
common software package that includes a Linuxbased operating system. The key objective of this
software package is to support distributed virtualization (i.e., the ability to allocate a slice of
PlanetLab network-wide hardware resources to a
user application). In contrast, a FEDERICA
node is not only considered as an end host as in
PlanetLab, but a combined network and computing element that itself routes traffic and affects
network reachability. In addition, the user is not
forced to use a specific operating system or
application but is free to install any software
component within the slice. While PlanetLab
connections are based on the public Internet,
FEDERICA allows the user to perform experiments on the lower layers as well.
The FEDERICA architecture is more than an
evolution of the concept of a traditional test bed.
Test beds are usually devoted to a few technologies and oriented to production service or to
pure research, and are not flexible enough to
accommodate both types of use. The FEDERICA environment is also different from a commercial cloud environment, where the delay
between nodes is typically negligible. In FEDERICA the delay corresponds to the physical distance between the substrate nodes. Such a naturally distributed architecture provides an ideal environment for testing medium-size experiments
and migration paths in a realistic environment.
The unique features of FEDERICA (as of
today) can be summarized as follows:
•FEDERICA has its own physical substrate
fully controlled and managed by the FEDERICA Network Operation Center. This allows the
user to take over control of the lower layer
resources within their slices. At the moment,
only raw Ethernet resources are available for the
users but in the near future native optical
resources (i.e., lambdas) may also be provided.
•Thanks to having full control over the substrate links, specific QoS parameters can be
assured for the virtual connection of the user
slices and real transmission delays can be experienced. Currently the links run up to 1 Gb/s but
this may be upgraded radically in the near future.
•FEDERICA users have full flexibility and
freedom to choose any networking protocol or
operating system to be installed on their virtual
nodes. To the virtualization platform end, JUNOSbased programmable high-end Juniper platforms
and VMware-based SUN servers are both currently deployed in the FEDERICA substrate.
•FEDERICA can ensure the reproducibility
of the complete testing environment and conditions of the user experiments at a different location or time. Repeatability of the experiments
can also be ensured in the sense of obtaining the
same results given the same initial conditions at
any time.
•The overall architecture is federation-ready
in line with the Slice Federation Architecture
concept. Moreover, FEDERICA provides, for
example, non-web-based federated SSH access
to all of its resources, supported by the Single Sign-On infrastructure of the research and education community.
However, it is important to mention some of
the limitations of FEDERICA, too. Typically,
compared to PlanetLab or commercial clouds,
the scalability of the FEDERICA virtual infrastructure is limited. The scalability is a function of the given size of the physical substrate
(that may be extended in the future) and the
number of active user experiments requiring
QoS assurances on links and/or computing
nodes. As a consequence of this, user access to
the FEDERICA slices must be governed with
care. This role is undertaken by the User Policy
Board of the project, which collects and approves
(or rejects) user slice applications.
The exploitation of the aforementioned
unique features of FEDERICA is illustrated in
this article by a wide range of user projects.
USERS AND USE CASES
During the active lifetime of the FEDERICA
project (2008–2010) the consortium partners
approached and consulted more than 40 potential user groups all over Europe, which resulted in 15
slice requests for specific user experiments, as of
30 October, 2010. The segmentation and analysis
of the user community have been considered to
be important to FEDERICA. The aim of this
segmentation was to understand the user community better and the type of experiments that
can exploit the unique features of FEDERICA.
This should help to steer the future infrastructure development and user consultation
activities in the right direction so as to support
the needs of the community better.
USER SEGMENTATION
The detailed results of the FEDERICA user
segmentation are available in the public project
deliverables [8]. In the following section, we
highlight the main characteristics of the user
community.
Analyzing both the slice requests already submitted and the preliminary requirements collected from the possible user groups, Fig. 2 shows
that 40 percent of the performed experiments in
FEDERICA use a slice as a fully configured IP
network with best effort connections. There is
nothing unique in that sense, so other motivations (e.g., cost, economy of scale, simplicity of
use) must exist in the case of that 40 percent. In
contrast, 60 percent of the experiments performed use FEDERICA because of its unique
features (e.g., IP quality of service assurance or
unconfigured resource provisioning). In particular, 26.6 percent of users have requested raw
resources and data pipes (i.e., IaaS) in order to
fully configure and manage their slices. Analyzing the potential broader user base, we expect a
few more user experiments in the future exploiting the above-mentioned unique features of
FEDERICA.
MAJOR EXPERIMENTS
FEDERICA’s mission is to support the development of the future Internet via Future Internet
Research projects. To measure its success, FEDERICA has collected feedback from users in
order to understand the contribution of the
experiments’ results to the ICT research community as a whole.
Figure 3 shows the results of the users’ feedback on this particular aspect. Sixty percent of
the experiments performed in FEDERICA contribute to international research projects partly
funded by the European Commission. More
than 10 percent of experiments are aimed at
directly or indirectly contributing to standardization activities in the field of networking.
Requests from the commercial or private sector
have not yet been received during the active lifetime of the project, but some discussions within
the community suggest that it could be possible
in the future.
Figure 3. Contribution of FEDERICA experiments to the ICT research community (percentage of submitted slice requests and of the potential user community contributing to national project support, international/EC project support, standardization/open source development support, and commercial/private research support).
CONTRIBUTION TO FUTURE INTERNET RESEARCH
In the following section, we illustrate the usage
of FEDERICA via a wide spectrum of its user
experiments. The examples of user experiments
are grouped in three categories according to
their main objectives: validation of virtual infrastructure features, evaluation of multilayer network architectures, and design of novel data and
control plane protocols.
VALIDATION OF
VIRTUAL INFRASTRUCTURE FEATURES
This group of experiments aims at validating the
basic principles of virtualization capable infrastructures in general and particularly the unique
features of FEDERICA.
The Universitat Politècnica de Catalunya
(UPC), Spain, requested a FEDERICA slice,
called ISOLDA, in order to perform basic network performance and slice isolation tests. Parameters such as bandwidth, latency, jitter, and packet
loss were measured in parallel slices with the aim
to prove sufficient isolation between them. Figure
4 depicts the test scenario where two virtual connections (red and dashed green slices) share a
physical port on the FEDERICA substrate node
in the middle. This shared physical link contains
two different VLANs, one for each virtual connection. VM1 and VM3 are located on the same
virtual machine server, called VMServer 1, while
VM2 and VM4 are located on another server,
called VMServer 2. The other virtual connection
(dotted blue line) does not share any resource.
The isolation test at network level was done
by using various control and management protocols to identify whether the isolated virtual
machines can accidentally connect to one another. Isolation of virtual machines was also proven
by ping tests. The results from these tests were
affirmative, as the virtual machines on different
slices were not visible to one another. In conclusion, information was obtained about how the
FEDERICA substrate nodes should be configured to assure the isolation feature between
Figure 4. ISOLDA: network performance and slice isolation tests (VM1 and VM3 on VMServer 1, VM2 and VM4 on VMServer 2, with two virtual connections sharing a physical port on a FEDERICA substrate node).
slices that share physical resources. Following
these defined procedures in FEDERICA, the
isolation between slices is completely ensured.
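A minimal sketch of the kind of reachability check used in such isolation tests is shown below, written from the viewpoint of a virtual machine inside one slice and assuming a Linux ping; the addresses are placeholders, and the exact tooling used in ISOLDA is not described in this article.

```python
# Illustrative isolation check run from a VM in one slice: none of the
# addresses of VMs in the *other* slice should answer a ping if isolation
# holds. Addresses are placeholders; assumes a Linux environment (iputils ping).
import subprocess

OTHER_SLICE_VMS = {"VM3": "10.0.2.1", "VM4": "10.0.2.2"}   # placeholder addresses

def reachable(ip):
    """Send a single ICMP echo request; True if the host answered."""
    return subprocess.run(["ping", "-c", "1", "-W", "1", ip],
                          stdout=subprocess.DEVNULL).returncode == 0

if __name__ == "__main__":
    for name, ip in OTHER_SLICE_VMS.items():
        status = "VIOLATION: reachable" if reachable(ip) else "isolated as expected"
        print(f"{name} ({ip}): {status}")
```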
KTH Royal Institute of Technology, Sweden,
requested a slice, called METER, in order to
study another important feature of FEDERICA:
the repeatability of its experiments. This user
project dealt with problems related to repeated
experiments on a shared network, where other
external activities may influence the set of results.
KTH investigated a method to identify time periods of comparable network conditions based on
metadata: contextual information about the environment where the experiment was executed.
During the timeframe of an experiment,
active background measurements were run to
collect metadata parameters. Experiment data
was time-stamped and KTH used statistical analysis on the metadata to determine if experiment
data was gathered during comparable network
conditions. An important goal was to be able to
do this without interrogating the experiment
data itself. Within the slice, service degradation
in terms of resource competition with experimental activities in other slices was rarely detected. An important part of the experimental work
was to detect differences in network conditions,
e.g., in terms of latency. A set of background
measurements was used to identify periods of
comparable network conditions. Based on the
measurement results, it was possible to reduce
the confidence intervals of the outcome of the
repeated experiment, so the remaining values
were suitable for comparison.
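As a rough, hedged illustration of the metadata approach (not KTH's actual statistical procedure), the sketch below compares background latency samples from two time windows using a Welch-style t statistic built from the standard library; windows that differ significantly would not be pooled when analyzing the repeated experiment.

```python
# Rough illustration of checking whether two measurement windows had
# comparable network conditions, using background latency metadata. The
# decision rule (|t| < 2) is a crude stand-in for a proper statistical test
# and is an assumption of this sketch, not KTH's actual procedure.
import math
import statistics

def comparable(window_a, window_b, t_threshold=2.0):
    """Welch-style t statistic on background latency samples (ms)."""
    ma, mb = statistics.mean(window_a), statistics.mean(window_b)
    va, vb = statistics.variance(window_a), statistics.variance(window_b)
    t = (ma - mb) / math.sqrt(va / len(window_a) + vb / len(window_b))
    return abs(t) < t_threshold

if __name__ == "__main__":
    monday = [12.1, 11.9, 12.3, 12.0, 12.2, 11.8]       # ms, background probes
    tuesday = [12.2, 12.0, 12.1, 11.9, 12.3, 12.1]
    congested = [18.5, 19.2, 17.9, 18.8, 19.0, 18.4]
    print(comparable(monday, tuesday))     # True: conditions look comparable
    print(comparable(monday, congested))   # False: do not pool these results
```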
The Friedrich-Alexander University of Erlangen-Nuremberg (FAU), Germany, in cooperation with FEDERICA partner Deutsche
Forschungsnetz (DFN), requested a slice in
order to perform Hades Active Delay Evaluation
System (HADES) measurements over the physical infrastructure of FEDERICA. The nodes of
the FEDERICA substrate were measured in a
full mesh topology; with bidirectional measurements, this yielded 42 measured links.
HADES was developed and deployed as a
tool that provides performance measurements
and offers IP performance metrics such as one-way delay, one-way delay variation, and packet
loss [9]. The main purpose of the experiment was
to collect and archive these IP performance metrics over an extended period of time and make
the data available to other FEDERICA project
partners and experiments that needed reference
data as part of their investigations on the behavior of virtualized networks and slice processing.
The data was also used to show that FEDERICA
users had a stable network environment and
repeatable conditions for their experiments.
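The metrics reported by such a system can be illustrated with a small computation: given matched send and receive timestamps from synchronized clocks, one-way delay, delay variation, and packet loss follow directly. The sample values and field layout below are assumptions for illustration, not HADES output.

```python
# Illustrative computation of one-way delay (OWD), one-way delay variation,
# and packet loss from matched send/receive timestamps (seconds, assumed to
# come from synchronized clocks). Sample values are made up for illustration.
import statistics

sent = {1: 0.000, 2: 0.020, 3: 0.040, 4: 0.060, 5: 0.080}   # seq -> tx time
received = {1: 0.0123, 2: 0.0321, 3: 0.0528, 5: 0.0931}     # seq 4 was lost

owd = {seq: received[seq] - sent[seq] for seq in sent if seq in received}
delays = [owd[s] for s in sorted(owd)]
variation = [abs(b - a) for a, b in zip(delays, delays[1:])]  # successive differences
loss = 1 - len(received) / len(sent)

print(f"mean OWD: {statistics.mean(delays) * 1000:.2f} ms")
print(f"mean delay variation: {statistics.mean(variation) * 1000:.2f} ms")
print(f"packet loss: {loss:.0%}")
```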
EVALUATION OF MULTI-LAYER
NETWORK ARCHITECTURES
This group of experiments illustrates the capability of connecting external test beds or virtual
slices provided by other facilities to the FEDERICA user slice. In this way, multi-layer and
multi-domain network architectures and federation principles can be evaluated.
Trinity College Dublin (TCD), Ireland,
requested a slice, called OIS, in order to carry out
experiments on the Optical IP Switching (OIS)
concept, a network architecture [10] that was
developed by the CTVR Telecommunications
Research Centre based at the University of
Dublin. The idea behind OIS is to build a packet
router that can analyze traffic flows and use an
underlying optical cross-connect to bypass the
router electronics for suitable IP flows. The FEDERICA slice was used to analyze how dynamic
creation, modification or cancellation of optical
paths can degrade the quality of applications
transported over TCP protocols, on a large-scale
testbed implementation. The results showed substantial differences from those previously obtained through laboratory-based experiments.
The OIS network architecture was set up in
TCD’s laboratory as a core network and interconnected with vanilla IP domains, which were
realized within the FEDERICA slice. In the
experiment the FEDERICA slice was organized
in such a way that it emulated a simple access network, where user data were aggregated by an IP
router and sent toward the TCD testbed. Here
packets were routed back to FEDERICA
towards the destination network. TCD could
then verify how their testbed would work when
connected to external legacy IP networks. The
results showed, for example, that UDP and TCP
transmissions are impaired when the signals are
dynamically switched from an electronic to an
optical connection if both upstream and downstream links are congested. The experiment also
gave TCD the opportunity to evaluate feasibility
and efforts required to use virtualized networks
in combination with physical testbeds.
The Telecommunications Software & Systems
Group (TSSG), based in Waterford, Ireland, was
one of the 10 partners of the European Commission funded project PERIMETER [11], which finished in May 2011. The project addressed
challenges such as quality of experience (QoE),
security, and end user impact in order to establish a new paradigm for seamless mobility. TSSG
requested a FEDERICA slice (incorporating five
virtual nodes and routers) to extend the existing
physical test bed and carry out more robust testing of the PERIMETER application, particularly
in the areas of routing, scalability, and authentication. This was a relatively inexpensive way to
dynamically pool and use resources for experimentation of new networking concepts. The federated PERIMETER testbed and FEDERICA
slice were used to conduct a number of testing
processes including scenario conformance tests,
interoperability tests, application tests, Living
Labs tests, and performance and scalability tests
in the project. These testing processes could not have been performed with such a level of realism without the use of FEDERICA. In conclusion, it was validated that the federation of a
physical test bed and a virtual slice allows experimental testing to proceed, using real-size platforms, real operating systems, and real
applications in a realistic environment, which is
similar to the actual target system.
The Universidad Autónoma de Madrid (UAM) in Spain also requested a FEDERICA slice, called ANFORA, in order to interconnect it with two
other infrastructures. This complex networking
experiment used resources of three facilities: FEDERICA, OneLab, and PASITO. Besides the FEDERICA slice, UAM already had OneLab nodes
with specific monitoring cards in order to perform
quality of service measurement and was also connected to PASITO, the Spanish national layer 2
networking facility [12], to provide lower-layer connections at the FEDERICA points of presence.
The multilayer test facility that was created (Fig. 5)
was used to evaluate the Path Computation Element (PCE) protocol, and to validate multilayer
traffic engineering (MTE) algorithms.
The FEDERICA slice allowed the creation of a
virtual network topology, which played the role of
an IP service provider’s backbone network. OneLab
measurement tools, implemented on top of FEDERICA virtual nodes, ensured precise monitoring
of traffic engineering (TE) parameters, like bandwidth or delay. The end-to-end monitoring information was sent to the PCE database, so PCE
nodes knew the actual performance of each layer.
During the experiment, end-user traffic was
routed using the IP layer (i.e., FEDERICA slice)
by default, while the network performance was
adequate. However, when there was IP congestion,
the lower layer connections of PASITO infrastructure were used to provide the desired quality
of service for the end users. PCE computed the
end-to-end routes and decided on which network
layer the traffic should be transmitted. As a result
of the experiment it was concluded that PCE is suitable for multilayer algorithms, since their computation is more complex than that of traditional routing algorithms. Moreover, the PCE database was filled with information on both layers, providing a global
view of the network to the operator. The interconnection of FEDERICA, OneLab, and PASITO
provided a unique test environment in order to
successfully validate various MTE algorithms.
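To make the layer-selection logic described above concrete, here is a toy sketch of that decision step under assumed thresholds and a hypothetical two-layer TE database; it is not the PCE protocol itself, and all names and values are illustrative only:

```python
# Illustrative sketch: keep end-user traffic on the IP layer while its measured
# TE parameters are adequate, and fall back to a lower-layer (PASITO-like)
# connection when the IP layer is congested. Thresholds and the database
# layout are assumptions for illustration.

TE_DB = {
    # layer: TE parameters as reported by end-to-end monitoring
    "ip_layer":    {"available_bw_mbps": 40.0,  "delay_ms": 35.0},
    "lower_layer": {"available_bw_mbps": 900.0, "delay_ms": 12.0},
}

def select_layer(required_bw_mbps, max_delay_ms, te_db=TE_DB):
    """Prefer the IP layer; use the lower layer only if the IP layer cannot
    satisfy the request (a stand-in for the multilayer TE decision)."""
    ip = te_db["ip_layer"]
    if ip["available_bw_mbps"] >= required_bw_mbps and ip["delay_ms"] <= max_delay_ms:
        return "ip_layer"
    low = te_db["lower_layer"]
    if low["available_bw_mbps"] >= required_bw_mbps and low["delay_ms"] <= max_delay_ms:
        return "lower_layer"
    return None  # request blocked: no layer can satisfy it

print(select_layer(required_bw_mbps=10.0, max_delay_ms=50.0))   # -> ip_layer
print(select_layer(required_bw_mbps=100.0, max_delay_ms=50.0))  # -> lower_layer
```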
DESIGN OF NOVEL DATA AND CONTROL
PLANE PROTOCOLS
This group of experiments aims at designing and
validating novel data and control plane protocols
as well as architectures.
The Politecnico di Torino (PoliTo), Italy,
requested a slice, called VMSR, in order to
experiment with the Virtual Multistage Software
Router (VMSR) distributed architecture. Distributed routers are logical devices whose functionalities are distributed on multiple internal
elements, running on virtual machines, in order
to achieve larger aggregated throughput and
improved reliability. The VMSR multistage
router architecture [13] was composed of three
Figure 5. ANFORA: investigation of federation and multilayer network scenarios.
internal stages: layer-2 load balancers, Ethernet
switches to interconnect components, and standard Linux-based software routers. In order to
coordinate the internal elements and to allow
VMSR to behave externally as a single router,
custom-made control and management protocols
were designed by PoliTo.
The VMSR experiment in FEDERICA consisted of three L2 balancers (Linux virtual
machines running Click on Ethernet Virtual
Switch), one Ethernet switch, three L3 routers
(i.e., standard PCs running the Linux protocol
stack and XORP), and three external nodes that
generate and sink traffic. The main advantage of
implementing the VMSR experiment in FEDERICA was the ability to control and monitor network resource allocation inside the architecture, which would not be possible in a testbed (e.g., one built on top of the public
Internet). Functional tests on the control and
management plane protocols were successfully
performed. PoliTo measured throughput and
latencies on all the involved links while varying
the packet size of traffic generators to determine
the suitability of the infrastructure supporting
high-performance applications.
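As a toy illustration of the first VMSR stage, the sketch below shows a layer-2 balancer consistently mapping each flow to one of the internal software routers, so that the multistage device can behave externally as a single router. The 5-tuple hash used here is an assumption for illustration; the article does not describe PoliTo's actual balancer logic.

```python
# Toy sketch of a first-stage L2 balancer dispatching flows to back-end routers.
import hashlib

BACK_END_ROUTERS = ["r1", "r2", "r3"]   # the three Linux/XORP software routers

def dispatch(src_ip, dst_ip, src_port, dst_port, proto):
    """Pick a back-end router for a flow, deterministically per 5-tuple."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha1(key).digest()
    return BACK_END_ROUTERS[digest[0] % len(BACK_END_ROUTERS)]

print(dispatch("10.0.0.1", "10.0.1.9", 40000, 80, "tcp"))
```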
Last but not least, Fundació i2CAT, the Spanish research foundation, requested a slice in
order to perform scalability tests of the Harmony multi-domain and multi-vendor network
resource brokering system developed by the
European Commission funded project PHOSPHORUS [14]. Harmony defines the architecture for a network service layer between the grid
middleware and applications and the network
resource provisioning systems (NRPS). Harmony
architecture has evolved from a centralized network service plane (NSP) model to a distributed
NSP model, passing through a middle stage, the
multilevel hierarchical NSP model, as depicted
in Fig. 6. Moreover, hybrid architectures of the
NSP have also been under consideration for
very large, high-capacity environments.
The experiment performed over the FEDERICA slice aimed to analyze the performance and
scalability of the different Harmony NSP topologies under different workloads. The experiment
focused on collecting information about two
parameters: the service provisioning time and
the average request blocking rate. The slice
Figure 6. PHOSPHORUS: Harmony network resource brokering system scalability study (basic NSP architectures: centralized, hierarchical, daisy chain; advanced architectures: meshed, hybrid).
provisioned by FEDERICA consisted of computational resources (15 virtual machines with 2
Gbytes RAM and 10 Gbytes storage each),
where the Harmony NSP entities were deployed.
The performed test cases produced measurements that helped i2CAT to better understand and
characterize the performance and scalability of the
developed system under higher loads than the ones
used in the physical test bed available in the PHOSPHORUS project. Several new results were
obtained, such as the comparison of the performance of each of the feasible architectures of the
NSP. Moreover, the PHOSPHORUS project participants obtained service provisioning times under very different situations and loads for each test scenario by reconfiguring the FEDERICA slice.
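The two metrics collected in this experiment are simple to derive from a provisioning log; a minimal sketch follows, where the log format and values are assumptions for illustration:

```python
# Average service provisioning time and request blocking rate from a
# hypothetical log of provisioning attempts.

requests = [
    # (request_id, provisioning_time_s or None if the request was blocked)
    ("req-1", 3.2), ("req-2", 4.1), ("req-3", None), ("req-4", 2.8), ("req-5", None),
]

served = [t for _, t in requests if t is not None]
blocked = len(requests) - len(served)

avg_provisioning_time = sum(served) / len(served)
blocking_rate = blocked / len(requests)

print(f"average provisioning time: {avg_provisioning_time:.2f} s")
print(f"blocking rate: {blocking_rate:.0%}")
```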
CONCLUSION
In this article we have given a brief overview of
the unique features and services of the FEDERICA virtualization-capable infrastructure, developed and deployed in the framework of a
European Commission funded project. The
broad spectrum of Future Internet Research
experiments, performed over the virtual slices of
FEDERICA, is also illustrated above. Feedback from the user community clearly indicates that the FEDERICA facility is essential for the sustainable support of Future Internet Research in Europe and beyond. That is why the
European NREN partners of the former project
consortium have agreed to continuously support
the operation of the infrastructure and are willing to do further development under the recently launched Network Factory task of the GN3
project’s Joint Research Activity 2 [15].
ACKNOWLEDGMENT
The FP7 project FEDERICA was partially supported by the European Commission under
Grant Agreement no. RI-213107. The authors
acknowledge the contributions of all project partners and user projects, and specifically thank, for their
input, Mauro Campanella (GARR, project coordinator of FEDERICA), Peter Kaufmann
(DFN), Vasilis Maglaris (NTUA), Frances
Cleary, and Eileen Dillon (TSSG).
REFERENCES
[1] P. Szegedi et al., “With Evolution for Revolution: Managing FEDERICA for Future Internet Research,” IEEE
Commun. Mag., vol. 47, no. 7, July 2009, pp. 34–39.
[2] S. Bhardwaj, L. Jain, and S. Jain, “Cloud Computing: A
Study of Infrastructure of a Service (IaaS),” Int’l. J. Eng.
and Info. Tech., vol. 2, no. 1, Feb. 2010, pp. 60–63.
[3] J. B. Evans and D. E. Ackers, “Overview of GENI and Future Internet in the US,” 22
May 2007, http://www.geni.net/
[4] H. Harai, “AKARI Architecture Design Project,” 2nd
Japan EU Symp. New-Generation Network and Future
Internet, Oct. 13, 2009, http://www.akari-project.jp/
[5] FIRE White Paper, “FIRE: Future Internet
Research and Experimentation,” Aug. 2009,
http://www.ict-fire.eu/home/publications/papers.html
[6] S. Soltesz et al., “On the Design and Evolution of an
Architecture for Federation,” ROADS2007, July 2007,
Warsaw, Poland.
[7] L. Peterson, “PlanetLab: A Blueprint for Introducing Disruptive Technology into the Internet,” Jan. 2004,
http://www.planet-lab.org/.
[8] FP7-FEDERICA, Deliverable DNA2.3 “FEDERICA Usage
Reports,” Nov. 2010, http://www.fp7-federica.eu/.
[9] P. Holleczek et al., “Statistical Characteristics of Active
IP One Way Delay Measurements,” Int’l. Conf. Net. and
Services (ICNS’06), Silicon Valley, CA, 16–18 July 2006.
[10] M. Ruffini, D. O’Mahony, and L. Doyle, “Optical IP
Switching: A Flow-Based Approach to Distributed
Cross-Layer Provisioning,” IEEE/OSA J. Opt. Commun.
and Net., vol. 2, no. 8, Aug. 2010, pp. 609–24.
[11] E. Dillon et al., “PERIMETER: A Quality of Experience
Framework,” Int’l. Wksp. Future Internet of Things and
Services, 1 Sept. 2009, Berlin, Germany.
[12] Platform for Telecommunications Services Analysis
(PASITO), http://www.hpcn.es/projects/pasito/.
[13] A. Bianco et al., “Multistage Switching Architectures
for Software Routers,” IEEE Network, vol. 21, no. 4,
July 2007, pp. 15–21.
[14] S. Figuerola et al., “PHOSPHORUS: Single-Step On-Demand Services Across Multi-Domain Networks for e-Science,” Network Architectures, Management, and
Applications, Proc. SPIE, vol. 6784, 2007.
[15] EC co-funded project GN3, Joint Research Activity 2,
http://www.geant.net/Research/Pages/home.aspx.
BIOGRAPHIES
PETER SZEGEDI ([email protected]) received his M.Sc. degree in electrical engineering from Budapest University
of Technology and Economics (Hungary, 2002). He then
worked toward a Ph.D. in the Department of Telecommunications. His main research interests include design and
analysis of dynamic optical networks, especially optical Ethernet architectures, network virtualization, control, and
management processes. He worked for Magyar Telekom
(2003–2007) and then joined TERENA in January 2008.
MARCO RUFFINI ([email protected]) is currently a lecturer
on optical network architectures at the Department of
Computer Science of the University of Dublin, Trinity College, where he obtained his Ph.D. in 2007. He is part of
the CTVR telecommunication research centre, and his
research interests include flow-based optical switching,
experimental optical testbeds, and techno-economic studies of next-generation transparent architectures. He worked
for Philips Research Laboratories in Aachen (2003–2005),
and before that as a technology consultant for Accenture,
Milan. He holds a degree in electronic engineering from
Universitá Politecnica delle Marche in Italy (2002).
D ONAL O’M AHONY ([email protected])
_______________ graduated
with first class honors in engineering from Trinity College
in 1982. After a brief career in industry at Sord Computer
Systems in Tokyo and IBM in Dublin, he rejoined Trinity
College as a lecturer in computer science in 1984, completing his Ph.D. in the area of software reusability in 1990. At
Trinity, he built up a successful research group in networks
and telecommunications. He spent the year of 1999 as a
Fulbright Fellow at Stanford University, California, before
returning to his present position as professor in computer
science at Trinity College. In July 2004 he led a team to
establish CTVR, a major multi-university research centre
established in association with Bell Labs.
JORDI FERRER RIERA ([email protected]), M.Sc., graduated
in computer science from the Technical University of Catalonia (FIB-UPC). He joined the i2CAT foundation as a software engineer in early 2007 for developing the MANTICORE
project. In 2008 he started his collaboration in the PHOSPHORUS project. He also collaborates with the GLIF Generic
Network Interface Technical Group (GNI-TG), adapting the
Harmony Service Interface (HSI) to the GNI specification. In
2010 he started collaborating actively in the GEYSERS project. He is currently a Ph.D. candidate in the Telematics
Engineering Department of UPC.
JOAN ANTONI GARCIA-ESPIN ([email protected])
holds an M.Sc. from UPC (2007). He is a research project
manager at the Network Technologies Cluster of the i2CAT
Foundation and Ph.D. candidate at UPC. He wrote his Master’s thesis on the design and implementation of TE-enabled,
DiffServ-aware MPLS networks for providing end-to-end
QoS. He is currently working in European projects such as
GÉANT3 and GEYSERS and participated in PHOSPHORUS and
FEDERICA, as well as several Spanish research projects in the
past. He is an active contributor to and editor of the Network Service Interface standard in the Open Grid Forum.
MARKUS HIDELL ([email protected]) is an assistant professor in
communication systems at the Royal Institute of Technology (KTH), Sweden. He received his M.Sc. degree in telecommunication systems and his Ph.D. degree in
telecommunication from KTH in 1992 and 2006, respectively. Since January 2008 he has been an assistant professor
at Telecommunication Systems Laboratory (TSLab), KTH. His
current research interests include switch and router architectures, protocols, and network architectures. He has coauthored six patents in the area of network resource
management and network topologies.
PETER SJÖDIN ([email protected]) holds a Ph.D. in computer science from Uppsala University. Since December 2002 he has been an associate professor in communications networks at KTH, in TSLab of the School of Information and Communication
Technology. His current research interests include network
systems, protocols and network architectures, virtualization
architectures, and residential service management. He
holds five patents in the areas of fast lookups and network
resource management, and network topologies.
PEHR SÖDERMAN ([email protected]) received his M.Sc. in engineering from KTH in 2008, specializing in computer science. He joined the TSLab research group at KTH in 2009,
where he is currently a Ph.D. student. His current research
interests include measurement methodology, experiment
repeatability, and security in large-scale research networks.
ANDREA BIANCO ([email protected]) is an associate
professor in the Electronics Department of Politecnico di
Torino, Italy. He was Technical Program Co-Chair of HPSR
2003 and 2008, DRCN 2005, and the IEEE ICC 2010 ONS
Symposium. He was a TPC member of several conferences,
including IEEE INFOCOM, IEEE GLOBECOM, and IEEE ICC.
He is editor of the Elsevier Computer Communications journal. His current main research interests are in the fields of
protocols and architectures for all-optical networks and
switch architectures for high-speed networks.
LUCA GIRAUDO ([email protected]) graduated from
Politecnico di Torino with a degree in telematics engineering in April 2007. Between June and December 2007 he
was involved in the design and implementation of distributed software router architecture (BORA-BORA project). Since
January 2008 he has been with Politecnico di Torino as a
Ph.D. student under the supervision of Prof. Andrea Bianco,
working on software routers, network virtualization,
resource distribution, and centralized control in networks.
CRISTINA CERVELLO-PASTOR ([email protected]) received
her M.Sc. degree in telecom engineering and Ph.D. degree
in telecommunication engineering, both from the Escola
Tècnica Superior d’Enginyers de Telecomunicació, UPC,
Barcelona, Spain. She is currently an associate professor in
the Department of Telematics Engineering at UPC, which
she joined in 1989, and leader of the optical networks
research group within the BAMPLA group.
VICTOR LOPEZ ([email protected]) received his M.Sc.
degree in telecommunications engineering from Universidad de Alcalá de Henares (2005) and his Ph.D. degree in
computer science and telecommunications engineering
from Universidad Autónoma de Madrid (UAM) in 2009. In
2004 he joined Telefónica I+D, where he was involved in
next-generation networks for metro, core, and access. In
2006 he joined the High-Performance Computing and Networking Research Group at UAM. Currently, he is an assistant professor at UAM, where he is involved in optical
metro-core projects (BONE, MAINS). His research interests
include Internet services integration over optical networks
(OBS solutions and multilayer architectures).
MIGUEL PONCE DE LEON ([email protected]) is a Waterford
Institute of Technology (WIT) graduate with a degree in
electronic engineering, and he is currently head of Communication Infrastructure Management, a unit of the TSSG, a
research group that focuses on telecommunications software services management and Internet technologies. He
has participated in over 35 international research projects
focusing on the future Internet, where his research team of
30 staff concentrate on the architecture and design of the
future Internet, looking specifically at self-management of
virtual resources, threats to communication-based service
information, and the Living Labs concept. He was General
Chair for TridentCom 2008 and is a member of FITCE.
GEMMA POWER ([email protected]) holds a first class honors
degree from WIT and has extensive industry experience
with Apple. Since joining the TSSG in 2007, she has
worked on the EU MORE project in the area of wireless
sensor networks and validation and test. She is currently
working on the EU PERIMETER project where her primary
focus is the testbed setup, configuration, and federation,
in addition to the code integration and testing, proof of
concept, and demonstration aspects of the project.
SUSANNE NAEGELE-JACKSON ([email protected]) graduated with an M.Sc. in computer science from Western Kentucky University and the University
of Ulm, Germany. She received her Ph.D. (Dr.-Ing.) from
the University of Erlangen-Nuremberg in computer science.
She has worked at the Regional Computing Center of the
University of Erlangen-Nuremberg since 1998 on a variety
of national and international research projects such as GTB,
Uni-TV, Uni-TV2, VIOLA, MUPBED, EGEE-III, GN3, FEDERICA,
and NOVI. She has authored and co-authored over 30 scientific publications, and teaches classes on multimedia networking at the Regional Computing Center.
FUTURE INTERNET ARCHITECTURES: DESIGN AND
DEPLOYMENT PERSPECTIVES
Content, Connectivity, and Cloud:
Ingredients for the
Network of the Future
Bengt Ahlgren, Swedish Institute of Computer Science
Pedro A. Aranda, Telefónica, Investigación y Desarrollo
Prosper Chemouil and Sara Oueslati, Orange Labs
Luis M. Correia, IST/IT-Technical University of Lisbon
Holger Karl, University of Paderborn
Michael Söllner, Bell Labs/Alcatel-Lucent
Annikki Welin, Ericsson Research
ABSTRACT
A new network architecture for the Internet
needs ingredients from three approaches: information-centric networking, cloud computing
integrated with networking, and open connectivity. Information-centric networking considers
pieces of information as first-class entities of a
networking architecture, rather than only indirectly identifying and manipulating them via a
node hosting that information; this way, information becomes independent of the devices it is stored in, enabling efficient and application-independent information caching in the network. Cloud networking offers a combination
and integration of cloud computing and virtual
networking. It is a solution that distributes the
benefits of cloud computing more deeply into
the network, and provides a tighter integration
of virtualization features at computing and networking levels. To support these concepts, open
connectivity services need to provide advanced
transport and networking mechanisms, making
use of network and path diversity (even leveraging direct optical paths) and encoding techniques, and dealing with ubiquitous mobility of
user, content and information objects in a unified way.
INTRODUCTION
The Internet’s architectural model has sustained
continuous development for the past four
decades and provided an excellent substrate for
a wide range of applications. The amount of
mobile data has been growing exponentially, and
one can expect this tremendous growth to con-
tinue. Despite the Internet’s uncontested successes, some challenges for this model are
becoming apparent, like adding applications
more complex than simple client/server or peer-to-peer ones (e.g., multitier), or deploying information-centric ones distributed over different
providers; moreover, the range of so-far successful business models seems limited. Also, coordinating and integrating more diverse technologies,
networks, and edge devices is getting overly
expensive, and security issues are becoming real
barriers to deployment and use.
Information itself has become more and more
important in all aspects of communication and
networking. Most of the traffic in today’s Internet is related to content distribution, which
includes file sharing, collaboration applications,
and media streaming, among others. The interaction patterns of emerging applications no
longer involve simply exchanging data end-toend. These new patterns are centered on pieces
of information, being accessed in a variety of
ways. Instead of accessing and manipulating
information only via an indirection of servers
hosting them, putting named information objects
themselves at the center of networking is appealing, from the viewpoint of information flow and
storage. This information-centric usage of the
Internet raises various architectural challenges,
many of them not being handled effectively by
the current network architecture, which makes
information-centric networking an important
research field. In this new paradigm, storage for
caching information is part of the basic network
infrastructure, a network service being defined in
terms of named information objects (e.g., web
pages, photos, movies, or text documents), inde-
pendently of where and how they are stored or
transported. This approach is believed to enable
an efficient and application-independent large-scale information distribution.
Another problem is related to network applications, which can fluctuate rapidly in popularity
and in terms of the amount of user interaction.
This makes provisioning of both server and storage, as well as of networks, a difficult problem.
On the server and storage side, cloud computing
has successfully addressed many of these challenges, using virtualization as a core technique.
However, it is still unclear how to provide suitable network support for such highly variable
applications when they run not just over the
tightly controlled, custom-tailored network of a
cloud computing operator, but rather inside
more complex and diverse operator networks. In
such a network, it might be possible to provide
the computational resources, but it is not obvious how to dynamically provide the necessary
networking support/capacity or the complex networking topology required. Furthermore, security in both networks and cloud computing is a
key challenge to success. One needs an integration of network resource management with cloud
computing, an integration of provisioning distributed cloud resources with the network services to connect such distributed resources
reliably at a required quality. This combination
is called cloud networking.
Transport of information is another matter
that needs to be addressed. In the current Internet, transport relies on connectionless forwarding of small data packets that is not able to
exploit the additional (semantic) information
that is available in the end or edge systems;
additionally, it is incapable of making use of the
context information that defines and controls
related flows throughout different network
aggregation layers, leveraging the capabilities of
heterogeneous transmission technologies. For
example, it is practically impossible to exploit
the diversity existing over different communication technologies between two endpoints (e.g.,
random variations in channel quality or structural differences in channel properties, like different delay/data rate trade-offs), switching
between technologies as the flow’s required data
rate changes. Similarly, efficient multi-path/protocol/layer optimization is still unfeasible. In
order to efficiently use such high-speed future
network technologies, it is critical to implement
cross-layer coordination with new interdomain
transport, switching, and routing protocols. Furthermore, the current Internet is a flat, service-neutral infrastructure; this is reflected in today's
rigid peering agreements, which limit the type of
business models and service agreements that can
be applied at interprovider interfaces. In today’s
cellular networks, the introduction of new services is a cumbersome process due to the complexity of setting up the necessary roaming
agreements, since different networks may have
different releases and features (besides the
billing problem). Open connectivity offers an
approach to address these problems.
The aspects addressed above can be put into
a perspective of joint planes for a new architecture (Fig. 1). Three approaches are addressed in
Figure 1. Three aspects of a new network architecture (cloud networking, information-centric networking, and open connectivity).
the current article: information-centric networking, cloud networking, and open connectivity.
The next sections present these concepts and
discuss them in detail.
A TARGETED SCENARIO
A scenario can help to put some of the previously mentioned aspects into perspective, and explicitly show an integrated approach to the problem.
Obviously, a single example scenario cannot convey all aspects of a complex situation, but it is
important to understand how the various
approaches can fit together.
Consider a user, Alice, offering some piece of
information (be it static or dynamic, e.g., streaming video) from her mobile handset to her content repository in the network. She shares this
content, which becomes unusually popular, being
viewed by many people, Alice’s “followers,”
most of whom use different network operators,
thus causing a large amount of relatively slow
and quite expensive cross-operator traffic (Fig.
2a). This situation creates some incentive for the
network operator to improve this content delivery situation (out of self-interest, but also to
improve user-perceived quality).
A possible solution is to use the network-centric architecture, together with some
open connectivity services, so that the increased
load causes additional instances of Alice’s content repository to be quickly spun up within
some of these other operators’ own networks
(Fig. 2b). The replication of the popular information to another location is facilitated by information-centric caching mechanisms. If necessary,
this infrastructure, with the necessary processing
means, ensures that information is processed at
and delivered from topologically advantageous
places — unlike today’s cloud computing, where
the processing can take place far away, with long
round-trip delays. This allows for reduction in
Alice’s operator
Another operator
Alice’s operator
Another operator
Cloud networking infrastructure
Request
Alice
F
Alice’s operator
Another operator
Cloud networking infrastructure
Alice
Video
Video
BEMaGS
Create
Alice
Video
A
Video
Video
Video
“Followers”
“Followers”
(a)
(b)
“Followers”
(c)
More
“Followers”
Figure 2. Three steps in an advanced user-content-provisioning scenario.
cross-operator traffic, since each of Alice’s followers can now access information using only
network operator local traffic, and Alice’s video
is only replicated once between her operator and
each of the other operators.
However, this opens some transport problems: by the time the additional nodes are operational, a substantial amount of video may
already have been buffered at Alice’s (home network) node, which can cause problematic delays
for new followers; existing followers that have
been receiving cross-operator traffic will need to
switch to their now-local instance of Alice’s
node. These problems may be addressed by multipath transport connectivity, which can handle
the transport of the initial (previously buffered)
video burst via higher-bandwidth links for interoperator traffic before seamlessly falling back to
the cheaper connectivity that is sufficient to keep
up with Alice’s ongoing video stream (Fig. 2c).
Hence, storing, processing, and transporting
information turns into an integrated problem,
while today only isolated solutions are available.
INFORMATION-CENTRIC
NETWORKING
THE NOTION
The notion of information-centric networking
(ICN) has been proposed by several initiatives in
the last few years. The core idea of most proposals is to consider pieces of information as the
main entities of a networking architecture, rather
than only indirectly identifying and manipulating
them via a node hosting that information. Thus,
information becomes independent of the devices
in which it is stored, enabling efficient and application-independent information caching in the
network. This approach is believed to result in a
network that is better adapted to information
distribution and retrieval, which are the prevailing uses of current network technologies.
Notable examples are the work on content-centric networking (CCN) [1], publish/subscribe
schemes [2], directly embedding publish/subscribe
schemes into the network fabric (PSIRP project)
[3], the NetInf work by the 4WARD project [4],
upon which our own ongoing work is mostly
based, or, earlier, the DONA project [5]. Similar
ideas have also been considered in the context of
wireless sensor networks (e.g., the idea to use
predicate-based “interests” to identify which data
shall be transported, with protocols like directed
diffusion [6] realizing that idea).
AN EXAMPLE ARCHITECTURE: NETINF
Let us consider one of the approaches in more
detail. The NetInf ICN architecture developed
in the 4WARD project comprises three major
components: a naming scheme for information
objects (IOs), a name resolution and routing system, and in-network storage for caching. These
components are illustrated at a high level in Fig.
3 and described in the following paragraphs.
The naming scheme is important for making
information objects independent of the devices
storing them. The hard part is not to make the
names (information object identifier, IO ID in
the figure) location-independent, but rather to
fulfill the security requirements that result from
the location independence. One cannot depend
on host-based authentication of a delivering
server, since one wants any node in the network,
dedicated caches as well as end hosts, holding a
copy to be able to share it with others. In
order to be able to trust a copy of an information object coming from an untrusted device, the
receiver must be able to independently verify the
integrity of the object so that it becomes impossible to make forgeries. Therefore, the naming
scheme has a cryptographic binding between the
name itself (using field A of the ID) and the
object, similar to DONA. Furthermore, the naming scheme supports dynamic objects, owner
authentication and identification, changing the
owner of an object, and anonymous owners.
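To make the self-certification idea concrete, the following is a simplified sketch: field A of the IO ID is the hash of the owner's public key, so a receiver can check that a copy obtained from an untrusted cache matches its name. Real NetInf names carry more structure, and the owner's signature over the content digest must also be verified with a proper signature scheme; only the hash bindings are shown here, and the "ni:" name format is an assumption for illustration.

```python
# Simplified sketch of self-certifying IO IDs and integrity checking.
import hashlib

def make_io_id(owner_public_key: bytes, label: str) -> str:
    a_field = hashlib.sha256(owner_public_key).hexdigest()
    return f"ni:{a_field}/{label}"          # "ni:" prefix is illustrative only

def verify_copy(io_id: str, owner_public_key: bytes, signed_digest: str, content: bytes) -> bool:
    a_field = io_id.split(":", 1)[1].split("/", 1)[0]
    key_ok = hashlib.sha256(owner_public_key).hexdigest() == a_field
    content_ok = hashlib.sha256(content).hexdigest() == signed_digest
    # A full check would also verify the owner's signature on signed_digest.
    return key_ok and content_ok

pk = b"-----owner public key bytes-----"
content = b"a cached movie chunk"
io_id = make_io_id(pk, "movie/chunk42")
digest = hashlib.sha256(content).hexdigest()
print(verify_copy(io_id, pk, digest, content))   # True for an unmodified copy
```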
The purpose of the name resolution and routing system is to link the object names (IO IDs)
to the actual information objects, so they can be
queried and retrieved. It is a major challenge for
all information-centric approaches to design this
resolution system so that it scales to the global
level. The system can be viewed as a variant of
the Internet’s Domain Name System (DNS).
The system can also be viewed as a variant of IP
routing, where query packets are routed toward
the location of the resolution records, or all the
way to an actual copy of the information object.
NetInf supports both of these models by allowing different name resolution protocols in different parts of the network. Protocols based on
distributed hash tables (DHTs) have been investigated as one suitable technology; multicast-based protocols have been investigated for
implementing resolution in a local scope.
Storage for caching the actual bits of the
information objects — the bit-level objects
(BOs) in Fig. 3 — is an integral part of the network service. Potentially, BOs are cached at all
routers and end systems in the network. The
goal is to deliver the requested object from the
best cache(s) holding a copy to the client. A
cached copy can be found either through the
name resolution system or by a cache-aware
transport protocol.
The NetInf application programming interface (API) is inspired by publish/subscribe. A
producer of information can publish an information object, creating a binding in the name resolution system, and revoke a publication, removing
the binding. A consumer can resolve an information object name, returning the corresponding
binding(s), and retrieve an object using the information in the binding(s).
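A minimal sketch of this publish/resolve/retrieve interaction is given below, with a plain dictionary standing in for the (in reality distributed, e.g., DHT-based) name resolution system; method names follow the description in the text, not an actual NetInf library.

```python
# Toy publish/resolve/retrieve cycle with an in-memory name resolution system.

class NameResolutionSystem:
    def __init__(self):
        self._bindings = {}                     # IO ID -> set of locators

    def publish(self, io_id, locator):
        self._bindings.setdefault(io_id, set()).add(locator)

    def revoke(self, io_id, locator):
        self._bindings.get(io_id, set()).discard(locator)

    def resolve(self, io_id):
        return set(self._bindings.get(io_id, set()))

def retrieve(io_id, nrs, caches):
    """Fetch the object from any locator returned by name resolution."""
    for locator in nrs.resolve(io_id):
        bo = caches.get(locator, {}).get(io_id)
        if bo is not None:
            return bo
    return None

nrs = NameResolutionSystem()
caches = {"cacheA": {"ni:abc/photo1": b"...bits..."}}
nrs.publish("ni:abc/photo1", "cacheA")
print(retrieve("ni:abc/photo1", nrs, caches))
```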
Figure 3. Major components of the 4WARD NetInf architecture: IO IDs of the form (Type, A=hash(PK_IO), L=label), a name resolution system whose resolution records bind IO IDs to locators, and in-network storage/caching of BOs.
COMPARISON OF APPROACHES
Other approaches make different design choices.
Table 1 summarizes the main conceptual differences for four exemplarily chosen popular ICN
variants, previously mentioned. The main aspects
of difference are:
• The choice of what to consider a piece of
information with the corresponding naming
model
• Whether and how names are resolved into
routable addresses of a simpler system (like
IP) or whether name-based routing is used
• How transport and caching are integrated
CHALLENGES IN ICN
The ICN approach is still young, with many
remaining research challenges, some of the most
important ones being outlined in what follows.
Global scalability: An ICN needs to handle on
the order of 10^15 unique information objects at
the global scale. Some solutions have been proposed (e.g., using DHTs), and calculations have
been made suggesting that it is feasible to construct a global name resolution/routing system
meeting this requirement. It still remains to be
proven by experiments using real implementations.
Cache management: Resource management
needs to go beyond considering link capacity,
and has to address, in particular, cache storage.
Some control of caching is needed to deliver a
predictable service. Different cache replacement
algorithms might be needed for different applications and usage patterns. Cache management
protocols are needed for, say, collaboration
between caches. Performance models are needed, accounting for distributed caching, statistical
features of queried pieces of information (popularity, content size, usage patterns, correlations
between objects), and the interplay between
caching and data rate, notably for dimensioning.
Congestion control: ICNs depart from today’s
Internet in two ways: they are receiver-oriented,
and they change the end-to-end principle. While
the former implies that end users may control
the rate of information delivery, the latter creates an opportunity for implementing congestion
control protocols between relevant nodes inside
the network, through (chunk) query message
pacing. This pacing mechanism may, for exam-
ple, be enforced between border routers of two
different network providers in a consistent manner with respect to the charging model in use for
information transport.
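An illustrative sketch of the chunk-query pacing idea follows: a border node forwards receiver-issued chunk queries no faster than a configured rate, which bounds the data rate flowing back in the opposite direction. The rate value and forwarding function are assumptions; this is not a protocol specification.

```python
# Pacing chunk queries at a fixed rate on an inter-provider link.
import time

def pace_queries(chunk_queries, queries_per_second, forward):
    interval = 1.0 / queries_per_second
    for query in chunk_queries:
        forward(query)
        time.sleep(interval)   # simple pacing; a real node would schedule asynchronously

# Usage: with 1 MB chunks, pacing at 10 queries/s caps the return traffic
# at roughly 10 MB/s on this link.
pace_queries(
    chunk_queries=[f"ni:abc/movie/chunk{i}" for i in range(3)],
    queries_per_second=10,
    forward=lambda q: print("forwarding", q),
)
```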
Deployment issues: To deploy informationcentric schemes, there must be both incentives
for users and operators, as well as the technical
feasibility to introduce them. For operators, the
appeal might lie in new business models (act as
information host, cache provider) and operational advantages (reduce interoperator traffic,
since information has to be exchanged only once
between two operators). Incremental deployment is also a sine qua non condition; it is facilitated by schemes that can use existing routing
and forwarding infrastructures (e.g., like NetInf
can use different name resolution systems as
plug-ins and directly run on top of IP as well as
on lower layers).
CLOUD NETWORKING
CLOUDS ARE RESTRICTIVE
Provisioning of data processing and storage in
clouds sitting at the edge of a network has
proven to be extremely useful for a wide range
of conventional applications; it is also a model
that is well in accordance with today’s network
architecture. But when one considers either
more demanding applications (e.g., with stringent latency requirements) or an advanced networking architecture, there are reasons to
rethink the current cloud model.
Consider ICN as a case study: ICN requires
storage and computing facilities distributed into
the network at a very fine granularity level, in
particular, if it has to go beyond pure content
distribution services and embrace active information objects. Leveraging current cloud computing solutions, based on server farms sitting at
the edge of the network, provides the end user with an insufficient level of flexibility and performance, in particular with respect to latency. Serving ICN requests
from the edge of the network will not result in
acceptable performance; hence, while ICN will
require cloud-like functionality, the notion of
cloud computing has to be reconsidered. This
implies the need to embed computation and
Naming and security of information objects:
• CCN: hierarchical names; need to trust the signing key to establish integrity.
• NetInf: flat, self-certifying names; support for versioning and transfer of ownership.
• PSIRP: flat, self-certifying names; notion of scope.
• DONA: flat, self-certifying names.

Name resolution and routing:
• CCN: name-based routing using the longest prefix of hierarchical names.
• NetInf: allows both resolution (e.g., using DHTs) and name-based routing.
• PSIRP: name resolution using a rendezvous function, within a specified scope.
• DONA: REGISTER and FIND primitives; hierarchical resolution handlers.

Transport and caching:
• CCN: transport using name-based routing; finds cached objects through local search as well as on the path to the publisher.
• NetInf: allows multiple transport protocols; finds cached objects through name resolution as well as cache-aware transport.
• PSIRP: transport routing and forwarding using separate forwarding identifiers.
• DONA: caching in resolution handlers.

Table 1. Comparison of different concepts.
storage deeply into the network to provide the
required quality of experience. A cloud system
serving an ICN architecture has to create ICN
instances at various places in the network (and
not just outside), and it has to provide these
instances with a suitable and secure, possibly private, network. Hence, one needs to integrate
cloud and (possibly virtual) networking services
into cloud networking.
More generally, in a traditional cloud, massive amounts of data will be “sitting in the
cloud,” waiting to be accessed by users anywhere
and anytime. “Sitting in the cloud” also implies
the need for a higher level of flexibility in the
network: on the one hand, applications will
reside in it, will be massively distributed (even
over several cloud centers), and will be accessible to a massive number of users; on the other,
the network itself will be composed of a vast
range of different network infrastructures, which
will be undoubtedly managed by different operators. These requirements are not new, and the
TeleManagement Forum had been addressing
them for some time in initiatives like IPSphere.
But today’s cloud solutions are based on concepts inherited from grid computing, and as
such, they do foresee massive deployment of
computing resources located at the edge of the
network in general. Advanced solutions for distributed services were inspired by the grid (e.g.,
Eucalyptus), but will not serve our purpose
either. They implement the Infrastructure as a
Service (IaaS) paradigm, being massively based
on pushing computing and content to virtual
machines localized at a few locations at the edge
of the network, i.e., at large data centers. None
of these approaches is suitable to act as an execution platform for ICN, where both storage and
computing will be distributed, yet might still
heavily interact with each other.
For a pervasive deployment of the kind of
infrastructure one is aiming at, there is the need
to provide the network with mechanisms to
access and to create such computing resources at
any place in the network they might be deployed
at. More important, tighter integration with virtualization at all possible levels is necessary:
applications in the cloud run in parallel, sharing
the infrastructure, and need to be isolated from
one another to provide predictable security and
performance guarantees at all levels, including
the network plane. Additionally, current
approaches to network virtualization are too
static: virtual private networks (VPNs) at layers
2 and 3 are conceived as semi-static entities,
which require often manual intervention when
connections to end-user locations are created or
destroyed. Signaling, a possibility to make the
operation of VPNs more dynamic, is currently
based on protocols like BGP-4, which have been
designed to minimize the oscillation probability
in the infrastructure, and therefore are not too
dynamic.
BENEFITS OF AN INTEGRATED APPROACH
One needs a solution that distributes the cloud
(and its main benefits, on-demand availability of
computing and storage with massive benefits of
scale) more deeply into the network and disperses the cloud closer to the end user to reduce
latency: one might talk about mist computing
instead of cloud computing (Fig. 4). Moreover,
these “misty” resources need to be flexibly networked across a backbone network, with isolation and security in place, the allocation of
storage, computation, and networking connectivity between them becoming an integrated problem — applications can only be mapped onto a
part of a cloud, when the required networking
resources are in place, both to other parts of a
cloud and to the end-user population the cloud
part is intended to serve. The approach proposed here intrinsically takes into account the finer level of granularity needed to implement an infrastructure that is highly
responsive and provides a tighter integration of
virtualization features at computing and networking levels, possibly trading off computing
and networking against each other (e.g., use
slower computing nearby vs. fast computing far
away).
The levels of envisioned adaptability provide
better adaptation to network conditions and
higher robustness to flash crowd effects. The
network will adapt the amount of resources to
the traffic needs, and move computational
resources nearer to the physical locations where
they are needed, creating the required network
connectivity on demand. Additionally, it will support a rich ecosystem of middleware, which can
Figure 4. From cloud (a) to mist (b) computing, supported by cloud networking: resources of cloud 2 (shown in green) are spread much finer and deeper into the network, close to the actual point of usage.
be run concurrently and isolated in different virtual infrastructures; cloud networking is not
meant as an exclusive runtime environment for
ICN alone, but as a generic service accessible for
many different kinds of applications that need to
run in the network with similar levels of adaptability and scale. For example, the Software as a
Service (SaaS) paradigm should also benefit
from cloud networking, moving the provided
software away from today’s centralized and
remote data centers closer to the customer.
Moving closer to the customer is in the interest
of resource efficiency too: usage dictates the network portions that are activated for a specific
service.
CHALLENGES
Some lines of current research in cloud networking focus on optimizing networking inside big
cloud data centers [7], since measurements in
such environments show that most of the traffic
stays within the data center [8]. The envisioned
architecture does not geographically confine
traffic in this way. The impact of traffic patterns
associated with cloud networking applications [8]
needs to be studied in highly distributed scenarios as considered here.
Another challenge that arises in massively
distributed environments is failure protection: an
application at a given network location might
not work as expected. This situation needs to be
detected and corrected. Approaches like the one
presented in [9] need to be explored.
OPEN CONNECTIVITY
CHALLENGES OF INTERNET TRANSPORT AND
CONNECTIVITY ARCHITECTURES
So far, the current Internet transport paradigm has focused to a large extent on the provisioning of transparent TCP/IP-based point-to-point connectivity between addressable hosts, irrespective of
the underlying transport technologies. However,
there is a tremendous increase in capacity in the
lower-level network technologies (fiber, copper,
and wireless technologies), but the usable network capacity increasingly lags behind the
demands of emerging resource-hungry net-
worked applications (created by, e.g., content
distribution, cloud computing, or social networking). In addition, the heterogeneity of deployed
network technologies makes it hard to exploit
the particular network resources and features on
an end-to-end, or even edge-to-edge, basis for
the sake of new evolutions, like ICN or cloud
networking.
Therefore, there is also a need for an
advanced open connectivity service framework
that addresses the issues in the transport mechanisms of a Future Internet. It aims at leveraging
advanced features (e.g., multipoint, multipath,
dynamic switching, and extended bandwidth) of
link technologies, especially of optical transport
networks, making use of network (and path)
diversity and advanced encoding techniques, and
at dealing with ubiquitous mobility of user, content and information objects in a unified way.
Access to these mechanisms should be provided
through new open and extensible interfaces,
between client (user) and network, as well as
between networks. Figure 5 presents multilayer
transport architecture and interfaces for open
connectivity services.
While the physical networks offer an ever-growing optical bandwidth, and tend to aggregate links and switching/routing capabilities as
much as possible for efficiency purposes in the
core network, connectivity for the ICN approach
will require high-performance distribution, referencing and managing a large number of interlinked, but relatively small, information chunks
located all over the world, preferably at the edge
of the network. Today, this seems like diverging
interests, a challenge that needs to be addressed
in an evolved transport system.
Therefore, the use cases for open connectivity
will include the special needs for Wide Area
Networks interconnectivity of new players, like
distributed service centers and large enterprises
(acting as information-centric nodes or cloud
service providers), providing them with advanced
and easy-to-use open APIs, to set up and efficiently control their private “virtual cloud networks” across multiple transport technologies
and domains. Data centers in such a context will
also comprise “mobile operation centers,” such
as traditional IP Multimedia Subsystem (IMS)
Figure 5. Open connectivity services: multi-layer transport architecture and interfaces (UNI: user-to-network interface; NNI: network-to-network interface).
network functionalities running in a specific
“mobility cloud.”
LEVERAGING LOWER-LAYER
TRANSPORT EFFICIENCY
The current Internet is not capable of making
use of context information that defines and controls related flows throughout different network
aggregation layers, leveraging the capabilities of
heterogeneous transmission technologies, including IP/multiprotocol label switching (MPLS),
WiFi, third generation/Long Term Evolution
(3G/LTE), Ethernet, and optical networks. For
example, TCP end-to-end transport with error
protection, flow control, and congestion control
is completely decoupled from the routing and
forwarding aspects of interconnected networks.
This architecture also does not allow leveraging advanced features of upcoming global network technologies, such as carrier-grade
Ethernet or advanced optical switching techniques (e.g., concerning path management,
resilience, or quality of service [QoS] mechanisms). For efficiently utilizing such high-speed
future network technologies, it is critical that
there is cross-layer coordination with new interdomain transport, switching, and routing protocols [10].
The evolution of transport networks is mainly
driven by advances in optical transmission technologies, increasing the usable transmission
bandwidth in optical fibers, as well as by the evolution of photonic integrated circuit technologies
and the electrical processing in silicon, more and
more used for switching of sub-lambda, single
wavelengths and wavebands in a dynamic way [11]. The ability to use direct lightpaths and optical technologies to off-load traffic from the Internet core, and to reduce (electrical) processing in intermediate hops, will have a beneficial
impact on the energy budget of the global Internet overall, a problem being recognized only in
recent years in the context of so-called green
ICT [12]. However, this requires a new, modified addressing and routing architecture, if a packet is to be processed in fewer electronic steps and
then put into an optical path that ends up near
the destination, in order not to run into routing
table explosion and scaling problems caused by
the extensive use of multilayer techniques across
multiple domains [13].
RELATED WORK
The design of a new transport architecture for
the Future Internet has partly been addressed in
previous projects. Most notably, the European
project 4WARD (http://www.4ward-project.eu)
has developed a functional architecture for
generic paths that enables new forms of in-network processing. The 4WARD Generic Path
concept encapsulates both interfaces and functionality central to data transport in an object-oriented fashion, thereby enabling access to routes, paths, and transport functionalities, and allowing them to be stacked and connected dynamically (e.g.,
to implement distributed mobility management).
Such principles can now be extended to develop
a lightweight, flow-aware concept of routing
across and managing of rich communication
paths (i.e., multipoint, multiprotocol, multipath).
This should allow for the development of open
connectivity services that satisfy the needs of
flash network slices (i.e., capable of being virtualized) and ICN (i.e., connecting content and
data centers).
The European project Trilogy (http://www.
trilogy-project.org) proposes a unified architecture for control, divided into a delivery service
and transport services. The delivery service is
composed of a reachability plane, responsible for
the outgoing link selection, enabling network-
wide reachability, and a resource plane, responsible for sharing the transmission resource
between packets. The transport services provide
functions for reliability, flow control and message framing.
Likewise, alternative control plane architectures arise out of the ongoing future Internet
research activities, such as OpenFlow
(http://www.openflowswitch.org), which provides
a new interface to Ethernet switches, enabling
experiments with novel protocols at the flow level on the one hand, and allowing for programmable networking on the other.
Further related activities exist in other ongoing European projects working towards the
Future Internet, such as ETICS, GEYSERS and
STRONGEST; generally, all these projects deal
with variants of multi-layer network architectures and their multi-domain interconnection.
To our knowledge, only the SAIL project
(http://www.sail-project.eu) focuses on providing
the means for user/application controlled access
to establish connectivity tailored to the needs of
future internet stakeholders, such as cloud networking or ICN.
CONNECTIVITY AS AN OPEN SERVICE
The proposed open connectivity approach will extend current point-to-point oriented connectivity toward multi-p* (i.e., multipath/
point/protocol) transport and routing, investigating the interactions between multi-p* transport,
path selection and routing, and having an end-to-end cross-layer and cross-domain approach
for multi-p* management.
The proposed solution is based on a multidomain architecture, allowing for open and extensible communication and cooperation between the
control planes of different network domains
(user-to-network, and network-to-network) in a
unified way. It enables the generic exchange of
resource information for data flows across technology boundaries, in order to support content
and information delivery in ICN, and provide
appropriate dynamic and virtualized connectivity
for cloud networking. That will also allow end-to-end optimization concerning transport energy
efficiency or intelligent sharing of network
resources, caches and data processing.
As an application, one expects the rise of specific cloud services provided by distributed and
interconnected data centers, such as mobile
access to community and social networks running in a specific “mobility cloud.” Networking
of such mobility centers might require new forms
of mobility management that go beyond serving
mobile users in the access networks, and include
mobility of content and information within the
interconnected distributed operation centers. A
significant efficiency benefit can be expected by
making use of path diversity, in both advanced
optical and wireless network technologies. A
promising alternative to state-of-the-art mobility
management, with its single centralized anchor
point, is a dynamic distributed mobility management [14]. In the cloud, the network dynamically
chooses the optimal location of mobility service
anchor points on a per-user/per-device or even
per-flow basis. Open connectivity services will
enable the cloud to use and manage the
multi-flow and multi-path routing capabilities
provided edge-to-edge across the networks.
CONCLUSIONS
The Internet has been based up to now on an
architectural model that has coped with a sustained continuous development and provided a
good environment for a wide range of applications. Nevertheless, challenges for this model
became apparent, namely at the applications
level, not only from the technical viewpoint but
also from the business one. This article addresses aspects of a new architecture, from three
approaches: information-centric networking,
cloud networking, and open connectivity services.
Information-centric networking considers
pieces of information as main entities of a networking architecture, rather than only indirectly
identifying and manipulating them via a node
hosting that information; this way, information becomes independent of the devices it is stored in, enabling efficient and application-independent information caching in the network.
Major challenges include global scalability, cache
management, congestion control, and deployment issues.
Cloud networking offers a combination and
integration of cloud computing and virtual networking. It is a solution that distributes the benefits of cloud computing more deeply into the
network, and provides a tighter integration of
virtualization features at computing and networking levels. Current challenges encompass
the optimization of networking inside cloud data
centers, the study of the impact of traffic patterns associated with cloud networking applications in highly distributed scenarios, and failure
protection in massively distributed environments.
Open connectivity services address transport
mechanisms issues, aiming at leveraging
advanced features of link technologies, namely
in optical networks, making use of network (and
path) diversity and advanced encoding techniques, and at dealing with ubiquitous mobility
of user, content and information objects in a
unified way. Challenges address, among others,
the development of a lightweight flow-aware
concept of routing across and managing of multipoint/protocol/path communications, satisfying
the needs of flash network slices, supporting
content and information delivery in informationcentric networks, and providing appropriate
dynamic and virtualized connectivity for cloud
networking.
ACKNOWLEDGMENTS
The authors would like to thank all their colleagues from the SAIL project team. Without
their contributions, this article and the insights
behind it would not have happened. This work has been partially funded by the European Commission under grant FP7-ICT-2009-5-257448 (SAIL).
REFERENCES
[1] V. Jacobson et al., “Networking Named Content,” Proc.
CoNEXT’09 — 5th Int’l. Conf. Emerging Networking
Experiments and Technologies, Rome, Italy, Dec. 2009.
IEEE Communications Magazine • July 2011
Communications
A
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
69
A
BEMaGS
F
Communications
IEEE
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
[2] P. T. Eugster et al., “The Many Faces of Publish/Subscribe,” ACM Computing Surveys, vol. 35, no. 2, June
2003, pp. 114-31.
[3] A. Zahemszky et al., “Exploring the Pub/Sub Routing &
Forwarding Space,” Proc. IEEE ICC Wksp. Networks of
the Future, Dresden, Germany, June 2009.
[4] B. Tarnauca and S. Nechifor, Eds., Netinf Evaluation, EC
FP7-ICT-4WARD Project, Deliv. D-6.3, June 2010
(http://www.4ward-project.eu).
[5] T. Koponen et al., “A Data-Oriented (and Beyond) Network Architecture,” Proc. ACM SIGCOMM ’07, Kyoto,
Japan, Aug. 2007.
[6] C. Intanagonwiwat et al., “Directed Diffusion for Wireless Sensor Networking,” IEEE/ACM Trans. Net., vol. 11,
no. 1, Feb. 2003, pp. 2–16.
[7] M. Al-Fares, A. Loukissas and A. Vahdat, “A Scalable,
Commodity Data Center Network Architecture,” Proc.
ACM SIGCOMM ’08, Seattle, WA, Aug. 2008.
[8] Arista Networks, Switching Architectures for Cloud Network Designs, http://www.aristanetworks.com/media/system/pdf/SwitchingArchitecture_wp.pdf, Apr. 2010; Architecting Low Latency Cloud Networks, http://www.aristanetworks.com/media/system/pdf/CloudNetworkLatency.pdf, May 2009, Menlo Park, CA.
[9] A. Carzaniga, A. Gorla, and M. Pezze, “Healing Web
Applications Through Automatic Workarounds,” Int’l. J.
Software Tools for Tech. Transfer, vol. 10, no. 6, Oct.
2008, pp. 493–502.
[10] K. Sato and H. Hasegawa, “Optical Networking Technologies That Will Create Future Bandwidth-Abundant
Networks,” IEEE/OSA J. Opt. Commun. and Net., vol. 1,
no. 2, July 2009, pp. A81–A93.
[11] D.T. Neilson, “Photonics for Switching and Routing,”
IEEE J. Selected Topics in Quantum Electronics, vol. 12,
no. 4, July–Aug. 2006, pp. 669–78.
[12] J. Baliga et al., “Green Cloud Computing: Balancing
Energy in Processing, Storage and Transport,” Proc.
IEEE, vol. 99, no. 1, Jan. 2011, pp. 149–67.
[13] G. J. Eilenberger et al., “Energy-Efficient Transport for
the Future Internet,” Bell Labs Tech. J., vol. 15, issue 2,
Sept. 2010, pp. 147–67.
[14] F. Bertin, Ed., Description of Generic Path Mechanism
based on Resource Sharing and Mobility Management,
EC FP7-ICT-4WARD Project, Deliv. D-5.2.1, Dec. 2009
(http://www.4ward-project.eu).
BIOGRAPHIES
BENGT AHLGREN received his Ph.D. in computer systems in
1998 from Uppsala University, Sweden. He conducts
research in the area of computer networking including the
protocols and mechanisms of the Internet infrastructure.
His main interest is the evolution of the Internet architecture, especially issues with naming and addressing on a
global scale. Lately his research focus is on designing networks based on an information-centric paradigm.
PEDRO A. ARANDA obtained his Telecommunications Engineer title at Polytechnic University of Madrid's (UPM's)
Telecommunications School, Spain. He joined Telefónica
I+D in 1991 and is currently a technology specialist, conducting research in the areas of the future of the Internet
and service agnostic networks. His main research interests
are the design of Internet grade architectures and the
behavior of BGP-4. Lately he has been working on the evolution of the Internet, especially issues related to interprovider and interdomain relationships.
PROSPER CHEMOUIL [F’03] received his Ph.D. in control theory
in 1978 from Nantes University. In 1980 he joined Orange
Labs (then CNET), France Telecom’s R&D Centre, where he
is currently director of a research program concerned with
the design and management of future networks.
LUIS M. CORREIA [SM’03] received his Ph.D. in electrical and
computer engineering from IST-TUL in 1991, where he is
currently a professor in telecommunications, with his work
focused on wireless/mobile communications. He has been
active in various projects within European frameworks. He
was part of the COST Domain Committee on ICT and has
been involved in Net!Works activities.
HOLGER KARL received his Ph.D. in 1999 from Humboldt University Berlin; afterward he joined Technical University Berlin. Since 2004 he has been a professor of computer networks
at the University of Paderborn. He is also responsible for
the Paderborn Centre for Parallel Computing, and has been
involved in various European and national research projects.
SARA OUESLATI received her Ph.D. degree in computer science and networks from École Nationale Supérieure des
Télécommunications, Paris, in 2000. She next joined France
Telecom R&D as a research engineer in the field of performance evaluation and design of traffic controls for multiservice networks, and has led the Traffic and Resource
Management research team since 2005.
MICHAEL SÖLLNER is a technical manager at Alcatel-Lucent
Bell Labs in Stuttgart, Germany. After he received a Ph.D.
degree in applied mathematics (1984), he held various
positions in the communication industry where he focused
on systems engineering and research for network architectures and protocols, and currently for mobile systems
beyond 3G and the future internet. He has been involved
in various European cross-industry research projects.
ANNIKKI WELIN is a senior researcher at Ericsson Research,
Packet Transport and Routing Department. She joined Ericsson in 1998. Her research interests include packet transport and overlay networks. She has co-authored more than
20 papers and over 20 patents. She has been active in various projects within European frameworks.
FUTURE INTERNET ARCHITECTURES: DESIGN AND
DEPLOYMENT PERSPECTIVES
PEARL: A Programmable
Virtual Router Platform
Gaogang Xie, Peng He, Hongtao Guan, Zhenyu Li, Yingke Xie, Layong Luo, Jianhua Zhang, and Yonggong Wang, Chinese Academy of Sciences
Kavé Salamatian, University of Savoie
ABSTRACT
Programmable routers supporting virtualization are a key building block for bridging the gap
between new Internet protocols and their
deployment in real operational networks. This
article presents the design and implementation
of PEARL, a programmable virtual router platform with relatively high performance. It offers
high flexibility by allowing users to control the
configuration of both hardware and software
data paths. The platform makes use of fast
lookup in hardware and software exceptions in
commodity multicore CPUs to achieve high-speed packet processing. Multiple isolated packet streams and virtualization techniques ensure
isolation among virtual router instances.
INTRODUCTION
Deploying, experimenting with, and testing new protocols and systems over the Internet have always been major issues. While one could use simulation tools for the evaluation of a new system aimed at large-scale deployment, real experimentation in an environment with sufficiently realistic settings, such as real traffic workload and application mix, is mandatory.
Therefore, easy programmable platforms that
can support high-performance packet forwarding
and enable parallel deployment and experimentation with different research ideas are in high demand.
Moreover, we are moving toward a future
Internet architecture that seems to be polymorphic rather than monolithic: the architecture will
have to accommodate simultaneous coexistence
of several architectures (e.g., Named Data Networking (NDN) [1]), including the current
Internet. Therefore, the future Internet should
be based on platforms running different architectures in virtual slices enabling independent
programming and configuration of the functions
of each individual slice.
Current commercial routers, the most important building blocks in the Internet, while attaining very high performance, offer researchers and developers only very limited access to
their internal components to implement and
deploy innovative networking architectures. In
contrast, open-software-based routers naturally
facilitate access to and adaptation of almost all
of their components, although frequently with
low packet processing performance.
As an example, recent OpenFlow switches
provide flexibility by allowing programmers to
configure the 10-tuple of flow table entries, enabling changes to how a flow's packets are processed. OpenFlow switches are not ready for non-IP-based packet flows, such as NDN. Moreover,
while the switches allow a number of slices for
different routing protocols through the FlowVisor, the slices are not isolated in terms of processing capacity, memory, and bandwidth.
Motivated by these facts, we have designed
and built a programmable virtual router platform,
PEARL, that can guarantee high performance.
The challenges are twofold: first, to manage the flexibility vs. performance trade-off, which translates into pushing functionality to hardware for performance vs. implementing it in software for flexibility; and second, to ensure isolation between
virtual router instances in both hardware and
software with low performance overhead.
This article describes the PEARL router’s
architecture, its key components, and the performance results. It shows that PEARL meets the
design goals of flexibility, high performance, and
isolation. In the next section, we describe the
design goals of the PEARL platform. These
goals can be taken as the main features of our
designed platform. We then detail the design
and implementation of the platform, including
both hardware and software platforms. In the
next section, we evaluate the performance of a
PEARL-based router using the Spirent TestCenter by injecting into it both IPv4 and IPv6 traffic.
Finally, we briefly summarize related work and
conclude the article.
DESIGN GOALS
In the past few years, several future Internet
architectures and protocols at different layers
have been proposed to cope with the challenges
Figure 1. Overview of the PEARL architecture.
the current Internet faces [1]. Evaluating the
reliability and performance of these proposed
innovative mechanisms is mandatory before
envisioning real-scale deployment. Besides theoretically evaluating the performance of a system
and simulating these implementations, one needs
to deploy them in a production network with a
real user behavior, traffic workload, resource
distribution, and applications mixture. However,
a major principle in experimental deployment is
the “no harm” principle that states that normal
services on a production network should not be
impacted by the deployment of a new service.
Moreover, no unique architecture or protocol
stack will be able to support all actual and future
Internet services and we might need specific
packet processing for given services. Obviously, a
flexible router platform with high-speed packet
processing ability and support of multiple parallel and virtualized independent architectures is
extremely attractive for both Internet research
and operation. Based on this observation, one can define isolation, flexibility, and high performance as the needed characteristics and design goals of a router platform for the future Internet.
In particular, the platform should be able to
cope with various types of packets including
IPv4, IPv6, and even non-IP, and be able to
apply packet routing as well as circuit switching.
Various software solutions like Quagga or XORP [2] have provided such flexible platforms, able to adapt their packet-processing components as well as to customize the functionalities of their data, control, and management planes. However, these approaches fail to be fast enough for use in an operational context where wire speed is needed. Nevertheless, by adding and configuring suitable hardware packet processing resources such as FPGAs, CPU cores, and memory, one can hope to meet the performance requirements. Indeed, flexibility and high performance are in conflict in most situations. Flexibility requires more functionality to be implemented in software to maximize programmability. On the other hand, high performance cannot be reached in software and needs
custom hardware. A major challenge for PEARL
is to allocate enough hardware and multi-cores
in order to achieve both flexibility and high performance.
Another design goal is relative to isolation.
By isolation we mean a mechanism that enables
different architectures or protocols running in
parallel on separate virtual router instances without impacting each other’s performance. In
order to achieve isolation, we should provide a
mechanism to ensure that one instance can only
use its allocated hardware (CPU cores and
cycles, memory, resources, etc.) and software
resources (lookup routing tables, packet queue,
etc.), and is forbidden to access resources of
other instances even when they are idle. We also
need a dispatching component to ensure that IP
or non-IP packets are delivered to specified
instances following custom rules defined over
medium access control (MAC) layer parameters,
protocols, flow labels, or packet header fields.
PEARL offers high flexibility through the
custom configurations of both the hardware and
software data paths. Multiple isolated packet
streams and virtualization techniques enable isolation among virtual router instances, while the
fast lookup hardware provides the capacity to
achieve high performance.
PLATFORM DESIGN AND
IMPLEMENTATION
SYSTEM OVERVIEW
PEARL uses commodity multicore CPU hardware platforms that run generic software as well
as specialized packet processing cards for high-performance packet processing, as shown in Fig.
1. The virtualization environment is built using
the Linux-based LXC solution. This enables
multiple virtual router instances to run in parallel over a CPU core or one router instance over
multiple CPU cores. Each virtual machine can
be logically viewed as a separate host. The hardware platform contains a field programmable
gate array (FPGA)-based packet processing card
with embedded TCAM and SRAM. This card
enables fast packet processing and strong isolation.
Isolation — PEARL implements multiple simultaneous fast virtual data planes by allocating separate hardware resources to each virtual data
plane. This facilitates strong isolation among the
hardware virtual data planes. Moreover, LXC takes advantage of a group of kernel features (namespaces, cgroups) to ensure isolation in software between virtual router instances. A multistream high-performance DMA engine is also
used in PEARL, which receives and transmits
packets via high-speed PCI Express bus between
hardware and software platforms. Each IO
stream can be either assigned to a dedicated virtual router or shared by several virtual routers
using a net IO proxy.
Flexibility — We use a TAP/TUN device as the network interface in each virtual machine. Each
virtual machine could be considered as a stan-
Figure 2. The architecture of PEARL's hardware data plane.
dard Linux host containing multiple network
ports. Thus, the IPv4, IPv6, OpenFlow, and even
non-IP protocol stacks can easily be loaded.
Adding new functions to a router is also convenient through programming Linux applications.
For example, to load IPv4 or IPv6, the Quagga
routing software suite can be used as the control
plane inside each Linux container.
can classify packets of any kind of protocol into different virtual routers as long as they are Ethernet based, including IPv4, IPv6, and non-IP protocols. The VID lookup table is managed by a
software controller which enables users to define
the fields as needed. The VID of the virtual
router to which a packet belongs is appended on
the packet as a custom header.
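To make the rule format concrete, the C sketch below models a VID lookup entry as value/mask pairs over a few header fields, mirroring the ternary matching a TCAM provides in hardware; the structure layout, field selection, and function names are illustrative assumptions rather than PEARL's actual controller interface.

/* Illustrative sketch (not PEARL's actual controller API): a VID lookup rule
 * as value/mask pairs.  An all-zero mask turns a field into a wildcard,
 * mirroring the ternary matching a TCAM provides in hardware. */
#include <stdint.h>

struct vid_rule {
    uint8_t  dmac[6], dmac_mask[6];   /* destination MAC (used for non-IP)  */
    uint16_t ethtype, ethtype_mask;   /* Ethernet type                      */
    uint32_t dst_ip,  dst_ip_mask;    /* IPv4 destination (0 = wildcard)    */
    uint16_t vid;                     /* virtual router ID to append        */
};

/* Software emulation of the hardware lookup: the first matching rule wins. */
static int vid_lookup(const struct vid_rule *tbl, int n,
                      const uint8_t dmac[6], uint16_t ethtype, uint32_t dst_ip)
{
    for (int i = 0; i < n; i++) {
        const struct vid_rule *r = &tbl[i];
        int match = 1;
        for (int b = 0; b < 6; b++)
            if ((dmac[b] & r->dmac_mask[b]) != (r->dmac[b] & r->dmac_mask[b]))
                match = 0;
        if ((ethtype & r->ethtype_mask) != (r->ethtype & r->ethtype_mask))
            match = 0;
        if ((dst_ip & r->dst_ip_mask) != (r->dst_ip & r->dst_ip_mask))
            match = 0;
        if (match)
            return r->vid;               /* VID appended as a custom header */
    }
    return -1;  /* no VID found: such packets take the CPU transceiver path */
}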
High Performance — The operations of routing table lookup and packet dispatch to different
virtual machines are always performance bottlenecks. PEARL offloads these two operations
into hardware to achieve high-speed packet forwarding. In addition, since LXC is a lightweight
virtualization technique with low overhead, the
performance is further improved.
Routing Table Lookup — In a network virtualization environment, each virtual router should
have a distinct routing table. Since there are no
standards for non-IP protocols until now, we
only consider the storage and lookup of routing
tables for IP-based protocols in hardware. It is
worth noting that routing tables for non-IP protocols can be accommodated through FPGA in
the cards.
Given limited hardware resources, we implement four routing tables in the current design.
The tables are stored in TCAM as well. We take
the VID combined with the destination IP
address as a search key. Exact matching is performed on the VID part of the key, and longest prefix matching is performed on the IP part in TCAM.
Once a packet matches in the hardware, it
needn’t be sent to the kernel for further processing, greatly improving the packet forwarding performance.
For non-IP protocols or the IP-based protocols that are not accommodated in the hardware,
we integrate a CPU transceiver module. The
module is responsible for transmitting the packet
to the CPU directly without looking up routing
tables. Whether a packet should be transmitted
by the CPU transceiver module is completely
determined by software. With the flexibility
offered by the CPU transceiver module, it is
easy to integrate more flexible software virtual
data planes into PEARL.
HARDWARE PLATFORM
To provide both high performance and strong
isolation in PEARL, we design a specialized
packet processing card. Figure 2 shows the architecture of the hardware data plane. It is a pipeline-based architecture, which consists of two main
data paths: transmitting and receiving. The
receiving data path is responsible for processing
the ingress packets, and the transmitting data
path processes the egress packets. In the following, the processing stages of the pipeline are
detailed.
Header Extractor — For each incoming packet,
one or many fields are extracted from the packet
header. These fields are used for virtual router
ID (VID) lookup in the next processing stage.
For IP-based protocols, a 10-tuple, defined following OpenFlow [3], is extracted, while for non-IP protocols, the MAC address is extracted.
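For illustration, the extracted fields for IP packets can be represented as the 10-tuple below, following the flow definition of the early OpenFlow specification [3]; the exact widths and ordering used by the card are not given in the article and are assumed here.

/* Sketch of the 10-tuple produced by the header extractor for IP packets,
 * following the early OpenFlow flow definition [3].  For non-IP packets only
 * the MAC address fields are filled in; widths and ordering are assumptions. */
#include <stdint.h>

struct pkt_key {
    uint16_t in_port;      /* ingress physical port                 */
    uint8_t  eth_src[6];   /* source MAC address                    */
    uint8_t  eth_dst[6];   /* destination MAC address               */
    uint16_t eth_type;     /* Ethernet type (0x0800 = IPv4, ...)    */
    uint16_t vlan_id;      /* VLAN identifier, if present           */
    uint32_t ip_src;       /* IP source address                     */
    uint32_t ip_dst;       /* IP destination address                */
    uint8_t  ip_proto;     /* IP protocol (TCP, UDP, ...)           */
    uint16_t l4_sport;     /* TCP/UDP source port                   */
    uint16_t l4_dport;     /* TCP/UDP destination port              */
};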
VID Lookup — Each virtual router in the platform is marked by a unique VID. This stage
classifies the packet based on the fields extracted
in the previous stage. We use TCAM for the
storage and lookup of the fields, which can be
configured by the users. Due to the special features of TCAM, each field of the rules in a VID
lookup table can be a wildcard. Hence, PEARL
Action Logic — The result of routing table
lookup is the next hop information, including
output card number, output port number, the
destination MAC address, and so on, which is
stored in an SRAM-based table. It defines how
to process the packet, so it can be considered as
Figure 3. Packet path in the software.
an action associated with each entry of the routing tables. Based on the next hop information, this stage performs actions such as forwarding, dropping, broadcasting, decrementing the time to live (TTL), or updating the MAC address.
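A possible C view of such an entry is sketched below; the action set follows the description above, while the concrete bit layout in SRAM is an assumption made only for illustration.

/* Sketch of a next-hop/action entry as described in the text; the concrete
 * SRAM bit layout is not specified in the article and is assumed here. */
#include <stdint.h>

enum pkt_action {
    ACT_FORWARD   = 1 << 0,   /* send out on (out_card, out_port)      */
    ACT_DROP      = 1 << 1,   /* discard the packet                    */
    ACT_BROADCAST = 1 << 2,   /* replicate to all ports of the VR      */
    ACT_DEC_TTL   = 1 << 3,   /* decrement the IP time to live         */
    ACT_SET_DMAC  = 1 << 4    /* rewrite the destination MAC address   */
};

struct nexthop_entry {
    uint8_t  out_card;     /* output card number                   */
    uint8_t  out_port;     /* output port number                   */
    uint8_t  dmac[6];      /* next-hop destination MAC address     */
    uint32_t actions;      /* bitmask of enum pkt_action values    */
};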
Multi-Stream DMA Engine — To accelerate
the IO performance and greatly exploit the parallel processing power of a multicore processor,
we design a multistream high-performance DMA
engine in PEARL. It can receive packets of different virtual routers from the network card to
different memory regions in the host, and transmit packets in the opposite direction via highspeed PCI Express bus. From a software
programmer’s perspective, there are multiple
independent DMA engines, and the packets of
different virtual routers are directed into different memory regions, which is convenient and
lockless for programming. Meanwhile, we make a trade-off between flexibility and high performance in the DMA transfer mechanism, and carefully redesign the DMA engine in the FPGA. The
DMA engine can transfer packets to the preallocated huge static buffer at contiguous memory locations. It greatly decreases the number of
memory accesses required to transfer a packet to
CPU. Each packet transported between the
CPU and the network card is equipped with a
custom header, which is used for carrying processing information to the destination, such as
the matching results.
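The sketch below shows what such a custom header might carry, based on the fields mentioned in the text (stream, lookup result, output port); the sizes and ordering are assumptions, not the card's actual format.

/* Sketch of the per-packet custom header exchanged between the card and the
 * CPU over the multi-stream DMA engine; field sizes and order are assumptions. */
#include <stdint.h>

struct dma_pkt_hdr {
    uint16_t stream_id;   /* DMA stream (and thus virtual router) of the packet */
    uint16_t pkt_len;     /* payload length in bytes                            */
    uint16_t vid;         /* virtual router ID appended by the VID lookup stage */
    uint8_t  matched;     /* nonzero if a route matched in TCAM                 */
    uint8_t  out_port;    /* output port used by the output scheduler on egress */
} __attribute__((packed)); /* keep the on-wire layout free of compiler padding */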
Output Scheduler — The egress packets sent
back by the CPU are scheduled based on their
output port number, which is a specific field in
the custom header of the packet. Each physical
output port is equipped with an output queue.
The scheduler puts each packet in the appropriate output queue for transmitting.
SOFTWARE PLATFORM
The software platform of PEARL consists of
several components, as shown in Fig. 3. These
include vmmd, to provide the basic management
functions for the virtual routers and packet processing cards; nacd, to offer a uniform interface
to the underlying processing cards outside the
virtual environment; routed, to translate the forwarding rules generated by the kernel or user
applications into a uniform format in each virtual router, and install these rules into the TCAM
of the processing cards; netio_proxy, to transmit
the packets between the physical interfaces and
virtual interfaces, and low_proxy, to dispatch
packets into low-priority virtual routers that
share one pair of DMA Rx/Tx buffers. With different configurations and combinations of these
programs, PEARL can generate different types
of virtual routers to achieve flexibility.
There are two types of virtual routers in our
PEARL: high- and low-priority virtual routers.
Each high-priority virtual router is assigned one
pair of DMA Rx/Tx buffers and an independent
TCAM space for lookup. With the high-speed
lookup based on TCAM, and efficient IO channels provided by the hardware, the virtual router
can achieve the maximum throughput in PEARL
platform. For low-priority virtual routers, all the
virtual routers share only one pair of DMA
Rx/Tx buffers and they can’t utilize TCAM for
lookup. The macvlan mechanism is adopted to
dispatch packets between multiple virtual
routers. The applications inside the low priority
virtual routers can use the socket application
programming interfaces (APIs) to capture packets from the virtual interfaces. Note that each packet needs to go through a context switch (system call) at least twice during transmission to the user application, resulting in relatively low IO performance.
We take IPv4 and IPv6 as two Internet protocols to show how the virtual routers can easily be
implemented on PEARL.
High-Priority IPv4 Virtual Router — To create a high-priority IPv4 virtual router in our
platform, the vmmd process first starts a new
Linux container with several virtual interfaces,
and collects the MAC address of each virtual
interface, installs these addresses in the underlying cards via the nacd process so that the hardware can identify the packets heading to this
virtual router, and copy packets into the
DMA Tx buffer assigned to this virtual router.
Then, the routed process is launched in the
new container. It extracts the routes through the
NETLINK socket inside the virtual router and
installs routes and the forwarding action in hardware, so the hardware can fill a small structure in memory to notify the netio_proxy process when a packet matches a route in TCAM. The
netio_proxy process delivers the packets either
to the virtual interface or directly to the physical
interface according to the forwarding action in
memory.
For example, most of the time, normal packets will match a route in the hardware. When the netio_proxy process receives these packets, it will directly send them through a DMA Tx buffer. An ARP request packet will not match any rule in the TCAM, so the netio_proxy process will deliver this packet to the virtual interface, receive the corresponding ARP reply packet from the virtual interface, and then send it to the physical interface.
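The fast-path behavior just described can be summarized by the sketch below of the netio_proxy receive loop for one high-priority VR; dma_rx(), dma_tx(), and tap_write() are hypothetical helpers standing in for DMA and TAP plumbing that the article does not detail.

/* Sketch of the netio_proxy fast path for a high-priority VR.  The helper
 * prototypes below are hypothetical placeholders for the real DMA/TAP code. */
#include <stddef.h>
#include <stdint.h>

struct dma_pkt_hdr { uint16_t stream_id, pkt_len, vid; uint8_t matched, out_port; };

int dma_rx(int stream, struct dma_pkt_hdr *h, uint8_t *buf, size_t len);
int dma_tx(int stream, const struct dma_pkt_hdr *h, const uint8_t *buf, size_t len);
int tap_write(int tap_fd, const uint8_t *buf, size_t len);

void netio_proxy_loop(int stream, int tap_fd)
{
    uint8_t buf[2048];
    struct dma_pkt_hdr h;

    for (;;) {
        int n = dma_rx(stream, &h, buf, sizeof(buf));
        if (n <= 0)
            continue;                       /* nothing received                 */
        if (h.matched)
            dma_tx(stream, &h, buf, n);     /* route already resolved in TCAM   */
        else
            tap_write(tap_fd, buf, n);      /* e.g., ARP: let the kernel inside */
                                            /* the container generate the reply */
    }
}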
Low-Priority Virtual Router — To create low-priority virtual routers, a tap device is set up by the vmmd process (tap0). Low-priority virtual routers are configured to share the network traffic of this tap device through the macvlan mechanism. The low_proxy process acts like a bridge,
transmitting packets between DMA buffers and
the tap device in both directions. As the MAC
addresses of the virtual interfaces are generated
randomly, we can encode the MAC addresses to
identify the virtual interface from which a packet
comes. For example, we can use the last byte of
the MAC address to identify the virtual interfaces: if the low_proxy process receives a packet
with source MAC address 02:00:00:00:00:00, it
knows that the packet is from the first virtual
interface in one of the low-priority virtual
routers, and transmits the packet to the first
physical interface immediately. We adopted this
method in the low_proxy and vmmd processes,
and use the second byte to identify the different
low-priority virtual routers. It not only saves the time
consumed by the inefficient MAC hash lookup
to determine where the packet comes from, but
also saves the space in TCAM, because all the
low-priority virtual routers need only one rule in TCAM (02:*:00:00:00:*).
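A minimal sketch of this MAC encoding and decoding is given below; the helper names are illustrative, not taken from PEARL's code.

/* Sketch of the MAC-address encoding for low-priority VRs: the second byte
 * identifies the virtual router and the last byte the virtual interface, so
 * the single TCAM rule 02:*:00:00:00:* covers all of them. */
#include <stdint.h>
#include <stdio.h>

static void encode_lowprio_mac(uint8_t mac[6], uint8_t vr_id, uint8_t if_idx)
{
    mac[0] = 0x02;                     /* locally administered, unicast       */
    mac[1] = vr_id;                    /* which low-priority virtual router   */
    mac[2] = mac[3] = mac[4] = 0x00;
    mac[5] = if_idx;                   /* which virtual interface in the VR   */
}

static int decode_lowprio_mac(const uint8_t mac[6], uint8_t *vr_id, uint8_t *if_idx)
{
    if (mac[0] != 0x02 || mac[2] || mac[3] || mac[4])
        return -1;                     /* not one of our low-priority interfaces */
    *vr_id  = mac[1];
    *if_idx = mac[5];
    return 0;
}

int main(void)
{
    uint8_t mac[6], vr, ifi;
    encode_lowprio_mac(mac, 0, 0);     /* first interface of the first VR */
    if (decode_lowprio_mac(mac, &vr, &ifi) == 0)
        printf("packet from VR %u, interface %u\n", vr, ifi);
    return 0;
}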
IPv4/IPv6 Virtual Router With Kernel Forwarding — In this configuration, the routed
process does not extract the routes from the kernel routing table; instead, it enables the ip_forward option of the kernel. As a result, all
packets will match the default route in TCAM
without the forwarding action. The netio_proxy
process transmits all these packets into the virtual interfaces so that the kernel will forward the
packet instead of the underlying hardware. The
tap/tun device is used as the virtual interface.
Since the netio_proxy is a user space process,
each packet needs two system calls to complete
the whole forwarding.
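For illustration, enabling kernel forwarding inside a container's network stack amounts to writing to the standard sysctl files, as in the minimal stand-alone sketch below; the article does not describe how the routed process actually does this.

/* Minimal sketch (not the routed implementation): enable IPv4/IPv6 kernel
 * forwarding by writing "1" to the standard sysctl files. */
#include <stdio.h>

static int enable_sysctl(const char *path)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;                 /* may require root inside the container */
    fputs("1\n", f);
    fclose(f);
    return 0;
}

int main(void)
{
    int rc = 0;
    rc |= enable_sysctl("/proc/sys/net/ipv4/ip_forward");
    rc |= enable_sysctl("/proc/sys/net/ipv6/conf/all/forwarding");
    return rc ? 1 : 0;
}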
Figure 4. Throughput of high-performance IPv4 virtual routers (throughput vs. number of VRs for 64-, 512-, and 1518-byte packets).
User-Defined Virtual Router — User-defined packet processing procedures can be implemented as plug-ins loaded by the netio_proxy process, which makes PEARL extensible. We opened
the basic packet APIs to users, such as
read_packet() and send_packet(). Users can write their own processing modules in C and run them in independent virtual routers. For lightweight
applications that do not need to deal with huge
amounts of network traffic, users can also write
a socket program in either high- or low-priority
virtual routers.
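As an example, a trivial plug-in that counts packets and forwards them unchanged could look like the sketch below; since the article names read_packet() and send_packet() without giving their signatures, the prototypes used here are assumptions.

/* Sketch of a user-defined processing module.  The read_packet()/send_packet()
 * prototypes are assumed for illustration; the platform's real signatures are
 * not given in the article. */
#include <stdint.h>
#include <stdio.h>

int read_packet(uint8_t *buf, int buf_size, int *in_port);        /* assumed API */
int send_packet(const uint8_t *buf, int len, int out_port);       /* assumed API */

void plugin_main(void)
{
    uint8_t buf[2048];
    unsigned long count = 0;
    int in_port;

    for (;;) {
        int len = read_packet(buf, sizeof(buf), &in_port);
        if (len <= 0)
            continue;
        count++;
        if ((count & 0xFFFF) == 0)              /* occasional statistics output */
            printf("processed %lu packets\n", count);
        send_packet(buf, len, in_port);         /* echo back out the ingress port */
    }
}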
EVALUATION AND DISCUSSION
We implemented a PEARL prototype using a
common server with our specialized network
card. The server is equipped with a 2.5 GHz 64-bit Xeon CPU and 16 GB of DDR2 RAM. The OS-level virtualization technique
Linux Containers (LXC) is used to isolate the
different virtual routers (VRs).
In order to demonstrate the performance and
flexibility of PEARL, our implementation is
evaluated in three different configurations: a
high-performance IPv4 VR, a kernel-forwarding
IPv4/IPv6 VR, and IPv4 forwarding in a low-priority VR.
We set up four independent subnetworks with Spirent TestCenter to measure the performance of the three configurations with one to four VRs. Three different packet lengths (64, 512, and 1518 bytes) were used (for IPv6, the packet lengths are 78, 512, and 1518 bytes; 78 bytes is the minimal IPv6 packet length supported by TestCenter).
Figure 4 shows the throughputs of increasing
numbers of VRs using configuration 1. Each virtual data plane has been assigned a pair of DMA
Rx/Tx buffers, and the independent routing table
space in the TCAM, resulting in efficient IO
performance and high-speed IP lookup. The results show that when the number of VRs reaches two, the throughput of PEARL for minimal-size IPv4 packets is up to 4 Gb/s, which is the maximum theoretical speed of our implementation.
Figure 5 illustrates the throughputs of
increasing numbers of VRs using configuration
2. Each virtual data plane has the same configuration as configuration 1, except that no virtual
data plane has its own routing space in TCAM.
In Fig. 5, the throughput of minimal IPv4/IPv6
packet forwarding is only 100 Mb/s when there is
only one VR. This is because we used the original kernel for packet forwarding, so each packet
needs to go through two context switches and three memory copies in our design. We can
optimize this by re-implementing the forwarding
functions as a plug-in in the netio_proxy process
in VRs.
Figure 6 shows the throughputs of increasing numbers of low-priority VRs using configuration
3. Low-priority VRs share only one pair of DMA Tx/Rx IO channels and cannot take advantage of TCAM to speed up the IP lookup. This configuration can be used to verify applications that handle little traffic (e.g., new routing protocols). We can see from the results that the total throughput for minimal-size IPv4 packets remains at 60 Mb/s as the number of VRs increases. The
macvlan mechanism used for sharing the network traffic between multiple VRs results in a
long kernel path for packet processing, so the
total performance is even lower than the IPv4
kernel forwarding in configuration 2. We can
improve the performance by developing a new
mechanism that suits our case.
RELATED WORK
Figure 5. Throughput of kernel forwarding IPv4/IPv6 virtual routers (throughput vs. number of kernel-forwarding VRs for IPv4 packets of 64, 512, and 1518 bytes and IPv6 packets of 78, 512, and 1518 bytes).
Figure 6. Throughput of low-priority IPv4 virtual routers (throughput vs. number of low-priority VRs for 64-, 512-, and 1518-byte packets).
Recent research efforts, such as the vRouter Project [4], RouteBricks [5], and PacketShader [6], have exploited the tremendous power of modern multicore CPUs and multiqueue network interface
cards (NICs) to build high-performance software
routers on commodity hardware. Memory latency or IO performance becomes the bottleneck
for small packets in such platforms. Due to the
complexity of the DMA transfer mechanism in
commodity NICs, the performance of the highspeed PCI Express bus is not fully exploited.
Meanwhile, traditional DMA transfer and routing table lookup result in multiple memory
accesses. In PEARL, we simplify the DMA
transfer mechanism and redesign the DMA
engine to accelerate IO performance, and
offload the routing table lookup operation to the
hardware platform.
OpenFlow [3] enables rapid innovation of
various new protocols, and divides the function
into control plane and data plane. OpenFlow
provides virtualization through FlowVisor, without isolation in hardware. In contrast, PEARL
hosts multiple virtual data planes in the hardware itself, which could offer both strong isolation and high performance.
SwitchBlade [7] builds a virtualized data
plane in an FPGA-based hardware platform. It
classifies packets based on MAC address.
PEARL, on the other hand, dispatches packets
into different VRs based on the 10-tuple. With
the flexibility of the 10-tuple and wildcard,
PEARL has the capability to classify packets
based on the protocol of any layer. Moreover, in
PEARL, all packets are transmitted to the CPU
for switching.
The Supercharged Planetlab Platform (SPP)
[8] is a high-performance multi-application overlay network platform. SPP utilizes the network
processor (NP) to enhance the performance of
traditional Planetlab [9] nodes. SPP divides the
whole data plane functions into different
pipeline stages on NP. Some of these pipeline
stages can be replaced by custom code, resulting
in extensible packet processing. However, the flexibility and isolation of SPP are limited due to the inherent vendor-specific architecture of the NP.
PEARL takes advantage of the flexibility and
full parallelism of FPGA to run multiple highperformance virtual data planes in parallel, while
keeping good isolation in a fast data path.
CoreLab [10] is a new network testbed archi-
tecture supporting a full-featured development
environment. It enables excellent flexibility by
offering researchers a fully virtualized environment on each node for any arrangement.
PEARL can support similar flexibility in the
software data path. Meanwhile, PEARL offers a
high-performance hardware data path for performance-critical virtual network applications.
CONCLUSIONS
We aim at flexible routers to bridge the gap
between new Internet protocols and practical
test and deployment. To this end, this work presents a programmable virtual router platform,
PEARL. The platform allows users to easily
implement new Internet protocols and run multiple isolated virtual data planes concurrently. A
PEARL router consists of a hardware data plane
and a software data plane with DMA engines for
packet transmission. The hardware data plane is
built on top of an FPGA-based packet processing card with TCAM embedded. The card facilitates fast packet processing and IO virtualization.
The software plane is built by a number of modular components and provides easy program
interfaces. We have implemented and evaluated
the virtual routers on PEARL.
ACKNOWLEDGMENT
This work was supported by the National Basic
Research Program of China under grant no.
2007CB310702, the National Natural Science
Foundation of China (NSFC) under grant no.
60903208, the NSFC-ANR (Agence Nationale de la Recherche, France) project under grant no. 61061130562, and the Instrument Developing
Project of the Chinese Academy of Sciences
under grant no. YZ200926.
REFERENCES
[1] V. Jacobson et al., “Networking Named Content,” Proc. ACM CoNEXT 2009, Rome, Italy, Dec. 2009.
[2] M. Handley, O. Hodson, E. Kohler, “XORP: an Open
Platform for Network Research,” ACM SIGCOMM
Comp. Commun., vol. 33, issue 1, Jan. 2003.
[3] N. McKeown et al., “OpenFlow: Enabling Innovation in
Campus Networks,” Comp. Commun. Rev., vol. 38, Apr.
2008, pp. 69–74.
[4] N. Egi et al., “Towards High Performance Virtual
Routers on Commodity Hardware,” Proc. 2008 ACM
CoNEXT Conf., Madrid, Spain, 2008.
[5] M. Dobrescu et al., “RouteBricks: Exploiting Parallelism
to Scale Software Routers,” Proc. ACM SIGOPS 22nd
Symp. Op. Sys. Principles, Big Sky, Montana, 2009.
[6] S. Han et al., “PacketShader: a GPU-Accelerated Software Router,” ACM SIGCOMM Comp. Commun. Rev.,
vol. 40, 2010, pp. 195–206.
[7] M. Anwer et al., “SwitchBlade: A Platform for Rapid
Deployment of Network Protocols on Programmable
Hardware,” ACM SIGCOMM Comp. Commun. Rev., vol.
40, 2010, pp. 183–94.
[8] J. Turner et al., “Supercharging PlanetLab — A High Performance, Multi-Application, Overlay Network Platform,”
ACM SIGCOMM ’07, Kyoto, Japan, Aug. 27–31, 2007.
[9] B. Chun et al., “Planetlab: an Overlay Testbed for
Broad-Coverage Services,” ACM SIGCOMM ’03, Karlsruhe, Germany, Aug. 25–29, 2003.
[10] CoreLab, http://www.corelab.jp/
BIOGRAPHIES
GAOGANG XIE ([email protected]) received a Ph.D. degree in
computer science from Hunan University in 2002. He is a
professor at the Institute of Computing Technology (ICT),
Chinese Academy of Sciences (CAS). His research interests include future Internet architecture, programmable
virtual router platforms, and Internet measurement and
modeling.
PENG HE ([email protected]) received his B.S. degree in
electronic and information engineering from Huazhong
University of Science and Technology in 2004. He is now
pursuing a Ph.D. degree in computer architecture at the
ICT, CAS. His research interests include high-performance
packet processing and virtual router design.
HONGTAO GUAN ([email protected]) is currently an assistant professor at ICT, CAS. His research interests include virtualization technologies for computer networks and
routers. He studied computer science at Tsinghua University from 1999 to 2010 and obtained B.E. and Ph.D. degrees.
He took part in the BitEngine12000 project from 2002 to
2005, which was the first IPv6 core router in China.
ZHENYU LI ([email protected]) received a Ph.D. degree from
ICT/CAS in 2009, where he serves as an assistant professor.
His research interests include future Internet design, P2P
systems, and online social networks.
YINGKE XIE ([email protected]) received his Ph.D. degree from
ICT, CAS, where he serves as an associate professor. His
research interests include high-performance packet processing, reconfigurable computing, and future network
architecture.
LAYONG LUO ([email protected]) is currently working
toward his Ph.D. degree at ICT, CAS. He is currently a
research assistant in the Future Internet Research Group of
ICT. His research interests include programmable virtual
router, reconfigurable computing (FPGA), and network virtualization.
JIANHUA ZHANG ([email protected]) received his B.S.
and M.S. from University of Science & Technology Beijing,
China, in 2006 and 2009, respectively. He is currently a
Ph.D. student at ICT, CAS. His research interest is in the
future Internet.
YONGGONG WANG ([email protected]) received his B.S. and M.S. from Xidian University, Xi’an, China, in 2005
and 2008, respectively. He is currently a Ph.D. student at
ICT, CAS. His research interest is in the future Internet.
KAVÉ SALAMATIAN ([email protected]) is a professor at the University of Savoie. His main areas of research
are Internet measurement and modeling and networking
information theory. He was previously a reader at Lancaster University, United Kingdom, and an associate professor
at Université Pierre et Marie Curie. He graduated in 1998
from Paris SUD-Orsay University, where he worked on joint
source channel coding applied to multimedia transmission
over Internet for his Ph.D. He is currently a visiting professor at the Chinese Academy of Science.
SERIES EDITORIAL
TOPICS IN NETWORK AND SERVICE MANAGEMENT
George Pavlou
Aiko Pras
This is the 11th issue of the series on Network and
Service Management, which is typically published
twice a year. It was originally published in April and October, but since last year it has been published in July and December. The series provides articles on the latest developments
in this well-established discipline, highlighting recent
research achievements and providing insight into both theoretical and practical issues related to the evolution of the
discipline from different perspectives. The series provides
a forum for the publication of both academic and industrial research, addressing the state of the art, theory, and
practice in network and service management.
During the 12th Integrated Management Symposium
(IM 2011), which was held in Trinity College Dublin last
May, the new CNOM committee was elected after a call
for nominations. The previous Chair, Dr. Marcus Brunner
of NEC Europe, was re-elected, while the previous Vice
Chair, Prof. George Pavlou of University College London,
stepped down after having served a term of two years. The
previous Technical Program Chair, Prof. Lisandro
Granville of the Federal University of Rio Grande do Sul, Brazil,
was elected Vice Chair, while Dr. Filip de Turck of Ghent
University, Belgium, was elected Technical Program Chair.
Finally, the previous Secretary, Dr. Brendan Jennings of
Waterford Institute of Technology, Ireland, was re-elected
to this position. The new Chairs plan to continue the organization of successful events such as IM, NOMS, CNSM,
and others. Key to the success of these events has been the
good collaboration with the IFIP TC6 sister organization
WG 6.6, whose Chair is Prof. Aiko Pras of the University
of Twente, Netherlands, with Co-Chair Dr. Olivier Festor
of INRIA, France.
During the same Symposium (IM 2011) it was
announced that Prof. Morris Sloman has been elected
IEEE Fellow. Professor Sloman is among the most influential researchers in our field, having established and provided seminal contributions in policy-based management, and is the current Editor-in-Chief of IEEE Transactions on Network and Service Management (TNSM). In addition, Prof.
George Pavlou received the Dan Stokesberry Award. This
award is given in memory of IM ’97 Chair Dan Stokesberry at each IM Symposium to an individual who has made
particularly distinguished technical contributions to the
growth of the field. Professor Pavlou has contributed significantly in various areas, including most notably management protocols and quality of service management. The
previous recipients of the Dan Stokesberry award have
been Robbie Cohen of AT&T (1997), Morris Sloman of
Imperial College London (1999), Heinz-Gerd Hegering of
Ludwig Maximilian University of Munich (2001), Aurel
Lazar of Columbia University (2003), John Strassner of
Motorola Research (2005), Joe Hellerstein of Microsoft
(2007), and Raouf Boutaba of the University of Waterloo
(2009).
The European Conference on Autonomous Infrastructure, Management and Security (AIMS), which was established by the EMANICS Network of Excellence, continued running this year, with its fifth instance (AIMS 2011) taking place in Nancy, France; http://www.aims-conference.org/2011/. The key annual event in this area, which this year was the 12th IEEE/IFIP Integrated Management Symposium (IM 2011), was held 23–27 May in Dublin, Ireland; http://www.ieee-im.org/2011/. The second key annual event in this area is the relatively new IEEE/IFIP Conference on Network & Service Management (CNSM 2011), which has become another flagship event, complementing IM and NOMS. This year's second CNSM will take place 24–28 October in Paris, France, and aims to repeat the success of last year's conference in Niagara Falls, Canada; http://www.cnsm2011.fr/.
We again experienced overwhelming interest in the 11th
issue, receiving 22 submissions in total. Each article received at least three independent reviews. We finally
selected four articles, resulting in an acceptance rate of
18.2 percent. It should be mentioned that the acceptance
rate for all the previous issues has ranged between 14 and
25 percent, making this series a highly competitive place to
publish. We intend to maintain our rigorous review process in future issues, thus maintaining the high quality of
the published articles.
The first article, “Toward Decentralized Probabilistic
Management” by Gonzalez Prieto, Gillblad, Miron, and
Steinert, argues that the adoption of a decentralized and
probabilistic paradigm for network management will be
crucial to meet the challenges of future networks and illustrates the paradigm through example solutions of real-time
monitoring and anomaly detection.
The second article, “Network Resilience: A Systematic
Approach” by Smith, Scholler, Fessi, Karaliopoulos, Lac,
Hutchison, Sterbenz, and Plattner, presents a systematic
approach to building resilient networked systems, studying
fundamental elements at a framework level and demonstrating its applicability through a concrete case study.
The third article, “A Survey of Virtual LAN Usage in
Campus Networks” by Yu, Sun, Feamster, Rao, and Rexford, describes how three university campuses and one academic department use VLANs to achieve a variety of
goals, arguing that VLANs are ill suited to some of these
goals and that their use leads to significant complexity in
the configuration of network devices.
Finally, the fourth article, “Toward Fine-Grained Traffic Classification” by Park, Won, and Hong, proposes a
fine-grained traffic classification scheme based on the analysis of existing classification methodologies and demonstrates that the proposed scheme can provide more
in-depth classification results for analyzing user contexts.
We hope that readers of this issue again find the articles informative, and we will endeavor to continue with
similar issues in the future. We would finally like to thank
all the authors who submitted articles to this series and the
reviewers for their valuable feedback and comments on the
articles.
BIOGRAPHIES
GEORGE PAVLOU ([email protected]) is a professor of communication
networks in the Department of Electronic and Electrical Engineering, University College London, United Kingdom. He received a Diploma in engineering from the National Technical University of Athens, Greece, and
M.Sc. and Ph.D. degrees in computer science from University College London. His research interests focus on network management, networking,
and service engineering, including aspects such as traffic engineering and
quality of service management, policy-based systems, autonomic networking, content-centric networking, and communications middleware. He has
been instrumental in a number of European and U.K. research projects, and
has contributed to standardization activities in ISO, ITU-T and IETF. He is
Vice Chair of IEEE CNOM, and was the technical program co-chair of the
Seventh IFIP/IEEE Integrated Management Symposium (IM 2001) and the
Tenth IFIP/IEEE International Conference on Management of Multimedia
and Mobile Networks and Services (MMNS 2008).
AIKO PRAS ([email protected]) is an associate professor in the Departments
of Electrical Engineering and Computer Science at the University of Twente,
the Netherlands, and a member of the Design and Analysis of Communication Systems Group. He received a Ph.D. degree from the same university
for his thesis, Network Management Architectures. His research interests
include network management technologies, network monitoring, measurements, and security. He chairs IFIP Working Group 6.6 on “Management of
Networks and Distributed Systems,” and has been research leader in the
European Network of Excellence on Management of the Internet and Complex Services (EMANICS). He is a Steering Committee member of the
IFIP/IEEE NOMS, IM, CNSM, and AIMS Symposia, and has been technical
program chair of several conferences, including IFIP/IEEE Integrated Management Symposium 2005 (IM 2005) and IFIP/IEEE Management Week
2009 (ManWeek2009).
TOPICS IN NETWORK AND SERVICE MANAGEMENT
Toward Decentralized
Probabilistic Management
Alberto Gonzalez Prieto, Cisco Systems
Daniel Gillblad and Rebecca Steinert, Swedish Institute of Computer Science
Avi Miron, Israel Institute of Technology
ABSTRACT
In recent years, data communication networks have grown to immense size and have
been diversified by the mobile revolution.
Existing management solutions are based on a
centralized deterministic paradigm, which is
appropriate for networks of moderate size
operating in relatively stable conditions. However, it is becoming increasingly apparent that
these management solutions are not able to
cope with the large dynamic networks that are
emerging. In this article, we argue that the
adoption of a decentralized and probabilistic
paradigm for network management will be crucial to meet the challenges of future networks,
such as efficient resource usage, scalability,
robustness, and adaptability. We discuss the
potential of decentralized probabilistic management and its impact on management operations, and illustrate the paradigm by three
example solutions for real-time monitoring and
anomaly detection.
INTRODUCTION
The work presented in this article was done while the first author was at the Royal Institute of Technology (KTH), Sweden.
Current network management solutions typically
follow a centralized and deterministic paradigm,
under which a dedicated station executes all
management tasks in order to manage a set of
devices [1]. This paradigm has proved successful
for communication networks of moderate size
operating under relatively stable network conditions. However, the growing complexity of communication networks, with millions of network
elements operating under highly dynamic network conditions, poses new challenges to network management, including efficient resource
usage, scalability, robustness, and adaptability.
Scalability and robustness are essential for coping with the increasing network sizes while providing resilience against failures and
disturbances. Adaptability of management solutions will be crucial, as networks will increasingly operate under conditions that vary over time. Furthermore, since communication and computational capacities are limited, it is critical that management solutions use
network resources efficiently. In order to meet
these challenges, new approaches to network
management must be considered.
We believe that two key approaches to network management will become increasingly
important: decentralized management, where
management algorithms operate locally at the
managed nodes without centralized control; and
probabilistic management, in which management
decisions and policies are not based on deterministic and guaranteed measurements or objectives.
As we argue in this article, decentralized and
probabilistic approaches are well positioned to
solve the critical network management challenges
listed above, and therefore a move toward decentralized probabilistic management seems likely.
This move, however, means a fundamental shift
in how network performance is viewed and controlled, and introduces operational changes for
those that configure and maintain networks.
In this article we discuss decentralized probabilistic management, describing how it can solve
pressing challenges that large-scale dynamic networks bring to management, and how the adoption of this approach impacts network
operations. We discuss the benefits, drawbacks,
and impact of moving toward decentralized
probabilistic management. We provide evidence of current developments and the directions they point to, and we present three concrete examples that demonstrate the merits of decentralized probabilistic management.
DECENTRALIZED
PROBABILISTIC MANAGEMENT
ASPECTS OF PROBABILISTIC MANAGEMENT
Management solutions can be probabilistic in
one or several aspects. A probabilistic management algorithm:
•Makes use of probabilistic models for representing the network state. That is, the algorithm
represents the network state as probability distributions, using, for example, simple parametric
representations [2], more complex graphical- or
mixture models [3, 4], or sampling from generative models.
•Does not necessarily use deterministic objectives, but rather objectives that are specified in
terms of probabilities and uncertainties [2, 5].
This could, for example, mean specifying a
bound on service failure probability.
•Might provide its output in terms of probability distributions, instead of deterministic values. As an example, the average link load over a
network could be reported with an expected
mean value and variance instead of a single
value [6].
•Might implement a probabilistic sampling
algorithm, say, by randomly turning on and off
management functions [6], relying on a random
subset of nodes for estimating an aggregate
value for all nodes [7], or sampling relevant network parameters at random rather than regular
intervals [8].
•Might perform network control actions or
explore the consequences of control actions
using an element of randomness [9]. For example, the efficiency of different routing strategies
could be continuously explored to adapt to
changing network conditions.
The first of these aspects is inherent to all
probabilistic management solutions, whereas the
other aspects listed may or may not be present
in a specific algorithm.
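To make these aspects more concrete, the following minimal Python sketch (an illustration of the listed aspects in general, not code from any of the solutions discussed later) models a link delay metric as a simple parametric distribution, accepts a probabilistic objective, and reports a probabilistic output; the 50 ms threshold and the 1 percent objective are arbitrary assumptions.

    import math
    import random

    class LinkDelayModel:
        """Simple parametric (Gaussian) model of a monitored link metric."""
        def __init__(self):
            self.samples = []

        def update(self, delay_ms):
            self.samples.append(delay_ms)

        def mean(self):
            return sum(self.samples) / len(self.samples)

        def variance(self):
            m = self.mean()
            return sum((x - m) ** 2 for x in self.samples) / len(self.samples)

        def prob_exceeds(self, threshold_ms):
            """Probabilistic output: P(delay > threshold) under the fitted model."""
            z = (threshold_ms - self.mean()) / math.sqrt(self.variance() + 1e-9)
            return 0.5 * math.erfc(z / math.sqrt(2))

    model = LinkDelayModel()
    for _ in range(1000):                      # probabilistic sampling of the metric
        model.update(random.gauss(20.0, 5.0))  # synthetic probe delays (assumption)

    # Probabilistic objective: keep P(delay > 50 ms) below 1 percent.
    print("mean = %.1f ms, variance = %.1f" % (model.mean(), model.variance()))
    print("objective met:", model.prob_exceeds(50.0) < 0.01)

The same pattern, a fitted distribution checked against a probability bound rather than a fixed limit, underlies the example solutions presented later in this article.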
POTENTIAL BENEFITS AND DRAWBACKS
Decentralized approaches have several benefits:
they scale very well with increasing network size,
adapt to churn well, avoid single points of failure, and thus can be significantly more robust
than centralized solutions.
Probabilistic approaches can be significantly
more resource-efficient than deterministic ones.
By using the slack provided by the use of probabilistic rather than deterministic objectives
(which often assume worst-case scenarios), the
amount of bandwidth and processing resources
consumed by the solutions can be significantly
reduced. This feature is highly valuable for
volatile networks, such as in cloud computing.
Probabilistic approaches are also highly suitable
for efficiently managing uncertainty and noise,
thereby improving robustness in algorithm performance. Furthermore, probabilistic management changes the way networks are configured,
as goals of the management algorithms can be
stated as acceptable probabilities of distributions
of performance metrics, allowing for intuitive
interaction with the operator.
A possible drawback of decentralized
approaches is that the lack of centralized control
typically leads to suboptimal solutions to management problems. For probabilistic approaches
a possible drawback follows from the introduction of uncertainty. It does not provide operators
with deterministic control over the state of managed devices. Note that while probabilistic management cannot provide hard guarantees on the
accuracy of network measurements, capturing an
accurate snapshot of today’s dynamic and volatile
networks is already impossible. Managers must
already accept some level of uncertainty in the
data collected for management operations, and
the introduction of probabilistic approaches
might therefore come with acceptable cost in
this regard.
The combined benefits of decentralized and
probabilistic approaches amount to management
solutions that operate in a failure-resilient and
resource-efficient manner. The potential drawbacks are less detailed control over managed
devices and suboptimal solutions.
IMPACT ON NETWORK MANAGEMENT
The introduction of decentralized probabilistic
management approaches will bring a paradigm
shift in how we specify network objectives and
view network performance. Specifically, terms
such as risk and uncertainty must now be taken
into account when configuring and running networks. For example, although probabilistic management solutions may achieve better
performance on average, they may, with some
small probability, miss objectives or perform
badly. Managers can no longer rely on strict
guarantees on network performance, and need
to view the goals of the network as an expected
service quality while allowing for some variance.
Additionally, the information that management
and configuration decisions are based on is no
longer guaranteed to be within deterministic
bounds, and could, with a low probability, be out
of the expected range.
With the new paradigm, network managers
will not only be utilizing strict rules, but will also
be specifying objectives that include uncertainties. In addition to monitoring network performance, the role of a network manager is likely to
also include monitoring the performance of
decentralized probabilistic management solutions. Related to this, managers will have to analyze problems in the network differently, as the
increased autonomy and uncertainty of decentralized probabilistic management solutions
could potentially allow for some underlying
problems to remain undetected.
In summary, network managers will need to
think about the network in terms of uncertainties and likely scenarios while considering the
expected cost and cost variability. This is a very
different approach from current practices,
where managers focus on upholding strict guarantees on parameters staying within predetermined limits.
FURTHER CONSIDERATIONS
Autonomous mechanisms are an important part
of efficient management processes in complex
networks. Naturally, adaptability is key to achieving the autonomy necessary to reduce operative
costs in future networks. Although facilitated by
a probabilistic approach, adaptability is by no
means an inherent property and must be considered during method development. However, the
combination of decentralized and probabilistic
approaches allows for the design of highly adaptable solutions.
Increased autonomy and adaptability will
likely lead to less detailed control of managed
resources for the network operators. In our view,
this means that decentralized probabilistic solutions need to be developed with an additional
property in mind to gain acceptance: the solutions must provide managers with an accurate
prediction of the performance of a management
solution (e.g., the amount of resources it con-
sumes), and managers need to be able to control
this performance. As illustrated by specific
examples provided later, this type of performance control and prediction is actually rather
straightforward to express in the adaptive probabilistic models we envision.
CURRENT DEVELOPMENT
This section presents prior work in the area of
decentralized probabilistic management systems,
discussing, in turn, the management framework,
and management operations in the areas of monitoring, diagnosis, and traffic management.
Probabilistic approaches can be used to
enhance the management infrastructure in terms
of resource efficiency [6, 10]. A probabilistic
decentralized framework for network management is presented in [6], in which management
functions are randomly turned on or off, thereby
effectively exploiting redundancy in those functions. By means of simulation, the study demonstrates reduced effort and resources required for
performance and fault management operations,
while still achieving a sound level of accuracy in
the overall network view. Relying on probabilistic estimates, self-organization of sensor and ad
hoc networks is discussed in [10]. The study
makes use of connectivity probability information in order to select the management clusters
that can efficiently carry out the management
tasks.
A further example makes use of random sampling among network entities, where network
monitoring operations benefit from gossip-based
solutions for aggregation of network management data. This can be carried out by nodes that
collaborate with randomly selected neighbors
[11]. Neighbor sampling during neighbor discovery results in more efficient data dissemination
than non-gossip flooding schemes. A more
recent study [12] further improves the efficiency
of the neighbor discovery process by implementing a probabilistic eyesight direction function,
which practically narrows the direction through
which neighbors are sought.
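As a generic illustration of gossip-based aggregation (a push-sum style sketch under simplifying assumptions such as a fully connected overlay, not the specific protocols of [11, 12]), each node repeatedly shares half of its running sum and weight with a randomly chosen peer, and every local ratio converges toward the global average without any central collection point.

    import random

    def gossip_average(values, rounds=50, seed=0):
        """Push-sum gossip: every node repeatedly sends half of its running
        (sum, weight) pair to a randomly chosen peer; each local ratio
        sum/weight converges toward the global average."""
        random.seed(seed)
        sums = list(values)
        weights = [1.0] * len(values)
        for _ in range(rounds):
            for i in range(len(values)):
                j = random.randrange(len(values))   # random peer (full mesh assumed)
                half_s, half_w = sums[i] / 2.0, weights[i] / 2.0
                sums[i], weights[i] = half_s, half_w
                sums[j] += half_s
                weights[j] += half_w
        return [s / w for s, w in zip(sums, weights)]

    loads = [random.uniform(0.0, 100.0) for _ in range(20)]   # per-node link loads
    estimates = gossip_average(loads)
    print("true average:", sum(loads) / len(loads))
    print("node 0 estimate:", estimates[0])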
Network diagnosis is explored in multiple
studies using probabilistic representations such
as graphical models to infer the fault. Two studies also add decentralized processing [3, 4].
Moreover, [4] implements a collaborative
approach for a Bayesian network model. The
scheme effectively handles uncertainty in dynamic networks, which was demonstrated on three
different scenarios. The probabilistic approach
makes it possible to provide diagnostics with a
limited amount of information, although the more evidence that is available, the greater the certainty the system achieves. The study reported in [3]
presents an extensible Bayesian-inference architecture for Internet diagnosis, which deploys distributed diagnostic agents and includes a
component ontology-based probabilistic
approach to diagnosis. The proposed architecture was successfully demonstrated with real-world Internet failures over a prototype network.
Traffic management operations can also take
advantage of probabilistic approaches. In [8], the
authors propose sampling the data such that
only a small number of packets are actually cap-
tured and reported, thereby reducing the problem to a more manageable size. End user congestion control mechanisms are presented in
[13], which interact with a probabilistic active
queue management of flows. The model captures the packet level dynamics and the probabilistic nature of the marking mechanism for
investigating the bottleneck link and profiling
the queue fluctuations, eventually gaining better
understanding regarding the dynamics of the
queue, and means to cope with congestion.
The sample studies presented above demonstrate preliminary positive experience gained
with decentralized probabilistic approaches for
network management, and exemplify directions
for further evolution toward decentralized probabilistic management.
Probabilistic practices are already widely used
in various networking and communications processes. Examples include carrier sense multiple
access with collision detection (CSMA/CD), the
Ethernet protocol ensuring that only one node is
transmitting its data on the network wire at any
point of time; and statistical multiplexing in IP
networks for better resource efficiency. These
examples cope with vast amounts of information
and limited resource availability by implementing probabilistic methods in a distributed manner. There is no question that such probabilistic
practices are successful, as shown by the fact
that they are standardized and widely deployed.
As networks increase in scale, it is likely that
network management operations will also start
deploying decentralized probabilistic approaches.
The studies reported above represent early
examples in such a development.
PROBABILISTIC
MANAGEMENT ALGORITHMS
In this section we present three algorithms to
illustrate different aspects of decentralized probabilistic management and applications within
fault and performance monitoring. Such algorithms are responsible for estimating the network state, and are crucial functions of a
complete management system as they support
other management tasks, including fault, configuration, accounting, performance, and security
management.
First, we discuss a probabilistic approach to
anomaly detection and localization, which makes
use of local probabilistic models to adapt to
local network conditions, while setting the management objectives in probabilistic terms. Second, we describe a tree-based algorithm for
probabilistic estimation of global metrics, in
which both objectives and reported results are
probabilistic in nature. Finally, we present an
alternative scheme for the estimation of network
metrics by counting the number of nodes in a
group, which makes use of random sampling
over network entities in addition to the use of
probabilistic objectives and outputs.
PROBABILISTIC ANOMALY DETECTION
We have devised a distributed monitoring algorithm that provides autonomous detection and
localization of faults and disturbances, while
adapting its use of network resources to local
network conditions [2, 14]. The decentralized
approach is based on local probabilistic models
created from monitored quality of service (QoS)
parameters, such as drop rate and link latency,
measured by probing. Based on the probabilistic
models, the distributed algorithm provides link
disturbance monitoring and localization of faults
and anomalies.
The anomaly detection approach serves as an
example for some of the different aspects of
using probabilistic management. The algorithm
operates based on probabilistic objectives as
input, instead of deterministically set parameters. This significantly reduces the requirements
on manual configuration, of either individual
network components or across the network, even
when network conditions are highly variable.
Given such probabilistic objectives, estimated
probability models are used for adjusting relevant low-level algorithm parameters. The
autonomous adjustment of low-level parameters
matching the set of probabilistic objectives
enables predictive control of the algorithm performance within probabilistic guarantees. Moreover, the probabilistic models can be used for
prediction and decision making, as estimated
model parameters can be extracted from the
algorithm. This enables other parts of the network management (e.g., traffic management) to
take advantage of the estimated probabilistic
models for autonomous configuration of network parameters.
To run the algorithm, the managing operator specifies a number of high-level management requirements in terms of network resources. Here, the high-level requirements are expressed as probabilities related to
probing traffic and detection delays. Low-level
parameters, such as probing rates and probing
intervals, autonomously adapt to current network conditions and the management requirements.
Specifically, the operator sets the acceptable
fraction of false alarms in detected anomalies.
Moreover, the operator specifies a fraction of
the estimated probability mass of observed probe
response delays on a link [2]. Thereby, accuracy
and probing rates are specified as probabilities
rather than being specified in terms of a fixed
probing rate across the entire network, providing
a typical example of configurations expressed as
probabilistic goals rather than in terms of deterministic limits. The use of these probabilistic
parameters effectively determines the normally
observed probing rates for each link and how
quickly action is taken to confirm a suspected
failure, while losses and delays are accounted for
[2].
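As a rough illustration of how such probabilistic objectives can drive low-level parameters (a simplification under Gaussian assumptions, not the actual algorithm of [2, 14]), a per-link model of observed probe response delays can translate an acceptable false alarm fraction into a detection threshold:

    import math

    def detection_threshold(delays_ms, acceptable_false_alarm):
        """Fit a Gaussian to observed probe delays on a link and return the
        delay whose exceedance probability equals the acceptable false
        alarm fraction (i.e., the corresponding upper quantile)."""
        n = len(delays_ms)
        mean = sum(delays_ms) / n
        std = max(math.sqrt(sum((d - mean) ** 2 for d in delays_ms) / n), 1e-9)
        # Invert the standard normal survival function by bisection.
        lo, hi = 0.0, 10.0
        for _ in range(60):
            mid = (lo + hi) / 2.0
            if 0.5 * math.erfc(mid / math.sqrt(2)) > acceptable_false_alarm:
                lo = mid
            else:
                hi = mid
        return mean + hi * std

    # Probe delays above the threshold are flagged as suspected anomalies; on
    # average, only the configured fraction of normal probes raises a false alarm.
    observed = [18.0, 21.0, 19.5, 22.3, 20.1, 19.0]   # assumed probe delays (ms)
    print("alarm threshold: %.1f ms" % detection_threshold(observed, 0.01))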
Figure 1 depicts examples of the algorithm
behavior for one network link. We observe that
the obtained rate of false alarms successfully
meets the management objective of the specified
acceptable false alarm rate for different rates of
packet drop (Fig. 1a). In fact, the acceptable
rate of false alarms is here an upper limit of the
expected amount of false positives on an individual link, which exemplifies the difference
between strict performance guarantees and predictive performance control, when using probabilistic management algorithms.
Figure 1. a) Rate of false alarms given a fraction of acceptable false alarms and drop rate; b) adaptive probe rates given a fraction of acceptable false alarms and drop rate (curves for drop rates of 0.2, 0.3, 0.4, and 0.5).
Similarly, we
show that the number of probes needed for
detecting a failure adapts to the observed network conditions in order to meet the same
requirements on the rate of acceptable false
alarms (Fig. 1b), exemplifying how the estimated
probabilistic models can be used for autonomous
adjustments of low-level parameters.
TREE-BASED PROBABILISTIC ESTIMATION OF
GLOBAL METRICS
The Accuracy Generic Aggregation Protocol (A-GAP) is a monitoring algorithm that provides a
management station with a continuous estimate
of a global metric for given performance objectives [5]. A global metric denotes the result of
computing a multivariate function (e.g., sum,
average, and max) whose variables are local metrics from nodes across the networked system
(e.g., device counters or local protocol states).
Examples of global metrics in the context of the
Internet are the total number of VoIP flows in a
domain or the list of the 50 subscribers with the
longest end-to-end delay.
A-GAP computes global metrics in a distributed manner using a mechanism we refer to
as in-network aggregation. It uses a spanning
tree, whereby each node holds information about
its children in the tree, in order to incrementally
compute the global metric. The computation is
push-based in the sense that updates of monitored metrics are sent toward the management
station along the spanning tree. In order to
achieve efficiency, we combine the concepts of
in-network aggregation and filtering. Filtering
drops updates that are not significant when computing a global metric for a given accuracy objective, reducing the management overhead.
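The following sketch (our illustration of the general idea under assumed names and filter widths, not the A-GAP implementation) shows filtered, push-based aggregation along a spanning tree: a node recomputes its partial aggregate and forwards it to its parent only when the change exceeds its local filter width.

    class AggregationNode:
        """Aggregates a local counter and the partial sums of its children,
        pushing an update to its parent only when the change is significant."""
        def __init__(self, name, parent=None, filter_width=5.0):
            self.name = name
            self.parent = parent
            self.filter_width = filter_width     # derived from the accuracy objective
            self.local = 0.0
            self.children = {}                   # last value reported by each child
            self.last_reported = 0.0

        def aggregate(self):
            return self.local + sum(self.children.values())

        def on_update(self, child_name, value):
            self.children[child_name] = value
            self.maybe_push()

        def set_local(self, value):
            self.local = value
            self.maybe_push()

        def maybe_push(self):
            agg = self.aggregate()
            if self.parent is None:
                print("root estimate:", agg)     # continuous estimate at the root
            elif abs(agg - self.last_reported) >= self.filter_width:
                self.last_reported = agg         # filter suppresses small changes
                self.parent.on_update(self.name, agg)

    root = AggregationNode("root")
    leaf_a = AggregationNode("a", parent=root)
    leaf_b = AggregationNode("b", parent=root)
    leaf_a.set_local(12.0)   # pushed: change exceeds the filter width
    leaf_b.set_local(3.0)    # suppressed: below the filter width, saves an update
    leaf_a.set_local(13.0)   # suppressed
    leaf_b.set_local(9.0)    # pushed

Widening the filters reduces update traffic at the cost of a larger estimation error, which is exactly the trade-off that the accuracy objective controls.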
A key part of A-GAP is the model it uses for
the distributed monitoring process, which is
based on discrete-time Markov chains. The
model allows us to describe the behavior of individual nodes in their steady state and relates
performance metrics to control parameters.
Figure 2. Overall management overhead as a function of the number of aggregating nodes (for a network with 200 nodes).
The
model has been instrumental in designing a
monitoring protocol that is controllable and
achieves given performance objectives.
The managing operator specifies the global
metric of interest and then the desired accuracy
as a probability distribution for the estimation
error. Examples of input A-GAP supports
include the average error, percentile errors, and
maximum error. As a consequence, our design
permits any administrator with a basic understanding of performance metrics to use our solution, without the need for detailed knowledge of
our solution internals.
Based on this input, the algorithm continuously adapts its configuration (e.g., filters), providing the global metric with the required
accuracy. The output of the algorithm is a continuous estimate of the global metric with the
required accuracy. The output also includes predictions for the error distribution and traffic
overhead. This output is provided in real time at
the root node of the spanning tree.
A-GAP has proved to make efficient use
of resources. Figure 2 is a representative example of A-GAP’s performance: its maximum overhead increases sublinearly with the network size
(for the same relative accuracy). In addition, the
overall management overhead scales logarithmically with the number of internal (i.e., aggregating) nodes in the spanning tree. This behavior is
consistent in all our experiments, where we have
used both synthetic and real traces, and a wide
range of network topologies [5]. Our experiments include both simulation and testbed
implementations.
PROBABILISTIC ESTIMATION OF GROUP SIZES
Not All at Once! (NATO!) is a probabilistic
algorithm for precisely estimating the size of a
group of nodes meeting an arbitrary criterion
without explicit notification from every node
[7]. The algorithm represents an example of a
probabilistic sampling approach and provides
an alternative to aggregation techniques. It
can be used to collect information such as the
number of nodes with high packet rate, indicating emerging congestion. By not having
each node reporting its above-normal metrics
independently, available capacity and
resources are efficiently utilized, thereby
avoiding excessive amounts of traffic at the
ingress channel of the management station.
The scheme provides control over the tradeoffs between data accuracy, the time required
for data collection, and the amount of overhead incurred.
NATO! is an example of a family of algorithms that implement probabilistic polling for
estimating the size of a population. It implements a distributed scheme in which nodes periodically and synchronously send reports only if
their metrics exceed a threshold after waiting a
random amount of time sampled from an agreed
time distribution function. The network management station waits until it receives a sufficient
number of reports to estimate the total number
of nodes with the desired precision, and broadcasts a stop message, notifying the nodes that
have not yet reported not to send their reports.
The management station then analyzes the transmission time of the received reports, defines a
likelihood function, and computes the number of
affected nodes for which the likelihood function
is maximized. Typically, with only 10 report messages coming from a group of 1000 or 10,000
nodes, the estimation error is practically eliminated. This significant reduction in network load
is achieved at the expense of marginal computation load at each node and the broadcast messages. The scheme is an effective monitoring
platform that demonstrates efficient resource
usage, scalability, robustness, adaptability, autonomy, and ease of use.
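The toy sketch below illustrates the general idea of probabilistic polling with a deliberately simple uniform-delay model and a moment-based estimator of our own choosing; it is not the NATO! time distribution or likelihood computation, and a single run of this simplified estimator is noisier than the results reported for NATO!.

    import random

    def simulate_reports(n_affected, window_s=10.0, k=10, seed=1):
        """Each affected node draws a uniform random delay in [0, window_s] and
        reports then; the station stops after the first k reports (and would
        broadcast a stop message to the remaining nodes)."""
        random.seed(seed)
        delays = sorted(random.uniform(0.0, window_s) for _ in range(n_affected))
        return delays[:k]

    def estimate_group_size(report_times, window_s=10.0):
        """Moment-based estimate: for N uniform delays in [0, T], the expected
        k-th earliest arrival is k*T/(N+1), so N is roughly k*T/t_k - 1."""
        k = len(report_times)
        t_k = report_times[-1]
        return max(k, round(k * window_s / t_k - 1))

    # 1000 nodes cross the threshold, but only 10 reports reach the station.
    reports = simulate_reports(n_affected=1000, k=10)
    print("estimated group size:", estimate_group_size(reports))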
Network managers control and configure
NATO! by means of a number of high-level
parameters: the desired metrics to monitor and
their threshold values, acceptable overhead
(specified as the maximum allowed rate of
incoming messages), the time it takes to conclude the number of nodes experiencing an
abnormal condition, and the desired accuracy of
the estimation. A simple heuristic translates
these parameters to a specific time distribution
function and a time interval, and the frequency
at which NATO! is implicitly invoked.
This configuration controls the trade-off
between accuracy, timeliness, and overhead. It
can be dynamically adapted when the network
conditions or management objectives change:
faster estimations can be delivered, setting a
shorter time interval for the time distribution
function, at the expense of higher density of the
incoming messages; for faster reaction to
changes in network conditions, the frequency at
which NATO! is invoked can be increased; when
in-depth analysis is required, threshold values of
network metrics can be changed and new network metrics can be added; for better fault localization, local NATO! managers can be assigned
to collect data in their subnetworks. All of these
configuration changes can become active by
means of a broadcast message from the management station, which adapts the monitoring task
to the current needs for best performance under
acceptable cost.
Due to its probabilistic nature, the scheme is
practically scalable to any network size. There is
an insignificant incremental overhead for larger
network domains at the egress channel, delivering the broadcast messages from the management station to a larger group of nodes.
However, this overhead fans out quickly while
following the topology tree, without any detrimental effect on the stressed ingress channel of
the management station.
LEVERAGING DECENTRALIZED
PROBABILISTIC MANAGEMENT
All three algorithms presented in this section
demonstrate the benefits of decentralized probabilistic approaches to network management: they
enable efficient resource usage compared to
deterministic approaches, exploiting currently
available resources, while taking noise and variations in the network into account. The algorithms are scalable, robust, and adaptive to
network conditions.
The decentralized approach for anomaly
detection enables scalable and efficient usage of
resources. The use of adaptive probabilistic
models allows for capturing the local network
behavior and predicting the algorithm performance.
For A-GAP, decentralization enables efficiency, and probabilistic management provides
performance control. Computing metrics in a
distributed fashion along an aggregation tree
permits reducing the monitoring traffic compared to a centralized approach. The use of
probabilistic models permits A-GAP to predict
its performance and therefore meet the objectives of the network manager.
Thanks to its decentralized and probabilistic
nature, NATO! is scalable to any network size,
avoiding congestion at the ingress channel of the
management station while effectively controlling
the trade-off between accuracy, timeliness, and
overhead.
CONCLUSIONS
In this article we have advocated for the adoption of a decentralized and probabilistic
paradigm for network management. We have
argued that solutions based on it can meet the
challenges posed by large-scale dynamic networks, including efficient resource usage, scalability, robustness, and adaptability. We have
exemplified this with three specific solutions,
and discussed how they address such challenges.
A key challenge in the adoption of this
paradigm is acceptance by network managers.
They will need to think about the network in
terms of uncertainties and likely scenarios while
considering the expected cost and cost variability. This is a very different approach from current practices, where managers focus on
upholding strict guarantees on parameters staying within pre-determined limits. Operators
must look at the state of their networks in
terms of probabilities and accept a certain
degree of uncertainty. Solutions that can quantify that uncertainty are, from this point of
view, of great relevance.
This paradigm shift is also likely to cause
some reluctance among network operators to
deploy this type of solution. We believe that in
order to mitigate this reluctance, it will be key to
show that decentralized probabilistic approaches
can reduce operational expenditures. This reduction is enabled by their higher degree of automation compared to traditional approaches.
However, at this point, this is a conjecture and
must be supported by developing use cases that
quantify the potential savings in different scenarios. For this purpose, experimental evaluations
in large-scale testbeds and production networks
are a must.
While we expect some reluctance in adopting
a new management paradigm, we strongly
believe that not doing it would have a major
negative impact on the ability to manage larger
and more complex networks, and as the need for
solutions for such networks increases, we are
likely to see more widespread adoption.
ACKNOWLEDGEMENT
This work was supported in part by the European Union through the 4WARD and SAIL projects (http://www.4ward-project.eu/, http://www.sail-project.eu/) in the 7th Framework Programme.
The authors would like to thank Reuven
Cohen (Technion), Björn Levin (SICS), Danny
Raz (Technion), and Rolf Stadler (KTH) for
their valuable input to this work.
REFERENCES
[1] G. Pavlou, “On the Evolution of Management Approaches, Framework and Protocols: A Historical Perspective,”
J. Network and Sys. Mgmt., vol. 15, no. 4, Dec. 2007,
pp. 425–45.
[2] R. Steinert and D. Gillblad, “Towards Distributed and
Adaptive Detection and Localisation Of Network Faults,”
AICT 2010, Barcelona, Spain, May 2010.
[3] G. J. Lee, CAPRI: A Common Architecture for Distributed Probabilistic Internet Fault Diagnosis, Ph.D. dissertation, CSAIL-MIT, Cambridge, MA, 2007.
[4] F. J. Garcia-Algarra et al., “A Lightweight Approach to
Distributed Network Diagnosis under Uncertainty,”
INCOS ’09, Barcelona, Spain, Nov. 2009.
[5] A. Gonzalez Prieto, “Adaptive Real-Time Monitoring for
Large-Scale Networked Systems,” Ph.D. dissertation,
Dept. Elect. Eng., Royal Insti. Technology, KTH, 2008.
[6] M. Brunner et al., “Probabilistic Decentralized Network
Management,” Proc. IEEE IM ’09, New York, NY, 2009.
[7] R. Cohen and A. Landau, “Not All At Once! — A Generic Scheme for Estimating the Number of Affected
Nodes While Avoiding Feedback Implosion,” INFOCOM
2009 Mini-Conf., Rio de Janeiro, Brazil, Apr. 2009.
[8] K. C. Claffy, G. C. Polyzos, and H.-W. Braun, “Application of Sampling Methodologies to Network Traffic
Characterization,” ACM SIGCOMM Comp. Commun.
Rev., vol. 23, no. 4, Oct. 1993, pp. 194–203.
[9] E. Stevens-Navarro, L. Yuxia, and V. W. S. Wong, “An
MDP-Based Vertical Handoff Decision Algorithm for
Heterogeneous Wireless Networks,” IEEE Trans. Vehic.
Tech., vol. 57, no. 2, 2008.
[10] R. Badonnel, R. State, and O. Festor, “Probabilistic
Management of Ad Hoc Networks,” Proc. NOMS ’06,
Vancouver, Canada, Apr. 2006, pp. 339–50.
[11] A. G. Dimakis, A. D. Sarwate, and M. Wainwright,
“Geographic Gossip: Efficient Aggregation for Sensor
Networks,” IPSN 2006, Nashville, TN, Apr. 2006.
[12] L. Guardalben et al., “A Cooperative Hide and Seek
Discovery over in Network Management,” IEEE/IFIP
NOMS Wksps. ’10, Osaka, Japan, Apr. 2010, pp. 217–24.
[13] P. Tinnakornsrisuphap and R. J. La, “Characterization
of Queue Fluctuations in Probabilistic AQM Mechanisms,” Proc. ACM SIGMETRICS, 2004, pp. 283–94.
[14] R. Steinert and D. Gillblad, “Long-Term Adaptation
and Distributed Detection of Local Network Changes,”
IEEE GLOBECOM, Miami, FL, Dec. 2010.
BIOGRAPHIES
ALBERTO GONZALEZ PRIETO ([email protected]) received his
M.Sc. in electrical engineering from the Universidad Politecnica de Cataluña, Spain, and his Ph.D. in electrical engineering from the Royal Institute of Technology (KTH),
Stockholm, Sweden. He has been with Cisco Systems since
2010. He was an intern at NEC Network Laboratories, Heidelberg, Germany, in 2001, and at AT&T Labs Research,
Florham Park, New Jersey, in 2007. His research interests
include management of large-scale networks, real-time network monitoring, and distributed algorithms.
DANIEL GILLBLAD ([email protected]) has a background in statistical
machine learning and data analysis, and has extensive
experience in applying such methods in industrial systems.
He holds an M.Sc. in electrical engineering and a Ph.D. in
computer science, both from KTH. He has been with the
Swedish Institute of Computer Science (SICS) since 1999,
where he currently manages the network management and
diagnostics group within the Industrial Applications and
Methods (IAM) laboratory. His research interests are currently focused around network management, diagnostics,
data mining and mobility modeling.
AVI MIRON ([email protected]) is a researcher at the Computer Science Department of the Israel Institute of Technology (Technion). A graduate of the University of Southern California, Los Angeles, he has participated in a
few EU-funded research projects, including BIONETS and
4WARD, and now in SAIL and ETICS. He is an experienced
high-tech executive and an entrepreneur in the area of
tele/data communications, in both Israel and the United
States.
REBECCA STEINERT ([email protected]) has been with IAM at SICS since
2006. She has a background in statistical machine learning
and data mining, and in 2008 she received her M.Sc. from
KTH in computer science with emphasis on autonomous
systems. Since the beginning of 2010, she has been pursuing her Ph.D. at KTH, focusing on statistical approaches for network fault management. She has worked in the EU project
4WARD and is currently involved in SAIL, focusing on fault
management in cloud computing and network virtualization. She also contributes to the network management
research within the SICS Center for Networked Systems.
Palexpo, Geneva
Exhibition: 19 - 21 September
Conference: 18 - 22 September
Much more than a Conference and Exhibition
Visitor Registration Now Open
Register for FREE at www.ecocexhibition.com
TOPICS IN NETWORK AND SERVICE MANAGEMENT
Network Resilience:
A Systematic Approach
Paul Smith, Lancaster University
David Hutchison, Lancaster University
James P. G. Sterbenz, University of Kansas and Lancaster University
Marcus Schöller, NEC Laboratories Europe
Ali Fessi, Technische Universität München
Merkouris Karaliopoulos, NKU Athens
Chidung Lac, France Telecom (Orange Labs)
Bernhard Plattner, ETH Zurich
ABSTRACT
The cost of failures within communication
networks is significant and will only increase as
their reach further extends into the way our society functions. Some aspects of network
resilience, such as the application of fault-tolerant systems techniques to optical switching, have
been studied and applied to great effect. However, networks — and the Internet in particular —
are still vulnerable to malicious attacks, human
mistakes such as misconfigurations, and a range
of environmental challenges. We argue that this
is, in part, due to a lack of a holistic view of the
resilience problem, leading to inappropriate and
difficult-to-manage solutions. In this article, we
present a systematic approach to building
resilient networked systems. We first study fundamental elements at the framework level such
as metrics, policies, and information sensing
mechanisms. Their understanding drives the
design of a distributed multilevel architecture
that lets the network defend itself against, detect,
and dynamically respond to challenges. We then
use a concrete case study to show how the framework and mechanisms we have developed can be
applied to enhance resilience.
INTRODUCTION
Data communication networks are serving all
kinds of human activities. Whether used for professional or leisure purposes, for safety-critical
applications or e-commerce, the Internet in particular has become an integral part of our everyday lives, affecting the way societies operate.
However, the Internet was not intended to serve
all these roles and, as such, is vulnerable to a
wide range of challenges. Malicious attacks, software and hardware faults, human mistakes (e.g.,
software and hardware misconfigurations), and
large-scale natural disasters threaten its normal
operation.
Resilience, the ability of a network to defend
against and maintain an acceptable level of service in the presence of such challenges, is viewed
today, more than ever before, as a major requirement and design objective. These concerns are
reflected, among other ways, in the Cyber
Storm III exercise carried out in the United
States in September 2010, and the “cyber stress
tests” conducted in Europe by the European
Network and Information Security Agency
(ENISA) in November 2010; both aimed precisely at assessing the resilience of the Internet,
this “critical infrastructure used by citizens, governments, and businesses.”
Resilience evidently cuts through several thematic areas, such as information and network
security, fault tolerance, software dependability,
and network survivability. A significant body of
research has been carried out around these
themes, typically focusing on specific mechanisms for resilience and subsets of the challenge
space. We refer the reader to Sterbenz et al. [1]
for a discussion on the relation of various
resilience disciplines, and to a survey by Cholda
et al. [2] on research work for network resilience.
A shortcoming of existing research and
deployed systems is the lack of a systematic view
of the resilience problem, that is, a view of how
to engineer networks that are resilient to challenges that transcend those considered by a single thematic area. A non-systematic approach to
understanding resilience targets and challenges
(e.g., one that does not cover thematic areas)
leads to an impoverished view of resilience
objectives, potentially resulting in ill-suited solutions. Additionally, a patchwork of resilience
mechanisms that are incoherently devised and
deployed can result in undesirable behavior and
increased management complexity under chal-
lenge conditions, encumbering the overall network management task [3].
Figure 1. The resilience control loop: derived from the real-time component of the D2R2 + DR resilience strategy.
The EU-funded ResumeNet project argues for
resilience as a critical and integral property of
networks. It advances the state of the art by
adopting a systematic approach to resilience,
which takes into account the wide-variety of
challenges that may occur. At the core of our
approach is a coherent resilience framework,
which includes implementation guidelines, processes, and toolsets that can be used to underpin
the design of resilience mechanisms at various
levels in the network. In this article, we first
describe our framework, which forms the basis
of a systematic approach to resilience. Central to
the framework is a control loop, which defines
necessary conceptual components to ensure network resilience. The other elements — a risk
assessment process, metrics definitions, policybased network management, and information
sensing mechanisms — emerge from the control
loop as necessary elements to realize our systematic approach. We show how these elements
drive the design of a novel architecture and
mechanisms for resilience. Finally, we illustrate
these mechanisms in a concrete case study being
explored in ResumeNet: a future Internet smart
environments application.
FRAMEWORK FOR RESILIENCE
Our resilience framework builds on work by Sterbenz et al. [1], whereby a number of resilience
principles are defined, including a resilience
strategy, called D2R2 + DR: Defend, Detect,
Remediate, Recover, and Diagnose and Refine.
The strategy describes a real-time control loop to
allow dynamic adaptation of networks in response
to challenges, and a non-real time control loop
that aims to improve the design of the network,
including the real-time loop operation, reflecting
on past operational experience.
The framework represents our systematic
approach to the engineering of network
resilience. At its core is a control loop comprising a number of conceptual components that
realize the real-time aspect of the D2R2 + DR
strategy, and consequently implement network
resilience. Based on the resilience control loop,
other necessary elements of our framework are
derived, namely resilience metrics, understanding challenges and risks, a distributed information store, and policy-based management. The
remainder of this section describes the resilience
control loop, then motivates the need for these
framework elements.
RESILIENCE CONTROL LOOP
Based on the real-time component of the D2R2
+ DR strategy, we have developed a resilience
control loop, depicted in Fig. 1, in which a controller modulates the input to a system under
control in order to steer the system and its output towards a desired reference value. The control loop forms the basis of our systematic
approach to network resilience — it defines necessary components for network resilience from
which the elements of our framework, discussed
in this section, are derived. Its operation can be
described using the following list; items correspond to the numbers shown in Fig. 1:
1. The reference value we aim to achieve is
expressed in terms of a resilience target, which is
described using resilience metrics. The resilience
target reflects the requirements of end users,
network operators, and service providers.
2. Defensive measures need to be put in place
proactively to alleviate the impact of challenges
on the network, and maintain its ability to realize the resilience target. A process for identifying the challenges that should be considered in
this defense step of the strategy (e.g., those happening more frequently and having high impact)
is necessary.
3. Despite the defensive measures, some challenges may cause the service delivered to users
to deviate from the resilience target. These challenges could include unforeseen attacks or mis-
configurations. Challenge analysis components detect and characterize them using a variety of information sources.
Figure 2. Resilience state space.
4. Based on output from challenge analysis
and the state of the network, a resilience estimator determines whether the resilience target is
being met. This measure is based on resilience
metrics, and is influenced by the effectiveness of
defense and remediation mechanisms to respond
to challenges.
5. Output from the resilience estimator and
challenge analysis is fed to a resilience manager.
It is then the manager's responsibility to control resilience
mechanisms embedded in the network and service infrastructure, to preserve the target service
provision level or ensure its graceful degradation. This adaptation is directed using resilience
knowledge, not shown in Fig. 1, such as policies
and challenge models. We anticipate a cost of
remediation in terms of a potentially unavoidable degradation in quality of service (QoS),
which should not be incurred if the challenge
abates. Consequently, the network should aim to
recover to normal operation after a challenge
has ceased.
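A schematic rendering of these five steps in code (our own paraphrase with made-up component behavior, not ResumeNet software) makes the division of responsibilities in the real-time loop explicit:

    import random

    class Network:
        """Toy managed system: a challenge occasionally degrades packet loss."""
        def __init__(self):
            self.loss = 0.01
            self.protected = False

        def monitor(self):
            # Information sensing: without protection, a challenge may strike.
            if not self.protected and random.random() < 0.2:
                self.loss = 0.2
            return {"loss": self.loss}

    class ChallengeAnalysis:
        def detect(self, obs):
            # Detect and characterize: here, abnormal packet loss.
            return ["loss-challenge"] if obs["loss"] > 0.05 else []

    class ResilienceEstimator:
        def estimate(self, obs):
            # Map observations onto a service level in [0, 1].
            return 1.0 - obs["loss"]

    class ResilienceManager:
        def remediate(self, net):
            net.protected = True   # e.g., reroute around the challenged resource
            net.loss = 0.05        # degraded but acceptable service (QoS cost)

        def recover(self, net):
            net.protected = False  # return to normal operation
            net.loss = 0.01

    def control_loop(target=0.9, steps=10):
        net, analysis = Network(), ChallengeAnalysis()
        estimator, manager = ResilienceEstimator(), ResilienceManager()
        for step in range(steps):
            obs = net.monitor()
            challenges = analysis.detect(obs)
            level = estimator.estimate(obs)
            if challenges and level < target:
                manager.remediate(net)      # remediation accepts a QoS cost
            elif not challenges and net.protected:
                manager.recover(net)        # recover once the challenge abates
            print(step, "service level %.2f" % level, challenges)

    control_loop()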
The purpose of the background loop in the
D2R2 + DR strategy is to improve the operation
of the resilience control loop such that it meets
an idealized system operation. This improvement
could be in response to market forces, leading to
new resilience targets, new challenges, or suboptimal performance. The diagnose phase identifies
areas for improvement, including defense, that
are enacted through refinement. In reality, and for
the foreseeable future, we anticipate this outer
loop to be realized with human intervention.
RESILIENCE METRICS
Defining a resilience target requires appropriate
metrics. Ideally, we would like to express the
resilience of a network using a single value, R,
in the interval [0,1], but this is not a simple
problem because of the number of parameters
that contribute to and measure resilience, and
due to the multilayer aspects in which each level
of resilience (e.g., resilient topology) is the foundation for the next level up (e.g., resilient routing). We model resilience as a two-dimensional
state space in which the vertical axis P is a measure of the service provided when the operational state N is challenged, as shown in Fig. 2.
Resilience is then modeled as the trajectory through the state space as the network goes from delivering acceptable service under normal operation S0 to degraded service Sc. Remediation improves service to Sr, and recovery returns the network to the normal state S0. We can measure resilience
at a particular service level as the area under
this trajectory, R.
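As a small numerical illustration (our own simplification with assumed numbers), a time-parameterized proxy for this measure integrates the delivered service level while the network is challenged, remediated, and recovered; the closer the normalized area is to 1, the more resilient the behavior:

    def service_area(times, service_level):
        """Trapezoidal integral of the delivered service level over time,
        normalized by the observation window: 1.0 means acceptable service
        throughout, lower values mean deeper or longer degradation."""
        total = 0.0
        for i in range(1, len(times)):
            dt = times[i] - times[i - 1]
            total += dt * (service_level[i] + service_level[i - 1]) / 2.0
        return total / (times[-1] - times[0])

    # Assumed trajectory: normal (S0) -> challenged (Sc) -> remediated (Sr) -> recovered (S0).
    t = [0, 10, 20, 30, 40]          # time (s)
    p = [1.0, 0.5, 0.5, 0.8, 1.0]    # normalized service parameter P
    print("R = %.2f" % service_area(t, p))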
We have developed a number of tools for
evaluating network resilience. For example, we
use MATLAB or ns-3 simulation models to
measure the service at each level and plot the
results under various challenges and attacks, as
in Fig. 2, where each axis is an objective function
of the relevant parameters [4]. Furthermore, we
have developed the Graph Explorer tool [5] that
takes as input a network topology and associated
traffic matrix, a description of challenges, and a
set of metrics to be evaluated. The result of the
analysis is a series of plots that show the metric
envelope values (mi(min), mi(max)) for each specified metric mi, and topology maps indicating the resilience across network regions.
Figure 3 shows an example of the resilience
of the European academic network GÉANT2 to
link failures. The set of plots in Fig. 3a show
metric envelopes at different protocol levels —
the aim is to understand how jitter responds in
comparison with metrics at other levels, such as
queue length and connectivity. Surprisingly, jitter
is not clearly related to queue length, and a
monotonic increase in path length does not yield
a similar increase in queue length for all scenarios of link failures. In fact, the fourth link failure
disconnects a region of the network, whereas with up to three failures, the heavy use of a certain path
resulted in increasing queue lengths and jitter.
The partition increases path length, because
route lengths are set to infinity, and decreases
connectivity, which is accompanied by a reduction in jitter, shown with the blue arrows in Fig.
3a. The topology map in Fig. 3b highlights the
vulnerability of regions of GÉANT2 with a heat
map, which can be used by network planners.
Our framework for resilience metrics (i.e., the
multilevel two-dimensional state space and the
use of metric envelopes) can be used to understand the resilience of networks to a broad range
of challenges, such as misconfigurations, faults,
and attacks. The ability to evaluate a given network’s resilience to a specific challenge is limited
by the capability of the tools to create complex
challenge scenarios — this is an area for further
work, in which our effort should be focused on
modeling pertinent high-impact challenges.
Figure 3. Example output from the Graph Explorer, developed in the ResumeNet project: a) plots showing the relationship between metrics at various layers (e.g., connectivity, path length, betweenness, queue length, and end-to-end jitter) in response to link failures on the GÉANT2 topology; b) a heat map showing vulnerable regions of the topology with respect to a given set of metrics. Reprinted from [5].
UNDERSTANDING CHALLENGES AND RISKS
Engineering resilience has a monetary cost. To
maximize the effectiveness of the resources committed to resilience, a good understanding of the
challenges a network may face is mandatory. We
have developed a structured risk assessment
approach that identifies and ranks challenges in
line with their probability of occurrence and their
impact on network operation (i.e., how disruptive
they are to the provision of its services). The
approach should be carried out at the stage of network design when proactive defensive measures
are deployed, and repeated regularly over time as
part of the process of network improvements.
Central to determining the impact of a challenge is to identify the critical services the network
provides and the cost of their disruption: a measure of impact. Various approaches can be used to
identify the critical services, such as discussion
groups involving the network’s stakeholders. Networked systems are implemented via a set of
dependent subsystems and services (e.g., web and
Session Initiation Protocol [SIP] services rely on
Domain Name Service [DNS]). To identify whether
challenges will cause a degradation of a service, it
is necessary to explicate these dependencies.
The next phase is to identify the occurrence
probabilities of challenges (challenge_prob).
Some challenges will be unique to a network’s
context (e.g., because of the services it provides),
while others will not. In relation to these challenges, shortcomings of the system (e.g., in terms
of faults) should be identified. The aim is to
determine the probability that a challenge will
lead to a failure (fail_prob). We can use tools,
such as our Graph Explorer, analytical modeling,
and previous experience (e.g., in advisories) to
help identify these probabilities. Given this
information, a measure of exposure can be
derived using the following equation:
exposure = (challenge_prob × fail_prob) × impact
With the measures of exposure at hand,
resilience resources can be targeted at the challenges that are likely to have the highest impact.
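A short worked example of this calculation (with made-up probabilities and impact scores purely for illustration) shows how candidate challenges can be ranked so that resilience resources go where exposure is highest:

    # exposure = (challenge_prob * fail_prob) * impact, per the equation above.
    challenges = [
        # (name, challenge_prob, fail_prob, impact on critical services 0-10)
        ("fiber cut",            0.10, 0.9, 8.0),
        ("DNS misconfiguration", 0.30, 0.5, 6.0),
        ("DDoS attack",          0.20, 0.4, 7.0),
    ]

    ranked = sorted(
        ((name, p_c * p_f * impact) for name, p_c, p_f, impact in challenges),
        key=lambda item: item[1],
        reverse=True,
    )
    for name, exposure in ranked:
        print("%-22s exposure=%.2f" % (name, exposure))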
INFORMATION SOURCES AND
SHARING FOR RESILIENCE
For the most part, network management decisions are made based on information obtained
from monitoring systems in the network (e.g., via
Simple Network Management Protocol
[SNMP]). However, to be able to make autonomic decisions about the nature of a wide range
of challenges and how to respond to them — a
necessary property of resilient networks — a
broader range of information needs to be used.
In addition to traditional network monitoring
information, context information, which is sometimes “external” to the system, can be used. Earlier work has demonstrated how the use of
weather information, an example of context,
improves the resilience of millimeter-wave wireless mesh networks, which perform poorly in
heavy rain [4]. Also, in addition to node-centric
monitoring tools, such as NetFlow and SNMP,
task-centric tools can be used to determine the
root cause of failures. For example, X-trace [6]
is a promising task-centric monitoring approach
that can be used to associate network and service state (e.g., router queue lengths and DNS
records) with service requests (e.g., retrieving a
web page). This multilevel information can then
be used to determine the root causes of failures.
We are developing a Distributed Store for
Challenges and their Outcome (DISco), which
uses a publish-subscribe messaging pattern to
disseminate information between subsystems
that realize the real-time loop. Such information
includes actions performed to detect and remediate challenges. Information sources may report
more data than we can afford or wish to relay on
the network, particularly during challenge occurrences. DISco is able to aggregate information
from multiple sources to tackle this problem.
Decoupling information sources from components that use them allows adaptation of challenge analysis components without needing to
modify information sources. To assist the two
phases of the outer loop, DISco employs a distributed peer-to-peer storage system for longer-term persistence of data, which is aware of
available storage capacity and demand.
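A rough, in-process Python sketch of the publish-subscribe decoupling and aggregation described above is given below; DISco itself is a distributed store, and the topic names, message fields, and aggregation rule used here are assumptions made purely for illustration.

# A minimal, in-process sketch of the publish-subscribe decoupling used by
# DISco; topic names, message fields, and the aggregation rule are assumptions.
from collections import defaultdict
from statistics import mean

class MiniBus:
    """Toy publish-subscribe bus: components register callbacks per topic."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

bus = MiniBus()
queue_samples = []

def on_queue_report(report):
    # Aggregate raw monitoring reports so that only a summary is relayed.
    queue_samples.append(report["queue_len"])
    if len(queue_samples) == 5:
        bus.publish("alarm", {"metric": "queue_len", "avg": mean(queue_samples)})
        queue_samples.clear()

def on_alarm(alarm):
    print(f"challenge analysis received alarm: {alarm}")

bus.subscribe("report", on_queue_report)   # aggregator consumes raw reports
bus.subscribe("alarm", on_alarm)           # analysis consumes aggregated alarms

for sample in (120, 150, 900, 1100, 1300):
    bus.publish("report", {"queue_len": sample})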
POLICIES FOR RESILIENCE
We advocate the use of a policy-based management framework to define the behavior of real-time loop instantiations. Consequently, the
implementation of resilience mechanisms can be
decoupled from the resilience management strategies, which are expressed in policies. This has two immediate benefits: as the nature of challenges changes over time, management strategies can be adapted accordingly without the need for network downtime; and policies allow network operators to clearly express when they would like to intervene in the network’s operation (e.g., when a remediation action needs to be invoked).
Research outcomes from the policy-based
management field can help address the complexities of resilience management [7]. A difficult
task is deriving implementable policies from
high-level resilience requirements, say, expressed
in service level agreements (SLAs). With appropriate modifications, techniques for policy refinement can be used to build tools to automate
aspects of this process. Policy-based learning,
which relies on the use of logical rules for knowledge representation and reasoning, is being
exploited to assist with the improvement stages
of our strategy. Techniques for policy ratification
are currently used to ensure that invocation of
different resilience strategy sets does not yield
undesirable conflicting behavior. Conflicts can
occur horizontally between components that
realize the resilience control loop, and vertically
across protocol levels. For example, a mechanism that replicates a service using virtualization
techniques at the service level could conflict with
a mechanism that is rate-limiting traffic at the
network level. Example policies of this sort are
shown in Fig. 4. So that these forms of conflict
can be detected, Agrawal et al. [8] provide a theoretical foundation for conflict resolution that
needs to be extended with domain-specific
knowledge, for example, regarding the nature of
resilience mechanisms.
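To make the idea of cross-level conflicts concrete, the Python sketch below flags pairs of policies that are triggered by the same challenge but whose actions are declared incompatible; the policy encoding and the incompatibility table are simplifying assumptions for this sketch and do not reproduce the ratification formalism of Agrawal et al. [8].

# A minimal sketch of detecting cross-level policy conflicts of the kind shown
# in Fig. 4; the policy encoding and the incompatibility table are assumptions.
from itertools import combinations

policies = [
    {"name": "rate-limit-links", "level": "network",
     "trigger": "highUtilisation", "action": "rate_limit"},
    {"name": "replicate-service", "level": "service",
     "trigger": "highServiceUtilisation", "action": "replicate_vm"},
]

# Actions assumed to interfere when invoked for the same underlying challenge:
# rate limiting can starve the replication traffic.
incompatible_actions = {frozenset({"rate_limit", "replicate_vm"})}

# Assumed mapping from an observed challenge to the triggers it raises.
challenge_triggers = {
    "ddos_attack": {"highUtilisation", "highServiceUtilisation"},
}

def find_conflicts(challenge):
    triggered = [p for p in policies
                 if p["trigger"] in challenge_triggers[challenge]]
    for a, b in combinations(triggered, 2):
        if frozenset({a["action"], b["action"]}) in incompatible_actions:
            yield (a["name"], b["name"])

for pair in find_conflicts("ddos_attack"):
    print("potential conflict:", pair)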
[Figure 4 diagram: DDoS attackers send DDoS traffic across the Internet toward ServerA, while replication traffic moves a virtualised service to ServerB; the two policies shown are in conflict:
  on highUtilisation (link) { do { RateLimiterMO limit (link, 50%) } }
  on highServiceUtilisation (service) { do { VMReplicator replicateService (service, ServerB) } }]
Figure 4. Potentially conflicting policies at the service level (the replication of a service) and the network level (rate-limiting traffic) that could be triggered by the same challenge, such as a distributed denial of service (DDoS) attack. Rate limiting traffic could cause the replication to fail.
DEFENSE AND DYNAMIC
ADAPTATION ARCHITECTURE
In this section, we describe a set of defensive
mechanisms and an architecture that realize our
systematic approach to resilience, described earlier. The architecture, shown in Fig. 5, consists
of several subsystems implementing the various
tasks of the communication system as well as
the challenge detection components and adaptation capabilities. The behavior of all these subsystems is directed by the resilience manager
using policies, which are held in a resilience
knowledge base. Central to this architecture is
DISco, which acts as a publish-subscribe and
persistent storage system, containing information regarding ongoing detection and remediation activities. From an implementation
perspective, based on the deployment context,
we envisage components of the architecture to
be distributed (e.g., in an Internet service
provider [ISP] network) or functioning entirely
on a single device (e.g., nodes in a delay-tolerant network).
DEFENSIVE MEASURES
As a first step, defensive measures need to be
put in place to alleviate the impact of challenges
on the network. Since challenges may vary
broadly from topology-level link failures to
application-level malware, defensive measures
against anticipated high-impact challenges need
to be applied at different levels and locations: in
the network topology design phase, and within
protocols; across a network domain, as well as at
individual nodes. Defensive measures can either
prevent a challenge from affecting the system or
contain erroneous behavior within a subsystem
in such a way that the delivered service still
meets its specification. A selection of defensive
measures developed in the ResumeNet project is
shown in Table 1.
DETECTION SUBSYSTEMS
The second step is to detect challenges affecting the system that lead to a deviation in the delivered service. We propose an incremental approach to challenge analysis, whereby the understanding of the nature of a challenge evolves as more inputs become available from a variety of information sources. This incremental approach has two apparent advantages. First, it
readily accommodates the varying computational
overhead, timescales, and potentially limited
accuracy of current detection approaches [9].
Second, relatively lightweight detection mechanisms that are always on can be used to promptly initiate remediation, thus providing the
network with a first level of protection, while
further mechanisms are invoked to better understand the challenge and improve the network
response. Lightweight detection mechanisms can
be driven by local measurements carried out in
the immediate neighborhood of affected nodes.
For example, consider high-traffic volume
challenges, such as a DDoS attack or a flash
crowd event. Initially, always-on simple queue
monitoring could generate an alarm if queue
lengths exceed a threshold for a sustained period. This could trigger the rate limiting of links
associated with high traffic volumes. More
expensive traffic flow classification could then be
used to identify and block malicious flows, consequently not subjecting benign flows to rate
limiting. Challenge models, shown in Fig. 5,
describe symptoms of challenges and drive the
analysis process. They can be used to initially
identify broad classes of challenge, and later to
refine identification to more specific instances.
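The always-on, lightweight detection stage described above might look roughly like the following Python sketch, which raises an alarm when queue length stays above a threshold for a sustained run of samples; the threshold, window size, and remediation hook are illustrative assumptions.

# A minimal sketch of the always-on, lightweight detection stage described
# above: raise an alarm when queue length stays above a threshold for a
# sustained period. Threshold and window values are illustrative assumptions.
from collections import deque

QUEUE_THRESHOLD = 1000   # packets (assumed)
SUSTAIN_SAMPLES = 3      # consecutive samples above threshold (assumed)

recent = deque(maxlen=SUSTAIN_SAMPLES)

def rate_limit_links(reason):
    # Placeholder for the first-level remediation triggered by the alarm.
    print(f"rate limiting high-volume links ({reason})")

def on_queue_sample(queue_len):
    recent.append(queue_len)
    sustained = (len(recent) == SUSTAIN_SAMPLES
                 and all(q > QUEUE_THRESHOLD for q in recent))
    if sustained:
        rate_limit_links(f"queue length above {QUEUE_THRESHOLD} "
                         f"for {SUSTAIN_SAMPLES} samples")
        recent.clear()   # wait for a fresh sustained run before re-alarming

for sample in (200, 1200, 1400, 1350, 900):
    on_queue_sample(sample)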
[Figure 5 diagram: a resilience knowledge base holding challenge models and policies informs the other subsystems; DISco links information sources, detection, challenge analysis, consultants, the network resilience manager, and remediation and recovery through primitives such as publish(alarm), deliver(alarm), subscribe(alarm)/publish(challenge), and subscribe(challenge)/publish(report)/lookup(alarm); defensive measures defend the managed entities.]
Figure 5. A dynamic adaptation architecture that realizes the resilience control loop.
REMEDIATION AND RECOVERY SUBSYSTEMS
The challenge detection subsystem interfaces
with the remediation and recovery subsystem,
the third and final step, by issuing alerts to DISco
using the publish(challenge) primitive.
These alerts contain information about the challenge and its impact on the network, in terms of
the metrics that are falling short of the resilience
target. The network resilience manager takes this
information as context data, and, based on policies, selects an adaptation strategy. In doing so,
the network resilience manager realizes the
resilience management functionality in Fig. 1. If
further information is required by the network
resilience manager that is not contained in the
alert, the lookup(alarm) primitive can be
used. Furthermore, the network resilience manager can make use of consultants, such as path
computation elements, which can compute new
topological configurations, such as new channel
allocations or new forwarding structures.
Resilience mechanisms are deployed by enforcing new configurations on the managed entities
(e.g., routers and end hosts) in the network. To
implement the resilience estimator, the network
resilience manager assesses the success of chosen remedies. The assessment is stored in DISco
to aid the diagnosis and refinement steps of the
background loop. Carrying out this assessment is
not straightforward since it requires spatio-temporal correlation of changes in network state,
which is an issue for further work.
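A plausible, much-simplified rendering of this remediation flow is sketched below in Python: a challenge alert is mapped to a strategy by a policy table, and the outcome is recorded for the background loop; the alert fields, the policy table, and the success test are assumptions, not the actual resilience manager interface.

# A rough sketch of the remediation flow: a challenge alert arrives, a policy
# selects a strategy, and the outcome is recorded for the background loop.
# Alert fields, the policy table, and the success test are assumptions.

adaptation_policies = {
    # challenge class -> chosen remediation strategy
    "ddos": "block_malicious_flows",
    "link_failure": "recompute_forwarding_paths",
}

disco_store = []   # stands in for DISco's persistent store

def assess_success(metric_before, metric_after, target):
    """Resilience-estimator stand-in: did the degraded metric recover?"""
    return metric_after <= target < metric_before

def handle_challenge_alert(alert):
    strategy = adaptation_policies.get(alert["challenge_class"], "escalate")
    print(f"applying strategy '{strategy}' for {alert['challenge_class']}")
    # ... enforcement on managed entities would happen here ...
    outcome = {
        "alert": alert,
        "strategy": strategy,
        "success": assess_success(alert["metric_value"],
                                  alert["metric_value"] * 0.5,  # assumed effect
                                  alert["resilience_target"]),
    }
    disco_store.append(outcome)   # stored for diagnosis and refinement

handle_challenge_alert({"challenge_class": "ddos",
                        "metric_value": 1400,        # e.g., queue length
                        "resilience_target": 800})
print(disco_store)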
RESILIENCE IN SMART
ENVIRONMENTS: A CASE STUDY
We are currently evaluating our generic
approach to resilience through concrete study
cases that cover a range of future networking
paradigms: wireless mesh and delay-tolerant
networks, peer-to-peer voice conferencing,
and service provision over heterogeneous
smart environments. Herein, we focus our discussion on the last study case. The widespread
use of smart mobile devices, together with
identifiers such as radio frequency identification (RFID), embedded in objects such as
products, enables communication with, and
about, these objects. The French national project Infrastructure for the Future Trade
(ICOM) has developed an intra- and interenterprise infrastructure, depicted in Fig. 6,
that allows the connection of objects with
enterprise information systems and fixed or
mobile terminals. This ICOM platform can be
used as a foundation for a number of enterprise applications. The experimentation makes
use of three different entities:
• The data acquisition site is the data source
— items identified by RFID, for example,
are read and their information sent to a
processing centre located remotely.
• The data processing site houses different
modules of the platform (e.g., data collection, aggregation, and tracking), which will
forward the enriched data to the core application.
• The application provision site hosts the
platform’s central element — it is also
where the data subscriber applications (web
services, legal application, etc.) are linked.
Based on outcomes of our risk assessment
approach, high-impact challenges to the platform include both intentional and accidental ones: malicious attacks that threaten the
confidentiality and integrity of commercially
sensitive data, DDoS attacks by extortionists,
and, given the immature nature of the platform,
software and hardware faults. This understanding ensures that we implement appropriate
Defensive measure: Survivable Network Design (SND) [11]
  Description: During network planning, SND optimizes network operations, such as routing and transport, in the presence of high-impact challenges.
  Innovation: Expansion of the methodology to derive a cooperation-friendly routing scheme for Wireless Mesh Networks (WMNs) to cope with node selfishness, explicitly accounting for radio interference constraints [12].

Defensive measure: Game-theoretical node protection [13]
  Description: Node protection schemes are deployed against propagation of malware, which may compromise network nodes and threaten the network resilience.
  Innovation: The game-theoretic formulation of the problem confirms heavy dependence on the underlying topology and allows for optimal tuning of the node protection level.

Defensive measure: Rope-ladder routing [14]
  Description: Multi-path forwarding structure combining link and node protection in a way that the loss gap and QoS penalty (e.g., delay) during fail-over is minimized.
  Innovation: Better use of path diversity for support of real-time traffic, e.g., voice flows, for which burst packet loss during the path recovery time matters.

Defensive measure: Cooperative SIP (CoSIP) [15]
  Description: An extension of the Session Initiation Protocol (SIP), whereby endpoints are organized into a peer-to-peer (P2P) network. The P2P network stores location information and is used when the SIP server infrastructure is unavailable.
  Innovation: Optimal setting of the number of replica nodes in the P2P network for given service reliability levels, in line with an enhanced trace-driven reliability model.

Defensive measure: Virtual service migration
  Description: Enables redundancy and spatial diversity by relocating service instances on the fly, such that a continuous acceptable service can be provided to its users.
  Innovation: Existing approaches are tailored toward resilience to hardware failures within data centers; here, service migration strategies are derived from migration primitives, providing resilience against a variety of challenges.

Table 1. A selection of defensive measures developed in the ResumeNet Project.
defensive measures and dynamic adaptation
strategies.
Consequently, defensive measures primarily
include secure VPN connections between sites,
enabling confidentiality and integrity of the data
in transit. Security mechanisms, such as authentication and firewalls, are also implemented.
Redundancy of infrastructure and implementation diversity of services are exploited to maintain reliability and availability in the presence of
failures caused by software faults.
Incremental challenge analysis is realized
using the Chronicle Recognition System (CRS),
a temporal reasoning system aimed at alarm-driven automated supervision of data networks
[10]. Lightweight detection mechanisms generate alarms based on metrics, such as anomalous
application response times and data processing
request rates. Finally, policy-based adaptation,
implementing remediation and recovery, is
achieved through the specification of the platform’s nominal and challenge context behavior
(i.e., its configuration in response to anticipated
challenges). In our case study, challenge context policies describe configurations in response
to alarms indicating a DDoS attack. For example, modified firewall configurations are defined
to block traffic deemed to be malicious; service
virtualization configurations that make use of
redundant infrastructure are also specified to
load balance increased resource demands. The
transition between behaviors is based on alert
messages, generated via challenge analysis, and
outcomes from continuous threat level assessment. The case study sketched above illustrates
the gain from applying our resilience strategies
in a systematic approach: starting from a risk
assessment, challenges are derived, allowing
defense measures to be deployed. The following step is the specification of chronicles —
temporal descriptions of challenges — for
detection by the CRS, and policy-driven mechanisms to remediate and recover from unforeseen failures.
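The switch between nominal behaviour and a challenge context can be pictured as a small state machine driven by alarms and timeouts, as in the Python sketch below; the state names, alarm types, and configuration actions are illustrative and are not taken from the ICOM deployment.

# An illustrative state machine for switching between nominal behaviour and a
# DDoS challenge context, driven by alarms and a timeout, in the spirit of the
# case study; state names, alarm types, and actions are assumptions.

class PlatformBehaviour:
    def __init__(self):
        self.state = "nominal"

    def apply_configuration(self, description):
        print(f"[{self.state}] {description}")

    def on_alarm(self, alarm_type):
        if self.state == "nominal" and alarm_type == "ddos_detected":
            self.state = "ddos_challenge"
            self.apply_configuration("block traffic deemed malicious; "
                                     "spin up redundant service instances")
        elif self.state == "ddos_challenge" and alarm_type == "timeout":
            # No further alarms within the observation window: revert.
            self.state = "nominal"
            self.apply_configuration("restore nominal firewall and "
                                     "virtualization configuration")

platform = PlatformBehaviour()
platform.on_alarm("ddos_detected")
platform.on_alarm("timeout")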
CONCLUSION
Given the dependence of our society on network infrastructures, and the Internet in particular, we take the position that resilience should
be an integral property of future networks. In
this article, we have described a systematic
approach to network resilience. Aspects of our
work represent a longer-term vision of resilience
and necessitate more radical changes in the way
network operators currently think about
resilience. Further experimentation and closer
engagement with operators through initiatives
like ENISA, which focus on the resilience of
public communication networks and services,
are required before some of this research
becomes standard practice. On the other hand,
application-level measures, such as service virtualization, necessitate fewer changes at the network core and lend themselves to easier implementation.
Further benefits for network practitioners are
anticipated through the use of tools like the
Graph Explorer, which can explore correlations
among metrics at various levels of network
operation.
ACKNOWLEDGEMENTS
The work presented in this article is supported
by the European Commission under Grant No.
FP7-224619 (the ResumeNet project). The
authors are grateful to the members of the
ResumeNet consortium, whose research has
contributed to this article, and in particular to
Christian Doerr and his colleagues at TU Delft
for the work presented in Fig. 3.
[Figure 6 diagram: a data acquisition site (1D/2D barcode scanner, NFC mobile payment device, mobile photo, RFID reader and tags) connects over VPNs and the Internet to a data processing site (collection/aggregation, traceability, transport) and an application provision site (web applications, web services, legal applications, software packages); "defend" annotations mark the protected links, and a remediate-and-recover loop switches between nominal behaviour and challenge contexts on alarms and timeouts, backed by persistent storage.]
Figure 6. The ICOM platform connecting enterprise sites that perform data processing and application provisioning with objects in a smart environment. Selected resilience mechanisms are shown that can be used to mitigate identified challenges.
REFERENCES
[1] J. P. G. Sterbenz et al., “Resilience and Survivability in
Communication Networks: Strategies, Principles, and
Survey of Disciplines,” Elsevier Computer Networks,
Special Issue on Resilient and Survivable Networks, vol.
54, no. 8, June 2010, pp. 1243–42.
[2] P. Cholda et al., “A Survey of Resilience Differentiation
Frameworks in Communication Networks,” IEEE Commun. Surveys & Tutorials, vol. 9, no. 4, 2007, pp.
32–55.
[3] ENISA Virtual Working Group on Network Providers’
Resilience Measures, “Network Resilience and Security: Challenges and Measures,” tech. rep. v1.0, Dec.
2009.
[4] J. P. G. Sterbenz et al., “Evaluation of Network
Resilience, Survivability, and Disruption Tolerance: Analysis, Topology Generation, Simulation, and Experimentation (invited paper),” Springer Telecommun. Sys.,
2011, accepted Mar. 2011.
[5] C. Doerr and J. Martin-Hernandez, “A Computational
Approach to Multi-Level Analysis of Network
Resilience,” Proc. 3rd Int’l. Conf. Dependability, Venice,
Italy, July 2010.
[6] R. Fonseca et al., “X-trace: A Pervasive Network Tracing
Framework,” 4th USENIX Symp. Networked Sys. Design
& Implementation, Santa Clara, CA, June 2007, pp.
271–84.
[7] P. Smith et al., “Strategies for Network Resilience: Capitalizing on Policies,” AIMS 2010, Zürich, Switzerland,
June 2010, pp. 118–22.
[8] D. Agrawal et al., “Policy Ratification,” 6th IEEE Int’l.
Wksp. Policies for Distrib. Sys. and Networks, Stockholm, Sweden, June 2005, pp. 223–32.
[9] V. Chandola, A. Banerjee, and V. Kumar, “Anomaly
Detection: A Survey,” ACM Comp. Surveys, vol. 41, July
2009, pp. 1–58.
[10] M.-O. Cordier and C. Dousson, “Alarm Driven Monitoring Based on Chronicles,” 4th Symp. Fault Detection,
Supervision and Safety for Technical Processes,
Budapest, Hungary, June 2000, pp. 286–91.
[11] E. Gourdin, “A Mixed-Integer Model for the Sparsest
Cut Problem,” Int’l. Symp. Combinatorial Optimization,
Hammamet, Tunisia, Mar. 2010, pp. 111–18.
[12] G. Popa et al., “On Maximizing Collaboration in Wireless Mesh Networks Without Monetary Incentives,” 8th
Int’l. Symp. Modeling and Optimization in Mobile, Ad
Hoc and Wireless Networks, May 2010, pp. 402–11.
[13] J. Omic, A. Orda, and P. Van Mieghem, “Protecting
Against Network Infections: A Game Theoretic Perspective,” Proc. 28th IEEE INFOCOM, Rio de Janeiro, Brazil,
Apr. 2009, pp. 1485–93.
[14] J. Lessman et al., “Rope Ladder Routing: Position-Based Multipath Routing for Wireless Mesh Networks,”
Proc. 2nd IEEE WoWMoM Wksp. Hot Topics in Mesh
Networking, Montreal, Canada, June 2010, pp. 1–6.
[15] A. Fessi et al., “A Cooperative SIP Infrastructure for
Highly Reliable Telecommunication Services,” ACM
Conf. Principles, Sys. and Apps. of IP Telecommun.,
New York, NY, July 2007, pp. 29–38.
BIOGRAPHIES
PAUL SMITH is a senior research associate at Lancaster University’s School of Computing and Communications. He
submitted his Ph.D. thesis in the area of programmable
networking resource discovery in September 2003, and
graduated in 1999 with an honors degree in computer science from Lancaster. In general, he is interested in the various ways that networked (socio-technical) systems fail to
provide a desired service when under duress from various
challenges, such as attacks and misconfigurations, and
developing approaches to improving their resilience. In particular, his work has focused on the rich set of challenges
that face community-driven wireless mesh networks.
DAVID HUTCHISON is director of InfoLab21 and professor of
computing at Lancaster University, and has worked in the
areas of computer communications and networking for
more than 25 years, recently focusing his research efforts
on network resilience. He has served as member or chair of
numerous TPCs (including the flagship ACM SIGCOMM and
IEEE INFOCOM), and is an editor of the renowned Springer
Lecture Notes in Computer Science and the Wiley CNDS
book series.
JAMES P. G. STERBENZ is director of the ResiliNets research
group at the Information & Telecommunication Technology
Center and associate professor of electrical engineering
and computer science at The University of Kansas, a visiting professor of computing in InfoLab21 at Lancaster University, and has held senior staff and research management
positions at BBN Technologies, GTE Laboratories, and IBM
Research. He received a doctorate in computer science
from Washington University in St. Louis, Missouri. His
research is centered on resilient, survivable, and disruption-tolerant networking for the future Internet, for which he is involved in the NSF FIND and GENI programs as well as the EU FIRE program. He is principal author of the book High-Speed Networking: A Systematic Approach to High-Bandwidth Low-Latency Communication. He is a member of the
ACM, IET/IEE, and IEICE.
MARCUS SCHÖLLER is a research scientist at NEC Laboratories
Europe, Germany. He received a diploma in computer science from the University of Karlsruhe, Germany, in 2001
and his doctorate in engineering in 2006 on robustness
and stability of programmable networks. Afterward he
held a postdoc position at Lancaster University, United
Kingdom, focusing his research on autonomic networks
and network resilience. He is currently working on resilience
for future networks, fault management in femtocell
deployments, and infrastructure service virtualization. His
interests also include network and system security, intrusion detection, self-organization of networks, future network architectures, and mobile networks including mesh
and opportunistic networks.
ALI FESSI is a researcher at the Technische Universität
München (TUM). He holds a Ph.D. from TUM and a Diplom
(Master’s) from the Technische Universität Kaiserslautern.
His research currently focuses on the resilience of network
services, such as web and SIP, using different techniques
(e.g., P2P networking, virtualization, and cryptographic
protocols). He is a regular reviewer of several scientific conferences and journals, such as ACM IPTComm, IEEE GLOBECOM, IFIP Networking, and IEEE/ACM Transactions on
Networking.
MERKOURIS KARALIOPOULOS has been a Marie Curie Fellow in the Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Greece, since September 2010. He was a postdoctoral researcher in
the University of North Carolina, Chapel Hill, in 2006 and a
senior researcher and lecturer at ETH Zurich, Switzerland,
from 2007 until 2010. His research interests lie in the general area of wireless networking, currently focusing on network resilience problems related to node selfishness and
misbehavior.
CHIDUNG LAC is a senior researcher at France Telecom
(Orange Labs). Besides activities linked with network architecture evolution, for which he contributes to the design
of scenarios and roadmaps, his research interests are centered on network and services resilience, particularly
through his involvement in European projects such as the
ReSIST Network of Excellence (2006–2009) and the present
STREP ResumeNet (2008–2011). He holds a Doctorat
d’Etat-és-Sciences Physiques (1987) from the University of
Paris XI Orsay.
BERNHARD PLATTNER is a professor of computer engineering
at ETH Zurich, where he leads the communication systems
research group. He has a diploma in electrical engineering
and a doctoral degree in computer science from ETH
Zurich. His research currently focuses on self-organizing
networks, systems-oriented aspects of information security,
and future Internet research. He is the author or co-author
of several books and has published over 160 refereed
papers in international journals and conferences.
TOPICS IN NETWORK AND SERVICE MANAGEMENT
A Survey of Virtual LAN Usage in
Campus Networks
Minlan Yu and Jennifer Rexford, Princeton University
Xin Sun and Sanjay Rao, Purdue University
Nick Feamster, Georgia Institute of Technology
ABSTRACT
VLANs are widely used in today’s enterprise
networks to improve Ethernet scalability and
support network policies. However, manuals and
textbooks offer very little information about how
VLANs are actually used in practice. Through
discussions with network administrators and analysis of configuration data, we describe how three
university campuses and one academic department use VLANs to achieve a variety of goals.
We argue that VLANs are ill-suited to some of
these goals (e.g., VLANs are often used to realize access control policies, but constrain the types
of policies that can be expressed). Furthermore,
the use of VLANs leads to significant complexity
in the configuration of network devices.
INTRODUCTION
Enterprise networks, which connect the computers within a college campus or corporate location, differ markedly from backbone networks.
These networks have distinctive topologies, protocols, policies, and configuration practices. Yet,
the unique challenges in enterprise networks are
not well understood outside of the operator
community. One prominent example is virtual
LANs (VLANs) — a widely used technology
that is barely discussed in networking textbooks.
VLANs were initially intended to allow network administrators to connect a group of hosts
in the same broadcast domain, independent of
their physical location. However, today’s enterprise administrators use VLANs for a variety of
other purposes, most notably for better scalability and flexible specification of policies. However, because VLANs are used for functions they were not designed for, administrators have encountered many problems; understandably, VLANs are at best an incomplete solution for some of these purposes. As a result, managing VLANs is one of the most challenging tasks administrators face.
In this article, we study four networks —
three university campuses and one academic
department — to better understand how VLANs
are used in practice. Through discussions with
network administrators, and targeted analysis of
router configuration data, we have obtained
deeper insights into how the administrators use
VLANs to achieve a variety of design goals, and
the difficulties they encounter in the process. We
show that VLANs are not well-suited for many
of the tasks that they support today, and argue
that future enterprise network architectures
should decouple policy specification from scalability concerns with layer-2 protocols, topology,
and addressing.
After a brief survey of VLAN technology, we
describe how the four networks use VLANs to
support resource isolation, access control, decentralized management, and host mobility. However, VLANs were not designed with these goals in
mind — network administrators use VLANs for lack of a better alternative. We argue that VLANs are too crude a mechanism for specifying policies, due to scalability constraints (on the number and size of VLANs) and the coarse-grained ways of assigning traffic to different VLANs. Further, VLAN configuration is far too complicated, due to the tight coupling with spanning-tree construction, failure recovery, host address assignment, and IP routing, as discussed later. Finally, we conclude the article.
VIRTUAL LOCAL AREA NETWORKS
An enterprise network consists of islands of Ethernet switches connected both to each other and
to the rest of the Internet by IP routers, as shown
in Fig. 1. We describe how administrators group
related hosts into VLANs, and how the switches
and routers forward traffic between hosts.
CONVENTIONAL LOCAL AREA NETWORKS
In a traditional local area network (LAN), hosts
are connected by a network of hubs and switches. The switches cooperate to construct a spanning tree for delivering traffic. Each switch forwards Ethernet frames based on the destination MAC address. If the switch contains no forwarding-table entry for a frame’s destination MAC address, the switch floods the frame over
the entire spanning tree. A switch learns how to
reach a MAC address by remembering the
incoming link for frames sent by that MAC
address and creating a mapping between the
MAC address and that port.
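The flood-and-learn behaviour just described can be captured in a few lines of Python; the frame representation and port names below are simplifications for illustration only.

# A minimal sketch of Ethernet flood-and-learn forwarding as described above;
# the frame representation and port names are simplified assumptions.

class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}        # MAC address -> port it was learned on

    def receive(self, in_port, src_mac, dst_mac):
        # Learn: remember which port the source MAC was seen on.
        self.mac_table[src_mac] = in_port
        # Forward: use the table if possible, otherwise flood.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]   # flood

switch = LearningSwitch(ports=["p1", "p2", "p3"])
print(switch.receive("p1", src_mac="aa:aa", dst_mac="bb:bb"))  # flood: p2, p3
print(switch.receive("p2", src_mac="bb:bb", dst_mac="aa:aa"))  # learned: p1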
To connect to the rest of the enterprise network (and the rest of the Internet), the island of
Ethernet switches connects to IP routers that
forward traffic to and from remote hosts. Each
host interface in the LAN has an IP address
from a common IP prefix (or set of prefixes).
Traffic sent to an IP address in the same subnet
stays within the LAN; the sending host uses the
Address Resolution Protocol (ARP) to determine the MAC address associated with the destination IP address. For traffic destined to remote
IP addresses, the host forwards the packets to
the gateway router, which forwards packets further toward their destinations.
COMMUNICATION WITHIN A VLAN
Administrators use VLANs to construct network
segments that behave logically like a conventional LAN but are independent of the physical
locations of the hosts; for example, hosts H1 and
H3 in Fig. 1 both belong to VLAN1. As in a
conventional physical LAN, the switches in a
VLAN construct a spanning tree, and use flooding and learning to forward traffic between
hosts. For example, the switches S3, S4, and S5
form a spanning tree for VLAN2.
Communication between hosts in the same
VLAN stays within the VLAN, with the switches
forwarding Ethernet frames along the spanning
tree to the destination MAC address. For example, hosts H2 and H4 communicate over the 2
spanning tree in VLAN2 based on their MAC
addresses. Similarly, hosts H1 and H3 communicate over the spanning tree in VLAN1, where
some of the IP routers (e.g., R1, R2, and R2)
may also act as switches in the spanning tree;
alternatively, a tunnel between R1 and R2 could
participate in VLAN1 so the links in the IP backbone do not need to participate in the VLANs.
COMMUNICATION BETWEEN VLANS
Each host has an IP address from an IP prefix
(or prefixes) associated with its VLAN; IP
routers forward packets based on these prefixes,
over paths computed in the routing protocol
(e.g., Open Shortest Path First [OSPF] or Routing Information Protocol [RIP]). Hence, traffic
between hosts in different VLANs must traverse
an intermediate IP router. For example, traffic
between hosts H3 and H4 would traverse router
R2, even though the two hosts connect to the same
switch. Specifically, when sending traffic to H4,
host H3 forwards the packets to its gateway
router R2, since the destination IP address
belongs to a different prefix. R2 would then look
up the destination IP address to forward the
packet to H4 in VLAN2. If H4 sends an IP packet to H1, then H4’s router R3 forwards the packet based on the IP routing protocol toward the
router announcing H1’s IP prefix, and that router
would then forward the packet over the spanning tree for VLAN1.
CONFIGURING VLAN PORTS
Supporting VLANs requires a way to associate
switch ports with one or more VLANs. Administrators configure each port as either an access
[Figure 1 diagram: hosts H1–H4 attach to Ethernet islands of switches S1–S6, which carry VLAN1 and VLAN2 and connect to an IP router backbone of routers R1–R5.]
Figure 1. Enterprise network with Ethernet islands interconnected by IP routers.
port, which is connected to a host; or a trunk
port, which is connected to another switch. An
access port typically transports traffic for a single
VLAN; the VLAN associated with a port may
be either statically configured or dynamically
assigned when the host connects, based on the
host’s MAC address (e.g., using VLAN Management Policy Server VMPS [1]). In either case,
the access port can tag incoming frames with the
12-bit VLAN identifier and removes the tag
from outgoing frames, obviating the need for the
hosts to support VLANs.
In contrast, a trunk port may carry traffic for
multiple VLANs; for example, switch S4’s port
connecting to S5 must forward traffic for both
VLAN1 and VLAN2 (and participate in each
VLAN’s spanning tree protocol), but the trunk
port to S3 does not. The administrators either
manually configure each trunk port with a list of
VLAN identifiers, or run a protocol like VLAN
Trunking Protocol (VTP) [2] or Multiple VLAN
Registration Protocol (MVRP) [3] to automatically determine which VLANs a trunk link should
handle. Configuring a VLAN also requires configuring the gateway router to announce the associated IP prefixes into the routing protocol; each
host interface must be assigned an IP address
from the prefix associated with its VLAN.
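A small Python sketch of the access/trunk distinction is given below; the VLAN numbers, port names, and frame representation are illustrative assumptions, with the trunk memberships loosely mirroring the S4-to-S5 and S4-to-S3 example above.

# A small sketch of VLAN tagging at access and trunk ports, as described
# above; port names and VLAN IDs are illustrative.

class AccessPort:
    """Carries one VLAN; tags frames on ingress, strips the tag on egress."""
    def __init__(self, vlan_id):
        self.vlan_id = vlan_id

    def ingress(self, frame):
        return {**frame, "vlan": self.vlan_id}      # add the 12-bit tag

    def egress(self, tagged_frame):
        frame = dict(tagged_frame)
        frame.pop("vlan", None)                     # the host never sees it
        return frame

class TrunkPort:
    """Carries several VLANs; forwards only frames for allowed VLANs."""
    def __init__(self, allowed_vlans):
        self.allowed_vlans = set(allowed_vlans)

    def forwards(self, tagged_frame):
        return tagged_frame.get("vlan") in self.allowed_vlans

access = AccessPort(vlan_id=10)                    # stands in for VLAN1
trunk_to_s5 = TrunkPort(allowed_vlans={10, 20})    # carries both VLANs
trunk_to_s3 = TrunkPort(allowed_vlans={20})        # carries only VLAN2

tagged = access.ingress({"src": "H1", "dst": "H3"})
print(trunk_to_s5.forwards(tagged))   # True
print(trunk_to_s3.forwards(tagged))   # False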
VLAN USAGE IN
CAMPUS NETWORKS
Our campus network administrators use VLANs
to achieve four main policy objectives — limiting
the scope of broadcast traffic, simplifying access
control policies, supporting decentralized network management, and enabling seamless host
mobility for wireless users. The four networks
include two large universities (campuses 1 and 2)
and a department network (campus 3) within
another university-wide network (campus 4). All
four networks primarily run IPv4, with relatively
limited experimental deployment of IPv6.
SCOPING BROADCAST TRAFFIC
VLANs enable administrators to limit the scope
of broadcast traffic and network-wide flooding,
to reduce network overhead and enhance both
privacy and security.
Limiting the Broadcast/Flooding Overhead —
End hosts broadcast Dynamic Host Configuration Protocol (DHCP) traffic when joining the
LAN, and routinely broadcast Address Resolution Protocol (ARP) requests to learn the medium access control (MAC) addresses of other
hosts in the same IP subnet. For example, campus 2 has one IP subnet with up to 4000 hosts
with around 300 packets/s of broadcast traffic;
this broadcast traffic is dominated by ARP,
iTunes broadcast messages, and NetBios. It not
only consumes network bandwidth, but also consumes bandwidth and energy resources on the
end hosts (particularly for mobile devices).
Switches also flood packets to a destination
MAC address they have not yet learned how to
reach. This consumes bandwidth resources, especially if the switches’ forwarding tables are not
large enough to store an entry for each MAC
address on the LAN. Administrators often divide
large networks into multiple VLANs to limit the
scope of broadcast messages and flooding traffic.
For example, campuses 1 and 4 assign each
building a different IP subnet, each associated
with its own VLAN. The resulting broadcast
domains are small enough to limit the overhead
on the switches and the end hosts.
Protecting Security and Privacy — Broadcast
and flooding traffic also raise security and privacy concerns. Sending excessive broadcast traffic
is an effective denial-of-service attack on the
network. In addition, a malicious host can intentionally overload switch forwarding tables (e.g.,
by spoofing many source MAC addresses), forcing switches to flood legitimate traffic that can
be easily monitored by the attacking host. ARP
is also vulnerable to man-in-the-middle attacks,
where a malicious host sends unsolicited ARP
responses to impersonate another host on the
LAN, thereby intercepting all traffic sent to the
victim. Network administrators can reduce these
risks by constraining which users can belong to
the same VLAN. For example, campus 3 has
separate subnets for faculty, graduate students,
and undergraduate students, and assigns each
subnet to one VLAN based on the registered
MAC addresses of the user machines. This
ensures that students cannot intercept faculty
traffic (e.g., a midterm exam en route to the
printer), and that research experiments on the
graduate-student VLAN do not inadvertently
overload the faculty VLAN.
SIMPLIFYING ACCESS CONTROL POLICIES
VLANs provide an effective way to enforce
access control by directing inter-VLAN traffic
through routers. In addition, by allowing administrators to assign related hosts to IP addresses
in the same subnet, VLANs simplify access control configuration by making packet classification
rules more concise.
Imposing Access Control Policies — VLANs
provide a way to restrict communication between
hosts. In Fig. 1, router 3 (R3) can apply access
control lists (ACLs) to limit the traffic between
hosts H3 and H4 that belong to different
VLANs. Along the same lines, administrators do
not place hosts in the same VLAN unless they
are allowed to communicate freely. Campus 3,
for example, places all infrastructure services —
such as e-mail and DHCP servers — on a single
VLAN since these managed services all trust
each other. As another example, campus 1 has
several “private” VLANs that have no IP router
connecting them to the rest of the IP network;
for example, the automatic teller machines
(ATMs) belong to a private VLAN to protect
them from attacks by other hosts.
Concise Access Control Lists — Routers and
firewalls apply ACLs based on the five-tuple of
the source and destination IP addresses, the
source and destination TCP/UDP port numbers,
and the protocol. Wildcards enable shorter lists
of rules for permitting and denying traffic, which
simplifies ACL configuration and also makes
efficient use of the limited high-speed memory
(e.g., TCAMs) for applying the rules. VLANs
enable more compact ACLs by allowing administrators to group hosts with common access control policies into a common IP subnet. For
example, campus 3 identifies user machines
through a small number of IP prefixes (corresponding to the faculty and student VLANs),
allowing concise ACLs for traffic sent by user
machines (e.g., to ensure only SMTP traffic is
allowed to reach the email servers on the infrastructure VLAN).
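The effect of grouping hosts into a common subnet on ACL conciseness can be illustrated with Python's standard ipaddress module; the prefixes, the mail-server address, and the single permit rule below are hypothetical and are not drawn from the surveyed campuses.

# An illustration of why common subnets keep ACLs concise: one prefix-based
# rule covers every host in the user VLANs. Prefixes and the example rule are
# hypothetical, not taken from the surveyed campuses.
from ipaddress import ip_address, ip_network

USER_PREFIX = ip_network("10.1.0.0/22")     # assumed faculty/student VLANs
MAIL_SERVER = ip_address("10.2.0.25")       # assumed infrastructure VLAN host

def permit(src_ip, dst_ip, dst_port):
    """Single wildcard-style rule: user hosts may reach the mail server via SMTP."""
    return (ip_address(src_ip) in USER_PREFIX
            and ip_address(dst_ip) == MAIL_SERVER
            and dst_port == 25)

print(permit("10.1.2.7", "10.2.0.25", 25))    # True: any user host matches
print(permit("10.3.9.1", "10.2.0.25", 25))    # False: outside the user prefix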
Preventing Source IP Address Spoofing —
Source IP address spoofing is a serious security
problem, since spoofing allows attackers to
evade detection or shift blame for their attacks
to others. Assigning host addresses from a common IP prefix simplifies the preventive filtering
of packets with spoofed source IP addresses.
Hosts in the same VLAN are assigned IP
addresses from the same subnet(s). This allows
network administrators to configure ACLs at the
VLAN’s gateway router to drop any packets with
source IP addresses from other prefixes. Campus
3 does precisely that.
Supporting Quality of Service — Classifying
packets based on IP prefixes applies not only to
access control, but also to quality of service
(QoS) policies. For example, administrators can
configure a router to place IP packets in different queues (with different priority levels) based
on the source or destination IP prefix, if hosts
are grouped into VLANs based on their QoS
requirements. None of the campuses in our
study apply these kinds of QoS policies.
DECENTRALIZING NETWORK MANAGEMENT
VLANs allow administrators to delegate some
management tasks to individual departments.
VLANs also simplify network troubleshooting by
allowing an administrator to observe connectivity
from any part of the campus simply by trunking
a port to a VLAN.
Federated Management — Campus network
administrators sometimes assign all hosts in one
department to a VLAN, so each department can
have its own control over its hosts in different
locations on campus while sharing the same
physical infrastructure. Some campuses allocate
portions of the VLAN ID space to departments
and allow those departments to manage their
networks independently. For example, campus 1
has a university-wide IT group and many smaller
IT groups. The university-wide group allocates a
contiguous block of IP addresses to one VLAN
and hands it over to a smaller IT group. One IT
group manages a “classroom” VLAN that consists of a computer in each classroom across 60
buildings. Campus 2 allocates a portion of the
VLAN ID space to the computer science department and provides a web interface to help the
administrators manage the router and firewall
settings between the department and the rest of
the campus. Campus 4 assigns different gymnasiums across the campus to the same VLAN;
administrators for that VLAN can then set firewall rules independently from the rest of the
campus.
Easier Troubleshooting — VLANs allow network administrators to group hosts based on
policy requirements, independent of their locations. If two hosts in the same policy group are
in different locations on the campus, administrators can still assign them to the same VLAN so
that they can communicate with each other,
without interference from intermediate firewalls
or routers. In campus 4, the dormitory VLAN
spans the campus, including places outside the
dormitories; such a setup allows network administrators to help student users diagnose problems
since they can put a host on this VLAN anywhere on the campus. Campus 2 also has some
VLANs across campus, such as a network-wide
VLAN for the IT support team and a VLAN for
deploying new experimental management architectures based on OpenFlow [4].
ENABLING HOST MOBILITY
VLANs make host mobility easier on a campus
wireless network, because hosts can retain their
original IP addresses when they move from one
access point to another. Allocating a single
VLAN to the campus wireless network, as is
done in campus 2, allows devices to move anywhere on the campus without having to obtain a
new IP address. The campus 2 wireless network
has about 6000 active hosts on the same VLAN.
These hosts include laptops, mobile phones,
passenger counters, and vehicle locators. As
users move across the campus on foot or in
vehicles, they can remain connected to the campus network, migrating between access points
without experiencing disruptions to ongoing
connections.
PROBLEM:
LIMITED GRANULARITY OF POLICY
VLANs are a relatively inflexible way to support
policies. In this section, we discuss three main
limitations VLANs impose on the granularity of
policies — limits on the number of VLANs, limits on the number of hosts per VLAN, and the
difficulty of assigning an access port to multiple
VLANs without end-host support. We also discuss the incomplete ways administrators try to
work around these limitations.
LIMITED NUMBER OF VLANS
The total number of VLANs is limited because
of built-in protocol limitations (i.e., VLAN ID
space) and implementation limitations (i.e.,
switch and router resources):
• VLAN ID space: The VLAN ID is a 12-bit header field, limiting a network to 4096 VLANs (IEEE 802.1QinQ provides a way to extend the ID space using multiple tags).
• Switch memory: Limited memory for storing bridge tables often restricts individual
switches to supporting 300–500 VLANs.
• Router resources: Inter-VLAN traffic
imposes additional load on the routers.
Administrators work around these limitations in
two ways.
Placing Multiple Groups in the Same VLAN
— Administrators can assign multiple groups of
hosts to a single VLAN and configure finer-grained access control policies at the routers to
differentiate between hosts in different groups.
Campus 1 combines some groups of hosts
together, assigning each group a different block
of IP addresses within a larger shared subnet.
From the configuration data, we see that about
11 percent of the VLANs have ACLs expressed
on smaller IP address blocks. For example, one
VLAN contains the DNS servers, logging and
management servers, and some dorm network
web servers. Although these hosts reside in different locations, are used for different purposes,
and have different reachability policies, they are
placed in a single VLAN because they are managed by an IT group that has a single VLAN ID
and one IP subnet.
Reusing the Limited VLAN Identifiers — To
deal with limitations on the number of VLAN
IDs, administrators can use the same VLAN ID
for multiple VLANs, as long as the VLANs do
not have any links or switches in common.
Unfortunately, reusing VLAN IDs makes configuration more difficult, since administrators must
take care that these VLANs remain disjoint as
new hosts, links, and switches are added to the
network. Campus 1, in particular, reuses VLAN
IDs quite extensively.
LIMITED NUMBER OF HOSTS PER VLAN
The overheads of broadcast traffic, flooding, and
spanning tree impose limits on the number of
hosts in each VLAN. For example, campus 1 has
a wireless VLAN with 3000 access points and
thousands of mobile hosts that receive a large
amount of broadcast traffic. These scalability
limitations make it difficult to represent large
groups with a single VLAN. Administrators
work around this problem by artificially partitioning these larger groups.
Dividing a Large Group into Multiple
VLANs — A large group can be divided into
multiple VLANs. For example, campus 1 has
public computer laboratories with 2500 hosts
across 16 VLANs. The 1200 hosts in one academic college in campus 1 are divided into eight
VLANs. Dividing a large group into multiple
VLANs unfortunately prevents mobile hosts
from retaining their IP addresses as they move
from one location to another. Additionally, the
VLANs must be configured with the same access
control policy to retain the semantics that would
exist if hosts belonged to a single larger group.
COARSE-GRAINED ASSIGNMENT OF
TRAFFIC TO VLANS
Although they are natural for grouping traffic
by end host, VLANs are a clumsy way to group
traffic across other dimensions (e.g., by application). With end-host support for VLAN tagging,
hosts can assign different virtual interfaces to
different VLANs. For example, a computer
hosting multiple virtual machines can run a software switch that has a different access port
(and, hence, can assign a different VLAN) for
each virtual interface. However, the end host
must support VLANs, making it hard to work
with the heterogeneous user devices common
on college campuses. In addition, the campus
administrator must trust the user machine to
faithfully apply the appropriate VLAN tag —
introducing potential security risks. Although
protocols like 802.1x can help authenticate
hosts, many campuses do not force all hosts to
use these mechanisms.
Unexpected problems can arise when administrators assign VLANs directly to access ports. For
example, campus 3 assigns each access port to a
(single) VLAN dynamically, based on the source
MAC address of the attached host. If multiple
hosts connect to a single wall jack (e.g., via a
common hub or an unmanaged switch), the hosts
are assigned to the same VLAN — based on the
MAC address of whatever host sends the first
packet. Since campus 3 has different VLANs for
faculty and students, this can raise security problems when a student plugs into a hub in a faculty
member’s office or vice versa. The same problem
arises if a single computer runs multiple virtual
machines, each with its own virtual interface and
MAC address. By connecting to the same switch
access port, all of these virtual interfaces would
be assigned to the same VLAN, a problem raised
by the administrators in campus 2.
Restricting each access port to a single VLAN
significantly limits the kinds of policies the network can support. For example, administrators
cannot assign a single host interface to multiple
groups (e.g., a faculty member in the systems
group cannot belong to both the faculty VLAN
and the systems-group VLAN) or have different
applications belong to different groups (e.g., web
traffic cannot belong to a different VLAN than
Skype traffic).
PROBLEM:
COMPLEX CONFIGURATION
Although Ethernet was designed with the goal of
“zero configuration,” VLAN configuration is
challenging and error-prone [5], for two main
reasons. First, each host’s IP address must be
consistent with the IP subnet of its VLAN. Second, the switches require configuration to ensure
each VLAN has an efficient spanning tree that
remains connected under common failure scenarios.
HOST ADDRESS ASSIGNMENT
Administrators associate each VLAN with one
or more IP subnets and must ensure that the
host interfaces within that VLAN are assigned
addresses from that block. The tight coupling
between VLANs and IP address assignment
leads to two problems.
Wasting IP Addresses — All four campuses
have a one-to-one mapping between an IP subnet and a VLAN. Since IP prefixes must align
with power-of-two boundaries, VLANs can lead
to fragmentation of the available address space
— especially if some VLANs have fewer hosts than others (IPv6 might solve the problem but will not be widely deployed in the foreseeable future). Campus 1, for instance, originally
assigned a /24 prefix to each VLAN but, after
running out of address space, was forced to use
smaller subnets for some VLANs.
Complex Host Address Assignment — To
ensure that host IP addresses are consistent with
the VLAN subnets, Campus 1 manually configures each host with a static IP address from the
appropriate VLAN, except for a few VLANs
(e.g., the wireless network) that use DHCP. The
other campuses use DHCP to automatically
assign IP addresses based on the hosts’ MAC
addresses. However, the administrators must
ensure that DHCP requests reach the DHCP
server, even though broadcast traffic only reaches machines in the same VLAN. Rather than
devote a DHCP server to each VLAN, campuses
2, 3, and 4 use relay agents to forward requests to
a common DHCP server, requiring additional
configuration on the routers [6]. Either way, the
DHCP server configuration must be consistent
with whatever system is used to assign hosts to
VLANs.
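The consistency requirement between host addresses and VLAN subnets lends itself to a simple automated check, sketched below in Python; the VLAN names, subnets, and host assignments are made up for the example.

# A small sketch of checking that host addresses are consistent with their
# VLAN's subnet, the coupling discussed above; all values are made up.
from ipaddress import ip_address, ip_network

vlan_subnets = {
    "vlan-faculty": ip_network("192.168.10.0/24"),
    "vlan-students": ip_network("192.168.20.0/24"),
}

host_assignments = [
    ("host-a", "vlan-faculty", "192.168.10.5"),
    ("host-b", "vlan-students", "192.168.10.7"),   # inconsistent on purpose
]

for host, vlan, addr in host_assignments:
    if ip_address(addr) not in vlan_subnets[vlan]:
        print(f"{host}: address {addr} is outside {vlan}'s subnet "
              f"{vlan_subnets[vlan]}")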
SPANNING TREE COMPUTATION
Switches must be configured to know which
VLANs they should support on each trunk link.
Administrators must explicitly configure both
ends of every trunk link with the list of VLANs
to participate in. For example, in Fig. 1, VLAN1
must be allowed on the link between S1 and S2,
while VLAN2 need not be permitted. Wrongly
omitting a VLAN from that list disrupts communication between the hosts on that VLAN.
Unnecessarily including extra VLANs leads to
extra broadcast/flooding traffic and larger bridge
tables. Determining which links should participate in a VLAN, and which switch should serve
as the root bridge of the spanning tree, is often
difficult.
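Whether the trunk links allowed for a VLAN keep its member switches connected can be checked with a simple traversal, as in the Python sketch below; the topology and VLAN membership are invented, and the omitted link deliberately cuts off one switch to show the failure mode.

# A sketch of checking that the trunk links allowed for a VLAN keep its member
# switches connected (the failure mode of wrongly omitting a VLAN from a trunk
# list, discussed above). The topology and VLAN membership are invented.
from collections import deque

vlan_switches = {"S1", "S2", "S4", "S5"}            # switches hosting the VLAN
allowed_links = [("S1", "S2"), ("S2", "S4")]        # trunks carrying the VLAN

def vlan_is_connected(switches, links):
    adjacency = {s: set() for s in switches}
    for a, b in links:
        if a in adjacency and b in adjacency:
            adjacency[a].add(b)
            adjacency[b].add(a)
    start = next(iter(switches))
    seen, queue = {start}, deque([start])
    while queue:
        for neighbour in adjacency[queue.popleft()]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen == switches

print(vlan_is_connected(vlan_switches, allowed_links))   # False: S5 is cut off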
Limitations of Automated Trunk Configuration — Manual configuration of trunk links is
error-prone [7], and inconsistencies often arise
as the network evolves [8]. Automated tools, like
Cisco’s VLAN Trunk Protocol (VTP) [2], reduce
the need for manual trunk configuration. However, these tools require administrators to divide
the network into VTP domains, where switches
in the same domain cooperate to identify which
VLANs each link should support. Each switch
must participate in all VLANs in its domain,
leading to extra overhead; in fact, some commercial switches can only participate in a handful of
spanning-tree instances, limiting the effective
size of VTP domains. As a result, campus 1 is
divided into several smaller VTP domains, using
manually-configured trunk links to interconnect
the domains. Campus 2 does not use VTP
because some of its switches come from another
vendor that does not support Cisco’s proprietary
protocol. Campus 3 does not use VTP because
the administrators prefer to know by design
which links participate in each VLAN, to simplify network troubleshooting.
Enabling Extra Links to Survive Failures —
Although Ethernet switches can compute a
spanning tree automatically, administrators
must often intervene to ensure that each VLAN
remains connected after a failure. To prevent
partitioning of the VLANs, campus 1 installs
parallel links between switches and treats them
as one logical link; this ensures that the VLANs
remain connected even if a physical link fails.
To survive switch failures, campus 1 configures
the trunk links between the core switches to
participate in all VLANs. In general, identifying
which links to include is challenging, since
enabling too many links in the VLAN is wasteful, but having too few can lead to partitions
during failures.
Distributing Load Over the Root Bridges —
The switches near the root of a spanning tree
must carry a large amount of traffic. Dividing
the network into multiple VLANs can help distribute the load over multiple spanning trees
with different root bridges. By default, the switch
with the smallest identifier becomes the root of
the spanning tree, resulting in the same switch
serving as the root bridge in multiple VLANs.
To distribute traffic load more evenly, administrators often configure the root bridge of each
VLAN manually. For example, the administrators of campus 1 select the most powerful switches to serve as root bridges.
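The election rule behind this tuning is simple: the switch with the numerically smallest (priority, MAC address) bridge ID becomes root, so lowering a switch's priority for a VLAN makes it that VLAN's root. The sketch below illustrates this; the switch names, MAC addresses, and priority values are hypothetical.

switches = {"core1": "00:11:22:33:44:01", "core2": "00:11:22:33:44:02"}
DEFAULT_PRIORITY = 32768                       # 802.1D default bridge priority
priorities = {
    ("VLAN1", "core1"): 4096,                  # prefer core1 as root for VLAN1
    ("VLAN2", "core2"): 4096,                  # prefer core2 as root for VLAN2
}

def root_bridge(vlan):
    """Switch with the smallest (priority, MAC) bridge ID for this VLAN wins."""
    return min(switches,
               key=lambda sw: (priorities.get((vlan, sw), DEFAULT_PRIORITY),
                               switches[sw]))

for vlan in ("VLAN1", "VLAN2"):
    print(vlan, "root bridge:", root_bridge(vlan))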
CONCLUSION
We have surveyed four campus networks to better understand and illustrate how VLANs are
used in practice. Our analysis indicates that
VLANs are used for many objectives that they
were not originally intended for, and are often
ill-suited for the tasks. Further, the use of VLANs
complicates network configuration management.
We believe future enterprise networks should
look at ways to minimize the use of VLANs and
explore more direct ways to achieve the network
administrators’ objectives, with the goal of making
management easier for campus and enterprise
administrators.
To extend our understanding of the VLAN
usage in practice, we call for operators of campus and enterprise networks to participate in the
survey available at [9].
ACKNOWLEDGMENTS
We thank Russ Clark (Georgia Tech), Brad
Devine (Purdue), Duane Kyburz (Purdue), Peter
Olenick (Princeton), and Chris Tengi (Princeton) for sharing their expertise and experiences
about network management and VLANs.
REFERENCES
[1] “VLAN Management Policy Server,” http://www.cisco.com/en/US/tech/tk389/tk689/technologies_tech_note09186a00800c4548.shtml.
[2] “VLAN Trunking Protocol,” http://www.cisco.com/en/US/tech/tk389/tk689/technologies_tech_note09186a0080094c52.shtml.
[3] “Multiple VLAN Registration Protocol,” http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/catos/8.x/configuration/guide/mvrp.pdf.
[4] N. McKeown et al., “OpenFlow: Enabling Innovation in
Campus Networks,” ACM Comp. Commun. Rev., Apr.
2008.
[5] T. Benson, A. Akella, and D. Maltz, “Unraveling the
Complexity of Network Management,” Proc. NSDI, Apr.
2009.
[6] C. J. Tengi et al., “autoMAC: A Tool for Automating
Network Moves, Adds, and Changes,” Proc. Large
Installation Sys. Admin. Conf., 2004.
[7] P. Garimella et al., “Characterizing VLAN Usage in an
Operational Network,” Proc. Wksp. Internet Network
Mgmt., Aug. 2007.
[8] X. Sun et al., “A Systematic Approach for Evolving
VLAN Design,” IEEE INFOCOM, 2010.
[9] http://www.surveymonkey.com/s/X5K5GLM.
BIOGRAPHIES
MINLAN YU ([email protected]) is a Ph.D. student
in the computer science department at Princeton University. She received her B.S. in computer science and mathematics from Peking University in 2006 and her M.S. in
computer science from Princeton University in 2008. She
has interned at Bell Labs, AT&T Labs Research, and
Microsoft. Her research interest is in network virtualization,
and enterprise and data center networks.
XIN SUN is a Ph.D. candidate in the School of Electrical and
Computer Engineering at Purdue University, West Lafayette,
Indiana, where he works with Prof. Sanjay Rao. His research
interests are in the design and configuration of large-scale
enterprise networks, and the migration of such networks
to new architectures. He received his B.Eng. degree in computer engineering from the University of Science and Technology of China in 2005.
NICK FEAMSTER is an associate professor in the College of
Computing at Georgia Tech. He received his Ph.D. in computer science from MIT in 2005, and his S.B. and M.Eng.
degrees in electrical engineering and computer science
from MIT in 2000 and 2001, respectively. His research
focuses on many aspects of computer networking and networked systems, including the design, measurement, and
analysis of network routing protocols, network operations
and security, and anonymous communication systems. In
December 2008 he received the Presidential Early Career
Award for Scientists and Engineers (PECASE) for his contributions to cybersecurity, notably spam filtering. His honors
include the Technology Review 35 “Top Young Innovators
Under 35” award, a Sloan Research Fellowship, the NSF
CAREER award, the IBM Faculty Fellowship, and award
papers at SIGCOMM 2006 (network-level behavior of
spammers), NSDI 2005 (fault detection in router configuration), Usenix Security 2002 (circumventing web censorship
using Infranet), and Usenix Security 2001 (web cookie analysis).
SANJAY RAO is an assistant professor in the School of Electrical and Computer Engineering, Purdue University. He
obtained his Ph.D. in computer science from Carnegie Mellon University. His current research interests are in enterprise management and cloud computing. In the past, he
has done pioneering work on live streaming using peer-topeer systems. He is a recipient of an NSF Career award,
and has served as a Technical Program Chair of the
INM/WREN workshop.
JENNIFER REXFORD is a professor in the Computer Science
Department at Princeton University. From 1996 to 2004
she was a member of the Network Management and Performance Department at AT&T Labs–Research. She is coauthor of the book Web Protocols and Practice
(Addison-Wesley, May 2001). She received her B.S.E.
degree in electrical engineering from Princeton University
in 1991 and her Ph.D. degree in EECS from the University
of Michigan in 1996.
TOPICS IN NETWORK AND SERVICE MANAGEMENT
Toward Fine-Grained
Traffic Classification
Byungchul Park and James Won-Ki Hong, POSTECH
Young J. Won, Internet Initiative Japan
ABSTRACT
A decade of research on traffic classification
has provided various methodologies to investigate the traffic composition in data communication networks. Many variants or combinations of
such methodologies have been introduced continuously to improve the classification accuracy
and efficiency. However, the level of classification detail is often limited to identifying the protocols or applications in use. In this article, we propose a fine-grained traffic classification scheme based on an analysis of existing classification methodologies. This scheme allows traffic to be classified according to the functionalities in
an application. In particular, we present a traffic
classifier which utilizes a document retrieval
technique and applies multiple signatures to
detect the peer-to-peer application traffic
according to different functionalities in it. We
show that the proposed scheme can provide
more in-depth classification results for analyzing
user contexts.
INTRODUCTION
Understanding traffic behavior is an important
part of network operations and management.
A decade of research on traffic classification
has provided various techniques to identify
types of traffic information. As the Internet
continuously evolves in scope and complexity,
its traffic characteristics are also changing in
terms of traffic composition and volume. Peer-to-peer (P2P) and multimedia applications have rapidly grown in popularity, and
their traffic occupies a great portion of the
total Internet traffic volume these days. Kim et
al. [1] have shown that P2P applications generate a substantial volume in enterprise networks. In 2008, a study by a Japanese Internet
service provider (ISP) [2] observed that a significant portion of P2P traffic has recently been replaced by multimedia and web traffic. In particular, a newer generation of P2P applications incorporates various obfuscation strategies, such as ephemeral port allocation and
proprietary protocols, to avoid detection and
filtering. A popular communication application
like Skype eludes detection by payload encryption or plain-text ciphers [3]. The dynamic
nature of Internet traffic adversely affects the
accuracy of traffic classification and makes it a
more challenging task.
The previous studies have discussed various
classification methodologies (e.g., well-known
port number matching, payload contents analysis, machine learning, etc.). Many variants of
such methodologies have been introduced continuously to improve the classification accuracy
and efficiency. However, it is extremely difficult
for any method to claim 100 percent accuracy
due to the fast-changing and dynamic nature of Internet traffic. The classification accuracy is also questionable since there is often no ground truth dataset available. Moreover, each research effort aims at a different level of classification. Some had only a coarse classification goal, such as classifying the traffic protocol or application type, while others had a more detailed goal, such as identifying the exact application name. Therefore, it is often unfair to cross-compare classification methods in terms of accuracy. To overcome this issue, we need to investigate how we can provide more meaningful information with such limited traffic classification results, rather than focusing on improving classification accuracy by 1 or 2 percent.
This article proposes the concept of fine-grained traffic classification. A single application typically has several functions, and each function triggers unique traffic characteristics. Fine-grained traffic classification can classify the various types of traffic that are generated by a single
application. We investigated existing traffic classification studies in terms of classification
schemes rather than classification methods.
While previous studies focused on classification
methods, we have focused on classification output itself. By analyzing the output categories of
the other classification research, we propose a
new traffic classification scheme. We also present an example of fine-grained traffic classification by applying it to real P2P application traffic.
The organization of the article is as follows.
We present our related work and our motivation
for fine-grained traffic classification. We explain
our proposed method, which utilizes a text
retrieval technique. We then describe our experiments with the real-world traffic dataset. Finally, concluding remarks and possible future work
are discussed.
BACKGROUND
In this section, we describe different traffic classification research efforts according to their classification requirements and analysis capabilities.
RELATED WORK
Application Protocol Breakdown Scheme —
Traffic classification is a process of identifying
network traffic based on the features that can be
passively observed in the traffic. The features
and classification results may vary according to
specific classification requirements and analysis
needs. In early days, traffic classification was
performed as part of traffic characterization
work, often motivated by the dominance of a
certain protocol in a network. Several studies [4,
5] analyzed the packet and byte distributions
regarding transport and application layer protocols. TCP/UDP port numbers were mapped to well-known application protocols. The application
protocol breakdown scheme shows a rough estimation of the traffic composition and is still a
popular solution at the Internet backbone
because of its high and even increasing traffic
volumes and limited computing resources for
traffic analysis.
Borgnat et al. [2] showed that a significant portion of P2P traffic has been replaced by multimedia and Web traffic by analyzing longitudinal traffic characteristics of trans-Pacific backbone links. Although they aggregated the P2P traffic with other unknown protocols, they also utilized well-known port numbers for application protocol
breakdown.
Traffic Clustering Scheme — Straightforward classification approaches (e.g., protocol- or port-based) cannot provide in-depth classification of similar traffic types generated by different protocols. The traffic clustering scheme refers to traffic workload characteristics rather than protocol traffic decomposition. McGregor et al. [6] proposed a machine-learning-based classification method which can break traffic down into clusters: bulk transfer, small transactions, and multiple transactions. It allows us to understand the major types of traffic in a network.
Application Breakdown Scheme — The
dominance of P2P traffic in the Internet has
had a huge influence on traffic classification
research and led to more sophisticated heuristics. In this context, many researchers have
focused on identifying the exact application
represented by the traffic. Discovering byte
signatures [7] has been a popular solution.
Despite its proven accuracy, the signature-based solution incurs high processing overhead and raises privacy concerns because it requires packet header and payload inspection. Recently, machine learning techniques which use statistical information from the transport layer [8] have been introduced to address privacy legislation related to packet payload
inspection. They focus on the fact that different applications have different communication
patterns (behaviors). Moreover, Szabo et al.
[9] introduced combinations of these existing
methods in order to balance the level of classification completeness and accuracy. All these efforts focused on classifying network traffic according to the name of the application in use.
Application-type Breakdown Scheme —
BLINC [10] is a connection pattern-based classification method. The idea behind BLINC is to
investigate the communication pattern generated
by a host and extract behavioral patterns which
may represent distinct activities or applications.
It categorizes network traffic according to application-type rather than a specific application
name, such as Web, game, chat, P2P, streaming,
mail, and attack activities. This scheme resides
between the former two schemes.
MOTIVATION
Figure 1 shows different traffic classification
schemes according to their classification level.
The application protocol breakdown scheme
breaks network traffic down into different protocols rather than application types or names.
For example, all ftp traffic is classified under
the ftp protocol group although there are many
distinct ftp client programs since all clients
employ the same ftp protocol for data transfer.
The traffic clustering scheme was proposed from a different perspective on traffic classification. While the application protocol breakdown focuses on identifying a certain protocol, the
clustering scheme can capture common characteristics shared among the distinct applications
using a single or multiple protocols. In addition, the application breakdown scheme can
provide more detailed classification results,
especially for P2P applications. It would classify distinct application names even if the corresponding traffic is generated from the same
protocol. For example, there are many descendant applications which use the BitTorrent
protocol. While the application protocol breakdown scheme cannot distinguish the traffic
generated by different BitTorrent clients, the
application breakdown scheme can classify the
traffic according to the exact client name represented by the traffic.
The application-type breakdown scheme
resides between the traffic clustering and application protocol breakdown schemes in terms of
classification level. It characterizes the traffic
based on connection pattern or host profiles and
classifies it into various application types, such as
Web, game, chat, P2P, streaming, mail, and
security attack activities. One application-type
can be a superset of both application and application protocol.
The fine-grained traffic classification can classify various types of traffic which are generated
by a single application. As shown in Fig. 1, a single application typically has several functions, and each function triggers a unique traffic characteristic. While top-n protocol or application
analysis is possible with the other schemes, our
scheme enables new analysis categories, such as
the average browsing time before initiating a file download and the most popular functions in use among users. It is also a tool for analyzing user behavior and designing future Internet applications. When applied to Web traffic, analyzing the most popular
function of a Web site (e.g., Facebook) is also possible. This will extend traffic classification research from network-administration-oriented research to user-context-dependent research.
Figure 1. Traffic classification schemes according to different classification levels.
FINE-GRAINED TRAFFIC
CLASSIFICATION
A key to fine-grained traffic classification is how
to categorize single-application traffic into different traffic groups. It is quite similar to the
traditional traffic classification problem except
for the degree of classification details. Accordingly, various existing methods can be applied to
this classification scheme. Most of them use a
classifier per application or protocol.
We simplify the fine-grained traffic classification problem as follows: to build arbitrary
classifiers (e.g., application signature, connection behavior model, statistical model) per
application where each classifier corresponds
to a distinct function in the application. Among
many possible classifiers, we have selected signatures because much previous work has demonstrated that the signature-based approach is by far the most reliable in terms of accuracy. Even some statistical approaches have used signatures as the ground truth for validation [8]. In addition, it is convenient to apply new fine-grained signatures to existing traffic classification systems, and commercial traffic shapers and intrusion detection devices already utilize application signatures for their classification.
Our methodology for fine-grained traffic classification consists of three parts:
1 Input data collection
2 Extraction of fine-grained signatures
3 Traffic classification using fine-grained signatures
This article focuses on generating fine-grained
classifiers (steps 1 and 2). For signature extraction, we used our previous work, the LASER
algorithm [3], which can generate an application signature automatically. Step 1 is not part
of the actual signature extraction; however, it is
crucial for the entire signature generation process because the input data for the LASER
algorithm directly affects the reliability of the
signature.
INPUT DATA COLLECTION
The LASER algorithm requires sanitized packet
collection as its input data. The sanitized raw
packets refer to the packets belonging to the target application only. We have developed a continuous packet dump agent using Libpcap to
collect the packet trace for every running process in the OS. The collecting agent divides the
sanitized packets according to each flow and
stores them in a separate packet dump file
tagged with the origin process name. It is important to keep the datasets separate according to
each flow and process name because it can
reduce unnecessary packet comparison overhead
in the pattern extraction step. If the given dataset
is a mixture of many different applications, it
may be difficult to discover a common pattern.
Such a design decision is necessary to guarantee
the efficiency and accuracy of signature extraction; thus, we remove any uncertainty of traffic being fed to the signature extraction algorithm.
Figure 2. Input data collection process.
This increases the possibility of finding a reliable application signature, since it is extracted from similar traffic types in the pool of sanitized traffic.
As many functions are embedded in network
applications, an application generates different
types of traffic according to its function or purpose. For example, a P2P application has various
functions such as login, searching, downloading,
advertisement, and chatting. In some cases, even
Web browsing is included. In order to resolve
this issue, we developed a fine-grained flow classifier that could group sanitized flows into several subtypes according to traffic type.
Figure 2 illustrates the workflow of the input
data collection process. When n different applications are running on a host, each application
executes mi different functional modules (the
subscript i indicates that the value of mi differs
from application to application) and each module in an application generates different types of
traffic. The traffic dump agent monitors the network interface continuously and captures all
traffic data passing through the network interface. The agent aggregates the traffic data into
flows and stores each flow in a separate file.
Every flow is tagged with the application or process name acquired from the OS. The stored n
groups of sanitized traffic, labeled with an application name, are fed into the fine-grained traffic
classifier. The fine-grained traffic classifier classifies the sanitized traffic into mi subcategories
according to flow type. Finally, we can get n × mi
groups of flow data. Each group of flow data is
used as input data for LASER. In this case, the number of LASER outputs is n × mi, and a single application can have at most mi signatures.
Most of the prior research on signature-based
traffic classification used a single signature per
application. However, relying on a single signature may lead to an increase in the false negative ratio. More
details on the fine-grained traffic classifier are in
the upcoming section.
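For illustration only, the following sketch approximates the dump agent's flow tagging using the third-party scapy and psutil packages; the article's actual agent is built on libpcap and writes one dump file per flow and process, so the structure and names here are assumptions, not the authors' implementation.

import psutil
from collections import defaultdict
from scapy.all import sniff, IP, TCP, UDP

flows = defaultdict(list)   # (process, proto, src, sport, dst, dport) -> payloads

def owning_process(local_port):
    """Best-effort lookup of the local process bound to a port (may miss)."""
    for conn in psutil.net_connections(kind="inet"):
        if conn.laddr and conn.laddr.port == local_port and conn.pid:
            try:
                return psutil.Process(conn.pid).name()
            except psutil.NoSuchProcess:
                return "unknown"
    return "unknown"

def handle(pkt):
    if IP not in pkt:
        return
    l4 = TCP if TCP in pkt else UDP if UDP in pkt else None
    if l4 is None:
        return
    proc = owning_process(pkt[l4].sport)
    key = (proc, l4.__name__, pkt[IP].src, pkt[l4].sport, pkt[IP].dst, pkt[l4].dport)
    flows[key].append(bytes(pkt[l4].payload))   # the real agent writes dump files per flow

sniff(prn=handle, store=False, count=100)       # small sample; capture needs privileges
for key, payloads in flows.items():
    print(key[0], key[1:], len(payloads), "packets")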
TRAFFIC CLASSIFIER
In order to build the fine-grained traffic classifier, we adopted a document retrieval technique
[11] which is one of the main research areas in
the natural language processing field. The idea
behind document retrieval is that the similarity
between documents can be measured by the frequency of keywords in the documents. We have
defined several terms to apply document similarity to traffic classification. The following provides our payload vector conversion, vector
comparison, and flow comparison methodologies.
Payload Vector Conversion — To represent
network traffic as a text document, we used vector space modeling (VSM). VSM is an algebraic
model which represents text documents as vectors. The objective of document retrieval is to
find a subset of documents from a set of stored
text documents D that satisfy certain information
requests or queries Q. Consider a document space that consists of documents Di, each identified by one or more index terms Tj; the terms may be weighted according to their importance [11]. A typical way to determine the significance of a term Tj is to measure its occurrence frequency.
1: procedure FLOWGROUPING()
2:     FG = {[ ], [ ], ..., [ ]}                  // empty flow groups (each group consists of flows)
3:     Flow = {f1, f2, ..., fn}                   // sanitized flows
4:     while 1 ≤ i ≤ n do
5:         if i = 1 then
6:             FG[0] ← fi
7:         else
8:             F = PFM(fi)                        // convert flow into a PFM
9:             while 1 ≤ j ≤ number of flow groups do
10:                Similarity[j] ← Similarity_Score(FG[j], F)
11:            end while
12:            if Max(Similarity) ≥ threshold then
13:                FG[Max index] ← fi
14:            else                               // create a new flow group
15:                FG ← fi
16:            end if
17:        end if
18:    end while
19:    return FG
20: end procedure
Algorithm 1. Flow grouping using similarity.
When t different index terms are present in document Di, each document Di is represented by a t-dimensional term-frequency vector Di = (di1, di2, ..., dit), where dij represents the frequency of the jth term. While text documents are composed of terms (words), which are units of language and principal carriers of meaning, a packet does not have such basic units of meaning. To deal with this problem, we have defined the term of a payload as follows: a term is the payload data within an i-byte sliding window, where the position of the sliding window can be 1, 2, ..., n – i + 1 for an n-byte payload. The size of the term set is 2^(8×i), and the length of a term is i.
If the word length i is too short, the word cannot reflect the sequence of the byte patterns in the payload. In this case, we cannot recognize the differences among permutations of byte patterns, such as “0x01 0x02 0x03” and “0x03 0x01 0x02.” If the word length is too long, the number of whole representative words increases exponentially. With the definition of term, a packet can be represented as a term-frequency vector called a payload vector.
When wi is the number of occurrences of the ith term in a payload, the payload vector is

Payload Vector = [w1 w2 ... wn]^T,    (1)

where n is the size of the whole representative term set.
We set the sliding window size i to 2 because it is the simplest case for representing the order of content in payloads. When the term size is 2 bytes, the size of the term set is 2^16. Therefore, the payload vector is represented as a 2^16-dimensional term-frequency vector.
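A minimal sketch of this conversion in Python, representing the 2^16-dimensional vector sparsely as a mapping from terms to counts:

from collections import Counter

def payload_vector(payload: bytes, i: int = 2) -> Counter:
    """Sparse term-frequency vector of a payload using an i-byte sliding window."""
    return Counter(payload[k:k + i] for k in range(len(payload) - i + 1))

# Permutations of the same bytes produce different 2-byte terms:
print(payload_vector(bytes([0x01, 0x02, 0x03])))   # terms 0x0102, 0x0203
print(payload_vector(bytes([0x03, 0x01, 0x02])))   # terms 0x0301, 0x0102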
Payload Vector Comparison — Once packets
are converted into vectors, the similarity between
packets can be calculated by measuring the distance between vectors. We used Jaccard similarity [12] as a distance metric. In our previous work
[13], we compared three different similarity metrics: Jaccard similarity, Cosine similarity, and
RBF. Jaccard similarity showed the best performance without using any sophisticated techniques. The Jaccard similarity J(X, Y) uses word
sets from the comparison instances to evaluate
similarity. J(X, Y) is defined as the size of the
intersection of the word sets divided by the size
of the union of the sample sets X and Y:
J(X, Y) = |X ∩ Y| / |X ∪ Y|.    (2)
One strength of using Jaccard similarity
instead of Euclidean distance is that the similarity value can be normalized and then the similarity calculated by the dot product, and it
approaches one if the vectors are similar and
zero otherwise. If two payload vectors are generated by different applications, the contents of
each payload consist of distinct binary sequences
and their vectors are also very different.
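As a small illustration (the term sets below are hypothetical), the Jaccard similarity on the term sets of two payload vectors can be computed directly from Python sets:

def jaccard(x: set, y: set) -> float:
    """|X ∩ Y| / |X ∪ Y| over the term sets of two payloads."""
    return len(x & y) / len(x | y) if (x or y) else 1.0

# Hypothetical 2-byte term sets taken from two packet payloads:
x = {b"GE", b"ET", b"T ", b" /"}
y = {b"GE", b"ET", b"T ", b" h"}
print(jaccard(x, y))   # 3 common terms out of 5 distinct -> 0.6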
Flow Similarity Comparison — Formula 3
defines the payload flow matrix (PFM). The i-th
row of a PFM is the payload vector of the i-th
packet in the flow. PFM is a k × n matrix, where
k is the number of packets and n is the dimension of payload vectors.
The payload flow matrix (PFM) is

PFM = [p1 p2 ... pk]^T,    (3)

where pi is the payload vector defined in formula 1. The similarity score between PFMs can be calculated by simple summation of the packet similarity values (Similarity Score = Σ(i=1..k) J(pi, p′i), where pi and p′i are the ith packets of the first and second flows, respectively).
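A sketch of this score, under the simplifying assumption that each packet is represented by its term set rather than the full frequency vector:

def jaccard(x: set, y: set) -> float:
    return len(x & y) / len(x | y) if (x or y) else 1.0

def flow_similarity(flow_a, flow_b):
    """Sum of J(p_i, p'_i) over the packets the two flows have in common."""
    return sum(jaccard(p, q) for p, q in zip(flow_a, flow_b))

# Each flow is a list of per-packet term sets (the rows of its PFM):
flow1 = [{b"ab", b"bc"}, {b"cd", b"de", b"ef"}]
flow2 = [{b"ab", b"bc"}, {b"cd", b"xx", b"yy"}]
print(flow_similarity(flow1, flow2))   # 1.0 + 0.2 = 1.2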
Algorithm 1 describes the flow grouping process to generate fine-grained flows. The flow
grouping procedure reads sanitized flows and
groups them into flow groups based on similarity
scores. If a flow group set is empty, the first flow
f1 creates a new flow group FG[0] (lines 5–6).
Figure 3. Traffic volume: fine-grained traffic classification vs. application breakdown: a) Fileguri; b) BitTorrent.
Otherwise, the input flow is compared with existing flow groups and inserted into the flow group
which has the maximum flow similarity score
(lines 10–12). When the maximum similarity
score is less than the threshold, a new group is created, and flow fi becomes a member of this new
group (line 14). Our flow grouping is motivated
by an unsupervised machine learning approach
since it relies on unlabeled payload vectors to
find natural groups, functional clusters in this
context. In contrast, a supervised approach requires pre-labeled datasets to construct a classifier for each cluster. It is difficult to determine the number of functionalities of an application in advance. Thus, unsupervised clustering is suitable for a fine-grained classifier, which is intended to identify functional characteristics.
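A compact Python rendering of Algorithm 1, under assumptions not fixed by the article (each flow is given as a list of per-packet term sets, and the flow-to-group score is taken as the best match against any member of the group):

def jaccard(x, y):
    return len(x & y) / len(x | y) if (x or y) else 1.0

def flow_similarity(fa, fb):
    return sum(jaccard(p, q) for p, q in zip(fa, fb))

def group_flows(flows, threshold):
    """Group sanitized flows by similarity, roughly following Algorithm 1."""
    groups = []                                   # FG: list of flow groups
    for f in flows:
        if not groups:                            # first flow opens FG[0]
            groups.append([f])
            continue
        scores = [max(flow_similarity(f, m) for m in g) for g in groups]
        best = max(range(len(groups)), key=scores.__getitem__)
        if scores[best] >= threshold:
            groups[best].append(f)                # join the most similar group
        else:
            groups.append([f])                    # create a new flow group
    return groups

flows = [[{b"ab"}], [{b"ab"}], [{b"zz"}]]
print([len(g) for g in group_flows(flows, threshold=0.5)])   # -> [2, 1]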
EXPERIMENTS
In this section, we provide fine-grained traffic
classification results of two representative applications as validation of our proposed classification scheme. We selected P2P applications for
verification because of their behavioral (or functional) complexity and popularity in networks.
We feel that our selection of P2P applications
strongly represents the complexity of Internet
applications. First, we chose a regionally popular P2P application called Fileguri, which provides multiple functions — web browsing,
searching, downloading, messenger, and commercial advertisement. Second, BitTorrent, a
globally favored application especially in Europe
and the United States, provides mostly a downloading function. We generated each signature
using the fine-grained classifier and LASER. To
show the advantage of the fine-grained approach,
we also analyze the average search count per user, which cannot be obtained by protocol or application breakdown schemes. For the dataset, we
collected a full packet trace from our campus
network — 3 hours (450 Gbytes) on 16 August,
2007. No port blocking or filtering policy was in
effect at the time of measurement.
CLASSIFIER GENERATION PROCESS
To generate the input dataset (training data) for
the fine-grained classifier, we ran our target application while the packet dump agent described earlier continuously captured the sanitized
trace. Using this sanitized trace as input data,
our fine-grained classifier groups flows into
clusters. Since there is no ground truth from the
perspective of an application’s functionality, we
manually analyzed flows in each group. For
Fileguri, it was possible to determine the functionality by examining the URI fields and
requested objects in HTTP “GET” messages.
After labeling each cluster with functionality,
LASER was applied to capture the common
patterns shared by clusters. For BitTorrent, the
fine-grained classifier grouped the sanitized
traffic into nine clusters, which seemed to be many since BitTorrent is known mainly for its download functionality. We examined the packet payloads according to the BitTorrent protocol specification and labeled each cluster as downloading, tracker access 1, tracker access 2, distributed hash table (DHT) management 1, DHT management 2, and so on. Note that DHT management traffic is not generated by all available BitTorrent clients. It only applies to clients such as BitTorrent, μTorrent, Transmission, rTorrent, KTorrent, BitComet, and Deluge. However, LASER was not able to generate signatures for one of the six DHT clusters. So, as an alternative classifier, we made a simple heuristic to detect that DHT management cluster based on packet size and on the IP addresses that generate other BitTorrent traffic.
CLASSIFICATION RESULTS
We classified our target application traffic using
both fine-grained traffic classification and traditional application breakdown methods which use
an application signature as a classifier. Figure 3
shows the traffic volume identified by fine-grained classification and application breakdown. There is about a 10–40 percent difference
where the fine-grained approach discovers more traffic than signature-based application breakdown in each hour.
Figure 4. Traffic composition of each functionality: a) Fileguri; b) BitTorrent.
Figure 4 shows the functional decomposition of traffic. Note that we
aggregated two tracker access clusters and six
DHT management clusters into one functionality
for BitTorrent. The downloading portion is dominant (74 and 90 percent) and its volume is close
to the volume identified by application breakdown. It implies that application breakdown,
which employs signatures, is incapable of detecting traffic other than downloading. Moreover,
the web browsing traffic of Fileguri occupies
about 12–14 percent, and the same traffic is
wrongly classified as normal HTTP traffic by well-known port matching. It does not even appear under the signature-based method. While
previous work has focused on detecting download traffic, it is worthwhile to highlight that the
traffic volume of the other traffic types in P2P applications is not negligible.
The ground truth was verified by the traffic
measurement agent (TMA) [3]. It collects process and traffic information from
the host operating system (OS) directly; thus,
the information may be the closest possible
ground truth available. While verifying the
accuracy of the fine-grained approach against TMA, there exist small false positive/negative rates. We
made a few interesting observations on misclassified traffic portions. First, every false positive
of Fileguri traffic was caused by the unclear boundary with Web traffic. Although Fileguri provides a limited web browsing function by fixing
the “user agent” as Mozilla, users can access
the same websites via other Mozilla-based web
browsers, such as Firefox, which also sets “user
agent” as Mozilla. Second, a false negative is
not caused by search or download functionality but by an update patch. Both P2P clients regularly update their copyright and prohibited search keyword lists. We could not easily capture this update
traffic for sanitized traffic generation because
of its temporal and sporadic communication
behavior.
110
Communications
IEEE
EXAMPLE OF USER BEHAVIOR ANALYSIS
We provide a simple user behavior analysis
using fine-grained traffic classification results.
With the fine-grained classification results, we
can analyze the average search counts when a
user initializes downloading in our packet
trace. The ratio of searching to downloading in
terms of transaction number was 56,392:1. We
have empirically confirmed that the Fileguri
client generated about 6,000 TCP transactions
in a single keyword search. Thus, we conclude
that a Fileguri user performs about 9.398
searches on average before downloading from
the P2P network. The goal of this simple analysis is to provide the average searching counts
of users. However, we believe that fine-grained traffic classification has a much wider
application.
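For reference, the average-searches figure quoted above follows directly from the two measurements reported in the text:

# The arithmetic behind the estimate above (figures taken from the text):
search_to_download_transactions = 56392   # searching:downloading transaction ratio
transactions_per_keyword_search = 6000    # measured for the Fileguri client
print(search_to_download_transactions / transactions_per_keyword_search)  # ~9.4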
CONCLUDING REMARKS
Various traffic classification methods have been
suggested in order to offer better classification
accuracy and information about traffic composition in target networks. In this article, we
have proposed a new traffic classification
scheme which can classify different traffic types
within a single application. In particular, we
have presented a fine-grained traffic classifier
which utilizes a text retrieval technique and
applies multiple signatures to detect P2P traffic
according to different functionalities. Our proposed scheme can provide more in-depth classification results for analyzing user contexts. It
also benefits network operators who need to
view the detailed traffic composition of the network, and researchers who want to study
user behavior.
For future work, we plan to analyze the flexibility of our approach by applying different classification methodologies instead of multiple
signatures. We also plan to conduct various user
behavior and context analyses based on fine-grained traffic classification.
ACKNOWLEDGMENTS
This research was supported by the World Class
University (WCU) program through the National Research Foundation of Korea funded by the
Ministry of Education, Science and Technology
(R31-2010-000-10100-0) and the Korea Communications Commission (KCC) under the Novel
Study on Highly Manageable Network and Service Architecture for the New Generation support program supervised by the Korea
Communications Agency (KCA; KCA-201110921-05003).
REFERENCES
[1] M.-S. Kim, Y. J. Won, and J. W. Hong, “Characteristic Analysis of Internet Traffic from the Perspective of Flows,” J. Comp. Commun., vol. 29, no. 10, June 19, 2006, pp. 1639–52.
[2] P. Borgnat et al., “Seven Years and One Day: Sketching the Evolution of Internet Traffic,” IEEE INFOCOM 2009, Rio de Janeiro, Brazil, Apr. 19–25, 2009, pp. 711–19.
[3] B.-C. Park et al., “Towards Automated Application Signature Generation for Traffic Identification,” IEEE/IFIP NOMS 2008, Salvador, Bahia, Brazil, Apr. 7–11, 2008, pp. 160–67.
[4] K. Thompson, G. J. Miller, and R. Wilder, “Wide-Area Internet Traffic Patterns and Characteristics,” IEEE Network, vol. 11, no. 6, 1997, pp. 10–23.
[5] D. Moore et al., “The CoralReef Software Suite as a Tool for System and Network Administrators,” 15th USENIX Conf. System Administration, San Diego, CA, USA, Dec. 2001, pp. 133–44.
[6] A. McGregor et al., “Flow Clustering Using Machine Learning Techniques,” PAM Wksp. 2004, Antibes Juan-les-Pins, France, Apr. 19–20, 2004, pp. 205–14.
[7] S. Sen, O. Spatscheck, and D. Wang, “Accurate, Scalable In-Network Identification of P2P Traffic Using Application Signatures,” WWW Conf. 2004, New York, NY, USA, May 17–20, 2004, pp. 512–21.
[8] H. Kim et al., “Internet Traffic Classification Demystified: Myths, Caveats, and the Best Practices,” ACM CoNEXT Conf., Madrid, Spain, Dec. 9–12, 2008, pp. 1–12.
[9] G. Szabó, I. Szabó, and D. Orincsay, “Accurate Traffic Classification,” IEEE WOWMOM 2007, Helsinki, Finland, June 18–21, 2007, pp. 1–8.
[10] T. Karagiannis, K. Papagiannaki, and M. Faloutsos, “BLINC: Multilevel Traffic Classification in the Dark,” ACM SIGCOMM 2005, Philadelphia, PA, USA, Aug. 22–26, 2005, pp. 229–40.
[11] G. Salton, A. Wong, and C.-S. Yang, “A Vector Space Model for Automatic Indexing,” Commun. ACM, vol. 18, no. 11, 1975, pp. 613–20.
[12] L. Hamers et al., “Similarity Measures in Scientometric Research: The Jaccard Index Versus Salton’s Cosine Formula,” Info. Processing and Mgmt.: An Int’l. Journal, vol. 25, no. 3, May 1989, pp. 315–18.
[13] J. Y. Chung et al., “An Effective Similarity Metric for Application Traffic Classification,” IEEE/IFIP NOMS 2010, Osaka, Japan, Apr. 19–23, 2010, pp. 286–92.
BIOGRAPHIES
BYUNGCHUL PARK [S] ([email protected]) received his B.Sc. degree in computer science from POSTECH, Korea, in 2006. He is a Ph.D. student in the Department of Computer Science and Engineering, POSTECH. His research interests include Internet traffic measurement and analysis, and intelligent traffic classification.
YOUNG J. WON [M] ([email protected]) is a researcher at IIJ Research Laboratory, Tokyo, Japan. Prior to IIJ, he was a postdoctoral researcher at INRIA, France. He received his B.Math (2003) from the University of Waterloo, Canada, and M.S. (2006) and Ph.D. (2010) from POSTECH.
JAMES WON-KI HONG [SM] ([email protected]) is a professor and head of the Division of IT Convergence Engineering at POSTECH. He received a Ph.D. degree from the University of Waterloo in 1991. His research interests include network management, network monitoring and analysis, convergence engineering, ubiquitous computing, and smartphonomics. He has served as Chair (2005–2009) of the IEEE ComSoc Committee on Network Operations and Management (CNOM). He is serving as Director of Online Content for IEEE ComSoc. He is a NOMS/IM Steering Committee Member and a Steering Committee Member of APNOMS. He was General Chair of APNOMS 2006, and General Co-Chair of APNOMS 2008 and APNOMS 2011. He was General Co-Chair of IEEE/IFIP NOMS 2010. He is an Associate Editor-in-Chief of IJNM and an editorial board member of IEEE TNSM, JNSM, JCN, and JTM.
SERIES EDITORIAL
SOME RECENT IEEE STANDARDS FOR
WIRELESS DATA COMMUNICATIONS
Mostafa Hashem Sherif and Yoichi Maeda
Wireless communication would not be possible
without the radio spectrum as a transport medium. Like air, water or oil, this natural resource needs careful management, including frequency licensing, release and
reallocation. The following articles provide a glance at the
work many IEEE standards committees have undertaken
to satisfy demands for reliable and high-throughput wireless data communication by taking advantage of recent
regulatory changes, mostly in the unlicensed part of the
spectrum.
The first article by Baykas et al. is entitled “IEEE
802.15.3c: The First IEEE Wireless Standard for Data
Rates over 1 Gb/s.” It presents the specifications approved
in 2009 for personal area networks transmitting at rates
exceeding 1 Gb/s and operating in the unlicensed 60 GHz
(millimeter) wave band. In the 1990s, regulatory bodies
made this band available for use worldwide. The standard
divides the frequency band into four common global channels and defines three physical layers, each with its data
rate, modulation scheme, and error correction mechanism.
An optional codebook-based beamforming protocol was
agreed to extend the range of communication. The protocol is independent of the physical layer and is applicable to
many antenna configurations. Ancillary activities included
the development of a new 60 GHz indoor channel model
that takes into account the effects of smaller wavelength
and high directivity. Another is the development of a
frame aggregation method with low latency suitable for the
high data rates of transmission of very short commands,
such as in the case of bidirectional communication between
a PC and its peripherals.
The next contribution by Ying Li et al., “Overview of
Femtocell Support of Advanced WiMAX Systems,” concerns the use of femtocells in the next generation of wireless metropolitan networks. Femtocells give wireless
broadband access in the licensed part of the spectrum to a
service provider’s network on a plug-and-play basis. The
article highlights the main agreements made in 2009 and
2010 concerning the evolution of the WiMAX architecture
in the IEEE and the WiMAX Forum as documented in
the 2011 version of IEEE 802.16m (Part 16: Air Interface
for Broadband Wireless Access Systems — Advanced Air
Interface). In particular, the article details recent developments in femtocell design in the areas of reliability and interference management.
A new generation of utility networks (for electricity,
water, natural gas, and sewers) could use telemetry and
data communication to improve operational efficiency.
Because these networks are associated with the smart grid
concept, they are often called smart utility networks
(SUNs). In the final article, “Smart Utility Networks in TV
White Space,” Sum et al. evaluate possible ways that TV
white space could be used for communication in SUNs.
TV white space is the spectrum of radio frequencies located between existing TV stations that can be released for
broadcast TV or low-power wireless devices to provide
unlicensed communication. The authors advocate the use
of a global set of operating frequencies for SUN usage so
that products could interoperate on a worldwide basis.
Among the relevant IEEE standardization projects they
discuss is the one by Task Group 802.15.4g to ensure SUN
interoperability in regionally available unlicensed frequency bands. The scope of Task Group 802.11ah is the unlicensed bands below 1 GHz for smart grid and smart utility
communications. The activities of IEEE 802.11af concern
modifications to the physical and medium access control
(MAC) layers of 802.11 to enable communications in TV
white space. IEEE 802.22 addresses the use of cognitive
radio to share resources in TV white space from the perspective of a regional area network. Finally, IEEE Technical Advisory Group (TAG) 802.19 is considering various
“coexistence scenarios” to allow network elements and TV
devices to share the frequency band without interference.
The articles required several revisions to reach the
appropriate level of detail that would benefit the readers
of IEEE Communications Magazine. The editors would like
to thank the authors for their cooperation. In addition, the
19 reviewers listed below in alphabetical order gave excellent comments that shaped and improved the initial submissions:
Bader, Faouzi, Centre Tecnologic de Telecomunicacions
de Catalunya — CTTC, Spain
Bongartz, Harald, Fraunhofer Institute for
Communication, Information Processing and Ergonomics
(FKIE), Germany
Chang, Kuo-Hsin, Elster Solutions, United States
Comstock, David, Huawei, United States
Ding, Gang, Olympus Communication Technology of
America, United States
Eichen, Elliott, Massachusetts Institute of Technology
(MIT), United States
Garroppo, Rosario, University of Pisa, Italy
He, Xingze, University of Southern California,
United States
Howie, John, Microsoft, United States
Liu, Dake, Linköping University, Sweden
Lohier, Stéphane, Université Paris-Est, France
Pucker, Lee, Wireless Innovation Forum, United States
Shyy, D. J, MITRE, United States
Singh, Summit, University of California, Santa Barbara,
United States
Tarchi, Daniele, University of Florence, Italy
Uslar, Mathias, OFFIS (Institute for Information Systems),
Germany
Vardhe, Kanchan, Illinois Institute of Technology, United
States
Webb, William, Ofcom, United Kingdom
Wolff, Richard, Montana State University, United States
BIOGRAPHIES
MOSTAFA HASHEM SHERIF ([email protected]) has been with AT&T in various
capacities since 1983. He has a Ph.D. from the University of California, Los
Angeles, an M.S. in the management of technology from Stevens Institute
of Technology, New Jersey, and is a certified project manager of the Project
Management Institute (PMI). Among the books he has authored are Protocols for Secure Electronic Commerce (2nd ed., CRC Press, 2003), Paiements
électroniques sécurisés (Presses polytechniques et universitaires romandes,
2006), and Managing Projects in Telecommunication Services (Wiley, 2006).
He is a co-editor of two books on the management of technology published by Elsevier Science and World Scientific Publications in 2006 and
2008, respectively, and is the editor of the Handbook of Enterprise Integration (Auerbarch, 2009).
YOICHI MAEDA [M] ([email protected]) received B.E. and M.E.
degrees in electronic engineering from Shizuoka University, Japan, in 1978
and 1980, respectively. Since joining NTT in 1980, for the last 30 years he
has been engaged in research and development on access network transport systems for broadband communications including SDH, ATM, and IP.
From 1988 to 1989 he worked for British Telecom Research Laboratories in
the United Kingdom as an exchange research engineer. He has led the Japanese telecommunication standardization organization, the Telecommunication Technology Committee (TTC), since October 2010. In
October 2008 at the World Telecommunication Standardization Assembly
(WTSA-08), he was appointed chair of ITU-T SG15 for the 2009–2012 study
period for his second term. He is a Fellow of the IEICE of Japan. He has
been a Series Editor of the Standards Series in IEEE Communications Magazine since 1999.
TOPICS IN STANDARDS
IEEE 802.15.3c: The First IEEE Wireless
Standard for Data Rates over 1 Gb/s
Tuncer Baykas, Chin-Sean Sum, Zhou Lan, Junyi Wang, M. Azizur Rahman, and Hiroshi Harada, NICT
Shuzo Kato, NICT and Tohoku University
ABSTRACT
This article explains the important features of
IEEE 802.15.3c, the first wireless standard from
IEEE in the 60-GHz (millimeter wave) band and
its development. The standard provides three
PHY modes for specific market segments, with
mandatory data rates exceeding 1 Gb/s. During
the span of the standard development, new contributions to wireless communication technology
were also made, including a new channel model,
a codebook-based beamforming scheme, and a
low-latency aggregation method.
INTRODUCTION
Wireless system designers dream of replacing all
cables for indoor data communication with
high-speed wireless connections. Unfortunately,
the dedicated unlicensed frequency spectrum
for this purpose was insufficient until the U.S.
Federal Communications Commission (FCC)
declared that the 57–64 GHz band could be
used [1]. Japan, in turn, allocated the 59–66
GHz band. With the latest adoption of the
European Telecommunications Standards Institute (ETSI) 57–66 GHz band, there is now a
common continuous 5 GHz band available
around 60 GHz in most of the major markets
(Fig. 1). As the wavelength of a signal at 60
GHz is around 5 mm, it is called the millimeter
wave (mmWave) band.
Another important breakthrough was the
introduction of relatively cheap and power-efficient complementary metal oxide semiconductor
(CMOS) processing for semiconductor manufacturing of 60 GHz band devices. As a result, the
price and power requirements for consumer
devices were met. For successful commercialization, the final need for developers was a standard that would support almost all usage models.
The IEEE 802 LAN/MAN Standards Committee has many success stories in developing
global wireless standards, such as 802.11 (WiFi)
and 802.15.4 (Zigbee). Within IEEE 802, interest in developing an mmWave physical layer
(PHY) began in July 2003, with the formation
of an interest group under the 802.15 working
group for wireless personal area networks
(WPANs). According to the IEEE 802 procedure, if the interest group is successful, it is followed by a study group, which decides the scope
of the new standard. In March 2004, a study
group for an mmWave PHY was formally created. The study group members agreed that development of a new PHY to transmit at data rates of 1 Gb/s or higher was feasible. It was decided to reuse
an existing medium access control (MAC) layer
(IEEE 802.15.3b), with necessary modifications
and extensions. After the approval of the project authorization request, a task group was created. The task group first focused on creating
usage models, a 60 GHz indoor channel model,
and evaluation criteria. After two years of hard
work, three PHY modes and multiple MAC
improvements were selected to support different usage models. After various letter ballots,
sponsor ballots, and resulting improvements, in
September 2009 the IEEE-SA Standards Board
approved IEEE 802.15.3c-2009 [2]. It took four
and a half years for the task group to complete
the standard. Such a duration has been common for many IEEE standards that provide
new PHYs.
The rest of this article explains the salient
features of the standard, as well as some important outcomes. The organization of the article
follows the order of the task group’s standardization process. First, we provide the details of the
usage models and channel model, which were
finalized before other topics. Afterward, channelization is explained, which is common for all
PHY modes. Details of the three different PHY
modes, new MAC layer features, and beamforming procedures are explained in the following
sections, respectively. Finally, the last section
presents the conclusions.
USAGE MODELS OF 802.15.3C
During the beginning of the standardization process, the 802.15.3c Task Group conducted a
detailed analysis of the possible consumer applications in the 60 GHz band. A total of five usage
models (UMs) were accepted by the group [3].
UM 1) Uncompressed video streaming: The
particularly large bandwidth available in the 60
GHz band enables sending HDTV signals, thus
eliminating the need for video cables from high-definition video players to display devices. The 802.15.3c Task Group assumed 1920 × 1080 pixel resolution and 24 b/pixel for video signals. Assuming a rate of 60 frames/s, the required data rate is found to be over 3.5 Gb/s. The operation range should be 10 m with a pixel error rate below 10⁻⁹.
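As a sanity check on this figure, the raw active-pixel rate alone is about 3 Gb/s, and carrying the full video raster including blanking brings it above 3.5 Gb/s. The sketch below reproduces this arithmetic; the 2200 × 1125 total raster is our illustrative assumption for 1080p60 timing, not a number taken from the task group documents.

```python
# Back-of-the-envelope data rate for uncompressed 1080p60 video (UM 1).
# The 2200 x 1125 total raster (active 1920 x 1080 plus blanking) is an
# assumption for illustration; the task group's exact accounting may differ.

def video_rate_bps(width, height, bits_per_pixel, fps):
    """Raw bit rate of an uncompressed video stream."""
    return width * height * bits_per_pixel * fps

active = video_rate_bps(1920, 1080, 24, 60)   # ~2.99 Gb/s, active pixels only
total = video_rate_bps(2200, 1125, 24, 60)    # ~3.56 Gb/s, raster incl. blanking

print(f"active-pixel rate: {active / 1e9:.2f} Gb/s")
print(f"full-raster rate:  {total / 1e9:.2f} Gb/s")
```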
UM 2) Uncompressed multivideo streaming:
In some applications, multiple video signals
could be supplied by a single transmitter. For
example, a TV and a DVD recorder could
receive signal from a single set-top box, or a TV
can display two TV channels side by side on the
same screen. According to this UM, the
802.15.3c system should be able to provide video
signals for at least two 0.62 Gb/s streams. This
data rate corresponds to a video signal with 720
× 480 pixels/frame.
UM 3) Office desktop: In this UM, it is
assumed that a personal computer communicates with external computer peripherals, including printers, display devices, and hard disks,
unidirectionally or bidirectionally. The model
mainly assumes data communications, where
retransmissions are possible. A target packet
error rate of 0.08 is assumed.
UM 4) Conference ad hoc: This UM considers a scenario where many computers are communicating with each other using one 802.15.3c
network. Most communications are bidirectional,
asynchronous, and packet-based. The conference
ad hoc UM requires longer ranges than the
office desktop UM for improved quality of service.
UM 5) Kiosk file downloading: In the last
UM, the task group assumed electronic kiosks
that enable wireless data uploads and downloads
with their fixed antennas. Users will operate
handheld devices, such as cell phones and cameras with low-complexity, low-power transceivers.
One possible application is downloading video
and music files. This model requires 1.5 Gb/s at
1 m range.
Figure 2 illustrates the uncompressed video
streaming, office desktop, and kiosk file downloading UMs.
60 GHZ CHANNEL MODEL
When compared with other indoor wireless systems at 2.4 and 5 GHz, 60 GHz systems have
much smaller wavelength and thus the potential
for higher directivity. The conventional Saleh-Valenzuela (S-V) channel model [4], which fits
the non-line-of-sight (NLOS) communications of
previous IEEE 802.11 and IEEE 802.15 specifications, does not suit 60 GHz applications well.
A new channel model was accepted by the IEEE
802.15.3c channel modeling subcommittee. This
new model combines a line-of-sight (LOS) component using a two-path model with the NLOS
reflective clusters of the S-V model [5]. Figure 3
illustrates the new model, including some of the
important parameters used to define it. The
group decided on a set of eight different parameters to cover all the possible scenarios, as shown
in Fig. 3. In addition, path loss coefficients and
shadowing values have been extracted for different scenarios.
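To make the cluster-and-ray structure of this model concrete, the sketch below draws a toy S-V-style impulse response from cluster/ray arrival rates and decay factors. It is a simplified illustration (the LOS two-path component and the angular statistics are omitted, and all parameter values are placeholders), not the parameter sets tabulated by the subcommittee.

```python
# Toy Saleh-Valenzuela-style channel impulse response generator.
# Parameter values are placeholders for illustration, not the TG3c tables.
import math
import random

def sv_impulse_response(Lambda=0.05, lam=0.5, Gamma=20.0, gamma=5.0,
                        n_clusters=4, rays_per_cluster=10):
    """Return a list of (delay_ns, power) taps: Poisson cluster/ray arrivals
    with exponential power decay across clusters (Gamma) and rays (gamma)."""
    taps = []
    t_cluster = 0.0
    for _ in range(n_clusters):
        t_cluster += random.expovariate(Lambda)      # next cluster arrival
        t_ray = 0.0
        for _ in range(rays_per_cluster):
            t_ray += random.expovariate(lam)         # ray arrival within the cluster
            power = math.exp(-t_cluster / Gamma) * math.exp(-t_ray / gamma)
            taps.append((t_cluster + t_ray, power))
    return sorted(taps)

for delay, p in sv_impulse_response()[:5]:
    print(f"delay {delay:6.2f} ns  relative power {p:.3f}")
```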
Figure 1. Channelization of 802.15.3c and unlicensed bands around the globe: four channels of 2160 MHz each between 57 and 66 GHz, with 240 MHz lower and 120 MHz upper guard bands; unlicensed bands span 57–64 GHz in the USA and Korea, 59–66 GHz in Japan, and 57–66 GHz in Europe.
CHANNELIZATION OF 802.15.3C
One of the main challenges during standardization was providing a common channelization for
all the available bands around the world. The
bands should be large enough to support the
data rates required in the usage models with
robust low-spectrum-efficiency modulation
schemes. Common global channels should be
created for early market penetration. Lower and
upper guard bands should also be large enough
to minimize interference with other bands. These
requirements were satisfied by allocating four
bands of 2160 MHz, as shown in Fig. 1. Channelization allows two common global channels,
channels 2 and 3. Devices in the United States
can use channels 1–3, and those in Japan can
use channels 2–4. The lower and upper guard
bands have been set as 240 and 120 MHz,
respectively. Although they are not equal, the
guard bands are wide enough to reduce out-of-band emissions. Following IEEE 802.15.3c, three
other standardization bodies, the European
Computer Manufacturers Association (ECMA),
WirelessHD, and IEEE 802.11 TGad, accepted
the same channelization. This is extremely
important for the coexistence of heterogeneous
60 GHz systems, because it simplifies the detection and avoidance of other systems.
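The channel center frequencies implied by this layout can be derived in a few lines. The sketch below starts from the 57 GHz band edge, adds the 240 MHz lower guard band, and steps in 2160 MHz increments, giving centers near 58.32, 60.48, 62.64, and 64.80 GHz; these values should of course be checked against the channel plan in the standard itself.

```python
# Derive the four 2160 MHz channel center frequencies from the band layout
# described above (57 GHz band edge, 240 MHz lower guard band). Verify the
# resulting centers against the channel plan in the standard before use.

BAND_START_GHZ = 57.0
LOWER_GUARD_GHZ = 0.240
CHANNEL_BW_GHZ = 2.160

def channel_center(n):
    """Center frequency (GHz) of channel n = 1..4."""
    return BAND_START_GHZ + LOWER_GUARD_GHZ + (n - 0.5) * CHANNEL_BW_GHZ

for n in range(1, 5):
    print(f"channel {n}: {channel_center(n):.2f} GHz")
# -> 58.32, 60.48, 62.64, 64.80 GHz
```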
PHY LAYER DESIGN IN 802.15.3C
Due to conflicting requirements of different
UMs, three different PHY modes have been
developed:
• Single carrier mode of the mmWave PHY
(SC PHY)
• High-speed interface mode of the mmWave
PHY (HSI PHY)
• Audio/visual mode of the mmWave PHY
(AV PHY)
The SC PHY is best suited for kiosk file
downloading (UM5) and office desktop (UM3)
usage models. The HSI PHY is designed mainly
for the bidirectional, NLOS, low-latency communication of the conference ad hoc model (UM4).
The AV PHY is designed to provide high
throughput for video signals in video streaming
usage models (UM1, UM2). A comparison of
the different PHY modes is given in Table 1.
The main difference between the different
PHYs is the modulation scheme. The SC PHY
uses single carrier modulation, whereas the AV
PHY and HSI PHY use the orthogonal frequency-division multiplexing (OFDM) modulation. In SC modulation, one symbol occupies the whole frequency band, and thus its duration is very short. In OFDM, the available frequency band is divided into orthogonal subcarriers, and data symbols are sent using those subcarriers.

Figure 2. Usage models for 60 GHz applications: a) uncompressed video; b) office desktop; c) kiosk downloading.

Figure 3. Graphical representation of the 60 GHz channel model [6]. The model is parameterized by Γ (cluster decay factor), 1/Λ (cluster arrival rate), γ (ray decay factor), 1/λ (ray arrival rate), σ1 (cluster lognormal standard deviation), σ2 (ray lognormal standard deviation), σϕ (angle spread of rays within a cluster, Laplace distribution), and Ω0 (average power of the first ray of the first cluster); the axes show relative power versus time of arrival and angle of arrival.
In general, SC modulation allows lower complexity and low power operation, whereas OFDM is better suited to high-spectral-efficiency operation and NLOS channel conditions. Orthogonality of the
subcarriers in OFDM allows the use of inverse
fast Fourier transform (IFFT) at the transmitter and fast Fourier transform (FFT) at the
receiver.
All PHY modes have a typical signal frame
format consisting of a preamble, header, and
payload. The preamble is used for frame detection, channel estimation, frequency recovery,
and timing acquisition. The header contains
essential information such as payload size, modulation, and coding used in the payload. The
payload includes the data to be transmitted.
SINGLE CARRIER MODE OF THE MMWAVE PHY
The SC PHY provides three classes of modulation and coding schemes (MCSs) focusing on
different wireless connectivity applications. Class
1 is specifically designed for addressing kiosk file
116
Communications
IEEE
downloading and the low-power, low-cost mobile
market with data rates of up to 1.5 Gb/s. Class 2
is specified for the office desktop UM achieving
data rates up to 3 Gb/s, and class 3 is specified
for supporting high-performance applications
with data rates exceeding 3 Gb/s.
Regarding the modulation schemes used in the SC PHY, the support of π/2-shifted binary phase shift keying (π/2 BPSK) is mandatory for all devices. It is achieved by adding an anti-clockwise π/2 rotation to the BPSK constellation after each symbol transmission. The added rotation reduces spectral regrowth due to power amplifier nonlinearities and improves the peak-to-average power ratio. Another important point is that π/2 BPSK can be realized by minimum shift keying (MSK) modulation with proper filtering. Other supported modulation schemes are π/2 quadrature PSK (QPSK), π/2 8-PSK, π/2 16-quadrature amplitude modulation (QAM), on-off keying (OOK), and dual alternate mark inversion (DAMI).
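The per-symbol rotation is easy to visualize in code. The sketch below maps bits to π/2 BPSK symbols by rotating the BPSK constellation by an additional π/2 for every symbol; it illustrates only the rotation idea and ignores pulse shaping and the exact bit conventions of the standard.

```python
# pi/2-shifted BPSK mapping: BPSK symbols with a cumulative pi/2 rotation per
# symbol. Illustrates the rotation idea only; pulse shaping and the standard's
# exact bit-to-symbol conventions are omitted.
import cmath
import math

def pi2_bpsk(bits):
    """Map bits (0/1) to complex pi/2 BPSK symbols."""
    symbols = []
    for k, b in enumerate(bits):
        base = 1.0 if b == 0 else -1.0                 # ordinary BPSK point
        rot = cmath.exp(1j * math.pi / 2 * k)          # k-th symbol rotated by k*pi/2
        symbols.append(base * rot)
    return symbols

print(pi2_bpsk([0, 1, 1, 0]))
# Successive symbols alternate between the real and imaginary axes, which
# avoids 180-degree phase jumps and helps the peak-to-average power ratio.
```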
There are two main forward error correction
(FEC) schemes specified in the standard: Reed-Solomon (RS) block codes and low-density parity check (LDPC) block codes. RS codes are
selected for their low complexity in high-speed
communication. RS(255,239) is the main FEC in
the system and is used for payload protection.
Table 1. Comparison of the three modes of mmWave PHY.
SC PHY: main usage models are kiosk downloading and office desktop; data rates of 0.3 Mb/s–5.28 Gb/s; single carrier modulation; Reed-Solomon and low-density parity check coding options; block size/FFT size of 512.
HSI PHY: main usage models are conference ad hoc and office desktop; data rates of 1.54–5.78 Gb/s; orthogonal frequency-division multiplexing; low-density parity check coding; block size/FFT size of 512.
AV PHY: main usage models are video streaming and multivideo streaming; data rates of 0.95–3.8 Gb/s; orthogonal frequency-division multiplexing; Reed-Solomon and convolutional coding options; block size/FFT size of 512.
HIGH-SPEED INTERFACE MODE OF THE
MMWAVE PHY
As mentioned earlier, the HSI PHY is designed
mainly for computer peripherals that require
low-latency bidirectional high-speed data, focusing on the conference ad hoc UM, and uses
OFDM. The FFT size is selected as 512, which is
necessary in the 60 GHz channel.
As OFDM modulation already has inherent complexity due to the IFFT and FFT operations, only the LDPC coding scheme, which offers better coding gains than RS coding, is used in the HSI PHY.
Four FEC rates are obtained using
LDPC(672,336), LDPC(672,504), LDPC(672,420),
and LDPC(672,588) codes.
In terms of modulation, three modulation
schemes are selected: QPSK, 16-QAM, and 64-QAM. The highest PHY SAP data rate is 5.775
Gb/s.
In this mode, a special low-data-rate modulation and coding scheme is created by applying a
spreading factor of 48, which can only be used
for beaconing and control signals.
Four LDPC coding schemes (LDPC(672,336), LDPC(672,504), LDPC(672,588), and LDPC(1440,1344)) with different coding rates are
specified to provide higher coding gain with reasonable implementation complexity. In Fig. 4, we
provide a performance comparison between RS
and LDPC codes in terms of data rate and range
in an LOS environment. We have assumed a
transmit power of 10 dBm, antenna gain of 10
dBi, and MCSs with π/2 BPSK and π/2 QPSK
modulation at a packet error rate (PER) of 0.08.
The results indicate that with LDPC codes, it is
possible to increase the range up to 1.5 times at
lower data rates.
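Range numbers of the kind plotted in Fig. 4 follow from a simple link budget. The sketch below uses free-space path loss at 60 GHz with 10 dBm transmit power and an assumed 10 dBi of gain at each end, plus placeholder values for bandwidth, noise figure, and required SNR, and solves for the distance at which the received SNR meets the target; it is not the simulation setup behind the figure.

```python
# Free-space link-budget range estimate at 60 GHz. The bandwidth, noise figure,
# and required SNR values are placeholders; Fig. 4 was produced with the task
# group's own simulation assumptions, not this formula.
import math

def max_range_m(tx_dbm=10.0,
                gains_dbi=20.0,        # 10 dBi assumed at each end
                bw_hz=1.76e9,          # assumed occupied bandwidth
                noise_figure_db=10.0,
                required_snr_db=5.0,
                f_hz=60e9):
    """Distance at which received SNR equals the required SNR (free space)."""
    noise_dbm = -174.0 + 10 * math.log10(bw_hz) + noise_figure_db
    # Allowed path loss so that P_rx = noise + required SNR.
    allowed_pl_db = tx_dbm + gains_dbi - (noise_dbm + required_snr_db)
    # Invert FSPL(dB) = 20*log10(4*pi*d*f/c) for d.
    c = 3e8
    return (c / (4 * math.pi * f_hz)) * 10 ** (allowed_pl_db / 20)

for snr in (2.0, 5.0, 10.0):
    print(f"required SNR {snr:4.1f} dB -> range {max_range_m(required_snr_db=snr):5.1f} m")
```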
Spreading with either linear feedback shift
register code or Golay sequence is also applied
to further increase system robustness. The possible spreading values are 2, 4, and 64.
In the receiver for the LOS environment,
conventional matched filtering is sufficient for
achieving acceptable performance. For the
NLOS environment, optional methods such as
the frequency domain equalization (FDE) may
be included to mitigate multipath fading.
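The Golay spreading mentioned above can be illustrated with the classic recursive construction of Golay complementary pairs. The sketch below builds a length-64 pair and spreads a short symbol sequence with it; this is the generic textbook construction, not the specific sequences or spreading procedure defined in the standard.

```python
# Generic recursive construction of a Golay complementary pair (lengths 2^n)
# and a simple spreading example. Illustrative only; 802.15.3c defines its own
# specific Golay sequences and how they are applied.

def golay_pair(n):
    """Return complementary sequences (a, b) of length 2**n with +/-1 chips."""
    a, b = [1], [1]
    for _ in range(n):
        a, b = a + b, a + [-x for x in b]
    return a, b

def spread(symbols, code):
    """Repeat each +/-1 symbol over the spreading code (chip-level output)."""
    return [s * c for s in symbols for c in code]

a64, b64 = golay_pair(6)              # length-64 sequences
# Complementary property: the autocorrelation sidelobes of a and b cancel
# when summed, which makes such sequences attractive for synchronization.
chips = spread([1, -1, 1], a64)       # 3 symbols -> 192 chips
print(len(a64), len(chips))
```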
Figure 4. Range (m) vs. data rate (Mb/s) for several SC PHY mode modulation and coding schemes. Schemes with LDPC coding use π/2 BPSK with spreading factors 2 and 1, and π/2 QPSK. Schemes with RS coding use π/2 BPSK with spreading factors 4, 2, and 1, and π/2 QPSK.
AUDIO/VISUAL MODE OF THE MMWAVE PHY
As video and audio devices could be designed
only as a wireless data source (e.g., DVD player)
or only as a data sink (e.g., HDTV), highly asymmetric data transmission is possible; hence, the
designers of the AV PHY mode created two different sub-PHY modes: high-rate PHY (HRP)
for video transmission and low-rate PHY (LRP)
for the control signal. Both of the sub-PHY
modes use OFDM.
The HRP mode has an FFT size of 512 and
uses all the channel bandwidth available. There
are three MCSs with equal error protection,
delivering data rates of 0.952, 1.904, and 3.807
Gb/s. There are also two MCSs with unequal error protection and two MCSs in which only the most significant bits are sent.
On the other hand, the LRP mode occupies
only 98 MHz bandwidth, and three LRPs are
arranged per HRP channel. This allocation is to
accommodate three different networks in one
channel, because the HRP modes are assumed to have high beamforming gains.

Figure 5. Aggregation methods in 802.15.3c: a) standard; b) low-latency.
The AV PHY uses RS code as the outer code
and convolutional coding as the inner code in
the HRP mode, whereas only convolutional coding is used in the LRP mode. Modulation
schemes used in the AV PHY are limited to
QPSK and 16-QAM.
COMMON-MODE SIGNALING
Common mode signaling (CMS) is a low-data-rate SC PHY MCS, designed to mitigate interference among different PHY modes. CMS is a common platform that enables different PHY modes to communicate with each other before conducting their respective data transmissions. CMS is used for transmission of the beacon frame, synchronization (sync) frame, and other important command frames, such as the association frame and beamforming training sequence frames.
The modulation for CMS is π/2 BPSK. The payload part of CMS is encoded with RS(255,239). The FEC for the CMS header is RS(33,17), which is a shortened version of RS(255,239). Code spreading is applied to both header and payload using a Golay sequence of length 64 chips. The CMS preamble is carefully designed to provide good performance for synchronization and channel estimation, even in poor channel conditions.

MAC LAYER ENHANCEMENTS OF IEEE 802.15.3C
Before going into the details of the MAC layer
enhancements, we briefly introduce the IEEE
802.15.3c MAC. It is based on the IEEE
802.15.3b standard, which itself is an improvement over IEEE 802.15.3. In the standard a network is called a piconet, which is formed in an
ad hoc fashion. Among a group of devices
(DEVs), one will act as the piconet coordinator
(PNC) to provide the piconet’s synchronization
and to manage access control of the rest of the
DEVs. The necessary control information is
embedded in beacons. Upon receiving a beacon
from a PNC, the DEVs become aware of the
existence of the piconet. Beacons provide information about when and how DEVs can access
the network.
IEEE Communications Magazine • July 2011
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
A
BEMaGS
F
Communications
IEEE
During network operation, time is divided
into sequential superframes (SFs). Each SF has
three segments: a beacon period, a contention
access period (CAP), and a channel time allocation period (CTAP). During the beacon period, the PNC sends one or multiple beacons.
The CAP is reserved mainly for command and
control communication between PNC and
DEVs. Since such communication is mainly asynchronous, carrier sense multiple access with collision avoidance (CSMA/CA) is selected as a suitable access method. The remaining
time of an SF includes the CTAP, which provides time-division multiple access (TDMA)
communications. The CTAP is composed of
multiple channel time allocations (CTAs). Each
CTA is a time slot granted by the PNC for a
certain pair of DEVs. Time-sensitive applications such as AV streaming use the CTAP for
guaranteed data transmission. With these specifications, system designers had already developed an efficient and well-structured MAC
layer that only required improvements in three
major areas:
• Providing coexistence among different
PHYs and avoiding interference from hidden devices
• Improving transmission efficiency to enable
MAC SAP rates over 1 Gb/s and providing
low-latency transmissions for delay-sensitive
applications
• Supporting directivity inherent to 60 GHz
signals and beamforming antennas
In the next two subsections, we explain
enhancements in the first two areas. The solution in the last area requires a complex beamforming algorithm, which is explained later.
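As a structural summary of the superframe described above, the sketch below models an SF as a beacon period, a CAP, and a CTAP composed of CTAs. The field names and durations are illustrative placeholders rather than values from the standard.

```python
# Illustrative model of the 802.15.3 superframe structure described above:
# beacon period, contention access period (CAP), and a CTAP of per-pair CTAs.
# Field names and durations are placeholders, not values from the standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CTA:
    src_devid: int
    dst_devid: int
    duration_ms: float     # time slot granted by the PNC to this DEV pair

@dataclass
class Superframe:
    beacon_ms: float
    cap_ms: float          # CSMA/CA access for commands and async traffic
    ctap: List[CTA] = field(default_factory=list)   # TDMA portion

    def total_ms(self):
        return self.beacon_ms + self.cap_ms + sum(c.duration_ms for c in self.ctap)

sf = Superframe(beacon_ms=0.5, cap_ms=2.0,
                ctap=[CTA(1, 2, 10.0), CTA(3, 4, 5.0)])
print(f"superframe length: {sf.total_ms():.1f} ms")
```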
COEXISTENCE AMONG 802.15.3C PHYS
To achieve better coexistence among DEVs
using different PHY modes, a sync frame is
introduced. A sync frame includes information
about the duration of the SF and timing information of the CAP and each CTA. Sync frames
are modulated using the CMS mentioned in the
previous section. According to 802.15.3c rules, it
is mandatory for all PNC-capable DEVs to transmit a sync frame in every SF. In addition, any
PNC-capable DEV shall be able to receive and
decode sync frames and other command frames
modulated with CMS.
As a result, any PNC-capable DEV, regardless of its PHY mode in operation, will be
informed about the existence of nearby
piconets. It will then have the opportunity to
join one instead of starting another independent piconet. The sync frame transmission can
thus be seen as an effective coexistence method
to mitigate potential co-channel interference
from other piconets. Apart from the rules for
PNC-capable DEVs, an optional rule related
to non-PNC-capable DEVs is also defined in
the standard: Any DEV capable of transmitting
a sync frame may do so in the first granted
CTA in an SF and in every predefined number
of SFs.
This rule is intended to further extend the
coverage area of the sync frame. It allows non-PNC-capable DEVs to participate in the sync
frame transmission.
FRAME AGGREGATION
In high-speed WPAN and WLAN systems, transmission efficiency decreases with the increase in
transmission speed due to the increased ratio of
overhead time to payload transmission time. To
improve transmission efficiency and throughput
performance, frame aggregation can be
employed. The basic idea of frame aggregation
is to reduce the overhead, such as the preamble
and PHY/MAC header, by concatenating multiple MAC service data units (MSDUs) to form a
frame with a long payload. In the IEEE 802.15.3c
standard, two novel aggregation methods are
specified: standard aggregation and low-latency
aggregation.
Standard Aggregation — Standard aggregation is designed to support transmission of
uncompressed video streaming. Figure 5a shows
the basic procedure of standard aggregation.
The MAC layer of the transmitter, upon receiving an MSDU from the upper layer, divides the
MSDU into smaller data blocks if the
length of the MSDU exceeds a predefined
threshold. This process is called fragmentation.
The MAC attaches a frame check sequence
(FCS) to each data block to form a subframe.
For each subframe, there is a subheader (Sh)
created to carry information needed for the
receiver to decode individual subframes, such as
subframe length, MSDU sequence number, and
used MCS. The MAC header, on the other hand,
carries high-level control information applicable
to all the subframes, such as source and destination addresses. All the subheaders are placed
back-to-back and attached to a single header
check sequence (HCS) to form a MAC subheader. The MAC layer then transfers the subframes,
MAC subheader, and MAC header to the PHY
layer. The PHY layer performs channel coding
and modulation, and delivers the data to the
receiver over the wireless channel afterward. An
important aspect of this method is that instead
of distributing the subheaders between the subframes, all the subheaders are concatenated and
put in front of the subframes. The reason for
such a design is that video streaming contains
both data and control information, which should
be treated with different priorities. Changing
MCS over subframes is a common approach to
support priority. However, when operating at a
speed of gigabits per second, timely changing of
the MCSs subframe by subframe can be difficult
for the receiver. Putting the subheaders in front, in contrast, enables the receiver to know the
MCS of each subframe in advance, helping to
realize timely MCS switching. It was reported
that over 80 percent efficiency improvement and
above 4 Gb/s throughput are achieved with standard aggregation [6].
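The layout produced by standard aggregation can be made concrete with a small sketch that fragments MSDUs, appends a stand-in FCS to each fragment, and places all subheaders (with a stand-in HCS) ahead of the subframes. The byte formats, field sizes, and checksums are placeholders, not the exact frame formats of the standard.

```python
# Sketch of standard aggregation: fragment MSDUs, form subframes (fragment +
# FCS), and place all subheaders ahead of the subframes so the receiver knows
# each subframe's length/MCS in advance. Formats and sizes are placeholders.
import zlib

FRAG_THRESHOLD = 1024   # illustrative fragmentation threshold (bytes)

def fragment(msdu):
    return [msdu[i:i + FRAG_THRESHOLD] for i in range(0, len(msdu), FRAG_THRESHOLD)]

def aggregate_standard(msdus):
    subheaders, subframes = [], []
    for seq, msdu in enumerate(msdus):
        for frag_no, frag in enumerate(fragment(msdu)):
            fcs = zlib.crc32(frag).to_bytes(4, "big")       # stand-in FCS
            subframes.append(frag + fcs)
            # subheader: fragment length, MSDU sequence number, fragment number
            subheaders.append(len(frag).to_bytes(2, "big") +
                              bytes([seq, frag_no]))
    mac_subheader = b"".join(subheaders)
    hcs = zlib.crc32(mac_subheader).to_bytes(4, "big")      # stand-in HCS
    # MAC header and PHY header omitted; subheaders sit in front of subframes.
    return mac_subheader + hcs + b"".join(subframes)

frame = aggregate_standard([b"\x00" * 1500, b"\x01" * 700])
print(len(frame), "bytes in aggregated payload")
```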
Low-Latency Aggregation — Low-latency
aggregation is designed to support bidirectional
communications for PC peripheral applications.
These applications require aggregation to
improve their transmission efficiency. However,
they are sensitive to delay, because they need to
frequently transmit very short command frames.
Figure 5b shows the procedure of low-latency
aggregation. Unlike standard aggregation, the
MSDUs from the upper layer are directly
mapped into the subframes without fragmentation. The MAC subheader only contains global
control information for all the subframes. The
control information for individual subframes is
distributed to each subframe as an MSDU subheader. Once there is one MSDU available, the
transmission of the aggregated frame starts without waiting for the rest of the MSDUs to arrive.
If no new MSDU arrives after all the available
MSDUs are sent out, the MAC layer continues
sending a special subframe, referred to as a
zero-length MSDU, to fill the gap until new
MSDUs become available. Compared with standard aggregation, transmission of the first part
of the aggregated frame can start without waiting for the arrival of the whole frame, so the
packaging delay of the aggregation process is
reduced. However, some part of the control
information has to be inserted between the subframes, which negatively impacts the efficiency.
Assuming a frame payload of 512 bytes and a
transmission speed of 1 Gb/s, a simulation found
that up to 300 μs of delay improvement can be achieved [8].
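The zero-length-MSDU filling behavior can be pictured as a simple transmit loop that emits subframes as MSDUs arrive and filler subframes while the queue is empty. The queue handling and framing below are illustrative assumptions, not the procedure defined in the standard.

```python
# Sketch of low-latency aggregation: MSDUs map directly to subframes (no
# fragmentation); while the queue is empty, zero-length-MSDU subframes fill
# the gap until new MSDUs arrive. Framing details are placeholders.
from collections import deque

def low_latency_tx(msdu_queue, total_subframes):
    """Yield subframes for one aggregated frame of fixed subframe count."""
    for _ in range(total_subframes):
        if msdu_queue:
            payload = msdu_queue.popleft()
            yield {"zero_length": False, "payload": payload}
        else:
            # keep the aggregated frame going until new MSDUs show up
            yield {"zero_length": True, "payload": b""}

queue = deque([b"cmd-1", b"cmd-2"])
for sf in low_latency_tx(queue, total_subframes=4):
    kind = "zero-length MSDU" if sf["zero_length"] else f"{len(sf['payload'])} B"
    print("subframe:", kind)
```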
BEAMFORMING
The ultimate purpose of 60 GHz WPAN systems
is to deliver MAC throughput of multiple gigabits per second over a reasonable range. To
accomplish this, system designers have to
increase the transmission range, especially in
NLOS channels [9]. To compensate for the high
propagation loss in 60 GHz channels and reduce
the effects of shadowing, the use of an antenna
array has been proposed. The integration of
multiple antennas into portable devices can be
achieved easily, because the dimensions and necessary spacing of the 60 GHz antennas are on
the order of millimeters [10]. As multiple antennas are available at both the transmitter and the
receiver, multiple-input multiple-output (MIMO)
techniques can be employed to increase spectral
efficiency. However, MIMO techniques require
multiple radio frequency (RF) chains, which significantly increase the complexity and cost.
Therefore, the group focused on improving only
the operational range with directional transmission based on antenna array beamforming.
IEEE 802.15.3c specifies an optional beam-codebook-based beamforming protocol (BP)
without the need for angle of departure, angle of
arrival, or channel state information estimation.
The proposed BP has the following features:
• The BP consists of three stages: sector-level
(coarse) searching, beam-level (fine) searching, and an optional tracking phase. The
division of the stages facilitates a significant
reduction in setup time compared with
beamforming protocols with exhaustive
searching mechanisms.
• The BP employs only discrete phase shifts,
which simplifies the DEVs’ structure compared to conventional beamforming with
phase and amplitude adjustment.
• The BP is designed to be PHY-independent and is applicable to different
antenna configurations.
120
Communications
IEEE
A
BEMaGS
F
• The BP is a complete MAC procedure. It
efficiently sets up a directional communication link based on codebooks.
Two types of BP are specified: on-demand
beamforming and proactive beamforming.
On-demand beamforming may be used
between two DEVs or between the PNC and a
DEV. It takes place in the CTA allocated to the
DEV for the purpose of beamforming.
Proactive beamforming may be used when
the PNC is the source of the data to one or multiple DEVs. It allows multiple DEVs to train
their own receiver antennas for optimal reception from the PNC, with lower overhead. During
proactive beamforming, the sector-level training
from PNC to DEV takes place in the beacon
period. The sector-level training from DEV to
PNC and the beam-level training in both directions take place in the CTAP.
For both the beamforming modes, two forms
of beamforming criteria are specified: beam
switching and tracking (BST) criteria, and pattern estimation and tracking (PET) criteria. BST
is suitable for any antenna configuration and
must be supported by any DEV with beamforming capability. It is based on selecting the best
beam from a given set of beams. On the other
hand, PET is suitable only if 1D linear antenna
arrays and 2D planar antenna arrays are
employed in the DEV. Its support is optional,
and is based on finding the optimal beamformer
and beamcombiner vectors (i.e., antenna
weights) that do not necessarily fall into the
given set of beams.
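To illustrate the codebook idea with discrete phase shifts, the sketch below builds a DFT-like codebook whose weights are quantized to four phases (2-bit phase shifters) and performs a BST-style selection of the strongest beam. Both the codebook formula and the array model are generic illustrations, not the codebook tables or training procedure of the standard.

```python
# Generic 2-bit-phase beam codebook for a uniform linear array, plus a
# BST-style "pick the strongest beam" selection. This is an illustrative
# codebook, not the one tabulated in IEEE 802.15.3c.
import cmath
import math

def codebook(n_elements, n_beams):
    """DFT-like weight vectors quantized to phases {0, 90, 180, 270} degrees."""
    beams = []
    for k in range(n_beams):
        w = []
        for m in range(n_elements):
            phase = 2 * math.pi * m * k / n_beams
            q = round(phase / (math.pi / 2)) % 4          # 2-bit phase quantization
            w.append(cmath.exp(1j * q * math.pi / 2))
        beams.append(w)
    return beams

def best_beam(beams, channel):
    """BST-style selection: choose the beam with the largest |w^H h|^2."""
    powers = [abs(sum(wi.conjugate() * hi for wi, hi in zip(w, channel))) ** 2
              for w in beams]
    return max(range(len(beams)), key=lambda i: powers[i])

h = [cmath.exp(1j * 2 * math.pi * 0.25 * m) for m in range(8)]  # toy channel
print("selected beam index:", best_beam(codebook(8, 8), h))
```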
CONCLUSION
In this article we have explained 802.15.3c, which
is the first IEEE standard achieving over 1 Gb/s
at the MAC SAP. During its standardization, its
task group not only created three new PHY
modes, but also improved an existing MAC by
adding aggregation and beamforming capability.
Its channelization allows rapid deployment
throughout the world. Already, customers can
buy products based on the 802.15.3c standard.
However, an open question left to the market to
answer is which one of the PHY modes will be
the most successful. Nevertheless, there are sufficient procedures within the standard to enable
coexistence between the PHY modes. As a
result, customers do not need to worry about
interference from other compliant devices. From
the standardization point of view, there are some
possible features that were omitted due to the
limited development time. For example, a PNC
cannot enable multiple CTAs at a given time. In
the future, a new task group could be created to
add such features to improve the quality of enduser experience.
REFERENCES
[1] FCC, "Amendment of Parts 2, 15, and 97 of the Commission's Rules to Permit Use of Radio Frequencies Above 40 GHz for New Radio Applications," FCC 95-499, ET Docket no. 94-124, RM-8308, Dec. 1995.
[2] IEEE Std 802.15.3c-2009 (Amendment to IEEE Std 802.15.3-2003), "IEEE Standard for Information Technology — Telecommunications and Information Exchange between Systems — Local and Metropolitan Area Networks — Specific Requirements. Part 15.3: Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications for High Rate Wireless Personal Area Networks (WPANs), Amendment 2: Millimeter-Wave-Based Alternative Physical Layer Extension," Oct. 2009, pp. c1–187.
[3] A. Sadri, "Summary of the Usage Models for 802.15.3c," IEEE 802.15-06-0369-09-003c, Nov. 2006.
[4] A. A. M. Saleh and R. A. Valenzuela, "A Statistical Model for Indoor Multipath Propagation," IEEE JSAC, vol. 5, Feb. 1987, pp. 128–37.
[5] S.-K. Yong, "TG3c Channel Modeling Sub-committee Final Report," IEEE 802.15-07-0584-01-003c, Mar. 2007.
[6] S. Kato et al., "Single Carrier Transmission for Multi-Gigabit 60-GHz WPAN Systems," IEEE JSAC, vol. 27, no. 8, Oct. 2009, pp. 1466–78.
[7] F. Kojima et al., "Necessary Modifications on Conventional IEEE 802.15.3b MAC to Achieve IEEE 802.15.3c Millimeter Wave WPAN," Proc. IEEE PIMRC 2007, Sept. 2007, pp. 1–5.
[8] X. An et al., "Performance Analysis of the Frame Aggregation Mechanisms in IEEE 802.15.3c," Proc. IEEE PIMRC 2009, Sept. 2009, pp. 2095–2100.
[9] K. Sato et al., "60 GHz Applications and Propagation Characteristics," IEEE 802.15-08-0651-01-003c, Sept. 2008.
[10] J. A. Howarth et al., "Towards a 60 GHz Gigabit System-On-Chip," Proc. 16th Wireless World Research Forum Meeting, Apr. 2006, pp. 1–8.
BIOGRAPHIES
TUNCER BAYKAS ([email protected]) received his B.A.Sc. degree from Bogazici University in 2000, and M.A.Sc. and Ph.D. degrees from the University of Ottawa in 2002 and 2007, respectively. He served as secretary and assistant editor for IEEE 802.15 WPAN Task Group 3c. Currently he is chair of IEEE 802.19 TG1, Wireless Coexistence in the TV White Space. He served as TPC vice chair of PIMRC 2009 and TPC chair of Cogcloud 2010.
CHIN-SEAN SUM ([email protected]) received his M.E. from University Technology of Malaysia (UTM) in 2002 and his Ph.D. from Niigata University in 2007. In June 2007 he joined NICT, Japan, as an expert researcher. He was involved in the IEEE 802.15.3c (TG3c) standardization activities, where he served as task group secretary and assistant editor. He is currently the coexistence contributing editor in IEEE 802.15.4g and an active contributor in IEEE 802.11af.
ZHOU LAN ([email protected]) received his B.S. and Ph.D. degrees in electrical engineering from Beijing University of Posts and Telecommunications (BUPT) in 2000 and 2005, respectively. He is currently with NICT of Japan as an expert researcher. He served as TPC vice chair of IEEE PIMRC 2009 and assistant editor of IEEE 802.15.3c. He is currently secretary of the IEEE 802.11af task group. He is also active in other IEEE WLAN and WPAN working groups.
JUNYI WANG ([email protected]) received his B.Eng. and M.Eng. degrees from Harbin Institute of Technology, China, and his Ph.D. degree from Tokyo Institute of Technology. Currently he is an expert researcher at NICT working on TV white space communication systems. He is secretary of the IEEE P802.19 WG and TG1, and a voting member of the IEEE 802.15, IEEE 802.11, and IEEE 802.19 working groups. He was a guest editor of Wireless Communications and Mobile Computing.
M. AZIZUR RAHMAN ([email protected]) received his Ph.D. from Niigata University in 2008. In recent years, while with NICT, he has worked on a number of projects focusing on millimeter wave, cognitive radio, wireless coexistence, white space communications, and sensing technologies, and contributed to wireless communications standardization within IEEE. He has authored/co-authored numerous articles and has over 25 patents pending. He was a recipient of the 2005 Japan Telecommunication Advancement Foundation Technology Award.
HIROSHI HARADA ([email protected]) joined the Communications Research Laboratory, Ministry of Posts and Communications, in 1995 (currently NICT). Currently he is director of the Smart Wireless Laboratory at NICT and NICT's Singapore Wireless Communication Laboratory. He is serving on the board of directors of the SDR Forum, as chair of the IEEE DySPAN Standards Committee, and as vice chair of IEEE P1900.4 and IEEE 802.15.4g. He is a visiting professor at the University of Electro-Communications, Tokyo, Japan, and is the author of Simulation and Software Radio for Mobile Communications (Artech House, 2002).
SHUZO KATO [F] ([email protected]) is currently a professor at the Research Institute of Electrical Communications, Tohoku University, Japan. He was program coordinator of Ubiquitous Mobile Communications at NICT, working on wireless communications systems R&D. He served as vice chair of IEEE 802.15.3c and chair of the Consortium of Millimeter Wave Systems for Practical Applications, promoting millimeter wave systems globally. He is a Fellow of IEICE Japan and served as an Editor of IEEE Transactions on Communications, Chairman of the Satellite and Space Communications Committee, IEEE ComSoc, and a Board Member of IEICE Japan.
TOPICS IN STANDARDS
Overview of Femtocell Support in
Advanced WiMAX Systems
Ying Li, Samsung Telecommunications America
Andreas Maeder and Linghang Fan, NEC Network Laboratories Europe
Anshuman Nigam, Samsung India Software Operations
Joey Chou, Intel Corporation
ABSTRACT
The femtocell concept is an integral part of
the telecommunication industry’s efforts to provide high-throughput, high quality services into
the users’ home. In contrast to conventional cell
types which are well-planned by the operators,
femtocell base stations are supposed to be
installed by customers themselves, similar to a
WiFi access point. Unlike WiFi, however, femtocells operate mainly in licensed bands, such that
operators are in control of the radio interface.
This brings new challenges as well as opportunities for femtocell design; these include sophisticated mobility and interference management,
increased reliability, as well as deployment in a
plug-and-play manner. Extensive progress in
femtocell design has been made in Advanced
WiMAX recently, which is associated with the
IEEE 802.16m update in 2011. This article gives
an overview and update on novel concepts and
mechanisms for femtocell support in the network architecture and air interface that have
been adopted into the WiMAX Forum network
specifications and the IEEE 802.16m specification.
INTRODUCTION
The explosive increase in demand for wireless
data traffic has created opportunities for new network architectures incorporating multitier base
stations (BSs) with diverse sizes. Support for
small-sized low-power BSs such as femtocell BSs is
gaining momentum in cellular systems, because of
their potential advantages such as low cost deployments, traffic offloading from macrocells, and the
capability to deliver services to mobile stations
which require large amounts of data [1–3].
Femtocells are supported by current third-generation (3G) technologies and by future next-generation cellular systems, such as Advanced
WiMAX systems. Advanced WiMAX, which will
provide up to 1 Gb/s peak throughput with the
IEEE 802.16m [4] update in 2011, is one of the
technologies for the ongoing International
Mobile Telecommunications (IMT)-Advanced
program [4, 5] for the fourth generation (4G) of
mobile wireless broadband access. IEEE
802.16m defines the wireless metropolitan area
network (WirelessMAN) advanced air interface
as an amendment to the ratified IEEE 802.16-2009 specification [6] with the purpose of
enhancing performance such that IMT-Advanced
requirements are fulfilled. In Advanced
WiMAX, femtocell support is one of the solutions to provide high performance services even
in indoor scenarios.
In Advanced WiMAX, a femtocell BS, or
WiMAX femtocell access point (WFAP), is a
low-power BS intended to provide in-house
and/or small office/home office (SOHO) coverage. With conventional macro- or microcells,
indoor coverage is challenging and expensive to
achieve due to the high penetration losses of
most buildings. WFAPs are usually self-deployed
by customers inside their premises, and connected to the radio access network (RAN) of the
service provider via available broadband connections like digital subscriber line (DSL) or fiber
to the home (FTTH).
The self-deployment of WFAPs has implications for operation and management requirements. The WFAP must be able to react,
in a highly flexible manner, to different interference situations, since neither the location nor
the radio propagation environment can be predicted in advance. Furthermore, the customers
are in physical control of the WFAP, meaning
that they can switch it on or off at any time.
Other factors like unreliable backhaul connections must also be considered.
Considering these scenarios, some of the
technical challenges and requirements can be
identified as follows [7]:
• Tight integration into the existing WiMAX
architecture for support of seamless mobility between macrocells and WFAPs, low-complexity network synchronization, and
localization
• Advanced interference mitigation techniques to guarantee quality of service and
coverage in macrocells as well as femtocells
• Access control for different groups of subscribers as well as energy-efficient recognition of access rules by the mobile station
(MS)
• Support for increased reliability and
autonomous reaction on irregular network
or WFAP conditions
To design WFAPs capable of meeting these
challenges and requirements, standardization
efforts are being made in both IEEE 802.16 and
WiMAX Forum. IEEE 802.16 Task Group m
(TGm) defined the air interface support for femtocells. In parallel, the WiMAX Forum Network
Working Group (NWG) is driving the development of specifications for femtocell solutions in
the access and core networks. NWG defined two
phases for femtocell support. In phase 1, support
for the IEEE 802.16-2009 air interface was specified in NWG Rel. 1.6 [8–10]. In phase 2, femtocell specifications to support the advanced air
interface features as defined in IEEE 802.16m
are developed.
A high-level description of some of the technologies to support WFAPs can be found in [1,
2], but these have not yet been technically specified in the corresponding working groups at the
time of writing. As IEEE 802.16m has just been
completed, a significant number of technical
details have been discussed, evaluated, and finally adopted into the specification. This article
provides a detailed update on these recent developments in the standardization process for femtocell design in Advanced WiMAX. For example,
no solution was discussed in [1, 2] for the coverage hole problem caused by a private femtocell interfering with a macrocell MS, whereas this article provides some solutions to it. As another
example, this article also provides details on the
final specification on femtocell reliability, femtocell low duty mode, access control, and interference management, which are not discussed in
detail in [1, 2].
In the next section we describe how WFAPs
are integrated into the WiMAX network architecture, which is mainly defined by the WiMAX
Forum NWG. Advanced air interface support
for mobility in WFAPs, interference mitigation,
WFAP reliability, and WFAP low duty mode is
introduced later, mainly based on the most
recent efforts in IEEE 802.16m. The article concludes with some summarizing observations and
open issues.
NETWORK ARCHITECTURE FOR
WIMAX FEMTOCELLS
Figure 1 shows the general WiMAX network
architecture with additional support for femtocells. In the following section, the main functional entities are described.
GENERAL WIMAX NETWORK ARCHITECTURE
The network service provider (NSP) provides IP
data services to WiMAX subscribers, while the
network access provider (NAP) provides
WiMAX radio access infrastructure to one or
more WiMAX NSPs. A WiMAX operator may
act as NSP and NAP. An NAP implements the
infrastructure using one or more access service
nodes (ASNs). An ASN is composed of one or
more ASN gateways and one or more BSs to
provide mobile Internet services to subscribers.
The ASN gateway serves as the portal to an
ASN by aggregating BS control plane and data
plane traffic to be transferred to a connectivity
service network (CSN). An ASN may be shared
by more than one CSN. A CSN may be deployed
as part of a WiMAX NSP. A CSN may comprise
the authentication, authorization, and accounting (AAA) entity and the home agent (HA) to
provide a set of network functions (e.g. roaming,
mobility, subscription profile, subscriber billing)
that are needed to serve each WiMAX subscriber.
NETWORK ARCHITECTURE TO
SUPPORT WIMAX FEMTOCELL
For the support of femtocells, a Femto NAP and
a Femto NSP are introduced. Additionally, SON
(Self-Organizing Networks) functionalities are
added.
Femto NAP: A Femto NAP implements the
infrastructure using one or more Femto ASNs to
provide short range radio access services to femtocell subscribers. A Femto ASN is mainly differentiated from a Macro ASN in that the
WFAP backhaul is transported over the public
Internet. Therefore, the security gateway (GW)
is needed for WFAP authentication. When a
WFAP is booted, it first communicates with the
bootstrap server to download the initial configuration information, including the IP address of
the security GW. The WFAP and security GW
authenticate each other, and create a secure
Internet Protocol Security (IPSec) tunnel. The
security GW then communicates with the femto
AAA server in the femto NSP to verify whether
the WFAP is authorized. The femto GW acts as
the portal to a femto ASN that transfers both
control and bearer data between MS and CSN,
and control data between WFAP and femto
NSP.
Femto NSP: The femto NSP manages and
controls entities in the femto ASN. The AAA
function performs authentication and authorization of the WFAP. The management server
implements management plane protocols and
procedures to provide operation, administration,
maintenance, and provisioning (OAM&P) functions to entities in the femto ASN. OAM&P
enables the automation of network operations
and business processes that are critical to
WiMAX Femtocell deployment. WFAP management includes fault management, configuration
management, performance management, and
security management.
SON functionalities to support femtocell:
Self-organizing network (SON) functionalities
can be divided into self-configuration and self-optimization. Due to the large number of
WFAPs expected, self-configuration is primarily
intended to enable auto-configuration and avoid
truck rolls. However, since femtocell deployments are not planned by operators, it is very
important that the configuration (e.g., radio
parameter settings) should take neighboring cells into account so as not to add interference to their users.
The wireless environment changes dynamically,
as WFAPs can be powered on and off at any
time. Self-optimization provides a mechanism to
collect measurements from MSs and fine tune
system parameters periodically in order to
achieve optimal system capacity, coverage, and
performance. Therefore, the SON server needs
to interact with SON clients not only in the
femto ASN but also in the macro ASN. The information elements are exchanged between the SON server and the SON clients on the management plane. Therefore, they will be transported using the same management plane protocols as defined in the femtocell management specification.

Figure 1. WiMAX network architecture: a macro NAP with macro ASNs (BSs and ASN gateways) and a femto NAP with femto ASNs (WFAPs reaching a security gateway and femto gateway over a secure tunnel across the public Internet via DSL/cable), a femto NSP (SON server, management server, bootstrap server, AAA, HA), and the CSN (AAA, HA); the inset shows WFAP operational states: initialization (network attachment, synchronization, topology acquisition) followed by an operational state with normal operation and low duty modes. (AAA: authentication, authorization, and accounting; ASN: access service network; ASN GW: ASN gateway; CSN: connectivity service network; HA: home agent; NAP: network access provider; NSP: network service provider; SON: self-organizing network; WFAP: WiMAX femto access point.)
WFAP: For proper integration into the operator’s RAN, the WFAP enters the initialization
state before becoming operational. In this state,
it performs procedures such as attachment to the
operators’ network, configuration of radio interface parameters, time/frequency synchronization,
and network topology acquisition. After successfully completing initialization, the WFAP is integrated into the RAN and operates normally. In
operational state, normal and low duty operation
124
Communications
IEEE
modes are supported. In low duty mode, the
WFAP reduces radio interface activity in order
to reduce energy consumption and interference
to neighboring cells. The low duty mode is discussed in more detail in a separate section.
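A minimal sketch of the WFAP behavior just described, with an initialization state followed by an operational state that toggles between normal and low duty modes; the state and event names are our own labels, not identifiers from the specifications.

```python
# Minimal state model for a WFAP: initialization, then operational with
# normal and low duty modes. State/event names are illustrative labels only.
from enum import Enum, auto

class WfapState(Enum):
    INITIALIZATION = auto()   # network attachment, synchronization, topology acquisition
    NORMAL = auto()
    LOW_DUTY = auto()

class Wfap:
    def __init__(self):
        self.state = WfapState.INITIALIZATION

    def initialization_done(self):
        if self.state is WfapState.INITIALIZATION:
            self.state = WfapState.NORMAL

    def set_low_duty(self, enabled):
        # only meaningful once the WFAP is operational
        if self.state in (WfapState.NORMAL, WfapState.LOW_DUTY):
            self.state = WfapState.LOW_DUTY if enabled else WfapState.NORMAL

ap = Wfap()
ap.initialization_done()
ap.set_low_duty(True)        # reduce radio activity, energy use, and interference
print(ap.state)
```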
AIR INTERFACE SUPPORT FOR
WIMAX FEMTOCELLS
The IEEE 802.16m standard amends the IEEE
802.16 WirelessMAN orthogonal frequency-division multiple access (OFDMA) specification to
provide an advanced air interface for operation
in licensed bands. It meets the cellular layer
requirements of IMT-Advanced next-generation
mobile networks and provides continuing sup-
port for legacy WMAN-OFDMA equipment.
In IEEE 802.16m, the basic features of the
air interface are identical for femtocells and
macrocells. For example, it supports both Time
Division Duplex (TDD) and Frequency Division
Duplex (FDD), it supports various bandwidth
from 5 to 20 MHz for single carrier and wider
bandwidth for multiple carrier, and it supports
types of traffic such as VoIP, best- effort data,
etc. However, there are some aspects for femtocells (as specified in Section 16.4 in IEEE
802.16m [4]) different from macrocells or other
cells, mostly on Medium Access Control (MAC)
layer and a few on Physical (PHY) layer.
The IEEE 802.16m draft specification provides support for the operation of WFAPs and
the integration of WFAPs in macrocell networks
to provide functionality such as access control,
network topology acquisition, mobility support,
interference mitigation, reliability, and WFAP
low duty mode. In the following section, these
features are introduced in more technical detail.
Discussions and contributed documents are
archived on the IEEE 802.16 Task Group m (TGm) website.
FEMTOCELL ACCESS CONTROL
For the typical use case of WFAPs as a “private
base station,” access control schemes must be
supported. A closed subscriber group (CSG)
containing a list of subscribers restricts access to
WFAPs or certain service levels. IEEE 802.16m
defines three modes for WFAP access:
• CSG-Closed WFAPs are accessible exclusively to members of the CSG, except for emergency services.
• CSG-Open (or hybrid) WFAPs grant CSG members preferential access; however, subscribers that are not listed can still access the WFAP at a lower priority.
• OSG (open subscriber group) WFAPs are accessible by any MS, much like a normal macro BS.
For efficient identification of subscriptions
and accessibility of WFAPs, a femto-enabled MS
can maintain a CSG whitelist, containing a set of
WFAPs and corresponding attributes like geographical location or overlaying macrocell identifiers. To avoid large CSG whitelists, a CSG
identifier (CSGID) is defined which describes a
group of WFAPs within the same CSG. The
CSGID can be derived directly without any additional information from the global unique BS
identifier (BSID) of the WFAP.
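The whitelist check can be pictured as deriving the CSGID from the WFAP's BSID and testing membership in the MS's local CSG whitelist. The bit field used for the derivation below is a made-up placeholder; the actual mapping rule is defined in the specification.

```python
# Sketch of CSG whitelist checking. The CSGID-from-BSID derivation here
# (taking an arbitrary bit field of the 48-bit BSID) is a placeholder; the
# real mapping is defined in IEEE 802.16m.
def csgid_from_bsid(bsid: int) -> int:
    """Placeholder derivation: use bits 8..31 of a 48-bit BSID as the CSGID."""
    return (bsid >> 8) & 0xFFFFFF

def may_access(bsid: int, whitelist: set, access_mode: str) -> bool:
    """CSG-closed: members only; CSG-open: anyone (members get priority);
    OSG: anyone."""
    if access_mode in ("osg", "csg-open"):
        return True
    return csgid_from_bsid(bsid) in whitelist

ms_whitelist = {0x00ABCD}                       # CSGIDs the MS subscribes to
wfap_bsid = 0x0000ABCD12                        # example 48-bit BSID
print(may_access(wfap_bsid, ms_whitelist, "csg-closed"))
```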
NETWORK TOPOLOGY ACQUISITION
Knowledge of the network topology is critical for
efficient interference mitigation and mobility
management between macrocells and WFAPs
and among WFAPs. Both macrocells and
WFAPs have to be aware if a WFAP enters or
leaves the environment, thus changing interference and mobility conditions. Furthermore, MSs
can perform cell searching and handovers in a
more efficient way if the type and the access
policies of the WFAPs in connection range are
known beforehand. IEEE 802.16m supports MS-assisted network topology acquisition, but the
WFAPs can also scan the radio environment to
find neighbor or overlay cells. Figure 2 shows
some approaches for network topology acquisition.
IEEE
BEMaGS
MS Acquisition of WFAP Topology — IEEE
802.16m adopts an energy-efficient two-step
scanning method for the MS to identify neighboring WFAPs, and further to efficiently identify
whether an MS is allowed to access the WFAP.
Identified WFAPs and their attributes can then
be reported to overlaying macrocells and neighboring WFAPs.
Base station types are differentiated by the
frame preamble sequence, which is uniquely
mapped to an IDcell identifier. The total number of preamble sequences is partitioned into
subsets to differentiate between BS types. To
make the scanning and possible network entry
efficient, the set of IDcells is partitioned into sets
for macro and non-macro cells, where the latter
set is further partitioned into private (further
partitioned into CSG-closed and CSG-open) and
public cells (further partitioned into pico and
relay).
The two-step scanning method works as follows. In the first step, an MS scans the frame
preamble sequence to determine the BS type.
However, the number of WFAPs within the coverage area of a macrocell may well exceed the
number of available IDcell identifiers, such that
the identity of a WFAP may not be resolved
uniquely. To solve this WFAP ambiguity problem, in the second step the MS decodes the periodically broadcast superframe header (SFH) to obtain the unique BSID. Note that, to save battery energy, the second step is only performed when necessary. In the second step, the
MS can also derive the CSGID of the WFAP,
and compare with its local CSG whitelist to
determine whether the detected WFAP is an
accessible cell for the MS.
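The two-step logic can be summarized in code: step one classifies the cell type from the preamble's IDcell partition, and step two decodes the SFH only when needed to resolve the BSID and CSGID. The partition boundaries and the SFH decoding stub below are illustrative assumptions, not the actual IDcell allocation.

```python
# Two-step scanning sketch: classify the BS type from the IDcell partition,
# then decode the SFH only when needed to resolve the BSID/CSGID ambiguity.
# The partition boundaries and decode_sfh() stub are illustrative assumptions.

def bs_type_from_idcell(idcell):
    """Placeholder IDcell partitioning into macro / CSG-closed / CSG-open / open cells."""
    if idcell < 256:
        return "macro"
    if idcell < 384:
        return "femto-csg-closed"
    if idcell < 448:
        return "femto-csg-open"
    return "femto-open"

def scan(idcell, decode_sfh, whitelist):
    """Step 1: preamble only. Step 2: decode the SFH only for potentially useful femtocells."""
    bs_type = bs_type_from_idcell(idcell)
    if bs_type == "macro":
        return bs_type, None, True
    bsid, csgid = decode_sfh()            # step 2: costs battery, done only if needed
    accessible = bs_type != "femto-csg-closed" or csgid in whitelist
    return bs_type, bsid, accessible

def fake_sfh():
    return (0x0000ABCD12, 0x00ABCD)       # stand-in for a decoded SFH

print(scan(300, fake_sfh, {0x00ABCD}))
```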
WFAP Acquisition of Neighboring Cells —
The WFAP can acquire the network topology
from the backhaul, from the reporting MSs, or
by active scanning. The WFAP can scan and
measure its neighboring cells, such as overlaying
macrocells, or other nearby WFAPs, for interference management and to assist the cell (re)selection of the MS. The WFAP in this way acts like
an MS. However, in TDD (time division duplex)
systems, the WFAP cannot transmit frame
preambles and SFH during scanning. Hence, the
WFAP broadcasts a SON-ADV (SON advertisement) message that includes the timing information of the scanning interval, during which the WFAP scans other cells and its own preambles and SFH may not be available for the MSs in its coverage to scan. This message informs the MSs that the WFAP is unavailable for scanning during that interval.
FEMTOCELL MOBILITY MANAGEMENT
Femtocell networks, especially in the case of
dense deployments, are challenging for mobility
and hand-over functions due to the large number of small cells with different access types.
Special focus must be set on cell scanning functions to avoid high energy consumption on the
MS side. Also, seamless handover must be supported to avoid QoS degradations. Figure 3
shows some optimized mobility management support in IEEE 802.16m.

Figure 2. Network topology acquisition: in level one scanning, the MS uses the synchronization channel (preamble/IDcell) and its knowledge of the IDcell partitioning to determine whether a scanned BS is a CSG-closed, CSG-open, or open cell; in less frequent level two scanning, the MS decodes the SFH to obtain the femto BSID, derives the CSGID from it with zero extra overhead, and checks it against its local CSG whitelist; the WFAP announces, in a SON-ADV message, the scanning interval during which it scans its neighboring cells and is unavailable to MSs; MSs report their scanning results to the network.
Optimized MS Scanning of WFAPs — Macrocell BSs and WFAPs can help MSs in the process of scanning for WFAPs by conveying
information on the WFAP network topology.
This is achieved by broadcast, unicast, and
request-response message exchanges. Specifically, a macrocell BS can broadcast information on
OSG WFAPs in their coverage area like carrier
frequencies or IDcell partitions to reduce the
scanning time for MSs. Furthermore, after successful association of an MS to the macrocell
network, a macrocell BS can transmit a list of
accessible neighboring WFAPs. An MS may also
explicitly request a list of accessible WFAPs.
An MS may request additional scanning
opportunities from a BS by sending a message
including the detected IDcell index and carrier
frequency information. Upon reception of the
message, the BS can respond with a list of accessible neighbor WFAPs.
Scanning of closed-subscriber group WFAPs
should be minimized as far as possible as long as
the subscriber is not authorized. Therefore, the
MS may provide CSGIDs of CSG whitelists to
the current serving base station to obtain instructions on how and when to scan; these instructions
may include a list of WFAP BSIDs which are
associated with the requested CSGIDs.
Furthermore, information on the location of
WFAPs is also exploited to optimize MS scanning of WFAPs (e.g., by triggering MS scanning
when the MS or the network judges that the MS
is near a WFAP) based on the location of the
WFAPs and MS. The CSG whitelist may include
location information of CSG WFAPs, such as
GPS info or overlay macrocell BSID. The network may also instruct (by sending a message,
which may include a list of allowable WFAPs
nearby) the MS to scan WFAPs based on location information available at the network.
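As a rough illustration of how an MS might combine these hints, the following Python sketch filters scanning candidates using a macro-provided neighbor list, the local CSG whitelist, and a location-based trigger; the data structure, field names, and the 200 m proximity threshold are illustrative assumptions, not elements of the standard.

from dataclasses import dataclass
from math import hypot

@dataclass
class WfapInfo:
    bsid: str
    csgid: int            # CSG identifier derivable from the BSID
    access_type: str      # "CSG-closed", "CSG-open", or "Open"
    carrier_hz: float
    location: tuple       # (x, y) in meters, if advertised by the network

def select_scan_candidates(neighbor_list, csg_whitelist, ms_location,
                           proximity_m=200.0):
    """Keep only the WFAPs worth scanning: open or CSG-open cells, plus
    CSG-closed cells whose CSGID is in the MS whitelist, restricted to
    those the MS is judged to be near (location-triggered scanning)."""
    candidates = []
    for wfap in neighbor_list:
        accessible = (wfap.access_type != "CSG-closed"
                      or wfap.csgid in csg_whitelist)
        if not accessible:
            continue  # avoid wasting energy on inaccessible CSG-closed cells
        if wfap.location is not None and ms_location is not None:
            dx = wfap.location[0] - ms_location[0]
            dy = wfap.location[1] - ms_location[1]
            if hypot(dx, dy) > proximity_m:
                continue  # judged too far away, skip scanning for now
        candidates.append(wfap)
    return candidates

# Example: the macro BS unicasts two neighbors; the MS subscribes to CSG 42.
neighbors = [
    WfapInfo("FEMTO-A", 42, "CSG-closed", 2.605e9, (10.0, 20.0)),
    WfapInfo("FEMTO-B", 77, "CSG-closed", 2.605e9, (15.0, 25.0)),
]
print(select_scan_candidates(neighbors, {42}, (0.0, 0.0)))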
Handover Support for WFAPs — Handovers between macrocells and WFAPs, as well as inter-WFAP handovers, should be transparent and seamless for high QoS, while also taking user preferences into account. For example, subscribers
may prefer their home WFAPs even if the signal
strength of the WFAPs is lower than that of
adjacent macrocell BSs. To this end, the
WMAN-advanced air interface defines handover
and scanning trigger conditions, and target BS
priorities for femtocells based on the BS type.
Trigger conditions can be defined such that an unwanted handover from a home WFAP to the macrocell network is avoided or, conversely, such that handovers to WFAPs are preferred.
In addition, the network can instruct the MS
on how to prioritize the cell (re)selection. For
example, the network or the serving BS can send
the MS a message that includes a prioritized list
of the candidate target base stations.
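A minimal sketch of how BS-type-dependent trigger conditions and a network-supplied priority list could be combined is given below; the hysteresis margins, field names, and the preference for home WFAPs are illustrative assumptions rather than values from the specification.

# Hysteresis margins (dB) per (serving, target) BS-type pair; illustrative only.
HO_MARGIN_DB = {
    ("macro", "home_wfap"): -3.0,   # negative: prefer the home WFAP even if weaker
    ("macro", "wfap"):       3.0,
    ("home_wfap", "macro"):  6.0,   # discourage unwanted handover back to the macro
}

def handover_target(serving_type, serving_rssi_dbm, candidates, priority):
    """candidates: list of (bsid, bs_type, rssi_dbm) tuples.
    priority: BSID -> rank supplied by the network (lower is better)."""
    eligible = []
    for bsid, bs_type, rssi in candidates:
        margin = HO_MARGIN_DB.get((serving_type, bs_type), 3.0)
        if rssi >= serving_rssi_dbm + margin:          # trigger condition met
            eligible.append((priority.get(bsid, 99), -rssi, bsid))
    return min(eligible)[2] if eligible else None      # best priority, then strongest

print(handover_target("macro", -80.0,
                      [("HOME-1", "home_wfap", -82.0),
                       ("MACRO-2", "macro", -75.0)],
                      {"HOME-1": 0, "MACRO-2": 1}))    # -> HOME-1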
Figure 3. Optimized handover scheme (the MS can scan autonomously or based on instructions from the BS; the MS and its serving BS negotiate the timings for the MS to scan its neighboring WFAPs; the MS can report its location, detected BSs, and subscribed CSGIDs; the serving macro BS can unicast an optimized neighbor list of possibly accessible WFAPs to help the MS scan energy-efficiently; different types of BS can apply different trigger conditions for handover, and the MS can have preferences for its subscribed femtocells).
INTERFERENCE MANAGEMENT
Since WFAPs are overlaid by macrocells, interference occurs not only in the tier of WFAPs,
but also across tiers, i.e. between WFAPs and
macrocells. Advanced interference management
is therefore crucial for viable operation of femtocell networks, especially in dense-deployment
scenarios. Several factors need to be considered, such as whether the WFAP and overlay
macrocell use the same frequency, whether
multiple frequency carriers are available,
whether interference management is applied to
control channel or data channel or both, and
whether the interfered MS is in connected or in
idle mode.
In order to provide seamless connectivity
and high QoS to mobile stations, the WMANadvanced air interface supports advanced
interference management methods with a set
of technologies targeting different scenarios.
The purpose is to achieve efficient inter-cell
interference mitigation with acceptable complexity in an optimized manner. Advanced
interference management crosses multiple layers, such as physical layer (e.g., power control,
carrier change), MAC layer (e.g., signaling,
messages, resource management such as
resource reservation), network layer (e.g.,
security, SON server coordination), and other
higher layers (e.g. mobile station QoS requirement and provisioning). Some of these technologies are described below:
Resource Reservation and Blocking — A
CSG WFAP may become a strong source of
interference for non-member MSs which are
associated to a nearby macrocell. In this case,
the WFAP blocks a radio resource region (i.e. a
time/frequency partition of the radio frame)
exclusively for non-member MSs for communication with the macrocell BS. This approach may
reduce the capacity of the WFAP. However,
since the number of MSs served by a WFAP is typically expected to be low, this approach is suitable in many cases and does not severely degrade subscribers' service.
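The blocking idea can be sketched as leaving part of a simple time/frequency grid unused at the WFAP; the frame structure and partition size below are assumptions made for illustration only.

def build_wfap_allocation(num_subchannels, num_slots, reserved_slots):
    """Mark which resource units the WFAP may use. Slots in reserved_slots are
    left unused by the WFAP so that nearby non-member MSs can communicate with
    the macrocell BS in them without femtocell interference."""
    allocation = {}
    for slot in range(num_slots):
        for sub in range(num_subchannels):
            allocation[(slot, sub)] = slot not in reserved_slots
    return allocation

alloc = build_wfap_allocation(num_subchannels=4, num_slots=8, reserved_slots={6, 7})
wfap_share = sum(alloc.values()) / len(alloc)
print(f"WFAP keeps {wfap_share:.0%} of the frame")   # 75% in this example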
Power Control — A CSG WFAP adjusts its transmit power to reduce interference at non-member MSs. For example, the transmit power may be reduced to the level that still satisfies the minimum QoS requirements of its member MSs if the WFAP is strongly interfering with non-member MS(s). The power level may be restored to provide better QoS to its member MSs as soon as the non-member MS has left the coverage area of the WFAP. This approach may reduce the WFAP coverage or the subscribers' throughput. It can be suitable for indoor scenarios with high wall penetration losses between macrocells and WFAPs, for scenarios where no MSs are served by the WFAP at the cell edge, or where the subscribers' throughput requirements are not high.
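A minimal sketch of the power-control trade-off described above, with assumed power and interference figures (all values are illustrative):

def femto_tx_power_dbm(p_max_dbm, p_min_qos_dbm,
                       interference_at_victim_dbm, interference_cap_dbm):
    """Back off the WFAP transmit power until the interference seen by a nearby
    non-member MS drops below a cap, but never below the power assumed necessary
    for the minimum QoS of the WFAP's own member MSs."""
    if interference_at_victim_dbm <= interference_cap_dbm:
        return p_max_dbm                     # no strongly interfered victim: full power
    backoff_db = interference_at_victim_dbm - interference_cap_dbm
    return max(p_max_dbm - backoff_db, p_min_qos_dbm)

# The victim MS sees -70 dBm from the WFAP but should see at most -85 dBm.
print(femto_tx_power_dbm(p_max_dbm=20.0, p_min_qos_dbm=8.0,
                         interference_at_victim_dbm=-70.0,
                         interference_cap_dbm=-85.0))   # -> 8.0 (limited by member QoS)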
Coordinated Fractional Frequency Reuse — Both WFAPs and macrocells coordinate their frequency partitions and the associated power levels over those partitions. This approach can be suitable for interference management of user data channels, but it may not help control channel interference management. In addition, the cell may have reduced throughput due to resource restrictions.
Frequency Carrier Change — The WFAP can change to another frequency carrier with less interference if more than one frequency carrier is available. This approach requires the availability of multiple frequency carriers.
Spatial Coordinated Beamforming — If
beamforming is supported by the WFAP, the
WFAP and/or the macrocell can coordinate their
antenna precoding weights to avoid or mitigate
interference. This approach requires timely signaling between the WFAP and the macrocell. The coordination may need accurate cell synchronization, which can be challenging.
Femtocell-Macrocell Coordinated Handoff Scheme — A CSG WFAP can hand off some of its member MSs to a nearby macrocell so that the WFAP can adjust radio resources (e.g., by means of power control or radio resource reservation) to reduce interference to non-member MSs served by the macrocell.
Figure 4. Two-step interference management in case a CSG-closed WFAP generates high interference at a non-member MS (step 1: the macrocell MS, which is not a member of the CSG-closed WFAP and cannot connect to the macro BS due to the interference, signals its situation to the WFAP; step 2: the WFAP negotiates over the backhaul with the network, e.g. macro BSs and a SON server, which can coordinate cell configurations and parameters such as transmit power, carrier frequency, reserved resources, and WFAP access type, and performs interference mitigation).
The timing of the
resource adjustment can be adaptively set to
accommodate the QoS requirements from both
WFAP member MSs and non-member MSs.
This approach may bring an accounting disadvantage to the member MSs if communication with the WFAP is cheaper than with the macrocell, and it may weaken the WFAP's ability to offload traffic from the macrocell.
Femtocell Type Change Under Service
Agreement — If required, a CSG WFAP can
temporarily change its subscriber type (e.g., from
CSG-Closed to CSG-Open) if it strongly interferes with a non-member MS, such that the MS
can hand over to the now CSG-Open WFAP.
The subscriber type is restored as soon as the
non-member MS leaves the coverage area of the
WFAP. This approach requires that the owner
of the WFAP agrees that other users temporarily use the WFAP for data services.
One of the biggest problems for the operation of heterogeneous macrocell/WFAP deployments is the creation of coverage holes for
macrocell users by CSG-closed WFAPs. If a
mobile station is not a member of the subscriber
group of a CSG-closed WFAP, the received signal power is experienced as interference. This
may lead to service degradation and in the worst
case to connection loss — i.e. a coverage hole is
created. To solve this problem, IEEE 802.16m
defines a two-step solution, as shown in Fig. 4.
Step 1: After scanning, an MS detects that
the only BS with acceptable signal quality is a
CSG-closed WFAP where the MS is not listed as
member. Normally, a non-member MS should
not try to access the CSG-closed WFAP [3, 7].
However in the exceptional case of a coverage
hole generated by the CSG-closed WFAP, the
non-member MS can signal the coverage hole
128
Communications
IEEE
situation to the WFAP by means of a reserved
CDMA ranging code.
Step 2: The WFAP can notify the macrocell
and a network entity such as a SON server to
request coordinated interference mitigation. It should be noted that coordinated interference management means not only that the non-member MS served by the macrocell BS will get the desired QoS, but also that the WFAP tries to guarantee the desired QoS for its member MSs. Depending on
the scenario, interference mitigation approaches
such as resource reservation, power control,
FFR, or beam-forming can be applied.
The two-step approach can also be used in the general case in which an MS is connected to a macrocell: in step 1 the MS reports the interference to the macro BS, and in step 2 the macro BS coordinates with the interfering WFAP via the backhaul network before the interference mitigation approaches are applied.
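The two-step procedure can be summarized as a small decision flow; the message fields, the RSSI threshold, and the preference order among mitigation schemes below are assumptions for illustration, not the signaling defined in the standard.

def step1_ms_report(scan_results, csg_whitelist, min_rssi_dbm=-95.0):
    """Step 1: the MS detects that the only cell with acceptable signal quality
    is a CSG-closed WFAP it does not subscribe to (a coverage hole) and signals
    the situation, e.g. via a reserved CDMA ranging code."""
    acceptable = [c for c in scan_results if c["rssi_dbm"] > min_rssi_dbm]
    if (len(acceptable) == 1
            and acceptable[0]["type"] == "CSG-closed"
            and acceptable[0]["csgid"] not in csg_whitelist):
        return {"event": "coverage_hole", "to": acceptable[0]["bsid"]}
    return None

def step2_wfap_coordinate(report, available_schemes):
    """Step 2: the WFAP notifies the macrocell / SON server and picks a
    mitigation scheme; the preference order here is an illustrative assumption."""
    if report is None:
        return None
    for scheme in ("resource_reservation", "power_control", "ffr", "beamforming"):
        if scheme in available_schemes:
            return {"mitigation": scheme, "victim_report": report}
    return {"mitigation": "none", "victim_report": report}

scan = [{"bsid": "FEMTO-A", "type": "CSG-closed", "csgid": 7, "rssi_dbm": -60.0},
        {"bsid": "MACRO-1", "type": "Open", "csgid": None, "rssi_dbm": -110.0}]
print(step2_wfap_coordinate(step1_ms_report(scan, csg_whitelist={42}),
                            available_schemes={"power_control", "ffr"}))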
Note that IEEE 802.16m specifies different
techniques (see examples above) as enablers for
interference management. The implementation
and utilization of these techniques is not mandatory, so operators and vendors can choose which
set of options is most suitable for their concrete
deployment scenario. Further simulation studies may be needed to guide this choice.
WFAP SERVICE RELIABILITY
Since WFAP BSs are under physical control of
the customers, normal operation may be interrupted for various reasons. Typical examples are
loss of power supply or backhaul connectivity
problems. Also, operators may schedule maintenance times and network topology reacquisition
or interference mitigation procedures. However,
service continuity should be maintained as much
as possible in these cases. IEEE 802.16m introduces features for increased reliability and continuous service to the MS (Fig. 5) should such
scenarios arise.
As a basic rule, the WFAP will try to inform
the network and MSs in case of any service disruptions. On the air interface, this is done by
means of a periodically broadcasted message.
The message encodes the reason for the unavailable time, relevant system parameters like transmit power reduction and frequency allocation
index, and, if known beforehand, the duration of the air interface absence. Additionally, a list of recommended BSs for the MS to hand over to can be included. The message is
broadcasted until the WFAP disables the air
interface. This allows the MS to initiate a handover to a BS based on the recommended list or
to any previously cached neighbor BS list of its
preference. Alternatively, the WFAP can instruct
the MS to handover to other BSs before a scheduled unavailable time is due. For optimized network re-entry to a WFAP that becomes available
again, the WFAP may store medium access control (MAC) context information of the served
MSs (e.g. basic capabilities, security capabilities,
etc.).
If the WFAP recovers from a backhaul failure, power down, or reconfiguration, or if it regains some resources from interference coordination, it may inform the network or notify the
current serving BS of the MS through the backhaul network interface. Based on the cell types
of the current serving BS and the WFAP and
the associated mobility management policy, the
current serving BS may then initiate a handover
back to the WFAP, where the recovered WFAP
is prioritized.
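A sketch of how an MS might react to the periodic unavailability message is shown below; the message fields mirror those listed above, but their names and the selection logic are hypothetical.

def choose_fallback_bs(unavail_msg, cached_neighbors):
    """On receiving the WFAP unavailability broadcast, pick a handover target:
    first from the recommended list carried in the message, otherwise from a
    previously cached neighbor list, preferring the strongest cell."""
    recommended = unavail_msg.get("recommended_bs", [])
    if recommended:
        return recommended[0]            # list is assumed to be pre-prioritized
    if cached_neighbors:
        return max(cached_neighbors, key=lambda c: c["rssi_dbm"])["bsid"]
    return None

msg = {"reason": "backhaul_loss", "duration_s": None, "recommended_bs": ["MACRO-1"]}
print(choose_fallback_bs(msg, cached_neighbors=[{"bsid": "FEMTO-B", "rssi_dbm": -88.0}]))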
WFAP OPERATION IN LOW DUTY MODE
A novel optional operational mode, low duty
mode (LDM), was introduced into IEEE
802.16m in order to reduce interference and
energy consumption in femtocell deployments.
The principle of the LDM is to reduce air interface activity as much as possible by transmitting
on the air interface only if it is required. To this
end, default LDM patterns consisting of available intervals (AI) and unavailable intervals
(UAI) are defined, which enable a pattern of
activity and inactivity for the WFAP. The UAI
allows disabling or switching off of certain parts
of WFAP hardware components such as the
transmitter chain. Another possibility is to use
UAIs for scanning and measurement of the
radio environment in order to improve interference mitigation or for synchronization to the
macrocell network. During an AI, the WFAP is
available for any kind of transmission just as in
the normal operation state, and it is also guaranteed to be available for scanning by advanced mobile stations (AMSs).
The LDM is designed with two basic
paradigms: First, a WFAP may enter LDM only
if no active MS is attached. This rule
is established in order to avoid complex signaling
and possible QoS degradation at the user side.
Second, the impact on the operational complexity of the MS should be minimized in order to
keep implementation costs low.
Figure 5. Illustration of WFAP reliability design (the WFAP broadcasts a SON-ADV self-organized-network advertisement message, including the reason the WFAP is unavailable, reconfiguration information, etc.; the MS hands over to the macro BS while the WFAP reduces or reconfigures its air interface resources for interference management).

The Default LDM pattern is either pre-provisioned, unicasted during network entry, or broadcasted, such that MSs have the necessary information on when the WFAP is available, for example for requesting bandwidth. The
WFAP switches back to normal mode on explicit
request from the backhaul network or implicitly
by receiving any triggering of data activity from
the MSs.
An AI will be scheduled by the WFAP whenever there is an operational need for it. Therefore, the resulting AI pattern at the WFAP is
the superposition of the Default LDM pattern
and any additional AIs necessary for normal MS
operation. This is illustrated in Fig. 6. Note that
for interference mitigation, it is desirable to
reduce the transmitting time of the LDM as
much as possible, for example by aligning paging
and LDM Default Patterns.
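The resulting availability pattern is simply the union of the Default LDM available intervals and any extra AIs the WFAP schedules; the sketch below uses frame indices as the time unit, which is an assumption made for illustration.

def ldm_availability(num_frames, default_ai_period, default_ai_length, extra_ais):
    """Per-frame availability map: True during an available interval (AI),
    False during an unavailable interval (UAI). The Default LDM pattern repeats
    every default_ai_period frames; extra_ais are additional frames scheduled
    as available for normal MS operation (e.g. aligned with paging)."""
    pattern = {}
    for frame in range(num_frames):
        in_default_ai = (frame % default_ai_period) < default_ai_length
        pattern[frame] = in_default_ai or frame in extra_ais
    return pattern

pattern = ldm_availability(num_frames=16, default_ai_period=8,
                           default_ai_length=2, extra_ais={5, 13})
print([f for f, available in pattern.items() if available])   # AI frames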
SUMMARY
The femtocell concept is supported in Advanced
WiMAX for low-cost deployment, high throughput, and high quality of service in indoor scenarios. Through recent standardization activities in IEEE 802.16m and the WiMAX Forum, the technical details for the next-generation WiMAX femtocell design have been defined. Advanced
WiMAX introduces innovative solutions for femtocell support into the WiMAX network architecture and the WirelessMAN-Advanced Air
Interface. This article highlights the challenges
and design principles of Advanced WiMAX
Femtocells. The features and mechanisms that
solve the unique problems of deploying and
operating femtocell networks have been illustrated. These include network topology acquisition,
enhanced mobility management, coordinated
interference management, increased service reliability, and operation of low duty mode.
Some of the technologies to support femtocells in Advanced WiMAX may be further
researched and optimized, such as coordinated
interference management, operation of low duty
mode, and enhanced mobility management. System performance evaluation, modeling, and simulations may be further studied.
Figure 6. WFAP operation in low duty mode (the WFAP transmission pattern is the superposition of the Default LDM pattern and additional available intervals aligned with the MS paging cycle; AI: available interval, UAI: unavailable interval, PLI: paging listening interval, PUI: paging unavailable interval).
Furthermore, some topics are not included in the IEEE 802.16m air interface, such as support for downlink transmission power setting or power control of femtocells, and multicarrier support in femtocells. These topics may be included as study items in future standards releases. In May 2010 the study group on Hierarchical Networks was started and has been ongoing since; one of its aims is to extend the IEEE 802.16m air interface for multitier deployments. A formal task group on this topic is expected to be created.
REFERENCES
[1] S.-P. Yeh et al., "WiMAX Femtocells: A Perspective on Network Architecture, Capacity, and Coverage," IEEE Commun. Mag., vol. 46, no. 10, Oct. 2008.
[2] R. Y. Kim, J. Sam Kwak, and K. Etemad, "WiMAX Femtocell: Requirements, Challenges, and Solutions," IEEE Commun. Mag., vol. 47, no. 9, Sept. 2009.
[3] D. N. Knisely, T. Yoshizawa, and F. Favichia, "Standardization of Femtocells in 3GPP," IEEE Commun. Mag., vol. 47, no. 9, Sept. 2009.
[4] IEEE Std 802.16m-2011, "Part 16: Air Interface for Broadband Wireless Access Systems Amendment 3: Advanced Air Interface," May 2011.
[5] ITU-R, "IMT-ADV/8-E, Acknowledgement of Candidate Submission from 3GPP Proponent (3GPP Organization Partners of ARIB, ATIS, CCSA, ETSI, TTA and TTC) under Step 3 of the IMT-Advanced Process (3GPP Technology)," Oct. 2009.
[6] IEEE 802.16-2009, "IEEE 802 Part 16: Air Interface for Broadband Wireless Access Systems."
[7] IEEE, "IEEE 802.16m-07/002r10, IEEE 802.16m System Requirements," Jan. 2010.
[8] WiMAX Forum, "DRAFT-T33-118-R016v01-C_FemtoCore," May 2010.
[9] WiMAX Forum, "DRAFT-T33-119-R016v01-D_Femtomgmt," May 2010.
[10] WiMAX Forum, "DRAFT-T33-120-R016v01-C_SON," June 2010.
BIOGRAPHIES
YING LI ([email protected]) received her Ph.D. degree in electrical engineering from Princeton University, New Jersey, in October 2008. She received her B.E. degree (with honors) in electrical engineering from Xi'an Jiaotong University, China. Since October 2008 she has been with Samsung Telecommunications America, Dallas, Texas, where she is involved in the study, development, and standardization of heterogeneous networks for next-generation wireless communications. She is actively involved in IEEE 802.16m standardization, especially for femtocell support, in which she has chaired ad hoc sessions in the IEEE 802.16m technical working group. Her current research interests include wireless networks, heterogeneous networks, resource allocation, cross-layer design, and optimization.
ANDREAS MAEDER ([email protected]) received his
diploma and doctoral degree in computer science from the
Department of Distributed Systems at the University of
Würzburg, Germany, in 2003 and 2008, respectively. Since
2008 he has been affiliated as a research scientist with
NEC’s Network Laboratories Europe in the Mobile and
Wireless Network Group. His current responsibilities include
IEEE 802.16m standardization with a special focus on femtocells, IMT-Advanced evaluation, and research on radio
resource algorithms. He was also involved in the development of femtocell solutions for 3GPP LTE systems. His
research interests focus on radio resource management
schemes for IMT-Advanced systems, femtocells, and M2M
communications.
LINGHANG FAN ([email protected]) is a consultant
with NEC Laboratories Europe. He received his B.Eng. in
automatic control from Southeast University, China, and
his M.Sc. and Ph.D. in telecommunications from the University of Bradford, United Kingdom. In 2003 he joined the
University of Surrey as a postdoc research fellow and
worked on the EU projects STRIKE, Ambient Networks,
MAESTRO, SatNEx, SATSIX, and EC-GIN. His research interests include wireless/mobile communications and next-generation Internet. From 1998 to 2000 he was a researcher at
the University of Bradford and worked on the EU projects
SINUS and SUMO. He has published more than 50 papers
in international journals and conferences, and edited a
book.
ANSHUMAN NIGAM ([email protected]) received his Bachelor's degree in electrical engineering from the Indian Institute of Technology at Kanpur in 2000. Since 2000 he has been affiliated with Samsung India. He has worked on the design and development of the access part of GPRS, UMTS, LTE, and WiMAX systems. His current responsibilities include IEEE 802.16m standardization, where he is actively participating in the femtocell and control plane areas. He is also actively involved in design and development of IEEE 802.16m systems. His research interests are in femtocells and cooperative communications in wireless mobile networks.
JOEY CHOU ([email protected]) received M.S. and B.S. degrees in electrical engineering from Georgia Institute of Technology and the National Taiwan Institute of Technology, respectively. Since 2003 he has been working in the creation and evangelization of WiMAX technology in IEEE 802.16 and WiMAX Forum. He worked on the product development of the first WiMAX chipset, Rosedale, at Intel.
He played several key roles in the WiMAX standardization
in both IEEE 802.16 and WiMAX Forum, including chief
technical editor and team lead. He was actively involved in
standard developments in the ATM Forum and DSL Forum.
Prior to joining Intel, he worked at Siemens, AT&T, and
Motorola, where he worked on the Iridium and Teledesic
projects.
TOPICS IN STANDARDS
Smart Utility Networks in
TV White Space
Chin-Sean Sum, Hiroshi Harada, Fumihide Kojima, Zhou Lan, and Ryuhei Funada, NICT
ABSTRACT
This article presents an overview of the background, technology, regulation, and standardization in the course of deploying smart utility
networks (SUNs) in TV white space (TVWS)
communications, two wireless technologies currently receiving overwhelming interest in the
wireless industry and academia. They are independent of and uncorrelated with each other,
but share the same mission: conserving resources
and increasing efficiency. This article reviews the
systems as separate technologies, and then combines them to propose a hybrid solution that
draws out their respective advantages. The first
part focuses on SUNs and describes the SUN
usage model with typical application requirements and practical examples, followed by the
latest developments in standardization initiatives
with emphasis on the currently active IEEE
802.15.4g and 802.11ah groups. The second part
discusses TVWS, studying and summarizing the
regulations governing its usage, and then reports
on the standardization bodies’ responses to these
regulations with a focus on IEEE 802.11af,
802.19.1, and 802.22. Finally, the third part
amalgamates the SUN usage model with TVWS
regulations and deployment scenarios, providing
relationship mapping between the SUN components and regulation-compliant TVWS devices.
Further discussions concentrate on the opportunities and challenges along the path of realizing
a practical SUN in the TVWS spectrum under
the current technical and regulatory conditions.
Several recommendations are made from both
regulatory and technical standpoints to further
increase utilization of SUNs in TVWS.
INTRODUCTION
It is the best of times, it is the worst of times.
Scarce radio resources have arrived at a bottleneck wherein communication systems must compete and struggle in order to deliver adequate
performance. Meanwhile, demand for “ecofriendly” radios has also opened a new paradigm
and direction for radio designers to mobilize
their efforts.
Communication systems that are able to conserve or reuse the already scarce radio resources
are currently leading the way in next-generation
radio design. While minimizing potential impacts
on the congested radio spectrum, next-generation communication systems are expected to
help manufacturers and operators achieve longterm profitability. Mobilizing efforts to spur
progress of these eco-friendly systems has recently become not only the mission of communication engineers, but also that of every resident
feeding on the Earth’s resources. In line with
these green-related efforts, the main intent here
is to combine two eco-friendly systems to produce a hybrid technology. We hope that this
innovation can bring more effective use of radio
resources while allowing a broader market to be
explored. This article’s main objective is to provide a high-level discussion to explore the potential of these technologies in changing our
lifestyles, based on the current technology, market and regulatory status.
The first eco-friendly system of interest is the
smart utility network (SUN). A SUN is a telemetry system closely related to the smart grid
framework, which targets designing a modernized electricity network as a way of addressing
energy independence, global warming, and emergency response. A SUN is a ubiquitous network
that facilitates efficient management of utilities
such as electricity, water, natural gas, and
sewage. Effective management of utility services
is expected to encourage energy conservation
and reduce resource wastage.
The second system is TV white space
(TVWS) communication, which occupies the frequencies allocated to TV broadcasting services
but not used locally. In other words, it is an
enabling technology to reuse the spectrum not
used by primary licensed users. With the spectrum below 10 GHz becoming more congested,
this is a timely opportunity for secondary wireless systems to access the surplus TVWS.
This article promotes combining both technologies into a hybrid eco-friendly system. By
deploying the SUN telemetry system in TVWS
bands, we are able to obtain both the energy
conservation of a SUN and the spectrum
reusability of TVWS, thus creating a hybrid ecofriendly wireless technology. Both systems are
now popular topics in industry and academia,
and it is only a matter of time until a hybrid
SUN-in-TVWS wireless system becomes the
focus of attention. It is also notable that while
amalgamating SUN with TVWS brings encouraging opportunities, there are several accompanying challenges.

Figure 1. Usage model of smart utility networks (electric, gas, and water meters connect over SUN links; data collectors form a mesh network connected to the utility providers via wireless or other wired solutions through an external network, and a mobile data collector can be deployed as an alternative in case of an emergency or a link outage of fixed data collectors).
To address the issues at hand, the article is
organized as follows. We discuss the usage model
of the SUN and the international standardization initiatives working on related topics. We
also discuss the regulations, deployment scenarios, and corresponding standardization initiatives
of TVWS communications. Then we analyze the
potential of SUN occupying TVWS, with weighted focus on the opportunities and challenges
therein. Finally, we provide several recommendations on plowing through the challenges to
harvest the advantages of a hybrid SUN-inTVWS system.
SMART UTILITY NETWORKS
Smart utility networks (SUNs) are next-generation utility networks allowing digital technologies
to provide efficient control and management of
utilities such as electricity, water, natural gas,
and sewage. One key technology of SUNs is the
advanced metering infrastructure (AMI), which
facilitates monitoring, command, and control for
service providers at one end, and measurement,
data collection, and analysis for consumers at
the other. AMI covers a wide range of technologies including communication protocols, hardware, software, data management systems, and
customer-related systems. Communication technology is the essential element for enabling for-
mation of networks where control messages and
metering data can be exchanged. This section
discusses the potential usage scenario of SUNs,
and gives an overview of recent developments in
international standardization initiatives.
SUN USAGE MODEL
Conventional utility control and metering are
typically performed by manual or semi-manual
operations with many limitations. For utility service providers, the time is right to conduct a
paradigm shift to a more intelligent networking
system to increase service efficiency and costeffectiveness. Radio frequency (RF)-based mesh
networking systems that improve the quality of
conventional utility networks provide a good
usage model with huge market potential.
To realize such an efficient and inexpensive
utility network, several key application properties are needed. First, the network must be ubiquitous and far-reaching. Since there are
potentially millions of nodes to be deployed,
with multiple nodes per customer, it is crucial to
have “everyone connected to everyone” to
ensure continuous connectivity via self-healing
and self-forming networks. Second, the network must be responsive to system failure. In emergencies such as service outages, real-time response
is a critical factor for recovery. Third, the network must be scalable. Given the diverse population density from metropolitan areas to sparse
rural towns, the network must be able to support
a range of pervasive traffic loads. Lastly, the network must be cost effective. Network components should be easily deployed, with low complexity and a long life span.
Regulatory domain: frequency band
Worldwide: 2400–2483.5 MHz
Japan: 950–958 MHz
United States: 902–928 MHz
Europe: 863–870 MHz
China: 470–510 MHz
Table 1. Major allocated frequency bands for SUN.
Figure 1 illustrates the RF-based utility mesh
network known as the SUN. In the usage model,
meters of different utilities, such as in a residence, are connected via the SUN link. Each
house is connected to another, also via the SUN
link. A collective number of households form a
service area that is covered by a data collector.
Data collectors form a mesh network and are
connected to the utility providers via wireless or
other wired solutions. In the case of an emergency or link outage of fixed data collectors, a
mobile data collector may be deployed as an
alternative.
Figure 1 gives an overview of the primary
usage case for the SUN. The usage model in the
figure should also support a wide range of applications. Apart from the most commonly known,
automatic meter reading (AMR), these applications include remote service connect/disconnect,
outage detection, reliability monitoring, and
quality monitoring.
DEVELOPMENT IN
STANDARDIZATION INITIATIVES
The usage model mentioned earlier is seen as a
good scenario for next-generation utility networks. This section presents recent developments in some of the latest standardization
initiatives targeting specifications of these networks.
IEEE 802.15.4g — IEEE 802.15.4 is a set of
standards specifying communication systems in
wireless personal area networks (WPANs). The
standards are collectively managed by a mother
entity, the 802.15 Working Group (WG). The
WG distributes specific task areas to smaller
subgroups, called Task Groups (TGs), and each
TG is assigned to standardize a particular field
in the WPAN system. For the next-generation
utility system, IEEE 802.15.4g, better known as
TG4g, is responsible for developing the SUN
standard.
TG4g specifies the necessary SUN-related
physical (PHY) layer design amendments to
legacy IEEE 802.15.4 [1], the base standard for
low-data-rate WPANs. Medium access control
(MAC) amendments on legacy IEEE 802.15.4
[1], however, are handled by a separate TG
134
Communications
IEEE
A
BEMaGS
F
called IEEE 802.15.4e, or TG4e. The MAC
amendments specified by TG4e provide the
MAC to be applied by TG4g SUN operations.
The TG4g scope is to specify a standard
that supports SUN operability in regionally
available unlicensed frequency bands, as shown
in Table 1. The table shows just a fraction of the
complete list of frequency bands listed in TG4g.
Targeted data rates range from 40 to 1000 kb/s
corresponding to different deployment scenarios
and traffic conditions, principally in outdoor
communications. To align with the environmental conditions encountered in smart metering
deployments, SUN is required to achieve an
optimal energy-efficient link margin. It is also
required to support up to three co-located
orthogonal networks. A typical deployment scenario is connectivity to at least 1,000 direct
neighbors in a dense urban network.
In TG4g PHY layer design, three alternative
PHY layer specifications are proposed: multirate frequency shift keying (MR-FSK) PHY,
multirate orthogonal frequency-division multiplexing (MR-OFDM) PHY, and multirate offset
quadrature phase shift keying (MR-O-QPSK)
PHY. The multiple PHY designs are for tackling
different system demands in respective market
segments. The PHY frame size is set to be a
minimum of 1500 octets. To promote coexistence, a multi-PHY management (MPM) scheme
is proposed to bridge the three PHYs using a
common signaling mode (CSM).
For MAC layer design, TG4g principally
reuses the MAC protocols in [1], which employs
a PAN maintained by the PAN coordinator
(PANC). The PANC manages and controls the
time and spectrum resources to be shared among
the devices within the PAN. A device capable of
starting its own PAN is allowed to do so within
an existing PAN, thus enabling a hierarchical
cluster tree type of network. Most of the MAC
protocols and functionalities in [1] are preserved
in the MAC design for SUN, with several minor
changes related to multi-PHY coexistence, frequency diversity, and other PHY-MAC interfaces.
Referring to Fig. 1, the elements in the SUN
usage model can be mapped individually to the
components defined in TG4g. Correspondingly,
the data collectors and electric/gas/water meters in Fig. 1 can be mapped in TG4g to PANCs and normal devices, respectively. The mobile data collector may be either a PANC or a device.
IEEE 802.11ah — IEEE 802.11ah (TGah) [2] is
a TG in the 802.11 wireless local area network
(WLAN) WG. Its scope is enhancing MAC and
PHY designs to operate in the license-exempt
bands below 1 GHz for smart grid and smart
utility communications. Most of the frequency
bands listed in Table 1 are also covered by TGah.
TGah is currently in the process of collecting
system design proposals.
TV WHITE SPACE COMMUNICATIONS
In line with SUN activities, TVWS is another
area receiving overwhelming attention. TVWS is
the unused TV channels in the very high frequency (VHF) and ultra high frequency (UHF) regions licensed by TV broadcasters and wireless microphones.
Geolocation awareness: fixed device, required (accuracy ±50 m); Mode I (client), not required; Mode II (independent), required (accuracy ±50 m); sensing-only, not required.
Geolocation re-establishment: fixed device, not required; Mode I, not required; Mode II, required (once/min); sensing-only, not required.
Database access: fixed device, required (once/day); Mode I, not required; Mode II, required (upon location change); sensing-only, not required.
Available channels: fixed device, 2–51 (except 3, 4, 36–38); personal/portable devices, 21–51 (except 36–38).
Power (EIRP): fixed device, 4 W; Mode I, 100 mW; Mode II, 100 mW; sensing-only, 50 mW.
Spectrum sensing: fixed device, not required; Mode I, not required; Mode II, not required; sensing-only, required.
Table 2. Summary of FCC governing regulations.
Secondary wireless services such as
WLAN and WPAN were previously not allowed
to use these bands, but recently the Federal
Communications Commission (FCC) granted
usage of TVWS with several conditions and
restrictions [3, 4]. The announcement elated the
wireless community, and this section presents
these positive updates in regulations and standardization activities.
GOVERNING REGULATIONS
The U.S. FCC issued a report and order (R&O)
[3] in November 2008, then later in September
2010 [4], outlining the governing regulations for
unlicensed usage of TVWS. The regulations for
unlicensed TVWS devices, also known as TV
band devices (TVBDs), to share TVWS with
incumbents such as TV and wireless microphones are summarized in Table 2. Many of the
following regulations are closely related to the
context of this article.
Device Classifications — There are two classes
of TVBDs — fixed and personal/portable devices
(hereinafter, portable devices). Fixed devices
operate at a fixed location with a high-power
outdoor antenna. Portable devices operate at
lower power and could take the form of a
WLAN access point or WPAN module in a
handheld device. Portable devices are further
divided into Modes I and II. Mode I devices are
client portable devices controlled by a fixed
device or Mode II portable device. Mode II
devices are independent portable devices with
the ability to access available channels.
Transmit Power — The FCC has imposed
strict rules on the allowable transmit power of
TVBDs. Fixed devices may transmit up to a 4 W
equivalent of effective isotropic radiated power
(EIRP), with 1 W output power and a 6 dBi gain
antenna. Portable devices are allowed to transmit up to a 100 mW equivalent of EIRP, with no
antenna gain, except that when operating in a
channel adjacent to a licensed user and within
the protected area of that service, the allowable
transmit power should be limited to 40 mW.
Additionally, sensing-only portable devices are
allowed to transmit up to 50 mW.
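The power limits above can be captured compactly as a lookup; the class labels are just identifiers used here, and the assumption that the 40 mW adjacent-channel cap applies to all portable classes follows the description above.

# EIRP limits in watts for each device class, per the FCC rules summarized above.
EIRP_LIMIT_W = {"fixed": 4.0, "mode_i": 0.1, "mode_ii": 0.1, "sensing_only": 0.05}

def allowed_eirp_w(device_class, adjacent_to_licensed=False):
    """Portable devices on a channel adjacent to a licensed user's protected
    service are limited to 40 mW (assumed here to apply to all portable classes);
    fixed devices keep the 4 W limit."""
    limit = EIRP_LIMIT_W[device_class]
    if adjacent_to_licensed and device_class != "fixed":
        limit = min(limit, 0.04)
    return limit

print(allowed_eirp_w("mode_ii", adjacent_to_licensed=True))   # 0.04 W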
Geolocation and Database Access Requirements — TVBD geolocation awareness is an
essential capability required by the FCC regulations. Fixed devices should be professionally
installed with geographic coordinates accurate to
±50m. Mode II devices, however, should incorporate a geolocation capability (e.g., via a Global
Positioning System [GPS] module) to determine
their locations and should re-establish location
at least once every 60 s. In addition to geolocation awareness, fixed and Mode II devices are
required to access the TV band database over
the Internet to determine the locally available
list of TV channels. Fixed devices should access
the database at least once a day, whereas Mode
II devices should do so if they change location
during operation by more than 100 m from
where they last accessed the database, or each
time they are activated from a power-off condition. Mode I devices should be allowed to operate only upon receiving available TV channels
from a fixed or Mode II device.
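The geolocation and database-access rules for a Mode II device can be sketched as a small compliance check; the 60 s and 100 m constants come from the rules above, while the function and field names are illustrative.

import math

RELOCATE_PERIOD_S = 60         # re-establish location at least once every 60 s
DB_MOVE_THRESHOLD_M = 100.0    # re-query the database after moving more than 100 m

def mode_ii_obligations(now_s, state):
    """state holds 'last_fix_s', 'last_db_query_s', 'last_db_location' and
    'location' (both locations as (x, y) in meters). Returns a pair of flags:
    (needs a new location fix, needs a new database query)."""
    needs_fix = (now_s - state["last_fix_s"]) >= RELOCATE_PERIOD_S
    moved_m = math.dist(state["location"], state["last_db_location"])
    needs_db = state["last_db_query_s"] is None or moved_m > DB_MOVE_THRESHOLD_M
    return needs_fix, needs_db

state = {"last_fix_s": 0.0, "last_db_query_s": 0.0,
         "last_db_location": (0.0, 0.0), "location": (150.0, 0.0)}
print(mode_ii_obligations(now_s=75.0, state=state))   # (True, True)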
Other Optional Requirements — Spectrum
sensing was a mandatory requirement in [3] but
was relaxed in [4]. It is currently an optional
function with the recommended detection
threshold down to –114 dBm. Spectrum sensing
may still be a means of determining available
TV channels for sensing-only devices.
DEVELOPMENT IN
STANDARDIZATION INITIATIVES
Several standardization initiatives were formed
as a response from the wireless community to
the opening of TVWS.
IEEE 802.11af — In September 2009, a TVWS
Study Group (SG) was formed by the IEEE
802.11 WG to investigate the possibility of a
WLAN TVWS standard. The TVWS SG held
two meetings in September and November 2009,
and produced a document known as the Project
Authorization Request (PAR). This was then
reviewed and approved by the IEEE Executive
Committee (EC) in November 2009. As a result
of the approval, the TGaf was officially formed
in January 2010, with a scope of defining modifications in PHY and MAC layer designs with reference to the WLAN legacy standard IEEE 802.11 revision 2007 [5] to enable communications in TVWS.
Figure 2. Deployment scenario of TVBDs (a fixed device, a Mode II independent personal/portable device, Mode I client personal/portable devices, and sensing-only devices).
IEEE 802.22 — IEEE 802.22 is a draft standard
specifying wireless regional area network
(WRAN) communication systems operating in
TVWS. The formation of the IEEE 802.22 WG
in October 2004 was in response to the Notice of
Proposed Rule Making (NPRM) issued by the
FCC in May 2004 [6]. Its scope is mainly specifying the enabling policies and procedures for
WRAN systems to share the spectrum resources
in TVWS by employing cognitive radio techniques.
IEEE 802.19.1 — IEEE 802.19 is the Wireless
Coexistence Technical Advisory Group (TAG)
in IEEE 802. The TAG deals with coexistence of
the many unlicensed wireless networks in the
IEEE 802 family. In March 2009, the 802.19
TAG received an assignment from the IEEE EC
Study Group (ECSG) to develop coexistence
scenarios and possible coexistence metrics in
TVWS, and an SG within the 802.19 TAG was
initiated as a result. In September 2009, the
802.19 SG began work on a new PAR to form a
new IEEE 802.19.1 TG focusing on coexistence
methods in TVWS.
DEPLOYMENT SCENARIOS
Based on the regulations outlined earlier, the
deployment scenario of the TVWS network
can be as illustrated in Fig. 2, which shows the
different classes of TVBDs — fixed and
portable devices. Fixed devices are allowed to
transmit up to 4 W. They can be connected to
a Mode I client device either via direct connection or through a Mode II independent
device. Mode II devices are allowed to transmit up to 100 mW. Mode I devices should be
controlled by a fixed or Mode II device and
are allowed to transmit up to 100 mW. There
is also a sensing-only device, which may transmit up to 50 mW.
SMART UTILITY NETWORKS IN TV WHITE SPACE
Despite the fact that SUN and TVWS are independent and uncorrelated technologies, they can
be combined to form a hybrid eco-friendly technology that draws out their individual benefits.
SUN operates primarily in unlicensed bands, as
shown in Table 1, which indicates that traffic
congestion and interference can significantly
degrade performance since sharing the spectrum
with other wireless systems is unavoidable.
Therefore, by deploying in TVWS, more
spectrum resources can be made available to the
SUN system, thus reducing the degrading impact
of congestion and interference.
Additionally, the range of the SUN system
can also be extended through TVWS with longer
reach and higher penetration capabilities compared to spectrum bands in the gigahertz order.
In order to utilize the unused spectrum in
TVWS, SUN must comply with governing rules
and communication protocols specific to accessing it. To achieve this, SUN and TVWS must
display a certain level of homogeneity in terms
of deployment scenario and system behavior.
The following sections discuss the homogeneity
between SUN components and devices in a
TVWS communication system, as well as opportunities and challenges in deploying SUN in
TVWS.
SUN DEVICES TO TVBDS
For SUN to be able to utilize the spectrum in
TVWS, the SUN deployment scenario must be
capable of matching the TVWS communication
deployment scenario. In other words, SUN components must be mapped into the TVWS communication system architecture. This section
covers the relationship between SUN components and TVWS devices (i.e., TVBDs). Referring to Figs. 1 and 2, the devices in the SUN
usage model may be mapped into different classes of TVBDs, as shown in Table 3. In Fig. 1, the
utility providers have high-power base stations at
headquarters and control centers for establishing
metropolitan area networks. These base stations
can be viewed as fixed TVBDs, as shown in Fig.
2. These fixed base stations have outdoor antennas, geolocation awareness capability, and the
ability to access the TV band database for incumbent protection contours. The fixed base stations
are connected to data collectors, each covering a
smaller area, which may be either access points
(APs as in IEEE 802.11) or PANCs (as in IEEE
802.15.4 or 802.15.4g). Alternatively, simplified
models of data collectors can also be deployed
in individual households. The data collectors can
be viewed as Mode II independent TVBDs from
the perspective of TVWS regulations. The data
collectors are connected to the customer’s
premises, each with multiple smart meters of different utility services. The smart meters are
equivalent to Mode I client TVBDs. In times of
link outage, mobile data collectors may be
deployed in place of fixed data collectors. A
mobile data collector can be viewed as a sensing-only device with very low power and requiring
no geolocation awareness capability.
SUN component: TVBD class; sensing capability; geolocation awareness; transmit power
Utility provider base station: fixed device; not required; required; 4 W
Data collector: Mode II independent device; not required; required; 100 mW
Electric/gas/water meter: Mode I client device; not required; not required; 100 mW
Mobile data collector: sensing-only device; required; not required; 50 mW
Table 3. Mapping from SUN components to TVBD classes.
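The mapping in Table 3 can be expressed directly as a lookup, for example as in the sketch below; the labels mirror the table and are not normative identifiers.

# SUN component -> (TVBD class, sensing required, geolocation required, max EIRP in W)
SUN_TO_TVBD = {
    "utility_provider_base_station": ("fixed", False, True, 4.0),
    "data_collector":                ("mode_ii_independent", False, True, 0.1),
    "electric_gas_water_meter":      ("mode_i_client", False, False, 0.1),
    "mobile_data_collector":         ("sensing_only", True, False, 0.05),
}

def tvbd_profile(sun_component):
    tvbd_class, sensing, geolocation, eirp_w = SUN_TO_TVBD[sun_component]
    return {"tvbd_class": tvbd_class, "sensing_required": sensing,
            "geolocation_required": geolocation, "max_eirp_w": eirp_w}

print(tvbd_profile("data_collector"))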
POTENTIAL ADVANTAGES
The previous section shows that the SUN components match well with the device classes in
TVWS regulations. This means that the TVBD
network may effectively be deployed in the same
network topology as the SUN usage model, indicating that the TVWS spectrum can be potentially utilized by the SUN system. Table 3 shows
a one-to-one mapping scheme between SUN
devices and TVBDs. This section provides some
further analysis on the opportunities when the
SUN system is implemented occupying the
TVWS spectrum.
Fundamentally, availability of the TVWS provides additional spectrum resources to SUN systems allocated in the current unlicensed bands.
Longer range and higher penetration capabilities
are the primary advantages offered by
VHF/UHF signals. Compared to popular wireless
unlicensed bands such as 2.4 and 5 GHz,
VHF/UHF bands are significantly more suitable
for applications intended for large area coverage
with obstacles. For SUN applications, it is essential for the radio sphere of influence to cover a
large and population-density-diverse area, from
compact metropolitan areas filled with tall buildings to rural areas characterized by sparse distance between homes, hills, and heavy foliage.
Existing infrastructure such as telephony and
powerline communications has also been found
incapable of meeting the SUN’s needs. In this
sense, TVWS is a good candidate for extending
the SUN operating range. By employing TVWS,
a relatively larger area can be covered by a given
node, thereby reducing the amount of required
infrastructure in a specific area. In other words,
installation and maintenance costs can be drastically reduced. From this perspective, TVWS is a
suitable candidate for accessing rural areas.
TVWS transmission’s higher penetration characteristic also indicates that for indoor connectivity
through walls, the number of APs or PANCs can
be effectively reduced. From this perspective,
TVWS is also advantageous in inner city apartments and offices.
Global compatibility of SUN devices is another opportunity offered by taking advantage of
TVWS in SUN applications. Currently, different
regulatory domains specify respective spectrum
bands for these applications. From Table 1, it is
clear that most of these allocated bands do not
overlap, indicating that the devices must be
designed separately to accommodate respective
operating bands across the globe. Also, in many
realistic scenarios, each regulatory domain will
have its own separate market segment, thus
reducing the room for global compatibility. As
an example, SUN component manufacturers will
have to build separate systems for U.S. devices
operating in the 902–928 MHz band and European ones operating in the 863–870 MHz band.
By opening TVWS for SUN devices, manufacturers can concentrate on designing devices that
commonly operate in TVWS within the VHF/
UHF bands. This will give a more globalized
instead of segmented market, thereby improving
compatibility among devices from multiple vendors. Apart from that, globalized compatibility
of devices also increases competition among vendors and stimulates healthy growth of the SUN
industry as a whole.
RISKS: REGULATORY AND TECHNICAL
There are also challenges along the path to
incorporating TVWS for SUN applications.
Among these, the regulatory requirements for
TVWS usage are the major concerns for the
SUN usage model. As described earlier, Mode II
devices are required to have geolocation awareness capability with accuracy of ±50 m. Location
must also be re-established every 60 s. This
requirement implies that in every Mode II device
there must be a geolocation device such as a
GPS module. Although geolocation can be effectively determined via GPS in outdoor operations, accuracy is reduced significantly for indoor
operations. Mode II devices such as the SUN
data collector may be installed outdoors or
indoors depending on factors such as physical
location and cost.
The geolocation awareness capability requirement thus reduces the flexibility of SUN data
collector deployment. Ultimately, since the availability of TV channels does not change in a matter of seconds, the requirement may be imposing
an overly strict burden on the SUN system as a
whole.
The issue of license exemption in the SUN
frequency bands is an emerging topic. For SUN
applications, service reliability is of the utmost
importance. This is reasonable since the customer pays for utility services, which makes
“best effort connectivity” in conventional secondary wireless services less attractive. In unlicensed bands such as TVWS, reliability is
therefore a fundamental concern. Unlicensed
operation suggests that a SUN network in TVWS
must cease all operations upon discovering the
existence of an incoming incumbent service. All
these scenarios paint an undesirable picture for
utility services. While diversity algorithms such
as dynamic frequency switching and multichan-
nel utilization may provide solutions to the problem, the “secondary” characteristic of TVWS is
the fundamental issue. Currently, discussions on
licensed utility bands are becoming popular in
groups such as IEEE 802.15.4g, indicating that
utility providers are eager to find solutions that
achieve more reliable services. However, it
should be noted that this is a typical but not
unique problem in TVWS since most frequency
bands allocated to SUN currently share the same
unlicensed characteristic.
From a technical standpoint, the main challenge is incorporating two separate technologies
with respective characteristics into a hybrid system. SUN technology is a low-power-consumption, low-data-rate, and large-scale network with
potentially millions of nodes, whereas TVWS
technology is a spectrum accessing technology
requiring external network connectivity for
authorization. Each technology has a significant
level of behavior and demands not shared by the
other. For example, a SUN data collector may
need to be deployed as a battery-powered low-duty-cycle indoor device, but a TVWS Mode II
device requires a GPS module effective only in
an outdoor environment and is mandated to frequently power up to register its location. In this
case there is a level of incompatibility between
the two that needs to be addressed in the process of combining the two technologies. The former is under development by the TG4g, while
the latter is under the watch of the TGaf. Efforts
to merge the two technologies are expected to
be challenging considering that TG4g and TGaf
are going in separate and orthogonal directions.
TOWARD THE FUTURE OF
SUN IN TVWS
The future of SUN and TVWS technologies is
encouraging, and there is little limit to their
potential. Yet several crucial elements need to
be addressed in order for SUN to be able to
fully utilize the advantages of TVWS.
From a regulation standpoint, two recommendations are provided. First, requirements
specified by regulators for occupying TVWS
should be relaxed. Specifically, it is vitally important to relax the geolocation awareness requirement for TVBDs intended to support simple and
low-cost design (e.g., SUN wireless household
data collectors). For SUN applications, this is
the bottleneck that dictates AMI cost effectiveness and technical feasibility. Second, the TVWS
licensing issue should be considered. Realizing a
licensed SUN band may still be premature, but
as an alternative, TVWS communications may
consider importing the idea of “light licensing,”
where a time-location-dependent license is
granted to an “enabling TVBD,” which possesses the authority to enable communications of a
client or dependent devices without having
database access performed locally. The light
licensing concept is specified in the 3.6 GHz
band long-range WLAN in the United States [7].
On the other hand, from a technical standpoint, two recommendations are provided. First,
the differences in respective system demands
must be aligned. For example, the conflict
between low power consumption in SUN device
requirements and a frequent power-up mode for
location re-establishment in TVWS Mode II
device requirements can be solved by replacing
household indoor data collectors with power-source-connected outdoor Mode II data collectors capable of covering an entire neighborhood.
Another example is aligning SUN channels for
optimized occupancy in TVWS. SUN communication channels are typically several tens to hundreds of kilohertz, while TV channels may span
across 6–8 MHz. With careful consideration,
multiple SUN channels can be designed to fit
into a TV channel to increase spectrum usage
efficiency. In standardization activities, efforts
on aligning the technologies should take place
separately in respective TGs with mutual understanding and collaboration. Second, instead of
merging two separate technologies, another recommended effort is a new system design that
includes elements from both. This method
reduces the amount of modifications to existing
specifications in the ongoing TGs. The 802.15
WG recently took the first step in exploring
opportunities in applications of smart grids and
smart utilities by occupying TVWS. An SG has
been formed in the WG to investigate this topic.
A new TG is being formed to specify a hybrid
system capable of extracting advantages from
both SUN and TVWS technologies.
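As a rough back-of-the-envelope illustration of the channel-packing arithmetic mentioned above, the short Python sketch below counts how many SUN channels of an assumed width fit into one TV channel; the 200 kHz SUN channel width and the 100 kHz edge guard bands are illustrative assumptions, not values taken from IEEE 802.15.4g or any regulatory document.

# Illustrative only: the channel widths and guard bands below are assumed,
# not taken from IEEE 802.15.4g or any regulatory document.
def sun_channels_per_tv_channel(tv_channel_khz=6000, sun_channel_khz=200,
                                edge_guard_khz=100):
    """Count how many SUN channels fit into one TV channel."""
    usable_khz = tv_channel_khz - 2 * edge_guard_khz
    return usable_khz // sun_channel_khz

# A 6 MHz TV channel would then accommodate 29 such SUN channels.
print(sun_channels_per_tv_channel())

Even under these conservative assumptions, a single TV channel accommodates on the order of 30 SUN channels, which is the spectrum-usage gain that the alignment effort aims at.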
CONCLUSION
This article presents the combination of two
emerging eco-friendly communication systems —
SUN and TVWS. The hybrid system could be
capable of conserving energy and simultaneously
reusing spectrum resources. The relationship
between the characteristics of the two systems, current technology trends, and the opportunities and challenges in realizing the hybrid system has been discussed. In conclusion, while encouraging advantages can be obtained by combining SUN and TVWS technologies, considerable effort is required to make that combination a reality.
REFERENCES
[1] IEEE Std. 802.15.4, “Wireless MAC and PHY Specifications for Low-Rate WPANs,” 8 Sept., 2006.
[2] IEEE, "Sub 1 GHz license-exempt PAR and 5C," https://mentor.ieee.org/802.11/dcn/10/11-10-0001-13-0wng-900mhz-par-and-5c.doc
[3] FCC, Second Report and Order and Memorandum Opinion and Order: In the Matter of Unlicensed Operation
in the TV Broadcast Bands, Doc. 08-260, Nov. 14,
2008.
[4] FCC, Second Memorandum Opinion and Order, Doc.
10-174, Sept. 23, 2010.
[5] IEEE Standard 802.11 Revision 2007, “Wireless LAN
MAC and PHY Specifications,” 12 June 2007.
[6] FCC, Notice of Proposed Rule Making: ET Docket no.
04-113, May 25, 2004.
[7] IEEE Std. 802.11y, “Wireless LAN MAC and PHY Specifications. Amendment 3: 3650-3700 MHz Operation in
USA,” 6 Nov. 2008.
BIOGRAPHIES
CHIN-SEAN SUM ([email protected]) received his M.E. from the
University of Technology of Malaysia (UTM) in 2002, and
his Ph.D. from Niigata University, Japan, in 2007. In June
200, he joined the National Institute of Information and
Communications Technology (NICT), Japan, as an expert
researcher in the Ubiquitous Mobile Communications
Group (UMCG). He is actively involved in the IEEE 802.15.3c
(TG3c) standardization activities for millimeter-wave WPAN,
where he served as the task group secretary and technical
assistant editor for the draft standard. He is the recipient
of the IEEE Working Group Chairs Awards for the IEEE
802.15.3c Standard. He is currently the coexistence subeditor in IEEE 802.15.4g WPAN Smart Utility Networks (SUN)
and an active contributor in IEEE 802.11af, TV White Space
for WLAN.
HIROSHI HARADA is director of the Ubiquitous Mobile Communication Group at NICT and is also the director of NICT’s
Singapore Wireless Communication Laboratory. He joined
the Communications Research Laboratory, Ministry of Posts
and Communications, in 1995 (currently NICT). Since 1995,
he has researched software defined radio (SDR), cognitive
radio, dynamic spectrum access network, and broadband
wireless access systems on the microwave and millimeter-wave bands. He has also joined many standardization committees and fora in the United States as well as in Japan and
fulfilled important roles for them, especially IEEE 802.15.3c,
IEEE 1900.4, and IEEE 1900.6. He currently serves on the
board of directors of the SDR Forum and as chair of IEEE
SCC41 (IEEE P1900) and vice chair of IEEE P1900.4. He was
chair of the IEICE Technical Committee on Software Radio
(TCSR), 2005–2007. He is also involved in many other activities related to telecommunications. He is currently a visiting professor at the University of Electro-Communications,
Tokyo, Japan, and is the author of Simulation and Software Radio for Mobile Communications (Artech House,
2002).
FUMIHIDE KOJIMA [M] received B.E., M.E. and Ph.D. degrees
in electrical communications engineering from Osaka University, Japan, in 1996, 1997, and 1999, respectively. Since
he joined the Communications Research Laboratory, Ministry of Posts and Telecommunications in 1999, he has
been engaged in research on ITS telecommunications, ROF
multimedia transmissions, mobile ad hoc network for disaster radio, and wireless grid systems including smart utility networks. Currently, he is a senior researcher at the new
generation wireless research center of NICT. His current
research includes intelligent MAC protocol for radio communication systems including mobile ad hoc networks and
smart utility networks.
ZHOU LAN received his B.S. and Ph.D. degrees in electrical
engineering from Beijing University of Posts and Telecommunications (BUPT), China, in 2000 and 2005, respectively.
He is currently with NICT as an expert researcher. His
research interests include high-speed wireless MAC design,
large-scale simulation platform development, and hardware implementation. He served as TPC vice chair of IEEE
PIMRC 2009 and TPC member of IEEE GLOBECOM 2009.
He has worked closely with industry. He served as the
assistant editor of the IEEE 802.15.3c mmwave WPAN WG.
He is also active in other IEEE WLAN and WPAN WGs.
RYUHEI FUNADA received his B.E., M.E., and Ph.D. degrees
from Chuo University, Tokyo, Japan, in 2000, 2002, and
2005, respectively. From 1999 to 2005 he was a trainee at
NICT. In 2005 he joined NICT as a postdoctoral researcher,
and is currently a permanent researcher. His research interests include OFDM-based mobile telecommunication systems, single-carrier WPAN systems, and various radio
transmission techniques. He received the Young
Researcher’s Encouragement Award of IEEE VTS Japan in
2002, and the Best Paper Award of WPMC 2006.
ACCEPTED FROM OPEN CALL
Advances in Mode-Stirred Reverberation
Chambers for Wireless Communication
Performance Evaluation
Miguel Á. García-Fernández, Juan D. Sánchez-Heredia, Antonio M. Martínez-González,
and David A. Sánchez-Hernández, Universidad Politécnica de Cartagena
Juan F. Valenzuela-Valdés, EMITE Ing
ABSTRACT
Reverberation chambers (RC) are a popular
tool for laboratory wireless communication performance evaluation, and their standardization
for Over-The-Air (OTA) measurements is
underway. Yet, the inherent limitations of single-cavity RCs to emulate isotropic Rayleigh-fading
scenarios with uniform phase distribution and
high elevation angular spread put their representation of realistic scenarios into jeopardy. Recent
advances in the last few years, however, have
solved all these limitations by using more general mode-stirred reverberation chambers (MSC),
wherein the number of cavities, their stirring and coupling mechanisms, and their software post-processing algorithms are far from simple, representing a new era for wireless communications
research, development, and over-the-air testing.
This article highlights recent advances in the
development of second-generation mode-stirred
chambers for wireless communications performance evaluation.
INTRODUCTION
A reverberation chamber (RC) is a highly
conductive enclosed cavity typically equipped
with metallic paddles and turntables. The independent movement of paddles and turntables
dynamically changes the electromagnetic field
boundary conditions. In this way the natural
multimode electromagnetic environment inside
the single cavity is stirred. With this continuous
mode stirring in time, the chamber provides the
same statistical distribution of fields independent
of location, except for those observation points
in close proximity to walls and nearby objects.
This required field uniformity also implies polarization balance in the chamber. At any observation point within the chamber, the field will vary
from a maximum to a minimum as the different
elements (stirrers and turntables) change the
boundary conditions [1]. The standard deviation
of the mean field throughout the chamber is typically the figure of merit used to assess the performance of the RC. In a perfectly stirred RC,
the real and imaginary parts of the rectangular
components of the electric and magnetic field
throughout the chamber are Gaussian distributed and independent, with identical variances. Thus, the electric or magnetic field inside a perfectly stirred RC follows a single-cluster Rayleigh
probability density function in amplitude and
uniform distribution of phase, which resembles
the multipath fading in urban scenarios of wireless communications systems. If we assume that
the introduction of a matched antenna does not
perturb the preexisting field distribution within
the chamber, the power received by this matched
antenna inside the RC is independent of the
antenna gain, directivity, or equivalent area [2].
This, along with the repeatability and reliability
of the stochastic reference fields emulated in the
RC, makes them ideal candidates to evaluate
antenna radiated power for wireless communications systems. Since for handheld wireless communications systems antenna radiated
power-related parameters such as Total Radiated Power (TRP) and Total Isotropic Sensitivity
(TIS) are the standardized figures of merit, RCs
have become a popular tool for evaluating wireless communication performance. Yet, propagating scenarios experienced by users outdoors
rarely follow the behavior of a uniform Rayleigh-fading scenario with single-cluster isotropic scattering. A single cluster assumes that waves that
are reflected or diffracted at the receiver and
propagated toward the receiver are grouped into
just one collection, corresponding to a group of
buildings or objects in a room. In urban environments, for instance, one can find several buildings on both sides of the street and each of them
can be modeled as a cluster of scatterers. Hence,
to describe properly this scattering environment,
multiple clusters are needed. An isotropic scattering scenario, also known as uniform, assumes
that all angles of arrival at the receiver have
equal probability, that is, there is no preferred
direction of upcoming waves. A distribution of
scatterers that leads to a uniform distribution of
angles of arrival is also difficult to justify in practice [3]. In consequence, recent years have witnessed a relatively large number of papers
describing novel concepts using more general
mode-stirred reverberation chambers (MSCs) [4]
with both hardware and software modifications
to that of simple single-cavity RCs in order to
overcome their innate limitations. In MSCs, the
fields do not necessarily have to be constrained
to a single cavity or even be provided in a reverberating mode to the researcher. In consequence, MSCs may contain more than one metal
cavity that could be coupled through a variety of
means, including waveguides, slots or metal
plates, among others. Likewise, the shape of
these cavities does not have to be restricted to
the canonical ones, and additional software control and algorithms allow extraordinary advantages to the researcher over conventional
single-cavity RCs.
This contribution is a short tutorial that highlights the recent advances in wireless propagation emulation using complex MSCs instead of
simple RCs. While only a few of these enhancements have reached the commercial stage, the innovations accumulated in the last few years clearly position mode-stirred reverberation chambers as a direct competitor of more expensive multiprobe multipath spatial fading emulators using
anechoic chambers and an excellent tool for
wireless communications R&D processes.
HARDWARE ADVANCES
MSCS WITH ENHANCED
RAYLEIGH-FADING EMULATION
One of the very first enhancements was related
to the ability to stir the modes more efficiently.
There are many contributions regarding the
shape and size of stirrers to ensure quasi-perfect
mode stirring. Effective paddles should be large
and asymmetrical [3], and some specific shapes
have been analyzed [3]. However, the shapes of the stirrers are not the only factor in the effectiveness of the RC. Beyond the simple linear movements of
paddles or circular movement of the turntable
typically employed in RCs, recent findings have
shown that complex paddle and device-under-test (DUT) movements also provide for some
additional enhancements. Both non-linear and
complex stirrer movements have been proposed
for enhanced field uniformity [5].
MSCS WITH RICIAN-FADING EMULATION
One important enhancement is related to the
ability to emulate Rician-fading environments.
The Rayleigh-fading case (K = 0) typically emulated by an RC is a special case of a more general Rician-fading case (K > 0). The Rician
K-factor is defined as the ratio between the
power of the coherent component (corresponding to the direct path) over the power of the
incoherent component (corresponding to the
scattered component) of the received field. In
fact, when the RC is not perfectly stirred, the
unstirred field component that is preserved defines a Rician field coexisting with the Rayleigh field generated by the stirred components. Stochastic plane wave superposition and
separation theories can be employed to obtain
Figure 1. Variable K-factor in a mode-stirred reverberation chamber when altering the azimuth orientation of the transmitting antenna (0, 30, and 90 degrees; K-factor vs. frequency, 1000–7000 MHz) [6].
both stirred (equivalent to non-Line of Sight or
Rayleigh-fading components) and unstirred contributions (equivalent to Rician-fading components). Yet, in most cases the separation of
these two components is aided by employing an
excitation source that is pointed toward the
DUT, and then it is assumed that all wall reflections interact with the paddles [6]. With only one
transmitting antenna, other ways of controlling
the K-factor are now possible in an MSC. These include rotating the transmitting antenna, which has a well-defined radiation pattern, with respect to the DUT (azimuth change); altering the distance between the transmitting antenna and the DUT (distance change); changing the polarization orientation of the transmitting antenna (polarization change); or varying the cavity's Q-factor by chamber loading (Q-factor change) [6]. Some variable K-factor results from [6]
are illustrated in Fig. 1.
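As a small numerical companion to the K-factor definition above, the following Python sketch applies a common moment-based estimator to a set of complex channel samples (e.g., S21 values collected over stirrer positions): the unstirred (coherent) part is taken as the sample mean, and the stirred (incoherent) part as the variance about that mean. The synthetic data and the choice of estimator are illustrative assumptions, not the exact processing used in [6].

import numpy as np

def estimate_k_factor(samples):
    """Moment-based Rician K-factor estimate from complex channel samples.

    K = |unstirred|^2 / stirred power, where the unstirred (coherent)
    part is the sample mean and the stirred (incoherent) part is the
    variance of the samples about that mean.
    """
    samples = np.asarray(samples, dtype=complex)
    coherent = np.mean(samples)                        # unstirred component
    incoherent_power = np.mean(np.abs(samples - coherent) ** 2)
    return np.abs(coherent) ** 2 / incoherent_power

# Synthetic check: Rayleigh-like stirred field plus a direct path of known strength.
rng = np.random.default_rng(0)
stirred = (rng.normal(size=10000) + 1j * rng.normal(size=10000)) / np.sqrt(2)
direct = 2.0                                           # |direct|^2 / E|stirred|^2 = 4
print(estimate_k_factor(direct + stirred))             # approximately 4

Setting the direct component to zero in this sketch recovers the Rayleigh case K = 0.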
If two transmitting antennas are used, a wide
range of K-factors can be obtained by pointing
one of them toward the DUT and the other one
toward the stirrers [6]. Interestingly, the K-factor
obtained in a mode-stirred reverberation chamber has also been found to be dependent on the
number and position of absorbers placed within
the main cavity [7].
MSCS WITH HYPER-RAYLEIGH-FADING
EMULATION
While Rayleigh and Rician fading are commonly
used in wireless propagation emulation, small-scale fading encountered in several new scenarios, such as vehicle-to-vehicle systems, presents
frequency-dependent and spatially-dependent
fading whose severity exceeds that predicted by
the Rayleigh fading model. These scenarios are
coined as Hyper-Rayleigh, and a very recent
paper has been able to accurately emulate these
scenarios using a modified reverberation chamber [8]. In [8], an electrically switched multi-element antenna array was added to an RC, and
the enclosure size was made considerably smaller than that of a conventional RC for the same tested frequency range.
Figure 2. Block diagram and picture of the modified RC in [8], including electrically controlled fading mechanisms and small size (reflective blade with stepper motor, TX/RX antennas, switches, 1:4 splitter, PC, and vector network analyzer).
Figure 3. Plots of signals that experience Rayleigh, Ricean, and Hyper-Rayleigh fading in [8] (S21 in dB vs. frequency, 2.40–2.48 GHz).
Figure 3 depicts signals experiencing Rayleigh, Ricean, and Hyper-Rayleigh fading in the MSC of [8].
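For intuition on why such channels fade more severely than Rayleigh, the sketch below builds a generic two-ray-plus-weak-diffuse channel, in which the random phase alignment of two comparable specular components produces much deeper envelope nulls than a Rayleigh channel of the same mean power. This is a textbook-style construction for illustration only, not the emulation procedure of [8].

import numpy as np

rng = np.random.default_rng(1)
n = 100000

# Two specular rays of equal amplitude with independent random phases,
# plus a weak diffuse (Rayleigh) component.
ray1 = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
ray2 = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
diffuse = 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
h = ray1 + ray2 + diffuse

# Probability that the envelope drops 20 dB below its RMS value; for a
# Rayleigh channel this is about 0.01, so the value printed here is
# noticeably higher, i.e., "worse than Rayleigh" fading.
rms = np.sqrt(np.mean(np.abs(h) ** 2))
deep_fade = np.mean(np.abs(h) < rms * 10 ** (-20 / 20))
print(f"P(20 dB fade) = {deep_fade:.4f}")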
MSCS WITH
NON-CANONICAL CONFIGURATIONS
By carefully controlling the excitation source of
an RC, the homogeneity and isotropic characteristic of the field at a specific position can be
controlled. The key to obtaining enhanced performance is the ability to shift and weight each mode within the chamber, and an array of exciting antennas was proposed to alleviate the mechanical requirements of RCs [9]. This follows straightforwardly from the fact that the field strength at any observation point within the chamber can be obtained by integration over the source.
Changing the sources therefore changes the
resulting field strengths. This particularly useful
advance has even made researchers coin new
terms for MSCs, such as scatter-field chamber
or source-stirred chamber, among others. In
order to excite additional transversal electromagnetic modes, other non-canonical chamber
configurations have been proposed. By exciting
the chamber with transmission lines [10], for
example, new TEM modes that are transversal
to those wires can be excited, further increasing
the frequency range of operation. In particular,
for the same cavity size, the lowest usable frequency becomes smaller. Different wire and
phase shift excitations are also possible. Other
noncanonical configurations include those contributions that employ a variable geometry, a
moving wall [11], or non-parallel walls [12]. In
such non-canonical MSCs no eigenmodes exist
and a diffuse, statistically uniform field is created without the use of a mechanical mode stirrer. As a result, test times can be drastically
reduced.
One recent contribution for enhanced emulation using MSCs is the opening of the door [13].
The aperture of the door transforms one wall
that was perfectly electric into a perfectly magnetic wall, but at the same time with a varying
aperture degree. Some modes will try to propagate through the opening, and therefore the
chamber can no longer be called a reverberation
chamber as both reverberating and non-reverberating modes exist. In this way, non-isotropic
fading emulation can also be performed using a
mode-stirred reverberation chamber, providing
for a different number of multipath components
(MPC), angle of arrival (AoA), or angular
spread values (AS) of the emulated scenarios.
Furthermore, the opening of the door can be used to enhance the accuracy of antenna radiated power measurements, since this aperture allows a more accurate characterization of the losses in the chamber [2].
Figure 4. Different PDPs (top: power delay profile in dB vs. time in ns, for 1, 3, and 7 absorbers and for three positions in an oil refinery, with RMS delay spreads from 66 to 187 ns) and BER (bottom: BER vs. signal-to-noise ratio for a real environment, the reverberation chamber with no and with two absorbers, and a through connection) measured using an MSC [14].
It seems that the door of the MSC has been truly opened, in the broadest sense of the word.
With the available manipulation of diverse
spatial fading multipath characteristics using
MSCs, another important step was the ability to
control the time-dependent fading performance
by being able to emulate variable root mean square (RMS) delay spreads. Effects such as
Doppler spread and fading, which are a consequence of a dynamically moving environment,
can also be emulated inside an MSC by moving
the paddles with different speeds or using them
in stepped or non-linear modes. With the use of
absorbers in [14], different RMS delay spread
profiles can also be achieved. The ensemble
average of the magnitude squared of the impulse
response of the MSC is referred to as the power
delay profile (PDP) and it is the way to include
effects due to time-varying multipath. The shape
of the PDP can have adverse effects on the performance of digital communication systems. The
RMS delay spread of the PDP is often used to
characterize a wireless communication environment because it is directly related to the bit error rate (BER) performance of a channel.
The BER is an end-to-end performance measurement that quantifies the reliability of the
entire radio system from bits in to bits out. Standardized channel models are typically characterized by RMS delay-spreads. As the RMS delay
spread in an MSC has been found to be proportional to the chamber Q-factor for a given frequency, this is yet another sign that very accurate
standardized channel fading emulation is possible with MSCs. This includes emulating the
behavior of the BER for different stirrer velocities [15, 16] and chamber loadings [14, 16], as
illustrated in Fig. 4 [14].
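Because the RMS delay spread is the quantity that ties the measured PDPs of Fig. 4 to the BER behavior discussed here, a short sketch of its standard computation is given below; the example PDP is an arbitrary exponentially decaying profile, not data from [14].

import numpy as np

def rms_delay_spread(delays_ns, pdp_db):
    """RMS delay spread of a power delay profile (delays in ns, powers in dB)."""
    p = 10 ** (np.asarray(pdp_db, dtype=float) / 10)        # linear power
    tau = np.asarray(delays_ns, dtype=float)
    mean_delay = np.sum(p * tau) / np.sum(p)                 # first moment
    second_moment = np.sum(p * tau ** 2) / np.sum(p)
    return np.sqrt(second_moment - mean_delay ** 2)

# Arbitrary example: exponentially decaying PDP sampled every 50 ns.
delays = np.arange(0, 500, 50)
pdp = -delays / 25.0          # roughly -2 dB per 50 ns tap
print(f"{rms_delay_spread(delays, pdp):.1f} ns")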
While specific power delay profiles can be
replicated by adding certain amounts of
absorbers and averaging over many different
paddle positions, in [17] the transmitter’s excitation signal was injected into a fading emulator
prior to introducing it into the chamber. In this
way, a channel response having multiple discrete
clustered distributions, typically found in both
urban and suburban settings where reflecting
structures may be located far from the receiver,
Figure 5. An MSC with two coupled cavities (stirrer, waveguide, and patch antenna) [18].
was created. Very accurate emulation of these
realistic environments can be performed using
the method described in [17]. A clear advantage
of this method compared to the one employed in
the next section is the use of only one chamber.
The disadvantage is clearly the requirement of a
fading emulator.
MSCS WITH MULTIPLE CAVITIES
Another important advance is the use of multiple cavities in order to provide for some control
of a complex multipath environment consisting
of diverse clusters with different fading characteristics. One possibility is to use a metal plate with
different-size irises separating two cavities. This
can give some control over which modes are
coupled to the main cavity and also enlarge the
delay spread at the main cavity in comparison to
single-cavity RCs [13]. Another possibility is to
connect two cavities with waveguides or wires
[18], as illustrated in Fig. 5. With this modification, the rank of the channel can be altered, and complex MIMO fading characteristics such as keyholes
can also be emulated. This enriches the emulation possibilities of MSCs, which now include the ability to emulate degenerate H matrices, as happens in tunnels, for example. With multiple
cavities, not only can the propagation characteristics
Figure 6. Different coherence times at GSM1800 emulated in an MSC (scatter plots of r2 vs. r1) [15].
Figure 7. Outdoor-measured and MSC-measured MIMO capacity (bit/Hz/s) vs. SNR for three different 3 × 3 MIMO systems using the offset technique, for K-factors between 0.001 and 49 [24].
of the transmitter and receiver be modified
independently, but MSCs can also reduce the
typically high elevation angular spread of RCs.
Variable RMS delay spreads have also been
obtained with coupled cavities, which have
demonstrated their ability to emulate indoor
environments, wideband in-vehicle environments
[19], or metallic windows, tree canopies, walls
and other artifacts in buildings [20]. Interestingly
enough, it has been found that for a typical
metal-framed window structure, the MIMO
capacity is greater than that without metal
frames. For an 8 × 8 antenna system, the MIMO
capacity is increased by about 2.5 times when
metal frames are introduced, and the presence
of leaves increases that capacity even more when
the transmitted power is kept constant [20].
These enhancements have paved the way for
new MSC testbeds for MIMO systems able to
emulate standardized fading channels.
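To make the earlier remark about channel rank and keyholes concrete, the sketch below evaluates the usual equal-power MIMO Shannon capacity, log2 det(I + (SNR/Nt) H H^H), for a full-rank i.i.d. Rayleigh channel and for a rank-one keyhole channel of comparable average gain; the channel realizations are synthetic and purely illustrative.

import numpy as np

def mimo_capacity(H, snr_linear):
    """Equal-power MIMO capacity (bit/s/Hz): log2 det(I + (SNR/Nt) H H^H)."""
    nr, nt = H.shape
    m = np.eye(nr) + (snr_linear / nt) * H @ H.conj().T
    return float(np.real(np.log2(np.linalg.det(m))))

rng = np.random.default_rng(2)
nt = nr = 4
snr = 10 ** (20 / 10)    # 20 dB

H_rich = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
# Keyhole: all energy forced through a single scatterer, so H has rank one.
a = (rng.normal(size=(nr, 1)) + 1j * rng.normal(size=(nr, 1))) / np.sqrt(2)
b = (rng.normal(size=(1, nt)) + 1j * rng.normal(size=(1, nt))) / np.sqrt(2)
H_keyhole = a @ b

print(f"rich scattering: {mimo_capacity(H_rich, snr):.1f} bit/s/Hz")
print(f"keyhole:         {mimo_capacity(H_keyhole, snr):.1f} bit/s/Hz")

The keyhole channel supports only a single spatial stream, so its capacity grows roughly logarithmically with SNR instead of scaling with the number of antennas.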
SOFTWARE ADVANCES
ACCURATE CONTROL OVER MSC ELEMENTS
If hardware advances are impressive, progress in
software post-processing techniques using MSCs
does not fall behind. In [15, 16], for instance, an
accurate control of the coherence time of an
MSC is achieved by means of a properly tailored
modulation of the stirrer velocity, as depicted in
Fig. 6. The coherence time is the time over
which the channel can be assumed constant. This
opens the door for very realistic emulation of
the time variability of real propagation channels
for wireless device performance and signal propagation testing. The coherence time is the most
useful parameter for describing this frequency
dispersiveness in the time-domain. Another good
and useful advance in the field is to emulate
multipath fading using a random time-variable
phase for every direction of arrival [21], opening
the door for complex standardized channel emulation with time- and phase-dependent parameters. A good example of this is the recent ability
to measure the radiation patterns of antennas
using a modified reverberation chamber [22]. In
[22], the free-space field radiated by the antenna
is retrieved from measurements in an MSC and
time-reversal techniques. Accuracies typically
better than 1 dB over the main lobes were
achieved. Since more complex testing tools
based on near-field and anechoic chamber methods use the radiation patterns of antennas to
estimate the correlation properties and from
these properties estimate the MIMO parameters, it seems clear that MSC can soon achieve
the same level of performance as more complicated two-stage or multiple test probe methods.
STOCHASTIC HANDLING OF
MEASURED DATA SAMPLES
A generalized stochastic field model capable of
ensuring a continuous transition among very different scattering scenarios by a K-generalized
PDF in an MSC has been readily available since
2004 [23]. The application of stochastic sample
handling for mode-stirred reverberation chambers has not been suggested until recently [24–26]. Stochastic handling of the measured data set
of samples is perhaps the most promising technique for further enhancements. For example,
the use of an offset technique within the set of
measured samples has been reported to emulate
Rician-fading very accurately without any hardware change [24]. For a target K-factor, the
required offset has to be defined in terms of the
radius of cluster data, the distance of centroid of
cluster from the origin, and its phase-coherence
to the selected radius. A comparison between
Rician-fading emulation using this technique and
outdoor Rician measurements can be observed
from Fig. 7. Good agreement is observed.
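A minimal sketch of the offset idea is given below: a deterministic complex offset is added to a measured (or here, synthetic) set of Rayleigh-distributed samples so that the ratio of the offset power to the scattered power equals the target K-factor (4.26 is used here, one of the values shown in Fig. 7). The exact offset definition in [24] involves the cluster radius, centroid distance, and phase coherence; the version below is a deliberately simplified illustration.

import numpy as np

def apply_offset_for_k(samples, k_target):
    """Shift zero-mean (Rayleigh-like) samples to emulate a target Rician K-factor."""
    samples = np.asarray(samples, dtype=complex)
    centered = samples - np.mean(samples)
    scattered_power = np.mean(np.abs(centered) ** 2)
    offset = np.sqrt(k_target * scattered_power)   # real-valued offset, arbitrary phase
    return centered + offset

rng = np.random.default_rng(4)
rayleigh = (rng.normal(size=20000) + 1j * rng.normal(size=20000)) / np.sqrt(2)
rician = apply_offset_for_k(rayleigh, k_target=4.26)

# Verify: a moment-based K estimate of the shifted set should be close to the target.
mu = np.mean(rician)
k_est = np.abs(mu) ** 2 / np.mean(np.abs(rician - mu) ** 2)
print(f"K estimate: {k_est:.2f}")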
One very recent method that is able to emulate arbitrary fading scenarios is the sample
selection technique [25]. The sample selection
technique consists of selecting the sample subset
that conforms to a specific target fading statistical ensemble from the whole sample set measured in the MSC using genetic algorithms. It
has to be mentioned that the selection is possible because the originally-measured Rayleighfading set is composed of many different clusters
due to the multiple-cavity slots-coupled system
employed. Only with stochastic theory can this method really target the emulation of standardized channel models (GSCM, SCM, SCME, Winner-II, or IMT-Advanced), which is no
longer unheard of for MSCs, as illustrated in
Fig. 8. In this figure, the stand-alone normalized
1 × 2 MIMO throughput (spectral efficiency) for
an IEEE 802.11n device measured in the E200
MIMO Analyzer by EMITE Ing (illustrated in
Fig. 9) following the procedure in [27] is compared to the 1 × 2 802.11n MIMO capacity
(Shannon) measured in the E200 by EMITE Ing
with the sample selection method. The equivalent spectral efficiency calculated with the public Matlab™ code for the standardized IEEE
802.11n is also depicted for comparison purposes. The 802.11n target data sample for the sample selection algorithm was a 2 × 2 MIMO
system at a frequency of 2.4 GHz with nine
propagation paths in an office environment
(indoor). The E200 MIMO Analyzer is a two-cavity MSC with dimensions of 0.82 m × 1.275
m × 1.95 m, eight exciting antennas allowing
accurate source-stirring, polarization stirring
due to aperture-coupling and to the different
orientation of the antenna exciting elements,
three mechanical and mode-coupling stirrers,
one holder-stirrer and variable iris-coupling
between the two cavities. Distribution fitness
errors below 2 × 10^-4 were achieved in less than
40 seconds using a hybrid linear-genetic algorithm. Despite the initial method constraints
(the target distribution has to have the same
mean power as the initial distribution), it is
clear that unheard-of emulation possibilities are
provided by the sample selection technique
using MSCs. The possibilities for arbitrary emulation and testing in all spatial, time, and code domains are very interesting, and MSCs could
really equal the performance of more complex
spatial-fading emulators based on anechoic
chambers at a fraction of the cost in the very
near future.
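A deliberately simplified stand-in for the sample selection idea is sketched below: from a large pool of measured (here synthetic Rayleigh-like) amplitude samples, a subset is chosen whose empirical distribution tracks a target (here Rician-like) amplitude set. A plain greedy nearest-match over sorted samples is used in place of the hybrid linear-genetic search described above, and no mean-power constraint is enforced, so this is only an illustration of the concept in [25], not its implementation.

import numpy as np

def select_samples(measured, target):
    """Greedy stand-in for the sample-selection idea: walk through the sorted
    measured pool and keep, for each sorted target amplitude, the next measured
    sample that reaches it, so the selected subset tracks the target distribution."""
    pool = np.sort(np.asarray(measured, dtype=float))
    chosen = []
    j = 0
    for t in np.sort(np.asarray(target, dtype=float)):
        while j < len(pool) - 1 and pool[j] < t:
            j += 1
        chosen.append(pool[j])
        j += 1
        if j >= len(pool):
            break
    return np.array(chosen)

# Synthetic example: a large Rayleigh-like measured pool and a Rician-like target set.
rng = np.random.default_rng(5)
pool = np.abs(rng.normal(size=50000) + 1j * rng.normal(size=50000))
target = np.abs(1.5 + 0.5 * (rng.normal(size=2000) + 1j * rng.normal(size=2000)))
subset = select_samples(pool, target)
print(f"target mean {target.mean():.3f}  vs  selected mean {subset.mean():.3f}")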
Figure 8. Stand-alone normalized 1 × 2 MIMO throughput for an IEEE 802.11n device measured in the MIMO Analyzer [26] (spectral efficiency/MIMO capacity in bit/s/Hz vs. SNR in dB for the 802.11n channel model, the sample-selection measurement, and measured devices with 64-QAM at 20 MHz and 40 MHz bandwidths; inset: PDF of the initial, final, and target amplitude distributions).
Figure 9. The two-cavity MIMO Analyzer MSC (radio communication tester and VNA connected through 2xT and 2xR switches to N Tx and N Rx SMA ports and the DUT) [courtesy of EMITE Ing].
CONCLUSIONS
In the last few years, different advances have
enabled mode-stirred reverberation chambers
(MSCs) to solve the inherent limitations of
conventional single-cavity reverberation chambers (RC) for wireless communication performance evaluation. It is now clear that MSCs
have considerably improved upon the Clarke model
followed by conventional single-cavity RCs,
and that with arbitrary fading emulation using
second-generation MSCs, a new era has started
for MIMO research, development, and OTA
testing.
REFERENCES
[1] Electromagnetic Compatibility (EMC) Part 4-21: Testing
and Measurement Techniques Reverberation Chamber,
IEC Standard 61000-4-21, 2003.
[2] P. Corona et al., “Performance and Analysis of a Reverberating Enclosure with Variable Geometry,” IEEE
Trans. Electromagnetic Compatibility, vol. 22, no. 1,
1980, pp. 2–5.
[3] J. Clegg et al., “Optimization of Stirrer Designs in a
Reverberation Chamber,” IEEE Trans. Electromagnetic
Compatibility, vol. 47, Nov. 2005, pp. 824–32.
[4] T. A. Loughry and S. H. Gurbaxani, “The Effects of
Intrinsic Test Fixture Isolation on Material Shielding
Effectiveness Measurements Using Nested Mode-Stirred
Chambers,” IEEE Trans. Electromagnetic Compatibility,
vol. 37, no. 3, 1995, pp. 449–52.
[5] P. Plaza-Gonzalez et al., “New Approach for the Prediction of the Electric Field Distribution in Multimode
Microwave-Heating Applicators with Mode Stirrers,”
IEEE Trans. Magnetics, vol. 40, no. 3, May 2004, pp.
1672–78.
[6] C. L. Holloway et al., “On the Use of Reverberation
Chambers to Simulate a Rician Radio Environment for
the Testing of Wireless Devices,” IEEE Trans. Antennas
and Propagation, vol. 54, no. 11, 2006, pp. 3167–77.
[7] A. Sorrentino et al., “The Reverberating Chamber as a
Line-of-Sight Wireless Channel Emulator,” IEEE Trans.
Antennas and Propagation, vol. 56, no. 6, June 2008,
pp. 1825–30.
[8] J. Frolik et al., “A Compact Reverberation Chamber for
Hyper-Rayleigh Channel Emulation,” IEEE Trans. Antennas and Propagation, vol. 57, no. 12, Dec. 2009.
[9] J. S. Hong, “Multimode Chamber Excited by an Array of
Antennas,” Electronics Letters, vol. 22, no. 19, 1993,
pp. 1679–80.
[10] D. Weinzierl et al., “Numerical Evaluation of Noncanonical Reverberation Chamber Configurations,” IEEE Trans.
Magnetics, vol. 44, no. 6, June 2008, pp. 1458–61.
[11] Y. Huang and D. L. Edwards, “An Investigation of Electromagnetic Fields Inside A Moving Wall Mode-Stirred
Chamber,” Proc. 8th IET Int’l. Conf. Electromagnetic
Compatibility, 1992, pp. 115–19.
[12] F. B. J. Leferink, “High Field Strength in A Large Volume:
The Intrinsic Reverberation Chamber,” Proc. IEEE Int’l.
Symp. Electromagnetic Compatibility, 1998, pp. 24–27.
[13] J. F. Valenzuela-Valdés et al., “Diversity Gain and
MIMO Capacity for Non-Isotropic Environments Using A
Reverberation Chamber,” IEEE Antennas and Wireless
Propagation Letters, vol. 8, 2009, pp. 112–15.
[14] E. Genender et al., “Simulating the Multipath Channel
with A Reverberation Chamber: Application to Bit Error
Rate measurements,” IEEE Trans. Electromagnetic Compatibility, 2010.
[15] A. Sorrentino et al., “Characterization of NLOS Wireless Propagation Channels with A Proper Coherence
Time Value in A Continuous Mode Stirred Reverberating Chamber,” Proc. 2nd European Wireless Technology
Conf., 2009, pp. 168–71.
[16] A. Sorrentino et al., “On the Coherence Time Control
of A Continuous Mode Stirred Reverberating Chamber,”
IEEE Trans. Antennas and Propagation, vol. 57, no. 10,
Oct. 2009, pp. 3372–74.
[17] H. Fielitz et al., “Reverberation-Chamber Test Environment for Outdoor Urban Wireless Propagation Studies,”
IEEE Antennas and Wireless Propagation Letters, vol. 9,
2010, pp. 52–56.
[18] M. Lienard and P. Degauque, “Simulation of Dual
Array Multipath Channels Using Mode-Stirred Reverberation Chambers,” Electronics Letters, vol. 40, no. 10,
2004, pp. 578–80.
[19] O. Delangre et al., “Modeling in-Vehicle Wideband
Wireless Channels Using Reverberation Chamber Theory,” Proc. IEEE Vehic. Tech. Conf., Sept. 2007, pp.
2149–53.
[20] Z. Yun and M. F. Iskander, “MIMO Capacity for Realistic Wireless Communications Environments,” Proc. IEEE
Antennas and Propagation Society Int’l. Symp., June
2004, pp. 1231–34.
[21] A. Khaleghi et al., “Evaluation of Diversity Antenna
Characteristics in Narrow Band Fading Channel Using
Random Phase Generation Process,” Proc. IEEE Vehic.
Tech. Conf., 2005, pp. 257–61.
[22] A. Cozza and A.e.A. el-Aileh, “Accurate Radiation-Pattern Measurements in A Time-Reversal Electromagnetic
Chamber,” IEEE Antennas and Propagation Mag., vol.
52, no. 2, Apr. 2010, pp. 186–93.
[23] P. Corona et al., “Generalized Stochastic Field Model
for Reverberating Chambers,” IEEE Trans. Electromagnetic Compatibility, vol. 46, no. 4, Nov. 2004, pp.
655–60.
[24] J. F. Valenzuela-Valdés and D. A. Sánchez-Hernández,
“Emulation of MIMO Rician-Fading Environments with
Mode-Stirred Chambers,” accepted for publication at
IEEE Trans. Antennas and Propagation, 2010.
[25] J. F. Valenzuela-Valdés et al., “Sample Selection
Method for Rician-Fading Emulation using Mode-Stirred
Chambers," IEEE Antennas and Wireless Propagation Letters, vol. 9, 2010, pp. 409–12.
[26] CTIA Certification Program Working Group Contribution Number RCSG100302. Standardized Fading Channel Emulation for MIMO OTA Using A Mode-Stirred
Chamber With Sample Selection Method, Mar. 2010.
[27] N. Olano et al., “WLAN MIMO Throughput Test in
Reverberation Chamber,” Proc. IEEE Int’l. Symp. Antennas and Propagation, July 2008, pp. 1–4.
BIOGRAPHIES
MIGUEL Á. GARCIA-FERNANDEZ was born in Cartagena, Spain.
He received the Dipl.-Ing. degree in telecommunications
engineering from the Universidad Politécnica de Cartagena,
Murcia, Spain, in 2005 and the Ph.D. degree from the Universidad Politécnica de Cartagena, Murcia, Spain, in January 2010. From 2005 onwards, he joined the Department
of Information Technologies and Communications, Universidad Politécnica de Cartagena, Murcia, Spain. From October 2009 to September 2010 he also joined the
Department of Applied Mathematics and Statistics, Universidad Politécnica de Cartagena, Murcia, Spain. His current
research areas cover multiple-input-multiple-output communications, SAR measurements and thermoregulatory
processes due to electromagnetic field exposure.
JUAN D. SANCHEZ-HEREDIA was born in Lorca, Spain. He
obtained his Telecommunication Engineering Degree from
the Universidad Politécnica de Cartagena in 2009 which
culminated with the Final Degree Award. In 2007 he
worked at General Electric (Cartagena), and was involved
in several projects in relation with the network infrastructure. In 2009 he joined the Department of Information Technologies and Communications, Universidad
Politécnica de Cartagena (Spain), as a Ph.D. student. He is currently pursuing a Master's degree in Information Technologies at Universidad de Murcia. His current research
areas cover MIMO communications, multimode-stirred
chambers and electromagnetic dosimetry.
ANTONIO M. MARTINEZ-GONZALEZ obtained his Dipl.-Ing. in
Telecommunications Engineering from Universidad Politécnica de Valencia, Spain, in 1998 and his Ph.D. from Universidad Politécnica de Cartagena, in early 2004. From 1998
till September 1999, he was employed as technical engineer at the Electromagnetic Compatibility Laboratory of
Universidad Politécnica de Valencia, where he developed
assessment activities and compliance certifications with
European directives related with immunity and emissions
to electromagnetic radiation from diverse electrical, electronic and telecommunication equipment. From September
1999 he has been an Associate Professor at Universidad Politécnica
de Cartagena. His research work was awarded the Spanish National Prize from Foundation
Airtel and Colegio Oficial de Ingenieros de Telecomunicación de España to the best final project on Mobile Communications in 1999. At present, his research interest is
focused on electromagnetic dosimetry, radioelectric emissions and mode stirred chambers. In December 2006 he is
one of the founders of EMITE Ing, a technological spin-off
company founded by Telecommunication Engineers and
Doctors of the Microwave, Radiocommunications and Electromagnetism Research Group (GIMRE) of the Technical
University of Cartagena (Spain). Founding of EMITE took
place right after the second i-patentes prize to innovation
and technology transfer in the Region of Murcia (Spain)
was awarded to the company founders. In 2008 GIMRE
group was awarded this prize again.
JUAN F. VALENZUELA-VALDÉS ([email protected]) was born in Marbella, Spain. He received the
Degree in Telecommunications Engineering from the Universidad de Malaga, Spain, in 2003 and his Ph.D. from Universidad Politécnica de Cartagena, in May 2008. In 2004 he
worked at CETECOM (Malaga). In 2004, he joined the
Department of Information Technologies and Communications, Universidad Politécnica de Cartagena, Spain. In 2007
he joined EMITE Ing as CTO. His current research areas
cover MIMO communications, multimode-stirred chambers
and SAR measurements.
DAVID A. SANCHEZ-HERNANDEZ [M'00, SM'06] ([email protected]) obtained his Dipl.-Ing. in Telecommunications Engineering from Universidad Politécnica de
Valencia, Spain, in 1992 and his Ph.D. from King’s College,
University of London, in early 1996. From 1992 to 1994 he
was employed as a Research Associate for The British
Council-CAM at King’s College London where he worked
on active and dual-band microstrip patch antennas. In
1994 he was appointed EU Research Fellow at King’s College London, working on several joint projects at 18, 38
and 60 GHz related to printed and integrated antennas on
GaAs, microstrip antenna arrays, sectorization and diversity. In 1997 he returned to Universidad Politécnica de
Valencia, Spain, to the Antennas, Microwaves and Radar
Research Group and the Microwave Heating Group. In
early 1999 he received the Readership from Universidad
Politécnica de Cartagena, and was appointed Vice Dean of
the School for Telecommunications Engineering and leader
of the Microwave, Radiocommunications and Electromagnetism Engineering Research Group. In late 1999 he was
appointed Vice Chancellor for Innovation & Technology
Transfer at Universidad Politécnica de Cartagena and member of several Foundations and Societies for promotion of
R&D in the Autonomous Region of Murcia, in Spain. In
May 2001 he was appointed official advisor in technology
transfer and member of The Industrial Advisory Council of
the Autonomous Government of the Region of Murcia, in
Spain, and in May 2003 he was appointed Head of Department. He is also a Chartered Engineer (CEng), IET Fellow,
IEEE Senior Member, CENELEC TC106X member, and is the
recipient of the R&D J. Langham Thompson Premium,
awarded by the Institution of Electrical Engineers (now the Institution of Engineering and Technology), as
well as other national and international awards. He has
published 3 international books, over 45 scientific papers
and over 90 conference contributions, and is a reviewer of
several international journals. He holds five patents. His
current research interests encompass all aspects of the
design and application of printed multi-band antennas for
mobile communications, electromagnetic dosimetry issues
and MIMO techniques for wireless communications.
ACCEPTED FROM OPEN CALL
System-Level Simulation
Methodology and Platform for
Mobile Cellular Systems
Li Chen, Wenwen Chen, Bin Wang, Xin Zhang, Hongyang Chen, and Dacheng Yang
ABSTRACT
System-level simulation has been widely used
to evaluate the comprehensive performance of
different mobile cellular systems. System-level
simulation methodologies for different systems
have been discussed by different organizations
and institutions. However, the framework for a
unified simulation methodology and platform has
not been established. In this article, we propose a
general unified simulation methodology for different cellular systems. Both the design of the simulation structure and the establishment of the
simulation platform are studied. Meanwhile, the
unified modeling and the realization of various
modules related to the system-level simulation are
presented. The proposed unified simulation
methodology and the general simulation platform
can be used to evaluate the performance of multiple mobile communication systems fairly. Finally,
the overall performance of LTE and Mobile
WiMAX systems is evaluated through the proposed framework. The key simulation results for
both Full Buffer and VoIP traffic are presented
and discussed. It is shown that the LTE system
exhibits better performance than Mobile WiMAX.
INTRODUCTION
Mobile communication has continued to evolve
rapidly in recent years. The third-generation (3G)
mobile cellular systems, such as wideband code
division multiple access (WCDMA), cdma2000,
time division synchronous code division multiple
access (TD-SCDMA), and worldwide interoperability
for microwave access (WiMAX), have been commercialized, while the research for the new
beyond 3G (B3G) systems (e.g., the Third Generation Partnership Project (3GPP) long term evolution (LTE) and 802.16m) or even 4G systems
(e.g., LTE-Advanced (LTE-A)) is still in
progress. Many commercial mobile systems today,
which are based on orthogonal frequency division
multiple access (OFDMA) technology with relatively wide bands, evolved from International
Mobile Telecommunications-2000 (IMT-2000).
In order to evaluate the expected performance
of these mobile cellular systems or to do research
on related key technologies of air interface, we
often have to resort to the system-level simulation. Due to the importance of system-level simulation, multiple simulation methodologies have
been proposed by 3GPP, 3GPP2, and the Institute of Electrical and Electronics Engineers
(IEEE). Each organization has published the corresponding simulation methodology for the system it standardizes. For example, 3GPP2 provided
the simulation methodology for cdma2000 1x evolution-data optimized (EV-DO)/evolution-data
and voice (EV-DV) evaluations [1], IEEE
announced [2, 3] for 802.16 series standards, and
3GPP issued [4, 5] for WCDMA. Besides, numerous related papers have been published in IEEE.
These methodologies evolve with the standardization progress. Each of them, however, focuses
only on one specific system.
One may ask a natural question: can we evaluate the performance of multiple mobile cellular
systems through system-level simulation in a unified framework? Actually, this problem has not
been studied extensively. In [6], Gao et al.
proposed a fair manner to evaluate different systems. However, they only unified the simulation
configurations to compare the performance of
different systems fairly. There are still some
issues that need to be clarified regarding the
design and realization of a system-level simulation platform and the unified simulation methodology. Also, several other problems about the
system-level simulation e.g., how to model the
modules and interfaces, how to evaluate the key
technologies involved, and the network performance, etc., have not been fully considered yet.
All of these obstacles constitute the motivations
for our work in this article.
Our major task in this article is to establish a
unified framework for system-level simulation
methodologies for different mobile cellular systems. In addition to capturing important aspects
of the unified simulation methodology, this article also highlights the modules in the framework
for system-level simulation. Moreover, the issues
among different simulation methodologies developed by various standardization bodies are clarified. Finally, we show an example evaluating the
performance of LTE and Mobile WiMAX systems using the unified system-level simulation
methodology. The analytical and simulation
work provides much insight into the technical
principles, as well as the benefits of a deep understanding of the potential of commercial mobile
communication systems.
The rest of this article is organized as follows.
The next section gives an overview of systemlevel simulation. We study the unified modeling
of various simulation modules. After that, the
unified system-level simulation evaluation
methodology and platform are proposed. We
present simulation results by using the proposed
methodology and platform. Finally, conclusions
are summarized.
OVERVIEW OF
SYSTEM-LEVEL SIMULATION
PURPOSE OF SIMULATION
Due to the complicated structures of mobile cellular communication systems, we cannot describe
them completely through a simple and abstracted mathematical model. Thus, we always resort
to the simulation to evaluate their performance.
Computer programs are used to simulate the
operating mechanisms of mobile cellular communication systems, the loaded traffics, etc. The
performance of these systems can be reflected by
the results obtained from the simulation programs ultimately.
Moreover, the simulation can be used as an
assistant tool for theoretical studies. Whenever
new algorithms or strategies are proposed, we
usually cannot apply them directly to real networks to evaluate their performance because of
the high cost. Since the simulation is able to
emulate the practical scenarios in a statistical
manner, qualitative or quantitative complexity
analysis and performance evaluation of any new
algorithm or strategy can rely on the simulation.
In the following, we discuss the advantages of
the simulation, including its efficiency and flexibility, which are also the reasons why we need
the simulation. The efficiency means that we can
develop a simulation platform for a mobile communication system in a very short period of time
instead of constructing a practical complicated
system. By the use of the simulation platform,
the system performance can be predicted easily
and effectively. The flexibility refers to facilely
changing any component of the system by modifying the corresponding programming module
with low cost and risk.
Valid conclusions and suggestions can be
obtained from the simulation. However, they are
not absolutely accurate, since it is impossible for
the simulation to describe the physical nature of
the actual system with infinite precision. The
simulations do not have complete authenticity,
reliability, or reproducibility, but they have
quasi-authenticity, quasi-reliability, and quasi-reproducibility. They only approach the physical
nature in the sense of probability.
LINK-LEVEL SIMULATION VS.
SYSTEM-LEVEL SIMULATION
The simulation for mobile communication systems includes the link-level simulation and the
system-level simulation. Both of them are widely
Figure 1. Component layers and model for simulation methodology (link-level simulation: propagation model, channel model, and physical layer; system-level simulation: MAC, TCP/IP, transport, and application layers, with traffic model and resource management; the two levels are connected through a physical layer parameter/MAC layer parameter interface).
employed to evaluate the associated performance. The link-level simulation focuses on the
performance of a transmission between base stations (BSs) and mobile stations (MSs). The performance metrics usually include the bit error
rate (BER), signal to noise ratio (SNR), achievable rate, etc. In general, the link-level simulation concentrates on the physical layer. On the
left side of Fig. 1, we show its relationship to
other components in communications. For the
purpose of theoretical studies, the performance
of modulation/demodulation or coding/decoding
schemes in different radio channel models can
be obtained from the link-level simulation.
The scenario for the system-level simulation
generally consists of a network with multiple BSs
and MSs. Different from the link-level simulation, the system-level simulation focuses on the
application layer performance metrics as
expressed by system throughput, user fairness,
user-perceived quality of service (QoS), handover delay or success rate, etc. The system-level
simulation concentrates on the higher layers
above the physical layer, such as the MAC layer,
transport layer, TCP/IP layer, and application
layer. Figure 1 shows the component layers
related to the system-level simulation. For the
purpose of theoretical studies, the performance
of resource allocation, handover, cell deployment, or other strategies can be obtained from
the system-level simulation.
STRUCTURE OF SYSTEM-LEVEL SIMULATION
The simulation methodology model is plotted in
Fig. 1. System-level simulation includes the
scheduling process, power control process, adaptive modulation and coding scheme (MCS) selection process, and other MAC layer processes.
Also, the system-level simulation needs to operate in conjunction with the link-level simulation. In general, the link-level simulation
is separately abstracted to a set of SNR-BER
curves on different MCS levels. The outputs of
the link-level simulation are mapped through an
interface to the system-level simulation as inputs
of the system-level simulation.
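As a sketch of how such an interface is commonly realized, the Python fragment below stores per-MCS SNR-BER curves produced offline by the link-level simulation (here invented logistic-shaped curves with made-up MCS names), and the system-level simulator queries them by interpolation to turn a computed SNR into a bit or packet error outcome. All curve shapes, MCS names, and the packet size are assumptions for illustration, not values from any standard.

import numpy as np

# Hypothetical link-level results: SNR grid (dB) and BER per MCS. In practice
# these curves are produced offline by the link-level simulator.
SNR_GRID_DB = np.arange(-10.0, 31.0, 1.0)

def logistic_ber(snr_db, snr_mid_db, slope=1.0):
    # Invented curve shape: BER falls from ~0.5 toward 0 as SNR grows past snr_mid_db.
    return 0.5 / (1.0 + np.exp(slope * (snr_db - snr_mid_db)))

LINK_CURVES = {
    "QPSK_1/2":  logistic_ber(SNR_GRID_DB, 1.0),
    "16QAM_1/2": logistic_ber(SNR_GRID_DB, 8.0),
    "64QAM_3/4": logistic_ber(SNR_GRID_DB, 16.0),
}

def ber_lookup(mcs, snr_db):
    """System-level side of the interface: interpolate the stored link-level curve."""
    return float(np.interp(snr_db, SNR_GRID_DB, LINK_CURVES[mcs]))

def packet_error_prob(mcs, snr_db, bits_per_packet=1024):
    """Map BER to a packet error probability, assuming independent bit errors."""
    return 1.0 - (1.0 - ber_lookup(mcs, snr_db)) ** bits_per_packet

print(ber_lookup("16QAM_1/2", 9.3))          # interpolated BER for one transmission
print(packet_error_prob("64QAM_3/4", 22.0))  # corresponding packet error probability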
How can we know the precision of this system-level simulation methodology? How can we
know whether the proposed platform for the system-level simulation is comparable to other
works? In order to guarantee the precision and
comparability, we developed the following
mechanism. First, we establish the platform for
the system-level simulation of a certain cellular
system, and then the platform needs to be calibrated with the corresponding standard organizations, such as the simulation methodologies
from the International Telecommunication
Union Radiocommunication Sector (ITU-R), 3GPP,
3GPP2, IEEE, and the WiMAX Forum. The calibration method is to adapt the simulation platform
until it achieves the same simulation results as those from the standard organizations under the same parameters and conditions. After
the common calibration, we can confirm that the
simulation platform is accurate enough to implement the performance evaluation or compare
with others.
OVERVIEW OF
EXISTING SIMULATION METHODOLOGIES
Simulation methodologies have been comprehensively discussed in different organizations
and institutions. With the evolution of communication standardization, international organizations and companies have made much effort to
develop the simulation methodologies for them
and their evolutions. The simulation methodologies for cdma2000 and Mobile WiMAX systems
have been explicitly presented in 3GPP2
C.R1002 [1] and WiMAX forum document [3],
respectively. 3GPP has also made an effort on
simulation methodology for WCDMA and its
evolutions such as HSDPA/HSUPA and LTE.
However, the methodology for LTE-A has not
been established yet. It is expected to improve with
the development of the LTE-A standard.
These simulation methodologies involve many
modules, including the cell layout model, channel model, radio resource allocation, interference model, physical layer abstraction, traffic
models, and other key factors. In the next section, we will study the unified modeling of these
modules in detail and discuss their realization or
deployment in the system-level simulation.
UNIFIED MODELING OF
SIMULATION MODULES
Our overview of existing methodologies shows that the simulation methodologies and configurations proposed by various organizations target different systems and are not directly applicable to one another. Thus, in this article we propose a unified structure for the simulation model and simulation platform, and establish a general framework for different system-level simulation methodologies. The models proposed here provide a unified method for evaluating the performance of various systems; they eliminate many inconsistencies found in previous literature while keeping the most essential parts.
There are many static and dynamic modules
for the system-level simulation. In this section,
we study the unified models of various modules,
which are the components of the unified system-level simulation methodology. Their simulation
methods and realization in the platform are also
discussed.
CELL LAYOUT MODEL AND
WRAPAROUND TECHNOLOGY
There are several common service area models: single-cell, 7-cell, 19-cell, and 36-cell. If the number of cells in the model is too large, which means the system has many BSs and MSs, developing and debugging the simulation platform takes a long time, because calibration requires iteratively modifying the platform, running it, and analyzing the results. Thus, a model with fewer cells can accelerate the development of the simulation platform. However, if the number of cells in the model is too small, the inter-cell interference (ICI) experienced by a cell may be insufficient, so the model may not reflect practical conditions with tolerable precision. Therefore, the scalability and adaptability of the simulation platform need to be considered when selecting the service area model. In some methodologies, the 19-cell model is the preferred standard service area.
WrapAround is a technique that can simulate the ICI while improving simulation efficiency. In the 19-cell model without WrapAround, the ICI calculation for MSs in the edge cells is seriously flawed. After the simulation, only the data of the center cell are reliable enough to be collected, which leads to low efficiency: most of the simulation time is spent on MSs outside the center cell, which merely provide interference for the small number of MSs in the center cell.
Thus, WrapAround is recommended as the cell layout of the simulation platform: it forms a toroidal surface and enables faster simulation run times. A toroidal surface is chosen because it can easily be formed from a rhombus by joining the opposing edges. The 19-cell cluster is virtually repeated eight times at the vertices of a rhombus lattice. The resulting structure keeps the original 19-cell cluster in the center, called the center cell cluster, while the eight copies evenly surround this center set.
By adopting WrapAround, the simulation platform solves the ICI calculation problem, since each cell of the center cluster experiences sufficient interference. It also improves the efficiency of the simulation and of the data statistics, since all the data in the center cell cluster can be collected. The only cost is that the system realization becomes slightly more complicated.
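The following is a minimal C++ sketch of how the wraparound distance could be computed, assuming the serving cluster is replicated at eight lattice offsets; the offset values themselves depend on the cell radius and cluster geometry and are left to the caller, so the function name and signature are illustrative only.

#include <algorithm>
#include <array>
#include <cmath>

// Hypothetical wraparound distance: the 19-cell cluster is virtually copied
// at eight lattice offsets; the BS-to-MS distance used for path loss and
// interference is the minimum over the original and the eight copies.
struct Offset { double dx, dy; };

double WrapAroundDistance(double bs_x, double bs_y, double ms_x, double ms_y,
                          const std::array<Offset, 9>& offsets) {
    double best = 1e300;
    for (const Offset& o : offsets) {   // offsets[0] = {0, 0} (original cluster)
        double dx = ms_x - (bs_x + o.dx);
        double dy = ms_y - (bs_y + o.dy);
        best = std::min(best, std::sqrt(dx * dx + dy * dy));
    }
    return best;
}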
Typically, the MSs are uniformly distributed in the system. Some methodologies [3, 7] specify that every cell or sector has the same number of MSs, which guarantees a uniform traffic density in the simulation and avoids having to model the effects of admission control separately. In order to satisfy the requirements of debugging, testing, and research, the simulation platform should support several MS dropping modes. These may include fixed-point dropping with only one MS in the service
area or with one MS in each sector, which enables fast debugging of the platform; fixed-point dropping with multiple MSs in each sector, which is useful for fast comparison testing; hotspot dropping, which supports research on hotspots, admission control, and congestion control; and dropping with a fixed cell radius. This imposes a design requirement on the simulation system: the operations on users and the realization of the dropping algorithm should be decoupled.
CHANNEL MODEL
The multipath channel model proposed in ITU-R M.1225 [7] has been widely used for cellular systems with a single-input multiple-output (SIMO) antenna configuration (typically one transmit and two receive antennas). M.1225 defines several scenarios: indoor A/B, pedestrian A/B, and vehicular A/B, each with six subpaths except for pedestrian A, which has four. The relative delay profile and average power of each subpath are also specified. Different systems have distinct bandwidths, which leads to different multipath resolutions, so these models cannot be used directly; the channel models need to be modified for each system. If multiple-input multiple-output (MIMO) technology is adopted, the spatial channel model (SCM) or the enhanced SCM (SCM-E) [8] should be used to generate multipath fading. SCM-E can also be used for SIMO with suitable settings.
The simulation platforms for different systems adopt the different channel models recommended in the corresponding simulation methodologies. For performance comparison among several systems, a unified mixed channel model should be applied. In [6], the authors describe how to unify the model parameters for the ITU channels; they established the unified channel model by exploiting the similarity in the number of paths and in the power and delay profile of each path among the different channel models.
RESOURCE ALLOCATION STRATEGY
Resource allocation, including subcarrier assignment among users and power allocation on subcarriers, is an important issue in OFDM-based
mobile cellular systems. In general, the resource
allocation process can be divided into several
steps: determination of the available resource;
calculation of the packet transmission order over
the air interface; implementation of the resource
allocation signaling process; and generation of
downlink (DL) or uplink (UL) MAC packets.
Typical optimization problems of interest for resource allocation include the capacity problem, i.e., maximizing the sum data rate subject to a power constraint, and the power control problem, i.e., minimizing the transmit power such that a certain quality of service metric is satisfied for each user. In addition, to balance system efficiency and user fairness, we can also maximize the aggregate utility of the users in the network. More strategies on this issue for OFDM systems are studied in [9] and its references.
A basic resource unit is the minimum unit allocated to one user and is specific to each system. In our simulation platform, the basic resource unit in the time-frequency domain is one subchannel in Mobile WiMAX or one resource block (RB) in LTE. On each resource unit, the proportional fair (PF), round-robin, or other scheduling strategies are used for subcarrier assignment.
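As a sketch only, the following C++ fragment implements the textbook proportional fair metric on a single resource unit; the exponential averaging constant and the function names are illustrative assumptions and not necessarily the exact variant used in the platform. A round-robin scheduler would simply cycle through the user indices instead of computing this metric.

#include <cstddef>
#include <vector>

// Textbook proportional fair metric: on each resource unit (subchannel/RB),
// pick the user maximizing instantaneous rate / long-term average rate,
// then update every user's average with an exponential filter.
struct PfUser { double avg_rate = 1.0; };  // small positive starting value

std::size_t PfSchedule(const std::vector<double>& inst_rate,
                       std::vector<PfUser>& users, double alpha = 0.01) {
    std::size_t best = 0;
    double best_metric = -1.0;
    for (std::size_t u = 0; u < users.size(); ++u) {
        double metric = inst_rate[u] / users[u].avg_rate;
        if (metric > best_metric) { best_metric = metric; best = u; }
    }
    for (std::size_t u = 0; u < users.size(); ++u) {
        double served = (u == best) ? inst_rate[u] : 0.0;
        users[u].avg_rate = (1.0 - alpha) * users[u].avg_rate + alpha * served;
    }
    return best;  // index of the user scheduled on this resource unit
}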
INTERFERENCE MODEL
Interference is a primary factor that significantly
affects system performance in multiuser networks, especially in OFDM systems. One of the
key benefits of the OFDMA air interface is its
ability to enable frequency reuse, that is, the
same frequency can be used in all neighboring
cells and sectors. This makes system deployment much easier, since frequency planning is no longer needed. With high frequency reuse, however, the system becomes interference limited.
The ICI seen by an MS in the DL or by a BS in the UL is typically frequency- and time-selective. In the system-level simulation, the ICI should be modeled according to the practical channel model, including large-scale fading and fast fading components. In our simulation platform, the interference is computed in real time as the simulation runs, from the signals received by the MS from the different BSs or by the BS from the different MSs.
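A simplified C++ sketch of the per-subcarrier SINR computation implied above is given below; the linear-power bookkeeping and the function name are assumptions made for illustration.

#include <vector>

// Simplified DL SINR on one subcarrier: serving power over the sum of
// co-channel powers received from all other BSs plus thermal noise.
// All quantities are linear (mW) and already include path loss, shadowing,
// and fast fading for the corresponding links.
double SubcarrierSinr(double serving_mw,
                      const std::vector<double>& interferer_mw,
                      double noise_mw) {
    double ici = 0.0;
    for (double p : interferer_mw) ici += p;   // inter-cell interference
    return serving_mw / (ici + noise_mw);
}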
PHYSICAL LAYER ABSTRACTION
In general, in order to reduce the complexity of
the simulation platform, the effects of the link-level strategies are abstracted to a set of curves,
which are the inputs of the system-level simulation as described previously. In OFDM systems,
the total bandwidth is divided into a number of
orthogonal subcarriers, each of which has a signal to interference plus noise ratio (SINR). Several subcarriers are combined into a subchannel,
and its effective SINR is the combination of the
SINRs of these subcarriers. Many mapping solutions have been proposed. The 3GPP and the
WiMAX Forum AWG group recommend the
exponential effective SINR mapping (EESM)
model as a typical and default solution for the
SINR combination.
When the bandwidth is large, such as 10 MHz in Mobile WiMAX or LTE systems, the fast Fourier transform (FFT) size is 1024. The amount of computation for the SINRs on the subcarriers or RBs then becomes quite large, which would greatly reduce simulation efficiency. Thus, we propose an interpolation method, which first calculates the SINR on every fourth or eighth subcarrier and then obtains the SINRs on all subcarriers through linear interpolation. Our simulation verification and validation show that if the effective SINR is calculated using the values on every fourth subcarrier instead of every subcarrier, the simulation accuracy under the conditions and models mentioned above is not affected, while the simulation complexity is greatly reduced.
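For reference, the standard EESM combination can be sketched in C++ as follows. The β value is an MCS-dependent calibration parameter and is left to the caller; applying the function to a subsampled set of subcarrier SINRs (with the remaining values linearly interpolated beforehand) follows the interpolation method described above, and the function name is an assumption of this sketch.

#include <cmath>
#include <vector>

// Standard EESM: effective SINR = -beta * ln( (1/N) * sum exp(-sinr_n / beta) ),
// with the per-subcarrier SINRs in linear scale and beta calibrated per MCS.
double Eesm(const std::vector<double>& sinr_linear, double beta) {
    double acc = 0.0;
    for (double s : sinr_linear) acc += std::exp(-s / beta);
    acc /= static_cast<double>(sinr_linear.size());
    return -beta * std::log(acc);
}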
SERVICE AND TRAFFIC MODELS
With the development of mobile cellular systems, traffic has evolved from voice-dominant toward mixed types. Different types of traffic have
Figure 2. Structure of modules (the time-driven driving module, the driven objects, and the slotwork() and Readtime() interfaces).
different performance evaluation metrics. Although there are various traffic types in practical systems, they can be classified into two categories. One is packet data traffic without a delay requirement, such as Full Buffer. System throughput and user fairness are the two guidelines for this type of traffic. However, the throughput depends on the system bandwidth and typically increases when more bandwidth is occupied. Thus, the spectrum efficiency, defined as the throughput divided by the effective bandwidth in bits per second per Hertz, is the normalized performance index for this type of traffic. The other category is delay-sensitive traffic with a delay requirement, such as voice over IP (VoIP) or streaming media. For this category, the packet delay and outage probability are the most critical performance metrics.
Different traffic models generate data with different characteristics. However, every traffic type can be abstracted as a data generator in the system-level simulation platform. In the platform, there is a data pool at the transmitter that contains the data to be transmitted, and the data generators for the various traffic models fill this pool according to different rules. In practice, mixed traffic is preferred.
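A minimal C++ sketch of such a data generator abstraction is shown below. The class names, the 100-packet pool threshold, the 1500-byte Full Buffer packet size, and the 20 ms VoIP packet interval are illustrative assumptions; the 40-byte voice payload follows the VoIP configuration used later in this article.

#include <cstdint>
#include <deque>

// Abstract traffic generator filling a per-transmitter data pool.
// FullBufferSource keeps the pool non-empty; VoipSource adds a fixed-size
// packet every 20 ms during talk spurts (voice activity is handled elsewhere).
struct Packet { std::uint32_t bytes; double arrival_ms; };

class TrafficSource {
public:
    virtual ~TrafficSource() {}
    virtual void Fill(std::deque<Packet>& pool, double now_ms) = 0;
};

class FullBufferSource : public TrafficSource {
public:
    void Fill(std::deque<Packet>& pool, double now_ms) override {
        while (pool.size() < 100) pool.push_back({1500, now_ms});
    }
};

class VoipSource : public TrafficSource {
public:
    void Fill(std::deque<Packet>& pool, double now_ms) override {
        if (active_ && now_ms >= next_ms_) {
            pool.push_back({40, now_ms});   // 40-byte voice payload
            next_ms_ += 20.0;               // one packet per assumed 20 ms frame
        }
    }
    bool active_ = true;     // toggled by a voice-activity model
    double next_ms_ = 0.0;
};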
OTHER KEY ALGORITHMS
Other key technologies, such as hybrid automatic repeat request (HARQ), adaptive modulation and coding (AMC), channel quality indicator (CQI) feedback, power control, and interference-over-thermal (IoT, or rise-over-thermal, RoT) control in the UL, are either specified for the individual systems or widely used in practice. They may be adjusted for different systems according to their own characteristics.
UNIFIED SYSTEM-LEVEL SIMULATION
METHODOLOGY AND PLATFORM
Based on the unified modeling of various modules, we study the unified system-level simulation
methodology and the general simulation plat-
form in the following. In order to compare the performance of different systems, the simulation settings should be unified. Since different systems operate under different parameters and configurations, the organization behind each system proposes its own body of simulation methodology and recommends its own set of parameters; these do not match each other completely. Unified simulation parameters and configurations can be obtained by comparing the characteristics of the different systems, as discussed in [6]. For most parameters and configurations, we choose a set of common values from the overlapping value ranges or optional configurations of the different systems. If there is no common value for some parameters, or for system-specific configurations, we set them as close as possible across the systems.
The system-level simulation platform we developed is based on the model shown in Fig. 1. After realizing all the unified modules above, we calibrate the simulation platform by comparing it with published results. The platform can then be used for research or for system performance comparison under the unified settings. The unified simulation methodology consists of the unified models of the various modules and this unified flow for different systems.
The general system-level simulation platform adopts a time-driven mode to advance the system, combining drop-driven and time-slot-driven operation. A drop is a process in which all MSs are placed in the service area in a certain manner; in our platform, the MSs are uniformly distributed in the system. A time slot is the minimum time step of the simulation, determined by the minimum system-level control interval. In the Mobile WiMAX system the scheduling and power control period is one frame, so the time slot in the platform is 5 ms, while in the LTE system the time slot is 1 ms since the control period is a subframe. The simulation duration of a drop contains many time slots and is long enough for the system to become stable. The data for the system outputs should be collected from the time slot at which the simulated system becomes stable until the end of the drop. A drop can be regarded as one Monte Carlo run; thus, multiple drops should be simulated in the system-level simulation platform before average values are obtained.
The flow of the system-level simulation platform is as follows. Before the time slot simulation, initialization is performed, which includes reading the simulation parameters, initializing the service area and BSs, dropping the MSs in the system, and initializing the MSs and the path losses of the links between the MSs and BSs. After that, the time-driven module drives the progress of the system, and the other modules perform their own work in every time slot, summarized in the function slotwork(). The process ends when the simulation time expires.
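A skeleton of this drop/time-slot flow is given below under assumed names (Clock, DrivenObject, RunSimulation); it is a sketch of the flow described above, not the platform's actual code.

#include <vector>

// Skeleton of the drop / time-slot driven flow. Only the driving module
// advances or resets the clock; driven objects may only read it and do
// their per-slot work in slotwork().
class Clock {
public:
    int Read() const { return slot_; }
    void Advance() { ++slot_; }
    void Reset() { slot_ = 0; }
private:
    int slot_ = 0;
};

class DrivenObject {
public:
    virtual ~DrivenObject() {}
    virtual void slotwork(const Clock& clock) = 0;  // per-slot behavior
};

void RunSimulation(std::vector<DrivenObject*>& modules,
                   int num_drops, int slots_per_drop) {
    Clock clock;
    for (int drop = 0; drop < num_drops; ++drop) {
        clock.Reset();
        // ... initialization: read parameters, place BSs, drop MSs,
        //     compute path losses of all BS-MS links ...
        for (int s = 0; s < slots_per_drop; ++s) {
            for (DrivenObject* m : modules) m->slotwork(clock);
            clock.Advance();
        }
        // ... collect statistics from the slots after the system is stable ...
    }
}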
From the simulation flow, we can observe the relationship among the different modules in the simulation platform, shown in Fig. 2. The interface between the driving module and
the driven objects is the slotwork() function. The clock in Fig. 2 is the time slot module. Only the system driving module, which is the central controller, can advance and reset the clock, while the other modules are driven objects, such as the behaviors of the MSs and BSs, which can only read the time slot module. Throughout the system-level simulation, all modules operate based on the time slot module.
The proposed unified system-level simulation methodology and the general simulation platform can be used to evaluate the performance of any mobile communication system or mobile broadband technology. The output performance of different systems can be compared fairly and credibly through the unified simulation. In addition, the simulation platform can be used to evaluate the overall system-level performance of a proposed algorithm. As mobile communication standards continue to develop, the same process and method can be applied to establish unified simulation methodologies for new systems.
The unified modeling and methodology can also be used in other areas, such as network planning and network optimization, for which suggestions can be provided through the system-level simulation. Moreover, the unified methodology can be extended to compare the network costs of different systems by adding cost modules to the platform. This would allow operators to lower their capital and operating expenditures by deploying the system with better performance.
SIMULATION RESULTS
In this section, using the unified simulation methodology and the unified parameters studied above, we evaluate the performance of the LTE and Mobile WiMAX systems. The simulation configurations are presented first, and then the results are shown and analyzed. The evaluation indicators for the different traffic models are also presented in the analysis.
By comparing the system parameters and configurations in [3, 10], we summarize a set of unified configurations, as shown in Table 1, where V-MIMO stands for virtual MIMO. The system performance depends greatly on the deployment settings and system configurations listed there. In the simulation, all MSs are randomly dropped in a three-tier layout of 19 hexagonal cells with three identical sectors in each cell. The wraparound model is employed to simulate interference from neighboring cells, and EESM is used to combine the SINRs on the subcarriers.
The Mobile WiMAX and LTE systems also have some specific key technologies. In Mobile WiMAX, synchronous HARQ with a four-process interlaced structure is adopted, and the maximum number of retransmissions is 4. The Chase combining model is used to recalculate the SINR after a retransmission. Hard handover and AMC with six MCS levels are used in Mobile WiMAX, and basic open-loop power control is applied in the UL.
Figure 3. Fairness curves of different systems in DL and UL (CDF versus normalized throughput for WiMAX and LTE, together with the fairness criterion).
In LTE, however, adaptive asynchronous HARQ, intra-LTE handover, AMC with 15 MCS levels, and closed-loop power control are adopted.
The simulation platform is implemented in C++ using Visual Studio. The system-level simulation is based on the unified methodology with the above configurations. The simulation results for throughput, spectrum efficiency, and user fairness with Full Buffer traffic, and for capacity, packet loss, and user satisfaction rate with VoIP traffic, are shown hereafter.
Table 2 shows the peak user rate, the system throughput, and the spectrum efficiency for Full Buffer traffic in DL and UL. It is observed that the LTE system has higher system throughput and spectrum efficiency in both DL and UL. Moreover, the cell-edge performance of LTE is also better than that of Mobile WiMAX.
Another performance index for Full Buffer traffic is the fairness among users, which can be achieved by using a proper scheduling algorithm such as the PF scheduler. The throughput and spectral efficiency discussed above refer to aggregate system performance, whereas fairness refers to per-user performance, especially that of cell-edge users. An effective tradeoff must therefore be made between fairness and throughput. In order to achieve satisfactory fairness and high spectral efficiency at the same time, we calibrated the relevant parameters (the α-factor of the PF scheduler) in the scheduling algorithms so that the following two criteria are met:
• The fairness curves are similar to each other in shape and satisfy the fairness criterion, which guarantees consistency and fairness among the different systems.
• The fairness curves are as close to the fairness criterion as possible, which indicates that the spectral efficiency is maximized.
Figure 3 gives the user fairness curves in DL.
It is shown that all curves are on the right side
Simulation parameter: Unified value

System Configurations
Frequency: 2 GHz
Bandwidth: 10 MHz
Duplex: TDD
DL:UL ratio: 22:15 for WiMAX; 3:2 for LTE
Antenna configuration: DL: 2 × 2; UL: 1 × 2 (or 2 × 2 V-MIMO)
BS-to-BS distance: 1 km
Minimum distance between BS and MS: 35 m
Maximum UL total path loss: 140 dB
Frequency reuse factor: 1
Thermal noise density: –174 dBm/Hz
Scheduler: PF for Full Buffer; MLWDF for VoIP
Subchannel model for Mobile WiMAX: PUSC [3]

Propagation Parameters
Propagation model: COST 231 Suburban
Log-normal shadowing std.: 8.9 dB
Correlation distance of shadowing: 50 m
Shadowing correlation between cells: 0.5
Shadowing correlation between sectors: 1.0
Channel models: SCM-E
Penetration loss: 20 dB

BTS Configurations
BTS transmit power: 43 dBm per antenna
BTS noise figure: 5 dB
BTS antenna gain with cable loss: 14 dBi
Antenna height of BS: 30 m
Antenna horizontal pattern: 70° with 20 dB front-to-back ratio

MS Configurations
MS transmit power: 23 dBm
MS noise figure: 8 dB
MS antenna gain: –1 dBi
Antenna height of MS: 1.5 m
MS velocity: 3 km/h
MS number: 10/sector
Traffic type: Full Buffer and VoIP

Table 1. Simulation parameters.
of the three-point fairness criterion curve, guaranteeing fairness among users. At the same time, each curve is as close to the fairness criterion as possible. The two curves are close to each other, which means the two systems are similar in terms of fairness. In addition, the probability of users with low throughput is smaller in LTE than in Mobile WiMAX, which agrees with the cell-edge throughput in Table 2. The fairness curves in the UL are similar to those in Fig. 3 and are omitted.
Figure 4 gives the average ICI performance of the LTE and Mobile WiMAX systems with respect to the number of users per sector. It shows that LTE has a lower average ICI than Mobile WiMAX, which means LTE mitigates the ICI more effectively. As the number of users increases, the average ICI per user decreases for both systems.
Full Buffer is an ideal data traffic model without a delay requirement, so it reflects only the throughput and spectral efficiency performance. In the following, we evaluate the performance for VoIP traffic. The voice activity factor for VoIP is 50 percent, and the total voice payload on the air interface is 40 bytes. The results
for VoIP traffic in DL and UL are listed in
Table 3.
VoIP capacity is the average number of users in one sector when more than 95 percent of the users satisfy the following three criteria:
• The packet delay for each user does not exceed 50 ms with a probability of 98 percent.
• The packet loss for each user is lower than 3 percent.
• The capacities for users in DL and UL are comparable.
The user satisfaction rate is the proportion of users that satisfy the packet delay and packet loss conditions.
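A small C++ sketch of the per-user check implied by the first two criteria is given below; the percentile computation is a simple order-statistic approximation and the function name is illustrative.

#include <algorithm>
#include <cstddef>
#include <vector>

// Per-user VoIP check: 98th-percentile packet delay not exceeding 50 ms and
// packet loss below 3 percent. The user satisfaction rate is the fraction
// of users passing both checks.
bool UserSatisfied(std::vector<double> delays_ms, double loss_ratio) {
    if (loss_ratio >= 0.03) return false;
    if (delays_ms.empty()) return false;
    std::sort(delays_ms.begin(), delays_ms.end());
    std::size_t idx = static_cast<std::size_t>(0.98 * (delays_ms.size() - 1));
    return delays_ms[idx] <= 50.0;
}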
From the results, we can see that the LTE system has higher VoIP capacity than Mobile WiMAX, with a slightly higher packet loss proportion in the DL. Moreover, the user satisfaction rate of LTE is nearly equal to or higher than that of Mobile WiMAX. Thus, the LTE system generally performs better than the Mobile WiMAX system.
CONCLUSION
In this article we proposed a general unified system-level simulation evaluation methodology for
mobile cellular systems and presented the framework for the establishment of a unified simulation platform. The unified methodology
highlights the features of air interfaces and the
advantages of different system standards. Moreover, in order to compare various systems’ performance comprehensively, the unified modeling
of various modules in different systems is studied. After that, the simulation structure and the
general platform are proposed. Finally, based on
the unified configurations and parameter settings, simulation results of LTE and Mobile
WiMAX systems for both Full Buffer and VoIP
traffic in DL/UL are presented through the proposed platform. The results show that the LTE
system has higher spectrum efficiency, larger
VoIP capacity, and better user satisfaction than
Mobile WiMAX. The work in this article can
serve as a reference for researchers, product
developers, or engineers.
BIOGRAPHIES
LI CHEN [S] ([email protected]) received his B.Eng. degree in communication engineering from Beijing University of Posts and Telecommunications (BUPT) in 2007. He is now pursuing his Ph.D. degree in communications and information systems at BUPT. His research interests focus on network coding, cross-layer optimization, radio resource management, and performance evaluation of wireless communications.

WENWEN CHEN ([email protected]) received her B.Eng. (2007) degree in communication engineering and M.Eng. (2010) degree in communications and information systems from BUPT. She joined Qualcomm in 2010 as an engineer at Qualcomm Wireless Communication Technologies (China) Limited. Her research interests mainly focus on performance analysis of key technologies for B3G/4G systems.

BIN WANG [S] ([email protected]) received his B.Eng. degree in communication engineering from BUPT in 2009. He is now pursuing his M.Eng. degree in communications and information systems at BUPT. His research interests focus on device-to-device underlay cellular networks, cooperative communications, and performance evaluation of wireless communications.

XIN ZHANG [M] ([email protected]) received his B.Eng. (1997) degree in communication engineering, M.Eng. (2000) degree in signal and information processing, and Ph.D. (2003) degree in communications and information systems from BUPT. He joined BUPT in 2003, working in the Wireless Theories and Technologies Laboratory, and focuses his research mainly on key technologies and performance analysis of air interfaces for wireless networks.

HONGYANG CHEN ([email protected]) received his B.S. (2003) and M.S. (2006) degrees from the Institute of Mobile Communications, Southwest Jiaotong University, Chengdu, China. He is currently a Ph.D. student at the Graduate School of Information Science and Technology, University of Tokyo. In 2009 he was a visiting researcher in the UCLA Adaptive Systems Laboratory under the supervision of Prof. Ali H. Sayed. His research interests include wireless localization, wireless sensor networks, and statistical signal processing. He has served as a TPC member for several flagship conferences and organized a session on WSNs at IEEE MILCOM’08. He received the Best Paper Award at IEEE PIMRC’09 and has been listed in Marquis Who’s Who in the World.

DACHENG YANG ([email protected]) received his M.S. (1982) and Ph.D. (1988) degrees in circuits and systems from BUPT. From 1992 through 1993 he worked at the University of Bristol, United Kingdom, as a senior visiting scholar, where he was engaged in Project Link-CDMA of the RACE program. In 1993 he returned to BUPT as an associate professor. Currently he is a professor at BUPT, working in the Wireless Theories and Technologies Laboratory, and his research interests are focused on wireless communications.
REFERENCES
[1] 3GPP2 C.R1002-0 v.1.0, “cdma2000 Evaluation Methodology,” Dec. 2004.
[2] R. Jain, S.-I. Chakchai, and A.-K. Al Tamimi, “System-Level Modeling of IEEE 802.16E Mobile WiMAX Networks: Key Issues,” IEEE Wireless Commun. Mag., vol. 15, no. 5, Oct. 2008, pp. 73–79.
[3] WiMAX Forum, “WiMAX System Evaluation Methodology V4,” July 2008.
[4] 3GPP TR 25.896, “Feasibility Study for Enhanced Uplink for UTRA FDD,” v.6.0.0, Mar. 2004.
[5] 3GPP TS 25.214, “Physical Layer Procedures (FDD),” v.8.0.0, Dec. 2007.
[6] Y. Gao et al., “Unified Simulation Evaluation for Mobile Broadband Technologies,” IEEE Commun. Mag., vol. 47, no. 3, Mar. 2009, pp. 142–49.
[7] ITU-R Rec. M.1225, “Guidelines for Evaluation of Radio Transmission Technologies for IMT-2000,” Jan. 1997.
[8] 3GPP TR 25.996, “Spatial Channel Model for Multiple Input Multiple Output (MIMO) Simulations,” v.6.1.0, Sept. 2003.
[9] L. Chen et al., “Inter-Cell Coordinated Resource Allocation for Mobile WiMAX System,” Proc. IEEE WCNC 2009, Budapest, Apr. 2009, pp. 1–6.
[10] 3GPP TS 36.211, “Evolved Universal Terrestrial Radio Access (E-UTRA) Physical Channels and Modulation,” v.8.2.0, Mar. 2008.
Figure 4. Average ICI per user with different user numbers (average ICI per user in dBm versus the number of users per sector, for LTE and WiMAX).
Performance Indices | DL WiMAX | DL LTE | UL 1 × 2 (2 × 2 V-MIMO) WiMAX | UL 1 × 2 (2 × 2 V-MIMO) LTE
Peak User Rate (Mb/s) | 31.68 | 80.2 | 5.04 (5.04) | 20.3 (20.3)
System Throughput (Mb/s) | 7.91 | 8.23 | 2.38 (2.79) | 2.74 (2.83)
Spectrum Efficiency (bps/Hz) | 1.33 | 1.44 | 0.59 (0.69) | 0.64 (0.66)
Cell-edge Throughput (Mb/s) | 0.058 | 0.11 | 0.06 (0.04) | 0.061 (0.03)

Table 2. Peak user rate, throughput and spectrum efficiency.
Performance Indices | DL WiMAX | DL LTE | UL 1 × 2 (2 × 2 V-MIMO) WiMAX | UL 1 × 2 (2 × 2 V-MIMO) LTE
VoIP Capacity | 240 | 390 | 128 (164) | 250 (286)
User Average Throughput (kb/s) | 9.8 | 8.3 | 7.2 (9.1) | 7.95 (9.5)
Proportion of Users with Packet Loss > 3% | 1.7% | 2.0% | 3.6% (3.6%) | 2.4% (2.3%)
User Satisfaction Rate (%) | 97.3 | 97.2 | 95.1 (95.9) | 96.8 (97)

Table 3. Results for VoIP traffic.
ACCEPTED FROM OPEN CALL
Layer 3 Wireless Mesh Networks:
Mobility Management Issues
Kenichi Mase, Graduate School of Science and Technology, Niigata University
ABSTRACT
Wireless Mesh Networks (WMNs) may be
broadly classified into two categories: Layer 2
and Layer 3 WMNs. This article focuses on the
Layer 3 WMN, which provides the same service
interfaces and functionalities to the conventional
mobile host (MH) as the conventional wireless
local area network. Three essential design issues
to realize seamless mobility management in the
Layer 3 WMN are identified and systematically
discussed in this article. They are IP address resolution, location management, and Media Access
Control (MAC) address resolution. The Layer 3
WMN backbone requires systematic management of the IP and MAC addresses of each MH,
which can be realized by four basic approaches:
centralized management, home management,
replication management, and distributed management. It is shown that the address pair management architecture is fundamental to realizing
efficient packet forwarding and address resolution. Design guidelines are provided for realizing a Layer 3 WMN that supports seamless MH roaming, considering the applicability of the address pair management architectures with regard to WMN scale and client mobility; this applicability is assessed through a qualitative evaluation of the management overhead and handover performance.
INTRODUCTION
Wireless communication has afforded remarkable convenience and benefits to human life,
enabling communication at all times and from
any place. Cellular phone networks and wireless
local area networks (WLANs) are major vehicles
to provide wireless communication. The former
is categorized as a wide area network (WAN),
while the latter is a local area network (LAN).
WLANs can also be used to form access networks to WANs. The wireless mesh network
(WMN) is an emerging technology to extend the
use of wireless communication [1]. A WLAN
backbone is composed of access points (APs) to
accommodate mobile hosts (MHs) and typically
wired LANs such as IEEE 802.3 to connect the
APs. On the other hand, a WMN backbone is
composed of mesh routers (MRs) instead of
APs. MRs generally have a capability similar to
APs in a WLAN for accommodating convention-
al MHs and are usually placed in a location similar to that of APs in a WLAN. However, unlike
WLANs, neighboring MRs communicate with
each other via a wireless link. If two MRs are
not within direct wireless communication range,
intermediate MRs may function to relay communication (i.e., wireless multihop communication).
Typically, no wiring is necessary to construct the
WMN backbone. This is a significant advantage
of the WMN over the WLAN, which requires
wiring between APs; hence, the cost and time needed to deploy a network service are reduced by using the WMN.
The WMN may be broadly classified into two
categories: Layer 2 and Layer 3 WMNs. In the
former, frame relaying (bridging) is performed
in Layer 2 (the Media Access Control (MAC)
layer) in the WMN backbone. A MAC address is
used to deliver frames from one MH to another
through the WMN backbone. Standardization of
the Layer 2 WMN is under development in
IEEE 802.11s. All wireless interfaces that form the WMN backbone must use IEEE 802.11s-based devices.
latter type of WMN, Internet Protocol (IP)
packet relaying is performed in Layer 3 (Network layer) in the WMN backbone [2]. An IP
address is used to deliver IP packets from one
MH to another through the WMN backbone.
There is no special requirement for the underlying wireless link systems used to connect the
MRs. It is thus possible to select the most appropriate wireless link system such as IEEE
802.11a/b/g/n or IEEE 802.16 for each link in
terms of cost and performance to form a heterogeneous wireless network for the WMN backbone; each MR in the WMN backbone may have
multiple interfaces to different wireless link systems. The broadcasting of Address Resolution
Protocol (ARP) request frames is necessary in
the Layer 2 WMN backbone, consuming precious wireless bandwidth, while it can be avoided
in the Layer 3 WMN backbone. For these reasons, Layer 3 WMN is expected to be more scalable than Layer 2 WMN.
Various approaches and technical challenges
to realize the Layer 3 WMN have been proposed
and discussed in the literature, though standardization is yet to begin. Mobility management
is one of the major technical issues and challenges in the realization of the Layer 3 WMN. It
is necessary for MHs to continuously send and/or
receive IP packets for ongoing communication sessions while roaming in the WMN. Readers may refer to [3, 4] for introductions and general discussions of mobility management in WMNs. This article focuses on mobility management for Layer 3 WMNs that support client side transparency for conventional MHs. The contributions of this article are:
• Identification of the technical issues involved in designing a Layer 3 WMN that supports client side transparent mobility.
• Discussion of the common frameworks and architectures for solving these issues and realizing the essential functions.
• Presentation of the design guidelines, including the selection of architectures.
MOBILITY MANAGEMENT ISSUES
AND RELATED WORKS
The architecture of WMNs can generally be
classified into three types: Infrastructure/Backbone WMNs, Client WMNs, and Hybrid WMNs
[1]. This article focuses on a primitive class of
Infrastructure/Backbone WMNs, where MRs
form a communication backbone for conventional IEEE 802.11 MHs to provide a service equivalent to that of WLAN (Fig. 1). Hereafter, this
type of WMN is referred to as “Layer 3 WMN”
or simply “WMN.” An MH establishes a linklayer association with one of the MRs through
IEEE 802.11 infrastructure mode protocol and
obtains an IP address through a Dynamic Host
Configuration Protocol (DHCP) service provided in the WMN, when it first enters the WMN.
It may roam within the service coverage of
WMN. The link-layer handover from one MR to
another follows the IEEE 802.11 handover procedure. Instead of the IEEE 802.11 infrastructure mode, an ad hoc mode is used both for
MRs and MHs in [5, 6]. However, such a restriction is not appropriate for meeting our objective
of realizing a WMN that can be used instead of
WLAN and is hence out of the scope of this
article.
There are two approaches to supporting mobility management for the Layer 3 WMN. In the first approach, each MH has two IP addresses, one representing MH identification and the other representing the location of the MH, i.e., the current point of attachment of the MH to the WMN (one of the MRs), as in Mobile IP (the two-tier addressing model). Each MR configures an individual Extended Service Set (ESS) composed of one Basic Service Set (BSS). When an MH roams from one MR to another in the same WMN, a Layer 2 handover occurs. In addition, the MH performs a new DHCP request and response to configure the IP address representing its location (care-of address). In the second approach, each MH has a single IP address, and all MRs cooperatively configure a single ESS. MHs keep their IP address constant (the single addressing model) during MR-to-MR roaming, so a link-layer handover alone is sufficient for MHs to realize seamless roaming.
Since IP packet forwarding is performed in
the Layer 3 WMN backbone, the two-tier
Figure 1. An example of wireless mesh network structures (mesh routers (MRs) interconnected by wireless links over a heterogeneous wireless network, a gateway (GW) to the Internet, and mobile hosts (MHs) attached through IEEE 802.11 WLAN).
addressing model is quite straightforward, but MHs need to have functionalities that support the Layer 3 handover [7–9]. On the other hand, in the single addressing model, no special functionalities are required of MHs (client side transparency in [4]). Since our goal is to provide conventional MHs with the same backbone service in the WMN as in a WLAN, only the client side transparency approach seems promising. However, in this approach, special care is required for IP packet forwarding without any dependence on MH functionalities [10–13].
There are three essential design issues in supporting mobility management for the single addressing model.
IP address resolution: Since an IP address is
used to deliver IP packets from one MH to
another through the Layer 3 WMN backbone,
each MR needs to recognize the IP addresses of
the MHs that are associated with it. When an MH
roams from one MR to another, it completes a
link-layer association with a new MR. As a
result, the MR can recognize the MAC address
of the MH. However, it cannot freely obtain the
IP address of the MH. In [14], DHCP servers on
MRs assign IP addresses to the MHs based on
the MAC addresses of the MHs, so that the MR
can infer the IP address of the roamed MH from
its MAC address. However, this prevents efficient IP address assignment, for instance, the
possible use of address aggregation, as mentioned in the next section. How then is the IP
address of the roamed MH obtained?
Location management: Since each MH has only one IP address in the single addressing model and may roam from one MR to another in the Layer 3 WMN backbone, this IP address represents MH identification but does not represent its location, unlike the care-of address used in Mobile IP. How is the location of an MH to which packets are to be delivered identified? In MobileNAT, the address representing MH identification is translated into the address representing the current point of attachment of the MH to the Internet and is used for routing packets to the MH [7, 8]. This technique can be used when an MH communicates only with a correspondent node in the Internet, but it cannot be used for communications within the WMN. On the other hand, a routing protocol operating in the WMN backbone can be used to create and maintain the routing table of each MR, which includes the IP addresses of the MHs
Figure 2. Packet forwarding mechanism (centralized management-Scheme A). Labeled steps: 1: roaming; 2: IP address inquiry; 3: response; 4: IP address notification; 5: route update; 6: packet sending; 7: packet forwarding.
Figure 2. Packet forwarding mechanism (centralized management-Scheme A).
as the destination (flat routing or host routing).
The roaming of MHs updates the routing table
for the related MRs, resulting in a possible high
message overhead consuming wireless bandwidth. How then is efficient location management conducted?
MAC address resolution: Since Layer 3
WMN provides one ESS for MHs, ARP needs
to be supported for it. An MH (source MH) of
one MR may send an ARP request to another
MH (destination MH) belonging to a different
MR. For a straightforward emulation of ARP in
the Layer 3 WMN backbone, a flooding ARP
request over all the MRs of the WMN backbone
is required [12]; this ARP request consumes
wireless bandwidth and should be avoided. In
[13], a kind of proxy ARP was proposed. Upon
receiving the ARP request, the MR replies to
the ARP request from the MH with its own
MAC address. In [14], the MR replies with a
fake MAC address that is uniform on all MRs,
and hence, the source MH does not need to
update its ARP cache when it roams to another
MR. Unfortunately, in both approaches, a problem arises when the source and destination MHs
meet in the same MR later. How then is efficient ARP performed?
Several studies have been conducted on
mobility management in WMNs. However, the
three design issues mentioned above have not
yet been fully discussed. This article presents
frameworks to systematically address these
issues.
ADDRESS PAIR MANAGEMENT ARCHITECTURES
An MH needs to obtain an IP address when it newly joins the WMN, that is, when it establishes a Layer 2 association with one of the MRs of the WMN. Two types of DHCP service can be considered. In type I, an address block reserved for MHs is provided for the entire WMN, and each MH is assigned an IP address from the range of this address block. A DHCP server located in the WMN performs IP address assignment for MHs in cooperation with DHCP relay servers located in the MRs. In type II, the address block is divided into address block units with a predetermined address range. Each MR requests an address block unit from the address management server (AMS) in the WMN backbone. Upon receiving the request, the AMS assigns one or more address block units, in the form of an address prefix, to the requesting MR. An MH is assigned an IP address from the range of the address block units provided to the MR it associates with. (This MR is hereafter referred to as the “home MR.”) Each MR may have a DHCP server to perform efficient DHCP service, although a single DHCP server arrangement is also possible. An MH may roam from one MR to another, but it is expected that the majority of MHs remain in their original location (home MR). For these MHs, the prefixes of their IP addresses represent their locations.
Two MHs in the same Layer 3 WMN use the MAC address to identify the sender and receiver of a frame even if they connect to different MRs, while the MRs in the WMN backbone use the IP address to identify the sender and receiver of the corresponding IP packet included in the frame in order to forward the IP packet. To perform packet forwarding in the WMN backbone, the MAC and IP addresses (an address pair) of each MH and its location (the current MR) need to be systematically managed in the Layer 3 WMN. This article presents four address pair management architectures, the first three of which are further classified into Schemes A and B, to provide a framework for the mapping between the MAC address and the IP address of each MH. This mapping is used to solve two of the design issues mentioned in the previous section: IP address resolution and MAC address resolution. The address pair management architecture thus provides a foundation for the packet forwarding mechanism based on IP address resolution (see the next section) and for the MAC address resolution mechanism (see later sections).
CENTRALIZED MANAGEMENT
In Scheme A, a centralized server maintains the address pair records of all MHs in the WMN [13]. This server is also referred to as the AMS for convenience, but it may be different from the AMS used for IP address assignment. In Scheme B, the AMS additionally maintains the current MR records of all MHs. When an IP address is assigned to an MH, the home MR obtains the address pair and sends it to the AMS. When an MH roams between MRs, the current MR obtains the MAC address of the MH and sends it to the AMS, which returns the IP address of the MH to the current MR (see 1-3 in Fig. 2). In addition, in Scheme B the AMS updates the current MR of the MH (see 1-3 in Fig. 3).
HOME MANAGEMENT
In Scheme A, each MR maintains the address pair records of the MHs for which it is the home MR. In addition, a centralized server maintains the MAC address and home MR pair records of all MHs in the WMN. This server is termed the home server (HS). In Scheme B, each MR additionally maintains the current MRs of the
MHs for which it is the home MR. When an IP
address is assigned to an MH, the home MR
reports the MAC address and the home MR
(itself) of the MH to the HS. When an MH
roams between MRs, the current MR obtains
the MAC address of the MH and sends it to the
HS, which returns the home MR of the MH to
the current MR. The current MR informs the
home MR of the MAC address of the MH and
obtains the IP address of the MH in return. In
addition, in Scheme B the home MR updates
the current MR of the MH [11].
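As an illustration only, the per-MH record kept under these architectures might be represented as follows in C++; the field and type names are assumptions of this sketch, and string-typed addresses are used purely for brevity.

#include <string>
#include <unordered_map>

// Illustrative address pair record. Under Scheme A only the MAC/IP pair is
// kept (by the AMS, the home MR, or every MR, depending on the architecture);
// under Scheme B the current MR of the MH is tracked as well.
struct AddressPairRecord {
    std::string ip;          // MH IP address (identification)
    std::string current_mr;  // used by Scheme B only; empty under Scheme A
};

// Keyed by the MH MAC address, which an MR learns from Layer 2 association.
using AddressPairTable = std::unordered_map<std::string, AddressPairRecord>;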
REPLICATION MANAGEMENT
In Scheme A, each MR maintains the address
pair records of all MHs in the WMN [10, 11]. In
Scheme B, each MR additionally maintains the
current MRs of all MHs. When an IP address is assigned to an MH, the home MR obtains the address pair and sends it to all the other MRs using flooding or multicast protocols. When an MH roams between MRs, the current MR obtains the MAC and IP addresses of the MH from its own address pair records. In Scheme B, when an MH roams between MRs, the current MR obtains the MAC address of the MH and sends it (or its IP address) to all the other MRs to announce the current MR of the MH, and each MR updates the current MR of the MH accordingly (see 1-3 in Fig. 4).
DISTRIBUTED MANAGEMENT
Each MR maintains the address pair records of
MHs with which it has or had an association
[15]. When an MH roams between MRs, the
current MR obtains the MAC address of the
MH. If the current MR does not know the corresponding IP address, it sends an IP address
request that includes the MAC address of the
MH to the neighboring MRs using an expanding ring search. In this ring search, the
maximum number of searches is given and the
maximum number of hops per search increases
with each search trial [15]. The neighboring MR
that maintains the address pair record of the
requested MH returns the IP address of the MH
to the requesting MR (see 1–3 in Fig. 5). Assuming the MH’s typical roaming speed from one
MR to another, it is highly probable that one of
the one-hop neighboring MRs has the requested
address pair record. The maximum number of
hops in the first search may thus be limited to
one or two hops, and the first search is expected
to almost always succeed.
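The expanding ring search could be sketched in C++ as follows; the query callback stands in for the platform's real request/reply signaling, and the default limits follow the one-to-two-hop observation above, so the function name and signature are assumptions.

#include <functional>
#include <string>

// Hypothetical expanding ring search for IP address resolution in the
// distributed architecture: the hop limit of the request grows with each
// search trial until a neighboring MR holding the address pair record answers.
bool ResolveIpAddress(
    const std::string& mh_mac, std::string* mh_ip,
    const std::function<bool(const std::string&, int, std::string*)>& query,
    int max_searches = 3, int first_hop_limit = 1) {
    int hop_limit = first_hop_limit;       // one or two hops usually suffice
    for (int trial = 0; trial < max_searches; ++trial) {
        if (query(mh_mac, hop_limit, mh_ip))
            return true;                   // some neighbor returned the IP
        ++hop_limit;                       // widen the ring and try again
    }
    return false;
}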
PACKET FORWARDING MECHANISM
Consider IP packet forwarding between MHs in
a Layer 3 WMN backbone. The MH that sends
the packets and its current MR are termed the
source MH and source MR, respectively. The
MH that receives the packets and its current
MR are termed the destination MH and destination MR, respectively. The source and destination MHs may be accommodated in the same
MR or in different MRs. In the former case,
each frame that conveys an IP packet is relayed
to the destination MH at Layer 2 using the AP
capability of the MR, as is usual in WLAN. In
the latter case, packet forwarding is performed
Figure 3. Packet forwarding mechanism (centralized management-Scheme B). Labeled steps: 1: roaming; 2: IP address inquiry; 3: response / current MR update / current MR inquiry; 4: packet sending; 6: response; 7: packet forwarding.
in the manner of either flat or hierarchical routing. Flat routing is employed in centralized management-Scheme A, home management-Scheme A, replication management-Scheme A, and distributed management: the MRs have route entries for destination MHs in their routing tables. Efficient location management in these schemes is one of the design issues mentioned in the second section. Hierarchical routing is employed in centralized management-Scheme B, home management-Scheme B, and replication management-Scheme B: the source MR first identifies the destination MR of the destination MH and then sends the packets to the destination MR by means of IP-in-IP encapsulation or an IPv6 routing header (tunneling). The destination MR then forwards the received packets to the destination MH. The MRs keep route information for destination MRs, not for destination MHs, in their routing tables.
In both flat and hierarchical routing, a routing protocol runs in the WMN backbone. In flat routing, MH information (IP addresses) is explicitly included in the routing messages for route creation and update, while in hierarchical routing it is not. Mobile ad hoc network (MANET) routing protocols are reasonable candidates for the WMN backbone, since they support wireless multihop packet forwarding. MANET routing protocols are generally classified into proactive and reactive protocols, and either type can be used in the WMN backbone. The choice of the appropriate routing depends on the scale of the WMN, traffic characteristics, and other conditions.
Next, the packet forwarding mechanism for
each of the address pair management approaches is described.
CENTRALIZED MANAGEMENT
Scheme A — Each MR periodically notifies the
IP address information of the MHs that associate with it to other MRs in routing messages
(proactive routing) or in response to the route
request (reactive routing). If IP address assignment type II is used in proactive routing, it does
not have to advertise the individual IP address information of the associated MHs for which it is the home MR. Instead, it advertises the prefix information of the address block units that it owns. As a result, the amount of MH IP address information included in the routing messages is substantially reduced, yielding efficient location management. A source MR can then calculate or discover a route to the AMS and to the destination MH by means of a routing protocol that runs in the WMN backbone and forwards the packet to the destination MH (see 4–7 in Fig. 2).
Scheme B — Each MR can calculate or discover a route to the AMS and to other MRs by means of a routing protocol that runs in the WMN backbone. If the source MR does not hold the current MR for a received packet, it sends a location request that includes the IP address of the destination MH to the AMS, which then returns the current MR of the destination MH. The source MR then forwards the packet to the current MR through tunneling (see 4–7 in Fig. 3).
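A small C++ sketch of the source-MR forwarding step in this scheme is given below. QueryAms and Tunnel are placeholders for the actual location request and IP-in-IP/IPv6 routing header tunneling; the local record of current MRs follows the description above, while the exact data structures are assumptions.

#include <functional>
#include <string>
#include <unordered_map>

// Source-MR forwarding in centralized management-Scheme B: look up the
// current MR of the destination MH (querying the AMS on a miss) and tunnel
// the packet to that MR.
struct MhPacket { std::string dst_mh_ip; std::string payload; };

void ForwardSchemeB(
    const MhPacket& pkt,
    std::unordered_map<std::string, std::string>& current_mr_records,
    const std::function<std::string(const std::string&)>& QueryAms,
    const std::function<void(const std::string&, const MhPacket&)>& Tunnel) {
    auto it = current_mr_records.find(pkt.dst_mh_ip);
    std::string current_mr =
        (it != current_mr_records.end()) ? it->second : QueryAms(pkt.dst_mh_ip);
    current_mr_records[pkt.dst_mh_ip] = current_mr;  // remember the location
    Tunnel(current_mr, pkt);                          // encapsulate and send
}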
HOME MANAGEMENT
Scheme A — The route update is performed in
a manner similar to that in centralized management-Scheme A.
Scheme B — Each MR can calculate or discover a route to other MRs by means of a routing protocol that runs in the WMN backbone. In Method 1, if the source MR does not hold the current MR for a received packet, it sends a location request that includes the IP address of the destination MH to its home MR, which returns the current MR of the destination MH. The source MR then forwards the packets to the current MR through tunneling. In Method 2, the source MR sends the packets to the home MR of the destination MH, after which the home MR forwards the packets to the current MR; both the sending and the forwarding of the packets employ tunneling. To directly resolve the home MR from the IP address of the destination MH, IP address assignment type II is assumed above. A kind of home management-Scheme B with Method 2 is termed the transparent MIP (Mobile IP) in [11].
REPLICATION MANAGEMENT
Scheme A — The route update is performed in a manner similar to that in centralized management-Scheme A.
Scheme B — Each MR can calculate or discover a route to other MRs by means of a routing protocol that runs in the WMN backbone. The source MR owns the record of the current MR of the destination MH and forwards the packet to the current MR through tunneling (see 4–5 in Fig. 4).
Figure 4. Packet forwarding mechanism (replication management-Scheme B). Labeled steps: 1: roaming; 2: current MR notification; 3: current MR update; 4: packet sending; 5: packet forwarding.
DISTRIBUTED MANAGEMENT
The route update is performed in a manner similar to that in centralized management-Scheme A (see 4-6 in Fig. 5).
MAC ADDRESS
RESOLUTION MECHANISM
In WLAN, a source MH broadcasts an ARP
request that includes its MAC and IP addresses
and the IP address of the destination MH, when
the MAC address of the destination MH does
not exist in its ARP table. Upon receiving the
ARP request, the destination MH sends back an
ARP reply, which includes its MAC and IP
addresses, to the source MH and updates its
ARP table with regard to the source MH. The
source MH updates its ARP table when it
receives the ARP reply.
Similarly, ARP needs to be supported in the
Layer 3 WMN to provide the ESS service to
MHs. The receiver MAC address of the frame
originating from the source MH and conveying
an IP packet to be sent to the destination MH is
thus set to the MAC address of the destination
MH. Consider the case that the source and destination MRs are different. In such a case, IP
packets received from the source MH are forwarded from the source MR to the destination
MR without using the MAC address of the destination MH set in the receiver MAC address of
the frames originating from the source MH.
Thus, one may consider that faithful (correct)
address resolution is not necessary and that
instead of returning the actual MAC address of
the destination MH, a dummy MAC address is
returned by the source MR [13, 14]. However,
unfortunately, this consideration is inaccurate,
because if the source MH and destination MH
later meet at the same MR, the frames cannot be relayed (bridged) at Layer 2 in the MR;
therefore, Layer 3 forwarding is required, which
unnecessarily increases the Layer 3 processing
load of the MR.
Next, the ARP mechanism for all the centralized and distributed management approaches is
described under the assumption that the route
information to the destination MH or the destination MR is proactively calculated or reactively
discovered. Due to space constraints, the ARP
IEEE Communications Magazine • July 2011
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
A
BEMaGS
F
Communications
IEEE
A
BEMaGS
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
mechanism for the other approaches is not discussed. The proposed ARP mechanism returns faithful ARP replies while avoiding ARP request multicast in the WMN backbone.
CENTRALIZED MANAGEMENT
Scheme A — When a source MR receives an
ARP request and contains the reply information
(MAC address of the destination MH) in its
ARP table, it returns the ARP reply on behalf of
the destination MH. Otherwise, it sends an ARP
request packet (IP packet that includes the ARP
request message) to the AMS. Upon receiving
this request, the AMS returns an ARP reply
packet (IP packet that includes the ARP reply
message) to the source MR. When the source
MR receives this reply, it updates its ARP table
and returns the ARP reply to the source MH on
behalf of the destination MH. It also sends a
gratuitous ARP reply packet (IP packet that
includes the gratuitous ARP message) to the
destination MH. The destination MR intercepts
this ARP reply packet, updates its ARP table,
and sends the gratuitous ARP reply to the destination MH on behalf of the source MH. The
source and destination MHs update their ARP
tables when they receive the ARP and gratuitous
ARP replies, respectively.
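A minimal sketch of this proxy-ARP behavior at the source MR is given below, assuming an in-memory stand-in for the AMS address-pair table; the helper names and message dictionaries are illustrative, not from the article.

```
# Hypothetical proxy-ARP logic at a source MR under centralized
# management-Scheme A; the AMS table and message formats are illustrative.

class SourceMRProxyArp:
    def __init__(self, ams_arp_table):
        self.arp_table = {}                    # IP -> MAC known at this MR
        self.ams_arp_table = ams_arp_table     # stand-in for the AMS lookup

    def on_arp_request(self, src_ip, src_mac, dst_ip):
        self.arp_table[src_ip] = src_mac       # learn the requester's pair
        dst_mac = self.arp_table.get(dst_ip)
        if dst_mac is None:
            # ARP request packet to the AMS; the AMS answers with an ARP reply packet.
            dst_mac = self.ams_arp_table[dst_ip]
            self.arp_table[dst_ip] = dst_mac
        # Faithful ARP reply to the source MH on behalf of the destination MH,
        # plus a gratuitous ARP reply packet sent toward the destination MH so
        # that the destination MR can intercept it and update its own table.
        reply = {"to_mac": src_mac, "ip": dst_ip, "mac": dst_mac}
        gratuitous = {"toward_ip": dst_ip, "ip": src_ip, "mac": src_mac}
        return reply, gratuitous

mr = SourceMRProxyArp({"10.0.0.7": "aa:bb:cc:00:00:07"})
print(mr.on_arp_request("10.0.0.3", "aa:bb:cc:00:00:03", "10.0.0.7"))
```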
Scheme B — The procedure is the same as that
employed in Scheme A. The difference is in the
sender and receiver of the gratuitous ARP. In
Scheme A, the source MR sends a gratuitous
ARP reply packet to the destination MH, while
in Scheme B, the AMS sends a gratuitous ARP
reply packet to the destination MR, since the
AMS has the current MR record of the destination MH. Upon receiving the ARP reply, the
destination MR updates its ARP table and sends
the gratuitous ARP reply to the destination MH
on behalf of the source MH.
DISTRIBUTED MANAGEMENT
When a source MR receives an ARP request
and it contains the information in its ARP table,
it returns the ARP reply on behalf of the destination MH; otherwise, it sends an ARP request
packet to the destination MH. The destination
MR intercepts this ARP request, updates its
ARP table, and returns an ARP reply packet to
the source MR. Upon receiving this reply, the
source MR updates its ARP table and returns
the ARP reply to the source MH on behalf of
the destination MH. The destination MR also
sends a gratuitous ARP reply to the destination
MH on behalf of the source MH. The source
and destination MHs update their ARP tables
when they receive the ARP and gratuitous ARP
replies, respectively. It should be noted that this
method can also be employed in centralized
management-Scheme A and home management-Scheme A.
DESIGN GUIDELINES
The mechanisms and features of address pair
management architectures for realizing Layer 3
WMN that supports client side transparent
mobility management are summarized in Table 1.
Figure 5. Packet forwarding mechanism (distributed management).
The requirements and guidelines in designing
and selecting the Layer 3 WMN architectures
with consideration of their applicability to WMN
scale and client mobility are given below.
BASIC REQUIREMENT
The mobility management architectures should
satisfy the following two requirements:
• Data frames that are sent from one MH to
another belonging to the same MR should
be relayed in Layer 2. Note that this condition is always satisfied in the conventional
WLAN, where AP is used to bridge data
frames in the BSS. If Layer 3 processing is
required to relay data frames within the
MR of WMN, the MR requires higher
computing power than the AP of the
WLAN and may act as a bottleneck to processing.
• ARP request message flooding or multicast
should be avoided to preserve the precious
wireless bandwidth of the WMN.
All of the four basic architectures presented
in this article satisfy the requirements mentioned above and are recommended as the basic architecture options in designing Layer 3 WMN.
ARP MESSAGE OVERHEAD
The ARP message overhead within each BSS is
common in all architectures. In the WMN backbone, the ARP request and reply messages
require two-way unicast message propagation,
while the gratuitous ARP reply requires oneway unicast message propagation; the former
message overhead is approximately double the
latter one. The relative cost for the ARP message overhead is shown in Table 1, assuming
that the number of hops from the source MR
to the AMS, to the home MR, or to the destination MR is the same, and that the two-way
unicast message propagation cost is 1 unit.
Since the ARP message traffic is usually smaller than the data packet traffic, the ARP cost
difference between different architectures may
not be the primary factor for selecting the
architecture.
Table 1. Comparison between address pair management architectures, covering centralized management (Schemes A and B), home management (Schemes A and B), replication management (Schemes A and B), and distributed management with respect to address pair location, current MR location, address pair and current MR management message overhead (on joining and on roaming), ARP message overhead (ARP request and reply, and gratuitous ARP reply, with relative costs), routing message overhead, forwarding increase, and handover delay.
CENTRALIZED MANAGEMENT VERSUS
HOME MANAGEMENT
Among the four basic architecture approaches,
centralized management and home management
are similar since both these approaches require a
centralized server, the AMS and the HS, respectively.
Two successive inquiries for resolving the home
MR and IP address are required in the case of
roaming in home management, which is its main
drawback as compared to centralized management. Home management may thus be excluded
from the architecture options.
SCHEME B OPTION IN
MANAGEMENT ARCHITECTURES
In Scheme B, the current MR of each MH is
additionally maintained in the AMS in case of
centralized management and in all the MRs in
case of replication management. In centralized
management-Scheme B, current MR notification
is performed together with IP address inquiry to
the AMS, which is also necessary in Scheme A.
Therefore, Scheme B is almost the same as
Scheme A in terms of message overhead. On the
other hand, in replication management, IP
address inquiry is not necessary and current MR
notification needs to be independently provided
for Scheme B. Therefore, Scheme B is inferior
to Scheme A in terms of message overhead, and
the advantage of Scheme B over Scheme A is
debatable. Replication management-Scheme B
may thus be excluded from the architecture
options.
SCHEME A VERSUS SCHEME B IN
CENTRALIZED MANAGEMENT
In Scheme A, routing overhead and handover
delay significantly depend on inter-MR mobility
(the number of MHs that roam from one MR to
another with respect to time). With high inter-MR mobility, route update needs to be frequently performed to quickly follow the change
in MH locations, thereby increasing the routing
overhead. Otherwise, the route update is
delayed, resulting in increasing handover delay.
On the other hand, in Scheme B, routing overhead and handover delay do not depend on
inter-MR mobility, since route update is not
necessary. Therefore, Scheme B can be superior
to Scheme A when inter-MR mobility is significantly high.
ADDRESS PAIR MANAGEMENT OVERHEAD
The address pair management overhead significantly depends on the MH changing rate (the
number of new MHs joining WMN with time).
Consider the following two cases:
Case 1 — Low MH changing rate: Roaming
overhead is of primary concern since joining
overhead is not significant. In replication management-Scheme A the roaming overhead is
zero, while in the other management approaches it increases
with inter-MR mobility. Replication management-Scheme A is thus an attractive choice in
the case of high inter-MR mobility. However,
the joining overhead may still become significant
when the WMN scale is quite high in replication
management-Scheme A. In such a case, distributed management may also be considered a
suitable option.
Case 2 — High MH changing rate: The joining overhead is of considerable concern. It is
zero in case of distributed management, while it
increases with MH changing rate in case of other
management approaches. In distributed management, the roaming overhead remains almost
unchanged with the WMN scale, since the IP
address inquiry is targeted only at the neighbor
MRs. Distributed management is thus superior
to other management options.
CONCLUSIONS
Wireless mesh network (WMN) is an emerging
mobile network technology, and extensive studies are being conducted to realize scalable and
high-performance WMNs. This article identified the essential technical issues in designing
Layer 3 WMN that supports client side transparent mobility management. In this design, the
Layer 2 system for MHs and the Layer 3 system
for MRs need to be seamlessly combined in
MRs. The most fundamental issue in terms of
architecture is to efficiently maintain the
address pair (MAC and IP addresses) and the
location information of each MH that newly
joins and roams in a WMN. Four address pair
management architectures — centralized management, home management, replication management, and distributed management — are
presented. The first three management architectures are further classified into Scheme A
and Scheme B, which employ flat and hierarchical routing, respectively. This article discusses
the use of these management architectures as
the frameworks for realizing the essential mechanisms of the Layer 3 WMN — packet forwarding and address resolution. Finally, the
requirements and guidelines in designing and
selecting Layer 3 WMN architectures supporting client side transparent mobility management are given, considering their applicability
to WMN scale and client mobility based on the
qualitative evaluation of message overhead and
handover performance. This article can be used
to lay a foundation and provide frameworks for
designing and developing the practical Layer 3
WMN.
REFERENCES
[1] I. F. Akyildiz, X. Wang, and W. Wang, “Wireless Mesh
Networks: A Survey,” Computer Networks, vol. 47, no.
4, pp. 445–87, 2005.
[2] K. Mase et al., “A Testbed-based Approach to Develop
Layer 3 Wireless Mesh Network Protocols,” Proc. Tridentcom 2008.
[3] J. Xie and X. Wang, “A Survey of Mobility Management
in Hybrid Wireless Mesh Network,” IEEE Network, vol.
22, no. 6, 2008, pp. 34–40.
[4] V. Mirchandani and A. Prodan, “Mobility Management
in Wireless Mesh Networks,” Comp. Commun. and Networks, 2009, pp. 349–78.
[5] S. Speicher, “OLSR-FastSync: Fast Post-Handovers Route
Discovery in Wireless Mesh Networks,” Proc. IEEE Vehic.
Tech. Conf. (VTC’06-Fall), 2006, pp. 1–5.
[6] Y. Amir et al., “Fast Handover for Seamless Wireless
Mesh Networks,” Proc. ACM Int’l. Conf. Mobile Syst.,
Apps., Services (MobiSys), 2006, pp. 83–95.
[7] M. Buddhikot et al., “MobileNAT: A New Technique for
Mobility Across Heterogeneous Address Spaces,” ACM
Mobile Networks and Applications, vol. 10, no. 3,
2005, pp. 289–302.
[8] K. N. Ramacandran et al., “On the Design and Implementation of Infrastructure Mesh Networks,” Proc. 1st
IEEE Wksp. Wireless Mesh Networks, 2005.
[9] A. M. Srivatsa and J. Xie, “A Performance Study of
Mobile Handover Delay in IEEE 802.11-Based Wireless
Mesh Networks,” Proc. IEEE Int’l. Conf. Commun. (ICC),
2008, pp. 2485–89.
[10] Y. Owada and K. Mase, “A Study on Protocol, Implementation and Throughput Evaluation for Multihop
Wireless LAN,” IEEE Vehic. Tech. Conf. (VTC 2003-Spring), vol. 3, 2003, pp. 1773–77.
[11] V. Navda, A. Kashyap, and S. R. Das, “Design and Evaluation of iMesh: An Infrastructure-Mode Wireless Mesh
Network,” IEEE Int’l. Symp. A World of Wireless, Mobile
and Multimedia Networks (WoWMoM), 2005, pp.
164–70.
[12] L. Iannone and S. Fdida, “MeshDV: A Distance Vector
Mobility-Tolerant Routing Protocol for Wireless Mesh
Networks,” IEEE ICPS Wksp. Multi-hop Ad Hoc Networks: From Theory to Reality (REALMAN), 2005.
[13] H. Wang et al., “A Network-Based Local Mobility Management Scheme for Wireless Mesh Networks,” Proc.
IEEE Wireless Commun. Net. Conf. (WCNC), 2007, pp.
3795–800.
[14] M. Ren et al., “MEMO: An Applied Wireless Mesh Network with Client Support and Mobility Management,”
Proc. IEEE Global Telecom. Conf. (GLOBECOM), 2007,
pp. 5075–79.
[15] N. Azuma, K. Mase, and H. Okada, “A Proposal of
Low-Overhead Routing Scheme for Layer 3 Wireless
Mesh Networks,” Int’l. Symp. Wireless Pers. Multimedia
Commun., 2009.
BIOGRAPHY
KENICHI MASE [F] ([email protected]) received the B.E., M.E., and Dr. Eng. degrees in electrical engineering
from Waseda University, Tokyo, Japan, in 1970, 1972, and
1983, respectively. He joined Musashino Electrical Communication Laboratories of NTT Public Corporation in 1972.
He was Executive Manager, Communications Quality Laboratory, NTT Telecommunications Networks Laboratories
from 1994 to 1996 and Communications Assessment Laboratory, NTT Multimedia Networks Laboratories from 1996
to 1998. He moved to Niigata University in 1999 and is
now Professor, Graduate School of Science and Technology, Niigata University, Niigata, Japan. He received IEICE
best paper award for the year of 1993 and the Telecommunications Advanced Foundation award in 1998. His
research interests include communications network design
and traffic control, quality of service, mobile ad hoc networks and wireless mesh networks. He is an IEICE Fellow.
ACCEPTED FROM OPEN CALL
Preamble Design, System Acquisition,
and Determination in Modern OFDMA
Cellular Communications: An Overview
Michael Mao Wang, Avneesh Agrawal, Aamod Khandekar, and Sandeep Aedudodla
ABSTRACT
The wide choices of deployment parameters
in next generation wireless communication systems, such as flexible bandwidth allocation, synchronous/asynchronous modes, FDD/TDD,
full/half duplex, and configurable cyclic prefix
duration, etc., present significant challenges in
preamble and system acquisition design. This
article addresses these challenges, as well as the
solutions provided by the next generation wireless standards, such as the IEEE 802.20 Mobile
Broadband Wireless Access (MBWA) standard
and the 3GPP LTE standard. The design facilitates the maximal flexibility of the system configuration and yet has low overhead, low acquisition
latency, and low complexity. System acquisition
and determination techniques are also discussed
in detail.
INTRODUCTION
In a wireless communication system, an access
terminal must acquire the system information
before it can access the network. A preamble is
a special signal for system acquisition, i.e., to
provide a means for the access terminal to
acquire the system parameters that are necessary to access the system. That is, the preamble
serves as a “gateway” for terminals to gain
access to the network. A wireless network periodically transmits a preamble signal for terminals to acquire. The goals of the preamble
design are low overhead and low acquisition
complexity. However, these two goals can be
hard to achieve simultaneously for a highly flexible communication system, as high flexibility typically incurs high overhead and/or high
acquisition complexity. Flexible configuration
for variable deployment requirements is one of
the important and highly desirable features for
the next generation cellular communication systems (or wireless wide area networks). However,
the wide choices of deployment parameters and
modes present significant challenges in acquisition system design. This article addresses these
challenges and provides the design solution that
has been adopted by the IEEE 802.20 Mobile
Broadband Wireless Access standard (MBWA)
[1, 2] as well as other emerging wireless WAN
standards [3–5] such as 3GPP LTE. Without
loss of generality, this article uses IEEE MBWA
as a design paradigm.
The IEEE MBWA standard is the new high
mobility standard developed by the 802.20
Working Group of the 802 Committee [1], to
enable high-speed, reliable, cost-effective
mobile wireless connectivity. IEEE MBWA is a
standard optimized to provide IP-based broadband wireless access in a mobile environment. MBWA can operate in a wide range of deployments, thereby affording network operators superior flexibility in optimizing their networks. This standard is targeted for use in a
wide variety of licensed frequency bands and
regulatory environments. For example, MBWA
can operate in a wide range of bandwidths
(2.5–20 MHz); this flexibility enables an operator to customize a MBWA system for the spectrum available to the operator. MBWA has a
unified design for full and half duplex FDD and
TDD and a scalable bandwidth from 2.5 to 20
MHz for variable deployment spectrum needs.
Similar to MBWA, LTE supports system bandwidth from 1.4 to 20 MHz.
IEEE MBWA employs orthogonal frequency
division multiple access (OFDMA) technology,
which utilizes the orthogonal frequency division
multiplexing (OFDM) as its key modulation
technique [6]. The subcarrier spacing is fixed at 9.6 kHz, corresponding to an OFDM symbol duration of T_s = 104 μs. The length of the cyclic prefix of an OFDM symbol is variable, T_CP = N_CP T_s/16 = 6.51 N_CP μs, where N_CP = 1, 2, 3, 4.
This allows the operator to choose a cyclic prefix
length that is best suited to the expected delay
spreads in the deployment.
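The four resulting cyclic prefix durations can be checked directly from the formula above; the short script below assumes only the 9.6 kHz subcarrier spacing stated in the text.

```
# Cyclic prefix durations implied by T_CP = N_CP * T_s / 16 at 9.6 kHz spacing.
T_s_us = 1e6 / 9600.0                    # OFDM symbol duration, about 104.17 us
for n_cp in (1, 2, 3, 4):
    print(n_cp, round(n_cp * T_s_us / 16, 2), "us")   # 6.51, 13.02, 19.53, 26.04 us
```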
At an MBWA transmitter, the transmitted
data are organized as superframes. For a MBWA
access network, a superframe consists of a
preamble followed by 25 PHY frames in the
FDD mode or 12 PHY frames in TDD mode.
Both the preamble and the PHY frames consist
of eight OFDM symbols. The preamble is used
by an access terminal for the purpose of system
acquisition while the PHY frames are used for
data traffic transmission. In FDD half duplex
mode, each PHY frame is separated by a guard
interval (3Ts/4), whereas there is no separation
in full duplex mode. In TDD mode, one or two
PHY frames are transmitted continuously on the
forward link and one PHY frame is transmitted
continuously on the reverse link, resulting in a
1:1 or 2:1 partitioning. In a typical 1:1 partitioning, the 12 forward and the 12 reverse PHY
frames are interlaced and separated by guard
intervals, 3Ts/4 between a forward and a reverse
PHY frame, and 5Ts/32 between a reverse and a
forward frame [1]. It is clear that the superframe
time duration depends not only on the cyclic
prefix length but the system operating mode as
well. This will affect the preamble design, as will
be seen later.
There is significantly more flexibility in
MBWA as well as other emerging wireless technologies (e.g., 3GPP LTE) compared to existing
3G systems. Flexible parameters that can affect
preamble structure are:
•Bandwidth allocation which corresponds to
a PHY frame FFT size of NFFT = 512/1024/2048
and the number of guard tones. This is a highly
desirable feature that allows the operator to
deploy the network in frequency bands with various sizes of available bandwidths.
•FDD/TDD modes: FDD includes full and
half duplex while TDD includes choice of TDD
partitioning. This feature allows the operator to
deploy the system in either FDD bands (paired)
or TDD bands. FDD half duplex mode allows
the system to serve lower-end terminals that only
support half duplex communication. Configurable TDD partitioning allows the operator to
fine tune the network performance according to
the uplink and downlink traffic loadings.
•Cyclic prefix length (four possible values):
This allows the operator to choose a cyclic prefix
length that is best suited to the expected delay
spreads in the deployment. For example, in an
urban area where a short delay spread is expected, a short CP can be selected, whereas in a suburb or a rural area where a long delay spread is
typical, a long CP may then be used.
•Synchronous/asynchronous modes: MBWA
systems can operate in synchronous mode,
where different sectors have access to a common timing reference such as the Global Positioning System (GPS), and asynchronous mode,
where they do not. For example, for base stations mounted on a high-rise tower, synchronous mode can be selected to achieve the
best overall network performance. For base stations situated inside a building where synchronization to the network is not possible,
asynchronous mode may be activated. The flexibility in MBWA system configuration requires
that the preamble be structured to provide an
efficient means for system acquisition and
determination for an access terminal.
The widely variable bandwidths used in
MBWA wireless systems, as well as the wide
choices of deployment parameters, present significant challenges in acquisition system design.
This article describes these challenges, as well as
the solutions provided by the MBWA standard.
The design scheme applies to 3GPP LTE and
3GPP2 UMB as well [4].
PREAMBLE STRUCTURE
The IEEE MBWA preamble consists of eight OFDM symbols. The first OFDM symbol is used to transmit the Primary Broadcast Control Channel (PBCCH), while the next four OFDM symbols are used to transmit the SBCCH (Secondary Broadcast Control Channel). The remaining three OFDM symbols carry Acquisition Pilots 1, 2, and 3. The structure of the superframe preamble is depicted in Fig. 1.
Figure 1. Superframe and preamble structure in FDD/TDD mode.
The ordering of the preamble OFDM symbols, i.e., placing the five PBCCH/SBCCH OFDM symbols before the Acquisition Pilots, provides sufficient receiver AGC (automatic gain control) convergence time for the Acquisition Pilots during initial acquisition, since the Acquisition Pilots are acquired first, as will be seen later. As shown in Fig. 2, the received preamble power can be very different from that of the PHY frames. For example, in the TDD mode, the PHY frame before the preamble belongs to the reverse link; the receive power can be either very high or very low depending on the position of the nearby terminals. Therefore, the receiver AGC gain at the last PHY frame of the superframe can differ greatly from the gain required for sampling Acquisition Pilot 1. Hence, the receiver AGC may require a large amount of time to converge to the right gain for acquiring Acquisition Pilot 1. The five OFDM symbols in front of Acquisition Pilot 1 provide sufficient convergence time for the AGC. Once the Acquisition Pilots are acquired, the same AGC gain used for the Acquisition Pilots can be set for the sampling of the PBCCH and SBCCH OFDM symbols. One shortcoming of this design is the increased acquisition delay. After the Acquisition Pilots are acquired, the receiver has to wait until the next preamble in order to decode the PBCCH and SBCCH packets unless the whole preamble (eight OFDM symbols) is buffered. Another shortcoming is that, in order for the receiver to locate the next preamble, further information (e.g., the FDD/TDD mode) that affects the superframe length must be embedded in the Acquisition Pilots.
Figure 2. Illustration of terminal receiving power at various portions of the superframe.
The preamble transmission is limited to the
central 5MHz of the system bandwidth, even
when the system bandwidth of the deployment is
more than 5 MHz, as is illustrated in Fig. 3. This
design has several advantages. First and foremost, it significantly simplifies the acquisition
complexity as the system bandwidth or FFT size,
N FFT, may not be known a priori to the access
terminal (AT) during initial acquisition. Second,
it prevents “energy dilution” of time-domain signal taps. Since there are more distinguishable
channel taps for a given channel in wider bandwidths, each such tap has lower energy when
compared to a channel tap in a narrowband signal. During the acquisition of Acquisition Pilot 1,
multi-paths are typically not combined for complexity reasons. This phenomenon, which we
refer to as “energy dilution” can degrade the
performance of any algorithm that attempts to
look for channel taps in the time-domain.
Restricting the Acquisition pilots to 5 MHz mitigates the effect of energy dilution. Third, it lowers complexity by allowing for correlations with
shorter sequences (512 length sequences, as
opposed to, for example, 2048 length sequences
in 20 MHz). Finally, it helps in a faster initial
acquisition, as an access terminal roaming
between deployments of at least 5 MHz can
always tune to the central 5 MHz and perform
acquisition. The disadvantage of this design is
the loss of wideband frequency diversity as
opposed to a wideband preamble design. For
this reason, in some configurations, the narrowband preamble is allowed to hop among the
whole frequency band at the cost of increased
receiver complexity. Another disadvantage is the
loss of processing gain since wider bandwidth
acquisition pilots have larger processing gain and
thus can be detected further away from the
transmitting base station. This is useful in mobile
positioning as well as early discovery of neighbor
cells. Since acquisition complexity is of more
importance for a practical system, both MBWA
and LTE have selected the narrowband acquisition pilot design. The preamble bandwidth of
LTE is limited to 1.28 MHz (the minimum bandwidth supported by LTE) independent of the
system bandwidth.
The minimum bandwidth supported by
MBWA is 2.5 MHz. The natural choice for the
preamble bandwidth seems to be 2.5 MHz. However, MBWA optimized the design to 5 MHz
bandwidth, believing that most next-generation
system deployments have at least 5 MHz band-
width. The MBWA preamble thus has better cell
penetration/coverage than LTE as a result of
larger processing gain due to the wider preamble
bandwidth.
ACQUISITION PILOT 1
Acquisition Pilot 1 is used for initial coarse timing (the position of the Acquisition Pilot 1 symbol) acquisition and frequency offset (between
the access terminal and the access network) estimation. Since it is the very first pilot that a terminal searches for, its waveform
• Should be made as simple as possible (i.e., with few unknown parameters) to reduce the searching complexity.
• Should tolerate frequency offset for detection since the access terminal’s frequency
may not be synchronized to the access network at the beginning of the acquisition
process.
Acquisition Pilot 1 is transmitted on the 5th
OFDM symbol in the preamble, spans the central subcarriers, and occupies every fourth subcarrier over this span, resulting in four copies of
the same waveform in time domain. The use of
four replicas of the same waveform instead of
just one long waveform is to reduce the waveform period such that the access terminal’s frequency offset (relative to the access network)
has less effect on the correlation performed by
the access terminal during the detection for the
Acquisition Pilot 1 waveform. In addition, having four periods is helpful for frequency offset
estimation [9].
Acquisition Pilot 1 uses a frequency domain
complex sequence, i.e., the generalized chirp-like
(GCL) polyphase sequence [10] to modulate the
subcarriers. One important property of a GCL
sequence is that its Fourier transform also has a
constant magnitude. Hence, the corresponding
time domain waveform of each period has a constant magnitude that helps improve peak to
average power ratio (PAPR). Low PAPR waveforms allow for a higher power amplifier setting
at the transmitter, thereby extending coverage. It
should be noted here that the coverage requirements for acquisition are typically higher than
that for data traffic, since a mobile terminal
should be able to acquire a sector before it is in
the data coverage of a sector, thereby allowing
for seamless handoff to that sector if required.
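The following numpy sketch illustrates these two properties under simplified assumptions (a 512-subcarrier symbol, a GCL/Zadoff-Chu-like sequence of the form given in the Appendix, and an arbitrary root u = 5): placing the sequence on every fourth subcarrier produces four identical periods in time, and the per-period envelope is close to constant, which keeps the PAPR low. It is an illustration only, not the standard's exact numerology.

```
import numpy as np

# Illustration only: a GCL-like sequence exp(-j*pi*u*k*(k+1)/N_G) on every
# fourth of 512 subcarriers. The root u and the absence of guard subcarriers
# are assumptions made for this sketch.
N_FFT, N_G, u = 512, 127, 5
k = np.arange(N_FFT // 4)                       # the 128 occupied subcarriers
gcl = np.exp(-1j * np.pi * u * k * (k + 1) / N_G)

X = np.zeros(N_FFT, dtype=complex)
X[::4] = gcl                                    # every fourth subcarrier is modulated
x = np.fft.ifft(X)                              # time-domain pilot symbol (no CP)

period = x[:N_FFT // 4]
print(np.allclose(x, np.tile(period, 4)))       # True: four identical periods

p = np.abs(period) ** 2                         # near-constant envelope -> low PAPR
print("PAPR (dB):", round(10 * np.log10(p.max() / p.mean()), 2))
```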
Even if the access terminal detects Acquisition Pilot 1 (i.e., the terminal has located the
Acquisition Pilot 1), the terminal still does not
have the OFDM symbol boundary since the
cyclic prefix length still remains unknown. Consequently, the terminal cannot determine the
boundary of the following OFDM symbol
(Acquisition Pilot 2). Therefore, Acquisition
Pilot 1 should carry CP length information. This
can be done by using different GCL sequences
to indicate various cyclic prefix lengths. Since
MBWA allows the use of four different cyclic
prefix lengths (6.51, 13.02, 19.53, and 26.04 μs),
four different waveforms are needed to indicate
the four cyclic prefix values. Therefore, a terminal needs to perform four hypothesis tests to
determine the cyclic prefix value used in the system. Information on cyclic prefix is necessary for
obtaining symbol boundary, which is needed for
detecting Acquisition Pilot 2.
An alternative process for generating time domain waveforms with a constant profile is to directly use a time domain complex pseudo-noise (PN) sequence (a set of bits generated to be statistically random — see [11] for a detailed description of PN sequences) or a Zadoff-Chu complex sequence (a special case of the GCL sequence, used in LTE) to QPSK modulate a waveform of length 128, which is repeated four times to form Acquisition Pilot 1.
For four values of cyclic prefix lengths, a PN
sequence with four offset values can be used.
Each offset represents a specific cyclic prefix
value. A terminal needs to perform four hypothesis tests on four different PN offsets during the
detection of Acquisition Pilot 1. Since inter-carrier frequency interval is fixed, once cyclic prefix
length is determined, the complete OFDM symbol duration (including cyclic prefix) is known as
well as the PHY frame duration.
An alternative approach to embedding cyclic
prefix information in Acquisition Pilot 1 is to
leave the cyclic prefix hypothesis tests to the
detection of Acquisition Pilot 2, i.e., Acquisition
Pilot 1 does not include cyclic prefix information
(one common waveform). After the detection of
Acquisition Pilot 1, a terminal makes multiple
hypothesis tests on the CP length to determine
the starting position of Acquisition Pilot 2 when
detecting Acquisition Pilot 2. LTE takes this
approach. However, only three cyclic prefix
lengths (4.7, 16.7, and 53.3 μs) are supported.
This is a better design as compared to the
MBWA approach, since there is only one waveform that a terminal needs to search for. The CP
length is determined after Acquisition Pilot 1 is
detected.
To reduce the detection complexity, the same
complex PN sequence of length 512 is used for
the system with bandwidth less than 5 MHz (i.e.,
the number of usable subcarriers is less than
512). In this case, the complex PN sequence is
FFT transformed to frequency domain and is
used to QPSK modulate the 512 subcarriers. The
unusable subcarriers or the guard subcarriers,
are punctured. The effect of the guard subcarrier on PN sequence correlation properties will be
discussed in the next section. The resulting frequency domain sequence is IFFT transformed
back to time domain.
Figure 3. Illustration of narrowband preamble for various system bandwidths.
Acquisition Pilot 1 provides the Acquisition Pilot 1 position and the OFDM symbol duration (and therefore the PHY frame length). The Acquisition Pilot 1 waveform is common to all sectors and therefore does not identify the sector.
ACQUISITION PILOTS 2 AND 3
Acquisition Pilot 2 is a time domain Walsh
sequence/code that carries the sector’s unique
identification, i.e., the Pilot PN offset, which
helps the access terminal to distinguish multiple
sectors in the network. Walsh sequences are
orthogonal binary sequences [11]. The advantage
of Walsh-coding the Pilot PN offset is that the
detection of a Walsh code can be efficiently
implemented using fast Hadamard transforms
(FHT), as will be seen later. The Pilot PN offset
is a 9-bit identifier of a sector. In MBWA and
LTE, a total of 29 = 512 PN offsets are used for
sector identification. The Walsh sequence length
is equal to the preamble FFT size with its index
equal to the 9-bit Sector Pilot PN offset of value
P between 0 and 511 (2^9 – 1). In addition, another complex PN sequence of length 512 (common
to all sectors) is used to scramble the Walsh
sequence. The resulting sequence is transformed
to frequency domain via a 512-point FFT and is
used to modulate all 512 subcarriers except the
guard subcarriers (if the system bandwidth is less
than 5 MHz). The need for the use of a PN
sequence to scramble the Walsh sequence is due
to the fact that Walsh sequences with different
indices possess very different spectral properties
(non white), and the insertion of guard subcarriers may destroy the Walsh code property
depending on the Walsh code’s spectral property. The use of complex PN scrambling of Walsh
sequence spreads the code energy evenly
throughout the spectrum and, therefore, the
insertion of guard carriers has more uniform
effect on all Walsh sequences regardless of the
individual Walsh code’s spectral property.
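A simplified construction of such a pilot symbol is sketched below: a Walsh row selected by the sector Pilot PN offset is scrambled by a common complex PN sequence, taken to the frequency domain, punctured on (illustratively placed) guard subcarriers, and returned to the time domain. The PN generator, guard count, and guard placement are assumptions for this example only.

```
import numpy as np

# Illustrative Acquisition Pilot 2 construction: sector Walsh row, common PN
# scrambling, FFT, guard puncturing, IFFT. Guard count/placement and the PN
# generator are assumptions made for this sketch.

def walsh_row(index, length):
    """Row 'index' of the natural-order Hadamard matrix: (-1)^popcount(index & n)."""
    n = np.arange(length)
    return (-1.0) ** np.array([bin(index & v).count("1") for v in n])

rng = np.random.default_rng(0)
N, pilot_pn_offset, n_guard = 512, 300, 64
common_pn = np.exp(1j * 2 * np.pi * rng.random(N))     # common complex PN scrambler

scrambled = walsh_row(pilot_pn_offset, N) * common_pn  # whitens the Walsh spectrum
X = np.fft.fft(scrambled)                              # frequency-domain view
X[N // 2 - n_guard // 2: N // 2 + n_guard // 2] = 0    # puncture guard subcarriers
ap2 = np.fft.ifft(X)                                   # transmitted time-domain symbol
print(ap2.shape)
```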
Also note that the insertion of guard subcarriers has two effects on the time domain waveform. First, the constant modulus property is
distorted; second, the correlation (cross and
auto) properties of complex PN scrambled Walsh
sequences are also impaired, as shown in Fig. 4.
It is seen that both the cross and auto correlation properties of the Walsh sequence degrade
as more guard carriers are inserted.
Figure 4. Change of the cross/auto correlation CDFs of the PN scrambled Walsh sequence as a result of bandwidth reduction. The figure shows 0, 25 percent, 50 percent, and 75 percent reduction in bandwidth.
One may use different PN sequences, instead
of Walsh sequences, to represent different sector
Pilot PN offsets. However, the detection of
a Walsh sequence can be made more efficient by applying the fast Hadamard transform (FHT), as
opposed to performing PN correlation even with
the help of FFT. This saving can be significant
for the terminal receiver, especially when the
number of selections is large (512 in this case).
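A minimal fast Walsh-Hadamard transform and its use as a bank of 512 Walsh correlators are sketched below; for simplicity the observation is real-valued and already descrambled, which is an idealization of the actual complex, PN-scrambled processing.

```
import numpy as np

# Minimal fast Walsh-Hadamard transform (natural order) used as 512 parallel
# Walsh correlators. The received vector is assumed real and already
# descrambled, a simplification made for this sketch.

def fwht(x):
    a = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            u = a[i:i + h].copy()
            v = a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = u + v, u - v
        h *= 2
    return a

def walsh_row(index, length):
    n = np.arange(length)
    return (-1.0) ** np.array([bin(index & v).count("1") for v in n])

rng = np.random.default_rng(1)
N, true_offset = 512, 123                       # 9-bit sector Pilot PN offset
rx = walsh_row(true_offset, N) + 0.5 * rng.standard_normal(N)
scores = np.abs(fwht(rx))                       # one correlation per Walsh index
print(int(np.argmax(scores)) == true_offset)    # True with very high probability
```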
Like Acquisition Pilot 2, Acquisition Pilot 3 is
also a time domain Walsh sequence, except that
the Walsh sequence is scrambled by the sector-specific complex PN sequence, since the sector PN offset is available to the terminal after the detection of Acquisition Pilot 2. The 9-bit Walsh sequence index carries 9 bits of system information, including the synchronous/asynchronous mode (1 bit), the four least significant bits (LSBs) of the current superframe index, the FDD/TDD mode (1 bit), the full/half duplex mode (1 bit), TDD partitioning (1 bit), the preamble frequency hopping mode, etc., which is necessary for decoding the PBCCH packet.
PBCCH AND SBCCH
After the Acquisition Pilots are detected, system
frequency and coarse timing are established.
OFDM signaling and channel coding can then
be used to convey larger amounts of system
information more efficiently. The PBCCH symbol contains the information packet that is channel coded and OFDM modulated. However,
since the bandwidth information is not yet known
to the access terminal, the PBCCH packet is
coded with very low effective rate. The PBCCH
packet contains the system information, including the complete superframe index (system time)
at the time the PBCCH packet is generated and
deployment-wide static parameters such as total
number of subcarriers, number of guard subcarriers (in units of 16), etc. Each PBCCH packet is
168
Communications
IEEE
A
BEMaGS
F
CRC (12 bits) protected, channel encoded (1/3
code rate), interleaved, repeated, scrambled with
the sector Pilot PN, and QPSK modulated. The
QPSK symbols are then mapped to OFDM subcarriers. One natural way for designing the
QPSK symbol to subcarrier mapping is to map
the symbols only to usable subcarriers, resulting
in a mapping that is bandwidth dependent. A
better design is to map the QPSK symbols to 512
subcarriers even if some of the subcarriers may
not be usable (guard carriers). The guard subcarriers (if any) are then punctured (zeroed out).
This way, the mapping of the modulation symbols to the subcarriers is independent from the
actual bandwidth of the preamble or the number
of guard subcarriers. The PBCCH modulation
symbols can thus be de-mapped without knowing
the number of guard subcarriers, which allows
the PBCCH packets to be decoded without the
full knowledge of the bandwidth.
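The bandwidth-independent mapping can be illustrated as follows: QPSK symbols are always laid onto the same 512 positions and whatever falls on guard subcarriers is zeroed, so the de-mapper never needs the guard count. Guard counts and placement in the sketch are illustrative assumptions.

```
import numpy as np

# Bandwidth-independent PBCCH mapping: QPSK symbols always occupy the same 512
# positions; guard positions are simply punctured. Guard counts/placement here
# are illustrative, not the standard's.

rng = np.random.default_rng(2)
N = 512
bits = rng.integers(0, 2, size=2 * N)
qpsk = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

def map_pbcch(symbols, n_guard):
    X = symbols.copy()                          # fixed symbol-to-subcarrier mapping
    if n_guard:
        X[N // 2 - n_guard // 2: N // 2 + n_guard // 2] = 0   # punctured guards
    return X

# The de-mapper reads the same 512 positions regardless of bandwidth; punctured
# positions simply contribute erased (zero-LLR) code symbols to the decoder.
for n_guard in (0, 64, 128):
    print(n_guard, int(np.count_nonzero(map_pbcch(qpsk, n_guard))))
```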
The static nature of the PBCCH packet
allows the transmission of the PBCCH packet
with low effective coding rate without high overhead. This is achieved by updating the content
of the PBCCH packet every 16 superframes. The
same PBCCH packet is repeatedly transmitted
over 16 consecutive superframes (four repetitions in LTE). That is, the PBCCH packet is
updated when the superframe index modulo 16
is equal to zero or, equivalently, when the four LSBs of the superframe index are equal to zero. This allows efficient incremental redundancy decoding at the
terminal, as will be seen in the next section.
The Secondary Broadcast Channel (SBCCH)
is carried on the OFDM symbols with indices
1 through 4 of the preamble in a superframe.
The SBCCH channel is designed under the
assumption that the PBCCH packet has been
successfully decoded. The system bandwidth
(and therefore the preamble bandwidth) is
known. A more efficient coding (higher code
rate) can then be used for SBCCH, allowing an
SBCCH packet to be much larger than the
PBCCH packet. An SBCCH packet contains further detailed system configuration information
that a terminal needs for accessing the network,
such as the number of effective antennas, common pilot channel hopping mode, common pilot
spacing, and number of sub-trees for SDMA,
etc. It is appended with CRC (12 bits), channel
encoded, interleaved, repeated, scrambled with
sector PN, and QPSK modulated onto usable
subcarriers. The SBCCH packet is updated every
superframe.
PREAMBLE ACQUISITION
The goal of the preamble acquisition is to
acquire the system synchronization information
and the system parameters necessary to access
the system from the preamble at a given carrier
frequency. The IEEE MBWA system acquisition
procedure is summarized in Fig. 5.
ACQUISITION PILOT 1 DETECTION
The MBWA system acquisition begins with
searching for Acquisition Pilot 1. At a given carrier frequency, the access terminal looks for the
Acquisition Pilot 1 signal by correlating the
received signal with each of the four hypotheses
over the duration of at least one superframe until one candidate is detected. That is, a moving sum of one of the four hypothesis correlations corresponding to the four periods of the Acquisition Pilot 1 waveform is performed and compared to a threshold. Without knowing the actual preamble bandwidth, the received signal is first sampled at the full preamble bandwidth (i.e., 5 MHz). Since correlation can be done more efficiently with FFT, a 128-point FFT is applied to each of the four period samples, and spectrum-shaped assuming minimum bandwidth (i.e., zero out the maximum possible guard subcarriers).1 The resulting frequency domain samples are then multiplied with one of the hypothesis Acquisition Pilot 1 waveforms in the frequency domain to get the correlation values. The four correlation values are non-coherently combined across four consecutive FFT windows and compared with a threshold to determine if an Acquisition Pilot 1 is present. The minimum bandwidth of 2.5 MHz assumption results in a maximum 3 dB detection performance loss. Note
that non-coherent combining of four short correlation periods instead of a single long block correlation is due to the potential high frequency
offset between the access terminal and the network in the initial system acquisition. A frequency offset results in phase rotation or shift to the
sampled signals, causing reduced correlation
performance between the received signal and the
hypothesis waveform. The longer the correlation
period, the larger the phase shift and the poorer
the correlation performance. A shorter correlation period thus helps mitigate the frequency
offset effect. However, for the non-initial acquisition case (e.g., neighbor cell search) where
access terminals are synchronized to the network, performing one single block correlation
for the whole OFDM symbol duration is actually
preferred for greater processing gain.
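A toy version of this detector is sketched below: each 128-sample window is correlated against a hypothesis period in the frequency domain, the squared magnitudes of four consecutive windows are summed, and the result is compared with a threshold. The noise level and the threshold are arbitrary illustration values, and spectrum shaping and CP handling are omitted.

```
import numpy as np

# Toy Acquisition Pilot 1 detector: frequency-domain correlation of each
# 128-sample window with a hypothesis period, non-coherent combining of four
# consecutive windows, threshold test. Noise level and threshold are arbitrary.

def detect_ap1(rx, hypothesis_period, threshold):
    L = len(hypothesis_period)                       # 128 samples per period
    H = np.conj(np.fft.fft(hypothesis_period))
    n_win = len(rx) // L
    corr = np.empty(n_win, dtype=complex)
    for w in range(n_win):
        R = np.fft.fft(rx[w * L:(w + 1) * L])
        corr[w] = np.sum(R * H) / L                  # window-vs-period correlation
    metric = np.convolve(np.abs(corr) ** 2, np.ones(4), mode="valid")
    peak = int(np.argmax(metric))                    # moving sum over 4 windows
    return peak, bool(metric[peak] > threshold)

rng = np.random.default_rng(3)
period = np.exp(1j * 2 * np.pi * rng.random(128))    # hypothesis waveform
rx = 0.3 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))
rx[1024:1024 + 512] += np.tile(period, 4)            # pilot buried in noise
print(detect_ap1(rx, period, threshold=4000.0))      # peak near window 8, True
```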
What does a terminal gain from the detection
of Acquisition Pilot 1? First, the terminal’s frequency offset can be estimated from the four
periods of the Acquisition Pilot 1 and corrected
from the terminal’s local frequency synthesizer.
A frequency offset estimation method using four
signal segments is described in [9]. Second, the
access terminal gains the knowledge of the cyclic
prefix duration. Therefore, the OFDM symbol
boundary is determined. The Acquisition Pilot 2
can now be located as a result. A coarse timing
and frequency synchronization to the system has
now been established.
In addition, once the Acquisition Pilot 1 is
detected, one can also search for multi-paths
within a certain range that Acquisition Pilot 1 is
detected for use in Acquisition Pilots 2 and 3 for
multi-path diversity gain.
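A generic repetition-based frequency-offset estimate (in the same spirit, though not necessarily the exact algorithm of [9]) is sketched below: the phase of the correlation between consecutive identical periods is proportional to the offset. The sampling rate is an assumption derived from the 512-subcarrier, 9.6 kHz numerology.

```
import numpy as np

# Repetition-based frequency-offset estimate: the phase of the correlation
# between consecutive identical periods grows linearly with the offset. The
# sampling rate below assumes 512 subcarriers at 9.6 kHz spacing.

fs = 512 * 9600.0                 # about 4.9152 Msample/s (assumed)
L = 128                           # one Acquisition Pilot 1 period
cfo_hz = 3000.0                   # offset to be recovered (illustrative)

rng = np.random.default_rng(4)
period = np.exp(1j * 2 * np.pi * rng.random(L))
tx = np.tile(period, 4)
n = np.arange(tx.size)
rx = tx * np.exp(1j * 2 * np.pi * cfo_hz * n / fs)
rx += 0.05 * (rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size))

# Average the period-to-period correlations; the angle equals 2*pi*cfo*L/fs.
acc = sum(np.vdot(rx[i * L:(i + 1) * L], rx[(i + 1) * L:(i + 2) * L]) for i in range(3))
print(round(np.angle(acc) * fs / (2 * np.pi * L), 1))   # close to 3000.0
```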
Figure 5. Flowchart of IEEE MBWA system acquisition procedure.
ACQUISITION PILOT 2 PROCESSING
With the correction of the frequency offset and
the knowledge of Acquisition Pilot 2 location,
the whole Acquisition Pilot 2 symbol can be
sampled at once at the full preamble bandwidth.
The sampled data are first transformed to frequency domain via a 512-point FFT. As with the
Acquisition Pilot 1, the frequency domain data
are spectrum-shaped. The resulting data are
then transformed back to the time domain
sequence, descrambled, and a fast Hadamard
transform (FHT) is used on the descrambled
data to detect the Walsh sequence for the sector
Pilot PN offset.
In detail, the paths crossing the threshold
obtained from Acquisition Pilot 1 processing
over one superframe are passed on for Acquisition Pilot 2 processing, which involves performing the FHT and comparing the resulting sector
energies to a threshold. The FHT threshold is
chosen by design to maintain a false alarm probability within a desirable level. A false alarm
event is defined as a threshold-crossing occurring in an empty channel, i.e. noise-only scenario. A false alarm event would incur a penalty
time. This penalty is attributed to unnecessary
attempts at decoding system information following a false alarm event.
The FHT effectively performs correlations with each of the Walsh sequences; the largest FHT output identifies the Walsh code whose index equals the sector's 9-bit PN offset.
1 Note that for certain deployments with a minimum bandwidth larger than 5 MHz, spectrum shaping is not needed.
Upon the detection of Acquisition Pilot 2, the
access terminal obtains the sector’s Pilot PN offset.
ACQUISITION PILOT 3 PROCESSING
Acquisition Pilot 3 is next sampled at the 5 MHz
bandwidth, spectrum-shaped and descrambled
using the Pilot PN detected from Acquisition
Pilot 2. Like the processing of Acquisition Pilot
2, the FHT is then applied to the descrambled
data to detect the acquisition information.
Upon the detection of Acquisition Pilot 3, the
acquisition information including synchronous/
asynchronous mode, 4 LSBs of the superframe
index, full/half duplex modes, FDD/TDD mode,
and TDD partitioning (if TDD mode), is available to the access terminal. Based on this information, the access terminal is now able to determine the superframe boundary. Together with
the additional knowledge of AGC gain, the terminal is now able to locate and acquire the
PBCCH and SBCCH symbols in the next superframe preamble.
The overall detection performance of the
Acquisition Pilots is given in Fig. 6, where geometry is defined as the ratio of signal power over
interference. Geometry of 0 dB thus indicates
the cell boundary. It is seen that the Acquisition
Pilots can be detected at very low geometry, i.e.,
deep penetration into neighbor cell. This is a
desirable feature to facilitate early discovery of
new neighbor cells for potential hand off, etc.
Figure 6. Acquisition Pilots 1, 2, 3 joint detection performance (95th percentile, joint false alarm probability = 0.001, one receive antenna, 5 MHz bandwidth at 2 GHz carrier frequency, 10 ppm receiver frequency offset).
Figure 7. PBCCH decoding performance at various levels of redundancy (channel model: Pedestrian Type B, 3 km/h, one receive antenna).
PBCCH DECODING
After the detection of the Acquisition Pilots, the
access terminal is ready to decode the PBCCH
packet contained in the next superframe preamble. With the acquired superframe timing, the
access terminal is able to locate the PBCCH
OFDM symbol in the following superframe
preamble. The AGC used to acquire Acquisition
Pilots is set for sampling the PBCCH OFDM
symbol. Note that the coarse timing acquired
from the Acquisition Pilot 1 is good for time
domain waveform detection (e.g., Acquisition
Pilots 2 and 3 detection) but is not sufficient for
the PBCCH OFDM symbol demodulation. Before
the PBCCH OFDM symbol can be sampled, the
optimal OFDM symbol timing (i.e., the optimal
FFT collection window) should be established to
minimize the multi-path effect using the OFDM
symbol timing technique described in [12]. The
PBCCH OFDM symbol is then sampled at the
full preamble bandwidth, and an FFT with the preamble FFT size of 512 is performed. The frequency
domain data are spectrum-shaped, de-mapped,
demodulated, descrambled, de-interleaved, log
likelihood ratio (LLR) calculated and decoded.
As in Acquisition Pilot 1, 2, and 3 detection, the
conservative spectrum-shaping may result in loss
of SINR up to 3 dB. However, as will be seen
later, a PBCCH packet is coded with very low
code rate. The loss of 3 dB does not prevent
PBCCH from being successfully decoded.
A failure to decode a PBCCH packet is most
likely due to insufficient SINR. Therefore, if the
decoding is not successful, the access terminal
determines if the PBCCH carries the last transmission of the 16 transmissions by checking if
the four LSBs of the superframe index are equal
to 15. If the current received PBCCH is not the
last of the 16 transmissions, the LLRs from the successive transmission of the PBCCH are combined with the LLRs stored in the LLR buffer
and another decoding attempt is made. Otherwise, the buffer is cleared and the LLR data are
discarded. This procedure is repeated until a
successful decoding. The maximum number of
transmissions the access terminal can combine is
16 since the PBCCH packet is updated every 16
superframes.
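The combining loop at the terminal can be pictured with the toy sketch below, where the channel code and CRC are replaced by a repetition "code" and an exact-match check; the SNR and packet length are arbitrary. It illustrates only the buffer-and-combine control flow, not the actual PBCCH decoder.

```
import numpy as np

# Toy model of LLR combining over repeated PBCCH transmissions: the same
# packet is repeated, per-bit LLRs are accumulated, and an exact-match check
# stands in for the CRC. SNR and packet length are arbitrary.

rng = np.random.default_rng(5)
n_bits, sigma = 120, 0.9                      # too noisy to decode in one shot
packet = rng.integers(0, 2, n_bits)
tx = 1.0 - 2.0 * packet                       # BPSK-like stand-in for the coded symbols

llr = np.zeros(n_bits)
for attempt in range(1, 17):                  # at most 16 combinable transmissions
    rx = tx + sigma * rng.standard_normal(n_bits)
    llr += 2.0 * rx / sigma ** 2              # accumulate LLRs in the buffer
    decoded = (llr < 0).astype(int)
    if np.array_equal(decoded, packet):       # stand-in for a CRC pass
        print("decoded after", attempt, "transmissions")
        break
else:
    print("not decoded; buffer cleared, acquisition restarts")
```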
As shown in Fig. 7, the decoding of a PBCCH
packet rarely takes all 16 transmissions. High
geometry users are likely to need less redundancy, and hence less processing gain, to decode the packet than edge users. It there-
fore takes less time for high geometry users to
acquire the system thereby reducing the acquisition time.
A decoding failure may also be the consequence of a false detection of Acquisition Pilots
1 to 3. In the case that decoding fails, even after
the LLR buffer has combined 16 consecutive
PBCCH transmissions, the acquisition procedure
restarts.
Upon a successful decoding of the PBCCH
packet, the access terminal confirms that the
information acquired from Acquisition Pilots 1
to 3 is correct and an MBWA system indeed exists
at this carrier frequency. In addition, it obtains
the system information including superframe
index, system FFT size, and number of guard
subcarriers, etc. from the PBCCH packet. This
information is necessary for decoding the following SBCCH packet.
SBCCH DECODING
Once PBCCH is decoded, the access terminal
starts to acquire SBCCH. The FFT collection
window for the four SBCCH OFDM symbols is
determined. Four OFDM symbols from 1 to 4
are then sampled and transformed to frequency
domain via an FFT. Using the number of guard
subcarrier information from the PBCCH, the
actual guard subcarriers are zeroed out, the
modulation symbols are demodulated, descrambled, de-interleaved and decoded.
By now the access terminal has all the information necessary to access the system (e.g.,
sending an access probe on the reverse link
access channel) and the preamble acquisition
process is completed. The access terminal can
now notify its existence to the network by performing access probing through the uplink random access channel (RACH).
SYSTEM DETERMINATION
System determination is a set of data structures
and protocols to serve the purpose of identifying
the best system that is suitable for a terminal to
operate in a given environment. It consists of the
physical layer system acquisition (described in
the previous section) and the upper layer system
selection. System determination is performed
during:
• Terminal power up.
• Terminal rescan for better service.
• Out of service.
During system determination, a terminal has
to perform acquisition scan, i.e., to perform
acquisition at all potential carrier frequencies
with channel raster of 200 kHz. Channel raster
of 200 kHz means that the carrier center
frequency must be at an integer multiple of
200 kHz as specified by the standard. It is clear
that performing a whole band acquisition scan
consumes a lot of time and power.
To accelerate the system determination process, a Preferred Roaming List (PRL) is used,
which is an operator supplied list of systems that
defines the systems that a mobile can access.
The PRL is programmed into a terminal’s nonvolatile device prior to distribution, or at an
operator’s service center, or via an over the air
protocol.
Acquisition Table
Index | Type | Band
0 | WBMA | PCS 75
1 | WCDMA | AWS 21, GSM 850, 1900
2 | PCS CDMA | PCS 100, 125, 150, 175
3 | Cellular CDMA | System A
4 | Cellular Analog | System B
System Table
SID | GEO¹ | Priority² | Acquisition Index
111 | 0 | 1 | 0
34 | 1 | 0 | 3
400 | 0 | 1 | 1
4 | 1 | 1 | 0
12 | 1 | 1 | 3
0 | 1 | 0 | 4
61 | 0 | 1 | 0
56 | 1 | 1 | 1
16 | 1 | 0 | 4
¹ “0” indicates the start of a new GEO.
² “1” has higher priority than “0”.
Table 1. A sample preferred roaming list (PRL).
A PRL consists of an Acquisition Table and a
System Table. An Acquisition Table contains an
indexed (ordered according to preference) list of
RF channels to search. Each entry/record describes
the RF environment. A system table contains a list
of system descriptions keyed by system identification (SID). Each entry/record is part of a geographic area (GEO). Preference can exist within
geographic areas. An operator can specify preferences for which networks to access (Table 1).
PRL assists a terminal with system acquisition
and selection by providing information on the
relevant radio access technologies and how to
acquire them efficiently. Upon system acquisition, the terminal uses PRL to determine
whether the acquired system is usable or not,
whether the system is the optimal system on
which to operate in the given location, and what
are the systems that are more preferred and how
to acquire them efficiently if the current system
is not optimal/most preferred.
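A small sketch of how such a PRL could be consulted is shown below, using a few rows in the spirit of the sample in Table 1; the data layout, GEO grouping rule, and "more preferred" ordering are simplified assumptions rather than the standard's data structures.

```
# Illustrative PRL lookup in the spirit of the sample in Table 1; record layout
# and the preference rule are simplified assumptions.

acquisition_table = {0: ("WBMA", "PCS 75"), 1: ("WCDMA", "AWS 21, GSM 850, 1900")}

# (SID, geo_start, priority, acquisition_index); geo_start == 0 opens a new GEO,
# and records within a GEO are listed from most to least preferred.
system_table = [(111, 0, 1, 0), (34, 1, 0, 3), (400, 0, 1, 1)]

def geo_of(sid):
    geos, current = [], []
    for rec in system_table:
        if rec[1] == 0 and current:          # a new GEO starts here
            geos.append(current)
            current = []
        current.append(rec)
    geos.append(current)
    for geo in geos:
        if any(r[0] == sid for r in geo):
            return geo
    return []

def more_preferred(sid):
    ahead = []
    for rec in geo_of(sid):
        if rec[0] == sid:
            break
        ahead.append(rec)
    return ahead

print(geo_of(34))            # the GEO containing SID 34
print(more_preferred(34))    # [(111, 0, 1, 0)] -> acquire via acquisition index 0
```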
As shown in Fig. 8, upon powering up, the
access terminal builds a list of channels to perform the acquisition scan, i.e., the terminal
builds a list of systems consisting of, first, all the
CONCLUSION
Flexible system configuration is highly desirable for optimizing system performance in the variable deployment environments of next-generation wireless communications systems, but preamble design and system acquisition for such flexible systems are challenging. This article used IEEE 802.20 (MBWA) as a paradigm to illustrate preamble design schemes and system acquisition techniques for OFDMA systems. The MBWA system allows flexible configurations to meet different deployment needs. It supports bandwidths from 2.5 MHz to 20 MHz with variable guard subcarriers and is scalable in units of 154 kHz. It allows synchronous and asynchronous FDD as well as TDD with variable partitioning. It has a configurable cyclic prefix duration for variable deployment environments and full/half duplex operation for different access terminals. This flexibility also makes the design of the MBWA preamble challenging compared to conventional systems. This article described these challenges, as well as the solutions provided by the IEEE MBWA standard and other emerging wireless communication standards. The preamble design targets these requirements and ensures that initial system acquisition at an access terminal is efficient, i.e., has low overhead, low latency, and low receiver complexity. Finally, this article discussed in depth the system acquisition and system determination techniques in a cellular communications system.
APPENDIX
In this appendix, a mathematical analysis of the
detection of the Acquisition Pilot signals is
given. Since Acquisition Pilots 2 and 3 have the
same structure, the analysis of Acquisition Pilot 3
is thus omitted.
SYSTEM MODEL
Transmitted Signal: Acquisition Pilots 1 and 2 — The transmitted GCL sequence in the OFDM symbol corresponding to Acquisition Pilot 1 in the superframe preamble occupies every fourth subcarrier. More precisely, it is given by

P_{4k} = \exp\left(-j 2\pi u \frac{k(k+1)}{2 N_G}\right), \quad 0 \le k < \frac{N_{FFT}}{4}   (1)

where N_FFT = 512 is the number of preamble subcarriers, N_G = N_FFT/4 - 1 = 127, and the parameter u is a function of N_FFT and the cyclic prefix length [4]. To simplify the analysis, we assume the number of guard carriers is zero. It can be shown that the corresponding time-domain waveform of each period is

p_n = \exp\left(j 2\pi \frac{n - n^2}{2 u N_G}\right) \sum_{k=0}^{N_G-1} \exp\left(-j 2\pi u \frac{k(k+1)}{2 N_G}\right)   (2)

which has a constant magnitude that helps keep the peak-to-average power ratio (PAPR) low. Then, for convenience, we write

G_k = P_{4k}, \quad 0 \le k \le \frac{N_{FFT}}{4} - 1   (3)
The complex modulation symbols for the Acquisition Pilot 1 OFDM symbol are given by:

X_i = \begin{cases} \sqrt{P_{AP}}\, G_k, & i = 4k,\ 0 \le k \le N_{FFT}/4 - 1 \\ 0, & \text{otherwise} \end{cases}   (4)

where P_AP is the transmission power. Following the IFFT operation, the time-domain Acquisition Pilot 1 OFDM symbol can be expressed as:

x_n = \sum_{i=0}^{N_{FFT}-1} X_i v_{n,i} = \sqrt{P_{AP}} \sum_{k=0}^{N_p-1} G_k v_{4k,n}, \quad 0 \le n \le N_{FFT}-1   (5)

where the complex exponentials are given by:

v_{n,k} = \frac{1}{\sqrt{N_{FFT}}}\, e^{j 2\pi k n / N_{FFT}}, \quad k, n = 0, 1, \ldots, N_{FFT}-1   (6)

Due to the GCL sequence occupying every fourth subcarrier, the Acquisition Pilot 1 OFDM symbol appears in the time domain as a periodic waveform with four periods.
The transmitted signal for the Acquisition Pilot 2 symbol consists of a time-domain Walsh sequence given by:

x_n = \sqrt{P_{AP}}\, W^{p}_{N_{FFT}, n}, \quad 0 \le n \le N_{FFT}-1   (7)

where W^{p}_{N_{FFT}, n} denotes the Walsh sequence of length N_FFT with index p mod N_FFT, and p is the superframe's sequence number.
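The construction of Acquisition Pilot 1 in Eqs. 1-6 can be sketched numerically as follows; the GCL root u = 1 and the unit transmit power are arbitrary choices for illustration, since the actual parameters depend on the deployment configuration.

# A sketch of Eqs. 1-6 under the stated assumptions (N_FFT = 512, N_G = 127,
# no guard carriers). The root u = 1 is an arbitrary choice for illustration.
import numpy as np

N_FFT = 512
N_G = N_FFT // 4 - 1           # 127
u = 1                          # GCL root; in the standard it depends on N_FFT and the CP length
P_AP = 1.0                     # pilot transmit power (normalized here)

k = np.arange(N_FFT // 4)
# Eq. 1: GCL sequence placed on every fourth subcarrier
P4k = np.exp(-1j * 2 * np.pi * u * k * (k + 1) / (2 * N_G))

X = np.zeros(N_FFT, dtype=complex)
X[4 * k] = np.sqrt(P_AP) * P4k        # Eq. 4: modulation symbols
x = np.fft.ifft(X) * np.sqrt(N_FFT)   # Eq. 5 with the 1/sqrt(N_FFT) exponentials of Eq. 6

# Occupying every fourth subcarrier makes the time-domain symbol periodic with
# four identical periods of N_FFT/4 samples each.
periods = x.reshape(4, N_FFT // 4)
assert np.allclose(periods, periods[0])
print("peak-to-average power ratio:",
      np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))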
Channel Model — The impulse response of the SISO fading channel is given by the stochastic tapped-delay-line model:

h(t) = \sum_{i=0}^{N_{TAP}-1} \alpha_i(t)\, \delta(t - \tau_i) + \eta(t)   (8)

where \alpha_i(t) is the tap gain, assumed to be a complex Gaussian random variable with zero mean and variance \sigma_i^2, and the corresponding tap delay is denoted by \tau_i. It is assumed that the tap delays \tau_i change very slowly and are treated as constants. Also, it is assumed that the tap gains \alpha_i(t) = \alpha_i are constant over M OFDM symbol durations (i.e., a block-fading channel) and that the \alpha_i are independent. The noise process \eta(t) is assumed to be complex Gaussian with zero mean and variance N_0.

ACQUISITION PILOT 1 DETECTION
We assume a single antenna at the receiver to simplify the analysis. The chip-rate sampled received signal corresponding to the Acquisition Pilot 1 OFDM symbol, after removal of the cyclic prefix, is given by:

r_n = e^{j 2\pi n \Delta f T_s / N_{FFT}} \sum_{i=0}^{N_{TAP}-1} \alpha_i\, x_{(n - n_i) \bmod N_{FFT}} + \eta_n, \quad 0 \le n \le N_{FFT}-1   (9)

where \tau_i = n_i T_{CHIP}, with T_{CHIP} being the chip duration and T_s = N_{FFT} T_{CHIP} being the OFDM symbol duration. Also, \eta_n is a sample of zero-mean complex Gaussian noise with variance N_0. In Eq. 9, we assume that the duration of the channel's impulse response is less than the cyclic prefix duration T_CP. We also assume in Eq. 9 that the frequency offset between the transmitter and receiver oscillators is \Delta f, and that the noise component in the received signal is dominated by the interference from other sectors. We see that the signal part of Eq. 9 represents a circular convolution of the transmitted signal and the channel's impulse response, corrupted by the frequency offset \Delta f, which causes inter-carrier interference (ICI). Assuming a rectangular window for the OFDM symbol, the ICI can be modeled as in [7]. Hence the FFT operation on the received samples results in the samples denoted by Y_k (i.e., r_n -> Y_k under the FFT), which are given by:

Y_k = H_k X_k + \sum_{\substack{j=0 \\ j \ne k}}^{N_{FFT}-1} H_j X_j A_j(f_k + \Delta f) + \sum_{n=0}^{N_{FFT}-1} \eta_n v^{*}_{n,k}, \quad 0 \le k \le N_{FFT}-1   (10)

where the frequency-domain channel coefficients are given by:

H_k = \sum_{i=0}^{N_{TAP}-1} \alpha_i\, v^{*}_{n_i, k}, \quad 0 \le k \le N_{FFT}-1   (11)

and

A_j(f) = \mathrm{sinc}\!\left(\frac{f - f_j}{f_s}\right), \quad 0 \le j \le N_{FFT}-1   (12)

In Eq. 12, f_j = j f_u, where f_u = 1/T_s is the inverse of the OFDM symbol duration and denotes the subcarrier spacing. The second term on the right side of Eq. 10 represents the ICI caused by the frequency offset \Delta f. Now the FFT coefficients of the received OFDM symbol that correspond to the GCL sequence are given by:

Y_{4k} = H_{4k} X_{4k} + \sum_{\substack{j=0 \\ j \ne k}}^{N_{FFT}-1} H_{4j} X_{4j} A_{4j}(f_{4k} + \Delta f) + \sum_{n=0}^{N_{FFT}-1} \eta_n v^{*}_{4k,n}
      = \sqrt{P_{AP1}}\, H_{4k} G_k + \sqrt{P_{AP1}} \sum_{\substack{j=0 \\ j \ne k}}^{\tilde{N}_{FFT}-1} H_{4j} G_j A_{4j}(f_{4k} + \Delta f) + \sum_{n=0}^{N_{FFT}-1} \eta_n v^{*}_{4k,n}, \quad 0 \le k \le \tilde{N}_{FFT}-1   (13)

where \tilde{N}_{FFT} = N_{FFT}/4. The above frequency-domain received samples corresponding to the GCL sequence are multiplied by the stored GCL sequence G_{k,st}, and the product sequence is given by:

q_k = Y_{4k}\, G^{*}_{k,st}, \quad 0 \le k \le \tilde{N}_{FFT}-1   (14)

An \tilde{N}_{FFT}-point IFFT is performed on q_k to obtain the sequence Q_n, which can be expressed as:

Q_n = \sum_{k=0}^{\tilde{N}_{FFT}-1} q_k u_{k,n}, \quad 0 \le n \le \tilde{N}_{FFT}-1   (15)

where the complex exponentials are:

u_{k,n} = \frac{1}{\sqrt{\tilde{N}_{FFT}}}\, e^{j 2\pi k n / \tilde{N}_{FFT}}, \quad k, n = 0, 1, \ldots, \tilde{N}_{FFT}-1   (16)
The squared magnitudes of the IFFT outputs are computed to obtain S_n = |Q_n|^2, which are then compared to a threshold a_{GCL} to determine the strong paths.
The probability distribution of S_n can be obtained as follows. We denote the tap-gain vector by \alpha = [\alpha_0, \alpha_1, \ldots, \alpha_{N_{TAP}-1}]. The IFFT outputs Q_n from Eq. 15 can also be expressed as:
Q_n = \tilde{N}_{FFT} \sum_{k=0}^{\tilde{N}_{FFT}-1} q_k u_{k,n} = \tilde{N}_{FFT} \sum_{k=0}^{\tilde{N}_{FFT}-1} Y_{4k} G^{*}_{k,st} u_{k,n}
    = \tilde{N}_{FFT} \left( \sqrt{P_{AP1}} \sum_{k=0}^{\tilde{N}_{FFT}-1} H_{4k} G_k G^{*}_{k,st} u_{k,n}
      + \sqrt{P_{AP1}} \sum_{k=0}^{\tilde{N}_{FFT}-1} \sum_{\substack{j=0 \\ j \ne k}}^{\tilde{N}_{FFT}-1} H_{4j} G_j G^{*}_{k,st} A_{4j}(f_{4k} + \Delta f)\, u_{k,n}
      + \sum_{k=0}^{\tilde{N}_{FFT}-1} \sum_{m=0}^{N_{FFT}-1} \eta_m v^{*}_{4k,m} G^{*}_{k,st} u_{k,n} \right)   (17)
where 0 \le n \le \tilde{N}_{FFT} - 1. From the above we see that, conditioned on \alpha, Q_n is complex Gaussian distributed with mean (assuming perfect timing synchronization)

\mu_{Q_n|\alpha} = \tilde{N}_{FFT} \sqrt{P_{AP1}} \sum_{k=0}^{\tilde{N}_{FFT}-1} \left( H_{4k} G_k G^{*}_{k,st} + \sum_{\substack{j=0 \\ j \ne k}}^{\tilde{N}_{FFT}-1} H_{4j} G_j G^{*}_{k,st} A_{4j}(f_{4k} + \Delta f) \right) u_{k,n}   (18)
where 0 \le n \le \tilde{N}_{FFT} - 1, and with a noise variance that can be shown to be:

\sigma^{2}_{Q_n|\alpha} = N_0 \sum_{k=0}^{\tilde{N}_{FFT}-1} \left| G_{k,st}\, u_{k,n} \right|^{2}   (19)
Hence the conditional probability density function of S_n = |Q_n|^2 (0 \le n \le \tilde{N}_{FFT} - 1) is noncentral chi-squared with two degrees of freedom, given by:

f_{S_n|\alpha}(x) = \frac{1}{\sigma^{2}_{Q_n|\alpha}} \exp\!\left( -\frac{|\mu_{Q_n|\alpha}|^{2} + x}{\sigma^{2}_{Q_n|\alpha}} \right) I_0\!\left( \frac{2\, |\mu_{Q_n|\alpha}|\, \sqrt{x}}{\sigma^{2}_{Q_n|\alpha}} \right)   (20)
where I_0(\cdot) is the zeroth-order modified Bessel function of the first kind. For each n such that 0 \le n \le \tilde{N}_{FFT} - 1, the conditional threshold-crossing probabilities, given the threshold a_{GCL}, are therefore given by:

P_{D_n|\alpha} = \Pr(S_n > a_{GCL} \mid \alpha) for the correct GCL, i.e., \{G_{k,st}\} = \{G_k\}
P_{F_n|\alpha} = \Pr(S_n > a_{GCL} \mid \alpha) for an empty channel or an incorrect GCL, i.e., \{G_{k,st}\} \ne \{G_k\}
The conditional probability of threshold crossing for n can be expressed as:

P_{D_n|\alpha} = 1 - F_{S_n|\alpha}(a_{GCL}) = Q_1\!\left( \frac{\sqrt{2}\, |\mu_{Q_n|\alpha}|}{\sigma_{Q_n|\alpha}},\ \frac{\sqrt{2\, a_{GCL}}}{\sigma_{Q_n|\alpha}} \right)   (21)
where Q_1(a, b) denotes the generalized Marcum Q-function, which can be computed numerically. In the event of an empty channel, we note that Q_n is a complex Gaussian random variable with zero mean and variance equal to N_0, which results in S_n being central chi-squared distributed with two degrees of freedom.
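Numerically, Eq. 21 can be evaluated through the noncentral chi-square survival function, as in the sketch below; the mean, variance, and threshold values are arbitrary illustrative numbers.

# A numerical check of Eq. 21: the threshold-crossing probability of
# S_n = |Q_n|^2 for a complex Gaussian Q_n with conditional mean mu and
# variance sigma2.
from scipy.stats import ncx2
import numpy as np

mu, sigma2, a_gcl = 3.0, 1.0, 8.0          # assumed |mean|, variance, threshold
# 2*S/sigma2 is noncentral chi-square with 2 degrees of freedom and
# noncentrality 2*|mu|^2/sigma2, so P(S > a) follows directly:
p_d = ncx2.sf(2 * a_gcl / sigma2, df=2, nc=2 * abs(mu) ** 2 / sigma2)
print("P_D =", p_d)

# Monte Carlo cross-check of the same probability
q = mu + np.sqrt(sigma2 / 2) * (np.random.randn(200_000) + 1j * np.random.randn(200_000))
print("empirical:", np.mean(np.abs(q) ** 2 > a_gcl))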
Populating every fourth subcarrier with the GCL sequence in Acquisition Pilot 1 results in a time-domain OFDM symbol containing four periods, each period containing \tilde{N}_{FFT} = N_{FFT}/4 samples. Having these four periods is useful for estimating the frequency offset \Delta f. Once the Acquisition Pilot 1 processor determines a time offset n for which S_n crosses the threshold, it estimates the frequency offset from the time-domain samples as
\widehat{\Delta f} = \frac{1}{2\pi T_s} \left( \frac{1}{3} \sum_{k=1}^{3} \frac{O\!\left(r_{n + k N_{FFT}/4}\right) - O\!\left(r_{n}\right)}{k} \right)   (22)
where O(z) denotes the phase of the complex number z. The frequency offset correction then involves applying the phase ramp e^{-j 2\pi n \widehat{\Delta f} T_s / N_{FFT}} to the time-domain samples.
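The idea behind Eq. 22, averaging the phase progression between the four repeated pilot periods, can be sketched as follows. The sampling rate, the offset, and the random stand-in for one pilot period are illustrative assumptions; the time base is written per sample (T_CHIP = T_s/N_FFT).

# A sketch of frequency-offset estimation from the four repeated periods of
# Acquisition Pilot 1, followed by the phase-ramp correction.
import numpy as np

N_FFT = 512
f_samp = 5_000_000.0                  # assumed chip (sampling) rate, Hz
T_chip = 1.0 / f_samp
df_true = 900.0                       # assumed oscillator offset, Hz

period = np.exp(1j * 2 * np.pi * np.random.rand(N_FFT // 4))   # stand-in pilot period
x = np.tile(period, 4)                                          # four identical periods
n = np.arange(N_FFT)
r = x * np.exp(1j * 2 * np.pi * df_true * n * T_chip)           # offset applied as in Eq. 9

n0 = 0                                # time offset at which S_n crossed the threshold
est = [np.angle(r[n0 + k * N_FFT // 4] * np.conj(r[n0]))
       / (2 * np.pi * k * (N_FFT // 4) * T_chip) for k in (1, 2, 3)]
df_hat = np.mean(est)
print(f"estimated offset: {df_hat:.1f} Hz (true {df_true:.1f} Hz)")
# The correction step then applies the conjugate phase ramp to the samples.
r_corrected = r * np.exp(-1j * 2 * np.pi * df_hat * n * T_chip)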
ACQUISITION PILOT 2 DETECTION
The paths crossing the threshold obtained from
Acquisition Pilot 1 processing over one superframe
are passed on for Acquisition Pilot 2 processing,
which involves performing the FHT and comparing
the resulting sector energies to a threshold. The
full analysis for Acquisition Pilot 2 is similar to that
of Acquisition Pilot 1 and is thus not included
here. However, the calculation of the threshold
that is used during Acquisition Pilot 2 processing is
presented. The FHT threshold is chosen by design
to maintain a false alarm probability within a desirable level PF,desired. A false alarm event is defined
as a threshold-crossing occurring in an empty channel, i.e., noise-only scenario. A false alarm event
would incur a penalty time of TFA. This penalty is
attributed to unnecessary attempts at decoding system information following a false alarm event.
When only noise is present, the received signal corresponding to the Acquisition Pilot 2 OFDM symbol can be expressed as:
r_n = \eta_n, \quad 0 \le n \le N_{FFT} - 1   (23)
The FHT effectively performs correlations with each of the Walsh sequences, and the output of the FHT corresponding to the Walsh code with index p can be expressed as:

FHT_p = \sum_{n=0}^{N_{FFT}-1} \eta_n\, W^{p\,*}_{N_{FFT}, n}, \quad 0 \le p \le N_{FFT} - 1   (24)
which can be shown to be a zero-mean Gaussian random variable with variance N_{FFT} N_0. In a single-antenna scenario, the decision statistic is given by the strength of the FHT output, |FHT_p|^2, which is central chi-squared distributed with two degrees of freedom. Given the FHT threshold a_{FHT}, the false alarm probability can therefore be expressed as [8]:

P_F = \exp\!\left( -\frac{a_{FHT}}{N_0 N_{FFT}} \right)   (25)
Hence, to achieve a false alarm probability P_F \le P_{F,desired}, the FHT threshold should be chosen such that

a_{FHT} = -N_0 N_{FFT} \log(P_{F,desired})   (26)

The average time taken for acquisition given a non-empty channel can be shown to be approximately T_{ACQ} = T_{SF}/P_D, where T_{SF} is the superframe duration.
ACKNOWLEDGEMENTS
The authors would like to thank the editors and the reviewers for their excellent comments.
REFERENCES
[1] Standard for Local and Metropolitan Area Networks
Part 20: Air Interface for Mobile Broadband Wireless
Access Systems Supporting Vehicular Mobility — Physical and Media Access Control Layer Specification, IEEE
Std., 2008.
[2] A. Greenspan, M. Klerer, J. Tomcik, R. Canchi, J. Wilson,
“IEEE 802.20: Mobile Broadband Wireless Access for
the Twenty-First Century,” IEEE Commun. Mag., July
2008, pp. 56–63.
[3] 3GPP TS36.201, LTE Physical Layer — General Description, Aug. 2007.
[4] 3GPP2 C.S0084-001 v2.0, “Ultra Mobile Broadband Air
Interface Specification,” Sept. 2007.
[5] F. Khan, LTE for 4G Mobile Broadband: Air Interface
Technologies and Performance, Cambridge University
Press, 2009.
[6] A. Bahai, B. Saltzberg, and M. Ergen, Multi-Carrier Digital Communications Theory and Applications of OFDM,
New York: Springer, 2004.
[7] L. Hanzo et al., OFDM and MC-CDMA for Broadband
Multi-User Communications, WLANs and Broadcasting,
IEEE Press, 2003, pp. 118–20, and pp. 224–26.
[8] J. G. Proakis, Digital Communications, McGraw-Hill,
New York, 2001, pp. 43–44.
[9] T. Brown and M. Wang, “An Iterative Algorithm for Single-Frequency Estimation,” IEEE Trans. Sig. Proc., vol.
50, Nov. 2002, pp. 2671–84.
[10] B. M. Popovic, “Generalized Chirp-Like Polyphase
Sequences with Optimum Correlation Properties,” IEEE
Trans. Info. Theory, vol. 38, July 1992, pp. 1406–09.
[11] J. Lee and L. Miller, CDMA Systems Engineering Handbook, Artech House Publishers, 1998.
[12] M. Wang et al., “Optimal Symbol Timing for OFDM
Wireless Communications,” IEEE Trans. Wireless Commun., vol. 8, Oct. 2009, pp. 5328–37.
[13] Channel Models Document for IEEE 802.20 MBWA
System Simulations — IEEE Document 802.20-PD-08.
BIOGRAPHIES
MICHAEL MAO WANG ([email protected]) received the
B.S. and M.S. degrees in electrical engineering from Southeastern University (Nanjing Institute of Technology), China,
and the M.S. degree in biomedical engineering and the
Ph.D. degree in electrical engineering from the University
of Kentucky, Lexington, Kentucky. From 1995 to 2003, he
was a Distinguished Member of Technical Staff at Motorola
Advanced Radio Technology, Cellular Infrastructure Group,
Arlington Heights, Illinois. He is currently with Qualcomm
Corporate Research Center, San Diego, California, where he
has been actively involved in research and development of
future generation wireless communication technologies.
His current research interests are in the areas of wireless
communications and signal processing. Dr. Wang is one of
the key contributors to IEEE 802.20 Mobile Broadband
Wireless Access, TIA-1099 Terrestrial Multimedia Multicast,
and 3GPP2 Ultra Mobile Broadband.
AVNEESH AGRAWAL is Senior Vice President of Product Management at Qualcomm CDMA Technologies (QCT). He is
responsible for wireless connectivity (LAN/PAN & Broadcast)
chipsets in QCT. Prior to his current role, he led Qualcomm
Corporate Research Center in OFDMA based next generation wireless technologies. He holds more than 50 patents
in the field of wireless communications. He holds a Bachelor of Science degree in computer systems engineering and
Master of Science and Ph.D. degrees in electrical engineering, all from Stanford University.
AAMOD KHANDEKAR received his Bachelor of Technology
degree in Electrical Engineering from IIT Bombay in 1998
and his Ph.D. in electrical engineering from the California
Institute of Technology in 2002. He has been working at
Qualcomm since 2002, where his work has involved the
design and standardization of wireless communication systems.
SANDEEP AEDUDODLA received the Bachelor of Technology degree in electronics and communication engineering from the Indian Institute of Technology, Guwahati,
India, in 2002. He received the M.S. and Ph.D. degrees in
electrical and computer engineering from the University of
Florida, Gainesville, in 2004 and 2006, respectively. In the
summer of 2004 he was an intern at the Mitsubishi Electric
Research Labs, Cambridge, MA, where he was involved in
the development of a UWB-based physical layer for the
upcoming IEEE 802.15.4a standard. Since 2006 he has
been with Qualcomm in Boulder, CO, where he is involved
in wireless systems design for 3G and 4G modem ASICs for
mobile devices and femtocells.
ACCEPTED FROM OPEN CALL
Evaluating Strategies for Evolution of
Passive Optical Networks
Marilet De Andrade, University of California and Universitat Politècnica de Catalunya
Glen Kramer, Broadcom Corporation
Lena Wosinska and Jiajia Chen, Royal Institute of Technology (KTH)
Sebastià Sallent, Universitat Politècnica de Catalunya, and i2Cat
Biswanath Mukherjee, University of California
ABSTRACT
Rapidly-increasing traffic demands will
require the upgrade of optical access networks,
namely deployed Passive Optical Networks
(PONs), which may soon face capacity exhaustion. Such upgrade options must consider several
technical and cost factors for evolution toward a
shared multiple-channel PON using Wavelength-Division Multiplexing (WDM). WDM can facilitate the seamless upgrade of PONs, since
capacity can be increased by adding new wavelength channels. We study the requirements for
optimal migration toward higher bandwidth per
user, and examine scenarios and cost-effective
solutions for PON evolution.
INTRODUCTION
Network evolution is a natural way to handle
increasing traffic. Access networks are experiencing demands to offer higher bandwidths to
subscribers. Several architectures have been
proposed for next-generation Passive Optical
Networks (PON) [1]. We investigate the evolution path for future generations of PONs. We
study strategies for increasing the PON’s
capacity regardless of its technology: EPON
(Ethernet-based PON) or GPON (Gigabit-capable PON).
In PON, a fiber is extended from an OLT
(Optical Line Terminal) at the Central Office
(CO) to a remote node (RN) (usually an optical
power splitter) located in the service area (10–20 km
from CO). From the RN, fiber drops are extended to each subscriber or ONU (Optical Network
Unit) [2].
Legacy PONs (EPON, GPON) generally use
two wavelengths as transmission channels. The
downstream channel (1490 nm) is broadcast in
nature, and any ONU can filter the data intended for it. The upstream channel (1310 nm) is
shared in time among all ONUs. Thus, legacy
PONs are referred to as TDM (Time-Division
Multiplexing) PON. OLT authorizes timeslots
when an ONU can transmit. Timeslot sizing is
part of a dynamic bandwidth allocation algorithm, which provides fairness and differentiated
services to users by exchanging control information between OLT and ONUs [3].
Bandwidth supported by legacy PONs is limited: 1 Gb/s upstream and downstream for
EPON, and up to 2.5 Gb/s downstream/1.25
Gb/s upstream for GPON today. Sustained
growth of Internet traffic is being observed with
new applications such as multi-player gaming,
e-health, e-learning, 3D full-HD (High-Definition) video, etc., which increase bandwidth
demands to unprecedented levels. Current
GPON and EPON need to be upgraded to cope
with these demands.
Recent publications [4, 5] overview candidates and architectures for next-generation
GPON. Our article focuses on long-term evolution of currently-deployed PONs (EPON or
GPON), and considers basic requirements for
future PON generations. We anticipate three
principal evolutionary phases, where WDM is
the main technology that allows coexistence
among PON generations. To the best of our
knowledge, we are the first to evaluate the combined generations of PON according to the
defined migration requirements, optical power
budget, CAPEX, and capacity usage. Moreover,
we introduce the immediate WDM-based migration phase as a suitable option to allow transparent coexistence among a number of generations.
We also evaluate gradual capacity upgrades,
which are cost-efficient and accomplish the
migration requirements.
REQUIREMENTS FOR
FUTURE PON GENERATIONS
Future PON generations may take diverse evolution paths, for which we define constraints to
identify key enabling technologies and architectures for PONs. We present five requirements
for the evolutionary path (Fig. 1), as discussed
below.
Minimize Equipment-related Investments:
For PON migration, a new technology (including
components) may need to be deployed, besides
existing ones or as a replacement at end points
(e.g., at OLT and ONUs). Capital expenditures
must be evaluated with current and future benefits for a cost-effective evolution path.
Support Coexistence: PONs must support
legacy devices. Coexistence means that next-generation devices must operate on the same infrastructure without interfering with existing
operation whenever possible. Backward-compatible devices need to be considered for coexistence.
In PON evolution, even within one category
of users, traffic demands may be different. Some
users will be satisfied with minimal service and
will not upgrade to newer devices or will upgrade
much later, when prices become comparable.
Therefore, network upgrades must allow coexistence among new-generation and legacy devices.
Maximize Profit from Existing Resources:
Usage maximization of current and extended
capacities can be achieved by dynamically allocating bandwidth among users. Efficient capacity
utilization brings revenue to the service provider
and facilitates recovery of initial and subsequent
investments.
Keep and Reuse Fiber Infrastructure: For
cost-effective upgrade, neither the Remote Node
(RN) should be changed, nor should more fiber
be added to the existing PON. Most of the fiber
is lying underground, so civil engineering/deployment increases capital expenditure (CAPEX).
Although changes to outside plant could help
further upgrades, they can cause service disruptions.
Avoid Disruptions: Some service disruptions
are expected during network migration, but we
need to reduce their number and effects depending on which devices/fibers are being replaced. A
disruption at an ONU only affects its users, and
not the rest of the network, unlike changing the
OLT or the RN, where the entire PON is affected. However, a change at the OLT can be performed in a more protected environment than replacing the RN, which is a field operation.
MAIN EVOLUTION PHASES AND
SCENARIOS
PON evolution depends on many factors, including technology advances and their implementation cost. Based on current standardization
efforts, to introduce 10 Gb/s rate on PONs, we
anticipate three principal evolutionary phases:
• Line-rate upgrade
• Multi-wavelength channel migration
• Other future PON technologies
LINE-RATE UPGRADE
A natural PON evolution is to increase existing
PON capacity to a higher line rate, namely
10 Gb/s. Work has been conducted by IEEE and
ITU-T to standardize next-generation 10 Gb/s PONs. The standards are influenced by the ability to coexist with legacy PONs, price, and
implementation feasibility. IEEE ratified a new
standard for 10 Gb/s-EPON (IEEE-802.3av) in
Figure 1. Constraints for PON evolution.
September 2009. Also, ITU-T (Question 2, Study
Group 15) released a series of recommendations
for 10 Gb/s-GPON (XG-PON), namely
G-987.1, G-987.2 (both approved in January
2010) and G-987.3 (approved in October 2010).
Both IEEE-802.3av and ITU-T-proposed architectures (in NGA1, Next-Generation Access 1)
[5] are good examples of line-rate upgrades that
allow coexistence with current PONs.
Longer-term PON evolution may consider
higher line rates: 40 Gb/s or 100 Gb/s. However,
for higher line rates, it is difficult to reach the
typical PON distances without signal amplification.
This migration can occur in an “as-needed”
fashion, and two sub-phases of evolution are
expected: asymmetric and symmetric line-rate
upgrades [5, 6].
Asymmetric Line-Rate Upgrade — Downstream traffic from OLT to ONUs is traditionally higher than upstream traffic. PONs are
attractive due to their broadcast capability on
the downstream channel. With growth of broadcast services (e.g., Internet Protocol High-Definition TV), we have the first part of line-rate
upgrade. Another reason for asymmetric migration is the fact that adding 10 Gb/s upstream
capability (symmetric approach) would require
more expensive ONU devices.
Figure 2 shows a new downstream channel
added to the PON using WDM. To not interfere
with the existing legacy PON (light-colored
ONUs in Fig. 2), the new wavelength channel
can be taken from the L-band. A new OLT card
or module can manage legacy and 10 Gb/s downstream services. We call this module EnhancedOLT (E-OLT). New ONUs (dark-colored ONUs
in Fig. 2) are added to the PON to support
10 Gb/s service.
However, some precautions are needed to
support this coexistence. New wavelength-blocking filters (boxes next to each ONU in
Fig. 2) should be attached to ONUs to avoid
interferences between downstream channels.
Reference [7] shows that adding these filters
during legacy PON deployment can significantly
reduce the migration cost. These filters can
ease coexistence with future-generation PONs,
as discussed later.
An external or embedded amplifier may be
needed at the OLT due to the low sensitivity of
the ONUs’ receivers and the low optical power
level needed to reach the receiver of high-line-rate signals (at 10 Gb/s). OLT may operate at dual rate in the downstream channel, with two MAC (Medium Access Control) layer stacks; consequently, a new class of PON chipsets is needed [6].
Figure 2. Asymmetric line-rate upgrade to 10 Gb/s.
Symmetric Line-Rate Upgrade — Symmetric line-rate upgrade is achieved when both downstream and upstream directions operate at the same rate, say 10 Gb/s. This depends on the symmetry of traffic demands, e.g., due to new peer-to-peer communications, multimedia real-time applications, and 3D Internet services. Two approaches can be considered: TDM and WDM coexistence [8].
Symmetric Line-Rate Upgrade with TDM
Coexistence — The upstream channel can be
upgraded to 10 Gb/s by sharing a wavelength in
time and using two different line rates (Fig. 3a).
This approach is approved in IEEE for 10 Gb/s-EPON, where the 10G-service upstream channel
(1260–1280 nm) overlaps with the legacy-service
channel (1260–1360 nm). It can reduce deployment cost, because the legacy upstream channel
is on the lower-dispersion fiber band. New
ONUs can operate with commercially-available
distributed feedback (DFB) lasers, and the optical transmission system can be reused to reduce
cost. However, network implementation becomes
complex since an extra control mechanism is
needed to manage the upstream channel with
different rates, and it must also deal with time
alignments.
An important challenge is imposed on the
OLT’s burst-mode receiver, which now has to
adapt its sensitivity to the incoming optical burst
signal, to detect different-line-rate traffic on the
same channel. This problem affects the PON at
the discovery stage, when the OLT incorporates
ONUs with unexpected rates. The IEEE 10 Gb/s-EPON standard addresses this problem by allowing separate discovery windows for 1G and
10G-services.
Figure 3. Symmetric line-rate upgrade example to 10 Gb/s: a) TDM coexistence
and b) WDM coexistence.
Symmetric Line-Rate Upgrade with WDM
Coexistence — The alternative to a shared
upstream channel upgrade is to add another
upstream channel at 10 Gb/s (Fig. 3b). Now,
independent OLTs can manage legacy (OLT)
and 10 Gb/s (E-OLT) services. The new optical transmission for ONUs can be slightly more expensive because the transmission
system cannot be reused as before. Now, the
laser at the enhanced-ONU has to transmit
at a different wavelength in C or L bands,
e.g., at 1550 nm [8]. However, this wavelength is currently reserved for analog video
broadcasting.
Other wavelength bands may be explored
to support coexistence. For example, in [4],
two symmetric non-overlapping upstream
channels are located in the O band (1270 nm
and 1310 nm). Now, the legacy ONUs (often
covering the whole O band, centered at 1310
nm) would need narrower transmitters (e.g.,
coarse-WDM or dense-WDM transmitters) to
not overlap with the new channel at 1270 nm
in the same band.
Network disruption can occur due to installation of a WDM filter (box near OLT and
E-OLT in Fig. 3b). The WDM filter separates
wavelengths directed to the legacy OLT from
the ones to the E-OLT. For guaranteed services,
the OLT can be installed in a redundant way
such that changes to any module do not generate disruptions since the spare OLT will be
working. Many current deployments do not use
protection schemes, but protection will become
important in the future.
MULTIPLE WAVELENGTH CHANNEL MIGRATION
The natural second step for PON evolution is
based on WDM technology. However, other
technologies (part of the third migration phase)
may change this expectation, due to a reduction
of their cost and better implementation feasibility. The advantage of WDM is that it allows coexistence between two or more PON generations
over the same infrastructure. Provisioning multiple channels on the PON allows deployment of
different migration technologies or capacity
extensions transparently, where devices of a generation are unaware of the coexistence with
other generations.
Today, there are concerns regarding challenges to implement WDM in PONs, especially
regarding: type of transceivers at the OLT and
ONU, sharp filtering, and type of RN. Details of
enabling technologies and challenges can be
found in [1, 9]. Another consideration is wavelength planning. Initially, when there are few
wavelengths, they could be spaced far apart, e.g.,
using the 100 GHz grid. If more wavelengths are needed, unused wavelengths from the 50 GHz grid
can be invoked. Care must be taken to ensure
that closely-spaced wavelengths are operating at
lower rates to reduce interference. Practical
aspects such as these must be handled by the
Service Provider in its actual deployment and
upgrade situations.
Diverse architectures for this migration stage
can be considered [1, 9]. Some WDM-based
PON architectures involve changes at the RN,
including addition of active components [10].
In this article, we consider changes that allow
the network to remain passive (RN is fully passive), and we study two main architectures:
WDM-PON and Overlaid-PONs.
WDM-PON — WDM-PON is known as wavelength-routed or wavelength-locked WDM-PON.
It requires the replacement of the optical power
splitter by an Arrayed Waveguide Grating
(AWG) (Fig. 4a). In the upstream direction, the
AWG acts as a multiplexer of different wavelengths into a single fiber; and in the downstream direction, the AWG is used as a
de-multiplexer by directing a different wavelength to each fiber drop. Therefore, AWG
allows a fixed assignment of two wavelengths
(upstream and downstream channels) to each
ONU.
Devoting an optical channel to each ONU
implies a substantial increase in the offered
capacity per user. However, fixed-channel assignment is inflexible and does not allow dynamic
reuse of wavelengths by different ONUs for efficient capacity utilization, especially when traffic
demands are bursty.
ONUs in WDM-PON will require new transmitters working on different wavelengths. A
good option is to use colorless ONUs either with
tunable lasers or RSOAs (Reflective Semiconductor Optical Amplifier). However, today the
price of RSOAs is one order of magnitude higher than an entire (EPON-based) ONU, whereas
tunable lasers are significantly more expensive
than RSOAs.
A WDM-PON with cascaded TDM-PON can
Figure 4. WDM-PON with five ONUs: a) two different wavelengths assigned to
each ONU by using an AWG and b) WDM-PON with cascaded TDM-PON.
dynamically allocate unused bandwidth from one
ONU to other ONUs (Fig. 4b). Addition of a
splitter in one (or more) fiber drops allows timesharing the dedicated wavelengths among some
ONUs in that PON branch. This architecture
can improve the maximum number of ONUs
supported by a single PON, but it does not facilitate capacity upgrades in an “as-needed” fashion
by adding wavelengths.
WDM-PON is a highly-disruptive migration
option since the RN has to be replaced by another device (AWG). This procedure will provoke a
major PON disruption unless the RN is installed
in a protected configuration. More importantly,
all existing devices on the network must migrate
at the same time, and this does not meet the
coexistence requirement. A complete migration
of all user devices will lead to prohibitive costs,
especially when some users may not want a
capacity upgrade. Although WDM-PON is considered to be a next-generation PON after
10 Gb/s, the above arguments suggest that it is
not suitable for a smooth PON evolution.
Overlaid-PONs Using WDM — Overlaid-PONs
form a valuable option for the second migration
phase. They exploit WDM technology, but now
the RN remains an optical splitter, and it does
not need to be replaced by an AWG as in WDM-
Figure 5. Evolution using Overlaid-PONs: a) legacy PON; b) partial upgrade to 10G-PON; c) extending capacity by adding a downstream (and/or upstream) channel to a set of ONUs; and d) extending capacity by adding more channels to sets of ONUs as needed.
PON. In Overlaid-PONs, PON capacity is incremented by adding more wavelength channels
based on traffic demands. If existing channels
are time-shared among users, a new channel will
also be time-shared by the ONUs on the new
wavelength. OLT will control an ONU’s usage of
a wavelength at a specific timeslot. Thus, ONUs
working on a new wavelength form the set of
devices pertaining to the new overlaid-PON
(over the legacy PON or previous-generation
service). Some devices may belong to two or
more different overlaid-PONs according to their
hardware capabilities which can lead to a flexible
distribution of bandwidth. Overlaid-PONs form
a next-generation architecture for GPON in the
NGA1 proposal (ITU-T, Study Group 15).
When using Overlaid-PONs, some disruptions observed in WDM-PON are minimized
since there is no need to replace the RN; only
end-devices will require a change. Moreover,
some users may need capacity extension while
other ONUs may remain the same, so “as-needed” growth is accomplished. The network
becomes flexible for efficient distribution of
capacity among users who operate on the same
wavelength channel(s).
Overlaid-PONs require that new ONUs and
OLT operate at different wavelengths than existing ones in legacy PON and 10G-PON. Existing
legacy standards, for cost reasons, allocated wide
bands for upstream and downstream channels
which may interfere with the new optical channels. Thus, we need blocking filters at the first
migration phase for all ONUs. These filters can
be costly because they should have a very steep
response characteristic in order to fit into the
narrow guard band left between the channels.
The suggested evolution path allows migrating
first toward an intermediate line-rate upgrade
phase which may give time to fully migrate exist-
180
Communications
IEEE
ing legacy ONUs, before moving to the second
migration phase. New wavelengths can be targeted at the legacy bands. By that time, the filters’
prices may become affordable.
To transmit over more than one wavelength,
an ONU may use:
• Tunable lasers
• Fixed-wavelength laser arrays
Tunable lasers increase network flexibility, but
their price is high. Fixed-wavelength laser arrays are cheaper but less flexible than tunable lasers. The choice of lasers for ONUs will
depend on their price.
Using L-band could be an immediate solution
for a capacity upgrade using WDM. Future
increments in the number of wavelengths can be
obtained through the spectral space left empty
by a total migration of previous generations
working at lower bands.
Overlaid-PONs allow the coexistence of multiple generations on the same fiber infrastructure
(Fig. 5). Starting from a legacy PON (Fig. 5a),
the first evolution is a line-rate upgrade for
some ONUs (Fig. 5b), which requires the addition of wavelengths for coexistence with the
legacy PON. Later, some users may need more
capacity, which can be resolved by adding a new
wavelength to any or both traffic-flow directions
(Fig. 5c). Some ONUs can share two or more
wavelengths as required. Finally, some ONUs
may need to increase the number of wavelengths
to be shared among them (Fig. 5d). Thus, Overlaid-PONs using WDM not only can increase a
PON’s capacity by adding wavelengths, but also
keep PON generations coexisting by stacking
them with different wavelengths.
Consider traffic growth in a PON. Figure 6a
shows the number of ONUs per service during
each period (which approximates a year).
Figure 6. Quantitative results: a) number of ONUs migrating to the 10 Gb/s line rate and to extra optical channels per period using the Overlaid-PONs approach (each period approximates a year); b) optical power loss for different upgrade approaches; c) total unused bandwidth per period for the 10G line-rate upgrade combined with WDM-PON or Overlaid-PONs; d) CAPEX for the 10G line-rate upgrade combined with WDM-PON or Overlaid-PONs; and e) percentage difference between the respective total CAPEX in (d) and the total CAPEX when adding 20 percent and 50 percent to the cost of WDM-based PON elements.
PON traffic is assumed to grow by a factor of 1.5 per period. The number of ONUs during
these periods is constant (32) and traffic at
each ONU will grow on average in the same
proportion. Initially, just before period 1, the
legacy-PON’s capacity is totally consumed
(total traffic volume is 1Gb/s on average). In
this PON, existing ONUs will be upgraded
gradually (a line-rate upgrade first, then wavelengths at 10 Gb/s are added as needed) trying
to utilize the available capacity in previous services as much as possible.
Figure 6a shows that coexistence among 1G
and 10G services can last for eight periods.
From period 6, additional wavelengths are needed to support the growing traffic demand. In the
last period shown, there are four channels serving eight ONUs each. Note that the capacity of
the four channels can be shared among a subset
of ONUs, according to their needs. That would
require colorless ONUs and a wavelength-assignment algorithm. Determining the time instants
to run the provisioning and its bandwidth granularity is a challenge for the network operator.
Note that this is an illustrative example assuming
a constant traffic growth factor, which leads to a
nine-year interval to operate with four channels
(considering only one flow direction). Actual
upgrade decision periods will be affected by
many other factors, namely the economy and traffic-growth evolution.
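The bookkeeping behind this example can be sketched in a few lines; the growth factor and per-wavelength rate follow the text, while the assumption that all traffic eventually rides on 10 Gb/s overlay wavelengths (with the legacy channel retired) is a simplification for illustration.

# A sketch of the capacity bookkeeping behind Fig. 6a: aggregate demand starts
# at 1 Gb/s, grows by 1.5x per period, and is carried on 10 Gb/s overlay
# wavelengths added only as needed.
import math

GROWTH, PERIODS, RATE = 1.5, 9, 10.0    # growth per period, periods, Gb/s per wavelength
demand = 1.0                            # Gb/s aggregate, just before period 1
for period in range(1, PERIODS + 1):
    demand *= GROWTH
    wavelengths = max(1, math.ceil(demand / RATE))
    print(f"period {period}: demand {demand:5.1f} Gb/s -> "
          f"{wavelengths} x 10 Gb/s wavelength(s), "
          f"unused {wavelengths * RATE - demand:4.1f} Gb/s")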
Quantitative Comparisons: WDM-PON vs.
Overlaid-PONs — To compare the upgrading
alternatives WDM-PON and Overlaid-PONs, we
address optical power loss, unused capacity, and
capital expenses (CAPEX).
PONs require a higher optical power budget
to compensate for increased insertion loss along
the paths between OLT and ONUs. We calculate the lower bound of the total power loss
(without adding the optical penalty to cope with
physical impairments) for different upgrade
approaches. As shown in Fig. 6b, WDM-PON
offers the minimum total power loss, which
means that it can support longer distance or
more ONUs. WDM-PON with cascaded TDM-PON increases the power loss considerably if we
assume the insertion of a 1:32 splitter. Furthermore, Overlaid-PONs experience the highest
total optical power loss due to the insertion of
filters at ONUs (1 dB) and at OLT (3 dB) [7].
However, the maximum optical power budget,
usually 29 dB (e.g., IEEE 802.3av, for 1:32 splitting ratio), is not reached. This is an important
consideration for adding more devices to the system.
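The kind of budget check discussed above can be sketched as follows. The 1 dB and 3 dB filter penalties and the 29 dB budget come from the text; the splitter, AWG, and fiber losses are assumed typical values, not figures from this article.

# Illustrative power-budget check for the three upgrade options.
LOSS = {
    "fiber_20km": 20 * 0.35,     # assumed 0.35 dB/km at 1310 nm
    "splitter_1x32": 17.5,       # assumed, includes excess loss
    "awg": 5.0,                  # assumed
    "filter_onu": 1.0,           # from the text [7]
    "filter_olt": 3.0,           # from the text [7]
}
BUDGET_DB = 29.0                 # e.g., IEEE 802.3av for a 1:32 split

scenarios = {
    "WDM-PON (AWG remote node)": ["fiber_20km", "awg"],
    "10G-PON (splitter only)":   ["fiber_20km", "splitter_1x32"],
    "Overlaid-PONs (splitter + filters)":
        ["fiber_20km", "splitter_1x32", "filter_onu", "filter_olt"],
}
for name, parts in scenarios.items():
    total = sum(LOSS[p] for p in parts)
    print(f"{name}: {total:.1f} dB "
          f"({'within' if total <= BUDGET_DB else 'exceeds'} the {BUDGET_DB} dB budget)")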
Using the example presented earlier, we evaluate the amount of unused capacity for WDM-PON and Overlaid-PONs. In periods 1 to 5, we upgrade the network using 10G-PON, and after that, we upgrade with WDM-PON or Overlaid-PONs, as presented in Fig. 6c. In this case, we
have set the maximum channel capacity of
WDM-PON and Overlaid-PONs to 1 Gb/s; however, if we set 10 Gb/s as the maximum per channel, then the unused capacity would be
proportionally larger. The amount of unused
capacity in the case of WDM-PON is very high
compared to the case of Overlaid-PONs in Fig.
6c. However, the extra capacity in WDM-PON
cannot be shared among ONUs, unless the service provider implements a WDM-PON with
cascaded TDM-PON.
Now we analyze the CAPEX impact that a
new technology will have on the upgrade process. At the moment, WDM-PON is not a widely
deployed technology, hence the exact cost of this
technology is difficult to estimate or forecast.
However, some current technical challenges
(type of transceivers, wavelength plan) suggest
the high cost of components required to implement WDM-PON. To illustrate the CAPEX
required for the example mentioned earlier, we
use the cost per device in [11]. We assume a cost
reduction of seven percent per period (which
approximates a year). We also assume that the
cost of 10 Gb/s equipment is in the middle
between the cost of WDM-PON equipment and
the cost of Legacy PON (TDM-PON in [11]).
Overlaid-PON ONU and WDM-PON ONU
have the same cost in this calculation ($525).
In Fig. 6d, we present the CAPEX needed in
our example. We have two stages: period 1-5
when we upgrade to 10 Gb/s, and period 6-9
when we upgrade using WDM technology (i.e.,
adding wavelength channels). We calculate the
required CAPEX for each period, and the total
CAPEX for both WDM-PON and Overlaid-PONs (i.e., 10G-PON and WDM-PON, or 10G-PON and Overlaid-PONs). The CAPEX for Overlaid-PONs is lower than that for WDM-PON
A
BEMaGS
F
due to the gradual investments needed (split
over several periods) in Overlaid-PONs, which is
attractive. However, it is reasonable that in this
example they have comparable CAPEX totals
since both are WDM-based and face similar
technical challenges. Note that in this example,
ONUs’ traffic grows uniformly and at a fast rate.
However, in a practical scenario (e.g., using different growth patterns per user), the investment
for Overlaid-PONs would be distributed over
several periods, leading to more cost reductions
per period.
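The effect of spreading the investment can be sketched with the cost assumptions above ($525 per ONU, a 7 percent price decline per period); the per-period upgrade schedule and the restriction to ONU costs are illustrative simplifications.

# Illustrative comparison of one-shot versus gradual ONU replacement CAPEX.
ONU_COST0, DECLINE, N_ONUS = 525.0, 0.07, 32

def price(period):                      # ONU price after "period" periods of decline
    return ONU_COST0 * (1 - DECLINE) ** period

# One-shot migration: all 32 ONUs replaced in period 6.
wdm_capex = N_ONUS * price(6)

# Gradual migration: assumed 8 ONUs upgraded in each of periods 6-9.
overlay_schedule = {6: 8, 7: 8, 8: 8, 9: 8}
overlay_capex = sum(n * price(p) for p, n in overlay_schedule.items())

print(f"one-shot (period 6): ${wdm_capex:,.0f}")
print(f"gradual (periods 6-9): ${overlay_capex:,.0f} total, "
      f"max per period ${max(n * price(p) for p, n in overlay_schedule.items()):,.0f}")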
Finally, we evaluate the sensitivity of total
CAPEX to variations in cost of some elements.
For every network element, we increase its cost
by 20 percent and 50 percent. Figure 6e shows
the percentage difference between the total
CAPEX in Fig. 6d and the new recalculated
CAPEX. We observe that, compared to the base
cost (total CAPEX in Fig. 6d), the CAPEX is
more sensitive to cost variations for OLT, especially when its price increases by 50 percent.
Otherwise, the effect on CAPEX is not large
(<15 percent). Although in Fig. 6e it may seem
that the combined evolution of 10G-PON and
Overlaid-PONs is more expensive than the combined 10G-PON and WDM-PON, note that percentage differences are calculated using their
respective base total CAPEX (in Fig. 6d) as a
reference.
OTHER FUTURE PON TECHNOLOGIES
The third PON-migration phase can be based on
different possibilities. It can carry different
hybrids between WDM and other multiplexing
technologies such as CDM (Code-Division Multiplexing) and SCM (Sub-Carrier Multiplexing)
[12], or it can be an upgrade of WDM-based
PONs by using Coherent PONs. By using separate wavelengths for different PON generations,
any subsequent generation can be deployed over
specific wavelength channels, forming a hybrid.
Below, we briefly discuss future hybrids.
CDM Hybrids — OCDM-PON (Optical-CDM
PON) technology addresses capacity upgrade in
PONs by adding a code-based dimension to the
system. However, the design of orthogonal codes
to reduce interference and noise when the number
of users grows is an open issue. Coders/ decoders
and corresponding transceivers are still in the
early stages of development. Few orthogonal
codes can be implemented to create more Overlaid-PONs (WDM/CDM) [13]. The combination
of some codes on different channels (as needed)
can provide more flexibility to the network.
SCM Hybrids — With SCM, signals are separated (electronically or optically), and shifted to
different subcarrier channels using modulation
techniques. This option may require a different
wavelength to support it in a hybrid fashion to
avoid interference with existing and operating
services on other channels. A good example of
PONs using this technology is OFDM (Orthogonal Frequency-Division Multiplexing) PON [14].
Coherent PONs — An attractive trend for
PONs is where transmitters are based on coherent lasers (using ultra-dense-WDM band,
IEEE Communications Magazine • July 2011
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
A
BEMaGS
F
Communications
IEEE
U-DWDM), and optical heterodyne or homodyne reception [15]. This may be a good candidate for future U-DWDM-based PONs. To
upgrade an existing WDM-PON, only enddevices (ONU and OLT) need to be replaced.
Coherent PON allows longer reach (100 km)
and a splitting factor of 1:1000, and can provide
one different wavelength channel per user.
CONCLUSION
We introduced and evaluated different options
for the evolution of PONs toward higher bandwidth per user. The evolution is divided into
three migration phases: line-rate upgrade, multichannel migration, and future PON technologies. The first migration phase is in the process
of standardization and can follow two sub-phases: asymmetric and symmetric upgrades. Asymmetric sub-phase aims at adding a new channel
at 10 Gb/s in the downstream direction. Symmetric sub-phase delivers 10 Gb/s in the upstream
direction also by either time-sharing with legacy
services or adding a new upstream channel. The
second phase is based on multiple channels
(wavelengths) in both downstream and upstream
directions, and the technology used is WDM. In
the second phase, it is necessary to have filters at
existing and new ONUs to select the appropriate
optical signals. The quality of these filters plays
an important role in future migration procedures
over the same network, especially if they are
installed at an early stage. Finally, a third migration phase includes new hybrids with previous
technologies: TDM and WDM. Possibilities are
OFDM and OCDM, coexisting with previous
generations. Another future technology may
consider an extension of WDM technologies by
deploying Coherent PONs.
Any evolution path can lead to bifurcations at
different phases. The second migration phase
can be chosen by using WDM-PON or OverlaidPONs. The benefits of Overlaid-PONs over
WDM-PON are disruption minimization and
coexistence. WDM-PON does not allow the flexibility to build different channels that could be
shared among a number of ONUs. Overlaid-PONs ease the implementation of future generations by preserving coexistence with the previous
one through the addition of new wavelengths per
service and not per ONU.
The third migration phase (other future PON
technologies) can be considered with many
options, each of which can be implemented independently over the PON by using different channels, guaranteeing coexistence. The evolution
path enabled by Overlaid-PONs is more convenient when the aim is to permit coexistence
between different evolution generations and
technologies.
Important open issues need to be addressed.
First, an insightful cost analysis of future network evolution and investment is needed, for
which research on colorless ONUs is important.
Second, a smart allocation and coexistence of
new and existing users is needed, together with a
graceful combination of different types of users
such as residential and business subscribers.
Consequently, higher network revenue can be
obtained by designing the best user-coexistence
combination. Third, increasing the optical power
budget is essential to follow Overlaid-PON’s
solution. Fourth, an analysis of future PON technologies that relates them to cost and ease of implementation is needed. Finally, amplified PON
for longer reach is important to take into account
in PON evolution. Therefore, long-distance
effects over different technological candidates to
Next- and Future-Generation PONs should be
evaluated.
REFERENCES
[1] A. Banerjee et al., “Wavelength-Division Multiplexed
Passive Optical Network (WDM-PON) Technologies for
Broadband Access: A Review [Invited],” OSA J. Opt.
Net., vol. 4, no. 11, Nov. 2005, pp. 737–58.
[2] G. Kramer, Ethernet Passive Optical Networks, McGraw-Hill, 2005.
[3] F. Effenberger et al., “An Introduction to PON Technologies,” IEEE Commun. Mag., vol. 45, no. 3, Mar.
2007, pp. S17–S25.
[4] F. Effenberger et al., “Next-Generation PON-Part II: Candidate Systems for Next-Generation PON,” IEEE Commun. Mag., vol. 47, no. 11, Nov. 2009, pp. 50–57.
[5] J. Zhang et al., “Next-Generation PONs: A Performance
Investigation of Candidate Architectures for Next-Generation Access Stage 1,” IEEE Commun. Mag., vol. 47,
no. 8, Aug. 2009, pp. 49–57.
[6] M. Hajduczenia, H. Da Silva, and P. Monteiro, “10G
EPON Development Process,” Proc. Int’l. Conf. Transparent Optical Networks (ICTON), vol. 1, July 2007, pp.
276–82.
[7] K. McCammon and S. W. Wong, “Experimental Validation of an Access Evolution Strategy: Smooth FTTP Service Migration Path,” Proc. OFC/NFOEC 2007, Mar.
2007.
[8] F. Effenberger and H. Lin, “Backward Compatible Coexistence of PON Systems,” Proc. OFC/NFOEC 2009, Mar.
2009.
[9] L. Kazovsky et al., “Next-Generation Optical Access Network,” IEEE/OSA J. Lightwave Tech., vol. 25, no. 11,
Nov. 2007, pp. 3428–42.
[10] K. Choi et al., “An Efficient Evolution Method From
TDM-PON to Next-Generation PON,” IEEE Photonics
Tech. Lett., vol. 19, no. 9, 2007, pp. 647–49.
[11] J. Chen et al., “Cost vs. Reliability Performance Study
of Fiber Access Network Architectures,” IEEE Commun.
Mag., vol. 48, no. 2, Feb. 2010, pp. 56–65.
[12] A. Shami, M. Maier, and C. Assi, Eds., Broadband
Access Networks, Technologies and Deployments,
Springer, 2009.
[13] K. Kitayama, X. Wang, and N. Wada, “OCDMA over
WDM PON-Solution Path to Gigabit-Symmetric FTTH,”
IEEE/OSA J. Lightwave Tech., vol. 24, no. 4, Apr. 2006,
pp. 1654–62.
[14] D. Qian et al., “Optical OFDM Transmission in
Metro/Access Networks,” Proc. OFC/NFOEC 2009, Mar.
2009.
[15] J. M. Fabrega, L. Vilabru, and J. Prat, “Experimental
Demonstration of Heterodyne Phase-Locked Loop for
optical homodyne PSK receivers in PONs,” Proc. Int’l.
Conf. Transparent Optical Networks (ICTON), vol. 1,
June 2008, pp. 222–25.
BIOGRAPHIES
MARILET DE ANDRADE ([email protected]) received her
Ph.D. degree in Telematics Engineering from Universitat
Politècnica de Catalunya (UPC) in 2010, with Cum Laude
distinction. She worked for Telcel Bellsouth in Venezuela
(currently Telefónica Movistar) for three years, and she also
was a visiting scholar at the University of California, Davis,
for one year. Currently, she is a visiting researcher at the
Next Generation Optical Networks (NEGONET) group, KTH,
Sweden. Her research interests are PON evolution and
resource management in broadband access networks.
GLEN KRAMER ([email protected]) is a Technical Director of Ethernet Access at Broadcom Corporation. He joined Broadcom through its acquisition of Teknovus, Inc., where he served as Chief Scientist. He has done extensive research in the areas of traffic management, quality of service, and fairness in access networks. Glen chairs the IEEE
P1904.1 Working Group that develops a standard for Service Interoperability in Ethernet Passive Optical Networks.
Previously he served as chair of IEEE P802.3av “10 Gb/s
Ethernet Passive Optical Networks” task force and as EPON
protocol clause editor in IEEE 802.3ah “Ethernet in the First
Mile” task force. Prior to Teknovus, Glen worked at the
Advanced Technology Lab at Alloptic, Inc., where he was
responsible for design and performance analysis of PON
scheduling protocols and was involved in prototyping the
very first EPON system. He received his M.S. and Ph.D.
degrees in computer science from the University of California at Davis, where he was awarded an NSF grant to study
next-generation broadband access networks. Glen has
authored 16 patents. His book Ethernet Passive Optical
Networks has been published in English (McGraw-Hill,
2005) and Chinese (BUPT Press, 2007).
LENA WOSINSKA ([email protected])
received her Ph.D. degree
in photonics and Docent degree in optical networking from
KTH. She joined KTH in 1986, where she is currently an
associate professor in the School of ICT, heading a research
group in optical networking (NEGONET), and coordinating
a number of national and international scientific projects.
Her research interests include optical network survivability,
photonics in switching, and fiber access networks. She has
been involved in a number of professional activities including guest editorship of OSA, Elsevier and Springer journals,
membership in TPC of several conferences, as well as
reviewer for many journals and project proposals. From 2007 to 2009 she was an Associate Editor of the OSA Journal of Optical Networking, and since April 2009 she has served on the
Editorial Board of IEEE/OSA Journal of Optical Communications and Networking.
JIAJIA CHEN ([email protected])
received a B.S. degree in information engineering from Zhejiang University, Hangzhou,
China, in 2004, and a Ph.D. degree from the Royal Institute
of Technology (KTH), Stockholm, Sweden, in 2009. Currently, she is working as a post-doctoral researcher with the
Next Generation Optical Networks (NEGONET) group, KTH.
Her research interests include fiber access networks and
switched optical networks.
SEBASTIA SALLENT ([email protected])
received his Ph.D.
(1988) degree in Telecommunications Engineering at Universitat Politècnica de Catalunya (UPC), in Barcelona,
Spain. His field of study is optical communications, Internet architectures, and traffic measurement. Currently, he
holds a position of full professor at UPC, where he leads
the Broadband Networks research group within the
Department of Telematics Engineering. He is also the
Director of the i2Cat Foundation, a non-profit organization for the promotion of IT in Catalonia, Spain. He has
participated in more than 15 research projects, funded by
the EU (Federica, Phosphorus, NOVI, and Euro-NF, among others), the Spanish government, and private companies
(Pais, Tarifa). He is co-author of more than 100 publications. He has been President of the Spanish Telematic
Association. He has been a TPC Member for several conferences, and has served as a reviewer for several conferences and journals.
BISWANATH MUKHERJEE [F] ([email protected]) holds
the Child Family Endowed Chair Professorship at University
of California, Davis, where he has been since 1987, and
served as Chairman of the Department of Computer Science from 1997 to 2000. He received the B.Tech. (Hons)
degree from Indian Institute of Technology, Kharagpur, in
1980, and the Ph.D. degree from University of Washington,
Seattle, in 1987. He served as Technical Program Co-Chair
of the Optical Fiber Communications (OFC) Conference
2009. He served as the Technical Program Chair of the IEEE
INFOCOM ’96 conference. He is Editor of Springer’s Optical
Networks Book Series. He serves or has served on the editorial boards of eight journals, most notably IEEE/ACM
Transactions on Networking and IEEE Network. He is Steering Committee Chair of the IEEE Advanced Networks and
Telecom Systems (ANTS) Conference (the leading networking conference in India promoting industry-university interactions), and he served as General Co-Chair of ANTS in
2007 and 2008. He is co-winner of the Optical Networking
Symposium Best Paper Awards at the IEEE Globecom 2007
and IEEE Globecom 2008 conferences. To date, he has
supervised to completion the Ph.D. dissertations of 45 students, and he is currently supervising approximately 20
Ph.D. students and research scholars. He is author of the
textbook “Optical WDM Networks” published by Springer
in January 2006. He served a 5-year term as a Founding
Member of the Board of Directors of IPLocks, Inc., a Silicon
Valley startup company. He has served on the Technical
Advisory Board of a number of startup companies in networking, most recently Teknovus, Intelligent Fiber Optic
Systems, and LookAhead Decisions Inc. (LDI).
ACCEPTED FROM OPEN CALL
On Assuring End-to-End QoE in
Next Generation Networks:
Challenges and a Possible Solution
Jingjing Zhang and Nirwan Ansari
ABSTRACT
In next generation networks, voice, data, and
multimedia services will be converged onto a single network platform with increasing complexity
and heterogeneity of underlying wireless and
optical networking systems. These services
should be delivered in the most cost- and
resource-efficient manner with ensured user satisfaction. To this end, service providers are now
switching the focus from network Quality of Service (QoS) to user Quality of Experience (QoE),
which describes the overall performance of a
network from the user perspective. High network QoS can, in many cases, result in high
QoE, but it cannot assure high QoE. Optimizing
end-to-end QoE must consider other contributing factors of QoE such as the application-level
QoS, the capability of terminal equipment and
customer premises networks, and subjective user
factors. This article discusses challenges and a
possible solution for optimizing end-to-end QoE
in Next Generation Networks.
INTRODUCTION
Currently, Wireless Broadband Access (WBA)
technologies are being rapidly deployed while traditional telecom networks are migrating to Internet Protocol (IP) technology. The future will
witness a clear trend of Fixed Mobile Internet
Convergence (FMIC) in Next Generation Networks (NGN) [1]. To realize this convergence,
NGN will employ an open architecture and global interfaces to create a multi-vendor and multi-operator network environment. Moreover, NGN
will employ multiple networking technologies for
the best service provisioning. While core networks in NGN are going to employ a common
network layer protocol to carry the current and
foreseeable future services, the access networks
will use a variety of technologies, such as 2G/3G,
LTE, WiMAX, UWB, WLAN, WPAN, Bluetooth, Ethernet cable, DSL, and optical fiber, to
meet the diversified requirements from end
users. Under the multi-operator, multi-network,
and multi-vendor converged network environment, users are expected to experience ubiquitous, heterogeneous wireline and wireless high-bandwidth network access as well as diversified
service provisioning.
Since NGN can offer multiple services over a
single network, it potentially simplifies network
operation and management, and thus reduces operational expenditure (OPEX). While enjoying the benefit of decreased OPEX, service providers will encounter fierce competition enabled by the availability of fixed-mobile
convergence. In order to sustain and sharpen
their competitive edges, service providers need
to satisfy users’ needs to retain and attract lucrative customers. For this reason, service providers
may explore management and control decisions
based on user Quality of Experience (QoE). As
the ultimate measure of services tendered by a
network, QoE is defined as the overall acceptability of an application or service as perceived
subjectively by the end-user [2].
Figure 1 illustrates typical constituents in an
NGN. The core network consists of four major
candidate transport technologies, i.e., ATM, Ethernet, IP, and IP/MPLS, where IP-based core
networks possess two QoS models (DiffServ and
IntServ) standardized by IETF. The access networks accommodate various wireless and wireline
access technologies to provide consistent and
ubiquitous services to end users. End-to-End
(E2E) communications between users or between
a user and an application server may span fixed
and wireless mobile networks belonging to multiple operators and employing multiple networking
technologies with their respective characteristics
from different aspects, such as QoS models, service classes, data rates, and mobility support. The
multiplicity of provider domains and diversity of
transport technologies pose challenges for network interconnection, interworking, and interoperation, and therefore E2E QoE. QoE includes
the complete E2E system effects ranging from
users, terminals, customer premises networks,
core and access networks, to services infrastructures. Besides the E2E network QoS, QoE is
affected by many other factors such as user subjective factors, capabilities of terminal devices,
properties of the applications, and characteristics
of the user’s physical environment. Such a variety
of contributing factors of QoE exacerbates the difficulty of assuring E2E QoE.
[Figure 1. Typical constituents in NGN: customer premises networks and their terminals (e.g., computers) connect through access technologies such as 2G/3G cellular, LTE, WiMAX, WLAN, WPAN, UWB, xDSL, Ethernet cable, FTTx, PSTN, and ISDN to multiple core network domains based on IP/MPLS, ATM, DiffServ, and IntServ.]
From the user’s perspective, in order to assure
user QoE, transport functions and application-level parameter configurations should be adaptive
to other influencing factors of QoE such as user
subjective factors. From the network’s perspective, the NGN system needs to intelligently allocate its resources among all users and properly
adjust its transport functions to satisfy all users’
demands. However, many challenging issues, such
as QoE measurement, monitoring, diagnosis, and
management, must be addressed before these
goals can be achieved. It requires efforts across
all layers of the protocol stack of each traversed
network [13]. That is to say, functions such as
admission control, access network selection, routing, resource allocation, QoS mapping, transmission control, session establishment, and source
coding are expected to be adaptive to user QoE.
Instead of addressing one of these challenging problems or investigating solutions to assure
QoE for one particular application, this article
discusses possible challenging issues involved in
assuring E2E QoE for all users in an NGN, and
describes the general framework of an E2E QoE
assurance system, which can possibly be implemented in an NGN to assure user QoE.
The rest of the article is organized as follows.
We first discuss the intrinsic properties of QoE.
Then, the challenges involved in assuring E2E
QoE are described. Finally, we detail the constituents and functions of the proposed E2E QoE
assurance system, and then conclude the article.
PROPERTIES OF QOE
QoE has many contributing factors, among
which some are subjective and not controllable,
while others are objective and can be controlled
[3, 4]. Subjective factors include user emotion,
experience, and expectation; objective factors
consist of both technical and non-technical
aspects of services. The end-to-end network
quality, the network/service coverage, and the
terminal functionality are typical technical factors, and ease of service setup, service content,
pricing, and customer support are some examples of non-technical factors. Poor performance
in any of these objective contributing factors can
degrade user QoE significantly.
Some of these subjective and objective factors
are dynamically morphing during an on-line session, while some others are relatively stable and
are less likely to change during a user’s session.
Dynamically changeable factors include user subjective factors and some technical factors, in particular, network-level QoS. Relatively stable
factors include non-technical factors and some
technical factors such as network coverage. In
addressing the real-time E2E QoE assurance
problem in this article, we assume that users are
satisfied with the performances of those relatively
stable factors. QoE possesses the following properties owing to the variety of contributing factors.
USER-DEPENDENT
Users receive different QoE even when they are
provided with services of the same qualities.
First, users may show different preferences
towards their sessions established over the network. For example, residential subscribers and
business subscribers may exhibit rather different
preferences over on-line gaming and file transfer
services, respectively. Second, owing to the differences in user subjective emotion, experience,
and expectation, users may yield different subjective evaluations for services with the same
objective QoS. Furthermore, users’ preferences
over sessions, and their emotion, experience, and
expectation factors, may not be stable but vary
from time to time.
APPLICATION-DEPENDENT
NGN will enable and accommodate a broad range
of applications, including voice telephony, data,
multimedia, E-health, E-education, public network computing, messaging, interactive gaming,
and call center services. Applications exert different impacts on user QoE. First, from the user
perspective, applications are of different importance to different users. Second, these applications may have diversified network-level QoS
requirements [3]. Voice, video, and data constitute three main categories of applications. Generally, voice and video are more delay and jitter
sensitive than data traffic is. Each of the three
categories further encompasses a number of
applications with different QoS requirements. For
example, video conferencing and real-time
streaming TV belong to the video category; nevertheless, users may have higher requirements on
the perceived resolution, transmission rates,
delay, and jitter for real-time streaming TV than
those for video conferencing. Third, each application may use its own parameters to quantify application-level QoS. Resolution, frame rate, color,
and encoding schemes are typical parameters for
video applications; HTML throughput and HTML
file retrieval time are parameters for web access
applications. Different application-level QoS performances bring different effects on user QoE.
TERMINAL-DEPENDENT
Currently, a variety of terminal devices are available to accommodate an application. For video
applications, the terminal device can be a cell
phone, a PDA, a computer, or a TV. Each of
these devices is characterized by its own media
processing and terminal capabilities, such as resolution, color, panel size, coding, and receiver sensitivity. The capabilities of terminal devices may
blur the perceptual difference between network
provisioned functionalities and terminal enabled
functionalities. Terminal equipment (TE) affects
users’ QoE in three main ways. First, owing to the
powerful processing and storage capabilities of
the devices, users with more powerful devices
may experience higher QoE when they are provisioned with the same network-level QoS. Second,
in order to capitalize on the merits of devices,
users with more powerful devices may require the
network to provision higher QoS. For example, as
compared to users with the standard definition
TV, users with the high definition TV may have
higher expectations on their received QoS, and
are likely to desire higher bit rate and lower data
loss of TV signal transmission. Third, user QoE
may greatly depend on the performances of terminal devices, such as energy consumption of cell
phones and PDAs.
TIME-VARIANT AND DIFFICULT TO CONTROL
Many contributing factors of QoE change over
time and are difficult, if not impossible, to control. First, user subjective factors may fluctuate
and cannot be controlled by transport functions
and application-layer configurations. Second, in
wireless communications, multi-path propagation
and shadowing induce dynamically changing
wireless channel conditions, which will have significant impact on user received signal strength,
and thus network-level QoS, and finally QoE.
Owing to the above properties, QoE should be managed on a per-user, per-application, and per-terminal basis in a real-time
manner in NGN. However, to achieve this goal
many challenging issues need to be addressed.
CHALLENGES IN
E2E QOE ASSURANCE
This section discusses several important challenging issues in assuring a sustained user QoE
in real-time. These issues include but are not
limited to QoE measurement, monitoring, diagnosis, and management.
QoE Measurement: For online QoE measurement, there are two general approaches: the subjective approach and the objective approach [4].
With the subjective approach, users evaluate
and give scores to their experienced services in
real-time. The subjective method may generate
accurate measurement results since QoE reflects
users’ subjective perception to the service. However, users are usually unlikely to spend time in
evaluating their experienced services unless poor
QoE is experienced [5], let alone provide
detailed information about causal factors of their
poor experience in real-time. Such limited information provided by the subjective approach
challenges the following QoE diagnosis process,
which is an essential part of QoE assurance.
Besides, with the subjective approach alone,
users may take advantage of the measurement
system to demand higher quality than they
deserve or maliciously consume network
resources and degrade other users’ QoE.
The objective approach derives the subjective
user QoE by using algorithms or formulas based
on the objective parameters of networks, application, terminals, environment, and users. This
method usually models QoE as functions of
application-level and network-level QoS parameters, and then refines the model by theoretical
derivation [6] or testing subjective QoE [4].
Machine learning or computational intelligence,
such as neural networks and genetic algorithms,
may be employed to learn user subjective perception based on the historical QoE information
of users to deduce the subjective measurement
[7]. Recently, many research efforts have been
made to improve the accuracy of objective measurement. However, there is no standard technology to map objective parameters to QoE for
all applications, all terminal devices, and all user
subjective factors.
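To make the objective approach concrete, the following Python sketch maps a single network-level QoS parameter, the packet loss ratio, to an estimated QoE score on a 1-to-5 MOS-like scale using a generic exponential relationship in the spirit of [6]; the function name estimate_qoe and the constants alpha, beta, and gamma are purely illustrative placeholders that would have to be calibrated against subjective test data for each application, terminal, and user population.

import math

def estimate_qoe(loss_ratio, alpha=3.0, beta=20.0, gamma=1.5):
    # Illustrative exponential QoS-to-QoE mapping: QoE falls off quickly as
    # the loss ratio grows. The parameter values are hypothetical, not calibrated.
    qoe = alpha * math.exp(-beta * loss_ratio) + gamma
    return max(1.0, min(5.0, qoe))  # clamp to the 1..5 MOS range

if __name__ == "__main__":
    for loss in (0.0, 0.01, 0.05, 0.10):
        print(f"loss={loss:.2f}  estimated QoE={estimate_qoe(loss):.2f}")

In practice the mapping would take several QoS parameters plus user- and terminal-dependent coefficients as inputs, which is exactly why no single standard mapping exists.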
QoE Monitoring and Feedback: Since QoE
characterizes the perception of services experienced by end users, accurate QoE performance
should be measured and monitored at end users,
and then fed back to the network [8].
In order for the NGN system to respond
promptly to a degraded QoE, the QoE of end
users is expected to be fed back to each network
in real-time. However, it takes some time for the
QoE value to reach networks and sources that
can be users or application servers. QoE values
may be outdated by the transmission delay that
will further mislead the transport function adjustment and the application-layer parameter configurations. On the other hand, frequent
reporting or probing QoE and QoS parameters
can help transport networks and sources track
the user status more accurately, but the extra
injected traffic may increase the network burden.
In order to prevent QoE degradation, it is
necessary to monitor the status of each network
element in the E2E path of a user session [9].
Core routers, edge routers, access nodes, and
wireless channels are typical network elements.
However, for one particular network element, it
is hard to tell the degree of the impact of its performance on the E2E QoE without the information of all the other network elements’
performances. Therefore, ideally, each network
element needs to be monitored in real time. This
will introduce high monitoring overhead. Moreover, the performances of all network elements
need to be incorporated together to obtain the
E2E effect. However, this is difficult to achieve
in an NGN that is distributed and heterogeneous
in nature.
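As a hedged illustration of the incorporation step, the sketch below combines per-element monitoring results into E2E figures under the common simplifying assumptions that one-way delays add and that losses on different elements are independent; the element list and the numbers are hypothetical, and gathering such per-element data across administrative domains is precisely the difficulty noted above.

# Combine per-element QoS measurements into an E2E estimate (illustrative).
# Assumptions: one-way delays are additive; per-element losses are independent.
elements = [
    # (element, one-way delay in ms, loss ratio) -- hypothetical values
    ("access node", 5.0, 0.001),
    ("edge router", 2.0, 0.0005),
    ("core router", 10.0, 0.0002),
    ("wireless channel", 15.0, 0.01),
]

e2e_delay = sum(delay for _, delay, _ in elements)

delivered = 1.0
for _, _, loss in elements:
    delivered *= (1.0 - loss)
e2e_loss = 1.0 - delivered

print(f"E2E delay ~ {e2e_delay:.1f} ms, E2E loss ~ {e2e_loss:.4f}")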
QoE Diagnosis: When poor QoE is experienced, it is better to figure out the causal factor
of QoE degradation so as to improve QoE. However, this may not be easy to achieve for three
reasons. First, since user subjective factors are
dynamically morphing and difficult to measure,
it is not easy to distinguish the variation of subjective factors from causal objective QoS performance degradation. Second, performances of
some contributing factors of QoE, especially
non-technical aspects of services, may not be
available for diagnosis. Inaccurate diagnosis may
result without comprehensive information on all contributing factors. Third, network-level QoS performances are determined by all traversed networks, which may belong to different domains and do not disclose detailed information to each other. As a result, it may be difficult to know the exact network element that causes the poor performance.

[Figure 2. The abstraction of the E2E QoE assurance in NGN: data flows from a source through a chain of networks (Network 1, Network 2, ...) to the user, whose TE observes the resulting QoE.]
QoE Management: First, as a multitude of
users with a variety of applications and terminal
devices are being developed and accommodated
at a rapid pace in NGN, managing QoE on a
per-user, per-application, and per-terminal basis
raises the scalability issue. Second, achieving a
target QoE requires that the performances in
each QoS metric satisfy certain quantitative
requirements. However, guaranteeing quantitative QoS is a challenging issue in networks with
qualitative QoS control such as DiffServ. Third,
achieving a given QoS requires proper adjustment of transport functions such as access network selection, routing, QoS mapping, QoS
budget allocation, resource allocation, admission
control, scheduling, queuing, and transmission
control [10]. None of these functions is easy to address.
E2E QoE assurance may involve some other
challenging issues. For a given application, some
unique issues may exist in assuring QoE, and
hence call for specific solutions. In particular,
QoE assurance for VoIP and IPTV applications
has received intensive research attention recently
[11–13]. Rather than addressing the above
described challenging issues or addressing the
QoE assurance problem for one particular application, we propose one possible E2E QoE assurance system that aims at ensuring QoE for all
users in an NGN.
AN E2E QOE ASSURANCE SYSTEM
In this section, we will describe the general
framework of a proposed E2E QoE assurance
system, which can possibly be implemented in
NGN to assure user QoE.
[Figure 3. The major functions of the E2E QoE assurance system: (1) the TE collects QoE/QoS reports and (2) sends them out; each network (3) receives QoE/QoS reports, (4) probes the network status and adjusts transport functions, and (5) sends out an updated QoE/QoS report; the source (6) receives QoE/QoS reports and (7) adjusts application-level configurations.]
The E2E QoE assurance system is designed
based on two assumptions. First, motivated by
bettering their own experiences, users are ready
to enable their devices with the function of
reporting their received QoE and QoS performances by using some particular chips or software in their TE. Second, motivated by attracting
more customers, service providers would like to
maximize user QoE in allocating resources and
configuring their networks.
Figure 2 shows the abstraction of the E2E
QoE assurance system, which is modeled as a
closed-loop control system. Generally, TE measures user QoE/QoS performances and feeds
back these values to networks and sources; networks and sources adjust their respective functions accordingly based on their received
QoE/QoS measurement results. Theoretically,
the overall data transmission system is considered as a closed-loop control system, with user
QoE as the system output, and source and network configuration parameters as control variables. QoE is determined by the network and
source configurations, which are in turn configured based on QoE.
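A minimal sketch of this closed loop, under the assumption of a single control variable (an allocated bit rate) and a stand-in rate-to-QoE curve, is shown below; the target value, the step size, and the function measured_qoe are hypothetical, and the point is only the loop structure: the TE measures QoE, the feedback reaches the network, and the network nudges its configuration until the target is met.

import math

TARGET_QOE = 4.0   # hypothetical management target on a 1..5 scale
STEP_MBPS = 0.5    # hypothetical adjustment step for the transport function

def measured_qoe(rate_mbps):
    # Stand-in for the TE-side measurement: QoE grows with the allocated
    # rate and saturates near 5. This curve is illustrative, not calibrated.
    return min(5.0, 1.0 + 4.0 * (1.0 - math.exp(-rate_mbps / 2.0)))

rate = 1.0  # initial network configuration (Mb/s)
for step in range(20):
    qoe = measured_qoe(rate)   # the TE measures QoE and feeds it back
    if qoe >= TARGET_QOE:      # the network checks it against the target
        break
    rate += STEP_MBPS          # the network adjusts its transport function

print(f"stabilized after {step} iterations: rate={rate:.1f} Mb/s, QoE={qoe:.2f}")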
Figure 3 describes the major constituents of
the QoE assurance system as well as their functions. The system contains two major components: the QoE/QoS reporting component at TE,
and the QoE management component at networks and sources. The QoE/QoS reporting
component collects user QoE/QoS parameters,
and then reports them to networks and sources.
The QoE management component receives
QoE/QoS reports, analyzes them locally, and
adjusts their transport functions or reconfigures
application parameters accordingly. After the
adjustment, the QoE management component
estimates the up-to-date QoE/QoS performances
of end users, and then sends the updated information further to other networks and sources.
We shall next detail the constituents and
functions of the QoE/QoS reporting component
and the QoE management component.
QOE/QOS REPORTING COMPONENT
As described in Fig. 4, the QoE/QoS reporting
component contains four blocks: the network-level QoS measurement block, the application-level QoS measurement block, the user
subjective QoE measurement block, and the
QoE/QoS reporting block. Both network-level
QoS and application-level QoS can be derived
by analyzing the received packets. For subjective
QoE measurement, we assume that users will
interact with the terminal device when they
experience poor performances, and the interactions between the user and the terminal device
can help derive user subjective QoE.
The function of the QoE/QoS reporting block is to prepare and send out the report message. The
report message can be sent out periodically or
only when performance degradation happens.
The latter approach can reduce the extra traffic
injected into the network as well as the cost
related to reporting. Regarding the report message, it may contain all these three kinds of measurement results such that networks and sources
can have comprehensive information about the
user. However, this may incur a large report message. An alternative is to report only the performances of the QoS metrics that do not meet
the requirements. This intelligent reporting
scheme implies some QoE diagnosis capability
within the QoE/QoS reporting block.
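A hedged sketch of such an intelligent, threshold-triggered reporting scheme is given below: the report carries only the metrics whose measured values violate their limits, so a session that meets every requirement generates no report at all. The metric names and limits are illustrative placeholders, not values taken from any standard.

# Threshold-triggered, metric-selective QoE/QoS reporting (illustrative).
# "max" thresholds are upper bounds (delay, jitter, loss); "min" thresholds
# are lower bounds (throughput). All limits are hypothetical.
THRESHOLDS = {
    "delay_ms":        ("max", 150.0),
    "jitter_ms":       ("max", 30.0),
    "loss_ratio":      ("max", 0.01),
    "throughput_mbps": ("min", 2.0),
}

def build_report(measured):
    # Return only the violated metrics, or None if there is nothing to report.
    violations = {}
    for metric, value in measured.items():
        kind, limit = THRESHOLDS[metric]
        if (kind == "max" and value > limit) or (kind == "min" and value < limit):
            violations[metric] = {"measured": value, "limit": limit}
    return violations or None

sample = {"delay_ms": 180.0, "jitter_ms": 12.0,
          "loss_ratio": 0.002, "throughput_mbps": 1.4}
print(build_report(sample))  # reports only delay_ms and throughput_mbps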
QOE MANAGEMENT COMPONENT
Figure 5 shows the implementation of the QoE
management block in NGN. Functions implemented in the QoE management component
belong to the service stratum. In order to manage user QoE, the QoE management component
interacts with the Network Attachment Control
Function (NACF) and Resource and Admission
Control Functions (RACF) in the transport stratum to negotiate network-level QoS and adjust
transport functions accordingly.
Figure 6 describes constituents of the QoE
management component. It contains four blocks:
the user QoE database, the QoE/QoS performance receiving/transmitting block, the QoE
inference/diagnosis block, and the QoE control/
management block.
QoE database: Owing to the properties of
QoE, the QoE database is organized on a per-user, per-terminal, and per-service basis. For a
given service and TE, QoE of the user is considered as a function of network-level QoS performances, application-level QoS performances,
and user subjective factors.
[Figure 4. The block diagram of the QoE/QoS reporting component: network-level QoS measurement, application-level QoS measurement, user subjective QoE measurement, and QoE/QoS reporting blocks in the TE, which (1) trigger QoE/QoS reporting and (2) collect QoE/QoS performances.]

Based on the fact that poor performance in any of the objective parameters may result in significant QoE degradation regardless of good performances in all other factors, each QoS metric may need to satisfy certain threshold requirements in order to achieve a given QoE value.
For some QoS metrics, such as packet loss ratio,
delay, and jitter, the threshold requirements are
the maximum allowable value, while for some
other QoS metrics, such as throughput and picture resolution, the threshold requirements are
the minimum allowable value. These threshold
requirements can characterize QoE functions,
and are stored in the QoE database.
User subjective factors affect user QoE and
impact the threshold requirements on objective
QoS performances. Considering the dynamically
changing user subjective factors, the above
threshold requirements of objective QoS metrics
are not deterministic, but vary within some
ranges. These variation ranges are stored in the
QoE database as well.
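One way such an entry might be organized is sketched below, assuming a simple table keyed by (user, terminal, service); the thresholds, and the variation ranges that absorb the fluctuation of subjective factors, are hypothetical numbers chosen only to illustrate the structure, not recommended values.

# A per-user, per-terminal, per-service QoE database entry (illustrative).
# Each metric carries a nominal threshold plus a variation range reflecting
# dynamically changing user subjective factors.
qoe_database = {
    ("user-1", "hdtv", "iptv"): {
        "target_qoe": 4.0,
        "thresholds": {
            # metric: (kind, nominal limit, variation range)
            "loss_ratio":      ("max", 0.001, (0.0005, 0.002)),
            "delay_ms":        ("max", 100.0, (80.0, 150.0)),
            "jitter_ms":       ("max", 20.0,  (10.0, 30.0)),
            "throughput_mbps": ("min", 8.0,   (6.0, 10.0)),
        },
    },
}

entry = qoe_database[("user-1", "hdtv", "iptv")]
for metric, (kind, limit, rng) in entry["thresholds"].items():
    print(f"{metric}: {kind} {limit} (may vary within {rng})")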
QoE/QoS receiving/transmitting block: The
function of this block is to receive the QoE/QoS
reports, and report to other networks with the
updated QoE/QoS performances. After the QoE
management component adjusts network transport functions, QoE/QoS performances of end
users change accordingly. This block gets the
updated QoE/QoS performances from the QoE
inference/diagnosis block and reports them to
other networks.
[Figure 5. The implementation of the QoE management component in NGN: QoE/QoS reporting resides in the customer premises (TE), while QoE management functions in the service stratum of each network interact with the NACF and RACF in the transport stratum across the access and core segments.]

[Figure 6. The block diagram of the QoE management component: a user QoE database, a QoE/QoS receiving/transmitting block, a QoE inference/diagnosis block, and a QoE control/management block, with a QoS negotiator toward the transport layer.]

QoE inference/diagnosis block: This block has two main functions. One is to infer QoE by using the objective QoE measurement approach;
the other is to diagnose the causal factors leading to QoE degradation. For given QoS performances, the corresponding QoE can be inferred
from the information stored in the QoE
database. QoE diagnosis is the reverse process
of QoE inference. QoE diagnosis can be fulfilled
by comparing the actual QoS performances with
the threshold requirements for a target QoE.
Besides QoS performances, the report may
contain the user QoE value measured by the
subjective approach at the user end. There may
exist disagreement between the inferred QoE
and the reported QoE value. To narrow down
their difference, the objective QoE measurement
model is dynamically modified by adjusting the
threshold requirements of QoS metrics.
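The sketch below illustrates, with the same kind of illustrative threshold model as above, how the block might infer a QoE value, flag the violated metrics as the likely causes, and then nudge those thresholds when the user-reported QoE disagrees with the inferred one; the simple per-violation penalty and the multiplicative adaptation rule are hypothetical choices, not the article's prescribed method.

# QoE inference, diagnosis, and threshold adaptation (illustrative).
thresholds = {"delay_ms": ("max", 100.0), "loss_ratio": ("max", 0.001)}

def diagnose(measured):
    # Return the metrics whose measured values violate their thresholds.
    bad = []
    for metric, value in measured.items():
        kind, limit = thresholds[metric]
        if (kind == "max" and value > limit) or (kind == "min" and value < limit):
            bad.append(metric)
    return bad

def infer_qoe(measured, target_qoe=4.0, penalty=1.0):
    # Crude inference: start from the target and subtract a penalty per violation.
    return max(1.0, target_qoe - penalty * len(diagnose(measured)))

def reconcile(measured, reported_qoe, rate=0.1):
    # If the user reports better QoE than inferred, relax the violated limits
    # a little; if worse, tighten them (hypothetical adaptation rule).
    factor = 1.0 + rate if reported_qoe > infer_qoe(measured) else 1.0 - rate
    for metric in diagnose(measured):
        kind, limit = thresholds[metric]
        thresholds[metric] = (kind, limit * factor)

m = {"delay_ms": 120.0, "loss_ratio": 0.0004}
print("violations:", diagnose(m), " inferred QoE:", infer_qoe(m))
reconcile(m, reported_qoe=4.5)
print("adapted thresholds:", thresholds)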
QoE control/management block: The function
of this block is to determine the target QoE for
users and negotiate with the Resource Admission
Control Function (RACF) and the Network
Attachment Control Function (NACF) in the
transport stratum to achieve the target QoE. In
the ideal case, the network resources can provide every user with the highest QoE. When users
have large traffic demands and the ideal case
cannot be achieved, equalizing QoE among users,
or maximizing the sum of QoE of all users, can
be regarded as the objective of QoE management. After determining the target QoE of users,
this block communicates with the QoE inference/diagnosis block to derive the corresponding
required QoS performances, and then negotiates
with the RACF and NACF functions to achieve
these QoS requirements. Solutions for determining the target QoE of users and adjusting transport functions to achieve this QoE are rather
network specific. Generally, they are much easier to address in networks with quantitative QoS control, such as IntServ with RSVP, than in networks with qualitative QoS control such as DiffServ.
Addressing these two problems, though important and critical, is not the focus of this article.
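As a hedged illustration of the equalization objective, the sketch below splits a fixed bandwidth budget among a few users by repeatedly giving the next small slice of capacity to whichever user currently has the lowest QoE, reusing a saturating rate-to-QoE curve of the kind sketched earlier; the budget, the user set, and the per-user sensitivity parameters are hypothetical, and a deployed RACF-driven mechanism would of course be far more involved.

import math

# Hypothetical per-user sensitivity of QoE to the allocated rate (Mb/s);
# a larger k means the user needs more rate to reach the same QoE.
users = {"voice": 0.5, "web": 1.5, "video": 4.0}

def qoe(rate_mbps, k):
    # Illustrative saturating rate-to-QoE curve on a 1..5 scale.
    return min(5.0, 1.0 + 4.0 * (1.0 - math.exp(-rate_mbps / k)))

BUDGET_MBPS = 12.0
SLICE_MBPS = 0.1
alloc = {u: 0.0 for u in users}

# Greedy max-min allocation: each slice goes to the currently worst-off user,
# which drives the users' QoE values toward a common level.
for _ in range(int(round(BUDGET_MBPS / SLICE_MBPS))):
    worst = min(users, key=lambda u: qoe(alloc[u], users[u]))
    alloc[worst] += SLICE_MBPS

for u in users:
    print(f"{u:5s}: rate={alloc[u]:4.1f} Mb/s, QoE={qoe(alloc[u], users[u]):.2f}")

Maximizing the sum of QoE instead would simply change the greedy criterion to the user with the largest marginal QoE gain per slice.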
In the proposed E2E QoE assurance system,
each network independently and locally maximizes the QoE of its users. If all networks in the
NGN implement the same QoE management
functions and regard equalizing the QoE of their users
as the management objective, all users in the
inter-connected NGN environment will be provided with the same QoE when the closed-loop system enters the stable state. In a real implementation, the detailed internal constituents
and functions of the QoE management component are decided by each network itself. Different networks may have different objective QoE
measurement models. Some networks may not
want to implement a QoE management component, and some networks may want to maximize
the sum of QoE of their users rather than equalizing the QoE of all users. Owing to these differences, users may experience different QoE
depending on the networks their sessions traverse. When the networks traversed by a session
cannot provide the desired network quality for
the user to achieve a good QoE, the user and
source may try to adjust parameters at their
sides or select other networks to traverse.
CONCLUSION
Owing to the time-variant, user-dependent,
application-dependent, and terminal-dependent
properties of QoE, E2E QoE assurance is particularly challenging in the multi-vendor, multi-provider, and multi-network environment of
NGN. E2E QoE depends on the effects of the
whole system, including networks, terminals, customer premises networks, and users. To assure
user QoE, network operations in all vertical network layers of all network elements may need to
be performed based on user real-time QoE.
However, achieving this goal needs to address
many challenging issues, among which QoE
measurement, monitoring, diagnosis, and management are typical ones. In this article, we propose an E2E QoE assurance system that contains
two major components: a QoE/QoS performance reporting component installed at TE, and
the QoE management component installed at
networks and sources. The QoE/QoS reporting
components measure QoE and QoS performances received by users, and then report them
to networks and sources. The QoE management
components adjust transport functions and
reconfigure application-layer parameters to maximize user QoE. Since each network independently and locally maximizes the QoE of its
users, the E2E QoE assurance system can possibly be implemented in an NGN that is distributed and heterogeneous in nature. Generally, E2E
QoE assurance in an NGN still needs to address
many research issues, and will receive intense
research attention from both academia and
industry, driven by the strong desire to generate
revenues and increase the competitiveness of
service providers.
REFERENCES
[1] K. Knightson, N. Morita, and T. Towle, “NGN Architecture: Generic Principles, Functional Architecture, and
Implementation,” IEEE Commun. Mag., vol. 43, no. 10,
Oct. 2005, pp. 49–56.
[2] ITU-T, “P.10/G.100 (2006) Amendment 1 (01/07): New
Appendix I, Definition of Quality of Experience (QoE),” 2007.
[3] T. Rahrer, R. Faindra, and S. Wright, “Triple-play Services Quality of Experience (QoE) Requirements,” Architecture & Transport Working Group, Technical Report
TR-126, DSL Forum, 2006.
[4] P. Brooks and B. Hestnes, “User Measures of Quality of
Experience: Why Being Objective and Quantitative is
Important,” IEEE Network, vol. 24, no. 2, Mar.–Apr.
2010, pp. 8–13.
[5] K. Chen, C. Tu, and W. Xiao, “OneClick: A Framework
for Measuring Network Quality of Experience,” Proc.
IEEE INFOCOM 2009.
[6] M. Fiedler, T. Hossfeld, and P. Tran-Gia, “A Generic
Quantitative Relationship between Quality of Experience and Quality of Service,” IEEE Network, vol. 24, no.
2, Mar.–Apr. 2010, pp. 36–41.
[7] Y. Zhang et al., “QoEScope: Adaptive IP Service Management for Heterogeneous Enterprise Networks,”
Proc. 17th Int’l. Wksp. Quality of Service (IWQoS), July
2009.
[8] D. Soldani, “Means and Methods for Collecting and
Analyzing QoE Measurements in Wireless Networks,”
Proc. Int’l. Wksp. Wireless Mobile Multimedia (WOWMOM), June 26–29, 2006.
[9] J. Asghar, F. Le Faucheur, and I. Hood, “Preserving
Video Quality in IPTV Networks,” IEEE Trans. Broadcasting, vol. 55, no. 2, June 2009, pp. 386–95.
[10] D. Soldani, M. Li, and R. Cuny, QoS and QoE Management in UMTS Cellular Systems, Wiley, 2006.
[11] T. Huang et al., “Could Skype Be More Satisfying? A
QoE-Centric Study of the FEC Mechanism in an Internet-Scale VoIP System,” IEEE Network, vol. 24, no. 2,
Mar.–Apr. 2010, pp. 42–48.
[12] A. Mahimkar et al., “Towards Automated Performance
Diagnosis in a Large IPTV Network,” Proc. ACM SIGCOMM 2009.
[13] M. Volk et al., “Quality-Assured Provisioning of IPTV
Services within the NGN Environment,” IEEE Commun.
Mag., vol. 46, no. 5, May 2008, pp. 118–26.
BIOGRAPHIES
JINGJING ZHANG (S’09) received the B.E. degree in electrical
engineering from Xi’an Institute of Posts and Telecommunications, Xi’an, China, in 2003, the M.E. degree in electrical engineering from Shanghai Jiao Tong University,
Shanghai, China, in 2006, and the Ph.D. degree in electrical engineering at the New Jersey Institute of Technology
(NJIT), Newark, in May 2011. Her research interests include
planning, capacity analysis, and resource allocation of
broadband access networks, QoE provisioning in next-generation networks, and energy-efficient networking. Ms.
Zhang received a 2010 New Jersey Inventors Hall of Fame
Graduate Student Award.
N IRWAN A NSARI [S’78, M’83, SM’94, F’09] received the
B.S.E.E. (summa cum laude with a perfect GPA) from the
New Jersey Institute of Technology (NJIT), Newark, in
1982, the M.S.E.E. degree from University of Michigan,
Ann Arbor, in 1983, and the Ph.D. degree from Purdue
University, West Lafayette, IN, in 1988. He joined NJIT’s
Department of Electrical and Computer Engineering as
Assistant Professor in 1988, tenured and promoted to
Associate Professor in 1993, and has been Full Professor
since 1997. He has also assumed various administrative
positions at NJIT. He authored Computational Intelligence
for Optimization (Springer, 1997, translated into Chinese
in 2000) with E.S.H. Hou, and edited Neural Networks in
Telecommunications (Springer, 1994) with B. Yuhas. His
research focuses on various aspects of broadband networks and multimedia communications. He has also contributed over 350 technical papers, over one third of
which were published in widely cited refereed
journals/magazines. He has also guest edited a number of
special issues, covering various emerging topics in communications and networking. He was/is serving on the Advisory Board and Editorial Board of eight journals, including
as a Senior Technical Editor of IEEE Communications Magazine (2006–2009). He had/has been serving the IEEE in
various capacities such as Chair of IEEE North Jersey COMSOC Chapter, Chair of IEEE North Jersey Section, Member
of IEEE Region 1 Board of Governors, Chair of IEEE COMSOC Networking TC Cluster, Chair of IEEE COMSOC Technical Committee on Ad Hoc and Sensor Networks, and
Chair/TPC Chair of several conferences/symposia. Some of
his recent awards and recognitions include IEEE Leadership Award (2007, from Central Jersey/Princeton Section),
the NJIT Excellence in Teaching in Outstanding Professional Development (2008), IEEE MGA Leadership Award
(2008), the NCE Excellence in Teaching Award (2009), a
number of best paper awards, a Thomas Alva Edison
Patent Award (2010), and designation as an IEEE Communications Society Distinguished Lecturer.
ADVERTISERS’ INDEX
Company
Page
Cisco ...........................................................................................................Cover 2
ECOC 2011 ................................................................................................87
European Microwave Week .....................................................................23
GL Communications.................................................................................25
IEEE Communications Society TechFocus ............................................3
IEEE Communications Society Tutorials/Evolution to 4G Wireless ...11
IEEE Communications Society Tutorials/IPv6.......................................1
IEEE Communications Society Tutorials/Next-Generation Video/Television
Services and Standards ........................................................................15
OFC/NFOEC.............................................................................................5
Samsung .....................................................................................................Cover 4
Wiley-Blackwell .........................................................................................Cover 3
ADVERTISING SALES OFFICES
Closing date for space reservation:
1st of the month prior to issue date
NATIONAL SALES OFFICE
Eric L. Levine
Advertising Sales Manager
IEEE Communications Magazine
3 Park Avenue, 17th Floor
New York, NY 10016
Tel: (212) 705-8920
Fax: (212) 705-8999
Email: [email protected]
SOUTHERN CALIFORNIA
Patrick Jagendorf
7202 S. Marina Pacifica Drive
Long Beach, CA 90803
Tel: (562) 795-9134
Fax: (562) 598-8242
Email: [email protected]
NORTHERN CALIFORNIA
George Roman
Roy McDonald Assoc.
2336 Harrison Street
Oakland, CA 94612
Tel: (702) 515-7247
Fax: (702) 515-7248
Cell: (702) 280-1158
Email: [email protected]
SOUTHEAST
Scott Rickles
560 Jacaranda Court
Alpharetta, GA 30022
Tel: (770) 664-4567
Fax: (770) 740-1399
Email: [email protected]
EUROPE
Rachel DiSanto
Huson International Media
Cambridge House, Gogmore Lane
Chertsey, Surrey, KT16 9AP
ENGLAND
Tel: +44 1428608150
Fax: +44 1 1932564998
Email: [email protected]
AUGUST 2011 EDITORIAL PREVIEW
Modeling & Analysis of Wireless Networks Utilizing Game Theory
Design and Implementation
Energy Efficiency in Communications
SEPTEMBER 2011 EDITORIAL PREVIEW
Optical Communications
Mobile Communications Middleware for Devices and Applications
Networking Testing
Radio Communications
20% Discount On These New Books
TECHNOLOGY
COMMUNICATIONS
Visit www.wiley.com and
quote code VB438* when you order
Connect with us:
WBComms
www.twitter.com/WBComms
*Discount valid until 31/07/2011
Sign up for email
alerts. Visit:
www.wiley.com/commstech