WIK-Consult • Final Report
Study for the European Commission
The Economics of IP Networks
– Market, Technical and Public
Policy Issues Relating to
Internet Traffic Exchange
Main Report
and
Annexes
Bad Honnef, May 2002
Disclaimer
The opinions expressed in this Study are those of the
authors and do not necessarily reflect the views of the
European Commission.
© ECSC – EC – EAEC, Brussels – Luxembourg 2002
Authors:
Dieter Elixmann
Mark Scanlan
With contributions from:
Alberto E. García
Klaus Hackbarth
Annette Hillebrand
Gabriele Kulenkampff
Anette Metzler
Table of Contents
List of Figures  V
List of Tables  VI
Table of Contents of Annexes  VII
List of Annexed Figures  VIII
List of Annexed Tables  VIII
1 Introduction and Aims  1
Part I – Internet systems and data
2 General features of the Internet  5
2.1 Applications and services generating traffic on the Internet  5
2.2 How traffic is exchanged  9
2.3 Types of ISPs  13
2.4 Roots of the Internet  16
3 Technical background to traffic exchange: Addressing, routing and "Autonomous Systems"  18
3.1 Internet addressing  18
3.1.1 Names and the Domain Name System  18
3.1.2 IPv4 addressing schemes  19
3.1.3 IPv6 addressing issues  22
3.2 Internet routing  23
3.2.1 Routing protocols  23
3.2.2 Static, dynamic and default routing  25
3.2.3 IPv6 routing issues  26
3.3 Autonomous Systems  27
3.3.1 Partitioning of a network into ASes  28
3.3.2 Multi-Exit Discriminator  30
3.3.3 Confederations  30
4 Quality of service  31
4.1 The QoS problem put in context  31
4.2 The quality of Internet service  34
4.2.1 What is QoS?  34
4.2.2 Congestion and QoS  38
4.3 QoS problems at borders  38
4.4 QoS and the Next Generation Internet  40
4.5 Conclusion: Service quality problem areas  45
5 Interconnection and Partial Substitutes  46
5.1 The structure of connectivity  46
5.2 Interconnection: Peering  47
5.2.1 Settlement free peering  47
5.2.2 Paid peering  48
5.2.3 Private peering  49
5.2.4 Public peering  51
5.2.5 Structural variations in peering  51
5.3 Interconnection: Transit  56
5.3.1 Rationale  56
5.3.2 The structure of transit prices  57
5.3.3 QoS guarantees for transit  59
5.4 Multi-homing  60
5.5 Hosting, caching, mirroring, and content delivery networks  61
5.6 A loose hierarchy of ISP interconnection  64
6 Empirical evidence concerning traffic exchange arrangements in the Internet  67
6.1 Characteristics of Internet exchange points  67
6.1.1 Methodology and data sources  67
6.1.2 NAPs in the USA and Canada  68
6.1.3 Important NAPs in Europe  71
6.1.4 Additional features of European NAPs  77
6.1.5 U.S. Internet backbone providers at non-U.S. international NAPs  78
6.1.6 Additional features characterising a NAP  80
6.2 Main Internet backbone players  81
6.2.1 Basic features of the empirical approach  81
6.2.2 Internet Backbone Providers in Europe  82
6.2.3 Internet Backbone Providers in North America  84
6.3 Internet growth, performance and traffic flows  85
6.3.1 Routing table growth  85
6.3.2 AS number growth  87
6.3.3 Internet Performance  88
6.3.4 Internet Traffic  92
6.4 About Peering  94
6.4.1 Peering policies  94
6.4.2 New peering initiative in the U.S.  96
6.5 Capacity exchanges  97
Part II – Public policy
7 Economic background to Internet public policy issues  99
7.1 Economic factors relevant in analysing market failure involving the Internet  99
7.1.1 The basis of public policy interest in this study  99
7.1.2 Externalities  101
7.1.3 Market power  109
7.1.4 Existing regulation  112
8 Possible public policy concerns analysed in terms of market failure  114
8.1 About congestion management on the Internet  114
8.1.1 QoS and the limitations of cheap bandwidth  114
8.1.2 Pricing and congestion  116
8.1.3 Grade-of-Service pricing  123
8.1.4 Conclusion regarding the congestion problem  124
8.2 Strategic opportunities of backbones to increase or use their market power  125
8.2.1 Network effects  125
8.2.2 Network effects and interconnection incentives  126
8.2.3 Differentiation and traffic exchange  132
8.2.4 Fragmentation and changes in industry structure  136
8.2.5 Price discrimination  138
8.3 Standardisation issues  142
8.4 Market failure concerning addressing  145
8.4.1 The replacement of IPv4 by IPv6  145
8.4.2 Routing table growth  152
8.4.3 AS number growth  154
8.4.4 Conclusions regarding addressing issues  154
8.5 Existing regulation  155
8.6 Summary of public policy interest  158
8.7 Main findings of the study  160
References  164
List of companies and organisations interviewed  171
Glossary  172
References
164
List of companies and organisations interviewed
171
Glossary
172
List of Figures
Figure 2-1: Value flows and payment flows of Internet based communication  8
Figure 2-2: Basic features of a private end-user's access to the content and services on the Internet  10
Figure 2-3: Hierarchical view of the Internet (I)  12
Figure 2-4: Interrelationship of different types of ISPs to enable Internet communication  13
Figure 3-1: Example of an IP address expressed in dotted decimal notation  19
Figure 3-2: Routing of traffic of an end-user connected to two ISPs  29
Figure 4-1: Relation of current and future logical layer with physical layers  32
Figure 4-2: End-to-end QoS  33
Figure 4-3: Application specific loss and delay variation QoS requirements  36
Figure 4-4: Demand for Internet service deconstructed  42
Figure 4-5: Fitting GoSes within service QoS requirements  44
Figure 5-1: Hierarchical View of the Internet (II): Peering and transit relationships between ISPs  46
Figure 5-2: Technical obstacles behind paid peering  49
Figure 5-3: Secondary peering and transit compared  50
Figure 5-4: Functional value chain of a NAP  52
Figure 5-5: Multihoming between ISPs  60
Figure 5-6: A visual depiction of caching  62
Figure 5-7: A visual depiction of mirroring  63
Figure 5-8: Basic features of Content Delivery Networks  64
Figure 6-1: Overview of cities with NAPs in Europe  72
Figure 6-2: Cities hosting the 13 most important NAPs in Europe  75
Figure 6-3: Development of the number of routes on the Internet between January 1994 and November 2001  86
Figure 6-4: Development of the number of ASes on the Internet for the period October 1996 through November 2001  87
Figure 6-5: Development of latency on the Internet between 1994 and Sept 2000  89
Figure 6-6: Development of packet loss on the Internet between 1994 and Sept 2000  90
Figure 6-7: Development of reachability on the Internet between 1994 and Sept 2000  91
Figure 6-8: Daily traffic of CERN  92
Figure 6-9: Weekly traffic of CERN  93
Figure 7-1: Network effects and private ownership  104
Figure 7-2: Maximum Internet externality benefits for private subscriptions  105
Figure 7-3: Cost allocation under flat-rate pricing and the effect on penetration  107
Figure 8-1: VoIP; 'on-net' and 'off-net'  134
Figure 8-2: Degraded interconnection and non-substitutable services having strong network effects  135
Figure 8-3: IPv4 address depletion  146
Figure 8-4: Non-neutral treatment of communications services in the USA  157
List of Tables
Table 4-1: Traffic hierarchies in next generation networks  43
Table 5-1: A taxonomy of Internet exchange points  55
Table 6-1: Features of the most important NAPs in the USA and Canada (ordered according to the number of ISPs connected)  69
Table 6-2: Classification of NAPs in Western and Eastern Europe (as of May 2001)  73
Table 6-3: Features of the 13 most important NAPs in Europe (ordered according to the number of ISPs connected)  76
Table 6-4: Founding members of EURO-IX (as of May 2001)  78
Table 6-5: Important Internet backbone providers in Europe (minimum of 5 NAP connections to different international nodes of global importance in Europe)  83
Table 6-6: Important Internet backbone providers in North America as of 2000 (minimum of 4 NAP connections to different important nodes in North America)  84
Table of Contents of Annexes
A Annex to Chapter 3  178
A-1 Names and the Domain Name System  178
A-2 IPv4 addressing issues  179
A-3 Route aggregation  186
B Annex to Chapter 4  190
B-1 Internet protocols, architecture and QoS  190
IP/TCP/UDP  190
RTP  192
Resource reSerVation Protocol  193
IntServ  194
DiffServ  196
ATM and AAL  198
Multiprotocol over ATM  204
SDH and OC  204
MPLS  205
Future architectures  207
B-2 Technologies deployed by ISPs in Europe  210
B-3 Interoperability of circuit-switched and packet-switched networks in "Next Generation Networks"  211
About H.323  212
About Session Initiation Protocol (SIP)  214
H.323/SIP Interworking  216
B-4 ENUM  217
B-5 Adoption of new Internet architectures  219
C Annex to Chapter 6  220
C-1 Important Internet traffic exchange points and players  220
C-2 Peering guidelines of Internet backbone providers  238
D Annex for Chapter 8  250
D-1 Modelling the strategic interests of core ISPs in the presence of network effects  250
List of Annexed Figures
Figure A-1: Hierarchical structure of the domain name system  179
Figure A-2: Address formats of Class A, B, C addresses  180
Figure A-3: Recursive division of address space using VLSM  183
Figure A-4: Possibility of route aggregation and reduction of routing table size by using VLSM  186
Figure A-5: CIDR and Internet routing tables  188
Figure A-6: The effect of a change of an ISP on routing announcements in a CIDR environment  189
Figure B-1: OSI and Internet protocol stack  190
Figure B-2: Protocol-architecture enabling multi-service networking with IP  191
Figure B-3: Integrated services capable router  195
Figure B-4: Differentiated services field in the IP packet header  197
Figure B-5: Fully-meshed IP routers – the n² problem  200
Figure B-6: Seven layer by three planes OSI model  207
Figure B-7: Types of layered architectures  207
Figure B-8: Technologies for QoS and backbone technologies used by ISPs in Europe  210
Figure B-9: H.323 network architecture  213
Figure B-10: H.323 protocol layers  214
Figure D-1: Range of possible outcomes of a global degradation strategy  252
Figure D-2: Profitable regions of targeted degradation strategy  254
List of Annexed Tables
Table B-1: QoS and ATM Forum service categories  202
Table B-2: Suitability of ATM Forum service categories to applications  203
Table B-3: Factors that can degrade an ATM network's QoS  203
Table B-4: SIP interworking with ITU-T protocols  216
Table C-1: The most important public Internet exchange points in North-America 2001  220
Table C-2: The most important public Internet exchange points in North-America 2000  222
Table C-3: Extended list of Internet Exchange Points in the USA and Canada (ordered along number of ISPs connected)  224
Table C-4: Most important IXPs in Europe (ordered according to number of ISPs connected) [as of May 2001]  231
Table C-5: Overview of peering guidelines – Broadwing Communications  238
Table C-6: Overview of peering guidelines – Cable&Wireless  240
Table C-7: Overview of peering guidelines – Electric Lightwave  241
Table C-8: Overview of peering policy – France Télécom  243
Table C-9: Overview of peering guidelines – Genuity  245
Table C-10: Overview of peering guidelines – Level 3  246
Table D-1: Interconnection strategies and network tipping  253
Table D-2: Feasible upper bounds for c  255
1 Introduction and Aims
There appears to be widespread acceptance in the Internet industry that the Internet will
in time largely replace circuit-switched telecommunications networks and the services they
supply. Implicit in this view is that the Internet is presently closer in its development to
its beginning than its end, be that in terms of time scale, numbers of subscribers, or the
scale of the impact it will have on our lives.
Already there are enormous amounts of information and educational content on the
Internet, goods and services are sold on the Internet, entertainment is provided over the
Internet, people communicate with each other over the Internet, and the Internet is also a
means through which a great deal of work is co-ordinated in our economy.
This study provides data, descriptions and analysis of commercial traffic exchange as it
concerns Internet backbone service providers (also known as IBPs {Internet backbone
providers}, or core ISPs).1 As such the study mainly looks into the core of the
Internet and only addresses what is happening between end-users and their ISPs to the
extent that this provides valuable insights for traffic exchange at the core.2
The main subject areas included in the study are:
• A basic description of the technical and economic features of the Internet, including those concerning routing, addressing, quality of service (QoS) and congestion;

• The arrangements that exist, are developing or look likely to emerge, for commercial traffic exchange in the Internet, and

• An analysis of the risk or fact of market failure; mainly externalities and market power, where each has the potential to give rise to the other, and where market failure occurs, whether or not the issues are such that the Commission might rightly get more closely involved.3

1 During the course of the project we agreed with the Commission to change the wording of the title of the study by replacing the word 'backbone' by 'traffic exchange', with a particular focus on commercial traffic exchange, by which we mean traffic exchange involving transit providers. The reason this occurred is that there were difficulties in defining 'the backbone', and that some issues that are important for this study have their causes downstream from what could be called the Internet backbone. However, we still sometimes use the terms "backbone" and IBP (Internet backbone provider) in the study.

2 Arguably, the first time this aspect of the Internet attracted a great deal of interest from the authorities was in 1998, when the European Union and US authorities intervened over the proposed merger between MCI and WorldCom. Similar issues emerged again in 2000 over the proposed merger between WorldCom and Sprint. One of the main things the competition law authorities had to do at those times was to find a proper market definition, and to understand how competition between IBPs worked. Otherwise stated, what was needed was for the merger authorities to come to an understanding of the industry's structure and whether market power was or might become a problem as a result of the merger, and how any identified problems might occur. In the present study it is, however, not intended to analyse relevant markets and related market power issues, as would be undertaken by a competition law authority. We have addressed some of these issues in our study, "Market definitions and regulatory obligations in communications markets". See Squire, Sanders & Dempsey and WIK (2002).
The study looks at the issues from several perspectives. First, with regard to technical
issues the main focus is on technologies that govern the way the Internet works.
Understanding these issues helps in coming to an understanding of the way ISPs relate
to each other, including strategies that leading players may employ to shape the
evolution of the market, and possibly, to distort competition or impede its development.
Second, we are interested in the institutional and contractual side of traffic exchange
arrangements. Of particular interest are the peculiarities of the commercial relationships
that exist between ISPs that may be in an upstream and downstream relationship to each
other, but can also be in competition with each other.
Third, there is a focus on economics and economic incentives. Issues here include:
network effects, industry structural factors, and strategies of the market players as they
affect traffic exchange on the Internet. Of these, network effects are most important. For
example, are the largest players able to manipulate interconnection in a way that
increases their market power?
Additionally, the study presents empirical evidence about the exchange of Internet
traffic. Data presented includes, for example, the identity of the main players providing
traffic exchange on the Internet, where they are exchanging traffic, and information
about the growth of Internet traffic.
There are three categories of market failure that we look for: (i) market power; (ii)
externalities, and (iii) existing regulations. (i) and (ii) are dealt with in detail in the report.
(iii) makes up a smaller section of the study as Internet backbones are not regulated
directly. Nevertheless, regulations do surround the Internet, even if they are not written
specifically for it. Given that the Internet is converging with the regulated industries that
surround it (e.g. traditional telecoms and broadcasting), there are predictable regulatory
problem areas looking forward, and we outline the basis of these.
Our approach with each category of market failure is to look at the issues that would
appear to be of particular interest to public policy makers and their advisers, with a view
to coming to a conclusion as to whether the authorities should be seriously concerned
that some form of market failure may be present or may occur in the near future.4
There are several other areas of policy interest not addressed by this study, including:
decency, confidentiality, security, and not least, universal service issues and the rapid
3 Thus, the present study takes on a broader perspective than would be adopted by merger authorities
in that we analyse the industry with a view to identifying whether there is presently, or is likely to
develop in the near future, significant market failure.
4 There are a number of excellent papers that provide an overview of some of the issues we address in
this study. Two that come to mind are Cawley (1997) and OECD (1999).
uptake of broadband access by households and SMEs.5 These topics are of direct
concern to consumers. None of these issues will feature to a significant degree in this
study, i.e. our study is not concerned per se with end-users and the ISPs that serve them.
The Internet is a complex and rapidly evolving industry, and we suspect it is beyond the
skills and knowledge of any organisation or individual to predict its future in any detail.
Thus, while we look at present and future technology that may affect competition and
the relationships between Internet firms that make up the Internet backbone, there is
always the risk that future developments could detract from the relevance of our
discussion.
The report is divided into two parts: Part I "Internet systems and data" is largely
descriptive. It describes the types of players, provides a description of Internet addressing and numbering systems, and includes a discussion of Internet technology.
More specifically, Chapter 2 gives an overview of general features of the Internet
focusing on applications and services that are generating traffic on the Internet, the way
traffic is exchanged and a basic characterisation of the types of ISPs that provide
services. Chapter 3 deals with the technical background to traffic exchange, in
particular, addressing, routing and "Autonomous Systems". Chapter 4 discusses Quality
of Service (QoS) issues. We look at the technical features of QoS, why QoS is
important for real-time services, as well as the range of QoS problems that are holding
back the Next Generation Internet (NGI).6 Chapter 5 is devoted to the features of the
two categories of interconnection, namely peering and transit, as well as services that
function as partial substitutes to these two interconnection categories. In addition we
explain the loose hierarchy of ISP interconnection that makes up the Internet. Chapter 6
is concerned with empirical evidence concerning traffic exchange arrangements in the
Internet. The main topics dealt with are: characteristics of Internet exchange points; the
main Internet backbone providers; information about Internet growth; performance and
traffic; evidence on peering policies, and some remarks on the role of capacity
exchanges.
Part II focuses on areas where there is some unease as to the industry's ability to
resolve problem areas in the public interest, and which are related in some significant
way to commercial traffic exchange. This Part is written with those authorities in mind
who are concerned with (tele)communications competition policy and regulation. The
approach is to provide an analysis of the issues with a view to determining whether the
authorities' 'interest' may in practice translate into a genuine public policy concern, either now or potentially in the short to medium term; i.e. whether the authorities ought to consider taking on a role alongside the Internet community, be that role relatively peripheral, such as a mediating role where there are a large number of diverse interests involved, or in providing for some sort of regulatory rule(s) to correct an identified market failure. Issues tackled in Part II are: network effects, competition and market power; problems that are delaying the next generation Internet, including QoS issues; and Internet scaling problems and how they are being addressed. Specifically, Chapter
7 introduces the economic ideas relevant to public policy analysis of the Internet. We
discuss relevant types of market failure, and the importance of network effects. Chapter
8 is where we discuss demand management, concerns involving addressing, and
strategic competition issues involving ISPs. The analysis is aimed at uncovering
evidence of any significant market failure, including externalities and competition
distortions, and includes a brief discussion of potential future regulatory problem areas
involving converging industries.
Due to the complexity of the subject matter we have included a lot of technical, explanatory and background material in the annexes for easy reference.

5 We discuss these in WIK (2000).

6 We define the Next Generation Internet as a network of networks integrating a large number of services and competing in markets for telephony and real-time broadcast and interactive data and video, in addition to those services traditionally provided by ISPs.
Part I – Internet systems and data
2 General features of the Internet
This chapter provides conceptual descriptions of pertinent features of the Internet and
background information necessary to analyse how the various players relate to each
other commercially and technically. This section is devoted to aspects of Internet and IP
services both from an engineering and an economic perspective. It aims at describing
the exchange of traffic on the Internet.
2.1 Applications and services generating traffic on the Internet
In the following we briefly describe the varying kinds of services and applications that
are provided over the public Internet.
Example 1 focuses on sending and receiving e-mails. This mode of communication
consists of conveying written messages and attachments, such as a Word file or a
PowerPoint presentation, between a sender and a receiver. To enable this to occur the
receiver’s address must be known by the sender (or his PC {host}), however, it is not
necessary that he knows the location of the receiver.
Example 2 deals with the case where you are interested in an annual report of a
company quoted on the New York Stock Exchange. Downloading such a file is in
principle an easy task if you contact the web site of the Securities and Exchange
Commission (SEC) and use their search engine "EDGAR".
Assume for the purposes of our 3rd example that you would like to perform an electronic
banking transaction via your bank account. Provided you are an authorized user of this type of communication, this requires you to contact the web site of your bank, identify yourself, and give the system transaction-specific information. Usually this is done via a menu-based set of instructions.
In example 4 suppose you would like to offer to sell or buy something in an (electronic)
auction. Provided you have fulfilled the notification conditions with an auctioneer, which typically include giving the auctioneer your credit card number, you can participate in
an auction by simply contacting the web site of the auctioneer, from where you are
guided further by a menu that assists you in reaching the desired electronic bidding
room.
Example 5 considers the case where music or video content is offered for downloading. There are several different models by which this can be organized.7 One is peer-to-peer file sharing. Exchanging music or video content under this model requires membership of a community which is organised on a specific electronic platform. Members need appropriate terminal equipment and specific software which usually can be downloaded from the operator of the platform. Once the software is installed you visit the web site of the operator, make use of a search engine, select the desired item from a list of available items, and downloading the content can begin. In a peer-to-peer model
the content is not stored centrally, rather, a file is downloaded directly from the
computer of another member. Thus, the operator of the platform mainly acts as an
information broker.
In example 6 we would like to mention the case of streaming media, i.e. a specific mode
of sending and receiving video and audio content. The end-user needs specific terminal
equipment (e.g. a sound card and/or a graphic card, and loudspeakers). A crucial
element of the streaming model is a specific streaming software (player software). The
end-user can download this software from the Internet. The software is free for the
customer and offered by firms like Microsoft and Real Networks. The video and audio
content provider also uses this software to encode the content and pays a fee to the
software developer. Provided the end-user has obtained the appropriate software, all he has to do is to contact the web site of the content provider and to activate the software, resulting in a stream of datagrams passing from the server on which the content is stored to the end-user. There are two streaming technology alternatives. If bandwidth is low, content is usually stored in a buffer on the local computer or device before the user can make use of it. If bandwidth is not a limiting factor, the data packets may reach the end-user 'immediately' without use of a buffer. It is obvious that streaming resembles a one-to-one communication mode. Another way of getting video and/or audio content to end-users is through multicast. Multicasting is a one-to-many communication mode (broadcasting model). Multicasting operates by requiring a multicast server to keep membership lists of multicast clients that have joined a multicast group. There are differences between real-time and non real-time services as regards streaming media and multicasting. In the case of a real-time service, such as would be used for a live music concert, the server capacity normally has to be much higher than for non real-time, in order to meet demand.
Example 7 relates to a technology that has been developed since the mid 1990’s which
enables users to make telephone calls over the public Internet. Three different modes of
communication can be distinguished with respect to early Internet telephony depending
on the terminal devices that are involved: PC-to-PC, PC-to-Phone or Phone-to-PC, and
Phone-to-Phone, where Phone means a traditional PSTN or ISDN terminal device.8

7 Video and music file sharing involves complex copyright issues. We do not address these issues in this study.

8 A detailed description of the different modes of Internet telephony and an analysis of the technical arrangements and the strategies of the players involved can be found in Kulenkampff (2000).

If a PC is involved in Internet telephony, specific terminal equipment (software and
hardware) is necessary. PC-to-PC Internet telephony is only possible if both
communications partners are online. For each type of Internet telephony where a phone
is involved, a physical and logical connection is needed, which requires interconnection
between the PSTN and the Internet. There are several network devices necessary for
this interconnection as well as contractual arrangements between network operators in
order to establish the connection and to facilitate billing. Usually, Internet telephony in
which a phone is involved is not provided by the PSTN (fixed link telephony) carrier of
which the end-user is a customer, rather, it is provided by specific companies acting as
intermediaries. Since the late 1990’s several options have been under development
with the purpose of establishing a simpler protocol for establishing and the tearing down
telephone calls over IP networks. At the same time terminal devices have been
developed which look like a traditional telephone receiver but are in fact IP-based
devices.9
There are many more examples of Internet services and applications purchased by end
users. Moreover, not only private end-users originate traffic on the Internet. There are
many applications originated by business customers (and their employees) like remote
maintenance, remote access for home workers, telelearning, procurement via e-business platforms etc.10 One can also view these services from a supply-side perspective; organisations of very different nature possess content or enable the
production of content.11 To make this content available to the outside world they need
access to the Internet.
On the one hand the applications mentioned so far have very different characteristics, especially with respect to the following:

• the logic of the scheme which identifies names and addresses;

• technical parameters necessary to enable an appropriate QoS. An important feature of Internet based communication is whether it is time-critical or not. Internet telephony is an example of a time-critical application, whereas e-mail is not time-critical;

• whether the communication partners are online or not, and

• bandwidth requirements, where, for example, bandwidth requirements differ enormously between streaming video and a short e-mail message.
9 The protocol is known as Session Initiation Protocol (SIP) and is backed by the Internet Engineering
Task Force (IETF). For a comprehensive treatment of the SIP protocol and SIP-based networks the
reader is referred to Sinnreich and Johnston (2001). Basic characteristics of SIP can also be found in
Annex B.
10 Often these applications are facilitated with the help of private IP-based networks, i.e. infrastructure
which is logically separate from the public Internet.
11 "Content" should be understood here in a very broad sense and denotes something which can be
consumed electronically and which is valuable to individuals or companies.
On the other hand the varying kinds of services and applications mentioned above are
all generating IP-based traffic. Communication between the millions of IP devices via the IP protocol involves a more or less complex chain of transactions between end-users, networks, operators and content suppliers all over the world.12 In
addition, there are many different payment flows between the entities involved in
Internet communication. The stylised facts of these relationships are presented in the
next figure.
Figure 2-1: Value flows and payment flows of Internet based communication
[Figure: value flows link the end-user, local infrastructure, Internet access, transit services, content delivery/web-hosting and content/products; payment flows comprise subscriber line and call charges, subscriptions, lease charges, advertising/hosting fees, transit fees and direct content orders/micro payments. Services existing independently of the Internet are shown alongside.]
Source: Cable and Wireless (2001, modified)
12 Often the IP protocol is mentioned together with the TCP (Transmission Control Protocol). We will see
later that IP is located on layer 3 of the OSI protocol scheme whereas TCP is located on the next
higher layer. TCP, as its name suggests, controls the stream of packets initiated by the IP
protocol. However, TCP is not the only protocol which is used for applications running over IP. For
more details see Annex B.
2.2 How traffic is exchanged
A customer wanting access to the public Internet needs access to an Internet Service
Provider (ISP). Technically, Internet access occurs at a point of presence where
modems and routers accept IP-based traffic initiated by end-users to be delivered to the
public Internet, and where traffic from the public Internet is forwarded to end-users.
There are several alternatives used in order to hook up an end-user to the network of
the ISP.
Large companies are usually connected to the ISP’s point of presence through direct
access links on the basis of a permanent ISDN connection, leased lines with high
capacity, Frame Relay or ATM. Small companies and private users, however, typically
connect to an ISP through low capacity leased lines or dial-up connections over
analogue, ISDN, DSL, mobile or cable access networks.13 Figure 2-2 illustrates the
arrangements necessary for access to the Internet for a private user.14
The traffic generated by the end user generally does not terminate on the network of his
ISP, rather, it is destined for a host belonging to the network of another ISP. The traffic
therefore has to be forwarded in order to reach its termination point via devices called
routers, and this usually occurs over complex network infrastructures.15
We have seen in the preceding section that Internet communication often involves
access to content. This content has to be stored and it has to be made available on the
Internet by server hosting and housing.16 The housing location can be the point of
13 The technologies we refer to here are involved in transporting IP traffic and involve different
physical and data layers. For more details see Appendix B. We do not discuss access to the Internet
in any detail as the subject of this study is commercial traffic exchange. A few remarks will suffice.
Using the traditional PSTN or ISDN network means that access to the Internet is normally treated like
a telephone call. This holds true at least for the local access part of the network, i.e. the physical
infrastructure between the terminal device(s) at the customer premises and the Main Distribution Frame (MDF). Between the MDF and the PoP of the ISP, Internet traffic is usually multiplexed and transported via ATM based networks. As regards xDSL solutions there are several alternatives which can be distinguished by bandwidth and whether or not bandwidth is identical for upstream and downstream traffic. xDSL solutions also use the copper infrastructure of the local loop; however, Internet traffic and regular PSTN traffic are separated "as early as possible", i.e. at the end-user's
premises by so-called splitters. If Internet access is provided by a cable-modem solution it is no longer
the PSTN copper network which is used, rather, the cable-TV networks are normally made up of a
high proportion of fibre. Wireless Local Loop (WLL) is a non-wireline technology for which frequency
spectrum is used. Further details on access technologies can be found in Distelkamp (1999); Höckels
(2001); WIK (2000).
14 The network access server depicted in Figure 2-2 contains the corresponding modem pool on the access network side and, on the IP network side, the first edge router.
15 These network infrastructures comprise transmission links connecting the network nodes of the ISP
and the operation of transmission links connecting the ISP‘s network to networks of other ISPs. In
addition Internet exchange points and data centres will normally also be operated.
16 Housing means the accommodation of servers. Security features (e.g. admission control, fire alarm
etc.) are important in the provision of housing space.
presence, the network operation centre of an ISP's network, or a special data centre operated by the ISP or a specialised supplier (e.g. a web farm).17

17 Of course the above value chain is highly aggregated. Further disaggregation could e.g. differentiate between the ownership and usage of network infrastructure, content and services. Furthermore, looking at the customer and his or her demand for services, billing and customer care could be added to the value chain. For the purpose of this study these expansions are, however, not necessary.
Figure 2-2: Basic features of a private end-user's access to the content and services on the Internet
[Figure: a client PC with a modem (analogue/DSL) or ISDN card reaches a network access server and router over the access network (PTT, GSM or cable); behind the router lies the ISP network with the ISP's services: Radius server, mail server, web server, DNS server and cache.]
Source: WIK-Consult
Thus, in principle one can view the Internet as consisting of the following building blocks:

• IP devices like a PC, a workstation or a server;

• Several of these devices connected with one another by a Local Area Network (LAN);

• Several LANs connected to the network of a single regionally focused ISP within a specific country (e.g. a local ISP);

• Several networks of regionally focused ISPs connected to a network of a large national ISP;

• Several networks of large national ISPs connected to the network of an international ISP which has, however, still a regional focus;

• Several networks of international ISPs with a regional focus connected to the network of an international ISP with a multi-regional or even worldwide focus, and

• The interconnection between the networks of the international ISPs with a multi-regional or even worldwide focus.
These networks have several differences with respect to architecture, equipment
vendors, number of customers, traffic patterns, transmission capacity, quality of service
standards etc. However, all of these networks taken together (which are usually called the Internet) are able to communicate with each other by virtue of the IP protocol, i.e. by a packet-based communication platform.18
Yet, any specific network in this system of networks is not connected directly to all the
other networks. Rather, the Internet is arranged in a loosely hierarchical system where
each ISP is only directly connected to one or a few other ISPs.19 The basic structure
can be seen from Figure 2-3.
18 Viewed from the perspective of the OSI scheme of data communication, IP is a platform for services and applications. On the other hand IP communication requires supporting physical and data layer technologies. See Annex B for more details.

19 There are different modes of connection, which differ especially with respect to prices and the location where the interconnection takes place.

Figure 2-3 shows the types of ISP mentioned above, with the relationship between them described as follows.

• At the bottom there are users in different countries. The bulk of the users are connected to a local ISP (indicated in country A as a line between the end-user and local ISPs A1 and A2). However, an end-user need not necessarily be connected to a local ISP; rather, if the user has a lot of traffic it might be directly connected to an intra-national backbone provider (A4) or even an international backbone provider (C1).

• In many cases local ISPs are connected to one or more intra-national backbone providers (like ISP A2 to A3 and A4). However, it can also occur that a local ISP (A1) has a connection both to an intra-national backbone provider (A3) and a regionally focused international backbone provider (C1). Moreover, a local ISP (A1) may in some particular cases have a direct link to another local ISP (A2).

• Intra-national backbone providers are usually linked directly (as occurs between A3 and A4). Moreover, they are linked to regionally focused international backbone providers (like A3 to C1 and A4 to C2). At least some intra-national backbone providers (like A3) have a direct link to a multi-regional international backbone provider (D1).

• As the name indicates, international backbone providers (like C1, C2 and D1) are connected to (intra-national) ISPs from different countries. An international backbone provider with a regional focus (like C1) will in all likelihood be connected to more than one international backbone provider with a regional focus (like C2). Moreover, in most cases, international backbone providers with a regional focus have at least one direct connection with an international backbone provider with a multi-regional or worldwide focus (like C1 with D1 and D2).

• Eventually, the international backbone providers with a multi-regional or worldwide focus (like D1,...,D4) each have direct links with one another.
Figure 2-3: Hierarchical view of the Internet (I)
[Figure: end-users in countries A, B and C connect to local ISPs (A1, A2, ...), which connect to intra-national backbone providers (A3, A4, ...); these in turn connect to international backbone providers with a regional focus (C1, C2, ...) and to international backbone providers with a multi-regional or worldwide focus (D1, ..., D4), which are linked with one another.]
Source: WIK-Consult
The figure makes clear that traffic originated in one country and destined for the same
country does not necessarily remain entirely domestic traffic.
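To make the point concrete, the following sketch (ours, not the study's) encodes one configuration that Figure 2-3 permits: local ISP A1 is homed directly on international backbone C1, while local ISP A2 reaches the world via intra-national backbone A4 and international backbone C2. A breadth-first search over the ISP-level links then shows traffic between two end-users in country A transiting the international backbones. The link choices are illustrative assumptions consistent with the figure, not data from the study.

    from collections import deque

    # Hypothetical ISP-level links, consistent with Figure 2-3: A1 and A2 are
    # local ISPs in country A, A4 an intra-national backbone, C1 and C2
    # international backbone providers with a regional focus.
    LINKS = {
        "A1": {"C1"},
        "A2": {"A4"},
        "A4": {"A2", "C2"},
        "C1": {"A1", "C2"},
        "C2": {"C1", "A4"},
    }

    def route(src, dst):
        """Breadth-first search for a shortest ISP-level path."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in LINKS[path[-1]] - seen:
                seen.add(nxt)
                queue.append(path + [nxt])
        raise ValueError("no connectivity between %s and %s" % (src, dst))

    # Traffic between two local ISPs in the same country crosses the
    # international backbones: A1 -> C1 -> C2 -> A4 -> A2.
    print(" -> ".join(route("A1", "A2")))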
Private and business end-users as well as content providers20 want the Internet to
provide them with global reach. However, the hierarchy of ISPs makes it obvious that
no single ISP alone can provide its customers with access to worldwide addresses,
content and services. Rather, irrespective of its size or coverage, each ISP needs the
support of other ISPs in order to get universal access to customers and content. This
situation is depicted in the following Figure.
Figure 2-4: Interrelationship of different types of ISPs to enable Internet communication
[Figure: an end-user reaches a content provider's server via a modem, local ISPs, national ISPs and the routers of international backbone ISPs.]
Source: based on WorldCom (2001)
2.3 Types of ISPs
The development of the public Internet has brought about a rich diversity of types of
ISPs. Unfortunately, there is no unique and generally accepted definition of what
constitutes an ISP. In the preceding section we identified local ISPs, intra-national
backbone providers, international backbone providers with a regional focus and the
international backbone providers with multi-regional or worldwide activities in the IP-
20 A content provider focuses on a specific type of Internet service such as WWW servers, portal sites,
search engines, news servers or FTP archives. A content provider usually buys network services from
ISPs.
business. At a functional level this classification mirrors that applicable to Internet traffic
exchange.21
Minoli and Schmidt (1999, p. 38) subdivide the ISP business into three major areas.
The first category encompasses access providers, e.g. ISPs focused on providing
services to dial-up users and those with leased line access to their ISP. The second
category comprises "ISPs that provide transit or backbone type connectivity." The third
category consists of ISPs providing value-added services such as managed web
servers and/or firewalls. Taking this demarcation, the present study focuses on category
2. However, this classification is still too abstract to show exactly the study’s focus.
From the perspective of a private end-user, the main ISP types are: local, intra-national
backbone providers, online service providers and virtual ISPs.22 An online service
provider provides a service which enables a closed user group to download information
from a remote electronic storage platform or to store information on this platform, and to
communicate with other users. Usually, an online service provider makes the service
portfolio available to the end-user through a portal. According to subscriber numbers the
biggest ISP in the world is AOL/TimeWarner, which is positioned as an online service
provider. Also, many telecommunications incumbents in Europe have organised their
private end-user Internet businesses as online service providers.23
Deutsche Telekom with T-Online, and France Télécom with Wanadoo.
A Virtual ISP is an ISP which concentrates on end-user contact, but has no IP network whatsoever. An example is Shell in Germany, which only distributes the access software CD at its petrol stations. Taking a broader perspective one could also define
universities and large companies as ISPs. This can be the case because these entities
also provide access to the Internet, even if this access is usually limited to closed user
groups (e.g. students, employees).
Thus, ISPs are different in many respects. One difference is ISP size (in terms of
turnover or employees or network reach). A local ISP typically operates a rather limited
number of points of presence in a region or several adjacent regions in a country. The
points of presence are connected through transmission links which are usually leased
from a fixed wire operator. This ISP network has to be connected to a national or
international ISP (or several of them) in order to get up-stream connectivity to the
worldwide Internet as outlined in the preceding section. An example of a medium-sized
ISP is an intra-national backbone provider like Mediaways in Germany. The
international backbone providers are large ISPs.
21 Boardwatch (2000, p.20) defines another part of the Internet called ”customer and business market".
In the article it is said that each time a small office leases a line from its office to a point of presence of
an ISP the Internet is extended.
22 For more detail see Elixmann and Metzler (2001).
23 Usually, these activities are performed by separate entities.
A second feature to distinguish ISPs is infrastructure, more precisely, the issue of
whether they operate infrastructure or not. Online service providers usually do not
operate their own networks. They offer services to end-users by purchasing the dial-in
user platform, national transmission services and up-stream connectivity as input
services from national or international network providers.24 Moreover, by definition a
virtual ISP is without infrastructure. Intra-national and international backbone providers,
however, operate their own infrastructure. In this context operation of transmission facilities
does not necessarily mean that a carrier also owns this infrastructure. Often large parts
of intra-national and international ISPs' networks are based on leased lines. Moreover,
pan-European and North-American ISPs which have a fibre infrastructure often swap
capacity in order to expand the reach of their networks.25
A third feature that differentiates types of ISP is whether they are focused on ISP and IP
business or whether their business is integrated with (traditional fixed-link and/or cellular
mobile) telecommunication activities. Between the 1980’s and mid 1990’s the largest
long distance telecommunications incumbents in North America (at that time AT&T,
MCI, Sprint) were also important ISPs. Since then many new players have appeared providing fibre infrastructure, with several focused on providing services for big companies and carriers, i.e. acting as wholesalers.26
Since the liberalisation of the telecommunications markets in Europe many intra-national and international ISPs have entered the market. This holds true both nationally
(in Germany e.g. Mediaways, in the UK e.g. FreeServe) and internationally. Indeed, at
the end of the 1990’s there were more than 20 pan-European infrastructure providers
most of which were also ISPs.27 Yet, in Europe the telecommunications incumbents are
also important ISPs, at least on a national level. However, for important accounts, ISP
services are normally provided by the parent company or a separate subsidiary.28,29
As the present study focuses on commercial traffic exchange on the Internet the focus
is on ISPs that operate infrastructure. Furthermore, we concentrate on "backbone"
issues and, thus, primarily on large ISPs and the other ISPs that obtain connectivity from
them. However, there is no accepted definition of what constitutes the Internet
backbone. Rather, there is a part of each firm’s network which is its backbone. This
means that a-priori both national and international ISPs are operating a backbone.
24 AOL/Time Warner in Germany primarily uses the services of Mediaways. T-Online uses the Internet
platform "T-Interconect" of its parent company Deutsche Telekom.
25 See Elixmann (2001) for an empirical examination of market characteristics and player strategies in
the market for fibre transmission capacity in Europe and North America. A swap of capacity is the
exchange of existing (usually lit) transmission capacity between two companies.
26 Boardwatch publishes a list of the most important North American ISPs at least once a year as well as
a list of all ISPs.
27 See Elixmann (2001) and the literature cited therein. Infrastructure providers who are not ISPs
concentrate on selling dark or lit fibre.
28 In Germany T-Systems provides services to Deutsche Telekom’s key accounts whereas another
entity provides the technical Internet platform and the transmission services.
29 In the latter case, accounting separation requirements mean that all such transactions should be
transparently priced.
However, the underlying network infrastructure and its role within the global Internet for
supplying transmission services, as well as access to content and services, is different.
In the present study we assume a pragmatic demarcation of the Internet backbone.
Noting that the focus is on "backbone services" it is clear that it is not on access
services. Thus, the backbone network of an ISP only comprises facilities beyond its
edge routers, these being devices where Internet traffic of the ISP’s customers is
collected and forwarded to those routers which open "the window to the world". The
latter routers are known as core routers. Thus, we identify the backbone with certain
network facilities, i.e. the transmission links, routing and switching equipment that is
vital for connecting core routers and enabling communication between them. In
addition, in our empirical investigation we concentrate on operators of these facilities
with cross-border activities.
2.4 Roots of the Internet
The history of the Internet30 dates back to the late 1960s with the Advanced Research Projects Agency's ARPAnet, built to connect the U.S. Department of Defence, other U.S.
governmental agencies, universities and research organisations.
In 1985 the National Science Foundation (NSF) initiated the establishment of a 56 kbps network (NSFnet), originally linking 5 national computer centres but offering access to any of the regional university computer centres that could physically reach the network. From 1987 the capacity of this network was increased in stages (from 56 kbps to T1 (1.544 Mbps) and later T3 (45 Mbps)), and by 1990 the Internet had grown to consist of more than 3,500 sub-networks. Subsequently, the operation of the network was transferred to a joint venture of IBM, MCI, the State of Michigan and the Merit Computing Centre of the University of Michigan, at Ann Arbor.
Around 1993 fundamental changes occurred. First, private commercial backbone
operators set up a "Commercial Internet Exchange (CIX)" in Santa Clara, California.
Second, the NSF solicited bids for the establishment of "Network Access Points
(NAPs)" announcing that it was getting out of the Internet backbone business. Similarly
to the CIX concept, NAPs were deemed to be sites where private commercial backbone
operators could interconnect and exchange traffic with all other service providers. In
February 1994, four NAPs were built: San Francisco, operated by PacBell; Chicago,
operated by Bellcore and Ameritech; New York (actually Pennsauken (NJ)), operated
by SprintLink and MAE East (MAE = Metropolitan Area Ethernet or Exchange) in
Washington, D.C. operated by Metropolitan Fibre Systems (MFS, today part of
WorldCom).
30 The following overview draws heavily on Boardwatch Magazine (2000). See section 6.1.2 for empirical
evidence of the situation today.
Although not made mandatory by the NSF, the contractual form of the co-operation at
these exchanges was "peering", i.e. the agreement to accept the traffic of all peers in
exchange for having them accept your traffic without the inclusion of any monetary
payments.31 MFS in particular subsequently set up four more NAPs in San Jose (MAE
West), Los Angeles (MAE LA), Chicago (MAE Chicago) and another one in
Washington, DC (MAE East+). Finally, two Federal Internet Exchange Points (FIX-East
and FIX-West) were built largely to interconnect the NASA net and some other federal
government networks.
Thus, at the end of the millennium there were 11 major public interconnection points in the U.S. (the four official NAPs, three historical NAPs (CIX, FIX-East, FIX-West) and four de-facto NAPs (the MAEs)). Most of the national backbone operators in the U.S. were and remain connected to all four official NAPs as well as to most of the MAEs. With the extremely rapid growth in Internet traffic, NAPs were by all accounts frequent points of congestion. In addition to NAPs, a multitude of private exchanges were opened up where backbone operators cross-connect with other backbones.
In Europe, college and research institutions set up their own Local Area Networks
(LANs) in the 1970s. Public peering took place through co-operation with the NSF and
in several research centres including London, Paris and Darmstadt (Germany). Today, most countries in the world have at least one major NAP, where domestic and
international ISPs and Internet backbone providers (IBPs) can interconnect. Many
countries, particularly in Western Europe, host several major NAPs.
31 We discuss peering and transit in detail in Chapter 5.
3 Technical background to traffic exchange: Addressing, routing and "Autonomous Systems"32
We have seen in the preceding section that the Internet comprises a vast number of IP
networks, each of them usually consisting of many communicating devices like, for example, PCs, workstations, and servers. In the following we speak of hosts to denote
these communication devices. This Chapter focuses on the technical features that
support traffic exchange, i.e. it describes how traffic is exchanged.
Viewed from an end-user perspective hosts often have "names"; however, in order to
enable hosts to communicate with one another they need "addresses". The forwarding
of packets on the Internet rests on routers and routing protocols. "Autonomous
Systems" is a scheme used to segregate the Internet into subsections.
Addressing and routing, especially routing between "Autonomous Systems", are
important on a "micro-level", i.e. for single ISPs. This holds true with respect to the way
ISPs design their access and backbone networks and also with respect to traffic
exchange with other parts of the Internet.
At a "macro-level" IP-addresses are a finite resource, with address exhaustion a
possibility. Technical developments are possible, however, that would make available
addresses last longer. Moreover, the growth of the Internet represented by larger and
larger so called "routing tables" and more and more "Autonomous Systems" affects the
necessary capacity of routers and the speed with which they perform their tasks. Thus,
the evolution of the Internet and its quality of service depends on these basic technical
principles.
This section aims to illuminate the concepts mentioned so far, which are vital for traffic exchange on the Internet.
3.1 Internet addressing
3.1.1 Names and the Domain Name System
Names on the Internet have become more and more a matter of trademarks and
branding. Broadly speaking a name is used primarily to identify a host, whereas the IP address indicates how to get there. The link between the two concepts is provided
by the Domain Name System (DNS). The DNS requires a specific syntax and rests on a
distributed database encompassing a great number of servers world-wide. Annex A-1
contains more details as regards the DNS system.
32 In this Chapter we draw on Marcus (1999) and Semeria (1996).
In practice the system works as follows. Suppose someone wants to visit the web site of the German Regulatory Agency for Telecommunications and Posts, the name of which is www.regtp.de. The browser on the PC of this person issues a query to a local DNS server, which for residential users and small businesses is usually provided by an ISP. If this DNS server does not know the IP address corresponding to the queried name, a query is sent to the next server up the DNS hierarchy, which returns the address of the server hosting the zone "de", to which the DNS server of the caller then resubmits its query. If the upper server does not know the responsible DNS server either, the query is directed to the next server up the DNS hierarchy, and so on until the appropriate DNS server is found. The server of the domain "de" gives the address of the server hosting "regtp.de", which in turn provides the IP address for the site in question.
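The delegation logic just described can be sketched in a few lines of Python. The sketch below is a toy model, not a real DNS client: the server names, zone contents and the returned address are all invented for illustration.

```python
# Toy model of the iterative DNS lookup described above; all data invented.
SERVERS = {
    "root":         {"de": ("referral", "de-server")},
    "de-server":    {"regtp.de": ("referral", "regtp-server")},
    "regtp-server": {"www.regtp.de": ("address", "145.10.34.3")},
}

def resolve(name: str) -> str:
    server = "root"                      # start at the top of the hierarchy
    while True:
        kind, value = next(v for k, v in SERVERS[server].items()
                           if name == k or name.endswith("." + k))
        if kind == "address":
            return value                 # the authoritative server answers
        server = value                   # otherwise follow the referral down

print(resolve("www.regtp.de"))           # -> 145.10.34.3 (invented address)
```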
3.1.2 IPv4 addressing schemes
Throughout this sub-section we focus on addressing schemes relating to the Internet
Protocol, Version 4 (IPv4). When this IP protocol was standardised in 1981, the
specification required each host attached to an IP-based network to be assigned a
unique 32-bit Internet address.33 IP addresses are usually expressed in "dotted decimal notation", i.e. the 32-bit IP address is divided into four 8-bit fields, each expressed as a decimal number and separated by dots. Figure 3-1 provides an example of the dotted decimal notation.
Figure 3-1: Example of an IP address expressed in dotted decimal notation
[The figure shows the 32-bit address (bits 31 to 0) 10010001.00001010.00100010.00000011, whose four octets read 145, 10, 34 and 3 in decimal, i.e. 145.10.34.3.]
Source: Semeria (1996)
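The conversion shown in the figure is mechanical; a minimal sketch:

```python
def to_dotted_decimal(addr32: int) -> str:
    """Split a 32-bit IPv4 address into four 8-bit fields, as in Figure 3-1."""
    octets = [(addr32 >> shift) & 0xFF for shift in (24, 16, 8, 0)]
    return ".".join(str(octet) for octet in octets)

# The address from Figure 3-1: 10010001 00001010 00100010 00000011
print(to_dotted_decimal(0b10010001_00001010_00100010_00000011))  # 145.10.34.3
```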
The initial addressing system rested on three classes of addresses denoted by A, B and
C.34 Thus, historically addressing on the Internet has been conducted via a hierarchical
33 Some hosts, e.g. routers, have interfaces to more than one network, i.e. they are dual-homed or multi-homed. Thus, a unique IP address has to be assigned for each network interface.
34 The reader can find more details on Class A, B, C addresses in Annex A-2.
scheme, which in principle offers the possibility of connecting more than 4 billion (i.e. 2^32) devices.
In the original concept of the Internet each LAN was to be assigned a distinct network address. This was reasonable at the time in view of the four billion possible addresses, when addresses were considered a virtually inexhaustible resource. Thus, it is not surprising that in the early days of the Internet the seemingly unlimited address space resulted in IP addresses being assigned to any organisation asking for one. However, a class scheme involving large blocks of addresses inherently leads to a great deal of inefficient use of addresses, i.e. addressing space is assigned which remains largely unused. This is especially true for large and medium-sized entities which were allocated a Class B address, which will support more than 65,000 hosts. For most such entities a Class C address would be too small as it would only support 254 hosts. Thus, in the past organisations with anywhere between several hundred and several thousand hosts were normally assigned a Class B address. This resulted in a rapid depletion of Class B address space and also in the rapid growth in the size of Internet routing tables.
Since the mid 1980s there have been several new developments that have improved the efficiency of IP address allocation. Arguably the three main developments are as follows:
• Subnetting with a fixed mask
• Variable Length Subnet Masking (VLSM)
• Classless Inter-Domain Routing (CIDR).
The most important of these was CIDR, which has led to much more flexibility and efficient use of addressing. The concept of Classless Inter-Domain Routing (CIDR)35 was developed in 1993 to tackle the problems of future address exhaustion, especially in regard to Class B address space, and the rapid increase in the global Internet's routing table.36 The allocation scheme of CIDR creates the capability for address aggregation, i.e. the possibility to aggregate a contiguous block of addresses into a single routing table entry. We discuss these concepts in more detail in Annex A-2.
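The aggregation effect is easy to demonstrate with Python's standard ipaddress module; the four /24 prefixes below are illustrative documentation addresses, not an allocation taken from the report:

```python
import ipaddress

# Four contiguous /24 blocks, e.g. held by a single organisation
blocks = [ipaddress.ip_network(f"198.51.{i}.0/24") for i in range(100, 104)]

# Under CIDR the block can be announced as one supernet instead of four routes
print(list(ipaddress.collapse_addresses(blocks)))
# -> [IPv4Network('198.51.100.0/22')]  (a single routing table entry)
```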
Private IPv4 address allocation
Hosts which need no access to the public Internet at all do not need an IP address
visible on the Internet. For these hosts a private address allocation scheme can be
applied. There is a small group of permanently assigned IP network prefixes that are
reserved for these purposes, i.e. they are never supported or visible for exterior
35 As the name indicates CIDR’s focus is on routing, although there are also implications for addressing.
We discuss the addressing issues in this section while routing implications are presented in section
3.2.
36 See also section 3.2 and section 6.3 of this report.
routing.37 Thus, the same prefixes can be used contemporaneously by an unlimited
number of organisations.
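The reserved prefixes referred to here are the three private blocks of RFC 1918; a quick check with Python's ipaddress module (the sample addresses are our own):

```python
import ipaddress

# The RFC 1918 prefixes reserved for private use, never visible externally
PRIVATE = [ipaddress.ip_network(p)
           for p in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_private(address: str) -> bool:
    ip = ipaddress.ip_address(address)
    return any(ip in prefix for prefix in PRIVATE)

print(is_private("192.168.1.20"))  # True  -- usable by any organisation
print(is_private("145.10.34.3"))   # False -- a globally routed address
```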
It can also be the case that a host only needs limited access to the Internet, for example only to e-mail or FTP, and otherwise works "privately". In this case some kind of address translation from the private address space to IP addresses that are valid in the global Internet has to take place. There are two approaches used to accomplish this:
• Network Address Translation (NAT)
• Application Layer Gateways (ALG).38
If an organisation chooses to use private addressing it would typically create a DNS for
internal users and another one for the rest of the world. Only for those services that the
organisation wishes to announce to the public would the external DNS contain
entries.39
Dynamic IPv4 address allocation
If a private end-user connects to an ISP via a dial-up connection, the ISP will generally not assign a permanent address to the user; rather, a session-specific address that is unused at that moment will be assigned.40 There is thus no need for an ISP to have as many addresses as customers. Rather, it is the (expected) maximum number of customers connecting to the ISP simultaneously which determines the necessary address space, and this number is on average much lower than the number of customers.
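A back-of-the-envelope sizing model (our illustration, not a method from the study): if each of N customers is online independently with probability p, a normal approximation to the binomial distribution gives the pool size needed to make a shortage of addresses very unlikely.

```python
from math import sqrt

def pool_size(customers: int, p_online: float, z: float = 3.09) -> int:
    """Addresses needed so that simultaneous users exceed the pool only
    rarely (z = 3.09 corresponds to roughly 99.9% coverage)."""
    mean = customers * p_online
    sd = sqrt(customers * p_online * (1 - p_online))
    return int(mean + z * sd) + 1

# 10,000 dial-up customers, 5% online at once: ~570 addresses suffice
print(pool_size(10_000, 0.05))
```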
Organisations responsible for the assignment of addresses
As delegated by ICANN, IP addresses are managed by three regional organisations, each distinguished by a different geographical focus:41
• The "American Registry for Internet Numbers (ARIN)", responsible for North and South America, the Caribbean and sub-Saharan Africa;
• The "Réseaux IP Européens Network Coordination Centre (RIPE NCC)", responsible for Europe, the Middle East, northern Africa42 and parts of Asia, and
37 See next section for more details on routing.
38 For more details on these approaches see Marcus (1999, p. 221 and p. 223). ALG firewalls allow an
entire network to appear externally as a single Class C network.
39 See Marcus (1999, p. 233).
40 In a dial up connection firstly the PPP (Point to Point Protocol, RFC 1661) is established between the
customer and the ISP. After that the DHCP protocol (Dynamic Host Configuration Protocol, Latest
RFC 2131) temporarily assigns the customer an available IP address out of the ISP’s address space.
41 For more details on these organisations the reader is referred to the respective web sites, see
http://www.icann.org/general/icann-org-chart_frame.htm
• The "Asia Pacific Network Information Centre (APNIC)", responsible for the rest of Asia and the Pacific.
3.1.3 IPv6 addressing issues
IP version 6 (IPv6) is a new version of the IP protocol, designed to be the successor to IPv4.43 Instead of the 32-bit addresses of IPv4, version 6 has 128-bit long addresses, structured into eight 16-bit pieces. Marcus argues that there are three types of IPv6 addresses that are relevant for practical purposes:44
• Provider-assigned IPv6 addresses,
• Link-local IPv6 addresses, and
• IPv6 addresses embedding IPv4 addresses.
The first type comprises those addresses which an ISP assigns to an organisation from
its allocated address block. These addresses are globally unique. The second type
comprises addresses which are to be used by organisations which do not currently
connect to the public Internet but intend to do so. They are unique within the
organisation to which they are relevant.45 The third type, IPv6 addresses embedding IPv4 addresses, reflects the fact that IPv4 addresses can be translated into equivalent IPv6 addresses. There are two types of embedded IPv4 addresses:
• IPv4-compatible IPv6 addresses, and
• IPv4-mapped IPv6 addresses.
The first case applies to nodes that utilise a technique for hosts and routers to tunnel46 IPv6 packets over IPv4 routing infrastructure. In this case the IPv6 nodes are assigned special IPv6 unicast addresses that carry an IPv4 address in the right-most 32 bits. The second case mirrors the situation where IPv4-only nodes are involved, i.e.
42 A distinct organisation for the African region (AFRINIC) is about to be created.
43 Subsequently we refer to RFC 2373, July 1998.
44 See Marcus (1999, p. 238).
45 Link-local IPv6 addresses offer the possibility that the subnet address for the link to the organisation could serve as a prefix to the internal address scheme of the attached devices.
46 If a router is able to interpret the format of a protocol, traffic between networks can be switched or routed directly. If, however, the format of a protocol is not known and, thus, not supported by hardware/software capabilities, a packet will be silently discarded. To remedy this, routers can be configured to treat all router hops between any two routers as a single hop. This technique, which is used to pass packets through a subnetwork, is known as "tunnelling". See Minoli and Schmidt (1999, p. 316) and section 3.2.3.
nodes that do not support IPv6. These nodes are assigned IPv6 addresses where the IPv4 portion also appears in the right-most 32 bits.47
Carpenter (2001) reports that according to one IPv6 implementation model each network running IPv6 would be given a 48-bit prefix, leaving 80 bits for "local" use. Such an arrangement would mean that theoretically there could be more than 35 trillion IPv6 networks. Thus, the number of addressable networks rises dramatically with IPv6 compared to IPv4.
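Both embedding formats, and the scale of the address space, can be inspected with Python's ipaddress module. The embedded IPv4 address below is the illustrative one from Figure 3-1, and the 2^45 network count is our reading of Carpenter's figure (48-bit prefixes under a 3-bit global unicast format prefix):

```python
import ipaddress

mapped = ipaddress.IPv6Address("::ffff:145.10.34.3")  # IPv4-mapped
compat = ipaddress.IPv6Address("::145.10.34.3")       # IPv4-compatible

print(mapped.exploded)       # 0000:0000:0000:0000:0000:ffff:910a:2203
print(compat.exploded)       # 0000:0000:0000:0000:0000:0000:910a:2203
print(mapped.ipv4_mapped)    # 145.10.34.3 -- carried in the right-most 32 bits

print(f"{2 ** 45:,}")        # 35,184,372,088,832 -- "more than 35 trillion"
```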
3.2 Internet routing
Packet-switched networks segment data into packets, each packet usually containing a few hundred bytes of data, which are sent across a network. The most important packet protocol is IP, which is located on layer 3 (the network layer) of the OSI protocol stack.48 It has become the standard connectionless protocol for both LANs and WANs. The Internet currently comprises more than 100,000 TCP/IP networks. Traffic within and between these networks relies on the use of specific devices called routers, the equivalent of switches in traditional circuit switched telecommunications networks.
Generally speaking, a router is located at the boundary point between two logical or
physical subnetworks.49 Simply speaking, a router forwards IP datagrams, i.e. it moves
an IP packet from an input transmission medium to an output transmission medium. Yet
it is not the forwarding as such which is the most important purpose of a router, but the
exchange of information with other routers concerning where to send a packet, i.e. the
determination of an optimal route. Routers work on the basis of routing tables, which are built from information obtained using so-called routing protocols.
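Forwarding itself then reduces to a longest-prefix-match lookup in the routing table. A minimal sketch (the prefixes and next-hop names are hypothetical):

```python
import ipaddress

# Hypothetical routing table: prefix -> next hop
TABLE = {
    ipaddress.ip_network("0.0.0.0/0"):      "upstream ISP",  # default route
    ipaddress.ip_network("145.10.0.0/16"):  "router A",
    ipaddress.ip_network("145.10.34.0/24"): "router B",
}

def next_hop(destination: str) -> str:
    ip = ipaddress.ip_address(destination)
    matches = [prefix for prefix in TABLE if ip in prefix]
    return TABLE[max(matches, key=lambda p: p.prefixlen)]  # most specific wins

print(next_hop("145.10.34.3"))   # router B
print(next_hop("145.10.99.1"))   # router A
print(next_hop("194.25.2.129"))  # upstream ISP (default route)
```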
3.2.1 Routing protocols
Routing protocols make decisions on the appropriate path to use to reach a target
address. The determination of a path is accomplished through the use of algorithms.
Although there are differences across the available algorithms their common goals
are:50
• The determination of the optimal path to a destination and the collection of appropriate information used in making this calculation, such as the path length (hop count), latency, the available bandwidth, and the path QoS;
• Minimisation of the network bandwidth necessary for routing advertisements as well as of the router processing needed to calculate the optimal routes, and
• Rapid convergence in deciding new optimal routes following a change in network topology, especially in the case that a router or a link fails.
47 The only difference between the two are the 16 bits from no. 81 through to no. 96, which are encoded as "0000" in the first case and "FFFF" in the latter case. The first 80 bits in both cases are set to zero.
48 We discuss protocol stacks in Annex B and to a lesser extent in Chapter 4.
49 In practice a router is used for internetworking between two (sub)networks that use the same network layer protocol (namely IP) but which have different data link layer protocols (i.e. on the layer below IP), see Minoli and Schmidt (1999, p. 309).
50 See Minoli and Schmidt (1999, p. 314).
The values calculated by the algorithm to determine a path are stored in the routing
tables. The entries of a routing table are based on local as well as remote information
that is sent around the network.51 A router advertises52 routes53 to other routers and it
learns new routes from other routers. Thus, routers signal their presence to the outside network, and through their advertisements they indicate to other routers the destinations which can be reached through the advertising router.
There are two categories of routing protocols which are used to create routing tables:
Interior Gateway Protocols and Exterior Gateway Protocols.
Interior Gateway Protocols
Interior routing protocols suitable for use with the IP protocol include:
• Routing Information Protocol (RIP-1, RIP-2),
• Interior Gateway Routing Protocol (IGRP),
• Open Shortest Path First (OSPF).
RIP-1 was used when ARPANET was in charge of the Internet. This protocol allows for
only one subnet mask to be used within each network number,54 i.e. it does not support
VLSM or CIDR. This has been changed with the advent of RIP-2 in 1994 which
supports the deployment of variable masks (VLSM).55 IGRP is a proprietary standard of
Cisco.56 OSPF is a non-proprietary protocol which according to Marcus has significant
advantages over both RIP and IGRP.57 It enables the deployment of VLSM, i.e. it
conveys the extended network prefix length or mask value along with each route
announcement.
51 See Minoli and Schmidt (1999, p. 313).
52 We use the terms route announcement and route advertisement interchangeably.
53 In essence a route is a path to a block of addresses.
54 RIP-1 does not provide subnet mask information as part of its messages concerning the updating of routing tables, see Semeria (1996).
55 Marcus (1999, p. 245) argues that RIP-2 "is likely to continue to be used extensively for some time to
come for simple networks that do not require the functions and ...complexity of a more full-featured
routing protocol. (I)t will continue to be useful at the edge of large networks, even when the core
requires a more complex and comprehensive routing protocol."
56 For more information the reader is referred to Marcus (1999, pp. 245) and the reference given there.
57 See Marcus (1999), p.246.
Exterior Gateway Protocols
The most common exterior routing protocol used today is the Border Gateway Protocol
Version 4 (BGP-4) which was published in 1995. A key characteristic of BGP is that it
relies on a shortest path algorithm in terms of AS hops.58 Thus, for each destination AS,
BGP determines one preferred AS path.59 BGP also supports CIDR route aggregation,
i.e. it can accept routes from downstream ASes and aggregate those advertisements
into a single supernetwork.60
BGP routers exchange information about the ASes they can reach and the metrics
associated with those systems.61 In more detail, the information exchanged with
another BGP-based router will involve newly advertised routes and routes that have
been withdrawn. Advertising a route as being reachable mainly implies specifying the
CIDR block and the list of ASes traversed between the current router and the original
subnetwork (AS path list).
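The core of the preference rule can be sketched in a few lines; the neighbour labels and AS numbers (from the private-use range) are invented:

```python
# Candidate routes to the same CIDR block, with their advertised AS paths
routes = {
    "via neighbour A": [64501, 64510, 64499],  # three AS hops
    "via neighbour B": [64502, 64499],         # two AS hops
}

best = min(routes, key=lambda r: len(routes[r]))
print(best)  # via neighbour B -- BGP prefers the shortest AS path
# (routes with equal AS path length are tie-broken by further attributes)
```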
3.2.2 Static, dynamic and default routing
Static routing requires a network manager to build and maintain the routing tables at
each router, i.e. network routes are manually listed in the router. In case of a failure a
router using static routing will not automatically update the routing table to enable traffic
to be forwarded around the failure. In contrast, dynamic routing results in route tables
being automatically updated and optimal paths recalculated on the basis of real-time network conditions, including failures and congestion.
Static and default routing are applied in specific cases where only limited routing is
necessary and they tend to be applied in different circumstances.62 First, suppose an
organisation is connected to only one ISP. Moreover, suppose the organisation uses
addresses from the ISP’s address block and operates as part of the ISP’s AS. In this
case the organisation does not need an exterior routing protocol for traffic to and from
the ISP. Rather, for traffic not destined for a host within the organisation’s network, a
default route is used to get to the ISP. For its part, the ISP would use static routing to
get traffic to the organisation.63
58 It is obvious that the shortest number of AS hops might be quite different from the actual number of IP
hops.
59 The shortest past algorithm is not the only decision made by a router configured with BGP. If, for
example, there are two preferred routes (i.e. routes with the same number of hops), then the router
also checks the time it takes to forward a packet to the target and the available bandwidth on the
different connections. In addition, policy routing can be used to prioritise routes.
60 As regards route aggregation see Annex A-3.
61 See Minoli and Schmidt (1999, p. 356).
62 For more information refer to Marcus (1999, pp. 243).
63 Static routing procedures are based on fixed routing tables implemented in the routers. These tables assign an input IP subnet address directly to an output IP subnet address with their corresponding input/output ports. Changes and updates of the routing tables are carried out externally, usually by a management service of the ISP. This type of routing is usually applied in border routers. Dynamic routing procedures take statistics of the state of the network and update the routing table of the router according to these statistics. A typical example of dynamic routing is the OSPF protocol, which is based on the state of the links of the network.
Second, in many networks there are routers at the edge which only have a single connection to a core network router.64 In this case static routing can be used to route traffic from the core to the correct edge router. This requires the address plan to be configured in a specific way. For the route back, default routing is used, i.e. it routes traffic from the edge to the core. A default route is an instruction to the edge router to forward any traffic whose target address is not accepted on the edge router to the nearest core router.65
As we have seen above there are internal and external routing procedures. Both routing
procedures are related to one another. Usually external routing is dynamic,66 while
internal routing may be dynamic or static.
3.2.3 IPv6 routing issues
Options provided by IPv6 67
In addition to its vastly greater address space, IPv6 has several advantages over IPv4,
the main ones being as follows:
• The addressing capabilities are expanded.
• The header format is changed, allowing the flexible use of options: the mandatory header is simplified from 13 fields in IPv4 to only seven fields in IPv6, with the effect of reducing the processing costs of packet handling and limiting the bandwidth cost of the header.
• A new capability is added to enable the labelling of particular traffic flows,68 i.e. sequences of packets sent from a particular source to a particular destination, for which the source desires special handling by the intervening routers, such as non-default quality of service or real-time service.
64 Edge routers are usually located at those places where the traffic from end users is collected, such as at the PoPs of an ISP. A core router, however, is usually located in the Network Operation Centre of an organisation and provides the up-link to a transit partner, to a peering point or to its own backbone. Core and edge routers are each configured according to their specific purpose, i.e. they differ with respect to the number of ports and other functional features.
65 Commercially, this is equivalent to routing traffic to a larger ISP with which the smaller ISP has a contract for the provision of transit. We discuss transit in detail later in this Chapter and also in Chapter 5.
66 BGP enables an ISP to statically specify certain routes and to use dynamic routing for others, thus making the latter dependent on the status of the network and operational changes in the Internet.
67 We refer here to RFC 2460, December 1998.
68 IPv6 has a 20-bit Flow Label field in the header.
• ISPs that want to change their service provider can do so more easily and at lower cost when they are using IPv6 compared to if they were using IPv4;
• IPv6 is able to identify and distinguish between different classes or priorities of IPv6 packets.69 IPv4 has this capability also, but has room for only four different types of service, and
• There are enhanced security capabilities supporting authentication, data integrity, and (optional) data confidentiality.
Tunnelling of IPv6 within IPv4
The specification of IPv6 allows IPv6 datagrams to be tunnelled within IPv4 datagrams.
In this case the IPv6 datagram additionally gets an IPv4 header. This packet is then
forwarded to the next hop according to the instructions contained in the IPv4 header. At
the tunnel end point70 the process is reversed. Tunnels can be used router to router,
router to host, host to router, or host to host.71 Tunnelling will be mainly applied to
connect IPv6 islands inside an IPv4 network.
In regard to several of the bulleted features above, IPv6’s advantage over IPv4 has
recently narrowed due to technological developments. We discuss possible public
policy issues regarding IPv4 address exhaustion and the adoption of IPv6 in Chapter 8.
3.3 Autonomous Systems
An Autonomous System (AS) is a routing domain (i.e. a collection of routers) that is
under the administrative control of a single organisation. An AS may comprise one
single IP network or many. Otherwise stated, the routers belonging to an AS are under
a common administration, they belong to a defined (set of) network(s) over which a
common set of addressing and routing policies is implemented.72 The Internet consists
of a great many ASes with each AS being unique. ASes are identified by numbers. AS
numbers are assigned by the three Registries ARIN, APNIC, and RIPE.
Within a single AS, interior routing protocols provide for routing to occur between the
routers. Exterior routing protocols are used to handle traffic between different ASes.
These protocols are therefore applied in particular to enable ISPs or IBPs to send
transit or peering traffic to one another.
69 IPv6 has an 8-bit Traffic Class field in the header.
70 The tunnel end point need not necessarily be the final destination of the IPv6 packet.
71 See Marcus (1999, p. 239).
72 Marcus (1999, pp. 225 and 242) and Minoli and Schmidt (1999, p. 354).
3.3.1 Partitioning of a network into ASes
A priori the network manager of a newly established IP network may have no AS
number at all, a single AS number, or more than one AS number.73 Usually there is no
need to have an AS number if an IP network has only a single connection to an ISP. In
such cases, the IP network can be treated as part of the ISP’s AS for exterior routing
purposes. This situation is not generally changed if there are two or more connections
to a single ISP and if these connections are to a single AS of the ISP. However, if the IP
network is connected to two or more different ISPs or, equivalently, to two or more
distinct ASes of a single ISP then there is a need for the IP network to have its own AS
number.
Minoli and Schmidt (1999, pp. 354-355) distinguish three different types of ASes:
• Single-homed AS,
• Multi-homed non-transit AS,
• Multi-homed transit AS.
A single-homed AS corresponds to an ISP that has only one possible outbound route.74
Multi-homed non-transit ASes are used by ISPs that either have Internet connectivity to two different ISPs or have connectivity at two different locations on the Internet. Non-transit in this context means that the AS only advertises reachability to IP addresses from within its domain. Minoli and Schmidt point out that a non-transit AS has no need to re-advertise routes learned from other ASes outside of its domain.75 Multi-homed transit ASes correspond to networks with multiple entry and exit points which in addition carry transit traffic, i.e. traffic whose source and destination addresses are not within the ISP's domain.
In this context the issue arises as to whether an entity should opt for a single AS
network or a multi-AS network. Advantages of having a collection of ASes rather than
one single AS include scalability and fault tolerance.76 The latter means that problems
in one AS do not necessarily carry over to the other ASes.
In our interviews, experts from IBPs have sometimes argued that there are advantages
of having a single AS number, namely:
• Cost efficiency (you need only a single operation centre), and
73 More details on this can be found in Marcus (1999, chapter 12).
74 As we have seen above, in this case either static or default routing can be applied.
75 They conclude that a multi-homed non-transit AS does not need to run BGP.
76 Marcus (1999), p. 249.
• The characteristic of BGP-4 to forward traffic on the basis of the shortest number of AS hops.
In regard to the latter argument, ISPs that are multi-homed will choose between alternative routes according to which one involves the least number of hops, and a hop is synonymous with an AS number. Thus, if an IBP can show a single AS number to the world it has a competitive advantage over those IBPs that show more than one AS number. A diagrammatic example is set out in Figure 3-2. It shows an Australian example and assumes an end-user with dual connections that provide alternative access to the backbone, and who requires access to content located in the U.S.77 The situation in the example no longer applies in practice as C&W has sold Optus. However, it shows the advantage C&W would have had over IBPs that show several AS numbers to the world, such as UUNet.78 In our interviews we were told that Telia, Sprint, and Global Crossing79 are each moving to a single AS number.
Figure 3-2: Routing of traffic of an end-user connected to two ISPs
[The figure shows a dual-connected Australian end-user reaching US information either via UUNet Australia (AS 703) and UUNet USA (AS 701), i.e. two AS hops, or via (C&W) Optus (Australia) and C&W (AS 3561), i.e. a single AS.]
Source: Based on information given by Cable & Wireless
77 In Australia around 30 % of homes are dual-connect.
78 UUNet’s Asia Pacific AS number is 703 and its US AS number is 701. UUNet follows a confederated
approach for the three parts of the world they are active in (North America, Asia Pacific, Europe); as
regards confederation see below.
79 Global Crossing filed for bankruptcy protection under chapter 11 in early 2002.
3.3.2 Multi-Exit Discriminator
If two ASes are physically contiguous and there is more than one point of interconnection between them, each AS will route to the other by using shortest exit as a default rule. From the perspective of cost this rule is therefore inherently in favour of the sender. Otherwise stated, the sending network does not take account of the distance that the traffic has to travel on the receiving network. This is also known as "hot-potato" routing.
One procedure to partially counter the hot-potato effect and keep traffic on the sending
ISP's network for longer, is called Multi-Exit Discriminator (MED). MED is one of the
path attributes provided by BGP and used to convey a preference for data flows
between two ISPs interconnected at multiple locations.80 Applied by the receiving
network, MED sets up a "preference order" across the multiple interconnection points
and imposes a preferred interconnection point on the sending network. Thus, MED
tends to lessen the transmission costs for the receiving network and increase the
transmission costs for the sending network.
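A toy illustration of the attribute's effect (the interconnection points and MED values are invented): the sender delivers traffic to whichever point the receiver has advertised with the most preferred MED:

```python
# MED values advertised by the receiving network, one per interconnection
# point with the sender; by convention a lower MED is more preferred
med = {"exchange 1": 100, "exchange 2": 10, "exchange 3": 50}

entry_point = min(med, key=med.get)
print(entry_point)  # exchange 2 -- the receiver's preferred entry point,
                    # overriding the sender's default shortest-exit choice
```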
3.3.3 Confederations
Confederations are a particular case of several ASes into which the (usually very large) network of an organisation is split. The characteristic feature of a confederated network is that a single core or transit AS serves as a backbone which interconnects all of the other ASes of the network. The routing protocol applied in this situation is usually BGP-4. Under this arrangement, external ASes only connect to the transit AS, i.e. they regard the entire network as a single AS, namely the transit AS (to which all the other ASes are connected). The outside world does not need to know these sub-ASes, such that non-core ASes are only visible internally.81
80 See also Minoli and Schmidt (1999, p. 361).
81 Confederations are very common and standardised in routers.
4 Quality of service
4.1 The QoS problem put in context
Packet switched networks such as those using IP have existed for about 40 years. They
have traditionally been associated with relatively cheap and, relative to the PSTN, low
QoS networks suitable for low priority data traffic. The performance characteristics of
circuit switched networks (the PSTN) and packet switched networks were so different
that they could not be considered as being in the same market,82 i.e. except for some
minor substitution on the margin, they were not substitutes for each other, and thus
neither provided a significant competitive constraint on the other. Most traffic was voice
traffic, and most of the revenues in the industry came from voice calls on the PSTN.
In the year of publication of this report (2002) it is still the case that only a very small percentage of all voice traffic is originated, switched, and terminated on other than traditional switched circuit networks. In terms of the quantity of bits, more traffic is now attributable to packet oriented networks; in terms of revenues, however, voice calls still dominate, although data on growth rates suggest this will not last much longer.
For long-haul transmissions, most communications are being sent over the same optical
transmission links. As such voice circuits are becoming more virtual than real when
looking at long-haul routes. The architectures of networks are increasingly comprised of
a core packet network, with ATM and IP facilities operating above optical ports and
PSTN switching facilities integrated into this optical environment over electrical SDH
equipment. The situation is depicted in context in Figure 4-1, in which we note the plane
for the traditional Internet. Also included in this figure is an indication of where the Next
Generation Internet (NGI) will be situated. We discuss NGI later in this chapter and in
Annex B.
Technologies are constantly being developed which improve the assignment of capacity to a service requirement, improving network utilisation and opening up possibilities which may play a crucial role in enabling end-users to receive from their ISP services with different qualities, such as a high quality 'real-time' service, in addition to the traditional best effort service. In recent years such developments include the integration of ATM and IP networks, tag switching, prioritisation of packets, capacity reservation capability, softswitching, MultiProtocol labelling, and in the foreseeable future, optical switching under the Internet control plane.83 These technologies are discussed in further detail in Annex B.
82 We use the term in its antitrust sense.
83 For an interesting discussion of the scientific basis of these developments, see the papers in Scientific American, January 2001, especially those by Bishop, Giles, and Das (2001); Blumenthal (2001).
Figure 4-1: Relation of current and future logical layer with physical layers
[The figure shows the logical layer platforms – PSTN/ISDN, the traditional Internet and data networks like FR, first generation BB networks/ATM, and the Next Generation Internet – above a physical stack comprising the SDH lower layer (E1, E3), electrical cross-connects, the SDH higher layer (E4, STM-1), the optical layer (OC 48 / OC 192) with optical cross-connects, and the fibre layer (40G and more) with its cable topology.]
Source: WIK-Consult
Presently, congestion in the Internet backbone is not a problem most Internet users are aware of. This is due to several factors, the most significant of which are the following:
(i) The relatively slow speed provided by most residential access lines;84
(ii) The bottleneck in the access network part, e.g. in xDSL access, or the virtual connection between the DSLAM and the backbone network point;
(iii) Bottlenecks that occur within LANs and end-user ISPs, and in WANs and at respective interconnection points;85
(iv) Practices adopted by the industry to localise IP traffic and addressing functions which avoid long-haul requests and long-haul traffic delivery; these include caching, secondary peering, IP multicasting, mirror sites and digital warehousing, and moving DNS servers to regions outside the USA,86 and
(v) Services requiring high QoS have not been developed sufficiently for there to be significant effective demand (although there may be considerable latent demand).87
84 In practice with a 56 kbps analogue modem, the existing standard for most residential users, actual speeds normally reach no more than 32 kbps, with a maximum of about 50 kbps.
85 See Clark (1997).
86 We discuss these services in more detail in Chapter 5.
87 Services provided at present are: e-mail, file transfer (FTP), WWW, streaming audio and video, VoIP, and real-time video. The latter two require strong QoS values, which tend to be lacking on the Internet at present, while the streaming services already work under weak QoS conditions and can be received in most cases.
This study is concerned with traffic exchange involving core ISPs, and is not directly
concerned with the first three points, although, as indicated in Figure 4-2 some of the
details concerning traffic exchange involving core ISPs may be explained by what is
happening downstream – in markets closer to end users.
Figure 4-2: End-to-end QoS
[The figure shows two hosts communicating via routers across ISP (A), an IBP, and ISP (B). A superior GoS provided on only one intermediate segment, with 'best effort' GoS elsewhere, is not normally a commercially viable service if it is not end-to-end.]
Source: WIK-Consult
The flow of revenues on the Internet is dependent on the provision of an end-to-end service. If a service with its corresponding service attributes cannot be carried end-to-end in a way that enhances the service purchased by end users, or lowers service prices, then the revenues necessary to support it will not normally be forthcoming.88 Like any communication, a communication over the Internet needs to get to the place or person intended for it to have value. Indeed, in a very fundamental way each of the different revenue streams that appear in Figure 2-1 is dependent on there being an end-to-end service.
Essentially the same arguments apply to providing a low blocking probability for services with special QoS parameters. If a particular service which is superior to 'best effort' cannot be detected by the users at either end of the communication, then it is hard to see where the revenues will come from to pay for the cost of the 'superior' service. A pictorial representation of this point can be seen in Figure 4-2, which shows the
88 We can envisage services that might still be purchased by firms if they consider it disadvantages their
rivals; i.e. part of a strategy that may enhance a firm’s market power.
commercially unlikely case of a superior QoS / GoS (Grade of Service) being provided
on a section falling somewhere in between the two hosts.89
The fourth point above is suggestive of the competition between Internet backbone providers and firms involved in moving content closer to the edges of the Internet. We address this issue in Chapter 5.
The fifth point refers to factors that are crucial to the development of the Internet generally, although as noted in the paragraph above, the absence of services or service options, or the existence of interoperability problems between backbones, may have their cause outside of Internet backbones and their transit-buying customers, such as in QoS problems in access networks and LANs. They may also have their origin in the competition between equipment manufacturers, with individual manufacturers possibly finding it in their interest to sponsor certain standards. We briefly discuss standards issues in Chapter 8.
When looking at technical and economic aspects of QoS, there is thus a need to look further than what is happening in the vicinity of traffic exchange between ISPs, and also to examine the linkages with downstream QoS issues, and any downstream commercial and economic considerations that appear to have significant implications for (upstream) transit traffic exchange.
In this chapter and its annex (Annex B) we look into technical aspects of QoS and
congestion management. Economic considerations are addressed in Chapters 7 and 8.
We proceed by describing QoS before moving onto the issue of QoS at network
borders.
4.2 The quality of Internet service
4.2.1 What is QoS?
The Internet sends information in packets, which is very different to the switched circuit connections provided by the PSTN. This means that from the point of view of a pure transport stream, the Internet provides a much lower 'connection' quality than does the PSTN. On the other hand, packet switched networks enable the introduction of error checking and error control by retransmission of packets on one or more OSI layers (transport, network or link). However, one reason for the attractiveness of packet networks compared with the PSTN is that packet networks provide for a much greater level of utilisation efficiency than do switched circuit networks, meaning that costs per
89 Perhaps CDNs, caches and mirror sites may be considered as a violation of this statement, but not if
we consider CDNs, caches and mirror sites as the end point of a request, and the beginning point for
the corresponding reply.
bit are lower on the former. This superior utilisation efficiency is obtained through what is called statistical multiplexing. In terms of QoS it means that most things concerning Internet traffic are uncertain and have to be defined probabilistically: packets do not receive equal treatment; however, as packets are randomised irrespective of the source or the application to which the data is being put, in this sense no packet can be said to be targeted for superior treatment.
In general, QoS is a set of parameters with corresponding values that relate to a flow produced by a service session over a corresponding connection, whether point-to-point, multipoint, multicast or broadcast. The most important QoS parameters in packet/cell switched networks are:
• Packet or cell loss ratio (PLR), including arrival which is too late to be useful;
• Packet or cell insertion rate (PIR), caused by errors in the packet label or virtual channel identifier which lead to a valid address of another connection;
• Packet or cell transfer delay (PTD) – the time it takes for packets to go from sender to receiver (latency);90
• Packet or cell transfer delay variation (PTDV) – also referred to as latency variation (jitter).
In order to fully describe the quality perceived by end-users, we should add a parameter to these QoS characteristics which is actually classified as a Grade of Service (GoS) parameter:
• Availability (how often and for how long is the service unavailable?).
Grade of Service (GoS) is a parameter in service admission, while QoS parameters refer to a service already admitted. Limitations in QoS values are caused by processing in the host, at the network access interface, or inside the network. Hence, network connections must provide limited values for loss ratio, insertion rate, delay and delay variation in order to fulfil specific QoS service parameters. This holds generally, even for "best effort" service (even if it has no special QoS requirement), and for a network which is correctly dimensioned in order to avoid congestion.91
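The parameters listed above are, in the end, simple statistics over a measured packet flow. A minimal sketch with invented sample measurements:

```python
from statistics import mean, pstdev

# Hypothetical measurements for one flow
packets_sent, packets_received = 1000, 993
delays_ms = [41.2, 39.8, 44.5, 40.1, 42.0, 43.3]  # one-way delays, ms

plr = (packets_sent - packets_received) / packets_sent
print(f"loss ratio (PLR):          {plr:.3%}")
print(f"mean transfer delay (PTD): {mean(delays_ms):.1f} ms")
print(f"delay variation (PTDV):    {pstdev(delays_ms):.1f} ms (jitter)")
```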
90 Latency often depends on geography. An American interview partner pointed out that it will involve a different figure for east to west coast of the US (e.g. 70 ms round trip) compared to trans-Atlantic (e.g. 7 ms round trip). This figure should be treated with caution: the pure propagation delay of roughly 4 µs/km results in at least 4 ms over a transatlantic cable; the pure transmission delay for a packet of 2 kbyte over a DS1 connection is approximately 10 ms; and the waiting time at a load of A = 0.5 Erlang adds another 5 ms, giving a total of about 19 ms. The quoted figure may hold in the case of a high speed connection over STM-1, because in that case the propagation delay dominates the total delay.
91 As a general rule, networks are considered to be correctly dimensioned when the load in the packet
processor and on the transmission links does not exceed 70%.
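The delay decomposition in footnote 90 can be reproduced as follows. The queueing term matches the M/D/1 mean-wait formula at 50% load, which is our assumption about the model behind the footnote's numbers:

```python
# Reconstruction of footnote 90's delay components (M/D/1 queueing assumed)
packet_bits  = 2000 * 8                 # a 2 kbyte packet
ds1_rate     = 1.544e6                  # DS1 line rate in bit/s
transmission = packet_bits / ds1_rate                # ~10.4 ms
rho          = 0.5                                   # load of A = 0.5 Erlang
waiting      = rho / (2 * (1 - rho)) * transmission  # ~5.2 ms mean wait
propagation  = 0.004                                 # the footnote's 4 ms

total = propagation + transmission + waiting
print(f"{total * 1e3:.1f} ms")          # ~19.5 ms, close to the quoted 19 ms
```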
The range of services that can be provided over the Internet make different demands on QoS. VoIP is said to tolerate only limited latency and jitter while requiring modest bandwidth, whereas video streaming requires higher bandwidth but may tolerate slightly more latency and jitter than does VoIP.92 To the extent that 'real time' applications can adjust the time of playback,93 the latency and jitter statistics required of the network will be correspondingly less demanding. Adjusting play-back times is not possible for VoIP, although it is for streaming video. In the case of interactive services, minor adjustments in play-back times would be workable for some, but others will require real-time QoS. Figure 4-3 shows loss and delay variation parameters applicable to various applications.
Figure 4-3: Application specific loss and delay variation QoS requirements
[The figure plots loss ratio (from 10^-10 to 10^-2) against maximum delay variation (from 10^-4 to 10^1 seconds) for voice, interactive data, file transfer, web browsing, interactive video, circuit emulation and broadcast video.]
Source: McDysan (2000)
As originally envisaged, the Internet was not designed for services with real time
requirements. We noted in Chapter 3 that as opposed to the traditional telephone
network's use of switched circuits that are opened for the duration of a telephone
conversation, the Internet uses datagrams that may have other datagrams just in front
and behind that belong to different communication sources and are destined to
92 Only about 5% of packets are permitted to arrive after 30-50 milliseconds if VoIP is to be of an
acceptable quality, see Peha (1991) cited in Wang et al. (1997). This value may be extended up to
200 ms using echo cancelling which is included in most VoIP standards (e.g. G.729).
93 It does this by 'buffering' – i.e. by holding packets in memory for very short periods of time until the
packets can be 'played back' in the correct order without noticeable delay.
terminate on a different host. Thus, datagrams from very many sources are organised
for transport, often sharing the same transport pipe, channels (one direction) or circuits
(bidirectional). This is part of the traffic management function and is known as statistical
multiplexing which involves the aggregation of data from many different sources so as
to optimise the use of network resources.
Many technological changes have occurred over the years, but it remains the case that quality of service on the Internet is still seen as essentially a technical matter.94 In Annex B to this chapter, and in a less detailed way in the remainder of this chapter, we address the architectural features of the Internet that have a significant bearing on QoS. We note, however, that arguably the main means by which the Internet has tried to reduce QoS problems is through over-dimensioning (or over-provisioning or over-engineering). We discuss the efficacy of this approach in Chapter 8.
The Internet provides two main ways in which traffic can be managed selectively for
QoS.
1. To mark the packets with different priorities (tagging), or
2. To periodically reserve capacities on connections where higher QoS is required.
The first one provides for packets requiring high priority to be treated preferentially
compared to packets with lower priority. This approach to QoS is implemented in the
router and provides for different queues for priority and non-priority packets, where
selection of the next packet to go out is determined through a weighted queuing
algorithm. In the second case a form of signalling is introduced which tries to guarantee
a minimum value of capacity for the corresponding packet flows which requires a higher
QoS degree. In both cases the highest quality option involves the emulation of virtual
circuits (VC) or virtual paths (VP), and not types of end-user services.
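The queueing logic of the first approach can be sketched with a priority queue. This is a strict-priority simplification of the weighted queuing algorithms actually used in routers; the tags and packet names are invented:

```python
import heapq

# Tagged packets awaiting forwarding; a lower tag means higher priority
arrivals = [(3, "bulk FTP segment"), (1, "VoIP frame 1"),
            (2, "web page fragment"), (1, "VoIP frame 2")]

queue = []
for seq, (tag, packet) in enumerate(arrivals):
    heapq.heappush(queue, (tag, seq, packet))  # seq keeps arrival order per class

while queue:
    tag, _, packet = heapq.heappop(queue)
    print(f"forwarding priority-{tag}: {packet}")
# VoIP frames leave first; the bulk transfer waits behind everything else
```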
In neither case can fixed values of QoS be guaranteed as they can be in the case of pure circuit switching. Rather, QoS still needs to be viewed probabilistically, i.e. in terms of QoS statistics that are superior in some way to the standard QoS. In short, it is still a "best effort" service, although on average this enhanced best effort will provide a significant QoS improvement over the standard Internet. It is implemented by the real-time transport protocol (RTP) and is supplemented by a corresponding control protocol known as the real-time control protocol (RTCP), which controls the virtual connection used by this technology.95,96
94 Perhaps the main ones are: service segregation by DiffServ; capacity reservation under the IntServ concept and MPLS; layer reduction, mainly in the physical layer, due to progress in switching and routing technology; and the introduction of the concept of Metropolitan and Wide Area Ethernet.
95 See RFC1889 and RFC1890.
96 Competition between ISPs that sell transit services has forced them to provide QoS agreements in order to attract transit-buying customers (smaller ISPs). These 'guarantees' are probabilistic, i.e. based on the likelihood that certain QoS statistics will be met. Transit providers sometimes have to compensate their clients where performance statistics have fallen short of those that were contracted for.
Where end-users experience service quality problems a large number of factors may be the cause, e.g. equipment and cable failures, software bugs, or accidents like interruptions in electricity provision. But the most pervasive QoS problems are caused by congestion, and by a lack of interoperability above layer 3 of the OSI stack. We discuss congestion immediately below. Technical and interoperability issues are discussed in Section 4.3 and in Annex B.
4.2.2 Congestion and QoS
Congestion on an Internet backbone can be caused by many different types of bottleneck. A list of the main ones has been proposed by McDysan (2000), as follows:
• Transmission link capacity
• Router packet forwarding rate
• Specialised resource (e.g. tone receiver) availability
• Call processor rate
• Buffer capacity
Congestion management on the Internet divides into two functional steps: congestion avoidance and congestion control, both of which are applied inside the network or at its boundaries. Congestion avoidance at the boundary functions by refusing to accept a new service admission when doing so might degrade the QoS parameters for the flows of services that have already been accepted. Inside the network congestion is avoided by rerouting traffic from congested network elements to non-congested ones through the use of a dynamic routing protocol, e.g. OSPF.
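A minimal sketch of boundary admission control, using footnote 91's 70% dimensioning rule as the acceptance threshold (the rate figures are invented):

```python
def admit(new_flow_mbps: float, admitted_mbps: float, link_mbps: float,
          max_load: float = 0.70) -> bool:
    """Accept a new flow only if total load stays below the dimensioning
    threshold, protecting the QoS of flows already admitted."""
    return admitted_mbps + new_flow_mbps <= max_load * link_mbps

print(admit(10, 55, 100))  # True:  the 70 Mbit/s threshold is not exceeded
print(admit(10, 65, 100))  # False: the new flow would breach the threshold
```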
4.3 QoS problems at borders
When traffic is exchanged between ISP networks it becomes what is termed "off-net"
traffic.97 A range of QoS problems arise with off-net traffic. These can be placed into
two groups:
1. Those that give rise to the QoS problems themselves, and
97 On-Net traffic means traffic that is interchanged between hosts connected to the same AS and hence
routed with an interior gateway routing protocol (IGP), in contrast to Off-Net traffic which is routed
between different ASes by an exterior gateway routing protocol (EGP).
2. Those that delay possible solutions to these problems.
We begin by addressing (1) (the off-net QoS problems themselves, and their causes), before moving on to look at (2) (some of the reasons that solutions to these problems may be delayed).
The main off-net QoS problems appear to be explained by the following:
(i) Where interconnecting networks use different vendor equipment, and this equipment does not involve wholly standardised industry offerings:98
• At the physical network layer where the optical technologies operate,99 the equipment needs to be from the same vendor if ring structures with ADM multiplexers are to be used. These ring structures are cheaper to implement than structures with DX4 cross-connectors and corresponding point-to-point transmission systems. However, ring structures are less flexible in rerouting STM-1 contributions due to their implementation by fixed hardware cards. Using a so-called "self-healing" ring, an automatic restoration mechanism is implemented in case of ring failures.100
• Modification of the connections at the optical level for client ISPs is problematic, in part because there is currently no automated way of doing it.101 Even if in future some optical cross-connect systems (OCX) could do this, only high speed signals in the range of OC48 or OC192 can be managed and rerouted in case of transmission failure.
• In regard to meshed DX4 structures, equipment from different vendors may interconnect in the transport plane but in many cases not in the management plane. As network management is a key feature for rerouting capability at the STM-1 level, a network operator will in practice have to use equipment from a limited number of vendors in order to maintain network management capabilities.
• The ITU has also provided a standardised interface for management between networks, but it remains largely unimplemented.102 In order to overcome the off-net QoS problems caused by possible non-interoperability between proprietary management systems, a special umbrella management system would be required wherever two or more vendors' equipment is used.
98 See the discussion on standardisation in Chapter 8 for background information on this topic.
99 SDH is used in Europe; SONET in North America.
100 In cases where the optical connection between networks needs to pass through a 3rd operator, the lack of an inter-operator network interface mainly gives rise to capacity availability problems, which can be overcome by stand-by capacities and in future by wavelength rerouting.
101 See Bogaert (2001).
102 Its implementation is costly and it has not yet been widely adopted. For a short introduction see
Bocker (1997).
(ii) Service level agreements (SLAs) offered by transit providers are all different. The statistical properties of ISP networks are different, and are not readily comparable for reasons that include differences in the way the data are collected. The upshot is that degradation of QoS at borders is very common.
In the case of ATM, quantitative values regarding QoS parameters are not defined in ATM's specification. This has resulted in ISPs using different specifications for the VBR service, with the consequence that when traffic crosses borders QoS is not maintained.
(iii) Equipment that is two or more years old is less likely to provide the QoS capabilities that are being offered by other networks, or other parts of the network, such that QoS tends to degrade toward a level where there is interoperability, perhaps the most likely being the TCP/IP format, which was designed to enable interoperability between networks using different hardware and software solutions.
In regard to point (2) above, all networks that handle datagrams being sent between communicating hosts ('terminal equipment' in PSTN language) need to be able to retain the QoS parameters that are provided by the originating network if those QoS parameters are going to be actualised between the hosts or communicating parties. In other words, if one of the ISP networks involved in providing the communication imparts a lower QoS in its part of the network than is provided by the others, the QoS of the flow will be correspondingly reduced. The situation was shown in Figure 4-2. This may give rise to a coordination problem, as individual networks may be reluctant to invest in higher QoS without there being some way of coordinating such an upgrade with others in the chain.
Moreover, QoS problems at borders may also be a function of the competitive
strategies of ISPs, especially the largest transit providers, as these firms recognise that
they have some control over the competitive environment in which they operate. These
aspects of QoS are addressed in Chapters 7 and 8.
4.4 QoS and the Next Generation Internet
The Internet presently provides a number of different services to end-users and the
range of services seems likely to become greater in future. The Internet is a converging
force between historically different platforms providing different services. These
include: traditional point-to-point telecommunications services, point-to-multipoint
conference services and multicast and broadcast distribution services like video
streaming and TV. Note that two-way CATV networks are already providing the integration of traditional point-to-point call services like voice telephony with TV broadcast distribution and pay-per-view video services and classical Internet services like e-mail and WWW access.
In a fully service-integrated IP network all of this information can be organised into packets or cells (datagrams) and transmitted over the Internet, although as provided through the Internet IP platform, the experience of consumers with at least some of these services is typically of relatively low quality in comparison with the service quality provided by the relevant legacy platforms.103
Improvement in QoS on the Internet is key for the implementation of the next generation Internet. By this we mean the ability to provide the speed and reliability of packet delivery needed for services like VoIP and interactive services to be provided over the Internet at a quality that enables mass market uptake. Indeed, we define the Next Generation Internet as a network of networks integrating a large number of services and competing in markets for telephony and real-time broadcast and interactive data and video, in addition to those services traditionally provided by ISPs. For convergence of the Internet with switched telephone services and broadcasting to occur (i.e. for the Internet to become a good substitute for the PSTN and for existing broadcasting network/service providers), significant improvements in the existing QoS on the Internet are required.
Although many of the specific technologies we discuss in Annex B are not yet fully developed, several offer the prospect that high quality real-time services could be commonly provided over the Internet in the medium term. Actual business solutions that rely on these technologies are yet to materialise,104 however, due in part to the highly diverse nature of the Internet, and especially to off-net QoS problems, which we discussed in Section 4.3.105
It is worth noting that the bulk of revenues that pay for the Internet come from end-users (organisations and households). Where only one GoS/QoS is offered for a certain service, all demands are treated equally. Moreover, where only one GoS/QoS is offered, each end-user's demand may look singular, but it will in fact be made up of untapped demand for various GoSes, depending on such things as the application requested, and preferences that may only be known to each end-user. One thing we can be sure of is that the demand for multiple GoSes/QoSes will be derived from the demands of end-users. ISPs will be keeping this in mind as these GoS developments materialise. We show this situation in Figure 4-4.
The distribution on the lower part of the figure captures all Internet customers, from
those specialised organisations that have mission-critical services that require a high
103 Exceptions do arise, such as on intranets where network designers are better able to address end-to-end QoS.
104 See the various articles in the special issue "Next Generation Now" of Alcatel Telecommunication Review (2001).
105 See Keagy (2000) for more details. Ingenious developments exist, however, which take advantage of
the present state of Internet QoS. ITXC, for example, provides VoIP service using software that
enables them to search different ISP networks for the best QoS being offered at that time. Where no
QoS is available that would provide acceptable quality, calls are routed over the PSTN. See
http://www.itxc.com/
admission probability and strong QoS values as well as needing traditional e-mail and
browsing services, to those who only use the Internet to send non urgent messages.
However, most customers who make up this distribution can be expected to use the
Internet for several different purposes, and to consume several different services, such
as e-mail, file transfer, WWW, and video streaming. Differences in an individual’s
demand for admission and QoS depend in part on the application and purpose of the
communication, as indicated by the bar graphs at the top of Figure 4-4.
Figure 4-4: Demand for Internet service deconstructed
[Figure: the lower part shows the distribution of all demand for QoS by Internet subscriber, from zero willingness to pay for QoS to high WTP for QoS, plotted against the number of Internet customers; bar graphs at the top decompose two single customers' traffic (bits per month) into demand for services not requiring QoS, requiring moderate QoS, and requiring high QoS.]
Notes: WTP = willingness to pay.
Source: WIK-Consult.
Before convergence can be said to have occurred, there will be a transition period during which real-time services, such as VoIP, begin to provide real competitive pressure on legacy PSTN providers, and it is interesting to think about how QoS on this transition Internet will differ from what is provided today.
One of the main transitional problems over the next few years may have less to do with the quality of real-time sessions than with service admission control, which can enable over-loading of the IP datagram network with packets to be avoided during congested
periods. In traditional networks like the PSTN or switched frame relay networks, where
capacity is assigned during the connection admission phase, the blocking probability for
a service is described by a stochastic model, and the value provided defines the GoS.
As we show later (and especially in Annex B), the main capacity bottleneck inside the
network lies in the access part and hence service admission control can be limited to
these areas of the network under corresponding protocols, e.g. MPLS. Under such circumstances service admission control nearly always avoids over-flooding of the capacities in the backbone part of a future Internet, and QoS differences between the services can easily be controlled by simpler DiffServ protocols.
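For concreteness, the stochastic model classically used for circuit-switched GoS is the Erlang B formula, which gives the probability that a connection request is blocked on a trunk group. A minimal sketch follows; the trunk count and offered load are hypothetical values chosen purely for illustration.

```python
def erlang_b(servers: int, offered_load_erlangs: float) -> float:
    """Erlang B blocking probability via the standard stable recurrence:
    B(0) = 1;  B(m) = A*B(m-1) / (m + A*B(m-1))."""
    b = 1.0
    for m in range(1, servers + 1):
        b = (offered_load_erlangs * b) / (m + offered_load_erlangs * b)
    return b

# Hypothetical trunk group: 30 circuits offered 25 Erlangs of traffic.
print(f"GoS (blocking probability) = {erlang_b(30, 25.0):.4f}")
```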
In multi-service networks such as the NGI, service admission control under GoS values is not described in the same way as for traditional networks, but through more sophisticated models and algorithms.106 Without a means of demand management
considerable over-provisioning will be required if large numbers of users of real-time
services, especially VoIP, are not to experience instances of network unavailability that
are too frequent for them to tolerate. What may happen is that subscribers who are
sensitive to service availability will remain with the PSTN for much longer than
subscribers who are more price sensitive and who do not mind being unable to make a call in, say, 1 in 3 attempts.
Currently there are different options for providing QoS on IP networks, and these include MPLS, DiffServ, and IPv6, or perhaps most easily the TOS octet in the IPv4 header for the definition and recognition of a traffic hierarchy. For the Next Generation Internet five traffic levels are envisaged, which are shown in Table 4-1.
Table 4-1: Traffic hierarchies in next generation networks

Traffic level | Traffic type | Service example
NJ4 | Traffic for OAM and signalling functions | Network or connection monitoring
NJ3 | Real-time bi-directional traffic | Voice and video communication
NJ2 | Real-time uni-directional traffic | Audio/video streaming, TV distribution
NJ1 | Guaranteed data traffic | Retrieval services
NJ0 | Non-guaranteed data traffic | Best-effort information service

Source: Melian et al. (2002)
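By way of illustration, the TOS octet mentioned above can be set per socket on most operating systems, which is what makes it perhaps the easiest of these options. A minimal sketch follows; the address, port and payload are placeholders, and whether any router honours the marking depends entirely on network policy.

```python
import socket

# Mark a socket's outgoing datagrams via the IPv4 TOS octet.
# 0xB8 places DSCP 46 ("expedited forwarding") in the upper six bits.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)
sock.sendto(b"voice-like payload", ("192.0.2.1", 5060))  # placeholder host/port
```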
Excluding NJ4, which is mainly used for internal network functions, the four different QoSes identified in Table 4-1 should allow all the services identified in Figure 4-3 to fit relatively well into at least one of the four QoS options shown. Figure 4-5 suggests what these might look like.
106 See Ross (1995).
Figure 4-5: Fitting GoSes within service QoS requirements
[Figure: service types plotted by loss ratio (10^-10 to 10^-2, vertical axis) against maximum delay variation in seconds (10^-4 to 10^1, horizontal axis): circuit emulation, broadcast video and interactive video at low loss and low delay variation; voice tolerating higher loss; file transfer tolerating large delay variation; interactive data and web browsing in between.]
According to the options identified by Table 4-1, when a user initiates a session she would have to pay a tariff corresponding to the service class, with the price decreasing from NJ3 to NJ0. In cases where the network does not have sufficient capacity for the required service the user may choose one with lower QoS values. Moreover, network designers have to consider that with specified service admission control, a certain GoS value has to be fulfilled for each class of service (CoS). This may lead to the situation where a service request for a lower class of service is rejected even though there is sufficient free capacity for the reservation to be made, in order to preserve this capacity for a future request for a higher service class, which is charged at a higher price.107
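One simple mechanism that produces exactly this behaviour is trunk reservation: capacity headroom is held back from the lower classes so that a later, higher-priced request can still be admitted. The sketch below is purely illustrative; the class names follow Table 4-1, but the capacity and reservation values are invented.

```python
CAPACITY = 100                                            # abstract capacity units
RESERVATION = {"NJ3": 0, "NJ2": 5, "NJ1": 15, "NJ0": 30}  # headroom per class
in_use = 0

def admit(service_class: str, demand: int) -> bool:
    """Admit a request only if it fits without eating into the headroom
    reserved for higher (more expensive) classes of service."""
    global in_use
    if in_use + demand <= CAPACITY - RESERVATION[service_class]:
        in_use += demand
        return True
    return False  # rejected although raw free capacity may still exist

print(admit("NJ0", 65))  # True:  65 <= 100 - 30
print(admit("NJ0", 10))  # False: 75 >  70, headroom kept for NJ3/NJ2 arrivals
print(admit("NJ3", 10))  # True:  the top class may use all remaining capacity
```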
For interactive data, end-users may pay for a QoS in excess of what they need (NJ1), or they could pay a lower price but would obtain a QoS rather less than the type of service required (NJ0). When network resources are nearing capacity, the latter would predictably result in some perceivable QoS problems for the user.
107 In economic terms this traffic engineering problem is known as a "stochastic knapsack" problem, well known in Operations Research. It describes the situation where a mountain climber fills his backpack with items, each of which has two values: one for usefulness and one for weight. The climber then needs to maximise his benefit under the weight constraint; see Ross (1995).
4.5 Conclusion: Service quality problem areas
There are several factors presently holding back the development of the Internet into an integrated services network. These can be grouped into several overlapping categories:108
• Congestion management on IP networks is not yet especially well developed, and often results in inadequate quality of service for some types of service, e.g. VoIP;109
• Superior QoS cannot be retained between ISPs for technical reasons, such as software and even hardware incompatibility (an ISP's software/hardware may not support the QoS features provided by another ISP);
• There is a lack of accounting information systems able to provide the necessary measurement and billing between networks;
• There is no interface with end-users that enables different GoS/QoS combinations to be chosen in a way that provides value to users, and
• The quality of access networks is presently insufficient for QoS problems between backbones to be noticed by end-users under most circumstances.
These issues remain largely unresolved, although considerable effort is being
undertaken to overcome them.
108 As this study concerns the Internet backbone, we do not address issues relating per se to customer
access.
109 According to McDysan (2000), a number of resource types may be a bottleneck in a communications
network, the main ones being the following: transmission link capacity; router packet forwarding rate;
specialised resource (e.g. tone receiver) availability; call processor rate, and buffer capacity.
5 Interconnection and Partial Substitutes
5.1 The structure of connectivity
Traffic exchange on the Internet is performed between ISPs that tend to differ in terms
of their geographical coverage and often also in terms of the hierarchical layer of the
Internet each occupies. Interconnection between ISPs can be categorised into peering
and transit. ISPs are connected to one or more ISPs by a combination of peering and
transit arrangements. A visualisation of the structure of interconnecting ISPs is shown in Figure 5-1, which was introduced in a slightly more simplified form in Section 2.2.
Figure 5-1: Hierarchical view of the Internet (II): Peering and transit relationships between ISPs
[Figure: inter-national backbone providers or core ISPs (multi-regional or world-wide) form a virtually default-free zone, peering among themselves; inter-national backbone providers with a regional focus and intra-national backbone providers connect upward by transit and laterally by peering or part-address peering; local ISPs in countries A, B and C purchase transit and serve end-users.]
T = Transit; P = Peering; SP = Part-address peering
Source: WIK-Consult own construction
This Chapter explains peering and transit, and includes a description of services that
tend to serve as partial substitutes for these two forms of interconnection.
5.2 Interconnection: Peering
5.2.1 Settlement free peering
Peering denotes a bilateral business and technical arrangement between two ISPs who
agree to exchange traffic and related routing information between their networks.
Peering has sometimes been seen as a superior interconnection product compared to
transit, such that refusals to peer have sometimes been looked on with suspicion by the
authorities. Probably in part because of this history peering is often associated with the
most powerful ISPs; it is virtually the only form of interconnection negotiated by core
ISPs.
While there was a restructuring of interconnection 'agreements' in the mid 1990s
whereby many firms were shifted onto transit contracts from peering contracts, in the
last few years peering has proliferated at a regional level among smaller ISPs (i.e. ISPs
that have transit contracts for a significant proportion of their traffic). ISPs who can offer
each other similar peering values will often choose to peer. For example, peering has
become common among regional ISPs.
An ISP will only terminate traffic under a peering arrangement which is destined for one
of its own customers. Packets handed over at interconnection points that are not
destined for termination on the receiving ISP's network will only be accepted under a
transit arrangement, which we discuss below.110
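The distinction is enforced through route advertisement policy. The following sketch captures the customary export rules of inter-provider routing (often called the Gao-Rexford conditions); the relationship labels and helper function are illustrative, not an actual router configuration.

```python
def may_export(learned_from: str, advertise_to: str) -> bool:
    """Routes learned from customers are advertised to everyone (the ISP is
    paid to carry that traffic); routes learned from peers or transit
    providers are advertised only to the ISP's own customers, so a peer
    can never use the link as free transit to third parties."""
    if learned_from == "customer":
        return True
    return advertise_to == "customer"

print(may_export("peer", "peer"))      # False: would amount to free transit
print(may_export("peer", "customer"))  # True: peering serves own customers
print(may_export("customer", "peer"))  # True: own customers stay reachable
```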
Interconnection between peers does not involve monetary payment (except in rare
cases which we discuss below under 'paid peering'). This type of interconnection is
known variously as: "bill-and-keep"; "settlement-free"; "sender-keeps-all".111
If packets are accepted under a peering arrangement (i.e. for termination on the peering
partner's network), there is no charge levied by the accepting network. This reciprocity
does not specifically require that peering partners be of equal size or have the same
number of customers. Rather, it requires that both network operators incur roughly
comparable net benefits from the agreement; with arguably the main elements being
the routing and carrying of each other's traffic, and the benefits provided by access to
the other ISP’s address space. There is thus an implicit price for peered interconnection
– the cost of providing the reciprocal service.
110 If two ISPs peer they accept traffic from one another and from one another’s customers (and thus
from their customers' customers). Peering does not, however, include the obligation to carry traffic to
third parties. See Marcus (2001b).
111 This type of interconnection is not unknown in switched circuit networks, and has been recommended in an FCC working paper; see DeGraba (2000a).
It is not possible to conduct any empirical analysis of the implicit or shadow price for
peering, given that the terms of such agreements are, ordinarily, subject to
confidentiality and non-disclosure obligations. However, several core ISPs do provide a
list of the conditions which other ISPs will need to meet if peering interconnection is to
be agreed between them, and these are set out in Section 6.4 and Annex C. We note,
however, that despite the increasing commercialisation of the Internet and the attempt
of a number of backbone providers to formalise their peering practices of late, pricing is
still largely about impressions of value. There is no formalised way to measure the
values involved, which will vary on a case-by-case basis. We do not think that this
situation should per se be cause for concern by the authorities.112
It may occur that a larger and a smaller ISP agree to peer in a region.113 In Figure 5-1
above we have referred to this as part-address peering. This is because in such cases
it is typical for the peering arrangement not to involve the larger ISP agreeing with the
smaller ISP to provide access to all its customers according to a sender-keeps-all
arrangement. Rather, the addresses that are made available under the settlement free
arrangement will only include an agreed list which is roughly comparable to that
provided to the larger ISP by the smaller ISP, i.e. peering addresses which the smaller
ISP can advertise are a subset of the larger ISP's total addresses. If this did not occur
the smaller ISP would in all likelihood be getting a free-ride on the larger ISP's more
extensive network – this being either its physical network, or its larger network of
customers. Sometimes ISPs will agree to peer across separate countries, such as
when one ISP provides its German addresses to another ISP in exchange for the other ISP's Italian addresses.
5.2.2 Paid peering
Paid Peering is suggestive of a traffic imbalance issue in an otherwise peering
environment. Suppose a situation applies where an ISP is interested in the routes (i.e.
broadly speaking the address space) offered by an IBP and meets virtually all the peering criteria set by the IBP. The only crucial feature is that the IBP is expected to end up transporting rather more packet-kilometres handed over for termination by the ISP than the ISP would transport packet-kilometres handed over by the IBP for termination. In this situation paid peering may take place. We understand
that paid peering is very rare, and not formally offered by IBPs, but is occasionally
negotiated.
Paid peering arrangements also have a technical rationale. Suppose an IBP operates in
more than one region of the world, and suppose an ISP clearly meets all peering
112 Interestingly, one of the results that comes out of the industry model of Laffont, Marcus, Rey and
Tirole (LMRT) (2001a), is that where competition is sufficiently intense throughout the industry, the
choice between peering and transit is irrelevant. We discuss LMRT in detail in Chapter 8.
113 By larger we refer e.g. to customer base or geographic network coverage or both.
requirements in one of those regions (this would be address subset peering), but also
wants to interconnect in another region where transit would be the most suitable
arrangement. The implementation of this system is not possible if the IBP and the ISP
each show a single AS number to the outside world. In this case ISPs are technically
prevented from having a peering arrangement in one region and a transit arrangement
in another. If on balance transit traffic would be a relatively small proportion of the ISP's total interconnected traffic handed over to the IBP, the situation suggests a peering
relationship would be more suitable. The situation is shown in Figure 5-2. More
common, however, than paid peering is discounted transit, which we discuss below.
Figure 5-2: Technical obstacles behind paid peering
[Figure: ISPXYZ interconnects with ISPC&W, which operates as a single AS spanning the USA and Europe; in Europe the relationship is peering (C&W giving Euro routes only), while in the USA the relationship cannot be a transit arrangement because of the single AS, so paid peering is used.]
Source: WIK-Consult on the basis of information derived from interviews
5.2.3 Private peering
Peering may be either public or private. Private peering occurs on a bilateral basis in a
place agreed by the parties, and thus may not be observed by any other Internet entity.
It seems likely that a significant part of Internet traffic at the time of writing of the study
is exchanged through private peering. Indeed, it was estimated a few years ago that at
least 80% of Internet traffic in the United States was exchanged through private
peering.114 In the EU the figure, however, is thought to be significantly lower. One
reason for this may be the fact that most Web sites are hosted in the USA, even those for which a majority of users are outside the USA. This appears to be the result of the
lower hosting costs in the USA compared with other places (e.g. Europe), this
apparently compensating for the higher cost of transporting datagrams for long
distances. The growth in private peering that occurred in the mid to late 1990s can at
least in part be explained by traffic congestion at NAPs, and because private peering can be more cost-effective for the operators (for example, where traffic originates and terminates in the same city but on different networks, there is no need for this traffic to be carried elsewhere to a NAP to be handed off).
Figure 5-3: Secondary peering and transit compared
[Figure: two regional/local ISPs each purchase transit from an IBP/large ISP, with a direct secondary peering link between the two regional/local ISPs. Secondary peering will occur when the cost of the connection to each ISP is less than the money each saves due to reduced transit traffic.]
Source: Marcus (2001b)
Private peering can take place between IBPs or it can occur between lower level ISPs,
such as between two local or regional ISPs. Where the latter occurs it is also known as
secondary peering, and is increasingly common in the EU (and elsewhere) as among
other things, liberalisation has helped make the infrastructure available at much lower
prices than had previously been the case. Secondary peering enables regional and
local ISPs to access end-users and content and application service providers’ services
situated on the networks of neighbouring ISPs, without having to route traffic up
through the hierarchy. Figure 5-3 shows the relationship between secondary peering
114 Michael Gaddis, CTO of SAVVIS Communications in "ISP Survival Guide", inter@active week online,
7 December 1998; see also Gareiss (1999).
and transit. The mix of hierarchical and horizontal interconnection which forms the
Internet can be seen in Figure 5-1.115 The growth of secondary peering has prompted
the emergence of a new class of "connectivity aggregators", whose function is to
combine the traffic of smaller ISPs and acquire connectivity from larger ISPs.
Examples include the new breed of NAPs which provide a range of services to
interconnecting ISPs. We discuss these further in Section 5.5 below.
5.2.4 Public peering
Public peering occurs at public peering points, also known as Network Access Points
(NAPs), or Metropolitan Area Exchanges "MAEs" in the United States. Large numbers
of networks exchange traffic at these points.116 NAPs are geographically dispersed
and, as with private peering, peering partners use "hot-potato routing" to pass traffic to
each other at the earliest point of exchange. Private peering differs from public peering
to the extent that it occurs at a point agreed by the two interconnecting network
operators.
5.2.5 Structural variations in peering
5.2.5.1 Third Party Administrator
In this case several ISPs interconnect at a location where interconnection administration is operated by a party that is not an ISP network. This model of interconnection was developed in the pre-commercial period by the National Science Foundation (NSF) with network access points (NAPs), and subsequently by the Commercial Internet Exchange (CIX) due to the inability of the NSF system to provide access points quickly enough to meet the rapid traffic and network growth that was occurring at the time. These public Internet exchange points (IXPs) are open to all ISPs that comply with specific rules set by the operator of the NAP.
NAPs provide an opportunity for ISPs to engage in secondary peering and multi-homing, and growth in this form of interconnection has reportedly been increasing in the EU (and elsewhere) in recent years. Indeed, the percentage of all interconnected Internet traffic which is handed over at NAPs may actually be increasing relative to traffic handed over at private peering points. An indication of this is that a much greater
115 We understand that entities such as Digital Island, InterNAP, AboveNet, and SAVVIS peer with hundreds of regional providers. AboveNet, the architect of the global one-stop Internet Service Exchange™ (a network delivering connectivity and collocation for high-bandwidth applications) claims to have more than 420 peering relationships.
116 See Section 6-1 for empirical evidence on main peering points in North America and Europe.
proportion of traffic is now remaining in the regions rather than passing through upper
layer ISP networks, than was the case a few years ago.
These very important and fairly recent developments in secondary peering (and multi-homing) are again explained by liberalisation, but also by the routing standard BGP4, which was first available in 1995 and which has enabled ISPs to adopt "policy routing" and has made alternatives to hierarchical routing "drastically cheaper" than they would have been prior to its availability.117 The increased use of ATM by
NAPs may also have been a factor as this has reportedly improved service quality at
NAPs, which have long been considered as congestion points in the Internet.
Moreover, a network structure of point-to-point bilateral connections does not have
good scaling qualities.118 By taking lines to a local exchange and engaging in multiple
interconnection, fewer circuits are needed, scalability is improved, and costs are
reduced.119
Functions performed at a NAP
The following figure provides an overview of the main set of functions that are
provided by NAPs.
Figure 5-4: Functional value chain of a NAP
[Figure: the value chain runs from transmission, internet access provision and internet service provision through HW + SW equipment (racks, LAN, switch), peering policy management, housing space and basic security*, and system management**, to additional services such as content control (e.g. protection of children and young people) and additional security services (e.g. CERT).]
* e.g. space; secure non-stop power supply; help desk; climate; physical access control
** e.g. LAN service; route server service
CERT: Computer Emergency Response Team
Source: WIK-Consult
The first of these are transmission services, i.e. each ISP that is a member of a NAP needs a physical connection from the premises of its own network nodes to the premises of the NAP. In most cases these are provided by more than one NAP-
117 See Besen et al (2001), Milgrom et al (1999), and BoardWatch Magazine, July 1999. Details about
BGP-4 can be found in Minoli and Schmidt (1999, section 8.4).
118 In the USA the largest ISP backbones have apparently begun to interconnect at specified NAP-like locations established for the purpose; see Section 6.4.2.
119 See Huston (1999b).
specific companies (carriers)120 that provide leased lines or other modes of transmission
infrastructure. Usually, the number of carriers providing transmission services to and
from the NAP is much lower than the number of members at the NAP.121 From an
institutional point of view a carrier providing these services is not a member of the
NAP, although in all likelihood the carrier will have a significant stake in an ISP that will
be a member.122
A key service for running a NAP besides the hardware and software supplied for the
NAP's LAN is peering policy management. Setting up membership rules for ISPs that aim to connect to the NAP, and developing a pricing and a peering policy, are crucial preconditions. Furthermore, housing space with basic security features like physical access control, climate control, failsafe power supply and a technical 24/7 help line is needed to secure the availability of the exchange point. The system management for the LAN and its switches is another crucial service within the NAP.
In addition to these basic services a NAP may also provide for its member ISPs
services like content control or other security services. For example, some countries
require ISPs to employ staff to keep out certain content, such as that which exploits
children or young people. This function can be integrated within the NAP structure for
all member ISPs. Another possibility is to offer additional security services through a CERT (Computer Emergency Response Team) that warns and protects member ISPs from
viruses or attacks by hackers. Because of their central position NAPs are an ideal spot
within the Internet network for additional services to be offered to ISPs.
The scaling qualities of the NAP structure are likely to be important in explaining the
recent adoption by the largest IBPs in the US of points like NAPs where meshed
interconnection occurs between them. We discuss this further below.
Value chain of a NAP and the institutional perspective
From an institutional perspective the division of labour along these stages of the value chain can be high, i.e. no single company provides all the functions identified in Figure 5-4. Rather, there are several firms fulfilling different tasks. Of these we can distinguish the following:
120 Most of the NAPs are connected to the networks of several carriers. Yet this is not always the case: at MAE East, for example, WorldCom is the only carrier.
121 An example might make this clearer: the Internet exchange point DE-CIX in Frankfurt has 75
members, however, only 17 carriers provide access to the premises of this NAP. The incumbent
telecommunications company of Germany, Deutsche Telekom, provides leased lines in Frankfurt to
and from DE-CIX’s site, however, Deutsche Telekom is not a member at DE-CIX.
122 Thus, if one company provides both activities one should in principle distinguish the carrier function
(providing transmission capacity to and from the NAP) and the actual membership at a NAP
representing the ISP function.
• The legal operator, like an association of ISPs, owning the technical resources such as are needed for switching and routing. This firm makes investment decisions and develops peering policy;
• The technical operator, like a web housing company, which will provide collocation space, and
• The company providing the system management, i.e. running the technical infrastructure.
Often web housing companies offer a complete service covering the actual housing and the system management.
General features of peering policies at NAPs
As neutral exchange points all NAPs have a peering policy that covers the following topics:
• The conditions needed to become a member (e.g. the number of peering agreements outside of the NAP, such as national and international connections);
• The kind of peering arrangement (e.g. whether pricing is allowed, or perhaps only settlement-free peering is permitted);
• Whether transiting is allowed or not (at many NAPs transiting is forbidden), and
• The extent to which multilateral peering is required (i.e. peering on an equal basis with all other members) or whether bilateral peering is accepted practice.
To become a member of a NAP ISPs usually have to peer with a minimum percentage
of the other members of the NAP. However, the details of peering agreements
between members are not the business of the legal operator of the NAP.
5.2.5.2 Co-operative Agreement
This model differs from the third party administrator one in that the point of exchange is
managed by a committee of the interconnecting ISPs. This model was used when all
those interconnecting were non-commercial US Government supported networks. It
continues to exist but not for interconnection between profit seeking ISPs.
5.2.5.3 Taxonomy of Internet Exchange Points
The following table contains a taxonomy of IXPs. The table reveals that NAPs might be run for research traffic only or for commercial traffic. If they are used for commercial traffic they might be managed by non-profit organisations or by profit-oriented
companies. Often the non-profit organisations are a consortium of national or regional ISPs that have engaged in a co-operative agreement. This type of NAP is often called a "CIX". If NAPs are run as profit-oriented entities the operator is often a carrier or a web housing company.
The rationale for establishing NAPs might be different for different ISPs. For local and
regional ISPs, NAPs offer an opportunity to interconnect with Internet backbone providers. Smaller ISPs usually have difficulty peering privately with a larger ISP.
Settlement-free peering at a NAP helps to reduce costs for upstream connections and
peering with many other ISPs at these exchange points also avoids transit fees for the
percentage of an ISP’s traffic that is governed by peering contracts. Large ISPs might
have an incentive to peer at a NAP for QoS reasons; for example, local/regional peering
can avoid congestion on an ISP’s backbone infrastructure. Moreover, large ISPs obtain
the benefit of the customer base of the regional ISPs connected to the NAP,123 access
to content providers and redundancy of connection to the backbone providers they are
connected to. In addition, with its inherently neutral organisational structure,
membership at a non-profit NAP offers the opportunity to influence investments and
peering policies at the NAP, i.e. members have voting rights.
Table 5-1: A taxonomy of Internet exchange points

Name of IXP | Customers | Operator | Objective function
Neutral/Network Access Point (NAP) (= public IXP), research | Universities, research institutions | Non-profit organisation, e.g. DFN | Presumably cost coverage, e.g. DFN IXP in Berlin
Neutral/NAP (= public IXP), commercial (often called "CIX") | Commercial ISPs | Non-profit organisation, e.g. ECO Forum e.V. | Presumably cost coverage, e.g. DE-CIX
Neutral/NAP (= public IXP), IBP-operated (called "MAE") | Commercial ISPs | IBPs, e.g. WorldCom | Profit maximisation, e.g. MAE FFM
(Third party) private peering point (= private IXP) | Commercial ISPs | Carrier-independent data centre operators, e.g. Telecity (housing and hosting services) | Profit maximisation, e.g. Telecity data centre in Frankfurt/Main
Private peering point (bilateral) | Two ISPs engaged in private peering | One or both of the two ISPs | (Presumably) cost minimisation

Source: WIK-Consult
Grey marked area (in the original table) = 'For profit' IXPs
123 Large ISPs are most likely to agree to peer in regard to a subset of their total address space, when
peering is with a smaller ISP (see Figure 5-1).
5.3 Interconnection: Transit
The scope of the contractual obligation provided by transit interconnection is much
broader than that of a peering relationship. Transit interconnection is the principal means by which most ISPs obtain ubiquitous connectivity on the Internet. Transit
denotes a bilateral business and technical arrangement where the transit provider
agrees to carry an ISP’s traffic to third parties.124
5.3.1 Rationale
Most interconnection arrangements (but not most interconnected datagrams) involving
the Internet are now hierarchical bilateral. In the last 5 years, IBPs have moved more
and more ISPs from sender-keeps-all relationships, to a transit arrangement, i.e. a
situation where the ISPs have to pay for interconnection.
In defending their adoption of interconnection charges IBPs argued that they face
higher infrastructure costs as a result of ISPs handing over traffic, on average, a very
great distance from the destination point. IBPs say that for a period after
commercialisation of the Internet when sender-keeps-all interconnection still dominated,
smaller ISPs got a 'free-ride' on the national networks of IBPs. The incentive for ISPs is
to hand over traffic at a point of interconnection as soon as they can, as this tends to
lower the ISP's costs, while increasing the costs of the receiving ISP. The
phenomenon is known as "hot-potato" routing and is an accepted principle among all
ISPs whether they are international backbone providers or regional ISPs.
Transit is the most important means through which most ISPs obtain global
connectivity. Transit must be purchased when one ISP wants to hand packets over to a
2nd ISP which are not for delivery on the second ISP's network (i.e. they are for handing
over to a 3rd ISP). In a transit arrangement, one ISP pays another for interconnection.
Unlike in a peering relationship, the ISP selling transit services will accept traffic that is
not for termination on its network, and will route this transit traffic to its peering partners,
or will itself purchase transit where the termination address for packets is not on the
network of any of its peering partners. As such, a transit agreement offers connection to
all end-users on the Internet, which is much more than is provided under a peering
arrangement. Even though many ISPs are multi-homed, it is unlikely that any significant
ISP acquires access to all Internet addresses through self-provision and a single
peering or transit arrangement. Rather, ISPs will often enter into multiple transit and
peering arrangements, more than is required to secure access to all the Internet.
124 Usually, under a transit arrangement the transit provider carries traffic to and from its other customers
and to and from every destination on the Internet, see Marcus (2001b).
However, ISPs also provide IBPs (and their customers) with benefits when they
interconnect, and the way these are taken into account is through IBPs providing transit
at a price net of some of the benefits that the ISP brings to the IBP. IBPs are compelled
to offer discounted transit to the extent that competition takes place between them. If an
ISP does not like the IBP's net transit price it can go to a number of other IBPs who may
be more prepared to negotiate a better net price.125 A discounted transit scheme would
be applied, for example, if a peering arrangement appeared suitable in one region,
while a transit arrangement appeared most suitable in another. Since technically this arrangement consists of a combination of transit and peering, which is only possible where each region of each network uses a different AS number (see Chapter 3), the only feasible arrangement is a transit relationship. However, the IBP has an incentive to
offer a discount because the routes offered by the ISP to the IBP have a positive value
to the IBP.
5.3.2 The structure of transit prices
There are several possible charging arrangements for transit. The end-user's ISP (or online service provider – OSP) could pay as the receiver of transit traffic, the Web hosting ISP (the firm sending the requested data) could pay, or both ISPs could pay the transit provider.
In practice, transit is typically charged on a return traffic basis, i.e. on the basis of the
traffic handed over to the ISP whose customer requests the information. ISPs that
provide transiting (mainly large ISPs and IBPs) charge on the basis that traffic flows
from themselves to their ISP customers. Transit providers do not pay anything to their
ISP customers even though they receive traffic from them, albeit much less than the
traffic flowing from transit providers to customers. While this may not seem very
equitable at first glance, present transit charging arrangements have some economic
advantages. Not least of these is that it is the largest flow which tends to dictate network
capacity, especially at points of interconnection. As most transited packets flow from
Web hosting ISPs, through the transit provider, to online service providers, it is this
traffic that appears to give rise to congestion and governs the investment needs of ISPs
that provide transit.
In analysing transit pricing arrangements it is useful to do so in terms of the quality of
the economic signals the prices send to the parties involved, especially concerning
investment, congestion management, usage, and competition between transit
providers. In practice, however, this is made difficult and prone to error as the
information is not publicly available and information that is provided verbally tends to be
quite general.
125 In some regions it may be the case that competition to provide transit is less effective than it is higher
up the Internet.
Our information suggests that there is no accepted industry model that governs the structure of these prices. Some larger ISPs are able to negotiate a price structure with the transit provider, while others choose from a menu. There appear to be three basic dimensions around which transit price offers are structured:
• A fixed rate for a certain number of bits per month;
• A variable rate for bits in excess of this amount, and
• A rate based on peak throughput, which may include:
  - pipe size, representing the option for peak throughput, and
  - some measure of actual peak throughput ('burstiness').
Two-part tariffs appear standard, where the fixed charge may be relatively low per bit compared to the variable component.126 To the extent that ISPs can accurately estimate their future monthly usage, such arrangements allow ISPs to pay transit charges in the form of a predetermined monthly charge, any extra bits being charged at a premium. Premiums may be high but are quite possibly in keeping with the transit provider's costs in making this extra capacity available for peak demand.
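As a purely hypothetical illustration of such a two-part structure, the sketch below combines a committed monthly volume, a premium overage rate and a charge on measured peak throughput; one common implementation of the 'burstiness' measure is to bill on a high percentile of periodic throughput samples, so that the very highest bursts go uncharged. All rates, names and sample values are invented.

```python
def monthly_bill(gb_sent: float, mbps_samples: list[float]) -> float:
    FIXED_FEE = 2000.0      # covers a committed volume of 1000 GB per month
    COMMITTED_GB = 1000.0
    OVERAGE_PER_GB = 3.0    # premium rate for bits above the commitment
    PER_MBPS_PEAK = 15.0    # rate applied to the measured peak throughput

    samples = sorted(mbps_samples)
    p95 = samples[int(0.95 * len(samples)) - 1]  # crude 95th percentile:
                                                 # the top 5% of bursts are free
    overage = max(0.0, gb_sent - COMMITTED_GB) * OVERAGE_PER_GB
    return FIXED_FEE + overage + PER_MBPS_PEAK * p95

# 1200 GB sent; throughput steady at 10 Mbit/s with brief bursts to 80 Mbit/s.
print(monthly_bill(1200.0, [10.0] * 95 + [80.0] * 5))  # 2000 + 600 + 150
```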
However, we understand that some transit buyers take a flat-rate-only option. The rationale for the flat rate option is that it provides certainty to network customers who have an annual budget to spend on communications, and who prefer the riskless option of paying a certain amount known in advance for all their traffic requirements.127 We
would expect that for such customers overall transit costs are higher than they would be under a two-part tariff, as they have effectively rejected any congestion pricing
component that would restrict their peak demand.
It is common for larger customers to negotiate specific details according to their
particular requirements. Large content providers that maintain their own router, and
many ISPs (all of whom do likewise) will frequently have interconnection arrangements
with more than one transit provider.
Where the non-usage price makes up a low proportion of an ISP’s monthly transit bill
(as we are told is fairly common in the case of one major transit provider) this price
structure may improve the transit buyer’s ability to bargain for competitive prices from
transit providers. Such a pricing structure may make multi-homing a more effective
policy for ISPs and large content providers, as in addition to a small pipe-size based
charge, ISPs and content providers will only pay for the packets they send to their IBP
transit provider. Thus, the ISP could choose to send all of its traffic via the IBP that is
126 Routers keep a record of traffic statistics, i.e. there are counters in the router (port). Usually there is no
time of day pricing as regards the variable component.
127 The flat rate scheme offers no measurement saving as traffic will be measured anyway.
providing the best price/QoS, but retains the option to switch its traffic to the other IBP
should its price/QoS offer become superior, or should an outage occur on the IBP's
network the ISP is currently using for transit. This arrangement appears to provide a valuable option to switch between IBPs, for which multi-homed ISPs or content providers may not be paying directly.128
The flat rate price structure is thus a take-or-pay arrangement which detracts from the
ability of ISPs and content providers to play off IBPs against each other over the period
of the contract. For some firms that take the flat rate option, however, it can meet their
needs for revenue certainty over the duration of the contract.
It seems to us that there are reasons for IBPs to prefer a prominent role for base load
and optional capacity pricing, including some type of payment for the peak capacity
option like pipe size. A price that was also based on the variability of traffic throughput
would enable those transit buyers who send a relatively constant bit rate to receive a
lower price in keeping with their relatively greater reliance on base load capacity rather
than peak load capacity.129
5.3.3 QoS guarantees for transit
For transit contracts IBPs offer QoS guarantees which usually address three QoS
dimensions: latency, packet loss, and availability. IBPs keep the statistical data
necessary to verify their own QoS and provide periodic reports to clients. Any breach of
QoS parameters must be confirmed by the IBP's own data. Contracts that require 100%
availability are apparently the norm today due to competition, although obviously it will
not be met in reality, so very occasionally IBPs will have to pay agreed compensation in
cases of outage.
The ability of IBPs to start offering service level agreements (SLAs) under
corresponding QoS parameters coincided with operators' use of ATM in their transport
networks. Under this technology the corresponding IP packets are transmitted over
different 'virtual tubes', referred to in ATM terminology as virtual paths (VP). However,
QoS guarantees only apply if the flow of cells received conforms to the traffic parameters
that have been negotiated. Such conditions require networks to shape their traffic at the
border, just before handing over for transit or delivery.
In ATM networks arriving cells fill a logical bucket which leaks according to specific
traffic parameters, and these parameters form the basis for QoS contracts. The
parameters can include: cell loss rate (CLR); cell transfer delay (CTD); cell delay variation
128 In many markets such options are purchased directly. Indeed, in some cases there are markets in
which options are bought and sold.
129 One European ISP said in an interview that transit prices had dropped by 90% in the three years to
March 2000. Another said that in Eastern Europe they had dropped by 50% between March and
October 2001.
(CDV); peak cell rate (PCR); sustainable cell rate (SCR); minimum cell rate (MinCR), and explicit congestion indication (ECI). Table B-1 in Annex B shows which of these
parameters apply in regard to the five ATM Forum QoS categories.
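The leaky bucket is usually formalised as the ATM Forum's generic cell rate algorithm (GCRA). The sketch below gives the flavour of such a conformance test; the parameter values are illustrative, and a real policer runs per virtual path with the negotiated PCR and CDV tolerance.

```python
def police(arrival_times, increment, limit):
    """Leaky-bucket conformance in the spirit of the GCRA ("virtual
    scheduling"): increment ~ 1/PCR, limit ~ tolerated cell delay variation.
    Returns one verdict per cell: True if it conforms to the contract."""
    tat = 0.0                       # theoretical arrival time of next cell
    verdicts = []
    for t in arrival_times:
        if t < tat - limit:         # cell far too early: bucket would overflow
            verdicts.append(False)  # non-conforming: tag or discard the cell
        else:
            tat = max(t, tat) + increment
            verdicts.append(True)
    return verdicts

# Cells at the contracted gap of 1.0 conform; a back-to-back burst does not.
print(police([0.0, 1.0, 2.0, 2.1, 2.2, 3.5], increment=1.0, limit=0.5))
# -> [True, True, True, False, False, True]
```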
Operators have recently begun to implement a form of WAN IP-switched architecture under Multi Protocol Label Switching (MPLS). The adoption of this technology will result in some changes in the SLAs that ISPs have with their transit provider. MPLS is discussed in Annex B.
5.4 Multi-homing
When an ISP has a transit contract with more than one transit provider, the ISP is said
to be multi-homed. This enables the ISP to send traffic up the Internet hierarchy through connections with different transit providers. For smaller and medium-sized ISPs, multi-homing was made economically viable by the development of BGP4 and subsequently by cheap and easily operated routing equipment that uses it.130 Figure 5-5 shows a multi-homed ISP configuration: ISP(A), ISP(B), ISP(Z) and ISP(Y) are multi-homed; ISP(C) and ISP(X) are not.
Figure 5-5: Multi-homing between ISPs
[Figure: two backbones, IBP(M) and IBP(N), each sell transit to downstream ISPs (ISP(A), ISP(B), ISP(C), ISP(X), ISP(Y), ISP(Z)) serving end-users; ISPs with transit links to both IBPs are multi-homed, and a secondary peering link connects two of the lower-level ISPs.]
Source: WIK-Consult
130 See for example, BoardWatch Magazine, July 1999
There are a number of reasons why ISPs may choose to multi-home. The main ones
appear to be the following:
• To ensure connectivity to the Internet in case one connection is disrupted, i.e. it works as a substitute for upstream service resilience.131 When congestion, accidents and outages inevitably occur, multi-homing improves average service quality for ISPs and ultimately for end-users.132
• It can assist in the optimisation of traffic flows, particularly in conjunction with BGP.133
5.5 Hosting, caching, mirroring, and content delivery networks
In this section we explain the terms hosting, caching, mirroring and content delivery
networks (CDNs). These are services provided either by ISPs or by content providers.
Hosting occurs when a content provider builds a server cluster where it stores different WWW content like web pages and search engines. Web hosting is the process of maintaining and operating the Web servers.
Caching, mirroring and content delivery networks (CDNs) involve within-network
storage of content which is primarily available from other network sources. All three are
variations on a single theme which is to move content closer to the edges of the
Internet. Their purpose is to lower transit costs for local and regional ISPs. They may
also provide improved QoS by allowing a greater proportion of traffic to be localised, putting less strain on edge routers, which is where much of the processing
is required in order to deliver packets to their correct destination. Content delivery
services do not involve an agreement between the ISP and the content provider.
A cache is a computer network that intercepts requests from end-users to access or
download content, such as web pages that are located on other networks. Rather than
request the same page, say, a thousand times a day, a cache retrieves the requested
content and stores it, and only periodically updates it. When the object is requested
again the cache sees that it has it in storage and provides the object it has stored to
the requesting end-user (or her ISP) and in so doing avoids transit charges for the ISP.
For off-net objects that are most frequently requested, pre-fetching caches are
programmed to download this content regularly from a distant website in anticipation of
131 See Huston (2001a).
132 We understand that round trip times for packets traversing a particular number of hops have reduced
as a result of the new BGP4 protocol which was largely responsible for making multihoming
commercially viable.
133 It can provide information useful for traffic engineering and capacity planning decisions where AS
numbers are uniquely assigned to both service providers and multi-homed enterprises. See Packet™ Magazine, 3rd Quarter 1999: "Design of Non-stop E-Commerce".
requests.134 A caching service thus involves the ISP periodically downloading the most visited WWW content to its own servers. The reduction in inter-network traffic that
results from caching is said to be significant, with potential for considerable growth in
the future.135
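A minimal sketch of this fetch-store-refresh behaviour is given below. It uses a simple time-to-live rather than the HTTP cache-control machinery a production web cache would honour, and the refresh interval is an arbitrary assumption.

```python
import time
import urllib.request

TTL_SECONDS = 300.0                 # arbitrary refresh interval
_store: dict[str, tuple[float, bytes]] = {}

def fetch(url: str) -> bytes:
    """Serve a stored copy while it is fresh; re-fetch upstream otherwise."""
    now = time.monotonic()
    hit = _store.get(url)
    if hit is not None and now - hit[0] < TTL_SECONDS:
        return hit[1]               # cache hit: no transit traffic generated
    body = urllib.request.urlopen(url).read()   # cache miss: fetch upstream
    _store[url] = (now, body)       # store for subsequent requesters
    return body
```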
Figure 5-6: A visual depiction of caching
[Figure: end-users connect to local ISPs (ISP(X), ISP(Y), ISP(Z)), which connect through regional ISPs (ISP(A), ISP(B), ISP(C)) to the backbones IBP(M) and IBP(N); a content request is first checked against the local ISP's cache ("Cache hit?"), then the regional ISP's cache, before travelling further up the hierarchy.]
Source: WIK-Consult
Large IBPs have also provided caching to ISPs and content providers as a wholesale
service, but as caching also competes with their core business which is selling transit
services, we gather that there is now little interest on the part of backbones in
providing caching services to ISPs.136 Rather, caching is mainly done at the level of
regional or local ISPs.137 A diagrammatic representation of caching is shown in Figure
5-6. Let us explain by focussing on an end-user of ISP(x) who wants to get content from
134 More sophisticated caching techniques, such as satellite caching/content distribution, are being used
by entities such as Edgix and Cidera. These services use very large caches placed in ISP networks
that are connected to satellite networks. When one cache stores a new object, a copy of it is
transmitted over the satellite network to all other caches.
135 Intel claims that a cache can reduce the amount of outbound bandwidth required by as much as 30%
to 40% ("Caching for improved content delivery", 2000). According to EXPAND Networks, Web
caching both reduces the response time for access to web content, and increases the average
throughput by about 30%.
136 A large IBP told us that with the decline in the cost of bandwidth it prefers to use its network to fetch
content that could be cached.
137 AOL claims to make extensive use of caches.
a content provider who is connected to the network of ISP(A). In this case a request for
content may be held at the local ISP’s cache, or it may be held at the regional ISP’s
cache; otherwise the request will go all the way back to the content provider’s site.
In the case of a mirroring service, the ISP and the content provider agree to have the
complete content accessible from the ISP’s remote servers. Such an agreement is
typically necessary with this type of arrangement because there are ownership /
intellectual property issues involved. In other respects it is the same as caching. The
decision of the parties is based on the realisation that it is a profitable strategy for both
of them.
What differentiates mirroring from caching is that a mirror is held within the server of
the ISP rather than on a separate network, as is the case with a cache. A mirror
removes the legal risks as mirroring is done on behalf of the original site owner/content
provider. Thus, firms like Coca-Cola may ask their ISP to provide mirror sites in
Western Europe, Russia, Japan etc. As is the case with caching, however, mirroring is
used by ISPs to store frequently requested items on their servers, thus reducing their
transit costs and improving download times for end-users. As with caches, mirror sites
are not commonly provided by IBPs as they compete with their core business. Figure
5-7 shows a diagrammatic presentation of mirroring.
Figure 5-7: A visual depiction of mirroring
[Figure: an end-user's request travels from the local ISP up through a regional ISP; the response is served from a content mirror held at the regional ISP rather than from the original site on the ISP backbones.]
Source: WIK-Consult
Content delivery networks (CDNs) involve caches and mirrors placed on multiple
networks in different regions, which are programmed to periodically update frequently
requested items. A diagrammatic presentation of CDNs is shown in Figure 5-8.
Figure 5-8: Basic features of Content Delivery Networks
[Figure: three ISPs (ISP A, ISP B, ISP C) joined by peering connections, each hosting within its sub-network/backbone copies of content from the other providers, so that content from providers A, B and C is replicated across all three networks.]
Source: Derived from material provided by WorldCom
5.6 A loose hierarchy of ISP interconnection
Referring to our basic discussion of how traffic is exchanged among the main types of
players which we introduced in Chapter 2, we now provide a more detailed discussion.
IP networks that make up the Internet can be categorised into four groups:
1. The top layer comprising ISPs with large international networks (sometimes referred to as Tier 1 ISPs, IBPs, or core ISPs). These networks may be based on own or leased infrastructure. Virtually all traffic exchanged between core ISPs is according to settlement free peering, typically occurring on a private bilateral basis. Core ISPs are very limited in number and, largely for historical reasons, are mainly
centred on the USA.138 These networks also provide services to end-users, such
as large corporations and important web sites (typically through private leased line
services). Through their vertical integration into regional or local ISPs, some core
ISPs also provide services to a full range of end-user customers.
2. A second layer comprising ISPs (i.e. smaller IBPs – sometimes referred to as Tier 2 and Tier 3 ISPs) that have significant national and possibly cross-border networks. These ISPs provide transit services for ISPs other than core ISPs. They also purchase transit from core ISPs, and may also have regional peering arrangements among themselves and with core ISPs. These Tier 2 and 3 ISPs usually also have a full range of end-user customers.
3. The third layer of the Internet comprises ISPs with some infrastructure (typically leased). They are regional or local in their presence. They peer among each other and may provide transit services to other ISPs, most likely fourth layer ISPs. Most of their connectivity is provided through purchasing transit from upstream ISPs.
4. The fourth layer of ISPs has virtually no infrastructure of its own, but typically operates over the infrastructure provided by larger ISPs. These fourth layer ISPs are typically very small service providers whose connectivity is solely provided through purchasing transit.
There appears to be no widely accepted definition of what distinguishes a Tier 1 from a Tier 2 ISP. Indeed, even if there were an accepted definition, the information needed to make the distinction is typically not available.139 As far as we can gather transit is purchased by all ISPs, although for the largest of them no transit will be purchased in the United States. For these core ISPs transit may still be purchased in other parts of the world.140
While the hierarchical structure of the Internet is a very loose one it would not be
feasible to abandon this hierarchy as it would require each ISP to interconnect with
each and every other ISP, i.e. each ISP would need to maintain a business
relationship with every other ISP. It would require the network to be fully meshed. In
this case each ISP would need routers that maintained full routing tables, and these
routers would need much greater processing power than the vast majority of routers
currently have. The costs of this structure, the cost of routers that would be needed to
maintain full routing tables, and the cost of maintaining those tables, would require
financial resources beyond what most existing ISPs would be prepared to support.
138 Indeed, while Tier 1 ISPs are very active in other parts of the world, and some even have their head office outside the US, it can be argued that the core of the Internet is still located in the US.
139 In practice, it will not be obvious for many ISPs which group they should be placed in. For example, some very small ISPs will have some infrastructure, which may make it difficult to decide whether they should be in ISP group 3 or ISP group 4.
140 One of the big IBPs of the world said in discussions that it purchased transit in about 12 countries,
including Japan and Singapore.
To avoid this cumbersome and probably unworkable structure, the Internet uses the
hierarchical addressing and routing system described in Chapter 3. Routers of non-Tier 1 ISPs only have access to the limited customer addresses held by their own
router and those opened to them by peering partners. For these ISPs, packets that are
not addressed to recognised addresses must be sent up the hierarchy through a
default router and to a transit provider with which the ISP has a transit contract. To
avoid default routers sending packets to each other, router management needs coordination.
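The forwarding decision just described — longest-prefix match over known routes, with a default route pointing at the transit provider — can be sketched as follows; the prefixes and port names are illustrative documentation values.

```python
import ipaddress

routes = {
    ipaddress.ip_network("198.51.100.0/24"): "own customer port",
    ipaddress.ip_network("203.0.113.0/24"):  "peering partner port",
    ipaddress.ip_network("0.0.0.0/0"):       "default: transit provider",
}

def next_hop(destination: str) -> str:
    """Longest-prefix match; the /0 default route matches anything."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]

print(next_hop("203.0.113.7"))  # handed over to the peering partner
print(next_hop("192.0.2.55"))   # unrecognised: sent up the hierarchy
```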
This approach requires the top layer of ISPs to maintain core routers that do not rely
on default routes. Usually if packets are encountered with addresses that are not recognised, these packets are dropped and an error message returned to the source.
ISPs at the top of the Internet hierarchy must, therefore, maintain full routing tables in
order that full network connectivity be maintained.141 This vertical structure
economises on transaction and routing costs. It has, however, meant that the top layer
of ISPs have been seen to be potentially in a position to influence market outcomes to
their own commercial advantage.142 Recent developments in routing standards
appear to have assisted significantly in undermining any market power that was held
by top level ISPs and will have thus reduced competition concerns, while not
fundamentally changed the hierarchical structure of the Internet. We discuss these
issues further in Chapter 8 which addresses the topic of market failure.
141 While an increasing proportion of all Internet traffic is remaining regional, compared to a few years
ago, due to the increased use of caching, mirror sites, and secondary peering, transit arrangements
are important in order to provide vital network connectivity.
142 See Milgrom et al (1999).
6 Empirical evidence concerning traffic exchange arrangements in the Internet
In this chapter we aim to highlight different empirical aspects of traffic exchange on the
Internet. Topics addressed are: first, characteristics of Internet exchange points;
second, a classification of the main Internet backbone players; third, empirical evidence
regarding the growth, performance and traffic flows of the Internet; fourth, an
examination of the peering policies of selected ISPs; and fifth, some remarks on
capacity exchanges. We focus both on status quo aspects and on past trends.
Throughout this chapter the main geographical focus is on North America and Europe.
We do not take account of Africa, Asia, Australia/New Zealand, and Latin America.
6.1 Characteristics of Internet exchange points
In this section we focus on characteristics of major international traffic exchange points
of the Internet. We primarily aim to provide information about:
• The operators of Internet backbone nodes ("Who");
• The ownership of the respective resources at these nodes, and
• The location of the nodes.
Data on investment plans unfortunately are not publicly available. In this report we focus
on public exchange points (neutral or network exchange points – NAPs). Most
information about these exchange points is publicly available. Information about private
exchange points is usually proprietary.143
6.1.1 Methodology and data sources
Along with private peering points between ISPs, NAPs are crucial elements of the
Internet’s topology. NAPs offer connectivity both for regional ISPs and for IBPs. In
theory, central NAPs interconnect with the entire Internet, since they offer the possibility
to peer with the most important international backbone providers. Later in this chapter
we highlight the most significant NAPs in the USA/Canada and in Europe.
The information provided is based on desk research and interviews with industry experts.
Sources include: OECD (1998); TeleGeography (2000, 2001), and Boardwatch (2000,
2001). A comprehensive list of NAPs is provided at the EP.NET,LLC website
(www.ep.net). More details are available on the exchange points’ homepages. The
interview partners relevant for this section represent, in particular, Telecity, COLT, and
DE-CIX.
143 We investigate the general guidelines for private and public peering in section 6.4.
6.1.2 NAPs in the USA and Canada
To identify the most important NAPs in the USA and Canada we start with the IBPs in
North America. Boardwatch (1999, 2000 and 2001) contains a list of the most important
IBPs in North America. We adopt this classification in our analysis below. The
assumption is that a NAP can be viewed as important if one or more of the IBPs on the
Boardwatch lists of 2000 and 2001 declare this NAP a major national and international
peering point.
By analysing the data in Boardwatch (2000) we have identified 17 internationally
important NAPs.144 The following table contains the official name of the NAP, its
location, the legal operator (who is not necessarily the technical operator), the
homepage address, an indication of the profit status (non-profit (y for yes) or for-profit (n
for no)), and the number of ISPs connected as of April 2001.
It can be seen from Table 6-1 that the four official NAPs started by NSF in 1994 and
built up in 1995, still play a crucial role in interconnecting international Internet traffic
(Pacific Bell San Francisco NAP; Ameritech Chicago NAP; MAE East Washington D.C.;
Sprint Link New York NAP, Pennsauken). Besides the NAPs located in the U.S. the
TORIX and the CANIX NAP in Canada are important for U.S. Internet backbone
providers.
The NAPs connect between 11 and 123 members, with nearly half of them having more
than 50 members. The majority of exchange points are run as profit-oriented
enterprises. They are managed by well known telecommunications companies like
WorldCom (5 NAPs), PAIX.net, a subsidiary of Metromedia Fiber (2 NAPs), SBC (3
NAPs, 2 of them run by Pacific Bell), Sprint and Telehouse (one NAP each). Two of the
most important US NAPs and one Canadian NAP are provided by ISPs' associations
for their members, with another one run by the research organization ISI. The US NAPs
are located around the prosperous economic centres of the West Coast and the East
Coast, e.g. New England and the Silicon Valley region.
However, more and more regional NAPs are being built. Annex C-1 contains information
about 58 NAPs in the US and Canada which we have identified by Internet based
research (EP.NET, homepages of exchange points) as of June 2001. The EP.NET
website, formerly run by ISI and now regularly updated by a private company, contains a
full link list to all NAPs worldwide. Sources also include TeleGeography (2000), OECD
(1998), Boardwatch (2000, 2001) and Colt (2001). We note, however, that the Internet
architecture is changing rapidly and additional NAPs may be added in the future.
144 For a complete overview of the companies who have access to these 17 NAPs see Annex C-1.
Table 6-1: Features of the most important NAPs in the USA and Canada (ordered according to the number of ISPs connected)

Name of IXP | Location (town, state) | Legal Operator | URL / Remarks | Non-profit | # of ISPs connected
PAIX Palo Alto | Palo Alto (California) | PAIX.net Inc. (AboveNet, Metromedia Fiber N.) | www.paix.net | n | 139
Ameritech Chicago NAP | Chicago (Illinois) | SBC/Ameritech | http://nap.aads.net/main.html; one of the original National Science Foundation exchange points | n | 123
CIX Herndon | Herndon (Virginia) | Commercial Internet eXchange Association | www.cix.org | y | 66
CIX Palo Alto | Palo Alto (California) | Commercial Internet eXchange Association | http://www.cix.org/index.htm; moved to PAIX Palo Alto | y | 66
MAE East | Washington D.C. | WorldCom | www.mae.net/#east.html; one of the original National Science Foundation exchange points | n | 64
MAE West | San José (California) | WorldCom / NASA Ames | www.mae.net/#west.html | n/y | 60
PacBell Los Angeles NAP | Los Angeles (California) | Pacific Bell (SBC) | http://www.pacbell.com/Products_Services/Business/ProdInfo_1/1,1973,146-16,00.html | n | 59
PacBell San Francisco NAP | San Francisco (California) | Pacific Bell (SBC) | http://www.pacbell.com/Products_Services/Business/ProdInfo_1/1,1973,146-16,00.html; one of the original National Science Foundation exchange points | n | 59
MAE East Vienna | Vienna (Virginia) | WorldCom | www.mae.net/#east.html | n | 57 (closing, ISPs move to MAE East)
NYIIX (New York International IX) | New York (New York) | Telehouse | http://www.nyiix.net/; international and local IXP service | n | 42
Seattle Internet Exchange | Seattle (Washington) | ISI | www.altopia.com/six | y | 42
PAIX-VA 1 | Vienna (Virginia) | PAIX.net Inc. (AboveNet, Metromedia Fiber N.) | www.paix.net | n | 30
MAE Dallas | Dallas (Texas) | WorldCom | http://www.mae.net/#Central | n | 19
TORIX | Toronto (Ontario), Canada | Toronto Internet Exchange Inc. | www.torix.net | y | 11
CANIX | Toronto (Ontario), Canada | no public information | NA | NA | NA
MAE Los Angeles / LAP | Los Angeles (California) | WorldCom; LAP: USC/ISI | www.mfsdatanet.com; http://www.isi.edu/div7/lap/ | n/y | NA (MAE LA is not accepting new customers)
Sprint New York NAP | Pennsauken (New Jersey) | Sprint | http://www.sprintbiz.com/index.html; one of the original National Science Foundation exchange points | NA | NA

Source: TeleGeography 2000, OECD 1998, Boardwatch 2000, Colt 2001, EP.NET,LLC, homepages of exchange points145
NA = information not available
y = not-for-profit public Internet exchange
n = for-profit public exchange
Most of the regional NAPs in the US handle local Internet traffic in the big city centres,
e.g. in Houston, Dallas, Chicago, Atlanta, Seattle or Tucson. Especially in Texas
several new NAPs have been founded in recent years.
Some of the players in the Internet exchange business, like WorldCom, PAIX.net, Pacific
Bell, Telehouse and Equinix, have started to build a network of NAPs all over the U.S.
This strategy allows the members of one exchange point to become members at
several NAPs under the same conditions.
In 2001 the importance of some U.S. international NAPs changed significantly.146 MAE
Dallas gained 4 members and MAE West five. The traditional MAE East Vienna is
closing down and does not accept new providers. The new MAE East, however, is
proving to be successful with 31 IBPs connected. PAIX Palo Alto lost eight members in
2001, and the Sprint New York NAP lost four. San Diego NAP and Oregon IX are new
NAPs with 3 and 1 IBP connected there respectively.
145 MAE (Metropolitan Access Exchange) is a trademark of WorldCom for their IXPs
IXP = general expression for Internet exchange point
CIX = commercial Internet exchange (but generally not-for-profit)
NAP = Network/Neutral Access Point
146 See Boardwatch 2001, pp. 30-132 and Annex C-1.
The landscape of Internet backbone providers also changed significantly between 2000
and 2001. New players arose with many connections at NAPs, for example, OptiGate
Networks (11 connections), Cogent Communications (8 connections), Telia Internet (7
connections)147 and One Call Communications (4 connections). 25 of the 36 IBPs
classified as important in Boardwatch (2001) have five or more connections to NAPs.
OptiGate has the most NAP connections in 2001 (11), followed by Cogent, Epoch, and
XO Communications (formerly Concentric) with eight connections, ICG, Qwest, and Telia
with seven connections, and C&W, Electric Lightwave, Multa Communications, Netrail,
Teleglobe, Williams, and Winstar interconnecting at six international U.S. NAPs. Five
connections are kept up by Aleron, AT&T, Broadwing, E.spire, Excite@Home148, Level
3, Lightning Internet Services, One Call Communications, ServInt, Sprint, and
WorldCom.
6.1.3 Important NAPs in Europe
North-American NAPs still play a leading role in the provision of Internet backbone
services in other parts of the world. However, in some regions they are losing their
importance. In Europe Internet traffic is increasingly being kept in Europe due in part to
the development of a meshed network of European NAPs. In the present section we
identify and classify the most important NAPs in Europe from the perspective of both
North-American and European IBPs. This is done in several steps.
Step 1: Identification of basic number of NAPs and their location
Adding up the NAPs in Europe as identified by Boardwatch, TeleGeography, OECD and
the EP.NET,LLC website, yields around 60-70 NAPs located both in Western and
Eastern Europe, see Annex C-1.
Figure 6-1 shows the cities in Europe where these NAPs are located.
Annex C-1 suggests that in the middle of 2001 up to 125 ISPs were connected at a
single European NAP. 24 of these NAPs have 20 or more members, and 8 have more
than 50 members; the latter are located in Amsterdam, London, Grenoble, Moscow,
Frankfurt/Main, Vienna, Milan and Paris.
147 Telia Internet, Inc. was acquired in October 2001 by Aleron. Telia's Internet network was ranked
highest among all backbone Internet providers according to Boardwatch (2001), achieving better
results than nearly 40 other backbones. Telia sold this subsidiary presumably because it was a
loss-making entity; see Dow Jones International News, October 3, 2001.
148 In the course of 2001, Winstar and Excite@Home filed for bankruptcy protection under chapter 11.
Figure 6-1: Overview of cities with NAPs in Europe
[Map of Europe marking the cities with NAPs: Amsterdam, Athens, Barcelona, Berlin, Berne, Bratislava, Brussels, Bucharest, Budapest, Cyprus, Darmstadt, Dnepropetrovsk, Dublin, Edinburgh, Ekaterinburg, Frankfurt/Main, Geneva, Grenoble, Hamburg, Helsinki, Lisbon, London, Luxembourg, Lyngby, Madrid, Manchester, Milan, Moscow, Munich, Novosibirsk, Oslo, Paris, Perm, Prague, Riga, Rome, Saint Petersburg, Samara, Stockholm, Vienna, Warsaw, Zagreb, Zurich]
Source: WIK-Consult (own research)
Step 2: Taking account of availability of further information
We start by assuming that a reasonable criterion for judging the importance of each of
these NAPs is the number of ISPs connected. The membership figures represent the
number of possible peering arrangements that new members could establish.
Unfortunately, only 44 NAPs make information about their membership publicly
available. In practice, the usable number is a little lower, because there are a few
exchange points for which the information could not be used. One reason was that
the information was only published in Cyrillic.149 A second reason applied to TIX in
Switzerland, which publishes AS numbers only.150 As we have not collected empirical
evidence to match AS numbers and the respective carriers, TIX is omitted from the list.
Third, the available information on IXEurope in Zurich does not seem to show the
members connected to the NAP, but rather "partnerships". With these omissions, we
have analysed membership information from a total of 37 NAPs.151
149 Due to our limited resources we have not invested in translation of this information, rather, we have
omitted these exchange points in our analysis.
150 See section 3 for a discussion of AS-numbers.
151 We admit that by concentrating on the 44 and in turn on the 37 NAPs we are skipping a priori relevant
NAPs, e.g. MAE-Paris, MAE-FFM or the Ebone NAPs. However, as public information about their
participants is not available, more information could only have been collected by contacting each of
these NAPs individually. This would have been time consuming and the outcome was a priori unclear.
As only limited project resources were devoted to the work package on empirical evidence, we
decided to base our collection of data on nodes and edges in Europe on the 37 NAPs and their
members.
Step 3: Classification of NAPs
We have defined three different groups of NAPs in Europe: those with international or
global importance, those with European importance, and those with regional
importance. In order to assign a NAP to one of these groups we have taken the
structure of its membership as an indicator. Thus, a NAP in Europe is defined to be of:
• International or global importance if it has at least three U.S. IBPs connected;152
• European importance if five or more ISPs are connected which are either active
throughout Europe and/or are U.S. IBPs, with the number of IBPs in the latter
category limited to two;
• Regional importance if it is of neither international nor European importance.
The result of this classification procedure can be found in Table 6-2.153
The classification included in Table 6-2 does not take into consideration the size of the
connected networks because of a lack of information about this.154 It refers only to the
names of the providers connected and not to the exact (part of the) network and the
address space of ISPs connected to each of the NAPs. If comprehensive information
about Autonomous System (AS) numbers had been available, a clustering of NAPs on
this basis would in all likelihood have provided a better indication of the most important
NAPs.
152 We rely here on the IBPs provided by Boardwatch (2000, 2001).
153 The assignment of two NAPs might be a bit ambiguous: MIX in Milan is classified as international and
of global importance in particular because AT&T, Digital Island (now Cable & Wireless), Global One
and Concert (then a cooperation of BT and AT&T) are members. Espanix in Spain has as members
(among others) AT&T, Cable & Wireless, Global One, BT and COLT. Companies with a NorthAmerican presence are of course AT&T, Digital Island/Cable & Wireless. BT and COLT in our view
have a European focus. Thus, the assignment of MIX and Espanix depends on the assignment of
activities of Global One (now part of France Télécom/Equant) and Concert. Even though both are not
named as important North American IBPs we have assigned MIX to the category international and of
global importance because both Concert and Global One have a European and a North-American
focus, so the threshold value (three North American IBPs) is reached. This is, however, not the case
with Espanix, which is thus classified as a NAP of European importance.
154 It should be noted that defining size is a non-trivial matter. There are different measures of size. For
example, one could focus on the bandwidth (usually from 2 Mbit/s up to 2,488 Gbit/s) with which a
member ISP is connected to the NAP. Alternatively, one could focus on the speed with which a
member ISP is connected to the internal LAN (Ethernet-based) of the NAP. The Ethernet technology
usually differs between Standard Ethernet (10BaseX), Fast Ethernet (100BaseX) and Gigabit
Ethernet (1000BaseX) solutions.
Table 6-2: Classification of NAPs in Western and Eastern Europe (as of May 2001)

International and of global importance: AMS-IX (Amsterdam, Netherlands); LINX (London, UK); DE-CIX (Frankfurt/Main, Germany); VIX (Vienna, Austria); MIX (Milan, Italy); SFINX (Paris, France); BNIX (Brussels, Belgium); NIX (Oslo, Norway); SE-DGIX (Stockholm, Sweden); CIXP (Geneva, Switzerland); NIX.CZ (Prague, Czech Republic); DIX (Lyngby, Denmark); PARIX (Paris, France)

European importance: INXS (Munich, Germany); BIX (Budapest, Hungary); FICIX (Helsinki, Finland); Espanix (Madrid, Spain)

Regional importance: GNI (Grenoble, France); M9-IX (Moscow, Russia); LoNAP (London, UK); SIX-Slovak IX (Bratislava, Slovakia); MaNAP (Manchester, UK); PIX (Lisbon, Portugal); WIX (Warsaw, Poland); L-GIX (Riga, Latvia); CATNIX (Barcelona, Spain); Manda (Darmstadt, Germany); AIX (Athens, Greece); INEX (Dublin, Ireland); NAP Nautilus (Rome, Italy); LIX (Luxembourg); BUHIX (Bucharest, Romania); SPB-IX (St. Petersburg, Russia); SIX-Z (Zürich, Switzerland); World.IX (Edinburgh, UK); CyIX (Cyprus); SIX-B (Berne, Switzerland)

Source: WIK-Consult
Unfortunately, identifying the AS numbers of the ISPs and IBPs of the world, and the
position of each AS in the ISP’s network hierarchy, is information and resource intensive.
Only a very limited number of NAPs publish information about the AS numbers of the
ISPs connected to them on their websites. One alternative is therefore to search the
web sites of the ISPs and IBPs themselves. In practice, however, this provides only
limited success. There is a search engine available containing a collection of all
available AS numbers and an assignment of these AS numbers to ISPs/IBPs. However,
one cannot derive from this source which geographical part of the network of a
particular ISP is assigned which AS number. Thus, it became obvious to us that one
has to gather the necessary information from the ISPs through personal interviews in
order to have a chance of establishing a sound database based on AS numbers. Due
to our limited resources in the project we have not pursued this matter further.
Table 6-2 suggests that 13 NAPs in Europe can be classified as international and of
global importance. Moreover, there exist 4 NAPs of European importance and 20
NAPs which in all likelihood possess only regional importance in the area of Western
and Eastern Europe.
Analysis of the 13 most important NAPs in Europe
Figure 6-2 displays the location of the 13 most important NAPs in Europe.
Figure 6-2: Cities hosting the 13 most important NAPs in Europe
[Map of Europe marking: Amsterdam, Brussels, Frankfurt/Main, Geneva, London, Lyngby, Milan, Oslo, Paris, Prague, Stockholm, Vienna]
Source: WIK-Consult (own research)
Table 6-3 gives an overview of features of the 13 most important NAPs in Europe, all of
which are located in capitals of European countries or other central cities. In contrast
to U.S. or Canadian NAPs, almost all of these NAPs are managed as non-profit
organisations. Six are operated by associations of Internet providers, six are run by
universities or research centres and only one is operated as a for-profit enterprise by a
telecommunications operator (France Telecom).
AMS-IX (Amsterdam) and LINX (London), both founded in 1994, and DE-CIX
(Frankfurt/Main), established in 1995, are considered to be the most important of the
European NAPs. These three Internet exchange points are still growing. In terms of the
number of members, DE-CIX was expected to have the most members during 2001.
Table 6-3: Features of the 13 most important NAPs in Europe (ordered according to the number of ISPs connected)

Name of NAP | Location | Operator | URL / Remarks | Non-profit | # of ISPs connected
AMS-IX | Netherlands, Amsterdam | AMS-IX Association | www.ams-ix.net/ | y | 125
LINX | UK, London | London Internet Exchange Limited | www.linx.net | y | 118
DE-CIX | Germany, Frankfurt/Main | Eco Forum e.V. (not-for-profit industry association of ISPs) | www.eco.de | y | 75
VIX | Austria, Vienna | Vienna University | http://www.vix.at/ | y | 72
MIX-Milan | Italy, Milan | MIX S.r.L. | http://www.mix-it.net/ | y | 66
SFINX | France, Paris | Renater | http://www.sfinx.tm.fr/ | y | 59
BNIX | Belgium, Brussels | Belnet – Belgian National Research Network | www.belnet.be/bnix | y | 45
NIX | Norway, Oslo | Centre for Information Technology Services (USIT), University of Oslo | http://www.uio.no/nix/info-english-short.html | y | 39
SE-DGIX (Netnod) | Sweden, Stockholm | Netnod Internet Exchange i Sverige AB with SOF (The Swedish Operators Forum) | http://www.netnod.se/index-eng.html; several locations | y | 39
CIXP | Switzerland, Geneva | CERN IX Point | wwwcs.cern.ch/public/services/cixp/index.html; two locations: CERN and Telehouse | y | 37
NIX.CZ | Czech Republic, Prague | NIX.CZ Internet Service Provider association | http://www.nix.cz/ | y | 31
DIX | Denmark, Lyngby | UNI-C (partner of the Danish Ministry of Education) | www.uni-c.dk/dix/ | y | 29
PARIX | France, Paris | France Telecom | http://www.parix.net/anglais/ | n | 25

Source: WIK-Consult research on the basis of TeleGeography (2000), OECD (1998), Boardwatch (2000), Colt (2001), EP.NET,LLC, homepages of exchange points
6.1.4 Additional features of European NAPs
Local production conditions of a NAP
Some of the NAPs are not located at a single site in a city; rather, they operate
metropolitan area Ethernet systems that connect several sites. For example, LINX is
located in 8 buildings all over London. In May 2001 DE-CIX opened a second node,
also located in Frankfurt. AMS-IX is also a distributed exchange, currently present at
four sites in Amsterdam.
Product differentiation of NAP operators
Operators of the NAPs are seeking agreement from members to expand the range of
services available to their members to include, for example, security services
(Computer Emergency Response Teams – CERTs), content control for the protection of
children and young people, and other services that might be required by law, such as
those concerning the confidentiality of telecommunications. In addition, there are
organisations that lobby on issues concerning telecommunications regulation, law
enforcement, content regulation, education and the qualification of IT specialists, etc.
Fees of NAPs
Today NAPs are financed by flat rate charges levied on members. Experts suggest that
in the future this method of financing NAPs might be replaced by fees based on some
measure of usage, such as traffic flows at a NAP.
Other players like housing companies or Internet businesses do not appear to be
interested in building up networks of non-profit NAPs similar to the existing NAPs run by
ISP associations. Rather, housing companies seem to focus on offering private peering
opportunities at their housing facilities and on establishing private Internet exchange
points on a for-profit basis, with their own individual pricing schemes for such services.
Cooperation between NAPs
Today, existing NAPs that are run by non-profit ISP associations cooperate closely on a
Europe-wide level. This cooperation has been assisted by the establishment of the
association known as EURO-IX, which was formed in spring 2001. The aim of EURO-IX
is to establish a NAP membership for ISPs that covers the connection to several of the
important European NAPs. It has been suggested that in future a membership fee may
be introduced that covers the connection to several or all of the EURO-IX NAPs. Table
6-4 shows the founding members of EURO-IX.
Table 6-4: Founding members of EURO-IX (as of May 2001)

Member IXP | Location | Non-profit
AMS-IX | Amsterdam, Netherlands | y
BNIX | Brussels, Belgium | y
DE-CIX | Frankfurt/Main, Germany | y
LINX | London, UK | y
MIX-Milan | Milan, Italy | y
SE-DGIX (Netnod) | Stockholm, Sweden | y
VIX | Vienna, Austria | y

Source: EURO-IX
Seven of the 13 most important European NAPs are members of the new association.
The membership of EURO-IX is predicted to grow in the future. NAPs interested in
membership include AIX in Athens, CIXP in Geneva, PIX in Lisbon, LIX in Luxembourg,
NDIX in the Netherlands (Enschede), ESPANIX in Madrid, MIX-Malta and XchangePoint
in London (a new for-profit enterprise founded by former LINX managers).
6.1.5 U.S. Internet backbone providers at non-U.S. international NAPs
Connections outside the U.S. and Europe
In 2000, outside Europe and the U.S., American Internet backbone providers peered
only at a few international NAPs.155 Two of these NAPs are located in Canada, and
four in Asia:
• CANIX, Toronto, Canada,
• TORIX, Toronto, Canada,
• JPIX, Tokyo, Japan,
• NSP/IXP II, Tokyo, Japan,
• HKIX, Hong Kong, China,
• Taipei, Taiwan.
155 See Boardwatch 2000, pp.32-151.
In 2001 KINX in Seoul, Korea, was also used by U.S.-IBPs. The HKIX in Hong Kong
gained 4 additional U.S.-IBPs. For reasons discussed below, the NSP/IXP II in Tokyo,
the TORIX, Canada and the Taiwan-NAP were no longer frequented by American
backbone providers.156
Connections to European NAPs
Boardwatch (2000) identifies 39 U.S. Internet backbone providers. Only 10 peered at
more than one of the following 18 international NAPs in Western Europe.157 Four of
these NAPs are located in Germany, three in the UK and three in France:
• AMS-IX, Amsterdam, Netherlands,
• BNIX, Brussels, Belgium,
• CIXP, Geneva, Switzerland,
• DE-CIX, Frankfurt/M., Germany,
• DFN, Berlin, Germany,
• DIX, Lyngby, Denmark,
• Ebone, London, UK,
• Ebone, Munich, Germany,
• ESPANIX, Madrid, Spain,
• LINX, London, UK,
• MAE-FFM, Germany,
• MAE-Paris, France,
• MaNAP, Manchester, UK,
• MIX, Milan, Italy,
• PARIX, Paris, France,
• SE-DGIX, Stockholm, Sweden,
• SFINX, Paris, France,
• VIX, Vienna, Austria.
156 See Boardwatch 2001, pp. 30-132.
157 See Boardwatch 2000, pp.32-151.
In 2001 some of these NAPs lost their importance for U.S. backbone providers. Of the
36 U.S. backbone providers identified by Boardwatch (2001) (three fewer than in 2000),
none peered at the following NAPs: BNIX in Belgium, DFN in Germany, MaNAP in the
UK, Ebone London, Ebone Munich, and SE-DGIX.158 Information from 2001 shows
that U.S. IBPs are connecting at the following European NAPs:
• AIX, Athens, Greece,
• BIX, Budapest, Hungary,
• FICIX, Helsinki, Finland,
• MAE Paris, Paris, France,
• NIX, Oslo, Norway,
• PIX, Lisbon, Portugal,
• SIX, Bratislava, Slovakia,
• SWISSIX, Zurich, Switzerland.
The changes may reflect the growing importance of Internet services in Southern
Europe and especially Eastern Europe.
In 2001 significant changes occurred regarding the Internet backbone providers
connecting at European NAPs. AT&T established 15 connections where it previously
had no connections of its own, and Cable & Wireless also became a new player in
Europe with 4 NAPs. Qwest connected at 10 NAPs (+9 compared to 2000). Level 3
became a member at 9 NAPs (+5). In comparison, only a few providers gave up their
NAP memberships. Multa, ServInt, Teleglobe, and Winstar reduced their connections
by one. Only Lightning Internet Services gave up all of its 6 connections in Europe
between 2000 and 2001.
6.1.6 Additional features characterising a NAP
The preceding sections have made clear that NAPs differ with respect to legal,
organisational and geographical aspects as well as the number of member-ISPs. Other
features which characterise NAPs and which appear important are as follows:
• The type of member-ISPs (e.g. as regards the size of the connected network);
• The established peering agreements between members, and
• Technical features (e.g. capacities at ports or within the LAN of a NAP).
158 Sometimes the changes might have their reasons in a NAP's peering policy. For example, at the
Stockholm exchange members have to allow transiting.
Some for-profit Internet exchange points regard all information about their customers as
confidential, including the number of members. Moreover, it is common for information
about the size of the connected network not to be provided. The same is true for the
capacity at ports. Websites of public NAPs rarely offer detailed information about
capacities and traffic. In some cases aggregated traffic information of the NAP relating
to a day, a week or the past year exists. However, traffic information is not made
public by all of the important U.S. and European NAPs we have identified. Likewise,
concrete peering agreements at NAPs are classified as confidential information by
almost all NAP operators and their member ISPs.
6.2 Main Internet backbone players
In this section we identify important operators of Internet backbone connections and
analyse supply-side structures in the Internet backbone market. As in the previous
section we focus on the European and North American continent.
6.2.1 Basic features of the empirical approach
In analysing arrangements for traffic exchange on the Internet we focus on the most
important public NAPs in North America and on the international NAPs in Europe that
are of global importance.159 Our assumption is that there is a strong preference for
important Internet backbone providers to get connected to the most important NAPs in
North America and Europe. Thus, an Internet backbone provider can be considered as
important if it is connected to a pre-specified number of important NAPs in the
worldwide market.
In the following we initially concentrate on the European and then on the U.S. market.
With respect to Europe we will proceed from a bottom-up perspective examining the
international nodes of global importance representing a strategic position in the Internet
backbone value chain. Concerning the U.S. market we will follow a top-down approach,
i.e. we will rely on the empirical studies of the Internet backbone market contained in
Boardwatch (2000, 2001).
159 For simplification, in the following analysis we call these NAPs "important NAPs".
6.2.2 Internet Backbone Providers in Europe
Our analysis takes into account the entire list of ISPs connected to the European
international NAPs of global importance. This list comprises 760 NAP-members (as of
May 2001) encompassing a variety of companies like well-known incumbents, entrants,
online service providers, cable companies and many other enterprises of unknown
classification. Most of the 760 are European companies, although there are a number
that come from North America and Asia.
Many providers are connected through subsidiaries or affiliates to the NAPs. To make
the analysis easier we have firstly allocated subsidiaries or affiliates to the respective
parent company, which we refer to as a holding entity.160 Secondly, we have only taken
account of a holding entity if altogether the companies allocated to the holding have
connections to at least four NAPs (threshold value).161 Following this procedure yields
a total of 39 providers operating a variety of connections to international nodes of global
importance in Europe (not necessarily to different nodes)162. If we take the main
business focus and the country of origin as qualifiers these companies can be assigned
to the following categories:
• European incumbents (Belgacom, BT, Deutsche Telekom, France Telecom, KPN,
Sonera, Swisscom, Tele Danmark, Telefonica, Telenor, Telia);
• Companies which started as entrants into the European telephony market (the then
Mannesmann, Tele2);
• Backbone infrastructure providers and/or IBPs with headquarters in North America
(AT&T, Metromedia Fiber Network, Level 3, PSInet163, Teleglobe, WorldCom, XO
Communications);
• Backbone infrastructure providers and/or IBPs with headquarters in Europe (Cable
& Wireless, Carrier1, Colt, Ebone/GTS, Energis, Iaxis164, KPNQwest, Tiscali, UPC);
160 We have assigned a company to another company if the latter owns at least a 51% share of the
former. The latter company is then called parent company.
161 An example will clarify this: Cable&Wireless (viewed as a group) appears 32 times in the data base.
With respect to the different NAPs the parent company and the subsidiaries/affiliates of C&W have 6
connections to AMS-IX, and 5 connections to LINX.
162 An example might be useful: At the exchange point in Milan Colt (viewed as a group) is connected by
its Italian subsidiary Colt Telecom -Italy, but also by Colt International.
163 In 2001 PSINet filed for bankruptcy protection under chapter 11.
164 Carrier 1 announced insolvency in February 2002 and presumably will declare bankruptcy.
Ebone/GTS announced insolvency in 2001. In October 2001 a share purchase agreement was
announced with KPNQwest stating that the latter will acquire GTS Europe, which owns and operates
Ebone and GTS's Central European operating companies. Energis announced in February 2002 that,
with the exception of the UK, it would sell all its foreign participations due to financial problems.
Iaxis has been under administration since September 2000; the acquisition by Dynegy Inc. (US
energy company) was completed in March 2001.
• Companies with a focus on the provision of services for multi-national enterprises
(Concert, Infonet);
• Companies specialised in operating data centres (Globix), and
• Others (ConXion, Easynet, Internext, Jippi, Magde.web, Via Net Works, Wirehub).
Companies which have been omitted due to the above mentioned threshold value are
incumbents (e.g. Matav, Telecom Italia, Eircom), entrants into a national European
telecoms market (e.g. Albacom (Italy), Cegetel (France), Torch Telecom (UK)),
European backbone infrastructure providers (e.g. Interoute), online service providers
(e.g. Yahoo, CompuServe) and companies from Asia (e.g. Korea Telecom, NTT, Pacific
Century Cyberworks, Singapore Telecom).
In order to concentrate effectively on those Internet backbone providers which are most
important from a European perspective, we have introduced an additional criterion: a
company must be connected to at least five European NAPs of global importance. This
reduces the figure of 39 providers to 28 providers, which are shown in Table 6-5.
Table 6-5: Important Internet backbone providers in Europe (minimum of 5 NAP connections to different international nodes of global importance in Europe)

AT&T, British Telecom, Cable & Wireless, Carrier1, Colt, Concert, Deutsche Telekom, Easynet, Ebone/GTS, France Telecom, Globix, KPNQwest, Level 3, Magdeweb, Metromedia Fiber Network, PSInet, Swisscom, TeleDanmark, Tele2, Telefonica, Teleglobe, Telenor, Telia, Tiscali, UPC, Via Net Works, Wirehub, WorldCom

Source: WIK-Consult
Summing up, as at mid-2001 there was only a relatively small group of around 30
providers active at the important nodes in Europe. Besides the large and well known
backbone infrastructure providers and/or IBPs like WorldCom, C&W, Level 3,
KPNQwest, Colt and MFN, there are only a small number of European incumbents with
an extended market presence in Europe, these being BT, DTAG, France Telecom,
Swisscom, TeleDanmark, Telefonica, Telenor, and Telia. In addition, companies like
Tiscali and UPC are showing remarkable market presence at nodes all over Europe.
6.2.3 Internet Backbone Providers in North America
The most important providers in the North American Internet backbone market can be
discovered by referring to Boardwatch (2001). We have concentrated on companies
with at least 4 "major U.S. peer interconnect points" as specified by Boardwatch. The
resulting organisations identified are shown in Table 6-6. In this table we have also
indicated the name of the parent company in case the IBP is a subsidiary or affiliate.
Table 6-6: Important Internet backbone providers in North America as of 2000 (minimum of 4 NAP connections to different important nodes in North America)

Aleron; AT&T (incl. IBM Global Network); Broadwing; Cable & Wireless; CAIS Internet; Cogent Communications; e.spire Communications; Electric Lightwave; Epoch Internet; Excite@Home* (partly AT&T); Fiber Network Solutions; Genuity; ICG Communications; IDT Corp.; Level 3 Communications; Lightning Internet Services; Multa Communications Corp. (Multacom); NetRail (360networks*); One Call Communications; OptiGate Networks; PSInet*; Qwest Communications; ServINT Internet Services; Sprint Communications; Teleglobe; Telia Internet**; Verio (NTT); Williams Communications Group*; Winstar Communications*; WorldCom; XO Communications***

Source: Boardwatch (2001)
* under chapter 11 bankruptcy protection (as of April 2002)
** sold to Aleron in 2001
*** under chapter 11 bankruptcy protection (as of June 2002)
Referring to a study by Elixmann (2001) one can draw some conclusions regarding
features of the American Internet backbone market. Firstly, a substantial portion of the
important Internet backbone providers in the USA do not operate their own fibre
infrastructure. Secondly, the incumbent IXCs AT&T, Sprint and WorldCom are all
important IBPs. Thirdly, nearly all of the companies which, according to the length of
their fibre optic networks, are "important" entrants into the market for fibre optic
capacity in the U.S. are also important IBPs. At the time of writing the study this was
true of Broadwing, Cable & Wireless, Level 3, Qwest, Teleglobe, Williams and Winstar.
However, we also note that several players in the North American backbone market are
having financial difficulties. As one can see from Table 6-6, five of the most important
Internet backbone providers in North America (Excite@Home, 360networks, PSINet,
Williams, Winstar) have already filed for bankruptcy protection under chapter 11.
6.3 Internet growth, performance and traffic flows
This section focuses on empirical evidence as regards capacity and traffic flows. First,
we investigate the growth of the Internet route tables; second, the growth of AS
numbers; the third sub-section is devoted to information about the development of
Internet performance; and fourth, we look at traffic data.
6.3.1 Routing table growth
Internet backbone routers are required to maintain complete routing information for the
Internet. Thus, the size of the routing table entries has implications for router memory,
the capacity and speed with which routers can perform calculations on routing table
changes, and the speed with which they can perform their forwarding algorithms.
In the following we focus on data from the web site of Tony Bates, which rests on
daily updates since the beginning of 1994. Daily empirical information on routing table
growth prior to 1994 is not available. However, Semeria (1996) gives numbers for two
points in time: in December 1990 there were 2,190 routes and in December 1992 there
were 8,500 routes. If we compare this with the number of routes at the beginning of
1994, roughly 15,000 (as observed from the figure below), we can state that the number
of routes roughly doubled each year from the beginning of the 1990’s.
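The doubling claim can be checked with a quick back-of-the-envelope calculation from the figures just cited (our own Python sketch, not part of the underlying data set):

    # Route counts: December 1990 and December 1992 from Semeria (1996),
    # early 1994 read off the figure below.
    routes = {1990: 2190, 1992: 8500, 1994: 15000}

    growth_90_92 = (routes[1992] / routes[1990]) ** 0.5  # per-year factor over two years
    growth_92_94 = routes[1994] / routes[1992]           # factor over roughly one year
    print(round(growth_90_92, 2), round(growth_92_94, 2))  # ~1.97 and ~1.76, close to doubling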
Figure 6-3 displays the development of the number of routes in the Internet between
January 1994 and November 2001.165
The graph’s zero point on the x-axis represents January 1, 1994 and numbers along the
x-axis denote the days passed since then. Thus, the value 1,000 roughly corresponds
to September 1996, the value 1,500 roughly corresponds to February 1998, the value
2,000 roughly corresponds to mid-1999, and the value 2,500 roughly corresponds to
November 2000.
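The day offsets translate into calendar dates as follows (a small Python check of the correspondences just given):

    from datetime import date, timedelta

    ORIGIN = date(1994, 1, 1)  # day 0 of the x-axis in Figure 6-3
    for day in (1000, 1500, 2000, 2500):
        print(day, ORIGIN + timedelta(days=day))
    # 1000 -> 1996-09-27, 1500 -> 1998-02-09, 2000 -> 1999-06-24, 2500 -> 2000-11-05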
165 The reader is reminded that a route is a path to an IP address block. Further empirical data on route
table growth is available from Geoff Huston. See also Marcus (2001a).
Figure 6-3: Development of the number of routes on the Internet between January 1994 and November 2001
Source: http://www.employees.org/~tbates/cidr.hist.plot.html
Visual inspection of the graph makes clear that Internet route table growth has gone
through different phases. Between 1994 and roughly day 1,750, i.e. around the end of
1998, there were essentially fluctuations around a rising linear trend.166 Within this
five-year period the absolute number of routes in the Internet tripled, yielding a CAGR
of roughly 25%. In the period from day 1,750 to day 2,400, i.e. between the end of 1998
and the middle of 2000, the growth path exhibits S-curve behaviour: there is an initial
phase in which the growth rates increase, followed by a phase in which they
decrease.167 The point of inflection is located around day 2,200, i.e. at the beginning
of 2000. Route table growth on the Internet thus appears highly correlated with the
Internet boom of the late 1990’s, the end of which can be identified with the downturn
of the stock market in March 2000. From mid-2000 until November 2001 the growth of
the Internet route table seems to have slowed. Although there are significant short-term
outliers, i.e. short phases in which the absolute number of route table entries increases
and then decreases, recent data still shows that the underlying trend remains positive;
however, recent growth is not as strong as from 1998 to 2000.
166 Using econometric methods (e.g. stepwise regression) one could presumably identify further
sub-periods with distinct growth behaviour. We will, however, not follow this approach here because
we are only interested in stylised facts.
167 Obviously, the absolute number of route table entries is increasing throughout this period; however,
the size of the increase, i.e. the second derivative in mathematical terms, varies over time.
Overall, the number of route table entries has roughly doubled in the nearly three year
period from 1999 until November 2001, when there were about 105,000 route table
entries.
6.3.2 AS number growth
Figure 6-4 gives an overview of the development of the number of ASes assigned
between October 1996 and November 2001.168
Figure 6-4: Development of the number of ASes on the Internet for the period October 1996 through November 2001
Source: http://www.employees.org:80/~tbates/cidr.as.plot.html
168 The graph is based on data from Tony Bates obtained from Marcus (2001b), who also presents
empirical data on AS number growth from other sources. The graph is updated on a daily basis.
6.3.3 Internet Performance
This section concentrates on performance data. The information is taken from a publicly
available source, namely Matrix.Net.169
Basic features of the methodology
Firstly, Matrix.Net claims that ISPs are chosen based on the proportion of global routes
each ISP provides, and that the destinations are chosen in a way that is representative
of the network. Secondly, devices called beacons (a beacon is a computer) running
Matrix.Net's proprietary software periodically conduct a scan,170 typically every 15
minutes, 24 hours a day, seven days a week. During a scan, a beacon sends data to
each of the destinations of an ISP and records the time it takes to receive a
response.171 Thirdly, the network data centre of Matrix.Net periodically pulls the stored
information from each beacon and evaluates it.
The Matrix.Net web site contains continuously updated ratings of many ISPs in the
world, providing daily, weekly and monthly performances.172 The daily and hourly
results are calculated as medians across all the destinations. Weekly and monthly
results are calculated as means of daily results.173
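A minimal Python sketch of this aggregation scheme (our own illustration with invented sample values, not Matrix.Net's code): daily results as medians across destinations, weekly results as means of the daily medians:

    from statistics import mean, median

    # Hypothetical round-trip times (ms), one value per destination and day.
    daily_scans = {
        "Mon": [120, 95, 300, 110],
        "Tue": [105, 98, 250, 115],
        "Wed": [130, 90, 900, 100],  # one extreme outlier; the median barely moves
    }

    daily = {day: median(samples) for day, samples in daily_scans.items()}
    weekly = mean(daily.values())  # weekly result: mean of the daily medians
    print(daily, round(weekly, 1))

Using the median at the daily level keeps a single pathological destination from distorting the rating, which is precisely the robustness argument given in footnote 173.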
Performance indicators
Three different performance metrics are calculated by Matrix.Net:
• latency,
• packet loss, and
• reachability.
These metrics are most significant from the end-user's perspective as regards the
individual quality of service. Subsequently, we will focus on calculations of Matrix.Net
which give a historical overview of the development of these indicators.174 In the
following three figures five different graphs are plotted denoted as Internet, WWW,
DNSTLD, NAP1000 and NAP100. "Internet" focuses primarily on traffic to global
backbone routers and "WWW" focuses on traffic to web servers. "DNSTLD" refers to
169 Matrix.Net is a company that provides carriers and ISPs with performance metrics aiming at enabling
them to write better QoS-based service level agreements with their customers.
170 Beacons are external to the network being measured.
171 Beacons use a specific protocol when they scan. However, protocols like the File Transfer Protocol,
the Domain Name Service and the Simple Mail Transport Protocol are also used to measure traffic
performance.
172 Results cannot be presented here and the interested reader is referred to the web-site:
http://ratings.matrixnetsystems.com/.
173 The reason why the median is used for daily and hourly calculations is simply to make the calculations
more robust against outliers.
174 The reader is referred to: http://www.matrix.net/research/history/decade.html.
Internet traffic exchange and the economics of IP networks
89
traffic related to Top Level Domain servers of the Domain Name System. The "NAP
100" list focuses on traffic to nodes that had more than 100 but fewer than 1,000 paths
through them at the time the list was created. The "NAP 1000" list measures nodes
that have more than 1,000 paths through them.175 Unfortunately, the list has not
been updated since September 2000.
Latency
Latency, or lag time, refers to the round trip delay between the time a beacon sends a
packet to a destination and the time the beacon receives a response packet. Latencies
are only computed for packets that receive a response. In the latency graph the x-axis
denotes time of measurement and the y-axis denotes milliseconds measured.
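In the same spirit, round-trip delay can be approximated with standard tools; the following Python sketch (not Matrix.Net's proprietary beacon software) times a TCP handshake as a rough latency probe:

    import socket
    import time

    def rtt_ms(host: str, port: int = 80, timeout: float = 2.0):
        """Rough round-trip estimate: time to complete a TCP handshake."""
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                pass
        except OSError:
            return None  # no response within the timeout
        return (time.perf_counter() - start) * 1000.0

    print(rtt_ms("www.example.com"))  # e.g. 145.3 (milliseconds), or None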
Figure 6-5: Development of latency on the Internet between 1994 and Sept 2000
Source: http://www.matrix.net/research/history/decade.html
Figure 6-5 shows a downward trend in latency over the observation period. At the
beginning of the period latency on the Internet was much higher (400-500 milliseconds)
than at the end (around 150 milliseconds). However, the graph also shows that there
are pronounced fluctuations around this trend. These include seasonal effects within a
year.176 Both with respect to traffic going to the global backbone routers and with
respect to traffic going to the WWW servers, latency has been significantly lower since
about 1998. 1997 was a year with big outliers on both sides, with latency relatively
stable at around 300 milliseconds until the end of July. In the beginning of August 1997,
however, it was very low for a short period, while in September and November of 1997
latency reached peaks of more than 500 milliseconds. Latency with respect to NAP 100
and NAP 1000 traffic, as well as latency with respect to DNSTLD servers, has been
measured for a much shorter period (mainly only for the year 2000). The graphs show
that, as measured at the end of the observation period, latency with respect to DNSTLD
servers is higher (slightly below 200 milliseconds) and latency with respect to NAP
100/1000 traffic is lower (around 80 milliseconds) than with respect to the other
categories. The difference between the latency of NAP 100 and NAP 1000 traffic is
negligible.
175 The number of paths relates to a feature of the BGP-4 protocol, see Minoli and Schmidt (1999). The
number of paths in this context can be interpreted as a measure of how well a node is linked to the
Internet, i.e. the higher the value the more easily the node is accessible. In the subsequent graphs the
figures in brackets denote the number of destinations taken into account.
176 This can be seen in more detail by clicking on the respective graphs for a single year within the
period.
Packet loss
Packet loss is the percentage of packets sent to a destination that do not elicit
corresponding return packets. In Figure 6-6 the x-axis denotes time and the y-axis the
percentage of packet loss.
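Expressed as a formula, the computation is simply the share of unanswered probes (a sketch of our own, not Matrix.Net's code):

    def packet_loss(sent: int, responses: int) -> float:
        """Percentage of probe packets that did not elicit a return packet."""
        return 100.0 * (sent - responses) / sent

    print(packet_loss(sent=100, responses=95))  # -> 5.0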
Figure 6-6: Development of packet loss on the Internet between 1994 and Sept 2000
Source: http://www.matrix.net/research/history/decade.html
Packet loss, with respect to the Internet backbone as well as with respect to the WWW,
remained relatively stable at between 20% and 30% between 1994 and the end of 1998.
Since 1999 packet loss has declined sharply, and at the end of the observation period it
was around 5%. Packet loss with respect to NAP 100 and NAP 1000 traffic, as well as
packet loss with respect to DNSTLD servers, has been measured only for a much
shorter period (mainly for the third and fourth quarters of 1999). The graphs show that
at the end of the observation period packet loss with respect to DNSTLD servers was
higher (around 16%) and packet loss with respect to NAP 100/1000 traffic was lower
(below 5%) than with respect to the other categories. On average, packet loss as
regards NAP 1000 traffic is lower than packet loss regarding NAP 100 traffic, although
there are some pronounced outliers.
Reachability
Figure 6-7: Development of reachability on the Internet between 1994 and Sept 2000
Source: http://www.matrix.net/research/history/decade.html
A destination is counted as reachable if it can be reached on the Internet; the test is
whether it responds to at least one of the packets sent to make this assessment. Per
scan, reachability is expressed as the percentage of destinations that responded to at
least one packet. In Figure 6-7, the x-axis denotes time and the y-axis denotes
percentage points.
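A per-scan computation might look as follows (a sketch under our reading of the definition above; the destination names are invented):

    def reachability(replies_per_destination: dict) -> float:
        """Per scan: percentage of destinations answering at least one probe."""
        reached = sum(1 for n in replies_per_destination.values() if n > 0)
        return 100.0 * reached / len(replies_per_destination)

    print(reachability({"a.example": 3, "b.example": 0, "c.example": 1}))  # ~66.7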
Figure 6-7 shows that reachability remained relatively stable at a level of 70% to 80%
between 1994 and the end of 1998. Since the beginning of 1999 there has been a very
large improvement in reachability; as measured at the end of the observation period, it
is higher than 95%. This is true both with respect to traffic to global backbone routers
and with respect to traffic to WWW servers. Reachability with respect to NAP 100 and
NAP 1000 traffic, and reachability with respect to DNSTLD servers, has been measured
only for a much shorter period (mainly only since the third and fourth quarters of 1999).
The graphs show that at the end of the observation period reachability with respect to
DNSTLD servers is lower (between 85% and 90%) and reachability with respect to
NAP 1000 traffic is on average slightly higher (95%-100%) than with respect to the
other categories. Reachability as regards NAP 100 traffic is more or less equal to
reachability as regards the Internet and WWW traffic.
Summing up, the three graphs show that the performance of the Internet has made
considerable progress in the past 7-8 years. Latency and packet loss have both
decreased, and reachability has significantly increased.
6.3.4 Internet Traffic
Estimates of Internet traffic for specific organisations can normally be obtained fairly
easily. It is usual for organisations with their own networks (e.g. ISPs) to publish data
from their core routers continuously over the day. This enables the data to be
aggregated into daily or weekly load curves. An example is given in the following two
graphs (Figures 6-8 and 6-9), which contain data from the CERN network in Geneva.
These graphs show that both daily and weekly traffic exhibit clear-cut peaks and
troughs. Unfortunately, the load curves of those IBPs we interviewed were treated as
confidential.
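The aggregation into load curves is straightforward; a minimal Python sketch (with invented five-minute throughput samples, not CERN's data) averages samples per hour of day:

    from statistics import mean

    # Hypothetical five-minute throughput samples (Mbit/s), keyed by (day, hour).
    samples = {
        ("Mon", 3): [12, 14, 11],    # night trough
        ("Mon", 14): [85, 90, 88],   # afternoon peak
        ("Tue", 3): [13, 12, 15],
        ("Tue", 14): [92, 95, 89],
    }

    # Average daily load curve: mean throughput per hour of day across days.
    hours = sorted({hour for _, hour in samples})
    load_curve = {h: mean(v for (day, hour), values in samples.items()
                          if hour == h for v in values) for h in hours}
    print(load_curve)  # clear peak/trough structure, as in Figures 6-8 and 6-9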
Figure 6-8: Daily traffic of CERN
Source: http://sunstats.cern.ch/mrtg/ethernet-isp.html
Figure 6-9: Weekly traffic of CERN
Source: http://sunstats.cern.ch/mrtg/ethernet-isp.html
For the remainder of this subsection we discuss traffic growth on the Internet. This has
been a topic of discussion in the Internet community for several years since, among
other things, it has implications for addressing, routing and network planning.177 We
will not be able to reach a final conclusion here; rather, we highlight the discussion.
During the late 1990’s there was a rule of thumb which said that Internet traffic doubled
every 90 days. Roberts (2001) mentions that since the Internet began growing
aggressively in 1997 core ISPs had been experiencing an average per annum traffic
increase across the core of 2.8 times, spurred by mainstream interest in the web. Lynch
(2001), however, is very sceptical about these figures. He reports findings by AT&T
Research Labs which concluded that Internet demand was more likely to double every
year. In particular, the research findings indicate that there was no significant network
that was increasing in size by more than 150% annually. In addition, Lynch mentions
that another study indicated that the number of "active Internet users" in the U.S.
(defined as those who access the Internet at least once a week) "declined by some
10% or 7 million people between January and October 2000".178
The study by Roberts (2001) reaches a very different conclusion. His focus is on
Internet traffic in the U.S. His findings suggest that the Internet is not shrinking, nor
does it appear to be slowing in its growth. In fact, the measurements by Roberts
suggest traffic on the Internet has been growing faster as time goes by, increasing as
much as four times annually through the first quarter of 2001. Roberts' data shows
traffic has been doubling every six months on average across core IP service providers'
networks, or in other words, growing by four times annually. Thus, Roberts' findings run
counter to those suggesting that the growth rate of Internet traffic has slowed
recently.179
177 We discuss the public policy interest in these issues in Chapter 8.
178 See Lynch (2001, p. 4).
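The competing rules of thumb are easier to compare when converted to a common annual growth factor (a small Python check of the figures quoted above):

    def annual_factor(doubling_days: float) -> float:
        """Annual traffic multiplier implied by 'doubling every N days'."""
        return 2 ** (365 / doubling_days)

    print(round(annual_factor(90), 1))     # ~16.6x per year: the 90-day folklore
    print(round(annual_factor(182.5), 1))  # 4.0x per year: Roberts' six-month doubling
    print(round(annual_factor(365), 1))    # 2.0x per year: the AT&T Labs finding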
6.4 About Peering
6.4.1 Peering policies
IBPs who are engaged in peering with other parties usually establish a variety of
conditions in their peering contracts. Unfortunately, the detailed requirements making
up a particular peering contract of an IBP are not usually published. Likewise, a
comprehensive overview of an IBP’s set of peering partners and the scope of IP
addresses that they are making available to each other is usually not published.
However, many IBPs publish more or less detailed peering guidelines which form the
basis of their peering contracts. Henceforth we call such guidelines the peering policy of
an IBP.
In this section we assess peering policies of a sample of major U.S. and European
based Internet backbone providers. We take account of the following operators:
Broadwing, Cable & Wireless, Electric Lightwave, France Télécom, Genuity, Level 3,
and WorldCom. The empirical data on which this section is based was taken from
publicly available information on the respective web sites of the companies.180
We highlight the main features of the peering policies of the carriers we have
investigated and assess the commonalities and differences between them. For a
detailed overview of the main peering features of these carriers the reader is referred to
the tables in Annex C.
Two general impressions deserve mention at the outset:
• There are Internet backbone providers whose guidelines for private peering are different from their guidelines for public peering (e.g. Level 3, France Télécom) and there are providers for which they are the same (e.g. C&W, WorldCom).
• The peering policy of an IBP need not be the same across all the networks (more precisely: all Autonomous Systems) it operates. Rather, there can be different policies, or at least different requirements, for different networks. Peering guidelines often refer to different geographic regions. In our sample, the main regions which are usually demarcated are the US, Europe and the Asia/Pacific area.181
180 We have contacted several more companies and are still waiting for a response from some of them. It seems that some Internet backbone providers are reluctant to publish their peering guidelines. We were told, for example, that the German branch of BT Ignite does not publish peering guidelines for Germany.
A more detailed analysis of the peering guidelines of the carriers in our sample reveals
several different characteristics. Generally, the requirements for peering partners refer
to the following categories:
• peering link features;
• routing;
• network infrastructure, and
• operational and informational features.
Peering links which are to be established between peering parties are usually specified
according to their number and location. Peering links normally have to be
geographically dispersed, i.e. specific cities or NAPs are defined where peering has to
take place. There are often further requirements as to minimum traffic volumes which
have to be exchanged over the peering links and the minimum bandwidth of the peering
links. Traffic exchanged is often further specified by limits on traffic imbalances.182 We
observed service level requirements with respect to packet loss, delay and availability,
these being QoS features discussed in Chapter 4.
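By way of illustration only, the sketch below shows how quantitative link criteria of this kind might be screened. The minimum volume and the 1.5:1 imbalance limit are invented figures, not taken from any of the policies in our sample:

```python
# A minimal sketch of how an IBP might screen a peering requestor against
# quantitative link criteria of the kind described above. The thresholds
# and traffic figures are invented for illustration; real peering policies
# state their own limits.

def meets_link_criteria(in_mbps: float, out_mbps: float,
                        min_volume_mbps: float = 155.0,
                        max_imbalance: float = 1.5) -> bool:
    """Check the minimum exchanged volume and the traffic imbalance ratio."""
    total = in_mbps + out_mbps
    if total < min_volume_mbps:
        return False  # not enough traffic exchanged to justify peering
    ratio = max(in_mbps, out_mbps) / max(min(in_mbps, out_mbps), 1e-9)
    return ratio <= max_imbalance  # e.g. no worse than 1.5:1 either way

print(meets_link_criteria(in_mbps=120.0, out_mbps=100.0))  # True
print(meets_link_criteria(in_mbps=300.0, out_mbps=50.0))   # False: 6:1 imbalance
```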
Routing requirements typically include the need for consistent route announcements
and network operation using the CIDR addressing scheme at edge routers under the
BGP-4 protocol.183 The consistency of route announcements implies that both parties
have to announce the same set of address information at all peering points. As a
consequence the party requesting peering can be obliged to filter routes at its network
edges so that only pre-registered routes can be forwarded according to the peering
agreement.
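The following toy example (our own; the prefixes and exchange-point names are invented) illustrates what consistent route announcements require, namely that the same set of prefixes is announced at every peering point:

```python
# A toy illustration of the "consistent route announcements" requirement:
# a peer must announce the same set of prefixes at every peering point.
# The prefixes and peering-point names below are invented.

announcements = {
    "NAP-A": {"192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24"},
    "NAP-B": {"192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24"},
    "NAP-C": {"192.0.2.0/24", "198.51.100.0/24"},  # one prefix missing
}

reference = set.union(*announcements.values())  # everything announced anywhere
for point, prefixes in announcements.items():
    missing = reference - prefixes
    if missing:
        print(f"{point}: inconsistent, missing {sorted(missing)}")
    else:
        print(f"{point}: consistent")
```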
The requirements for the requestor's network infrastructure are usually defined according to the following features:
• size of network (such as with respect to the number and location of network nodes);
• bandwidth of backbone circuits, and
• network topology.
Internet backbone providers usually require the requestor's network to be of roughly the same size as their own network. This applies particularly to the peering region in which the requestor is asking for peering. For this purpose, some Internet backbone providers define sub-areas of a peering region where the requestor is required to have nodes. For network efficiency purposes the topology184 of the requestor's network is relevant as well. In most cases, a redundant and meshed backbone network is required.
181 Some providers offer peering on a country specific basis (e.g. Cable & Wireless, France Télécom, WorldCom).
182 Usually, particular percentage quotas are defined.
183 See section 2.3 for more details.
Operational requirements typically include the provision of a network operations centre working 24 hours a day, 7 days a week, to handle peering or congestion issues. Finally, the peering requestor is required to have its routes registered with the Internet Routing Registry or another registry, and to reveal various kinds of more or less formal, network-specific information (e.g. information on existing interconnection links).
Comparing the different peering guidelines of the Internet backbone providers in our
sample we can conclude as follows:
• Most peering guidelines include a variety of minimum requirements which have to be met by the peering requestor. This applies above all to peering link features and network infrastructure requirements.
• Routing, operational and informational requirements are often very similar.
• Differentiated requirements for private and public peering can be observed, especially in regard to peering link features and network infrastructure requirements.
6.4.2 New peering initiative in the U.S.
During our interviews it was reported that Tier-1 ISPs in the U.S. (e.g. UUNet, Qwest, AT&T, Level 3, Sprint, Genuity) have recently agreed on a new network peering arrangement which will allow them to exchange traffic at a limited set of predefined locations. This clearly has similarities with the existing NAP model.185
The initiative appears intended as a rationalisation of current private peering (which involves about 80% of total traffic in the U.S.).186 Private peering is currently carried out in a very decentralised manner, usually in a city where both partners already have a PoP (not necessarily at the same premises). Apparently, connecting directly, and in separate places, with every other Tier-1 ISP is considered too costly, presumably because this model lacks scalability. The central feature of the new model is the collocation facilities that are being established. There are already 4 sites in use (1 at Equinix and 3 at Level 3) and the partners aim to establish 8 collocation spaces in the U.S. Every partner is using dark fibre to connect to the newly established facilities. No one will be excluded from interconnecting at these locations; however, there is no obligation for any ISP to peer at these facilities. As well as large ISPs, large content providers like Yahoo will likely also connect to these points, probably to three or four large ISPs, and switch traffic between them using their own routers.
184 The term "topology" shall denote the way a network is designed (e.g. a meshed structure, a tree structure etc.).
185 According to a senior technology manager, traditional (existing) NAP locations are unattractive for Tier-1 ISPs. He described them as costly and not providing any value to Tier-1 ISPs.
186 See Gareiss (1999).
6.5 Capacity exchanges
Since the late 1990s, electronic wholesale markets for telecommunications services, including both voice and data communications and in particular IP services, have been established all over the world. The number of operators is thought to be around 10.187
Lehr and McKnight (1998) point out that the rationale for wholesale capacity markets is
inherent in IP technology. The authors emphasize that "IP technology is unique in its
ability to provide a spanning layer that supports the flexible integration of multiple
applications (data, voice, or video) on a variety of underlying facilities based
infrastructures (ethernet, leased lines, ATM, frame relay, etc)."
Thus, the IP protocol provides the opportunity for vertical disintegration, i.e. IP permits
a separation of infrastructure provision from the provision of applications.188 This
suggests that IP favours the entry of new types of non-infrastructure based service
providers who need to purchase their required transport capacity ("bearer services")
from infrastructure providers (these may employ different infrastructure technologies).
Theoretically, one might expect capacity exchanges to provide a broad portfolio of
services. Lehr and McKnight (1998, p.97), for example, mention short-term spot
purchases and sales of bandwidth, switched transport, interconnect minutes,
transponder time, transponder bandwidth, as well as the purchase and sale of long-term
capacity such as leased lines, bulk transport, IRUs, transponders, forward contracts and
interconnect capacity at PoPs, and they conclude that offers might come to include
derivatives and other products.
Empirical evidence, however, shows that so far the main services traded at capacity
exchanges are bandwidth and switched and IP telephone minutes.189,190
187 Gerpott and Massengeil (2001) have identified 11 operators of private electronic market places for
telecommunications capacity worldwide. Lynch (2000b) mentions 10 different operators.
188 This is in sharp contrast to the traditional circuit-switched networks, which are driven by intelligence in the network. The telephone network consists of two primary components: the transport function and the signalling or control function, and the first cannot act without the second. In this regard Denton (1999) comes to the conclusion that functions to be added to the network are defined by the owners of the network and limited by the nature of the network. Denton concludes that a "telephone company's value proposition is governed by the simple idea that services are added to the network's repertoire exclusively by the telephone company", see Denton (1999, section 1.4).
189 See Gerpott and Massengeil (2001), Table 1.
190 Bandwidth is traded for different routes, e.g. New York to Los Angeles. Bid and offer prices are usually quoted in US$ per mile per month. Bandwidth is traded for different periods in the future.
We think it is fair to say that up until the beginning of 2002 capacity exchanges have not
played an important role in the market, in contrast to what was predicted by several
industry commentators in the second half of the 1990s.191 This statement is underlined
by the recent bankruptcy of Enron.192
There are several reasons why capacity exchanges are still waiting to get off the
ground. Gerpott and Massengeil (2001) report that it is difficult for capacity exchange
operators to motivate carriers, especially the former transmission network monopoly
owners, to participate in electronic capacity market places. In this regard Lynch (2000b, p. 38) reports that, for carriers, confidence that the market is not biased against them plays a crucial role, i.e. the neutrality of the pooling point seems to be critical.
Other arguments relate to the need for the participants to have confidence in the functioning and integrity of the physical delivery mechanism. A further issue is who sets the rules for trading: the exchanges, or the market's buyers and sellers. In particular, there is a question of whether any standardisation is necessary and, if so, how much.193 Gerpott and Massengeil (2001) in addition point out that trust in the quality of the products traded and in the commercial settlement processes are critical determinants of success for the capacity exchange business model.
191 Lynch (2000a) reports that e.g. Arbinet, one of the biggest operators in the U.S., had revenues of under US$ 1 million in 1999. In addition it is mentioned that RateXchange, one of the biggest operators in the world, had carried out a total of about 1,000 trades by mid-2000. In May 2001 we examined deals at several capacity exchanges in the U.S. and found that at some of these exchanges there were consecutive days on which no trades took place at all.
192 Enron's roots were in energy trading; however, it later also aimed at becoming an important player in the telecommunications capacity trading market.
193 Lynch (2000b) quotes an executive of a fibre infrastructure based carrier who says: "No carrier wants
to standardise a competitive offering."
Part II – Public policy
In order for the authorities to place regulations or mandatory requirements on the industry, or to financially sponsor a new Internet technology, it needs to be shown that a market failure is occurring, or is likely to occur, that is costly to society, and that official intervention with a carefully designed scheme can reasonably be expected to provide a more efficient overall outcome. Part II addresses market failure issues relating to Internet traffic exchange.
7 Economic background to Internet public policy issues
7.1 Economic factors relevant in analysing market failure involving the Internet
7.1.1 The basis of public policy interest in this study
In this study we analyse the range of potential public policy concerns relating to the
Internet backbone by examining whether there is a significant market failure problem,
either existing today, or possibly arising in the foreseeable future. A little further below, we set out in brief the arguments in defence of this approach, but before proceeding we note that other rationales exist for intervention by authorities. These can be grouped into three classes, being those based on:
1. message content, primarily decency, privacy and national security;
2. the desire to redistribute resources among different groups in society, or
3. the treatment of something as a merit good (or merit bad), i.e. there is agreement among a substantial sector of the community that people should consume more of something (a good, like cultural goods) or less of something (a bad, like drug taking).194
None of these issues, however, features to a significant degree in this study. There are two main reasons for this: first, these issues are focussed on end-users, and our study is not concerned per se with end-users or the ISPs that serve them; second, these three rationales for official intervention are mainly based on politics, and this study is not concerned with political analysis.
194 We discuss 2 and 3 above, in the context of possible amendments to the scope of 'universal service'
in the EU, in WIK (2000).
Markets have considerable advantages over other forms of economic organisation. They enable economic activity to be organised in units (e.g. firms) in which both information and incentives favour organising firms and production processes around efficient practice, and in which outputs (economic production) are geared toward the needs of end consumers. Market-based (liberal)
economies enable entities like firms to organise people and technology in a way that
generates wealth.195 Non market-based systems are unable to do this nearly as
effectively. The relative failure of the former Soviet-type economies shows that a system
where administrators and officials organise this activity and choose the relative values
of the outputs, is profoundly flawed. Indeed, the relatively wealthy position of North
America, Western Europe, Japan and a small number of former colonies, compared to
the rest of the world, is largely because the political and legal institutions that enable
markets to flourish, are more developed in these countries than in other parts of the
world.
In liberal economies one of the main roles of the state is to provide a legal and
regulatory framework which provides a high degree of certainty in property rights and
enables economic agents (e.g. businesses, state entities, and individuals) to freely
organise and engage in economic activities without fear of confiscation of assets or
persecution. Indeed, these may be the most important factors that differentiate liberal
economies from non-liberal economies.
However, markets are not perfect in the way they organise economic activity. There are
numerous cases where we know that market performance can fall well short of
potential. Indeed, it is through understanding this 'potential' that economists analyse the
performance of industries / markets, types of economic organisation, and regulations.
This means that in analysing the performance of markets, economists will frequently
look at the opportunity cost of market imperfections in terms of the hypothetical situation
in which no market failure occurred.
Even though most markets suffer from imperfections they typically operate more
effectively than they would if they were regulated more closely. The main problem for
the authorities in seeking to improve on existing market outcomes is that although we know markets do not operate perfectly, except for encouraging more competition it is usually not possible to know what can be done to improve their performance – the
authorities rarely have the necessary information about what is going wrong and why. In
a majority of actual markets enough of the advantageous features of market based
competition are retained that regulatory intervention is unable to improve on the
efficiency of the overall outcome.
195 Clearly this system is not perfect, in part because information asymmetries exist between
management and shareholders. This enables managers to pursue agendas that are not in keeping
with the preferences of shareholders, i.e. managers are themselves only weakly controlled by
shareholders and more generally, by the requirements placed on them by law and by equity markets.
The recent bankruptcy of ENRON appears to demonstrate this imperfection.
In some cases, however, markets fail quite seriously. In such cases a state may decide
to provide these goods or services itself, or it may grant firms access to an essential
resource owned by a vertically integrated competitor. In other cases the state may set a
tax or subsidy in order to adjust the economic behaviour of people or organisations in a
way that corrects for the market failure.196 These are cases of state intervention to
correct market failure.197 To understand the rationale for different forms of state
intervention in markets, we briefly discuss the causes of market failure.
There are three basic causes of market failure:
1. Externalities,
2. Market power, and
3. Existing laws and regulations.
We discuss these in turn as they concern the Internet backbone.
7.1.2 Externalities
Externalities are a benefit or cost, caused by an event, but not taken into account in the
decision to go ahead with the event. Externality benefits occur when, as a by-product of my doing something, you are made better off. One such example is a bee keeper who assists nearby farmers by unintentionally providing them with a pollinating service. A daily event which involves externality costs is second-hand cigarette
smoke, long term exposure to which (i.e. passive smoking) has been shown to
significantly increase the risk of lung disease. The health cost of passive smoking is an
externality cost. With all events that entail externality costs, those that choose to go
ahead with the event (i.e. those who ‘benefit’ from it) are in part able to do so because
they do not take full account of the costs (the externality) they are imposing on others.
In the case of the Internet, there are at least 3 types of network externality that appear to be significant and may affect industry structure and the development of the Internet:
(i) direct network externality benefits;
(ii) indirect externality benefits, and
(iii) congestion externality costs.
196 Examples are pollution taxes (e.g. tradable carbon rights), or a per-passenger tax concession to a railway transport/passenger service because of the advantages it provides over other forms of transport in terms of pollution, congestion, and public health and safety.
197 There are other well documented reasons that the state or its agencies may intervene, such as
regulatory capture and political favouritism, but we think all of these can be fitted into 2. above.
We discuss these in turn. Firstly, however, we consider why network effects in the Internet are of interest to public policy officials concerned with public welfare and the economics of Internet traffic exchange. They matter for several reasons:
• In a (tele)communications network there are significant network benefits, such that if entirely unregulated, a single network operator will likely achieve a national or possibly global monopoly.198 There are two principal means by which this will occur:
- through mergers and acquisitions, and
- through the normal attrition of competition.
Where merger rules that prevent monopolisation apply (such rules normally form part of competition law, which is a form of regulation), monopolisation by merger / takeover is typically ruled out. Denial of interconnection as a strategy to monopolise is typically also a breach of competition law.199
• They are important to transit providers (or IBPs) when deciding on their competitive strategies regarding competitor IBPs and transit-providing ISPs. We shall see below that, depending on market circumstances, they can provide either a strong incentive for co-operation with rivals in planning and building interconnection links and in developing standards that enable seamless interconnection between networks, or can lead to non-co-operation in interconnection or standards, suggesting an increasing level of market concentration, possibly also vertical integration, and the fragmentation of the Internet.
• They have implications for the optimal pricing behaviour of ISPs, and we argue in this report that technical developments are needed that allow for new pricing structures that can provide for more effective congestion management, such as GoS pricing. This, we suggest, will greatly contribute to ushering in the Next Generation Internet.200
198 In the EU most incumbents obtained their monopoly position through legal decree and not through competition and takeovers aimed at capturing network externalities.
199 Where a firm has already obtained a certain level of market power, such as when network externalities are important and a firm has captured a sufficient level of these to tip competition in its favour, competition rules alone are unlikely to provide an adequate anti-monopoly safeguard. In part this is due to the courts' (and usually also the competition authority's) skills being in assessing qualitative rather than quantitative evidence. More generally, in markets that lack competition, competition law tends to be an inadequate instrument for opening up markets to competition.
200 While the terms of reference of this study do not cover subscriber pricing issues per se, we have noted earlier in this report that the structure of prices needs to be similar at both retail and wholesale levels if the industry is to operate efficiently. Firms typically require unusually high returns if they are to bear high risks, such as those entailed in having fundamentally different wholesale and retail pricing structures.
7.1.2.1 Direct network effects
Direct network effects occur when the value of membership increases with the addition
of new members. Where future subscribers are not encouraged through altering their
economic incentives to take this effect into account when considering whether to join
the network or not, direct network externalities will generally arise. Direct network
externalities imply an under-consumption of subscriptions, such that subscriber
numbers will be less at prevailing prices than is socially optimal.
When we refer to subscriber numbers we mean the network of users. It is commonly
argued that it is to the benefit of existing subscribers for subsidised prices to be offered
to those who are not prepared to pay the existing subscription price. The price that
should be charged would be set at the level that internalises the unconsidered
(external) benefits. This price is where the marginal social benefit equals the marginal
social cost.
There are a range of circumstances under which such subsidies would be unnecessary,
and an analysis of these circumstances also reflects on the efficiency costs of flat-rate
pricing in the Internet.201 In short, these are when network effects do not result in
significant externalities. One such circumstance, which arises fairly generally, is where the network is privately owned. Indeed, a lack of rights of the
type that come with ownership has been shown to be a primary cause of externalities,
including those that represent the greatest challenges facing mankind, such as natural
resource depletion and climate change.202
The incentive of network owners to internalise network benefits is demonstrated in
Figure 7-1, where MC = the marginal cost of adding another subscriber. In many
networks, including telecommunications networks, these costs can be expected to start
rising steeply as additional subscribers become less urban. (We note that this is likely to
be much less the case with Internet subscribers where a telephone line is already
present.203 In Figure 7-2 below we address the specific case of the Internet204). In
Figure 7-1 MB is the marginal benefit of adding an additional subscriber, and this can
be shown to equal marginal revenue (MR). AB (average benefit) increases with the
number of subscribers indicating a positive network effect. It represents the maximum
price the network owner can charge, and is thus also the average revenue (AR)
function.
201 In practice, the incremental cost of serving an additional customer when neighbouring customers already receive a service is only a small proportion of the average cost of providing service. For this reason very low prices can be offered to incremental customers, such as through the use of self-select packages, but these will not normally require any subsidies.
202 Ronald Coase received a Nobel Prize in economics for his work in this area. See Coase (1960).
203 Another way of putting this, is that the incremental cost of providing Internet service, given that fixed
wire telephone service is already provided, is very much lower than the stand-alone cost of providing
Internet service.
204 In fact the situation tends to be complicated by the existence of significant common costs, such that
the cost of adding one additional subscriber, given that neighbours are already served, is normally
much less than is commonly acknowledged.
Figure 7-1: Network effects and private ownership
[Diagram: price/cost (Euro) against network participants, showing curves MC, MB=MR and AB=AR, prices P and P*, and quantities Q and Q*.]
Source: Liebowitz & Margolis (1994) {modified}
While the price at quantity Q* is less than the MC of adding subscribers, the MB to the network owner is greater than the price charged, i.e. the owner has an incentive to account for the network effects by charging less than marginal cost, and thus the network benefit is internalised.
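This logic can be made concrete with a stylised numerical example. The linear functional forms and parameter values below are our own illustrative assumptions and are not taken from Liebowitz & Margolis:

```python
# A stylised numerical sketch of the internalisation logic behind Figure 7-1.
# Functional forms and parameters are illustrative assumptions only.
#
# AB(Q): average benefit per subscriber, rising in Q (positive network effect);
#        it is also the maximum chargeable price (= average revenue).
# MB(Q): marginal benefit of one more subscriber = d(Q * AB(Q)) / dQ.
# MC(Q): marginal cost of adding one more subscriber, rising in Q.

def AB(q): return 2.0 + 0.01 * q
def MB(q): return 2.0 + 0.02 * q   # derivative of Q * AB(Q) = 2Q + 0.01 Q^2
def MC(q): return 1.0 + 0.05 * q

# A private owner expands membership while MB > MC, i.e. up to MB(Q*) = MC(Q*):
# 2 + 0.02 Q = 1 + 0.05 Q  =>  Q* = 1 / 0.03
q_star = 1.0 / 0.03
p_star = AB(q_star)  # the price actually charged at Q*

print(f"Q* = {q_star:.1f} subscribers")
print(f"price charged P* = {p_star:.2f}, MC at Q* = {MC(q_star):.2f}")
# P* (about 2.33) < MC (about 2.67): the owner willingly prices the marginal
# subscriber below marginal cost, because that subscriber raises the value of
# membership for everyone else - the network benefit is internalised.
```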
The case of the Internet is rather different. It is made up of roughly 100,000 networks, most of which are independently owned. This means there is no incentive for one ISP to reduce its prices in order to internalise network benefits, as doing so would give rise to very few benefits, and in any case, most of these would be captured by other ISPs and their customers. With fragmented ownership such as we have with the Internet, network effects cannot normally be captured by the actions of individual owners, implying that ISPs have no incentive to take account of them.205
If we take for the moment Figure 7-1 with its particular cost and benefit functions, Q
would be the level of network participation, P would be the price charged and there
would be a network externality of value indicated by the lightly shaded area between Q*
and Q. The quantity of network subscriptions would fall short of the socially optimal
amount by 0Q*-0Q. On paper, this result suggests a strong case for official intervention in the form of a scheme that has the effect of increasing Internet participation from 0Q to 0Q*.
205 As we shall see below, this may no longer be the case where one ISP has substantial market power.
Fortunately, the underlying costs of the Internet are such that network externality
benefits should be able to be internalised by ISPs acting individually. To show this we
introduce uniform pricing (i.e. where everyone pays the same), and look at the social
opportunity cost of a flat rate pricing structure. In this case the main reason for the
Internet being able to internalise network benefits is that the marginal cost for an ISP of
adding subscribers is very low where a telephone network is already in place. We show
the situation in Figure 7-2. As in Figure 7-1, MB and AB apply to the network as a
whole. Many costs are sunk before a network’s first subscriber has signed up. This
means their average cost (AC) will be higher than MC, and it is AC that must (on
average) be covered through the prices charged if the ISP is going to be commercially
viable. Thus, no ISP could stay in business if it priced at marginal cost for all
subscribers as it would not be covering its other costs. However, by pricing at average
cost (P*) there are a number of potential subscribers who are prepared to pay more than their own marginal costs to obtain a subscription but are not prepared to pay P*, and who are therefore denied a subscription where a uniform price = AC is charged. These potential
customers are those between Q and Q*, and the externality associated with offering
only one price to consumers is indicated by the shaded area between Q and Q*.
Figure 7-2: Maximum Internet externality benefits for private subscriptions
[Diagram: price/cost (Euro) against network participants up to 100% penetration (Qmax), showing curves MR=MB, D=AR=AB, ACISP and MCISP, prices P* and P, and quantities Q and Q*.]
Source: WIK-Consult
To include these customers in a way that makes economic sense (to the ISP and to
society), ISPs would offer a range of packages to the public which separate subscribers
according to the characteristics of their demand. These are the same principles used by
cellular operators (and increasingly fixed wire operators) in offering a range of tariff
options (or packages) to subscribers.206 Competition between ISPs would nevertheless
result in the average price charged being near P*. It does not appear that the Internet
has yet reached the level of penetration that would result in ISPs looking to sign up
these marginal customers. The Internet is still relatively early in its development phase,
although we expect that development to slow during the current economic downturn.207
A flat-rate pricing structure can result in a premature stagnation in the growth of
subscriber numbers by requiring those subscribers who may have little or no demand
for Internet capacity during its most congested periods, to bear a share of the peak-load
capacity costs. Flat-rate pricing implies that those who do not use the network at congested periods still have to pay a share of the cost of that capacity. This means that those with weak demand for usage at peak periods are carrying costs that ought to be assigned to those with strong demand at peak usage. This is both inefficient and inequitable. A
pricing structure that moved capacity costs onto those with the strongest demand for
peak-period usage would boost subscriptions and improve overall economic welfare.
We show the situation in Figure 7-3 in which we assume that the supply of capacity is
sufficient to meet the entire peak period demand when there is no usage sensitive
pricing, as is the case under flat-rate pricing. Thus, on the right hand side (r-h-s) the
supply curve is vertical.208 The cost of that capacity is shown as Pc which can also be
thought of as being the price of that capacity per subscriber. Under flat rate pricing,
subscribers all bear the same proportion of this cost even though congested period
session times are not demanded by many subscribers. For usage at non-congested periods, the cost is effectively zero – i.e. there is no marginal cost involved in using idle
capacity. For these reasons flat-rate pricing is inefficient and could also be seen as an
unnecessary constraint on the growth in Internet penetration, especially once subscriber
growth begins to stagnate.
206 In economics, packages designed for this purpose are known as separating mechanisms.
207 Another factor that suggests there are few network benefits that cannot be internalised by the industry without the intervention of the authorities, is that virtually everyone in the EU who does not yet have a private subscription with an ISP can still have access to the Internet (including e-mail) through: work access; public access points such as typically occur at public libraries; Internet cafés, or other people's private Internet subscriptions. Moreover, as the Internet is a nascent industry growing rapidly without subsidies, there is no need to consider the case for subsidies at this stage. Indeed, even in the long-term, we expect that the industry may be left to itself to offer the sort of tariff packages required to get marginal households to subscribe to an ISP.
208 We have assumed away congestion problems that typically arise when there is no marginal cost
pricing mechanism. These are discussed in Chapter 8.
Figure 7-3: Cost allocation under flat-rate pricing and the effect on penetration
[Two-panel diagram. Left panel: access price/cost against the number of subscribers (assuming flat-rate pricing), with a demand curve D, price P (the averaged price/cost of service), cost per subscriber Pc, the ACa \ MCs line, and quantities Qw and Qf. Right panel: price/cost against the quantity of usage at peak periods, with a demand curve and a vertical supply-of-capacity curve.]
Source: WIK-Consult
In the l-h-s of Figure 7-3, ACa represents the average cost of access.209 It also represents the marginal cost of service for all subscribers who do not demand session-time at peak use periods, i.e. they use spare capacity only. Where flat-rate pricing is used, the ISP will charge a price P = ACa + Pc, implying a penetration rate of 0Qf, with 0QW - 0Qf representing those who are excluded but who would be prepared to pay the marginal costs they cause, i.e. 0QW represents the total number who could be served without any subsidies.
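A small numerical version of the l-h-s of Figure 7-3 may help to fix ideas. The linear demand curve and the cost figures below are invented, and for simplicity all potential subscribers are placed on a single demand curve:

```python
# A numerical sketch of the left-hand side of Figure 7-3. The linear demand
# curve and all cost figures are invented for illustration only.

AC_a = 5.0    # average cost of access per subscriber (ACa)
P_c  = 10.0   # peak capacity cost allocated per subscriber (Pc)

def demand(price: float) -> float:
    """Number of subscribers willing to pay at most `price` (linear demand)."""
    return max(0.0, 1000.0 - 40.0 * price)

# Under flat-rate pricing everyone pays P = ACa + Pc, whether or not they
# use the network at congested periods:
P = AC_a + P_c
Q_f = demand(P)      # penetration under flat-rate pricing (0Qf)

# Off-peak-only subscribers impose only the access cost ACa, so everyone
# willing to pay at least ACa could in principle be served without subsidy:
Q_w = demand(AC_a)   # total serveable without subsidies (0Qw)

print(f"flat-rate price P = {P:.0f}   -> {Q_f:.0f} subscribers")
print(f"cost-based floor ACa = {AC_a:.0f} -> up to {Q_w:.0f} subscribers")
print(f"excluded by flat-rate pricing: {Q_w - Q_f:.0f}")
```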
7.1.2.2 Indirect network effects
Indirect network effects occur when the value of a network increases with the supply of
complementary services. When future suppliers of complementary services are not
encouraged to take this effect into account when considering whether to supply these
services or not, indirect network externalities will generally arise. Indirect network
209 Figure 7-2 and the related analysis draw on the fact that the marginal cost of adding a subscriber,
given that other subscribers remain 'connected' can be much lower than the average cost of access.
We ignore these additional costs in Figure 7-3 as we are analysing another reason for the inefficiency
of flat-rate pricing. By ignoring these, however, the welfare costs indicated in Figure 7-3 will be
understated.
externalities mean that the supply of complementary services on the Internet is less than is socially optimal. For argument's sake we can separate these into two sources:
a) indirect network externalities that follow directly from existing network externality benefits, and
b) indirect network externalities that arise because, to maximise the social benefit of the Internet where each side of the market (end-users and web-sites) is complementary to the other, firms would need to take these factors into account when setting prices, which at present they do not.
We consider (a) and (b) to be defined such that the benefits can be added thus:
Total indirect network effects (INE) = (a) + (b).
Indirect network effects (a)
For those indirect network effects that are caused by the direct network effects, e.g. the rapid growth in portals, we do not see that much more needs to be written about them in this report. Suffice to say that if there are significant direct network benefits that remain to be internalised, there will be a correspondingly lower number of services and web-sites operating on the Internet than is socially optimal. However, we have already noted reasons why we believe the industry is capable of internalising most of these, and thus we do not expect direct externalities to be the cause of high indirect network externalities.
Indirect network effects (b)
Because ownership is so fragmented, the synchronisation values are not internalised at
present. Indeed, Laffont, Marcus, Rey and Tirole (2001a,b) (LMRT) note that under
quite general conditions, ISPs have incentives to set prices to all their customers as if
their traffic were "off-net". Among other interesting implications (which we discuss in
Section 8.2.5) this suggests the unsurprising conclusion that indirect network externality
benefits occur with the Internet due to the lack of co-ordination of pricing between both
sides of the market. In particular, the sharing of costs between web-sites and end-users
does not presently take account of the relative importance of web-sites in invigorating
the virtuous circle.
The approach that would be taken by a firm that monopolised the supply of access to
both sides of the market would be to follow the Ramsey pricing principle in setting these
access prices. This requires prices to be set according to the inverse elasticity of
demand for access, and those elasticity measures would reflect the complementarity
existing between both sides of the market. In other words, the network provider would
take account of the sensitivity of web-sites and end-users to price changes, and in
addition also factor in the relative importance that each side has in invigorating demand
on other side. A fundamental rationale behind such an approach is for the owner to
maximise the value of the Internet.
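For reference, the standard single-market form of the Ramsey (inverse elasticity) rule can be written as follows; the notation is ours, and this is the generic textbook condition rather than a formula taken from LMRT:

\[
\frac{p_i - c_i}{p_i} \;=\; \frac{\lambda}{\varepsilon_i}, \qquad i \in \{\text{end-users, web-sites}\}
\]

where \(p_i\) and \(c_i\) are the access price and marginal cost on side \(i\), \(\varepsilon_i\) is the own-price elasticity of demand for access on that side, and \(\lambda\) reflects the firm's break-even constraint. In a two-sided setting the relevant elasticities would be 'super-elasticities' adjusted for the cross-side complementarity, so that the side whose participation does more to invigorate demand on the other side faces a relatively lower mark-up.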
This clearly does not happen at present; the reasons appear straightforward – mainly,
fragmented ownership of the Internet means that no firm is able to internalise the
benefits of its actions. We discuss these issues as they relate to the Internet in more
detail in Chapter 8.
7.1.2.3 Congestion Externalities
The direct and indirect network effects discussed above are concerned with externality
benefits. There is also an important externality cost that is featured in this study: a
congestion externality.
A congestion externality occurs where each user fails to factor into his own decision to
use the Internet at congested periods, the delay and packet loss his usage causes other
users. This occurs when there is no congestion price, i.e. when sending extra packets
at congested periods is free. Similarly, for the interconnection of transit traffic or
interconnection between peers, the lack of congestion pricing applied to firms
purchasing interconnection means that the networks do not face any additional charge
for packets handed over to another network during its most congested period. This
means that networks themselves do not have sufficient incentive to efficiently manage
congestion on their networks. Congestion externalities are a general feature of networks. In the
absence of a marginal cost pricing scheme, or an adequate proxy for it, annoying levels
of congestion will be difficult to prevent on the Internet. Congestion will tend to
undermine the Internet's development in the area of real-time services provision, like
VoIP, and will delay convergence and the development of competition between different
platforms. The absence of congestion pricing is therefore a problem which we discuss
in detail in Chapter 8.
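To illustrate the shape of the problem, the sketch below uses a simple M/M/1 delay model; the choice of model, the service rate and the value placed on delay are all our own assumptions. Each extra packet per second raises the expected delay of all other traffic, and the optimal congestion price equals this marginal external delay cost, which is precisely the term a flat-rate sender never faces:

```python
# A stylised illustration of a congestion externality using an M/M/1 queue
# (our modelling choice; all figures are invented). One more packet per
# second raises the expected delay of every other packet; the optimal
# congestion price equals this marginal external delay cost.

def expected_delay(lam: float, mu: float) -> float:
    """Expected time in system per packet for an M/M/1 queue (requires lam < mu)."""
    return 1.0 / (mu - lam)

def external_cost(lam: float, mu: float, value_of_time: float) -> float:
    """Delay cost imposed on all *other* traffic by one extra packet/sec.

    Total delay per second is lam * W(lam); differentiating with respect to
    lam and subtracting the sender's own delay W leaves lam / (mu - lam)**2.
    """
    return value_of_time * lam / (mu - lam) ** 2

mu = 100.0  # link service rate in packets per second (assumed)
v = 0.01    # cost of one packet-second of delay (assumed)

for lam in (50.0, 80.0, 95.0):
    print(f"load {lam / mu:.0%}: own delay {expected_delay(lam, mu) * 1000:.0f} ms, "
          f"optimal congestion price {external_cost(lam, mu, v):.4f} per packet")
# The externality is negligible at 50% load but rises steeply near saturation,
# which is why a price that is zero at all times cannot manage congestion.
```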
7.1.3 Market power
Firms have market power when they face too little competition, or too weak a threat of imminent competition. Monopolies are an extreme example, although oligopolies are much more
common and they too have market power even though they face some competition from
rivals.
As the most extreme example, monopoly is known to suffer from several severe
problems that limit the economic benefits this type of market structure provides to
society.
• Monopolies supply less and charge higher prices than would occur if the industry was organised competitively.
• Monopolists tend to under-perform in terms of their own internal operations. Relative to competitive industry structures, monopolists are also thought to under-perform in terms of their uptake of investment opportunities.210
• Where monopoly is not secured by statute, monopolists (or near monopolists) often face some competitive threat on the periphery of their markets, and knowing this they tend to engage in a range of strategic activities designed to keep out potential competitors. Indeed, monopolies sometimes spend a great deal of time and money in trying to maintain or enhance their market power in ways that do not contribute to society's economic welfare. However, the types of actions undertaken do not necessarily breach competition laws.211
In the case of oligopoly many of the same problems apply as for monopoly, but they are
less severe.
• In such markets oligopolies have some market power, i.e. they face a downward sloping demand curve. As a result market output tends to be lower and prices higher than would occur in a competitive market.
• Oligopoly firms practise a range of strategies aimed at enhancing and protecting their market power. Typically these are not illegal.212
Where there is substantial market power, such as occurs with monopoly or dominance,
and it persists in the long term, we can say that there is market failure. Problems with
attempted monopolisation or restrictive trade practices are dealt with under competition
law, through merger / take-over regulations, and through laws like article 81 of the
Treaty of Rome. Where monopoly or strong market dominance already exist, however,
and it is enduring (i.e. the market is not likely to face significant competition any time
soon, say, from a new technology), competition law is limited in its ability to prevent
such firms from taking advantage of their position.213 In such cases ex ante regulation
may be justified.214
In analysing market power issues and the Internet, network externalities are highly
relevant. The value of a network is heavily dependent on the number of people and
entities connected to the network. Where networks are perfectly interconnected,
network effects are shared among the interconnected networks independently of the
210 The cause is a lack of competitive and shareholder pressure on management. Shareholders have less
information with which to judge whether management are performing their jobs well, and as a result
monopolies and sometimes also dominant firms, tend to suffer from a significant level of underperformance by management and employees.
211 Examples include the strategic use of patenting, and lobbying those with political influence.
212 One of the most common are strategies aimed at enhancing product differentiation, say through
branding / advertising. For an introduction to these see Tirole (1988).
213 For example, monopoly profits are not generally a competition law offence.
214 We discuss these issues in more detail in our study for the European Commission titled, "Market
Definitions and Regulatory Obligations in Communications Markets". This study was undertaken with
partner Squire Sanders & Dempsey L.L.P.
size of each network's own customer base. In such cases competition between
networks will focus on other advantages that each can provide to end-users, rather than
the size of its customer base.215 These advantages would likely focus on such things
as QoS, the range of services offered and price, although in oligopolistic markets firms
will try to avoid head-to-head price competition.
We noted above that left entirely to market forces and in the absence of regulation, a
national or possibly global (tele)communications network monopoly would likely result
due to the existence of network effects. Where several competitors exist, the way this
would occur would be mainly through merger and takeover on the one hand, and the
denial of interconnection on the other.
In practice, however, merger regulations would prevent mergers and takeovers being
used to monopolise the industry. But in an unregulated network, such as the Internet,
interconnection would remain a possible weapon for the largest network(s); not a
refusal to interconnect as such, since this would be too blatant and risk the attention of
the authorities who might then seek to regulate interconnection, but possibly through
deciding to only provide low quality (e.g. congested) interconnection. If this turned out to
be a viable strategy for the largest ISP it could have regulatory implications.
Researchers have recently made good progress in trying to determine the circumstances under which a network would likely benefit from such a strategy. This research was motivated by recent European Commission and U.S. Department of Justice (DOJ) merger cases.
It is not our intention to revisit the merger cases that were heard by the Commission
and the DOJ in 1998 and 2000.216 However, some of the issues discussed in those
cases are central to an understanding of the strategic options of Tier 1 ISPs that may
involve the enhancement of market power, and are thus of interest to those involved in
public policy development. Firstly, however, we provide a recap of Internet backbone
industry structure.
We have noted elsewhere217 that in recent years the Internet has become less
hierarchical than it was, there having been a large increase in the proportion of total
traffic that can avoid Tier 1 ISP networks. There are several reasons this has occurred.
The main ones include:
• Growth of termination through regional interconnection (e.g. secondary peering) – aided in no small part by increased levels of liberalisation world-wide;
• Growth in multi-homing, which has pushed Tier 1 ISPs to compete more vigorously with each other and with ISPs that have rather less complete routing tables;
• Growth in other mechanisms that can reduce the levels of interregional traffic that are exchanged, e.g. caching, mirroring and content delivery networks. These services compete with those offered by large backbones with extensive international networks;
• A possible increase in the number of firms that are able to provide complete or near complete routing (i.e. Tier 1 ISPs). These are ISPs that do not purchase transit in the USA, and indeed purchase relatively little transit outside of the USA as they peer with most other address providers, and
• Improvements in the border gateway protocol (BGP) which make it economic for a great many ISPs to operate BGP where they did not do so previously.
215 For us to be sure that the standardised offering is welfare enhancing, however, we would need to be sure that standardisation (or seamless interconnection) would not undermine technological development.
216 WorldCom/MCI Case No. IV/M.1069; MCI WorldCom/Sprint Case No. COMP/M.1741.
217 See Section 5.6.
Clearly, several of these points are correlated with each other. But taken together, they
raise a significant element of doubt as to whether an anti-trust market exists at the ‘top’
of the Internet, even though the Internet remains a loosely hierarchical structure.218 But
with perhaps 6 peers each providing virtually full Internet routing, and with a number of
other ISPs able to terminate a high proportion of their traffic through peering
relationships with others, including that which they are paid to transit, no single firm
providing core ISP services presently appears to have dominance given any reasonable
market definition.219 Nevertheless, the upper layer of the Internet comprises a number
of very large firms with a multinational presence suggesting an oligopolistic structure. In
such ‘markets’ each firm recognises that it has some influence over the environment in
which it operates. Such markets offer rich opportunities for strategic interaction between
rivals, and between up and downstream firms. In Chapter 8 below, we discuss the
scope for strategic behaviour among ISPs that may have public policy implications.
7.1.4 Existing regulation
It is not uncommon for existing regulation to enhance or create market power. The way
it does this may be very simple, such as in licensing only one or two firms, or it may be
much more complex involving, for example, spill-over effects running from a regulated
market to an unregulated one. While neither Internet backbones nor Internet traffic
exchange are directly regulated, regulations nevertheless surround this industry and
have potentially far reaching implications for its development and the development of
competition in the (tele)communications sector.
218 In order to establish this we would need to undertake a detailed analysis, among other things, of the
degree of substitutability posed by the first 3 bullet points above.
219 However, in some specific countries or regions of countries market power problems may exist due to
such things as a dysfunctional regulatory regime.
To a considerable degree the Internet is built up with leased infrastructure, and a
significant proportion of this infrastructure consists of leased lines rented from incumbent
telecommunications operators. Leased line prices in most EU countries are not
regulated directly, but regulators have been influential in enabling competition and in
lowering the prices and improving the availability of leased lines, and thus in aiding the
development of the Internet in Europe (and elsewhere). In this regard, it helps to see
leased lines as an element in the full bouquet of services that share many costs (e.g.
with switched services sold to businesses and households).220 In such cases the prices
(and supply) of one type of service are not independent of the prices of other services,
some of which continue to be regulated. This suggests that the supply (and to a degree
the prices) of leased lines in Europe will be partly explained by prices of other services
which may be determined by regulation.221
Since liberalisation, which occurred in most EU countries in January 1998, leased line prices have trended sharply downward, especially in the case of international leased
lines and dark fibre.222 This appears to be largely explained by fundamental changes in
industry structure, from monopolised arrangements sanctioned by law and international
convention, to a situation where there are now several firms providing international
cross-border connectivity in virtually all Member States. As well as the international
switched telephone network, these new networks are also providing connectivity to the
Internet.223
Arguably the main regulatory problem that will begin to appear in the next few years will
concern the different regulatory treatment of the PSTN compared with the Internet. In
principle the move to ex-ante regulation based on significant and enduring market
power in a defined anti-trust market should prevent regulations which benefit one type
of competitor over another, e.g. VoIP compared to traditional PSTN voice services, but
this need not be the case, especially during transition. We briefly discuss possible
public policy / regulatory problems regarding this aspect of regulation in Section 8.5.
220 Cost modelling work on current network design suggests that switching costs are greater than fibre,
amplifier and regeneration costs combined. See Lardiés and Ester (2001). Leased line prices may,
however, be only loosely related to these costs.
221 Prior to liberalisation state owned operators priced certain services according to political preferences,
and these pricing oddities continue to exist in many cases.
222 Data supporting this claim was presented in our report to DG Comp in WIK's role as advisor for the official sector enquiry which took place between 1999 and 2000. This data is unfortunately confidential.
223 See Elixmann (2001) who provides data about cross-border networks in Europe.
8 Possible public policy concerns analysed in terms of market failure
8.1 About congestion management on the Internet
As the Internet is an end-to-end communications service, explanations for what is
happening with commercial traffic exchange can sometimes be found elsewhere in the
network. In regard to QoS and the Next Generation Internet, this may be equally true for
either technical or economic reasons. In Chapter 4 and Annex B-1 we discussed
technological issues that impact on commercial traffic exchange. In this section we look
at the relevant economic factors relating to QoS. This includes a discussion of possible
roles for pricing and demand management in improving QoS, as well as a discussion of
claims that technological developments will avoid the need for demand management, in
particular, that cheap bandwidth and processing power will overcome economic
congestion (i.e. scarcity).
8.1.1 QoS and the limitations of cheap bandwidth
In recent years several people have pointed out that with the rapidly declining cost of
bandwidth and the rapid increase in computing power, "throwing bandwidth" at
congestion problems can be a cost effective way of addressing QoS problems.224
Indeed, it has been claimed that this option negates proposals to introduce pricing
mechanisms to control congestion, and may also negate those that would provide
mainly technical means to discriminate between higher and lower priority packets, such
as IntServ and DiffServ architectures, which we address in Annex B. Mainly because of
falling costs of transmission and processing, the suggestion is that congestion on the
Internet will be a temporary phenomenon, implying that there is no need to change the
structure of existing prices.
Evidence in favour of the "throw bandwidth at it" solution to congestion includes information showing that bandwidth has grown much faster than traffic volumes.225 The inference is that after several more years of divergence in growth rates, it will not matter that packets on the Internet are treated strictly in order of arrival, so that low-priority e-mails get the same QoS as do VoIP packets: all packets will get a QoS so high that the holding-up of messages whose perceived quality is very sensitive to latency and jitter, by those that are not, will have no material effect on the QoS experienced by end-users. In general, the argument is that the rapidly declining cost of bandwidth and processing will mean that more "bandwidth" will be the cost effective means of addressing QoS problems. In short, the claim is that all services will receive a premium QoS.
224 See for example Ferguson and Huston (1998); Odlyzko (1998); and Anania and Solomon (1997) where the claim is less explicit.
225 See Odlyzko (1998).
While we tend to concur that on many occasions apparent over-engineering could be an
appropriate option, we do not see that in general throwing bandwidth at congestion
problems is the cost effective way to address QoS problems that stand in the way of
VoIP and other applications that have strict QoS requirements. Indeed, even if we put
the issue of the opportunity cost of this approach to one side, we are sceptical that this
approach can sufficiently address the problem of congestion to enable an all-services
Internet to effectively compete with other platforms like the PSTN. One reason for this is
that demand for bandwidth is likely to increase enormously due to the following factors:
• Increased access speeds for end-users (e.g. xDSL) in the short to medium term (and access speeds several times greater than effective xDSL speeds in the next 10-20 years);
• If a QoS capable of delivering sufficiently high quality VoIP arrives, it will likely result in many customers (perhaps a majority of existing PSTN subscribers) moving their demand for voice services onto the Internet, as in many cases it will likely have a significant price advantage;
• When customer access speeds reach levels that enable HDTV quality streaming video, the Internet will have converged with CATV and broadcasting, and likely demand for content (including from different parts of the world) will result in an enormous increase in the volume of Internet traffic, and
• 3G and 4G mobile Internet access may also result in large increases in demand for the Internet, be it for voice, WWW, e-mail, file transfer, or streaming video.
We have intimated in Section 7.1.2.3 above that without a marginal cost pricing
mechanism there is no thoroughly accurate means of providing the proper incentives for
ISPs to invest in a timely way in upgrading capacity. The pricing mechanism is the ideal
way of connecting investment incentives with demand and in the flat-rate pricing world
of the Internet where marginal congestion costs are far from zero, no such pricing
mechanism operates.
However, perhaps the most important issue is not whether it is possible to address QoS problems for real-time services by throwing bandwidth at the problem, but whether there is a more cost effective option than the combination of flat-rate pricing and over-engineering the Internet; and, if this option exists, whether it provides for a pricing mechanism which has a more realistic chance of meeting the claims made for it (one that is able to better match the marginal costs of capacity upgrades with marginal revenues when QoS is degraded by congestion).
In our view a flat-rate "one-service-fits-all" Internet is very unlikely to be the
arrangement that ushers in the next generation 'converged' Internet, i.e. an Internet
where WWW, streaming video, file transfer, email, and voice services are provided at a
price/quality that makes these services highly substitutable with those provided over
other (existing) platforms. In short, we do not accept that falling capacity costs will
enable the Internet to avoid the "tragedy-of-the-commons" problem226, i.e. the claim
that supply will in practice outstrip demand. This is not in keeping with experience of
policies that make things that are not pure public goods free at the point of delivery.227
Where this has occurred, overuse and congestion have typically followed.
Over-provisioning requires networks to be built which cope with an expected level of
peak demand.228 This tends to result in lower levels of average network utilisation and
thus higher average cost per bit. It is well known that Internet traffic tends to be very
'bursty' (demanding high bandwidth for short periods). In larger ISP networks, the
burstiness of end-user demands tends to be somewhat smoothed because a large
number of bursts are dispersed around a mean.229 In order to provide a service that
is not seriously compromised by congestion at higher usage periods, average peak
utilisation rates on backbones of roughly 50% may be the outcome, with very much
lower average utilisation rates over a 24 hour period.
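The smoothing effect described here can be illustrated with a small simulation (our own construction, with hypothetical demand figures, not drawn from the study): as the number of independent bursty sources grows, the variability of their aggregate demand, measured by the coefficient of variation, falls roughly as one over the square root of the number of sources.

import random
import statistics

def burst_demand() -> float:
    """One user's instantaneous demand: usually idle, occasionally a large burst."""
    return 10.0 if random.random() < 0.05 else 0.1  # hypothetical Mbit/s values

def aggregate_cv(n_users: int, samples: int = 2_000) -> float:
    """Coefficient of variation (stddev/mean) of the total demand of n_users."""
    totals = [sum(burst_demand() for _ in range(n_users)) for _ in range(samples)]
    return statistics.stdev(totals) / statistics.mean(totals)

if __name__ == "__main__":
    for n in (1, 10, 100, 1000):
        print(f"{n:5d} users: CV of aggregate demand = {aggregate_cv(n):.3f}")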
In the last 3-4 years there has been progress in setting up QoS standards for IP
networks, e.g. the Real-time Transport Protocol (RTP), the "Resource reSerVation
Protocol" (RSVP), DiffServ, and IntServ. Services based on these protocols are not yet
commonly available on the Internet but may be implemented in the routers of some
corporate networks or academic network structures like TEN-155, and even within some
larger ISPs, although not yet between larger ISPs. We discuss these issues in more
detail in Annex B.
8.1.2 Pricing and congestion
Optimal congestion management cannot normally be accomplished with technology
alone (e.g. by finding more cost effective ways of utilising existing capacity, or by
226 "The tragedy-of-the-commons" is a problem of market failure which we discuss further below.
227 In cases where there are subscription fees but users face no marginal usage costs, outcome have
been much improved, but without there being a large over-investment in capacity, some congestion is
typically still experienced.
228 In practice even in PSTN networks blocking occurs during congested periods. In the Internet world
this is done with admission control algorithms.
229 This effect is called stochastic multiplexing but it should be dealt with carefully. Some studies on
Internet traffic suggest that the length of web pages and the corresponding processing and
transmission time are not according to an exponential distribution but are better approximated by a
distribution with large variance e.g. by a Weilbull distribution. Some authors have claimed that the
distribution is Pareto resulting in a near infinite variance and cancelling any stochastic multiplexing
effect. But these studies are based generally on data traffic in academic networks, which is not
representative of traffic on the commercial Internet.
throwing bandwidth at the problem). Therefore, technological development of the
Internet should have as one of its goals to allow for mechanisms through which demand
management can function. At present this is largely lacking on the Internet. Much of the
reason for this is that Internet technology does not (as yet) provide sufficient flexibility,
either by providing the information that would permit ISPs to know when it is profitable
to upgrade capacity, or by providing end-users with the opportunity to select anything
other than a "plain vanilla" service. Under present Internet technology packets are
accepted by connected networks without specific guarantee (although SLAs typically
provide compensation where statistical ‘guarantees’ are breached) and on an
essentially "best effort" basis. As such, packets carrying e-mail are treated the same as
packets carrying an ongoing conversation.230 From the perspective of demand
management, this equal treatment of packets according to best efforts is problematic on
at least two counts:
1. The various applications that can be provided over the Internet require different QoS
statistics of the network in order to provide a service which is acceptable to
end-users (see Figure 4-3). E-mail, file transfer, and surfing all function adequately at
present, even if annoying delays are sometimes experienced, most especially in the
case of the latter.231 For VoIP, video conferencing, and 'real-time' interactive
services, however, existing QoS – especially for off-net traffic – and that envisioned
for the near future, may well be too poor to provide a service quality sufficient to
make these services attractive enough for most end-users to be prepared to
substitute them for services provided over traditional platforms, i.e. quality of service
may be too low to put them in the same antitrust market as services provided over
the traditional platforms.
2. End-users have different demands for service quality in regard to any particular
service. For example, some consumers would be prepared to pay significantly more
than they do at present in order to avoid congestion delays during web browsing,
even though they presently derive a benefit from a sometimes congestion-delayed
service.
In the remainder of this section we discuss in more detail the lack of congestion pricing
on the Internet, what implications this might have for the future development of the
Internet, and what, if any, the policy implications are. In the section that follows we
discuss optional grades of service (GoS).
Various means have been proposed to address the poor economic signals which
currently govern QoS on the Internet, and these range from voluntary control, to
non-price technological mechanisms such as admission control, to measures which use
a pricing mechanism, including various forms of capacity auction. A signalling channel has
230 After the adoption of ATM by large IP backbones it became feasible for them to offer QoS statistics in
their transit contracts.
231 Most of these delays occur around the edges of the Internet, not on the Internet backbone.
also been proposed as a means of enabling the efficient allocation and metering of
quality differentiated resources.
In EU countries, dial-up Internet users are mainly charged by their access provider on a
per minute / second basis, just as they are for making a telephone call. Indeed, for the
access provider a dial-up session is very similar to a telephone conversation: the call is
switched to the customer's ISP.
ISPs typically only charge a subscription fee to Internet end-users, although in some
cases the ISP does not charge the end user at all, but rather shares with the access
provider the per call-minute revenues received from dial-up sessions, i.e. the price
levied on the customer for the 'telephone' call to the ISP. This is sometimes referred to
as the 'free' ISP business model.232 Per minute charges by the access provider mean
that dial-up users do face a marginal price when using the Internet and this encourages
end-users to economise on their Internet usage.
In the USA and New Zealand it is typical that dial-up users face no usage sensitive
prices and thus face no marginal price for their use of the Internet.233 In Australia, calls
on Telstra’s network are charged at A$0.25 irrespective of the length of time the call
takes.234 Dial-up users in Australia thus face a marginal price per session, but no
marginal price for the length of the session. This means that it is very expensive to use
the Internet to send a single e-mail, but much cheaper if many tasks can be completed
during a single dial-up session. Pricing of this type provides an incentive for users to
dial infrequently and to accumulate the e-mails they want to send and any web
browsing they want to do, and to do it all during a single dial-up session. Indeed, with
call forwarding onto mobile networks from a fixed line number, it may pay dial-up users
to take up the option of keeping their fixed line open to their ISP for very long periods,
even if they do not know in advance whether they will take up the option to receive
and/or send information.
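The arithmetic of this untimed per-call structure can be made explicit with a small worked example (the A$0.25 price is from the text above; the task counts are hypothetical): the average price per task falls in proportion to the number of tasks batched into a single session.

SESSION_PRICE_AUD = 0.25   # Telstra's untimed per-call price, as cited above

def price_per_task(tasks_per_session: int) -> float:
    """Average price paid per task (e-mail, page view) in one dial-up session."""
    return SESSION_PRICE_AUD / tasks_per_session

for tasks in (1, 10, 100):
    print(f"{tasks:3d} tasks per session -> A${price_per_task(tasks):.4f} per task")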
The introduction of FRIACO and ADSL in EU countries seems likely to change the
situation in Europe toward more unmetered pricing for Internet usage. The ADSL
modem splits the information into voice and data, only allowing voice to enter the
access provider's switch, with Internet data being directed to the ISP, normally over an
ATM access connection between the ADSL modem pool ("DSLAM") and the first
point-of-presence (PoP) of the Internet. In this regard ADSL provides an "always on"
Internet access service. EU businesses that have leased line access to their ISP
already avoid usage sensitive prices. However, the migration of small businesses and
residential users
232 The reason the ISP can share in these revenues is that the call price is in excess of the access
provider's costs in providing that service: a dial-up call on average costs less to carry than a normal
telephone call, yet it is charged at the same rate.
233 They do of course face a marginal cost of sorts, and that is the opportunity cost of their time – i.e. the
next best alternative to being on the Internet, such as watching TV, or reading a study about the
Internet.
234 This is roughly equivalent to 0.14 euro.
to ADSL, for which there is no usage sensitive pricing,235 will significantly increase traffic
on the ATM access connection and also on the Internet, and will, ceteris paribus, tend to
increase congestion on the Internet.236,237
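The contention effect noted in footnote 236 can be sketched numerically. The figures below are illustrative assumptions (the 512 kbit/s ADSL peak and the 2 Mbit/s shared ATM link are not from the study); the point is simply that the rate each user experiences falls as more active users share the ATM access connection.

ADSL_PEAK_KBITS = 512      # hypothetical ADSL downstream peak rate
ATM_LINK_KBITS = 2048      # hypothetical capacity of the shared ATM access link

def experienced_rate(active_users: int) -> float:
    """Per-user share of the ATM access link, capped at the ADSL peak rate."""
    return min(ADSL_PEAK_KBITS, ATM_LINK_KBITS / active_users)

for users in (1, 4, 16, 64):
    rate = experienced_rate(users)
    print(f"{users:3d} active users -> {rate:5.0f} kbit/s each "
          f"({rate / ADSL_PEAK_KBITS:.0%} of the ADSL peak rate)")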
One of the problems which the Internet faces relates to the fact that with unmetered
service (sometimes referred to as flat-rate pricing) end-users do not face any additional
(i.e. marginal) price in sending extra packets. Where all packets are treated equally,
congestion problems tend to occur on one hand, and network investment problems on
the other. This is in large part because there are no economic signals to accurately
match demand with incentives for investment in capacity.
Where there is no limitation on subscriber numbers and users face no marginal usage
costs, the Internet is being treated much like a public good. Pure public goods are not
depleted with use; i.e. my usage does not affect the enjoyment you get from using it.
This is clearly not the case with the Internet, and we should therefore expect it to
exhibit the problems that plague services which are treated as public goods but are in
fact not. These problems are popularly referred to as the tragedy of the commons, a
problem which occurred when livestock farmers were allowed free access to common
land on which they could graze their animals. The farmers took advantage of this offer,
each of them failing to recognise the effect that the (apparently free) grazing of their
animals was having on the ability of the others to do likewise.238 The
result was over-grazing such that the commons became quite unsuitable for grazing by
anyone. A similar problem has occurred with global warming, the loss of biodiversity,
and the depletion of natural resources such as fish stocks. If usage remains 'free' and
the resource can be depleted or over-used, then the rule is that either the numbers of
users have to be rationed, or the total amount of usage must be restricted if the
resource is to remain viable.239,240
In the case of the Internet the lack of an economic mechanism for congestion
management results in a degradation of service quality for everyone. Improvements in
software, hardware, and the declining cost of capacity have, however, provided all of us
235 Some ISPs have a step function in the price charged between 2 or more levels of usage (e.g. bits per
month), but the level of usage in each category is so large that no marginal usage costs exist for the
vast majority of users.
236 The core Internet is protected against overloading by the limitation in the ATM access connection,
where most regional operators do not guarantee more than 10% of the peak capacity of the ADSL
speed. This means that as more users share the ATM access connection, the actual capacity
experienced declines.
237 Under such pricing arrangements end-users tend to be grouped together such that pricing tends to
result in low users subsidising high users.
238 This phenomenon is also known as an externality cost.
239 Quotas are a common approach to these types of problems, and where trading in quotas is permitted,
this tends to result in improved efficiency within the industry. Unfortunately quota numbers tend to be
difficult to police, resulting in illegal over-usage. Quotas can also be systematically over-provided
where quota numbers are not strictly set according to scientific data, but are subject to political
compromise.
240 The tragedy of the commons is the most common form of market failure. For example, it explains
pollution, global warming, and natural resource depletion.
with a level of service that is tolerable on most occasions. But most importantly, by
applying the same network resources to all packets, Internet networks are holding back
the development of real-time services over the Internet, as these require a QoS during
peak usage which is higher than the Internet can deliver when it treats all packets with
the same priority, whether they concern a 'real-time' service or not. On the present
Internet, packets containing email, file transfer, and WWW traffic are intermixed with
those carrying VoIP and real-time interactive messages.
Usage based pricing can in principle be designed to shift some demand from peak
periods to other times, and can also signal to ISPs when demand is such as to make it
economic for them to increase the capacity of their networks. The idea is that customers
should ration their own usage during periods of congestion according to the relative
strengths of each user's demand. For users with very weak demand (say, a willingness
to pay for service during a congested period of zero, assuming they can use it during
uncongested periods at no marginal cost to themselves), there is little benefit obtained
by the user compared to the costs imposed. At times of congestion, however, the cost
of sending extra packets would include the additional delay, packet loss and QoS
degradation that are imposed on other users.
When the Internet is uncongested, usage-based pricing is not helpful at all; it actually
has a detrimental effect on economic welfare. At these times the cost of sending an
additional number of packets is virtually zero. We say that the marginal cost of usage is
zero, and it is a well-established economic result that under these circumstances a usage
sensitive price is inefficient – it reduces economic welfare – and flat rate pricing is optimal.
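A minimal numeric illustration of this point, assuming a simple linear demand curve of our own choosing: with a marginal usage cost of zero, total welfare (consumer surplus plus revenue) is highest at a usage price of zero, and any positive usage price destroys more surplus than it collects.

def welfare(usage_price: float, a: float = 10.0, b: float = 1.0) -> float:
    """Consumer surplus + revenue under demand q = a - b*p, with marginal cost 0."""
    q = max(a - b * usage_price, 0.0)
    consumer_surplus = 0.5 * q * (a / b - usage_price)  # triangle under the demand curve
    revenue = usage_price * q
    return consumer_surplus + revenue

for p in (0.0, 2.0, 5.0):
    print(f"usage price {p:.1f} -> total welfare {welfare(p):.1f}")
# prints 50.0, 48.0, 37.5: welfare falls as the usage price rises above zero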
More generally, to be economically efficient the structure of prices should match the
structure of costs; that is, the way costs are caused should be reflected in the way
liability is incurred by the customer.
The main classes of relevant costs are explained as follows:
(i) Building an Internetwork involves fixed costs that do not vary with network
usage.241 Flat rate pricing is the efficient way of recovering these costs. However,
these costs cannot be said to be incremental to any single customer, and so the
most efficient flat rate pricing would involve different prices being charged to each
subscriber, with those with strong demand paying more than those with weak
demand.242 Indeed, if the seller's overall prices are constrained so that she
makes only a reasonable return on her investments, the most efficient
241 There are also development costs which are not incremental to single customers (such as software
development).
242 This point was made in Figure 7-2 and related text.
access prices are set according to the inverse elasticity of demand of each
subscriber.243
On the basis of these costs no person should be charged a subscription price
more than their willingness to pay, which could be close to zero. The idea here is
that no one should be excluded by the prices which are intended to recover the
costs of providing the basic network and software.244,245
(ii) There is also an initial cost for an ISP in connecting a customer to the Internet.
These are mainly administrative costs, and as they occur on a one-off basis they
should be charged in the same way if prices are to be efficient.
As there is a positive incremental cost involved with each person's subscription,
this will make up the one-off connection fee per subscriber, together with a return
on any incremental capital associated with these costs.
(iii) As the Internet becomes congested there is a marginal cost incurred when extra
packets are sent, and this equals the delay experienced by all users at that time.
To avoid the congestion externality this implies, marginal cost pricing would
have the following attributes. It would: (a) encourage users as a whole to
economise on their usage during congested periods (for those with weak demand
[e.g. a willingness to pay of zero] this means shifting their demand to other
periods) and, (b) send a signal, in the form of additional marginal revenues, to ISPs
to invest in more capacity when there is significant congestion.246
In regard to (iii), if the price is set so that it equals the marginal cost of delay, then
capacity can be said to be optimal. If a price higher than this can be sustained, the
network still becomes congested, and the incremental revenue earned is more than
the cost of building an incremental increase in capacity, then it will pay the network to
increase capacity.
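The investment rule in (iii) reduces to a one-line decision, sketched below with hypothetical numbers of our own: capacity should be expanded whenever the incremental revenue sustained by the congestion price exceeds the incremental cost of the capacity upgrade.

def should_expand(congestion_price: float,
                  congested_packets: float,
                  capacity_increment_cost: float) -> bool:
    """True if revenue collected at the congestion price covers the upgrade cost."""
    incremental_revenue = congestion_price * congested_packets
    return incremental_revenue > capacity_increment_cost

# Hypothetical figures: 1e9 congested packets priced at 0.00002 each against a
# 15,000 upgrade cost -> expanding capacity pays.
print(should_expand(0.00002, 1e9, 15_000))  # True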
Where there is no system that enables end-users to purchase the QoS they demand, it
could be argued that there is also a cost associated with the absence of a market for
real-time services, as is presently the situation with the Internet.
With the telephone network, time-of-day has been used as an element of pricing.
Subscription charges are the flat-rate element, with usage charged on a per minute
(or per second) basis, varying according to the time of day. The idea with time-of-
243 This is commonly referred to as Ramsey Pricing, a full discussion of which can be found in Brown and
Sibley (1986).
244 Given that there are positive direct and indirect network externalities, this price may in theory be less
than zero as was suggested in Figure 7-1.
245 Remember that the cost of customer access (the local loop) should already be met by connection and
subscription charges levied by the access operator. In the case of businesses using leased lines there
will be some additional costs that are caused by the subscriber.
246 Crucially this is dependent on pricing structures between ISPs, a matter we have assumed thus far to
be unproblematic.
day pricing is to dissuade callers with low demand from using the most congested
period, encouraging them to shift their usage to a period when per minute charges are
much lower. This is optimal because the capacity investment costs required to handle
the traffic from subscribers with weak demand during peak usage are higher than the
present value of their willingness-to-pay (WTP) for the capacity needed to satisfy that
demand.
Both time-of-day and call minute/second charges have less relevance for the Internet
than for the PSTN. One reason for this is that peak usage of the Internet tends to be
less stable in time, and thus time-of-day pricing is unlikely to provide a fully effective
means of congestion management. This suggests that an attempt to raise session or
usage prices at a particular time of day when the Internet is most congested is likely to
miss periods of congestion, and even shift peak usage to another period. Indeed, in
some cases the congested period may be unstable even when there is no usage-based
pricing. If this were the case, then in order for time-of-day pricing to work
effectively as a congestion management device, pricing would need to shift
simultaneously with peak usage. For such a scheme to be workable those using the
Internet would need to know when the network was congested in advance of using it.
This would appear to require some sort of timely feedback mechanism – perhaps a spot
market.
Another problem with time-of-day pricing is that the Internet is made up of a great many
networks, and even in the same time zone peak usage may well occur at different times
in different places. Moreover, where an ISP provides transit to several ISPs, some of
which have rather different peak usage times, different prices would apply at the same
time of day to ISPs competing (on the margin) in the same market, even though their
traffic / time patterns are not the same. This may raise competition neutrality concerns.
A further problem is that unlike a switched circuit, which is rented exclusively by the
paying party for the duration of a call, packets of data on the Internet share capacity
with other packets, such that costs are packet-related more than time-related. A proxy
based on time-of-usage prices is therefore likely to forgo much of the efficiency
advantage of a packet-based marginal pricing system, and to the extent that the proxy
is inaccurate the relative efficiency costs may be quite considerable.
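A toy simulation (our construction, with a deliberately extreme assumption that the congested hour is uniformly random) of why an unstable peak defeats time-of-day pricing: a fixed peak-hour surcharge both misses most congested hours and surcharges many uncongested ones.

import random

random.seed(1)
PEAK_WINDOW = range(18, 22)          # hours the tariff treats as "peak"

hits = misses = mispriced = 0
for day in range(365):
    congested_hour = random.randint(0, 23)   # unstable peak: anywhere in the day
    for hour in range(24):
        priced = hour in PEAK_WINDOW
        congested = hour == congested_hour
        if priced and congested:
            hits += 1
        elif congested:
            misses += 1                       # congestion escapes the surcharge
        elif priced:
            mispriced += 1                    # surcharge applied to an uncongested hour
print(f"congested hours surcharged: {hits}, missed: {misses}, "
      f"uncongested hours surcharged: {mispriced}")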
An elegant and potentially highly efficient solution to the marginal cost pricing problem
with the Internet has been described by MacKie-Mason and Varian (1995) (M-V), and
referred to as the "smart market". From our perspective, the main attribute of M-V's
contribution is its pedagogic value in setting out the economic problems, in part through
the solution they propose. Theirs is more a description of what the ideal solution would
look like, rather than being a practical solution to congestion management (at least not
practical under present technology).
M-V's scheme would impose a congestion price during congested periods which would
be determined by a real-time Vickrey auction. The way this would work is that end-users
would communicate a bid price for their packets (perhaps expressed in Mb) just prior to
beginning their session. The Vickrey auction design is known to provide a strong
incentive for all end-users to communicate their maximum willingness to pay for the
item, in this case outgoing and more importantly returning packets, i.e. it provides "the
right incentives for truthful revelation".247 This is because the price actually charged to
any end-user is not the price each person bids, but is the price bid by the marginal user
– the last person to be granted access to the Internet under the congestion
restriction.248 All users admitted to the Internet during this period pay the same price.
Those end-users with a willingness to pay which is less than the market clearing price
would not obtain access at that time, and would have to try again later. When the
Internet was uncongested all bidders would be admitted and the price charged would be
zero.
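The mechanics of the smart market as just described can be sketched in a few lines. The bid values and capacity are invented for illustration; the key properties are that all admitted users pay the highest rejected bid, and that the price falls to zero when capacity is not exhausted.

def smart_market(bids: dict[str, float], capacity: int) -> tuple[dict[str, float], float]:
    """Return the admitted users and the uniform market-clearing price they pay."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    admitted = dict(ranked[:capacity])
    rejected = ranked[capacity:]
    # Admitted users pay the bid of the marginal (highest rejected) user,
    # not their own bid -- the Vickrey property that rewards truthful bidding.
    clearing_price = rejected[0][1] if rejected else 0.0
    return admitted, clearing_price

bids = {"alice": 0.9, "bob": 0.4, "carol": 0.7, "dave": 0.2}
admitted, price = smart_market(bids, capacity=2)
print(admitted, price)   # {'alice': 0.9, 'carol': 0.7} 0.4  (bob's bid sets the price)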
An additional attraction of the "smart market" is that under competitive conditions it
provides correct economic signals for ISPs to increase capacity. This would occur when
the marginal revenues from admitting further users at peak usage are greater than the
marginal cost of adding capacity, thus communicating a profitable investment
opportunity. Network capacity will thus be maintained so that marginal revenue equals
marginal cost, which is the most economically efficient outcome. The smart market may
lack practicality, but part of its efficiency advantages can be captured through
grade-of-service (GoS) pricing.
8.1.3 Grade-of-Service pricing
The main problem currently with VoIP and 'real-time' interactive services is not that
networks cannot provide the QoS statistics needed, but rather that only a limited
number appear to do so, and these tend to be private IP networks, i.e. intranets. ISPs
providing public Internet services have too little incentive to develop VoIP services
where packets containing voice conversations pass over other networks. We discuss
technical aspects of the problem of off-net QoS in Chapter 4 and in Annex B.249,250
247 In 1996 William Vickrey received the Nobel Prize in Economics for his early 1960s work on the
economic theory of incentives under asymmetric information.
248 The one exception to this statement is the marginal user whose WTP equals the market clearing
price.
249 We gather that firms offering VoIP tend either to be using private IP networks in order to provide
acceptable service quality, or are not using hot-potato routing in an effort to improve QoS. The latter
approach would appear to raise problems due to the non-geographical nature of Internet addresses.
250 The very much higher bandwidth required for real-time video-streaming would in any case presently
limit this service to end-users with sufficiently high speed network access. Indeed, the type of quality
associated with HDTV may be beyond the capability of existing telecommunication access
infrastructure for many end-users.
The M-V solution was published in the mid 1990s, and while this type of auction still has
relevance for congestion management on the Internet, there have been many technical
developments which have diverted interest away from the smart-market solution.
Perhaps most significantly, technological developments will enable packets to be
treated differently, such that there may be multiple virtual Internets, each with different
QoS attributes, with packets being tagged according to the QoS option chosen. Even
different GoSes for connection admission may be introduced, whereby users with
strong demand for instant admission buy an especially high probability of getting
access at any time they require it, while users whose demand for admission is less
strong buy a service that provides a lower likelihood of obtaining admission at any
given time.
ISPs could in principle tag packets according to the QoS they wanted from their transit
provider, although the initial request would presumably need to come from the end-user
in order to justify the higher prices that backbones would ask to transit or terminate
packets that received a higher QoS.
A system that enabled end-users to select, from several different QoS options, the QoS
that would apply to a particular flow of packets would result in their ISP billing them
according to the numbers and types of tagged packets sent. By itself such a system
would not be ideal if it meant that ISPs would still have to work with crude price
structures. This would be the case if packets were differentiated according to the QoS
provision requested, but there was no marginal congestion price. A subscriber may pay
a premium to use the higher GoS/QoS, possibly based on bits of usage as well as a
monthly subscription, but she would then be able to send all her packets during peak
usage, or at other times – there would be no difference in the price she would pay. This
means that for each GoS/QoS there would be no explicit mechanism aligning the
demand for the service at congested periods with ISPs' incentives to invest in
capacity, making congestion avoidance by the ISP difficult, even though average
revenues on such a network may be more than high enough to cover average costs. As
the higher GoS service would be sold as a real-time premium quality service, there
would be an incentive to maintain that quality, but over-provisioning would still be
required due to the lack of marginal cost price signals. We would expect there to be a
greater risk with this type of pricing structure that annoying and possibly commercially
damaging congestion delays would periodically materialise. Such solutions to the
congestion problem would therefore be rather incomplete, although they may offer
sufficient refinement to enable GoS/QoS development and the widespread provision of
real-time services on the Internet.
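A sketch (our construction; the service classes and per-packet prices are hypothetical) of the billing arrangement just described: the ISP counts tagged packets per GoS/QoS class and bills accordingly. Note that the charge is the same however the packets are timed, which is precisely the missing congestion signal discussed above.

from collections import Counter

# Hypothetical per-packet prices for each service class
PRICE_PER_PACKET = {"best_effort": 0.0, "priority": 0.0001, "realtime": 0.0005}

def bill(packet_tags: list[str]) -> float:
    """Monthly usage charge computed from counts of each class of tagged packet."""
    counts = Counter(packet_tags)
    return sum(PRICE_PER_PACKET[tag] * n for tag, n in counts.items())

month = ["best_effort"] * 500_000 + ["realtime"] * 80_000
print(f"usage charge: {bill(month):.2f}")   # 40.00, whenever the packets were sent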
8.1.4 Conclusion regarding the congestion problem
One of the main points to come out of this section is that one size does not fit all. There
is increasing recognition that congestion management on the Internet is an economic as
well as a technological problem, and a solution requires a combination of economic as
well as technological developments. Greater use of pricing as a mechanism for demand
management and as a means of enabling ISPs to provide improved quality to end-users
is now an accepted vision amongst Internet design teams. While technological solutions
that enable better demand management through such things as multiple GoS/QoSes
appear to be some way off, considerable effort is apparently being directed toward
finding solutions to these problems. In this regard, we have not observed a level of
market failure that would warrant a recommendation for public sector intervention.
8.2 Strategic opportunities of backbones to increase or use their market power
8.2.1 Network effects
While fragmented and often in competition with each other to attract subscribers and/or
sell transit services, networks operating Internet infrastructure are also complementary
to each other. This complementarity is facilitated through interconnection: i.e. the
average value per user of a small network not interconnected with others (say one with
1,000 members) is typically only a fraction of the average value per user of a network of
interconnected networks such as the Internet, with its hundreds of millions of members.
This suggests that as the number of network members grows, the amount people are
prepared to pay to be a member also increases. These are known as network effects.
They are very common. Perhaps the most common thing that displays network effects
is language. The network effect derives from the number of people who can directly
communicate with each other.251 The smaller a community of language speakers is, the
greater is their need for other languages. Internet subscriptions are similarly
complementary to each other.252 These network benefits have been referred to as direct
network effects.253
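A stylised calculation of the direct network effect, under the simplifying assumption (ours, not the study's) that the value a subscriber derives is proportional to the number of other users reachable:

def value_per_user(reachable_users: int, value_per_contact: float = 0.0001) -> float:
    """Direct network effect: value grows with the number of other reachable users."""
    return value_per_contact * (reachable_users - 1)

print(f"{value_per_user(1_000):.2f}")         # stand-alone network of 1,000 members
print(f"{value_per_user(550_000_000):.2f}")   # member of the interconnected Internet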
As well as an estimated 550 million subscribers,254 there are also thousands of firms
and organisations providing information and services to these subscribers over the
Internet, most commonly through web-sites. Network effects also drive growth in these
services and in web-site numbers. The larger the network, the more firms and
organisations are likely to want to supply network accessories / add-ons / compatible
services. These complementary services are a spill-over benefit from the primary
251 The argument can also be made for machines which need a common interface or common software
to communicate with each other.
252 Indeed, in terms of antitrust analysis they also form a "chain of substitutes" for each other, and
although individually very few subscriptions are adequate substitutes for each other, each point of
access is virtually identical with millions of others.
253 Liebowitz and Margolis use the term synchronisation value to describe network effects. See
http://wwwpub.utdallas.edu/~liebowit/netpage.html
254 See http://www.nua.net/surveys/how_many_online/world.html
network (the Internet and the telecommunications networks it runs over), and they are thus
referred to as indirect network effects. They are, however, very important in explaining
Internet development. They represent markets for complementary products which in
their growth tend to mirror the growth of the primary network, which tends to be most
rapid through the early to middle stages of its overall growth. This is when network benefits
are increasing most.
Examples of indirect network effects are very numerous. Where there are two computer
operating systems, one used by 90% of all computer users, the other used by 10% of
users, the complementary range of software operating on the little-used
operating system would be expected to be much smaller than the range designed to run
on the most popular operating system. One of the reasons for this is that the chances of
commercial success for complementary products, where most costs are sunk prior to
any sales, are greater when supplying to the larger network. Another example of indirect
network effects is spare parts for cars. Getting parts for cars of obscure origin is much
more difficult than for a common make such as Ford or VW.
In regard to the Internet, web-sites are the equivalent of complementary software, or, in
the car example, spare parts. The indirect network effects concern products that
typically compete with each other (i.e. are substitutes for each other), but have a
complementary impact on each other in that they contribute to the expansion in value of
the network and result in increased demand for subscriptions and increased supply and
consumption of complementary products. Thus, while large ISPs compete to host
web-sites, which compete between themselves (often for "hits"), web-sites are also
complementary to the value of the Internet. Imagine how much less interest there would
be in the Internet if all you could do with it was send email, perhaps with files attached.
The direct and indirect network effects that characterise the Internet can be described
as a virtuous circle. Where networks are controlled by diverse interests they sometimes
give rise to network externalities, which are benefits or costs not taken account of
during economic decision-making. Where externalities exist, social benefits and private
benefits diverge, and where this divergence is severe it creates a justification for
regulatory intervention. We now look at public policy issues where network effects
involving commercial traffic exchange on the Internet may also give rise to externalities.
8.2.2 Network effects and interconnection incentives
The range of possible strategies that might interest Internet backbone players is
immense, depending on such things as industry structure, the specifics of economic
relationships between firms, the structure of costs, information asymmetries, and
expectations about the future behaviour of firms, consumers, and regulators
(government). This makes it impossible to outline all the strategies that might end up
being followed by ISPs. Potentially, if one element changes, then much of the future
changes too.
What economists do instead is try to identify a set of economic relationships that
encapsulate the most important aspects of the industry and the way the participants
compete or trade with each other. The techniques used to do this are commonly used in
the analysis of industrial organisation, and these are referred to as models of games, or
game theory. The idea is that the key incentives that motivate the players' actions should
be captured in dynamic models in which the players plan their own actions with some
understanding of the strategies available to each other, second-guessing one another's
strategies even though information about them is far from perfect.
The great attraction of using games involving self interested
players is not in predicting things;255 the nature of strategic interactions in complex
settings makes it unlikely that the conditions specified in any particular model of a game
will remain wholly appropriate in time, or indeed that the players will actually follow their
best strategies.256 Events will occur that are not allowed for in the model, and these will
tend to obscure its predictions, or perhaps invalidate its parameters. Models of games
can, however, be interpreted as predictions about what might
evolve in the future if certain things happen. If these things did happen, then, taken as a
whole, models of games will have predictive power to the degree that they have captured
the essence of the underlying economic incentives and relationships of the players.
However, this is not how these models are currently used.
Rather, what games mainly provide us with is insight into the behaviour of firms
(decision-makers) and markets, and how that behaviour (strategy) relates to particular
industrial structures and relationships among the players. These strategies, or perhaps more
importantly the rationale behind them, are frequently not intuitive to observers. But to
the extent that models of games capture the essence of a market they provide us with
insight into some of the main elements at work in shaping markets and industry
structure and performance.
While the Internet is a young, complex and rapidly changing industry, several authors
have attempted to analyse strategic incentives / opportunities that might be important
for shaping the Internet. We draw on this work below in setting out a number of
economic relationships, and in discussing how changes in these relationships might
work on the incentives / opportunities of the players, most especially where they
potentially involve significant market failure and can thus give rise to public policy
concerns.
255 Nor can they be used to select the best strategy that a firm should follow.
256 Although there is evidence that people do on average play their best strategy, the complexity of
games would seem to be a cause of non-systematic errors in decision-making. See Mailath (1998) for
information and analysis.
Of the many issues that were discussed in the 1998 investigation by the European
Commission into the proposed merger between MCI and WorldCom,257 and which
re-emerged in 2000 in the proposed merger between WorldCom and Sprint,258
three stand out as potentially having the most significant implications for
the issues we are addressing in this section. They are closely related, all of them
concerning interconnection. The questions posed relate to whether the merged entity
would control a sufficiently large proportion of the Internet that it had:
• An incentive to degrade the quality of interconnection (which can also include non co-operation in overcoming QoS problems with off-net traffic exchange);
• An incentive to introduce proprietary standards (e.g. to differentiate on-net services from off-net services), and
• The ability to impose interconnection prices that are significantly above the relevant costs involved.
Fundamental in finding answers to these questions will be the importance of network
effects, and of strategies that would improve the competitive position of one or more
networks vis-à-vis the others. The discussion in this section provides information that is
relevant to answering these questions.
Where there are just two networks, interconnected with each other, and the quality of
interconnection between them is degraded (e.g. through there being insufficient
capacity at the border, other off-net / on-net QoS problems, or perhaps termination of
interconnection), both networks initially face a demand reduction. In general, networks of
equal size gain nothing by degrading the interconnection they provide to each other.
However, as the difference between the size of the two networks grows,259 it will at
some stage become advantageous for the larger network (ISP(A)) to cease co-operation
to improve off-net QoS, or to start degrading (or refusing) interconnection between the
two networks. When this occurs we can say:
The loss in the value of ISP(A)'s network due to the drop in demand following on from
a loss in network benefits associated with lost or degraded interconnection
is less than
The relative gain for ISP(A) (the larger network) in terms of its competitive advantage
over its smaller rival ISP(B).
257 WorldCom/MCI Case No. IV/M.1069.
258 Not yet published.
259 It is by no means clear how size should be defined, except that the approach to doing so would take
account of a combination of factors, mainly the numbers of subscribers and the value each contributes
to the overall value of the network.
In this case ISP(B) will lose a larger proportion of its network value than will ISP(A) due
to the degradation (or loss) of interconnection, such that subscribers value membership
of ISP(B) less than they do membership of ISP(A).260 Where one of the two networks is
dominant, the larger network enjoys a relative network advantage which increases with
its dominance.
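The asymmetry driving this condition can be sketched with a simple reachability proxy for network value (an illustrative assumption of ours, not a model from the literature discussed here): cutting interconnection costs each network the customers of the other, which is a far larger share of the smaller network's reachable base.

def relative_loss(own_base: float, other_base: float) -> float:
    """Fraction of reachable customers a network loses if interconnection is cut."""
    before = own_base + other_base   # everyone reachable while interconnected
    after = own_base                 # only the network's own base afterwards
    return 1 - after / before

# ISP(A) with 80m subscribers vs ISP(B) with 20m: A loses 20% of its
# reachability, B loses 80% -- the asymmetry that can make degradation pay for A.
print(f"{relative_loss(80e6, 20e6):.0%} vs {relative_loss(20e6, 80e6):.0%}")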
Where there are three ISPs, however, say one with 50% of the market, Crémer, Rey
and Tirole (1999) (CRT)261 show that under certain conditions the larger network may
strategically benefit by degrading the quality of interconnection with one (but not both)
of its two ISP competitors. The success of this strategy will be undermined to a degree
if transit between ISP(B) (B from here on) and ISP(A) (A from here on) were
provided by ISP(C) (C). By not providing transit:
• C has access to everyone who is connected (it is still interconnected to both A and B);
• A has access to ¾ of all those connected, and
• B only has access to ½ of those connected.
C's dominant strategy is therefore to tacitly collude with A as it is likely to be the
greatest gainer from A's strategy.
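The connectivity fractions above follow from simple arithmetic, assuming market shares of 50% for A and 25% each for B and C, with the A-B link degraded and C providing no transit:

shares = {"A": 0.50, "B": 0.25, "C": 0.25}
links = {("A", "C"), ("B", "C")}     # the A-B link is cut; C carries no transit

def reachable(isp: str) -> float:
    """Share of all customers the ISP's subscribers can reach (incl. their own)."""
    peers = {other for link in links if isp in link for other in link if other != isp}
    return shares[isp] + sum(shares[p] for p in peers)

for isp in sorted(shares):
    print(isp, reachable(isp))   # A: 0.75, B: 0.5, C: 1.0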
CRT's model provides interesting insights, although there are several features to it that
call into question its general applicability. We list these in more detail in Annex D.
CRT's paper has been very influential.262 The main conclusion that has been taken
from the paper is that:
a strategy of degrading interconnection by a dominant ISP may be profitable,
leading to network tipping in favour of a dominant ISP.
However, CRT did not expressly model the circumstances which needed to prevail for
this conclusion to hold; they just showed that it was possible. Thus, non co-operation
regarding connectivity between rivals is possible, but CRT's paper does not give us very
much insight into the conditions under which it would be likely.
In a recent paper by Malueg and Schwartz (2001) (M&S), the authors investigate the
likelihood that a dominant ISP could succeed with a strategy of degraded
260 Rather than think of degradation or a lack of co-operation, it may be helpful to think of firms either
agreeing to interconnect or not. The analysis is applicable in either case. We can say that if IBP(A)
refused interconnection with IBP(B), IBP(A) would have a larger installed base than IBP(B), and if the
value of connectivity is high for consumers, this can make IBP(B)'s business untenable in the long-run.
261 Crémer, Rey and Tirole acted as advisers to GTE and tabled their analysis of some of the possible
strategic implications of the merger on GTE's behalf.
262 It appears to have been important in the decision reached by the EU competition law authorities in
1998 and 2000.
interconnection such as was investigated by CRT. M&S extend CRT's model in order to
investigate the ranges over which the various outcomes are possible (including interior
solutions and various tipping outcomes, the latter occurring when customers start to
move in increasing numbers to the larger network, and the incentive to do so increases
as each customer transfers). CRT's paper focussed on separating stable from unstable
outcomes, while the intention behind M&S's approach is to provide a more complete
picture of the likelihood for success of a strategy involving degraded interconnection by
the largest network. M&S investigate the possibility that global degradation by a
'dominant' ISP (A) may be profitable where there are three other smaller competing
ISPs of approximately equal size. They then analyse the ranges in which a targeted
degradation strategy can be profitable for A. We discuss the model and its conclusions
in more detail in Annex D.
The main points to come out of M&S's paper are that, according to the features of their
model (which is similar to that of CRT), for degradation of interconnection with all other
ISPs to be a profitable strategy implies:
- implausible values for the model's variables, and
- that the dominant backbone has a market share263 of significantly greater than 50%.
M&S also investigate whether a strategy of targeted degradation of interconnection
might be profitable where global degradation is not. They follow CRT in assuming a
market share of 50% for the largest ISP – the one seeking to strengthen its market
position. The model indicates that:
• targeted degradation is an unlikely outcome except in circumstances that are quite different to those existing presently in the Internet.
For targeted degradation to be a viable strategy:
• the value of network effects would need to be surprisingly high, and
• the marginal cost of serving new customers would need to make up an implausibly high proportion of average customer costs.
In M&S's framework it is also the case that degradation becomes less plausible as the
number of ISP competitors increases.
The work of CRT and M&S has quite general implications for the likely future
development of the Internet, although due to the nature of this research we should not
263 Defined simply as the proportion of the total customer base.
label them as much more than elegantly supported hypotheses. So long as no single
entity gains a market share of about 50%,264 it appears that all major backbones have
an incentive to pursue seamless interconnection. This implies that they have incentives
to overcome off-net QoS problems, and to co-operate in the area of standards, rather
than trying to develop an on-net services strategy to differentiate themselves from their
main competitors. Indeed, these findings may also hold true when a single ISP has a
market share of significantly more than 50%.265
We have noted in Annex D that the CRT and M&S models provide a simplified
representation of the fundamental economic relationships that exist between backbone
networks. This is the usual case with modelling: it is necessary to enable researchers to
separate the most important factors driving firms' strategies and industry structure from
those that are of second order importance.
One of the factors that was not addressed in detail by the two papers is the possible
importance of the Internet's hierarchical structure and the way this has become looser in
recent years, in regard to the ability of the largest backbones to play for control by
partitioning off network benefits.
Recent papers by Milgrom, Mitchell and Srinagesh (1999) (MMS), and Besen, Milgrom,
Mitchell and Srinagesh (2001) (BMMS), seek to analyse the competitive effects of a
looser hierarchy (as has occurred, for example, through the growth in secondary peering
and multi-homing) on the market for backbone network services. In both cases their
approach is to look at the situation in terms of a bargaining game that focuses on two
networks, where the one that perceives it has the better bargaining position is prepared to
get tough in order to extract as much of the benefit of a trade (i.e. interconnection) as
it can. As with CRT and M&S, during disputes the relative QoS between networks is
stated as a function of the relative size of each network's customer base. While service
disruption would result in both networks losing customers, the smaller network will
suffer the larger loss. BMMS show that compared to a situation where alternatives to
interconnection with core ISPs are not available (e.g. secondary peering, multi-homing,
and caching and mirroring), the availability of these services reduces the bargaining
advantage the larger network (A) has over the small network (B). This is because where
networks are not of equal size an increase in the proportion of subscribers who can
communicate over more than one interface has a greater relative impact on B
compared to A, thus reducing A's relative advantage. The availability of secondary
peering, multi-homing, and firms that move content closer to the edges of the Internet,
reduces the larger network's ability to extract concessions from small networks during
bargaining over interconnection. BMMS and MMS show that increases in the proportion
of customers that have alternatives if one point of interconnection is degraded or cut
264 On the basis of M&S, it will likely require a market share of over 60% before non co-operation in
interconnection becomes a viable strategy.
265 See Annex D and M&S (2001) for detailed discussion.
are similar in their effect on market power to reductions in the market share of the larger
network.
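The intuition can be sketched under the illustrative assumption (ours) that only single-homed customers lose connectivity when an interconnection point is cut: as the multi-homed share rises, the smaller network's absolute loss shrinks, and with it the larger network's bargaining leverage.

def connectivity_loss(own_share: float, multi_homed: float) -> float:
    """Fraction of all customers that become unreachable from this network,
    assuming multi-homed customers on the other side remain reachable."""
    return (1 - own_share) * (1 - multi_homed)

for mh in (0.0, 0.3, 0.6):
    loss_a = connectivity_loss(0.8, mh)   # large network A (80% of customers)
    loss_b = connectivity_loss(0.2, mh)   # small network B (20% of customers)
    print(f"multi-homed share {mh:.0%}: A loses {loss_a:.0%}, B loses {loss_b:.0%}")
# B's absolute loss shrinks as multi-homing spreads, weakening A's leverage.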
Indeed, in a recent paper by Laffont, Marcus, Rey and Tirole (2001) (LMRT), the authors
find that where competition between backbones is vigorous (backbones being (perfect)
substitutes on the consumer side while having market power on the website side of the
market), large ISPs that are not themselves backbones may be able to secure
interconnection prices with core ISPs at less than incremental cost if they can first obtain
peering arrangements with sufficient other core ISPs. The reason this can occur is that,
having obtained peering arrangements with several backbones, an ISP can present a
core ISP that has not yet agreed to enter into a peering relationship with it with a
take-it-or-leave-it offer. If the core ISP were to decline the offer it would
lose connectivity with all those customers of the ISP who were not multi-homed, which
would undermine its ability to compete with the other backbones. Under these
circumstances core ISPs are likely to prefer peering relationships to customer transit
contracts, which are shown to give interconnection prices in line with the off-net pricing
principle (discussed below). The elasticity of customer switching rates with respect to
service degradation, and the speed of this customer response, will however tend to
undermine the payoff to this strategy for ISPs.
8.2.3 Differentiation and traffic exchange
Most firms try to differentiate themselves from their competitors, for example by offering
services or goods marketed according to different attributes from those their rivals
offer. To the degree that a firm succeeds in this it tends to lower the
level of price competition.266 However, competition tends to restrict firms' ability to
differentiate their products from others.267,268 In the late 1990s several authors
suggested that core ISPs might seek to differentiate themselves from their rivals by
offering additional on-net Internet services that were not available off-net. These might
include, for example, the option for end-users and websites to choose from several
different QoS options (not offered to other backbones), such as those presently possible
on ATM networks.269 In this section we ask what the opportunities are for, say, the
market leading core ISP to differentiate itself from rivals (i.e. increase its market power)
in a way that concerns traffic exchange – the subject matter of this report. Differentiation
that does not focus on traffic exchange or the exploitation of network effects is not
discussed in this report.
266 See Salop and Stiglitz (1977).
267 See Shaked and Sutton (1982).
268 The degree to which firms can in practice differentiate their products from their competitors depends
among other things on a combination of strategic and non strategic entry barriers.
269 See Annex B for details.
Our focus here is on the potential for several distinct services, each displaying network
effects and not substitutes for each other, to be jointly provided over the Internet. The
example we have in mind is basic Internet service with its bouquet of services (WWW,
FTP, E-mail), which for argument's sake we consider as one, and voice telephony,
which is presently provided over an alternative platform – the PSTN – but can
potentially also be provided over the Internet. As we have noted already, problems at
network borders, which we abbreviate here as 'the quality of interconnection', are one
reason why off-net VoIP service is not presently a good substitute for voice services
provided over the traditional circuit switched network.
If we assume that on-net VoIP is a near perfect substitute for the PSTN, and that voice
communication is effectively seamless for Internet users between their ISP and the
PSTN, there appears to be the potential for a more complex strategic role for
interconnection than has been outlined by research so far. Might the market leading ISP
(or coalition of ISPs) be able to use its larger VoIP (on-net) network to leverage a
competitive advantage into the market for basic Internet service? Might this more
complex scenario mean that network tipping is a more likely outcome than was
identified by M&S?
The scenario we have in mind is the following: under what circumstances would a
market leading ISP find it profitable to degrade the quality of interconnection (or not
co-operate with rivals to overcome off-net problems) to prevent off-net VoIP from
developing, given that traditional Internet service would continue to be provided
according to a "best efforts" QoS? Would this perhaps occur at lower market shares
than is indicated by M&S in the case of interconnection degradation involving basic
Internet service? A closely related question is whether the network advantage in
providing voice service to its own subscribers would be a telling factor in convincing
existing subscribers to move from other ISPs to the market leader, and in persuading
first-time subscribers to choose the market leader ahead of other ISPs.
In deciding on the scenario to be modelled the following would seem to be relevant:
• Where off-net QoS problems persist, the voice service offered would be VoIP for on-net calls, the scope of which would depend on the ISP's customer base; for off-net calls the ISP would transport the VoIP traffic to the point where it was most cost effective to hand it over to a PSTN network, from where a circuit would be established to the called party. Figure 8-1 shows the situation of the two networks.
• The ISP's customers would receive lower prices, especially for on-net calls, although most off-net calls would also be cheaper.270
270 One study comparing the costs of providing VoIP at the local exchange with the costs of PSTN
service at the local exchange suggests an approximate 15% cost saving. However this study does
not take account of the economies of scope enjoyed by ISPs that would provide this service, nor
does it take into account that existing PSTN prices do not reflect the underlying economic
• The above two bullets imply that the ISP with the largest customer base would have a competitive advantage, as its network of lower priced on-net calls would be larger than that of its competitors.
Figure 8-1: VoIP; 'on-net' and 'off-net'.
[Diagram: subscribers of ISP(A), ISP(B) and ISP(C) on the Internet, connected through routers, H.323 / SIP gateways and ENUM address translation to PSTN phones via a PSTN switch. On-net calls receive a QoS adequate for VoIP; off-net Internet calls receive a 'best effort' QoS too poor for VoIP; off-net IP/PSTN calls require a complex routing policy, tunnelling and dynamic multi-homing, with PSTN backup.]
Source: WIK-Consult
Clearly such a strategy may be undermined by smaller ISPs deciding to co-operate to
overcome the off-net traffic problem, thus enabling them each to more than match the
on-net customer base of the leading ISP, assuming the larger ISP does not yet have a
majority of the total market. This suggests that the market leader’s strategy would only
last while its competitors were unable to overcome the off-net QoS problem. Knowing
this, the market leader may agree to co-operate with a limited number of rivals, much as
was analysed above by CRT and M&S when they considered targeted degradation of
interconnection. Would the likelihood of network tipping feature differently than it does in
M&S, and would it be a more likely outcome than their results suggest?
Without actually devising the model and doing the research, we cannot provide a firm answer to such questions, but we suspect not.
Figure 8-2:
Degraded interconnection and non-substitutable services having strong network effects
[Diagram: two linked panels. 1. Traditional Internet: a) network benefits (-); b) competitive advantage (+), comprising (i) existing subscribers and (ii) future subscribers. 2. IP Voice (on-net): c) network benefits (-); d) competitive advantage (+), comprising (i) existing subscribers and (ii) future subscribers. Arrows mark the strategic interaction between 1 and 2.]
Source: WIK-Consult (own construction)
A possible list of the main elements that would need to be considered by this research
is shown in Figure 8-2. The signs in parentheses represent whether degraded interconnection between the market leading ISP and one or more of its competitors has a positive or negative effect on the leading ISP. In the case of c), all ISPs' voice customers would in any case be networked with all other voice subscribers, either on-net or through the PSTN, as is indicated in Figure 8-1. However, for the market leader
its advantage is that its network benefits relate to the size of its on-net customer base.
There will be some loss of value for the market leader from not having ubiquitous VoIP
(i.e. an off-net VoIP service) because off-net calls will be more expensive for its
subscribers as calls will need to interconnect with the higher cost PSTN. The increase
in its competitive advantage would appear to more than compensate for this loss, however.
Backbones will likely pursue other forms of service differentiation that do not have direct implications for traffic exchange, but concern other features of the value proposition marketed to subscribers. These are not considered here.
8.2.4 Fragmentation and changes in industry structure
If there are services that are only available on-net due to the persistence of QoS
problems at borders, then installed bases of networks potentially become vital in the
competitive process as websites and customers who wish to make use of these new
services will ceteris paribus want to connect to the largest network. This would be a
case where firms compete on the basis of the network benefits that they can offer their
customers. Competition in such a situation tends to tip in favour of the network with the
largest installed base.
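To make the tipping mechanism concrete, the following toy simulation (our own illustration, not part of the study, with purely hypothetical parameter values) captures the basic dynamic: when part of a subscriber's utility depends on the provider's installed base, even a modest initial lead compounds over time.

    # Toy illustration of network tipping; all parameter values are hypothetical.
    shares = [0.55, 0.45]        # starting installed bases of two competing ISPs
    stand_alone_value = 1.0      # value of basic, universally connected service
    network_weight = 0.5         # weight of the on-net-only service benefit

    for period in range(20):
        utilities = [stand_alone_value + network_weight * s for s in shares]
        leader = utilities.index(max(utilities))
        # each period, 5% of the smaller network's subscribers switch over
        flow = 0.05 * shares[1 - leader]
        shares[leader] += flow
        shares[1 - leader] -= flow

    print([round(s, 2) for s in shares])   # -> [0.84, 0.16]: the market tips

The larger network's share grows monotonically towards one; with network_weight set to zero the two utilities are equal and no tipping occurs, which is the seamlessly interconnected benchmark.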
In this regard, failure to overcome off-net QoS problems when the next generation
Internet becomes available on-net would appear to be similar in its effect to introducing
proprietary and conflicting standards between backbones. Off-net QoS problems could
also be seen as a form of degraded interconnection between ISPs.
In such cases pricing strategies may well also diverge from what would prevail on a
seamlessly interconnected Internet. Aggressive pricing, also known as 'penetration pricing', may be employed by the ISP with the most market power if it expects off-net
QoS problems to persist into the era of the Next Generation Internet. Putting the fear of
regulatory intervention to one side, the intention behind this strategy would likely be to
monopolise the industry - a winner-takes-all strategy.271
If backbones’ expectations are that off-net QoS problems will persist horizontally and
vertically (i.e. across backbones, and between backbones and their ISPs) after next
generation features become widely available on-net, then their incentives may be to
integrate with other backbone networks and with their ISP customers, with the primary
purpose of re-configuring several networks into a larger network in a way that increases
the size of the installed 'on-net' base for which next generation services can be
accessed.
Where such vertical integration activity began, downstream ISPs may well want to
merge with each other so as to improve their negotiating position with backbones
271 This strategy sacrifices short-run profits for higher long-run profits, much as occurs with predatory
pricing.
wishing to take them over. If expectations of market players are that off-net QoS
problems will persist following the availability of next generation services on-net, the
short to medium term outlook for industry structure may be a very limited number of
competing vertically integrated Internetworks comprising what were formerly local and
regional ISPs and backbones. We could describe this outcome as fragmentation of the
Internet, although as initially envisaged, connectivity regarding basic Internet services
would still be universal.
In actuality of course, backbone operators will consider what they expect various
regulatory authorities would do if these events started to materialise. The outcome may
depend on whether any individual backbone judged that, compared to a strategy that
does not "cross the line" and trigger industry regulation, "going for broke" and thus
attracting industry regulation would provide it with a comparatively superior position. Under these circumstances the strategies of other core ISPs will have an effect on their competitors' strategies. Depending on the specific circumstances that evolve, games of this type can easily degenerate such that the core ISPs would not be able to avoid industry
regulation, even though this may imply an inferior outcome for each core ISP compared
to the case where none of them crossed the line and triggered regulatory intervention,
but as core ISPs could not commit to follow a less destructive strategy they could not
prevent the outcome.272
Thus, while network benefits of the traditional Internet may be more important for a
large core ISP than sacrificing them for competitive advantage, we can envisage other
situations involving incremental services in which the network with the largest on-net
customer base for traditional Internet service may want to differentiate itself by offering
a new service on-net only. Among other things, such a decision will depend on the ISP's perception of the additional value of the network benefits of the new service to end-users of the larger network. Would a new on-net service offer enough
value to customers to give the largest network a competitive advantage over its rivals
that was greater than the lost network benefits implied by an on-net only new service?
Moreover, if this strategy paid off it may be possible that one (or other) core ISP would
begin a policy of extending the services it provided on-net only, to services that are
presently available off-net, such as web browsing.
There are many other structural changes possible for the Internet, such as integration
with equipment manufacturers, and perhaps also with a PSTN operator. The underlying
rationale would need analysing, but leverage of traditional market power across
markets, the use of ‘traditional’ vertical restraints, or the possible leverage of network
effects, are all possible candidates. We have not addressed these issues in this study,
as for one thing the issues involve so much more than commercial traffic exchange on
272 This type of game is known as "the prisoner's dilemma". See Kreps (1990, pp 37-39) for an account of the prisoner's dilemma game.
the Internet. There are clearly many possibilities for research involving combinations of
these motivations.
8.2.5 Price discrimination
8.2.5.1 Price discrimination between peers
The only researchers we are aware of who have focussed on the issue of price
discrimination between peers are Laffont, Marcus, Rey and Tirole (2001) (LMRT). Their
model does not have multiple hierarchical layers of ISPs. In LMRT, ISPs serve end-users and websites, and there is no transit traffic (i.e. ISPs are fully meshed). Moreover,
interconnection between ISPs is modelled to show the factors determining the
interconnection price. This is needed in order for the model to analyse interconnection
pricing incentives. LMRT show that in a wide variety of circumstances where this aspect
of the industry is already relatively competitive, the incentive of ISPs is to price
interconnection to customers and websites as if these customers and websites
accessed the Internet through other ISPs, i.e. as if all their traffic was off-net. LMRT call
this the "off-net cost pricing principle".
Moreover, the level of the interconnection charge (i.e. transport and termination273)
determines how costs are distributed between both sides of the market – websites and
end-users (senders and receivers).274 A higher or lower interconnection charge has a
counter-balancing effect on the backbones' revenues in that it affects backbones'
incoming and outgoing interconnection costs in regard to off-net traffic (and in this way
governs the distribution of costs between both sides of the market). A higher
interconnection charge puts up website costs (as they have an outgoing traffic bias) and
lowers the cost for end-users. An example will serve to illustrate: when one network
(call it A) steals a website from another network (call it B), then A must pay B to
terminate traffic to B's end subscribers, this being traffic that A previously did not see at
all, i.e. subscribers and websites were on the same network. Moreover, A loses interconnection revenues that it previously received from B for terminating traffic sent by
the stolen website (connected to B) to A's subscribers. Origination costs have also
increased for A as a greater proportion of the traffic is now originated on its network.
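The accounting in this example can be made concrete with a short numerical sketch. The notation follows the LMRT quote reproduced in footnote 275 (co for origination cost, ct for termination cost, which includes transport under hot-potato routing, and a for the termination charge); the cost figures themselves are purely illustrative.

    # Purely illustrative per-unit figures (in arbitrary cost units).
    c_o = 4            # origination cost
    c_t = 6            # termination cost (incl. transport, given hot-potato routing)
    a = 5              # termination charge paid to the rival network
    c = c_o + c_t      # total cost of carrying one unit of traffic entirely on-net

    # Stealing a consumer: the rival's website traffic to that consumer must now
    # be terminated at cost c_t, but earns termination revenue a.
    print("net cost per unit, stolen consumer:", c_t - a)   # -> 1

    # Stealing a website: its traffic is now originated on-net (cost c_o) and pays
    # the rival a termination charge a per unit delivered to the rival's users.
    print("net cost per unit, stolen website:", c_o + a)    # -> 9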
The same factors apply when new traffic is generated, when end-users' demand for traffic is price sensitive (or elastic), and when several different QoS classes operate.275 In this
273 The vast majority of transport costs are borne by terminating networks due to the operation of "hot-potato" routing.
274 Analysts commonly assume that end-users form a different group than do content providers. In
practice this is not absolutely true as end-users sometimes post content, and vice versa. However, the
adoption of this simplification does not invalidate the analysis.
275 LMRT use the following example to illustrate: "Suppose for example that backbone 1 "steals" a consumer away from backbone 2. Then, the traffic from backbone 2's websites to that consumer, which was previously internal to backbone 2, now costs backbone 1 an amount ct to terminate [where c = co + ct, co ≡ the cost of origination, and ct ≡ the cost of termination, which given hot-potato routing includes transport] but generates a marginal termination revenue a; the opportunity cost of that traffic is thus ct – a. And the traffic from backbone 1's websites, which costs initially co for origination and a for termination on backbone 2, is now internal to backbone 1 and thus costs c = co + ct; therefore, for that traffic too, the opportunity cost of stealing the consumer away from its rival is c – (co + a) = ct – a. A similar reasoning shows that stealing a website away from the rival backbone generates, for each connected consumer, a net cost co + a: attracting a website increases originating traffic, which costs co, and also means sending more traffic from its own websites to the other backbone's end users, as well as receiving less traffic from the other backbone (since the traffic originated by the stolen website is now on-net); in both cases, a termination revenue a is lost." (Laffont et al 2001, p8).
way a higher interconnection charge shifts more of the burden onto websites and
enables end-user subscription charges to be competed downwards.
Perhaps counter-intuitively, end-users as a group prefer a lower interconnection charge
and a higher end-user charge than would result from a competitive outcome, due to the
indirect network benefits generated for end-users by websites, which would be discouraged by a higher interconnection charge (see Section 7.1.2.2 on indirect network effects).
Market-based prices fail to internalise this externality. However, where websites face a lower interconnection charge they are less inclined to employ compression
technologies. Thus, the existence of indirect network effects suggests that social
welfare is increased with a lower interconnection charge, while proper data
compression (also required to improve social welfare) requires a higher interconnection
charge in order to send the correct signals to websites.
While there are in principle public policy issues involved here due to the existence of
externalities, multiple instruments would be required to address the problem, and these
instruments are presently absent. In any event, we consider that the level of market
failure resulting from these problems is unlikely to be high enough for intervention to
improve on the market-based outcome. Moreover, even if suitable instruments were
available, the Internet is still in its infancy and is developing rapidly, and under these
circumstances the level of market failure would need to imply very large losses in
economic welfare before public policy involvement could be recommended. Indeed,
LMRT's analysis suggests that increased use of micropayments (a price charged by
websites or embodied in online transactions) can reduce the divergence from welfare
maximisation that occurs with the interconnection charge being unable to perform all
tasks required of it. Such micropayments may well be developed by Internet players in
the near to medium term.
When ISPs have market power, LMRT show that while their interconnection prices obey
the "off-net pricing principle", they can nevertheless earn higher profits by differentiating
themselves and thereby weakening price competition between them. We discuss
aspects of differentiation by ISPs in Section 8.2.3. To show why off-net pricing would
still apply it is useful to first look at the situation assuming that end-users are insensitive
to price changes (demand is inelastic). When websites' subscription demand is price
inelastic the interconnection price that ISPs charge each other to accept off-net traffic
ct = co + c", and co ≡ the cost of origination, and c" ≡ the cost of transport] but generates a marginal
termination revenue a; the opportunity cost of that traffic is thus ct – a. And the traffic from backbone
1's websites, which costs initially co for origination and a for termination on backbone 2, is now
internal to backbone 1 and thus costs c = co + ct; therefore, for that traffic too, the opportunity cost of
stealing the consumer away from its rival is c - (co + a) = ct - a. A similar reasoning shows that
stealing a website away from the rival backbone generates, for each connected consumer, a net cost
co + a: attracting a website increases originating traffic, which costs co, and also means sending more
traffic from its own websites to the other backbone's end users, as well as receiving less traffic from
the other backbone (since the traffic originated by the stolen backbone is now on-net); in both cases,
a termination revenue a is lost." (Laffont et al 2001, p8).
140
Final Report
for termination276 has the same characteristic as when the ISP market is competitive; it
simply governs the way costs are shared between websites and end-users.
More likely is the case where the demand for subscriptions by websites is elastic (or
price sensitive). Websites' profits then depend on the interconnection charge (a), with a
lower a increasing the opportunity cost of serving end-users, thus requiring higher
prices for the receipt of traffic. This lower interconnection charge lowers the cost of
serving websites (they are now more profitable to serve) and ISPs price usage to them
more aggressively such that websites are now in an advantageous position. In
summary, LMRT’s model suggests that the reduction in the price a enables ISPs to
obtain more rents from usage charges levied on end-users, some of which are passed on to websites. The socially efficient outcome, however, suggests a higher
interconnection charge a.
When end-users are also price sensitive and ISPs have market power, subscription charges are also affected. LMRT's analysis suggests that even if ISPs have market
power, the pricing of a is again determined by the "off-net pricing principle", with excess
profits to ISPs coming by way of mark-ups on subscription charges.
For price discrimination between networks to be sustainable the evidence in LMRT’s
paper suggests that competitive pressures acting on ISPs would need to be weak.
Where the market is competitive LMRT show that under a wide range of fairly general
conditions, the incentives acting on ISPs are for them to price traffic according to the
"off-net pricing principle" i.e. as if all traffic were off-net. The suggestion is that so long
as competition is present, backbones will not succeed with a strategy of on-net / off-net
price discrimination. More generally, LMRT's analysis suggests that where core ISPs
are relatively competitive, interconnection prices will not be used by them as a strategy
for gaining advantage over their competitors, such as through developing on-net
services which are not available off-net, or by differentiating between on-net and off-net
traffic.
In practice, the level of market failure seems likely to be more significant than is
suggested by LMRT’s results. Reasons for this include:
1. The existence of direct and indirect network externalities in a multi-firm environment
implies that these effects will not be internalised, as would be required for welfare
maximisation.
2. Ramsey pricing requires that both sides of the market are considered when setting
prices so as to minimise the loss of trade resulting from prices being above (or
below) marginal cost.277 This will not normally occur in a multi-firm environment.278
276 Remember that "hot-potato" routing means that the terminating network faces both termination and transport costs.
3. Where ISPs have some market power, as is likely to be the case among core ISPs,
a level of on-net / off-net price discrimination is normally possible, and this will lead
to a network advantage based on size, implying network externalities and network
tipping.279 End-users and websites, and those that sign up to the Internet in the
future, will be drawn to the network with which they can get the cheapest service,
and where networks charge differently for on-net and off-net traffic, this will be the
network that keeps the greatest proportion of total traffic on-net. As, in actuality, the structure of interconnection charges between core ISPs and downstream ISPs differs between peers and between clients, we think this provides weak evidence that the interconnection prices charged by core ISPs are different. This suggests opportunity costs similar to those associated with market power, i.e. a level of market failure.
4. Third degree price discrimination in the interconnection charge a is required so that
no end-user or website is excluded where it brings more benefits than costs.280
Where competition exists, however, the level of price discrimination required to
accomplish this is not possible. Only a monopolist would be able to undertake this
level of price discrimination.
8.2.5.2 Customer price discrimination
The question arises as to whether we should expect a dominant vertically integrated
core ISP to price discriminate by charging (or imputing) its own on-net ISP services a
lower price than it sells transit service to others. Perhaps the main point to note in trying
to answer this question is that independent ISPs buying transit have other options
available to them, such as caching, mirroring, secondary peering, and most importantly,
multi-homing - now a viable option even for small ISPs.281 For this reason we do not
see that the largest vertically integrated backbone would gain an advantage with this
strategy. This is not a question addressed directly by LMRT as their model has no
explicit role for transit. Our intuition suggests to us that extending the model to include
transit interconnection would not add much additional insight in this regard. One insight
that looks likely to come out of the model extended to include a competitive market for
transit, is that to the extent that transit and peering provided the same service, it looks
likely that they would be priced the same.282
277 Internalising network effects can require below-cost pricing. Where any resulting subsidies must be raised within the industry a Ramsey approach to pricing is required, i.e. prices must take account of
demand elasticities on both sides of the market. Counterintuitively, LMRT find that the trade-off
between welfare costs imposed on both sides of the market by mark-ups over cost can require that
the price be higher in the more price sensitive segment, in contrast with standard Ramsey pricing
principles.
278 See Brown and Sibley (1986) for an extensive discussion of Ramsey pricing.
279 See Laffont et al (2001) pp 13-14 and their appendix A.
280 The optimal interconnection charge needs to be set individually to reflect the externalities generated by heterogeneous end-users and websites that benefit the other side of the market.
281 See Milgrom et al (1999).
282 In order to confirm this, however, the model would have to be amended to add transit.
From an economic welfare perspective, end-users and websites should be charged
different prices. There are at least two reasons for this:
• No end-user or website should be excluded where their willingness to pay for access is greater than the incremental cost they impose on their ISP.283 This is important in networks, as large common costs mean that average costs are typically very much higher than incremental costs, and
• As in each case there are network benefits generated by the other side of the market, and these benefits differ for each end-user and website, each end-user and website should ideally be charged a personalised price so as to internalise the externality benefits they bring.
Firms must have a great deal of market power to engage in third degree price
discrimination, which is clearly not the case at present in regard to traffic exchange on
the Internet. Such price discrimination is therefore mainly of theoretical interest.
8.3
Standardisation issues
The Internet is made up of over 100,000 networks connected in a loose hierarchy. For it
to continue growing and developing new services requires technical compatibility standards that enable new networks and new equipment to be added to the Internet and accessed by all subscribers.
As opposed to telecommunications networks which developed mainly as large
monopolies employing internationally agreed technical standards, the Internet is a
collection of heterogeneous networks that are connected to each other through the IP
protocol and a variety of hardware and software based solutions. Multiple and
sometimes substitutable standards are used and have not so far resulted in significant
interoperability problems between networks. Rather this heterogeneity has been a
hallmark of the Internet. However, this network of networks has not as a rule provided
for seamless interconnection.
It is not our task in this study to write an analysis of either standards setting processes,
or the standards that make interoperability between Internetworks possible. This is a
hugely complex topic in itself. Rather, what we discuss below are general issues
concerning technical compatibility standards regarding the Internet. The rationale for the
inclusion of this section in the report is that where network effects are present the
control of standards can in theory be used to monopolise the market, much as the
freedom to decide not to interconnect with rivals implies ultimate monopolisation of the
market. The topic therefore deserves brief coverage in this study.
283 We are assuming here an absence of usage-based congestion externalities.
The public policy interest in standards is chiefly concerned with network effects.
Network effects arise when the average benefit enjoyed by users increases with the
number of users. In this regard the economics of standards can be thought of in much
the same way as interconnection, where standards provide the virtual side of
interconnection. A lack of technical compatibility due to proprietary standards can lock
in subscribers and may also result in highly concentrated markets.284
The standards setting processes involving the Internet divide into two categories:
1. Formal standards setting bodies where standards are agreed through consultation with the various interests, and
2. Standards developed by equipment manufacturers relating to the design and operation of their own products.
In the first category there are several bodies involved in Internet standards setting.
These include the IETF, ITU-T, ISO, and the ATM forum. Each of these bodies is likely
to represent slightly different interests, although on many issues there may be a
commonality of views. For example, the ITU and ATM forum tend to be associated with
traditional telecommunications operators, while the IETF is associated with the Internet
and companies that have grown up in recent years along with the Internet. Such bodies
can end up competing with each other and this may be one cause of competing
standards on the Internet.285
The IETF is the most important of these bodies and is a spin-off of the Internet
Architecture Board (IAB) and the Internet Engineering Steering Group (IESG). It
employs what it refers to as a "fair, open, and objective basis for developing, evaluating,
and adopting Internet standards".286 Such processes do not appear to raise public
policy concerns.
Standards development by equipment manufacturers tends to substantially predate
comparable IETF standards development. Indeed, the recent model for standards (and
to a degree also technology) development on the Internet, appears to be that vendors
first develop proprietary standards (technologies), and the IETF later develops an open
standard which accomplishes roughly the same thing. Proprietary development tends to
be rapid, while IETF solutions are much slower. In this respect, competition between
vendors appears to be one of the forces driving technological progress, although doing
little for standardisation. The IETF, on the other hand, draws on this invention and in time provides solutions for interoperability. Thus, competitive forces as well as co-operation appear to play a role in Internet standards development.
284 Such an outcome can be consistent with the preference of policy makers to foster technological
progress, although where high levels of market concentration arise there are clearly trade-offs
involved with this policy.
285 An obvious example here is SIP designed by the IETF, and H.323 designed by the technical
standards body of the ITU. Both protocols are designed to enable communications to pass between
the Internet and the PSTN. SIP is apparently more Spartan in its design.
286 See IETF RFC 2026, "The Internet Standards Process - Revision 3”.
For example,
competition between equipment manufacturers played an important role in the
development of MPLS, which evolved from several proprietary developments intended
to combine layer 3 with layer 2 functions, perhaps the best known being Cisco’s Tag
Switching protocols. The IETF then determined its own standard (MPLS) which
borrowed from the various proprietary options. This development was to our knowledge
unopposed by the proprietary developers.287
Indeed, several largely substitutable vendor solutions (hardware combined with
software) operate on the Internet, including Cisco’s Tag Switching. They do not provide
for seamless interoperability, a situation that could provide the largest vendor with a
strategic advantage over its rivals through being able to sell to ISPs the ability to access
the largest seamless network – the one involving a commonality of equipment and
standards.
There is a parallel here with our discussion above about the strategic interaction
between basic Internet service and the addition of a ubiquitous VoIP service. The
sharing of most network benefits between ISPs seems likely to occur even though
networks are not fully interoperable with each other – by which we mean that there is a
significant difference between on-net and off-net QoS. If in the future networks that use
completely interoperable equipment and software enjoy additional network benefits not
available outside the "on-net" solution, then ISPs will want a compensatory discount if
they are to use a less than seamless solution.
However, the competition between equipment manufacturers also appears to be a
powerhouse of technological development in an industry characterised by rapidly
changing technology. In getting involved in standardisation processes, it may be
unavoidable that the authorities also change (probably reduce) the incentives that give
rise to technological progress.
Concerns about the possible role of standards in reducing competition between ISPs
may be unfounded, however, as it is normal for open standards to be agreed where this
is in the interests of both networks and subscribers. In relatively nascent and rapidly
expanding markets, if the network benefits for subscribers are sufficiently large, open
standards will normally prevail even though one (or more) of the networks might prefer
proprietary standards. Where network benefits for consumers are low, proprietary
standards may dominate.288 In this regard the IETF appears to provide open standards
287 Be that as it may, ISPs do not use the same software architectures at layers 2 and 3, and although we have noted in Chapter 4 and Annex B the problems that tend to arise because of software and hardware compatibility issues, sufficient compatibility is achieved in practice for a "best effort" to provide an adequate service in regard to basic Internet services, such as file transfer and web-surfing.
288 See Steinmueller (2001).
in good time, such that proprietary developments provide the vendor with an advantage for only a limited period before an open standard is adopted.289
8.4
Market failure concerning addressing
In this section we will discuss IPv4 address depletion, the replacement of IPv4 by IPv6,
BGP route table growth and the related issue of growth rates in the demands on router
processing, and AS number exhaustion.
Arguably the chief public policy issues that arise are due to the fact that the numbers
used by the Internet addressing systems are being treated as if they were public goods
(i.e. non exhaustible resources), whereas in practice they are scarce resources. In such
cases theory and evidence suggests that congestion will occur – sometimes known as a
tragedy of the commons problem. It occurs because entities’ usage of numbers and
addresses does not take account of the impact their usage has on the usage of others,
and this can give rise to externality costs.290
A similar type of problem may also arise where the increasing length of route
announcements, and increased route announcements and withdrawals, demand more
processing power from routers. These would represent spill-over costs of changes that
have occurred elsewhere in the network which relate to address depletion or the way
addresses are used.
Concerns have also been raised about possible co-ordination difficulties experienced in
trying to overcome addressing problems, and the possibility that in the future Internet
services may be disrupted due to address exhaustion and a non-standardised approach
to solving this problem. We discuss these concerns below.
8.4.1 The replacement of IPv4 by IPv6
Rates of growth in active IP addresses are non-constant and have varied substantially
from time to time since the 1980s (which is when data started to be provided). Some of
the most dramatic of these variations occurred as a result of developments that enabled changes in address allocation policy and in the way addressing on LANs and WANs was designed in response to these changes. These developments included:
289 A point perhaps worth considering for future research in this area is whether there are firms in other industries close to the Internet that may be able to leverage entry and advantage on the back of network benefits provided by certain standards. Microsoft may be one such company. Where such a large majority of Internet hosts use Microsoft's proprietary operating system there may be some danger of Microsoft being able to leverage its market power in computer software into the Internet, as has reputedly been attempted (unsuccessfully) with HTML.
290 One of the hallmarks of a public good is that my usage of it does not affect the value you get from using it. This is not the case with numbering and addressing on the Internet, as we discuss below.
• The adoption of CIDR, which enabled much more efficient use and allocation of addresses;
• Network address translation (NAT), and
• Dynamic addressing.291
The latter two contributed greatly by enabling hosts to have access to the Internet
without having an IPv4 address permanently assigned to them. These factors have
greatly assisted in reducing the rate of growth in IPv4 address deployment.292 Indeed,
measurement shows that the span of address space had grown at an average annual
rate of about 7% between November 1999 and December 2000.293 If this rate were to
continue into the future IPv4 addresses could theoretically last until about 2018.
Huston’s projections are shown in Figure 8-3.
Figure 8-3:
IPv4 address depletion
Source: Huston (2001b).
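The arithmetic behind a projection of this kind is straightforward compound growth. A minimal sketch follows, assuming a constant 7% annual growth rate; the starting span of roughly 1.2 billion addresses in late 2000 is our own illustrative assumption, chosen only to reproduce a date in the region of Huston's:

    import math

    TOTAL_IPV4 = 2 ** 32       # the theoretically possible IPv4 address space
    base_span = 1.2e9          # hypothetical span of deployed space in late 2000
    growth = 0.07              # Huston's measured ~7% annual growth rate

    years = math.log(TOTAL_IPV4 / base_span) / math.log(1 + growth)
    print(f"Space spanned after ~{years:.0f} years, i.e. around {2000 + round(years)}")
    # -> ~19 years, i.e. around 2019, broadly consistent with the 2018 projection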
Given the past history of changes in this growth rate, we consider there to be a fairly
wide margin of error in this projection. Moreover, Huston is considering the theoretically possible 2^32 IPv4 addresses. There are several reasons why IPv4 address exhaustion will in practice occur well before this date. The main factors explaining this appear to be:
291 These topics are discussed in Chapter 3.
292 A more detailed discussion of these changes can be found in Chapter 3 and in the annex to Chapter
3.
293 See Huston (2001b). We do not know whether his means of estimating this figure involved confidence
intervals.
Internet traffic exchange and the economics of IP networks
147
• The allocation of addresses is done in blocks, and this means that in practice it is not possible to match allocation with actual requirements, i.e. utilisation efficiency is always significantly less than perfect;
• There is a list of non-usable IPv4 addresses,294 and
• There will be many drivers of demand for Internet services in the roughly foreseeable future that would appear to imply a rise in the growth in the demand for addresses, including:
- The role of GPRS and 3rd and 4th Generation (3G and 4G) mobile services in the demand for IPv4 addresses. These networks require a new network architecture compared to that of the traditional voice oriented GSM networks. One particular feature of this new architecture is that there will be new network elements and interfaces which communicate with each other on the basis of the IP protocol.295 Release 5 of the 3GPP (3rd Generation Partnership Project) specifies that UMTS is to be based on an all-IP network architecture.296 Moreover, it will not only be network elements and interfaces that will be IP-based, but also the terminal equipment, implying that demand for addresses will in all likelihood experience a significant autonomous upward shift from network operators and mobile Internet service providers alike;297,298
- The growth in next generation applications, such as VoIP, e-commerce, the e-home and e-business. These factors seem likely to substantially alter the rate of growth of Internet addresses and traffic, and mean that organisations using dynamic provisioning will need to provision from a larger pool of addresses. It has also been suggested that permanent addresses would need to be assigned in the case of the e-home and e-business,299 and
- The potential for Internet growth to accelerate in parts of the world where present Internet subscription levels relative to GDP per capita are low.
294 See http://search.ietf.org/Internet-drafts/draft-manning-dsua-06.txt
295 See for example, Smith and Collins (2002, section 5.4.5).
296 See Smith and Collins (2002, section 4.7).
297 The UMTS Forum has stated that it is in favour of a rapid introduction of IPv6. In their report number 11, titled "Enabling UMTS / Third Generation Services and Applications", the UMTS Forum urges that the rapid wide-scale introduction of IPv6 will play a vital role in overcoming problems relating to end-to-end security as well as numbering, addressing, naming and QoS for real-time applications and services. Further on, it is argued that "while the fixed ISP community is still weighing up the merits of deploying IPv6, the mobile sector is already realising that its intrinsic security, ease of configuration and vastly increased address space are all 'must haves' for mobile Internet". See UMTS Forum, Press Release November 1, 2000.
298 The longer the introduction of 3G is delayed, the more likely it seems that 3G networks will adopt IPv6 from the outset.
299 See Ó Hanluain (2002).
In practical terms, therefore, IPv4 address exhaustion or substitution will occur
considerably before 2018, with most estimates suggesting some time around the end of
the decade.300
IPv6 is an Internet addressing scheme standardised by the IETF, and developed in the
early 1990s primarily in response to IPv4 address depletion, and intended as its
replacement. While adoption of IPv6 is still in the planning stage as at early 2002,
several (private) intranets have been using it for some time.301,302 Six main features
have been identified as giving IPv6 an advantage over IPv4:
• IPv6 has 128 bits in the header for addressing compared to IPv4's 32 bits, thus solving the shortage of address space;
• The IPv6 header is simplified, reducing processing costs associated with packet handling, and thus limiting the bandwidth cost of the header;
• A new capability is added to IPv6 which enables the labelling of particular traffic flows, i.e. sequences of packets sent from a particular source to a particular destination for which the source desires special handling by the intervening routers, such as non-default quality of service or real-time service;
• ISPs that want to change their service provider can do so more easily and at lower cost when they are using IPv6 than if they were using IPv4;
• IPv6 is able to identify and distinguish between a great many different types of service with different corresponding QoS. IPv4 also has this capability, but has room for only four different types of service, and
• IPv6 has improved security capabilities regarding authentication, data integrity, and (optional) data confidentiality.
However, recent developments have resulted in the perceived gap between IPv4 and
IPv6 closing. These developments concern the following:
• The original concern about pending address exhaustion has dissipated somewhat, with estimated dates for address exhaustion starting in 2005, although arguably most appear to think that IPv4 address exhaustion will occur around the end of this decade;
300 The period prior to the 1990s was when address allocations were most profligate. As the Internet was
mainly US based at that time this has meant that most (about 75%) IPv4 addresses are allocated in
the U.S.
301 We understand IPv6 is running in several places on private IP networks, including WorldCom's own
corporate network, where it has been used for several years.
302 The European Commission is also funding several projects which have as their main aim to make
operational a pan European IPv6 network. For example, the Euro6IX project has apparently received
€17 Million to connect a number of Internet exchange points across Europe using IPv6, and to test a
range of technical issues concerning next generation IP networks. See www.euro6ix.net. The
Commission's total estimated contribution to IPv6 development projects comes to approximately
€55.45 million.
• The development of a flow management mechanism in MPLS appears to have resulted in IPv4 being able to match the traffic flow functions provided by IPv6;303
• The lower processing costs associated with handling packets due to IPv6 having a simplified header in comparison to IPv4 are now thought to be largely cancelled by IPv6's more complex addresses, and
• The security features offered by IPv6 appear to be relatively closely matched by recent security developments by the IETF known as 'IPSec', which use IPv4.
The Internet community does not appear to be in any rush to adopt IPv6, although
many of the key players are doing work that will enable its adoption.304 The reasons for
this will not be trivial, and will require us to look more widely than the bullet points
immediately above. We now provide a brief discussion of what some of these reasons
may be.
From each network’s perspective, adoption of IPv6 will be costly, especially in terms of
the man-hours required to plan and make adjustments to each network. Assuming entities are not driven at this time to adopt IPv6 by IPv4 address exhaustion, they will typically need to see a financial motive before making the costly switch from IPv4 to IPv6. In the case of ISPs, for example, they need to
have an expectation that their profitability will either be:
• enhanced by switching to IPv6, or
• damaged by not switching to IPv6.
One or the other of these bullets would be accomplished if IPv6 could be used to
provide new services, service attributes, or improved QoS, that enabled an ISP to get
additional revenues from customers, where those revenues are not cancelled by higher
costs. This being the case, ISPs that did not shift to IPv6 would see their customers
start to switch to ISPs that did use IPv6.305
There are arguably four main explanations for why networks do not appear to be in a
rush for IPv6 to be adopted:
1. The advantages offered by IPv6 over IPv4 may not be especially useful at this time
given the present level of Internet development;
303 Marcus (1999), p 235.
304 In Europe, periodic RIPE meetings provide a forum where the technical issues involved in an IPv6
Internet are discussed.
305 We are assuming here that end-to-end IPv6 functionality would be provided, an assumption we
analyse below.
2. IPv4 and IPv6 will apparently interoperate on the Internet, i.e. there will be no shut-off date or Y2K type of catastrophe;
3. There may be a network externality benefit caused by a lack of co-ordinated uptake
of IPv6 (a synchronisation problem), and
4. There may be a significant option value retained by ISPs in waiting, rather than
being early preparers for IPv6 adoption.
We now discuss these points in turn.
Taking point 1 first, it appears that the advantages of IPv6 over IPv4 may not as yet
translate into sufficient commercial opportunities to drive ISPs to start the process of
preparing for IPv6 adoption more quickly than is required by pending IPv4 address
exhaustion. In addition to its vast address space, the other advantages IPv6 has over
IPv4 seem most likely to come to the fore with the arrival of the Next Generation
Internet (NGI), and this does not appear to be expected for at least another 5 years. It
may be the case that IPv4 is still considered suitable by most ISPs given the existing
level of Internet development and the level of IPv4 addresses that remain to be
allocated.
In regard to the second point, IPv6 has been designed to operate where many ISPs
continue to rely on IPv4. It is very unlikely that serious network problems or rationing requirements will arise as a result of IPv4 address exhaustion. In other words,
both IPv6 and IPv4 will be acceptable on the Internet in the coming years such that
interoperability will be maintained. The way this is done is by tunnelling IPv6 within IPv4
in those parts of the Internet that continue to rely on IPv4, such that the IPv6 datagram
also gets an IPv4 header. This packet is then forwarded to the next hop according to
the instructions contained in the IPv4 header. At the tunnel end-point the process is
reversed.306 Tunnels can be used router-to-router, router-to-host, host-to-router, or
host-to-host.307
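A conceptual sketch of the encapsulation step may help. This is not a working network stack – the headers are reduced to a handful of fields – but protocol number 41 is indeed the value that marks an IPv4 payload as an encapsulated IPv6 datagram:

    from dataclasses import dataclass

    IPPROTO_IPV6 = 41  # IPv4 protocol number for an encapsulated IPv6 payload

    @dataclass
    class IPv4Packet:
        src: str           # tunnel entry point address
        dst: str           # tunnel end point address
        protocol: int
        payload: bytes

    def encapsulate(ipv6_datagram: bytes, entry: str, exit_: str) -> IPv4Packet:
        # The complete IPv6 datagram becomes the payload of an IPv4 packet;
        # routers in the IPv4-only region forward on the IPv4 header alone.
        return IPv4Packet(src=entry, dst=exit_, protocol=IPPROTO_IPV6,
                          payload=ipv6_datagram)

    def decapsulate(packet: IPv4Packet) -> bytes:
        # At the tunnel end point the IPv4 header is stripped and the original
        # IPv6 datagram continues by normal IPv6 forwarding.
        assert packet.protocol == IPPROTO_IPV6
        return packet.payload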
Considering the third point, a potentially important network externality benefit may be
lost if there are significant benefits associated with IPv6 adoption that are not being
considered by individual networks when considering whether to be early converters to
IPv6. This might be the case if much of the Internet needs to have already adopted IPv6
before it becomes sufficiently useful for most ISPs to choose to switch to IPv6. We could
call this an unaccounted for synchronisation value. The issue concerns the need for
IPv6 to operate end-to-end in order for its benefits to be most useful. The first network
to switch may get relatively few advantages, these being presumably in relation to the
proportion of its on-net to total traffic. According to this argument there would be less
incentive for smaller networks to be early adopters than very large ones. If this
306 The tunnel end point need not necessarily be the final destination of the IPv6 packet.
307 See Marcus (1999), p. 239.
synchronisation value exists and is large compared to IPv6 switching costs, then a significant market failure would threaten, suggesting that the authorities should intervene in some way. While further analysis would be required in order to identify the form an effective intervention might take, the obvious possibilities would include imposing a mandatory date for adopting IPv6,308 and paying subsidies to networks to
switch to IPv6. However, even if this co-ordination value was shown to exist, further
research would be needed to show that the private benefits for networks from switching to IPv6 were not large enough for most networks to switch to IPv6 without
intervention and in a time scale that did not imply a costly delay. In other words, even if
there were externality benefits present, the private benefits of adoption (i.e. exclusive of
the network benefit) may be large enough to bring about a sufficiently timely adoption of
IPv6. What is more, we suspect that, due to the first point above, any synchronisation value that did for argument's sake exist is unlikely to be large enough over the next few years to warrant any official intervention.
A closely related issue concerns the possibility that the development of NGI and its
associated services is being held back by not having IPv6 widely deployed today.
Assuming these IPv6 associated benefits exist, it is doubtful, however, that they
represent a genuine market failure as the benefits do not appear to be direct network
effects, and nor do they appear to be caused by the supply of complementary services that increase the value of the network (i.e. indirect network effects). They might more accurately be described as spill-over benefits associated with economic multipliers. Profitable economic activities generally have positive spill-over benefits, but these are not market failures. In the absence of a market failure there is no divergence between private and social cost, and thus no market correction is called for.
other aspects of the Internet need to develop before IPv4 might or might not start to
impose a costly limitation on the development of the Internet. By this time pending IPv4
address exhaustion will likely provide sufficient incentive for widespread preparation for
IPv6 adoption.
The fourth point above refers to the option value of continuing to wait rather than start
the IPv6 conversion process. This option value could be quite significant where there
are high sunk costs and technology is evolving rapidly, as has occurred with the Internet
to date.309 The risk is that IPv6 may become out of date due to technological
development before it takes over from IPv4 as the leading Internet addressing scheme.
While this becomes increasingly unlikely as IPv4 address exhaustion draws near, there appear to be few costs involved in waiting, so it may be a perfectly rational and efficient decision for many ISPs and intranets to wait and see before making any commitment to replace IPv4 with IPv6 on their own networks.
308 This has apparently occurred in Japan, where all networks are required to use IPv6 by 2005.
309 Dixit and Pindyck (1994) provide a thorough analysis of real options.
In conclusion, Internet IP-address management is an area where there appears to be
market failure. Early IP-address allocation in particular failed to see that addresses were
a ‘depletable’ resource, and there remain huge amounts of allocated IPv4 addresses
that are unused and will remain so. The degree to which this market failure imposes
costs on society will depend on whether the adoption of IPv6 provides a large enough
increase in cumulated benefits net of conversion costs, compared to the situation where
IPv4 continues to be used (assuming there is no IPv4 address shortage). We are not in
possession of the information that would give us the confidence needed to forecast an
outcome of this cost benefit exercise. The Internet community is in the best position to
make this judgement, although we note that few networks seem to be in any hurry to
make the conversion. There appears, however, to be a little more enthusiasm among
firms that sell equipment to these companies or to end-users.
8.4.2 Routing table growth
The largest networks - those at the top of the Internet’s loose hierarchy - referred to as
core ISPs, peer with each other, and each such network maintains a virtually complete
set of Internet addresses in its BGP routing tables. The size of these routing tables has
implications for router memory and the processing time with which routers can perform
calculations on routing table changes, and the speed with which datagrams can be
routed.
A public policy issue regarding routing table growth concerns the possibility that the rate
of growth in announced routes will exceed the rate of growth in the switching capacity of
routers (i.e. processing power) and that this would lead to increasing instability and a
rapid decline in QoS on the Internet, perhaps leading to usage and/or membership
numbers needing to be controlled.
As with IPv4 addresses, routing table growth is not stable and has also been affected by CIDR, and reputedly also by a period of intense consolidation within the ISP industry which brought about large-scale reallocation from the mid 1990s until about 1998. These factors reduced routing table growth rates substantially over this period.310
Routing tables are being continuously updated on the Internet. Networks periodically
announce their routes into the exterior routing domain. Through the mid to late 1990s
the hourly variation in announced routes has apparently decreased, and together with a
shift to announcing more aggregated blocks of addresses, along with the use of route
flap damping,311 the stability of the routing system has been improved, with addressing
310 Huston (2001b).
311 "Route flap" is the informal term used to describe an excessive rate of update in the advertised
reachability of a subset of Internet prefixes. These excessive updates are not necessarily periodic so
route oscillation would be a misleading term. The techniques used to solve these effects are
commonly referred as "route flap damping", and are usually associated with BGP (Border Gateway
Internet traffic exchange and the economics of IP networks
153
anomalies being shifted out of the core and into the edges of the Internet. Investigation
by Huston (2001b) suggests that the processing required to update routing tables in
response to route withdrawals and announcements also declined during this period.312
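The route flap damping mechanism referred to in footnote 311 can be sketched as follows: each flap adds a fixed penalty to a per-prefix counter that decays exponentially; the route is suppressed when the counter exceeds a suppress threshold and re-advertised once decay carries it below a reuse threshold. The parameter values below are common vendor defaults rather than figures taken from RFC 2439 itself, and the sketch ignores most real-world detail:

    PENALTY_PER_FLAP = 1000.0
    SUPPRESS_LIMIT = 2000.0     # suppress the route above this penalty
    REUSE_LIMIT = 750.0         # re-advertise once the penalty decays below this
    HALF_LIFE = 900.0           # seconds for the penalty to halve

    def decayed(penalty: float, elapsed: float) -> float:
        # exponential decay of the accumulated penalty over 'elapsed' seconds
        return penalty * 0.5 ** (elapsed / HALF_LIFE)

    penalty, suppressed = 0.0, False
    for gap in (0.0, 60.0, 60.0):      # three flaps, 60 seconds apart
        penalty = decayed(penalty, gap) + PENALTY_PER_FLAP
        suppressed = suppressed or penalty > SUPPRESS_LIMIT
    print(suppressed)                  # True: the flapping prefix is dampened

    penalty = decayed(penalty, 1800.0) # a quiet half hour passes
    print(penalty < REUSE_LIMIT)       # True: the route may be re-advertised

The exponential decay is what shifts the burden onto persistently unstable prefixes while letting a route that flaps once recover quickly.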
As can be seen from Figure 6-3, from late 1998 the growth rate in the BGP route table size appears to have trended up sharply, giving rise to concerns that it would surpass the growth in processing speed or the capability of the BGP routing protocol.
Reasons for the increases are complex, but appear to include the growth in multi-homing and secondary peering, which has led to a more meshed and less hierarchical Internet. One outcome of this appears to be the much greater use of policy routing involving route announcements for a subset of prefixes that are themselves already announced. Indeed, in 2000, about 30% of route announcements fitted this description. However, in the last twelve months or so BGP table growth rates have dropped back to a level that, if sustained, raises few concerns.
In addition to route table growth there are other factors that call on router processing
power and have implications for router performance. These are:
• The length of the route advertisements themselves, and
• The rate of route announcements and withdrawals.
The concern about route table growth rates exceeding the growth in processing power
must, therefore, take into account the role played by these two factors. Huston has
recently investigated them. He found that the prefix length of route advertisements is
increasing by one bit every 29 months, implying increasingly finer entries in forwarding
tables; lookup tables are getting larger and routers are increasingly having to search
deeper, calling on more and more memory.
Moreover, the growth in route announcements and withdrawals is increasing in the
smaller address block range, and this is where the route tables’ fastest growing prefixes
can be found. Withdrawals in particular are BGP processor-hungry.
Route table growth, the increasing table depth, and the increase in route announcements and withdrawals in the smaller prefix area of the tables all have implications for Internet performance and stability. While the situation appears
manageable at present, this topic looks likely to remain one of concern to the Internet
community and observing authorities.
312 However, the growth in multi-homing and secondary peering has had the opposite effect on route
table growth – an issue we discuss below.
8.4.3 AS number growth
The number of ASes is growing more rapidly than is the number of IPv4 addresses
(roughly 50% per year c.f. 7%). At present rates of growth it has been estimated that AS
numbers will be used up before 2006.313 Since these estimates were made the growth
has slowed. Nevertheless, the IETF is planning to modify BGP so that its AS carrying capacity is increased to a 32-bit field from the present 16 bits.
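The arithmetic behind the "before 2006" estimate can be sketched directly, taking the roughly 17,000 AS numbers allocated by November 2000 (a figure discussed immediately below) as a base and assuming compound 50% annual growth:

    import math

    AS_POOL = 2 ** 16        # 65,536 AS numbers fit in BGP's 16-bit field
    allocated = 17_000       # allocations as at November 2000 (see below)
    growth = 0.50            # approximate annual growth rate at that time

    years = math.log(AS_POOL / allocated) / math.log(1 + growth)
    print(f"Pool exhausted after ~{years:.1f} years, i.e. during {2000 + math.ceil(years)}")
    # -> ~3.3 years, i.e. during 2004, comfortably "before 2006"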
Estimates of AS number allocations up to and including November 2000 showed that of
the 17,000 AS numbers allocated at this date, a little under 10,000 were actually in
use.314 There are two main classes that make up the unused ASes. These are:
1. Historically allocated ASes that are:
- lost in the system, or
- used in a semi-private context, and
2. ASes that are recently allocated but not yet deployed (most common).315
The deployment of AS numbers and their pending exhaustion is of concern to the
Internet community. Protocol modifications are likely, and what Marcus refers to as
‘hacks’ may also be adopted. But these fail to address the main policy problem, and that
is the lack of incentives for networks to take account of the costs their number and address usage imposes on other Internet entities and users. Huston has suggested that the
increasing number of smaller networks at the edge of the Internet implies that the
provider-based address block allocation policy is in need of replacement, and we
suspect his views are shared by a number of his peers.
8.4.4 Conclusions regarding addressing issues
It seems clear that IP addressing and AS numbering management are far from optimal.
In principle, there appear to be significant costs that networks’ addressing and
numbering choices impose on other networks (and end-users) that ought to be
avoidable. While perhaps impractical in regard to the Internet, the usual way to overcome this type of externality cost, of which the 'free' allocation of addresses and numbers is an example, is to use an economic mechanism designed to internalise, for each demander of addresses and numbers, the costs his allocation places on the rest
313 Statistical analysis has suggested that growth rates are equally likely to prove to be exponential or
quadratic (Gupta and Srinivasan, in Marcus 2001).
314 See Marcus (2001).
315 Huston, in Marcus (2001).
of the Internet community. In this regard, it is perhaps pertinent to note that the efficient way these problems are addressed in the PSTN world is to impose number holding costs on all entities that have been allocated numbers. As in the case of telephone numbering
administration, the amount need be no larger than is required to encourage entities to
use numbers sparingly so that private costs more accurately reflect social costs. In the
case of IPv4 addresses for example, there are very large amounts of addresses that
have been allocated but are unused, and will remain so as long as IPv4 is in use. To
encourage their return to the RIRs so that they can be reallocated, an amount of, say,
€0.10 per address per annum may be sufficient to have the desired effect. We are not,
however, recommending this as a policy for the Internet, as it would be a significant
intervention and it raises the question of who would collect the money and how it would be
spent. Rather, this paragraph is included because it provides information about the
nature of the externality problem, as well as providing information about how the same
type of problem is efficiently dealt with in a nearby market. Moreover, whatever the
policy details, this sort of approach implies the creation of an independent authority
whose task it would be to manage Internet addressing.
Even assuming this were possible, in practice we suspect that it may not be practical or
ultimately efficient to bring about such a fundamental change to the ‘management’ of
the Internet. Other highly advantageous aspects of the Internet’s present structure may
be at risk if such changes were to be implemented. For example, numbering and
addressing appears to be tied up with technological development on the Internet,
which would create the need for ‘an authority’ to make trade-offs between static and
dynamic efficiency. To the extent that the dynamic benefits are in the future, the
authority would be guessing at these, with every possibility of getting them wrong.
Without being able to provide a rigorous defence underpinning its choices, decisions
would be viewed as arbitrary.
It should also be noted that there is no compelling reason to believe that the Internet
community will be unable to make the transition to a new addressing scheme without
encountering grave problems: problems that might result in instability of the Internet
and/or require restrictions on usage, or perhaps also on the numbering of new networks.
Worries about addressing and numbering co-ordination problems possibly requiring
some sort of official intervention seem premature. There are potentially avoidable
market failure costs associated with Internet addressing and numbering management,
but in trying to prevent these the authorities would likely create greater costs elsewhere.
8.5 Existing regulation
Except in one or two countries where (perhaps ineffective) rules about content have
been introduced, the Internet is not directly regulated. However, the Internet does not
operate in a legal / regulatory vacuum. There are rules that govern the posting of
information, such as copyright and intellectual property law. Some firms that own ISPs,
or firms that supply network services used by the Internet, are directly regulated, the
most obvious examples being incumbent telecommunications operators. More
generally, firms that operate in markets close to and even overlapping Internet markets,
are regulated. This raises the possibility, indeed likelihood, that firms which are
competing for the same customers will not be equally treated under the law.
In the near future changes in the process by which regulation is applied in the EU
should significantly reduce the danger of competitors being covered by different laws
and regulations. The approach that is making its way through the relevant European
institutions at present will require any industry regulations to be targeted according to
antitrust markets, and not to be placed on any firm that does not have dominance in that
market. This approach will not completely prevent regulatory non-neutrality, especially
in industries where antitrust market boundaries are changing relatively rapidly, as
appears to be happening with the convergence of different communications platforms,
one of which is the Internet. One reason for this is that markets do not halt abruptly in
either product or geographic space, and thus some regulatory non-neutrality is
inevitable where different platforms are regulated differently.
An area where non-neutrality is already present concerns universal service subsidies,
or more particularly the way special universal service taxes (USO contributions) are
raised from industry participants.316 As convergence occurs, platforms other than the
PSTN will be competing in the same market to provide voice services. We expect that
voice services will be increasingly provided over the Internet in competition with fixed-line
circuit-switched networks and potentially also in competition with cellular networks.
Clearly, when one type of firm pays extra taxes that its competitors do not pay, there
is a breach of competitive neutrality principles.
The complexity of trying to work out the regulations which would determine the USO
liabilities of nationally-based Internet companies so as to overcome this non-neutrality
should not be underestimated. Indeed, the problems may be such as to make it too
difficult to design or operate regulatory rules that would be practical and efficient given
existing institutional limitations. Rather, the way forward may be not to seek to apply the
special tax to firms using other technology platforms, but to raise the revenues needed
through a different mechanism. We have discussed the efficiency and institutional
advantages of such an approach in WIK (2000), and we have also discussed in that
study how this might be done.
The USA is facing similar problems due to IP transport having been classified by the
FCC as information services as opposed to telecommunications services. This definition
has put IP transport outside of the scope of telecommunications service regulations.317
316 We provided a detailed analysis of the sources of non-neutrality in existing rules in chapter 3 of WIK
(2000).
317 See Mindel and Sirbu (2000) for a discussion of the situation in the USA. Related issues are also
addressed by Kolesar, Levin and Jerome (2000).
The problem in the USA, which is qualitatively similar to that in the EU, is shown in
Figure 8-4.
As the PSTN and the Internet increasingly compete for the same business, differences
in cost and price structures are likely to become very important in the competitive
process. We have already noted the type of costs involved in providing Internet service.
The main three are: fixed costs in the provision of basic capacity; the cost of adding
subscribers; and a congestion cost (see Section 8.1 for further discussion). None of
these costs is caused on a per minute of usage basis, and it is debatable to what
degree per minute prices would be a suitable proxy for marginal usage costs.
In fact these costs are caused by the number of bits sent during congested periods.
Figure 8-4: Non-neutral treatment of communications services in the USA
Telecommunications service (triggers additional obligations): PSTN (voice); X.25, ATM, frame relay (transport services).
Information service: IP transport services; e-mail, websites, content, instant messaging.
Source: Mindel and Sirbu (2000).
PSTN regulations require interconnection charges to be levied on a per minute basis.
One explanation for this approach is that circuits are dedicated for the entire period of a
telephone call – they can not be used by anyone else. This is not the case for a VoIP
call which involves statistical multiplexing such that peak usage costs are bit related.
We already discussed the potential for difficulties when levying per minute charges for
Internet usage.318 In order for PSTN operators not to face a regulatory disadvantage,
the regulated price structure of PSTN interconnection tariffs may well need review in the
near to medium term, perhaps with one outcome being that PSTN interconnection
would be priced in terms of ‘busy hour’ capacity costs. As per minute charges are in
principle built up from capacity costs, part of the work needed in order for such changes
to be implemented has already occurred.319
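As a purely illustrative sketch of how a per minute interconnection charge can be built up from busy-hour capacity costs (a sketch in Python; every figure below is a hypothetical assumption, not a number from this study):

```python
# Hypothetical build-up of a per minute rate from busy-hour capacity costs.
# All figures are illustrative assumptions.

annual_capacity_cost = 120_000.0   # assumed yearly cost of one unit of busy-hour capacity
busy_hours_per_year = 10 * 250     # assumed 10 busy hours per day over 250 working days
utilisation = 0.8                  # assumed average loading of that capacity in the busy hour

per_minute_rate = annual_capacity_cost / (busy_hours_per_year * 60 * utilisation)
print(f"Implied charge: {per_minute_rate:.4f} per busy-hour minute")
```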
318 See Section 7.2.2.
319 Other areas of possible market failure caused by regulation are less obvious although potentially
important, and include reduced levels of new entry, competition, and investment, caused by investor
shyness due to regulatory uncertainty and the risk of regulatory opportunism. These are, however,
real problems and arise especially in utility industries where there are long-lived investments prone to
being stranded by regulatory or political decisions. (US tariffs on steel imports, for example, strand
investors’ assets in countries where steel producers export to the US.) We do not address this type of
market failure here but direct readers to the study we did jointly with a partner: Cullen International &
WIK (2001), "Universal service in the Accession Countries”, especially pages 82-96 in the Main
Report, and 8-13 in the Country Report.
The Internet is not directly regulated, and its international and borderless structure
would make regulation very difficult to implement and operate. As a general rule, where
Internet networks and other firms are starting to compete with each other and do not
receive equal regulatory treatment, we suggest that regulatory authorities begin
addressing the problem by first looking at ways to remove regulations to bring about
competitive neutrality, rather than contemplate applying new (e.g. PSTN) regulations to
the Internet.
8.6 Summary of public policy interest
Left completely unregulated, Internet network effects would likely result in
monopolisation, most probably through the aggressive takeover of rivals. Where market
power is distributed among many interconnecting networks, however, direct ex ante
regulation is normally unnecessary in order to prevent this from occurring. Merger/takeover
rules, which are included as part of competition law, are the appropriate way to
address the risk of monopolisation. However, in the long-run this need not be the case,
as events may occur that are not illegal under competition law but which result in
increasing market concentration. It is possible, for example, that firms that have some
market power in markets nearby to traditional Internet services might try to leverage that
power into the traditional Internet in order to give them a competitive advantage. If off-net
QoS problems persist this may lead to on-net services being the source of such an
advantage. This type of differentiating strategy does not appear to have been influential
to date, although it may become so in future. Further academic research that focusses
on these issues would help to refine our assessment of this issue.320
The research evidence regarding interconnection pricing suggests that under a wide
variety of circumstances, even backbones that have market power have an incentive to
set interconnection prices according to "the off-net pricing principle". The main reason
for this is that the access price works to govern the distribution of costs between both
sides of the market – end-users and websites – but under quite general conditions,
does not alter the profitability of backbones. Indeed, it is suggested that powerful
backbones may do best by pricing interconnection very low and seeking higher profits in
end-user charges.321
While we have explained why there are significant inefficiencies involved with the
present pricing system (both between ISPs, and between end-users and their ISP),
there appears to be no need for intervention here by the authorities in the foreseeable
future. There is considerable effort ongoing by those in the Internet community to
320 Similar arguments might apply to the largest equipment manufacturer, although subsequent IETF
open standards that are equivalent to proprietary ones seem likely to at least partially undermine such
a strategy.
321 Where several services with different QoSes operate over the Internet, the analysis would appear to
remain valid for each QoS.
overcome QoS and demand management issues, and to a large degree these will rely
on technological developments.
Internet address allocation is an area where significant market failure may be found.
Concerns mainly involve the costly adoption of IPv6 being prematurely forced on the
Internet community by the likely exhaustion of IPv4 addresses before the end of this
decade. These costs exist to the extent that ISPs and other numbered entities would
not freely switch to IPv6 if IPv4 were not nearing exhaustion. IPv6 offers incremental
rather than revolutionary improvements over IPv4, and as technology is evolving quickly in
this area there is also a remote risk that premature adoption of IPv6, driven by early
profligacy in IPv4 allocation, will foreclose technically superior developments that
might have materialised had IPv4 lasted longer. One reason behind IPv4’s early demise
is the lack of appropriate economic incentives to treat the addressing system as a
depletable resource.
BGP route table growth is of less concern, although an analysis of the issues suggests
that the Internet community might seek an agreement that would take pressure off
processing used for addressing on the Internet. AS number growth is also an area of
legitimate concern, although the IETF is in the process of designing an increase in the
carrying capacity of the AS header to a 32-bit field from the present 16-bit field.
A health warning needs to be attached to the findings of the academic research we
have discussed in this chapter. Where the circumstances are such as to suggest a
non-problematic outcome, we need to view this as a "failure to reject" rather than a
confirmation that no problem exists. One reason for such caution is that the models
employed are stylised simplifications of the structures that guide the behaviour of the
main Internet players. Subtle nuances are not included in this research as to do so
would greatly complicate the analysis and detract from a focus on what are perceived to
be the most important relationships. Other strategies may be being played out,
however, that contradict the findings of the authors whose work has been discussed in this
study. Our view is that the authorities should not rely too heavily on the findings that
such models uncover as a basis for either permitting or rejecting merger applications.
Finally, existing regulation is an area which will need attention in the next few years.
Perhaps the two main problem areas will concern universal service taxes, and the
structure of interconnection prices where there is convergence of the Internet with the
PSTN. While the EU has moved to modify the rules governing ex ante regulation by
employing an antitrust-based market analysis and this will assist in avoiding some of the
market distortions caused when different regulations apply to converging industries, the
reform of existing regulations still raises competition questions, and finding answers to
these will not be a trivial exercise.
8.7 Main findings of the study
Market structure and competition:
In the complete absence of rules protecting competition, industries that display strong
network effects like the Internet have a tendency to drift toward monopolisation, most
probably through the aggressive takeover of rivals.
The Internet has become less hierarchical in the last 5 or 6 years due to the
introduction of new technologies which enable an increasing proportion of traffic to
avoid the upper layer of the Internet hierarchy. This has reduced the market power of
core ISPs.
The main technologies enabling this are:
• those that enable content to be held closer to the edges of the Internet (caching, mirroring, and content delivery networks), and
• those that have made regional interconnection between smaller ISPs (secondary peering), and multi-homing (transit contracts with several core ISPs), economically attractive.
Where market power is distributed among a sufficient number of interconnected
networks, and services that function as partial substitutes for transit interconnection
are widely purchased (those services noted in the above bullet), direct ex ante
regulation of the core of the Internet is unnecessary in order to prevent
monopolisation from occurring. Merger regulation should be relied upon instead.
The loosening of the hierarchy has reduced the bargaining power of core ISPs when
negotiating service contracts with ‘downstream’ ISPs and major content providers.
Analysis of the strategic interests of the largest player(s) suggests that in many cases
they will not gain by degrading interconnection unless they already have a share of
the global market well in excess of anything observed presently.
Research evidence regarding price discrimination suggests that in a wide variety of
circumstances backbones have an incentive to set interconnection prices according
to “the off-net pricing principle”: that is, customers pay the same price for receiving
traffic independently of whether the traffic originated on the ISP’s own network
(on-net) or on another ISP’s network (off-net). If ISPs have the power to price
discriminate between on-net and off-net traffic, this situation may become unstable
and the market tip in favour of the largest network.
Where seamless interconnection can not be provided (e.g. where there are QoS
problems at network borders as is the present case), new services that require high
QoS attributes, such as VoIP, may restore to a degree the importance of network size
in the competitive process between core ISPs. If this occurred it would imply tipping
in favour of the largest network.
Part of our report provides an analysis of the relevance of theoretical models related
to the topics discussed in this study, with several of these models motivated by
recent merger cases. The strategies identified by these economic models provide
valuable insight by focussing on key economic relationships. The models are,
however, always simplifications of reality, and should be viewed more as a failure to
reject the strategy predicted, and less as a confirmation that it will in practice occur.
A level of market failure exists in the pricing of interconnected traffic as the indirect
network benefits provided by content providers are not taken account of in
interconnection prices. Given recent growth rates in subscriptions and content, the
net cost to society of this failure appears to be relatively low, and not of the order
that would warrant official intervention.
The number of public Internet exchange points is increasing.
The peering policies of core ISPs do not appear to be unfair and do not at this stage
entail competition policy or regulatory concerns.
Addressing
IPv4 addresses are likely to be exhausted before the end of this decade. The
adoption of IPv6 is costly but appears unlikely to cause significant disruption, and
thus to require official involvement to facilitate transition.
In addition to a huge increase in address space, IPv6 offers other
advantages over IPv4, although in several areas the advantage has been
significantly narrowed due to developments involving IPv4. Other aspects of the
Internet such as those concerning measurement, billing, and grade of service
pricing, need to develop before other advantages offered by IPv6 can be translated
into benefits to end-users.
IPv4 addresses are a scarce resource and failure to treat them as such has led to
pending address exhaustion and the need for the Internet community to adopt a
new addressing scheme before the end of this decade. The efficiency costs entailed
in this could be considerable. However, intervention that would try to correct this
inefficiency is not advised. The main reasons for this are as follows:
the Internet has no nationality, and is made up of over 100,000 networks
world-wide. This makes regulation rather impractical;
addressing is tied up with technology development, and in such cases
intervention should only be considered where there are compelling reasons
for doing so, and
the Internet community has shown that it is able to plan for the future and will
likely avoid serious disruption, such that existing and replacement addressing
schemes will work with each other, i.e. at a minimum IPv6 hosts will support
IPv4 addresses.
Quality of service (QoS)
QoS in packet-switched networks such as the Internet describes the statistical
properties of the packet stream of network ‘connections’.
Most parameters concerning Internet traffic performance are non-deterministic and
have to be defined probabilistically. Thus, strictly speaking, packets do not receive
equal treatment. However, as packets are randomised irrespective of their source or
of the application to which the data is being put, in this sense no packet can be said
to receive preferred treatment under the current best-effort transfer principle of the
Internet.
The key to convergence between the Internet and broadcasting and voice telephony
services is provided by a combination of:
• solutions for QoS problems that exist between different networks (i.e. when traffic passes from being on-net to off-net), which arise for reasons including differences in proprietary equipment and network management systems, and
• the introduction of demand management techniques, such as through the provision of several grades of service (GoS), each with different end-to-end QoS statistics, and priced to customers accordingly.
Greater bandwidth and processing power alone will not solve all congestion and
QoS problems on the Internet. This is because the Internet involves the use of
scarce resources, and when these are treated otherwise, theory and evidence suggest that
congestion will become a problem, undermining convergence and the development
of services that require superior QoS statistics.
Technologies intended to provide for an improved QoS, such as IntServ, DiffServ,
and Resource reSerVation Protocol (RSVP), must be implemented on all routers
between origination and termination, and at present these technologies are mainly
limited to intranets and some routes on a small number of ISP networks.
The prospect of these technologies being widely deployed in the Internet seems
low, especially as IntServ and RSVP have limited scalability, and increasing
convergence between data and optical layers looks likely to lead to superior
alternatives in the medium term.
Available empirical evidence suggests that QoS on the public Internet improved
significantly between 1998 and 2000. At the end of this period the average level of
delay (not counting delays experienced in LAINs and in general in obtaining access
to the Internet) was approaching a level which, if sustained, could support voice.
References
Alcatel Telecommunication Review (2001), Special issue "Next Generation Now”, vol. 2.
Anania, L., and R. Solomon, (1997), "The Minimalist Price", in L. McKnight and J Bailey (eds.)
(1997), Internet Economics, MIT Press, Cambridge, Mass.
Anquetil, L-P., Bouwen, J., Conte, A., and B. Van Doorselaer (1999), "Media gateway control
protocol and voice over IP gateways", Alcatel Telecommunications Review, 2nd Quarter.
Australian Competition & Consumer Commission (2000), "Internet interconnection: Factors
affecting commercial arrangements between network operators in Australia", Discussion
paper, 17 February.
Awduche, D. and Y. Rekhter (2001), "Multiprotocol lambda switching: Combining MPLS Traffic
Engineering Control with Optical Crossconnects", IEEE Communications Magazine,
March 2001.
Baake, P. and T. Wichmann (1997), "On the economics of Internet peering", paper presented at
the First Berlin Internet Economics Workshop, October 24-25, Berlin; published also
in:Netnomics vol. I (1999), no.1; pp. 89-105.
Badach, A. (2000): "High Speed und MultiService Networking – Technologien und Strategien",
paper presented at Online 2000, Düsseldorf (Germany), January 31-February 3.
Baiocco, C., S. Carbone, and C. Fumagalli, (1996), "Alcatel Access Nodes with Integrated SDH
Transport", Alcatel Telecommunications Review, 3rd Quarter.
Besen, S., Milgrom, P., Mitchell, B. and P. Srinagesh (2001), "Advances in routing technologies
and Internet peering agreements". American Economic Review, Papers and
proceedings, Vol. 91(2): 292-296.
Bishop, D.J., Giles, C.R. and S.R. Das, (2001), "The Rise of Optical Switching", in: Scientific
American, January.
Black, D. P. (1999), Building Switched Networks, Addison-Wesley, Reading (MA.).
Blumenthal, D.J. (2001), "Routing Packets with Light", in: Scientific American, January.
Boardwatch Magazine (ed.) (1999): Directory of Internet Service Providers, 11th Edition, Penton Media Inc.
Boardwatch Magazine (ed.) (2000): Directory of Internet Service Providers, 12th Edition, Penton Media Inc.
Boardwatch Magazine (ed.) (2001): Directory of Internet Service Providers, 13th Edition, Penton Media Inc.
Bocker, P. (1997), ISDN – Digitale Netze für Sprach-, Text-, Daten-, Video- und Multimediakommunikation. Springer Verlag, Berlin: 4th ed.
Bogaert, J. (2001), "Convergence of the optical and data layers", Alcatel Telecommunications
Review, 2nd Quarter.
Brown, S. and D. Sibley (1986), The Theory of Public Utility Pricing. Cambridge University Press.
Internet traffic exchange and the economics of IP networks
165
Cable & Wireless (2000), "Competition in the Supply of IP- Network", A Position Paper by Cable
& Wireless.
Carpenter, B. (2001), "IPv6 and the future of the Internet", ISOC Member Briefing no. 1, July 23;
www.isoc.org
Cawley, R. (1997), “Interconnection, pricing, and settlements: Some healthy jostling in the
growth of the Internet”, in: B. Kahin and J. Keller (eds.), Coordinating the Internet. MIT
Press, Cambridge, Mass.
Cerf, V.G. and R. Kahn (1974): "A protocol for packet network Interconnection", in: IEEE
Transactions on Communications, vol. 22 (5), pp. 637-648.
Clark, D. (1997), “Internet cost allocation and pricing”, in: McKnight and Bailey (eds.) (1997).
Coase, R. (1960), "The problem of social cost", Journal of Law and Economics, 3: 1-44.
Colt (2001), Telekommunikation überzeugend @nders (company’s presentation charts).
Crémer, J., Rey, P. and J. Tirole (1999), "Connectivity in the commercial Internet". Mimeo,
Institut D´Economie Industrielle, Université des Sciences Sociales de Toulouse.
Cullen International and Wissenschaftliches Institut für Kommunikationsdienste (2001),
"Universal service in the Accession Countries”, Study for the European Commission
(contract no. 71080), June. http://www.qlinks.net/quicklinks/univers.htm
Dang Nguyen, G. and T. Pénard (1999), "Interconnection between ISPs, Capacity Constraints
and Vertical Differentiation", paper presented at the 2nd Berlin Internet Economics
Workshop, May 28-29, Berlin.
David, P. and S. Greenstein (1990), "The economics of compatibility standards: An introduction
to recent research", Econ. Innov. New techn, (1): 3-41.
DeGraba, P. (2000a), "Bill and keep at the central office as the efficient interconnection regime".
OPP Working Paper No. 33. FCC.
DeGraba, P. (2000b), "Efficient Interconnection Regimes for Competing Networks", paper
presented at the 28th Telecommunications Policy Research Conference, September 23-25, Alexandria, Virginia.
Denton, T.M. (1999): "Netheads vs. Bellheads - Research into Emerging Policy Issues in the
Development and Deployment of Internet Protocols", Report prepared for the Canadian
Federal Department of Industry, http://www.tmdenton.com/netheads.htm
Desmet, E., Gastaud, G. and G. Petit (1999), "Quality of service in the Internet". Alcatel
Telecommunications Review, 2nd Quarter.
Distelkamp, M. (1999), Möglichkeiten des Wettbewerbs im Orts- und Anschlussbereich des
Telekommunikationsnetzes, WIK-Diskussionsbeiträge Nr. 196, Bad Honnef, Germany
(October).
Dixit, A. and R. Pindyck (1994), Investment under uncertainty. Princeton University Press.
Downey, T. (1998), "Separation of IP routing and forwarding via tag switching and multiprotocal
label switching". Paper presented at the Pacific Telecommunications Council
conference, Honolulu, Hawaii (1999).
166
Final Report
Economides, N. (1996), "The economics of networks", International Journal of Industrial
Organisation, 14 (6): 673-699.
Economides, N. (2001), "Competition policy issues on the Internet", address to the American Bar
Association (August).
Elixmann, D. (2001), "Der Markt für Übertragungskapazität in Nordamerika und in Europa", WIK
Diskussionsbeiträge Nr. 224, Bad Honnef, Juli.
Elixmann, D. and A. Metzler (2001), Marktstruktur und Wettbewerb auf dem Markt für Internet-Zugangsdienste, WIK Diskussionsbeiträge Nr. 221, Bad Honnef, June.
Ergas, H. (2000), "Internet Peering: A Case Study of the ACCC´s Use of its Powers Under Part
XIB of the Trade Practices Act, 1974", paper presented at the 3rd Berlin Internet
Economics Workshop, May 26-27, Berlin.
European Institutes for Research and Strategic Studies in Telecommunications (EURESCOM)
(ed.) (2001): Next Generation Networks: The Service Offering Standpoint, Technical
Information,
Requirements
Analysis
and
Architecture
Definition,
www.eurescom.de/public/projectresults/results.asp
Fankhauser, G., Stiller, B. and B. Plattner (1999), "Arrow: A flexible architecture for an
accounting and charging infrastructure in the Next Generation Internet", Netnomics, (1):
210-223.
FCC (1998), Application of WorldCom, Inc. and MCI Communications for Transfer of Control of
MCI Communications Corporation to WorldCom, Inc. CC Docket No. 97-211, September
14.
Ferguson, P. and G. Huston (1998), Quality of service: Delivering QoS on the Internet and in
corporate networks, Wiley, New York.
Foros, Ö. and H.J. Kind (2000), "National and Global Regulation of the Market for Internet
Connectivity", paper presented at the 3rd Berlin Internet Economics Workshop, May 26-27, Berlin.
Frieden, R. (1998), "Without Public Peer: The Potential Regulatory and Universal Service
Consequences of Internet Balkanization", paper presented at the 26th Telecommunications Policy Research Conference, October 3-5, Virginia, (also in: Virginia Journal of
Law and Technology, 3 VA. J.L. & Tech. 8, http://vjolt.student.virginia.edu).
Frieden, R. (2000), "Does a Hierarchical Internet Necessitate Multilateral Intervention?", paper
presented at the 28th Telecommunications Policy Research Conference, September 23-25, Alexandria, Virginia.
Gareiss, R. (1999), "The old boys’ network", Data Communications, vol. 28, no. 14, pp. 36-52
Gerpott, T.F and S.W. Massengeil (2001), "Electronic markets for telecommunications transport
capacities", Telecommunications Policy, vol. 25, pp. 587-610.
Händel, R., Huber, M.N. and S. Schröder (1998), "ATM Networks", Addison Wesley, 3rd Ed.
Hart, O. and J. Tirole (1990), "Vertical integration and market foreclosure", Brookings Papers on
Economic Activity, Microeconomics: 205-285.
Höckels, A. (2001), "Alternative Formen des entbündelten Zugangs zur Teilnehmeranschlussleitung", WIK-Diskussionsbeiträge Nr. 215, Bad Honnef, Germany.
Huston, G. (1999a), "Web Caching", www.telstra.net/gih/papers.
Huston, G. (1999b), "Interconnection; Peering and Settlements", www.telstra.net/gih
Huston, G. (2001a), "Scaling the Internet - The Routing View", www.telstra.net/gih/papers.html.
Huston, G. (2001b), "Analysing the Internet BGP Routing Table", The Internet Protocol Journal,
(March).
Hwang, J., Weiss, M. and S.J. Shin (2000), "Dynamic Bandwidth Provisioning Economy of A
Market-Based IP QoS Interconnection: IntServ - DiffServ", paper presented at the 28th
Telecommunications Policy Research Conference, September 23-25, Alexandria,
Virginia.
INTEREST Verlag (2001), Protokolle und Dienste der Informationstechnologie, Vol. 2.
Katz, M., and C. Shapiro (1985), "Network externalities, competition, and compatibility",
American Economic Review, 75: 433-47.
Keagy, S. (2000), "Integrating Voice and Data Network", Cisco Press.
Kende, M. (2000), "The Digital Handshake: Connecting Internet Backbones", OPP Working
Paper No. 32, Washington: Federal Communications Commission.
Kende, M. and D.C. Sicker (2000), "Real-time Services and the Fragmentation of the Internet",
paper presented at the 28th Telecommunications Policy Research Conference,
September 23-25, Alexandria, Virginia.
Kercheval, K. (1997), "TCP/IP over ATM", Prentice Hall PTR.
Kolesar, M., Levin, S. L. and A. Jerome (2000), "Next Generation Packet Networks and the
Regulation of Interconnection and Unbundling", paper presented at the 28th
Telecommunications Policy Research Conference, September 23-25, Alexandria,
Virginia.
Kreps, D. (1990), Game theory and economic modelling. Oxford University Press, Oxford.
Kulenkampff, G. (2000), Der Markt für Internet Telefonie – Rahmenbedingungen,
Unternehmensstrategien und Marktentwicklung, WIK-Diskussionsbeiträge Nr. 206, Bad
Honnef, Germany.
Laffont, J.-J., Marcus, J.S., Rey, P. and J. Tirole (2001a), "Internet Interconnection and the Off-Net-Cost Pricing Principle", mimeo, Institut D´Economie Industrielle, Université des
Sciences Sociales de Toulouse.
Laffont, J.-J., Marcus, J.S., Rey, P. and J. Tirole (2001b), "Internet Peering", American Economic
Review Papers and Proceedings, vol. 91, no. 2, May; pp. 287- 291
Lardiés, A. and G. Ester (2001), "Optimal network design of ultra-long-haul transmission
networks", Alcatel Telecommunications Review, 2nd Quarter.
Lehr, W. and L. McKnight (1998), "Next Generation Internet Bandwidth Markets", in:
Communications & Strategies, no. 32, 4th quarter, pp. 91-106.
Liebowitz, S. J. and S. E. Margolis (1994), "Network externality: an uncommon tragedy". Journal
of Economic Perspectives, 8 (2).
Little and Wright (1999), "Peering and settlement in the Internet: An economic analysis", Mimeo
NECG.
Lobo J.F. and W. Warzanskyj (2001), "Redes de transmisión todo ópticas", Comunication de
Telefónica I+D nº 23.
Lynch, G. (2000a), T2T: The rise and downsides of bandwidth exchanges, in: America’s
Network, May 1, pp. 34-38.
Lynch, G. (2000b), Bandwidth trading markets edge toward legitimacy, in: America’s Network,
August 1, pp. 36-34.
Lynch, G. (2001), Bandwidth bubble bust – The rise and fall of the global telecom industry,
Authors Choice Press, San Jose et al.
Mailath, G. J. (1998), "Do people play Nash Equilibrium? Lessons from evolutionary game
theory", Journal of Economic Literature: XXXVI, 1347-1374.
Malueg, D. A. and M. Schwartz (2001), "Interconnection incentives in a large network", working
paper, Georgetown University. http://econ.georgetown.edu/workingpapers
Man Chan, Y., Womer, J.P., MacKie-Mason, J.K. and S. Jamin (1999), "One Size doesn't fit all;
Improving Network QoS through Preference-Driven Web Caching", paper presented at
the 2nd Berlin Internet Economics Workshop, May 28-29, Berlin 1999.
Marcus, J. S. (1999) Designing wide area networks and Internetworks – A practical guide,
Addison-Wesley, Reading (Mass.)
Marcus, J.S. (2001a): "AS number exhaustion", presentation January 25, 2001.
Marcus, J.S. (2001b), "Global traffic exchange among Internet service providers (ISPs)", paper
presented at OECD- Internet traffic exchange, Berlin, June 7.
McDysan, D. (2000), QoS and Traffic Management in IP and ATM Networks, McGraw-Hill.
McKnight, L. W. and J. Boroumand (2000), "Pricing Internet services: after flat rate", Telecommunications Policy, 24: 565-590.
McKnight, L. W. and J.P. Bailey (eds.) (1997), Internet economics, MIT Press, Cambridge, MA.
Melián, J.P., López da Silva, R.A. and P.A. Gutíererez (2002), "Redes IP de nueva generación",
Comunicaciones de Telefónica I+D, nº 24.
Milgrom, P., B. Mitchell and P. Srinagesh (1999), "Competitive Effects of Internet Peering
Policies", paper presented at the 27th Telecommunications Policy Research
Conference, September 25-27, Arlington, Virginia.
Mindel, J. and M. Sirbu (2000), "Regulatory Treatment of IP Transport and Service", paper
presented at the 28th Telecommunications Policy Research Conference, September 23-25, Alexandria, Virginia.
Minoli, D. and A. Schmidt (1999), Internet Architectures, Wiley Computer Publishing, John Wiley
& Sons, New York.
Netplane (2000), "Layer 3 switching using MPLS", www.netplane.com
Ó Hanluain, D. (2002), "On the New World of Communications”. Electronic Magazine of
Ericsson. http://on.magazine.se/stories/story.asp?articleID=813
Odlyzko, A. (1998), "The economics of the Internet: Utility, utilisation, pricing, and quality of
service", mimeo, AT&T Labs – Research.
OECD (1998): "Internet Traffic Exchange: Developments and Policy", Working Party on
Telecommunication and Information Services Policies, DSTI/ICCP/TISP(98)1/FINAL
OECD (1999), "Internet traffic exchange: developments and policy", Paris.
Oh, C.H. and S.-G. Chang (2000), "Incentives for strategic alliances in online information product
markets", Information Economics and Policy, 12: 155-180.
Foros, Ø. and B. Hansen (2001), "Competition and compatibility among Internet service
providers", Information Economics and Policy, 13: 411-425.
Postel (ed.) (1981b): "Transmission Control Protocol. DARPA Internet Program Protocol
Specification", RFC 793, University of Southern California, Information Sciences
Institute, September.
Postel, J. (ed.) (1981a): "Internet Protocol. DARPA Internet Program Protocol Specification",
RFC 791, University of Southern California, Information Sciences Institute, September.
Roberts, L. G. (2001), "U.S. Internet IP Traffic Growth", presentation at Network Asia 2001,
http://www.caspiannetworks.com/library/presentations/index.shtml
Rohlfs, J. H. (2001), Bandwagon effects in high technology industries, MIT Press, Cambridge,
MA.
Ross, K.W. (1995), Multiservice Loss Models for Broadband Telecommunication Networks, Springer,
Berlin.
Saloner, G. (1990), "The economics of computer interface standardization: the case of UNIX".
Economics of Innovation and new technology, 1(1/2).
Salop, S. and J. Stiglitz (1977), "Bargains and Rip-offs: a model of monopolistically competitive
price dispersion". Review of Economic Studies: 44, 493-510.
Schmidt, F., Gonzáles Lopez, F., Hackbarth, K. and A. Cuadra, (1999), "An Analytical Cost
model for the national Core Network", published by The Regulatory Authority for
Telecommunications and Posts (RegTP), Bonn.
Schmidt, K.-F. (2000): VoIP – Neue Basisplattform für Unternehmen, paper presented at the
Telecom e.V. Workshop "PBX Quo Vadis II – VoIP Alternativen zur PBX", Hamburg,
January 25.
Semeria, C. (1996), "Understanding IP addressing: Everything you ever wanted to know",
http://www.3com.com/nsc/501302html
Semeria, C. (1999), "Multiprotocol Label Switching: Enhancing routing in the new public
network", Juniper Networks, www.juniper.com
Semeria, C. (2000), "Internet Backbone Routers and Evolving Internet Design",
www.totaltele.com
Shaked, A. and J. Sutton (1982), "Relaxing price competition through product differentiation",
Review of Economic Studies, 49, 3-13.
Shaw, R. (2001), Issues Facing the Internet Domain Name System, paper presented at Asia
Pacific Telecommunication Regulation Forum, 15-17 May 2001, Phuket, Thailand.
Sinnreich, H. and A.B. Johnston (2001), Internet communications using SIP – Delivering VoIP
and multimedia services with Session Initiation Protocol, Networking Council series,
John Wiley & Sons, Inc., New York .
SITICOM (2001) "GMPLS: a major component in future networks". Available through
www.cisco.com
Smith, C. and D. Collins (2002), 3G Wireless Networks, McGraw-Hill Telecom Professional, New
York.
Squire, Sanders & Dempsey LLP and WIK (2002), "Market definitions and regulatory obligations
in communications markets", A study for the European Commission.
SS8 SignalingSwitch™ White Paper (2001a), "IP telephony signaling with the SS8™ Signaling
Switch™", www.ss8.com
SS8 SignalingSwitch™ White Paper (2001b), "VoIP signaling solutions: technical and business
models for development", www.ss8.com
Steinmueller, W.E. (2001), "Bottlenecks, Standards, and the Information Society", mimeo SPRU
Science and Technology Policy Research, University of Sussex. Prepared for a study for
the European Commission Joint Research Centre, Institute of Prospective Technological
Studies (IPTS), "Future Bottlenecks in the Information Society", Seville.
Tanner, J.C. (2001), "Peering into the future", America's Network, May 15.
TeleGeography (ed.) (1999): TeleGeography 2000 – Global Telecommunications Traffic
Statistics and Commentary, TeleGeography Inc., Washington D.C.
TeleGeography (ed.) (2000): TeleGeography 2001 – Global Telecommunications Traffic
Statistics and Commentary, TeleGeography Inc., Washington D.C.
Tirole, J. (1988). The theory of industrial organization. The MIT Press, Cambridge, Mass.
Wang, Q., Peha, J.M. and M.A. Sirbu (1997), "Optimal pricing for integrated service networks",
in: McKnight and Bailey (1997), pp. 353-376.
Weiss, M. and H. Kim (2001), "Voice over IP in the local exchange: A case study", paper
presented at the 29th Telecommunications Policy Research Conference, September 27-29, Alexandria, Virginia.
Werbach, K. (2000), "A Layered Model for Internet Policy", paper presented at the 28th
Telecommunications Policy Research Conference, September 23-25, Alexandria,
Virginia.
WIK (2000), "Study on the re-examination of the scope of universal service in the telecommunications sector of the European Union, in the context of the 1999 Review", a study
for the European Commission:
http://europa.eu.int/ISPO/infosoc/telecompolicy/en/Study-en.htm
WorldCom (2001), proprietary presentation to WIK, October.
List of companies and organisations interviewed
We spoke with many people about the issues addressed in this study. These people
occupied diverse roles in Internet organisations, from senior executives and corporate
counsel of large networks, to network engineers who advise these networks, and
engineers whose task it is to implement the technology that makes the Internet work.
Rather than stating their names we have noted the companies that employed them at
the time we interviewed them and the venue where the interview took place.
BT Cellnet, at RIPE 40, Prague (Czech Republic)
BTexact, at RIPE 40, Prague (Czech Republic)
Cable & Wireless, Munich (Germany), Brussels (Belgium)
Cisco, at RIPE 40, Prague (Czech Republic)
Colt, Frankfurt (Germany)
Deutsche Telekom, Bonn (Germany)
De-CIX, Frankfurt (Germany)
eco-Verband, Cologne (Germany)
European Telecommunications Platform (etp), invitation to participate at a regular
meeting in Brussels (Belgium)
Genuity (telephone interview)
Global Crossing (telephone)
Infigate, Essen (Germany)
MCI Worldcom, Brussels (Belgium) and telephone conferences
Nextra, Bonn (Germany)
Nokia, at RIPE 40, Prague (Czech Republic)
RIPE, participation at the 40th regular meeting in Prague (Czech Republic)
TeleCity, Frankfurt (Germany)
Viagénie (at RIPE 40, Prague)
In addition we had very informative email contacts with experts in other organisations,
including the FCC and Telstra. We would like to express our thanks to the above
organisations for the information they provided us at various stages of the project.
However, we bear sole responsibility for the contents of this study.
Glossary
AAL  ATM Adaptation Layer
ABR  Available Bit Rate
AC  Average Cost
ALG  Application Layer Gateway
APNIC  Asia Pacific Network Information Centre
AR  Average Revenue
ARIN  American Registry for Internet Numbers
AS  Autonomous System
ATM  Asynchronous Transfer Mode
BGP  Border Gateway Protocol
CATV  Cable TV
CBR  Constant Bit Rate
CCITT  Comité Consultatif International Télégraphique et Téléphonique
ccTLD  country code Top Level Domain
CDN  Content Delivery Network
CDV  Cell Delay Variation
CER  Cell Error Ratio
CERT  Computer Emergency Response Team
Ceteris paribus  all other things being equal
CIDR  Classless Inter Domain Routing
CIX  Commercial Internet Exchange
CLR  Cell Loss Ratio
CMR  Cell Misinsertion Ratio
CoS  Class of Service
CRT  Crémer, Rey and Tirole
CTD  Cell Transfer Delay
DHCP  Dynamic Host Configuration Protocol
DiffServ  Differentiated Services (Protocols)
DNS  Domain Name System
DSCP  Differentiated Services Code Point
DSL  Digital Subscriber Line
DSLAM  Digital Subscriber Line Access Multiplexer
DWDM  Dense Wave Division Multiplexing
ECI  Explicit Congestion Indicator
ENUM  Extended Numbering Internet DNS
FCC  Federal Communications Commission
FR  Frame Relay
FRIACO  Flat Rate Internet Call Origination
FTP  File Transfer Protocol
GMPLS  Generalised MPLS
GoS  Grade of Service
GSM  Global System for Mobile Communications
gTLD  generic Top Level Domain
HTML  Hypertext Markup Language
HTTP  Hypertext Transfer Protocol
IAB  Internet Architecture Board
IANA  Internet Assigned Numbers Authority
IBP  Internet Backbone Provider
ICANN  Internet Corporation for Assigned Names and Numbers
IETF  Internet Engineering Task Force
IGRP  Interior Gateway Routing Protocol
IntServ  Integrated Services (Protocols)
IP  Internet Protocol
IRU  Indefeasible right of use
ISDN  Integrated Services Digital Network
ISO  International Organisation for Standardisation
ISP  Internet Service Provider
ITU  International Telecommunication Union
IxP  Internet Exchange Point
LAN  Local Area Network
LAIN  Local Area IP Network
LLC  Logical Link Control
LMRT  Laffont, Marcus, Rey and Tirole
LSP  Label Switched Path
M&S  Malueg and Schwartz
MAE  Metropolitan Area Exchange
MC  Marginal Cost
MCU  Multipoint Control Unit
MDF  Main Distribution Frame
MED  Multi Exit Discriminator
MGCP  Media Gateway Control Protocol (Megaco)
MIB  Management Information Base
MinCR  Minimum Cell Rate
MPLS  Multi Protocol Label Switching
MPOA  Multi Protocol over ATM
MR  Marginal Revenue
MS  Management System
NA  not available
NAP  Network Access Point
NAT  Network Address Translation
NGI  Next Generation Internet
NGN  Next Generation Network
nrt  near real-time
NSF  National Science Foundation
OAM  Operation, Administration and Maintenance
OC  Optical Carrier
OSI  Open Systems Interconnection
OSP  Online Service Provider
OSPF  Open Shortest Path First
PCR  Peak Cell Rate
PIR  Packet or Cell Insertion Ratio
PLR  Packet or Cell Loss Ratio
PoP  Point of Presence
PoS  Packet over SONET
PPP  Point-to-Point Protocol
PSTN  Public Switched Telephone Network
PTD  Packet or Cell Transfer Delay
PTDV  Packet or Cell Transfer Delay Variation
PTO  Public Telecommunications Organisation
QoS  Quality of Service
RAS  Registration, Admission and Status
RED  Random Early Detection
RFC  Request For Comments
RIP  Routing Information Protocol
RIPE NCC  Réseaux IP Européens Network Coordination Centre
RSVP  Resource ReSerVation Protocol
rt  real time
RTCP  Real-Time Control Protocol
RTP  Real-time Transport Protocol
RTSP  Real-time Streaming Protocol
SCR  Sustainable Cell Rate
SDH  Synchronous Digital Hierarchy
SEC  Securities and Exchange Commission
SECBR  Severely Errored Cell Block Ratio
SIP  Session Initiation Protocol
SLA  Service Level Agreement
SME  Small and Medium Enterprise
SMTP  Simple Mail Transfer Protocol
SNAP  Subnetwork Access Protocol
SNMP  Simple Network Management Protocol
SONET  Synchronous Optical Network
STM  Synchronous Transport Module
TCP  Transmission Control Protocol
TLD  Top Level Domain
TMN  Telecommunication Management Network
UBR  Unspecified Bit Rate
UDP  User Datagram Protocol
UMTS  Universal Mobile Telecommunications System
URL  Uniform Resource Locator
VBR  Variable Bit Rate
VC  Virtual Circuit, Channel or Connection
VLSM  Variable Length Subnet Masking
VoIP  Voice over IP
VP  Virtual Path
VPIPN  Virtual Private IP Network
WAN  Wide Area Network
WAIN  Wide Area IP Network
WDM  Wave Division Multiplexing
WIK  Wissenschaftliches Institut für Kommunikationsdienste
WLL  Wireless Local Loop
WTP  Willingness to Pay
Annexes
A Annex to Chapter 3
A-1 Names and the Domain Name System
A name in the IP world has the following syntax:
(ccc.(bbb)).aaa.tld
where "aaa” denotes the so-called domain name and tld the Top Level Domain. “bbb”
and “ccc” represent sub-domains where the brackets indicate that the sub-domains are
optional.
Currently there are three types of TLDs:
• 243 country code (cc) TLDs defined by two-letter codes, e.g. .de for Germany or .be for Belgium,
• 1 international domain (.int) reserved for international organisations,
• 6 generic (g) TLDs, with three of them open to organisations world-wide (.com, .org, .net), and three (.edu, .gov, .mil) being reserved for American organisations.
In addition:
• 2 new generic TLDs (.biz and .info) have recently been adopted by ICANN and five others are pending (.aero, .coop, .museum, .name, .pro),
• the TLD .arpa is managed jointly by IANA and IAB322 under the authority of the American government.
The Domain Name System (DNS) is a distributed database comprising a great number
of servers world-wide. The DNS is organised hierarchically. At the top are 13 root name
servers323 (ten in the USA and one each in Stockholm, London and Tokyo). The main
characteristic of these servers is that they have complete knowledge about where the
name servers for the gTLDs and the ccTLDs are. The top level domain servers, in turn,
“know” where the domain name servers are at the level beneath them. Each
organisation operating a network has its own name server, however, a name server
within an organisation may reflect domains which are not visible to the outside world.
The hierarchical structure of the DNS is outlined in Figure A-1.
322 IANA is short for the Internet Assigned Numbers Authority; IAB stands for the Internet Architecture
Board.
323 Actually, there is one primary root server and 12 secondary root servers. ICANN coordinates the DNS
system. It delegates the management of TLDs and ensures that the global database is coherent. The
Internet root is a file whose maintenance is delegated by the Department of Commerce of the
American Government and ICANN, to Verisign, an American company quoted on the stock exchange,
which serves as a technical service provider. This file is then replicated on the other root servers.
Figure A-1: Hierarchical structure of the domain name system
[Tree diagram: the root node ("") at the top, branching into top-level nodes, second-level nodes and third-level nodes.]
Source: Shaw (2001)
For small companies it is usually the ISP that provides DNS administration, i.e. the
updating of the respective DNS server. Big companies, however, often take care of the
primary server administration themselves.324
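To relate the name syntax above to the lookup process, the following minimal sketch (Python standard library only; the host name is purely illustrative) splits a name into its labels, mirroring the top-down walk through the tree, and then resolves it via an ordinary stub-resolver call:

```python
import socket

# Reading a name's labels right to left mirrors the DNS tree described
# above: root -> TLD server -> domain name server -> host.
# "www.example.org" is an illustrative name of the ccc.aaa.tld form.
hostname = "www.example.org"
for depth, label in enumerate(reversed(hostname.split(".")), start=1):
    print(f"level {depth} of the tree: {label}")

# In practice a stub resolver delegates the recursive walk to its
# configured name server and simply returns the final address(es).
for *_, sockaddr in socket.getaddrinfo(hostname, None, socket.AF_INET):
    print(hostname, "->", sockaddr[0])
```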
A-2 IPv4 addressing issues
Class A, B, C addresses
The main logical characteristics of the Class A, B, C system are summarised in Figure
A-2.325 Each Class A address requires the leftmost bit to be set to 0. The first 8
contiguous bits define the network portion of the address and the remaining 24 bits are
free to assign host numbers. Each Class A address theoretically supports more than 16
million hosts. However, the number of Class A addresses is rather limited: it is equal to
2^7 - 2.326 A Class B address requires the first two leftmost bits to be set to 1-0 followed
by a 14-bit network number and a 16-bit host number. For each Class B address
theoretically more than 65,000 hosts can be assigned. Altogether more than 16,000
different Class B addresses can be defined. A Class C address requires the three
leftmost bits to be set to 1-1-0 followed by a 21-bit network number and an 8-bit host
number. Theoretically, more than 2 million Class C addresses can be distinguished, with
up to 254 (i.e. 2^8 - 2) hosts per network.
324 For more details on names and DNS see Marcus (1999, pp. 208).
325 For more information see Marcus (1999, 218) and Semeria (1996).
326 The 2 is subtracted because the all-0 network 0.0.0.0 and the network where bit 2 through 7 contain a
1 and the rest is equal to 0, (i.e. the network 127.0.0.0), are reserved for special purposes.
Figure A-2: Address formats of Class A, B, C addresses
[Class A: bit 0 set to 0; bits 1-7 network number; bits 8-31 host number.
Class B: bits 0-1 set to 1-0; bits 2-15 network number; bits 16-31 host number.
Class C: bits 0-2 set to 1-1-0; bits 3-23 network number; bits 24-31 host number.]
Source: Semeria (1996)
The Internet addressing scheme in principle offers the possibility of connecting more
than 4 billion (i.e. 2^32) devices. It is obvious that the
• Class A address space covers 50%,
• Class B address space covers 25%, and the
• Class C address space covers 12.5%
of the total IPv4 address space.327
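As an illustrative check of these rules and proportions (a sketch in Python, not part of the original study; the example addresses are arbitrary), the classful scheme can be expressed directly in terms of the leading bits of the first octet:

```python
# Classify an IPv4 address by its leading bits (classful scheme) and
# recompute each class's share of the 2^32 address space.
# Illustrative sketch; the addresses below are arbitrary examples.

def ipv4_class(address: str) -> str:
    first_octet = int(address.split(".")[0])
    if first_octet >> 7 == 0b0:      # leading bit 0
        return "A"
    if first_octet >> 6 == 0b10:     # leading bits 1-0
        return "B"
    if first_octet >> 5 == 0b110:    # leading bits 1-1-0
        return "C"
    return "D/E (multicast/reserved)"

for addr in ("10.1.2.3", "172.16.0.1", "193.1.1.0"):
    print(addr, "is Class", ipv4_class(addr))

total = 2**32
print("Class A share:", 2**31 / total)   # 0.5
print("Class B share:", 2**30 / total)   # 0.25
print("Class C share:", 2**29 / total)   # 0.125
```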
Subnetting with a fixed mask
Subnetting is a concept developed around 1985. Essentially it consists of the following
three elements:
• The original pre-specification nature of the first three bits is kept.
327 The total address space offers 2^32 different addresses. As regards Class A addresses the first bit is
prespecified, so there are 2^31 alternatives, which is half the total address space. With respect to Class
B addresses the first two bits are prespecified and, thus, the number of different alternatives is 2^30,
which is equal to a quarter of the total address space. Concerning Class C addresses the first three
bits are prespecified, so there are only 2^29 different alternatives, which yields one eighth of the total
name space.
• A certain number of bits of the host portion of the IP address are used for identifying subnets. This is done by adding a “mask” to the IP address.
• The size of the sub-networks has to be the same, i.e. the subnet masks within a network have to be the same.
The first condition means that a 1 in the first bit still indicates an (original) Class A
address, a 1-0 indicates an (original) Class B address and a 1-1-0 indicates an original
Class C address. The second condition means that the two tiered structure of a network
and a host portion within an IP address is replaced by a three tiered structure consisting
of a network prefix, a subnet portion and a host portion. Network prefix and the subnet
portion together are usually called the “extended network prefix”. The second condition
relates to the encoding of the length of the extended network prefix. This subnet mask
is also 32 bits long and consists of as many contiguous “1s” beginning leftmost as there
are bits in the extended network prefix. The third condition requires the length of the
extended network prefix to be the same across all subnets. An example may help to
clarify this328.
Suppose one has the Class C address of 193.1.1.0 and one needs 8 subnets. The IP
address 193.1.1.0 can be written in full length as
193.1.1.0 = 11000001.00000001.00000001.00000000
(the first 24 bits form the network prefix; the host portion supplies the subnet-number bits and the host-number bits)
Distinguishing 8 subnets requires three more bits (8 equals 2^3). Thus, one needs the
following subnet mask:
255.255.255.224 = 11111111.11111111.11111111.11100000
resulting in a 27-bit extended network prefix. Today it is usual to use a different notation
for this: /prefix-length. Thus, a 27-bit extended network prefix is equal to a /27
address.329
The eight subnets numbered 0 through 7 can be represented as follows (the bold digits
identify the three bits reserved for the subnet number):
328 See Semeria (1996) for further details.
329 It is easy to see that the original Class A, B, C addresses can be denoted by /8, /16, and /24,
respectively.
Base Net: 11000001.00000001.00000001.00000000 = 193.1.1.0/24
Subnet 0: 11000001.00000001.00000001.00000000 = 193.1.1.0/27
Subnet 1: 11000001.00000001.00000001.00100000 = 193.1.1.32/27
Subnet 2: 11000001.00000001.00000001.01000000 = 193.1.1.64/27
Subnet 3: 11000001.00000001.00000001.01100000 = 193.1.1.96/27
Subnet 4: 11000001.00000001.00000001.10000000 = 193.1.1.128/27
Subnet 5: 11000001.00000001.00000001.10100000 = 193.1.1.160/27
Subnet 6: 11000001.00000001.00000001.11000000 = 193.1.1.192/27
Subnet 7: 11000001.00000001.00000001.11100000 = 193.1.1.224/27
As there is a 27-bit network prefix, 5 bits are left for defining host addresses on each subnet. Thus, in this case one is allowed to assign 30 (2⁵ − 2) hosts on each of the subnets.330
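This arithmetic can be checked with a few lines of Python; a minimal sketch using the standard ipaddress module and the values from the example above:

import ipaddress

# A minimal sketch of the fixed-mask example above: divide the Class C
# network 193.1.1.0/24 into eight /27 subnets (three extra prefix bits).
base = ipaddress.ip_network("193.1.1.0/24")

for i, subnet in enumerate(base.subnets(prefixlen_diff=3)):
    # Each /27 leaves 5 host bits; the all-0s and all-1s host addresses
    # are reserved, so 2**5 - 2 = 30 hosts can be assigned.
    print(f"Subnet {i}: {subnet} ({subnet.num_addresses - 2} assignable hosts)")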
Subnetting with a fixed mask provides a much more efficient use of addresses.331 Yet, there is a considerable disadvantage with fixed masks: once a mask is selected, an organisation is stuck with a fixed number of fixed-sized subnets. Thus, if for example there arises the need to establish a subnet with more than 30 hosts, the organisation either has to apply for a new address or it has to use more than one of the given subnets. Moreover, suppose an organisation has been assigned a /16 address with a /21 extended network prefix. This would yield 32 subnets (2⁵), each of which supports a maximum of 2,046 hosts (2¹¹ − 2). However, if the organisation with this address space wants to establish a subnet with, say, only 30 hosts, it still has to use one of the 32 subnet numbers. In effect this means there is a waste of more than 2,000 IP host addresses.
Variable Length Subnet Masks
To provide more flexibility to network administrators a solution to this inefficient use of
addresses was developed in 1987. This approach, called Variable Length Subnet Mask
(VLSM), specifies how a subnetted network could use more than one subnet mask.
VLSM allows a recursive division of an organisation’s address space: if subnets are
330 2 has to be subtracted because the 'all-0s' and the 'all-1s' host addresses serve special purposes (the subnet address itself and the broadcast address) and cannot be allocated.
331 Before subnetting was applied, an administrator of a given network usually had to apply for a new address from the Internet in order to be able to install a new network for his organisation. Subnetting, however, allowed each organisation to be assigned only a few network numbers from the IPv4 address space. The organisation itself could then assign distinct subnet numbers at its discretion without the need to obtain new network numbers from the Internet.
defined one can divide a given subnet into sub-subnets. A given sub-subnet, in turn, can be further divided into sub-sub-subnets, and so on. An example of this kind of recursive allocation of address space is shown in Figure A-3.
Figure A-3: Recursive division of address space using VLSM
[Tree diagram: 11.0.0.0/8 is divided into the /16 subnets 11.1.0.0/16, 11.2.0.0/16, 11.3.0.0/16, …, 11.252.0.0/16, 11.253.0.0/16, 11.254.0.0/16; 11.1.0.0/16 is divided into the /24 sub-subnets 11.1.1.0/24, 11.1.2.0/24, …, 11.1.253.0/24, 11.1.254.0/24; 11.1.253.0/24 is divided into the /27s 11.1.253.32/27, 11.1.253.64/27, …, 11.1.253.160/27, 11.1.253.192/27; 11.253.0.0/16 is divided into the /19s 11.253.32.0/19, 11.253.64.0/19, …, 11.253.160.0/19, 11.253.192.0/19]
Source: Semeria (1996)
In Figure A-3 the /8 network 11.0.0.0/8 is first divided into 254 /16 subnets. Subnet 11.1.0.0/16 is further divided into 254 /24 sub-subnets, and sub-subnet 11.1.253.0/24 is once again divided into 6 /27 sub-sub-subnets. Subnet 11.253.0.0/16 is configured with altogether 6 /19 sub-networks. Of course, the latter sub-network could be further subdivided if the need arises within the organisation.
At each stage VLSM requires appropriate routing masks defining the length of the
respective extended network prefix. Moreover, VLSM still uses the first three bits of the
address to identify if it is an original Class A, B, or C address.
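The recursive division of Figure A-3 can likewise be reproduced in a few lines; a minimal sketch (prefix lengths as in the figure):

import ipaddress

# A minimal sketch of the VLSM recursion in Figure A-3: each level
# carves one chosen subnet into smaller subnets with a longer mask.
block = ipaddress.ip_network("11.0.0.0/8")

level1 = list(block.subnets(new_prefix=16))        # 11.0.0.0/16 ... 11.255.0.0/16
level2 = list(level1[1].subnets(new_prefix=24))    # 11.1.0.0/24 ... 11.1.255.0/24
level3 = list(level2[253].subnets(new_prefix=27))  # 11.1.253.0/27 ... 11.1.253.224/27

print(level1[1], level2[253], level3[1])  # 11.1.0.0/16 11.1.253.0/24 11.1.253.32/27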
Classless Inter-Domain Routing
CIDR gets rid of the traditional concept of class-based addressing and replaces it by a
classless approach. Thus, the first three leftmost bits of an IPv4 address no longer have
any particular predefined meaning. Rather, it is only the extended network prefix which
marks the dividing point between the network portion and the host portion.
CIDR overcomes the problem that ISPs could only allocate /8, /16 or /24 addresses, as was the case in a class-based environment. With CIDR, address assignment by an ISP
can be much more focused on the needs of its clients.332 CIDR uses prefixes anywhere from 13 to 27 bits (i.e. from /13 through /27). Thus, blocks of addresses can be assigned to networks as small as 32 hosts (2⁵) or to those with over 500,000 hosts (2¹⁹).
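The relationship between prefix length and block size is simple arithmetic: a /n block contains 2^(32 − n) addresses. A one-line check:

# The number of addresses in a CIDR block is 2**(32 - prefix_length).
for prefix in (13, 20, 24, 27):
    print(f"/{prefix}: {2 ** (32 - prefix):,} addresses")
# /13: 524,288 addresses ... /27: 32 addresses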
Differences between VLSM and CIDR
VLSM and CIDR have several similarities, although there are also differences.333 VLSM
and CIDR both allow a portion of the address space to be divided recursively into
subsequently smaller pieces. However, the differences between the two addressing
schemes are more important and are as follows:
• VLSM is a class-based addressing scheme while CIDR is classless.
• VLSM requires the recursion to be performed on an address space previously assigned to an organisation. Thus, it is invisible to the global Internet.
• CIDR, however, can be recursively applied by an Internet Registry in that an address block can be allocated, for example, to a high-level ISP, to a mid-level ISP, to a low-level ISP, and finally assigned to a private organisation.
IPv6 addressing issues
IP version 6 (IPv6) defines 128-bit long addresses, structured into eight 16-bit pieces. As writing out 128 bits is very cumbersome, there are several conventional forms for representing IPv6 addresses as text strings. The preferred form is

x:x:x:x:x:x:x:x
where the "x"s are the hexadecimal values of the eight 16-bit pieces of the address. Thus, an example of an IPv6 address is

FEDC:BA98:7654:3210:FEDC:BA98:7654:3210 334
If there are long strings of zero bits there is a special syntax (see Semeria 1996 for
details). When dealing with a mixed environment of IPv4 and IPv6 there is an
alternative form used to express an address:
332 Suppose a client requires 900 addresses for its hosts. In a world of class-based addressing the ISP could either assign a Class B address or several Class C addresses. Assigning a full Class B address would be highly inefficient because a Class B address enables up to around 65,000 hosts to hook up. Assigning Class C addresses (in this case four of them would be required) would lead to four new entries in the Internet routing table (see the next section for more details on routing). With CIDR, however, the ISP can carve out a specific portion of its registered address space which is in keeping with the needs of the client. In this example it could, for example, assign an address block of 1,024 (i.e. 2¹⁰) IP addresses. See also Semeria (1996, section on CIDR) for an empirical example.
333 We refer here to Semeria (1996).
334 In hexadecimal notation A means 10, B means 11, C means 12 and so on. FEDC thus stands for 16 bits where the first four bits are equal to 15, the second four bits are equal to 14, the third four bits are equal to 13 and the last four bits are equal to 12. Remember that in binary notation the four bits "1111" are equal to 15, which in hexadecimal notation is F.
x:x:x:x:x:x:d.d.d.d
where the “x”s are the hexadecimal values of the six high-order 16-bit pieces of the
address, and the “d”s are the decimal values of the four low-order 8-bit pieces of the
address (i.e. the standard IPv4 representation). Examples are:
0:0:0:0:0:0:13.1.68.3 or
0:0:0:0:0:FFFF:129.144.52.38
IPv6, like IPv4, has a structured format, i.e. it works with a prefix portion of the address
and a host portion.
The text representation of IPv6 address prefixes is similar to the way IPv4 address prefixes are expressed, i.e. it is equal to IPv6-address/prefix-length, where the prefix-length is a decimal value specifying how many of the leftmost contiguous bits of the address comprise the prefix.
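These textual conventions are implemented by Python's standard ipaddress module, which makes the compression and mixed-notation rules easy to verify; a brief sketch:

import ipaddress

# Full ("exploded") form of an IPv6 address from the text above.
addr = ipaddress.ip_address("FEDC:BA98:7654:3210:FEDC:BA98:7654:3210")
print(addr.exploded)      # fedc:ba98:7654:3210:fedc:ba98:7654:3210

# Long runs of zero bits are abbreviated with "::"; note that the
# module renders the low-order 32 bits as hexadecimal groups.
mixed = ipaddress.ip_address("0:0:0:0:0:FFFF:129.144.52.38")
print(mixed.compressed)   # ::ffff:8190:3426
print(mixed.ipv4_mapped)  # 129.144.52.38

# Prefix notation works as for IPv4: address/prefix-length.
net = ipaddress.ip_network("FEDC:BA98::/32")
print(net.prefixlen)      # 32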
IPv6 will comprise three different types of addresses:
• Unicast,
• Anycast,
• Multicast.
A “unicast” address identifies a single interface.335 A packet sent to a unicast address is
delivered to the interface identified by that address. An “anycast” address is an identifier
for a set of interfaces (usually belonging to different nodes). A packet sent to an anycast
address is delivered to one of the interfaces identified by that address.336 A “multicast”
address also identifies a set of interfaces which usually belong to different nodes.
However, unlike in the case of an anycast address, a packet sent to a multicast address
is delivered to all interfaces identified by that address.337 The specific type of IPv6
address is indicated by the leading bits in the address.338
335 In the following, an interface is defined as a node's attachment to a link. A node is a device that implements IPv6. A link is a communication facility or medium over which nodes can communicate at the link layer, i.e. the layer immediately below IPv6 in the OSI layer model. Examples of the link layer are Ethernet, Frame Relay or ATM, see Deering and Hinden (1998b).
336 Usually it will be the "nearest" one according to the measure of distance of the routing protocol.
337 Generally speaking, IPv6 addresses of all types are assigned to interfaces rather than nodes. However, nodes can also be identified in the following way. As an IPv6 unicast address refers to a single interface, and since each interface belongs to a single node, any of that node's interfaces' unicast addresses may be used as an identifier for the node, see Deering and Hinden (1998a, p. 2).
338 The anycast option is not known within IPv4; rather, it is a new feature of IPv6.
A-3 Route aggregation
We have seen in section 3.1.2 that addressing is far more sophisticated today than it
was in the early days of the Internet. This holds true in particular for VLSM and CIDR
addressing schemes, which when used appropriately provide significantly improved
routing efficiency compared to earlier schemes.
VLSM
VLSM permits route aggregation within an organisation which, in turn, can considerably
reduce the amount of routing information that needs to be provided in order to enable
communication between all hosts of an organisation’s network and between this
network and the outside world. Figure A-4 illustrates this.
Figure A-4: Possibility of route aggregation and reduction of routing table size by using VLSM
[Diagram: Router D aggregates the /27 subnets 11.1.253.32/27 … 11.1.253.192/27 into 11.1.253.0/24 towards Router B; Router B aggregates 11.1.1.0/24 … 11.1.254.0/24 into 11.1.0.0/16 towards Router A; Router C aggregates 11.253.32.0/19 … 11.253.192.0/19 into 11.253.0.0/16 towards Router A; Router A announces a single route 11.0.0.0/8 (or 11/8) to the Internet]
Source: Semeria (1996)
In the example illustrated in Figure A-4, Router D summarises six subnets into a single announcement 11.1.253.0/24, which it forwards to Router B. The latter router summarises 254 subnets into an announcement 11.1.0.0/16 to Router A. Likewise, Router C summarises 6 subnets into a single announcement 11.253.0.0/16 to Router A. This structure is based on VLSM and can greatly reduce the size of an organisation's routing tables. The subnet structure, however, need not be visible to the global Internet. Rather, Router A injects a single route 11.0.0.0/8 into the global Internet's routing table.
A routing protocol supporting VLSM has to use a forwarding policy which supports the
benefits of hierarchical subnetting. In this context Semeria (1996) mentions a forwarding
algorithm based on the “longest match”, i.e. all routers on a network have to use the
route with the longest matching extended network prefix when they forward traffic.339 It
is also obvious that the address assignment should reflect the actual network topology
as this reduces the amount of routing information required.340
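A minimal sketch of longest-match forwarding, using the candidate routes from footnote 339 (the next-hop names are invented purely for illustration):

import ipaddress

# Candidate routes from footnote 339; the next-hop names are made up
# purely for illustration.
routing_table = {
    "11.1.252.0/24": "next-hop-1",
    "11.1.0.0/16": "next-hop-2",
    "11.0.0.0/8": "next-hop-3",
}

def lookup(destination: str) -> str:
    """Return the next hop of the matching route with the longest prefix."""
    dest = ipaddress.ip_address(destination)
    matches = [
        (ipaddress.ip_network(prefix), hop)
        for prefix, hop in routing_table.items()
        if dest in ipaddress.ip_network(prefix)
    ]
    # The match is made on the binary 32-bit representation, not the
    # dotted decimal one; ipaddress handles that internally.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("11.1.252.7"))  # next-hop-1, via the /24 route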
CIDR
The global Internet routing table was growing exponentially in the 1980s and the early 1990s; see section 6.3 for empirical evidence on this. One of the most important factors in reducing this growth rate was the widespread deployment of CIDR.341
As the discussion in section 3.1.2 has revealed CIDR offers great flexibility as regards
addressing because it is classless. CIDR enables route aggregation and it also requires
a forwarding algorithm based on longest match. The logic behind CIDR is illustrated by
Figure A-5, in which it is assumed that an ISP with a /20 address 200.25.16.0/20 has
allocated portions of its address space to different organisations named A, B, C, D.
These organisations can be ISPs or organisations that have been assigned multiple
addresses. The figure reveals that:
• Organisation A aggregates a total of 8 /24s into a single announcement (200.25.16.0/21);
• Organisation B aggregates a total of 4 /24s into a single announcement (200.25.24.0/22);
• Organisation C aggregates a total of 2 /24s into a single announcement (200.25.28.0/23), and
• Organisation D aggregates a total of 2 /24s into a single announcement (200.25.30.0/23).
339 An example may make this clear: Suppose that a packet is addressed to the host 11.1.252.7 and there are three network prefixes in the routing table: 11.1.252.0/24, 11.1.0.0/16 and 11.0.0.0/8. In this case the router will select the 11.1.252.0/24 route because this prefix matches best. Of course, the actual match is not performed on the basis of the dotted decimal notation; rather, it is checked on the basis of the binary 32-bit representation of the IP address.
340 The amount of routing information is reduced if the set of addresses assigned to a particular region of the topology can be aggregated into a single routing announcement for the entire set, see Semeria (1996).
341 Marcus (1999, p. 221) mentions as additional factors the introduction of dynamic IP addressing for dial-up users and the deployment of Application Layer Gateways (ALGs).
Figure A-5: CIDR and Internet routing tables
[Diagram: an ISP holding 200.25.0.0/16 assigns 200.25.16.0/20 to its clients; Organisation A announces 200.25.16.0/21 (covering 200.25.16.0/24 … 200.25.23.0/24), Organisation B announces 200.25.24.0/22 (200.25.24.0/24 … 200.25.27.0/24), Organisation C announces 200.25.28.0/23 (200.25.28.0/24 and 200.25.29.0/24), and Organisation D announces 200.25.30.0/23 (200.25.30.0/24 and 200.25.31.0/24); the ISP announces only 200.25.0.0/16 to the Internet]
Source: Semeria (1996)
Finally, it can be seen that the ISP only makes a single announcement into the global
Internet (200.25.0.0/16). Thus, in the global routing tables all these different networks
and hosts are represented by a single Internet route entry.
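This kind of aggregation can be verified with ipaddress.collapse_addresses, which merges contiguous prefixes into the smallest covering set; a short sketch with Organisation A's eight /24s from Figure A-5:

import ipaddress

# Organisation A's eight /24 networks from Figure A-5.
nets = [ipaddress.ip_network(f"200.25.{i}.0/24") for i in range(16, 24)]

# collapse_addresses merges contiguous prefixes into the smallest
# covering set - here a single /21, exactly the route A announces.
print(list(ipaddress.collapse_addresses(nets)))  # [IPv4Network('200.25.16.0/21')]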
An important drawback to the otherwise considerable advantages brought by CIDR arises if an organisation decides to change its ISP. This requires the organisation either to completely change its address space, or the new ISP to announce the organisation's original address space. This situation is highlighted in Figure A-6.342
Figure A-6 shows Organisation A is initially a client of ISP 1, i.e. the routes to
Organisation A are aggregated by ISP 1 into a single announcement. As CIDR uses the
longest match forwarding principle, Internet routers will route traffic to host 200.25.17.25
to ISP 1 which forwards the traffic to Organisation A (Stage 1). Stage 2 indicates that
Organisation A wants to become a client of ISP 2. If Organisation A gives back its
address space to ISP 1 and instead uses address space of ISP 2, there will be no effect
on the Internet routing table (Stage 2). If, however, Organisation A can convince ISP 1
that the address space remains with Organisation A then ISP 2 has to announce an
exception route in addition to its previous address space (Stage 3), i.e. ISP 2
announces both its own 199.30.0.0/16 address block and a route for 200.25.16.0/21.
342 This also raises customer switching costs. A reduction in these costs is one of the advantages of
switching to IPv6. We discuss these issues in Chapter 8.
The longest match forwarding algorithm will mean that traffic to host 200.25.17.25 will
be routed to ISP 2 and not to ISP 1.
Figure A-6: The effect of a change of an ISP on routing announcements in a CIDR environment
[Diagram, three stages: Stage 1 – ISP 1 announces 200.25.0.0/16 and serves Organisation A (host 200.25.17.25) out of 200.25.16.0/21, while ISP 2 announces 199.30.0.0/16; Stage 2 – Organisation A moves to ISP 2 and renumbers, leaving the announcements unchanged; Stage 3 – Organisation A keeps 200.25.16.0/21, so ISP 2 announces both 199.30.0.0/16 and the exception route 200.25.16.0/21]
Source: Semeria (1996)
B Annex to Chapter 4
B-1 Internet protocols, architecture and QoS
IP / TCP / UDP
The basic features of the TCP/IP protocol suite, which is the basis for Internet communication, were defined in 1974,343 and later revised by Postel.344 In common with the Open Systems Interconnection (OSI) seven-layer protocol stack, the Internet rests on a layered model, as can be seen from Figure B-1. In its combined written form, TCP/IP signifies a suite of over 100 protocols that perform lower level functions. IP (Internet Protocol) and TCP (Transmission Control Protocol) do, however, bear the largest share of the workload at the network and transport layers.
Figure B-1: OSI and Internet protocol stack
[Diagram mapping the OSI layers to the Internet stack: Layers 7–5 (Application, Presentation, Session) – Applications and Services; Layer 4 (Transport) – TCP or UDP; Layer 3 (Network) – IP; Layer 2 (Data Link) and Layer 1 (Physical) – unchanged]
Source: Smith and Collins (2002, p. 327)
At layers 1 and 2 there are a multitude of different fixed-link networks (e.g. ISDN, LANs, ATM networks, SDH networks, and (D)WDM) that can transport IP traffic. The Internet
Protocol (IP) as such corresponds to layer 3 and is completely independent of the lower
levels. IP routes datagrams, provides fragmentation and reassembly of (lengthy)
datagrams, and also provides an addressing service. Each packet has an IP-address in
its header. IP provides for the carriage of datagrams from source host to destination
host. Thus, different packets may take different routes and they can have different
343 See Cerf and Kahn (1974).
344 Postel (1981a) and (1981b).
levels of delay in their delivery. Viewed as such, IP alone provides no guarantees
regarding delivery, error recovery, flow control, or reliability.345
However, IP supports a multitude of different services and applications, which can be subdivided into three different groups:346
• Connection-oriented data applications;
• Connection-less data applications, and
• Real-time voice/audio, video and multimedia applications.
These different groups of applications require different transport protocols, as is
indicated in Figure B-2.
Figure B-2: Protocol architecture enabling multi-service networking with IP
[Diagram: connection-oriented data applications run over TCP; connection-less data applications over UDP; voice/audio/video applications over RTP and RTCP with signalling protocols; TCP, UDP and RSVP sit on IP, which runs over transmission networks (LANs, ISDN, ATM networks, SDH/SONET networks, (D)WDM)]
Source: Badach (2000)
Connection-oriented data applications include file transfer (FTP) and HTTP applications; they use the transport protocol TCP. Generally speaking, connection-oriented data applications are those which require a virtual connection, i.e. an "agreement" between the two communicating devices about the control of the communication. TCP performs this task. It is intended to provide for service
345 These functions are left to TCP or UDP, and in more recent times to a range of other protocols that we
discuss below.
346 In the following we rely heavily on Badach (2000) and Smith and Collins (2002).
reliability. It verifies the correct delivery of data between sender and receiver and triggers retransmission of lost or dropped datagrams. TCP/IP operates independently of the maker of an ISP's equipment. Application layer protocols operate on top of TCP/IP, and it is these that provide the meaningful services that customers are prepared to pay for (e.g. FTP, TELNET, SMTP, HTTP). The current version of IP is IPv4, which is described within the general theme of Internet routing in Chapter 3 and in Annex A.
TCP also has some inherent features that can be seen as providing a form of congestion management and quality monitoring. But in this regard TCP does not meet the needs of real-time applications, where packets have to be delivered within a limited time and in the proper order. Packets might not be delivered within the required time when using TCP under congested conditions, and some packets may actually be discarded. This occurs when queuing during congestion results in overflow; the resulting loss indicates congestion to sending TCP-controlled devices, and sending slows down. Each TCP device then gradually starts to increase its sending rate again. Thus, there is a cycle of increase and decrease which provides rate adaptation on the Internet.347,348 During periods of serious congestion this system encourages end-users to reduce their demand on the Internet. Hence, TCP works as a crude form of congestion management.
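This increase/decrease cycle is, in essence, an additive-increase/multiplicative-decrease scheme; a toy simulation (the window and capacity values are arbitrary illustration figures):

# A toy simulation of TCP's rate-adaptation cycle. The capacity
# threshold and starting window are arbitrary illustration values.
CAPACITY = 16        # window size at which the bottleneck queue overflows
window = 10.0        # sender's congestion window (packets in flight)

for rtt in range(12):
    if window >= CAPACITY:
        window /= 2  # loss detected: multiplicative decrease
        event = "loss -> halve"
    else:
        window += 1  # no loss: additive increase, one packet per RTT
        event = "ack -> +1"
    print(f"RTT {rtt:2d}: window = {window:4.1f}  ({event})")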
Connection-less data applications are those that do not require a virtual connection providing sequential transfer of multiple packets; rather, they involve simple request-response types of transactions. Examples are Internet network management under SNMP or the Domain Name System (DNS). These applications use the User Datagram Protocol (UDP) for communication. UDP is a simple protocol which allows for the identification of the source and destination of a communications link.
Contrary to the applications typically carried over TCP, voice/audio and video applications have particular bandwidth requirements and are time-sensitive. For these types of applications, especially for speech, UDP is chosen instead of TCP. The reason is mainly technical: for speech, delay (latency) and jitter (latency variation) are far more crucial in practice to the viability of the service than is a limited amount of packet loss. Even though UDP offers no protection against packet loss, it is much better than TCP as regards delay. However, UDP has to be supported by additional features to offer reasonable voice quality.
RTP
For real-time applications, support for UDP is typically provided by RTP. The suite of real-time protocols, which sits on top of UDP in the protocol stack, is comprised of the
347 Jacobson (1988), in Clark (1997), and RFC813.
348 Where DiffServ is used, Random Early Detection (RED) is employed to avoid the sort of queuing that would cause TCP senders to slow down.
Real-time Transport Protocol (RTP), the Real-time Control Protocol (RTCP), and the Real-time Streaming Protocol (RTSP).349 It is primarily designed to enable multi-participant multimedia conferences. RTP does not address resource reservation or guarantee QoS; it does not even guarantee datagram delivery, relying on lower layers to provide these functions. RTP provides additional information for each packet that allows the receiver to reconstruct the correct packet sequence. The RTP Control Protocol (RTCP) does not carry packets; rather, it is a signalling protocol which provides feedback between session users regarding the quality of service of the session.350 RTP is designed with multimedia, multi-participant 'hook-ups' in mind. One of its main advantages is its scalable interface with transport layer protocols.351
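A minimal sketch of how a receiver can use RTP-style sequence numbers to restore packet order (the packet structure below is a simplification for illustration, not the actual RTP header layout):

from dataclasses import dataclass

# Simplified stand-in for an RTP packet: real RTP headers also carry
# e.g. the payload type and a source identifier (see footnote 349).
@dataclass
class RtpPacket:
    seq: int        # sequence number, incremented per packet
    timestamp: int  # sampling instant of the payload
    payload: bytes

# Packets as they arrive from the network, out of order.
arrived = [
    RtpPacket(3, 480, b"voice-frame-3"),
    RtpPacket(1, 160, b"voice-frame-1"),
    RtpPacket(2, 320, b"voice-frame-2"),
]

# The receiver's play-out buffer re-sorts on the sequence number
# before handing frames to the decoder.
for pkt in sorted(arrived, key=lambda p: p.seq):
    print(pkt.seq, pkt.payload.decode())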
When operating over an ATM transport layer, sequence number and timestamp information take up roughly half of the required overhead. Moreover, RTP can only be used in relatively stable (uncongested) conditions and under the premise that it is implemented in all corresponding routers, which is currently only the case inside Intranets.
We discuss ATM below.
Resource reSerVation Protocol
The protocols discussed in the preceding subsections are not able to provide the sort of QoS statistics required to enable real-time applications, especially voice over IP (VoIP). There is, however, an approach which enables resources to be reserved for a given session prior to any exchange of content between hosts or communicating partners, and this can provide the sort of QoS required for real-time applications. One such protocol is the Resource reSerVation Protocol (RSVP). RSVP is a control protocol which does not carry datagrams; rather, these are transported after the reservation procedures have been performed, through the use of RTP.352
RSVP uses a token bucket algorithm. Tokens are collected in a logical token bucket as a means of controlling transmission rate and burst duration. The simple token bucket algorithm relies on two parameters: the average transmission rate and the logical bucket depth. Arriving packets are checked to see whether their length is less than the number of tokens in the bucket.
Three additional parameters are sometimes also operated: a maximum packet size, a
peak flow rate, and a minimum policed unit. The way the RSVP traffic shaper works
349 Thus, in principle a packet of coded voice is augmented by an RTP header and sent as the payload of an RTP packet. The header contains e.g. information about the voice coding scheme being used, a sequence number, a timestamp for the instant at which the voice packet was sampled, and an identification of the source of the voice packet. See Smith and Collins (2002, p. 329). RTP in addition compensates to some degree for jitter effects caused by the network.
350 The type of information that is exchanged includes e.g. lost RTP packets, delay, and inter-arrival jitter.
351 See Black (1999) for a detailed discussion.
352 RSVP requires, in addition, signalling protocols to make these reservations (discussed further below). However, reservation is only possible if all routers involved in the transmission support RSVP.
implies that some packets will have to wait so that the flow of packets conforms with the
average rate, peak rate, and maximum burst duration set by the RSVP algorithm.
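A minimal sketch of the two-parameter token bucket just described (the rate and depth values are arbitrary):

# A minimal two-parameter token bucket: tokens accrue at the average
# rate up to the bucket depth; a packet conforms if enough tokens are
# available. The rate and depth values here are arbitrary.
class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, depth_bytes: float):
        self.rate = rate_bytes_per_s
        self.depth = depth_bytes
        self.tokens = depth_bytes
        self.last = 0.0

    def conforms(self, packet_len: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True
        return False  # non-conforming: the shaper would delay this packet

bucket = TokenBucket(rate_bytes_per_s=1000, depth_bytes=1500)
print(bucket.conforms(1200, now=0.0))  # True - the burst fits the bucket
print(bucket.conforms(1200, now=0.1))  # False - must wait for a refill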
There are three service classes specified by the IETF:
1. Best Effort;
2. Controlled Load, and
3. Guaranteed Service.
As opposed to ATM, which is connection oriented, IP with RSVP is connectionless, and QoS attributes only apply to a flow of packets and not to virtual channels or paths, as is the case with ATM. RSVP degrades to a best-effort service where congestion points become critical.
As the guaranteed QoS option, known as IntServ, requires RSVP as a partner, we address RSVP further in the section below which discusses IntServ.
IntServ
IntServ (integrated services) architecture is designed to provide a means of controlling
end-to-end QoS per data flow.353 It does this in conjunction with Resource reSerVation
Protocol (RSVP) and by controlling the admission of packets onto the Internet. The
technology is designed to enable QoS statistics to be raised to several levels, thus
making it possible for real-time applications to run on the Internet.
The need for admission control follows from there being many traffic flows sharing
available capacity resources. During periods of heavy usage each flow request would
only be admitted if it did not crowd out other previously admitted flows. Link sharing
criteria are therefore specified during such periods, but typically do not operate during
periods of low utilisation. Packets that are not labelled as requiring priority will form the
group from which packets are dropped when network congestion begins to reach the
point when stated QoS statistics are threatened.
In 2000 the IntServ model offered two service classes, with standards specified by the
Integrated Services Working Group of the IETF:
(i) the controlled load service class, and
(ii) the guaranteed QoS class.
353 This section draws on work by Knight and Boroumand (2000), Desmet, Gastaud, and Petit (1999),
and McDysan (2000).
The QoS offered by the former during periods when the network is in high demand is similar to the QoS provided by an unloaded network not using IntServ technology, such as is provided today on a backbone during uncongested periods. For this option to work, the network needs to be provided with estimates of the demands required by users' traffic so that resources can be made available.
The guaranteed QoS class focuses on minimum queuing delays and guaranteed
bandwidth. This option has no set-up mechanism or means of identifying traffic flows,
and so needs to be used along with RSVP. The receiver of packets needs to know the
traffic specification that is being sent so the appropriate reservation can be made, and
for this to occur, the path the packets will follow on their route between sender and
receiver needs to be known. When the request arrives at the first router along this path, the path's availability is checked and, if confirmed, the request is passed to the next router. If capacity is not available on any router on this path, an error message is returned. The receiver will then resend the reservation request after a small delay.
Figure B-3: Integrated services capable router
[Diagram: RSVP messages enter the control plane (routing module, signal module, admission module); the data path/forwarding plane consists of classifier, policing and scheduling functions]
Source: Desmet, Gastaud and Petit (1999)
The traffic management control functions required of IntServ routers are shown in
Figure B-3. Explanations for these are as follows:354
• Signalling to maintain a flow-specific state on the pathway – known as RSVP.
354 See Desmet et al (1999) for a more complete explanation.
• Admission control to prevent new flows where these would affect QoS for previously admitted flows.
• Classifier to identify packets according to their particular QoS class.
• Policing of flows to enforce customer traffic profiles.
• Scheduler which selects and forwards packets according to a particular rule.
There are several drawbacks with the IntServ/RSVP model:
• RSVP has low scalability due to router processing and memory requirements that increase proportionately with separate RSVP requests.
• The IntServ model takes only very partial account of economic efficiency by enabling different prices to be charged for different levels of service. There is no (economic) mechanism that would prevent users from cornering network resources, an issue apparently given little consideration by designers.
• The complexity of the IntServ/RSVP model is thought by many to mean it is very unlikely to be the way forward for the 'public' Internet, but it may well find a market in intranets, corporate networks, and VPIPNs.
DiffServ
DiffServ (differentiated services) architecture is designed to operate at the edges of networks based on expected congestion rather than actual congestion along paths. It is thus a service based on expected capacities and, as is implied by this description, there is no guaranteed QoS for any particular flow. As with the standard Internet, DiffServ technology is still based on statistical bandwidth provisioning. DiffServ technology is intended to lift QoS statistics for packets that are marked accordingly. DiffServ supports several different QoS standards via the Differentiated Services Code Point (DSCP) in the packet header. Figure B-4 shows this part of the header for IPv4 packets.355 Marking of the DSCP will normally only need to occur once, at a DS network boundary or in the user network. All data shaping, policing and per-flow information occurs at network edges. This means that DiffServ has considerable scaling advantages over IntServ.
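Since the DSCP occupies the six high-order bits of the former IPv4 TOS byte, marking and reading it is simple bit manipulation; a short sketch (the DSCP value 46 is the standard "expedited forwarding" code point):

# The DSCP occupies the six high-order bits of the old IPv4 TOS byte;
# the remaining two low-order bits are currently unused (Figure B-4).
EF = 46  # "expedited forwarding" code point, used here as an example

def mark_dscp(tos_byte: int, dscp: int) -> int:
    """Write a 6-bit DSCP into the TOS byte, keeping the low two bits."""
    return (dscp << 2) | (tos_byte & 0b11)

def read_dscp(tos_byte: int) -> int:
    return tos_byte >> 2

tos = mark_dscp(0, EF)
print(bin(tos), read_dscp(tos))  # 0b10111000 46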
355 Under IPv6 DiffServ cannot apply the TOS field because the basic header does not contain it. However, it will be implemented in the field of a corresponding extension to the basic header.
Figure B-4: Differentiated services field in the IP packet header
[Diagram: bits 0–5 of the field carry the Differentiated Services Code Point (DSCP), bits 6–7 are currently unused; pool 1 code points (of the form xxxxx0) are assigned by standards action]
Source: McDysan (2000)
DiffServ requires that a service profile is defined for each user, the pricing of which will be determined between the ISP and end-user. The subscriber is then allocated a token bucket which is filled at a set rate over time, and can accumulate tokens only until the bucket is full. As packets arrive for the user, tokens are removed. However, all packets, whether tagged or not, arrive in no particular order (as occurs with the present Internet). Under congested conditions, while the user has tokens in credit, all her packets will be marked as "in profile"; packets not tagged as "in" form the group from which packets are dropped under congested conditions. Otherwise routers do not discriminate between packets. This is said to make the system much easier to implement than is the case with IntServ.
The flexibility of the system allows service providers to match the expectation of QoS to expected performance levels, such that a number of different performance levels (and prices) can in principle be provided. There are, however, no specified standards for the detailing of expected capacity profiles. This function is left open for ISPs, enabling them to design their own service offerings. The down-side of this, however, is that without agreement and performance transparency between networks, the service would only operate "on-net".
The possibility exists that DiffServ and IntServ could be used together, with IntServ being used outside the core, where QoS is poorest and where scaling issues are least problematic, and DiffServ being used in the core (i.e. the trunking part) of the Internet, where the expectations-based service could suffice to provide QoS that end-users find sufficiently reliable for real-time service provision in direct competition with the PSTN. It is important to note that DiffServ coexists with multicast and allows the construction and updating of the corresponding multicast tree, which is not the case for IntServ. Rather than employing IntServ outside the core, however, it appears that MPLS is presently being considered for this task.
Further development of accounting and billing systems to be used with DiffServ is
necessary for service providers to build a value chain. Accounting architectures are
currently being developed that support management of Internet resources. These
architectures will also manage pricing and accounting of different classes of services
and service levels. Middleware technologies such as enhanced IP-multicast facilitate a
new range of communication applications.356 Relevant issues are technical as well as
strategic.357 In the last couple of years the IETF has been looking into accounting and
billing systems.
DiffServ is apparently being used by KPNQwest, Telenor and Telia to provide improved
QoS in parts of their networks.358,359 However, a number of backbones appear to be
interested in alternatives like MPLS. Figure B-8 (top) in Annex B-2 shows the
technology options that network managers consider are most important for QoS. With
networks needing to work with different protocols, some networks may be
simultaneously pursuing several potentially (partially) substitutable technologies for
improved QoS.
However, we already have the impression that network designers tend to be looking elsewhere for long-term solutions to QoS problems on the Internet. One part of the cutting edge regarding QoS seems to have moved to facilitating convergence between the optical and data network layers under the concept of Packet over SONET (PoS). For its implementation various providers have proposed an extension of MPLS named the generalised MPLS protocol (GMPLS), which integrates the control function of the IP layer with layer 2 (the optical layer).360 This new concept may help enable the handling of connections to be moved away from proprietary network management, with both routing and signalling being done through the transport layer. GMPLS is one technology that can bring this about. It is thought that these developments will go a long way toward overcoming the "on-net" to "off-net" QoS problems that are presently problematic in the development of the next generation Internet.
ATM and AAL
IP over ATM relies on routing at the edges and switching in the core, consistent with the modern approach to network design – "route once and switch many". While IP is a packet-oriented, soft-state (connectionless) technology located at layer 3 of the OSI scheme, ATM is a cell-oriented, hard-state (connection-oriented) technology located at layer 2 of the OSI scheme.361 IP over ATM is an overlay model involving two different protocol architectures that were not originally designed to work with each other. IP
356 See Internet Engineering Task Force: http://www.ietf.org/html.charters/diffserv-charter.html. A market-based bandwidth management model for DiffServ networks with the implementation of bandwidth brokers has been proposed recently by Hwang et al. (2000).
357 Examples of technical issues are: what kind of accounting architectures should be developed for the
next generation of Internet, and what type of middleware components are necessary. Strategic issues
include the evolution of the Internet service portfolio, the influence of technologies and architectures
on the opportunities for existing players and new entrants, the strategic importance of technologies
and the development of alliances.
358 See Ray Le Maistre, 10 January 2002, Corporate Solutions. www.totaltele.com
359 See Elizabeth Biddlecomb, 1 March 2001,"IP QoS: feel the quality”. www.totaltele.com
360 See Lobo and Warzanskyj (2001); Awduche and Rekhter (2001).
361 This section draws mainly on Black (1999), Marcus (1999) and McDysan (2000); Kercheval (1997).
routing operates at the edges with IP providing for the intelligent handling of
applications and services for end-users. To forward datagrams in the core, IP routing is
replaced by ATM's label swapping algorithm, which for this function has much improved
price / performance compared to IP routing, although with technological progress this
advantage may not last into the medium term.
IP packet headers contain the information which enables them to be forwarded over the
network. IP routing is based on the destination of the packet, with the actual route being
decided on a hop-by-hop basis. At each router the packet is forwarded depending on
network load, such that the next hop is not known with certainty prior to each router
making this decision. This can result in packets that encapsulate a particular
communication going via different routes to the same destination.362 This design means that packets may arrive in a different order than that in which they were sent, requiring buffering.
ATM is connection oriented, meaning that datagrams are sent along predetermined
paths. This feature results in the network being more predictable. Other reasons why
ISPs have adopted IP over ATM include:
• ATM's traffic engineering capabilities, including an ability to rapidly shift bandwidth between applications when it is needed;
• ATM's definitive datagram forwarding performance, and
• ATM's QoS advantages.
The stand-out feature which helps explain all three reasons is ATM's label swapping
algorithm.
In the mid 1990s ISPs began using ATM in the transport layer. ATM enabled them to
multiplex IP and other traffic over an ATM core network. Although there is evidence that
many large ISPs had adopted MPLS (a potential ATM substitute - discussed below) by
the end of 2001, most operators in Europe were still using ATM as a transport protocol,
with many of them unlikely to use MPLS as an ATM replacement in the near future.363
ATM provides a cell relay service with a wide set of QoS attributes.364 ATM uses cells of a constant length and in so doing provides a significant improvement in processing in the ATM switches, although this is of reducing significance due to hardware improvements in modern routers. As ATM is connection oriented (i.e. it is based on virtual channels) it does not need to transport timing and sequencing information with datagrams, as IP does in the RTP header.
362 This is what is meant by a connectionless network.
363 Judge (1999), "MPLS still in fashion"
364 On large ISP networks ATM is provided over SDH (standardised transport module STM-1 = 155.52 Mbps). In the USA there is a similar standard denoted SONET, with a basic frame of 51 Mbit/s (STS-1), equivalent in the optical domain to an optical carrier (OC-1).
There are, however, several negative aspects to running IP over ATM. There is a loss of efficiency due to the small cell size, and when IP packets are converted into ATM cells scarce header space is used, which shrinks the useful payload per cell. ATM has been described as imposing a 22% "cell tax" merely due to the packing of IP into ATM, although compared to existing frame-based protocols a 10% to 20% cell tax is more likely, depending on packet size.365,366 Where small amounts of information are sent this problem tends to be compounded, as information that will not fit into one cell is carried by another similarly sized cell that may be almost empty. Moreover, with IP and ATM operating on two quite different design principles, each with its own addressing system, routing protocols, signalling protocols, and schemes for allocating resources, the network is very complex, with mapping required between the two protocol architectures and two separate sets of kit. However, ATM appears to remain the most popular option for ISPs, although most experts appear to believe that this will change over the next few years.
There are also scalability difficulties with IP over ATM, including the "n-squared”
problem. This occurs because in order for routers to exchange information with all other
peers they must be fully meshed, giving rise to the n-squared problem shown in Figure
B-5, in which five routers are shown requiring 10 virtual channels to maintain adjacency.
Each time a packet is routed a routing table has to be consulted. A complete set of IP
addresses that are available to the network must in principle be maintained at each
router – although in practice the information is held as IP prefixes rather than complete
tables.
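The number of virtual channels needed for a full mesh of n routers is n(n − 1)/2; a one-line check reproduces the figure's example and shows the quadratic growth:

# Virtual channels needed for a full mesh of n routers: n * (n - 1) / 2.
for n in (5, 10, 50):
    print(n, "routers ->", n * (n - 1) // 2, "virtual channels")
# 5 routers -> 10, 10 routers -> 45, 50 routers -> 1225: growth is O(n^2)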
Figure B-5: Fully-meshed IP routers – the n² problem
[Diagram: five routers (A–E) interconnected by 10 virtual channels]
Source: WIK-Consult
365 McDysan (2000).
366 With IP/RTP operating over an ATM transport layer, sequence number and timestamp information take up roughly half of this overhead. MPLS uses less header space than ATM, and this is one of the reasons large ISPs are presently converting to it.
Other scaling problems concern: the stress put on the IGP; bandwidth limitations of ATM SAR interfaces, and the inability to operate over non-ATM infrastructure.367
In the absence of layer 2 and layer 3 concatenation technology (e.g. MPLS, which is still under development and discussed below), ATM requires the ATM adaptation layer (AAL) in order to link with upper layer protocols. AAL converts packets into ATM cells and, at the delivery end, does the reverse. Data comes down the protocol stack and receives an AAL header which fits inside the ATM payload. This enables ATM to accommodate the QoS requirements specified by the end-system. There are four AAL protocols: AAL1, AAL2, AAL3/4 and AAL5:
AAL1: constant bit rate. Suitable for video and voice;
AAL2: variable length, low bit rate, delay sensitive. (Suitable for voice telephony and the fixed network part of GSM networks);
AAL3/4: intended for connectionless and assured data services. Not thought to cope well with lost or corrupt cells, and
AAL5: intended for non-assured data services. Used for Multi-protocol Over ATM (MPOA) and recommended for IP.368
A feature of ATM is that ISP transit customers can negotiate service level agreements for connections that deliver a specified quality of service. Classes supported by UNI 4.0 are: constant bit rate (CBR); variable bit rate, real-time (VBR-rt); variable bit rate, non-real-time (VBR-nrt); available bit rate (ABR), and unspecified bit rate (UBR).
Service definition is related to corresponding grade-of-service (GoS) and QoS parameters. As already seen, the GoS parameter indicates the conditions under which the user can invoke the service. In traditional circuit-switched networks GoS means the probability that a user gets the service in the period of peak usage – often defined by the 'busy hour'. Modern multimedia services allow the QoS conditions that will apply to a connection, or the selection between different predetermined service types (e.g. video with different coding schemes), to be negotiated. Hence the definition of the corresponding GoS parameter is more complex due to the possibility of a renegotiation of the QoS parameter values.
As noted above, QoS parameters define the quality of an established connection. For traditional services offered through circuit switching, the network architecture guarantees these parameters, while in modern packet or cell switched networks these parameters must be negotiated in the connection establishment phase and controlled during the connection.
367 Semeria (1999).
368 This is the service used for the Internet. Others are contracted for but we understand the service is not
part of the public Internet.
QoS on ATM networks is maintained by a combination of four main features. These are:
• Traffic shaping, which is performed by sending devices;
• Traffic policing, which is performed by switching devices;
• Congestion control (needed most notably in the case of ABR), and
• Cell priority (two states, low and high, where low-state cells are candidates for deletion in case of congestion).
Traffic shaping is performed at the user network interface. It checks that traffic falls within the specifications of the contract with the receiving network. Traffic policing is performed by the ATM network through a logical leaky bucket system.369 The bucket leaks at a specific negotiated rate, irrespective of the rate at which cells flow into the bucket (i.e. there is an input rate and a limit parameter).370 The full range of parameters which can be contracted for is compared in Table B-1 against the five ATM Forum defined QoS categories.371,372
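A minimal sketch of such a policer in its "virtual scheduling" (GCRA) form, as referenced in footnote 369; the increment and limit values are arbitrary:

# A minimal GCRA ("virtual scheduling") cell policer, as referenced in
# footnote 369. T is the negotiated inter-cell interval (the leak rate)
# and tau the tolerance (the bucket limit); the values are arbitrary.
class Gcra:
    def __init__(self, T: float, tau: float):
        self.T, self.tau = T, tau
        self.tat = 0.0  # theoretical arrival time of the next cell

    def conforms(self, arrival: float) -> bool:
        if arrival < self.tat - self.tau:
            return False  # cell arrived too early: non-conforming
        self.tat = max(arrival, self.tat) + self.T
        return True

policer = Gcra(T=1.0, tau=0.5)
for t in (0.0, 1.0, 1.2, 1.4):
    print(f"cell at t={t}: {'pass' if policer.conforms(t) else 'police'}")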
Table B-1: QoS and ATM Forum service categories
[Table indicating which of the parameters CLR, CTD, CDV, PCR, SCR, MinCR and ECI can be contracted for under each of the five ATM Forum service categories (CBR, VBR-rt, VBR-nrt, ABR, UBR); "---" entries mark parameters that do not apply to a category]
Notes: CLR: cell loss rate, CTD: cell transfer delay, CDV: cell delay variation, PCR: peak cell rate, SCR: sustainable cell rate, MinCR: minimum cell rate, ECI: explicit congestion indication, CBR: constant bit rate, VBR-rt: variable bit rate, real-time, VBR-nrt: variable bit rate, non-real-time, ABR: available bit rate, and UBR: unspecified bit rate.
Source: Hackbarth and Eloy (2001)
369 Also known as a generic cell rate algorithm (GCRA).
370 Roughly similar devices have been designed to operate with IP, and we discuss these below.
371 For more details see Black (1999); Haendler et al., ATM Networks, Addison Wesley, 3rd ed., 1998.
372 The RSVP protocol, which operates with IP and is a necessary inclusion with the DiffServ architecture, links into this functionality.
The suitability of ATM service categories to applications is shown in Table B-2. For services that require higher quality of service features, like real-time voice and interactive data and video, ATM networks can be configured to provide sustained bandwidth and low latency and jitter, i.e. to appear like a dedicated circuit.
Table B-2: Suitability of ATM Forum service categories to applications

Applications            CBR    VBR-rt  VBR-nrt  ABR    UBR
Critical data           Good   Fair    Best     Fair   No
LAN interconnect        Fair   Fair    Good     Best   Good
WAN data transport      Fair   Fair    Good     Best   Good
Circuit emulation       Best   Good    No       No     No
Telephony               Best   Good    No       No     No
Video conferencing      Best   Good    Fair     Fair   Poor
Compressed audio        Fair   Best    Good     Good   Poor
Video distribution      Best   Good    Fair     No     No
Interactive multimedia  Best   Best    Good     Good   Poor

Source: McDysan (2000)
For high-grade virtual circuits complemented by user admission control, QoS can be given higher statistical guarantees (within a smaller interval of limits) than for datagram networks. But in any case, QoS is complex and dependent on many factors that can in practice degrade it. The common causes of QoS degradation in ATM networks are noted in Table B-3.
Table B-3: Factors that can degrade an ATM network's QoS
[Table mapping the QoS attributes CER, SECBR, CLR, CMR, CDV and CTD against the degrading factors switch architecture, buffer capacity, number of tandem nodes, traffic load, failures, propagation delay, media error statistics, and resource allocation]
Notes: CER = cell error ratio; SECBR = severely errored cell block ratio; CLR = cell loss ratio; CMR = cell misinsertion rate; CTD = cell transfer delay; CDV = cell delay variation
Source: Black (1999)
In practice many of the QoS attributes of ATM are not readily available to ISPs, as ATM must be used with other embedded protocols, and because the protocols that link the IP and ATM layers are complex and do not readily allow the QoS attributes of ATM to be usefully deployed by IP over ATM. The development of application programming interfaces would have the effect of making the QoS attributes of ATM more accessible to end-systems running IP over ATM. This would increase the prospect of IP over ATM providing QoS features that are useful to end-users, such as where ATM runs from desktop to desktop. While 4 or 5 years ago ATM was thought by many to be the means by which the next generation Internet would become a reality, it appears to be at the mature stage of its product life-cycle, and this being the case we would expect its popularity to decline among large ISPs. We understand that some large ISPs are already converting to MPLS. Figure B-8 in Annex B-2 provides information to this effect. We cannot comment on the questionnaire or the margin of error that relates to these two figures.
Multiprotocol over ATM
Multiprotocol over ATM (MPOA) is an evolution of LANE that combines switching and routing. It encapsulates data frames, not necessarily IP packets, in an LLC/SNAP frame which is transmitted via AAL5. In contrast to LANE, which provides layer 2 switching inside a subnet, MPOA provides layer 3 routing/switching between sub-networks inside an AS. It offers quality of service advantages for IP over ATM by enabling QoS guarantees to be provided where hosts are in different sub-networks. MPOA has better scalability than LANE, but as it is primarily a campus technology we do not discuss it further in this study.
SDH and OC
The current network architecture for the physical layer of large ISPs is synchronous digital hierarchy (SDH) in Europe and SONET in North America. The present practice is for ATM to be provided over SDH (standardised transport module STM-1 = 155.52 Mbps).373 When ATM is not used there is a direct mapping of IP packets into an STM-1 frame. In physical backbone networks with SDH architecture it is normal for an STM-1 structure to be routed and maintained via corresponding digital cross-connect equipment (DX4) over an optical fibre cable topology with corresponding STM-N point-to-point transmission systems. Note that STM-1 routing is a pure SDH management function, and the flexibility in rerouting or resetting of STM-1 depends on the management system implementation being used by the operator.
373 In the USA there is a similar standard named SONET, with a basic frame of 51 Mbit/s (STS-1), equivalent in the optical domain to an optical carrier (OC-1).
A physical backbone network based on SDH architecture may integrate various logical layers, such as PSTN/ISDN, Frame Relay (FR) data service, and IP networks, as is shown by Figure 4.1 in the main report. The rerouting facility of the DX4 equipment enables the assignment and re-assignment of physical capacity to the IP layer when congestion is becoming a problem. Due to the length of time this function takes (at least 10 times longer than the reaction of IP routing protocols) this facility has limited practicality and is mainly applied in case of failures within a transmission system or the corresponding fibre cable. In the longer term there may be only two network layers in the core of the Internet: (i) IP/MPLS and (ii) the optical layer.374
MPLS
Multi-Protocol Label Switching (MPLS) is a form of WAN IP-switched architecture which maps labelled IP packets directly into an SDH frame (STM-1 or STM-4). Along with the evolution of WDM/DWDM transmission technology and corresponding line terminals on backbone routers,375 MPLS provides a direct interface with the optical layer.376 As with all multi-layer switching, MPLS addresses the interrelated issues of network performance and scalability. One of the main problems it addresses is routing bottlenecks, especially the backing-up of datagrams where traditional routers employ the same amount of processing for each and every packet, even if the stream of packets is being sent to the same address. Both hardware-based routing and label-based routing are used so that packets can be routed and switched at line speed.
Label-based switching and label routing are used to traverse the core of the network, with full routing only being performed at the edges of the network. However, a negative aspect of MPLS is the fact that it is based on virtual circuits, in contrast to the concept of an all-datagram network. One problem that arises because of this is that there are no standardised solutions for working with multicast protocols. Hence the protocols for multicast routing have to work in parallel to MPLS, due to the large number of multicast trees that need to be dynamically maintained.
MPLS was designed with a view to integrating the best attributes of layer 2 and layer 3 technologies. It provides the same advantages as MPOA but is not limited to campus networks, because it does not require a common ATM infrastructure but only that all routers involved implement the correct protocol. Like MPOA, MPLS integrates switching functionality with IP routing. It is an end-to-end signalling protocol which attaches labels to packets at edge routers and switches them in the core network through the swapping of labels, creating label switched paths (LSPs). This latter functionality is similar to that used by ATM, where virtual path and virtual channel connections (VPIs and VCIs) are created. The ATM algorithm was pretty much carried over to MPLS by the IETF designers. With MPLS, a
374 See Melian et al. (2002) and Bogaert (2001).
375 As are provided by Alcatel, Cisco, Juniper and others.
376 Typically OC-48 (STM-16) or in some cases even OC-192 (STM-64).
logical MPLS routing network is created above layer 2/1, e.g. FR, ATM or a pure SDH or optical network, enabling virtual connections with QoS guarantees to be provided. Similarly to ATM, MPLS does this by establishing a virtual end-to-end connection, and packets are then routed over this connection. MPLS is intended to be an IETF-standardised multi-layer switching solution drawing on several proprietary solutions that were developed around the mid 1990s.377
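Each entry in an MPLS label stack is a 32-bit word: a 20-bit label, three experimental bits, a bottom-of-stack flag and an 8-bit TTL. A small sketch of packing a two-entry stack (the label values are arbitrary):

# Packing an MPLS label stack entry (32 bits, per the IETF encoding):
# 20-bit label | 3 experimental bits | 1 bottom-of-stack bit | 8-bit TTL.
def pack_label_entry(label: int, exp: int, bottom: bool, ttl: int) -> int:
    assert label < 2**20 and exp < 2**3 and ttl < 2**8
    return (label << 12) | (exp << 9) | (int(bottom) << 8) | ttl

# A two-entry label stack, as used when a connection tunnels across
# more than one network; the label values are arbitrary illustrations.
stack = [pack_label_entry(18, 0, False, 64),   # outer tunnel label
         pack_label_entry(1034, 0, True, 64)]  # inner label, bottom of stack
print([hex(e) for e in stack])  # ['0x12040', '0x40a140']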
While the label swapping algorithm means that MPLS and ATM have much in common in their layer-2 operation, there are several important differences between them:
• MPLS was designed to operate with IP, whereas ATM was not;
• Virtual Private Intranets can be provided using MPLS;
• MPLS connects over more than one network (e.g. IAcN, IBbN, IAcN) using the principle of tunnelling and the label stack (where each network in the connection provides a label entry in the label stack);378
• MPLS is designed to run on top of any second-layer architecture, and
• MPLS is not easily scalable as it requires label assignment and the construction of label tables.
The reduction of the number of sub-layers from the complete IP-AAL-ATM-SDH-optical protocol stack to a simple IP-MPLS-OC one (see Figure B-7) has cost advantages for broadband applications, especially under high traffic load. However, there is a trade-off with this approach, as with the cancelling of each intermediate sub-layer the management facilities are reduced, resulting in a sharp reduction in the degree of capacity utilisation, especially in the lowest layer. Moreover, in case of network failure the restoration of the lost capacity tends to be difficult, and restoring the entire previous capacity may on occasions be practically impossible. We understand that it may be quite expensive to implement the simple architecture, as it requires almost a doubling of the layer three structure in order for the service to remain viable, and to protect the physical layer with a 1:1 or N:M concept.379 On the other hand, the costs of maintaining SDH or ATM infrastructure are avoided.
377 This includes Cisco's tag-switching, which has much in common with MPLS, not surprisingly as
experts from Cisco (and other hardware and software manufacturers) were represented in the IETF
MPLS design team. See Semeria (1999) for a discussion of proprietary forerunners to MPLS.
378 See Protokolle und Dienste der Informationstechnologie, Vol. 2, INTEREST Verlag (2001).
379 See Lobo and Warzansky (2001).
Figure B-6: Seven-layer by three-planes OSI model
[Diagram of the ITU functional model: a Global Management plane, a Layer Management plane, a Control plane and a User Info plane each span the Upper Layers, the Network Layer, the Link Layer and the Physical Layer.]
Future architectures
Different future architectures result from different types of wide area Internet (WAIN)
implementation, each of them with corresponding advantages and disadvantages. We
use the OSI layer model shown in Figure B-6 above to show different options for WAIN
implementation in Figure B-7.
Figure B-7: Types of layered architectures

Layer                    Complete          Reduced            Simple
                         architecture      architecture       architecture
4  (logical layer)       TCP/UDP           TCP/UDP            TCP/UDP
3b (network layer)       IP                IP                 IP
3a (network layer)       ---               Label Routing      Label Routing
2b (data-link layer)     AAL               TAG Switching /    TAG Switching /
                                           Label Switching    Label Switching
2a (data-link layer)     ATM               ---                ---
1b (physical layer)      SDH *             SDH *              ---
1a (physical layer)      OC                OC                 OC
Notes: * SONET is used in North America rather than SDH.
Source: WIK-Consult
The 'complete architecture' provides a WAIN implementation with a high degree of
flexibility regarding traffic grouping and routing facilities. It is supported by internationally
accepted standardised protocols, and large European operators have implemented this
form. On the negative side, the 'complete architecture' results in high protocol overhead,
which substantially reduces the degree of utilisation of lower-layer capacities, and
imposes a strong requirement for network OAM (operation, administration and
maintenance) functions. It may also be the most costly solution.380
The second solution, the 'reduced architecture', substitutes IP switching equipment (e.g.
MPLS) for the AAL-ATM layer, but maintains the electrical SDH level. The advantages
of the reduced architecture are a reduction in protocol overhead and a
simpler network OAM function. On the negative side, the additional flexibility in traffic
segregation and traffic management provided by an ITU standard that operates with
ATM is lost, though it is partly substituted by TAG or Label Switching protocols. This implies the
need for the resulting physical facilities to be over-dimensioned in comparison with the
complete architecture solution.
In the 'simple architecture' solution the optical physical layer connects directly to the IP
switching function.381 All traffic management must be provided by this reduced
functionality, resulting in the need for the facilities to be considerably over-dimensioned.
Such an approach may nevertheless be attractive due to economies of scale in high-capacity
optical transmission systems and the corresponding savings in network
operational and management cost, and/or performance advantages.382 The second
and third solutions are preferred by pure IP WAIN operators, and are more popular in
the USA, while the last solution is currently considered to be the architecture for the Next
Generation Internet and represents the newest solutions for Ethernet technologies (Fast
and Gigabit Ethernet).
For the ISP market these options have significant implications. Information from market
and media reports suggests that most European backbone operators were using MPLS
in 2001.383 While some are apparently intending ultimately to use MPLS to replace
ATM, our understanding is that none have yet done so. Moreover, many of those who
are using MPLS are using it primarily to provide IP virtual private networks (IPVPNs),
and/or for its flexibility in traffic engineering in the core of their networks.384,385
380 Note that AAL and ATM lie in layer 2 of the OSI model, such that virtual paths are configured by the
management system but cannot be switched.
381 Routing and switching functionality in an Internet world is discussed in Chapter 3.
382 Each additional layer requires a corresponding management and OAM function for both line-side
management and coordination with the layers immediately above and below, such that reducing the
number of layers implies cost savings. However, a reduced functionality and topology also requires
extra (stand-by) capacity, mainly in the physical layer, in the form of 1:1 or N:M with M<N. To improve
availability, mainly in case of failures, Melian et al. (2002) propose a duplication of this layer and
partial traffic distribution.
383 See the figure in annex B-2 from Infonetics Research.
384 UUNet has stated that it uses MPLS for traffic engineering purposes, and uses IPsec to the end points
of VPNs (see Rubenstein, Feb. 2001, www.totaltele.com).
In 2001 MPLS was still considered to be under development. However, as early as
2000 some operators started implementing it. While it is ultimately envisaged as an
ATM replacement technology, MPLS is also evolutionary, i.e. it uses existing layer-3
routing protocols, and can use all layer-2 protocols such as ATM, frame relay, point-to-point
protocol (PPP) and Ethernet, as well as existing layer-2 equipment. New services can thus be
provided using MPLS while maintaining IP over ATM as the principal network
architecture. ISPs can therefore use features offered by MPLS and maintain their
complete architecture. They may also decide some time in the near to medium term to
shift to a more reduced architecture by using MPLS in place of ATM in layer 2.
385 It has been claimed that MPLS is more prevalent in Europe because its VPN capabilities have proved
especially attractive due to the high price of dedicated leased lines in most European countries
(Rubenstein, Feb. 2001, www.totaltele.com).
B-2 Technologies deployed by ISPs in Europe
Figure B-8 shows the different types of architectures used by larger ISPs in Europe in
2000 and 2001. It should be noted that the figures rest on quite small samples, and
so we would expect quite high margins of error.
Figure B-8: Technologies for QoS and backbone technologies used by ISPs in Europe
[Two bar charts of the percentage of respondents deploying each technology, for 2001 and 2002. 'European Technologies for QoS' covers ATM (71% in both years), MPLS, IPv6, DiffServ, Cisco Tag Switching and RSVP, with the annotations 'ATM dominant' and 'MPLS growing fast'. 'European Backbone Technologies' covers SDH (82% in both years), WDM or DWDM, Gigabit routing, ATM, MPLS, Packet over SDH, Ethernet and Terabit routing, with the annotations 'Similar to US Tier 1 and Tier 2 backbones', 'IP, ATM prevalent', 'MPLS coming fast' and 'Suggests multiservice network'.]
Source: Infonetics Research, The Service Provider Opportunity, Europe 2001
B-3 Interoperability of circuit-switched and packet-switched networks in "Next Generation Networks"
One of the problems preventing VoIP and other real-time services from being provided
on a much wider scale than is presently the case is that VoIP providers need to connect
with end-users through interconnecting with traditional PSTN operators. This is
necessary in order for VoIP providers to obtain access to the PSTN's addressing system
(i.e. telephone numbers), and to use the PSTN's SS7 signalling system. Present VoIP
providers are essentially providing long-distance / international bypass of the PSTN.386
Broadly speaking, the term "Next Generation Network" (NGN) denotes approaches for
future network architectures using a platform for a packet-based transmission of voice,
data and video.387 There is, however, a widespread belief that the traditional telephone
network (analogue PSTN, ISDN) will still be widely used for many years to come, in
particular with respect to the access network. Thus, interoperability between the
telephone network and the NGN will continue to be a crucial issue.
This section presents the main characteristics of two alternative approaches to NGNs in
a world where the PSTN/ISDN network still exists.
We outlined the basic idea of Voice over IP (VoIP) in section 2.1, although further
explanation can be found in section 8.2.1. In principle a VoIP call involves the following
steps:388
• Voice is digitised;
• Digitised voice is placed into packets (a sketch follows this list);
• Packets are compressed (by hardware and/or software);
• E.164 telephony numbers are mapped onto IP addresses;
• IP voice packets are transported via router systems to the terminal address, and
• IP voice packets are converted into ordinary voice signals which the human ear can
understand.
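The packetisation step can be sketched as follows. Since the media streams discussed
below travel over RTP, we assume an RTP-style packet here; the payload and field
values are placeholders rather than a complete implementation:

    import struct

    def rtp_packet(payload, seq, timestamp, ssrc, payload_type=0):
        """Build a minimal RTP packet: a 12-byte header followed by the
        media payload. Payload type 0 denotes G.711 mu-law audio."""
        byte0 = 2 << 6                      # version 2; no padding, extension or CSRCs
        byte1 = payload_type & 0x7F         # marker bit clear
        header = struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                             timestamp & 0xFFFFFFFF, ssrc)
        return header + payload

    # 20 ms of 8 kHz G.711 audio = 160 samples = 160 payload bytes
    packet = rtp_packet(b"\x00" * 160, seq=1, timestamp=160, ssrc=0x1234ABCD)
    assert len(packet) == 12 + 160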
In the traditional voice telephony network there is a very close relationship between the
transportation of the actual voice and the signalling protocols that are needed for call
386 Indeed, the service seems to be most popular for citizens of Asian countries who call the United
States. One reason for this may be that international calls over the PSTN remain relatively expensive
in those countries.
387 A comprehensive treatment of technical issues can be found in Eurescom (2001).
388 We focus here on the PC-to-Phone, Phone-to-PC and Phone-to-Phone alternatives, i.e. VoIP where
both the PSTN and an IP network (the Internet) are involved.
set-up, call monitoring and call disconnection.389 When voice is delivered over IP it is
also necessary to perform signalling and call control functions. To enable real-time
multi-media communications across IP networks, two protocol standards were in use as
at the end of 2001:
• H.323, and
• the Session Initiation Protocol (SIP, RFC 2543).
About H.323
The ITU recommendation H.323, "Packet-based Multimedia Communications Systems",
contains a set of standards required for establishing, monitoring and terminating end-to-end
connections for multimedia services such as VoIP, video communication over IP
and collaborative work for multimedia services.390
H.323 not only specifies the signalling protocol but also a characteristic network
architecture. As can be seen from Figure B-9, the main components shaping an "H.323
zone" are:
• H.323-compatible terminal devices;
• Gateways;
• Multipoint Control Unit(s) (MCUs), and
• Gatekeeper(s).
The primary objective of H.323 is to enable the exchange of media streams between
these components. Typically, an H.323 terminal is an end-user communications device
that enables real-time communications with other H.323 endpoints. A gateway provides
the interconnection between an H.323 network and other types of networks such as the
PSTN.391 An MCU is an H.323 endpoint that manages the establishment and the
tearing down of multi-point connections (e.g. conferences).392 The gatekeeper is
responsible for controlling its H.323 zone, including the authorisation of network access
from the endpoints of the H.323 zone. Moreover, the gatekeeper supports the
389 In the PSTN e.g. the Signalling System No. 7 (SS7) is a crucial technical building block, see e.g.
Denton (1999).
390 H.323 has the official title "Packet-based Multimedia Communications Systems”.
391 One side of the gateway mirrors the requirements of H.323, i.e. it provides H.323 signalling and
conveys packet media. The other side fulfils the requirements of the circuit-switched network. Thus,
from the perspective of the H.323 side a gateway has the characteristics of an H.323 terminal. From
the perspective of the PSTN it has the characteristics of a PSTN (or ISDN) network node.
392 The MCU functionality can be contained in a separate device, although it can also be part of a
gateway, a gatekeeper or an H.323 terminal. The task of an MCU is to establish the media that may be
shared between entities by assigning a capability set to the participants of a multi-point session. The
MCU may also change the capability set if other endpoints join or leave the session.
bandwidth management of connections with particular QoS requirements. In addition it
performs IP addressing tasks.
Figure B-9: H.323 network architecture
[Diagram: an H.323 zone, comprising H.323 terminals, a gatekeeper and an MCU on a network without guaranteed QoS levels, is connected via a gateway to the PSTN (H.324 terminals), an ATM network (H.321 terminals) and ISDN (H.320 terminals).]
Source: Badach (2000).
H.323 is not a single standard; rather, it is a complex suite of standards, each concerned
with different tasks.393 An overview of the main elements of the protocol architecture
enabling multi-service networking with IP is shown in Figure B-10.
With respect to the exchange of the actual payload, Figure B-10 shows that H.323
works with RTP (RTCP) operating over UDP, which operates over IP, i.e. TCP is not
used. Additionally, the protocols H.225 and H.245 are used for control of the terminal
equipment and applications.394 H.225 is a two-part protocol. One part is responsible for
setting up and tearing down connections between H.323 end-points (call signalling); the
other part of H.225 is utilised by the gatekeeper for the management of endpoints in its
zone and is usually called RAS (Registration, Admission, and Status) signalling.395
The main task of the H.245 control signalling is the management of the actual packet
streams (media streams) between two or more participants of an H.323 session. To this
end H.245 opens one or more logical channels with specific properties (such as bit rate)
between H.323 endpoints, which are utilised for the transfer of the media streams.
393 See Schmidt (2000) for a discussion of ITU Internet telephony related standards.
394 In the following we draw heavily from Smith and Collins (2002).
395 RAS signalling is used for registration of an endpoint with a gatekeeper and it is utilised by the
gatekeeper to allow or to deny access to the endpoint.
Figure B-10: H.323 protocol layers
[Diagram of the H.323 protocol stack: audio codecs (G.711, G.722, G.723.1, G.728, G.729.A) and video codecs (H.261, H.263) run over RTP, accompanied by RTCP; RTP, RTCP and the H.225.0 RAS channel run over UDP, while the H.225.0 call signalling channel, the H.245 control channel (via X.224 Class 0) and data applications (HTTP, T.120) run over TCP; everything runs over the network layer (IP), link layer (IEEE 802.3) and physical layer (IEEE 802.3).]
Source: Schmidt (2000)
About Session Initiation Protocol (SIP) 396
SIP was developed by the IETF in 1999.397 It is a protocol for establishing, routing,
modifying and terminating communications sessions over IP networks. It is based on
elements from HTTP, which is used for Web browsing, and the Simple Mail Transfer
Protocol (SMTP), which is used for e-mail on the Internet. Even though SIP is used for
peer-to-peer communications,398 it uses a client-server transaction model in which a
SIP client generates a SIP request which is responded to by a SIP server. SIP mainly
performs those functions in IP-based networks which in traditional networks are
performed by signalling protocols.
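As an illustration of this transaction model, the sketch below assembles a minimal SIP
INVITE request in the text-based format of RFC 2543; the host names and addresses
are invented for illustration, and several headers a complete implementation would
carry are omitted:

    def sip_invite(caller, callee, call_id):
        """Assemble a bare-bones SIP INVITE request (client side)."""
        return "\r\n".join([
            f"INVITE sip:{callee} SIP/2.0",
            "Via: SIP/2.0/UDP client.example.com",    # placeholder host
            f"From: <sip:{caller}>",
            f"To: <sip:{callee}>",
            f"Call-ID: {call_id}",
            "CSeq: 1 INVITE",
            "Content-Length: 0",
            "",                                       # blank line ends the headers
            "",
        ])

    request = sip_invite("alice@example.com", "bob@example.org", "1@client.example.com")

A SIP server answers such a request with a status line (e.g. "SIP/2.0 200 OK"),
mirroring the request/response pattern of HTTP from which SIP borrows.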
To get a clearer picture of what SIP is, it is useful to outline what SIP is not. SIP is not:
• A protocol which controls network elements or terminal devices;
• A resource reservation protocol or prioritisation protocol;
396 This section draws on Sinnreich and Johnston (2001).
397 To some extent it is fair to say that H.323 is oriented to the old circuit-switched telephony world (see
Denton (1999)), whereas SIP was developed with the Internet in mind. Both SIP and H.323 use the
same mechanism for transport of media streams, namely RTP. However, their addressing schemes
are different. For more detail on SIP and H.323 see also SS8 Networks (2001).
398 Peer-to-peer means that both parties involved in a SIP based communication are considered equals.
• A transfer protocol designed to carry large amounts of data;399
• Designed to manage interactive sessions once the sessions are set up;
• Aiming at mapping all known telephony features from circuit-switched networks into
the SIP world.400
Compared with H.323, SIP is considered simpler.401 SIP can use both connectionless
UDP and TCP as its layer-4 transport protocol.
The main building blocks of a SIP-enabled IP communications network are:
• SIP endpoints;
• SIP servers, and
• Location servers.
Broadly speaking, a SIP endpoint is a computer that understands the SIP protocol. SIP
endpoints in particular are fully qualified Internet hosts. Two types of SIP endpoints can
be distinguished:
• User devices such as (SIP-)phones and PCs,402 and
• Gateways to other networks, e.g. connecting to the PSTN, H.323 networks, or to
softswitch-based networks using MGCP (RFC 2805) or Megaco (H.248, RFC 3015)
protocols.
A SIP server is a computer that performs special functions at the request of SIP
endpoints. A SIP server need not be on the same network as the SIP endpoints
associated with it; the only requirement is that the server can be
reached via an IP network.403 A Location Server is a database containing information
399 Rather, it is designed to carry only those comparably small amounts of data required to set up
interactive communications.
400 Although it is worth noting that SIP supports PSTN Intelligent Network services and mobile
telephony features, see Sinnreich and Johnston (2001, p. 13).
401 To quote Sinnreich and Johnston: "SIP...makes no specification on media types, descriptions,
services etc. This is in comparison to a VoIP umbrella protocol such as H.323, which specifies all
aspects of signalling, media, features, services, and session control, similar to the other ISDN family
of protocols from which it is derived." See Sinnreich and Johnston (2001, pp. 56-57).
402 The end devices in a SIP network are also called user agents. They originate SIP requests to set up
media sessions and they send and receive media. A characteristic is that every user agent contains
both a User Agent Client and a User Agent Server. The User Agent Client is the part of the user agent
initiating requests and a User Agent Server is the part generating responses to requests. During a
session usually both parts are used. See Sinnreich and Johnston (2001, p. 57).
403 To be more precise, there are different types of SIP servers with specific tasks: SIP proxy servers
receive SIP requests from an endpoint or another proxy server and forward them to another location.
Redirect servers receive requests from endpoints or proxy servers and respond by indicating
where the request should be retried. Registrar servers receive SIP registration requests and
update the information received from endpoints in location servers.
about users (like URLs), IP addresses and routing information about a SIP-enabled
network.
SIP addressing rests on a scheme similar to e-mail addressing.404 SIP addresses
identify users rather than the devices they are using, i.e. there is no differentiation
between voice and data, telephone or computer. SIP supports queries on DNS servers,
ENUM queries405 and queries at location servers.
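A small sketch of this addressing scheme, splitting a SIP address into its parts (the
user and domain below are invented, following the form shown in footnote 404):

    def parse_sip_address(address):
        """Split an address such as "sip:user@domain;user=phone" into
        scheme, user part, host and parameters."""
        scheme, _, rest = address.partition(":")
        user, _, host_and_params = rest.partition("@")
        host, *params = host_and_params.split(";")
        return {"scheme": scheme, "user": user, "host": host,
                "params": dict(p.split("=", 1) for p in params if "=" in p)}

    parsed = parse_sip_address("sip:492224922543@example.org;user=phone")
    assert parsed["host"] == "example.org" and parsed["params"]["user"] == "phone"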
Work to provide interworking of SIP with other ITU-T protocols, which had begun or
was complete as at the end of 2001, can be seen in Table B-4.
Table B-4:
SIP interworking with ITU-T protocols
ENUM: E.164 to IP address mapping using DNS
SIP-H323 interworking
Accessing IN services from SIP networks
SIP-IN Applications (INAP) Interworking
SIP and QSIG for circuit-switched PBX interworking by transport of QSIG signalling
SIP for telephony - for transport of telephony signalling across IP
Telecommunications and Internet Harmonisation (TIPHON)
Source: Sinnreich and Johnston (2001, Table 2.10)
H.323/SIP Interworking
We have seen that H.323 and SIP rest on very different principles and that they are
backed by different organisations (ITU, IETF). We recognise, however, that there are
developments in the manufacturing/software industry which might lead to a situation
where the technical differences between the approaches disappear.
A company called SS8 Networks has patented what it refers to as a "Signaling Switch",
which is both a SIP proxy server and an H.323 Gatekeeper and which in addition
provides H.323/SIP interworking. Moreover, this solution is claimed to support
ENUM.406
404 The SIP address of one of the authors of this study could be: sip:[email protected]. It is, however,
possible, to also use a telephone number in the user part, like sip:[email protected];
user=phone.
405 See next section.
406 See SS8 (2001a) and SS8 (2001b).
B-4 ENUM407
The convergence of traditional telecommunications networks (PSTN/ISDN, mobile
networks like GSM/UMTS, and broadband cable) and public data networks like the
Internet requires that services and applications become independent of the type of
telecommunications network, i.e. of the lower layers in the OSI model on which they are
based. In order to access a subscriber on an IP address-based network, some sort of
global addressing scheme across PSTN and IP address-based networks needs to be
implemented. ENUM provides such a scheme.
ENUM is first and foremost a protocol defining a DNS-based architecture aiming at
using an ordinary E.164 telephone number408 to identify IP-based addresses.409 ENUM
was standardised by the IETF in September 2000.
ENUM aims at giving an Internet-based user A the opportunity to select the particular
applications which A wants to make available for communication with another
person B, when B knows only the telephone number of A, or B has access only to a
telephone keypad. Applications of ENUM being discussed include: Voice over IP; the
Voice Profile for Internet Mail;410 Internet Fax; Unified Messaging; Instant Messaging;
e-mail, and (personal) web pages.
The basic principle of ENUM is to convert a telephone number into a routing address
and to retrieve information about the specific applications associated with the telephone
number. Several steps are performed in doing this:
1. The phone number is translated into an E.164 number by adding the city/area and
country code;
2. All characters are removed except for the digits;
3. The order of the digits is reversed;
4. Dots are placed between each digit, and
5. The domain "e164.arpa" is appended to the end.
This procedure is clarified by an example:
407 In this section we refer to Frequently Asked Questions on www.enum.org.
408 E.164 is the name of the international telephone numbering plan administered by the ITU specifying
the format, structure, and administrative hierarchy of telephone numbers. The ITU issues country
codes to sovereign nations, and national telecommunications organisations administer the telephone
numbers within each country. A full E.164 number consists of a country code, an area or city code,
and a phone number.
409 These IP-based addresses might be the mail address, a URL or an IP phone address.
410 The latter focuses on enabling voice mail systems to exchange messages over IP networks.
• Step 1: The national telephone number of one of the authors of this study, 02224-9225-43, is translated into +49-2224-9225-43, where the "49" represents the country
code of Germany;
• Step 2: +49-2224-9225-43 is changed into 492224922543;
• Step 3: 492224922543 is changed into 345229422294;
• Step 4: 345229422294 is changed into 3.4.5.2.2.9.4.2.2.2.9.4, and
• Step 5: 3.4.5.2.2.9.4.2.2.2.9.4 is changed into 3.4.5.2.2.9.4.2.2.2.9.4.e164.arpa
ENUM then issues a DNS query on this domain. Once the authoritative name server is
found, it replies with information about the applications that are associated with this
specific telephone number.
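A minimal sketch of the five-step conversion, reproducing the worked example above
(the leading trunk '0' of the national number is dropped when the country code is
prepended, as the example shows):

    def enum_domain(national_number, country_code="49"):
        """Convert a national telephone number into an ENUM domain name."""
        digits = "".join(ch for ch in national_number if ch.isdigit())   # step 2
        e164 = country_code + digits.lstrip("0")                         # step 1
        reversed_digits = e164[::-1]                                     # step 3
        return ".".join(reversed_digits) + ".e164.arpa"                  # steps 4 and 5

    # The example from the text: 02224-9225-43 with German country code 49.
    assert enum_domain("02224-9225-43") == "3.4.5.2.2.9.4.2.2.2.9.4.e164.arpa"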
B-5 Adoption of new Internet architectures
The trend in Internet architectures appears to involve convergence between the data and
optical layers, which is most apparent in the use of IP-based protocols in the network
layer as well as the data layer. Several factors appear to be driving this process, with
arguably the most important being:
• The growth in demand for services provided over the Internet, which among other
things places a premium on network architectures and protocols that are scalable;
• Competition to provide transit services, and
• Competition among Internet hardware suppliers.
It is apparent that the protocols that operate the Internet, and the hardware over which
they operate, are subject to rapid technological change. In this respect Internet
evolution is highly unpredictable, and given the different competitive positions and
strategies of the ISPs, backbone providers, equipment manufacturers, and possibly
other entities with market power in adjacent markets which may be able to lever that
power onto the Internet,411 it seems likely that several (perhaps partially) substitutable
architectures will continue to exist.
From the perspective of end-user ISPs and ISP transit providers, the willingness to
adopt new architectures will involve a trade-off among:
• The vintage of their existing technology;
• The degree to which the new technology has been sorted out and debugged;
• The belief about whether the new technology will itself shortly be replaced by
something significantly better;
• The switching costs involved, and
• Business differences (e.g. strategies, customer base, etc.).
It is clear that uncertainty and asymmetric and imperfect information will mean that ISPs
will have quite different beliefs about the second and third bullets, and will also have
different interests. Thus, when technology is rapidly evolving, and there are many near
substitutes or variations available, they will not all choose the same architecture.
411 Such as through sponsoring a new standard. Perhaps Microsoft is an example of a company that
could lever market power into the Internet.
Annex to Chapter 6

C Important Internet traffic exchange points and players

C-1

Table C-1: The most important public Internet exchange points in North-America 2001
[Presence matrix from Boardwatch 2001: a '1' marks the presence of a US Internet backbone provider at a NAP; TOTAL rows and columns and a 'Difference to 2000' row and column are included. US-IBPs 2001 (rows): Aleron, AT&T, Broadwing, Cable & Wireless, CAIS Internet, Cogent Communications, e.spire Communications, Electric Lightwave, Enetricity Online, Enron Broadband Services, Epoch Internet, Excite@Home, Fiber Network Solutions, Genuity, ICCX, ICG Communications, IDT, Infonet Services, Level 3 Communications, Lightning Internet Services, Multa Communications, Netrail, One Call Communications, OptiGate Networks, PSINet, Qwest Communications, SAVVIS Communications, ServInt Internet Services, Sprint Communications, Teleglobe, Telia Internet, Verio, Williams Communications Group, Winstar Communications, WorldCom, XO Communications. US international NAPs 2001 (columns): CIX Palo Alto, CIX Herndon, Ameritech Chicago NAP, MAE Dallas, MAE East Vienna, MAE East, MAE Los Angeles / LAP, MAE West, PacBell Los Angeles NAP, PacBell San Francisco NAP, PAIX Palo Alto, PAIX-VA 1, Seattle Internet Exchange, Sprint New York NAP, San Diego NAP, NY IIX (New York International IX), Oregon IX.]
Source: Boardwatch 2001; Peering of US-IBPs at internationally important US peering points
Table C-2: The most important public Internet exchange points in North-America 2000
[Presence matrix from Boardwatch 2000, in the same format as Table C-1, with TOTAL rows and columns. US-IBPs 2000 (rows): Abovenet, AGIS, AT&T, BCE Nexxia, Broadwing, Cable & Wireless, CAIS Internet, Concentric, Electric Lightwave, Epoch Internet, e.spire, Excite@Home, Exodus, Fibre Network Solutions, Global Center, Globix, GST, GTE, ICG Communications, IDT, Intermedia, Level 3 Communications, Lightning Internet Services, Multa Communications, NetRail, Onyx Networks, OrcoNet, PSINet, Qwest, RMI.NET, SAVVIS Communications*, ServINT Internet Services, Splitrock, Sprint Communications, Teleglobe, UUNET, Verio, Vnet, Williams Communications, Winstar Communications, ZipLink. US international NAPs 2000 (columns) include: CIX Herndon, Ameritech Chicago NAP, PacBell Los Angeles NAP, PacBell San Francisco NAP, PAIX Palo Alto, PAIX-VA 1, Seattle Internet Exchange, Sprint New York NAP, NY IIX (New York International IX), MAE West.]
Source: Boardwatch 2000; Peering of US-IBPs at internationally important US peering points
Table C-3: Extended list of Internet Exchange Points in the USA and Canada (ordered along number of ISPs connected)
[For each IXP the table records: name; location (town, state); operator; URL; non-profit status (y/n); number of ISPs connected; traffic at the exchange; and connections to the exchange (capacity)*. The entries, in descending order of connected ISPs:
• PAIX Palo Alto: Palo Alto (California); PAIX.net Inc. (AboveNet, Metromedia Fiber N.)**; www.paix.net; n; 139 ISPs.
• Ameritech Chicago NAP: Chicago (Illinois); SBC/Ameritech; http://nap.aads.net/main.html; n; 123 ISPs; one of the original National Science Foundation exchange points.
• CIX Herndon: Herndon (Virginia); Commercial Internet eXchange Association; www.cix.org; y; 66 ISPs.
• CIX Palo Alto: Palo Alto (California); Commercial Internet eXchange Association; http://www.cix.org/index.htm; y; 66 ISPs; moved to PAIX Palo Alto.
• MAE East: Washington D.C.; WorldCom; www.mae.net/#east.html; n; 64 ISPs; one of the original National Science Foundation exchange points.
• MAE West: San José (California); WorldCom / NASA Ames; www.mae.net/#west.html; n/y; 60 ISPs.
• PacBell Los Angeles NAP: Los Angeles (California); Pacific Bell SBC; n; 59 ISPs; one of the original National Science Foundation exchange points.
• PacBell San Francisco NAP: San Francisco (California); Pacific Bell SBC; n; 59 ISPs; one of the original National Science Foundation exchange points.
• MAE East Vienna: Vienna (Virginia); WorldCom; www.mae.net/#east.html; n; 57 ISPs (closing, ISPs move to MAE East); restricted access.
• NY IIX (New York International IX): New York (New York); Telehouse; http://www.nyiix.net/; n; 42 ISPs; international and local IXP service; daily traffic details available.
• Seattle Internet Exchange: Seattle (Washington); ISI; www.altopia.com/six; y; 42 ISPs.
• PAIX-VA 1: Vienna (Virginia); PAIX.net Inc.; www.paix.net; n; 30 ISPs; daily traffic details available.
• MAE Dallas: Dallas (Texas); WorldCom; http://www.mae.net/#Central; n; 19 ISPs; restricted access.
• TORIX: Toronto (Ontario), Canada; Toronto Internet Exchange Inc.; www.torix.net; y; 11 ISPs.
• CANIX: Toronto (Ontario), Canada; no public information / no URL available.
• MAE Los Angeles / LAP: Los Angeles (California); WorldCom; LAP: USC/ISI; http://www.isi.edu/div7/lap/; n/y; MAE LA is not accepting new customers.
• Sprint New York NAP: Pennsauken (New Jersey); Sprint; http://www.sprintbiz.com/index.html; one of the original National Science Foundation exchange points.
• SIX (Seattle Internet Exchange): Seattle (Washington); founded by WolfeNet and IXA (now SAVVIS), financed by donors; http://www.altopia.com/six/; y; 42 ISPs; daily traffic details available.
• InterNAP: Seattle (Washington); InterNAP; www.internap.com; n; 35 ISPs.
• PAIX Vienna (Tyson's Corner): Vienna (Virginia); PAIX.net; n; 28 ISPs.
• IPeXchange: (New Jersey); www.avnet.org/; y; 25 ISPs.
• IndyX (Indianapolis Data Exchange): Indianapolis (Indiana); One Call Communications; http://www.indyx.net/; n; 23 ISPs.
• HIX (Hawai`i Internet Exchange): Hawai`i; University of Hawai`i Information Technology Services (UH-ITS); http://www.lava.net/hix/; y; 21 ISPs.
• SD-NAP (San Diego Network Access Point): San Diego (California); University of California's San Diego Supercomputer Center (SDSC); http://www.caida.org/projects/sdnap/content/; y; 21 ISPs.
• PAIX New York: New York (New York); PAIX.net; n; 20 ISPs.
• LAIIX (Los Angeles International Internet eXchange): Los Angeles (California); Telehouse; http://www.laiix.net/; n; 18 ISPs; interconnected with LAP (Los Angeles Access Point)/MAE-LA.
• Equinix Exchange: Ashburn, San Jose, Dallas; Equinix Inc.; https://www.equinixmatrix.com/exchange_pub/; n; Ashburn: 8, San Jose: 5, Dallas: 1 ISPs.
• Nova Scotia GigaPOP: Halifax, Canada; Dalhousie University; y; 13 ISPs; daily traffic details available.
• Utah REP: (Utah); several consultancies; http://utah.rep.net/; y; 13 ISPs; daily traffic details available.
• The Houston "MAGIE" (Metropolitan Area Gigabit Internet Exchange): Houston (Texas); Phonoscope; http://www.magie-houston.net/; n; 11 ISPs.
• TorIX (Toronto IX): Toronto, Canada; Toronto Internet Exchange Inc.; http://www.torontointernetxchange.net/; n; 11 ISPs.
• PAIX Seattle: Seattle (Washington); PAIX.net; n; 10 ISPs.
• Baltimore NAP: Baltimore (Maryland); http://www.baltimore-nap.net/; 9 ISPs; daily traffic details available.
• BCIX (British Columbia IX): Vancouver, Canada; BC NET; http://www.bcix.net/BCIX.htm; n; 9 ISPs.
• Neutral NAP: McLean (Virginia); Pimmit Run Research, Inc.; http://www.neutralnap.net/; y; 9 ISPs.
• PAIX Dallas: Dallas (Texas); PAIX.net; n; 9 ISPs.
• QIX (Quebec IX): Quebec, Canada; RISQ Inc. (Réseau d'informations Scientifiques de Quebec); http://www.risq.net/qix; y; 9 ISPs.
• Austin MAE (Austin Metro Access Point): Austin (Texas); http://www.fc.net:80/map/austin/; y?; 8 ISPs.
• OIX (Oregon Exchange): Eugene (Oregon); University of Oregon; http://antc.uoregon.edu/OREGON-EXCHANGE/; y; 8 ISPs.
• Boston MXP: Boston (Massachusetts); Allegiance Telecom; http://www.bostonmxp.com/index.phtml; y; 7 ISPs.
• EIX (Edmonton IX): Edmonton, Canada; Edmonton Internet Exchange Society; http://www.eix.net/; y; 7 ISPs; daily traffic details available.
• PAIX Atlanta: Atlanta (Georgia); PAIX.net; n; 7 ISPs.
• Compaq Houston NAP: Houston (Texas); Compaq; http://www.compaq-nap.net/; n; 5 ISPs.
• MAX (Mountain Area Exchange): Denver (Colorado); Colorado Internet Cooperative Association; http://www.themax.net/; y; 5 ISPs; daily traffic details available.
• PITX (The Pittsburgh Internet Exchange): Pittsburgh; pair Networks, Inc.; http://www.pitx.net/; n; 4 ISPs.
• San Antonio MAP (Metro Access Point): San Antonio (Texas); TexasNet; http://www.fc.net/map/samap/; n; 4 ISPs.
• COX: Oregon; Bend Cable, Bend Net, and Empire Net (member ISPs); http://www.centraloregon.net/; y; 3 ISPs.
• HMAP: Houston (Texas); Insync Internet Services; www.fc.net:80/map/; 2 ISPs (?).
• MAGPI (Mid-Atlantic Gigapop in Philadelphia for Internet2): Philadelphia; NSF; http://www.magpi.org/; y; 2 universities; daily traffic details available.
• 6IIX (International Internet eXchange points for IPv6: 6IIX-LA, 6IIX-NY): Los Angeles (California), New York (New York); Telehouse; http://www.6iix.net/; n; opened 2000/2001.
• AIX (= MAE West): NASA Ames Research Center; http://aix.arc.nasa.gov/.
• Atlanta-NAP (Atlanta Exchange Point): Atlanta (Georgia); no further details checked.
• BC Gigapop: (British Columbia), Canada; BC NET; http://www.net.ubc.ca/BC-GIGAPOP/; research project.
• CMH-IX (Columbus Internet eXchange): Columbus; http://www.cmh-ix.net/.
• Detroit MXP: Detroit (Michigan); Allegiance Telecom (?); www.mai.net.
• DFWMAP (Dallas Fort Worth): Dallas (Texas); www.fc.net:80/map/.
• EWIX (Eastern Washington Internet Exchange): Spokane (Washington); www.dsource.com.
• MIX (Montreal IX): Montreal, Canada; http://megasun.BCH.UMontreal.CA/~burkep/sps.html; restricted access.
• Nashville CityNet: Nashville (Tennessee); nap.nashville.net (wrong URL).
• NIX (Northwest Internet eXchange): (Washington); www.structured.net/nix/ (wrong URL).
• NMNAP (New Mexico NAP): (New Mexico); www.nmnap.net (wrong URL).
• PHLIX (Philadelphia Internet Exchange): Philadelphia (Pennsylvania); www.phlix.net (wrong URL).
• SNNAP (Seattle Network-to-Network Access Point): Seattle (Washington); weber.u.washington.edu (wrong URL).
• Star Tap NAP: Chicago (Illinois); Ameritech / University of Illinois at Chicago; http://www.startap.net/CONNECT/; Star Tap project founded by NSF; y.
• GENUiTY: Nap.Net was acquired by GTE Internetworking, which has now become GENUiTY; http://www.napnet.net/.
• MAE New York: New York (New York); http://www.mfsdatanet.com/MAE (wrong URL); in planning.
• MAE-Chicago: Chicago (Illinois); WorldCom; http://mfs.datanet.com/MAE; shut down / not accepting new customers.
• MAE-Houston: Houston (Texas); WorldCom; http://mae.houston.tx.us.
• FibreNAP: (Ohio).
• FloridaMIX: no further details checked.
• STLOUIX: St. Louis (Missouri); www.stlouix.net (wrong URL).
• TTI (The Tucson Interconnect): Tucson (Arizona); www.tti.aces.net (wrong URL).
• TTN (The Tucson NAP): Tucson (Arizona); www.ttn.rtd.net (wrong URL).
• VIX (Vermont IX): (Vermont); www.hill.com/trc/vix/index.html (wrong URL).]
Sources: TeleGeography 2000, OECD 1998, Boardwatch 2000, Colt 2001, EP.NET, LLC, homepages of exchange points
Remarks: MAE (Metropolitan Access Exchange) is a trademark of WorldCom for their IXPs. IXP = general expression for Internet exchange point; CIX = commercial Internet exchange (but mostly not-for-profit); NAP = Network/Neutral Access Point; GIX = Gigabit Internet Exchange; IBX = Internet Business Exchange (a trademark of Equinix).
** = PAIX.net was founded as Digital Equipment Corporation's Palo Alto Internet Exchange. PAIX is a wholly owned subsidiary of Metromedia Fiber. Another subsidiary is AboveNet Communications.
NA = not available from homepage, personal enquiry necessary; ? = not checked yet / further enquiry necessary; y = not-for-profit public Internet exchange; n = commercial public exchange; * = sum of all connected ISPs' capacities.
Grey-marked IXPs: internationally important US NAPs (main US and Canadian IXs according to Boardwatch 2000 and TeleGeography 2000, 2001).
Table C-4: Most important IXPs in Europe (ordered according to number of ISPs connected) [as of May 2001]
[For each NAP the table records: name; location; legal operator; URL; non-profit status (y/n); number of ISPs connected; traffic at the exchange; and connections to the exchange (capacity)*. The entries:
• AMS-IX: Netherlands, Amsterdam; AMS-IX Association; www.ams-ix.net/ (4 locations); y; 125 ISPs; 3 Gbit/s (monthly average).
• LINX: UK, London; London Internet Exchange Limited; www.linx.net (3 locations); y; 118 ISPs; 5.5 Gbit/s; majority of LINX members: 100 Mbit/s capacity, top 5+ members: gigabit capacity.
• GNI: France, Grenoble; GNI – Grenoble Network Initiative; http://www.gni.fr/index_accueil.htm (only French); y; 84 ISPs.
• M9-IX: Russia, Moscow; Russian Institute for Public Networks (RIPN); www.ripn.net/ix; y; 84 ISPs.
• DE-CIX: Germany, Frankfurt/Main; Eco Forum e.V. (not-for-profit industry association of ISPs); www.eco.de (2 locations in Frankfurt/M.); y; 75 ISPs; 2.5 Gbit/s.
• VIX: Austria, Vienna; Vienna University; http://www.vix.at/ (2 locations); y; 72 ISPs; daily traffic details available.
• MIX Milan: Italy, Milan; MIX S.r.L.; http://www.mix-it.net/; y; 66 ISPs; max speed 1000.0 Mbit/s; daily traffic details available.
• SFINX: France, Paris; Renater; http://www.sfinx.tm.fr/; y; 59 ISPs.
• BNIX: Belgium, Brussels; Belnet – Belgian National Research Network; www.belnet.be/bnix (3 locations); y; 45 ISPs; daily traffic details available.
• INXS: Germany, Munich; Cable & Wireless ECRC GmbH; http://www.inxs.de; n; 43 ISPs.
• BIX: Hungary, Budapest; www.nic.hu/bix (only Hungarian); 42 ISPs; IPv6.
• NIX: Norway, Oslo; Centre for Information Technology Services (USIT), University of Oslo; http://www.uio.no/nix/info-english-short.html (2 locations); y; 39 ISPs.
• SE-DGIX (NETNOD): Sweden, Stockholm; Netnod Internet Exchange i Sverige AB with SOF (The Swedish Operators Forum); http://www.netnod.se/index-eng.html (several locations: 2 x Stockholm, Gothenburg, Malmö; planned: Stockholm, Sundsvall); y; 39 ISPs.
• CIXP: Switzerland, Geneva; CERN IX Point; wwwcs.cern.ch/public/services/cixp/index.html (2 locations: CERN and Telehouse); y; 37 ISPs; daily traffic details available.
• NIX.CZ: Czech Republic, Prague; NIX.CZ Internet Service Provider Association; http://www.nix.cz/; y; 31 ISPs.
• DIX: Denmark, Lyngby (near Copenhagen); UNI-C (partner of the Danish Ministry of Education); www.uni-c.dk/dix/; y; 29 ISPs.
• LoNAP: UK, London; Lonap Limited; http://www.lonap.net/; y; 29 ISPs; daily traffic details available; LoNAP provides connections of up to 100 Mbit/s.
• L-GIX: Latvia, Riga; LATNET; http://www.nic.lv/gix.html; y; 27 ISPs.
• SIX-Slovak IX: Slovakia, Bratislava; Slovak University of Technology; http://www.six.sk/; y; 25 ISPs; daily traffic details available.
• PARIX: France, Paris; France Télécom; http://www.parix.net/anglais/; n; 25 ISPs.
• MaNAP: UK, Manchester; MaNAP; http://www.manap.net/; y; 24 ISPs; 10 Mbit/s, 100 Mbit/s, 1 Gbit/s possible.
• TIX: Switzerland, Zurich; IX-Europe; http://www.tix.ch/; n; 21 ISPs.
• PIX: Portugal, Lisbon; Fundacao para a Computacao Cientifica Nacional (FCCN); http://www.fccn.pt/PIX/ (only Portuguese); y; 20 ISPs; restricted access.
• CUIX: Ukraine, Dnepropetrovsk; Central Ukrainian Internet Exchange; http://cuix.dp.ua/ (only Cyrillic); 20 ISPs.
• WIX: Poland, Warsaw; http://www.wix.net.pl/ (only Polish); 19 ISPs.
• UA-IX: Ukraine; Ukrainskaja set' obmena internettrafikom; http://www.ua-ix.net.ua/ (only Cyrillic); 19 ISPs; 24 x 10/100 Mbit/s, 2 x 1 Gbit/s.
• NAP Nautilus: Italy, Rome; CASPUR (Consortium for the Applications of Supercomputation for University and Research, University of La Sapienza, Rome); http://www.nap.inroma.roma.it/; y; 18 ISPs; daily traffic details available.
• BUHIX: Romania, Bucharest; www.buhix.ro (only Romanian); y; 17 ISPs.
• ESPANIX: Spain, Madrid; ESPANIX; www.espanix.net (only Spanish); y; 16 ISPs.
• AIX: Greece, Athens; Greek Research and Technology Networks; www.aix.gr; y; 14 ISPs; daily traffic details available.
• CATNIX: Spain, Barcelona; CATNIX (founder: Catalan Foundation for Research); http://www.catnix.net/EN/; y; 13 ISPs; 12.975 GB (average 2000).
• IXEurope: Switzerland, Zurich; IX Europe PLC; http://www.telehouse.ch/ (same as TIX?); n; 13 ISPs.
• FICIX: Finland, Helsinki; Finnish Commercial Internet Exchange Consortium; www.ficix.fi; y; 11 ISPs; prime-time traffic over 250 Mbit/s (October 1998); daily traffic details available.
• NSK-IX: Russia, Novosibirsk; Novosibirsk State University; http://www.ripn.net:8082/ix/nsk/networks (only Cyrillic); y; 11 ISPs.
• LIX: Luxembourg; RESTENA (Réseaux Téléinformatique de l'Education Nationale et de la Recherche); http://www.lix.lu/; y; 10 ISPs.
• SPB-IX: Russia, St. Petersburg; St. Petersburg State University; http://www.ripn.net:8082/ix/spb/networks (only Cyrillic); y; 10 ISPs.
• MANDA: Germany, Darmstadt; Hochschulrechenzentrum der Technischen Universität Darmstadt (partners in this project are all important research organisations in Darmstadt, including two research centres of DTAG); http://www.tu-darmstadt.de/manda/; y; 8 ISPs; 375 Mbit/s.
• INEX: Ireland, Dublin; INEX; http://www.inex.ie/; y; 8 ISPs; daily traffic details available.
• SIX-Z: Switzerland, Zurich; TheNet – Internet Services AG; http://www.six.ch/six-z.htm; y; 8 ISPs.
• MPIX: Russia, St. Petersburg; http://www.rusnet.ru/mpix/ (only Cyrillic); 5 ISPs.
• Samara-IX: Russia, Samara; Samara State University; http://nic.samara.net/ix.html (only Cyrillic); y; 5 ISPs.
• World.IX: UK, Edinburgh; World.IX Ltd.; http://www.scolocate.com/worldix.html; n; 5 ISPs.
• CyIX: Cyprus; Cyprus Technology Foundation; http://www.cytanet.com.cy/cyix-en.html; 3 ISPs.
• Ural-IX: Russia, Ekaterinburg and Perm; Ural State University / Perm State Technical University; http://www.ripn.net:8082/ix/ural/networks/ (only Cyrillic); y; 3 ISPs.
• SIX-B: Switzerland, Bern; TheNet – Internet Services AG; http://www.six.ch/six-b.htm; y; 2 ISPs.
• ebone: Germany (hosting centres: Berlin, Dresden, Frankfurt/M., Hannover, Munich, Stuttgart); Ebone (GTS); http://ebone.com; n; private peering; 49 PoPs in European cities, 17 European hosting facilities; offers "highest performance European peering and worldwide Tier 1 private peering" (GTS).
• Lyonix IN2P3: France, Lyon; Institut du Centre National de la Recherche Scientifique (CNRS); http://www.lyonix.net/lyonix.htm; y; just opened (Dec. 2000).
• CIX: Croatia, Zagreb; AT&T?; http://www.cix.hr/ (only Croatian).
• MAE-Paris: France, Paris; WorldCom; www.mfst.com/mfs-international/paris.html; n.
• DFN: Germany, Berlin; DFN-Verein; http://www.dfn.de/ (peering information missing); y; 2.4 Gbit/s.
• MAE-FFM: Germany, Frankfurt/Main; WorldCom; www.mfst.com/mfs-international/frankfurt.html; n.
• INX-HH: Germany, Hamburg; POP-Point of Presence GmbH; http://www.inx-hh.de/; n.
• M-CIX: Germany, Munich; Munich ISPs; http://www.m-cix.de/ (e-mail address for questions available); n.
• MIX-Malta: Malta; independent operator; y.
• SCOTIX: UK, Edinburgh; SCOTIX Ltd.; http://www.scotix.net/; y.
• Xchange Point: UK, London; Xchangepoint Holding Company (founded 2000 by a former LINX manager); http://www.xchangepoint.net/; n; plans to establish IXPs in Paris, Milan, Brussels, Switzerland, Vienna, Germany, Madrid, Copenhagen, Amsterdam.
• NDIX (Nederlands-Duitse Internet Exchange): Netherlands, Enschede; NDIX BV; http://www.ndix.nl; y.
• DTAG: Germany (Hamburg? Düsseldorf? Hannover? Cologne? Stuttgart? Munich? Berlin? Frankfurt/M.?); DTAG; n; 1,600 PoPs (same number as PSTN switches).
• Iceland; Palermo (Italy, Palermo); TURNET (Turkey): not checked yet, further enquiry necessary.
• i-Exchange: UK, London; www.i-exchange.co.uk/ (dead link, no success with search engine research).
• MIEX: Netherlands, Maastricht; Web 3 Hosting B.V.; http://www.miex.nl/ (only Dutch information); only hosting, no peering.
• SWISSIX: Switzerland, Zurich; only a website for information about the TIX and the CIXP.
• AMS DC2: Netherlands, Amsterdam.
• CISP: Switzerland, Geneva; founded summer 2001.
• ebone: France, Paris; Ebone (GTS).
• GIX: France, Paris.
• Vienna NAP: Austria, Vienna.
• ebone: UK, London; Ebone (GTS); http://www.ebone.com/; big hosting centre.]
Sources: TeleGeography 2000, OECD 1998, Boardwatch 2000, Colt 2001, EP.NET, LLC, homepages of exchange points
Remarks: MAE (Metropolitan Access Exchange) is a trademark of WorldCom for their IXPs. IXP = general expression for Internet exchange point; CIX = commercial Internet exchange (but mostly not-for-profit); NAP = Network/Neutral Access Point.
NA = not available from homepage, personal enquiry necessary; ? = not checked yet / further enquiry necessary; y = not-for-profit public Internet exchange; n = commercial public exchange; * = sum of all connected ISPs' capacities.
C-2 Peering guidelines of Internet backbone providers
The following tables exhibit main features of the peering guidelines of selected Internet
backbone providers. The expressions used in the tables are taken essentially from the
original documents. The data was collected in the 3rd quarter of 2001. We focused on
the requirements for peering in the US and Europe.
Table C-5: Overview of peering guidelines – Broadwing Communications

Company: Broadwing Communications412
Network for which peering policy is applied: Not specified
Requirements for connections to peering points:
  Number: At least 2 connections
  Location: Geographically dispersed (geographic regions defined for the location of network nodes). Public: connections to at least 2 geographically dispersed exchange points where Broadwing is also connected (currently MAE-East ATM, MAE-West ATM, Ameritech NAP, MAE-Dallas, Sprint NAP)
  Bandwidth: At least 45 Mbit/s at all connections
  Traffic: Imbalance of traffic at most 2.5:1 in either direction. Private: total traffic at least 20 Mbit/s on all connections in both directions; for additional connections to existing locations, average weekly traffic utilisation of 85% for an individual existing connection required
SLAs on traffic exchanged:
  • Availability: at least 99.99%
  • Packet loss: at most 1%
  • Delay: at most 100 ms
Routing requirements:
  • CIDR at edge routers using BGP-4 and aggregated routes
  • Consistent routing announcements
  • No route of last resort directed at Broadwing
  • Route announcements only for own customers
Network characteristics:
  Size of network: Nationally deployed Internet backbone in the US
  Bandwidth of backbone circuits: Dedicated circuits of at least OC-3
  Location of network nodes: Nodes in 8 geographic regions where Broadwing has nodes
  Topology: Redundant Internet backbone; each backbone PoP connected to at least 2 other hubs on own backbone
Operational requirements: Network operation centre 24x7
Informational requirements:
  • List of existing interconnection connections
  • Establishment of "trouble ticket" and "escalation" procedures
  • Copy of network
  • Register routes with Internet Routing Registry or other registry
Guidelines available at: www.broadwing.com/download/peering/policy_2000_ver2.0.doc
Source: WIK-Consult

412 Specific requirements for direct (private) or public peers are denoted as such in the following lines. All other requirements represent the general peering policy of the company.
Table C-6: Overview of peering guidelines – Cable&Wireless

Company: Cable&Wireless413
Network for which peering policy is applied: US peering policy with global backbone (AS 3561)
Requirements for connections to peering points:
  Number: At least 4 connections
  Location: Geographically dispersed (including one on the East and one on the West coast, one in the Midwest or in the South)
  Bandwidth: At least 155 Mbit/s at all connections
  Traffic:
  • Aggregated traffic ratio at most 2:1
  • Traffic volume at each connection at least 45 Mbit/s
SLAs on traffic exchanged: Not specified
Routing requirements:
  • CIDR at edge routers using BGP-4 and providing sufficiently aggregated routes (applicant agrees to operate any routing protocol that may be defined later)
  • Consistent routing announcements
  • No route of last resort directed at C&W
  • Route announcements only for own customers
  • Filtering of routes (by prefix or AS)
Network characteristics:
  Size of network: Nationally deployed Internet backbone in the US
  Bandwidth of backbone circuits: Dedicated circuits of at least OC-48c
  Location of network nodes: Nodes in 9 geographic regions where C&W has nodes
  Topology: Redundant Internet backbone; each backbone hub connected to at least 2 other hubs on own backbone
Operational requirements: Network operation centre 24x7
Informational requirements:
  • Copy of network
  • Network topology
  • Capacity between nodes
  • Register routes and routing policy with Internet Routing Registry
Guidelines available at: http://www.cw.com/th_05.asp?ID=us_10_02
Source: WIK-Consult

413 The following conditions refer both to private and public peering.
Internet traffic exchange and the economics of IP networks
Table C-7:
241
Overview of peering guidelines – Electric Lightwave
Company
Electric Lightwave (EL)414
Network for which
peering policy is
applied
Not specified
Requirements for connections to peering points
Number and location
• Public domestic: connection to at least 3 locations where EL has NAP presence (currently PAIX Seattle, Oregon IX, MAE West FDDI, PAIX Palo Alto, MAE West ATM, MAE Central, Chicago AADS, MAE East ATM, PAIX Vienna); two of these connections in peering region 1 or 2 and 5; obligation to interconnect at any NAP where both parties have a presence
• Private domestic: at least 3 locations where EL has an edge router; one of these in peering region 1 or 2, one in peering region 3 or 4, one in peering region 5
• Private and public international: at minimum a West coast connection for Asian based peers, an East coast connection for European based peers
Bandwidth
• Sufficient bandwidth behind any peering connection
• Private domestic: minimum size of PNI is DS3
Traffic: Public domestic: at least 1 Mbps of cumulative traffic at each peering point
SLAs on traffic exchanged: Public: when peering across WorldCom ATM NAPs, best effort PVCs only
Routing requirements
• Same routing policy announced at all peering points
• No abuse, e.g. pointing default
• Separate BGP4 peering policy and static routing policy v1.0
Network characteristics
Size of network
• Private domestic: at least 10 PoPs
• Private or public international: majority of network and services offered outside North America
Bandwidth of backbone circuits
• Private domestic: at least OC12
• Private or public international: at least DS3 connecting the US with one point outside North America
Location of network nodes
• Private domestic: nodes in different cities; at least 1 PoP must be located in each of EL’s 5 peering regions
414 Specific requirements for direct (private) or public peers are denoted as such in the following lines. All other requirements represent the general peering policy of the company. A further distinction is made between domestic and international peering.
Table C-7: Overview of peering guidelines – Electric Lightwave: cont'd
Company: Electric Lightwave (EL)
Topology: Public domestic: sufficient connectivity between peering points to support closest exit routing
Operational requirements
• Network operation centre 24x7
• Resolve congestion issues within a defined timeframe, e.g. adding more bandwidth or adding connectivity at another site in the same or adjacent peering region
• Respond to all technical issues within 48 hours
• Cooperate in chasing security violations etc.
Informational requirements
Guidelines available at:
http://www.eli.net/techsupport/bgp/bp-policy.shtml (peering policy)
http://www.eli.net/techsupport/routing/cp-policy.shtml (BGP peering policy)
http://www.eli.net/techsupport/bgp/static-policy.shtml (static routing policy)
http://www.eli.net/techsupport/bgp/index.shtml (IP Routing Policies, Procedures and Information Page)
Source: WIK-Consult
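Electric Lightwave's topology requirement, "sufficient connectivity between peering points to support closest exit routing", refers to the usual hot-potato arrangement: each network hands traffic to its peer at the interconnection point nearest the traffic's entry into its own backbone. A toy sketch of the selection logic follows; the PoP names and IGP costs are invented for illustration.

```python
# IGP cost from each of "our" PoPs to each peering point (illustrative values).
igp_cost = {
    "seattle":  {"paix_seattle": 1,  "mae_east": 70},
    "new_york": {"paix_seattle": 70, "mae_east": 2},
}

def closest_exit(entry_pop: str) -> str:
    """Hot-potato / closest-exit routing: choose the peering point with the
    lowest IGP cost from the PoP where the traffic entered our network."""
    costs = igp_cost[entry_pop]
    return min(costs, key=costs.get)

print(closest_exit("seattle"))    # -> paix_seattle
print(closest_exit("new_york"))   # -> mae_east
```

Geographically dispersed peering points matter here because, under closest exit, the receiving network carries the traffic for most of its journey.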
Table C-8: Overview of peering policy – France Télécom
Company: France Télécom415 416
Network for which peering policy is applied: AS 5511
Requirements for connections to peering points
Number and location
• Worldwide peering (public or private): at least 5 geographically dispersed locations in Europe, 4 in the US and 2 in Asia (specific mandatory cities are defined); 2 not necessarily dispersed connections for local peering
• Public: active connections to at least 3 geographically dispersed NAPs in Europe where France Télécom is also connected (currently MAE-East, MAE-West, PAIX, Ameritech NAP, Sprint NAP, LINX, PARIX, CIX, BNIX, DGIX, AMS-IX, Hong-Kong and Tokyo JPIX); ability and willingness to connect to at least 4 NAPs
Bandwidth: Private: at least 45 Mbps at all connections
Traffic
• For local peering at least 45 Mbps of aggregated traffic
• At least 10 Mb of bilateral traffic at each peering connection
• Imbalance of traffic at most 3:1 in either direction
• Shutdown of peering if the connection is loaded more than 95% during more than two hours
• Private: for additional connections to existing locations, average daily traffic utilization of 65% for an individual existing connection required
SLAs on traffic exchanged
Routing requirements
• CIDR at edge routers using BGP-4 and aggregated routes
• Consistent route announcements at all peering points
• Filtering of routes at network edge
• No route of last resort directed at France Télécom
• Route announcements only for own customers
Network characteristics
Size of network: For regional and worldwide peering, nationally deployed Internet backbone in countries where peering is desired
415 The requirements refer only to the non-domestic IP network and represent France Télécom's long-distance peering policy.
416 Specific requirements for direct (private) or public peers are denoted as such in the following lines. All other requirements represent the general peering policy of the company. A further distinction is made between peering worldwide, regionally (i.e. peering over one continent) and locally (i.e. peering in a specific country).
Table C-8: Overview of peering policy – France Télécom: cont'd
Company: France Télécom
Bandwidth of backbone circuits: For regional or worldwide peering, dedicated IP circuits of at least OC-12 in the US and in Europe, OC-3 in Asia
Location of network nodes
Topology: Each backbone hub connected to at least two other hubs on own backbone
Operational requirements
• Network operation centre 24x7
• Establishment of "trouble ticket" and "escalation" procedures
Informational requirements
• Copy of network for the region where peering is desired including a list of existing peering connections
• Register routes with Internet Routing Registry or other registry
Guidelines available at: http://vision.opentransit.net/docs/peering_policy/
Source: WIK-Consult
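The utilization triggers in these tables (85% average weekly utilization for Broadwing, 65% average daily utilization for France Télécom) are simple threshold tests. A minimal sketch, with invented traffic figures and a function name of our own choosing:

```python
def needs_additional_connection(daily_avgs_mbps: list[float],
                                capacity_mbps: float,
                                threshold: float = 0.65) -> bool:
    """France Télécom-style trigger (Table C-8): a further private connection
    is justified once average daily utilization of an existing one exceeds 65%."""
    utilization = (sum(daily_avgs_mbps) / len(daily_avgs_mbps)) / capacity_mbps
    return utilization > threshold

# A 45 Mbps link averaging ~32 Mbps runs at ~71% utilization, above the trigger.
print(needs_additional_connection([30.0, 34.0, 32.0], 45.0))   # True
```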
Table C-9: Overview of peering guidelines – Genuity
Company: Genuity417
Network for which peering policy is applied: Domestic: AS 1; Europe: AS 7176
Requirements for connections to peering points
Number and location
• AS 1, domestic ISPs: at least 3 connections at the following NAPs: MAE-East ATM, MAE-West ATM, AADS Chicago, MAE Dallas ATM (2 of which must be MAE East ATM and MAE West ATM)
• AS 1, international ISPs: at least 2 connections at the NAPs listed above
• AS 7176: at least one connection at the following NAPs: LINX, AMS-IX, MAE-Frankfurt, D-GIX (SE-GIX), MIX-ITA, SFINX
Bandwidth
Traffic
• AS 1: at least 1 Mbps traffic exchange; domestic ISPs: roughly balanced traffic
• AS 7176: at least 100 Kbps traffic exchange
SLAs on traffic exchanged
Routing requirements: Consistent route announcements
Network characteristics
Size of network: Domestic ISPs: coast-to-coast nationwide backbone in the US
Bandwidth of backbone circuits: Domestic ISPs: at least 155 Mbps
Location of network nodes
Topology
Operational requirements
• Network operation centre 24x7
• LSRR capability at core border routers on network
Informational requirements
Guidelines available at: www.genuity.com/infrastructure/interconnection.pdf
Source: WIK-Consult
417 The following conditions refer only to public peering. A further distinction is made between domestic and international ISPs for AS 1.
Table C-10: Overview of peering guidelines – Level 3 418
Company: Level 3419
Network for which peering policy is applied: North America; Europe
Requirements for connections to peering points
Number and location
• North America: connections in at least 10 major US markets; Public: at least 3 geographically diverse connections at the following NAPs: Sprint NAP, MAE East, Ameritech NAP, MAE West, PAIX; Private: connections in at least 6 of the following cities: NY, Washington, Atlanta, Chicago, Dallas, LA, San Jose, San Francisco, Seattle
• Europe: Public (regional for AS 9057): connections at 2 of the following NAPs: AMS-IX, BNIX, DE-CIX, LINX, MAE Frankfurt, PARIX, SFINX; Public (all Level 3 customer routes): connections to at least 3 of the above listed NAPs
Bandwidth
• North America: Private: at least OC-3 at all connections
• Europe: Public (AS 9057 and all Level 3 customer routes): at least OC-3 to each NAP
Traffic
• North America: Private: at least 150 Mb/s of average bi-directional traffic exchange
SLAs on traffic exchanged
Routing requirements
• Consistent routing announcements
• Route announcements only for own customers
• No route of last resort shall be directed at the other party
Network characteristics
Size of network
Bandwidth of backbone circuits: At least OC-48 intercity capacity
Location of network nodes
Topology: Redundant Internet backbone
418 Requirements for peering in Asia are not displayed in the table.
419 Specific requirements for direct (private) or public peers are denoted as such in the following lines. All
other requirements represent the general peering policy of the company.
Table C-10: Overview of peering guidelines – Level 3: cont'd
Company: Level 3
Operational requirements
• North America: network operation centre 24x7; "escalation path" to resolve network issues (e.g. routing or congestion); definition of an upgrade path to accommodate traffic growth
• Europe: network operation centre 24x7; "escalation path" to resolve network issues (e.g. routing or congestion)
Informational requirements
• North America: register routes, routing domains and routing policy with Internet Routing Registry; network topology; backbone capacity; interconnection points
• Europe: register routes, routing domains and routing policy with Internet Routing Registry
Guidelines available at: http://www.level3.de/de/services/crossroads/policy
Source: WIK-Consult
Table C-11: Overview of peering guidelines – WorldCom 420
Company: WorldCom421
Network for which peering policy is applied: US (AS 701); Europe (AS 702)
Requirements for connections to peering points
Number and location
Bandwidth
Traffic
• US: ratio of aggregate traffic exchanged roughly balanced, at most 1,5:1; aggregate amount of traffic exchanged in each direction over all connections at least 150 Mbps
• Europe: ratio of aggregate traffic exchanged roughly balanced, at most 1,5:1; aggregate amount of traffic exchanged in each direction over all connections at least 30 Mbps
SLAs on traffic exchanged
Routing requirements: "Shortest exit routing" (unless both partners mutually agree otherwise)
Network characteristics
Size of network
• US: facilities capable of terminating customer leased line IP connections onto a router in at least 50% of the geographic region to which the requestor wants to interconnect; currently 15 states in the US
• Europe: facilities capable of terminating customer leased line IP connections onto a router in at least 50% of the geographic region to which the requestor wants to interconnect; currently 8 countries in Europe
Bandwidth of backbone circuits
• US: majority of inter-hub trunking links at least OC-12 (622 Mbps)
• Europe: majority of inter-hub trunking links at least DS-3 (45 Mbps)
Location of network nodes
• US: geographically dispersed network; at least an East coast location, a West coast location and two Midwest locations
• Europe: geographically dispersed network
420 Requirements for peering in Asia (AS 703) are not displayed in the table.
421 The following conditions refer both to private and public peering.
Table C-11: Overview of peering guidelines – WorldCom: cont'd
Company: WorldCom
Topology (US and Europe)
• Redundant Internet backbone
• Traffic exchange links of sufficient robustness, aggregate capacity and geographic dispersion to facilitate performance across the peering connections
Operational requirements
• Network operation centre 24x7
• Dealing with routing or security issues within 2 hours
Informational requirements
Guidelines available at: www.worldcom.com/peering/
Source: WIK-Consult
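Taken together, a ratio cap and a per-direction volume floor of the kind WorldCom specifies reduce to two arithmetic tests. A small illustrative check (the function name and sample figures are ours):

```python
def meets_worldcom_us_criteria(in_mbps: float, out_mbps: float,
                               min_each_direction: float = 150.0,
                               max_ratio: float = 1.5) -> bool:
    """Test the two AS 701 traffic conditions from Table C-11: at least
    150 Mbps in each direction, and an imbalance of at most 1,5:1."""
    low, high = sorted((in_mbps, out_mbps))
    if low <= 0:
        return False
    return low >= min_each_direction and high / low <= max_ratio

print(meets_worldcom_us_criteria(200.0, 180.0))   # True
print(meets_worldcom_us_criteria(200.0, 120.0))   # False: 120 Mbps < 150 Mbps
print(meets_worldcom_us_criteria(400.0, 200.0))   # False: 2:1 exceeds 1,5:1
```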
D Annex for Chapter 8
D-1 Modelling the strategic interests of core ISPs in the presence of network effects
In this annex we provide a more detailed analysis of the work of CRT and M&S than appears in Chapter 8 of the main report.
The models of the Internet presented by CRT and M&S describe a simplified Internet so that the analysis can focus on what the researchers consider to be the most important relationships. The main simplifications employed are outlined in detail below.
Neither paper presents empirical evidence on the values of the parameters that appear in the two models. The authors do, however, provide a discussion of the likely ranges of these variables, and we discuss these below.
1. Simplifications in the Internet's structure:
The model is of a simplified Internet structure comprising only 3 ISP competitors, and does not account for the fact that smaller ISPs are typically connected to larger ISPs, and that many of them are multi-homed.
• Might the more complex structure of the Internet provide for reactions that the model misses and that would alter the strategies of the players in a fundamental way, or the conditions under which a strategy of degraded interconnection would work?422,423
• In CRT there is no reaction of customers of A to the loss of connectivity with subscribers of B.
- Would the efforts of content providers seeking to maintain QoS to B's customers by providing content directly on B's network, e.g. through caching and mirroring, alter the viability of the dominant firm's degradation strategy?
- Would customers connected through transit arrangements with A employ alternative transit arrangements that provide non-degraded access to B (i.e. bypassing A to interconnect with B)?
2. The values of the model's parameters:
422 In CRT's paper multi-homing in a simplified structure does not fundamentally alter the incentive to degrade interconnection, although it reduces the level of A's advantage. It also suggests that the larger ISP has an incentive to target the smallest ISP with degraded interconnection, and that this incentive increases with the overlap of address space that occurs with multi-homing.
423 CRT's model employed the framework originally devised by Katz and Shapiro in their influential 1985 paper.
(i) The externality effect:
For targeted degradation to be attractive it is necessary in CRT's model that v (the value that customers place on connectivity) be above ⅓. CRT also show that values of v > ½ are implausible. v is important in explaining the shares of the 'benefits' of a targeted degradation policy that go to A and C: v needs to be high enough that degradation is sufficiently harmful to the target ISP (B), but not so high that large numbers of A's own customers switch to C, which, as we note in the first bullet above, is the only ISP able to provide universal connectivity, even though it is much smaller than A.
(ii) Numbers of present compared to future customers (β = 1 means the network is already at its maximum; β = 0.5 means that half of those who will eventually be subscribers are yet to subscribe):
A model needs to be able to capture the relative effects of strategic actions, both on existing customers and on the behaviour of those who are not yet customers but will subscribe in the future. If relatively few customers remain to be signed up, then a danger with any strategy that targets these potential customers is the effect it has on existing subscribers. Given that there will be arguments over the relative size of each group, the sensitivity of the model's predictions to changes in this value might have a bearing on the risk involved in a strategy that degrades interconnection at one (or more) targeted points.
(iii) The cost of serving additional customers:
The cost of adding additional subscribers (c) can take a range of values up to the point where c equals average total cost, i.e. where there are no fixed costs, in which case c = 1.
The question is whether the ranges of values of the model's variables that would make a strategy of targeted degradation (or refusal) of interconnection attractive to a market-leading ISP are reasonable. If they are not, the implication is that this strategy is not viable in practice.
Figure D-1 (a) and (b) shows M&S's graphs where A faces 3 other equally sized competitors, each with a 10% (left) or 7.5% (right) market share. It is also assumed that half of all potential subscribers are current subscribers to one of the four networks (i.e. β = 0.5)424, that v ≤ ½ (recall that in CRT, ⅓ < v < ½), and that c ≤ 1.425
424 Values of v > ½ are found to imply implausible outcomes. See M&S, p. 12. At values v < ⅓ the network effects appear to provide the fully interconnected network with too much advantage out of the degradation strategy.
Figure D-1: Range of possible outcomes of a global degradation strategy
[Four panels plot valuation, v, against marginal cost, c, partitioning the space into regions A1, A2, B, C, D and Z: (a) β = 0.5, m1 = .6; (b) β = 0.5, m1 = .7; (c) β = 1.0, m1 = .6; (d) β = 1.0, m1 = .7.]
425 As did CRT, M&S also discuss the situation where A faces a single rival. We have chosen not to discuss that case here, as most of what can be learned from it is also obtained from the case where A faces 3 rivals.
In the case of (c) and (d) it is assumed that the Internet is as large as it is going to get (β = 1), which cancels the trade-off the largest ISP must make between:
• Being less attractive to those who will take out a first subscription to the Internet in the future (due to the depleted size of A's interconnected network), in comparison to the 2 networks for which interconnection is not degraded, and
• The increase in market power that may be achieved through degrading interconnection with one of the smaller rivals.
Clearly, the values of v and c are vital to the viability of such a predatory strategy by the dominant network, as are the present compared to future Internet penetration rate (β) and the proportion of the total customer base subscribing to the largest network (m1). The number of competing networks also increases the area in which tipping away from the 'dominant' network can occur. This is because the increased level of competition between them constrains all networks, which reduces the possibility (area) that the largest network can profitably capture the comparative advantage and be more successful in attracting new customers.
M&S then model the situation for targeted degradation where there are three ISPs, one with a 50% market share (m1) and two with a 25% share each. M&S map the ranges where targeted degradation of interconnection would be a profitable strategy as modelled. These profitable areas are shown as shaded areas in Figure D-2. The strategies analysed, the types of outcomes (regions) and their explanations are summarised in Table D-1.
Table D-1: Interconnection strategies and network tipping

A's best strategy: Global interconnection / no degradation
• Region A1: leads to a worse interior situation for A
• Region C: leads to tipping away from A

A's best strategy: Cessation / degradation of interconnection
• Region A2: leads to a worse interior situation for A
• Region B: leads to tipping toward A

A's best strategy: Ambiguous
• Region D: cessation / degradation of interconnection can lead to tipping toward or away from A

A's best strategy: Global interconnection / no degradation preferred
• Region Z: area of implausible values for c and v
Figure D-2: Profitable regions of targeted degradation strategy
[Four panels plot valuation, v, against marginal cost, c; boundaries g and h delimit the shaded (profitable) regions, and Z marks the area of implausible values: (a) β = 0.25; (b) β = 0.5; (c) β = 0.75; (d) β = 1.0.]
M&S investigate the likely range of values for c in terms of:
• The maximum willingness to pay (WTP) of tomorrow's highest-value subscriber as a ratio of tomorrow's subscription price, referred to as M, and
• The ratio of price to marginal cost, with marginal cost expressed as a ratio of average total cost (ATC); call this value α,426 given that v < ½ and β ≤ 1.
426 ATC is simply the total of all variable and fixed costs divided by the total units of output.
Table D-2 shows values for c given values of M between 5 and 20, and values of α between 1 (i.e. where there are no fixed costs) and 10 (where the marginal cost of connecting a new subscriber is 1/10 of the average total cost per subscriber). Looking at Figures D-1 and D-2, it is clear that there is relatively little room for a successful strategy of targeted degradation of interconnection by the largest network. It would require a very large market share, and values for several of the model's variables would have to be lower than we would expect to find in reality.
Table D-2: Feasible upper bounds for c

            M = 5   M = 10   M = 15   M = 20
α = 1.00     0.40     0.20     0.13     0.08
α = 2.00     0.20     0.10     0.07     0.05
α = 5.00     0.08     0.04     0.03     0.02
α = 10.00    0.04     0.02     0.01     0.01

Source: Malueg and Schwartz (2001)
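The entries in Table D-2 are consistent, to rounding and with the apparent exception of the M = 20, α = 1 cell, with an upper bound of c = 2/(αM). We stress that this formula is our inference from the tabulated values, not one quoted from M&S; the sketch below simply regenerates the table under that assumption.

```python
# Regenerate Table D-2 under the inferred bound c = 2 / (alpha * M).
# This formula is our reading of the tabulated values, not one stated by M&S.
for alpha in (1, 2, 5, 10):
    row = [round(2 / (alpha * M), 2) for M in (5, 10, 15, 20)]
    print(f"alpha = {alpha:>2}: {row}")
# alpha =  1: [0.4, 0.2, 0.13, 0.1]   (the table's last cell reads 0.08)
# alpha =  2: [0.2, 0.1, 0.07, 0.05]
# alpha =  5: [0.08, 0.04, 0.03, 0.02]
# alpha = 10: [0.04, 0.02, 0.01, 0.01]
```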
We note that the CRT model and the alternative configuration of M&S are stylised. The models do not account for some features of the Internet that may have a bearing on the viability of a degradation strategy. Perhaps the most important of these are the multi-layered and loosely hierarchical nature of the Internet today, the prevalence of secondary peering, and the growth of other substitutes for connectivity with the core backbone, such as caching, mirroring and content delivery networks (CDNs).