Foreword
Competition in the ICT sector is increasingly taking the shape of service
platforms offering telephony, internet access and television programmes. Triple
play offers a concrete illustration of the convergence between telecommunications, the internet and the media industry. It consequently raises a large
number of questions regarding the strategies of various industry players and the
viability of their business models, as well as regulatory frameworks and antitrust
policies. This issue's Dossier provides a range of contributions that tackle these
major questions.
Following on from the Dossier are two papers chosen from a selection of the
best contributions presented at the EURO CPR conference held in March in
Seville. Other papers from this selection may be published in our forthcoming
issue(s).
Lastly, this issue of COMMUNICATIONS & STRATEGIES offers a series of
Features, rounded off by several book reviews, on a diverse range of topics
that should prove of interest to our readers.
We hope that you enjoy this issue,
Edmond Baranes
Editor
Yves Gassot
Publishing Director
Call for papers
Dossiers
No. 63 – 3rd quarter 2006
Bundling strategies and competition in ICT industries
Editors: Edmond BARANES & Gilles LE BLANC
Bundling strategies and their effects on competition are now a well-established research
topic in industrial economics. Since the technological "convergence" of voice, video
and data services in the late 1990s, the telecom sector has offered a rich experimental
field for bundling proposals. This special issue aims to confront recent advances in
the economic theory of bundling with the specific experiences and perspectives of
the telecom industry. We welcome theoretical papers, as well as empirical
contributions based on practical case studies, on the following issues: bundling and
competition policy, the implications of bundling strategies for competition and market
structure, bundling and innovation, bundling and network effects, etc. Papers
addressing these questions with applications or illustrations in the software or media
industries will also be considered.
Please send proposals to:
[email protected] - [email protected]
No. 64 – 4th quarter 2006
Reviewing the Review
Editors: Yves GASSOT & Gérard POGOREL
This Dossier will examine the evidence, alternatives and possible trade-offs
concerning the most critical elements of an evolving future Electronic
Communications regulatory framework.
The current Electronic Communications regulatory framework has now been in place
for almost three years, and the European Commission should, in the very near future,
produce an analysis of what worked well within the current framework and what
needs to be changed to account for technological and economic evolutions in the
electronic communications market. This issue will run parallel to the Review,
examining the most crucial issues surrounding it and looking forward to a modern,
market-oriented regulatory framework that delivers optimal benefits to consumers
and industry alike.
We will look at competitive scenarios, particularly the status of access vs. network
based competition and whether the present regulatory framework provides adequate
incentives to genuine network based competition. We will also look at the lessons to
be learnt from the experiences of different EU Member States, and discuss how to
deal with local institutions and contingencies in setting up a competition development
strategy.
"Emerging" technologies, NGNs, and their status in competition policy and the
rationale for regulation or non-regulation are also a topic of major concern. Analyses
of EU trends and unresolved questions, including VoIP, FTTH, xDSL and converging
services, will be provided. This issue will also examine the decision criteria for
defining and coping with emerging technologies.
Strategic choices in regulation are to be defined within a dynamic industry process.
This section will deal specifically with relevant markets, focusing on the range of
overall assessments of the present guidelines, and will assess whether or not it is
appropriate to remove regulation from retail markets. The issue will include a look at
clustering possibilities and how to account for the phenomenon of convergence on
the one hand, and how to move forward in the direction of an internal market for
electronic communications on the other. Is the definition of Europe-wide relevant
markets (versus national markets) a necessary step in dealing with Europe-wide
operators? What should be the nature of competitive remedies, and who should
decide on them? Does this perspective imply changes in the balance of competences
between NRAs and the European Commission?
The following general questions will also be addressed:
• How should legal evidence be administered in a dynamic and forward-looking
environment?
• How can "more competition and less regulation" be achieved? How can regulatory
frameworks be simplified? How can the closer alignment of ex ante regulation and
general competition rules be achieved? How can the burden of regulation be shared
and what kind of incentives/organisation would induce players to comply?
Please send proposals to:
[email protected] - [email protected]
As far as practical and technical questions are
concerned, proposals for papers must be
submitted in Word format (.doc) and should run
to 20-22 pages (6,000 to 7,000 words).
Please ensure that your illustrations (graphics,
figures, etc.) are in black and white - excluding any
color - and are of printing quality. It is essential that
they be adapted to the journal's format (with a
maximum width of 12 cm). We would also like to
remind you to include bibliographical references at
the end of the article. Should these references
appear as footnotes, please indicate the author's
name and the year of publication in the text.
Coordination and information
Sophie NIGON
[email protected]
+33 (0)4 67 14 44 16
No. 62, 2nd quarter 2006
Contents
Dossier
Media industry facing convergence
Introduction
Bernard GUILLOU, André LANGE & Rémy LE CHAMPION ....................... 11
Spectrum Management and Broadcasting: Current Issues
Martin CAVE ................................................................................................ 19
DRMs, Innovation and Creation
Olivier BOMSEL & Anne-Gaëlle GEFFROY ................................................ 35
The Role of Public Service Broadcasters in the Era of Convergence
A Case Study of Televisió de Catalunya
Emili PRADO & David FERNÁNDEZ ........................................................... 49
Traditional paradigms for new services?
The Commission Proposal for an 'Audiovisual Media Services Directive'
Alexander SCHEUER .................................................................................. 71
Three scenarios for TV in 2015
Laurence MEYER ........................................................................................ 93
Opinion
Interview with Evelyne LENTZEN, Chairman of the CSA
of the French Community of Belgium
Conducted by Rémy LE CHAMPION ......................................................... 111
Articles
Alternative Wireless Technologies
Status, Trends and Policy Implications for Europe
Sven LINDMARK, Pieter BALLON, Colin BLACKMAN, Erik BOHLIN, Simon
FORGE & Uta WEHN de MONTALVO....................................................... 127
The Scope of Economic Sector Regulation
in Electronic Communications
Alexandre de STREEL ................................................................................ 147
Features
Regulation and Competition
Does Good Digital Rights Management
Mean Sacrificing the Private Copy?
Ariane DELVOIE ......................................................................................... 171
Firms and Markets
IPTV markets
New broadband service promising to upset the balance of the TV market
Jacques BAJON.......................................................................................... 175
Technical Innovations
Mobile Television
Peter CURWEN .......................................................................................... 183
Can DRM Create New Markets?
Anne DUCHÊNE, Martin PEITZ & Patrick WAELBROECK ....................... 197
Public Policies
Internet Governance, "In Larger Freedom" and "the international
Rule of Law" - Lessons from Tunis
Klaus W. GREWLICH ................................................................................. 209
Book Review
Alban GONORD & Joëlle MENRATH, Mobile Attitude
Ce que les portables ont changé dans nos vies
by Marie MARCHAND ................................................................................ 217
Debra HOWCROFT & Eileen M. TRAUTH (Eds), Handbook of Critical
Information Systems Research - Theory and Application
by Jean-Gustave PADIOLEAU ................................................................... 220
Dan REINGOLD with Jennifer REINGOLD, Confessions of a Wall
Street Analyst: A True Story of Inside Information and Corruption in the
Stock Market
by James ALLEMAN................................................................................... 221
Thomas SCHULTZ, Réguler le commerce électronique par la résolution
des litiges en ligne - Une approche critique
by Jean-Pierre DARDAYROL ..................................................................... 224
The authors ................................................................................................. 227
Announcements .......................................................................................... 235
Dossier
Media industry facing convergence
Spectrum Management and Broadcasting:
Current Issues
DRMs, Innovation and Creation
The Role of Public Service Broadcasters
in the Era of Convergence
A Case Study of Televisió de Catalunya
Traditional paradigms for new services?
The Commission Proposal
for an 'Audiovisual Media Services Directive'
Three scenarios for TV in 2015
Introduction
Bernard GUILLOU
MediaWise+, Paris
André LANGE
Observatoire européen de l'Audiovisuel, Strasbourg
Rémy LE CHAMPION
Université de Paris II
Convergence was the hype concept that everyone praised in the late
1990s and that became a quasi dirty word with the dot-com crash. However
threatened they might have felt at the time by the forthcoming end of the
established patterns for production, aggregation and distribution of media
content, most media companies did actually resist the trend which led, in
some well-known cases, to stringent financial and industrial streamlining.
With hindsight, this rise and fall can now be understood as the product of
overly optimistic expectations regarding the speed of the take-off and uptake
of new media, fuelled by a disproportionate flow of speculative funds. There
were, however, a number of voices at the time insisting that tangible
industrial prospects and trends were to be discerned beyond the hype.
Those voices nurtured ongoing changes in the patterns of media content
production, aggregation and dissemination. These changes include the
digitalization of transmission networks, the surge in the number of options for
distributing data, sound and pictures to the home, primarily via mobile and
DSL networks and the ease with which content can be produced,
manipulated and uploaded for widespread distribution thanks to the internet.
When asked to edit the Dossier of this issue of COMMUNICATIONS &
STRATEGIES, it seemed to us that the time had come to consider the
various issues facing the media industry and its regulators in an environment
less burdened by hype yet, in many ways, far more cumbersome than the
climate described by yesterday's aspiring gurus. Of course, we are well
aware that hype, glitz and glamour are still part of the show. Watching the
Consumer Electronics Show earlier this year in Las Vegas, where Tom
Hanks and Robin Williams shared the stage with Bill Gates, Larry Page
(Google) and Terry Semel (Yahoo) served as a reminder that
conduits/devices are nothing without the content and social representations
that they help to formulate and deliver. A month later, at the GSM Mobile
World Congress in Barcelona, music artists and TV producers attracted the
biggest crowds in an industry keen to position itself as a prime network for
content delivery. Backstage, media companies, internet players and
network operators are nevertheless busy moulding new business models
that should help to define a truly connected world and shape our daily lives
to an extent that we are still struggling to assess.
Broadband networks now reach almost 50% of homes in the world's most
advanced countries and the stage is set, with the development of 3G/4G
(point to point) and broadcast mobile networks, for the reception of video
programmes on mobile devices. Video via the internet is the fastest growing
segment in terms of both offer and use, with seminal internet players
jumping on board to exploit the trend and extend the reach and scope of
their portals. Key to the emergence of the somewhat fuzzy concept of
internet 2.0, content production is popping up everywhere in the guise of
blogs, vlogs and community-driven material, and seems to be competing
directly with established media companies in the battle for attention and audiences.
For media companies, the time has certainly come to "buy in" to
convergence, namely to face up to the disintegration of their traditional
approach to content production and distribution, the repositioning of their
content production and publishing functions to consider multimedia
distribution, and the reality of content "repurposing." That may encourage
these companies to redefine their business models, traditionally based on
direct payment for packaged products, license revenues derived from
intellectual property rights and ad-financed schemes, not to mention the
license fees that public broadcasters have enjoyed relatively safely to date. More
specifically, these companies have to devise a set of new strategies to better
combine these various sources of revenue, reflecting the type of product that
they create and distribute, their geographic scope and their investment
capacity. This is a daunting task, as it requires a wealth of technological
know-how, strong brands and, in most cases, an international presence.
Key movements have included strategic acquisitions by media
companies in the network field, like BSkyB's purchase of Easynet, the
alternative telecommunications provider, in the UK, with the implicit ambition
of broadening the scope of services offered as bundles, including video on
demand (VOD) and internet access. In most cases though, media
companies have opted for partnerships to develop their presence in new
media fields. Although established as a standalone initiative with a will to
disintermediate that new distribution market, the movie-on-demand service
Movielink, launched by the U.S. studios in 2002, is now available on a
variety of broadband platforms. In Europe, over 50 VOD services are now
available, offering catalogues of feature films, music videos,
documentaries and TV programme archives. Partnership is the way to test
products, strike temporary exclusivities and benefit from cross-branding and
cross-marketing efforts. Such alliances are volatile, as the balance in the
value chain is far from solid. From the media companies' point of view,
network economics call for a wide distribution of their key content. However,
problems are arising as it becomes increasingly difficult to secure shop
space with distributors, making it necessary to establish a direct presence to
acquire some negotiation power.
Strategies in this area are multiple: U.S. networks like CBS have
launched their own channel (a combination of original and past
programming) on the web. In Belgium, Belgacom has acquired the football
league rights to position itself as a broadcaster, while the cable operator
Unity Media has pursued the same strategy in Germany. BSkyB has
developed downloadable software for mobile phones, aimed at bypassing
the operators' menu and fostering pure Sky-only sessions. News Corp. has
heavily invested in community sites, where it will gradually increase
professional content, monetize its audience and set up cross platform deals
with advertisers. DVB-H, already tested in Italy and Norway by public
broadcasters and supported by some important European pay-TV operators,
may also represent a way for broadcasters to provide services directly to
mobile receivers without depending on telecommunications operators.
For media companies, particularly those involved in content aggregation,
the drive towards convergence does not stop on the doorstep of households.
A lot is at stake for players in the consumer electronics and computer
industries, pay-TV providers and network operators when it comes to gaining
a leading edge in setting up home networks, which may be used in the
future to store, exchange and view all content using a wide variety of devices
located in different rooms. As a growing number of options for receiving
broadband and digital TV services become available, at least in cities in
developed countries, and as unbundling regulation opens the local loop in
most countries, the battle for control over that new gateway is shifting to the
home; and it may not be merely gimmicky to use the name Media Centre to
summarize this shift. The evolution obviously raises a large number of
economic, social, cultural and regulatory issues, which go far beyond the
strategy analysis we have briefly described above. Some of these issues are
addressed in the papers collected for the dossier.
While convergence - along with related tools and services - is often
summarized as the trend of accessing content any time, anywhere and on
any device, this definition in turn begs the key questions of what type of
content is available, at what direct or indirect price, and by whom it is
accessed.
Information is, or should be, in the famous words of Arthur Sulzberger:
"All the news that's fit to print". In the electronic world, there is still a place for
editorial judgment and some form of content regulation, but exercising it has
become a very difficult challenge. Information, in a world where search
engines play a growing role in the way internet users access content on the
web, is apparently everywhere, but lacks the contextualising framework that
most media users, for better or for worse, rely upon. Convenience, ease of
access and blogging may not necessarily go hand in hand with the "vibrant
market of ideas" that is recognized as a necessity in modern democracies.
The ongoing debate in the scientific and cultural communities about the
value of Wikipedia shows there might be a price to pay for such
convenience, which cannot yet be determined.
In this respect, it is worth noting that some public broadcasters, endowed
with specific objectives in the field of creation and culture in its widest sense,
have actively developed their presence and offering in the field of emerging
media. The BBC has certainly made its mark and remains a pioneer. It has
established a massive presence on the web, with thousands of general
interest and niche sites. The UK public broadcaster has also created new
channels in the digital world, distributed on all platforms, and should be a
leading player in the on-demand programming universe when it launches its
"catch-up" service in the near future, enabling viewers to watch TV
programmes they may have missed. Such development drives are not only
dominated by big broadcasters, as shown in the paper by Emili PRADO and
David FERNÁNDEZ. After identifying the new functions that a public
broadcaster must deal with when considering the transformations linked to
technological advances and the globalisation of the media space, the
authors show how the regional broadcaster Televisió de Catalunya has
embraced these challenges. Their analysis is a convincing example of how a
small public broadcaster with a mix of political determination, financial
commitment and innovative drive can rise to the challenge of competition,
without neglecting its primary mission of serving the public interest.
However, moving into new digital services can jeopardise the financial health
of public broadcasters, as the examples of not only Televisió de Catalunya,
but also the BBC, tend to illustrate.
This general interest principle is clearly at stake when addressing the
issue of the affordability of access to content and networks. Convergence
brings, in essence, more options to those who already tend to enjoy the
widest potential access to content and delivery capacity, namely urban
dwellers. As the information society spreads its wings, it is also giving birth
not just to one, but to a variety of "digital divides," which governments are
proving increasingly keen to address with specific policies, either industrial
or social.
This may put governments at odds with regulatory authorities, as
indicated in the contribution by Martin CAVE. This paper, which addresses
the issue of spectrum management in relation to broadcasting in the United
Kingdom, shows that spectrum allocation issues that have been dealt with
relatively smoothly in national regulation and international coordination
summits to date are now bound to generate conflicts. Mobile operators are
seeking to acquire the part of the UHF spectrum used to broadcast TV once
it is no longer needed by traditional broadcasters for analogue distribution. The paper
analyses the implications of the digital switchover for all the parties and the
ways in which this spectrum might be allocated, noting that Ofcom's view
may clash with the position adopted by broadcasters in that respect.
The "net neutrality" debate, currently very hot in the USA, is another
illustration of the new regulatory issues that may arise with the development
of the market and opportunities for segmentation according to prices and
expectations. Balancing industrial innovation and equality of access is never
an easy act, as economic externalities are difficult to quantify. It is therefore
not surprising that the construction of new infrastructures (such as the
optical FTTH networks currently pitched by Deutsche Telekom in Germany
and supported by the government), or the rapid dissemination of video
content on the web, challenges the existing regulatory schemes.
More than ever before, regulation is a shaky political exercise where a
balance struck at any time between content providers and network
operators, between infrastructure and service, between suppliers and
consumers, can be rapidly threatened by the next disruptive technology.
Personal Video Recorders may seem a natural improvement on the classic
VCR, but this does not prevent advertising-supported channels from
considering these devices as the most dangerous innovation to hit the
market for years. The same goes for the seemingly inexorable, internet-driven
end to an era of production, distribution and use of content organized
by the Intellectual Property Rights doctrine.
Networks and services need content, content needs to be monetised and
monetization depends on terms of use, which are defined by rights owners.
As Olivier BOMSEL and Anne-Gaëlle GEFFROY remind us in their paper on
Digital Rights Management systems, this empirical framework has more or
less secured the efficiency of the market for cultural goods and stimulated
creation to date. Copyright is no more, but no less, than the definition of a
particular property right, namely to define the principles of exploitation and
reproduction (in particular) of cultural contents. In this sense, Digital Rights
Management systems seem a benign tool for enforcing that claim in a digital
world, where everyone can be seen, from the copyright owner's point of
view, as an agent of illicit reproduction and retransmission. Bomsel and
Geffroy argue that DRMs are, as software applications, cultural creations.
They are therefore protected legally but bear, for their owners' benefit, the
outstanding power to define the nature of relations within the vertical
distribution chain, which has to date been protected from real public scrutiny. As seen in
France during the discussion on the transposition of the European Directive
on copyright, this is an issue that certainly merits debate.
Alexander SCHEUER's paper on the European Commission's policy in
the field of audiovisual services is a useful reminder of the series of
reflections in Brussels leading to the Directives on Television Without
Frontiers (TWF) and on E-Commerce, which covered interactive video services
such as VOD, and to the famous 2002 Telecom package. It usefully reminds us
of the Green Paper on convergence dating from 1997 and the various
options it listed to deal with the issue, including the gradual build-up of a
comprehensive regulatory framework for existing and new services. Key
principles are now well established: the principle of separation between
content and transport regulation, technological neutrality as regards content
regulation and the distinction between linear and non-linear services. A new
proposal for the revision of the TWF directive is now on the table which is
indeed, as Scheuer points out, a piece of "convergent regulation" largely
drawn up to address the development of non-linear services and video
content on the web. Most European countries and the UK seem to have
adopted divergent positions on this proposal, as the latter considers many
protections built into the proposal (such as the vigilant protection of minors
and the prohibition of surreptitious advertising) to be fully inapplicable in the
on-demand, decentralized internet world. It is obvious that the Commission
is showing no inclination to strengthen its content policy scheme and it has
proposed a lighter set of rules for non-linear services. However, compared to
traditional broadcast services, the resources and associated costs needed to
effectively "control" internet and mobile video content are deemed excessive
by operators and service providers, when other legal tools are already
applicable. Moreover, given that the Commission has reaffirmed its
commitment to the principle of the "country of origin", some countries may
want to secure the maximum flexibility to attract service providers and
associated jobs, as has already been seen in the audiovisual field.
The interview with Evelyne LENTZEN, President of the Conseil Supérieur
de l'Audiovisuel de la Communauté Française de Belgique, clearly illustrates
the complexity of the regulators' task in the audiovisual industry today.
Regulators have a varied set of goals encompassing cultural issues,
pluralism, diversity, original creation, not to mention questions of taste and
decency. They must take into account the social fabric of the nation, and at
the same time cannot ignore the dynamics of convergence,
internationalization and content dissemination. Lentzen is quick to identify
the key pillars and priorities of the regulator's mission and to discard the
temptation to micro-manage the sector, particularly when she talks of the
allocation of production grants. Lentzen also recognizes that the current
regulatory framework suffers from a serious gap, insofar as it does not deal
with distributors and aggregators, which now play a central role in
service selection, the choice offered to consumers and the development of
home networks.
Along with media companies, network operators, governments and
regulatory agencies, there is one player who can give a thumbs up or down
to the whole process of content distribution over a wide variety of networks:
the consumer. Expectations of consumers are high, but there have been
very few studies, apart from glowing accounts of market tests and some
visionary tech-oriented fairy tales, of consumers'/viewers' preferences, their
patterns of use and the consequences of these patterns at a macroeconomic level. Laurence MEYER tackles these issues in her paper, the
substance of which comes from an extensive prospective study on the future
of television recently written by the author. Thanks to the scenarios she
draws for the deployment of the wide array of new services we are
promised, she vividly demonstrates that convergence really means
something to the media!
Spectrum Management and Broadcasting:
Current Issues
Martin CAVE
Warwick Business School, Warwick University, UK
Abstract: Broadcasting policy has traditionally been supported by a 'command-and-control'
system of assigning frequencies for terrestrial transmission, but this link is being
eroded by the emergence of other technologies – cable, satellite, IPTV, mobile
broadcasting – and by the rise of multi-channel television, which is facilitated by
digital terrestrial television. The switch-off of analogue terrestrial transmission is being
achieved through significant government intervention, but with diverse intentions relating
to the use of the freed spectrum. It is argued, however, that the trend to liberalise
spectrum policy is strong, and that this will promote the liberalisation of broadcasting.
Key words: spectrum management; broadcasting policy; digital switchover
Historically, broadcasting relied exclusively on spectrum, which fell
under the control of public agencies and was itself, in Europe
particularly, heavily controlled by governments through public
ownership of broadcasters, limitations on entry and supervision of content 1.
Before the explosion of spectrum-using technologies of the last 20 years,
shortage of frequencies often acted as an alibi used to stop the development
of new services, usually with the enthusiastic support of existing
broadcasters.
That era is now decisively ended, through the interaction of several
simultaneous changes.
• New convergent platforms are now in place that deliver broadcasts
using a range of technologies: analogue and digital terrestrial, satellite, cable
and ADSL using telecommunications companies' copper wires; the latter two
do not use frequencies in the conventional sense (although their emissions
can cause interference problems).

1 This paper will focus on video entertainment (i.e. TV), although a discussion
of audio (i.e. radio) would have similar features, except that it would be writ
smaller (and later). That is to say, close control of radio via regulation and
command-and-control spectrum policy survived the advent of television. As with
digital television, radio is now expanding according to competing standards, but
discussion of a radio 'digital switchover' is much less well advanced.
• The variety of services has exploded from the handful available twenty
years ago to a multi-channel world that supports itself not only through
government or licence-fee payments and advertising (the old methods), but
also through pay services; moreover, viewers can buy services 'on demand'
and avoid advertising material through personal video recorders.
• The growth of mobile or, more generally, wireless communications
has demonstrated the value of the spectrum for which those services
compete with broadcasting; this has created pressures to switch terrestrial
broadcasting from analogue to digital technologies, which are about five
times more efficient in their spectrum use. In the longer term, it may lead to
the complete abandonment of terrestrial broadcasting, in favour of wire-based
networks and satellite distribution, which use less valuable spectrum
and are less expensive over wide areas.
This article traces the interaction between spectrum management and
these factors, firstly by reviewing spectrum management techniques applied
to broadcasting, and then developments in transmission technology,
especially digital switchover.
Spectrum management and broadcasting:
past and present
Public policy in the field of spectrum allocation has exercised a powerful
influence on broadcasting. Governments used their power to assign
spectrum as an auxiliary instrument for controlling the number and identity of
broadcasters. Traditional spectrum management techniques suited this
purpose very well.
These techniques are known as 'command and control' and have
operated in essentially the same way since the first global convention for coordinating spectrum use in 1906. Under the system, spectrum blocks are
allocated through international agreement (global or regional) to broadly
defined services. National spectrum regulatory authorities (traditionally
government departments, now increasingly independent agencies)
subsequently assign licences for use of specific frequencies within these
allocations within their jurisdictions (CAVE, 2002, p. 55).
M. CAVE
This regulatory task involved an inherently complex balancing act in a
range of dimensions, in each of which there are many conflicting
considerations. Key factors included:
Interference
Transmissions interfere with one another unless sufficiently separated in
terms of frequency, geography and time. Regulators must strike a balance
between reducing the extent of harmful interference, through careful
planning, and enabling new and potentially valuable new services to enter
the market.
International co-ordination
The effective use of radio spectrum in one country will typically require
careful co-ordination with neighbouring countries, to mitigate the extent of
harmful interference.
Investment in equipment
Most radio equipment can operate over only a limited range of
frequencies, and so relies on predictable access over time to defined
frequency bands. Stability in spectrum assignments, needed to encourage
investment in equipment, can slow the pace of spectrum re-use.
Increasingly, technical specifications are agreed internationally to reap
economies of scale in production. Spectrum regulators need to balance
stability and international harmonisation with responsiveness to new
technologies.
The problems of co-ordinating broadcasting spectrum are particularly
severe, since broadcasting is a 'one-to-many' communications technology,
which is efficiently done with high power over a large area. This inevitably
creates the risk of interference with broadcasters in neighbouring areas or
countries. This problem was vividly exposed in the United States in the
1920s when a court ruling denied the Government the power to control
access to spectrum. The resulting free-for-all, in which radio stations
progressively turned up their power to resist interference from others, led to
a 'Tower of Babel' and eventually to the Radio Act of 1927, which gave the
newly created Federal Radio Commission power to authorise and control access to spectrum.
The resulting problems are resolved in the age of television broadcasting
by international agreements, which set out in great detail which transmissions
at what power are permissible from which specified sites. Thus analogue
terrestrial television broadcasting in Europe is governed by agreements
reached in Stockholm in 1961. An equivalent plan for digital television
broadcasting is currently being developed for approval at a Regional Radio
Conference (RRC) in Geneva in 2006.
Subject to these constraints, each national spectrum authority assigns
frequencies to particular broadcasters. In the United Kingdom, for example,
analogue and digital terrestrial TV transmissions currently use 368 MHz of
spectrum within the band 470-854 MHz. The spectrum is split into 46
frequency channels, each composed of 8 MHz of spectrum. The following
bands are used:
- 470 to 590 MHz (channels 21 to 35),
- 598 to 606 MHz (channel 37),
- 614 to 854 MHz (channels 39 to 68).
(To complicate matters, channel 36 is allocated to radar for historical
reasons.) Each channel can be used to broadcast either one analogue TV
service, or one digital multiplex – carrying six or more separate TV services
– from a given transmission site. There is a maximum of 11 channels used
at a transmission site (five for analogue TV channels, and six for DTT
multiplexes). At such sites there are still seemingly 35 frequencies (46 minus
11) lying idle. These empty frequencies are interleaved with the frequencies
used for the analogue and digital services. Some of the empty interleaved
channels cannot be used because they would cause
interference with the channels that are used or with adjacent transmitters;
some, however, could be made available to broadcasters or other users.
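The channel arithmetic above can be checked with a short sketch. The 8 MHz raster and the 470 MHz lower edge of channel 21 follow from the bands listed in the text; the script and its function names are purely illustrative:

```python
# UK UHF television raster: 8 MHz channels, with channel 21's lower edge at 470 MHz.
CH_WIDTH_MHZ = 8
BASE_CHANNEL, BASE_MHZ = 21, 470

def channel_edges(ch: int) -> tuple[int, int]:
    """Return the (lower, upper) frequency edges of a UHF channel in MHz."""
    lower = BASE_MHZ + (ch - BASE_CHANNEL) * CH_WIDTH_MHZ
    return lower, lower + CH_WIDTH_MHZ

# Channels used for analogue and digital TV: 21-35, 37 and 39-68.
tv_channels = list(range(21, 36)) + [37] + list(range(39, 69))

print(len(tv_channels))                  # 46 channels
print(len(tv_channels) * CH_WIDTH_MHZ)   # 368 MHz in total
print(channel_edges(21), channel_edges(37), channel_edges(68))
```

Channel 37 comes out at 598-606 MHz and channel 68 at 846-854 MHz, consistent with the band limits quoted above.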
Satellite broadcasting also requires spectrum for two purposes – to uplink
signals to the satellites and to broadcast signals direct to home (DTH). As
signal strength from the medium-powered satellites currently in use is fairly
low, frequencies must be cleared of alternative services to allow signals to
'get through'. A further feature of satellite broadcasting is that, because
transponders have a multinational footprint, and because uplink and reception
can be in different countries, the spectrum authority in the country where the
signal is received may have no jurisdiction over the provider of transmission
services.
While command and control has been used almost universally for
managing broadcasting frequencies as well as others, attention has turned
in several countries to the alternative of using market mechanisms to
allocate and assign spectrum. A start has been made in this process with the
use of auctions to assign licences, especially for third generation mobile
services, but the market agenda extends to 'secondary trading', namely the
exchange of ownership of spectrum or spectrum licences that have already
been issued, accompanied by the opportunity for the existing or new licensee
to change the use of the spectrum – often known as liberalisation – subject, of
course, to international obligations. The U.S. and UK spectrum management
agencies have supported, and to some degree introduced, secondary
trading and liberalisation (FCC, 2002; Ofcom, 2005a). The European
Commission recently proposed that all spectrum used for terrestrial
broadcasting and fixed and mobile wireless communications should be
tradable and flexible (EC, 2005a). This is being realised particularly through
a policy known as WAPECS (wireless access policy for electronic
communications services), a Commission Communication on which is
expected in mid-2006 (RSPG, 2006).
The implications of this policy (which currently has, at best, minority
support from the member states) would be major. It would mean that a
substantial swathe of frequencies would be available for a range of possible
uses. As well as mobile telephony, these include mobile broadcasting and
wireless broadband, both 'broadcasting' technologies providing
'broadcasting' services, but in a non-traditional form.
Mobile broadcasting has had several trials in Europe and elsewhere, but
fully fledged commercial services are still in their infancy 2. Given the small
screen, high levels of definition are not required, so that, roughly speaking,
the spectrum for one terrestrial channel can transmit three services. It is not
known yet which frequencies are best suited for mobile broadcasting, across
a range which, in the UK, includes spectrum used for digital audio
broadcasting (in which BT, Microsoft and others are developing a wholesale
mobile broadcasting service), the so-called L-band at 1452-1492 MHz
(which the UK spectrum regulator, Ofcom, proposes to auction in 2007), and
spectrum freed by the analogue broadcasting switchoff described below. A
similar range of technological opportunities applies in other countries.
The European Commissioner for the Information Society and
Broadcasting has noted the development of plans for mobile broadcasting in
a number of countries and suggested that action might be needed to make
reception available within the EU as a whole (REDING, 2006). However, the
discussion above suggests that it may be premature to seek either to
standardise the technology or to harmonise spectrum allocations for mobile
broadcasting.

2 The European Radio Spectrum Policy Group (RSPG) is preparing an opinion for the
European Commission on spectrum for mobile multimedia services in the field of broadcasting.
The RSPG is a group of national spectrum regulators that advises the Commission.
Wireless communications technologies such as 3G (developed to higher
speeds via HSDPA), fixed CDMA and WiMAX are also capable of
downloading or streaming broadcasts to individual (mobile or fixed)
customers. This point-to-point technology is inherently more expensive than
point-to-multi-point broadcasting technologies, but services are now or will
soon be available.
In the face of these competing claims on spectrum, should spectrum
regulators (government departments or independent agencies) adopt an
administrative or command and control approach, or should they allow the
market to decide via auctioning of spectrum among competing users and
uses, and secondary trading with flexible use? The Commission's proposals
are set out above. But the UK has already opted for a predominantly
market-based regime.
As in other countries, in the UK broadcasting policy generally drove
spectrum allocation rather than vice versa. Channels were added as and
when broadcasting policy dictated, irrespective of the availability of extra spectrum.
The emergence of digital terrestrial transmission has been a highly directed
process. The only significant departure was the 'unauthorised' emergence in
1988 of direct-to-home (DTH) broadcaster Sky, which used a Luxembourg-based
satellite and did not initially require a broadcasting licence from the
regulator or a wireless telegraphy licence from the UK Government. But
following the merger with its 'approved' rival BSB, BSkyB too came into the
regulatory fold.
Highly detailed planning of broadcasting frequencies was undertaken by
the broadcasters themselves or their regulators. This led to what are
recognised as efficient outcomes, in the sense that the spectrum was used
intensively, but complaints abounded over its distribution among different
broadcasters. The regime also lacked any mechanism for dealing with
the assignment of additional spectrum, other than by traditional means 3.
3 Market mechanisms had been used previously in the UK to allocate commercial broadcast
licences via a competitive tendering process. The object competed for was not a spectrum
licence alone, but a package involving both favoured access to viewers and the availability of
spectrum, conditional on the performance of specified public service broadcasting obligations.
Thus a 'bundle' was auctioned; as a result the value of the spectrum license alone was not
transparent.
Following the explicit legalisation of spectrum trading in the European
Union in 2003 under the new regulatory arrangements, the UK
Communications Act of 2003 placed on Ofcom, the newly integrated
(broadcasting and telecommunications) regulator, the duty of seeking
optimal use of spectrum, and laid the basis for the introduction of secondary
trading and change of use of spectrum, in addition to the auctions of
spectrum already used for primary issues. Prior legislation had also
permitted the spectrum agency to levy an annual payment for spectrum use
on private or public bodies, which became known as an 'administered
incentive price'. This was notionally designed to represent the value of the
spectrum in an alternative use – its 'opportunity cost' – and to encourage
economy and efficiency in spectrum use (Ofcom, 2005d). Public service
broadcasters in particular continue to oppose what they call a 'spectrum tax'.
Ofcom quickly developed a Spectrum Strategy Framework (Ofcom,
2005a) and Implementation Plan (Ofcom, 2005b), together with a series of
measures to accommodate trading. The strategy envisaged a speedy switch
from 'command and control' to market methods, which by 2010 would
account for 70% of assigned spectrum (see table 1), another 4-10% being
licence exempt 4 .
Table 1 - Use of different spectrum management techniques

a) Spectrum below 3 GHz

                     1995     2000     2005     2010
Command & Control    95.8%    95.8%    68.8%    22.1%
The Market            0.0%     0.0%    27.1%    73.7%
Licence Exempt        4.2%     4.2%     4.2%     4.2%

b) Spectrum between 3 GHz and 60 GHz

                     1995     2000     2005     2010
Command & Control    95.6%    95.3%    30.68%   21.1%
The Market            0.0%     0.0%    61.3%    69.3%
Licence Exempt        4.4%     4.7%     8.2%     9.6%

Source: Ofcom (2005a), p. 36
The UK is clearly an extreme case, where tradability of spectrum will
extend to public sector uses (such as additional spectrum required for the
emergency services) in the future. Other administrations may nevertheless
find it increasingly difficult to arbitrate not between competing firms
producing the same service, but between competing users providing quite
different services. The trend of spectrum management more generally is
likely to favour market methods. This is now discussed in relation to the
transition from analogue to digital terrestrial broadcasting.

4 Licence exempt spectrum can be used by anyone abiding by power restrictions. Wi-fi 'hot
spots' are a good example of current licence exempt use. Due to the interference problems
noted above, licence exempt spectrum is not suitable for wide area broadcasting.
Digital switchover
The key spectrum issue facing broadcasters in 2006 involves proposals
to switch off analogue transmission and move to digital technologies. Each
of terrestrial, satellite and cable transmission modes can be realised in both
analogue and digital formats, but in the latter two cases, the technological
choice resides almost exclusively with the platform owner. However,
because analogue terrestrial has been responsible for the universal service
delivery of television to viewers without access to other platforms, the switch
to digital terrestrial transmission has been the product of a complicated
interaction of public policy, regulation and commercial incentives. As well as
expanded capacity, digital transmission offers other advantages such as
much greater interactivity.
Table 2 - Digital TV penetration rates (% of households)
in a number of countries, end 2005

Country        Total    Cable    Satellite    DTT     IPTV(*)
UK             68.9     10.5     32.0         25.2    0.2
France         34.7      4.3     21.6          6.9    1.9
Germany        28.9      6.7     17.8          4.2    0.1
Italy          36.0      0.0     20.2         14.9    0.9
Netherlands    11.4      5.3      3.1          2.3    0.6
Poland         17.9      0.4     17.5          0.0    0.0
Spain          27.6      5.6     15.4          5.2    1.5
Sweden         44.5      9.6     20.6         13.3    1.0
USA            50.3     25.3     24.2          0.5    0.3
Japan          59.1      7.2     33.1         17.9    0.9

(*) Delivered by DSL or equivalent technology
Source: Screen Digest
This particular form of digital switchover (or digital transition or analogue
switchoff) is occurring or planned all over the world. Thus Japan has set a
switch-over date of 2011. Legislation has recently been passed in the U.S.
requiring analogue switch-off on 17 February 2009 5 . Some data on the
penetration rates of digital TV in a number of countries are given in table 2.
In Europe, the European Union has adopted a target date of 2012 and a
final date of 2015 for completion of the switchoff on analogue terrestrial, but
member states have in many cases adopted more exacting targets (see
table 3).
Table 3 - European digital switchover timetables for terrestrial transmission

Country           Target Date    Other details
Austria           2010
Belgium           2012
Cyprus            no date set    No decision yet
Czech Republic    2017
Denmark           2011
Estonia           2012
Finland           2007           Aug 07 – all of country
France            2010
Germany           2008           Berlin switched off 2003
Greece            2010
Hungary           2012
Ireland           no date        No decision yet
Italy             2006
Latvia            2006
Lithuania         2012           SO starting in 2012
Luxembourg        no date set    Market to decide
Netherlands       no date set    No decision yet
Malta             no date set    No decision yet
Poland            2014           2012 target
Portugal          2010
Slovakia          2012
Slovenia          2015
Spain             2010           May start earlier by region
Sweden            2008           In progress
United Kingdom    2008-2012      Region-by-region

Source: European Union May 2005, partially updated April 2006
Germany achieved the first regional switchover in 2003, in Berlin, a
heavily-cabled area. In mid 2006 switchover was half completed and the
target date of 2010 seems attainable. France has now set 2010 as the
switchover date. Italy has now delayed the switch-over target to 31
December 2006, but this is quite impracticable. Finland has a completion
date of August 2007, and Sweden of 2008, with a 50% target by the end of
2006. Experiences in the UK are discussed below.
5 For further analysis of developments throughout the world, see CAVE & NAKAMURA (2006).
One consequence of the switchover to digital terrestrial is that, for a
transitional period, both analogue and digital platforms have to be used at
once. The length of the period is under government control, but the turnover
of customer premises equipment (televisions, VCRs, etc., of which there
may be three or more per household) is a slow process, and provision may
have to be made to encourage the acquisition of digital set top boxes by
slow adopters. In Italy, which has a switch over target date (almost certainly
unrealistic) of the end of 2006, the government is offering set top box
subsidies of 40 euros per household, effectively restricted to DTT boxes –
which has led to charges of illegal state aid.
Nonetheless, the possible 'spectrum dividend' associated with analogue
switch-off has encouraged most governments of richer countries to seek a
digital switchover of terrestrial transmission, which both brings the
advantages of digital television (more channels, interactivity) and releases
valuable spectrum that can be assigned to other users, either by command
and control or market methods 6 .
But some obstacles have to be overcome before alternative uses can be
implemented. In particular, the Geneva RRC of May/June 2006 (or a
subsequent authorised meeting) must agree that released spectrum can be
used for other purposes than broadcasting. If such alternative uses are
authorised, it will be on the footing that they may not cause more interference
than broadcasting, and would receive no more protection from interference
than broadcasting.
The European Commission has solicited an opinion from the Radio
Spectrum Policy Group (RSPG) on the EU spectrum policy implications of
the digital dividend 7. This follows its May 2005 Communication, which
sets out the Community policy objectives for the transition and notes that it is
important not to unduly constrain the re-use of the freed bands for new and
innovative services (EC, 2005b).
Debate in the UK has been particularly intense as a result of the
legislative obligation in the Communications Act 2003 to ensure that digital
coverage will replicate that attained by analogue transmission, and by the
conflicting interests of broadcasters, many of whom, including the pay
satellite broadcaster BSkyB and analogue channels that currently face
competition in respect of some households only from four other analogue
channels, are likely to be adversely affected by DTT.

6 Some estimates of the value of freed spectrum in Europe can be found in HAZLETT et al. (2006).
7 See footnote 2.

The UK had also
achieved the highest levels of digital penetration in Europe by mid-2006
(over 70%), especially of DTT in the form of Freeview, comprising forty
channels, largely non-pay. This favourable background for a switch-over
followed a lengthy debate, which began in September 1999 when the UK
government first announced its ambition to switch off the analogue TV signal
and move to digital transmission. It said that the digital switchover could start
as early as 2006 and finish by 2010, although the precise date would
depend on the behaviour of broadcasters, manufacturers and consumers.
The government also announced that switchover would not take place
unless the following conditions were met:
• Everyone who could watch the main public service broadcasting
channels in analogue form (i.e. 98.5% of households) could receive them in
digital.
• Switching to digital was an affordable option for the vast majority of
the population.
The target indicator of affordability was defined as 95% of households
having access to digital equipment before switch-over, generally taken to
mean that 95% of households would have adopted digital TV before
switchover occurred. A plan soon crystallised to carry out the switch-over on
a region-by-region basis.
The cost of converting receivers was expected to vary according to
several factors, including:
- the amount of digital equipment a household already has,
- how much additional equipment the household wishes to continue to
use after switchover,
- their platform and equipment choices,
- their service choices,
- prevailing prices in the year(s) they make their purchases.
However, as the switch-over will not be voluntary for some people and,
for most, affordability will be a major issue, there must be some minimum,
one-off cost with which consumers feel comfortable. The cost of conversion
for a house with only one TV where a new aerial is not required is estimated
at EUR 60-110, while the cost for a house with two TVs and one VCR may
be as high as EUR 110-220.
In research commissioned by the government, most households without
digital TV said they were likely to convert of their own accord within the next
few years. However, this left some 20% of households who currently intend
to remain analogue only. Independently of whether they could receive a DTT
signal, three-quarters of this group said they would adopt digital if they knew
switch-over was imminent, while the remaining quarter (5% of all
households) said they would never be willing to convert.
The group least willing to convert to digital TV was not a coherent cluster
with clearly defined socio-economic or demographic characteristics. Instead
it tended to have a variety of reasons for remaining with analogue TV. A
household's propensity to adopt digital television frequently reflected its
attitudes towards TV and multichannel TV in particular. Consumers least
willing to adopt digital television tended not to value TV as a medium, or
alternatively felt that more TV channels would have a negative impact on
society. Some others believed that digital TV had little to offer over and
above the analogue TV offering; while some mentioned issues such as cost
and difficulty of use.
In accordance with normal practice, the UK Government presented a
justification of the switchover policy in the form of a cost-benefit analysis
(DTI, 2005). The analysis evaluates the costs and benefits to the UK of
completing a digital switchover involving the switch-off of all analogue
signals. This scenario is compared with continuing both the analogue and
digital transmissions. The analysis focuses on the quantifiable effects of
switchover, including environmental effects. The non-quantifiable effects of
switchover such as the public service aspects of the DTV project are not
discussed and the distributional aspects of the project are not examined in
detail.
The consumer costs of switchover include the net cost of set conversion,
which will be achieved by purchasing a set top box (STB). However, the
aggregate cost of purchasing STBs overestimates the economic cost of
switchover, as some of these consumers will have been very close to buying
into digital, even if the switchover were not to take place, i.e. they value
digital TV at some level between the cost of the STB and zero. To model
this, it has been assumed that the implicit demand curve for STBs is a
straight line from the cost of an STB to zero, and therefore the average
valuation by consumers is half the cost of the STB.
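That assumption reduces to a simple calculation: under a linear demand curve from the box price down to zero, the net economic cost of conversion is half the aggregate spend. The household count and box price below are invented for illustration, not taken from the DTI analysis:

```python
def net_conversion_cost(households: int, stb_price: float) -> float:
    """Economic cost of STB purchases, net of the value consumers place on them.

    Assumes the implicit demand curve for STBs is a straight line from
    stb_price down to zero, so the average valuation per converting
    household is half the box price.
    """
    aggregate_spend = households * stb_price
    average_valuation = stb_price / 2
    return aggregate_spend - households * average_valuation  # = half the spend

# Hypothetical: 5 million remaining analogue households buying EUR 60 boxes.
print(net_conversion_cost(5_000_000, 60.0))  # half of the EUR 300m outlay
```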
One of the key consumer benefits associated with switchover is the value
of increased DTT coverage to previously un-served areas, areas that it was
impossible to reach using digital signals during dual transmission.
Consumers will also benefit from the release of fourteen channels of clear
spectrum when analogue transmission ceases. The economic value of this
extra spectrum depends on the use to which it is put: generally it is
estimated that it will be of more value if it is used for mobile
telecommunications rather than television. However, because of risks and
uncertainties associated with using spectrum for mobile telecoms,
the cost-benefit analysis is based on the assumption that the released
spectrum is used for digital television services.
The key producer benefit from switchover is the cost saving from
decommissioning analogue transmitters, as the cost of running, maintaining
and fuelling such sites will no longer have to be borne. It is assumed that
any producer surplus arising for the operators of the new services on
released spectrum will be competed away.
The cost-benefit analysis shows quantifiable benefits in the region of £1.1
– £2.25 billion in net present value (NPV) terms 8 . Sensitivity analysis gives
results that are reduced under some assumptions, but remaining
substantially positive under most likely combinations of assumptions. The
model shows that the outcome in terms of NPV is most sensitive to
estimates of the value of extended coverage of DTT services and released
spectrum.
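The NPV measure used to report these results discounts each year's net benefit back to the present. The discount rate and cash-flow stream below are invented purely to illustrate the mechanics, with the convention that the first flow arrives in year 1:

```python
def npv(rate: float, net_benefits: list[float]) -> float:
    """Net present value of a stream of net benefits, the first arriving in year 1."""
    return sum(b / (1 + rate) ** t for t, b in enumerate(net_benefits, start=1))

# Hypothetical stream: GBP 0.3bn of net benefits a year for ten years at 3.5%.
print(npv(0.035, [0.3] * 10))  # roughly GBP 2.5bn in present-value terms
```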
The UK government has taken on a commitment to ensure a level of
coverage of public service broadcast signals equivalent to that currently
available with analogue broadcasting. However, this could be achieved by
various means: by directly mandating public service broadcasters to transmit
in particular ways, or indirectly by placing an enforceable burden on relevant
broadcasters to meet a specified availability target, in whatever way they
chose; the latter approach would contemplate the possibility of a variety of
technologies being employed to provide coverage - DTT, cable, satellite,
DSL and other technologies. Broadcasters with a universal coverage
obligation would have an incentive to seek out the cheapest combination
from a commercial standpoint. Such harnessing of incentives has clear
advantages. Moreover, any preference for a single platform inspired by
regulator or government would, if accompanied by explicit or implicit state
subsidies, raise issues of possible state aid 9.

8 The NPV is the capital sum available today which is equivalent to the expected stream of net
benefits.
Following a lengthy consultation, Ofcom finally decided to mandate DTT
as the means of providing universal digital coverage for public service
broadcasting multiplexes, although commercial multiplexes were free to
make their own choices, so long as coverage did not decrease (Ofcom,
2005c). Even this prescriptive solution left open a number of trade-offs
among the objectives of a) coverage (raising the level by small amounts)
above the current 98.5% available using digital technologies, b) power levels
(which determine the number of channels available on a particular multiplex),
c) the cost of additional transmitters, and d) the risk that the option adopted
would be subject to delays. The variant that emerged victorious in 2005 was
one which allowed more channels to be broadcast by using a particular
mode of operation (known as 64QAM).
This means that, as digital switch-over progressively occurs throughout
the UK regions, analogue transmitters will fall silent at each of the current
1154 sites. All of those sites will be used for DTT, in place of the 80 sites
currently used, at lower power, to achieve a 70% coverage. The UK would
thus effectively replicate its existing analogue networks, but with a six-fold
increase in capacity.
This leaves open the question of how the liberated spectrum will be used.
Ofcom has established a digital dividend review (DDR) to investigate
stake-holders' views and to establish a methodology for valuing alternative
applications. Its starting point is that auctioning the spectrum for flexible use
is the most likely way forward, but other considerations could override this.
UK broadcasters, however, would prefer an allocation of the freed spectrum
to them, which would support two or more national digital multiplexes in
addition to the six already in operation. Proposals have been put forward to
use the additional spectrum for high definition television (HDTV), which
requires approximately four times as much spectrum as normal definition
broadcasts. As a result of these requirements, the released spectrum would
support only a handful of HDTV services, and it seems more efficient to
provide such services using the less expensive spectrum utilised by satellite
broadcasting, or by cable or DSL.
9 This is a particular danger following the Altmark case, in which the European Court specified a
need for competitive tendering to be used where possible to finance projects with public
subsidies. Note that the Commission concluded that the switch-over accomplished in Berlin
involved state aid, because of its lack of technological neutrality.
The UK debates are of particular interest because the government and
regulator have both sought to bring out the economic effects of switch-over
and been faced with the problem of replicating existing high analogue
coverage levels. The degree of legal constraint on use of the spectrum has
been low, although this partly depends on decisions made at the 2006
Regional Radio Conference in Geneva.
In other member states, the spectrum regulator's freedom of manoeuvre
is much lower. This applies, for example, in France and Germany, where
legislative or political commitments to maintaining broadcasting use have
been much stronger than in the UK.
Conclusion
The broadcasting sector is thus on a transition path from the old world, in
which command and control allocation of spectrum and state control of
broadcasting combined to supply a very limited range of non-competitive
services, to one in which multiple wireless and wire-based platforms supply
competitive services - free to air and pay - delivered in traditional linear or
non-linear fashion.
Spectrum policy can either promote or delay these changes. In the UK, a
liberalised spectrum policy is likely to permit new broadcasting services,
fixed or mobile, to come to the market, provided they can outbid other users
for the spectrum. In other countries, spectrum policy associated with
switchover is tending to exclude competition with alternative uses. Within
this framework, the spectrum regulator, in combination with the broadcasting
regulator, can either promote new broadcasting competition, or assign
released spectrum to existing broadcasters, of either the commercial or
public service variety. Unfortunately, the political economy of broadcasting
regulation is such that released spectrum is often given to incumbents,
which typically have great political influence.
This may not be enough, however, to sustain the broadcasting status
quo. The momentum behind new services such as mobile broadcasting is
very strong. Spectrum released as part of the digital dividend can provide
frequencies for them. IPTV is becoming established, using DSL. Even
conservative spectrum regulation will struggle to turn back the tide of new
broadcasting services.
References
CAVE M. & K. NAKAMURA (2006): Digital Broadcasting: Policy and Practice in the
Americas, Europe and Japan, Edward Elgar, Cheltenham.
CAVE M. (2002): Review of Radio Spectrum Management, London: HM Treasury
and DTI.
DTI (2005): Cost-Benefit Analysis of Digital Switchover, Department of Trade and
Industry.
EC:
- (2005a): Communication: a market-based approach to spectrum management in
the European Union.
- (2005b): Communication on Accelerating the transition from analogue to digital
broadcasting, COM (2005) 204.
FCC (2002): Spectrum Policy Task Force Report.
HAZLETT T.W. et al. (2006): 'The social value of TV band spectrum in European
countries', INFO, 8(2), pp. 67-73.
Ofcom:
- (2005a): Spectrum Strategy Framework
- (2005b): Spectrum Strategy Framework: Implementation Plan – Interim Statement
- (2005c): Planning options for digital switchover: statement
- (2005d): Spectrum Pricing
- (2006): Digital Dividend Review
REDING V. (2006): Television is going mobile – and needs a pan of European policy
approach, Speech 06/15
DRMs, Innovation and Creation
Olivier BOMSEL & Anne-Gaëlle GEFFROY
CERNA, Ecole des Mines de Paris
Abstract: DRMs are intellectual property institutions. They transpose the empirical
principle of copyright, which implicitly recognizes that specific ownership rules should be
attached to non scientific creation, into the digital era. The legal protection of DRMs, a
private means of enforcing content excludability, participates in the "privatization" of
copyright protection. This, in turn, means that a proprietary software — governed by
intellectual property rights, reinforced by public law — becomes the key to the vertical
relations shaped by exclusive copyright. DRMs consequently represent a major stake in
the competition to capture network effects in the content distribution vertical chain.
Key words: copyright, distribution, DRMs, network effects
Digital Rights Management systems (DRMs) are commonly perceived
as technical nuisances invented by content owners to prevent
consumers from fully enjoying the enhanced benefits offered by a
digital age. Such a pointless function would explain the painful roll-out of DRMs, which, in the best-case scenario, can be dismantled by avant-garde information technologies such as media players, laptops, open broadband networks and peer-to-peer software. The content industry is renowned for
shying away from innovation, and for running to court to protect its rents.
Everyone recalls how ruthlessly the studios sued the consumer electronics
industry thirty years ago in an attempt to block the roll-out of VCRs. And
how, in the end, they lost and were forced to adapt as a result.
From an economic standpoint, it is widely accepted that innovation proceeds through Schumpeterian creative destruction, whose effect is to abolish the rents of obsolete systems thanks to inventive technical or economic solutions. That vision implicitly applies to physical distribution systems for content, such as music records or DVDs, justifying the massive circumvention allowed by innovative information technologies. DRMs are then often seen as a vain trick to block that process.
In fact, DRMs are intellectual property institutions. They transpose the empirical principle of copyright, which implicitly recognizes that specific
COMMUNICATIONS & STRATEGIES, no. 62, 2nd quarter 2006, p. 35.
ownership rules should be attached to non scientific creation, into the digital
era. Those rules constitute the economic basis of the creative industries that
provide expensive, useful and enjoyable mass consumption information
goods. Unlike patents, creative goods are not rendered obsolete through
scientific innovation or additional creation. They therefore need to be
effectively protected; otherwise the innovation process is distorted by false
signals of intellectual property theft. In other words, the creative destruction process leading to economic innovation should not be biased by the systematic destruction of creative property. Yet it is, because there are two sets of industries involved in the process.
On the borderline between innovative and creative industries, the story of
DRMs clearly illustrates the conflict of interests inherent to that situation.
Copyright principles
Cultural contents are the only information goods that are simultaneously
experience goods. Their experience dimension — one needs to consume
them before gaining knowledge of them, nobody knows their market in
advance — has far-reaching implications in terms of production, marketing
and financing. We will not look at this topic in greater detail here, focusing
instead on the information dimension of DRMs. However, it is worth
remembering that their consumption via experimentation makes contents
economically different from many functional information goods such as
software programs or patents.
As information goods, contents have been characterized since the
seminal paper of Arrow in 1962 by the two major properties of public goods:
non-rivalry and non-excludability. The consumption of a non-rival good by an
additional person does not decrease the amount available for others. Since its marginal cost is nil, it should be priced at zero to maximize social welfare. A good is non-excludable when it is impossible to prevent someone from consuming it, even when s/he does not pay anything for it. Non-excludability induces a deficit of incentives to create, as producers anticipate underpayment.
Incentives to create can be re-established in two ways. A first possibility
is to reward content producers through public remuneration schemes based
on tax revenues or levies on ancillary products. The second solution is to
rebuild excludability on contents. Copyright laws reward content owners with
exclusive rights to reproduction, distribution, representation, adaptation and
translation, but for a limited period. This is the result of a trade-off aimed at
maximizing social welfare, balancing incentives to create that would require
infinite protection against the benefits of cultural diffusion that would require
no protection at all.
Private Technical Protection Measures (TPMs) supplement copyright
laws with self-enforcing access and copy control measures. Access control measures force consumers to pay to access content. The general idea is simple: the information good is bundled with some private good that lends its properties of excludability and rivalry to the entire bundle (VARIAN, 1998).
The content may be bundled either with physical supports (books,
newspapers, tapes, CDs, DVDs) or with tickets (concerts, movie projections,
pay-TV broadcasts) to form what WATT (2000) calls "delivery goods and
services." Concurrently, copy control measures define consumers' freedom
of use. These technical protection measures are not only private, but also
cooperative: they have to be adopted simultaneously by the content industry,
its distribution networks – including, of course, terminal equipment – and end
consumers.
The exclusive copyright system is the result of these two principles: a
public principle (copyright laws) and a private principle (TPMs).
Copyright laws not only constitute the basis of content protection, but
also inform the industrial organization of content industries. They enable a
better allocation of decision rights along the different segments of the vertical
chain. Vertical selection and financing mechanisms are based on exclusive
copyright. This is also necessary to segment content markets into different
territories and versions.
DRMs: a digital copyright principle
Digitization embodies the theoretical public good nature of contents in a
highly concrete form: each copy is an original and each consumer a potential
broadcaster. This change of status has turned into a social phenomenon,
with the surge of broadband networks and PC equipment as content
distribution systems. Copyright issues have changed: the number of
potential diffusion channels is growing together with threats to content
owners' revenues and incentives to create. Moreover, the massive trend towards content circumvention calls individual prosecution into question, in terms of both cost and social acceptance. Following the 1996 WIPO Treaties, the European
Union and the United States adopted digital copyright laws, EUCD 1 and
DMCA 2 , that shifted the balance of the exclusive rights system with a
"radical innovation," namely the legal protection of DRMs.
Digital Rights Management systems (DRMs) refer to digital access, copy
and redistribution control mechanisms for copyrighted contents such as
music, video or text. They can be used either on physical supports (like
DVDs) or on purely digital files. DRMs control access to digital content files:
they are the entry ticket bundled with digital songs, texts and movies that
make them excludable. Early examples of DRMs like the Serial Copy
Management System for digital audiotapes or the Content Scrambling
System (CSS) for DVDs were just copy restriction tools. But DRMs can also
control the freedom attached to digital contents. They assign a pre-defined
and self-enforcing set of uses to each item of digital content covering rights
to view (hear), modify, record, excerpt, translate into another language, keep
for a certain period, distribute, etc.
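The pre-defined rights set described above can be pictured as a small data structure attached to each content item. The sketch below is purely illustrative: the field names and the `allows` check are our own invention, not the schema of any real DRM or rights-expression language.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class UsageRights:
    """Illustrative per-item rights set (hypothetical fields, not a real DRM schema)."""
    can_view: bool = True
    can_copy: bool = False
    can_excerpt: bool = False
    expires: Optional[date] = None  # None means a perpetual license

    def allows(self, action: str, on_date: date) -> bool:
        # Rights lapse once the licensed period is over.
        if self.expires is not None and on_date > self.expires:
            return False
        # Unknown actions are denied by default.
        return getattr(self, f"can_{action}", False)

# A rental-style license: viewing only, until the end of 2006.
rights = UsageRights(can_view=True, expires=date(2006, 12, 31))
print(rights.allows("view", date(2006, 6, 1)))   # True
print(rights.allows("copy", date(2006, 6, 1)))   # False
print(rights.allows("view", date(2007, 1, 1)))   # False: the license has expired
```

A real DRM additionally encrypts the content so that such a check cannot simply be bypassed; the data structure only captures the "self-enforcing set of uses" side of the description.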
Given how hard it is to sue individual circumventors, without DRMs each consumer would exercise complete control over the exploitation of digital files.
The legal protection of DRMs — a private means of enforcing content
excludability — is part of a "privatization" of copyright protection. This makes
proprietary software, governed by intellectual property rights and reinforced
by public law, crucial to the vertical relations shaped by exclusive copyright.
Content distribution systems
Contents are distributed to the end consumer through systems consisting
of delivery infrastructures (physical retail, broadcast, broadband, mobile etc.)
and via terminal equipment. All devices that enable consumers to select,
receive, render and store contents, be they fixed or mobile, are pieces of
content distribution systems. According to this definition, contents and all
delivery equipment are complementary goods. The systemness (ROSENBERG, 1994) of digital content distribution comes from the need for technical
interoperability between each link of the vertical chain.
1 EUCD: European Copyright Directive (22 May 2001).
2 DMCA: Digital Millennium Copyright Act (1998)
All types of information systems are subject to powerful "network effects"
- also called bandwagon effects - whereby users' benefits increase with the
number of users. Network effects include "direct" effects, which are directly
proportional to the number of users (fax or telephone services), and
"indirect" effects mediated by a market such as complementary products, for
example: the music ringtone industry indirectly benefits from GSM network
effects; while MS Windows indirectly benefits from the effects of the internet
network. Moreover, direct network or bandwagon effects also occur in
experience or fashionable goods such as contents, where the testing of the
good by early adopters increases its value for other consumers. For each
Harry Potter fan, the utility of the movie increases with the number of fans
s/he can exchange with.
LEIBENSTEIN (1950) was the first economist to stress the importance of
bandwagon effects on the demand function. ROHLFS (1974) modelled the
network effects through an aggregated demand curve. He showed that there
is a critical mass of subscribers below which a network cannot be
sustainable. Before this mass is reached, any defection brings the willingness-to-pay of the remaining members below the price of the service: any equilibrium is unstable. Once critical mass is reached, the utility of all consumers stands above the price of the network. Moreover, every new consumer brings additional utility to all the others. The main issue is consequently how to achieve critical mass. One rule of thumb is to
subsidize early adopters. What tends to vary tremendously are the means of
subsidy selected and the economic signals given by the subsidy.
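ROHLFS' critical-mass argument can be reproduced in a toy model (our own simplification, not the exact 1974 formulation): consumer types θ are uniform on [0, 1], a type-θ consumer values the service at θ·f·v when a fraction f of the population subscribes, and the marginal subscriber must be indifferent at price p, giving f(1 − f)v = p.

```python
import math

def equilibria(p, v=1.0):
    """Toy Rohlfs-style model. In an interior equilibrium the marginal
    subscriber, theta = 1 - f, is indifferent: (1 - f) * f * v = p.
    Returns (critical_mass, stable_size), or None if the price is too
    high for any network to be sustainable."""
    disc = 1.0 - 4.0 * p / v          # discriminant of f^2 - f + p/v = 0
    if disc < 0:
        return None                   # no sustainable network at this price
    lo = (1.0 - math.sqrt(disc)) / 2  # unstable equilibrium: the critical mass
    hi = (1.0 + math.sqrt(disc)) / 2  # stable equilibrium the network settles at
    return lo, hi

crit, stable = equilibria(p=0.2, v=1.0)
print(f"critical mass = {crit:.3f}, stable size = {stable:.3f}")
# Below crit, any defection pushes willingness-to-pay under the price and
# the network unravels; above it, every new subscriber adds utility for all.
```

The comparative statics match the text: subsidizing early adopters effectively lowers p, which shrinks the critical mass that has to be crossed.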
The subsidy may occur across services within the same network. In the
U.S. fixed telecoms sector, long distance calls were overcharged while local
calls were subsidized to provide "universal service", namely no price
discrimination for geographically isolated consumers. The subsidy may also
occur through vertical relations within networks. In Europe, GSM telephone
operators have been able to charge fixed networks high termination fees for
fixed-to-mobile calls, while the regulated fixed networks have been
powerless to retaliate. The money transfer resulting from high
interconnection charges 3 has been partially passed on to consumers
through handset subsidies. Network effects in mobile networks have resulted
in large-scale substitution of fixed calls by mobile.
3 About €19 billion for the UK, France and Germany between 1998 and 2002. BOMSEL, CAVE,
LE BLANC, NEUMANN, 2003.
In many cases, cross-subsidies occur along the content vertical distribution chain. Piracy or copyright circumvention can be a form of cross-subsidy: the utility of the distribution industry increases thanks to the
availability of free content. YU (2003) and VARIAN (2004) both refer to the
history of U.S. copyright law in the 19th century. After independence,
newspapers and books were massively imported. In each state, local
newspapers lobbied for a copyright law. The first federal Copyright Act, passed in 1790, was limited to works by U.S. citizens. Between 1800 and 1860, the publishing industry expanded thanks to royalty-free (and already market-tested) English books. Along the same lines, the U.S. refused a bilateral treaty on copyright proposed by England in 1842. By 1880, however, as American authors (Hawthorne, Irving, Poe, Beecher-Stowe, Twain, etc.) began to gain popularity, publishers started to complain about unfair competition from pirated foreign authors, whose books could be sold more cheaply. As a result, Congress passed the International Copyright Act in 1891, which extended copyright provisions to foreign authors.
This short story shows how industrial conflicts can arise over the enforcement of copyright protection: vertical cross-subsidies from content
circumvention play a major role in the roll-out of distribution systems. The
innovative nature of digital distribution is twofold. Firstly, while in physical
distribution, the costs of logistics are fully borne by the retail network, in
digital systems, the consumer has to invest in terminal equipment to access
content. Such equipment has to be rolled-out in huge mass consumption
markets showing network effects. Secondly, "private" copyright protection
measures have to be rolled-out together with equipment and content, which
means that TPMs have to be adopted by all the vertical players. While vertical
conflicts around TPM adoption have always arisen, their resolution is far
more complex — and more crucial — when several systems involving many
sets of firms compete together to capture network effects.
DRMs on dedicated content distribution systems
We use the term "dedicated" for content distribution systems like the
physical retailing of CDs or DVDs, radio or television where terminal
equipment does not provide any utility beyond content consumption.
Network effects on these networks are mediated by contents. Moreover, the
prior consent of content owners is required (PICKER, 2004). Whenever
content owners choose a standard, whether encrypted or not, network
effects promote it, as this standard allows consumers to access a larger
range of contents.
Digital dedicated distribution systems such as digital satellite or digital cable have benefited from the initial roll-out of TV sets. The latter, in turn, benefited from the "free-to-air" distribution model for audiovisual content. In
other words, consumers accepted the need to buy TV sets because they
offered access to free contents. The free-to-air model is based on the
network effects associated with two-sided market platforms, through which
the consumers of one side (the viewers) can be valued by the clients of the
other (advertisers). As information goods, contents can easily be structured
into two-sided information platforms, decreasing their utility for consumers
with ads, but making it possible to broadcast them for free (ROCHET &
TIROLE, 2004). The more viewers, the more advertisers, the more
resources available for new content, etc. Once TV sets were in place, pay
content services were rolled-out together with marginal additional equipment
(set-top-boxes) subsidized by distributors. In such systems, content has
always been in a position to monitor the network effects and therefore, to
impose technical standards for delivery and protection on the vertical chain
of distributors.
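The free-to-air logic of the paragraph above (viewers traded to advertisers in exchange for free content) can be sketched with a toy revenue function; the numbers and the linear audience curve are our own illustration, not ROCHET & TIROLE's model.

```python
def ad_revenue(a, r=2.0, k=0.8):
    """Toy ad-funded platform: viewers watch for free but dislike ads,
    so the audience shrinks linearly with the ad load a in [0, 1];
    advertisers pay r per viewer per unit of ad load. All numbers
    are hypothetical."""
    viewers = max(0.0, 1.0 - k * a)  # audience share at ad load a
    return r * a * viewers           # ad revenue funding the free content

# Revenue is hump-shaped: too few ads waste the audience's value to
# advertisers, too many ads destroy the audience itself. For this
# linear audience curve the optimum is a* = 1 / (2k).
best = max((i / 1000 for i in range(1001)), key=ad_revenue)
print(best)  # 0.625 = 1 / (2 * 0.8)
```

The sketch only captures the static viewer-advertiser trade-off; the dynamic "more viewers, more advertisers, more content" loop of the text would feed this revenue back into programming.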
Such systems will benefit from more flexible DRMs in the future, allowing contents to circulate within an authorized home network. The bargaining power of content owners and their interest in such a roll-out should favour the emergence of suitable solutions.
DRMs on the internet
In the beginning, no one knew what the internet would be used for. Yet every time the networks were boosted by additional capacity, applications followed the roll-out rather than preceding it. However, since its beginnings, the internet has been driven by one-to-one communication applications. E-mail, web services, instant messaging, e-commerce and network gaming
take advantage of the two-way communication potential of the network.
Those applications generate direct network effects that pull the broadband
rollout. Peer-to-peer (P2P) applications have emerged in this context as a
way of sharing content, but also, and even predominantly, as a way of
circumventing copyright. These applications offer new uses for broadband
services and use circumvention as a roll-out subsidy (BOMSEL et al., 2004).
On internet networks, content owners have less bargaining power to
impose protection measures on their vertical partners. Firstly, indirect
network effects mediated by content are no longer conditional on the prior consent of content owners. The huge range of contents available on P2P networks provides indirect network effects that benefit and subsidize the roll-out of broadband networks (Internet Service Providers) and all complementary broadband equipment (PCs, microprocessors, modems, software,
music and video players). Secondly, internet networks are not dedicated to
content distribution: PCs are multipurpose pieces of equipment, for which
content consumption is only one of a wide range of applications. Moreover,
they are pulled by one-to-one communications that provide strong direct
network effects.
Internet players are consequently under no obligation to accelerate the pace of DRM roll-out, and stand to gain nothing from doing so. On the contrary, they
have a vested interest in trying to impose their proprietary DRM standard, while benefiting from the de facto compatibility of P2P formats like MP3 or DivX.
These strategies have led to incompatibilities in DRMs between digital offers
and mobile players that are slowing consumers' adoption of DRM-based
online distribution. Moreover, they may incite consumers to circumvent DRM
technologies or to use P2P networks. In this vicious circle, before the standards war is over, no equipment manufacturer can afford to launch a content player that does not accept circumvented MP3 files (Sony tried to launch a digital music player solely compatible with its own DRM files, but quickly gave up this suicidal strategy).
The on-line digital music market illustrates the reasons behind and
results of incompatible DRM systems. Four major players are trying to
impose their proprietary DRM standard. Two of them, Sony and Apple,
refuse to license their DRM technology to other digital music distributors and portable player manufacturers. Their proprietary DRMs (Apple Fair Play
and Sony Open Magic Gate) secure a complete music distribution system
composed of an internet music store, a media player and mobile players.
Real Networks and Microsoft are pursuing the opposite strategy, namely trying to attract as many music stores and portable player manufacturers as
possible to their own DRM technology (WMA DRM and Helix). Helix is open
and Microsoft sells very cheap licenses for its WMA. Given its large market
share, Apple's proprietary strategy induces major incompatibility issues
between on-line music stores and mobile players.
DRM system roll-out issues
The P2P problem: innovation versus creation
Massive circumvention via P2P networks is the major obstacle to the
roll-out of DRMs. "Copyright respectful" digital offers cannot compete with
easily accessible free contents. While free-to-air models decrease the utility
of the consumer with ads, P2P offers the same product as paying content,
with greater choice and flexibility of use. In addition, because it increases the
utility of devices, P2P kills equipment manufacturers' incentives to secure their products faithfully. However, many voices have been raised in opposition
to DRMs for the sake of P2P technologies. DRMs have been accused of
impeding innovation in digital technologies and networks. While the argument that P2P and DRMs are technologically incompatible does not hold, it is debatable whether the cross-subsidization of new distribution systems by free contents may end up benefitting creative industries in the long term. The reference often cited for this long-term benefit is the large VHS market opened up by VCRs. However, to what extent can innovation be promoted at the expense of incentives to create?
The evolution of U.S. court decisions on copying technologies shows
that, with digitization, a new line has been crossed. In the famous 1984
Betamax case, Universal Studios and Walt Disney accused Sony
Corporation of infringing their copyrights. Arguing that individuals' use of
VCR (Video Cassette Recorders) would seriously damage their revenues,
especially from advertising, they wanted the production and importation of
VCRs to be prohibited. In a narrow vote the Supreme Court ruled in favor of
Sony, considering that "time shifting" (recording television broadcasts for
later viewing) was fair use. Moreover, as VCRs were primarily used for that
purpose, selling them was not considered to be copyright infringement,
despite their potentially unauthorized uses. An interpretation of this judgment
could be that the VCR technology's potential infringement on copyright was
considered to be overridden by the overall benefits of innovation.
However, this logic changed with the judgement on the (secondary)
liability of P2P software providers for copyright infringement. In 2001 and
2003, the U.S. courts found two centrally mediated P2P systems (Napster
and Aimster) liable, as they materially contributed to copyright infringement.
European courts applied the same logic. This trend was clearly confirmed in
the MGM versus Grokster case. In the beginning, the U.S. Court of Appeals
applied the Sony-Betamax guideline and found no secondary liability of the
decentralized peer-to-peer software providers for their users' copyright
infringement. The decision focused on the non-infringing uses of P2P
networks (exchange of non copyrighted material) and on the lack of control
of P2P vendors over infringing uses. In June 2005, however, the Supreme
Court ruled that P2P software providers could be held liable for copyright
infringements committed by their users if they actively encourage that
infringement. Three criteria were then defined to judge such active
inducement of infringing uses: the marketing of infringing uses, the lack of a
technology to fight them and the place of infringement in the business
model. This decision led to the closure of the Grokster company 4 .
Compatibility issues
A second obstacle to the roll-out of DRMs is their incompatibility. This is
intrinsically linked to the existence of P2P networks. Manufacturers would be more inclined to make DRMs compatible if P2P networks did not already
provide this service through circumvented compatible contents. Moreover,
the incompatibility of DRMs incites consumers to seek circumvented
contents on P2P networks. This vicious circle fully benefits equipment
manufacturers.
The impact of the incompatibility of DRMs on consumers is not
unanimously considered negative, as it may result in a price decrease: if
there are no network effects, incompatible vertically integrated systems face
more elastic demand than compatible components (MATUTES &
REGIBEAU, 1988). However, consumer surplus may not be superior to
cases where systems are compatible. Indeed, compatibility increases
variety, enabling consumers to mix and match (MATUTES & REGIBEAU,
1988). However, in the case of incompatibility, consumers remain free to
accept or refuse each distributor's offer. This rule mostly applies to
dedicated networks similar to broadcasting.
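The variety effect behind MATUTES & REGIBEAU's mix-and-match result can be illustrated numerically. This is a toy simulation with made-up valuations, not their model: with two music stores and two portable players, compatibility lets each consumer pick the best of four store-player combinations instead of the best of two integrated bundles, so average surplus can only rise.

```python
import itertools
import random

random.seed(1)

stores, players = ["A", "B"], ["X", "Y"]
all_pairs = list(itertools.product(stores, players))

# Hypothetical consumers: each draws an independent valuation for
# every (store, player) combination.
consumers = [{pair: random.random() for pair in all_pairs} for _ in range(10_000)]

def average_surplus(options):
    """Mean surplus when each consumer picks their favourite available combination."""
    return sum(max(c[pair] for pair in options) for c in consumers) / len(consumers)

integrated = [("A", "X"), ("B", "Y")]   # incompatibility: two closed systems only
print(average_surplus(integrated))      # best of 2 options
print(average_surplus(all_pairs))       # compatibility: best of 4, weakly higher
```

Because the compatible choice set contains the incompatible one, this comparison is mechanical; MATUTES & REGIBEAU's contribution lies in how prices and demand elasticities react to the variety effect.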
4 In Europe, the last decision on decentralized P2P software liability found no liability. The 2002 BUMA vs. KaZaa case, Amsterdam Court of Appeals, later affirmed by the Dutch Supreme Court, considered that the KaZaa software was not used for "exclusively" infringing purposes.
The rule applies as long as the consumer can choose between different integrated systems. In the second stage of a dynamic game, incompatibility may indeed lessen competition and prices may rise as one system may win the market (KATZ & SHAPIRO, 1994). This monopolization depends on the
existence and strength of network effects. In the case of incompatible
systems offering contents over the internet (as in the case of on-line music),
network effects are mainly mediated by contents. They depend on the
differentiation level of the content offering and on the range of contents that
each system may provide. If one platform monopolizes all the on-line
offerings of digital contents, the range of content variety accessible through
digital distribution may be endangered. Another possible scenario, once the
standards war is won, is that a DRM standard may start to be licensed as a
monopoly in the vertical chain. However, this monopolization may not
happen as every system benefits from strong indirect network effects
provided by… compatible circumvented contents.
Moral hazard in content distribution
As equipment and software manufacturers are the only beneficiaries of
ineffective DRMs, it can be assumed that incompatibility is a source of
"moral hazard" in digital content distribution 5 . Moral hazard means that
these distributors are not doing their best to maximize the returns of their
principals. Such moral hazard distorts competition with distribution systems that protect copyright, sends wrong signals to the market and misdirects investment.
In broadcast networks, the contents monitor the utility of the system. In
this case, there is little moral hazard attached to content protection within the
system itself. The hazard may come from new digital recording equipment
able to store contents in an open format through the analogue hole, and
from the competition with open architecture systems that promote the
diffusion of P2P files. This competition is forcing broadcasters into a race to roll out DRMs as content utility rises. This is why the launch of HDTV in Europe will be aimed at plugging the analogue hole and forcing consumers to record images in encrypted formats. Another example is the subsidization of set-top-box DVRs to promote content recording through adapted DRMs.
5 Moral hazard occurs in a vertical relation where one party pursues its private interests at the
other's expense. One example of moral hazard is drivers who behave carelessly when they
know that the insurance company will pay for all of the damages. Moral hazard may deter
players from engaging in mutually beneficial transactions. It reduces welfare by blocking such
efficient vertical transactions. Moral hazard is also a source of market failure.
The consequences of moral hazard in internet-based content distribution
are more serious for content that has no alternative digital distribution
channel. Video content is massively distributed through digital broadcast
systems, so it can withstand (even unfair) competition with broadband.
However, digital music depends heavily upon the internet 6 . This is why the
compatibility of music DRMs is such a controversial issue.
Conclusion
The paradox is that imposing DRM interoperability to protect copyright for
cultural goods somehow calls into question the copyright of individual
DRMs. Existing reverse engineering provisions for compatibility do not apply to DRMs: complex reverse engineering processes cannot keep up with the fast pace of renewal of these security tools. Mandatory licensing, mandatory disclosure of DRM interoperability information and public standardization
are the different solutions available to public authorities willing to impose
interoperability on DRMs. Critical issues are then the choice of the players
that will bear the costs of interoperability and the effective security of
interoperable DRMs. The problem can be seen as the internalization of the
negative externalities of incompatibility. The general principle in such cases
is that the beneficiaries of the moral hazard pay the costs of interoperability.
However, the lack of interoperability is not the only source of moral hazard.
Interoperability will not be enough to ensure that copyright is respected or to
achieve fair competition between content distributors. The solution should also involve the containment of illegal P2P networks and the implementation,
probably at the hardware level, of efficient DRM protection able to
discriminate between copyrighted and non copyrighted content. This is the
only way to restore the content monitoring of indirect network effects in open
communication systems.
6 Mobile telephony is indeed an alternative, but fixed broadband networks are far more
convenient.
References
BOMSEL O., CAVE M., LE BLANC G. & NEUMANN H. (2003): "How mobile
termination rates shape the European telecom industry", Cerna, Ecole des Mines de
Paris.
BOMSEL O., LE BLANC G., CHARBONNEL J. & ZAKARIA A. (2004): "Economic
Issues of Content Distribution", Riam Contango Project, Cerna, Ecole des Mines de
Paris.
KATZ M. & SHAPIRO C. (1994): "Systems Competition and Network Effects,"
Journal of Economic Perspectives.
LEIBENSTEIN H. (1950): "Bandwagon, Snob, and Veblen effects in the Theory of
Consumers' Demand", Quarterly Journal of Economics, Vol. 64, no. 2, May 1950.
MATUTES C. & REGIBEAU P. (1988): "Mix and Match: Product Compatibility
Without Network Externalities", Rand Journal of Economics, 19 (2), 219-234.
PICKER R. (2004): "From Edison to the Broadcast Flag: Mechanisms of Consent and
Refusal and the Propertization of Copyright".
ROCHET J.C. & TIROLE J. (2004): "Two-Sided Markets: An Overview".
ROHLFS J. (1974): "A Theory of Interdependent Demand for a Communications
Service," Bell Journal of Economics and Management Science.
ROSENBERG N. (1994): "Inside the blackbox", Cambridge University Press.
VARIAN H.:
- (1998): "Markets for Information Goods".
- (2004): "Copying and Copyright".
WATT R. (2000): Copyright and Economic Theory. Friends or Foes?, Edward Elgar
Publishing.
YU P. (2003): "The Copyright Divide", Michigan State University.
The Role of Public Service Broadcasters
in the Era of Convergence
A Case Study of Televisió de Catalunya
Emili PRADO & David FERNÁNDEZ
Autonomous University of Barcelona, Spain
Abstract: The development of the convergence process has several implications for the reconfiguration of the media landscape. Public service broadcasters have new opportunities to fulfil their public service duties in a new competitive environment, which
involves developing new applications on new platforms. Televisió de Catalunya, the public
service broadcaster (PSB) of Catalonia, has developed a clear strategy in this new
convergent environment, applying its traditional know-how to new interactive and digital
media according to its public mission and getting positive feedback.
Key words: convergence, public service broadcasting, interactive TV, bandwidth, 3G
services, multimedia and digital divide.
Convergence in the television industry is continuing at a steady pace.
Although we are in a period of transition, some of the features that
will characterise the future can clearly be seen through the windows
of today. Convergence represents the second major transition for public
service in Europe. The first transition was the drastic change in its identity
caused by the break-up of monopolies and the establishment of mixed
systems.
The television system has been shaped by the tension between
regulation and technological innovation. From a technological perspective,
television was conditioned in its early days by the limitations of the radio
spectrum and the technical characteristics of transmission by Hertzian
waves. It consequently began its activity with a reduced number of channels
available and with coverage that could be adapted to state borders.
From a regulatory perspective, the medium was modelled according to
the government's general conception of television. As television was an
instrument of extremely high strategic value, the state established the
conditions of its usufruct. The prevailing trend in Europe was followed in
Spain. This involved establishing a model to offer a public television service
COMMUNICATIONS & STRATEGIES, no. 62, 2nd quarter 2006, p. 49.
in the form of a state monopoly. These public companies coordinated one,
two or three channels, depending on the country and the period. Such
channels were guided by three principles: to inform, to educate and to
entertain.
Solid public systems were formed out of these raw materials. Television
took on a highly central role as an ideological apparatus of the state. It
played an important part in processes of socialization, education and political
participation. Television became the backbone of the modern state due to its
functions in creating consensus and cultural homogenization; and in
constructing and defending national identities.
Figure - Europe, distribution of offerings 2004-2005 (%)
[Bar chart, EUROMONITOR data: Fiction 32, Information 27, Info-show 14, Show 9, Game-show 4, Children 5, Sport 4, Others 4]
Source: Euromonitor 1
Technological innovation eliminated the constraints that had prevented
the number of channels from increasing. Television was seen to be big
business on the other side of the Atlantic. Pressure from economic sectors
interested in operating in the television sector consequently led to a process
of deregulation. This brought about the introduction of mixed television
systems in which public and private operators coexisted, or rather,
competed.
1 EUROMONITOR is a permanent observatory of television in Europe. It has been in operation
since 1989 and was set up by a group of European researchers (Paolo Baldi, Ian Connell,
Claus Dieter Rath and Emili Prado) at the request of the VQPT service of the RAI. Until 1995,
its headquarters were in Geneva. In 1995, it relocated to the Autonomous University of
Barcelona where it has been managed by Emili Prado. The team at the headquarters includes
the lecturers Matilde Delgado, Núria García and Gemma Larrègola as researchers. The
observatory undertakes regular reports for the main television operators in Europe and North
America, scientific publications and academic seminars, programming workshops with the
industry and advises the regulatory authorities.
The resulting increase in the number of channels led to a situation of high audience fragmentation - which is not the same as segmentation. This was because the increase in supply under market rules did not lead to greater variety in contents. Instead, in general terms, a high degree of uniformity of offerings was registered, as operators competed with the same weapons. EUROMONITOR data on the
combined offerings of the general-interest, public and private channels in the
five main European television markets (the United Kingdom, France,
Germany, Italy and Spain) is conclusive. Almost three quarters of the
offerings are concentrated in only three macro-genres.
The identity crisis in public service broadcasting began to emerge and
the debate over the role of public television in a mixed model came to the
fore. A short time later, digitalization and the convergence that ensued burst onto the scene, leading to a redefinition of the role of public service in the digital era.
Challenges for public television
in the framework of convergence
Public television had to redefine its role during the first transition from a
monopoly to a mixed system. However, most of the changes were brought
about in what can be called the traditional functions of public service. There
is already ample European doctrine on the mission of public television.
Several organisations (the European Broadcasting Union, the European
Commission, the European Parliament and the Council of Europe) have
issued or promoted reflections and guidelines in this field. Since the nineties,
this reflection has been combined with new requirements, arising from the
prospects presented by digitalization. Some of this doctrine is included in:
the Delors White Paper (1993), the Bangemann Report (1994), the Tongue
Report (1996) and the High Level Group of Experts Report (1998).
Obviously, each transition stage presented different kinds of challenges.
In the first transition, the challenge consisted of defining public television's
function in the context of competition. However, the prevailing model of
television still followed the traditional template, with the channel
predominating. This distribution system involves a flow of contents.
Regulations can be applied to this system, to determine the menu available
for consumption by a regional population. The second transition involved
digitalization, leading to total technological convergence. The television was
to be integrated with "everything digital and the internet". During this
transition the traditional template became blurred. Contents were to
predominate. Users would be able to access stores to obtain the contents
they liked. This represents a move from flow television to stock television. In
other words, users surf to find cultural products - in this case television
products - rather than the products being transmitted directly to the users.
This change in perspective affects the entire communication system.
However, it is most clearly evident in public television. Public television has
to readapt its strategies so that it can continue to fulfil the functions of a
public service, in the context of a much wider range of offerings. In addition,
it now has to readapt to a complete change in the rules of the game. It can
no longer depend on the support of national communication policies
regulating the circulation of products or establishing the characteristics that
such products should have. Moreover, it has to become an active agent in
the communicative context. Now it not only coexists in this context with other
operators from its own market, but also with operators from all markets. The
context not only includes television, but all the other media that make up the
convergent communication system.
MORAGAS & PRADO (2000) drew up a list of the functions that public
television needs to undertake if it aims to bring its democratic legitimacy up
to date in the digital era. Below are the functions that can be considered
updates of traditional functions:
- guaranteeing democracy (and defending pluralism),
- stimulating citizen participation (a political function),
- a cultural function,
- guaranteeing identity,
- ensuring the quality of scheduling and contents,
- an educational role,
- a social and social welfare function,
- maintaining regional balance,
- promoting economic development,
- acting as a driving force for the audiovisual industry,
- providing a source of creative innovation and experimentation,
- a humanist and moralistic function,
- spreading and socializing knowledge.
The functions listed above are fairly conventional. From now on, we need
to add other functions derived from the new situations that have arisen due
to the technological transformation and globalisation of the Information
Society. Among these functions, we can highlight:
• Mediation and credibility in the face of the many available channels
and sources. To play the role of mediator, public television must be
protected from external interference and guarantee its credibility
democratically.
• Guaranteeing universal access for everybody. In a context where
everything is digital and online, there is space for both free access
information and for a sizeable proportion of conditional access. In addition,
the internet does not reach all segments of the population. Public television
should guarantee universal access to important information and major
communication products. Such products should not be exclusively reserved
for users who can pay for them or people with online access.
• Producing information that is socially necessary. In market conditions
the production of socially necessary information is not guaranteed. Instead,
information that is economically viable is produced. Therefore, if we want an
Information Society for everyone, public communication systems need to
produce socially necessary information.
• Acting as a guide and mediator in the face of the wide variety of
information on offer. The user has access to a vast amount of information
that makes it difficult to carry out an effective selection from all of the
programmes and services available. The electronic programme guide has
arisen as a public service function to provide people with the information
needed to make an informed choice.
• Balancing and curbing new communication-telecommunication
oligopolies. The public sector should counterbalance the extraordinary
concentration in the audiovisual system. This concentration is caused by
convergence. In the face of this situation, public television should be a
guarantee of plurality. Thus, public television stations should be financially,
technologically and professionally solid. In addition, they should guarantee
the plurality of their contents and democracy in decision-taking.
• Acting as a driving force in the processes of convergence between the
communication sector and other social sectors, such as: culture, education,
health, social welfare, etc. Public service broadcasters should expand their communication activity beyond television and respond as multimedia communication institutions. Only in this way will they be able to fulfil their mission, rather than being confined to the increasingly narrow space reserved for the medium of television in a context where "everything is digital and on line".
To fulfil these functions, public television needs to organise its activities
to attain at least the following objectives:
• Offer a wide range of varied, high quality programmes that reflect the
common denominator of good taste and provide unbiased information with
educative, cultural or entertaining contents that are of interest to the public.
• Ensure that all types of programmes can reach everybody. In no case
should services and programmes of cultural and national importance be
limited so that they only reach well-off groups.
• Coordinate offerings of programmes that reflect the tastes of both the
majority and the minorities. This will contribute to creating social cohesion,
regional balance and a sense of belonging, particularly among minorities.
• Undertake to have a strong national production base. This will provide
programmes that reflect national values and the near environment better
than foreign products. This will help to contribute to sustaining and
revitalizing national culture and the characteristics of its identity.
Furthermore, it will help to promote the audiovisual sector and the economy.
• Form a complex communicative institution that acts on all available
platforms.
To fulfil these objectives, public television needs to be viewed by a large
enough audience to be able to exercise a social influence, to influence its
competitors by example and to justify the investment that it receives.
The specific case of Televisió de Catalunya
Specificities of the Catalan market and the position
of Televisió de Catalunya
Identity is a key factor in Catalonia. This is reflected in the region's
cultural idiosyncrasies and, in particular, in the region's own language,
Catalan. This language shares both the social and communicative space
with Spanish.
Catalonia is an integral part of the Spanish television market, which has
been established on a national scale since its beginnings in 1956. However,
regulations did not bring about the emergence of Televisió de Catalunya
(TVC) until 1984. In fact, the channel started its experimental transmissions in September 1983, a few months before the passage of a law 2 allowing the creation of public regional broadcasting corporations controlled by Spain's autonomous governments. TVC finally started regular transmissions on January 16th
1984, a few days after the law was approved (PRADO & LARRÈGOLA,
2005). This is a public channel that has coverage in the Autonomous
Community and is broadcast entirely in Catalan. In addition, the public state
television makes regional broadcasts in this language. There are also other
examples of local television, which are mainly in Catalan. The total number
of broadcasts in Catalan currently accounts for less than 30% of the
television market.
Convergence has occurred at the same time as globalisation, which
represents both a considerable challenge and an opportunity for small,
stateless nations like Catalonia. It is a challenge because it leads to a huge
increase in the communicative opportunities of citizens. However, it also
represents an opportunity as the cultural proximity factor, the main asset of
media such as TVC, acquires strategic importance in the sector.
Convergence involves the emergence of new production and
transmission methods, as well as new consumer practices to which the
media have to adapt. In addition, public television must adapt in accordance
with its mission and functions. Its boundaries are extended in the digital era.
All of this is occurring in an environment that is hostile towards public TV due
to pressure from the private sector to increase deregulation; and the
restrictions on public TV's participation in the advertising market.
This process coincides with the transition to a totally digital environment,
the starting point for convergence and for a reconsideration of the role of
public service television. As a result of convergence, this role has broadened
in content and form. New functions have been added and it has been
extended via new platforms. This involves a change of environment, as
some of the initial barriers in traditional analogue television, such as the lack
of radio spectrum, disappear when competing in a market with bigger
audiences.
Catalonia also has a unique characteristic in the framework of the
Spanish TV market. This is the presence of a regulatory authority, the
2 Ley 46/1983, de 26 de Diciembre, reguladora del tercer canal de televisión, Boletín Oficial del
Estado n. 4, de 05.01.1984.
Consell de l'Audiovisual de Catalunya (Catalonia Broadcasting Council), with
broad competences in this territory. This authority was created in 1996 3 and
its remit was reinforced in 2000 4 . This represents the most advanced
example of an independent authority, since there is no national authority and
only a few regions, like Navarre and Andalusia, have a similar institution, although with fewer competences.
The spread of convergence
Convergence today is based on the technological opportunities provided
by digitalization. This is expressed in many ways. Some of these are clearly
reflected in the case of TVC. We should therefore examine the role of
technological convergence in the process of transforming TVC and its parent
company, Corporació Catalana de Ràdio i Televisió (CCRTV). We should
take into account that technological convergence is a necessary but not sufficient condition, because the 'present and the future of any
communication technique depends less on the characteristics of each
particular technique than on a series of different variables including
economic factors (installation cost, subscription cost), political factors (the
degree of intervention by the state as an agent) and social factors (habits
and uses)' (DE MIGUEL, 1993: 54). Technological convergence must
therefore be combined with other perspectives such as economic, regulatory
and social convergence. In the specific case study dealt with in this paper,
our analysis focuses on technological convergence and the convergence of
services, corporate convergence and processes of social convergence that
TVC is involved with in its role as a public service. Finally, it is also
necessary to show the financial framework of all these processes.
Technological convergence and the convergence of services
Four factors are of central importance within the wide field of activities
covered by technological convergence. Digitalization is the first of these
factors, as it affects both internal production processes and signal
transmission. The second factor is the level of interactivity developed in the
3 Llei 8/1996, de 5 de juliol, de regulació de la programació audiovisual distribuïda per cable,
Diari Oficial de la Generalitat de Catalunya n. 2232, de 19.07.96.
4 Llei 2/2000, de 4 de maig, del Consell de l'Audiovisual de Catalunya, Diari Oficial de la
Generalitat de Catalunya n. 3133, de 05.05.00.
different products. The third aspect is the extent of the services' coverage.
This is an essential condition if a service has to reach a considerable
proportion of the population. Finally, bandwidth is an essential condition for
using advanced convergent services.
Digitalization
The digitalization process for television has several different levels. The
first level is the digital production of TV contents, including the filming and
recording of information, and the subsequent editing of this material.
Secondly, digitalization is involved in transmission through the different
platforms available. Finally, the reception of a broadcast is a stage that
involves the user's response to the requirements of the media and of the
electronics industry. We will discount this final stage, in which the role of one
TV network has a secondary - though present - role. This section of the
paper therefore focuses on the digitalization process involved in production
and transmission.
In terms of production, TVC is at an advanced stage of replacing all of its analogue equipment with new digital-based hardware. This
process is occurring at different speeds in most of the television networks in
the main television markets worldwide. Interestingly, TVC is developing its
own integrated system of production and digital archiving. The software for
this system is called DigitionSuite. It was developed by TVC's technological
subsidiary, TVC Multimèdia. After group restructuring, this subsidiary was
renamed Activa Multimèdia Digital. This software combines content
recording, video editing, asset management, archives and play-out all in a
digital format and in one programme. TVC therefore works with its own
production software. It has also marketed this software to other companies
in the sector such as IB3, the Balearic Islands' regional television station. In
this way TVC has also promoted technological transfer.
Due to digitalization, which made convergence possible, there are now
many technological platforms for transmitting audiovisual signals. Catalan
public television was one of the first suppliers of thematic channels for the
two digital satellite channels, Canal Satélite Digital and Vía Digital, which
appeared in the Spanish market in 1997. These thematic channels arose in
the context of an emblematic business project for the development of the
Information Society in Catalonia: Media Park. This is a benchmark
audiovisual production centre in the region. TVC has provided funding, and
acted as a partner and content company for Media Park since it was
founded in February 1996. Originally, TVC had a 17.59% share in Media
Park.
In this respect, TVC has always maintained the importance of contents
as a key element in the framework of convergence. It demonstrated this
stance in its clear strategic positioning during the "football war" between
Spanish television networks from 1996 to 1997. This battle was fought to
acquire the rights to broadcast the Spanish football league matches, which
are a guaranteed way of increasing audience share. TVC ended up
controlling 20% of shares in the company Audiovisual Sport, which ultimately
became the manager of these property rights. Other shareholders included:
Sogecable, the owner of Canal Satélite Digital; and Antena 3 Televisión, one
of the Spanish free-to-air private television networks. In the end, this gave
TVC a notable position in controlling a key content. However, it also
represented a sharp drop in the company's value, when the market prices of
football rights were rationalized after hyperinflation in Spain at the end of the
1990s.
TVC made its first Digital Terrestrial Television (DTT) broadcast in 1998,
despite the lack of adequate DTT set-top boxes in the market. In 2002, it
moved onto a simulcast stage, combined with the analogue transmission required by the Spanish law on DTT introduction. When the pay platform Quiero TV went bankrupt, in circumstances similar to those of On Digital in the UK, Spanish DTT was left in limbo. Only about 200,000 DTT set-top boxes
existed, which Quiero TV had distributed. However, boxes could not be
found in the shops, even though the law required operators to begin their
DTT broadcasts using this technology. Within six months of On Digital going
bust in the UK, the BBC led the launch of a new platform called Freeview.
Spain, on the contrary, lacked the response and leadership of such a strong
national public service broadcaster. The DTT situation in Spain at this time
involved some simulcast broadcasts of the programs transmitted in
analogue. These had very little audience potential. In this context, TVC, as a
public service broadcaster, assumed part of the responsibility for DTT and
headed the project TDT Micromercats (DTT Micromarkets). Between 2003
and 2004, this project analysed a series of new technological services, both
for contents and interactive services, through a pilot test carried out with 70
families in the Barcelona area.
The central Spanish government gave impetus to DTT when it approved
the Plan de Impulso de la Televisión Digital Terrestre (Plan to
Promote Digital Terrestrial Television) in June 2005 5 . This plan included
distributing the frequencies that were originally attributed to Quiero TV
among the current operators of analogue television. In addition, the
analogue "switch off" was brought forward from 2012 to 2010. Towards the
end of 2005, the operators began to offer new contents. This coincided with
Christmas sales campaigns, in which DTT set-top boxes were one of the top
presents. By the beginning of 2006, there were around a million units in
Spain 6 . In this context, TVC started up a new broadcast, Canal 300, which
uses its archive of contents, adding to its analogue offerings: the general-interest TV3; K3/33, a complementary channel combining programmes for children and young people with cultural and sports programmes; and 3/24, a 24-hour news channel. Thus it has made
an effort to develop new contents. The regional private multiplex concessionaire, the Godó group, has not made a similar effort. It only
broadcasts the programmes from its analogue channel, City TV, in simulcast
and consequently only uses 25% of its concession.
However, public Catalan television has not stopped at the traditional
concept of television. Early on, it established its presence on the internet.
The first TVC web site was set up in 1995 (www.tvcatalunya.com). This was
one of the first Spanish state media to become established on the Net.
Subsequently, the project was reformulated. From 2002, its objectives and
approaches were expanded. Specific portals were created for the following
areas: news (www.telenoticies.com), sport (www.elsesports.net), music
(www.ritmes.net), and children's content (www.3xl.net). The main framework
was kept for the rest of the network's contents. This involved incorporating
both audio and video format contents. Advantage was taken of the synergies
created by the integration of the online newsroom in the workflow of the
public television and radio newsroom (FRANQUET et al., 2006). Most of
these contents have been available online since 2004, through the service '3 a la
carta' (www.3alacarta.com). This service won the Promax 2005 World Gold
Award for the best interactive service of a television channel.
After transferring the experience from the television to a web
environment, another step forward was taken. This involved integrating
5 Ley 10/2005, de 14 de junio, de medidas urgentes de impulso de la televisión digital terrestre,
de liberalización de la televisión por cable y de fomento del pluralismo, Boletín Oficial del
Estado n. 142, de 15.06.2005.
6 "Vendidos ya en España un millón de descodificadores para la TDT" (A million DTT decoders already sold in Spain), La Vanguardia Digital, February 21st 2006. See:
http://www.lavanguardia.es/web/20060221/51234268973.html
these and television services with the mobile phone through an SMS alert
service. After this first contact with the mobile telephone, the next step was
natural: to adapt and develop contents for third generation mobile
telephones. Such contents were developed from August 2004 through the
mobile operator Amena, which offers the programming for TVC's main
channel, TV3, live to its 3G telephone clients. TVC is the first television
operator to offer this service in Spain. This experience is due to be
expanded on in 2006 with a new project that has the following partners: the
telecommunications operator Tradia, the technological company Nokia, and
Telefónica Móviles, the leading mobile phone operator in the Spanish
market. In addition, TVC plans to develop interactive applications using this
platform. It also plans to adapt the on demand internet video service '3 a la
carta' for third generation mobile platforms.
Interactivity
The development of interactive systems that can be applied to television
has also been of interest from the outset; since the creation of TVC
Multimèdia in 1998. TVC Multimèdia has been functioning under the CCRTV
Interactiva (CCRTVi) umbrella since 2001. It was specifically created to
develop new distribution media. The objective was to ensure: "the presence
and competitiveness of the Corporació Catalana de Ràdio i Televisió, as well
as its content and mission, in the new interactive media in existence today.
These interactive media include: the internet, interactive digital television
channels, teletext, mobile telephones and all those media that technology
will allow in the future" (CCRTV, 2002: 101). From the beginning, the
corporation committed itself to applying interactive systems to its first
thematic channels, which are described above. This occurred as these
channels were being created. TVC differentiates between three lines of
activity in this field. The first involves interactive meteorology services. The
second involves those services that fall under Automatic TV and the third
business includes remaining interactive television services.
The meteorological services are developing applications that are both
multiplatform (TV, Internet, WAP, SMS, PDA and UMTS) and multi-format
(Flash, Real, Windows Media and Quicktime). Such applications are used in
connection with a climatic database from which the service's own digitalised
file is created, containing data obtained via satellite. Thanks to the European
Union's MLIS project, these services are available in eight different
languages. TVC Multimèdia participated in this project, alongside partners
such as TV Cabo, World On Line, Weather World Prod. and Alice Prod.
(PETIT & ROSÉS, 2003).
Automatic TV is a hardware and software platform that automates content publication at television head-ends, including the SMS messages that the channel receives from viewers. Automatic TV created the
first interactive television experience on channel TVRL in Lausanne,
Switzerland.
Other interactive television services involve the conceptualisation,
development, editing and maintenance of content. In contrast to the majority
of companies in this field, TVC Multimèdia is part of a television group. This
gives it a greater degree of knowledge about the medium in which its
applications will subsequently be implemented. Its projects include electronic
programme guides (EPG), TV sites and interactive thematic channels.
The company has also developed its own interactive service, HandData.
This is a managing and broadcasting system for digital television. This
software has been sold to other operators and has therefore contributed to
obtaining financial returns. TVC's early and determined commitment to
interactivity is not comparable to that of the British giant, BSkyB. However,
generally speaking, its commitment has been pioneering and noteworthy in
the realm of Spanish television, which even today appears to shy away from
the development of added-value services. This enterprising spirit has aided
the acquisition of know-how, which has enabled these services to be
supplied to other national and international telecommunication companies.
National companies include: Vía Digital, Canal Satélite Digital; Digital+, a
product of the merger between these two companies; the cable platform
Ono; and TV Castilla La Mancha. International companies include: Canal
Plus Technologies in France, Mediaset in Italy, and NTV or TV Cabo in
Portugal, for whom it developed a thematic meteorological channel with
related interactive services. This is a field in which it has acquired extensive
experience and in which it developed its latest project, Sam, a virtual
weather reporter (www.meteosam.com). This virtual meteorologist is already
being shown on television, the internet and the mobile telephone services of
companies such as Telefónica Móviles.
Since 2002, the commitment to interactive applications within the limits of
DTT, in the framework of the TDT Micromercats project, has increased. The
first applications for this platform have been introduced. These include:
informative tickers, navigation bars, online chatting, competitions and on-screen, navigable news and weather services. All these applications have
been developed on Multimedia Home Platform (MHP). This platform was
accepted in Spain as the standard protocol for interactive applications by
virtue of an agreement among the main analogue TV companies and the
most important equipment manufacturers in February, 2002. Motivated by a
desire to cover all of the market's technological niches, TVC Multimèdia has
also developed applications for other technological platforms, such as: Open
TV, Media Highway, Liberate or Microsoft TV. Additionally, the company is
an active participant in groups that develop these systems, including MHP
Implementers Group or Liberate Pop TV.
After this pilot project and following the momentum caused by DTT
broadcasting since the end of 2005, TVC has already begun to offer certain
services that were tested in the pilot study on its digital channel, TV3. These
services include both autonomous interactive services, and interactive
services related to high-interest programmes, such as broadcasting the
games of F.C. Barcelona, Catalonia's flagship football team. The objective is
to make these new, value-added services attractive and well-known.
Penetration
Televisió de Catalunya was founded to be a market leader and in order to
attain its public service objectives. It has been largely successful in this
endeavour. As regards public service, the objective of accumulating the
highest audience share is not the only leitmotiv in TVC's development and
programming strategies, although its programming is quite competitive. This
is demonstrated by the fact that its general-interest channel, TV3, had the
second highest audience share in the region in 2005, pulling in a 20.3%
share. This was only one point behind the national private television
channel, Telecinco, which has double the budget of the public channel, and
is in line with that of other national broadcasters. The group of TVC channels
with free-to-air broadcasting has a 25.7% share. Its market penetration is
therefore quite significant in such a competitive market, which includes four
national, free broadcast channels. Another channel, Cuatro, the old
analogue Canal+, now offers free-to-air broadcasts. There is also an
autonomous private channel operator, City TV, based on the Italian
deregulation model, i.e. broadcasting over a network of various local
frequencies; more than one hundred local broadcasters throughout Catalonia;
and competition from digital platforms delivered via satellite, cable and ADSL.
As far as the internet is concerned, TVC and its various brands have a
solid and indisputable position in the Catalan-speaking market. In addition to
having the highest market share, TVC is also the leader in product
development (FRANQUET et al., 2006), adding content leadership to
technological improvements.
E. PRADO & D. FERNÁNDEZ
Bandwidth
TVC's role in bandwidth development in Catalonia is secondary, as it is
not a telecommunications company. Nevertheless, it is important to highlight
its participation in bandwidth development projects. One of these is i2Cat, a
pilot project promoted by the Catalan autonomous government on the
Internet2 network. TVC has become involved in the development and testing
of advanced applications and high-quality audiovisual content across the
latest generation of networks, through which it even broadcasts in
high-definition. One of the most noteworthy subprojects within the i2Cat project is
Dexvio. This is an experimental gateway that unites television through the
Internet, with material from TVC and CCRTV Interactiva among others, and
shared viewing spaces (ALCOBER, MARTIN & SERRA, 2003).
Some of the services described above also promote internet bandwidth.
One of these is the video on demand (VOD) '3 a la carta' application, whose
expansion to IP TV services is being considered. TVC's offer for the
Microsoft Windows Media Center services should also be mentioned as one
of the most recent company applications.
Corporate convergence
The process of corporate convergence follows the logic of services
convergence. A competitive position, offering essentially similar services or
products in the same market, leads to convergence among different players.
Players are expanding constantly due to this external growth. Thus,
corporate convergence refers to alliances and unions between companies,
by means of different cooperation processes, which can take different forms:
vertical integration, horizontal integration, multimedia integration, acquisition,
mergers, alliances, joint-ventures, etc. The concept can also be applied to
the processes of organic diversification within a firm, broadening its field of
action within the new convergent space.
This phenomenon is frequently claimed to be new. However, historically,
'the first characteristic of innovation in the communication field is that it is
located at the crossroads of many industrial activities' (FLICHY, 1980: 31).
This has been demonstrated by the history of the first cultural industries, like
cinema, radio or record companies.
Convergence, in the form of alliances, has played an important role in
TVC's development strategy. The Federación de Organizaciones de Radio y
Televisión Autonómicas (FORTA - Federation of Autonomous Radio and
Television Organisations) is at the heart of strategic alliances. This
federation groups together other Spanish regional channels and currently
has eleven associated regional television stations that broadcast to over
37½ million Spaniards. According to the latest official statistics 7 , this
represents 85.2% of the country's population. The group's objectives are to
achieve economies of scale that are not possible in their smaller regional
markets. This enables them to compete with national television stations. The
federation's work has included joint production projects, such as TV-movies,
the sale of advertising packages, and, above all, the acquisition of
audiovisual rights, especially broadcasting rights for the football league, the
indisputable king of sports in Spain.
Moreover, national empathy with the other two historical Spanish
communities, Galicia and the Basque Country, was reflected in the
strengthening of an alliance with the autonomous channels in those regions
and the start-up of the satellite channel Galeusca in 1996. This channel was
distributed via PanamSat to Latin America, a historic destination for many
Spanish immigrants. The channel included content in the languages of
Catalonia, Galicia and the Basque Country and selected material from the
programme schedules of the three autonomous, public television stations.
Nevertheless, political interests redirected the project to the broadcasting of
one of TVC's own channels, TVC Sat, through Vía Digital, a platform that
uses the Hispasat satellite and thus enables it to reach the same target. The
TVC Sat experience was also repeated with TVC Internacional, aimed at the
Catalan communities in Europe and transmitted through the Astra satellite.
This channel subsequently joined the offering of the Canal Satélite Digital
platform using the same satellite.
Other alliances that were developed with the private sector made a
fully-fledged entry into the convergence field. One example of this is the case of
Vía Digital, the Spanish satellite platform created in 1997 with the backing of
the leading telecommunications operator, Telefónica. Public Catalan
television initially had a 5% share in this platform. It also contributed various
television channels produced directly or through Media Park, a company that
also had shares in the platform. This contribution included various channels:
Teletiempo, the thematic weather channel; Club Super3, the children's
channel; and TVC Sat, previously mentioned. This was a very controversial
7 Information taken from the National Statistics Institute, pertaining to January 1st 2005 and
consulted on March 11th 2006. See:
http://www.ine.es/inebase/cgi/um?M=%2Ft20%2Fe260%2Fa2005%2F&O=pcaxis&N=&L=0
step at the time given that, in spite of its public status, the company had
opted to position itself, just like other public broadcasters such as TVE,
the State's public television, as one of the two contenders in the satellite battle.
TVC later pulled out of this project by selling its shares, resulting in a heavy
capital loss.
On the other hand, its positioning with Vía Digital did not hinder its
collaboration with Canal Satélite Digital, the competing digital platform. In
addition to TVC Internacional, TVC produced the following programmes for
Canal Satélite Digital: Sputnik, the music channel, Canal Méteo, the weather
channel, and Fútbol Mundial, a thematic football channel produced from the
Media Park facilities in collaboration with Canal+. The Sogecable group,
controlled equally by Prisa, Spain's largest media group, and by the French
group Vivendi, headed the Canal Satélite Digital platform.
In any case, what stands out from all these alliances is that TVC fully
maintained its position in the value chain. It did not want to move into
other links of the chain through convergence, as it was convinced that
its know-how lay in content production and management. Content had been
highlighted by the Green Paper on the Convergence of Telecommunications,
Media and Information Technology Sectors, and the Implications for
Regulation as one of the bottlenecks in the process due to its scarcity. In
fact, TVC's only incursions into outside fields have been in the area of
technology, where technological systems and applications were created by
its subsidiary, TVC Multimèdia, when the market was unable to meet
company needs. Thus, this expansion of the original core business, brought
about by circumstances, occurred in a timely fashion. The
company's management has promoted organic growth, avoiding corporate
culture management problems similar to those seen in other cases, including
the paradigm of AOL-Time Warner (GERSHON & ALHASSAN, 2003).
Social convergence
Social convergence mainly consists of a model for applying the
Information Society's policies, which are usually gathered under the label of
the fight against the digital divide. This is one aspect of convergence that
private participants tend to leave in the hands of the authorities. By
extension, public broadcasting services are also involved in this process of
social convergence.
As a public service, TVC promotes public access in the Catalan market to
new converging services. It creates content for a familiar public and offers
programmes in Catalan, whereas private initiatives in Catalan are usually
stifled by market limitations. It does this with its own vision and from a
Catalan perspective on world events, while also featuring information about
the Catalan reality. It has consequently become an online social reference,
stimulating the creation of communities such as the one it has brought
together around its various interactive services. These communities have
over 600,000 subscribers, representing almost 10% of the Catalan
population and about 30% of the region's entire online community.
From the individual perspective of citizenship, the web - which is seen as
a new market by the economists and firms involved - takes the form of a new
social space for connection and public expression. In this sense, one of the
main characteristics of social convergence is the de-construction of
communities, which are normally de-territorialized. Although the role of
infrastructure is essential in a virtual community, it is important to remember
that 'technological features do not ensure effective communication, and
technological connection alone does not create a community' (HARASIM,
1993). In this sense, it seems that the feeling of community also develops
differently in cyberspace, building on the pre-existing community based on
physical proximity (GOCHENOUR, 2006).
The creation of appealing content that transfers this ability to attract to
one's own technological infrastructure is one of the ways in which TVC, as a
public service, attempts to fight the digital gap. Its motivation lies in the fact
that "the absence of online public services offering content and services that
are viewed as useful by citizens will continue to limit the diffusion and
adoption of the Web as a widespread household tool" (FRANQUET et al.,
2006). The group's latest internet development, És a dir (http://esadir.com/),
is part of this effort. És a dir is a translation and consultation tool for the
Catalan language that helps users understand much of the Web's content,
which is primarily in English. Nor has TVC abandoned the promotion of all
the converging tools that are part of the Information Society project via its
traditional analogue transmission platform. One of the flagships of this
project was the Cataluny@XXI programme. Manuel Castells, the prestigious
sociologist whose trilogy The Information Age is key to understanding the
network society, was the programme's consultant and a member of its team.
In the course of its 20+ year existence, TVC has acquired extensive
expertise in managing linguistic aspects of its content. It has consequently
become a benchmark for the audience that uses Catalan as its reference
language. This community is not just limited to Catalonia, but extends to
some areas of neighbouring Spanish regions, Andorra and the south of
France.
Financial divergence
This panorama has its Achilles heel in the financial situation of the
company. Since its creation, the financing model of TVC has emulated that
of Televisión Española (TVE), the former state monopoly. Revenues come
from two main sources: public funding and advertising revenue. The latter
has been in decline since the introduction of private national channels in
the early 1990s and the rising competition from satellite and cable TV. In a
political framework of budget adjustment, public funding did not grow
sufficiently. Together, these factors led to a multiplying deficit. By the end
of 2004, TVC's debt amounted to over EUR 923 million, the highest among
Spanish autonomous broadcasters. This debt is guaranteed by the autonomous
government.
The public Catalan TV has responded positively to its new public service
functions in the framework of the information society. However, its financing
model has remained frozen since its origins, and has failed to adapt to the
structural changes in such a dynamic sector. This situation has created a
vicious circle in terms of rising debt levels since the financial costs of
borrowing are continuously increasing the debt, independent of TVC's
management skills. This vicious circle requires public intervention to
eradicate the debt and establish a sustainable financing model based on a
contract-program. Resources have not grown at the same level as the
expenses derived from the new functions assumed in the digital
environment, a new source of debt for CCRTV, despite the revenues
generated by new segments of activity.
Conclusions
TVC's experience shows that public television can respond to the
challenges arising from convergence. Given its constraints, its response can
only be partial, conditioned by the regulatory situation that determines its
room for manoeuvre. Reforms to this legislation, such as the
reform of the regulatory authority, the Consell de l'Audiovisual de Catalunya;
or the creation of a Ley General del Audiovisual (General Audiovisual Law)
that puts together all the fragmented legislative texts, represent a step
forward. However, the law for reforming the Corporació Catalana de Ràdio i
Televisió is still pending. This law should provide public television with a
mission that is adapted to the new challenges of convergence, a funding
system that is sufficient and sustainable, a programme contract that details
the objectives over a multi-annual timeframe, and an organisational system that
protects public television's political independence. The new challenges can
only be effectively addressed in this way.
In the absence of this normative framework, the activity carried out on the
converging front can be described as very positive. TVC has addressed the
need to obtain a massive audience, with scheduling that in general terms
reflects the parameters involved in quality television. Its offering has
diversified to serve specific audiences. Thematic channels have been
created to ensure the presence of public options within the proliferation of
specialised channels. It has been a pioneer in software and hardware
development, originally in response to new converging demands. In turn, this
has enabled it to obtain new sources of funding from selling these
applications to other companies. TVC has also been a pioneer in adopting
digital terrestrial television technology. It has a proactive attitude to R&D and
has implemented experimentation that helps it to design efficient services
within the potential of this technology, and possibly to contribute to the
non-discriminatory socialization of the Information Society's services. It began to
undertake activities dedicated to internet applications, with and on the Net,
from an early stage. In short, TVC has acted decisively in the converging
industrial context, so as not to be left on the fringes of major concentration
deals. It has participated in some of these operations to protect its interests
as a public television broadcaster.
This public condition has allowed the CCRTV to play a pioneering role in
this new process of convergence. In terms of the negative implications of
TVC's transition to the digital era, it seems mandatory to mention the
corporation's financial situation. Some of its high debt is part of the holding's
attempts to promote the Information Society and push the audiovisual
industry forward (creating a social benefit that cannot be demanded of any
private firm) and demonstrates the necessity of the public TV broadcaster
as a driving force in the first steps of the process. However, this also calls
into question its long-term development model. It is also the main reason for
lobbying the autonomous government to pass a law designed to establish a
definitive financing model to fulfil the functions of a public service.
References
ALCOBER J., MARTIN R.M. & SERRA A. (2003): "Internet2 a Catalunya: la Internet
del vídeo", Quaderns del CAC, 15, p. 27-32.
CCRTV:
- (2002): Informe Annual 2001, Barcelona: CCRTV
[http://www.ccrtv.com/doc/informe_2001.PDF]
- (2005): Pla d'activitats 2006, Barcelona: Direcció General CCRTV
[http://www.ccrtv.com/pdf/PlaActivitats2006.pdf]
DE MIGUEL J.C. (1993): Los grupos multimedia. Estructuras y estrategias en los
medios europeos, Barcelona: Bosch.
EUROPEAN COMMISSION (1997): Green Paper on the convergence of the
telecommunications, media and information technology sectors, and the implications
for Regulation - Towards an information society approach, Brussels: European
Commission.
FLICHY P. (1980): Les industries de l'imaginaire: pour une analyse économique des
media, Paris: Institut National de l'Audiovisuel.
FRANQUET R., RIBES X., SOTO M.T. & FERNÁNDEZ D. (2006): Assalt a la Xarxa.
La batalla decisiva dels mitjans de comunicació online en català, Barcelona: Col·legi
de Periodistes de Catalunya (forthcoming).
GERSHON R.A. & ALHASSAN A.D. (2003): "AOL/Time Warner and WorldCom:
Corporate Governance and the Effects of the Deregulation Paradox", paper
presented at the 53rd Annual International Communication Association (ICA)
Conference, San Diego, May 26th.
GOCHENOUR P.H. (2006): "Distributed communities and nodal subjects", New
Media & Society, 8(1), p. 33-51.
HARASIM L.M. (1993): "Networlds: Networks as Social Space", L.M. Harasim (Ed.)
Global Networks. Computers and International Communication, Cambridge, Mass:
The MIT Press.
MORAGAS M. & PRADO E. (2000): La televisió pública a l'era digital, Barcelona:
Pòrtic.
PETIT M. & ROSÉS J. (2003): "TVC Multimèdia, pol d'innovació en la producció
audiovisual", Quaderns del CAC, 15, p. 21-26.
PRADO E. & FRANQUET R. (Eds) (2006): Televisió interactiva. Simbiosi tecnològica
i sistemes d'interacció amb la televisió, Barcelona: Consell de l'Audiovisual de
Catalunya.
PRADO E. & LARREGOLA G. (2005): "TV3: una televisión de calidad y audiencias
masivas", in Pérez Ornia J.R. (Ed): El Anuario de la Televisión 2005, Madrid: GECA.
Traditional paradigms for new services?
The Commission Proposal for an 'Audiovisual Media Services Directive'
Alexander SCHEUER
Attorney-at-law (Rechtsanwalt), Managing Director of the Institute of European
Media Law (EMR), Saarbrücken/Brussels
Abstract: For over 10 years the European Community has strived to develop suitable and
proportionate answers to the phenomenon of convergence in its audiovisual regulatory
policy. This article outlines the regulatory process at an EU level since the early 1980s as
far as media, telecommunications and Information Society services are concerned, and
analyses some of the most relevant policy papers specifically related to the adoption of
the EC legal framework for the media in the digital age, before focusing on the preparatory
phase leading up to the adoption of the Commission proposal for a Directive on
"Audiovisual Media Services", issued in December 2005. In addition, the core of this
proposal for a revised "Television without Frontiers" Directive, i.e. the extension of its
scope to cover new media services provided in a non-linear manner and the introduction
of a graduated regime of regulation with a lighter-touch approach in view of such services,
is presented along with the main lines of debate among stakeholders.
Key words: Convergence, digital television, new audiovisual media services, EU media
regulatory policy, revision of TWF Directive, electronic communications, broadcasting.
"Nul vent fait pour celuy qui n'a point de port destiné."
Michel de MONTAIGNE, Les Essais – Livre II (1580), Chapitre I,
"De l'inconstance de nos actions"
"Lumos!"
J.K. ROWLING, Harry Potter
For the ICT industries, convergence has, for a number of years, not
just meant something, but everything, an impression that was
especially strong at the end of the 1990s. Today, we are finally
witnessing the market launch of a number of new services in the audiovisual
domain, or, at least, the establishing of new business models for services
that were mostly already available. What makes such developments both
interesting and important, not least from a media policy perspective, is the
fact that they might be regarded as a point of crystallisation of different
aspects of convergence. From a technological angle, the arrival of
high-capacity broadband digital subscriber lines and the upgrading of mobile
COMMUNICATIONS & STRATEGIES, no. 62, 2nd quarter 2006, p. 71.
networks to 2.5G and 3G have made it possible to use even more digital
networks for a number of services, including audiovisual media, besides the
existing networks, i.e. digital cable, satellite and terrestrial (van LOON,
2004). This, to a large extent, goes hand in hand with the availability of multifunctional terminal equipment that can be used for either of the traditionally
separate activities, i.e. for communications/information and media purposes.
The "e" and "m-families" can be taken as examples: eCommerce, eCinema,
eLearning, mCommerce, mobile media, etc. For users in the UK, France or
Italy, for instance, it is a reality that different providers of Video-on-Demand
services (VoD) are available, offering digital films and series libraries and
extending their offerings both in terms of quantity and genres (BERGER &
SCHOENTHAL, 2005).
Against this background, the present article firstly aims to outline the
starting point for the European Commission and how subsequent steps
looked at the time they were taken, as well as the Commission's audiovisual
regulatory policy and its approach to handling the phenomenon of
convergence. The article will then move on to describe the actual Proposal
for a Directive on "Audiovisual Media Services", particularly with a view to
the rules foreseen for "new services."
European media, telecommunications
and eCommerce law and policy
General background
European media policy was born in the early 1980s, mainly in response
to the imminent launch of cable and satellite broadcasting networks. As in a
parallel process at the Council of Europe level, this development led to the
adoption of a legal instrument enabling the transmission of television
programmes on a pan-European scale, i.e. the Directive "Television without
frontiers" (TWFD) of 1989 (EEC, 1989). Efforts by the European (Economic)
Community to liberalise the telecommunications market took as a starting
point the divergence of national standards regarding terminal equipment,
namely the telephone, mainly supplied by state enterprises under their
monopoly in the telecommunications and postal sector at the time. This led
to the adoption, by the Commission of the European Communities, of
Directive 88/301/EEC in 1988 (EEC, 1988). Steps aimed at liberalising
services were soon to follow, bringing an end to the monopolies of the
incumbent providers, through the enactment of Directive 90/387/EEC by the
Council (EEC, 1990). Subsequently, after this first wave of liberalising
measures, taken in an "analogue environment", regulatory policy was faced
with the advent of digital technologies such as ISDN (EC, 1994).
In the mid-1990s, when the first review of the TWFD was underway, the
Commission prepared for the publication of a Green Paper on Convergence,
adopted at the end of 1997 (Commission, 1997). With new services already
a reality, particularly the internet and digital carriage media, both of which
combined text, graphics, video and audio ("multimedia"), the need was felt
to discuss the conclusions to be drawn from the technical convergence
induced by digitalisation and, more specifically, its impact on regulatory
policies. Since the results of this discussion process, let alone new
legislation, were not immediately apparent, we shall first look at the key
features of the legislation current and forthcoming at that time, before
returning to the next wave of essentially telecommunications-based
legislation, passed in 2002.
According to Directive 97/36/EC, which amended the TWFD in 1997 (EC,
1997), an approach was maintained which, in some ways, was seen as
technologically neutral: both analogue and digital transmission of television
broadcasting were covered and the provisions were applicable regardless of
the transmission network used 1 . However, its scope of application was
restricted to television programmes directed at the public, meaning that the
service must essentially be conveyed via a point-to-multipoint transmission
(ECJ, 2005), and communications services on
individual demand were explicitly excluded. This distinction, however, was
questioned during the legislative process, particularly by the European
Parliament, which considered that Video-on-Demand services should also
be covered by the Directive's rules; the question whether TWFD should
apply to broadcasting over the internet was left without any explicit answer 2 .
1 Art. 1 lit. a) reads: "(a) 'television broadcasting' means the initial transmission by wire or over
the air, including that by satellite, in unencoded or encoded form, of television programmes
intended for reception by the public. It includes the communication of programmes between
undertakings with a view to their being relayed to the public. It does not include communication
services providing items of information or other messages on individual demand such as
telecopying, electronic data banks and other similar services;"
2 Similarly, at a later stage, the decision whether the TWFD or the eCommerce Directive should
apply to "broadcasting over the Internet" was not taken formally: the explanatory memorandum
accompanying the Commission's proposal for the eCommerce Directive stated that 'simulcast'
was to be covered by the TWFD.
Shortly afterwards, the differentiation between broadcasting and on-demand
services was reinforced, when the so-called "Technical Standards
Transparency" Directive was amended, in particular by Directive 98/48/EC
(EC, 1998). This time, a definition of "Information Society services" was
introduced, in Art. 1 point 2, which provides:
"'service', any Information Society service, that is to say, any service
normally provided for remuneration, at a distance, by electronic means
and at the individual request of a recipient of services.
For the purposes of this definition:
- 'at a distance' means that the service is provided without the parties
being simultaneously present,
- 'by electronic means' means that the service is sent initially and
received at its destination by means of electronic equipment for the
processing (including digital compression) and storage of data, and
entirely transmitted, conveyed and received by wire, by radio, by
optical means or by other electromagnetic means,
- 'at the individual request of a recipient of services' means that the
service is provided through the transmission of data on individual
request.
An indicative list of services not covered by this definition is set out in
Annex V.
This Directive shall not apply to:
- radio broadcasting services,
- television broadcasting services covered by point (a) of Article 1 of
Directive 89/552/EEC."
See also Annex V 'Indicative list of services not covered by the second
subparagraph of point 2 of Article 1':
"[...] 3. Services not supplied 'at the individual request of a recipient of
services'
Services provided by transmitting data without individual demand for
simultaneous reception by an unlimited number of individual receivers
(point to multipoint transmission):
(a) television broadcasting services (including near-video on-demand
services), covered by point (a) of Article 1 of Directive 89/552/EEC;
(b) radio broadcasting services;
(c) (televised) teletext."
In this case, the respective services are characterised by the fact that
they are provided at a distance, by electronic means and at the individual
request of a recipient of services. This definition subsequently also formed
the basis for determining the scope of application of the eCommerce
Directive, to be enacted in 2000 (EC, 2000). In the telecommunications
sector, liberalisation was then brought forward with the so-called "1998
package." This legislative framework was primarily designed to manage the
transition from monopoly to competition and was therefore focused on the
creation of a competitive market and the rights of new entrants.
The question to be raised in this context is: what was the motivation for
the European Community to follow these regulatory paths? At this point, it is
useful to come back to the "Convergence Green Paper" of 1997, and its
successor, the Communication on the follow-up to the consultation process
initiated by it, issued by the European Commission in 1999 (Commission,
1999a).
At the end of 1997, the same year the first revision of the TWFD had
been finalised, the Commission presented a Green Paper proposing several
ways of reacting to the challenges posed to regulatory policy by digitalisation
and convergence at a European level. The first option consisted of building
on current structures, i.e. developing future regulation along the existing
instruments and extending them prudently to new services where required.
The second option was to develop a separate regulatory model for new
activities, to co-exist with existing telecommunications and broadcasting
legislation; and the third was to progressively introduce a new regulatory
model to cover the whole range of existing and new services. According to
the Commission, the key messages emerging from the consultation held on
the basis of the Green Paper, as summarised in the March 1999
Communication, included:
"Separation of transport and content regulation, with recognition of the
links between them for possible competition problems. This implies a
more horizontal approach to regulation with:
- Homogenous treatment of all transport network infrastructure
and associated services, irrespective of the types of services carried;
- A need to ensure that content regulation is in accordance with
the specific characteristics of given content services, and with the
public policy objectives associated with those services;
- A need to ensure that content regulation addresses the
specificity of the audiovisual sector, in particular through a vertical
approach where necessary, building on current structures;
- Application of an appropriate regulatory regime to new services,
recognising the uncertainties of the marketplace and the need for the
large initial investments involved in their launch while at the same time
maintaining adequate consumer safeguard." (emphasis added)
The aforementioned eCommerce Directive, proposed by the Commission
in February 1999, was the first concrete example of implementation of the
No. 62, 2nd Q. 2006
guidelines given in the Convergence Communication, as it opted for a
sector-specific approach to Information Society services.
Afterwards, in view of rapidly changing technologies, convergence and
the new challenges of liberalised markets, the need was perceived to enact
a single, coherent new framework that would cover the whole range of
electronic communications. Building on the 1999 Review and intense debate
among European institutions, Member States, regulatory authorities and
stakeholders, the legislator adopted the so-called "2002 regulatory
framework" covering electronic communications networks and services. This
is concerned with the carriage and provision of signals, but is explicitly not
applicable to the content conveyed via such services (EC, 2002) 3. The
package of Directives on electronic communications of 2002 is intended to
ensure technological neutrality, i.e. irrespective of the former "nature" of a
given network – in the past, the telephone lines used for voice telephony, the
cable networks installed in order to convey broadcasting programmes, both
on an exclusive basis – all networks and, accordingly, all services provided
over them (except for those referred to above) should be treated identically.
Graph 1 – Regulation of networks and services according to EC law
3 Art. 2 lit. c) reads: "'electronic communications service' [...] exclude services providing, or
exercising editorial control over, content transmitted using electronic communications networks
and services; it does not include information society services, as defined in Article 1 of Directive
98/34/EC, which do not consist wholly or mainly in the conveyance of signals on electronic
communications networks;".
A. SCHEUER
This leads to the current status of EC legislation in the media, electronic
communications and Information Society sectors. There is a layer of
regulation on infrastructure, applicable to electronic communications
networks, and a layer of regulation covering services, across the different
sectors. However, where content-related offers like broadcast programmes
are at hand and in the case of Information Society services, the regulation
on electronic communications services is not to be applied (graph 1).
Graph 2 - Distinction between different "content" services according to EC law
After painting a picture of European regulation in the media and ICT
sectors, and explaining the current differentiation, in the Community's
acquis, between the regulation of electronic communications and content-based services, on the one hand, and the distinction made in the latter field
between television broadcasting services and Information Society or
eCommerce services, on the other (graph 2), we shall now focus on tracing
the process that led to the Commission's proposal to amend the TWFD.
Specific
In principle, as mentioned at the beginning of this paper, the last revision
already involved a discussion, among other issues, of the necessity to
broaden the TWFD's scope of application in view of the (then) "new
services." Nevertheless, the compromise reached between the European
Parliament and the Council (and the Commission), foresaw not to include
webcasting or VoD in the Directive. It was deemed premature to regulate
such emerging markets as the impact of such regulation would be difficult to
predict. However, the revision-clause in Art. 26 TWFD nevertheless
stipulated that "the Commission shall [...], if necessary, make further
proposals to adapt it [the Directive] to developments in the field of television
broadcasting, in particular in the light of recent technological developments."
At a Member State level, vigilance was also to be exercised to prepare for
legislative initiatives made necessary by technological changes. According
to Recital no. 8 TWFD, "it is essential that the Member States should take
action with regard to services comparable to television broadcasting in order
to prevent any breach of the fundamental principles which must govern
information and the emergence of wide disparities as regards free
movement and competition;" (emphasis added).
It has been said that the revision of a legal act starts, at the latest, with its
adoption. This seems particularly true for the TWFD, not least when bearing
in mind the aforementioned "review programme" already implemented in the
text of the revised Directive. Later, in view of the Convergence Green Paper,
which was regarded by many as highly influenced by the Commissioner
responsible for the Information Society at the time, and as a certain
counter-weight, the Commissioner responsible for Education and Culture
appointed a High Level Group of professionals from the audiovisual sector
to investigate, and present proposals on, the impact of technological and
business changes on the media industries and related Community policy.
Their report, presented in October 1998 (HLG, 1998), argued that the
regulatory framework should be coherent and clear, and that steps should
be taken to avoid a situation whereby two different sets of rules with an
entirely different purpose would apply to the same service. The Commission,
in its Communication entitled "Principles and Guidelines for the
Commission's audiovisual policy in the digital age", published in December
1999 (Commission, 1999b), and thus published after the Follow-up
Communication on Convergence, stated that one principle of regulatory
action is to target technological neutrality. This term is explained as follows:
"Technological convergence means that services that were previously
carried over a limited number of communication networks can now be
carried over several competing ones. This implies a need for
technological neutrality in regulation: identical services should in
principle be regulated in the same way, regardless of their means of
transmission."
Apparently, the ground had been well-prepared to enter into a debate on
the review of TWF. However, the pace of discussion slowed considerably
mainly due to the end of the "internet hype" and the crisis in (television)
advertising revenues in 2001. "Convergence", having been considered in all
its implications for many years, suddenly no longer seemed an issue that
required immediate action – due to a lack of concrete business models and
the slowed-down or postponed market entry of service providers. Moreover,
the Andersen study, dated June 2002 (ANDERSEN, 2002), which examined
the different possible economic and technical developments in the media
sector through 2010, had come to the conclusion that whatever trends would
predominantly characterise the audiovisual market in the coming years, no
immediate legislative measures were required. The three main trends
identified by Andersen were – formulated as case scenarios – (1) business-as-usual, (2) interactivity and (3) personalisation. According to the study's
authors, the different scenarios would lead to differences in the way market
players like broadcasters, infrastructure operators, content providers etc.
would participate in the value chain in the future. In scenarios (2) and (3),
traditional broadcasters as "content packagers" would be negatively affected
to the greatest degree compared to other players. Interestingly, analysis and
interpretation of the study focused on the prognosis that nothing
fundamental would change until the year 2005. This was perceived as a
"relief" in terms of the alleged pressure on politics to take action.
In March 2002, the Member of the European Commission responsible for
Education and Culture (now: Information Society and Media), Viviane
Reding, presented three options for addressing the relevant issues: (A) the
comprehensive, complete overhaul of the Directive, (B) a moderate revision,
which would be restricted to specific parts of TWFD only, and (C) the
preparation of a Working Programme which, at a later stage, could lead to
initiating a review process. The presentation of these options was to be seen
in the context of the obligation, imposed on the Commission by Art. 26, to
present a report on the application of TWFD at the end of 2002. Read
between the lines, it transpired that the Commissioner was favouring
option C. This, of course, was a position that put a significantly different
emphasis on the approach to be followed than previously signalled,
particularly in 2000 and at the beginning of 2001. At that time, reflecting the
numerous calls by the European Parliament to extend the scope of a revised
directive, the aim of a future EC audiovisual policy instrument was sketched
as to embrace all forms of electronic media, hence the notion "Content
Directive" was introduced into the debate.
When, on January 6th 2003 the Commission adopted the Fourth Report
on the application of the "Television without Frontiers" Directive
(Commission, 2002), it annexed to it a work programme for 2003, which set
out topics and initiatives, including a public consultation, for the review.
Remarkably, the question of whether the Directive's scope of application
should be extended was not dealt with at all. As phrased by the former Head
of Cabinet of Commissioner Reding, this, of course, did not prevent a great
number of stakeholders from submitting their opinion on this subject matter,
with preferences almost equally distributed among those in favour and those
against adapting the scope to cover new services.
The Commission's assessment of the 2003 Consultation was presented in its
Communication entitled "The Future of European Regulatory Audiovisual
Policy", published in December 2003 (Commission, 2003). There, the
following conclusion is drawn:
"In the medium term, nevertheless, the Commission considers that a
thorough revision of the Directive might be necessary to take account
of technological developments and changes in the structure of the
audiovisual market. The Commission will therefore reflect with the help
of experts (in focus groups) whether any changes to content regulation
for the different distribution channels for audiovisual content would be
necessary at Community level in order to take account of media
convergence at a technical level and any divergence of national
regulation which affects the establishment and the functioning of the
common market. The mandate of the group shall be based on the
existing framework. Any intervention would have to ensure the
proportionate application of content rules and the coherent application
of relevant policies considered to be connected to this sector, such as
competition, commercial communications, consumer protection and
the internal market strategy for the services sector."
The reference to "the medium term", understood as self-restraint by the
then acting Commission so as not to prejudice the political agenda of its
successor nominated in 2004 shortly after the enlargement of the EU, and
the approach formulated, i.e. to restart a reflection and consultation phase,
are the most obvious reservations made; in substance, however, the
moment seemed to have come to seriously consider revising the TWFD. In
autumn 2004, the so-called Focus Groups – convening under the presidency
of the Commission and invited to be led, in their discussion, by working
papers prepared by it – consequently started to debate the policy options
eventually to be recommended for future regulatory action. On the basis of
these reflections, the Commission drafted so-called "Issue Papers" in order
to open up the 2005 Consultation to all stakeholders, and to be able to
present the input received at the Audiovisual Conference "Between Culture
and Commerce" under British EU presidency in autumn 2005 (the "Liverpool
Conference").
The new Commission also sought to set the above activities in a larger
framework, having recourse to the Lisbon agenda. In its i2010 initiative,
adopted in June 2005 (Commission 2005a), the Commission outlined i.a. the
following policy priority aiming at "A European information society for growth
and jobs":
"- To create an open and competitive single market for information
society and media services within the EU. To support technological
convergence with "policy convergence", the Commission will propose:
an efficient spectrum management policy in Europe (2005); a
modernisation of the rules on audiovisual media services (end 2005);
an updating of the regulatory framework for electronic communications
(2006); a strategy for a secure information society (2006); and a
comprehensive approach for effective and interoperable digital rights
management (2006/2007). [...]" (emphasis added)
Before and after the Liverpool Conference, different versions of drafts,
focusing on the definition component in a revised TWFD, were in circulation.
These drafts were apparently, in the first instance, mainly inspired by the work
done by Focus Group 1, and foresaw an extension of the scope of a future
instrument. A remarkable amendment, though, was made post-Liverpool, i.e.
that the definition of services falling under a new Directive should be more
strongly focused on services of a mass media character. We will come back
to this point later on. It seemed that for all those involved in the discussion –
both the stakeholders and the institutions competent in the upcoming
legislative exercise – at this stage it was important, first, to verify whether
some issues of perceived consensus were likely to be acceptable for the
majority of interested persons, and, secondly, to test how strong opposition
might become in areas where an intense debate was foreseeable.
An example of the latter kind of debate, obviously most interesting in the
present context, was the degree to which the scope of application would be
extended in order to cover new services.
On December 13th 2005, the European Commission officially adopted the
proposal for a Directive on Audiovisual Media Services (Commission,
2005b). Besides the Committee of the Regions and the European Economic
and Social Committee, which will be consulted, it is now up to the Union's
organs, the European Parliament and the Council 4, to take a position in the
legislative process.
4 This contribution will not elaborate on past EP or Council positions. In essence, it shall suffice
to recall that the Parliament has constantly renewed its request for enacting a new directive
which should cover new media services, see e.g. Resolution of 6 September 2005, A6-
Chronology of general and specific media and ICT policy developments
1988       Liberalisation of TTE
1989       Adoption of "Television without Frontiers" Directive
1990       Liberalisation of Telecommunications Services (ONP)
1994       Liberalisation of Satellite Communications
1995       Start of First Review of TWFD
1995       Use of Cable Networks for Liberalised TC Services
1996       Liberalisation of Mobile Communications
1997       Adoption of Amendment to TWFD
1997       Convergence Green Paper
1998       Full Liberalisation of Voice Telephony Services
1998       "Technical Standards Transparency" Directives
1998       High Level Group ("Oreja") Report
1997-1999  Telecommunications Package (i.a. Separate Legal Structures for Owners of TC and Cable Networks)
1999       Convergence Communication (Follow-up)
1999       Start of Telecommunications Review
1999       Communication "Principles and Guidelines for the Commission's Audiovisual Policy in the Digital Age"
2000       Adoption of eCommerce Directive
2002       Electronic Communications Package (i.a. Framework, Access, and Universal Services Directives; Frequency Decision)
2002       4th TWFD Application Report (incl. Working Programme)
2003       First Consultation on TWFD Review
2003       Communication "The Future of European Regulatory Audiovisual Policy"
2004       Focus Groups on TWFD Review
2005       Communication "i2010" (i.a. TWFD Review, 2006 Electronic Communications Review, Spectrum Management Policy)
2005       Second Consultation on TWFD Review (Issue Papers and Liverpool Conference)
2005       Commission Proposal for an "Audiovisual Media Services" Directive (AMSD)
0202/2005 ("Weber Report"). The Council stressed the importance of a technologically
neutral approach when regulating content services, and underlined that the content of
interactive media should be regarded as a new audiovisual phenomenon. Consequently, the
Commission was requested to consider possible adaptations of the regulatory framework in
order to safeguard cultural diversity and a healthy development of the sector. For more
information, see A. Roßnagel, [2005] EMR book series vol. 29, p. 35 (41).
Discussion of the draft
Audiovisual Media Services Directive
Primary objectives and definition of scope
According to the motivation put forward in the Commission's proposal,
Member States' rules applicable to activities such as on-demand audiovisual
media services contain disparities, some of which may impede the free
movement of these services within the EU and may distort competition within
the Common Market. Reference is made, in this respect, to Art. 3
eCommerce Directive, which allows Member States to derogate from the
country-of-origin principle – that is, the general approach under which a service
legally provided in the Member State where the provider is established may
circulate freely across the EU without interference by the receiving
Member State – for specific public policy reasons. "Legal uncertainty and a
non-level playing field exist for European companies delivering audiovisual
media services as regards the legal regime governing emerging on-demand
services, it is therefore necessary [...] to apply at least a basic tier of
coordinated rules to all audiovisual media services." (emphasis added)
Those arguments are triggered by the requirements laid down in Art. 49 in
conjunction with Art. 55 EC, i.e. that the Directive will facilitate the free
provision of services, and, thus, serve to demonstrate that a legal measure
has to be adopted in order to overcome hindrances resulting from
divergences in the national rules. The recitals go on to state that the
importance of audiovisual media services for societies, democracy and
culture should justify the application of specific rules to these services.
Further on, the Commission refers to two principles contained in Art. 5
EC. In accordance with the principle of proportionality, on the one hand, it
proclaims that the measures provided for in the Directive represent the
minimum needed to achieve the objective of the proper functioning of the
internal market. The Commission thinks that non-linear audiovisual media
services have the potential to partially replace linear services. Nevertheless,
the recitals state, non-linear services are different from linear services with
regard to the choice and control users can exercise and with regard to the
impact they have on society. This would justify imposing lighter regulation on
non-linear services, which only have to comply with the basic rules provided
for. In view of the principle of subsidiarity, on the other hand, action at a
Community level is seen as necessary in order to guarantee an area without
internal frontiers as far as audiovisual media services are concerned.
The legislative proposal, according to its recitals, intends to ensure a high
level of protection of objectives of general interest, in particular the
protection of minors and human dignity. The EC Treaty, in its Arts 151 and
153, stipulates the obligation of the Community to take into account, when
acting, cultural aspects, in particular in order to preserve and enhance the
diversity of cultures; in addition, it has to strive for a high level of consumer
protection. With regards to the former, the Directive sets out that non-linear
services should also promote the production and distribution of European
works, where practicable, and thus actively contribute to the promotion of
cultural diversity. As far as the latter is concerned, it is deemed both
necessary and sufficient that a minimum set of harmonised obligations is
introduced in order to prevent Member States from derogating from the
country-of-origin principle with regard to protection of consumers in the
areas harmonised in the Directive. The same kind of consideration is made
when it comes to other public policy objectives, such as the protection of
minors, the fight against incitement to any kind of hatred, and violation of
human dignity concerning individual persons.
With regard to several objections, communicated for many years when
the extension of scope was under debate, the following passages might be
read:
"This Directive enhances compliance with fundamental rights and is
fully in line with the principles recognised by the Charter of
Fundamental Rights of the European Union, in particular Article 11
thereof. In this regard, this Directive does not in any way prevent
Member States from applying their constitutional rules relating to
freedom of the press and freedom of expression in the media. [...] No
provision of this Directive should require or encourage Member States
to impose new systems of licensing or administrative authorisation on
any type of media. [...] None of the provisions of this Directive that
concern the protection of minors and public order necessarily requires
that the measures in question be implemented through prior control of
audiovisual media services."
In short, the Commission intends a future Directive:
• To have a broader scope of application: the Directive should be
formulated in such a way that all audiovisual media services are covered,
whatever their mode of delivery ("regulatory convergence"; it is the content
that matters when specific general interest objectives have to be attained,
therefore the approach of technological neutrality is chosen);
• To lay down basic requirements that all of those services must respect
while at the same time introducing a certain graduation, within the body of
rules of the Directive, taking account of the character of different audiovisual
media services, particularly their influence on the viewer or user ("lighter-touch regulation" for less television-like, "non-linear" services, VoD for instance).
For present purposes it is important to clarify how the future scope of
application shall be designed. Here, Art. 1 of the draft Audiovisual Media
Services Directive (AMSD) is to be looked at, which contains the guiding
definitions. According to Art. 1 lit. a), "audiovisual media service" means a
service as defined by Arts 49, 50 EC the principal purpose of which is the
provision of moving images with or without sound in order to inform,
entertain, or educate, to the general public by electronic communications
networks within the meaning of Art. 2 lit. a) of Directive 2002/21/EC
(Framework Directive). This general definition is accompanied by definitions
of "television broadcasting" and "non-linear service". The former means a
linear audiovisual media service where a media service provider decides
upon the moment in time when a specific programme is transmitted and
establishes the programme schedule. A non-linear service is defined as an
audiovisual media service where the user decides upon the moment in time
when a specific programme is transmitted on the basis of a choice of content
selected by the media service provider.
It is clear that one also has to consider the definition of media service
provider in order to identify the exact scope of application, both ratione
materiae and ratione personae. The term "Media service provider" refers to
the natural or legal person who has editorial responsibility for the choice of
content of the audiovisual media service and determines the manner in
which it is organised, Art. 1 lit. b). The notion of broadcaster is then defined,
more narrowly, as the provider of linear audiovisual media services.
Thus, the Directive will be applicable to:
- audiovisual content (moving images) of a mass media character (to
inform, entertain, or educate) being provided to a general audience
(numerous participants of a non previously defined group) by any kind of
network;
- where the activity is an economic one (service in the meaning of the
treaty);
- where editorial responsibility over a specific programme is exercised
(schedules; selection of choice of content) by a media services provider;
- irrespective of whether the moment in time when the programme is
accessed is determined by the broadcaster (linear service) or the viewer
(non-linear service).
Linear and non-linear services, on the one hand, are distinguished
according to the degree to which the viewer exercises control over the
moment in time and the kind of programme s/he is watching. In cases where
s/he depends on a constant stream of programmes, arranged according to a
schedule, by a provider, a linear service, i.e. a television broadcast, is at
stake. Where the user is free to choose what specific content offer s/he is
viewing, and when, the offer will be regarded as a non-linear service.
Basically, the demarcation line follows the well-known models of NVoD
and VoD, at least as long as the former is made available only at time slots
that are not so short as to be negligible. On the other hand, in the case of
non-linear services, it remains to be defined whether the AMSD or the
eCommerce Directive should be applied. Relevant elements here are
whether an audiovisual media service is rendered, which means that there
have to be moving images of a mass media nature in the form of a
programme that can be characterised as the principal or main content
encountered. Therefore, in cases of the mere inclusion of a small number of
audiovisual files on a webpage, where this is of an insignificant proportion
compared to the rest of the content put online, where no editorial
responsibility is exercised, or where it will not be intended to inform,
entertain and educate, the regime of the eCommerce Directive will be
applicable (graph 3).
Graph 3 – Audiovisual media service vs. Information Society service
Open debate within EC institutions, at Member State level
and with stakeholders
Following the publication of the Commission's proposal, a discussion
recommenced over whether a future directive on audiovisual content really
should cover "new services" – services that have only emerged in recent
years or are still to be launched. Supposedly, there is some familiarity with
related arguments: some claim that regulation would be premature,
particularly when it comes to mobile media or internet services. Moreover, in
a similar vein, it is argued that it would be disproportionate to apply the
traditionally strict "broadcasting" regulation to new forms of audiovisual
content distribution. Sometimes these arguments are a bit irritating, to say
the least. The fact that a graduated level of detail in regulation has been
foreseen is exactly the answer to concerns that new services would be
regulated over-heavily. The fact that the service at hand must represent an
economic activity that entails a certain mass media appeal ("television-like
offer", "principal purpose") excludes both purely private initiatives, as well as
non-media services from being covered by the proposed rules. On the
contrary, it seems difficult to understand why minimum requirements
regarding the protection of minors, consumers, and personal integrity should
not be relevant, at least in principle, to any kind of audiovisual media service.
Critics also question whether the distinction between linear and non-linear services has been formulated adequately and whether this really is
"future-proof". Here, the arguments will very much depend on the
preconditions one might want to set for the future audiovisual landscape in
Europe. Indeed, coming back to the interactivity and personalisation
scenarios, forecast by Andersen in 2002, technical development appears to
offer viewers an even wider range of possibilities to individually select the
kind of media information they are interested in. So, the question is rather
whether there will be many linear services left by 2010 (and especially
beyond), by which point the Directive will have been implemented into
national law in the majority of Member States. Presumably, in particular
highly attractive commercial general interest channels and public service
television broadcasts will remain as essentially linear services, which means
that only the smaller part of the proposed provisions, i.e. the basic tier with
reduced restrictions on the pursuit of the activity, will show relevance for the
majority of services rendered in the audiovisual sector. In this respect, the
draft directive is not technologically neutral when it differentiates, internally,
between the two kinds of services to be covered; it might therefore have to be
reassessed whether future progress in technology could make it attractive for
some providers to switch from one level of regulation to another by adapting
the underlying technical parameters accordingly.
Apparently, especially telecommunications operators and multimedia
companies, and their respective associations or lobbying groups, are rather
discontented with the Commission's proposal – the same holds true for
newspaper and magazine publishers. This tendency could be observed as
early as the Liverpool Conference. It is often argued that general law, acts
on defamation, advertising standards in horizontal Community instruments
(e.g. the Unfair Business Practices Directive of 2005), or legislation on the
protection of minors in criminal law for instance, would be sufficient. Yet this
still leaves open the question of whether operators/providers and
users/viewers are better off with a clear legal framework based on the
principle of country-of-origin control – or not.
By contrast, Member States and particularly the European Parliament,
seem to be preparing for an early consensus on the fact that the Directive's
scope will be expanded. In mid-May, the Council held an exchange of views
on the draft text for the first time, and, in a most cautiously worded
statement, said tendency was confirmed. However, the UK government has
reiterated its preliminary negative position on several occasions, and it is
difficult to predict how many other Member States might align themselves with this
opposition.
From the European Parliament's committee on culture and education,
having the lead for this dossier, there have been reported signs of a broad
agreement to follow the Commission's approach. At the beginning of June,
all competent committees held a joint hearing. The ambitious timetable of
the EP foresees the following steps: a draft report will be presented at the
Culture Committee meeting in July, the report presented at that meeting will
be adopted in October and the EP Plenary will be called to vote on the
proposed report in December in its first reading.
Résumé and outlook
The revision of the TWFD has finally become reality. After a decade of
discussion on how European audiovisual policy should react to the
convergence issue, a concrete proposal has been tabled by the Commission
that largely follows the trends already indicated at the end of the 1990s. The
aim is to ensure technological neutrality when adopting "convergent
regulation" and to foresee a graduated level of conditions for linear and
non-linear services through a differentiated regulatory approach. In a field
such as ICT and media – which nowadays shows clear tendencies towards
convergence, both in terms of internal structures (vertical integration) and of
the extension of business activities across sectors, with a constantly high
pace of technological and economic change – any prognosis of future
market conditions, on which the European legislator must also, to some
extent, base its approach, is generally liable to be misguided. However, the
Community must now decide whether it needs to play an active role in
shaping the future of the audiovisual media landscape, not least in order to
protect recognised standards of public interest objectives in all audiovisual
media services.
No. 62, 2nd Q. 2006
90
Bibliographic References
ANDERSEN (2002): "Outlook of development of the Market for European audiovisual
content and of the regulatory framework concerning production and distribution of
this content".
http://ec.europa.eu/comm/avpolicy/docs/library/studies/finalised/tvoutlook/tvoutlook_finalreport.pdf
BERGER K. & M. SCHOENTHAL (2005): Tomorrow's Delivery of Audiovisual
Services – Legal Questions Raised by Digital Broadcasting and Mobile Reception,
European Audiovisual Observatory (Ed.), IRIS Special.
EC:
- (1997): Directive 97/36/EC of the European Parliament and of the Council of 30
June 1997 amending Council Directive 89/552/EEC on the coordination of certain
provisions laid down by law, regulation or administrative action in Member States
concerning the pursuit of television broadcasting activities, OJ EC [1997] L 202,
p. 60.
- (1998): Directive 98/48/EC of the European Parliament and of the Council of 20 July
1998 amending Directive 98/34/EC laying down a procedure for the provision of
information in the field of technical standards and regulations, OJ EC [1998] L 217,
p. 18.
- (2000): Directive 2000/31/EC of the European Parliament and of the Council of 8
June 2000 on certain legal aspects of information society services, in particular
electronic commerce, in the Internal Market (Directive on electronic commerce), OJ
EC [2000] L 178, p. 1.
- (2002): Directive 2002/21/EC of the European Parliament and of the Council of 7
March 2002 on a common regulatory framework for electronic communications
networks and services (Framework Directive), OJ EC [2002] L 108, p. 33.
EEC:
- (1988): Commission Directive 88/301/EEC of 16 May 1988 on competition in the
markets in telecommunications terminal equipment, OJ EEC [1988] L 131 p. 73.
- (1989): Council Directive 89/552/EEC on the coordination of certain provisions laid
down by law, regulation or administrative action in Member States concerning the
pursuit of television broadcasting activities, OJ EEC [1989] L 331 p. 51.
European Commission:
- (1994): Commission Decision 94/796/EC of 18 November 1994 on a common
technical regulation for the pan-European integrated services digital network (ISDN)
primary rate access, OJ EC [1994] L 329 p. 1.
- (1997): Green Paper on the Convergence of the Telecommunications, Media and
Information Technology Sectors, and the Implications for Regulation – Towards an
Information Society Approach, COM (1997) 623 final.
- (1999a): Commission Communication "The Convergence of the
Telecommunications, Media and Information Technology Sectors, and the
Implications for Regulation – Results of the Public Consultation on the Green Paper
[COM(97)623]", COM(1999) 108 final.
- (1999b): Commission Communication "Principles and Guidelines for the
Commission's audiovisual policy in the digital age".
http://ec.europa.eu/comm/avpolicy/docs/library/legispdffiles/av_en.pdf
- (2002): Commission Communication "Fourth Report on the application of the
'Television without Frontiers' Directive", COM(2002) 778 final.
- (2003): Commission Communication, "The Future of European Regulatory
Audiovisual Policy".
http://ec.europa.eu/information_society/eeurope/i2010/docs/launch/com_audiovisual_future_151203_en.pdf
- (2005a): Commission Communication "i2010 - A European information society for
growth and jobs".
http://europa.eu.int/information_society/eeurope/i2010/docs/communications/com_229_i2010_310505_fv_en.pdf
- (2005b): Commission Proposal for an "Audiovisual Media Services Directive".
http://ec.europa.eu/comm/avpolicy/docs/reg/modernisation/proposal_2005/com2005646-final-en.pdf
European Court of Justice (2005): ECJ, case C-89/04, Mediakabel, judgement of 2
June 2005, nyr (ECJ 2005). According to this judgement, "television broadcasting"
includes Near-Video-on-Demand (NVoD) services.
HLG (1998): Report from the High Level Group on Audiovisual Policy, chaired by
Commissioner Marcelino Oreja, "The Digital Age: European Audiovisual Policy".
http://ec.europa.eu/comm/avpolicy/docs/library/studies/finalised/hlg/hlg_en.pdf
(van) LOON A. (2004): "The end of the broadcasting era", Communications Law,
no. 5, p. 172.
Three scenarios for TV in 2015 (*)
Laurence MEYER
IDATE, Montpellier
Abstract: By offering three visions of the future of television through 2015, this article
aims to highlight some of the socio-economic changes that the television sector may
experience in the long term. It highlights the structuring impact that PVRs could have on the
sector, as well as the upheavals that may arise from a new paradigm of internet TV. It also
highlights the options now open to TV channel operators wishing to set up a mobile TV
service and the threats facing mobile telecommunications operators in the development of
this market as a result.
Key words: television, forecast, media usages.
The TV sector is currently in turmoil and is only gradually sizing up the
challenges and opportunities presented by the rise of IPTV, the growth of
VoD services, the emergence of TV services distributed on a P2P basis via
the internet, the phenomenon of video podcasting and user-generated
content, the expected success of PVRs and multimedia PCs (Media
Centres), and the forthcoming launch of commercial mobile TV offerings
based on the DVB-H standard in Europe.
In view of the large number of ongoing changes, the future of the TV sector
not only looks uncertain, but is also sure to see major changes.
These transformations are forecast both in the programme offering to be
marketed to television viewers in the long term and in the characteristics of
television consumption ten years down the line.
(*) This article is based on the results of an IDATE multi-client report entitled "TV 2015: the
future of TV financing in Europe" headed by Laurence Meyer and published in 2005.
One of the aims of this report was naturally to offer a vision of television in the future. This
exercise drew on a certain number of prerequisites and consequently began by offering a
definition of television. The report subsequently focused on analysing key factors in the
evolution of the sector. Finally, it examined the mid-term objectives of the various players in the
TV market, the challenges facing them, the conflicts and converging interests of these players in
terms of their objectives and finally their strengths and weaknesses. Once this groundwork had
been covered, several long-term growth scenarios for television were described, each
accompanied by a statistical forecast.
COMMUNICATIONS & STRATEGIES, no. 62, 2nd quarter 2006, p. 93.
From this point of view, trying to set the scene for the future of television
would seem to be one of the best ways of anticipating the changes that the
forthcoming decade may bring. Such an approach avoids reducing the
outlook to a single "futurist" vision that may appear either overly simplistic or
excessively complex and thus open to criticism!
The main objective of this article is consequently to present three growth
scenarios for TV through 2015. It begins with an analysis of factors of
change that may drastically alter long-term television consumption. The
paper goes on to describe various scenarios and finally examines a few of
the economic changes that may be experienced by the TV sector.
TV, Europe's favourite medium
In major European countries exposure time to the media is currently
approaching 10 hours per day!
Despite the emergence of new digital media and competition from other
"major media" (radio, the press, magazines and cinema), Europeans still
have the highest exposure to television.
Statistics published by the EIAA (European Interactive Advertising
Association) at the end of 2004 show that TV remains the leading electronic
form of entertainment for Europeans. In fact, time spent watching television
accounted for one third of total time devoted to the media on a daily basis in
Europe, while the internet only accounted for 20%.
Media consumption of Europeans in 2004
[Chart comparing the time devoted by Europeans to radio, TV, the internet, the press and magazines.]
Source: EIAA – Media Consumption study 2004
L. MEYER
95
Are TV channel operators under threat?
Although it still enjoys a special status in the eyes of most viewers, the
TV service as we know it today, that is to say mainly broadcast terrestrially,
by cable or via satellite and based on linear TV programming, now seems to
be under threat.
The TV sector is effectively facing a range of changes and potentially
disruptive factors that are likely to upset the current market balance. This
article focuses on the factors of change that may revolutionise television
consumption in the long term, and that are thus likely to shape the mid- to
long-term challenges and opportunities related to the growing household
penetration of tools and equipment promoting the emergence of concepts
such as personal TV, mobile TV, home networks and "Egocasting" 1 .
Personal TV: the next step?
As the number of VOD services grows and TV households become
increasingly equipped with PVRs, new-generation set-top boxes and PC
Media Centres, TV should be less and less synonymous with the linear
programming imposed by TV channel operators. The world of TV will, on
the contrary, steadily move into the era of personal programming.
Television viewers should then control their own consumption. They should
be in a position to consume what they want, when they want and, depending
on advances in portability and mobility, they should soon be in a position to
watch TV programmes wherever they want too.
Mobile TV turns into a reality
Since the beginning of 2003 most mobile telecommunications operators
have been offering video via their multimedia portals and for downloading.
Streaming services, which emerged at the end of 2003, were seen as a
1 This term was adopted by IDATE after its appearance in an article written by Shelly Palmer,
Chairman of the Advanced Media Committee of the National Academy of Television Arts and
Sciences (NATAS), published on August 11th 2005 and entitled "The Age of Egocasting"
(http://advancedmediacommittee.typepad.com/emmyadvancedmedia/2005/08/the_age_of_egoc.html).
Notwithstanding this reference, the term Egocasting was used for the first time by the
historian Christine Rosen in an essay entitled "The age of Egocasting"
(http://www.thenewatlantis.com/archive/7/rosen.htm).
second stage in the development of mobile video. The third stage will be
that of mobile TV broadcast on TV networks designed for telephones or
other pieces of mobile equipment such as PDAs.
Most mobile TV offerings are currently in the pilot phase. Over the course
of the next two years, the number of commercial offerings based on the
DVB-H standard nevertheless looks set to grow.
During this transition period, and despite the existence of "competing"
equipment (PDAs, laptops, mobile TVs, as well as multimedia personal
players of the iPod variety), the telephone should play a central role in
individuals' electronic entertainment consumption, notably as far as the
consumption of audiovisual programmes is concerned. Mobile TV is
effectively seen by telecommunication operators as a strategic activity:
against a background of falling fixed telephony revenues, video and TV
enable these operators to contemplate an increase in their ARPU,
especially as the results of several surveys have revealed high levels of
consumer interest in this type of offering.
The inevitable rise of internet TV
The internet is obviously a key factor in the long-term evolution of the TV
sector: over 60% of European households are almost certain to have a
broadband internet connection by 2015.
Moreover, the emergence of an "alternative universal TV" on the internet,
via the growing number of vlogs, personalised TV platforms, video search
engines, streaming software programmes based on peercasting, and
audiovisual programmes specially designed for web-based distribution (web
reality programmes, for example), would seem inevitable.
Vlogging and podcasting: two booming online trends
The phenomenon of blogging on the internet is exploding, and blogs
using video, known as vlogs, are now beginning to appear. The vlogger community is
still relatively small compared to the world of blogging. It is nevertheless
beginning to take shape via the use of tools such as, for instance, the video
RSS aggregators FireANT and Videora 1.0 or the Vlogdir directory, which
"tracks" vlogs on the internet.
Like blogs, vlogs cover highly diverse topics ranging from cookery
lessons to mini-reports on local film festivals, not to mention the broadcast
(or, for that matter, narrowcast) of family events and personal videos.
On top of podcasting 2 , vlogging makes it possible to envisage the
development of a new TV model enabling the diffusion of "hyper-specialised"
contents to communities of users interested in specific topics. Thanks to the
use of a video RSS aggregator, podcasting effectively empowers users to
build a "personal TV" programming schedule. Programmes can
subsequently be transferred to their digital personal multimedia players for
deferred viewing.
An expanding offering of online TV services
Parallel to vlogs, an offering of video and TV services, marketed by the
major media groups and web-based players, is developing rapidly on the
internet.
The major media groups are devising specific programmes for the
internet, which are often free. Catalogue owners like Disney and Warner
Bros have also moved into the online distribution (free and/or P2P) of some
of their programmes. The big internet brands such as Yahoo! and AOL are
also very active in that field. VOD services offering TV programmes are also
spreading over the internet. In fact, legal software applications for
downloading films and TV programmes based on P2P distribution systems
are emerging, as are streaming and personalised TV software programmes
based on peercasting (Open Media Network - OMN, Broadcast Machine,
Veoh Networks, etc.).
Towards a new paradigm for TV?
The internet TV offering should improve in the years to come.
Several factors would seem to support this trend:
• It is in the interest of TV channels: broadcasting their service on the
internet represents a way of "catching back" their audience and, for
commercial TV channels, of limiting the financial risks of a massive transfer
of investment in televised advertisements to the internet.
2 Podcasting is a way of distributing audio and video files via the internet that uses the RSS and
Atom syndication formats. Podcasting enables users to automate the downloading of audio or
video programmes, notably to their digital personal players or to the hard disk of their PC,
enabling them to view these files immediately or at a later date.
• In general terms, players from the IT world such as Microsoft, Intel
and Apple are working towards the adoption of a new TV model based both
on the digital home network and on the distribution of universal TV
programmes via the internet.
• TV distribution systems based on streaming and P2P, such as those
using RSS feeds, make it possible to envisage the growth of innovative
personal TV services that should, in theory, be cheaper to produce.
Over the next few years, and parallel to today's TV offering, the TV sector
could consequently move towards a new alternative growth model,
characterised by:
- the boom of a video programme offering created by television viewers
on the internet,
- the enhancement of online video and TV services developed by major
media groups,
- the recognition of P2P and podcasting as serious alternative channels
for distributing TV over the internet.
Home networks - a prerequisite
The set up of multimedia domestic networks in households based on
data exchange on the one hand and pooling the functions of all pieces of
electronic household equipment on the other, would seem to be a
prerequisite for future changes to the television paradigm as we know it
today.
Home networks effectively form the basis of a new, open environment for
media consumption and an ecosystem, in which consumers are set to have
easy access to protected multimedia content 3 from the internet or other
sources at any time from their homes. To achieve this, consumers should
use a remote control, as well as a PC (a central unit and adaptors) and a
series of domestic electronic devices (televisions, laptops, PMCs, MP3
players, portable games consoles, smartphones etc.) synchronised with the
PC. These will be seamlessly interoperable and networked using a wireless
technology (belonging to the WiMAX family).
3 And notably to TV programmes.
The generalisation of home networks lies at the heart of the philosophy
adopted by companies such as Intel, Microsoft and Apple and many other
players in the IT and consumer electronics world, as well as a large number
of internet access providers and cable operators when it comes to building
the future of television. It consequently forms the core of their short- and mid-term growth strategies.
Three scenarios for the future
The scenarios below have been developed bearing in mind that changes
in the field of TV in the long term will be guided more by transformations in
end demand, regulation and commercial innovations than by the availability
of technologies.
Each of these scenarios thus takes television consumption by individual
viewers as a starting point, which is re-contextualised in the larger
framework of overall media usage. Of course, the scenarios also take
technological changes into account and are therefore firmly rooted in a
climate of convergence.
SCENARIO 1: "TV in complete freedom"
Usages and equipment
By 2015, access to TV services should be almost ubiquitous: it
should be possible for viewers to watch a news bulletin, an episode of their
favourite series or a live show in any location and at any time, provided that
they have a piece of digital receiving equipment, preferably mobile.
Ownership of a piece of mobile multimedia equipment will also have become
relatively widespread, whether this be dedicated to receiving audiovisual
services (TV, video, music, video games etc.) or a latest generation mobile
telephone.
In view of the richness of the offering both in equipment and in mobile
services and content, a large number of Europeans should subscribe to a
mobile TV offering. The penetration of mobile TV should reach 50% of the
population by 2015, and 80% of subscribers will use this service every day
for almost one hour on average! Under these conditions, the reference point
in the TV market in terms of marketing should no longer be the household,
but the individual: the TV market will consequently have entered the age of
individual TV.
Underlying market structures and business models
Although the changes described above do not seem very "revolutionary,"
this scenario nevertheless assumes a few major developments in the
structure of the TV market and the related business models.
In this scenario, mobile TV consumption by subscribers is set to grow
significantly, amounting to around one hour per day.
In terms of the service offering, changes concern the mobile TV market
where two types of offering will compete with each other:
- the terrestrial offering organised by mobile telecommunication
operators and mainly targeting mobile telephones;
- the satellite offering for television sets and laptops, marketed by pay
TV platform operators in conjunction with consumer electronic equipment
manufacturers.
This offering is rich and particularly well segmented:
• Satellite pay TV platform operators offer a "best of mobile" of fixed
services, as well as an offering of specific content and services adapted for
mobility.
• Mobile operators market a linear TV offering based on the DVB-H
standard, as well as original and innovative programmes on demand via 3G
networks.
This scenario assumes no major upheavals to existing business models:
• The model for mobile pay TV is similar to that of fixed pay TV via
satellite.
• The most significant change involves TV advertising, which sees the
rise of interactive and mobile advertising.
Scenario 1 – Usage of various media in 2015
[Two charts, for 2005 and 2015, plotting the average time of use (in minutes/day, 0 to 240) against the daily reach (in % of the population, 0 to 100) of fixed TV, radio, the internet, pre-recorded TV programmes, video games, magazines, mobile TV, newspapers, music and, in 2015, the mobile phone.]
NB: The time devoted to the press, magazines, music, radio, TV and video games refers to the
time spent reading, listening to or watching these media excluding the internet. Pre-recorded TV
programmes refers to the time spent watching DVDs, VHS cassettes, pre-recorded
programmes on a PVR and programmes accessible via VOD.
Source: IDATE
SCENARIO 2: "Welcome to the age of Egocasting"
Usages and equipment
This second scenario is based on the fundamental hypothesis that the
internet has become the favourite medium of a large section of the
population by 2015, notably among target consumers under the age of 45.
TV should no longer be at the centre of media consumption: as
society plunges into the culture of hyper-personalisation, television viewers
will no longer subscribe to the "formatted" programmes that most TV
channels will have continued to offer. They will switch to vlogs, personalised
TV platforms 4 , and the VOD services available on the internet, which not
only deliver more original content (multimedia), but also allow viewers to
contribute to the topics that concern or interest them. TV should thus have
entered the age of Egocasting.
The video and TV offering on the internet has been taking shape and
expanding since 2005 thanks to the efforts of internet players including the
big brand names, software solution providers and TV producers, who are
bidding for their own "survival."
Against this background the audience for "traditional" TV channels should
drop significantly. Mobile TV fails to attract consumers, who prefer to opt for
the podcasting model for nomadic video consumption, notably thanks to:
- ever cheaper ways of accessing broadband internet (by 2015, over
85% of households will have broadband access);
- the availability of an expanded range of multimedia players of the iPod
variety on the market as of 2006.
Consumers will also be particularly drawn to home network offerings
structured around PCs or PVRs and marketed by internet access and IT
equipment providers. By 2015, 45% of households should consequently be
equipped with a PC Media Centre, and over half should have a multimedia
home network.
4 Aggregators of RSS video or streaming software programmes enabling users to access an
enhanced TV offering via a P2P distribution system.
Scenario 2 – Usages of various media in 2015
[Chart plotting the average time of use (in minutes/day, 0 to 240) against the daily reach (in % of the population, 0 to 100) of the internet, fixed TV, radio, pre-recorded TV programmes, the mobile phone, newspapers, video games, magazines, mobile TV and music.]
Source: IDATE
Underlying market structures and business models
As regards the underlying market structures and business models, this
scenario points to far-reaching changes.
First of all, a series of "exogenous" events have taken place favouring the
"domination" of the internet:
• Between 2005 and 2015 the public authorities have taken steps to
ensure that the internet network supports multicasting, and P2P is now
largely used as a means of distributing TV via the internet.
• IT and consumer electronics players have launched a series of
concerted initiatives to promote the benefits of digital multimedia home
networks and the PC Media Centre.
• As of 2006-2007 wireless fixed alternatives to DSL (WiMAX and its
derivatives) make it possible to cover certain populations that had been
served poorly or not at all by traditional broadband technologies.
This second scenario subsequently assumes the emergence of a
universal alternative TV offering:
- distributed on a peercasting basis via the Internet,
- structured around the big internet brands using powerful search
engines and programme guides,
- providing a TV offering from across the world consisting of niche
programmes and vlogs, webTV, VOD services etc.
It also calls for strong growth in the VOD offering available on the
internet, as well as via cable and ADSL networks. This VOD offering should
be available for both television sets and PCs.
The offering includes a non-linear version of the programmes shown by
TV channels, as well as VOD services launched by independent cinema and
audiovisual producers wishing to generate a return on their catalogues.
Against this background the free-to-air TV market will also face its fair
share of problems.
These difficulties should mainly be related to the drop in overall TV
audience figures and a major flow of televised advertising spending to the
internet. As a result, this market segment should enter a period of major
restructuring, especially given that, unlike in the first scenario, the mobile
pay TV market will not materialise, as consumers are likely to prefer the
"iPod model."
In terms of business models, this second scenario assumes a few major
upheavals. These drastic changes are mainly linked to the emergence of the
"alternative universal TV" offering on the internet.
This new offering should effectively call into question a certain number of
the golden rules defining how the TV sector operates:
- notably by enabling television consumers to contribute to the
programme offering;
- or by assuming global distribution for TV programmes;
- or by attributing a strategic role to TV guidance tools, as they become
the only way of "capturing" television viewers.
Moreover, TV enters the age of Egocasting, which:
- sees the advent of the consumption of audiovisual programmes on
demand whose copyrights are stipulated by their owners;
- leads to an evolution in advertising towards a business model
dominated by rigorous measurement, highly selective targeting and
personalisation of the message. This model specifically assumes that
advertising spending will be concentrated on programme access
platforms, and that the latter will redistribute advertising revenues
according to the popularity of the various programmes and services that
they "host."
SCENARIO 3: "The reign of TV portals"
Usages and equipment
This third scenario is based on the assumption that over the 2005-2015
period, a large number of television viewers have been attracted to the
concept of personal TV enabled by PVR and VOD services. TV has
consequently entered the age of personal TV. Television consumption has
therefore become largely non-linear.
In 2015 personal TV can be consumed at home, as well as in mobile
contexts. As a result, the mobile TV market should be structured on the
basis of two models:
- the iPod model based on the use of portable PVRs,
- the model of the real-time broadcasting of TV programmes centring on
the use of the mobile telephone or special devices equipped with a hard
disk.
Thanks to marketing initiatives by consumer electronics manufacturers,
which saw the DVD player market running out of steam in 2005, the
household penetration rate of PVRs (or DVD player/recorders with a hard
disk) has taken off rapidly in Europe, especially since the pay TV platform
operators (via satellite, cable or ADSL) were very quick to latch onto this
trend. In 2015
60% of TV households should consequently have the option of customising
their TV consumption and watching most TV programmes on a slightly
deferred basis or after having recorded them (on their fixed or portable
PVR).
Against this background, the major media brands should move fast to
position themselves, so that they continue to be the "reference" not only for
real-time TV consumption, particularly on a live basis, but also in a universe
of TV consumption on demand.
Underlying market structures and business models
Although this scenario may appear to follow on more directly from current trends,
it still assumes several significant changes in terms of market structure and
related business models.
From a usage point of view, the key change lies in the fact that on
average, 45% of time in front of a TV screen will be spent watching recorded
programmes, deferred broadcasts or on-demand offerings. Moreover, the
time devoted to TV, fixed or mobile, should rise overall, whereas the TV
audience, in real or non real-time, should remain concentrated around the
big TV brand names. Lastly, in households that have started using VOD
services, consumption should increase dramatically from 2005 levels.
At the initiative of major media groups and TV channel operators, the
fixed digital TV offering should be restructured around television portals that
create a unique environment for each major TV brand, which should enable:
- not only access to the linear programme offering,
- but also access to a non-linear version of these programmes.
These portals will naturally be interactive. Moreover, they should enable
viewers to access a range of services, and notably relational marketing
campaigns and interactive advertising. They should also include a
sophisticated audience loyalty system, often developed in partnership with
advertisers. As a result, control over television portals would seem to be of
key strategic importance for the big media brand names, not only to capture
television viewers, but also as a fresh source of revenues.
In terms of business models, this third scenario also assumes several
fundamental changes.
With the boom in personal TV, the TV business is set to evolve
significantly by moving towards a model that directly finances a TV
programme, instead of a programme grid. This transformation implies new
economic and financial relations between producers, TV channel operators
and TV platform access operators.
The boom in personal TV notably assumes that players in the sector are
in a position to renew the model of TV financing based on advertising by
building a new kind of relationship with advertisers. By entering the age of
personal TV, television is effectively breaking with the advertising model
based on broadcast slots. Personal TV should consequently promote the
emergence of new forms of TV advertising including product placement, the
financing of programmes by brands, split screens and targeted interactive
advertising.
More generally, the coming of the age of personal TV should be
synonymous with a boom in interactive television, leading to the
enhancement of programmes, as well as more direct links to television
viewers.
Scenario 3 – Usages of various media in 2015
[Chart plotting the average time of use (in minutes/day, 0 to 240) against the daily reach (in % of the population, 0 to 100) of fixed TV, the internet, radio, pre-recorded TV programmes, the mobile phone, video games, music, mobile TV, magazines and newspapers.]
Source: IDATE
These changes not only imply the development of new business and
financial relations between advertisers and TV channels, but also involve
rapid changes to the competences of the main protagonists in the television
sector. With the generalisation of VOD offerings, the business of
broadcasting in particular should steadily evolve towards that of a "TV
programme aggregator or distributor." It will no longer be a question of linear
TV programming, or of maximising audience share throughout the day, but
of maximising TV programme "sales" via a television portal that is
"recognised" by TV viewers. TV channel operators should consequently
become non-linear content vendors.
Conclusions
By offering three visions of the future of television, its usages and
associated business models, this article highlights the importance of the
opportunities and challenges facing the TV sector over the next 10 years. It
should also enable us to understand that the future of TV depends on a
large number of factors whose combined effects will, in the end, be
extremely difficult to grasp.
The reality of 2015 is probably situated at the crossroads of the three
broad visions described above, with several variations of these scenarios
potentially feasible.
Without being able to describe exactly how the TV sector will look in ten
years' time, it is nevertheless safe to say that TV is evolving towards a new
paradigm whereby television consumption will be less linear and more
interactive, personal and nomadic.
Stages in the European television industry
- 1930 – 1980: the age of public analogue TV. Channel operator's strategic goal: inform, cultivate, entertain. Key activity: production. Challenge: controlling frequencies.
- 1980 – 1995: the age of commercial analogue TV. Strategic goal: maximise the viewing audience. Key activity: programming. Challenge: programme supply (control over broadcasting rights).
- 1995 – 2005: the age of multichannel digital TV. Strategic goal: optimise coverage. Key activity: assembly. Challenge: appeal and marketing of service offerings.
- 2005 – ...: the age of personal TV. Strategic goal: help viewers (re)find "their" programmes. Key activity: the programme access guide. Challenge: media brand name clout (the ability to generate demand).
- 2010 – ...: the age of egocasting. Strategic goal: maintain the audience. Key activity: creation / talent. Challenge: cost control and the evolution of the existing business model.
Source: IDATE
Opinion
Interview with
Evelyne LENTZEN
Chairman of the CSA
of the French Community of Belgium
Conducted by Rémy LE CHAMPION
C&S: Does a homogenous model for regulatory bodies exist in Europe?
Evelyne LENTZEN: The first regulatory bodies for broadcast content were
founded in the mid-1980s, in the wake of the growth of private radio and
television stations and the collapse of public broadcasting monopolies. The
parliaments and governments of European countries subsequently decided
that it would be appropriate to entrust the monitoring – including the
implementation of rules set by the legislator – of a sector situated at the
crossroads of human rights and fundamental freedoms to an autonomous
body (independent of political and economic powers). In many cases the
structure of the administrative authority was selected. Although the exercise
of these functions is far from homogenous and, in some cases, limited, these
authorities all share the power to award authorisations and licences, to monitor
broadcasters' respect of legal conditions and to impose sanctions in cases
where obligations are not fulfilled. Luxemburg is an exception to this rule.
There are currently over one hundred regulatory bodies worldwide. It goes
without saying that they are not all the same. Their internal structure, the
way that their authorities and staff are appointed, their financial and human
resources, as well as their various competences differ according to how they
fit in with existing state structures, the administrative practices of these
states, market structures, the political choices of parliaments and
governments etc. Most regulatory authorities regulate both the public and
the private sectors, but this is not the case with all bodies. The German
Landesmedienanstalten, for example, are only competent in private
broadcasting.
Diversity is consequently a fact in the world of regulation.
COMMUNICATIONS & STRATEGIES, no. 62, 2nd quarter 2006, p. 111.
C&S: Would it be desirable to have just one regulatory body and what would
such an organisation look like?
EL: The regulatory authorities form part of the history of each democratic
state. It would be vain to hope that the logic of the unitary State, for example,
could be imposed on a federal State or vice-versa.
C&S: In some countries, like the United Kingdom and Italy, a single body is
responsible for all of the electronic communication sectors, whereas in others
a clear line is drawn between telecommunications and audiovisual regulation.
Is this distinction likely to continue in the age of convergence?
EL: There are all types of configurations: from the coexistence of several
bodies for broadcast content to the so-called "convergent" regulator.
A "convergent" regulator with a varying range of competences covering both
the audiovisual and telecommunications sector is not necessarily a "sole"
regulator.
The number of regulatory bodies with joint content-infrastructure
competences has been rising for several years. This is more or less the case
in at least eight countries in Europe (Belgium, Bosnia-Herzegovina, Spain,
Finland, Great Britain, Italy, Switzerland and Slovenia).
National traditions and circumstances play a key role. There was, for example,
no regulatory authority for telecommunications before the Italian AGCOM
was founded.
Structural simplifications, consisting of merging several authorities into a
single body, have also been undertaken in Austria, Belgium (the Flemish
community), Ireland, Norway and Switzerland, for example.
It was acknowledged at the EPRA (European Platform of Regulatory
Authorities) that the creation of a single or convergent body for
telecommunications and audiovisual did not necessarily go hand in hand
with a convergent vision of communication regulation. Internal decision-making
structures provide some indication of this: AGCOM has, for example,
two commissions that deal with infrastructures-networks and services-products
separately.
The advantages and drawbacks of "convergent" regulatory authorities, or
even single bodies, are often discussed. It is worth remembering that the
cultures and public interests of the audiovisual and telecoms industries
differ significantly. This is undoubtedly where the greatest difficulty
lies (in not merely regulating content with the economic rules applied to
infrastructures). In several countries another difficulty also undoubtedly lies
in the level of political independence of telecommunications regulatory
bodies, which is generally lower than that of audiovisual regulators.
The case of the federal States opens up another perspective, namely that of
cooperation.
In Belgium, for instance, the audiovisual regulatory authorities are organised at
the community (regional) level, competent in questions of culture, while the
telecommunications regulatory authority has been created at the national
level. At first glance, this would consequently appear to represent a
content-infrastructure split. However, the laws of the French-speaking
community, for example, have granted competences in terms of
infrastructures to the regulator, namely the Conseil supérieur de
l'audiovisuel. In fact the "Cour d'arbitrage" (constitutional court) has always
considered that the competences of Belgium's communities in terms of
broadcasting and television are "Not to be linked to a specific form of
broadcasting or transmission. It enables the communities to regulate the
technical aspects of transmission that are an accessory to the field of
broadcasting and television. Responsibility for regulating other aspects of
the infrastructure, notably including the policing of the airwaves, falls to the federal
legislator." It has also stated that, "Recent technological developments have
meant that the fields of broadcasting and television on the one hand, and
telecommunications on the other, can no longer be defined according to
technical criteria such as the underlying infrastructure, the networks or the
terminals used, but naturally according to content-related and functional
criteria," and that "Broadcasting that includes television can be distinguished
from other forms of telecommunications insofar as a broadcast programme
distributes public information that is destined, from the broadcaster's point of
view, for all or part of its audience and is not of a confidential nature.
Services that provide individualised information, on the other hand,
characterised by a certain degree of confidentiality, do not come under the
jurisdiction of broadcasting and are monitored by the federal legislator." The
Court goes on to say that, "The main characteristic of broadcasting and
television is that it provides public information to the entire audience […],
which also includes broadcasting at individual request. Broadcasting
activities are not losing this feature just because a wider choice of
programmes is offered to television viewers and audiences due to advances
in technology." The Court concluded that the federal authority is not the only
authority competent to regulate networks and electronic communications
infrastructures, and that there is, "An absolute necessity to ensure that the
federal authority and the communities cooperate" 1 to manage shared
electronic communication infrastructures. The "Cour d'arbitrage" set a
deadline for this cooperation, which has now expired, without any such
cooperation being organised by the governments in question to-date.
1 Judgements 128/2005 of July 13th 2005, 7/90 of January 25th 1990, 1/91 of February 7th 1991, 109/2000 of October 31st 2000 and 132/2004 of July 14th 2004.
C&S: What are the regulatory problems that you see with the accentuation of
the trend towards convergence in each country, as well as on a European
level?
EL: There was a time, not so long ago, when infrastructures like coaxial
cable networks and hertzian (over-the-air) networks were used to transmit
television and radio broadcasting services and when other networks such as
fixed networks were used to transport voice and data. Networks have
become less specialized. The convergence in question is that of
infrastructures and networks.
The European Union set new rules for all infrastructures in the 2003 New
Regulatory Framework for electronic communication infrastructures and
associated services: services or networks that transmit communications
electronically, whether they be wireless or fixed, carrying data or voice,
internet-based or circuit switched, broadcasting or personal communication,
are all covered by a set of EU rules that became applicable on July 25th
2003.
Moreover, we can see that the growing number and convergence of
infrastructures has not led to an increase in the amount of original content
available. The same broadcast contents are offered on all platforms.
One of the regulatory questions that needs to be addressed quickly, in my
opinion, involves the function of the service distributor, which is not defined
on a European level. The function of service aggregation and delivery to the
consumer can effectively be distinguished from the functions of media
service provider or broadcaster (editorial responsibility for the choice of
audiovisual content and determination of the manner in which it is organised)
and network operator (signal transmission: compression and transport).
These functions can be integrated in a single market player. The differences
and interplay between these three functions form a multimedia value chain
that is schematized by the consultants of Arthur Andersen as shown
below 2.
Although the network operator only has technical responsibility for signal
transmission and the media service provider or broadcaster exercises
2 Arthur Andersen, Outlook of development of the Market for European audiovisual content and
of the regulatory framework concerning production and distribution of this content, June 2002,
p. 60.
editorial responsibility in the choice of content organisation, what
responsibility is borne by the service distributor that establishes commercial
contacts with the public? Do distributors of broadcasting services bear only a
technical and commercial responsibility, similar to that of internet services
providers who are subject to laws on intermediary service providers covered
by articles 12 onwards of the "Electronic Commerce" Directive? Or do they
also have a social responsibility due to the assembly and aggregation of the
audiovisual contents of the offering that they market to the public? Moreover,
the simplification and articulation of the European rules on content (whether
audiovisual or not) would be most welcome.
Similarly, ensuring the suitability and interaction of services and networks in
a regulatory environment that is contradictory in some aspects would be
highly useful at the various levels of decision-making. Clarification of what
legislators expect from regulation, self-regulation and co-regulation would
also be constructive.
Furthermore, it seems to me that we must move resolutely towards greater
co-operation on a European level.
C&S: How does collaboration and coordination take place between regulatory
bodies on a European level?
EL: Regulatory bodies very quickly sought to contact their foreign
counterparts and the authorities responsible for regulating other sectors
(mainly those bodies competent in terms of infrastructures and competition).
These organisations have set up a large number of forums to promote
meetings and debates.
As far as bilateral relations are concerned, a number of cooperation
agreements have been signed. This is the case, for example, between the
CSA of the French-speaking community of Belgium and the French CSA 3,
as well as between the Swiss authority (OFCOM) and its Canadian
counterpart (CRTC). There are also trilateral agreements linking the three
biggest regulatory authorities in Europe: the CSA in France, OFCOM in
Britain and the DLM in Germany.
The European Commission and the Council of Europe are also promoting
forms of bilateral assistance. The French CSA is consequently supporting
and assisting the Polish (KRRiT) and Lithuanian (CRT) authorities, while the
Italian body AGCOM is doing the same for the CRA of Bosnia-Herzegovina,
3 For the record, the name "Conseil supérieur de l'audiovisuel" comes from Belgium; it was
adopted in France in 1989, two years after the creation of the CSA of the French-speaking
community of Belgium.
the German LFK of Baden-Württemberg is assisting the council of Latvia
(BCL) etc.
The European Platform of Regulatory Authorities (EPRA) was founded in
April 1995. It now brings together 48 regulatory bodies from 40 different
countries. The aims of the EPRA are to exchange experiences and
information and to host a forum for the discussion of practical solutions to
legal problems concerning the interpretation and application of audiovisual
regulation. Its bi-annual meetings are devoted to debates on topical issues 4
via presentations of "national" case studies and summaries of surveys by
member organisations. The "European" nature of EPRA is to be understood
in the broadest sense of the term since authorities from countries such as
Turkey and Israel are members.
Alongside EPRA, regional institutions have, over the years, acquired a
certain importance. Founded in 1997, the Mediterranean network of
regulatory authorities covers 13 bodies from 11 countries (Albania, Cyprus,
France, Greece, Israel, Italy, Malta, Morocco, Portugal, Spain and Turkey).
This network has the dual mission of acting as a forum for exchange and
discussion, as well as to strengthen historical and cultural links between
Mediterranean countries. Regulators in Nordic countries have also
established contacts at a regional level. The regulatory authorities of a
certain number of central European countries are feeling the need to share
their experiences at special meetings.
The European Commission has fallen into the habit of inviting regulatory
authorities to the various consultations and meetings that it organises. It is
effectively down to the regulatory authorities to ensure the effective
application of national legislation, which has itself transposed the TWF
directive. The European Commission has gone a step further by creating a
high level group of regulatory authorities. The European Commission's
proposal of December 2005 reflects this "privileged" relationship.
European authorities are stakeholders in another forum, Broadcasting
Regulation & Cultural Diversity (BRCD), founded in 2004 at the initiatives of
the Catalan authority and open to producers, universities, governments etc.
C&S: Why are there so many forums and meetings?
4 The main topics dealt with at the last EPRA meeting in May 2006 were political advertising
(case study and monitoring), the revision of the Television Without Frontiers directive and the
proposed directive on audiovisual media services (part of the meeting was devoted to
advertising and especially questions related to sponsoring and product placement), the reform
and convergence of regulatory authorities (from a hands-on perspective).
EL: It is quite simply because the mission of regulatory authorities, at the
crossroads between human rights and fundamental freedoms and political
and economic interests, is not an easy one. The in-depth knowledge of
legislation, technological changes and company profiles, as well as the
protection of fundamental public interests that is required make regulatory
authorities one of the public mechanisms belonging to what I would call the
"fine-tuning" of democracy. Perhaps these authorities are, or will become,
one of the ramparts of democracy. Only the future will tell.
It is worth remembering that it is the impact of radio and television on public
opinion that has led European and national legislators to set limits on
initiatives in these fields, on their free distribution and on freedom of expression.
It is a result of this impact that legislators have entrusted regulatory
authorities with the task of ensuring that these rules are fairly applied.
C&S: Are regulatory bodies sufficiently well-equipped, both on a legal and an
operational level, to manage media concentration and promote pluralism?
EL: Although not all regulatory authorities are competent in this respect, they
all share these concerns. National systems are complex. Many legal rules
and regulatory instruments exist, and they generally involve
the Competition Council.
For "convergent" authorities, it is a question both of ensuring pluralism in the
service offering to give the public access to a wide range of information
sources and of ensuring that this plural offering has access to infrastructures
and networks under transparent and non-discriminatory conditions. This
involves analysis of both the wholesale and the retail markets.
Thanks to our function of authorising and monitoring the sector – and
sometimes related sectors – regulators have relevant information on the
property structure of service providers or broadcasters, service distributors
(or aggregators) and network operators regardless of the platform in
question, and we try to keep this information up to date 5. Overviews are
regularly provided by the EPRA 6, which, in 2000, set up a database that
EPRA members complete with information on the services that they
authorise. This is no easy task in view of the growing number of television
services on offer (almost 4,000 at the beginning of 2006 according to figures
from the European Audiovisual Observatory). The high-level group of
regulatory authorities and the European Commission also decided to set up
a shared database last year.
5 We regularly publish the results of our analysis.
6 The Dutch regulatory authority has published an excellent summary of the situation in Europe.
The analysis of the balance sheets and financial statements of companies
and groups – media service providers or broadcasters, service distributors
and network operators – is interesting from this point of view, as it provides
information on the breakdown of revenues, including those from advertising.
Information on the market and audience shares of television services and
channels comes from audiometric institutes that use their own methods of
collecting and analysing data.
In addition, there are specific questions of multimedia cross-ownership and
relocations. This latter question is on the agenda for discussion of the
revision of the TWF Directive, with 13 states (out of the 25 European Union
members) requesting that relevant criteria (such as the target audience) be
taken into account in establishing territorial competence. The question of
deliberately targeting audiences and markets needs to be considered separately
from the mere success of the foreign services and channels, mainly from
neighbouring countries, that are available via cable, hertzian, satellite or
other networks.
We also pay a lot of attention to the "bottlenecks" in terms of access to
content and infrastructures. This includes questions of "single sellers" and
"single buyers," notably as far as sports rights are concerned.
C&S: Beyond collecting and processing information, beyond all of the
analyses and conclusions that you may come to based on the numerous
sources at your disposal, which notably reveal that concentration in the
sectors in question has increased over the last decade, are you equipped to
face this phenomenon?
EL: The response is mixed. Every regulator you ask will give a different
answer. Yet all will agree that freedom of expression is the cornerstone of our
democratic political systems and that the public's freedom to access a
pluralist offering ("the plurality of independent media reflects the diversity of
the largest possible range of opinions and ideas" 7 ) is a key objective of our
regulatory activity, as is the protection of human rights and fundamental
freedoms, which include the protection of minors, of human dignity and of
consumers' rights.
All regulators will also agree that it is not for them to interfere with the
editorial responsibility of media service providers-broadcasters or in the
investment/disinvestment decisions of companies.
This is our scope for action. The means at our disposal are stipulated by
regulatory provisions on a European and national level: establishing that a
7 Article 7 §1 of the decree by the French-speaking community of Belgium of February 27th 2003 on broadcasting.
player holds a dominant position and initiating a procedure of assessing the
broadcasting services offering and adopting remedies on the one hand; and
defining and analysing relevant markets in terms of infrastructures and
electronic communications networks, identifying powerful operators in these
markets and implementing remedies on the other. At the CSA of the
French-speaking community in Belgium, we are involved in both processes.
The results should be seen a year from now.
C&S: Do regulatory authorities have real power to promote cultural diversity?
EL: Cultural diversity is also an international challenge and the Broadcasting
Regulation & Cultural Diversity (BRCD) was set up in a bid to rise to this
challenge.
C&S: How can you reconcile economic nationalism with concentration on a
global scale?
EL: Each player has its own territory. Regulatory authorities act in the
framework of national or "regional" laws as far as federal states are
concerned. It is not the role of regulatory authorities to protect any form of
economic nationalism or to support global concentration. Most of us do not
have economic competences strictly speaking, but are more culturally
oriented.
Corporate groups do not all have regional or national boundaries as far as
their investment/disinvestment horizons are concerned. The latter follow
their own sector-based or financial logics, whether these be "rational" or not.
Whenever I am asked to describe the role and missions of the CSA in the
audiovisual industry, I like to use the image of a triangle whose three sides
are:
- players in the multimedia value chain (media services providers or
broadcasters, distributors and public and private operators),
- the legislator (parliament and government),
- and the public.
As for the CSA, it is situated in the middle of this triangle and has to interact
with all three sides. These are two-way interactions.
We are also a centre for observing the audiovisual landscape where roles
are played and sometimes exchanged, where alliances are formed and
broken, where interests overlap and conflict and where creativity seems
continuously unlimited.
This is naturally a complex position: we do not seek to usurp the
role of the other players. Should we seek to do so, we would rightly be
stopped.
C&S: Isn't the disparate economic influence of audiovisual, telecommunication
and information technologies players likely to lead to distortions in
competitiveness and competition in the emerging markets born of
convergence?
EL: It is clear that the financial resources of groups of network operators
cannot be compared with those of service producers, including multimedia
players. The growth in networks, their convergence, their strategies that now
privilege "triple" and even "quadruple" play, technological uncertainties
regarding the choices of the public, or the majority of the public, the debate
over the future of fixed and mobile communications, the challenges and
opportunities of portability and HDTV, the advent of new players that do not
necessarily have a past as engineers or financiers, etc.: all of this upsets the
analytical framework.
However, we must admit that nobody has ever sold or rented access to an
infrastructure or network to the public without contents related to the offering.
This is the rock 'n roll that sells offerings, not the copper pair.
Although there are "emerging" networks and infrastructures in the
transmission of video contents, these contents are the same as those found
on networks traditionally dedicated to broadcasting, although their
presentation, their form and their consumption are changing or diversifying.
A goal in football remains a goal in football whether it be broadcast via
coaxial cable, a fixed network, via satellite, a hertzian network or the
internet, whether it be broadcast on your television screen in your living
room or via GSM, whether it be transmitted in an analogue or a digital
format. The same applies to other examples such as audiovisual works and
advertisements. Incumbent telecommunication players are currently
experiencing this phenomenon. These players need non-data contents
(programmes). They have to learn that, "Providing animated images,
accompanied by sound or otherwise, with a view to informing, entertaining
and educating the general public," on a linear or on demand basis, fulfils the
conditions dictated by the responsibility inherent in their role in shaping
public opinion.
The stakes are, and will remain, cultural. Regardless of how old or new
networks develop, it is our collective responsibility to ensure that these
networks transmit contents that continue to "speak to" or "resemble" our communities
of citizens under similar conditions.
The European Commission was aware of this when it decided to separate
the regulation of transmission from that of content, while taking into account
the links existing between them, notably to guarantee media pluralism (e.g.
the possibility of reserving a frequency for a radio service, "must carry"
obligations, the regime of conditional access to digital radio and television
services) and to apply the principle of technological neutrality to all electronic
communication networks and infrastructures. The Commission's proposal to
transform the "Television Without Frontiers" directive into the "Media
Services" (linear and non-linear) directive stems from the same concern. It
could undoubtedly be judiciously completed to further establish the future of
rules related to content in general (the protection of minors, human dignity
and consumers, the integrity of national or European works etc.) in a world
where how audiences access services is undoubtedly diversifying more
rapidly than their consumption habits.
The playing field is wide open at all levels.
There are questions concerning all issues from appropriate regulatory
measures, financing structures and "success-guaranteed" investment
choices to growth factors and the future consumption habits of young
people.
All this demonstrates the importance of maintaining and developing public
service broadcasters: the criteria of programming and the assessment of
public service channels follow more complex logics and are more anchored,
by virtue of their management contract, in social objectives. A pluralist public
service that is balanced and strong, that reflects our communities of citizens and
speaks to our head and our heart, has clear perspectives for the future in a
of the market. If this service is able to remain focused on the values and
missions previously defined in a clear contract with the government, rather
than obsessed with successive fads like its private competitors, it will be far
better prepared to offer the public original and meaningful content in the
virtually unlimited offering that is set to unfold in the digital age.
C&S: Will regulatory bodies try to jointly influence the European Union's
legislative process and notably the revision of the TWF directive?
EL: Regulatory bodies are far from absent and mute in the European Union
legislative process. Their positions with regard to the future directive on
content and media services that is supposed to replace the TWF directive
are not always those adopted by their States or governments.
Within the EPRA, the reform process is very often approached via questions
such as the field of application of the future directive and the competence of
regulatory authorities in the field of services on demand (non-linear), the
principle of the country of origin and the determination of the criteria linking a
service to a jurisdiction, advertising, the protection of minors and human dignity, ways of
supporting the production of European and national works, etc.
At each EPRA meeting a representative of the European Commission and
the Council of Europe present an overview of the discussions and decisions
or recommendations adopted in each of these institutions.
The high-level group of regulatory authorities aims to strengthen its
cooperation with the European Commission in the application of the directive
and its adaptation to technological advances and the markets.
Information consequently circulates and the positions adopted by the various
parties become known.
However, there are no joint positions or declarations by regulators. It has
been clearly decided that the EPRA will not act as the regulators'
"spokesperson".
C&S: In January 2006 the RTL group moved RTL-TVi from the French-speaking
community of Belgium to Luxemburg, arguing that obligations to conform to
the TWF directive, which states that a channel can only have a licence in a
single country, and the regulatory framework of the French-speaking
community in Belgium were too restrictive. Other cases oppose the Swedish
authority and the British regulatory body OFCOM. Should European Union
regulation make it possible to deal with this kind of relocation?
EL: In the framework of the 15th EPRA meeting held in May 2002 in
Brussels, the CSA conducted a survey of the situation in over 35 states, a
survey that revealed the extent of targeted content and advertising practices
and their impact. Various scenarios could consequently be deduced from
these results.
In the European Union four waivers to the principle of freedom of reception
can be cited in the preambles and articles of the TWF directive and in the
decisions of the European Court of Justice. These procedures are
nevertheless long, involved and off-putting as a result.
Throughout the consultation process of the revision of the TWF directive, the
CSA of the French-speaking community of Belgium underlined the
fundamental role played by establishing territorial competence.
This is one of the reasons why the European Union is able to act in the audiovisual field: to ensure free competition between services (and not contribute to concentration) and to promote the free circulation and expansion of the content offering (and not its impoverishment).
However, it has to be said that the country of origin principle, which lies at
the heart of the European directive, as the Commission reminds us, is
applied blindly in some cases, although the conditions laid out in the
directive no longer correspond to the reality of the situations in markets that
are becoming increasingly international. The failure to account for cases in
which the least restrictive location is chosen ultimately leads to the absurd
situation whereby most of the European broadcasting industry could be
regulated by the UK, Luxembourg or even France.
For small and medium-sized countries – especially those neighbouring a large, linguistically homogenous market – the risks of companies relocating, and of targeted content and advertising practices, call into question the balance between the freedom of establishment and circulation of services, the maintenance of an audiovisual industry within their borders, and the safeguarding of the diversity of ideas, opinions and cultures, all vehicles of freedom of expression.
The proposed directive on "media services" that is supposed to replace the
TWF directive offers no appropriate response to at least 13 states that have
expressly asked the European Commission to review its text with regard to
this question.
In the French-speaking community of Belgium, the criterion of services targeting entirely or mainly its public was reflected in legislation. Why shouldn't the same thing happen at a European level?
C&S: What do you think of the systems set up in Austria, Hungary and
Denmark where the regulatory authorities also manage funds to support
audiovisual production?
EL: Personally, I don't believe that the regulatory function should include the
management of a fund to support audiovisual production. This leads to
editorial choices that are based upon other competences and could run the
risk of interfering with our other missions.
In French-speaking Belgium, our role, as defined by legislation, is to check whether the authorised broadcasters are contributing to audiovisual production on a pro rata basis according to their turnover, either via co-productions or via payments to the Centre for film and audiovisual production. It is also the responsibility of the CSA to determine the basic rate of the annual contribution paid by each broadcaster, but not to manage the Centre for film and audiovisual production.
But, as I have already said, all of these regulatory models exist and are
relevant in terms of individual national situations.
C&S: Should mobile telephony now be considered as a separate medium, and how should it be regulated as such?
EL: As I have said, mobile telephony is seen as a channel – an infrastructure – that is used to transport data and video content.
Mobile telephony operators are consequently subject to the rules related to
networks and infrastructures.
If they also produce the audiovisual content that they carry, they are subject
to laws related to media services, which in this case are non-linear.
The question of their role as a service distributor, as such, has not yet been debated at a European level. This is the solution that has been selected by the French-speaking community of Belgium.
Articles
Alternative Wireless Technologies
Status, Trends and Policy Implications for Europe
The Scope of Economic Sector Regulation
in Electronic Communications
Alternative Wireless Technologies
Status, Trends and Policy Implications for Europe
(*)
Sven LINDMARK
Chalmers University of Technology
Pieter BALLON
TNO-ICT and SMIT, Vrije Universiteit Brussel
Colin BLACKMAN
Independent consultant and editor
Erik BOHLIN
Chalmers University of Technology
Simon FORGE
SCF Associates
Uta WEHN de MONTALVO
TNO-ICT
Abstract: Besides 3G, a number of alternative wireless technologies (AWTs) have
emerged. Such AWTs create new growth opportunities, but may also constitute a
disruptive threat to existing networks and their supporting communities. The objectives of
this paper are firstly to map AWTs' deployment and current trends, drivers and bottlenecks
in Europe; and secondly to identify policy implications for Europe. Specifically, we
consider: WLAN / Wi-Fi, UWB, WiMAX, Flash OFDM, UMTS-TDD and mesh / ad-hoc
networking technologies. Policy recommendations relating to R&D encouragement,
stimulating competition and encouraging market entry, spectrum allocation and standard-setting are put forward.
Key words: mobile communications, Wi-Fi, WiMAX, Ultra Wide Band, mesh and ad-hoc
networks, UMTS-TDD, Flash OFDM, Europe, policy.
(*) This paper is based on the results of the "Mapping European Wireless Trends and Drivers"
(MEWTAD) project led by E. BOHLIN, conducted for the European Commission – Joint
Research Center – Institute for Prospective Technology Studies. See the forthcoming IPTS
Technical Report (BOHLIN et al., 2006). Please note that the findings presented herein are
solely the personal opinions of the authors, and should not be construed to represent the
opinions of the European Commission. The authors would like to acknowledge Jeroen HERES,
Annemieke KIPS, Mildo van STADEN, Richard TEE, Silvain de MUNCK and Willem-Pieter van
der LAAN, all from TNO, for extensive data gathering and processing.
COMMUNICATIONS & STRATEGIES, no. 62, 2nd quarter 2006, p. 127.
The European telecommunications and electronics industry has
enjoyed outstanding success in the second generation (2G) of
mobile telecommunications. In a relatively short time period,
European players have established leading positions at the system, handset and operator levels of the industry. As in all lucrative industries, this
lead will not be left unchallenged. In the ongoing transition to third-generation (3G) mobile communications, and perhaps even more so in the
coming fourth generation (4G), Asian and American players are going ahead
with new initiatives. A plethora of competing (and complementing) wireless
technologies and solutions, often stemming from the computer industry,
have entered the scene. For brevity, these are denoted alternative wireless
technologies (AWTs).
In some areas, notably wireless LAN applications (WLAN) for offices,
homes and "hot spots", they have already reached substantial usage and
diffusion. Other alternative technologies – including WiMAX, UWB (Ultra
Wide Band) and meshed and ad-hoc networks – show promising signs of
fulfilling existing and growing user needs. Clearly, the diffusion and usage of AWTs may contribute to providing high-quality public services and promoting quality of life, constituting an opportunity. On the other hand, if AWTs
succeed, there is a risk that the leading European position will be seriously
challenged, resulting in lost growth and job opportunities. Hence, there is a
strong and urgent need to thoroughly research the usage of AWTs, as well
as the trends and drivers currently catalysing their diffusion, also in relation
to the newly adopted "i2010" policy framework for the information society.
The objectives of this paper are to map AWTs' development, deployment
and usage in Europe as well as their current trends, drivers and bottlenecks;
and, given the current i2010 policy framework, to identify policy options and implications for the European Union (EU) and its member states (MS).
In principle, the concept of AWTs includes all emerging wireless
technologies with the exception of established cellular technologies.
However, due to the number of countries to be covered, it was beyond the
scope of this research project to cover all such technologies. The set of
technologies selected was based on an early evaluation of their presumptive
impact upon the presently dominant cellular paradigm. In sum, the AWTs
covered in this paper are those existing in the market today and/or on their
way towards standardisation or in (advanced) R&D stages and/or potentially
presenting a challenge to traditional business models in the mobile market.
Specifically, we consider the following:
- short-range protocols (WLAN / Wi-Fi, and UWB);
- longer-range protocols (WiMAX, Flash OFDM and UMTS-TDD 1);
- meshed and ad-hoc networking.
The rest of this paper is structured as follows. The next section presents
an overview and analysis of the availability and usage of the main AWTs in
the EU, as well as operators' strategies. The subsequent section analyses
the main drivers and barriers for their diffusion and the European position
with respect to AWTs. The final section identifies implications of AWTs for
the EU over the next 10 years, in terms of the policies required for their
evolution, diffusion and competition.
AWTs in Europe
This section summarises our observations regarding AWT activities in
Europe. The extent of AWT diffusion for all 25 EU member states and, in a
more limited form, for the 4 candidate countries was mapped out 2 .
Clearly the most dynamic markets, in terms of the variety of AWTs being used or deployed, are situated in Western Europe and Scandinavia. France, Germany, Ireland, the Netherlands, Sweden and the UK present the most diverse European markets in terms of AWTs, with almost all AWTs under review being deployed or used.
Unsurprisingly, WLAN (in the form of WiFi) is the most mature AWT on
the market. The availability and usage of AWTs other than Wi-Fi are far
more incidental. However, despite the limited and fragmented nature of the
diffusion of these AWTs, there is a certain dynamism related to them in
many countries. Table 1 demonstrates that, while UWB and Flash OFDM are
marginal or non-existent on the EU market, (pre)WiMAX, Mesh/Ad-hoc
technologies and UMTS-TDD are available or being deployed in many, or
even most, EU member states.
1 UMTS-TDD is considered as an alternative mode of operation of the common UMTS-FDD
standard. As it can, for example, operate in unlicensed spectrum, UMTS-TDD is regarded here
as a potential AWT.
2 Data sources for the targeted information were non-confidential, publicly available or publicly
verifiable. To gather them, extensive desk research activity was carried out, involving academic
and consultancy sources, official country- and region-specific data, the business press,
specialized web information, and corporate information provided by the main AWT providers in
each country. In addition, a series of in-depth telephone interviews were conducted with country
experts for each of the 25 EU countries.
Table 1 - Overview of selected AWT activity in EU25 (as of mid-2005)
[Table layout lost in extraction. Rows: the 25 EU member states (Austria, Belgium, Cyprus, Czech Rep., Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Poland, Portugal, Slovakia, Slovenia, Spain, Sweden, UK). Columns: UWB, WLAN, (pre)WiMAX, Flash OFDM, UMTS-TDD, Mesh/Ad-hoc. Cell values: commercial, deployment, trial or use. The WLAN column reads "commercial" for essentially all member states; the per-country entries for the other technologies could not be recovered.]
Note: The activity stated relates to the most advanced use encountered in a country.
Table 2 summarises our observations on the main service providers and
main usage of AWTs in the EU as encountered during our research.
This overview demonstrates that AWTs are mostly being used as
substitutes for fixed broadband connectivity (i.e. in the case of being
positioned as 'wireless or portable DSL'), but also as substitutes for mobile
or at least for nomadic internet connectivity. The likelihood of AWTs actually constituting a considerable threat to (traditional) operators' positions is very much dependent upon the types of players driving the service offering, as well as on their strategies. For non-operators, i.e. in the case of the individual
provision of hotspots and the establishment of (free) wireless zones, the
strategies encountered in the European market are:
- communitarian: offerings by communities of individuals;
- location-based: municipalities and universities wishing to increase the
attractiveness of their location or site;
- commercial: aimed at indirect returns from increased sales of other
products or services (hotels, etc.).
For new entrant operators, such as new generations of WISPs, their
strategies can be labelled as:
- niche-player strategy: in segments of the business market, or providing rural and remote coverage;
- mass-market strategy:
  - serving consumer and small business markets in urban areas (cream-skimming or competing head-on with existing networks);
  - serving consumer and business markets in large areas with underdeveloped infrastructure.
Finally, established operators' strategies vis-à-vis AWTs can be
summarised as:
- pre-emption strategy: often by the acquisition of small new entrants, in
order to discourage or preclude entry by other operators;
- non-cannibalisation strategy: deployment in small niches where no
overlap exists with traditional activities;
- integration strategy with small scope for AWTs: integration of AWTs
into the overall operator offering (for niche use);
- integration strategy with large scope for AWTs: integration of AWTs
into the overall operator offering, with AWTs constituting a considerable
part of the value proposition.
Table 2 - Provisioning and usage of AWTs in Europe (mid-2005)

UWB
Provisioning: Both civilian and military testing, but no current provisioning.
Usage: Still in the stage of research and technological testing.

WiFi / WLAN
Provisioning: Offered mostly by established mobile and fixed operators. Other providers include new entrant hotspot providers, municipalities, universities, hospitality providers, communities, and individuals.
Usage: Internet and intranet access in WiFi hotspots and zones. Also widespread in-house private use.

(pre)WiMAX
Provisioning: Offered by established mobile and fixed operators as well as by new entrants.
Usage: Fixed and nomadic wireless access in rural areas and urban centres. Aimed at both the business and the residential markets.

Flash OFDM
Provisioning: Trialled by an established mobile operator.
Usage: The mobile data trial using Flash OFDM was apparently aimed primarily at the business market.

Mesh / ad-hoc
Provisioning: Offered mostly by communities of individuals. Other providers include new commercial service providers.
Usage: Data transport and Internet access, mostly for free.

UMTS-TDD
Provisioning: Offered by new entrant network providers, as well as by established operators.
Usage: Fixed and nomadic wireless access, mainly in urban centres.
In short, established operators have taken the lead in the deployment and exploitation of AWTs throughout most of Europe, as shown by Table 2. These operators are mainly using pre-emption, non-cannibalisation or limited-scope integration strategies. This suggests that there are at present constraints in Europe against AWTs being used by other players and in other ways, even though in some countries there are considerable non-operator and new entrant activities.
Assessment of European position
In order to assess the impact of AWTs and identify the relevant policy actions, we briefly conduct the following analysis. Firstly, we identify the main drivers and barriers for AWT diffusion in Europe. Secondly, we assess the position of Europe and of European players with respect to AWTs. Thirdly, we
identify the need for policy change in a number of key areas.
Table 3 - General AWT drivers and bottlenecks

Drivers:
- Poor fixed broadband infrastructure development in many small cities, towns, rural and remote areas across Europe.
- Government incentives, programmes and public-private partnerships to stimulate broadband connectivity.
- Competition in Wi-Fi markets, e.g. because of relatively low prices of Wi-Fi deployment, driving prices down and ensuring relatively high coverage in a number of countries.
- Success of private in-house WLANs, which might stimulate the usage of public WLANs.
- Emerging integration of AWT and mobile capabilities in dual-mode handsets.
- Falling hardware prices and backhaul costs.
- Limited number of licensed operators in some markets, creating incentives for new stakeholders to enter national markets using AWTs.
- New applications and possibilities such as VoIP over wireless, deployment of AWTs on trains, etc.
- Expected expansion of WiMAX with mobility characteristics.

Bottlenecks:
- Lack of interconnection and roaming agreements, especially between new AWT operators.
- Pricing models of public hotspot access in many EU countries still oriented towards occasional use, limiting the scope of AWTs to the business market.
- Licensing regimes in many EU countries imposing limitations on spectrum availability, deployment, handoff and integration of AWT cells, and generally allowing technical experiments with AWTs but no market experiments.
- Persistent standardisation problems.
- Lack of user-friendliness in access, authentication and billing procedures.
- Lack of structural advantages (in terms of speed or cost) over fixed broadband, and therefore a lack of incentives for AWTs in areas with well-developed fixed broadband infrastructure.
- Potential saturation and congestion of unlicensed spectrum in prime locations.
- Limited number of terminals and other certified equipment on the market.
- Lack of customer education, i.e. in terms of differences between mobile and various AWTs.
- Lack of content applications.
Table 3 summarises general drivers and bottlenecks at the level of
general developments in markets, technologies and regulations – as
highlighted by nearly 30 interviewed country and technology experts.
In conclusion, there are strong forces at play, making it likely that one or
several AWTs will have a substantial impact, even in the mid-term future,
with WLAN leading the way. General barriers, such as the limited number of terminals and content applications and the lack of interconnection and roaming, are likely to be gradually overcome, further driving the diffusion of AWTs.
However, a cursory analysis of the European position with respect to
AWTs also reveals a number of weaknesses. AWTs have no real place
today in European telecommunications and media – nor do they yet form a
part of an overall strategy for communications. AWTs are not understood by
mass markets, nor are AWT capabilities and positioning well understood by
EU industry and technical centres of expertise. Moreover, licensing regimes
in many EU countries impose limitations on spectrum availability,
deployment, handoff and integration of AWT cells, generally allowing
technical experiments with AWTs, but no market experiments.
In some ways the European telecommunications sector is still
counteracting the proliferation of AWTs, or at least mainly considering it as
complementary to cellular technologies. Although this paper shows that
traditional operators are most active in the deployment and exploitation of
AWTs throughout most of Europe, this is mainly a result of defensive pre-emption, non-cannibalisation and limited-scope integration strategies. As
long as incumbents enjoy strong profitable positions in cellular, there are no
true incentives for them to promote alternatives. Similarly, the European
supplier industry holds a very strong position in cellular, while it has been a
follower of – or even resisted – WiMAX, WiFi and other AWTs. In fact, both
cellular operators and suppliers probably view AWTs as a major threat. From
an industrial policy viewpoint, it is then tempting to protect the strong position
in cellular through adopting a restrictive approach to AWTs, since a rapid
transition to AWTs from cellular would disrupt the current European
dominance in 2G and 3G. In conclusion, it seems that in Europe AWTs, at
least in their 'independent' form, are in a weak market position with no
champions, promotion or financial muscle.
However, AWTs are being promoted by other communities, particularly
outside Europe. Failure to firmly grasp the potential of AWTs might leave
Europe far behind in mobile technologies, behind Asia and – unthinkable two
years ago – even behind the USA. In the latter region, the IT industry (not
least, component suppliers like Intel) strongly promotes AWTs; Wi-Fi and
WiMAX are gaining new strength in municipal networks and emergency
services networks, as well as being exploited in research as the platforms for
health and elderly care. In the former region, government-led promotions of
AWTs are reformulating national infrastructures and market players.
Globally, the most advanced AWT market is probably South Korea in terms
of WLAN diffusion, for instance. South Korea is currently bringing together
"mobile" and "broadband" with development of the "Portable internet" using
a home-grown AWT, WiBro, as its carrier infrastructure. AWTs are an
important component of South Korea's critical path for achieving a
ubiquitous network society ("U-Korea"), and for sustaining industrial
competitiveness – the "IT 839 Strategy". Other Asian countries also seem
determined to move quickly into wireless technologies other than, and beyond, 3G (see further BOHLIN et al., 2006).
Taken together, these trends and bottlenecks, and the relative European
weakness vis-à-vis AWTs, suggest that European industry and policy are
somewhat locked into the predominant cellular wireless technology. As well documented in technology management literature (UTTERBACK, 1994;
CHRISTENSEN, 1997), technology-conservative incumbents are likely to
invest in R&D and innovation in sustaining technologies, and neglect the
potentially disruptive ones. Since innovation processes are evolutionary and
path-dependent, positive feedbacks may lead systems to be locked into
inferior technologies (see DAVID, 1985; ARTHUR, 1989), whereby
potentially superior technologies may not take off and diversity may be
blocked and reduced (EDQUIST et al., 2005). Here we may have such a
lock-in situation at the level of the European sectoral system of innovation.
At the very least, this situation requires a review of the current policies with
respect to AWTs. This will be the topic of our final section.
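The lock-in mechanism invoked above can be made concrete with a toy increasing-returns adoption model in the spirit of ARTHUR (1989). The sketch below is our illustration only: the function, parameter names and values are not taken from the cited literature, and the model is deliberately minimal.

```python
import random

def simulate_adoption(steps=10_000, feedback=2.0, seed=None):
    """Toy increasing-returns adoption model: each new adopter picks
    technology A or B with probability proportional to
    (current adopter count) ** feedback.  With feedback > 1, early
    random leads are self-reinforcing, so the market tends to lock in
    to one technology even though A and B are intrinsically identical.
    All names and parameters here are illustrative assumptions."""
    rng = random.Random(seed)
    adopters = {"A": 1, "B": 1}          # seed each technology with one adopter
    for _ in range(steps):
        w_a = adopters["A"] ** feedback  # attractiveness grows with installed base
        w_b = adopters["B"] ** feedback
        pick = "A" if rng.random() < w_a / (w_a + w_b) else "B"
        adopters[pick] += 1
    return adopters

# Repeated runs: count how many end almost fully locked in to one side.
shares_a = [simulate_adoption(seed=s)["A"] / 10_002 for s in range(20)]
locked_in = sum(1 for s in shares_a if s < 0.1 or s > 0.9)
```

Which technology wins varies with the random seed, which is precisely the sense in which a potentially superior alternative (here, an AWT) can fail to take off once the incumbent technology has accumulated an installed-base advantage.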
Policy implications
On June 1st, 2005, the European Commission adopted a new strategic
framework – "i2010: European Information Society 2010" – to foster growth
and jobs in the information society and media industries. i2010 aims to
integrate, modernize and deploy EU policy instruments to encourage the
development of the digital economy, including regulatory instruments,
research and partnerships with industry. In i2010, the Commission outlines
three policy priorities and objectives: (1) the completion of a Single
European Information Space offering affordable and secure high-bandwidth communications, rich and diverse content and digital services, which promotes an open and competitive internal market for the information society and media; (2) strengthening Innovation and Investment, with world-class performance in research and innovation in ICT by closing the gap to Europe's leading competitors, to promote growth and more and better jobs; and (3) achieving an Inclusive European Information Society that provides high-quality public services, enhances quality of life, and promotes growth and jobs in a manner that is consistent with sustainable development (CEC, 2005).
Clearly, diffusion and usage of AWTs may contribute to objectives (1)
and (3) of the i2010, i.e. to achieving an Inclusive European Information
Society that provides high-quality public services and promotes quality of
life, thus constituting an opportunity. AWTs are proliferating and offer
considerable potential for service and business model innovation. AWTs fill
the gaps left by cellular, extending to the areas of life that cellular does not
reach. For some applications they offer lower costs, faster rollout compared
to mobile and, in addition, higher bandwidth. AWTs offer potential for
increased competition as well as for connecting rural and less-developed
areas. This needs to be accommodated by policy.
On the other hand, and relating to objective (2) of the i2010, if AWTs
succeed – and given a current lock-in – there is a risk that the leading
European position will be seriously challenged, resulting in lost growth and
job opportunities. This risk is heightened by the fact that the European ICT
sector is under-investing in R&D compared to the US and Japan 3 . In line
with modern innovation research, we suggest that policy should ensure that
negative lock-in situations are avoided. EDQUIST et al. (2005) suggest three
general policy options for how this objective can be achieved. Firstly, policy
should keep technological rivalry alive by supporting alternatives, through
support for alternative R&D for example. Thus, there is a need to re-think
R&D policy with respect to AWTs.
Secondly, diversity could be introduced to industry through the provision
of, and support for, firm entry and survival of new firms. In addition to a more
traditional competition policy perspective – where competition means lower
prices – new entrants bring new capabilities, cognitive frames, ideas,
products, services, and technologies to the market, which is especially
3 IDATE and OECD as cited in CEC (2005).
relevant under circumstances of technological and market uncertainty 4 .
Consequently, there is a need to rethink competition policy and to stimulate
new entrants.
A necessary condition for entry when it comes to radio communications is the availability of frequency spectrum and a licensing regime allowing for new entrants. This has been identified as a major blocking mechanism for AWT diffusion in Europe, and hence there is a need to rethink spectrum policy.
Thirdly, common infrastructures, in particular gateway technologies and
standards, could provide common bases for the development of new
varieties of products and services at other levels (although closing
technological opportunities at some level). Research has also shown strong
links between the support of successful standards and the success of
supporting firms and regions (cf. GSM, VHS and TCP/IP) 5 . Perhaps there is
still time for Europe to take back some of the initiative in standardisation, and
perhaps policy can support this.
In summary, four main policy areas will be discussed below: R&D policy,
policies for competition and new entries, spectrum policy and
standardisation policy. These areas do not include the full range of policy
issues in need of further consideration. A choice had to be made for the
purposes of this paper. For a fuller treatment of such issues, consult
BOHLIN et al. (2006).
R&D policy
Our research indicates that, in addition to under-investing in ICT-related
R&D in general, Europe seems to invest even less in AWT-related R&D
versus the main competing regions of the world (the U.S., Japan/South
Korea). As mentioned above, to achieve "world-class performance in
research and innovation in ICT by closing the gap to Europe's leading
competitors" (CEC, 2005), Europe needs to specifically target the AWT gap.
Here, timing is of crucial importance since, in the presence of path dependencies and positive feedbacks, there is a "narrow policy window
4 Admittedly, the relationship between competition and innovation is ambiguous also in
telecommunications (BOHLIN et al., 2004).
5 See e.g. GRINDLEY, 1995; CUSUMANO et al., 1997; SHAPIRO & VARIAN, 1999. See also
LINDMARK, 2002, for the GSM case.
paradox" (DAVID, 1987), suggesting that government has only a short
period for effective intervention before the industry is locked into a specific
technology.
In order to keep technological alternatives alive in Europe, and to
accelerate the catching-up process, we propose the following R&D support
actions. A suitably structured and EC-led funded programme of research
and demonstrator implementations should be set up and mobilised. Firstly, a
European Alternative Radio Network Research Programme should be
established as a matter of urgency. It should cover several well-defined
areas, with study projects for university laboratories and industrial pre-competitive consortia, with all results being in the public domain. The release
of classified military research in this area should be urgently sought for
Europe's advantage 6 7 .
Secondly, we suggest the formation of a European Radiocommunications
Research Institute – ERRI – as a further initiative to pursue the full promise
of the new directions in radio. ERRI would be a European research and
development centre for AWT radio technologies and networking
architectures. Jointly funded by industry, national governments and the EC,
the first phase of rapid set-up and early growth could be through a joint
programme of projects distributed across existing universities. This would
form a launch pad for the second phase, of setting up a permanent institute
with its own faculty and facilities at one site. ERRI would have twin research
roles, of primary and applied research, to form an international centre of
excellence.
6 In addition, there are existing EC e-initiatives that could be harnessed to provide part of the
above, in particular the eMobility Technology Platform. If this is not possible, then an alternative
high-level group specifically for AWT – a kind of European 'skunk works' to develop AWT –
could be created. Moreover, the EU's interest in broadband deployment could also be
harnessed for certain AWTs, if any political barriers raised by xDSL incumbents to wireless
access can be overcome.
7 Specifically, we propose that the programme's main research lines should include: (1) radio propagation analysis; (2) networking processes and architectures for inter-working and interfacing to other (existing) networks; (3) analysis of mesh networking algorithms; (4) analysis of techniques for sharing spectrum based on non-frequency-constrained propagation; (5) cognitive radio systems for SDR; (6) spatial and directional signal multiplexing and enhancement; (7) human interface research for rich capability but easy-to-use handsets and terminal devices; (8) socio-economic analysis of user demand for new services; (9) analysis of handset operating systems for secure hosting of multimedia applications; (10) analysis of security threats; (11) content and media transmission and management; (12) tracking of AWT development globally; and (13) self-organising operator-less ad-hoc networks for disaster situations, with robust self-configuration.
It would also be useful to build a range of European test beds at a
national (or EU) level, that would aim to stimulate the economy by proving
technology and, most importantly, to educate both the work force and
society in general. The intention would be to promote the knowledge base of
the economy. On top of this, we suggest large demonstrator projects (size
decided by the number of MS participating at national and local levels),
which would revolve around four main initiatives: (1) a pan-European
wireless broadband network infrastructure (EWBNI), mainly functioning to
provide a robust broadband infrastructure platform at low cost, and on which
vertical application networks could be based; (2) a European citizen-alert
network (CAN), perhaps using a mesh infrastructure; (3) a European
Emergency Services Infrastructure Network (EESIN), accessible only by
emergency services, with an architecture for robust operation in all
situations; and (4) a European recovery network for attacks and disasters
(ERNAD), a temporary network to be set up instantly whenever and
wherever infrastructure fails, following natural or man-made disasters.
Policies for competition and new entries
Our research has shown that AWTs in Europe are presently almost
monopolised by traditional operators, partly for defensive purposes. This is,
in turn, likely to hamper the dynamism of AWTs in Europe. Clearly there is a
case for policies enabling new players to enter the market, or at least for
preventing concentration.
To create an active AWT-based communications market, it will be critical
to form conditions of freedom of market entry for new players without
restrictive practices, be it in inter-working – physical attachment, protocols at
network or at application level – or in related areas such as media content,
or in dependencies such as the software for 'media players' and operating
systems. Regulation has to maintain a level playing field for competition, in
market conditions where world-class players are seeking vertical integration.
This means expanding regulation models for the areas of:
- media/broadcast-multicast and content in all areas including
protection of minors, digital rights management, ownership of multiple
media, etc.;
- telecommunications;
- financial transactions and banking.
S. LINDMARK et al.
139
In the systems interface area, we also need to see open interface
standards published down to chipset level. For instance, SANDVIG et al.
(2004) note that access to development of mesh networking over Wi-Fi is
now constrained by secrecy among manufacturers of network card chipsets,
a highly concentrated industry. None of the dominant chipset suppliers in the
Wi-Fi markets makes available any interface specifications. This effectively
bars any user-driven innovation, a central force for innovation in the area of
mesh networking.
A related area for policy decisions is the assurance of interconnection
access by the new entrants to existing networks – be they fixed or mobile
with internet access. Issues of roaming, interconnection and termination
charges must be considered, with cost-based pricing to prevent monopolistic
margins on interconnect activity. AWTs could then provide strong local loop
competition. Assuring connection of any-to-any covers several areas
including:
• Open access: required also at application level with AWTs for mobile
services.
• Mandated mobile exchanges: ensure that operators of all kinds have
common internet access. Requires creation of mobile exchanges – a key
element of a converged network to integrate AWTs – and would also open
the way for mobile content competition.
• Ownership restrictions: for different types of networks, allowing and
even forcing the sharing of infrastructures according to dynamic financial
models.
• Pricing models: a major barrier to AWT introduction (especially by
cellular mobile operators) is their associated pricing model. This extends into
interconnection and the billing settlements, with termination and roaming
agreements.
• Naming and addressing: resolving conflicts over naming and
addressing is a key aim for open access. AWTs in Europe sit in the area of
three address spaces – Internet logical addresses (URLs), fixed number
plans, and mobile number plans. The latter two vary by country, but are
usually differentiated. Suggested solutions for mobile-internet access include
the ENUM scheme for mapping a PSTN telephone number into a typical
Internet Uniform Resource Locator (URL), i.e. an e-number.
• Universal service: providing universal service with equal provision and
access for all citizens is open to question in a mobile broadband world.
• Emergency number obligations: many AWT-based public services
providing voice are likely to have to comply with the requirement for
connection of the emergency services in each MS.
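The ENUM scheme mentioned under naming and addressing rests on a simple digit-reversal rule specified in RFC 6116: the E.164 number's digits are reversed, dot-separated and appended to the e164.arpa domain, whose DNS records then point to Internet URIs. The sketch below is our own illustration of that rule, not part of the original text, and the example number is made up:

```python
def enum_domain(e164: str) -> str:
    """Map an E.164 telephone number (e.g. '+442079460123') to its ENUM
    domain under e164.arpa: keep only digits, reverse them, and
    dot-separate, per the RFC 6116 convention."""
    digits = [c for c in e164 if c.isdigit()]
    return ".".join(reversed(digits)) + ".e164.arpa"

print(enum_domain("+442079460123"))
# 3.2.1.0.6.4.9.7.0.2.4.4.e164.arpa
```

In a deployed system, a DNS query for the resulting domain would return NAPTR records mapping the number to services such as SIP or e-mail URIs.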
Finally, it should be mentioned here that regulation and competition
policy are not the only measures to stimulate new players to enter the fields
of AWT. New AWT-based services and product innovation could be
stimulated by setting up and incubating AWT start-ups, which should be a
major priority leading to positive effects in terms of market experimentation
and learning.
Spectrum policy
The present study has shown that AWTs are being experimented with
and combined with other wireless and cellular technologies, but that this is
being hindered by restrictive frequency allocation mechanisms. How can
these be addressed by spectrum policy? The first issue is a rethinking of
policy for spectrum allocation at the highest levels for Europe, Member
States, and globally in order to incorporate AWTs adequately. And now is
the time, as the issues are under debate today within the ITU forum.
However, this slow, gradual process via the WRCs is at a critical juncture,
and is inadequate for the rapidly evolving and increasing number of new
wireless technologies. For propagation distances to be optimised, AWTs
may need to have frequency bands currently taken by broadcast, mobile
cellular, or the military. By WRC-07, it would be judicious to have
reconsidered the current allocation of spectra in view of the economic
benefits of AWTs for Europe, even if this means abandoning existing frequency plans.
Such a move requires a socio-economic basis for planning (FORGE et al.,
2005), and work so far points to a far wider usage in which AWTs would
form a major part. It is, moreover, important to note that spectrum will also
be affected by advances in signal processing – both in sharing, with new
spread spectrum techniques, and in compression – so that the spectrum
needed for very high data rates (gigabits per second) can be substituted for
by increased processing power.
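The substitution alluded to above – trading signal-processing sophistication against raw bandwidth – can be illustrated with the Shannon-Hartley capacity formula, C = B·log2(1 + SNR). The numbers below are our own illustrative example, not from the original text: capacity grows linearly with bandwidth but only logarithmically with signal-to-noise ratio, so heavier processing (higher effective SNR through coding and compression) can offset a narrower band, though with diminishing returns.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley channel capacity limit: C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 20 MHz channel at a linear SNR of 15 (about 11.8 dB) ...
wide = shannon_capacity_bps(20e6, 15)      # 80 Mbit/s
# ... matches a 10 MHz channel whose effective SNR is raised to 255
# (about 24 dB) by heavier signal processing.
narrow = shannon_capacity_bps(10e6, 255)   # 80 Mbit/s
```

Halving the band here requires roughly a seventeen-fold increase in effective SNR, which illustrates both the substitution and its limits.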
Recent research (FORGE, 2006; FORGE & BLACKMAN, 2006; FORGE
et al., 2006) indicates that the economic application of the spectrum as a key
resource factor will have a profound impact on future growth and
development in Europe in terms of employment in the high-technology
manufacturing and support industries, and on user industries and levels of
consumer spending. This will be especially significant as our dependence on
various forms of pervasive or ubiquitous communications and computing
builds the world of 'The internet of Things' following the tenets and trends
indicated in the EC's AMI (Ambient Intelligence) programme. Spectrum is
thus a key factor that will lead our economy to new models and rules,
perhaps those of a 'tele-economy', as considered in certain World Bank
studies. Hence, models of its management based on open access to a
commons are worth serious consideration.
Consideration of spectrum policy for AWTs must take into account two
key factors: firstly, spectrum availability must be matched against technology
type, and here we must balance the social and commercial importance of
existing services. Secondly, the form of spectrum allocation needs to be
decided, be it a single choice or a mix of several (managed award as in a
'beauty contest', and/or competing market bids and resale in a secondary
market, and/or open unlicensed bands).
The use of unlicensed bands is perhaps the most promising perspective
today and is behind the successful rollout of WiFi. Further bands will be
needed to support WiMAX, ZigBee and other short-range technologies
including Near Field Communications for body area networks. Policy
directions will need to be considered carefully, against a background of
vested interests, to decide what will be required over the next decades in
frequency allocation. To conclude, spectrum needs to be given to AWTs as
part of any policy to support them, or else they will stall – just as mobile
cellular did in the USA for almost four decades after 1947.
Standardisation policy
For the AWT market emerging over the next decade, far more than
standards for simple air interface and network-level protocols are required, if
the applications that run over AWTs are to interwork seamlessly 8 . So far,
our current AWT standards have largely been formed in the fora developing
the IEEE 802 series (USA). A simple policy of harnessing these air interface
and physical connection standards is perhaps to be preferred for rapid
industrial advance, as it would avoid the unprofitable conflict, time and
money spent on redundant standards-setting.
8 On a general note, for standards policy, a key point has been made by South Korea, which
often takes a contrarian view on standards in order to be first in a new technology. This could be
applied to the AWT standards scene in Europe.
Building on the IEEE 802 standards series at a basic communications
protocol level, we can illustrate useful standards-setting by moving up the
seven-layer model to build complete systems that can be easily integrated
into a broadband wireless network. They may be selections of existing
standards in some cases. Domains to be covered would include:
- network air interfaces, network protocols and network operations, and
the key network entities and their operational behaviour;
- handsets – any usefully defined software characteristics such as
operating system calls, form and use of microbrowsers to display content,
etc.;
- session and application processes at the internet level for
mechanisms and protocols;
- content and media standards to enable common distribution
mechanisms for content ingest and delivery;
- security mechanisms and overall architecture.
Thus, building on the IEEE 802 series, European standards efforts (in
ETSI and other groups) could also well be marshalled to attack a higher,
more sophisticated level of AWT operation. This would enable European
industry to go forward rapidly in AWTs in the areas of: (1) a high-level,
behavioural model of the network architecture for mesh networking, with
strategies for use of participating nodes, and for inter-working with existing
network types; (2) definition of the main operations in a self-organising or
ad-hoc network for a mesh architecture following the high-level model – the
network types; (2) definition of the main operations in a self-organising or adhoc network for a mesh architecture following the high-level model – the
processes and policies of management for awareness and adaptive
response, with choice of existing standards where appropriate.
A security model and architecture to accommodate the high-level network
model, which runs end-to-end from content servers through all network types
into handsets, will also be needed. However, standardisation of technical
developments for inter-working is not enough. There must be regulation to
enforce standards usage – for example, integrated naming and addressing,
and specifically security measures.
Towards an integrated AWT policy
The rapid growth of AWTs means that focusing on a single wireless
technology – that is, just on cellular mobile – at a policy, research and
industrial level is not only inappropriate, but is also likely to result in Europe
being left behind by Asia and North America in these key enabling
technologies. Assuming a linear development of successive generations of
the one dominant technology can no longer be a valid approach. Instead, it
is likely that a number of technologies will co-evolve, and that many new
operator models will develop as well. At present the operator-centric model
dominates the AWTs, but opportunities for future non-operator-centric
business models are likely to grow, as competition intensifies. As the AWT
contenders enter the market, they broaden the horizons of applications as
well as multiplying the networks and access available to the citizens, so that
applications impossible with cellular technology can be made available
ubiquitously.
Drawing lessons both from Europe's successful fostering of GSM and
from the less successful third generation of cellular wireless technology, and
considering the approaches being adopted in other regions of the world,
there is a strong argument in favour of Europe adopting an integrated
approach to the policy and regulatory issues arising from AWTs. However,
these are sensitive issues and care needs to be taken in striking the right
balance between a command-style dirigiste intervention, which would not fit
culturally with MS, and a repeat of the experience with previous European
programmes, which took a long time to organise and fund but achieved
little or nothing.
In spite of the difficulties, and since the significance of AWTs is likely to
be downplayed if left to current market forces and those players dominated
by interests in conventional fixed wire or cellular mobile technologies, the
key policy conclusion of this paper is that a comprehensive and systematic
European approach for AWTs is justified, one which will be appropriate to
embracing all radio technologies and considering how they fit together. Thus
the policy would not be implicitly oriented just towards mobile cellular, with a
few limited and random efforts in areas such as RFID where strong lobbying
by industry groups publicises such technologies. We would have an explicit
commitment to examine all AWTs – because an AWT take-off in Europe at
an industrial level, in order to go beyond the importation of systems and
equipment from overseas for local service operations, will require a policy of
actively promoting innovative development through the support of a
multitude of technologies and complementary solutions.
References
ARTHUR B. (1989): "Competing technologies, increasing returns and lock-in by
historical events", Economic Journal, 99(394), pp. 116-131.
BOHLIN E., GARRONE P. & ANDERSSON E. (2004): "Investment, Innovation and
Telecommunications Regulation: What is the Role of the NRA?", paper presented at
the Seminar on Competition in Telecommunications, organised by Post- och
telestyrelsen, 28 September 2004. Available online at:
http://www.mot.chalmers.se/citisen/project11.asp.
BOHLIN E., LINDMARK S., FORGE S., BLACKMAN C., BALLON P., WEHN de
MONTALVO U., HERES J., KIPS A., van STADEN M., TEE R., de MUNCK S. & van
der LAAN W.-P. (2006): "Mapping European Wireless Trends and Drivers",
forthcoming IPTS Technical Report prepared for the European Commission – Joint
Research Center, see www.jrc.es.
CEC (2005): "i2010 – A European Information Society for growth and employment".
Communication from the Commission to the Council, the European Parliament, the
European economic and social committee and the Committee of the regions,
COM(2005) 229 final, Brussels, 1.6.2005.
CHRISTENSEN C. (1997): The Innovator's Dilemma: When New Technologies
Cause Great Firms to Fail, Harvard Business School Press, Boston, MA.
CUSUMANO M., MYLONADIS Y. & ROSENBLOOM R. (1997): "Strategic
Maneuvering and Mass-market Dynamics: The Triumph of VHS over Beta", in
Tushman M. & Anderson P. (Eds), Managing Strategic Innovation and Change,
Oxford.
DAVID P.:
- (1985): "Clio and the Economics of QWERTY", AEA Papers and Proceedings, May,
pp. 332-334.
- (1987): "Some new standards for the economics of standardization in the
information age", in Dasgupta P. & Stoneman P. (Eds), Economic policy and
technological performance, Cambridge University Press, pp. 206-239.
EDQUIST C., MALERBA F., METCALFE S., MONTOBBIO F. & STEINMUELLER E.
(2005): "Sectoral systems: implications for European innovation policy", in Malerba F.
(Ed.), Sectoral Systems of Innovation, Concepts, issues, and analysis of six major
sectors in Europe, pp. 427-461.
FORGE S.:
- (2004): "Towards an EU Policy for Open Source software", IPTS Report, Vol. 85.
Available online at:
http://www.jrc.es/home/report/english/articles/vol85/ICT3E856.htm#simon
- (2006): "The Issue Of Spectrum – Radio Spectrum Management in a Ubiquitous
Network Society". Conference proceedings, RFID to the Internet of Things, Pervasive
network systems conference, Brussels, DG Information Society, 6/7 March 2006,
organised by the European Commission Directorate "Network and Communication
technologies".
FORGE S. & BLACKMAN C. (2006): "Spectrum for the next radio revolution: the
economic and technical case for collective use", Info, Vol. 8, no. 2, March.
FORGE S., BLACKMAN C. & BOHLIN E.:
- (2005): "The Demand for Future Mobile Markets and Services in Europe", IPTS
Technical Report prepared for the European Commission – Joint Research Center,
EUR 21673 EN, Seville. Available online at: http://fiste.jrc.es/.
- (2006): The Economic Effects of Spectrum Allocation and The New Radio
Revolution, a multi-client study, SCF Associates Ltd, UK, 2006.
GRINDLEY P. (1995): Standards, Strategy and Policy: Cases and Stories, Oxford
University Press, pp. 20-55.
SHAPIRO C. & VARIAN H. (1999): "The Art of Standards Wars", California
Management Review, 41(2), Winter, pp. 8-32.
UTTERBACK J. (1994): Mastering the Dynamics of Innovation: How Companies can
Seize Opportunities in the Face of Technological Change, Harvard Business School
Press, Boston, MA.
The Scope of Economic Sector Regulation
in Electronic Communications
Alexandre de STREEL (*)
Faculty of Economics, University of Namur
Abstract: This paper proposes a market-based approach relying on a combination of
selection criteria and antitrust methodology to determine the scope of economic regulation
and its balance with competition law. It suggests a clarified three-criteria test related to the
presence of high, non-transitory and non-strategic entry barriers that are mainly of an
economic nature, the absence of dynamic competition behind those barriers, and a
cross-checking criterion related to the insufficiency of antitrust remedies to solve the
identified problems. The paper recalls the importance of using antitrust methodology
adapted to the characteristics of the sector and also suggests some clarification of the
regulation of emerging markets. This article draws a distinction between retail services and
the underlying wholesale infrastructures, and proposes that all wholesale access products
used for the provision of similar retail services should be dealt with in the same way,
independently of the infrastructures in question (the old copper pair or an upgraded VDSL
network). The paper concludes that only wholesale access products used to provide new
retail services should possibly escape regulation.
Key words: Regulation, electronic communications, market failures, balance between
antitrust and sector regulation, emerging markets.
This paper proposes an efficient test to determine the scope of
economic regulation in the electronic communications sector and the
balance between regulation and antitrust law. The suggested test is
based on economic methodology, as well as its practice in the European
Union.
The first section of the paper studies the rationale for public intervention.
It starts by listing the reasons for such intervention (market failures) and then
characterises the differences between the means of intervention (sector
regulation and antitrust law). The second section proposes a test to
determine the scope of economic regulation by recalling the approaches
(*) Acknowledgements are made to Allan Bartroff, Richard Cawley, Peter Johnston, Paul
Richards and the participants of the EuroCPR 2006 conference for their very helpful comments
and discussions.
COMMUNICATIONS & STRATEGIES, no. 62, 2nd quarter 2006, p. 147.
currently followed in Europe and in the United States, by advocating a
particular option in the European context and a clarification of the regulation
of emerging markets. The final section rounds off the paper with some
conclusions.
Rationale for public intervention
Market failures justifying public intervention
It is generally agreed that public authorities should aim to maximise the
welfare of their citizens and markets are supposed to be the best means to
ensure such welfare maximisation. Thus, governments should intervene only
when the functioning of markets does not deliver this objective.
Economists distinguish between three types of market failure (see also
Australian Productivity Commission, 2001; MOTTA, 2004, Chapter 2). The
first type of failure is the presence of an excessive market power (such as a
monopoly operator), which may lead to over-pricing and/or too little
innovation. Excessive market power is mainly due to the presence of entry
barriers. In the economic literature, there are two opposing views (McAfee et al.,
2004; OECD, 2006) on the controversial concept of entry barriers. The
narrow (Stiglerian) view limits the barriers to the absolute cost advantages of
incumbents (such as access to the best outlets in town, the presence of
consumer switching costs, or any type of legal barriers), but excludes all
entrants' costs that have also been borne by incumbents (for instance high
fixed and sunk costs 1 ). The broad (Bainian) view extends the concept of
barriers to all factors that limit entry and enable incumbents to make a supranormal profit and hence includes absolute cost advantages, as well as
economies of scale and scope. In telecommunications economics literature,
this first market failure corresponds to the one-way access (or access)
model, which concerns the provision of bottleneck inputs by an incumbent
network provider to new entrants (ARMSTRONG, 2002; LAFFONT &
TIROLE, 2000; VOGELSANG, 2003).
1 The European Regulators Group defines sunk costs as: "Costs which, once incurred, cannot
be recouped, e.g. when exiting the market. Examples for sunk costs are transaction costs,
advertising expenses or investment in infrastructure for which there is no or little alternative
use": Revised Common Position of May 2006 on remedies, ERG(06) 33, p. 127.
A. de STREEL ACKNOLO
149
The second market failure is the presence of an externality (like a network
externality or a tariff-mediated externality), which may lead to under-consumption
in cases of positive externality and over-consumption in cases
of negative externality 2. For instance, less than the optimal number of
customers may decide to join a network if new customers are not
compensated, when joining the network, for the increase in welfare that they
offer existing customers. In telecommunications economics literature, this
second market failure corresponds to the two-way access (or
interconnection) model, which concerns reciprocal access between two
networks that have to rely upon each other for call termination
(ARMSTRONG, 2002; LAFFONT & TIROLE, 2000; VOGELSANG, 2003).
The third market failure is the presence of information asymmetries (such
as the absence of knowledge of prices), which may lead to under- or
over-consumption. For instance, the very high prices of international roaming may
partly be due to insufficient knowledge of the price and techniques of such a
service.
In addition, each type of market failure may be structural and result from
the supply and demand conditions of the market, or may be behavioural and
artificially (albeit rationally) 'manufactured' by firms, leading to the two-by-two
matrix illustrated below 3. Since the decline of the Structure-Conduct-Performance
paradigm in industrial economics, it is now recognised that
structure influences the conduct of firms as much as their conduct influences
market structure (SUTTON, 1991). Yet it remains possible (and useful when
choosing between the different instruments of public intervention) to identify
the causes of non-efficient market results and to distinguish between
structural and strategic market failures.
However, this table offers only a stylised and static view of the market, and
consequently constitutes more of a starting point for raising relevant
questions about the scope of public intervention than a checklist providing
definitive answers. Indeed, telecommunications markets are intrinsically
dynamic, and a rationale based on a static view that does not sufficiently
take into account investment incentives may lead to inappropriate
2 The European Regulators Group defines network externality as: "The effect which existing
subscribers enjoy as additional subscribers join the network, which is not taken into account
when this decision is made": ERG Revised Common Position on remedies, p. 126.
3 Several potential behavioural market failures have been identified by the European
Regulators Group in its Revised Common position on remedies at Chapter 2.
No. 62, 2nd Q. 2006
150
and over-inclusive public intervention. For instance, a high level of market
power when taking a static view may be welfare enhancing when taking a
dynamic view because it stimulates investment. Thus, it will not justify
intervention, provided there are some constraints in the long term, as
explained by SCHUMPETER (1964) in the theory of creative destruction.
Conversely, public intervention may be welfare detrimental from a static
view, but welfare enhancing from a dynamic view. For instance, the support
of less efficient entrants may be justified to give these players time to
consolidate their customer base and become more efficient over time 4 .
Table 1 - Market failures susceptible to public intervention

                           Structural/non-strategic          Behavioural/strategic

Excessive market power     Cell 1                            Cell 2
(one-way access /          - High fixed and sunk costs,      - Reinforcement of dominance
access model)                with uncertainty                - Vertical leveraging
                           - Important absolute cost         - Horizontal leveraging
                             advantages (like switching
                             costs, legal barriers)

Externality                Cell 3                            Cell 4
(two-way access /          - Network effects                 - Strategic network effects, like
interconnection model)     - Two-sided markets                 loyalty programmes or tariff-
                                                               mediated externality

Information asymmetry      Cell 5                            Cell 6
Moreover, it is also important to look at the origin of market power and to
intervene more stringently in the case of monopolies acquired under legal
protection, but take a laxer approach to monopolies acquired under
competitive conditions (there was competition for the market, although there
is no competition in the market), along the lines of the 'original sin'
rationale 5.
4 ERG Revised Common Position on remedies, p. 78 noting that: "In some cases however,
'inefficient' (e.g. small-scale) entry might be desirable as short-run productive inefficiencies may
be more than outweighed by the enhanced allocative efficiencies and long-run (dynamic)
advantages provided by competition". See also the Annex of the ERG Revised Common
Position on remedies and the DG Competition Discussion Paper of December 2005 on the
Application of Article 82 of the Treaty to exclusionary abuses, para 67.
5 This view was defended in the Opinion of Advocate General Jacobs in Case C-7/97
Bronner v MediaPrint [1998] ECR I-7791. It was also implicitly suggested by the European
Commission at Article 13(2) of the DG Information Society Working Document of April 27th 2000
on a Common regulatory framework for electronic communications networks and services, available at:
http://europa.eu.int/information_society/topics/telecoms/regulatory/maindocs/miscdocs/index_en.htm.
This working document added the following text to the current definition of SMP: "And, where
(a) undertaking has financed infrastructure partly or wholly on the basis of special or exclusive
rights which have been abolished, and there are legal, technical or economic barriers to market
entry, in particular for construction of network infrastructure; or (b) the undertaking concerned is
a vertically integrated entity owning or operating network infrastructure for delivery of services to
customers and also providing services over that infrastructure, and its competitors necessarily
require access to some of its facilities to compete with it in downstream market." (my
underlining).
The choice of legal instruments to deal with market failures
To tackle these different market failures, public authorities have at their
disposal several legal instruments (in particular competition law, sector
regulation and consumer law), which they must combine in the most efficient
way. In fact, the scope of each legal instrument varies across jurisdictions.
In the European Union, the scope of competition law (Articles 81-86 EC)
is independent of sector regulation. Competition law has a constitutional
value and applies to all market segments. An antitrust authority may
consequently intervene in addition to the intervention of a sectoral
regulator 6. On the other hand, the scope of sector regulation 7 is dependent
on competition law. Sector regulation applies when competition law
remedies prove insufficient to solve a market failure problem (Recital 27 of
the Framework Directive). However, it is difficult to determine when sector
regulation has added value (i.e. is more efficient in dealing with market
failure) compared to antitrust law, because both instruments have converged
over time in the electronic communications sector. Competition law has been
applied extensively to maintain the level of competition, but also to increase
that level 8, and has become a sort of 'regulatory antitrust'. Conversely,
sector regulation is now based on antitrust methodologies 9 and has become
a sort of 'pre-emptive competition law' (CAVE & CROWTHER, 2005;
de STREEL, 2004). As KRÜGER & DI MAURO (2003: 36) observe:
"The perceived antagonism between competition and regulation is,
therefore, only apparent, and it is destined to disappear. In fact,
competition has already been shaping regulation: it is the latter which
has been adapting itself to suit the philosophy and the approach of the
former. Regulatory policy cannot be seen anymore as independent of
competition policy: it must be seen as a part of a broader set of tools of
intervention in the economy based on competition analysis principles.
[…] competition instruments and regulatory tools are complementary,
rather than substitute, means. They deal with a common problem and
try to achieve a common aim. The problem is always high levels of
market power and the likelihood of it being abused, and the aim is
putting the end user at the centre of any economic activity. Only
through a combination of both tools can we ensure that market power
does not distort and hamper the development of competition in the
communications markets. This in turn allows end users to drive and
steer such development, as well as to benefit the most of it."
6 Commission Decision of May 21st 2003, Deutsche Telekom, O.J. [2003] C 264/29, currently
under appeal at the Court of First Instance as case T-271/03. In the United States, the Supreme
Court decided in 2004 that antitrust would in practice not be applicable if sector regulation
applies: Verizon v. Trinko 540 U.S. 682 (2004). For an analysis of the differences between the
USA and Europe, see Larouche, 2006. For an analysis of the relationship between antitrust and
sector regulation in other jurisdictions, see GERADIN & KERF, 2003.
7 Directive 2002/21/EC of the European Parliament and of the Council of March 7th 2002 on a
common regulatory framework for electronic communications networks and services
(Framework Directive), O.J. [2002] L 108/33; Directive 2002/19/EC of the European Parliament
and of the Council of March 7th 2002 on access to, and interconnection of, electronic
communications networks and services (Access Directive), O.J. [2002] L 108/7; Directive
2002/22/EC of the European Parliament and of the Council of March 7th 2002 on universal
service and users' rights relating to electronic communications networks and services (Universal
Service Directive), O.J. [2002] L 108/51.
8 Such an approach has been explicitly endorsed by the Court of First Instance: Case T-87/05
Energias de Portugal v Commission [2005] ECR II-0000, para 91.
9 To regulate an operator, a regulatory agency must delineate the borders of the relevant
markets with competition methodology (the hypothetical monopolist test), select the market
according to the three-criteria test, and determine whether the operator enjoys a dominant
position (as defined in competition law) in the delineated and selected market.
In practice, the two main and related substantive differences between sector regulation and antitrust 10 are that (1) the former intervenes ex-ante, hence deals with unsatisfactory market structures, whereas the latter (with the exception of merger control, which is admittedly very important in the electronic communications sector) intervenes ex-post, and consequently deals with unsatisfactory behaviour 11, and (2) the burden of proof for sector regulation to intervene is lower than for antitrust law. The main institutional difference is that (3) sector regulation is applied only by national authorities, whereas antitrust law is applied by both national and European authorities (DG Competition).
As a result of the first difference (related to structure and behaviour), it is
efficient for sector regulation to deal with structural market failures and
competition law to deal with behavioural ones. The second difference
(related to the burden of proof) makes it efficient for the factor used to select
markets for regulation to be set at a very high level because once a market
area is selected, intervention is relatively easy. In other words, the selecting
factor should ensure that regulation is limited to markets where the risks of
type I errors (false condemnation) are low and the risks of type II errors
10 On the differences between sector regulation and antitrust law, see also LAFFONT & TIROLE, 2000: 276-280; KATZ, 2004; TEMPLE LANG, 2006.
11 Paradoxically, the sectoral remedies are mainly behavioural and not structural.
A. de STREEL
(false acquittal) are high 12 . This is all the more important since the costs of
type I errors are significant in dynamic markets 13 . Taking both arguments
together, any possible regulation should be limited to cells 1 and 3 of table 1,
i.e. structural market failures due to excessive market power and
externalities. Finally, because of the third difference (related to institutional
design), it might be justifiable for antitrust law to apply in addition to sector
regulation in cases where NRAs have not performed their tasks
adequately 14 .
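The trade-off sketched here — a lower burden of proof makes intervention easier, so the selection threshold must keep type I errors rare — can be illustrated with a toy expected-cost calculation. All probabilities and costs below are invented for exposition and are not taken from the paper:

```python
# Illustrative decision sketch: where should the selection threshold sit?
# All probabilities and welfare costs are hypothetical assumptions.

def expected_error_cost(p_type1, cost_type1, p_type2, cost_type2):
    """Expected welfare loss from wrongly intervening (type I)
    plus wrongly abstaining (type II)."""
    return p_type1 * cost_type1 + p_type2 * cost_type2

# A low selection threshold: regulation reaches many markets,
# so false condemnations (type I) are frequent.
low_bar = expected_error_cost(p_type1=0.30, cost_type1=100,
                              p_type2=0.05, cost_type2=60)

# A high selection threshold: only clear structural failures are caught,
# trading a few more false acquittals (type II) for far fewer type I errors.
high_bar = expected_error_cost(p_type1=0.05, cost_type1=100,
                               p_type2=0.15, cost_type2=60)

print(low_bar, high_bar)  # the high threshold yields the lower expected loss here
```

Under these assumed numbers, the high threshold wins because the cost of a type I error in a dynamic market dwarfs the cost of occasionally letting antitrust law handle a borderline case.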
A test to determine the scope of sector regulation
Two approaches to a general test
This section of the paper focuses on a test for the first market failure
(excessive market power or one-way access) and disregards a test for the
second market failure (network effects or interconnection) to alleviate any
confusion between two very different economic problems 15 . The distinction
between these two market failures is important because the first market
failure may disappear over time in electronic communications and economic
regulation could be limited to network effects. In the USA, for instance, some
authors suggest that Congress should reform the Telecom Act and limit
regulation to interconnection issues where there are few players, leaving all
the other issues (like one-way access or interconnection with many players)
to antitrust law 16 .
12 I link here the burden of proof to intervene with the risks and the costs of type I and type II
errors, following EVANS & PADILLA (2004) and references cited therein in footnote 5.
13 HAUSMAN (1997) valued the delay in the introduction of voice messaging services from the late 1970s until 1988 at USD 1.27 billion per year by 1994, and the delay in the introduction of mobile services at USD 100 billion, large figures compared with 1995 US global telecoms revenues of USD 180 billion per year.
14 As was the case in the Deutsche Telekom decision.
15 For interconnection, many authors argue for a move towards a generalized bill and keep
rule: DEGRABA, 2002; HORROCKS, 2005.
16 See the Draft Bill of the Progress & Freedom Foundation proposing a blueprint for a U.S. Digital Age Communications Act, available at: http://www.pff.org/daca/. This cannot necessarily be transposed to Europe due to the lower penetration rate of cable and the application of the calling party pays principle.
For the first market failure, there are two main approaches to translating
this economic rationale into legal provisions. The first approach is a market-based test and is currently followed in Europe. It relies on a combination of
antitrust methodologies and additional criteria. Thus, regulators start by
defining relevant retail markets according to antitrust methodologies
(adapted to sector characteristics such as its dynamism) 17 . In cases of
excessive market power at a retail level, regulators move to the linked
wholesale network access market(s) and select markets for possible
regulation on the basis of three criteria that are deemed to indicate which
markets are not efficiently policed by competition law: entry barriers, no
dynamics behind the barriers, and the insufficiency of competition law
remedies to deal with the perceived problem 18 .
Such an approach is praised by BUIGES (2004) and CAVE (2004) because it ensures flexibility (as antitrust principles are based on economic theory), legal certainty (as antitrust principles are based on over forty years of case-law) and harmonisation (as antitrust principles are strongly Europeanised), and should facilitate the transition towards the disappearance of economic regulation and a state of affairs where competition law alone remains. However, this approach is criticised by LAROUCHE (2002: 136-140), DOBBS & RICHARDS (2004), and RICHARDS (2006) because it is overly complicated and may contain a bias towards more regulation.
The second approach is an asset-based test and is currently adopted in
the United States 19 . It detects hard-core market power justifying regulation
with alternative and supposedly more direct economic methods. Thus,
regulators do not start at the retail level, but focus directly on wholesale
network segments with high fixed and sunk costs that make them unlikely to
be replicable. One variant is the non-replicable asset, defined as (1) an asset that has not already been replicated on a commercial basis in similar circumstances, and (2) with no functionally equivalent, commercially viable substitute
17 On this approach, see the Explanatory Memorandum of the Commission Recommendation
on relevant markets, section 3.1.
18 Recitals 9-16 of the Commission Recommendation 2003/311 of February 11th 2003 on
relevant product and service markets within the electronic communications sector susceptible to
ex ante regulation in accordance with Directive 2002/21/EC of the European Parliament and of
the Council on a common regulatory framework for electronic communications networks and
services, OJ [2003] L 114/45.
19 Note that the ERG Revised Common Position on remedies refers to the non-replicable asset approach at pp. 57-59 and Recital 13 of the Access Directive links the need for regulation with the presence of a bottleneck.
capable of delivering comparable services to end-users 20 .
broader (i.e. including more assets for regulation) variant is the bottleneck
defined as: "The parts of the network where there are little prospects for
effective and sustainable competition in the medium term." 21
In the end, the two approaches are less different than they may seem at first sight, as they pursue the same goal of identifying the 'parts of the
infrastructure' that justify regulation due to their structural characteristics.
Yet, the starting point is different and the first approach may be a little more
complicated, and thus more easily manipulated. On that basis, the first-best
option and most efficient test may be an asset-based approach. However,
this test would not be able to cover two-way access problems, which might
remain the only area of regulation in the long term. It also requires some
qualification to ensure that retail markets and the principle of technological
neutrality are duly taken into account. In addition, in the specific context of
the European Union, regulators are now used to the market-based approach
and such an approach justifies stringent control by the Commission over
NRAs' decisions because it relies on antitrust methodology, an area of
Commission expertise. In this specific context, this paper submits that the
market-based approach is a second-best that should be maintained.
However, it should be clarified.
A reformed and clarified market-based test
The market-based approach needs to be clarified at two levels: in terms
of the use of antitrust methodologies and of the three criteria test. On the
one hand, antitrust principles and their underlying economic theories should
be adapted to the characteristics of the legal instrument and the markets
20 Indepen and Ovum (2005: 26). In practice, the authors considered that this test implies regulation for the fixed local loop in all Member States and might also include backhaul facilities from the Main Distribution Frame to the core network in some Member States.
21 Ofcom Final statements of September 22nd 2005 on the Strategic Review of the
Telecommunications and undertakings in lieu of a reference under the Enterprise Act 2002, at
Para 4.6. Currently Ofcom considers that this test implies regulation for shared and full metallic
path facility, wholesale line rental, backhaul extension services, Wireless Access Network
extension services and IPstream. The bottleneck approach was also favoured by SQUIRE SANDERS & ANALYSYS (1999: 147). They did not define the concept, but pragmatically
identified interconnection (especially termination practices), access to networks or digital
gateways, local loop, distribution and access to scarce resources. For the (then) future, they
also identified intellectual property rights, directory services, programming guides, and control
over interfaces/web navigators. In his doctoral dissertation, LAROUCHE (2000: 359-402) also
proposed to base regulation on the concepts of bottleneck and network effects.
conditions: for vertical chains of production, for Schumpeterian competition
and for two-sided markets.
On the other hand, the three criteria test should be qualified 22 . (1) The first criterion would relate to the presence of entry barriers. As we have already seen, there are different conceptions of entry barriers and SCHMALENSEE (2004: 471) argues that the appropriate notion of entry barriers
depends on the objectives of the legal instrument for which it is used. He
submits that a Bainian approach is preferable for antitrust law pursuing the
maximisation of consumer welfare 23 . For the same reason, a Bainian
approach should be used in sector regulation. Indeed, the European
Regulators Group defines the barriers to entry as:
"An additional cost which must be borne by entrants but not by
undertakings already in the industry; or other factors which enable an
undertaking with significant market power to maintain prices above the
competitive level without inducing entry". (ERG Revised Common
Position on remedies, p. 124)
Yet, such a notion needs to be qualified before being used as the first
criterion to screen a market for regulation. Firstly, the barriers should be
structural because strategic barriers (like excessive investment or
reinforcement of network effects) would require idiosyncratic and episodic
intervention that is better left to competition law (CAVE, 2004: 34).
Secondly, the barriers should be non-transitory because transient
barriers do not justify heavy-handed intervention by sector regulators. The
timeframe of what is 'transitory' is difficult to decide, but should at least cover
the period until the next market review (a minimum of 2 to 3 years) and
possibly beyond (ERG Revised Common Position on remedies, p. 59).
Thirdly, the barriers should principally be of an economic nature. Indeed,
if the barriers are of a legal nature (such as a limitation of spectrum that
22 If the test were followed, the Commission Recommendation on relevant markets would be substantially slimmed down. The retail markets, the fixed core network markets and most of the mobile markets would be removed, meaning that only the fixed terminating segments and the fixed and mobile interconnection markets would be identified for further analysis by NRAs.
23 Thus, the Commission proposes a broad definition of entry barriers, stating that: "Factors
that make entry impossible or unprofitable while permitting established undertakings to charge
prices above the competitive level." This includes many elements like economies of scale and
scope, capacity constraints, absolute cost advantages, privileged access to supply, highly
developed distribution and sales networks, the established position of the incumbent firms in
markets, legal barriers, and other strategic barriers to entry: Discussion Paper on exclusionary
abuses, para 38-40.
cannot be traded), the best remedy consists of removing the barrier and not
regulating the artificially uncompetitive market 24 . In such cases, the regulator would do better to advocate than to intervene: it should lobby the public authorities (legislator, government, etc.) to remove the legal barriers rather than regulate the market. Thus it is only if, and for the period during which, there is no way of removing such legal barriers that the market may be selected for regulation.
Fourthly and most importantly, the barriers should be so high that no
effective competition may be expected. The difficult question here is how
'high' is high? The issue is whether a 'natural' tight oligopoly should be
regulated 25 . To alleviate any type I error, this paper submits that the entry barriers should be so high that, save in exceptional circumstances, only one operator can be profitable in the market. This paper does not contend that oligopolies should be regulated: the authorities do not have sufficient information to discriminate between efficient and inefficient oligopolies, nor efficient remedies for dealing with them under the sector regime 26 , and most oligopoly situations could be resolved by removing legal entry barriers.
Thus, the first criterion would cover non-transitory and non-strategic entry barriers that are mainly of an economic nature and so high that, save in exceptional circumstances, only one operator is viable in the market. To make the criterion operational, the regulatory players
could opt for a two-stage approach 27 . They could start with an empirical
analysis and look at the degree to which operators in Europe or worldwide
have built out competitive networks in similar circumstances and under
viable economic conditions. Regulators could subsequently complement this
finding with a cost analysis based on engineering models that estimate the
cost curve or econometric cost functions (GASMI et al., 2002; FUSS &
WAVERMAN, 2002). In practice, only some fixed segments may be
24 Indeed, the European Commission is encouraging a more flexible and market-based approach for the allocation and the exchange of spectrum: Communication from the Commission of September 6th 2005 on a Forward-looking radio spectrum policy for the European Union, COM (2005) 411.
25 As suggested by the E/IRG response to the 2006 Review, p. 22.
26 In general, remedies include transparency, non-discrimination, accounting separation,
compulsory access, price control and cost accounting: Articles 9-13 of the Access Directive.
27 As proposed by CAVE (2006: 227). See also ERG Revised Common Position on remedies,
pp. 59-60.
screened for one-way access regulation and no mobile segments, save
exceptional circumstances 28 .
The second criterion would ensure that a dynamic view is adopted and
correct the static bias that the first criterion may carry. Thus regulators
should assess whether the market would deliver the results of dynamic
competition (i.e. innovation) despite high entry barriers; in other words,
whether the market would deliver the benefits of Schumpeterian creative
destruction. This may be the case, for instance, if there is ex-ante
competition for the market, although there is no more ex-post competition in
the market 29 . This should, however, be applied in a nuanced way. TIROLE
(2004: 262) argues that if a monopoly is due to a legal monopoly, scale
economies or pure network externalities, intervention is justified, whereas if the monopoly is due to genuine investment and innovation, regulators should forbear.
The third criterion would ensure that a market is selected for regulation solely in cases where antitrust remedies prove less efficient than sector regulation at solving the identified competitive problem, and recall that sector regulation is subsidiary to competition law. This criterion should be based on the same structural elements as the first two criteria and be fulfilled when these criteria are met (i.e. when there are high entry barriers that do not deliver the dynamic benefits of competition), serving solely as a cross-check. This paper does not contend that the third criterion should be based on additional institutional elements (like the respective powers of the national competition authority relative to the national sector regulator) because such elements can vary from Member State to Member State, and could consequently undermine the consistency of regulation in the single market 30 .
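As a summary, the clarified three criteria test argued for in this section can be encoded as a simple screening predicate. The field names below are invented labels for the qualifications just discussed, not an official formulation:

```python
# Sketch of the clarified three criteria test as a screening predicate
# over a candidate wholesale market. Field names are invented for exposition.
from dataclasses import dataclass

@dataclass
class Market:
    barriers_structural: bool      # not merely strategic conduct
    barriers_non_transitory: bool  # persist at least until the next review
    barriers_economic: bool        # not a removable legal barrier
    single_operator_only: bool     # so high that only one operator is viable
    dynamic_competition: bool      # Schumpeterian competition *for* the market
    antitrust_sufficient: bool     # ex-post remedies would do the job

def select_for_regulation(m: Market) -> bool:
    # Criterion 1: qualified entry barriers.
    criterion1 = (m.barriers_structural and m.barriers_non_transitory
                  and m.barriers_economic and m.single_operator_only)
    # Criterion 2: no dynamic competition behind the barriers.
    criterion2 = not m.dynamic_competition
    # Criterion 3: a cross-check -- antitrust alone is not enough.
    criterion3 = not m.antitrust_sufficient
    return criterion1 and criterion2 and criterion3

# A fixed terminating segment would typically pass all three...
local_loop = Market(True, True, True, True, False, False)
# ...while a contestable mobile market fails the first criterion.
mobile = Market(True, True, True, False, True, True)
print(select_for_regulation(local_loop), select_for_regulation(mobile))
```

The conjunction makes the cross-check role of the third criterion explicit: it can only confirm, never override, the structural findings of the first two.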
A complementing clause for emerging markets
This general screening test should be complemented by a clear provision regarding the treatment of emerging markets, given the importance of
28 For a discussion of one-way access obligations in the mobile sector: ERGAS et al., 2005; VALLETTI, 2004.
29 One of the first economists to argue this point was DEMSETZ (1968).
30 See similarly in the parallel issue of the relationship between the ex ante merger control and
ex post control of abuse of dominant position: TetraLaval C-12/03P [2005] not yet reported,
para 75.
investment in the sector and of legal certainty for investors. To be sure, the screening test based on the three criteria already contains an investment safeguard, as the second criterion relates to dynamic considerations.
However, such a safeguard may not provide sufficient legal certainty for
investors 31 .
To clarify the issue, the first step is to define an emerging market or
service. The European Regulators Group defines the emerging market as:
"Distinct from a market that is already susceptible to ex ante regulation from both a demand and a supply perspective. This means that consumers of the new service should not move their custom to currently available services in response to a small but significant non-transitory increase in the price of the new service. In a similar manner, firms currently providing existing services should not be in a position to quickly enter the new service market in response to a price increase" (ERG Revised Common Position on remedies, p. 19).
The ERG notes that such markets will normally not be selected for regulation because it is not possible to assess the three criteria: there is a high degree of demand uncertainty and entrants to the market bear a higher risk 32 .
To make the definition operational, it is useful to distinguish further
between the retail services and the underlying infrastructures relied upon to
provide such services. As far as retail markets are concerned, a new service does not emerge/exist when it can be included in a relevant existing market according to the hypothetical monopolist test 33 . This is the case when end-users consider the new service as substitutable for existing services, hence the new service provider is constrained in its prices (cf. box 1 of table 2).
This is the case with Voice over Broadband, for instance, now that it permits
31 In any case a test based on non-replicability or bottlenecks contains nothing to protect new
investment.
32 Similarly, Indepen & Ovum (2005: 3) define an emerging market as: "Any relatively new
market in which there is insufficient information (for example in terms of demand, pricing, price
elasticity and entry behaviour) to carry out the necessary market definition procedures and/or
tests as to whether the market is susceptible to ex ante regulation". BAAKE et al. (2005: 22)
take: "As the necessary condition for a new market the existence of an innovation, i.e. an
increase in general knowledge regarding the possibility of manufacturing or distributing goods
and services."
33 As noted by many like RICHARDS (2006), the application of the SSNIP test to emerging
markets is complex because little information is available.
nearly the same functionalities as voice over PSTN 34 . Conversely, a new
service is considered to be emerging when this service cannot be included in
a relevant existing market because end-users do not consider that this new
service substitutes existing services (cf. box 2). This may be the case with
the next generation of mobile broadband data services providing end users
with internet access through a fast connection and with the added feature of
mobility (ERG Revised Common Position on remedies, p. 20) or extremely
fast fixed broadband access.
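The hypothetical monopolist (SSNIP) reasoning used above to decide whether a new service forms its own market can be sketched as a standard critical-loss calculation. The margin, price-rise and switching figures below are illustrative assumptions only:

```python
# Critical-loss sketch of the SSNIP test: is a small but significant
# non-transitory price increase (say 5-10%) profitable for a hypothetical
# monopolist over the candidate market? All figures are hypothetical.

def critical_loss(price_rise, margin):
    """Fraction of sales the monopolist can lose before the price
    rise stops being profitable (standard break-even formula)."""
    return price_rise / (price_rise + margin)

def ssnip_profitable(price_rise, margin, actual_loss):
    """The rise pays off if actual substitution away stays below the
    critical loss; if so, the candidate market stands on its own."""
    return actual_loss < critical_loss(price_rise, margin)

# Voice over Broadband vs. voice over PSTN: heavy switching, so a price
# rise on VoB alone fails -> VoB belongs in the wider voice market (box 1).
print(ssnip_profitable(price_rise=0.10, margin=0.40, actual_loss=0.35))  # False

# A genuinely emerging service with no close substitute: little switching,
# so the rise is profitable -> a market of its own (box 2).
print(ssnip_profitable(price_rise=0.10, margin=0.40, actual_loss=0.05))  # True
```

As footnote 33 notes, the practical difficulty for emerging markets is that the switching figure is precisely the quantity for which little information exists.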
Table 2: Different cases of emerging markets (*)

                                      Existing services             Emerging services

RETAIL                                Box 1 (incl. VoIP)            Box 2
                                      No regulation in principle    No regulation

WHOLESALE
Existing transmission inputs          Box 3                         -----------
                                      Apply standard SMP regime
                                      (market-based or asset-based approach)

Mixed new transmission inputs         Box 4 (incl. VDSL, FTTx)
                                      Market-based approach: as in Box 3 for existing services
                                      Asset-based approach: as in Box 5 for existing and emerging services

Totally new transmission inputs       Box 5
                                      -----------                   No regulation OR access holidays,
                                                                    depending on the characteristics of the new infrastructure

(*) This table, which distinguishes between existing and new retail markets and between existing and new wholesale inputs, is adapted from a presentation given by R. Cawley at the CICT conference in Copenhagen in December 2005. For an alternative view, see Ovum & Indepen (2005).
In terms of the wholesale inputs, there are three possibilities. An
infrastructure may exist (and possibly have been deployed under a legal
monopoly) and be used to provide existing retail services (cf. box 3). This is
the case of the PSTN network. Alternatively, an upgraded infrastructure or a
new infrastructure may be used to provide both existing and new retail
services (cf. box 4). This is the case with the VDSL or even the Fiber To The
Curb or To The Home network (FTTx). Finally, a new infrastructure may be
used solely to provide emerging retail services (cf. box 5). This was the case
34 Annex of the Communication from the Commission of 6 February 2006 on Market Reviews
under the EU Regulatory Framework: Consolidating the internal market for electronic
communications, COM(2006) 28, p. 4.
with the 2G network when digital mobile voice was launched at the
beginning of the 1990s.
Once the emerging markets have been defined, the second step is to
decide upon the optimal level of regulation required in order to preserve
investment incentives 35 . With regard to retail market regulation, existing services (box 1) should, in principle, be left to competition, with sector regulation phased out, possibly accompanied by a safeguard period to ensure that regulation at the wholesale level is efficient in removing barriers to retail entry. Emerging services (box 2), which entail a much higher risk, should be left to antitrust law alone 36 .
With regard to the regulation of wholesale inputs, the case of existing
transmission infrastructures (box 3) is not controversial. These inputs should
be subject to the standard three criteria test. Once this test is passed, NRAs
may analyse further and possibly regulate the existing transmission inputs.
The case of totally new transmission inputs is not very controversial
either, although it should rarely happen in practice. There are two hypotheses (box 5) 37 . First, the input does not, and will not in the future, meet the conditions of the screening test. In such circumstances, there is no need
to intervene because the market is emerging, and more importantly,
because there is no hard-core market power that justifies regulation.
Alternatively, the totally new transmission input may, in the future, meet
the conditions of the screening test. This situation is trickier because on the
one hand there is hard-core market power that may justify regulation, but on
the other hand, investment incentives need to be preserved 38 . Regulators
may adopt a radical approach and guarantee the operator 'access holidays'
for a certain period of time, like an intellectual property right. The optimal
35 See also Ofcom, March 7th 2006, New Generation Networks: Developing the regulatory framework.
36 This was the case with the ADSL tariffs in the Commission Decision of July 16th 2003, case
38.233 Wanadoo.
37 Another sub-category may also be created between new infrastructures put in place by incumbents and by new entrants, although such a distinction may not be relevant.
38 It can be argued that regulation will not impede the recoupment of investment risk (hence will
not undermine future investment incentives), as any access regulation (and access price)
should provide a premium for investment risk. However, the calculation of this premium is far
from simple, as regulators face difficulties in distinguishing the ex post rewards for risky
investment from monopoly rents, hence there is a possibility that the premium will be set too
low. On this point, see Australian Productivity Commission (2001: 268).
length of such access holidays is difficult to determine. Indepen & Ovum
(2005) propose one third of the life of the asset 39 , whereas BAAKE et al.
(2005) propose a multiple stage approach whereby the situation is assessed
every two to four years. Regulators may also be more interventionist and
impose 'open access regulatory compacts' that leave operators the freedom
to set the level of prices, but establish a structure of prices such that the operator cannot foreclose its competitors in related markets 40 .
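The question of how long an access holiday should last can be made concrete with a discounted-payback sketch. The investment, cash-flow and discount figures are invented, and the comparison with the one-third-of-asset-life rule of thumb (GANS & KING, 2004) is purely illustrative:

```python
# Discounted-payback sketch for an access holiday: how many years of
# unregulated cash flow does the investor need before the sunk
# investment is recouped? All figures are hypothetical.

def payback_years(investment, annual_cash_flow, discount_rate, max_years=50):
    """Smallest number of years whose discounted cash flows cover the
    up-front investment; None if never recouped within max_years."""
    npv = 0.0
    for year in range(1, max_years + 1):
        npv += annual_cash_flow / (1 + discount_rate) ** year
        if npv >= investment:
            return year
    return None

# A fibre-like build with a 30-year asset life, under assumed figures:
years = payback_years(investment=100, annual_cash_flow=15, discount_rate=0.10)
print(years)  # 12 years under these assumptions -- of the same order as
              # one third of a 30-year asset life
```

Such a calculation also illustrates why BAAKE et al.'s multi-stage review is attractive: small changes in the assumed cash flow or discount rate move the implied holiday by several years.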
The most controversial case is the mixed new transmission inputs
(box 4). Some (LAROUCHE, 2006) start at the retail level (i.e. at the top of table 2) and adopt a 'vertical approach'. They argue that box 4 should be
treated in the same way as box 3 for existing services. This is the view taken
by the Commission and the European Regulators Group 41 . Thus, regulators
should deal equally with old and mixed new transmission inputs when they
provide the same existing retail services. That may lead to further analysis,
and possibly an imposition of remedies on the mixed new infrastructure, if
the conditions of the three criteria test are met. For instance, if a VDSL line
or a FTTH line replaces copper pairs, access regulation may continue to be
imposed for the provision of existing retail services (like voice), but not for
the provision of emerging services 42 . However, NRAs should be cautious
not to extend existing regulation to new inputs without an articulated
economic analysis. Indeed, the fact that a mixed new infrastructure has been
deployed may be an indication that there is no structural market failure
justifying regulation altogether.
39 Based on GANS & KING, 2004.
40 This is the approach followed in the Microsoft case, as the company is free to determine its price on the Operating System market, but it may not extend its monopoly from the OS market to related markets: Commission Decision of March 24th 2004, case 37.792 Microsoft.
41 Commission Decision of December 23rd 2005, Case DE/2005/262 (Wholesale Broadband Access in Germany), available at: http://forum.europa.eu.int/Public/irc/infso/ecctf/library. See also the ERG Revised Common Position on remedies, pp. 116-118. Note that Commissioner Reding appears to have changed her mind about regulatory holidays. At the beginning, she seemed to be in favour of such an approach to stimulate investment in new broadband infrastructure (REDING, 2005). Now, on the basis of the data gathered in the 11th Implementation Report (Communication from the Commission of February 20th 2006, European Electronic Communications Regulation and Markets 2005 (11th Report), COM(2006) 68) and an independent study by London Economics (2006) done for the European Commission, she seems much more reluctant to accept regulatory holidays (REDING, 2006: 4).
42 To ensure that investment in new infrastructure is not impeded, regulators may decide to regulate the access price of the mixed new infrastructure on a retail-minus basis (instead of a cost-plus basis).
Conversely, others (Indepen & Ovum, 2005) start at the wholesale level (i.e. at the bottom of table 2) and adopt a 'horizontal' approach. They argue
that box 4 should be treated in the same way as box 5. This is the view of
the U.S. Federal Communications Commission 43 . Thus the regulator should
deal equally with all new infrastructures, independently of the services for
which they are used. For instance, if a VDSL line or a FTTH line replaces
copper, access regulation will be lifted even to provide existing retail
services (like voice).
In practice
To put these principles into practice, this paper suggests that regulators
start by screening markets with the proposed market-based test to detect
structural market failures due to excessive market power. In particular,
regulators should assess whether the supposed market failures are
detrimental to user welfare in the long run; and in cases of conflict between
static and dynamic efficiencies, regulators should favour the latter because dynamic gains and losses are generally more important than static gains and losses (de BIJL & PEITZ, 2002; Indepen & Ovum, 2005: 22).
Regulators should then advance compelling arguments that the benefits of their intervention outweigh its costs. The benefit is the correction of the market failure and the consequent increase in welfare. The costs are the direct costs of designing and implementing the rules, borne by the regulators and the regulatees, and the indirect costs due to type I errors (false condemnation), both of which are substantial in the electronic communications sector 44 . As
cost/benefit analyses are extremely difficult to perform, especially as they
involve predictions of future market developments, a qualitative argument
should suffice when quantitative analysis is not possible or far too
burdensome 45 .
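The cost/benefit screen described here can be written as a simple net-benefit check with the same ingredients. All figures are hypothetical placeholders (only the order of magnitude of the direct costs echoes Cave's EUR 5 million estimate in footnote 44):

```python
# Sketch of the regulatory cost/benefit screen: intervene only when the
# expected welfare gain exceeds direct implementation costs plus the
# expected cost of a false condemnation (type I error).
# All figures are hypothetical placeholders.

def intervention_justified(welfare_gain, direct_costs, p_type1, type1_cost):
    return welfare_gain > direct_costs + p_type1 * type1_cost

# Quantified case: gain 50, direct costs 5 (of the order of the per-Member-
# State market-review estimate), 10% chance of a type I error costing 100.
print(intervention_justified(50, 5, 0.10, 100))  # True

# Dynamic market where a type I error is far more costly: the screen fails.
print(intervention_justified(50, 5, 0.10, 600))  # False
```

When the quantities cannot be estimated, the same comparison can be argued qualitatively, which is exactly the fallback the text allows.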
43 Note, however, that the position of the FCC is mainly due to the high cable penetration in the USA, which has no equivalent in most of the EU Member States.
44 For the direct costs of implementing the European regulation, Cave estimated at a CEPT conference in April 2005 the average cost of the initial market review at EUR 5 million per Member State. Similarly, the Australian Productivity Commission (2001) notes that the Australian incumbent is the biggest consumer of legal services in the country. For the indirect costs, see note 20.
45 In particular, when choosing remedies, the NRAs should carry out a regulatory impact assessment showing that the anticipated benefits of the option selected outweigh its potential costs: Revised Common Position on remedies, p. 56.
Conclusion
To conclude, it is important that regulators have a clear and economically sound test to determine the scope of economic regulation in the electronic communications sector and its optimal balance with competition law. This paper proposes a market-based approach relying on a combination of selection criteria and antitrust methodology to determine the scope of economic regulation. I suggest a clarified three criteria test related to the presence of high non-transitory and non-strategic entry barriers that are mainly of an economic nature, the absence of dynamic competition behind those barriers and a cross-checking criterion related to the insufficiency of antitrust remedies to solve the problems identified. It recalls the importance of using antitrust methodology adapted to reflect the characteristics of the sector. The paper also proposes a clarification of the regulation of emerging markets and suggests drawing a distinction between retail services and underlying wholesale infrastructures. It also suggests that all wholesale access products used for the provision of similar retail services should be dealt with in the same way, independently of their infrastructure (the old copper pair or an upgraded VDSL network), and that only wholesale access products used to provide new retail services should possibly escape regulation.
A. de STREEL
References
ARMSTRONG M. (2002): "The Theory of Access Pricing and Interconnection", in
Cave M., Majumdar S., Vogelsang I (Eds), Handbook of Telecommunications
Economics V.I, North-Holland, pp. 295-384.
Australian Productivity Commission (2001): Telecommunications Competition
Regulation, Final Report, AusInfo.
BAAKE P., KAMECKE U. & WEY C. (2005): Efficient Regulation of Dynamic
Telecommunications Markets and the New Regulatory Framework in Europe, Mimeo.
de BIJL P. & PEITZ M. (2002): Regulation and Entry into Telecommunications
Markets, Cambridge University Press.
BUIGES P. (2004): "A Competition Policy Approach", in P. Buiges & P. Rey (Eds),
The Economics of Antitrust and Regulation in Telecommunications, E. Elgar, 9-26.
CAVE M.:
- (2004): "Economic Aspects of the New Regulatory Regime for Electronic
Communication Services", in P. Buiges & P. Rey (Eds), The Economics of Antitrust
and Regulation in Telecommunications, E. Elgar, pp. 27-41.
- (2006), "Encouraging infrastructure competition via the ladder of investment",
Telecommunications Policy 30, pp. 223-237.
CAVE M. & VOGELSANG I. (2003): "How access pricing and entry interact",
Telecommunications Policy 27, pp. 717-727.
CAVE M. & CROWTHER P. (2005): "Pre-emptive Competition Policy meets
Regulatory Antitrust", European Competition Law Review, pp. 481-490.
DEGRABA P. (2002): "Central Office Bill and Keep as a Unified Inter-Carrier
Compensation Regime", Yale Jour. on Regulation 19, 36.
DEMSETZ H. (1968): "Why Regulate Utilities?", Jour. of Law and Economics XI, pp.
55-65.
DOBBS I. & RICHARDS P. (2004): "Innovation and the New Regulatory Framework
for Electronic Communications in the EU", Eur. Comp. Law Rev., pp. 716-730.
ERGAS H., WATERS P. & DODD M. (2005): Regulatory approaches to Mobile
Virtual Network Operators (MVNOs), Study prepared for Vodafone.
EVANS D.S. & A. J. PADILLA (2004): "Designing Antitrust Rules for Assessing
Unilateral Practices: A Neo-Chicago Approach", CEPR Discussion Paper 4625.
FUSS M.A. & WAVERMAN L. (2002): "Econometric Cost Functions", in Cave M.,
Majumdar S., Vogelsang I. (Eds), Handbook of Telecommunications Economics, V.I,
North-Holland, pp. 144-177.
GANS J. & KING S. (2004): "Access Holidays and the Timing of Infrastructure
Investment", The Economic Record 80(284), pp. 89-100.
GARZANITI L. (2003): Telecommunications, Broadcasting and the Internet: EU
Competition Law and Regulation, 2nd ed., Sweet & Maxwell.
GASMI F., KENNET M., LAFFONT J.J. & SHARKEY W.W. (2002): Cost proxy
models and telecommunications policy: A new empirical approach to regulation, MIT
Press.
GERADIN D. & KERF M. (2003): Controlling Market Power in Telecommunications:
Antitrust vs Sector-Specific Regulation, Oxford University Press.
HAUSMAN J.A. (1997): "Valuing the Effect of Regulation on New Services in
Telecommunications", Brookings Papers: Microeconomics, pp. 1-38.
HAUSMAN J.A. & SIDAK J.G.:
- (1999): "A Consumer-Welfare Approach to the Mandatory Unbundling of
Telecommunications Networks", Yale Law Jour. 109, pp. 417-505.
- (2005): "Did Mandatory Unbundling Achieve its Purpose? Empirical Evidence from
Five Countries", Jour. of Competition Law and Economics 1(1), pp. 173-245.
HORROCKS J. (2005): A Model for Interconnection in IP-based Networks, ECC
TRIS Report.
Indepen & Ovum (2005): Regulating Emerging Markets?, Study prepared for OPTA.
KATZ M.L. (2004): "Antitrust or regulation? US public policy in telecommunications
markets", in P. Buiges & P. Rey (Eds), The Economics of Antitrust and Regulation in
Telecommunications, E. Elgar, pp. 243-259.
KRÜGER R. & DI MAURO L. (2003): "The Article 7 consultation mechanism:
managing the consolidation of the internal market for electronic communications",
Comp. Policy Newsletter 3, pp. 33-36.
LAFFONT J.J. & TIROLE J. (2000): Competition in Telecommunications, MIT Press.
LAROUCHE P.:
- (2000): Competition Law and Regulation in European Telecommunications, Hart.
- (2002): "A closer look at some assumptions underlying EC regulation of electronic
communications", Jour. of Network Industries 3, pp. 129-149.
- (2006): "Contrasting Legal Solutions and Comparability of the EU and US
Experiences", Presented at the Conference Balancing Antitrust and Regulation in
Network Industries.
London Economics (2006): Measuring the impact of the regulatory framework on
growth and investments in e-coms, Study for the European Commission.
McAFEE R., MIALON H.G. & WILLIAMS M.A. (2004): "What is a Barrier to Entry",
American Economic Review: AEA Papers and Proceedings 94(2), pp. 461-465.
MOTTA M. (2004): Competition Policy: Theory and Practice, Cambridge University
Press.
OECD (2006): Barriers to entry, DAF/COMP(2005)42.
REDING V.:
- (2005): "The review of the regulatory framework for e-Communications", Speech,
September 15th.
- (2006): "The information society: Europe's highway to growth and prosperity",
Speech March 6th.
RICHARDS P. (2006): "The limitations of market-based regulation of the electronic
communications sector", Telecommunications Policy 30, pp. 201-222.
SCHUMPETER J. (1964): The Theory of Economic Development: An Inquiry into
Profits, Capital, Credit, Interest and the Business Cycle, Cambridge University Press.
SHARKEY W.W. (2002): "Representation of Technology and production", in Cave
M., Majumdar S. & Vogelsang I (Eds), Handbook of Telecommunications Economics
V.I, North-Holland, pp. 180-222.
Squire-Sanders-Dempsey & Analysys (1999): Consumer demand for
telecommunications services and the implications of the convergence of fixed and
mobile networks for the regulatory framework for a liberalised EU market, Study for
the European Commission.
Squire-Sanders-Dempsey & WIK Consult (2002): Market Definitions for Regulatory
Obligations in Communications Markets, Study for the European Commission.
de STREEL A. (2004): "Remedies in the Electronic Communications Sector", in D.
Geradin (Ed.), Remedies in Network Industries: EC Competition Law vs. Sector-specific
Regulation, Intersentia, pp. 67-124.
SCHMALENSEE R. (2004): "Sunk Costs and Antitrust Barriers to Entry", American
Economic Review: AEA Papers and Proceedings 94(2), pp. 471-475.
SUTTON J. (1991): Sunk Costs and Market Structure, MIT Press.
TEMPLE LANG J. (2006): "Competition Policy and Regulation: Differences,
Overlaps, and Constraints", presented at the Conference Balancing Antitrust and
Regulation in Network Industries.
TIROLE J. (2004): "Telecommunications and competition", in P. Buiges & P. Rey
(Eds), The Economics of Antitrust and Regulation in Telecommunications, E. Elgar,
pp. 260-265.
VALLETTI T. (2004): "Market Failures and Remedies in Mobile Telephony", Jour. of
Network Industries 5, pp. 51-81.
VOGELSANG I. (2003): "Price Regulation of Access to Telecommunications
Networks", Jour. of Economic Literature XLI, pp. 830-862.
Features
Regulation and Competition
Firms and Markets
Technical Innovations
Public Policies
Use Logics
Book Review
Regulation and Competition
Does Good Digital Rights Management
Mean Sacrificing the Private Copy?
Ariane DELVOIE
Cabinet Alain Bensoussan, Paris
DRM, or Digital Rights Management, refers to the technology used to
secure digital works and the management of access rights to those
works. Through the use of four components – the encoder which encrypts
the files protected by copyright, the streaming server which provides access
to the files, the reader which decrypts the coding, and the management
software which determines to whom the rights belong and how they are to
be distributed – DRM architecture permits:
- on the one hand, tracing file users' activity, in order to verify whether
access to the files in question is authorized, and whether the user is
complying with applicable copyrights;
- on the other hand, proscribing or limiting access to the digital work or
copies thereof.
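As an illustration, the interplay of the four components can be sketched as follows. Every name here is hypothetical, and the XOR "cipher" is a toy placeholder: real DRM systems rely on strong, often hardware-backed cryptography.

```python
from dataclasses import dataclass

KEY = b"secret"

def encode(data: bytes) -> bytes:
    """Encoder: scrambles the copyrighted file (toy XOR stream)."""
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))

@dataclass
class License:
    """Management software's record of who holds which rights."""
    user: str
    may_play: bool
    copies_allowed: int

class StreamingServer:
    """Streaming server: provides access to protected files and traces activity."""
    def __init__(self, files: dict, licenses: dict):
        self.files = files          # name -> encoded bytes
        self.licenses = licenses    # user -> License
        self.access_log = []        # trace of file users' activity

    def fetch(self, user: str, name: str) -> bytes:
        self.access_log.append((user, name))   # tracing function
        lic = self.licenses.get(user)
        if lic is None or not lic.may_play:    # "lock" function
            raise PermissionError("access not authorized")
        return self.files[name]

def reader(protected: bytes) -> bytes:
    """Reader: decrypts the coding (XOR with the same keystream is its own inverse)."""
    return encode(protected)

server = StreamingServer(
    files={"film": encode(b"video-data")},
    licenses={"alice": License("alice", may_play=True, copies_allowed=1)},
)
assert reader(server.fetch("alice", "film")) == b"video-data"
```

The sketch makes the article's point concrete: the same architecture that authorizes playback also logs every access and can refuse it outright.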
The second of these "lock" functions was addressed in the May 22, 2001
Community Directive 2001/29/EC, harmonizing certain aspects of copyright
and related rights in the information society, and subsequently by the
French transposition bill ("DADVSI", on copyright and related rights in the
information society), adopted by the French Senate on May 11th, 2006 and
now being examined by a joint commission of seven senators and seven
deputies tasked with agreeing on a final text as soon as possible.
These two texts officially establish the protection of "effective technical
measures intended to prevent or limit uses not authorized by a copyright
COMMUNICATIONS & STRATEGIES, no. 62, 2nd quarter 2006, p. 171.
owner, or owner of a related right, of a work, performance, audio recording,
video recording, or program outside the software application."
Do these measures sound a death knell for the right of a legal user to
make a personal (backup) copy of digital materials?
To be sure, the DADVSI Bill, which echoes the terms of the Directive,
reaffirms the right to a private copy, which the management technology
ought not to encumber 1. However, this right to a private copy is subject to
three conditions, two of which are largely subjective, directly inspired by
Article 9.2 of the Berne Convention, namely:
- the beneficiary of the right to a private backup copy must be entitled to
legal access to the work in the first instance;
- creation of the private backup copy should not encumber in any way
the normal exploitation of the work by copyright holders; and
- the creation of the private backup copy must not create any unjustified
prejudice or injury to the legitimate interests of the copyright owner.
What are we to understand is meant by "normal exploitation of the work"?
This question is left to liberal interpretation by the judge, which may lead to
contradictory rulings. The "Mulholland Drive" Affair is an excellent illustration
of these contradictions in the judicial interpretation of "normal exploitation."
While the Cour d'Appel (Court of Appeals) in Paris considered, in its April 22,
2005 injunction, that a private copy of a DVD could not be seen as impeding
the normal exploitation of the work, the First Civil Chamber of the Cour de
Cassation (French Supreme Court), in its February 28, 2006 decision,
affirmed to the contrary that, taking into account the economic importance of
DVD distribution toward defraying the costs of movie production, a private
copy did represent an imposition on normal exploitation by the copyright
holder.
Thus, the French Supreme Court, in reviewing the arguments upheld by
the judges in the lower court 2 , held that the economic impact of an
additional (private) copy must be taken into account in the digital domain.
The court did not address the conflict here with the terms of Article L.122-5
of the Intellectual Property Code (CPI), under which "the author may not
prohibit copies or reproductions retained for the sole purpose of private use
1 Article 8 of the DADVSI Bill.
2 TGI Paris, 30 April 2004 (see: juriscom.net, legalis.net, foruminternet.org), GTA July 2004.
"Exploitation normale d'une œuvre numérique : vers le Fair Use américain ?", Benoît de
ROQUEFEUIL, Ariane DELVOIE.
by the copying party, which copies are not intended for use by any other
party."
Indeed, the particular person who purchased the DVD, and who is
expected to be the copying party falling within the ambit of CPI Art. L.122-5,
has no justifiable need for making multiple copies of his DVD for private use.
Nonetheless, such a position on the part of the judges raises the question of
the legitimacy of the tax on blank recording media 3 .
As the Director of Studies and Communication of Que Choisir 4 has
highlighted 5, since "blank DVD royalty taxes are the highest in France"
while it is "the place where the gamut of rights is weakest," we reach a
certain paradox, one which invites a fresh look at lowering the remuneration
derived from the tax on blank media for private copies.
Far from the Anglo-Saxon common law system of "precedents," our
system does not allow us to treat the holding of the French Supreme Court
as stating an immutable principle of interpretation of the idea of "normal
exploitation of the work."
To alleviate these problems of interpretation, the DADVSI Bill, in its
Articles 8 and 9, creates an "Authority for the regulation of technical
measures" which will fix the minimum number of authorized private copies
with regard to the type of work at stake, the media used for the
representation of the work, and the kind of technical measure used.
Any disputes with regard to mechanisms constraining the benefits of the
private copy right may be submitted to this Authority by any person claiming
to be a beneficiary of the right of private copy.
This Authority has as its stated objective the determination of how DRM
should be applied in each case, in order to safeguard to some extent the
right to a private copy: it will first seek a reconciliation and, failing that,
issue either an injunction or a rejection with respect to the
3 Many European countries tax blank recording media and redistribute those imposts as
royalties to copyright holders, based on the presumption that many or most copies produced on
these media are of copyrighted content.
4 French Association for the protection of consumers.
5 "Copie Privée sur les DVD : l'UFC-Que choisir prêt à repartir à la bagarre en appel", Estelle
DUMOUT, ZDNet.fr, 1st March 2006 (http://www.zdnet.fr/).
person who alleges himself to be a legitimate beneficiary of the right to a
private copy.
Still, will an Authority composed of magistrates or independent
functionaries 6 enjoy sufficient legitimacy and perceived authority in the
digital community to speak authoritatively on questions of digital rights
management?
6 Article 9 of the DADVSI Bill.
Firms and Markets
IPTV markets
New broadband service promising to upset the balance of the TV market (*)
Jacques BAJON
IDATE, Montpellier
IPTV was unveiled in the late 1990s through VoD offers. With a base of
2.5 million subscribers, the market is currently in a phase of heightened
deployment. Telecom operators, both incumbent and alternative, are taking
up positions in the IPTV market, which will have an impact on all markets
worldwide. IPTV is based on broadband access and is benefiting from that
market's stunning growth, with TV services being offered as part of bundling
strategies. It is expected to gain a sizeable share of the TV market:
forecasts point to 40 million subscribers in 2010.
The IPTV market is still characterised by the coexistence of broadcasting
and broadband services. But, this new broadband service will upset existing
balances in the TV market, fuelling a trend of increasingly individualised
consumption of TV services and, in the longer term, of synergies with the
Internet Universe.
(*) This market report, published by IDATE, provides in-depth analysis of the IPTV market and
its ongoing development: technical architecture, deployment around the globe, new services
and challenges. This new means of distributing TV programmes is a key growth relay for telcos
which are investing massively in the TVoDSL market. As a result, not only the TV sector, but
our television viewing habits are likely to be altered considerably.
COMMUNICATIONS & STRATEGIES, no. 62, 2nd quarter 2006, p. 175.
IPTV now a technical reality
While using the internet protocol for video transport goes beyond the
scope of fixed telecom networks, it is central to the development of IPTV
services. The technical progress being made enables service deployments,
as do increased broadband penetration and bitrates, multicasting and new
compression standards. These various elements make it possible to step up
IPTV service deployments, opening the way to enhanced offerings that
include high-definition TV and dual stream systems. Added to this, more
stable IPTV middleware solutions are expected to enable new players to
enter the fray.
Telcos investing in the market
IPTV service operators are crossing over with internet access providers,
whether tied to incumbent or alternative telcos. One of the top priorities for
incumbents is consolidating the fixed line – seeking to compensate for the
drop in market share and revenues in the telephony and internet access
sectors. Alternative players, in the meantime, are working to increase their
share of the market. Service bundles help increase existing customers'
loyalty, and to attract new ones, and TV has now become an additional
incentive to subscribe. TV services allow operators to increase, or at the
very least maintain, their ARPU.
State of the local market is key in shaping the offer
The steady growth of the globe's broadband access base over the past
few years has driven a rise in the average bitrates being offered to
subscribers. The situation in each country still varies a great deal, however,
in terms of coverage and degree of market competition. Local TV markets
differ a great deal, depending on the level of pay-TV penetration, available
TV multichannel offers and the services' digitisation.
While the regulatory situation has become clearer in Europe, the
same cannot be said of the rest of the world.
most popular TV channels is still problematic in certain Asian countries, as is
obtaining an IPTV licence in places like China. In the US, meanwhile, the
system in place for negotiating local franchises is proving a major obstacle
for the RBOCs.
Figure 1: IPTV subscribers around the globe, 2004-2010 (millions, scale 0 to 45),
broken down by region: Asia, Europe, North America, RoW
2.5 million IPTV subscribers worldwide in 2005
Europe is home to more than 1 million active IPTV households, with
France, Spain and Italy proving the most dynamic markets. The IPTV offers
currently available in the United States were initially based on services
marketed by local operators, before the RBOCs launched massive network
deployments. The country's IPTV viewer base totaled over 500,000
households. In Asia's most mature markets (Japan and South Korea), which
are equipped with the most advanced internet structures on the planet, a
range of IP services are being developed that include broadcasting of
thematic channels and VoD. Nevertheless, except for PCCW in Hong Kong,
whose base is over the half million mark, and in part because of the difficulty
in obtaining licences, IPTV development in that part of the world remains
disappointing, compared to its "high-speed" potential. Here, China and India
will emerge as major growth pools.
IPTV solutions provider market taking shape
Solutions providers have recently begun to merge and form alliances, to
be in a position to offer the most complete line of services possible. Cases in
point include the Microsoft-Alcatel alliance, Siemens's take-over of Myrio,
Cisco's takeover of Scientific Atlanta, and the merger between Thales
Broadcast Multimedia and Thomson. A horizontal concentration, too, could
well come to pass, as part of a bid to offer complete triple play solutions,
voice included. Most of the first telcos to enter the market relied on
in-house solutions, given that off-the-shelf solutions are only just now
beginning to appear. Estimates indicate that roughly half of the world's IPTV
operators currently rely on in-house solutions, and the other half on solutions
designed by a third party. Contracting a supplier-integrator is expected to
gradually become more common, and facilitate service deployments for new
entrants to the IPTV market. It nevertheless remains that the broad array of
architectures and networks being used by telcos, and the variety of TV and
video services in the marketplace, will inevitably lead to a major service
integration phase.
IPTV helping to enhance the TV offer…
The digital, multichannel TV offer is progressing, thanks to the
development of basic TV services, and the gradual inclusion of local
channels and special events channels. In the long run, IPTV will allow
viewers to manipulate personal content, and enable the integration of video
offers from the web. Beyond the expansion and diversification of the TV
offer, IPTV is also helping to boost the take-up of triple play bundles. And
new services may soon see the light of day: 4-play bundles that include
mobile telephony, communication tools and home networking.
…and enabling more individualised TV viewing
The TV sector is evolving irreversibly from a supply side economy to one
driven by demand (personal TV, VoD, individual media, etc.), wherein TV
viewing is less and less a linear affair, and more and more personalised and
individualised. In the medium term, 25% of programmes could be watched
outside the airing grid. IPTV is likely to take advantage of the growing trends
of always-on connections, open return paths and client-server architectures.
This means that, after the integration of broadcasting systems on the
network thanks to multicasting, a return to unicast (or point-to-point)
streams could become a mainstay of the IPTV distribution market.
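The economics behind that shift are easy to illustrate: a multicast channel costs the network one stream regardless of audience size, whereas unicast (point-to-point) delivery scales with the number of viewers. A rough comparison (the bitrate and viewer figures are illustrative assumptions, not IDATE data):

```python
def delivery_bandwidth_mbps(viewers: int, stream_mbps: float, mode: str) -> float:
    """Aggregate network load for one channel under each distribution mode."""
    if mode == "multicast":
        # one copy of the stream, replicated inside the network
        return stream_mbps
    if mode == "unicast":
        # one dedicated stream per viewer (VoD, personal TV)
        return stream_mbps * viewers
    raise ValueError(f"unknown mode: {mode}")

# Illustrative: a 3 Mbps SDTV channel watched by 100,000 households
assert delivery_bandwidth_mbps(100_000, 3.0, "multicast") == 3.0
assert delivery_bandwidth_mbps(100_000, 3.0, "unicast") == 300_000.0
```

The five-orders-of-magnitude gap explains why broadcast channels lean on multicasting while on-demand, individualised viewing pushes operators back toward unicast capacity planning.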
Critical decisions when designing a service
The goals of acquiring and keeping subscribers, and of increasing revenues,
overlap more than they conflict. Two IPTV distribution models in particular
are commonly used: the low-cost triple play that includes at least a basic
multichannel TV offer, and aimed at gaining a greater share of the access
market, and premium offers whose goal is to increase ARPU. A third, more
flexible model is emerging, which expands the options offered with
triple plays and incorporates an initial basic TV offer. For alternative
operators in particular, IPTV is based on a variable cost model. One of the
key elements in operating a TV over DSL service is the predominance of
variable costs: fixed line access per user, set-top box, subscriber acquisition
and management costs, programme costs. This marks a major break from
the configuration under which operators have to deploy a new network. The
technical choices too will be decisive, such as the choice between a DSL or
FTTx access network, which compression standard to use, etc.
Some uncertainties
The quality of the IPTV stream is generally below that obtained on
other digital TV networks and, above all, fluctuates from one point in the
network to another. But operators have recently made changes to their
coverage policies, and technical solutions are being developed, both of
which are helping to bring network reliability closer to the level offered by
"classic" digital TV market standards. Operators need to ensure that they will
be able to expand their service's coverage nationally, and secure access to
end users. Also worth mentioning is competition with cable networks: IPTV
services are coming to occupy the same segment as cable networks, both in
terms of coverage, highly concentrated in urban centres, and in terms of
service policies, focused on triple play offers.
We can expect to see a strain on ARPU levels. Increased competition in
the TV market, as new players enter the fray, combined with a trend of
decreasing prices for bundled services, could well be detrimental to the
average per subscriber revenues. This trend will only be accentuated if IPTV
expands the multichannel offering, and triggers a price war. This means that
the only way to maintain ARPU growth will be to continually enhance the
offer. Finally, regulation remains a grey area in some markets, although
positive developments are expected.
IPTV: opportunities and threats for the sector's players
Incumbent telcos' business model is evolving from traffic to access
provision. Becoming customers' access provider allows them to become the
supplier of all services carried over a broadband network (TV, internet,
telephony), and so consolidate the fixed line. The fact of including a TV offer
helps boost the appeal of the access service, and so their ability to keep
their customers. TV services also help generate additional revenues. The
main threat lies in the loss of market share and revenues in the telephony
and internet access sectors.
For the competitors, namely alternative operators, depending on the
degree of "control" they have over the network, a position in the TV market
could, by default, include carriage of DTT offers using a hybrid STB. It may
also translate into a more aggressive strategy aimed at increasing market
share by enhancing their TV offer which could include a basic offer of 25 to
50 channels. In the most dynamic markets, ISPs without a TV offer run the
risk of being marginalised. But internet services, and bundles that include
access plus voice, still make up the core of the access market, and are still
very popular with a great many consumers, particularly those who already
receive a multichannel TV offer in other ways.
TV channel operators may benefit from IPTV's growing popularity by
increasing the licensing fees earned from pay-TV operators, and from a
potential increase in advertising revenues for thematic channels. These
channels could, however, elect to by-pass TV aggregators and opt for new
distribution systems created by IPTV operators.
The increasingly personalised nature of TV viewing will drive the
programmes' rights holders to increase the value of their catalogues. This
trend could become a source of added income for the channels that financed
the programmes or, on the flipside, create a threat to TV channel operators'
programming business.
Pay-TV packagers face the greatest threat. They could suffer from the
growing adoption of pick and mix channel systems, and access providers'
desire to package their TV offers themselves. They may also suffer from the
expansion of "free" multichannel TV offers. Most telling so far is their loss of
control over the STB and related revenues. On the other hand, IPTV
provides added coverage for TV packages, on satellite in particular. Even if
the dismantling of TV offers poses a threat to service operators, they
nonetheless still boast major assets, namely their expertise in aggregating
TV offers and their brand name clout.
Cable operators too have some sizeable assets. They pioneered the
double and, in some cases, triple play, boast longstanding experience in
selling TV services and bundled services, and have built up substantial
customer bases. As it stands, cablecos' involvement with IP is confined to its
use as a new transport protocol. But, telcos marketing IPTV services are
emerging as potentially fierce rivals, targeting the same clientele, and
operating in a much broader financial arena.
Table 1 - Range of TV/video services available via IPTV
Services compared across operators (Free, FastWeb, Imagenio, HomeChoice,
Yahoo! BB): basic TV, TV bouquet, pick & mix channels, local channels, HDTV,
special events channels (e.g. Tennis 2004, Big Brother), PPV, VoD, PVR.
Source: IDATE
Table 2 - IPTV: opportunities and threats for operators

Incumbent telcos
- Opportunities: maintaining their broadband subscriber base; increasing ARPU
- Threats: alternative operators' aggressive strategies (triple plays and
development of VoIP services)

Alternative operators
- Opportunities: increasing their share of the broadband and triple play markets
- Threats: growing competition, and the race to achieve critical mass; price wars
over basic TV offers

TV service operators
- Opportunities: potential to generate added revenues; programme sales on a
per-unit basis
- Threats: rise of on-demand services; reduced weight of content in convergent
offers; dismantling of pay-TV package operators' bouquets

Cablecos
- Opportunities: growing ubiquity of IP in transport networks and for VoD
- Threats: emergence of a new, head-on competitor
Technical Innovations
Mobile Television
Peter CURWEN
University of Strathclyde, Scotland
Downloading content to mobile handsets can be effected either point-to-point
(streamed to an individual handset via a conventional mobile
network) or point-to-multipoint (via a broadcasting network as with
network) or point-to-multipoint (via a broadcasting network as with
conventional TV and radio). Virtually all handsets now come equipped with a
capability to handle relatively low-speed data sent point-to-point using
technology based upon GSM, such as GPRS, or upon CDMA, such as
cdma2000 1xRTT. This is not well-suited for visual imagery beyond still
photography, but increasingly consumers are turning to handsets with much
faster download speeds that meet the definition of so-called 3G, typically the
W-CDMA (UMTS) or cdma2000 1xEV-DO variants, and these largely do
away with the problem of jerkiness. Furthermore, the quality of screen
imagery, and in many cases the size of screens, has improved very rapidly,
and a modern handset can be expected to provide a crystal-clear image.
Given an almost universal familiarity with TV and mobile telephony and the
increasing prevalence of handsets capable of high-speed data transfer, it is
hardly surprising, therefore, that mobile TV is seen in some quarters as the
next 'killer application'.
Competing technologies
The simplest way to provide mobile TV is to stream content along a high-speed
data network. However, such networks are by no means
commonplace outside Europe, provide patchy coverage even in countries
COMMUNICATIONS & STRATEGIES, no. 62, 2nd quarter 2006, p. 183.
where they are available and run at average speeds well below their
theoretical maxima. The situation is certainly improving, and considerably
faster networks based largely on High-Speed Packet Access (HSPA) are
beginning to be launched during 2006. Pending their widespread
introduction, it is evident that the service is bound to be somewhat expensive
because of the download time involved and could potentially clog up the
networks. For this reason, it is desirable to send TV to mobile devices via a
different method, independent of existing mobile networks.
The favoured solution is Digital Video Broadcast(ing) Handheld (DVB-H)
technology, whereby a TV signal is beamed to an additional digital TV
receiver within the handset. This is based upon the well-established
DVB-Terrestrial (DVB-T) standard, with modifications to support small
battery-driven devices. The technology for integrating IP into the DVB broadcast
also exists in the form of IP-datacast (IPDC) 1. This makes it possible to use
the same streaming video protocols (for example, MPEG-based) that are
used on the internet for DVB broadcasts.
DVB-H is currently associated primarily with developments in Europe.
The European Commission, following the pattern previously used
successfully with respect to GSM, for example, determined to establish a
common standard that could be imposed across the EU, and the European
Telecommunications Standards Institute (ETSI) accordingly adopted DVB-H
as the European standard at the end of 2004.
However, DVB-H is not universally popular, and has not been chosen in
South Korea which, as ever, is at the forefront of technological
developments. There, the favoured solution is its home-grown Digital
Multimedia Broadcasting (DMB) service, based upon the DAB standard
used for digital radio 2 . The underlying strategy in choosing a proprietary
route was not, however, to keep the technology in-house. Rather, it was that
the initial service, launched as satellite DMB (S-DMB), would be followed by
terrestrial DMB (T-DMB) and then exported to Europe.
In Japan there is also a desire to promote a home-grown standard,
although this does not preclude the possibility that other standards such as
DVB-H and DMB, which are undergoing trials there, will also establish a
1 See www.dvb.org and http://ipdc-forum.org respectively.
2 A huge amount of material on DAB, essentially in relation to digital radio broadcasts, can be
found at: www.worlddab.org.
P. CURWEN
foothold. This domestic technology is known as Terrestrial Integrated
Services Digital Broadcasting (ISDB-T), and for the time being at least, there
appears to be little interest in establishing it as a global standard. First
adopted in December 2003, it is supported by the Japanese Association of
Radio Industries and Business (ARIB). ISDB-T is designed such that each
channel is divided into thirteen segments. High-Definition Television (HDTV)
content fills twelve segments leaving the thirteenth free for broadcasts to
mobiles – hence what is popularly known as the 'One-seg[ment]' strategy (1-SEG) 3.
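The segment arithmetic behind 1-SEG can be sketched as follows. Note the assumptions: the 6 MHz Japanese broadcast channel width and the convention of leaving one segment-width as a guard band are standard ISDB-T practice but are not stated in this article, which mentions only the thirteen-way division.

```python
# Sketch of the ISDB-T channel layout described above: thirteen
# segments, twelve carrying HDTV and one ('1-SEG') for mobiles.
# ASSUMPTION: a 6 MHz channel split fourteen ways, with one
# segment-width left as a guard band.

CHANNEL_KHZ = 6_000
SEGMENTS = 13

segment_khz = CHANNEL_KHZ / (SEGMENTS + 1)   # one extra width as guard band
layout = ["HDTV"] * 12 + ["1-SEG mobile"]

print(f"{segment_khz:.1f} kHz per segment")  # prints 428.6 kHz per segment
print(layout.count("HDTV"), "HDTV segments;",
      layout.count("1-SEG mobile"), "mobile segment")
```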
The USA, for its part, usually opts for technological competition, which
does have the downside that the end-result can be too many choices, none
of which attain the requisite scale. In this case, while it is open to technology
of overseas origin, it has a domestic champion, Qualcomm, which is
advocating its proprietary MediaFLO system. DVB-H and FLO technology
are similar in that both are based on Orthogonal Frequency Division
Multiplexing (OFDM) and both are designed to transmit roughly fifteen
frames of video per second to a Quarter VGA (QVGA) screen. Furthermore,
both aim to provide four hours of continuous viewing from a single battery
charge. At first sight, DVB-H appears to have the upper hand since it is
being trialed by numerous operators on a worldwide basis, with each
independently arranging equipment and content deals.
In contrast, MediaFLO USA is not only developing its core technology
and chipsets, but acquiring spectrum and building its own network.
Qualcomm's plan is to replicate its marketing techniques for CDMA – that is,
to license FLO technology and sell FLO chipsets to other equipment
vendors. Qualcomm claims that, for any given spectrum band, FLO will
provide twice the coverage or twice the number of channels (FITCHARD,
2005), and if that proves to be the case – for now its status is merely an
unproven claim 4 – FLO may prove hard to resist.
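Qualcomm's claim is, at bottom, a statement about channel arithmetic. A minimal sketch, using a hypothetical multiplex payload (the 6 Mbps figure is an assumption, not from the article) together with the 384 kbps full-QVGA per-channel rate reported in footnote 4, and taking the claimed two-fold advantage at face value:

```python
# Channel-count arithmetic behind the 'twice the channels' claim
# (FITCHARD, 2005). The 6 Mbps multiplex payload is hypothetical;
# 384 kbps is the full-QVGA per-channel rate from footnote 4.

def channel_count(multiplex_kbps: int, per_channel_kbps: int) -> int:
    """How many video channels fit in a given multiplex payload."""
    return multiplex_kbps // per_channel_kbps

MUX_KBPS = 6_000        # hypothetical usable broadcast payload
QVGA_KBPS = 384         # per-channel rate at full QVGA resolution

dvb_h_channels = channel_count(MUX_KBPS, QVGA_KBPS)
flo_channels = 2 * dvb_h_channels   # claimed advantage, at face value
print(dvb_h_channels, flo_channels)  # prints 15 30
```

The same doubling could equally be spent on coverage rather than channels, which is how the claim is usually phrased.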
There are other possibilities – for example, Singapore has chosen the
Eureka 147 standard – but the main choice facing network operators is
whether to stick to streaming via 3G/HSPA or adopt DVB-H, DMB or FLO or,
indeed, any combination thereof. There is no guarantee that the best
3 For technical details see www.dibeg.org and www.rohde-schwarz.com.
4 As of end-2005, DVB-H had managed a best performance in trials of 16 simultaneous video
channels streaming at 112 kbps and producing 15 frames per second at sub-QVGA resolutions.
This was expected to improve to 384 kbps at full QVGA resolution in early 2006, roughly the
same specification as for FLO, but with half the number of frames per second.
technology – assuming that any one technology can be said unequivocally to
be the best – will be the most successful because, as noted, there are major
vested interests involved.
Streaming content
Streaming content is particularly associated with the introduction of 3G
and HSPA. Because 3G was widely licensed across Europe at an early
stage, and DVB-H is for now unavailable, it is appropriate to include some
brief examples of streaming from that region. Orange UK, for example,
introduced its Orange TV service in May 2005. The downside was that
potential customers had to be covered by the Orange 3G service, own a
Nokia 6680 and be willing to pay GBP 10 (EUR 15) a month. It was also
noted that streaming TV over 3G would potentially put the network under
severe strain. For its part, attempting to skirt around these difficulties,
Vodafone launched its own service – Sky Mobile TV – in conjunction with
BSkyB in October, providing 19 channels over the Vodafone live! with 3G
network. This was provided free until the end of January 2006 when a GBP
3 to GBP 10 a month charge kicked in. It is worth noting that some of Sky
One's most popular programmes were unavailable on account of difficulties
in obtaining the mobile rights 5 . These difficulties can be greatly mitigated if
programmes are made specifically for viewing on mobile devices, and to that
end, for example, Endemol in the UK set up a mobile TV division largely
dedicated to youth-oriented soap operas, comedy and 'extreme' reality
shows.
Streaming over 3G is likely to fill most of the void elsewhere in Europe
pending the arrival of DVB-H, and to this end, for example, a joint venture
involving Kanmera and the leading Swedish tabloid Expressen-TV
announced the launch of live streamed TV over the Vodafone network in
Sweden in November 2005. At the same time SFR in France announced
plans shortly to launch 'SFR-TV-Video' over the Vodafone live! with 3G
network 6. Using technology provided by Streamezzo, the service was
5 See "BSkyB in Vodafone mobile TV deal" at: http://newsvote.bbc.co.uk of 17th November 2005
and BILLQUIST (2006), which noted that the service had provided six million (free) sessions in
its first two months.
6 See "SFR to offer mobile TV" at: www.dmeurope.com of November 16th 2005. The trial
appears to have ended up involving SFR, Canal+, Nokia and Tower Cast and using the Nokia
7710, with the 500 participants watching on average for 20 minutes a day. 73 per cent claimed
initially provided free, but required the ownership of a Nokia 6630, 6680 or
N70 or alternatively a SonyEricsson V600i handset.
After a slow start, partly due to competing 2G technologies and the lack
of dedicated W-CDMA spectrum, the USA has taken streaming to its heart in
a major way. For example, in November 2005, Alltel (fresh from absorbing
Western Wireless) signed a deal with MobiTV to deliver TV via mobile using
the Motorola RAZR V3c. The service included live TV broadcasts and made-for-mobile content - 25 channels altogether. The cost was USD 9.99 a month plus USD 200 for the handset on a two-year contract. This service was very similar to that provided by Cingular Wireless, also with MobiTV. However, this really
needed to be underpinned with a USD 20 a month data plan. For its part,
Sprint Nextel also uses MobiTV, offering packages for USD 15, USD 20 and
USD 25 a month including data charges 7 . Verizon Wireless, however, uses
VCast to provide a service for USD 15 a month including data charges over
its cdma2000 1xEV-DO network. The third main system that can be used for
streaming is SmartVideo.
As a final brief example of what is happening elsewhere, Vodafone NZ
launched a mobile TV service in New Zealand in August 2005 which is
available wherever there is 3G coverage for a fee of NZD 3 (USD 1.90) per week per channel for unlimited access or NZD 1 for a single 15-minute
session.
Technology trials
DVB-H
DVB-H is mainly undergoing trials in Europe, but is under consideration
on a modest scale in many other parts of the world. A Nokia Streamer - a
device specifically designed for DVB-H and attached to a Nokia 7700 - was
to be satisfied and sixty-eight per cent stated that they would be prepared to take out a
subscription at EUR 7 a month – see TILAK (2006).
7 In September 2005, Sprint Nextel/MobiTV was awarded a technical Emmy by the Academy of
Television Arts and Sciences for the delivery of standard programming over mobile networks.
put on trial in Finland in October 2004 8 . However, it was the UK that was at
the forefront of developments. In September 2004, O2 and NTL's Broadcast
division (Arqiva) announced the UK's first usability trial of digital TV, held in
conjunction with Nokia. Commencing in Spring 2005, 375 O2 subscribers in
Oxford were able to access 16 TV channels via DVB-H using a Nokia
7710 9 . Almost inevitably, in practice, the trial was delayed until September
2005 and a commercial launch was not forecast to occur until 2008 due to
the need to free up and license the necessary spectrum 10.
Several other EU-based operators are anxious to be early to market once
DVB-H becomes a practical proposition. For example, in Italy, TIM stated in
October 2005 that it had combined with Mediaset to launch a service once
DVB-H handsets became available – which turned out to be June 2006 with
the arrival of the UMTS/DVB-H-enabled Samsung SGH-P920 - and to
facilitate this Mediaset agreed in December to buy the infrastructure and
frequencies owned by TF1's Italian subsidiary, Europa TV. Also in Italy, H3G and the investment fund Profit Group acquired TV broadcaster Channel 7 in
November 2005. Channel 7 holds a network licence for national TV
distribution via terrestrial frequencies, including the delivery of DVB-H, and
this is due to commence in the second half of 2006.
In France, Orange and Bouygues Télécom jointly inaugurated a trial
during 2005 with 400 users over a six-to-nine month period using handsets
made by Sagem, but as in other countries the commercial launch is not
anticipated before 2008. Furthermore, in November 2005, a trial was
initiated between Swisscom and Nokia involving a commercially available
Nokia 7710 smartphone upgraded with a DVB-H receiver. Despite such
developments, widespread digital services are in practice unlikely to appear
in Europe until the analogue spectrum can be auctioned off for digital use - that is, post-2009 - and in any event there is an issue of cost because the
8 See Global Mobile, January 14th 2004, p. 11 and www.3g.co.uk/PR/June 2004/7916 of 17th
June 2004. Nokia itself is not really interested in any technology other than DVB-H, for which it
has been heavily criticised by operators interested in other technologies.
9 See www.3g.co.uk/PR/Sept2004/8308 and www.telecoms.com of September 23rd 2004, and http://newsvote.bbc.co.uk of 17th November 2005. The first results of the trial were released in
January 2006, with O2 claiming that it would no longer be referring to mobile TV but rather to
'personal TV'. The results were viewed as extremely promising – no surprise there – with
particular emphasis upon the unexpected finding that usage was higher in the home than
elsewhere. One interesting aspect was that users considered the sixteen channels on offer to
be no more than adequate – see MORRIS (2006).
10 See www.totaltele.com of September 23rd 2005. A summary table of the European trials can be found in "Mobile TV set to be very popular" at: www.cellular-news.com of March 9th 2006.
digital signal will have to be available across an entire cellular network.
Nevertheless, Nokia does not intend to wait around, and in November 2005
it launched the N92, allegedly the world's first mobile device with a built-in
DVB-H receiver.
Interoperability is also, as ever, a key issue. The Open Mobile Alliance is
investigating common standards for mobile broadcast services to be
delivered to new, larger, colour-screen handsets. This is good news for
Nokia 11 , which is not only involved in the O2 trial but is involved in others in
Finland (with TeliaSonera and Radiolinja using the 7710 handset), Germany,
Taiwan (with Chunghwa Telecom and others using the 7710), Spain (with
Telefónica Móviles 12 using the 7710) and the USA (with 'Modeo').
A large number of DVB-H trials are due to take place in Asia, including
Japan, during 2006, so it is the favourite to succeed as a truly international
standard. For example, Elang Mahkota Teknologi Group and its subsidiaries
PT Mediatama Citra Abadi and PT Surya Citra Televisi arranged to conduct
a trial with Nokia in Indonesia during the second half of 2006, using the N92
handset 13 .
Meanwhile, in the USA, the former Crown Castle Mobile Media,
rebranded as 'Modeo', has already piloted a DVB-H service in Pittsburgh,
and expects to be operational in the top thirty U.S. markets by the end of
2007 (JAY, 2006).
DAB/DMB
Given the drawbacks to streaming over 3G and the need to wait some
time for DVB-H, there is also interest in Europe in building a market
11 SonyEricsson also supports the standard, and in February 2006, the two vendors agreed to
co-operate on interoperability in DVB-H devices – see "Nokia, Sony Ericsson cooperate on
mobile-TV interoperability" at: www.cellular-news.com of February 14th 2006. The collaboration
would be based around Nokia's Open Air Interface implementation guidelines published in
August 2005.
12 In February 2006, the results of the trial, also involving Abertis Telecom and content
broadcast by the likes of Antena 3 and Sogecable, revealed that 55 per cent of the 500 trial
users would be prepared to pay around EUR 5 a month for the service. The typical
consumption, largely restricted to the basic package, was 20 minutes a day – see "75 per cent of Spanish mobile users give TV the thumbs up" at: www.telegeography.com of February 17th 2006.
13 See "Nokia to pilot mobile TV services in Indonesia" at: www.telegeography.com of February
14th 2006.
presence via another route. In June 2005, for example, Virgin Mobile
launched a trial of its own UK service based not upon 3G or DVB-H, but on
the pre-existing Digital Audio Broadcast (DAB) system in conjunction with BT
Livetime - which intends to act as a wholesaler and has re-branded as
'Movio' - that necessitates a special receiver, but no new infrastructure. The
underlying justification was that DAB already had 85 per cent coverage. The
trial was adjudged to be a success and the service is to be launched
commercially sometime in 2006 (FILDES, 2006) 14 .
Some indication of the confusion about how to proceed even in Europe is
evident from the fact that Bouygues Télécom has also opted to become
involved in a T-DMB trial in France together with TV network TF1,
broadcasting equipment maker VDL and Korea's Samsung. The handset
chosen is the SGH-P900T 15 . There is also to be a test service of T-DMB in
May using the LG-V9000.
As noted, South Korea is keen to be a market leader where DMB is
concerned. Hence, in December 2004, TU Media, 30 per cent owned by SK
Telecom, partnered with the Korean Broadcasting Commission to announce
the limited launch the following month of DMB with a full commercial
operation commencing on May 1st 2005, at which point both SK Teletech
(with the IBM-1000) and Samsung (with the SCH-B100) would have high-end handsets available.
Nine dual-mode S-DMB/T-DMB handsets were subsequently launched
by Samsung and, in January 2006, T-DMB was made available by both KTF
and LG Telecom, with SK Telecom due to follow suit in March. However,
unlike with S-DMB where network operators can split the fee with sole
provider TU Media, T-DMB is free for anyone with an appropriate receiver
and hence is not being promoted by operators (NEWLANDS, 2005, Nov.). In
December 2005, it was reported that only 1.5 per cent of mobile subscribers
owned an S-DMB-enabled handset and that there was widespread
dissatisfaction with poor content and service, which did not bode well for the
14 However, although the majority of users found the service to be 'appealing', they were only
prepared to pay GBP 5 a month to access it whereas the providers had hoped to charge GBP
10, so 'success' is not necessarily an apt description – see "Mobile TV trial disappoints" at:
www.telegeography.com of January 13th 2006. It is worth noting that only three channels were
on offer.
15 See "Bouygues, TF1 and VDL select Samsung for DMB phones" at:
www.telegeography.com of February 15th 2006; and "Samsung unveils mobile TV phone for Europe" at: www.digitalmediaasia.com of February 15th 2006.
prospects for T-DMB (NEWLANDS, 2005, Dec.). This situation is inherently
unstable.
However, this does not appear to have put everyone off the standard
since, in January 2006, Samsung announced that it would be supplying
200,000 T-DMB handsets to Beijing Jonion Digital Media Broadcasting Co.
and 300,000 to Guangdong Mobile Television Media Co., with the former
intent upon the first launch in China in April 2006 (TURNER L., 2006).
Immediately thereafter, the Indian government announced that as India was
using the same spectrum as Korea, it too would be introducing T-DMB just
as soon as the government could introduce regulations to make this
possible. Mobile operator Bharti agreed to launch a trial of T-DMB in Mumbai
by mid-February 16 .
ISDB-T
In September 2005, DoCoMo announced the development of the FOMA
P901iTV handset made by Panasonic, capable of receiving both analogue
and digital signals. Its small 2.5-inch screen permits 2.5 hours of digital
viewing, and the handset is designed such that the screen twists
horizontally, leaving the other half as a handle. Since its launch in April
2006, ISDB-T users have been able to access, at no cost - assuming that
they have been able to lay their hands on a scarce handset - the normal
programmes available on TV as special programming for mobile devices is
not permitted until 2008.
In December 2005, intent upon underpinning this so-called 'One-seg[ment]' strategy (1-SEG), DoCoMo acquired 2.6 per cent of Fuji-TV and
followed this up in February 2006 by forming a limited liability partnership
with Nippon-TV. Vodafone Japan is also tapping into the 1-SEG service but
using the Sharp 905SH, while KDDI is doing so in conjunction with TV
Asahi 17 .
16 See "India moves into mobile TV" at: www.bwcs.com of February 1st 2006.
17 See "KDDI and TV Asahi to collaborate on terrestrial digital broadcasting" at:
www.telegeography.com of March 28th 2006. For some general comments on Japan see
"Japanese watch TV shows on the go on cell phones" at www.telecomdirect.news of March
31st.
MediaFLO
KDDI is, predictably, unwilling to end up as second-best to DoCoMo, and
it announced in December 2005 that it would be setting up a company called
MediaFLO Japan, owned 80 per cent by itself and 20 per cent by
Qualcomm, with a view to introducing services in 2007 based upon the
latter's FLO technology (MIDDLETON, 2005).
Meanwhile, in the USA where MediaFLO must realistically become
established if it is to become popular overseas, some major network
operators are beginning to support Qualcomm. For example, at the end of
2005 Verizon Wireless signed up to offer FLO over its cdma2000 1xEV-DO
network. For its part, Motorola has yet to decide whether to opt for DVB-H or
some alternative, in which case it is likely to be MediaFLO.
Licensing
Because mobile TV cannot sensibly be provided over unlicensed
spectrum, the future of DVB-H in the EU is partly dependent upon progress
with licensing. The optimum spectrum for DVB-H is the UHF band
between 470 MHz and 850 MHz. The O2 trial, for example, used a
temporary test frequency in this band. Unfortunately, providing small
spectrum slots for tests is quite a different matter from allocating sufficient
spectrum to allow for competing broadcast networks, and since the optimum
spectrum is currently largely occupied, especially by analogue transmissions
which do, however, have a limited life span, licensing will largely have to wait
until the spectrum is cleared.
Despite this, the Finnish government announced in November 2005 that
a 20-year DVB-H licence would be awarded early in 2006 to a network
operator, which would be responsible for the transmission network and
service management (LIBBENGA, 2005; TURNER, 2005). The licensee
would be obliged to sell network capacity to other service providers.
TeliaSonera was quick to register an application which specified that it would
build a DVB-H network covering the Greater Helsinki area before the end of
2006, the area within the outer ring road by the end of 2007 and Tampere,
Turku and Oulu commencing during 2007. However, it would also rely upon
streaming TV along its existing 3G network 18. In the event, the licence was won by Digita 19.
Events such as this appear to have woken up the European Commission
to the desirability of a pan-European strategy. Hence, in March 2006,
Viviane Reding pontificated that "if we want European players to develop
efficient business models […] we need a coordinated approach on spectrum
policy" 20 . Needless to say, the Commission was deemed to be trampling
upon the rights of individual member state regulators to determine what they
considered to be in their home markets' best interests, not to mention
contravening the European Commission's existing policy of technological
neutrality (PATTERSON G., 2006).
Releasing spectrum is anyway a somewhat different proposition in each
member state because of the prior history of allocation. UK regulator Ofcom,
for example, is unwilling to detail its plans until after the 2006 Regional
Radio Conference when European regulators will discuss the role of different
technologies. Because spectrum for DVB-H will probably not be available in
the UK until the digital switchover in 2012, several years later than in the
likes of Germany and Italy, operators such as Vodafone which are pan-European will as a consequence probably have to adopt different strategies
in different countries.
Conclusions
For the time being, the favoured method of providing mobile TV is to
stream it along a high-speed data network. However, this is expensive for
subscribers and would use up significant slabs of spectrum if it became very
popular, and hence it is not favoured as a medium-to-long-term solution.
Fortunately, a partial solution to the spectrum shortage problem will come
from the improved speed of data transfer due to the widespread introduction
of the likes of HSPA, and it may also be possible to use the unpaired TDD
18 See "Sonera seeks to build a mobile TV network" at: www.cellular-news.com of February 1st 2006.
19 The applicants were Elisa, TeliaSonera, Telemast Nordic and Digita. The latter, a unit of
French media group TDF, won the licence, partly because it was already building a test network
and partly because its status meant that there would be opportunities for content providers.
20 See "EU: Viviane Reding Member of the European Commission responsible for Information
Society and Media. Television is going mobile – and needs a pan European policy approach" at:
www.noticias.info of March 8th 2006.
spectrum that was issued to most winners of 3G licences. But with HSPA
networks only recently beginning to come on stream, it is difficult to assess
their potential impact upon mobile TV.
Irrespective of any improvements in streaming, the favoured solution is
still expected to mimic the broadcasting method in use for terrestrial TV,
namely the transmission of programming direct from TV towers to mobile
devices. Whether this will be based upon satellite or terrestrial links is as yet
unclear. A quick summary comparison suggests that streaming can provide
very large numbers of channels and deep indoor coverage but comparatively
few subscribers; terrestrial broadcasting can reach an unlimited audience
with good indoor coverage but with only 27 channels; while satellite
broadcasting is particularly suitable for rural areas but indoor coverage is
poor and only nine channels can be provided.
It would be very helpful if the relevant spectrum bands could be adjacent
as this would permit some common use of equipment, but that is certainly
not going to be the case if the UHF band is used. However, an alternative
was put forward in February 2006 by Alcatel whereby DVB-H would be
broadcast in the 2.170-2.220 GHz 'S' band reserved for mobile satellite
services across the world. The proposal, which would require some changes
to the regulatory framework, involves using satellites for the primary
broadcast to rural areas and a terrestrial repeater network for urban
services. This spectrum is being used in the USA but not in Europe, and it
has the advantage that, being in proximity to that being used for 3G, the
latter's base stations and masts could be reused. In addition, handsets
would remain acceptably small for European tastes because the aerials
could be internalised.
References
BILLQUIST R. (2006): "Mobile TV promising, but technical issues still to be
resolved", at: www.totaltele.com of February 13th.
FILDES N. (2006): "BT names live mobile TV/radio wholesale service BT Movio", at:
www.cellular-news.com of January 12th.
FITCHARD K. (2005): "TV wars go wireless", at: www.printthis.clickability.com of
September 26th.
JAY R. (2006): "Modeo to offer DVB-H services in the U.S.", at: www.telecom.com of
January 5th.
LIBBENGA J. (2005): "Finland to license commercial mobile TV service", at:
www.theregister.co.uk of November 14th.
MIDDLETON J. (2005): "KDDI, Qualcomm forge mobile TV venture", at:
www.telecoms.com of December 23rd.
MORRIS A. (2006): "IPWireless adds TDD option to the mobile TV pot", at:
www.totaltele.com of January 18th.
NEWLANDS M.:
- (2005): "Survey shows Koreans not happy with DMB service", at: www.totaltele.com
of December 9th.
- (2005): "Operators are reluctant to market phones for the free terrestrial mobile TV
services", at: www.totaltele.com of November 30th.
PATTERSON G. (2006): "Mobile TV: it's gonna be hell…", at: www.telecoms.com of
March 27th.
TILAK J. (2006): "French DVB-H trial reveals interest in mobile TV", at: www.dmeurope.com of March 2nd.
TURNER L. (2006): "Samsung supplies first T-DMB phones to China", at:
www.totaltele.com of January 9th.
TURNER L. (2005): "Finland invites applications for DVB-H mobile TV licence", at:
www.totaltele.com of November 15th.
Can DRM Create New Markets?
Anne DUCHÊNE
University of British Columbia, Sauder Business School
Martin PEITZ
International University in Germany
Patrick WAELBROECK
ENST
We argue that Digital Rights Management (DRM) technologies can
serve a number of marketing purposes that will make it easier for new
artists to expose their products to consumers. As a result, DRM can create
new markets.
Introduction
The music market is characterized by a high degree of heterogeneity in
music tastes among consumers that vary not only with respect to music
genres, but also within each genre. It is therefore difficult for a consumer to
evaluate a new album from a catalogue. This is especially true for new
releases about which consumers have less information. For this reason,
music can be classified as an experience good that consumers need to
"taste" before they can make an informed purchase decision. Successfully
transmitting that information is one of the main challenges facing music
producers and retailers. Record companies usually dedicate a substantial
budget to the marketing and promotion of new CDs. Chuck Philips, who
interviewed executives from the music industry, reports that, "It costs about
USD 2 to manufacture and distribute a CD, but marketing costs can run from
COMMUNICATIONS & STRATEGIES, no. 62, 2nd quarter 2006, p. 197.
USD 3 per hit CD to more than USD 10 for failed projects." 1 The Record
Industry Association of America (RIAA) elaborates on those costs:
"Then come marketing and promotion costs - perhaps the most
expensive part of the music business today. They include increasingly
expensive video clips, public relations, tour support, marketing
campaigns, and promotion to get the songs played on the radio. [...]
Labels make investments in artists by paying for both the production
and the promotion of the album, and promotion is very expensive."
(www.riaa.org, RIAA-Key stats-Facts-Cost of CD).
The nature of those costs is clearly identified in Hilary Rosen's statement
in the Napster case 2 :
"Record companies search out artists and help to develop their talent
and their image. Much goes into developing artists, maximizing their
creativity and helping them reach an audience. In addition to providing
advance payments to artists for use by them in the recording process,
a record company invests time, energy and money into advertising
costs, retail store positioning fees, listening posts in record stores,
radio promotions, press and public relations for the artist, television
appearances and travel, publicity, and internet marketing, promotions
and contests. Those costs are investments that companies make to
promote the success of the artist so that both can profit from the sale
of the artist's recording. In addition, the record company typically pays
one half of independent radio promotions, music videos, and tour
support. If a recording is not successful, the company loses its entire
investment, including the advance. If a recording is successful, the
advance is taken out of royalties, but the other costs I mentioned are
the responsibility of the record company."
This is the old way of doing business, which has led to the rise of a few
superstars who cater for the masses. The widespread use of fast internet
connections in home computing offers consumers a new way of acquiring
information about new music. Indeed, after the Napster experience, it has
become clear that there is a cheaper way for consumers to obtain that
information, namely by searching, downloading and testing digital music files
made available through Peer-to-Peer (P2P) or other file-sharing
technologies. This information transmission technology is fundamentally
different from traditional marketing and promotion channels, as consumers,
not firms, spend time and resources to listen to new music downloaded from
1 "Record Label Chorus: High Risk, Low Margin", Chuck PHILIPS, Los Angeles Times, May
31st, 2001.
2 Quoted from a press release from the RIAA on May 25th, 2000 available on the RIAA.com
website.
the internet. Consumers thus have the opportunity to discover new products
and new artists, some of which would have been excluded from traditional
distribution channels. New (or emerging) artists therefore see file-sharing
and new digital distribution technologies as a low-cost way of entering the
market. In a recent Pew Internet Report, based on interviews with 2,755
musicians in the USA who were asked about their opinions on file-sharing on
the internet, 35 percent of respondents answered that free downloads
helped their career.
The main claim of this article is that new digital technologies can be used
to expose consumers to newcomers' music. Marketing strategies for
newcomers are likely to be different to those used by record companies
distributing incumbent artists, who claim that unabated internet piracy could
mean the end of the industry as a whole 3 .
The availability of digital copies on the internet has given rise to two
forces, which pull in opposite directions for incumbents and newcomers. On
the one hand, digital copies and original CDs are mostly complements for
new entrants, as consumers use copies to discover, listen to and purchase
new music that they enjoy. On the other hand, copies and originals are often
substitutes for incumbent artists, as a significant number of consumers copy,
but do not purchase the original they already know 4 . For both groups,
strategies adopted by the music industry will involve the use of Digital Rights
Management (DRM) to protect digital music.
DRM is a small piece of software that can detect, monitor and block the
(unauthorized) use of copyrighted material. Digital Rights Management for
3 Five percent of respondents to the previous survey claimed that file-sharing hurt their career.
Unlike CDs and audio tapes, digital music files that can be found on file-sharing networks can
be separated from their physical support. They can be quickly compressed and exchanged over
the internet, a process which is significantly faster and more flexible than renting a CD in a
media library or borrowing it from a friend. Unlike traditional means of copying, file-sharing
technologies provide a large scale diffusion channel that is virtually impossible to monitor, as a
single copy can be downloaded by any user across the world. To deal with such a threat, record
companies have been suing internet users who share copyrighted files over P2P networks
freely and anonymously without the authorization of copyright owners. The number of lawsuits
reached over 10,000 in April 2005. Most U.S. cases are settled out of court with a fine of USD
3,000-4,000.
4 Even for this latter group there exist complementary products such as live concerts, which
make free or subsidized releases of their material an attractive promotion tool for a record
company, especially if DRM can be used to collect valuable information about consumers.
Record companies and artists could both benefit from the sale of complementary products by
sharing revenues from these complementary products, as illustrated by the contract between
EMI and Robbie Williams (see e.g. "Robbie signs '£80m' deal", BBC News, October 2nd, 2002).
No. 62, 2nd Q. 2006
music generally includes: copy control, watermarking (a digital identification
technology inserted in digital files, i.e. ex ante constraints), fingerprinting
(which converts the file's content into a unique identification number, i.e. ex
post control), authentication and access control. Technological protection
can limit the uses of music files downloaded from online retailers. The most
common restrictions consist of limiting the number of computers that the
user can transfer his or her files to (typically between 1 and 5), as well as the
number of times a playlist can be burned on a CD-R (typically between 7
and 10). Technology companies use different DRM technologies and
different audio formats. Apple's iTunes service uses the Advanced Audio
Coding (AAC) format along with FairPlay DRM. Users can burn a playlist 7
times and transfer music files to up to 5 computers. They can offer their
playlists for preview to other members of the community, as well as musical
gifts to other subscribers of the music service. In response to Apple,
Microsoft has developed its own series of DRM solutions. The first type of
DRM protection is implemented in its Windows Media Audio (WMA) music
format that is used by many e-tailers and works like Apple's DRM, restricting
the number of CD burns and transfers to desktop computers. The most
recent DRM solution, Janus, can also limit the use of a music file in time,
thus enabling business models based on subscription services that do not
limit the number of computers or portable players the music file can be
uploaded to. In terms of business models, these two types of DRM allow
both rental and purchase strategies.
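The restrictions described above (device transfer limits, burn quotas and, with Janus-style DRM, time limits) amount to a simple license-policy check. The following Python sketch is purely illustrative: the class and field names are our own and do not correspond to any actual DRM implementation; the default limits reflect the typical commercial ranges mentioned in the text.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MusicLicense:
    """Hypothetical DRM license combining device, burn and time limits."""
    max_devices: int = 5                 # computers the file may be transferred to (typically 1-5)
    max_burns: int = 7                   # times a playlist may be burned to CD-R (typically 7-10)
    expires_at: Optional[float] = None   # Janus-style time limit (epoch seconds), None = no expiry
    devices: set = field(default_factory=set)
    burns_used: int = 0

    def is_valid(self, now: float) -> bool:
        """A subscription-style license lapses once its expiry passes."""
        return self.expires_at is None or now < self.expires_at

    def authorize_device(self, device_id: str) -> bool:
        """Allow playback on a known device, or register a new one
        as long as the device quota has not been reached."""
        if device_id in self.devices:
            return True
        if len(self.devices) < self.max_devices:
            self.devices.add(device_id)
            return True
        return False

    def burn_playlist(self) -> bool:
        """Consume one unit of the burn quota, if any remains."""
        if self.burns_used < self.max_burns:
            self.burns_used += 1
            return True
        return False
```

In this sketch, a purchase-style ("buy") license would leave expires_at unset, while a rental ("rent") subscription would set it: a single policy object captures both strategies.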
While policy-makers have given their support to DRM 5 , several
academic researchers have strongly argued against it. Samuelson (2003),
for instance, argues that DRM goes beyond the Copyright Act. Indeed, DRM
can protect any piece of digital content, even content that is not protected
by copyright law, such as documents in the public domain. It reduces the value
of fair use and can force consumers to view content that they do not wish to
see (such as ads and FBI warnings). As a result of such restrictions, DRM
sometimes stands for "Digital Restrictions Management." Moreover, it can
potentially protect content for an unlimited period of time, which is contrary to the
spirit of the Copyright Act. In a sense, DRM creates the basis for a perpetual payment system.
5 Following the World Intellectual Property Organization (WIPO) convention in Geneva, in 1998,
U.S. Congress enacted the Digital Millennium Copyright Act (DMCA), which extends the Copyright
Act. The DMCA makes it a crime to circumvent anti-piracy (DRM) measures built into most
commercial software. A similar directive is being implemented in member countries of the
European Community generating heated debate (such as the ongoing debate on the
compulsory license in France).
DRM raises many concerns both on legal and economic grounds. The
negative aspects of DRM are documented in Peitz and Waelbroeck (2005a)
and Gasser (2004). We refer interested readers to these works for more
information. Our article takes a complementary approach as we would like to
argue that DRM can also serve as a marketing tool to allow artists to enter
the market at a low cost. Whether society eventually benefits from this
phenomenon is another issue. To answer that question, it is important to
understand the effect of DRM on the production and distribution of digital
products. Before examining this, we would like to highlight four limits to our
discussion. Firstly, the choices of the DRM technology and of the digital
music format will raise issues related to compatibility and two-sided markets.
Indeed, the success of any DRM-based marketing strategy is linked to the
success of the platform (i.e. the ability of the intermediary to license music
from new and established artists), of the music player and of the format.
These three dimensions are hard to disentangle, because many companies
bundle them together. Secondly, we have chosen to include examples of
DRM and marketing strategies, rather than a lengthy discussion of previous
work on copyright. We nevertheless point interested readers to PEITZ &
WAELBROECK (2003) for a review of existing literature on this topic. Thirdly,
we will not cover the effect of DRM on cumulative creation. DRM could
influence the production of new music if there were multiple rights owners,
for instance if the music was sampled using older songs or mixing lines from
other artists. While inadequate DRM protection could possibly make
cumulative creation more difficult, DRM does not necessarily prevent
subsequent use. For instance, BECHTOLD (2003) has argued that one could
develop a "Rights expression language" that would deal with multiple rights
owners. Similarly, LESSIG (2001) is a proponent of the "Creative
Commons", which uses DRM to control copyrighted works that are registered in
a metadata system; DRM is then used to enforce openness. Several artists,
such as Chuck D., the Beastie Boys and David Byrne, have released content
under Creative Commons licenses. Finally, we will not discuss DRM-based
marketing tools of incumbent artists such as versioning, personalized group
pricing, targeted advertising, test trials and alternative remuneration
schemes. Excellent reviews of versioning in general can be found in
SHAPIRO & VARIAN (1999) and BELLEFLAMME (2005).
The remainder of the article is organized as follows: we firstly discuss
sampling-based marketing strategies that use Digital Rights Management
for new artists. Examples of such DRM marketing strategies are
provided in the following section. Although our analysis focuses on music as
a digital product, several insights apply to other digital products.
DRM and the theory of sampling
Music requires some form of experimentation. Although listening to new
albums has always been possible (by listening to music in record stores for
instance), file-sharing makes trying and sampling new music much easier.
Songs are often just a click away from being listened to on a computer or
portable MP3 player. Artists consequently have a new opportunity to expose
consumers to their music. Many of these artists could not previously
distribute their music to a large audience because revenues for CD sales
would not cover the high fixed costs of marketing and promotion. The
situation has changed as sampling may partly replace costly promotions on
television and radio as a channel for information transmission.
A number of recent articles analyze the informational role of unauthorized
copies on firms' profits and strategies and on welfare. DUCHÊNE &
WAELBROECK (2006) analyze the effect of increasing copyright protection
on the distribution and protection strategies of a firm selling music. They
model the demand for a new product that is not distributed through the
traditional marketing/promotion technology by assuming that consumers can
only purchase a good after they have downloaded a digital copy that
includes information on the characteristics of the product. They show that
increasing copyright protection not only has a direct effect on copiers, but
also an indirect effect on buyers as technological protection and prices
increase with legal protection, unambiguously reducing consumer surplus.
PEITZ & WAELBROECK (2005b) analyze a single firm selling a variety of
products that cater for different tastes. Consumers can obtain information on
the individual characteristics of the albums by downloading files from P2P
networks. P2P allows a user to sample and quickly find new products that he
likes if cross-recommendations and other means for directed search are
available. PEITZ & WAELBROECK show that for a sufficiently large number
of products, firms can benefit from the sampling (or matching) effect 6 of
digital copies, which leads to a higher willingness to pay for the original and
dominates the direct substitution between copies and original products.
If, alternatively, the firm decided to promote the product through traditional
channels, it would bear the cost of such information transmission. Moreover,
6 In a survey of graduate school students, BOUNIE, BOURREAU & WAELBROECK (2005) find
that over 90% of respondents discovered new artists by downloading music files, while 65%
reported that, after listening to MP3 files, they purchased albums that they would not have
purchased otherwise.
the fit would be worse, because a firm typically would not promote all
products.
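The matching logic behind this result can be illustrated numerically. The sketch below is our own construction, not the formal PEITZ & WAELBROECK model: each consumer draws an independent valuation for every album, and sampling lets him or her buy the best match, so the expected valuation of the purchased album (and hence the willingness to pay) grows with the number of products sampled.

```python
import random

def expected_best_match(n_products: int, n_consumers: int = 20000,
                        seed: int = 1) -> float:
    """Average valuation of the best-matching product when a consumer
    can sample n_products before buying (valuations i.i.d. uniform on [0, 1]).

    With uniform valuations the theoretical value is n / (n + 1):
    0.5 for a single product, roughly 0.91 when ten can be sampled."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_consumers):
        total += max(rng.random() for _ in range(n_products))
    return total / n_consumers
```

Without sampling, a consumer effectively buys at random (expected valuation 0.5 in this toy setting); sampling over ten products raises the expected valuation of the chosen album to roughly 0.91. This is the sense in which the matching effect can dominate the direct substitution between copies and originals.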
ZHANG (2002) argues that traditional distribution technology made it
easier for artists with a large audience (or stars), and harder for marginal
artists, to be distributed in the market. Zhang proposes a model of the music
industry with two horizontally differentiated products. The star artist is
pushed by a big label and the marginal artist has no backing. Without P2P,
the star artist distorts demand in his/her favour. This distortion can be so
large that the marginal artist is driven out of the market. With file-sharing,
this artist can distribute his/her songs on P2P networks. Thus
marginal artists may stand to gain from the exposure effect, while stars
unambiguously lose out as the current distribution technology benefits the
promotion of stars. With digital copies, niche performers can reach
consumers more easily. In effect, this suggests that the distribution of album
sales would be less skewed 7 .
Sampling and file-sharing technologies might thus reduce the stardom
phenomenon and raise the amount of variety available on the market. PEITZ
& WAELBROECK (2005a) point out that the two most downloaded songs on
legal sites in September 2004 in the UK were not even in the top 20 singles
chart. Similarly, GOPAL et al. (2004) document changes in popularity on
chart rankings in the file-sharing era.
Examples of uses of DRM by entrants
Since digital music is easy to share on P2P networks, several technology
companies offer flexible DRM protection to maximize the potential audience
of an artist who wants consumers to discover his or her work. We discuss
three such offers: Altnet, Magnatune and Last.fm.
7 This argument has received wide attention, especially after the publication of an article by
Chris Anderson in Wired. According to the author, "Suddenly popularity no longer has a
monopoly on profitability." (Chris ANDERSON, "The Long Tail", Wired, Oct. 12th, 2004).
Combined with sophisticated software that tracks and aggregates what consumers listen to and
purchase, infomediaries can come up with recommendations that guide a consumer to discover
products down the long tail that s/he would have been highly unlikely to discover through
traditional promotion and distribution channels.
Altnet
Altnet is the leading online distributor of licensed digital entertainment,
enabling record labels, film studios, as well as software and video game
developers to market and sell their media to a worldwide audience of 70
million users. Altnet powers leading Peer-to-Peer applications, internet
portals and affiliate websites with access to Altnet's library of rights-managed, downloadable content and an integrated payment-processing
gateway. Altnet also works with Cornerband.com, which distributes music of
signed-up artists and promotes "emerging" artists on its network. Ratings are
made by users of Cornerband.com. All Cornerband.com artists have control
over the secure distribution of their music, including the way in which songs
are downloaded, sampled and priced to the consumer. Every time a user
downloads and plays a music or video file, a DRM window appears,
featuring an image of the band, album cover art or other promotional
material to help market the file. Each file is 'wrapped' with Altnet's secure
rights management technology, which protects the content and allows the
file to be sold in the Altnet Payment Gateway. Altnet licenses DRM
technology from Microsoft Corporation to protect music and video files. Most
of the files can be previewed for free for a fixed period of time. At the end of
the trial period, the user is prompted with information about purchasing the
file. Each file has an individual pricing and license agreement. Altnet applies
the Altnet GoldFile Signature to the files once they are protected with DRM,
allowing for greater file control and security, and making it possible to track
the number of times the file is downloaded.
Magnatune
Prices vary according to the buyer's stated willingness to pay for the few
dozen artists signed with Magnatune. Streaming is free and the site wants
to be associated with a sharing or open-source community, although
consumers pay for the music they want to acquire. The suggested purchase
price of an album is USD 8, but users can spend as little as USD 5 or as
much as they want. Surprisingly, the average price is USD 8.93, according
to an interview with Magnatune's founder in USA Today (January 20th,
2004). Magnatune pays artists half of the revenues generated. About 1 in
42 visitors purchases music on the site according to The Economist
(September 17th, 2005), a better ratio than the average e-commerce
site. As of October 2005, Magnatune offers the possibility to share music
with 3 friends.
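The pay-what-you-want figures above reduce to two simple quantities: the artist's payout per sale and the expected revenue per visitor. The sketch below is a back-of-the-envelope illustration using only the numbers reported in the text; the function names are our own.

```python
def artist_payout(price_paid: float, artist_share: float = 0.5) -> float:
    """Magnatune-style revenue split: the artist receives a fixed share
    (the text reports half of the revenues generated)."""
    return price_paid * artist_share

def revenue_per_visitor(avg_price: float = 8.93,
                        conversion: float = 1 / 42) -> float:
    """Expected revenue per site visitor, given the reported average
    price (USD 8.93) and purchase rate (about 1 visitor in 42)."""
    return avg_price * conversion
```

At the reported average price, each sale yields the artist about USD 4.47, and each visitor is worth roughly USD 0.21 in expected revenue, which is the conversion ratio The Economist compares favourably with the average e-commerce site.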
Last.fm
Last.fm offers a free streaming service based on music profiles. Song
plays are logged by a small plug-in DRM program called
"Audioscrobbler" that builds a user's music profile. Last.fm software then
finds music that a user might enjoy, builds a customized radio station called
last.fm radio and finds other users with similar music profiles. The software
also sends music recommendations that can be exchanged by private
messages between users. The player is free, open-source and offers a 128
kbps MP3 stream. Last.fm also provides a platform for new artists and labels
to distribute their albums. Artist pages include the band profile, as well as
various statistics such as the number of times a specific song has been
played.
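Finding "other users with similar music profiles" is, in essence, a nearest-neighbour computation over play counts. The Python sketch below is our own illustration (the article does not describe Last.fm's actual algorithm); it compares two hypothetical artist play-count profiles with cosine similarity.

```python
import math

def profile_similarity(profile_a: dict, profile_b: dict) -> float:
    """Cosine similarity between two play-count profiles
    (mapping artist -> number of plays). Returns a value in [0, 1]:
    1.0 for proportional tastes, 0.0 for disjoint ones."""
    artists = set(profile_a) | set(profile_b)
    dot = sum(profile_a.get(a, 0) * profile_b.get(a, 0) for a in artists)
    norm_a = math.sqrt(sum(v * v for v in profile_a.values()))
    norm_b = math.sqrt(sum(v * v for v in profile_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)
```

A recommender of this kind would then suggest to a user the favourite artists of his or her highest-similarity neighbours, which is one plausible way the customized "last.fm radio" described above could be seeded.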
It is also interesting to observe that the Soulseek P2P service has created
Soulseek Records to distribute and promote new artists initially exchanged
on the file-sharing network, so that promotion on P2P networks and physical
distribution need not necessarily be separated. All these examples
demonstrate that new artists can make inroads into the music market
through the use of digital distribution and DRM. However, it is too early to
say which strategies will result in viable business models. The recent
success of the Arctic Monkeys, who chose to distribute their songs freely
on file-sharing networks, illustrates the explosive distribution
potential of file-sharing technologies, which make sampling much easier than
checking individual websites for new music. The Arctic Monkeys sold more than
360,000 copies of their first album during the week after its release, a U.K.
record topping that of the Beatles.
Conclusion
DRM should not simply be considered as a tool to protect against piracy,
but rather as a key to opening up the market to new artists and improving
musical offerings to internet users. The development of digital distribution
technologies (digital products and information about these products) can
also open the market up to info-mediaries, who promote and recommend
new products to consumers and producers. For instance, some file-sharing
networks already function as a two-sided platform where record companies
can reach a large potential audience and select new acts more easily. Such
music info-mediaries could collect detailed user information, which would, in
turn, allow them to make targeted offers to users. As a result, they could
become more efficient at spotting new trends and potential stars. Moreover,
the promotion of acts could be partly ensured by the music sites themselves.
This would mean that music sites would take over some of the functions that
have to date belonged to the labels. Although the death of big labels appears
unlikely, it remains to be seen whether internet music sites will reduce the
labels' role in selecting music. The internet could enable consumers to make their
own decisions, independent from the marketing strategies that are used to
push some artists and neglect others, whose music is hardly, if at all,
available to the public. Which of the various players in the music industry will
benefit from this dramatic change remains to be seen. This paper outlines
the dimensions within which the industry can adjust to offer new music as a
digital product using DRM.
References
BECHTOLD S. (2003): "The Present and Future of Digital Rights Management:
Musings on Emerging Legal Problems", in Becker, Buhse, Günnewig & Rump (Eds),
Digital Rights Management - Technological, Economic, Legal and Political Aspects,
Springer, pp. 597-654.
BELLEFLAMME P. (2005): "Versioning in the Information Economy: Theory and
Applications", CESIfo Economic Studies 51, pp. 329-358.
BOUNIE D., BOURREAU M. & P. WAELBROECK (2005): "Pirates or Explorers?
Analysis of Music Consumption in French Graduate Schools", Telecom Paris working
paper in economics EC-05-01.
DUCHÊNE A. & P. WAELBROECK (2006): "Legal and Technological Battle in Music
Industry: Information-Pull vs. Information-Push Technologies", International Review
of Law and Economics, accepted for publication.
GASSER U. (2004): "iTunes: How Copyright, Contract, and Technology Shape the
Business of Digital Media - A Case Study", Berkman Center for Internet & Society at
Harvard Law School Research Publication, no. 2004-07.
GOPAL R.D., BHATTACHARJEE S. & G.L. SANDERS (2004): "Do Artists Benefit
From Online Music Sharing?", Journal of Business, forthcoming
LESSIG L. (2001): "The Future of Ideas: The Fate of the Commons in a Connected
World", Random House.
PEITZ M. & P. WAELBROECK:
- (2003): "Piracy of Digital Products: A Critical Review of the Economics Literature",
CESifo Working Paper #1071.
- (2005a): "An Economist's Guide to Digital Music", CESifo Economic Studies 51,
363-432.
- (2005b): "Why the Music Industry May Gain from Free Downloads - the Role of
Sampling", International Journal of Industrial Organization, forthcoming.
SAMUELSON P. (2003): "DRM {and, or, vs.} the Law", Communications of the
ACM 46, pp. 41-45.
SHAPIRO C. & H.R. VARIAN (1999): Information Rules, Harvard Business School
Press.
ZHANG M. (2002): "Stardom, Peer-to-Peer and the Socially Optimal Distribution of
Music", mimeo.
Public Policies
Internet Governance, "In Larger Freedom"
and "the international Rule of Law"
Lessons from Tunis
Klaus W. GREWLICH (*)
Ambassador at large, Berlin
Professor, Center for European Integration Studies, Bonn
Member of the High-Level Advisors (ICT) to the UN Secretary-General
Internet governance was one of the core issues of the 2005 World Summit
on the Information Society (WSIS) in Tunis 1 . The compromise
arrangement reached in Tunis basically confirms the role of the Internet
Corporation for Assigned Names and Numbers (ICANN) and the specific
responsibilities and controls exercised by the United States. However,
individual countries will manage their own country-code Top-Level Domains
(ccTLDs).
In addition, UN Secretary-General Kofi Annan was asked to convene a
multi-stakeholder Internet Governance Forum (IGF) 2 , which is to have no
supervisory function and will not replace existing arrangements, but should
allow for a continued emphasis on global internet governance issues. The
new Internet Governance Forum will have to work hand in hand with the new
"Global Alliance for ICT and Development" established by the UN-Secretary
General in Kuala Lumpur in June 2006. The latter provides a platform for the
(*) The author is expressing his personal opinion.
1 UN/ITU (2005), "Tunis Commitment", WSIS-05/TUNIS/DOC/7-E; Tunis Agenda for the
Information Society, WSIS-05/TUNIS/DOC/6(Rev.1)-E.
2 UN/ITU (2005): Tunis Agenda, para. 72.
COMMUNICATIONS & STRATEGIES, no. 62, 2nd quarter 2006, p. 209.
210
No. 62, 2nd Q. 2006
UN system and other stakeholders to mainstream and integrate ICT into the
development agenda.
"What we are all striving for is to protect and to strengthen the internet
and to ensure that its benefits are available to all", declared the UN Secretary-General in the opening session of the Tunis Summit 3 . Achieving
these objectives implies functioning markets, rules, trust in human networks
and appropriate procedural arrangements. John Rawls' "Theory of Justice" 4
has demonstrated the possibility of formulating general principles of fairness
so as to supplement the "invisible hand of the market" with the visible hand
of law and schemes of effective governance.
Internet governance
Prior to Tunis a Working Group on Internet Governance (WGIG) 5
established the following working definition: "Internet governance is the
development and application by governments, the private sector and civil
society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of
the internet." This working definition of internet governance, however, is not
substantive, but procedural and lacks normative depth. The WGIG has
avoided tackling the normative question of what internet governance should
be, what it should not be, and who should participate in effective
governance.
Some see the present allocation of internet resources as market-based
and effective, while others take the directly opposite view. Many
developing countries perceive the argument of some members of the internet
community ("if it isn't broke, don't fix it") as
perpetuating the existing distribution of internet resources. As far as the
crucial role of all stakeholders in internet governance, including
governments, the private sector, civil society and international organisations
is concerned, the EU 6 believes that a new "cooperation model" is needed.
3 UN News Service, UN News Center, Nov. 16th 2005.
4 RAWLS J.(1973): A Theory of Justice, London p. 60.
5 WGIG (2005), "Report", Chateau de Bossey June 2005.
6 EU COMMISSION (2005): Communication from the Commission to the Council, the European
Parliament, the European Economic and Social Committee and the Committee of the Regions –
Towards a Global Partnership in the Information Society: The Contribution of the European
K.G. GREWLICH
According to the EU, existing internet governance mechanisms should be
founded on a transparent and multilateral basis, with stronger emphasis on
the public policy interest of all governments. The EU has underlined that
governments have a specific mission and responsibility to their citizens in
this respect.
Prior to WSIS Tunis, the United States opposed the EU proposal, or
rather what was perceived to be the EU position in the following statement:
"The U.S. does not support an alternative system of internet governance
based on governmental control over technical aspects of the internet. New
intergovernmental bureaucracy over such a dynamic medium would dampen
private sector interest in information technology innovation […]. The U.S.
seeks to preserve the growth and dynamism of the internet as a medium for
commerce, freedom of expression and democracy against calls from Iran,
Saudi Arabia, Brazil and the EU for a new level of bureaucracy to oversee
and control the internet. The U.S. is committed to the expansion of the
internet in an environment of political and economic freedom."
The EU is not denying that the United States has done an excellent job in
ensuring efficient administration of the internet. Moreover, the EU is well
aware of the fact that some of the countries that are most vocal in
advocating the internationalisation of the internet are those ready to inhibit
free speech via the internet. Nevertheless, it is the U.S. government that
effectively has the sole right to decide when a new Top Level Domain (TLD)
can be introduced into cyberspace. The controversy surrounding a possible
new .xxx TLD for adult content has highlighted this bizarre situation. Despite
concern over the .xxx initiative expressed by the EU and other countries, it
was the U.S. government that convinced ICANN to let the .xxx TLD enter
cyberspace in the framework of a privileged contractual relationship (in the
form of a letter written by the DOC Secretary of State), despite the fact that
its content would be visible to internet users worldwide 7 .
Many representatives of civil society were dissatisfied with the
discussions in Tunis and underlined that governments "played out little
political games," but did not live up to their strategic policy responsibilities, to
define what principles and norms should apply to the internet as a whole.
Union to the Second Phase of the World Summit on the Information Society (WSIS), COM
(2005) 234 final, 02.06.2005, p.8f.
7 REDING Viviane (2005): Speech, "Opportunities and challenges of the Ubiquitous World and
some words on Internet Governance", 2005 Summit of the Global Business Dialogue on
electronic commerce, Brussels, October 17th, 2005.
Elements of the process towards an effective international "rule of law"
It might be helpful to remember that there has been a historical transition
from the sovereignty-centred, so-called Westphalian international "law of
coexistence" to the modern "international law of cooperation" 8 . The latter
began to materialize in the mid-19th century. The International
Telecommunication Union (ITU), the world's oldest intergovernmental
organization dating back to 1865, is an excellent example of the
"international law of cooperation".
"Governance" is not the final objective in terms of a "final regulatory
regime," but part of a step by step progress towards an effective
international "rule of law" (that also comprises of non-legal governance tools,
such as "self-regulation" or "technical solutions" or "Code" 9 ). In this process
individual states remain players of particular importance; but in governance
reality the nation state is either competing with foreign, international and
private governing authorities or working with them in joint efforts. We are
currently experiencing a muddled, transitory situation, for which the term
"governance" may be used as an exploratory notion.
Even if the expression "cyberspace" (often used as a synonym for the
internet) suggests similarities to physical space, such as land, sea, air or
interstellar space, it is unlikely that a specific form of internet governance or
a special "regime" for global information networks will be found. Comparing
"cyberspace" to other "spaces" is misleading. As the internet clearly
illustrates, the "space" created by global information networks is different in
nature to physical space. It is true that, as with maritime transportation,
where ships, cargos, ports and their facilities have owners, component parts
of cyberspace such as communication links, satellites, computers, storage
devices, data centres, routers and telephone exchanges are identifiable in
terms of property. However, the seas constitute a physical space or
substrate, and a special legal regime applies to them, namely the
International Law of the Seas, enforced by a special court, the "Law of the
8 FRIEDMANN W. (1962): The Changing Dimensions of International Law, 62 Columbia L.R.,
p.1147ff. (1150); von BOGDANDY A. (2003): "Democracy, Globalisation, Future of Public
International Law – a Stocktaking", Heidelberg Journal of International Law, 63 pp. 853-877
(871f.)
9 Most prominent are LESSIG L. (1999): Code and Other Laws of Cyberspace, Basic
Books/Perseus; REIDENBERG J.R. (1998): "Lex Informatica – The Formulation of Information
Policy Rules through Technology", in Texas Law Review, 76 (1998) 553-593; SYME S./CAMP
L.J.: "Code as Governance - The Governance of Code", at:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=297154.
Sea Tribunal" 10 . As far as cyberspace is concerned, there is no such
physical substrate, no legal regime applying to a particular space and no
specialized international court.
That does not mean that there are no elements of governance to be
applied to cyberspace. The current governance of cyberspace is divided
among many groups, some composed of volunteers such as the Internet
Engineering Task Force (IETF), some like the World Wide Web Consortium
(W3C) or ICANN, composed of a number of private and public sector
entities, and others entirely run by the private sector, like many domain name
registration bodies. International (governmental) organisations deal with
policies and rules affecting access to cyberspace in the field of service
liberalisation and e-commerce (WTO), cultural identity and linguistic diversity
(UNESCO), intellectual property (WIPO), e-commerce and contracts
(UNCITRAL) and technical standards (ITU, ISO and IEC), for instance.
A distinction can be drawn between: (i) governance bodies (multi-player
analysis asks the question: who is in charge of governance?); (ii)
governance levels (multi-level analysis looks at the level at which
governance is performed in terms of "subsidiarity"); and (iii) governance
principles and tools (multi-instrument analysis asks how governance is
implemented) 11 .
Even if there is no full-fledged regime for cyberspace, there are
fragments of international governance. "Internet governance" in terms of
principles would:
- firstly, draw together and further refine existing general principles 12
of international law such as cooperation, fairness, comity, non-discrimination,
national treatment, most-favoured-nation treatment,
estoppel 13 and particularly proportionality 14 and subsidiarity 15;
10 TREVES T. (1995): "The Law of the Sea Tribunal", Heidelberg Journal of International Law,
55 p. 421ff.
11 GREWLICH K.W. (2006): "'Internet governance' und 'voelkerrechtliche Konstitutionalisierung'
– nach dem Weltinformationsgipfel 2005 Tunis", Kommunikation & Recht, 4 , p. 156ff.
12 e.g. CARREAU D. (1991): "Droit International", Pedone; LOWENFELD A. (Ed.) (1975-1979):
International Economic Law, Matthew Bender 7 vols.; VERDROSS A. (1973): "Die Quellen des
universellen Voelkerrechts", Romback, Freiburg; van HOUTTE H. (1995): The Law of
International Trade, Carswell, London; KACZOROWSKA A. (2002): Public International Law,
Old Bailey Press, London.
13 Estoppel is a well established principle of public international law. It is defined as a rule of
evidence whereby a person is barred from denying the truth of a fact that has already been
settled. Estoppel thus gives a party's partners a certain measure of confidence, since it
precludes that party from "venire contra factum proprium".
No. 62, 2nd Q. 2006
- secondly, evolve additional specific substantive principles relating to
both "access" and "public interest" 16:
Principles relating to "access" would notably include fair use zones and
the public domain, consumer needs relating to trust and digital signatures,
intellectual property rights, cryptography and confidentiality, domain names,
infrastructural development, the art and quality of regulation, the notion of
digital solidarity and cooperation in building enabling frameworks and
capacities to overcome the "digital divide."
Principles relating to "public interest" would cover areas like human
dignity and privacy/data protection, protection of minors, spam, security and
cryptography, development, cultural identity and diversity including the
"multi-lingualisation" of the internet.
In this context, some insights from public economics may be helpful to
better understand the concept of public goods pertaining to internet
governance. At a national level – in markets and societies – it is the role of
governments to provide public goods such as security, public education or
public parks, and also to regulate public ills such as actual threats to health,
privacy and information integrity, pollution or unfair competition. The
economic characteristics of public goods (non-excludable and non-rival in
consumption) are the same, irrespective of whether the public goods are
local or global. Obvious examples include the global environment, global
security, economic stability, health and knowledge. The openness,
interdependence/interconnection and integration associated with
globalization, and notably the ubiquitous internet, mean that somehow the
tasks of identifying to what extent public goods are provided and public ills
avoided would have to be undertaken not only at a local, but also at a global
level 17.
14 The principle of "proportionality" obliges governments to use the least intrusive measure,
given the legitimate aim. This test calls for a comparison between the measure actually chosen
and hypothetical alternative measures.
15 "Subsidiarity" is an assignment principle that determines the optimal level at which public
tasks should be performed, i.e. as close as possible to where the action occurs.
16 GREWLICH K. W. (1999): "Governance in 'Cyberspace' – Access and public interest in
global communications", Kluwer Law International, p. 305ff.
17 DEEPAK N. & COURT J. (2002): "Governing Globalization: Issues and Institutions", UN
University World Institute for Development Economics Research (UNU/WIDER), Policy Brief
no. 5, Helsinki (7).
K.G. GREWLICH
The challenge to provide public goods at the global level and to avoid
public ills is one of the engines driving the process from governance to the
"effective international rule of law" ("Constitutionalisation" in terms of
normative principles contained in the UN Secretary General's Report "In
Larger Freedom" 18 ). Another major driving force is that everyday users, or
(net)citizens, urgently expect measures to deal with fraud, spam, hacking
and violations of data protection.
A real commitment to an open and inclusive dialogue is needed, notably
on the part of those who presently feel comfortable with the status quo. The
prevailing currency should be engagement and persuasion, built on
long-term relationships and trust.
18 UN SECRETARY-GENERAL (2005): "In larger freedom: Towards development, security and
human rights for all", UN Gen. Ass. Doc. A/59/2005, March 21st 2005.
Book Review
Alban GONORD & Joëlle MENRATH
Mobile Attitude
Ce que les portables ont changé dans nos vies
Hachette Littératures, 2005, 282 pages
by Marie MARCHAND
Les auteurs disent du mobile (ou portable) qu'il est un objet à tout faire (bien
ou mal) et qu'il n'est pas forcément utilisé pour ses fonctions principales. A
cet égard, le portable ne fait que ressembler à tous les objets domestiques
qui sont constamment détournés de leur usage (voir les travaux de Philippe
Mallein sur le sujet). Les auteurs ajoutent que ce portable s'inscrit dans la
gamme des objets qui disent le statut d'une personne, ce qui n'est guère
différent d'une Swatch, d'une paire de Nike, d'un appareil photo ou d'un
Palm au demeurant.
Tout ceci est vrai et a été confirmé dans de nombreuses études sur les
objets techniques et sur les objets de communication. Mais, c'est assez
plaisant de se reconnaître dans les petites stratégies individuelles que les
uns et les autres mettent en œuvre pour se donner l'impression qu'ils
utilisent leur téléphone portable différemment de leurs amis ou
connaissances, en se créant leur bulle, en branchant l'appareil dès leur
réveil pour scander la journée, en téléphonant en compagnie d'autres
personnes (vivent les terrasses de cafés !), etc.
C'est dans cette chaîne d'objets techniques de communication et
d'information (qui va du fixe au mobile, en passant par le minitel, l'internet, le
palm, etc.) qu'il me semble devoir analyser le téléphone mobile et je regrette
que les auteurs de Mobile attitude aient l'air de considérer qu'ils s'intéressent
au premier objet technique de communication individuelle et portable, en
faisant fi de la chaîne d'objets qui l'ont précédé et introduit.
Mais au fond, les auteurs de Mobile attitude racontent ce "dé-paysement" à
leur manière et je voudrais reprendre quelques-unes des idées forces de
leur travail. Je retiendrais les idées suivantes :
• Le téléphone portable est un objet technique qui comble le vide
"existentiel" (la vacuité du temps, l'ennui, les temps morts, tous ces espaces
interstitiels, et malheureusement, de mon point de vue, les moments de
respiration, et de méditation).
COMMUNICATIONS & STRATEGIES, no. 62, 2nd quarter 2006, p. 217.
• La médiation des relations entre moi et les autres. Les auteurs vont
chercher des exemples riches, qui montrent que le téléphone scande toute
une vie. Dommage qu'ils n'aient pas analysé le deuil auquel ont conduit les
téléphones portables qui sonnaient du haut des Twin Towers, alors que
leurs porteurs étaient probablement déjà morts. Y aurait-il un portable
après la mort ?
• L'impérieuse nécessité d'un téléphone portable comme outil
d'autonomie à l'âge adolescent où l'on veut être libre (ce que notait Philippe
Mallein dans ses travaux déjà anciens sur le concept du "séparés
ensemble").
• Sa petite note de musique qui fait de "sa" sonnerie un double de sa
personne.
• Les retrouvailles familiales des parents qui osent à nouveau
téléphoner à leurs enfants, car avec un portable, c'est ludique. Et du coup …
pas intrusif, de mon point de vue.
• Le fait que le mobile soit le plus souvent porteur d'un non-événement
(T'es où ? Tu m'entends ? je ne t'entends plus !)
• Le gestuel autour du portable, qui leur fait, à juste titre, penser à celui
de la cigarette.
• Leur analyse du regard social sur le portable ne peut manquer de me
faire penser à son revers direct : le vol massif des portables, que j'aurais
aimé voir pris en compte.
• L'impossibilité de l'isolement, que ce soit de la voix ou du SMS, qui
me rappelle les raisons - évidemment indirectes - de l'assassinat tragique de
Marie Trintignant pour cause de SMS mortel.
• Le savoir-vivre du téléphone portable, dont les auteurs relèvent
l'ambivalence, et qu'on ne peut s'empêcher de rapprocher de la "Netiquette",
autrement plus contraignante.
• Le concept de communauté compatissante qui est bien sûr lié au
téléphone, mais qui le dépasse largement, puisque consubstantiel de nos
sociétés contemporaines, tant au plan familial qu'au plan social, qu'au plan
politique. …"Les foules sentimentales" qu'Alain Souchon célèbre avec les
mots du poète.
• Le pacte, sans cesse renégocié et mouvant, qui remplace la loi non
appliquée et rigide (elle-même antinomique des communautés
compatissantes).
Je terminerai cette note de lecture en vous disant que j'ai beaucoup aimé
l'épilogue que je vous laisse découvrir.
The authors claim that the mobile phone is a multi-purpose object (for better
or worse), which is not necessarily used for its main function, namely
telephony. In this respect, mobile phones are no different to all domestic
objects, which are constantly used for purposes other than those for which they
were intended (for a more detailed discussion of this topic see the work of
Philippe Mallein). The authors are right when they add that the mobile phone
belongs to a range of objects that reflect an individual's status, but that it is
no different to Swatch watches, Nikes, cameras or Palms at the end of the
day.
All this is true and has been confirmed by several studies of technical and
communication objects. However, it is rather pleasant to recognise oneself in
the little strategies adopted by various individuals to give the impression that
they use their mobile phone differently to their friends and acquaintances, by
creating their own private bubble, by turning the telephone on the moment
they wake up to mark the beginning of the day, by telephoning while in the
company of others (three cheers for café terraces!), and so on.
It is in the context of this family of communication and information tools
(ranging from the fixed telephone to the mobile, via minitel, the Internet,
palms, etc.) that mobile telephony should, in my opinion, be analysed.
Unfortunately the authors of Mobile Attitude seem to be under the
impression that they are dealing with the first ever personal and portable
communication object, and do not spare a thought for the chain of objects
that preceded and paved the way for the mobile phone.
However, the authors of Mobile Attitude essentially have their own way of
relating this "change of scene" and I would like to highlight some of the
stronger ideas in their work. These ideas are that:
• The mobile phone is a technological object that fills the "existential"
gap (empty time, boredom, dead time, all of those interstitial spaces, which
unfortunately, from my point of view, represent moments for taking stock and
reflection).
• The mediation of relationships between me and the others. The
authors present a broad range of examples that illustrate how the mobile
phone marks the pace of our entire life. What I regret is that they do not
analyse the grief caused by the mobile phones that continued to ring at the
top of the Twin Towers after their owners had probably already died. Will
there be a mobile phone after death?
• The imperative necessity of owning a mobile phone as a tool of
independence for adolescents who have reached an age where one wants
to be free (a point already made by Philippe Mallein in older research into
the concept of "separated together").
• The mobile phone's musical touch that makes "its" ring tone a double
of its owner's personality.
• Renewed family bonds with parents who dare to ring their children
again because mobile phones make this fun - and suddenly not so intrusive
in my opinion.
• The fact that mobile phones often bear the news of non-events
(Where are you? Can you hear me? I can't hear you!)
• All the gestures related to mobile phones, which are reminiscent of
smoking.
• The authors' analysis of the social attention attracted by the mobile
phone inevitably made me think of its flip side, namely the widespread theft
of mobile phones, which I would like to have seen dealt with.
• The impossibility of isolating oneself, whether this be from voice or
SMS communications.
• The good manners of mobile telephony, whose ambivalence is
underlined by the authors and which one cannot help comparing to
"Netiquette", which is far more restrictive.
• The concept of the caring community, which is of course related to the
telephone, but which goes far beyond it, since it lays the foundations of
contemporary society, in terms of family life, social interactions and
political reality alike.
• The pact, constantly renegotiated and changing, that replaces the
unapplied, rigid law (itself antinomic to the concept of caring communities).
I would like to conclude this review by saying that I deeply enjoyed the
book's epilogue, which I will leave its readers to discover for themselves.
Debra HOWCROFT & Eileen M. TRAUTH (Eds)
Handbook of Critical Information Systems Research
Theory and Application
Edward Elgar publishing, 2005, 426 pages
by Jean-Gustave PADIOLEAU
This book provides an excellent opportunity for readers to familiarise themselves with
an increasingly popular research paradigm in organization and management studies,
namely critical realism. This volume is a welcome addition to the previously published
works of reference (including: M. ARCHER et al., Critical Realism: Essential
Readings, Routledge, London, 1998; S. ACKROYD et al., Realist Perspectives on
Organization and Management, Routledge, London, 2000; Critical Realist
Applications in Organization and Management Studies, Routledge, London, 2001).
In a few words, critical realism aims to be an alternative research paradigm
to positivism and to post-modern relativism. Realism? There is a world which
exists independently of the researcher's representation of it. Critical? Social
sciences have an emancipatory potential. They can be critical of the social
phenomena they observe. Realism favours normative judgments. This
normative agenda may disturb or irritate potential readers of the Handbook
of Critical Information Systems Research. The contributions include
numerous neo-Marxist and feminist critiques. Nevertheless, this collective
work deserves attention.
The Handbook of Critical Information Systems Research offers valuable
insights into, for instance, the complex issues of "rationalities and emotions"
and "power" in Information Systems. Economists and specialists will
appreciate the provocative analysis of widespread phenomena, although
the cases of "IS failure" and "design fallacies" merit further study. Other
stimulating contributions include "management fashions and information
systems", "the ethical turn in information systems" and "competing
rationalities."
Once again, some aficionados of the positivist social science "lieu
commun" against normative judgements will disregard this challenging
paradigm in progress. However, like it or not, let us recognize that the
academic labour of critical realism has the emancipatory potential of
liberating our closed minds.
Dan REINGOLD with Jennifer REINGOLD
Confessions of a Wall Street Analyst: A True Story of Inside Information
and Corruption in the Stock Market
HarperCollins Publishing, New York, NY, 2006, 368 pages.
by James ALLEMAN
Dan Reingold not only lays out his career as a telecommunications analyst
on Wall Street in this professional autobiography, but sets forth quite
explicitly many of the problems with the financial services industry and the
information and communications technology (ICT) sector it supported.
Unfortunately, most of these problems have not been resolved by legislation
arising from the scandals, accounting frauds, and deceptive practices at
Enron, Qwest, WorldCom, Global Crossing, and other companies that
resulted in bankruptcies, losses of trillions (yes, a "t") in stockholders' value,
tens of thousands of redundancies, and employees losing their retirement
plans and savings.
Reingold takes a swipe at interlocking directorships, particularly in the case
of AT&T and Citigroup, and at indefeasible rights of use (IRUs) and their
accounting treatment. Reingold provides details on Bernie Ebbers and the
rise of WorldCom and Jack Grubman's role on the financial side, as well as
Grubman's intimate relationship with Ebbers. The rise and fall of Global
Crossing, Qwest, and other companies, with the aid of analysts and bankers,
are detailed in the book. In addition, the author indicts the accounting
industry, particularly the quality and reliability of auditors, for whom he earlier
had admiration. He points out the changes in their practices that aided and
abetted the nefarious practices of the ICT sector.
Reingold indicates how the investment bankers both scorn analysts and
attempt to coerce, intimidate and sway them to rate their clients favorably. At
the same time, many analysts were under pressure to provide advance
notice to bankers of downgrades in ratings. With respect to IPOs, Reingold
notes:
"The problem was that the IPO shares were being spun, certain
investors got inside information and the rest of the investing public was
playing in a rigged game and didn't even know it." (p. 274).
For those who lost some of the trillions of dollars in the market, the book is
sure to confirm their suspicions and annoy them. More importantly,
according to Reingold, the systemic problems have not been addressed.
This points to additional problems, disappointments and a lack of confidence
in the veracity of the stock market in the future.
According to Reingold the "Chinese Wall" separating the analyst and the
investment banking segment of the firm is breached – more often than one
would like to think. Moreover, he stresses the importance of information,
which moves the market by millions, if not billions. Early access to this
"insider" information can make fortunes for the insiders (and rarely leads to a
jail term, even if offenders are caught). Reingold cites the example of
Fidelity, in the days before Reg FD (Fair Disclosure), which requires all
information about a company to be released to everyone simultaneously.
The analysts would fly to Boston and brief Fidelity first, and then move on to
others. Thus, Fidelity would get the early news on the buys and when to sell.
"Trying to mimic Fidelity's stock purchase was a good strategy, one
that many individuals and companies employed, but it had a major
downside: if you were still in when 'FIDO' started selling, you were
toast." (p. 110).
Another example of the power of insider information was with Global
Crossing, in which only a handful of people were privy to a briefing (p. 237)
which indicated problems with Global Crossing's business plan – its stock
price fell by 17 percent within two hours of the meeting.
More importantly, he indicates how both Wall Street and the ICT sectors
were complicit in the collapse of the ICT sector. Reingold also notes how the
research arms of banks were continually pressured to support banking side
deals (pp. 36, 71, 103, 104, 112), as blatant as tying analysts' compensation
to the banking side revenues (p. 186) or more subtle forms of incentives or
threats. He discusses how the Securities and Exchange Commission's (SEC)
"No-Action Letter", and its willingness to knowingly look the other way,
facilitated the conflicts of interest between research and banking (pp. 41, 103, 163-165),
which have not been addressed.
In this easy to read and informative book, Reingold (with Jennifer Reingold,
his niece) traces his career from MCI to Morgan Stanley, to Merrill Lynch,
and finally, to Credit Suisse First Boston (CSFB) during the stock market's
"go-go" years of the late nineties and early in this century. Along the way, he
introduces many high-rollers on Wall Street and the ICT sectors – their
arrogance, hubris, and conceit, but most of all their greed. In particular, his
evil "twin," Jack Grubman of Smith Barney, is one of the major villains of the
piece. According to Reingold, Grubman is constantly revealing privileged
information after going "over-the-wall," that is, talking to the investment
banking side of the business and their clients. Grubman, according to
Reingold, would spread this knowledge throughout the industry in order to
gain status and be known as an insider (pp. 77, 71, 133, 174, 224, 285).
Unfortunately, this could cost the unaware investing public, and even some
of the professional investors, hundreds of millions of dollars; in one case
Reingold documents, a three billion dollar swing in a day (p. 78). Yet even
when the law comes down on Grubman, he only has to pay a 15
million dollar fine, less than half of his severance package and only a small
piece of his reported 80 million dollars of compensation during the period
(pp. 288-289). Crime pays!
But Reingold goes further than this well-publicized story to criticize the fact
that Spitzer and other crime fighters did not investigate how high up the
ladder the corruption went, although one gets a clue in the actions of Sandy
Weill, Chairman and CEO of Citigroup at the time, and Grubman, to cite a
specific example.
Michael Armstrong, head of AT&T, expressed to Weill, who was on AT&T's
board, his annoyance with Grubman's negative views on AT&T. Weill asked
Grubman to "take another look" at AT&T shortly before AT&T spun off its
wireless business and would need an investment bank to handle the world's
largest IPO. In the end Grubman raised his rating on AT&T, and got his two
children into the desirable 92nd Street Y pre-school with Weill's assistance.
The pre-school received a one-million dollar donation from Citigroup (a tax
write-off for the firm). And, Citigroup became one of the lead bankers on the
AT&T offering and made 63 million dollars for the firm (pp. 197-199). Later,
Grubman returned to verbally dissing AT&T while maintaining his high "buy"
rating, and within a few months had lowered it by two notches. By the way,
Armstrong was used to solidify Weill's power at Citigroup by helping oust his
rival co-chairman. This is but one example – though perhaps the best
known – cited by Reingold of conflicts of interest in the industry.
Reingold's focus on the incentive structure offered by the banks and the
industry is critical to understanding the remedies, which he feels the current
reforms have not captured. Every securities policymaker should read the
last chapter – "Afterword" – of Confessions… to understand how the current
legislation and regulations are inadequate to control and correct the
practices of the finance sector. Indeed, he feels that Eliot Spitzer did not
go far enough. He failed to reach the highest level of the executive suite.
The famous 1.4 billion dollar settlement with the financial industry, for
example, only "punished" the firms and not the individuals involved. Indeed,
it is a small sum to pay when the industry made over 80 billion dollars in
profit during the period (p. 288).
"I just hoped he [Spitzer] wouldn't stop at firms, but would take his
crusade right to the door of individuals who broke the rules. After all…
if firms are fined, the stockholders suffer. By contrast, if individual
executives are punished, shareholders will benefit because executives
are more likely to behave better in the future and spend more time
running their companies instead of lining their own pockets." (p. 288).
The research environment steadily deteriorated over this period, and it does
not appear to have improved despite all the rule changes and legislation.
"This job was less about analysis and more than ever about who you knew."
(p. 205). This is why the last chapter of the book is so important; it provides
guidance on how to rectify these issues from someone who knows the
problems and conflicts intimately.
As with several books on the collapse of the ICT sector and practices of its
financial enablers, this book reads like a novel, except no publisher would
accept it as believable. But this is not fiction and the problems, issues and
conflicts remain, much to the detriment of the society and the investing
public. Would that all analysts had the ethics of a Dan Reingold; regrettably,
it appears they do not.
Thomas SCHULTZ
Réguler le commerce électronique par la résolution des litiges en ligne
Une approche critique
Bruylant, Cahiers du centre de recherches informatiques et droit, Brussels, 2005, 672
pages
by Jean-Pierre DARDAYROL
Ce livre traite des relations qu'entretiennent d'une part, la régulation du
commerce électronique et d'autre part la résolution des litiges en ligne. Il
est tiré de la thèse de doctorat de l'auteur. L'approche, présentée comme
pluridisciplinaire, est principalement juridique. Elle s'adresse à un public de
spécialistes.
L'ouvrage aborde successivement trois sujets :
- la théorie générale de la régulation du cyberespace en s'attachant
principalement à décrire l'état du droit dans le cyberespace, les trois
principaux modèles de régulation, avant de présenter la thèse de
l'auteur : "le réseau : méta-modèle de régulation" ;
- le mouvement "on line dispute resolution" qui est analysé, puis
présenté comme une voie privilégiée d'accès à la justice ;
- l'analyse critique et fouillée des conditions de validité d'une régulation
du cyberespace par la résolution des litiges en ligne.
La thèse de l'auteur - riche et large - repose sur des idées fortes :
- pour les petites et moyennes transactions, les acteurs du on line
dispute resolution sont bien positionnés ;
- alors que les formes classiques de justice sont peu accessibles, mal
adaptées et coûteuses ;
- les résolutions en ligne par des acteurs déterritorialisés reposeront sur
deux piliers : d'une part, "des principes fondamentaux communs" et
d'autre part, l'autoexécution des accords.
En face de cette innovation fondamentale, l'auteur souhaite la mise en place
d'une intervention de l'Etat pour l'encadrer, notamment en cas de dérives.
The book, based on the author's doctoral thesis, examines the relations
between electronic commerce regulation on the one hand and online dispute
resolution on the other. Although presented as multi-disciplinary, the
approach adopted is mainly juridical and the book is aimed at specialist
readers.
The book centres on the following three topics:
- a general theory of cyberspace regulation, which offers an overview of
cyberspace legislation and the three main regulatory models adopted,
before moving on to present the author's thesis: "The network: a meta-model of regulation";
- the trend towards "online dispute resolution," which is analysed and
subsequently presented as a privileged form of access to justice;
- a critical and detailed analysis of the validity of regulating cyberspace
via online dispute resolution.
The author's thesis – rich and extensive – is based on strong ideas:
- online dispute resolution players are well-placed when it comes to
small and medium-sized transactions;
- traditional forms of justice, by comparison, are inaccessible, ill-suited
and expensive;
- online resolutions by players outside national jurisdictions will be
based on two mainstays: "fundamental common principles" on the one
hand and self-enforced agreements on the other.
Confronted by this radically innovative process, the author expresses his
desire to see a State body set up to oversee online dispute resolution,
especially if the situation drifts out of control.
The authors
James ALLEMAN is a professor in the College of Engineering and Applied
Science, University of Colorado, Boulder. In the fall of 2005 Dr. Alleman was a
visiting scholar at IDATE in Montpellier, France; previously, he was a visiting
professor at the Graduate School of Business, Columbia University, and director of
research at Columbia Institute of Tele-Information (CITI). Professor Alleman
continues his involvement at CITI in research projects as a senior fellow. He has also
served as the director of the International Center for Telecommunications
Management at the University of Nebraska at Omaha, director of policy research for
GTE, and an economist for the International Telecommunication Union. Dr. Alleman
was also the founder of Paragon Service International, Inc., a telecommunications
call-back firm and has been granted patents (numbers 5,883,964 & 6,035,027) on the
call-back process widely used by the industry.
Jacques BAJON is a senior consultant in television and media at IDATE. His
work is focused on the television and internet sphere, in which he conducts
strategic and sector-based studies. He is also active in the new TV services domain
and follows the roll-out of new TV delivery networks. He previously worked as a
freelance writer for the Eurostaf group, carrying out market research and analysis on
groups in the media and telecommunications sectors. He has also gained market
analyst experience from working with Ericsson. Jacques Bajon holds a post-graduate
research degree (DEA) in international economics (University Paris X Nanterre)
[email protected]
Pieter BALLON is senior consultant at TNO-ICT, the ICT institute of the
Netherlands Organisation for Applied Scientific Research. He is also programme
manager at the centre for Studies on Media, Information and Telecommunication
(SMIT), Vrije Universiteit Brussel. He specialises in innovations in fixed and wireless
broadband services and the political economy of telecommunications, on which
topics he has published extensively. Pieter Ballon holds degrees in modern history
(Catholic University of Leuven) and in library and information science (University of
Antwerp). Currently he is coordinator of the cross-issue "business modelling" for the
various integrated projects of the Wireless World Initiative within the 6th European
Framework Programme.
Colin BLACKMAN is an independent consultant, editor and writer specialising in
foresight and analysis of information age issues. He is the founding editor of info and
foresight, chief editor of Shaping Tomorrow and was formerly editor of
Telecommunications Policy. He works with a wide variety of clients including the
European Commission's Institute for Prospective and Technological Studies, DG
Information Society and Media, the European Foundation for the Improvement of
Living and Working Conditions, the OECD, the International Telecommunication
Union and the World Bank. His recent private sector clients include European and
global players in the telecommunications and energy sectors.
Erik BOHLIN is currently Head and Associate Professor ("Docent") at the
Division of Technology & Society, Department of Technology Management &
Economics at Chalmers University of Technology, Gothenburg. He has published in
a number of areas relating to the information society including policy, strategy, and
management. He is Chair of the International Telecommunications Society, as well
as a member of the Scientific Advisory Boards of COMMUNICATIONS &
STRATEGIES, Info and Telecommunications Policy. Erik Bohlin graduated in
Business Administration and Economics from the Stockholm School of Economics
(1987) and holds a Ph.D. from the Chalmers University of Technology (1995).
[email protected]
Olivier BOMSEL is Professor of Industrial Economics at the Ecole des Mines and
a senior researcher at Cerna, the school's Centre of Industrial Economics. His
background is in computer science engineering (Ecole des Mines de Saint Etienne)
and he also holds a PhD in industrial economics. Since 1997 his research has dealt
with digital innovation dynamics through networks and media economics. On top of
his activities at the Ecole des Mines, Olivier Bomsel is co-founder of TaboTabo Films,
a media company producing films and TV series.
Martin CAVE is Professor at Warwick Business School at the University of
Warwick in the UK. He specialises in regulatory economics and is the author of
numerous articles. Martin Cave is also co-editor of the Handbook of
Telecommunications Economics (Elsevier, 2002 and 2005) and of Digital Television
(Edward Elgar, 2006). He advises a number of regulators and is President of
Thinktel, an international think tank in the field of communications based in Milan.
Peter CURWEN is Visiting Professor of Telecommunications Strategy attached to
the Department of Management Science at the University of Strathclyde in Glasgow
and an independent author and consultant. He was previously Professor of Business
Economics at Sheffield Hallam University. He has published several books on
telecommunications, including The Future of Mobile Telecommunications: Awaiting
the third generation (London: Macmillan, 2002) and with Jason Whalley
Telecommunications Strategy: Cases, theory and applications (London: Routledge,
2004). His specialist area is mobile telecommunications, and he has published
extensively on this subject (often with Dr. Whalley) including articles in
COMMUNICATIONS & STRATEGIES, Telecommunications Policy and Info.
Jean-Pierre DARDAYROL joined the CGTI (Conseil général des technologies de
l'information) in 2003 as a senior advisor in the field of ICT, specialising in software
industries and usage innovations. He is also a research fellow at Grenoble MSH-Alpes (CNRS), President of Standarmedia and of CARSI (the Carcassonne ICT summer University and Grenoble winter University). Jean-Pierre Dardayrol graduated from the Ecole polytechnique in 1972 and from ENST-Paris in 1977.
Ariane DELVOIE is an attorney at law in Paris and a senior associate at the Alain Bensoussan law firm. She specialises in both consulting and litigation relating to computer agreements and intellectual property law, in France as well as in common law countries. She is the author of numerous articles on IT news, notably in la Gazette du Palais des technologies avancées, l'Informatique Professionnel, ExpertIT News, I-Date and Information & Systèmes. She regularly participates in seminars and conferences and also provides in-house training.
[email protected]
Anne DUCHÊNE is currently a post-doctoral fellow at the University of British
Columbia and is due to take up a position as assistant professor at Drexel University
in autumn 2006. She earned her PhD from CERAS-Ecole Nationale des Ponts et
Chaussées, after graduate studies at University of Paris I - La Sorbonne. Her current
research focuses on intellectual property rights, and more specifically on internet
piracy, agency relationships in the patent industry and the (dis)functioning of patent offices.
David FERNÁNDEZ graduated in audiovisual communication from the
Autonomous University of Barcelona. He is currently working on his PhD at the
GRISS in the Department of Audiovisual Communication and Advertising at the
University of Barcelona. In this context, David Fernández is working on several research projects related to cultural industries, online media, peer-to-peer communication and interactive television services, as well as other projects on
traditional media. He has also developed a career as a journalist and a producer in
the press, radio, television and the multimedia sector.
Simon FORGE has some 30 years of experience working in the information
industries on projects in telecommunications and computing, specifically exploring
new wireless and computing technologies and potential futures, outcomes and
strategies for markets, products, companies, countries and regions. He recently led a
European Commission study on future spectrum usage, for EU input to the ITU WRC-07 deliberations (downloadable at http://fms.jrc.es), for EC/JRC/IPTS and DG Info Soc
and has studied policy, legal and economic issues of open source software on behalf
of the EC as input to EU policy formulation on OSS. He is currently leading a study of
230
No. 61, 1st Q. 2006
the legal and commercial aspects of Grid computing industrialisation. Previously,
Simon Forge managed delivery of IT and telecommunications systems as Director of
IT Development for Consumer and Business Products for Hutchison 3G UK,
developing mobile multimedia products. He holds a Ph.D in digital signal processing, an MSc and a BSc in Control Engineering, all from the University of Sussex, UK, is a Chartered Engineer and M.IEE, and sits on the editorial board of the journal Info.
Anne-Gaëlle GEFFROY is currently a Ph.D student at CERNA, the Centre of
Industrial Economics of the Ecole des Mines de Paris. She also holds a bachelor's
degree in economics from the Ecole Normale Supérieure. Anne-Gaëlle Geffroy's
research focuses on the economics of DRMs and addresses issues such as
compatibility and standardization, vertical relations in the media chain and digital
copyright laws.
Klaus W. GREWLICH is Professor of International Law & Communications at the
Center of European Integration Studies at Bonn University. He is also a member of
the high-level ICT Policy Advisors to the Secretary-General of the United Nations in
New York. From 1990 to 1995 Prof. Grewlich was Executive Vice President
(International Business) and a board member of Deutsche Telekom. From 1996 to
1998 he was Director General and a board member of an Industrial Federation in
Brussels. In 2001 Prof. Grewlich was appointed Ambassador of the Federal Republic
of Germany after serving in Baku, New York and Bucharest.
[email protected]
Bernard GUILLOU has been active in the field of international media and
telecommunications for nearly thirty years, with a particular focus on regulation,
industry development and the strategies of multimedia companies. In 2003 he
founded Mediawise+, a strategic advice and research firm serving key players in
those two industries. Prior to this Guillou spent a number of years with the Canal+
pay-TV group, firstly as Director of International Development and subsequently as
Group Director of Corporate Strategy. Prior to joining Canal+, Bernard Guillou was
head of the video services division at the Prospective and Economic Studies Unit of
France Telecom. He is a member of the Editorial Committee of COMMUNICATIONS
& STRATEGIES and the Euro-Conference for Policy and Regulation. He is the
author and co-author of three books on the strategies of multimedia companies and
broadcasting regulation in Europe and the USA. He holds degrees from the Ecole
des Hautes Etudes Commerciales and the Paris-Dauphine University.
Rémy LE CHAMPION is an associate professor at the University Paris II and a
researcher at CARISM. He holds a Ph.D in economics from the University Paris X, as
well as a post-doctorate from Keio University in Tokyo, Japan. He was previously a
visiting researcher at the Columbia Institute for Tele-Information in New York, USA,
and headed the research department of the Conseil Supérieur de l'Audiovisuel in
France. He currently works as an independent expert for the European Commission.
His research interests include media economics and new media (TV, programming,
LPTV, press). He is the author of numerous publications, most recently Télévision de pénurie, télévision d'abondance (with Benoît Danard) and La télévision sur Internet (with Michel Agnola).
[email protected]
Evelyne LENTZEN is currently President of the Conseil supérieur de l'audiovisuel
(CSA) in Belgium. Before taking up this post in 1997, Ms. Lentzen was editor-in-chief
of the Centre de recherches et d'information socio-politique (CRISP), a Belgian
research centre whose activities are devoted to the study of political and economic
decision-making in Belgium and in Europe. Ms. Lentzen holds a B.A. in politics from
the Université libre de Bruxelles.
Sven LINDMARK is assistant professor and head of the Division of Innovation
Engineering and Management at Chalmers University of Technology, where he
lectures on the management and economics of innovation in the ICT sector. He holds
MSc and Ph.D degrees in Industrial Management. His research interests include a
broad range of innovation-related issues, including industrial and technological
change, standardization, innovation systems and the diffusion of innovations and
forecasting. He has worked extensively in the field of mobile communications,
including studies on the history of the sector, mobile internet services in Japan and
Europe, 3G and 4G mobile systems, barriers to the diffusion of mobile internet
services and the evolution and state of the Swedish telecom innovation system.
Marie MARCHAND has studied the field of new information technologies and
their impact on corporate strategy for many years. After managing France Telecom's
market forecast department, she was subsequently appointed marketing director at
Cap Gemini. She has taught at Stanford University and is now a consultant. In this
capacity she works with large corporations to help them adapt their strategies to the
generalisation of the internet and to use it as a management tool in their daily
business.
Laurence MEYER joined IDATE in November 1998 as senior consultant in the
Media Economics department. She specialises in structural analysis, sector-based
economic forecasts and the evaluation of public policies. As projects director at IDATE, Laurence Meyer's work focuses primarily on the different domains of the television world and on the mid-term risks and potential in this sector (digital TV, interactive TV, IPTV, personal TV and VOD services, …). In this context, she is frequently involved in Europe-wide strategic missions. Previously, she was a consultant at the French economic study and forecasting company B.I.P.E., where she monitored the different communication markets. This experience has also given her expertise in the cinema, internet and telecommunications for audiovisual services sectors, as well as in the field of consumer electronics.
Laurence Meyer is an economic engineer and holds a post-graduate professional
diploma (Magistère) from Aix-Marseille II University of Economics (1991).
[email protected]
Jean-Gustave PADIOLEAU is associate professor at University Paris-Dauphine
and senior researcher at GEMAS-Fondation Maison des Sciences de l'Homme
(Paris). He has written several academic books and articles and regularly contributes
to newspapers and reviews such as Les Echos, Libération and Le Débat. He is also
a member of COMMUNICATIONS & STRATEGIES Scientific Committee.
Martin PEITZ is Professor of Economics and Quantitative Methods at the
International University in Germany. Martin Peitz received his doctorate in economics
from the University of Bonn. He is an associate editor of Information Economics &
Policy, a research fellow of CESifo and ENCORE, and has published numerous
articles in leading economics journals. His current research focuses on the media,
entertainment, and telecommunications markets, as well as the understanding of
reputation in markets.
Emili PRADO is Professor of Audiovisual Communication at the Autonomous
University of Barcelona and a research associate at the University of California,
Berkeley. He is the director of EUROMONITOR and USAMONITOR (Television International Observatories), which have been conducting research on TV programming since
1989. He is the Director of the Image, Sound and Synthesis Research Group
(GRISS), which is working on various projects ranging from the consumption of
screen-mediated communication to the acoustic model of credibility, interactive
television and political communication. He has been a visiting professor at the
universities of Quebec, Montreal, Sao Paulo, Bordeaux and Pisa.
Alexander SCHEUER completed studies in law at the University of Saarland,
Germany (1987-1993), and the Catholic University of Leuven, Belgium (1990/91). In 1994 he was appointed Vice-General Manager of the law section of the Europa-Institut at the University of Saarland. From 1997 to 2000 Alexander Scheuer was
Vice-General Manager of the Institute of European Media Law (EMR), in
Saarbrücken/Brussels, where he became General Manager and member of the
Executive Board in 2000. The same year, he was admitted to the bar. Scheuer is a
member of the Advisory Committee and of the IRIS' Editorial Board, both at the
European Audiovisual Observatory, Strasbourg. In 2003 he became a member of the
Scientific Advisory Board of the Voluntary Self-Regulation of Private Televisions in
Germany (FSF), Berlin. Alexander Scheuer has also written and contributed to a
large number of studies and other publications on European media law.
Alexandre de STREEL is a lecturer at the economics faculty of the
University of Namur (Belgium) and a researcher at the law department of the
European University Institute of Florence (Italy). His research interests focus on
electronic communications regulation and competition policy. From 2000 to 2003 he
was an expert at the European Commission, where he was involved in the
negotiation of the new regulatory framework. Alexandre de Streel has published
extensively on this topic in both telecommunications and antitrust reviews (World
Competition, European Competition Law Review, Journal of Network Industries, Info,
Cahiers de droit européen, Journal des Tribunaux de droit européen). He holds a
degree in law (University of Brussels) and in economics (University of Louvain).
Patrick WAELBROECK is a lecturer at Télécom Paris - Ecole nationale
supérieure des télécommunications. He earned his Ph.D from the University of Paris
La Sorbonne. He also holds an M.A. from Yale University and is a Fulbright alumnus.
His current research proposes both empirical and theoretical perspectives on internet
piracy and technological protection in creative industries.
Uta WEHN de MONTALVO is a senior researcher and advisor on Information and
Communication Technology (ICT) policy at TNO (Netherlands Organisation for
Applied Scientific Research). Previously, she worked as a research officer in SPRU –
Science and Technology Policy Research and as a programmer for IBM UK Ltd. She
holds a DPhil in Science and Technology Policy from the University of Sussex (UK).
Her research focuses on new ICT services with a special interest in business models
for location-based services and spatial data policy.
ITS News
Message from the Chair
Dear ITS members,
As I write this message, the 16th ITS Biennial Conference in Beijing, which took place on June 12th-16th, 2006, has just drawn to a close. The conference captured our hearts and minds, and its participants will
cherish many fond memories of Chinese hospitality and elegance. The event
took place in the best possible spirit of ITS: scientific interchange, new research
results, professional networking and new friendships.
The theme of the ITS 16th Biennial Conference was "Information
Communication Technology (ICT): Opportunities and Challenges for the
Telecommunications Industry." The conference was hosted by the Beijing
University of Posts and Telecommunications (BUPT). The principal conference
organisers were Professor Ting-Jie LU, Dean, School of Economics and
Management at BUPT, together with ITS Board member Professor Xu Yan,
Hong Kong University of Science and Technology.
Just before the conference in Beijing, ITS convened its annual board of
directors meeting. The ITS Committees (Conferences & Seminars, Finance,
Marketing & Promotions, Membership and Nominations, Publications, and
Strategic Planning) each reported on their respective activities and results.
The most important decision taken at the ITS Board meeting regarded the
location and hosting of the next biennial conference. I am pleased to announce
that the 17th ITS Biennial Conference will be held in Montreal, June 24th-27th,
2008, and will be co-hosted by the University of Sherbrooke and TELUS. The
principal organizers will be Professor Anastassios Gentzoglanis from the
University of Sherbrooke and Stephen Schmidt from TELUS. ITS will be in good
hands with these able and dedicated friends – TELUS is a long standing
supporter of ITS, and Professor Gentzoglanis presented his first ITS papers
COMMUNICATIONS & STRATEGIES, no. 62, 2nd quarter 2006, p. 235.
back in 1989! We are confident that these colleagues will make the next
conference a hallmark event in keeping with the spirit of ITS.
ITS is also pleased to welcome a new board member. Professor Hitoshi
Mitomo, Waseda University, joined the ITS board in April 2006. At the ITS Board
meeting in Beijing, several ITS board members were also re-elected for a period
of six years additional service including James Alleman, Loretta Anania, Erik
Bohlin, Hidenori Fuke, Gary Madden and Xu Yan.
New roles and responsibilities have emerged in the ITS board and related
support functions: Vice Chair: Brigitte Preissl; Strategy Chair: Xu Yan; Marketing and Promotions Committee Co-Chairs: Andrea Kavanaugh and Patricia Longstaff; Web Development Committee Chair: Stephen Schmidt. We would like to highlight the ITS board's decision to create a new committee to focus on web development and provide additional support for our website.
TELUS has already done much to make our website appealing and useful. ITS is
thankful for the many hours of service and creative contributions offered by this
new team.
ITS is also proud to be associated with its corporate sponsors and society
members, whose support is critical to the vitality of our organization. In
particular, on behalf of the society, I wish to acknowledge our appreciation and
extend our thanks to: Arnold & Porter, BT, BAKOM, Deutsche Telekom, France
Telecom, IDATE, Infocom, KT, NERA Economic Consulting, NTT DoCoMo,
Telecommunications Authority of Turkey and TELUS. ITS would struggle to fulfil
its mission without commitment on the part of government and industry.
To be effective, ITS depends on hearing from our members and public.
Please let us know how we can continue to most effectively serve your
professional interests and ambitions. For more information on ITS, visit our
website (www.itsworld.org). I look forward to working with you and hope to see
you at a future ITS event.
Best wishes,
Erik Bohlin
ITS Chair
International Federation for Information Processing
Seventh International Conference Human Choice and Computers
(HCC7)
An International Conference in remembrance of Rob Kling
SOCIAL INFORMATICS:
AN INFORMATION SOCIETY FOR ALL?
Maribor (Slovenia), September 21-23, 2006
Venue: HCC7 moves to Maribor (Slovenia)
The decision to organise the HCC7 conference in Nova Gorica was made under the assumption that a new Congress Centre would open by September 2006. It has since become clear that the opening of the Centre will be delayed and that the conference would thus have had to take place at four different locations in the city. To avoid this, and to provide better working conditions for the conference participants, a decision has been taken to move the conference to Maribor.
Information, Accommodation and Registration on-line
All information regarding the conference framework and the programme, as well as the online registration form and accommodation details, may be found at
http://www.hcc7.org
Scope of the Conference
"Human choice and computers", "computers and society" and "social informatics" are terms referring to a similar preoccupation: how can the human being and his or her societal environment be kept at the centre? How can we build an "Information Society for All" [UNESCO, 2002; eEurope, 2002] while developing our increasingly complex ICT (Information and Communication Technology) systems?
Questions regarding the programme and the organization should be addressed, respectively, to:
[email protected]
and:
[email protected]
Call for papers
If you would like to contribute to a forthcoming issue of
COMMUNICATIONS & STRATEGIES, you are invited to submit your
paper via email or by post on diskette/CD.
As far as practical and technical requirements are concerned, proposals for papers must be submitted in Word format (.doc) and should not exceed 20-22 pages (6,000 to 7,000 words). Please ensure that your
illustrations (graphics, figures, etc.) are in black and white - excluding
any color - and are of printing quality. It is essential that they be
adapted to the journal's format (with a maximum width of 12 cm). We
would also like to remind you to include bibliographical references at
the end of the article. Should these references appear as footnotes,
please indicate the author's name and the year of publication in the
text.
All submissions should be addressed to:
Sophie NIGON, Coordinator
COMMUNICATIONS & STRATEGIES
c/o IDATE
BP 4167
34092 Montpellier CEDEX 5 - France
[email protected] - +33 (0)4 67 14 44 16
http://www.idate.org