Wireless Communication Systems
Compiled By: Muhammad Tahir

CONTENTS

Preface

Chapter 1  Introduction to Telecommunication Systems
1.1   Creating and Receiving the Signal
1.2   Transmitting the Signal
1.3   Wires and Cables
1.4   Fiber-Optic Cables
1.5   Radio Waves
1.6   Communications Satellites
1.7   Telecommunication System
1.8   Telephone
1.9   Telegraph
1.10  Teletype, Telex, and Facsimile Transmission
1.11  Radio
1.12  Television
1.13  Voice over Internet Protocol (VOIP)
1.14  The Emergence of Broadcasting
1.15  Government Regulation
1.16  International Telecommunications Networks
1.17  Current Developments
1.18  Society and Telecommunication

Chapter 2  Network Architecture
2.1   Network protocol design principles
2.2   Computer Network
2.3   Building a computer network
2.4   Classification of computer networks
2.5   Network Architecture
2.6   Communication Techniques
2.7   Network Topology
2.8   Physical Topologies
2.9   Hybrid Network Topologies

Chapter 3  Broadband Digital Transport
3.1   Objective and Scope
3.2   SONET
3.3   Synchronous Signal Structure
3.4   Basic Building Block Structure
3.5   STS-N Frames
3.6   STS Concatenation
3.7   Structure of Virtual Tributaries (VTs)
3.8   The Payload Pointer
3.9   The Three Overhead Levels of SONET
3.10  The SPE Assembly/Disassembly Process
3.11  Line Rates for Standard SONET Interface Signals
3.12  Synchronous Digital Hierarchy
3.13  Synchronous Transport Module (STM)

Chapter 4  Radio Propagation System
4.1   Radio Basics
4.2   Infrared (IR)
4.3   Telecommunication bands in the infrared
4.4   Broadcasting
4.5   Distribution methods
4.6   Radio Propagation Models
4.7   Antenna Fundamentals

Chapter 5  Satellite Communication
5.1   Communications Satellite
5.2   Applications of Communication Satellite
5.3   VSAT networks
5.4   Space Surveillance Network
5.5   Types of Communication Satellite
5.6   Launch capable countries
5.7   Technical Parameters
5.8   Objects in Orbit
5.9   Basic Components
5.10  Advanced Communications Technology Satellite
5.11  Fixed Service Satellite
5.12  Cable Television
5.13  Low-Noise Block Converter (LNB)

Chapter 6  Mobile Communication
6.1   Universal Mobile Telecommunications System (UMTS)
6.2   Base Station Subsystem (BSS)
6.3   General Packet Radio Service (GPRS)
6.4   Enhanced Data rates for GSM Evolution (EDGE) / Enhanced GPRS (EGPRS)
6.5   Network Switching Subsystem (NSS)
6.6   Mobile Switching Centre (MSC)
6.7   Bluetooth Technology
6.8   Bluetooth vs Wi-Fi

Chapter 7  Wireless Local Loop (WLL)
7.1   Wireless local loop (WLL)
7.2   Licensed and Unlicensed Microwave services
7.3   WiMAX (IEEE 802.16)
7.4   Wireless Local Loop (WLL) Standards
7.5   Wi-Fi Technology
7.6   Wireless Networks

Chapter 8  Global System for Mobile Communication (GSM)
8.1   Global System for Mobile communications (GSM)
8.2   Cellular Network
8.3   Code Division Multiple Access-Based Systems (CDMA)
8.4   Frequency Division Multiple Access (FDMA)
8.5   Time Division Multiple Access (TDMA)

Chapter 9  Global Positioning System (GPS)
9.1   Global Positioning System (GPS)
9.2   System Segmentation
9.3   Atmospheric Effects
9.4   GPS Interference and Jamming
9.5   Techniques to Improve Accuracy
9.6   GPS Time and Date
9.7   Global Navigation Satellite System (GNSS)
9.8   Surveying and Mapping
9.9   GPS Tunnelling Protocol (GTP)
9.10  Reference Points and Interfaces

INDEX
Description
This book, Telecom Systems, shows how the latest telecommunications technologies are
converting traditional telephone and computer networks into cost competitive integrated
digital systems with yet undiscovered applications. These systems are continuing to emerge
and become more complex.
Telecom Systems explains how various telecommunications systems and services work and
how they are evolving to meet the bandwidth needs of data-hungry consumers and the
applications involved.
This book describes how telecommunications systems and services work and the markets
associated with them. Telecommunications technology and services are continually changing.
Descriptions and easy-to-understand diagrams of typical systems and their interconnections
are provided for local exchange company (LEC), inter-exchange company (IXC), private
telephone exchanges (PBX), computer networks (LANs), data networks (e.g. Internet),
billing and customer care systems (BCC).
The book starts with a basic introduction to telecom communication. It covers the different
types of telecom industries, who controls and regulates them, and provides a basic definition
of each of the major telecom technologies. A broad overview of the telecom voice, data, and
multimedia applications is provided. You will discover the fundamentals of telecom
transmission and switching technologies and their terminology.
The basics of public telephone systems are provided along with the structure and operation
of local exchange carrier (LEC) systems. Described are the different types of analog loop,
digital loop, switches, multi-channel communication lines and signaling control systems.
The different types of private telephone systems and their evolution are covered. Included is
the basic operation, attributes and services for key telephone systems (KTS), central
exchange (CENTREX) systems, private branch exchange (PBX) and computer telephony
integration (CTI). You will learn how these systems are converting from fixed proprietary
systems to flexible industry standard systems.
This book covers how digital subscriber lines (DSL) are important to telephone operators,
what services they can offer, and the installation options. You will discover the different types
of DSL, including HDSL, ADSL, SDSL, VDSL, and the new ADSL2+ systems.
The different types of wireless systems are explained including cellular and personal
communication services (PCS), broadcast radio and television, paging, wireless data, land
mobile radio (LMR), aircraft telephones, satellite, wireless PBX, residential cordless, wireless
local area networks (WLAN), short-range data (piconets), wireless cable, wireless broadband
(WiMax), wireless local loops (WLL), and 1st, 2nd, 2.5, and 3rd generation (3G) wireless systems.
IP Telephony services and systems are described and explained. You will learn about IP
private branch exchange (IP PBX) and IP Centrex managed IP telephone services and will
discover how Internet telephone service providers (ITSPs) can provide high-quality
telephone services over unmanaged broadband communication systems.
You will discover how the high data transmission bandwidth available from broadband
connections (such as DSL service) is being used to provide digital television service to
customers (IPTV). Find out how the use of an IP television set top box (IP STB) will allow
customers to select from thousands of television channels available through their telephone
line and watch them on their standard television.
The book also covers the fundamentals of telecom billing and customer care (BCC)
systems. The topics explained include types of services, standard billing processes,
real-time billing, multilingual support, multiple currencies, inter-carrier settlements, event
sources and tracking, mediation devices, call detail records (CDRs), call processing, cycle
billing, clearinghouses, invoicing, management reporting, and payment processing.
Chapter No: 1
Introduction to Telecommunication Systems
Telecommunications refers to devices and systems that transmit electronic or optical
signals across long distances. Telecommunications enables people around the world
to contact one another, to access information instantly, and to communicate from
remote areas. Telecommunications usually involves a sender of information and one
or more recipients linked by a technology, such as a telephone system, that
transmits information from one place to another. Telecommunications enables people
to send and receive personal messages across town, between countries, and to and
from outer space. It also provides the key medium for delivering news, data,
information, and entertainment.
Telecommunications devices convert different types of information, such as sound
and video, into electronic or optical signals. Electronic signals typically travel along a
medium such as copper wire or are carried over the air as radio waves. Optical
signals typically travel along a medium such as strands of glass fibers. When a signal
reaches its destination, the device on the receiving end converts the signal back into
an understandable message, such as sound over a telephone, moving images on a
television, or words and pictures on a computer screen.
Telecommunications messages can be sent in a variety of ways and by a wide range
of devices. The messages can be sent from one sender to a single receiver (point-to-point) or from one sender to many receivers (point-to-multipoint). Personal
communications, such as a telephone conversation between two people or a facsimile
(fax) message (see Facsimile Transmission), usually involve point-to-point
transmission. Point-to-multipoint telecommunications, often called broadcasts,
provide the basis for commercial radio and television programming.
Telecommunications begin with messages that are converted into electronic or optical
signals. Some signals, such as those that carry voice or music, are created in an analog or
wave format, but may be converted into a digital or mathematical format for faster and
more efficient transmission. The signals are then sent over a medium to a receiver, where
they are decoded back into a form that the person receiving the message can understand.
There are a variety of ways to create and decode signals, and many different ways to
transmit signals.
Creating and Receiving the Signal
Devices such as the telegraph and telephone relay messages by creating modulated
electrical impulses, or impulses that change in a systematic way. These impulses are
then sent along wires, through the air as radio waves, or via other media to a
receiver that decodes the modulation. The telegraph, the earliest method of
delivering telecommunications, works by converting the contacts (connections
between two conductors that permit a flow of current) between a telegraph key and
a metal conductor into electrical impulses. These impulses are sent along a wire to a
receiver, which converts the impulses into short and long bursts of sound or into dots
and dashes on a simple printing device. Specific sequences of dots and dashes
represent letters of the alphabet. In the early days of the telegraph, these sequences
were decoded by telegraph operators (see Morse Code, International). In this way,
telegraph operators could transmit and receive letters that spelled words. Later
versions of the telegraph could decipher letters and numbers automatically.
Telegraphs have been largely replaced by other forms of telecommunications, such
as electronic mail (e-mail), but they are still used in some parts of the world to send
messages.
The telephone uses a diaphragm (small membrane) connected to a magnet and a
wire coil to convert sound into an analog or electrical waveform representation of the
sound. When a person speaks into the telephone’s microphone, sound waves created
by the voice vibrate the diaphragm, which in turn creates electrical impulses that are
sent along a telephone wire. The receiver’s wire is connected to a speaker, which
converts the modulated electrical impulses back into sound.
Broadcast radio and cellular radio telephones are examples of devices that create
signals by modulating radio waves. A radio wave is one type of electromagnetic
radiation, a form of energy that travels in waves. Microwaves are also
electromagnetic waves, but with shorter wavelengths and higher frequencies. In
telecommunications, a transmitter creates and emits radio waves. The transmitter
electronically modulates or encodes sound or other information onto the radio waves
by varying either the amplitude (height) of the radio waves, or by varying the
frequency (number) of the waves within an established range (see Frequency
Modulation). A receiver (tuner) tuned to a specific frequency or range of frequencies
will pick up the modulation added to the radio waves. A speaker connected to the
tuner converts the modulation back into sound.
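The two schemes can be sketched numerically. In the toy Python example below, the carrier and tone frequencies are illustrative choices rather than values from the text: the amplitude of the carrier follows the message in the AM case, while the carrier's instantaneous frequency follows the message in the FM case.

    import numpy as np

    fs = 48_000                                 # sample rate (illustrative)
    t = np.arange(0, 0.01, 1 / fs)              # 10 ms of signal

    message = np.sin(2 * np.pi * 440 * t)       # the "sound" to transmit
    carrier_hz = 10_000                         # illustrative carrier frequency

    # AM: the message varies the amplitude (height) of the carrier.
    am_signal = (1 + 0.5 * message) * np.cos(2 * np.pi * carrier_hz * t)

    # FM: the message varies the instantaneous frequency of the carrier,
    # so the phase is the running integral of that frequency.
    freq_deviation = 2_000                      # peak deviation in Hz
    phase = 2 * np.pi * np.cumsum(carrier_hz + freq_deviation * message) / fs
    fm_signal = np.cos(phase)
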
Broadcast television works in a similar fashion. A television camera takes the light
reflected from a scene and converts it into an electronic signal, which is transmitted
over high-frequency radio waves. A television set contains a tuner that receives the
signal and uses that signal to modulate the images seen on the picture tube. The
picture tube contains an electron gun that shoots electrons onto a photo-sensitive
display screen. The electrons illuminate the screen wherever they fall, thus creating
moving pictures.
Telegraphs, telephones, radio, and television all work by modifying electronic signals,
making the signals imitate, or reproduce, the original message. This form of
transmission is known as analog transmission. Computers and other types of
electronic equipment, however, transmit digital information. Digital technologies
convert a message into an electronic or optical form first by measuring different
qualities of the message, such as the pitch and volume of a voice, many times.
These measurements are then encoded into multiple series of binary numbers, or 1s
and 0s. Finally, digital technologies create and send impulses that correspond to the
series of 1s and 0s. Digital information can be transmitted faster and more clearly
than analog signals, because the impulses only need to correspond to two digits and
not to the full range of qualities that compose the original message, such as the pitch
and volume of a human voice. While digital transmissions can be sent over wires,
cables or radio waves, they must be decoded by a digital receiver. New digital
telephones and televisions are being developed to make telecommunications more
efficient.
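As a rough numerical illustration of this process, the Python sketch below (the sample rate and bit depth are arbitrary choices for the example) samples a tone many times per second, quantizes each measurement to one of 256 levels, and encodes each level as eight 1s and 0s.

    import numpy as np

    fs = 8_000                                   # samples per second
    t = np.arange(0, 0.002, 1 / fs)
    analog = np.sin(2 * np.pi * 440 * t)         # a 440 Hz "voice" tone

    bits = 8                                     # 8-bit quantization
    levels = 2 ** bits                           # 256 possible values
    quantized = np.round((analog + 1) / 2 * (levels - 1)).astype(int)

    # Each measurement becomes a fixed-width string of 1s and 0s.
    bitstream = "".join(format(int(sample), "08b") for sample in quantized)
    print(bitstream[:32])                        # first four encoded samples
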
Personal computers primarily communicate with each other and with larger
networks, such as the Internet, by using the ordinary telephone network. Increasing
numbers of computers rely on broadband networks provided by telephone and cable
television companies to send text, music, and video over the Internet at high speeds.
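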
Since the telephone network functions by converting sound into electronic signals,
the computer must first convert its digital data into sound. Computers do this with a
device called a modem, which is short for modulator/demodulator. A modem
converts the stream of 1s and 0s from a computer into an analog signal that can
then be transmitted over the telephone network, as a speaker’s voice would. The
modem of the receiving computer demodulates the analog sound signal back into a
digital form that the computer can understand.
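The modulation step can be imitated in a few lines using frequency-shift keying: one audio tone stands for 0 and another for 1. The tone frequencies below are loosely inspired by the historical Bell 103 modem, but they, and the rest of this sketch, are assumptions of the example rather than details from the text.

    import numpy as np

    fs = 8_000                          # audio sample rate
    baud = 300                          # bits per second
    spb = fs // baud                    # samples per bit

    def modulate(bits, f0=1070.0, f1=1270.0):
        """Turn a string of '0'/'1' into an audio-band waveform."""
        t = np.arange(spb) / fs
        tones = {"0": np.sin(2 * np.pi * f0 * t),
                 "1": np.sin(2 * np.pi * f1 * t)}
        return np.concatenate([tones[b] for b in bits])

    def demodulate(signal, f0=1070.0, f1=1270.0):
        """Recover bits by asking which tone has more energy in each slot."""
        out = []
        for i in range(0, len(signal), spb):
            seg = signal[i:i + spb]
            t = np.arange(len(seg)) / fs
            e0 = abs(np.dot(seg, np.exp(-2j * np.pi * f0 * t)))
            e1 = abs(np.dot(seg, np.exp(-2j * np.pi * f1 * t)))
            out.append("1" if e1 > e0 else "0")
        return "".join(out)

    assert demodulate(modulate("10110010")) == "10110010"
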
Transmitting the Signal
Telecommunications systems deliver messages using a number of different
transmission media, including copper wires, fiber-optic cables, communication
satellites, and microwave radio. One way to categorize telecommunications media is
to consider whether or not the medium uses wires. Wire-based (or wireline)
telecommunications provide the initial link between most telephones and the
telephone network and are a reliable means for transmitting messages.
Telecommunications without wires, commonly referred to as wireless
communications, use technologies such as cordless telephones, cellular radio
telephones, pagers, and satellites. Wireless communications offer increased mobility
and flexibility. Some experts believe that in the future wireless devices will also offer
high-speed Internet access.
Wires and Cables
Wires and cables were the original medium for telecommunications and are still the
primary means for telephone connections. Wireline transmission evolved from
telegraph to telephone service and continues to provide the majority of
telecommunications services. Wires connect telephones together within a home or
business and also connect these telephones to the nearest telephone switching
facility.
Other wireline services employ coaxial cable, which is used by cable television to
provide hundreds of video channels to subscribers. Much of the content transmitted
by the coaxial cable of cable television systems is sent by satellite to a central
location known as the headend. Coaxial cables run from the headend throughout a
community and onward to individual residences and, finally, to individual television
sets. Because signals weaken as distance from the headend increases, the coaxial
cable network includes amplifiers that process and retransmit the television signals.
Fiber-Optic Cables
Fiber-optic cables use specially treated glass that can transmit signals in the form of
pulsed beams of laser light. Fiber-optic cables carry many times more information
than copper wires can, and they can transmit several television channels or
thousands of telephone conversations at the same time. Fiber-optic technology has
replaced copper wires for most transoceanic routes and in areas where large
amounts of data are sent. This technology uses laser transmitters to send pulses of
light via hair-thin strands of specially prepared glass fibers. New improvements
promise cables that can transmit millions of telephone calls over a single fiber.
Already, fiber-optic cables provide the high-capacity 'backbone' links necessary to
carry the enormous and growing volume of telecommunications and Internet traffic.
Radio Waves
Wireless telecommunications use radio waves, sent through space from one antenna to
another, as the medium for communication. Radio waves are used for receiving AM and
FM radio and for receiving television. Cordless telephones and wireless radio telephone
services, such as cellular radio telephones and pagers, also use radio waves. Telephone
companies use microwaves to send signals over long distances. Microwaves use higher
frequencies than the radio waves used for AM, FM, or cellular telephone transmissions,
and they can transmit larger amounts of data more efficiently. Microwaves have
characteristics similar to those of visible light waves and transmit pencil-thin beams that
can be received using dish-shaped antennas. Such narrow beams can be focused to a
particular destination and provide reliable transmissions over short distances on Earth.
Even higher and narrower beams provide the high-capacity links to and from satellites.
The high frequencies easily penetrate the ionosphere (a layer of Earth’s atmosphere that
blocks low-frequency waves) and provide a high-quality signal.
Communications Satellites
Communications satellites provide a means of transmitting telecommunications all
over the globe, without the need for a network of wires and cables. They orbit Earth
at a speed that enables them to stay above the same place on Earth at all times.
This type of orbit is called geostationary or geosynchronous orbit because the
satellite's orbital period is synchronized with Earth's rotation. The satellites
receive transmissions from Earth and transmit them back to numerous Earth station
receivers scattered within the receiving coverage area of the satellite. This relay
function makes it possible for satellites to operate as “bent pipes”—that is, wireless
transfer stations for point-to-point and point-to-multipoint transmissions.
Communications satellites are used by telephone and television companies to
transmit signals across great distances. Ship, airplane, and land navigators also
receive signals from satellites to determine geographic positions.
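The geostationary condition pins down a unique orbit, which can be checked with Kepler's third law; the following back-of-the-envelope Python sketch uses standard values for Earth's gravitational parameter and rotation period.

    import math

    GM = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
    T = 86_164.1               # one sidereal day, in seconds

    # Kepler's third law: T^2 = (4 pi^2 / GM) * r^3, solved for r.
    r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
    altitude_km = (r - 6_378_137) / 1_000      # above Earth's equatorial radius

    print(f"geostationary altitude ~ {altitude_km:,.0f} km")   # ~ 35,786 km
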
Telecommunications Systems
Individual people, businesses, and governments use many different types of
telecommunications systems. Some systems, such as the telephone system, use a
network of cables, wires, and switching stations for point-to-point communication.
Other systems, such as radio and television, broadcast radio signals over the air that
can be received by anyone who has a device to receive them. Some systems make
use of several types of media to complete a transmission. For example, a telephone
call may travel by means of copper wire, fiber-optic cable, and radio waves as the
call is sent from sender to receiver. All telecommunications systems are constantly
evolving as telecommunications technology improves. Many recent improvements,
for example, offer high-speed broadband connections that are needed to send
multimedia information over the Internet.
Telegraph
Telegraph services use both wireline and wireless media for transmissions. Soon
after the introduction of the telegraph in 1844, telegraph wires spanned the country.
Telegraph companies maintained a system of wires and offices located in numerous
cities. A message sent by telegraph was called a telegram. Telegrams were printed
on paper and delivered to the receiving party by the telegraph company. With the
invention of the radio in the early 1900s, telegraph signals could also be sent by
radio waves. Wireless telegraphy made it practical for oceangoing ships as well as
aircraft to stay in constant contact with land-based stations.
Telephone
The telephone network also uses both wireline and wireless methods to deliver voice
communications between people, and data communications between computers and
people or other computers. The part of the telephone network that currently serves
individual residences and many businesses operates in an analog mode, uses copper
wires, and relays electronic signals that are continuous, such as the human voice.
Digital transmission via fiber-optic cables is now used in some sections of the
telephone network that send large numbers of calls over long distances. However,
since the rest of the telephone system is still analog, these digital signals must be
converted back to analog before they reach users. The telephone network is stable
and reliable, because it uses its own wire system that is powered by low-voltage
direct current from the telephone company. Telephone networks modulate voice
communications over these wires. A complex system of network switches maintains
the telephone links between callers. Telephone networks also use microwave relay
stations to send calls from place to place on the ground. Satellites are used by
telephone networks to transmit telephone calls across countries and oceans.
Teletype, Telex, and Facsimile Transmission
Teletype, telex, and facsimile transmission are all methods for transmitting text
rather than sounds. These text delivery systems evolved from the telegraph.
Teletype and telex systems still exist, but they have been largely replaced by
facsimile machines, which are inexpensive and better able to operate over the
existing telephone network. The Internet increasingly provides an even more
inexpensive and convenient option. The teletype, essentially a printing telegraph, is
primarily a point-to-multipoint system for sending text. The teletype converts the
same pulses used by telegraphs into letters and numbers, and then prints out
readable text. It was often used by news media organizations to provide newspaper
stories and stock market data to subscribers. Telex is primarily a point-to-point
system that uses a keyboard to transmit typed text over telephone lines to similar
terminals situated at individual company locations. See also Office Systems:
Communications; Telegraph: Modern Telegraph Services.
Facsimile transmission now provides a cheaper and easier way to transmit text and
graphics over distances. Fax machines contain an optical scanner that converts text
and graphics into digital, or machine-readable, codes. This coded information is sent
over ordinary analog telephone lines through the use of a modem included in the fax
machine. The receiving fax machine’s modem demodulates the signal and sends it to
a printer also contained in the fax machine.
Radio
Radios transmit and receive communications at various preset frequencies. Radio
waves carry the signals heard on AM and FM radio, as well as the signals seen on a
television set receiving broadcasts from an antenna. Radio is used mostly as a public
medium, sending commercial broadcasts from a transmitter to anyone with a radio
receiver within its range, so it is known as a point-to-multipoint medium. However,
radio can also be used for private point-to-point transmissions. Two-way radios,
cordless telephones, and cellular radio telephones are common examples of
transceivers, which are devices that can both transmit and receive point-to-point
messages.
Personal radio communication is generally limited to short distances (usually a few
kilometers), but powerful transmitters can send broadcast radio signals hundreds of
kilometers. Shortwave radio, popular with amateur radio enthusiasts, uses a range of
radio frequencies that are able to bounce off the ionosphere. This electrically charged
layer of the atmosphere reflects certain frequencies of radio waves, such as
shortwave frequencies, while allowing higher-frequency waves, such as microwaves,
to pass through it. Amateur radio operators use the ionosphere to bounce their radio
signals to other radio operators thousands of kilometers away.
Television
Television is primarily a public broadcasting medium, using point-to-multipoint
technology that is broadcast to any user within range of the transmitter. Televisions
transmit news and information, as well as entertainment. Commercial television is
broadcast over very high frequency (VHF) and ultrahigh frequency (UHF) radio
waves and can be received by any television set within range of the transmitter.
Televisions have also been used for point-to-point, two-way telecommunications.
Teleconferencing, in which a television picture links two physically separated parties,
is a convenient way for businesspeople to meet and communicate without the
expense or inconvenience of travel. Video cameras on computers now allow personal
computer users to teleconference over the Internet. Videophones, which use tiny
video cameras and rely on satellite technology, can also send private or public
television images and have been used in news reporting in remote locations.
Cable television is a commercial service that links televisions to a source of many
different types of video programming using coaxial cable. The cable provider obtains
coded, or scrambled, programming from a communications satellite, as well as from
terrestrial links, including broadcast television stations. The signal may be scrambled
to prevent unpaid access to the programming. The cable provider electronically
unscrambles the signal and supplies the decoded signals by cable to subscribers.
Television users with personal satellite dishes can access satellite programming
directly without a cable installation. Personal satellite dishes are also a subscriber
service. Fees are paid to the network operator in return for access to the satellite
channels.
Most television sets outside of the United States use different standards for receiving
video signals. The European Phase Alternating Line (PAL) standard generates a
higher-resolution picture than the sets used in
the United States, but these television sets are more expensive. Manufacturers now
offer digital video and audio signal processing, which features even higher picture
resolution and sound quality. The shape of the television screen is changing as well,
reflecting the aspect ratio (ratio of image height to width) used for movie
presentation.
Voice Over Internet Protocol (VOIP)
Voice Over Internet Protocol (VOIP) is a method for making telephone calls over the
Internet by sending voice data in separate packets, just as e-mail is sent. Each
packet is assigned a code for its destination, and the packets are then reassembled
in the correct order at the receiving end. Recent technological improvements have
made VOIP almost as seamless and smooth as a regular telephone call.
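The packet mechanics are easy to mimic. In the toy sketch below, the function names and the four-byte packet size are invented for illustration: voice bytes are cut into numbered packets, shuffled to simulate out-of-order arrival, and restored by sorting on the sequence numbers.

    import random

    def packetize(voice_bytes, size=4):
        """Cut the voice data into (sequence number, chunk) packets."""
        return [(seq, voice_bytes[i:i + size])
                for seq, i in enumerate(range(0, len(voice_bytes), size))]

    def reassemble(packets):
        """Sort by sequence number to restore the original order."""
        return b"".join(chunk for _, chunk in sorted(packets))

    packets = packetize(b"hello, this is a voice sample")
    random.shuffle(packets)                  # simulate out-of-order arrival
    assert reassemble(packets) == b"hello, this is a voice sample"
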
In February 2004 the Federal Communications Commission (FCC) ruled that VOIP,
like e-mail and instant messaging, is free of government regulation as long as it
involves communication from one computer to another. The FCC did not rule on
whether VOIP software that sends voice data from a computer directly to a regular
telephone should be regulated. Such services became available in the early part of
the 21st century and are expected to become widely available. They require a
broadband connection to the Internet but can reduce telephone charges significantly
while also offering for free additional services such as call waiting, caller
identification, voice mail, and the ability to call from your home telephone number
wherever you travel.
The Emergence of Broadcasting
Telephones and telegraphs are primarily private means of communications, sending
signals from one point to another, but with the invention of the radio, public
communications, or point-to-multipoint signals, could be sent through a central
transmitter to be received by anyone possessing a receiver. Italian inventor and
electrical engineer Guglielmo Marconi transmitted a Morse-code telegraph signal by
radio in 1895. This began a revolution in wireless telegraphy that would later result
in broadcast radios that could transmit actual voice and music. Radio and wireless
telegraph communication played an important role during World War I (1914-1918),
allowing military personnel to communicate instantly with troops in remote locations.
United States president Woodrow Wilson was impressed with the ability of radio, but
he was fearful of its potential for espionage use. He banned nonmilitary radio use in
the United States as the nation entered World War I in 1917, and this stifled
commercial development of the medium. After the war, however, commercial radio
stations began to broadcast. By the mid-1920s, millions of radio listeners tuned in to
music, news, and entertainment programming.
Television got its start as a mass-communication medium shortly after World War II
(1939-1945). The expense of television transmission prevented its use as a two-way
medium, but radio broadcasters quickly saw the potential for television to provide a
new way of bringing news and entertainment programming to people. For more
information on the development of radio and television, see Radio and Television
Broadcasting.
Government Regulation
The number of radio broadcasts grew quickly in the 1920s, but there was no
regulation of frequency use or transmitter strength. The result was a crowded radio
band of overlapping signals. To remedy this, the U.S. government created the
Federal Communications Commission (FCC) in 1934 to regulate the spreading use of
the broadcast spectrum. The FCC licenses broadcasters and regulates the location
and transmitting strength, or range, stations have in an effort to prevent
interference from nearby signals.
The FCC and the U.S. government have also assumed roles in limiting the types of
business practices in which telecommunications companies can engage. The U.S.
Department of Justice filed an antitrust lawsuit against AT&T Corp., arguing that the
company used its monopoly position to stifle competition, particularly through its
control over local telephone service facilities. The lawsuit was settled in 1982, and
AT&T agreed to divest its local telephone companies, thereby creating seven new
independent companies.
In 1996 the U.S. government enacted the Telecommunications Reform Act to further
encourage competition in the telecommunications marketplace. This legislation
removed government rules preventing local and long-distance phone companies,
cable television operators, broadcasters, and wireless services from directly
competing with one another. The act spurred consolidation in the industry, as
regional companies joined forces to create telecommunications giants that provided
telephone, wireless, cable, and Internet services.
Deregulation, however, also led to overproduction of fiber optic cable and a steep
decline in the fortunes of the telecommunications industry beginning in 2000. The
increased competition provided the backdrop for the bankruptcy of a leading
telecommunications company, WorldCom, Inc., in 2002, when it admitted to the
largest accounting fraud in the history of U.S. business.
International Telecommunications Networks
In order to provide overseas telecommunications, people had to develop networks
that could link widely separated nations. The first networks to provide such linkage
were telegraph networks that used undersea cables, but these networks could
provide channels for only a few simultaneous communications. Shortwave radio also
made it possible for wireless transmissions of both telegraphy and voice over very
long distances.
To take advantage of the wideband capability of satellites to provide
telecommunications service, companies from all over the world pooled resources and
shared risks by creating a cooperative known as the International
Telecommunications Satellite Organization, or Intelsat, in 1964. Transoceanic
satellite telecommunications first became possible in 1965 with the successful launch
of Early Bird, also known as Intelsat 1. Intelsat 1 provided the first international
television transmission and had the capacity to handle one television channel or 240
simultaneous telephone calls.
Intelsat later expanded and diversified to meet the global and regional satellite
requirements of more than 200 nations and territories. In response to private
satellite ventures entering the market, the managers of Intelsat converted the
cooperative into a private corporation better able to compete with these emerging
companies. The International Mobile Satellite Organization (Inmarsat) primarily
provided service to oceangoing vessels when it first formed as a cooperative in 1979,
but it later expanded operations to include service to airplanes and users in remote
land areas not served by cellular radio or wireline services. Inmarsat became a
privatized, commercial venture in 1999.
Current Developments
Personal computers have pushed the limits of the telephone system as more and
more complex computer messages are being sent over telephone lines, and at
rapidly increasing speeds. This need for speed has encouraged the development of
digital transmission technology. The growing use of personal computers for
telecommunications has increased the need for innovations in fiber-optic technology.
Telecommunications and information technologies are merging and converging. This
means that many of the devices now associated with only one function may evolve
into more versatile equipment. This convergence is already happening in various
fields. Some telephones and pagers are able to store not only phone numbers but
also names and personal information about callers. Wireless phones with keyboards
and small screens can access the Internet and send and receive e-mail messages.
Personal computers can now access information and video entertainment and are in
effect becoming a combined television set and computer terminal. Television sets can
access the Internet through add-on appliances. Future modifications and technology
innovations may blur the distinctions between appliances even more.
Convergence of telecommunications technologies may also trigger a change in the
kind of content available. Both television and personal computers are likely to
incorporate new multimedia, interactive, and digital features. However, in the near
term, before the actualization of a fully digital telecommunications world, devices
such as modems will still be necessary to provide an essential link between the old
analog world and the upcoming digital one.
Society and Telecommunication
Telecommunication is an important part of many modern societies. In 2006, estimates
place the telecommunication industry's revenue at $1.2 trillion or just under 3% of the
gross world product. Good telecommunication infrastructure is widely acknowledged as
important for economic success in the modern world on both the micro- and
macroeconomic scale.
On the microeconomic scale, companies have used telecommunication to help build
global empires. This is self-evident in the business of online retailer Amazon.com, but
even the conventional retailer Wal-Mart has benefited from superior telecommunication
infrastructure compared to its competitors. In modern Western society, home owners
often use their telephone to organize many home services ranging from pizza deliveries
to electricians. Even relatively poor communities have been noted to use
telecommunication to their advantage. In Bangladesh's Narshingdi district, isolated
villagers use cell phones to speak directly to wholesalers and arrange a better price for
their goods. In Côte d'Ivoire, coffee growers share mobile phones to follow hourly
variations in coffee prices and sell at the best price. With respect to the macroeconomic
scale, Lars-Hendrik Röller and Leonard Waverman suggested a causal link between good
telecommunication infrastructure and economic growth in 2001. Few dispute the
existence of a correlation although some argue it is wrong to view the relationship as
causal.
Due to the economic benefits of good telecommunication infrastructure there is
increasing worry about the digital divide. This stems from the fact that the world's
population does not have equal access to telecommunication systems. A 2003 survey by
the International Telecommunication Union (ITU) revealed that roughly one-third of
countries have less than 1 mobile subscription for every 20 people and one-third of
countries have less than 1 fixed line subscription for every 20 people. In terms of Internet
access, roughly half of countries have less than 1 in 20 people with Internet access. From
this information, as well as educational data, the ITU was able to compile a Digital
Access Index that measures the overall ability of citizens to access and use information
and communication technologies. Using this measure, countries such as Sweden,
Denmark and Iceland receive the highest ranking while African countries such as Niger,
Burkina Faso and Mali receive the lowest.
Chapter No: 2
Introduction to Telecommunication Network Architecture
In the field of telecommunications, a communication protocol is the set of standard rules for
data representation, signalling, authentication and error detection required to send
information over a communications channel. An example of a simple communications
protocol adapted to voice communication is the case of a radio dispatcher talking to mobile
stations. The communication protocols for digital computer network communication have
many features intended to ensure reliable interchange of data over an imperfect
communication channel. A communication protocol is, in essence, an agreed set of rules that
both ends follow so that the system works properly.
Network protocol design principles
Systems engineering principles have been applied to create a set of common network
protocol design principles. These principles include effectiveness, reliability, and resiliency.
Effectiveness
A protocol needs to be specified in such a way that engineers, designers, and in some cases software
developers can implement and/or use it. In human-machine systems, its design needs to
facilitate routine usage by humans. Protocol layering accomplishes these objectives by
dividing the protocol design into a number of smaller parts, each of which performs closely
related sub-tasks, and interacts with other layers of the protocol only in a small number of
well-defined ways.
Protocol layering allows the parts of a protocol to be designed and tested without a
combinatorial explosion of cases, keeping each design relatively simple. The implementation
of a sub-task on one layer can make assumptions about the behavior and services offered by
the layers beneath it. Thus, layering enables a "mix-and-match" of protocols that permit
familiar protocols to be adapted to unusual circumstances.
For an example that involves computing, consider an email protocol like the Simple Mail
Transfer Protocol (SMTP). An SMTP client can send messages to any server that conforms
to SMTP's specification. Actual applications can be (for example) an aircraft with an SMTP
server receiving messages from a ground controller over a radio-based internet link. Any
SMTP client can correctly interact with any SMTP server, because they both conform to the
same protocol specification, RFC 2821.
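In Python, for instance, a conforming client is only a few lines using the standard smtplib module; the host name and addresses below are placeholders rather than real endpoints.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "ground.controller@example.com"    # placeholder address
    msg["To"] = "aircraft@example.com"               # placeholder address
    msg["Subject"] = "Route update"
    msg.set_content("Proceed direct to waypoint ALPHA.")

    # Any server that conforms to the SMTP specification will accept this.
    with smtplib.SMTP("mail.example.com", 25) as server:  # placeholder host
        server.send_message(msg)
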
This paragraph informally provides some examples of layers, some required functionalities,
and some protocols that implement them, all from the realm of computing protocols.
At the lowest level, bits are encoded in electrical, light or radio signals by the Physical
layer. Some examples include RS-232, SONET, and WiFi.
A somewhat higher Data link layer such as the point-to-point protocol (PPP) may
detect errors and configure the transmission system.
An even higher protocol may perform network functions. One very common
protocol is the Internet protocol (IP), which implements addressing for a large set of
protocols. A common associated protocol is the Transmission control protocol
(TCP) which implements error detection and correction (by retransmission). TCP
and IP are often paired, giving rise to the familiar acronym TCP/IP.
A layer in charge of presentation might describe how to encode text (i.e., ASCII or
Unicode).
An application protocol like SMTP may (among other things) describe how to
inquire about electronic mail messages.
These different tasks show why there's a need for a software architecture or reference model
that systematically places each task into context.
The reference model usually used for protocol layering is the OSI seven layer model, which
can be applied to any protocol, not just the OSI protocols of the International Organization
for Standardization (ISO). In particular, the Internet Protocol can be analysed using the OSI
model.
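One way to picture layering is as nested encapsulation: each layer wraps the data handed down from the layer above with its own header, and the receiving side peels the headers off in reverse order. The sketch below uses invented text labels as stand-ins for real TCP/IP header formats (Python 3.9+ is assumed for bytes.removeprefix).

    def send(application_data: bytes) -> bytes:
        segment = b"TCP|" + application_data     # transport-layer header
        packet = b"IP|" + segment                # network-layer header
        frame = b"PPP|" + packet                 # data-link-layer header
        return frame                             # handed to the physical layer

    def receive(frame: bytes) -> bytes:
        packet = frame.removeprefix(b"PPP|")     # unwrap in reverse order
        segment = packet.removeprefix(b"IP|")
        return segment.removeprefix(b"TCP|")

    data = b"MAIL FROM:<a@example.com>"
    assert receive(send(data)) == data
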
Reliability
Assuring reliability of data transmission involves error detection and correction, or some
means of requesting retransmission. It is a truism that communication media are always
faulty. The conventional measure of quality is the number of failed bits per bits transmitted.
This has the useful feature of being a dimensionless figure of merit that can be compared
across any speed or type of communication media.
In telephony, links with bit error rates (BER) of 10⁻⁴ or more are regarded as faulty (they
interfere with telephone conversations), while links with a BER of 10⁻⁵ or more should be
dealt with by routine maintenance (they can be heard).
Data transmission often requires bit error rates below 10⁻¹². Computer data transmissions are
so frequent that larger error rates would affect operations of customers like banks and stock
exchanges. Since most transmissions use networks with telephonic error rates, the errors
caused by these networks must be detected and then corrected.
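A quick calculation makes the point; the one-megabyte transfer size is an arbitrary example.

    # Expected bit errors in a 1 MB transfer at different bit error rates.
    one_mb = 8 * 1024 * 1024                     # one megabyte, in bits

    for ber in (1e-4, 1e-5, 1e-12):
        expected = ber * one_mb                  # mean number of corrupted bits
        clean = (1 - ber) ** one_mb              # chance the transfer is error-free
        print(f"BER {ber:g}: ~{expected:.3g} errors, "
              f"P(error-free) = {clean:.6f}")
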
Communications systems detect errors by transmitting a summary of the data with the data.
In TCP (the Internet's Transmission Control Protocol), the sum of the data bytes of the packet
is sent in each packet's header. Simple arithmetic sums do not detect out-of-order data, or
cancelling errors. A bit-wise binary polynomial, a cyclic redundancy check, can detect these
errors and more, but is slightly more expensive to calculate.
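The difference is easy to demonstrate: a plain sum is blind to byte order, while a CRC is position-sensitive. The check below uses the CRC-32 implementation from Python's standard zlib module as the example polynomial.

    import zlib

    def arithmetic_sum(data: bytes) -> int:
        return sum(data) & 0xFFFF          # simple 16-bit wraparound sum

    original = b"\x01\x02\x03\x04"
    swapped = b"\x02\x01\x03\x04"          # same bytes, out of order

    assert arithmetic_sum(original) == arithmetic_sum(swapped)  # sum misses it
    assert zlib.crc32(original) != zlib.crc32(swapped)          # CRC catches it
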
Communication systems correct errors by selectively resending bad parts of a message. For
example, in TCP when a checksum is bad, the packet is discarded. When a packet is lost, the
receiver acknowledges all of the packets up to, but not including the failed packet.
Eventually, the sender sees that too much time has elapsed without an acknowledgement, so
it resends all of the packets that have not been acknowledged. At the same time, the sender
backs off its rate of sending, in case the packet loss was caused by saturation of the path
between sender and receiver. (Note: this is an over-simplification: see TCP and congestion
collapse for more detail)
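A toy model of that loop, with random loss standing in for a real network and all parameters invented for illustration, might look like this:

    import random

    def send_reliably(packets, loss_rate=0.2, seed=7):
        """Resend on (simulated) timeout, backing off the sending rate."""
        rng = random.Random(seed)
        delay = 1.0                              # abstract send interval
        delivered = []
        for seq, data in enumerate(packets):
            while True:
                if rng.random() > loss_rate:     # packet and ACK got through
                    delivered.append(data)
                    delay = max(1.0, delay * 0.9)   # slowly recover the rate
                    break
                delay *= 2                       # timeout: back off and resend
        return delivered

    assert send_reliably([b"a", b"b", b"c"]) == [b"a", b"b", b"c"]
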
In general, the performance of TCP is severely degraded in conditions of high packet loss
(more than 0.1%), due to the need to resend packets repeatedly. For this reason, TCP/IP
connections are typically either run on highly reliable fiber networks, or over a lower-level
protocol with added error-detection and correction features (such as modem links with
ARQ). These connections typically have uncorrected bit error rates of 10⁻⁹ to 10⁻¹², ensuring
high TCP/IP performance.
Resiliency
Resiliency addresses a form of network failure known as topological failure, in which a
communications link is cut or degrades below usable quality. Most modern communication
protocols periodically send messages to test a link. On T1 telephone lines, for example, a
framing bit is sent in every 193-bit frame (one framing bit per 192 data bits). In phone
systems, when "sync is lost", fail-safe mechanisms reroute the
signals around the failing equipment.
In packet switched networks, the equivalent functions are performed using router update
messages to detect loss of connectivity.
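The supervision idea reduces to "probe periodically, act after several consecutive misses". A sketch, with the probe behaviour and thresholds invented for the example:

    import time

    def supervise(probe, interval=0.1, max_misses=3):
        """Declare the link down after max_misses consecutive failed probes."""
        misses = 0
        while misses < max_misses:
            misses = 0 if probe() else misses + 1
            time.sleep(interval)
        print("link down: reroute around the failing equipment")

    # Example: a link that answers five probes and then goes silent.
    answers = iter([True] * 5)
    supervise(lambda: next(answers, False))
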
Standards organizations
Most recent protocols are assigned by the IETF for Internet communications, and the
IEEE, or the ISO organizations for other types. The ITU-T handles telecommunications
protocols and formats for the public switched telephone network (PSTN). The ITU-R
handles protocols and formats for radio communications. As the PSTN, radio systems, and
Internet converge, the different sets of standards are also being driven towards technological
convergence.
Protocol families
A number of major protocol stacks or families exist, including the following:
Open standards:
Internet protocol suite (TCP/IP)
Open Systems Interconnection (OSI)
FTP
UPnP (Universal Plug and Play)
iSCSI
NFS
Proprietary standards:
AppleTalk
DECnet
IPX/SPX
Server Message Block (SMB) (formerly known as CIFS; see also Samba)
Systems Network Architecture (SNA)
Distributed Systems Architecture (DSA)
AFP
RSYNC
Unison
See also
Protocol (computing)
Connection-oriented protocol
Connectionless protocol
List of network protocols
Network architecture
Congestion collapse
Tunneling protocol
Computer Network
A computer network is multiple computers connected together using a telecommunication system
for the purpose of communicating and sharing resources.
Experts in the field of networking debate whether two computers that are connected
together using some form of communications medium constitute a network. Therefore,
some works state that a network requires three connected computers. For example,
"Telecommunications: Glossary of Telecommunication Terms" states that a computer
network is "A network of data processing nodes that are interconnected for the purpose of
data communication", the term "network" being defined in the same document as "An
interconnection of three or more communicating entities". A computer connected to a
non-computing device (e.g., networked to a printer via an Ethernet link) may also represent a
computer network, although this article does not address this configuration.
This article uses the definition which requires two or more computers to be connected
together to form a network. Therefore, it does not cover an intranet hosted on a single
machine, since that need not involve two or more connected computers. The same basic
functions are generally present in this case as with larger numbers of connected computers.
Basics
A computer network may be described as the interconnection of two or more computers
that may share files and folders, applications, or resources like printers, scanners, web-cams
etc. The Internet is also a type of computer network which connects millions of computers
around the world.
Protocols
A protocol is a set of rules and conventions about the communication in the network. A
protocol mainly defines the following:
1. Syntax: Defines the structure or format of data.
2. Semantics: Defines the interpretation of data being sent.
3. Timing: Refers to an agreement between a sender and a receiver about the
transmission.
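To make the three definitions concrete, here is a tiny invented protocol: the fixed header layout plays the role of syntax, the message-type field carries the semantics, and timing would govern when such frames may be sent and acknowledged. Python's standard struct module supplies the fixed layout.

    import struct

    # Syntax: 2-byte version, 2-byte message type, 4-byte payload length
    # (all big-endian), followed by the payload itself.
    HEADER = struct.Struct("!HHI")

    def encode(version: int, msg_type: int, payload: bytes) -> bytes:
        return HEADER.pack(version, msg_type, len(payload)) + payload

    def decode(frame: bytes):
        version, msg_type, length = HEADER.unpack(frame[:HEADER.size])
        # Semantics: msg_type tells the receiver how to interpret the payload.
        return version, msg_type, frame[HEADER.size:HEADER.size + length]

    assert decode(encode(1, 7, b"hello")) == (1, 7, b"hello")
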
Building a computer network
A simple computer network may be constructed from two computers by adding a
network adapter (Network Interface Controller (NIC)) to each computer and then
connecting them together with a special cable called a crossover cable. This type of
network is useful for transferring information between two computers that are not
normally connected to each other by a permanent network connection or for basic
home networking applications. Alternatively, a network between two computers can
be established without dedicated extra hardware by using a standard connection such
as the RS-232 serial port on both computers, connecting them to each other via a
special crosslinked null modem cable.
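Once the two machines are linked by any of these means, software can exchange data across the connection. The socket sketch below runs both ends in one process for convenience; on real hardware the server half would run on one computer and the client half on the other, using the first machine's address (the port number is arbitrary).

    import socket
    import threading
    import time

    def server():
        # On a real two-computer link this would run on the first machine.
        with socket.create_server(("127.0.0.1", 50007)) as srv:
            conn, _ = srv.accept()
            with conn:
                conn.sendall(b"hello from computer A")

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)                       # give the server a moment to listen

    # ... and this on the second machine, using the first machine's address.
    with socket.create_connection(("127.0.0.1", 50007)) as client:
        print(client.recv(1024))          # b'hello from computer A'
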
Practical networks generally consist of more than two interconnected computers and
generally require special devices in addition to the Network Interface Controller that
each computer needs to be equipped with. Examples of some of these special
devices are listed above under Basic Computer Network Building Blocks /
networking devices.
Types of networks:
Below is a list of the most common types of computer networks.
A personal area network (PAN):
A personal area network (PAN) is a computer network used for communication
among computer devices (including telephones and personal digital assistants) close
to one person. The devices may or may not belong to the person in question. The
reach of a PAN is typically a few meters. PANs can be used for communication
among the personal devices themselves (intrapersonal communication), or for
connecting to a higher level network and the Internet (an uplink).
Personal area networks may be wired with computer buses such as USB and
FireWire. A wireless personal area network (WPAN) can also be made possible with
network technologies such as IrDA and Bluetooth.
Campus Area Network (CAN):
A network that connects two or more LANs but that is limited to a specific (possibly
private) geographical area such as a college campus, industrial complex, or a military
base.
Note: A CAN is generally limited to an area that is smaller than a Metropolitan Area
Network.
Metropolitan Area Network (MAN):
A network that connects two or more Local Area Networks or CANs together but
does not extend beyond the boundaries of the immediate town, city, or metropolitan
area. Multiple routers, switches, and hubs are connected to create a MAN.
Wide Area Networks (WAN):
A WAN is a data communications network that covers a relatively broad geographic
area and that often uses transmission facilities provided by common carriers, such as
telephone companies. WAN technologies generally function at the lower three layers
of the OSI reference model: the physical layer, the data link layer, and the network
layer.
Types of WANs:
Centralized:
A centralized WAN consists of a central computer that is connected to dumb
terminals and/or other types of terminal devices.
Distributed:
A distributed WAN consists of two or more computers in different locations and
may also include connections to dumb terminals and other types of terminal devices.
Internetwork:
Two or more networks or network segments connected using devices that operate at
layer 3 (the 'network' layer) of the OSI Basic Reference Model, such as a router.
Note: Any interconnection among or between public, private, commercial, industrial,
or governmental networks may also be defined as an internetwork.
Internet, The:
A specific internetwork, consisting of a worldwide interconnection of governmental,
academic, public, and private networks based upon the Advanced Research Projects
Agency Network (ARPANET) developed by ARPA of the U.S. Department of
Defense – also home to the World Wide Web (WWW) and referred to as the
'Internet' with a capital 'I' to distinguish it from other generic internetworks.
Extranet:
A network or internetwork that is limited in scope to a single organization or entity
but which also has limited connections to the networks of one or more other usually,
but not necessarily, trusted organizations or entities (e.g., a company's customers may
be provided access to some part of its intranet thus creating an extranet while at
the same time the customers may not be considered 'trusted' from a security
standpoint).
Note: Technically, an extranet may also be categorized as a CAN, MAN, WAN, or
other type of network, although, by definition, an extranet cannot consist of a single
LAN, because an extranet must have at least one connection with an outside
network.
Intranets and extranets may or may not have connections to the Internet. If
connected to the Internet, the intranet or extranet is normally protected from being
accessed from the Internet without proper authorization. The Internet itself is not
considered to be a part of the intranet or extranet, although the Internet may serve as
a portal for access to portions of an extranet.
Classification of computer networks
By network layer
Computer networks may be classified according to the network layer at which they operate
according to some basic reference models that are considered to be standards in the industry
such as the seven layer OSI reference model and the five layer TCP/IP model.
By scale
Computer networks may be classified according to the scale or extent of reach of the
network, for example as a Personal area network (PAN), Local area network (LAN),
Campus area network (CAN), Metropolitan area network (MAN), or Wide area network
(WAN).
By connection method
Computer networks may be classified according to the technology that is used to connect
the individual devices in the network such as HomePNA, Power line communication,
Ethernet, or Wireless LAN.
By functional relationship
Computer networks may be classified according to the functional relationships which exist
between the elements of the network, for example Active Networking, Client-server and
Peer-to-peer (workgroup) architectures. Computer networks are also used simply to send
data from one computer to another.
By network topology
Computer networks may be classified according to the network topology upon which the
network is based, such as Bus network, Star network, Ring network, Mesh network, Star-bus
network, Tree or Hierarchical topology network, etc.
Topologies can be thought of as geometric arrangements, but network topologies are logical
layouts of the network. The word "logical" here is significant: a network topology does not
depend on the "physical" layout of the network. Even if the computers on a network are
placed in a straight line, if they are connected via a hub they form a star topology, not a bus
topology. This is an important respect in which networks differ, both visually and
operationally.
By services provided
Computer networks may be classified according to the services which they provide, such as
Storage area networks, Server farms, Process control networks, Value-added network,
Wireless community network, etc.
By protocol
Computer networks may be classified according to the communications protocol that is
being used on the network. See the articles on List of network protocol stacks and List of
network protocols for more information.
Network Architecture
In telecommunication, the term network architecture has the following meanings:
1. The design principles, physical configuration, functional organization, operational
procedures, and data formats used as the bases for the design, construction, modification,
and operation of a communications network.
2. The structure of an existing communications network, including the physical configuration,
facilities, operational structure, operational procedures, and the data formats in use.
With the development of distributed computing, the term network architecture has also
come to denote classifications and implementations of distributed computing architectures.
For example, the applications architecture of the telephone network PSTN has been termed
the Advanced Intelligent Network. There are any number of specific classifications but all lie
on a continuum between the dumb network (e.g. Internet) and the intelligent computer
network (e.g. the telephone network PSTN). Other networks contain various elements of
these two classical types to make them suitable for various types of applications. Recently the
context aware network which is a synthesis of the two has gained much interest with its
ability to combine the best elements of both.
Intelligent Network
An intelligent network is a computer network in which the network is in control of
application creation and operation. Relatively dumb terminals and devices on the network
periphery access centralized network services on behalf of their users. The owners of the
network are in complete charge of the type and quantity of applications that exist on the
network.
An intelligent network is most suited for applications in which reliability and security are
prime requirements. Application software is centralized and so can be rigorously verified
before deployment. The large scale of the network and the ability to verify application
operation allows such networks to address very complicated tasks. The costs of development
and testing may be spread across many users.
The intelligent network architecture is at one extreme of a continuum of network
architectures. It should be contrasted with the dumb network architecture.
Distributed Computing is a method of computer processing in which different parts of a
program run simultaneously on two or more computers that are communicating with each
other over a network. Distributed computing is a type of parallel computing. But the latter
term is most commonly used to refer to processing in which different parts of a program run
simultaneously on two or more processors that are part of the same computer. While both
types of processing require that a program be parallelized—divided into sections that can
run simultaneously, distributed computing also requires that the division of the program take
into account the different environments on which the different sections of the program will
be running. For example, two computers are likely to have different file systems and
different hardware components.
An example of distributed computing is BOINC, a framework in which large problems can
be divided into many small problems which are distributed to many computers. Later, the
small results are reassembled into a larger solution.
Distributed computing is a natural result of the use of networks to allow computers to
efficiently communicate. But distributed computing is distinct from networking. The latter
refers to two or more computers interacting with each other, but not, typically, sharing the
processing of a single program. The World Wide Web is an example of a network, but not
an example of distributed computing.
There are numerous technologies and standards used to construct distributed computations,
including some which are specially designed and optimized for that purpose, such as Remote
Procedure Calls (RPC) or Remote Method Invocation (RMI) or .NET Remoting.
Organization
Organizing the interaction between each computer is of prime importance. In order to be
able to use the widest possible range and types of computers, the protocol or
communication channel should not contain or use any information that may not be
understood by certain machines. Special care must also be taken that messages are indeed
delivered correctly and that invalid messages are rejected which would otherwise bring down
the system and perhaps the rest of the network.
Another important factor is the ability to send software to another computer in a portable
way so that it may execute and interact with the existing network. This may not always be
possible or practical when using differing hardware and resources, in which case other
methods must be used such as cross-compiling or manually porting this software.
Goals and Advantages
There are many different types of distributed computing systems and many challenges to
overcome in successfully designing one. The main goal of a distributed computing system is
to connect users and resources in a transparent, open, and scalable way. Ideally this
arrangement is drastically more fault tolerant and more powerful than many combinations of
stand-alone computer systems.
Openness
Openness is the property of distributed systems such that each subsystem is continually
open to interaction with other systems. Web Services protocols are
standards which enable distributed systems to be extended and scaled. In general, an open
system that scales has an advantage over a perfectly closed and self-contained system.
Consequently, open distributed systems are required to meet the following challenges:
Monotonicity
Once something is published in an open system, it cannot be taken back.
Pluralism
Different subsystems of an open distributed system include heterogeneous,
overlapping and possibly conflicting information. There is no central arbiter of truth
in open distributed systems.
Unbounded nondeterminism
Asynchronously, different subsystems can come up and go down and
communication links can come in and go out between subsystems of an open
distributed system. Therefore the time that it will take to complete an operation
cannot be bounded in advance.
Scalability
A scalable system is one that can easily be altered to accommodate changes in the number of
users, resources, and computing entities assigned to it. Scalability can be measured along
three different dimensions:
Load scalability
A distributed system should make it easy for us to expand and contract its resource
pool to accommodate heavier or lighter loads.
Geographic scalability
A geographically scalable system is one that maintains its usefulness and usability,
regardless of how far apart its users or resources are.
Administrative scalability
No matter how many different organizations need to share a single distributed
system, it should still be easy to use and manage.
Some loss of performance may occur in a system that allows itself to scale in one or more of
these dimensions.
Drawbacks and Disadvantages
If not planned properly, a distributed system can decrease the overall reliability of
computations if the unavailability of a node can cause disruption of the other nodes. Leslie
Lamport famously quipped that: "A distributed system is one in which the failure of a
computer you didn't even know existed can render your own computer unusable."[1]
Troubleshooting and diagnosing problems in a distributed system can also become more
difficult, because the analysis may require connecting to remote nodes or inspecting
communication between nodes.
Many types of computation are not well suited for distributed environments, typically owing
to the amount of network communication or synchronization that would be required
between nodes. If bandwidth, latency, or communication requirements are too significant,
then the benefits of distributed computing may be negated and the performance may be
worse than a non-distributed environment.
Architecture
Various hardware and software architectures are used for distributed computing. At a lower
level, it is necessary to interconnect multiple CPUs with some sort of network, regardless of
whether that network is printed onto a circuit board or made up of loosely-coupled devices
and cables. At a higher level, it is necessary to interconnect processes running on those
CPUs with some sort of communication system.
Distributed programming typically falls into one of several basic architectures or categories:
Client-server, 3-tier architecture, N-tier architecture, Distributed objects, loose coupling, or
tight coupling.
Client-server — Smart client code contacts the server for data, then formats and
displays it to the user. Input at the client is committed back to the server when it
represents a permanent change.
3-tier architecture — Three tier systems move the client intelligence to a middle tier
so that stateless clients can be used. This simplifies application deployment. Most
web applications are 3-Tier.
N-tier architecture — N-Tier refers typically to web applications which further
forward their requests to other enterprise services. This type of application is the one
most responsible for the success of application servers.
Tightly coupled (clustered) — refers typically to a set of highly integrated machines
that run the same process in parallel, subdividing the task in parts that are made
individually by each one, and then put back together to make the final result.
Peer-to-peer — an architecture where there is no special machine or machines that
provide a service or manage the network resources. Instead all responsibilities are
uniformly divided among all machines, known as peers. Peers can serve both as
clients and servers.
Concurrency
Distributed computing implements a kind of concurrency. It is so closely interrelated with
concurrent programming that the two are sometimes not taught as distinct subjects.
Multiprocessor Systems
A multiprocessor system is simply a computer that has more than one CPU on its
motherboard. If the operating system is built to take advantage of this, it can run different
processes (or different threads belonging to the same process) on different CPUs.
Multicore Systems
Intel CPUs from the late Pentium 4 era (Northwood and Prescott cores) employed a
technology called Hyperthreading that allowed more than one thread (usually two) to run on
the same CPU. The more recent Sun UltraSPARC T1, AMD Athlon 64 X2, AMD Athlon
FX, AMD Opteron, Intel Pentium D, Intel Core, Intel Core 2 and Intel Xeon processors
feature multiple processor cores to also increase the number of concurrent threads they can
run.
Multicomputer Systems
A multicomputer may be considered to be either a loosely coupled NUMA computer or a
tightly coupled cluster. Multicomputers are commonly used when strong compute power is
required in an environment with restricted physical space or electrical power.
Common suppliers include Mercury Computer Systems, CSPI, and SKY Computers.
Common uses include 3D medical imaging devices and mobile radar.
Computing Taxonomies
The types of distributed systems are based on Flynn's taxonomy of systems; single
instruction, single data (SISD), single instruction, multiple data (SIMD), multiple instruction,
single data (MISD), and multiple instruction, multiple data (MIMD). Other taxonomies and
architectures are described in the general literature on computer architecture.
Computer Clusters
A cluster consists of multiple stand-alone machines acting in parallel across a local high
speed network. Distributed computing differs from cluster computing in that computers in a
distributed computing environment are typically not exclusively running "group" tasks,
whereas clustered computers are usually much more tightly coupled. Distributed computing
also often consists of machines which are widely separated geographically.
Grid Computing
A grid uses the resources of many separate computers connected by a network (usually the
Internet) to solve large-scale computation problems. Most use idle time on many thousands
of computers throughout the world. Such arrangements permit handling of data that would
otherwise require the power of expensive supercomputers or would have been impossible to
analyze.
Languages
Nearly any programming language that has access to the full hardware of the system could
handle distributed programming given enough time and code. Remote procedure calls
distribute operating system commands over a network connection. Systems like CORBA,
Microsoft D/COM, Java RMI and others, try to map object oriented design to the network.
Loosely coupled systems communicate through intermediate documents that are typically
human-readable (e.g., XML, HTML, SGML, X.500, and EDI).
Languages specifically tailored for distributed programming are:
Ada
Alef
E
Erlang
Limbo
Oz
ZPL
Computer Network
A computer network is multiple computers connected together using a telecommunication
system for the purpose of communicating and sharing resources.
Experts in the field of networking debate whether two computers that are connected together
using some form of communications medium constitute a network. Some works state
that a network requires at least three connected computers. For example, "Telecommunications:
Glossary of Telecommunication Terms" states that a computer network is "A network of
data processing nodes that are interconnected for the purpose of data communication", the
term "network" being defined in the same document as "An interconnection of three or
more communicating entities". A computer connected to a non-computing device (e.g.,
networked to a printer via an Ethernet link) may also represent a computer network,
although this article does not address this configuration.
This text uses the definition which requires two or more computers to be connected
together to form a network. The same basic functions are generally present in a
two-computer network as with larger numbers of connected computers.
Basics
A computer network may be described as the interconnection of two or more computers
that may share files and folders, applications, or resources like printers, scanners, web-cams
etc. The Internet is itself a type of computer network, one which connects computers all
over the world.
Protocols
A protocol is a set of rules and conventions about the communication in the network. A
protocol mainly defines the following:
Syntax: Defines the structure or format of data.
Semantics: Defines the interpretation of data being sent.
Timing: Refers to an agreement between a sender and a receiver about the transmission.
Standards Organizations
Various standards organizations for data communication exist today. They are broadly
classified into three categories:
Standards Creation Committees
Forums
Regulatory Agencies
Standards Creation Committees
Some important organizations in this category are:
International Organization for Standardization (ISO; also known as International
Standards Organization)
A multinational standards body
International Telecommunications Union - Telecommunication Standards Sector
(ITU-T)
Formerly the CCITT. Develops international telecommunication standards under the United Nations.
American National Standards Institute (ANSI)
The national standards body of the United States
Institute of Electrical and Electronic Engineers (IEEE)
Largest professional engineering body in the world. Oversees the development and
adoption of international electrical and electronic standards.
Electronic Industries Alliance (EIA; formerly Electronic Industries Association)
Aligned with ANSI. Focuses on public awareness of, and lobbying for, standards.
Forums
University students, user groups, industry representatives and experts come together and set
up forums to address various issues and concerns of data communication technology and
come up with standards for the day's need. Some of the well-known forums are:
The Internet Society (ISOC)
Internet Engineering Task Force (IETF)
Frame Relay Forum
ATM Forum
ATM Consortium
Communication Techniques
Data is transmitted in the form of electrical signals. The electrical signals are of two types
viz., analog and digital. Similarly, data can also be either analog or digital. Based on
these, data communication may be of the following types:
Analog data, analog transmission
e.g.: transmission of voice signals over telephone line
Analog data, digital transmission
e.g.: transmission of voice signal after digitization using PCM, delta modulation or
adaptive delta modulation
Digital data, analog transmission
e.g.: communication using modem
Digital data, digital transmission
e.g.: most of present day communication
Modes of Data Transmission
Digital data can be transmitted in a number of ways:
Parallel and serial communication
Synchronous, iso-synchronous and asynchronous communication
Simplex, half-duplex and full-duplex communication
Transmission Errors
It is virtually impossible to send any signal, analog or digital, over a distance without any
distortion even in the most perfect conditions due to:
Delay Distortion
Signals of varying frequencies travel at different speeds along the medium. The speed
of travel of a signal is highest at the center of the bandwidth of the medium and
lower at both the ends. Therefore, at the receiving end, signals with different
frequencies in the given medium will arrive at different times causing delay error.
Attenuation
As a signal travels through a medium, its signal strength decreases.
Noise
A signal travels as an electromagnetic signal through any medium. Electromagnetic
energy that gets inserted somewhere during transmission is called noise.
Many attempts have been made to detect and rectify the transmission errors. Error detection
schemes:
Vertical Redundancy Check (VRC) or Parity Check
Longitudinal Redundancy Check (LRC)
Cyclic Redundancy Check (CRC)
Error correction schemes:
stop-and-wait
go-back-n
sliding-window
Network Topology
Network topology is the study of the arrangement or mapping of the elements (links, nodes,
etc.) of a network, especially the physical (real) and logical (virtual) interconnections
between nodes.
A local area network (LAN) is one example of a network that exhibits both a physical and a
logical topology. Any given node in the LAN will have one or more links to one or more
other nodes in the network and the mapping of these links and nodes onto a graph results in
a geometrical shape that determines the physical topology of the network. Likewise, the
mapping of the flow of data between the nodes in the network determines the logical topology
of the network. It is important to note that the physical and logical topologies might be
identical in any particular network but they also may be different.
Any particular network topology is determined only by the graphical mapping of the
configuration of physical and/or logical connections between nodes - Network Topology is,
therefore, technically a part of graph theory. Distances between nodes, physical
interconnections, transmission rates, and/or signal types may differ in two networks and yet
their topologies may be identical.
Basic Types of Topologies
The arrangement or mapping of the elements of a network gives rise to certain basic
topologies which may then be combined to form more complex topologies (hybrid
topologies). The most common of these basic types of topologies are:
Bus (Linear, Linear Bus)
Star
Ring
Mesh: partially connected mesh (or simply 'mesh') and fully connected mesh (or simply
'fully connected')
Tree
Hybrid
Classification of Network Topologies
There are also three basic categories of network topologies:
physical topologies
signal topologies
logical topologies
The terms signal topology and logical topology are often used interchangeably even though
there is a subtle difference between the two and the distinction is not often made between
the two.
Physical Topologies
The mapping of the nodes of a network and the physical connections between them – i.e.,
the layout of wiring, cables, the locations of nodes, and the interconnections between the
nodes and the cabling or wiring system.
Classification of Physical Topologies:
Bus:
Linear Bus:
The type of network topology in which all of the nodes of the network are
connected to a common transmission medium which has exactly two endpoints (this
is the 'bus', which is also commonly referred to as the backbone, or trunk) – all data
that is transmitted between nodes in the network is transmitted over this common
transmission medium and is able to be received by all nodes in the network virtually
simultaneously (disregarding propagation delays).
Note: The two endpoints of the common transmission medium are normally
terminated with a device called a terminator that exhibits the characteristic
impedance of the transmission medium and which dissipates or absorbs the energy
that remains in the signal to prevent the signal from being reflected or propagated
back onto the transmission medium in the opposite direction, which would cause
interference with and degradation of the signals on the transmission medium (See
Electrical termination).
Distributed Bus:
The type of network topology in which all of the nodes of the network are
connected to a common transmission medium which has more than two endpoints
that are created by adding branches to the main section of the transmission medium
– the physical distributed bus topology functions in exactly the same fashion as the
physical linear bus topology (i.e., all nodes share a common transmission medium).
Notes:
1.) All of the endpoints of the common transmission medium are normally
terminated with a device called a 'terminator' (see the note under linear bus).
2.) The physical linear bus topology is sometimes considered to be a special case of
the physical distributed bus topology – i.e., a distributed bus with no branching
segments.
3.) The physical distributed bus topology is sometimes incorrectly referred to as a
physical tree topology – however, although the physical distributed bus topology
resembles the physical tree topology, it differs from the physical tree topology in that
there is no central node to which any other nodes are connected, since this
hierarchical functionality is replaced by the common bus.
Star:
The type of network topology in which each of the nodes of the network is
connected to a central node with a point-to-point link in a 'hub' and 'spoke' fashion,
the central node being the 'hub' and the nodes that are attached to the central node
being the 'spokes' (e.g., a collection of point-to-point links from the peripheral nodes
that converge at a central node) – all data that is transmitted between nodes in the
network is transmitted to this central node, which is usually some type of device that
then retransmits the data to some or all of the other nodes in the network, although
the central node may also be a simple common connection point (such as a 'punchdown' block) without any active device to repeat the signals.
Notes:
1.) A point-to-point link is sometimes categorized as a special instance of the
physical star topology – therefore, the simplest type of network that is based upon
the physical star topology would consist of one node with a single point-to-point link
to a second node, the choice of which node is the 'hub' and which node is the 'spoke'
being arbitrary.
2.) After the special case of the point-to-point link, as in note 1.) above, the next
simplest type of network that is based upon the physical star topology would consist
of one central node – the 'hub' – with two separate point-to-point links to two
peripheral nodes – the 'spokes'.
3.) Although most networks that are based upon the physical star topology are
commonly implemented using a special device such as a hub or switch as the central
node (i.e., the 'hub' of the star), it is also possible to implement a network that is
based upon the physical star topology using a computer or even a simple common
connection point as the 'hub' or central node – however, since many illustrations of
the physical star network topology depict the central node as one of these special
devices, some confusion is possible, since this practice may lead to the
misconception that a physical star network requires the central node to be one of
these special devices, which is not true because a simple network consisting of three
computers connected as in note 2.) above also has the topology of the physical star.
Extended Star:
A type of network topology in which a network that is based upon the physical star
topology has one or more repeaters between the central node (the 'hub' of the star)
and the peripheral or 'spoke' nodes, the repeaters being used to extend the maximum
transmission distance of the point-to-point links between the central node and the
peripheral nodes beyond that which is supported by the transmitter power of the
central node or beyond that which is supported by the standard upon which the
physical layer of the physical star network is based.
Note: If the repeaters in a network that is based upon the physical extended star
topology are replaced with hubs or switches, then a hybrid network topology is
created that is referred to as a physical hierarchical star topology, although some
texts make no distinction between the two topologies.
Distributed Star:
A type of network topology that is composed of individual networks that are based
upon the physical star topology connected together in a linear fashion – i.e., 'daisy-chained' – with no central or top-level connection point (e.g., two or more 'stacked'
hubs, along with their associated star connected nodes or 'spokes').
Ring:
The type of network topology in which each of the nodes of the network is
connected to two other nodes in the network and with the first and last nodes being
connected to each other, forming a ring – all data that is transmitted between nodes
in the network travels from one node to the next node in a circular manner and the
data generally flows in a single direction only.
Dual-ring:
The type of network topology in which each of the nodes of the network is
connected to two other nodes in the network, with two connections to each of these
nodes, and with the first and last nodes being connected to each other with two
connections, forming a double ring – the data flows in opposite directions around
the two rings, although, generally, only one of the rings carries data during normal
operation, and the two rings are independent unless there is a failure or break in one
of the rings, at which time the two rings are joined (by the stations on either side of
the fault) to enable the flow of data to continue using a segment of the second ring
to bypass the fault in the primary ring.
Mesh:
Full:
Fully Connected:
The type of network topology in which each of the nodes of the network is
connected to each of the other nodes in the network with a point-to-point link – this
makes it possible for data to be simultaneously transmitted from any single node to
all of the other nodes.
Note: The physical fully connected mesh topology is generally too costly and
complex for practical networks, although the topology is used when there are only a
small number of nodes to be interconnected.
Partial:
Partially Connected:
The type of network topology in which some of the nodes of the network are
connected to more than one other node in the network with a point-to-point link –
this makes it possible to take advantage of some of the redundancy that is provided
by a physical fully connected mesh topology without the expense and complexity
required for a connection between every node in the network.
Note: In most practical networks that are based upon the physical partially
connected mesh topology, all of the data that is transmitted between nodes in the
network takes the shortest path between nodes, except in the case of a failure or
break in one of the links, in which case the data takes an alternate path to the
destination – this implies that the nodes of the network possess some type of logical
'routing' algorithm to determine the correct path to use at any particular time.
Tree (also known as Hierarchical):
The type of network topology in which a central 'root' node (the top level of the
hierarchy) is connected to one or more other nodes that are one level lower in the
hierarchy (i.e., the second level) with a point-to-point link between each of the
second level nodes and the top level central 'root' node, while each of the second
level nodes that are connected to the top level central 'root' node will also have one
or more other nodes that are one level lower in the hierarchy (i.e., the third level)
connected to it, also with a point-to-point link, the top level central 'root' node being
the only node that has no other node above it in the hierarchy – the hierarchy of the
tree is symmetrical, each node in the network having a specific fixed number, f, of
nodes connected to it at the next lower level in the hierarchy, the number, f, being
referred to as the 'branching factor' of the hierarchical tree.
Notes:
1.) A network that is based upon the physical hierarchical topology must have at least
three levels in the hierarchy of the tree, since a network with a central 'root' node and
only one hierarchical level below it would exhibit the physical topology of a star.
2.) A network that is based upon the physical hierarchical topology and with a
branching factor of 1 would be classified as a physical linear topology.
3.) The branching factor, f, is independent of the total number of nodes in the
network and, therefore, if the nodes in the network require ports for connection to
other nodes the total number of ports per node may be kept low even though the
total number of nodes is large – this makes the effect of the cost of adding ports to
each node totally dependent upon the branching factor and may therefore be kept as
low as required without any effect upon the total number of nodes that are possible.
4.) The total number of point-to-point links in a network that is based upon the
physical hierarchical topology will be one less than the total number of nodes in the
network.
5.) If the nodes in a network that is based upon the physical hierarchical topology are
required to perform any processing upon the data that is transmitted between nodes
in the network, the nodes that are at higher levels in the hierarchy will be required to
perform more processing operations on behalf of other nodes than the nodes that
are lower in the hierarchy.
Hybrid Network Topologies
The hybrid topology is a type of network topology that is composed of one or more
interconnections of two or more networks that are based upon different physical topologies
or a type of network topology that is composed of one or more interconnections of two or
more networks that are based upon the same physical topology, but where the physical
topology of the network resulting from such an interconnection does not meet the definition
of the original physical topology of the interconnected networks (e.g., the physical topology
of a network that would result from an interconnection of two or more networks that are
based upon the physical star topology might create a hybrid topology which resembles a
mixture of the physical star and physical bus topologies or a mixture of the physical star and
the physical tree topologies, depending upon how the individual networks are
interconnected, while the physical topology of a network that would result from an
interconnection of two or more networks that are based upon the physical distributed bus
network retains the topology of a physical distributed bus network).
Star-Bus:
A type of network topology in which the central nodes of one or more individual
networks that are based upon the physical star topology are connected together using
a common 'bus' network whose physical topology is based upon the physical linear
bus topology, the endpoints of the common 'bus' being terminated with the
characteristic impedance of the transmission medium where required – e.g., two or
more hubs connected to a common backbone with drop cables through the port on
the hub that is provided for that purpose (e.g., a properly configured 'uplink' port)
would comprise the physical bus portion of the physical star-bus topology, while
each of the individual hubs, combined with the individual nodes which are
connected to them, would comprise the physical star portion of the physical star-bus
topology.
Star-of-Stars:
Hierarchical Star:
A type of network topology that is composed of an interconnection of individual
networks that are based upon the physical star topology connected together in a
hierarchical fashion to form a more complex network – e.g., a top level central node
which is the 'hub' of the top level physical star topology and to which other second
level central nodes are attached as the 'spoke' nodes, each of which, in turn, may also
become the central nodes of a third level physical star topology.
Notes:
1.) The physical hierarchical star topology is not a combination of the physical linear
bus and the physical star topologies, as cited in some texts, as there is no common
linear bus within the topology, although the top level 'hub' which is the beginning of
the physical hierarchical star topology may be connected to the backbone of another
network, such as a common carrier, which is, topologically, not considered to be a
part of the local network – if the top level central node is connected to a backbone
that is considered to be a part of the local network, then the resulting network
topology would be considered to be a hybrid topology that is a mixture of the
topology of the backbone network and the physical hierarchical star topology.
2.) The physical hierarchical star topology is also sometimes incorrectly referred to as
a physical tree topology, since its physical topology is hierarchical, however, the
physical hierarchical star topology does not have a structure that is determined by a
branching factor, as is the case with the physical tree topology and, therefore, nodes
may be added to, or removed from, any node that is the 'hub' of one of the
individual physical star topology networks within a network that is based upon the
physical hierarchical star topology.
3.) The physical hierarchical star topology is commonly used in 'outside plant' (OSP)
cabling to connect various buildings to a central connection facility, which may also
house the 'demarcation point' for the connection to the data transmission facilities of
a common carrier, and in 'inside plant' (ISP) cabling to connect multiple wiring
closets within a building to a common wiring closet within the same building, which
is also generally where the main backbone or trunk that connects to a larger network,
if any, enters the building.
Star-wired Ring:
A type of hybrid physical network topology that is a combination of the physical star
topology and the physical ring topology, the physical star portion of the topology
consisting of a network in which each of the nodes of which the network is
composed are connected to a central node with a point-to-point link in a 'hub' and
'spoke' fashion, the central node being the 'hub' and the nodes that are attached to
the central node being the 'spokes' (e.g., a collection of point-to-point links from the
peripheral nodes that converge at a central node) in a fashion that is identical to the
physical star topology, while the physical ring portion of the topology consists of
circuitry within the central node which routes the signals on the network to each of
the connected nodes sequentially, in a circular fashion.
Note: In an 802.5 Token Ring network the central node is called a Multistation
Access Unit (MAU).
Hybrid Mesh:
A type of hybrid physical network topology that is a combination of the physical
partially connected topology and one or more other physical topologies, the mesh
portion of the topology consisting of redundant or alternate connections between
some of the nodes in the network – the physical hybrid mesh topology is commonly
used in networks which require a high degree of availability.
Signal Topology
The mapping of the actual connections between the nodes of a network, as evidenced by the
path that the signals take when propagating between the nodes.
Note: The term 'signal topology' is often used synonymously with the term 'logical
topology', however, some confusion may result from this practice in certain
situations since, by definition, the term 'logical topology' refers to the apparent path
that the data takes between nodes in a network while the term 'signal topology'
generally refers to the actual path that the signals (e.g., optical, electrical,
electromagnetic, etc.) take when propagating between nodes.
Example:
In an 802.4 Token Bus network, the physical topology may be a physical bus, a
physical star, or a hybrid physical topology, while the signal topology is a bus (i.e., the
electrical signal propagates to all nodes simultaneously [ignoring propagation delays
and network latency] ), and the logical topology is a ring (i.e., the data flows from
one node to the next in a circular manner according to the protocol).[4]
Logical Topology
The mapping of the apparent connections between the nodes of a network, as evidenced by
the path that data appears to take when traveling between the nodes.
Classification of Logical Topologies
The logical classification of network topologies generally follows the same classifications as
those in the physical classifications of network topologies, the path that the data takes
between nodes being used to determine the topology as opposed to the actual physical
connections being used to determine the topology.
Notes:
1.) Logical topologies are often closely associated with media access control (MAC)
methods and protocols.
2.) The logical topologies are generally determined by network protocols as opposed
to being determined by the physical layout of cables, wires, and network devices or
by the flow of the electrical signals, although in many cases the paths that the
electrical signals take between nodes may closely match the logical flow of data,
hence the convention of using the terms 'logical topology' and 'signal topology'
interchangeably.
3.) Logical topologies are able to be dynamically reconfigured by special types of
equipment such as routers and switches.
Daisy chains
Except for star-based networks, the easiest way to add more computers into a network is by
daisy-chaining, or connecting each computer in series to the next. If a message is intended
for a computer partway down the line, each system bounces it along in sequence until it
reaches the destination. A daisy-chained network can take two basic forms: linear and ring.
A linear topology puts a two-way link between one computer and the next.
However, this was expensive in the early days of computing, since each computer
(except for the ones at each end) required two receivers and two transmitters.
By connecting the computers at each end, a ring topology can be formed. An
advantage of the ring is that the number of transmitters and receivers can be cut in
half, since a message will eventually loop all of the way around. When a node sends a
message, the message is processed by each computer in the ring. If a computer is not
the destination node, it will pass the message to the next node, until the message
arrives at its destination. If the message is not accepted by any node on the network,
it will travel around the entire ring and return to the sender. This potentially results
in a doubling of travel time for data, but since it is traveling at a significant fraction
of the speed of light, the loss is usually negligible.
Centralization
The star topology reduces the probability of a network failure by connecting all of the
peripheral nodes (computers, etc.) to a central node. When the physical star topology is
applied to a logical bus network such as Ethernet, this central node (usually a hub)
rebroadcasts all transmissions received from any peripheral node to all peripheral nodes on
the network, sometimes including the originating node. All peripheral nodes may thus
communicate with all others by transmitting to, and receiving from, the central node only.
The failure of a transmission line linking any peripheral node to the central node will result
in the isolation of that peripheral node from all others, but the remaining peripheral nodes
will be unaffected. However, the disadvantage is that the failure of the central node will
cause the failure of all of the peripheral nodes also.
If the central node is passive, the originating node must be able to tolerate the reception of an
echo of its own transmission, delayed by the two-way transmission time (i.e. to and from the
central node) plus any delay generated in the central node. An active star network has an
active central node that usually has the means to prevent echo-related problems.
A tree topology (hierarchical topology) can be viewed as a collection of star networks
arranged in a hierarchy. This tree has individual peripheral nodes (i.e. leaves) which are
required to transmit to and receive from one other node only and are not required to act as
repeaters or regenerators. Unlike the star network, the functionality of the central node may
be distributed.
As in the conventional star network, individual nodes may thus still be isolated from the
network by a single-point failure of a transmission path to the node. If a link connecting a
leaf fails, that leaf is isolated; if a connection to a non-leaf node fails, an entire section of the
network becomes isolated from the rest. In order to alleviate the amount of network traffic
that comes from broadcasting all signals to all nodes, more advanced central nodes were
developed that are able to keep track of the identities of the nodes that are connected to the
network. These network switches will "learn" the layout of the network by first broadcasting
data packets to all nodes, then observing where response packets come from and entering
the addresses of these nodes into an internal table for future routing purposes.
Decentralization
In a mesh topology (i.e., a partially connected mesh topology), there are at least two nodes
with two or more paths between them to provide redundant paths to be used in case the link
providing one of the paths fails. This decentralization is often used to advantage to
compensate for the single-point-failure disadvantage that is present when using a single
device as a central node (e.g., in star and tree networks). A special kind of mesh, limiting the
number of hops between two nodes, is a hypercube. The number of arbitrary forks in mesh
networks makes them more difficult to design and implement, but their decentralized nature
makes them very useful. This is similar in some ways to a grid network, where a linear or
ring topology is used to connect systems in multiple directions. A multi-dimensional ring has
a toroidal topology, for instance.
A fully connected network, complete topology or full mesh topology is a network
topology in which there is a direct link between all pairs of nodes. In a fully connected
network with n nodes, there are n(n-1)/2 direct links. Networks designed with this topology
are usually very expensive to set up, but provide a high degree of reliability due to the
multiple paths for data that are provided by the large number of redundant links between
nodes. This topology is mostly seen in military applications. However, it can also be seen in
the file sharing protocol BitTorrent in which users connect to other users in the "swarm" by
allowing each user sharing the file to connect to other users also involved. Often in actual
usage of BitTorrent any given individual node is rarely connected to every single other node
as in a true fully connected network but the protocol does allow for the possibility for any
one node to connect to any other node when sharing files.
Hybrids
Hybrid networks use a combination of any two or more topologies in such a way that the
resulting network does not exhibit one of the standard topologies (e.g., bus, star, ring, etc.).
For example, a tree network connected to a tree network is still a tree network, but two star
networks connected together exhibit a hybrid network topology. A hybrid topology is always
produced when two different basic network topologies are connected. Two common examples of
hybrid networks are the star-ring network and the star-bus network:
A Star ring network consists of two or more star topologies connected using a
multistation access unit (MAU) as a centralized hub.
A Star-Bus network consists of two or more star topologies connected using a bus
trunk (the bus trunk serves as the network's backbone).
While grid networks have found popularity in high-performance computing applications, some
systems have used genetic algorithms to design custom networks that have the fewest possible
hops between different nodes. Some of the resulting layouts are nearly incomprehensible,
although they do function quite well.
Chapter 3
Introduction to Broadband Digital Transport
OBJECTIVE AND SCOPE
In the early 1980s, fiber-optic transmission links burst upon the telecommunication
transport scene. The potential bit rate capacity of these new systems was so great that
there was no underlying digital format to accommodate such transmission rates. The maximum
bit rate in the DS1 family of digital formats was DS4 at 274 Mbps; and for the E1 family, E4
at 139 Mbps. These data rates satisfied the requirements of the metallic transmission plant,
but the evolving fiber-optic plant had the promise of much greater capacity, in the
multigigabit region.
In the mid-1980s ANSI and Bellcore began to develop a new digital format standard
specifically designed for the potential bit rates of fiber optics. The name of this structure
is SONET, standing for Synchronous Optical Network. As the development of SONET was
proceeding, CEPT1 showed interest in the development of a European standard. In 1986 the
CCITT stepped in, proposing a single standard that would accommodate the U.S., European,
and Japanese hierarchies. Unfortunately, this was not achieved, owing largely to time
constraints on the part of U.S. interests. As a result, there are two digital format
standards: SONET and the synchronous digital hierarchy (SDH) espoused by the CCITT.
It should be pointed out that these formats are optimized for voice operation with 125-µsec
frames. Both types commonly carry plesiochronous digital hierarchy (PDH) formats such as
DS1 and E1, as well as ATM cells.2 In the general scheme of things, the interface from one
to the other takes place at North American gateways. In other words, international trunks
are SDH-equipped, not SONET-equipped. The objective of this chapter is to provide a brief
overview of both the SONET and SDH standards.
1 CEPT stands for Conférence Européenne des Postes et des Télécommunications, a European
telecommunication standardization agency based in France. In 1990 the name of the agency
was changed to ETSI (European Telecommunications Standards Institute).
2 Held (Ref. 9) defines plesiochronous as "a network with multiple stratum 1 primary
reference sources." See Section 6.12.1. In this context, when transporting these PCM
formats, the underlying network timing and synchronization must have stratum 1 traceability.
SONET
Introduction and Background
The SONET standard was developed by the ANSI T1X1 committee, with first publication in
1988. The standard defines the features and functionality of a transport system based on the
principles of synchronous multiplexing. In essence this means that individual tributary
signals may be multiplexed directly into a higher rate SONET signal without intermediate
stages of multiplexing.
DS1 and E1 digital hierarchies had rather limited overhead capabilities for network
management, control, and monitoring. SONET (and SDH) provides a rich built-in capacity
for advanced network management and maintenance capabilities. Nearly 5% of the SONET
signal structure is allocated to supporting such management and maintenance procedures
and practices.
SONET is capable of transporting all the tributary signals that have been defined for the
digital networks in existence today. This means that SONET can be deployed as an overlay
to the existing network and, where appropriate, provide enhanced network flexibility by
transporting existing signal types. In addition, SONET has the flexibility to readily
accommodate the new types of customer service signals such as SMDS (switched
multimegabit data service) and ATM (asynchronous transfer mode). Actually, it can carry any
octet-based binary format such as TCP/IP, SNA, OSI regimes, X.25, frame relay, and
various LAN formats, which have been packaged for long-distance transmission.
Synchronous Signal Structure
SONET is based on a synchronous signal composed of 8-bit octets, which are organized
into a frame structure. The frame can be represented by a two-dimensional map comprising
N rows and M columns, where each box so derived contains one octet (or byte). The upper
left-hand corner of the rectangular map representing a frame contains an identifiable marker
to tell the receiver it is the start of frame.
SONET consists of a basic, first-level, structure called STS-1, which is discussed in the
following. The definition of the first level also defines the entire hierarchy of SONET
signals because higher-level SONET signals are obtained by synchronously multiplexing the
lower-level modules. When lower-level modules are multiplexed together, the result is
denoted as STS-N (STS stands for synchronous transport signal), where N is an integer.
The resulting format then can be converted to an OC-N (OC stands for optical carrier) or
STS-N electrical signal. There is an integer multiple relationship between the rate of the
basic module STS-1 and the OC-N electrical signals (i.e., the rate of an OC-N is equal to N
times the rate of an STS-1). Only OC-1, -3, -12, -24, -48, and -192 are supported by today's
SONET.
Basic Building Block Structure
The STS-1 frame is shown in Figure 19.1. STS-1 is the basic module and building block of
SONET. It is a specific sequence of 810 octets (6480 bits), which includes various
overhead octets and an envelope capacity for transporting payloads. STS-1 is depicted as a
90-column, 9-row structure with a frame period of 125 µsec (i.e., 8000 frames per second).
STS-1 has a bit rate of 51.840 Mbps. Consider Figure 19.1. The order of transmission of
octets is row-by-row, from left to right. In each octet of STS-1 the most significant bit is
transmitted first.
[Figure 19.1: The STS-1 frame.]
[Figure 19.2: STS-1 synchronous payload envelope (SPE).]
As illustrated in Figure 19.1, the first three columns of the STS-1 frame contain the
Transport Overhead. These three columns have 27 octets (i.e., 9 × 3), of which nine are used
for the section overhead and 18 octets contain the line overhead. The remaining 87 columns
make up the STS-1 Envelope Capacity, as illustrated in Figure 19.2. The STS-1 synchronous
payload envelope (SPE) occupies the STS-1 envelope capacity. The STS-1 SPE consists of 783
octets and is depicted as an 87-column by 9-row structure. In that structure, column 1
contains 9 octets and is designated as the STS path overhead (POH). In the SPE, columns 30
and 59 are not used for payload but are designated as fixed-stuff columns. The 756 octets in the remaining 84
columns are used for the actual STS-1 payload capacity. The reference document (Ref. 1)
states that the octets in the fixed-stuff columns are undefined and are set to binary 0s.
However, the values used to stuff these columns of each STS-1 SPE will produce even parity
in the calculation of the STS-1 Path BIP-8 (BIP = bit-interleaved parity).
The STS-1 SPE may begin anywhere in the STS-1 envelope capacity. Typically the SPE
begins in one STS-1 frame and ends in the next. This is illustrated in Figure 19.4. However,
on occasion the SPE may be wholly contained in one frame. The STS payload pointer resides
in the transport overhead. It designates the location of the next octet where the SPE begins.
Payload pointers are described in the following paragraphs.
[Figure: POH and the STS-1 payload capacity within the STS-1 SPE. Note that the net payload capacity in the STS-1 frame is only 84 columns.]
[Figure 19.4: STS-1 SPE typically located in STS-1 frames. (From Ref. 2, courtesy of Hewlett-Packard.)]
The STS POH (path overhead) is associated with each payload and is used to communicate
various pieces of information from the point where the payload is mapped into the STS-1
SPE to the point where it is delivered. Among the pieces of information carried in the POH
are alarm and performance data.
STS-N Frames
The frame consists of a specific sequence of N × 810 octets. The STS-N frame is formed by
octet-interleaving STS-1 and STS-M (M < N) modules. The transport overhead of the individual
STS-1 and STS-M modules are frame-aligned before interleaving, but the associated STS
SPEs are not required to be aligned because each STS-1 has a payload pointer to indicate the
location of the SPE or to indicate concatenation.
[Figure: STS-N frame.]
STS Concatenation
Super-rate payloads require multiple STS-1 SPEs; FDDI and some B-ISDN payloads fall into
this category. Concatenation means linking together. An STS-Nc module is formed by linking
N constituent STS-1s together in a fixed phase alignment. The super-rate payload is then
mapped into the resulting STS-Nc SPE for transport. Such an STS-Nc SPE requires an OC-N or
an STS-N electrical signal. Concatenation indicators contained in the second through the
Nth STS payload pointer are used to show that the STS-1s of an STS-Nc are linked together.
There are N × 783 octets in an STS-Nc. Such an STS-Nc arrangement is illustrated in Figure
19.6 and is depicted as an N × 87-column by 9-row structure. Because of the linkage, only
one set of STS POH is required in the STS-Nc SPE. Here the STS POH always appears in the
first of the N STS-1s that make up the STS-Nc (Ref. 3). The figure following shows the
transport overhead assignment of an OC-3 carrying an STS-3c
SPE.
[Figure 19.6: STS-3c concatenated SPE. (Courtesy of Hewlett-Packard.)]
[Figure: Transport overhead assignment showing OC-3 carrying an STS-3c SPE. (Based on Ref. 1, Figure 3-8, and updated to current practice.)]
[Figure 19.8: The virtual tributary (VT) concept.]
Structure of Virtual Tributaries (VTs)
The SONET STS-1 SPE, with a channel capacity of 50.112 Mbps, has been designed
specifically to transport a DS3 tributary signal. To accommodate sub-STS-1 rate payloads
such as DS1, the VT structure is used. It consists of four sizes: VT1.5 (1.728 Mbps) for DS1
transport, VT2 (2.304 Mbps) for E1 transport, VT3 (3.456 Mbps) for DS1C transport, and
VT6 (6.912 Mbps) for DS2 transport. The virtual tributary concept is illustrated in Figure
19.8. The four VT configurations are shown in Figure 19.9. In the 87-column by 9-row
structure of the STS-1 SPE, the VTs occupy 3, 4, 6, and 12 columns, respectively.
[Figure 19.9: The four sizes of virtual tributary frames.]
There are two VT operating modes: floating mode and locked mode. The floating mode was
designed to minimize network delay and provide efficient cross-connects of transport signals
at the VT level within the synchronous network. This is achieved by allowing each VT SPE
to float with respect to the STS-1 SPE in order to avoid the use of unwanted slip buffers at
each VT cross-connect point. Each VT SPE has its own payload pointer, which
accommodates timing synchronization issues associated with the individual VTs. As a result,
by allowing a selected VT1.5, for example, to be cross-connected between different
transport systems without unwanted network delay, this mode allows a DS1 to be
transported effectively across a SONET network. The locked mode minimizes interface
complexity and supports bulk transport of DS1 signals for digital switching applications.
This is achieved by locking individual VT SPEs in fixed positions with respect to the STS-1
SPE. In this case, each VT1.5 SPE is not provided with its own payload pointer. With the
locked mode it is not possible to route a selected VT1.5 through the SONET network
without unwanted network delay caused by having to provide slip buffers to accommodate
the timing/synchronization issues.
The Payload Pointer
The STS payload pointer provides a method for allowing flexible and dynamic alignment of
the STS SPE within the STS envelope capacity, independent of the actual contents of the
SPE. SONET, by definition, is intended to be synchronous. It derives its timing from the
master network clock. Modern digital networks must make provision for more than one
master clock. Examples in the United States are the several interexchange carriers that
interface with local exchange carriers, each with their own master clock. Each master clock
(stratum 1) operates independently, and each of these master clocks has excellent stability (i.e., better than 1 × 10^-11 per month), yet there may be some small variance in time among the clocks. Assuredly they will not be phase-aligned. Likewise, SONET must take into
account loss of master clock or a segment of its timing delivery system. In this case, switches
fall back on lower-stability internal clocks. This situation must also be handled by SONET.
Therefore synchronous transport must be able to operate effectively under these conditions
where network nodes are operating at slightly different rates.
To accommodate these clock offsets, the SPE can be moved (justified) in the positive or
negative direction one octet at a time with respect to the transport frame. This is achieved by
recalculating or updating the payload pointer at each SONET network node. In addition to
clock offsets, updating the payload pointer also accommodates any other timing phase
adjustments required between the input SONET signals and the timing reference at the
SONET node. This is what is meant by dynamic alignment, where the STS SPE is allowed to float within the STS envelope capacity.
The payload pointer is contained in the H1 and H2 octets in the line overhead (LOH) and
designates the location of the octet where the STS SPE begins. These two octets are viewed
in Figure 19.10. Bits 1 through 4 of the pointer word carry the new data flag, and bits 7
through 16 carry the pointer value. Bits 5 and 6 are undefined. Let's discuss bits 7 through 16, the actual pointer value. It is a binary number with a range of 0 to 782. It indicates the offset between the pointer word and the first octet of the STS SPE (i.e., the J1 octet). The transport overhead octets are not counted in the offset. For example, a pointer value of 0 indicates that the STS SPE starts in the octet location that immediately follows the H3 octet, whereas an offset of 87 indicates that it starts immediately after the K2 octet location.
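The bit layout just described can be captured in a few lines of Python. This is only a sketch of the H1/H2 word as the text defines it (4-bit NDF, two undefined bits, 10-bit pointer value); the function names are ours, and 0b0110 is assumed here as the customary "normal" NDF pattern:

    # Illustrative sketch: packing and unpacking the 16-bit H1/H2 pointer word.
    def make_pointer_word(ndf, value):
        # bits 1-4: NDF; bits 5-6: undefined (left 0 here); bits 7-16: value
        assert 0 <= value <= 782, "pointer value must be 0..782"
        return ((ndf & 0xF) << 12) | (value & 0x3FF)

    def parse_pointer_word(word):
        ndf = (word >> 12) & 0xF
        value = word & 0x3FF
        return ndf, value

    word = make_pointer_word(0b0110, 87)   # SPE begins right after the K2 octet
    print(parse_pointer_word(word))        # -> (6, 87)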
Payload pointer processing introduces a signal impairment known as payload adjustment jitter.
This impairment appears on a received tributary signal after recovery from an SPE that has been subjected to payload pointer changes. Excessive jitter influences the operation of the network equipment processing the tributary signal immediately downstream.
By careful design of the timing distribution for the synchronous network, payload pointer
adjustments can be minimized, thus reducing the level of tributary jitter that can be
accumulated through synchronous transport.
The Three Overhead Levels of SONET
The three embedded overhead levels of SONET are:
1 Path (POH)
2 Line (LOH)
3 Section (SOH)
These overhead levels, represented as spans, are illustrated in Figure 19.11. One important
function is to support network operation and maintenance (OAM). The POH consists of 9
octets and occupies the first column of the SPE, as pointed out previously. It is created and
included in the SPE as part of the SPE assembly process. The POH provides the facilities to
support and maintain the transport of the SPE between path terminations, where the SPE is
assembled and disassembled. Among the POH specific functions are:
1 An 8-bit-wide (octet B3) BIP (bit-interleaved parity) check calculated over all bits of the
previous SPE. The computed value is placed in the POH of the following frame.
2 Alarm and performance information (octet G1).
3 A path signal label (octet C2), which gives details of the SPE structure. It is 8 bits wide, so it can identify up to 256 structures (2^8).
4 One octet (J1) repeated through 64 frames can develop an alphanumeric message
associated with the path. This allows verification of continuity of connection to the source
of the path signal at any receiving terminal along the path by monitoring the message string.
5 An orderwire for network operator communications between path equipment (octet F2).

STS payload pointer (H1, H2) coding.

SONET section, line and path definitions.
Facilities to support and maintain the transport of the SPE between adjacent nodes are
provided by the line and section overhead. These two overhead groups share the first three columns of the STS-1 frame (see Figure 19.1). The SOH occupies the top three rows (a total of 9 octets) and the LOH occupies the bottom 6 rows (18 octets). The line overhead
functions include:
i Payload pointer (octets H1, H2, and H3) (each STS-1 in an STS-N frame has its own
payload pointer).
ii Automatic protection switching control (octets K1 and K2).
iii BIP parity check (octet B2).
iv 576-kbps data channel (octets D4 through D12).
v Express orderwire (octet E2).
Among the section overhead functions are:
a Frame alignment pattern (octets A1, A2).
b STS-1 identification (octet C1): a binary number corresponding to the order of appearance in the STS-N frame, which can be used in the framing and deinterleaving process to determine the position of other signals.
c BIP-8 parity check (octet B1): section error monitoring (a sketch of this parity calculation follows this list).
d Data communications channel (octets D1, D2, and D3).
e Local orderwire channel (octet E1).
f User channel (octet F1).
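The B1, B2, and B3 octets all use the same BIP-8 mechanism. A minimal sketch (ours): because the parity is even and bit-interleaved over eight positions, the parity octet is simply the XOR of every octet in the covered span:

    # Illustrative sketch: BIP-8 over a span of octets.
    from functools import reduce

    def bip8(octets: bytes) -> int:
        # bit i of the result gives even parity over bit i of every octet
        return reduce(lambda acc, b: acc ^ b, octets, 0)

    previous_frame = bytes([0x12, 0x34, 0x56])
    parity = bip8(previous_frame)   # placed in B3 (or B1/B2) of the following frame
    print(hex(parity))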
The SPE Assembly/Disassembly Process.
Payload mapping is the process of assembling a tributary signal into an SPE. It is fundamental
to SONET operation. The payload capacity provided for each individual tributary signal is
always slightly greater than that required by the tributary signal. The mapping process, in
essence, is to synchronize the tributary signal with the payload capacity. This is achieved by
adding stuffing bits to the bit stream as part of the mapping process. An example might be a
DS3 tributary signal at a nominal rate of 44.736 Mbps to be synchronized with a payload
capacity of 49.54 Mbps provided by an STS-1 SPE.
The SPE disassembly process.
The addition of path overhead completes the assembly process of the STS-1 SPE and
increases the bit rate of the composite signal to 50.11 Mbps. The SPE assembly process is
shown graphically in Figure 19.12. At the terminus or drop point of the network, the original
DS3 payload must be recovered as in our example. The process of SPE Disassembly is
shown in Figure 19.13. The term used here is payload demapping. The demapping process
desynchronizes the tributary signal from the composite SPE signal by stripping off the path
overhead and the added stuff bits. In the example, an STS-1 SPE with a mapped DS3
payload arrives at the tributary disassembly location with a signal rate of 50.11 Mbps. The
stripping process results in a discontinuous signal representing the transported DS3 signal
with an average signal rate of 44.74 Mbps. The timing discontinuities are reduced by means
of a desynchronizing phase-locked loop, which then produces a continuous DS3 signal at
the required average transmission rate (Refs. 1, 2).
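The rate bookkeeping in this example is worth making explicit; a short sketch (ours), using only the figures quoted in the text:

    # Illustrative sketch: where the bits go in the DS3 mapping example.
    DS3_RATE_MBPS         = 44.736   # nominal tributary rate
    PAYLOAD_CAPACITY_MBPS = 49.54    # capacity the STS-1 SPE offers the DS3
    SPE_RATE_MBPS         = 50.11    # payload capacity plus path overhead

    stuff_capacity = PAYLOAD_CAPACITY_MBPS - DS3_RATE_MBPS  # absorbed by stuff bits
    poh_capacity   = SPE_RATE_MBPS - PAYLOAD_CAPACITY_MBPS  # consumed by POH
    print(f"stuffing: ~{stuff_capacity:.3f} Mbps, path overhead: ~{poh_capacity:.2f} Mbps")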
Line Rates for Standard SONET Interface Signals.
Table 19.1 shows the standard line transmission rates for OC-N (OC = optical carrier) and
STS-N.
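The rates in Table 19.1 are easy to regenerate, since every OC-N line rate is an integer multiple of the 51.84-Mbps STS-1/OC-1 rate. A short sketch (ours):

    # Illustrative sketch: OC-N line rate = N x 51.84 Mbps.
    STS1_MBPS = 51.84
    for n in (1, 3, 12, 48, 192):
        print(f"OC-{n}: {n * STS1_MBPS:.2f} Mbps")
    # OC-3 = 155.52, OC-12 = 622.08, OC-48 = 2488.32, OC-192 = 9953.28 Mbps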
Add–Drop Multiplexer
The SONET ADM multiplexes one or more DS-n signals into the SONET OC-N channel.
An ADM can be configured for either the add–drop or the terminal mode. In the add–drop mode, it can operate when the low-speed DS1 signals terminating at the SONET ADM derive timing from the same or an equivalent source as the SONET signal (i.e., synchronous), but do not derive timing from asynchronous sources. Figure 19.14 is an example of an ADM configured in the add–drop
mode with DS1 and OC-N interfaces.

SONET ADM add-drop configuration example.

A SONET ADM interfaces with two full-duplex OC-N signals and one or more full-duplex DS1 signals. It may optionally provide low-speed DS1C, DS2, DS3, or OC-M (M ≤ N) interfaces. Non-path-terminating information payloads from each incoming OC-N signal are passed through the SONET ADM and transmitted by the OC-N interface at the other side. Timing for the transmitted OC-N is derived from either (a) an external synchronization source, (b) an incoming OC-N signal, (c) each incoming OC-N signal in each direction (called through-timing), or (d) its local clock, depending on the network application. Each DS1 interface reads data from an incoming OC-N and inserts data into an outgoing OC-N bit stream as required. Figure 19.14 also shows a
synchronization interface for local switch application with external timing and an operations interface module (OIM) that provides local technician orderwire, local alarm, and an
interface to remote operations systems. A controller is part of each SONET ADM; it maintains and controls the ADM functions, connects to local or remote technician interfaces, and connects to required and optional operations links that permit maintenance, provisioning, and testing. Figure shows an example of an ADM in the terminal mode of operation with DS1 interfaces. In this case, the ADM multiplexes up to N × (28 DS1) or equivalent signals into an OC-N bit stream. Timing for this terminal configuration
is taken from either an external synchronization source, the received OC-N signal (called
loop timing), or its own local clock depending on the network application.
SYNCHRONOUS DIGITAL HIERARCHY
Introduction
SDH was a European/CCITT development, whereas SONET was a North American development. They are very similar. One major difference is their initial line rate: STS-1/OC-1 has an initial line rate of 51.84 Mbps, while SDH level 1 has a bit rate of 155.520 Mbps. These rates are the basic building blocks of each system. SONET's STS-3/OC-3 line rate is the same as the SDH STM-1 rate of 155.520 Mbps. Another difference is in their basic digital line rates. In North America it is the DS1 or DS3 line rates; in SDH countries it is the 2.048-, 34-, or 139-Mbps rates. This has been resolved in the SONET/SDH environment through the SDH administrative unit (AU) at a 34-Mbps rate. Four such 34-Mbps AUs are "nested" (i.e., joined) to form the SDH STM-1, the 155-Mbps basic building block. There is an AU-3 used with SDH to carry a SONET STS-1 or a DS3 signal. In such a way, a nominal 50-Mbps AU-3 can be transported on an STM-1 SDH signal.
SDH Standard Bit Rates
The standard SDH bit rates are shown in Table 19.2. ITU-T Rec. G.707 (Ref. 5) states that "the first level digital hierarchy shall be 155,520 kbps . . . and . . . higher synchronous digital hierarchy rates shall be obtained as integer multiples of the first level bit rate."
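The quoted rule makes Table 19.2 easy to regenerate. A sketch (ours) that also derives the first-level rate from the STM-1 frame geometry described later in this section (9 rows by 270 columns, 8 bits per octet, every 125 µs):

    # Illustrative sketch: SDH rates as integer multiples of the first level.
    STM1_KBPS = 9 * 270 * 8 * 8000 // 1000   # frame geometry -> 155,520 kbps
    for n in (1, 4, 16):
        print(f"STM-{n}: {n * STM1_KBPS} kbps")
    # STM-4 = 622,080 kbps; STM-16 = 2,488,320 kbps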
Interface and Frame Structure of SDH
Figure 19.16 illustrates the relationship between various multiplexing elements of SDH and
shows generic multiplexing structures. Figure 19.17 illustrates one multiplexing example for
SDH, where there is direct multiplexing from container-1 using AU-3.
Definitions
Synchronous Transport Module (STM).
An STM is the information structure used to support section layer connections in the SDH.
It is analogous to STS in the SONET regime. STM consists of information payload and
section overhead (SOH) information fields organized in a block frame structure that repeats
every 125 µsec. The information is suitably conditioned for serial transmission on selected
media at a rate that is synchronized to the network. A basic STM (STM-1) is defined at
155,520 kbps. Higher-capacity STMs are formed at rates equivalent to N times multiples of
this basic rate. STM capacities for N = 4 and N = 16 are defined, and higher values are
under consideration by ITU-T. An STM comprises a single administrative unit group (AUG)
together with the SOH. An STM-N contains N AUGs together with the SOH.
Container, C-n (n = 1 to n = 4).
This element is a defined unit of payload capacity, which is dimensioned to carry any of the bit rates currently defined in Table 19.2, and may also provide capacity for transport of broadband signals that are not yet defined by CCITT (ITU-T) (Ref. 6).
Basic generalized SDH multiplexing structure. (From Ref. 7, Figure 1/G.709, ITU-T Rec.
G.709.)
SDH multiplexing method directly from container-1 using AU-3. (From Ref. 6, Figure 2-3/
G.708, ITU-T Rec. G.708.)
Virtual Container-n (VC-n).
A virtual container is the information structure used to support path layer connection in the
SDH. It consists of information payload and POH information fields organized in a block
frame that repeats every 125 µsec or 500 µsec. Alignment information to identify VC-n
frame start is provided by the server network layer. Two types of virtual container have been
identified:
1. Lower-Order Virtual Container-n, VC-n (n = 1, 2). This element comprises a single C-n (n = 1,
2), plus the basic virtual container POH appropriate to that level.
2. Higher-Order Virtual Container-n, VC-n (n = 3, 4). This element comprises a single C-n (n
= 3, 4), an assembly of tributary unit groups (TUG-2s), or an assembly of TU-3s, together
with virtual container POH appropriate to that level.
Administrative Unit-n (AU-n).
An administrative unit is the information structure that provides adaptation between the
higher-order path layer and the multiplex section. It consists of an information payload (the
higher-order virtual container) and an administrative unit pointer, which indicates the offset
of the payload frame start relative to the multiplex section frame start. Two administrative
units are defined. The AU-4 consists of a VC-4 plus an administrative unit pointer, which
indicates the phase alignment of the VC-4 with respect to the STM-N frame. The AU-3
consists of a VC-3 plus an administrative unit pointer, which indicates the phase alignment
of the VC-3 with respect to the STM-N frame. In each case the administrative unit pointer
location is fixed with respect to the STM-N frame (Ref. 6). One or more administrative units
occupying fixed, defined positions in an STM payload is termed an administrative unit group (AUG). An AUG consists of a homogeneous assembly of AU-3s or an AU-4.
Tributary Unit-n (TU-n).
A tributary unit is an information structure that provides adaptation between the lower-order path layer and the higher-order path layer. It consists of an information payload (the lower-order virtual container) and a tributary unit pointer, which indicates the offset of the payload frame start relative to the higher-order virtual container frame start. The TU-n (n = 1, 2, 3)
consists of a VC-n together with a tributary unit pointer. One or more tributary units
occupying fixed, defined positions in a higher-order VC-n payload is termed a tributary unit
group (TUG). TUGs are defined in such a way that mixed-capacity payloads made up of
different-size tributary units can be constructed to increase flexibility of the transport
network. A TUG-2 consists of a homogeneous assembly of identical TU-1s or a TU-2. A
TUG-3 consists of a homogeneous assembly of TUG-2s or a TU-3 (Ref. 6).
Container-n (n = 1–4).
A container is the information structure that forms the network synchronous information
payload for a virtual container. For each of the defined virtual containers there is a
corresponding container. Adaptation functions have been defined for many common
network rates into a limited number of standard containers (Refs. 6, 7). These include
standard E-1/DS-1 rates defined in ITU-T Rec. G.702 (Ref. 8).
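A small sketch (ours) of picking a container for a tributary; the rates shown are the standard G.702 tributary rates, and the mapping is illustrative rather than exhaustive:

    # Illustrative sketch: smallest standard container for a G.702 tributary rate.
    CONTAINER_RATES_KBPS = {
        "C-11": 1544,     # DS1
        "C-12": 2048,     # E1
        "C-2":  6312,     # DS2
        "C-3":  44736,    # DS3 (a C-3 also carries the 34,368-kbps E3)
        "C-4":  139264,   # 139-Mbps fourth level
    }

    def container_for(rate_kbps):
        for name, rate in sorted(CONTAINER_RATES_KBPS.items(), key=lambda kv: kv[1]):
            if rate >= rate_kbps:
                return name
        raise ValueError("no standard container for this rate")

    print(container_for(2048))    # -> C-12
    print(container_for(44736))   # -> C-3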
Frame Structure.
The basic frame structure of SDH is illustrated in Figure 19.18. The three principal areas of the STM-1 frame are section overhead, AU pointers, and STM-1 payload.
Section Overhead. Section overhead is contained in rows 1–3 and 5–9 of columns 1–9 × N of the STM-N shown in Figure 19.18.
Administrative Unit (AU) Pointers. The AU-n pointer (like the SONET pointer) allows flexible and dynamic alignment of the VC-n within the AU-n frame. Dynamic alignment means that the VC-n floats within the AU-n frame. Thus the pointer is able to accommodate differences, not only in the phases of the VC-n and the SOH but also in the frame rates. Row 4 of columns 1–9 × N in Figure 19.18 is available for AU pointers. The AU-4 pointer is contained in octets H1, H2, and H3 as shown in Figure 19.19. The three individual AU-3 pointers are contained in three separate H1, H2, and H3 octets as shown in Figure 19.20. The pointer contained in H1 and H2 designates the location of the octet where the VC-n begins. The two octets (or bytes) allocated to the pointer function can be viewed as one word, as illustrated in Figure 19.21. The last ten bits (bits 7–16) of the pointer word carry the pointer value.
STM-N frame structure.
AU-4 pointer offset numbering.
AU-3 offset numbering.
As shown in Figure 19.21, the AU-4 pointer value is a binary number with a range of 0–782,
which indicates offset, in three-octet increments, between the pointer and the first octet of
the VC-4 (see Figure 19.19). Figure 19.21 also indicates one additional valid pointer value, the concatenation indication. The concatenation indication is given by the binary sequence 1001 in bit positions 1–4; bits 5 and 6 are unspecified, and there are ten 1s in bit positions 7–16. The AU-4 pointer is set to the concatenation indication for AU-4 concatenation.
There are three AU-3s in an AUG, where each AU-3 has its own associated H1, H2, and H3
octets. As detailed in Figure 19.20, the H octets are shown in sequence. The first H1, H2, H3
set refers to the first AU-3, the second set to the second AU-3, and so on. For the AU-3s,
each pointer operates independently. In all cases the AU-n pointer octets are not counted in
the offset. For example, in an AU-4, the pointer value of 0 indicates that the VC-4 starts in
the octet location that immediately follows the last H3 octet, whereas an offset of 87
indicates that the VC-4 starts three octets after the K2 octet (byte) (Ref. 7). Note the similarity to SONET here.
Frequency Justification. If there is a frequency offset between the
frame rate of the AUG and that of the VC-n, then the pointer value will be incremented or
decremented as needed, accompanied by a corresponding positive or negative justification
octet or octets. Consecutive pointer operations must be separated by at least three frames
(i.e., every fourth frame) in which the pointer values remain constant. If the frame rate of the
VC-n is too slow with respect to that of the AUG, then the alignment of the VC-n must
periodically slip back in time and the pointer value must be incremented by one. This
operation is indicated by inverting bits 7, 9, 11, 13, and 15 (I bits) of the pointer word to
allow 5-bit majority voting at the receiver. Three positive justification octets appear
immediately after the last H3 octet in the AU-4 frame containing the inverted I-bits.
Subsequent pointers will contain the new offset. For AU-3 frames, a positive justification
octet appears immediately after the individual H3 octet of the AU-3 frame containing
inverted I-bits. Subsequent pointers will contain the new offset. If the frame rate of the VC-n is too fast with respect to that of the AUG, then the alignment of the VC-n must
periodically be advanced in time and the pointer value must then be decremented by one.
This operation is indicated by inverting bits 8, 10, 12, 14, and 16 (D-bits) of the pointer word
to allow 5-bit majority voting at the receiver. Three negative justification octets appear in the
H3 octets in the AU-4 frame containing inverted D-bits. Subsequent pointers will contain
the new offset.
For AU-3 frames, a negative justification octet appears in the individual H3 octet of the AU-3 frame containing inverted D-bits. Subsequent pointers will contain the new offset. The following summarizes the rules (Ref. 7) for interpreting the AU-n pointers:
1. During normal operation, the pointer locates the start of the VC-n within the AU-n frame.
2. Any variation from the current pointer value is ignored unless a consistent new value is received three times consecutively or it is preceded by one of the operations described in rules 3, 4, or 5 (below). Any consistent new value received three times consecutively overrides (i.e., takes priority over) rules 3 and 4.
3. If the majority of I-bits of the pointer word are inverted, a positive justification operation
is indicated. Subsequent pointer values shall be incremented by one.
4. If the majority of D-bits of the pointer word are inverted, a negative justification
operation is indicated. Subsequent pointer values shall be decremented by one.
5. If the NDF (new data flag) is set to 1001, then the coincident pointer value shall replace the current one at the offset indicated by the new pointer value, unless the receiver is in a state that corresponds to a loss of pointer.
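Rules 3 and 4 amount to 5-bit majority voting on the inverted bits. A sketch (ours) of that decision, treating the pointer word as a 16-bit integer whose bit 1 is the most significant:

    # Illustrative sketch: majority voting on the I- and D-bits (rules 3 and 4).
    I_BIT_POSITIONS = (7, 9, 11, 13, 15)    # bit numbering as in the text (1 = MSB)
    D_BIT_POSITIONS = (8, 10, 12, 14, 16)

    def majority_inverted(prev_word, curr_word, positions):
        flipped = prev_word ^ curr_word
        votes = sum((flipped >> (16 - p)) & 1 for p in positions)
        return votes >= 3    # at least 3 of the 5 bits inverted

    # If majority_inverted(..., I_BIT_POSITIONS): positive justification, increment pointer.
    # If majority_inverted(..., D_BIT_POSITIONS): negative justification, decrement pointer.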
Administrative Units in the STM-N.
The STM-N payload can support N AUGs, where each AUG may consist of one AU-4 or three AU-3s. The VC-n associated with each AU-n does not have a fixed phase with respect to the STM-N frame. The location of the first octet in the VC-n is indicated by the AU-n pointer. The
AU-n pointer is in a fixed location in the STM-N frame. This is illustrated in Figures 19.17,
19.18, 19.22, and 19.23.
The AU-4 may be used to carry, via the VC-4, a number of TU-ns (n = 1, 2, 3) forming a
two-stage multiplex. An example of this arrangement is shown in Figure 19.22. The VC-n associated with each TU-n does not have a fixed phase relationship with respect to the start
of the VC-4. The TU-n pointer is in a fixed location in the VC-4 and the location of the first
octet of the VC-n is indicated by the TU-n pointer.
Administrative units in an STM-1 frame.
Two-stage multiplex.
The AU-3 may be used to carry, via the VC-3, a number of TU-ns (n = 1, 2) forming a two-stage multiplex. An example of this arrangement is shown in Figure 19.23. The VC-n associated with each TU-n does not have a fixed phase relationship with respect to the start of the VC-3. The TU-n pointer is in a fixed location in the VC-3, and the location of the first octet of the VC-n is indicated by the TU-n pointer (Ref. 7).
Interconnection of STM-1s.
SDH has been designed to be universal, allowing transport of a large variety of signals
including those specified in ITU-T Rec. G.702 (Ref. 8), such as the North American DS1
hierarchy and the European E-1 hierarchy. However, different structures can be employed
for the transport of virtual containers. The following interconnection rules are used.
1. The rule for interconnecting two AUGs based on two different types of administrative
unit, namely, AU-4 and AU-3, is to use the AU-4 structure. Therefore the AUG based on
AU-3 is demultiplexed to the TUG-2 or VC-3 level according to the type of payload and is remultiplexed within an AUG via the TUG-3/VC-4/AU-4 route.
2. The rule for interconnecting VC-11s transported via different types of tributary unit,
namely, TU-11 and TU-12, is to use the TU-11 structure. VC-11, TU-11, and TU-12 are
described in ITU-T Rec. G.709 (Ref. 7).
Chapter 4
Radio Broadcasting System
Radio is the wireless transmission of signals by modulation of electromagnetic waves with frequencies below those of visible light.
Electromagnetic radiation travels by means of oscillating electromagnetic fields that pass
through the air and the vacuum of space. It does not require a medium of transport.
Information is carried by systematically changing (modulating) some property of the radiated
waves, such as their amplitude or their frequency. When radio waves pass an electrical
conductor, the oscillating fields induce an alternating current in the conductor. This can be
detected and transformed into sound or other signals that carry information.
The word 'radio' is used to describe this phenomenon, and radio transmissions are classed as
radio frequency emissions.
Etymology
Originally, radio or radiotelegraphy was called 'wireless telegraphy', which was shortened to
'wireless'. The prefix radio- in the sense of wireless transmission was first recorded in the
word radioconductor, coined by the French physicist Edouard Branly in 1897 and based on
the verb to radiate (in Latin "radius" means "spoke of a wheel, beam of light, ray"). 'Radio' as
a noun is said to have been coined by advertising expert Waldo Warren (White 1944). The
word appears in a 1907 article by Lee de Forest, was adopted by the United States Navy in
1912 and became common by the time of the first commercial broadcasts in the United
States in the 1920s. (The noun 'broadcasting' itself came from an agricultural term, meaning
'scattering seeds'.) The American term was then adopted by other languages in Europe and
Asia, although British Commonwealth countries retained the term 'wireless' until the mid-20th century. In Japanese, the term 'wireless' is the basis for the term 'radio wave', although
the term for the device that listens to radio waves is literally 'device for receiving sounds'.
In recent years the term 'wireless' has gained renewed popularity through the rapid growth of
short range networking, e.g. WLAN ('Wireless Local Area Network'),WiFi, Bluetooth as well
as mobile telephony, e.g. GSM and UMTS. Today, the term 'radio' often refers to the actual
transceiver device or chip, whereas 'wireless' refers to the system and/or method used for
radio communication. Hence one talks about radio transceivers and Radio Frequency
Identification (RFID), but about wireless devices and wireless sensor networks.
History
Tesla demonstrating wireless transmission during his high-frequency and potential lecture of 1891. After continued research, Tesla presented the fundamentals of radio in 1893.
In 1893, in St. Louis, Missouri, Tesla made devices for his experiments with electricity.
Addressing the Franklin Institute in Philadelphia and the National Electric Light
Association, he described and demonstrated in detail the principles of his wireless work. [2]
The descriptions contained all the elements that were later incorporated into radio systems
before the development of the vacuum tube. He initially experimented with magnetic
receivers, unlike the coherers (detecting devices consisting of tubes filled with iron filings
which had been invented by Temistocle Calzecchi-Onesti at Fermo in Italy in 1884) used by
Guglielmo Marconi and other early experimenters.
In 1894 Alexander Stepanovich Popov built his first radio receiver, which contained a
coherer. Further refined as a lightning detector, it was presented to the Russian Physical and
Chemical Society on May 7, 1895.
In 1896, Marconi was awarded the British patent 12039, Improvements in transmitting
electrical impulses and signals and in apparatus there-for, for radio. In 1897 he established
the world's first radio station on the Isle of Wight, England. Marconi opened the world's
first "wireless" factory in Hall Street, Chelmsford, England in 1898, employing around 50
people.
The next great invention was the vacuum tube detector, invented by the Westinghouse
engineers. On Christmas Eve, 1906, Reginald Fessenden used a synchronous rotary-spark
transmitter for the first radio program broadcast, from Brant Rock, Massachusetts. Ships at
sea heard a broadcast that included Fessenden playing O Holy Night on the violin and
reading a passage from the Bible. The first radio news program was broadcast August 31,
1920 by station 8MK in Detroit, Michigan. The first college radio station, 2ADD, renamed
WRUC in 1940, began broadcasting October 14, 1920 from Union College, Schenectady,
New York. The first regular entertainment broadcasts commenced in 1922 from the
Marconi Research Centre at Writtle, near Chelmsford, England.
One of the first developments in the early 20th century (1900-1959) was that aircraft used
commercial AM radio stations for navigation. This continued until the early 1960s when
VOR systems finally became widespread (though AM stations are still marked on U.S.
aviation charts). In the early 1930s, single sideband and frequency modulation were invented
by amateur radio operators. By the end of the decade, they were established commercial
modes. Radio was used to transmit pictures visible as television as early as the 1920s.
Commercial television transmissions started in North America and Europe in the 1940s. In
1954, Regency introduced a pocket transistor radio, the TR-1, powered by a "standard 22.5
V Battery".
In 1960, Sony introduced its first transistorized radio, small enough to fit in a vest pocket,
and able to be powered by a small battery. It was durable, because there were no tubes to
burn out. Over the next 20 years, transistors replaced tubes almost completely except for
very high-power uses. By 1963 color television was being regularly transmitted commercially,
and the first (radio) communication satellite, TELSTAR, was launched. In the late 1960s, the
U.S. long-distance telephone network began to convert to a digital network, employing
digital radios for many of its links. In the 1970s, LORAN became the premier radio
navigation system. Soon, the U.S. Navy experimented with satellite navigation, culminating
in the invention and launch of the GPS constellation in 1987. In the early 1990s, amateur
radio experimenters began to use personal computers with audio cards to process radio
signals. In 1994, the U.S. Army and DARPA launched an aggressive, successful project to
construct a software radio that could become a different radio on the fly by changing
software. Digital transmissions began to be applied to broadcasting in the late 1990s.
Uses of radio
Early uses were maritime, for sending telegraphic messages using Morse code between ships
and land. The earliest users included the Japanese Navy scouting the Russian fleet during the
Battle of Tsushima in 1905. One of the most memorable uses of marine telegraphy was
during the sinking of the RMS Titanic in 1912, including communications between operators
on the sinking ship and nearby vessels, and communications to shore stations listing the
survivors.
Radio was used to pass on orders and communications between armies and navies on both
sides in World War I; Germany used radio communications for diplomatic messages once its
submarine cables were cut by the British. The United States passed on President Woodrow
Wilson's Fourteen Points to Germany via radio during the war. Broadcasting began from
San Jose in 1909[4], and became feasible in the 1920s, with the widespread introduction of
radio receivers, particularly in Europe and the United States. Besides broadcasting, point-to-point broadcasting, including telephone messages and relays of radio programs, became widespread in the 1920s and 1930s. Another use of radio in the pre-war years was the
development of detecting and locating aircraft and ships by the use of radar (RAdio
Detection And Ranging).
Today, radio takes many forms, including wireless networks, mobile communications of all
types, as well as radio broadcasting. Before the advent of television, commercial radio
broadcasts included not only news and music, but dramas, comedies, variety shows, and
many other forms of entertainment. Radio was unique among methods of dramatic presentation in that it used only sound. For more, see radio programming.
Audio
A Fisher 500 AM/FM hi-fi receiver from 1959.
AM broadcast radio sends music and voice in the Medium Frequency (MF—0.300 MHz to 3
MHz) radio spectrum. AM radio uses amplitude modulation, in which the amplitude of the
transmitted signal is made proportional to the sound amplitude captured (transduced) by the
microphone while the transmitted frequency remains unchanged. Transmissions are affected
by static and interference because lightning and other sources of radio that are transmitting
at the same frequency add their amplitudes to the original transmitted amplitude.
FM broadcast radio sends music and voice with higher fidelity than AM radio. In frequency
modulation, amplitude variations at the microphone cause the transmitter frequency to fluctuate. Because the audio signal modulates the frequency and not the amplitude, an FM
signal is not subject to static and interference in the same way as AM signals. FM is
transmitted in the Very High Frequency (VHF—30 MHz to 300 MHz) radio spectrum.
VHF radio waves act more like light, traveling in straight lines, hence the reception range is
generally limited to about 50-100 miles. During unusual upper atmospheric conditions, FM
signals are occasionally reflected back towards the Earth by the ionosphere, resulting in long-distance FM reception. FM receivers are subject to the capture effect, which causes the
radio to only receive the strongest signal when multiple signals appear on the same
frequency. FM receivers are relatively immune to lightning and spark interference.
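The contrast between the two modulation schemes is compact enough to state in code. A sketch (ours) in Python with numpy, using scaled-down frequencies purely for illustration:

    # Illustrative sketch: AM varies the carrier amplitude, FM its frequency.
    import numpy as np

    fs = 48_000                               # sample rate, Hz
    t = np.arange(0, 0.01, 1 / fs)            # 10 ms of signal
    message = np.sin(2 * np.pi * 1_000 * t)   # 1-kHz "audio" tone
    fc = 10_000                               # carrier frequency, Hz (scaled down)

    am = (1 + 0.5 * message) * np.cos(2 * np.pi * fc * t)        # amplitude follows audio
    kf = 500                                  # peak frequency deviation, Hz
    phase = 2 * np.pi * (fc * t + kf * np.cumsum(message) / fs)  # integral of instantaneous frequency
    fm = np.cos(phase)                                           # frequency follows audio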
FM Subcarrier services are secondary signals transmitted "piggyback" along with the main
program. Special receivers are required to utilize these services. Analog channels may contain
alternative programming, such as reading services for the blind, background music or stereo
sound signals. In some extremely crowded metropolitan areas, the subchannel program
might be an alternate foreign language radio program for various ethnic groups. Subcarriers
can also transmit digital data, such as station identification, the current song's name, web
addresses, or stock quotes. In some countries, FM radios automatically retune themselves to
the same channel in a different district by using sub-bands.
Aviation voice radios use VHF AM. AM is used so that multiple stations on the same
channel can be received. (Use of FM would result in stronger stations blocking out reception
of weaker stations due to FM's capture effect). Aircraft fly high enough that their
transmitters can be received hundreds of miles (or kilometres) away, even though they are
using VHF.
Marine voice radios can use AM in the shortwave High Frequency (HF—3 MHz to 30
MHz) radio spectrum for very long ranges or narrowband FM in the VHF spectrum for
much shorter ranges. Government, police, fire and commercial voice services use
narrowband FM on special frequencies. Fidelity is sacrificed to use a smaller range of radio frequencies, usually 5 kHz of deviation rather than the 75 kHz used by FM broadcasts and the 25 kHz used by TV sound.
Civil and military HF (high frequency) voice services use shortwave radio to contact ships at
sea, aircraft and isolated settlements. Most use single sideband voice (SSB), which uses less
bandwidth than AM. On an AM radio SSB sounds like ducks quacking. Viewed as a graph of
frequency versus power, an AM signal shows power where the frequencies of the voice add
and subtract with the main radio frequency. SSB cuts the bandwidth in half by suppressing
the carrier and (usually) lower sideband. This also makes the transmitter about three times
more powerful, because it doesn't need to transmit the unused carrier and sideband.
TETRA (Terrestrial Trunked Radio) is a digital cell phone system for military, police, and ambulance services. Commercial services such as XM, WorldSpace, and Sirius offer encrypted digital satellite radio.
Telephony
Mobile phones transmit to a local cell site (transmitter/receiver) that ultimately connects to
the public switched telephone network (PSTN) through an optic fiber or microwave radio
and other network elements. When the mobile phone nears the edge of the cell site's radio
coverage area, the central computer switches the phone to a new cell. Cell phones originally
used FM, but now most use various digital modulation schemes. Satellite phones use
satellites rather than cell towers to communicate. They come in two types: INMARSAT and
Iridium. Both types provide world-wide coverage. INMARSAT uses geosynchronous
satellites, with aimed high-gain antennas on the vehicles. Iridium uses 66 Low Earth Orbit
satellites as the cells.
Video
Television sends the picture as AM and the sound as FM, with the sound carrier a fixed
frequency (4.5 MHz in the NTSC system) away from the video carrier. Analog television also
uses a vestigial sideband on the video carrier to reduce the bandwidth required.
Digital television uses quadrature amplitude modulation. A Reed-Solomon error correction
code adds redundant correction codes and allows reliable reception during moderate data
loss. Although many current and future codecs can be sent in the MPEG-2 transport stream
container format, as of 2006 most systems use a standard-definition format almost identical
to DVD: MPEG-2 video in anamorphic widescreen and MPEG layer 2 (MP2) audio. High-definition television is possible simply by using a higher-resolution picture, but H.264/AVC
is being considered as a replacement video codec in some regions for its improved
compression. With the compression and improved modulation involved, a single "channel"
can contain a high-definition program and several standard-definition programs.
Navigation
All satellite navigation systems use satellites with precision clocks. The satellite transmits its
position and the time of the transmission. The receiver listens to four satellites and can figure out its position as being on a line that is tangent to a spherical shell around each satellite,
determined by the time-of-flight of the radio signals from the satellite. A computer in the
receiver does the math.
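The core arithmetic is just time-of-flight: each satellite's signal delay gives the radius of a sphere centered on that satellite. A sketch (ours) of one such pseudorange:

    # Illustrative sketch: one pseudorange from a satellite's transmit/receive times.
    C_M_PER_S = 299_792_458.0

    def pseudorange_m(t_transmit_s, t_receive_s):
        # distance = speed of light x signal travel time; receiver clock bias ignored here
        return C_M_PER_S * (t_receive_s - t_transmit_s)

    print(pseudorange_m(0.0, 0.072))   # ~21,600 km, a plausible satellite-to-ground path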
Radio direction-finding is the oldest form of radio navigation. Before 1960 navigators used
movable loop antennas to locate commercial AM stations near cities. In some cases they
used marine radiolocation beacons, which share a range of frequencies just above AM radio
with amateur radio operators. Loran systems also used time-of-flight radio signals, but from
radio stations on the ground. VOR (Very High Frequency Omnidirectional Range), systems
(used by aircraft), have an antenna array that transmits two signals simultaneously. A
directional signal rotates like a lighthouse at a fixed rate. When the directional signal is facing
north, an omnidirectional signal pulses. By measuring the difference in phase of these two
signals, an aircraft can determine its bearing or radial from the station, thus establishing a
line of position. An aircraft can get readings from two VORs and locate its position at the intersection of the two radials, known as a "fix." When the VOR station is collocated with
DME (Distance Measuring Equipment), the aircraft can determine its bearing and range
from the station, thus providing a fix from only one ground station. Such stations are called
VOR/DMEs. The military operates a similar system of navaids, called TACANs, which are
often built into VOR stations. Such stations are called VORTACs. Because TACANs
include distance measuring equipment, VOR/DME and VORTAC stations are identical in
navigation potential to civil aircraft.
Radar
Radar (Radio Detection And Ranging) detects things at a distance by bouncing radio waves
off them. The delay caused by the echo measures the distance. The direction of the beam
determines the direction of the reflection. The polarization and frequency of the return can
sense the type of surface. Navigational radars scan a wide area two to four times per minute.
They use very short waves that reflect from earth and stone. They are common on commercial ships and long-distance commercial aircraft.
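The distance arithmetic is simple: the echo travels out and back, so range is half the delay times the speed of light. A sketch (ours):

    # Illustrative sketch: radar range from echo delay (out-and-back path).
    C_M_PER_S = 299_792_458.0

    def radar_range_m(echo_delay_s):
        return C_M_PER_S * echo_delay_s / 2

    print(radar_range_m(123.5e-6) / 1852)   # ~10 nautical miles for a 123.5-us echo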
General purpose radars generally use navigational radar frequencies, but modulate and
polarize the pulse so the receiver can determine the type of surface of the reflector. The best
general-purpose radars distinguish the rain of heavy storms, as well as land and vehicles.
Some can superimpose sonar data and map data from GPS position.
Search radars scan a wide area with pulses of short radio waves. They usually scan the area
two to four times a minute. Sometimes search radars use the doppler effect to separate
moving vehicles from clutter. Targeting radars use the same principle as search radar but
scan a much smaller area far more often, usually several times a second or more. Weather radars resemble search radars, but use radio waves with circular polarization and a wavelength chosen to reflect from water droplets. Some weather radars use the Doppler effect to measure wind speeds.
Emergency services
Emergency Position-Indicating Radio Beacons (EPIRBs), Emergency Locating Transmitters
(ELTs) or Personal Locator Beacons (PLBs) are small radio transmitters that satellites can
use to locate a person or vehicle needing rescue. Their purpose is to help rescue people in
the first day, when survival is most likely. There are several types, with widely-varying
performance.
Data (digital radio)
Most new radio systems are digital, see also: Digital TV, Satellite Radio, Digital Audio
Broadcasting. The oldest form of digital broadcast was spark gap telegraphy, used by
pioneers such as Marconi. By pressing the key, the operator could send messages in Morse
code by energizing a rotating commutating spark gap. The rotating commutator produced a
tone in the receiver, where a simple spark gap would produce a hiss, indistinguishable from
static. Spark gap transmitters are now illegal, because their transmissions span several
hundred megahertz. This is very wasteful of both radio frequencies and power.
The next advance was continuous wave telegraphy, or CW (Continuous Wave), in which a
pure radio frequency, produced by a vacuum tube electronic oscillator was switched on and
off by a key. A receiver with a local oscillator would "heterodyne" with the pure radio
frequency, creating a whistle-like audio tone. CW uses less than 100 Hz of bandwidth. CW is
still used, these days primarily by amateur radio operators (hams). Strictly, on-off keying of a
carrier should be known as "Interrupted Continuous Wave" or ICW.
Radio teletypes usually operate on short-wave (HF) and are much loved by the military
because they create written information without a skilled operator. They send a bit as one of
two tones. Groups of five or seven bits become a character printed by a teletype. From
about 1925 to 1975, radio teletype was how most commercial messages were sent to less
developed countries. These are still used by the military and weather services.
Aircraft use a 1200 Baud radioteletype service over VHF to send their ID, altitude and
position, and get gate and connecting-flight data. Microwave dishes on satellites, telephone
exchanges and TV stations usually use quadrature amplitude modulation (QAM). QAM
sends data by changing both the phase and the amplitude of the radio signal. Engineers like
QAM because it packs the most bits into a radio signal. Usually the bits are sent in "frames"
that repeat. A special bit pattern is used to locate the beginning of a frame.
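The phase-plus-amplitude idea is easy to see in a toy 16-QAM mapper (a sketch, ours; real systems add Gray coding and pulse shaping):

    # Illustrative sketch: 16-QAM packs four bits per symbol.
    LEVELS = (-3, -1, 1, 3)   # amplitude levels on each axis

    def qam16_symbol(b0, b1, b2, b3):
        i = LEVELS[2 * b0 + b1]   # two bits pick the in-phase level
        q = LEVELS[2 * b2 + b3]   # two bits pick the quadrature level
        return complex(i, q)      # one constellation point = phase + amplitude

    print(qam16_symbol(1, 0, 0, 1))   # -> (1-1j)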
Systems that need reliability, or that share their frequency with other services, may use "coded orthogonal frequency-division multiplexing," or COFDM. COFDM breaks a
digital signal into as many as several hundred slower subchannels. The digital signal is often
sent as QAM on the subchannels. Modern COFDM systems use a small computer to make
and decode the signal with digital signal processing, which is more flexible and far less
expensive than older systems that implemented separate electronic channels. COFDM resists
fading and ghosting because the narrow-channel QAM signals can be sent slowly. An
adaptive system, or one that sends error-correction codes can also resist interference,
because most interference can affect only a few of the QAM channels. COFDM is used for
WiFi, some cell phones, Digital Radio Mondiale, Eureka 147, and many other local area
network, digital TV and radio standards.
Heating
Radio-frequency energy generated for heating of objects is generally not intended to radiate
outside of the generating equipment, to prevent interference with other radio signals.
Microwave ovens use intense radio waves to heat food. (Note: It is a common
misconception that the radio waves are tuned to the resonant frequency of water molecules.
The microwave frequencies used are actually about a factor of ten below the resonant
frequency.) Diathermy equipment is used in surgery for sealing of blood vessels. Induction
furnaces are used for melting metal for casting.
Mechanical force
Tractor beams can use radio waves which exert small electrostatic and magnetic forces.
These are enough to perform station-keeping in microgravity environments. Conceptually,
spacecraft propulsion: Radiation pressure from intense radio waves has been proposed as a
propulsion method for an interstellar probe called Starwisp. Since the waves are long, the
probe could be a very light metal mesh, and thus achieve higher accelerations than if it were
a solar sail.
Amateur radio service
Amateur radio is a hobby in which enthusiasts purchase or build their own equipment and
use radio for their own enjoyment. They may also provide an emergency and public-service
radio service. This has been of great use, saving lives in many instances. Radio amateurs are
licensed to use frequencies in a large number of narrow bands throughout the radio
spectrum. They use all forms of encoding, including obsolete and experimental ones. Several
forms of radio were pioneered by radio amateurs and later became commercially important
including FM, single-sideband (SSB), AM, digital packet radio and satellite repeaters. Some
amateur frequencies may be disrupted by power-line internet service.
Unlicensed radio services
Personal radio services such as Citizens' Band Radio, Family Radio Service, Multi-Use Radio
Service and others exist in North America to provide simple, (usually) short range
communication for individuals and small groups, without the overhead of licensing. Similar
services exist in other parts of the world. These radio services involve the use of handheld or
mobile radios better known as "walkie-talkies".
Radio control (RC)
Radio remote control is the use of radio waves to transmit control data to a remote object, as in some early forms of guided missile, some early TV remotes, and a range of model boats, cars
and airplanes. Large industrial remote-controlled equipment such as cranes and switching
locomotives now usually use digital radio techniques to ensure safety and reliability.
In Madison Square Garden, at the Electrical Exhibition of 1898, Nikola Tesla successfully
demonstrated a radio-controlled boat.[5] He was awarded U.S. patent No. 613,809 for a
"Method of and Apparatus for Controlling Mechanism of Moving Vessels or Vehicles." [6]
The electromagnetic spectrum
Radio waves are a form of electromagnetic radiation, created whenever a charged object (in
normal radio transmission, an electron) accelerates with a frequency that lies in the radio
frequency (RF) portion of the electromagnetic spectrum. In radio, this acceleration is caused
by an alternating current in an antenna. Radio frequencies occupy the range from a few tens
of hertz to three hundred gigahertz, although commercially important uses of radio use only
a small part of this spectrum.[1]
Radio spectrum

Band  Frequency range
ELF   3 Hz – 30 Hz
SLF   30 Hz – 300 Hz
ULF   300 Hz – 3 kHz
VLF   3 kHz – 30 kHz
LF    30 kHz – 300 kHz
MF    300 kHz – 3 MHz
HF    3 MHz – 30 MHz
VHF   30 MHz – 300 MHz
UHF   300 MHz – 3 GHz
SHF   3 GHz – 30 GHz
EHF   30 GHz – 300 GHz
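A sketch (ours) that classifies a frequency into the band designators tabulated above:

    # Illustrative sketch: looking up the band designator for a frequency.
    BANDS = (("ELF", 3, 30), ("SLF", 30, 300), ("ULF", 300, 3e3),
             ("VLF", 3e3, 30e3), ("LF", 30e3, 300e3), ("MF", 300e3, 3e6),
             ("HF", 3e6, 30e6), ("VHF", 30e6, 300e6), ("UHF", 300e6, 3e9),
             ("SHF", 3e9, 30e9), ("EHF", 30e9, 300e9))

    def band_of(freq_hz):
        for name, lo, hi in BANDS:
            if lo <= freq_hz < hi:
                return name
        return "outside the tabulated radio spectrum"

    print(band_of(101.1e6))   # -> VHF (an FM broadcast frequency)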
Other types of electromagnetic radiation, with frequencies above the RF range, are
microwave, infrared, visible light, ultraviolet, X-rays and gamma rays. Since the energy of an
individual photon of radio frequency is too low to remove an electron from an atom, radio
waves are classified as non-ionizing radiation.
Electromagnetic spectrum and diagram of radio transmission of an audio signal. The colours
used in this diagram of the electromagnetic spectrum are for decoration only. They do not
correspond to the wavelengths and frequencies indicated on the scale.
Infrared (IR) radiation is electromagnetic radiation of a wavelength longer than
that of visible light, but shorter than that of radio waves. The name means "below red"
(from the Latin infra, "below"), red being the color of visible light with the longest
wavelength. Infrared radiation has wavelengths between about 750 nm and 1 mm, spanning
three orders of magnitude.
The uses of infrared include military, such as: target acquisition, surveillance, homing and
tracking and non-military, such as thermal efficiency analysis, remote temperature sensing,
short-ranged wireless communication, spectroscopy, and weather forecasting. Infrared
astronomy uses sensor-equipped telescopes to penetrate dusty regions of space, such as molecular clouds; to detect cool objects such as planets; and to view highly red-shifted objects from the early days of the universe.
At the atomic level, infrared energy elicits vibrational modes in a molecule through a change
in the dipole moment, making it a useful frequency range for study of these energy states.
Infrared spectroscopy examines absorption and transmission of photons in the infrared
energy range, based on their frequency and intensity.
Different regions in the infrared
Objects generally emit infrared radiation across a spectrum of wavelengths, but only a
specific region of the spectrum is of interest because sensors are usually designed only to
collect radiation within a specific bandwidth. As a result, the infrared band is often
subdivided into smaller sections. There are no standard divisions, but a commonly used
scheme is:
Near-infrared (NIR): 0.75-1.4 µm in wavelength, defined by the water absorption,
and commonly used in fiber optic telecommunication because of low attenuation
losses in the SiO2 glass (silica) medium. Image intensifiers are sensitive to this area of
the spectrum, about 1 micron, 1,000 nanometres or 10,000 Angstroms. Examples
include night vision devices such as night vision goggles.
Short-wavelength infrared (SWIR): 1.4-3 µm; water absorption increases significantly at 1,450 nm. The 1,530 to 1,560 nm range is the dominant spectral region for long-distance telecommunications.
Mid-wavelength infrared (MWIR) also called intermediate infrared (IIR): 3-8 µm. In
guided missile technology this is the 'heat seeking' region in which the homing heads
of passive IR homing missiles are designed to work, homing on to the IR signature
of the target aircraft, typically the jet engine exhaust plume.
Long-wavelength infrared (LWIR): 8–15 µm. About 10 microns is the "thermal
imaging" region, in which sensors can obtain a completely passive picture of the
outside world based on thermal emissions only and requiring no external light or
thermal source such as the sun, moon, or an infrared illuminator. Forward-looking infrared (FLIR) systems use this area of the spectrum. Sometimes it is also called the
"far infrared."
Far infrared (FIR): 15-1,000 µm (see also far infrared laser)
NIR and SWIR are sometimes called reflected infrared, while MWIR and LWIR are sometimes referred to as thermal infrared. Due to the nature of the blackbody radiation curves, typical
'hot' objects, such as exhaust pipes, often appear brighter in the MW compared to the same
object viewed in the LW.
Astronomers typically divide the infrared spectrum as follows:
near: (0.7-1) to 5 µm
mid: 5 to (25-40) µm
long: (25-40) to (200-350) µm
These divisions are not precise and can vary depending on the publication. The three regions
are used for observation of different temperature ranges, and hence different environments
in space.
A third scheme divides up the band based on the response of various detectors:
Near infrared (NIR): from 0.7 to 1.0 micrometers (from the approximate end of the
response of the human eye to that of silicon)
Short-wave infrared (SWIR): 1.0 to 3 micrometers (from the cut-off of silicon to that of the MWIR atmospheric window; InGaAs covers to about 1.8 micrometers, and the less sensitive lead salts cover the rest of this region)
Mid-wave infrared (MWIR): 3 to 5 micrometers (defined by the atmospheric window
and covered by InSb and HgCdTe and partially PbSe)
Long-wave infrared (LWIR): 8 to 12, or 7 to 14 micrometers: the atmospheric
window (Covered by HgCdTe and microbolometers)
Very-long wave infrared (VLWIR): 12 to about 30 micrometers, covered by doped
silicon
These divisions are justified by the different human response to this radiation: near infrared
is the region closest in wavelength to the radiation detectable by the human eye, mid and far
infrared are progressively further from the visible regime. Other definitions follow different
physical mechanisms (emission peaks, vs. bands, water absorption) and the newest follow
technical reasons (The common silicon detectors are sensitive to about 1,050 nm, while
InGaAs' sensitivity starts around 950 nm and ends between 1,700 and 2,600 nm, depending
on the specific configuration). Unfortunately, international standards for these specifications
are not currently available.
Plot of atmospheric transmittance in part of the infrared region.
The boundary between visible and infrared light is not precisely defined. The human eye is markedly less sensitive to light above 700 nm in wavelength, so longer wavelengths make insignificant contributions to scenes illuminated by common light sources. But particularly
intense light (e.g., from lasers, or from bright daylight with the visible light removed by
colored gels) can be detected up to approximately 780 nm, and will be perceived as red light.
The onset of infrared is defined (according to different standards) at various values typically
between 700 nm and 800 nm.
Telecommunication bands in the infrared
In optical communications, the part of the infrared spectrum that is used is divided into
several bands based on availability of light sources, transmitting/absorbing materials (fibers)
and detectors:
Band    Descriptor            Wavelength range
O band  Original              1260–1360 nm
E band  Extended              1360–1460 nm
S band  Short wavelength      1460–1530 nm
C band  Conventional          1530–1565 nm
L band  Long wavelength       1565–1625 nm
U band  Ultralong wavelength  1625–1675 nm
The C-band is the dominant band for long-distance telecommunication networks. The S and
L bands are based on less well established technology, and are not as widely deployed.
Heat
Infrared radiation is popularly known as "heat" or sometimes "heat radiation," since many
people attribute all radiant heating to infrared light. This is a widespread misconception,
since light and electromagnetic waves of any frequency will heat surfaces that absorb them.
Infrared light from the Sun only accounts for 50% of the heating of the Earth, the rest being
caused by visible light that is absorbed then re-radiated at longer wavelengths. Visible light or
ultraviolet-emitting lasers can char paper and incandescently hot objects emit visible
radiation. It is true that objects at room temperature will emit radiation mostly concentrated
in the 8-12 micron band, but this is not distinct from the emission of visible light by
incandescent objects and ultraviolet by even hotter objects (see black body and Wien's
displacement law).
Heat is energy in transient form that flows due to temperature difference. Unlike heat
transmitted by thermal conduction or thermal convection, radiation can propagate through a
vacuum.
The concept of emissivity is important in understanding the infrared emissions of objects.
This is a property of a surface which describes how its thermal emissions deviate from the
ideal of a blackbody. To further explain, two objects at the same physical temperature will
not 'appear' the same temperature in an infrared image if they have differing emissivities.
Point-to-point telecommunications generally refers to a connection restricted to two
endpoints, usually host computers.
Point-to-point is sometimes referred to as P2P, or Pt2Pt, or variations of this. Among other
things, P2P also refers to peer-to-peer file sharing networks.
Point-to-point is distinct from point-to-multipoint and broadcast.
Basic point-to-point data link
A traditional point-to-point data link is a communications medium with exactly two
endpoints and no data or packet formatting. The host computers at either end had to take
full responsibility for formatting the data transmitted between them. The connection
between the computer and the communications medium was generally implemented through
an RS-232 interface, or something similar. Computers in close proximity could be connected
by wires directly between their interface cards.
When connected at a distance, each endpoint was fitted with a modem to convert the digital
data stream into analog telecommunications signals and back. When the connection used a
telecommunications provider, the connection was called a dedicated, leased, or private line.
The ARPANET used leased lines to provide point-to-point data links between its packet-switching nodes, which were called Interface Message Processors.
Modern point-to-point links
More recently (2003), the term point-to-point telecommunications relates to wireless data
communications for Internet or Voice over IP via radio frequencies in the multi-gigahertz
range. It also includes technologies such as free-space laser links, but in all cases assumes a
line-of-sight transmission path that can be fairly tightly beamed from transmitter to receiver.
The telecommunications signal is typically bi-directional, either time division multiple access
(TDMA) or channelized.
Among hubs and switches, a hub provides a point-to-multipoint (or simply multipoint) circuit
which divides the total bandwidth supplied by the hub among each connected client node. A
switch, on the other hand, provides a series of point-to-point circuits, via microsegmentation,
which allows each client node to have a dedicated circuit and the added advantage of having
full-duplex connections.
Broadcasting

Broadcasting is the distribution of audio and/or video signals which transmit programs to an
audience. The audience may be the general public or a relatively large sub-audience, such as
children or young adults.
There is a wide variety of broadcasting systems, all of which have different capabilities. The
largest broadcasting systems are institutional public address systems, which transmit
nonverbal messages and music within a school or hospital, and low-powered broadcasting
systems which transmit radio stations or television stations to a small area. National radio
and television broadcasters have nationwide coverage, using retransmitter towers, satellite
systems, and cable distribution. Satellite radio and television broadcasters can cover even
wider areas, such as entire continents, and Internet channels can distribute text or streamed
music worldwide.
The sequencing of content in a broadcast is called a schedule. As with all technological
endeavors, a number of technical terms and slang have developed. Television and radio
programs are distributed through radio broadcasting or cable, often both simultaneously. By
coding signals and having decoding equipment in homes, the latter also enables
subscription-based channels and pay-per-view services.
The term "broadcast" was coined by early radio engineers from the midwestern United
States. Broadcasting forms a very large segment of the mass media. Broadcasting to a very
narrow range of audience is called narrowcasting.
Economically there are a few ways in which stations are able to continually broadcast. Each
differs in the method by which stations are funded:
in-kind donations of time and skills by volunteers (common with community
broadcasters)
direct government payments or operation of public broadcasters
indirect government payments, such as radio and television licenses
grants from foundations or business entities
selling advertising or sponsorships
public subscription or membership
fees charged to all owners of TV sets or radios, regardless of whether they intend to
receive that program or not (an approach used in the UK)
Broadcasters may rely on a combination of these business models. For example, National
Public Radio, a non-commercial network within the United States, receives grants from the
Corporation for Public Broadcasting (which in turn receives funding from the U.S.
government), public membership fees, and revenue from selling "extended credits" to corporations.
Recorded broadcasts and live broadcasts
One can distinguish between recorded and live broadcasts. The former allows correcting
errors, removing superfluous or undesired material, rearranging content, applying slow motion
and repetitions, and other techniques to enhance the program. However, some live events
such as sports telecasts incorporate some of these techniques, for example slow-motion
replays of important goals or hits, within the live telecast.
American radio network broadcasters habitually forbade prerecorded broadcasts in the 1930s
and 1940s, requiring radio programs played for the Eastern and Central time zones to be
repeated three hours later for the Pacific time zone. This restriction was dropped for special
occasions, as in the case of the crash of the German dirigible airship Hindenburg at Lakehurst, New Jersey
in 1937. During World War II, prerecorded broadcasts from war correspondents were
allowed on U.S. radio. In addition, American radio programs were recorded for playback by
Armed Forces Radio stations around the world.
A disadvantage of recording first is that the public may know the outcome of an event from
another source, which may be a spoiler. In addition, prerecording prevents live announcers
from deviating from an officially-approved script, as occurred with propaganda broadcasts
from Germany in the 1940s and with Radio Moscow in the 1980s.
Many events are advertised as being live, although they are often "recorded live" (sometimes
this is referred to as "live-to-tape"). This is particularly true of performances of musical
artists on radio when they visit for an in-studio concert performance. This intentional
blurring of the distinction between live and recorded media is viewed with chagrin among
many music lovers. Similar situations have sometimes appeared in television ("The Cosby Show
is recorded in front of a live studio audience").
Distribution methods
A broadcast may be distributed through several physical means. If coming directly from the
studio at a single radio or TV station, it is simply sent through the air chain to the transmitter
and thence from the antenna on the tower out to the world. Programming may also come
through a communications satellite, played either live or recorded for later transmission.
Networks of stations may simulcast the same programming at the same time, originally via
microwave link, and now mostly by satellite.
Distribution to stations or networks may also be through physical media, such as analogue or
digital videotape, CD, DVD, and sometimes other formats. Usually these are included in
another broadcast, such as when electronic news gathering returns a story to the station for
inclusion on a news programme.
The final leg of broadcast distribution is how the signal gets to the listener or viewer. It may
come over the air as with a radio station or TV station to an antenna and receiver, or may
come through cable TV or cable radio (or "wireless cable") via the station or directly
from a network. The Internet may also bring either radio or TV to the recipient, especially
with multicasting allowing the signal and bandwidth to be shared.
The term "broadcast network" is often used to distinguish networks that broadcast an over-the-air television signal that can be received using a television antenna from so-called
networks that are broadcast only via cable or satellite television. The term "broadcast
television" can refer to the programming of such networks.
History
The earliest radio stations were simply radiotelegraph systems and did not carry
audio. The first claimed audio transmission that could be termed a broadcast occurred on
Christmas Eve in 1906, and was made by Reginald Fessenden. While many early
experimenters attempted to create systems similar to radiotelephone devices where only two
parties were meant to communicate, there were others who intended to transmit to larger
audiences. Charles Herrold started broadcasting in California in 1909 and was carrying audio
by the next year.
For the next decade, radio tinkerers had to build their own radio receivers. KDKA AM of
Pittsburgh, Pennsylvania (owned by Westinghouse) started broadcasting as the first licensed
"commercial" radio station on November 2, 1920. The commercial designation came from
the type of license—they didn't start airing advertisements until a few years later. The first
broadcast was the results of the 1920 U.S. presidential election. The Montreal station that
became CFCF-AM began program broadcasts on May 20, 1920, and the Detroit station that
became WWJ began program broadcasts beginning on August 20, 1920, although neither
held a license at the time.
Radio Argentina began regularly scheduled transmissions from the Teatro Coliseo in Buenos
Aires on August 27, 1920, making its own priority claim. The station got its license on
November 19, 1923. The delay was due to the lack of official Argentine licensing procedures
before that date. This station continued regular broadcasting of entertainment and cultural
fare for several decades.
When internet based radio became feasible in the mid '90s, the new medium required no
licensing and the stations could broadcast from anywhere in the world without the need for
"over the air" transmitters. This greatly reduced the overhead for establishing a station, and
in 1996, 'A' Net Station (A.N.E.T.) began broadcasting commercial free from Antarctica.
M.I.T. developed the "Radio Locator" list of radio stations. After stations started
streaming audio on the internet, Radio-Locator added this to their search engine so anyone
could locate a station's website and listen to a station offering a worldwide stream. The list
also tracks "terrestrial" radio stations that may not have live audio on the net, or even a
website, but whose station information can be found through various other search queries.
Types
Radio stations are of several types. The best known are the AM and FM stations; these
include commercial, public, and nonprofit varieties, as well as student-run campus radio
stations and hospital radio stations, which can be found throughout the developed world.
Although now being eclipsed by internet-distributed radio, there are many stations that
broadcast on shortwave bands using AM technology that can be received over thousands of
miles (especially at night). For example, the BBC has a full schedule transmitted via
shortwave. These broadcasts are very sensitive to atmospheric conditions and sunspots.
AM
AM stations were the earliest broadcasting stations to be developed. AM refers to amplitude
modulation, a mode of broadcasting radio waves by varying the amplitude of the carrier
signal in response to the amplitude of the signal to be transmitted.
One of the advantages of AM is that its unsophisticated signal can be detected (turned into
sound) with simple equipment. If a signal is strong enough, not even a power source is
needed; building an unpowered crystal radio receiver was a common childhood project in
the early years of radio.
AM broadcasts occur on North American airwaves in the mediumwave frequency range of
530 to 1700 kHz (known as the "standard broadcast band"). The band was expanded in the
1990s by adding nine channels from 1620 to 1700 kHz. Channels are spaced every 10 kHz in
the Americas, and generally every 9 kHz everywhere else.
Many countries outside of the U.S. use a similar frequency band for AM transmissions.
Europe also uses the longwave band. In response to the growing popularity of FM radio
stations in the late 1980s and early 1990s, some North American stations began broadcasting
in AM stereo, though this never really gained acceptance.
AM radio has some serious shortcomings.
The signal is subject to interference from electrical storms (lightning) and other EMI.
Fading of the signal can be severe at night.
AM signals exhibit diurnal variation, travelling much longer distances at night. In a
crowded channel environment this means that the power of regional channels which share
a frequency must be reduced at night or directionally beamed in order to avoid
interference, which reduces the potential nighttime audience. Some stations have
frequencies unshared with other stations; these are called clear channel stations. Many of
them can be heard across much of the country at night.
AM radio transmitters can transmit audio frequencies up to 20 kHz (now limited to 10
kHz in the US due to FCC rules designed to reduce interference), but most receivers are
only capable of reproducing frequencies up to 5 kHz or less. At the time that AM
broadcasting began in the 1920s, this provided adequate fidelity for existing
microphones, 78 rpm recordings, and loudspeakers. The fidelity of sound equipment
subsequently improved considerably but the receivers did not. Reducing the bandwidth of
the receivers reduces the cost of manufacturing and makes them less prone to
interference. In the United States, AM stations are never assigned adjacent channels in
the same service area. This prevents the sideband energy generated by two stations from
interfering with each other. Bob Carver created an AM stereo tuner employing notch
filtering that demonstrated an AM broadcast can meet or exceed the 15 kHz bandwidth of
FM stations without objectionable interference. After a few years the tuner was
discontinued; Bob Carver had left the company and Carver Corporation later cut the
number of models produced before discontinuing production completely. AM stereo
broadcasts declined with the advent of HD Radio.
FM
FM refers to frequency modulation, and occurs on VHF airwaves in the frequency range of
88 to 108 MHz everywhere (except Japan and Russia). Japan uses the 76 to 90 MHz band.
FM stations are much more popular in economically developed regions, such as Europe and
the United States, especially since higher sound fidelity and stereo broadcasting became
common in this format.
FM radio was invented by Edwin H. Armstrong in the 1930s for the specific purpose of
overcoming the interference (static) problem of AM radio, to which FM is largely immune. At the
same time, greater fidelity was made possible by spacing stations further apart. Instead of 10
kHz apart, they are 200 kHz apart—the difference between the lowest current FM frequency
in the U.S., 88.1 MHz and the next lowest, 88.3 MHz. This was far in advance of the audio
equipment of the 1940s, but wide interchannel spacing was chosen to reduce interference
problems that existed with AM.
In fact 200 kHz is not needed to accommodate an audio signal — 20 kHz to 30 kHz is all
that is necessary for a narrowband FM signal. The 200 kHz bandwidth allowed room for
±75 kHz signal deviation from the assigned frequency plus a 50 kHz guardband to eliminate
adjacent channel interference. The larger bandwidth allows for broadcasting a 15 kHz
bandwidth audio signal plus a 38 kHz stereo "subcarrier" — a piggyback signal that rides on
the main signal. Additional unused capacity is used by some broadcasters to transmit utility
functions such as background music for public areas, GPS auxiliary signals, or financial
market data.
The AM radio problem of interference at night was addressed in a different way. At the time
FM was set up, the only available frequencies were far higher in the spectrum than those
used for AM radio. Using these frequencies meant that even at far higher power, the range
of a given FM signal was much lower, thus its market was more local than for AM radio.
Reception range at night was the same as daytime, and while the problem of interference
between stations has not disappeared, it is far less.
The original FM radio service in the U.S. was the Yankee Network, located in New
England. Broadcasting began in the early 1940s but did not pose a significant threat to the
AM broadcasting industry. It required purchase of a special receiver. The frequencies used
were not those used today: 42 to 50 megahertz. The change to the current frequencies, 88 to
108 megahertz, began at the end of World War II and was to some extent imposed by AM
radio owners so as to cripple what was by now realized to be a potentially serious threat.
FM radio on the new band had to begin from step one. As a commercial venture it remained
a little used audio enthusiast's medium until the 1960s. The more prosperous AM stations, or
their owners, acquired FM licenses and often broadcast the same programming on the FM
station as on the AM station (simulcasting). The FCC limited this practice in the 1970s. By
the 1980s, since almost all new radios included both AM and FM tuners (without any
government mandate), FM became the dominant medium, especially in cities. Because of its
greater range, AM remained more common in rural environments.
Digital
Digital radio broadcasting has emerged, first in Europe (the UK in 1995 and Germany in
1999), and later in the United States. The European system is named DAB, for Digital Audio
Broadcasting, and uses the public domain EUREKA 147 system. In the United States, the
IBOC system is named HD Radio and owned by a consortium of private companies called
iBiquity. An international non-profit consortium Digital Radio Mondiale (DRM), has
introduced the public domain DRM system.
It is expected that for the next 10 to 20 years, all these systems will co-exist, while by 2015 to
2020 digital radio may predominate, at least in the developed countries.
Satellite
Satellite radio broadcasters are slowly emerging, but the enormous entry costs of space-based
satellite transmitters, and restrictions on available radio-spectrum licenses, have restricted
growth of this market. In the USA and Canada, just two services, XM Satellite Radio and
Sirius Satellite Radio exist.
Other
Many other non-broadcast types of radio stations exist. These include:
base stations for police, fire and ambulance networks
military base stations
dispatch base stations for taxis, trucks, and couriers
emergency broadcast systems
amateur radio stations
Program formats
Radio program formats differ by country, regulation and markets. For instance, the U.S.
Federal Communications Commission designates the 88–92 megahertz band in the U.S. for
non-profit or educational programming, with advertising prohibited.
In addition, formats change in popularity as time passes and technology improves. Early
radio equipment only allowed program material to be broadcast in real time, known as live
broadcasting. As technology for sound recording improved, an increasing proportion of
broadcast programming used pre-recorded material. A current trend is the automation of
radio stations. Some stations now operate without direct human intervention by using
entirely pre-recorded material sequenced by computer control.
In electrical engineering and computer science, channel capacity is the amount of discrete
information that can be reliably transmitted over a channel. By the noisy-channel coding
theorem, the channel capacity of a given channel is the limiting information transport rate (in
units of information per unit time) that can be achieved with vanishingly small error
probability.
Information theory, developed by Claude E. Shannon in 1948, defines the notion of channel
capacity and provides a mathematical model by which one can compute the maximal amount
of information that can be carried by a channel. The key result states that the capacity of the
channel, as defined above, is given by the maximum of the mutual information between the
input and output of the channel, where the maximization is with respect to the input
distribution.
Formal Definition
Here X represents the space of messages transmitted, and Y the space of messages received
during a unit time over our channel. Let pY|X(y|x) be the conditional distribution function
of Y given X. We will consider pY|X(y|x) to be an inherent fixed property of our
communications channel (representing the nature of the noise of our channel). Then the
joint distribution pX,Y(x,y) of X and Y is completely determined by our channel and by our
choice of pX(x), the marginal distribution of messages we choose to send over the channel.
The joint distribution can be recovered by using the identity

pX,Y(x,y) = pY|X(y|x) pX(x)
Under these constraints, we would like to maximize the amount of information, or the
signal, we can communicate over the channel. The appropriate measure for this is the mutual
information I(X;Y), and this maximum mutual information is called the channel capacity
and is given by

C = max I(X;Y), where the maximization is over all input distributions pX(x).
Noisy channel coding theorem
The channel coding theorem states that for any ε > 0 and for any rate R less than the
channel capacity C, there is an encoding and decoding scheme that can be used to ensure
that the probability of block error is less than ε for any sufficiently long code. Also, for any
rate greater than the channel capacity, the probability of block error at the receiver goes to
one as the block length goes to infinity. However, there are other definitions for channel
capacity too.
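As a worked illustration of this result, the familiar Shannon-Hartley special case for a
band-limited AWGN channel, C = B log2(1 + S/N), is easy to evaluate. A minimal Python
sketch; the bandwidth and SNR values below are arbitrary examples:

import math

def awgn_capacity(bandwidth_hz, snr_linear):
    # Shannon-Hartley capacity of a band-limited AWGN channel, in bit/s
    return bandwidth_hz * math.log2(1.0 + snr_linear)

snr = 10 ** (10 / 10)                # 10 dB expressed as a linear ratio
print(awgn_capacity(25e3, snr))      # 25 kHz channel -> roughly 86 kbit/s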
Radio Propagation Models
Introduction
Land-mobile communication is burdened with particular propagation complications
compared to the channel characteristics in radio systems with fixed and carefully positioned
antennas. The antenna height at a mobile terminal is usually very small, typically less than a
few meters. Hence, the antenna is expected to have very little 'clearance', so obstacles and
reflecting surfaces in the vicinity of the antenna have a substantial influence on the
characteristics of the propagation path. Moreover, the propagation characteristics change
from place to place and, if the mobile unit moves, from time to time. Thus, the transmission
path between the transmitter and the receiver can vary from a simple direct line of sight to one
that is severely obstructed by buildings, foliage and the terrain.
In generic system studies, the mobile radio channel is usually evaluated from 'statistical'
propagation models: no specific terrain data is considered, and channel parameters are
modelled as stochastic variables. The mean signal strength for an arbitrary transmitter-receiver (T-R) separation is useful in estimating the radio coverage of a given transmitter
whereas measures of signal variability are key determinants in system design issues such as
antenna diversity and signal coding.
Three mutually independent, multiplicative propagation phenomena can usually be
distinguished: multipath fading, shadowing and 'large-scale' path loss.
Multipath propagation leads to rapid fluctuations of the phase and amplitude of the
signal if the vehicle moves over a distance in the order of a wave length or more.
Multipath fading thus has a 'small-scale' effect.
Shadowing is a 'medium-scale' effect: field strength variations occur if the antenna is
displaced over distances larger than a few tens or hundreds of metres.
The 'large-scale' effects of path losses cause the received power to vary gradually due
to signal attenuation determined by the geometry of the path profile in its entirety.
This is in contrast to the local propagation mechanisms, which are determined by
building and terrain features in the immediate vicinity of the antennas.
The large-scale effects determine a power level averaged over an area of tens or hundreds of
metres and is therefore called the 'area-mean' power. Shadowing introduces additional
fluctuations, so the received local-mean power varies around the area-mean. The term 'local-mean' is used to denote the signal level averaged over a few tens of wavelengths, typically 40
wavelengths. This ensures that the rapid fluctuations of the instantaneous received power
due to multipath effects are largely removed.
Path Loss: Models of "large-scale effects"
Example
The most appropriate path loss model depends on the location of the receiving antenna. For
the example above at:
location 1, free space loss is likely to give an accurate estimate of path loss.
location 2, a strong line-of-sight is present, but ground reflections can significantly
influence path loss. The plane earth loss model appears appropriate.
location 3, plane earth loss needs to be corrected for significant diffraction losses,
caused by trees cutting into the direct line of sight.
location 4, a simple diffraction model is likely to give an accurate estimate of path
loss.
location 5, loss prediction is fairly difficult and unreliable, since multiple diffraction is
involved.
Path-loss law
Figure 1: Average path loss versus distance in UHF bands as measured
in Northern Germany. (a, green): forested terrain; (b, orange): open area;
(grey): average of (a) and (b); (black): Egli's model
Free Space Propagation
For propagation distances d much larger than the square of the antenna size divided by the
wavelength, the far-field of the generated electromagnetic wave dominates all other
components (in the far-field region the electric and magnetic fields vary inversely with
distance). In free space, the power radiated by an isotropic antenna is spread uniformly and
without loss over the surface of a sphere surrounding the antenna. An isotropic antenna is a
hypothetical entity!! Even the simplest antenna has some directivity. For example, a linear
dipole has uniform power flow in any plane perpendicular to the axis of the dipole
(omnidirectionality) and the maximum power flow is in the equatorial plane (see Appendix 1:
Antenna Fundamentals).
The surface area of a sphere of radius d is 4πd², so the power flow per unit area
w (power flux in watt/m²) at distance d from a transmit antenna with accepted input
power pT and antenna gain GT is

w = GT pT / (4π d²).
Transmitting antenna gain is defined as the ratio of the intensity (or power flux) radiated in
some particular direction to the radiation intensity that would be obtained if the power
accepted by the antenna were radiated isotropically. When the direction is not stated, the
power gain is usually taken in the direction of maximum power flow. The product GT pT is
called the effective radiated power (ERP) of the transmitter. The available power pR at the
terminals of a receiving antenna with gain GR is

pR = w A = (GT pT / (4π d²)) A,

where A is the effective area or aperture of the antenna, with A = GR λ² / (4π) (see Appendix 1:
Antenna Fundamentals). The wavelength λ = c / fc, with c the velocity of light and fc the
carrier frequency.
While cellular telephone operators mostly calculate in received powers, in the planning of the
coverage area of broadcast transmitters the CCIR recommends the use of the electric field
strength E at the location of the receiver. The conversion is

w = E² / (120π),

with E in volt per metre and 120π ohm the impedance of free space.
Exercise:
Show that for a reference transmitter with an ERP of 1 kW in free space,
E = √(30 ERP) / d (in volt per metre, with ERP in watt and d in metres), i.e., about 173 mV/m at d = 1 km.
As the propagation distance increases, the radiated energy is spread over the surface of a
sphere of radius d, so the received power decreases in proportion to 1/d². Expressed in dB, the
received power is

pR (dB) = pT (dB) + GT (dB) + GR (dB) + 20 log(λ / (4π d)).
Exercise:
Show that the path loss L between two isotropic antennas (GR = 1, GT = 1) can be
expressed as

L (dB) = −32.44 − 20 log(fc / 1 MHz) − 20 log(d / 1 km),

which leads engineers to speak of a "20 log d" law.
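With the opposite (positive-loss) sign convention common in link budgets, the same law can
be coded directly. A minimal Python sketch, assuming frequency in MHz and distance in km
as in the formula above:

import math

def free_space_loss_db(d_km, f_mhz):
    # Free-space loss between isotropic antennas, as a positive dB value
    return 32.44 + 20 * math.log10(f_mhz) + 20 * math.log10(d_km)

print(free_space_loss_db(1.0, 900.0))  # ~91.5 dB at 1 km, 900 MHz
print(free_space_loss_db(2.0, 900.0))  # ~97.5 dB: doubling d adds 6 dB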
Propagation over a Plane Earth
If we consider the effect of the earth surface, the expressions for the received signal become
more complicated than in case of free space propagation. The main effect is that signals
reflected off the earth surface may (partially) cancel the line of sight wave.
Model
For an isotropic or omnidirectional antenna above a plane earth, the received electric field
strength is

E = E0 [ 1 + Rg exp(−jΔ) + (1 − Rg) F(.) exp(−jΔ) ],

with Rg the reflection coefficient, E0 the field strength for propagation in free space, Δ the
phase difference between the direct and the ground-reflected path, and F(.) the relative
amplitude of the surface wave. This expression can be interpreted as the complex sum of a
direct line-of-sight wave, a ground-reflected wave and a surface wave. The phasor sum of the
first and second term is known as the space wave.
For a horizontally-polarized wave incident on the surface of a perfectly smooth earth,

Rg = (sin ψ − √(εr − cos²ψ − jx)) / (sin ψ + √(εr − cos²ψ − jx)),

where εr is the relative dielectric constant of the earth, ψ is the angle of incidence (between
the radio ray and the earth surface) and x = σ/(2π fc ε0), with σ the conductivity of the ground
and ε0 the dielectric constant of vacuum.
For vertical polarization,

Rg = ((εr − jx) sin ψ − √(εr − cos²ψ − jx)) / ((εr − jx) sin ψ + √(εr − cos²ψ − jx)).
Exercise
Show that the reflection coefficient tends to −1 for angles ψ close to 0. Verify that for vertical
polarization, |Rg| > 0.9 for ψ < 10 degrees. For horizontal polarization, |Rg| > 0.5 for
ψ < 5 degrees and |Rg| > 0.9 for ψ < 1 degree.
UHF Mobile Communication
The relative amplitude F(.) of the surface wave is very small for most cases of mobile UHF
communication (F(.) << 1). Its contribution is relevant only a few wavelengths above the
ground. The phase difference between the direct and the ground-reflected wave can be
found from the two-ray approximation considering only a Line-of-Sight and a Ground
Reflection. Denoting the transmit and receive antenna heights as hT and hR, respectively, the
phase difference can be expressed as
For large d, one finds, using
the expression
,
For large d, (d >> 5hT hR ), the reflection coefficient tends to -1, so the received signal
power becomes
For propagation distances substantially beyond the turnover point -- i.e.
expression tends to the fourth power distance law:
-- this
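The two-ray expressions translate directly into code. A Python sketch in linear units,
assuming Rg = −1 as in the derivation; the 900 MHz example values are arbitrary:

import cmath, math

def two_ray_power(pt, gt, gr, ht, hr, d, wavelength):
    # Exact two-ray sum: free-space term times |1 - exp(-j*Delta)|^2
    delta = 4 * math.pi * ht * hr / (wavelength * d)
    fs = pt * gt * gr * (wavelength / (4 * math.pi * d)) ** 2
    return fs * abs(1 - cmath.exp(-1j * delta)) ** 2

def plane_earth_power(pt, gt, gr, ht, hr, d):
    # Fourth-power law, valid far beyond the turnover distance 4*ht*hr/wavelength
    return pt * gt * gr * (ht * hr / d ** 2) ** 2

# 900 MHz (wavelength 1/3 m), 30 m base station, 1.5 m mobile, 5 km range:
print(two_ray_power(10, 1, 1, 30, 1.5, 5e3, 1 / 3))
print(plane_earth_power(10, 1, 1, 30, 1.5, 5e3))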
Exercise
Discuss the effect of path loss on the performance of a cellular radio network. Is it good to
have signals attenuate rapidly with increasing distance?
Egli's Model (1957)
Experiments confirm that in macro-cellular links over smooth, plane terrain, the received
signal power (expressed in dB) decreases with "40 log d". Also a "6 dB/octave" height gain
is experienced: doubling the height increases the received power by a factor 4.
In contrast to the theoretical plane earth loss, Egli measured a significant increase of the
path loss with the carrier frequency fc. He proposed the semi-empirical model

pR = GT GR (hT hR / d²)² (40 MHz / fc)² pT,

i.e., he introduced a frequency-dependent empirical correction for ranges 1 km < d < 50 km and
carrier frequencies 30 MHz < fc < 1 GHz.
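A minimal Python sketch of Egli's model under the same conventions (heights and distance
in metres, carrier frequency in MHz):

def egli_power(pt, gt, gr, ht_m, hr_m, d_m, f_mhz):
    # Plane-earth law with Egli's (40 MHz / fc)^2 frequency correction;
    # quoted validity: 1-50 km range, 30 MHz - 1 GHz carrier frequency
    return pt * gt * gr * (ht_m * hr_m / d_m ** 2) ** 2 * (40.0 / f_mhz) ** 2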
For communication at short range, this formula loses its accuracy because the reflection
coefficient is not necessarily close to −1. For short-range paths, free space propagation is more
appropriate, but a number of significant reflections must be taken into account. In streets
with high buildings, guided propagation may occur.
Diffraction loss
If the direct line-of-sight is obstructed by a single object (of height hm), such as a mountain
or building, the attenuation caused by diffraction over such an object can be estimated by
treating the obstruction as a diffracting knife-edge.
Figure: Path profile model for (single) knife edge diffraction
This is the simplest of diffraction models, and the diffraction loss in this case can be readily
estimated using the classical Fresnel solution for the field in the shadow behind a half-plane.
Thus, the field strength in the shadowed region is given by

E = F(v) E0,

where E0 is the free space field strength in the absence of the knife-edge and F(v) is the
complex Fresnel integral, which is a tabulated function of the diffraction parameter

v = hm √( 2 (dT + dR) / (λ dT dR) ),

where dT and dR are the terminal distances from the knife edge. The diffraction loss Adiff,
additional to free space loss and expressed in dB, can be closely approximated by

Adiff = 0                                            for v ≤ −1
Adiff = −20 log(0.5 − 0.62 v)                        for −1 ≤ v ≤ 0
Adiff = −20 log(0.5 exp(−0.95 v))                    for 0 ≤ v ≤ 1
Adiff = −20 log(0.4 − √(0.1184 − (0.38 − 0.1 v)²))   for 1 ≤ v ≤ 2.4
Adiff = −20 log(0.225 / v)                           for v > 2.4
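A direct Python transcription of this piecewise approximation; the Fresnel-parameter helper
assumes consistent length units throughout:

import math

def fresnel_v(h_m, d_t, d_r, wavelength):
    # Diffraction parameter for an obstacle h_m above (or below, h_m < 0) the LOS
    return h_m * math.sqrt(2 * (d_t + d_r) / (wavelength * d_t * d_r))

def knife_edge_loss_db(v):
    # Piecewise approximation of the knife-edge diffraction loss (positive dB)
    if v <= -1:
        return 0.0
    if v <= 0:
        return -20 * math.log10(0.5 - 0.62 * v)
    if v <= 1:
        return -20 * math.log10(0.5 * math.exp(-0.95 * v))
    if v <= 2.4:
        return -20 * math.log10(0.4 - math.sqrt(0.1184 - (0.38 - 0.1 * v) ** 2))
    return -20 * math.log10(0.225 / v)

print(knife_edge_loss_db(0.0))   # grazing incidence: about 6 dB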
The attenuation over rounded obstacles is usually higher than Adiff in the above formula.
Approximate techniques to compute the diffraction loss over multiple knife edges have been
proposed by
Bullington
Figure: Construction for approximate calculation of Multiple Knife-Edge
diffraction loss, proposed by Bullington
The method by Bullington defines a new effective obstacle at the point where the
lines-of-sight from the two antennas cross.
Epstein and Peterson
Figure: Construction for approximate calculation of multiple knife-edge
diffraction loss, proposed by Epstein and Peterson
Epstein and Peterson suggested that lines-of-sight be drawn between relevant
obstacles and the diffraction losses at each obstacle be added.
Deygout
Figure: Construction for approximate calculation of multiple knife-edge
diffraction loss, proposed by Deygout
Deygout suggested that the entire path be searched for a main obstacle, i.e., the point
with the highest value of v along the path. Diffraction losses over "secondary"
obstacles may be added to the diffraction loss over the main obstacle.
Total Path loss
The previously presented methods for ground reflection loss and diffraction losses suggest a
"Mondriaan" interpretation of the path profile: Obstacles occur as straight vertical lines
while horizontal planes cause reflections. That is the propagation path is seen as a collection
of horizontal and vertical elements. Accurate computation of the path loss over non-line-ofsight paths with ground reflections is a complicated task and does not allow such
simplifications.
Many measurements of propagation losses for paths with combined diffraction and ground
reflection losses indicate that knife edge type of obstacles significantly reduce ground wave
losses. Blomquist suggested three methods that may be used to find the total loss A: adding
only the dominant extra loss, adding both extra losses, or combining them root-sum-square:

A = Afs + max(AR, Adiff)
A = Afs + AR + Adiff
A = Afs + √(AR² + Adiff²)   (Blomquist's empirical formula)

where Afs is the free space loss, AR is the ground reflection loss and Adiff is the multiple
knife-edge diffraction loss, all in dB values.
Statistical Path Loss Law:
Most generic system studies address networks in which all mobile units have the same gain,
height and transmitter power. For ease of notation, received signal powers and propagation
distances can be normalized. In macro-cellular networks (1 km < d < 50 km), the area-mean
received power can be written as

p̄a = c r^(−β),

with r the normalized distance and β the path loss exponent. Theoretical values of β are,
respectively, 2 and 4 for free space and for plane, smooth, perfectly conducting terrain. Typical
values for irregular terrain are between 3.0 and 3.4, and in forested terrain propagation can be
appropriately described as in free space plus some diffraction losses, but without significant
groundwave losses (β ≈ 2). If the propagation model has to cover a wide range of distances,
β may vary as different propagation mechanisms dominate at different ranges. In micro-cellular
nets, β typically changes from approximately 2 to approximately 4 at some turnover
distance dg. Experimental values of dg are between 90 and 300 m for hT between 5 and 20 m
and hR approximately 2 m, where hT and hR are, respectively, the heights of the transmitting
and receiving antennas. These values are in reasonable agreement with the theoretical
expression

dg = 4 hT hR / λ,

where λ is the wavelength of the transmitted wave.
Many models have been proposed and are used in system design:
Harley suggested a smooth transition model, with

p̄ = c r^(−a) (1 + r/rg)^(−b),

where r is a normalized distance, rg is the normalized turnover distance, and p̄ is the
local-mean power (i.e., the received power averaged over a few metres to remove the
effect of multipath fades). Studies indicate that actual turnover distances are on the
order of 800 metres around 2 GHz. This model neglects the wave-interference
pattern that may be experienced at ranges shorter than rg.
Other models, such as a step-wise transition from "20 log d" to "40 log d" at the
turnover distance, have been proposed. Empirical values for the path loss exponents
and their intervals of validity have been reported in many papers. See the following
table.
Model             Area              a            b
FSL               free space        2            0
Egli              average terrain   4            0
two-ray           plane earth       2            2
Green             London            1.7 to 2.1   2 to 7
Harley            Melbourne         1.5 to 2.5   -0.3 to 0.5
Pickholtz et al.  Orlando, Florida  1.3          3.5

Table: Typical Harley parameters a and b for various measurement areas.
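In dB form, Harley's smooth transition model tabulated above is easy to evaluate. A short
Python sketch, with the reference level at r = 1 as an arbitrary normalisation:

import math

def harley_mean_power_db(r, rg, a, b, p0_db=0.0):
    # Dual-slope model: exponent ~a for r << rg, ~(a + b) for r >> rg
    return p0_db - 10 * a * math.log10(r) - 10 * b * math.log10(1 + r / rg)

for r in (0.1, 1.0, 10.0):
    print(r, harley_mean_power_db(r, rg=1.0, a=2, b=2))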
Multipath in Micro-Cells
The micro-cellular propagation channel typically is Rician: it contains a dominant direct
component, with an amplitude determined by path loss; a set of early reflected waves adding
(possibly destructively) with the dominant wave; and intersymbol interference caused by the
excessively delayed waves, adding incoherently with the dominant wave.
Shadowing
Experiments reported by Egli in 1957 showed that, for paths longer than a few hundred
meters, the received (local-mean) power fluctuates with a 'log-normal' distribution about the
area-mean power. "Log-normal" means that the local-mean power expressed in logarithmic
values, i.e., 10 log p̄, has a normal, i.e., Gaussian, distribution. The probability density
function (pdf) of the local-mean power p̄ is thus of the form

f(p̄) = 1 / (σs p̄ √(2π)) exp( − ln²(p̄ / p̄a) / (2 σs²) ),

where σs is the logarithmic standard deviation of the shadowing, expressed in natural units,
and p̄a is the area-mean power.
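Sampling such a shadowing process is simple, since a Gaussian draw in dB is exactly a
log-normal draw in linear units. A NumPy sketch; the area mean is an arbitrary example and
8.3 dB is Egli's average-terrain figure quoted in the next subsection:

import numpy as np

rng = np.random.default_rng(1)

def shadowed_local_means_dbm(area_mean_dbm, sigma_db, n=100_000):
    # Local-mean powers: Gaussian in dB about the area mean (log-normal in watts)
    return area_mean_dbm + rng.normal(0.0, sigma_db, n)

samples = shadowed_local_means_dbm(-90.0, 8.3)
print(samples.mean(), samples.std())   # close to -90 dBm and 8.3 dB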
Depth of Shadowing
For average terrain, Egli reported a logarithmic standard deviation of about 8.3 dB and 12
dB for VHF and UHF frequencies, respectively. Such large fluctuations are caused not only
by local shadow attenuation by obstacles in the vicinity of the antenna, but also by large-scale
effects leading to a coarse estimate of the area-mean power.
This log-normal fluctuation was called large-area shadowing by Marsan, Hess and Gilbert;
over semi-circular routes in Chicago, with fixed distance to the base station, it was found to
range from 6.5 dB to 10.5 dB, with a median of 9.3 dB. Large-area shadowing thus reflects
shadow fluctuations if the vehicle moves over many kilometres.
In contrast to this, in most papers on mobile propagation, only small-area shadowing is
considered: log-normal fluctuations of the local-mean power over a distance of tens or
hundreds of metres are measured. Marsan et al. reported a median of 3.7 dB for small area
shadowing. Preller and Koch measured local-mean powers at 10 m intervals and studied
shadowing over 500 m intervals. The maximum standard deviation experienced was about 7
dB, but 50% of all experiments showed shadowing of less than 4 dB.
Implications for Cell Planning
If one extends the distinction between large-area and small-area shadowing, the definition of
shadowing covers any statistical fluctuation of the received local-mean power about a certain
area-mean power, with the latter determined by (predictable) large-scale mechanisms.
Multipath propagation is separated from shadow fluctuations by considering the local-mean
powers. That is, the standard deviation of the shadowing will depend on the geographical
resolution of the estimate of the area-mean power. A propagation model which ignores
specific terrain data produces about 12 dB of shadowing. On the other hand, prediction
methods using topographical data bases with unlimited resolution can, at least in theory,
achieve a standard deviation of 0 dB. Thus, the standard deviation is a measure of the
impreciseness of the terrain description. If, for generic system studies, the (large-scale) path
loss is taken of simple form depending only on distance but not on details of the path
profile, the standard deviation will necessarily be large. On the other hand, for the planning
of a practical network in a certain (known) environment, the accuracy of the large-scale
propagation model may be refined. This may allow a spectrally more efficient planning if the
cellular layout is optimised for the propagation environment.
With shadowing, the interference power accumulates more rapidly than proportionally with
the number of signals!! The accumulation of multiple signals with shadowing is a relevant
issue in the planning of cellular networks.
Combined model by Mawira (Netherlands' PTT Research)
Mawira modelled large-area and small-area shadowing as two independent superimposed
Markovian processes:
3 dB with coherence distance over 100 m, plus
4 dB with coherence distance 1200 m
Multiple log-normal signals
In cellular networks, interference does not come from only one source but from many cochannel transmitters. In a hexagonal reuse pattern the number of interferers typically is six.
At least two different methods are used to estimate the probability distribution of the joint
interference power accumulated from several log-normal signals. Such methods are relevant
to estimate the joint effect of multiple interfering signals with shadowing. Fenton and
Schwartz and Yeh both proposed to approximate the pdf of the joint interference power by
a log-normal pdf, yet neither could determine it exactly.
Fenton
The method by Fenton assesses the logarithmic mean and variance of the joint interference
signal directly as a function of the logarithmic means and variances of the individual
interference signals. This method is most accurate for small standard deviations of the
shadowing, say, for less than 4 dB.
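A compact sketch of the Fenton idea in Python: convert each component to natural-log
units, add the exact linear-scale means and variances, and match a single log-normal to the
result. This is a generic moment-matching implementation, not Fenton's original notation:

import numpy as np

LN = np.log(10) / 10  # converts dB values to natural-log units

def fenton_sum(means_db, sigmas_db):
    # Approximate the sum of independent log-normal powers by one log-normal
    m = np.asarray(means_db, float) * LN
    s = np.asarray(sigmas_db, float) * LN
    mean_lin = np.exp(m + s**2 / 2)                      # E[p_i]
    var_lin = (np.exp(s**2) - 1) * np.exp(2*m + s**2)    # Var[p_i]
    M, V = mean_lin.sum(), var_lin.sum()
    s_t2 = np.log(1 + V / M**2)                          # matched log-variance
    m_t = np.log(M) - s_t2 / 2                           # matched log-mean
    return m_t / LN, np.sqrt(s_t2) / LN                  # back to dB

print(fenton_sum([0]*6, [6]*6))  # ~ (10.5, 3.6); compare the table below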
Schwartz and Yeh
The technique proposed by Schwartz and Yeh is more accurate in the range of 4 to 12 dB
shadowing, which corresponds to the case of land-mobile radio in the VHF and UHF bands.
Their method first assesses the logarithmic mean and variance of a joint signal produced by
cumulation of two signals. Recurrence is then used in the event of more than two interfering
signals. A disadvantage of the latter method is that numerical computations become time
consuming.
Table: Mean mt and standard deviation st (both in dB) of the joint power of n signals with
uncorrelated shadowing, each with mean 0 dB and with identical standard deviation of
0, 6, 8.3 or 12 dB.
n      0 dB            6 dB            8.3 dB          12 dB
       mt     st       mt     st       mt     st       mt     st
1      0.00   0.00     0.00   6.00     0.00   8.30     0.00  12.00
2      3.00   0.00     4.58   4.58     5.61   6.49     7.45   9.58
3      4.50   0.00     6.90   3.93     8.45   5.62    11.20   8.40
4      6.00   0.00     8.43   3.54    10.29   5.08    13.62   7.66
5      7.00   0.00     9.57   3.26    11.64   4.70    15.37   7.13
6      7.50   0.00    10.48   3.04    12.69   4.41    16.74   6.74
Besides these methods by Fenton and by Schwartz and Yeh, a number of alternative (and often
more simplified) techniques are used. For instance, in VHF radio broadcasting, signals
fluctuate with location and with time according to log-normal distributions. Techniques to
compute the coverage of broadcast transmitters are in CCIR recommendations.
Outage probabilities for systems with multiple Rayleigh fading and shadowed signals can
however be computed easily without explicitly estimating the joint effect of multiple
shadowed signals.
Multipath Reception
The mobile or indoor radio channel is characterized by 'multipath reception': The signal
offered to the receiver contains not only a direct line-of- sight radio wave, but also a large
number of reflected radio waves.
These reflected waves interfere with the direct wave, which causes significant degradation of
the performance of the network. A wireless network has to be designed in such a way that the
adverse effect of these reflections is minimized.
Although channel fading is experienced as an unpredictable, stochastic phenomenon,
powerful models have been developed that can accurately predict system performance.
Most conventional modulation techniques are sensitive to intersymbol interference unless
the channel symbol duration is large compared to the delay spread of the channel. Nonetheless, a
signal received at a frequency and location where reflected waves cancel each other, is
heavily attenuated and may thus suffer large bit error rates.
Models for multipath reception
Narrowband Rayleigh or Rician models mostly address the channel behaviour at one
frequency only. Dispersion is modelled by the delay spread.
The effect of multipath reception
for a fast moving user: rapid fluctuations of the signal amplitude and phase
for a wideband (digital) signal: dispersion and intersymbol interference
for an analog television signal: "ghost" images (shifted slightly to the right)
for a multicarrier signal: different attenuation at different (sub-)carriers and at
different locations
for a stationary user of a narrowband system: good reception at some locations and
frequencies; poor reception at other locations and frequencies
for a satellite positioning system: strong delayed reflections may cause a severe
miscalculation of the distance between user and satellite. This can result in a wrong
"fix"
Rician fading
The model behind Rician fading is similar to that for Rayleigh fading, except that in Rician
fading a strong dominant component is present. This dominant component can for instance
be the line-of-sight wave. Refined Rician models also consider
that the dominant wave can be a phasor sum of two or more dominant signals, e.g.
the line-of-sight, plus a ground reflection. This combined signal is then mostly
treated as a deterministic (fully predictable) process.
that the dominant wave can also be subject to shadow attenuation. This is a popular
assumption in the modelling of satellite channels.
Besides the dominant component, the mobile antenna receives a large number of reflected
and scattered waves.
PDF of signal amplitude
The derivation is similar to the derivation for Rayleigh fading. In order to obtain the
probability density of the signal amplitude we observe the random processes I(t) and Q(t) at
one particular instant t0. If the number of scattered waves is sufficiently large and they are
i.i.d., the central limit theorem says that I(t0) and Q(t0) are Gaussian but, due to the
deterministic dominant term, no longer zero mean. Transformation of variables shows that
the amplitude ρ and the phase θ have the joint pdf

f(ρ, θ) = (ρ / (2π σ²)) exp( −(ρ² − 2ρC cos θ + C²) / (2σ²) ).

Here, σ² is the local-mean scattered power and C²/2 is the power of the dominant
component. The pdf of the amplitude is found from the integral

f(ρ) = ∫ f(ρ, θ) dθ over [0, 2π] = (ρ / σ²) exp( −(ρ² + C²) / (2σ²) ) I0(ρC / σ²),

where I0(..) is the modified Bessel function of the first kind and zero order, defined as

I0(x) = (1 / 2π) ∫ exp(x cos θ) dθ over [0, 2π].
Exercise:
Show that the total local-mean power is p̄ = C²/2 + σ².
Rician factor

The Rician K-factor is defined as the ratio of signal power in the dominant component over
the (local-mean) scattered power. Thus

K = (C²/2) / σ².

Expressed in terms of the local-mean power p̄ and the Rician K-factor, the pdf of the signal
amplitude becomes

f(ρ) = (ρ (K+1) / p̄) exp( −K − (K+1) ρ² / (2p̄) ) I0( ρ √(2K(K+1)/p̄) ).
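These relations invert to σ² = p̄/(K+1) and C² = 2Kp̄/(K+1), which gives a direct way to
draw Rician envelope samples. A NumPy sketch, with power defined as ρ²/2 as above:

import numpy as np

rng = np.random.default_rng(0)

def rician_envelope(k_factor, mean_power, n=100_000):
    sigma2 = mean_power / (k_factor + 1)                     # scattered power
    c = np.sqrt(2 * k_factor * mean_power / (k_factor + 1))  # dominant amplitude
    i = c + rng.normal(0.0, np.sqrt(sigma2), n)              # dominant + scatter
    q = rng.normal(0.0, np.sqrt(sigma2), n)                  # scatter only
    return np.hypot(i, q)

rho = rician_envelope(k_factor=6.0, mean_power=1.0)
print((rho**2 / 2).mean())   # ~1.0, i.e. C^2/2 + sigma^2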
Exercise
Show that for a large local-mean signal-to-noise ratio p̄ / pn, the probability that the
instantaneous power p drops below a noise threshold pn tends to

Pr(p < pn) ≈ (K + 1) exp(−K) pn / p̄.
Rician Channels
Examples of Rician fading are found in
Microcellular channels
Vehicle to Vehicle communication, e.g., for AVCS
Indoor propagation
Satellite channels
Rayleigh fading
Rayleigh fading is caused by multipath reception. The mobile antenna receives a large
number, say N, of reflected and scattered waves. Because of wave cancellation effects, the
instantaneous received power seen by a moving antenna becomes a random variable,
dependent on the location of the antenna.

In case of an unmodulated carrier, the transmitted signal has the form

s(t) = Ac cos(ωc t).
Next we'll discuss the basic mechanisms of mobile reception.
Effect of Motion

Let the n-th reflected wave with amplitude cn and phase φn arrive from an angle αn relative
to the direction of the motion of the antenna. The Doppler shift of this wave is

fn = (v / λ) cos αn,

where v is the speed of the antenna.
Phasor representation

Figure: Phasor diagram of a set of scattered waves (in blue), resulting in
a Rayleigh-fading envelope (in black)
The received unmodulated signal r(t) can be expressed as

r(t) = Σn cn cos( (ωc + 2π fn) t + φn ).

An inphase-quadrature representation of the form

r(t) = I(t) cos(ωc t) − Q(t) sin(ωc t)

can be found with in-phase component

I(t) = Σn cn cos(2π fn t + φn)

and quadrature phase component

Q(t) = Σn cn sin(2π fn t + φn).
This propagation model allows us, for instance,
to compute the probability density function of the received amplitude.
to compute the Doppler spread and the rate of channel fluctuations.
to build a channel simulator
to predict the effect of multiple interfering signals in a mobile channel
to find outage probabilities in a cellular network
If the set of reflected waves is dominated by one strong component, Rician fading is a
more appropriate model.
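The channel simulator mentioned in the list above can be built directly from this model by
summing equal-power scattered waves with random phases and uniformly distributed angles
of arrival. A NumPy sketch; the number of waves and the Doppler and sampling rates are
arbitrary choices:

import numpy as np

rng = np.random.default_rng(42)

def rayleigh_fading(n_samples, f_m, f_s, n_waves=64):
    # Sum-of-sinusoids Rayleigh simulator: I(t) + jQ(t) from n_waves scatterers
    t = np.arange(n_samples) / f_s
    alpha = rng.uniform(0, 2 * np.pi, n_waves)   # angles of arrival
    phi = rng.uniform(0, 2 * np.pi, n_waves)     # initial phases
    f_n = f_m * np.cos(alpha)                    # per-wave Doppler shifts
    arg = 2 * np.pi * np.outer(t, f_n) + phi
    c = 1.0 / np.sqrt(n_waves)                   # equal amplitudes, unit power
    return c * (np.cos(arg).sum(axis=1) + 1j * np.sin(arg).sum(axis=1))

h = rayleigh_fading(10_000, f_m=92.6, f_s=10_000)  # ~100 km/h at 1 GHz
print(np.abs(h).mean())   # Rayleigh envelope: mean ~ sqrt(pi)/2 for unit power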
Doppler spectrum
Doppler shifts
We consider a Rayleigh fading signal. Let the n-th reflected wave with amplitude cn and
phase φn arrive from an angle αn relative to the direction of the motion of the antenna.
The Doppler shift of this wave is

fn = (v / λ) cos αn,

where v is the speed of the antenna.

Such motion of the antenna leads to phase shifts of individual reflected waves, so it affects
the amplitude of the resulting signal. It is often assumed that the angle αn is uniformly
distributed within [0, 2π]. This allows us to compute a probability density function of the
frequency of incoming waves. Assuming that the number of waves is very large, we can
obtain the Doppler spectrum of the received signal.
Rate of Rayleigh Fading
Consider the following sample of a set of reflected waves:
Figure: Phasor diagram of a set of scattered waves (in blue), resulting in
a Rayleigh-fading envelope (in black)

Let the n-th reflected wave with amplitude cn and phase φn arrive from an angle αn relative
to the direction of the motion of the antenna.
If the mobile antenna moves a small distance d, the n-th incident wave, arriving from the
angle αn with respect to the instantaneous direction of motion, experiences a phase shift of

Δφn = (2π d / λ) cos(αn).
Thus all waves experience their own phase rotation. The resulting vector may significantly
change in amplitude if individual components undergo different phase shifts.
Figure: Phasor diagram of a set of scattered waves after antenna displacement (in blue)
and before motion (in light blue), resulting a Rayleigh-fading envelope (in black)
In mobile radio channels with high terminal speeds, such changes occur rapidly. Rayleigh
fading then causes the signal amplitude and phase to fluctuate rapidly.
If d is in the order of half a wavelength (λ/2) or more, the phases of all incident waves
become mutually uncorrelated, and thus the amplitude of the total received signal becomes
uncorrelated with the amplitude at the point of departure.
Doppler Shifts
Each reflected wave experiences its own Doppler shift. If an unmodulated carrier is being
transmitted, a spectrum of different components is received.
Autocovariance
The normalised covariance L(d) of the electric field strength for an antenna displacement d
is of the form

L(d) = J0(2π d / λ),

with J0(.) the zero-order Bessel function of the first kind.
The signal remains almost entirely correlated for a small displacement, say d < λ/8, but
becomes rapidly independent for larger displacements, say for d > λ/2.
Figure: Auto-covariance L(d) of the electric field strength in a Rayleigh-fading channel
versus the normalised antenna displacement d/λ in horizontal direction.
The antenna displacement can also be expressed in the terminal velocity v and the time
difference T between the two samples (d = v T). So L(T) = J0(2π fm T), with fm the
maximum Doppler shift (fm = v fc / c).
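Evaluating this covariance numerically needs only the Bessel function from SciPy; a short
sketch over a few displacements:

import numpy as np
from scipy.special import j0

# Normalised field-strength covariance L(d) = J0(2*pi*d/lambda)
for frac in (0.0, 0.125, 0.25, 0.5, 1.0):      # displacement d as a fraction of lambda
    print(f"d = {frac:5.3f} lambda   L(d) = {j0(2 * np.pi * frac):+.3f}")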
Analog Transmission over Fading Channels
This section covers analogue AM, SSB, PM and FM modulation.
Amplitude modulation
Various methods exist to transmit a baseband message m(t) using an RF carrier signal
c(t) = Ac cos(ωc t + θ). In linear modulation, such as Amplitude Modulation (AM) and
Single Side Band (SSB), the amplitude Ac is made a linear function of the message m(t).
Figure: Phasor Diagram for AM with tonal modulation.
AM has the advantage that the detector circuit can be very simple. This allows inexpensive
production of mediumwave broadcast receivers. The transmit power amplifier, however,
needs to be highly linear and is therefore expensive and power consuming.
For mobile reception of AM audio signals above 100 MHz, the spectrum of channel
fluctuations due to fading overlaps with the spectrum of the message. Hence the Automatic
Gain Control (AGC) in the receiver IF stages cannot distinguish between the message and
channel fading. AGC will thus distort the message m(t).
AM is only rarely used for mobile communication, although it is still used for radio
broadcasting.
Single Side Band
In the frequency power spectrum of AM signals we recognize an upper sideband and a
lower sideband, with frequency components above and below the carrier at fc. In Single
Side Band transmission, the carrier and one of these sidebands are removed.

An SSB message can be recovered by multiplying the received signal by cos(ωc t + θ'). If the
local oscillator has a phase offset (θ − θ'), the detected signal is a linear combination of the
original message m(t) and a 90 degree phase-shifted version (its Hilbert transform). The
human ear is not very sensitive to such phase distortion; therefore the detected signal sounds
almost identical to m(t), despite any phase offset. However, such phase shifts make SSB
unsuitable for digital transmission.
The effect of a frequency error in the local oscillator is more dramatic for analog speech
signals. Its effect can best be understood from the frequency-domain description of the SSB
signal. A frequency shift of all baseband tones occurs. In this case, the harmonic relation
between audio tones is lost and the signal sounds very artificial.
SSB is relatively sensitive to interference, which requires large frequency reuse spacings and
severely limits the spatial spectrum efficiency of cellular SSB networks.
AGC to reduce the effect of amplitude fades substantially affects the message signal.
Furthermore, SSB requires very sharp filters, which are mostly sensitive to physical damage,
temperature and humidity changes. This makes SSB not very attractive for mobile
communication.
PHASE MODULATION
In phase modulation, the transmit signal has the constant-amplitude form

s(t) = Ac cos(ωc t + β m(t)),

where the constant β is called the phase deviation.

Exercise
Show that for Narrowband Phase Modulation (NBPM) with β << 1, phase modulation can
be approximated by the linear expression s(t) ≈ Ac cos(ωc t) − Ac β m(t) sin(ωc t).
Compare NBPM with AM
by drawing the phasor diagrams,
by computing the transmit power in the carrier and each sideband,
by calculating the signal-to-noise ratio for coherent detection.
Explain why in mobile communications NBPM has advantages over AM.
FREQUENCY MODULATION
For frequency deviation fΔ, the transmit signal is of the form

s(t) = Ac cos( ωc t + 2π fΔ ∫ m(τ) dτ ).

That is, FM can be implemented by a PM modulator preceded by an integrator for the
message signal.

For a message bandwidth W, the transmit bandwidth BT can be approximated by the Carson
bandwidth

BT = 2 (fΔ + W).

In the event of 2W < fΔ < 10W, a better approximation is BT = 2 (fΔ + 2W).
Exercise
Find BT for FM broadcasting with Δf = 75 kHz and W = 15 kHz. Cellular telephone nets with speech bandwidth W = 3000 Hz typically transmit over BT = 12.5 or 25 kHz. Find the frequency deviation Δf. Compare the use of 12.5 and 25 kHz bandwidth in terms of implementation difficulty, audio quality, vulnerability to interference, reuse factors needed and spectrum efficiency.
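As a worked sketch of the first two parts of this exercise, the following Python lines evaluate the Carson bandwidth and solve it for the frequency deviation.

def carson_bandwidth(deviation_hz, msg_bw_hz):
    """Carson bandwidth BT = 2 * (deviation + message bandwidth)."""
    return 2 * (deviation_hz + msg_bw_hz)

# FM broadcasting: deviation 75 kHz, W = 15 kHz -> BT = 180 kHz.
print(carson_bandwidth(75e3, 15e3) / 1e3, "kHz")

# Cellular telephony: W = 3 kHz; solving BT = 2 * (df + W) for df.
for bt in (12.5e3, 25e3):
    df = bt / 2 - 3e3
    print(f"BT = {bt/1e3:.1f} kHz -> deviation = {df/1e3:.2f} kHz")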
After frequency-nonselective multipath propagation, a received FM signal
contains random phase and frequency modulation, which is additive to the wanted modulation, and
suffers from dispersion.
The audio signal will be distorted.
Reception above the FM-Threshold
If the signal-to-noise ratio is sufficiently large, the received signal is dominated by the wanted signal. The effect of noise can then be approximated as a linear additive disturbance. The minimum signal-to-noise ratio for which this assumption is reasonable is called the FM capture threshold.
Capture
In non-linear modulation, such as phase modulation (PM) or frequency modulation (FM),
the post-detection signal-to-noise ratio can be greatly enhanced as compared to baseband
transmission or compared to linear modulation. This enhancement occurs as long as the
received pre-detection signal-to-noise ratio is above the threshold. Below the threshold the
signal-to-noise ratio deteriorates rapidly. This is often perceived if the signal-to-noise ratio
(C/N) increases slowly: a sudden transition from poor to good reception occurs. The signal appears to "capture" the receiver at a certain C/N. A typical threshold value is a C/N of 10 (10 dB) at RF. The audio SNR at which capture occurs depends on the frequency deviation.
Effects of Rayleigh fading on FM reception
In a rapidly fading channel, the events of crossing the FM capture threshold may occur too
frequently to be distinguished as individual drop outs. The performance degradation is
perceived as an average degradation of the channel. The capture effect and the FM threshold
vanish in such situations.
Effect of Amplitude variations
Fluctuations of the signal-to-noise ratio cause fluctuations of received noise power and
fluctuations of the amplitude of the detected wanted signal. Some analyses assume that the
difference between the detected signal and the expected signal is perceived as a noise type of
disturbance. It is called the signal-suppression 'noise', even though disturbances that are
highly correlated with the signal are mostly perceived as 'distortion' rather than as noise.
Effect of Random FM
For large local-mean signal-to-noise ratios, random FM is the only remaining disturbance. For voice communication with an audio passband of 300 - 3000 Hz, the noise contribution due to random FM determines the achievable SNR, with S the audio power at the detector output. This SNR does not depend on additive pre-detection noise. Wideband transmission (large frequency deviation) is thus significantly less sensitive to random FM than narrowband FM.
Threshold Crossing Rate
The average number of times per second that a fading signal crosses a certain threshold is called the threshold crossing rate. Consider a magnified view of the signal path at the instant when it crosses the threshold.
Figure: Threshold crossing. Threshold R is crossed with derivative dr/dt.
The above crossing of the threshold R with width dr lasts for dt seconds. The derivative of
the signal amplitude, with respect to time, is dr / dt.
If the signal always crosses the threshold with the same derivative, then:
Average number of crossings per second * dt = Probability that the amplitude is in the
interval [R, R + dr].
The probability that the signal amplitude is within the window [R, R + dr] is known from the
probability density of the signal amplitude, which can for instance be Rayleigh, Rician or
Nakagami. Moreover, the joint pdf of signal amplitude and its derivative can be found. For a
Rayleigh-fading signal:
The amplitude is Rayleigh distributed, with mean-square value determined by the local-mean power.
The derivative is zero-mean Gaussian with variance

var = 2 π² fD² p̄

where fD is the Doppler spread and p̄ the local-mean power.
The expected number of crossings per second is found by integrating over all possible
derivatives.
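As a check of this integration, the following Python sketch computes the TCR numerically for a Rayleigh channel and compares it with the standard closed form TCR = √(2π) fD ρ e^(−ρ²), where ρ is the threshold amplitude normalized to the rms amplitude; it assumes amplitude and derivative are independent, unit local-mean power, and an illustrative Doppler spread.

import numpy as np
from scipy import integrate

f_D = 50.0        # Doppler spread, Hz (illustrative)
sigma = 1.0       # Rayleigh parameter; local-mean power normalized to 1
R = 0.5           # threshold amplitude
rho = R / (np.sqrt(2) * sigma)          # threshold / rms amplitude

p_R = (R / sigma**2) * np.exp(-R**2 / (2 * sigma**2))  # Rayleigh pdf at R
sig_rdot = np.sqrt(2) * np.pi * f_D * sigma            # std of derivative

def integrand(rdot):
    # rdot times the Gaussian pdf of the amplitude derivative
    return rdot * np.exp(-rdot**2 / (2 * sig_rdot**2)) / (np.sqrt(2 * np.pi) * sig_rdot)

tcr_numeric = p_R * integrate.quad(integrand, 0, np.inf)[0]
tcr_closed = np.sqrt(2 * np.pi) * f_D * rho * np.exp(-rho**2)
print(tcr_numeric, tcr_closed)   # both ~39.1 crossings per second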
Figure: Threshold crossing rate in a Rayleigh-fading channel versus fade margin, for n = 1, 2, and 6 Rayleigh-fading interfering signals and for a constant noise floor. Normalized to the Doppler spread.
The TCR curve has a maximum if the local-mean-power is about as large as the threshold
noise or interference power. If the signal is on average much stronger than the threshold, the
number of threshold crossings (i.e., deep fades) is relatively small. Also, if the signal is much
weaker than the threshold, the number of crossings is small because signal "up-fades" are
unlikely.
Fade Duration
The mobile Rayleigh or Rician radio channel is characterized by rapidly changing channel
characteristics. If a certain minimum (threshold) signal level is needed for acceptable
communication performance, the received signal will experience periods of
sufficient signal strength or "non-fade intervals"
insufficient signal strength or "fades"
It is of critical importance to the performance of mobile data networks that the packet duration used is selected taking into account the expected duration of fades and non-fade intervals.
Figure: Fade and non-fade duration for a sample of a fading signal.
Average Fade Duration
We use:
Outage Probability = Average number of fades per second * Average fade duration
where the average number of fades per second is called the threshold crossing rate.
Expressions for Average (Non-) Fade Duration
In a Rayleigh fading channel with fade margin M, the average non-fade duration (ANFD) is

ANFD = √M / (√(2π) fD)

where fD is the Doppler spread. M is the ratio of the local-mean signal power to the minimum (threshold) power needed for reliable communication.
Average non-fade duration in Rayleigh-fading channel versus fade margin for n = 1, 2, 3, 4,
5 and 6 Rayleigh-fading interfering signals. Normalized by dividing by the Doppler Spread.
The curve for n = 6 closely resembles the curve of the ANFD in an interference-free but noise-limited channel.
Thus
The ANFD is proportional to the speed of the mobile user. Channel fading occurs
mainly because the user moves. If the user is stationary almost no time variations of
the channel occur (except if reflecting elements in the environment move).
The ANFD increases in proportion to the square root of the fade margin.
The non-fade duration is not so sensitive to whether the signal experiences fades
below a constant noise-floor or a fading interfering signal.
Calculation of the distribution of non-fade periods is tedious, but has been elaborated by
Rice. Because of the shape of the Doppler spectrum, fade durations that coincide with a
motion of about half a wavelength are relatively frequent.
The average fade duration (AFD) is

AFD = (e^(1/M) − 1) √M / (√(2π) fD)
Thus
The AFD is proportional to the speed of the mobile user.
The fade durations rapidly reduce with increasing fade margin, but the time between fades increases much more slowly.
Average fade duration in Rayleigh-fading channel versus fade margin for n = 1, 2, 3, 4, 5 and
6 Rayleigh-fading interfering signals. Normalized by dividing by the Doppler Spread.
Experiments revealed that at large fade margins, the fade durations are approximately
exponentially distributed around their mean value.
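As an illustration, the following Python sketch evaluates the Rayleigh-channel expressions quoted above for a few fade margins; the Doppler spread is an illustrative value.

import numpy as np

def anfd(margin, doppler_hz):
    """Average non-fade duration, Rayleigh channel."""
    return np.sqrt(margin) / (np.sqrt(2 * np.pi) * doppler_hz)

def afd(margin, doppler_hz):
    """Average fade duration, Rayleigh channel."""
    return (np.exp(1 / margin) - 1) * np.sqrt(margin) / (np.sqrt(2 * np.pi) * doppler_hz)

f_D = 50.0                         # Doppler spread, Hz (illustrative)
for margin_dB in (10, 20, 30):
    M = 10 ** (margin_dB / 10)
    print(f"M = {margin_dB} dB: ANFD = {anfd(M, f_D)*1e3:.1f} ms, "
          f"AFD = {afd(M, f_D)*1e3:.3f} ms")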
Delay Spread
Because of multipath reflections, the channel impulse response of a wireless channel looks like a series of pulses.
Figure: Example of impulse response and frequency transfer function of a multipath
channel.
We can define the local-mean average received power with excess delay within the interval
(T, T + dt). This gives the "delay profile" of the channel.
The delay profile determines to what extent the channel fading at two different frequencies
f1 and f2 are correlated.
Some definitions
The maximum delay time spread is the total time interval during which reflections
with significant energy arrive.
The rms delay spread Trms is the standard deviation (or root-mean-square) value of
the delay of reflections, weighted proportional to the energy in the reflected waves.
For a digital signal with high bit rate, this dispersion is experienced as frequency selective
fading and intersymbol interference (ISI). No serious ISI is likely to occur if the symbol
duration is longer than, say, ten times the rms delay spread.
Typical Values
In macro-cellular mobile radio, delay spreads are mostly in the range from about 100 nsec to 10 microsec. A typical delay spread of 0.25 microsec corresponds to a coherence bandwidth of about 640 kHz. Measurements made in the U.S. indicated that delay spreads are usually less than 0.2 microsec in open areas, about 0.5 microsec in suburban areas, and about 3 microsec in urban areas. Measurements in The Netherlands showed that delay spreads are relatively large in European-style suburban areas, but rarely exceed 2 microsec. However, large distant buildings such as apartment flats occasionally cause reflections with excess delays in the order of 25 microsec.
In indoor and micro-cellular channels, the delay spread is usually smaller, and rarely exceeds a few hundred nanoseconds. Seidel and Rappaport reported delay spreads in four European cities of less than 8 microsec in macro-cellular channels, less than 2 microsec in micro-cellular channels, and between 50 and 300 ns in pico-cellular channels.
Resolvable Paths
A wideband signal with symbol duration Tc (or a direct-sequence (DS)-CDMA signal with chip time Tc) can "resolve" the time dispersion of the channel with an accuracy of about Tc. For DS-CDMA, the number of resolvable paths is

N = floor(Tdelay / Tchip) + 1

where floor(x) is the largest integer not exceeding x and Tdelay is the total length of the delay profile. A DS-CDMA Rake receiver can exploit N-fold path diversity.
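As an illustration, the following Python sketch counts resolvable paths for a few illustrative delay-profile lengths at an assumed chip rate of 10 Mchip/s.

import math

def resolvable_paths(t_delay_s, t_chip_s):
    """N = floor(Tdelay / Tchip) + 1 resolvable paths for DS-CDMA."""
    return math.floor(t_delay_s / t_chip_s) + 1

t_chip = 1 / 10e6                    # 10 Mchip/s -> 100 ns chip time
for t_delay_ns in (50, 250, 1000):
    n = resolvable_paths(t_delay_ns * 1e-9, t_chip)
    print(f"Tdelay = {t_delay_ns} ns -> {n} resolvable path(s)")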
Coherence Bandwidth
'Narrowband' transmission can be defined in the time domain by considering the interarrival times of multipath reflections and the time scale of the variations in the signal caused by modulation. A signal sees a narrowband channel if the bit duration is sufficiently larger than the interarrival time of the reflected waves. In such a case, the intersymbol interference is small.
Transformed into constraints in the frequency domain, this criterion is found to be satisfied
if the transmission bandwidth does not substantially exceed the 'coherence' bandwidth Bc of
the channel. This is the bandwidth over which the channel transfer function remains
virtually constant.
For a Rayleigh-fading channel with an exponential delay profile, one finds

Bc = 1/(2π Trms)

where Trms is the rms delay spread.
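As an illustration, the following Python sketch evaluates Bc = 1/(2π Trms), reproducing the roughly 640 kHz coherence bandwidth quoted above for Trms = 0.25 microsec.

import math

def coherence_bandwidth(t_rms_s):
    """Bc = 1 / (2 * pi * Trms) for an exponential delay profile."""
    return 1 / (2 * math.pi * t_rms_s)

print(f"{coherence_bandwidth(0.25e-6) / 1e3:.0f} kHz")   # ~637 kHz
for t_rms_ns in (50, 250, 3000):
    bc = coherence_bandwidth(t_rms_ns * 1e-9)
    print(f"Trms = {t_rms_ns} ns -> Bc = {bc/1e3:.0f} kHz")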
Scatter Function
The scatter function combines information about Doppler shifts and path delays. Each path
can be described by its
Angle of arrival and Doppler shift
Excess delay
Thus we can plot the received energy in a two dimensional plane, with Doppler shift on one
horizontal axis and delay on the other horizontal axis.
Reciprocity - differences between uplink and downlink
Two-way communication requires facilities for 'inbound', i.e., mobile-to-fixed, as well as
'outbound', i.e., fixed-to-mobile communication. In circuit-switched mobile communication,
such as cellular telephony, the inbound and outbound channel are also called the 'uplink' and
'downlink', respectively. The propagation aspects described on other pages are valid for
inbound and outbound channels. This is understood from the reciprocity theorem:
If, in a radio communication link, the role of the receive and transmit antenna are
functionally interchanged, the instantaneous transfer characteristics of the radio channel
remain unchanged.
In mobile multi-user networks with fading channels, the reciprocity theorem does not imply
that the inbound channel behaves identically as the outbound channel. Particular differences
occur for a number of link aspects:
man-made noise levels
The antenna of the base station is usually mounted on an appropriate antenna mast, such
that it does not suffer from attenuation caused by obstacles in its vicinity. The mobile
antenna, on the other hand, is at most mounted a few metres above ground level. The
man-made noise level, particularly automotive ignition noise, is likely to be substantially
higher at the mobile antenna than at the base station antenna.
effect of antenna diversity
Multipath scatterers mostly occur in the immediate vicinity of the mobile antenna. The
base station receives more or less a transversal electromagnetic wave, whereas the mobile
station receives a superposition of a set of reflected waves from random angles. Two
antennas at the mobile terminal are likely to receive uncorrelated signal powers if their separation is more than a wavelength. At the base station site, however, all reflections
arrive from almost identical directions. Therefore, diversity at the base station requires
much larger separation of the antennae to ensure uncorrelated received signal powers at
the two antennas. For the same reason, antenna directivity has different effects at the
mobile and the base station.
correlation of shadow fading of wanted signal and interfering signals
In a cellular network, shadow fading of the wanted signal received by the mobile station
is likely to be correlated with the shadow fading of the interference caused by other base
stations, or, in a spread-spectrum network, with the shadowing of simultaneously
transmitted signals from the same base station. In contrast to this, at the base station,
shadow fading of the wanted signal presumably is mostly statistically independent from
shadow fading of the interference. However, experimental results for correlation of
shadow attenuation are scarce.
full-duplex channels
In full-duplex operation, multipath fading of inbound and outbound channel, which
operate at widely different frequencies, may be uncorrelated. This will particularly be the
case if the delay spread is large.
multiplexing and multiple access
In a practical multi-user system with intermittent transmissions, inbound messages are
sent via a multiple-access channel, whereas in the outbound channel, signals destined for
different users can be multiplexed. In the latter case, the receiver in a mobile station can
maintain carrier and bit synchronisation to the continuous incoming bit stream from the
base station, whereas the receiver in the base station has to acquire synchronisation for
each user slot. Moreover, in packet-switched data networks, the inbound channel has to
accept randomly occurring transmissions by the terminals in the service area. Random-access protocols are required to organise the data traffic flow in the inbound channel,
and access conflicts ('contention') may occur.
In cellular networks with large traffic loads per base station, spread-spectrum modulation
can be exploited in the downlink to combat multipath fading, whereas in the uplink, the
signal powers from the various mobile subscribers may differ too much to effectively
apply spread-spectrum multiple access unless sophisticated adaptive power control
techniques are employed.
industrial design
From a practical point of view, the downlink and the uplink will be designed under
entirely different (cost) constraints, such as power consumption, size, weight and other
ergonomic aspects, energy radiated into the human body, and consumer cost aspects.
data traffic patterns
In packet data networks applied for traffic and transportation, the characteristics of the
data traffic flows are known to differ for the uplink and the downlink. For instance,
(outbound) messages from a fleet management centre to the vehicles are likely to be of a
more routine type, of a more uniform length and occur in a more regular pattern than
messages in the opposite (inbound) direction.
Indoor Wireless RF Channel
There are several causes of signal corruption in an indoor wireless channel. The primary
causes are signal attenuation due to distance, penetration losses through walls and floors and
multipath propagation.
Effect of distance
Signal attenuation over distance is observed when the mean received signal power is
attenuated as a function of the distance from the transmitter. The most common form of
this is often called free space loss and is due to the signal power being spread out over the
surface area of an increasing sphere as the receiver moves farther from the transmitter.
In addition to free space loss effects, the signal experiences decay due to ground wave loss
although this typically only comes into play for very large distances (on the order of
kilometers). For indoor propagation this mechanism is less relevant, but effects of wave
guidance through corridors can occur.
Multipath
Multipath results from the fact that the propagation channel consists of several obstacles and
reflectors. Thus, the received signal arrives as an unpredictable set of reflections and/or
direct waves each with its own degree of attenuation and delay. The delay spread is a
parameter commonly used to quantify multipath effects. Multipath leads to variations in the
received signal strength over frequency and antenna location.
Rate of fading
Time variation of the channel occurs if the communicating device (antenna) and components
of its environment are in motion. Closely related to Doppler shifting, time variation in
conjunction with multipath transmission leads to variation of the instantaneous received
signal strength about the mean power level as the receiver moves over distances on the order
of less than a single carrier wavelength. The channel fading becomes decorrelated over a distance of about half a carrier wavelength.
Fortunately, the degree of time variation within an indoor system is much less than that of
an outdoor mobile system. One manifestation of time variation is as spreading in the
frequency domain (Doppler spreading). Given the conditions of typical indoor wireless
systems, frequency spreading should be virtually nonexistent. Doppler spreads of 0.1 - 6.1
Hz (with RMS of 0.3 Hz) have been reported.
Some researchers have considered the effects of moving people. In particular it was found
by Ganesh and Pahlavan [9] that a line-of-sight delay spread of 40 ns can have a standard deviation of 9.2 - 12.8 ns. Likewise an obstructed delay spread can have a standard deviation of 3.7 - 5.7 ns.
For wireless LANs this could mean that an antenna placed in a local multipath null remains in a fade for a very long time. Measures such as diversity are needed to guarantee reliable
communication irrespective of the position of the antenna. Wideband transmission, e.g.
direct sequence CDMA, could provide frequency diversity.
The measured rms delay spreads ranged from 16 - 52 ns. Delay spread increased with
transmitter/receiver distance as well as room dimensions. The delay spread in the hallway
was relatively constant when compared to the other rooms. The path loss drop off rates
were 1.72 in the hallway, 1.99 in room 307 and 2.18 in room 550. Significant signal
transmission occurs through the walls. For the U.C. Berkeley Cory Hall building, this
suggests that walls cannot be used as natural cell boundaries.
Indoor Measurement Environment
We made measurements in three rooms and one hallway of Cory Hall as follows:
Room 550 Large office room with cubicles
Room 307 Medium office room with cubicles
Room 400 Small room used for seminars and presentations
4th Floor Hallway (South End)
In all of the tests the channel was kept stationary while the transfer function H'(f) was measured, in the sense that no non-essential movement occurred during a single sweep of frequencies except that required to operate the test equipment. To average out fading, the
receive and transmit antennas were slightly varied over position for each given separation
distance between the two antennas. Thus, for each separation distance a set of measurements was taken, where we define a measurement as the response from a
single sweep over frequency of the network analyzer. During the course of a set of
measurements, the amount of position variation between each individual measurement was
approximately one half of the carrier wavelength. This ensured that the various
measurements of the set would be uncorrelated.
Delay Spread
Essentially we treat the instantaneous impulse response as a pdf of various values of excess
delay where excess delay is defined as all possible values of delay in the instantaneous
impulse response after subtracting off the delay of the fastest arriving signal component. We
then numerically calculate the mean excess delay and the rms delay spread of this pdf. In
determining delay spread, we calculate a power level as the area under the instantaneous
impulse response curve.
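As an illustration of this procedure, the following Python sketch computes the mean excess delay and rms delay spread from a power-delay profile treated as a pdf; the profile values are illustrative, not the Cory Hall data.

import numpy as np

delays_ns = np.array([0, 50, 100, 200, 400])     # excess delay bins
powers = np.array([1.0, 0.5, 0.25, 0.1, 0.02])   # linear power per bin

pdf = powers / powers.sum()                      # normalize to a pdf
mean_excess = np.sum(pdf * delays_ns)            # mean excess delay
t_rms = np.sqrt(np.sum(pdf * (delays_ns - mean_excess) ** 2))
print(f"mean excess delay = {mean_excess:.1f} ns, "
      f"rms delay spread = {t_rms:.1f} ns")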
The first point to note is that the rms value generally increases with distance. Heuristically this can be explained by the idea that with greater distance the long-distance reflections are relatively stronger (compared to the line-of-sight component), so their contribution to an increased delay spread is more significant.
Note further that the smallest delay spreads are found in the hallway. The width of the
hallway was less than 8 feet, as compared to over 12 feet for Room 400 and well above 20 feet
for the other two rooms. These small dimensions in conjunction with the relative absence of
obstacles resulted in a delay spread that was essentially independent of distance. Heuristically
we can say that increasing the distance did not result in a greater number of multipath
reflections. Consistent with the above is the fact that delay spread is directly proportional to
the general dimensions of the rooms within which measurements were taken. Thus, Room
550 produced the greatest delay spreads and likewise Room 550 was the largest room.
FIGURE: RMS Delay Spread vs. Antenna Separation Distance
The Figures below exhibit instantaneous impulse responses from measurements in Room 550. These two figures serve as examples of different respective arrival times for the first and the strongest received signal components. Measurements were taken at a 35-foot receiver-transmitter separation distance.
FIGURE: Impulse Response
FIGURE: Impulse Response
Path Loss
We typically expect received power to be a function of distance as follows

p ∝ d^(−n)
where d represents distance. In the case of ideal free space loss, n = 2. We call n the path
loss rate.
As the three graphs show, signal strength drops off faster as dimensions of the room
increase. In the case of the hallway, the drop off rate was significantly smaller than 2.0 which
is the value for free space loss. The explanation for this is that the hallway acts as a
waveguide given its small width. Note that the received power levels in these Figures are in
dB (not in dBm) and are with respect to +10 dBm. To obtain absolute power values in the
figure, 10 dB must be added to each component. Note further that n is based on log-log data
although the graphs shown are log-linear.
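As an illustration, the following Python sketch estimates the path loss rate n by a least-squares fit of received power in dB against log10(distance), mirroring the log-log analysis mentioned above; the data points are illustrative, not the measured ones.

import numpy as np

d_feet = np.array([5, 10, 20, 35, 50])           # separation distances
p_dB = np.array([-40, -46, -52, -57, -60])       # received power, dB

# Model: p_dB = a - 10 * n * log10(d); a straight-line fit on log10(d).
slope, intercept = np.polyfit(np.log10(d_feet), p_dB, 1)
n = -slope / 10
print(f"estimated path loss rate n = {n:.2f}")   # ~2 for these data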
Rician K Factor
Each K-factor results from an average of eight measurements taken from the same
transmitter/receiver separation distance. Note that the K-factors are independent of distance
and the three results shown are very close to one another. In all cases the presence of a line
of sight path was the same. Given that the above measurements all had a LOS path available,
one would expect that the ratio of the strongest received signal component to the reflected
components would be the same. Our measurements bear this out.
Discussion for Wideband CDMA System
Delay Spread and Resolvable Paths
One of the advantages of direct-sequence spread spectrum is the ability to distinguish
between differing signal path arrivals. These resolvable paths can then be used to mitigate
corruption caused by the channel. Delay spread is directly related to the number of
resolvable paths. The rms delay spread in Cory Hall ranged from 16 to 52 ns. Given the
proposed InfoPad downlink CDMA chip rate of several tens of Megachips per second, we
arrive at the following:
2 - 3 Resolvable Paths Room 307
3 - 4 Resolvable Paths Room 550
2 - 3 Resolvable Paths Room 400
2 Resolvable Paths Hallway
For CDMA systems with smaller chip rates the channel would behave as a narrowband, i.e.,
frequency nonselective channel.
Path Loss, Wall Penetration and Cell Layout
An important issue for indoor cellular reuse systems is the possibility of interference from
users in adjacent cells. In designing cells it would be convenient if natural barriers such as
walls and ceilings/floors could be used as cell boundaries. With these thoughts in mind, we
took measurements through walls of Cory Hall.
The instantaneous impulse response was taken on the 5th floor of Cory Hall. The
transmitter was located in room 550 and the receiver was located in the corridor near the
freight elevator. The received powers shown are with respect to +10 dBm so that absolute
values can be obtained by adding 10 to magnitudes shown. Clearly the received signal
strength through the wall is significant and shows that some of the walls in Cory Hall cannot serve as cell boundaries. Note that certain walls in Cory consist of different materials
than those on the 5th floor. Most notably room 473 was once an anechoic chamber and is
essentially a metal cage. With these exceptions, we reiterate that most walls in Cory cannot serve as cell boundaries.
Antenna Fundamentals
An antenna radiation pattern or antenna pattern is defined as "a mathematical function or
graphical representation of the radiation properties of the antenna as a function of space
coordinates. In most cases, the radiation pattern is determined in the far-field region (i.e. in
the region where electric and magnetic fields vary inversely with distance) and is represented
as a function of the directional coordinates." The radiation property of most concern is the
two- or three-dimensional spatial distribution of radiated energy as a function of an
observer's position along a path or surface of constant distance from the antenna. A trace of
the received power at a constant distance is called a power pattern. A graph of the spatial
variation of the electric or magnetic field along a constant-distance path is called a field
pattern.
An isotropic antenna is defined as "a hypothetical lossless antenna having equal radiation in
all directions." Clearly, an isotropic antenna is a fictitious entity, since even the simplest
antenna has some degree of directivity. Although hypothetical and not physically realizable,
an isotropic radiator is taken as a reference for expressing the directional properties of actual
antennas. A directional antenna is one "having the property of radiating or receiving
electromagnetic waves more effectively in some directions than in others." The term is usually
applied to an antenna whose maximum directivity is significantly greater than that of a linear
dipole antenna. The power pattern of a half-wave linear dipole is shown below.
The linear dipole is an example of an omnidirectional antenna -- i.e. an antenna having a
radiation pattern which is nondirectional in a plane. As the figure above indicates, a linear
dipole has uniform power flow in any plane perpendicular to the axis of the dipole and the
maximum power flow is in the equatorial plane. One important class of directional antennas is formed from linear arrays of linear dipoles, as illustrated below. The most famous and
ubiquitous member of this class is the so called Uda-Yagi antenna shown on the right.
As an example of array patterns, the highly directional power pattern of an end-fire linear array
of 8 half-wave linear dipoles is shown below.
Like the Uda-Yagi antenna, the end-fire array pictured here has its maximum
power flow in a direction parallel to the line along which the dipoles are deployed. The
power flow in the opposite or back direction is negligible!
Antenna Terms:
Directivity: The directivity of a transmitting antenna is defined as the ratio of the
radiation intensity flowing in a given direction to the radiation intensity averaged over all directions. The average radiation intensity is equal to the total power radiated by the antenna divided by 4π. If the direction is not specified, the direction of
maximum radiation intensity is usually implied. Directivity is sometimes referred to as directive gain.
Absolute gain: The absolute gain of a transmitting antenna in a given direction is
defined as the ratio of the radiation intensity flowing in that direction to the radiation
intensity that would be obtained if the power accepted by the antenna were radiated
isotropically. If the direction is not specified, the direction of maximum radiation
intensity is usually implied. (Absolute gain is closely related to directivity, but it takes
into account the efficiency of the antenna as well as its directional characteristics. To distinguish it, the absolute gain is sometimes referred to as power gain.)
Relative gain: The relative gain of a transmitting antenna in a given direction is
defined as the ratio of the absolute gain of the antenna in the given direction to the
absolute gain of a reference antenna in the same direction. The power input to the
two antennas must be the same.
Efficiency: The efficiency of a transmitting antenna is the ratio of the total power radiated by the antenna to the input power to the antenna.
Effective area (aperture): The effective area or aperture of a receiving antenna in a given direction is defined as the ratio of the available power at the terminals of the antenna to the radiation intensity of a plane wave incident on the antenna in the given direction. If the direction is not specified, the direction of maximum radiation intensity is usually implied. It can be shown that when an isotropic antenna is used as a receiving antenna its effective area is the wavelength squared divided by 4π. Thus, the gain of a receiving antenna is the ratio of the antenna's effective area to that of an isotropic antenna, i.e. G = 4π Ae / λ² (a numeric sketch follows after this list).
Antenna factor: The ratio of the magnitude of the electric field incident upon a
receiving antenna to the voltage developed at the antenna's output connector
(assuming a 50 ohm coaxial connector) is called the antenna factor. The antenna factor is clearly related to the gain of the antenna, but is often found to be the most convenient parameter for use in the monitoring of electromagnetic emissions.
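As an illustration, the following Python sketch applies the relation G = 4π Ae / λ², using the isotropic effective area λ²/(4π) as reference; the frequency and dish size are illustrative, and 100% aperture efficiency is assumed.

import math

def gain_from_aperture(area_m2, wavelength_m):
    """Aperture-gain relation G = 4 * pi * Ae / lambda**2."""
    return 4 * math.pi * area_m2 / wavelength_m ** 2

wavelength = 0.03                                # 10 GHz -> 3 cm
iso_area = wavelength ** 2 / (4 * math.pi)       # isotropic effective area
dish_area = math.pi * 0.5 ** 2                   # ideal 1 m diameter dish
g = gain_from_aperture(dish_area, wavelength)
print(f"isotropic area = {iso_area * 1e4:.2f} cm^2, "
      f"dish gain = {10 * math.log10(g):.1f} dBi")   # ~40 dBi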
Chapter No:5
Introduction to Satellite
Communication
Communications Satellite
A communications satellite (sometimes abbreviated to comsat) is an
artificial satellite stationed in space for the purposes of telecommunications. Modern
communications satellites use geostationary orbits, Molniya orbits or low polar Earth orbits.
For fixed services, communications satellites provide a technology complementary to that of
fiber optic submarine communication cables. They are also used for mobile applications
such as communications to ships and planes, for which application of other technologies,
such as cable, are impractical or impossible.
History
Early missions
The first satellite equipped with on-board radio-transmitters was the Soviet Sputnik 1,
launched in 1957. The first American satellite to relay communications was Project SCORE
in 1958, which used a tape recorder to store and forward voice messages. It was used to send
a Christmas greeting to the world from President Eisenhower. NASA launched an Echo
satellite in 1960; the 100-foot aluminized PET film balloon served as a passive reflector for
radio communications. Courier 1B, (built by Philco) also launched in 1960, was the world‘s
first active repeater satellite.
Telstar was the first active, direct relay communications satellite. Belonging to AT&T as part
of a multi-national agreement between AT&T, Bell Telephone Laboratories, NASA, the
British General Post Office, and the French National PTT (Post Office) to develop satellite
communication, it was launched by NASA from Cape Canaveral on July 10, 1962, the first
privately sponsored space launch. Telstar was placed in an elliptical orbit (completed once
every 2 hours and 37 minutes), rotating at a 45° angle above the equator.
An immediate antecedent of the geostationary satellites was Hughes' Syncom 2, launched on
July 26, 1963. Syncom 2 revolved around the earth once per day at constant speed, but
because it still had north-south motion, special equipment was needed to track it.
The first truly geostationary satellite was its successor, Syncom 3, launched
on August 19, 1964. It was placed in orbit at 180° east longitude, over the International Date
Line. It was used that same year to relay television coverage of the 1964 Summer Olympics
in Tokyo to the United States, the first television transmission sent over the Pacific Ocean.
Shortly after Syncom 3, Intelsat I, aka Early Bird, was launched on April 6, 1965 and placed
in orbit at 28° west longitude. It was the first geostationary satellite for telecommunications
over the Atlantic Ocean.
On November 9, 1972, North America's first geostationary satellite serving the continent,
Anik A1, was launched by Telesat Canada, with the United States following suit with the
launch of Westar 1 by Western Union on April 13, 1974.
Geostationary orbits
A satellite in a geostationary orbit appears to be in a fixed position to an earth-based
observer. A geostationary satellite revolves around the earth at a constant speed once per day
over the equator.
The geostationary orbit is useful for communications applications because ground based
antennas, which must be directed toward the satellite, can operate effectively without the
need for expensive equipment to track the satellite's motion. Especially for applications that
require a large number of ground antennas (such as direct TV distribution), the savings in
ground equipment can more than justify the extra cost and onboard complexity of lifting a
satellite into the relatively high geostationary orbit.
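As an illustration of why the geostationary orbit sits at this particular altitude, the following Python sketch solves Kepler's third law, r = (μ T² / 4π²)^(1/3), for a period of one sidereal day, using standard constants.

import math

MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6378e3           # equatorial radius, m
T_SIDEREAL = 86164.1       # one sidereal day, s

# Kepler's third law solved for the orbital radius.
r = (MU * T_SIDEREAL ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
print(f"radius = {r/1e3:.0f} km, altitude = {(r - R_EARTH)/1e3:.0f} km")
# -> altitude of about 35,786 km above the equator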
The concept of the geostationary communications satellite was first proposed by Arthur C.
Clarke, building on work by Konstantin Tsiolkovsky and on the 1929 work by Herman
Potočnik (writing as Herman Noordung) Das Problem der Befahrung des Weltraums - der
Raketen-motor. In October 1945 Clarke published an article titled "Extra-terrestrial Relays"
in the British magazine Wireless World. The article described the fundamentals behind the
deployment of artificial satellites in geostationary orbits for the purpose of relaying radio
signals. Thus Arthur C. Clarke is often quoted as being the inventor of the communications
satellite.
After the launchings of Telstar, Syncom 3, Early Bird, Anik A1, and Westar 1, RCA
Americom (later GE Americom, now SES Americom) launched Satcom 1 in 1975. It was
Satcom 1 that was instrumental in helping early cable TV channels such as WTBS (now TBS
Superstation), HBO, CBN (now ABC Family), and The Weather Channel become
successful, because these channels distributed their programming to all of the local cable TV
headends using the satellite. Additionally, it was the first satellite used by broadcast TV
networks in the United States, like ABC, NBC, and CBS, to distribute their programming to
all of their local affiliate stations. Satcom 1 was so widely used because it had twice the
communications capacity of the competing Westar 1 in America (24 transponders as
opposed to Westar 1's 12), resulting in lower transponder usage costs.
By 2000 Hughes Space and Communications (now Boeing Satellite Development Center)
had built nearly 40 percent of the satellites in service worldwide. Other major satellite
manufacturers include Space Systems/Loral, Lockheed Martin (owns former RCA Astro
Electronics/GE Astro Space business), Northrop Grumman, Alcatel Space and EADS
Astrium.
Low-Earth-orbiting satellites
A low Earth orbit typically is a circular orbit about 400 kilometres above the earth's surface
and, correspondingly, a period (time to revolve around the earth) of about 90 minutes.
Because of their low altitude, these satellites are only visible from within a radius of roughly
1000 kilometres from the sub-satellite point. In addition, satellites in low earth orbit change
their position relative to the ground position quickly. So even for local applications, a large
number of satellites are needed if the mission requires uninterrupted connectivity.
Low earth orbiting satellites are less expensive to position in space than geostationary
satellites and, because of their closer proximity to the ground, require lower signal strength
(Recall that signal strength falls off as the square of the distance from the source, so the
effect is dramatic). So there is a trade off between the number of satellites and their cost. In
addition, there are important differences in the onboard and ground equipment needed to
support the two types of missions.
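As an illustration of this inverse-square trade-off, the following Python sketch compares the free-space path loss, FSPL(dB) = 20 log10(4π d / λ), for a rough LEO slant range and a GEO range at an assumed 2 GHz carrier.

import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20 * log10(4 * pi * d / lambda)."""
    wavelength = 3e8 / freq_hz
    return 20 * math.log10(4 * math.pi * distance_m / wavelength)

f = 2e9                            # assumed 2 GHz carrier
leo, geo = 1000e3, 36000e3         # rough LEO slant range vs GEO distance
print(f"LEO: {fspl_db(leo, f):.1f} dB, GEO: {fspl_db(geo, f):.1f} dB, "
      f"difference: {fspl_db(geo, f) - fspl_db(leo, f):.1f} dB")  # ~31 dB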
A group of satellites working in concert is known as a satellite constellation. Two such constellations, intended to provide handheld telephony primarily to remote areas, were Iridium and Globalstar. The Iridium system has 66 satellites. Another
LEO satellite constellation, with backing from Microsoft entrepreneur Paul Allen, was to
have as many as 720 satellites.
It is also possible to offer discontinuous coverage using a low Earth orbit satellite capable of
storing data received while passing over one part of Earth and transmitting it later while
passing over another part. This will be the case with the CASCADE system of Canada's CASSIOPE communications satellite.
Molniya satellites
As mentioned, geostationary satellites are constrained to operate above the equator. As a
consequence, they are not always suitable for providing services at high latitudes: at high latitudes a geostationary satellite may appear low on (or even below) the horizon, affecting
connectivity and causing multipathing (interference caused by signals reflecting off the
ground into the ground antenna). The first satellite of Molniya series was launched on April
23, 1965 and was used for experimental transmission of TV signal from Moscow uplink
station to downlink stations, located in Siberia and Russian Far East, in Norilsk,
Khabarovsk, Magadan and Vladivostok. In November 1967 Soviet engineers created a unique national satellite-television network, called Orbita, based on Molniya satellites.
Molniya orbits can be an appealing alternative in such cases. The Molniya orbit is highly
inclined, guaranteeing good elevation over selected positions during the northern portion of
the orbit. (Elevation is the extent of the satellite's position above the horizon. Thus a satellite
at the horizon has zero elevation and a satellite directly overhead has elevation of
90 degrees).
Furthermore, the Molniya orbit is so designed that the satellite spends the great majority of
its time over the far northern latitudes, during which its ground footprint moves only
slightly. Its period is one half day, so that the satellite is available for operation over the
targeted region for eight hours every second revolution. In this way a constellation of three
Molniya satellites (plus in-orbit spares) can provide uninterrupted coverage.
Molniya satellites are typically used for telephony and TV services over Russia. Another
application is to use them for mobile radio systems (even at lower latitudes) since cars
travelling through urban areas need access to satellites at high elevation in order to secure
good connectivity, e.g. in the presence of tall buildings.
IndoStar-1 Satellite
IndoStar-1, also known as Cakrawarta-1, is a communications satellite that was launched aboard an Ariane rocket from Kourou, French Guiana in November 1997. As the first direct broadcasting satellite (DBS) in Indonesia, IndoStar-1 initiated new communication services for Indonesian society, such as Direct-To-Home television. Designed and built by Orbital Sciences, IndoStar-1 was also the world's first commercial communications satellite to use S-band frequencies, which are less vulnerable to atmospheric interference than the C band. Providing high-quality transmission to small-diameter antennas and penetrating the atmosphere efficiently, this satellite is well suited for Indonesia, a tropical country with heavy rainfall.
This satellite, which is managed and operated by PT Media Citra Indostar (MCI), provides direct broadcasting with high-quality digital transmission. Operationally, the IndoStar-1 satellite is used for commercial cable-television service: cable television uses the satellite to relay international and local programs directly, so that they can be received all over Indonesia.
Description about IndoStar-1 Satellite:
Status: Satellite was launched via Ariane (V102) in November 1997
Destination: Geosynchronous Orbit
Agency: PT Media Citra Indostar, Jakarta
Performance Launch mass: 1,350 kg
Class: Communications
Mission: Provide direct broadcast television to Indonesia (high quality digital
transmission, approximately 40 television channels)
Mission life: 12 years
Manufacturer: Orbital Sciences
Applications
Telephony
A BSS 601 model, owned by SES Astra, used for DTH television broadcasting in Europe
The first and historically the most important application for communication satellites is in
international telephony. Fixed-point telephones relay calls to an earth station, where they are
then transmitted to a geostationary satellite. An analogous path is then followed on the
downlink. In contrast, mobile telephones (to and from ships and airplanes) must be directly
connected to equipment to uplink the signal to the satellite, as well as being able to ensure
satellite pointing in the presence of disturbances, such as waves onboard a ship.
Handheld telephones (cellular phones) used in urban areas do not make use of satellite communications. Instead they have access to a ground-based network of receiving and retransmitting stations.
Satellite Television and radio
There are two satellite types used for North American television and radio:
Direct Broadcast Satellite (DBS), and
Fixed Service Satellite (FSS).
A direct broadcast satellite is a communications satellite that transmits to small DBS satellite
dishes (usually 18 to 24 inches in diameter). Direct broadcast satellites generally operate in
the upper portion of the Ku band. DBS technology is used for Direct-To-Home (DTH) satellite TV services, such as DirecTV, DISH Network, and Sky Angel in the United States, ExpressVu in Canada, and Sky Digital in the UK, Republic of Ireland and New Zealand.
Fixed Service Satellites use the C band, and the lower portions of the Ku bands. They are
normally used for broadcast feeds to and from television networks and local affiliate stations
(such as program feeds for network and syndicated programming, live shots, and backhauls),
as well as being used for distance learning by schools and universities, business television
(BTV), video-conferencing, and general commercial telecommunications. FSS satellites are
also used to distribute national cable channels to cable TV headends.
FSS satellites differ from DBS satellites in that they have a lower RF power output than the
latter, requiring a much larger dish for reception (3 to 8 feet in diameter for Ku band, and
12 feet on up for C band), as well as using linear polarization for each of the transponders'
RF input and output (as opposed to circular polarization used by DBS satellites). FSS
satellite technology was also originally used for DTH satellite TV from the late 1970s to the
early 1990s in the United States in the form of TVRO (TeleVision Receive Only) receivers
and dishes (also known as big-dish, or more pejoratively known as "BUD" or "Big ugly dish"
systems). It was also used in its Ku band form for the now-defunct Primestar satellite TV
service.
This all changed when the first American DBS provider, DirecTV, was established in 1994,
stealing the limelight from FSS satellite technology for DTH programming (due to
DirecTV's smaller 18-inch diameter dishes and lower equipment cost). However, FSS
satellites on the C and Ku bands still are used by cable and satellite channels such as CNN,
The Weather Channel, HBO, Starz, and others, for distribution to cable TV headends (as
mentioned earlier), and to the DBS providers themselves such as DirecTV and DISH
Network who then re-distribute these channels over their own DBS systems.
The fact that these channels still exist on FSS satellites (more so for reception and redistribution by cable TV and DBS systems than for DTH viewers) makes TVRO
systems for DTH viewing a still-viable option for satellite TV, often being a much-cheaper
alternative to DBS, as far as monthly subscription fees are concerned. TVRO-oriented
programming packages sold by companies such as National Programming Services,
Bigdish.com, and Skyvision, are often quite a bit cheaper than their DBS equivalents.
Motorola still makes digital 4DTV receivers for DTH TVRO use, and analog TVRO
receivers are still available.
However, the hardware for a brand-new TVRO system (dish and receiver, along with a
VideoCipher or DigiCipher descrambler, or an integrated receiver/decoder (IRD) like a
4DTV system, instead of a separate receiver and descrambler/decoder) nowadays costs quite
a bit more than a DBS system (about US$1500–2000, including installation). But most older
used TVRO systems can be had almost for free, due to most people converting over to DBS
systems over the years. Unlike DBS, big-dish TVRO satellite TV also provides a plethora of
unscrambled and unencrypted channels such as Classic Arts Showcase, and feeds of
syndicated TV shows for reception by local TV stations.
Free-to-air satellite TV channels are also usually distributed on FSS satellites in the Ku band.
The Intelsat Americas 5, Galaxy 10R and AMC 3 satellites over North America provide a
quite large amount of FTA channels on their Ku band transponders.
The American Dish Network DBS service has also recently utilized FSS technology as well
for their programming packages requiring their SuperDish antenna, due to Dish Network
needing more capacity to carry local television stations per the FCC's "must-carry"
regulations, and for more bandwidth to carry HDTV channels.
Satellites for communication have now been launched that have transponders in the Ka
band, such as DirecTV's SPACEWAY-1 satellite, and Anik F2. NASA as well has launched
experimental satellites using the Ka band recently.
The definitions of FSS and DBS satellites outside of North America, especially in Europe,
are a bit more ambiguous. Most satellites used for direct-to-home television in Europe have
the same high power output as DBS-class satellites in North America, but use the same
linear polarization as FSS-class satellites. Examples of these are the Astra, Eutelsat, and
Hotbird spacecraft in orbit over the European continent. Because of this, the terms FSS and
DBS are more so used throughout the North American continent, and are uncommon in
Europe.
See broadcast satellites for further information on FSS and DBS satellites in orbit.
Mobile satellite technologies
Initially available for broadcast to stationary TV receivers, by 2004 popular mobile direct broadcast applications made their appearance with the arrival of two satellite radio systems
in the United States: Sirius and XM Satellite Radio Holdings. Some manufacturers have also
introduced special antennas for mobile reception of DBS television. Using GPS technology
as a reference, these antennas automatically re-aim to the satellite no matter where or how
the vehicle (that the antenna is mounted on) is situated. These mobile satellite antennas are
popular with some recreational vehicle owners. Such mobile DBS antennas are also used by
JetBlue Airways for DirecTV (supplied by LiveTV, a subsidiary of JetBlue), which
passengers can view on-board on LCD screens mounted in the seats.
Amateur radio
Amateur radio operators have access to the OSCAR satellites that have been designed
specifically to carry amateur radio traffic. Most such satellites operate as spaceborne
repeaters, and are generally accessed by amateurs equipped with UHF or VHF radio
equipment and highly directional antennas such as Yagis or dish antennas. Due to the
limitations of ground-based amateur equipment, most amateur satellites are launched into
fairly low Earth orbits, and are designed to deal with only a limited number of brief contacts
at any given time. Some satellites also provide data-forwarding services using the AX.25 or
similar protocols.
Satellite broadband
In recent years, satellite communication technology has been used as a means to connect to
the Internet via broadband data connections. This can be very useful for users who are
located in very remote areas, and cannot access a wireline broadband or dialup connection.
Very Small Aperture Terminal (VSAT)
A VSAT is a two-way satellite ground station with a dish antenna that is smaller than 3 meters (most
VSAT antennas range from 75 cm to 1.2 m). VSAT data rates typically range from
narrowband up to 4 Mbit/s. VSATs access satellites in geosynchronous orbit to relay data
from small remote earth stations (terminals) to other terminals (in mesh configurations) or
master earth station "hubs" (in star configurations).
VSATs are most commonly used to transmit narrowband data (point of sale transactions
such as credit card, polling or RFID data; or SCADA), or broadband data (for the provision
of Satellite Internet access to remote locations, VoIP or video). VSATs are also used for
transportable, on-the-move (with phased-array antennas) or mobile maritime (such as
Inmarsat or BGAN) communications.
Usage
The first commercial VSATs were C band (6 GHz) receive-only systems by Equatorial
Communications using spread spectrum technology. More than 30,000 60 cm antenna
systems were sold in the early 1980s. Equatorial later developed a C band (4/6 GHz) 2 way
system using 1 m x 0.5 m antennas and sold about 10,000 units in 1984-85.
In 1985, Schlumberger Oilfield Research co-developed the world's first Ku band (12-14
GHz) VSATs with Hughes Aerospace to provide portable network connectivity for oil field
drilling and exploration units. Ku Band VSATs make up the vast majority of sites in use
today for data or telephony applications.
The largest VSAT network (more than 12,000 sites) was deployed by Spacenet and MCI for
the US Postal Service. Other large VSAT network users include Walgreens Pharmacy, Dollar
General, Wal-Mart, CVS, Riteaid, Yum! Brands (Taco Bell, Pizza Hut, Long John Silver's
and other Quick Service Restaurant chains), GTECH and SGI for lottery terminals. VSATs
are used by car dealerships affiliated with manufacturers such as Ford and General Motors
for transmitting and receiving sales figures and orders, as well as for receiving internal
communications, service bulletins, and interactive distance learning courses from
manufacturers. The FordStar network, used by Ford and its local dealers, is an example of
this.
VSAT technology is also used for two-way satellite Internet providers such as HughesNet,
StarBand and WildBlue in the United States; and Bluestream, SatLynx and Technologie
Satelitarne in Europe, among others. These services are used across the world as a means of
delivering broadband Internet access to locations which cannot get less expensive broadband
connections such as ADSL or cable internet access; usually remote or rural locations.
Nearly all VSAT systems are now based on IP, with a very broad spectrum of applications.
As of December 2004, the total number of VSATs ordered stood at over 1 million, with
nearly 650,000 in service. Annual VSAT service revenues were $3.88 billion.
Configurations
Most VSAT networks are configured in one of these topologies:
A star topology, using a central uplink site, such as a network operations center
(NOC), to transport data back and forth to each VSAT terminal via satellite,
A mesh topology, where each VSAT terminal relays data via satellite to another
terminal by acting as a hub, minimizing the need for a centralized uplink site,
A combination of both star and mesh topologies. Some VSAT networks are
configured by having several centralized uplink sites (and VSAT terminals stemming from them) connected in a multi-star topology with each star (and each terminal in each
star) connected to each other in a mesh topology. Others configured in only a single
star topology sometimes will have each terminal connected to each other as well,
resulting in each terminal acting as a central hub. These configurations are utilized to
minimize the overall cost of the network, and to alleviate the amount of data that has
to be relayed through a central uplink site (or sites) of a star or multi-star network.
Star topology services like HughesNet, Spacenet Connexstar/StarBand, WildBlue and others
can be used to provide broadband wide area networks, as well as to provide broadband
Internet access. Applications of this include intranet networking for front and back office
applications, managed store and forward solutions such as digital signage, and interactive
distance learning.
Pros and cons of VSAT networks
Advantages
Availability: VSAT services can be deployed anywhere having a clear view of the
Clarke Belt
Diversity: VSAT provides a wireless link completely independent of the local
terrestrial/wireline infrastructure - especially important for backup or disaster
recovery services
Deployability: VSAT services can be deployed in hours or even minutes (with auto-acquisition antennas)
Homogeneity: VSAT enables customers to get the same speeds and SLAs at all
locations across their entire network regardless of location
Acceleration: Most modern VSAT systems use onboard acceleration of protocols
such as TCP ("spoofing" of acknowledgement packets) and HTTP (pre-fetching of
recognized HTTP objects); this delivers high-quality Internet performance regardless
of latency (see below)
Multicast: Most current VSAT systems use a broadcast download scheme (such as
DVB-S) which enables them to deliver the same content to tens or thousands of
locations simultaneously at no additional cost
Security: Corporate-grade VSAT networks are private layer-2 networks over the air
Disadvantages
Latency: Since they relay signals off a satellite in geosynchronous orbit 22,300 miles (about 35,786 km) above the Earth, VSAT links are subject to a minimum latency of approximately 500 milliseconds round-trip (see the sketch after this list). This makes them a poor choice for "chatty" protocols or applications such as online gaming
Encryption: The acceleration schemes used by most VSAT systems rely upon the
ability to see a packet's source/destination and contents; packets encrypted via VPN
defeat this acceleration and perform slower than other network traffic
Environmental concerns: VSATs are subject to signal attenuation due to weather
("rain fade"); the effect is typically far less than that experienced by one-way TV
systems (such as DirecTV or DISH Network) that use smaller dishes, but is still a
function of antenna size and transmitter power and frequency band
Installation: VSAT services require an outdoor antenna installation with a clear view
of the sky (southern sky if the location is in the northern hemisphere or northern sky
if the location is in the southern hemisphere); this makes installation in skyscraper
urban environments or locations where a customer does not have "roof rights"
problematic
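As an illustration of the latency figure quoted in the list above, the following Python sketch bounds the round-trip time for a star-topology geostationary link; it assumes four traversals of the Earth-satellite path at the speed of light, and the one-way slant range is somewhat larger at low elevation angles.

C = 299792458.0          # speed of light, m/s
GEO_ALT_M = 35786e3      # geostationary altitude, m

# Terminal -> satellite -> hub, then hub -> satellite -> terminal:
# four traversals of the Earth-satellite path per round trip.
rtt_s = 4 * GEO_ALT_M / C
print(f"minimum round-trip latency ~ {rtt_s * 1e3:.0f} ms")   # ~477 ms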
Future applications
Advances in technology have dramatically improved the price/performance equation of FSS
(Fixed Satellite Services) over the past five years. New VSAT systems are coming online
using Ka band technology that promise higher bandwidth rates for lower costs.
FSS satellite systems currently in orbit have a huge capacity with a relatively low price
structure. FSS satellite systems provide various applications for subscribers, including: phone
conversations; fax; TV broadcast; high speed communication services; Internet access; video
conferencing; Satellite News Gathering (SNG); Digital Audio Broadcasting (DAB) and
others. These systems are applicable for providing various high-quality services because they
create efficient communication systems, both for residential and business users.
Microwave transmission refers to the technique of transmitting information over a
Microwave link. Since microwaves are highly susceptible to attenuation by the atmosphere
(especially during wet weather), the use of microwave transmission is limited to a few
contexts.
Microwave power transmission (MPT) is the use of microwaves to transmit power through
outer space or the atmosphere without the need for wires. It is a sub-type of the more
general Wireless energy transfer methods, and is the most interesting because microwave
devices offer the highest efficiency of conversion between DC electricity and microwave
radiative power.
Following World War II, which saw the development of high-power microwave emitters
known as cavity magnetrons, the idea of using microwaves to transmit power was
researched. In 1964, William C. Brown demonstrated a miniature helicopter equipped with a
combination antenna and rectifier device called a rectenna. The rectenna converted
microwave power into electricity, allowing the helicopter to fly[1]. In principle, the rectenna is
capable of very high conversion efficiencies - over 90% in optimal circumstances.
Most proposed MPT systems now usually include a phased array microwave transmitter.
While these have lower efficiency levels, they have the advantage of being electrically steered
using no moving parts, and are easier to scale to the necessary levels that a practical MPT
system requires.
Using microwave power transmission to deliver electricity to communities without having to
build cable-based infrastructure is being studied at Grand Bassin on Reunion Island in the
Indian Ocean.
Common safety concerns
The common reaction to microwave transmission is one of concern, as microwaves are
generally perceived by the public as dangerous forms of radiation - stemming from the fact
that they are used in microwave ovens. While high power microwaves can be painful and
dangerous as in the United States Military's Active Denial System, MPT systems are generally
proposed to have only low intensity at the rectenna.
Such a system would be extremely safe, since the power levels would be about equal to the
leakage from a microwave oven and only slightly more than a cell phone's. However, the
relatively diffuse microwave beam necessitates a large rectenna area for a significant
amount of energy to be transmitted.
Research has involved exposing multiple generations of animals to microwave radiation of
this or higher intensity, and no health issues have been found.
Proposed uses
MPT is the most commonly proposed method for transferring energy to the surface
of the Earth from solar power satellites or other in-orbit power sources.
MPT is occasionally proposed for the power source in beamed energy orbital space
ships. Although lasers are more commonly proposed, their low efficiency in light
generation and reception has led some designers to opt for microwave based
systems. Although microwaves are more easily scaled to high powers and suffer from
less atmospheric distortion, the engineering hurdles in building a man-capable craft
that can reach orbit on beamed megawatts of power have prevented the realization of
such plans.
In the context of spaceflight, a satellite is any object which has been placed into orbit by
human endeavor. They are sometimes called artificial satellites to distinguish them from
natural satellites such as the Moon.
History of artificial satellites
Soviet Union
The first artificial satellite was Sputnik 1, launched by the Soviet Union on 4 October 1957.
This triggered the Space Race between the Soviet Union and the United States.
United States
In May, 1946, Project RAND had released the Preliminary Design of an Experimental
World-Circling Spaceship, which stated, "A satellite vehicle with appropriate instrumentation
can be expected to be one of the most potent scientific tools of the Twentieth Century."[4]
The United States had been considering launching orbital satellites since 1945 under the
Bureau of Aeronautics of the United States Navy. The Air Force's Project RAND eventually
released the above report, but did not believe that the satellite was a potential military
weapon; rather they considered it to be a tool for science, politics, and propaganda. In 1954,
the Secretary of Defense stated, "I know of no American satellite program."
Following pressure by the American Rocket Society, the National Science Foundation, and
the International Geophysical Year, military interest picked up and in early 1955 the Air
Force and Navy were working on Project Orbiter, which involved using a Jupiter C rocket to
launch a small satellite called Explorer 1 on January 31, 1958.
On July 29, 1955, the White House announced that the U.S. intended to launch satellites by
the spring of 1958. This became known as Project Vanguard. On July 31, the Soviets
announced that they intended to launch a satellite by the fall of 1957.
International
The largest artificial satellite currently orbiting the Earth is the International Space Station.
Space Surveillance Network
The United States Space Surveillance Network (SSN) has been tracking space objects since
1957 when the Soviets opened the space age with the launch of Sputnik I. Since then, the
SSN has tracked more than 26,000 space objects orbiting Earth. The SSN currently tracks
more than 8,000 man-made orbiting objects. The rest have re-entered Earth's turbulent
atmosphere and disintegrated, or survived re-entry and impacted the Earth. The space
objects now orbiting Earth range from satellites weighing several tons to pieces of spent
rocket bodies weighing only 10 pounds. About seven percent of the space objects (roughly
560) are operational satellites; the rest are debris. USSTRATCOM is primarily
interested in the active satellites, but also tracks space debris which upon reentry might
otherwise be mistaken for incoming missiles. The SSN tracks space objects that are 10
centimeters in diameter (baseball size) or larger.
Communication satellites
A telecommunications satellite is a kind of satellite (the types are described below) that is
very close to our daily life. Arthur C. Clarke was one of the pioneers of this field; he
fostered the idea of a worldwide satellite system. Echo I, a passive communications satellite
launched in 1960, was not yet equipped with a two-way system and functioned instead as a
reflector. Not long afterwards, Telstar I, an active communications satellite carrying
receiving and transmitting equipment, was launched in 1962 and took an active part in the
reception-transmission process. Telstar created the world's first international television
link. Mirabito & Morgenstern, in their book The New Communication Technologies:
Applications, Policy, and Impact, 5th edition, therefore said that Telstar had paved the way
for today's communication spacecraft.
Types
MILSTAR: a communications satellite
Anti-satellite weapons, sometimes called "killer satellites," are satellites designed to
destroy "enemy" satellites, other orbital weapons and targets. Some are armed with
kinetic rounds, while others use energy and/or particle weapons to destroy satellites,
ICBMs and MIRVs. Both the U.S. and the USSR had these satellites.
Astronomical satellites are satellites used for observation of distant planets, galaxies,
and other outer space objects.
Biosatellites are satellites designed to carry living organisms, generally for scientific
experimentation.
Communications satellites are satellites stationed in space for the purpose of
telecommunications. Modern communications satellites typically use geosynchronous
orbits, Molniya orbits or Low Earth orbits.
Miniaturized satellites are satellites of unusually low weights and small sizes. New
classifications are used to categorize these satellites: minisatellite (200–500 kg),
microsatellite (below 200 kg), nanosatellite (below 10 kg).
Navigational satellites are satellites which transmit radio time signals that enable
mobile receivers on the ground to determine their exact location. The relatively clear
line of sight between the satellites and receivers on the ground, combined with
ever-improving electronics, allows satellite navigation systems to measure location to
accuracies on the order of a few metres in real time.
Reconnaissance satellites are Earth observation satellites or communications satellites
deployed for military or intelligence applications. Little is known about the full
power of these satellites, as governments who operate them usually keep information
pertaining to their reconnaissance satellites classified.
Earth observation satellites are satellites intended for non-military uses such as
environmental monitoring, meteorology, map making etc. (See especially Earth
Observing System.)
Solar power satellites are proposed satellites built in high Earth orbit that use
microwave power transmission to beam solar power to very large antennae on Earth
where it can be used in place of conventional power sources.
Space stations are man-made structures that are designed for human beings to live
on in outer space. A space station is distinguished from other manned spacecraft by
its lack of major propulsion or landing facilities — instead, other vehicles are used as
transport to and from the station. Space stations are designed for medium-term
living in orbit, for periods of weeks, months, or even years.
Weather satellites are satellites that primarily are used to monitor Earth's weather and
climate.
Orbit types
Centric Classifications
Galacto-centric Orbit - An orbit about the center of a galaxy. Earth's sun follows this
type of orbit about the galactic center of the Milky Way.
Heliocentric Orbit - An orbit around the Sun. In our Solar System, all planets,
comets, and asteroids are in such orbits, as are many artificial satellites and pieces of
space debris. Moons by contrast are not in a heliocentric orbit but rather orbit their
parent planet.
Geocentric Orbit - An orbit around the planet Earth, such as the Moon or artificial
satellites. Currently there are approximately 2465 artificial satellites orbiting the
Earth.
Areocentric Orbit - An orbit around the planet Mars, such as moons or artificial
satellites.
Altitude Classifications
Low Earth Orbit (LEO) - Geocentric orbits ranging in altitude from 0 - 2,000 km (0
- 1,240 miles)
Medium Earth Orbit (MEO) - Geocentric orbits ranging in altitude from 2,000 km
(1,240 miles) - to just below geosynchronous orbit at 35,786 km (22,240 miles). Also
known as an intermediate circular orbit.
High Earth Orbit (HEO) - Geocentric orbits above the altitude of geosynchronous
orbit 35,786 km (22,240 miles).
Inclination Classifications
Inclined Orbit - An orbit whose inclination in reference to the equatorial plane is not
0.
Polar Orbit - An orbit that passes above or nearly above both poles of the planet on
each revolution. Therefore it has an inclination of (or very close to) 90 degrees.
Polar Sun-synchronous Orbit - A nearly polar orbit that passes the equator at the
same local time on every pass. Useful for image taking satellites because shadows will
be the same on every pass.
Eccentricity Classifications
Circular Orbit - An orbit that has an eccentricity of 0 and whose path traces a circle.
Hohmann transfer orbit - An orbital maneuver that moves a spacecraft from one
circular orbit to another using two engine impulses. This maneuver was named after
Walter Hohmann.
Elliptic Orbit - An orbit with an eccentricity greater than 0 and less than 1 whose
orbit traces the path of an ellipse.
Geosynchronous Transfer Orbit - An elliptic orbit where the perigee is at the altitude
of a Low Earth Orbit (LEO) and the apogee at the altitude of a geosynchronous
orbit.
Geostationary Transfer Orbit - An elliptic orbit where the perigee is at the altitude of
a Low Earth Orbit (LEO) and the apogee at the altitude of a geostationary orbit.
Molniya Orbit - A highly elliptic orbit with inclination of 63.4° and orbital period of
½ of a sidereal day (roughly 12 hours). Such a satellite spends most of its time over a
designated area of the planet.
Tundra Orbit - A highly elliptic orbit with inclination of 63.4° and orbital period of
one sidereal day (roughly 24 hours). Such a satellite spends most of its time over a
designated area of the planet.
Hyperbolic orbit - An orbit with the eccentricity greater than 1. Such an orbit also
has a velocity in excess of the escape velocity and as such, will escape the
gravitational pull of the planet and continue to travel infinitely.
Parabolic Orbit - An orbit with the eccentricity equal to 1. Such an orbit also has a
velocity equal to the escape velocity and therefore will escape the gravitational pull of
the planet and travel until its velocity relative to the planet is 0. If the speed of such
an orbit is increased it will become a hyperbolic orbit.
Escape Orbit (EO) - A high-speed parabolic orbit where the object has escape
velocity and is moving away from the planet.
Capture Orbit - A high-speed parabolic orbit where the object has escape velocity
and is moving toward the planet.
Synchronous Classifications
Synchronous Orbit - An orbit where the satellite has an orbital period equal to the
average rotational period (earth's is: 23 hours, 56 minutes, 4.091 seconds) of the
body being orbited and in the same direction of rotation as that body. To a ground
observer such a satellite would trace an analemma in the sky.
Semi-Synchronous Orbit (SSO) - An orbit with an altitude of approximately
20,200 km (12,544 miles) and an orbital period of approximately 12 hours.
Geosynchronous Orbit (GEO) - Orbits with an altitude of approximately 35,786
km (22,240 miles). Such a satellite would trace an analemma (figure 8) in the sky.
Geostationary orbit (GSO): A geosynchronous orbit with an inclination of zero.
To an observer on the ground this satellite would appear as a fixed point in the sky.
Clarke Orbit - Another name for a geostationary orbit. Named after the writer
Arthur C. Clarke.
Supersynchronous orbit - A disposal / storage orbit above GSO/GEO. Satellites
will drift west. Also a synonym for Disposal Orbit.
Subsynchronous orbit - A drift orbit close to but below GSO/GEO. Satellites will
drift east.
Graveyard Orbit - An orbit a few hundred kilometers above geosynchronous that
satellites are moved into at the end of their operation.
Disposal Orbit - A synonym for graveyard orbit.
Junk Orbit - A synonym for graveyard orbit.
Areosynchronous Orbit - A synchronous orbit around the planet Mars with an
orbital period equal in length to Mars' sidereal day, 24.6229 hours.
Areostationary Orbit (ASO) - A circular areosynchronous orbit on the equatorial
plane and about 17,000 km (10,557 miles) above the surface. To an observer on the
ground this satellite would appear as a fixed point in the sky.
Heliosynchronous Orbit - A heliocentric orbit about the Sun where the satellite's
orbital period matches the Sun's period of rotation. These orbits occur at a radius of
24.360 Gm (0.1628 AU) around the Sun, a little less than half of the orbital radius of
Mercury.
Special Classifications
Sun-synchronous Orbit - An orbit which combines altitude and inclination in such a
way that the satellite passes over any given point of the planet's surface at the same
local solar time. Such an orbit can place a satellite in constant sunlight and is useful
for imaging, spy, and weather satellites.
Moon Orbit - The orbital characteristics of earth's moon. Average altitude of
384,403 kilometres (238,857 mi), elliptical-inclined orbit.
Pseudo-Orbit Classifications
Horseshoe Orbit - An orbit that appears to a ground observer to be orbiting a
certain planet but is actually in co-orbit with the planet. See asteroids 3753 (Cruithne)
and 2002 AA29.
Exo-orbit - A maneuver where a spacecraft approaches the height of orbit but lacks
the velocity to sustain it.
Sub-orbital spaceflight - A synonym for exo-orbit.
Lunar transfer orbit (LTO) - A transfer orbit that carries a spacecraft from the Earth
to the Moon.
Prograde Orbit - An orbit with an inclination of less than 90°. Or rather, an orbit
that is in the same direction as the rotation of the primary.
Retrograde orbit - An orbit with an inclination of more than 90°. Or rather, an orbit
counter to the direction of rotation of the planet. Almost no satellites are launched
into retrograde orbit because the quantity of fuel required to launch them is much
greater than for a prograde orbit. This is because when the rocket starts out on the
ground, it already has an eastward component of velocity equal to the rotational
velocity of the planet at its launch latitude.
Launch capable countries
This list includes countries with an independent capability to place satellites in orbit,
including production of the necessary launch vehicle. Note: many more countries have the
capability to design and build satellites — which, relatively speaking, does not require much
economic, scientific and industrial capacity — but are unable to launch them, instead relying
on foreign launch services. This list does not consider those numerous countries, but only
lists those capable of launching satellites indigenously, and the date this capability was first
demonstrated. It does not include consortium or multi-national satellites.
First launch by country
Country           Year of first launch   First satellite      Payloads in orbit in 2006[1]
Soviet Union      1957                   Sputnik 1            1390 (Russia)
United States     1958                   Explorer 1           999
France            1965                   Astérix              43
Japan             1970                   Osumi                102
China             1970                   Dong Fang Hong I     53
United Kingdom    1971                   Prospero X-3         23[citation needed]
India             1981                   Rohini               31
Israel            1988                   Ofeq 1               6
Both North Korea and Iraq have claimed orbital launches but these are unconfirmed, and
unlikely. As of 2006, only eight countries and one regional space organisation have
independently launched satellites into orbit on their own indigenously developed launch
vehicles - in chronological order: USSR, USA, France, Japan, China, UK, ESA, India and
Israel.
First launch by country, including launches assisted by other parties
Country           Year of first launch   First satellite      Payloads in orbit in 2006[2]
Soviet Union      1957                   Sputnik 1            1390 (Russia)
United States     1958                   Explorer 1           999
Canada            1962                   Alouette 1
France            1965                   Astérix              43
Italy             1967                   San Marco 2
Australia         1967                   WRESAT
Japan             1970                   Osumi                102
China             1970                   Dong Fang Hong I     53
United Kingdom    1971                   Prospero X-3         23[citation needed]
India             1981                   Rohini               31
Israel            1988                   Ofeq 1               6
Egypt             1998                   NileSat 101          3
Kazakhstan        2006                   KazSat 1             1
It should be noted that while Kazakhstan did launch its satellite independently, the satellite
was built by the Russians and the rocket was not independently designed. Canada was the third
country to build a satellite that was launched into space, but it was launched aboard a U.S.
rocket from a U.S. spaceport. The first Italian-launched satellite was San Marco 2, launched
on 26 April 1967 on a U.S. Scout rocket with U.S. support. Australia's launch project, in
November 1967, likewise involved a donated U.S. Redstone missile and U.S. support staff, as
well as a joint launch facility with the United Kingdom. The launch capabilities of the
United Kingdom and France now fall under the European Space Agency (ESA), and those of the
Soviet Union fall under Russia, reducing the number of political entities with active
satellite launch capabilities to seven - six 'major' space powers (USA, Russia, China, India,
EU, Japan) and one minor space power, Israel.
Several other countries such as South Korea, Pakistan, Iran, Brazil and Egypt are in the early
stages of developing their own small-scale launch capabilities, and seek to become 'minor'
space powers - others may have the scientific and industrial capability, but not the economic
or political will.
Heraldry
The (artificial, though this is not stated in the blazon) satellite appears as a charge in the arms
of Arthur Maxwell House. This is in addition to numerous appearances of the natural
satellite the moon, and the moons of the planets Jupiter and Saturn (with those planets) in
the arms of Pierre-Simon Laplace.
A typical home satellite dish installation for receiving geostationary satellite transmissions
consists of a satellite dish, an LNB and a receiver/decoder.
View direction
To receive a transmission from a satellite, a satellite dish must be pointed exactly at the
satellite's position in space.
The view direction of a satellite from a location on Earth is determined by two angles:
altitude and azimuth. These angles depend on the receiver's latitude, longitude, and the
satellite's geostationary longitude.
For a satellite at the same longitude as the receiver, the altitude can be found with a
simple geometric construction; for different satellite and receiver longitudes, the azimuth
can be found by extending that construction.
Altitude (A) and azimuth (a) relative to true north can be calculated from latitude (φ),
longitude (λ) and satellite longitude (λs). With Δλ = λ − λs and R/r ≈ 0.1513 (the ratio of
the Earth's radius to the geostationary orbit radius), the standard pointing formulas are:

    A = arctan[ (cos Δλ · cos φ − 0.1513) / √(1 − cos²Δλ · cos²φ) ]
    a = 180° + arctan( tan Δλ / sin φ )    (receiver in the northern hemisphere)
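Equivalently, the pointing angles can be computed from first principles by working in
Earth-centred coordinates and projecting the station-to-satellite vector onto the local
east/north/up axes, which avoids the hemisphere special cases. A minimal sketch in Python,
assuming a spherical Earth; the 89° W satellite longitude in the example is an arbitrary
illustrative choice:

    import math

    R_EARTH = 6378.137e3   # Earth's radius in metres (spherical approximation)
    R_GEO = 42164.0e3      # geostationary orbit radius from Earth's centre, metres

    def look_angles(lat_deg, lon_deg, sat_lon_deg):
        """Elevation ('altitude') and azimuth (clockwise from true north)
        from a ground station to a geostationary satellite."""
        lat, lon, slon = map(math.radians, (lat_deg, lon_deg, sat_lon_deg))

        # Station and satellite positions in Earth-centred Cartesian coordinates.
        sx = R_EARTH * math.cos(lat) * math.cos(lon)
        sy = R_EARTH * math.cos(lat) * math.sin(lon)
        sz = R_EARTH * math.sin(lat)
        gx, gy, gz = R_GEO * math.cos(slon), R_GEO * math.sin(slon), 0.0

        rx, ry, rz = gx - sx, gy - sy, gz - sz   # range vector, station -> satellite
        rng = math.sqrt(rx * rx + ry * ry + rz * rz)

        # Components of the range vector along the local east/north/up axes.
        east = -rx * math.sin(lon) + ry * math.cos(lon)
        north = (-rx * math.sin(lat) * math.cos(lon)
                 - ry * math.sin(lat) * math.sin(lon)
                 + rz * math.cos(lat))
        up = (rx * math.cos(lat) * math.cos(lon)
              + ry * math.cos(lat) * math.sin(lon)
              + rz * math.sin(lat))

        elevation = math.degrees(math.asin(up / rng))
        azimuth = math.degrees(math.atan2(east, north)) % 360.0
        return elevation, azimuth

    # Baltimore (39.3 N, 76.6 W) looking at a satellite at 89 W:
    print(look_angles(39.3, -76.6, -89.0))   # roughly (43, 199): 43 degrees up, SSW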
Skew
Several satellites transmit with linear polarization, with polarization planes usually parallel
and at right angles to Earth's axis. The dish's polarization must match the signal's
polarization; otherwise, cross-polarization will cause interference from signals of the
opposite polarization. The match must be accurate to less than one degree.
If receiver and satellite longitudes are different, the skew (rotation) of the receiver must be
adjusted.
The skew angle (s) for adjusting the LNB can be calculated from the same quantities:
s = arctan( sin Δλ / tan φ ).
A geographic coordinate system enables every location on the earth to be specified by the
three coordinates of a spherical coordinate system aligned with the spin axis of the Earth.
First and second Dimensions: latitude and longitude
Latitude phi (φ) and Longitude lambda (λ)
Borrowing from theories of the ancient Babylonians, later expanded by the famous Greek
thinker and geographer Ptolemy, a full circle is divided into 360 degrees (360°).
latitude (abbreviation: Lat.) is the angle at the centre of the co-ordinate system
between any point on the earth's surface and the plane of the equator. Lines joining
points of the same latitude are called parallels, and they trace concentric circles on
the surface of the earth. Each pole is 90 degrees: the north pole 90° N; the south
pole 90° S. The 0° parallel of latitude is designated the equator, an imaginary line that
divides the globe into the Northern and Southern Hemispheres.
longitude (abbreviation: Long.) is the angle east or west, at the centre of the coordinate system, between any point on the earth's surface and the plane of an
arbitrary north-south line between the two geographical poles. Lines joining points
of the same longitude are called meridians. All meridians are halves of great circles,
and are not parallel: by definition they converge at the north and south poles. The
line passing through the (former) Royal Observatory, Greenwich (near London in
the UK) is the international zero-longitude reference line, the Prime Meridian. The
antipodal meridian of Greenwich is both 180°W and 180°E.
By combining these two angles, the horizontal position of any location on Earth can be
specified.
For example, Baltimore, Maryland (in the USA) has a latitude of 39.3° North, and a
longitude of 76.6° West (39.3° N 76.6° W). So, a vector drawn from the center of the earth
to a point 39.3° north of the equator and 76.6° west of Greenwich will pass through
Baltimore.
This latitude/longitude "webbing" is known as the common graticule. There is also a
complementary transverse graticule (meaning the graticule is shifted 90°, so that the poles
are on the horizontal equator), upon which all spherical trigonometry is ultimately based.
Traditionally, degrees have been divided into minutes (1/60th of a degree, designated by ′ or
"m") and seconds (1/60th of a minute, designated by ″ or "s"). There are several formats for
degrees, all of them appearing in the same Lat-Long order:
DMS Degree:Minute:Second (49°30'00", -123d30m00s)
DM Degree:Minute (49°30.0', -123d30.0m)
DD Decimal Degree (49.5000°, -123.5000d), generally with 4 decimal places.
To convert from DM or DMS to DD, decimal degrees = whole number of degrees, plus
minutes divided by 60, plus seconds divided by 3600. DMS is the most common format, and
is standard on all charts and maps, as well as global positioning systems and geographic
information systems.
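That conversion rule is easily captured in a few lines; a minimal sketch in Python (the
function name is just illustrative):

    def dms_to_dd(degrees, minutes=0.0, seconds=0.0):
        """Convert Degree:Minute:Second to Decimal Degrees. Pass degrees
        as negative for southern latitudes or western longitudes."""
        sign = -1.0 if degrees < 0 else 1.0
        return sign * (abs(degrees) + minutes / 60.0 + seconds / 3600.0)

    print(dms_to_dd(49, 30, 0))     # 49.5   (49 deg 30' 00")
    print(dms_to_dd(-123, 30, 0))   # -123.5 (123 deg 30' 00" W)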
The equator is the fundamental plane of all geographic coordinate systems. All spherical
coordinate systems define such a fundamental plane.
Latitude and longitude values can be based on several different geodetic systems or datums,
the most common being the WGS 84 used by all GPS equipment. In other words, the same
point on the earth's surface can be described by different latitude and longitude values
depending on the reference datum.
In popular GIS software, data projected in latitude/longitude is often specified via a
'Geographic Coordinate System'. For example, data in latitude/longitude with the datum as
the North American Datum of 1983 is denoted by 'GCS_North_American_1983'.
Third dimension: altitude, height, depth
To completely specify a location on, in, or above the earth, one has to also specify the
elevation, defined as the vertical position of the location relative to the centre of the
reference system or some definition of the earth's surface. This is expressed in terms of the
vertical distance to the earth below, but, because of the ambiguity of "surface" and "vertical",
is more commonly expressed relative to a more precisely defined datum such as mean sea
level (as height above mean sea level) or a geoid (a mathematical model of the shape of the
earth's surface). The distance to the earth's center can be used both for very deep positions
and for positions in space.
Other terms used with respect to the distance of a point from the earth's surface or some
other datum are altitude, height, and depth.
Geostationary coordinates
Geostationary satellites (e.g., television satellites) are over the equator, so their position
relative to Earth is expressed in degrees of longitude. Their latitude does not change; it is
always zero, since they sit over the equator.
What Keeps Objects in Orbit?
For 10,000 years (or 20,000 or 50,000 or since he was first able to lift his eyes upward) man
has wondered about questions such as "What holds the sun up in the sky?", "Why doesn't
the moon fall on us?", and "How do they (the sun and the moon) return from the far west
back to the far east to rise again each day?" Most of the answers which men put forth in
those 10,000 or 20,000 or 50,000 years we now classify as superstition, mythology, or pagan
religion. It is only in the last 300 years that we have developed a scientific description of how
those bodies travel. Our description of course is based on fundamental laws put forth by the
English genius Sir Isaac Newton in the late 17th century.
The first of these laws, the law of universal gravitation (a logical extension of earlier
work by Johannes Kepler), proposed that every bit of matter in the universe attracts every
other bit of matter
with a force which is proportional to the product of their masses and inversely proportional
to the square of the distance between the two bits. That is, larger masses attract more
strongly and the attraction gets weaker as the bodies are moved farther apart.
Newton's law of gravity means that the sun pulls on the earth (and every other planet for
that matter) and the earth pulls on the sun. Furthermore, since both are quite large (by our
standards at least) the force must also be quite large. The question which every student asks
(well, most students anyway) is, "If the sun and the planets are pulling on each other with
such a large force, why don't the planets fall into the sun?" The answer is simple: the Earth,
Mars, Venus, Jupiter and Saturn are continuously falling into the Sun, and the Moon is
continuously falling into the Earth.
Our salvation is that they are also moving "sideways" with a sufficiently large velocity that by
the time the earth has fallen the 93,000,000 miles to the sun it has also moved "sideways"
about 93,000,000 miles - far enough to miss the sun. By the time the moon has fallen the
240,000 miles to the earth, it has moved sideways about 240,000 miles - far enough to miss
the earth. This process is repeated continuously as the earth (and all the other planets) make
their apparently unending trips around the sun and the moon makes its trips around the
earth. A planet, or any other body, which finds itself at any distance from the sun with no
"sideways" velocity will quickly fall without missing the sun, will be drawn into the sun's
interior and will be cooked to well-done. Only our sideways motion (physicists call it our
"angular velocity" ) saves us. The same of course is true for the moon, which would fall to
earth but for its angular velocity. This is illustrated in the drawing below.
The Earth Orbits the Sun With Angular Velocity
People sometimes (erroneously) speak of orbiting objects as having "escaped" the effects of
gravity, since passengers experience an apparent weightlessness. Be assured, however, that
the force of gravity is at work. Were it suddenly to be turned off, the object in question
would instantly leave its circular orbit, take up a straight line trajectory, which, in the case of
the earth, would leave it about 50 billion miles from the sun after just one century. Hence
the gravitational force between the sun and the earth holds the earth in its orbit. This is
shown in the drawing below, where the earth was happily orbiting the sun until it reached
point A, where the force of gravity was suddenly turned off.
The Earth No Longer Orbits the Sun if Gravity is Switched Off
The apparent weightlessness experienced by the orbiting passenger is the same
weightlessness which he would feel in a falling elevator or an amusement park ride. The
earth orbiting the sun or the moon orbiting the earth might be compared to a rock on the
end of a string which you swing in a circle around your head. The string holds the rock in
place and is continuously pulling it toward your head. Because the rock is moving sideways
however, it always misses your head. Were the string to be suddenly broken, the rock would
be released from its orbit and fly off in a straight line, just as earth did in the drawing above.
One question which one might ask is " Does the time required to complete an orbit depend
on the distance at which the object is orbiting?" In fact, Kepler answered this question
several hundred years ago, using the data of an earlier astronomer, Tycho Brahe.
After years of trial-and-error analysis (by hand - no computers, no calculators), Kepler
discovered that the quantity R³/T² was the same for every planet in our solar system. (R is
the distance at which a planet orbits the sun; T is the time required for one complete trip
around the sun.) Hence, an object which orbits at a larger distance will require longer to
complete one orbit than one which is orbiting at a smaller distance. One can understand this
at least qualitatively in terms of our "falling and missing" model. The planet which is at a
larger distance requires longer to fall to where it would strike the sun. As a result, it takes a
longer time to complete the ¼ trip around the sun which is necessary to make a circular
orbit.
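Kepler's relation is equivalent to the circular-orbit formulas T = 2π·√(R³/GM) and
v = √(GM/R), applied here to Earth orbits; they also anticipate the numbers in the next
section. A minimal sketch, with the low-orbit radius chosen as an assumed illustrative value:

    import math

    GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2

    def period_and_speed(radius_m):
        """Period and speed of a circular orbit of the given radius;
        R^3/T^2 = GM/(4*pi^2) is the same constant for every such orbit."""
        period = 2 * math.pi * math.sqrt(radius_m ** 3 / GM_EARTH)
        speed = math.sqrt(GM_EARTH / radius_m)
        return period, speed

    for label, radius in [("low orbit (a few hundred miles up)", 6.70e6),
                          ("geosynchronous orbit", 4.2164e7)]:
        T, v = period_and_speed(radius)
        print(f"{label}: T = {T / 3600:.2f} h, v = {v * 2.23694:,.0f} mph")
    # low orbit: about 1.5 h (the 90-minute orbit) at roughly 17,000 mph
    # geosynchronous orbit: about 23.93 h (one sidereal day)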
Can We Imitate Nature? (Artificial Satellites)
Very soon after Newton's laws were published, people realized that in principle it should be
possible to launch an artificial satellite which would orbit the earth just as the moon does. A
simple calculation, however, using the equations which we developed above, will show that
an artificial satellite orbiting near the surface of the earth (R = 4000 miles) will have a period
of approximately 90 minutes. This corresponds to a sideways velocity (needed in order to
"miss" the earth as it falls) of approximately 17,000 miles/hour (that's about 5
miles/second). To visualize the "missing the earth" feature, let's imagine a cannon firing a
cannonball.
Launching an Artificial Satellite
In the first frame of the cartoon, we see it firing fairly weakly. The cannonball describes a
parabolic arc as we expect and lands perhaps a few hundred yards away. In the second
frame, we bring up a little larger cannon, load a little more powder and shoot a little farther.
The ball lands perhaps a few hundred miles away. We can see just a little of the earth's
curvature, but it doesn't really affect anything. In the third frame, we use our super-shooter
and the cannonball is shot hard enough that it travels several thousand miles. Clearly the
curvature of the earth has had an effect. The ball travels much farther than it would have
had the earth been flat. Finally, our mega-super-big cannon fires the cannonball at the
unbelievable velocity of 5 miles/second or nearly 17,000 miles/hour. (Remember - the
fastest race cars can make 250 miles/hour. The fastest jet planes can do 2 or 3 thousand
miles/hour.) The result of this prodigious shot is that the ball misses the earth as it falls.
Nevertheless, the earth's gravitational pull causes it to continuously change direction and
continuously fall. The result is a "cannonball" which is orbiting the earth. In the absence of
gravity, however, the original throw (even the shortest, slow one) would have continued in a
straight line, leaving the earth far behind.
For many years, such a velocity was unthinkable and the artificial satellite remained a dream.
Eventually, however, the technology (rocket engines, guidance systems, etc.) caught up with
the concept, largely as a result of weapons research started by the Germans during the
second World War. Finally, in 1957, the first artificial satellite, called Sputnik, was launched
by the Soviets. Consisting of little more than a spherical case with a radio transmitter, it
caused quite a stir. Americans were fascinated listening to the "beep, beep, beep" of Sputnik
appear and then fade out as it came overhead every 90 minutes. It was also quite frightening
to think of the Soviets circling overhead inasmuch as they were our mortal enemies.
After Sputnik, it was only a few years before the U.S. launched its own satellite; the Soviets
launched Yuri Gagarin, the first man to orbit the earth; and the U.S. launched John Glenn,
the first American in orbit. All of these flights were at essentially the same altitude (a few
hundred miles) and completed one trip around the earth approximately every 90 minutes.
People were well aware, however, that the period would be longer if they were able to reach
higher altitudes. In particular Arthur Clarke pointed out in the mid-1940s that a satellite
orbiting at an altitude of 22,300 miles would require exactly 24 hours to orbit the earth.
Hence such an orbit is called "geosynchronous" or "geostationary." If in addition it were
orbiting over the equator, it would appear, to an observer on the earth, to stand still in the
sky. Raising a satellite to such an altitude, however, required still more rocket boost, so that
the achievement of a geosynchronous orbit did not take place until 1963.
Why Satellites for Communications
By the end of World War II, the world had had a taste of "global communications." Edward
R. Murrow's radio broadcasts from London had electrified American listeners. We had, of
course, long been able to exchange transatlantic telegraph messages via underwater cables,
and to place transatlantic telephone calls by radio. At exactly this time, however, a new
phenomenon was born. The first
television programs were being broadcast, but the greater amount of information required to
transmit television pictures required that they operate at much higher frequencies than radio
stations. For example, the very first commercial radio station (KDKA in Pittsburgh)
operated (and still does) at 1020 on the dial. This number stood for 1020 kilohertz, the
frequency at which the station transmitted. Frequency is simply the number of times that an
electrical signal "wiggles" in 1 second, and it is measured in Hertz. One Hertz means
that the signal wiggles 1 time/second. A frequency of 1020 kilohertz means that the
electrical signal from that station wiggles 1,020,000 times in one second.
Television signals, however, required much higher frequencies because they were transmitting
much more information, namely the picture. A typical television station (channel 7, for
example) would operate at a frequency of 175 MHz. As a result, television signals would not
propagate the way radio signals did.
Both radio and television frequency signals can propagate directly from transmitter to
receiver. This is a very dependable signal, but it is more or less limited to line of sight
communication. The mode of propagation employed for long distance (1000s of miles) radio
communication was a signal which traveled by bouncing off the charged layers of the
atmosphere (ionosphere) and returning to earth. The higher frequency television signals did
not bounce off the ionosphere and as a result disappeared into space in a relatively short
distance. This is shown in the diagram below.
Radio Signals Reflect Off the Ionosphere; TV Signals Do Not
Consequently, television reception was a "line-of-sight" phenomenon, and television
broadcasts were limited to a range of 20 or 30 miles, or perhaps across the continent by
coaxial cable. Transatlantic broadcasts were totally out of the question. If you saw European
news events on television, they were probably delayed at least 12 hours, and involved the use
of the fastest airplane available to carry conventional motion pictures back to the U.S. In
addition, of course, the appetite for transatlantic radio and telephone was increasing rapidly.
Adding this increase to the demands of the new television medium, existing communications
capabilities were simply not able to handle all of the requirements. By the late 1950s the
newly developed artificial satellites seemed to offer the potential for satisfying many of these
needs.
Low Earth-Orbiting Communications Satellites
In 1960, the simplest communications satellite ever conceived was launched. It was called
Echo, because it consisted only of a large (100 feet in diameter) aluminized plastic balloon.
Radio and TV signals transmitted to the satellite would be reflected back to earth and could
be received by any station within view of the satellite.
Echo Satellite
Unfortunately, in its low earth orbit, the Echo satellite circled the earth every ninety minutes.
This meant that although virtually everybody on earth would eventually see it, no one
person ever saw it for more than 10 minutes or so out of every 90-minute orbit. In 1958, the
Score satellite had been put into orbit. It carried a tape recorder which would record
messages as it passed over an originating station and then rebroadcast them as it passed over
the destination. Once more, however, it appeared only briefly every 90 minutes - a serious
impediment to real communications. In 1962, NASA launched the Telstar satellite for
AT&T.
Telstar Communications Satellite
Telstar's orbit was such that it could "see" Europe and the US simultaneously during one
part of its orbit. During another part of its orbit it could see both Japan and the U.S. As a
result, it provided real-time communications between the United States and those two areas
- for a few minutes out of every hour.
Geosynchronous Communications Satellites
The solution to the problem of availability, of course, lay in the use of the geosynchronous
orbit. In 1963, the necessary rocket booster power was available for the first time, and the
first geosynchronous satellite, Syncom 2, was launched by NASA. For those who could
"see" it, the satellite was available 100% of the time, 24 hours a day. The satellite could view
approximately 42% of the earth. For those outside of that viewing area, of course, the
satellite was NEVER available.
Syncom II Communications Satellite
However, a system of three such satellites, with the ability to relay messages from one to the
other could interconnect virtually all of the earth except the polar regions. The one
disadvantage (for some purposes) of the geosynchronous orbit is that the time to transmit a
signal from earth to the satellite and back is approximately ¼ of a second - the time required
to travel 22,000 miles up and 22,000 miles back down at the speed of light. For telephone
conversations, this delay can sometimes be annoying. For data transmission and most other
uses it is not significant. In any event, once Syncom had demonstrated the technology
necessary to launch a geosynchronous satellite, a virtual explosion of such satellites followed.
Today, there are approximately 150 communications satellites in orbit, with over 100 in
geosynchronous orbit. One of the biggest sponsors of satellite development was Intelsat, an
internationally-owned corporation which has launched 8 different series of satellites (4 or 5
of each series) over a period of more than 30 years. Spreading their satellites around the
globe and making provision to relay from one satellite to another, they made it possible to
transmit 1000s of phone calls between almost any two points on the earth. It was also
possible for the first time, due to the large capacity of the satellites, to transmit live television
pictures between virtually any two points on earth. By 1964 (if you could stay up late
enough), you could for the first time watch the Olympic games live from Tokyo. A few years
later of course you could watch the Vietnam war live on the evening news.
Basic Communications Satellite Components
Every communications satellite in its simplest form (whether low earth or geosynchronous)
involves the transmission of information from an originating ground station to the satellite
(the uplink), followed by a retransmission of the information from the satellite back to the
ground (the downlink). The downlink may either be to a select number of ground stations or
it may be broadcast to everyone in a large area. Hence the satellite must have a receiver and a
receive antenna, a transmitter and a transmit antenna, some method for connecting the
uplink to the downlink for retransmission, and prime electrical power to run all of the
electronics. The exact nature of these components will differ, depending on the orbit and the
system architecture, but every communications satellite must have these basic components.
This is illustrated in the drawing below.
Basic Components of a Communications Satellite Link
Transmitters
The amount of power which a satellite transmitter needs to send out depends a great deal on
whether it is in low earth orbit or in geosynchronous orbit. This is a result of the fact that
the geosynchronous satellite is at an altitude of 22,300 miles, while the low earth satellite is
only a few hundred miles. The geosynchronous satellite is nearly 100 times as far away as the
low earth satellite. We can show fairly easily that this means the higher satellite would need
almost 10,000 times as much power as the low-orbiting one, if everything else were the
same. (Fortunately, of course, we change some other things so that we don't need 10,000
times as much power.)
For either geosynchronous or low earth satellites, the power put out by the satellite
transmitter is really puny compared to that of a terrestrial radio station. Your favorite rock
station probably boasts of having many kilowatts of power. By contrast, a 200 watt
transmitter would be very strong for a satellite.
Antennas
One of the biggest differences between a low earth satellite and a geosynchronous satellite is
in their antennas. As mentioned earlier, the geosynchronous satellite would require nearly
10,000 times more transmitter power, if all other components were the same. One of the
most straightforward ways to make up the difference, however, is through antenna design.
Virtually all antennas in use today radiate energy preferentially in some direction. An antenna
used by a commercial terrestrial radio station, for example, is trying to reach people to the
north, south, east, and west. However, the commercial station will use an antenna that
radiates very little power straight up or straight down. Since they have very few listeners in
those directions (except maybe for coal miners and passing airplanes), power sent out in
those directions would be totally wasted.
The communications satellite carries this principle even further. All of its listeners are
located in an even smaller area, and a properly designed antenna will concentrate most of the
transmitter power within that area, wasting none in directions where there are no listeners.
The easiest way to do this is simply to make the antenna larger. Doubling the diameter of a
reflector antenna (a big "dish") will reduce the area of the beam spot to one fourth of what it
would be with a smaller reflector. We describe this in terms of the gain of the antenna. Gain
simply tells us how much more power will fall on 1 square centimeter (or square meter or
square mile) with this antenna than would fall on that same square centimeter (or square
meter or square mile) if the transmitter power were spread uniformly (isotropically) over all
directions. The larger antenna described above would have four times the gain of the smaller
one. This is one of the primary ways that the geosynchronous satellite makes up for the
apparently larger transmitter power which it requires.
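The gain rule just described can be sketched with the textbook aperture formula
G = η(πD/λ)². The 4 GHz frequency, the dish diameters and the 60% aperture efficiency
below are assumed illustrative values, not figures from the text:

    import math

    def dish_gain_dbi(diameter_m, freq_hz, efficiency=0.6):
        """Gain of a parabolic reflector, G = efficiency * (pi * D / lambda)^2,
        expressed in dB relative to an isotropic radiator."""
        wavelength = 3.0e8 / freq_hz
        gain = efficiency * (math.pi * diameter_m / wavelength) ** 2
        return 10 * math.log10(gain)

    g1 = dish_gain_dbi(1.0, 4e9)   # 1 m dish at a 4 GHz downlink
    g2 = dish_gain_dbi(2.0, 4e9)   # doubled diameter
    print(f"{g1:.1f} dBi -> {g2:.1f} dBi (+{g2 - g1:.1f} dB, i.e. 4x the gain)")

Doubling the diameter adds 6 dB: the factor of four in gain (and the one-quarter beam-spot
area) described above.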
One other big difference between the geosynchronous antenna and the low earth antenna is
the difficulty of meeting the requirement that the satellite antennas always be "pointed" at
the earth. For the geosynchronous satellite, of course, it is relatively easy. As seen from the
earth station, the satellite never appears to move any significant distance. As seen from the
satellite, the earth station never appears to move. We only need to maintain the orientation
of the satellite. The low earth orbiting satellite, on the other hand, as seen from the ground is
continuously moving. It zooms across our field of view in 5 or 10 minutes.
Likewise, the earth station, as seen from the satellite is a moving target. As a result, both the
earth station and the satellite need some sort of tracking capability which will allow its
antennas to follow the target during the time that it is visible. The only alternative is to make
that antenna beam so wide that the intended receiver (or transmitter) is always within it. Of
course, making the beam spot larger decreases the antenna gain as the available power is
spread over a larger area, which in turn increases the amount of power which the
transmitter must provide.
Power Generation
You might wonder why we don't actually use transmitters with thousands of watts of power,
like your favorite radio station does. You might also have figured out the answer already.
There simply isn't that much power available on the spacecraft. There is no line from the
power company to the satellite. The satellite must generate all of its own power. For a
communications satellite, that power usually is generated by large solar panels covered with
solar cells - just like the ones in your solar-powered calculator. These convert sunlight into
electricity. Since there is a practical limit to how big a solar panel can be, there is also a
practical limit to the amount of power which can be generated. In addition, unfortunately,
transmitters are not very good at converting input power to radiated power, so that 1000
watts of power into the transmitter will probably result in only 100 or 150 watts of power
being radiated. We say that transmitters are only 10 or 15% efficient. In practice, the solar
cells on the most "powerful" satellites generate only a few thousand watts of electrical
power.
Satellites must also be prepared for those periods when the sun is not visible, usually because
the earth is passing between the satellite and the sun. This requires that the satellite have
batteries on board which can supply the required power for the necessary time and then
recharge by the time of the next period of eclipse.
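The battery requirement can be estimated by approximating Earth's shadow as a cylinder of
one Earth radius; a geostationary satellite is eclipsed while it is within that angular band.
A minimal sketch of the worst-case (equinox-season) eclipse length:

    import math

    R_EARTH = 6378.0   # km
    R_GEO = 42164.0    # km, geostationary orbit radius
    T_GEO = 86164.1    # s, orbital period (one sidereal day)

    # Half-width of the shadow as seen from Earth's centre.
    theta = math.asin(R_EARTH / R_GEO)
    eclipse_s = T_GEO * (2 * theta) / (2 * math.pi)
    print(f"longest daily eclipse: {eclipse_s / 60:.0f} minutes")   # about 70 minutes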
The nature of future satellite communications systems will depend on the demands of the
marketplace (direct home distribution of entertainment, data transfers between businesses,
telephone traffic, cellular telephone traffic, etc.); the costs of manufacturing, launching, and
operating various satellite configurations; and the costs and capabilities of competing
systems - especially fiber optic cables, which can carry a huge number of telephone
conversations or television channels. In any case, however, several approaches are now being
tested or discussed by satellite system designers.
Advanced Communications Technology Satellite (ACTS)
One approach, which is being tested experimentally, is the "switchboard in the sky" concept.
NASA's Advanced Communications Technology Satellite (ACTS) consists of a
relatively large geosynchronous satellite with many uplink beams and many downlink beams,
each of which covers a rather small spot (several hundred miles across) on the earth.
However, many of the beams are "steerable". That is to say, the beams can be moved to a
different spot on the earth in a matter of milliseconds, so that one beam provides uplink or
downlink service to a number of locations. Moving the beams in a regular scheduled manner
allows the satellite to gather uplink traffic from a number of locations, store it on board, and
then transmit it back to earth when a downlink beam comes to rest on the intended
destination. The speed at which the traffic is routed and the agility with which the beams
move make the momentary storage and routing virtually invisible to the user. The ACTS
satellite is also unique in that it operates at frequencies of 30 GHz on the uplink and 20 GHz
on the downlink. It is one of the first systems to demonstrate and test such high frequencies
for satellite communications.
The ACTS concept involves a single, rather complicated, and expensive geosynchronous
satellite. An alternative approach is to deploy a "constellation" of low earth orbiting satellites.
By planning the orbits carefully, some number (perhaps as few as 20, perhaps as many as
250) of satellites could provide continuous contact with the entire earth, including the poles.
By providing relay links between satellites, it would be possible to provide communications
between any two points on earth, even though the user might only be able to see any one
satellite for a few minutes every hour. Obviously, the success of such a system depends
critically on the cost of manufacturing and launching the satellites. It will be necessary to
mass-produce communications satellites, so that they can be turned out quickly and cheaply, the
way VCRs are manufactured now. This seems a truly ambitious goal since until now the
average communications satellite might require 6 months to 2 years to manufacture.
Nevertheless, at the present time, several companies including Hughes Electronics,
Motorola, and Teledesic, Inc., have indicated their intent to undertake such a system.
Higher Frequencies
The ACTS satellite receives information from ground stations at one of several frequencies
near 30 GHz. Its downlink transmissions to the destination earth stations are at frequencies
near 20 GHz. These frequencies are much higher than those currently used on satellite
systems. Most commercial satellites presently in use operate with a 6 GHz uplink and a 4
GHz downlink. By comparison, a typical AM radio station operates at 1000 kHz (0.001
GHz) and an FM radio station at 100 MHz (0.1 GHz); channel 7 on your TV set is
about 175 MHz. The higher frequencies used by ACTS have 4 significant effects:
It demonstrates the feasibility of an entirely new resource. Just as you can only place
a limited number of FM radio stations within the piece of the frequency spectrum
allocated for that purpose, so you can only operate a certain number of "satellite
stations" within any given frequency band. The lower frequencies which
communications satellites have been using until now (4 - 6 GHz, known as the C-band,
and 11 - 14 GHz, known as the Ku-band) are rapidly being filled up. Use of the
30 and 20 GHz bands will nearly double the amount of frequency space available for
satellite communications.
The higher frequency means that an antenna of a certain size will have greater gain
than it would at a lower frequency. The satellite designer can use this fact to his
advantage in two ways. First, a higher gain means a smaller spot will be illuminated
by the beam when it strikes the earth. As a result, the power which the satellite
radiates is concentrated in a smaller area. This will either improve the quality of
communications or will allow the designer to actually reduce the power which the
satellite emits. The smaller beam spot also means that the satellite can be more
precise in selecting the destination for its transmissions. In fact, it can use two small
beams, each aimed at a different destination, and, if they do not overlap, it can
transmit to both destinations at the same frequency, at the same time without
interference. The ACTS satellite has a total of 5 uplink and 5 downlink beams, each
with a width of a little less than 0.5 degrees, which corresponds to a "footprint" on
earth approximately 150 miles across. Most previous satellites had beams which
covered at least half of the continental United States and frequently covered all of it
or even the entire earth.
A third effect of the higher frequencies at which ACTS receives and transmits is that
it is capable of carrying much more information than would a similar satellite at
lower frequency. Both the uplink and downlink frequency bands are approximately
2.5 GHz wide (from 27.5 to 30.0 GHz on the uplink; from 17.5 to 20.0 GHz on the
downlink). This 2.5 GHz of bandwidth is enough for about 400 conventional
television stations or 250,000 telephone calls, or 100,000 "high speed" data transfers
(say, from your "fast" 28K baud modem); a sanity check on these numbers appears in
the sketch after this list. As a demonstration version of a 30/20
GHz satellite, ACTS is not equipped to use the full data-carrying capability of the
frequency band. However, it can handle a throughput of 1800 megabits/second, the
equivalent of 450 television stations.
Finally, ACTS' operating at these higher frequencies means that many of its
components had never been built before and required extensive engineering
research. The development of these components (transmitters, receivers, antennas
and switching devices of the required performance) represented major advances in
electronics technology. Their development and demonstration in space means that
the satellite communications industry will have relatively inexpensive "off-the-shelf"
components available when commercial use of these frequency bands becomes a
reality.
Moveable Beams
Another unique feature of the ACTS satellite is that not only are its beams smaller than those of previous satellites, but some of them are moveable. The moveability is accomplished not by
physically reorienting the antenna system to point in another direction, but by electrically
switching the signal. To understand how this works, we need to look at the satellite's antenna
in a little more detail. Attached to the transmitter is a small radiating element which launches
the signal into space. For the frequencies used here, the radiating element consists of a
"horn", which "feeds" a large reflector or "dish". The reflector is shaped like a parabola and
acts just like an optical mirror, focusing the energy emitted by the feed horn located at "A"
and sending it all off in the same direction, shown as the beam "A" in the figure below. Just
as with an optical mirror, if the source of radiation (the feedhorn) is relocated to "B", the
focused and reflected beam will go off in a different direction, shown as beam "B" in the
figure.
Beam Steering by Means of a Switch
The ACTS satellite has an array of feedhorns, each one corresponding to a different
destination on the ground. Switches will route the transmitter signal to that horn which will
reach the desired location. Because it is done by electrical switching rather than actually
moving antenna parts, the beams can be rearranged in a matter of microseconds. If the beam
moves through a series of preprogrammed locations, we say that the beam has "scanned" the
entire area. A moveable or scanning beam on ACTS can visit 40 locations, covering a total of
about 750,000 square miles, in 1/1000 of a second. Two of ACTS' beams have this hopping
capability. An operational satellite (that is, a money-maker, not an experiment) could use
about 6 such "scanning" beams to provide service to the entire continental United States.
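The dwell-time arithmetic implied by these figures is straightforward, assuming the 40 spots share each scan equally:

```python
# Dwell-time arithmetic for the hopping beam described above.
locations = 40
scan_period_s = 1 / 1000                      # all 40 spots visited in 1 ms
dwell_us = scan_period_s / locations * 1e6
print(f"average dwell per spot: {dwell_us:.0f} microseconds")   # 25 us

area_sq_mi = 750_000                          # total area covered per scan
print(f"area per spot: {area_sq_mi / locations:,.0f} sq mi")    # 18,750 sq mi
```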
Switching and Routing Capabilities
The ACTS satellite differs from all previous satellites in that switching devices on board the
satellite (combined with ACTS' small spot beams) actually route messages, data, or TV
programs to a particular destination, rather than "broadcasting" them across the entire
country. As a result, it is sometimes referred to as a "switchboard in the sky". ACTS uses
two different concepts in switching, depending on whether the message has arrived through
one of the fixed beams or one of the scanning beams.
The second mode of routing involves the use of the moveable or scanning beams. In this
case, the ground station still needs to "tape record" the message which it wishes to send. It
must now synchronize its fast-forward bursts of information with the arrival of the scanning
beam at its location. The on board processor receives the message, stores it and retransmits
it to the ground when the downlink beam is over the destination. Again, the receiving station
will pass the message on to the intended recipient at a normal speed. He'll never suspect all
of the skullduggery which has taken place. The moveable beam technique allows the satellite
to provide flexibility in its service. If at any instant there are few or no transmissions from a
particular beam spot, the satellite can reduce or even eliminate the time it spends on that
particular spot, using the time for an area with more demand.
ACTS Ground Station
The ACTS ground station at Lewis Research Center functions as the master control station,
as well as being one of the "users". The computers which one can see in the control room
provide the programming which determines where the hopping beams will travel, what
interconnections will be made on the satellite, which ground stations will have access to the
computer, exactly what times they can transmit and receive, and what data rates they can
transmit. In addition, the facility here monitors the "health and welfare" of the satellite.
These include things like whether the solar cells are providing the specified levels of power, whether the satellite is maintaining its position and orientation properly, and whether electronic systems in general are functioning properly.
On the roof of the building which houses the control room, there are two large dish
antennas (4.5 and 5 meters in diameter). One is for controlling the satellite and the
user network. The other is for a user's terminal. Their orientation can be adjusted. However,
it is a slow process and should be unnecessary so long as the satellite maintains its position
properly. These antennas are shown in the photograph below.
ACTS Antennas
Just below the roof is the power equipment for the transmitter. It is placed as close as possible to the antenna, so as to lose as little power as possible in the cable which connects it to the antenna.
Fixed Service Satellite (FSS) is the official classification (used chiefly in North
America) for geostationary communications satellites used for broadcast feeds for television
and radio stations and networks, as well as for telephony and data communications.
FSS satellites have also been used for Direct-To-Home (DTH) satellite TV channels in
North America since the late 1970s. This role has been mostly supplanted by direct
broadcast satellite (DBS) television systems starting in 1994 when DirecTV launched the
first DBS television system. However, FSS satellites in North America are also used to relay
channels of cable TV networks from their originating studios to local cable headends, and to the operations centers of DBS services (such as DirecTV and Dish Network) to be rebroadcast over their DBS systems.
FSS satellites were the first geosynchronous communications satellites launched in space
(such as Intelsat 1 (Early Bird), Syncom 3, Anik 1, Westar 1, Satcom 1 and Ekran) and new
ones are still being launched and utilized to this day.
FSS satellites operate in either the C band (from 3.7 to 4.2 GHz) or the FSS Ku bands (from
11.45 to 11.7 and 12.5 to 12.75 GHz in Europe, and 11.7 to 12.2 GHz in the United States).
FSS satellites operate at a lower power than DBS satellites, requiring a much larger dish than
a DBS system, usually 3 to 8 feet for Ku band, and 12 feet or larger for C band (compared to
18 to 24 inches for DBS dishes). Also, unlike DBS satellites which use circular polarization
on their transponders, FSS satellite transponders use linear polarization.
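A rough way to see why FSS dishes must be so much bigger: aperture gain scales with the square of the dish diameter, so the difference between two dishes is 20·log10 of their diameter ratio. The sizes below are illustrative values taken from the ranges quoted above.

```python
import math

# Gain advantage of a larger dish: delta_dB = 20 * log10(D1 / D2).
def gain_delta_db(d1_m, d2_m):
    return 20 * math.log10(d1_m / d2_m)

dbs_dish = 0.46   # ~18 inches, typical DBS
fss_ku = 1.2      # ~4 feet, mid-range of the 3-8 ft FSS Ku figure
print(f"Ku FSS dish vs DBS dish: +{gain_delta_db(fss_ku, dbs_dish):.1f} dB")  # ~8 dB
```

That extra ~8 dB of dish gain is what compensates for the lower transmit power of FSS satellites.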
Systems used to receive television channels and other feeds from FSS satellites are usually referred to as TVRO (Television Receive Only) systems. They are also known as big-dish systems (due to the much larger dish size compared to systems for DBS satellite reception) or, more pejoratively, as BUD ("big ugly dish") systems.
The Canadian StarChoice satellite TV service relies on FSS satellite technology in the Ku
band. Primestar in the USA used Ku transponders on an FSS satellite as well for its delivery
to subscribing households, until Primestar was acquired by DirecTV in 1999.
FSS and the rest of the world
The term Fixed Service Satellite is chiefly a North American one, and is seldom used outside that continent. This is because most satellites used for
direct-to-home television in Europe, Asia, and elsewhere have the same high power output
as DBS-class satellites in North America, but use the same linear polarization as FSS-class
satellites.
Dish Network and FSS
The DiSH Network satellite TV service also relies on FSS satellite technology in the Ku band to provide the necessary additional capacity to handle the local channels required by FCC must-carry rules and to make room for HDTV resolution. The recently-introduced SuperDish system receives circularly-polarized DBS 12.7 GHz signals from both the 110-degree (the Echostar 8 & 10 satellites) and 119-degree (the Echostar 7 satellite) orbital locations, as well as linearly-polarized FSS 11.7 GHz signals from either the 121-degree (Echostar 9) or 105-degree (AMC 15) orbital locations, depending on consumer choice. Dish has started using the 118.7-degree satellite (AMC-16, FSS) on their Dish 500+ and Dish 1000+ dishes. It has an oval LNB called a DP
While the original DiSH Network satellites use circular polarity at 12.7 GHz, the newer Intelsat 13/Echostar 9 satellite at 121 degrees uses the older FSS technology to broadcast local channels and international packages such as the Chinese Great Wall TV Package. As a result, newer DiSH Network receivers are designed to receive both circularly and linearly-polarized signals, at two different intermediate frequencies, from up to 5 different orbital locations.
The SuperDish has three low-noise block downconverters to accommodate the three
satellites and two different technologies. SuperDish comes in two configurations:
SuperDiSH 121 is for international programming and SuperDiSH 105 is intended for high
definition and for those customers in areas whose local channels are only available on the
105-degree satellite. As with other FSS technologies, these signals are at much lower power, and as a result the SuperDiSH is a very large and lopsided appendage. However, since the SuperDiSH is under 1 meter in width, it cannot be banned by homeowners' associations.
Direct broadcast satellite (DBS) is a term used to refer to satellite television broadcasts
intended for home reception, also referred to as direct-to-home signals. It covers both analog
and digital television and radio reception, and is often extended to other services provided
by modern digital television systems, including video-on-demand and interactive features. A
"DBS service" usually refers to either a commercial service, or a group of free channels
available from one orbital position targeting one country.
Terminology confusion
In certain regions of the world, especially in North America, DBS is used to refer to providers of subscription satellite packages, and has become applied to the entire equipment chain involved. Modern satellite providers in the United States use high-power Ku-band transmissions with circular polarization, which allow small dishes, together with digital compression (hence an alternative term, Digital Satellite System, itself likely connected to the proprietary encoding system used by DirecTV, Digital Satellite Service), and DBS is often misused to refer to these systems. DBS systems are often driven by pay television
providers, which drives further confusion. Additionally, in some areas it is used to refer to
specific segments of the Ku-band, normally 12.2 to 12.7 GHz, as this bandwidth is often
referred to as DBS or one of its synonyms. In comparison, European "Ku band" DBS
systems can drop as low as 10.7 GHz.
Adding to the naming complexity, the ITU's original frequency allocation plan of 1977 for Europe, the Soviet Union and Northern Africa introduced a concept of extremely high power spot-beam broadcasting (see Ekran satellite) which it termed DBS, although only a handful of the participating countries went as far as launching satellites under this plan, and even fewer operated anything resembling a DBS service.
Commercial DBS services
The first commercial DBS service, Sky Television plc (now BSkyB), was launched in 1989.
Sky TV started as a four-channel free-to-air analogue service on the Astra 1A satellite,
serving the United Kingdom and Republic of Ireland. By 1991, Sky had changed to a
conditional access pay model, and launched a digital service, Sky Digital, in 1998, with
analogue transmission ceasing in 2001. Since the DBS nomenclature is rarely used in the UK
or Ireland, the popularity of Sky's service has caused the terms "minidish" and "digibox" to
be applied to products other than Sky's hardware. BSkyB is controlled by News Corporation.
PrimeStar began transmitting an analog service to North America in 1991, and was joined by
DirecTV Group's DirecTV (then owned by GM Hughes Electronics), in 1994. At the time,
DirecTV's introduction was the most successful consumer electronics debut in American
history. Although PrimeStar transitioned to a digital system in 1994, it was ultimately unable
to compete with DirecTV, which required a smaller satellite dish and could deliver more
programming. DirecTV eventually purchased PrimeStar in 1999 and migrated all PrimeStar
subscribers to DirecTV equipment. In 2003, News Corporation purchased a controlling
interest in DirecTV's parent company, Hughes Electronics, and renamed the company
DirecTV Group.
In 1996, EchoStar's Dish Network went online in the United States and, as DirecTV's
primary competitor, achieved similar success. AlphaStar also launched but soon went under.
Dominion Video Satellite Inc.'s Sky Angel also went online in the United States in 1996 with
its DBS service geared toward the faith and family market. It has since grown from six to 36
TV and radio channels of family entertainment, Christian-inspirational programming and 24-hour news. Dominion, under its former corporate name Video Satellite Systems Inc., was actually the second of the first nine companies to apply to the FCC for a high-power DBS license in 1981, and is the sole surviving DBS pioneer from that first round of
forward-thinking applicants. Sky Angel, although a separate and independent DBS service,
uses the satellites, transmission facilities, & receiving equipment used for Dish Network
through an agreement with Echostar. Because of this, Sky Angel subscribers also have the
option of subscribing to Dish Network's channels as well.
In 2003, EchoStar attempted to purchase DirecTV, but the U.S. Department of Justice
denied the purchase based on anti-competitive concerns.
Free DBS services
Germany is likely the leader in free-to-air DBS, with approximately 40 analogue and 100
digital channels broadcast from the SES Astra 1 position at 19.2°E. These are not marketed
as a DBS service, but are received in approximately 12 million homes, as well as in any home
using the German commercial DBS system, Premiere.
The United Kingdom has approximately 90 free-to-air digital channels, for which a
promotional and marketing plan is being devised by the BBC and ITV, to be sold as
"Freesat". It is intended to provide a multi-channel service for areas which cannot receive
Freeview, and eventually replace their network of UHF repeaters in these areas.
India's national broadcaster, Doordarshan, promotes a free-to-air DBS package as "DD
Direct Plus", which is provided as in-fill for the country's terrestrial transmission network.
While originally launched as backhaul for their digital terrestrial television service, a large
number of French channels are free-to-air on 5°W, and have recently been announced as
being official in-fill for the DTT network.
In North America (USA, Canada and Mexico) there are over 80 FTA digital channels
available on Intelsat Americas 5; the majority of them are ethnic or religious. Other popular
FTA satellites include AMC-4, AMC-6, Galaxy 10R and SatMex 5. A company called
GloryStar promotes FTA religious broadcasters on IA-5 and AMC-4.
Cable television is a system of providing television to consumers via radio frequency signals
transmitted to televisions through fixed optical fibers or coaxial cables as opposed to the
over-the-air method used in traditional television broadcasting (via radio waves) in which a
television antenna is required. FM radio programming, high-speed Internet, telephony and
similar non-television services may also be provided.
The abbreviation CATV is often used to mean "Cable TV". It originally stood for
Community Antenna Television, from cable television's origins in 1948: in areas where
over-the-air reception was limited by mountainous terrain, large "community antennas" were
constructed, and cable was run from them to individual homes.
In electrodynamics, circular polarization (also circular polarisation) of electromagnetic
radiation is a polarization such that the tip of the electric field vector, at a fixed point in
space, describes a circle as time progresses. The name is derived from this fact. The electric
vector, at one point in time, describes a helix along the direction of wave propagation (see
the polarization article for pictures). The magnitude of the electric field vector is constant as
it rotates. Circular polarization is a limiting case of the more general condition of elliptical
polarization. The other special case is the easier-to-understand linear polarization.
Circular (and elliptical) polarization is possible because the propagating electric (and
magnetic) fields can have two orthogonal components with independent amplitudes and
phases (and the same frequency).
A circularly polarized wave may be resolved into two linearly polarized waves, of equal
amplitude, in phase quadrature (90 degrees apart) and with their planes of polarization at
right angles to each other.
Circular polarization may be referred to as right or left, depending on the direction in which
the electric field vector rotates. Unfortunately, two opposing, historical conventions exist. In
physics and astronomy, polarization is defined as seen from the receiver, such as a telescope or
radio telescope. By this definition, if you could stop time and look at the electric field along
the beam, it would trace out a helix with the same shape as a screw of the same handedness. For
example, right circular polarization produces a right threaded (or forward threaded) screw. In
the U.S., Federal Standard 1037C also defines the handedness of circular polarization in this
manner. In electrical engineering, however, it is more common to define polarization as seen
from the source, such as from a transmitting antenna. To avoid confusion, it is good practice
to specify "as seen from the receiver" (or transmitter) when polarization matters.
FM radio
The term "circular polarization" is often used erroneously to describe mixed polarity signals
used mostly in FM radio (87.5 to 108.0 MHz), where a vertical and a horizontal component
are propagated simultaneously by a single or a combined array. This has the effect of
producing greater penetration into buildings and difficult reception areas than a signal with
just one plane of polarization.
Circular dichroism
Circular dichroism (CD) is the differential absorption of left- and right-handed circularly
polarized light. It is a form of spectroscopy used to determine the optical isomerism and
secondary structure of molecules.
In general, this phenomenon will be exhibited in absorption bands of any optically active
molecule. As a consequence, circular dichroism is exhibited by biological molecules, because
of the dextrorotatory (e.g. some sugars) and levorotatory (e.g. some amino acids) molecules they
contain. Noteworthy as well is that a secondary structure will also impart a distinct CD to its
respective molecules. Therefore, the alpha helix of proteins and the double helix of nucleic
acids have CD spectral signatures representative of their structures.
Mathematical description of circular polarization
The classical sinusoidal plane wave solution of the electromagnetic wave equation for the electric and magnetic fields is (cgs units)

$$\mathbf{E}(\mathbf{r},t) = |\mathbf{E}|\,\mathrm{Re}\left\{|\psi\rangle\, e^{i(kz-\omega t)}\right\}, \qquad \mathbf{B}(\mathbf{r},t) = \hat{\mathbf{z}} \times \mathbf{E}(\mathbf{r},t)$$

for the electric and magnetic fields respectively, where k is the wavenumber,

$$\omega = ck$$

is the angular frequency of the wave, and c is the speed of light. Here $|\mathbf{E}|$ is the amplitude of the field and

$$|\psi\rangle \equiv \begin{pmatrix} \psi_x \\ \psi_y \end{pmatrix} = \begin{pmatrix} \cos\theta\, e^{i\alpha_x} \\ \sin\theta\, e^{i\alpha_y} \end{pmatrix}$$

is the Jones vector in the x-y plane.

If αy is rotated by π/2 radians with respect to αx and the x amplitude equals the y amplitude, the wave is circularly polarized. The Jones vector is then

$$|\psi\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ \pm i \end{pmatrix} e^{i\alpha_x}$$

where the plus sign indicates right circular polarization and the minus sign indicates left circular polarization. In the case of circular polarization, the electric field vector of constant magnitude rotates in the x-y plane.

If unit vectors are defined such that

$$|R\rangle \equiv \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -i \end{pmatrix} \qquad \text{and} \qquad |L\rangle \equiv \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ i \end{pmatrix}$$

then the polarization state can be written in the "R-L basis" as

$$|\psi\rangle = \psi_R |R\rangle + \psi_L |L\rangle$$

where

$$\psi_R \equiv \frac{\cos\theta\, e^{i\alpha_x} + i\sin\theta\, e^{i\alpha_y}}{\sqrt{2}} \qquad \text{and} \qquad \psi_L \equiv \frac{\cos\theta\, e^{i\alpha_x} - i\sin\theta\, e^{i\alpha_y}}{\sqrt{2}}.$$
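As a numerical companion to the formulas above, the following NumPy sketch builds a general Jones vector and verifies the R-L decomposition. Note that which basis state ends up labelled "right" or "left" depends on the sign convention discussed earlier.

```python
import numpy as np

# Verify that any Jones vector decomposes as psi_R|R> + psi_L|L>.
theta, ax, ay = 0.7, 0.3, 1.1   # arbitrary polarization parameters
psi = np.array([np.cos(theta) * np.exp(1j * ax),
                np.sin(theta) * np.exp(1j * ay)])

R = np.array([1, -1j]) / np.sqrt(2)
L = np.array([1,  1j]) / np.sqrt(2)

# Components are inner products with the basis vectors (conjugate on the left).
psi_R = np.vdot(R, psi)
psi_L = np.vdot(L, psi)
print(np.allclose(psi, psi_R * R + psi_L * L))       # True

# A quadrature, equal-amplitude wave lies entirely in one circular basis state.
cp = np.array([1, 1j]) / np.sqrt(2)
print(abs(np.vdot(R, cp)), abs(np.vdot(L, cp)))      # 0.0 1.0
```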
In electrodynamics, linear polarization or plane polarization of electromagnetic radiation
is a confinement of the electric field vector or magnetic field vector to a given plane along
the direction of propagation. See polarization for more information.
Historically, the orientation of a polarized electromagnetic wave has been defined in the
optical regime by the orientation of the electric vector, and in the radio regime, by the
orientation of the magnetic vector.
Mathematical description of linear polarization
The classical sinusoidal plane wave solution of the electromagnetic wave equation for the electric and magnetic fields is (cgs units)

$$\mathbf{E}(\mathbf{r},t) = |\mathbf{E}|\,\mathrm{Re}\left\{|\psi\rangle\, e^{i(kz-\omega t)}\right\}, \qquad \mathbf{B}(\mathbf{r},t) = \hat{\mathbf{z}} \times \mathbf{E}(\mathbf{r},t)$$

for the electric and magnetic fields respectively, where k is the wavenumber,

$$\omega = ck$$

is the angular frequency of the wave, and c is the speed of light. Here $|\mathbf{E}|$ is the amplitude of the field and

$$|\psi\rangle \equiv \begin{pmatrix} \psi_x \\ \psi_y \end{pmatrix} = \begin{pmatrix} \cos\theta\, e^{i\alpha_x} \\ \sin\theta\, e^{i\alpha_y} \end{pmatrix}$$

is the Jones vector in the x-y plane.

The wave is linearly polarized when the phase angles αx and αy are equal,

$$\alpha_x = \alpha_y \equiv \alpha.$$

This represents a wave polarized at an angle θ with respect to the x axis. In that case the Jones vector can be written

$$|\psi\rangle = \begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix} e^{i\alpha}.$$

The state vectors for linear polarization in x or y are special cases of this state vector. If unit vectors are defined such that

$$|x\rangle \equiv \begin{pmatrix} 1 \\ 0 \end{pmatrix} \qquad \text{and} \qquad |y\rangle \equiv \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

then the polarization state can be written in the "x-y basis" as

$$|\psi\rangle = \cos\theta\, e^{i\alpha} |x\rangle + \sin\theta\, e^{i\alpha} |y\rangle = \psi_x |x\rangle + \psi_y |y\rangle.$$
In electrodynamics, elliptical polarization is the polarization of electromagnetic radiation
such that the tip of the electric field vector describes an ellipse in any fixed plane
intersecting, and normal to, the direction of propagation. An elliptically polarized wave may
be resolved into two linearly polarized waves in phase quadrature with their polarization
planes at right angles to each other.
Other forms of polarization, such as circular and linear polarization, can be considered to be
special cases of elliptical polarization.
Mathematical description of elliptical polarization
The classical sinusoidal plane wave solution of the electromagnetic wave equation for the electric and magnetic fields is (cgs units)

$$\mathbf{E}(\mathbf{r},t) = |\mathbf{E}|\,\mathrm{Re}\left\{|\psi\rangle\, e^{i(kz-\omega t)}\right\}, \qquad \mathbf{B}(\mathbf{r},t) = \hat{\mathbf{z}} \times \mathbf{E}(\mathbf{r},t)$$

for the electric and magnetic fields respectively, where k is the wavenumber,

$$\omega = ck$$

is the angular frequency of the wave, and c is the speed of light. Here $|\mathbf{E}|$ is the amplitude of the field and

$$|\psi\rangle \equiv \begin{pmatrix} \psi_x \\ \psi_y \end{pmatrix} = \begin{pmatrix} \cos\theta\, e^{i\alpha_x} \\ \sin\theta\, e^{i\alpha_y} \end{pmatrix}$$

is the Jones vector in the x-y plane. Here θ is an angle that determines the tilt of the ellipse and αx − αy determines the aspect ratio of the ellipse. If αx and αy are equal, the wave is linearly polarized. If they differ by

$$\pm\frac{\pi}{2}$$

(with the x and y amplitudes equal) the wave is circularly polarized.
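The three cases can be collected into a small classifier based on (θ, αx, αy) as defined above; a minimal sketch:

```python
import numpy as np

def classify(theta, alpha_x, alpha_y, tol=1e-9):
    """Classify a Jones state (cos(theta) e^{i a_x}, sin(theta) e^{i a_y})."""
    dphi = (alpha_x - alpha_y) % np.pi       # phase difference, folded to [0, pi)
    equal_amps = abs(abs(np.cos(theta)) - abs(np.sin(theta))) < tol
    if dphi < tol or np.pi - dphi < tol:
        return "linear"
    if equal_amps and abs(dphi - np.pi / 2) < tol:
        return "circular"
    return "elliptical"

print(classify(0.3, 0.5, 0.5))               # linear (equal phases)
print(classify(np.pi / 4, np.pi / 2, 0.0))   # circular (quadrature, equal amplitudes)
print(classify(0.3, np.pi / 2, 0.0))         # elliptical
```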
Low-Noise Block Converter (LNB)
A low-noise block converter (LNB, for low-noise block, or sometimes LNC, for low-noise converter) is used in communications satellite (usually broadcast satellite) reception.
The LNB is usually fixed on or in the satellite dish, for the reasons outlined below.
LNBF disassembled
Satellites use comparatively high radio frequencies to transmit their signals.
Ku-band linear-polarised LNBF
As microwave satellite signals do not easily pass through walls, roofs, or even glass windows,
satellite antennas are required to be outdoors, and the signal needs to be passed indoors via
cables. When radio signals are sent through coaxial cables, the higher the frequency, the
more losses occur in the cable per unit of length. The signals used for satellite are of such
high frequency (in the multiple gigahertz range) that special (costly) cable types or
waveguides would be required and any significant length of cable leaves very little signal left
on the receiving end.
The job of the LNB is to use the superheterodyne principle to take a wide block (or band) of
relatively high frequencies, amplify and convert them to similar signals carried at a much
lower frequency (called intermediate frequency or IF). These lower frequencies travel
through cables with much less attenuation of the signal, so there is much more signal left on
the satellite receiver end of the cable. It is also much easier and cheaper to design electronic
circuits to operate at these lower frequencies, rather than the very high frequencies of
satellite transmission.
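To see why this matters, note that coaxial attenuation in dB grows roughly with the square root of frequency (skin effect). The numbers below are a rule-of-thumb sketch, normalized to an assumed 20 dB per 100 m at 1 GHz; they are not datasheet values for any particular cable.

```python
import math

# Rule-of-thumb coax loss scaling: attenuation (dB) ~ sqrt(frequency).
LOSS_AT_1GHZ = 20.0   # dB per 100 m, illustrative figure for RG-6-class cable

def loss_db_per_100m(freq_ghz):
    return LOSS_AT_1GHZ * math.sqrt(freq_ghz)

for f in (1.2, 12.0):   # typical IF vs. Ku-band downlink frequency
    print(f"{f:>5.1f} GHz: ~{loss_db_per_100m(f):.0f} dB per 100 m")
```

On this scaling, a 30 m cable run loses roughly 7 dB at a 1.2 GHz IF but over 20 dB at 12 GHz, which is why the conversion is done right at the dish.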
The "low-noise" part means that special electronic engineering techniques are used, that the amplification and mixing take place before cable attenuation, and that the block is free of additional electronics like a power supply or a digital receiver. This all leads to a signal which has less noise (unwanted signals) on the output than would be possible with less stringent engineering. Generally speaking, the higher the frequencies with which an electronic component has to operate, the more critical it is that noise be controlled. If low-noise engineering techniques were not used, the sound and picture of satellite TV would be of very low quality, if they could be received at all without a much larger dish reflector. The low-noise quality of an LNB is expressed as the noise figure or noise temperature.
For the reception of wideband satellite television carriers, typically 27 MHz wide, the frequency accuracy of the LNB local oscillator need only be of the order of ±500 kHz, so low-cost dielectric resonator oscillators (DROs) may be used. For the reception of narrow-bandwidth carriers, or ones using advanced modulation techniques such as 16-QAM, highly stable and low phase noise LNB local oscillators are required. These use an internal crystal oscillator, or an external 10 MHz reference from the indoor unit, and a phase-locked loop (PLL) oscillator.
LNBFs
Direct broadcast satellite (DBS) dishes use an LNBF ("LNB feedhorn"), which integrates the antenna's feedhorn with the LNB. Small diplexers are often used to distribute the resulting IF signal (usually 950 to 1450 MHz) "piggybacked" in the same cable TV wire that carries lower-frequency terrestrial television from an outdoor antenna. Another diplexer then separates the signals to the receiver of the TV set and to the integrated receiver/decoder (IRD) of the DBS set-top box.
Newer Ka band systems use additional IF blocks from the LNBF, one of which will cause interference to UHF and cable TV frequencies above 250 MHz, precluding the use of diplexers. The other block is higher than the original, up to 2.5 GHz, requiring the LNB to be connected with high-quality all-copper RG-6/U cables. This is in addition to the higher electrical power and current requirements of multiple dual-band LNBFs.
For some satellite Internet and free-to-air (FTA) signals, a universal LNB (Ku band) is
recommended. Most North American DBS signals use circular polarization, instead of linear
polarization, therefore requiring a different LNB type for proper reception. In this case, the
polarization must be adjusted between clockwise and counterclockwise, rather than
horizontal and vertical.
In the case of DBS, the voltage supplied by the set-top box to the LNB determines the
polarisation setting. With multi-TV systems, a dual LNB allows both polarisations to be selected at once
by a switch, which acts as a distribution amplifier. The amplifier then passes the proper
signal to each box according to what voltage each has selected. The newest systems may
select polarization and which LNBF to use by sending codes instead. The oldest satellite
systems actually powered a rotating antenna on the feedhorn, at a time when there was
typically only one LNB or LNA on a very large TVRO dish.
Universal LNB
A universal LNB can receive both polarisations and the full range of frequencies in the
satellite Ku or C band. Some models can receive both polarisations simultaneously through
two different connectors, and others are switchable or fully adjustable in their polarisation.
Here is an example of Universal LNB specifications:
Local oscillator: 9.75 / 10.6 GHz
Frequency: 10.7–12.75 GHz
Noise figure: 0.7 dB
Polarization: Linear
Standard North America Ku-band LNB
By covering a smaller frequency range, an LNB with a better noise figure can be produced. Pay
TV operators can also supply a single fixed polarization LNBF to save a small amount of
expense.
Here is an example of a Standard Linear LNB:
Local oscillator: 10.75 GHz
Frequency: 11.7-12.2 GHz
Noise Figure: 0.5 dB
Polarization: Linear
North America DBS LNB
Here is an example of an LNB used for DBS:
Local oscillator: 11.25 GHz
Frequency: 12.2-12.7 GHz
Noise figure: 0.7 dB
Polarization: Circular
C-band LNB
Here is an example of a North American C-band LNB:
Local oscillator: 5.15 GHz
Frequency: 3.6-4.2 GHz
Noise temperature: ranges from 15 to 100 kelvins (C-band LNBs are rated by noise temperature in kelvins rather than noise figure in dB).
Polarization: Linear
Dual/Quad LNBs
Two or four LNBs in one unit, enabling multiple receivers to be used on one dish.
Monobloc LNBs
A unit consisting of two LNBs designed to receive satellites spaced close together. For example, in parts of Europe monoblocs designed to receive the Hotbird (13°E) and Astra 1 (19.2°E) satellites are popular because they enable reception of both satellites on a single dish without requiring an expensive and noisy rotator.
Low noise block downconverter (LNB)
Have you ever wondered what an LNB is, and what an LNB LO frequency is? Here is some
information about LNBs that I hope will help explain matters.
The abbreviation LNB stands for Low Noise Block. It is the device on the front of a
satellite dish that receives the very low level microwave signal from the satellite, amplifies it,
changes the signals to a lower frequency band and sends them down the cable to the indoor
receiver.
The expression low noise refers to the quality of the first stage input amplifier transistor.
The quality is measured in units called Noise Temperature, Noise Figure or Noise Factor.
Both Noise Figure and Noise Factor may be converted into Noise Temperature. The
lower the Noise Temperature the better. So an LNB with Noise Temperature = 100K is
twice as good as one with 200K.
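The conversions between these ratings are simple; a minimal sketch using the standard 290 K reference temperature:

```python
import math

# Noise figure (dB) <-> noise factor (ratio) <-> noise temperature (K), T0 = 290 K.
T0 = 290.0

def nf_db_to_temp_k(nf_db):
    noise_factor = 10 ** (nf_db / 10)
    return T0 * (noise_factor - 1)

def temp_k_to_nf_db(temp_k):
    return 10 * math.log10(1 + temp_k / T0)

print(f"0.7 dB noise figure -> {nf_db_to_temp_k(0.7):.0f} K")    # ~51 K
print(f"100 K               -> {temp_k_to_nf_db(100):.2f} dB")   # ~1.29 dB
```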
The expression Block refers to a whole block of microwave frequencies, as received from the satellite, being down-converted to a lower (block) range of frequencies in the cable to the receiver. Satellites broadcast mainly in the range of about 4 to 21 GHz.
Low noise block downconverter (LNB) diagram
The diagram shows the input waveguide on the left which is connected to the collecting feed
or horn. As shown there is a vertical pin through the broad side of the waveguide that
extracts the vertical polarisation signals as an electrical current. The satellite signals first go
through a band pass filter which only allows the intended band of microwave frequencies to
pass through. The signals are then amplified by a Low Noise Amplifier and passed to the Mixer. At the Mixer, everything that has come through the band pass filter and amplifier stage is mixed with a powerful local oscillator signal, generating a wide range of output products. These include sums, differences and multiples of the wanted input signals and the local oscillator frequency. Amongst the mixer output products are the difference frequencies between the wanted input signal and the local oscillator frequency. These are the ones of interest. The second band pass filter selects these and feeds them to
the output L band amplifier and into the cable. Typically the output frequency = input
frequency - local oscillator frequency. In some cases it is the other way round so that the
output frequency = local oscillator frequency - input frequency. In this case the output
spectrum is inverted.
Examples of input frequency band, LNB local oscillator frequency and output frequency
band are shown below.
Band      Input band from satellite (GHz)    LO frequency (GHz)    Output L band into cable (MHz)    Comments
C band    3.4 - 4.2                          5.15                  950 - 1750                        inverted output spectrum
C band    3.625 - 4.2                        5.15                  950 - 1525                        inverted output spectrum
C band    4.5 - 4.8                          5.76                  950 - 1260                        inverted output spectrum
C band    4.5 - 4.8                          5.95                  1150 - 1450                       inverted output spectrum
Ku band   10.7 - 11.7                        9.75                  950 - 1950                        Invacom SPV-50SM
Ku band   10.95 - 11.7                       10.0                  950 - 1700                        Invacom SPV-60SM
Ku band   10.95 - 12.15                      10.0                  950 - 2150                        Invacom SPV-70SM
Ku band   11.45 - 11.95                      10.5                  950 - 1450
Ku band   11.2 - 11.7                        10.25                 950 - 1450
Ku band   11.7 - 12.75                       10.75                 950 - 2000
Ku band   12.25 - 12.75                      11.3                  950 - 1450
Ku band   11.7 - 12.75                       10.6                  1100 - 2150
Ka band   19.2 - 19.7                        18.25                 950 - 1450
Ka band   19.7 - 20.2                        18.75                 950 - 1450
Ka band   20.2 - 20.7                        19.25                 950 - 1450
Ka band   20.7 - 21.2                        19.75                 950 - 1450
All the above illustrate a simple LNB, with one LNA and one LO frequency.
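The table rows can be reproduced, and the inverted-spectrum cases identified, with the subtraction rule just described; a minimal sketch:

```python
# L-band output for an (LO, input band) pair. A high-side LO (above the band)
# gives output = LO - input and an inverted spectrum; a low-side LO gives
# output = input - LO.
def output_band(lo_ghz, band_low_ghz, band_high_ghz):
    if lo_ghz > band_high_ghz:
        lo_if = round((lo_ghz - band_high_ghz) * 1000)
        hi_if = round((lo_ghz - band_low_ghz) * 1000)
        return lo_if, hi_if, "inverted"
    lo_if = round((band_low_ghz - lo_ghz) * 1000)
    hi_if = round((band_high_ghz - lo_ghz) * 1000)
    return lo_if, hi_if, "not inverted"

print(output_band(5.15, 3.4, 4.2))    # (950, 1750, 'inverted')      C band
print(output_band(9.75, 10.7, 11.7))  # (950, 1950, 'not inverted')  Ku low band
```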
More complex LNBs exist, particularly for satellite TV reception where people wish to
receive signals from multiple bands, alternative polarisations, and possibly simultaneously.
Dual-band LNBs
These will typically have two alternative local oscillator frequencies, for example 9.75 GHz
and 10.6 GHz with the higher frequency option selected using a 22 kHz tone injected into
the cable. Such an LNB may be used to receive 10.7 - 11.7 GHz using the lower 9.75 GHz
LO frequency or the higher band 11.7 - 12.75 GHz using the higher 10.6 GHz LO
frequency.
Dual polarisation LNBs
The LNB shown above has one wire going into the waveguide to pick up vertical polarisation. If the input waveguide is circular, it can support two polarisations, and it can be arranged for there to be two input probes at right angles, thus allowing two alternative polarisations (vertical or horizontal) to be selected, one at a time. Dual polarisation LNBs may commonly be switched remotely using two alternative DC supply voltages: e.g. 13 volts makes the LNB receive vertical polarisation and 19 volts makes it receive horizontal polarisation.
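A minimal sketch of this control signalling, assuming the 13 V / 19 V voltages described above and the universal 9.75/10.6 GHz local oscillators selected by the 22 kHz tone:

```python
# Universal-LNB control: supply voltage selects polarisation, 22 kHz tone selects LO.
def lnb_state(voltage_v, tone_22khz):
    polarisation = "vertical" if voltage_v < 15 else "horizontal"
    lo_ghz = 10.6 if tone_22khz else 9.75
    return polarisation, lo_ghz

def tune(transponder_ghz, voltage_v, tone_22khz):
    pol, lo = lnb_state(voltage_v, tone_22khz)
    return pol, round((transponder_ghz - lo) * 1000)   # IF in MHz

print(tune(11.377, 13, False))   # ('vertical', 1627)   low band
print(tune(12.207, 19, True))    # ('horizontal', 1607) high band
```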
Multi-LNBs
If both input probes have their own LNB amplifiers etc., you effectively have two LNBs in the same module, which will have two output cables, one for each polarisation. Many variants on this theme exist, with options also for multiple bands. A "Quad LNB" might thus have 4 outputs, one for each combination of the two polarisations and the two bands. Such an arrangement is attractive for a block-of-flats head-end antenna, which needs to feed multiple indoor satellite TV receivers whose viewers want all permutations of the two polarisations and two frequency bands.
LNB Frequency stability
All LNBs used for satellite TV reception use dielectric resonator stabilised local oscillators. The DRO is just a pellet of material which resonates at the required frequency. Compared with quartz crystal, a DRO is relatively unstable with temperature, and frequency accuracies may be +/- 250 kHz to as much as +/- 2 MHz at Ku band. This variation includes both the initial value plus variations of temperature over the full extremes of the operating range. Fortunately most TV carriers are quite wide bandwidth (like 27 MHz), so even with 2 MHz error the indoor receiver will successfully tune the carrier and capture it within the automatic frequency control capture range.
If you want the LNB for the reception of narrow carriers, say 50 kHz wide, you have a
problem since the indoor receiver may not find the carrier at all or may even find the wrong
one, in which case you need a rather clever receiver that will sweep slowly over a range like
+/- 2 MHz searching for the carrier and trying to recognise it before locking on to it.
188
WIRELESS COMMUNICACTION
Alternatively it is possible to buy Phase Lock Loop LNBs which have far better frequency accuracy. Such PLL LNBs have an internal crystal oscillator, or rely on an external 10 MHz reference signal sent up the cable by the indoor receiver. PLL LNBs are more expensive. The benefit of using an external-reference PLL LNB is that the indoor reference oscillator is easier to maintain at a stable constant temperature.
LNB Phase noise
All modern DRO LNBs are sold as 'digi-ready'. What this means is that some attention has
been paid in the design to keeping the phase noise down so as to facilitate the reception of
digital TV carriers. The phase noise of DRO LNBs is still far worse than for PLL LNBs.
Good phase noise performance is really needed for the reception of low bit rate
digital carriers and for digital carriers using high spectral efficiency modulation methods like
8-PSK, 8-QAM or 16-QAM modulation, which reduce the bandwidth required but need
more power from the satellite, a bigger receive dish and better quality PLL type oscillators in
both the transmit and receive chains.
LNB supply voltages
The DC voltage power supply is fed up the cable to the LNB. Often, by altering this voltage, it is possible to change the polarisation or, less commonly, the frequency band. Voltages are normally 13 volts or 19 volts.
Perfect weatherproofing of the outdoor connector is essential, otherwise corrosion is rapid.
Note that both the inner and outer conductors must make really good electrical contact.
High resistance can cause the LNB to switch permanently into the low voltage state. Very peculiar effects can occur if there are poor connections amongst multiple cables to, say, an LNB and a transmit BUC module, as the go and return DC supplies may become mixed up and the wrong voltage applied across the various items. The electrical connections at the antenna between the LNB and the BUC chassis are often indeterminate and depend on screws in waveguide flanges etc. Earth loop currents may also be a problem - it is possible to find 50 Hz or 60 Hz mains currents on the outer conductors - so be careful. Such stray currents, and induced RF fields from nearby transmitters and cell phones, may interfere with the wanted signals inside the cables. The quality and smoothing of the DC supplies used for the LNBs is important.
LNB Transmit reject filter
Some LNBs, such as those from Invacom, incorporate a receive band pass, transmit band
reject filter at the front end. This provides a good image rejection response for the receive function and also protects the LNB from spurious energy from the transmitter, which may
destroy the LNB. See Invacom pdf data sheet for an example.
How to test an LNB:
Check with a current meter that it is drawing DC current from the power supply. The
approx number of milliamps will be given by the manufacturer. Badly made or corroded F
type connections are the most probable cause of faults. Remember that the centre pin of the
F connector plug should stick out about 2 mm proud of the surrounding threaded ring.
Use a satellite finder power meter. If you point the LNB up at clear sky (outer space) then
the noise temperature contribution from the surroundings will be negligible, so the meter
reading will correspond to the noise temperature of the LNB, say 100K (K means kelvins, measured above absolute zero). If you then point the LNB at your hand or towards the ground, which is at a temperature of approx 300K, then the noise power reading on the meter should go up, corresponding to approx 400K (100K + 300K).
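The expected change in the meter reading follows from adding the scene's noise temperature to the LNB's own; a small sketch, assuming a ~10 K sky contribution:

```python
import math

# Noise power scales with system noise temperature: T_sys = T_LNB + T_scene.
def rise_db(t_lnb_k, t_sky_k=10, t_ground_k=300):
    return 10 * math.log10((t_lnb_k + t_ground_k) / (t_lnb_k + t_sky_k))

print(f"100 K LNB: ~{rise_db(100):.1f} dB rise when pointed at the ground")  # ~5.6 dB
```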
Note that LNBs may fail on one polarisation or on one frequency band and that the failure
mode may only occur at certain temperatures.
If you choose to try a replacement LNB in a VSAT system check the transmit reject filter
and supply voltage - you don't want to be one of those people who keeps blowing up LNBs
trying to find a good one!
Overloading an LNB
If you have a very large dish, say 7 m diameter, and point it at a satellite whose signals are intended for reception by small 70 cm diameter antennas, then the 20 dB increase in total power of the signals into the LNB may be sufficient to overload some of the transistor amplifier stages inside. This is not always obvious. It is suggested to measure the composite output power of the LNB using a power meter and to compare this with the -1 dB compression point in the manufacturer's specification. An alternative is to do an antenna pattern test on both a high power and a low power satellite. Any non-linearity problem with the high power satellite is then clearly visible. Special low-gain or high-output-level LNBs are available for use with large dishes.
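The ~20 dB figure follows directly from the ratio of dish areas, since received power scales with aperture area:

```python
import math

# Power increase in dB between two dish diameters: 20 * log10(D1 / D2).
def power_increase_db(big_m, small_m):
    return 20 * math.log10(big_m / small_m)

print(f"7 m vs 0.7 m dish: +{power_increase_db(7.0, 0.7):.0f} dB into the LNB")  # +20 dB
```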
Chapter No: 6
Introduction to Mobile
Communication
Universal Mobile Telecommunications System
(UMTS) is one of the third-generation (3G) mobile phone technologies. Currently,
the most common form uses W-CDMA as the underlying air interface, is standardized by
the 3GPP, and is the European answer to the ITU IMT-2000 requirements for 3G cellular
radio systems.
To differentiate UMTS from competing network technologies, UMTS is sometimes
marketed as 3GSM, emphasizing the combination of the 3G nature of the technology and
the GSM standard which it was designed to succeed.
Preface
This section discusses the technology, business, usage and other aspects encompassing and surrounding UMTS, the 3G successor to GSM, which utilizes the W-CDMA air interface and GSM infrastructure. Issues relating strictly to the W-CDMA air interface itself are covered separately.
Features
UMTS, using W-CDMA, supports up to 14.0 Mbit/s data transfer rates in theory (with
HSDPA), although at the moment users in deployed networks can expect a performance up
to 384 kbit/s for R99 handsets, and 3.6 Mbit/s for HSDPA handsets in the downlink
connection. This is still much greater than the 14.4 kbit/s of a single GSM error-corrected
circuit switched data channel or multiple 14.4 kbit/s channels in HSCSD, and - in
competition to other network technologies such as CDMA2000, PHS or WLAN - offers
access to the World Wide Web and other data services on mobile devices.
Precursors to 3G are 2G mobile telephony systems, such as GSM, IS-95, PDC, PHS and
other 2G technologies deployed in different countries. In the case of GSM, there is an
evolution path from 2G, called GPRS, also known as 2.5G. GPRS supports a much better
data rate (up to a theoretical maximum of 140.8 kbit/s, though typical rates are closer to 56
kbit/s) and is packet switched rather than connection oriented (circuit switched). It is
deployed in many places where GSM is used. E-GPRS, or EDGE, is a further evolution of
GPRS and is based on more modern coding schemes. With EDGE the actual packet data
rates can reach around 180 kbit/s (effective). EDGE systems are often referred to as "2.75G
Systems".
Since 2006, UMTS networks in many countries have been or are in the process of being
upgraded with High Speed Downlink Packet Access (HSDPA), sometimes known as 3.5G.
Currently, HSDPA enables downlink transfer speeds of up to 3.6 Mbit/s. Work is also
progressing on improving the uplink transfer speed with the High-Speed Uplink Packet
Access (HSUPA). Longer term, the 3GPP Long Term Evolution project plans to move
UMTS to 4G speeds of 100 Mbit/s down and 50 Mbit/s up, using a next generation air
interface technology based upon OFDM.
UMTS supports mobile videoconferencing, although experience in Japan and elsewhere has
shown that user demand for video calls is not very high.
Other possible uses for UMTS include the downloading of music and video content, as well
as live TV.
Real-world implementations
Beginning in 2003 under the name 3, Hutchison Whampoa gradually launched their startup
UMTS networks worldwide including Australia, Austria, Denmark, Hong Kong, Italy,
United Kingdom, Ireland and Sweden.
Operators are starting to sell mobile internet products that combine 3G and Wi-Fi in one
service. Laptop owners are sold a UMTS modem and given a client program that detects the
presence of a Wi-Fi network and switches between 3G and Wi-Fi when available. Initially
Wi-Fi was seen as a competitor to 3G, but it is now recognised that as long as the operator
owns or leases the Wi-Fi network, they will be able to offer a more competitive product than
with UMTS only. Nokia predicted that by the end of 2006 one sixth of all cellular phones
would be UMTS devices.
UMTS combines the W-CDMA, TD-CDMA, or TD-SCDMA air interfaces, GSM's Mobile
Application Part (MAP) core, and the GSM family of speech codecs. In the most popular
cellular mobile telephone variant of UMTS, W-CDMA is currently used. Note that other
wireless standards use W-CDMA as their air interface, including FOMA.
UMTS over W-CDMA uses a pair of 5 MHz channels. In contrast, the competing
CDMA2000 system uses one or more arbitrary 1.25 MHz channels for each direction of
communication. UMTS and other W-CDMA systems are widely criticized for their large
spectrum usage, which has delayed deployment in countries that acted relatively slowly in
allocating new frequencies specifically for 3G services (such as the United States).
The specific frequency bands originally defined by the UMTS standard are 1885-2025 MHz
for uplink and 2110-2200 MHz for downlink. In the US, the 1700 MHz band will be used
instead of 1900 MHz - which is already in use - for the uplink by many UMTS operators.
Additionally, UMTS is commonly run on 850 MHz and 1900 MHz (independently, for both
the uplink and downlink) in some countries, notably in the US by Cingular Wireless.
For existing GSM operators, it is a simple but costly migration path to UMTS: much of the
infrastructure is shared with GSM, but the cost of obtaining new spectrum licenses and
overlaying UMTS at existing towers can be prohibitively expensive.
A major difference of UMTS compared to GSM is the air interface forming Generic Radio
Access Network (GeRAN). It can be connected to various backbone networks like the
Internet, ISDN, GSM or to a UMTS network. GeRAN includes the three lowest layers of
OSI model. The network layer (OSI 3) protocols form the Radio Resource Management
protocol (RRM). They manage the bearer channels between the mobile terminals and the
fixed network including the handovers.
3G external modems
Using a cellular router, PCMCIA or USB card, customers are able to access 3G broadband
services, regardless of their choice of computer (such as a tablet PC or a PDA). Even the
software installs itself from the modem, so that absolutely no knowledge of technology is
required to get online in moments.
Using a phone that supports 3G and Bluetooth 2.0, multiple Bluetooth-capable laptops can
be connected to the Internet. The phone acts as a router, but via Bluetooth rather than
wireless networking (802.11) or a USB connection.
Interoperability and global roaming
At the air interface level, UMTS itself is incompatible with GSM. UMTS phones sold in
Europe (as of 2004) are UMTS/GSM dual-mode phones, hence they can also make and
receive calls on regular GSM networks. If a UMTS customer travels to an area without
UMTS coverage, a UMTS phone will automatically switch to GSM (roaming charges may
apply). If the customer travels outside of UMTS coverage during a call, the call will be
transparently handed off to available GSM coverage.
Regular GSM phones cannot be used on the UMTS networks.
Softbank (formerly Vodafone Japan, formerly J-Phone) operates a 3G network based on
UMTS compatible W-CDMA technology, that launched in December 2002. Lack of
investment in the network through 2003 meant that coverage was slow to expand and
subscriber numbers remained low. The network was publicly relaunched in October 2004
and again in 2005, and Vodafone now claims network coverage of 99% of the population,
while 15% of their subscribers are 3G users.
NTT DoCoMo's 3G network, FOMA, was the first commercial network using W-CDMA
since 2002. The first W-CDMA version used by NTT DoCoMo was incompatible with the
UMTS standard at the radio level, however USIM cards used by FOMA phones are
compatible with GSM phones, so that USIM card based roaming is possible from Japan to
GSM areas without any problem. Today the NTT DoCoMo network, as well as all the W-CDMA networks in the world, uses the standard version of UMTS, allowing potential
global roaming. Whether and under which conditions roaming can actually be used by
subscribers depends on the commercial agreements between operators.
All UMTS/GSM dual-mode phones should accept existing GSM SIM cards. Sometimes, you
are allowed to roam on UMTS networks using GSM SIM cards from the same provider.
In the United States, UMTS is currently offered by Cingular on 850 MHz and 1900 MHz,
due to the limitations of the spectrum available to them at the time they launched UMTS
service. T-Mobile will be rolling out UMTS on the 2100/1700 MHz frequencies in mid 2007.
Because of the frequencies used, early models of UMTS phones designated for the US will
likely not be operable overseas and vice versa; other standards, notably GSM, have faced
similar problems, an issue dealt with by the adoption of multi-band phones. Most UMTS
licensees seem to consider ubiquitous, transparent global roaming an important issue.
Spectrum allocation
Over 120 licenses have already been awarded to operators worldwide (as of December
2004), specifying W-CDMA radio access technology that builds on GSM. In Europe, the
license process occurred at the end of the technology bubble, and the auction mechanisms
for allocation set up in some countries resulted in some extremely high prices being paid,
notably in the UK and Germany. In Germany, bidders paid a total of 50.8 billion euros for six
licenses, two of which were subsequently abandoned and written off by their purchasers
(Mobilcom and the Sonera/Telefonica consortium). It has been suggested that these huge
license fees have the character of a very large tax paid on income expected 10 years down the
road - in any event they put some European telecom operators close to bankruptcy (most
notably KPN). Over the last few years some operators have written off some or all of the
license costs.
The UMTS spectrum allocated in Europe is already used in North America. The 1900 MHz
range is used for 2G (PCS) services, and 2100 MHz range is used for satellite
communications. Regulators have, however, freed up some of the 2100 MHz range for 3G
services, together with the 1700 MHz for the uplink. UMTS operators in North America
who want to implement a European style 2100/1900 MHz system will have to share
spectrum with existing 2G services in the 1900 MHz band. 2G GSM services elsewhere use
900 MHz and 1800 MHz and therefore do not share any spectrum with planned UMTS
services.
AT&T Wireless launched UMTS services in the United States by the end of 2004 strictly
using the existing 1900 MHz spectrum allocated for 2G PCS services. Cingular acquired
AT&T Wireless in 2004 and has since then launched UMTS in select US cities. Initial rollout
of UMTS in Canada will also be handled exclusively by the 1900 MHz band. T-Mobile's rollout of UMTS in the US will focus on the 2100/1700 MHz bands just auctioned.
Cingular Wireless is rolling out UMTS at 850 MHz in some cities to enhance its
existing UMTS network at 1900 MHz and now offers subscribers a number of UMTS
850/1900 phones. Telstra has rolled out a UMTS network in Australia under the brand Next
G in the 850 MHz band that will eventually replace the Telstra CDMA network and enhance
its existing 2100 MHz UMTS network. The 850 MHz band will allow better coverage in
rural areas where there are greater distances between subscriber and base station. As current
phones on the market do not support the UMTS 850/2100 bands, handset choices available
to Telstra subscribers will initially be limited.
Other competing standards
There are other competing 3G standards, such as FOMA, CDMA2000 and TD-SCDMA,
though UMTS can use the latter's air interface standard. FOMA and UMTS similarly share
the W-CDMA air interface system.
On the Internet access side, competing systems include WiMAX and Flash-OFDM.
Different variants of UMTS compete with different standards. While this article has largely
discussed UMTS-FDD, a form oriented for use in conventional cellular-type spectrum,
UMTS-TDD, a system based upon a TD-CDMA air interface, is used to provide UMTS
service where the uplink and downlink share the same spectrum, and is very efficient at
providing asymmetric access. It provides more direct competition with WiMAX and similar
Internet-access oriented systems than conventional UMTS.
Both the CDMA2000 and W-CDMA air interface systems are accepted by ITU as part of
the IMT-2000 family of 3G standards, in addition to UMTS-TDD's TD-CDMA, Enhanced
Data Rates for GSM Evolution (EDGE) and China's own 3G standard, TD-SCDMA.
CDMA2000's narrower bandwidth requirements make it easier than UMTS to deploy in
existing spectrum along with legacy standards. In some, but not all, cases, existing GSM
operators only have enough spectrum to implement either UMTS or GSM, not both. For
example, in the US D, E, and F PCS spectrum blocks, the amount of spectrum available is 5
MHz in each direction. A standard UMTS system would saturate that spectrum.
In many markets, however, the co-existence issue is of little relevance, as legislative hurdles
exist to co-deploying two standards in the same licensed slice of spectrum.
Most GSM operators in North America as well as others around the world have accepted
EDGE as a temporary 3G solution. AT&T Wireless launched EDGE nationwide in 2003,
Cingular launched EDGE in most markets and T-Mobile USA has launched EDGE
nationwide as of October 2005. Rogers Wireless launched nation-wide EDGE service in late
2003 for the Canadian market. Bitė Lietuva (Lithuania) was one of the first operators in
Europe to launch EDGE in December 2003. TIM (Italy) launched EDGE in 2004. The
benefit of EDGE is that it leverages existing GSM spectrum and is compatible with existing
GSM handsets. It is also much easier, quicker, and considerably cheaper for wireless carriers
to "bolt-on" EDGE functionality by upgrading their existing GSM transmission hardware to
support EDGE than having to install almost all brand-new equipment to deliver UMTS.
EDGE provides a short-term upgrade path for GSM operators and directly competes with
CDMA2000.
Problems and issues
In the early days of UMTS there were issues with rollout and continuing development:
- overweight handsets with poor battery life;
- problems with handover from UMTS to GSM, connections being dropped or handovers only possible in one direction (UMTS->GSM) with the handset only changing back to UMTS after hanging up, even if UMTS coverage returns - in most networks around the world this is no longer an issue;
- initially poor coverage due to the time it takes to build a network;
- for fully fledged UMTS incorporating Video on Demand features, one base station needs to be set up every 1-1.5 km (0.62-0.93 mi). While this is economically feasible in urban areas, it is infeasible in less populated suburban and rural areas;
- Some countries, such as the United States, have allocated spectrum that differs from
what had been almost universally agreed, preventing the use of existing UMTS-2100
equipment, and requiring the design and manufacture of different equipment for use
in these markets. As is the case with GSM today, this presumably will mean that
some UMTS equipment will work only in some markets, and some will work only in
others, and some more-expensive equipment may be available that works in all
markets. It also diminishes the economy of scale and benefit to users from the
network effect that would have existed if these few countries had harmonized their
regulations with the others.
The Base Station Subsystem (BSS) is the section of a traditional cellular telephone
network which is responsible for handling traffic and signaling between a mobile phone and
the Network Switching Subsystem. The BSS carries out transcoding of speech channels,
allocation of radio channels to mobile phones, paging, quality management of transmission
and reception over the Air interface and many other tasks related to the radio network.
Base Transceiver Station
Base Transceiver Station Antenna in Paris
Base Transceiver Station (BTS) is the equipment which facilitates wireless communication between user equipment and the network. User equipment means devices like mobile phones (handsets), WLL phones, computers with wireless internet connectivity, WiFi and WiMAX gadgets etc. The network can be that of any of the wireless communication technologies like GSM, CDMA, WLL, WAN, WiFi, WiMAX etc. A BTS is also referred to as an RBS (Radio Base Station), Node B (in 3G networks) or simply BS (Base Station).
BTS in Mobile Communication
Though the term BTS can be applicable to any of the wireless communication standards, it is generally and commonly associated with mobile communication technologies like GSM and CDMA. In this regard, a BTS forms part of the Base Station Subsystem (BSS) and has the equipment (transceivers) for transmitting and receiving radio signals, signal processors, signal paths, signal amplifiers, and equipment for system management. It may also have equipment for encrypting and decrypting communications, spectrum filtering tools (band pass filters) etc. Antennas may also be considered components of the BTS in a general sense, as they facilitate its functioning. Typically a BTS will have several transceivers (TRXs), which allow it to serve several different frequencies and different sectors of the cell (in the case of sectorised base stations). A BTS is controlled by a parent Base Station Controller via the Base Station Control Function (BCF). The BCF is implemented as a discrete unit, or even incorporated in a TRX in compact base stations. The BCF provides an Operations and Maintenance (O&M) connection to the Network Management System (NMS), and manages the operational states of each TRX, as well as software handling and alarm collection. The basic structure and functions of the BTS remain the same regardless of the wireless technology.
Important terms regarding a Mobile BTS
Diversity Techniques
Antenna positioning to implement horizontal space diversity
In order to improve the quality of the received signal, two receiving antennas are often used, placed a distance apart equal to an odd multiple of a quarter wavelength (at 900 MHz the wavelength is about 33 cm). This technique, known as antenna diversity or space diversity, helps resolve problems caused by fading. The antennas can be spaced horizontally or vertically; in the first case, although installation is more demanding, better performance is obtained.
Other than antenna or space diversity, there are other diversity techniques like
frequency/time diversity, antenna pattern diversity, polarization diversity etc.
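
As a quick worked example of the quarter-wavelength rule (a minimal sketch, not taken from the original text; the function name is illustrative), the candidate antenna separations can be computed from the carrier frequency:

# Candidate receive-antenna separations for space diversity:
# odd multiples of a quarter wavelength.
C = 299_792_458  # speed of light, m/s

def diversity_spacings(freq_hz, count=4):
    """Return the first `count` odd multiples of a quarter wavelength, in metres."""
    quarter = C / freq_hz / 4
    return [(2 * k + 1) * quarter for k in range(count)]

# At 900 MHz the wavelength is about 0.333 m, so candidate spacings
# are roughly 0.083 m, 0.25 m, 0.42 m, 0.58 m, ...
print([round(d, 3) for d in diversity_spacings(900e6)])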
Splitting
The process of creating more coverage and capacity in a wireless system by having more than one cell site cover a particular amount of geography. Each cell site covers a smaller area with lower transmit power, and thus offers the ability to reuse frequencies more times in a larger geographic coverage area, such as a city or MTA.
Sectoring
A cell is subdivided into a certain number of sectors, each of which is "illuminated" by a directional (or panel) antenna, that is, an antenna that does not radiate in all directions but concentrates the flow of power within a particular area of the cell, known as a sector. Each sector can therefore be treated as a new cell. By using directional antennas, the co-channel interference is reduced. A typical structure is the trisector, also known as clover, in which there are 3 sectors, each served by separate antennas. Each sector has a pointing direction offset by 120° from its adjacent ones. If not sectorised, the cell is served by an omnidirectional antenna, which radiates in all directions. Bisectored cells are also implemented, with the antennas serving sectors separated by 180° from one another.
General Architecture
A BTS in general has the following units:
TRX : Transceiver
- Quite widely referred to as DRX (Driver Receiver)
- Transmits and receives radio signals
- Also sends and receives signals to/from higher network entities (like the Base Station Controller in mobile telephony)
PA : Power Amplifier
- Amplifies the signal from DRX for transmission through antenna
- May be integrated with DRX
Combiner
- Combines feeds from several DRXs so that they can be sent out through a single antenna
- Reduces the number of antennas used
Duplexer
- Separates the signals sent to and received from the antenna
- Allows sending and receiving through the same antenna ports (cables to the antenna)
Antenna
- The antenna is sometimes considered not to be a part of the BTS itself
Alarm Extension System
- Collects working status alarms of various units in BTS and extends them to Operations and
Maintenance (O&M) monitoring stations
Control Function
- Controls and manages the various units of the BTS
- Holds the software needed for the BTS to function
- On-the-spot configurations, status changes, software upgrades, etc. are done through the control function
Fresnel zone
In optics and radio communications, a Fresnel zone (pronounced fray-NEL), named for physicist Augustin-Jean Fresnel, is one of a (theoretically infinite) number of concentric ellipsoids of revolution which define volumes in the radiation pattern of a (usually) circular aperture. Fresnel zones result from diffraction by the circular aperture.
The cross section of the first Fresnel zone is circular. Subsequent Fresnel zones are annular
in cross section, and concentric with the first.
To maximize received signal strength, one needs to minimize the effect of out-of-phase signals. To do that, make sure the strongest signals do not encounter any obstruction, so that they have the maximum chance of reaching the receiver. The strongest signals are those closest to the direct line between transmitter and receiver, and they always lie within the first Fresnel zone.
The concept of Fresnel zones may also be used to analyze interference by obstacles near the
path of a radio beam. The first zone must be kept largely free from obstructions to avoid
interfering with the radio reception. However, some obstruction of the Fresnel zones can often be tolerated; as a rule of thumb, the maximum allowable obstruction is 40%, and the recommended obstruction is 20% or less.
For establishing Fresnel zones, we must first determine the RF Line of Sight (RF LoS),
which in simple terms is a straight line between the transmitting and receiving antennas.
Now the zone surrounding the RF LoS is said to be the Fresnel zone.
The general equation for calculating the radius of the nth Fresnel zone at any point P along the link is:

Fn = sqrt( n * λ * d1 * d2 / (d1 + d2) )

where
Fn = the nth Fresnel zone radius in metres
d1 = the distance of P from one end in metres
d2 = the distance of P from the other end in metres
λ = the wavelength of the transmitted signal in metres
The cross-section radius of the first Fresnel zone is largest at the centre of the RF LoS, where it can be calculated as:

r = 72.05 * sqrt( D / (4 * f) )

where
r = radius in feet
D = total distance in miles
f = frequency transmitted in gigahertz.

Or, equivalently, in metric units:
r = 17.32 * sqrt( D / (4 * f) )

where
r = radius in metres
D = total distance in kilometres
f = frequency transmitted in gigahertz.
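
The formulas above translate directly into code. The following sketch (function names are illustrative) computes the nth Fresnel zone radius at an arbitrary point along the link, and checks it against the mid-path shortcut formula:

import math

C = 299_792_458  # speed of light, m/s

def fresnel_radius(n, d1_m, d2_m, freq_hz):
    """Radius of the nth Fresnel zone, in metres, at a point d1/d2 metres from the ends."""
    lam = C / freq_hz  # wavelength in metres
    return math.sqrt(n * lam * d1_m * d2_m / (d1_m + d2_m))

def first_zone_max_radius(total_km, freq_ghz):
    """Maximum radius of the first Fresnel zone, in metres, at mid-path."""
    return 17.32 * math.sqrt(total_km / (4 * freq_ghz))

# A 10 km link at 2.4 GHz: both expressions agree at mid-path (about 17.7 m).
print(fresnel_radius(1, 5000, 5000, 2.4e9))
print(first_zone_max_radius(10, 2.4))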
General Packet Radio Service (GPRS)
General Packet Radio Service (GPRS) is a mobile data service available to users of GSM and IS-136 mobile phones. GPRS data transfer is typically
charged per megabyte of transferred data, while data communication via traditional circuit
switching is billed per minute of connection time, independent of whether the user has
actually transferred data or has been in an idle state. GPRS can be utilized for services such
as WAP access, SMS and MMS, but also for Internet communication services such as email
and web access.
2G cellular systems combined with GPRS are often described as "2.5G", that is, a technology between the second (2G) and third (3G) generations of mobile telephony. It provides moderate-speed data transfer by using unused TDMA channels in, for example, the GSM
system. Originally there was some thought to extend GPRS to cover other standards, but
instead those networks are being converted to use the GSM standard, so that GSM is the
only kind of network where GPRS is in use. GPRS is integrated into GSM standards releases
starting with Release 97 and onwards. First it was standardized by ETSI but now that effort
has been handed onto the 3GPP.
GPRS basics
GPRS is different from the older Circuit Switched Data (or CSD) connection included in
GSM standards. In CSD, a data connection establishes a circuit, and reserves the full
bandwidth of that circuit during the lifetime of the connection. GPRS is packet-switched
which means that multiple users share the same transmission channel, only transmitting
when they have data to send. This means that the total available bandwidth can be
immediately dedicated to those users who are actually sending at any given moment,
providing higher utilisation where users only send or receive data intermittently. Web
browsing, receiving e-mails as they arrive and instant messaging are examples of uses that
require intermittent data transfers, which benefit from sharing the available bandwidth.
Usually, GPRS data are billed per kilobyte of information transferred, while circuit-switched data connections are billed per second. The latter reflects the fact that even when no data are being transferred, the bandwidth is unavailable to other potential users.
The multiple access methods used in GSM with GPRS are based on frequency division
duplex (FDD) and FDMA. During a session, a user is assigned to one pair of uplink and
downlink frequency channels. This is combined with time domain statistical multiplexing, i.e.
packet mode communication, which makes it possible for several users to share the same
frequency channel. The packets have constant length, corresponding to a GSM time slot. In
the downlink, first-come first-served packet scheduling is used. In the uplink, a scheme that
is very similar to reservation ALOHA is used. This means that slotted Aloha (S-ALOHA) is
used for reservation inquiries during a contention phase, and then the actual data is
transferred using first-come first-served scheduling.
GPRS originally supported (in theory) IP, PPP and X.25 connections. The last has been
typically used for applications like wireless payment terminals although it has been removed
as a requirement from the standard. X.25 can still be supported over PPP, or even over IP,
but doing this requires either a router to do encapsulation or intelligence built into the end
terminal. In practice, mainly IPv4 is used. PPP is often not supported by the operator, while
IPv6 is not yet popular.
The GPRS capability classes
Class A
Can be connected to GPRS service and GSM service (voice, SMS), using both at the same time. Such devices are known to be available today.
Class B
Can be connected to GPRS service and GSM service (voice, SMS), but can use only one or the other at a given time. During GSM service (voice call or SMS), GPRS service is suspended, and then resumed automatically after the GSM service has concluded. Most GPRS mobile devices are Class B.
Class C
Can be connected to either GPRS service or GSM service (voice, SMS), and must be switched manually between one service and the other.
A true Class A device may be required to transmit on two different frequencies at the same
time, and thus will need two radios. To get around this expensive requirement, a GPRS
mobile may implement the dual transfer mode (DTM) feature. A DTM-capable mobile may
use simultaneous voice and packet data, with the network coordinating to ensure that it is
not required to transmit on two different frequencies at the same time. Such mobiles are
considered to be pseudo Class A. Some networks are expected to support DTM in 2007.
GPRS multislot classes
The five-layer TCP/IP model (GPRS sits at the data link layer):
5. Application layer: DHCP • DNS • FTP • Gopher • HTTP • IMAP4 • IRC • NNTP • XMPP • MIME • POP3 • SIP • SMTP • SNMP • SSH • TELNET • RPC • RTP • RTCP • TLS/SSL • SDP • SOAP • …
4. Transport layer: TCP • UDP • DCCP • SCTP • RSVP • GTP • …
3. Internet layer: IP (IPv4 • IPv6) • IGMP • ICMP • BGP • RIP • OSPF • ISIS • IPsec • ARP • RARP • …
2. Data link layer: 802.11 • ATM • DTM • Ethernet • FDDI • Frame Relay • GPRS • EVDO • HSPA • HDLC • PPP • L2TP • PPTP • …
1. Physical layer: Ethernet physical layer • ISDN • Modems • PLC • SONET/SDH • G.709 • …
GPRS speed is a direct function of the number of TDMA time slots assigned, which is the
lesser of (a) what the particular cell supports and (b) the maximum capability of the mobile
device expressed as a GPRS Multislot Class.
Multislot Class | Downlink Slots | Uplink Slots | Active Slots
1               | 1              | 1            | 2
2               | 2              | 1            | 3
3               | 2              | 2            | 3
4               | 3              | 1            | 4
5               | 2              | 2            | 4
6               | 3              | 2            | 4
7               | 3              | 3            | 4
8               | 4              | 1            | 5
9               | 3              | 2            | 5
10              | 4              | 2            | 5
11              | 4              | 3            | 5
12              | 4              | 4            | 5
32              | 5              | 3            | 6
The most common GPRS multislot classes are:
Class 2
Minimal GPRS implementation
Class 4
Modest GPRS implementation, 50% faster download than Class 2
Class 6
Modest implementation, but with better uploading than Class 4
Class 8
Better implementation, 33% faster download than Classes 4 & 6
Class 10
Better implementation, and with better uploading than Class 8, seen in better cell
phones and PC Cards
Class 12
Best implementation, with maximum upload performance, typically seen only in
high-end PC Cards
GPRS coding scheme
Transfer speed depends also on the channel encoding used. The least robust (but fastest)
coding scheme (CS-4) is available near the Base Transceiver Station (BTS) while the most
robust coding scheme (CS-1) is used when the Mobile Station (MS) is further away from the
BTS.
Using the CS-4 it is possible to achieve a user speed of 20.0 kbit/s per time slot. However,
using this scheme the cell coverage is 25% of normal. CS-1 can achieve a user speed of only
8.0 kbit/s per time slot, but has 98% of normal coverage. Newer network equipment can
adapt the transfer speed automatically depending on the mobile location.
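
Putting the multislot classes and coding schemes together, the achievable GPRS downlink rate can be estimated as the per-slot rate of the coding scheme multiplied by the number of downlink slots actually granted. The sketch below is illustrative only; the CS-1 and CS-4 rates are taken from the text, while the CS-2 and CS-3 rates (12.0 and 14.4 kbit/s) are commonly quoted values and should be treated as assumptions:

# Estimated GPRS downlink rate = per-slot coding-scheme rate x granted downlink slots.
CS_RATE_KBITPS = {"CS-1": 8.0, "CS-2": 12.0, "CS-3": 14.4, "CS-4": 20.0}
DOWNLINK_SLOTS = {2: 2, 4: 3, 6: 3, 8: 4, 10: 4, 12: 4}  # from the multislot table above

def gprs_downlink_kbitps(multislot_class, coding_scheme, cell_slots):
    # The cell may grant fewer slots than the phone's multislot class allows.
    slots = min(DOWNLINK_SLOTS[multislot_class], cell_slots)
    return slots * CS_RATE_KBITPS[coding_scheme]

# A Class 10 phone using CS-4 near the BTS, with the cell granting 4 slots: 80 kbit/s.
print(gprs_downlink_kbitps(10, "CS-4", cell_slots=4))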
Note: Like CSD, HSCSD establishes a circuit and is usually billed per minute. For an
application such as downloading, HSCSD may be preferred, since circuit-switched data are
usually given priority over packet-switched data on a mobile network, and there are relatively
few seconds when no data are being transferred.
GPRS is packet based. When TCP/IP is used, each phone can have one or more IP
addresses allocated. GPRS will store and forward the IP packets to the phone during cell
handover (when you move from one cell to another). A radio noise induced pause can be
interpreted by TCP as packet loss, and cause a temporary throttling in transmission speed.
GPRS services and hardware
GPRS upgrades GSM data services providing:
MMS - Multimedia Messaging Service
Push To Talk over Cellular PoC / PTT - Push to talk
Instant Messaging and Presence Wireless Village
Internet Applications for Smart Devices through WAP
Point-to-point (PTP) service: internetworking with the Internet (IP protocols).
Short Message Service (SMS): bearer for SMS.
Future enhancements: flexible to add new functions, such as more capacity, more
users, new accesses, new protocols, new radio networks.
USB GPRS modem
USB GPRS modems typically present a terminal-like (AT command) interface over USB 2.0 or later, support data formats such as V.42bis and RFC 1144 (Van Jacobson header compression), and may use external antennas.
GPRS in practice
Telephone operators have priced GPRS relatively cheaply (compared to older GSM data
transfer, CSD and HSCSD) in many areas, such as Finland. Some mobile phone operators
offer flat-rate access to the Internet, while other mobile phone operators base their tariffs on the amount of data transferred, usually rounded off per 100 kilobytes.
The maximum speed of a GPRS connection (as offered in 2003) is similar to a modem
connection in an analog wire telephone network, about 32–40 kbit/s (depending on the
phone used). Latency is very high; round-trip times are typically about 600–700 ms and often reach one second. GPRS is typically prioritized lower than speech, and thus the quality of connection varies greatly.
In order to set up a GPRS connection for a wireless modem, a user needs to specify Access
Point Name (APN), optionally a user name and password, and very rarely an IP address, all
provided by the network operator.
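
As an illustration of that setup, a wireless modem is usually driven with the standard 3GPP AT commands: AT+CGDCONT defines the PDP context (including the APN) and a dial string such as *99# starts the GPRS bearer. The sketch below uses the pyserial library; the serial port name and the APN are placeholder assumptions, and the exact command set varies by modem:

import serial  # pyserial

# Placeholder port and APN; both are platform- and operator-specific.
port = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)

def at(cmd):
    """Send one AT command and return the modem's raw reply."""
    port.write((cmd + "\r").encode())
    return port.read(256).decode(errors="replace")

print(at('AT+CGDCONT=1,"IP","internet.example"'))  # define PDP context 1 with the APN
print(at("ATD*99#"))                               # dial the GPRS bearer (CONNECT, then PPP)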
Devices with latency/RTT improvements (via, e.g., the extended UL TBF mode feature) are fairly widely available, and network upgrades providing the feature(s) have been deployed by certain operators. With these enhancements the effective round-trip time can be reduced, resulting in a significant increase in application-level throughput speeds.
Enhanced Data rates for GSM Evolution (EDGE)
Enhanced Data rates for GSM Evolution (EDGE), or Enhanced GPRS (EGPRS), is a digital mobile phone technology that increases data transmission rates and improves data transmission reliability. It is
generally classified as a 2.75G network technology. EDGE has been introduced into GSM
networks around the world since 2003, initially in North America.
It can be used for any packet-switched application, such as an Internet connection. High-speed data applications such as video services and other multimedia benefit from EGPRS' increased data capacity. EDGE Circuit Switched is a possible future development.
EDGE Evolution continues in Release 7 of the 3GPP standard providing doubled
performance e.g. to complement High-Speed Packet Access (HSPA).
Technology
EDGE/EGPRS is implemented as a bolt-on enhancement to 2G and 2.5G GSM and GPRS
networks, making it easier for existing GSM carriers to upgrade to it. EDGE/EGPRS is a
superset to GPRS and can function on any network with GPRS deployed on it, provided the
carrier implements the necessary upgrade.
Although EDGE requires no hardware or software changes to be made in GSM core
networks, base stations must be modified. EDGE compatible transceiver units must be
installed and the base station subsystem (BSS) needs to be upgraded to support EDGE.
New mobile terminal hardware and software is also required to decode/encode the new
modulation and coding schemes and carry the higher user data rates to implement new
services.
Transmission techniques
In addition to Gaussian minimum shift keying (GMSK), EDGE uses 8 phase shift keying (8PSK) for the upper five of its nine modulation and coding schemes. EDGE produces a 3-bit word for every change in carrier phase. This effectively triples the gross data rate offered
by GSM. EDGE, like GPRS, uses a rate adaptation algorithm that adapts the modulation
and coding scheme (MCS) according to the quality of the radio channel, and thus the bit rate
and robustness of data transmission. It introduces a new technology not found in GPRS,
Incremental Redundancy, which, instead of retransmitting disturbed packets, sends more
redundancy information to be combined in the receiver. This increases the probability of
correct decoding.
EDGE can carry data speeds up to 236.8 kbit/s for 4 timeslots (theoretical maximum is
473.6 kbit/s for 8 timeslots) in packet mode and will therefore meet the International
Telecommunications Union's requirement for a 3G network, and has been accepted by the
ITU as part of the IMT-2000 family of 3G standards. It also enhances the circuit data mode
called HSCSD, increasing the data rate of this service.
EGPRS modulation and coding scheme (MCS)
Coding and modulation Speed
Modulation
scheme (MCS)
(kbit/s/slot)
MCS-1
8.8
GMSK
MCS-2
11.2
GMSK
MCS-3
14.8
GMSK
MCS-4
17.6
GMSK
MCS-5
22.4
8-PSK
MCS-6
29.6
8-PSK
MCS-7
44.8
8-PSK
MCS-8
54.4
8-PSK
MCS-9
59.2
8-PSK
Classification
Whether EDGE is 2G or 3G depends on implementation. While Class 3 and below EDGE
devices clearly are not 3G, class 4 and above devices perform at a higher bandwidth than
other technologies conventionally considered as 3G (such as 1xRTT). Because of the
variability, EDGE is generally classified as 2.75G network technology.
EDGE Evolution
EDGE Evolution improves on EDGE in a number of ways. Latencies are reduced by
lowering the Transmission Time Interval by half (from 20 ms to 10 ms). Bit rates are
increased using dual carriers, higher symbol rate and higher-order modulation (32QAM and
16QAM instead of 8-PSK), and "Turbo Codes" to improve error correction. Finally, signal quality is improved using dual antennas. An EDGE Evolution terminal or network can support only some of these improvements, or roll them out in stages.
Networks
EDGE is actively supported by GSM operators in North America. Some GSM operators
elsewhere view UMTS as the ultimate upgrade path and either plan to skip EDGE altogether
or use it outside the UMTS coverage area. However, the high cost and slow uptake of
UMTS have resulted in fairly common support for EDGE in the global GSM/GPRS
market.
The Base Transceiver Station, or BTS, contains the equipment for transmitting and receiving radio signals (transceivers), antennas, and equipment for encrypting and decrypting communications with the Base Station Controller (BSC). As described in the BTS section above, a BTS for anything other than a picocell will typically have several transceivers (TRXs) serving several frequencies and sectors, and is controlled by a parent BSC via the Base Station Control Function (BCF), which provides the O&M connection to the Network Management System (NMS) and manages the operational state of each TRX.
The functions of a BTS vary depending on the cellular technology used and the cellular
telephone provider. For some vendors the BTS is a plain transceiver which receives information from the MS (Mobile Station) through the Um (air) interface and then converts it into a TDM ("PCM") based interface, the Abis, and sends it towards the BSC. Other vendors build their BTSs so that the information is preprocessed, target cell lists are generated, and even intracell handover (HO) can be fully handled. The advantage in this case is less load on the expensive Abis interface.
The BTSs are equipped with radios that are able to modulate layer 1 of interface Um; for
GSM 2G+ the modulation type is GMSK, while for EDGE-enabled networks it is GMSK
and 8-PSK.
Antenna combiners are implemented to use the same antenna for several TRXs (carriers),
the more TRXs are combined the greater the combiner loss will be. Up to 8:1 combiners are
found in micro and pico cells only.
Frequency hopping is often used to increase overall BTS performance; this involves the rapid switching of voice traffic between TRXs in a sector. A hopping sequence is followed by the TRXs and handsets using the sector. Several hopping sequences are available; the
sequence in use for a particular cell is continually broadcast by that cell so that it is known to
the handsets.
A TRX transmits and receives according to the GSM standards, which specify eight TDMA
timeslots per radio frequency. A TRX may lose some of this capacity as some information is
required to be broadcast to handsets in the area that the BTS serves. This information allows
the handsets to identify the network and gain access to it. This signalling makes use of a
channel known as the BCCH (Broadcast Control Channel).
Sectorisation
By using directional antennas on a base station, each pointing in different directions, it is
possible to sectorise the base station so that several different cells are served from the same
location. Typically these directional antennas have a beamwidth of 65 to 85 degrees. This
increases the traffic capacity of the base station (each frequency can carry eight voice
channels) whilst not greatly increasing the interference caused to neighboring cells (in any
given direction, only a small number of frequencies are being broadcast). Typically two
antennas are used per sector, at spacing of ten or more wavelengths apart. This allows the
operator to overcome the effects of fading due to physical phenomena such as multipath
reception. Some amplification of the received signal as it leaves the antenna is often used to
preserve the balance between uplink and downlink signal.
Base Station Controller
The Base Station Controller (BSC) provides, classically, the intelligence behind the BTSs. Typically a BSC has tens or even hundreds of BTSs under its control. The BSC handles allocation of radio channels, receives measurements from the mobile phones, and controls handovers from BTS to BTS (except in the case of an inter-BSC handover, in which case control is in part the responsibility of the anchor MSC). A key function of the BSC is to act as a concentrator
where many different low capacity connections to BTSs (with relatively low utilisation)
become reduced to a smaller number of connections towards the Mobile Switching Center
(MSC) (with a high level of utilisation). Overall, this means that networks are often
structured to have many BSCs distributed into regions near their BTSs which are then
connected to large centralised MSC sites.
The BSC is undoubtedly the most robust element in the BSS as it is not only a BTS
controller but, for some vendors, a full switching center, as well as an SS7 node with
connections to the MSC and SGSN (when using GPRS). It also provides all the required
data to the Operation Support Subsystem (OSS) as well as to the performance measuring
centers.
A BSC is often based on a distributed computing architecture, with redundancy applied to
critical functional units to ensure availability in the event of fault conditions. Redundancy
often extends beyond the BSC equipment itself and is commonly used in the power supplies
and in the transmission equipment providing the A-ter interface to PCU.
The databases for all the sites, including information such as carrier frequencies, frequency
hopping lists, power reduction levels, receiving levels for cell border calculation, are stored in
the BSC. This data is obtained directly from radio planning engineering which involves
modelling of the signal propagation as well as traffic projections.
Transcoder
Two GSM base station antennas disguised as trees in Dublin, Ireland.
Although the transcoding (compressing/decompressing) function is defined by the standard as a BSC function, several vendors have implemented the solution in a stand-alone rack using a proprietary interface. This subsystem is also referred to as the TRAU (Transcoder and Rate Adaptation Unit). The transcoding function converts the voice channel coding between the GSM coder (Regular Pulse Excitation with Long Term Prediction, RPE-LTP) and the CCITT standard PCM (G.711 A-law or µ-law). Since the PCM coding is 64 kbit/s and the GSM coding is 13 kbit/s, this also involves a buffering function, so that PCM 8-bit words can be recoded to construct GSM 20 ms traffic blocks, compressing voice channels from the 64 kbit/s PCM standard to the 13 kbit/s rate used on the air interface. Some networks use 32 kbit/s ADPCM on the terrestrial side of the network instead of 64 kbit/s PCM, and the TRAU converts accordingly. When the traffic is not voice but data, such as fax or email, the TRAU enables its Rate Adaptation Unit function to provide compatibility between the BSS data rates and the MSC capability.
However, at least in Siemens' and Nokia's architecture, the Transcoder is an identifiable
separate sub-system which will normally be co-located with the MSC. In some of Ericsson's
systems it is integrated to the MSC rather than the BSC. The reason for these designs is that
if the compression of voice channels is done at the site of the MSC, fixed transmission link
costs can be reduced.
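
The buffering requirement follows from simple arithmetic on the 20 ms traffic blocks. The following sketch (illustrative only) works out the block sizes implied by the rates above:

# Size of one 20 ms traffic block at each of the rates discussed above.
FRAME_MS = 20

for name, kbitps in [("PCM (G.711)", 64), ("ADPCM", 32), ("GSM full rate", 13)]:
    bits = kbitps * FRAME_MS  # kbit/s x ms = bits
    print(f"{name}: {bits:.0f} bits ({bits / 8:.1f} bytes) per 20 ms block")
# One 20 ms PCM block is 1280 bits (160 eight-bit words), which the TRAU
# recodes into one 260-bit GSM full-rate traffic block.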
Packet Control Unit
The Packet Control Unit (PCU) is a late addition to the GSM standard. It performs some of
the processing tasks of the BSC, but for packet data. The allocation of channels between
voice and data is controlled by the base station, but once a channel is allocated to the PCU,
the PCU takes full control over that channel.
The PCU can be built into the base station, built into the BSC or even, in some proposed
architectures, it can be at the SGSN site.
BSS interfaces
Image of the GSM network, showing the BSS interfaces to the MS, NSS and GPRS Core
Network
Um - The air interface between the MS (Mobile Station) and the BTS. This interface
uses LAPDm protocol for signaling, to conduct call control, measurement reporting,
Handover, Power control, Authentication, Authorization, Location Update and so
on. Traffic and Signaling are sent in bursts of 0.577 ms at intervals of 4.615 ms, to
form data blocks each 20 ms.
Abis - The interface between the Base Transceiver Station and Base Station
Controller. Generally carried by a DS-1, ES-1, or E1 TDM circuit. Uses TDM
subchannels for traffic (TCH), LAPD protocol for BTS supervision and telecom
signaling, and carries synchronization from the BSC to the BTS and MS.
A - The interface between the BSC and Mobile Switching Center. It is used for
carrying Traffic channels and the BSSAP user part of the SS7 stack. Although there
are usually transcoding units between BSC and MSC, the signaling communication
takes place between these two ending points and the transcoder unit doesn't touch
the SS7 information, only the voice or CS data are transcoded or rate adapted.
Ater - The interface between the Base Station Controller and the Transcoder. It is a proprietary interface whose name depends on the vendor (for example, Ater by Nokia); it carries the A interface information from the BSC, leaving it untouched.
Gb - Connects the BSS to the Serving GPRS Support Node (SGSN) in the GPRS
Core Network.
Network Switching Subsystem, or NSS, is the component of a GSM system that carries
out switching functions and manages the communications between mobile phones and the
Public Switched Telephone Network. It is owned and deployed by mobile phone operators
and allows mobile phones to communicate with each other and telephones in the wider
telecommunications network. The architecture closely resembles a telephone exchange, but
there are additional functions which are needed because the phones are not fixed in one location. Each of these functions handles a different aspect of mobility management, and they are described in more detail below.
The Network Switching Subsystem, also referred to as the GSM core network, usually refers
to the circuit-switched core network, used for traditional GSM services such as voice calls,
SMS, and Circuit Switched Data calls.
There is also an overlay architecture on the GSM core network which provides packet-switched data services; it is known as the GPRS core network. This allows mobile phones to have access to services such as WAP, MMS, and Internet access.
All mobile phones manufactured today have both circuit and packet based services, so most
operators have a GPRS network in addition to the standard GSM core network.
Mobile Switching Centre (MSC)
Description
The Mobile Switching Centre or MSC is a sophisticated telephone exchange which
provides circuit-switched calling, mobility management, and GSM services to the mobile
phones roaming within the area that it serves. This means voice, data and fax services, as
well as SMS and call divert.
In the GSM mobile phone system, in contrast with earlier analogue services, fax and data
information is sent directly digitally encoded to the MSC. Only at the MSC is this re-coded
into an "analogue" signal (although actually this will almost certainly mean sound encoded
digitally as PCM signal in a 64-kbit/s timeslot, known as a DS0 in America).
MSCs go by various names in different contexts, reflecting their complex role in the network; all of these terms could refer to the same MSC doing different things at different times.
A Gateway MSC is the MSC that determines at which visited MSC the subscriber who is being called is currently located. It also interfaces with the Public Switched Telephone Network. All mobile-to-mobile calls and PSTN-to-mobile calls are routed through a GMSC.
The term is only valid in the context of one call, since any MSC may provide both the gateway function and the Visited MSC function. However, some manufacturers design dedicated high-capacity MSCs which do not have any BSSs connected to them; these MSCs will then be the Gateway MSC for many of the calls they handle.
The Visited MSC is the MSC where a customer is currently located. The VLR associated
with this MSC will have the subscriber's data in it.
The Anchor MSC is the MSC from which a handover has been initiated. The Target MSC
is the MSC toward which a Handover should take place. An MSC Server is a part of the
redesigned MSC concept starting from 3GPP Release 5.
Mobile Switching Centre Server (MSC-S)
Description
The Mobile Switching Centre Server or MSC Server is a soft-switch variant of the Mobile Switching Centre, which provides circuit-switched calling, mobility management, and GSM services to the mobile phones roaming within the area that it serves. MSC Server functionality enables a split between the control plane (signalling) and the user plane (the bearer, in a network element called the Media Gateway), which allows better placement of network elements within the network.
The MSC Server and the MGW (Media Gateway) make it possible to cross-connect circuit-switched calls switched over IP, ATM AAL2, or TDM.
More information is available in 3GPP TS 23.205.
Other GSM Core Network Elements connected to the MSC
The MSC connects to the following elements:
The HLR for obtaining data about the SIM and MSISDN
the Base Station Subsystem which handles the radio communication with 2G and
2.5G mobile phones.
the UTRAN which handles the radio communication with 3G mobile phones.
the VLR for determining where other mobile subscribers are located.
other MSCs for procedures such as handover.
Procedures implemented
Tasks of the MSC include
delivering calls to subscribers as they arrive based on information from the VLR
connecting outgoing calls to other mobile subscribers or the PSTN.
delivering SMSs from subscribers to the SMSC and vice versa
arranging handovers from BSC to BSC
carrying out handovers from this MSC to another
supporting supplementary services such as conference calls or call hold.
collecting billing information.
Home Location Register (HLR)
Description
The Home Location Register or HLR is a central database that contains details of each
mobile phone subscriber that is authorized to use the GSM core network.
There is logically one HLR per Public Land Mobile Network. The HLR is a single logical database, but it can be maintained as several separate databases when the data to be stored exceeds the capacity of one.
More precisely, the HLR stores details of every SIM card issued by the mobile phone
operator. Each SIM has a unique identifier called an IMSI which is one of the primary keys
to each HLR record.
The next important items of data associated with the SIM are the telephone numbers used to
make and receive calls to the mobile phone, known as MSISDNs. The main MSISDN is the
number used for making and receiving voice calls and SMS, but it is possible for a SIM to
have other secondary MSISDNs associated with it for fax and data calls. Each MSISDN is
also a primary key to the HLR record.
Examples of other data stored in a SIM record in the HLR are:
GSM services that the subscriber has requested or been given
GPRS settings to allow the subscriber to access packet services
Current Location of subscriber (VLR and SGSN)
Call divert settings applicable for each associated MSISDN.
The HLR data is stored for as long as a subscriber remains with the mobile phone operator.
At first glance, the HLR seems to be just a database which is merely accessed by other
network elements which do the actual processing for mobile phone services. In fact the
HLR is a system which directly receives and processes MAP transactions and messages. If
the HLR fails, then the mobile network is effectively disabled as it is the HLR which
manages the Location Updates as mobile phones roam around.
As the number of mobile subscribers has grown, the HLR has evolved from the traditional telephone exchange hardware of the early days of GSM into a powerful computer server.
Other GSM Core Network Elements connected to the HLR
The HLR connects to the following elements:
the Gateway MSC (G-MSC) for handling incoming calls
The VLR for handling requests from mobile phones to attach to the network
The SMSC for handling incoming SMS
The voice mail system for delivering notifications to the mobile phone that a
message is waiting
Procedures implemented
The main function of the HLR is to manage the fact that SIMs and phones move around a
lot. The following procedures are implemented to deal with this:
Manage the mobility of subscribers by updating their position in administrative areas called 'location areas', which are identified by a LAC. When a user moves from one location area to another, the HLR is updated with a location area update, retrieving information from the BSS such as the BSIC (cell identifier).
Send the subscriber data to a VLR or SGSN when a subscriber first roams there.
Broker between the GMSC or SMSC and the subscriber's current VLR in order to
allow incoming calls or text messages to be delivered.
Remove subscriber data from the previous VLR when a subscriber has roamed away
from it.
Authentication Centre (AUC)
Description
The Authentication Centre or AUC is a function to authenticate each SIM card that
attempts to connect to the GSM core network (typically when the phone is powered on).
Once the authentication is successful, the HLR is allowed to manage the SIM and services
described above. An encryption key is also generated that is subsequently used to encrypt all
wireless communications (voice, SMS, etc.) between the mobile phone and the GSM core
network.
If the authentication fails, then no services are possible from that particular combination of
SIM card and mobile phone operator attempted. There is an additional form of
identification check performed on the serial number of the mobile phone described in the
EIR section below, but this is not relevant to the AUC processing.
Proper implementation of security in and around the AUC is a key part of an operator's
strategy to avoid SIM cloning.
The AUC does not engage directly in the authentication process, but instead generates data
known as triplets for the MSC to use during the procedure. The security of the process
depends upon a shared secret between the AUC and the SIM called the Ki. The Ki is
securely burned into the SIM during manufacture and is also securely replicated onto the
AUC. This Ki is never transmitted between the AUC and SIM, but is combined with the
IMSI to produce a challenge/response for identification purposes and an encryption key
called Kc for use in over the air communications.
Other GSM Core Network Elements connected to the AUC
The AUC connects to the following elements:
the MSC which requests a new batch of triplet data for an IMSI after the previous
data have been used. This ensures that same keys and challenge responses are not
used twice for a particular mobile.
Procedures implemented
The AUC stores the following data for each IMSI:
the Ki
Algorithm id (the standard algorithms are called A3 or A8, but an operator may
choose a proprietary one).
When the MSC asks the AUC for a new set of triplets for a particular IMSI, the AUC first
generates a random number known as RAND. This RAND is then combined with the Ki to
produce two numbers as follows:
The Ki and RAND are fed into the A3 algorithm and a number known as Signed
RESponse or SRES is calculated.
The Ki and RAND are fed into the A8 algorithm and a session key called Kc is
calculated.
The numbers (RAND, SRES, Kc) form the triplet sent back to the MSC. When a particular
IMSI requests access to the GSM core network, the MSC sends the RAND part of the
triplet to the SIM. The SIM then feeds this number and the Ki (which is burned onto the
SIM) into the A3 algorithm as appropriate and an SRES is calculated and sent back to the
MSC. If this SRES matches with the SRES in the triplet (which it should if it is a valid SIM),
then the mobile is allowed to attach and proceed with GSM services.
After successful authentication, the MSC sends the encryption key Kc to the Base Station
Controller (BSC) so that all communications can be encrypted and decrypted. Of course, the
mobile phone can generate the Kc itself by feeding the same RAND supplied during
authentication and the Ki into the A8 algorithm.
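
The triplet procedure can be sketched in a few lines. Note that A3 and A8 are operator-chosen algorithms that are not specified here, so the HMAC-based stand-ins below are placeholders only, not the real GSM algorithms; what the sketch shows is the structure (RAND in, SRES and Kc out) and the fact that Ki itself is never transmitted:

import hmac, hashlib, os

def a3_placeholder(ki, rand):
    """Stand-in for the operator's A3: derive the 32-bit signed response (SRES)."""
    return hmac.new(ki, b"A3" + rand, hashlib.sha256).digest()[:4]

def a8_placeholder(ki, rand):
    """Stand-in for the operator's A8: derive the 64-bit session key (Kc)."""
    return hmac.new(ki, b"A8" + rand, hashlib.sha256).digest()[:8]

def generate_triplet(ki):
    """AUC side: produce one (RAND, SRES, Kc) triplet for the MSC."""
    rand = os.urandom(16)  # 128-bit random challenge
    return rand, a3_placeholder(ki, rand), a8_placeholder(ki, rand)

ki = os.urandom(16)  # in reality burned into the SIM and replicated at the AUC
rand, sres, kc = generate_triplet(ki)
# SIM side: given only RAND, the SIM recomputes SRES from its own Ki;
# the MSC compares the two values to authenticate the subscriber.
assert a3_placeholder(ki, rand) == sres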
The AUC is usually collocated with the HLR, although this is not necessary. Whilst the procedure is secure for most everyday use, it is by no means crack-proof; therefore a new set of security methods was designed for 3G phones.
Visitor Location Register (VLR)
Description
The Visitor Location Register or VLR is a temporary database of the subscribers who
have roamed into the particular area which it serves. Each Base Station in the network is
served by exactly one VLR, hence a subscriber cannot be present in more than one VLR at a
time.
The data stored in the VLR has either been received from the HLR or collected from the MS. In practice, for performance reasons, most vendors integrate the VLR directly into the V-MSC; where this is not done, the VLR is very tightly linked with the MSC via a proprietary interface.
Data stored includes:
IMSI (the subscriber's identity number)
authentication data
MSISDN (the subscriber's phone number)
GSM services that the subscriber is allowed to access
Access Point (GPRS) subscribed
the HLR address of the subscriber
Other GSM Core Network Elements connected to the VLR
The VLR connects to the following elements:
the Visited MSC (V-MSC) to pass data needed by the V-MSC during its procedures,
e.g. authentication or call setup.
The HLR to request data for mobile phones attached to its serving area.
Other VLRs, to transfer temporary data concerning the mobile when it roams into a new VLR area (for example the TMSI, a temporary identifier used in place of the IMSI over the air).
Procedures implemented
The primary functions of the VLR are:
to inform the HLR that a subscriber has arrived in the particular area covered by the
VLR
to track where the subscriber is within the VLR area (location area) when no call is
ongoing
to allow or disallow which services the subscriber may use
to allocate roaming numbers during the processing of incoming calls
to purge the subscriber record if a subscriber becomes inactive whilst in the area of a
VLR. The VLR deletes the subscriber's data after a fixed time period of inactivity
and informs the HLR (e.g. when the phone has been switched off and left off or
when the subscriber has moved to an area with no coverage for a long time).
to delete the subscriber record when the subscriber explicitly moves to another VLR area, as instructed by the HLR
EIR
The EIR (Equipment Identity Register) is often integrated with the HLR. The EIR keeps a
list of mobile phones (identified by their IMEI) which are to be banned from the network or
monitored. This is designed to allow tracking of stolen mobile phones. In theory all data
about all stolen mobile phones should be distributed to all EIRs in the world through a
Central EIR. It is clear, however, that there are some countries where this is not in
operation. The EIR data does not have to change in real time, which means that this
function can be less distributed than the function of the HLR.
Other support functions
Connected more or less directly to the GSM core network are many other functions.
BC
The Billing Centre is responsible for processing the toll tickets generated by the VLRs and HLRs and generating a bill for each subscriber. It is also responsible for generating billing data for roaming subscribers.
SMSC
The Short Message Service Centre supports the sending of text messages.
MMSC
The Multimedia Messaging System Centre supports the sending of multimedia messages
(e.g. Images, Audio, Video and their combinations) to (or from) MMS-enabled Handsets.
VMS
The Voicemail System records and stores voicemails.
Bluetooth
Bluetooth is an industrial specification for wireless personal area networks (PANs).
Bluetooth provides a way to connect and exchange information between devices such as
mobile phones, laptops, PCs, printers, digital cameras, and video game consoles over a
secure, globally unlicensed short-range radio frequency. The Bluetooth specifications are
developed and licensed by the Bluetooth Special Interest Group.
A typical Bluetooth mobile phone headset
Bluetooth is a radio standard and communications protocol primarily designed for low power consumption, with a short range (power-class-dependent: 1, 10 or 100 meters), based on low-cost transceiver microchips in each device.
Bluetooth lets these devices communicate with each other when they are in range. The
devices use a radio communications system, so they do not have to be in line of sight of each
other, and can even be in other rooms, as long as the received transmission is powerful
enough.
Class   | Maximum Permitted Power (mW/dBm) | Range (approximate)
Class 1 | 100 mW (20 dBm)                  | ~100 meters
Class 2 | 2.5 mW (4 dBm)                   | ~10 meters
Class 3 | 1 mW (0 dBm)                     | ~1 meter
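
The two power columns in the table are related by P(dBm) = 10 * log10(P / 1 mW); a quick check (sketch):

import math

def mw_to_dbm(p_mw):
    """Convert a power in milliwatts to dBm (decibels relative to 1 mW)."""
    return 10 * math.log10(p_mw)

for p in (100, 2.5, 1):
    print(f"{p} mW = {mw_to_dbm(p):.0f} dBm")
# 100 mW = 20 dBm, 2.5 mW = 4 dBm, 1 mW = 0 dBm, matching the table above.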
Bluetooth profiles
In order to use Bluetooth, a device must be compatible with certain Bluetooth profiles.
These define the possible applications and uses.
List of applications
More prevalent applications of Bluetooth include:
Wireless control of and communication between a cell phone and a hands-free
headset or car kit. This was one of the earliest applications to become popular.
Wireless networking between PCs in a confined space and where little bandwidth is
required.
Wireless communications with PC input and output devices, the most common
being the mouse, keyboard and printer.
Transfer of files between devices with OBEX.
Transfer of contact details, calendar appointments, and reminders between devices
with OBEX.
Replacement of traditional wired serial communications in test equipment, GPS
receivers, medical equipment and traffic control devices.
For controls where infrared was traditionally used.
Sending small advertisements from Bluetooth enabled advertising hoardings to
other, discoverable, Bluetooth devices.
Seventh-generation game consoles—Nintendo Wii, Sony PlayStation 3—use
Bluetooth for their respective wireless controllers.
Bluetooth vs. Wi-Fi in networking
Bluetooth and Wi-Fi both have their places in today's offices, homes, and on the move:
setting up networks, printing, or transferring presentations and files from PDAs to
computers.
Bluetooth
Bluetooth is implemented in a variety of new products such as phones, printers, modems,
and headsets. Bluetooth is acceptable for situations when two or more devices are in
proximity to each other and don't require high bandwidth. Bluetooth is most commonly
used with phones and hand-held computing devices, either using a Bluetooth headset or
transferring files from phones/PDAs to computers.
Bluetooth also simplifies the discovery and setup of services. Bluetooth devices advertise all
services they provide. This makes the utility of the service that much more accessible,
without the need to worry about network addresses, permissions and all the other
considerations that go with typical networks.
Wi-Fi
Wi-Fi is more analogous to the traditional Ethernet network and requires configuration to set up shared resources, transmit files, or set up audio links (for example, headsets and hands-free devices). It uses the same radio frequencies as Bluetooth, but with higher power output, resulting in a stronger connection. Wi-Fi is sometimes called "wireless Ethernet." Although this description is inaccurate, it gives an indication of Wi-Fi's relative strengths and weaknesses. Wi-Fi requires more setup, but is better suited for operating full-scale networks because it enables a faster connection, better range from the base station, and better security than Bluetooth.
One method for comparing the efficiency of wireless transmission protocols such as
Bluetooth and Wi-Fi is spatial capacity, or bits per second per square meter.
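
As a rough illustration of the spatial capacity metric (a sketch; the Bluetooth figures come from this chapter, while the Wi-Fi figures are assumed typical values, not taken from the text):

import math

def spatial_capacity(bitrate_bps, range_m):
    """Bits per second per square metre over the protocol's coverage disc."""
    return bitrate_bps / (math.pi * range_m ** 2)

# Bluetooth: about 721 kbit/s over roughly 10 m (Class 2).
# Wi-Fi 802.11g: about 54 Mbit/s over roughly 100 m (assumed figures).
print(spatial_capacity(721e3, 10))   # about 2.3 kbit/s per square metre
print(spatial_capacity(54e6, 100))   # about 1.7 kbit/s per square metre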
Computer requirements
A typical Bluetooth USB dongle (BCM2045A), shown here next to a metric ruler
An internal notebook Bluetooth card (14×36×4 mm)
A personal computer must have a Bluetooth adapter in order to communicate with other Bluetooth devices (such as mobile phones, mice, and keyboards). While some portable computers, and fewer desktop computers, already contain an internal Bluetooth adapter, most PCs require an external USB Bluetooth dongle. Most Macs come with built-in Bluetooth adapters.
Unlike its predecessor, IrDA, in which each device requires a separate dongle, multiple
Bluetooth devices can communicate with a computer over a single dongle.
Operating system support
Linux provides two Bluetooth stacks, BlueZ and Affix. The BlueZ stack, included with most Linux kernels, was originally developed by Qualcomm; the Affix stack was developed by Nokia. BlueZ supports all core Bluetooth protocols and layers.
Only Microsoft Windows XP Service Pack 2 and later versions of Windows have native support for Bluetooth. Previous versions required users to install their Bluetooth adapter's own drivers, which were not directly supported by Microsoft. Microsoft's own Bluetooth dongles (packaged with their Bluetooth computer devices) have no external drivers and thus require at least Windows XP Service Pack 2.
Mac OS X has supported Bluetooth since version 10.2 released in 2002.
Specifications and features
The Bluetooth specification was developed in 1994 by Sven Mattisson and Jaap Haartsen,
who were working for Ericsson Mobile Platforms in Lund, Sweden. The specifications were
formalized by the Bluetooth Special Interest Group (SIG). The SIG was formally announced
on May 20, 1998. Today the SIG has over 7,000 member companies worldwide. It was established by Ericsson, Sony Ericsson, IBM, Intel, Toshiba, and Nokia, and later joined by many other companies. Bluetooth is also known as IEEE 802.15.1.
Bluetooth 1.0 and 1.0B
Versions 1.0 and 1.0B had many problems, and manufacturers had difficulties making their
products interoperable. Versions 1.0 and 1.0B also had mandatory Bluetooth hardware
device address (BD_ADDR) transmission in the Connecting process, rendering anonymity
impossible at a protocol level, which was a major setback for services planned to be used in
Bluetooth environments, such as Consumerium.
Bluetooth 1.1
Many errors found in the 1.0B specifications were fixed.
Added support for non-encrypted channels.
Received Signal Strength Indicator (RSSI).
Bluetooth 1.2
This version is backward-compatible with 1.1 and the major enhancements include the
following:
Faster Connection and Discovery
Adaptive frequency-hopping spread spectrum (AFH), which improves resistance to radio
frequency interference by avoiding the use of crowded frequencies in the hopping
sequence.
Higher transmission speeds in practice, up to 721 kbit/s, as in 1.1.
Extended Synchronous Connections (eSCO), which improve voice quality of audio
links by allowing retransmissions of corrupted packets.
Host Controller Interface (HCI) support for three-wire UART.
Bluetooth 2.0
This version, specified in November 2004, is backward-compatible with 1.1. The main
enhancement is the introduction of an enhanced data rate (EDR) of 3.0 Mbit/s. This has
the following effects:
Three times faster transmission speed—up to 10 times in certain cases (up to 2.1 Mbit/s).
Lower power consumption through a reduced duty cycle.
Simplification of multi-link scenarios due to more available bandwidth.
Further improved (bit error rate) performance.
Bluetooth 2.1
Bluetooth Core Specification Version 2.1 + EDR, is fully backward-compatible with 1.1, and
will be adopted by the Bluetooth SIG once interoperability testing has completed. This
specification includes the following features:
Extended inquiry response: provides more information during the inquiry procedure
to allow better filtering of devices before connection. This information includes the
name of the device, a list of services the device supports, as well as other information
like the time of day, and pairing information.
Sniff subrating: reduces the power consumption when devices are in the sniff low-power mode, especially on links with asymmetric data flows. Human interface devices (HID) are expected to benefit the most, with mouse and keyboard devices increasing their battery life by a factor of 3 to 10.
Encryption Pause Resume: enables an encryption key to be refreshed, enabling much
stronger encryption for connections that stay up for longer than 24 hours.
Secure Simple Pairing: radically improves the pairing experience for Bluetooth
devices, while increasing the use and strength of security. It is expected that this
feature will significantly increase the use of Bluetooth.
Future of Bluetooth
Broadcast Channel: enables Bluetooth information points. This will drive the
adoption of Bluetooth into cell phones, and enable advertising models based around
users pulling information from the information points, and not based around the
object push model that is used in a limited way today.
Topology Management: enables the automatic configuration of the piconet
topologies especially in scatternet situations that are becoming more common today.
This should all be invisible to the users of the technology, while also making the
technology just work.
Alternate MAC/PHY: enables the use of alternative MACs and PHYs for transporting Bluetooth profile data. The Bluetooth radio will still be used for device discovery, initial connection, and profile configuration; however, when large amounts of data need to be sent, a high-speed alternate MAC/PHY will be used to transport the data. This means that the proven low-power connection models of Bluetooth are used when the system is idle, and the low-power-per-bit radios are used when lots of data needs to be sent.
QoS improvements: enable audio and video data to be transmitted at a higher
quality, especially when best effort traffic is being transmitted in the same piconet.
Bluetooth technology already plays a part in the rising Voice over IP (VOIP) scene, with
Bluetooth headsets being used as wireless extensions to the PC audio system. As VOIP
becomes more popular, and more suitable for general home or office users than wired
phone lines, Bluetooth may be used in cordless handsets, with a base station connected to
the Internet link.
The next version of Bluetooth after v2.1, code-named Seattle and expected to be called Bluetooth 3.0, has many of the same features, but is most notable for plans to adopt ultra-wideband (UWB) radio technology. This will allow Bluetooth use over UWB radio, enabling very fast data transfers of up to 480 Mbit/s, synchronizations, and file pushes, while building on the
very low-power idle modes of Bluetooth. The combination of a radio using little power
when no data is transmitted and a high data rate radio to transmit bulk data could be the
start of software radios. Bluetooth, given its world-wide regulatory approval, low-power
operation, and robust data transmission capabilities, provides an excellent signaling channel
to enable the soft radio concept.
On 28 March 2006, the Bluetooth Special Interest Group announced its selection of the
WiMedia Alliance Multi-Band Orthogonal Frequency Division Multiplexing (MB-OFDM)
version of UWB for integration with current Bluetooth wireless technology.
UWB integration will create a version of Bluetooth wireless technology with a high-speed, high-data-rate option. This new version of Bluetooth technology will meet the high-speed demands of synchronizing and transferring large amounts of data, as well as enabling high-quality video and audio applications for portable devices, multimedia projectors and television sets, and wireless VOIP.
At the same time, Bluetooth technology will continue catering to the needs of very low
power applications such as mice, keyboards, and mono headsets, enabling devices to select
the most appropriate physical radio for the application requirements, thereby offering the
best of both worlds.
The Bluetooth SIG has also announced that it is looking to include ultra-low-power use cases in Bluetooth, enabling a whole new set of applications. These include watches displaying Caller ID information, sports sensors monitoring your heart rate during exercise, and medical devices. The Medical Devices Working Group is also creating a medical devices profile and associated protocols to enable this market.
The Draft High Speed Bluetooth Specification is available at the Bluetooth website.
Technical information
Communication and connection
A master Bluetooth device can communicate with up to seven devices. This network group
of up to eight devices is called a piconet.
A piconet is an ad-hoc computer network, using Bluetooth technology protocols to allow
one master device to interconnect with up to seven active devices. Up to 255 further devices
can be inactive, or parked, which the master device can bring into active status at any time.
At any given time, data can be transferred between the master and one other device.
However, the master switches rapidly from one device to another in a round-robin fashion.
(Simultaneous transmission from the master to multiple other devices is possible, but not
used much.) Either device can switch roles and become the master at any time.
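As a rough illustration of this polling discipline, here is a minimal Python sketch. The send_poll and handle hooks are hypothetical stand-ins; the real Bluetooth baseband scheduler is considerably more involved:

    from itertools import cycle, islice

    def poll_piconet(master, slaves, slots):
        """Toy round-robin poll: the master addresses one active slave per slot.

        'master' is any object exposing the hypothetical send_poll/handle hooks.
        """
        assert 1 <= len(slaves) <= 7, "a piconet allows at most 7 active slaves"
        for slave in islice(cycle(slaves), slots):
            data = master.send_poll(slave)   # hypothetical radio hook
            if data is not None:
                master.handle(slave, data)   # hypothetical upper-layer handler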
The Bluetooth specification allows connecting two or more piconets together to form a
scatternet, with some devices acting as a bridge by simultaneously playing the master role
in one piconet and the slave role in another. These devices are planned for 2007.
Setting up connections
Any Bluetooth device will transmit the following sets of information on demand:
Device name.
Device class.
List of services.
Technical information, for example, device features, manufacturer, Bluetooth
specification, clock offset.
Any device may perform an inquiry to find other devices to which to connect, and any
device can be configured to respond to such inquiries. However, if the device trying to
connect knows the address of the device, it always responds to direct connection requests
and transmits the information shown in the list above if requested. Use of device services
may require pairing or acceptance by its owner, but the connection itself can be started by
any device and held until it goes out of range. Some devices can be connected to only one
device at a time, and connecting to them prevents them from connecting to other devices
and appearing in inquiries until they disconnect from the other device.
Every device has a unique 48-bit address. However, these addresses are generally not shown
in inquiries. Instead, friendly Bluetooth names are used, which can be set by the user. This
name appears when another user scans for devices and in lists of paired devices.
Most phones have the Bluetooth name set to the manufacturer and model of the phone by
default. Most phones and laptops show only the Bluetooth names, and special programs
are required to get additional information about remote devices. This can be confusing as,
for example, there could be several phones in range named T610 (see Bluejacking).
Pairing
Pairs of devices may establish a trusted relationship by learning (by user input) a shared
secret known as a passkey. A device that wants to communicate only with a trusted device can
cryptographically authenticate the identity of the other device. Trusted devices may also
encrypt the data that they exchange over the air so that no one can listen in. The encryption
can, however, be turned off, and passkeys are stored on the device file system, not on the
Bluetooth chip itself. Since the Bluetooth address is permanent, a pairing is preserved, even
if the Bluetooth name is changed. Pairings can be deleted at any time by either device. Devices
generally require pairing or prompt the owner before they allow a remote device to use any
or most of their services. Some devices, such as Sony Ericsson phones, usually accept
OBEX business cards and notes without any pairing or prompts.
Certain printers and access points allow any device to use their services by default, much like
unsecured Wi-Fi networks. Pairing algorithms are sometimes manufacturer-specific for
transmitters and receivers used in applications such as music and entertainment.
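To make the shared-secret idea concrete, the following Python sketch shows a generic challenge-response authentication. It is only an illustration of the principle: real Bluetooth legacy pairing derives a link key with the SAFER+-based E21/E22 functions, not HMAC-SHA256, and peer_respond is a hypothetical callable that delivers the challenge to the peer and returns its answer.

    import hashlib, hmac, os

    def authenticate(shared_passkey: bytes, peer_respond) -> bool:
        """Generic challenge-response over a shared secret (illustration only)."""
        challenge = os.urandom(16)   # fresh random nonce per attempt
        expected = hmac.new(shared_passkey, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, peer_respond(challenge))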
Air interface
The protocol operates in the license-free ISM band at 2.45 GHz. To avoid interfering with
other protocols that use the 2.45 GHz band, the Bluetooth protocol divides the band into
79 channels (each 1 MHz wide) and changes channels up to 1600 times per second.
Implementations with versions 1.1 and 1.2 reach speeds of 723.1 kbit/s. Version 2.0
implementations feature Bluetooth Enhanced Data Rate (EDR) and reach 2.1 Mbit/s.
Technically, version 2.0 devices have a higher power consumption, but the three times faster
rate reduces the transmission times, effectively reducing power consumption to half that of
1.x devices (assuming equal traffic load).
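A small Python sketch of the channel arithmetic described above: Bluetooth channel k sits at 2402 + k MHz for k = 0..78, and 1600 hops per second gives 625-microsecond slots. The hop sequence here is a placeholder PRNG; the real sequence is derived from the master's clock and device address.

    import random

    BASE_MHZ, CHANNELS, SLOT_US = 2402, 79, 625   # 1/1600 s = 625 us per slot

    def channel_freq_mhz(k: int) -> int:
        """Centre frequency of Bluetooth channel k (k = 0..78, each 1 MHz wide)."""
        assert 0 <= k < CHANNELS
        return BASE_MHZ + k

    def toy_hop_sequence(seed: int, n: int):
        """Placeholder hop sequence; not the clock/address-derived real one."""
        rng = random.Random(seed)
        return [channel_freq_mhz(rng.randrange(CHANNELS)) for _ in range(n)]

    print(toy_hop_sequence(42, 5))   # five slots, 625 microseconds each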
Bluetooth differs from Wi-Fi in that the latter provides higher throughput and covers greater
distances, but requires more expensive hardware and higher power consumption. They use
the same frequency range, but employ different multiplexing schemes. While Bluetooth is a
cable replacement for a variety of applications, Wi-Fi is a cable replacement only for local
area network access. Bluetooth is often thought of as wireless USB, whereas Wi-Fi is
wireless Ethernet, both operating at much lower bandwidth than the cable systems they are
trying to replace. However, this analogy is not entirely accurate since any Bluetooth device
can, in theory, host any other Bluetooth device—something that is not universal to USB
devices.
Many USB Bluetooth adapters are available, some of which also include an IrDA adapter.
Older (pre-2003) Bluetooth adapters, however, have limited services, offering only the
Bluetooth Enumerator and a less-powerful Bluetooth Radio incarnation. Such devices can
link computers with Bluetooth, but they do not offer the range of services that modern
adapters do.
Health concerns
Bluetooth uses the microwave radio frequency spectrum in the 2.4 GHz to 2.4835 GHz
range. Maximum power output from a Bluetooth radio is 1 mW, 2.5 mW, and 100 mW for
Class 3, Class 2, and Class 1 devices respectively, which puts Class 1 at roughly the same
level as cell phones, and the other two classes much lower. Accordingly, Class 2 and Class 3
Bluetooth devices are considered less of a potential hazard than cell phones, while Class 1
exposure may be comparable to that of cell phones, which are themselves of little concern.
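These power levels are often quoted in dBm; a quick Python conversion of the class limits above:

    import math

    def mw_to_dbm(p_mw: float) -> float:
        """dBm = 10 * log10(P / 1 mW)."""
        return 10 * math.log10(p_mw)

    for cls, p_mw in [("Class 1", 100.0), ("Class 2", 2.5), ("Class 3", 1.0)]:
        print(f"{cls}: {p_mw} mW = {mw_to_dbm(p_mw):.0f} dBm")
    # Class 1: 100 mW = 20 dBm; Class 2: 2.5 mW = 4 dBm; Class 3: 1 mW = 0 dBm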
Origin of the name and the logo
Bluetooth was named after a late tenth-century king, Harald Bluetooth, King of Denmark
and Norway. He is known for his unification of previously warring tribes from Denmark
(including Scania, in present-day Sweden, where the Bluetooth technology was invented)
and Norway. Bluetooth likewise was intended to unify different technologies, such as
computers and mobile phones.
The name may have been inspired less by the historical Harald than the loose interpretation
of him in The Long Ships by Frans Gunnar Bengtsson, a Swedish Viking-inspired novel.
The Bluetooth logo merges the Nordic runes ᚼ (haglaz) and ᛒ (berkanan), analogous to the
modern Latin H and B, forming a bind rune.
Bluetooth Consortium
In 1998, Ericsson, IBM, Intel, Nokia, and Toshiba formed a consortium and adopted the
code name Bluetooth for their proposed open specification. In December 1999, 3Com,
Lucent Technologies, Microsoft, and Motorola joined the initial founders as the promoter
group. Since that time, Lucent Technologies has transferred its membership to its spinoff
Agere Systems, and 3Com has left the promoter group.
Infrared Data Association (IrDA)
The Infrared Data Association (IrDA) defines physical specifications and communications
protocol standards for the short-range exchange of data over infrared light, for uses such as
personal area networks (PANs).
IrDA is a very short-range example of free-space optical communication.
IrDA interfaces are used in palmtop computers and mobile phones.
IrDA specifications include IrPHY, IrLAP, IrLMP, IrCOMM, Tiny TP, IrOBEX,
IrLAN and IrSimple. IrDA has now produced another standard, IrFM, for Infrared
financial messaging (i.e., for making payments) also known as "Point & Pay".
For the devices to communicate via IrDA they must have a direct line of sight.
Specifications
IrPHY
The mandatory IrPHY (Infrared Physical Layer Specification) is the lowest layer of the
IrDA specifications. The most important specifications are:
Range (standard: 1 m; low-power to low-power: 0.2 m; standard to low-power: 0.3 m)
Angle (minimum cone ±15°)
Speed (2.4 kbit/s to 16 Mbit/s)
Modulation (baseband, no carrier)
Infrared window
IrDA transceivers communicate with infrared pulses in a cone that extends a minimum of
15 degrees half angle off center. The IrDA physical specifications require that a minimum
irradiance be maintained so that a signal is visible up to a meter away. Similarly, the
specifications require that a maximum irradiance not be exceeded so that a receiver is not
overwhelmed with brightness when a device comes close. In practice, there are some devices
on the market that do not reach one meter, while other devices may reach up to several
meters. There are also devices that do not tolerate extreme closeness. The typical sweet spot
for IrDA communications is from 5 cm to 60 cm away from a transceiver, in the center of
the cone.

IrDA data communications operate in half-duplex mode because, while transmitting, a
device's receiver is blinded by the light of its own transmitter; full-duplex communication is
therefore not feasible. The two communicating devices simulate full duplex by quickly
turning the link around. The primary device controls the timing of the link, but both sides
are bound to certain hard constraints and are encouraged to turn the link around as fast as
possible.

Transmission rates fall into three broad categories: SIR, MIR, and FIR. Serial Infrared (SIR)
speeds cover those transmission speeds normally supported by an RS-232 port (9600 bit/s,
19.2 kbit/s, 38.4 kbit/s, 57.6 kbit/s, 115.2 kbit/s). Since the lowest common denominator
for all devices is 9600 bit/s, all discovery and negotiation is performed at this baud rate.
MIR (Medium Infrared) is not an official term, but is sometimes used to refer to speeds of
0.576 Mbit/s and 1.152 Mbit/s. Fast Infrared (FIR) is deemed an obsolete term by the IrDA
physical specification, but is nonetheless in common usage to denote transmission at
4 Mbit/s. "FIR" is sometimes used to refer to all speeds above SIR. However, different
encoding approaches are used by MIR and FIR, and different approaches are used to frame
MIR and FIR packets; these unofficial terms have sprung up to differentiate the two
approaches. The future holds faster transmission speeds (currently referred to as Very Fast
Infrared, or VFIR) which will support speeds up to 16 Mbit/s; VFIR transceivers such as
the TFDU8108, operating from 9.6 kbit/s to 16 Mbit/s, are already available. The UFIR
(Ultra Fast Infrared) protocol is also in development. It will support speeds up to 100 Mbit/s.
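As a worked example of the SIR rates above, this Python snippet computes the nominal SIR pulse width, which the IrPHY specification sets at 3/16 of the bit period; it is a sketch of the arithmetic only, not of a full encoder.

    SIR_RATES = (9600, 19200, 38400, 57600, 115200)   # the RS-232-style rates

    def sir_pulse_width_us(bit_rate: int) -> float:
        """Nominal SIR pulse width in microseconds: a logical 0 is sent as a
        pulse lasting 3/16 of the bit period (a logical 1 is no pulse)."""
        return (1e6 / bit_rate) * 3 / 16

    for rate in SIR_RATES:
        print(f"{rate:>6} bit/s -> {sir_pulse_width_us(rate):.2f} us pulse")
    # at 115200 bit/s the pulse is about 1.63 us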
IrLAP
The mandatory IrLAP (Infrared Link Access Protocol) is the second layer of the IrDA
specifications. It lies on top of the IrPHY layer and below the IrLMP layer. It represents the
Data Link Layer of the OSI model. The most important specifications are:
Access control
Discovery of potential communication partners
Establishing of a reliable bidirectional connection
Negotiation of the Primary/Secondary device roles
On the IrLAP layer the communicating devices are divided into a Primary Device and one
or more Secondary Devices. The Primary Device controls the Secondary Devices; a
Secondary Device is allowed to send only when the Primary Device requests it to.
IrLMP
The mandatory IrLMP (Infrared Link Management Protocol) is the third layer of the
IrDA specifications. It can be broken down into two parts. First, the LM-MUX (Link
Management Multiplexer), which lies on top of the IrLAP layer. Its most important
functions are:
Provides multiple logical channels
Allows change of Primary/Secondary devices
Second, the LM-IAS (Link Management Information Access Service), which provides a
directory where service providers can register their services so that other devices can
discover them by querying the LM-IAS.
Tiny TP
The optional Tiny TP (Tiny Transport Protocol) lies on top of the IrLMP layer. It
provides:
Transportation of large messages by SAR (Segmentation and Reassembly)
Flow control by giving credits to every logical channel (see the sketch below)
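A toy Python model of the credit mechanism, simplified relative to the actual Tiny TP specification (PDU layouts and the peer's credit-granting policy are omitted):

    class TinyTPChannel:
        """Toy model of Tiny TP credit-based flow control on one logical channel."""

        def __init__(self, initial_credits: int):
            self.credits = initial_credits   # PDUs the peer currently allows

        def send_pdu(self, payload: bytes) -> bytes:
            if self.credits == 0:
                raise RuntimeError("no credit: must wait for the peer's grant")
            self.credits -= 1
            return payload                   # would be handed down to IrLMP

        def grant_received(self, n: int) -> None:
            self.credits += n                # peer extended our send window

    ch = TinyTPChannel(initial_credits=2)
    ch.send_pdu(b"segment 1"); ch.send_pdu(b"segment 2")   # a third send would fail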
IrCOMM
The optional IrCOMM (Infrared Communications Protocol) lets the infrared device act
like either a serial or parallel port. It lies on top of the IrLMP layer.
IrOBEX
The optional IrOBEX (Infrared Object Exchange) provides the exchange of arbitrary
data objects (e.g. vCard, vCalendar or even applications) between infrared devices. It lies on
top of the Tiny TP protocol, so Tiny TP is mandatory for IrOBEX to work.
IrLAN
The optional IrLAN (Infrared Local Area Network) provides the possibility to connect an
infrared device to a local area network. There are three possible methods:
Access Point
Peer to Peer
Hosted
As IrLAN lies on top of the Tiny TP protocol, the Tiny TP protocol must be implemented
for IrLAN to work.
IrSimple
IrSimple achieves at least 4 to 10 times faster data transmission speeds by improving the
efficiency of the infrared IrDA protocol. IrSimple protocol maintains backward
compatibility with the existing IrDA protocols.
IrSimpleShot
One of the primary targets of IrSimpleShot (IrSS) is to allow the millions of IrDA-enabled
camera phones to wirelessly transfer pictures to printers and printer kiosks.
IrSimpleShot communication combines the familiarity of a TV remote control with
high-speed (up to 16 Mbit/s) IrDA connectivity.
Chapter No: 7
Introduction to Wireless Local Loop (WLL)
Wireless local loop (WLL),
also called Broadband Wireless Access (BWA), radio in the loop (RITL), fixed-radio access
(FRA), fixed-wireless access (FWA) or Fixed Wireless Terminal (FWT), is the use of a
wireless communications link as
the "last mile / first mile" connection for delivering plain old telephone service (POTS) and
broadband Internet to telecommunications customers. Various types of WLL systems and
technologies exist.
Definition of Fixed Wireless Service
Fixed Wireless Terminals or FWT units differ from conventional mobile terminal units
operating within cellular networks (such as GSM) in that a fixed wireless terminal or
deskphone is limited to an almost permanent location, with almost no roaming or
find-me-anywhere facilities.
WLL and FWTs are generic terms for radio based telecommunications technologies and the
respective devices which can be implemented using a number of different wireless and radio
technologies.
Wireless Local Loop service is segmented into three broad market and deployment categories:
Licensed Point to Point Microwave service
Licensed microwave service has been used since the 1960s to transmit very large amounts of
data. The AT&T coast-to-coast backbone was largely carried over a chain of microwave
towers. These systems largely used the 1500-2500 MHz and 5000-6200 MHz bands; the
5 GHz band was even known as the "common carrier" band. This service was typically
prohibitively expensive for local loop use and was instead used for backbone networks. In
the 1980s and 1990s it flourished with the growth of cell towers. This growth spurred
research in the area by companies such as DMC, and gradually the equipment cost came
down; the technology is now used as an alternative to T-1, T-3 and fiber connectivity.
Licensed Point to Multi Point Microwave service
Multipoint microwave licenses are hundreds of times more expensive than point-to-point
licenses. A single point-to-point system could be installed and licensed for 50,000 to
200,000 USD, while a multipoint license would start in the millions of dollars. MMDS and
LMDS were the first true multipoint services for wireless local loop. While Europe and the
rest of the world developed the 3500 MHz band for affordable broadband fixed wireless,
the U.S. provided LMDS and MMDS, and most implementations in the United States were
conducted at 2500 MHz. The largest was Sprint Broadband's deployment of Hybrid
Networks equipment. Sprint was plagued with difficulties operating the network profitably,
and service was often spotty, due to inadequate radio link quality.
License-Free Multi Point Wireless Service
Most of the growth in long-range radio communications since 2002 has been in the
license-free bands. Terago Networks in Canada and NextWeb Networks of Fremont were
two of the early leaders in deploying reliable license-free service. The equipment that they
used employed proprietary protocols.
1995 - 2004: License-Free Equipment
Most of the early vendors of license-free fixed wireless equipment, such as Adaptive
Broadband (Axxcelera), Trango Broadband, Motorola (Orthogon), Proxim Networks,
RedLine and BreezeCom (Alvarion), used proprietary protocols and hardware, creating
pressure on the industry to adopt a standard for unlicensed fixed wireless. These MAC
layers typically used a 15-20 MHz channel with Direct Sequence Spread Spectrum and
BPSK, CCK and QPSK for modulation.
These devices all describe the customer-premises wireless system as the Subscriber Unit
("SU"), and the operator transmitter delivering the last-mile local loop service as the
"Access Point" (AP). 802.11 uses the terms AP and STA (Station).
2002 - 2005: The growth of Wi-Fi technology for use in wireless local loop service
Originally designed for short-range mobile internet and local area network access, 802.11
has emerged as the de facto standard for Wireless Local Loop. There is more 802.11
equipment deployed for long-range data service than any other technology. These systems
have provided varying results, as the operators were often small and poorly trained in radio
communications; additionally, 802.11 was not intended to be used at long ranges and
suffered from a number of problems, such as the hidden node problem. Many companies,
such as KarlNet, began modifying the 802.11 MAC to attempt to deliver higher
performance at long ranges.
2005 - Present: Maturation of the Wireless ISP market
In nearly every metropolitan area worldwide, operators and hobbyists deployed more and
more unlicensed broadband fixed wireless multipoint systems. Providers that had rave
reviews when they started faced the prospect of seeing their networks degrade in
performance as more and more devices were deployed in the license-free UNII and ISM
bands.
The growing interference problem
Interference caused the majority of unlicensed wireless local loop services to have much
higher error rates and interruptions than equivalent wired networks, such as the copper
telephone network and the coaxial cable network. This caused growth to slow, customers to
cancel, and many operators to rethink their business model.
There were several responses to these problems.
2003: Voluntary frequency coordination
NextWeb, Etheric Networks, GateSpeed and a handful of other companies founded the first
voluntary spectrum coordination body, working entirely independently of government
regulators. This organization was founded in March 2003 as BANC, "Bay Area Network
Coordination". By maintaining the frequencies in use in an inter-operator database,
disruptions between coordinating parties were minimized, as was the cost of identifying
new or changing transmission sources, by using the frequency database to determine which
bands were in use. Because the parties in BANC comprised the majority of operators in the
Bay Area, they used peer pressure to imply that operators who did not play nice would be
collectively punished by the group, through interference with the non-cooperative while
striving not to interfere with the cooperative. BANC was then deployed in Los Angeles.
Companies such as Deutsche Telekom joined. It looked like the idea had promise.
2005: Operators flee unlicensed for licensed
However, the better-capitalized operators began reducing their focus on unlicensed
spectrum and instead focused on licensed systems, as the constant fluctuations in signal
quality caused them to have very high maintenance costs. NextWeb, acquired by Covad for
a very small premium over the capital invested in it, is one operator that focused on
licensed service, as did WiLine. This left fewer of the more responsible and significant
operators actually using the BANC system. Without its founders' active involvement, the
system languished.
2002 to present: Operators get into a "noise, power and gain" arms race
Certain companies began to adopt "Wild West" tactics, violating spectrum regulations by
increasing the power and gain of their wireless systems. This was particularly true in parts
of the world that were already effectively lawless, such as Nigeria, and, in the U.S., in areas
where the operators were usually "mom and pop" outfits, often with no real knowledge of
radio technology, predominantly in the Midwest; operators in the coastal metro areas often
had capital and technically experienced staff. These tactics have included shooting and
sabotaging equipment on wireless towers, as well as installing wireless devices whose sole
purpose was to disrupt competitor operations by focusing the antenna directly at the
competitor's base stations.
2005 to present: Operators develop adaptive network technology.
Third, the operators began to apply the principles of self-healing networks. Etheric
Networks followed this path, focusing on improving performance by developing dynamic
interference and fault detection and reconfiguration, as well as optimizing quality-based
routing software (such as MANET protocols) and using multiple paths to deliver service to
customers. As evidence of its success, Etheric Networks is the fastest wireless local loop
operator in the world, per DSL Reports. This approach is generally called "mesh
networking", which relies on ad-hoc networking protocols; however, mesh and ad-hoc
networking protocols have yet to deliver high-speed, low-latency, business-class, end-to-end
reliable local loop service, as the paths can sometimes traverse exponentially more radio
links than a traditional star (AP->SU) topology.
Adaptive network management actively monitors the local loop's quality and behaviour,
using automation to reconfigure the network and its traffic flows to avoid interference and
other failures.
2006 - 2008: The Next Technology - WiMAX (or IEEE 802.16)
Currently more operators are running on the 802.11 MAC at 2 and 5 GHz. 802.16 is unlikely
to outperform 802.11 until at least late 2007. It may become the dominant medium for
wireless local loop. Intel is promoting this standard, while Atheros and Broadcom are still
focused largely on 802.11. Atheros, using its highly successful 802.11 OFDM chipsets, will
likely be able to deliver comparable service levels to Intel's 802.16 TDM OFDM chipsets for
the foreseeable future.
Mobile Technologies
These are available in Code Division Multiple Access (CDMA), Digital Enhanced Cordless
Telecommunications DECT (TDMA/DCA) (see ETSI EN 300 765-1 V1.3.1 (2001-04),
"Digital Enhanced Cordless Telecommunications (DECT); Radio in the Local Loop (RLL)
Access Profile (RAP); Part 1: Basic telephony services"), Global System for Mobile
Communications (GSM) and IS-136 Time Division Multiple Access (TDMA), as well as
analog access technologies such as the Advanced Mobile Phone System (AMPS), for which
there are independent standards defining every aspect of modulation, protocols, error
handling, etc.
Deployment
The Wireless Local Loop market is currently an extremely high-growth market, offering
Internet Service Providers immediate access to customer markets without having either to
lay cable through a metropolitan area (MTA) or to work through the ILECs, reselling the
telephone, cable or satellite networks owned by companies that prefer to sell direct.
This trend revived the prospects for local and regional ISPs, as those willing to deploy fixed
wireless networks were not at the mercy of the large telecommunication monopolies. They
were, however, at the mercy of the unregulated re-use of the unlicensed frequencies over
which they communicate.
Due to the enormous quantity of 802.11 "Wi-Fi" equipment and software, coupled with the
fact that spectrum licenses are not required in the ISM and UNII bands, the industry has
moved well ahead of the regulators and the standards bodies.
Sprint and ClearWire are preparing to roll out massive WiMAX networks in the United
States.
Wireless Local Loop (WLL) Standards
Mobile:
CDMA (USA).
TDMA (USA).
GSM (ITU - Worldwide).
UMTS 3rd Generation (Worldwide).
Personal Handy-phone System (PHS in Japan, PAS/Xiaolingtong in China).
Fixed or local area network:
o DECT, for local loop
o corDECT (variant of DECT)
o LMDS
o 802.11: originally designed for short-range mobile internet and network
access service, it has emerged as the de facto standard for Wireless Local Loop.
o WiMAX (IEEE 802.16), which may become the dominant medium for wireless
local loop. Currently more operators are running on the 802.11 MAC at 2
and 5 GHz; 802.16 is unlikely to outperform 802.11 until at least late 2007.
Intel is promoting this standard, while Atheros and Broadcom are still
focused largely on 802.11.
Wi-Fi Technology
Wi-Fi, popularly thought to be an acronym for "wireless fidelity" (see below for the origin
of the term) but in actuality simply a play on the term "Hi-Fi", was originally a brand
licensed by the Wi-Fi Alliance to describe the embedded technology of wireless local area
networks (WLAN) based on the IEEE 802.11 specifications. Use of the term has now
broadened to generically describe the wireless interface of mobile computing devices, such
as laptops in LANs. Wi-Fi is now increasingly used for more services, including Internet
and VoIP phone access, gaming, and basic connectivity of consumer electronics such as
televisions, DVD players, and digital cameras. More standards are in development that will
allow Wi-Fi to be used by cars on highways in support of an Intelligent Transportation
System to increase safety, gather statistics, and enable mobile commerce (see IEEE
802.11p). Wi-Fi and the Wi-Fi CERTIFIED logo are registered trademarks of the Wi-Fi
Alliance, the trade organization that tests and certifies equipment compliance with the
802.11 standards. There has been growing worry among both health officials and
consumers about possible negative health effects from the radiation of these systems.
The five-layer TCP/IP model
5. Application layer: DHCP • DNS • FTP • Gopher • HTTP • IMAP4 • IRC • NNTP •
XMPP • MIME • POP3 • SIP • SMTP • SNMP • SSH • TELNET • RPC • RTP • RTCP •
TLS/SSL • SDP • SOAP • ...
4. Transport layer: TCP • UDP • DCCP • SCTP • RSVP • GTP • ...
3. Internet layer: IP (IPv4 • IPv6) • IGMP • ICMP • BGP • RIP • OSPF • IS-IS • IPsec •
ARP • RARP • ...
2. Data link layer: 802.11 • ATM • DTM • Ethernet • FDDI • Frame Relay • GPRS •
EVDO • HSPA • HDLC • PPP • L2TP • PPTP • ...
1. Physical layer: Ethernet physical layer • ISDN • Modems • PLC • SONET/SDH •
G.709 • ...
Uses
A person with a Wi-Fi enabled device such as a PC, cell phone or PDA can connect to the
Internet when in proximity of an access point. The region covered by one or several access
points is called a hotspot. Hotspots can range from a single room to many square miles of
overlapping hotspots. Wi-Fi can also be used to create a mesh network. Both architectures
are used in community networks.
Wi-Fi also allows connectivity in peer-to-peer (wireless ad-hoc network) mode, which
enables devices to connect directly with each other. This connectivity mode is useful in
consumer electronics and gaming applications.
When the technology was first commercialized there were many problems because
consumers could not be sure that products from different vendors would work together.
The Wi-Fi Alliance began as a community to solve this issue so as to address the needs of
the end user and allow the technology to mature. The Alliance created the branding Wi-Fi
CERTIFIED to show consumers that products are interoperable with other products
displaying the same branding.
Wi-Fi at home
Home Wi-Fi clients come in many shapes and sizes, from stationary PCs to digital cameras.
The trend today is to incorporate wireless into every electronic device where mobility is
desired.
Wi-Fi devices in home or consumer-type environments connect in the following ways:
Via a broadband Internet connection into a single router which can serve both wired
and wireless clients
Ad-hoc mode for client to client connections
Built into non-computer devices to enable wireless connectivity to other devices or
the Internet
Wi-Fi in Business
Business and industrial Wi-Fi has taken off, with the trends in implementation varying
greatly over the years. Current technology trends in the corporate wireless world are:
Dramatically increasing the number of Wi-Fi Access Points in an environment in
order to provide redundancy, support fast roaming, and increase overall network
capacity by using more channels and/or creating smaller cells
Designing for wireless voice applications (VoWLAN or WVOIP)
Moving toward 'thin' Access Points, with more of the network intelligence housed in
a centralized network appliance; relegating individual Access Points to be simply
'dumb' radios
Outdoor applications utilizing true mesh topologies
A proactive, self-managed network that functions as a security gateway, firewall,
DHCP server, intrusion detection system, and a myriad of other features not
previously considered relevant to a wireless network.
Wi-Fi at Hotspots
The most publicly visible use of Wi-Fi is at hotspots. Trends include:
Free Wi-Fi at venues like Panera Bread, It's a Grind Coffee House, and over 100,000
locations in the USA, which has been growing in popularity. According to a door-to-door
survey in San Jose, CA, the number of venues and users is growing fast.
Paid Wi-Fi at venues like Starbucks, McDonalds, and hotels. This trend is growing
rapidly at venues with a higher rate of customer churn, such as sit-down restaurants.
According to Muni Wireless, metropolitan-wide Wi-Fi (Mu-Fi) already has more than
300 projects in process.
Evolution of Wi-Fi standards
Wi-Fi technology has gone through several generations since its inception in 1997.
802.11. The original version of the standard IEEE 802.11 released in 1997 specifies two raw
data rates of 1 and 2 megabits per second (Mbps) to be transmitted via infrared (IR) signals
or by either frequency hopping or direct-sequence spread spectrum in the Industrial
Scientific Medical frequency band at 2.4 GHz. IR remains a part of the standard but has no
actual implementations.
802.11a. The 802.11a amendment to the original standard was ratified in 1999. The 802.11a
standard uses the same core protocol as the original standard and yields realistic throughput
in the mid-20 Mbit/s range. Since the 2.4 GHz band is heavily used, using the 5 GHz band
gives 802.11a the advantage of less interference. However, this high carrier frequency also
brings disadvantages: it restricts the use of 802.11a to almost line-of-sight, necessitating the
use of more access points.
802.11b. The 802.11b amendment to the original standard was ratified in 1999. 802.11b has a
maximum raw data rate of 11 Mbps and uses the same CSMA/CA media access method
defined in the original standard. The dramatic increase in throughput of 802.11b (compared
to the original standard) along with substantial price reductions led to the rapid acceptance
of 802.11b as the definitive wireless LAN technology.
802.11g. In June 2003, a third standard was ratified: 802.11g. This works in the 2.4 GHz
band (like 802.11b) but operates at a maximum raw data rate of 54 Mbit/s, or about 24.7
Mbit/s net throughput (like 802.11a). Despite its broad acceptance, 802.11g suffers from the
same interference as 802.11b in the already crowded 2.4 GHz range. Devices operating in
this range include microwave ovens, Bluetooth devices, and cordless telephones.
802.11n. 802.11n builds upon previous standards by adding MIMO (multiple-input
multiple-output). MIMO uses multiple transmitter and receiver antennas to allow for
increased data throughput through spatial multiplexing, and increased range by exploiting
spatial diversity through coding. On January 19, 2007, the IEEE 802.11 Working Group
unanimously approved the issuing of Draft 2.0 of the proposed 802.11n standard.
Technical information
Wi-Fi: How it Works
Wi-Fi networks use radio technologies called IEEE 802.11 to provide secure, reliable, fast
wireless connectivity. A typical Wi-Fi setup contains one or more Access Points (APs) and
one or more clients. An AP broadcasts its SSID (Service Set Identifier, "Network name") via
packets called beacons, which are usually broadcast every 100 ms. The beacons are
transmitted at 1 Mbit/s and are of relatively short duration, and therefore do not have a
significant effect on performance. Since 1 Mbit/s is the lowest rate of Wi-Fi, it ensures that
the client that receives the beacon can communicate at at least 1 Mbit/s. Based on the
settings (e.g. the SSID), the client may decide whether to connect to an AP. If two APs of
the same SSID are in range of the client, the client firmware might use signal strength to
decide with which of the two APs to make a connection.
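The signal-strength heuristic just described is easy to sketch in Python. The tuple format and names here are hypothetical placeholders for whatever a real driver reports from its scan, and real client firmware may weigh other criteria as well:

    def choose_access_point(scan_results, target_ssid):
        """Pick the strongest AP advertising the wanted SSID.

        scan_results: (ssid, bssid, rssi_dbm) tuples, e.g. parsed from beacons.
        """
        candidates = [ap for ap in scan_results if ap[0] == target_ssid]
        return max(candidates, key=lambda ap: ap[2]) if candidates else None

    aps = [("Office", "aa:bb:cc:00:00:01", -70),
           ("Office", "aa:bb:cc:00:00:02", -55)]
    print(choose_access_point(aps, "Office"))   # the -55 dBm AP wins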
The Wi-Fi standard leaves connection criteria and roaming totally open to the client. This is
a strength of Wi-Fi, but also means that one wireless adapter may perform substantially
better than another. Since Wi-Fi transmits in the air, it has the same properties as a
non-switched wired Ethernet network, and therefore collisions can occur. Unlike a wired
Ethernet, and like most packet radios, Wi-Fi cannot do collision detection; instead it uses
an acknowledgment packet for every data packet sent. If no acknowledgement is received
within a certain time, a retransmission occurs. In addition, a medium reservation protocol
(Request To Send / Clear To Send, used for Collision Avoidance, or CA) can be used when
excessive collisions are experienced or expected, in an attempt to avoid them.
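The acknowledge-and-retransmit behaviour amounts to a stop-and-wait loop; here is a minimal Python sketch. The tx and ack_seen hooks are hypothetical radio callbacks, and real 802.11 also applies random backoff and, optionally, the RTS/CTS reservation mentioned above:

    import time

    def send_with_ack(tx, ack_seen, frame: bytes, timeout_s=0.01, max_tries=7):
        """Stop-and-wait sketch of per-frame acknowledgement."""
        for _ in range(max_tries):
            tx(frame)
            deadline = time.monotonic() + timeout_s
            while time.monotonic() < deadline:
                if ack_seen():           # ACK arrived inside the window
                    return True
            # no ACK: assume loss or collision and retransmit
        return False                     # give up after max_tries attempts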
A Wi-Fi network can be used to connect computers to each other, to the internet, and to
wired networks (which use IEEE 802.3 or Ethernet). Wi-Fi networks operate in the
unlicensed 2.4 GHz (802.11b/g) and 5 GHz (802.11a/h) radio bands, with an 11 Mbit/s
(802.11b) or 54 Mbit/s (802.11a or g) data rate, or with products that contain both bands
(dual band).
They can provide real-world performance similar to basic 10BaseT wired Ethernet
networks.
Channels
Except for 802.11a/h, which operates at 5 GHz, Wi-Fi devices historically primarily use the
spectrum in 2.4 GHz, which is standardized and unlicensed by international agreement,
although the exact frequency allocations and maximum permitted power vary slightly in
different parts of the world. Channel numbers, however, are standardized by frequency
throughout the world, so authorized frequencies can be identified by channel numbers. The
2.4 GHz band is also used by microwave ovens, cordless phones, baby monitors and
Bluetooth devices.
The maximum number of available channels for Wi-Fi enabled devices is:
13 for Europe
11 for North America. Only channels 1, 6, and 11 are recommended for 802.11b/g
to minimize interference from adjacent channels.
14 for Japan
The mapping from channel number to centre frequency is sketched below.
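A minimal Python sketch of that mapping: 2.4 GHz channels 1-13 are spaced 5 MHz apart starting at 2412 MHz, while channel 14 (Japan, 802.11b only) sits apart at 2484 MHz.

    def wifi_24ghz_centre_mhz(channel: int) -> int:
        """Centre frequency of a 2.4 GHz Wi-Fi channel."""
        if channel == 14:
            return 2484
        if 1 <= channel <= 13:
            return 2407 + 5 * channel
        raise ValueError("2.4 GHz channels run from 1 to 14")

    print([wifi_24ghz_centre_mhz(c) for c in (1, 6, 11)])   # [2412, 2437, 2462]
    # channels 1/6/11 are far enough apart not to overlap, hence the advice above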
Advantages of Wi-Fi
[Figure: Wireless Internet on the beach, Taba, Egypt]
Allows LANs to be deployed without cabling for client devices, typically reducing
the costs of network deployment and expansion. Spaces where cables cannot be run,
such as outdoor areas and historical buildings, can host wireless LANs.
Built into most modern laptops; getting a laptop without built-in Wi-Fi has become
an exception.
Wi-Fi chipset pricing continues to come down, making Wi-Fi a very economical
networking option and driving inclusion of Wi-Fi in an ever-widening array of
devices.
Wi-Fi products are widely available in the market. Different competitive brands of
access points and client network interfaces are inter-operable at a basic level of
service. Products designated as Wi-Fi CERTIFIED by the Wi-Fi Alliance are
backwards inter-operable.
Wi-Fi is a global set of standards. Unlike cellular carriers, the same Wi-Fi client
works in different countries around the world.
Widely available in more than 250,000 public hot spots and tens of millions of
homes and corporate and university campuses worldwide.
As of 2007, WPA is not easily cracked if strong passwords are used and WPA2
encryption has no known weaknesses.
New protocols for Quality of Service (WMM) and power-saving mechanisms (WMM
Power Save) make Wi-Fi even more suitable for latency-sensitive applications (such
as voice and video) and small form-factor devices.
Disadvantages of Wi-Fi
Spectrum assignments and operational limitations are not consistent worldwide;
most of Europe allows for an additional 2 channels beyond those permitted in the
US (1-13 vs 1-11); Japan has one more on top of that (1-14) - and some countries,
like Spain, prohibit use of the lower-numbered channels. Furthermore some
countries, such as Italy, used to require a 'general authorization' for any Wi-Fi used
outside an operator's own premises, or require something akin to an operator
registration.
Equivalent isotropically radiated power (EIRP) in the EU is limited to 20 dBm (0.1
W).
Power consumption is fairly high compared to some other low bandwidth standards
(Zigbee and Bluetooth), making battery life a concern.
The most common wireless encryption standard, Wired Equivalent Privacy or WEP,
has been shown to be easily breakable even when correctly configured. Wi-Fi
Protected Access (WPA and WPA2) which began shipping in 2003 aims to solve this
problem and is now available on most products.
Wi-Fi Access Points typically default to an open (encryption-free) mode. Novice
users benefit from a zero configuration device that works out of the box but without
security enabled providing open wireless access to their LAN. To turn security on
requires the user to configure the device, usually via a software GUI.
Many 2.4 GHz 802.11b and 802.11g Access points default to the same channel on
initial start up, contributing to congestion on certain channels. To change the
channel of operation for an access point requires the user to configure the device.
Wi-Fi networks have limited range. A typical Wi-Fi home router using 802.11b or
802.11g with a stock antenna might have a range of 45 m (150 ft) indoors and 90 m
(300 ft) outdoors. Range also varies with frequency band: Wi-Fi in the 2.4 GHz
frequency block has slightly better range than Wi-Fi in the 5 GHz frequency block
(see the path-loss sketch after this list). Outdoor range with improved antennas can
be several kilometres or more with line-of-sight.
Wi-Fi pollution, meaning an excessive number of access points in an area, especially
on the same or neighboring channels, can prevent access and interfere with the use
of other access points, owing to overlapping channels in the 802.11b/g spectrum as
well as decreased signal-to-noise ratio (SNR) between access points. This can be a
problem in high-density areas such as large apartment complexes or office buildings
with many Wi-Fi access points. Additionally, other devices use the 2.4 GHz band:
microwave ovens, cordless phones, baby monitors, security cameras, and Bluetooth
devices can cause significant additional interference.
It is also an issue when municipalities or other large entities such as universities seek
to provide large-area coverage. Without 802.11e/WMM, everyone using the band is
considered equal under the base standard. This openness is important to the success
and widespread use of 2.4 GHz Wi-Fi, but makes it unsuitable for "must-have"
public service functions or where reliability is required. Users sometimes suffer
network "frustrations", or a total network breakdown while gaming, because a
neighbour microwaves some popcorn.
Interoperability issues between brands, or proprietary deviations from the standard,
can disrupt connections or lower throughput speeds on other users' devices that are
within range. Moreover, Wi-Fi devices do not presently pick channels to avoid
interference.
Wi-Fi networks that are open (unencrypted) can be monitored and used to read and
copy data (including personal information) transmitted over the network, unless
another security method such as a VPN or a secure web page (HTTPS) is used to
protect the data.
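As promised in the range item above, here is a small Python illustration of why 2.4 GHz carries slightly farther than 5 GHz at equal power, using the standard free-space path loss formula (a crude model that ignores walls and antennas):

    import math

    def fspl_db(distance_m: float, freq_mhz: float) -> float:
        """Free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44 dB."""
        return (20 * math.log10(distance_m / 1000)
                + 20 * math.log10(freq_mhz) + 32.44)

    for f_mhz in (2412, 5240):   # one 2.4 GHz channel, one 5 GHz channel
        print(f"{f_mhz} MHz over 90 m: {fspl_db(90, f_mhz):.1f} dB")
    # the 5 GHz link loses roughly 6-7 dB more over the same distance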
Standard Devices
Wireless Access Point (WAP)
A wireless access point connects a group of wireless devices to an adjacent wired LAN. An
access point is similar to an ethernet hub, relaying data between connected wireless devices
in addition to a (usually) single connected wired device, most often an ethernet hub or
switch, allowing wireless devices to communicate with other wired devices.
Wireless Adapter
A wireless adapter allows a device to connect to a wireless network. These adapters connect
to devices using various external or internal interconnects such as PCI, miniPCI, USB,
ExpressCard, Cardbus and PC card. Most newer laptop computers are equipped with
internal adapters. Internal cards are generally more difficult to install.
Wireless Router
A wireless router integrates a WAP, ethernet switch, and internal Router firmware
application that provides IP Routing, NAT, and DNS forwarding through an integrated
WAN interface. A wireless router allows wired and wireless ethernet LAN devices to
connect to a (usually) single WAN device such as cable modem or DSL modem. A wireless
router allows all three devices (mainly the access point and router) to be configured through
one central utility. This utility is most usually an integrated web server which serves web
pages to wired and wireless LAN clients and often optionally to WAN clients. This utility
may also be an application that is run on a desktop computer such as Apple's AirPort.
Wireless Ethernet Bridge
A wireless Ethernet bridge connects a wired network to a wireless network. This is different
from an access point in the sense that an access point connects wireless devices to a wired
network at the data-link layer. Two wireless bridges may be used to connect two wired
networks over a wireless link, useful in situations where a wired connection may be
unavailable, such as between two separate homes.
Range Extender
A wireless range extender or wireless repeater can extend the range of an existing wireless
network. Range extenders can be strategically placed to elongate a signal area or allow for the
signal area to reach around barriers such as those created in L-shaped corridors. Wireless
devices connected through repeaters will suffer from an increased latency for each hop.
Additionally, a wireless device at the end of a chain of wireless repeaters will have a
throughput that is limited by the weakest link within the repeater chain.
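A back-of-the-envelope Python sketch of that weakest-link effect, under the simplifying assumption that each single-radio repeater must receive and then retransmit on the same channel (this is a crude model, not a standard formula):

    def chain_throughput_mbps(link_rates_mbps):
        """Rough end-to-end capacity: the weakest link's rate divided by the
        number of radio hops sharing the channel."""
        return min(link_rates_mbps) / len(link_rates_mbps)

    print(chain_throughput_mbps([54, 54]))      # one repeater, two hops -> 27.0
    print(chain_throughput_mbps([54, 11, 54]))  # a slow middle hop dominates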
Antenna connectors
Most commercial devices (routers, access points, bridges, repeaters) designed for home or
business environments use either RP-SMA or RP-TNC antenna connectors. PCI wireless
adapters also mainly use RP-SMA connectors.
Most PC card and USB wireless adapters have only internal antennas etched on their printed
circuit board, while some have an MMCX connector or MC-Card external connection in
addition to an internal antenna. A few USB cards have an RP-SMA connector.
Most Mini PCI wireless cards utilize Hirose U.FL connectors, but cards found in various
wireless appliances contain all of the connectors listed.
Many high-gain (and homebuilt antennas) utilize the Type N connector more commonly
used by other radio communications methods.
Non-Standard Devices
DIY Range Optimizations
USB Wi-Fi adapters, food-container "cantennas", parabolic reflectors, and many other types
of self-built antennas are increasingly made by do-it-yourselfers. For minimal budgets, as
low as a few dollars, signal strength and range can be improved dramatically.
There is also a type of optimization by polarizing the signal to achieve a planar coverage like
a plate. Many of these high-gain aftermarket modifications are technically illegal under FCC
and other regulatory guidelines.
Long Range Wi-Fi
Recently, long-range Wi-Fi kits have begun to enter the market. Companies like RadioLabs
and BroadbandXpress offer long-range, inexpensive kits that can be set up with limited
knowledge. These kits utilize specialized antennas which increase the range of Wi-Fi
dramatically, in the record-setting case to 137.2 miles (220 km). These kits are commonly
used to bring broadband internet to a place that cannot otherwise get the service.
The longest link ever achieved was by the Swedish space agency. They attained 310 km, but
used 6 watt amplifiers to reach an overhead stratospheric balloon.
Wi-Fi and its support by operating systems
There are two sides to Wi-Fi support under an operating system: driver level support, and
configuration and management support.
Driver support is usually provided by the manufacturer of the hardware or, in the case of
Unix clones such as Linux and FreeBSD, sometimes through open source projects.
Configuration and management support consists of software to enumerate, join, and check
the status of available Wi-Fi networks. This also includes support for various encryption
methods. These systems are often provided by the operating system backed by a standard
driver model. In most cases, drivers emulate an ethernet device and use the configuration
and management utilities built into the operating system. In cases where built in
configuration and management support is non-existent or inadequate, hardware
manufacturers may include their own software to handle the respective tasks.
Microsoft Windows
Microsoft Windows has comprehensive driver-level support for Wi-Fi, the quality of which
depends on the hardware manufacturer. Hardware manufacturers almost always ship
Windows drivers with their products. Windows itself ships with very few Wi-Fi drivers and
depends on the original equipment manufacturers (OEMs) and device manufacturers to
make sure users get drivers. Configuration and management depend on the version of
Windows.
Earlier versions of Windows, such as 98, ME and 2000, do not have built-in
configuration and management support and must depend on software provided by
the manufacturer.
Microsoft Windows XP has built-in configuration and management support. The
original shipping version of Windows XP included rudimentary support which was
dramatically improved in Service Pack 2. Support for WPA2 and some other security
protocols require updates from Microsoft. There are still problems with XP support
of Wi-Fi. (One simple interface problem is that if the user makes a mistake in the
(case-sensitive) passphrase, XP keeps trying to connect but never tells the user that
the passphrase is wrong. A second problem is not allowing the user to see different
BSSIDs for the same ESSID; that is, it provides no way for the user to differentiate
access points with the same name.) To make up for Windows' inconsistent and
sometimes inadequate configuration and management support, many hardware
manufacturers include their own software and require the user to disable Windows'
built-in Wi-Fi support. See the article "Windows XP Bedevils Wi-Fi Users" in Wired
News.
Microsoft Windows Vista has improved Wi-Fi support over Windows XP. The
original betas automatically connected to unsecured networks without the user's
approval. The release candidates (RC1 and RC2) do not display this behavior,
requiring user permission to connect to an unsecured network, as long as the user
account is in the default configuration with regard to User Account Control.
Apple Mac OS
Apple was an early adopter of Wi-Fi, introducing its AirPort product line, based on the
802.11b standard, in July 1999. Apple then introduced AirPort Extreme as an
implementation of 802.11g. All Macs starting with the original iBook included AirPort slots
for which an AirPort card can be used, connecting to the computer's internal antenna. All
Intel-based Macs either come with built-in Airport Extreme or a slot for an AirPort card. In
late 2006, Apple began shipping Macs with Broadcom Wi-Fi chips that also supported the
Draft 802.11n standard which can be unlocked through buying a $2 driver released by Apple
at the January 2007 Macworld Expo. The driver is also included for free with Apple's
802.11n AirPort Extreme.
Apple makes the Mac OS operating system, the computer hardware, the accompanying
drivers, AirPort WiFi base stations, and configuration and management software, simplifying
Wi-Fi integration. The built-in configuration and management is integrated throughout many
of the operating system's applications and utilities. Mac OS X has Wi-Fi support, including
WPA2, and ships with drivers for Apple's Broadcom-based AirPort cards. Many third-party
manufacturers make compatible hardware along with the appropriate drivers which work
with Mac OS X's built-in configuration and management software. Other manufacturers
distribute their own software.
Apple's older Mac OS 9 does not have built in support for Wi-Fi configuration and
management nor does it ship with Wi-Fi drivers, but Apple provides free drivers and
configuration and management software for their AirPort cards for OS 9, as do a few other
manufacturers. Versions of Mac OS before OS 9 predate Wi-Fi and do not have any Wi-Fi
support, although some third-party hardware manufacturers have made drivers and
connection software that allows earlier OSes to use Wi-Fi.
Open source Unix-like systems
Linux, FreeBSD and similar Unix-like clones have much coarser support for Wi-Fi. Due to
the open source nature of these operating systems, many different standards have been
developed for configuring and managing Wi-Fi devices. The open source nature also fosters
open source drivers which have enabled many third party and proprietary devices to work
under these operating systems. See Comparison of Open Source Wireless Drivers for more
information on those drivers.
Linux has patchy Wi-Fi support. Native drivers for many Wi-Fi chipsets are available
either commercially or at no cost, although some manufacturers don't produce a
Linux driver, only a Windows one. Consequently, many popular chipsets either don't
have a native Linux driver at all, or only have a half-finished one. For these, the
freely available NdisWrapper and its commercial competitor DriverLoader allow
Windows NDIS drivers (x86 and 64-bit variants) to be used on x86-based Linux
systems, but not on other architectures. As well as the lack of native drivers, some
Linux distributions do not offer a convenient user interface and configuring Wi-Fi
on them can be a clumsy and complicated operation compared to configuring wired
Ethernet drivers. This is changing with NetworkManager, a utility that allows users
to automatically switch between networks without using the command line.
FreeBSD has Wi-Fi support similar to Linux. Support under FreeBSD is best in the
6.x versions, which introduced full support for WPA and WPA2, although in some
cases this is driver dependent. FreeBSD comes with drivers for many wireless cards
and chipsets, including those made by Atheros, Ralink, Cisco, D-link, Netgear, and
many Centrino chipsets, and provides support for others through the ports
collection. FreeBSD also has "Project Evil", which provides the ability to use
Windows x86 NDIS drivers on x86-based FreeBSD systems as NdisWrapper does
on Linux, and Windows amd64 NDIS drivers on amd64-based systems.
NetBSD, OpenBSD, and DragonFly BSD have Wi-Fi support similar to FreeBSD.
Code for some of the drivers, as well as the kernel framework to support them, is
mostly shared among the 4 BSDs.
Haiku has no Wi-Fi support at all as of April 2007.
Embedded systems
Wi-Fi availability in the home is on the increase. This extension of the Internet into the
home space will increasingly be used for remote monitoring. Examples of remote
monitoring include security systems and tele-medicine. In all these kinds of implementation,
if the Wi-Fi provision is made using a system running one of the operating systems
mentioned above, then it becomes unfeasible due to weight, power consumption and cost
issues.
Increasingly in the last few years (particularly as of early 2007), embedded Wi-Fi modules
have become available which incorporate a real-time operating system and provide a simple
means of wirelessly enabling any device that communicates via a serial port.
This allows simple monitoring devices, for example a portable ECG monitor hooked up to a
patient in the home, to be created. The Wi-Fi enabled device effectively becomes part of the
internet cloud and can communicate with any other node on the internet. The data collected
can hop via the home's Wi-Fi access point to anywhere on the internet.
These Wi-Fi modules are designed so that minimal Wi-Fi knowledge is required by designers
to wireless enable their product.
Social concerns
Unintended and intended use by outsiders
Measures to deter unauthorized users include suppressing the AP's service set identifier
(SSID) broadcast, allowing only computers with known MAC addresses to join the network,
and various encryption standards. Access points and computers using no encryption are
vulnerable to eavesdropping by an attacker armed with packet sniffer software. If the
eavesdropper has the ability to change his MAC address then he can potentially join the
network by spoofing an authorised address.
WEP encryption can protect against casual snooping but may also produce a misguided
sense of security since freely available tools such as AirSnort can quickly recover WEP
encryption keys. Once it has seen 5-10 million encrypted packets, AirSnort will determine
the encryption password in under a second. The newer Wi-Fi Protected Access (WPA) and
IEEE 802.11i (WPA2) encryption standards do not have the serious weaknesses of WEP
encryption, but require strong passphrases for full security.
Recreational exploration of other people's access points has become known as wardriving,
and the leaving of graffiti describing available services as warchalking. These activities may
be illegal in certain jurisdictions, but existing legislation and case-law is often unclear.
However, it is also common for people to unintentionally use others' Wi-Fi networks
without explicit authorization. Operating systems such as Windows XP SP2 and Mac OS X
automatically connect to an available wireless network, depending on the network
configuration. A user who happens to start up a laptop in the vicinity of an access point may
find the computer has joined the network without any visible indication. Moreover, a user
intending to join one network may instead end up on another one if the latter's signal is
stronger. In combination with automatic discovery of other network resources (see DHCP
and Zeroconf) this could possibly lead wireless users to send sensitive data to the wrong
destination, as described by Chris Meadows in the February 2004 RISKS Digest.
In Singapore, using another person's Wi-Fi network is illegal under the Computer Misuse
Act. A 17-year-old was arrested simply for tapping into his neighbor's wireless Internet
connection, and faces up to 3 years' imprisonment and a fine.
Wi-Fi vs. amateur radio
In the US, Canada and Australia, a portion of the 2.4 GHz Wi-Fi radio spectrum is also
allocated to amateur radio users. In the US, FCC Part 15 rules govern non-licensed operators
(i.e. most Wi-Fi equipment users). Under Part 15 rules, non-licensed users must "accept" (i.e.
endure) interference from licensed users and not cause harmful interference to licensed
users. Amateur radio operators are licensed users, and retain what the FCC terms "primary
status" on the band, under a distinct set of rules (Part 97). Under Part 97, licensed amateur
operators may construct their own equipment, use very high-gain antennas, and boost
output power to 100 watts on frequencies covered by Wi-Fi channels 2-6. However, Part 97
rules mandate using only the minimum power necessary for communications, forbid
obscuring the data, and require station identification every 10 minutes. Therefore, output
power control is required to meet regulations, and the transmission of any encrypted data
(for example https) is questionable.
In practice, microwave power amplifiers are expensive. On the other hand, the short
wavelength at 2.4 GHz allows for simple construction of very high gain directional antennas.
Although Part 15 rules forbid any modification of commercially constructed systems,
amateur radio operators may modify commercial systems for optimized construction of long
links, for example. Using only 200 mW link radios and high gain directional antennas, a very
narrow beam may be used to construct reliable links with minimal radio frequency
interference to other users.
Health risks
There has been some debate about the effects of Wi-Fi transmissions on human health.
Although the radiated energy is very low, the use of laptops and PDAs may bring the
sources very close to parts of the body for prolonged periods of time. In the United
Kingdom, two media reports, the latest an episode of the current affairs television
programme Panorama in May 2007, have led to an unprecedented consumer reaction, with
large numbers of families as well as institutions ordering the removal of their Wi-Fi
systems. One authority on which the reports lean is Sir William Stewart, chairman of the
Health Protection Agency and a former Chief Scientific Adviser to the British Government.
Numerous reports from individuals of deleterious effects appearing to originate from the
low-energy radiation of Wi-Fi systems have also been made; these symptoms have included
headaches and lethargy. However, the consensus amongst scientists is that there is no
evidence of harm, although some call for more research into the effects on human health.
History
Wi-Fi uses both single-carrier direct-sequence spread spectrum radio technology (part of the
larger family of spread spectrum systems) and multi-carrier OFDM (Orthogonal Frequency
Division Multiplexing) radio technology.
Unlicensed spread spectrum was first made available by the Federal Communications
Commission in 1985, and these FCC regulations were later copied with some changes in
many other countries, enabling use of this technology in all major countries. These
regulations then enabled the development of Wi-Fi, its onetime competitor HomeRF, and
Bluetooth. The FCC action was proposed by Michael Marcus of the FCC staff in 1980, and the
subsequent controversial regulatory action took five more years. It was part of a broader
proposal to allow civil use of spread spectrum technology and was opposed at the time by
mainstream equipment manufacturers and many radio system operators.
The precursor to Wi-Fi was invented in 1991 by NCR Corporation/AT&T (later Lucent &
Agere Systems) in Nieuwegein, the Netherlands. It was initially intended for cashier systems;
the first wireless products were brought on the market under the name WaveLAN with
speeds of 1 Mbit/s to 2 Mbit/s. Vic Hayes, who held the chair of IEEE 802.11 for 10 years
and has been named the 'father of Wi-Fi,' was involved in designing standards such as IEEE
802.11b, 802.11a and 802.11g.
Origin and meaning of the term 'Wi-Fi'
Despite the similarity between the terms 'Wi-Fi' and 'Hi-Fi', statements reportedly made by
Phil Belanger of the Wi-Fi Alliance contradict the popular conclusion that 'Wi-Fi' stands for
'Wireless Fidelity.' According to Mr. Belanger, the Interbrand Corporation developed the
brand 'Wi-Fi' for the Wi-Fi Alliance to use to describe WLAN products that are based on
the IEEE 802.11 standards. In Mr. Belanger's words, "Wi-Fi and the yin yang style logo were
invented by Interbrand. We [the founding members of the Wireless Ethernet Compatibility
Alliance, now called the Wi-Fi Alliance] hired Interbrand to come up with the name and logo
that we could use for our interoperability seal and marketing efforts. We needed something
that was a little catchier than 'IEEE 802.11b Direct Sequence'."
One possibility for the origin of the actual term is a simplified spelling of "Wi-Phy" or
"Wireless Physical Network Layer".
The Wi-Fi Alliance themselves invoked the term 'Wireless Fidelity' with the marketing of a
tag line, "The Standard for Wireless Fidelity," but later removed the tag from their
marketing. The Wi-Fi Alliance now seems to discourage the propagation of the notion that
'Wi-Fi' stands for 'Wireless Fidelity', but it has been referred to as such by the Wi-Fi Alliance
in White Papers currently held in their knowledge base:
"A Short History of WLANs... The association created the Wi-Fi (Wireless Fidelity) logo to
indicate that a product had been certified for interoperability."
Wireless networks are very common, both for organizations and individuals.
Many laptop computers have wireless cards pre-installed for the buyer. The ability to enter a
network while mobile has great benefits. However, wireless networking has many security
issues. Crackers have found wireless networks relatively easy to break into, and even use
wireless technology to crack into wired networks.
Security risks
The risks to users of wireless technology have increased exponentially as the service has
become more popular. There were relatively few dangers when wireless technology was first
introduced. Crackers had not yet had time to latch on to the new technology and wireless
was not commonly found in the work place. Currently, however, there are a great number of
security risks associated with wireless technology. Security threats are growing in the wireless
arena. Crackers have learned that there is much vulnerability in the current wireless
protocols, encryption methods, and in the carelessness and ignorance that exists at the user
and corporate IT level. Cracking methods have become much more sophisticated and
innovative with wireless. Cracking has become much easier and more accessible with easyto-use Windows-based and Linux-based tools being made available on the web at no charge.
Wireless being used to crack into non-wireless networks
Some organizations that have no wireless access points installed do not feel that they need to
address wireless security concerns. This is a dangerous misconception. In-Stat MDR and
META Group have estimated that 95% of all corporate laptop computers that were planned
to be purchased in 2005 were equipped with wireless. Issues can arise in a supposedly
non-wireless organization when a wireless laptop is plugged into the corporate network. A
cracker could sit out in the parking lot and break in through the wireless card on a laptop
and gain access to the wired network. If no security measures are implemented at these
access points, it is no different from providing a patch cable out the back door for crackers
to plug into whenever they wish.
Types of unauthorized access to company networks
Accidental association
Unauthorized access to company wireless and wired networks can come from a number of
different methods and intents. One of these methods is referred to as 'accidental
association'. This is when a user turns on a computer and it latches on to a wireless
access point from a neighboring company's overlapping network. The user may not even
know that this has occurred. However, this is a security breach in that proprietary company
information is exposed and now there could exist a link from one company to the other.
This is especially true if the laptop is also hooked to a wired network.
Malicious association
A 'malicious association' occurs when a cracker actively makes wireless devices connect to a
company network through his or her cracking laptop instead of through a company access
point (AP). These laptops are known as 'soft APs' and are created when a cracker runs
software that makes his or her wireless network card look like a legitimate access point.
Once the cracker has gained access, he or she can steal passwords, launch attacks on the
wired network, or plant trojans. Since wireless networks operate at Layer 2, Layer 3
protections such as network authentication and virtual private networks (VPNs) offer no
protection. Wireless 802.1x authentications do help with protection but are still vulnerable to
cracking. The idea behind this type of attack may not be to break into a VPN or other
security measures. Most likely the cracker is just trying to take over the client at the Layer-2
level.
Ad-hoc networks
Ad-hoc networks can pose a security threat. Ad-hoc networks are defined as peer to peer
networks between wireless computers that do not have an access point in between them.
While these types of networks usually have little security, encryption methods can be used to
provide security.
Non-traditional networks
Non-traditional networks such as personal network Bluetooth devices are not safe from
cracking and should be regarded as a security risk. Even bar code scanners, handheld PDAs,
and wireless printers and copiers should be secured. These non-traditional networks can be
easily overlooked by IT personnel who have focused narrowly on laptops and APs.
Identity theft (MAC spoofing)
Identity theft (or MAC spoofing) occurs when a cracker is able to listen in on network traffic
and identify the MAC address of a computer with network privileges. Most wireless systems
allow some kind of MAC filtering to only allow authorized computers with specific MAC
IDs to gain access to and utilize the network. However, a number of programs exist that have
network 'sniffing' capabilities. Combine these with other software that allows a
computer to pretend it has any MAC address that the cracker desires, and the cracker can
easily get around that hurdle.
Man-in-the-middle attacks
A man-in-the-middle attack is one of the more sophisticated attacks that have been cleverly
thought up by hackers. This attack revolves around the attacker enticing computers to log
into his/her computer which is set up as a soft AP (Access Point). Once this is done, the
hacker connects to a real access point through another wireless card offering a steady flow of
traffic through the transparent hacking computer to the real network. The hacker can then
sniff the traffic for user names, passwords, credit card numbers, etc. One type of
man-in-the-middle attack relies on security faults in challenge and handshake protocols; it
is called a 'de-authentication attack'. This attack forces AP-connected computers to drop
their connections and reconnect with the cracker's soft AP. Man-in-the-middle attacks are
getting easier to pull off thanks to software such as LANjack and AirJack automating
multiple steps of the process. What was once done only by cutting-edge crackers can now be
done by script kiddies, less knowledgeable and skilled hackers sitting around public and
private hotspots.
Hotspots are particularly vulnerable to any attack since there is little to no security on these
networks.
Denial of service
A Denial-of-Service attack (DoS) occurs when an attacker continually bombards a targeted
AP (Access Point) or network with bogus requests, premature successful connection
messages, failure messages, and/or other commands. These cause legitimate users to not be
able to get on the network and may even cause the network to crash. These attacks rely on
the abuse of protocols such as the Extensible Authentication Protocol (EAP).
Network injection
The final attack to be covered is the network injection attack. A cracker can make use of
access points that are exposed to non-filtered network traffic, specifically broadcast
network traffic such as 'Spanning Tree' (802.1D), OSPF, RIP, and HSRP. The cracker injects
bogus networking re-configuration commands that affect routers, switches, and intelligent
hubs. A whole network can be brought down in this manner and require rebooting or even
reprogramming of all intelligent networking devices.
Counteracting risks
Risks from crackers are sure to remain with us for the foreseeable future. The challenge for
IT personnel will be to keep one step ahead of crackers. Members of the IT field need to
keep learning about the types of attacks and what counter measures are available.
Methods of counteracting security risks
There are many technologies available to counteract wireless network intrusion, but currently
no method is absolutely secure. The best strategy may be to combine a number of security
measures.
There are three steps to take towards securing a wireless network:
1. All wireless LAN devices need to be secured
2. All users of the wireless network need to be educated in wireless network
security
3. All wireless networks need to be actively monitored for weaknesses and
breaches
MAC ID filtering
Most wireless access points contain some type of MAC ID filtering that allows the
administrator to permit access only to computers whose wireless interfaces have certain
MAC IDs. This can be helpful; however, it must be remembered that MAC IDs over a network
can be faked. Cracking utilities such as SMAC are widely available, and some computer
hardware also gives the option in the BIOS to select any desired MAC ID for its built-in
network capability.
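The filtering logic itself is trivial, which is part of why it is weak: the access point simply checks each client's reported hardware address against an allow list, and anything on that list can be imitated. A minimal sketch of the idea in Python, with made-up addresses:

```python
# Hypothetical allow list; a real access point keeps this in its configuration.
ALLOWED_MACS = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}

def admit(client_mac: str) -> bool:
    # Compare case-insensitively; the check trusts the client-reported
    # address, which is exactly what MAC spoofing exploits.
    return client_mac.lower() in ALLOWED_MACS

print(admit("00:1A:2B:3C:4D:5E"))  # True  - listed adapter
print(admit("66:77:88:99:aa:bb"))  # False - unknown adapter
```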
Static IP addressing
Disabling at least the IP address assignment function of the network's DHCP server, with
the IP addresses of the various network devices then set by hand, will also make it more
difficult for a casual or unsophisticated intruder to log onto the network. This is especially
effective if the subnet size is also reduced from a standard default setting to what is
absolutely necessary and if permitted but unused IP addresses are blocked by the access
point's firewall. In this case, where no unused IP addresses are available, a new user can
log on without detection using TCP/IP only if he or she stages a successful
man-in-the-middle attack using appropriate software.
WEP encryption
WEP stands for Wired Equivalent Privacy. This was the original encryption standard for
wireless. As its name implies, it was intended to make wireless networks as secure as wired
networks. Unfortunately, this never happened, as flaws were quickly discovered and
exploited. There are several open-source utilities, such as aircrack-ng, weplab, WEPCrack,
and AirSnort, that can be used by crackers to break in by examining packets and looking for
patterns in the encryption. WEP comes in different key sizes. The
common key lengths are currently 128- and 256-bit. The longer the better as it will increase
the difficulty for crackers. However, this type of encryption has seen its day come and go. In
2005 a group from the FBI held a demonstration where they used publicly available tools to
break a WEP encrypted network in three minutes. WEP protection is better than nothing,
though generally not as secure as the more sophisticated WPA-PSK encryption. A big
problem is that if a cracker can receive packets on a network, it is only a matter of time until
the WEP encryption is cracked.
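The weakness stems from how WEP builds its per-packet key: a 24-bit initialization vector (IV), sent in the clear, is prepended to the shared key and fed to the RC4 stream cipher, so any IV reuse means keystream reuse. The sketch below shows only the keystream construction (the key and IV are made-up values, and real WEP also appends a CRC-32 integrity value not shown here):

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    # Key-scheduling algorithm (KSA)
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    # Pseudo-random generation algorithm (PRGA)
    out, i, j = [], 0, 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(s[(s[i] + s[j]) % 256])
    return bytes(out)

iv = bytes([0x01, 0x02, 0x03])        # 24-bit per-packet IV, transmitted in clear
wep_key = b"\xaa\xbb\xcc\xdd\xee"     # hypothetical 40-bit shared key
keystream = rc4_keystream(iv + wep_key, 16)
ciphertext = bytes(p ^ k for p, k in zip(b"hello, wireless!", keystream))
```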
WPA
Wi-Fi Protected Access (WPA) is an early version of the 802.11i security standard that was
developed by the WiFi Alliance to replace WEP. The TKIP encryption algorithm was
developed for WPA to provide improvements to WEP that could be fielded as firmware
upgrades to existing 802.11 devices. The WPA profile also provides optional support for the
AES-CCMP algorithm that is the preferred algorithm in 802.11i and WPA2.
WPA Enterprise provides RADIUS-based authentication using 802.1x. WPA Personal uses a
pre-shared key (PSK) to establish the security using an 8- to 63-character passphrase.
The PSK may also be entered as a 64-character hexadecimal string. Weak PSK passphrases
can be broken using off-line dictionary attacks by capturing the messages in the four-way
exchange when the client reconnects after being deauthenticated. Wireless suites such as
aircrack-ng can crack a weak passphrase in less than a minute. WPA Personal is secure when
used with 'good' passphrases or a full 64-character hexadecimal key.
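In WPA Personal, the passphrase and the SSID are stretched into the 256-bit pairwise master key with PBKDF2 (4096 iterations of HMAC-SHA1). This is why the same passphrase yields different keys on differently named networks, and why a dictionary attack must grind through the whole derivation for every guess. A sketch with made-up credentials:

```python
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> bytes:
    # PBKDF2-HMAC-SHA1, 4096 iterations, 32-byte (256-bit) pairwise master key
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

pmk = wpa_psk("correct horse battery staple", "ExampleNet")  # made-up values
print(pmk.hex())
```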
WPA2
WPA2 is a WiFi Alliance branded version of the final 802.11i standard. The primary
enhancement over WPA is the inclusion of the AES-CCMP algorithm as a mandatory
feature. Both WPA and WPA2 support EAP authentication methods using RADIUS servers
and preshared key (PSK) based security.
802.1X
This is an IEEE standard for port-based access control on wireless and wired LANs. It
provides for authentication and authorization of LAN nodes. The standard uses the
Extensible Authentication Protocol (EAP) together with a central authentication server.
Unfortunately, during 2002 a Maryland professor discovered some shortcomings.
LEAP
This stands for the Lightweight Extensible Authentication Protocol. This protocol is based
on 802.1X and helps minimize the original security flaws by using WEP and a sophisticated
key management system. It also uses MAC address authentication. LEAP is not safe from
crackers: THC-LeapCracker can be used to break Cisco's version of LEAP and to mount a
dictionary attack against computers connected to an access point.
PEAP
This stands for Protected Extensible Authentication Protocol. This protocol allows for a
secure transport of data, passwords, and encryption keys without the need of a certificate
server. This was developed by Cisco, Microsoft, and RSA Security.
TKIP
This stands for Temporal Key Integrity Protocol and the acronym is pronounced as tee-kip.
This is part of the IEEE 802.11i standard. TKIP implements per-packet key mixing with a
re-keying system and also provides a message integrity check. These avoid the problems of
WEP.
RADIUS
This stands for Remote Authentication Dial In User Service. This is an AAA (authentication,
authorization and accounting) protocol used for remote network access. This service
provides an excellent weapon against crackers. RADIUS was originally proprietary but was
later published under ISOC documents RFC 2138 and RFC 2139. The idea is to have an
inside server act as a gatekeeper, verifying identities through a username and password
already determined by the user. A RADIUS server can also be configured to enforce user
policies and restrictions, as well as to record accounting information such as connection
time for billing purposes.
WAPI
This stands for WLAN Authentication and Privacy Infrastructure. This is a wireless security
standard defined by the Chinese government.
Smart cards, USB tokens, and software tokens
This is a very high form of security. When combined with some server software, the
hardware or software card or token will use its internal identity code combined with a
user-entered PIN to create a powerful algorithm that will very frequently generate a new
encryption code. The server will be time-synced to the card or token. This is a very secure
way to conduct wireless transmissions. Companies in this area make USB tokens, software
tokens, and smart cards; some even make hardware versions that double as an employee
picture badge.
Currently the safest security measures are the smart cards / USB tokens. However, these are
expensive. The next safest methods are WPA2 or WPA with a RADIUS server. Any one of the
three will provide a good foundation for security.
The third item on the list is to educate both employees and contractors on security risks
and personal preventive measures. It is also IT's task to keep the company workers'
knowledge base up to date on any new dangers that they should be cautious about. If the
employees are educated, there will be a much lower chance that anyone will accidentally
cause a breach in security by not locking down their laptop or by bringing in a wide-open
home access point to extend their mobile range. Employees need to be made aware that
company laptop security extends outside their site walls as well. This includes places such
as coffee houses where workers can be at their most vulnerable.
The last item on the list deals with 24/7 active defense measures to ensure that the company
network is secure and compliant. This can take the form of regularly looking at access
point, server, and firewall logs to try to detect any unusual activity. For instance, if any
large files went through an access point in the early hours of the morning, a serious
investigation into the incident would be called for. There are a number of software and
hardware devices that can be used to supplement the usual logs and other safety measures.
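The time-synced code generation described above can be illustrated with an HMAC-based one-time code in the spirit of the open TOTP scheme; the secret, interval, and digit count below are illustrative assumptions, not any vendor's actual algorithm:

```python
import hmac, hashlib, struct, time

def one_time_code(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    # Both sides derive the same code from a shared secret and the current
    # time window, so the server can verify without the code ever being sent
    # ahead of time.
    counter = int(time.time()) // interval
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10**digits).zfill(digits)

print(one_time_code(b"example-shared-secret"))  # hypothetical token output
```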
Steps in securing a wireless network
The following are some basic steps that are recommended to be taken to secure a wireless
network; in order of importance:
1. Turn on encryption. WPA2 encryption should be used if possible. WPA
encryption is the next best alternative, and WEP is better than nothing.
2. Change the default password needed to access a wireless device — Default
passwords are set by the manufacturer and are known by crackers. By
changing the password you can prevent crackers from accessing and
changing your network settings.
3. Change the default SSID, or network name — Crackers know the default
names of the different brands of equipment, and use of a default name
suggests that the network has not been secured. Change it to something that
will make it easier for users to find the correct network. You may wish to use
a name that will not be associated with the owner in order to avoid being
specifically targeted.
4. Disable file and print sharing if it is not needed — this can limit a cracker's
ability to steal data or commandeer resources in the event that they get past
the encryption.
5. Access points should be arranged to provide radio coverage only to the
desired area if possible. Any wireless signal that spills outside of the desired
area could provide an opportunity for a cracker to access the network
without entering the premises. Directional antennas should be used, if
possible, at the perimeter directing their broadcasting inward. Some access
points allow the signal strength to be reduced in order to minimise such
signal leakage.
6. Divide the wired and wireless portions of the network into different
segments, with a firewall in between. This can prevent a cracker from
accessing a wired network by breaking into the wireless network.
7. Implement an overlay Wireless intrusion prevention system to monitor the
wireless spectrum 24x7 against active attacks and unauthorized devices such
as Rogue Access Points. These systems can detect and stop the most subtle
or brute force methods of wireless attacks, and provide you with deep
visibility into the use and performance of the WLAN.
Here are some often-recommended security steps that are not usually of any benefit against
experienced crackers (they will however prevent the larger group of inexperienced users
from gaining access to your network easily, should they find your password). These are:
Disabling the SSID broadcast option — Theoretically, hiding the SSID will prevent
unauthorised users from finding the network. In fact, while it will prevent opportunistic
users from finding the network, any serious cracker can simply scan your other network
traffic to find the SSID. It will also make it harder for legitimate users to connect to the
network, since they must know the SSID in advance and type it into their equipment.
Hiding the SSID will not prevent anyone from reading the data that is transmitted, only
encryption will do that.
Enabling MAC address filtering — MAC address filtering will prevent casual users from
connecting to your network by maintaining a list of MAC addresses that are allowed access,
(or not) but a serious cracker will simply scan your network traffic to find a MAC address
that is allowed access, then change their equipment to use that address. Any new equipment
will require another MAC address to be added to the list before it can be connected. Again,
enabling MAC address filtering will not prevent anyone from reading the data that is
transmitted without encryption.
Chapter No: 8
Introduction to Global System
for Mobile Communication
(GSM)
The Global System for Mobile communications (GSM: originally from Groupe Spécial
Mobile) is the most popular standard for mobile phones in the world. GSM service is used
by over 2 billion people across more than 212 countries and territories. The ubiquity of the
GSM standard makes international roaming very common between mobile phone operators,
enabling subscribers to use their phones in many parts of the world. GSM differs
significantly from its predecessors in that both signaling and speech channels are Digital call
quality, which means that it is considered a second generation (2G) mobile phone system.
This fact has also meant that data communication was built into the system from the 3rd
Generation Partnership Project (3GPP).
The GSM logo is used to identify compatible handsets and equipment
From the point of view of the consumers, the key advantage of GSM systems has been
higher digital voice quality and low cost alternatives to making calls such as the Short
Message Service (SMS). The advantage for network operators has been the ability to deploy
equipment from different vendors because the open standard allows easy inter-operability.
Like other cellular standards, GSM allows network operators to offer roaming services,
which means subscribers can use their phones all over the world.
As the GSM standard continued to develop, it retained backward compatibility with the
original GSM phones; for example, packet data capabilities were added in the Release '97
version of the standard, by means of GPRS. Higher speed data transmission has also been
introduced with EDGE in the Release '99 version of the standard.
History of GSM
The growth of cellular telephone systems started in the early 1980s, particularly in Europe.
The lack of a technological standardization prompted the European Conference of Postal
and Telecommunications Administrations (CEPT) to create the Groupe Spécial Mobile
(GSM) in 1982 with the objective of developing a standard for a mobile telephone system
that could be used across Europe.
In 1989, GSM responsibility was transferred to the European Telecommunications
Standards Institute (ETSI), and phase I of the GSM specifications was published in 1990.
The first GSM network was launched in 1991 by Radiolinja in Finland. By the end of 1993,
over a million subscribers were using GSM phone networks operated by 70 carriers
across 48 countries.
Radio Interface
GSM is a cellular network, which means that mobile phones connect to it by searching for
cells in the immediate vicinity. GSM networks operate in four different frequency ranges.
Most GSM networks operate in the 900 MHz or 1800 MHz bands. Some countries in the
Americas (including the United States and Canada) use the 850 MHz and 1900 MHz bands
because the 900 and 1800 MHz frequency bands were already allocated.
The rarer 400 and 450 MHz frequency bands are assigned in some countries, notably
Scandinavia, where these frequencies were previously used for first-generation systems.
In the 900 MHz band the uplink frequency band is 890-915 MHz, and the downlink
frequency band is 935-960 MHz. This 25 MHz bandwidth is subdivided into 124 carrier
frequency channels, each spaced 200 kHz apart. Time division multiplexing is used to allow
eight full-rate or sixteen half-rate speech channels per radio frequency channel. There are
eight radio timeslots (giving eight burst periods) grouped into what is called a TDMA frame.
Half rate channels use alternate frames in the same timeslot. The channel data rate is 270.833
kbit/s, and the frame duration is 4.615 ms.
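These figures can be checked with a few lines of arithmetic. The carrier mapping below is the standard GSM 900 channel plan (uplink carrier n at 890 MHz + 0.2 MHz x n, downlink 45 MHz higher), and the frame duration follows from 8 timeslots of 156.25 bit periods each at 270.833 kbit/s:

```python
# Sanity-check the GSM 900 channel plan quoted above.
bandwidth_hz = 25e6               # 890-915 MHz uplink block
spacing_hz = 200e3                # carrier spacing
print(bandwidth_hz / spacing_hz)  # 125 raster positions -> 124 usable carriers

def gsm900_carrier(n: int):
    # Standard GSM 900 carrier-number mapping, n = 1..124
    uplink_mhz = 890.0 + 0.2 * n
    return uplink_mhz, uplink_mhz + 45.0  # downlink sits 45 MHz above uplink

print(gsm900_carrier(1))    # (890.2, 935.2)
print(gsm900_carrier(124))  # (914.8, 959.8)

# Frame timing: 8 timeslots x 156.25 bit periods at 270.833 kbit/s
print(8 * 156.25 / 270.833e3)  # ~0.004615 s, i.e. the quoted 4.615 ms
```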
The transmission power in the handset is limited to a maximum of 2 watts in GSM850/900
and 1 watt in GSM1800/1900.
GSM has used a variety of voice codecs to squeeze 3.1 kHz audio into between 6 and 13
kbit/s. Originally, two codecs, named after the types of data channel they were allocated,
were used, called "Full Rate" (13 kbit/s) and "Half Rate" (6 kbit/s). These used a system
based upon linear predictive coding (LPC). In addition to being efficient with bitrates, these
codecs also made it easier to identify more important parts of the audio, allowing the air
interface layer to prioritize and better protect these parts of the signal.
GSM was further enhanced in 1997 with the GSM-EFR codec, a 12.2 kbit/s codec that uses
a full rate channel. Finally, with the development of UMTS, EFR was refactored into a
variable-rate codec called AMR-Narrowband, which is high quality and robust against
interference when used on full rate channels, and less robust but still relatively high quality
when used in good radio conditions on half-rate channels.
There are four different cell sizes in a GSM network - macro, micro, pico and umbrella cells.
The coverage area of each cell varies according to the implementation environment. Macro
cells can be regarded as cells where the base station antenna is installed on a mast or a
building above average roof top level. Micro cells are cells whose antenna height is under
average roof top level; they are typically used in urban areas. Picocells are small cells whose
diameter is a few dozen meters; they are mainly used indoors. Umbrella cells are used to
cover shadowed regions of smaller cells and fill in gaps in coverage between those cells.
Cell horizontal radius varies depending on antenna height, antenna gain and propagation
conditions from a couple of hundred meters to several tens of kilometers. The longest
distance the GSM specification supports in practical use is 35 km or 22 miles. There are also
several implementations of the concept of an extended cell, where the cell radius could be
double or even more, depending on the antenna system, the type of terrain and the timing
advance.
Indoor coverage is also supported by GSM and may be achieved by using an indoor picocell
base station, or an indoor repeater with distributed indoor antennas fed through power
splitters, to deliver the radio signals from an antenna outdoors to the separate indoor
distributed antenna system. These are typically deployed when a lot of call capacity is needed
indoors, for example in shopping centers or airports. However, this is not a prerequisite,
since indoor coverage is also provided by in-building penetration of the radio signals from
nearby cells.
The modulation used in GSM is Gaussian minimum shift keying (GMSK), a kind of
continuous-phase frequency shift keying. In GMSK, the signal to be modulated onto the
carrier is first smoothed with a Gaussian low-pass filter prior to being fed to a frequency
modulator, which greatly reduces the interference to neighboring channels (adjacent channel
interference).
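A rough baseband model of GMSK makes this description concrete: smooth the NRZ bit stream with a Gaussian filter (GSM uses a bandwidth-time product of 0.3), then integrate the result into the phase of a constant-envelope signal. The sketch below is illustrative only; the sample rate and filter span are arbitrary choices:

```python
import numpy as np

def gmsk_baseband(bits, sps=8, bt=0.3):
    # NRZ bit stream, upsampled to sps samples per bit
    nrz = np.repeat(2 * np.asarray(bits, dtype=float) - 1, sps)
    # Gaussian pulse-shaping filter; GSM specifies BT = 0.3
    t = np.arange(-2 * sps, 2 * sps + 1, dtype=float) / sps
    sigma = np.sqrt(np.log(2)) / (2 * np.pi * bt)
    g = np.exp(-t**2 / (2 * sigma**2))
    g /= g.sum()
    smoothed = np.convolve(nrz, g, mode="same")
    # Integrate the smoothed frequency pulse into phase; the modulation
    # index of 0.5 makes this a continuous-phase, constant-envelope signal
    phase = 0.5 * np.pi * np.cumsum(smoothed) / sps
    return np.exp(1j * phase)

signal = gmsk_baseband([1, 0, 1, 1, 0, 0, 1])
print(np.allclose(np.abs(signal), 1.0))  # constant envelope -> True
```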
A nearby GSM handset is usually the source of the "dit dit dit, dit dit dit, dit dit dit" signal
that can be heard from time to time on home stereo systems, televisions, computers, and
personal music devices. When these audio devices are in the near field of the GSM handset,
the radio signal is strong enough that the solid state amplifiers in the audio chain function as
a detector. The clicking noise itself represents the power bursts that carry the TDMA signal.
These signals have been known to interfere with other electronic devices, such as car stereos
and portable audio players. This is a form of RFI, and could be mitigated or eliminated by
use of additional shielding and/or bypass capacitors in these audio devices, however, the
increased cost of doing so is difficult for a designer to justify.
Network structure
The structure of a GSM network
The network behind the GSM system seen by the customer is large and complicated in order
to provide all of the services which are required. It is divided into a number of sections and
these are each covered in separate articles.
o the Base Station Subsystem (the base stations and their controllers);
o the Network and Switching Subsystem (the part of the network most similar to a
fixed network), sometimes also just called the core network; and
o the GPRS Core Network (the optional part which allows packet-based Internet
connections).
All of the elements in the system combine to produce many GSM services such as
voice calls and SMS.
Subscriber identity module
One of the key features of GSM is the Subscriber Identity Module (SIM), commonly known
as a SIM card. The SIM is a detachable smart card containing the user's subscription
information and phonebook. This allows the user to retain his or her information after
switching handsets. Alternatively, the user can also change operators while retaining the
handset simply by changing the SIM. Some operators will block this by allowing the phone
to use only a single SIM, or only a SIM issued by them; this practice is known as SIM
locking, and is illegal in some countries.
In the United States, Canada, Europe and Australia, many operators lock the mobiles they
sell. This is done because the price of the mobile phone is typically subsidised with revenue
from subscriptions, and operators want to avoid subsidising competitors' mobiles. A
subscriber can usually contact the provider to remove the lock for a fee, utilize private
services to remove the lock, or make use of ample software and websites available on the
Internet to unlock the handset themselves. While most web sites offer the unlocking for a
fee, some do it for free. The locking applies to the handset, identified by its International
Mobile Equipment Identity (IMEI) number, not to the account (which is identified by the
SIM card). It is always possible to switch to another (non-locked) handset if such other
handset is available.
Some providers will unlock the phone for free if the customer has held an account for a
certain period. Third party unlocking services exist that are often quicker and lower cost than
that of the operator. In most countries removing the lock is legal. Cingular and T-Mobile
provide free unlock services to their customers after 3 months of subscription.
In countries like India, Pakistan, Indonesia, Belgium, etc., all phones are sold unlocked.
However, in Belgium, it is unlawful for operators there to offer any form of subsidy on the
phone's price. This was also the case in Finland until April 1, 2006, when selling subsidized
combinations of handsets and accounts became legal, though operators have to unlock the
phone free of charge after a certain period (at most 24 months).
GSM security
GSM was designed with a moderate level of security. The system was designed to
authenticate the subscriber using shared-secret cryptography. Communications between the
subscriber and the base station can be encrypted. The development of UMTS introduces an
optional USIM, that uses a longer authentication key to give greater security, as well as
mutually authenticating the network and the user - whereas GSM only authenticated the user
to the network (and not vice versa). The security model therefore offers confidentiality and
authentication, but limited authorization capabilities, and no non-repudiation.
GSM uses several cryptographic algorithms for security. The A5/1 and A5/2 stream ciphers
are used for ensuring over-the-air voice privacy. A5/1 was developed first and is a stronger
algorithm used within Europe and the United States; A5/2 is weaker and used in other
countries. A large security advantage of GSM over earlier systems is that the Key, the crypto
variable stored on the SIM card that is the key to any GSM ciphering algorithm, is never sent
over the air interface. Serious weaknesses have been found in both algorithms, and it is
possible to break A5/2 in real-time in a ciphertext-only attack. The system supports multiple
algorithms so operators may replace that cipher with a stronger one.
See also
Core technology:
o 2G
o 2.5G
o 3G
o 4G
Architectural elements:
o Base Station Controller (BSC)
o Base Station Subsystem (BSS)
o Home Location Register (HLR)
o Mobile Switching Center (MSC)
o Subscriber Identity Module (SIM)
o Visitors Location Register (VLR)
o Equipment Identity Register (EIR)
Radio:
o GSM frequency ranges
Cellular traffic
Services:
o GSM localization
o GSM services
 GSM codes for supplementary services
o MMS
o SMS
o WAP Wireless Application Protocol
o GPRS
o Cell Broadcast
Standards:
Comparison of mobile phone standards
European Telecommunications Standards Institute (ETSI)
Intelligent network (IN)
Parlay
Common terms:
o International Mobile Equipment Identity (IMEI)
o International Mobile Subscriber Identity (IMSI)
o Mobile Station Integrated Services Digital Network (MSISDN)
o Handoff
Related technologies:
o GSM-R (GSM-Railway)
A cellular network is a radio network made up of a number of radio cells (or
just cells) each served by a fixed transmitter, known as a cell site or base station. These cells
are used to cover different areas in order to provide radio coverage over a wider area than
the area of one cell. Cellular networks are inherently asymmetric with a set of fixed main
transceivers each serving a cell and a set of distributed (generally, but not always, mobile)
transceivers which provide services to the network's users.
Cellular networks offer a number of advantages over alternative solutions:
increased capacity
reduced power usage
better coverage
A good (and simple) example of a cellular system is an old taxi driver's radio system
where the taxi company will have several transmitters based around a city each
operated by an individual operator.
General characteristics
The primary requirement for a network to succeed as a cellular network is for it to have
developed a standardised method for each distributed station to distinguish the signal
emanating from its own transmitter from the signals received from other transmitters.
Presently, there are two standardised solutions to this issue: frequency division multiple
access (FDMA) and code division multiple access (CDMA).
FDMA works by using varying frequencies for each neighbouring cell. By tuning to the
frequency of a chosen cell the distributed stations can avoid the signal from other cells. The
principle of CDMA is more complex, but achieves the same result; the distributed
transceivers can select one cell and listen to it. Other available methods of multiplexing such
as polarization division multiple access (PDMA) and time division multiple access (TDMA)
cannot be used to separate signals from one cell to the next since the effects of both vary
with position and this would make signal separation practically impossible. Time division
multiple access, however, is used in combination with either FDMA or CDMA in a number
of systems to give multiple channels within the coverage area of a single cell.
In the case of the aforementioned taxi company, each radio has a knob. The knob acts as a
channel selector and allows the radio to tune to different frequencies. As the drivers move
around, they change from channel to channel. The drivers know which frequency covers
approximately what area, when they don't get a signal from the transmitter, they also try
other channels until they find one which works. The taxi drivers only speak one at a time, as
invited by the operator (in a sense TDMA).
Broadcast messages and paging
Practically every cellular system has some kind of broadcast mechanism. This can be used
directly for distributing information to multiple mobiles. In mobile telephony systems,
however, the most important use of broadcast information is to set up channels for
one-to-one communication between the mobile transceiver and the base station. This is
called paging.
The details of the process of paging vary somewhat from network to network, but normally
we know a limited number of cells where the phone is located (this group of cells is called a
location area in the GSM system or Routing Area in UMTS). Paging takes place by sending
the broadcast message on all of those cells. Paging messages can be used for information
transfer. This happens in pagers, in CDMA systems for sending SMS messages, and in the
UMTS system where it allows for low downlink latency in packet-based connections.
Our taxi network is a very good example here. The broadcast capability is often used to tell
about road conditions and also to tell about work which is available to anybody. On the
other hand, typically there is a list of taxis waiting for work. When a particular taxi comes up
for work, the operator will call their number over the air. The taxi driver acknowledges that
they are listening, then the operator reads out the address where the taxi driver has to go.
Frequency reuse
Frequency reuse in a cellular network
The increased capacity in a cellular network, compared with a network with a single
transmitter, comes from the fact that the same radio frequency can be reused in a different
area for a completely different transmission. If there is a single plain transmitter, only one
transmission can be used on any given frequency. Unfortunately, there is inevitably some
level of interference from the signal from the other cells which use the same frequency. This
means that, in a standard FDMA system, there must be at least a one cell gap between cells
which reuse the same frequency.
The frequency reuse factor is the rate at which the same frequency can be used in the
network. It is 1/n where n is the number of cells which cannot use a frequency for
transmission. A common value for the frequency reuse factor is 7.
Code division multiple access-based systems use a wider frequency band to achieve the same
rate of transmission as FDMA, but this is compensated for by the ability to use a frequency
reuse factor of 1. In other words, every cell uses the same frequency, and the different
systems are separated by codes rather than frequencies.
Depending on the size of the city, a taxi system may not have any frequency-reuse in its own
city, but certainly in other nearby cities, the same frequency can be used. In a big city, on the
other hand, frequency-reuse could certainly be in use.
Movement from cell to cell and handover
The use of multiple cells means that, if the distributed transceivers are mobile and moving
from place to place, they also have to change from cell to cell. The mechanism for this
differs depending on the type of network and the circumstances of the change. For example,
if there is an ongoing continuous communication and we don't want to interrupt it, then
great care must be taken to avoid interruption. In this case there must be clear coordination
between the base station and the mobile station. Typically such systems use some kind of
multiple access independently in each cell, so an early stage of such a handover (handoff) is
to reserve a new channel for the mobile station on the new base station which will serve it.
The mobile then moves from the channel on its current base station to the new channel and
from that point on communication takes place.
The exact details of the mobile system's move from one base station to the other varies
considerably from system to system. For example, in all GSM handovers and W-CDMA
inter-frequency handovers the mobile station will measure the channel it is meant to start
using before moving over. Once the channel is confirmed okay, the network will command
the mobile station to move to the new channel and at the same time start bi-directional
communication there, meaning there is no break in communication. In CDMA2000 and
W-CDMA same-frequency handovers, both channels will actually be in use at the same time
(this is called a soft handover or soft handoff). In IS-95 inter-frequency handovers and older
analog systems such as NMT it will typically be impossible to measure the target channel
directly whilst communicating. In this case other techniques have to be used such as pilot
beacons in IS-95. This means that there is almost always a brief break in the communication
whilst searching for the new channel followed by the risk of an unexpected return to the old
channel.
If there is no ongoing communication or the communication can be interrupted, it is
possible for the mobile station to spontaneously move from one cell to another and then
notify the network if needed.
In the case of the primitive taxi system that we are studying, handovers won't really be
implemented. The taxi driver just moves from one frequency to another as needed. If a
specific communication gets interrupted due to a loss of a signal then the taxi driver asks the
controller to repeat the message. If one single taxi driver misses a particular broadcast
message (e.g. a request for drivers in a particular area), the others will respond instead. If
nobody responds, the operator keeps repeating the request.
The effect of frequency on cell coverage means that different frequencies serve better for
different uses. Low frequencies, such as 450 MHz NMT, serve very well for countryside
coverage. GSM 900 (900 MHz) is a suitable solution for light urban coverage. GSM 1800
(1.8 GHz) starts to be limited by structural walls. This is a disadvantage when it comes to
coverage, but it is a decided advantage when it comes to capacity. Pico cells, covering e.g.
one floor of a building, become possible, and the same frequency can be used for cells which
are practically neighbours. UMTS, at 2.1 GHz is quite similar in coverage to GSM 1800. At 5
GHz, 802.11a Wireless LANs already have very limited ability to penetrate walls and may be
limited to a single room in some buildings. At the same time, 5 GHz can easily penetrate
windows and goes through thin walls so corporate WLAN systems often give coverage to
areas well beyond that which is intended.
Moving beyond these ranges, network capacity generally increases (more bandwidth is
available) but the coverage becomes limited to line of sight. Infra-red links have been
considered for cellular network usage, but as of 2004 they remain restricted to limited
point-to-point applications.
Cell service area may also vary due to interference from transmitting systems, both within
and around that cell. This is true especially in CDMA based systems. The receiver requires a
certain signal-to-noise ratio. As the receiver moves away from the transmitter, the power
transmitted is reduced. As the interference (noise) rises above the received power from the
transmitter, and the power of the transmitter cannot be increased any more, the signal
becomes corrupted and eventually unusable. In CDMA-based systems, the effect of
interference from other mobile transmitters in the same cell on coverage area is very marked
and has a special name, cell breathing.
Old fashioned taxi radio systems, such as the one we have been studying, generally use low
frequencies and high sited transmitters, probably based where the local radio station has its
mast. This gives a very wide area coverage in a roughly circular area surrounding each mast.
Since only one user can talk at any given time, coverage area doesn't change with number of
users. The reduced signal to noise ratio at the edge of the cell is heard by the user as
crackling and hissing on the radio.
To see real examples of cell coverage look at some of the coverage maps provided by real
operators on their web sites; in certain cases they may mark the site of the transmitter, in
others it can be located by working out the point of strongest coverage.
Cellular telephony
Cell site
The most common example of a cellular network are mobile phone (cell phone) networks. A
mobile phone is a portable telephone which receives or makes calls through a cell site (base
station), or transmitting tower. Radio waves are used to transfer signals to and from the cell
phone. Large geographic areas (representing the coverage range of a service provider) are
split up into smaller cells to deal with line-of-sight signal loss and the large number of active
phones in an area. In cities, each cell site has a range of up to approximately ½ mile, while in
rural areas, the range is approximately 5 miles. Many times in clear open areas, a user may
receive signal from a cell 25 miles away. Each cell overlaps other cell sites. All of the cell sites
are connected to cellular telephone exchanges "switches", which in turn connect to the
public telephone network or another switch of the cellular company.
As the phone user moves from one cell area to another, the switch automatically commands
the handset and a cell site with a stronger signal (reported by the handset) to go to a new
radio channel (frequency). When the handset responds through the new cell site, the
exchange switches the connection to the new cell site.
With CDMA, multiple CDMA handsets share a specific radio channel; the signals are
separated by using a pseudonoise code (PN code) specific to each phone. As the user moves
from one cell to another, the handset sets up radio links with multiple cell sites (or sectors of
the same site) simultaneously. This is known as "soft handoff" because, unlike with
traditional cellular technology, there is no one defined point where the phone switches to the
new cell.
Modern mobile phones use cells because radio frequencies are a limited, shared resource.
Cell-sites and handsets change frequency under computer control and use low power
transmitters so that a limited number of radio frequencies can be reused by many callers with
less interference. CDMA handsets, in particular, must have strict power controls to avoid
interference with each other. An incidental benefit is that the batteries in the handsets need
less power.
Since almost all mobile phones use cellular technology, including GSM, CDMA, and AMPS
(analog), the term "cell phone" is used interchangeably with "mobile phone"; the main
exception is satellite phones, which do not use cellular technology.
Old systems predating the cellular principle may still be in use in places. The most notable
real hold-out is used by many amateur radio operators who maintain phone patches in their
clubs' VHF repeaters.
There are a number of different digital cellular technologies, including: Global System for
Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division
Multiple Access (CDMA), Evolution-Data Optimized (EV-DO), Enhanced Data Rates for
GSM Evolution (EDGE), 3GSM, Digital Enhanced Cordless Telecommunications (DECT),
Digital AMPS (IS-136/TDMA), and Integrated Digital Enhanced Network (iDEN).
Frequency Division Multiple Access or FDMA is an access technology that is used by radio
systems to share the radio spectrum. The terminology 'multiple access' implies the sharing
of the resource amongst users, and the 'frequency division' describes how the sharing is
done: by allocating users different carrier frequencies of the radio spectrum.
This technique relies upon sharing the available radio spectrum amongst the communications
signals that must pass through it. The 'multiple access' terminology indicates how the
radio spectrum resource is intended to be used: by enabling more than one communications
signal to pass within a particular band; and the 'frequency division' indicates how the
sharing is accomplished: by allocating individual frequencies to each communications
signal within the band.
In an FDMA scheme, the given Radio Frequency (RF) bandwidth is divided into adjacent
frequency segments. Each segment is provided with bandwidth to enable an associated
communications signal to pass through a transmission environment with an acceptable level
of interference from communications signals in adjacent frequency segments.
In demand assigned multiple access (DAMA) systems, a control mechanism is used to
establish or terminate voice and/or data links between the source and destination stations.
Consequently, any of the subdivisions is used by any of the participating earth stations at any
given time.
FDMA also supports demand assignment in addition to fixed assignment. Demand
assignment allows all users apparently continuous access of the transponder bandwidth by
assigning carrier frequencies on a temporary basis using a statistical assignment process. The
first FDMA demand-assignment system for satellite was developed by COMSAT for use on
the Intelsat series IVA and V satellites.
In contrast to FDMA, Time Division Multiple Access (TDMA) is an access technique
whereby users are separated in time. Code Division Multiple Access (CDMA) is an access
technology whereby users are separated by codes. Other access techniques include SDMA
(space division multiple access), CSMA (carrier sense multiple access), and MF-TDMA
(multi-frequency TDMA).
Code division multiple access (CDMA) is a form of multiplexing and a method of multiple
access that divides up a radio channel not by time (as in time division multiple access), nor
by frequency (as in frequency-division multiple access), but instead by using different
pseudo-random code sequences for each user. CDMA is a form of "spread-spectrum"
signaling, since the modulated coded signal has a much higher bandwidth than the data being
communicated.
To clarify the CDMA scheme, imagine a large room containing many people speaking many
different languages. Each group of people speaking the same language can understand each
other, but not any of the people speaking other languages. Similarly, in CDMA, each pair of
users is given a shared code. Many codes occupy the same channel, but only the users
associated with a particular code can decode it.
CDMA also refers to digital cellular telephony systems that make use of this multiple access
scheme, such as those pioneered by QUALCOMM, and W-CDMA by the International
Telecommunication Union or ITU.
CDMA has been used in many communications and navigation systems, including the
Global Positioning System and in the OmniTRACS satellite system for transportation
logistics.
Usage in mobile telephony
A number of different terms are used to refer to CDMA implementations. The original U.S.
standard defined by QUALCOMM was known as IS-95, the IS referring to an Interim
Standard of the Telecommunications Industry Association (TIA). IS-95 is often referred to
as 2G or second generation cellular. The QUALCOMM brand name cdmaOne may also be
used to refer to the 2G CDMA standard. CDMA has been submitted for approval as a
mobile air interface standard to the ITU International Telecommunication Union.
Whereas the Global System for Mobile Communications (GSM) standard is a specification
of an entire network infrastructure, the CDMA interface relates only to the air interface—the
radio part of the technology. For example, GSM specifies an infrastructure based on
internationally approved standards, while CDMA allows each operator to provide the network
features as it finds suitable. On the air interface, work has been progressing to harmonise
the signalling suites (GSM: ISDN SS7).
After a couple of revisions, IS-95 was superseded by the IS-2000 standard. This standard was
introduced to meet some of the criteria laid out in the IMT-2000 specification for 3G, or
third generation, cellular. It is also referred to as 1xRTT which simply means "1 times Radio
Transmission Technology" and indicates that IS-2000 uses the same 1.25 MHz carrier shared
channel as the original IS-95 standard. A related scheme called 3xRTT uses three 1.25 MHz
carriers for a 3.75 MHz bandwidth that would allow higher data burst rates for an individual
user, but the 3xRTT scheme has not been commercially deployed. More recently,
QUALCOMM has led the creation of a new CDMA-based technology called 1xEV-DO, or
IS-856, which provides the higher packet data transmission rates required by IMT-2000 and
desired by wireless network operators.
This CDMA system is frequently confused with a similar but incompatible technology called
Wideband Code Division Multiple Access (W-CDMA). The W-CDMA air interface is used in the
global 3G standard UMTS and the Japanese 3G standard FOMA, by NTT DoCoMo and Vodafone;
however, the CDMA family of US national standards (including cdmaOne and CDMA2000) is not
compatible with the W-CDMA family of International Telecommunication Union (ITU)
standards.
Another important application of CDMA — predating and entirely distinct from CDMA
cellular — is the Global Positioning System or GPS.
The QUALCOMM CDMA system includes highly accurate time signals (usually referenced
to a GPS receiver in the cell base station), so cell phone CDMA-based clocks are an
increasingly popular type of radio clock for use in computer networks. The main advantage
of using CDMA cell phone signals for reference clock purposes is that they work better
inside buildings, thus often eliminating the need to mount a GPS antenna on the outside of a
building.
Coverage and Applications
The size of a given cell depends on the power of the signal transmitted by the handset, the
terrain, and the radio frequency being used. Various algorithms can reduce the noise
introduced by variations in terrain, but they require extra information to be sent to validate the
transfer. Hence, the radio frequency and power of the handset effectively determine the cell
size. Long wavelengths suffer less propagation loss over a given distance than short
wavelengths, so lower frequencies generally result in greater coverage while higher
frequencies result in less coverage. These characteristics are used by mobile network
planners in determining the size
and placement of the cells in the network. In cities, many small cells are needed; the use of
high frequencies allows sites to be placed more-closely together, with more subscribers
provided service. In rural areas with a lower density of subscribers, use of lower frequencies
allows each site to provide broader coverage. (See also the Market situation section of GSM.)
Various companies use different variants of CDMA to provide fixed-line networks using
Wireless local loop (WLL) technology. Since they can plan with a specific number of
subscribers per cell in mind, and these are all stationary, this application of CDMA can be
found in most parts of the world.
CDMA is well suited to bursty data transfer where some delay can be tolerated. It is
therefore used in wireless LAN applications; the cell size there is on the order of 500 feet
because of the high frequency (2.4 GHz) and low power. This suitability for data transfer is
one reason why W-CDMA appears to be the winning technology for the data portion of
third-generation (3G) mobile cellular networks.
Technical details
Code Division Multiplexing (Synchronous CDMA)
Synchronous CDMA, also known as Code Division Multiplexing (CDM), exploits at its core
the mathematical property of orthogonality. Suppose we represent data signals as vectors.
The binary string "1011" could be represented by the vector (1, 0, 1, 1). We may give a
vector a name by writing it in boldface, e.g. a. We also use an operation on vectors, known
as the dot product, to "multiply" vectors, by summing the products of their corresponding
components; the operation is denoted with a dot between the vectors. For example, the dot
product of u = (1, –1) and v = (1, 1), written as u.v, would be (1*1) + (–1*1) = 0. When the
dot product of two vectors is zero, the two vectors are said to be orthogonal to each other.

The dot product has a number of properties which will aid in understanding how CDM
works. For vectors a, b, c:

a.(b + c) = a.b + a.c and a.(k*b) = k*(a.b), where k is an arbitrary (scalar) constant, not a
vector.

The square root of a.a is a real number, denoted |a|, and is called the magnitude of the
vector.

Suppose vectors a and b are orthogonal. Then a.(a + b) = |a|^2, since
a.(a + b) = a.a + a.b = |a|^2 + 0.
Example
[Figure: An example of four mutually orthogonal digital signals.]
Start with a set of vectors that are mutually orthogonal; though mutual orthogonality is the
only necessary constraint, these vectors are usually constructed for ease of decoding—for
example, columns or rows from Walsh matrices. An example of orthogonal functions is
shown in the figure above. Now, associate with one sender a vector from this set, say
v, which is called the code (sometimes chipping code or chip code). Associate a zero digit
with the vector –v, and a one digit with the vector v. For example, if v=(1,–1), then the
binary vector (1, 0, 1, 1) would correspond to (v, –v, v, v) which is then constructed in
binary as ((1,–1),(–1,1),(1,–1),(1,–1)). For the purposes of this article, we call this constructed
vector the transmitted vector.
Each sender has a different, unique vector v chosen from that set, but the construction
method of the transmitted vector is identical.
Now, the physical properties of interference say that if two signals at a point are in phase,
they will "add up" to give twice the amplitude of each signal, but if they are out of phase,
they will "subtract" and give a signal that is the difference of the amplitudes. Digitally, this
behaviour can be modelled simply by the addition of the transmission vectors, component
by component.
If sender0 has code (1,–1) and data (1,0,1,1), and sender1 has code (1,1) and data (0,0,1,1),
and both senders transmit simultaneously, then this table describes the coding steps:
Step  Encode sender0                               Encode sender1
0     vector0=(1,–1), data0=(1,0,1,1)=(1,–1,1,1)   vector1=(1,1), data1=(0,0,1,1)=(–1,–1,1,1)
1     encode0=vector0.data0                        encode1=vector1.data1
2     encode0=(1,–1).(1,–1,1,1)                    encode1=(1,1).(–1,–1,1,1)
3     encode0=((1,–1),(–1,1),(1,–1),(1,–1))        encode1=((–1,–1),(–1,–1),(1,1),(1,1))
4     signal0=(1,–1,–1,1,1,–1,1,–1)                signal1=(–1,–1,–1,–1,1,1,1,1)
Because signal0 and signal1 are transmitted at the same time into the same air, we'll add
them together to model the raw signal in the air. (1,–1,–1,1,1,–1,1,–1) + (–1,–1,–1,–1,1,1,1,1)
= (0,–2,–2,0,2,0,2,0)
This raw signal may be called an interference pattern.
How does a receiver make sense of this interference pattern? The receiver knows the codes
of the senders, and this knowledge can be combined with the received interference pattern
to extract an intelligible signal for any known sender. The following table explains how this
process works.
Step  Decode sender0                                Decode sender1
0     vector0=(1,–1), pattern=(0,–2,–2,0,2,0,2,0)   vector1=(1,1), pattern=(0,–2,–2,0,2,0,2,0)
1     decode0=pattern.vector0                       decode1=pattern.vector1
2     decode0=((0,–2),(–2,0),(2,0),(2,0)).(1,–1)    decode1=((0,–2),(–2,0),(2,0),(2,0)).(1,1)
3     decode0=((0+2),(–2+0),(2+0),(2+0))            decode1=((0–2),(–2+0),(2+0),(2+0))
4     data0=(2,–2,2,2)=(1,0,1,1)                    data1=(–2,–2,2,2)=(0,0,1,1)
Recall that the dot product of two vectors v=(a,b) and u=(c,d) is (a*c + b*d). So, for
example, (0,–2).(1,–1) = (0*1 + –2*–1) = (0+2) = 2.
Further, after decoding, all values greater than 0 are interpreted as 1 while all values less than
zero are interpreted as 0. For example, after decoding, data0 is (2,–2,2,2), but the receiver
interprets this as (1,0,1,1).
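To make the encode/decode steps concrete, here is a minimal Python sketch (not part of the
original example) that reproduces the two tables above; the function names are chosen for
illustration only:

def encode(code, bits):
    # Spread each data bit: emit +code for a 1 bit, -code for a 0 bit.
    signal = []
    for bit in bits:
        chip = 1 if bit == 1 else -1
        signal.extend(chip * c for c in code)
    return signal

def decode(code, pattern):
    # Despread: dot each code-length block with the code, then threshold.
    n = len(code)
    bits = []
    for i in range(0, len(pattern), n):
        corr = sum(p * c for p, c in zip(pattern[i:i + n], code))
        bits.append(1 if corr > 0 else 0)
    return bits

code0, data0 = (1, -1), [1, 0, 1, 1]
code1, data1 = (1, 1), [0, 0, 1, 1]
signal0 = encode(code0, data0)                    # (1,-1,-1,1,1,-1,1,-1)
signal1 = encode(code1, data1)                    # (-1,-1,-1,-1,1,1,1,1)

# The signals add in the air, component by component.
raw = [a + b for a, b in zip(signal0, signal1)]   # (0,-2,-2,0,2,0,2,0)

print(decode(code0, raw))                         # [1, 0, 1, 1]
print(decode(code1, raw))                         # [0, 0, 1, 1]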
Asynchronous CDMA
The previous example of orthogonal Walsh sequences describes how 2 users can be
multiplexed together in a synchronous system, a technique that is commonly referred to as
Code Division Multiplexing (CDM). The set of 4 Walsh sequences shown in the figure will
afford up to 4 users, and in general, an NxN Walsh matrix can be used to multiplex N users.
Multiplexing requires all of the users to be coordinated so that each transmits their assigned
sequence v (or the complement, -v) starting at exactly the same time. Thus, this technique
finds use in base-to-mobile links, where all of the transmissions originate from the same
transmitter and can be perfectly coordinated.
On the other hand, the mobile-to-base links cannot be precisely coordinated, particularly due
to the mobility of the handsets, and require a somewhat different approach. Since it is not
mathematically possible to create signature sequences that are orthogonal for arbitrarily
random starting points, unique "pseudo-random" or "pseudo-noise" (PN) sequences are
used in Asynchronous CDMA systems. These PN sequences are statistically uncorrelated,
and the sum of a large number of PN sequences results in Multiple Access Interference
(MAI) that is approximated by a Gaussian noise process (following the "central limit
theorem" in statistics). If all of the users are received with the same power level, then the
variance (e.g., the noise power) of the MAI increases in direct proportion to the number of
users.
All forms of CDMA use spread spectrum process gain to allow receivers to partially
discriminate against unwanted signals. Signals encoded with the specified PN sequence
(code) are received, while signals with different codes (or the same code but a different
timing offset) appear as wideband noise reduced by the process gain.
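As a rough illustration of these ideas, the following Python sketch generates a pseudo-noise
sequence with a simple 5-stage linear feedback shift register (a toy register, far shorter than
the codes real systems use) and shows that a misaligned copy of the code correlates to
almost nothing:

def lfsr_sequence(taps, state, length):
    # Generate +/-1 chips from a linear feedback shift register.
    out = []
    for _ in range(length):
        bit = state[-1]
        out.append(1 if bit else -1)
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        state = [feedback] + state[:-1]
    return out

def correlation(a, b):
    return sum(x * y for x, y in zip(a, b))

n = 31                                   # 2**5 - 1 chips per period
seq = lfsr_sequence(taps=[2, 4], state=[1, 0, 0, 1, 1], length=n)

# Autocorrelation at zero lag is n; at any nonzero circular lag it is -1,
# so a misaligned (or different) code looks like low-level noise.
shifted = seq[7:] + seq[:7]
print(correlation(seq, seq))       # 31
print(correlation(seq, shifted))   # -1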
Since each user generates MAI, controlling the signal strength is an important issue with
CDMA transmitters. A CDM (Synchronous CDMA), TDMA or FDMA receiver can in
theory completely reject arbitrarily strong signals using different codes, time slots or
frequency channels due to the orthogonality of these systems. This is not true for
Asynchronous CDMA; rejection of unwanted signals is only partial. If any or all of the
unwanted signals are much stronger than the desired signal, they will overwhelm it. This
leads to a general requirement in any Asynchronous CDMA system to approximately match
the various signal power levels as seen at the receiver. In CDMA cellular, the base station
uses a fast closed-loop power control scheme to tightly control each mobile's transmit
power.
Advantages of Asynchronous CDMA over other techniques
Asynchronous CDMA's main advantage over CDM (Synchronous CDMA), TDMA and
FDMA is that it can use the spectrum more efficiently in mobile telephony applications.
(In theory, CDMA, TDMA and FDMA have exactly the same spectral efficiency; in
practice, each has its own challenges: timing in the case of TDMA, power control in the
case of CDMA, and frequency generation/filtering in the case of FDMA.) TDMA systems
must carefully synchronize the transmission times of
all the users to ensure that they are received in the correct timeslot and do not cause
interference. Since this cannot be perfectly controlled in a mobile environment, each
timeslot must have a guard-time, which reduces the probability that users will interfere, but
decreases the spectral efficiency. Similarly, FDMA systems must use a guard-band between
adjacent channels, due to the random Doppler shift of the signal spectrum which occurs due
to the user's mobility. The guard-bands will reduce the probability that adjacent channels will
interfere, but decrease the utilization of the spectrum.
Most importantly, Asynchronous CDMA offers a key advantage in the flexible allocation of
resources. There are a fixed number of orthogonal codes, timeslots or frequency bands that
can be allocated for CDM, TDMA and FDMA systems, which remain underutilized due to
the bursty nature of telephony and packetized data transmissions. There is no strict limit to
the number of users that can be supported in an Asynchronous CDMA system, only a
practical limit governed by the desired bit error probability, since the SIR (Signal to
Interference Ratio) varies inversely with the number of users. In a bursty traffic environment
like mobile telephony, the advantage afforded by Asynchronous CDMA is that the
performance (bit error rate) is allowed to fluctuate randomly, with an average value
determined by the number of users times the percentage of utilization. Suppose there are 2N
users that only talk half of the time; then they can be accommodated with the same
average bit error probability as N users that talk all of the time. The key difference here is
that the bit error probability for N users talking all of the time is constant, whereas it is a
random quantity (with the same mean) for 2N users talking half of the time.
In other words, Asynchronous CDMA is ideally suited to a mobile network where large
numbers of transmitters each generate a relatively small amount of traffic at irregular
intervals. CDM (Synchronous CDMA), TDMA and FDMA systems cannot recover the
underutilized resources inherent to bursty traffic due to the fixed number of orthogonal
codes, time slots or frequency channels that can be assigned to individual transmitters. For
instance, if there are N time slots in a TDMA system and 2N users that talk half of the time,
then half of the time there will be more than N users needing to use more than N timeslots.
Furthermore, it would require significant overhead to continually allocate and deallocate the
orthogonal code, time-slot or frequency channel resources. By comparison, Asynchronous
CDMA transmitters simply send when they have something to say, and go off the air when
they don't, keeping the same PN signature sequence as long as they are connected to the
system.
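A quick numerical sanity check of the 2N-user argument above, using a binomial model of
user activity (the specific numbers are illustrative, not from the text):

from math import comb

N, users, p = 20, 40, 0.5   # slots, users, probability a user is active

# P(more than N users active at once) under a binomial model
p_overload = sum(comb(users, k) * p**k * (1 - p)**(users - k)
                 for k in range(N + 1, users + 1))
print(f"P(demand > {N} slots) = {p_overload:.3f}")   # ~0.44

# A hard-partitioned system blocks whenever demand exceeds N slots; an
# asynchronous CDMA system instead lets the error rate degrade gracefully
# as the instantaneous user count (and hence the MAI) fluctuates.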
Macro diversity usage
Soft handover
Soft handoff (or soft handover) is an innovation in mobility. It refers to the technique of
adding additional base stations (in IS-95, as many as five) to a connection, to be certain that
the next base station is ready as the user moves through the terrain. However, it can also be
used to move a
call from one base station that is approaching congestion to another with better capacity. As
a result, signal quality and handoff robustness is improved compared to TDMA systems.
In TDMA and analog systems, each cell transmits on its own frequency, different from those
of its neighbouring cells. If a mobile device reaches the edge of the cell currently serving its
call, it is told to break its radio link and quickly tune to the frequency of one of the
neighbouring cells where the call has been moved by the network due to the mobile's
movement. If the mobile is unable to tune to the new frequency in time the call is dropped.
In CDMA, a set of neighbouring cells all use the same frequency for transmission and
distinguish cells (or base stations) by means of a number called the "PN offset", a time offset
from the beginning of the well-known pseudo-random noise sequence that is used to spread
the signal from the base station. Because all of the cells are on the same frequency, listening
to different base stations is now an exercise in digital signal processing based on offsets from
the PN sequence, not RF transmission and reception based on separate frequencies.
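As a toy illustration of PN-offset-based cell identification (the offsets and sequence length
here are invented; IS-95 actually uses a 2^15-chip short code with offsets in multiples of 64
chips), a receiver can distinguish base stations by correlating against shifted replicas of the
common sequence, reusing the lfsr_sequence() generator sketched earlier:

def correlate(rx, ref):
    return sum(a * b for a, b in zip(rx, ref))

def shift(seq, k):
    return seq[k:] + seq[:k]

# Common short code shared by all cells (from the earlier LFSR sketch):
pn = lfsr_sequence(taps=[2, 4], state=[1, 0, 0, 1, 1], length=31)

cell_offsets = {"cell A": 0, "cell B": 10, "cell C": 20}

# Received signal: cell A strong, cell C weak, cell B absent.
rx = [3 * a + 1 * c for a, c in zip(shift(pn, 0), shift(pn, 20))]

for name, off in cell_offsets.items():
    print(name, correlate(rx, shift(pn, off)))
# cell A gives a large peak (~3*31), cell C a smaller one (~1*31),
# and cell B only residual correlation noise.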
As the CDMA phone roams through the network, it detects the PN offsets of the
neighbouring cells and reports the strength of each signal back to the reference cell of the
call (usually the strongest cell). If the signal from a neighbouring cell is strong enough, the
mobile will be directed to "add a leg" to its call and start transmitting and receiving to and
from the new cell in addition to the cell (or cells) already hosting the call. Likewise, if a cell's
signal becomes too weak the mobile is directed to drop that leg. In this way, the mobile can
move from cell to cell and add and drop legs as necessary in order to keep the call up
without ever dropping the link.
Note that this "soft handoff" does not happen over the air from cell tower to cell tower. A
group of cell sites is linked by wire, and the call is synchronized across them over TDM,
Asynchronous Transfer Mode, or even Internet Protocol.
Hard handover
When there are frequency boundaries between different carriers or sub-networks, a CDMA
phone behaves in the same way as TDMA or analog and performs a hard handoff in which
it breaks the existing connection and tries to pick up on the new frequency where it left off.
CDMA Roaming
The capability to use many services of the home system in other wireless systems is known as
roaming. Roaming is a critical capability of wide-area wireless systems, such as cdmaOne and
CDMA2000 systems. While most subscribers spend most of their time within their home
system, many do spend time outside it, and expect their phones to work everywhere with all
services.
When TIA/EIA-41 was first developed, the systems it connected were predominantly
regional in extent, so domestic roaming was the first priority. However, international
roaming soon became an important requirement, and a series of add-on standards was
developed to support a variety of enhancements that gradually removed the barriers to
international roaming, thereby allowing CDMA carriers to launch international roaming
globally.
CDMA2000 bases its roaming capabilities on ANSI-41, and consequently inherits more than
20 years of development and experience with this standard, beginning with the first inter-carrier handoff trials in Canada and the United States in 1989. The goal of roaming is to have
integrated networks where one network, through agreements with other networks, extends
coverage to its customers.
Regional, National or International roaming all have several elements that are required in
order for CDMA roaming to be facilitated:
CDMA Roaming Business Elements
CDMA Roaming Technical Elements
CDMA Roaming Service Features
Roaming Service Providers
CDMA Roaming Inter-carrier Implementation
CDMA Roaming Carrier Maintenance
CDMA features
Narrowband message signal multiplied by wideband spreading signal or pseudonoise
code
Each user has his own pseudonoise (PN) code
Soft capacity limit: system performance degrades for all users as number of users
increases
Cell frequency reuse: no frequency planning needed
Soft handoff increases capacity
Near-far problem
Interference limited: power control is required
Wide bandwidth induces diversity: rake receiver is used
Time Division Multiple Access (TDMA)
TDMA is a channel access method for shared medium (usually radio) networks. It allows several
users to share the same frequency channel by dividing the signal into different timeslots. The
users transmit in rapid succession, one after the other, each using his own timeslot. This
allows multiple stations to share the same transmission medium (e.g. radio frequency
channel) while using only the part of its bandwidth they require. TDMA is used in the digital
2G cellular systems such as Global System for Mobile Communications (GSM), IS-136,
Personal Digital Cellular (PDC) and iDEN, and in the Digital Enhanced Cordless
Telecommunications (DECT) standard for portable phones. It is also used extensively in
satellite systems, and combat-net radio systems. For usage of Dynamic TDMA packet mode
communication, see below.
[Figure: TDMA frame structure, showing a data stream divided into frames and those frames divided into timeslots.]
TDMA is a type of Time-division multiplexing, with the special point that instead of having
one transmitter connected to one receiver, there are multiple transmitters. In the case of the
uplink from a mobile phone to a base station this becomes particularly difficult because the
mobile phone can move around and vary the timing advance required to make its
transmission match the gap in transmission from its peers.
TDMA features
Shares single carrier frequency with multiple users
Non-continuous transmission makes handoff simpler
Slots can be assigned on demand in dynamic TDMA
Less stringent power control than CDMA due to reduced intra-cell interference
Higher synchronization overhead than CDMA
Advanced equalization is necessary for high data rates
Cell breathing (borrowing resources from adjacent cells) is more complicated than in
CDMA
Frequency/slot allocation complexity
Pulsating power envelope: interference with other devices
TDMA in 2G cellular systems
Most 2G cellular systems, with the notable exception of IS-95, are based around TDMA.
GSM, D-AMPS, PDC, and PHS are examples of TDMA cellular systems. GSM combines
TDMA with frequency hopping and wideband transmission to minimize common types of
interference.
In the GSM system, the synchronization of the mobile phones is achieved by sending timing
advance commands from the base station which instructs the mobile phone to transmit
earlier and by how much. This compensates for propagation delay, since radio waves travel
at the finite speed of light. The mobile phone is not allowed to transmit for its entire
timeslot, but there is a guard interval at the end of each timeslot. As the transmission moves
into the guard period, the mobile network adjusts the timing advance to synchronize the
transmission.
Initial synchronization of a phone requires even more care. Before a mobile transmits there
is no way to actually know the offset required. For this reason, an entire timeslot has to be
dedicated to mobiles attempting to contact the network (known as the RACH in GSM). The
mobile attempts to broadcast at the beginning of the timeslot, as received from the network.
If the mobile is located next to the base station, there will be no time delay and this will
succeed. If, however, the mobile phone is just under 35 km from the base station, the
time delay will mean the mobile's broadcast arrives at the very end of the timeslot. In that
case, the mobile will be instructed to broadcast its messages starting nearly a whole timeslot
earlier than would be expected otherwise. Finally, if the mobile is beyond the 35 km cell
range in GSM, then the RACH will arrive in a neighboring timeslot and be ignored. It is this
feature, rather than limitations of power, which limits the range of a GSM cell to 35
kilometers when no special extension techniques are used. By changing the synchronization
between the uplink and downlink at the base station, however, this limitation can be
overcome.
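The 35 km figure follows directly from the timing-advance arithmetic; here is a small
worked sketch, assuming the standard GSM bit period of 48/13 microseconds and the 6-bit
timing advance field (values 0 to 63):

c = 299_792_458            # speed of light, m/s
bit_period = 48 / 13e6     # one GSM bit period: ~3.69 microseconds

# Each timing-advance step must absorb one bit period of ROUND-TRIP delay:
metres_per_step = c * bit_period / 2
print(f"{metres_per_step:.0f} m per TA step")        # ~553 m

max_range = 63 * metres_per_step
print(f"max range = {max_range / 1000:.1f} km")      # ~34.9 km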
TDMA in 3G Cellular Systems
Most major 3G systems are primarily based upon CDMA. However, Time Division
duplexing and multiple access schemes are available in 3G form, sometimes combined with
CDMA to take advantage of the benefits of both technologies.
While the most popular form of the UMTS 3G GSM system uses CDMA instead of TDMA,
TDMA is combined with CDMA and Time Division Duplexing in two standard UMTS
UTRA modes, UTRA TDD-HCR (better known as TD-CDMA), and UTRA TDD-LCR
(better known as TD-SCDMA). In each mode, more than one handset may share a single
time slot. UTRA TDD-HCR is used most commonly by UMTS-TDD to provide Internet
access, whereas UTRA TDD-LCR provides some interoperability with the forthcoming
Chinese 3G standard.
Comparison with other multiple-access schemes
In radio systems, TDMA is usually used alongside Frequency-division multiple access
(FDMA) and Frequency division duplex (FDD); the combination is referred to as
FDMA/TDMA/FDD. This is the case in both GSM and IS-136 for example. Exceptions to
this include the DECT and PHS micro-cellular systems, UMTS-TDD UMTS variant, and
China's TD-SCDMA, which use Time Division duplexing, where different time slots are
allocated for the base station and handsets on the same frequency.
A major advantage of TDMA is that the radio part of the mobile only needs to listen and
broadcast for its own timeslot. For the rest of the time, the mobile can carry out
measurements on the network, detecting surrounding transmitters on different frequencies.
This allows safe inter-frequency handovers, something which is difficult in CDMA systems,
not supported at all in IS-95 and supported through complex system additions in Universal
Mobile Telecommunications System (UMTS). This in turn allows for co-existence of
microcell layers with macrocell layers.
CDMA, by comparison, supports "soft hand-off" which allows a mobile phone to be in
communication with up to 6 base stations simultaneously, a type of "same-frequency
handover". The incoming packets are compared for quality, and the best one is selected.
CDMA's "cell breathing" characteristic, where a terminal on the boundary of two congested
cells will be unable to receive a clear signal, can often negate this advantage during peak
periods.
A disadvantage of TDMA systems is that they create interference at a frequency which is
directly connected to the timeslot length. This is the irritating buzz which can sometimes be
heard if a GSM phone is left next to a radio or speakers. Another disadvantage is that the
"dead time" between timeslots limits the potential bandwidth of a TDMA channel. These are
279
WIRELESS COMMUNICACTION
implemented in part because of the difficulty ensuring that different terminals transmit at
exactly the times required. Handsets that are moving will need to constantly adjust their
timings to ensure their transmission is received at precisely the right time, because as they
move further from the base station, their signal will take longer to arrive. This also means
that the major TDMA systems have hard limits on cell sizes in terms of range, though in
practice the power levels required to receive and transmit over distances greater than the
supported range would be mostly impractical anyway.
Dynamic TDMA
In dynamic time division multiple access, a scheduling algorithm dynamically reserves a
variable number of timeslots in each frame for variable bit-rate data streams, based on the
traffic demand of each data stream (a toy allocator is sketched after the list below).
Dynamic TDMA is used in:
HIPERLAN/2 broadband radio access network.
IEEE 802.16a WiMAX
Bluetooth
The Packet radio multiple access (PRMA) method for combined circuit switched
voice communication and packet data.
TD-SCDMA
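Below is the toy dynamic-TDMA allocator promised above, a minimal Python sketch
assuming each station reports its queue length every frame (the proportional-share policy
and all names are illustrative, not from any standard):

def allocate_slots(demands, slots_per_frame):
    # demands: dict station -> queued packets; returns dict station -> slots.
    alloc = {s: 0 for s in demands}
    total = sum(demands.values())
    if total == 0:
        return alloc
    # First pass: proportional share, rounded down.
    for s, d in demands.items():
        alloc[s] = (d * slots_per_frame) // total
    # Second pass: hand leftover slots to stations with unmet demand.
    leftover = slots_per_frame - sum(alloc.values())
    for s in sorted(demands, key=demands.get, reverse=True):
        if leftover == 0:
            break
        if alloc[s] < demands[s]:
            alloc[s] += 1
            leftover -= 1
    return alloc

print(allocate_slots({"A": 5, "B": 2, "C": 0}, slots_per_frame=6))
# {'A': 5, 'B': 1, 'C': 0} -- the idle station consumes no slots this frame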
Network Planning and Design
Network planning and design is an iterative process, encompassing topological design,
network-synthesis, and network-realization, and is aimed at ensuring that a new network or
service meets the needs of the subscriber and operator. The process can be tailored for
each new network or service.
This is an extremely important process which must be performed before the establishment
of a new telecommunications network or service.
A network planning methodology
A traditional network planning methodology involves four layers of planning, namely:
business planning
long-term and medium-term network planning
short-term network planning
operations and maintenance.
Each of these layers incorporates plans for a different time horizon: the business planning
layer determines the planning that the operator must perform to ensure that the network will
perform as required for its intended life-span. The Operations and Maintenance layer,
however, examines how the network will run on a day-to-day basis.
The network planning process begins with the acquisition of external information. This
includes:
forecasts of how the new network/service will operate;
the economic information concerning costs; and
the technical details of the network's capabilities.
It should be borne in mind that planning a new network/service involves implementing the
new system across the first four layers of the OSI Reference Model. This means that even
before the network planning process begins, choices must be made, involving protocols and
transmission technologies.
Once the initial decisions have been made, the network planning process involves three main
steps:
Topological design: This stage involves determining where to place the components
and how to connect them. The (topological) optimisation methods that can be used
in this stage come from an area of mathematics called Graph Theory. These methods
involve determining the costs of transmission and the cost of switching, and thereby
determining the optimum connection matrix and location of switches and
concentrators.
Network-synthesis: This stage involves determining the size of the components used,
subject to performance criteria such as the Grade of Service (GoS). The method
used is known as "Nonlinear Optimisation", and involves determining the topology,
required GoS, cost of transmission, etc., and using this information to calculate a
routing plan, and the size of the components.
Network realization: This stage involves determining how to meet capacity
requirements, and ensure reliability within the network. The method used is known
as "Multicommodity Flow Optimisation", and involves determining all information
relating to demand, costs and reliability, and then using this information to calculate
an actual physical circuit plan.
These steps are interrelated and are therefore performed iteratively, and in parallel with one
another. The planning process is highly complex, meaning that at each iteration, an analyst
must increase his planning horizons, and in so doing, he must generate plans for the various
layers outlined above.
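As a minimal taste of the graph-theoretic methods used in topological design, the sketch
below picks the cheapest set of links that still connects every site, using Prim's minimum
spanning tree algorithm (the sites and link costs are invented; real planning also weighs
capacity, reliability and switching costs):

import heapq

def prim_mst(nodes, link_costs):
    # link_costs: dict (a, b) -> cost, undirected. Returns the chosen links.
    cost = {}
    for (a, b), c in link_costs.items():
        cost.setdefault(a, {})[b] = c
        cost.setdefault(b, {})[a] = c
    start = nodes[0]
    visited = {start}
    frontier = [(c, start, nbr) for nbr, c in cost[start].items()]
    heapq.heapify(frontier)
    tree = []
    while frontier and len(visited) < len(nodes):
        c, a, b = heapq.heappop(frontier)
        if b in visited:
            continue
        visited.add(b)
        tree.append((a, b, c))
        for nbr, nc in cost[b].items():
            if nbr not in visited:
                heapq.heappush(frontier, (nc, b, nbr))
    return tree

sites = ["HQ", "Exch1", "Exch2", "Exch3"]
links = {("HQ", "Exch1"): 4, ("HQ", "Exch2"): 7, ("Exch1", "Exch2"): 2,
         ("Exch1", "Exch3"): 6, ("Exch2", "Exch3"): 3}
print(prim_mst(sites, links))
# [('HQ', 'Exch1', 4), ('Exch1', 'Exch2', 2), ('Exch2', 'Exch3', 3)]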
The role of forecasting
During the process of Network Planning and Design, it is necessary to estimate the expected
traffic intensity and thus the traffic load that the network must support. If a network of a
similar nature already exists, then it may be possible to take traffic measurements of such a
network and use that data to calculate the exact traffic load. However, as is more likely in
most instances, if there are no similar networks to be found, then the network planner must
use telecommunications forecasting methods to estimate the expected traffic intensity.
The forecasting process involves several steps as follows:
Definition of problem;
Data acquisition;
Choice of forecasting method;
Analysis/Forecasting;
Documentation and analysis of results.
Dimensioning
The purpose of dimensioning a new network/service is to determine the minimum capacity
requirements that will still allow the Teletraffic Grade of Service (GoS) requirements to be
met. To do this, dimensioning involves planning for peak-hour traffic, i.e. that hour during
the day during which traffic intensity is at its peak.
The dimensioning process involves determining the network's topology, routing plan, traffic
matrix, and GoS requirements, and using this information to determine the maximum call
handling capacity of the switches, and the maximum number of channels required between
the switches.
A basic dimensioning rule is that the traffic load should never approach 100% of capacity.
To comply with this rule, the planner must take ongoing measurements of the network's
traffic, and continuously maintain and upgrade resources to meet the changing
requirements.
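A concrete version of this dimensioning calculation, assuming the blocked-calls-cleared
model so that the classic Erlang B formula applies:

def erlang_b(traffic, channels):
    # Blocking probability for `traffic` erlangs offered to `channels`
    # circuits, computed with the standard Erlang B recurrence.
    b = 1.0
    for n in range(1, channels + 1):
        b = (traffic * b) / (n + traffic * b)
    return b

def channels_needed(traffic, target_gos):
    n = 1
    while erlang_b(traffic, n) > target_gos:
        n += 1
    return n

# e.g. 20 erlangs of peak-hour traffic at a 1% blocking target:
print(channels_needed(traffic=20.0, target_gos=0.01))   # 30 channels

Here 20 erlangs of peak-hour traffic at a 1% Grade of Service comes out at 30 channels, in
line with standard Erlang B tables.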
Chapter No: 9
Introduction to Global Positioning System (GPS)
The Global Positioning System (GPS) is currently the only
fully functional Global Navigation Satellite System (GNSS). Utilizing a constellation of at
least 24 medium Earth orbit satellites that transmit precise radio signals, the system enables a
GPS receiver to determine its location, speed and direction.
Developed by the United States Department of Defense, it is officially named NAVSTAR
GPS. (Contrary to popular belief, NAVSTAR is not an acronym for NAVigation Satellite
Timing And Ranging, but simply a name given by Mr. John Walsh (no relation to John
Walsh of America's Most Wanted), a key decision maker when it came to the budget for the
GPS program.) The satellite constellation is managed by the United States Air Force 50th
Space Wing. The cost of maintaining the system is approximately US$750 million per year,[2]
including the replacement of aging satellites, and research and development. Despite this
fact, GPS is free for civilian use as a public good.
GPS has become a widely used aid to navigation worldwide, and a useful tool for mapmaking, land surveying, commerce, and scientific uses. GPS also provides a precise time
reference used in many applications including scientific study of earthquakes, and
synchronization of telecommunications networks.
Simplified method of operation
A GPS receiver calculates its position by measuring the distance between itself and three or
more GPS satellites. Measuring the time delay between transmission and reception of each
GPS radio signal gives the distance to each satellite, since the signal travels at a known speed.
The signals also carry information about the satellites' location. By determining the position
of, and distance to, at least three satellites, the receiver can compute its position using
trilateration. Receivers typically do not have perfectly accurate clocks and therefore track one
or more additional satellites to correct the receiver's clock error.
Technical description
[Figure: GPS satellite on test rack.]
System segmentation
The current GPS consists of three major segments. These are the space segment (SS), a
control segment (CS), and a user segment (US).
Space segment
The space segment (SS) is composed of the orbiting GPS satellites, or Space Vehicles (SV) in
GPS parlance. The GPS design calls for 24 SVs to be distributed equally among six circular
orbital planes. The orbital planes are centered on the Earth, not rotating with respect to the
distant stars. The six planes have approximately 55° inclination (tilt relative to Earth's
equator) and are separated by 60° right ascension of the ascending node (angle along the
equator from a reference point to the orbit's intersection).
Orbiting at an altitude of approximately 20,200 kilometers (12,600 miles or 10,900 nautical
miles; orbital radius of 26,600 km (16,500 mi or 14,400 NM)), each SV makes two complete
orbits each sidereal day, so it passes over the same location on Earth once each day. The
orbits are arranged so that at least six satellites are always within line of sight from almost
everywhere on Earth's surface.
As of April 2007, there are 30 actively broadcasting satellites in the GPS constellation.
The additional satellites improve the precision of GPS receiver calculations by providing
redundant measurements. With the increased number of satellites, the constellation was
changed to a nonuniform arrangement. Such an arrangement was shown to improve
reliability and availability of the system, relative to a uniform system, when multiple
satellites fail.
Control segment
The flight paths of the satellites are tracked by US Air Force monitoring stations in Hawaii,
Kwajalein, Ascension Island, Diego Garcia, and Colorado Springs, Colorado, along with
monitor stations operated by the National Geospatial-Intelligence Agency (NGA). The
tracking information is sent to the Air Force Space Command's master control station at
Schriever Air Force Base in Colorado Springs, which is operated by the 2d Space Operations
Squadron (2 SOPS) of the United States Air Force (USAF). 2 SOPS contacts each GPS
satellite regularly with a navigational update (using the ground antennas at Ascension Island,
Diego Garcia, Kwajalein, and Colorado Springs). These updates synchronize the atomic
clocks on board the satellites to within one microsecond and adjust the ephemeris of each
satellite's internal orbital model. The updates are created by a Kalman Filter which uses
inputs from the ground monitoring stations, space weather information, and various other
inputs.[10]
[Figure: GPS receivers come in a variety of formats, from devices integrated into cars, phones, and watches, to dedicated devices from manufacturers such as Trimble, Garmin and Leica.]
User segment
The user's GPS receiver is the user segment (US) of the GPS system. In general, GPS
receivers are composed of an antenna, tuned to the frequencies transmitted by the satellites,
receiver-processors, and a highly-stable clock (often a crystal oscillator). They may also
include a display for providing location and speed information to the user. A receiver is
often described by its number of channels: this signifies how many satellites it can monitor
simultaneously. Originally limited to four or five, this has progressively increased over the
years so that, as of 2006, receivers typically have between twelve and twenty channels.
[Figure: A typical OEM GPS receiver module, based on the SiRF Star III chipset, measuring 12 x 15 mm, and used in many products.]
GPS receivers may include an input for differential corrections, using the RTCM SC-104
format. This is typically in the form of an RS-232 port at 4,800 bit/s. Data is actually
sent at a much lower rate, which limits the accuracy of the signal sent using RTCM.
Receivers with internal DGPS receivers can outperform those using external RTCM data. As
of 2006, even low-cost units commonly include Wide Area Augmentation System (WAAS)
receivers.
Many GPS receivers can relay position data to a PC or other device using the NMEA 0183
protocol. NMEA 2000 is a newer and less widely adopted protocol. Both are proprietary and
controlled by the US-based National Marine Electronics Association. References to the
NMEA protocols have been compiled from public records, allowing open source tools like
gpsd to read the protocol without violating intellectual property laws. Other proprietary
protocols exist as well, such as the SiRF and MTK protocols. Receivers can interface with
other devices using methods including a serial connection, USB or Bluetooth.
Navigation signals
[Figure: GPS broadcast signal.]
GPS satellites broadcast three different types of data in the primary navigation signal. The
first is the almanac which sends coarse time information along with status information about
the satellites. The second is the ephemeris, which contains orbital information that allows
the receiver to calculate the position of the satellite. This data is included in the 37,500 bit
Navigation Message, which takes 12.5 minutes to send at 50 bps.
The satellites also broadcast two forms of clock information, the Coarse / Acquisition code,
or C/A which is freely available to the public, and the restricted Precise code, or P-code,
usually reserved for military applications. The C/A code is a 1,023 bit long pseudo-random
code broadcast at 1.023 MHz, repeating every millisecond. Each satellite sends a distinct
C/A code, which allows it to be uniquely identified. The P-code is a similar code broadcast
at 10.23 MHz, but it repeats only once a week. In normal operation, in the so-called "anti-spoofing mode", the P-code is first encrypted into the Y-code, or P(Y), which can only be
decrypted by units with a valid decryption key. Frequencies used by GPS include:
L1 (1575.42 MHz): Mix of Navigation Message, coarse-acquisition (C/A) code and
encrypted precision P(Y) code.
L2 (1227.60 MHz): P(Y) code, plus the new L2C code on the Block IIR-M and
newer satellites.
L3 (1381.05 MHz): Used by the Defense Support Program to signal detection of
missile launches, nuclear detonations, and other high-energy infrared events.
L4 (1379.913 MHz): Being studied for additional ionospheric correction.
L5 (1176.45 MHz): Proposed for use as a civilian safety-of-life (SoL) signal (see GPS
modernization). This frequency falls into an internationally protected range for
aeronautical navigation, promising little or no interference under all circumstances.
The first Block IIF satellite that would provide this signal is set to be launched in
2008.
Calculating positions
The coordinates are calculated according to the World Geodetic System WGS84 coordinate
system. To calculate its position, a receiver needs to know the precise time. The satellites are
equipped with extremely accurate atomic clocks, and the receiver uses an internal crystal
oscillator-based clock that is continually updated using the signals from the satellites.
The receiver identifies each satellite's signal by its distinct C/A code pattern, then measures
the time delay for each satellite. To do this, the receiver produces an identical C/A sequence
using the same seed number as the satellite. By lining up the two sequences, the receiver can
measure the delay and calculate the distance to the satellite, called the pseudorange.
[Figure: Overlapping pseudoranges, represented as curves, are combined to yield the probable position.]
The orbital position data from the Navigation Message is then used to calculate the satellite's
precise position. Knowing the position and the distance of a satellite indicates that the
receiver is located somewhere on the surface of an imaginary sphere centered on that
satellite and whose radius is the distance to it. When four satellites are measured
simultaneously, the intersection of the four imaginary spheres reveals the location of the
receiver. Receivers known to be near sea level can substitute the sphere of the planet for one
satellite by using their altitude. Often, these spheres will overlap slightly instead of meeting at
one point, so the receiver will yield a mathematically most-probable position (and often
indicate the uncertainty).
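A toy version of this position solution, written with NumPy, using invented satellite
coordinates and an assumed 1 ms receiver clock error; the four unknowns (x, y, z, clock
bias) are found from four pseudoranges by Gauss-Newton iteration:

import numpy as np

sats = np.array([[15600e3, 7540e3, 20140e3],
                 [18760e3, 2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3, 610e3, 18390e3]])   # made-up positions, metres

true_pos = np.array([3900e3, 3900e3, 3900e3])
true_bias = 1e-3 * 299792458.0          # 1 ms clock error, as metres

# Measurement model: pseudorange = geometric range + clock bias
pranges = np.linalg.norm(sats - true_pos, axis=1) + true_bias

x = np.zeros(4)                          # initial guess: centre of Earth
for _ in range(10):
    ranges = np.linalg.norm(sats - x[:3], axis=1)
    residual = pranges - (ranges + x[3])
    # Jacobian: negated unit line-of-sight vectors, plus 1 for the bias
    J = np.hstack([-(sats - x[:3]) / ranges[:, None], np.ones((4, 1))])
    x += np.linalg.lstsq(J, residual, rcond=None)[0]

print(x[:3])     # converges to ~(3900e3, 3900e3, 3900e3)
print(x[3])      # ~299792.458, the 1 ms bias expressed in metres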
Calculating a position with the P(Y) signal is generally similar in concept, assuming one can
decrypt it. The encryption is essentially a safety mechanism: if a signal can be successfully
decrypted, it is reasonable to assume it is a real signal being sent by a GPS satellite. In
comparison, civil receivers are highly vulnerable to spoofing since correctly formatted C/A
signals can be generated using readily available signal generators. RAIM features do not
protect against spoofing, since RAIM only checks the signals from a navigational
perspective.
Accuracy and error sources
The position calculated by a GPS receiver requires the current time, the position of the
satellite and the measured delay of the received signal. The position accuracy is primarily
dependent on the satellite position and signal delay.
To measure the delay, the receiver compares the bit sequence received from the satellite with
an internally generated version. By comparing the rising and trailing edges of the bit
transitions, modern electronics can measure signal offset to within about 1% of a bit time, or
approximately 10 nanoseconds for the C/A code. Since GPS signals propagate nearly at the
speed of light, this represents an error of about 3 meters. This is the minimum error possible
using only the GPS C/A signal.
Position accuracy can be improved by using the higher-speed P(Y) signal. Assuming the
same 1% bit time accuracy, the high frequency P(Y) signal results in an accuracy of about 30
centimeters.
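These figures can be checked directly from the chip rates, assuming the 1% timing
resolution quoted above:

c = 299_792_458                      # speed of light, m/s

for name, chip_rate in [("C/A", 1.023e6), ("P(Y)", 10.23e6)]:
    chip_time = 1.0 / chip_rate      # seconds per chip
    err = 0.01 * chip_time * c       # 1% of a chip, as distance
    print(f"{name}: {err:.2f} m")
# C/A:  2.93 m   (~3 m)
# P(Y): 0.29 m   (~30 cm)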
Electronics errors are one of several accuracy-degrading effects outlined in the table below.
When taken together, autonomous civilian GPS horizontal position fixes are typically
accurate to about 15 meters (50 ft). These effects also reduce the more precise P(Y) code's
accuracy.
Atmospheric effects
Inconsistencies of atmospheric conditions affect the speed of the GPS signals as they pass
through the Earth's atmosphere and ionosphere. Correcting these errors is a significant
challenge to improving GPS position accuracy. These effects are smallest when the satellite
is directly overhead and become greater for satellites nearer the horizon since the signal is
affected for a longer time. Once the receiver's approximate location is known, a
mathematical model can be used to estimate and compensate for these errors.
Because ionospheric delay affects the speed of radio waves differently based on frequency—
a characteristic known as dispersion—both frequency bands can be used to help reduce this
error. Some military and expensive survey-grade civilian receivers compare the different
delays in the L1 and L2 frequencies to measure atmospheric dispersion, and apply a more
precise correction. This can be done in civilian receivers without decrypting the P(Y) signal
carried on L2, by tracking the carrier wave instead of the modulated code. To facilitate this
on lower cost receivers, a new civilian code signal on L2, called L2C, was added to the Block
IIR-M satellites, the first of which was launched in 2005. It allows a direct comparison of the L1
and L2 signals using the coded signal instead of the carrier wave.
The effects of the ionosphere generally change slowly, and can be averaged over time. The
effects for any particular geographical area can be easily calculated by comparing the GPS-measured position to a known surveyed location. This correction is also valid for other
receivers in the same general location. Several systems send this information over radio or
other links to allow L1 only receivers to make ionospheric corrections. The ionospheric data
are transmitted via satellite in Satellite Based Augmentation Systems such as WAAS, which
transmits it on the GPS frequency using a special pseudo-random number (PRN), so only
one antenna and receiver are required.
Humidity also causes a variable delay, resulting in errors similar to ionospheric delay, but
occurring in the troposphere. This effect is both more localized and changes more quickly
than ionospheric effects and is not frequency dependent. These traits make precise
measurement and compensation of humidity errors more difficult than ionospheric effects.
Changes in altitude also change the amount of delay due to the signal passing through less of
the atmosphere at higher elevations. Since the GPS receiver computes its approximate
altitude, this error is relatively simple to correct.
Multipath effects
GPS signals can also be affected by multipath issues, where the radio signals reflect off
surrounding terrain; buildings, canyon walls, hard ground, etc. These delayed signals can
cause inaccuracy. A variety of techniques, most notably narrow correlator spacing, have been
developed to mitigate multipath errors. For long delay multipath, the receiver itself can
recognize the wayward signal and discard it. To address shorter delay multipath from the
signal reflecting off the ground, specialized antennas may be used. Short delay reflections are
harder to filter out since they are only slightly delayed, causing effects almost
indistinguishable from routine fluctuations in atmospheric delay.
Multipath effects are much less severe in moving vehicles. When the GPS antenna is
moving, the false solutions using reflected signals quickly fail to converge and only the direct
signals result in stable solutions.
Ephemeris and clock errors
The navigation message from a satellite is sent out only every 12.5 minutes. In reality, the
data contained in these messages tend to be "out of date" by an even larger amount.
Consider the case when a GPS satellite is boosted back into a proper orbit; for some time
following the maneuver, the receiver's calculation of the satellite's position will be incorrect
until it receives another ephemeris update. The onboard clocks are extremely accurate, but
they do suffer from some clock drift. This problem tends to be very small, but may add up
to 2 meters (6 ft) of inaccuracy.
This class of error is more "stable" than ionospheric problems and tends to change over days
or weeks rather than minutes. This makes correction fairly simple by sending out a more
accurate almanac on a separate channel.
Selective availability
The GPS includes a feature called Selective Availability (SA) that introduces intentional,
slowly changing random errors of up to a hundred meters (300 ft) into the publicly available
navigation signals to confound, for example, guiding long range missiles to precise targets.
Additional accuracy was available in the signal, but in an encrypted form that was only
available to the United States military, its allies and a few others, mostly government users.
SA typically added signal errors of up to about 10 meters (30 ft) horizontally and 30 meters
(100 ft) vertically. The inaccuracy of the civilian signal was deliberately encoded so as not to
change very quickly, for instance the entire eastern U.S. area might read 30 m off, but 30 m
off everywhere and in the same direction. To improve the usefulness of GPS for civilian
navigation, Differential GPS was used by many civilian GPS receivers to greatly improve
accuracy.
During the Gulf War, the shortage of military GPS units and the wide availability of civilian
ones among personnel resulted in a decision to disable Selective Availability. This was ironic,
as SA had been introduced specifically for these situations, allowing friendly troops to use
the signal for accurate navigation, while at the same time denying it to the enemy. But since
SA was also denying the same accuracy to thousands of friendly troops, turning it off or
setting it to an error of zero meters (effectively the same thing) presented a clear benefit.
In the 1990s, the FAA started pressuring the military to turn off SA permanently. This
would save the FAA millions of dollars every year in maintenance of their own radio
navigation systems. The military resisted for most of the 1990s, but SA was eventually
"discontinued"; the amount of error added was "set to zero"[at midnight on May 1, 2000
following an announcement by U.S. President Bill Clinton, allowing users access to the
error-free L1 signal. Per the directive, the induced error of SA was changed to add no error
to the public signals (C/A code). Selective Availability is still a system capability of GPS, and
error could, in theory, be reintroduced at any time. In practice, in view of the hazards and
costs this would induce for US and foreign shipping, it is unlikely to be reintroduced, and
various government agencies, including the FAA, have stated that it is not intended to be
reintroduced.
The US military has developed the ability to locally deny GPS (and other navigation services)
to hostile forces in a specific area of crisis without affecting the rest of the world or its own
military systems.
One interesting side effect of the Selective Availability hardware is the capability to correct
the frequency of the GPS caesium and rubidium clocks to an accuracy of approximately
2 × 10^-13 (one in five trillion). This represented a significant improvement over the raw accuracy
of the clocks.
Relativity
According to the theory of relativity, due to their constant movement and height relative to
the Earth-centered inertial reference frame, the clocks on the satellites are affected by their
speed (special relativity) as well as their gravitational potential (general relativity). For the
GPS satellites, general relativity predicts that the atomic clocks at GPS orbital altitudes will
tick more rapidly, by about 45,900 nanoseconds (ns) per day, because they are in a weaker
gravitational field than atomic clocks on Earth's surface. Special relativity predicts that
atomic clocks moving at GPS orbital speeds will tick more slowly than stationary ground
clocks by about 7,200 ns per day. When combined, the discrepancy is 38 microseconds per
day, a difference of 4.465 parts in 10^10. To account for this, the frequency standard onboard
each satellite is given a rate offset prior to launch, making it run slightly slower than the
desired frequency on Earth; specifically, at 10.22999999543 MHz instead of 10.23 MHz.
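The quoted frequency offset can be reproduced from the two relativistic rates (the small
differences from the text's figures are rounding in the inputs):

seconds_per_day = 86400
gr_gain = 45_900e-9     # general relativity: clocks tick faster (s/day)
sr_loss = 7_200e-9      # special relativity: clocks tick slower (s/day)

net = gr_gain - sr_loss                       # ~38.7 us/day fast
fractional = net / seconds_per_day            # ~4.48e-10
f_nominal = 10.23e6
f_actual = f_nominal * (1 - fractional)
print(f"net drift: {net * 1e6:.1f} us/day")
print(f"fractional offset: {fractional:.3e}")
print(f"set frequency: {f_actual:.4f} Hz")    # ~10229999.9954 Hz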
GPS observation processing must also compensate for another relativistic effect, the Sagnac
effect. The GPS time scale is defined in an inertial system but observations are processed in
an Earth-centered, Earth-fixed (co-rotating) system, a system in which simultaneity is not
uniquely defined. The Lorentz transformation between the two systems modifies the signal
run time, a correction having opposite algebraic signs for satellites in the Eastern and
Western celestial hemispheres. Ignoring this effect will produce an east-west error on the
order of hundreds of nanoseconds, or tens of meters in position.
The atomic clocks on board the GPS satellites are precisely tuned, making the system a
practical engineering application of the scientific theory of relativity in a real-world system.
GPS interference and jamming
Since GPS signals at terrestrial receivers tend to be relatively weak, it is easy for other
sources of electromagnetic radiation to desensitize the receiver, making acquiring and
tracking the satellite signals difficult or impossible.
Solar flares are one such naturally occurring emission with the potential to degrade GPS
reception, and their impact can affect reception over the half of the Earth facing the sun.
GPS signals can also be interfered with by naturally occurring geomagnetic storms,
predominantly found near the poles of the Earth's magnetic field.
Man-made interference can also disrupt, or jam, GPS signals. In one well documented case,
an entire harbor was unable to receive GPS signals due to unintentional jamming caused by a
malfunctioning TV antenna preamplifier. Intentional jamming is also possible. Generally,
stronger signals can interfere with GPS receivers when they are within radio range, or line of
sight. In 2002, a detailed description of how to build a short range GPS L1 C/A jammer was
published in the online magazine Phrack.
The U.S. government believes that such jammers were used occasionally during the 2001 war
in Afghanistan and the U.S. military claimed to destroy a GPS jammer with a GPS-guided
bomb during the Iraq War. Such a jammer is relatively easy to detect and locate, making it an
attractive target for anti-radiation missiles.
Due to the potential for both natural and man-made noise, numerous techniques continue to
be developed to deal with the interference. The first is to not rely on GPS as a sole source.
According to John Ruley, "IFR pilots should have a fallback plan in case of a GPS
malfunction". Receiver Autonomous Integrity Monitoring (RAIM) is a feature now included
in some receivers, which is designed to provide a warning to the user if jamming or another
problem is detected. The U.S. military has also deployed its Selective Availability Anti-Spoofing Module (SAASM) in the Defense Advanced GPS Receiver (DAGR). In
demonstration videos, the DAGR is able to detect jamming and maintain its lock on the
encrypted GPS signals during interference which causes civilian receivers to lose lock.
Techniques to improve accuracy
Augmentation
Augmentation methods of improving accuracy rely on external information being integrated
into the calculation process. There are many such systems in place and they are generally
named or described based on how the GPS sensor receives the information. Some systems
transmit additional information about sources of error (such as clock drift, ephemeris, or
ionospheric delay), others provide direct measurements of how much the signal was off in
the past, while a third group provides additional navigational or vehicle information to be
integrated in the calculation process.
Examples of augmentation systems include the Wide Area Augmentation System,
Differential GPS, and Inertial Navigation Systems.
Precise monitoring
The accuracy of a calculation can also be improved through precise monitoring and
measuring of the existing GPS signals in additional or alternate ways.
The first is called Dual Frequency monitoring, and refers to systems that can compare two
or more signals, such as the L1 frequency to the L2 frequency. Since these are two different
frequencies, they are affected in different, yet predictable ways by the atmosphere and
objects around the receiver. After monitoring these signals, it is possible to calculate and
nullify that error.
Receivers that have the correct decryption key can relatively easily decode the P(Y)-code
transmitted on both L1 and L2 to measure the error. Receivers that do not possess the key
can still use a codeless tracking technique to compare the encrypted information on L1 and L2 to
gain much of the same error information. However, this technique is currently limited to
specialized surveying equipment. In the future, additional civilian codes are expected to be
transmitted on the L2 and L5 frequencies (see GPS modernization, below). When these
become operational, all users will be able to make the same comparison and directly measure
some errors.
A second form of precise monitoring is called Carrier-Phase Enhancement (CPGPS). The
error, which this corrects, arises because the pulse transition of the PRN is not
instantaneous, and thus the correlation (satellite-receiver sequence matching) operation is
imperfect. The CPGPS approach utilizes the L1 carrier wave, which has a period 1000 times
smaller than that of the C/A bit period, to act as an additional clock signal and resolve the
uncertainty. The phase difference error in the normal GPS amounts to between 2 and 3
meters (6 to 10 ft) of ambiguity. CPGPS working to within 1% of perfect transition reduces
this error to 3 centimeters (1 inch) of ambiguity. By eliminating this source of error, CPGPS
coupled with DGPS normally realizes between 20 and 30 centimeters (8 to 12 inches) of
absolute accuracy.
Relative Kinematic Positioning (RKP) is another approach for a precise GPS-based
positioning system. In this approach, determination of range signal can be resolved to an
accuracy of less than 10 centimeters (4 in). This is done by resolving the number of cycles in
which the signal is transmitted and received by the receiver. This can be accomplished by
using a combination of differential GPS (DGPS) correction data, transmitting GPS signal
phase information and ambiguity resolution techniques via statistical tests—possibly with
processing in real-time (real-time kinematic positioning, RTK).
GPS time and date
While most clocks are synchronized to Coordinated Universal Time (UTC), the atomic
clocks on the satellites are set to GPS time. The difference is that GPS time is not corrected
to match the rotation of the Earth, so it does not contain leap seconds or other corrections
which are periodically added to UTC. GPS time was set to match Coordinated Universal
Time (UTC) in 1980, but has since diverged. The lack of corrections means that GPS time
remains at a constant offset from International Atomic Time (TAI).
The GPS navigation message includes the difference between GPS time and UTC, which as
of 2006 is 14 seconds. Receivers subtract this offset from GPS time to calculate UTC and
specific timezone values. New GPS units may not show the correct UTC time until after
receiving the UTC offset message. The GPS-UTC offset field can accommodate 255 leap
seconds (eight bits) which, at the current rate of change of the Earth's rotation, is sufficient
to last until the year 2330.
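As a sketch of the receiver-side arithmetic (the function name and example epoch are ours; a real receiver reads the offset from the navigation message), converting GPS time to UTC is a single subtraction:

```python
from datetime import datetime, timedelta

GPS_EPOCH = datetime(1980, 1, 6)  # GPS week zero began 1980-01-06 00:00:00 UTC

def gps_to_utc(week: int, seconds_of_week: float, utc_offset: int = 14) -> datetime:
    """Convert GPS week/seconds-of-week to UTC by subtracting the GPS-UTC
    offset carried in the navigation message (14 s as of 2006)."""
    gps_time = GPS_EPOCH + timedelta(weeks=week, seconds=seconds_of_week)
    return gps_time - timedelta(seconds=utc_offset)

print(gps_to_utc(1400, 302400.0))  # 2006-11-08 11:59:46
```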
As opposed to the year, month, and day format of the Gregorian calendar, the GPS date is
expressed as a week number and a day-of-week number. The week number is transmitted as
a ten-bit field in the C/A and P(Y) navigation messages, and so it becomes zero again every
1,024 weeks (19.6 years). GPS week zero started at 00:00:00 UTC (00:00:19 TAI) on January
6, 1980 and the week number became zero again for the first time at 23:59:47 UTC on
August 21, 1999 (00:00:19 TAI on August 22, 1999). To determine the current Gregorian
date, a GPS receiver must be provided with the approximate date (to within 3,584 days) to
correctly translate the GPS date signal. To address this concern the modernized GPS
navigation messages use a 13-bit field, which only repeats every 8,192 weeks (157 years), and
will not return to zero until near the year 2137.
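The week-rollover arithmetic can be illustrated with a short sketch (the helper below is ours): given the 10-bit broadcast week and a date known to within 3,584 days, the full week number is the candidate congruent to the broadcast value modulo 1,024 that lies closest to the approximate date.

```python
from datetime import datetime

GPS_EPOCH = datetime(1980, 1, 6)

def resolve_week(week_mod_1024: int, approx_date: datetime) -> int:
    """Recover the full GPS week from the 10-bit broadcast value, given a
    date known to within +/-512 weeks (3,584 days) of the truth."""
    approx_week = (approx_date - GPS_EPOCH).days // 7
    # Pick the candidate congruent to the broadcast value (mod 1024)
    # that lies closest to the approximate week.
    n = round((approx_week - week_mod_1024) / 1024)
    return week_mod_1024 + 1024 * n

# A receiver decoding week 58 around October 2000 is really in week 1082:
print(resolve_week(58, datetime(2000, 10, 1)))  # 1082
```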
GPS modernization
Having reached the program's requirements for Full Operational Capability (FOC) on July
17, 1995, the GPS completed its original design goals. However, additional advances in
technology and new demands on the existing system led to the effort to modernize the GPS
system. Announcements from the Vice President and the White House in 1998 initiated
these changes, and in 2000 the U.S. Congress authorized the effort, referring to it as GPS
III.
The project aims to improve the accuracy and availability for all users and involves new
ground stations, new satellites, and four additional navigation signals. New civilian signals are
called L2C, L5 and L1C; the new military code is called M-Code. Initial Operational
Capability (IOC) of the L2C code is expected in 2008. A goal of 2013 has been established
for the entire program, with incentives offered to the contractors if they can complete it by
2011.
Applications
The Global Positioning System, while originally a military project, is considered a dual-use
technology, meaning it has significant applications for both the military and the civilian
industry.
Military
Concerning military applications, GPS allows accurate targeting of various military weapons
including ICBMs, cruise missiles and precision-guided munitions. It is used to navigate and
coordinate the movement of troops and supplies.
The GPS satellites also carry nuclear detonation detectors, which form a major portion of
the United States Nuclear Detonation Detection System.
Civilian
[Figure: A GPS antenna mounted on the roof of a hut housing a scientific experiment that requires precise timing.]
An almost unlimited number of civilian applications benefit from GPS signals, all of which utilize one or more of the three basic components of GPS: absolute location, relative movement, and time transfer.
The ability to determine the receiver's absolute location allows GPS receivers to perform as a
surveying tool or as an aid to navigation. The capacity to determine relative movement
enables a receiver to calculate local velocity and orientation, useful in vessels or observations
of the Earth. Being able to synchronize clocks to exacting standards enables time transfer,
which is critical in large communication and observation systems.
Finally, measurements of all these components enable researchers to better understand the Earth's environment, such as the atmosphere and the planet's gravity, by observing how those environmental components alter the propagation of GPS signals.
To help prevent civilian GPS guidance from being used in an enemy's military or improvised
weaponry, the US Government controls the export of civilian receivers. A US-based
manufacturer cannot generally export a GPS receiver unless the receiver contains limits
restricting it from functioning when it is simultaneously (1) at an altitude above 18
kilometers (60,000 ft) and (2) traveling at over 515 m/s (1,000 knots).
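A toy sketch of that rule (the names below are ours; real receivers implement this in firmware): a fix is denied only when both limits are exceeded at the same time.

```python
# Sketch of the export-control check as described above: the receiver must
# stop providing fixes only when BOTH limits are exceeded at once.
ALT_LIMIT_M = 18_000       # 18 km (~60,000 ft)
SPEED_LIMIT_MPS = 515      # ~1,000 knots

def fix_allowed(altitude_m: float, speed_mps: float) -> bool:
    return not (altitude_m > ALT_LIMIT_M and speed_mps > SPEED_LIMIT_MPS)

print(fix_allowed(12_000, 250))   # True: airliner, well inside both limits
print(fix_allowed(25_000, 900))   # False: both limits exceeded at once
```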
History
The design of GPS is based partly on similar ground-based radio navigation systems, such as LORAN and the Decca Navigator, developed in the early 1940s and used during World War II. Additional inspiration for the GPS system came when the Soviet Union
launched the first Sputnik in 1957. A team of U.S. scientists led by Dr. Richard B. Kershner
were monitoring Sputnik's radio transmissions. They discovered that, because of the
Doppler effect, the frequency of the signal being transmitted by Sputnik was higher as the
satellite approached, and lower as it continued away from them. They realized that since they
knew their exact location on the globe, they could pinpoint where the satellite was along its
orbit by measuring the Doppler distortion.
The first satellite navigation system, Transit, used by the United States Navy, was first
successfully tested in 1960. Using a constellation of five satellites, it could provide a
navigational fix approximately once per hour. In 1967, the U.S. Navy developed the
Timation satellite which proved the ability to place accurate clocks in space, a technology the
GPS system relies upon. In the 1970s, the ground-based Omega Navigation System, based
on signal phase comparison, became the first world-wide radio navigation system.
The first experimental Block-I GPS satellite was launched in February 1978. The GPS
satellites were initially manufactured by Rockwell International and are now manufactured by
Lockheed Martin.
A Global Navigation Satellite System (GNSS) receiver, which may use the GPS, GLONASS, or Beidou system, can be used in many applications.
Navigation
Automobiles can be equipped with GNSS receivers at the factory or as after-market
equipment. Units often display moving maps and information about location, speed,
direction, and nearby streets and landmarks.
[Figure: A GPS receiver in civilian automobile use.]
Aircraft navigation systems usually display a "moving map" and are often connected
to the autopilot for en-route navigation. Cockpit-mounted GNSS receivers and glass
cockpits are appearing in general aviation aircraft of all sizes, using technologies such
as WAAS or LAAS to increase accuracy. Many of these systems may be certified for
instrument flight rules navigation, and some can also be used for final approach and
landing operations. Glider pilots use GNSS Flight Recorders to log GNSS data
verifying their arrival at turn points in gliding competitions. Flight computers
installed in many gliders also use GNSS to compute wind speed aloft, and glide paths
to waypoints such as alternate airports or mountain passes, to aid en route decision
making for cross-country soaring.
Boats and ships can use GNSS to navigate all of the world's lakes, seas and oceans.
Maritime GNSS units include functions useful on water, such as "man overboard"
(MOB) functions that allow instantly marking the location where a person has fallen
overboard, which simplifies rescue efforts. GNSS may be connected to the ship's
self-steering gear and chartplotters using the NMEA 0183 interface. GNSS can also
improve the security of shipping traffic by enabling AIS.
[Figure: A GPS unit showing basic waypoint and tracking information, typically required for outdoor sport and recreational use.]
Heavy Equipment can use GNSS in construction, mining and precision agriculture.
The blades and buckets of construction equipment are controlled automatically in
GNSS-based machine guidance systems. Agricultural equipment may use GNSS to
steer automatically, or as a visual aid displayed on a screen for the driver. This is very
useful for controlled traffic and row crop operations and when spraying. Harvesters
with yield monitors can also use GNSS to create a yield map of the paddock being
harvested.
Bicycles often use GNSS in racing and touring. GNSS navigation allows cyclists to
plot their course in advance and follow this course, which may include quieter,
narrower streets, without having to stop frequently to refer to separate maps. Some
GNSS receivers are specifically adapted for cycling with special mounts and
housings.
Hikers, climbers, and even ordinary pedestrians in urban or rural environments can
use GNSS to determine their position, with or without reference to separate maps.
In isolated areas, the ability of GNSS to provide a precise position can greatly
enhance the chances of rescue when climbers or hikers are disabled or lost (if they
have a means of communication with rescue workers).
GNSS equipment for the visually impaired is available.
Spacecraft are now beginning to use GNSS as a navigational tool. The addition of a
GNSS receiver to a spacecraft allows precise orbit determination without ground
tracking. This, in turn, enables autonomous spacecraft navigation, formation flying,
and autonomous rendezvous. The use of GNSS in MEO, GEO, HEO, and highly
elliptical orbits is feasible only if the receiver can acquire and track the much weaker
(15 to 20 dB) GNSS side-lobe signals. This design constraint, and the radiation
environment found in space, prevents the use of COTS receivers.
Surveying and mapping
Surveying — Survey-Grade GNSS receivers can be used to position survey markers,
buildings, and road construction. These units use the signal from both the L1 and L2
GPS frequencies. Even though the L2 code data are encrypted, the signal's carrier
wave enables correction of some ionospheric errors. These dual-frequency GPS
receivers typically cost US$10,000 or more, but can have positioning errors on the
order of one centimeter or less when used in carrier phase differential GPS mode.
Mapping and geographic information systems (GIS) — Most mapping grade GNSS
receivers use the carrier wave data from only the L1 frequency, but have a precise
crystal oscillator which reduces errors related to receiver clock jitter. This allows
positioning errors on the order of one meter or less in real-time, with a differential
GNSS signal received using a separate radio receiver. By storing the carrier phase
measurements and differentially post-processing the data, positioning errors on the
order of 10 centimeters are possible with these receivers.
Geophysics and geology — High precision measurements of crustal strain can be
made with differential GNSS by finding the relative displacement between GNSS
sensors. Multiple stations situated around an actively deforming area (such as a
volcano or fault zone) can be used to find strain and ground movement. These
measurements can then be used to interpret the cause of the deformation, such as a
dike or sill beneath the surface of an active volcano.
Archeology — As archaeologists excavate a site, they generally make a three-dimensional map of the site, detailing where each artifact is found.
Other uses
Precise time reference — Many systems that must be accurately synchronized use
GNSS as a source of accurate time. GNSS can be used as a reference clock for time
code generators or Network Time Protocol (NTP) time servers. Sensors (for
seismology or other monitoring application), can use GNSS as a precise time source,
so events may be timed accurately. Time division multiple access (TDMA)
communications networks often rely on this precise timing to synchronize RF
generating equipment, network equipment, and multiplexers.
Mobile Satellite Communications — Satellite communications systems use a
directional antenna (usually a "dish") pointed at a satellite. The antenna on a moving
ship or train, for example, must be pointed based on its current location. Modern
antenna controllers usually incorporate a GNSS receiver to provide this information.
Emergency and Location-based services — GNSS functionality can be used by
emergency services to locate cell phones. The ability to locate a mobile phone is
required in the United States by E911 emergency services legislation. However, as of
September 2006 such a system is not in place in all parts of the country. GNSS is less
dependent on the telecommunications network topology than radiolocation for
compatible phones. Assisted GPS reduces the power requirements of the mobile
phone and increases the accuracy of the location. A phone's geographic location may
also be used to provide location-based services including advertising, or other
location-specific information.
Location-based games — The availability of hand-held GNSS receivers has led to
games such as Geocaching, which involves using a hand-held GNSS unit to travel to
a specific longitude and latitude to search for objects hidden by other geocachers.
This popular activity often includes walking or hiking to natural locations.
Geodashing is an outdoor sport using waypoints.
Aircraft passengers — Most airlines allow passenger use of GNSS units on their
flights, except during landing and take-off when other electronic devices are also
restricted. Even though consumer GNSS receivers have a minimal risk of
interference, a few airlines disallow use of hand-held receivers during flight. Other
airlines integrate aircraft tracking into the seat-back television entertainment system,
available to all passengers even during takeoff and landing.[1]
Heading information — The GNSS system can be used to determine heading
information, even though it was not designed for this purpose. A "GNSS compass"
uses a pair of antennas separated by about 50 cm to detect the phase difference in
the carrier signal from a particular GNSS satellite.[2] Given the positions of the
satellite, the position of the antenna, and the phase difference, the orientation of the
two antennas can be computed. More expensive GNSS compass systems use three
antennas in a triangle to get three separate readings with respect to each satellite. A
GNSS compass is not subject to magnetic declination as a magnetic compass is, and
doesn't need to be reset periodically like a gyrocompass. It is, however, subject to
multipath effects.
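As a sketch of the underlying geometry (the helper below is ours, and it assumes the whole-cycle carrier ambiguity has already been resolved), the carrier-phase difference between the two antennas gives the path-length difference, and hence the angle between the baseline and the satellite direction:

```python
import math

L1_WAVELENGTH = 0.1903  # metres, c / 1575.42 MHz

def baseline_angle(phase_diff_cycles: float, baseline_m: float = 0.5) -> float:
    """Angle (degrees) between the antenna baseline and the direction to a
    satellite, from the carrier-phase difference measured at two antennas."""
    path_diff = phase_diff_cycles * L1_WAVELENGTH  # metres of extra path
    return math.degrees(math.acos(path_diff / baseline_m))

# A difference of 1.25 carrier cycles over a 50 cm baseline:
print(baseline_angle(1.25))  # ~61.6 degrees
```

Combining such angles to several satellites of known position is what lets the receiver recover the full orientation of the baseline.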
GPS tracking systems use GNSS to determine the location of a vehicle, person, pet
or freight, and to record the position at regular intervals in order to create a log of
movements. The data can be stored inside the unit, or sent to a remote computer by
radio or cellular modem. Some systems allow the location to be viewed in real-time
on the Internet with a web-browser.
Recent innovations in GPS tracking technology include its use for monitoring the
whereabouts of convicted sex offenders, using GPS devices on their ankles as a
condition of their parole. This passive monitoring system allows law enforcement
officials to review the daily movements of offenders for a cost of only $5 or $10 per
day. Real-time or instant tracking is considered too costly for GPS tracking of
criminals.
Weather Prediction Improvements — Measurement of atmospheric bending of
GNSS satellite signals by specialized GNSS receivers in orbital satellites can be used
to determine atmospheric conditions such as air density, temperature, moisture and
electron density. Such information from a set of six micro-satellites launched in
April 2006, called the Constellation Observing System for Meteorology,
Ionosphere and Climate (COSMIC), has been shown to improve the accuracy of
weather prediction models.
Photograph annotation — Combining GNSS position data with photographs taken
with a (typically digital) camera allows one to look up the locations where the
photographs were taken in a gazetteer, and automatically annotate the photographs
with the name of the location they depict. The GNSS device can be integrated into
the camera, or the timestamp of a picture's metadata can be combined with a GNSS
track log.
Skydiving — Most commercial drop zones use a GNSS to aid the pilot to "spot" the
plane to the correct position relative to the dropzone that will allow all skydivers on
the load to be able to fly their canopies back to the landing area. The "spot" takes
into account the number of groups exiting the plane and the upper winds. In areas
where skydiving through cloud is permitted, the GNSS can be the sole visual
indicator when spotting in overcast conditions; this is referred to as a "GPS Spot".
Marketing — Some market research companies have combined GIS systems and
survey-based research to help companies decide where to open new branches, and
to target their advertising according to the usage patterns of roads and the sociodemographic attributes of residential zones.
Wreck diving — A popular variant of scuba diving is wreck diving. To locate the
desired shipwreck on the ocean floor, GPS is used to navigate to the approximate
location, and then the wreck is found using an echosounder.
Social networking — A growing number of companies are marketing cellular phones
equipped with GPS technology, offering the ability to pinpoint friends on custom-created
maps, along with alerts that inform the user when the other party is within a
programmed range. Many of these phones offer not only social networking
functions but also standard GPS navigation features, such as audible voice
commands for in-vehicle GPS navigation.
The GPRS system is used by GSM mobile phones, the most common mobile phone system in the world (as of 2004), for transmitting IP packets. The GPRS Core Network is the centralised part of the GPRS system and also provides support for WCDMA-based 3G networks. The GPRS core network is an integrated part of the GSM core network.
GPRS Core Network in General
[Figure: GPRS core network structure]
The GPRS Core Network (GPRS stands for General Packet Radio Services) provides
mobility management, session management and transport for Internet Protocol packet
services in GSM and WCDMA networks. The core network also provides support for other
additional functions such as charging and lawful interception. It was also proposed, at one
stage, to support packet radio services in the US TDMA system; however, in practice, most
of these networks are being converted to GSM, so this option is becoming largely irrelevant.
Like GSM in general, GPRS is an open standards driven system and the standardisation
body is the 3GPP.
GPRS Tunnelling Protocol (GTP)
GPRS Tunnelling Protocol is the defining IP protocol of the GPRS core network. Primarily it is
the protocol which allows end users of a GSM or WCDMA network to move from place to
place whilst continuing to connect to the internet as if from one location at the Gateway
GPRS Support Node (GGSN). It does this by carrying the subscriber's data from the
subscriber's current Serving GPRS Support Node (SGSN) to the GGSN which is handling the
subscriber's session. Three forms of GTP are used by the GPRS core network.
GTP-U: for transfer of user data in separated tunnels for each PDP context
GTP-C: for control signalling, including:
o setup and deletion of PDP contexts
o verification of GSN reachability
o updates, e.g. as subscribers move from one SGSN to another
GTP': for transfer of charging data from GSNs to the charging function
GGSNs and SGSNs (collectively known as GSNs) listen for GTP-C messages on UDP port
2123 and for GTP-U messages on port 2152. This communication happens within a single
network or may, in the case of international roaming, happen internationally, probably across
a GPRS Roaming Exchange (GRX).
The "Charging Gateway Function" (CGF) listens to GTP' messages sent from the GSNs on
UDP port 3386. The core network sends charging information to the CGF, typically
including PDP context activation times and the quantity of data which the end user has
transferred. However, this communication, which occurs within one network, is less
standardised and may, depending on the vendor and configuration options, use proprietary
encoding or even an entirely proprietary system.
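A toy sketch of these port assignments follows (Python, loopback addresses only; a real GSN binds the address of its Gn interface, and roaming traffic crosses a GRX):

```python
import socket

# Well-known UDP ports used between GSNs, as described above.
GTP_PORTS = {
    "GTP-C": 2123,  # control: PDP context setup/teardown, updates
    "GTP-U": 2152,  # user data tunnels, one per PDP context
    "GTP'":  3386,  # charging records from GSNs to the CGF
}

# A GSN conceptually binds one UDP socket per protocol variant:
sockets = {}
for name, port in GTP_PORTS.items():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", port))  # loopback here, just for illustration
    sockets[name] = s
print({name: s.getsockname()[1] for name, s in sockets.items()})
```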
GPRS Support Nodes (GSN)
A GSN is a network node which supports the use of GPRS in the GSM core network. All
GSNs should have a Gn interface and support the GPRS tunnelling protocol. There are two key
variants of the GSN: the GGSN and the SGSN, described below.
GGSN - Gateway GPRS Support Node
A gateway GPRS support node (GGSN) acts as an interface between the GPRS backbone
network and the external packet data networks (e.g. the Internet or X.25 networks). It
converts the GPRS packets coming from the SGSN into the appropriate packet data
protocol (PDP) format (e.g. IP or X.25) and sends them out on the corresponding packet
data network. In the other direction, PDP addresses of incoming data packets are converted
to the GSM address of the destination user. The readdressed packets are sent to the
responsible SGSN. For this purpose, the GGSN stores the current SGSN address of the
user and his or her profile in its location register. The GGSN is responsible for IP address
assignment and is the default router for the connected UE (User Equipment). The GGSN
also performs authentication and charging functions.
SGSN - Serving GPRS Support Node
A Serving GPRS Support Node (SGSN) is responsible for the delivery of data packets from
and to the mobile stations within its geographical service area. Its tasks include packet
routing and transfer, mobility management (attach/detach and location management), logical
link management, and authentication and charging functions. The location register of the
SGSN stores location information (e.g., current cell, current VLR) and user profiles (e.g.,
IMSI, address(es) used in the packet data network) of all GPRS users registered with this
SGSN.
Common SGSN Functions
Detunnel GTP packets from the GGSN (downlink)
Tunnel IP packets toward the GGSN (uplink)
Carry out mobility management as a standby-mode mobile moves from Routing Area
to Routing Area
Billing of user data
GSM/EDGE-Specific SGSN Functions
Carry up to about 60 kbit/s (150 kbit/s for EDGE) traffic per subscriber
Connect via Frame Relay or IP to the PCU using the Gb protocol stack
Accept uplink data to form IP packets
Encrypt downlink data, decrypt uplink data
Carry out mobility management to the level of a cell for connected-mode mobiles
WCDMA-Specific SGSN Functions
Carry up to about 300 kbit/s traffic per subscriber
Tunnel/detunnel downlink/uplink packets toward the RNC
Carry out mobility management to the level of an RNC for connected mode mobiles.
These differences in functionality have led some manufacturers to create specialist SGSNs
for each of WCDMA and GSM which do not support the other networks, whilst other
manufacturers have succeeded in creating both together, but with a performance cost due to
the compromises required.
Access Point
An access point is:
An IP network to which a mobile can be connected
A set of settings which are used for that connection
A particular option in a set of settings in a mobile phone
When a GPRS mobile phone sets up a PDP context, the access point is selected. At this
point an Access Point Name (APN) is determined.
Example: flextronics.mnc012.mcc345.gprs
Example: internet
Example: mywap
This access point name is then used in a DNS query to a private DNS network. This process
(called APN resolution) finally gives the IP address of the GGSN which should serve the
access point. At this point a PDP context can be activated.
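The sketch below illustrates APN resolution (the helper and the qualification rule for short APNs are our assumptions for illustration; the query itself goes to the operator's or GRX's private DNS, so it will not resolve on the public Internet):

```python
import socket

def apn_fqdn(apn: str, mnc: str, mcc: str) -> str:
    """Build the fully-qualified APN used in the DNS query. Short APNs like
    'internet' are qualified with the operator identifier (assumed rule)."""
    if apn.endswith(".gprs"):
        return apn  # already fully qualified
    return f"{apn}.mnc{mnc}.mcc{mcc}.gprs"

name = apn_fqdn("internet", "012", "345")
print(name)  # internet.mnc012.mcc345.gprs

# In a real SGSN this query is answered by the private GPRS DNS and
# returns the GGSN's IP address; on the public Internet it simply fails:
try:
    print("GGSN address:", socket.gethostbyname(name))
except socket.gaierror:
    print("not resolvable outside the private GPRS DNS")
```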
PDP Context
The PDP (Packet Data Protocol, e.g. IP, X.25, Frame Relay) context is a data structure
present on both the SGSN and the GGSN which contains the subscriber's session
information when the subscriber has an active session. When a mobile wants to use GPRS, it
must first attach and then activate a PDP context. This allocates a PDP context data
structure in the SGSN that the subscriber is currently visiting and in the GGSN serving the
subscriber's access point. The data recorded includes:
Subscriber's IP address
Subscriber's IMSI
Subscriber's tunnel ID (TEID) at the GGSN
Subscriber's tunnel ID (TEID) at the SGSN
The tunnel ID (TEID) is a number allocated by the GSN which identifies the tunnelled data
related to a particular PDP context.
There are two kinds of PDP contexts:
Primary PDP context
o Has a unique IP address associated with it
Secondary PDP context
o Shares an IP address with another PDP context
o Is created based on an existing PDP context (to share the IP address)
o May have different Quality of Service settings
A total of 11 PDP contexts (with any combination of primary and secondary) can co-exist.
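A minimal sketch of this data structure (field names and example values are ours, chosen to mirror the list above; real SGSN/GGSN state holds considerably more):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PDPContext:
    """Illustrative model of the per-session state held on SGSN and GGSN."""
    imsi: str                   # subscriber identity
    ip_address: str             # shared by a primary and its secondaries
    teid_ggsn: int              # tunnel endpoint ID allocated by the GGSN
    teid_sgsn: int              # tunnel endpoint ID allocated by the SGSN
    qos_profile: str = "default"
    primary: Optional["PDPContext"] = None  # None => this is a primary context

ctx1 = PDPContext("123456789012345", "10.0.0.7", teid_ggsn=0x1A2B, teid_sgsn=0x3C4D)
# A secondary context shares the IP address but may carry different QoS:
ctx2 = PDPContext(ctx1.imsi, ctx1.ip_address, 0x1A2C, 0x3C4E,
                  qos_profile="streaming", primary=ctx1)
print(ctx2.ip_address == ctx1.ip_address)  # True
```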
Reference Points and Interfaces
Within the GPRS core network standards there are a number of interfaces and reference
points (logical points of connection which probably share a common physical connection
with other reference points). Some of these names can be seen in the network structure
diagram on this page.
Interfaces in the GPRS network
Gb - Interface between the Base Station Subsystem and the SGSN; the transmission
protocol can be Frame Relay or IP.
Gn - IP Based interface between SGSN and other SGSNs and (internal) GGSNs.
DNS also shares this interface. Uses the GTP Protocol.
Gp - IP Based interface between internal SGSN and external GGSNs. Between the
SGSN and the external GGSN, there is the Border Gateway (which is essentially a
firewall). Also uses the GTP Protocol.
Ga - This interface serves the CDRs (accounting records) which are written in the
GSN and sent to the CG (Charging Gateway). It uses the GTP Protocol,
with extensions that support CDRs (called GTP' or GTP prime).
Gr - Interface between the SGSN and the HLR. Messages going through this
interface use the MAP3 Protocol.
Gd - Interface between the SGSN and the SMS Gateway. Can use MAP1, MAP2 or
MAP3.
Gs - Interface between the SGSN and the MSC (VLR). Uses the BSSAP+ Protocol.
This interface allows paging and tracking of station availability during data transfer.
When the station is attached to the GPRS network, the SGSN keeps track of which
RA (Routing Area) the station is attached to. An RA is part of a larger LA
(Location Area). When a station is paged, this information is used to conserve
network resources. When the station activates a PDP context, the SGSN knows the
exact BTS the station is using.
Gi - The interface between the GGSN and external networks
(Internet/WAP). Uses the IP protocol.
Ge - The interface between the SGSN and the SCP (Service Control Point). Uses the
CAP Protocol.
Gx - The on-line policy interface between the GGSN and the CRF (Charging Rules
Function). It is used for provisioning service data flow based charging rules. Uses the
Diameter Protocol.
Gy - The on-line charging interface between the GGSN and the OCS (Online
Charging System). Uses the Diameter Protocol (DCCA application).
Gz - The off-line charging interface between the GSN and the CG (Charging
Gateway). Uses the CDRs (Accounting records).
Reference Books:
- Handbook of Radio and Wireless Technology by Stan Gibilisco
- The Mobile Technology by Ron Schneiderman
- Wireless Data Technologies by Vern Dubendorf
Questionnaire:
Q-1
What are the basic parameters and techniques of communication?
Q-2
How can an Intelligent Network be divided into different classes?
Q-3
Explain the basics of SDH and its different STM levels.
Q-4
Label the different frequency ranges of the entire communication spectrum.
Q-5
What are the technical parameters of satellite communication?
Q-6
How can any mobile communication system be divided into sub-categories?
Q-7
Define WLL and its network applications.
Q-8
Differentiate between GSM, CDMA, TDMA and FDMA with reference to system architecture.
Q-9
How can the GPS system be made more robust against atmospheric and environmental conditions?
Q-10
Draw a block diagram of the basic mobile communication system model.