SATLIFE-IST-1-507675
D321
DVB-RCS regenerative (and transparent) system Services Provision and Devices Design Report

Contractual Date of Delivery to the CEC: 01/04/05
Actual Date of Delivery to the CEC: 31/05/06
Author(s): Antonio Sánchez, Carlos García, Carlos Miguel, Jaime J. Ruiz, Jesús Macías, Pedro A. Baigorri, Jose A. Torrijos
Participant(s): TID, UPM
Workpackage: WP3200
Est. person months: 27
Security: Pub
Nature: R
Version: 2.0
Total number of pages: 107
Abstract:
This document describes the design of the service provision solution and of the devices considered in the work conducted in WP2000 of the SATLIFE project. Based on the research activities carried out, and on the set of requirements established as their conclusion, a detailed design is performed in order to develop all the considered functionalities in the next work packages.
The goal of this document is to establish a clear guideline and design for the development of the DVB-RCS transparent and regenerative services. The design considers all the aspects to be covered in the development, in order to provide an efficient solution.
Keyword list: DVB-RCS, Satellite network, Multimedia, Broadcasting, Interactive services, Video on Demand, QoS in satellite access
CHANGE RECORD

VERSION   DATE         AUTHOR                         CHANGE
1.0       21/04/2005   SATLIFE PARTNERS (TID, UPM)    First version
1.1       15/11/2005   SATLIFE PARTNERS (TID, UPM)    LAN interconnection services revision; Internet Access PEP in star transparent mode
1.2       01/03/2006   SATLIFE PARTNERS (TID, UPM)    Update maximum UDP payload size for software download service to 65000 bytes
2.0       31/05/2006   SATLIFE PARTNERS (TID, UPM)    Digital TV chapter reorganization; Webcast chapter reorganization; Executive summary and conclusions updated
TABLE OF CONTENTS

1. EXECUTIVE SUMMARY ................................................................ 7
2. REFERENCES ........................................................................ 8
3. ABBREVIATIONS ..................................................................... 9
4. CONTENT SERVICE PROVIDER ....................................................... 12
  4.1 DIGITAL TV ..................................................................... 12
    4.1.1 Digital TV broadcasting ................................................... 13
    4.1.2 Interactive applications broadcasting .................................... 18
  4.2 VIDEO ON DEMAND ............................................................... 23
    4.2.1 Components and modules .................................................... 23
    4.2.2 Client-Server interface ................................................... 26
    4.2.3 Internal interfaces ....................................................... 27
    4.2.4 User interface ............................................................ 27
    4.2.5 Server capabilities ....................................................... 28
    4.2.6 Content protection system ................................................. 28
    4.2.7 Near video on demand ...................................................... 30
  4.3 MULTICONFERENCE ............................................................... 31
    4.3.1 SIP ....................................................................... 31
    4.3.2 Multiconference Service ................................................... 32
    4.3.3 Softphone ................................................................. 38
  4.4 INTERNET ACCESS ............................................................... 41
    4.4.1 Customer LAN configuration ................................................ 42
    4.4.2 RCST functionality ........................................................ 43
    4.4.3 Internet services ......................................................... 44
  4.5 LAN INTERCONNECTION ........................................................... 45
    4.5.1 Corporate facility LAN configuration ...................................... 46
    4.5.2 RCST functionality ........................................................ 46
    4.5.3 LAN interconnection services .............................................. 47
  4.6 STREAMING ..................................................................... 48
    4.6.1 Streaming protocols ....................................................... 48
    4.6.2 Unicast and multicast streaming ........................................... 50
    4.6.3 Streaming video servers ................................................... 51
  4.7 SOFTWARE DOWNLOAD ............................................................. 60
    4.7.1 Push mode ................................................................. 61
    4.7.2 High-level Architecture ................................................... 61
    4.7.3 Components and modules .................................................... 62
  4.8 WEBCAST WITH LOW INTERACTIVITY FOR CORPORATIVE SERVICES (E-LEARNING, ETC) ... 74
    4.8.1 Corporative Applications & Services ....................................... 74
    4.8.2 Service Functionalities ................................................... 75
5. DEVICES ............................................................................ 79
  5.1 MIDDLEWARE ..................................................................... 79
    5.1.1 Architecture overview ..................................................... 80
    5.1.2 External interfaces ....................................................... 81
    5.1.3 Managers .................................................................. 83
    5.1.4 Listener .................................................................. 87
    5.1.5 Multimedia stream player .................................................. 89
  5.2 ADSL AND SATELLITE INTERNETWORKING ........................................... 91
    5.2.1 Architecture Overview ..................................................... 91
    5.2.2 Components and modules .................................................... 92
    5.2.3 User Equipment ............................................................ 92
    5.2.4 Local Head End ............................................................ 94
    5.2.5 Provider Backbone ......................................................... 95
    5.2.6 IP Addressing (Possible Scenarios) ........................................ 97
  5.3 HOME GATEWAY ................................................................... 102
    5.3.1 Architecture overview ..................................................... 102
    5.3.2 Components and modules .................................................... 103
    5.3.3 Applications .............................................................. 106
    5.3.4 User Interface ............................................................ 106
6. CONCLUSIONS ........................................................................ 107
FIGURE INDEX
FIGURE 1. DIGITAL TV SCENARIO...................................................................................................................14
FIGURE 2. DIGITAL TV COMPONENTS ...........................................................................................................15
FIGURE 3. REGENERATIVE DIGITAL TV PROTOCOL STACK .................................................................17
FIGURE 4. INTERACTIVE APPLICATION SCENARIO..................................................................................19
FIGURE 5. VIDEO ON DEMAND SERVICE CONCEPTUAL ARCHITECTURE.........................................23
FIGURE 6. VOD SERVER MODULES .................................................................................................................24
FIGURE 7. VIDEO ON DEMAND PROTOCOL STACK ...................................................................................27
FIGURE 8. VOD CONTENT PROTECTION SYSTEM ......................................................................................29
FIGURE 9. MULTICONFERENCE SERVICE.....................................................................................................32
FIGURE 10. SATLIFE MULTICONFERENCE SCENARIOS ...........................................................................33
FIGURE 11. H.323 SCENARIO ..............................................................................................................................34
FIGURE 12. MULTICAST FLOW WITH MCU...................................................................................................35
FIGURE 13. INTERACTION BETWEEN SIP MCU AND SIP PROXY SERVER ..........................................35
FIGURE 14. SIP PROXY ARCHITECTURE........................................................................................................36
FIGURE 15. MULTICONFERENCE SERVICE AND RSWG............................................................................36
FIGURE 16. SOFTPHONE ......................................................................................................................................39
FIGURE 17. INTERNET ACCESS SCENARIO WITH A GATEWAY STATION ..........................................41
FIGURE 18. LAN INTERCONNECTION SCENARIO .......................................................................................45
FIGURE 19. QUICKTIME STREAMING SERVER............................................................................................52
FIGURE 20. DARWIN STREAMING SERVER ...................................................................................................54
FIGURE 21. WINDOWS MEDIA STREAMING SERVER.................................................................................56
FIGURE 22. REALNETWORKS STREAMING SERVER .................................................................................57
FIGURE 23. SATELLITE CONNECTION SCHEME .........................................................................................60
FIGURE 24. COMPONENT ALLOCATION ........................................................................................................62
FIGURE 25. BROADCAST CENTER SOFTWARE ARCHITECTURE ...........................................................63
FIGURE 26. PACKET DIVISION STRUCTURE .................................................................................................66
FIGURE 27. PATROL ARCHITECTURE ............................................................................................................70
FIGURE 28. USER TERMINAL SOFTWARE ARCHITECTURE ....................................................................72
FIGURE 29. LIVE EVENTS/COURSES MULTIMEDIA FLOWS (USER PLANE) ........................................77
FIGURE 30. DEFERRED EVENTS/COURSES MULTIMEDIA FLOWS (USER PLANE)............................77
FIGURE 31. ON-DEMAND MEDIA REPOSITORY MULTIMEDIA FLOWS (USER PLANE) ...................78
FIGURE 32. MIDDLEWARE ARCHITECTURE ................................................................................................80
FIGURE 33. STORAGE MANAGER PROCESS..................................................................................................84
FIGURE 34. SERVICE MANAGER PROCESS....................................................................................................85
FIGURE 35. SCREEN MANAGER PROCESS .....................................................................................................87
FIGURE 36. LISTENER PROCESS .......................................................................................................................88
FIGURE 37. MULTIMEDIA STREAM PLAYER PROCESS.............................................................................90
FIGURE 38. COMMON DSL ARCHITECTURE .................................................................................................91
FIGURE 39. DSL AND SATELLITE INTERNETWORKING ARCHITECTURE ..........................................92
FIGURE 40. COMMON DSL USER EQUIPMENT .............................................................................................93
FIGURE 41. VOIP PROTOCOL STACK USING AN IP DSLAM......................................................................94
FIGURE 42. LOCAL HEAD END SCHEMA ........................................................................................................95
FIGURE 43. SERVICE PROVIDER SCHEMA ....................................................................................................96
FIGURE 44. PPPOE SINGLE USER SCENARIO ................................................................................................98
FIGURE 45. PPPOE MULTI USER SCENARIO .................................................................................................98
FIGURE 46. PPPOE SINGLE USER SCENARIO PROTOCOL STACK..........................................................99
FIGURE 47. ROUTING SINGLE USER SCENARIO ........................................................................................100
FIGURE 48. ROUTING MULTI USER SCENARIO .........................................................................................100
FIGURE 49. ROUTING SINGLE USER SCENARIO PROTOCOL STACK..................................................101
FIGURE 50. HOME GATEWAY ARCHITECTURE.........................................................................................103
FIGURE 51. HOME GATEWAY PHYSICAL INTERFACES..........................................................................105
FIGURE 52. HOME GATEWAY HARDWARE ARCHITECTURE................................................................105
1. EXECUTIVE SUMMARY
This document is an overview of all of the aspects and architecture of the design for the
implementation of the SATLIFE system, as well as a general description of the different parts
and building blocks of the system from the service point of view.
Since the development of the system will take place through successive design and implementation cycles, the overall system design will undergo revisions. For this reason, in the initial versions some parts of this document may be described in more detail than others; subsequent work will refine the remaining elements, and the following deliverables will provide a more complete specification of them.
The focus of WP3200 is on service provisioning issues as well as on devices. Service provisioning covers the audio and video aspects of the client services, including the formats of streaming contents and the new formats that could impact the satellite industry. It also covers the design of all the services that will be integrated with the DVB-S/RCS network; these will be the first services tested over a regenerative DVB-RCS architecture, so they will be fundamental for the future of DVB-RCS services.
Hence, the following topics will be covered:
• Digital TV: The impact of Digital TV will be evaluated, taking the mandatory requirements from the previous work packages and considering the possibility of remote video contributions to the existing Digital TV networks. Using the satellite in regenerative mode, Digital TV contents will be broadcast through an RCST directly to standard DVB-S set-top-boxes, enabling microbroadcasters to upload their own contributions directly. We will also describe how to integrate new video format features in the DVB-RCS regenerative networks, and how to include interactive applications using the return channel of the satellite.
• Video on Demand: Video on Demand is a very innovative service for which the new capabilities of the satellite platform are well suited. We will use the mesh connectivity provided by the SATLIFE system to reduce the delay perceived by the users.
• Videoconference: we will describe how multimedia signalling protocols like SIP can be integrated in the satellite network, and how to interface with other networks. Services with low interactivity, such as E-learning, will be covered too.
• Internet: the goal of providing Internet access with star connectivity will be covered, including the new functionalities of the DVB-RCST that increase the performance of Internet access.
• Software download: we describe the design of a software download architecture that will use the regenerative satellite to broadcast its contents to standard equipment without using a complete hub or a satellite link gateway.
From a device point of view, three topics will be covered:
• Home gateway: the home gateway is a powerful device which allows the user to control home automation (domotic) networks and provides many other functionalities. We will describe how to integrate it into a satellite network and how to interface with it from other networks.
• Rural ADSL: we will describe the design of the interfacing of ADSL with the satellite
network, in order to provide a low-cost access network for isolated places.
• Middleware: a middleware for set-top boxes will be designed to help programmers to
develop applications and services in a hardware independent way.
2. REFERENCES
[1] SATLIFE D210, "DVB-RCS Regenerative (and Transparent) System Services Requirements Report".
[2] SATLIFE D220, "DVB-RCS Regenerative (and Transparent) Network Aspects Report".
[3] SATLIFE D230, "DVB-RCS Regenerative (and Transparent) Technology and Subsystems Requirements Report".
[4] ITU-T and ISO/IEC, "Amendment 3: Transport of AVC video data over ITU-T Rec. H.222.0 | ISO/IEC 13818-1 streams", ITU-T Recommendation H.222.0 (2000)/Amd.3 (03/2004) - ISO/IEC International Standard 13818-1:2000/Amd.3:2004, March 2004.
[5] UDP | Postel, J., "User Datagram Protocol", RFC 768, USC/Information Sciences Institute, August 1980. (http://www.faqs.org/rfcs/rfc768.html)
[6] MPEG-2 Video | ISO/IEC 13818-2:2000, "Information technology - Generic coding of moving pictures and associated audio information: Video".
[7] RTSP | Schulzrinne, H., "Real Time Streaming Protocol", RFC 2326, Columbia University, April 1998. (ftp://ftp.isi.edu/in-notes/rfc2326.txt)
[8] DVB-CA | ISO/IEC 13818-1, ETSI EN 300 468, "Digital Video Broadcasting: Specification for Service Information in DVB Systems". (http://webapp.etsi.org/action%5COP/OP20041022/en_300468v010601o.pdf)
[9] RFC 1631, "The IP Network Address Translator (NAT)", K. Egevang, P. Francis, May 1994.
[10] RFC 2663, "IP Network Address Translator (NAT) Terminology and Considerations", P. Srisuresh, M. Holdrege, August 1999.
[11] RFC 1889, "RTP: A Transport Protocol for Real-Time Applications", Audio-Video Transport Working Group, H. Schulzrinne, S. Casner, R. Frederick, V. Jacobson, January 1996.
[12] RFC 3550, "RTP: A Transport Protocol for Real-Time Applications", H. Schulzrinne, S. Casner, R. Frederick, V. Jacobson, July 2003.
[13] RFC 2327, "SDP: Session Description Protocol", M. Handley, V. Jacobson, April 1998.
[14] DVB-MHP | ETSI TS 102 812, "Multimedia Home Platform (MHP) Specification 1.1".
[15] MPEG-4 | ISO/IEC 14496-10, "Coding of audio-visual objects - Part 10: Advanced Video Coding"; ETSI EN 301 192, "Digital Video Broadcasting MultiProtocol Encapsulation".
3. ABBREVIATIONS
For the purposes of the present document the following abbreviations apply:
3GPP        Third-Generation Partnership Project
AAC         Advanced Audio Coding
AIT         Application Information Table
AMR         Adaptive Multi Rate
API         Application Programming Interface
ASI         Asynchronous Serial Interface
ASP         Application Service Provider
AVC         Advanced Video Coding
BAT         Bouquet Association Table
CA          Conditional Access
CAT         Conditional Access Table
CIF         Common Intermediate Format
CPU         Central Processor Unit
CRA         Committed Rate Assignment
DB          Database
DHCP        Dynamic Host Configuration Protocol
DSM-CC      Digital Storage Media - Command & Control
DTV         Digital TV
DVB         Digital Video Broadcasting
DVB-CA      Digital Video Broadcasting - Conditional Access
DVB-MPE     Digital Video Broadcasting - Multi-protocol Encapsulation
DVB-RCS     Digital Video Broadcasting - Return Channel Satellite
DVB-RCST    Digital Video Broadcasting - Return Channel Satellite Terminal
DVB-S       Digital Video Broadcasting - Satellite
DVD         Digital Versatile/Video Disc
ECM         Electronic Counter Measure
EIT         Event Information Table
FCA         Full Capacity Assignment
FIFO        First In First Out
FTP         File Transfer Protocol
HDTV        High Definition TV
HPA         High Power Amplifier
HTML        Hypertext Mark-up Language
HTTP        Hypertext Transfer Protocol
HTTPS       Secure Hypertext Transfer Protocol
IETF        Internet Engineering Task Force
IF          Intermediate Frequency
IGMP        Internet Group Management Protocol
IIS         Microsoft Internet Information Services
IP          Internet Protocol
IRD         Integrated Receiver Decoder
ISDN        Integrated Services Digital Network
ISP         Internet Service Provider
KB          Kilobytes
KM          Knowledge Modules
LAN         Local Area Network
LNB         Low Noise Block
MAN         Metropolitan Area Network
MHP         Multimedia Home Platform
MIB         Management Information Base
MMS         Multimedia Messaging
MP3         MPEG-1 Audio Layer 3
MPEG        Motion Picture Experts Group
MPEG-TS     MPEG Transport Stream
MPTS        Multiple Program Transport Stream
MCU         Multipoint Control Unit
NAPT        Network Address Port Translation
NAT         Network Address Translation
NCC         Network Control Center
NIT         Network Information Table
NVoD        Near Video-on-Demand
OBP         On-Board Processor
PAL         Phase Alternating Line
PAT         Program Association Table
PCI         Peripheral Component Interconnect
PEP         Performance Enhancing Proxy
PID         Packet Identifier
PMT         Program Map Table
POTS        Plain Old Telephone Service
PSI         Program Specific Information
QCIF        Quarter Common Intermediate Format
QoS         Quality of Service
QPSK        Quadrature Phase Shift Keying
QTSS        QuickTime Streaming Server
RAM         Random Access Memory
RBDC        Rate Based Dynamic Capacity
RCS         Return Channel Satellite
RCST        Return Channel Satellite Terminal
ROM         Read Only Memory
RSGW        Regenerative Satellite Gateway
RTP         Real Time Protocol
RTSP        Real Time Streaming Protocol
SAP         Session Announcement Protocol
SD          Standard Definition
SDT         Service Description Table
SDP         Session Description Protocol
SIP         Session Initiation Protocol
SMATV       Satellite Master Antenna TeleVision
SNMP        Simple Network Management Protocol
SPTS        Single Program Transport Stream
STB         Set Top Box
TCP         Transmission Control Protocol
TDT         Time and Date Table
TS          Transport Stream
UDP         User Datagram Protocol
URI         Uniform Resource Identifier
URL         Uniform Resource Locator
VBDC        Volume Based Dynamic Assignment
VCR         Video Cassette Recorder
VGA         Video Graphics Array
VLC         Video LAN Client
VoD         Video On Demand
VoIP        Voice Over IP
VPN         Virtual Private Network
VSN         Virtual Satellite Network
VSP         Video Service Provider
WM          Windows Media
WWW         World Wide Web
4. CONTENT SERVICE PROVIDER
As previously stated, the tasks for this work package fall into two categories: content service provider elements and device development. This chapter covers the content service provider items to be developed during the work package activities.
What will be explained is a complete overview of design aspects needed to fulfil all of the
requirements stated in previous documents and to develop all of the elements of which the
different services consist. This includes a general review of functional capabilities of the service,
an architectural description in order to cover all elements required, an explanation of features and
components of this architecture, the protocols and functional modules included, and all of the
topics needed to achieve the goals previously stated.
The services to be explained have already been defined in document D210, and their technology, networking and subsystems aspects have been presented in the subsequent documents D220 and D230. In addition, some aspects will be explained in more detail as separate elements; for example, interactive applications, included in the Digital TV service, are considered an important new topic to be thoroughly discussed. Multimedia signalling, the key feature for the Multiconference and E-learning services, will be completely defined.
4.1 DIGITAL TV
Digital TV broadcasting is nowadays the most classic service in satellite networks; through this service it is possible to provide video, audio and other data services to a wide range of users without a large infrastructure deployment.
Digital TV also provides new opportunities for public TV to, for instance, provide interactive
education and training programs never before possible under the current analog standard of
broadcasting. Digital TV is the biggest change in the TV medium since the advent of television
itself.
Compared with the analog system, Digital TV is a very different technology: all DTV programs are sent as data, and this data can carry anything from a simple picture to a very complex one. In transparent satellite DTV a TV station sends the data to the satellite, and the satellite returns this same data over the air to the coverage area, where the parabolic antennas receive it and pass the information to the IRD, which instantly decodes the received data and presents it on the TV.
Digital TV would not be possible without the ability to squeeze out some unneeded parts of TV
pictures and sound (known as compression). Compression simply reduces the amount of a digital
picture's tiniest details by removing anything that is not critical for us to see. This shortcut does
not affect the way we see moving pictures on the screen, but it does mean they take up less
signal space, that is spectrum, and are easier to broadcast with other services at the same time.
Because of the ability to manipulate digital information, it is possible to divide a TV station's signal into multiple parts. This allows each Digital TV station to separate its large single on-the-air signal into several TV signals or channels. Therefore, a single station may broadcast many different channels at one time. Depending on the transponder bandwidth and on the compression used, it is possible to broadcast more than 10 channels per transponder.
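As an illustrative back-of-the-envelope figure (typical values, not taken from this document): a 36 MHz transponder carrying a DVB-S QPSK signal with rate 3/4 coding offers roughly 38 Mbit/s of useful payload; with standard-definition MPEG-2 services encoded at about 3.5 Mbit/s each, this corresponds to on the order of ten TV services per transponder, consistent with the figure quoted above. Stronger compression (e.g. H.264) raises that count further.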
Other advantages of Digital TV are:
• Digital high resolution quality picture and CD-quality sound.
• It's reliable no matter what the weather is.
• Provides Dolby digital sound.
• Access to on demand contents.
• Access to Pay Per View channels
• Picture-in-picture capabilities without additional hardware.
• Interactive program guide with parental controls, program reminder features, program
summaries
• Ready for HDTV.
• Time shifting: watch prime-time shows when it's convenient for you.
The following chapters will describe the architecture design for two scenarios: Digital TV
broadcasting of video and interactive applications.
4.1.1 Digital TV broadcasting
In the SATLIFE project, Digital TV broadcasting is oriented to provide a new method for broadcasters to distribute TV content directly to end users using DVB-RCS terminals.
In traditional Digital TV broadcasting solutions, the uplink consists of a DVB-S signal using a whole satellite transponder carrying several TV services; in this case, the satellite receives the signal with an onboard dish, amplifies it and uses another dish to beam the DVB-S signal back to Earth, where viewers can pick it up.
In the scenario proposed in the SATLIFE project, the uplink consists of a DVB-RCS signal carrying one or several TV services (depending on the quality and the compression used); this signal is received by the satellite, which processes it and transforms it into a new DVB-S signal that is sent back to Earth and received by viewers.
The main advantages of this solution are that a whole hub is not needed for the distribution of Digital TV contents, the transmission antenna is smaller, it allows TV contributions from microbroadcasters or from remote regions, and the end users do not need to change their satellite terminals.
In the scenario proposed, the satellite will be used in regenerative mode, as shown in the
following figure:
Figure 1. Digital TV scenario
4.1.1.1 Components and modules
The conceptual architecture of this service is divided in the following modules:
• Satellite TV broadcast center: the reduced ground station in which the TV signal is
composed and transmitted to the satellite.
• Satellite system: the equipment which returns the signal over-the-air to the coverage area.
• User equipment: the equipment used to translate the signal and present the contents to the
users.
The components of the Digital TV scenario are presented in the next figure:
Figure 2. Digital TV components
4.1.1.1.1 Satellite TV broadcast center
In this regenerative scenario the satellite TV broadcast center is simplified to use the minimum number of components; this way it is possible to have a more economical and portable system.
The main components of this satellite TV broadcast center are detailed below.
• Flux server: The flux server is a Unix machine that injects compressed video files into the Service Provider unit. The flux server shall be able to perform the following tasks (a hypothetical control sketch follows this component list):
  o Add a new stream to the transport stream
  o Remove a stream from the transport stream
  o Start a stream
  o Stop a stream
  o Set or change the file associated to a stream
  o Set or change the bit rate associated to a stream
  o Set or change the PID associated to a stream
• Service Provider unit (SP): The Service Provider unit is an equipment based on an MPEG-2/IP multiplexer that provides Digital TV services. It accepts encoded streams to be transmitted over IP. The encoded streams may be stored on a video server and/or delivered to the multiplexer for encapsulation and generation of the transport stream.
  The SP shall be able to send MPEG-2 TS video over RTP/UDP/IP, encoded with different codecs (MPEG-2, MPEG-4, H.264 and WM9).
• SP DVB-RCST: This is the Service Provider DVB-RCS terminal. From the session point of view it behaves in the same way as a normal RCST, but it has capacity reserved and provisioned in the NCC because of the amount of real-time traffic these terminals have to transmit. The SP RCST is able to receive MPEG-2 TS over RTP/UDP/IP on the Ethernet interface from the SP and transmit it (removing the RTP, UDP and IP headers) over the satellite.
• Antenna: The antenna is just a satellite dish designed to focus on a specific broadcast source. In the satellite scenario a smaller antenna dish is enough, because only a piece of a full transport stream is transmitted, so the transmitted bandwidth is lower.
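The control interface of the flux server is not specified at this level of the design; the following is a minimal illustrative sketch, in Python and with invented names, of how the stream-management tasks listed for the flux server above could be exposed programmatically.

    # Illustrative only: a hypothetical wrapper around the flux server's
    # stream-management tasks; class, method and field names are invented.
    from dataclasses import dataclass

    @dataclass
    class Stream:
        pid: int           # MPEG-2 TS packet identifier carrying this stream
        filename: str      # compressed video file injected into the SP unit
        bitrate_bps: int   # playout bit rate of the file
        running: bool = False

    class FluxServerControl:
        def __init__(self):
            self.streams = {}                        # pid -> Stream

        def add_stream(self, pid, filename, bitrate_bps):
            self.streams[pid] = Stream(pid, filename, bitrate_bps)

        def remove_stream(self, pid):
            self.streams.pop(pid, None)

        def start_stream(self, pid):
            self.streams[pid].running = True         # would start the file playout

        def stop_stream(self, pid):
            self.streams[pid].running = False

        def set_file(self, pid, filename):
            self.streams[pid].filename = filename

        def set_bitrate(self, pid, bitrate_bps):
            self.streams[pid].bitrate_bps = bitrate_bps

        def set_pid(self, old_pid, new_pid):
            stream = self.streams.pop(old_pid)
            stream.pid = new_pid
            self.streams[new_pid] = stream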
4.1.1.1.2 Satellite system
In Digital TV broadcasting in regenerative mode, the satellite picks up the DVB-RCS signal with
an onboard dish, amplifies, demodulates and de-multiplexes the received signal and composes a
new DVB-S transport stream that is transmitted back to Earth using another dish, where viewers
can pick it up.
4.1.1.1.3 User equipment
The main user components are detailed in the following lines.
• Antenna: The satellite dish on the receiving end can't transmit information; it can only receive it. The receiving dish works in the exact opposite way of the transmitter. When a beam hits the curved dish, the parabolic shape reflects the radio signal inward onto a particular point, just like a concave mirror focuses light onto a particular point.
  The central element in the feed horn is the low noise block converter, or LNB. The LNB amplifies the radio signal bouncing off the dish and filters out the noise (radio signals not carrying programming). The LNB passes the amplified, filtered signal to the satellite receiver inside the viewer's house.
• Set top box: The end component in the entire satellite TV system is the set-top-box, IRD or receiver. The set-top-box has four essential tasks:
  o It de-scrambles (when necessary) the encrypted signal. In order to unlock the signal, the receiver needs the proper decoder chip for that programming package. The provider can communicate with the chip, via the satellite signal, to make necessary adjustments to its decoding programs. The provider may occasionally send signals that disrupt illegal de-scramblers, as an electronic counter measure (ECM) against illegal users.
  o It takes the digital MPEG-2 signal and converts it into an analog format that a standard television can recognize. In Europe, the receivers convert the digital signal to the analog PAL format. Some dish and receiver setups can also output an HDTV signal.
  o It extracts the individual channels from the larger satellite signal. When you change the channel on the receiver, it sends just the signal for that channel to your TV.
  o It keeps track of pay-per-view programs and periodically phones a computer at the provider's headquarters to communicate billing information.
  Depending on the codec used to compress the video and audio, a specific set-top-box is needed to decode that video and audio; for instance, if an H.264 video is transmitted from the satellite broadcast center, a set-top-box that supports H.264 video will be needed to decode it.
• TV set: The TV set itself.
4.1.1.2 SP – SP RCST Interface
The communications interface between the flux server and the SP is a MPEG-2 transport stream
through an ASI interface.
The communications interface between SP and SP RCST is defined in Annex 13 of the
document D220 DVB-RCS regenerative (and transparent) Network Aspects Report.
4.1.1.3 Protocol stack
The protocol stack can be seen in the next figure:
Figure 3. Regenerative Digital TV protocol stack
The SP physical interface is a 10/100 Mbps BaseT Ethernet port, where MPEG-2 TS packets (containing audio, video, service information or other data) are transmitted over RTP/UDP/IP.
The SP RCST receives these IP packets and removes the IP, UDP and RTP headers to obtain the original MPEG-2 TS packets, which are transmitted over the air to the satellite.
The satellite composes a new DVB-S transport stream from the DVB-RCS signal and transmits it to the IRDs.
The IRD receives the DVB-S signal and recovers the MPEG-2 TS packets and the audio, video, service information or other data that were sent by the SP.
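To make the encapsulation concrete, the sketch below (illustrative only, not the project's implementation) follows the common practice of RFC 2250: up to seven 188-byte TS packets are grouped behind a 12-byte RTP header and sent in one UDP datagram, and the receiver, conceptually like the SP RCST, simply strips that header to recover the original TS packets. The destination address is an example, and a real sender would also pace the datagrams and fill in proper RTP timestamps.

    import socket, struct

    TS_PACKET_SIZE = 188
    TS_PER_RTP = 7                  # 7 x 188 = 1316 bytes, fits comfortably in one UDP datagram

    def rtp_header(seq, timestamp, ssrc, payload_type=33):
        # 12-byte RTP header; payload type 33 is the value registered for MPEG-2 TS
        return struct.pack("!BBHII", 0x80, payload_type, seq & 0xFFFF,
                           timestamp & 0xFFFFFFFF, ssrc)

    def send_ts_file(path, dst=("192.0.2.10", 1234), ssrc=0x12345678):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        seq = 0
        with open(path, "rb") as f:
            while True:
                chunk = f.read(TS_PACKET_SIZE * TS_PER_RTP)
                if not chunk:
                    break
                # timestamp left at 0 to keep the sketch short; a real sender derives
                # it from the TS bit rate or the PCR
                sock.sendto(rtp_header(seq, 0, ssrc) + chunk, dst)
                seq += 1

    def strip_rtp(datagram):
        # Receiver side: drop the 12-byte RTP header and return the raw TS packets
        payload = datagram[12:]
        return [payload[i:i + TS_PACKET_SIZE]
                for i in range(0, len(payload), TS_PACKET_SIZE)]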
4.1.2 Interactive applications broadcasting
One of the major advantages of Digital TV transmission is the possibility of transmitting applications that enable the final users to communicate with the interactive service providers, and permit viewers to interact on-screen with extra material. These applications are executed on the set top box and are programmed in a language with a specific API (Application Program Interface) that gives access to the multimedia functions (audio, video, data, etc.) and to the return channel.
This enhanced TV permits viewers to watch programs and interact with extra features that are
fed as extra data within the actual broadcasts.
The data transmitted inside the transport streams for interactive applications depends on the specific API used during the application development. Nowadays there are several proprietary and incompatible APIs on the market (OpenTV, Mediahighway, etc.), but the DVB group has developed the DVB-MHP (Multimedia Home Platform) standard, which defines an open API based on Java and HTML for interactive applications.
The proposed architecture for this scenario is similar to that used for broadcasting Digital TV contents. In order to broadcast interactive applications, a flux server that generates an endless carousel of MHP-based interactive applications through its ASI output will be needed.
Some interactive applications provide local interactivity, whereby the viewer interacts with the
receiver or IRD, without the need for outside communications. For interactive applications that
require support for remote interactivity (e.g., to contact with a service provider to perform
purchases or other financial transactions online), the IRD needs access to a return channel. To
enable remote interactivity, a bi-directional interactive channel needs to be maintained between
the IRD and a server hosting the interactive session. In this architecture a DVB-RCS terminal
could be used in the client side to provide the return channel to several IRDs.
This scenario uses the existing infrastructure, adding an interactive application provider with an
RCST uplinking to the satellite. User terminals must have IP return support, and are connected to
an RCST which can be deployed for a single building or neighbourhood, using common
infrastructure or SMATV structures. The next figure shows this structure.
Figure 4. Interactive application scenario
DVB-S is used to send the interactive applications downstream to the IRDs, so users' terminals have to support DVB-S for receiving data. On the other hand, the return channel is via IP and is completely independent of the forward channel.
This architecture considers that the interactive application provider is in the same satellite network, but it could also be outside it and have to be accessed via a gateway. Even if it is inside the satellite network, it may be placed on a different beam from the feeder and/or the users.
4.1.2.1 Components and modules
The conceptual architecture for broadcasting applications is similar to the broadcast of Digital
TV in regenerative mode, with the following details:
• There is a flux server that generates a complete transport stream adding the PIDs related
with the AIT and with the MHP object carousels. The output of the flux server has an
ASI interface that can be connected to the ASI VSP input.
The flux server is not mandatory because the AIT and MHP object carousels can be
inserted in the VSP as MPEG-2 TS files that can be broadcasted.
• The multiplexer inside the VSP shall transmit in the PMT the related descriptors
associated with the interactive applications. This data is static per interactive application.
• The set-top-box shall be a standard DVB-S set top box that supports MHP defined in
ETSI TS 102 812.1 to be able to receive and execute MHP applications.
The conceptual architecture of this service is divided into the following modules:
• Interactive application provider: the interactive application provider is a server that receives the requests from the users through the return channel and replies to the users using the same channel. The nature of the users' requests depends on the type of interactive application that the user is running (e.g. from a simple audience measurement to be stored, up to a financial request).
The interactive application provider and the satellite TV broadcast center are completely
independent, so they could be (or not) located in the same place.
A DVB RCST will be used to provide connectivity to the interactive application provider.
• Satellite system: the satellite system used for the return channel and the satellite system
to broadcast the interactive applications are independent. This satellite system could be
regenerative or not.
• User equipment: the equipment used will be a standard DVB-S set top box that supports
MHP defined in ETSI TS 102 812.1 to be able to receive and execute MHP applications.
The interactive channel of this set top box must be IP.
The set top box is connected to a DVB RCST that provides connectivity to the interactive
application provider. It's possible to connect several set-top-boxes to the same DVB
RCST in order to provide connectivity to all of them.
4.1.2.2 MHP interactive applications
The interactive applications used in SATLIFE will be MHP, so the MHP object carousel, the
MHP signalling and the MHP transport structure will be used.
4.1.2.2.1 MHP object carousels
MHP applications consist of one or several object carousels plus additional signalling information that allows the IRD to access the application inside these carousels. To simplify, we are going to use only one carousel per MHP application.
An object carousel is a subset of the DSM-CC specification for data and control interoperability
that allows an iTV data server to present a set of distinct objects to a decoder by cyclically
broadcasting the contents one or more times.
The object carousel supports the transmission of structured groups of objects from a server to
receivers using the following objects:
• File objects: contain either interactive applications (Java or HTML code) or data that is
referenced by the applications, like images, texts, libraries, icons, etc.
• Directory objects: provide the location of specific file objects within the carousel stream.
• Stream objects: are references to MPEG-2 streams, usually containing video or audio
data.
• Service gateway: represents a concept that is similar to a directory; in fact the service gateway identifies the root directory of the object carousel.
One or more object carousels may be carried within a transport stream in the same way as the
other video, audio and service information streams. Each object carousel is associated with a
unique PID and this PID is referenced in the PMT.
The payload bandwidth that is used by an MHP application is almost the same as that used by an
object carousel; an object carousel typically takes a bandwidth of up to 1 or 2 Mbps but in reality
this will be limited by the ability of the MHP IRD to decode and process the interactive
applications.
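For orientation (an illustrative figure, not taken from this document): a carousel holding about 500 kB of application code and assets broadcast at 1 Mbit/s repeats roughly every 4 seconds (500 x 8 kbit / 1000 kbit/s), which is the worst-case time a just-tuned IRD has to wait before it can start loading the application; doubling the carousel bandwidth halves that latency.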
4.1.2.2.2 MHP signaling
The minimum signalling information to transmit for each interactive application shall be:
• An Application Information Table (AIT).
An Application Information Table (AIT) has to be transmitted for services carrying MHP content. The AIT provides information to the IRD about the data services and the state of each MHP application. MHP-capable IRDs rely on information carried within the AIT for ‘house-keeping’ activities such as clearing old application data from local memory.
There shall be an AIT per PMT; this AIT shall carry information for all the MHP applications inside the DTV service that is referenced in its PMT.
The AIT shall contain the following descriptors:
o A transport_protocol_descriptor in the second descriptor loop.
o An application_descriptor in the second descriptor loop.
o An application_name_descriptor in the second descriptor loop.
o A DVB-J or DVB-HTML_application_descriptor in the second descriptor loop.
o A DVB-J or DVB-HTML_application_location_descriptor in the second descriptor
loop.
• A Carousel: one PID shall contain the data for the MHP carousel, that is the interactive
application Java or HTML code.
• The following descriptor in the PMT of the associated service:
o A stream_identifier_descriptor in the second descriptor loop of the PID containing the
object carousel .
o A data_broadcast_descriptor in the second descriptor loop of the PID that contains the
object carousel.
o A carousel_identifier_descriptor in the second descriptor loop of the PID that contains
the object carousel.
o An application_signalling_descriptor in the second descriptor loop of the PID that
contains the AIT.
4.1.2.2.3 MHP transport structure
The final structure of a MHP application inside the transport stream will be as shown in the
following list:
PAT PID 0x0
• Add the PMT information associated with the service or program that contains the MHP
application.
PMT PID
• Add the PIDs associated with the audio, video, or other streams needed by this service.
• Add a new PID for the object carousel number 1 as a service type 0x0B. Add the following descriptors in the second descriptor loop of the PMT:
o A stream_identifier_descriptor with a specific component_tag.
o A data_broadcast_descriptor
o A carousel_identifier_descriptor
• Add a new PID for the object carousel number n as a service type 0x0B and with the previous descriptors.
• Add a new PID for the AIT as a service type 0x05.
o Inside the second descriptor loop add an application_signalling_descriptor.
AIT PID
• Add the AIT data for the application number 1 with the following descriptors in the
second descriptor loop:
o A transport_protocol_descriptor, with the associated component_tag that relates
this application with the PMT stream_identifier_descriptor.
o An application_descriptor
o An application_name_descriptor
o A DVB-J or DVB-HTML_application_descriptor
o A DVB-J or DVB-HTML_application_location_descriptor
• Add the AIT data for the application number n with the previous descriptors.
Object carousel 1 PID
• Add the transport packet of the object carousel number 1.
Object carousel n PID
• Add the transport packet of the object carousel number n.
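The same layout can be summarised as data. The sketch below is illustrative only: the PID values are arbitrary examples and the field names are invented; it is not a real multiplexer configuration, it simply mirrors the PAT/PMT/AIT/object-carousel structure listed above.

    # Illustrative representation of the MHP signalling layout; PID values are examples.
    mhp_service = {
        "PAT": {"pid": 0x0000, "programs": {"service_1": {"pmt_pid": 0x0100}}},
        "PMT": {
            "pid": 0x0100,
            "streams": [
                {"pid": 0x0101, "type": "video"},
                {"pid": 0x0102, "type": "audio"},
                {   # object carousel, service type 0x0B, with its three descriptors
                    "pid": 0x0110, "type": 0x0B,
                    "descriptors": ["stream_identifier_descriptor",
                                    "data_broadcast_descriptor",
                                    "carousel_identifier_descriptor"]},
                {   # AIT, service type 0x05
                    "pid": 0x0120, "type": 0x05,
                    "descriptors": ["application_signalling_descriptor"]},
            ],
        },
        "AIT": {
            "pid": 0x0120,
            "applications": [{
                "descriptors": ["transport_protocol_descriptor",
                                "application_descriptor",
                                "application_name_descriptor",
                                "dvb_j_application_descriptor",
                                "dvb_j_application_location_descriptor"]}],
        },
        "object_carousel_1": {"pid": 0x0110},   # DSM-CC sections carrying the application code
    }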
4.2 VIDEO ON DEMAND
Video on Demand (VoD) is an audiovisual interactive service where a streaming server is
remotely controlled by the user, so that it emulates the functionality of a home video player. The
interactivity provided by the service implies both content selection and streaming control. Users
are able to select the content they want to watch among all the available video contents stored in
the video database. Furthermore, they can control the video stream playback: the VoD service
provides VCR features such as pause, fast forward or fast rewind of the video content.
The VoD service conceptual architecture is shown in the following figure:
Figure 5. Video on demand service conceptual architecture
This VOD service will use mesh connections, allowing a DVB-RCST to be a video service
provider, in this case, for VOD services. The video client will be a commercial IP set-top-box
that supports IP unicast connections using RTSP connected to the satellite network through
another DVB-RCST.
The previous picture shows the VOD server connected to the DVB-RCST, so the video and audio transported are encapsulated in IP. The upper layers include UDP for audio and video real-time transmission and TCP for the session control. The DVB-RCST on the client side de-encapsulates the IP traffic and distributes it to the IP set-top-boxes, which in turn extract and decode the video.
4.2.1 Components and modules
The main components of the VOD service are detailed in the following lines.
• A video on demand server that allows connections from several clients simultaneously
with an IP interface.
• The satellite network to connect the server with the clients.
• The client side, which will be commercial IP set-top-boxes.
4.2.1.1 Video on demand server
The video on demand server has been developed by Telefónica I+D and is used as an internal prototype VOD server for MPEG-2 encoded videos. In the SATLIFE project this VOD server will be upgraded to support MPEG-4 AVC/H.264 contents and to provide a DVB-compatible content protection system for such contents. It will also be integrated with the satellite network in order to obtain the best configuration of the DVB-RCST to be used for this service.
The video server is a UNIX computer with the modules shown in the next figure:
Figure 6. VOD server modules
4.2.1.1.1 The Video Database
The Video DB module stores the video content to be played and metadata about it. For each content, several files are stored in the system:
• The video file, in MPEG-2 TS format. Valid video formats will be MPEG-2 and MPEG-4 AVC/H.264.
• Trick-play video files, which contain the images to be displayed in the fast-forward and fast-rewind playback modes.
• Index file, with the information to switch between the normal video file and its fast-forward and fast-rewind files.
• Metadata information: video format, bit rate, frame rate... needed to comply with the DB-web server and DB-video pump interfaces.
All of these files are stored in the same directory: one for each video content. Browsing the video database is, therefore, as easy as browsing the directory tree.
Storing a video content in the database requires a previous indexation process to create the index files and extract information such as bit rate or frame rate from the video sequence. This is an off-line process that must be done automatically; thus a content ingest system will be developed to provide this functionality.
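As an illustration of this per-content layout, the sketch below lists one possible set of files and checks a content directory for completeness. The file names are invented for the example; the real naming convention will be fixed by the content ingest system.

    import os

    # Hypothetical file names for one content directory (examples only).
    EXPECTED_FILES = [
        "movie.ts",       # main video file, MPEG-2 TS (MPEG-2 or MPEG-4 AVC/H.264 inside)
        "movie_ff.ts",    # fast-forward trick-play file
        "movie_fr.ts",    # fast-rewind trick-play file
        "movie.idx",      # index mapping normal-play positions to the trick-play files
        "metadata.txt",   # video format, bit rate, frame rate, title, genre, duration...
    ]

    def content_is_complete(content_dir):
        """Return True if a content directory holds every file the VOD server expects."""
        present = set(os.listdir(content_dir))
        return all(name in present for name in EXPECTED_FILES)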
4.2.1.1.2 The web server
The web server allows the set-top-boxes to browse the contents stored in the video database and to select which one will be played. It will be a standard web server such as Apache or IIS.
4.2.1.1.3 The RTSP server
After selecting a content through the web server, an RTSP dialog is established between the set-top-box client and the RTSP server inside the VOD server, in order to check the availability of the requested content and to begin its transmission. For each RTSP session a new system process, called Video pump, is created.
4.2.1.1.4 The Video pump module
The Video pump is a streaming server, which sends the video content to the set-top-box through
the IP network using UDP. It receives commands from the RTSP server to control the playback.
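A minimal sketch of the video pump idea follows (illustrative only): it paces 188-byte TS packets over UDP at the stream's nominal bit rate. The real video pump also handles the RTP layer, trick modes and the control commands received from the RTSP server.

    import socket, time

    TS_PACKET_SIZE = 188

    def pump(ts_path, dst, bitrate_bps, packets_per_burst=7):
        """Send a TS file over UDP, pacing bursts so the average rate matches bitrate_bps."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        burst_bytes = TS_PACKET_SIZE * packets_per_burst
        burst_interval = (burst_bytes * 8) / bitrate_bps      # seconds between bursts
        next_send = time.monotonic()
        with open(ts_path, "rb") as f:
            while True:
                burst = f.read(burst_bytes)
                if not burst:
                    break
                sock.sendto(burst, dst)
                next_send += burst_interval
                delay = next_send - time.monotonic()
                if delay > 0:
                    time.sleep(delay)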
4.2.1.2 The satellite network
The satellite network is in charge of connecting the server with the client side. A mesh configuration will be used, so the delay in processing the RTSP commands will be minimal and the interactivity maximal.
4.2.1.3 The video client
The video client is an IP set-top-box with a web browser, an RTSP client and a video stream de-multiplexer and decoder. Either MPEG-2 or MPEG-4 AVC/H.264 set-top-boxes could be used.
The server architecture maximizes the independence between the set-top-box programming interface and the video streaming. However, the dialog between server and client requires a previous integration, to make sure that the server applications comply with the set-top-box API. This integration must be assured at two levels:
• Compliance with the JavaScript objects used by the web server.
• Compliance with the RTSP messages and parameters used by the set-top-box.
Once this compliance is achieved, all of the application dialog between server and client must work.
Another important feature of the VOD system is the way it sends the fast-forward and fast-rewind sequences. Since they have special features, they are not transmitted at a broadcasting picture rate (say 25 frames per second), but at a lower one. The set-top-box is expected to switch into trick mode when asked for a fast-forward or fast-rewind sequence: fast-forward and fast-rewind streams have different features than normal playback, such as absence of audio or a lower frame rate, and the set-top-box must be able to decode these special streams.
4.2.2 Client-Server interface
Three interfaces are defined between the set-top-box and the VOD server: the web interface, the RTSP interface and the video streaming interface.
• The web interface is a common web browsing environment. The web site shows information about the video contents available in the server: title, genre, duration... JavaScript objects are defined to implement the video selection. These objects start the RTSP dialog.
• The RTSP interface allows the client to control the video streaming. Different RTSP messages are defined to provide the following functions (an illustrative exchange is sketched after the figure below):
  o Describe. An RTSP DESCRIBE message, to retrieve a description of the video resource and initialize the RTSP session.
  o Start. RTSP SETUP and PLAY messages, to play back the content from its beginning.
  o Play. RTSP SETUP and PLAY messages, to play back the content from the present position.
  o Pause. An RTSP PAUSE message, to pause the video playback.
  o Fast forward. RTSP SETUP and PLAY messages, to play back the content in fast-forward mode from the present position.
  o Fast rewind. RTSP SETUP and PLAY messages, to play back the content in fast-rewind mode from the present position.
  o Keep alive. An RTSP message to keep the session alive.
  o Teardown. An RTSP TEARDOWN message to stop the session.
• The video interface is a unicast communication from the video server to the set-top-box client. The protocol stack is shown in the figure below. This is the interface which has to withstand the largest traffic load.
Figure 7. Video on demand protocol stack
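The sketch below makes the RTSP dialog concrete. It is a minimal illustration only: the server address, content URL, transport parameters and session id are hypothetical (in a real client the session id is read from the SETUP reply), and the exact headers are fixed during set-top-box integration.

    import socket

    def rtsp_request(sock, method, url, cseq, extra_headers=()):
        lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}", *extra_headers, "", ""]
        sock.sendall("\r\n".join(lines).encode())
        return sock.recv(4096).decode()        # server reply, e.g. "RTSP/1.0 200 OK ..."

    server = ("vod.example.net", 554)                       # hypothetical VOD server
    url = "rtsp://vod.example.net/contents/movie1"          # hypothetical content URL

    sock = socket.create_connection(server)
    print(rtsp_request(sock, "DESCRIBE", url, 1))           # description of the video resource
    print(rtsp_request(sock, "SETUP", url, 2,
                       ["Transport: RTP/AVP;unicast;client_port=1234-1235"]))
    print(rtsp_request(sock, "PLAY", url, 3, ["Session: 12345678"]))      # start playback
    print(rtsp_request(sock, "PAUSE", url, 4, ["Session: 12345678"]))     # pause
    print(rtsp_request(sock, "TEARDOWN", url, 5, ["Session: 12345678"]))  # end the session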
4.2.3 Internal interfaces
Three interfaces are defined inside the video server:
• The DB – web server interface is a set of metadata stored in each video
content directory and read by the web application. This information must include:
o File name, as input to the JavaScript objects.
o Content title, to be displayed in the web application.
o Content description: duration, genre, synopsis…
• The DB – video pump interface is another set of data stored in the video
directory. It includes:
o Metadata information, in the metadata file:
§ Transport Stream bit-rate.
§ Video format.
o Index information: for each image stored in the fast-forward and fast-rewind
files, the time code and a pointer to the same image in the main video file.
• The RTSP server – video pump interface defines the interaction between these
modules. Basically, the RTSP server receives messages from the set-top-box and parses
them into a format which is understandable by the video pump. This interface is based on
UNIX pipes: when the RTSP session starts, a new video pump process is created and a
pipe (FIFO) is opened. The RTSP server sends messages to the video pump through this
pipe.
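The sketch below illustrates this pipe-based interface from the RTSP server side; the pipe path, the video pump binary and the line-oriented command format are illustrative assumptions, since the real message syntax between the two modules is internal to the server.

    # Sketch of the RTSP server side of the UNIX-pipe (FIFO) interface.
    # The pump binary path and the command strings are hypothetical.
    import os, subprocess

    def start_session(session_id, content_path):
        fifo = "/tmp/vodpump-%s.fifo" % session_id      # one pipe per RTSP session
        os.mkfifo(fifo)
        # one video pump process per session, reading commands from the pipe
        pump = subprocess.Popen(["/usr/local/bin/videopump", fifo, content_path])
        return fifo, pump

    def send_command(fifo, command):
        # e.g. "PLAY npt=0", "PAUSE" or "PLAY scale=2", produced by parsing
        # the RTSP message received from the set-top-box
        with open(fifo, "w") as pipe:                   # blocks until the pump reads
            pipe.write(command + "\n")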
4.2.4 User interface
The user interface is a web site which the user can browse. The user only has to interact with the
set-top-box remote control. The whole networking process is completely transparent, so that he can
navigate through the menus as if they were those of a DVD. Therefore the user interface
consists of:
• A web-based menu, where the user can select the content he wants to watch, and
• The VCR controls of the set-top-box remote control: play, pause, stop, fast forward and
fast rewind.
4.2.5 Server capabilities
This architecture and interface definition is designed for flexibility and robustness, so that the
following goals are achieved:
• Good cost-efficiency.
• Portability to different hardware settings and Unix systems.
• Compatibility with different set-top-boxes.
• Support for the MPEG-2 and MPEG-4 AVC/H.264 video codecs.
4.2.6 Content protection system
In this section we will analyse a content protection system for VOD. The main target of this
system is to prevent final users from copying the distributed contents, even those users that are
purchasing such events.
The proposed system provides a mechanism that distributes the VOD contents in encrypted form
without requiring Smart Card readers or major hardware updates in commercially available
terminals. The key characteristic of the encryption scheme described here is that content is
pre-encrypted, that is, scrambled in advance, independently of when the content will be
delivered.
This content protection system is not comparable with a full DVB-CA system, because it is not
intended to be used in the scenarios where DVB-CA is used, such as DTV broadcasting, NVOD
events or live VOD events. It is simpler and more limited, but it can be integrated more easily,
requires little CPU power to decrypt contents, is cheaper, and can be used by an operator that
wants to prevent the copying of digital contents during the deployment of a new VOD service.
The key components of the content protection system are:
• The video on demand server, which delivers contents over the satellite network.
• The authorization server, which validates the user and generates the keys to decrypt the
contents.
• The set-top-box that receives the encrypted content and performs the decryption process.
Figure 8. VOD content protection system
4.2.6.1 The Video on demand server
The video on demand server needs to transmit the pre-encrypted contents, so it will be necessary
to develop an encryption tool to perform this task.
This encryption tool will have the following functionality:
• The input of this tool will be an MPEG-2 TS file and its output will be another, encrypted
MPEG-2 TS file.
• The packet size will be 188 bytes.
• It has to deal with files whose first byte is not the sync byte.
• Only video packets will be encrypted.
• It will be possible to encrypt packets carrying the start of an I frame, P frame or B frame.
Only the first packet of a frame will be encrypted; this is enough to prevent the set-top-box
from recovering the frame, so it is not necessary to encrypt the whole frame, which also
eases the decoding process.
• It will be possible to encrypt only a selected percentage of the video frames.
• This tool will encrypt the whole payload of the packet, without any changes to the TS
header or to the adaptation field (if present).
• When a packet is encrypted, this tool will change the value of the
"transport_scrambling_control" field in the MPEG-2 TS header. For example, a value of '00'
means the packet is not encrypted, and the remaining values may indicate different encryption
algorithms, which should be supported by the decoder. The value '11' is reserved according to
the standard, so it should not be used in a first approach.
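To make the packet handling described above concrete, the sketch below walks a 188-byte TS file, encrypts the payload of selected video packets and marks them in the TS header. The video PID, the key and the cipher (a simple XOR keystream standing in for a real algorithm such as AES) are illustrative assumptions, and a real tool would locate the actual I/P/B frame starts instead of relying only on the payload_unit_start_indicator.

    # Sketch of the pre-encryption pass over an MPEG-2 TS file.
    TS_PACKET = 188
    VIDEO_PID = 0x100                                # hypothetical video PID
    KEY = bytes.fromhex("0f1e2d3c4b5a6978")          # hypothetical key

    def cipher(payload):                             # placeholder for the real cipher
        return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(payload))

    def encrypt_file(src, dst):
        with open(src, "rb") as f:
            data = f.read()
        start = data.find(b"\x47")                   # file may not start at a sync byte
        with open(dst, "wb") as out:
            for i in range(start, len(data) - TS_PACKET + 1, TS_PACKET):
                pkt = bytearray(data[i:i + TS_PACKET])
                pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
                pusi = (pkt[1] >> 6) & 0x01          # payload_unit_start_indicator
                afc = (pkt[3] >> 4) & 0x03           # adaptation_field_control
                if pid == VIDEO_PID and pusi and afc in (0x01, 0x03):
                    off = 4 if afc == 0x01 else 5 + pkt[4]   # leave adaptation field intact
                    pkt[off:] = cipher(pkt[off:])            # encrypt the whole payload
                    pkt[3] = (pkt[3] & 0x3F) | 0x80          # transport_scrambling_control = '10'
                out.write(pkt)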
The transmission of the pre-encrypted contents is DVB compliant, so it will not be necessary to
adapt either the VOD server or the satellite network.
4.2.6.2 The authorization server
The customer contacts the web server inside the VOD service; this web site shows
information about the video contents available on the server and allows the customer to select a
content for playback.
This request from the customer is done using HTTPS. This secure connection ensures that no
snooping device can store the key served to the set-top-box. The request is processed by the
authorization server, which checks that the consumer has the relevant rights to request that content
and returns the information the set-top-box will use to decrypt the encrypted
DVB transport packets.
In fact, the key delivered is not the clear DVB decryption key but a personalized key (specific to the
set-top-box); the clear key is recovered through an appropriate function whose input includes a
certain parameter contained in the set-top-box.
When the authorization process is finished the VOD server delivers the secured content to the
customer, who is able to control the playback for a limited time, usually 24 hours.
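The personalization function itself is not specified in this document; purely as an illustration, the sketch below assumes an HMAC-derived mask, keyed with the parameter stored in the set-top-box and XORed with the clear content key.

    # Illustrative sketch only: the actual personalization function is not
    # defined here; an HMAC-derived mask is assumed for the example.
    import hmac, hashlib

    def personalize(clear_key, stb_parameter):
        # authorization server side: wrap the clear key for one set-top-box
        mask = hmac.new(stb_parameter, b"vod-key-wrap", hashlib.sha256).digest()
        return bytes(k ^ m for k, m in zip(clear_key, mask))

    def recover(personalized_key, stb_parameter):
        # set-top-box side: the same mask recovers the clear key
        mask = hmac.new(stb_parameter, b"vod-key-wrap", hashlib.sha256).digest()
        return bytes(k ^ m for k, m in zip(personalized_key, mask))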
4.2.6.3 The set-top-box
The set-top-box on the client side is in charge of the following tasks:
• Allow the customer to search for a content.
• Request the content and the information needed to decrypt it.
• Receive the content and decrypt (using the previous information) the encrypted MPEG-2 TS
packets whose "transport_scrambling_control" field in the TS header equals '01' or '10'.
• Control the playback of the content through RTSP commands.
4.2.7 Near video on demand
Near Video On Demand is an interesting service in a satellite platform. NVOD services are more
oriented to unidirectional scenarios where there is no interactivity with final users, but they allow
the bandwidth to be shared between many users, which is one of the main advantages in a
satellite environment.
In the SATLIFE system an NVOD service could be optimized using the Amerhis on-board
multicast capacity. Thanks to this feature, only one multicast flow is transmitted from the
NVOD server to the satellite and multiple flows can be regenerated, one per coverage zone.
The aim of the SATLIFE project is a VOD system, but we will leave an NVOD scenario open
because its exploitation and deployment will be more feasible.
The main components of an NVOD scenario are the same as in a VOD scenario. On the server
side, an NVOD server is needed to provide and transmit contents, together with a web server to
show which contents are available to the end users; there is no RTSP connection but a multicast
transmission. On the final user side the same IP set-top-box equipment is used, but it must
support multicast: it will join the multicast channel on which the content is going to be
transmitted and leave it when it stops listening to that channel.
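The join and leave operations correspond to standard IGMP group membership handled by the IP stack; the sketch below shows them from the client side, with a hypothetical multicast group and port.

    # Sketch of the multicast join/leave performed by the NVOD client.
    import socket, struct

    GROUP, PORT = "239.1.2.3", 1234                 # hypothetical NVOD channel

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)   # join
    data, _ = sock.recvfrom(1500)                   # receive the multicast content
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)  # leave
    sock.close()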
In the development phase we will study the feasibility of adapting the content protection scheme to
NVOD contents, to provide a minimum level of security in the content distribution.
4.3 MULTICONFERENCE
4.3.1 SIP
4.3.1.1 SIP Introduction
The Session Initiation Protocol (SIP) is a signalling protocol used to establish sessions over an IP
network. Currently, SIP is the preferred protocol for communication sessions, such as:
• Multimedia
• Conferencing
• Telephony
• Instant Messaging
• …
Its principal advantage over other signalling protocols is that it is just a mechanism to
establish sessions, with three purposes: initiating, terminating and modifying sessions (the
details of the sessions themselves are not relevant to SIP). This means that SIP scales well, is
extensible, and fits into different architectures and deployment scenarios.
It is a request-response protocol that closely resembles two other Internet protocols, HTTP and
SMTP. Owing to this resemblance, SIP integrates well with Internet applications; for example,
telephony becomes a web application and can easily be combined with other Internet services.
Because of these properties, SIP is considered a powerful alternative to H.323: it is a more flexible
and simpler solution, easier to implement and better suited to supporting intelligent user devices.
SIP is designed to be a part of the overall Internet Engineering Task Force (IETF) multimedia
data and control architecture and was originally developed in the MMUSIC (Multiparty
Multimedia Session Control) working group of the IETF.
SIP signaling should be considered separately from the media itself because the signaling can
pass via one or more proxy or redirect servers while the media stream takes a more direct path.
4.3.1.2 SIP Benefits
The SIP protocol offers a number of important benefits:
• Simplicity: SIP is a very simple protocol and, because it encodes its messages as text,
parsing and generation are straightforward. Moreover, it is similar to HTTP, so existing
HTTP parsers can be quickly modified for SIP usage.
• Extensibility: this is a key metric for measuring an IP telephony signalling protocol. By
default, unknown headers and values are ignored; if an unknown value arrives at a
server, the server returns an error code and lists the set of features it understands. To further
enhance extensibility, numerical error codes are hierarchically organised as in HTTP, and
the textual encoding of the header fields keeps their meaning self-evident.
• Modularity: Internet telephony requires many different functions, which should be covered
by separate, general protocols so that they can be reused in other applications. SIP is very
modular and encompasses call signalling, user location and basic registration. Quality of
service, directory access, service discovery… are orthogonal and reside in separate protocols.
A key feature of SIP is its ability to separate the notion of a session from the protocol used
to invite a user to a session; SDP is used for that purpose.
• Scalability: there are a number of different levels where scalability can be observed:
domains (end systems can be located anywhere on the Internet), server processing
(transactions through servers can be stateful or stateless) and conference sizes (SIP scales
to all different conference sizes; conference coordination can be fully distributed or
centralised).
• Integration: it has the ability to integrate with the Web, e-mail, streaming media
applications and protocols.
4.3.2 Multiconference Service
The Multiconference service provided by the system shall support both unicast and multicast
users, and use multicast flows for transmitting the multimedia flows in the satellite network.
Figure 9. Multiconference Service
4.3.2.1 Multicast
The mixed satellite-terrestrial network and the relatively high satellite delay imply that it is not
desirable to send unicast audio/video from one terminal to a remote MCU through the satellite and
then receive the composite signal again through the satellite link.
This means that the service will take into account the multicast features planned for the
SATLIFE satellite network, for example in order to improve the related network QoS and perceptual
measurements for deployed services such as the multiconference.
In this scenario, multiconference participants will be located in the Satellite Network, Internet or
ISDN. Terminals send and receive multimedia streams via unicast to the MCUs (Multipoint
Control Units) which collect the streams, manipulate them and generate multicast flows that will
be received by the rest of MCUs. This model minimises the bandwidth in comparison to the
multicast conference and simplifies the terminal requirements (no mixing/switching is needed).
As noted above, the mixed satellite-terrestrial network and the relatively high satellite delay make it
undesirable to send unicast audio/video from one terminal to a remote MCU through the satellite and
then receive the composite signal again through the satellite link. This scenario does not require a
Web Conference Server, because this functionality is implemented in the MCU software.
Figure 10. Satlife multiconference scenarios
- MCU (Multipoint Control Unit):
The main idea of the MCU is to collect the streams, manipulate them and generate multicast
flows received by all terminals attached to an MCU; terminals just need to receive and decode one
multicast stream.
The difference from the previous models is that the sender should unicast its media data to a
preconfigured MCU located in the same Autonomous System (AS) or, at least, in the same
satellite beam. The MCU collects and mixes all of the media streams coming from all of the
current senders in the same multicast group and sends the mixed stream to all of this
group's members in the AS via a multicast tree. The MCU should also transmit the mixed data
to the satellite. When the MCU receives media data from the satellite, it multicasts it,
through the terrestrial network, to all of the members in the same AS. Therefore, extra delay is
introduced by the processing overhead at the MCU and the propagation of the unicast streams
from sources to MCUs.
In SATLIFE, both SIP and H.323 exist, but it should be considered that the RSGW will not provide
real interworking between these protocols. In order to provide a possible solution for
this lack, the SATLIFE partners involved in this service will try to cover this feature at the
Multiconference Service platform level. This could enable basic interworking between both
protocols, so that H.323 clients are not limited to establishing multiconference sessions only with
other H.323 clients, and likewise for SIP clients.
Figure 11. H.323 Scenario (call control and signalling; media streaming)
The next figure shows the interaction between a number of users and an MCU. By using
MCUs, the system obtains multicast flows between the MCUs.
Figure 12. Multicast flow with MCU
Figure 13. Interaction between SIP MCU and SIP Proxy Server
- SIP Proxy:
The SIP Proxy consists of a call control engine and SIP stack. The SIP stack parses incoming
messages, does DNS lookup and takes care of retransmission. The call control engine does the
registration and the call processing.
The SIP Proxy accepts signalling messages on UDP or TCP. UDP provides for faster signalling,
whereas TCP provides for reliable signalling. The SIP Proxy also accepts registrations based on
multicast.
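As an illustration of the registration handled by the call control engine, the sketch below sends a SIP REGISTER over UDP to the proxy; the addresses, user and tags are assumptions, and a real client must additionally handle retransmissions and authentication challenges as defined in RFC 3261.

    # Minimal sketch of a SIP REGISTER sent over UDP to the SIP Proxy.
    import socket

    PROXY = ("proxy.example.net", 5060)             # hypothetical SIP proxy
    LOCAL_IP = "192.0.2.10"                         # hypothetical client address

    register = "\r\n".join([
        "REGISTER sip:proxy.example.net SIP/2.0",
        "Via: SIP/2.0/UDP " + LOCAL_IP + ":5060;branch=z9hG4bK776asdhds",
        "Max-Forwards: 70",
        "From: <sip:user@example.net>;tag=49583",
        "To: <sip:user@example.net>",
        "Call-ID: 843817637684230@" + LOCAL_IP,
        "CSeq: 1 REGISTER",
        "Contact: <sip:user@" + LOCAL_IP + ":5060>",
        "Expires: 3600",
        "Content-Length: 0",
        "", ""])

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(register.encode(), PROXY)
    response = sock.recv(4096).decode()             # expect "SIP/2.0 200 OK"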
SIP Proxy architecture:
Figure 14. SIP Proxy architecture
The next figure shows the communication established between two SIP Proxies, one belonging
to the multiconference service, and the other placed at the Gateway (RSGW).
Figure 15. Multiconference Service and RSGW
4.3.2.2 Audio/Video
The multiconference Service uses RFC 1890, RTP Profile for Audio and Video Conferences
with Minimal Control, covering unicast and multicast users.
This profile is intended for use within audio and video conferences with minimal session
control. In particular, no support for the negotiation of parameters or membership control is
provided. The protocol is expected to be useful in sessions where no negotiation or membership
control are used (e.g., using the static payload types and the membership indications provided by
RTCP), but this profile may also be useful in conjunction with a higher-level control protocol.
Use of this profile occurs by use of the appropriate applications; there is no explicit indication by
port number, protocol identifier or the like.
encoding   sample/frame   bits/sample   ms/frame
1016       frame          N/A           30
DVI4       sample         4
G721       sample         4
G722       sample         8
G728       frame          N/A           2.5
GSM        frame          N/A           20
L8         sample         8
L16        sample         16
LPC        frame          N/A           20
MPA        frame          N/A
PCMA       sample         8
PCMU       sample         8
VDVI       sample         var.
Table 1. Properties of Audio Encodings
The MC Service shall support the RTP (RFC 3550) media protocol.
The real-time transport protocol (RTP) provides end-to-end delivery services for data
with real-time characteristics, such as interactive audio and video.
RTP consists of two closely-linked parts:
• The real-time transport protocol (RTP), to carry data that has real-time properties.
• The RTP control protocol (RTCP), to monitor the quality of service and to convey
information about the participants in an on-going session. The latter aspect of RTCP may
be sufficient for "loosely controlled" sessions, i.e., where there is no explicit membership
control and set-up, but it is not necessarily intended to support all of an application's
control communication requirements. This functionality may be fully or partially
subsumed by a separate session control protocol.
If both audio and video media are used in a conference, they are transmitted as separate RTP
sessions. That is, separate RTP and RTCP packets are transmitted for each medium using two
different UDP port pairs and/or multicast addresses. There is no direct coupling at the RTP level
between the audio and video sessions, except that a user participating in both sessions should use
the same distinguished (canonical) name in the RTCP packets for both so that the sessions can be
associated.
One motivation for this separation is to allow some participants in the conference to receive only
one medium if they choose. Despite the separation, synchronized playback of a source's audio
and video can be achieved using timing information carried in the RTCP packets for both
sessions.
Video compression codecs allowed:
• H.261 is an older standard and is supported by most room systems. H.261 is a video coding
standard published by the ITU (International Telecommunication Union) in 1990. It was designed
for data rates which are multiples of 64 kbit/s, and is sometimes called p x 64 kbit/s (p is
in the range 1-30). These data rates suit ISDN lines, for which this video codec was
designed.
• H.263 is the IP-centric standard. H.263 handles lower- to higher-speed connections. In
general, H.263 serves most video conferencing needs.
Audio codecs allowed:
• G.723 is the lowest-bandwidth compression, for connections of less than 128K in which
you must preserve bandwidth for video and data collaboration. While using G.723
decreases bandwidth use by client systems, it increases processor use on both client and
server systems. This increase in processor cycles causes the entire conferencing system to
support less than the maximum number of simultaneous connections.
• G.711 is the most interoperable and least CPU-intensive codec, but requires a larger
amount of bandwidth. The quality is good, and it is recommended for use when you have
a variety of endpoints. You must have the bandwidth set at 128K or better if you want to use
any other features besides audio in the meeting.
• G.729 is a low bit-rate and low-complexity algorithm that has a good balance of
coding efficiency, speech quality, and stability under extreme conditions. G.729 uses a 10
ms frame, which makes the encoding delay shorter than, for example, G.723.1, which has
a 30 ms frame.
• iLBC is designed for narrow-band speech and results in a payload bit rate of
13.33 kbit/s with an encoding frame length of 30 ms and 15.20 kbit/s with an encoding
frame length of 20 ms. The iLBC codec enables graceful speech quality degradation in the case
of lost frames, which occurs in connection with lost or delayed IP packets.
• Speex is based on CELP and is designed to compress voice at bit rates ranging from 2 to
44 kbps.
4.3.3 Softphone
The multiconference client for the final user will be the softphone.
The softphone is a multiprotocol multi-videoconference software client for PCs. It supports the
principal standard VoIP and IM&P protocols, allowing interoperability with the systems of the
main manufacturers.
The application can be used in residential or corporate environments and in their different
connectivity scenarios.
This softphone is customizable by the user and/or the operator. It supports changes in its
appearance (skins), configuration and functionalities:
Figure 16. Softphone
TID has already developed and owns a proprietary softphone. For SATLIFE, some changes and
adaptations have been done, and others will be done before the end of this project.
- FEATURES:
  o IP Telephony:
    § SIP RFC3261
    § H323 v4
    § Codecs: Free: G.711, Speex, GSM, iLBC. Licensed: G.723.1, G.729, GIPS
    § Terminal implementations for call transfer and divert.
  o Video:
    § H.323 & SIP supported
    § QCIF & CIF broadcasting
    § Standard codecs: H.261 & H.263
  o IM&P:
    § SIMPLE & Jabber support
  o Agenda:
    § Local agenda
    § LDAP directory services access.
  o Network:
    § Static mapping assistant on routers with NAT
    § STUN for NAT traversal
    § UPnP experimental support
  o Languages:
    § Different languages on skins, resources and help assistant
  o Customization:
    § Skins, languages, configuration and functionality
  o Other features:
    § USB handsets support
    § Voice detection, echo suppression, adaptive jitter buffer
    § Usage statistics: CPU and bandwidth consumption.
    § Quality of service: video bandwidth limitation, QoS, ToS
    § Functionality: "Click to talk"
4.4 INTERNET ACCESS
High-speed Internet access over satellite is one of the most used solutions to connect to the
Internet in areas where there is no broadband terrestrial network. Bidirectional satellite
solutions are preferred to unidirectional ones because one-way communications need a POTS/ISDN
connection to send data from the user station to the network, so there is an extra expense in the
monthly invoice. The main drawbacks of bidirectional solutions are the price of the equipment,
which is usually twice the price of unidirectional equipment, and the delay in the
communications, which is larger than terrestrial delays due to the distance to the satellite.
The SATLIFE project will reduce these problems by developing low-cost terminals, which will allow
customers to purchase satellite solutions easily, and mesh connections, which will allow users
to connect to each other through a single satellite hop.
In order to provide high speed Internet access, users have to connect to a gateway. This gateway
will be in charge of providing access to terrestrial networks and other services. Currently
commercial solutions based on transparent systems offer this gateway station as a component of
the Hub. The use of a regenerative instead of a conventional transparent satellite allows the
definition of more flexible, low cost gateway stations.
In the SATLIFE system, the gateway station will work in regenerative mode, so it could be
located anywhere within the satellite coverage area and thanks to the on board conversion from
DVB-RCS to DVB-S, the satellite terminals inside the gateway station will be very similar to
regenerative and transparent RCS terminals.
Figure 17. Internet access scenario with a gateway station
4.4.1 Customer LAN configuration
In the previous scenario, an enterprise needs a DVB-RCST configured in regenerative mode to
connect to the Internet. This DVB-RCST will work as the default router to provide connectivity
out of the customer site and will be the limit of its LAN.
The main requirement for the customer is that all its equipment supports TCP/IP, so any operating
system that supports TCP/IP will work in the SATLIFE system.
4.4.1.1 NAT configuration
The typical configuration will be one or several customer LANs with private addresses, so, in
order to access the Internet, two NAT configurations are foreseen:
• The DVB-RCST will perform a static or dynamic NAT and NAPT. This way, the DVB-RCST
will have a public IP address besides its private and management IP addresses.
• The NAT/NAPT will be performed in the ISP. In this configuration the DVB-RCST will
not perform any NAT/NAPT.
The first option, where the NAT is performed in the terminal, is the most common for Internet
access over satellite.
4.4.1.2 IP addressing
There are two methods to provide IP addresses to the customer LAN:
• Static and private IP addresses. A static IP address will be assigned manually by the
customer's network administrator to every device that requires its own IP address.
Every IP address will be unique in the customer's LAN if the RCST performs NAT, or it
will be unique per VSN if the NAT is performed in the ISP.
• Dynamic and private IP addresses. A dynamic address will be assigned automatically by
DHCP. The DHCP configuration will be set up by the customer's network administrator
(or the service installer) in the RCST. With this configuration, the NAT will be performed
in the RCST.
The network mask used depends on the customer requirements, mainly on the LAN size, so any
private class C, B or A range could be used, from a LAN with a few hosts to larger
LANs.
Dynamic IP addressing is the usual configuration of satellite terminals used to connect to the Internet.
4.4.1.3 Multi-station
The customer LAN could be composed of:
• A single host. In this scenario, the typical configuration will be a public IP address for the
DVB-RCST and another public IP address for the host. No NAT will be necessary.
• Several hosts. In this scenario, private addresses will be used and NAT will be needed (in
the RCST or in the ISP, as commented in the previous sections).
4.4.2 RCST functionality
4.4.2.1 QoS
The connection to Internet will be limited according to the type of service that the customer has
in his contract. The traffic of the return link will be classified into different quality of service
classes based on IP addresses, protocol types, DSCP/TOS, and port numbers of the IP packets to
be transmitted.
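The sketch below illustrates this kind of classification; the class names and the matching rules are assumptions for illustration only, not the actual RCST configuration.

    # Illustrative return-link classifier based on DSCP, protocol and port.
    def classify(dscp, protocol, dst_port):
        if dscp == 46 or dst_port == 5060:              # EF-marked or SIP/VoIP traffic
            return "real-time"
        if protocol == "UDP" and dst_port in (554, 5004):  # RTSP/RTP streaming
            return "streaming"
        if dst_port in (80, 443):                       # web browsing
            return "interactive"
        return "background"                             # FTP, e-mail, bulk transfers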
The RCST supports several categories of capacity request that could be configured per terminal:
• CRA: CRA is rate capacity which shall be provided in full for each and every superframe
while required. Such capacity shall be negotiated directly between the RCST and the
NCC.
• FCA: FCA is volume capacity which shall be assigned to RCSTs from capacity which
would be otherwise unused. Such capacity assignment shall be automatic and shall not
involve any signaling from the RCST to the NCC.
• RBDC: RBDC is rate capacity which is requested dynamically by the RCST. RBDC
capacity shall be provided in response to explicit requests from the RCST to the NCC,
such requests being absolute (i.e. corresponding to the full rate currently being
requested). Each request shall override all previous RBDC requests from the same RCST,
and shall be subject to a maximum rate limit negotiated directly between the RCST and
the NCC.
• VBDC: VBDC is volume capacity which is requested dynamically by the RCST. VBDC
capacity will not be available in SATLIFE system because it is not available in the NCC.
The QoS in the satellite network has to be combined with the regenerative gateway, where a
Service Level Agreement contract (defining the minimum guaranteed and the maximum bandwidth)
is offered. The operator will be able to define the range of profiles that best suits its
potential clients.
4.4.2.2 PEP in star configuration
In satellite networks, the efficiency of standard TCP suffers over error-prone, long-delay links.
Although some performance improvement is possible by simply tuning the TCP parameters, such
as the TCP window size, in general a TCP/IP PEP is required to ensure that TCP connections over
satellite provide performance similar to terrestrial broadband.
In SATLIFE a PEP will be developed, oriented to work in a star configuration in transparent
mode. This star configuration is typically used for Internet access, where customers connect to a
single site, usually their gateway to the Internet.
Thanks to this PEP, the performance of common TCP-based applications such as
HTTP and FTP download is expected to improve.
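The window-size tuning mentioned above amounts to sizing the TCP buffers to the bandwidth-delay product of the satellite link; the sketch below uses illustrative values for the link rate and round-trip time. Tuning alone only mitigates the problem, which is why the PEP is developed.

    # Sketch of socket-buffer (TCP window) tuning; the rate and RTT are
    # illustrative values for a GEO satellite hop.
    import socket

    LINK_RATE_BPS = 2_000_000                       # assumed forward-link rate
    RTT_S = 0.6                                     # assumed round-trip time
    BDP_BYTES = int(LINK_RATE_BPS * RTT_S / 8)      # bandwidth-delay product, ~150 kB

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BDP_BYTES)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BDP_BYTES)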
4.4.2.3 Monitoring
The network operator is able to manage the RCST terminal over the air interface using the web
interface, a telnet client or SNMP. The RCST MIB is described in the document "DVB-RCS
regenerative (and transparent) Technology and Subsystems Requirements Report", Annex
C "RCST SATLIFE MIB" [3].
4.4.3 Internet services
Internet services can be classified according to their level of interactivity: conversational,
interactive, streaming and background.
The target of this service is to integrate a full Internet access service involving a complete
scenario with common Internet services, to determine and recommend the best configuration for
the satellite terminal and the gateway station.
4.4.3.1 Conversational services
Conversational services are the services that demand the most interactivity. Some examples of
conversational services are multiconference, which is described in detail in chapter 4.3
MULTICONFERENCE, server access and interactive games.
For server access the following scenarios will be configured:
• A Telnet client on a Unix system and a Telnet server running on the Internet.
• A remote desktop connection client on a Windows OS and a Terminal Server running on
the Internet.
4.4.3.2 Interactive services
Interactive services require lower interactivity than conversational services. An example of an
interactive service is WWW browsing.
For interactive services the following scenarios will be configured:
• A web browser client issuing HTTP requests to a web server on the Internet.
• A messenger client sending messages to a messenger server on the Internet.
• A chat client sending messages to a chat server on the Internet.
4.4.3.3 Streaming services
Streaming services require lower interactivity than interactive services. An example of a
streaming service is video and audio streaming.
For streaming services the following scenarios will be configured:
• A Windows media player client requesting a video content on the Internet.
• A Windows RTSP client requesting a RTP/RTSP video content on the Internet.
4.4.3.4 Background services
Background services require the lowest interactivity. Some examples of background services
are file transfer and e-mail.
For background services the following scenarios will be configured:
• An FTP client running on a Windows OS and an FTP server on the Internet. Upload and
download operations will be measured.
• An email client running on a Windows OS and an email server on the Internet.
Reception and transmission operations will be measured.
4.5 LAN INTERCONNECTION
LAN interconnection via satellite can be used to complement and extend existing terrestrial
networks through interconnection of clusters of broadband islands (such as LANs and MANs) in
remote regions, where terrestrial lines are expensive to install and operate. Interconnected
satellite or hybrid satellite-terrestrial LANs have certain advantages over terrestrially connected
LANs. A satellite can be used to interconnect LANs located in remote areas, and in a hybrid
situation can be used for large data-file transfers, so this interconnection could avoid overburdening the terrestrial networks.
In this service, we can identify two types of corporate facilities: a central facility and a client
facility. All of the application servers are located in the central facility, including Internet access
proxy/firewall, Web and E-mail server, and several client facilities accessing the different
servers through their RCSTs. The general scheme of this architecture is depicted in the following
figure:
Figure 18. LAN interconnection scenario
This scheme only considers the scenario where all LANs are directly connected to the satellite
network in order to provide typical office services. The interconnection will be performed at
level 3 (IP).
In the LAN interconnection service, the most needed applications are data sharing and
communication services between several corporate facilities. Some Internet services are also
demanded for LAN interconnection, such as:
• Electronic mail
• Internet / Intranet web access
• Videoconference
• VOIP phone calls
Videoconference and VOIP phone calls services in a mesh configuration have been analyzed in a
previous chapter. See 4.3. MULTICONFERENCE.
4.5.1 Corporate facility LAN configuration
In the previous scenario, a corporate facility needs at least one DVB-RCST configured in
regenerative mode to connect to other facilities. This DVB-RCST will work as the default router
to provide connectivity out of the corporate facility and will be the limit of this LAN.
The main requirement for the corporate facility is that all of its equipment supports TCP/IP, so
any operating system that supports TCP/IP will work in the SATLIFE system.
4.5.1.1 NAT configuration
The typical configuration will be one or several customer LANs with private addresses, so, in
order to access the Internet, the central facility will have to perform a static or dynamic NAT.
The DVB-RCST will not use its NAT function.
4.5.1.2 IP addressing
The IP addresses used in all corporate facilities will be private. There are two methods to provide
IP addresses to the facility LANs:
• Static and private IP addresses. A static IP address will be assigned manually by the
facility's network administrator to every device that requires its own IP address.
Every IP address will be unique per VSN.
• Dynamic and private IP addresses. A dynamic address will be assigned automatically by
DHCP. The DHCP configuration will be set up by the facility's network administrator in
the RCST.
The network mask used depends on the customer requirements, mainly it will be based on the
LAN size, so any private class C, B or A could be used from a LAN with a few hosts to larger
LANs.
The DVB-RCST will work at IP level, so it will act as a router. Because of that, each facility will
have its own subnet delimited by its network mask and IP packets between facilities will be
routed through the DVB-RCST.
Only a single VPN per VSN can be configured.
4.5.1.3 Multi-station
The facilities LAN will be configured in a multi-station configuration.
4.5.2 RCST functionality
The main functionality that will be developed in SATLIFE regarding the LAN interconnection
service is the PEP in mesh mode.
4.5.2.1 PEP in mesh configuration
In satellite networks, the efficiency of standard TCP suffers over error-prone, long-delay links.
Although some performance improvement is possible by simply tuning the TCP parameters, such
as the TCP window size, in general a TCP/IP PEP is required to ensure that TCP connections over
satellite provide performance similar to terrestrial broadband.
In SATLIFE a PEP will be developed, oriented to work in a mesh configuration. This mesh
configuration is the one typically used in a LAN interconnection service, where several facilities
with their own DVB-RCSTs connect to each other to exchange information through a single
satellite hop.
Thanks to this PEP, the performance of common TCP-based applications such as
HTTP and FTP download is expected to increase.
The mesh PEP will be available only in transparent configuration; a mesh PEP in regenerative mode is
out of the scope of the SATLIFE project.
4.5.3 LAN interconnection services
The target of this service is to integrate a LAN interconnection service to check the
enhancements developed in the SATLIFE project. For this purpose, the following scenarios will be configured:
• Remote desktop and management between two computers placed in different
locations/divisions of the same company.
• Netmeeting applications between the employees of the two companies whose LANs are
interconnected.
• File repository access and transfer as allowed by current operating systems between two
connected LANs.
• Corporate web resources: corporate site, services, etc.
• An email client running in a Windows S.O. and an email server on the Internet.
Reception and transmission operations will be measured.
4.6 STREAMING
Streaming video and audio consists of the transmission of audio and video contents to final users,
who are able to view them without having to wait for the entire file to be downloaded.
Streaming is an asymmetrical service where the communication channel from the server to the
users is the most used. The most widespread form of video streaming is one-to-many or
one-to-all, for example broadcast television. Another form of streaming is point-to-point or
one-to-one, for example a unicast video communication over the Internet. Finally, another form is
point-to-multipoint or one-to-many, as in IP multicast over the Internet (multiple unicast is also a
form of point-to-multipoint, but multicast is more efficient).
Broadcast communications have been considered in previous chapters, so in this chapter we will
focus on unicast and multicast streaming.
In general, as streaming doesn't involve user interactivity, the end-to-end latency isn't a critical
factor, so latency of many seconds or potentially even minutes will be acceptable.
Packet-switched networks, such as Ethernet LANs and the Internet, are shared networks where
the individual packets of data may exhibit variable delay, may arrive out of order, or may be
completely lost.
QoS support can facilitate video streaming, as it can enable a number of capabilities including
provisioning for video data, prioritizing delay-sensitive video data relative to other forms of data
traffic, and also prioritize among the different forms of video data that must be communicated.
SATLIFE will support QoS on the satellite link, so most of the problems of packet-switched
networks do not apply there. However, the terrestrial link usually does not provide any QoS
support and is often referred to as Best Effort (BE), since its basic function is to provide
simple network connectivity with best-effort (no guarantees) packet delivery.
The basic idea of video streaming is to split the video into parts, transmit these parts in
succession, and enable the receiver to decode and playback the video as these parts are received,
without having to wait for the entire video to be delivered. Video streaming can conceptually be
thought of as consisting of the following steps:
1) Partition the compressed video into packets.
2) Start delivery of these packets.
3) Begin decoding and playback at the receiver while the video is still being delivered.
In video streaming there usually is a short delay (usually on the order of 5-15 seconds) between
the start of delivery and the beginning of playback at the client. This delay is commonly referred
to as the pre-roll delay.
Video streaming provides a number of benefits, including a low delay before viewing starts and
low storage requirements, since only a small portion of the video is stored at the client at any
point in time. The length of the delay is given by the time duration of the pre-roll buffer, and the
required storage is approximately given by the amount of data in the pre-roll buffer; for example,
a 10-second pre-roll buffer for a 1.5 Mbit/s stream corresponds to roughly 1.9 MB of client storage.
4.6.1 Streaming protocols
This section briefly describes the network protocol for streaming over IP networks and the
protocols for media delivery, control and description.
4.6.1.1 TCP
TCP has been empirically proven to be sufficient in most cases to support video streaming. TCP
takes network measurements in order to detect packet losses or congestion. When no congestion is
detected, it increases the packet transmission rate at a constant pace, and when congestion is inferred,
the packet transmission rate is halved.
The advantages of TCP are:
• Packet delivery is guaranteed.
• The rate control is stable and scalable.
• It works with firewalls and NAT configurations.
The drawbacks of TCP are:
• Retransmissions for packet loss. The TCP reliable message delivery is unnecessary for
video and audio where losses are tolerable and TCP retransmission causes further jitter
and skew.
• The rate control produces a saw-tooth profile that is not suitable for streaming. In addition,
the flow control and the windowing schemes on the data stream destroy the
temporal relations between video frames and audio packets.
4.6.1.2 UDP
Current streaming systems for the Internet rely instead on the best-effort delivery service in the
form of User Datagram Protocol (UDP). This allows more flexibility both in terms of error
control and rate control. For instance, instead of relying on retransmissions alone, other error
control techniques can be incorporated or substituted.
The advantages of UDP are:
• There are no retransmissions for packet loss.
• There is no flow control. This provides more flexibility for the applications to determine
the appropriate flow control and congestion control procedures.
• UDP doesn't require a back channel for acknowledgements.
The drawbacks of UDP are:
• The packet delivery is not guaranteed.
• Many network firewalls block UDP information. Currently a number of products change
from UDP to HTTP or TCP when UDP can't get through firewall restrictions, but this
reduces the quality of the video.
For media streaming the uncontrollable delay of TCP is unacceptable and compressed media
data is usually transmitted via UDP/IP, although control information is usually transmitted via
TCP/IP.
4.6.1.3 Media delivery protocol
The Real-time Transport Protocol (RTP) and Real-time Control Protocol (RTCP) are IETF
protocols designed to support streaming media. RTP is designed for data transfer and RTCP for
control messages. Note that these protocols do not enable real-time services, only the underlying
network can do this, however they provide functionalities that support real-time services.
RTP does not guarantee QoS or reliable delivery, but provides support for applications with time
constraints by providing a standardized framework for common functionalities such as time
stamps, sequence numbering, and payload specification. RTP enables detection of lost packets.
RTCP provides feedback on quality of data delivery. It provides QoS feedback in terms of
number of lost packets, inter-arrival jitter, delay, etc. RTCP specifies periodic feedback packets,
where the feedback uses no more than 5 % of the total session bandwidth and where there is at
least one feedback message every 5 seconds. The sender can use the feedback to adjust its
operation, e.g. adapt its bit rate. The conventional approach for media streaming is to use
RTP/UDP for the media data and RTCP/TCP or RTCP/UDP for the control.
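As an illustration of the sequence numbering and time stamps mentioned above, the sketch below unpacks the fixed 12-byte RTP header defined in RFC 3550 and uses the sequence number to detect lost packets at the receiver.

    # Parse the fixed RTP header and detect gaps in the sequence numbers.
    import struct

    def parse_rtp(packet):
        b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
        return {"version": b0 >> 6, "marker": b1 >> 7,
                "payload_type": b1 & 0x7F, "sequence": seq,
                "timestamp": ts, "ssrc": ssrc}

    expected = None
    def on_packet(packet):
        global expected
        hdr = parse_rtp(packet)
        if expected is not None and hdr["sequence"] != expected:
            lost = (hdr["sequence"] - expected) & 0xFFFF
            print(lost, "packet(s) lost")           # would be reported back via RTCP
        expected = (hdr["sequence"] + 1) & 0xFFFF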
4.6.1.4 Media Control
Media control is provided by either of two session control protocols: Real-Time Streaming
Protocol (RTSP) or Session Initiation Protocol (SIP). RTSP is commonly used in video
streaming to establish a session. It also supports basic VCR functionalities such as play, pause,
seek and record. SIP is commonly used in voice over IP (VoIP), and it is similar to RTSP, but in
addition it can support user mobility and a number of additional functionalities.
4.6.1.5 Media Description and Announcement
The Session Description Protocol (SDP) provides information describing a session, for example
whether it is video or audio, the specific codec, bit rate, duration, etc. SDP is a common
exchange format used by RTSP for content description purposes, e.g., in 3G wireless systems. It
is also used with the Session Announcement Protocol (SAP) to announce the availability of
multicast programs.
4.6.2 Unicast and multicast streaming
A multicast transmission creates a one-to-many relationship with clients, whereas in a unicast
transmission, each client initiates its own stream, generating many one-to-one connections
between client and server.
With multicast, the server broadcasts a single stream, and users may then access the stream in
progress. Users do not have control of the data stream, so they cannot stop, pause, rewind or
advance it; they can only connect to the multicast stream or disconnect from it, and multicast
streams must be scheduled rather than offered on demand. Multicast streams are much less
regular network traffic to coexist effectively. Multicast streaming is used to broadcast content to
a large audience when network bandwidth and server capacity are limited and when the network
supports multicast.
Multicasting is more efficient in its bandwidth usage because multiple copies of data are not sent
across the network. Data is not sent to clients who do not want it. The user is simply instructing
the network card on the computer to listen to a particular IP address for the multicast. The client
does not have to be identified to the computer originating the multicast. Any number of
computers can receive a multicast transmission without bandwidth saturation (again, only one
copy of the data is sent over the network). However, multicast delivery requires special network
capabilities: to enable multicast, the routers, or in this case the RCST configuration, have to be
updated.
Currently, many streaming servers support multicast and unicast network transports to deliver
streaming media, giving you the flexibility to choose what is right for your audience and what
works best on your network. Multicast streaming will be considered for the following streaming
servers.
4.6.3 Streaming video servers
One of the main aims of this chapter is to analyse the most widespread commercial video
servers, with two objectives:
• To create a test-bed in order to provide streaming contents over the satellite network and
to integrate them as video and audio sources.
• To obtain the best configuration of the DVB-RCST to be used for streaming services.
The main codecs that we are going to use with these servers are MPEG-2, MPEG-4, Windows
Media, Real Media and Quick Time.
Currently, there are not many streaming video servers that support MPEG-4 AVC/H.264 contents,
so the streaming video server developed by Telefónica I+D will also be analysed in this chapter.
4.6.3.1 QuickTime Streaming Server
QuickTime was originally developed in 1991 and is now one of the most widespread video formats,
with more than 100 million copies distributed world-wide. QuickTime's major advantages are its
maturity and the large number of codecs available for it. It features an open plug-in architecture
that allows third-party codecs to be added. MPEG-1 and MPEG-4 codecs are currently available.
The QuickTime server is supported natively on Mac OS. Currently the streaming server runs on Mac
OS X Server v10.3 and delivers audio and video over the Internet using RTP/RTSP. QuickTime
Broadcaster captures and encodes QuickTime content in the latest media formats, including
MP3, 3GPP, MPEG-4 and AAC audio.
The main features of QuickTime Streaming Server v2.0.1 are:
• Delivers contents in real time using RTP/RTSP.
• Supports MPEG-4 streaming.
• Supports multicast streaming.
• Using QuickTime Broadcaster, it is possible to capture and encode QuickTime-compatible
audio and video for live streaming over the web.
• QTSS Publisher streams media via RTP/RTSP to local area networks and over the
Internet.
• No license fees or per-stream charges.
• Encodes and streams content using MPEG-4, enabling any ISO-compliant MPEG-4
player to receive your broadcast event.
• Reduces the delay caused by buffering the media stream prior to playback.
• Provides secure remote administration of broadcasting and streaming settings from an
intuitive interface.
On the downside, Streaming Server 2.0.1 cannot compete with Microsoft or Real's solutions in
terms of features and manageability.
Apple has launched QuickTime Player v7. This version has a new codec for H.264, 3GPP
and 3GPP2. With QuickTime Pro it is possible to create MPEG-4 contents and prepare contents
for streaming.
You need to do the following tasks before you start streaming any content using QuickTime:
• Compress the content (it is possible to use QuickTime Pro, or any application that uses
QuickTime, such as Final Cut Pro, Discreet Cleaner, Final Cut Express, etc., to create MPEG-4
files for streaming with QuickTime Streaming Server).
• Hint the compressed content (contents intended for streaming must be "hinted", that is,
they need a hint track for every streamable media track). QTSS Publisher automatically
applies hint tracks to any movies that have not yet been hinted.
• Upload the content (the QTSS Publisher interface makes it easy to upload prerecorded media to
the streaming server and manage media playlists).
Finally, the final user will need a QuickTime player to display the video.
QuickTime streaming is only supported on Mac OS, so it is not possible to test it under Windows
or Linux platforms. Because of that, QuickTime Streaming Server will not be tested during
the integration phase of the project; however, the QuickTime format will be available using
Darwin Streaming Server (see the next section, 4.6.3.2 Darwin Streaming Server). The streaming
scenario using QuickTime Streaming Server is shown in the following figure.
Figure 19. QuickTime streaming server
4.6.3.2 Darwin Streaming Server
Darwin Streaming Server is the open-source version of Apple's QuickTime Streaming Server
(based on the same code base). Darwin allows you to send streaming media to clients across the
Internet using the standard RTP and RTSP protocols and it runs on a variety of platforms such as
Mac, Windows, Linux and Solaris.
The main features of Darwin Streaming Server v3.0.1 are:
• It is open source.
• It is available for free under Apple's public-source license.
• It is intended for developers who need to stream QuickTime and MPEG-4 contents.
• It is multi-platform.
• It is possible to stream contents using RTP and RTSP over HTTP in addition to UDP. This
means users can view media through most firewalls.
• It supports multicast streaming.
• You can control access to media files using the authentication modules.
• Web-based administration.
The following picture presents the proposed scenario to stream MPEG-4 using RTSP/RTP
over the satellite network (both cases will be considered: streaming live contents and
pre-encoded contents). The main components of this scenario are:
• IP camera: the source of the audio and video is an IP camera that allows connections using
standard RTP/RTSP sessions. The proposed camera is the Vivotek IP3121 or the wireless
version IP7124; this camera encodes video using the MPEG-4 simple profile and audio using the
AMR codec. It is also possible to limit the bandwidth used or the quality of the audio and
video codecs, so it will be useful to test the audio/video transmission over the satellite
network.
• TID session capturer: this software will be an RTP capturer; its main objective will be to
send RTP packets to the Darwin Streaming Server. The main tasks of this capturer are:
1. Initiate a new RTSP session with the camera.
2. Order an RTSP DESCRIBE.
3. Order an RTSP SETUP to obtain the audio and video properties.
4. Order an RTSP PLAY to start receiving audio and video.
5. Receive all RTP audio and video packets and forward them to the Darwin Streaming
Server.
6. Manage the RTCP sessions (one RTCP for the audio and another for the video) to inform
the IP camera about the RTP flows.
7. Order a RTSP TEARDOWN when needed.
• Darwin Streaming Server: it is the streaming server for this scenario. It receives the RTP
packets from the capturer on two different ports (for audio and video). An SDP file will
be needed on the streaming server in order to know what type of data will be received on
each port. The streaming server will handle the final users' connection requests.
• Web administration: it is the web server for the Darwin Streaming Server administration and
configuration.
• DVB-RCST: it is the RCS terminal used to transmit the audio and video over the satellite.
• The player: it is the client application used to display the video. In this scenario any player
that supports MPEG-4 over RTP could be used, for example QuickTime player, Mpegable
player or Real Player.
Figure 20. Darwin streaming server
There are some commercial products, like Sorenson Broadcaster, that allow the user to capture
audio and video from external sources and can be integrated with Darwin Streaming Server. We
have decided to develop the capturer software because these products are not free and because
we only need a specific, well-known functionality to integrate this IP camera with
Darwin Streaming Server.
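The relay performed by the capturer (step 5 above) is sketched below for the video flow; the RTSP dialog with the camera is analogous to the one sketched for the VOD service, and the addresses and ports are assumptions that must match the SDP file installed on the Darwin Streaming Server.

    # Sketch of the RTP relay between the IP camera and Darwin Streaming Server.
    import socket

    DARWIN = ("darwin.example.net", 5004)           # hypothetical video port in the SDP file
    LOCAL_RTP_PORT = 20000                          # client_port announced in RTSP SETUP

    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("", LOCAL_RTP_PORT))                   # RTP packets arriving from the camera
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    while True:
        packet, _ = rx.recvfrom(2048)
        tx.sendto(packet, DARWIN)                   # Darwin redistributes to the players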
4.6.3.3 Windows Media Series
Microsoft's streaming server (called Microsoft Windows Media 9 Series) is an end-to-end
platform for creating, delivering, and playing digital media content over the entire range of
enterprise networks, from low-bandwidth to high-bandwidth networks. This product is free and
supplied as standard with Windows 2000 and 2003 Server, and as a free download for
Windows NT Server. Microsoft has not open-sourced the code, so it is not supported on other
platforms. This is considered a major disadvantage as far as flexibility is concerned.
Windows media series contains the following components:
• Windows Media Player: Windows Media Player (currently at version 10) is the newcomer to
the streaming world. Because of this there are fewer codecs available for it. Microsoft gives
the player away for free and it is quickly gaining in popularity.
• Windows Media Encoder: Windows Media Encoder 9 guides users through the basic steps of
authoring, from quality settings to bandwidth selection. Media Encoder 9 lets you create
streaming content by linking to capture devices or by converting existing files into
Microsoft's streaming format. Its flexible encoder includes constant, variable and multiple
bit rates and one- or two-pass encoding.
• Windows Media Services: Windows Media Services 9 Series is the server component of the
Windows Media 9 Series platform. It supports the main protocols, such as RTSP, HTTP, MMS
and IGMP, and transmission over TCP or UDP. Multicast streaming is only available in
Windows Media Services on Windows Server 2003.
• Windows Media Audio and Video codecs: the main codecs are Windows Media Video 9 and
Windows Media Audio 9.
The following picture presents the proposed scenario to stream Windows Media contents
using RTSP/RTP over the satellite network (both cases will be considered: streaming live
contents and pre-encoded contents):
Figure 21. Windows Media streaming server
4.6.3.4 RealNetworks streaming server
The streaming server from RealNetworks is Helix Server; it is available for most OS platforms,
such as Windows and Linux, but is free only with a basic 25-user licence. Streaming is RealNetworks'
core business, so they cannot subsidise the technology in favour of market share as Apple and
Microsoft do. Serving more than a couple of hundred simultaneous streams can become quite
expensive, which is one major drawback of the system.
The streaming solution from RealNetworks contains the following components:
• RealPlayer: RealPlayer (currently at version 10) is a very popular player which is very
widely distributed and available for all major OS platforms. RealNetworks claims over 70%
of the Internet streaming market, with the player being installed on over 90% of home PCs.
This player is free and plays all major media formats (over 50 media formats), including
RealVideo and RealAudio.
• RealProducer: RealProducer (currently at version 10) is the encoder tool to create RealAudio
and RealVideo contents for live broadcast or on-demand streaming. The encoder includes
high-quality video scaling, constant and variable bit rates and one- or two-pass encoding.
RealProducer comes with a bandwidth simulator that will give you a good idea of what your
audience will see.
RealNetworks has also integrated another useful feature into the encoder: publishing. With a
few clicks, RealProducer Plus will generate an HTML page that includes all the necessary
tags and code; you can simply cut and paste the output into your page.
• Helix Server: Helix Server is the streaming server from RealNetworks. There are two types of server: Helix Server, which allows you to deliver RealAudio and RealVideo 10 to RealPlayers, and Helix Universal Server, which allows you to deliver more than 50 data types, including Real, Windows and MPEG-4 contents, and supports multicast streaming.
• RealAudio and RealVideo codecs: the main codecs are RealVideo 10 and RealAudio 10.
The proposed scenario to stream MPEG-4 and Real contents using RTSP/RTP over the satellite
network (both scenarios will be considered: streaming live contents and pre-encoded contents) is
presented in the following picture:
Figure 22. RealNetworks streaming server
The previous figure shows two cameras: the first one is an analog or digital camera that will be the source of audio and video for real-time broadcasting, using the HelixProducer encoder to obtain RealAudio and RealVideo contents; the second camera is an IP camera that will be used for the RTSP/RTP scenario with MPEG-4 contents, in which case the TID session capturer will be used (see paragraph 4.6.3.2 Darwin Streaming Server for details).
4.6.3.5 Telefónica I+D streaming server
Telefónica I+D streaming server is a system that streams unicast and multicast media contents,
such as video and audio, through an IP network. It is designed to serve different video contents
carried in MPEG-2 Transport Stream supporting MPEG-2 video.
This streaming server is the same one that has been used for the VOD scenario, so its components, modules and architecture are detailed in chapter 4.2.1.1 Video on demand server. In the framework of the SATLIFE project, the streaming server will be upgraded to support MPEG-4 AVC (H.264) contents in unicast and multicast modes.
In this scenario, the client must use an application which is able to listen to a unicast port or
multicast group, as well as to play the media stream received. In other words, the client must be a
media player which is able to:
• Directly listen to an IP address and port, and
• Decode the media streams in real time, specifically H.264 video.
Two possibilities are currently available as PC clients:
• VLC (VideoLAN Client), which can effectively decode H.264 but uses the ffmpeg (libavcodec) decoder. This is not a stable decoder yet.
• Moonlight Elecard Media Player, which makes use of Windows DirectShow filters for the decoding process. As for H.264, Moonlight Cordless has implemented an H.264 decoder DirectShow filter, which gives good results. It can be downloaded for free from the company's website and currently outperforms other free software decoders.
4.6.3.5.1 MPEG-4 AVC/H.264 contents
An interesting feature of this design is the possibility of developing a content ingest subsystem that allows the server to stream H.264 video over MPEG-2 Transport Stream. Server modularity allows the introduction of new video formats over Transport Stream just by modifying the ingest process.
H.264 compatibility is achieved by two means: encoding and analysis.
Contents may not originally be available in H.264 video format over MPEG-2 TS. Therefore, a transcoding and multiplexing process is necessary before storing the content in the video repository. VLC software provides both functionalities by using two different software libraries:
• libavformat, from the ffmpeg project, which can multiplex different media streams into a variety of media file formats. Specifically, it is prepared to multiplex H.264 streams and associated audio in MPEG-2 Transport Streams, as defined in MPEG-2:2000 Amd. 3 (2004) [4].
• x264, which is a free library for encoding H.264 video streams. x264 is still at an early development stage; however, it can effectively encode H.264 video contents while keeping good image quality.
The transcoding process is then performed automatically by VLC, which:
• Demultiplexes the audio and video streams from the native format, typically MPEG-2 TS with MPEG-2 video and audio.
• Transcodes the video stream to H.264.
• Remultiplexes the audio and video streams in MPEG-2 TS.
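As a purely illustrative sketch (not part of this design), the transcoding step above could be driven from a small Python script that invokes VLC with an appropriate stream output chain; the file names and the 2 Mbps video bit rate used below are placeholder assumptions.

import subprocess

# Illustrative sketch: drive a VLC transcoding job (MPEG-2 TS in, H.264 over MPEG-2 TS out).
# The file names and the 2 Mbps video bit rate are placeholders, not values fixed by this design.
def transcode_to_h264_ts(input_file: str, output_file: str, video_kbps: int = 2000) -> int:
    """Demultiplex, transcode the video to H.264 and remultiplex into MPEG-2 TS using VLC."""
    sout = ("#transcode{vcodec=h264,vb=%d,acodec=mpga,ab=128}"
            ":std{access=file,mux=ts,dst=%s}") % (video_kbps, output_file)
    cmd = ["vlc", "-I", "dummy", input_file, "--sout", sout, "vlc://quit"]
    return subprocess.call(cmd)

if __name__ == "__main__":
    transcode_to_h264_ts("content_mpeg2.ts", "content_h264.ts")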
H.264 is a very complex video format, which gives the encoder a variety of tools to achieve good quality at a very low bit rate. Implementing all of them in a software decoder can make it difficult to run on a personal computer, because of the very complex decoding process that must be performed in real time. Furthermore, it is necessary to analyze the video stream to determine whether it is suitable for streaming or whether, on the contrary, it must be recoded using different parameters (mainly profile and level).
H.264 defines three profiles (baseline, main and extended) that limit the set of tools allowed in
the encoding process. For each profile there are different levels, which limit allowed values for
some parameters, such as frame rate or bit rate. The main profile is the most suitable for
streaming applications. Depending on the resolution and frame-rate required, different levels can
be used. Assuming 25 frames per second, a good recommendation can be:
• Level 1.1 (< 192 kbps) for QCIF.
• Level 2.0 (< 2 Mbps) for CIF.
• Level 3.0 (< 10 Mbps) for 625-SD (like PAL) or VGA.
The bit-rates mentioned are the upper limits. Good video quality can be achieved at lower rates,
such as 2.0 Mbps for 625-SD resolution.
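For illustration only, this recommendation can be captured as a small lookup in the ingest logic; the Python sketch below simply restates the listed levels and bit-rate caps, and the resolution names are used as assumed keys.

# Illustrative restatement of the level recommendation above (25 fps assumed).
# The resolution names are used as keys; the bit-rate caps restate the listed upper limits.
RECOMMENDED_LEVELS = {
    "QCIF":   {"level": "1.1", "max_kbps": 192},
    "CIF":    {"level": "2.0", "max_kbps": 2000},
    "625-SD": {"level": "3.0", "max_kbps": 10000},
    "VGA":    {"level": "3.0", "max_kbps": 10000},
}

def recommended_level(resolution: str, target_kbps: int):
    """Return the suggested level, or None if the target bit rate exceeds the cap."""
    entry = RECOMMENDED_LEVELS.get(resolution)
    if entry is None or target_kbps > entry["max_kbps"]:
        return None
    return entry["level"]

print(recommended_level("625-SD", 2000))    # -> '3.0' (good quality well below the 10 Mbps cap)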
4.6.3.5.2 MPEG-4 AVC/H.264 Video analyzer
To control the input video parameters, it is necessary to develop an efficient H.264 video
analyzer, which can get all the video information needed for the good performance of the server.
This analyzer must be able to read an H.264 video stream in MPEG-2 TS and extract the
following information:
• Level and profile.
• Resolution and frame rate.
• Encoding parameters, as stored in the Sequence and Picture Parameter Sets of the stream.
• GOP structure.
This information will be used to decide whether an incoming H.264 video stream must be
recoded or not.
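As an illustrative sketch of the analyzer front end, the following Python fragment locates the first SPS NAL unit in an H.264 Annex-B elementary stream and reads the profile and level from it. It assumes the elementary stream has already been extracted from the MPEG-2 TS; parsing the resolution, frame rate and GOP structure would additionally require a full Exp-Golomb decoder, which is omitted here.

# Illustrative analyzer front end: locate the first SPS NAL unit in an H.264 Annex-B
# elementary stream and read profile_idc / level_idc. It assumes the elementary stream has
# already been extracted from the MPEG-2 TS.
PROFILES = {66: "baseline", 77: "main", 88: "extended"}

def find_sps(data: bytes):
    """Return (profile_idc, level_idc) of the first SPS found, or None."""
    i = 0
    while True:
        i = data.find(b"\x00\x00\x01", i)
        if i < 0 or i + 7 > len(data):
            return None
        nal_type = data[i + 3] & 0x1F            # low 5 bits of the NAL header byte
        if nal_type == 7:                         # 7 = Sequence Parameter Set
            profile_idc = data[i + 4]             # first byte of the SPS payload
            level_idc = data[i + 6]               # byte following the constraint flags
            return profile_idc, level_idc
        i += 3

if __name__ == "__main__":
    with open("content_h264.es", "rb") as f:      # placeholder file name
        sps = find_sps(f.read())
    if sps:
        profile_idc, level_idc = sps
        print(PROFILES.get(profile_idc, "other"), "profile, level", level_idc / 10.0)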
4.7 SOFTWARE DOWNLOAD
Software download service is a satellite data service targeting PCs that provides value-added
services at high speed. The broadcast service is similar to TV point-to-multipoint transmissions.
Basically, the contents are transmitted in a carousel, which enables users to access relevant
Internet contents without being on-line since contents are received continuously through the
satellite link.
In the present chapter, the hardware and software architecture of the software download service will be presented. Through this service, users will access a huge diversity of contents in a near-on-demand fashion. Data will additionally be available not only in broadcast but also in the more versatile multicast mode.
Figure 23. Satellite connection scheme
In the previous figure, the user terminal is a PC with a DVB-S card, or some kind of multimedia set-top box with a hard disk. The proposal here is to use transparent DVB-S and thus encapsulate the software into MPEG-2 TS; alternatively, DVB-RCS and IP could be used, which is regenerated in the satellite and received as DVB-S in the user terminal.
Thanks to the high data rates offered by the downlink channel, large amounts of information can be placed in the user's hands very quickly, without the need for an uplink channel.
4.7.1 Push mode
Besides its general purpose as an Internet gateway, satellite usage for data transmission can be extended to offer further kinds of services, such as the above-mentioned multicasting method, software download, etc. All of these value-added services make use of the so-called PUSH mode, through which media can be delivered much faster than typical Internet access can provide.
Helped by PUSH mode, many assets can therefore be distributed from the broadcast head-end through multicast-addressed carousels. This approach allows information to be selectively served and closely fitted to customers' requirements with no superfluous waste of bandwidth.
In order to allow users to subscribe to the appropriate multicast, a content guide will be sent out so that they can choose among the contents they are authorized to access.
Consequently, PUSH mode is especially well-suited for multimedia broadcasting, software
download, etc.
Furthermore, these contents may vary depending on the day or hour, even matching special bandwidth usage profiles aimed at making the most of the remaining resources.
4.7.2 High-level Architecture
4.7.2.1 Introduction
In the course of this chapter, the high-level architecture will be presented. First of all, the software download service will be described roughly, and then it will be broken down into its constituent parts.
To begin the main description, it is convenient to remark that the software download service platform features PUSH mode content broadcasting.
4.7.2.2 Functional Architecture
This platform is functionally broken down into two parts:
• PUSH mode data distribution system, by means of which a user will be able to access all of the contents offered via broadcast or multicast methods. These contents could be of diverse nature and source: from static web pages to software applications or media such as audio, video, etc. In any case, users will not need a phone line or any other return channel to enjoy those contents.
• PUSH mode data receiving system, which consists of a software and hardware bundle that will allow customers to access PUSH contents. Likewise, users will be provided with a proxy/cache software application for PUSH contents, first to store everything broadcast and then to share it among all of the users on a LAN.
Since these parts will be integrated with each other, the need arises for a management system intended to control and oversee all hardware and software elements and the broadcast contents.
4.7.3 Components and modules
The main components of the software download service are located at two separate places:
• Broadcast center: the broadcast center is composed of the PUSH mode distribution system and the hardware elements required for digital data DVB broadcasting through satellite, such as the SP unit, the DVB-RCST, the antenna and other network components such as routers, firewalls, etc.
• User terminal: the user terminal comprises the hardware elements needed to receive satellite transmissions (antenna, LNB, decoder board), other hardware such as a desktop PC, network interface and LAN equipment (if available), and software applications such as the proxy/cache (to receive and store PUSH mode data) and a regular browser able to display the downloaded information.
Figure 24. Component allocation
4.7.3.1 The broadcast center
The broadcast center is divided into the following modules:
• Content capturer: this module obtains the contents from the Internet. It provides an interface to program downloads from the Internet. The downloaded content could be a single web page or an entire web site.
• UDP packer: the main task of this module is to prepare the content for the multicast transmission mode. Multicast mode uses UDP packets.
• Scheduler: the scheduler is responsible for sending PUSH contents towards the emission modules according to the planning established by the management system. Such planning means that a schedule must be issued; it must contain summaries of each content and will be sent together with the rest of the media.
• Flow server: this module will send the contents and the program guide to the satellite gateway at the specified bandwidth.
• Satellite gateway: the satellite gateway is the link between the IP world and the satellite medium. This module is in fact an SP unit plus an SP RCST.
• Management system: the management system is in charge of the management and monitoring of the broadcast center.
The path which a content follows inside the software download service is shown in the next
figure:
Figure 25. Broadcast center software architecture
4.7.3.1.1 Content capturer
The content capturer module allows automatic captures of new contents from the Internet to be programmed. The management system will issue a capture order with the following parameters (a sketch of such an order is given after the list):
• The URL to download: it could be a single web page, file, video or an entire web site.
• The output directory where the content will be stored.
• The number of levels of links that will be downloaded.
• The number of retries.
• The date and time when the capture will begin.
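As a hedged illustration, a capture order carrying the parameters above could be mapped onto a recursive wget invocation as sketched below; the field names and the use of wget are assumptions made for this example, not part of the design.

import subprocess
from dataclasses import dataclass

# Illustrative sketch: run a programmed capture order with wget. The field names and the
# choice of wget are assumptions; scheduling of the start date/time is left to the
# management system and is not reproduced here.
@dataclass
class CaptureOrder:
    url: str            # single web page, file, video or an entire web site
    output_dir: str     # directory where the content will be stored
    link_levels: int    # number of levels of links to download
    retries: int        # number of retries

def run_capture(order: CaptureOrder) -> int:
    cmd = [
        "wget", "--recursive",
        "--level", str(order.link_levels),
        "--tries", str(order.retries),
        "--directory-prefix", order.output_dir,
        order.url,
    ]
    return subprocess.call(cmd)

if __name__ == "__main__":
    run_capture(CaptureOrder("http://example.com/", "/var/push/content1", 2, 3))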
4.7.3.1.2 UDP Packer
Contents could be of several types, like web pages, binaries, audio or video files. All contents must be formatted before their transmission, and the UDP packer performs this task.
All contents will be received via multicast by the proxy/cache module on the final user side, so the transmission protocol used at application level will be HTTP (we will use a proprietary version of the PUT method). At IP level, the transmission will use multicast, so the transport protocol will be UDP.
The UDP packer module will perform the following tasks:
• First of all, it will add an HTTP header to each file. This header will consist of:
o PUT header.
o Content type.
o Content length.
• Later, the content will be packed in UDP. Each UDP packet will carry:
o Source IP address.
o Target IP address (multicast).
o Source UDP port.
o Target UDP port.
• In the case of a content with several files, all files will be joined into a single file. Each
file will contain its file name and directory in its HTTP header.
• Finally, a text file with a description of the content will be created. This file will contain:
o Content file name.
o Content description.
o Source/target IP addresses and ports.
o Content size.
o URL used by the final user to access this content.
For each UDP packer job, a new thread will be created. This thread will be deleted when the
UDP packer job ends.
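A minimal sketch of the packing and multicast transmission steps is shown below in Python; the HTTP header fields follow the description above, while the multicast group, port and datagram size are placeholders.

import socket

# Illustrative sketch of the UDP packer: prepend an HTTP PUT-style header to a file and send
# it to a multicast group in fixed-size UDP datagrams. The multicast group, port and datagram
# size are placeholders, not values fixed by the design.
MCAST_GROUP = "239.1.1.1"
MCAST_PORT = 5000
CHUNK_SIZE = 1400              # payload bytes per UDP datagram

def pack_and_send(file_name: str, content_type: str) -> None:
    with open(file_name, "rb") as f:
        body = f.read()
    header = ("PUT /%s HTTP/1.1\r\n"
              "Content-Type: %s\r\n"
              "Content-Length: %d\r\n\r\n" % (file_name, content_type, len(body))).encode("ascii")
    payload = header + body

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
    for offset in range(0, len(payload), CHUNK_SIZE):
        sock.sendto(payload[offset:offset + CHUNK_SIZE], (MCAST_GROUP, MCAST_PORT))
    sock.close()

if __name__ == "__main__":
    pack_and_send("index.html", "text/html")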
4.7.3.1.3 Scheduler
The scheduler has two main tasks:
• Transmit the contents according to the schedule that the operator has programmed.
• Add a DVB-MPE header for each UDP packet.
For each scheduled content to be transmitted a new thread will be created to do this job. This
thread will be deleted when the scheduled job ends.
Each schedule job has an internal value which identifies its status. The status value could be:
• Inactive: when it is registered but not active.
• Active: when it is ready to transmit, but it isn’t transmitting.
• Transmitting: when it is transmitting the content.
• Finished: when the content has been transmitted and it is going to be deleted.
4.7.3.1.4 Flow server
The flow server is the module in charge of sending the contents and the program guide when they are demanded by the scheduler. Contents are sent to the flow server as UDP packets with an extra MPE header in each packet. The flow server splits these packets into one or several 188-byte MPEG-2 TS packets and writes them to its local ASI interface in order to send them to the Service Provider unit and finally to the SP RCST.
The flow server will manage several flows, typically one flow per content, but it will be possible to associate several contents with a single flow. Each flow is associated with a unique DVB PID.
The basic operations with flows will be:
• Add a new flow.
• Remove a flow
• Start a flow
• Stop a flow
• Set or change the file associated to a flow
• Set or change the bit rate associated to a flow
• Set or change the PID associated to a flow
4.7.3.1.5 Satellite gateway
The satellite gateway is composed of the Service Provider Unit and the SP RCST.
The interface between the flow server and the SP Unit will be the transport stream that will be
generated by the flow server and transported to the satellite gateway using an ASI interface.
The interface between the SP unit and the SP RCST will be the same as the Video Service
Provider and is detailed in Annex 13 of the document D220 DVB-RCS regenerative (and
transparent) Network Aspects Report.
The SP RCST accepts MPEG-2 packets over RTP/UDP. When the SP RCST receives an RTP/UDP packet, it removes the RTP/UDP headers to obtain the payload. This payload is composed of one or several MPEG-2 TS packets (the maximum payload of a UDP packet is 65000 bytes, so it is possible to send up to 345 TS packets of 188 bytes per RTP/UDP packet). Finally, the MPEG-2 TS packets are transmitted to the satellite using DVB-RCS; the satellite regenerates the payload into a new DVB-S signal that is received by the final users.
The next figure shows how a content is divided into packets (a sketch of this packetisation is given after the figure):
• First, the content is divided into UDP packets of fixed length and a UDP header is added to each packet.
• Then an MPE header is added to each UDP packet in order to transmit IP over MPEG-2.
• Later, each MPE packet is divided into 188-byte packets; each 188-byte packet is an MPEG-2 TS packet (4-byte header plus 184-byte payload).
• Finally, the flow server sends these TS packets to the SP unit, which encapsulates all of the TS packets in RTP/UDP for their transmission to the SP RCST.
Figure 26. Packet division structure
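For illustration, the split into 188-byte TS packets and the grouping of TS packets into RTP/UDP payloads could look as sketched below; the 4-byte TS header written here is simplified, the DVB-MPE step is omitted and the PID value is a placeholder supplied by the caller.

# Illustrative sketch of the split into 188-byte TS packets and of the grouping of TS packets
# into RTP/UDP payloads for the SP unit. The TS header is simplified (sync byte, PID and
# continuity counter only).
TS_PACKET_SIZE = 188
TS_PAYLOAD_SIZE = 184
MAX_UDP_PAYLOAD = 65000

def to_ts_packets(mpe_section: bytes, pid: int) -> list:
    """Split one MPE section into 188-byte TS packets (4-byte header plus 184-byte payload)."""
    packets = []
    for counter, offset in enumerate(range(0, len(mpe_section), TS_PAYLOAD_SIZE)):
        chunk = mpe_section[offset:offset + TS_PAYLOAD_SIZE].ljust(TS_PAYLOAD_SIZE, b"\xff")
        header = bytes([
            0x47,                          # sync byte
            (pid >> 8) & 0x1F,             # PID, high bits (flags left at zero)
            pid & 0xFF,                    # PID, low bits
            0x10 | (counter & 0x0F),       # payload only + continuity counter
        ])
        packets.append(header + chunk)
    return packets

def group_for_rtp(ts_packets: list) -> list:
    """Group TS packets into payloads that respect the 65000-byte UDP payload limit."""
    per_payload = MAX_UDP_PAYLOAD // TS_PACKET_SIZE      # 345 TS packets per RTP/UDP packet
    return [b"".join(ts_packets[i:i + per_payload])
            for i in range(0, len(ts_packets), per_payload)]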
4.7.3.1.6 Management system
The broadcast center management system is based on a commercial tool for system, process and
application management. It is in charge of the following tasks:
• Content management.
• System, process and application management.
4.7.3.1.6.1 Content management
The content management system allows content management and programming. It will be based on a web interface that allows the operator to interact with the processes of the system.
The main components of the content management system are:
• Content capturer interface: through this interface, the operator will add new contents that subsequently can be transmitted to the users. Contents could be added from several locations:
o It will be possible to acquire contents from the Internet.
o It will be possible to acquire contents from local servers.
• UDP packer interface: in order to broadcast a content, it needs to be in packet form. The content management system will be in charge of this task, so before starting to send a new content to the final users, the operator should transform the content into packets using this interface.
• Scheduler interface: the content management system will provide a tool to build and edit the programming using the existing contents. This tool will insert new entries into the scheduler depending on the programming, and it will generate the static program guide that will be sent to the final users to inform them about the programmed contents that they will receive.
• Channel administration interface: through this interface the operator will manage the association between ports and flows. This interface will allow the addition and deletion of programs for content transmission.
The operator will have a web interface to manage the contents of the software download service.
This web interface will be divided into several parts:
• Program guide: the program guide contains information about the current and upcoming contents that the service is transmitting or will transmit. In more detail, it will be in charge of:
o Show events. It will be possible to select the date and the channel of the events that the operator wants to see.
o Add events. It will be possible to add a new event; the operator needs to specify
the following data:
§ Date and time of the event.
§ Number of times that this event is repeated.
§ Number of hours between repetitions.
§ Content to send.
o Delete events. It will be possible to delete an event; the operator needs to specify
the following data:
§ Date and time of the event.
§ Content to delete.
o Generate program guide: It will be possible to generate the updated program
guide that will be sent to the final users.
• Content administration: the content administration allows one to manage the system
contents. The main operations that it will allow are:
o Add new contents. It will be possible to add two types of contents:
§ From a local file: in this case, it will be necessary to specify the following
parameters:
• Output directory where the content will be stored.
• Name of the new content.
• Minimum bitrate used to transmit this content.
• Maximum bitrate used to transmit this content.
• Content description.
• Local file name where the content is located. This file will be a compressed file such as a tar, gzip or zip file.
§ From Internet: in this case, it will be necessary to specify the following
parameters:
• Output directory where the content will be stored.
• Name of the new content.
• Minimum bitrate used to transmit this content.
• Maximum bitrate used to transmit this content.
• Description of this content.
• Full URL where the content is located.
• Number of link levels that will be downloaded.
• Time to start the download.
• Date to start the download.
• Number of times that the content will be downloaded and frequency in hours of each repetition.
o List active downloads. It will be possible to show the downloaded contents and their status. For each content, all of its parameters (output, URL, start date, start time, etc.) and its exit status (that is, the download process exit code) will be shown.
o Stop active downloads. From the previous list, it will be possible to delete any completed or pending process.
o List active packing processes. It will be possible to show the processes that are currently packing contents. For each process, the target port, the web server and its name will be shown.
o Scheduler list. It will be possible to list the contents that will be sent to the users.
The following parameters will be shown for each scheduler:
§ Start date of the transmission.
§ Start time of the transmission.
§ End date of the transmission.
§ End time of the transmission.
§ Number of times that this content will be transmitted.
§ Bitrate to be used.
§ Complete file name to be used.
§ Description of this content.
§ Status of the scheduler. It will be one of the following:
• Inactive: when it is registered but not active.
• Active: when it is ready to transmit, but it isn’t transmitting.
• Transmitting: when it is transmitting the content.
• Finished: when the content has been transmitted.
§ Port where the content is sent.
§ Exit code from the scheduler when it has finished.
o Contents query. It will be possible to show a list of contents. The following
parameters will be shown for each content:
§ Content name.
§ Content size in KB.
§ Date in which the content was added.
§ Number of times that the content is transmitted.
§ Content description.
• Channel administration: the channel administration allows management of the relations between flows and reception ports. A channel defines the reception port on the client side where the content will be received; usually a flow will be associated with a single port, so the contents sent using a flow will reach the related port on the client side. The main operations that it will allow are:
o Add a channel: to add a channel related to a flow.
o Delete a channel: to delete an existing channel.
4.7.3.1.6.2 System, process and application management
The management system is based on the commercial tool Patrol. Patrol is a management tool to
control the resources, processes and applications that compose a software and hardware
platform.
Patrol 2000, owned by BMC Software, is based on a manager/agent architecture, where the agents get the management information (applications, parameters for monitoring, thresholds, actions, etc.) from the KMs (Knowledge Modules) that reside in the manager and in the agents. The information is received in the manager through the Patrol Console; the Patrol Console allows visualization of all managed objects and their related events.
Patrol is based on the following components:
• Knowledge modules: the knowledge modules are a set of files that detail the resources
that the agents must manage. In addition, the KMs contain information related with the
object identity, monitored parameters and actions that should be done when a resource
changes its state. Patrol provides many KMs to manage other commercial tools and applications, but it also lets developers build their own KMs to manage their own applications. In the software download service we will develop some KMs to manage the software download platform.
• Patrol agent: this component is installed in every system and it manages the host
resources like applications and processes. The main tasks of the patrol agent are:
o Obtain the system information depending on the KMs installed.
o Store the status and the event information and report it to the Patrol Console.
o Generate events.
o Execute commands requested by the user through the Patrol Console.
• Patrol Console: the Patrol Console allows centralized management of the system using an object-oriented graphical interface. The resources managed by the Patrol agents are presented in the Patrol Console as icons that show their status.
The following image presents the architecture of a Patrol-based management system, with its components:
Figure 27. Patrol architecture
The Patrol Console allows you to discover the systems that are ready for management, i.e. those running a Patrol agent. The Patrol agents obtain and store the resource information and send it to the Patrol Console using UDP/TCP port 3181.
A new KM will be developed to manage and monitor the broadcast center; this KM will be divided into several instances:
• Flow server. This section of the KM will be divided into:
o IP Flows: a flow associates IP flow data with a DVB program, using a fixed bandwidth. Through this section it will be possible to:
§ Add a new flow. The information requested is the PID of the DVB stream
and the bitrate in bps.
§ Show the parameters of a flow. It will present the PID, the bitrate and the
status of the selected flow.
§ Modify the parameters of a flow. It will be possible to modify the PID and
the bitrate of the selected flow.
§ Delete a flow. It will delete the selected flow.
§ Enable a flow. It will begin to transmit the content related with this flow to
the satellite gateway.
§ Stop a flow. It will stop the processing of a flow.
o PSI flows: the PSI flows transmit the DVB-SI needed by the receiver to recover
the IP flows. Through this section it will be possible to:
§ Regenerate PSI: when an IP flow is added or deleted, the PSI must be updated so that the receiver is able to detect the modifications.
§ Transmit PSI: it will transmit the PSI to the satellite gateway.
§ Not transmit PSI: it will stop the transmission of the PSI to the satellite
gateway.
§ Show PSI: it will inform about the transmission status of the PSI.
• Processes: this section of the KM will be in charge of monitoring, and will allow the operator to start, stop or view the trace file of the main service processes:
o SF_IP
o PushMgr
o UdpFile
o WgetMgr
Each process will be presented as a semaphore. When the semaphore is green, the process is working without problems; when it is yellow, the process has detected a warning condition; and finally, when it is red, the process is not running.
4.7.3.2 User Terminal
In the previous sections, the components that make up the broadcast center and provide the software download service over the satellite network were presented. At this point, the hardware and software elements will be described, with special emphasis on the user side.
4.7.3.2.1 Hardware Architecture
The user will need the following hardware:
• A reception antenna. The size of this antenna should be the minimum required to receive a full transport stream from the Amazonas satellite.
• A USB or PCI DVB satellite receiver. This satellite receiver must be DVB-MPE compliant.
• A standard PC with Windows installed.
4.7.3.2.2 Software Architecture
The following figure shows the software parts comprised in the customer equipment:
Figure 28. User terminal software architecture
The whole architecture is based on the use of a proxy/cache scheme. This proxy incorporates a variety of modules that read the multicast UDP packets sent by the broadcast center.
The proxy PUSH module will be responsible for gathering all of the UDP traffic that arrives from the satellite and sending it to the cache memory. Every time a user requests some content using the service browser application, the proxy will provide the demanded content immediately, without the need to wait for its next transmission.
In addition to these modules, an Interactive Program Guide will be available. This guide is generated by the PUSH scheduler and contains all of the information about the content programming. It will consist of a web interface, downloaded onto the PC, that will not only make it possible to look up the data being broadcast but also to decide which data to store. As added functionality, it will offer usage statistics, stored item listings, etc.
Because this guide will be sent through a special channel within the data carousel, it is easy to update the user software as soon as newer versions are released. In the same way, cache modules can be updated automatically in a dynamic fashion.
Finally, the user browser must be configured to make use of the built-in proxy, installed on the host itself.
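As an illustrative sketch of the receive side of the proxy PUSH module, the following Python fragment joins the multicast group used by the broadcast center and stores the received datagrams in a cache file; the group address, port and cache path are placeholders, and the reassembly of the HTTP PUT framing and the HTTP proxy front end towards the browser are not shown.

import socket
import struct

# Illustrative sketch of the proxy PUSH receive side: join the multicast group and append
# the received datagrams to a cache file. Group, port and cache path are placeholders.
MCAST_GROUP = "239.1.1.1"
MCAST_PORT = 5000
CACHE_FILE = "/var/cache/push/current.bin"

def receive_push_content() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(MCAST_GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    with open(CACHE_FILE, "ab") as cache:
        while True:
            data, _addr = sock.recvfrom(65535)
            cache.write(data)              # a real receiver would parse the HTTP header here

if __name__ == "__main__":
    receive_push_content()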
4.7.3.2.3 User terminal on a LAN
It is possible to use other PCs on the same LAN as the gateway PC (the PC that holds the satellite link). In this case, it is necessary to set up each one of these hosts to use the gateway PC as a software proxy.
4.8 WEBCAST WITH LOW INTERACTIVITY FOR CORPORATIVE SERVICES (E-LEARNING, ETC.)
The system that will be integrated in SATLIFE for satellite trials provides web-based, corporative-oriented services, including a wide range of functionalities that fulfil the SATLIFE applications defined as e-Learning. It copes with the SATLIFE e-Learning service requirements for synchronous and asynchronous modes with low interactivity.
It is a background product of the SME company SIRE (Sistemas y Redes Telemáticas), which is currently providing these services to corporative clients, and it will be adapted to SATLIFE system trials and evaluation.
4.8.1 Corporative Applications & Services
The system to be integrated in SATLIFE supports different services and applications. The main
characteristics of these corporative services can be summarised as:
• Web based
• Casting (open or group oriented, live or on-demand)
• Interactivity (Low rate)
• Standard Users (Internet explorer and standard PC)
• Asymmetric and adaptive low bandwidth requirements.
With these characteristics, different types of applications are supported, such as:
• e-Learning
• Media Repository
• Live events
• Webcasting Channels
Because of its corporative orientation, learning is mainly directed at staff training, such as internal training courses and continuous learning for the corporative users.
This service supports the transmission of “Live Events” or “Synchronous Courses” which may
cope with different corporation needs:
• Internal communication,
• International experts conferences,
• Sponsored external events,
• Directive or Shareholders meetings,
• Training courses.
Besides the above live events service, a "Media Repository" is available for on-demand access, with a flexible and powerful set of tools for cataloguing and searching the stored contents. These contents may comprise:
• Courses,
• Stored Live events,
• Historical Archives of the Corporation, and
• Advertising and corporate image.
This service is properly structured to simultaneously support different live and on-demand contents from different corporations.
The life cycle of all these contents is also properly supported. This life cycle ends with the publishing of the contents to the end users, but a content review process can previously be performed by an editor.
Each corporation contracting this service has visibility only of its own contents and its own management interfaces. Through this management interface, for example, the content editor may review the contents before publishing.
All these service capabilities and the service architecture are described in the sections below.
4.8.2 Service Functionalities
The service may be integrated and hosted on a server with the capacity to provide the service to a large user population on an intranet or the Internet. It may be installed in the SP premises or in the client premises, but always with the required network bandwidth resources. The multicast capability of the satellite environment also needs to be exploited.
The service provides the required multimedia quality (audio and video) to the final users according to their terminal capabilities and network access rate. These quality levels may be configured for each corporation according to its needs.
Different streaming technologies are also supported (Windows Media and RealNetworks), which are properly integrated into a web-based player where all the contents (audio, video, slides, text, web navigation, etc.) are presented in a synchronised way.
The main service functionalities are grouped as "Tele-services", and the functionalities complementary to the Tele-services are grouped as "Supplementary Services":
Tele-Services:
• Live Events/Courses/Conferences
• Deferred Events/Courses/Conferences
• On-Demand Express
• On-Demand Multimedia Repository
Supplementary Services:
• Low interactivity service based on integrated Chat service (for live events).
• Multi-platform.
• Multi-quality.
• Simultaneous Translation.
• Multilingual Contents.
• E-mail based support.
• Synchronised content presentation: audio/video, slides, text, web navigation.
• Downloading of additional conference information: speaker curriculum, conference program, related documents, slides, text, etc.
Below, a description of the Tele-services is provided.
4.8.2.1 Live Events/Courses/Conferences
As said before, a wide range of live events can be supported by this service, from internal meetings held at the corporation premises to external sponsored events with multiple international speakers.
To cope with this range of possibilities, a flexible and easy to manage service is provided. It
comprises different possibilities:
• “Live events stations” permanently installed at the corporation premises (in its
conference room).
• Mobile “Live events stations” with an easy installation and adaptation on different
conference rooms.
• Local or remote monitoring and control system of the live event.
All the media contents are presented to the user in a synchronised way: current speaker name,
conference title, event warnings, conference slides, web navigation during the conference, and
previously stored synchronized text.
The monitoring and control system provides a simple interface to handle all the real-time incidences that typically happen in a live event, such as changes in the scheduled program (speakers, titles, time left before a session starts), new conference material or breaks in the middle of the program. It provides the required continuity control to make users feel comfortable with the interface and properly informed of all the incidences.
The existence of redundant live events stations, as well as of a redundant operation and control system, provides very high service reliability.
The figure below shows the user plane flows for this service where the multicast capability of the
networks is fully exploited.
Figure 29. Live events/courses multimedia flows (user plane). The figure shows unicast from the speaker/teacher to the server, multicast distribution to multicast users (terrestrial and satellite) through the RSGW/MR, and unicast from the server to unicast users.
4.8.2.2 Deferred Events/Courses
This service provides the scheduled or carousel transmission of events previously stored in the server. The figure below shows the user plane flows for this service, where the multicast capability of the networks is also fully exploited.
Figure 30. Deferred events/courses multimedia flows (user plane). The figure shows the deferred (scheduled or carousel) transmission of lectures stored in the server, delivered via multicast or unicast over DVB-RCS/DVB-S.
4.8.2.3 On-Demand Express
After a very short time interval (around 30 minutes), the just-finished live event conferences are made available to the users for on-demand access. This means that during a live event or course, the previously transmitted conferences or lectures become available on demand.
This service is called "Express" because the contents of the published conference are the same as they were in the live event, without further elaboration (including breaks, incidences, etc.).
A further elaboration of these contents will be part of the permanent on-demand media repository that is described below.
4.8.2.4 On-Demand Media Repository
Besides the above services, a "Media Repository" is available for on-demand access, with a flexible and powerful set of tools for cataloguing and searching the stored contents. These contents may comprise:
o Courses,
o Stored live events,
o Historical multimedia archives of the corporation,
o Advertising and corporate image.
This repository is based on a multimedia database where all the information about the events is properly related, which makes the search capabilities very powerful. The user can find information about the speakers and companies participating in events, and perform keyword searches across all the stored conferences. The results of this search are presented to the user in a friendly way, ordered according to different criteria, such as by date or alphabetically.
Figure 31 shows the Multimedia flows for this on-demand service.
Figure 31. On-demand Media Repository multimedia flows (user plane). On-demand access is unicast, delivered over DVB-RCS/DVB-S to corporate and residential users.
5. DEVICES
This second chapter contains the design aspects to be considered in the development of three devices that are key for the successful completion of the SATLIFE project, as they will greatly enhance its low-cost, interoperability and additional-features capabilities.
The common idea behind the three devices is precisely to provide low-cost alternatives, to interact with sufficiently tested and studied technology, and to allow developers to hook into the SATLIFE concept by easily implementing useful new applications for the existing hardware.
Hence, certain particular targets are being sought: the development of an easy platform for software development on an open operating system environment, the use of low-cost terminals and network facilities, the extension of the reach of present networks, the addition of new functionalities and the possibility to easily extend the features of the SATLIFE system.
These targets are fulfilled by the development of three devices: a middleware for the set-top boxes, a setup for internetworking between present standard ADSL networks and the new bi-directional DVB-RCS satellite network system, and a home gateway for easy access to several home and domotics functionalities.
This chapter will cover the design aspects of these three devices; as in the service provisioning
chapter, the starting point will be the requirements stated in previous deliverable documents, and
the focus will be to cover all details needed for development in the next work packages.
5.1 MIDDLEWARE
The development of digital technology and its implementation in home environments for applications such as entertainment, surveillance or communications opens the possibility of integrating different kinds of services in the same device. An appropriate device for this is the set-top box, which is already present in all homes where a Digital TV service is installed. This device comes from the audiovisual world: originally, set-top boxes were only used to decode Digital TV signals and provide a minimal user interface. However, their evolution has focused on reaching processing, memory and even storage capabilities which enable them to provide advanced interactive services far beyond simple TV playback.
The importance of middleware arises at this point. Middleware is the layer in charge of providing a common interface between all service applications and the hardware and operating system of the devices. Hence, it simplifies the integration of different services in the same device. The main objective of the middleware is to provide a software abstraction layer to developers that makes the implementation of multimedia TV services easier. Thanks to this middleware, it will be possible to implement services like Internet access or VOD in the set-top box, and to use this set-top box as the user's equipment in other scenarios of this project. In this chapter, we will describe this software abstraction layer.
5.1.1 Architecture overview
The following diagram shows the middleware architecture proposal.
Figure 32. Middleware architecture
The system, as can be seen, is Linux-based. The functionalities offered to the application developer, as stated in the requirements document, come at three levels:
• Web browser. This is the highest access level for the developer, and thus the uppermost abstraction level.
• JavaScript objects. The programmer is allowed to use them in its applications, also keeping a high level of abstraction.
• C++ objects. This is for advanced application programming, and allows access to the middleware directly from a lower layer.
In addition, a screen manager is provided.
Let us now get into the internal design of the middleware and see what elements are needed to offer these functionalities. Internally, the different components of the middleware shall communicate through request/reply interfaces and event signalling. A main listener is considered in the system for music, TV and content-on-demand distribution (ListenerSAP in the picture). Its functions are:
• Signalling external events related to TV distribution and content playback control protocols.
• Delivering output streams to the appropriate devices.
• Providing a service interface to other middleware entities.
The next layer, as depicted in the picture, is a manager process layer, which consists of a Storage Manager and a Service Manager, in charge of managing storage and centralized event collection, respectively.
To access the functions offered by the mentioned processes, there is a single programming interface, in the form of a dynamic Linux library, on which the last system layer relies. This latter layer is formed by several tightly interrelated entities, and offers user applications window management and advanced web browsing resources.
Next, we will show the external interfaces of the middleware (the C++ library and the JavaScript interface), and afterwards we will cover in depth how the internal structure of the middleware shall operate.
5.1.2 External interfaces
5.1.2.1 C++ objects
The C++ interface shall consist of a series of classes which provide a programming interface either for creating JavaScript objects, which allow applications to access system services transparently, or for creating native applications. Three classes will be offered:
• The talBrowser class provides functionalities to interact with the browser. Its methods
are:
o IsPrimary: returns whether the browser is primary or secondary.
o subscribe, unsubscribe: the object is subscribed or unsubscribed to the reception
of all browser-related events, and establishes a specified callback function.
o sendEventToCoBrowser: sends an event to the complementary browser.
o getURL, getCoBrowserURL: loads a specific URL in the current or
complementary browser.
o lower: minimizes the browser.
o raise: maximizes the browser.
o setPosition, setCoBrowserPosition, getPosition, getCoBrowserPosition: sets and
gets the browsers positions.
o getFocus, getCoBrowserFocus: gives focus to one of the browsers.
• The talSystemPeer class controls services which are active in the system, as well as
information needed for management. Its methods are:
o saveContext, loadContext: they shall store and retrieve the information related to
the context on a specified file.
o setMasterVolume, getMasterVolume: these shall control the master volume.
• The talTerminalPeer class allows control of the physical terminal: display devices, video planes, alpha channels, chroma keys, color palettes and input interfaces. Its methods are:
o getGCaps, setGCaps: obtains information or sets video plane geometry,
configuration and color palette.
o getVCaps, setVCaps: obtains information or sets video plane window size.
5.1.2.2 JavaScript objects
We shall now discuss the design of the JavaScript objects, derived from the required functionalities. A set-top box must be equipped with flexible software in order to create new applications for new services. The proposal is to instantiate two HTML browsers, working in parallel and in different windows. The advantages of this design are:
• It allows the programmer to develop applications which need to be always on alongside the browsing.
• It allows the user interface to be fully customized.
The primary browser shall perform the following tasks:
• It shall work as a system toolbar.
• It shall allow the user to access system configuration resources.
• It shall show system events to the user.
• It shall listen to incoming messages from other users.
• It shall control the secondary browser.
The secondary browser will act as an application viewer, where interactive applications are
loaded and executed.
In this context, the JavaScript objects offer functions to control the look and feel of browsers, the
position on the Z-axis (depth) and the size of both.
The following objects will be developed:
• The FrameObject shall provide control of the display area parameters associated with a multimedia input stream. Its ChromaKey method will specify a key color for transparency.
• The EventObject will provide a way to represent system events, which are managed through special functions (callbacks) declared in multimedia objects.
• The VideoObject provides a way to manage video and audio streams received over IP. Its methods are:
o Open: this method picks a video or audio channel using one of the supported
protocols. It shall not automatically start the selected content playback and shall
return the associated video plane.
o Play: this method shall start the playback of the content -a previous call to the
Open method is required-. It shall also control playback speed.
o Close: this method stops the associated video or audio decoding and releases any allocated resources. The Open method must be called again to allow further decoding.
• The BrowserObject shall control the behaviour of the browser instances. This object can be used from any browser instance. Its methods are:
o showPrimaryURL, showSecondaryURL: these methods load a new page on the
primary or secondary browser.
o isPrimary: this method represents whether the object is being used from the
primary or from the secondary browser.
o getFocus, releaseFocus, hasFocus: these methods shall control the focus of the
browser instances.
o raise, lower, isRaised: these control the Z-axis (depth) for each browser instance,
in order for one of them to overlap the other. This shall be kept independent from
focus.
• The SystemObject shall provide the functionality needed to manage some general adjustments regarding system parameters. Its methods shall be getMasterVolume and setMasterVolume, to manage the master volume.
5.1.3 Managers
Managers are software entities in charge of the internal management of different functional blocks. They will generally be built as Linux processes, but sometimes they may be embedded into specific libraries.
Three managers have been introduced in the architecture overview: the Storage Manager, the Service Manager and the Screen Manager.
5.1.3.1 The Storage Manager
The Storage Manager can basically be considered an "intelligent storage" where stored resources are referenced by URIs (for example, http://localhost/page.html). At the same time, this component is in charge of classifying and selecting the location where each resource is stored, following a set of access rules. This location classification covers:
• Resources stored locally and persistently: files stored in a Flash ROM, in order to keep them after turning off the STB. Configuration files and installed application packages fall within this category. The URIs are those referencing the STB's own IP address.
• Resources stored locally and temporarily: these are RAM-cached contents, for improved access, and are often HTML pages and their corresponding images, which the Storage Manager saves following a most-recently-used policy.
• Resources stored remotely: these are resources located on a remote server, either because they haven't been requested recently, or because they are classified as non-cacheable, as dynamic contents are.
Figure 33. Storage Manager process
The Storage Manager, as the picture shows, acts as a proxy between the internal HTML engine (B) and the remote resources (D), as well as other intranet-connected devices (C).
In addition, the contents stored in this component can be managed through an administration port (A), which gives applications access to the STB storage system. This access allows one to:
• Copy cached resources to the persistent media (install).
• Erase persistent resources (uninstall).
• Change the remaining lifetime of temporary resources.
• Store and recover configuration files or logs.
• Change resource access levels.
• Obtain partial content listings (resource searches).
When the applications don't state any preference regarding cache resource lifetime, the Storage Manager will use the following criteria to recover free space:
• Resource inactivity time: the least active resources will be erased first.
• Resource size: if the previous criterion gives several candidates for elimination, the resource which frees the most space will be erased.
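A minimal sketch of this eviction order is given below in Python; the resource record fields are assumptions made for the example.

from dataclasses import dataclass

# Illustrative sketch of the eviction order described above: the least recently active
# resource is erased first, and size breaks ties in favour of the resource that frees the
# most space. The resource record fields are assumptions made for this example.
@dataclass
class CachedResource:
    uri: str
    size_bytes: int
    last_access: float          # seconds since epoch of the last access

def eviction_candidates(resources, now: float):
    """Return resources ordered by eviction preference (most inactive, then largest)."""
    return sorted(resources, key=lambda r: (-(now - r.last_access), -r.size_bytes))

def free_space(resources, bytes_needed: int, now: float):
    """Pick resources to erase until the requested amount of space would be freed."""
    freed, evicted = 0, []
    for resource in eviction_candidates(resources, now):
        if freed >= bytes_needed:
            break
        evicted.append(resource)
        freed += resource.size_bytes
    return evicted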
5.1.3.2 The Service Manager
The service manager is the entity in charge of centralised process management. This
management consists of start and stop tasks, as well as the maintenance of a table which links
processes with the resources they use.
In addition to starting and stopping processes, this component is in charge of communication
between them, which allows the exchange of:
• Messages between two processes or applications.
• Events between the system itself and a process or application.
Figure 34. Service Manager process
To receive events as well as messages, applications have two possibilities at their disposal:
• With a specific handler: the application defines a method which will be executed when the associated event or message is received.
• By sampling: the application explicitly asks for the event queue to be examined. If there is no event in the queue, it can optionally stay on hold, so as not to waste resources while no new events arrive.
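The following Python sketch illustrates the two reception modes just described (handler registration and explicit sampling of the event queue); the class and method names are illustrative and do not correspond to the actual middleware API.

import queue

# Illustrative sketch of the two reception modes: registering a handler that runs when an
# event is dispatched, or sampling the event queue (optionally staying on hold).
class EventChannel:
    def __init__(self):
        self._queue = queue.Queue()
        self._handlers = {}

    def subscribe(self, event_type, handler):
        """Handler mode: the callback runs when a matching event is dispatched."""
        self._handlers.setdefault(event_type, []).append(handler)

    def dispatch(self, event_type, payload=None):
        for handler in self._handlers.get(event_type, []):
            handler(payload)
        self._queue.put((event_type, payload))

    def sample(self, block=False, timeout=None):
        """Sampling mode: explicitly examine the queue; block=True stays on hold."""
        try:
            return self._queue.get(block=block, timeout=timeout)
        except queue.Empty:
            return None

# Usage: channel.subscribe("volume_changed", lambda v: print("volume:", v))
#        channel.dispatch("volume_changed", 7); channel.sample() -> ("volume_changed", 7)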
From a middleware point of view, that is, connecting the low-level layers with the applications, the Service Manager is a signal router which interconnects the rest of the entities in two ways:
• It transfers signalling from listeners to applications.
• It transports messages between applications.
This is needed because applications are executed in a constrained environment, and they reference resources external to this environment (other entities) in a logical manner. These references are processed in the Service Manager, which translates that logical view into physical addressing, allowing real communication through the middleware by datagram transmission.
When the Service Manager starts existing Java processes, it creates new entries in a special table, the process description table. This table stores the list of resources used by each process, to allow auditing and identification.
The addresses of other middleware entities are also stored here, even if the Service Manager doesn't start them directly. This allows uniform communication between Java processes and the middleware. When a message or event takes place, this table allows translation from logical reference to physical address.
5.1.3.3 The Screen Manager
The Screen Manager is in charge of managing the system's graphic environment. Although it is a well-differentiated entity, it is strongly bound to the HTML browser, because of the tight relationship with the tasks the browser performs.
Most of its functionality will be based on the Qt/Embedded windowing environment, from Trolltech. This environment provides easy window management for different processes without needing an X Window server, which is a great advantage from the memory-saving point of view.
The Screen Manager executes commands related to the windowing environment, such as requesting a page from the browser, or changing the size of a window, the screen resolution or the focus. These commands come from browser-associated JavaScript objects.
Figure 35. Screen Manager process
5.1.4 Listener
Multimedia services involve large data streams which, in addition, can have strong real-time requirements. This makes their processing a particularly difficult issue.
The listener is composed of daemon-type Linux processes, in charge of efficiently processing audio and video streams inside the system as independently as possible, in order to simplify the addition of new streams or new stream types, which are a common consequence of incorporating new services.
Listeners are mainly in charge of transferring streams to the physical and logical device handlers which consume or produce them, without direct intervention from the upper layer's applications.
The independent process structure is mainly aimed at providing decoupled stream processing, as has been previously mentioned, whereas autonomous stream processing without application intervention means higher efficiency.
In addition, the listener implements control mechanisms to synchronize with external entities.
Figure 36. Listener process
The SAP listener gives the ability to decode and display multimedia content, like video on
demand, music on demand, or live TV channels.
This process, just like all the listeners, communicates with the rest of the system entities through
UDP ports, either to exchange commands and replies or to notify events.
The content decoding and display task is performed by an independent process, namely mplayer, which is launched and controlled by the SAP listener. This process is described in more detail below.
Commands accepted by the SAP listener can be classified as:
• Start commands: In a start command, the type of content to be played back is defined,
and needed resources for playback are reserved.
• Termination commands: Content playback is stopped, and all reserved resources are
freed.
• Control commands: Their function is to control the content display process; they can be either shared by all the content types or specific to certain types.
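A minimal sketch of such a command loop is shown below; the UDP port number and the textual command format are assumptions made for this example only, since the real listener protocol is defined by the project itself.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.charset.StandardCharsets;

/**
 * Illustrative command loop of a listener process. Like every listener,
 * it exchanges commands and replies over a UDP port; the port and the
 * textual command syntax used here are purely hypothetical.
 */
public class SapListenerSkeleton {

    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(5400)) { // assumed control port
            byte[] buffer = new byte[1024];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet);
                String command = new String(packet.getData(), 0, packet.getLength(),
                                            StandardCharsets.UTF_8).trim();

                if (command.startsWith("START")) {
                    // Start command: identify the content type and reserve playback resources.
                    System.out.println("Reserving resources for: " + command);
                } else if (command.startsWith("STOP")) {
                    // Termination command: stop playback and free every reserved resource.
                    System.out.println("Stopping playback and releasing resources");
                } else {
                    // Control command: pause, seek, volume, etc., shared or type-specific.
                    System.out.println("Control command: " + command);
                }
            }
        }
    }
}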
In general, the functionality of this listener differs for each of the content types identified for use in the middleware. These types are the following:
• IP TV distribution.
• DVB-S TV distribution.
• IP-RTP TV distribution.
• Local video playback.
• Video-on-demand playback.
• Local music playback.
• Music-on-demand playback.
It should be taken into account that, while playing back TV distribution content, all commands are available, whereas when playing content on demand, synchronization with the video server has to be considered.
5.1.5 Multimedia stream player
The SAP listener section introduced mplayer, a multimedia stream player that also appears in the architecture. This section describes this middleware component in more detail.
From the end-user application point of view, a relevant part of the middleware functionality can be seen as a multimedia stream transformation factory.
Applications inject streams into the processing service at the access points (frequently, the sockets of the listener processes) and control their transformation along a production chain, until the resulting stream is either collected by internal middleware entities for consumption or, in most cases, routed to the final stream presentation element.
Presentation can be performed as a final stream transformation inside one of the listeners, relying directly on the standard Linux handlers for the presentation devices (Linux video device drivers, the DVB toolbox, etc.).
Additionally, mplayer can be used as a display tool, which is considerably more flexible than the other options available in the Linux environment.
In this case, the SAP listener is the listener process in charge of starting and controlling the
mplayer process. The general functionality diagram of the mplayer process can be seen in the
following figure:
Figure 37. Multimedia stream player process
In this figure, the following functional layers can be distinguished:
• Transport stream (TS) input from the network or from disk. When the stream comes from the network, intermediate storage (a cache) is necessary to compensate for peaks that may appear. Disk reading is on demand and does not need this mechanism.
• Transport stream de-multiplexing: interpretation of the program tables and retrieval of the audio and video streams.
• Audio and video stream decoding. Mplayer supports different audio and video codecs and is prepared to include new ones: there is a codec configuration file and a procedure to register them. New, more efficient codecs will be included to exploit the hardware resources of the new platform.
• Audio and video filtering. This layer also allows format changes, frame resizing and other operations.
• Stream synchronization and presentation over virtual output devices (internal to mplayer), which abstract the physical devices from the upper layers.
• Physical output device delivery.
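As a rough sketch of how the SAP listener could launch and drive mplayer, the fragment below relies on mplayer's slave mode, which accepts textual commands on standard input; the exact options and command names should be checked against the mplayer build installed on the platform, so treat this as an assumption-laden example rather than the project's implementation.

import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;

/**
 * Sketch of a controller that launches mplayer and sends it slave-mode
 * commands through its standard input. Options and commands are the
 * ones commonly documented for mplayer, but must be verified on the
 * target build.
 */
public class MplayerController {

    private Process player;
    private BufferedWriter commandPipe;

    public void start(String streamUrl) throws IOException {
        // -slave: accept commands on stdin; -quiet: reduce console output.
        player = new ProcessBuilder("mplayer", "-slave", "-quiet", streamUrl)
                .redirectErrorStream(true)
                .start();
        commandPipe = new BufferedWriter(new OutputStreamWriter(player.getOutputStream()));
    }

    public void pause() throws IOException {
        send("pause");   // toggles pause/resume in slave mode
    }

    public void stop() throws IOException {
        send("quit");    // terminates playback and the mplayer process
    }

    private void send(String command) throws IOException {
        commandPipe.write(command);
        commandPipe.newLine();
        commandPipe.flush();
    }
}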
5.2 ADSL AND SATELLITE INTERNETWORKING
5.2.1 Architecture Overview
Digital Subscriber Line (DSL) is a broadband connection that uses the existing telephone line. DSL provides high-speed data transmission over the twisted copper wire, the so-called "last mile" or "local loop", that connects a customer's home or office to the local telephone company central office (CO).
DSL is a more cost-effective option than many other broadband connections, such as leased lines, terrestrial broadcast, cable modem and fibre-optic connections, because it is able to take advantage of the existing telephone infrastructure for both voice and data traffic. Only the user's modem and the telecommunications equipment need to be upgraded when moving to a DSL connection, because it uses the existing cable infrastructure.
DSL is always on, always fast and always reliable. DSL connections are point-to-point dedicated
circuits that are always connected, so there is no time lost dialling up.
ADSL is the transmission of integrated voice and data services with higher data rates
downstream (to the user) than upstream. ADSL can reach speeds of up to 10 Mbps downstream
and 1 Mbps upstream. ADSL enables customers to use both their normal telephone service and
high-speed digital transmissions on an existing telephone line.
ADSL is ideally suited to home and small office users who require fast download rates for video
on demand, home shopping, Internet access, remote LAN access, multimedia access and other
specialised PC services.
The following diagram shows the typical DSL Internetworking architecture proposal.
Figure 38. Common DSL architecture
In rural environments, there are two key problems for establishing a terrestrial ADSL setup. The first one is the distance from the telephone exchange to the end users: if subscriber loops are already deployed, high speeds may not be achievable; if subscriber loops are not deployed at all, there is also a cost-effectiveness problem. The second one is that the telephone exchange may be hundreds of miles away from a backbone node, which means that a very long fibre link has to be deployed, which is very costly and should be avoided if at all possible. So, if a full-duplex satellite terminal could be installed in the telephone exchange, this problem could be easily solved.
The following diagram shows the typical ADSL and Satellite Internetworking architecture
proposal.
Figure 39. DSL and satellite internetworking architecture
5.2.2 Components and modules
The conceptual architecture of this service is divided in the following components:
• User equipment: It’s the equipment used by the user to access to the different services of
the network.
• Local Head End: In this point is the telephone exchange, where the DSLAM is installed.
• Provider Backbone: The Provider Backbone is the ISP, provides a gateway to Internet
to the LAN of the Local Head End.
5.2.3 User Equipment
The end user installation comprises an ADSL modem, which provides access to several services and can be connected to a VoIP platform, an IP set-top box for the Video on Demand or Digital TV service, and a PC or other terminal with Internet access. Thus, differentiated services integrated over a single network can be provided with this approach.
Although many services are possible in an ADSL scenario, the main focus of this integration is to give Internet access to end users. The other services, such as VoIP or video services, will not be integrated in this scenario.
The following diagram shows this.
Figure 40. Common DSL user equipment
The main components of the User Equipment are detailed in the following lines:
• DSL Modem:
ADSL modems can be divided into two categories: intelligent modems, or modem-routers, which allow different protocols to be managed, and passive modems, or USB modems, where the connection protocols reside in the PC.
In intelligent modems, the PC connection is made via an Ethernet link, allowing several PCs to be wired. In passive modems, the connection is made through USB, which means that only one PC can be connected.
• IP Set-top Box:
An IP set-top box is a dedicated computing device that serves as an interface between a
television set and a broadband network. In addition to decoding and rendering broadcast TV
signals, an IP set-top box can provide functionality that includes video-on-demand (VOD),
Electronic Program Guide (EPG), digital rights management (DRM), and a variety of
interactive and multimedia services.
IP set-top boxes can support in-demand features such as Web browsing, e-mail and viewing
e-mail attachments, advanced multimedia codecs, home networking, personal computer
connectivity, gateway functionality, instant messaging (IM), and real-time voice over IP
(VoIP). These types of advanced functionality are in demand by end-users and enable
incremental network operator service opportunities.
• VoIP:
VoIP stands for Voice over Internet Protocol. As the name indicates, VoIP carries voice (mainly human speech) in IP packets and, in particular, over the Internet. VoIP can use hardware acceleration for this purpose and can also be used in a PC environment.
The next figure shows the protocol stack of a pure IP DSLAM transporting the Voice over
IP.
Figure 41. VoIP protocol stack using an IP DSLAM
• PC:
The PC is the core of the User Equipment. Since DSL is always on, fast and reliable, the PC becomes the main door to the wide range of applications and services of the Internet.
5.2.4 Local Head End
The next network point is the telephone exchange, where the DSLAM is installed. Older
DSLAMs used ATM circuits, but nowadays the usual approach is to directly provide IP
networking, which is a much more flexible and cost-effective solution, and, as will be seen,
much more appropriate for integration with satellite networks.
Currently, mini DSLAMs are commonly used; this hardware provides connectivity to a reduced set of users. In any case, hardware manufacturers supply DSLAMs of several capacities, up to 360 users in many cases. As will be seen, the best option is to concentrate the user traffic in the
DSLAM as much as possible, because it will simplify the rest of the network. The DSLAM
generally uses a Virtual LAN approach, mapping user circuits as LAN terminals, and accessing
other networks via an Ethernet switch.
This is shown in the following diagram.
Figure 42. Local head end schema
The main components of the Local Head End are detailed in the following lines.
• IP DSLAM:
Multiplexed DSL access based on IP (IP DSLAM) supersedes ATM circuit technology and converts the traffic directly to IP for delivery to the Internet.
An IP DSLAM offers clear advantages over ATM-based DSL access systems, as IP technology is the most natural approach to data transmission, providing better bandwidth efficiency, ease of use and economic advantages.
An IP mini-DSLAM basically has two sets of requirements: on the one hand, DSLAM functionality (DSL access multiplexing) and, on the other, BRAS (Broadband Remote Access Server) functionality, which operates at the IP level.
• RCST:
The RCST is an integrated unit with connections for the two coaxial cables to the antenna and a CAT5 10/100Base-T connection for an Ethernet cable direct to the customer's computing equipment. There is also the mains power input. Apart from the initial set-up, the box is controlled entirely from the hub.
The received MPEG-2 stream is recovered from the outbound signal by an integrated circuit consisting of a DVB-S demodulator and de-multiplexer. The demodulator recovers the outbound signal and the demultiplexer extracts the IP packets intended for the specific customer terminal, which are then delivered to the external network via the Ethernet interface.
5.2.5 Provider Backbone
The Provider Backbone (ISP) is connected to the Local Head End LAN and provides a gateway to the Internet. Several other services can be connected directly here; for example, a VoIP gateway or a Video on Demand server. All of these are generally deployed far away from the telephone exchange, at some point in the ADSL provider's backbone, and are connected via a fibre-optic link which directly carries IP traffic.
The following diagram shows this.
Figure 43. Service provider schema
The main components of the Provider Backbone are detailed in the following lines:
• RCST:
It’s the same component explained previously.
• Internet Service Provider (ISP):
It’s a company that provides access to the Internet. For a monthly fee, the service provider
gives you a software package, username, password and access phone number. Equipped with
a modem, you can then log on to the Internet and browse the World Wide Web and
USENET, and send and receive e-mail, etc.
In addition to serving individuals, ISPs also serve large companies, providing a direct
connection from the company's networks to the Internet. ISPs themselves are connected to
one another through Network Access Points (NAPs).
ISPs are also called IAPs (Internet Access Providers).
• Other Services Equipment:
o VoIP Gateway:
The VoIP gateway controller has the responsibility for some or all of the call
signalling coordination, phone number translations, host lookup, resource
management, and signalling gateway services to the PSTN (SS7 gateway). The
amount of functionality is based on the particular VoIP enabling products used.
o Video Server:
Streaming video on the Internet is a practical way to view video while avoiding the burden of downloading and storing huge encoded files. Video and audio, when encoded to
digital formats, become large and cumbersome files for transfer over data networks,
making them impractical to deliver through normal HTTP web servers.
Streaming server technology allows for managed, controlled playback ("streaming")
of video and audio files over the Internet, in proportion to the user's bandwidth. The
computer's video player establishes communication with the dedicated video server,
pulls down the first few seconds of the video file, and plays it.
The computer or the set-top box never actually saves the entire video file; it only holds a small part of it in sequence, for the few seconds necessary to display it in the player, and then deletes it. The end result is that the user can watch streaming video within moments of the first click, at the maximum possible quality and bit rate, without ever having to worry about where to store the large file or having to wait for a complete download.
5.2.6 IP Addressing (Possible Scenarios)
We will consider two main scenarios:
• Using PPPoE and a radius server
• Using Multi-protocol Encapsulation Over AAL5
5.2.6.1 PPPoE (RFC2516)
The Point-to-Point Protocol (PPP) provides a standard method for transporting multi-protocol datagrams over point-to-point links. PPP over Ethernet (PPPoE) provides the ability to connect a network of hosts over a simple bridging access device to a remote Access Concentrator. With this model, each host uses its own PPP stack and the user is presented with a familiar user interface. Access control, billing and type of service can be done on a per-user, rather than a per-site, basis.
To provide a point-to-point connection over Ethernet, each PPP session must learn the
Ethernet address of the remote peer, as well as establish a unique session identifier. PPPoE
includes a discovery protocol that provides this.
The user authenticates with userid and password, then connects. This is similar to dialup
except no actual dialling takes place.
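As an illustration of the discovery phase described above, the following sketch models the PADI/PADO/PADR/PADS exchange defined in RFC 2516 as a simple state machine; Ethernet frame construction and parsing are deliberately left out, and the class is an example only, not part of any SATLIFE equipment.

/**
 * Sketch of the PPPoE discovery stage (RFC 2516) as a state machine:
 * PADI broadcast, PADO offer, PADR request, PADS confirmation carrying
 * the session identifier, after which PPP itself (LCP, authentication,
 * IPCP) takes over.
 */
public class PppoeDiscovery {

    enum State { INIT, PADI_SENT, PADR_SENT, SESSION_ESTABLISHED }

    private State state = State.INIT;
    private String accessConcentrator;
    private int sessionId;

    public void sendPadi() {                 // broadcast: "is there an access concentrator?"
        state = State.PADI_SENT;
    }

    public void onPado(String acName) {      // offer received: the peer's identity is now known
        if (state == State.PADI_SENT) {
            accessConcentrator = acName;
            sendPadr();
        }
    }

    private void sendPadr() {                // unicast request to the chosen access concentrator
        state = State.PADR_SENT;
    }

    public void onPads(int sessionId) {      // confirmation: unique session identifier assigned
        if (state == State.PADR_SENT) {
            this.sessionId = sessionId;
            state = State.SESSION_ESTABLISHED;
            // PPP negotiation and user authentication (userid/password) start here.
        }
    }
}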
The next figure shows the network topology for single user PPPoE client deployment.
Figure 44. PPPoE single user scenario
The next figure shows the network topology for Multi User PPPoE client deployment.
Figure 45. PPPoE multi user scenario
This is the protocol stack for a PPPoE Single User Scenario.
Figure 46. PPPoE single user scenario Protocol stack
5.2.6.2 Multi-protocol Encapsulation Over AAL5 (RFC1483b and RFC1483r)
This specification defines two methods for carrying connectionless network interconnect traffic, both routed and bridged Protocol Data Units (PDUs), over an ATM network.
The first method allows multiplexing of multiple protocols over a single ATM virtual circuit. The protocol of a carried PDU is identified by prefixing the PDU with an IEEE 802.2 Logical Link Control (LLC) header. In the following, this method is called "LLC Encapsulation".
The second method performs higher-layer protocol multiplexing implicitly, by using separate ATM Virtual Circuits (VCs). In the following, it is called "VC-Based Multiplexing".
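As an illustration of LLC Encapsulation for the routed IPv4 case, the sketch below builds the fixed LLC/SNAP prefix defined in RFC 1483 and prepends it to an IP packet before AAL5 segmentation. It is an example only, not part of the SATLIFE equipment, and only the routed IPv4 prefix is shown.

/**
 * LLC Encapsulation sketch for a routed IPv4 PDU (RFC 1483):
 * the PDU is prefixed with an LLC/SNAP header that identifies the
 * carried protocol before the AAL5 payload is segmented into cells.
 */
public class LlcEncapsulation {

    // LLC: DSAP=0xAA, SSAP=0xAA, Control=0x03 (SNAP follows)
    // SNAP: OUI=0x000000, EtherType=0x0800 (IPv4)
    private static final byte[] LLC_SNAP_ROUTED_IPV4 = {
            (byte) 0xAA, (byte) 0xAA, 0x03, 0x00, 0x00, 0x00, 0x08, 0x00
    };

    /** Returns the AAL5 payload: the LLC/SNAP prefix followed by the IP packet. */
    public static byte[] encapsulate(byte[] ipPacket) {
        byte[] payload = new byte[LLC_SNAP_ROUTED_IPV4.length + ipPacket.length];
        System.arraycopy(LLC_SNAP_ROUTED_IPV4, 0, payload, 0, LLC_SNAP_ROUTED_IPV4.length);
        System.arraycopy(ipPacket, 0, payload, LLC_SNAP_ROUTED_IPV4.length, ipPacket.length);
        return payload;
    }
}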
The following diagrams focus on Multi-protocol Encapsulation over AAL5 Routing (RFC1483b).
The next figure shows the network topology for a Single User Routing deployment.
Figure 47. Routing single user scenario
The next figure shows the network topology for a Multi User Routing deployment.
Figure 48. Routing multi user scenario
This is the protocol stack for a Routing Single User Scenario.
Figure 49. Routing single user scenario protocol stack
5.3 HOME GATEWAY
Home systems and gateways define the way electronic and electrical devices in and around a
home can be connected and how these devices can communicate to integrate their functions in
daily-life patterns.
These represent a major new market opportunity, offering distributed control of all the electronic and electric equipment throughout the home, with a common language and way of communicating for all equipment in the home, and indeed in apartment buildings, schools, offices, hospitals and hotels.
The TV set becomes a broader window on the world, allowing users to watch their favourite TV programmes while monitoring the home at the same time. A few of the many possible applications are: setting up the central heating, checking the electricity bill, watching the baby's room, checking the oven, knowing when the washing machine has finished its cycle, and setting all clocks at once.
Home gateways interconnect and integrate all kinds of products in the home, all working
together in a common home automation system. The technology behind it all is common to all
products, because the major European electronic and electric industries all agreed upon one
technology beforehand.
Home gateways are the answer to the needs that are appearing with the new communication technologies and devices. In the near future, these devices will be connected in a home network to provide new services. In a few years it will even be normal for each house to have its own broadband connection to the Internet and a local network to share files and printers or to run distributed entertainment applications. The best way to share this connection is to have a local network inside the house and to connect it to the Internet through a single point, the home gateway, which will manage this access.
5.3.1 Architecture overview
We will analyse the definition of the RCS functionality that should be included in current Home Gateway devices. The development of home equipment that provides in-home access to the different home devices will add a new dimension to the satellite return channel. This equipment will provide access to automatic control and to the home digital network.
In the proposed architecture, the home gateway can be accessed from anywhere outside the house. The home gateway will be integrated with a DVB-RCST that will allow it to be reached from another DVB-RCST or from the Internet. In this way, the home gateway can be remotely operated and controlled; instead of having the set-top box inside the same building, the possibilities of the return channel make it possible to control it from an outside facility, which could be located many kilometres away.
The following picture depicts the scenario considered.
Figure 50. Home gateway architecture
The end user is located behind a certain RCST or on the Internet, and uses the return channel to communicate with his personal home gateway in order to control his home network.
The end user and home networks can be located anywhere, as long as they are accessible through standard IP routing. In this way, using an RSGW, a home gateway could be managed from anywhere; the only requirement is Internet access.
5.3.2 Components and modules
The conceptual architecture of this service is divided into the following components:
• User equipment: the equipment installed in the user's house, comprising the domotic network and the equipment used to provide connectivity.
• Satellite network: the network that connects the domotic equipment with the end users. This network is based on a satellite link, so it allows sites to be managed almost anywhere, even in remote rural areas where there is no broadband access.
• End users: the clients that will manage the home gateway remotely.
5.3.2.1 User equipment
The user's equipment is composed of:
• Domotic devices: these devices can be managed remotely, can change their status upon request, and are able to send their own status and their associated alarms to a central server.
Examples of domotic devices are lights, shutters, sensors, the washing machine, the oven,
etc.
The domotic devices have the following functionalities:
o Domotic devices always have a status, whose value can be requested by the central server. The status can be digital (for example, a light is on or off) or analogue (for example, a shutter can be opened to any percentage).
o Domotic devices can be assigned a specific value; in this way, a light can be ordered to switch on or off.
o Domotic devices can generate alarms depending on the value of their own status.
o The protocol used to communicate with the central server depends on the specific device. The most widespread protocols for domotic devices are X.10 and LonWorks.
A minimal sketch of this device contract is given at the end of this subsection.
• Home gateway: the home gateway is the central server that interconnects and integrates all the domotic devices to create the domotic network. The home gateway to be integrated in this service will be Pylix™ from Siemens; there are other commercial home gateways on the market, but this is a generic home gateway that complies with many of the requirements stated in D230, and its functionality will be enough to validate the proposed integration.
MetaVector-Siemens has developed this gateway, which fulfils the OSGi specification and has the following characteristics:
o Serial port: this serial port connects the gateway with the domotic devices. This
serial port could be used with specific adapters to interconnect LonWorks or X.10
devices.
o Ethernet LAN: this port allows the communication with any Ethernet equipment
inside the house.
o Ethernet WAN: this port allows communication with the outside. It is a standard Ethernet port, so there is no ADSL interface as in other models.
o 8 PSTN ports: it is possible to connect up to 8 telephones to the gateway.
o Parallel port: there is a parallel port that could be used to share a printer.
o USB port.
o 10GB hard disk.
o Linux operating system: this O.S. runs an OSGi platform compatible with OSGi
2.0.
o Apache Web server 1.3.20, HTTP 1.1 compatible.
o It has a DHCP client.
o It performs the tasks of a file server, print server, back-up server and firewall.
The physical specifications of this home gateway are:
o 75 mm high, 245 mm wide, 293 mm long.
o Operating temperature between 0 and 40 ºC and relative humidity between 15 and 80%.
o It has activity and link lights.
o Input power: 200-240 V, 50/60 Hz, 15 W.
Figure 51. Home gateway physical interfaces
Figure 52. Home gateway hardware architecture
• DVB-RCST: The DVB-RCST is the two-way terminal used in the SATLIFE system
that provides connectivity to the satellite network. This terminal works like a router,
forwarding the IP data from the home gateway to the network and receiving the IP
data from the network that needs to reach the home gateway. The DVB-RCST will be
configured with NAT in order to protect the home gateway and will use NAPT to
make the home gateway's web server port available from the outside.
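As anticipated earlier, the following fragment is a minimal, purely illustrative sketch (the names are not taken from the SATLIFE or OSGi code) of the contract a domotic device exposes to the home gateway; the concrete transport, X.10 or LonWorks, would sit behind an implementation of this interface.

import java.util.function.Consumer;

/**
 * Hypothetical contract of a domotic device as seen by the home gateway:
 * a status that can be read, a value that can be set, and alarms raised
 * when the status requires it.
 */
public interface DomoticDevice {

    /** Current status: digital (on/off) or analogue (e.g. shutter opening percentage). */
    double getStatus();

    /** Orders the device to take a new value, e.g. switch a light on or off. */
    void setStatus(double value);

    /** Registers a callback invoked when the device raises an alarm. */
    void onAlarm(Consumer<String> alarmHandler);
}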
5.3.2.2 Satellite network
The satellite network used to provide connectivity to the home gateway is the SATLIFE system.
The DVB-RCST inside the house can be used in two possible configurations:
• With mesh connectivity to another DVB-RCST. In this scenario, the final user will be connected to this terminal.
• With star connectivity to the RSGW, which will provide connectivity with the Internet.
5.3.2.3 End users
The end users are the users that are allowed to access the domotic network remotely. These users are able to check the status of the different domotic equipment, change it, or be notified of alarm conditions.
The main user equipment is any device with an Internet connection that supports web browsing.
5.3.3 Applications
There are many possible services or applications of the home gateway, but we are going to
integrate the following:
• Telecontrol: remote control of home devices.
• Security: surveillance of houses or premises, motion detection sensors.
5.3.4 User Interface
The user interface is a web site which the user can browse. Through this interface the user will be able to perform the following tasks:
• Check the status of each domotic device.
• Telecontrol of each domotic device.
• View the associated events triggered by each domotic device.
• View the streaming video from the surveillance cameras. The cameras used in this service will be IP cameras, so a PC will not be needed.
• View the motion detection alarms sent by the surveillance cameras based on motion detection sensors.
6. CONCLUSIONS
This document has defined the specific architecture needed for each service (Digital TV, VoD, Multiconference, Internet Access, LAN interconnection, Streaming, Software Download and Webcasting) and the different scenarios that will be considered in the following phases of the SATLIFE project.
The above-mentioned user services are still specified at a medium-to-high level of abstraction, but now with separate descriptions of the customer, network and service provider elements. Despite its conceptual character, this functional breakdown is meaningful enough to support an approximate deployment of the whole platform.
For every device, an overview of its functional architecture has been shown, aimed at adopting normalized interfaces between the parts that form the system. Moreover, several interconnection schemes have been proposed for equipment covering the less common scenarios, such as rural ADSL.
A new design of the middleware layer to be employed in set-top boxes has been presented, together with the home gateway and its operating features. This middleware-based approach will allow hardware and software to be decoupled, avoiding dependence on a specific platform or manufacturer.
The services proposed in this document cover a wide range of applications: some are intended for broadcasters, such as Digital TV and software download; others for telecom operators, such as Internet access, rural ADSL, middleware, home gateway, VoD and streaming; and others for corporations, such as LAN internetworking, e-learning and multiconference.
All these services use the advanced features developed in the SATLIFE system layer inside the NCC and the DVB-RCS terminals. Although DVB-RCS regenerative scenarios have been proposed, many of the services and devices covered are also compatible with a transparent solution. The provision of all these services and devices will be very useful for the deployment of new DVB-RCS services, both transparent and regenerative, thanks to this integration and to the real tests that will be performed in the last phase of the SATLIFE project.