Zebroid: Using IPTV Data to Support Peer-Assisted VoD Content Delivery
Yih-Farn Robin Chen, Rittwik Jana, Daniel Stern, Bin Wei, Mike Yang
AT&T Labs - Research
Florham Park, NJ 07932
chen,rjana,dstern,bw,[email protected]

Hailong Sun
School of Computer Science and Engineering
Beihang University, Beijing
[email protected]
ABSTRACT
P2P file transfers and streaming have already seen tremendous growth in Internet applications. With the rapid growth of IPTV, the need to efficiently disseminate large volumes of Video-on-Demand (VoD) content has prompted IPTV service providers to consider peer-assisted VoD content delivery. This paper describes Zebroid, a VoD solution that uses IPTV operational data on an on-going basis to determine how to pre-position popular content in customer set-top boxes during idle hours, allowing these peers to assist the VoD server in content delivery during peak hours. The latest VoD request distribution, set-top box availability, and capacity data on network components are all taken into consideration in determining the parameters used in Zebroid's striping algorithm. We show, by both simulation and emulation on a realistic IPTV testbed, that the VoD server load can be reduced by 50-80% during peak hours using Zebroid.
Categories and Subject Descriptors
H.4 [Information Systems Applications]: Communications Applications; H.5.1 [Multimedia Information Systems]: Video
General Terms
Algorithms, Design, Measurement, Performance
Keywords
Video-on-Demand, Peer-to-Peer, IPTV
1. INTRODUCTION
Video-on-Demand (VoD) services are offered by many IPTV service providers today, and peer-to-peer (P2P) systems have become immensely successful for large-scale content distribution on the Internet. Recently, peer-assisted VoD delivery [6][8][1][9] has become an important research topic due to its potential to help both IPTV service providers and Internet video services offload the VoD servers during busy hours, increase overall network delivery capacity, and maximize profits with the existing infrastructure. Traditional P2P approaches such as BitTorrent [5] and CoolStreaming [12] use a tit-for-tat approach that is not needed among set-top boxes in an IPTV neighborhood: the bandwidth and storage of the peers can be reserved, either through static allocation or through dynamic allocation with proper incentives, to assist the IPTV service provider in delivering VoD content [7]. In addition, the default BitTorrent download strategy is not well suited to the VoD environment because it fetches pieces of the desired video, mostly out of order, from other peers without considering when those pieces will be needed by the media viewer.

Toast [4] corrects this problem by providing a simple dedicated streaming server and by favoring the download of pieces that will be needed by the media viewer sooner. One potential problem with using Toast in a typical IPTV environment is the low uplink bandwidth of peers, typically on the order of 1-2 Mbps; in addition, it is desirable to allocate only a small fraction of a peer's upload bandwidth to VoD delivery, as the peer may need the rest for other activities. This makes it difficult to find enough peers to participate in the delivery of a requested video with an aggregated bandwidth that meets the video encoding rate (at least 6 Mbps for high-definition (HD) content).

Push-to-Peer [11] and Zebra [2][3] go one step further by proposing to pre-stripe popular VoD content on peer set-top boxes (STBs) during idle hours, so that many peers will be available to assist in the delivery of a popular video during peak hours. However, both Push-to-Peer and Zebra relied on analysis and simulation alone, without a real implementation or operational data to model their systems. This paper describes Zebroid, a peer-assisted VoD scheme (and a successor to Zebra) that departs from previous approaches by using real IPTV operational data to estimate content placement striping parameters based on video popularity, peer availability, and available bandwidth distributions.

The contributions of this paper are in three areas:
• Use of IPTV operational data to estimate Zebroid pre-population parameters.
• Design and implementation of a peer-assisted VoD system that responds to typical residential network access conditions and busy-hour VoD request patterns.
• Emulations of Zebroid on a realistic IPTV testbed based on traces from operational data.
2. IPTV ARCHITECTURE AND DATA

2.1 An IPTV Architecture
In [7], we described and analyzed a typical physical model
of FTTN (Fiber-to-the-node/neighborhood) access networks
for IPTV services. As shown in Figure 1, video streaming
servers are organized in two levels - a local video hub office
(VHO), which consists of a cluster of streaming servers or
proxies to serve viewers in a particular regional market, and
national super head end (SHE) offices, which can distribute
videos to local serving offices based on existing policies or on
demand. Each local VHO connects to a set of local central
offices (CO), and then to a set of access switches, such as DSL or FTTN switches, called DSLAMs (Digital Subscriber Line Access Multiplexers), through optical fiber cables. Each DSLAM switch connects a community of IPTV service customers through twisted-pair copper wires. A community
consists of all homes which are connected to the same access
switch. The local video serving office (VHO) has a cluster
of video servers, which stream on-demand or live broadcast
videos to viewers, typically with a streaming bandwidth of
at least a few hundred Mbps.
We list important parameters based on this physical architecture that help the reader understand the design behind the Zebroid algorithm:

• B0D: Download bandwidth into a home (typically 25-50 Mbps). This bandwidth is shared by broadcast channels (typically 2 HD and 2 SD channels through multicast), VoIP, and High Speed Internet (HSI) access.
• B0U: Upload bandwidth out of a home (typically 1-2 Mbps).
• B1S: Total capacity of the south-bound links (downlinks) of a local access switch. A typical FTTN switch has 24 Gbps of downlink switching capacity.
• B1N: Capacity of the north-bound link (uplink) of an access switch to the service router in the CO. B1N is typically 1 Gbps, but is upgradable.
• B2S: Link capacity from the CO to all the DSLAMs, typically on the order of 10 Gbps; its actual throughput is limited by the VoD server streaming capacity.
• N: Number of subscribers under each DSLAM (typically 96 or 192 subscriber homes).
• u: Average streaming bit rate of a video (typically 6 Mbps for HD and 2 Mbps for SD).

It was observed in [7] that in order to enable a P2P content delivery solution across communities, content needs to be shared between customer residential gateways (RGs) by traversing up from one RG to the CO and being routed back down to another RG. This peering activity may create a bottleneck on the DSLAM-to-CO link. The situation can be alleviated by pre-populating content intelligently across multiple STBs in a DSLAM community and allowing these peers to serve requesting peers from the same community; however, avoiding the uplink to the CO entirely would require changes in the existing IPTV architecture to allow IP routing through the DSLAM switch. Alternatively, if ample cache storage were available at a DSLAM switch (not the case today for a variety of engineering reasons), then even P2P would not be needed.

Figure 1: An IPTV architecture with FTTN access

2.2 IPTV Operational Data
In this section we analyze data from an operational service to estimate pertinent Zebroid parameters: STB uptime from power state data (PSD), which helps select the right set of peers for striping; the VoD request distribution, which identifies busy hours and popular content; and capacity management data (CMD), which gives us the fan-out ratio (the number of subscribers subtended by each CO) and the measured bandwidth consumption of these subscribers. These data were analyzed over a period of one month. We discuss only the data sets that are relevant to Zebroid in the following subsections.
2.3 VoD Data
The anonymized VoD request data provide information on the purchase timestamp, content title, duration, price, and genre. The data were provided by a national IPTV service provider covering a footprint of over a million homes and multiple VHOs, and span a one-month period in 2009. Joining the purchase data with the CMD data allows us to associate each VoD request with the corresponding DSLAM and CO.
Figure 2 shows the request distribution by hour across all the COs under one particular VHO. If we consider 1500 requests as the high threshold, the chart shows that the peak hours fall between 8pm and 11pm every night, with Saturday night enjoying the highest number of VoD requests. On the other hand, if we consider 500 requests as the low threshold, then 2am to 8am are the idle hours for VoD traffic, an ideal time for striping popular VoD titles, assuming few other network activities are going on at the same time. In addition, there is a significant jump in daytime viewing on Saturday and Sunday from 9am to 5pm, with the number of requests approaching those of the peak hours during weekday evenings. This calls for potentially different content striping and placement strategies between weekdays and weekends. The VoD request data also give us the popularity distribution, which helps us determine the most popular video titles to stripe across STBs.

Figure 2: VoD request distribution
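The thresholding step above is simple enough to automate. Below is a minimal sketch in Python, assuming hourly request counts have already been aggregated into a dict; the threshold constants mirror the 1500/500 levels discussed above, and all names are illustrative rather than part of the Zebroid implementation.

PEAK_THRESHOLD = 1500   # requests/hour: above this we call the hour "peak"
IDLE_THRESHOLD = 500    # requests/hour: below this we call the hour "idle"

def classify_hours(hourly_counts):
    # hourly_counts: dict mapping hour-of-day (0-23) to mean request count
    peak = sorted(h for h, c in hourly_counts.items() if c >= PEAK_THRESHOLD)
    idle = sorted(h for h, c in hourly_counts.items() if c <= IDLE_THRESHOLD)
    return peak, idle

# Made-up counts shaped like Figure 2: quiet early mornings, busy evenings.
counts = {h: 200 for h in range(2, 8)}           # 2am-8am
counts.update({h: 900 for h in range(8, 20)})    # daytime
counts.update({h: 1800 for h in range(20, 23)})  # 8pm-11pm
counts.update({23: 900, 0: 700, 1: 400})
peak_hours, idle_hours = classify_hours(counts)  # [20, 21, 22], [1, 2, ..., 7]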
2.4 Power State Data

Power State Data (PSD) allows us to determine when a particular STB was turned on or off (either by the user or by the service provider).
We use these data to avoid striping popular VoD content, during idle hours, onto boxes that may not be up during the peak hours. For each DSLAM neighborhood (up to 192 homes), we can determine the percentage of STBs in that neighborhood that were on at 2am (striping time) and remained on at 8pm (peak hours). Analysis of a set of DSLAMs (see Figure 3) shows that, for most DSLAMs, roughly 80-90% of the STBs satisfy this criterion.

Figure 3: Percentage distribution of consistently active STBs
This allows us to determine the degree of redundancy that is needed. An erasure coding [10] rate of 4/5 (see Section 3.2: Zr = 0.8) would be sufficient for these DSLAMs. For example, for a chunk that is divided into 40 stripes, we need to create 50 stripes with erasure coding to ensure that enough stripes remain available even when 20% of the STBs fail or are unavailable (turned off by their users).
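The redundancy arithmetic is small enough to spell out. A minimal sketch, using the Zp and Zr parameter definitions from Section 3.2; the function names are ours, not part of the paper's implementation.

import math

def stripes_with_redundancy(z_p, z_r):
    # Total stripes Zs to create per chunk, given Zp (the minimum number
    # of stripes needed to reconstruct a chunk) and the erasure coding
    # rate Zr, following Zs = Zp / Zr.
    return math.ceil(z_p / z_r)

def tolerable_loss(z_p, z_s):
    # Fraction of stripes (i.e., of peers) that can disappear while still
    # leaving at least Zp of the Zs stripes available.
    return (z_s - z_p) / z_s

z_s = stripes_with_redundancy(40, 0.8)  # -> 50 stripes per chunk
loss = tolerable_loss(40, z_s)          # -> 0.2: tolerates 20% of STBs off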
2.5 Capacity Management Data

The Capacity Management Data (CMD) allow us to determine the hierarchy of VHOs, COs, DSLAMs, RGs, and STBs shown in Figure 1, along with their current capacity utilization. The CMD also allow us to partition the VoD requests under the different network components, and show the size of the subscriber community under each VHO, CO, and DSLAM. Additional data on downlink/uplink bandwidth utilization are discussed in Section 4.
3. ZEBROID

3.1 The Zebroid Architecture

As the popularity of VoD requests increases, either B1N (the link between the CO and the DSLAM) or the VHO streaming server capacity is likely to become the bottleneck in the overall IPTV architecture. In the former case, an IPTV service provider faces the question of whether to upgrade the large number of B1N links or to push popular VoD content to set-top boxes in the local neighborhood during idle hours, so that these STBs can serve as peers to assist in VoD delivery. Since most peers have limited upload bandwidth (B0U < 2 Mbps) and only a portion of it should be used for peering activities, Zebroid needs to stripe a video over multiple peers so that the aggregated bandwidth is sufficient to meet the video encoding rate u.

Figure 4 shows the Zebroid architecture, which consists of a VoD server and a community of STBs. The connections between the VoD server and the STBs depend on the specific IPTV service solution; for example, a typical IPTV architecture as shown in Figure 1 adopts a hierarchical network structure to distribute multimedia content to its customers. A Zebroid-based system works as follows.

Figure 4: The Zebroid architecture

First, the VoD server determines which content is popular through analysis of historical client request data, and decides how many copies of each popular title can be stored based on the available collective P2P storage on the STBs. Typically, a service provider would allocate dedicated storage in an STB that is separate from user-allocated disk space.

Second, the VoD server divides each popular VoD file into a set of chunks, and each chunk in turn into a set of stripes. The size of the stripe set is based on the upload bandwidth allocated for P2P and the number of peers required to participate in P2P content delivery; Figure 5 presents a simplified view of the result of striping a content file. For example, an HD movie at 6 Mbps, with 200 Kbps contributed by each peer, would require at least 30 stripes for each chunk. Zebroid then sends the stripes to a set of chosen STBs during off-peak hours, based on their availability data.

Third, when a client requests a popular title from the VoD server, the STBs holding the stripes of that title serve the request concurrently. As a result, even if the upload bandwidth of each contributing STB is limited to 200 Kbps, the downloading client can still have a smooth viewing experience thanks to the aggregation of the upload bandwidths of all the participating peers. If there are not enough peers to support the required bandwidth, due to busy STBs or STB failures, the downloading client can go back to the VoD server to request the missing stripes. Zebroid's peer-assisted delivery greatly reduces the load on the VoD server, as we shall see in the experiments and simulations (Sections 5.2 and 5.3).
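To make the chunk-and-stripe arithmetic concrete, here is a minimal sketch of the two computations described above: the minimum stripe count for a given encoding rate, and a simple round-robin placement of stripes on STBs. The placement policy and all names are illustrative assumptions, not Zebroid's actual implementation.

import math

def min_stripes(encoding_rate_kbps, per_peer_rate_kbps):
    # Minimum stripes per chunk so that the aggregated per-stripe uploads
    # meet the video encoding rate, e.g. 6000/200 -> 30 for HD.
    return math.ceil(encoding_rate_kbps / per_peer_rate_kbps)

def assign_stripes(num_chunks, stripe_count, peers):
    # Round-robin placement: stripe j of every chunk goes to peer j, so a
    # chunk can be rebuilt by contacting stripe_count distinct peers.
    # Assumes len(peers) >= stripe_count.
    placement = {p: [] for p in peers}
    for chunk in range(num_chunks):
        for j in range(stripe_count):
            placement[peers[j]].append((chunk, j))
    return placement

n = min_stripes(6000, 200)  # -> 30 stripes for a 6 Mbps HD movie
plan = assign_stripes(100, n, ["stb%d" % i for i in range(32)])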
Figure 5: Content striping and serving in Zebroid

Figure 6: Average downlink bandwidth of requesting peers (32 or 64 serving peers, with no failures (α = 1) or 20% failures (α = 0.8); x-axis: number of requesting peers p; y-axis: downlink bandwidth in Mbps)

3.2 The Zebroid Parameters
The following parameters are used in describing the Zebroid system in the rest of the paper, in addition to those specified in Section 2.1:

• ZN: total number of videos in the VoD content library
• Zn: number of striped, popular videos; 20% of the videos typically represent 80% of the requests
• Zk: the maximum upload rate from each peer
• Zp: minimum number of stripes required to reconstruct a chunk
• Zc: number of copies of a striped video
• Zr: erasure coding rate
• Zs: number of stripes for each chunk of the video, Zs = Zp/Zr
• Zg: size of the peer storage (in GB) reserved for P2P content delivery
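These parameters map naturally onto a small configuration object, with Zs derived from Zp and Zr. The sketch below is ours, not part of the paper; the example values mirror the testbed settings later listed in Section 5.2.

import math
from dataclasses import dataclass

@dataclass
class ZebroidParams:
    z_N: int     # total number of videos in the VoD content library
    z_n: int     # number of striped, popular videos
    z_k: int     # maximum upload rate from each peer (Kbps)
    z_p: int     # minimum stripes required to reconstruct a chunk
    z_c: int     # number of copies of a striped video
    z_r: float   # erasure coding rate
    z_g: int     # peer storage reserved for P2P delivery (GB)

    @property
    def z_s(self) -> int:
        # Stripes per chunk: Zs = Zp / Zr, rounded up when Zr < 1.
        return math.ceil(self.z_p / self.z_r)

# Testbed-style settings (Section 5.2): no erasure coding, 32 stripes.
params = ZebroidParams(z_N=1024, z_n=256, z_k=200, z_p=32, z_c=1,
                       z_r=1.0, z_g=5)
assert params.z_s == 32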
4. ANALYSIS
In this section we analyze a striping mechanism for placing
content at STBs in a community. Our model follows the full
striping strategy detailed in Suh et al. [11]. Initial content
placement increases content availability and improves the
use of peer uplink bandwidth. However, we note the following differences. First, we do not require a set of always-on
peers. Peers can also fail as shown in Section 2.4. Second, we
do not assume constant available bandwidth between peers.
Uplink bandwidth availability is also modeled in Section 4.1.
Third, peers are not disconnected from the VoD server after being bootstrapped, as they are in [11]. We maintain an uplink connection from each STB to the VoD server so that, when supplying peers cannot provide sufficient aggregate bandwidth to match the playout rate, the residual bandwidth is supplied by the VoD server.
Assume each movie chunk of length W is divided into Zs stripes, each of size W/Zs, and that each STB stores a distinct stripe of a window. Consequently, a movie (window) request by a client requires Zs − 1 distinct stripes to be downloaded concurrently from Zs − 1 peers (or Zs stripes from Zs peers if the downloading peer does not hold one of the stripes). Denoting the movie playout rate by u bps, each stripe is therefore received at the rate uj = u/Zs bps.
For a completely peer-assisted solution with no VoD server, as in our experiments in Section 5.2, the maximum downlink bandwidth BWd that a requesting peer observes is given by

BWd = (# of supplying peers × peer uplink bandwidth) / p    (1)

where p is the number of requesting peers. Figure 6 shows the monotone decrease of this downlink bandwidth as the number of requesting peers grows: the aggregate peer bandwidth is shared between requesting peers. For example, in Figure 6, the total bandwidth of 32 or 64 supplying peers is shared equally among p requesting peers. In our environment, we also impose a maximum uplink bandwidth of 1.8 Mbps and a maximum downlink bandwidth of 26 Mbps per peer; this appears as a truncated average downlink bandwidth in the case of a single requesting peer. Figure 6 also shows the downlink bandwidth observed at requesting peers when 20% of the serving peers fail (α = 0.8).
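Equation (1), together with the bandwidth caps, translates directly into a few lines of code. A sketch that reproduces the shape of the Figure 6 curves; the default values follow the text above, and the function name is ours.

def downlink_bw_mbps(supplying_peers, requesting_peers, peer_uplink_mbps=1.8,
                     alpha=1.0, downlink_cap_mbps=26.0):
    # Equation (1): aggregate supplying-peer upload shared equally among
    # the p requesting peers; alpha scales for serving-peer failures
    # (alpha = 0.8 models 20% failures), and the per-peer downlink cap
    # truncates the curve for small p.
    bw = alpha * supplying_peers * peer_uplink_mbps / requesting_peers
    return min(bw, downlink_cap_mbps)

print(downlink_bw_mbps(32, 1))              # 26.0 (capped)
print(downlink_bw_mbps(32, 30))             # 1.92
print(downlink_bw_mbps(64, 10, alpha=0.8))  # 9.216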
4.1 Available Bandwidth
We investigate the average bandwidth utilization distribution to predict the amount of time required to pre-populate a set of movies within a CO. Figure 7 and Figure 8 show the distributions of average downlink and uplink peak bandwidth utilization, respectively. The measurement trace covers a nationally deployed footprint of STBs. Note that the bandwidth observed is the High Speed Internet (HSI) portion of the total download bandwidth B0D, which also includes bandwidth used for linear programming content (HD and SD channels).

Figure 7: Average downlink bandwidth utilization distribution

Figure 8: Average uplink bandwidth utilization distribution (x-axis: uplink bandwidth usage in Mbps; y-axis: # of STBs)

A few observations can be drawn from these traces. First, downlink bandwidth utilization is not uniformly distributed. Second, there are characteristic peaks at multiples of 2 Mbps, as shown in Figure 7; this is a result of STBs being situated at different loop lengths from the CO. The mean downlink utilization is 6 Mbps. Similarly, for uplink bandwidth utilization, a large fraction of STBs experience less than 1 Mbps: the distribution shows a sharp decrease in the number of STBs as upload speeds approach 1 Mbps, and less than 2% exhibit speeds greater than 1 Mbps.

As an example of pre-population of VoD content, we show from actual traces how long a typical set of popular movies would take to distribute. A set of 180 movies was requested from a community during a period of one day. The total duration of all the movies is 312720 seconds, and each movie is streamed at 2 Mbps. Assume that 50% of this set needs to be pre-populated across a base of 200 homes to realize a 50% reduction at the VoD server, and that each home has a mean downlink bandwidth of 6 Mbps. The amount of time required to complete the distribution of pre-positioned popular content in one community is then below 5 minutes (312720 × 0.5 × 2 / (6 × 200) ≈ 260 seconds). The data suggest that a VoD server can stripe movies over many communities during idle hours. The use of multicast striping can further increase the number of striped movies in each community, limited mainly by the storage capacity of each STB.
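The pre-population estimate in the last paragraph is a one-line computation; the sketch below just makes the units explicit, plugging in the trace numbers from the text.

def prepopulation_time_s(total_duration_s, fraction, stream_rate_mbps,
                         mean_downlink_mbps, homes):
    # Megabits to push = playback seconds x fraction x encoding rate;
    # the community absorbs them at mean_downlink_mbps per home.
    total_mbits = total_duration_s * fraction * stream_rate_mbps
    return total_mbits / (mean_downlink_mbps * homes)

# 180 movies (312720 s total) at 2 Mbps, half the set, 200 homes at 6 Mbps:
t = prepopulation_time_s(312720, 0.5, 2, 6, 200)  # ~260.6 s, under 5 minutes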
5. TESTBED AND EXPERIMENTS

5.1 The VP2P Testbed

Our experimental platform is described in detail in previous publications [2][3]; we provide a high-level summary here and describe the changes made since then. The testbed used in this paper consisted of a cluster of 64 virtual machines spread evenly over four identical Apple Mac Pros, each equipped with two dual-core 2.66 GHz Xeon processors, 16 GB of main memory, and four 750 GB hard disks. There were also four Linux-based bandwidth-shaping routers and a central server for experiment control and data collection. Figure 9 shows the networking configuration.

Figure 9: Testbed network diagram

The control network was used for system bootstrapping and monitoring. The north-side network emulated the connection from the access nodes to the VoD servers; this segment also contained the experiment run controller. The south-side network was used to emulate peer connections to an IPTV service provider (ISP). One of the Ethernet ports on the host Mac Pro was directly connected to the south-side subnet, and each virtual machine on it was configured with a virtual Ethernet interface bridged to this port. In effect, all VM peers had full Ethernet connectivity to the south-side subnet. To simulate the point-to-point connection from a video endpoint to the ISP, each virtualized peer created a VLAN connection over its south-side Ethernet link to its peered router during bootstrapping. This design allowed us to simulate a vast number of point-to-point connected end users without needing to run a separate Ethernet wire for each.

5.2 Testbed Experiments
For experiments conducted for this paper, we typically
assume the following values unless otherwise specified:
sym.  value     comments
ZN    1024      can be adjusted based on the real VoD data
Zn    256       stripe only the top 25% of HD videos
Zk    200 Kbps  rate limit for each upload thread (up to 8)
Zp    32        for HD video (6.4 Mbps)
Zc    1         for all popular videos in most experiments
Zr    1         no erasure coding in the testbed experiments
Zs    32        for HD video
Zg    5 GB      5 GB on each of 64 peers
The HD videos we used are 1.5 GB each, which represents roughly 30 minutes of HD video. Figure 10 shows the bandwidth of clustered peers, where a cluster refers to homes that experience similar downlink bandwidths (see Figure 7). Average downlink bandwidth degrades with an increasing number of requesting peers. A peer requests a video at random, with a 75% probability that the request comes from the popular set. Each supplying peer's total uplink bandwidth is fixed at 1.8 Mbps. A supplying peer can have a maximum of 8 concurrent upload threads, each contributing 200 Kbps (while leaving at least 200 Kbps for other upload activities). Similarly, a requesting peer can have a maximum of 32 download threads, which amount to 6.4 Mbps. Note that requests for unpopular content go back to the VoD server, which may provide a higher bandwidth when it is not busy, since we did not rate-limit the VoD server in these experiments. The chart shows that peer-assisted HD delivery is possible only for the 8 Mbps and 12 Mbps peer clusters, and only when the number of requesting peers is less than or equal to 8.
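The feasibility condition in the last sentence can be checked with the thread-level numbers above. A sketch under the stated limits (8 upload threads of 200 Kbps per supplying peer, 6.4 Mbps HD playback); the supplying-peer counts in the example are illustrative splits of the 64-peer testbed, not measured configurations.

def achievable_rate_kbps(requesting_peers, supplying_peers,
                         cluster_downlink_kbps, thread_kbps=200,
                         max_upload_threads=8):
    # Per-requester rate is limited both by the requester's cluster
    # downlink and by the supplying peers' aggregate upload capacity.
    aggregate = supplying_peers * max_upload_threads * thread_kbps
    return min(cluster_downlink_kbps, aggregate / requesting_peers)

HD_RATE_KBPS = 6400  # 32 download threads x 200 Kbps

# 8 requesters in an 8 Mbps cluster, 56 supplying peers: feasible.
print(achievable_rate_kbps(8, 56, 8000) >= HD_RATE_KBPS)    # True
# 16 requesters, 48 suppliers: aggregate upload is exhausted.
print(achievable_rate_kbps(16, 48, 8000) >= HD_RATE_KBPS)   # False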
Figure 10: Average downlink bandwidth of clustered peers (4, 6, 8, and 12 Mbps client clusters; x-axis: number of requesting peers; y-axis: average cluster bandwidth in Mbps)

5.3 Simulation

In order to effectively investigate how Zebroid behaves in a large community, we simulated a typical deployment with
300 peers. This was necessary since our testbed currently implements a maximum of 64 peers using virtual machines, while there can be deployment scenarios with up to 500 members in a DSLAM community. We employed an event-driven simulator on one community of 300 peers and a movie set of 500 SD movies, where each movie has one copy striped across 12 peers. With an erasure coding rate of 5/6, a peer needs 10 stripes to reconstruct a chunk. Each movie has a streaming rate of 2 Mbps, and a supplying peer contributes 200 Kbps per stripe. Figure 11 shows the number of active peers vs. time and the corresponding VoD server capacity utilization.

Figure 11: Number of active peers and VoD server capacity utilization over time (peers with failure rate 0.2 vs. always-active peers)

There is an initial transient period while users are making peer requests. The system stabilizes after a period of time, with an average server capacity of 120 Mbps used to serve an average of 250 users. By contrast, to support 250 concurrent users with unicast, the VoD server would have to be provisioned with at least 500 Mbps; with the peering solution, one can support a maximum of 300 peers with less than 120 Mbps. In a worst-case scenario where 20% of the peers fail on average, the utilized server capacity increases to 150 Mbps, since the VoD server now has to compensate for the failed capacity of the peers. In a typical deployment, the peer failure rate would be much smaller. Note that failed peers were also allowed to recover at a rate of 5% on average after a certain time, which explains why the steady-state value does not fluctuate rapidly. All peers were used for striping; however, peers that fail do not participate in uploading chunks until they recover.
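A drastically simplified version of such an event-driven simulation is sketched below: peers fail and recover with fixed per-step probabilities chosen so that about 20% are down in steady state, and the server supplies whatever stripe bandwidth is missing. All constants and structure are illustrative assumptions; this models only the failure-induced server supplement, not the baseline load from unpopular requests.

import random

def simulate(steps=5000, peers=300, stripes=12, min_stripes=10,
             stripe_kbps=200, movie_kbps=2000, viewers=250,
             fail_p=0.0125, recover_p=0.05):
    # Steady-state down fraction ~= fail_p / (fail_p + recover_p) = 0.2.
    alive = [True] * peers
    server_load_mbps = []
    for _ in range(steps):
        for i in range(peers):
            if alive[i] and random.random() < fail_p:
                alive[i] = False
            elif not alive[i] and random.random() < recover_p:
                alive[i] = True
        frac_alive = sum(alive) / peers
        # Expected live stripes per viewer; any shortfall below the
        # movie rate must be supplied by the VoD server.
        live_stripes = min(min_stripes, stripes * frac_alive)
        residual_kbps = max(movie_kbps - live_stripes * stripe_kbps, 0)
        server_load_mbps.append(viewers * residual_kbps / 1000.0)
    return server_load_mbps

load = simulate()
print(sum(load[1000:]) / len(load[1000:]))  # ~20 Mbps average supplement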
6. CONCLUSION

This paper describes Zebroid, a VoD solution that uses IPTV operational data on an on-going basis to determine how to pre-position popular content in customer set-top boxes during idle hours. On-going analyses of the VoD request distribution, set-top box availability, and capacity data on network components are all taken into consideration in determining the parameters used in Zebroid's striping algorithm. We show, by both simulation and emulation on a realistic IPTV testbed, that the VoD server load during peak hours can be reduced by 50-80% using Zebroid. While a typical P2P-VoD scheme without active intervention may see less synchrony among the users sharing video content, we demonstrate that our peer content placement strategy, based on IPTV operational data, offers much-needed improvements in overall delivery capacity without requiring network infrastructure upgrades.
7. ACKNOWLEDGMENTS
The authors would like to thank Chris Volinsky, Deborah
Swayne, Ralph Knag, and Alex Gerber for their assistance
with IPTV data, and Matti Hiltunen for his valuable comments on this paper.
8. REFERENCES
[1] M. Cha, P. Rodriguez, S. Moon, and J. Crowcroft. On
Next-Generation Telco-Managed P2P TV Architectures. In
International workshop on Peer-To-Peer Systems (IPTPS),
2008.
[2] Y. Chen, Y. Huang, R. Jana, H. Jiang, M. Rabinovich, J. Rahe,
B. Wei, and Z. Xiao. Towards Capacity and Profit Optimization
of Video-on-Demand Services in a Peer-Assisted IPTV
Platform. ACM Multimedia Systems, 15(1):19–32, Feb. 2009.
[3] Y. Chen, R. Jana, D. Stern, M. Yang, and B. Wei. VP2P - A Virtual Machine-Based P2P Testbed for VoD Delivery. In Proceedings of IEEE Consumer Communications and Networking Conference, 2009.
[4] Y. Choe, D. Schuff, J. Dyaberi, and V. Pai. Improving VoD
Server Efficiency with BitTorrent. In The 15th International
Conference on Multimedia, September 2007.
[5] B. Cohen. Incentives Build Robustness in BitTorrent. In Proceedings of the Workshop on Economics of Peer-to-Peer Systems, 2003.
[6] C. Huang, J. Li, and K. Ross. Peer-Assisted VoD: Making
Internet Video Distribution Cheap. In Proc. of IPTPS, 2007.
[7] Y. Huang, Y. Chen, R. Jana, H. Jiang, M. Rabinovich,
A. Reibman, B. Wei, and Z. Xiao. Capacity Analysis of
MediaGrid: a P2P IPTV Platform for Fiber to the Node
(FTTN) Networks. IEEE JSAC - Special Issue on
Peer-to-Peer Communications and Applications,
25(1):131–139, 2007.
[8] Y. Huang, T. Fu, D. M. Chiu, J. Lui, and C. Huang.
Challenges, Design and Analysis of a Large-scale P2P-VoD
System. In Proceedings of SIGCOMM, 2008.
[9] V. Janardhan and H. Schulzrinne. Peer assisted VoD for set-top
box based IP network. In Proceedings of the 2007 workshop on
peer-to-peer streaming and IP-TV, pages 335–339. ACM New
York, NY, USA, 2007.
[10] L. Rizzo. Effective Erasure Codes for Reliable Computer
Communication Protocols. In ACM SIGCOMM Computer
Communication Review, April 1997.
[11] K. Suh, C. Diot, J. Kurose, and L. Massoulie. Push-to-peer
video-on-demand system: design and evaluation. IEEE JSAC,
2007.
[12] X. Zhang, J. Liu, B. Li, and T. Yum. CoolStreaming/DONet: A Data-Driven Overlay Network for Efficient Live Media Streaming. In Proceedings of IEEE INFOCOM, volume 3, 2005.