
Peer-to-Peer Streaming

Peer-to-Peer Systems and Applications, Springer LNCS 3485

Overview

1. IP-Level vs. Application-Level Streaming
2. P2P Streaming Basics
3. PPLive
4. Summary

Multicast Internet Applications

- Multi-party applications
  - Video on Demand and IPTV
  - Audio/video conferencing
  - Multi-party games
  - Distributed simulation
  - Broadcast of web cams
- Consider a world with ...
  - Tens of millions of simultaneously running multi-point applications
  - Each application with tens to several thousand end points

Multi-unicast vs. IP Multicast

[Figure: multi-unicast delivery (left) vs. IP multicast delivery (right)]

IP Multicast Overview

- Seminal work by Steve Deering in 1989
  - Huge amount of follow-on work
  - Research
    - 1000s of papers on multicast routing, reliable multicast, multicast congestion control, layered multicast
    - SIGCOMM and ACM Multimedia award papers, ACM Dissertation Award
  - Standards: IPv4 and IPv6, DVMRP/CBT/PIM
  - Development: in both routers (Cisco etc.) and end systems (Microsoft, all versions of Unix)
  - Deployment: Mbone, major ISPs
  - Applications: vic/vat/rat/wb ...
- Situation today
  - Still not used across the Internet. Reasons?

Router state issue

- How to tell that a packet is a multicast packet?
  - Each group needs a group address
- How to decide where and how to branch?
  - A routing protocol needs to set up per-group state at routers
- Violates the stateless packet forwarding principle
  - Currently the IP layer only maintains routing state
    - Highly aggregated
    - ~140K routing entries today for hundreds of millions of hosts

Ack Explosion

[Figure: end-to-end acks (left) vs. router-based acks (right)]

- Large number of acknowledgements required
  - End-to-end acknowledgments are inefficient
  - Router-based acknowledgments overload routers
    - Requires even larger state maintenance

IP Multicast Issues – Summary

- Poor routing scalability
  - Routers need to keep per-group/connection state
  - Violation of a fundamental Internet architecture principle
- Difficult to support higher-level functionality
  - Error control, flow control, congestion control
- Security concerns
  - Access control, for both senders and receivers
  - Denial of Service attacks

IP Architecture

- “Dumb” IP layer
  - Minimal functionality for connectivity
  - Unicast addressing, forwarding, routing
- Smart end systems
  - The transport layer or application performs more sophisticated functionality
  - Flow control, error control, congestion control
- Advantages
  - Accommodates heterogeneous technologies
  - Supports diverse applications and decentralized network administration

[Figure: the “hourglass” model of the IP architecture]

Multicast revisited

- Can we achieve
  - efficient multi-point delivery
  - without support from the IP layer?

Application Layer Multicast (ALM)

- Do stream distribution at the application level
  - Multicasting implemented at end hosts instead of network routers
  - Nodes form unicast channels or tunnels between them
  - Uses the default IP infrastructure
  - Receivers form a self-organized network (peer-to-peer principle)

[Figure: overlay of end systems S, E1, E2, E3 connected by unicast tunnels across routers R1 and R2]

P2P Media Streaming

- Media streaming is extremely expensive (see the cost sketch below)
  - 1 hour of video encoded at 300 Kbps = 128.7 MB
  - Serving 1000 users would require 125.68 GB
- Approach: same idea as in P2P file sharing
  - Peers form an overlay network
  - Nodes offer their uplink bandwidth while downloading and viewing the media content
  - Takes load off the server
  - Scalable
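
A minimal sketch of the arithmetic behind these figures; it assumes 300 Kbps means 300,000 bits/s, that MB/GB above are binary units (MiB/GiB), and that the total is derived from the already-rounded per-user value:

```python
# Back-of-the-envelope cost of serving one stream from a single server.
# Assumes 300 Kbps = 300,000 bits/s; MB/GB above are binary (MiB/GiB).

BITRATE_BPS = 300_000   # 300 Kbps encoding rate
DURATION_S = 3600       # 1 hour
USERS = 1000

bytes_per_user = BITRATE_BPS * DURATION_S / 8      # 135,000,000 bytes
mib_per_user = round(bytes_per_user / 2**20, 1)    # -> 128.7 MiB
gib_total = mib_per_user * USERS / 1024            # -> 125.68 GiB

print(f"1 hour per user: {mib_per_user} MiB")
print(f"{USERS} users: {gib_total:.2f} GiB")
```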

Peer-to-Peer Streaming Benefits

- Easy to deploy
  - No change to the network infrastructure
- Programmable end hosts
  - Overlay construction algorithms at end hosts can be easily applied
  - Application-specific customizations
    - Network structure
    - Packet forwarding strategies


Challenges

- Need to play back the media in real time
  - Quality of Service
- Procure future media stream packets in advance
  - Needs reliable neighbors and effective management
- High “churn” rate: users join and leave mid-stream
  - Needs a robust network topology to overcome churn
- Internet dynamics and congestion in the interior of the network
  - Degrade QoS
- Fairness policies are extremely difficult to apply
  - High-bandwidth users have no incentive to contribute
  - Tit-for-tat doesn’t work due to asymmetry

Peer-to-Peer Streaming Models

- Media content is broken down into small pieces and disseminated through the network
  - Push model: content is forwarded as soon as it arrives
  - Pull model: nodes request missing pieces
- Neighboring nodes use a gossip protocol to exchange buffer information
- Nodes trade unavailable pieces
- Robust and scalable, but more delay


Network efficiency

- Optimization goals
  - Delay between source and receivers should be small
    - Relative Delay Penalty (RDP)
  - Number of redundant packets on any physical link should be low
    - Physical Link Stress (PLS)

[Figure: three overlays over the hosts CMU, Stan1, Stan2, Berk1, Berk2, and Gatech: high latency, high degree (unicast), and an “efficient” overlay]

Physical Link Stress (PLS)

- PLS is given by the number of identical copies of a packet that traverse a physical link
  - Indicates the bandwidth inefficiency
- Example (topology as in the ALM figure above):
  - PLS for link S-R1 is 2.
  - Average PLS is 7/5.


Relative Delay Penalty (RDP)

- RDP is given by the ratio of the delay in the overlay to the delay on the direct unicast path
  - Indicates the delay inefficiency
- Example (same topology, with link delays S-R1 = 10 ms, R1-E2 = 10 ms, R1-R2 = 20 ms, R2-E1 = 10 ms, R2-E3 = 10 ms; see the sketch below):
  - Overlay delay for the path from S to E3 is 60 ms.
  - Unicast delay is 40 ms.
  - Therefore, the RDP for E3 is 1.5 (= 60 ms / 40 ms).
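
A minimal sketch computing both metrics for this example topology; the physical routes and the two overlay configurations are assumptions chosen to reproduce the numbers on these two slides:

```python
# Physical Link Stress (PLS) and Relative Delay Penalty (RDP) for the
# example topology. ROUTE maps each overlay hop to its physical path;
# the overlays below are assumed, chosen to match the slide numbers.

from collections import Counter

DELAY = {("S", "R1"): 10, ("R1", "E2"): 10, ("R1", "R2"): 20,
         ("R2", "E1"): 10, ("R2", "E3"): 10}   # ms per physical link

ROUTE = {("S", "E2"):  [("S", "R1"), ("R1", "E2")],
         ("S", "E3"):  [("S", "R1"), ("R1", "R2"), ("R2", "E3")],
         ("E3", "E1"): [("R2", "E3"), ("R2", "E1")],
         ("E2", "E3"): [("R1", "E2"), ("R1", "R2"), ("R2", "E3")]}

def pls(overlay):
    """Copies of one packet per physical link under the given overlay."""
    return Counter(link for hop in overlay for link in ROUTE[hop])

def delay_ms(overlay_path):
    return sum(DELAY[link] for hop in overlay_path for link in ROUTE[hop])

stress = pls([("S", "E2"), ("S", "E3"), ("E3", "E1")])  # PLS example
print(stress[("S", "R1")])                  # -> 2
print(sum(stress.values()) / len(DELAY))    # -> 1.4 (= 7/5)

rdp = delay_ms([("S", "E2"), ("E2", "E3")]) / delay_ms([("S", "E3")])
print(rdp)                                  # -> 1.5 (60 ms / 40 ms)
```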

Network topologies

- Tree-based
  - Content flows from the server to the nodes in a tree-like fashion
  - One point of failure for a complete subtree
  - High recovery time
  - Multiple trees for increased robustness
- Mesh-based
  - Overcomes the flaws of tree-based topologies
  - Nodes maintain state information about many neighbors
  - High control overhead


Streaming Topologies: Tree

- Tree construction based on minimal delays
- Permanent peer monitoring for tree maintenance and repair
  - In case individual links/paths fail or become congested
  - In case receivers join and leave (churn)
- Push-based delivery from the source(s) to the receivers (see the sketch below)
  - Data is forwarded along the tree with minimal delay
- Multiple trees (= multiple root nodes)
  - Provide redundant delivery paths against failures (churn)
  - Provide complementary data flows to each receiver
- Issues
  - Maintaining an “optimal” tree structure incurs a lot of overhead
  - Particularly in conjunction with churn
  - Churn may also cause disruptions to downstream receivers
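
A minimal sketch of the push model at a single tree node; the Node class and its recursive delivery are illustrative stand-ins, not a specific protocol:

```python
# Push delivery in a tree: each node forwards a chunk to its children
# as soon as it arrives; no requests are exchanged. Node and play_out()
# are hypothetical abstractions for illustration.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    children: list = field(default_factory=list)

    def receive(self, chunk_id: int, data: bytes) -> None:
        self.play_out(chunk_id)        # hand the chunk to the player buffer
        for child in self.children:    # then push it downstream immediately
            child.receive(chunk_id, data)

    def play_out(self, chunk_id: int) -> None:
        print(f"{self.name}: buffered chunk {chunk_id}")

# Source -> A -> (B, C): one upload per overlay edge.
b, c = Node("B"), Node("C")
root = Node("A", children=[b, c])
root.receive(0, b"...")  # the source pushes chunk 0 into the tree
```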

Tree-based System: End System Multicast (ESM)

- First application-level streaming system
  - Developed at CMU by Hui Zhang et al. (2002)
- Objectives
  - Self-organizing: adapts to dynamic membership changes
  - Self-improving: automatically evolves into efficient overlays
- Two versions of the protocol
  - Multi-source, smaller-scale conferencing apps
  - Single-source, larger-scale broadcasting apps
- Tree-based, push model


ESM Node Join

- Bootstrapping process (for node X)
  - Connect to the source (S)
  - Get a subset of the group membership
- Parent selection algorithm (sketched below)
  - Send probe messages to known nodes
  - Decision criteria for a parent node (P)
    - Filter out P if it is a descendant of X
    - Performance of P
      - Delay of the path from S to P
      - Saturation level of P
    - Performance of the link P-X
      - Delay of link P-X
      - TCP bandwidth of link P-X
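
A minimal sketch of the criteria above as a filter-and-score pass; the candidate fields and the ranking are illustrative assumptions, not ESM's actual formula:

```python
# Parent selection, roughly following the criteria listed above.
# Candidate fields and the ranking are illustrative; the real ESM
# implementation uses its own probing and thresholds.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    is_descendant: bool           # of the joining node X (would loop)
    saturated: bool               # no spare outgoing slots
    delay_from_source_ms: float   # delay of the path from S to P
    link_delay_ms: float          # probed delay of link P-X
    link_bw_kbps: float           # probed TCP bandwidth of link P-X

STREAM_RATE_KBPS = 300  # assumed stream rate the parent must sustain

def choose_parent(candidates):
    eligible = [p for p in candidates
                if not p.is_descendant
                and not p.saturated
                and p.link_bw_kbps >= STREAM_RATE_KBPS]
    # Prefer a small end-to-end delay: source->P plus the link P->X.
    return min(eligible,
               key=lambda p: p.delay_from_source_ms + p.link_delay_ms,
               default=None)

probed = [
    Candidate("P1", False, False, 80.0, 30.0, 900.0),
    Candidate("P2", False, True, 20.0, 10.0, 1200.0),  # saturated
    Candidate("P3", True, False, 40.0, 5.0, 1500.0),   # descendant of X
]
print(choose_parent(probed).name)  # -> P1
```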

ESM Tree Maintenance

- Build a separate control structure decoupled from the tree
  - Each member knows a small random subset of the group members
  - Information is maintained using a gossip-like algorithm
  - Members also maintain their path from the source
- Continuously apply the parent selection strategy
  - React to nodes becoming unavailable
  - Repeat probing regularly
  - Improve bandwidth usage
  - Improve clustering


Multiple Tree Construction

[Figure: a video stream split across multiple multicast trees]

Join Procedure

- Initial join: contact the video source
- Receive the peer list and the number of trees
- Probe the peers
- Connect to the multicast trees


Disconnect / Rejoin Procedure

[Figure: disconnect/rejoin with 3 trees, step by step]

- The parent in the yellow tree leaves
- The loss of the yellow tree is detected (“yellow tree is down?”)
- Retransmissions are requested via the remaining trees
- The yellow tree is recovered

Streaming Topologies: Mesh

- Mesh construction and maintenance similar to trees
  - Find a set of peers with minimal delays
  - Choose a subset of the peers initially provided via bootstrapping
  - Gossiping protocols to learn about further peers
  - Continuous optimization of the neighbor set
- Active pulling of media segments from peers (see the sketch below)
  - Exchange of buffer maps (who has which data)
  - Explicit requests for missing chunks (receiver-controlled)
  - Chunks are kept locally available for forwarding to other peers
  - Similar to BitTorrent, but needs to consider time constraints
- Issues
  - Lots of control traffic (explicit pull)
  - Higher end-to-end delay (due to buffering and pull-based forwarding)
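
A minimal sketch of receiver-driven pulling against neighbor buffer maps; the data structures and the supplier heuristic are illustrative assumptions:

```python
# Pull scheduling in a mesh: compare the local buffer map with the
# neighbors' maps and request missing chunks in playout (deadline)
# order. Buffer maps are modeled as sets of chunk ids.

def schedule_pulls(have, buffer_maps, playout_pos, window):
    """Return (chunk_id, peer) request pairs, most urgent chunk first."""
    requests = []
    for chunk in range(playout_pos, playout_pos + window):
        if chunk in have:
            continue
        suppliers = [p for p, bm in buffer_maps.items() if chunk in bm]
        if suppliers:
            # Crude load spreading: prefer the peer advertising fewer chunks.
            requests.append(
                (chunk, min(suppliers, key=lambda p: len(buffer_maps[p]))))
    return requests

neighbors = {"A": {10, 11, 13}, "B": {11, 12, 13, 14}}
print(schedule_pulls(have={10, 11}, buffer_maps=neighbors,
                     playout_pos=10, window=5))
# -> [(12, 'B'), (13, 'A'), (14, 'B')]
```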


CoolStreaming

- X. Zhang, J. Liu, B. Li, and T.-S. Peter Yum: “CoolStreaming/DONet: A data-driven overlay network for efficient live media streaming” (IEEE INFOCOM, 2005)
  - A joining node obtains a list of 40 nodes from the source
  - Each node contacts these nodes for media content
    - Extends its list of known nodes via gossiping
  - In steady state, every node typically has 4-8 active neighbors
  - Real-world deployed and highly successful system
    - Stopped in 2005 due to copyright issues
  - Successor: PPLive

CoolStreaming distribution algorithm

- The stream is chopped into segments by the server and disseminated
- Each node periodically shares its buffer content map with its neighbors
  - Segments are requested as part of this exchange message
- Reply strategy (sketched below)
  - Send scarce segments first (like BitTorrent)
  - If no segment is scarce: serve the peer with the highest bandwidth first
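
A minimal sketch of this reply ordering; the scarcity threshold and the bookkeeping structures are illustrative assumptions, not CoolStreaming's actual implementation:

```python
# Sender-side reply ordering: serve requests for scarce segments first;
# among the rest, serve the highest-bandwidth requester first. The
# scarcity threshold (copies held in the neighborhood) is assumed.

SCARCITY_THRESHOLD = 2  # assumed: <= 2 neighbor copies counts as scarce

def order_replies(requests, neighbor_maps, peer_bw_kbps):
    """requests: list of (peer, segment_id); returns them in send order."""
    def copies(seg):
        return sum(seg in bmap for bmap in neighbor_maps.values())
    return sorted(
        requests,
        key=lambda r: (copies(r[1]) > SCARCITY_THRESHOLD,  # scarce first
                       -peer_bw_kbps[r[0]]))               # then fast peers

neighbor_maps = {"A": {1, 2, 3}, "B": {2, 3}, "C": {3}}
bw = {"A": 500, "B": 2000, "C": 1000}
reqs = [("A", 3), ("B", 1), ("C", 2)]
print(order_replies(reqs, neighbor_maps, bw))
# -> [('B', 1), ('C', 2), ('A', 3)]: scarce segments first, then by bandwidth
```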


Quality Criteria

- Quality of Service
  - Jitter-free transmission
  - Low end-to-end latency
- Network efficiency
- Uplink utilization
  - High uplink throughput leads to scalable P2P systems
- Robustness and reliability
  - Churn, node failure, or departure should not affect QoS
- Scalability
- Fairness
  - Determined in terms of content served (share ratio)
  - No user should be forced to upload much more than it has downloaded


Quality of Service

- QoS is the most important metric
- Jitter: unavailability of stream content at play time causes jitter
  - Jitter-free transmission ensures good media playback
  - A continuous supply of stream content prevents jitter
- Latency: difference in time between playback at the server and at the user
  - Lower latency keeps users interested: a live event (e.g. a soccer match) can lose its appeal in crucial moments if the transmission is delayed
  - Reducing the hop count reduces latency

Uplink Utilization

- Uplink is the scarcest and most important resource in the network
  - The sum of the uplinks of all nodes is the load taken off the server
- Utilization = uplink used / uplink available
  - Needs effective node organization and topology to maximize uplink utilization
- High uplink throughput means more bandwidth in the network and hence leads to scalable P2P systems


Scalability

- Serve as many users as possible with an acceptable level of QoS
  - An increasing number of nodes should not degrade QoS
- An effective overlay topology and high uplink throughput ensure scalable systems


Fairness

- Measured in terms of content served to the network (see the sketch below)
  - Share ratio = uploaded volume / downloaded volume
- Randomness in the network causes high disparity
  - Many nodes upload huge volumes of content
  - Many nodes get a free ride with little or no contribution
- There must be an incentive for an end user to contribute
  - P2P file sharing systems like BitTorrent use a tit-for-tat policy to combat free riding
  - This is not easy to use in streaming, as nodes procure pieces in real time and applying tit-for-tat can cause delays
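
A minimal sketch of the share-ratio bookkeeping; the 0.1 free-rider cutoff is an illustrative assumption:

```python
# Share ratio as defined above: uploaded volume / downloaded volume.
# The free-rider threshold is an assumed cutoff for illustration.

FREE_RIDER_RATIO = 0.1  # assumed: below this, a peer contributes almost nothing

def share_ratio(uploaded_mb: float, downloaded_mb: float) -> float:
    return uploaded_mb / downloaded_mb if downloaded_mb else float("inf")

peers = {"p1": (900.0, 600.0), "p2": (20.0, 700.0), "p3": (0.0, 500.0)}
for name, (up, down) in peers.items():
    r = share_ratio(up, down)
    tag = "free rider" if r < FREE_RIDER_RATIO else "contributor"
    print(f"{name}: ratio {r:.2f} ({tag})")
# p1: ratio 1.50 (contributor); p2 and p3 fall below the cutoff
```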


Existing Systems

Overview of Existing Systems

- Most noted approach in recent years: CoolStreaming
  - PPLive and SOPCast are derivatives of CoolStreaming
  - Proprietary; their working philosophy is not published
  - Reverse-engineering and measurement studies have been released


PPLive Overview

- One of the largest deployed P2P multimedia streaming systems
- Developed in China
- Hundreds of thousands of simultaneous viewers

PPLive Membership Protocol

[Figure: a client joining the PPLive overlay via the membership protocol]

PPLive analysis

- Long Vu et al.: “Measurement and Modeling of a Large-scale Overlay for Multimedia Streaming”, QShine 2007
- Analyzed:
  - Channel size variation
  - Node degree
  - Overlay randomness
  - Node availability
  - Session length

Channel Size Varies over a day

- Huge popularity variation
  - Peaks at noon and at night
- Higher dynamics than P2P file sharing


Node Degree

[Figure: average node degree vs. channel size; the node degree is scale-free]

- Degree is independent of channel size
- Similar to P2P file sharing

PPLive Peers are Impatient

[Figure: session length distribution; 50% of sessions are shorter than 10 minutes]

- Short sessions (probably channel hopping)
- Different from P2P file sharing


Conclusions

- The characteristics of PPLive and P2P file sharing are different
  - Higher variance in item popularity over time
  - Shorter average session duration
    - Much higher network churn

Summary

Current Issues

- High buffering time for P2P streaming
  - Half a minute for popular streaming channels and around 2 minutes for less popular ones
- Some nodes lag behind their peers by more than 2 minutes in playback time
  - A better peering strategy is needed
- Uneven distribution of uplink bandwidths (unfairness)
- No consideration of duplicate packets
  - Huge volumes of cross-ISP traffic
  - ISPs use throttling to limit bandwidth usage
  - Degrades the QoS perceived at the user end
- Sub-optimal uplink utilization

Comparison to P2P File Sharing

- Similarities
  - Distribution costs move from the stream provider to the network provider
  - Need incentives for end users to contribute resources
  - Scalability needs uniform usage of link capacities (content replication proportional to popularity)
- Differences
  - QoS constraints are essential for streaming
  - No time-consuming strategies against free-riding are possible
  - Much higher churn due to a huge fraction of short sessions
  - Much smaller number of shared items


Conclusion

- P2P streaming is an efficient way to realize application-level multicast
  - Considers heterogeneous nodes
  - Conforms to the IP “hourglass” model
  - Self-optimizing
- Widely used P2P application
  - PPLive: 75 million global installed base and 20 million monthly active users (in 2007)

