
SABRE: A client-based technique for mitigating the buffer bloat effect of adaptive video flows

Ahmed Mansy, Mostafa Ammar (Georgia Tech)
Bill Ver Steeg (Cisco)

What is buffer bloat?

Significantly high queuing delays caused by the combination of TCP and large buffers. Setup: a server sends to a client over a bottleneck of capacity C bps with round-trip time RTT.

• The TCP sender tries to fill the pipe by increasing its congestion window (cwnd)
• Ideally, cwnd should grow to the bandwidth-delay product, BDP = C x RTT
• TCP uses packet loss to detect congestion, and then reduces its rate
• Large buffers increase queuing delays and also delay loss events
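These quantities are easy to compute. A minimal sketch using the talk's testbed numbers (6 Mbps bottleneck, 100 ms RTT, 256-packet tail-drop queue; the 1500-byte packet size is an assumption):

```python
def bdp_bytes(capacity_bps: float, rtt_s: float) -> float:
    # Bandwidth-delay product: bytes in flight needed to fill the pipe.
    return capacity_bps * rtt_s / 8

def max_queuing_delay_s(buffer_pkts: int, pkt_bytes: int,
                        capacity_bps: float) -> float:
    # Worst-case extra delay added when the tail-drop buffer is full.
    return buffer_pkts * pkt_bytes * 8 / capacity_bps

bdp = bdp_bytes(6e6, 0.100)                  # 75000 bytes, ~50 full-size packets
delay = max_queuing_delay_s(256, 1500, 6e6)  # 0.512 s of queuing delay
```

Note that a full 256-packet buffer at 6 Mbps adds about half a second of delay, far beyond what interactive VoIP tolerates.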

DASH: Dynamic Adaptive Streaming over HTTP

• The DASH client fetches a manifest from an HTTP server describing the available bitrates (here 350, 600, 900, and 1200 kbps)
• Video is split into short segments
• Playback has an initial buffering phase; once the playout buffer reaches 100%, the player enters a steady state of On/Off download cycles

S. Akhshabi et al., "An experimental evaluation of rate-adaptation algorithms in adaptive streaming over HTTP", MMSys '11
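The bitrate-selection step can be illustrated with a simple sketch (an illustrative policy, not the algorithm evaluated in the cited paper): pick the highest ladder bitrate the measured throughput can sustain.

```python
BITRATES_KBPS = [350, 600, 900, 1200]  # the ladder from the slide

def select_bitrate(throughput_kbps: float, ladder=BITRATES_KBPS) -> int:
    # Highest bitrate at or below the measured throughput;
    # fall back to the lowest rung if none fits.
    feasible = [b for b in ladder if b <= throughput_kbps]
    return max(feasible) if feasible else min(ladder)
```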

Problem description

• Does DASH cause buffer bloat?
• Will the quality of VoIP calls be affected by DASH flows?
• And if yes, how can we solve this problem?

Our approach

• To answer the first two questions, we perform experiments on a lab testbed to measure the buffer bloat effect of DASH flows
• We developed a scheme, SABRE (Smooth Adaptive Bit RatE), to mitigate this problem
• We use the testbed to evaluate our solution

Measuring the buffer bloat effect

• Testbed: HTTP video server and iPerf server, connected at 1 Gbps to a bottleneck emulator (6 Mbps DSL, tail-drop queue of 256 packets, RTT 100 ms), then to a DASH client and iPerf client
• UDP traffic emulating an OTT VoIP call: 80 kbps, 150-byte packets

Result: adaptive HTTP video flows have a significant effect on VoIP traffic.

Understanding the problem: why do we get large bursts?

• The server side runs at 1 Gbps while the bottleneck is only 6 Mbps
• TCP is bursty: a window's worth of packets can arrive at the bottleneck back to back

Possible solutions

• Middlebox techniques: Active Queue Management (AQM)
  – RED, BLUE, CoDel, etc.
  – RED is on every router but hard to tune
• Server techniques: rate limiting at the server to reduce burst size
• Our solution: smooth download driven by the client

Some hidden details

• There are two data channels: (1) the server fills the OS socket buffer in response to HTTP GETs, and (2) the DASH player drains the socket buffer into the playout buffer via recv()
• In traditional DASH players: while(true) recv
• As a result, channels 1 and 2 are coupled
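A sketch of the traditional drain loop (hypothetical helper, not code from the talk; the tight recv() loop is exactly what keeps the socket buffer empty and couples the two channels):

```python
import socket

def fetch_segment(sock: socket.socket, segment_len: int) -> bytes:
    # Traditional player: drain the OS socket buffer as fast as possible.
    # Because recv() runs in a tight loop, the socket buffer stays nearly
    # empty and the receiver keeps advertising a large window (rwnd).
    data = bytearray()
    while len(data) < segment_len:
        chunk = sock.recv(64 * 1024)
        if not chunk:  # connection closed
            break
        data.extend(chunk)
    return bytes(data)
```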

Idea: smooth download to eliminate bursts

• TCP can send a burst of up to min(rwnd, cwnd) bytes
• rwnd is a function of the empty space in the receiver's socket buffer
• Since we cannot control cwnd, we control rwnd via the socket buffer

Two objectives:
• Keep the socket buffer almost full all the time
• Do not starve the playout buffer
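The burst bound follows directly from the window definitions (sketch; the variable names are mine):

```python
def max_burst_bytes(sock_buf_size: int, sock_buf_occupancy: int,
                    cwnd: int) -> int:
    # TCP can emit up to min(rwnd, cwnd) bytes back to back. The advertised
    # window rwnd is the free space in the receiver's socket buffer, so a
    # nearly full socket buffer caps the burst size regardless of cwnd.
    rwnd = sock_buf_size - sock_buf_occupancy
    return min(rwnd, cwnd)
```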

Keeping the socket buffer full: controlling the recv rate

• On/Off (while(1) recv): every GET is answered into an empty socket buffer, so each segment arrives as a large burst and the buffer sits empty during the Off periods
• Rate-controlled (while(timer) recv): reads are paced by a timer, so the socket buffer drains slowly and stays nearly full between GETs
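A sketch of the timer-driven read loop (hypothetical helper; pacing recv() at roughly the video bitrate is my assumption about the target rate):

```python
import socket
import time

def paced_recv(sock: socket.socket, nbytes: int, rate_bps: float,
               chunk: int = 16 * 1024) -> bytes:
    # while(timer) recv: read one chunk, then sleep so the average drain
    # rate is ~rate_bps. The socket buffer stays nearly full, keeping the
    # advertised window (and thus the server's burst size) small.
    interval_s = chunk * 8 / rate_bps
    data = bytearray()
    while len(data) < nbytes:
        got = sock.recv(min(chunk, nbytes - len(data)))
        if not got:  # connection closed
            break
        data.extend(got)
        time.sleep(interval_s)
    return bytes(data)
```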

Keeping the socket buffer full: HTTP pipelining

• Without pipelining, the socket buffer empties between one segment's last byte and the next GET
• With pipelining, the client requests the next segment(s) while the current one is still downloading, e.g. GET S1 and S2 together, then GET S3 while S1 drains
• Number of segments to keep in flight: #segments = 1 + (socket buffer size / segment size)
• The socket buffer is then always full, so rwnd stays small
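The slide's formula as a helper (sketch; rounding up for non-integer ratios is my assumption):

```python
import math

def pipeline_depth(sock_buf_bytes: int, segment_bytes: int) -> int:
    # #segments = 1 + socket_buffer_size / segment_size: one segment being
    # drained into the playout buffer, plus enough pipelined requests to
    # keep the socket buffer full at all times.
    return 1 + math.ceil(sock_buf_bytes / segment_bytes)
```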

Still one more problem

• Socket buffer level drops temporarily when the available bandwidth drops Available BW Socket buffer Video bitrate • This results in larger values of rwnd – Can lead to large bursts and hence delay spikes • Continuous monitoring of the socket buffer level can help 13
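On Linux, the player can poll the occupancy of the receive socket buffer with the FIONREAD ioctl (platform-specific sketch; the talk does not say how SABRE measures the level):

```python
import fcntl
import socket
import struct
import termios

def socket_buffer_level(sock: socket.socket) -> int:
    # Bytes currently queued in the receive socket buffer (Linux FIONREAD).
    # Polling this lets the player detect a draining buffer and throttle
    # its reads before rwnd opens up and invites a burst.
    raw = fcntl.ioctl(sock.fileno(), termios.FIONREAD, b"\x00\x00\x00\x00")
    return struct.unpack("i", raw)[0]
```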

Experimental results

• Same testbed: 1 Gbps access, bottleneck emulator (6 Mbps DSL, tail-drop queue of 256 packets, RTT 100 ms), iPerf UDP traffic (80 kbps, 150-byte packets) emulating OTT VoIP
• We implemented SABRE in the VLC DASH player

Single DASH flow, constant available bandwidth

• On/Off: delay > 200 ms about 40% of the time
• SABRE: delay < 50 ms 100% of the time

Video adaptation: how does SABRE react to variable bandwidth?

• While the socket buffer is full, the player cannot estimate the available bandwidth
• If the player can support the current bitrate, it shoots for a higher one
• The player tries to up-shift to a higher bitrate but cannot sustain it
• The socket buffer gets drained, so SABRE reduces the recv rate and down-shifts to a lower bitrate
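The shift decision described above can be sketched as a function of the socket buffer level (illustrative only; the 50% and 90% thresholds are my assumptions, not values from the talk):

```python
def adapt(bitrate_idx: int, ladder: list, sock_buf_level: int,
          sock_buf_size: int) -> int:
    # SABRE-style shift decision: a draining socket buffer means the
    # available bandwidth cannot sustain the current bitrate; a buffer
    # that stays full means the player can probe a higher one.
    if sock_buf_level < 0.5 * sock_buf_size and bitrate_idx > 0:
        return bitrate_idx - 1   # buffer draining: down-shift
    if sock_buf_level >= 0.9 * sock_buf_size and bitrate_idx < len(ladder) - 1:
        return bitrate_idx + 1   # buffer full: probe a higher bitrate
    return bitrate_idx
```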

Single DASH flow, variable available bandwidth

• The available bandwidth is switched between 6 Mbps and 3 Mbps (changes at T=180 s and T=380 s); On/Off vs SABRE

Two clients

• Two On/Off clients (C1, C2) vs two SABRE clients sharing the same bottleneck to the server

Summary

• The On/Off behavior of adaptive video players can have a significant buffer bloat effect; a single On/Off client significantly increases queuing delays
• We designed and implemented a client-based technique (SABRE) to mitigate this problem
• Future work:
  – Improve the SABRE adaptation logic for a mix of On/Off and SABRE clients
  – Investigate DASH-aware middlebox and server-based techniques

Thank you!

Questions?


Backup slides


Random Early Detection: can RED help?

• RED drops arriving packets with a probability that rises from P=0 at a minimum average queue size to P=1 at a maximum
• But once the burst is on the wire, not much can be done!
• How can we eliminate large bursts?
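The drop curve sketched on the slide (simplified; classic RED caps the linear region at a configurable max_p before jumping to 1):

```python
def red_drop_prob(avg_queue: float, min_th: float, max_th: float,
                  max_p: float = 1.0) -> float:
    # RED drop probability as a function of the average queue size:
    # 0 below min_th, rising linearly to max_p at max_th, 1 beyond max_th.
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```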

Single DASH flow, constant available bandwidth (SABRE)

Single DASH flow, constant available bandwidth (On/Off vs SABRE)

• On/Off: delay > 200 ms about 40% of the time
• SABRE: delay < 50 ms 100% of the time

Single DASH flow, variable available bandwidth

• The available bandwidth is switched between 6 Mbps and 3 Mbps (changes at T=180 s and T=380 s); On/Off vs SABRE

Single ABR flow, variable available bandwidth (On/Off vs SABRE)

Two clients

• At least one On/Off DASH client significantly increases queuing delays

Two clients
