Scheduling: Buffer Management
# 1
The setting
# 2
Buffer Scheduling
• Who to send next?
• What happens when the buffer is full?
• Who to discard?
# 3
Requirements of scheduling
• An ideal scheduling discipline
  • is easy to implement
  • is fair and protective
  • provides performance bounds
• Each scheduling discipline makes a different trade-off among these requirements
# 4
Ease of implementation
• The scheduling discipline has to make a decision once every few microseconds!
• Should be implementable in a few instructions, or in hardware
  • for hardware, the critical constraint is VLSI space
  • complexity of the enqueue + dequeue processes
• Work per packet should scale less than linearly with the number of active connections
# 5
Fairness
• Intuitively
  • each connection should get no more than its demand
  • the excess, if any, is equally shared
• But it also provides protection
  • traffic hogs cannot overrun others
  • automatically isolates heavy users
# 6
Max-min Fairness: Single Buffer
• Allocate bandwidth equally among all users
• If anyone doesn’t need its full share, redistribute the excess:
  maximize the minimum bandwidth provided to any flow not receiving its request
• To increase the smallest allocation, we must take from a larger one
• Consider a fluid example
• Ex: Compute the max-min fair allocation for a set of four sources with demands 2, 2.6, 4, 5 when the resource has a capacity of 10 (see the sketch below):
  • s1 = 2
  • s2 = 2.6
  • s3 = s4 = 2.7
• Max-min fairness is more complicated in a network.
# 7
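The allocation rule on this slide is mechanical: grant the smallest demands first and re-divide the leftover among the rest. A minimal sketch in Python (our own illustration; the name max_min_fair is not from the slides):

```python
def max_min_fair(demands, capacity):
    """Max-min fair allocation: satisfy the smallest demands first,
    then split the remaining capacity equally among the rest."""
    alloc = [0.0] * len(demands)
    remaining = sorted(range(len(demands)), key=lambda i: demands[i])
    while remaining:
        share = capacity / len(remaining)   # equal share of what is left
        i = remaining[0]
        if demands[i] > share:              # every remaining demand exceeds the share
            for j in remaining:
                alloc[j] = share
            break
        alloc[i] = demands[i]               # smallest demand fits: grant it fully
        capacity -= demands[i]
        remaining.pop(0)
    return alloc

# The slide's example: demands 2, 2.6, 4, 5 and capacity 10
print(max_min_fair([2, 2.6, 4, 5], 10))     # [2, 2.6, 2.7, 2.7]
```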
FCFS / FIFO Queuing
• Simplest algorithm, widely used
• Scheduling is done using the first-in first-out (FIFO) discipline
• All flows are fed into the same queue
# 8
FIFO Queuing (cont’d)
• First-In First-Out (FIFO) queuing
  • first arrival, first transmission
• Completely dependent on arrival time
• No notion of priority or allocated buffers
  • no space in the queue: packet discarded
• Flows can interfere with each other; no isolation; malicious monopolization is possible
• Various hacks exist for priority, random drops, ... (see the drop-tail sketch below)
# 9
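To make the FIFO slide concrete, here is a minimal drop-tail sketch in Python (our illustration; the class name DropTailFIFO is not from the slides):

```python
from collections import deque

class DropTailFIFO:
    """One shared queue for all flows; a packet arriving to a full
    queue is discarded, whichever flow it belongs to."""
    def __init__(self, capacity):
        self.q = deque()
        self.capacity = capacity

    def enqueue(self, packet):
        if len(self.q) >= self.capacity:
            return False                 # queue full: drop the arrival
        self.q.append(packet)
        return True

    def dequeue(self):
        # first arrival, first transmission
        return self.q.popleft() if self.q else None
```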
Priority Queuing
• A priority index is assigned to each packet upon arrival
• Packets are transmitted in ascending order of priority index
  • priorities 0 through n-1
  • priority 0 is always serviced first
  • priority i is serviced only if queues 0 through i-1 are empty
• The highest priority class has the lowest delay, highest throughput, and lowest loss
• Lower priority classes may be starved by higher priority ones
• Preemptive and non-preemptive versions exist (a sketch follows below)
# 10
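A minimal non-preemptive sketch of the discipline above (ours), using Python's heapq; popping the smallest (priority, arrival) pair serves priority i only when classes 0 through i-1 are empty, and keeps each class FIFO:

```python
import heapq
import itertools

class PriorityQueuing:
    def __init__(self):
        self.heap = []
        self.arrivals = itertools.count()   # arrival order breaks ties FIFO

    def enqueue(self, priority, packet):
        heapq.heappush(self.heap, (priority, next(self.arrivals), packet))

    def dequeue(self):
        # lowest priority index = highest priority class
        return heapq.heappop(self.heap)[2] if self.heap else None
```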
Priority Queuing
(Figure: high-priority and low-priority queues feeding one transmission link; each queue discards packets when full; the low-priority queue is served only when the high-priority queue is empty.)
# 11
Round Robin: Architecture
• Round robin: scan the class queues, serving one packet from each class that has a non-empty queue
(Figure: flows 1-3 feeding a round-robin scheduler and one transmission link.)
• Hardware requirement: jump to the next non-empty queue
# 12
Round Robin Scheduling
• Round robin: scan the class queues, serving one packet from each class that has a non-empty queue
# 13
Round Robin (cont’d)
• Characteristics:
  • classify incoming traffic into flows (source-destination pairs)
  • round-robin among flows (a sketch follows below)
• Problems:
  • ignores packet length (addressed by GPS, fair queuing)
  • inflexible allocation of weights (addressed by WRR, WFQ)
• Benefits:
  • protection against heavy users (why?)
# 14
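One pass of the round-robin scan, sketched in Python (our illustration; a hardware scheduler would instead keep a pointer that jumps to the next non-empty queue):

```python
from collections import deque

def round_robin_pass(queues):
    """Visit the flow queues in fixed order, serving one packet
    from each queue that is non-empty."""
    sent = []
    for q in queues:
        if q:
            sent.append(q.popleft())
    return sent

flows = [deque(['a1', 'a2']), deque([]), deque(['c1'])]
print(round_robin_pass(flows))   # ['a1', 'c1']: the empty queue is skipped
```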
Weighted Round-Robin
• Weighted round-robin
  • a different weight wi per flow
  • flow j can send wj packets in a period
  • period of length Σ wj (see the sketch below)
• Disadvantages
  • variable packet size
  • fair only over time scales longer than a period
    • if a connection has a small weight, or the number of connections is large, this may lead to long periods of unfairness
# 15
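A sketch of one WRR period, assuming per-flow FIFOs and integer weights (our illustration):

```python
from collections import deque

def wrr_period(queues, weights):
    """Flow j may send up to weights[j] packets, so a full period is
    at most sum(weights) transmissions; note packet lengths are ignored."""
    sent = []
    for q, w in zip(queues, weights):
        for _ in range(w):
            if not q:
                break
            sent.append(q.popleft())
    return sent

flows = [deque(['a1', 'a2', 'a3']), deque(['b1'])]
print(wrr_period(flows, [2, 1]))   # ['a1', 'a2', 'b1']
```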
DRR (Deficit RR) Algorithm
• Choose a quantum of bits to serve from each connection, in order
• Each connection has a deficit counter (to store credits), with initial value zero
• For each HoL (Head of Line) packet:
  • credit := credit + quantum
  • if the packet size is ≤ credit: send it and save the excess;
    otherwise save the entire credit
  • if there is no packet to send, reset the counter (to remain fair)
• Easier to implement than other fair policies, e.g. WFQ (see the sketch below)
# 16
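A sketch of one DRR round following the rules above; queues hold packet sizes, and a flow visited with an empty queue has its counter reset (our illustration):

```python
from collections import deque

def drr_round(queues, counters, quantum):
    """Serve each connection in order: add the quantum to its deficit
    counter, send head-of-line packets while they fit, keep the excess."""
    sent = []
    for j, q in enumerate(queues):
        if not q:
            counters[j] = 0              # no packet to send: reset, to remain fair
            continue
        counters[j] += quantum
        while q and q[0] <= counters[j]: # HoL packet fits the accumulated credit
            counters[j] -= q[0]
            sent.append((j, q.popleft()))
    return sent
```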
Deficit Round-Robin
• DRR can handle variable packet sizes
• Quantum size: 1000 bytes
• Queues (head-of-line packet first): A = 1500, 1000, 2000; B = 300, 500; C = 1200
• 1st round
  • A’s count: 1000 (HoL packet of 1500 bytes does not fit; nothing sent)
  • B’s count: 200 (served twice: the 300- and 500-byte packets)
  • C’s count: 1000 (HoL packet of 1200 bytes does not fit; nothing sent)
• 2nd round
  • A’s count: 500 (1500-byte packet served)
  • B’s count: 0 (queue empty, counter reset)
  • C’s count: 800 (1200-byte packet served)
# 17
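Running the drr_round sketch from the previous slide on this example reproduces the counts above:

```python
from collections import deque

A = deque([1500, 1000, 2000])   # head-of-line packet first
B = deque([300, 500])
C = deque([1200])
counters = [0, 0, 0]

print(drr_round([A, B, C], counters, 1000))  # [(1, 300), (1, 500)]: B served twice
print(counters)                              # [1000, 200, 1000]
print(drr_round([A, B, C], counters, 1000))  # [(0, 1500), (2, 1200)]
print(counters)                              # [500, 0, 800]
```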
DRR: Performance
• Handles variable-length packets fairly
• Backlogged sources share bandwidth equally
• Preferably, packet size < quantum
• Simple to implement
  • similar to round robin
# 18
Generalized Processor Sharing
# 19
Generalized Processor Sharing (GPS)
• The methodology:
  • assume we can send infinitesimal packets
    • single bits
  • perform round robin
    • at the bit level
• An idealized policy to split bandwidth
• GPS is not implementable
• Used mainly to evaluate and compare real approaches (a fluid sketch follows below)
• Has weights that give relative frequencies
# 20
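Because GPS is a fluid model, its completion times can be computed directly. A minimal equal-weight sketch for packets that all arrive at time 0 (our illustration; it reproduces Example 1 on the next slide):

```python
def gps_finish_times(sizes, rate=1.0):
    """Fluid GPS, equal weights: all backlogged flows are served
    simultaneously at rate/n, where n is the number still active."""
    finish = [0.0] * len(sizes)
    order = sorted(range(len(sizes)), key=lambda i: sizes[i])
    t = served = 0.0        # 'served' = fluid already delivered per active flow
    for k, i in enumerate(order):
        n_active = len(order) - k
        t += (sizes[i] - served) * n_active / rate   # drain the smallest backlog
        served = sizes[i]
        finish[i] = t
    return finish

print(gps_finish_times([10, 20, 30]))   # [30.0, 50.0, 60.0]
```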
GPS: Example 1
(Figure: GPS service for flows A, B, C. Packets of size 10, 20, and 30 arrive at time 0; with equal weights they complete at times 30, 50, and 60.)
# 21
GPS: Example 2
(Figure: queue size vs. time for flows A, B, C. Packets arrive at time 0 (size 15), time 5 (size 20), and time 15 (size 10).)
# 22
GPS: Example 3
(Figure: queue size vs. time for flows A, B, C. Packets arrive at time 0 (size 15), time 5 (size 20), time 15 (size 10), and time 18 (size 15).)
# 23
GPS: Adding Weights
• Flow j has weight wj
• The output rate of flow j, Rj(t), obeys:

$$R_j'(t) = \frac{d}{dt} R_j(t) = \frac{w_j}{\sum_{k \in \mathrm{ACTIVE}(t)} w_k}$$

• For the unweighted case (wj = 1):

$$R_j'(t) = \frac{d}{dt} R_j(t) = \frac{1}{|\mathrm{ACTIVE}(t)|}$$
# 24
Fairness Using GPS
• Non-backlogged connections receive what they ask for
• Backlogged connections share the remaining bandwidth in proportion to their assigned weights
• Every backlogged connection i receives a service rate of:

$$R_i'(t) = \frac{w_i}{\sum_{j \in \mathrm{ACTIVE}(t)} w_j}$$

where ACTIVE(t) is the set of backlogged flows at time t.
# 25
GPS: Measuring Unfairness
• No packet discipline can be as fair as GPS
  • while a packet is being served, we are unfair to the others
• The degree of unfairness can be bounded
• Define: work_A(i, a, b) = number of bits transmitted for flow i in the interval [a, b] by policy A
• Absolute fairness bound for policy S:
  max (work_GPS(i, a, b) − work_S(i, a, b))
• Relative fairness bound for policy S:
  max (work_S(i, a, b) − work_S(j, a, b)),
  assuming both i and j are backlogged in [a, b]
# 26
GPS: Measuring Unfairness
• Assume fixed packet sizes and round robin
  • relative bound: 1
  • absolute bound: 1 − 1/n
    • n is the number of flows
• Challenge: handle variable-size packets.
# 27
Weighted Fair Queueing
# 28
GPS to WFQ
• We can’t implement GPS
• So, let’s see how to emulate it
• We want to be as fair as possible
• But also have an efficient implementation
# 29
# 30
GPS vs WFQ (equal length)
• Queues 1 and 2 each hold one unit-size packet at t=0.
• GPS: both packets are served at rate 1/2, and both complete service at t=2.
• Packet-by-packet system (WFQ): queue 1 is served first at rate 1 while the packet from queue 2 waits, completing at t=1; then queue 2 is served at rate 1, completing at t=2.
# 31
GPS vs WFQ (different length)
• At t=0, queue 1 holds a packet of size 1 and queue 2 a packet of size 2.
• GPS: both packets are served at rate 1/2; queue 1’s packet completes at t=2, after which queue 2’s packet is served at rate 1, completing at t=3.
• WFQ: the packet from queue 1 is served first at rate 1 while queue 2’s packet waits, completing at t=1; then queue 2 is served at rate 1, completing at t=3.
# 32
GPS vs WFQ (weights)
• Weights: queue 1 = 1, queue 2 = 3; each queue holds one unit-size packet at t=0.
• GPS: the packet from queue 1 is served at rate 1/4 and the packet from queue 2 at rate 3/4; once queue 2 empties, queue 1 is served at rate 1.
• WFQ: queue 2 is served first at rate 1 while the packet from queue 1 waits; then queue 1 is served at rate 1.
# 33
Completion Times
• Emulating a policy:
  • assign each packet p a value time(p)
  • send packets in order of time(p)
• FIFO:
  • arrival of a packet p from flow j:
    last = last + size(p);
    time(p) = last;
  • a perfect emulation...
# 34
Round Robin Emulation
• Round robin (equal-size packets)
  • arrival of packet p from flow j:
    last(j) = last(j) + 1;
    time(p) = last(j);
  • an idle queue is not handled properly!
• Fix:
  • sending packet q: round = time(q)
  • arrival: last(j) = max{round, last(j)} + 1;
    time(p) = last(j);
• What kind of low-level scheduling is this?
# 35
Round Robin Emulation
• Round robin (equal-size packets)
  • sending packet q:
    round = time(q); flow_num = flow(q);
  • arrival of packet p from flow j:
    last(j) = max{round, last(j)}
    IF (j ≤ flow_num) AND (last(j) = round)
      THEN last(j) = last(j) + 1
    time(p) = last(j);
• What kind of low-level scheduling is this?
# 36
GPS Emulation (WFQ)
• Arrival of p from flow j:
  • last(j) = max{last(j), round} + size(p);
  • using weights:
    last(j) = max{last(j), round} + size(p)/wj;
• How should we compute the round?
• We would like to simulate GPS (see the sketch below):
  • let x be a period of time in which the number of active flows does not change
  • round(t+x) = round(t) + x/B(t)
  • B(t) = number of active flows
  • a flow j is active while round(t) < last(j)
# 37
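A sketch of this bookkeeping (our illustration): on_arrival stamps a packet with last(j), and advance_round moves the round through dt units of real time, removing flows from the active set as the round passes their last(j):

```python
def on_arrival(last, active, round_v, j, size, w=1.0):
    # last(j) = max{last(j), round} + size(p)/wj; flow j becomes active
    last[j] = max(last.get(j, 0.0), round_v) + size / w
    active.add(j)
    return last[j]

def advance_round(round_v, active, last, dt):
    # round grows at rate 1/B(t); B(t) shrinks each time the round
    # reaches some active flow's last(j)
    while dt > 0 and active:
        nxt = min(last[j] for j in active)     # next GPS departure (virtual time)
        need = (nxt - round_v) * len(active)   # real time to reach it
        if need > dt:
            return round_v + dt / len(active), active
        round_v, dt = nxt, dt - need
        active = {j for j in active if last[j] > round_v}
    return round_v, active

# Start of the next slide's trace: flows 1 and 2 arrive at t=0 (unit packets)
last, active = {}, set()
on_arrival(last, active, 0.0, 1, 1)
on_arrival(last, active, 0.0, 2, 1)
r, active = advance_round(0.0, active, last, 1.0)
print(r)   # 0.5, i.e. round(1) = 1/2
```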
WFQ: Example (GPS view)
(Figure: the round number as a function of time; round(1) = 1/2, round(2) = 5/6, round(3) = 7/6, round(4) = 11/6.)
Note that if, in a time interval, the round progresses by an amount x, then every non-empty buffer is drained by x during that interval.
# 38
WFQ: Example (equal size)
Time 0: packets arrive for flows 1 & 2.
  last(1) = 1; last(2) = 1; Active = 2
  round(0) = 0; send 1
Time 1: a packet arrives for flow 3.
  round(1) = 1/2; Active = 3
  last(3) = 3/2; send 2
Time 2: a packet arrives for flow 4.
  round(2) = 5/6; Active = 4
  last(4) = 11/6; send 3
Time 2+2/3: round = 1; Active = 2
Time 3:     round = 7/6; send 4
Time 3+2/3: round = 3/2; Active = 1
Time 4:     round = 11/6; Active = 0
# 39
WFQ: Example (GPS view)
(Figure, repeated: the round number as a function of time; round(1) = 1/2, round(2) = 5/6, round(3) = 7/6, round(4) = 11/6.)
Note that if, in a time interval, the round progresses by an amount x, then every non-empty buffer is drained by x during that interval.
# 40
Worst-Case Fair Weighted Fair Queuing (WF²Q)
# 41
Worst-Case Fair Weighted Fair Queuing (WF²Q)
• WF²Q fixes an unfairness problem in WFQ:
  • WFQ: among the packets waiting in the system, pick the one that will finish service first under GPS
  • WF²Q: among the packets waiting in the system that have already started service under GPS, select the one that will finish service first under GPS (see the sketch below)
• WF²Q provides service closer to GPS
  • the difference in packet service time is bounded by the maximum packet size
# 42
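The WF²Q selection rule fits in a few lines. A sketch (ours), assuming each head-of-line packet carries its GPS virtual start and finish times:

```python
def wf2q_select(head_packets, round_v):
    """head_packets: (virtual_start, virtual_finish, flow_id) tuples.
    Eligible = GPS service has already started (start <= current round);
    among those, send the packet that finishes first under GPS.
    Dropping the eligibility test gives plain WFQ."""
    eligible = [p for p in head_packets if p[0] <= round_v]
    return min(eligible, key=lambda p: p[1]) if eligible else None
```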
# 43
# 44
# 45
# 46
Multiple Buffers
# 47
Buffers
(Figure: buffering around the switch fabric.)
Buffer locations:
• Input ports
• Output ports
• Inside the fabric
• Shared memory
• A combination of all of these
# 48
Input Queuing
(Figure: input-queued switch: queues at the inputs, then the fabric, then the outputs.)
# 49
Input Buffer: Properties
• Input speed of queue: no more than the input line rate
• Needs an arbiter (running N times faster than the input)
• FIFO queue
• Head of Line (HoL) blocking
• Utilization (see the simulation sketch below):
  • with random destinations, limited to 2 − √2 ≈ 58.6%
  • due to HoL blocking
# 50
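The utilization claim can be checked with a small Monte Carlo run (our code and parameters): every input stays saturated, head-of-line destinations are uniformly random, and each output serves one contender per slot while the losers stay blocked:

```python
import random

def hol_throughput(n=32, slots=20000):
    hol = [random.randrange(n) for _ in range(n)]   # HoL destination per input
    served = 0
    for _ in range(slots):
        contenders = {}
        for inp, out in enumerate(hol):
            contenders.setdefault(out, []).append(inp)
        for out, inps in contenders.items():
            winner = random.choice(inps)      # one packet per output per slot
            hol[winner] = random.randrange(n) # winner reveals its next packet
            served += 1                       # losers keep blocking: HoL effect
    return served / (slots * n)

print(hol_throughput())   # roughly 0.6 for n=32; tends to 2 - sqrt(2) = 0.586
```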
Head of Line Blocking
# 51
# 52
# 53
Head of Line Blocking
(Figure: stadium concession analogy; labels: Stadium, Beer/Soda/Chips, Kwiky Mart.)
# 54
Output Queuing
(Figure: the same stadium analogy, with queuing at the outputs; labels: Stadium, Beer/Soda/Chips, Kwiky Mart.)
# 55
Head of Line Blocking
(Figure: input FIFOs feeding outputs A, B, C; queued sequence B C A C B.)
# 56
Head of Line Blocking
(Figure: the same switch a few slots later; queued sequence B A C B C, with A C B already delivered.)
# 57
Head of Line Blocking
(Figure: later still; queued sequence B B C C B, with A C B C A already delivered.)
# 58
VOQ: Virtual Output Queues
(Figure: each input keeps separate queues for outputs A, B, C; an arbiter (ARB) schedules transfers; arriving sequence B C A C B.)
# 59
VOQ: Virtual Output Queues
(Figure: the same switch later; packets sorted into per-output queues; arriving sequence B A C B C.)
# 60
VOQ: Virtual Output Queues
(Figure: later still; arriving sequence B C C A B.)
# 61
Performance Issue with Crossbars
(Figure: saturation throughput of an input-queued crossbar levels off at 58.6%.)
Source: M. J. Karol, M. G. Hluchyj, S. P. Morgan, “Input Versus Output Queueing on a Space-Division Packet Switch”, IEEE Transactions on Communications, Vol. COM-35, No. 12, December 1987, p. 1353.
# 62
Overcoming HoL Blocking: Look-Ahead
• The fabric looks ahead into the input buffer for packets that could be transferred were they not blocked by the head of line.
• The improvement depends on the depth of the look-ahead.
• Taken to the limit, this corresponds to virtual output queues, where each input port has a buffer for each output port (see the sketch below).
# 63
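A minimal data-structure sketch of the virtual-output-queue idea (ours); the arbiter itself is represented only by the requests it would consume:

```python
from collections import deque

class VOQInput:
    """One input port: a separate FIFO per output, so a packet can only
    wait behind packets headed to the same output."""
    def __init__(self, n_outputs):
        self.voq = [deque() for _ in range(n_outputs)]

    def enqueue(self, packet, output):
        self.voq[output].append(packet)

    def requests(self):
        # outputs this input would like to send to in the next slot
        return [out for out, q in enumerate(self.voq) if q]

    def dequeue(self, output):
        # called when the arbiter grants this input the given output
        return self.voq[output].popleft()
```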
Input Queuing
(Figure: input queuing with virtual output queues at each input.)
# 64
Overcoming HoL Blocking: Output Expansion
• Each output port is expanded to L output ports.
• The fabric can transfer up to L packets to the same output instead of just one.
Karol, Hluchyj, and Morgan, IEEE Transactions on Communications, 1987: 1347-1356.
# 65
Input Queuing: Output Expansion
(Figure: input-queued fabric with each output expanded to L ports.)
# 66
Output Queuing
The “ideal”
(Figure: arriving packets labeled 1 and 2 pass straight through the fabric and queue at their destination output ports.)
# 67
Output Buffer: Properties
• No HoL problem
• The output queue needs to run faster than the input lines
• Must provide for up to N packets arriving to the same queue
  • solution: limit the number of input lines that can be destined to one output
# 68
Shared Memory
(Figure: fabric, shared memory, fabric.)
A common pool of buffers divided into linked lists indexed by output port number (see the sketch below).
# 69
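A sketch of that organization, assuming fixed-size buffer slots threaded by a next-slot array (all names ours); it deliberately leaves open the question on the next slide of how to divide the space between sessions:

```python
class SharedMemoryBuffer:
    """Common pool of slots; each output port owns a linked list
    threaded through the pool, plus one shared free list."""
    def __init__(self, n_slots, n_ports):
        self.data = [None] * n_slots
        self.nxt = list(range(1, n_slots)) + [-1]   # free list: 0 -> 1 -> ... -> -1
        self.free = 0
        self.head = [-1] * n_ports
        self.tail = [-1] * n_ports

    def enqueue(self, port, packet):
        if self.free == -1:
            return False                              # pool exhausted: drop
        slot, self.free = self.free, self.nxt[self.free]
        self.data[slot], self.nxt[slot] = packet, -1
        if self.head[port] == -1:
            self.head[port] = slot                    # list was empty
        else:
            self.nxt[self.tail[port]] = slot
        self.tail[port] = slot
        return True

    def dequeue(self, port):
        slot = self.head[port]
        if slot == -1:
            return None
        self.head[port] = self.nxt[slot]
        if self.head[port] == -1:
            self.tail[port] = -1
        packet, self.data[slot] = self.data[slot], None
        self.nxt[slot], self.free = self.free, slot   # return slot to free list
        return packet
```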
Shared Memory: Properties
• Packets are stored in memory as they arrive
• Resource sharing
• Easy to implement priorities
• Memory must be accessed at a speed equal to the sum of the input (or output) speeds
• Issue: how to divide the space between the sessions?
# 70