Fair Scheduling in a Wireless LAN


Distributed Fair Scheduling in Wireless Networks
Nitin Vaidya
Computer Science
Texas A&M University
[email protected]
http://www.cs.tamu.edu/faculty/vaidya
© Nitin Vaidya 2000
Acknowledgements
• Work partly performed while at Microsoft Research
• Joint work with Victor Bahl of Microsoft Research and Seema Gupta of Texas A&M
Medium Access Control
• Wireless transmissions by multiple nodes can interfere
– Need medium access control (MAC)
• Many proposals
– Centralized
– Distributed
Centralized Protocols
• Base station coordinates access to the wireless channel
[Figure: nodes 1 through n, each communicating with a central base station]
Distributed Protocols
• All nodes have identical responsibilities
[Figure: nodes 1 through n, all peers on a shared wireless LAN]
Distributed Protocols
• Token-ring protocol
– need to form a ring
– nodes need to be aware of their neighbors
– not “fully” distributed
• In fully distributed protocols, a node need not be aware of the identity/existence of other nodes
– for instance, CSMA
Comparison
• In centralized protocols, if a node cannot talk to the base station, it cannot transmit to any other node
• In centralized protocols, the base station needs to keep track of the state of other nodes
• Hard to use failure-prone nodes as coordinators in centralized protocols
Fairness
• Packets to be transmitted belong to several flows
• Each flow is assigned a weight
• Bandwidth assigned to each backlogged flow is proportional to its weight
Fairness
• Generalized Processor Sharing (GPS)
[Figure: three backlogged flows with weights 2, 1, 1; GPS gives them bandwidth shares of 1/2, 1/4, 1/4]
Fair Queueing
• Many centralized fair queueing protocols exist
– WFQ, WF2Q, SCFQ, SFQ, ...
[Figure: flows 1 through n feeding a fair queueing scheduler at a shared output link]
Our Objectives
• Fully distributed fair scheduling protocol
– All nodes have identical responsibilities
• Nodes do not need to be aware of each other’s state (or existence)
• Maintain compatibility/resemblance with an existing standard
– IEEE 802.11 Distributed Coordination Function
Related Work
• Centralized fair queueing on wired links [Bennett, Demers, Parekh]
• Centralized fair queueing in wireless environments, taking location-dependent errors into account [Bharghavan, Ramanathan, Zhang]
• Real-time scheduling [Sobrinho]
• Priority-based scheduling
Proposed Approach
Combination of
• IEEE 802.11 Distributed Coordination Function
• A centralized fair queueing protocol
IEEE 802.11
Distributed Coordination Function
• When transmitting a packet, choose a backoff interval in the range [0, cw]
– cw is the contention window
• Count down the backoff interval when the medium is idle
• When the backoff interval reaches 0, transmit (sketched below)
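A minimal sketch of this countdown rule in Python; the function names and the node representation are ours, not part of the 802.11 standard:

```python
import random

def choose_backoff(cw):
    """Pick a backoff interval uniformly from [0, cw] slots."""
    return random.randint(0, cw)

def step_slot(node, medium_idle):
    """One slot of DCF countdown: decrement only while the medium is idle;
    when the counter reaches 0 the node transmits its pending packet."""
    if medium_idle and node["backoff"] > 0:
        node["backoff"] -= 1
    return node["backoff"] == 0   # True => transmit in this slot

# Example: a node contending with cw = 31, as in the example that follows
node = {"backoff": choose_backoff(31)}
```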
DCF Example
[Timeline figure, cw = 31]
• Node 1 picks B1 = 25; node 2 picks B2 = 20
• Node 2 counts down to 0 first and transmits its data; node 1 pauses with B1 = 5 remaining
• Node 2 then waits with a new backoff B2 = 15 while node 1 finishes counting down and transmits
Generic Centralized
Fair Queueing Protocol
• Maintain a virtual clock at the coordinator
[Figure: a round-robin server cycling over backlogged flows 1 through n; rounds 1 through 6 marked on a time axis]
Virtual clock is proportional to the “number of rounds”
Virtual Clock
• To service a packet, length / (delta * weight) rounds are required, where delta is the granularity of the round-robin scheduler
• To service a packet, length / weight virtual time units are required
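– e.g., a unit-length packet on a flow of weight 0.5 takes 1 / 0.5 = 2 virtual time units, while the same packet on a flow of weight 0.25 takes 4 (the values used in the SCFQ example later)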
Generic Centralized
Fair Queueing Protocol
• Tag each packet on each flow with start and finish tags
– start (finish) tags represent the “round number” at which the packet should ideally start (finish) transmission
• Transmit packets in order of their
– finish tags (WFQ, WF2Q, SCFQ)
– start tags (SFQ)
Virtual Clock
• Designed to keep track of the “rounds” of service completed by the scheduler
• Two techniques for updating the virtual clock
– as a function of the weights of backlogged flows
– using the start or finish tag of the most recent packet in service
• The second technique is more suitable for a distributed implementation
– no need to know the state of flows at other nodes
Self-Clocked Fair Queueing [Golestani]
(Centralized Algorithm)
• Virtual clock updated to be equal to the finish tag of the packet in service
• When a packet arrives on a flow
– start tag set to the maximum of
• current virtual time
• finish tag of the previous packet in the flow
– finish tag = start tag + packet length / flow weight
(see the sketch below)
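These tagging rules translate directly into a few lines of code. The following is a minimal sketch (the class and method names are ours, not Golestani's), using the same numbers as the SCFQ example that follows:

```python
class SCFQ:
    """Minimal sketch of self-clocked fair queueing tag maintenance."""

    def __init__(self):
        self.virtual_clock = 0.0
        self.last_finish = {}   # finish tag of each flow's previous packet

    def on_arrival(self, flow, length, weight):
        """Tag an arriving packet: start = max(virtual clock, previous
        finish tag of this flow); finish = start + length / weight."""
        start = max(self.virtual_clock, self.last_finish.get(flow, 0.0))
        finish = start + length / weight
        self.last_finish[flow] = finish
        return start, finish

    def on_service(self, finish_tag):
        """The virtual clock jumps to the finish tag of the packet in service."""
        self.virtual_clock = finish_tag

q = SCFQ()
s1, f1 = q.on_arrival("flow1", length=1.0, weight=0.25)   # f1 = 4.0
s2, f2 = q.on_arrival("flow2", length=1.0, weight=0.50)   # f2 = 2.0
q.on_service(min(f1, f2))   # serve the packet with the smallest finish tag
```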
SCFQ Example
[Figure: two flows with unit-length packets. Flow 1 (weight 0.25) has finish tags F = 4 and 8; flow 2 (weight 0.50) has finish tags F = 2 and 10. Serving packets in finish-tag order steps the virtual clock through 0, 2, 4, 8, 10]
Proposed Protocol
• Based on IEEE 802.11 Distributed Coordination Function (DCF)
• Emulates the centralized SCFQ protocol
– but not exactly
– can also emulate SFQ [Goyal] similarly
Proposed Protocol
• Borrows from SCFQ
– procedure for maintaining the virtual clock
– determination of start/finish tags
• Borrows from DCF in IEEE 802.11
– packet with the smallest finish tag is determined using the backoff interval mechanism
Distributed Fair Scheduling
(DFS)
• When transmitting a packet, tag it with its finish time
finish tag = start tag + Scaling_Factor * packet length / flow weight
• In DFS, the chosen scale for virtual time affects algorithm behavior
Distributed Fair Scheduling
(DFS)
• When a node hears a packet on the channel, the node updates its local virtual clock
– similar to centralized SCFQ
DFS
• Choose a backoff interval proportional to (finish tag - virtual time)
• Node with the smallest backoff interval transmits
– effectively, the node with the smallest finish tag transmits (sketched below)
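A minimal sketch of this backoff computation, assuming the interval is simply floored to a whole number of slots (consistent with the worked examples that follow); ties and any additional randomization in the full protocol are ignored:

```python
import math

def dfs_finish_tag(start_tag, length, weight, scaling_factor):
    """finish tag = start tag + Scaling_Factor * packet length / flow weight."""
    return start_tag + scaling_factor * length / weight

def dfs_backoff_slots(finish_tag, virtual_clock):
    """Backoff proportional to (finish tag - virtual time), floored to slots."""
    return int(math.floor(finish_tag - virtual_clock))

# Scaling Factor = 4.0, L = 0.1 (the collision-free example below)
f1 = dfs_finish_tag(0.0, 0.1, 0.25, 4.0)    # 1.6
f2 = dfs_finish_tag(0.0, 0.1, 0.50, 4.0)    # 0.8
slots = dfs_backoff_slots(f1, 0.0), dfs_backoff_slots(f2, 0.0)   # (1, 0)
```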
Collisions
• Collisions occur when two nodes count down to 0 simultaneously
– in centralized implementations, ties can be broken without causing “collisions”
• The version of DFS evaluated here reduces the contention window to CollisionWindow on the first collision
– window increases exponentially on further collisions (see the sketch below)
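A sketch of this collision-handling rule, assuming the exponential increase is a doubling; the CollisionWindow value of 4 is illustrative, not the constant used in the evaluation:

```python
COLLISION_WINDOW = 4   # illustrative value for the slide's CollisionWindow

def window_after_collision(n_collisions, base=COLLISION_WINDOW):
    """Contention window after the n-th successive collision (n >= 1):
    reduced to CollisionWindow on the first, doubled on each further one."""
    return base * (2 ** (n_collisions - 1))

# window_after_collision(1) == 4, window_after_collision(2) == 8, ...
```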
DFS Example
Scaling Factor = 1.0
Flow 1 (weight 0.25): L = 0.1, S = 0, F = 0 + 1.0 * 0.1/0.25 = 0.4
Flow 2 (weight 0.50): L = 0.1, S = 0, F = 0 + 1.0 * 0.1/0.5 = 0.2
Backoff interval of flow 1 = (0.4 - 0) = 0.4 ====> 0 slots
Backoff interval of flow 2 = (0.2 - 0) = 0.2 ====> 0 slots
Both nodes count down to 0 together ====> COLLISION!
DFS Example
Scaling Factor = 4.0
Flow 1 (weight 0.25): L = 0.1, S = 0, F = 0 + 4.0 * 0.1/0.25 = 1.6
Flow 2 (weight 0.50): L = 0.1, S = 0, F = 0 + 4.0 * 0.1/0.5 = 0.8
Backoff interval of flow 1 = (1.6 - 0) = 1.6 ====> 1 slot
Backoff interval of flow 2 = (0.8 - 0) = 0.8 ====> 0 slots
NO collision
Scaling Factor: Trade-Off
• A larger scaling factor results in larger backoff intervals
– higher overhead
• A larger scaling factor results in fewer collisions
– a larger scaling factor causes less “compression” due to the mapping from continuous virtual tags to a discrete number of slots
Other Observations
• DFS attempts to emulate SCFQ, but differs in several ways
– “priority” reversal
– non-work-conserving schedule due to MAC overhead
“Priority” Reversal
• Suppose three nodes choose backoff intervals of 25, 25, and 26 and start counting down together
• The first two nodes will collide; they will retry with backoff intervals of, say, 2 and 3 slots, respectively
• In the meantime, node 3 has counted down to 1 and will transmit first, even though its backoff interval was originally the largest
DFS Non-Work Conserving
[Figure: transmission timelines of centralized SCFQ vs. DFS; DFS leaves the channel idle while backoff intervals count down, so its schedule is not work-conserving]
Impact of Small Weights
finish tag = start tag + Scaling_Factor * packet length / flow weight
• If flow weights are small, finish tags and backoff intervals become large
• More time is wasted counting down the backoff intervals
Alternative Mappings
[Figure: chosen backoff interval plotted against desired backoff interval for the alternative mappings]
Alternative Mappings
• Advantage
– smaller backoff intervals
– less time wasted in counting down when the weights of all backlogged flows are small
• Disadvantage
– possibility of a greater number of collisions
– backoff intervals that are different on a linear scale may become identical on the compressed scale (see the sketch below)
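The slide does not give the mapping functions themselves (EXP and SQRT mappings are named later in the talk); the following sketch uses one plausible square-root compression with an illustrative constant, just to make the advantage and the disadvantage concrete:

```python
import math

def linear_map(desired):
    """Identity mapping: chosen backoff equals desired backoff (in slots)."""
    return int(desired)

def sqrt_map(desired, k=10.0):
    """Square-root compression of the desired interval
    (k is an illustrative constant, not a value from the paper)."""
    return int(k * math.sqrt(desired))

# Advantage: a large desired interval shrinks (400 slots -> 200 slots)
shrunk = sqrt_map(400)                    # 200
# Disadvantage: intervals distinct on a linear scale can become identical
assert sqrt_map(100) == sqrt_map(101)     # both map to 100
```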
Impact of Very Large Weights
• Leads to more collisions
– avoid very large weights
finish tag = start tag + Scaling_Factor * packet length / flow weight
Performance Evaluation
• Using a modified ns-2 simulator
• Long-term fairness measured based on the throughput of each flow
• Errors not considered presently
Throughput / Weight Variation Across Flows: 16 Flows
[Plot: throughput/weight vs. flow destination]
Throughput / Weight Variation Across Flows: 32 Flows
[Plot: throughput/weight vs. flow destination]
Fairness Index
[Definition of the fairness index used in the evaluation]
Comparison with 802.11
[Plot: aggregate throughput vs. number of flows]
Comparison with 802.11
[Plot: fairness index vs. number of flows]
Impact of Scaling Factor
[Plot: fairness index vs. Scaling Factor * 512]
Impact of Scaling Factor
[Plot: aggregate throughput vs. Scaling Factor * 512]
Variable Packet Size
[Plot: throughput/weight vs. flow destination, for three flows with differing packet sizes]
Short-Term Fairness
[Plot: number of packets received over a sliding observation window vs. the window's start time]
Benefit of Alternative Mappings
• When at least one backlogged flow has a high weight, the EXP and SQRT mappings provide no benefit
• When all backlogged flows have small weights, these mappings improve throughput compared to the linear mapping
Impact of Location-Dependent
Errors
• Location-dependent errors degrade fairness
• The centralized Long-Term Fairness Server [Ramanathan] can be easily incorporated into DFS
– we are presently evaluating this approach for handling errors
Multi-Hop Wireless Networks
• How to define fairness?
– traditional definition does not apply
• How to develop distributed fair scheduling algorithms?
– DFS can be applied to multi-hop networks
– resultant behavior not yet characterized
Conclusions
• Simulation results show that DFS achieves good long-term fairness
• Short-term fairness appears good, but more work is needed to evaluate it adequately
On-going / Future Work
• Definition and evaluation of short-term fairness
– need to take the non-work-conserving nature of DFS into account
• Alternative collision resolution mechanisms
• Improving aggregate throughput without sacrificing fairness
• Fairness in ad hoc (multi-hop) networks
• Handling transmission errors
References
• Vaidya and Bahl, Microsoft Research technical report, August 1999
• Vaidya, Bahl, and Gupta, Texas A&M University technical report, March 2000
Thank you!
Questions?
[email protected]
Additional slides not used in the presentation
Centralized SCFQ
Scaling Factor = 1.0
Flow 1 (weight 0.25): L = 0.1, S = 0, F = 0 + 1.0 * 0.1/0.25 = 0.4
Flow 2 (weight 0.50): L = 0.1, S = 0, F = 0 + 1.0 * 0.1/0.5 = 0.2
Virtual clock advances 0, 0.2, 0.4 as the packets are served in finish-tag order
Centralized SCFQ
Scaling Factor = 4.0
Flow 1 (weight 0.25): L = 0.1, S = 0, F = 0 + 4.0 * 0.1/0.25 = 1.6
Flow 2 (weight 0.50): L = 0.1, S = 0, F = 0 + 4.0 * 0.1/0.5 = 0.8
[Timeline marked 0, 0.2, 0.4 as on the previous slide: the service order is unchanged, since scaling all tags by the same factor preserves their relative order]