Algorithmic Foundations of
Ad-hoc Networks
Andrea W. Richa, Arizona State U.
Rajmohan Rajaraman, Northeastern U.
ETH Zurich, Summer Tutorial 2004
Thanks!
 I would like to thank Roger Wattenhofer, ETH Zurich, and Rajmohan Rajaraman, Northeastern U., for some of the slides/figures used in this presentation
What are ad-hoc networks?
 Sometimes there is no infrastructure
- remote areas, ad-hoc meetings, disaster areas
- cost can also be an argument against an infrastructure
 Wireless communication (why?)
 Sometimes not every station can hear every other station
- Data needs to be forwarded in a "multihop" manner
[Figure: three nodes A, B, C illustrating multihop forwarding]
An ad-hoc network as a graph
 A node is a mobile station
 All nodes are equal (are they?)
 Edge (u,v) is in the graph iff node v can "hear" node u
 These arcs can have weights that represent the signal strength
 Directed vs. undirected graphs
[Figure: example graphs on nodes N1–N5]
 Multi-hop
An ad-hoc network as a graph
 Close-by nodes have MAC issues such as hidden/exposed terminal problems
 Optional: links are symmetric (undirected graph)
 Optional: the graph is Euclidean, i.e., there is a link between two nodes iff the distance between the nodes is less than a certain distance threshold r
What is a Hop?
 Broadcast within a certain range
- Variable range depending on power control capabilities
 Interference among contending transmissions
- MAC layer contention resolution protocols, e.g., IEEE 802.11, Bluetooth
 Packet radio network model (PRN)
- Model each hop as a "broadcast hop" and consider interference in the analysis
 Multihop network model
- Assume an underlying MAC layer protocol
- The network is a dynamic interconnection network
 In practice, both views are important
Mobile (and Multihop) Ad-Hoc Networks
(MANET)
 Nodes move
[Figure: network topology at time t1 vs. time t2, with good and weak links marked]
 First and foremost issue: Routing
 Mobility: dynamic network
Overview: Part I
 General overview
 Models
- Ad-hoc network models; mobility models
 Routing paradigms
- Some basic tricks to improve routing
 Clustering
- Dominating sets, connected dominating sets, clustering under mobility
 Geometric Routing Algorithms
Overview: Part II
 MAC protocols
 Power and Topology Control
- connectivity, energy efficiency and interference
 Algorithms for Sensor Networks
- MAC protocols, synchronization, and query/stream processing
Literature
• Wireless Networking work
- often heuristic in nature
- few provable bounds
- experimental evaluations in (realistic) settings
• Distributed Computing work
- provable bounds
- often worst-case assumptions and general graphs
- often complicated algorithms
- assumptions not always applicable to wireless
Performance Measures
• Time
• Communication
- path length (# of hops)
- number of messages
  (these two are correlated)
• Memory requirements
• Adaptability
• Energy consumption
• Other QoS measures
What is different from wired networks?
 Mobility: highly dynamic scenario
 Distance no longer matters, the number of hops does
 Omnidirectional communication
- Next generation: unidirectional antennas?
 Interference
- Collisions
 Energy considerations
Degrees of Mobility/Adaptability
 Static
 Limited mobility
- a few nodes may fail, recover, or be moved (sensor networks)
- tough example: "throw a million nodes out of an airplane"
 Highly adaptive/mobile
- tough example: "a hundred airplanes/vehicles moving at high speed"
- impossible (?): "a million mosquitoes with wireless links"
 "Nomadic/viral" model:
- disconnected network of highly mobile users
- example: "virus transmission in a population of Bluetooth users"
Unit-disk Graph Model
 We are given a set V of nodes in the plane (points with coordinates).
 The unit disk graph UDG(V) is defined as an undirected graph (with E being a set of undirected edges). There is an edge between two nodes u,v iff the Euclidean distance between u and v is at most 1.
 Think of the unit distance as the maximum transmission range.
 All nodes are assumed to have the same communication range.
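A minimal sketch (not from the tutorial) of how UDG(V) could be built from node coordinates; the node set V and the unit range are as defined above, everything else (names, example coordinates) is illustrative:

from itertools import combinations
from math import dist   # Euclidean distance (Python 3.8+)

def unit_disk_graph(coords, r=1.0):
    """Build UDG(V): an undirected edge {u, v} iff dist(u, v) <= r."""
    return {(u, v) for u, v in combinations(coords, 2)
            if dist(coords[u], coords[v]) <= r}

# Example: four nodes spaced 0.8 apart on a line -> a 3-hop chain
V = {"N1": (0.0, 0.0), "N2": (0.8, 0.0), "N3": (1.6, 0.0), "N4": (2.4, 0.0)}
print(unit_disk_graph(V))   # three edges: N1-N2, N2-N3, N3-N4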
Fading Channel Model
 Signal strength decreases quadratically with distance (assuming free space)
 "Noise" (i.e., signals being transmitted by other nearby processors) may reduce the strength of a signal
 No sharp threshold for communication range (or interference range)
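As a small aside (my own illustration, not from the slides), the free-space assumption corresponds to the Friis relation P_rx = P_tx · G_tx · G_rx · (λ / 4πd)², so doubling the distance cuts the received power by a factor of four; the numbers below are arbitrary:

from math import pi

def friis_received_power(p_tx, d, wavelength, g_tx=1.0, g_rx=1.0):
    """Free-space model: received power falls off with the square of the distance."""
    return p_tx * g_tx * g_rx * (wavelength / (4 * pi * d)) ** 2

p1 = friis_received_power(p_tx=0.1, d=10.0, wavelength=0.125)   # ~2.4 GHz carrier
p2 = friis_received_power(p_tx=0.1, d=20.0, wavelength=0.125)
print(p1 / p2)   # 4.0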
Variations of the Unit-disk Model
 Quasi-unit disk graph model:
- Every node can receive a transmission from another node within distance αr
- No node can receive a message from another node at distance greater than r
- A node may or may not receive a message from another node at distance in (αr, r]
Interference Models
 Basic Model:
- Each node v has a communication range r and an interference range given by (1+c)r, where c is a positive constant
- If node u is within distance r from node v, then v can receive a message from u
- If node u is within distance d in (r, (1+c)r] from v, then a message sent by u can interfere with other messages being received at node v, but v cannot "hear" the message from u
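A toy classifier (my own sketch) of what a receiver experiences under this basic model; r and c are the parameters defined above, the coordinates are made up:

from math import dist

def link_status(u, v, r=1.0, c=0.5):
    """Classify node v's view of a transmission by node u (basic interference model)."""
    d = dist(u, v)
    if d <= r:
        return "message received"        # within communication range r
    if d <= (1 + c) * r:
        return "interference only"       # within interference range (1+c)r
    return "no effect"

print(link_status((0, 0), (0.8, 0)))   # message received
print(link_status((0, 0), (1.3, 0)))   # interference only
print(link_status((0, 0), (2.0, 0)))   # no effect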
Mobility Models
 Random Walk (including its many derivatives): A simple mobility model based on random directions and speeds.
 Random Waypoint: A model that includes pause times between changes in destination and speed.
 Probabilistic Version of the Random Walk: A model that utilizes a set of probabilities to determine the next position of an MN (mobile node).
 Random Direction: A model that forces MNs to travel to the edge of the simulation area before changing direction and speed.
 Boundless Simulation Area: A model that converts a 2D rectangular simulation area into a torus-shaped simulation area.
 Gauss-Markov: A model that uses one tuning parameter to vary the degree of randomness in the mobility pattern.
 City Section: A simulation area that represents streets within a city.
 All of the models above fail to capture some intrinsic characteristics of mobility (e.g., group movement). What is a good mobility model?
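For concreteness, a bare-bones random waypoint generator (my own sketch; the area, speed range, and the omission of pause times are arbitrary simplifications):

import random
from math import dist

def random_waypoint(pos, area=(100.0, 100.0), speed=(1.0, 5.0), trips=3):
    """Yield positions: pick a random destination and speed, walk there, repeat."""
    for _ in range(trips):
        dest = (random.uniform(0, area[0]), random.uniform(0, area[1]))
        v = random.uniform(*speed)                 # speed in distance units per tick
        while dist(pos, dest) > v:
            frac = v / dist(pos, dest)             # move exactly v toward dest
            pos = (pos[0] + frac * (dest[0] - pos[0]),
                   pos[1] + frac * (dest[1] - pos[1]))
            yield pos
        pos = dest                                  # arrive (pause time omitted)
        yield pos

for p in random_waypoint((50.0, 50.0), trips=1):
    print(p)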
Routing Paradigms
 We will spend the next few slides reviewing some of
the “classic” routing algorithms that have been
proposed for ad-hoc networks
Routing in (Mobile) Ad-hoc Networks
 changing, arbitrary topology
 nodes are not 100% reliable
 may need routing tables to find path to destination
 related problem: name resolution (finding “closest”
item of certain type)
Basic Routing Schemes
 Proactive Routing:
- "keep routing information current at all times"
- good for static networks
- examples: distance vector (DV), link state (LS) algorithms, link reversal (e.g., TORA) algorithms
 Reactive Routing:
- "find a route to the destination only after a request comes in"
- good for more dynamic networks and low communication
- examples: AODV, dynamic source routing (DSR)
 Hybrid Schemes:
- "keep some information current"
- example: Zone Routing Protocol (ZRP)
Reactive Routing: Flooding
 What is Routing?
 “Routing is the act of moving information across an
internetwork from a source to a destination.“
(CISCO)
Reactive Routing: Flooding
 The simplest form of routing is "flooding": a source s sends the message to all its neighbors; when a node other than the destination t receives the message for the first time, it re-sends it to all its neighbors.
+ simple
– a node might see the same message more than once. (How often?)
– what if the network is huge but the target t sits just next to the source s?
 We need a smarter routing algorithm
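A minimal simulation of this flooding rule over an adjacency list (my own illustration; the graph and the duplicate-suppression bookkeeping are assumptions, not part of the slides):

from collections import deque

def flood(graph, source, dest):
    """Every node except dest re-sends a newly seen message to all its neighbors."""
    seen = {source}
    queue = deque([source])
    transmissions = 0
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            transmissions += 1            # u transmits to v (possibly a duplicate)
            if v not in seen:             # v forwards only on first reception
                seen.add(v)
                if v != dest:
                    queue.append(v)
    return dest in seen, transmissions

graph = {"s": ["a", "b"], "a": ["s", "b", "t"], "b": ["s", "a", "c"],
         "c": ["b", "t"], "t": ["a", "c"]}
print(flood(graph, "s", "t"))   # (True, 10): t is reached, with redundant transmissions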
Reactive Routing: Other algorithms
 Ad-hoc On-Demand Distance Vector (AODV) [Perkins-Royer 99]
 Dynamic Source Routing (DSR) [Johnson-Maltz 96]
 Temporally Ordered Routing Algorithm (TORA) [Park-Corson 97]
 If the source does not know a path to the destination, it issues a discovery request
 DSR caches routes to the destination
 Easier to avoid routing loops
Proactive Routing 1: Link-State Routing
Protocols
 Link-state routing protocols are a preferred interior gateway protocol (IGP) within an autonomous system (think: service provider) in the Internet
 Idea: periodic notification of all nodes about the complete graph
Proactive Routing 1: Link-State Routing
Protocols
 Routers then forward a message along (for example) the shortest path in the graph
+ message follows shortest path
– every node needs to store the whole graph, even links that are not on any path
– every node needs to send and receive messages that describe the whole graph regularly
Proactive Routing 2: Distance Vector
Routing Protocols
 The predominant method for wired networks
 Idea: each node stores a routing table that has an entry for each destination (destination, distance, neighbor); each node maintains the distance to every other node
[Figure: node s queries "t?" among nodes a, b, c, t; replies "t=1", "t=2". Routing table at s (Dir = next-hop neighbor, Dst = distance):]
Dest | Dir | Dst
a    | a   | 1
b    | b   | 1
c    | b   | 2
t    | b   | 2
Proactive Routing 2: Distance Vector
 If a router notices a change in its neighborhood or receives an update message from a neighbor, it updates its routing table accordingly and sends an update to all its neighbors
+ message follows shortest path
+ only send updates when the topology changes
– most topology changes are irrelevant for a given source/destination pair
– a single edge/node failure may require most nodes to change most of their entries
– every node needs to store a big table: O(n² log n) bits
– temporary loops
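A compressed sketch of this update rule (my own rendering); the table mirrors the (destination, distance, neighbor) entries of the previous slide, and the example names are made up:

def dv_update(my_table, neighbor, neighbor_table, link_cost=1):
    """Merge a neighbor's distance vector; my_table maps dest -> (distance, next_hop)."""
    changed = False
    for dest, (d, _) in neighbor_table.items():
        new_dist = d + link_cost
        if dest not in my_table or new_dist < my_table[dest][0]:
            my_table[dest] = (new_dist, neighbor)   # shorter path found via this neighbor
            changed = True
    return changed    # if True, send the updated table to all neighbors

# s hears b's table and learns routes to c and t via b
table_s = {"a": (1, "a"), "b": (1, "b")}
table_b = {"a": (1, "a"), "c": (1, "c"), "t": (1, "t")}
print(dv_update(table_s, "b", table_b), table_s)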
Proactive Routing 2: Distance Vector
 A single edge/node failure may require most nodes to change most of their entries
[Figure: a single edge connecting two halves of the network ("half of the nodes" on each side)]
Discussion of Classic Routing Protocols
 There is no "optimal" routing protocol; the choice of the routing protocol depends on the circumstances. Of particular importance is the mobility/data ratio.
 On the other hand, designing a protocol whose complexity (number of messages, elapsed time) is proportional to the distance between the source and destination nodes would be desirable
Trick 1: Radius Growth
 Problem of flooding (and similarly of other algorithms): the destination is two hops away but we flood the whole network
 Idea: Flood with growing radius; use a time-to-live (TTL) tag that is decreased at every node. For the first flood initialize the TTL with 1, then 2, then 3 (really?), … When the destination is found, how do we stop?
 Alternative idea: Flood very slowly (nodes wait some time before they forward the message) – when the destination is found, a quick flood is initiated that stops the previous flood
+ Tradeoff: time vs. number of messages
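The growing-radius idea, as a sketch (mine): repeat TTL-bounded floods with an increasing TTL until the destination answers. The doubling schedule below is one common choice; the 1, 2, 3, … schedule from the slide works the same way:

def ttl_flood(graph, source, dest, ttl):
    """Flood at most ttl hops away from source; report whether dest was reached."""
    frontier, seen = {source}, {source}
    for _ in range(ttl):
        frontier = {v for u in frontier for v in graph[u] if v not in seen}
        seen |= frontier
        if dest in seen:
            return True
    return False

def expanding_ring_search(graph, source, dest, max_ttl=32):
    ttl = 1
    while ttl <= max_ttl:
        if ttl_flood(graph, source, dest, ttl):   # cheap whenever dest is nearby
            return ttl
        ttl *= 2                                   # grow the radius and retry
    return None

graph = {"s": ["a"], "a": ["s", "b"], "b": ["a", "t"], "t": ["b"]}
print(expanding_ring_search(graph, "s", "t"))   # 4: the first tried TTL that covers 3 hops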
Trick 2: Source Routing
 Problem: nodes have to store routing information for others
 Idea: The source node stores the whole path to the destination; the source attaches the path to every message, so nodes on the path simply chop off themselves and send the message to the next node.
 "Dynamic Source Routing" discovers a new path with flooding (the message stores its history; if it arrives at the destination, it is sent back to the source along the same path)
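A tiny illustration (my own) of the "chop off yourself" forwarding step; the message format is an assumption:

def forward_source_routed(msg):
    """The message carries the remaining path; each hop pops itself off the front."""
    me = msg["path"].pop(0)                 # this node removes itself from the path
    if msg["path"]:
        print(f"{me}: forwarding to {msg['path'][0]}, remaining path {msg['path']}")
    else:
        print(f"{me}: I am the destination")

msg = {"payload": "hello", "path": ["s", "a", "b", "c", "t"]}
for _ in range(5):                          # simulate the five hops s-a-b-c-t
    forward_source_routed(msg)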
Trick 2: Source Routing
+ Nodes only store the paths they need
– Not efficient if mobility/data ratio is high
– Asymmetric Links?
Trick 3: Asymmetric Links
 Problem: The destination cannot send the newly
found path to the source because at least one of the
links used was unidirectional.
Trick 3: Asymmetric Links
 Idea: The destination needs to find the source by
flooding again, the path is attached to the flooded
message. The destination has information about the
source (approximate distance, maybe even
direction), which can be used.
Trick 4: Re-use/cache routes
 This idea comes in many flavors:
- Clearly a source s that has already found a route "s-a-b-c-t" does not need to flood again in order to find a route to node c.
- Also, if node u receives a flooding message that searches for node v, and node u knows how to reach v, u might answer the flooding initiator directly.
Trick 5: Local search
 Problem: When trying to forward a message on path "s-a-u-c-t", node u recognizes that node c is not a neighbor anymore.
 Idea: Instead of not delivering the message and sending a NAK to s, node u could try to search for t itself, maybe even by flooding.
 Some algorithms hope that node t is still within the same distance as before, so they can do a flooding with the TTL set to the original distance (plus one)
Trick 5: Local search
 If u does not find t, maybe the predecessor of u (a) does?
– One can construct examples where this works, but of course also examples where it does not.
Trick 6: Hierarchy
 Problem: Proactive algorithms in particular do not scale with the number of nodes; each node needs to store big tables
 Idea: In the Internet there is a hierarchy of nodes, i.e., all nodes with the same IP prefix are in the same direction. One could use the same trick in ad-hoc networks
+ Well, if it happens that the ad-hoc nodes with the same address prefix are indeed together in the same area, hierarchical routing is a good idea.
– There are not too many applications where this is the case. Nodes are mobile, after all.
Trick 7: Clustering
 Idea: Group the ad-hoc nodes into clusters (if you
want hierarchically). One node is the head of the
cluster. If a node in the cluster sends a message it
sends it to the head which sends it to the head of the
destination cluster which sends it to the destination
[Figure: nodes grouped into clusters and super clusters, connected to the Internet]
Trick 7: Clustering
+ Simplifies operation for most nodes (that are not
cluster heads); this is particularly useful if the nodes
are heterogeneous and the cluster heads are
“stronger” than others.
+ Brings network density down to constant, if
implemented properly (e.g., no two cluster heads can
communicate directly with each other)
– A level of indirection adds overhead.
– There will be more contention at the cluster heads.
Trick 8: Implicit Acknowledgement
 Problem: Node u only knows that neighbor node v
has received a message if node v sends an
acknowledgement.
 Idea: If v is not the destination, v needs to forward
the message to the next node w on the path. If links
are symmetric (and they need to be in order to send
acknowledgements anyway), node u will
automatically hear the transmission from v to w
(unless node u has interference with another
message).
Trick 8: Implicit Acknowledgement
 Can we set up the MAC layer such that interference
is impossible?
+ Finally a good trick
Trick 9: Smarter updates
 Sequence numbers for all routing updates
+ Avoids loops and inconsistencies
+ Assures in-order execution of all updates
 Decrease of update frequency
 Store time between first and best announcement of a
path
Trick 9: Smarter updates
 Inhibit update if it seems to be unstable (based on
the stored time values)
+ Less traffic
 Implemented in Destination Sequenced Distance
Vector (DSDV)
Trick 10: Use other distance metrics
 Problem: The number of hops is fine for some
applications, but for ad-hoc networks other metrics
might be better, for example: Energy, Congestion,
Successful transmission probability, Interference*,
etc.
[Figure: example topology with nodes N1–N9, S1, S2, R1, R2]
Trick 10: Use other distance metrics
How do we compute interference in an online manner?
*Interference: a receiving node is also in the
receiving area of another transmission.
Routing in Ad-Hoc Networks
 10 tricks ⇒ 2^10 routing algorithms
 In reality there are almost that many!
 Q: How good are these routing algorithms?!? Any hard results?
 A: Almost none! The method of choice is simulation…
 Perkins: "if you simulate three times, you get three different results"
Clustering
 disjoint or overlapping
 flat or hierarchical
 internal and border nodes/edges
Flat Clustering
Hierarchical Clustering
Hierarchical Clustering
Routing by Clustering
Routing by One-Level Clustering [Baker-Ephremides 81]
 Gateway nodes maintain routes within the cluster
 Routing among gateway nodes along a spanning tree or using DV/LS algorithms
 Hierarchical clustering (e.g., [Lauer 86, Ramanathan-Steenstrup 98])
Hierarchical Routing
 The nodes organize themselves into a hierarchy
 The hierarchy imposes a natural addressing scheme
 Quasi-hierarchical routing: Each node maintains
- the next-hop node on a path to every other level-j cluster within its level-(j+1) ancestral cluster
 Strict-hierarchical routing: Each node maintains
- the next level-j cluster on a path to every other level-j cluster within its level-(j+1) ancestral cluster
- the boundary level-j clusters in its level-(j+1) clusters and their neighboring clusters
Example: Strict-Hierarchical Routing
 Each node maintains:
- Next hop node on a min-cost path to every other node in cluster
- Cluster boundary node on a min-cost path to neighboring cluster
- Next hop cluster on the min-cost path to any other cluster in
supercluster
 The cluster leader participates in computing this information and
distributing it to nodes in its cluster
Space Requirements and Adaptability
 Each node has O(mC) entries
- m is the number of levels
- C is the maximum, over all j, of the number of level-j clusters in a level-(j+1) cluster
 If the clustering is regular, the number of entries per node is O(m·n^{1/m})
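A quick sanity check of this bound (my own worked example): for n = 10^6 nodes and m = 3 levels of regular clustering,

  O(m · n^{1/m}) = O(3 · (10^6)^{1/3}) = O(3 · 100) = O(300) entries per node,

compared with Θ(n) = Θ(10^6) entries for a flat shortest-path routing table.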
 Restructuring the hierarchy:
- Cluster leaders split/merge clusters while maintaining size bounds (O(1) gap between upper and lower bounds)
- Sometimes new addresses need to be generated
- Need location management (name-to-address map)
Space Requirements for Routing
 Distance Vector: O(n log n) bits per node, O(n² log n) total
 Routing via spanning tree: O(n log n) total, very non-optimal
 Optimal (i.e., shortest path) routing requires Θ(n²) bits total on almost all graphs [Buhrman-Hoepman-Vitanyi 00]
 Almost optimal routing (with stretch < 3) requires Θ(n²) bits on some graphs [Fraigniaud-Gavoille 95, Gavoille-Gengler 97, Gavoille-Perennes 96]
 Tradeoff between stretch and space [Peleg-Upfal 89]:
- upper bound: O(n^{1+1/k}) memory with stretch O(k)
- lower bound: Θ(n^{1+1/(2k+4)}) bits with stretch O(k)
- about O(n^{3/2}) with stretch 5 [Eilam-Gavoille-Peleg 00]
Proactive Routing: Link Reversal Routing
 An interesting proactive routing protocol with low
overhead.
 Idea: For each destination, all communication links
are directed, such that following the arrows always
brings you to the destination.
 Example (with only one destination D):
[Figure: a destination-oriented DAG for destination D; node labels (e.g., 1, 3, 4, …, 17) with every edge directed from the higher label to the lower label]
Link Reversal Routing
 Note that positive labels can be chosen such that
higher labels point to lower labels (and the
destination label D = 0).
Link Reversal Routing: Mobility
 Links may fail/disappear: if nodes still have out-links ⇒ no problem!
 New links may emerge: just insert them such that there are no loops (use the labels to figure that out)
[Figure: the same DAG after some links (marked X) have failed; every remaining node still has an outgoing link toward D]
Link Reversal Routing: Mobility
 Only problem: a non-destination node becomes a sink ⇒ reverse all its links!
 Not shown in the example: if you reverse all links, then increase the label.
 The recursive process can be quite tedious…
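A compact simulation (my own sketch) of full link reversal in the label-based formulation above: a non-destination node whose label is below all of its neighbors' labels is a sink, and it "reverses all links" by raising its label above its neighbors:

def full_link_reversal(labels, neighbors, dest):
    """labels: node -> height; an edge points from the higher label to the lower one."""
    changed = True
    while changed:
        changed = False
        for v in labels:
            if v == dest:
                continue
            if all(labels[v] < labels[u] for u in neighbors[v]):      # v is a sink
                labels[v] = max(labels[u] for u in neighbors[v]) + 1  # reverse all links
                changed = True
    return labels

# D is the destination (label 0); node "a" has lost its downhill link and must reverse.
neighbors = {"D": ["b"], "b": ["D", "a"], "a": ["b"]}
labels = {"D": 0, "b": 5, "a": 3}     # a's only neighbor has a higher label -> a is a sink
print(full_link_reversal(labels, neighbors, dest="D"))   # {'D': 0, 'b': 5, 'a': 6}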
[Figure: a non-destination node has become a sink (links marked X have failed) and must reverse all of its links]
Link Reversal Routing: Analysis
 In a ring network with n nodes, the deletion of a single link (close to the sink) makes the algorithm reverse like crazy: indeed, a single link failure may start a reversal process that takes n rounds, and n links reverse themselves n² times!
 That's why some researchers proposed partial link reversal, where nodes only reverse links that were not reversed before.
Link Reversal Routing: Analysis
 However, it was shown by Busch et al. (SPAA'03) that in the extreme case partial link reversal is also not efficient; it may in fact be even worse than regular link reversal.
 Still, some popular protocols (TORA) are based on link reversal.
Hybrid Schemes
• Zone Routing [Haas 97]
• every node knows a zone of radius r around it
• nodes at distance exactly r are called peripheral
• bordercasting: "sending a message to all peripheral nodes"
• global route search; bordercasting reduces the search space
• the radius r determines the trade-off
• maintain up-to-date routes in the zone and cache routes to external nodes
Clustering
Overview
 Motivation
 Dominating Set
 Connected Dominating Set
 The Greedy Algorithm
 The Tree Growing Algorithm
 The Local Randomized Greedy Algorithm
 The k-Local Algorithm
 The “Dominator!” Algorithm
Discussion
 Flooding is key component of (many) proposed
algorithms, including most prominent ones (AODV,
DSR)
 At least flooding should be done efficiently
 We have also briefly seen how clustering can be
used for point-to-point routing
Finding a Destination by Flooding
Finding a Destination Efficiently
Backbone
 Idea: Some nodes become backbone nodes
(gateways). Each node can access and be accessed
by at least one backbone node.
 Routing:
1. If source is not a
gateway, transmit
message to gateway
Backbone Contd.
2. The gateway acts as a proxy source and routes the message on the backbone to the gateway of the destination.
3. Transmission from the gateway to the destination.
(Connected) Dominating Set
 A Dominating Set (DS) is a subset of nodes such that each node is either in the DS or has a neighbor in the DS.
 A Connected Dominating Set (CDS) is a connected DS, that is, there is a path between any two nodes in the CDS that does not use nodes outside the CDS.
 A CDS is a good choice for a backbone.
 It might be favorable to have few nodes in the CDS. This is known as the Minimum CDS problem.
Formal Problem Definition: M(C)DS
 Input: We are given an (arbitrary) undirected graph.
 Output: Find a Minimum (Connected) Dominating Set, that is, a (C)DS with a minimum number of nodes.
 Problems:
- M(C)DS is NP-hard
- Find a (C)DS that is "close" to minimum (approximation)
- The solution must be local (global solutions are impractical for a mobile ad-hoc network) – the topology of the graph "far away" should not influence the decision of who belongs to the (C)DS
Greedy Algorithm for (C)DS
 Idea: Greedily choose "good" nodes into the dominating set.
 Black nodes are in the DS
 Grey nodes are neighbors of nodes in the DS
 White nodes are not yet dominated; initially all nodes are white.
 Algorithm: Greedily choose a white or grey node that colors the most white nodes.
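The same greedy rule written out as a centralized sketch (graph representation and tie-breaking are my own choices):

def greedy_dominating_set(adj):
    """adj: node -> set of neighbors. Repeatedly add the node that newly dominates
    the most still-white nodes."""
    white = set(adj)                          # not yet dominated
    ds = set()
    while white:
        v = max(adj, key=lambda v: len(({v} | adj[v]) & white))
        ds.add(v)                             # color v black
        white -= {v} | adj[v]                 # v and its white neighbors leave the white set
    return ds

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
print(greedy_dominating_set(adj))             # {3, 4}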
Greedy Algorithm for Dominating Sets
 One can show that this gives a log Δ approximation, where Δ is the maximum node degree of the graph.
 One can also show that there is no polynomial-time algorithm with a better approximation factor unless NP can be solved in n^{O(log log n)} deterministic time.
CDS: The “too simple tree growing”
algorithm
 Idea: start with the root, and then greedily choose a neighbor of the tree that dominates as many new nodes as possible
 Black nodes are in the CDS
 Grey nodes are neighbors of nodes in the CDS
 White nodes are not yet dominated; initially all nodes are white.
CDS: The “too simple tree growing”
algorithm
 Start: Choose a node of maximum degree and make it the root of the CDS, that is, color it black (and its white neighbors grey).
 Step: Choose a grey node with a maximum number of white neighbors and color it black (and its white neighbors grey).
Example of the “too simple tree growing”
algorithm
 |CDS| = n/2 + 1;  |MCDS| = 4
[Figure: example graph with labelled nodes u and v]
Tree Growing Algorithm
 Idea: Don't scan one but two nodes!
 Alternative step: Choose a grey node and its white neighbor node with a maximum sum of white neighbors, and color both black (and their white neighbors grey).
Analysis of the tree growing algorithm
 Theorem: The tree growing algorithm finds a connected dominating set of size |CDS| ≤ 2(1 + H(Δ)) · |DS_OPT|.
 DS_OPT is a (not necessarily connected) minimum dominating set
 Δ is the maximum node degree in the graph
 H is the harmonic function with H(n) = Θ(log n)
Analysis of the tree growing algorithm
 In other words, the connected dominating set of the tree growing algorithm is at most an O(log Δ) factor worse than an optimum minimum dominating set (which is NP-hard to compute).
 With a lower-bound argument (reduction to set cover) one can show that a better approximation factor is impossible, unless NP can be solved in time n^{O(log log n)}.
Proof Sketch
 The proof is done with amortized analysis.
 Let Su be the set of nodes dominated by u in DS_OPT, or u itself. If a node is dominated by more than one node, we put it in only one of the sets.
 We charge the nodes in the graph for each node we color black. In particular, we charge all the newly colored grey nodes. Since we color a node grey at most once, it is charged at most once.
 We show that the total charge on the vertices in an Su is at most 2(1 + H(Δ)), for any u.
Charge on Su
 Initially |Su| = u0.
 Whenever we color some nodes of Su, we call this a step.
 The number of white nodes in Su after step i is ui.
 After step k there are no more white nodes in Su.
Charge on Su
 In the first step, u0 – u1 nodes are colored (grey or black). Each such vertex gets a charge of at most 2/(u0 – u1).
 After the first step, node u becomes eligible to be colored (as part of a pair with one of the grey nodes in Su). If u is not chosen in step i (with a potential to paint ui nodes grey), then we have found a better (pair of) node. That is, the charge to any of the new grey nodes in step i in Su is at most 2/ui.
Adding up the charges in Su
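A hedged reconstruction (mine) of the summation this charging scheme yields, using u_0, …, u_k as defined above, H(0) = 0, and the inequality (a − b)/a ≤ H(a) − H(b):

  charge(S_u) ≤ (u_0 − u_1) · 2/(u_0 − u_1)  +  Σ_{i=2..k} (u_{i−1} − u_i) · 2/u_{i−1}
              ≤ 2 + 2 · Σ_{i=2..k} (H(u_{i−1}) − H(u_i))
              = 2 + 2 · (H(u_1) − H(u_k))
              ≤ 2 · (1 + H(Δ)),

since u_k = 0 and u_1 ≤ Δ.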
Discussion of the tree growing algorithm
 We have an extremely simple algorithm that is asymptotically optimal unless NP can be solved in n^{O(log log n)} time. And even the constants are small.
 Are we happy?
 Not really. How do we implement this algorithm in a real mobile network? How do we figure out where the best grey/white pair of nodes is? How slow is this algorithm in a distributed setting?
 We need a fully distributed algorithm. Nodes should only consider local information.
Local Randomized Greedy
 A local randomized greedy algorithm, LRG [Jia-Rajaraman-Suel 01]
- Computes an O(log n) approximation of MDS in O(log² n) time with high probability
- Generalizes to the weighted case and to multiple coverage
Local Randomized Greedy - LRG
 Each round of LRG consists of these steps:
- Rounded span calculation: Each node y calculates its span, the number of yet-uncovered nodes that y covers; it rounds up its span to the nearest power of a base b, e.g. 2.
- Candidate selection: A node announces itself as a candidate if it has the maximum rounded span among all nodes within distance 2.
- Support calculation: Each uncovered node u calculates its support number s(u), which is the number of candidates that cover u.
- Dominator selection: Each candidate v selects itself as a dominator with probability 1/med(v), where med(v) is the median support of all the uncovered nodes that v covers.
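A loose, centralized simulation of one LRG round (my own sketch; real LRG is distributed, and the two-hop neighborhood and coverage computations below are simplifications):

import random
from statistics import median

def lrg_round(adj, uncovered, dominators, b=2):
    """One synchronous round: rounded spans, candidates, supports, randomized selection."""
    def covers(v):                      # uncovered nodes that v would cover
        return ({v} | adj[v]) & uncovered

    def round_up(x):                    # round up to the nearest power of the base b
        p = 1
        while p < x:
            p *= b
        return p

    span = {v: len(covers(v)) for v in adj}
    rspan = {v: round_up(span[v]) if span[v] else 0 for v in adj}
    two_hop = {v: {v} | adj[v] | set().union(*[adj[u] for u in adj[v]]) for v in adj}

    candidates = [v for v in adj
                  if span[v] > 0 and rspan[v] == max(rspan[w] for w in two_hop[v])]
    support = {u: sum(1 for c in candidates if u in ({c} | adj[c])) for u in uncovered}
    chosen = [v for v in candidates
              if random.random() < 1.0 / median(support[u] for u in covers(v))]
    for v in chosen:
        dominators.add(v)
    for v in chosen:
        uncovered -= {v} | adj[v]

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
uncovered, doms = set(adj), set()
while uncovered:                        # terminates with high probability
    lrg_round(adj, uncovered, doms)
print(doms)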
Performance Characteristics of LRG
 Terminates in O(log n log Δ) rounds whp
 Approximation ratio is O(log Δ) in expectation and O(log n) whp
 Running time is independent of the diameter, and the approximation ratio is asymptotically optimal
 Tradeoff between approximation ratio and running time:
- Terminates in O(log n log Δ) rounds whp (the constant in the O-notation depends on 1/ε)
- Approximation ratio is (1 + ε)·H(Δ) in expectation
The k-local Algorithm
Input: Local Graph
Phase A: Distributed linear program (a relatively high degree gives a high value) → Fractional Dominating Set
Phase B: Probabilistic algorithm → Dominating Set
Phase C: Connect the DS by a "tree" of "bridges" → Connected Dominating Set
[Figure: example fractional values on the nodes, e.g. 0.1, 0.2, 0.3, 0.5, 0.8]
Result of the k-local Algorithm
 Distributed Approximation:
Theorem: E[|DS|] ≤ O(k · Δ^{2/k} · log Δ · |MDS|)
 The approximation factor depends on the number of rounds k (the locality)
 Distributed complexity: O(k²) rounds; O(k²·Δ) messages of size O(log Δ)
 If k = log Δ, then Δ^{2/k} is constant, giving an O(log² Δ) approximation on MDS; running time O(log² Δ)
 [Kuhn-Wattenhofer, PODC'03]
Unit Disk Graph
 We are given a set V of nodes in the plane (points
with coordinates).
 The unit disk graph UDG(V) is defined as an
undirected graph (with E being a set of undirected
edges). There is an edge between two nodes u,v iff
the Euclidean distance between u and v is at most 1.
 Think of the unit distance as the maximum
transmission range.
Unit Disk Graph
 We assume that the unit disk graph UDG is connected (that is, there is a path between each pair of nodes)
 The unit disk graph has many edges.
 Can we drop some edges in the UDG to reduce complexity and interference?
The “Dominator!” Algorithm
 For the important special case of Euclidean unit disk graphs there is a simple marking algorithm that does the job.
 We make the simplifying assumption that MAC layer issues are resolved: two nodes u,v within transmission range 1 both receive all of each other's transmissions. There is no interference, that is, the transmissions are locally always completely ordered.
The “Dominator!” Algorithm
 Initially no node is in the connected dominating set CDS.
1. If a node u has not yet received a DOMINATOR message from any other node, node u transmits a DOMINATOR message.
2. If node v receives a DOMINATOR message from node u, then node v is dominated by node u.
Example
The “Dominator!” Algorithm Continued
3. If a node w is dominated by two dominators u and v, and node w has not yet received a message "I am dominated by u and v", then node w transmits "I am dominated by u and v" and enters the CDS.
 And since this is still not quite enough…
4. If a neighboring pair of nodes w1 and w2 is dominated by dominators u and v, respectively, and they have not yet received a message "I am dominated by u and v" or "We are dominated by u and v", then nodes w1 and w2 both transmit "We are dominated by u and v" and enter the CDS.
Results
 The "Dominator!" algorithm produces a connected dominating set.
 The algorithm is completely local
 Each node only has to transmit one or two messages of constant size.
 The connected dominating set is asymptotically optimal, that is, |CDS| = O(|MCDS|)
Results
 If nodes in the CDS calculate the Gabriel Graph GG(UDG(CDS)), the CDS graph is also planar
 The routes in GG(UDG(CDS)) are "competitive".
 But: is the UDG Euclidean assumption realistic?
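For reference, a small sketch (mine) of the Gabriel graph test: an edge {u, v} is kept iff no third node lies strictly inside the disk whose diameter is the segment uv:

from math import dist

def gabriel_graph(coords, edges):
    """Keep edge (u, v) iff no other node w is inside the circle with diameter uv."""
    kept = set()
    for u, v in edges:
        mid = ((coords[u][0] + coords[v][0]) / 2, (coords[u][1] + coords[v][1]) / 2)
        radius = dist(coords[u], coords[v]) / 2
        if all(dist(coords[w], mid) >= radius for w in coords if w not in (u, v)):
            kept.add((u, v))
    return kept

coords = {"a": (0.0, 0.0), "b": (1.0, 0.0), "c": (0.5, 0.1)}
print(gabriel_graph(coords, [("a", "b"), ("a", "c"), ("b", "c")]))
# {('a', 'c'), ('b', 'c')}: the long edge a-b is pruned because c lies inside its disk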
Overview of (C)DS Algorithms
Algorithm      | Worst-Case Guarantees              | Local (Distributed) | General Graphs | CDS
Greedy         | Yes, optimal unless NP in …        | No                  | Yes            | No
Tree Growing   | Yes, optimal unless NP in …        | No                  | Yes            | Yes
LRG            | Yes, optimal unless NP in …        | Yes                 | Yes            | ?
k-local        | Yes, but with add. approx. factor  | Yes (k-local)       | Yes            | Yes
"Dominator!"   | Asymptotically optimal             | Yes                 | No             | Yes
Handling Mobility
 Unit-disk graph model
 We will present a constant-approximation 1-hop clustering algorithm (i.e., an algorithm for finding a DS) which can handle mobility locally and in constant time per relevant change in topology
 All the other algorithms seen handle the mobility of a node by fully recomputing the (C)DS, which is not desirable (global update)
 For that, we will first introduce the notion of piercing sets…
Minimum Piercing Set Problem
 A set of objects
 A set of points to “pierce” all objects (green)
 Goal: find a minimum cardinality piercing set (red)
 NP-hard
Mobile Piercing Set Problem (MPS)
 The objects can move (mobility)
 Goal: maintain a minimum piercing set while the objects are moving, with minimum update cost.
- Distributed algorithm
- Constant approximation of the minimum piercing set
 Only consider unit disks (disks with diameter 1). (Why?)
Setup and Update Costs
 Setup cost: the cost of computing an initial piercing set.
 Update cost: charged per event.
 We define two types of events:
- When there are redundant piercing points
- When there are unpierced disks
 When either happens, an update is mandatory.
Clustering in Ad-hoc Network
 Clustering
- Simplify the network
- Scalability
 1-hop Clustering
- Mobiles are within 1-hop range of their clusterhead.
Ad-hoc Networks: Model
 Mobiles all have the same communication range (unit radius)
 Unit-disk graph:
- Intersection graph of unit-diameter disks
Our Contribution: Clustering Algorithm
 M-Algorithms for MPS translate directly into 1-hop clustering algorithms
 Formal analyses of popular clustering algorithms, showing that they both achieve the same approximation factor as the M-Algorithm:
- Lowest-ID algorithm: O(|P*|) update cost
- Least Clusterhead Change (LCC) algorithm: optimal update cost
Related Work
 Piercing set (static)
- A 2d-1-approximation, L∞ norm, O(dn + n log |P*|) centralized algorithm [Nielsen 96]
- A 4-approximation, L2 & 2D, sequential algorithm for the k-center problem [Agarwal & Procopiuc '98]
 Clustering
- Geometric centers: expected constant approximation, update time O(log^{3.6} n) [Gao, Guibas, Hershberger, Zhang & Zhu '01]
- Lowest ID [Gerla & Tsai '95]
- LCC [Chiang, Wu, Liu & Gerla '97]
Simple Case: 1D
 The optimal solution for intervals [Sharir & Welzl 98]
General Case: L1 & L∞ norm
 An example when d = 2, L∞ norm
- No two piercing disks intersect
[Figure legend: piercing disks vs. normal disks]
Distributed Algorithm
 Select piercing disks in a distributed, “top-down”
fashion
Cascading Effect
 Cascading effect: one event incurs global updates
- high costs
Handling Mobility
 Sever the cascading effect
- Start from an arbitrary unpierced disk and use 4 "corners"
 Main idea: find a set of points which is guaranteed to pierce any configuration of a neighborhood of a disk
Handling Mobility
 2D, L2 norm: 7-approximation
 M-Algorithm: constant approximation factor, and optimal update cost
M-Algorithm: Setup
 M-Setup: find an initial piercing set
- Every unpierced disk tries to become a piercing disk and to pierce all of its neighbors
- If two neighboring disks try to become piercing disks at the same time, the lowest-labeled disk wins
 Setup cost: O(|P*|)
M-Algorithm: Update
 M-Update: O(1) cost per event
- When two piercing disks meet, one piercing disk is set back to being a normal disk
- When a disk becomes unpierced, it invokes M-Setup locally
M-Algorithm: Summary
 Approximation factor:
- 2D, L2 norm: 7-approximation
- 3D, L2 norm: 21-approximation
 Setup cost:
- O(|P*|)
 Update cost:
- O(1) per event
Clustering Algorithm
 1-hop clustering algorithm:
- The mobile at the center of each piercing disk is a clusterhead
- A 1-hop cluster is defined by a clusterhead and all disks (mobiles) pierced by it
- No two clusterheads are neighbors
 A k-approximation M-Algorithm for MPS gives a k-approximation on the minimum number of 1-hop clusters
Lowest ID Algorithm
 Lowest ID algorithm: M-Setup
- 1-hop clustering algorithm
- The lowest-ID mobile becomes the clusterhead
- Constant approximation factor: same as the M-Algorithm
- Setup cost: O(|P*|)
- Update cost: O(|P*|) due to the cascading effect
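A centralized sketch (mine) of lowest-ID clustering over an adjacency list: processing IDs in increasing order, an undecided node with the lowest ID in its undecided neighborhood becomes a clusterhead, and its undecided neighbors join it:

def lowest_id_clustering(adj):
    """adj: node id -> set of neighbor ids. Returns a map node -> its clusterhead."""
    undecided = set(adj)
    head_of = {}
    while undecided:
        v = min(undecided)                        # lowest undecided ID -> clusterhead
        head_of[v] = v
        members = {u for u in adj[v] if u in undecided}
        for u in members:
            head_of[u] = v                        # 1-hop neighbors join v's cluster
        undecided -= {v} | members
    return head_of

adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2, 5}, 5: {4}}
print(lowest_id_clustering(adj))   # {1: 1, 2: 1, 3: 1, 4: 4, 5: 4}; heads 1 and 4 are not neighbors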
LCC Algorithm
 Least Clusterhead Change (LCC): M-Update
- Clusters change only when necessary:
 Two clusterheads meet
 A mobile is out of its clusterhead's range
- Constant approximation factor: same as the M-Algorithm
- Setup cost: O(|P*|)
- Update cost: O(1) per event; since an update is required whenever an event occurs, this update cost is optimal