
• http://www.theregister.co.uk/2013/06/08/facebook_cloud_versus_cloud/

Graph Algorithms with MapReduce, Chapter 5 (thanks to Jimmy Lin's slides)

Topics

• Introduction to graph algorithms and graph representations
• Single Source Shortest Path (SSSP) problem
  – Refresher: Dijkstra's algorithm
  – Breadth-First Search with MapReduce
• PageRank

What’s a graph?

• G = (V, E), where
  – V represents the set of vertices (nodes)
  – E represents the set of edges (links)
  – Both vertices and edges may contain additional information
• Different types of graphs:
  – Directed vs. undirected edges
  – Presence or absence of cycles
• Graphs are everywhere:
  – Hyperlink structure of the Web
  – Physical structure of computers on the Internet
  – Interstate highway system
  – Social networks

Some Graph Problems

• Finding shortest paths
  – Routing Internet traffic and UPS trucks
• Finding minimum spanning trees
  – Telcos laying down fiber
• Finding max flow
  – Airline scheduling
• Identifying "special" nodes and communities
  – Breaking up terrorist cells, spread of avian flu
• Bipartite matching
  – Monster.com, Match.com

And of course... PageRank

Graphs and MapReduce

• Graph algorithms typically involve:
  – Performing computation at each node
  – Processing node-specific data, edge-specific data, and link structure
  – Traversing the graph in some manner
• Key questions:
  – How do you represent graph data in MapReduce?
  – How do you traverse a graph in MapReduce?

Representing Graphs

• G = (V, E) is a poor representation for computational purposes
• Two common representations:
  – Adjacency matrix
  – Adjacency list

Adjacency Matrices

Represent a graph as an n x n square matrix M, where n = |V|
  – M[i][j] = 1 means there is a link from node i to node j

Example (the four-node graph used in the next slides):

      1  2  3  4
  1   0  1  0  1
  2   1  0  1  1
  3   1  0  0  0
  4   1  0  1  0

Adjacency Matrices: Critique

• Advantages:
  – Naturally encapsulates iteration over nodes
  – Rows and columns correspond to outlinks and inlinks, respectively
• Disadvantages:
  – Lots of zeros for sparse matrices
  – Lots of wasted space

Adjacency Lists

Take adjacency matrices… and throw away all the zeros

      1  2  3  4
  1   0  1  0  1        1: 2, 4
  2   1  0  1  1        2: 1, 3, 4
  3   1  0  0  0        3: 1
  4   1  0  1  0        4: 1, 3

Adjacency Lists: Critique

• Advantages:
  – Much more compact representation
  – Easy to compute over outlinks
  – Graph structure can be broken up and distributed
• Disadvantages:
  – Much more difficult to compute over inlinks

Single Source Shortest Path

• Problem: find the shortest path from a source node to one or more target nodes
• First, a refresher: Dijkstra's algorithm (single machine)
  – Per Wikipedia: a "graph search algorithm that solves the single-source shortest path problem for a graph with nonnegative edge path costs, producing a shortest path tree"

Example from CLR

Dijkstra’s Algorithm Example

[Figure: the example graph from CLR, five nodes n0 through n4 with directed edges and weights between 1 and 10; n0 is the source.]

Dijkstra’s Algorithm

// G: graph, w: edge weights (w(u, v) is the weight of edge u -> v), s: source,
// d: distances, V: vertices, Q: priority queue
1:  Dijkstra(G, w, s)
2:    d[s] ← 0
3:    for all vertex v ∈ V, v ≠ s do
4:      d[v] ← ∞
5:    Q ← {V}
6:    while Q ≠ ∅ do
7:      u ← ExtractMin(Q)
8:      for all vertex v ∈ u.AdjacencyList do
9:        if d[v] > d[u] + w(u, v) then
10:         d[v] ← d[u] + w(u, v)

Figure 5.2: Pseudo-code for Dijkstra's algorithm, which is based on maintaining a global priority queue of nodes with priorities equal to their distances from the source node. At each iteration, the algorithm expands the node with the shortest distance and updates distances to all reachable nodes.
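As a concrete companion to Figure 5.2, here is a minimal single-machine sketch in Python that uses a binary heap in place of the generic ExtractMin priority queue. The graph literal is my reading of the edge weights in the CLR figure, so treat it as an illustrative assumption rather than the book's exact data.

import heapq

def dijkstra(graph, source):
    # graph: dict mapping node -> list of (neighbor, edge weight)
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    pq = [(0, source)]                     # heap of (distance, node) stands in for ExtractMin
    while pq:
        d_u, u = heapq.heappop(pq)
        if d_u > dist[u]:                  # stale entry: node already settled with a shorter distance
            continue
        for v, w in graph[u]:
            if dist[v] > d_u + w:          # relax edge (u, v), as in lines 9-10 of Figure 5.2
                dist[v] = d_u + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# The CLR example graph as read off the figure (weights are an assumption for illustration).
graph = {
    "n0": [("n1", 10), ("n2", 5)],
    "n1": [("n2", 2), ("n3", 1)],
    "n2": [("n1", 3), ("n3", 9), ("n4", 2)],
    "n3": [("n4", 4)],
    "n4": [("n0", 7), ("n3", 6)],
}
print(dijkstra(graph, "n0"))   # expected: n1 = 8, n2 = 5, n3 = 9, n4 = 7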

Example from CLR

Dijkstra's Algorithm Example (step by step)

[Figure sequence: Dijkstra's algorithm run on the CLR example graph with source n0. The tentative distances evolve as nodes are settled:
  – Initialize: d(n0) = 0, all other distances ∞
  – Settle n0: d(n1) = 10, d(n2) = 5
  – Settle n2: d(n1) = 8, d(n3) = 14, d(n4) = 7
  – Settle n4: d(n3) = 13
  – Settle n1: d(n3) = 9
  – Final: d(n1) = 8, d(n2) = 5, d(n3) = 9, d(n4) = 7]

Single Source Shortest Path

• Problem: find the shortest path from a source node to one or more target nodes
• Single-processor machine: Dijkstra's algorithm
• MapReduce: parallel Breadth-First Search (BFS)
  – How to do it? First, simplify the problem!

Finding the Shortest Path

• First, consider equal edge weights
• The solution to the problem can be defined inductively
• Here's the intuition (see the sketch below):
  – DistanceTo(startNode) = 0
  – For all nodes n directly reachable from startNode, DistanceTo(n) = 1
  – For all nodes n reachable from some set of nodes S, DistanceTo(n) = 1 + min_{m ∈ S} DistanceTo(m)
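A minimal single-machine sketch of this inductive definition in plain Python, assuming unweighted edges: each node's distance is set the first time the expanding frontier reaches it, which is exactly 1 + the distance of whichever node reached it.

from collections import deque

def bfs_distances(adj, start):
    # adj: dict mapping node -> list of directly reachable nodes
    dist = {start: 0}                  # DistanceTo(startNode) = 0
    frontier = deque([start])
    while frontier:
        m = frontier.popleft()
        for n in adj[m]:
            if n not in dist:          # first time n is reached => smallest hop count
                dist[n] = dist[m] + 1  # DistanceTo(n) = 1 + min over nodes that reach n
                frontier.append(n)
    return dist

adj = {1: [2, 4], 2: [1, 3, 4], 3: [1], 4: [1, 3]}   # the four-node example graph
print(bfs_distances(adj, 1))                          # {1: 0, 2: 1, 4: 1, 3: 2}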

Finding the Shortest Path

• This strategy advances the "known frontier" by one hop
  – Subsequent iterations include more reachable nodes as the frontier advances
  – Multiple iterations are needed to explore the entire graph

Visualizing Parallel BFS

[Figure: a small graph whose nodes are labeled with their BFS level from the source (levels 1 through 4), illustrating how parallel BFS expands the frontier one hop per iteration.]

How to Implement?

• Assume each node in the graph is assigned to one mapper
• Must pass along info about the graph structure, e.g., which nodes can be reached from which nodes
• Must keep track of the current minimum distance for each node
• Determine when to stop
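One way to picture the per-node record that gets passed through each iteration. The serialization below is an assumption for illustration only (real Hadoop code would typically use a custom Writable or a text encoding), but it shows the key and the two pieces of value the slides describe: current best distance plus adjacency list.

# Key: node id; Value: node structure N = (current best distance, adjacency list with weights)
# As tab-separated text, with the source node starting at distance 0 and others at infinity:
#   n0    0      n1:10,n2:5
#   n1    INF    n2:2,n3:1
#   n2    INF    n1:3,n3:9,n4:2
# The same record in Python form:
node_record = ("n1", {"distance": float("inf"), "adjacency": [("n2", 2), ("n3", 1)]})
print(node_record)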

Termination

• Does the algorithm ever terminate?
  – Eventually, all nodes will be discovered and all edges considered (in a connected graph)
• When do we stop?
  – When the distances at every node no longer change at the next frontier

Example from CLR

Dijkstra’s Algorithm Example

[Figure: the same five-node graph from the CLR example, now assuming every edge has weight d = 1.]

From Intuition to Algorithm

• What info does the map task require?
  – A map task receives (k, v):
    • Key: node id n
    • Value: the node data structure N, which holds
      – the adjacency list of nodes reachable from n
      – the current shortest distance D to n
• What does the map task do?
  – Emits the graph structure as (n, N), so the adjacency list and current shortest distance are passed along
  – Computes distances to reachable nodes: for every node p in n's points-to (adjacency) list, emits (p, D + 1) to the reducer

From Intuition to Algorithm

• What info does the reduce task require?
  – The adjacency list (graph structure)
  – The reduce task gathers the possible distances to a given node p
• What does the reduce task do?
  – For every value d in the list of values for a key:
    • Tests whether d is the node data structure or a distance value
    • If d is a distance, compares it to the current minimum for the node and updates if it is smaller
  – Emits the node with its distance once every d has been compared to the current minimum

Multiple Iterations Needed

• This MapReduce job advances the "known frontier" by one hop
  – Subsequent iterations include more reachable nodes as the frontier advances
  – Multiple iterations are needed to explore the entire graph
  – Each iteration is a MapReduce job
  – The final output of one iteration is the input to the next: feed the output back into the same MapReduce job

Next Step to Solving - Weighted Edges

• Next, weighted edges:
  – No longer assume the distance along each edge is 1
  – Instead of adding 1 while traversing the graph, add the positive weight of each edge
  – Simple change: include a weight w_p for each edge to node p and emit (p, D + w_p) in the mapper (a small sketch follows)
  – Map and reduce still emit pairs, still need the points-to list, and still keep track of the current minimum
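A hedged sketch of just the map-side change, in Python: the adjacency list now stores (neighbor, weight) pairs and the emit adds the weight instead of 1. Names and the emit callback are illustrative, not the book's code.

def map_node(n, N, emit):
    # N = (current shortest distance D to n, adjacency list of (p, w) pairs)
    D, adjacency = N
    emit(n, ("NODE", adjacency, D))          # pass the graph structure along
    for p, w in adjacency:                   # weighted edges: add w, not 1
        emit(p, ("DIST", D + w))

emitted = []
map_node("n0", (0, [("n1", 10), ("n2", 5)]), lambda k, v: emitted.append((k, v)))
print(emitted)   # n0's structure, plus ('n1', ('DIST', 10)) and ('n2', ('DIST', 5))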

Tracing Algorithm

• Assume each node in graph assigned to one mapper

1:  class Mapper
2:    method Map(nid n, node N)
3:      d ← N.Distance
4:      Emit(nid n, N)                        // Pass along graph structure
5:      for all nodeid m ∈ N.AdjacencyList do
6:        Emit(nid m, d + w)                  // Emit distances to reachable nodes

1:  class Reducer
2:    method Reduce(nid m, [d1, d2, ...])
3:      dmin ← ∞
4:      M ← ∅
5:      for all d ∈ counts [d1, d2, ...] do
6:        if IsNode(d) then
7:          M ← d                             // Recover graph structure
8:        else if d < dmin then               // Look for shorter distance
9:          dmin ← d
10:     if M.Distance > dmin then
11:       M.Distance ← dmin                   // Update shortest distance
12:       Increment counter for driver
13:     Emit(nid m, node M)
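To make the pseudocode concrete, here is a minimal single-process simulation of one BFS/SSSP iteration in Python. It is a sketch of the idea, not Hadoop code: "emit" appends to an in-memory list, the shuffle is a group-by on the key, and the reducers' counter is a plain integer returned to the caller. The graph literal reuses the assumed weights from the earlier Dijkstra sketch.

from collections import defaultdict

INF = float("inf")

def map_phase(nodes):
    # nodes: dict nid -> (distance, [(neighbor, weight), ...])
    emitted = []
    for n, (d, adj) in nodes.items():
        emitted.append((n, ("NODE", d, adj)))         # pass along graph structure
        if d < INF:                                   # only reached nodes emit useful distances
            for m, w in adj:
                emitted.append((m, ("DIST", d + w)))  # distances to reachable nodes
    return emitted

def reduce_phase(emitted):
    groups = defaultdict(list)                        # simulated shuffle: group values by key
    for k, v in emitted:
        groups[k].append(v)
    new_nodes, updates = {}, 0
    for m, values in groups.items():
        d_min, M = INF, None
        for v in values:
            if v[0] == "NODE":
                M = (v[1], v[2])                      # recover graph structure
            elif v[1] < d_min:                        # look for shorter distance
                d_min = v[1]
        distance, adj = M
        if d_min < distance:                          # update shortest distance
            distance = d_min
            updates += 1                              # "increment counter for driver"
        new_nodes[m] = (distance, adj)
    return new_nodes, updates

# One iteration on the CLR example graph, n0 as source (weights are illustrative):
nodes = {
    "n0": (0,   [("n1", 10), ("n2", 5)]),
    "n1": (INF, [("n2", 2), ("n3", 1)]),
    "n2": (INF, [("n1", 3), ("n3", 9), ("n4", 2)]),
    "n3": (INF, [("n4", 4)]),
    "n4": (INF, [("n0", 7), ("n3", 6)]),
}
nodes, updates = reduce_phase(map_phase(nodes))
print(updates, nodes["n1"], nodes["n2"])              # after one hop: n1 = 10, n2 = 5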

Map Algorithm

• Line 2. N is a node structure holding an adjacency list and the current (shortest) distance
• Line 4. Emits a (k, v) whose key is the current node and whose value is the node's info; there is only one of these per node, because each node is assigned to one mapper
• Line 6. Emits a different type of (k, v) that carries only the distance to a neighbor, not an adjacency list
• The shuffle sends all (k, v) pairs with the same k to the same reducer

Reduce Algorithm

• Line 2. The reducer receives different types of (k, v) as input
• Line 6. Tests whether a value is the node data structure (adjacency list); only one value per key should be a node, as far as I can tell
• Line 8. If a value is not the node structure, it is a distance; look for the shortest
• Line 10. Determine whether a new shortest distance was found
• Lines 11-12. Update the current shortest distance and increment a counter used to decide whether to stop

Shortest path – one more thing

• This only finds the shortest distances, not the shortest paths
• Is that true?
  – Do we have to keep backpointers and retrace them to recover the shortest path?
  – No: emit the path along with the distance, so each node has its shortest path accessible at all times
    • Most paths are relatively short, so this uses little space
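A hedged sketch of how the value could carry the path along with the distance; the tuple layout is an assumption for illustration, not the book's exact encoding.

def map_node_with_path(n, N, emit):
    # N = (distance D, path from the source to n, adjacency list of (p, w) pairs)
    D, path, adjacency = N
    emit(n, ("NODE", D, path, adjacency))       # pass structure, distance, and path along
    for p, w in adjacency:
        emit(p, ("DIST", D + w, path + [p]))    # the candidate path travels through n to p

# In the reducer, whenever a shorter distance wins, keep its path too:
#   if d < d_min: d_min, best_path = d, candidate_path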

Weighted Edges: Finds Minimum?

• Suppose we discover node r: we have the shortest distance D to p, and the shortest distance to r found so far goes through p
• There may be a path to r through some node q that is shorter, but that path lies outside the current search frontier
  – This cannot happen when d = 1 for every edge: a shortest path cannot lie outside the search frontier, since it would have to be a longer path
• We have found the shortest path within the frontier
• We will discover the true shortest path as the frontier expands
• With sufficient iterations, we eventually discover the shortest distance to every node

Termination

• Does this ever terminate?
  – Yes! Eventually no better distances will be found; when the distances stop changing, we stop
  – Checking for termination must occur outside of MapReduce
  – A driver program submits a MapReduce job for each iteration of the algorithm and checks whether the termination condition is met
  – Hadoop provides Counters that the driver can read outside of MapReduce
    • The driver checks them after the reducers finish to decide whether we are done
    • For shortest path, the reducers count each change to a minimum distance and pass the count to the driver
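A sketch of the driver-side loop, again as a plain Python illustration rather than the Hadoop Counter API: each pass stands in for one MapReduce job, and the "counter" is the number of distance updates the reducers reported.

def driver(run_iteration, nodes, max_iterations=100):
    # run_iteration: nodes -> (new_nodes, number_of_distance_updates), i.e. one MapReduce job
    for i in range(1, max_iterations + 1):
        nodes, updates = run_iteration(nodes)
        if updates == 0:            # the reducers' counter stayed at zero: no distance changed
            return nodes, i         # termination condition met
    return nodes, max_iterations

# e.g., with the simulated phases sketched earlier:
#   final_nodes, iterations = driver(lambda ns: reduce_phase(map_phase(ns)), nodes)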

Iterations

• How many iterations are needed to compute the shortest distance to all nodes?
  – The diameter of the graph, i.e., the greatest distance between any pair of nodes
  – Small for many real-world problems: "six degrees of separation"
    • For a global social network, about 6 MapReduce iterations
  – How many iterations does Fig. 5.6 need for n1-n6?
  – Worst case? (#nodes - 1) iterations

General Approach

• MapReduce is adept at manipulating graphs
  – Store graphs as adjacency lists
• Graph algorithms with MapReduce:
  – Each map task receives a node and its outlinks
  – The map task computes some function of the link structure and emits values with target nodes as the keys
  – The reduce task collects the values for each key (target node) and aggregates them
• Iterate over multiple MapReduce cycles until some termination condition is met
  – Remember to "pass along" the graph structure from one iteration to the next

Comparison to Dijkstra

• Dijkstra's algorithm is more efficient
  – At any step it only pursues edges from the minimum-cost path inside the frontier
• MapReduce explores all paths in parallel
  – Brute force, which wastes time: except at the search frontier, it keeps repeating the same computations within the frontier
  – Divide and conquer: throw more hardware at the problem

Another example – Random Walks Over the Web

• Model:
  – A user starts at a random Web page
  – The user randomly clicks on links, surfing from page to page (and may also teleport to a completely different page)
• How frequently will a page be encountered during this surfing?
• This is PageRank
  – A probability distribution over the nodes in a graph, representing the likelihood that a random walk over the graph arrives at a particular node

• What characteristics would you use to rank the pages?
• For a given node n:
  – Assign a value to n based on the node(s) m pointing to n
    • How many pages does m point to?
    • What is m's current value?

PageRank: Defined

Given page n with in-bound links L(n), where
  – C(m) is the out-degree of m
  – P(m) is the PageRank of m
  – α is the probability of a random jump
  – |G| is the total number of nodes in the graph

  P(n) = α (1/|G|) + (1 - α) Σ_{m ∈ L(n)} P(m)/C(m)
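A direct transcription of the formula in Python. The in-link map, out-degrees, and current values below are made-up numbers purely to show one evaluation.

def pagerank(n, P, in_links, out_degree, alpha, G_size):
    # P(n) = alpha * (1/|G|) + (1 - alpha) * sum over m in L(n) of P(m)/C(m)
    random_jump = alpha * (1.0 / G_size)
    follow_links = (1 - alpha) * sum(P[m] / out_degree[m] for m in in_links[n])
    return random_jump + follow_links

# Illustrative numbers only:
P          = {"a": 0.2, "b": 0.2, "c": 0.2}
out_degree = {"a": 2, "b": 1, "c": 3}
in_links   = {"c": ["a", "b"]}
print(pagerank("c", P, in_links, out_degree, alpha=0.15, G_size=3))
# 0.15/3 + 0.85 * (0.2/2 + 0.2/1) = 0.05 + 0.255 = 0.305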

Computing PageRank

• Properties of PageRank:
  – Can be computed iteratively
  – The effect of each iteration is local
• Sketch of the algorithm:
  – Start with seed values P_i
  – Each page distributes its P_i "credit" to all pages it links to
  – Each target page adds up the "credit" from its multiple in-bound links to compute P_{i+1}
  – Iterate until the values converge

Computing PageRank

• What does map do?
• What does reduce do?

PageRank MapReduce

• Fig. 5.7
• Assume α = 0
• Begin with 5 nodes, splitting a total PageRank of 1.0 into 0.2 per node
• Each node splits its 0.2 among the nodes it links to (map)
• Each node then adds up all its incoming values (reduce)
• Each iteration is one MapReduce job
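The link structure of Fig. 5.7 is not reproduced in this transcript, so the five-node graph below is hypothetical; the sketch just shows the map/reduce split described above, with α = 0 and every node seeded at 0.2.

from collections import defaultdict

def pagerank_iteration(pr, adj):
    # Map: each node splits its current PageRank evenly over its outgoing links.
    emitted = []
    for n in adj:
        emitted.append((n, ("NODE", adj[n])))         # pass the graph structure along
        share = pr[n] / len(adj[n])
        for m in adj[n]:
            emitted.append((m, ("MASS", share)))
    # Reduce: each node adds up the incoming shares to get its new PageRank.
    groups = defaultdict(list)                        # simulated shuffle
    for k, v in emitted:
        groups[k].append(v)
    new_pr, new_adj = {}, {}
    for n, values in groups.items():
        new_pr[n] = sum(v[1] for v in values if v[0] == "MASS")
        new_adj[n] = next(v[1] for v in values if v[0] == "NODE")
    return new_pr, new_adj

# Hypothetical 5-node graph, alpha = 0, seed value 0.2 everywhere:
adj = {"n1": ["n2", "n4"], "n2": ["n3", "n5"], "n3": ["n4"],
       "n4": ["n5"], "n5": ["n1", "n2", "n3"]}
pr = {n: 0.2 for n in adj}
pr, adj = pagerank_iteration(pr, adj)
print(pr)   # the five values still sum to 1.0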

PageRank in MapReduce

• Map: distribute PageRank "credit" to link targets
• Reduce: gather up PageRank "credit" from multiple sources to compute a new PageRank value
• Iterate until convergence

Convergence to end Page Rank

• Stop when there are few changes (allowing some tolerance for precision errors) or after a fixed number of iterations
• The driver checks for convergence
• How many iterations does PageRank need to converge, e.g., on a graph with 322 million edges?
  – Fewer than you might expect: 52 iterations

Dangling nodes and random jumps

• Must redistribute the mass lost at dangling nodes (nodes with no outgoing edges, so their mass would otherwise be lost)
  – Three approaches to determining the missing mass:
    • Count the dangling nodes and multiply by a constant
    • Emit a special key and handle it with special-case logic
    • Write it as side data and sum it across all map tasks
  – Then redistribute the missing mass m across all nodes
• Compute the final PageRank p' of each node, where α is the random jump probability, m is the missing mass, and p is the node's PageRank from the standard iteration:

  p' = α (1/|G|) + (1 - α) (m/|G| + p)

• Need two MapReduce jobs for one iteration: one to distribute mass across the edges, the other to take care of the lost mass (and the random jump)
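A sketch of the second job's per-node computation, assuming the missing mass m has already been summed up (for example via a counter or side file); the numbers are illustrative.

def finalize(p, missing_mass, alpha, G_size):
    # p' = alpha * (1/|G|) + (1 - alpha) * (m/|G| + p)
    return alpha / G_size + (1 - alpha) * (missing_mass / G_size + p)

# Illustrative numbers: 4 nodes, 0.1 of the mass fell into a dangling node.
ranks = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.0}
missing = 1.0 - sum(ranks.values())                    # 0.1
ranks = {n: finalize(p, missing, alpha=0.15, G_size=4) for n, p in ranks.items()}
print(sum(ranks.values()))                             # back to 1.0 (up to float error)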

PageRank

• Assumes honest users
• Assumes no spider traps: e.g., a long chain of pages all linking to a single page in order to inflate its PageRank
• PageRank is only one of thousands of features used in ranking web pages

Issues with Graph processing

• No global data structures can be used
• Computation is local to each node; results are passed to its neighbors
• With multiple iterations, the computation converges on the global graph
• The amount of intermediate data is on the order of the number of edges
  – Worst case? O(n^2) for a dense graph
• CS591: read the original PageRank paper, "The anatomy of a large-scale hypertextual Web search engine" by S. Brin and L. Page

Issues with Graph processing

• Role of combiner?

PageRank in MapReduce

• Map: distribute PageRank "credit" to link targets
• Reduce: gather up PageRank "credit" from multiple sources to compute a new PageRank value
• Iterate until convergence

Example from CLR

Dijkstra's Algorithm Example

[Figure: the five-node CLR example graph, shown again.]

Issues with Graph processing

• Combiners are only useful if...?
  – ...they can do partial aggregation
    • Only if multiple nodes processed by an individual mapper point to the same nodes
    • Otherwise a combiner is not useful
  – Assume we have a mapper that processes more than one node
    • How do we assign nodes (partition the graph) so that this is useful?
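A sketch of what partial aggregation buys, shown here as in-mapper combining for the PageRank case (a separate combiner class would play the same role): if the nodes handled by one map task share link targets, their contributions can be summed locally before anything hits the network. The helper name and data are hypothetical.

from collections import defaultdict

def map_with_in_mapper_combining(assigned_nodes, pr, adj):
    # assigned_nodes: the several nodes handled by this one map task
    partial = defaultdict(float)               # target node -> locally summed PageRank mass
    for n in assigned_nodes:
        share = pr[n] / len(adj[n])
        for m in adj[n]:
            partial[m] += share                # aggregate locally instead of emitting each share
    return list(partial.items())               # one (target, partial sum) pair per target

# Only helps if the assigned nodes share targets, e.g. n1 and n2 both pointing to n3:
adj = {"n1": ["n3"], "n2": ["n3", "n4"]}
pr = {"n1": 0.5, "n2": 0.5}
print(map_with_in_mapper_combining(["n1", "n2"], pr, adj))   # [('n3', 0.75), ('n4', 0.25)]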

Issues with Graph processing

• It is desirable to partition the graph so there are many intra-component links and few inter-component links
• Consider a social network
  – Partitioning heuristics: order nodes by
    • Last name?
    • Zip code?
    • Language spoken?
    • School?
  – ...so that connected people end up together

Summary

• Graph structure is represented with adjacency lists
• Map over the nodes, pass partial results to the nodes on each adjacency list; partial results are aggregated for each node in the reducer
• Graph structure is passed from mapper to reducer, and the output is in the same form as the input
• Algorithms are iterative, under the control of a non-MapReduce driver that checks for termination at the end of each iteration