Belief Propagation algorithm in Markov Random Fields


Jan 11 2010, WSAC 2010
NP-Completeness
Kyomin Jung
KAIST
Applied Algorithm Lab
What is an Algorithm?

In computing and mathematics, an algorithm is an
effective method for solving a problem using a finite
sequence of instructions.

For example, to compute the sum of two decimal
integers, we use the following simple column-addition algorithm.
  1     (carry)
  34
+ 28
----
  62
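A minimal Python sketch of this digit-by-digit addition with carries (the function name is illustrative):

    def add_decimal(a, b):
        # add two decimal integers given as digit strings, working from
        # the least significant digit and propagating carries
        a, b = a[::-1], b[::-1]
        digits, carry = [], 0
        for i in range(max(len(a), len(b))):
            s = carry
            if i < len(a):
                s += int(a[i])
            if i < len(b):
                s += int(b[i])
            digits.append(str(s % 10))
            carry = s // 10
        if carry:
            digits.append(str(carry))
        return ''.join(reversed(digits))

    print(add_decimal('34', '28'))   # '62', as in the example above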
Ex: Euclid's Algorithm
To find the GCD (greatest common divisor) of integers A1 and
A2 with A1 > A2 >= 0:
a. If A2 = 0 then GCD = A1.
b. If A2 > 0 then write A1 = A2*q2 + A3 with A2 > A3 >= 0.
c. Replace A1 by A2 and A2 by A3, and go to step a.
Example: GCD(120,85)
120 = 85*1 + 35
85 = 35*2 + 15
35 = 15*2 + 5
15 = 5*3 + 0
GCD = 5
(GCD is the last non-zero remainder)
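The steps above translate directly into a short Python sketch (the function name is illustrative):

    def gcd(a1, a2):
        # Euclid's algorithm, following steps a-c above:
        # while A2 > 0, replace (A1, A2) by (A2, A1 mod A2)
        while a2 > 0:
            a1, a2 = a2, a1 % a2
        return a1

    print(gcd(120, 85))   # 5, matching the worked example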
Turing Machine

First Goal of the Turing Machine:
A model that can compute anything that a human can
compute.
Before the invention of electronic computers, the term
"computer" actually referred to a person whose job was
to calculate numerical quantities.



A Turing Machine (TM) is a device with a finite amount
of read-only "hard" memory (states) and an
unbounded amount of read/write tape memory.
There is no separate input; rather, the input is
assumed to be on the tape when the TM starts running.
Formal definition of TM (for decision problems):
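The definition itself did not survive in the transcript; the standard 7-tuple form (an assumption about what the slide showed) is:

    M = (Q, Σ, Γ, δ, q0, q_accept, q_reject)

where Q is the finite set of states, Σ is the input alphabet, Γ ⊇ Σ is the tape alphabet, δ : Q × Γ → Q × Γ × {L, R} is the transition function, q0 ∈ Q is the start state, and q_accept, q_reject ∈ Q are the accepting and rejecting states. The machine accepts an input if it ever enters q_accept and rejects if it ever enters q_reject.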
State Diagram
These instructions can be expressed by a flow
diagram:
[State diagram figure: states A, B, C, D, E with transitions labeled 0 and 1, leading to accept (acc) and reject (rej) states]
Hamiltonian Path Problem

Given an undirected graph G, a Hamiltonian path is a
path which visits each vertex exactly once.

Hamiltonian Path Problem

Decide whether a graph G has a Hamiltonian path or not.
[Example graph on vertices a, b, c, d, e]
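A brute-force backtracking sketch in Python that decides the problem on small graphs (exponential time in the worst case; no polynomial-time algorithm is known; the function name and the example edges are illustrative):

    def has_hamiltonian_path(n, edges):
        # vertices are 0 .. n-1; edges is a list of (u, v) pairs
        adj = [set() for _ in range(n)]
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)

        def extend(path, used):
            # try to extend the current partial path one vertex at a time
            if len(path) == n:
                return True
            for v in adj[path[-1]]:
                if v not in used:
                    used.add(v)
                    path.append(v)
                    if extend(path, used):
                        return True
                    path.pop()
                    used.remove(v)
            return False

        return any(extend([s], {s}) for s in range(n))

    print(has_hamiltonian_path(5, [(0, 1), (1, 2), (2, 3), (3, 4)]))   # True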
P and NP




P = set of problems that can be solved in polynomial
time
NP = set of problems for which a solution can be
verified in polynomial time (ex: Hamiltonian path)
P ⊆ NP
Intuitively,
a problem in P is one whose solution a normal person can find easily, and
a problem in NP is one for which a normal person can easily check
whether a given solution is correct or not.


The big question: Does P = NP?
Reduction

A problem P can be reduced to another problem Q if
any instance of P can be rephrased as an instance of
Q whose solution provides a solution to the original
instance of P.

Intuitively: If P reduces in polynomial time to Q, P is
“no harder to solve than” Q

Traveling Salesman Problem (TSP)


Given an undirected graph with non-negative edge weights,
the task is to find a shortest possible tour that visits each city
exactly once.
The Hamiltonian Path problem can be reduced to TSP (see the sketch below).
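One standard way to carry out such a reduction (a sketch; the weights 1 and 2, the extra vertex, and the target cost are choices made here, not taken from the slides): add a new vertex joined to every original vertex with weight 1, give edges of G weight 1 and non-edges weight 2. Then G has a Hamiltonian path exactly when the resulting TSP instance has a tour of cost at most n + 1, where n is the number of vertices of G.

    def ham_path_to_tsp(n, edges):
        # build a complete weighted graph on vertices 0 .. n (vertex n is the new hub)
        # and return it together with the target tour cost n + 1
        E = {frozenset(e) for e in edges}
        size = n + 1
        w = [[0] * size for _ in range(size)]
        for u in range(size):
            for v in range(u + 1, size):
                if v == n or frozenset((u, v)) in E:
                    w[u][v] = w[v][u] = 1   # hub edges and original edges cost 1
                else:
                    w[u][v] = w[v][u] = 2   # non-edges cost 2
        return w, n + 1   # G has a Hamiltonian path  <=>  some tour has cost <= n + 1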
NP-Hard and NP-Complete



If P is polynomial-time reducible to Q, we denote
this P ≤p Q.
Definition of NP-Hard and NP-Complete:
If every problem R ∈ NP is polynomial-time reducible to P, then P
is NP-Hard.
We say P is NP-Complete if P is NP-Hard
and P ∈ NP.
If P ≤p Q, P is NP-Complete, and Q ∈ NP, then Q is also
NP-Complete.
Proving NP-Completeness

What steps do we have to take to prove that a problem P
is NP-Complete?
Pick a known NP-Complete problem Q.
Reduce Q to P.
Prove the reduction is correct.
Prove it runs in polynomial time.
Prove P ∈ NP.

First (known) NP-Complete problem:
The Cook-Levin Theorem says that the SAT problem is
NP-Complete.
Why Prove NP-Completeness?

Though nobody has proven that P != NP, if you
prove a problem NP-Complete, most people accept
that it is probably intractable (this assumption is used in cryptography).

Therefore it is important to prove that a problem is
NP-Hard or NP-Complete in order to understand its hardness.
One can instead work on approximation algorithms or
heuristics.
Combinatorial Optimization
A combinatorial optimization problem. Example: max-cut.
Input: a graph.
Feasible solution: a set S of vertices.
Value of solution: the number of edges cut, i.e., edges with exactly one endpoint in S.
Objective: maximize.
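A small Python sketch of this objective; the instance at the bottom is made up for illustration:

    def cut_value(edges, S):
        # number of edges with exactly one endpoint in S (the max-cut objective)
        S = set(S)
        return sum(1 for u, v in edges if (u in S) != (v in S))

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    print(cut_value(edges, {0, 2}))   # 4 of the 5 edges are cut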
Coping with NP-hardness
Finding the optimal solution is NP-hard.
Practical implication (assuming P != NP): no polynomial-time algorithm
always finds the optimal solution.
Approximation algorithms: polynomial time, guaranteed
to find “near optimal” solutions for every input.
Heuristics: useful algorithmic ideas that often work, but
fail on some inputs.
Approximation Ratio
For maximization problems (max cut):
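The formula on this slide is not in the transcript; one common way to state it (an assumption about the slide's convention) is: an algorithm A is an α-approximation, with α <= 1, if for every input I

    A(I) / OPT(I) >= α,

where OPT(I) is the value of an optimal solution.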
Ex: Special Case of the Traveling
Salesman Problem

TSP: Given a complete, weighted graph, find a cycle
of minimum cost that visits each vertex.
TSP is NP-hard.
Special case: edge weights satisfy the triangle
inequality (which is common in many applications):
w(a,b) + w(b,c) >= w(a,c)
[Figure: a triangle on vertices a, b, c with edge weights 4, 5, and 7]
A 2-Approximation for TSP Special Case
[Figures: Euler tour P of MST M; output tour T]
Algorithm TSPApprox(G)
    Input: a weighted complete graph G satisfying the triangle inequality
    Output: a TSP tour T for G
    M ← a minimum spanning tree for G
    P ← an Euler tour traversal of M, starting at some vertex s
    T ← empty list
    for each vertex v in P (in traversal order)
        if this is v's first appearance in P then
            T.insertLast(v)
    T.insertLast(s)
    return T
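A runnable Python sketch of TSPApprox, using Prim's algorithm for the MST and a preorder walk of the tree as the shortcut Euler tour (an equivalent formulation of the pseudocode above; w is assumed to be a symmetric matrix of non-negative weights satisfying the triangle inequality):

    import heapq

    def tsp_approx(w):
        n = len(w)
        # Prim's algorithm for a minimum spanning tree rooted at vertex 0
        parent = [None] * n
        best = [float('inf')] * n
        best[0] = 0
        in_tree = [False] * n
        children = [[] for _ in range(n)]
        heap = [(0, 0)]
        while heap:
            _, u = heapq.heappop(heap)
            if in_tree[u]:
                continue
            in_tree[u] = True
            if parent[u] is not None:
                children[parent[u]].append(u)
            for v in range(n):
                if not in_tree[v] and w[u][v] < best[v]:
                    best[v] = w[u][v]
                    parent[v] = u
                    heapq.heappush(heap, (w[u][v], v))
        # preorder walk of the MST = Euler tour with repeated vertices skipped
        tour, stack = [], [0]
        while stack:
            u = stack.pop()
            tour.append(u)
            stack.extend(reversed(children[u]))
        tour.append(0)   # return to the starting vertex to close the cycle
        return tour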
A 2-Approximation for TSP Special
Case - Proof
Dropping one edge from the optimal tour gives a spanning tree; hence |M| <= |OPT|.
The Euler tour P visits each edge of M twice; hence |P| = 2|M|.
By the triangle inequality, skipping repeated vertices never increases the cost, so |T| <= |P| = 2|M| <= 2|OPT|.
[Figures: output tour T (at most the cost of P); Euler tour P of MST M (twice the cost of M); optimal tour OPT (at least the cost of the MST M)]
Polynomial-Time Approximation Schemes

Polynomial-Time Approximation Scheme (PTAS):
A problem admits a PTAS if, for every fixed ε > 0, it has a
polynomial-time (in the input size) (1 + ε)-approximation algorithm.
Ex: TSP for Euclidean graphs (graphs in Euclidean space)

Fully Polynomial-Time Approximation Scheme (FPTAS):
A problem admits an FPTAS if it has a (1 + ε)-approximation algorithm
that runs in time polynomial in both the input size and 1/ε.
Ex: the subset sum problem (a sketch follows below)
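A sketch of the classic trimming-based FPTAS for subset sum (following the standard textbook presentation; for 0 < ε < 1 it returns a subset sum within a factor (1 + ε) of the largest subset sum not exceeding the target t):

    def approx_subset_sum(items, t, eps):
        # keep a trimmed, sorted list L of achievable subset sums <= t;
        # trimming with delta = eps / (2n) keeps the final error within (1 + eps)
        n = len(items)
        delta = eps / (2 * n)
        L = [0]
        for x in items:
            merged = sorted(set(L + [l + x for l in L]))
            trimmed = [merged[0]]
            for y in merged[1:]:
                if y > trimmed[-1] * (1 + delta) and y <= t:
                    trimmed.append(y)
            L = trimmed
        return max(L)

    print(approx_subset_sum([104, 102, 201, 101], 308, 0.40))   # 302; the optimum is 307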

Complexity theory
Time, space, information complexity
Nondeterminism, good characterization,
completeness
Polynomial hierarchy
Classification of many real-life problems
into P vs. NP-complete
Randomization, parallelism
Other developments
Approximation algorithms
positive and negative results
Probabilistic algorithms
Markov chains, high concentration, nibble methods,
phase transitions
Pseudorandom number generators
theory and constructions