Transcript Document

Approximations for hard problems
… just a few examples
… what is an approximation algorithm
… quality of approximations:
from arbitrarily bad to arbitrarily good
Example: Scheduling
Given: n jobs with processing times,
m identical machines.
Problem: Schedule so as to minimize makespan.
Algorithm: List scheduling
Basic idea: In a list of jobs,
schedule the next one as soon as a machine is free
[Figure: jobs a-e list-scheduled on machines 1-4. Good or bad ?]
[Figure: the same schedule with one more job f, which starts at time S and finishes last, at time A.]
job f finishes last, at time A
compare to time OPT of best schedule: how ?
(1) job f must be scheduled in the best schedule too, and its
processing time is A - S: A - S <= OPT.
(2) up to time S, all machines were busy all the time and job f was
not yet included, so the total work exceeds m·S, and OPT
cannot beat that: S < OPT.
(3) both together: A = A – S + S < 2 OPT.
“2-approximation” (Graham, 1966)
Approximations in more generality
P = set of problems with polynomial time solutions
NP = set of problems with polynomial time certificates
“guess and verify”
Example
Problem: Clique.
Given: Graph G = (V,E); positive integer k.
Decide: Does G contain a clique of size k ?
Hardness: with respect to transformation in P
Problem Π is NP-hard:
every problem Π' in NP can be transformed into Π
in polynomial time.
Problem Π is NP-complete: Π is NP-hard
and Π is in NP.
Basis for transformation (reduction): 3-SAT
Theorem: Clique is NP-complete.
Proof: (1) is in NP: guess and verify;
(2) is NP-hard, by reduction:
example clause: (a or b or not c);
literal = vertex;
edge = compatibility of literals;
requested clique size k = number of clauses.
NP membership refers to the decision problem.
Value problem: compute the largest value k for which G contains a clique of size k.
Optimization problem: compute a largest clique in G.
Polynomial relationship:
decision in P ⇒ value in P ⇒ optimization in P.
Problem: Independent set.
Given: G = (V,E); positive integer bound k.
Decide: Is there a subset V’ of V of at least k vertices,
such that no two vertices in V’ share an edge ?
Theorem: Independent set is NP-complete.
Proof: (1) is in NP: guess and verify;
(2) is NP-hard, by reduction from Clique:
build complement graph: edge → no edge,
no edge → edge.
Problem: Minimum vertex cover.
Given: G = (V,E).
Minimize: Find a smallest subset V' of V,
such that every edge in E has an incident
vertex in V' ("is covered").
Theorem: Vertex cover is NP-hard.
Proof: by reduction from Independent set:
vertex v in independent set ⇔ v not in vertex cover.
Lots of hard problems; what can we do ?
Solve problem approximately….
Minimum vertex cover
First idea
Repeat greedily:
pick vertex that covers the largest number of
edges not yet covered,
and remove it and its incident edges.
Not too good … as we will see later
Solve problem approximately….
Minimum vertex cover
Second idea
Repeat greedily:
pick both vertices of an arbitrary edge,
and remove both and their incident edges.
… great, in fact:
Theorem: This is a 2-approximation.
Proof: The picked edges are disjoint, so any cover needs at least one
vertex per picked edge; the algorithm takes two.
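A minimal Python sketch of this second idea (assuming the graph is given as a list of edges); the picked edges form a maximal matching:

```python
def vertex_cover_2approx(edges):
    """Take both endpoints of any edge not yet covered."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge still uncovered
            cover.add(u)
            cover.add(v)
    return cover    # every edge has an endpoint here; size <= 2 * OPT
```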
Solve problem approximately….
Independent set
… by the reduction we know …
… does not work:
assume a graph with 1000 vertices,
minimum vertex cover 499 vertices,
approximate vertex cover of 998 vertices.
⇒ maximum independent set has 501 vertices,
but the approximate independent set has only 2 vertices.
Lesson: polynomial reductions need not preserve approximability;
careful, use special reductions …
Decision versus optimization ?
NPO = set of “NP optimization problems”
= roughly:
verify that a proposed solution is feasible
and compute its value
in polynomial time.
What is an approximation algorithm ?
A is an approximation algorithm for problem Π in NPO:
for any input I, A runs in time polynomial in the length of I
and if I is a legal input, A outputs a feasible solution A(I).
The approximation ratio of A for  on input I is
value (A(I)) / value (OPT(I))
… is at least 1 for minimization problems and
at most 1 for maximization problems.
What is the approximation ratio of an approximation algorithm ?
… the maximum over all inputs for minimization problems,
the minimum over all inputs for maximization problems
(and sometimes only the asymptotic ratio is of interest,
for large problem instances)
Example
Problem: k-center.
Given: G = (V,E); E = V x V; c(i,j) for all edges (i,j) with i ≠ j;
positive integer number k of clusters.
Compute set S of k cluster centers, S a subset of V, such that
the largest distance of any point to its closest cluster center
is minimum.
Theorem: k-center is NP-complete (decision version).
Proof: Vertex cover reduces to dominating set reduces to k-center.
Dominating set (DS): For G=(V,E), find a smallest subset V' of V such that
every vertex is either itself in V' or has a neighbor in V'.
[Figures: VC reduces to DS; DS reduces to k-center with edge costs 1
(edges of G) and 2 (non-edges), and bound 1 on the cluster radius.]
Non-approximability
Theorem: Finding a ratio-M-approximation for fixed M is NP-hard
for k-center.
Proof: Replace 2 in the construction above by more than M.
Theorem: Finding a ratio-less-than-2-approximation is NP-hard
for k-center with triangle inequality.
Proof: Exactly as in the reduction above.
Theorem: A 2-approximation for k-center with triangle inequality
exists.
Proof: Gonzalez’ algorithm.
Pick v1 arbitrarily as the first cluster center.
Pick v2 farthest from v1.
Pick v3 farthest from the closer of v1 and v2.
…
Pick vi farthest from the closest of v1, …, v(i-1).
…
until vk is picked.
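A minimal Python sketch of Gonzalez' farthest-point algorithm, assuming a distance function dist that satisfies the triangle inequality and k <= number of points:

```python
def gonzalez(points, k, dist):
    centers = [points[0]]                         # v1: arbitrary first center
    d = [dist(p, points[0]) for p in points]      # distance to closest center
    for _ in range(k - 1):
        i = max(range(len(points)), key=lambda j: d[j])   # farthest point
        centers.append(points[i])
        d = [min(d[j], dist(points[j], points[i]))
             for j in range(len(points))]         # update closest-center dists
    return centers
```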
Example
Problem: Traveling salesperson with triangle inequality.
Given: G = (V,E); E = V x V; c(i,j) for all edges (i,j) with i ≠ j.
Compute a round trip that visits each vertex exactly once
and has minimum total cost.
Comment: is NP-hard.
Approximation algorithm
Find a minimum spanning tree.
Run around it and take shortcuts to avoid repeated visits.
Quality ?
Theorem: This is a 2-approximation.
Proof: (1) OPT-TSP minus 1 edge is a spanning tree ST.
(2) MST is not longer than any spanning tree.
(3) Both together: APX-TSP <= 2 MST <= 2 ST <= 2 OPT-TSP.
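A minimal Python sketch of this 2-approximation: Prim's algorithm builds the MST of the complete graph, and a preorder walk of the tree is exactly the "run around it and take shortcuts" (function name and input convention are ours):

```python
def double_tree_tsp(n, cost):
    """cost(i, j): symmetric edge costs on vertices 0..n-1."""
    # Prim's algorithm for the MST of the complete graph
    in_tree = {0}
    best = {v: (cost(0, v), 0) for v in range(1, n)}   # (cheapest edge, parent)
    children = {v: [] for v in range(n)}
    while len(in_tree) < n:
        v = min((u for u in range(n) if u not in in_tree),
                key=lambda u: best[u][0])
        in_tree.add(v)
        children[best[v][1]].append(v)
        for u in range(n):
            if u not in in_tree and cost(v, u) < best[u][0]:
                best[u] = (cost(v, u), v)
    # preorder walk = doubled tree with shortcuts past repeated vertices
    tour, stack = [], [0]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    return tour + [0]                                  # close the round trip
```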
Better quality ?
Christofides’ algorithm.
1. Find MST.
2. Find all odd degree vertices V’ in MST.
Comment: There is an even number of them.
3. Find a minimum cost perfect matching for V’ in the induced
subgraph of G (no even vertices present). Call this M.
4. In MST + M, find an Euler circuit.
5. Take shortcuts in the Euler circuit.
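A hedged sketch of these five steps using networkx; the calls assume networkx >= 3.0, where min_weight_matching returns a minimum-weight perfect matching on the even-sized odd-degree vertex set (networkx also ships its own nx.approximation.christofides):

```python
import networkx as nx

def christofides(G):
    """G: complete nx.Graph with 'weight' on every edge."""
    T = nx.minimum_spanning_tree(G, weight="weight")            # step 1
    odd = [v for v in T if T.degree(v) % 2 == 1]                # step 2
    M = nx.min_weight_matching(G.subgraph(odd), weight="weight")  # step 3
    H = nx.MultiGraph(T)
    H.add_edges_from(M)              # MST + M: all degrees even, connected
    tour, seen = [], set()
    for u, _ in nx.eulerian_circuit(H):                         # step 4
        if u not in seen:                                       # step 5: shortcut
            seen.add(u)
            tour.append(u)
    return tour + tour[:1]
```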
Theorem: Christofides’ algorithm is a 1.5-approximation
of TSP with triangle inequality.
Proof:
1. MST <= TSP.
2. M <= TSP / 2
… as we will see soon.
3. Shortcuts make the tour only shorter.
Proof of
2. M <= TSP / 2:
consider the subgraph induced by the odd degree vertices V';
restrict the optimal tour to that subgraph by taking shortcuts,
giving a tour sub-TSP(V') there:
sub-TSP(V') <= TSP (by the triangle inequality);
picking alternate edges in sub-TSP(V’) gives
two perfect matchings for V’,
call them M1 and M2;
pick the cheaper of M1, M2, call it M, with M <= TSP / 2.
Notes:
1. Christofides' algorithm can really be as bad as 1.5 (the bound is tight).
2. It is unknown whether this is best possible.
3. For Euclidean TSP, a better bound is known:
any quality above 1 can be achieved.
4. For TSP without triangle inequality, no fixed approximation
is possible.
Example
Problem: Set cover.
Given: Universe U of elements e1, e2, …, en;
collection of subsets S1, S2, …, Sk of U,
nonnegative cost per Si.
Find a collection of subsets that cover U with minimum cost.
Idea for an approximation:
Repeat greedily
choose a best set (cheapest per new covered element)
until all of U is covered.
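A minimal Python sketch of the greedy rule, assuming the sets and costs come as dicts keyed by set name and that the sets do cover U:

```python
def greedy_set_cover(universe, sets, cost):
    """sets: name -> set of elements; cost: name -> nonnegative cost."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # cheapest price per newly covered element
        name = min((s for s in sets if sets[s] & uncovered),
                   key=lambda s: cost[s] / len(sets[s] & uncovered))
        chosen.append(name)
        uncovered -= sets[name]
    return chosen
```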
Quality ?
Consider a step in the iteration.
The greedy algorithm has selected some of the Sj,
with covered C = union of all selected Sj so far.
Choose Si in this step.
Price per new element in Si is cheapest:
price(i) = cost(Si) / (number of new elements of Si)
is minimum.
Consider the elements in the order they are chosen, and rename:
e1, e2, e3, …, ek, …, en
Consider ek, call the set in which ek is chosen Si.
What is the price of ek?
Bound the price from above:
Instead of Si, the greedy algorithm could have picked any of the
sets in OPT that have not been picked yet (there must be one),
but at which price? ⇒ compare with the optimum.
The sets of OPT that are not picked yet cover all elements not yet
covered, and their total cost is at most all of OPT.
Across these sets, the average price per uncovered element is
therefore at most OPT / |U - C|.
Hence, at least one of the sets in OPT not picked yet has at most
this average price. This set could have been picked.
Hence, the price of ek is at most OPT / |U - C|.
For the k-th picked element, the size of U-C is at least n-k+1.
Therefore, price(ek) <= OPT / (n-k+1) for each ek.
The sum of all prices of ek gives the total cost of the greedy
solution:
SUM(k=1,…,n) price(ek) <= SUM(k=1,…,n) OPT / (n-k+1)
= OPT SUM(k=1,…,n) 1/k
<= OPT (1 + ln n)
Theorem: Greedy set cover gives a (1 + ln n)-approximation.
Notes: It can really be that bad.
That is also essentially the best possible approximation for set cover
(unless P = NP).
Example
Problem: Knapsack.
Given: n objects with weights and values, and weight bound:
positive integers w1,w2, …, wn, W (weights, total weight);
positive integers v1, v2, …, vn (values).
Find a subset of the set of objects with total weight at most W
and maximum total value.
… is NP-hard
An exact algorithm for knapsack
[Table: DP array A with rows j = 1, …, n (objects) and columns
v' = 1, …, n·vmax (values).]
A(j,v’) = smallest weight subset of objects 1,…,j with total value =v’.
inductively:
A(1,v) = w1 if v = v1, else infinity;
A(i+1,v) = min( A(i,v), A(i, v - v(i+1)) + w(i+1) ),
where the second term only applies if v - v(i+1) >= 0.
… the result is: max v such that A(n,v) <= W
… the runtime is: O(n^2 vmax) …. pseudopolynomial
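A minimal Python sketch of this dynamic program, with the table rolled into one value-indexed array (iterating values downward replaces the explicit object index j); since the total value is at most n·vmax, the runtime is the stated O(n^2 vmax):

```python
import math

def knapsack_exact(weights, values, W):
    total = sum(values)                  # <= n * vmax
    A = [math.inf] * (total + 1)         # A[v] = min weight for value exactly v
    A[0] = 0                             # empty set included for convenience
    for w, v in zip(weights, values):
        for val in range(total, v - 1, -1):   # downward: each object used once
            if A[val - v] + w < A[val]:
                A[val] = A[val - v] + w
    return max(v for v in range(total + 1) if A[v] <= W)
```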
pseudopolynomial ?
polynomial if the numbers in the input are small
(= their value is polynomial in the input length)
Idea: scale numbers down, i.e., ignore less significant digits.
Approximation algorithm for knapsack (FPTAS)
1. Given error bound eps > 0, define K := eps vmax / n .
2. For each object i, define the rounded value vi' := floor(vi / K).
3. Use dynamic programming to find optimal solution S
for the rounded problem version.
4. Return the better of the set S and the single most valuable object
(value vmax).
Theorem: Let A be the set of objects so computed.
Then value(A) >= (1 – eps) OPTvalue.
Proof:
Observe the rounding effect, with vi' = floor(vi / K):
(1) K vi' <= vi
(2) vi - K <= K vi'
Sum over all objects in OPT, with value(OPT) = SUM(i in OPT) vi
and value'(OPT) = SUM(i in OPT) vi':
value(OPT) - K value'(OPT) <= K n
But: for the rounded values, S is at least as good as OPT:
value(S) >= K value'(S) >= K value'(OPT)
>= value(OPT) - K n
= value(OPT) - eps vmax.
But: dynamic programming delivers A (and not necessarily S):
value(A) >= value(S) >= value(OPT) - eps vmax >= value(OPT) - eps value(A)
… because vmax <= value(A).
Hence (1 + eps) value(A) >= value(OPT), i.e.
value(A) >= ( 1 / (1+eps) ) value(OPT)
… and that's all, since 1 / (1+eps) >= 1 - eps.
Theorem: This is a fully polynomial approximation scheme.
= given eps, delivers solution with ratio (1-eps) for max
and ratio (1+eps) for min,
and runs in time polynomial in the input size and (1/eps)
Proof:
(1) Quality of the solution: above.
(2) Runtime of dynamic programming:
O(n^2 vmax / K) = O(n^2 floor(n / eps))
= O(n^3 / eps).
Comment: nothing better can exist unless P = NP.
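A self-contained Python sketch of the FPTAS above, redoing the exact DP inline on the rounded values and tracking the chosen index sets; the step-4 fallback assumes each single object fits (wi <= W), and the function name is ours:

```python
import math

def knapsack_fptas(weights, values, W, eps):
    n = len(weights)
    vmax = max(values)
    K = eps * vmax / n                        # step 1: scaling factor
    vr = [int(v // K) for v in values]        # step 2: rounded values vi'
    total = sum(vr)
    # step 3: exact DP; A[v] = (min weight, chosen index set) for rounded value v
    A = [(math.inf, ())] * (total + 1)
    A[0] = (0.0, ())
    for i in range(n):
        for v in range(total, vr[i] - 1, -1):
            w_prev, chosen = A[v - vr[i]]
            if w_prev + weights[i] < A[v][0]:
                A[v] = (w_prev + weights[i], chosen + (i,))
    S = max((chosen for w, chosen in A if w <= W),
            key=lambda c: sum(values[i] for i in c))
    best = max(range(n), key=lambda i: values[i])   # most valuable single object
    # step 4: return the better of S and that object, by original value
    return S if sum(values[i] for i in S) >= values[best] else (best,)
```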
Example: Use set cover to solve other problems approximately
Shortest superstring:
Given set of strings s1, s2, …, sn.
Find a shortest superstring s.
Cleanup: No si is a substring of an sj.
For superstring instance, create set cover instance:
- each si is an element ei of the universe in set cover;
- each legal nonzero overlap of two strings is a set
representing all given strings that it contains
Example: TCGCG and GCGAA overlap as TCGCGAA (overlap GCG)
and as TCGCGCGAA (overlap G).
Cost of a set = string length of the “overlap string”.
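A minimal Python sketch of this construction (the function name is ours; singleton sets are added as a fallback so that a cover always exists):

```python
def superstring_sets(strings):
    """Map each overlap string to the set of input strings it contains."""
    sets = {}
    for a in strings:
        for b in strings:
            if a == b:
                continue
            for k in range(1, min(len(a), len(b))):  # legal nonzero overlaps
                if a[-k:] == b[:k]:
                    s = a + b[k:]
                    sets[s] = {t for t in strings if t in s}
    for t in strings:                                 # singletons as fallback
        sets[t] = {u for u in strings if u in t}
    return sets    # cost of each set = len() of its overlap string
```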
Algorithm: Solve set cover approximately.
Concatenate strings of chosen sets in any order.
(1) The solution is a superstring.
(2) Quality of the solution?
Quality Lemma: For OPT solution of superstring problem,
there is a set cover of “string length” at most 2 OPT.
Proof:
[Figure: the OPT superstring with the first occurrence of each si marked;
consecutive overlapping first occurrences form groups 1, 2, 3, …]
Per group: the corresponding overlap set covers the group.
All these group sets form a set cover. Its string length is
at most 2 OPT (only adjacent groups can overlap).
⇒ Approx set cover <= 2 (1 + ln n) OPT superstring.
Bin packing
Given n items with sizes a1, a2, …, an in [0,1].
Minimize the number of unit size bins to pack the items.
Algorithm First Fit (FF)
Pack the next item in the leftmost bin that can take it.
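A minimal Python sketch of First Fit, with bins represented by their current load:

```python
def first_fit(items):
    """items: sizes in [0, 1]; returns the number of unit bins used."""
    bins = []                          # current load per bin
    for a in items:
        for i, load in enumerate(bins):
            if load + a <= 1.0:        # leftmost bin that can take the item
                bins[i] += a
                break
        else:
            bins.append(a)             # no bin fits: open a new one
    return len(bins)
```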
Quality
Out of the m bins that FF uses, at least m-1 are more than half full
(two bins at most half full cannot both occur: First Fit would have put
the later items into the earlier bin).
(m-1)/2 < sum of all ai <= OPT number of bins
⇒ m - 1 < 2 OPT
⇒ m <= 2 OPT
2-approximation
Lower bound on the approximation ratio?
Theorem: No bin packing approximation algorithm
with ratio less than 1.5 exists.
Proof: Reduce from PARTITION with a1, a2, …, an
and bins of size half the sum of all ai.
Answer “yes” if 2 bins are enough, “no” otherwise.
⇒ an approximation better than 3/2 must solve PARTITION exactly.
But: Small instances are boring.
Large instances, high number of bins?
Theorem
For any e(psilon), 0 < e <= ½, there is an algorithm Ae
with runtime polynomial in n
that finds a packing with <= (1 + 2 e) OPT + 1 bins.
“asymptotic PTAS”: for any eps > 0 there is an N > 0
s.t. for instances with OPT >= N, approx <= (1 + eps) OPT
Proof:
(1) The structure of the algorithm
(2) The details and quality analysis
(1) The structure of the algorithm
1. Remove all items of size < e(psilon).
2. Round the item sizes [see B] so that only a constant number
of different item sizes remains.
3. Find an optimal packing for the rounded items [see A and B].
4. Use this packing for the original items.
5. Pack the items with size < e with First Fit.
(2) The details and quality analysis
[A] Find optimal packing for special case of
<= K different item sizes
and each item size >= e
Lemma: This can be done optimally in polynomial time,
for fixed values of K and e.
Proof:
per bin: the number m of items is <= floor(1/e)
⇒ the number t of different bin types (no. of items per size)
is a function of m and K only ⇒ constant.
Total number of bins used is <= n
⇒ the total number of possible feasible packings is
polynomial in n (but not in 1/e).
Algorithm enumerates them all and picks the best.
[B] Lemma: Given items with size >= e, there is an algorithm with
runtime polynomial in n and approximation factor 1+e.
Proof: let input instance be I = {a1, a2, …. , an}.
- Sort items by increasing size.
- Partition the items into K = ceil(1/e^2) groups
⇒ each group has <= floor(n e^2) items.
- Round up the size of each item to the largest in its group:
this instance J has <= K different item sizes.
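A minimal Python sketch of this sort-and-round step (the function name and the max(1, …) guard for very small n·e^2 are our additions):

```python
import math

def round_up_by_groups(sizes, e):
    s = sorted(sizes)                        # increasing item size
    n = len(s)
    q = max(1, math.floor(n * e * e))        # items per group
    rounded = []
    for start in range(0, n, q):
        group = s[start:start + q]
        rounded += [group[-1]] * len(group)  # round up to the group maximum
    return rounded                           # ~ceil(1/e^2) distinct sizes
```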
By [A], J can be solved optimally in time poly in n.
Solution for J is valid for the original items; is it good?
Quality lemma: OPT(J) <= (1+e) OPT (I).
Proof:
Define J' in analogy with J, but rounded down to the smallest size per group:
… obvious: OPT(J’) <= OPT(I).
Trick: Discard highest group of J and lowest group of J’,
and match the remaining groups.
[Figure: after the discards, each remaining group of J is matched to the
next-higher group of J'.]
A packing for J’ yields a packing for all except the group
of largest size items of J.
… pack each of these floor (n e^2) large items in its own bin
OPT(J) <= OPT(J’) + floor (n e^2)
<= OPT(I) + floor (n e^2)
We have: OPT(J) <= OPT(I) + floor(n e^2).
Since no small items are present, the total size is >= n e: OPT(I) >= n e
⇒ floor(n e^2) <= e OPT(I)
OPT (J) <= OPT(I) + e OPT (I) = (1 + e) OPT (I).
… this proves the quality lemma.
Remainder of the algorithm: Step 5
{situation: we have approx solution for input I without small items}
Put the small items of the original input origI into the bins by First Fit.
Case 1: no extra bins are needed for this.
number of bins is <= (1+e) OPT(I) <= (1+e) OPT(origI).
Case 2: extra bins are needed, totalling M bins.
⇒ at least M-1 of them are full to at least level 1-e
⇒ sum of all item sizes >= (M-1)(1-e)
⇒ OPT(origI) >= (M-1)(1-e)
⇒ M <= OPT(origI) / (1-e) + 1 <= (1+2e) OPT(origI) + 1.
… an asymptotic PTAS for bin packing, but not an FPTAS.
TSP: the triangle inequality makes a huge difference….
New approximation algorithm for TSP
Idea: Build MST and go around it. Take shortcuts differently.
[Figure: MST edge e = (p,q); subtree T1 contains p and p',
subtree T2 contains q and q'.]
T1, T2 are two parts of the MST.
Invariant in the induction: path within T1 contains edge (p,p’) once
and each other edge of T1 twice.
inductive step:
add the path p’, p, q, q’
preserves the invariant: edge (p,q) once, any other edge twice.
Induction:
Basis: 1 vertex ⇒ vacuously true;
2 vertices p, p' ⇒ path is edge (p,p').
Rest by induction as above….
⇒ effect: each shortcut spans at most three tree edges at a time.
Algorithm (Sekanina)
1. Build MST T.
2. Build T^3.
3. Find a round trip in T^3 such that each edge of T appears exactly twice
(induction, and a single extra edge at the very end).
Quality for TSP with slight violation of the triangle inequality:
cost(i,j) <= (1+r) (cost(i,k) + cost(k,j)) for all i,j,k
⇒ a shortcut increases the length by a factor <= (1+r)^2
⇒ approximation ratio 2 (1+r)^2
… stability of the approximation
Stability of approximation
- what about r < 0, e.g. r = -1/2 in the extreme ?
- what about other problems with stable approximations ?
Summary
Problems in NPO have very different approximability properties:
… some are impossible to approximate (general k-center, TSP without
triangle inequality)
… some are hard, with a bound depending on the input size (set cover)
… some can be approximated with some constant ratio
(vertex cover, k-center with triangle inequality,
TSP with triangle inequality)
… and some can be approximated as closely as you like (knapsack)
Approximability has its own hierarchy of complexity classes