
Strong LP Formulations & Primal-Dual Approximation Algorithms

David Shmoys (joint work with Tim Carnes & Maurice Cheung), June 23, 2011

Introduction

The standard approach to solving combinatorial integer programs in practice: start with a "simple" formulation and add valid inequalities. Our agenda: show that the same approach can be theoretically justified by approximation algorithms. An α-approximation algorithm produces, in polynomial time, a solution of cost within a factor of α of the optimal solution.

Introduction

The primal-dual method is a leading approach in the design of approximation algorithms for NP-hard problems. We consider several capacitated covering problems:
- the covering knapsack problem
- single-machine scheduling problems
and give constant approximation algorithms based on strong linear programming (LP) formulations.

Approximation Algorithms and LP

Use LP to design approximation algorithms: the optimal value of the LP gives a bound on the optimal integer programming (IP) value, and we want to find a feasible IP solution of value within a factor of α of the optimal LP solution. The key is to start with the "right" LP relaxation. An LP-based approximation algorithm produces an additional performance guarantee on each problem instance. The empirical success of IP cutting-plane methods suggests stronger formulations: this needs theory!

Primal-Dual Approximation Algorithms

We do not even need to solve the LP!

Each minimization LP has a dual maximization LP of equal optimal value. Goal: construct a feasible integer solution S along with a feasible solution D to the dual of the LP relaxation such that cost(S) ≤ α · cost(D) ≤ α · LP-OPT ≤ α · OPT, which yields an α-approximation algorithm.

Adding Valid Inequalities to LP

An LP formulation can be too "weak" if there is a "big" integrality gap: OPT/LP-OPT is often unbounded. This is fixed by adding additional inequalities to the formulation. These restrict the set of feasible LP solutions, yet are satisfied by all integer solutions, hence "valid". Adding valid inequalities is a key technique in practical codes to solve integer programs.

Knapsack-Cover Inequalities

Carr, Fleischer, Leung and Phillips (2000) developed valid knapsack-cover inequalities and LP-rounding algorithms for several capacitated covering problems. Their approach requires solving the LP with the ellipsoid method, and is further complicated since the inequalities are not known to be separable. GOAL: develop a primal-dual analog!

Highlights of Our Results

For each of the following problems, we have a primal-dual algorithm that achieves a performance guarantee of 2:

- Min-Cost (Covering) Knapsack
- Single-Demand Capacitated Facility Location (using the valid knapsack-cover inequalities developed by Carr, Fleischer, Leung and Phillips as the LP formulation)
- Single-Item Capacitated Lot-Sizing (we extend the knapsack-cover inequalities to handle this more general setting)
- Single-Machine Minimum-Weight Late Jobs 1|| Σ w_j U_j
- Single-Machine General Minimum-Sum Scheduling 1|| Σ f_j


Min-Sum 1-Machine Scheduling 1|| Σ f_j

Each job j has a cost function f_j(C_j) that is a non-negative, non-decreasing function of its completion time C_j. Goal: minimize Σ_j f_j(C_j). What is known? Bansal & Pruhs (FOCS '10) gave the first constant-factor algorithm. The main result of Bansal-Pruhs adds release dates and permits preemption; the result is an O(log log(nP))-approximation algorithm. OPEN QUESTION: is a constant factor doable?
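For concreteness, the objective Σ_j f_j(C_j) for a fixed processing order can be evaluated as follows (a toy sketch, not from the talk; the instance and names are hypothetical):

```python
def schedule_cost(jobs, order):
    """Total cost sum_j f_j(C_j) for one machine processing jobs in `order`.

    jobs: dict name -> (p_j, f_j), where f_j is a nonnegative,
    nondecreasing function of the completion time C_j.
    """
    t, total = 0, 0
    for j in order:
        p, f = jobs[j]
        t += p            # job j completes at C_j = t
        total += f(t)     # incur cost f_j(C_j)
    return total

# Hypothetical instance with weighted-completion-time costs f_j(C) = w_j * C:
jobs = {"a": (1, lambda t: 2 * t), "b": (2, lambda t: 1 * t)}
print(schedule_cost(jobs, ["a", "b"]))   # 2*1 + 1*3 = 5
print(schedule_cost(jobs, ["b", "a"]))   # 1*2 + 2*3 = 8
```

Note that even this tiny instance shows the order matters; the difficulty of 1|| Σ f_j lies in choosing the order for arbitrary nondecreasing cost functions.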

  

Open Problems

Better constant factors?

Any constant factor?

Primal-dual when rounding is known?

But there is nothing of the type: a good constant factor is known, but is a factor of 1+ε possible for any ε > 0?

Min-Sum 1-Machine Scheduling 1|| Σ f_j

- Can 1+ε be achieved without release dates?

Primal-Dual for Covering Problems

Early primal-dual algorithms:

- Bar-Yehuda and Even (1981): weighted vertex cover
- Chvátal (1979): weighted set cover
- Agrawal, Klein and Ravi (1995) and Goemans and Williamson (1995): generalized Steiner (cut covering) problems
- Bertsimas & Teo (1998)
- Jain & Vazirani (1999): uncapacitated facility location problem
- Inventory problems: Levi, Roundy and Shmoys (2006)

Minimum (Covering) Knapsack Problem

Given a set of items F, each with a cost c_i and a value u_i, we want to find a subset of items of minimum cost such that the total value is at least D:

minimize Σ_{i ∈ F} c_i x_i
subject to Σ_{i ∈ F} u_i x_i ≥ D
x_i ∈ {0,1} for each i ∈ F
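As a baseline for the exact problem (not the talk's primal-dual method), the IP above can be solved by a standard pseudo-polynomial dynamic program when the demand and item values are integers; a sketch:

```python
def min_knapsack(costs, values, D):
    """Exact min-cost covering knapsack by DP over the residual demand.

    Baseline for comparison, not the talk's algorithm: O(n * D) time,
    assuming the demand D and the item values are integers.
    """
    INF = float("inf")
    best = [INF] * (D + 1)       # best[d] = min cost to reach covered value d
    best[0] = 0.0
    for c, u in zip(costs, values):
        for d in range(D, -1, -1):       # descending: each item used at most once
            if best[d] < INF:
                nd = min(D, d + u)       # covering beyond D is capped at D
                best[nd] = min(best[nd], best[d] + c)
    return best[D]
```

On the five-item example used later in the talk (costs 2.5, 2, 0.5, 10, 1.5 with values 2, 1, 2, 5, 2 and D = 5), this returns the optimal integer cost 4.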

Bad Integrality Gap

Consider the min knapsack problem with the following two items: c_1 = 1, u_1 = D and c_2 = 0, u_2 = D−1. An integer solution must take item 1 and incurs a cost of 1. The LP solution can take all of item 2 and just a 1/D fraction of item 1, incurring a cost of 1/D. Hence (integer optimum)/(LP optimum) = 1/(1/D) = D, which is unbounded.
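For a single covering constraint with 0 ≤ x_i ≤ 1, the LP optimum can be computed greedily by cost-per-unit-value ratio, which makes the gap easy to see numerically (a sketch of mine, not from the talk; it assumes all values are positive):

```python
def covering_lp_opt(costs, values, D):
    """Fractional optimum of min{c.x : u.x >= D, 0 <= x <= 1}.

    For a single covering constraint, taking items greedily in increasing
    cost-per-unit-value order (the last item fractionally) is LP-optimal.
    Assumes all values are positive.
    """
    items = sorted(zip(costs, values), key=lambda cu: cu[0] / cu[1])
    need, cost = D, 0.0
    for c, u in items:
        take = min(1.0, need / u)   # fraction of this item to take
        cost += c * take
        need -= u * take
        if need <= 0:
            break
    return cost

# The two-item gap instance from the slide, with D = 100:
D = 100
print(covering_lp_opt([1.0, 0.0], [D, D - 1], D))   # 0.01, i.e. 1/D
```

The integer optimum is 1, so the ratio between the two is exactly D = 100 here, matching the unbounded-gap argument above.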

Knapsack-Cover Inequalities

Proposed by Carr, Fleischer, Leung and Phillips (2000). Consider a subset A of the items in F. If we were to take all items in A, then we would still need to take enough items to meet the leftover demand:

Σ_{i ∈ F\A} u_i x_i ≥ D − u(A), where u(A) = Σ_{i ∈ A} u_i.

(Figure: items 1–7, with A = {1, 2, 3}.)

Knapsack-Cover Inequalities

Σ_{i ∈ F\A} u_i x_i ≥ D − u(A)

This inequality adds nothing new, but we can now restrict the values of the items:

Σ_{i ∈ F\A} u_i(A) x_i ≥ D − u(A), where u_i(A) = min{u_i, D − u(A)},

since these inequalities only need to be valid for integer solutions. (Figure: items 1–7 against demand D, with A = {1, 2, 3}.)
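The validity claim can be checked by brute force on small instances; in this sketch (names are mine, not the talk's), `kc_coeffs` computes the truncated coefficients u_i(A) and `kc_valid` tests every feasible 0/1 solution against every knapsack-cover inequality:

```python
from itertools import combinations, product

def kc_coeffs(u, D, A):
    """Truncated coefficients u_i(A) = min{u_i, D - u(A)} for i not in A."""
    residual = max(D - sum(u[i] for i in A), 0)
    return {i: min(u[i], residual) for i in range(len(u)) if i not in A}, residual

def kc_valid(u, D):
    """Brute-force check that every feasible 0/1 solution satisfies every
    knapsack-cover inequality (exponential time; small instances only)."""
    n = len(u)
    for x in product([0, 1], repeat=n):
        if sum(u[i] * x[i] for i in range(n)) < D:
            continue                      # not a feasible knapsack solution
        for r in range(n + 1):
            for A in combinations(range(n), r):
                coeffs, residual = kc_coeffs(u, D, set(A))
                if sum(c * x[i] for i, c in coeffs.items()) < residual:
                    return False
    return True
```

For example, with values (2, 1, 2, 5, 2) and D = 5, taking A = {item 3} (index 2) gives residual demand 3, so the coefficient of the value-5 item is truncated from 5 to 3.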

Strengthened Min Knapsack LP

When A = ∅, the knapsack-cover inequality Σ_{i ∈ F\A} u_i(A) x_i ≥ D − u(A) becomes Σ_{i ∈ F} u_i x_i ≥ D, which is the original min knapsack inequality.

The new strengthened LP is:

minimize Σ_{i ∈ F} c_i x_i
subject to Σ_{i ∈ F\A} u_i(A) x_i ≥ D − u(A), for each subset A ⊆ F
x_i ≥ 0, for each i ∈ F

Dual Linear Program

The dual of the LP formed by the knapsack-cover inequalities is:

opt_Dual := max Σ_{A ⊆ F} (D − u(A)) v(A)
subject to Σ_{A ⊆ F: i ∉ A} u_i(A) v(A) ≤ c_i, for each i ∈ F
v(A) ≥ 0, for each A ⊆ F

Primal-Dual Example

Instance: D = 5, with items
c_1 = 2.5, u_1 = 2; c_2 = 2, u_2 = 1; c_3 = 0.5, u_3 = 2; c_4 = 10, u_4 = 5; c_5 = 1.5, u_5 = 2.

- A = ∅, D − u(A) = 5: increase v(∅) until item 3's dual constraint becomes tight, at v(∅) = 0.25; add item 3.
- A = {3}, D − u(A) = 3: increase v({3}) until item 5 becomes tight, at v({3}) = 0.5; add item 5.
- A = {3, 5}, D − u(A) = 1: increase v({3,5}) until item 1 becomes tight, at v({3,5}) = 1; add item 1.
- u({1, 3, 5}) = 6 ≥ D, so stop.

Primal-dual cost = c_1 + c_3 + c_5 = 4.5; optimal integer cost = 4 (items 2, 3, and 5).

Primal-Dual Summary

- Start with all variables set to zero and the solution A as the empty set.
- Increase the variable v(A) until a dual constraint becomes tight for some item i.
- Add item i to the solution A and repeat.
- Stop once the solution A has large enough value to meet the demand D.
- Call the final solution S and set x_i = 1 for all i ∈ S.
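These steps can be sketched in code; the following is my reconstruction of the loop above (function and variable names are my own, not the authors'), assuming a feasible instance with Σ_i u_i ≥ D:

```python
def primal_dual_knapsack(costs, values, D):
    """Primal-dual 2-approximation for the min-cost covering knapsack.

    Repeatedly raises the dual variable v(A) for the current set A until
    some item's dual constraint c_i >= sum_A u_i(A) v(A) becomes tight,
    then adds that item. Assumes sum(values) >= D (feasible instance).
    """
    n = len(costs)
    slack = list(costs)          # remaining slack c_i - sum_A u_i(A) v(A)
    A, covered, dual_value = set(), 0, 0.0
    while covered < D:
        residual = D - covered   # current right-hand side D - u(A)
        # truncated coefficients u_i(A) = min{u_i, D - u(A)}
        rate = {i: min(values[i], residual) for i in range(n) if i not in A}
        # raise v(A) until the first dual constraint becomes tight
        i_star = min(rate, key=lambda i: slack[i] / rate[i])
        delta = slack[i_star] / rate[i_star]
        dual_value += residual * delta
        for i in rate:
            slack[i] -= rate[i] * delta
        A.add(i_star)
        covered += values[i_star]
    return A, sum(costs[i] for i in A), dual_value
```

On the five-item example with D = 5 from the previous slides, this selects items {1, 3, 5} (0-indexed {0, 2, 4}) with cost 4.5 and dual value 3.75, so cost ≤ 2 · dual, as the analysis guarantees.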

Analysis

Let l be the last item added to the solution S. If we increased the dual variable v(A), then l was not in A. Thus if v(A) > 0 then A ⊆ S \ {l}. Since u(S \ {l}) < D, we get u((S \ {l}) \ A) < D − u(A).

Analysis (continued)

We have u((S \ {l}) \ A) < D − u(A) whenever v(A) > 0. The cost of the solution is then bounded via the dual LP: since each i ∈ S has a tight dual constraint,

Σ_{i ∈ S} c_i = Σ_{i ∈ S} Σ_{A: i ∉ A} u_i(A) v(A) = Σ_A v(A) Σ_{i ∈ S\A} u_i(A) ≤ Σ_A 2(D − u(A)) v(A) ≤ 2 OPT,

using that Σ_{i ∈ (S \ {l}) \ A} u_i(A) < D − u(A) and u_l(A) ≤ D − u(A).

Primal-Dual Theorem

For the min-cost covering knapsack problem, the LP relaxation with knapsack-cover inequalities can be used to derive a (simple) primal-dual 2-approximation algorithm.

Knapsack-Cover Inequalities Everywhere

- Bansal, Buchbinder, Naor (2008): randomized competitive algorithms for generalized caching (and weighted paging)
- Bansal, Gupta, & Krishnaswamy (2010): 485-approximation algorithm for generalized min-sum set cover
- Bansal & Pruhs (2010): O(log log nP)-approximation algorithm for general 1-machine preemptive scheduling, and O(1) with identical deadlines

Minimum-Weight Late Jobs on 1 Machine

Each job j has a processing time p_j, a deadline d_j, and a weight w_j. Choose a subset L of jobs of minimum weight to be late; all other jobs must be scheduled to complete by their deadlines. This problem is (weakly) NP-hard: it can be solved in O(n Σ_j p_j) time [Lawler & Moore], and there is a (1+ε)-approximation in O(n³/ε) time [Sahni]. If there also are release dates that constrain when a job may start, no approximation result is possible; instead one can focus on the max-weight set of jobs scheduled on time [Bar-Noy, Bar-Yehuda, Freund, Naor, & Schieber], or allow preemption [Bansal & Pruhs].
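The Lawler & Moore pseudo-polynomial bound quoted above comes from a standard dynamic program over the on-time load; a textbook-style sketch (this is the classical DP, not the talk's primal-dual algorithm):

```python
def min_weight_late(jobs):
    """Lawler-Moore DP for 1|| sum w_j U_j: min total weight of late jobs.

    jobs: list of (p_j, d_j, w_j) with integer processing times.
    O(n * sum p_j) time: best[t] is the max weight of an on-time set
    with total on-time processing time t, scanning jobs in Earliest
    Due Date order.
    """
    jobs = sorted(jobs, key=lambda j: j[1])      # EDD order
    P = sum(p for p, _, _ in jobs)
    NEG = float("-inf")
    best = [NEG] * (P + 1)
    best[0] = 0
    for p, d, w in jobs:
        # a job finishing at t + p is on time only if t + p <= d_j
        for t in range(min(d, P) - p, -1, -1):
            if best[t] > NEG and best[t] + w > best[t + p]:
                best[t + p] = best[t] + w
    return sum(w for _, _, w in jobs) - max(best)
```

For instance, with jobs (p, d, w) = (2, 2, 3), (2, 2, 5), (1, 3, 1), only one of the two deadline-2 jobs fits on time, so the weight-3 job goes late and the answer is 3.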

What if all deadlines are the same?

Let P = Σ_j p_j be the total processing time. WLOG assume the schedule runs through [0,P]. With a common deadline D, at least P − D units of processing must be done after D. So we just select a set of jobs of total processing at least P − D and of minimum total weight: this is exactly the minimum-cost covering knapsack problem.
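As a sanity check of this reduction, here is a brute-force sketch (exponential time, purely to illustrate that the late set is a covering-knapsack solution; the names are mine):

```python
from itertools import combinations

def late_set_common_deadline(jobs, D):
    """Minimum-weight late set when all jobs share the deadline D.

    jobs: list of (p_j, w_j). The late set must carry total processing
    at least P - D; solved here by brute force over subsets, purely to
    illustrate the reduction to min-cost covering knapsack.
    """
    P = sum(p for p, _ in jobs)
    need = P - D                       # demand of the covering knapsack
    if need <= 0:
        return set()                   # everything fits on time
    best, best_w = None, float("inf")
    for r in range(len(jobs) + 1):
        for subset in combinations(range(len(jobs)), r):
            if sum(jobs[i][0] for i in subset) >= need:
                w = sum(jobs[i][1] for i in subset)
                if w < best_w:
                    best, best_w = set(subset), w
    return best
```

With jobs (p, w) = (2, 3), (2, 5), (1, 1) and D = 2, we have P = 5 and demand 3, and the cheapest late set is jobs 1 and 3 (weight 4).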

Same Idea for General Deadlines

Let P = Σ_j p_j be the total processing time; WLOG assume the schedule runs through [0,P]. Assume d_1 ≤ d_2 ≤ … ≤ d_n. Deadline d_i implies that, among all jobs with deadlines ≤ d_i, at least P(i) − d_i units of processing are done after d_i, where S(i) = {j : d_j ≤ d_i} and P(i) = Σ_{j ∈ S(i)} p_j. With y_j = 1 meaning that job j is late:

minimize Σ_j w_j y_j
subject to Σ_{j ∈ S(i)} p_j y_j ≥ P(i) − d_i, i = 1,…,n
y_j ≥ 0, j = 1,…,n

Strengthened LP – Knapsack Covers

minimize Σ_j w_j y_j
subject to Σ_{j ∈ S(L,i)} p_j(L,i) y_j ≥ D(L,i), for each L, i
y_j ≥ 0, for each j

where S(L,i) = {j : d_j ≤ d_i, j ∉ L}, D(L,i) = max{Σ_{j ∈ S(L,i)} p_j − d_i, 0}, and p_j(L,i) = min{p_j, D(L,i)}.

Dual:

maximize Σ_{L,i} D(L,i) v(L,i)
subject to Σ_{(L,i): j ∈ S(L,i)} p_j(L,i) v(L,i) ≤ w_j, for each j
v(L,i) ≥ 0, for each L, i

Primal-Dual Summary

- Start with all dual variables set to 0 and the solution A as the empty set.
- Increase the variable v(A,i) with the largest D(A,i) until a dual constraint becomes tight for some job j.
- Add job j to the solution A and repeat.
- Stop once the solution A is sufficient, i.e., the remaining jobs N − A can be scheduled on time.
- Examine each job j in A in reverse order and delete j if the reduced late set is still feasible.
- Call the final solution L* and set y_j = 1 for all j ∈ L*.

Highlights of the Analysis

Lemma. Suppose the current iteration increases v(L,i), and let L(i) be the jobs put in the final late set L* afterwards. Then there exists a job k ∈ L(i) such that L* − {k} is not feasible.

Note: in the previous case, since all deadlines were equal, the last job l added satisfied this property.

Here, the reverse-delete process is set up exactly to ensure that the Lemma holds.

Highlights of the Analysis

Lemma. Σ_{j ∈ (L(i) − {k}) ∩ S(L,i)} p_j(L,i) < D(L,i) if v(L,i) > 0.

Fact. p_k(L,i) ≤ D(L,i) (by the definition of p_k(L,i)).

Corollary. Σ_{j ∈ L(i) ∩ S(L,i)} p_j(L,i) < 2 D(L,i) if v(L,i) > 0.

(This parallels the earlier min-knapsack analysis, where u((S \ {l}) \ A) < D − u(A) whenever v(A) > 0 bounded the cost by the dual LP.)

Highlights of the Analysis

By the Corollary, Σ_{j ∈ L(i) ∩ S(L,i)} p_j(L,i) < 2 D(L,i) whenever v(L,i) > 0, so the same trick applies here:

Σ_{j ∈ L*} w_j = Σ_{j ∈ L*} Σ_{(L,i): j ∈ S(L,i)} p_j(L,i) v(L,i) = Σ_{(L,i)} v(L,i) Σ_{j ∈ L(i) ∩ S(L,i)} p_j(L,i) ≤ Σ_{(L,i)} 2 D(L,i) v(L,i) ≤ 2 OPT.

Primal-Dual Theorem

For the 1-machine min-weight late jobs scheduling problem, the LP relaxation with knapsack-cover inequalities can be used to derive a (simple) primal-dual 2-approximation algorithm.

General 1-Machine Min-Cost Scheduling

Each job j has its own nondecreasing cost function f_j(C_j), where C_j denotes the completion time of job j. Assume that all processing times are integers. Goal: construct a schedule to minimize the total cost incurred. LP variables: x_jt = 1 means job j has C_j = t. Knapsack-cover constraint: for each t and L, require that the total processing time of jobs finishing at time t or later is sufficiently large.

Primal-Dual Theorem(s)

For 1-machine min-cost scheduling, LP relaxation with knapsack-cover inequalities can be used to derive a (simple) primal-dual pseudo-polynomial 2-approximation algorithm.

For 1-machine min-cost scheduling, the LP relaxation with knapsack-cover inequalities can be used to derive a (simple) primal-dual (2+ε)-approximation algorithm.

“Weak” LP Relaxation

Let P = Σ_j p_j be the total processing time; WLOG assume the schedule runs through [0,P]. x_jt = 1 means job j completes at time t.

minimize Σ_j Σ_t f_j(t) x_jt
subject to Σ_{t ∈ {1,…,P}} x_jt = 1, j = 1,…,n
Σ_j Σ_{s ∈ {t,…,P}} p_j x_js ≥ D(t), t = 1,…,P
x_jt ≥ 0, j = 1,…,n; t = 1,…,P

where D(t) = P − t + 1.

Strong LP Relaxation

Let P = Σ_j p_j be the total processing time; WLOG assume the schedule runs through [0,P]. x_jt = 1 means job j completes at time t.

minimize Σ_j Σ_t f_j(t) x_jt
subject to Σ_{t ∈ {1,…,P}} x_jt = 1, for all j
Σ_{j ∉ L} Σ_{s ∈ {t,…,P}} p_j(L,t) x_js ≥ D(L,t), for all L, t
x_jt ≥ 0, for all j, t

where D(t) = P − t + 1, D(L,t) = max{0, D(t) − Σ_{j ∈ L} p_j}, and p_j(L,t) = min{p_j, D(L,t)}.

Primal and Dual LP

minimize Σ_j Σ_t f_j(t) x_jt
subject to Σ_{t ∈ {1,…,P}} x_jt = 1, for all j
Σ_{j ∉ L} Σ_{s = t,…,P} p_j(L,t) x_js ≥ D(L,t), for all L, t
x_jt ≥ 0, for all j, t

where D(t) = P − t + 1, D(L,t) = max{0, Σ_{j ∉ L} p_j − t + 1}, and p_j(L,t) = min{p_j, D(L,t)}.

maximize Σ_L Σ_t D(L,t) v(L,t)
subject to Σ_{L: j ∉ L} Σ_{t = 1,…,s} p_j(L,t) v(L,t) ≤ f_j(s), for all j, s
v(L,t) ≥ 0, for all L, t

Primal-Dual Summary

- Start with all dual variables set to 0 and each A_t = ∅.
- Increase the variable v(A_t, t) with the largest D(A_t, t) until a dual constraint becomes tight for some job j (break ties by selecting the latest time).
- Add job j to the solution A_s for all s ≤ t and repeat.
- Stop once the solution is sufficient, i.e., the remaining jobs N − A satisfy all demand constraints.
- Focus on the pairs (j,t) where t is the latest time for which job j is in A_t, and perform a reverse delete.
- Set d_j = t for job j according to the remaining pairs (j,t), and schedule in Earliest Due Date order.

Primal-Dual Theorem

For 1-machine min-cost scheduling, LP relaxation with knapsack-cover inequalities can be used to derive a (simple) primal-dual pseudo-polynomial 2-approximation algorithm.

Removing the "Pseudo" with a (1+ε) Loss

This requires only standard techniques

For each job j, partition the potential job completion times {1,…,P} into blocks so that within a block the cost for j increases by a factor of at most 1+ε. Consider the finest partition based on all n jobs. Now consider variables x_jt that assign job j to finish in block t of this partition.

All other details remain basically the same.
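One way to compute such a partition for a single job, assuming its cost function f is given on the integer times 1,…,P (a sketch of mine, not the authors' construction):

```python
def blocks_for_job(f, P, eps):
    """Partition completion times {1,...,P} into blocks within which the
    cost function f increases by a factor of at most 1 + eps.

    Returns a list of (start, end) intervals; f must be nondecreasing.
    """
    blocks, s = [], 1
    for t in range(2, P + 1):
        if f(t) > (1 + eps) * f(s):    # cost grew too much: close the block
            blocks.append((s, t - 1))
            s = t
    blocks.append((s, P))
    return blocks

# With f(t) = t^2, P = 8 and eps = 1, each block's cost varies by <= 2x:
print(blocks_for_job(lambda t: t * t, 8, 1.0))
# [(1, 1), (2, 2), (3, 4), (5, 7), (8, 8)]
```

Since f can only cross a (1+ε) threshold O(log_{1+ε} f(P)) times (plus one block of zero-cost times), the number of blocks per job is polynomial in the input size and 1/ε, which is what removes the pseudo-polynomial dependence on P.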

Fringe Benefit: more general models, such as possible periods of machine non-availability

Primal-Dual Theorem

For 1-machine min-cost scheduling, the LP relaxation with knapsack-cover inequalities can be used to derive a (simple) primal-dual (2+ε)-approximation algorithm.

Some Open Problems

- Give a constant approximation algorithm for 1-machine min-sum scheduling with release dates, allowing preemption.
- Give a (1+ε)-approximation algorithm for 1-machine min-sum scheduling, for arbitrarily small ε > 0.
- Give an LP-based constant approximation algorithm for capacitated facility location.
- Use the "configuration LP" to find an approximation algorithm for the bin-packing problem that uses at most ONE bin more than optimal.

Thank you!

Any questions?