
Chapter 4
Sensitivity Analysis, Duality and
Interior Point Methods
4.1 SENSITIVITY ANALYSIS
• The basic assumption that all the coefficients of a linear
programming model are known with certainty rarely
holds in practice.
=> it may be expedient to simplify causal relationships
and to omit certain variables or constraints at the
beginning of the analysis to achieve tractability.
• Sensitivity analysis can be used to efficiently re-optimize
a model when a parameter is changed or a new constraint
or new variable is added to the model.
Sensitivity to Variation in the Right-Hand Side
• For every basis B associated with an LP, there is a corresponding
set of m dual variables π = (π1, ..., πm), one for each row. The
optimal values of the dual variables can be interpreted as prices.
• Consider the LP: Maximize {cx : Ax = b, x ≧ 0}.
Suppose that the optimal basis is B with solution (x_B, 0), where
x_B = B⁻¹b and π = c_B B⁻¹ (unrestricted in sign).
Now, assuming nondegeneracy, small changes Δb in the vector b
will not cause the optimal basis to change. Thus, for b + Δb, the
optimal solution is
x = (x_B + Δx_B, 0)
where Δx_B = B⁻¹Δb.
• The corresponding increment in the objective function is
Δz = c_BΔx_B = c_B B⁻¹Δb = πΔb
=> π gives the sensitivity of the optimal payoff with respect
to small changes in the vector b.
• For a maximization problem, π_i may be viewed equivalently
as the marginal price of b_i, because if b_i is changed to
b_i + Δb_i, the value of the optimal solution changes by π_iΔb_i.
• When the constraints Ax = b are written as Ax ≦ b, the dual
variables are nonnegative, implying that for π_i positive, a
positive change in b_i will produce an increase in the
objective function value. In economic terms, it is common to
refer to the dual variables as shadow prices.
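As a concrete illustration, the sketch below (plain NumPy) computes π = c_B B⁻¹ for a given basis and verifies that Δz = πΔb when the right-hand side is perturbed. The LP and its optimal basis are assumptions chosen for illustration; the LP itself reappears as Example 4 later in the chapter.

```python
import numpy as np

# A minimal sketch of the relation dz = pi*db. The LP (in equality form
# with slacks x3, x4, x5) and its optimal basis {x1, x2, x3} are
# assumptions chosen for illustration:
#   maximize 2x1 + 3x2
#   s.t. -x1 + x2 <= 5,  x1 + 3x2 <= 35,  x1 <= 20,  x1, x2 >= 0
A = np.array([[-1.0, 1.0, 1.0, 0.0, 0.0],
              [ 1.0, 3.0, 0.0, 1.0, 0.0],
              [ 1.0, 0.0, 0.0, 0.0, 1.0]])
b = np.array([5.0, 35.0, 20.0])
c = np.array([2.0, 3.0, 0.0, 0.0, 0.0])
basic = [0, 1, 2]                       # indices of the basic variables

B = A[:, basic]
x_B = np.linalg.solve(B, b)             # x_B = B^{-1} b  ->  (20, 5, 20)
pi = np.linalg.solve(B.T, c[basic])     # dual prices pi = c_B B^{-1}
z = c[basic] @ x_B                      # optimal objective value, 55

db = np.array([0.0, 1.0, 0.0])          # small change in the second RHS
z_new = c[basic] @ np.linalg.solve(B, b + db)   # same basis, new RHS
print(pi, z_new - z, pi @ db)           # pi = (0, 1, 1); dz = pi*db = 1.0
```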
Changes in the Objective Row
Nonbasic variable:
• The change in the objective coefficient of a nonbasic
variable affects the reduced cost of that variable only.
• Letting Q be the index set of nonbasic variables, if δ is a
perturbation associated with the original objective
coefficient c_q for some q ∈ Q, then at optimality we can
write the reduced cost of nonbasic variable x_q as
c̄_q(δ) = πA_q − (c_q + δ).
In order for the current basis B to remain optimal, we must
have c̄_q(δ) ≧ 0. This means
δ ≦ πA_q − c_q = c̄_q
=> there is no lower bound on δ.
• Reducing the value of an objective coefficient associated
with a nonbasic variable in a maximization problem cannot
make the variable more attractive.
(Recall: z = c_B B⁻¹b − (c_B B⁻¹A_k − c_k)x_k.)
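A short sketch of this range test, reusing the illustrative LP and basis from the previous code block (an assumption, not data from the text): for each nonbasic x_q the basis stays optimal for any decrease in c_q and for increases up to the reduced cost c̄_q.

```python
import numpy as np

# Range of the objective coefficient of a nonbasic variable:
# -inf < delta <= cbar_q. Same illustrative LP and basis as above.
A = np.array([[-1.0, 1.0, 1.0, 0.0, 0.0],
              [ 1.0, 3.0, 0.0, 1.0, 0.0],
              [ 1.0, 0.0, 0.0, 0.0, 1.0]])
c = np.array([2.0, 3.0, 0.0, 0.0, 0.0])
basic, nonbasic = [0, 1, 2], [3, 4]

B = A[:, basic]
pi = np.linalg.solve(B.T, c[basic])     # pi = c_B B^{-1}
for q in nonbasic:
    cbar = pi @ A[:, q] - c[q]          # reduced cost of x_q
    print(f"c_{q+1} may increase by at most {cbar}")   # both 1.0 here
```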
Basic variable:
• A change in the objective coefficient of a basic variable may
affect the reduced cost of all the nonbasic variables.
• Let e_i be the ith unit vector of length m, and suppose we
increment the objective coefficient of the ith basic variable
x_B(i) by δ; i.e., c_B ← c_B + δe_iᵀ
=> π(δ) = (c_B + δe_iᵀ)B⁻¹
=> The reduced cost of the qth nonbasic variable is now
c̄_q(δ) = (c_B + δe_iᵀ)B⁻¹A_q − c_q
= c_B B⁻¹A_q + δe_iᵀB⁻¹A_q − c_q = c̄_q + δā_iq
where ā_iq = e_iᵀ(B⁻¹A_q) is the ith component of the updated
column Ā_q.
• This value can be found for the nonbasic variable x_q by
solving Bᵀy = e_i for y and then computing ā_iq = yᵀA_q.
• If ā_iq = 0 for a nonbasic x_q, its reduced cost does not
change.
• For the solution to remain optimal, we must have
c̄_q(δ) ≧ 0, or
c̄_q + δā_iq ≧ 0 for all q ∈ Q.
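The procedure just described translates directly to code. A sketch under the same illustrative LP and basis assumptions as in the earlier blocks: solve Bᵀy = e_i, form ā_iq = yᵀA_q for each nonbasic q, and intersect the intervals implied by c̄_q + δā_iq ≧ 0.

```python
import numpy as np

# Ranging the objective coefficient of the i-th basic variable.
# Same illustrative LP and basis {x1, x2, x3} as in the earlier sketches.
A = np.array([[-1.0, 1.0, 1.0, 0.0, 0.0],
              [ 1.0, 3.0, 0.0, 1.0, 0.0],
              [ 1.0, 0.0, 0.0, 0.0, 1.0]])
c = np.array([2.0, 3.0, 0.0, 0.0, 0.0])
basic, nonbasic = [0, 1, 2], [3, 4]

B = A[:, basic]
pi = np.linalg.solve(B.T, c[basic])
i = 0                                     # perturb the cost of x_B(1) = x1
y = np.linalg.solve(B.T, np.eye(3)[i])    # solve B^T y = e_i
lo, hi = -np.inf, np.inf
for q in nonbasic:
    cbar = pi @ A[:, q] - c[q]
    abar_iq = y @ A[:, q]                 # i-th entry of updated column
    if abar_iq > 0:                       # cbar + delta*abar_iq >= 0
        lo = max(lo, -cbar / abar_iq)
    elif abar_iq < 0:
        hi = min(hi, -cbar / abar_iq)
print(lo, hi)                             # here: -1.0 inf
```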

Example 2
Suppose we have an optimal solution to an LP given in
tableau form with attached variables:
Maximize z = 4.9 − 0.1x_3 − 2.5x_4 − 0.2x_5
subject to x_1 = 3.2 − 0.5x_3 − 1.0x_4 − 0.6x_5
x_2 = 1.5 + 1.0x_3 + 0.5x_4 − 1.0x_5
x_6 = 5.6 − 0.5x_3 − 2.0x_4 − 1.0x_5
The index set of nonbasic variables is Q = {3, 4, 5}, so the
current basis remains optimal as long as δ ≦ c̄_q for all
q ∈ Q. When q = 3, for instance, this means that δ ≦ 0.1.
If the original coefficient c_3 = 1, then the current basis
remains optimal for c_3 ≦ 1.1.
• If the objective coefficient of the basic variable x_2
becomes c_2 + δ, the reduced costs of the nonbasic
variables become
x_3: c̄_3(δ) = 0.1 + δ(−1.0)
x_4: c̄_4(δ) = 2.5 + δ(−0.5)
x_5: c̄_5(δ) = 0.2 + δ(+1.0)
• Note that x_B(i) = b̄_i − Σ_{j∈{3,4,5}} ā_ij x_j for i = 1, 2, 6,
so ā_ij is the negative of the number appearing in the
preceding equations. The range that δ can assume is
given by
max{−0.2/1.0} ≦ δ ≦ min{0.1/1.0, 2.5/0.5}
or
−0.2 ≦ δ ≦ 0.1
• When δ assumes one of the limits of its range, a
reduced cost becomes zero. In this example, for δ = 0.1,
the reduced cost of x_3 is zero, so that if the
objective coefficient of x_2 increases by more than 0.1,
it becomes advantageous for x_3 to become active. The
minimum ratio test, min{3.2/0.5, 5.6/0.5} = 6.4, indicates that x_3
can be increased to 6.4 before x_1 becomes zero and a
change of basis is required. Analogously, for δ = −0.2,
the reduced cost of x_5 is zero, and any further
decrease in c_2 will require a basis change to remain
optimal. In this case, the ratio test indicates that x_2
will be the leaving variable.
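The ranging and ratio-test arithmetic of this example can be checked with a few lines of code; the dictionaries below simply transcribe the tableau data given above.

```python
import numpy as np

# Verify the Example 2 ranges using the reduced costs cbar_q and the
# coefficients abar_iq read from the row of the basic variable x2.
cbar = {3: 0.1, 4: 2.5, 5: 0.2}
abar = {3: -1.0, 4: -0.5, 5: 1.0}

lo, hi = -np.inf, np.inf
for q in cbar:
    if abar[q] > 0:
        lo = max(lo, -cbar[q] / abar[q])
    elif abar[q] < 0:
        hi = min(hi, -cbar[q] / abar[q])
print(lo, hi)                         # -0.2 <= delta <= 0.1

# Ratio test at delta = 0.1, where x3 enters the basis:
print(min(3.2 / 0.5, 5.6 / 0.5))      # 6.4, so x1 leaves first
```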
Changes in the Right-Hand-Side Vector
• We wish to investigate the effect of a change b_i ← b_i + δ for
some 1 ≦ i ≦ m. It is usual to consider the case in which b_i is
the right-hand side of an inequality constraint, which
therefore has a slack variable associated with it. The goal is
to determine the range over which the current solution
remains optimal.
• If the constraint is an equality, it can be analyzed by
regarding its associated artificial variable as a positive
slack (which must be nonbasic for a feasible solution).
Basic slack variable:
• If the slack variable associated with the ith constraint is
basic, the constraint is not binding at the optimal solution.
=> The value of the slack gives the range over which the
right-hand-side bi can be reduced for a "less than or equal
to" constraint or increased for a "greater than or equal to"
constraint.
=> The solution remains feasible and optimal for the range
b_i + δ, where
−x̂_s ≦ δ ≦ ∞ for a "≦" constraint
−∞ ≦ δ ≦ x̂_s for a "≧" constraint
and x̂_s is the value of the associated slack variable.
Nonbasic slack variable:
• If a slack variable is nonbasic at zero, then the
original inequality constraint is binding at the
optimum. At first glance it would seem that,
because the constraint is binding, there is no
possibility of changing the right-hand-side term,
particularly of decreasing the value of b_i (for "less
than or equal to" constraints).
• It turns out that by changing the vector b, we also
change x_B (= B⁻¹b = b̄) => there is a range
over which x_B remains nonnegative. For the
associated values of δ, we still retain an optimal
feasible solution in the sense that the basis does
not change. (Note that both x_B and z = c_B x_B
change values.)
• Consider the kth constraint
a_k1 x_1 + a_k2 x_2 + ⋯ + a_kn x_n + x_s = b_k
where x_s is the slack variable.
If the right-hand side becomes b_k + δ, rearranging this
equation yields
a_k1 x_1 + a_k2 x_2 + ⋯ + a_kn x_n + (x_s − δ) = b_k
so that (x_s − δ) replaces x_s.
• If x_s is nonbasic at zero in the final tableau, we have
x_B = B⁻¹b − B⁻¹A_s x_s, with A_s = e_k
(Note: Ā_s = B⁻¹A_s is the updated column in the tableau
corresponding to x_s.)
• From another point of view,
B⁻¹(b + Δb) = B⁻¹b + B⁻¹Δb = b̄ + δB⁻¹e_k = b̄ + δĀ_s
• Because x_B must remain nonnegative, we have b̄ + δĀ_s ≧ 0,
which is used to solve for the range over which δ can vary:
if there is no ā_is > 0, then δ > −∞;
if there is no ā_is < 0, then δ < ∞.
Example 3
• Consider Example 2 again, and suppose that x_4 represents a
slack variable for constraint 1 (originally written as a "≦"
constraint).
• If the coefficient b_1 is varied by an amount δ, with
b̄ = (3.2, 1.5, 5.6)ᵀ and Ā_s = Ā_4 = (1.0, −0.5, 2.0)ᵀ
we have
x_1(δ) = 3.2 − 1.0(−δ) = 3.2 + 1.0δ
x_2(δ) = 1.5 + 0.5(−δ) = 1.5 − 0.5δ
x_6(δ) = 5.6 − 2.0(−δ) = 5.6 + 2.0δ
Requiring each of these values to remain nonnegative, δ can
vary in the range −2.8 ≦ δ ≦ 3.0.
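A small sketch of this ratio test follows; the rhs_range helper is a generic function written for illustration, and the data passed to it are the b̄ and Ā_4 values above.

```python
import numpy as np

# Right-hand-side ranging for a nonbasic slack: x_B(delta) = bbar +
# delta * Abar_s must stay nonnegative, giving one bound per component.
def rhs_range(bbar, abar_s):
    lo, hi = -np.inf, np.inf
    for b_i, a_i in zip(bbar, abar_s):
        if a_i > 0:                    # b_i + delta*a_i >= 0: lower bound
            lo = max(lo, -b_i / a_i)
        elif a_i < 0:                  # upper bound
            hi = min(hi, -b_i / a_i)
    return lo, hi

print(rhs_range([3.2, 1.5, 5.6], [1.0, -0.5, 2.0]))   # (-2.8, 3.0)
```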
Changes in Matrix Coefficients
• Consider the jth nonbasic variable with corresponding column A_j.
If the ith element of A_j is changed by an amount δ, this affects
the reduced cost c̄_j as follows. If
A_j(δ) = A_j + δe_i
then
c̄_j(δ) = π(A_j + δe_i) − c_j = c̄_j + δπe_i = c̄_j + δπ_i
where π (= c_B B⁻¹) is the dual vector.
• Thus, the solution remains optimal as long as c̄_j(δ) ≧ 0.
The corresponding range for δ is
δ ≧ −c̄_j/π_i for π_i > 0
δ ≦ −c̄_j/π_i for π_i < 0
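Since only the sign of π_i matters, the range computation is a one-liner; the sketch below uses made-up values of c̄_j and π_i purely for illustration.

```python
import numpy as np

# Range of delta for a change in entry a_ij of a nonbasic column:
# cbar_j(delta) = cbar_j + delta * pi_i >= 0.
def aij_range(cbar_j, pi_i):
    if pi_i > 0:
        return -cbar_j / pi_i, np.inf    # delta >= -cbar_j / pi_i
    if pi_i < 0:
        return -np.inf, -cbar_j / pi_i   # delta <= -cbar_j / pi_i
    return -np.inf, np.inf               # pi_i = 0: reduced cost unaffected

print(aij_range(cbar_j=0.1, pi_i=0.5))   # (-0.2, inf)
```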
Definition of the Dual LP Model
• The dual model is derived by construction from
the standard inequality form of the linear
programming model, as shown in Tables 4.1 and
4.2. In the primal model, it is assumed that there
are n decision variables and m constraints.
Example 4
(Primal)
Maximize z_P = 2x_1 + 3x_2
subject to −x_1 + x_2 ≦ 5
x_1 + 3x_2 ≦ 35
x_1 ≦ 20
x_1 ≧ 0, x_2 ≧ 0
(Dual)
Minimize z_D = 5π_1 + 35π_2 + 20π_3
subject to −π_1 + π_2 + π_3 ≧ 2
π_1 + 3π_2 ≧ 3
π_1 ≧ 0, π_2 ≧ 0, π_3 ≧ 0
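Both problems of Example 4 can be solved directly, for instance with scipy.optimize.linprog (a sketch: linprog minimizes, so the primal maximization is posed as minimizing −z_P, and the "≧" dual rows are negated into "≦" form).

```python
from scipy.optimize import linprog

# Primal of Example 4, posed as a minimization of -z_P.
primal = linprog(c=[-2, -3],
                 A_ub=[[-1, 1], [1, 3], [1, 0]],
                 b_ub=[5, 35, 20], method="highs")

# Dual of Example 4, with the ">=" constraints multiplied by -1.
dual = linprog(c=[5, 35, 20],
               A_ub=[[1, -1, -1], [-1, -3, 0]],
               b_ub=[-2, -3], method="highs")

print(-primal.fun, dual.fun)   # 55.0 55.0: equal optimal objective values
print(primal.x, dual.x)        # x* = (20, 5), pi* = (0, 1, 1)
```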
• From an algorithmic point of view, solving the primal
problem with the dual simplex method is equivalent to
solving the dual problem with the primal simplex
method.
When written in inequality form, the primal and dual models are
related in the following ways:
• When the primal model has n variables and m constraints, the
dual model has m variables and n constraints.
• The constraints for the primal model are all "≦ ", while the
constraints for the dual model are all "≧ ".
• The objective for the primal model is to maximize, while the
objective for the dual model is to minimize.
• All variables for either problem are restricted to be
nonnegative.
• For every primal constraint, there is a dual variable. Associated
with the ith primal constraint is dual variable π_i. The dual
objective function coefficient for π_i is the right-hand side of the
ith primal constraint, b_i.
• For every primal variable, there is a dual constraint. Associated
with primal variable x_j is the jth dual constraint, whose
right-hand side is the primal objective function coefficient c_j.
• The number a_ij is, in the primal model, the coefficient of x_j
in the ith constraint, whereas in the dual model, a_ij is the
coefficient of π_i in the jth constraint.
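These correspondence rules amount to transposing the data of the inequality-form model. A minimal sketch (the dual_of helper is hypothetical, written only to illustrate the rules):

```python
import numpy as np

# Dual of max {cx : Ax <= b, x >= 0} is min {pi b : pi A >= c, pi >= 0}:
# objective and right-hand side swap roles, and A is transposed.
def dual_of(c, A, b):
    A = np.asarray(A)
    return {"min_obj": np.asarray(b),   # dual costs = primal RHS b
            "A_ge": A.T,                # primal column j -> dual row j
            "rhs": np.asarray(c)}       # dual RHS = primal costs c

d = dual_of(c=[2, 3], A=[[-1, 1], [1, 3], [1, 0]], b=[5, 35, 20])
print(d["min_obj"], d["rhs"])           # [ 5 35 20] [2 3]
print(d["A_ge"])                        # rows are the dual constraints
```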
Modifications of the Inequality Form
• No matter how the primal model is stated, the corresponding
dual model can always be found by first converting the primal
model to the inequality form in Table 4.1 and then writing the
dual model accordingly.
• For example, given an LP in standard equality form
Maximize z_P = cx
subject to Ax = b, x ≧ 0
we can replace the constraints Ax = b with two inequalities,
Ax ≦ b and −Ax ≦ −b, so that the coefficient matrix becomes
⎡ A ⎤
⎣ −A ⎦
and the right-hand-side vector becomes
⎡ b ⎤
⎣ −b ⎦
• Introducing a partitioned dual row vector (γ_1, γ_2) with
2m components, the corresponding dual model is
Minimize z_D = γ_1b − γ_2b
subject to γ_1A − γ_2A ≧ c
γ_1 ≧ 0, γ_2 ≧ 0
• Letting π = γ_1 − γ_2, which is unrestricted in sign, we can
simplify the representation of this problem as shown in Table 4.3.
Proposition 1: Dual models of equivalent problems are
equivalent. Let (P) refer to an LP and let (D) be its dual
model. Let (P′) be an LP that is equivalent to (P), and let (D′)
be the dual model of (P′). Then (D′) is equivalent to (D);
that is, they have the same optimal objective function
values or they are both infeasible.
Example 5
(Primal)
Maximize z_P = −3x_1 − 2x_2
subject to x_1 − x_2 = 8
x_1 − 2x_2 ≧ 13
x_1 ≧ 0, x_2 unrestricted
What is its dual?
(Dual)
Minimize z_D = 8π_1 + 13π_2
subject to π_1 + π_2 ≧ −3
−π_1 − 2π_2 = −2
π_1 unrestricted, π_2 ≦ 0
• In what follows, x refers to any feasible solution of the
primal problem and π to any feasible solution of the dual
problem; x* and π* are the respective optimal solutions, if
they exist.
Theorem 1 (Weak Duality): In a primal-dual pair of LPs,
let x be a primal feasible solution and let ZP(x) be the
corresponding value of the primal objective function that is
to be maximized. Let π be a dual feasible solution and let
ZD(π) be the corresponding dual objective function that is
to be minimized. Then, ZP(x)≦ZD (π).
Proof:
1. The primal solution is feasible by hypothesis: Ax ≦ b
2. Pre-multiply both sides by π: πAx ≦ πb
3. The dual solution is feasible by hypothesis: πA ≧ c
4. Post-multiply both sides by x: πAx ≧ cx
5. Combine the results of Steps 2 and 4:
cx ≦ πAx ≦ πb or ZP(x) ≦ ZD(π)
Note:
• The value of ZP(x) for any feasible x is a lower bound to ZD (π*).
• The value of ZD (π) for any feasible π is an upper bound to ZP(x*).
• If there exists a feasible x and the primal problem is unbounded,
there is no feasible π.
• If there exists a feasible π and the dual problem is unbounded, there
is no feasible x.
• It is possible that there is no feasible x and no feasible π.
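A quick numeric illustration of the proof's chain cx ≦ πAx ≦ πb, using the Example 4 data with an arbitrarily chosen feasible pair (neither point is optimal; both are assumptions picked for the demonstration):

```python
import numpy as np

# Weak duality on the Example 4 pair: for any feasible x and pi,
# cx <= pi A x <= pi b.
A = np.array([[-1.0, 1.0], [1.0, 3.0], [1.0, 0.0]])
b = np.array([5.0, 35.0, 20.0])
c = np.array([2.0, 3.0])

x = np.array([4.0, 2.0])          # primal feasible: Ax = (-2, 10, 4) <= b
pi = np.array([1.0, 1.0, 2.0])    # dual feasible: pi A = (2, 4) >= c

print(c @ x, pi @ A @ x, pi @ b)  # 14.0 <= 16.0 <= 80.0
```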
Theorem 2 (Sufficient Optimality Criterion): In a primal-dual
pair of LPs, let Z_P(x) be the primal objective function and
let Z_D(π) be the dual objective function. If (x̂, π̂) is a pair
of primal and dual feasible solutions satisfying
Z_P(x̂) = Z_D(π̂), then x̂ is an optimal solution of the primal
problem and π̂ is an optimal solution of the dual problem.
Proof:
1. Definition of optimality for the primal solution: Z_P(x̂) ≦ Z_P(x*)
2. Feasible dual solution bound on Z_P: Z_P(x*) ≦ Z_D(π*)
3. Definition of optimality for the dual solution: Z_D(π*) ≦ Z_D(π̂)
4. Combine the results of Steps 1, 2, and 3:
Z_P(x̂) ≦ Z_P(x*) ≦ Z_D(π*) ≦ Z_D(π̂)
5. Objectives are equal by hypothesis: Z_P(x̂) = Z_D(π̂)
6. Combine Steps 4 and 5:
Z_P(x̂) = Z_P(x*) = Z_D(π*) = Z_D(π̂).
From Theorem 2, we have the following:
• Given feasible solutions x and π for a primal-dual
pair, if the objective values are equal, they are
both optimal.
• If x* is an optimal solution to the primal problem, a
finite optimal solution exists for the dual problem
with objective value ZP(x*).
• If π* is an optimal solution to the dual problem, a
finite optimal solution exists for the primal
problem with objective value ZD (π*).
Theorem 3 (Strong Duality): In a primal-dual pair
of LPs, if either the primal problem or the dual
problem has an optimal feasible solution, then the
other problem does also and the two optimal
objective values are equal.
Each structural variable x_j is associated with dual constraint j,
and each slack variable x_si is associated with dual variable π_i.
Recall that a basic solution is found by selecting a set of basic
variables, constructing the basis matrix B, and setting the nonbasic
variables equal to zero. This yields the primal solution
x_B = B⁻¹b with z_P = c_B B⁻¹b
The complementary dual solution associated with the basis is
defined as
π = c_B B⁻¹ with z_D = πb = c_B B⁻¹b
Note: Every basis defines complementary primal and dual solutions
with identical objective function values.
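The sketch below constructs the complementary dual solution for the illustrative basis used in this chapter's earlier code examples (Example 4 with slacks, basis {x_1, x_2, x_3}, an assumption for demonstration) and checks that the two objective values coincide and that π happens to be dual feasible here:

```python
import numpy as np

# Complementary primal and dual solutions defined by one basis.
A = np.array([[-1.0, 1.0, 1.0, 0.0, 0.0],
              [ 1.0, 3.0, 0.0, 1.0, 0.0],
              [ 1.0, 0.0, 0.0, 0.0, 1.0]])
b = np.array([5.0, 35.0, 20.0])
c = np.array([2.0, 3.0, 0.0, 0.0, 0.0])
basic = [0, 1, 2]

B = A[:, basic]
x_B = np.linalg.solve(B, b)            # primal basic solution
pi = np.linalg.solve(B.T, c[basic])    # complementary dual solution
print(c[basic] @ x_B, pi @ b)          # 55.0 55.0: identical by construction
print(pi @ A - c)                      # reduced costs (dual slacks) >= 0
```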
Theorem 4 (Optimality of Feasible Complementary Solutions):
Given the solution x_B determined from the basis B, when x_B
is optimal for the primal problem, the complementary
solution π = c_B B⁻¹ is optimal for the dual problem.
Proof:
1. Primal and dual objective values are equal by construction:
Z_P(x_B) = c_B x_B = c_B B⁻¹b
Z_D(π) = πb = c_B B⁻¹b
2. The primal objective value for a basic solution when
nonbasic variable x_k is allowed to increase is
Z_P = c_B B⁻¹b − (πA_k − c_k)x_k
3. Since the primal solution is optimal:
c̄_k = πA_k − c_k ≧ 0, or πA_k ≧ c_k
4. From Step 3, when x_k is a structural variable, dual
constraint k is satisfied: πA_k ≧ c_k
5. From Step 3, when the nonbasic variable is a slack variable
x_si, π_i is nonnegative:
c_si = 0 and A_si = e_i, so π_i ≧ 0
6. For basic variables:
πB − c_B = c_B B⁻¹B − c_B = 0
7. From Step 6, when x_k is a structural variable and basic,
the kth dual constraint is satisfied as an equality:
πA_k − c_k = 0
8. From Step 6, when x_si is a slack variable and basic, π_i
is zero:
c_si = 0 and A_si = e_i, so π_i = 0
• All constraints are satisfied, so π is feasible; by Theorem
2, it must be optimal.
• From Step 3 of the proof, it can be inferred that the
reduced cost c̄_k for the primal variable x_k is equivalent
to the dual slack π_sk for dual constraint k.
• Complementary solutions property:
For a given basis, when a primal structural variable is
basic, the corresponding dual constraint is satisfied as an
equality (the dual slack variable is zero), and when a
primal slack variable is basic (the primal constraint is
loose), the corresponding dual variable is zero.
This property holds whether or not the primal and dual
solutions are feasible. We have already seen this in the
simplex tableau: when a primal structural variable is basic,
its reduced cost (dual slack) is zero, and when a primal
slack variable is basic, the corresponding structural dual
variable is zero (Step 8 of the proof).
Theorem 5 (Necessary and Sufficient Optimality
Conditions):
Consider a primal-dual pair of LPs. Let x and π be
the primal and dual variables and let ZP(x) and ZD(π)
be the corresponding objective functions. If x is a
primal feasible solution, it is optimal if and only if
there exists a dual feasible solution π satisfying ZP(x)
= ZD(π).
Theorem 6 (Complementary Slackness):
The pairs (x, x_s) and (π, π_s) of primal and dual
feasible solutions are optimal for their respective
problems if and only if, whenever a slack variable in one
problem is strictly positive, the value of the associated
nonnegative variable of the other problem is zero.
Whenever x_j > 0, we have π_sj = Σ_{i=1}^m a_ij π_i − c_j = 0,
and whenever π_sj = Σ_{i=1}^m a_ij π_i − c_j > 0, we have x_j = 0.
Alternatively, we have
x_j π_sj = x_j (Σ_{i=1}^m a_ij π_i − c_j) = 0, j = 1, ..., n
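As a check, the products x_j π_sj and π_i x_si can be evaluated for the optimal pair of Example 4, using the values x* = (20, 5) and π* = (0, 1, 1) produced by the earlier linprog sketch:

```python
import numpy as np

# Complementary slackness at the optimal pair of Example 4.
A = np.array([[-1.0, 1.0], [1.0, 3.0], [1.0, 0.0]])
b = np.array([5.0, 35.0, 20.0])
c = np.array([2.0, 3.0])
x = np.array([20.0, 5.0])       # optimal primal solution
pi = np.array([0.0, 1.0, 1.0])  # optimal dual solution

xs = b - A @ x                  # primal slacks: (20, 0, 0)
pis = pi @ A - c                # dual slacks:   (0, 0)
print(pi * xs)                  # pi_i * x_si = 0 for every constraint i
print(x * pis)                  # x_j * pi_sj = 0 for every variable j
```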