The Simplex Method and Linear Programming
Duality
Ashish Goel
Department of Management Science and
Engineering
Stanford University
Stanford, CA 94305, U.S.A.
http://www.stanford.edu/class/msande211/
(Based on slides by Yinyu Ye)
THE SIMPLEX METHOD
Basic and Basic Feasible Solution
In the LP standard form, select m linearly independent columns,
denoted by the variable index set B, from A. Solve
AB xB = b
for the dimension-m vector xB . By setting the variables, xN , of x
corresponding to the remaining columns of A equal to zero, we
obtain a solution x such that Ax = b.
Then, x is said to be a basic solution to (LP) with respect to the
basic variable set B. The variables in xB are called basic variables,
those in xN are nonbasic variables, and AB is called a basis.
If a basic solution satisfies xB ≥ 0, then x is called a basic feasible solution,
or BFS. Note that AB and xB follow the same index order in B.
Two BFSs are adjacent if they differ by exactly one basic variable.
A BFS is non-degenerate if xB > 0.
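As a quick illustration (not part of the original slides), the following NumPy sketch extracts a basic solution from a chosen basis index set; the matrix A, vector b, and basis used here are made-up data for demonstration.

```python
import numpy as np

def basic_solution(A, b, B):
    """Basic solution for basis index set B (0-based): solve A_B x_B = b and set x_N = 0."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    xB = np.linalg.solve(A[:, B], b)   # basic variables
    x = np.zeros(A.shape[1])
    x[B] = xB                          # nonbasic variables stay at zero
    return x

# Made-up data: 2 constraints, 4 variables; basis {x3, x4} (0-based indices [2, 3]).
A = [[1, 1, 1, 0],
     [1, 2, 0, 1]]
b = [4, 6]
x = basic_solution(A, b, B=[2, 3])
print(x, "-> BFS" if (x >= 0).all() else "-> basic but infeasible")
```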
Simplex Method
George B. Dantzig’s Simplex Method for linear
programming stands as one of the most significant
algorithmic achievements of the 20th century. It is
now over 60 years old and still going strong.
[Figure: a two-dimensional feasible region in the (x1, x2) plane with its corner points.]
The basic idea of the simplex method is to confine the search to corner points
of the feasible region (of which there are only finitely many) in a most
intelligent way, so that the objective always improves.
The key for the simplex method is to make
computers see corner points; and the key for
interior-point methods is to stay in the interior of
the feasible region.
From Geometry to Algebra
•How to make the computer recognize a corner point? A BFS.
•How to make the computer terminate and declare optimality?
•How to make the computer identify a better neighboring corner?
Feasible Directions at a BFS and Optimality
Test
Non-degenerate BFS: AB xB + AN xN = b, with xB > 0 and xN = 0.
Thus the feasible directions d are those satisfying
AB dB + AN dN = 0,  dN ≥ 0.
For the BFS to be optimal, no feasible direction can be a descent direction; that is,
cT d = cBT dB + cNT dN ≥ 0.
From dB = −(AB)−1 AN dN, we must have, for all dN ≥ 0,
cT d = −cBT (AB)−1 AN dN + cNT dN = (cNT − cBT (AB)−1 AN) dN ≥ 0.
Thus, cNT − cBT (AB)−1 AN ≥ 0 is necessary and sufficient. This row vector is
called the reduced cost vector for the nonbasic variables.
Computing the Reduced Cost Vector
We compute the shadow prices yT = cBT (AB)−1, or equivalently solve the
system of linear equations yT AB = cBT.
Then we compute rT = cT − yT A, where rN is the reduced cost vector for the
nonbasic variables (and rB = 0 always).
If an entry of rN is negative, then an improving feasible direction is found by
increasing the corresponding nonbasic variable:
• Increase along this direction until one of the basic variables becomes 0
and hence nonbasic.
• We are left with m basic variables again.
With suitable care, the process converges and produces an optimal solution if
one exists (special handling is needed for an unbounded optimum and for
degeneracy, when two basic variables reach 0 at the same time).
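A minimal NumPy sketch of the pricing step just described (an illustration, not the course's code): solve yT AB = cBT for the shadow prices and form r = c − ATy. The helper name `price` is ours.

```python
import numpy as np

def price(A, c, B):
    """Pricing step for min c'x s.t. Ax = b, x >= 0, with basic column indices B (0-based).

    Returns the shadow-price vector y (solving A_B' y = c_B) and the
    reduced-cost vector r = c - A'y (r_B = 0 by construction).
    """
    A = np.asarray(A, dtype=float)
    c = np.asarray(c, dtype=float)
    y = np.linalg.solve(A[:, B].T, c[B])
    r = c - A.T @ y
    return y, r
```

If every entry of r is nonnegative, the current BFS is optimal; otherwise a negative entry identifies the entering variable, as in the steps above.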
In the LP production example, suppose the basic variable set B =
{1, 2, 3} .
min  −x1 − 2x2
s.t. x1 + x3 = 1
     x2 + x4 = 1
     x1 + x2 + x5 = 1.5
     x1, x2, x3, x4, x5 ≥ 0.

cN = (0, 0)T,  cB = (−1, −2, 0)T,

AB = [ 1  0  1 ]        AN = [ 0  0 ]
     [ 0  1  0 ]             [ 1  0 ]
     [ 1  1  0 ],            [ 0  1 ],

(AB)−1 = [ 0  −1   1 ]
         [ 0   1   0 ]
         [ 1   1  −1 ],

yT = (0, −1, −1),   rNT = (1, 1).
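Using the hypothetical `price` helper sketched earlier (0-based indices, so B = {1, 2, 3} becomes [0, 1, 2]) reproduces these numbers:

```python
import numpy as np

A = np.array([[1., 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [1, 1, 0, 0, 1]])
b = np.array([1., 1, 1.5])
c = np.array([-1., -2, 0, 0, 0])
B = [0, 1, 2]

print(np.linalg.solve(A[:, B], b))   # x_B = [0.5 1.  0.5]
y, r = price(A, c, B)                # `price` is the sketch from the earlier slide
print(y)                             # [ 0. -1. -1.]  -> shadow prices
print(r[[3, 4]])                     # [ 1.  1.]      -> reduced costs of x4, x5; all >= 0, so optimal
```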
In the LP production example, suppose the basic variable set B =
{3, 4, 5} .
min  −x1 − 2x2
s.t. x1 + x3 = 1
     x2 + x4 = 1
     x1 + x2 + x5 = 1.5
     x1, x2, x3, x4, x5 ≥ 0.

cN = (−1, −2)T,  cB = (0, 0, 0)T,

AB = I,   AN = [ 1  0 ]
               [ 0  1 ]
               [ 1  1 ],

(AB)−1 = I,   yT = (0, 0, 0),   rNT = (−1, −2).
Summary
• The theory of Basic Feasible Solutions leads to
a solution method
• The Simplex algorithm is one of the most
influential and practical algorithms of all time
• However, we will not test or assign problems on
the Simplex method in this class (a testament to
the fact that this method has been so successful
that we can use it as a basic technology)
SENSITIVITY ANALYSIS
LP Shadow Price Vector
•The dimension of the shadow price (SP) vector equals
the dimension of the right-hand-side (RHS) vector, or
the number of linear constraints.
•In general, the optimal SP of a given active constraint is the rate of change
in the optimal value (OV) of the objective as the RHS of the constraint
increases within an interval, ceteris paribus.
•All inactive or nonbinding constraints have zero SP.
•In the non-degenerate case, a small change in the RHS would change the OV
and the optimal solution (OS), but not the basis or the optimal SP.
Why
Given a non-degenerate optimal BFS in the LP standard form with basis AB:
xB = (AB)−1 b > 0,  xN = 0,
so that a small change in b does not change the optimal basis or the shadow
price vector
yT = cBT (AB)−1.
At optimality, the OV is
cT x = cBT xB = cBT (AB)−1 b = yT b.
Thus, when b is changed to b + Δb, the new OV is
OV+ = cBT xB+ = cBT (AB)−1 (b + Δb) = yT (b + Δb) = OV + yT Δb,
so the net change is yT Δb, as long as the basis is unchanged.
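A quick NumPy check of this identity on the production example with basis B = {1, 2, 3} (illustrative; the perturbation Δb is arbitrary):

```python
import numpy as np

A = np.array([[1., 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [1, 1, 0, 0, 1]])
b = np.array([1., 1, 1.5])
c = np.array([-1., -2, 0, 0, 0])
B = [0, 1, 2]                        # optimal basis {x1, x2, x3}
AB = A[:, B]

y = np.linalg.solve(AB.T, c[B])      # shadow prices y' = c_B'(A_B)^-1
OV = y @ b                           # optimal value = y'b

db = np.array([0., 0, 0.1])          # small change in the third RHS
xB_new = np.linalg.solve(AB, b + db) # still >= 0, so the basis is unchanged
OV_new = c[B] @ xB_new

print(OV_new - OV, y @ db)           # both equal y'*db (about -0.1 here)
```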
LP Reduced Cost Vector
•The dimension of the reduced-cost (RC) vector equals
the dimension of the objective coefficient vector or the
number of decision variables.
•In general, the RC value of any non-basic variable is the
amount the objective coefficient of that variable would
have to change, ceteris paribus, in order for it to
become a basic variable at optimality.
•All basic variables have zero RC.
•Upon termination, all non-basic variables have RC ≥ 0.
•In the non-degenerate case, a small change in the objective coefficients may
change the OV and the optimal SP, but not the basis or the OS.
Why
Given a BFS in the LP standard form with basis AB
and its companion SP vector:
yT = cBT(AB)-1 and RC rNT=cNT-yTAN > 0
If cN changes by a small amount, nothing changes. But if some coefficient is
reduced enough that one of the reduced costs becomes negative, then the
current BFS is no longer optimal.
On the other hand, if cB makes a small change, say cB is changed to cB + ΔcB,
then the new SP and OV are
y+T = (cB + ΔcB)T (AB)−1 = yT + ΔcBT (AB)−1
OV+ = (yT + ΔcBT (AB)−1) b = OV + ΔcBT (AB)−1 b = OV + ΔcBT xB,
so the net change is ΔcBT xB.
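Similarly, a short NumPy check (illustrative, same production example and basis) that a small change ΔcB shifts the OV by ΔcBT xB while the reduced costs stay nonnegative:

```python
import numpy as np

A = np.array([[1., 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [1, 1, 0, 0, 1]])
b = np.array([1., 1, 1.5])
c = np.array([-1., -2, 0, 0, 0])
B, N = [0, 1, 2], [3, 4]
AB, AN = A[:, B], A[:, N]

xB = np.linalg.solve(AB, b)
OV = c[B] @ xB

dcB = np.array([-0.1, 0., 0.])        # perturb the objective coefficient of x1
y_new = np.linalg.solve(AB.T, c[B] + dcB)
rN_new = c[N] - AN.T @ y_new          # still >= 0, so the basis stays optimal
OV_new = (c[B] + dcB) @ xB

print(rN_new)                         # nonnegative reduced costs
print(OV_new - OV, dcB @ xB)          # both equal dcB'x_B (about -0.05 here)
```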
LP DUALITY
Dual Problem of Linear Programming
• Every LP problem is associated with another LP problem called its
dual (the original problem is called the primal).
• Every variable of the dual is associated with a constraint of the
primal; every constraint of the dual is associated with a variable
of the primal.
• The dual is max (min) if the primal is min (max); the objective
coefficients of the dual are the RHS of the primal; and the RHS
of the dual are the objective coefficients of the primal.
• The constraint matrix of the dual is the transpose of the
constraint matrix of the primal.
• The final shadow price vector of the primal is an optimal
solution of the dual.
The Dual of the Production Problem
Primal:
  max  x1 + 2x2
  s.t. x1 ≤ 1
       x2 ≤ 1
       x1 + x2 ≤ 1.5
       x1, x2 ≥ 0

Dual:
  min  y1 + y2 + 1.5y3
  s.t. y1 + y3 ≥ 1
       y2 + y3 ≥ 2
       y1, y2, y3 ≥ 0

A = [ 1  0 ]        AT = [ 1  0  1 ]
    [ 0  1 ]             [ 0  1  1 ]
    [ 1  1 ],
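A small SciPy sketch (illustrative, not from the slides) that solves this primal–dual pair; `linprog` minimizes, so the max primal is passed with a negated objective, and with recent SciPy/HiGHS the solver's shadow prices are exposed as `marginals`:

```python
import numpy as np
from scipy.optimize import linprog

# Primal: max x1 + 2 x2  s.t.  x1 <= 1, x2 <= 1, x1 + x2 <= 1.5, x >= 0
A_ub = [[1, 0], [0, 1], [1, 1]]
b_ub = [1, 1, 1.5]
primal = linprog(c=[-1, -2], A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")

# Dual: min y1 + y2 + 1.5 y3  s.t.  y1 + y3 >= 1, y2 + y3 >= 2, y >= 0
# (the ">=" rows are passed as "<=" rows with both sides negated)
dual = linprog(c=b_ub, A_ub=-np.array(A_ub).T, b_ub=[-1, -2],
               bounds=(0, None), method="highs")

print(-primal.fun, dual.fun)        # equal optimal values: 2.5 and 2.5
print(-primal.ineqlin.marginals)    # primal shadow prices = an optimal dual solution (0, 1, 1)
```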
More Rules to Construct the Dual
Max model                         Min model
obj. coef. vector                 right-hand-side
right-hand-side                   obj. coef. vector
A                                 AT
xj ≥ 0                            jth constraint ≥
xj ≤ 0                            jth constraint ≤
xj free                           jth constraint =
ith constraint ≤                  yi ≥ 0
ith constraint ≥                  yi ≤ 0
ith constraint =                  yi free
The dual of the dual is the primal
Dual of LP in Standard Equality Form
(LP)  min  cT x
      s.t. Ax = b, x ≥ 0,  x ∈ Rn.

(LD)  max  bT y
      s.t. AT y ≤ c,  y ∈ Rm.
Usually we let r = c − AT y ∈ Rn, called the dual slack vector; it must be
nonnegative for any dual feasible solution.
In the simplex method, the final reduced cost vector is a feasible
slack vector of the dual.
Dual Feasible Region of LP in Standard Equality Form
(LD)  max  bT y
      s.t. AT y ≤ c,  y ∈ Rm.
This is an LP in the standard inequality form.
Given a basis AB, the dual vector y satisfying ABT y = cB is said
to be a dual basic solution.
If a dual basic solution is also feasible, that is, c − AT y ≥ 0, it
is said to be a dual basic feasible solution (BFS).
Every dual BFS is a corner point of the dual feasible region!
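A tiny NumPy sketch of this definition (illustrative): solve ABT y = cB and test c − ATy ≥ 0; the data is the production example in standard equality form with B = {1, 2, 3}.

```python
import numpy as np

def dual_basic_solution(A, c, B):
    """Dual basic solution for basis B: solve A_B' y = c_B; dual feasible if c - A'y >= 0."""
    A = np.asarray(A, dtype=float)
    c = np.asarray(c, dtype=float)
    y = np.linalg.solve(A[:, B].T, c[B])
    return y, bool((c - A.T @ y >= -1e-9).all())

A = [[1, 0, 1, 0, 0], [0, 1, 0, 1, 0], [1, 1, 0, 0, 1]]
c = [-1, -2, 0, 0, 0]
print(dual_basic_solution(A, c, [0, 1, 2]))   # (array([ 0., -1., -1.]), True) -> a dual BFS
```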
Dual Theorem
Theorem 1 (Weak duality theorem) Let both the primal
feasible region Fp and the dual feasible region Fd be nonempty. Then,
cT x ≥ bT y   for all x ∈ Fp, y ∈ Fd.
Proof: cT x − bT y = cT x − (Ax)T y = xT (c − AT y) = xT r ≥ 0,
since x ≥ 0 and r = c − AT y ≥ 0.
This theorem shows that a feasible solution to either problem
yields a bound on the value of the other problem. We call cT x
− bT y the duality gap.
If the duality gap is zero, then x and y are optimal for the
primal and dual, respectively!
Is the reverse true?
Dual of LP in Standard Equality Form
(LP)  min  cT x
      s.t. Ax = b, x ≥ 0,  x ∈ Rn.

(LD)  max  bT y
      s.t. AT y ≤ c,  y ∈ Rm.
Usually we let r = c − AT y ∈ Rn, called the dual slack vector; it must be
nonnegative for any dual feasible solution.
In the simplex method, the final reduced cost vector is a feasible
slack vector of the dual, and the final shadow price vector is an
optimal solution of the dual, since cT x = yT b at termination.
Dual Theorem continued
Proved by the Simplex Method
Theorem 2 (Strong duality theorem) Let both the primal
feasible region Fp and the dual feasible region Fd be non-empty.
Then, x∗ ∈ Fp is optimal for (LP) and y∗ ∈ Fd is optimal for
(LD) if and only if the duality gap cT x∗ − bT y∗ = 0.
Corollary If (LP) and (LD) both have feasible solutions, then
both problems have optimal solutions and their optimal
objective values are equal.
If one of (LP) or (LD) has no feasible solution, then the other is
either unbounded or has no feasible solution. If one of (LP) or
(LD) is unbounded then the other has no feasible solution.
Possible Combination of Primal and Dual
                 Dual F-B    Dual F-UB    Dual IF
Primal F-B         Yes          No           No
Primal F-UB        No           No           Yes
Primal IF          No           Yes          Yes

(F-B: feasible with a bounded optimum; F-UB: feasible but unbounded; IF: infeasible)
Example (primal infeasible, dual feasible but unbounded):

  (P)  min  −x1 + x2             (D)  max  y1 + y2
       s.t.  x1 − x2 ≥ 1              s.t.  y1 − y2 ≤ −1
            −x1 + x2 ≥ 1                   −y1 + y2 ≤  1
             x1, x2 ≥ 0                     y1, y2 ≥  0
Application of the Theorem: Optimality Condition
Check whether a pair of primal x and dual y, with slack r, is optimal:

  (x, y, r) ∈ (Rn+, Rm, Rn+) such that
      cT x − bT y = 0
      Ax          = b
      AT y + r    = c,
which is a system of linear inequalities and equations. Thus it
is easy to verify whether or not a pair (x, y, r) is optimal by a
computer.
These conditions can be classified as
• Primal Feasibility,
• Dual Feasibility, and
• Zero Duality Gap.
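A compact NumPy sketch of this check (illustrative): given a candidate pair, it tests primal feasibility, dual feasibility, and a zero duality gap, here on the production example with x* = (0.5, 1, 0.5, 0, 0) and y* = (0, −1, −1).

```python
import numpy as np

def is_optimal_pair(A, b, c, x, y, tol=1e-9):
    """Verify the optimality conditions for min c'x s.t. Ax = b, x >= 0 and its dual."""
    A = np.asarray(A, float); b = np.asarray(b, float); c = np.asarray(c, float)
    x = np.asarray(x, float); y = np.asarray(y, float)
    r = c - A.T @ y                                            # dual slacks
    primal_feasible = np.allclose(A @ x, b) and (x >= -tol).all()
    dual_feasible = (r >= -tol).all()
    zero_gap = abs(c @ x - b @ y) <= tol
    return primal_feasible and dual_feasible and zero_gap

A = [[1, 0, 1, 0, 0], [0, 1, 0, 1, 0], [1, 1, 0, 0, 1]]
b = [1, 1, 1.5]
c = [-1, -2, 0, 0, 0]
print(is_optimal_pair(A, b, c, [0.5, 1, 0.5, 0, 0], [0, -1, -1]))   # True
```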
Application of the Theorem: Complementary Slackness
For a feasible primal x ≥ 0 and a feasible dual (y, r) with r ≥ 0,
xT r = xT (c − AT y) = cT x − bT y is also called the
complementarity gap.
Since both x and r are nonnegative, zero duality gap
0= xT r = x1 r1 + x2 r2 +…+ xn rn
implies that
xj rj = 0 for all j = 1,... , n,
where we say x and r are complementary to each other.
Note that rj = 0 means that the corresponding dual
inequality constraint (AT y)j ≤ cj is active at the solution.
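Continuing the production example, a one-line NumPy check (illustrative) that the optimal x and r are complementary component by component:

```python
import numpy as np

x = np.array([0.5, 1, 0.5, 0, 0])   # optimal primal solution
r = np.array([0, 0, 0, 1, 1])       # optimal dual slacks (reduced costs)
print(x * r, x @ r)                 # [0. 0. 0. 0. 0.] 0.0  -> x_j r_j = 0 for every j
```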
Implication of the Complementarity
Primal (Dual)                     Dual (Primal)
Max model                         Min model
xj ≥ 0                            jth constraint ≥
xj ≤ 0                            jth constraint ≤
xj free                           jth constraint =
ith constraint ≤                  yi ≥ 0
ith constraint ≥                  yi ≤ 0
ith constraint =                  yi free
The complementarity conditions imply that at optimality:
•every inactive inequality constraint has zero dual value;
•every non-zero variable implies that its corresponding dual constraint is active;
•every equality constraint is regarded as active.
The Ideology of the (Primal) Simplex Method
The simplex method described earlier is the primal simplex
method, meaning that the method maintains and improves a
primal basic feasible solution xB .
The shadow price vector y in the method is a dual basic solution, and it is
generally not feasible until termination; the reduced cost vector r in the
method is the dual slack vector. Note that xN = 0 and rB = 0,
so that x and r are complementary to each other at any basis AB.
When the method terminates, xB is primal optimal and (y, r)
becomes dual feasible, so it is also dual optimal, since the two
are complementary.
Interpretation of the Dual of the Production Problem
Primal:  max cT x  s.t.  Ax ≤ b, x ≥ 0
Dual:    min bT y  s.t.  AT y ≥ c, y ≥ 0

For the production problem, the dual is
  min  y1 + y2 + 1.5y3
  s.t. y1 + y3 ≥ 1
       y2 + y3 ≥ 2
       y1, y2, y3 ≥ 0
Acquisition Pricing:
•y: prices of the resources
•ATy≥c: the prices are competitive for each product
•min bT y: minimize the total liquidation cost
The Transportation Problem
[Figure: a bipartite transportation network with supply nodes 1, ..., m on the left (supplies s1, ..., sm), demand nodes 1, ..., n on the right (demands d1, ..., dn), and an arc from each supply node i to each demand node j carrying flow xij at unit cost cij.]
The Transportation Dual
Primal:
  min   Σi Σj cij xij
  s.t.  Σj xij = si,   i = 1, ..., m
        Σi xij = dj,   j = 1, ..., n
        xij ≥ 0,       for all i, j

Dual:
  max   Σi si ui + Σj dj vj
  s.t.  ui + vj ≤ cij,   for all i, j

Shipping company's new charge scheme:
•ui: supply-site unit charge
•vj: demand-site unit charge
•ui + vj ≤ cij: competitiveness
The Transportation Example
            1      2      3      4     Supply
   1       12     13      4      6     500  (u1)
   2        6      4     10     11     700  (u2)
   3       10      9     12      4     800  (u3)
   Demand  400    900    200    500
          (v1)   (v2)   (v3)   (v4)
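A SciPy sketch of this example (illustrative; the cost matrix follows the reconstruction of the table above, which may not match the original slide exactly, so treat the numbers as sample data):

```python
import numpy as np
from scipy.optimize import linprog

cost   = np.array([[12, 13,  4,  6],    # unit costs c_ij, rows = supply sites
                   [ 6,  4, 10, 11],
                   [10,  9, 12,  4]])
supply = np.array([500, 700, 800])
demand = np.array([400, 900, 200, 500])

m, n = cost.shape
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1      # row sums = supply at site i
for j in range(n):
    A_eq[m + j, j::n] = 1               # column sums = demand at site j
b_eq = np.concatenate([supply, demand])

res = linprog(c=cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print(res.fun)                          # minimum total shipping cost
print(res.x.reshape(m, n))              # optimal shipments x_ij
# With recent SciPy/HiGHS, the dual values u_i, v_j are available in res.eqlin.marginals.
```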
Wrapping up: Range Analyses
Theorem When b is changed to b+Δb, the current
optimal basis AB remains optimal if and only if
(AB)−1(b+Δb)≥0
or
xB+(AB)−1Δb≥0.
When cB is changed to cB +ΔcB, the current optimal basis
AB remains optimal if and only if
cNT-y+TAN = rNT- ΔcBT(AB)−1AN ≥ 0
This will establish a range for each coefficient of b or cB.
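A short NumPy sketch of the RHS part of this range analysis (illustrative, production example with basis B = {1, 2, 3}): it finds the interval of changes t to a single RHS entry for which xB + (AB)−1Δb stays nonnegative.

```python
import numpy as np

A = np.array([[1., 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [1, 1, 0, 0, 1]])
b = np.array([1., 1, 1.5])
B = [0, 1, 2]
AB_inv = np.linalg.inv(A[:, B])
xB = AB_inv @ b

def rhs_range(k):
    """Interval of t such that the basis stays optimal when b_k becomes b_k + t."""
    col = AB_inv[:, k]                  # x_B moves by t * col
    lo = max((-xB[i] / col[i] for i in range(len(col)) if col[i] > 0), default=-np.inf)
    hi = min((-xB[i] / col[i] for i in range(len(col)) if col[i] < 0), default=np.inf)
    return lo, hi

print(rhs_range(2))                     # allowable change of the third RHS: (-0.5, 0.5)
```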