Building Feasible Space of Linear Model
An Extensive Simplex Method for Mapping Global Feasibility

Liang Zhu and David Kazmer
Department of Mechanical & Industrial Engineering
University of Massachusetts Amherst
1 Abstract
An algorithm is presented for mapping the global feasible space of multi-attribute systems to support decision modeling and selection. Given n decision variables, the feasible space of a linear model is an n-dimensional polyhedron constituted by a set of extreme points. The polyhedron can be represented as a connected graph in which all extreme points are traversed exactly once from any initial point through pivot operations. Since the expected number of extreme points is no more than 2^n, the traverse extreme point (TEP) algorithm described here requires exponential time, instead of the factorial time of an exhaustive combinatorial search.
2 Key Words
Multi-attribute decision making, Optimization, Global Feasible Space, Extreme Points
3 Introduction
The linear model has an important role in decision making. While some problems can be directly formulated in linear form (e.g., many problems in financial management and industrial engineering), others can be simplified by a corresponding linear approximation. In many cases, the linear approximation provides a vital hint to the behavior and solution of truly non-linear problems. This claim is supported by the popularity of the Newton-Raphson method, which uses the Jacobian matrix to locally approximate a non-linear function (Reklaitis, Ravindran et al. 1983).
With one explicit objective, the linear problem can be solved efficiently by the Simplex method (Dantzig 1998). However, such a single representative objective may not be available in a real problem formulation, which generally has multiple objectives that are coupled and compete with each other, such as time, cost, and reliability. Combining these objectives is not trivial. Without knowledge of the relationship between the multiple objectives, the conventional practice of multiplying or adding the weighted objectives rarely gives a satisfactory solution. As such, it is valuable to explore the feasible space constituted by the multi-objective problem itself. Once the practitioner acquires the efficient frontier for the multiple objectives, he/she can then select preferred decisions that compromise among these objectives.
The linear formulation typically contains some constraints on the decision variables. A common restriction is the "nonnegativity" of the physical variables, called the primary constraints. Additional constraints might represent other limitations, such as the resource supply, the process conditions, and the performance improvement over previous projects. Often, the boundary of a constraint is not a pre-defined crisp number. Depending on the overall performance, the constraint limits could be loosened or tightened from the initial specifications. Therefore, the global feasible space needs to be examined for potential refinement of the constraints.
Sometimes it is difficult to differentiate between the objectives and the constraints; some attributes might serve as both. For example, while the project cost is expected to stay under budget as a constraint, a lower cost is obviously preferred as an objective once the other performance measures are satisfied. For convenience of solution and usage, therefore, all objectives are expressed in the form of constraints. The limits on these constraints could be 0 for nonnegative attributes or other deliberately loose values. The traverse extreme point (TEP) algorithm presented in this paper computes the global feasibility for both the independent decision variables and the dependent performance measures.
Consider a linear model with only two nonnegative decision variables:
    x1 - 2x2 ≤ 3
    x1 + x2 ≤ 6        (1)
    x1 + 4x2 ≤ 12
    x1, x2 ≥ 0
Figure 1 shows that each constraint divides the 2-dimensional Euclidean space E^2 into two regions, called halfspaces. A halfspace is the collection of points that satisfy the constraint, Hi = {(x1, x2) | ai1 x1 + ai2 x2 ≤ bi}. The intersection of all the halfspaces composes the feasible space H; in the case of Equation 1, H = H1 ∩ H2 ∩ H3 ∩ H4 ∩ H5. Given any two points A = (x1, x2) ∈ Hi and A' = (x1', x2') ∈ Hi and any λ ∈ [0, 1], the combination λA + (1-λ)A' satisfies ai1(λx1 + (1-λ)x1') + ai2(λx2 + (1-λ)x2') = λ(ai1 x1 + ai2 x2) + (1-λ)(ai1 x1' + ai2 x2') ≤ λbi + (1-λ)bi = bi, i.e., λA + (1-λ)A' ∈ Hi. Therefore, the feasible space H is convex.
The convex property is the basis for constructing the feasible space efficiently. Since H is bounded by straight lines in E^2, it is a convex polygon constituted by its corner points, the extreme points. Although every pair of constraints in E^2 may yield a basic solution, only a basic feasible solution is an extreme point. The labeled points in Figure 1 are all basic solutions, but F is not an extreme point of the feasible space since it is not a feasible solution. As shown in Figure 1, the feasible space of Equation 1 is the polygon ABCDE. Building the feasible space in this example is equivalent to solving for the five extreme points.
[Figure 1 here: the polygon ABCDE in the (x1, x2) plane, bounded by the halfspaces H1: x1 - 2x2 ≤ 3, H2: x1 + x2 ≤ 6, H3: x1 + 4x2 ≤ 12, and the primary constraints H4: x1 ≥ 0 and H5: x2 ≥ 0; the basic but infeasible solution F lies outside the polygon.]
Figure 1. Feasible space constituted by the constraints
The above concept can be extended to E^n. Consider a linear model with m constraints and n decision variables, m1 of them in the form

    ai1 x1 + ai2 x2 + ... + ain xn ≤ bi,    i = 1, ..., m1        (2)

m2 of them in the form

    aj1 x1 + aj2 x2 + ... + ajn xn ≥ bj,    j = 1, ..., m2

and m3 of them in the form

    ak1 x1 + ak2 x2 + ... + akn xn = bk,    k = 1, ..., m3

where x1, ..., xn ≥ 0, b1, ..., bm ≥ 0, and m = m1 + m2 + m3.
Note that the decision variables in the above equation are all nonnegative. The method can be generalized to any linear model by a trivial normalization of each unrestricted variable xi (e.g., xi = xa - xb, where xa, xb ≥ 0). As mentioned previously, all objectives have been formulated as constraints with limits corresponding to their minimum acceptable performance levels.
Each constraint in Equation 2 can be associated with one hyperplane in E^n, a generalized notion of a straight line in E^2 and a plane in E^3. The halfspace in E^n is the collection of points defined by the hyperplane, Hi = {(x1, x2, ..., xn) | ai1 x1 + ai2 x2 + ... + ain xn ≤ bi}. The feasible space H is the intersection of all halfspaces, H = H1 ∩ H2 ∩ ... ∩ Hm. It is important to note that the dimension of the feasible space shrinks to E^(n-m3), given the m3 equalities in Equation 2. Regardless, H is convex by a proof similar to that in E^2. An extreme point of this convex polyhedron is an intersection point of n hyperplanes. As in the 2-dimensional case, though any n hyperplanes may yield a basic solution, only a point located inside all halfspaces (a basic feasible solution) is an extreme point.
There are C(n+m, n) distinct basic solutions arising from the n primary constraints and m additional constraints in Equation 2. One approach is to solve all these combinations and test their feasibility (Zhu and Kazmer 2000). The number of combinations increases radically with the number of decision variables and constraints, not to mention that each iteration takes Θ(n^3) time to solve a system of n linear equations (Cormen, Leiserson et al. 1990). Considering that the basic feasible solutions are only a small subset of the basic solutions, this paper presents an efficient algorithm that excludes consideration of the basic infeasible solutions.
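To make this baseline concrete, the following sketch (illustrative only, not the authors' implementation) enumerates all C(n+m, n) = C(5, 2) = 10 basic solutions of Equation 1 by brute force and keeps only the feasible ones; it recovers exactly the five extreme points of the polygon ABCDE.

    # Brute-force baseline for Equation 1: solve every pair of active
    # constraints and keep only the basic *feasible* solutions.
    import itertools
    import numpy as np

    # Constraints a.x <= b, including the primary constraints -x1 <= 0, -x2 <= 0.
    A = np.array([[1.0, -2.0], [1.0, 1.0], [1.0, 4.0], [-1.0, 0.0], [0.0, -1.0]])
    b = np.array([3.0, 6.0, 12.0, 0.0, 0.0])

    extreme_points = set()
    for rows in itertools.combinations(range(len(A)), 2):  # C(5, 2) = 10 candidates
        sub_A, sub_b = A[list(rows)], b[list(rows)]
        if abs(np.linalg.det(sub_A)) < 1e-9:
            continue  # the two hyperplanes are parallel: no basic solution
        x = np.linalg.solve(sub_A, sub_b)   # basic solution (intersection point)
        if np.all(A @ x <= b + 1e-9):       # feasibility test against all halfspaces
            extreme_points.add(tuple(float(v) for v in np.round(x, 6) + 0.0))

    print(sorted(extreme_points))
    # -> [(0.0, 0.0), (0.0, 3.0), (3.0, 0.0), (4.0, 2.0), (5.0, 1.0)], the polygon ABCDE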
4 Traverse extreme point
By adding a slack or surplus variable to each inequality, Equation 2 can be written in equality form, where constraint i receives the slack variable x_{n+i} and constraint j receives the surplus variable x_{n+m1+j}:

    ai1 x1 + ai2 x2 + ... + ain xn + x_{n+i} = bi,         i = 1, ..., m1
    aj1 x1 + aj2 x2 + ... + ajn xn - x_{n+m1+j} = bj,      j = 1, ..., m2        (3)
    ak1 x1 + ak2 x2 + ... + akn xn = bk,                   k = 1, ..., m3
More restrictively, in the canonical form:

    x1 + a11' xm+1 + a12' xm+2 + ... + a1n' xm+n = b1'
    x2 + a21' xm+1 + a22' xm+2 + ... + a2n' xm+n = b2'        (4)
    ...
    xm + am1' xm+1 + am2' xm+2 + ... + amn' xm+n = bm'
    x1, ..., xn+m ≥ 0,  b1', ..., bm' ≥ 0
Or, in matrix form,

    A x = b,    x ≥ 0,  b ≥ 0        (5)

where

    A = [ 1  0  ...  0   a11'  a12'  ...  a1n' ]
        [ 0  1  ...  0   a21'  a22'  ...  a2n' ]
        [ ...                                  ]
        [ 0  0  ...  1   am1'  am2'  ...  amn' ]

    x = (x1, ..., xm, xm+1, ..., xm+n)^T,    b = (b1', b2', ..., bm')^T
For ease of presentation, Equations 4-5 assume that no equality exists in Equation 2 (m3 = 0). Although Equations 2-3 appear different from Equations 4-5, they can be transformed into the canonical form as discussed subsequently.

The variables x1, ..., xm, which appear with a unit coefficient in exactly one equation and with zeros in all the others, are called the dependent variables xd = (xd1, ..., xdm). In the canonical form, there is one such basic variable in every constraint equation. The other n variables xm+1, ..., xm+n are called the independent variables xi = (xi1, ..., xin). Obviously, x = xd ∪ xi and xd ∩ xi = ∅. As shown in Equations 4-5, one basic feasible solution can be acquired trivially by setting xi = 0 and xd = b. On the other hand, the choice of xm+1, ..., xm+n as the independent variables was purely for convenience. In fact, any variable can become an independent variable through elementary row operations that multiply and add the rows in Equation 5. Given n+m variables in total, the independent variables xi can form C(n+m, n) different sets. Note that this selection of independent variables is equivalent to choosing the active constraints when acquiring one basic solution. However, the concept of independent variable sets facilitates the efficient traversal to other basic feasible solutions.
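As a minimal sketch of this bookkeeping (the matrix layout is an illustrative assumption; columns are ordered so that A = [I | A'] as in Equation 5), the canonical form of Equation 1 and its trivially acquired basic feasible solution read as follows:

    import numpy as np

    # Equation 1 in canonical form (Equation 4): one slack variable per
    # inequality becomes a dependent variable with a unit column.
    # Columns ordered (x3, x4, x5, x1, x2) so that A = [I | A'].
    A = np.array([[1.0, 0.0, 0.0, 1.0, -2.0],
                  [0.0, 1.0, 0.0, 1.0,  1.0],
                  [0.0, 0.0, 1.0, 1.0,  4.0]])
    b = np.array([3.0, 6.0, 12.0])

    # The initial basic feasible solution drops out for free:
    # independent variables x1 = x2 = 0, dependent variables (x3, x4, x5) = b.
    x = np.concatenate([b, np.zeros(2)])   # (x3, x4, x5, x1, x2)
    assert np.allclose(A @ x, b) and np.all(x >= 0)
    print("initial extreme point (x1, x2) =", (float(x[3]), float(x[4])))  # vertex A = (0, 0)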
Assume that the feasible space H is a closed convex polyhedron (the open condition is an extreme case discussed later). Then H can be viewed as a connected graph G = (V, E) with each vertex Vi representing one extreme point x^i = (xd1^i, ..., xdm^i, xi1^i, ..., xin^i). According to the previous definition of an extreme point, Vi is the intersection point of n hyperplanes. Any one of the n hyperplanes of Vi can be replaced by an additional hyperplane to obtain an adjacent vertex Vj; hence Vi can have C(n, 1) = n adjacent vertices. In terms of independent variables, each vertex Vi has a distinct set of independent variables xi^i = (xi1^i, ..., xin^i), a subset of x. Thus, an adjacent vertex Vj differs from Vi in exactly one independent variable.
The traversal from Vi to its adjacent vertices can be accomplished as a pivot operation in tableau form. Table 1 shows an example of the tableau for Equation 1. As in the simplex method, the tableau is filled with the aij and bi; each row represents one constraint. Here, xd^A = (x3, x4, x5) and xi^A = (x1, x2). First, x1 is assumed to leave the set of independent variables. There are three alternatives in xd to choose as the entering variable, each of them representing one constraint blocking the leaving variable x1. Quantitatively, the blocking distance di is given by bi/ai1. To satisfy every constraint, the minimum positive blocking distance is required. In this case, min{b1/a11, b2/a21, b3/a31} = b1/a11 = 3; hence xd1 = x3 is chosen, giving xd^B = (x1, x4, x5) and xi^B = (x3, x2). The pivot element is the 1 bracketed in Table 1. Similarly, min{b1/a12, b2/a22, b3/a32} = b3/a32 for x2 leaving; therefore xd3 = x5 is the entering variable, giving xd^E = (x3, x4, x2) and xi^E = (x1, x5).
Table 1. Exploring the adjacent vertices for Equation 1

    Vertex A: xi = (x1, x2), xd = (x3, x4, x5), x = (0, 0, 3, 6, 12)

          x1     x2     x3    x4    x5    b     b/a1    b/a2
    x3   [1]    -2      1     0     0     3     3       -1.5
    x4    1      1      0     1     0     6     6        6
    x5    1     [4]     0     0     1     12    12       3

    (Pivot elements are bracketed: pivoting on [1] removes x1 from the
    independent set; pivoting on [4] removes x2.)
Following the same approach, all vertices and extreme points are computed as in Table 2. The extreme point associated with each pivot element is labeled next to the element. As shown in the table, not all pivot operations are necessary, since a pivot might lead to a previously resolved extreme point. Considering that all extreme points are connected in a graph structure, they can be traversed exactly once in a breadth-first search scheme; a sketch of the underlying pivot step follows.
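The sketch below (a generic tableau routine, not the authors' code) performs one pivot step with the minimum-positive-ratio test, reproducing the move from vertex A to vertex B of Table 1:

    import numpy as np

    def pivot(T, row, col):
        # Elementary row operations that make column `col` a unit column
        # with a 1 in `row` (the pivot element). T is the tableau [A | b].
        T = T.astype(float).copy()
        T[row] /= T[row, col]
        for r in range(T.shape[0]):
            if r != row:
                T[r] -= T[r, col] * T[row]
        return T

    # Tableau of Table 1 at vertex A: columns (x1, x2, x3, x4, x5, b).
    T = np.array([[1.0, -2.0, 1.0, 0.0, 0.0,  3.0],
                  [1.0,  1.0, 0.0, 1.0, 0.0,  6.0],
                  [1.0,  4.0, 0.0, 0.0, 1.0, 12.0]])

    # x1 leaves the independent set: minimum positive blocking distance b_i/a_i1.
    col = 0
    ratios = [T[r, -1] / T[r, col] if T[r, col] > 0 else np.inf for r in range(3)]
    row = int(np.argmin(ratios))       # row 0, distance 3, pivot element 1
    print(pivot(T, row, col))          # tableau at vertex B, x = (3, 0, 0, 3, 9)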
Table 2. All extreme points for Equation 1 (pivot elements are bracketed and labeled with the vertex they lead to)

    Vertex A: xi = (x1, x2), x = (0, 0, 3, 6, 12)
          x1       x2       x3    x4    x5     b     b/a1    b/a2
    x3   [1]→B    -2        1     0     0      3     3       -1.5
    x4    1        1        0     1     0      6     6        6
    x5    1       [4]→E     0     0     1      12    12       3

    Vertex B: xi = (x3, x2), x = (3, 0, 0, 3, 9)
          x1    x2       x3       x4    x5     b     b/a2    b/a3
    x1    1     -2       [1]→A    0     0      3     -1.5     3
    x4    0     [3]→C    -1       1     0      3     1       -3
    x5    0     6        -1       0     1      9     1.5     -9

    Vertex E: xi = (x1, x5), x = (0, 3, 9, 3, 0)
          x1        x2    x3    x4    x5        b     b/a1    b/a5
    x3    1.5       0     1     0     0.5       9     6       18
    x4   [0.75]→D   0     0     1     -0.25     3     4       -12
    x2    0.25      1     0     0    [0.25]→A   3     12      12

    Vertex C: xi = (x3, x4), x = (5, 1, 0, 0, 3)
          x1    x2    x3       x4        x5    b     b/a3    b/a4
    x1    1     0     1/3      2/3       0     5     15      7.5
    x2    0     1     -1/3    [1/3]→B    0     1     -3      3
    x5    0     0     [1]→D    -2        1     3     3       -1.5

    Vertex D: xi = (x4, x5), x = (4, 2, 3, 0, 0)
          x1    x2    x3    x4        x5       b     b/a4    b/a5
    x3    0     0     1     -2       [1]→C     3     -1.5    3
    x1    1     0     0    [4/3]→E   -1/3      4     3       -12
    x2    0     1     0     -1/3      1/3      2     -6      6
Based on the above discussion, the traverse-extreme-point (TEP) algorithm is presented next. The algorithm uses adjacency lists to represent the connected graph G of the feasible space (Cormen, Leiserson et al. 1990). Compared to the adjacency-matrix representation, the adjacency list is preferred due to its simplicity and the small number of edges. Besides the adjacency list xe.Adj, several additional data are maintained with each extreme point xe = {x1, ..., xn+m} in the graph: the sets of dependent and independent variables of xe are stored in xe.xd and xe.xi, the predecessor in xe.p, and the distance to the initial vertex xs in xe.depth. The algorithm also uses a queue Q to manage the collection of extreme points that have at least one unexplored neighbor. The procedure starts with an initial vertex xs and its set of independent variables xi = {xi1, xi2, ..., xin} ⊂ xs.
TEP(xs, xi)
1    V[G] ← {xs}
2    Q ← {xs}
3    xs.p ← ∅, xs.depth ← 0
4    xs.xi ← xi
5    xs.xd ← xs - xs.xi
6    while Q ≠ ∅
7        xe ← Head[Q]
8        for each xi ∈ xe.xi
9            j ← Min {bi/aij : bi/aij > 0}
10           xi[i] ← xe.xd[j]
11           xf ← Head[Q]
12           PIVOT ← TRUE
13           while (xf ≠ ∅) AND PIVOT
14               if xi = xf.xi then
15                   add xe in xf.Adj, add xf in xe.Adj
16                   PIVOT ← FALSE
17               xf ← xf.next
18           if PIVOT then
19               xf ← new extreme point pivoting around aij
20               xf.xi ← xi, xf.depth ← xe.depth + 1
21               xf.xd ← xe.xd, xf.xd[j] ← xe.xi[i]
22               V[G] ← xf
23               PUSH(Q, xf)
24               add xe in xf.Adj, add xf in xe.Adj
25               xf.p ← xe
26       POP(Q)
The algorithm works as follows. Lines 1-5 initialize the graph with the first extreme point. The main loop of the program is contained in lines 6-26 and iterates as long as there remains an incompletely explored extreme point in Q. With the queue data structure, the program avoids recursion, which is often costly. Lines 8-25 iterate over each independent variable xi of xe to expand the graph of extreme points. After finding the j-th constraint that blocks xi, Lines 9-10 locate the proper dependent variable xe.xd[j] (not xj). Lines 13-17 examine whether the adjacent vertex of xe already exists in Q. If not, a new extreme point xf, with the solution obtained by pivoting around aij, is added to G and Q in Lines 19-25. Line 26 pops xe once all of its adjacent vertices have been explored.
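The pseudocode translates almost line for line into Python. The sketch below is a minimal illustration under simplifying assumptions, not the authors' code: it takes the canonical tableau [A | b] of Equation 4 as input, uses the frozen set of independent-variable column indices as the vertex key, and omits the unboundedness and degeneracy guards discussed later in Section 7.2.

    from collections import deque
    import numpy as np

    def pivot(T, row, col):
        # Elementary row operations making column `col` a unit column.
        T = T.copy()
        T[row] /= T[row, col]
        for r in range(len(T)):
            if r != row:
                T[r] -= T[r, col] * T[row]
        return T

    def tep(T, dep, indep):
        # T: m x (n+m+1) canonical tableau [A | b]; dep[r] is the basic
        # (dependent) column of row r; indep lists the independent columns.
        start = frozenset(indep)
        tableaus = {start: (T, tuple(dep))}   # vertex key -> (tableau, basis)
        graph = {start: set()}                # adjacency lists
        depth = {start: 0}
        queue = deque([start])
        while queue:                          # lines 6-26 of the pseudocode
            key = queue[0]
            T_e, dep_e = tableaus[key]
            for col in key:                   # each independent variable of xe
                # Line 9: minimum positive blocking distance b_k / a_k,col.
                ratios = [T_e[r, -1] / T_e[r, col] if T_e[r, col] > 0 else np.inf
                          for r in range(len(T_e))]
                row = int(np.argmin(ratios))
                nxt = frozenset(key - {col} | {dep_e[row]})
                if nxt not in tableaus:       # lines 18-25: a new extreme point
                    new_dep = list(dep_e)
                    new_dep[row] = col
                    tableaus[nxt] = (pivot(T_e, row, col), tuple(new_dep))
                    depth[nxt] = depth[key] + 1
                    graph[nxt] = set()
                    queue.append(nxt)
                graph[key].add(nxt)           # record the edge both ways
                graph[nxt].add(key)
            queue.popleft()                   # line 26: POP(Q)
        return tableaus, graph, depth

    # Equation 1, columns (x1, x2, x3, x4, x5, b); slacks x3..x5 start as basis.
    T0 = np.array([[1., -2., 1., 0., 0.,  3.],
                   [1.,  1., 0., 1., 0.,  6.],
                   [1.,  4., 0., 0., 1., 12.]])
    tableaus, graph, depth = tep(T0, dep=[2, 3, 4], indep=[0, 1])
    for key, (T_v, dep_v) in tableaus.items():
        x = np.zeros(5)
        for r, c in enumerate(dep_v):
            x[c] = T_v[r, -1]
        print((float(x[0]), float(x[1])), "depth", depth[key])
    # Five vertices (0,0), (3,0), (0,3), (5,1), (4,2) at depths 0, 1, 1, 2, 2,
    # matching Table 2 and Figure 1.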
5 Correctness of the traversing algorithm
Given any extreme point xe = (b1^e, ..., bm^e, 0, ..., 0), with n trailing zeros, let xf be the adjacent vertex obtained by the TEP. Without loss of generality, let xm+j (1 ≤ j ≤ n) leave and xi (1 ≤ i ≤ m) enter the independent variable set of xe. Then

    xf = (b1^f, ..., b_{i-1}^f, 0, b_{i+1}^f, ..., bm^f, 0, ..., 0, b_{m+j}^f, 0, ..., 0)        (6)

with (j-1) zeros before position m+j and (n-j) zeros after it, where

    bk^f = bk^e - (bi^e / aij^e) akj^e      for k = 1, ..., i-1, i+1, ..., m
    bk^f = 0                                for k = i, m+1, ..., m+j-1, m+j+1, ..., m+n
    bk^f = bi^e / aij^e                     for k = m+j

Since bi^e/aij^e > 0 and, by the minimum-ratio rule of Line 9, bi^e/aij^e ≤ bk^e/akj^e whenever akj^e > 0, it follows that bk^f ≥ 0 for all k = 1, ..., m (for akj^e ≤ 0, bk^f ≥ bk^e ≥ 0 holds trivially). Furthermore, substituting xf into the canonical system at xe reproduces the right-hand side: row k (k ≠ i) of Ae xf gives bk^f + akj^e (bi^e/aij^e) = bk^e, and row i gives aij^e (bi^e/aij^e) = bi^e, so that Ae xf = be.
Hence xf is another basic feasible solution of the original problem. Since xe and xf could be any arbitrary extreme point and a corresponding adjacent vertex, all solutions obtained by the described TEP are basic feasible solutions of Equations 4-5. Moreover, all extreme points are connected and reachable from the initial vertex xs.
Assume that one extreme point xr is not reachable from xs, and let Hs denote the polyhedron spanned by the extreme points reachable from xs. Then xr is disconnected from Hs in E^n. However, both xr and Hs lie inside the entire feasible space H, and H is convex, so the segment joining xr to Hs lies in H; this contradicts the disconnection. Therefore the assumption is wrong, and all extreme points are reachable from xs.
The above discussion proves that the TEP reaches all extreme points starting from xs. Moreover, let d(xe, xs) denote the shortest-path distance from xe to xs, i.e., the minimum number of edges in any path from xe to xs. The TEP actually computes the shortest-path distance from every vertex to xs.
Lemma: Suppose the queue Q = {V1, V2, ..., Vr} during the execution of the TEP, where V1 is the head and Vr is the tail. Then V1.depth ≤ V2.depth ≤ ... ≤ Vr.depth ≤ V1.depth + 1.

Proof: The proof is by induction on the number of queue operations. Initially, when the queue contains only xs, the lemma certainly holds.

Assume V1.depth ≤ V2.depth ≤ ... ≤ Vr.depth ≤ V1.depth + 1. If the head V1 of the queue is popped, the new head is V2. Since V1.depth ≤ V2.depth, we have Vr.depth ≤ V1.depth + 1 ≤ V2.depth + 1, and the remaining inequalities are unaffected. If a new vertex Vr+1 is pushed into the queue, then Vr+1.depth = xf.depth = xe.depth + 1 = V1.depth + 1 according to Line 20 of the algorithm. Therefore Vr.depth ≤ Vr+1.depth, and the remaining inequalities are unaffected. Thus, the lemma holds throughout the execution of the TEP. ♦
Theorem 1: For any vertex xe, xe.depth is the shortest-path distance d(xe, xs) from xe to the initial extreme point xs in the TEP.

Proof: Denote by Vk the set of vertices at distance k from xs. The proof proceeds by induction on k.

When k = 0, V0 = {xs} and xs.depth = 0, so the basis holds.

Assuming that xe.depth = d(xe, xs) for all xe ∈ Vk-1, consider an arbitrary vertex xv ∈ Vk, where k ≥ 1. From the previous lemma, the depths of the vertices pushed into the queue increase monotonically, which implies that all vertices in Vk-1 are pushed into the queue before xv. Moreover, xv is discovered by some xu ∈ Vk-1. (xv cannot be discovered earlier, otherwise an adjacent vertex xu ∈ Vk-2 would give d(xv, xs) = d(xu, xs) + 1 = (k-2) + 1 ≠ k.) By the time xu becomes the head of the queue, xv.depth = xu.depth + 1 = d(xu, xs) + 1 = k. Since xv is an arbitrary vertex, the inductive step is proved. ♦
6 Analysis
Since the pivot operation requires O(m(n+m)) time to update every element of A in Equation 4, Line 19 of the code dominates the running time of the TEP. The graph structure is maintained such that the pivot operation occurs only once, when the extreme point is first created. Therefore the total time is proportional to the number of extreme points in the feasible space. As proved by Berenguer and Smith (1986), the mean number of extreme points of a randomly generated feasible space is no more than 2^n for any number of constraints and decision variables. Therefore, the presented algorithm works in exponential time, compared to the factorial time of the exhaustive combination method. For instance, a small system with n = 20 decision variables and m = 30 additional constraints would take no more than 2^20 ≈ 1.05×10^6 pivot operations in the TEP, while the exhaustive combination method needs to generate C(50, 20) ≈ 4.71×10^13 basic solutions.
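These figures can be checked directly (a two-line aside, not part of the original analysis):

    from math import comb
    print(2 ** 20)        # 1048576 ≈ 1.05e6 expected pivots for n = 20
    print(comb(50, 20))   # 47129212243960 ≈ 4.71e13 basic solutions to test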
7 Discussion

7.1 Initial extreme point
As described above, the TEP assumes an initial extreme point as input. There exist several methods for generating this first extreme point; one of them is the two-phase method (Press, Teukolsky et al. 1992).

Denote the artificial variables zi = bi - xi - ai1 xm+1 - ... - ain xm+n (zi ≥ 0, i = 1, ..., m) and the auxiliary objective function z' = z1 + z2 + ... + zm in Equation 4. The new system is a typical linear programming problem in canonical form with xi = (x1, ..., xn+m) and xd = (z1, ..., zm). The simplex method can solve this problem and provide a solution that minimizes z'. Noting that z' is minimized to 0 only if all the nonnegative zi are assigned 0 and fit into the independent variables xi' = (z1, ..., zm, xi1', ..., xin'), an extreme point xi = (xi1', ..., xin') of Equation 4 can then be acquired by simply eliminating the zi from xi'.
A bonus of the two-phase method is the identification of an empty feasible space: if the first phase does not produce zero values for all zi, there is no initial basic feasible solution, i.e., the feasible space is empty. As Smale (1983) proved for linear programming, the expected number of simplex steps required is statistically no larger than the order of m or n, so this identification is inexpensive.
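In practice, the first phase can also be delegated to an off-the-shelf LP solver. The sketch below uses scipy.optimize.linprog, which is an assumption of this note rather than a tool referenced in the paper: minimizing a zero objective over the constraints of Equation 1 either flags an empty feasible space or returns a feasible point, and the simplex-based HiGHS backend normally returns a basic feasible solution usable as the initial vertex xs.

    from scipy.optimize import linprog

    # Phase-one stand-in for Equation 1: a zero objective turns the LP solve
    # into a pure feasibility test. status == 2 signals an empty feasible
    # space; otherwise res.x is a vertex candidate for the TEP's xs.
    A_ub = [[1, -2], [1, 1], [1, 4]]
    b_ub = [3, 6, 12]
    res = linprog(c=[0, 0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * 2, method="highs")
    print("feasible:", res.status == 0, "initial vertex:", res.x)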
7.2 Unboundedness and degeneracy
The TEP computes the blocking distance bi/aij in Line 9, and there are some special cases. First, every aij in the column could be negative or zero. Then xi could increase indefinitely without violating the nonnegativity of the other variables, i.e., the feasible space is unbounded in the direction of increasing xi. In this case, xe is labeled as unbounded and the procedure moves on to the next vertex. In addition, some bi could be 0. Geometrically, this means that more than n hyperplanes intersect at the same point; the same situation is exhibited by multiple tied blocking distances. If so, the extreme point xe is assigned an alias for each of its different sets of independent variables.
Therefore, the pseudocode is modified as follows:

9      xi ← xe.xi, xd ← xe.xd
9.1    j ← 0, b_min ← ∞
9.2    for k ← 1 to m
9.3        if (aik > 0) AND (bk/aik < b_min) then
9.4            j ← k, b_min ← bk/aik
9.5    if b_min = ∞ then
9.6        xe.Unbounded ← TRUE
9.7        continue
9.8    else if b_min = 0 then
9.9        xi[i] ← xe.xd[j]
9.10       xe.Alias_xi ← xi
9.11       continue
10     xi[i] ← xe.xd[j]
11     xf ← Head[Q]
12     PIVOT ← TRUE
13     while (xf ≠ ∅) AND PIVOT
14         if (xi = xf.xi) OR (xi matches one of xf.Alias_xi) then
           ...
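The guarded ratio test of Lines 9-9.11 reads naturally in code; a minimal sketch under the same tableau conventions as the earlier examples (the helper name and the tolerance are illustrative choices):

    import numpy as np

    def blocking_row(T, col, eps=1e-9):
        # Minimum positive blocking distance b_k/a_k,col, with the two special
        # cases of this section: no positive a_k,col means the feasible space
        # is unbounded in this direction; a zero distance signals degeneracy,
        # i.e. the same extreme point under an alias set of independent variables.
        best_row, best_ratio = None, np.inf
        for r in range(len(T)):
            if T[r, col] > eps:
                ratio = T[r, -1] / T[r, col]
                if ratio < best_ratio:
                    best_row, best_ratio = r, ratio
        if best_row is None:
            return None, "unbounded"
        if best_ratio < eps:
            return best_row, "degenerate"
        return best_row, "ok"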
7.3 From decision variables to multiple performance attributes
Recall that the objective functions are included in the form of constraints. Due to the duality of the linear model, all extreme points in the feasible decision space can be mapped into the feasible attribute space, where the global Pareto-optimal boundary, or efficient frontier, for the multiple attributes is identified (Zhu and Kazmer 2000).

Figure 2 is an example of the trade-off between two performance attributes, cost and time. Assuming that less cost and less time are favored, the feasible attribute space provides the Pareto-optimal boundary ABCD for trading off cost against time. Based on the established relationship between cost and time, the decision maker can then select the desired solution according to his/her own preference; for instance, solution C could be selected for its low cost and relatively fast time. It is important to note that the trade-off is obtained from the established feasible space rather than by imposing arbitrary weight factors on the multiple attributes.
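Once the extreme points are mapped into attribute space, the efficient frontier of Figure 2 can be read off with a simple dominance filter. A sketch, with hypothetical (cost, time) values and assuming both attributes are minimized:

    def pareto_frontier(points):
        # Keep the points not dominated by any other point, assuming both
        # attributes (e.g. cost and time) are to be minimized.
        frontier = []
        for p in points:
            dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
            if not dominated:
                frontier.append(p)
        return sorted(frontier)

    # Hypothetical (cost, time) images of the extreme points of some model.
    attribute_points = [(9, 1), (6, 2), (4, 4), (3, 7), (8, 6), (10, 5)]
    print(pareto_frontier(attribute_points))   # -> [(3, 7), (4, 4), (6, 2), (9, 1)]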
[Figure 2 here: the feasible attribute space in the (time, cost) plane, with the Pareto-optimal boundary ABCD along its lower-left edge.]
Figure 2. Decision making with the performance space
8 Conclusion
The global feasible space is important when making decisions with multiple objectives and fuzzy constraints. This paper presents an extensive simplex method that solves the global feasible space of a linear model. By adopting a pivot operation similar to that of the Simplex method, the algorithm traverses every basic feasible solution exactly once. The method is superior to the exhaustive combination method because it computes only the basic feasible solutions. Moreover, the graph structure of all basic feasible solutions is beneficial to many applications. For example, the volume of the feasible space may be a critical index for evaluating the risk and uncertainty of a formulated problem. Also, the feasible space can be established for a set of specified performance attributes to solve the inverse problem. In particular, the shortest path between two solutions provides the least change of the decision alternatives.
9 Acknowledgement
This research is supported by the National Science Foundation, grant #9702797, through the Division of Design, Manufacture, and Industrial Innovation.
10 References
Berenguer, S. E. and R. L. Smith (1986). “Expected Number of Extreme Points of a
Random Linear Program.” Mathematical Programming: 129-134.
Cormen, T. H., C. E. Leiserson, et al. (1990). Introduction to Algorithms, The MIT Press.
Dantzig, G. B. (1998). Linear Programming and Extensions, Princeton University Press.
Press, W. H., S. A. Teukolsky, et al. (1992). Numerical recipes in C: the art of scientific
computing, Cambridge University Press.
Reklaitis, G. V., A. Ravindran, et al. (1983). Engineering Optimization Methods and
Application, John Wiley & Sons.
Smale, S. (1983). “On the Average Number of Steps of the Simplex Method of Linear
Programming.” Mathematical Programming: 241-262.
Zhu, L. and D. Kazmer (2000). A performance-based representation of constraint-based reasoning and decision-based design. Proceedings of the ASME Design Engineering Technical Conference, 12th International Conference on Design Theory and Methodology, Baltimore, Maryland, DETC2000/DTM-14556.