AN INTERACTIVE DESIRABILITY FUNCTION BASED
APPROACH TO GUIDED PARETO-OPTIMAL FRONT
A THESIS
Submitted in partial fulfilment of the
requirements for the award of the degree
of
DOCTOR OF PHILOSOPHY
in
MATHEMATICS
by
AMAR KISHOR
DEPARTMENT OF MATHEMATICS
INDIAN INSTITUTE OF TECHNOLOGY ROORKEE
ROORKEE-247 667 (INDIA)
DECEMBER, 2010
©INDIAN INSTITUTE OF TECHNOLOGY ROORKEE, ROORKEE, 2010
ALL RIGHTS RESERVED
INDIAN INSTITUTE OF TECHNOLOGY ROORKEE
ROORKEE
CANDIDATE'S DECLARATION
I hereby certify that the work which is being presented in the thesis entitled
AN
INTERACTIVE DESIRABILITY FUNCTION BASED APPROACH TO GUIDED
PARETO-OPTIMAL FRONT in partial fulfilment of the requirements for the award of the
Degree of Doctor of Philosophy and submitted in the Department of Mathematics of the Indian
Institute of Technology Roorkee, Roorkee is an authentic record of my own work carried out
during a period from July, 2005 to December, 2010 under the supervision of Dr. Shiv Prasad
Yadav, Associate Professor, Department of Mathematics and Dr. Surendra Kumar, Assistant
Professor, Electrical Engineering Department, Indian Institute of Technology Roorkee, Roorkee.
The matter presented in this thesis has not been submitted by me for the award of any
other degree of this or any other Institute.
(AMAR KISHOR)
This is to certify that the above statement made by the candidate is correct to the best of
our knowledge.
(Shiv Prasad Yadav)
Supervisor
(Surendra Kumar)
Supervisor
Date: December 27, 2010
The Ph.D. Viva-Voce Examination of Mr. Amar Kishor, Research Scholar, has been held on ____________.
Signature of Supervisors
Signature of External Examiner
Abstract
Decision-making involves the use of a rational process for selecting the best of several alternatives. In real life, decisions are often made on the basis of multiple, conflicting and non-commensurable criteria/objectives in uncertain/imprecise environments. A multi-objective evolutionary algorithm (MOEA) usually attempts to find a good approximation to the complete Pareto-optimal front (POF), which then allows the user to decide among many alternatives. If a single solution is to be selected in a multi-objective optimization problem (MOOP), at some point during the process the decision maker (DM) has to reveal his/her preferences. Specifying these preferences a priori, i.e., before the alternatives are known, often means asking too much of the DM. On the other hand, searching for all nondominated solutions, as most MOEAs (a posteriori) do, may waste optimization effort on finding solutions that are clearly unacceptable to the DM. This study introduces an intermediate approach that asks the DM for partial preference information a priori and then focuses the search (a posteriori) on those regions of the POF that seem most interesting to the DM. In this way, it is possible to provide a larger number of relevant solutions.
The DM or user generally has at least a vague idea about what kind of solutions might be preferred. If such preference information is available, it can be used to focus the search, yielding a more fine-grained approximation of the most relevant (from the DM's perspective) regions of the POF. A novel approach, named the multi-objective evolutionary algorithm based interactive desirability function approach (MOEA-IDFA), is developed to guide the POF into interesting regions. A set of Pareto-optimal solutions is determined via desirability functions (DFs), which reveal the DM's preferences regarding different objective regions. The proposed method is highly effective in generating a compromise solution that is faithful to the DM's preference structure. Theoretical analysis of the methodology is presented to establish the effectiveness of the proposed approach. We apply the proposed approach to a number of test problems as well as to some real life problems of differing complexity. It is observed that in almost all cases the proposed approach efficiently guides the population towards the interesting region(s), allowing a faster convergence and a better coverage of these areas of the POF. The idea here is to take the desires of the DM into account more closely when introducing bias into the set of nondominated solutions. In this way we can create a decision support system (DSS) that helps the DM find the most satisfactory solution faster. We develop different combinations of DFs depending upon the choice of the DM and demonstrate these cases with examples. Since the approach is MOEA based, two different MOEAs are used to validate it: NSGA-II (elitist nondominated sorting genetic algorithm) and MOPSO-CD (multi-objective particle swarm optimization with crowding distance). Clear evidence of the efficiency of the proposed ideas is presented via a summary of the results of the extensive computational tests carried out in the present thesis. This thesis is organized in two parts. The first part deals with the development of methodologies (Chapters 2, 3, 4 and 5) and the second part deals with their applications to real world reliability engineering problems (Chapter 6). Conclusions and future scope are summarized in Chapter 7.
Acknowledgements
First and foremost, I would like to thank my supervisors and mentors Dr. Shiv Prasad
Yadav, Associate Professor, Department of Mathematics, Indian Institute of
Technology Roorkee and Dr. Surendra Kumar, Assistant Professor, Department of
Electrical Engineering, Indian Institute of Technology Roorkee. I feel privileged to
express my sincere regards and gratitude to my guides for their valuable guidance and
constant encouragement throughout the course of my research work.
I express my earnest regards to Prof. Rama Bhargava, Head, Department of
Mathematics, Indian Institute of Technology Roorkee for providing valuable advice,
computational and other infrastructural facilities during my thesis work.
I would also like to thank Prof. T. R. Gulati, DRC Chairman, Prof. G. S.
Srivastava, former DRC Chairman, my SRC members Dr. N. Sukavanam and Prof.
R.S. Anand for their guidance, cooperation, and many valuable comments dedicated to
my thesis.
The encouragement, support and cooperation which I have received from my
friends Komal, Ashok, Jagdish, Deepmala, Gaurav, Monika, Anupam, Kavita, Mohit,
Jai Prakash, Sanjeev, Zeyauddin, Alok, Jaideep, Saif, Sangeeta, Mukesh, Abishek,
Neeraj, Karunesh, Manjit, Rajni, Suraj and Prabhanjan are beyond the scope of my
acknowledgement, yet I would like to express my heartfelt gratitude to them.
I owe my sincere thanks to my family members and relatives for their blessings,
patience and moral support. I would like to give my special thanks to my brother
Lovekush and sisters Sneha and Rakshita. I also want to express my appreciation to my
wife Seema for encouraging me to achieve my goals.
Above all, I express my deepest gratitude to my parents and my newly born
daughter Aashi, to whom I dedicate this thesis.
The financial assistance from the Council of Scientific and Industrial Research
(CSIR), New Delhi, India, is also gratefully acknowledged.
Finally, my greatest regards to the Almighty for bestowing upon me the courage
to face the complexities of life and complete this thesis successfully.
(Amar Kishor)
Roorkee
December 27, 2010
List of Publications
Journal Papers
(J1) A multi-objective genetic algorithm for reliability optimization problem, International Journal of Performability Engineering, 5 (3), 227-234, April 2009.
(J2) Interactive fuzzy multi-objective reliability optimization using NSGA-II, OPSEARCH, Springer, 46, 214-224, June 2009.
(J3) Incorporating preferences in multi-objective optimization problems: A novel approach, communicated to The Journal of Information & Optimization Sciences.
(J4) Guiding MOEA towards interesting regions: A DF based approach, communicated to International Journal of Approximate Reasoning.
(J5) Introducing bias among Pareto-optimal solutions: Reliability optimization application, communicated to International Journal of Quality & Reliability Management.
(J6) Interactive trade-off using desirability function and multi-objective evolutionary algorithm, communicated to Decision Support Systems.
(J7) MOEA-IDFA: Multi-objective evolutionary algorithm based interactive desirability function approach, communicated to Journal of Computational Methods in Sciences and Engineering.
Conference Papers
(C1) Application of a multi-objective genetic algorithm to solve reliability optimization problem, Conference on Computational Intelligence and Multimedia Applications, 1, 458-462, held at Sivakasi, Tamilnadu, 13-15 Dec. 2007.
ieeexplore.ieee.org/iel5/4426318/4426531/04426622.pdf?arnumber=4426622
(C2) Complex bridge system bi-objective reliability optimization problem using NSGA-II, Proceedings of the 32nd National Systems Conference (NSC-2008), 635-639, organized by I.I.T. Roorkee, Roorkee, 17-19 Dec. 2008.
Table of Contents
Abstract i
Acknowledgements iii
List of Publications v
Table of Contents vii
List of Figures xi
List of Tables xvii
List of Abbreviations xxi
CHAPTER 1 Introduction 1
1.1 MOOP (Basic Concepts and Terminology) 5
1.2 Classification and Review of MOEAs 10
1.2.1 Why Evolutionary Approach to MOOP 11
1.2.2 A Priori Preference Articulation: (decide → search) 13
1.2.3 Progressive Preference Articulation: (decide ↔ search) 15
1.2.4 A Posteriori Preference Articulation: (search → decide) 16
1.3 DM's Partial Preference Articulation with MOEA - A Review 24
1.3.1 Approaches Providing Reference Point 25
1.3.2 Approaches Based on Trade-off Information 27
1.3.3 Approaches Based on Marginal Contribution 28
1.3.4 Approaches Based on Scaling 28
1.3.5 Other Approaches 30
1.4 Objectives of the Thesis 32
1.5 Organization of the Thesis 33
CHAPTER 2 Articulation of an a Priori Approach with an a Posteriori Approach 39
2.1 Introduction 39
2.2 DFA as a Priori 41
2.2.1 Linear DF 42
2.3 Description of MOEAs Applied as a Posteriori 43
2.3.1 NSGA-II or Elitist Nondominated Sorting Genetic Algorithm 43
2.3.2 Run Time Complexity of NSGA-II 46
2.3.3 MOPSO-CD or Multi-objective Particle Swarm Optimization with Crowding Distance 47
2.3.4 Run Time Complexity of MOPSO-CD 48
2.3.5 Constraint Handling in NSGA-II and MOPSO-CD 49
2.3.6 Performance Measure for NSGA-II and MOPSO-CD 49
2.4 Proposed Methodology 50
2.4.1 Assumptions 53
2.4.2 Detailed Procedure of the Methodology: MOEA-IDFA 53
2.5 Experimental Suite 55
2.6 Results and Discussion 56
2.7 Conclusion 57
CHAPTER 3 Guided POF Articulating Nonlinear DFA with an MOEA (Convex-Concave Combination) 75
3.1 Introduction 75
3.2 Nonlinear (Convex/Concave) DFA as a Priori 77
3.3 Proposed Methodology 80
3.3.1 Detailed Procedure of the Methodology: MOEA-IDFA 82
3.4 Results and Discussion 83
3.4.1 Effect of Variations in DF's Key Parameter on POF 85
3.5 Conclusion 85
CHAPTER 4 Guided POF Articulating Nonlinear DFA with an MOEA (All Sigmoidal Combination) 99
4.1 Introduction 99
4.2 Nonlinear (Sigmoidal) DFA as a Priori 102
4.3 Proposed Methodology 103
4.3.1 Detailed Procedure of the Methodology: MOEA-IDFA 105
4.4 Results and Discussion 106
4.4.1 Effect of Variations in DF's Key Parameter on POF 108
4.5 Conclusion 108
CHAPTER 5 Guided POF Articulating Nonlinear DFA with an MOEA (All Convex Combination) 121
5.1 Introduction 121
5.2 Nonlinear (Convex) DFA as a Priori 123
5.3 Proposed Methodology 124
5.3.1 Detailed Procedure of the Methodology: MOEA-IDFA 126
5.4 Results and Discussion 128
5.4.1 Effect of Variations in DF's Key Parameter on POF 129
5.5 Conclusion 130
CHAPTER 6 Application of the MOEA-IDFA to Reliability Optimization Problems 143
6.1 Reliability Optimization (An Overview) 143
6.1.1 Preference Incorporation in Reliability Optimization Problems 146
6.2 Reliability Optimization of a Series System 152
6.2.1 Problem Description 152
6.2.2 Step-by-Step Illustration of MOEA-IDFA for Series System 152
6.3 Reliability Optimization of Life Support System in a Space Capsule 154
6.3.1 Step-by-Step Illustration of MOEA-IDFA for Life Support System in a Space Capsule 154
6.4 Reliability Optimization of a Complex Bridge System 156
6.4.1 Step-by-Step Illustration of MOEA-IDFA for Complex Bridge System 156
6.5 Residual Heat Removal (RHR) System of a Nuclear Power Plant Safety System 157
6.5.1 Step-by-Step Illustration of MOEA-IDFA for RHR System 159
6.6 Reliability Optimization of a Mixed Series-Parallel System 160
6.6.1 Step-by-Step Illustration of MOEA-IDFA for Mixed Series-Parallel System 161
6.7 Conclusion 162
CHAPTER 7 Conclusions and Scope for Future Work 180
7.1 Conclusions 180
7.2 Future Scope 182
BIBLIOGRAPHY 184
List of Figures
Figure 1.1 Different Ways of Preference Articulation 35
Figure 1.2 An Example to Travel Bigger Distances in Ways that are More Economical 36
Figure 1.3 Description of Dominated and Nondominated Solutions 36
Figure 1.4 Preference Based Classification of MOEA 37
Figure 1.5 Nondominated Ranking of Search Space for All Minimization Case 38
Figure 1.6 Pseudo Code for MOPSO 38
Figure 2.1 STB Type of DF 58
Figure 2.2 LTB Type of DF 58
Figure 2.3 Description of Crowding Distance 58
Figure 2.4 The Hypervolume Enclosed by the Nondominated Solutions 59
Figure 2.5 Nondominated Sorting of a Population 59
Figure 2.6 Flow Chart Representation of the NSGA-II Algorithm 60
Figure 2.7 Flow Chart of the MOPSO-CD Algorithm 61
Figure 2.8 Flow Chart of the Proposed Procedure: MOEA-IDFA 62
Figure 2.9 POFs of SCH1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Linear DF Case) 65
Figure 2.10 POFs of SCH2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Linear DF Case) 66
Figure 2.11 POFs of KUR w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Linear DF Case) 67
Figure 2.12 POFs of ZDT1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Linear DF Case) 68
Figure 2.13 POFs of ZDT2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Linear DF Case) 69
Figure 2.14 POFs of ZDT3 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Linear DF Case) 70
Figure 2.15 POFs of TNK w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Linear DF Case) 71
Figure 2.16 POFs of VNT w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Linear DF Case) 72
Figure 2.17 POFs of MHHM1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Linear DF Case) 73
Figure 2.18 POFs of MHHM2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Linear DF Case) 74
Figure 3.1 Shape of a Convex DF 79
Figure 3.2 Shape of a Concave DF 79
Figure 3.3 POFs of SCH1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Convex-Concave Combination) 87
Figure 3.4 POFs of SCH2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Convex-Concave Combination) 88
Figure 3.5 POFs of KUR w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Convex-Concave Combination) 89
Figure 3.6 POFs of ZDT1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Convex-Concave Combination) 90
Figure 3.7 POFs of ZDT2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Convex-Concave Combination) 91
Figure 3.8 POFs of ZDT3 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Convex-Concave Combination) 92
Figure 3.9 POFs of TNK w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Convex-Concave Combination) 93
Figure 3.10 POFs of VNT w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Convex-Concave Combination) 94
Figure 3.11 POFs of MHHM1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Convex-Concave Combination) 95
Figure 3.12 POFs of MHHM2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (Convex-Concave Combination) 96
Figure 3.13 Effect of Variations in Key Parameter of DF on POF for SCH1 97
Figure 4.1 Proposed STB Type of Sigmoidal DF 102
Figure 4.2 POFs of SCH1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Sigmoidal Combination) 110
Figure 4.3 POFs of SCH2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Sigmoidal Combination) 111
Figure 4.4 POFs of KUR w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Sigmoidal Combination) 112
Figure 4.5 POFs of ZDT1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Sigmoidal Combination) 113
Figure 4.6 POFs of ZDT2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Sigmoidal Combination) 114
Figure 4.7 POFs of ZDT3 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Sigmoidal Combination) 115
Figure 4.8 POFs of TNK w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Sigmoidal Combination) 116
Figure 4.9 POFs of VNT w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Sigmoidal Combination) 117
Figure 4.10 POFs of MHHM1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Sigmoidal Combination) 118
Figure 4.11 POFs of MHHM2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Sigmoidal Combination) 119
Figure 4.12 Effect of Variations in Key Parameter of DF on POF for SCH1 120
Figure 5.1 STB Type of a Convex DF 124
Figure 5.2 POFs of SCH1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Convex Combination) 131
Figure 5.3 POFs of SCH2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Convex Combination) 132
Figure 5.4 POFs of KUR w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Convex Combination) 133
Figure 5.5 POFs of ZDT1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Convex Combination) 134
Figure 5.6 POFs of ZDT2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Convex Combination) 135
Figure 5.7 POFs of ZDT3 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Convex Combination) 136
Figure 5.8 POFs of TNK w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Convex Combination) 137
Figure 5.9 POFs of VNT w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Convex Combination) 138
Figure 5.10 POFs of MHHM1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Convex Combination) 139
Figure 5.11 POFs of MHHM2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA (All Convex Combination) 140
Figure 5.12 Effect of Variations in Key Parameter of DF on POF for SCH1 141
Figure 6.1 Block Diagram of Series System 164
Figure 6.2 Block Diagram of a Life Support System in a Space Capsule 164
Figure 6.3 Block Diagram of Complex Bridge System 165
Figure 6.4 Mixed Series-Parallel System 165
Figure 6.5 Schematic of the RHR of a Nuclear Power Plant 166
Figure 6.6 Simplified Fault Tree of the RHR System (Apostolakis, 1974) 167
Figure 6.7 POFs w.r.t. NSGA-II and MOPSO-CD of a Series System (No Preference Case) 167
Figure 6.8 POFs w.r.t. NSGA-II and MOPSO-CD of a Life Support System in a Space Capsule (No Preference Case) 168
Figure 6.9 POFs w.r.t. NSGA-II and MOPSO-CD of a Complex Bridge System (No Preference Case) 168
Figure 6.10 POFs w.r.t. NSGA-II and MOPSO-CD of a Series System at Different Preferences: (a) For Preference 1, (b) For Preference 2, (c) For Preference 3, (d) For Preference 4 170
Figure 6.11 POFs w.r.t. NSGA-II and MOPSO-CD of a Life Support System in a Space Capsule at Different Preferences: (a) For Preference 1, (b) For Preference 2, (c) For Preference 3, (d) For Preference 4 172
Figure 6.12 POFs w.r.t. NSGA-II and MOPSO-CD of a Complex Bridge System at Different Preferences: Preference 1 (a), Preference 2 (b), Preference 3 (c), Preference 4 (d) 174
Figure 6.13 POFs w.r.t. NSGA-II and MOPSO-CD of an RHR System at Different Preferences: (a) For No Preference, (b) For Preference 1, (c) For Preference 2, (d) For Preference 4 176
Figure 6.14 POFs w.r.t. NSGA-II and MOPSO-CD of a Mixed Series-Parallel System at Different Preferences: (a) For No Preference, (b) For Preference 2, (c) For Preference 3, (d) For Preference 4 178
List of Tables
Table 2.1 Description of Unconstrained Bi-objective Problems 63
Table 2.2 Description of Constrained Bi-objective and Unconstrained Tri-objective Problems 64
Table 2.3 Parameters and Hypervolumes for SCH1 using MOEA-IDFA (Linear DF Case) 65
Table 2.4 Parameters and Hypervolumes for SCH2 using MOEA-IDFA (Linear DF Case) 66
Table 2.5 Parameters and Hypervolumes for KUR using MOEA-IDFA (Linear DF Case) 67
Table 2.6 Parameters and Hypervolumes for ZDT1 using MOEA-IDFA (Linear DF Case) 68
Table 2.7 Parameters and Hypervolumes for ZDT2 using MOEA-IDFA (Linear DF Case) 69
Table 2.8 Parameters and Hypervolumes for ZDT3 using MOEA-IDFA (Linear DF Case) 70
Table 2.9 Parameters and Hypervolumes for TNK using MOEA-IDFA (Linear DF Case) 71
Table 2.10 Parameters and Hypervolumes for VNT using MOEA-IDFA (Linear DF Case) 72
Table 2.11 Parameters and Hypervolumes for MHHM1 using MOEA-IDFA (Linear DF Case) 73
Table 2.12 Parameters and Hypervolumes for MHHM2 using MOEA-IDFA (Linear DF Case) 74
Table 3.1 Parameters and Hypervolumes for SCH1 using MOEA-IDFA (Convex-Concave Combination) 87
Table 3.2 Parameters and Hypervolumes for SCH2 using MOEA-IDFA (Convex-Concave Combination) 88
Table 3.3 Parameters and Hypervolumes for KUR using MOEA-IDFA (Convex-Concave Combination) 89
Table 3.4 Parameters and Hypervolumes for ZDT1 using MOEA-IDFA (Convex-Concave Combination) 90
Table 3.5 Parameters and Hypervolumes for ZDT2 using MOEA-IDFA (Convex-Concave Combination) 91
Table 3.6 Parameters and Hypervolumes for ZDT3 using MOEA-IDFA (Convex-Concave Combination) 92
Table 3.7 Parameters and Hypervolumes for TNK using MOEA-IDFA (Convex-Concave Combination) 93
Table 3.8 Parameters and Hypervolumes for VNT using MOEA-IDFA (Convex-Concave Combination) 94
Table 3.9 Parameters and Hypervolumes for MHHM1 using MOEA-IDFA (Convex-Concave Combination) 95
Table 3.10 Parameters and Hypervolumes for MHHM2 using MOEA-IDFA (Convex-Concave Combination) 96
Table 4.1 Parameters and Hypervolumes for SCH1 using MOEA-IDFA (All Sigmoidal Combination) 110
Table 4.2 Parameters and Hypervolumes for SCH2 using MOEA-IDFA (All Sigmoidal Combination) 111
Table 4.3 Parameters and Hypervolumes for KUR using MOEA-IDFA (All Sigmoidal Combination) 112
Table 4.4 Parameters and Hypervolumes for ZDT1 using MOEA-IDFA (All Sigmoidal Combination) 113
Table 4.5 Parameters and Hypervolumes for ZDT2 using MOEA-IDFA (All Sigmoidal Combination) 114
Table 4.6 Parameters and Hypervolumes for ZDT3 using MOEA-IDFA (All Sigmoidal Combination) 115
Table 4.7 Parameters and Hypervolumes for TNK using MOEA-IDFA (All Sigmoidal Combination) 116
Table 4.8 Parameters and Hypervolumes for VNT using MOEA-IDFA (All Sigmoidal Combination) 117
Table 4.9 Parameters and Hypervolumes for MHHM1 using MOEA-IDFA (All Sigmoidal Combination) 118
Table 4.10 Parameters and Hypervolumes for MHHM2 using MOEA-IDFA (All Sigmoidal Combination) 119
Table 5.1 Parameters and Hypervolumes for SCH1 using MOEA-IDFA (All Convex Combination) 131
Table 5.2 Parameters and Hypervolumes for SCH2 using MOEA-IDFA (All Convex Combination) 132
Table 5.3 Parameters and Hypervolumes for KUR using MOEA-IDFA (All Convex Combination) 133
Table 5.4 Parameters and Hypervolumes for ZDT1 using MOEA-IDFA (All Convex Combination) 134
Table 5.5 Parameters and Hypervolumes for ZDT2 using MOEA-IDFA (All Convex Combination) 135
Table 5.6 Parameters and Hypervolumes for ZDT3 using MOEA-IDFA (All Convex Combination) 136
Table 5.7 Parameters and Hypervolumes for TNK using MOEA-IDFA (All Convex Combination) 137
Table 5.8 Parameters and Hypervolumes for VNT using MOEA-IDFA (All Convex Combination) 138
Table 5.9 Parameters and Hypervolumes for MHHM1 using MOEA-IDFA (All Convex Combination) 139
Table 5.10 Parameters and Hypervolumes for MHHM2 using MOEA-IDFA (All Convex Combination) 140
Table 6.1 Data for Mixed Series-Parallel System 161
Table 6.2 Third Order Minimal Cut Sets 166
Table 6.3 Initial a Priori Parameters for Series System 169
Table 6.4 Other a Priori Parameters for Series System 169
Table 6.5 A Posteriori Parameters for Series System 169
Table 6.6 Initial a Priori Parameters for Life Support System in a Space Capsule 171
Table 6.7 Other a Priori Parameters for Life Support System in a Space Capsule 171
Table 6.8 A Posteriori Parameters for Life Support System in a Space Capsule 171
Table 6.9 Initial a Priori Parameters for Complex Bridge System 173
Table 6.10 Other a Priori Parameters for Complex Bridge System 173
Table 6.11 A Posteriori Parameters for Complex Bridge System 173
Table 6.12 Initial a Priori Parameters for RHR System 175
Table 6.13 Other a Priori Parameters for RHR System 175
Table 6.14 A Posteriori Parameters for RHR System 175
Table 6.15 Initial a Priori Parameters for Mixed Series-Parallel System 177
Table 6.16 Other a Priori Parameters for Mixed Series-Parallel System 177
Table 6.17 A Posteriori Parameters for Mixed Series-Parallel System 177
List of Abbreviations
ACO  Ant colony optimization
CFO  Central force optimization
DE  Differential evolution
DF  Desirability function
DFA  Desirability function based approach
DM  Decision maker
DSS  Decision support system
EA  Evolutionary algorithm
GA  Genetic algorithm
GSO  Glowworm swarm optimization
IDFA  Interactive desirability function approach
IIMOM  Intelligent interactive multi-objective optimization method
LTB  Larger the better
MCDM  Multi-criteria decision-making
MCS  Minimal cut sets
MMP  Multi-objective mathematical programming
MOEA  Multi-objective evolutionary algorithm
MOEA-IDFA  Multi-objective evolutionary algorithm based interactive desirability function approach
MOGA  Multi-objective genetic algorithm
MOLP  Multi-objective linear problem
MOOP  Multi-objective optimization problem
MOOT  Multi-objective optimization technique
MOPSO  Multi-objective particle swarm optimization
MOPSO-CD  Multi-objective particle swarm optimization with crowding distance
NLMOOP  Nonlinear multi-objective optimization problem
NPGA  Niched Pareto genetic algorithm
NSGA  Nondominated sorting genetic algorithm
NSGA-II  Elitist nondominated sorting genetic algorithm
OR  Operations research
POF  Pareto-optimal front
PSO  Particle swarm optimization
RHR  Residual heat removal
SOOP  Single objective optimization problem
SPEA  Strength Pareto evolutionary algorithm
STB  Smaller the better
CHAPTER 1
Introduction
MOOP is an important class of problems copiously encountered in engineering and industrial contexts. Many problems addressed by classical single objective models are actually multi-objective in nature, the reason being that the outcomes associated with the decisions are multidimensional. Criticism over the use of a single criterion (objective) as the sole basis for decision-making in such cases necessitates the explicit treatment of multiple measures of solution quality, or multiple objectives, in decision-making problems. Multi-criteria decision-making (MCDM) encompasses all the quantitative decision problems characterized by multiple measures of solution quality.
There is no actual decision-making involved in a single objective optimization problem (SOOP), as the decision is implicit in the measurement of a scalar-valued objective function. We can think of decision-making only when multiple objectives, criteria, functions, etc. are involved, as the alternatives of choice then become more complex and conflicting in nature. Thus, the problem of combining these objectives into a single objective (measure) becomes difficult and sometimes impractical. If a single measure representing the DM's preference structure can be found, such as a weighted average of the individual objectives or a value function, the multi-objective mathematical programming (MMP) problem can be recast as a SOOP; there would then be no need for multi-objective optimization techniques (MOOTs) in this case. However, the weighted average is often difficult to find due to the non-commensurability of the individual objective functions, and a value function is difficult to determine in real situations. To resolve these problems, a large number of MOOTs that can identify the efficient alternatives have been developed in the literature. The concept of optimality in SOOP is replaced by that of efficiency in MOOP. An efficient solution (Pareto-optimal solution) is one for which there does not exist another feasible solution that does at least as well on every objective and strictly better on at least one. Generating efficient solutions to MOOPs is an area of increasing interest for researchers in many disciplines, including engineering, operations research (OR), and computer science, to name a few.
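The efficiency (Pareto-optimality) concept defined above is mechanical enough to sketch in code. The following illustrative Python (not part of the thesis) filters a set of objective vectors down to its nondominated subset, assuming all objectives are to be minimized:

```python
def dominates(a, b):
    """True if solution a dominates b: a is at least as good (<=) in
    every objective and strictly better (<) in at least one.
    All objectives are assumed to be minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def efficient_set(points):
    """Keep only the nondominated (Pareto-optimal) objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# A point is discarded only if some other point beats it outright:
front = efficient_set([(1, 5), (2, 3), (4, 1), (3, 3), (5, 5)])
# (3, 3) is dominated by (2, 3); (5, 5) is dominated by every other point.
```

Note that the surviving points are mutually incomparable, which is exactly why the efficient set is only partially ordered and a DM's preferences are needed to pick one.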
When solving real-world optimization problems, classical methods encounter great difficulty in dealing with the complexity involved and often cannot offer a reliable solution. One can find many real applications in fields such as economics, engineering, and science where methods with ample mathematical support (ensuring the optimality of solutions under ideal conditions) are unable to obtain a solution, or cannot obtain one in a reasonable time. These facts led researchers to develop evolutionary algorithms (EAs) to solve such complex models. The success of EAs produced enormous interest in their study, giving rise to an active community and a number of very efficient MOEAs for MOOPs. The use of EAs to solve problems of this nature has been motivated mainly by the population-based nature of EAs, which allows the generation of several elements of the Pareto-optimal set in a single run. In addition, the complexity (e.g. very large search spaces, uncertainty, noise, disjoint POFs, etc.) of some MOOPs may prevent the use of traditional OR solution techniques. Thanks to the past two decades of research work, MOEA is now a well-established and very popular computational research area. Several evolutionary methods are available that aim at full convergence toward the POF in terms of both precision and diversity of solutions (e.g. NSGA-II, MOPSO, MOPSO-CD, SPEA2, etc.). These methods have been widely and deeply tested and compared on different standard test functions. In addition, some convergence measuring criteria specifically developed for MOOPs are also available (Thanh and Vong, 2000; Parsopoulos and Vrahatis, 2002; Zitzler et al., 2003; Tan et al., 2005; Coello, 2009; Nguyen, 2010).
Typically, there are infinitely many Pareto-optimal solutions to a MOOP. Merely determining the efficient solutions does not solve the problem completely. Mathematically, every Pareto-optimal point is an equally acceptable solution of the MOOP. However, it is generally desirable to obtain one point as a solution. Selecting one out of the set of Pareto-optimal solutions calls for information that is not contained in the objective functions. Thus, it is often necessary and important to incorporate the DM's preferences for the various objectives in order to determine a suitable (final) solution. The DM is a person (or group of persons) who is supposed to have better insight into the problem and who can express preference relations between different solutions. Solving a MOOP calls for co-operation between the DM and the analyst. By an analyst here, we mean a person or a computer program responsible for the calculation/computation side of the solution process. The analyst generates information for the DM, and the solution is selected according to the preferences of the DM. In addition, the Pareto-optimal set is only partially ordered and, unlike in SOOP, no analytical tool can identify the best alternative among its members without additional information in the form of the subjective preferences of a DM. The preferred solution is then called the best compromise solution. Thus, the two important parts of MCDM problems are:
i) an objective part, handled by the analyst, and
ii) a subjective part, handled by the DM.
The objective part considers the internal structure of the system, characterized by the constraints together with the functional relationship between the decision variables and the decision criteria, and on this basis it sorts out the efficient alternatives. Then the subjective part takes over, using the preferences of the DM to develop a preference ordering relation, which results in a complete ordering of the efficient set and thus determines the best alternative, best in terms of some criterion of judgment known to the DM. Therefore, interaction with the DM is an integral part of algorithms for MOOP at some point during the optimization process.
Following a classification by Horn (1997) and Van Veldhuizen and Lamont
(2000) the articulation of preferences with MOEAs may be done either before (a
priori), during (progressive), or after (a posteriori) the optimization process. A priori
MOEAs involve preference specification prior to the optimization stage, and are
traditionally implemented by aggregating the objectives into a single fitness function
with parameters reflecting the preferences of the DM. Interactive MOEAs allow the DM
to alter parameters during the search, effectively influencing the direction of the search.
The a posteriori approach is to date the most popular; it allows the DM to choose a
suitable solution out of the Pareto-optimal solutions presented to him/her. In recent
years, researchers have started to look into incorporating preference in the search
process of an MOEA. Development in this area is the key element to widespread
application of MOEAs in practical circumstances where preferences are incorporated.
Knowing the range of each objective for Pareto-optimality and the shape of the POF
itself in a problem has some advantages for adequate decision-making. The task of
choosing a single preferred Pareto-optimal solution is also another extremely important
issue. Most of the MOEAs focus on the approximation of the POF
without including DM's preferences. However, the determination or approximation of
the POF is not enough, and the DM's preferences have to be incorporated in order to
determine the solution that best represents these preferences. Some works are found
in the literature in which the DM's preferences are incorporated along with an MOEA.
Deb and Sundar (2006) concluded in their paper that, having well demonstrated the task
of finding multiple Pareto-optimal solutions in MOOPs, MOEA researchers and
practitioners should now concentrate on devising methodologies for the complete task
of finding preferred Pareto-optimal solutions in an interactive manner with the DM.
Hence, researchers need to work in this very essential area.
In this thesis, we articulate a desirability function (DF) based approach (a priori)
with an MOEA (a posteriori), and demonstrate how, instead of one solution, a
preferred set of solutions near the desired region of the DM's interest can be found.
Thus, a hybrid approach consisting of a priori and a posteriori elements together is
proposed in the present work. In other words, we consider an intermediate approach,
shown through the middle
path in Figure 1.1. It may be impractical for a DM to completely specify his/her
preferences before any alternatives are known. However, often the DM has at least a
vague idea about what kind of solutions might be preferred and can specify partial
preferences before the search process. If such information is available, it can be used to
focus the search, yielding a more fine-grained approximation of the most relevant (from
a DM's perspective) area of the POF and/or reducing computational time. Thus, the
goal is no longer to generate a good approximation of all Pareto-optimal solutions, but
a small portion of the Pareto-optimal set that contains the DM's preferred solution
with the highest probability. In our methodology, preferences can be effectively
incorporated into an MOEA (a posteriori) with the help of the DF as an a priori
approach (Harrington, 1965; Mehnen et al., 2007). In this thesis we do not lay emphasis
on any new efficient-set generation algorithm, since many already exist (e.g. NSGA-II,
MOPSO-CD, etc.). The aim of the present work is to obtain a guided or partial POF
through an interactive procedure involving the DM. Thus, the core of this thesis is the
utilization of an MOEA in finding preferred solutions from the POF in the
region(s) that are of interest to the DM. We also present a basic theoretical
analysis and application of the proposed approach to five different reliability
optimization problems.
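To make the role of the DF concrete, the sketch below implements a one-sided Harrington-type desirability curve together with a geometric-mean aggregation. The functional form, the parameter names b0 and b1, and the aggregation rule are illustrative assumptions, not the exact formulation developed later in this thesis.

```python
import math

def harrington_one_sided(y, b0, b1):
    """One-sided Harrington-type desirability: maps a raw objective value y
    to a degree in (0, 1); values closer to 1 are more desirable (assumed form)."""
    return math.exp(-math.exp(-(b0 + b1 * y)))

def overall_desirability(ds):
    """Geometric mean of per-objective desirabilities: a single score that
    drops to 0 whenever any single objective is completely undesirable."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))
```

A DM could then steer an MOEA towards the region where the overall desirability is high, which is the intuition behind the guided POF pursued in this thesis.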
In this chapter, we first give the basic concepts and terminology regarding MOOP
(Section 1.1) followed by a classification and review of MOEA, provided in Section
1.2. In Section 1.3, the literature dealing with the DM's partial preference articulation into
MOEAs is reviewed and classified. Section 1.4 presents objectives of the thesis and
the organization of the thesis is finally presented in Section 1.5.
1.1 MOOP (Basic Concepts and Terminology)
MOOPs can be found everywhere in nature, and we deal with them on a daily basis. In
MOOP, there is not a single solution for a given problem; instead, there is a set of
solutions from which one can choose. Examples range from a person who tries to
optimize a budget in a supermarket, trying to get more and better-quality products for
less money; to industries trying to optimize their production, reducing their production
costs and increasing their quality; to people looking for more economical ways to
travel while covering bigger distances. In the last example, which means of transport
we should choose depends on how far we need to go or how cheap we need it to be, as
shown in Figure 1.2.
Although the genesis of MOOP lies in economics, it has been studied in several
disciplines, e.g. game theory, OR, etc. The idea of solving a MOOP can be understood
as helping a human DM to consider the multiple criteria simultaneously and to find a
Pareto-optimal solution that pleases him/her the most. The notion of an efficient
solution was introduced by Pareto (1896); hence an efficient solution is also called a
Pareto-optimal solution. However, the earliest concepts of MCDM appeared with the
advances of OR following World War II. In decision-making in complex environments,
terms such as 'multiple objectives', 'multiple attributes', 'multiple criteria' or 'multiple
dimensions' are used to describe different decision situations (Collette and Siarry,
2003). A common feature of these problems is that they consider multiple measures of
solution quality. From now onwards, we will use 'multi-objective' in place of these
terms.
A general MOOP formulation in standard form is as follows:

Minimize (Maximize)  f(x) = {f_1(x), f_2(x), ..., f_k(x)}        (1.0)
subject to:  g_j(x) = 0,  j = 1, ..., m_e;                       (1.1)
             g_j(x) ≤ 0,  j = m_e + 1, ..., m;                   (1.2)
             x ∈ R^n,

where k ≥ 2 is the number of objectives in the MOOP; m is the total number of
constraints while m_e is the number of equality constraints; x = (x_1, x_2, ..., x_n) is an
n-dimensional decision variable vector from some universe Ω ⊂ R^n; the objective
functions f_i(x), i = 1, 2, ..., k, where f_i : Ω → R, and the constraint functions g_j(x),
where g_j : Ω → R, are all real-valued functions on Ω; and f(x) is the multi-objective
vector (criterion vector) of objective functions. When all the f_i's and g_j's are linear,
the problem is called a multi-objective linear problem (MOLP). If at least one of the
f_i's is nonlinear, the problem is a nonlinear multi-objective optimization problem
(NLMOOP).
Definition 1.1 (Feasible Decision Space): The vector x ∈ R^n is said to be feasible if
and only if Equations 1.1 and 1.2 hold. The set of all feasible vectors is called the
feasible decision space X (often called the feasible design space or constrained set),
given as

X = { x : g_j(x) = 0, j = 1, ..., m_e; and g_j(x) ≤ 0, j = m_e + 1, ..., m }.
Therefore, we can rewrite the MOOP given by Equations 1.0-1.2 as:

(P1)   Minimize f(x)   subject to   x ∈ X,

which means that solving the constrained optimization problem given by Equations
1.0-1.2 amounts to finding a vector x ∈ X such that the objective vector f(x) is
minimized (or maximized). From now onwards, we will take P1 as a minimization
problem.
Definition 1.2 (Feasible Criterion Space): The feasible criterion space Z (also called
the attainable set) is defined as the set Z = { f(x) : x ∈ X }.
Definition 1.3 (Ideal Solution): Let us look at the problem P1 as k SOOPs, each with
a different objective function but with the same constraints. The k SOOPs are given by

Minimize f_i(x)   subject to   x ∈ X,   i = 1, 2, ..., k.        (1.3)

Let x*(i), i = 1, ..., k, be the points at which the minimum values of f_i (say f_i*) are
respectively achieved. We call the x*(i) the attainable solutions of the original MOOP
(i.e. P1). If

x*(1) = x*(2) = ... = x*(k) = x*,

that is, if the minimum values of all the f_i are achieved at the same point x*, then x*
is called the ideal solution of the problem P1. An ideal solution is a solution with all the
objective functions simultaneously minimized. In a real-life problem, such a situation
would be very rare.
In the absence of an ideal solution, we may prefer a solution regarded best by some
other suitable criterion. One such criterion is to look for a point from where the value of
any of the objective functions cannot be decreased without increasing the value of at
least one of the other objective functions. Such a point is called an efficient point, and
the corresponding solution an efficient solution (will be discussed elaborately in
Section 1.2).
Definition 1.4 (Ideal Objective Vector): The ideal objective vector f* is the unique
k-vector with components

f_i* = Minimize { f_i(x) : x ∈ X },   i = 1, 2, ..., k,        (1.4)

obtained by minimizing each objective function separately; f* = (f_1*, f_2*, ..., f_k*)
is called the ideal objective vector. The ideal objective vector f* can be used as a
reference solution by algorithms seeking Pareto-optimal solutions.
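As a brute-force illustration of Equation 1.4, the sketch below computes the ideal objective vector of a small bi-objective problem over a discretized feasible interval; the two objective functions and the grid are assumptions chosen purely for illustration.

```python
# The ideal objective vector f* collects the separate minimum of each
# objective over the feasible set (here discretized to a grid).
def ideal_objective_vector(objectives, feasible_points):
    return [min(f(x) for x in feasible_points) for f in objectives]

f1 = lambda x: x ** 2            # assumed objective f1
f2 = lambda x: (x - 2) ** 2      # assumed objective f2
X = [i / 100.0 for i in range(-200, 401)]      # grid over [-2, 4]
f_star = ideal_objective_vector([f1, f2], X)   # -> [0.0, 0.0]
```

Note that the two minima are attained at different points (x = 0 and x = 2), so no ideal solution exists here, only the ideal objective vector, matching the remark in Definition 1.3 that an ideal solution is rare.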
Definition 1.5 (Anti-ideal Vector): Unlike the ideal objective vector, which represents
the lower bound of each objective over the entire feasible search space X, the anti-ideal
(nadir) objective vector z** = f** = (f_1**, f_2**, ..., f_k**) represents the upper
bound (in the case of an all-minimization problem) of each objective over the entire
Pareto-optimal set, and not over the entire search space, where the maximizing solution
for the i-th objective function is the decision vector x**(i) with the function value f_i**.
Definition 1.6 (Dominance): A solution x(1) = (x_1(1), x_2(1), ..., x_n(1)) is said to
dominate another solution x(2) = (x_1(2), x_2(2), ..., x_n(2)) if
1. x(1) is not worse than x(2) w.r.t. all objectives, i.e., f_i(x(1)) ≤ f_i(x(2)) for all
i = 1, 2, ..., k; and
2. x(1) is strictly better than x(2) in at least one objective, i.e., f_i(x(1)) < f_i(x(2))
for at least one i.
If either of the above conditions is violated, the solution x(1) does not dominate the
solution x(2) (Deb, 2001). If x(1) dominates x(2), it is also customary to write any of
the following:
• x(2) is dominated by x(1);
• x(1) is non-dominated by x(2); or
• x(1) is non-inferior to x(2).
The depiction of dominance can be visualized using Figure 1.3.
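Condition by condition, Definition 1.6 translates directly into code; the following minimal sketch assumes all objectives are minimized and that objective vectors are given as equal-length sequences.

```python
def dominates(fa, fb):
    """True if objective vector fa dominates fb under Definition 1.6:
    no worse in every objective, strictly better in at least one."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))
```

For example, `dominates([1, 2], [2, 2])` holds, while `dominates([1, 3], [2, 2])` fails because the first vector is worse in the second objective.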
Definition 1.7 (Efficient or Pareto-optimal Solution): A solution x* ∈ X of P1 is
called an efficient solution or Pareto-optimal solution if there is no other point x ∈ X
such that for some objective function f_i, f_i(x) < f_i(x*), and for all the other
objectives f_j(x) ≤ f_j(x*), j = 1, 2, ..., k; j ≠ i.
The concept of an efficient solution for a MOOP may be regarded as a generalization of
the concept of an optimal solution for a SOOP. The inequality signs in the above
definition have to be reversed if the problem is to maximize all the objectives.
The set P* of all Pareto-optimal solutions of a given MOOP is called the Pareto-optimal
set, i.e. P* = { x ∈ X : x is a Pareto-optimal solution }.
Definition 1.8 (POF): There are usually a lot (an infinite number) of Pareto-optimal
solutions for a given MOOP. For a Pareto-optimal set P*, the POF (PF*) is defined as
the image of the elements of P* under f, i.e.

PF* = { f(x) = (f_1(x), f_2(x), ..., f_k(x)) : x ∈ P* }.        (1.5)

POFs are generally either convex or nonconvex. A POF is said to be convex if and
only if

∀ f(1)(x), f(2)(x) ∈ PF*, ∀ λ ∈ (0,1), ∃ f(3)(x) ∈ PF* such that
λ ||f(1)(x)|| + (1 - λ) ||f(2)(x)|| ≥ ||f(3)(x)||.

On the contrary, a POF is said to be concave if and only if

∀ f(1)(x), f(2)(x) ∈ PF*, ∀ λ ∈ (0,1), ∃ f(3)(x) ∈ PF* such that
λ ||f(1)(x)|| + (1 - λ) ||f(2)(x)|| ≤ ||f(3)(x)||.

Apart from the above two, partially convex, partially concave and discontinuous POFs
may also exist.
The ultimate goal of a multi-objective algorithm is to identify solutions in the
Pareto-optimal set. However, identifying the entire Pareto-optimal set for a MOOP is
practically impossible due to its large size. In addition, for many problems, especially
combinatorial optimization problems, proof of solution optimality is computationally
infeasible. Therefore, a practical approach is to investigate a set of solutions that
represents the Pareto-optimal set well. With these concerns in mind, a multi-objective
optimization algorithm should achieve the following three conflicting goals:
1. The best-known POF should be as close as possible to the true POF.
2. Solutions in the best-known POF should be uniformly distributed and diverse over
the POF in order to provide the DM a true picture of the trade-offs.
3. The best-known POF should capture the whole spectrum of the POF. This requires
investigating solutions at the extreme ends of the objective function space.
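In this spirit, the "best-known POF" of a finite sample of solutions is simply its nondominated subset; a minimal sketch (objectives minimized, dominance checked pairwise) follows.

```python
def nondominated(points):
    """Return the mutually nondominated members of a finite list of
    objective vectors (all objectives minimized)."""
    def dom(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dom(q, p) for q in points if q != p)]

front = nondominated([(1, 4), (2, 2), (4, 1), (3, 3), (4, 4)])
# front keeps (1, 4), (2, 2) and (4, 1); the other two are dominated by (2, 2)
```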
Definition 1.8 (Strongly, Weakly, Inferior and Preferred solution): All Pareto-
optimal solutions lie on the boundary of the feasible criterion space Z (Athan and
Papalambros, 1996). Often algorithms provide solutions that may not be Pareto-optimal
but may satisfy other criteria, making them significant for practical applications. A
point x* ∈ X is weakly Pareto-optimal if and only if there does not exist another
point x ∈ X such that f_i(x) < f_i(x*) for all i = 1, 2, ..., k. In other words, a point is
weakly Pareto-optimal if there is no other point that improves all of the objective
functions simultaneously. In contrast, a point is strongly Pareto-optimal if there is no
other point that improves at least one objective function without detriment to another.
A solution which is not weakly efficient is called an inferior solution. A particular
efficient solution, which is finally selected by the DM after preference decision-making,
is known as the preferred solution.
Pareto-optimal solutions can be divided into improperly and properly Pareto-optimal
ones according to whether unbounded trade-offs between objectives are allowed or not.
Proper Pareto-optimality can be defined in several ways (Yu et al., 1985; Miettinen,
1999). According to Geoffrion (1968), a solution is properly Pareto-optimal if there is
at least one pair of objectives for which a finite decrement in one objective is possible
only at the expense of some reasonable increment in the other objective. Mathematically,
it is defined as follows:
Definition 1.9 (Geoffrion's Proper Pareto-optimality): A point x* ∈ X is a properly
Pareto-optimal solution if it is Pareto-optimal and there is some real number M > 0
such that for each f_i(x) and each x ∈ X satisfying f_i(x) < f_i(x*), there exists at
least one f_j(x) (j ≠ i) such that f_j(x*) < f_j(x) and

(f_i(x*) - f_i(x)) / (f_j(x) - f_j(x*)) ≤ M.

If a Pareto-optimal solution is not proper, it is called improper. The quotient defined
above is referred to as a trade-off, and it represents the increment in objective function
j resulting from a decrement in objective function i (Geoffrion, 1968).
1.2 Classification and Review of MOEAs
Before proceeding towards the classification and review of MOEAs, one important
aspect needs to be discussed, as given in the following subsection.
1.2.1 Why an Evolutionary Approach to MOOP?
The major part of earlier mathematical research has concentrated on optimization
problems where the functions were linear, differentiable, convex, or otherwise
mathematically well-behaved. However, in practical problems, objective functions are
often nonlinear, non-differentiable, discontinuous, multi-modal etc., and no
presumptions can be made about their behaviour. Most traditional optimization
methods cannot handle such complexity, or do not perform well in cases in which the
assumptions upon which they are based do not hold. For such problems, stochastic
optimization methods such as EAs have been applied effectively because they do
not rely upon assumptions concerning the objective and constraint functions. EAs are
stochastic search and optimization heuristics derived from the classic theory of
evolution, working on a population of potential solutions to a problem. The basic idea
is that if only those individuals reproduce which meet a certain selection criterion, the
population will converge to solutions that best meet the selection criterion. If imperfect
reproduction occurs, the population can begin to explore the search space and will move
towards individuals (solutions) that have an increased selection probability. Genetic
Algorithm (GA), Particle Swarm Optimization (PSO), Differential Evolution (DE),
Ant Colony Optimization (ACO), Glowworm Swarm Optimization (GSO) and Central
Force Optimization (CFO) are some well-known EAs found in the literature (Holland,
1987; Kennedy and Eberhart, 1995; Barbosa, 1996, 2002; Babu and Chaturvedi, 2000;
Castro and Barbosa, 2001; Barbosa and Lemonge, 2002, 2003, 2008; Babu and
Chaurasia, 2003; Acan, 2004, 2005; Babu and Khan, 2004; Drezner et al., 2005; Salhi
et al., 2005; Kennedy, 2006; Krishnanand and Ghose, 2006, 2009; Formato, 2007,
2009, 2010; Salhi and Petch, 2007; Dorigo and Stützle, 2010).
Since EAs work on a population of solutions instead of a single point (at each
iteration), they are less likely to be trapped in a local minimum. The OR community
has developed several approaches to solve MOOPs since the 1950s. Currently, wide
varieties of mathematical programming techniques to solve MOOPs are available in the
specialized literature (Sawaragi et al., 1985; Steuer, 1986). However, mathematical
programming techniques have certain limitations when tackling MOOPs (Coello,
1999). For example, many of them are susceptible to the shape of the POF and may not
work when the POF is concave or disconnected. Others require differentiability of the
objective functions and the constraints. In addition, most of them only generate a single
solution from each run. Thus, several runs (using different starting points) are required
in order to generate several elements of the Pareto-optimal set. In contrast, EAs seem
particularly suitable to solve MOOPs, because they deal simultaneously with a set of
possible solutions (the so-called population) which allows one to find several members
of the Pareto optimal set in a single run of the algorithm. Additionally, EAs are less
susceptible to the shape or continuity of the POF.
This section presents background information to aid the reader's understanding of
the necessary prior facts supporting this thesis. A brief description of the classification
of MOEAs is presented, followed by a discussion of MOEAs and other approaches to
solve MOOPs. MOEAs are contemporary algorithms receiving renewed interest from
EA researchers for solving MOOPs, and are part of the 'soft computing' umbrella of
search algorithms (Coello, 2004). Several substantial reviews are available which
classify approaches to the solution of MOOPs and examine major MOEA approaches
(Fonseca and Fleming, 1995; Horn, 1997; Coello, 1999; Van Veldhuizen and Lamont,
2000; Tan et al., 2002; Coello et al., 2006; Lamont and Van Veldhuizen, 2007). The
reviews include GA, PSO, DE, Evolution Strategies, Evolutionary Programming,
Genetic Programming and their extensions to MOEA implementations.
Many researchers have attempted to classify algorithms according to various
considerations. As discussed earlier, preference or priority is an essential part of
decision-making in MOOP: eventually, the DM has to decide the relative importance
of each objective function in order to get a single unique solution to be used as the
solution of his/her original multidisciplinary decision-making problem. The various
multiple-objective decision-making techniques are commonly classified from a DM's
point of view (Hwang and Masud, 1979; Van Veldhuizen and Lamont, 2000). Hwang
and Masud (1979) and later Miettinen (1999) fine-tuned the earlier classifications and
suggested the following three main classes based on the preference articulation of the
DM:
• A Priori Preference Articulation: (decide → search)
• Progressive Preference Articulation: (decide ↔ search)
• A Posteriori Preference Articulation: (search → decide)
Multi-objective optimization includes numerous different techniques; thus, it is
hard to summarize all of them. However, to give an idea of the most common methods
used in the literature, a short overview follows. For deeper insight, the interested reader
may consult (Coello et al., 2007; Coello, 2009) and the references included therein. The
main approaches are described below and also shown in Figure 1.5.
1.2.2 A Priori Preference Articulation: (decide → search)
The DM selects the weights before running the optimization algorithm. In practice, it
means that the DM combines the individual objective functions into a scalar cost
function (a linear or nonlinear combination). This effectively converts a multi-objective
problem into a single-objective one. In the early stages of multi-objective optimization,
objectives were linearly combined into a scalar objective via a predetermined
aggregating function to reflect the search for a particular solution on the trade-off
surface (Jakob et al., 1992; Wilson and Macleod, 1993). The whole trade-off surface is
then discovered by repeating the process numerous times with different settings for the
aggregating function. The drawback of this approach is that the weights are difficult to
determine precisely, especially when there is insufficient information or knowledge
concerning the optimization problem. Other objective-reduction methods include the
use of penalty functions for the reduction of a multi-objective optimization problem to
a single-objective one (Ritzel et al., 1994). As mentioned by Fonseca and Fleming
(1993) and Coello (1996), these conventional multi-objective optimization approaches
often have the disadvantage of missing the concave portions of a trade-off curve.
Important sub-classifications of this approach are:
• Weighted Sum Approach - In this approach, the different objectives are combined
using weighting coefficients w_i, i = 1, 2, ..., k. The objective to minimize becomes
Σ_{i=1}^{k} w_i f_i(x). This is one of the most popular approaches for solving
MOOPs, and may be the simplest one (Murty, 1995). The combination used is mostly
linear, but nonlinear combinations can also be used.
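As a minimal sketch of this scalarization (the objective values and weights below are hypothetical):

```python
# Weighted sum scalarization: fixed a priori weights collapse the
# objective vector into a single scalar cost to be minimized.
def weighted_sum(fs, ws):
    return sum(w * f for w, f in zip(ws, fs))

f_vals = [0.8, 0.2]                      # assumed f1(x), f2(x) at some x
cost = weighted_sum(f_vals, [0.5, 0.5])  # approximately 0.5
```

Rerunning a single-objective solver with different weight settings traces out different points of the trade-off surface, but, as noted above, points on concave portions of the POF can never be obtained this way.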
• Goal Programming Based Approach - In this approach, the user is required to
assign targets, or goals, T_i, i = 1, ..., k, for each objective. The aim then becomes the
minimization of the deviation of the objectives from these targets, e.g.
Σ_{i=1}^{k} |f_i(x) - T_i| (Van Veldhuizen and Lamont, 2000; Knowles et al., 2006;
Lamont and Van Veldhuizen, 2007; Jones and Tamiz, 2010).
• Goal Attainment Based Approach - The user is required to provide a vector of
weights w_i, i = 1, ..., k, in addition to the vector of goals, linking the relative under-
or over-attainment of the desired goals. Fonseca and Fleming (1993) were probably the
first to incorporate preferences from the DM into an EA; the idea was later discussed
elaborately by others (Tan et al., 2002, 2005).
• ε-Constraint Approach - In this method, the primary objective function is
minimized whereas the other objectives are treated as constraints bounded by some
allowable levels ε_i. This technique was developed to alleviate the difficulties faced
by the weighted sum approach in solving nonconvex problems (Laumanns et al.,
2006; Mavrotas, 2009).
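A grid-search sketch of the method follows; the two objectives, the grid and the level ε are assumptions for illustration, and a real implementation would call a single-objective solver instead of enumerating candidates.

```python
# epsilon-constraint method: minimize the primary objective f1 subject to
# the secondary objective f2 being bounded by an allowable level eps.
def eps_constraint(f1, f2, candidates, eps):
    feasible = [x for x in candidates if f2(x) <= eps]
    return min(feasible, key=f1) if feasible else None

f1 = lambda x: x ** 2                    # assumed primary objective
f2 = lambda x: (x - 2) ** 2              # assumed secondary objective
xs = [i / 10.0 for i in range(0, 21)]    # grid over [0, 2]
best = eps_constraint(f1, f2, xs, 1.0)   # f2 <= 1 forces x >= 1, so best = 1.0
```

Sweeping ε over a range of levels generates different Pareto-optimal points, including points on concave parts of the front.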
• Fuzzy Based Approaches - The concept of fuzzy sets is based on a multi-valued
logic in which a statement can be simultaneously partly true and partly false (Zadeh,
1975). In fuzzy logic, a membership function μ expresses the degree of truthfulness
of a statement, in the range from μ = 0, indicating that the statement is false, to
μ = 1 for truth. This is in contrast to binary logic, where a statement can only be
false or true. In an optimization problem, the membership function enables us to
associate a normalized value μ_i(f_i(x)) with each objective, which expresses the
degree of satisfaction of the considered i-th objective. The value of f_i(x) is fuzzified
by μ_i to yield a value in the range (0, 1), which quantifies how well a solution satisfies
the requirements. Once the fuzzification has been performed, the actual values of the
objectives are transformed into logical values. These values have to be aggregated into
one in order to get an overall value for the design. In binary logic this is
accomplished by the AND operator; in fuzzy logic, however, the AND operator
can be implemented by several different rules, the most common ones being the min
and the product operators. A method for finding numerical compensation for fuzzy
multicriteria decision problems is demonstrated by Rao et al. (1988a). A preference
structure on aspiration levels in a goal-programming problem based on a fuzzy
approach is illustrated by Rao et al. (1988b). Mohanty and Vijayaraghavan (1995)
presented a multi-objective programming problem and its equivalent goal
programming problem with appropriate priorities and aspiration levels based on a
fuzzy approach. Examples of and details on fuzzy approaches to multi-objective
optimization can be found in several works available in the literature (Zimmermann,
1986, 1987, 1990, 2001; Fuller and Carlsson, 1996; Wang, 2000).
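A minimal sketch of this fuzzification-and-aggregation step, using linear membership functions and the min operator as the fuzzy AND; the membership shape and the numbers are assumptions for illustration.

```python
# Fuzzification sketch: each raw objective value is mapped to a degree of
# satisfaction in [0, 1] by a linear membership function, and the degrees
# are combined with the min operator (a common fuzzy AND).
def linear_membership(f, f_best, f_worst):
    """1.0 at the best achievable value, 0.0 at the worst, linear between."""
    if f <= f_best:
        return 1.0
    if f >= f_worst:
        return 0.0
    return (f_worst - f) / (f_worst - f_best)

def fuzzy_and_min(memberships):
    return min(memberships)

mu1 = linear_membership(2.0, f_best=1.0, f_worst=5.0)   # 0.75
mu2 = linear_membership(4.0, f_best=1.0, f_worst=5.0)   # 0.25
overall = fuzzy_and_min([mu1, mu2])                     # 0.25
```

The min operator makes the overall degree equal to that of the worst-satisfied objective, which is why it acts as a conservative fuzzy AND.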
1.2.3 Progressive Preference Articulation: (decide ↔ search)
The DM interacts with the optimization program during the optimization process.
Typically, the system provides an updated set of solutions and lets the DM consider
whether or not to change the weighting of the individual objective functions. Such
methods rely on progressive information about the DM's preferences while they search
through the solution space. Interactive methods are very common within the field of
operations research. These methods work according to the hypothesis that the DM is
unable to indicate preference information a priori because of the complexity of the
problem. However, the DM is able to give some preference information as the search
moves on; the DM thus learns about the problem as he/she faces different possible
problem solutions. Disadvantages of these types of methods are (Van Veldhuizen and
Lamont, 1998; Adra et al.; Rachmawati and Srinivasan, 2009):
• The solutions depend on how well the DM can articulate his/her preferences.
• A high effort is required from the DM during the whole search process.
• The solution depends on the preferences of one DM. If the DM changes his/her
preferences, or if there is a change of DM, the process has to be restarted.
• The required computational effort is higher than in the previous methods.
1.2.4 A Posteriori Preference Articulation: (search → decide)
The DM specifies no weighting before or during the optimization process. The
optimization algorithm provides a set of efficient candidate solutions from which the
DM chooses the solution to be used. The big advantage of this approach is that the
solution is independent of the DM's preferences. The analysis has only to be performed
once, as the Pareto-optimal set would not change as long as the problem description is
unchanged. However, some of these methods suffer from a large computational burden.
Another disadvantage might be that the DM has too many solutions to choose from;
the present work is an attempt to rectify this very problem. This is the category into
which most MOEA approaches fall. The main approaches are described below.
• Non-Pareto Based Approaches - VEGA (Vector Evaluated Genetic Algorithm)
was possibly the first multi-objective genetic algorithm. Proposed by Schaffer (1985),
it incorporates a special selection operator in which a number of sub-populations are
generated by applying proportional selection according to each objective function in
turn. However, it is reported that the method tends to crowd results at the extremes of
the solution space, often yielding poor coverage of the POF. Fourman
(1985) presented a GA using binary tournaments, randomly choosing one objective
to decide each tournament. Kursawe (1991) further developed this scheme by
allowing the objective selection to be random, fixed by the user, or to evolve with
the optimization process. He also added crowding techniques, dominance, and
diploidy to maintain diversity in the population. All of these non-Pareto techniques
tend to converge to a subset of the POF, leaving a large part of the Pareto set
unexplored.
• Pareto Based Approaches (First Generation) - After VEGA, researchers adopted
other naïve approaches for several years. Goldberg (1989) first hinted at the direct
incorporation of the concept of Pareto-optimality into an EA in his seminal book on
genetic algorithms. While criticizing Schaffer's VEGA, Goldberg suggested the use of
nondominated ranking and selection to move a population
towards the POF in a MOOP. The basic idea is to find the set of solutions in the
population that are nondominated by the rest of the population. These solutions are
then assigned the highest rank and eliminated from further contention. Another set
of nondominated solutions is then determined from the remaining population and is
assigned the next-highest rank. This procedure of identifying nondominated sets of
individuals is repeated until the whole population has been ranked, as depicted in
Figure 1.5.
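Goldberg's ranking procedure described above can be sketched as repeatedly peeling off nondominated layers; the code below assumes minimized objectives and is a naive illustration, not an efficient implementation.

```python
# Goldberg-style nondominated ranking: peel off the nondominated layer of
# the remaining population and assign it the next rank, until none remain.
def nondominated_sort(pop):
    """pop: list of objective vectors (all minimized). Returns a list of
    fronts, each a sorted list of indices into pop; rank-0 front first."""
    def dom(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    remaining = set(range(len(pop)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dom(pop[j], pop[i]) for j in remaining if j != i)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts

fronts = nondominated_sort([(1, 4), (2, 2), (3, 3), (4, 4)])  # -> [[0, 1], [2], [3]]
```

Repeating the full pairwise dominance check for every layer is exactly the inefficiency in NSGA noted below; NSGA-II later replaced it with a faster bookkeeping scheme.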
Goldberg also discussed ranking using niching methods and speciation to promote
diversity so that the entire POF is covered. Goldberg (1989) did not provide an
actual implementation of his procedure, but practically all the MOEAs developed
after the publication of his book were influenced by his ideas. From the several
MOEAs developed from 1989 till date, the most representative are the following:
The non-dominated sorting genetic algorithm (NSGA) of Srinivas and Deb (1994)
implemented Goldberg's thoughts about the application of niching methods. NSGA
is based on several layers of classification of the individuals, as suggested by
Goldberg. Before selection is performed, the population is ranked based on
nondomination: all nondominated individuals are classified into one category (with a
dummy fitness value, which is proportional to the population size, to provide an
equal reproductive potential for these individuals). To maintain the diversity of the
population, these classified individuals are shared with their dummy fitness values.
Then this group of classified individuals is ignored and another layer of
nondominated individuals is considered. The process continues until all individuals
in the population are classified. Since individuals in the first front have the
maximum fitness value, they always get more copies than the rest of the population.
The algorithm of the NSGA is not very efficient, because Pareto ranking has to be
repeated over and over again; evidently, it is possible to achieve the same goal in a
more efficient way. Another approach under this category is the multi-objective genetic
algorithm (MOGA), in which an individual is assigned a rank equal to the number of
individuals in the current population by which it is dominated, increased by one. All
nondominated individuals are ranked one. The fitness values of individuals with the
same rank are averaged so that all of them are sampled at the same rate. A niche
formation method is used to distribute the population over the Pareto-optimal region
(Fonseca and Fleming, 1995). The niched Pareto genetic algorithm (NPGA), described
in (Coello, 2004), uses a Pareto dominance-based tournament selection with a sample
of the population to determine the winner between two candidate solutions. Around ten
individuals are used to determine dominance, and the nondominated individual is
selected. If both individuals are either dominated or nondominated, the result of the
tournament is decided through fitness sharing.
The main lesson learnt from the first generation of MOEAs was that a successful MOEA
has to combine a good mechanism for selecting nondominated individuals (perhaps, but
not necessarily, based on the concept of Pareto-optimality) with a good mechanism for
maintaining diversity (fitness sharing was one choice, but not the only one).
• Pareto Based Approaches (Second Generation) - All the first generation Pareto
based algorithms use the fitness sharing procedure (a niching technique) as the tool
to keep diversity in the population through the whole POF. They share a common
weakness: dependence on the fitness sharing factor. An important operator that
has been demonstrated to improve significantly the performance of multi-objective
algorithms is elitism, as can be seen, for example, in Goldberg and Samtani
(1986). From the author's perspective, the second generation of MOEAs started when
elitism became a standard mechanism. However, the incorporation of elitism in
MOEAs is more complex than its incorporation in single objective optimization,
since we now have an elite set whose size can be significant compared to that of the
population. Elitism maintains the knowledge acquired during the algorithm
execution and is materialized by preserving the individuals with best fitness in the
population or in an auxiliary population. Most authors credit Zitzler and Thiele
(1998) with the formal introduction of this concept in a MOEA, mainly because their
Strength Pareto Evolutionary Algorithm (SPEA) became a landmark in the field.
Needless to say, after the publication of this paper, most researchers of the field
started to incorporate external populations in their MOEAs and the use of this
mechanism (or an alternative form of elitism) became a common practice. In fact,
the use of elitism is a theoretical requirement in order to guarantee convergence of a
MOEA, hence its importance. In the context of multi-objective optimization,
elitism usually (although not necessarily) refers to the use of an external population
(also called secondary population) to retain the nondominated individuals found
along the evolutionary process. The main motivation for this mechanism is the fact
that a solution that is nondominated with respect to its current population is not
necessarily nondominated with respect to all the populations that are produced by an
evolutionary algorithm. Thus, what we need is a way of guaranteeing that the
solutions that we will report to the user are nondominated with respect to every other
solution that our algorithm has produced. Therefore, the most intuitive way of doing
this is by storing in an external memory (or archive) all the nondominated solutions
found. If a solution that wishes to enter the archive is dominated by its contents, then
it is not allowed to enter. Conversely, if a solution dominates any one stored in the
archive, the dominated solution must be deleted. Elitism can also be introduced using a
(μ + λ)-selection in which parents compete with their children and those which
are nondominated (and possibly comply with some additional criterion such as
providing a better distribution of solutions), are selected for the following
generation. Many MOEAs have been proposed during the second generation (which
we are still living today). However, most researchers will agree that few of these
approaches have been adopted as a reference or have been used by others. The most
representative MOEAs of the second generation are the following (Coello, 2004):
o Strength Pareto Evolutionary Algorithm (SPEA): This algorithm was
introduced by Zitzler and Thiele (1998). This approach was conceived as a
way of integrating different MOEAs. SPEA uses an archive containing
nondominated solutions previously found (the so-called external
nondominated set). At each generation, nondominated individuals are copied
to the external nondominated set. For each individual in this external set, a
strength value is computed. It is proportional to the number of solutions
that the individual dominates. In SPEA, the fitness of each member of
the current population is computed according to the strengths of all external
nondominated solutions that dominate it. The fitness assignment process of
SPEA considers both closeness to the true POF and even distribution of
solutions at the same time. Thus, instead of using niches based on distance,
Pareto dominance is used to ensure that the solutions are properly distributed
along the POF. Although this approach does not require a niche radius, its
effectiveness relies on the size of the external nondominated set. In fact, since
the external nondominated set participates in the selection process of SPEA, if
its size grows too large, it might reduce the selection pressure, thus slowing
down the search. Because of this, the authors decided to adopt a technique that
prunes the contents of the external nondominated set so that its size remains
below a certain threshold. SPEA forms niches automatically, depending only on
how the individuals are located in relation to each other.
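The strength and fitness computations just described can be sketched as follows (a simplified illustration assuming minimization; the function and variable names are ours, not SPEA's original notation):

```python
def dominates(a, b):
    """Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def spea_fitness(archive, population):
    """Strength of each archive member and the resulting fitness (to be
    minimized) of each population member, following the scheme sketched above."""
    n = len(population)
    # Strength: proportional to the number of population members dominated.
    strength = [sum(dominates(a, p) for p in population) / (n + 1) for a in archive]
    # Fitness of a population member: 1 plus the strengths of all external
    # nondominated solutions that dominate it.
    return [1 + sum(s for a, s in zip(archive, strength) if dominates(a, p))
            for p in population]
```

A dominated population member thus always receives a worse (larger) fitness than any archive member, which is the selection pressure described above.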
o Strength Pareto Evolutionary Algorithm 2 (SPEA2): SPEA2 has three main
differences with respect to its predecessor (Zitzler and Thiele, 1998). First, it
incorporates a fine-grained fitness assignment strategy which takes into account,
for each individual, the number of individuals that dominate it and the
number of individuals by which it is dominated. Second, it uses a nearest
neighbor density estimation technique which guides the search more
efficiently. Third, it has an enhanced archive truncation method that
guarantees the preservation of boundary solutions (Zitzler et al., 2001).
o Pareto Archived Evolution Strategy (PAES): Knowles and Corne (1999)
introduced this algorithm. It stores the solutions of the best POF found in an
external auxiliary population (elitism). A new crowding method was introduced in
this algorithm to promote diversity in the population. The objective space is
divided into hypercubes by a grid, which determines the density of
individuals; the zones with lower density are favored to the detriment of the zones
with higher density of points. This technique depends only on the parameter of
number of grid divisions and is less computationally expensive than niching,
avoiding the use of the fitness-sharing factor. Initially conceived as a
multi-objective local search method, (1+1)-PAES, it has been extended later to the
(μ+λ)-PAES. The rank of each newly created individual is set by comparing its
dominance or non-dominance to the archive and also by the density of the grid
they belong to.
o Pareto Envelope-based Selection Algorithm (PESA): It stores the solutions
of the best front found in an external auxiliary population (elitism). Not only
is the crowding mechanism based on the hypercube grid division as in PAES,
but the selection criterion is also based on this concept. In a set of test
functions competing with PAES and SPEA, PESA is claimed to obtain the
best overall results (Corne et al., 2000).
o Nondominated Sorting Genetic Algorithm II (NSGA-II) or Elitist
Nondominated Sorting Genetic Algorithm: It was proposed to resolve the
weaknesses of NSGA (Srinivas and Deb, 1994), especially its non-elitist nature.
Coello (2009) noted: "although several elitist MOEA exist, few have become
widely used and among them, one has become extremely popular called
NSGA-II". Deb and Goel (2001) introduced this approach. It maintains the
solutions of the best front found including them into the next generation
(elitism). The introduction of the controlled elitism operator in the NSGA-II
algorithm produces a better equilibrium between exploitation and exploration.
In NSGA-II, for each solution one has to determine how many solutions
dominate it and the set of solutions that it dominates. The NSGA-II
estimates the density of solutions surrounding a particular solution in the
population by computing the average distance of two points on either side of
this point along each of the objectives of the problem. This value is the
so-called crowding distance. During selection, the NSGA-II uses a
crowded-comparison operator which takes into consideration both the nondomination
rank of an individual in the population and its crowding distance (i.e.,
nondominated solutions are preferred over dominated solutions, but between
two solutions with the same nondomination rank, the one that resides in the
less crowded region is preferred). The NSGA-II does not use an external
memory as the other MOEAs previously discussed. Instead, the elitist
mechanism of the NSGA-II consists of combining the best parents with the
best offspring obtained (i.e., a (μ + λ)-selection). Due to its clever
mechanisms, the NSGA-II is much more efficient (computationally speaking)
than its predecessor (NSGA), and its performance is so good, that it has
become extremely popular in the last few years, becoming a landmark against
which other multi-objective EAs have to be compared. There are two versions
of NSGA-II, namely binary-coded and real-coded; we are concerned with the
real-coded version here. In Chapter 2, it will be discussed in detail.
o Multi-objective Particle Swarm Optimization (MOPSO): Kennedy and
Eberhart (1995) proposed an approach called "particle swarm optimization
(PSO)" which was inspired by the choreography of a bird flock. The way in
which PSO updates the particle x_i at generation t is through the formula:

x_i(t) = x_i(t-1) + v_i(t)                                              (1.6)

where the factor v_i(t) is known as the velocity and is given by

v_i(t) = w*v_i(t-1) + C1*r1*(x_pbest_i - x_i) + C2*r2*(x_gbest - x_i)   (1.7)

In this formula, x_pbest_i is the best solution that x_i has viewed, x_gbest is the best
particle (also known as the leader) that the entire swarm has viewed, w is the
inertia weight of the particle and controls the trade-off between global and
local experience, r1 and r2 are two uniformly distributed random numbers in
the range [0, 1], and C1, C2 are specific parameters which control the effect
of the personal and global best particles. The approach can be seen as a
distributed behavioral algorithm that performs (in its more general version)
multidimensional search. In the simulation, the behavior of each individual is
affected by either the best local (i.e., within a certain neighbourhood) or the
best global individual. The approach uses the concept of population and a
measure of performance similar to the fitness value used with EAs. In
addition, the adjustments of individuals are analogous to the use of a
crossover operator of GA. However, this approach introduces the use of flying
potential solutions through hyperspace (used to accelerate convergence)
which does not seem to have an analogous mechanism in traditional EAs.
Another important difference is the fact that PSO allows individuals to benefit
from their experiences, whereas in an EA, normally the current population is
the only "memory" used by the individuals. Coello and Lechuga (2002) found
PSO particularly suitable for MOOP mainly because of the high speed of
convergence that the PSO presents for single objective optimization problem.
The analogy of PSO with EAs makes evident the notion that using a Pareto
ranking scheme (Goldberg, 1989) could be the straightforward way to extend
this approach to handle MOOPs as well. The historical record of best
solutions found by a particle (i.e. an individual) could be used to store
nondominated solutions generated in the past (this would be similar to the
notion of elitism used in MOEA). The use of global attraction mechanism
combined with a historical archive of previously found nondominated vectors
would motivate convergence towards globally nondominated solutions. The
pseudo-code of general MOPSO is shown in Figure 1.6 (Durillo et al., 2009).
After initializing the swarm (Line 1), the typical approach is to use an external
archive to store the leaders, which are taken from the non-dominated particles
in the swarm. After initializing the leaders archive (Line 2), some quality
measure has to be calculated (Line 3) for all the leaders to select usually one
leader for each particle of the swarm. In the main loop of the algorithm, using
Equations (1.6) and (1.7) the flight of each particle is performed after leader selection
(Lines 7-8) and, optionally, a mutation or turbulence operator can be applied
(Line 9); then, the particle is evaluated and its corresponding pbest is updated
(Lines 10-11). After each iteration, the set of leaders is updated and the
quality measure is calculated again (Lines 13-14). After the termination
condition, the archive is returned as the result of the search. For further details
about the operations contained in the MOPSO pseudo-code and detailed
literature, please refer to Coello et al. (2004), Reyes-Sierra and Coello (2006),
del Valle et al. (2008), Parsopoulos and Vrahatis (2008) and Padhye et al. (2009).
Raquel and Naval Jr. (2005) proposed another PSO based approach called
MOPSO-CD, which extended the algorithm of the single-objective PSO to
handle multi-objective optimization problems. It incorporated the mechanism
of crowding distance computation into the algorithm of PSO specifically on
global best selection and in the deletion method of an external archive of
nondominated solutions. The crowding distance mechanism together with a
mutation operator maintains the diversity of nondominated solutions in the
external archive. MOPSO-CD also has a constraint handling mechanism for
solving constrained optimization problems. Raquel and Naval Jr. (2005) also
showed that MOPSO-CD is highly competitive in converging towards the
POF and generated a well-distributed set of nondominated solutions. We
discuss this approach in detail in Chapter 2.
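The update rules of Equations (1.6) and (1.7) can be sketched as follows; the parameter defaults are illustrative, not prescribed values:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration: Eq. (1.7) updates the velocity, Eq. (1.6) the position."""
    r1, r2 = random.random(), random.random()
    v = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
         for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    x = [xi + vi for xi, vi in zip(x, v)]
    return x, v
```

In a MOPSO, gbest would be a leader drawn from the external archive of nondominated solutions, as in the pseudo-code discussed above.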
1.3 DM's Partial Preference Articulation with MOEA -A Review
During the last few years of research on multi-objective optimization using EAs, it is
amply evident that EAs are capable of finding multiple Pareto-optimal solutions in a
single simulation run. It is then natural to ask: 'How does one choose a particular
solution from the obtained set of Pareto-optimal solutions?' In the following, we first
review a few techniques often followed in the context of MCDM.
Apart from a priori and a posteriori approaches, Branke (2008) elaborately discussed
an intermediate approach (middle path in Figure 1.1) incorporating both of these
approaches. Although we agree that it may be impractical for a DM to specify
completely his or her preferences before any alternatives are known, we assume that the
DM has at least a vague idea or bias about which solutions might be preferred, and
can indicate partial preferences. The methods discussed in this section aim at
integrating such imprecise knowledge into the MOEA approach, biasing the search
towards solutions that are considered relevant to the DM. The goal is no longer to
generate a good approximation to all Pareto-optimal solutions, but a small set (or
subsection of the POF) of solutions that contains the DM's preferred solution with the
highest probability. This may yield two important advantages:
Focus: DM's partial preferences may be utilized to focus the search and generate a
subset of all Pareto-optimal alternatives that is particularly interesting to the DM. This
avoids overwhelming the DM with a huge set of (mostly irrelevant) alternatives.
Speed: By focusing the search onto the relevant part of the search space, one may
expect the optimization algorithm to find these solutions more quickly, not wasting
computational effort to identify all Pareto-optimal but irrelevant solutions.
To reach these goals, the MOEA community can accommodate or be inspired by
many of the methods, which generally integrate DM's preference information into the
optimization process. Thus, combining MOEAs and their ability to generate multiple
alternatives simultaneously in one run, and methodologies to incorporate user
preferences holds great promise. The following brief literature survey covers quite a few
techniques to incorporate partial preference information into MOEAs, and previous
detailed surveys on this topic include Coello (2000), Rachmawati and Srinivasan
(2006), Rachmawati (2009). In the following, we classify the different approaches
based on the type of partial preference information asked of the DM, namely a goal or
reference point, trade-off information, weighted performance measures (approaches
based on marginal contribution), objective scaling and others (Branke, 2008).
1.3.1 Approaches Providing Reference Point
Perhaps the most important way to provide preference information is to provide a
reference point, a technique that has a long tradition in MCDM, e.g., Wierzbicki (1979,
1982). A reference point consists of aspiration levels reflecting desirable values for the
objective function, i.e., a target the DM is hoping for. Such information can then be
used in different ways to focus the search. The use of a reference point to guide the
MOEA was first proposed by Fonseca and Fleming (1993). The basic idea there
was to give a higher priority to objectives in which the goal is not fulfilled. Thus, when
deciding whether a solution x is preferable to a solution y or not, first only the
objectives in which solution x does not satisfy the goal are considered, and x is
preferred to y if it dominates y on these objectives. If x is equal to y in all these
objectives, or if x satisfies the goal in all objectives, x is preferred over y either if y
does not fulfil some of the objectives fulfilled by x, or if x dominates y on the
objectives fulfilled by x. A slightly extended version that allows the DM to
additionally assign priorities to objectives has been published in Fonseca and Fleming (1998).
The work also contains the proof that the proposed preference relation is transitive. The
approach by Deb (1999) used an analogy from goal programming. There, the DM can
specify a goal in terms of a single desired combination of characteristics
t = (t_1, ..., t_k) and the type of goal (e.g. f_i(x) <= t_i, f_i(x) >= t_i, or
f_i(x) = t_i). Deb (1999) demonstrated how these can be modified to suit MOEAs.
The distances from that goal rather than the actual criteria are compared. If the
goal for criterion i is to find a solution x with f_i(x) <= t_i, then instead of
considering the criterion f_i(x), simply f'_i(x) = max{0, f_i(x) - t_i} is used.
If the goal is set appropriately, this approach may
indeed restrict the search space to an interesting region. The problem here is to set the
goal a priori, i.e. before the POF is known. If the goal vector is outside the feasible
range, the method is almost identical to the definition in Fonseca and Fleming (1993).
However, if the goal can be reached, the approach from Deb (1999) will lose its
selection pressure and stop search as soon as the reference point has been found, i.e.,
return a solution, which is not Pareto-optimal. The goal-programming idea has been
extended in Deb (2001) to allow for reference regions in addition to reference points.
Tan et al. (1999) proposed another ranking scheme, which in a first stage prefers
individuals fulfilling all criteria, and ranks those individuals according to standard
non-dominance sorting. What is more interesting in Tan et al. (1999) is the suggestion on
how to account for multiple reference points, connected with AND and OR operations.
In Deb and Sundar (2006), the crowding distance calculation in NSGA-II is replaced by
the distance to the reference point, where solutions with a smaller distance are
preferred. More specifically, solutions with the same non-dominated rank are sorted
with respect to their distance to the reference point. Furthermore, to control the extent
of obtained solutions, solutions are grouped based on distance. Only one
randomly picked solution from each group is retained, while all other group members
are assigned a large rank to discourage their use. Like Fonseca and Fleming (1998) and
Tan et al. (1999), this approach is able to improve beyond a reference point within the
feasible region, because the non-dominated sorting keeps driving the population to the
POF. In addition, like Tan et al. (1999), it can handle multiple reference points
simultaneously. Yet another dominance scheme was recently proposed in Molina et al.
(2009), where solutions fulfilling all goals and solutions fulfilling none of the goals are
preferred over solutions fulfilling only some of the goals. This, again, drives the search
beyond the reference point if it is feasible, but it can obviously lead to situations where
a solution which is dominated (fulfilling none of the goals) is actually preferred over
the solution that dominates it (fulfilling some of the goals). Thiele et al. (2009)
integrated reference point information into the indicator based evolutionary algorithm.
In brief, the reference direction method allows the user to specify a starting point and a
reference point, with the difference of the two defining the reference direction. Then,
several points on this vector are used to define a set of achievement scalarizing
functions, and each of these is used to search for a point on the POF. In Deb and Kumar
(2007), an MOEA is used to search for all these points simultaneously. For this
purpose, the NSGA-II ranking mechanism has been modified to focus the search
accordingly. The light beam search also uses a reference direction, and additionally
asks the user for some thresholds which are then used to find some possibly interesting
neighbouring solutions around the (according to the reference direction) most preferred
solution.
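Deb's (1999) goal-based transformation discussed above can be sketched as follows, for goals of the type f_i(x) <= t_i (a minimal illustration; the function name is ours):

```python
def goal_transform(f_values, goals):
    """Replace each criterion f_i by max(0, f_i - t_i): only the violation of
    goal t_i matters, and any solution meeting the goal scores 0 on that criterion."""
    return [max(0.0, f - t) for f, t in zip(f_values, goals)]

# A solution exceeding the first goal by 1.0 and meeting the second:
transformed = goal_transform([3.0, 1.0], [2.0, 2.0])  # [1.0, 0.0]
```

Once every goal is met, all transformed criteria equal zero, which is precisely why the selection pressure vanishes once a feasible reference point is reached, as noted above.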
1.3.2 Approaches Based on Trade-off Information
If the user has no idea of what kind of solutions may be reachable, it may be easier to
specify suitable trade-offs, i.e., how much gain in one objective is necessary to balance
the loss in the other. Greenwood et al. (1997) suggested a procedure, which asks the
user to rank a few alternatives, and from this derives constraints for linear weighting of
the objectives consistent with the given ordering. Then, these are used to check whether
there is a feasible linear weighting such that solution x is preferable to solution y. The
authors suggested using a mechanism from White (1984) which removes a minimal set
of the DM's preference statements to make the weight space non-empty. Note that
although linear combinations of objectives are assumed, it is possible to identify a
concave part of the POF, because the comparisons are only pair-wise. In the guided
MOEA proposed in Branke et al. (2001) the user is allowed to specify preferences in
the form of maximally acceptable trade-offs like "one unit improvement in objective i
is worth at most a_ij units in objective j". The basic idea is to modify the dominance
criterion accordingly, so that it reflects the specified maximally acceptable trade-offs. A
solution x is now preferred to a non-dominated solution y if the gain in the objective
where y is better does not outweigh the loss in the other objective. The region
dominated by a solution is adjusted by changing the slope of the boundaries according
to the specified maximal and minimal trade-offs. This idea can be implemented by a
simple transformation of the objectives: It is sufficient to replace the original
objectives with two auxiliary objectives Ω1 and Ω2 and use these together with the
standard dominance principle, where

Ω1(x) = f1(x) + f2(x)/a21                                               (1.8)

Ω2(x) = f1(x)/a12 + f2(x)                                               (1.9)
Because the transformation is so simple, the guided dominance scheme can be easily
incorporated into standard MOEAs based on dominance, and it changes neither the
complexity nor the inner workings of the algorithm. However, an extension of this
simple idea to more than two dimensions seems difficult. Although developed
independently and with a different motivation, the guided MOEA can lead to the same
preference relation as the imprecise value function approach in Greenwood et al. (1997)
discussed above. The guided MOEA is more elegant and computationally efficient for
two objectives, the imprecise value function approach works independent of the
number of objectives. The idea proposed by Jin and Sendhoff (2002) is to aggregate the
different objectives into one objective via weighted summation, but to vary the weights
gradually over time during the optimization. The approach runs into problems if the
POF is concave, because a small weight change would require the population to make a
big "jump".
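The transformation of Equations (1.8) and (1.9) can be sketched as follows (a two-objective minimization illustration; function names and the trade-off values passed in are ours):

```python
def guided_objectives(f1, f2, a12, a21):
    """Auxiliary objectives of the guided MOEA: applying standard Pareto
    dominance to (omega1, omega2) realizes the modified dominance scheme."""
    omega1 = f1 + f2 / a21
    omega2 = f1 / a12 + f2
    return omega1, omega2

def guided_dominates(x, y, a12, a21):
    """Guided dominance of x over y under the maximally acceptable trade-offs."""
    ox = guided_objectives(x[0], x[1], a12, a21)
    oy = guided_objectives(y[0], y[1], a12, a21)
    return all(a <= b for a, b in zip(ox, oy)) and ox != oy

# (1.0, 1.0) and (0.8, 1.5) are mutually nondominated in the original space,
# but with a12 = a21 = 2 the first guided-dominates the second.
```

This illustrates how the dominated region of a solution widens according to the specified trade-offs, as described above.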
1.3.3 Approaches Based on Marginal Contribution
Several authors have recently proposed to replace the crowding distance as used in
NSGA-II by a solution's contribution to a given performance measure, i.e., the loss in
performance if that particular solution would be absent from the population (Branke et
al., 2004; Zitzler and Künzli, 2004; Emmerich et al., 2005); typically, the performance
measure used is the hypervolume. The hypervolume is the area (in 2D) or the part of the
objective space dominated by the solution set and bounded by a reference point p. The marginal
contribution is then calculated only based on the individuals with the same Pareto rank.
Zitzler et al. (2007) extended this idea by defining a weighting function over the
objective space, and used the weighted hypervolume as indicator. This allows
incorporating preferences into the MOEA by giving preferred regions of the objective
space a higher weight.
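For two objectives, the hypervolume and a solution's marginal contribution to it can be sketched as follows (assuming minimization and a front of mutually nondominated points with distinct objective values; the function names are ours):

```python
def hypervolume_2d(front, ref):
    """Area dominated by a mutually nondominated 2-D front, bounded by ref."""
    pts = sorted(front)                  # ascending f1 implies descending f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # rectangular slice per point
        prev_f2 = f2
    return hv

def marginal_contribution(front, ref):
    """Loss in hypervolume if each solution were absent from the front."""
    total = hypervolume_2d(front, ref)
    return [total - hypervolume_2d(front[:i] + front[i + 1:], ref)
            for i in range(len(front))]
```

Replacing the crowding distance by this marginal contribution, computed among solutions of the same Pareto rank, yields the indicator-based selection discussed above.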
1.3.4 Approaches Based on Scaling
All basic MOEA approaches attempt to generate a uniform distribution of
representatives along the POF. For this goal, they rely on distance information in the
objective space, be it in the crowding distance of NSGA-II or in the clustering of
SPEA-2. Many current implementations of MOEAs (e.g., NSGA-II and SPEA) scale
objectives based on the solutions currently in the population. While this results in nice
visualizations if the front is plotted with a 1:1 ratio, and relieves the DM from
specifying a scaling, it assumes that ranges of values covered by the POF in each
objective are equally important. Whether this assumption is justified certainly depends
strongly on the application and the DM's preferences. In order to find a biased
distribution anywhere on the POF, a previous study by Deb (2003) used a biased
sharing mechanism implemented on NSGA. In brief, the objectives are scaled
according to preferences when calculating the distances. This allows making distances
in one objective appear larger than they are, with a corresponding change in the
resulting distribution of individuals. Although this allows focusing on one objective or
another, the approach does not allow focusing on a compromise region (for equal
weighting of the objectives, the algorithm would produce no bias at all). Branke and
Deb (2005) applied a biased sharing mechanism extended with a better control of the
region of interest and a separate parameter controlling the strength of the bias. For a
solution i on a particular front, the biased crowding distance measure D_i is re-defined
as follows. Let η be a user-specified direction vector indicating the most probable, or
central, linearly weighted utility function, and let α be a parameter controlling the bias
intensity. Then,

D_i = d_i * (d'_i / d_i)^α                                              (1.10)

where d_i and d'_i are the original crowding distance and the crowding distance
calculated based on the locations of the individuals projected onto the plane with
direction vector η. The exponent α controls the extent of the bias, with larger α
resulting in a stronger bias. Preferring solutions having a larger biased crowding
distance D_i will then enable solutions closer to the tangent point to be found. The
DF-based approach also comes under the subsection we discuss below:
• DF based Approach- Trautmann and Mehnen (2005) suggested an explicit
incorporation of preferences into the scaling. They proposed to map the objectives
into the range [0, 1] according to DFs. DFs are analogous to fuzzy membership
functions; in fact, they are a special case of membership functions (Kim and Lin,
2006), first introduced by Harrington (1965). With one-sided sigmoid (monotone)
DFs, the non-dominance relations are not changed. Therefore, the solutions found
are always also nondominated in the original objective space. What changes is the
distribution along the front. Solutions that are in flat parts of the DF receive very
similar desirability values and as MOEAs then attempt to spread solutions evenly in
the desirability space, this will result in a more spread out distribution in the original
objective space. However, in order to specify the DFs in a sensible manner, it is
necessary to at least know the ranges of the POF. Details about DF can be found in
numerous papers (Derringer and Suich, 1980; Lee and Park, 2003; Steuer, 2004;
Lam and Tang, 2005; Park and Kim, 2005; Trautmann and Mehnen, 2005, 2009;
Kim and Lin, 2006; Trautmann and Weihs, 2006; Mandal et al., 2007;
Chatsirirungruang and Miyakawa, 2008; Mukherjee and Ray, 2008; Roy and
Mehnen, 2008; Heike and Jorn, 2009; Jeong and Kim, 2009; Mehnen, 2009;
Noorossana et al., 2009; Trautmann et al., 2009; Lee et al., 2010). We are more
concerned with this approach, and this work attempts to modify the DFA in a manner
such that it can be applied in a more general way.
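As an illustration of a one-sided (monotone) DF, Harrington's original formulation uses a double-exponential sigmoid mapping an objective value into (0, 1); the coefficients below are illustrative and would in practice be fitted to two (value, desirability) anchor points specified by the DM:

```python
import math

def harrington_one_sided(y, b0, b1):
    """Harrington's one-sided desirability: a monotone map of y into (0, 1);
    with b1 > 0, larger values of y are more desirable."""
    return math.exp(-math.exp(-(b0 + b1 * y)))

d = harrington_one_sided(1.0, -1.0, 1.5)  # desirability of the value y = 1.0
```

Because the map is strictly monotone, non-dominance relations are preserved, as stated above; only the spacing of solutions along the front changes in desirability space.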
1.3.5 Other Approaches
The method by Cvetkovic and Parmee (2000) and others (Parmee et al., 2000; Parmee,
2001; Cvetkovic and Parmee, 2002, 2003) allowed the DM to articulate fuzzy
preferences, like "criterion 1 is much more important than criterion 2". A weight w_i
and a minimum level for dominance τ are assigned to each criterion. Then, the concept
of dominance is defined as follows:

x ≺ y  <=>  Σ_{i : f_i(x) <= f_i(y)} w_i >= τ                           (1.11)
with inequality in at least one case. To facilitate specification of the required weights,
Cvetkovic and Parmee (1999) suggested a method to turn fuzzy preferences into
specific quantitative weighting. However, since for every criterion the dominance
scheme only considers whether one solution is better than another solution, and not by
how much it is better, this allows only a very coarse guidance and is difficult to control.
Fuzzy optimization problems also appear in literature with multiple objectives
(Hwang and Lai, 1993), and, typically, fuzzy logic has been used by numerous authors
to solve MOOPs (Sakawa and Kato, 2009). It is evident that EAs (Goldberg, 1989) could
be used to solve fuzzy nonlinear programming problems because EAs are solution
methods potentially capable of solving general nonlinear programming problems. The
association of MOOP with fuzzy logic and evolutionary computation is approached in
various ways in the literature. A GA is described in de Moura et al. (2002) to solve
MOOPs with fuzzy constraints. In Sakawa and Kato (2009) an interactive fuzzy
approach is used to solve nonlinear MOOPs through GAs. A third alternative is
described in Jimenez et al. (2004), which describes a multi-objective approach to solve
optimization problems with fuzzy constraints using a Pareto-based EA to solve a
MOOP associated to the fuzzy problem. In this vein, de Moura et al. (2002) proposed a
MOEA to solve optimization problems with fuzzy costs and constraints, using
export-import businesses as a case study. An "a posteriori" decision-making process
is described in the work to obtain a crisp solution from fuzzy solution (Jimenez et al.,
2001; Jimenez et al., 2006).
Hughes (2002) was concerned with MOEAs for noisy objective functions only. The main
idea to cope with the noise is to rank individuals by the sum of probabilities of being
dominated by any other individual. To consider preferences, the paper proposes a kind
of weighting of the domination probabilities.
Numerous works including Marler and Arora (2004, 2005) and Marler et al.
(2006) evaluated various transformation methods using simple example problems.
Viewing these methods as different means to restrict function values sheds light on how
the methods perform. In addition, they also demonstrated some transformation methods
and advantages of using a simple normalization—modification.
Rangarajan et al. (2004) stated that interactive multi-objective optimization methods
help focus computational effort to find solutions of interest to the DM. However, most
current EAs do not incorporate the expert knowledge of the user. Their paper presented
a multiobjective evolutionary optimization framework that interactively incorporated
user preference information. Although the weighted sum method is eventually used in
the study to depict the Pareto-optimal set, the present analysis is applicable to any
MOOP approach.
Rachmawati and Srinivasan (2006, 2009) presented a review of preference
incorporation in MOEA. It indicates that introducing preference in MOEAs increases
the specificity of selection, leading to solutions that are of higher relevance to the DM.
When many objectives are involved, a MOEA based on pure Pareto-optimality criterion
may not achieve meaningful search. The incorporation of preference addresses this
concern. The incorporation of preference is difficult because of uncertainties arising
from the lack of prior problem knowledge and the vagueness of human preference. Coello
(2009), in his latest survey paper on MOEAs, remarks in the section 'What else remains
to be done': "In practical applications of MOEAs, users are normally not interested in a
large number of nondominated solutions. Instead, they are usually only interested in a
few types of trade-offs among the objectives". The present work is one such approach
in this direction.
1.4 Objectives of the Thesis
The DM's preferences can be incorporated into an MOEA to make the search much more
efficient and meaningful. In this way one can zoom into a certain region of the POF and
evolve the population only towards the area(s) of interest. Thus, a decision support
system (DSS) can be created for the DM to aid decision-making. As discussed earlier,
some works have been reported in this direction (for example, Cvetkovic and Parmee,
1998, 1999, 2002; Deb, 1999; Branke et al., 2001; Jin and Sendhoff, 2002; Branke and
Deb, 2005). It is still relatively infrequent to report the outcome of an MOEA that
incorporates the DM's preferences. The present work is a modest attempt in this
direction to fill the gap pointed out by Coello (2009).
The objectives of the present work are summarized as follows:
• To incorporate partial user preferences into MOEAs.
• To develop a methodology for MOOPs to obtain the interactive (guided) POF.
• To hybridize the two approaches, namely DFA as a priori and an MOEA (NSGA-II/MOPSO-CD) as a posteriori, to provide the guided or biased POF according to the wishes of the DM.
• To develop a theoretical analysis of the proposed approach.
• To develop new types of DFs to produce the required bias in the POF.
• To use the proposed approach for solving reliability optimization problems.
1.5 Organization of the Thesis
In this thesis, a hybrid approach consisting of DF as a priori and an MOEA
(NSGA-II/MOPSO-CD) as a posteriori is developed to provide the guided or biased POF
according to the wishes of the DM. Apart from theoretical evaluation, the effectiveness of
this methodology is demonstrated on a number of benchmark test problems. The
developed approach is used to solve mathematical models of some real-life engineering
optimization problems such as reliability optimization.
The chapter-wise summary of the thesis is as follows:
In Chapter 2, the basics of DF, NSGA-II and MOPSO-CD are presented. A new
approach named MOEA-IDFA (MOEA based IDFA) is proposed in this chapter. The
linear DFA is articulated with NSGA-II/MOPSO and a theoretical analysis of the
approach is provided. Then, the proposed methodology is applied to ten benchmark
test problems of different complexities. For each problem, the performances of both
NSGA-II and MOPSO-CD are compared using the hypervolume metric.
In Chapter 3, the mechanism of MOEA-IDFA with convex and concave DFs
(used as a priori) to incorporate the DM's preferences is given. It is possible to obtain a
desired portion of the POF using an appropriate combination of convex and concave DFs.
A theoretical investigation of the approach is also presented to support the scheme. Then,
the methodology is applied to ten standard test problems (provided in Chapter 1) of
different difficulty levels. Finally, the performances of both NSGA-II and MOPSO-CD
are compared using the hypervolume metric for these problems.
In Chapter 4, we discuss the functioning of the proposed methodology, i.e.,
MOEA-IDFA with sigmoidal DFs (used as a priori) to incorporate the DM's preferences. A
new type of sigmoidal DF is proposed in this chapter. It is possible to obtain a desired
portion (intermediate region) of the POF using an appropriate combination of sigmoidal DFs.
A theoretical study of the approach is provided to support the method. The methodology
is applied to ten standard test problems of different difficulty levels. Finally, the
performances of both NSGA-II and MOPSO-CD are evaluated.
In Chapter 5, it is shown that not only a single region but also multiple regions of a
POF can be explored using suitable DFs. A novel combination of DFs is
proposed in this chapter to achieve this outcome. Multiple portions of the
DM's preferred regions can be explored simultaneously using all convex DFs. A theoretical
analysis of the approach is provided to support the method. The methodology is applied to
ten standard test problems (the same as used in Chapters 1-4) of different difficulty levels.
For each problem, the performances of both NSGA-II and MOPSO-CD are also
compared.
In Chapter 6, five different kinds of problems arising in reliability engineering
are modeled as MOOPs and solved using the methodology developed in this thesis
(i.e., MOEA-IDFA). These problems are a series system, a life-support system in a space
capsule, a complex bridge system, a mixed series-parallel system and a residual heat
removal (RHR) system for a nuclear power plant. This chapter shows the applicability
of the proposed interactive method for the solution of MCDM problems.
Chapter 7 is the concluding Chapter of the thesis. In this Chapter the usefulness
and performance of the methodology developed in the thesis are critically evaluated.
Also, some suggestions for further work in this direction are highlighted.
[Figure: schematic contrasting preference-articulation routes. Full preferences reduce the MOOP to a SOOP, yielding a single solution; partial preferences (membership function/desirability function) combined with an MOEA yield a biased Pareto-front approximation; no preferences with an MOEA (e.g. NSGA-II/MOPSO) yield a Pareto-front approximation followed by user selection of a solution.]
Figure 1.1 Different Ways of Preference Articulation
[Figure: plot with horizontal axis labelled "Covered Distance"; remaining labels not recoverable.]
Figure 1.2 An example to travel bigger distances in ways that are more economical
[Figure: scatter plot with legend: dominated solutions, nondominated solutions, and the Pareto-optimal front (dotted); both objectives to be minimized.]
Figure 1.3 Description of Dominated and Nondominated Solutions
[Figure: classification tree of MOEA preference articulation. A Priori (Before): aggregation (scalarization) approaches, goal programming based, goal attainment based, ε-constraint, fuzzy based. Progressive (During). A Posteriori (After): non-Pareto based (linear fitness combination, nonlinear fitness combination, VEGA) and Pareto based, with a 1st generation (NSGA, MOGA, NPGA) and a 2nd generation (SPEA, SPEA2, PAES, PESA, NSGA-II, MOPSO).]
Figure 1.4 Preference Based Classification of MOEA
[Figure: scatter of search-space points annotated with their nondomination ranks 1, 2, 3.]
Figure 1.5 Nondominated Ranking of Search Space for the All-Minimization Case
1:  initializeSwarm()
2:  initializeLeadersArchive()
3:  determineLeadersQuality()
4:  generation = 0
5:  while generation < maxGenerations do
6:    for each particle do
7:      selectLeader()
8:      updatePosition()  // flight (Equations 6 and 7)
9:      mutation()
10:     evaluation()
11:     updatePbest()
12:   end for
13:   updateLeadersArchive()
14:   determineLeadersQuality()
15:   generation++
16: end while
17: returnArchive()
Figure 1.6 Pseudo code for MOPSO
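The flight step (line 8 above) updates each particle's velocity and position from its personal best and a leader drawn from the archive. A minimal sketch of the canonical PSO update follows; the inertia weight w and acceleration coefficients c1, c2 are illustrative defaults, not necessarily the exact parameters of Equations 6 and 7:

```python
import random

def pso_flight(position, velocity, pbest, leader, w=0.4, c1=1.0, c2=1.0):
    """Canonical PSO velocity/position update for one particle.

    w      : inertia weight (illustrative value)
    c1, c2 : cognitive and social acceleration factors (illustrative)
    """
    new_pos, new_vel = [], []
    for x, v, pb, lb in zip(position, velocity, pbest, leader):
        r1, r2 = random.random(), random.random()
        v_new = w * v + c1 * r1 * (pb - x) + c2 * r2 * (lb - x)
        new_vel.append(v_new)
        new_pos.append(x + v_new)
    return new_pos, new_vel
```

Each particle is thus pulled stochastically towards its own best position and towards a leader selected from the external archive.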
CHAPTER 2
Articulation of an a Priori Approach with an a Posteriori
Approach
In this chapter, a partial user preference approach is proposed. In this method, linear
DFA as an a priori and an MOEA (NSGA-II/MOPSO-CD) as a posteriori are
articulated together to provide an interactive POF. The approach is analyzed
theoretically as well as numerically (on different standard test problems). The application
of the linear DF is analyzed elaborately, and a detailed description of DF, NSGA-II
and MOPSO-CD is presented.
2.1 Introduction
Typically, a MOOP has infinitely many Pareto-optimal solutions. However, it is
generally desirable to obtain one point as a solution. Selecting one out of the set of
Pareto-optimal solutions calls for information that is not contained in the objective
functions. Therefore, interaction with the DM is an integral part of algorithms for
MOOPs at some point during the optimization process, as discussed elaborately in
Section 1.1 of Chapter 1. Articulation of preferences may be done either before (a
priori), during (progressive), or after (a posteriori) the optimization process. Of
these, the a priori and a posteriori modes are the most widely used for preference articulation.
In the first approach (i.e. a priori), one declares preferences for the various
objectives (the f_i's) before the optimization process, incorporates the preferences into a
modified formulation of problem P1 (see Section 1.1) and solves the modified problem
to obtain a single Pareto-optimal solution. In the a priori method, preferences are often
incorporated using a scalarization method in which the multiple objectives are aggregated
(different aggregation operators may be used) into a single function. If the resulting
solution is acceptable to the DM, the solution process is terminated. However, most
scalarization methods do not transfer preferences from the user to the final solution
with complete accuracy. Thus, if the solution is not acceptable to the DM, the preferences
are altered, and the problem is re-solved to obtain another Pareto-optimal solution.
Alternatively, the other approach (a posteriori) entails generating a representation of
the entire Pareto-optimal set (without any preference from the DM) and then letting the
DM choose from that set a suitable solution point that satisfies his/her preferences. Most
MOEAs come under this category. This approach provides various options for the DM
to choose from. It also has a slight disadvantage of producing a large number of
solutions, some of them not in the range of interest of the DM. The reason for this
drawback is the unavailability of preferences from the DM.
To present the DM with a POF in the region of his/her choice, one can combine
the advantages of the two approaches discussed above (i.e. a priori and a
posteriori). The preference-articulation part can be done using an appropriate a priori
approach, while the solution part may be obtained using a suitable a posteriori approach.
In order to provide the DM's preferred (biased) POF, a hybrid approach articulating an a
priori and an a posteriori approach is proposed in this study, combining the strengths of
both types of approaches. Providing the biased POF eases the DM's judgment vis-à-vis a
suitable solution. Jeong and Kim (2009) proposed an interactive DF approach (IDFA)
for multiresponse optimization to facilitate the preference articulation process. One run
of this approach provides just one solution to the DM based on his/her preference in the
form of a DF. The present work is an extension of the work done by Jeong and Kim (2009).
We present an MOEA based IDFA (MOEA-IDFA) to provide the DM a preferred POF
rather than just a single solution. This approach comes under the category of
partial incorporation of the DM's preferences (Branke, 2008). The proposed approach
utilizes DFA as a priori and NSGA-II/MOPSO-CD as a posteriori. The rest of the
chapter is organized as follows:
Section 2.2 describes the essentials of DFA as a priori. Section 2.3 describes the
working, computational complexity, constraint handling and performance metric of the
MOEAs used (NSGA-II and MOPSO-CD). The methodology of the proposed approach
is given in Section 2.4, where the necessary theorems are stated and proved in order to
analyze the methodology theoretically. Results from both MOEAs are discussed and
compared in Section 2.5. Finally, certain concluding observations are drawn in Section 2.6.
2.2 DFA as a Priori
The concept of desirability is a means for complexity reduction of a MOOP. The DFA
for simultaneously optimizing multiple objectives was originally proposed by
Harrington (1965) in the context of multi-objective industrial quality control. DFs are
very popular in response surface methodology (Box and Wilson, 1951). The DF is used
to incorporate the DM's vague idea about in which region of the objective space the
optimum should lie. Essentially, the approach is to translate the functions to a common
scale [0, 1] by means of mathematical transformations, combine them using the
geometric mean and optimize the overall metric. The procedure calls for introducing
for each objective f_i, i = 1, 2, ..., k, a function μ_i (called the DF w.r.t. f_i) with a range
of values between zero and one that measures how desirable it is that objective f_i takes
on a particular value. In other words, the DF maps the objective f_i to the interval
[0, 1], i.e., it is a function μ_i : f_i → [0, 1], defined according to the DM's desired values
specified a priori. Once this function is defined for each of the k objectives, an overall
objective function is defined by the geometric mean of the individual desirabilities.
Harrington (1965) introduced two types of DFs. One aims at maximization or
minimization (one-sided specification), whereas the other reflects a target-value
problem (two-sided specification). We are concerned with the one-sided specification only
in this work. Harrington's one-sided DF is a special form of the Gompertz curve
with a few parameters governing the kurtosis. The definition of the DF reveals its similarity
to the membership function of the fuzzy approach (Kim and Lin, 2006). However, we will
retain the name DF in this work as it suits the interactive guided approach. The
mathematical model of the DF was put into a more general form by Derringer and Suich
(1980) by introducing LTB (larger the better), STB (smaller the better) and NTB
(nominal the better) types of DFs. Depending on whether a particular objective is to be
maximized (LTB), minimized (STB), or assigned a target value (NTB), the appropriate
DF can be used (Nguyen et al., 2009). In this study, we are concerned with only the STB
type of DF. In general, DFs can be further classified into two categories, namely linear
DFs and nonlinear DFs. Before discussing the behaviour of the nonlinear DF, it is
necessary to examine the performance of the linear one. Hence, in this chapter we
concentrate on the linear DF only.
2.2.1 Linear DF
By using a DF, the goodness of a design objective at different functional values can be
well characterized. Since we are concentrating on minimization problems, we consider
here the smaller-the-better (STB) type of DF, where the grade (value of μ_i)
"zero" quantifies perfect satisfaction for the choice of the most desirable objective
function value, corresponding to an ideal target. On the other hand, the grade "one"
characterizes a threshold condition pertaining to the most undesirable objective
function value, corresponding to a worst scenario to be avoided. Equation (2.1)
represents one such formulation of a DF where the STB type of DF is used, shown in
Figure 2.1. The desirability function μ_i for f_i is defined by
$$\mu_i = \begin{cases} 0 & \text{if } f_i \le f_i^{*} \\[4pt] \dfrac{f_i - f_i^{*}}{f_i^{**} - f_i^{*}} & \text{if } f_i^{*} < f_i < f_i^{**} \\[4pt] 1 & \text{if } f_i \ge f_i^{**} \end{cases} \qquad i = 1, 2, 3, \dots, k \tag{2.1}$$

$$\mu_i = \begin{cases} 1 & \text{if } f_i \le f_i^{*} \\[4pt] \dfrac{f_i^{**} - f_i}{f_i^{**} - f_i^{*}} & \text{if } f_i^{*} < f_i < f_i^{**} \\[4pt] 0 & \text{if } f_i \ge f_i^{**} \end{cases} \qquad i = 1, 2, 3, \dots, k \tag{2.2}$$

where f_i^* and f_i^{**} are the i-th components of the ideal and anti-ideal vectors of the MOOP
(i.e., P1). The DF can also be defined the other way, i.e. as the LTB type, where "one"
quantifies perfect satisfaction (see Equation (2.2) and Figure 2.2). Other types (shapes) of DF
can be used too, as discussed in detail in later chapters.
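The linear STB and LTB desirability functions can be sketched directly from Equations (2.1) and (2.2); a minimal Python rendering (function names are illustrative):

```python
def stb_desirability(f, f_ideal, f_anti):
    """Linear smaller-the-better DF (Eq. 2.1): 0 at the ideal value,
    1 at the anti-ideal value, linear in between."""
    if f <= f_ideal:
        return 0.0
    if f >= f_anti:
        return 1.0
    return (f - f_ideal) / (f_anti - f_ideal)

def ltb_desirability(f, f_ideal, f_anti):
    """Linear larger-the-better DF (Eq. 2.2): the mirror image of
    Eq. 2.1, with 1 quantifying perfect satisfaction."""
    return 1.0 - stb_desirability(f, f_ideal, f_anti)
```

For instance, an objective value halfway between the ideal and anti-ideal values maps to a desirability of 0.5 under either form.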
Since in this thesis only minimization problems are taken (a maximization
problem is converted into a minimization one), only the STB type of DF is of prime concern.
Applications of the linear DF can be found in various earlier works. Tang and Paoli
(2004) applied linear DFA to represent the technical attributes' values so as to adapt to
different directions of improvement for different technical attributes. Merkuryeva
(2005) presented a response surface-based simulation meta-modelling procedure using
various linear and nonlinear DFs. Nguyen et al. (2009) solved a multiresponse
optimization problem based on the linear DF for a pervaporation process for producing
anhydrous ethanol. Most of the works involving the linear DF have, in a single run,
produced a single solution due to the use of aggregation methods for solving the
involved MOOP. In the current chapter, we are concerned with using the linear DF as an 'a
priori' approach to incorporate the DM's preference.
2.3 Description of MOEAs Applied as a Posteriori
Several evolutionary methods are available that ensure convergence toward the
Pareto-optimal set both in terms of precision and in terms of diversity of solutions.
For example, NSGA-II and MOPSO-CD are two such MOEAs. These methods,
specifically developed for MOOPs, have been widely and deeply tested and compared
on many different test functions, and some convergence-measuring criteria are available.
We take these two prominent MOEAs (NSGA-II and MOPSO-CD) to act as a
posteriori in our approach. We discuss NSGA-II in Subsection 2.3.1 and MOPSO-CD in Subsection 2.3.3.
2.3.1 NSGA-II or Elitist Nondominated Sorting Genetic Algorithm
To solve MOOPs in general, a simple EA is extended to maintain a diverse set of
solutions with the emphasis on moving toward the true Pareto-optimal region. The
nondominated sorting genetic algorithm (NSGA), proposed by Srinivas and Deb (1994),
is one of the first such algorithms. It is based on several layers of classification of the
individuals. Nondominated individuals get a certain dummy fitness value and are then
removed from the population. This process is repeated until the entire population has
been ranked. It is a very effective algorithm, but it has been criticized for its
computational complexity, lack of elitism and its requirement for specifying sharing
parameters. To address these issues, a modified version of the NSGA,
named NSGA-II (Deb et al., 2002), was developed.
NSGA-II is a generational MOEA that aims at approximating the POF for a given
problem, while keeping high diversity in its result set. It builds on three main modules:
• Nondominated sorting at a certain generation t partitions the population P_t into
fronts F_i, with the index i indicating the non-domination rank shared by all individuals
contained in such a front. The first front F_1 is the actual nondominated front, i.e., it
consists of all nondominated solutions in the population P_t at generation t.
The second front F_2 consists of all individuals that are nondominated in the
set P_t \ F_1, i.e., each member of F_2 is dominated by at least one member of F_1, as
shown in Figure 2.3. Generally, front F_k comprises all individuals that are
nondominated if the individuals in fronts F_j with j < k were to be removed
from P_t (Deb, 2001).
• Crowding-distance assignment calculates a crowding-distance value for each
individual within a certain front F_i as the difference in objective function values
between the nearest neighbours on each side of the individual, summed over all
objectives. Extreme solutions (i.e., solutions with the smallest and largest function
values occurring within the front) are assigned an infinite distance value, which,
motivated by the pursuit of diversity, effectively preserves them into the next
generation should the front in which they are contained be partially discarded when
a new population P_{t+1} is formed.
• The crowded-comparison operator guides the selection process by defining an
ordering on the individuals. Each individual has two attributes, a non-domination rank
and a crowding-distance value. Between two individuals with differing non-domination
ranks, we prefer the individual with the lower rank. Otherwise, with both
individuals belonging to the same front, we prefer the individual that is located in
the less crowded region (i.e., with the higher crowding-distance value).
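The nondominated sorting module above can be sketched as a compact (unoptimized) routine that keeps, for each solution, a domination count and the set of solutions it dominates, assuming all objectives are minimized:

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_nondominated_sort(objs):
    """Partition solutions (tuples of objective values) into fronts F1, F2, ..."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]   # solutions each p dominates
    dom_count = [0] * n                     # how many solutions dominate p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if dominates(objs[p], objs[q]):
                dominated_by[p].append(q)
            elif dominates(objs[q], objs[p]):
                dom_count[p] += 1
        if dom_count[p] == 0:
            fronts[0].append(p)             # first nondominated front
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominated_by[p]:
                dom_count[q] -= 1           # peel off the current front
                if dom_count[q] == 0:
                    nxt.append(q)
        fronts.append(nxt)
        i += 1
    return fronts[:-1]                      # drop the trailing empty front
```

This is a direct transcription of the domination-count bookkeeping described in the main loop of NSGA-II, with the O(kN²) cost noted later in Section 2.3.2.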
A detailed description of the main loop of NSGA-II is given below.
Two distinct entities are calculated in NSGA-II to assess the quality of a given
solution. The first is a domination count, which tracks the number of solutions that
dominate a given solution. The second keeps track of the set of solutions that a given
solution dominates. In the process, all the solutions in the first nondominated
front have their domination count set to zero. The next step is to take each solution
whose domination count is zero, visit each solution it dominates, and reduce that
solution's domination count by one. In doing so, if the domination count of any other
solution becomes zero, this solution is grouped in a separate list. This list is flagged as
the second nondominated front. This process is then continued with each member of
the second list until the next nondominated front is identified. The process is continued
until all fronts are identified. Based on the domination count given to a solution, a
non-domination level is assigned. Those solutions that have higher nondomination
levels are flagged as non-optimal and will never be visited again. One of the key
requirements of a successful solution method is ensuring that a
good representative sample from all possible solutions is chosen. Introduction of a
density estimation process and a crowded-comparison operator has helped NSGA-II to
address the above need. The crowding-distance computation requires sorting of a given
population according to each objective function value in ascending order of magnitude.
Once this is done, the two boundary solutions with the largest and smallest objective
value are assigned distance values of infinity. All other solutions lying in between these
two solutions are then assigned a distance value calculated by the absolute normalized
distance between each pair of adjacent solutions. After each population member is
assigned a crowding-distance value, a crowded-comparison operator is used to compare
each solution with the others. This operator considers two attributes associated with
every solution: the nondomination rank and the crowding distance. Every solution is
first compared with others on the basis of the non-domination rank; solutions with lower
ranks are deemed better in this attribute. Once solutions that belong to the best front are
chosen based on the non-domination rank, the solution located in a less crowded region
is considered better. This forms the basis of the NSGA-II algorithm. The
flow chart depicting the NSGA-II algorithm is shown in Figure 2.1. For details of
NSGA-II, see (Deb et al., 2002, 2000; Deb, 2001, 1999a; Chakraborty et al., 2009;
Sinha et al., 2008). The source code for NSGA-II has been taken from the Kanpur
Genetic Algorithm Laboratory website http://www.iitk.ac.in/kangal and modified
according to the application. The code is written in C, and the website is maintained by
Prof. Deb and his research group. The motivation for using this algorithm comes from
the very good performance of this algorithm on test functions, and its success in
generating the POF.
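The crowding-distance assignment and crowded-comparison operator just described can be sketched as follows; this is a simplified version working directly on tuples of objective values (the index bookkeeping of the original NSGA-II code differs):

```python
def crowding_distance(front):
    """front: list of objective tuples in one nondominated front.
    Boundary solutions get infinity; interior solutions accumulate the
    normalized gap between their nearest neighbours in each objective."""
    n = len(front)
    k = len(front[0])
    dist = [0.0] * n
    for m in range(k):
        order = sorted(range(n), key=lambda i: front[i][m])
        lo, hi = front[order[0]][m], front[order[-1]][m]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][m]
                               - front[order[j - 1]][m]) / (hi - lo)
    return dist

def crowded_better(rank_a, dist_a, rank_b, dist_b):
    """Crowded-comparison: lower rank wins; ties broken by larger distance."""
    return rank_a < rank_b or (rank_a == rank_b and dist_a > dist_b)
```

A solution in the middle of a well-spread front thus receives a finite distance, while the extreme solutions are always preserved.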
2.3.2 Run Time Complexity of NSGA-II
The run time complexity of an algorithm quantifies the amount of time taken by the
algorithm to run, as a function of the size of the input. The time complexity of an
algorithm is commonly expressed using big O notation, which suppresses multiplicative
constants and lower-order terms. Time complexity is commonly estimated by counting
the number of elementary operations performed by the algorithm, where an elementary
operation takes a fixed amount of time to perform. Thus, the amount of time taken and
the number of elementary operations performed by the algorithm differ by at most a
constant factor. Since an algorithm may take a different amount of time even on inputs
of the same size, the most commonly used measure is the worst-case time complexity
of an algorithm.
In order to identify solutions of the first nondominated front in a population of
size N, each solution can be compared with every other solution in the population to
find whether it is dominated. This requires O(kN) comparisons for each solution, where
k is the number of objectives. When this process is continued to find all members of the
first nondominated level in the population, the total complexity is O(kN²). Now
consider the complexity of one iteration of the entire algorithm, with population size N
and k objectives in the MOOP.
The basic operations and their worst-case time complexities are as follows:
1) nondominated sorting is O(kN²);
2) crowding-distance assignment is O(kN log N);
3) sorting in the crowded-comparison operator is O(N log N).
The overall complexity of the algorithm is O(kN²), which is governed by the
nondominated sorting part of the algorithm (Deb, 2001; Deb et al., 2002).
2.3.3 MOPSO-CD or Multi-objective Particle Swarm Optimization with
Crowding Distance
Inspired by the emergent motion of a flock of birds or a school of fish searching for food,
the PSO method was introduced by Kennedy and Eberhart (1995). Due to its simplicity in
implementation and high computational efficiency, it has been used in a range of
applications. In recent years, applying PSO to MOO has become increasingly popular.
Moore and Chapman (1999) attempted to handle MOO by applying Pareto dominance
in their approach, although it has been criticized for not adopting any scheme to
maintain diversity. The algorithm of Ray and Liew (2002)
uses Pareto dominance for convergence and crowding comparison to maintain diversity,
as well as a multi-level sieve to handle constraints. The algorithm
proposed by Parsopoulos and Vrahatis (2008) focuses on addressing the difficulty of
generating the concave portion of the Pareto front by using an aggregation function.
Li (2003) proposes an approach that applies the main
techniques of NSGA-II to the PSO algorithm. Coello et al. (2004) proposed a new version
of multi-objective PSO (MOPSO), in which the results are compared to three highly
competitive multi-objective algorithms: NSGA-II, PAES and microGA. Agrawal et al. (2008)
propose an interactive particle swarm optimization algorithm (IPSO), which is similar
to Coello's MOPSO (Coello et al., 2004). Results show that while MOPSO is superior
in converging to the true Pareto front, its diversity mechanism falls behind that of
NSGA-II. Raquel and Naval Jr (2005) propose the algorithm MOPSO-CD, which
extends PSO in solving MOOPs by incorporating the mechanism of crowding distance
computation in the global best selection and the deletion method of the external archive
of nondominated solutions whenever the archive is full. The crowding distance
mechanism together with a mutation operator maintains the diversity of nondominated
solutions in the external archive (Raquel and Naval Jr, 2005). In other words, the
algorithm is a modification of PSO that adds an archive of nondominated solutions and
uses a crowding distance measure to prevent many similar Pareto optimal solutions
from being retained in the archive.
MOPSO-CD has drawn some attention recently as it exhibits a relatively fast
convergence and well-distributed Pareto front compared with other multi-objective
optimization algorithms. The original source code for MOPSO-CD has been taken from
http://www.engg.upd.edu.ph/~cvmig/mopso.html and modified according to the
application. The details of MOPSO-CD can be consulted in Raquel and Naval Jr
(2005).
At each generation the following computation takes place. First, the crowding distances
of the nondominated solutions in the archive are computed. Guides are then selected for
each particle in the swarm based on decreasing crowding distance in the sorted archive.
This allows the particles to move towards those nondominated solutions in the external
archive which lie in the least crowded areas of the objective space. To obtain a good
distribution of nondominated solutions in the archive, MOPSO-CD uses the crowding
distance in selecting the best particles in its archive. This feature of MOPSO-CD
improves its convergence properties and maintains a good spread of the nondominated
solutions. The flow chart of MOPSO-CD is provided in Figure 2.6.
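The guide-selection step just described can be sketched as a small helper that favours archive members with large crowding distance; select_leader and top_fraction are illustrative names and parameters, not those of the original MOPSO-CD code:

```python
import random

def select_leader(archive, distances, top_fraction=0.1):
    """Pick a global-best guide from the external archive, favouring
    nondominated solutions with large crowding distance (sparse regions).
    top_fraction (illustrative) controls how greedy the selection is."""
    order = sorted(range(len(archive)),
                   key=lambda i: distances[i], reverse=True)
    top = max(1, int(len(archive) * top_fraction))
    return archive[random.choice(order[:top])]
```

Sampling from the least crowded fraction rather than always taking the single sparsest member keeps some stochasticity in the search while still pushing particles towards under-explored parts of the front.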
2.3.4 Run Time Complexity of MOPSO-CD
The computational complexity of the algorithm is dominated by the objective function
computation, crowding distance computation and the nondominated comparison of the
particles in the population and in the archive. If there are k objective functions and N
number of solutions (particles) in the population, then the objective function
computation has O(kN) computational complexity. The costly part of the crowding
distance computation is sorting the solutions in each objective function. If there are M
solutions in the archive, sorting the solutions in the archive has
O(kM log M) computational complexity. If the population and the archive have the
same number of solutions, say N, the computational complexity of the nondominated
comparison is O(kN²). Thus, the overall complexity of MOPSO-CD is O(kN²).
From Sections 2.3.2 and 2.3.4 it is clear that the run time complexities of NSGA-II
and MOPSO-CD are the same if the archive and population sizes are kept the same for
MOPSO-CD. Hence, in this work's experimental setup (for the standard test problems
as well as the real-world application) the archive and population sizes are set to be the same.
2.3.5 Constraint Handling in NSGA-II and MOPSO-CD
In order to handle constrained optimization problems, MOPSO-CD adapted the
constraint handling mechanism used by NSGA-II due to its simplicity in using
feasibility and nondominance when comparing solutions. A solution i is
said to constrained-dominate a solution j if any of the following conditions is true:
1. Solution i is feasible and solution j is not.
2. Both solutions i and j are infeasible, but solution i has a smaller overall constraint
violation.
3. Both solutions i and j are feasible and solution i dominates solution j.
When comparing two feasible particles, the particle which dominates the other is
considered the better solution. On the other hand, if both particles are infeasible, the
particle with the smaller overall constraint violation is the better solution.
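The three conditions translate directly into a comparison routine; a minimal sketch, where the overall constraint violation is assumed to be a nonnegative number that is zero for feasible solutions:

```python
def dominates(a, b):
    """Unconstrained Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def constrained_dominates(obj_i, viol_i, obj_j, viol_j):
    """Constrained-dominance of solution i over solution j.
    viol_*: overall constraint violation (0.0 means feasible)."""
    feas_i, feas_j = viol_i == 0.0, viol_j == 0.0
    if feas_i and not feas_j:        # condition 1: feasibility first
        return True
    if not feas_i and not feas_j:    # condition 2: smaller violation
        return viol_i < viol_j
    if feas_i and feas_j:            # condition 3: Pareto dominance
        return dominates(obj_i, obj_j)
    return False
```

Because feasibility is checked before objective values, a feasible solution always wins against an infeasible one regardless of how good the latter's objectives are.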
2.3.6 Performance Measure for NSGA-II and MOPSO-CD
The performance of an MOEA can be decomposed into two interacting criteria:
• Closeness — the nearness of the identified non-dominated solutions to the true POF,
and
• Diversity — the distribution of the identified solutions across the trade-off surface.
This distribution is commonly expressed in criterion-space.
Various performance metrics have been proposed to measure accuracy, diversity, and
in some cases both simultaneously. A review of performance metrics is provided by
Deb (2001). Some of these metrics involve measurements made with respect to the true
trade-off surface, whilst others involve a purely relative comparison of two sets of
results. The former approach requires that the true surface (POF) be known and can be
sampled. The hypervolume measure, however, can be applied without knowing the true
surface, and it also takes care of both criteria mentioned above. Hence, this study
utilizes the hypervolume metric as a performance measure for both algorithms, i.e.,
NSGA-II and MOPSO-CD.
Hypervolume Metric
The hypervolume metric, proposed by Zitzler et al. (2003), is a volume-based
performance measure: it measures the volume of the region of objective space
that is dominated by the computed solution set, and thus provides useful
information about the dominance of the set as a whole. For problems in which
all objectives are to be minimized, the metric calculates the volume (in the
objective space) covered by the members of the Pareto-optimal set under
consideration. Mathematically, for each solution (e.g., P1, P2 and P3 in
Figure 2.4) of the Pareto-optimal set, a hypercube is constructed with a
reference point and the solution as the diagonal corners of the hypercube.
Thereafter, the union of all hypercubes is formed and its hypervolume
calculated. Figure 2.4 shows the chosen reference point and the three points
P1, P2 and P3 of the Pareto-optimal set whose hypervolume is to be calculated;
the hypervolume is shown as a hatched region. The hypervolume of a set is
measured relative to a reference point, usually the anti-optimal or "worst
possible" point in the space. (We do not address here the problem of choosing
a reference point if the anti-optimal point is not known or does not exist;
one suggestion is to take, in each objective, the worst value from any of the
fronts being compared.) If a set A has a greater hypervolume than a set B,
then A is taken to be a better set of solutions than B.
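The construction above can be made concrete for the bi-objective case, where the union of rectangles reduces to a simple slicing sum. The sketch below is a minimal 2D illustration under the stated assumptions, not the implementation used in this study.

```python
def hypervolume_2d(points, ref):
    """Hypervolume (area) dominated by `points` relative to the reference
    point `ref` for a bi-objective minimization problem. Assumes every
    point dominates the reference point (f1 < r1 and f2 < r2)."""
    r1, r2 = ref
    # Sort by f1 ascending; points on the non-dominated front then have
    # strictly decreasing f2.  Each front point contributes a rectangular
    # slice of width (r1 - f1) and height (previous f2 - current f2).
    prev_f2 = r2
    hv = 0.0
    for f1, f2 in sorted(points):
        if f2 < prev_f2:                 # non-dominated so far in the scan
            hv += (r1 - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

For the two points (0, 1) and (1, 0) with reference point (2, 2), the dominated region is the union of two 2x1 rectangles overlapping in a unit square, giving hypervolume 3.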
2.4 Proposed Methodology
The general MOOP has already been described in Section 1.1 and is given by
Equation P1:

    Minimize  f(x) = {f_1(x), f_2(x), ..., f_k(x)}                 (P1)
     x ∈ X

Eliciting the DF corresponding to each objective (i.e., f_1, f_2, ..., f_k)
through interaction with the DM, a new DF-based multi-objective optimization
problem (DFMOOP) consisting of the DFs is formulated, given by Equation P2:

    Minimize  μ(x) = {μ_1(x), μ_2(x), ..., μ_k(x)}                 (P2)
     x ∈ X

where, for each value of an objective function f_i, there is a mapping called
the DF, i.e., μ_i, that prescribes the variation of the vagueness involved, as
discussed in Section 2.2. The overall (resulting) DF value (say μ) also lies
between zero and one. Hence, the DFA works here as the a priori component.
The Pareto-optimal solutions are then determined for this newly formed DFMOOP
instead of the original MOOP. Solutions of the DFMOOP have a unique
relationship with the original MOOP of objective functions. In general, a
DFMOOP is solved using different aggregators (the min and product operators
are the most common), providing a single solution of the DFMOOP. This type of
approach is applied repeatedly for different degrees of satisfaction until the
DM is satisfied. The benefit of this technique lies in how well the DM's
preferences have been incorporated in the a priori approach, which is quite
rare owing to the vague nature of human judgment. So, in the present approach,
the DFMOOP is solved in a purely multi-objective manner (using the algorithms
discussed, i.e., NSGA-II and MOPSO-CD) without aggregating. The present
approach is an attempt to incorporate the benefits of both the a priori and
the a posteriori methods together.
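The transformation from P1 to P2 with the linear DF of Section 2.2 can be sketched as follows. The helper name and the clamping to [0, 1] outside the bounds are illustrative assumptions; the SCH1 bounds used here match the a priori parameters reported later for that problem.

```python
def make_linear_df(f, f_star, f_dstar):
    """Linear desirability function for objective f: maps the ideal value
    f_star to 0 and the anti-ideal value f_dstar to 1, clamped to [0, 1].
    These two parameters come from Step 0 of the methodology."""
    span = f_dstar - f_star
    def mu(x):
        return min(1.0, max(0.0, (f(x) - f_star) / span))
    return mu

# SCH1 example (f1 = x^2, f2 = (x - 2)^2 on [0, 2]): the DFMOOP minimizes
# mu = (mu1, mu2) instead of f = (f1, f2).
mu1 = make_linear_df(lambda x: x**2, 0.0, 4.0)
mu2 = make_linear_df(lambda x: (x - 2)**2, 0.0, 4.0)
```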
For obvious reasons, there are some relationships between the solutions of P1
and P2, which can be understood through the following two theorems.
Theorem 2.1: The Pareto-optimal solutions of P2 corresponding to the
desirability functions are also Pareto-optimal solutions of P1.

Proof:
Let x* be a Pareto-optimal solution of P2. Then, by the definition of a
Pareto-optimal solution, there is no x ∈ X such that

    μ_i(x) < μ_i(x*)  for some i ∈ {1, 2, ..., k}                      (2.3)
    and  μ_j(x) ≤ μ_j(x*)  for all j ∈ {1, 2, ..., k}, j ≠ i           (2.4)

With the linear DF μ_i(x) = (f_i(x) − f_i*)/(f_i** − f_i*), these conditions
read

    (f_i(x) − f_i*)/(f_i** − f_i*) < (f_i(x*) − f_i*)/(f_i** − f_i*)
    (f_j(x) − f_j*)/(f_j** − f_j*) ≤ (f_j(x*) − f_j*)/(f_j** − f_j*)

Inequality 2.3 ⇔ f_i(x) − f_i* < f_i(x*) − f_i*, as f_i** − f_i* > 0,
               ⇔ f_i(x) < f_i(x*)  for some i ∈ {1, 2, ..., k}         (2.5)

Similarly,

Inequality 2.4 ⇔ f_j(x) ≤ f_j(x*)  for all j ∈ {1, 2, ..., k}, j ≠ i   (2.6)

The proposition holds, as Inequalities 2.5 and 2.6 together form the condition
that implies that x* is also a Pareto-optimal solution of P1 (the MOOP).
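Theorem 2.1 rests on each μ_i being a strictly increasing affine transform of f_i, so dominance comparisons are unchanged by the transformation. A quick numeric check on randomly generated objective vectors (the points and bounds below are made up for illustration, not taken from the thesis):

```python
import random

def dominates(a, b):
    """Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def to_df(fvec, f_star, f_dstar):
    """Apply the linear DF coordinate-wise (points lie inside the bounds,
    so no clamping is needed)."""
    return [(f - lo) / (hi - lo) for f, lo, hi in zip(fvec, f_star, f_dstar)]

random.seed(1)
f_star, f_dstar = [0.0, 0.0], [4.0, 4.0]
pts = [[random.uniform(0, 4), random.uniform(0, 4)] for _ in range(50)]
for a in pts:
    for b in pts:
        # dominance in f-space and in mu-space must coincide (Theorem 2.1)
        assert dominates(a, b) == dominates(to_df(a, f_star, f_dstar),
                                            to_df(b, f_star, f_dstar))
```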
Theorem 2.2: If x* is properly efficient w.r.t. P2 (in Geoffrion's sense),
then it is also properly efficient w.r.t. P1 in the same sense.

Proof: Let x* be a properly efficient solution w.r.t. P2. Then, by Geoffrion's
definition (Definition 1.9) of proper Pareto-optimality, there must exist some
real M̄ > 0 such that for each x ∈ X satisfying μ_i(x) < μ_i(x*), there exists
at least one μ_j (j ≠ i) such that μ_j(x*) < μ_j(x) and

    (μ_i(x*) − μ_i(x)) / (μ_j(x) − μ_j(x*)) ≤ M̄

Now, since f_i** − f_i* > 0,

    μ_i(x) < μ_i(x*) ⇔ (f_i(x) − f_i*)/(f_i** − f_i*) < (f_i(x*) − f_i*)/(f_i** − f_i*)
                     ⇔ f_i(x) < f_i(x*)                                (2.7)

Similarly,

    μ_j(x*) < μ_j(x) ⇔ f_j(x*) < f_j(x)                                (2.8)

Also,

    (μ_i(x*) − μ_i(x)) / (μ_j(x) − μ_j(x*)) ≤ M̄
    ⇔ [(f_i(x*) − f_i(x))/(f_i** − f_i*)] / [(f_j(x) − f_j(x*))/(f_j** − f_j*)] ≤ M̄

or, equivalently,

    (f_i(x*) − f_i(x)) / (f_j(x) − f_j(x*)) ≤ M̄ (f_i** − f_i*)/(f_j** − f_j*) = M   (2.9)

Inequalities 2.7-2.9 together imply that the proposition holds, so the
condition of a properly efficient solution for x* w.r.t. P1 is evident, with
the constant M defined above.

Corollary 2.3: In case (f_i** − f_i*) is equal to (f_j** − f_j*), M becomes
equal to M̄. In this situation, the shape of the POF will be the same for both
P1 and P2 (e.g., the SCH1 problem to be discussed later).
2.4.1 Assumptions
Throughout this study, we make the following assumptions.
1. The vague or biased goals of the DM can be quantified by eliciting the
corresponding DFs through interaction with the DM.
2. The MOEA applied to solve the MOOP provides the global Pareto-optimal
solutions.
2.4.2 Detailed Procedure of the Methodology: MOEA-IDFA
The methodology consists of five steps. Step 0 is an initialization step,
Steps 1 and 2 constitute the calculation phase, and Steps 3 and 4 the
decision-making phase. Figure 2.8 shows the procedure of MOEA-IDFA. In the
calculation phase, the DFs are constructed, and then an optimization model of
the DFs is solved using an MOEA. In the decision-making phase, the DM
evaluates the results of the calculation phase and then articulates his/her
preference information. More specifically, if the DM is satisfied with the
results on all the objectives, the procedure ends successfully. Otherwise, the
DM adjusts the parameters of a DF (in the case of a linear DF, only the
bounds). Then, the procedure goes back to the calculation phase. Each step is
described below.
Step 0: Initialization of DF Parameters
The DF parameters (preference parameters on each objective) are initialized to
construct the DF for each objective in the first iteration. The initial bound
and goal (target) may be determined based on the DM's subjective judgments.
The ideal and anti-ideal values f_i* and f_i** of each objective function f_i
should be calculated in advance for the given MOOP (P1). Since we tackle only
the linear DF in this chapter, fewer parameters are needed here; the remaining
parameters will be discussed in detail in the later chapters.
Step 1: Construction of the DFs
As mentioned earlier, the preference parameters initialized in Step 0 are used
in constructing the linear DF μ_i for each f_i of P1. The DM's preferences can
be utilized in the construction of each DF; in the case of a linear DF, the DM
can specify any two points within f_i* and f_i**. Thus, a new DFMOOP (see
Equation P2) is formed.
Step 2: Solving the DFMOOP
The newly formed P2 is then solved using an efficient MOEA. Since we are
discussing a general MOOP, which can be nonlinear, nonconvex, multimodal and
multivariable, the resulting DFMOOP will also be of the same nature, and a
powerful algorithm is needed to solve it. NSGA-II and MOPSO-CD, used here, are
two such competent techniques capable of solving complex MOOPs.
Step 3: Evaluation of the Solution
The solution obtained in Step 2 is presented to the DM. Theorems 2.1 and 2.2
together imply that a Pareto-optimal solution obtained by solving P2 is also a
Pareto-optimal solution of P1.
If the DM is satisfied with the Pareto-optimal solutions, the methodology
terminates; else, the procedure goes to Step 4.
54
Step 4: Adjusting the Preference Parameters
In order to improve unsatisfactory results, interaction with the DM can help
to modify (update) Steps 0 and 1. On the other hand, if the DM is fully
satisfied with the obtained guided POF, the procedure ends successfully. The
whole process may be repeated until the DM is satisfied.
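Steps 0-4 above can be sketched as an interactive loop. This is a skeleton only: `solve_moea`, `dm_satisfied` and `adjust` are caller-supplied stand-ins for the MOEA run and the DM interaction, and all names here are illustrative, not from the thesis.

```python
def linear_df(f, lo, hi):
    """Linear DF mapping the interval [lo, hi] of objective f onto [0, 1]."""
    return lambda x: min(1.0, max(0.0, (f(x) - lo) / (hi - lo)))

def moea_idfa(objectives, init_bounds, solve_moea, dm_satisfied, adjust,
              max_iters=10):
    """Skeleton of the MOEA-IDFA procedure (Steps 0-4)."""
    bounds = list(init_bounds)                    # Step 0: initialize DF parameters
    front = None
    for _ in range(max_iters):
        dfs = [linear_df(f, lo, hi)
               for f, (lo, hi) in zip(objectives, bounds)]   # Step 1: construct DFs
        front = solve_moea(dfs)                   # Step 2: solve the DFMOOP
        if dm_satisfied(front):                   # Step 3: evaluation by the DM
            return front
        bounds = adjust(bounds, front)            # Step 4: adjust preferences, repeat
    return front
```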
2.5 Experimental Suite
To validate our approach, we demonstrate its working on several standard test
problems. A set of diverse, established test problems is used in this study to
verify the proposed approach. The suite consists of ten tractable bi-objective
and tri-objective functions with varying characteristics, as summarized in
Table 2.1. The corresponding mathematical definitions, restrictions and
characteristics are also provided in the same table. These functions cover
many of the features that may be found in real-world problems. The test
problems are chosen from a number of significant past studies in this area.
Van Veldhuizen and Lamont (2000) cited a number of test problems that many
researchers have used in the past. Of them, we choose three problems, which we
call SCH1 and SCH2 (Schaffer, 1985) and KUR (from Kursawe's study (Kursawe,
1991)). In 1999, Deb suggested a systematic way of developing test problems
for MOOPs (Deb, 1999). Following those guidelines, Zitzler et al. (2000)
proposed six test problems. We choose three of those problems and call them
ZDT1, ZDT2 and ZDT3, detailed in Table 2.1. All the problems mentioned above
have two objective functions, and none of them has any constraint. To fill
this gap, a constrained MOOP with two objectives is considered here; we call
it TNK (Tanaka et al., 1995), shown in Table 2.2. To give a comprehensive
depiction of the proposed approach, we have also considered three
unconstrained tri-objective optimization problems, namely VNT, MHHM1 and
MHHM2 (Huband et al., 2006), shown in Table 2.2.
55
2.6 Results and Discussion
The approach proposed in this chapter has been applied to the set of ten
standard test problems (SCH1, SCH2, KUR, ZDT1, ZDT2, ZDT3, TNK, VNT, MHHM1
and MHHM2) described in Section 2.5 and shown in Tables 2.1-2.2. As discussed
earlier in Section 2.4, a linear DF is used as the a priori component, while
NSGA-II/MOPSO-CD is applied as the a posteriori component in the proposed
approach.
As described in Step 0 of Subsection 2.4.2, the parameters of the a priori
method, i.e., the linear DFs, are obtained with the help of the DM for each
objective. The initial DF parameters used for all test problems are given in
Tables 2.3(a)-2.12(a).
In Step 1, the preference parameters initialized in Step 0 are used in
constructing the linear DFs (μ_i's) for each f_i, forming the DFMOOP (i.e.,
P2).
The newly formed P2 is solved using an a posteriori method, as shown in
Step 2. We have used both NSGA-II and MOPSO-CD as the a posteriori method to
estimate the effectiveness of the approach on different MOEAs. Different sets
of parameter values were tested and fine-tuned through several runs of the
MOEA (NSGA-II/MOPSO-CD) methods used in this approach. The NSGA-II and
MOPSO-CD parameters used for each problem are given in Tables 2.3(b)-2.12(b).
In Step 2, computations have been carried out based on these parameters and
the results reported. To compare the efficiency of NSGA-II and MOPSO-CD, the
hypervolume metric is used here, with the reference points taken as (11, 11)
for bi-objective problems and (11, 11, 11) for tri-objective problems. Tables
2.3(c)-2.12(c) display the mean, median, standard deviation, best and worst
values of the hypervolume metric obtained by the two algorithms, NSGA-II and
MOPSO-CD, over 10 runs each. For the steadiness of the results, 10 runs of
each MOEA (NSGA-II and MOPSO-CD) are carried out for each problem under the
same parameters, and the best POF (based on the best hypervolume metric)
obtained is reported. The POFs of the problems are depicted in Figures
2.9-2.18. A look at Tables 2.3(c)-2.12(c) reveals that, taking the mean of
the hypervolume metric as the competence parameter for both NSGA-II and
MOPSO-CD, NSGA-II performed better in the cases of SCH1, SCH2, KUR, TNK and
VNT, while MOPSO-CD was better in the cases of ZDT1, ZDT2, ZDT3, MHHM1 and
MHHM2 for the chosen parameter settings. If the DM is satisfied with the POF
obtained, then the procedure ends successfully, as explained in Step 4. We
show just one iteration of the approach here, as we assume that the DM is
satisfied with the results obtained in Step 3. However, if the DM is not
satisfied, the procedure goes back to Step 1 to accommodate the DM's
preferences in a better manner.
2.7 Conclusion
An exhaustive description and application of the DF, NSGA-II and MOPSO-CD
have been presented in this chapter. A partial user-preference approach named
MOEA-IDFA is also proposed, in which a linear DFA as the a priori component
and NSGA-II/MOPSO-CD as the a posteriori component are combined to provide an
interactive POF. A theoretical analysis of the proposed MOEA-IDFA is also
provided. The performance of MOEA-IDFA is tested on a set of 10 test
problems. Though the application of a linear DFA does not introduce any bias
in the POF, it can still serve well as a starting point for the interaction
with the DM.
57
Figure 2.1 STB type of DF
Figure 2.2 LTB type of DF
Figure 2.3 Description of Crowding-Distance
58
Figure 2.4 The Hypervolume Enclosed by the Nondominated Solutions
Figure 2.5 Nondominated Sorting of a Population
59
[Flow chart: generate a random population; evaluate each objective function;
assign a nondominated level and crowding distance to each solution using the
fast sorting algorithm; generate offspring via crossover, mutation and
elitism; combine parent and child populations, sort the new population by
nondomination rank, and fill the next generation from the best-ranked
nondominated sets; repeat until gen_max (or the population stops changing),
then report the Pareto set of solutions.]
Figure 2.6 Flow Chart Representation of NSGA-II Algorithm
60
[Flow chart: specify the MOPSO-CD parameters; randomly initialize particle
positions, velocities, pbest and gbest; for every particle, evaluate the
fitness values, insert new nondominated solutions into the archive A (deleting
all dominated solutions from A), compute the crowding distances of the
nondominated solutions in A and sort in descending order, replace a randomly
selected particle from a specified bottom portion (e.g., the lower 10%) with
the new solution, update pbest and gbest, and update the particle position and
velocity; produce the next swarm of particles until termination, then output
the Pareto solution set.]
Figure 2.7 Flow Chart of MOPSO-CD Algorithm
61
[Flow chart: calculation phase — (Step 0) initialize the preference
parameters; (Step 1) construct the DFs; (Step 2) solve the DFMOOP using
NSGA-II/MOPSO-CD. Decision-making phase — (Step 3) evaluate the solution; if
the DM is satisfied with the obtained guided front, end; otherwise adjust the
DF parameters according to the DM's choice of Pareto front and return to the
calculation phase.]
Figure 2.8 Flow Chart of the Procedure Proposed: MOEA-IDFA
62
Table 2.1 Description of Unconstrained Bi-objective Problems

SCH1 (x ∈ [0, 2]; convex, connected):
  minimize f = (f1(x), f2(x)), where
  f1(x) = x², f2(x) = (x − 2)²

SCH2 (x ∈ [−5, 10]; disconnected):
  minimize f = (f1(x), f2(x)), where
  f1(x) = −x      if x ≤ 1,
        = −2 + x  if 1 < x ≤ 3,
        = 4 − x   if 3 < x ≤ 4,
        = −4 + x  if x > 4,
  f2(x) = (x − 5)²

KUR (x_i ∈ [−5, 5]; nonconvex, disconnected):
  minimize f = (f1(x), f2(x)), where
  f1(x) = Σ_{i=1}^{n−1} (−10 exp(−0.2 √(x_i² + x_{i+1}²))),
  f2(x) = Σ_{i=1}^{n} (|x_i|^0.8 + 5 sin x_i³)

ZDT1 (n = 30, x_i ∈ [0, 1], i = 1, 2, 3, ..., 30; convex):
  minimize f = (f1(x), f2(x)), where
  f1(x) = x_1,
  f2(x) = g(x)[1 − √(x_1/g(x))],
  g(x) = 1 + (9/(n − 1)) Σ_{i=2}^{n} x_i

ZDT2 (n = 30, x_i ∈ [0, 1], i = 1, 2, 3, ..., 30; concave):
  minimize f = (f1(x), f2(x)), where
  f1(x) = x_1,
  f2(x) = g(x)[1 − (x_1/g(x))²],
  g(x) = 1 + (9/(n − 1)) Σ_{i=2}^{n} x_i

ZDT3 (n = 30, x_i ∈ [0, 1], i = 1, 2, 3, ..., 30; convex, disconnected):
  minimize f = (f1(x), f2(x)), where
  f1(x) = x_1,
  f2(x) = g(x)[1 − √(x_1/g(x)) − (x_1/g(x)) sin(10πx_1)],
  g(x) = 1 + (9/(n − 1)) Σ_{i=2}^{n} x_i
Table 2.2 Description of Constrained Bi-objective and Unconstrained Tri-objective Problems

TNK (x_i ∈ [0, π], i = 1, 2; nonconvex, disconnected):
  minimize f = (f1(x), f2(x)), where f1(x) = x_1, f2(x) = x_2,
  subject to: x_1² + x_2² − 1 − 0.1 cos(16 arctan(x_1/x_2)) ≥ 0,
              (x_1 − 0.5)² + (x_2 − 0.5)² ≤ 0.5

VNT (x_i ∈ [−2, 2], i = 1, 2; bended surface, connected):
  minimize f = (f1(x), f2(x), f3(x)), where
  f1(x) = x_1² + (x_2 − 1)²,
  f2(x) = x_1² + (x_2 + 1)² + 1,
  f3(x) = (x_1 − 1)² + x_2² + 2

MHHM1 (x ∈ [0, 1]; convex):
  minimize f = (f1(x), f2(x), f3(x)), where
  f1(x) = (x − 0.8)², f2(x) = (x − 0.85)², f3(x) = (x − 0.9)²

MHHM2 (x_i ∈ [0, 1], i = 1, 2; convex):
  minimize f = (f1(x), f2(x), f3(x)), where
  f1(x) = (x_1 − 0.8)² + (x_2 − 0.6)²,
  f2(x) = (x_1 − 0.85)² + (x_2 − 0.7)²,
  f3(x) = (x_1 − 0.9)² + (x_2 − 0.6)²
64
Table 2.3 Parameters and Hypervolumes for SCH1 using MOEA-IDFA (Linear DF Case)
Table 2.3 (a) A Priori Parameters for SCH1
  f1* = 0,  f1** = 4,  f2* = 0,  f2** = 4
Table 2.3 (b) A Posteriori Parameters for SCH1
  Common:    Pop Size = 200, Max. Gen. = 50, Mutation Prob. = 0.4
  MOPSO-CD:  Arch. Size = 200, w = 0.4, c1 = 1, c2 = 1
  NSGA-II:   Cross. Prob. = 0.9, Cross. Index = 5
Table 2.3 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for SCH1
             Mean       Median     S.D.     Best       Worst
  MOPSO-CD   397.58253  397.58254  0.00029  397.58294  397.58212
  NSGA-II    416.88258  416.88259  0.00029  416.88299  416.88212
Figure 2.9 POFs of SCH1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Linear DF Case)
65
Table 2.4 Parameters and Hypervolumes for SCH2 using MOEA-IDFA (Linear DF Case)
Table 2.4 (a) A Priori Parameters for SCH2
  f1* = −1,  f1** = 1,  f2* = 0,  f2** = 16
Table 2.4 (b) A Posteriori Parameters for SCH2
  Common:    Pop Size = 200, Max. Gen. = 100, Mutation Prob. = 0.4
  MOPSO-CD:  Arch. Size = 200, w = 0.4, c1 = 1, c2 = 1
  NSGA-II:   Cross. Prob. = 0.9, Cross. Index = 5, Mut. Index = 5
Table 2.4 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for SCH2
             Mean       Median     S.D.     Best       Worst
  MOPSO-CD   407.25158  407.25163  0.00029  407.25199  407.25113
  NSGA-II    407.97659  407.97659  0.00028  407.97699  407.97617
Figure 2.10 POFs of SCH2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Linear DF Case)
66
Table 2.5 Parameters and Hypervolumes for KUR using MOEA-IDFA (Linear DF Case)
Table 2.5 (a) A Priori Parameters for KUR
  f1* = 0,  f1** = 4,  f2* = 0,  f2** = 4
Table 2.5 (b) A Posteriori Parameters for KUR
  Common:    Pop Size = 200, Max. Gen. = 100, Mutation Prob. = 0.3
  MOPSO-CD:  Arch. Size = 200, w = 0.5, c1 = 1, c2 = 1
  NSGA-II:   Cross. Prob. = 0.8, Cross. Index = 5, Mut. Index = 5
Table 2.5 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for KUR
             Mean        Median      S.D.     Best        Worst
  MOPSO-CD   1224.80032  1224.80032  0.00028  1224.80079  1224.80000
  NSGA-II    1226.60059  1226.60058  0.00030  1226.60099  1226.60013
Figure 2.11 POFs of KUR w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Linear DF Case)
67
Table 2.6 Parameters and Hypervolumes for ZDT1 using MOEA-IDFA (Linear DF Case)
Table 2.6 (a) A Priori Parameters for ZDT1
  f1* = 0,  f1** = 1,  f2* = 0,  f2** = 1
Table 2.6 (b) A Posteriori Parameters for ZDT1
  Common:    Pop Size = 200, Max. Gen. = 300, Mutation Prob. = 0.1
  MOPSO-CD:  Arch. Size = 200, w = 0.6, c1 = 1.1, c2 = 1.1
  NSGA-II:   Cross. Prob. = 0.8, Cross. Index = 15, Mut. Index = 15
Table 2.6 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for ZDT1
             Mean       Median     S.D.     Best       Worst
  MOPSO-CD   399.59859  399.59860  0.00028  399.59899  399.59816
  NSGA-II    399.59554  399.59554  0.00025  399.59592  399.59512
Figure 2.12 POFs of ZDT1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Linear DF Case)
68
Table 2.7 Parameters and Hypervolumes for ZDT2 using MOEA-IDFA (Linear DF Case)
Table 2.7 (a) A Priori Parameters for ZDT2
  f1* = 0,  f1** = 1,  f2* = 0,  f2** = 1
Table 2.7 (b) A Posteriori Parameters for ZDT2
  Common:    Pop Size = 200, Max. Gen. = 300, Mutation Prob. = 0.1
  MOPSO-CD:  Arch. Size = 200, w = 0.5, c1 = 1.1, c2 = 1.1
  NSGA-II:   Cross. Prob. = 0.9, Cross. Index = 15, Mut. Index = 15
Table 2.7 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for ZDT2
             Mean       Median     S.D.     Best       Worst
  MOPSO-CD   399.29058  399.29061  0.00026  399.29097  399.29018
  NSGA-II    399.28358  399.28362  0.00028  399.28399  399.28315
Figure 2.13 POFs of ZDT2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Linear DF Case)
69
Table 2.8 Parameters and Hypervolumes for ZDT3 using MOEA-IDFA (Linear DF Case)
Table 2.8 (a) A Priori Parameters for ZDT3
  f1* = 0,  f1** = 8.52,  f2* = −0.773,  f2** = 1
Table 2.8 (b) A Posteriori Parameters for ZDT3
  Common:    Pop Size = 200, Max. Gen. = 300, Mutation Prob. = 0.1
  MOPSO-CD:  Arch. Size = 200, w = 0.6, c1 = 1.1, c2 = 1.1
  NSGA-II:   Cross. Prob. = 0.9, Cross. Index = 15, Mut. Index = 15
Table 2.8 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for ZDT3
             Mean       Median     S.D.     Best       Worst
  MOPSO-CD   414.75153  414.75151  0.00029  414.75195  414.75110
  NSGA-II    414.72459  414.72461  0.00027  414.72500  414.72410
Figure 2.14 POFs of ZDT3 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Linear DF Case)
70
Table 2.9 Parameters and Hypervolumes for TNK using MOEA-IDFA (Linear DF Case)
Table 2.9 (a) A Priori Parameters for TNK
  f1* = 0,  f1** = 1.05,  f2* = 0,  f2** = 1.05
Table 2.9 (b) A Posteriori Parameters for TNK
  Common:    Pop Size = 200, Max. Gen. = 200, Mutation Prob. = 0.5
  MOPSO-CD:  Arch. Size = 200, w = 0.4, c1 = 1, c2 = 1
  NSGA-II:   Cross. Prob. = 0.8, Cross. Index = 5, Mut. Index = 5
Table 2.9 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for TNK
             Mean       Median     S.D.     Best       Worst
  MOPSO-CD   399.22257  399.22257  0.00027  399.22299  399.22212
  NSGA-II    397.32452  397.32453  0.00028  397.32496  397.32412
Figure 2.15 POFs of TNK w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Linear DF Case)
71
Table 2.10 Parameters and Hypervolumes for VNT using MOEA-IDFA (Linear DF Case)
Table 2.10 (a) A Priori Parameters for VNT
  f1* = 0,  f1** = 4,  f2* = 1,  f2** = 5,  f3* = 2,  f3** = 4
Table 2.10 (b) A Posteriori Parameters for VNT
  Common:    Pop Size = 200, Max. Gen. = 250, Mutation Prob. = 0.5
  MOPSO-CD:  Arch. Size = 200, w = 0.4, c1 = 1, c2 = 1
  NSGA-II:   Cross. Prob. = 0.9, Cross. Index = 10, Mut. Index = 10
Table 2.10 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for VNT
             Mean        Median      S.D.     Best        Worst
  MOPSO-CD   6093.40050  6093.40053  0.00032  6093.40099  6093.40000
  NSGA-II    6095.10050  6095.10051  0.00032  6095.10099  6095.10000
Figure 2.16 POFs of VNT w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Linear DF Case)
72
Table 2.11 Parameters and Hypervolumes for MHHM1 using MOEA-IDFA (Linear DF Case)
Table 2.11 (a) A Priori Parameters for MHHM1
  f1* = 0,  f1** = 0.01,  f2* = 0,  f2** = 0.0025,  f3* = 0,  f3** = 0.01
Table 2.11 (b) A Posteriori Parameters for MHHM1
  Common:    Pop Size = 200, Max. Gen. = 250, Mutation Prob. = 0.6
  MOPSO-CD:  Arch. Size = 200, w = 0.4, c1 = 1, c2 = 1
  NSGA-II:   Cross. Prob. = 0.9, Cross. Index = 10, Mut. Index = 10
Table 2.11 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for MHHM1
             Mean        Median      S.D.     Best        Worst
  MOPSO-CD   7999.00051  7999.00052  0.00032  7999.00097  7999.00000
  NSGA-II    7999.00040  7999.00038  0.00014  7999.00066  7999.00011
Figure 2.17 POFs of MHHM1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Linear DF Case)
73
Table 2.12 Parameters and Hypervolumes for MHHM2 using MOEA-IDFA (Linear DF Case)
Table 2.12 (a) A Priori Parameters for MHHM2
  f1* = 0,  f1** = 0.0125,  f2* = 0,  f2** = 0.0125,  f3* = 0,  f3** = 0.0125
Table 2.12 (b) A Posteriori Parameters for MHHM2
  Common:    Pop Size = 200, Max. Gen. = 300, Mutation Prob. = 0.5
  MOPSO-CD:  Arch. Size = 200, w = 0.4, c1 = 1, c2 = 1
  NSGA-II:   Cross. Prob. = 0.9, Cross. Index = 15, Mut. Index = 15
Table 2.12 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for MHHM2
             Mean        Median      S.D.     Best        Worst
  MOPSO-CD   7995.10051  7995.10051  0.00031  7995.10095  7995.10000
  NSGA-II    7995.00051  7995.00052  0.00031  7995.00095  7995.00000
Figure 2.18 POFs of MHHM2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Linear DF Case)
74
CHAPTER 3
Guided POF Articulating Nonlinear DFA with an MOEA
(Convex-Concave Combination)
In this chapter, we articulate nonlinear (convex and concave) DFs as the
a priori component with an MOEA (NSGA-II/MOPSO-CD) as the a posteriori
component, and demonstrate how, in a single run, a preferred set of solutions
near the desired region of the DM's interest can be found instead of just one
solution. The consequence of combining convex and concave DFs is analyzed
theoretically as well as numerically (on ten different benchmark problems). A
set of Pareto-optimal solutions is determined via the DFs of the objectives,
which incorporate the DM's preferences regarding the different objectives.
3.1 Introduction
The task of finding multiple Pareto-optimal solutions in MOOPs having been
well demonstrated, MOEA researchers and practitioners should now concentrate
on devising methodologies for the complete task of finding preferred
Pareto-optimal fronts in an interactive manner with a DM. Although the
ultimate target of such an activity is to come up with a single solution, an
MOEA procedure can be combined with a decision-making strategy to find a set
of preferred solutions in regions of interest to the DM, so that the solutions
in a region collectively bring out the properties of the solutions there. Such
an activity allows the DM to first make a higher-level search by choosing a
region of interest on the POF, rather than focusing immediately on a
particular solution.
In general, solution methods for MOOPs can be classified according to the
stage at which the preferences of the DM are integrated, as discussed earlier
in Section 1.1 of Chapter 1. A priori optimization methods, e.g., the
desirability index (Harrington, 1965), utilize the DM's knowledge, which has
to be specified upfront; these techniques generate a single point solution
only. A posteriori methods, however, create a set of solutions (i.e.,
Pareto-optimal solutions), or Pareto set. Most MOEA approaches can be
classified as a posteriori: they attempt to discover the whole set of
Pareto-optimal solutions, or at least a well-distributed set of
representatives. The DM then looks at the (possibly very large) set of
generated alternatives and makes a final decision based on his/her
preferences. However, if the Pareto-optimal solutions are too many, analyzing
them to reach the final decision is quite a challenging and burdensome process
for the DM. In addition, in a particular problem the user may not be
interested in the complete Pareto set; instead, the user may be interested in
a certain region of it. Such a bias can arise if not all objectives are of
equal importance to the user. Finding a preferred distribution in the region
of interest is more practical and less subjective than finding one biased
solution in the region of interest.
Although it is usually tough for a DM to completely specify his or her
preferences before any alternatives are known, the DM often has a rough idea
of the preferential goals towards which the search should be directed, so that
he/she may be able to articulate vague, linguistic degrees of importance or
give reasonable trade-offs between the different objectives. Such information
should be integrated into the MOEA to bias the search towards solutions that
are preferential for the DM. This would in principle yield two important
advantages:
(1) Instead of a diverse set of solutions, many of which are irrelevant to the
DM, a search biased towards the DM's preferences will yield a more
fine-grained, and thus more suitable, selection of alternatives;
(2) By focusing the search onto the relevant part of the search space, the
optimization algorithm is expected to find these solutions more quickly.
To achieve this task, a methodology is proposed in this chapter. In most
practical cases, the DM generally has at least a vague idea of the region or
regions of the objective space in which the 'optimums' should lie. For this
purpose, DFs that map the objectives to the interval [0, 1] according to their
desired values can be specified a priori to state a relative preference among
the objectives. The Pareto-optimal solutions are then determined for the DFs
instead of the objective functions using an MOEA. Thus, the proposed method is
a fusion of an a priori and an a posteriori approach.
Furthermore, this application is quite straightforward in practice. In
addition, in spite of the focus on the desired region or regions, the MOEA
search can be carried out without any additional constraints on the decision
variables and/or objectives. Therefore, no individuals are 'wasted' during the
optimization process because they do not fit into constrained intervals. The
MOEA itself is not touched at all, which facilitates the use of the method in
practice, as existing optimization tools can be used for the optimization
process as the a posteriori component. In other words, this approach is
independent of the MOEA used and can easily be coupled to any of them without
any deep modification to the main structure of the method chosen. For
comparison purposes, we use two of them, namely NSGA-II and MOPSO-CD.
Once the DM has agreed on the parameters of the DFs, which is the key step,
MOEAs can be applied to the transformed objective space (i.e., the DFMOOP)
without modification (Trautmann and Mehnen, 2009). By means of the DFs, the
solutions concentrate in the desired region or regions, which facilitates the
solution-selection process and supports the MOEAs in finding relevant
solutions. If the DM has one or more regions of preference for an objective,
the DF should be modified accordingly. For example, if the DM is interested in
finding a corner portion of the POF of a bi-objective minimization problem,
the DFs corresponding to the objectives can be adapted using convex and
concave DFs. We shall discuss the consequence of using convex and concave DFs
together in this chapter.
Jeong and Kim (2009) proposed an interactive desirability function approach
(IDFA) for multiresponse optimization to facilitate the preference
articulation process. One run of this approach provides just one solution to
the DM, based on his preference in the form of a DF. The present work is an
extension of the work done by Jeong and Kim (2009): we present an MOEA-based
IDFA (MOEA-IDFA) to provide the DM with a preferred portion of the POF rather
than just a single solution.
Section 3.2 describes the prerequisites of the nonlinear (convex-concave) DFA
as the a priori component. The methodology of the proposed approach is
provided in Section 3.3, along with the theorems necessary to analyze it
theoretically. Results on the test problems from both MOEAs (NSGA-II and
MOPSO-CD) are compared and discussed in Section 3.4. Finally, concluding
observations are drawn in Section 3.5.
3.2 Nonlinear (Convex/Concave) DFA as a Priori
If a DM has some preference (i.e., bias) towards one or more objectives, a
nonlinear DF may fulfil the purpose. Different shapes of DF provide different
types of bias towards different objective regions of the POF. The form of the
DF originally proposed by Harrington (1965) was based on the exponential
function of a linear transformation of the f_i's. Derringer and Suich (1980)
found this specification not very flexible, in the sense that the DFs cannot
assume a variety of shapes, and introduced a modified (alternative) version of
the DF for both the one-sided and the two-sided case, based on a power of a
linear transformation of the f_i's. The one-sided (STB type) formulation of a
DF, with which we are concerned here (shown in Figure 3.1), is given in
Equation 3.1:

    μ_i = 0                                      if f_i ≤ f_i*
    μ_i = ((f_i − f_i*)/(f_i** − f_i*))^{n_i}    if f_i* < f_i ≤ f_i**    (3.1)
    μ_i = 1                                      if f_i** < f_i

    for i = 1, 2, 3, ..., k;  n_i ∈ R+
Where parameters f" andare minimal and maximal acceptable levels of f,
respectively. The DFs u, 's are on the same scale and are discontinuous at the points
f,*
, A and f** . The values of n, (a kind of weighting factor) can be chosen so that the
DF is easier or more difficult to satisfy. Parameter ni is a positive constant whose
increasing magnitude creates a correspondingly more convex desirability curve. For
example, if n, is cliosen to be greater than one in Equation 3.1, p, is near 0 even if the
A is not low, making the DF more easier to satisfy in terms of desirability. As values
of n, move closer to 0, the desirability reflected by Equation 3.1 becomes lower
making it harder to satisfy in terms of desirability. Linear DF (i.e. for ni =1) has been
analyzed already in Chapter 1. Use of Linear DF does not provide biasness towards
objectives.
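The effect of the exponent n_i in Equation 3.1 can be sketched in a few lines of Python. This is an illustrative sketch only; the function name and the bounds [0, 4] are assumptions chosen for the example, not thesis code.

```python
# One-sided (STB type) DF of Equation 3.1, to be minimized: 0 below f*,
# 1 above f**, and a power of a linear transformation in between.

def desirability_stb(f, f_star, f_dstar, n):
    """Eq. 3.1: mu = ((f - f*) / (f** - f*))**n on (f*, f**], clipped outside."""
    if f <= f_star:
        return 0.0
    if f > f_dstar:
        return 1.0
    return ((f - f_star) / (f_dstar - f_star)) ** n

# Effect of n at the midpoint of [0, 4]:
mid = 2.0
convex = desirability_stb(mid, 0.0, 4.0, 5.0)    # n > 1: mu stays near 0 (easy to satisfy)
concave = desirability_stb(mid, 0.0, 4.0, 0.1)   # n < 1: mu jumps toward 1 (hard to satisfy)
print(convex, concave)
```

With n = 5 the midpoint desirability is 0.5^5 ≈ 0.031, while with n = 0.1 it is 0.5^0.1 ≈ 0.93, matching the easy/hard-to-satisfy behaviour described above.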
In this chapter, we discuss the effect of a combination of convex and concave DFs
for the MOOP, so Equation 3.1 is modified slightly for further use, as the convex DF
(Equation 3.2) and the concave DF (Equation 3.3) shown in Figures 3.1-3.2. Both of
these DFs are monotonic between the lower (f_i*) and upper (f_i**) bounds of their
definition. To the author's best knowledge, this is the first attempt to incorporate
convex and concave DFs together. In both cases, n_i is the key parameter.
    μ_i = 0                                        if f_i ≤ f_i*
    μ_i = ((f_i − f_i*) / (f_i** − f_i*))^(n_i)    if f_i* < f_i ≤ f_i** ;  i = 1, 2, 3, ..., k ;  n_i ≥ 1 ,  n_i ∈ R+        (3.2)
    μ_i = 1                                        if f_i** < f_i

Figure 3.1 Shape of a Convex DF

    μ_i = 0                                        if f_i ≤ f_i*
    μ_i = ((f_i − f_i*) / (f_i** − f_i*))^(n_i)    if f_i* < f_i ≤ f_i** ;  i = 1, 2, 3, ..., k ;  n_i ≤ 1 ,  n_i ∈ R+        (3.3)
    μ_i = 1                                        if f_i** < f_i

Figure 3.2 Shape of a Concave DF
3.3 Proposed Methodology
The MOOP is already described in Section 1.1, given by Equation P1:

    Minimize_{x ∈ X}  f(x) = {f_1(x), f_2(x), ..., f_k(x)}        (P1)

Eliciting the DF corresponding to each objective (i.e., f_1, f_2, ..., f_k) through
interaction with the DM, a new DF-based multi-objective optimization problem (DFMOOP)
consisting of the DFs is formulated, given by Equation P2:

    Minimize_{x ∈ X}  μ(x) = {μ_1(x), μ_2(x), ..., μ_k(x)}        (P2)

where, for each value of an objective function f_i, there is a mapping called the DF,
i.e., μ_i, to prescribe the variation of the vagueness involved, as discussed in
Section 3.2. The overall (resulting) DF's value (say μ) also lies between zero and one.
The Pareto-optimal solutions are then determined for this newly formed DFMOOP
instead of the original MOOP. Solutions of the DFMOOP have a unique relationship with
the original MOOP of objective functions. In general, the DFMOOP is solved using
different aggregators, the min and product operators being the most common, providing
a single solution of the DFMOOP. This type of approach is applied repeatedly for
different degrees of satisfaction until the DM is satisfied. The benefit of this
technique lies in how well the DM's preferences have been incorporated in the a priori
approach, which is quite rare due to the vague nature of human judgment. So, in the
present approach, the DFMOOP is solved in a purely multi-objective manner (using the
discussed algorithms, i.e., NSGA-II and MOPSO-CD) without aggregating. The present
approach is an attempt to incorporate the benefits of both the a priori and the a
posteriori methods together.
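The construction of P2 from P1 can be sketched as follows. This is a hedged illustration: the SCH1 objectives, the DF bounds [0, 4] and the exponents n1 = 5, n2 = 0.1 are assumptions chosen for the example, not values fixed by the thesis.

```python
def df(f, lo, hi, n):
    """One-sided DF of Eq. 3.1-3.3: 0 below lo, 1 above hi, power law between."""
    if f <= lo:
        return 0.0
    if f > hi:
        return 1.0
    return ((f - lo) / (hi - lo)) ** n

def sch1(x):
    """Schaffer's SCH1: two conflicting objectives of a single variable."""
    return [x ** 2, (x - 2.0) ** 2]

def make_dfmoop(objectives, dfs):
    """P2: map x to the vector of desirabilities (mu_1(f_1(x)), ..., mu_k(f_k(x)))."""
    def mu(x):
        return [d(f) for d, f in zip(dfs, objectives(x))]
    return mu

dfs = [lambda f: df(f, 0.0, 4.0, 5.0),   # convex DF for f1 (n1 > 1)
       lambda f: df(f, 0.0, 4.0, 0.1)]   # concave DF for f2 (n2 < 1)
p2 = make_dfmoop(sch1, dfs)
print(p2(1.0))  # the mu-vector an MOEA would minimize instead of f
```

An MOEA is then run on `p2` exactly as it would be on the original objective vector; no aggregation into a single scalar is performed.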
Now we are going to prove a theorem, which establishes the relationship between
the Pareto-optimal solutions of P2 and P1.
Theorem 3.1: The Pareto-optimal solutions of P2 corresponding to the DFs in Equations
3.2 and 3.3 are Pareto-optimal solutions of P1.
Proof:
Let x* be a Pareto-optimal solution of P2. Depending on the DM's priority, μ_i (or μ_j)
may be defined by either Equation 3.2 or Equation 3.3. By the definition of a
Pareto-optimal solution,

    ∄ x ∈ X :  μ_i(x) < μ_i(x*)   for i ∈ {1, 2, ..., k}
               and  μ_j(x) ≤ μ_j(x*)   for j ∈ {1, 2, ..., k} ; j ≠ i

⟺ ∄ x ∈ X :

    ((f_i(x) − f_i*) / (f_i** − f_i*))^(n_i) < ((f_i(x*) − f_i*) / (f_i** − f_i*))^(n_i)        (3.4)

for i ∈ {1, 2, ..., k}; n_i ∈ R+; n_i ≥ 1 or n_i ≤ 1 depending on the shape of μ_i, and
f_i* ≤ f_i(x), f_i(x*) ≤ f_i**, using the monotonicities of the DFs.
Again using the monotonicities of the DFs for j ≠ i, with n_j ≥ 1 or n_j ≤ 1 and
f_j* ≤ f_j(x), f_j(x*) ≤ f_j**,

    ((f_j(x) − f_j*) / (f_j** − f_j*))^(n_j) ≤ ((f_j(x*) − f_j*) / (f_j** − f_j*))^(n_j)        (3.5)

Inequality (3.4) ⟺ f_i(x) − f_i* < f_i(x*) − f_i* , as f_i** − f_i* > 0
                 ⟺ f_i(x) < f_i(x*) , for i ∈ {1, 2, ..., k}        (3.6)

Similarly,
Inequality (3.5) ⟺ f_j(x) − f_j* ≤ f_j(x*) − f_j* , as f_j** − f_j* > 0
                 ⟺ f_j(x) ≤ f_j(x*) , for j ∈ {1, 2, ..., k} ; j ≠ i        (3.7)

The proposition holds, as Inequalities 3.6 and 3.7 together form the condition for a
Pareto-optimal solution of P1.
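Theorem 3.1 rests on the strict monotonicity of the DFs between their bounds. A quick numerical spot-check (illustrative only, with assumed bounds [0, 4] and exponents 5 and 0.1) confirms that the dominance relation is unchanged by the transformation for values strictly inside (f*, f**):

```python
import random

def df(f, lo, hi, n):
    """Strictly increasing part of Eq. 3.2/3.3 on (lo, hi)."""
    return ((f - lo) / (hi - lo)) ** n

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse everywhere, better somewhere."""
    return all(p <= q for p, q in zip(a, b)) and any(p < q for p, q in zip(a, b))

random.seed(0)
ns = [5.0, 0.1]   # one convex and one concave DF (assumed exponents)
for _ in range(1000):
    fa = [random.uniform(0.01, 3.99) for _ in range(2)]
    fb = [random.uniform(0.01, 3.99) for _ in range(2)]
    ma = [df(f, 0.0, 4.0, n) for f, n in zip(fa, ns)]
    mb = [df(f, 0.0, 4.0, n) for f, n in zip(fb, ns)]
    assert dominates(fa, fb) == dominates(ma, mb)
print("dominance preserved on all sampled pairs")
```

This is a sanity check, not a substitute for the proof; it merely illustrates why nondominated sets of P2 remain nondominated in P1.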
3.3.1 Detailed Procedure of the Methodology: MOEA-IDFA
The methodology consists of five steps. Step 0 is an initialization step. Steps 1 and 2
constitute the calculation phase and Steps 3 and 4 the decision-making phase. Figure
1.8 shows the procedure of MOEA-IDFA. In the calculation phase, the DFs are
constructed, and then an optimization model of DFs is solved using an MOEA. In the
decision-making phase, the DM evaluates the results of the calculation phase, and then
articulates his/her preference information. More specifically, if the DM is satisfied with
the results on all the objectives, the procedure successfully ends. Otherwise, the DM
adjusts the parameters (shape and bound) of a DF. Then, the procedure goes back to the
calculation phase. Each step is described below.
Step 0: Initialization of DF's Parameters
The DF's parameters (preference parameters on each objective) are to be initialized, to
construct the DF for each objective in the first iteration. Ideal and anti-ideal values
f_i* and f_i** of the corresponding ith objective function f_i should be calculated in
advance for the given MOOP (Equation P1). The initial bound and shape may be determined
based on the DM's subjective judgments.
Step 1: Construction of the DFs
The calculation phase starts from this step. As mentioned earlier, the preference
parameters initialized in Step 0 are used to construct the nonlinear (convex or concave)
DF μ_i for each f_i of P1. The DM's preference can be utilized in the construction of the
DF via the value of n_i (a weighting factor that determines the shape of the DF). Thus, a
new DFMOOP (see Equation P2) is formed.
Step 2: Solving the DFMOOP
This newly formed P2 is then solved using an efficient MOEA. Since we are discussing a
general MOOP, which can be nonlinear, nonconvex, multimodal and multivariable, the
resulting DFMOOP will be of the same nature. We therefore need a powerful algorithm to
solve this optimization problem. NSGA-II and MOPSO-CD, used here, are two such competent
techniques capable of solving complex MOOPs.
Step 3: Evaluation of the solution
Present the solution obtained in Step 2 to the DM. Theorems 3.1 and 3.2 together imply
that a Pareto-optimal solution obtained by solving P2 is also a Pareto-optimal solution
of P1.
However, the POF of P2 is different from that of P1 due to the different function
formulation. Therefore, they produce a section of the POF of P1, guided according to the
choice of the DFs. For example, for a bi-objective minimization problem, using a convex
and a concave DF produces biasness towards one objective. If the DM is satisfied by the
Pareto-optimal solutions, the methodology is terminated. Else, the procedure goes to
Step 4.
Step 4: Adjusting the Preference Parameters
In order to improve unsatisfactory results, interaction with the DM can help in
modifying (updating) Steps 0 and 1. On the other hand, if the DM is fully satisfied by
the obtained guided POF, the procedure ends successfully. The whole process may be
repeated until the DM is satisfied.
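Steps 0-4 above can be sketched as a driver loop. This is a structural sketch only: `make_df`, `run_moea`, `dm_satisfied` and `adjust` are hypothetical placeholders standing in for the DF construction, the NSGA-II/MOPSO-CD run and the DM interaction; none of them is thesis code.

```python
def make_df(p):
    """Build a one-sided DF (Eq. 3.1) from a (lower, upper, exponent) triple."""
    lo, hi, n = p
    return lambda f: 0.0 if f <= lo else 1.0 if f > hi else ((f - lo) / (hi - lo)) ** n

def moea_idfa(objectives, params, run_moea, dm_satisfied, adjust, max_iter=10):
    front = []                                   # Step 0: params already initialized
    for _ in range(max_iter):
        dfs = [make_df(p) for p in params]       # Step 1: construct the DFs
        front = run_moea(objectives, dfs)        # Step 2: solve the DFMOOP (P2)
        if dm_satisfied(front):                  # Step 3: DM evaluates the guided POF
            break
        params = adjust(params)                  # Step 4: adjust preference parameters
    return front

# Toy run: a dummy "MOEA" that just evaluates the DFs on a grid of SCH1 points,
# and a DM that accepts the first front.
sch1 = lambda x: [x ** 2, (x - 2.0) ** 2]
dummy_moea = lambda obj, dfs: [[d(f) for d, f in zip(dfs, obj(0.1 * i))] for i in range(21)]
front = moea_idfa(sch1, [(0.0, 4.0, 5.0), (0.0, 4.0, 0.1)],
                  dummy_moea, lambda fr: True, lambda p: p)
print(len(front))
```

In a real run, `run_moea` would be NSGA-II or MOPSO-CD and `dm_satisfied`/`adjust` would be genuine DM interaction rather than fixed callables.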
3.4 Results and Discussion
The approach proposed in this chapter has been applied to a set of ten standard test
problems (SCH1, SCH2, KUR, ZDT1, ZDT2, ZDT3, TNK, VNT, MHHM1 and
MHHM2). As discussed earlier in Section 3.2, the convex-concave combination of
DFs is used as a priori, while NSGA-II/MOPSO-CD is applied as a posteriori in the
proposed approach.
As described in Step 0 of Subsection 3.3.1, the parameters of the a priori method,
i.e., the nonlinear (convex or concave) DFs, are obtained with the help of the DM
corresponding to each objective. The initial DF parameters used for all test problems
are given in Tables 3.1(a)-3.10(a).
In Step 1, the preference parameters initialized in Step 0 are used in the construction
of the convex or concave DFs (μ_i's) for each f_i with the help of the DM, forming the
DFMOOP (i.e., P2). This newly formed P2 is solved using an a posteriori method, as
shown in Step 2. Since our methodology depends on the MOEA used, we have used both
NSGA-II and MOPSO-CD as the a posteriori method to estimate the effectiveness of the
approach with different MOEAs. The values of different sets of parameters were tested
and fine-tuned through several runs of the MOEA (NSGA-II/MOPSO-CD) methods used in this
approach. The NSGA-II and MOPSO-CD parameters used for each problem are given in
Tables 3.1(b)-3.10(b). In Step 2, computations have been carried out based on these
parameters and the results have been reported. To compare the efficiency of NSGA-II and
MOPSO-CD, the hypervolume metric is used here, for which the reference points are taken
as (11, 11) for bi-objective problems and (11, 11, 11) for tri-objective problems.
Tables 3.1(c)-3.10(c) display the mean, median, standard deviation, best and worst
values of the hypervolume metric obtained using NSGA-II and MOPSO-CD over 10 runs each.
For the stability of the results, for each problem 10 runs of each MOEA (NSGA-II and
MOPSO-CD) are done under the same parameters and the best POF (based on the best
hypervolume metric) obtained is reported.
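For reference, the bi-objective hypervolume used in these comparisons can be computed with a simple sweep. This is a minimal sketch with the reference point (11, 11); it is not the exact implementation used to produce the tables.

```python
def hypervolume_2d(front, ref=(11.0, 11.0)):
    """Area dominated by a minimization front, bounded by the reference point."""
    pts = sorted(set(front))            # ascending in f1
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:                  # dominated points fail the f2 < prev_f2 test
        if f2 < prev_f2 and f1 < ref[0]:
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

print(hypervolume_2d([(0.0, 4.0), (1.0, 1.0), (4.0, 0.0)]))  # prints 114.0
```

Larger hypervolume means the front covers more of the dominated region, which is why the mean hypervolume over 10 runs is used as the competence parameter below.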
The POFs of the problems are depicted in Figures 3.3-3.12, which clearly imply that
use of convex-concave combination of DF guides the POF towards one particular end
of the POF.
A look at Tables 3.1(c)-3.10(c) reveals that if we consider the mean of the
hypervolume metric as the competence parameter for both NSGA-II and MOPSO-CD,
then NSGA-II performed better in the cases of KUR, ZDT1, ZDT2, VNT, MHHM1 and
MHHM2, while MOPSO-CD was better in the cases of SCH1, SCH2, ZDT3 and TNK for
the parameter settings used. If the DM is satisfied by the POF obtained, then the
procedure ends successfully, as explained in Step 4. We show just one iteration of the
approach here, as we assume that the DM is satisfied by the results obtained in Step 3.
However, if the DM is not satisfied, then the procedure goes back to Step 1 to
incorporate the DM's preferences in a better manner.
84
3.4.1 Effect of Variations in DF's Key Parameter on POF
If the DM is unsatisfied by the outcome of Step 4 of the proposed approach, then the
analyst calls for a variation in the DF. Thus, it is important to understand the effect
of the parameters of the DF on the POF. It is obvious from the results and discussion in
Section 3.4 that use of the convex-concave combination of DFs produces Pareto-optimal
solutions biased towards the objective having the concave DF. To investigate the effect
of the DF's key parameter further, we take SCH1 as our problem and NSGA-II as the MOEA,
with the same a posteriori settings as used in Table 3.1(b). As the key parameter in the
case of a convex or concave DF μ_i is n_i, we use the rest of the parameters as taken in
Table 3.1(a) and vary n to observe its effect. Different combinations of n have yielded
different portions of the POF of SCH1, as shown in Figure 3.13. The key parameters n1
and n2 used for Figure 3.13(a) have values 5 and 0.1 respectively. Similarly, the values
of n1 and n2 used for Figure 3.13(b) are taken as 10 and 0.01 respectively. Figure
3.13(c) depicts the POF when n1 and n2 are taken as 15 and 0.001 respectively, and 25
and 0.0001 are the values used for n1 and n2 respectively in Figure 3.13(d). It is clear
from Figure 3.13 that an increment in the value of n1 with a simultaneous decrement in
the value of n2 contracts the POF further towards the biased portion.
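The contraction seen in Figure 3.13 can be rationalized through the inverse of Equation 3.1: an MOEA spreads solutions roughly evenly in μ-space, and mapping a uniform μ-grid back through the DF crowds the corresponding f-values towards one end as n moves away from 1. The sketch below is illustrative only; the bounds [0, 4] follow SCH1's objective ranges.

```python
def inv_df(mu, lo, hi, n):
    """Inverse of the DF of Eq. 3.1 on (0, 1): the f-value whose desirability is mu."""
    return lo + (hi - lo) * mu ** (1.0 / n)

grid = [0.1 * i for i in range(1, 10)]               # a uniform mu-grid an MOEA might return
f1_n5 = [inv_df(m, 0.0, 4.0, 5.0) for m in grid]     # n1 = 5
f1_n25 = [inv_df(m, 0.0, 4.0, 25.0) for m in grid]   # n1 = 25: crowds closer to f1** = 4
print(min(f1_n5), min(f1_n25))
```

With n1 = 25 even μ = 0.1 maps back to an f1 near 4, so the retained portion of the front is narrower than with n1 = 5; decreasing n2 compresses the f2-range symmetrically towards its lower bound.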
3.5 Conclusion
In this chapter, a partial user-preference approach named MOEA-IDFA having nonlinear
DFs has been proposed. In this approach, the nonlinear (convex-concave combination of)
DFA as a priori and NSGA-II/MOPSO-CD as a posteriori are combined together to provide an
interactive POF. The theoretical analysis of the proposed MOEA-IDFA is also provided in
this chapter. It has been observed that use of a convex DF together with a concave DF
produces a definite bias among the Pareto-optimal solutions of the original MOOP. This
type of approach is useful if the DM prefers a particular objective over another. The
performance of MOEA-IDFA is tested on a set of 10 test problems. A similar approach to
produce a bias in the POF, proposed by Branke et al. (2001), reported results on convex
POFs only. The present approach is better than the method proposed by Branke et al.
(2001) in terms of its ability to produce biasness in disconnected and non-convex POFs
as well. To the author's best knowledge, there is no work to date in the literature that
addresses biasness among tri-objective problems. In this chapter, we successfully apply
MOEA-IDFA using the convex-concave combination of DFs to three standard tri-objective
problems (VNT, MHHM1 and MHHM2).
Table 3.1 Parameters and Hypervolumes for SCH1 using MOEA-IDFA
(Convex-Concave Combination)

Table 3.1 (a) A Priori Parameters for SCH1
f1* = 0, f1** = 4, f2* = 0, f2** = 4, n1 = 2, n2 = 0.001

Table 3.1 (b) A Posteriori Parameters for SCH1
Common:    Pop Size 200, Max. Gen. 150, Mutation Prob. 0.5
MOPSO-CD:  Arch. Size 200, w 0.5, c1 1, c2 1
NSGA-II:   Cross. Prob. 0.8, Cross. Index 5, Mut. Index 5

Table 3.1 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for SCH1
            Mean        Median      S.D.      Best        Worst
MOPSO-CD    392.62053   392.62060   0.00024   392.62087   392.62011
NSGA-II     392.30585   392.30585   0.00001   392.30586   392.30583

Figure 3.3 POFs of SCH1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Convex-Concave Combination)
Table 3.2 Parameters and Hypervolumes for SCH2 using MOEA-IDFA
(Convex-Concave Combination)

Table 3.2 (a) A Priori Parameters for SCH2
f1* = -1, f1** = 1, f2* = 0, f2** = 16, n1 = 10, n2 = 0.001

Table 3.2 (b) A Posteriori Parameters for SCH2
Common:    Pop Size 200, Max. Gen. 150, Mutation Prob. 0.4
MOPSO-CD:  Arch. Size 200, w 0.4, c1 1, c2 1
NSGA-II:   Cross. Prob. 0.9, Cross. Index 10, Mut. Index 10

Table 3.2 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for SCH2
            Mean        Median      S.D.      Best        Worst
MOPSO-CD    403.58635   403.58636   0.00005   403.58643   403.58622
NSGA-II     402.90365   402.90364   0.00002   402.90368   402.90362

Figure 3.4 POFs of SCH2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Convex-Concave Combination)
Table 3.3 Parameters and Hypervolumes for KUR using MOEA-IDFA
(Convex-Concave Combination)

Table 3.3 (a) A Priori Parameters for KUR
f1* = -20, f1** = -14.4, f2* = -11.6, f2** = 0, n1 = 10, n2 = 0.002

Table 3.3 (b) A Posteriori Parameters for KUR
Common:    Pop Size 200, Max. Gen. 150, Mutation Prob. 0.3
MOPSO-CD:  Arch. Size 200, w 0.5, c1 1, c2 1
NSGA-II:   Cross. Prob. 0.8, Cross. Index 10, Mut. Index 5

Table 3.3 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for KUR
            Mean         Median       S.D.      Best         Worst
MOPSO-CD    1219.10046   1219.10043   0.00025   1219.10086   1219.10005
NSGA-II     1223.10006   1223.10006   0.00002   1223.10008   1223.10002

Figure 3.5 POFs of KUR w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Convex-Concave Combination)
Table 3.4 Parameters and Hypervolumes for ZDT1 using MOEA-IDFA
(Convex-Concave Combination)

Table 3.4 (a) A Priori Parameters for ZDT1
f1* = 0, f1** = 1, f2* = 0, f2** = 1, n1 = 12, n2 = 0.01

Table 3.4 (b) A Posteriori Parameters for ZDT1
Common:    Pop Size 200, Max. Gen. 300, Mutation Prob. 0.1
MOPSO-CD:  Arch. Size 200, w 0.6, c1 1.1, c2 1.1
NSGA-II:   Cross. Prob. 0.8, Cross. Index 15, Mut. Index 15

Table 3.4 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for ZDT1
            Mean        Median      S.D.      Best        Worst
MOPSO-CD    399.53662   399.53655   0.00024   399.53698   399.53625
NSGA-II     399.63846   399.63846   0.00002   399.63846   399.63842

Figure 3.6 POFs of ZDT1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Convex-Concave Combination)
Table 3.5 Parameters and Hypervolumes for ZDT2 using MOEA-IDFA
(Convex-Concave Combination)

Table 3.5 (a) A Priori Parameters for ZDT2
f1* = 0, f1** = 1, f2* = 0, f2** = 1, n1 = 20, n2 = 0.001

Table 3.5 (b) A Posteriori Parameters for ZDT2
Common:    Pop Size 200, Max. Gen. 350, Mutation Prob. 0.2
MOPSO-CD:  Arch. Size 200, w 0.5, c1 1.1, c2 1.1
NSGA-II:   Cross. Prob. 0.8, Cross. Index 15, Mut. Index 15

Table 3.5 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for ZDT2
            Mean        Median      S.D.      Best        Worst
MOPSO-CD    399.28417   399.28372   0.00163   399.28878   399.28344
NSGA-II     399.29071   399.29076   0.00016   399.29078   399.29024

Figure 3.7 POFs of ZDT2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Convex-Concave Combination)
Table 3.6 Parameters and Hypervolumes for ZDT3 using MOEA-IDFA
(Convex-Concave Combination)

Table 3.6 (a) A Priori Parameters for ZDT3
f1* = 0, f1** = 0.852, f2* = -0.773, f2** = 1, n1 = 10, n2 = 0.001

Table 3.6 (b) A Posteriori Parameters for ZDT3
Common:    Pop Size 200, Max. Gen. 350, Mutation Prob. 0.1
MOPSO-CD:  Arch. Size 200, w 0.6, c1 1.1, c2 1.1
NSGA-II:   Cross. Prob. 0.8, Cross. Index 15, Mut. Index 15

Table 3.6 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for ZDT3
            Mean        Median      S.D.      Best        Worst
MOPSO-CD    414.65639   414.65627   0.00023   414.65688   414.65620
NSGA-II     414.58246   414.58246   0.00003   414.58249   414.58242

Figure 3.8 POFs of ZDT3 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Convex-Concave Combination)
Table 3.7 Parameters and Hypervolumes for TNK using MOEA-IDFA
(Convex-Concave Combination)

Table 3.7 (a) A Priori Parameters for TNK
f1* = 0, f1** = 1.05, f2* = 0, f2** = 1.05, n1 = 10, n2 = 0.1

Table 3.7 (b) A Posteriori Parameters for TNK
Common:    Pop Size 200, Max. Gen. 200, Mutation Prob. 0.5
MOPSO-CD:  Arch. Size 200, w 0.5, c1 1, c2 1
NSGA-II:   Cross. Prob. 0.8, Cross. Index 10, Mut. Index 10

Table 3.7 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for TNK
            Mean        Median      S.D.      Best        Worst
MOPSO-CD    397.29493   397.29492   0.00002   397.29495   397.29491
NSGA-II     397.04872   397.04882   0.00025   397.04886   397.04802

Figure 3.9 POFs of TNK w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Convex-Concave Combination)
Table 3.8 Parameters and Hypervolumes for VNT using MOEA-IDFA
(Convex-Concave Combination)

Table 3.8 (a) A Priori Parameters for VNT
f1* = 0, f1** = 4, f2* = 1, f2** = 5, f3* = 2, f3** = 4, n1 = 5, n2 = 0.1, n3 = 0.1

Table 3.8 (b) A Posteriori Parameters for VNT
Common:    Pop Size 200, Max. Gen. 250, Mutation Prob. 0.6
MOPSO-CD:  Arch. Size 200, w 0.4, c1 1, c2 1
NSGA-II:   Cross. Prob. 0.8, Cross. Index 10, Mut. Index 10

Table 3.8 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for VNT
            Mean         Median        S.D.      Best          Worst
MOPSO-CD    6069.00002   6069.00001    0.00001   6069.00005    6069.00001
NSGA-II     6080.00004   6080.000045   0.00002   6080.000074   6080.00000

Figure 3.10 POFs of VNT w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Convex-Concave Combination)
Table 3.9 Parameters and Hypervolumes for MHHM1 using MOEA-IDFA
(Convex-Concave Combination)

Table 3.9 (a) A Priori Parameters for MHHM1
f1* = 0, f1** = 0.01, f2* = 0, f2** = 0.0025, f3* = 0, f3** = 0.01, n1 = 5, n2 = 5, n3 = 0.1

Table 3.9 (b) A Posteriori Parameters for MHHM1
Common:    Pop Size 200, Max. Gen. 250, Mutation Prob. 0.5
MOPSO-CD:  Arch. Size 200, w 0.5, c1 1, c2 1
NSGA-II:   Cross. Prob. 0.9, Cross. Index 15, Mut. Index 15

Table 3.9 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for MHHM1
            Mean         Median       S.D.      Best         Worst
MOPSO-CD    7999.00452   7999.00344   0.00326   7999.00990   7999.00132
NSGA-II     7999.00574   7999.00567   0.00178   7999.00866   7999.00267

Figure 3.11 POFs of MHHM1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Convex-Concave Combination)
Table 3.10 Parameters and Hypervolumes for MHHM2 using MOEA-IDFA
(Convex-Concave Combination)

Table 3.10 (a) A Priori Parameters for MHHM2
f1* = 0, f1** = 0.0125, f2* = 0, f2** = 0.0125, f3* = 0, f3** = 0.0125, n1 = 5, n2 = 5, n3 = 0.1

Table 3.10 (b) A Posteriori Parameters for MHHM2
Common:    Pop Size 200, Max. Gen. 300, Mutation Prob. 0.4
MOPSO-CD:  Arch. Size 200, w 0.5, c1 1, c2 1
NSGA-II:   Cross. Prob. 0.8, Cross. Index 10, Mut. Index 10

Table 3.10 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for MHHM2
            Mean         Median        S.D.      Best         Worst
MOPSO-CD    7995.00005   7995.00006    0.00004   7995.00009   7995.00001
NSGA-II     7995.00007   7995.000078   0.00002   7995.00009   7995.00003

Figure 3.12 POFs of MHHM2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(Convex-Concave Combination)
Figure 3.13 Effect of variations in the key parameter of the DF on the POF for SCH1
CHAPTER 4
Guided POF Articulating Nonlinear DFA with an MOEA (All
Sigmoidal Combination)
In this chapter, we articulate the sigmoidal DF as a priori with an MOEA
(NSGA-II/MOPSO-CD) as a posteriori, and demonstrate how, instead of one solution, a
preferred set of solutions near the desired portion of the DM's interest can be found.
A new type of sigmoidal DF is proposed in this chapter that can guide the Pareto-optimal
solutions close to the intermediate region of the original POF. The consequence of the
sigmoidal DFA as a priori is analyzed theoretically as well as numerically (on ten
different standard test problems).
4.1 Introduction
If a single solution is to be selected in a MOOP, at some point during the process, the
DM has to reveal his/her preferences. Specifying these preferences a priori, i.e., before
alternatives. are known, often means to ask too much of the DM. On the other hand,
searching for all nondominated solutions as most MOEA do may result in a waste of
optimization efforts to find solutions that are clearly unacceptable to the DM. This
chapter introduces an intermediate approach, that asks for partial preference
information from the DM a priori; and then focus the search to those regions of the
Pareto optimal front that seem most interesting to the DM. That way, it is possible to
provide a larger number of relevant solutions. It seems intuitive that this should also
allow to reduce the computation time, although this aspect has explicitly only been
shown in Branke and Deb (2005) and Thiele et al. (2007).
In general, solution methods for MOOPs can be classified according to the stage at which
the preferences of the DM are integrated, as discussed earlier in Section 1.1 of Chapter
1. A priori optimization methods, e.g. the desirability index (Harrington, 1965),
utilize the DM's knowledge, which has to be specified upfront. These techniques generate
a single point solution only. A posteriori methods, however, create a set of solutions
(i.e., Pareto-optimal solutions) or Pareto set. Most MOEA approaches can be classified
as a posteriori. They attempt to discover the whole set of Pareto-optimal solutions or
at least a well-distributed set of representatives. The DM then looks at the (possibly
very large) set of generated alternatives and makes the final decision based on his/her
preferences. However, if the Pareto-optimal solutions are too many, analyzing them to
reach the final decision is quite a challenging and burdensome process for the DM. In
addition, in a particular problem, the user may not be interested in the complete Pareto
set; instead, the user may be interested in a certain region/regions of the Pareto set.
Finding a preferred distribution in the region of interest is more practical and less
subjective than finding one biased solution in the region of interest.
Although it is usually tougher for a DM to completely specify his or her
preferences before any alternatives are known, the DM often has a rough idea of the
preferential goals towards which the search should be directed, so that he/she may be
able to articulate vague, linguistic degrees of importance or give reasonable trade-offs
between the different objectives. Such information should be integrated into the MOEA
to bias the search towards solutions that are preferential for the DM. This would in
principle yield two important advantages:
(1) Instead of a diverse set of solutions, many of which are irrelevant to the DM, a
search biased towards the DM's preferences will yield a more fine-grained, and thus more
suitable, selection of alternatives;
(2) By focusing the search onto the relevant part of the search space, the optimization
algorithm is expected to find these solutions more quickly.
To achieve this task, a methodology is proposed in this chapter. In most practical
cases, the DM has at least a vague idea in which region/regions of the objective space
the 'optimums' should be. For this purpose, DFs that map the objectives to
the interval [0, 1] according to their desired values can be specified as a priori to state a
relative preference of the objectives. The Pareto-optimal solutions are then determined
for the DFs instead of the objective functions using an MOEA. Thus, the proposed
method is a fusion of an a priori and an a posteriori approach.
Furthermore, the application of this approach in practice is quite straightforward. In addition, in
spite of the focus on the desired region/regions, the MOEA search can be carried out
without any additional constraints to be included for the decision variables and/or
objectives. Therefore, no individuals are 'wasted' during the optimization process in
case they do not fit into the constrained intervals. The MOEA itself is not touched at
all, which facilitates the use of the method in practice, as existing optimization tools
can be used for the optimization process as a posteriori. In other words, this approach is
independent of the MOEA used and can be easily coupled to any of them without any
deep modification to the main structure of the method chosen. For comparison purposes,
we use two of them, namely NSGA-II and MOPSO-CD.
Once the DM has agreed on the parameters of the DFs, which is the key step,
MOEAs can be applied on the transformed objective space (i.e., DFMOOP) without
modification (Trautmann and Mehnen, 2009). By means of DFs, the solutions
concentrate in the desired region/regions, which facilitates the solution selection process
and supports the MOEAs in finding relevant solutions. If the DM has single/multiple
region/regions as preference(s) for an objective, the DF should be modified accordingly.
For example, if the DM is interested in finding the intermediate portion of the POF of a
bi-objective minimization problem, the DFs corresponding to the objectives can be
modified using sigmoidal DFs. We propose a new kind of DF based on the sigmoidal
function and discuss the consequences of using sigmoidal DFs in this chapter.
Jeong and Kim (2009) proposed an interactive DF approach (IDFA) for
multiresponse optimization to facilitate the preference articulation process. One run of
this approach provides just one solution to the DM based on his preference in the form
of a DF. The present work is an extension of the work done by Jeong and Kim (2009). We
present an MOEA-based IDFA, i.e., MOEA-IDFA, to provide the DM a preferred portion of
the POF rather than just a single solution.
Section 4.2 describes the prerequisites of sigmoidal DFA as a priori. The
methodology of the proposed approach is provided in Section 4.3. Necessary theorems
are also provided in this section in order to analyze the methodology theoretically.
Results corresponding to the ten test problems (Section 2.5) from both MOEAs
(NSGA-II and MOPSO-CD) are compared and discussed in Section 4.4. Finally,
concluding observations are drawn in Section 4.5.
4.2 Nonlinear (Sigmoidal) DFA as a Priori
If a DM has some preference (i.e., biasness) towards one or more objectives, a
nonlinear DF may fulfill the purpose. Different shapes of DF will provide different type
of biasness towards different objective region of the POF. The form of the DF
originally proposed by Harrington (1965) was based on the exponential function of a
linear transformation of the f_i's. Derringer and Suich (1980) found the specification
not very flexible in the sense that the DFs cannot assume a variety of shapes. They
introduced a modified (alternative) version of the DF for both the one-sided and
two-sided cases, based on a power of a linear transformation of the f_i's. In this chapter
we propose a new kind of DF based on the sigmoidal function, defined as

    μ_i = 0                                                    if f_i ≤ f_i*
    μ_i = 1 / (1 + e^(−a_i (f_i − (f_i* + f_i**)/2)))          if f_i* < f_i ≤ f_i** ;  i = 1, 2, ..., k ;  a_i ∈ R+        (4.1)
    μ_i = 1                                                    if f_i** < f_i

Figure 4.1 Proposed STB type of Sigmoidal DF

The one-sided case (STB type) formulation of a sigmoidal DF is given in Equation 4.1, as
we are concerned with the one-sided specification shown in Figure 4.1, where parameters
f_i* and f_i** are the minimal and maximal acceptable levels of f_i respectively. The
DFs μ_i's are on the same scale and are discontinuous at the points f_i* and f_i**. The
value of a_i (a kind of weighting factor) can be chosen so that the DF is easier or more
difficult to satisfy. Here, (f_i* + f_i**)/2 is the crossover point of the sigmoidal DF,
shown in Figure 4.1. In the case of the sigmoidal DF, a_i is the key parameter.
The linear DF and the combination of convex-concave DFs have already been analyzed
in Chapters 2 and 3 respectively. Use of the linear DF does not provide biasness
towards objectives. The combination of convex and concave DFs provides biasness
towards a corner of the POF. In this chapter, we discuss the effect of the
sigmoidal DFA for MOOP.
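Equation 4.1 can be sketched directly. This is an illustrative sketch only; the bounds [0, 4] and the steepness a = 5 are assumptions chosen for the example.

```python
import math

def sigmoid_df(f, lo, hi, a):
    """Proposed STB-type sigmoidal DF (Eq. 4.1), crossover at (lo + hi) / 2."""
    if f <= lo:
        return 0.0
    if f > hi:
        return 1.0
    return 1.0 / (1.0 + math.exp(-a * (f - (lo + hi) / 2.0)))

mid = (0.0 + 4.0) / 2.0
print(sigmoid_df(mid, 0.0, 4.0, 5.0))   # exactly 0.5 at the crossover point
```

Larger a makes the transition around the crossover point steeper, which concentrates the discrimination of the DF (and hence the guided front) near the intermediate region of the objective's range.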
4.3 Proposed Methodology
The MOOP is already described in Section 1.1, given by Equation P1:

    Minimize_{x ∈ X}  f(x) = {f_1(x), f_2(x), ..., f_k(x)}        (P1)

Eliciting the DF corresponding to each objective (i.e., f_1, f_2, ..., f_k) through
interaction with the DM, a new DF-based multi-objective optimization problem (DFMOOP)
consisting of the DFs is formulated, given by Equation P2:

    Minimize_{x ∈ X}  μ(x) = {μ_1(x), μ_2(x), ..., μ_k(x)}        (P2)

where, for each value of an objective function f_i, there is a mapping called the DF,
i.e., μ_i, to prescribe the variation of the vagueness involved, as discussed in
Section 4.2. The overall (resulting) DF's value (say μ) also lies between zero and one.
The Pareto-optimal solutions are then determined for this newly formed DFMOOP
instead of the original MOOP. Solutions of the DFMOOP have a unique relationship with
the original MOOP of objective functions. In general, the DFMOOP is solved using
different aggregators, the min and product operators being the most common, providing a
single solution of the DFMOOP. This type of approach is applied repeatedly for different
degrees of satisfaction until the DM is satisfied. The benefit of this technique lies in
how well the DM's preferences have been incorporated in the a priori approach, which is
quite rare due to the vague nature of human judgment. So, in the present approach, the
DFMOOP is solved in a purely multi-objective manner (using the discussed algorithms,
i.e., NSGA-II and MOPSO-CD) without aggregating. The present approach is an attempt to
incorporate the benefits of both the a priori and the a posteriori methods together.
Now we are going to prove a theorem, which establishes the relationship between the
Pareto-optimal solutions of P2 and P1.
Theorem 4.1: The Pareto-optimal solutions of P2 corresponding the DF in Equation
4.1 are also Pareto-optimal solutions of P1.
Proof:
Let x* be a Pareto-optimal solution of P2. Then, by the definition of a Pareto-optimal
solution, there is no x ∈ X such that

μ_i(x) < μ_i(x*) for some i ∈ {1, 2, ..., k},
and μ_j(x) ≤ μ_j(x*) for all j ∈ {1, 2, ..., k}, j ≠ i.

Now, with the sigmoidal DF of Equation 4.1,

μ_i(x) < μ_i(x*)
⇔ [1 + e^(-a_i(f_i(x) - (f_i* + f_i**)/2))]^(-1) < [1 + e^(-a_i(f_i(x*) - (f_i* + f_i**)/2))]^(-1); a_i ∈ R+
⇔ e^(-a_i(f_i(x) - (f_i* + f_i**)/2)) > e^(-a_i(f_i(x*) - (f_i* + f_i**)/2)),
   as a_i ∈ R+ and f_i* ≤ f_i(x), f_i(x*) ≤ f_i**
⇔ f_i(x) < f_i(x*); for i ∈ {1, 2, ..., k}                                    (4.2)

Similarly, for j ≠ i,

μ_j(x) ≤ μ_j(x*)
⇔ e^(-a_j(f_j(x) - (f_j* + f_j**)/2)) ≥ e^(-a_j(f_j(x*) - (f_j* + f_j**)/2)); a_j ∈ R+
⇔ f_j(x) ≤ f_j(x*); for j ≠ i                                                 (4.3)

The proposition holds, as Inequalities 4.2 and 4.3 together form the condition for a
Pareto-optimal solution of P1.
104
4.3.1 Detailed Procedure of the Methodology: MOEA-IDFA
The methodology consists of five steps. Step 0 is an initialization step. Steps 1 and 2
constitute the calculation phase, and Steps 3 and 4 the decision-making phase. Figure
1.8 shows the procedure of MOEA-IDFA. In the calculation phase, the DFs are
constructed, and then an optimization model of the DFs is solved using an MOEA. In the
decision-making phase, the DM evaluates the results of the calculation phase and then
articulates his/her preference information. More specifically, if the DM is satisfied with
the results on all the objectives, the procedure ends successfully. Otherwise, the DM
adjusts the parameters (shape and bounds) of a DF, and the procedure goes back to the
calculation phase. Each step is described below.
Step 0: Initialization of DF's Parameters
The DF parameters (preference parameters on each objective) are initialized to
construct the DF for each objective in the first iteration. The initial bounds and goal
(target) may be determined based on the DM's subjective judgment. The ideal and
anti-ideal values, f_i* and f_i**, of the corresponding objective function f_i should be
calculated in advance for the given MOOP (Equation P1).
Step 1: Construction of the DFs
The calculation phase starts from this step. As mentioned earlier, the preference parameters
initialized in Step 0 are used to construct the nonlinear (convex or concave) DF μ_i for
each f_i of P1. The DM's preference can be incorporated in the construction of the DF via
the value of a_i (a kind of weighting factor that determines the shape of the DF). Thus, a
new DFMOOP (see Equation P2) is formed.
Step 2: Solving the DFMOOP
The newly formed P2 is then solved using an efficient MOEA. Since we are discussing a
general MOOP, which can be nonlinear, nonconvex, multimodal and multivariable, the
resulting DFMOOP will also be of the same nature. We therefore need a powerful
105
algorithm to solve this optimization problem. NSGA-II and MOPSO-CD, used here, are
two such competent techniques capable of solving complex MOOPs.
Step 3: Evaluation of the Solution
Present the solution obtained in Step 2 to the DM. Theorem 4.1 implies that the
Pareto-optimal solutions obtained by solving P2 are also Pareto-optimal solutions of P1.
However, the POF of P2 is different from that of P1 due to the different function
formulation. Therefore, it produces a section of the POF of P1, guided according to the
choice of the DF. For example, for a bi-objective minimization problem, using a sigmoidal
DF produces a bias towards the intermediate region of the POF. If the DM is satisfied
with the Pareto-optimal solutions, the methodology terminates. Otherwise, the procedure
goes to Step 4.
Step 4: Adjusting the Preference Parameters
To improve unsatisfactory results, interaction with the DM can help modify (update)
Steps 0 and 1. On the other hand, if the DM is fully satisfied with the obtained guided
POF, the procedure ends successfully. The whole process may be repeated until the DM
is satisfied.
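The five steps above can be sketched as a loop. The `build_dfs` helper, the grid-based stand-in for the MOEA, and the DM callables are all hypothetical names introduced for illustration; a real run would plug in NSGA-II or MOPSO-CD and actual DM interaction.

```python
import math

def build_dfs(objectives, params):
    """Step 1: wrap each objective f_i in a DF mu_i (sigmoidal form assumed)."""
    def make(f, lo, hi, a):
        return lambda x: 1.0 / (1.0 + math.exp(-a * (f(x) - (lo + hi) / 2.0)))
    return [make(f, *p) for f, p in zip(objectives, params)]

def moea_idfa(objectives, params, solve_moea, dm_satisfied, dm_adjust,
              max_iters=10):
    """Skeleton of the MOEA-IDFA loop (Steps 0-4); the MOEA and the DM
    interaction are supplied as callables - a sketch, not the thesis code."""
    front = None
    for _ in range(max_iters):
        dfs = build_dfs(objectives, params)       # Step 1: construct the DFs
        front = solve_moea(dfs)                   # Step 2: solve the DFMOOP (P2)
        if dm_satisfied(front):                   # Step 3: DM evaluates the front
            break
        params = dm_adjust(params)                # Step 4: adjust shape/bounds
    return front

# Toy run on SCH1: the "MOEA" just evaluates the DFs on a grid of x values,
# and the DM accepts the first front offered.
f1 = lambda x: x * x
f2 = lambda x: (x - 2.0) ** 2
grid = [2.0 * i / 50 for i in range(51)]
solve = lambda dfs: [(x, tuple(mu(x) for mu in dfs)) for x in grid]
front = moea_idfa([f1, f2], [(0.0, 4.0, 3.9)] * 2, solve,
                  dm_satisfied=lambda fr: True, dm_adjust=lambda p: p)
print(len(front))  # 51 grid points evaluated in DF space
```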
4.4 Results and Discussion
The approach proposed in this chapter has been applied to the same set of ten standard test
problems used in Chapters 2 and 3. As discussed earlier in Section 4.2, the sigmoidal
combination of DFs is used as the a priori component, while NSGA-II/MOPSO-CD is
applied as the a posteriori component in the proposed approach.
As described in Step 0 of Subsection 4.3.1, the parameters of the a priori method,
i.e., the nonlinear (sigmoidal) DFs, are obtained with the help of the DM corresponding to
each objective. The initial DF parameters used for all test problems are given in Tables
4.1(a)-4.10(a).
In Step 1, the preference parameters initialized in Step 0 are used, with the help of
the DM, to construct the sigmoidal DFs (μ_i's) for each f_i, forming the DFMOOP
(i.e., P2). This newly formed P2 is solved using an a posteriori method as shown in
106
Step 2. Since our methodology depends on the MOEA used, we have used both NSGA-II
and MOPSO-CD as the a posteriori method to estimate the effectiveness of the approach
on different MOEAs. The values of different sets of parameters were tested and fine-tuned
through several runs of NSGA-II and MOPSO-CD. The NSGA-II and
MOPSO-CD parameters used for each problem are given in Tables 4.1(b)-4.10(b). In
Step 2, computations have been carried out based on these parameters and the results have
been reported. To compare the efficiency of NSGA-II and MOPSO-CD, the hypervolume
metric is used here; the reference points are taken as (11, 11) and (11, 11, 11) for
bi-objective and tri-objective problems respectively. Tables 4.1(c)-4.10(c)
display the mean, median, standard deviation, best and worst values of the hypervolume
metric obtained using NSGA-II and MOPSO-CD over 10 runs each. For
the steadiness of the results for each problem, 10 runs of each MOEA (NSGA-II and
MOPSO-CD) are done under the same parameters and the best POF (based on the best
hypervolume metric) obtained is reported. The POFs of the problems are depicted in
Figures 4.2-4.11, which clearly show that use of the all-sigmoidal combination of DFs
guides the POF towards its intermediate region.
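For a bi-objective minimization problem, the hypervolume with respect to a reference point such as (11, 11) can be computed with a simple sweep over the sorted front. The function below is a minimal sketch of that computation; the three-point front is illustrative, not the thesis data.

```python
def hypervolume_2d(front, ref):
    """2-D hypervolume (minimization) of a front w.r.t. a reference point.
    Assumes every point is componentwise below the reference point.
    Sort by f1, then sweep, summing the rectangle each point adds."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        if f2 < prev_f2:                     # skip points dominated in the sweep
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Illustrative front for a problem like SCH1 (not the thesis results)
front = [(0.0, 4.0), (1.0, 1.0), (4.0, 0.0)]
print(hypervolume_2d(front, (11.0, 11.0)))  # → 114.0
```

A larger hypervolume means the front is closer to the ideal region and/or better spread, which is why it is used here to compare NSGA-II and MOPSO-CD.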
A look at Tables 4.1(c)-4.10(c) reveals that, if we consider the mean of the
hypervolume metric as the competence parameter for both NSGA-II and MOPSO-CD,
then NSGA-II performed better in the cases of TNK, MHHM1 and MHHM2, while
MOPSO-CD was better in the cases of SCH1, SCH2, KUR, ZDT1, ZDT2, VNT and
ZDT3 for the parameter settings used. If the DM is satisfied with the POF
obtained, the procedure ends successfully, as explained in Step 4. We show just one
iteration of the approach here, as we assume that for these problems the DM is
satisfied with the results obtained in Step 3. However, if the DM is not satisfied, the
procedure goes back to Step 1 to accommodate the DM's preferences in a better manner.
107
4.4.1 Effect of Variations in the DF's Key Parameter on the POF
If the DM is unsatisfied with the outcome at Step 4 of the proposed approach, the
analyst calls for a variation of the DF. Thus, it is important to understand the effect of
the DF parameters on the POF. It is evident from the results and discussion in Section
4.4 that the use of the sigmoidal combination of DFs yields Pareto-optimal solutions in the
mid portion of the POF. To investigate the effect of the DF's (i.e., μ_i's) key parameter (a_i)
further, we take SCH1 as our problem and NSGA-II as the MOEA, with the
same a posteriori settings as used in Table 4.1(b). As the key parameter of a
sigmoidal DF μ_i is a_i, we keep the rest of the parameters the same as in Table 4.1(a)
and vary a_i to observe its effect. Different combinations of a_i's
yield different portions of the POF of SCH1, as shown in Figure 4.12. The key
parameters a_1 and a_2 used for Figure 4.12(a) both have the value 2. Similarly, the
values of a_1 and a_2 used for Figure 4.12(b) are taken as 3. Figure 4.12(c) depicts the
POF when a_1 and a_2 are taken as 5 each, and 7 is the value used for both a_1 and a_2 in
Figure 4.12(d). It is clear from Figure 4.12 that incrementing the values of the
DF parameters up to a certain level (depending upon the problem) shrinks this mid
portion even more.
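The shrinking effect can be read off the sigmoid itself (form assumed as before): the band of objective values over which the DF is neither near 0 nor near 1 narrows as a_i grows, so selection pressure concentrates on a smaller mid portion. The bounds [0, 4] are illustrative.

```python
import math

def mu(f, lo, hi, a):
    # Sigmoidal DF centred on the acceptable range (form assumed)
    return 1.0 / (1.0 + math.exp(-a * (f - (lo + hi) / 2.0)))

# With bounds [0, 4] the DF is centred at f = 2.  Print the interval of f
# values where 0.1 < mu < 0.9: it narrows as a grows, matching the
# contraction seen across Figures 4.12(a)-(d).
for a in (2, 3, 5, 7):                    # the a_1 = a_2 values of Figure 4.12
    half_width = math.log(9.0) / a        # |f - 2| at which mu reaches 0.9
    print(a, round(2.0 - half_width, 3), round(2.0 + half_width, 3))
```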
4.5 Conclusion
In this chapter, a partial user-preference approach named MOEA-IDFA with nonlinear
DFs is proposed. In this approach, the nonlinear DFA (sigmoidal combination of DFs)
as a priori and NSGA-II/MOPSO-CD as a posteriori are combined to
provide an interactive POF. The theoretical analysis of the proposed MOEA-IDFA is
also provided in this chapter. It has been observed that the use of the sigmoidal combination
of DFs produces a definite bias (towards the intermediate region of the POF) among the
Pareto-optimal solutions of the original MOOP. This type of approach is useful if the DM
does not prefer a particular objective over another. The performance of MOEA-IDFA is
tested on a set of 10 test problems. The effect of the DF's key parameter on the POF has
also been investigated. A similar approach to introducing a bias in the POF was proposed
by Branke et al.
108
(2001), but it reported results on convex POFs only. The present approach is better than
the method of Branke et al. (2001) in terms of its ability to produce a bias in
disconnected and non-convex POFs as well. To the best of the author's knowledge,
there is no work to date in the literature that addresses bias in tri-objective
problems. In this chapter, we successfully applied MOEA-IDFA using sigmoidal DFs
to three standard tri-objective problems (VNT, MHHM1 and MHHM2).
109
Table 4.1 Parameters and Hypervolumes for SCH1 using MOEA-IDFA
(All Sigmoidal Combination)
Table 4.1 (a) A Priori Parameters for SCH1

f_1*   f_1**   f_2*   f_2**   a_1   a_2
0      4       0      4       3.9   3.9
Table 4.1 (b) A Posteriori Parameters for SCH1

Common parameters:    Pop. Size = 200, Max. Gen. = 200, Mutation Prob. = 0.4
MOPSO-CD parameters:  Arch. Size = 200, w = 0.5, c1 = 1, c2 = 1
NSGA-II parameters:   Cross. Prob. = 0.9, Cross. Index = 10, Mut. Index = 10
Table 4.1 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for SCH1

            Mean        Median      S.D.      Best        Worst
MOPSO-CD    397.24474   397.24475   0.00003   397.24478   397.24470
NSGA-II     397.13946   397.13945   0.00003   397.13949   397.13941
Figure 4.2 POFs of SCH1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Sigmoidal Combination)
110
Table 4.2 Parameters and Hypervolumes for SCH2 using MOEA-IDFA
(All Sigmoidal Combination)
Table 4.2 (a) A Priori Parameters for SCH2

f_1*   f_1**   f_2*   f_2**   a_1   a_2
-1     1       0      16      10    10
Table 4.2 (b) A Posteriori Parameters for SCH2

Common parameters:    Pop. Size = 200, Max. Gen. = 250, Mutation Prob. = 0.5
MOPSO-CD parameters:  Arch. Size = 200, w = 0.5, c1 = 1, c2 = 1
NSGA-II parameters:   Cross. Prob. = 0.7, Cross. Index = 10, Mut. Index = 10
Table 4.2 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for SCH2

            Mean        Median      S.D.      Best        Worst
MOPSO-CD    406.68519   406.68534   0.00055   406.68539   406.68354
NSGA-II     405.89094   405.89092   0.00002   405.89097   405.89091
Figure 4.3 POFs of SCH2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Sigmoidal Combination)
111
Table 4.3 Parameters and Hypervolumes for KUR using MOEA-IDFA
(All Sigmoidal Combination)
Table 4.3 (a) A Priori Parameters for KUR

f_1*   f_1**   f_2*    f_2**   a_1   a_2
-20    -14.4   -11.6   0       10    10
Table 4.3 (b) A Posteriori Parameters for KUR

Common parameters:    Pop. Size = 200, Max. Gen. = 150, Mutation Prob. = 0.2
MOPSO-CD parameters:  Arch. Size = 200, w = 0.6, c1 = 1.1, c2 = 1.1
NSGA-II parameters:   Cross. Prob. = 0.6, Cross. Index = 10, Mut. Index = 5
Table 4.3 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for KUR

            Mean         Median       S.D.      Best         Worst
MOPSO-CD    1222.30007   1222.30002   0.00012   1222.30043   1222.30001
NSGA-II     1220.00173   1220.00167   0.00014   1220.00197   1220.00144
Figure 4.4 POFs of KUR w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Sigmoidal Combination)
112
Table 4.4 Parameters and Hypervolumes for ZDT1 using MOEA-IDFA
(All Sigmoidal Combination)
Table 4.4 (a) A Priori Parameters for ZDT1

f_1*   f_1**   f_2*   f_2**   a_1   a_2
0      1       0      1       15    15
Table 4.4 (b) A Posteriori Parameters for ZDT1

Common parameters:    Pop. Size = 200, Max. Gen. = 300, Mutation Prob. = 0.1
MOPSO-CD parameters:  Arch. Size = 200, w = 0.5, c1 = 1.1, c2 = 1.1
NSGA-II parameters:   Cross. Prob. = 0.8, Cross. Index = 10, Mut. Index = 10
Table 4.4 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for ZDT1

            Mean        Median      S.D.      Best        Worst
MOPSO-CD    399.66342   399.66083   0.00642   399.67865   399.65346
NSGA-II     399.65731   399.65787   0.00241   399.65978   399.65312
Figure 4.5 POFs of ZDT1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Sigmoidal Combination)
113
Table 4.5 Parameters and Hypervolumes for ZDT2 using MOEA-IDFA
(All Sigmoidal Combination)
Table 4.5 (a) A Priori Parameters for ZDT2

f_1*   f_1**   f_2*   f_2**   a_1   a_2
0      1       0      1       8.0   8.0

Table 4.5 (b) A Posteriori Parameters for ZDT2

Common parameters:    Pop. Size = 200, Max. Gen. = 350, Mutation Prob. = 0.1
MOPSO-CD parameters:  Arch. Size = 200, w = 0.5, c1 = 1.1, c2 = 1.1
NSGA-II parameters:   Cross. Prob. = 0.6, Cross. Index = 10, Mut. Index = 15
Table 4.5 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for ZDT2

            Mean        Median      S.D.      Best        Worst
MOPSO-CD    406.97954   406.97943   0.00034   406.97998   406.97912
NSGA-II     406.81763   406.81757   0.00017   406.81798   406.81734
Figure 4.6 POFs of ZDT2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Sigmoidal Combination)
114
Table 4.6 Parameters and Hypervolumes for ZDT3 using MOEA-IDFA
(All Sigmoidal Combination)
Table 4.6 (a) A Priori Parameters for ZDT3

f_1*   f_1**   f_2*     f_2**   a_1   a_2
0      8.52    -0.773   1       10    10
Table 4.6 (b) A Posteriori Parameters for ZDT3

Common parameters:    Pop. Size = 200, Max. Gen. = 350, Mutation Prob. = 0.02
MOPSO-CD parameters:  Arch. Size = 200, w = 0.6, c1 = 1.1, c2 = 1.1
NSGA-II parameters:   Cross. Prob. = 0.6, Cross. Index = 15, Mut. Index = 15
Table 4.6 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for ZDT3

            Mean        Median      S.D.      Best        Worst
MOPSO-CD    415.29906   415.29905   0.00003   415.29909   415.29900
NSGA-II     414.60759   414.60765   0.00028   414.60798   414.60712
Figure 4.7 POFs of ZDT3 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Sigmoidal Combination)
115
Table 4.7 Parameters and Hypervolumes for TNK using MOEA-IDFA
(All Sigmoidal Combination)
Table 4.7 (a) A Priori Parameters for TNK

f_1*   f_1**   f_2*   f_2**   a_1   a_2
0      1.05    0      1.05    20    20
Table 4.7 (b) A Posteriori Parameters for TNK

Common parameters:    Pop. Size = 200, Max. Gen. = 300, Mutation Prob. = 0.5
MOPSO-CD parameters:  Arch. Size = 200, w = 0.5, c1 = 1, c2 = 1
NSGA-II parameters:   Cross. Prob. = 0.9, Cross. Index = 10, Mut. Index = 10
Table 4.7 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for TNK

            Mean        Median      S.D.      Best        Worst
MOPSO-CD    397.41494   397.41452   0.00183   397.41877   397.41132
NSGA-II     399.19856   399.19855   0.00003   399.19859   399.19852
Figure 4.8 POFs of TNK w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Sigmoidal Combination)
116
Table 4.8 Parameters and Hypervolumes for VNT using MOEA-IDFA
(All Sigmoidal Combination)
Table 4.8 (a) A Priori Parameters for VNT

f_1*   f_1**   f_2*   f_2**   f_3*   f_3**   a_1   a_2   a_3
0      4       1      5       2      4       5     5     5
Table 4.8 (b) A Posteriori Parameters for VNT

Common parameters:    Pop. Size = 200, Max. Gen. = 250, Mutation Prob. = 0.5
MOPSO-CD parameters:  Arch. Size = 200, w = 0.5, c1 = 1, c2 = 1
NSGA-II parameters:   Cross. Prob. = 0.8, Cross. Index = 10, Mut. Index = 10
Table 4.8 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for VNT

            Mean         Median       S.D.      Best         Worst
MOPSO-CD    6093.50480   6093.50561   0.00237   6093.50678   6093.50009
NSGA-II     6087.20061   6087.20065   0.00023   6087.20099   6087.20022
Figure 4.9 POFs of VNT w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Sigmoidal Combination)
117
Table 4.9 Parameters and Hypervolumes for MHHM1 using MOEA-IDFA (All Sigmoidal
Combination)
Table 4.9 (a) A Priori Parameters for MHHM1

f_1*   f_1**   f_2*   f_2**   f_3*   f_3**    a_1    a_2    a_3
0      0.01    0      0.01    0      0.0025   7000   7000   7000
Table 4.9 (b) A Posteriori Parameters for MHHM1

Common parameters:    Pop. Size = 200, Max. Gen. = 350, Mutation Prob. = 0.5
MOPSO-CD parameters:  Arch. Size = 200, w = 0.5, c1 = 1, c2 = 1
NSGA-II parameters:   Cross. Prob. = 0.9, Cross. Index = 10, Mut. Index = 10
Table 4.9 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for MHHM1

            Mean         Median       S.D.      Best         Worst
MOPSO-CD    7999.04947   7999.04343   0.02916   7999.09897   7999.01132
NSGA-II     7999.05225   7999.05431   0.02476   7999.09538   7999.01453
Figure 4.10 POFs of MHHM1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Sigmoidal Combination)
118
Table 4.10 Parameters and Hypervolumes for MHHM2 using MOEA-IDFA (All Sigmoidal
Combination)
Table 4.10 (a) A Priori Parameters for MHHM2

f_1*   f_1**    f_2*   f_2**    f_3*   f_3**    a_1      a_2      a_3
0.0    0.0125   0.0    0.0125   0.0    0.0125   2000.0   2000.0   2000.0
Table 4.10 (b) A Posteriori Parameters for MHHM2

Common parameters:    Pop. Size = 200, Max. Gen. = 300, Mutation Prob. = 0.3
MOPSO-CD parameters:  Arch. Size = 200, w = 0.5, c1 = 1, c2 = 1
NSGA-II parameters:   Cross. Prob. = 0.8, Cross. Index = 10, Mut. Index = 10
Table 4.10 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for MHHM2

            Mean         Median       S.D.      Best         Worst
MOPSO-CD    7995.00055   7995.00056   0.00022   7995.00087   7995.00012
NSGA-II     7995.00075   7995.00077   0.00021   7995.00098   7995.00036
Figure 4.11 POFs of MHHM2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Sigmoidal Combination)
119
Figure 4.12 Effect of variations in key parameter of DF on POF for SCH1
120
CHAPTER 5
Guided POF Articulating Nonlinear DFA with an MOEA (All
Convex Combination)
In this chapter, a method articulating a convex DF as a priori with an MOEA
(NSGA-II/MOPSO-CD) as a posteriori is proposed. The DM can have multiple regions
together as the preference in one POF. A convex DFA can help explore the multiple
regions of a POF in a single run. The consequence of a convex DFA as a priori is analyzed
theoretically as well as numerically (on ten standard test problems).
5.1 Introduction
Most MOEA approaches can be classified as a posteriori. They attempt to discover the
whole set of Pareto-optimal solutions, or at least a well-distributed set of
representatives. The DM then looks at the (possibly very large) set of generated
alternatives and makes the final decision based on his/her preferences. However, if the
Pareto-optimal solutions are too many, analyzing them to reach the final decision is quite
a challenging and burdensome process for the DM. In addition, in a particular problem,
the user may not be interested in the complete Pareto set; instead, the user may be
interested in a certain region/regions of the Pareto set. Such a bias can arise if not all
objectives are of equal importance to the user. Finding a preferred distribution in the
region/regions of interest is more practical and less subjective than finding one biased
solution in the region of interest.
Although it is usually tougher for a DM to completely specify his or her
preferences before any alternatives are known, the DM often has a rough idea of the
preferential goals towards which the search should be directed, so that he/she may be
able to articulate vague, linguistic degrees of importance or give reasonable trade-offs
between the different objectives. Such information should be integrated into the MOEA
to bias the search towards solutions that are preferential for the DM. This would in
principle yield two important advantages:
121
(1) Instead of a diverse set of solutions, many of which are irrelevant to the DM, a search
biased towards the DM's preferences will yield a more fine-grained, and thus more
suitable, selection of alternatives;
(2) By focusing the search onto the relevant part of the search space, the optimization
algorithm is expected to find these solutions more quickly.
In most practical cases, the DM has at least a vague idea of the
region/regions of the objective space in which the 'optimum' should lie. For this purpose,
DFs that map the objectives to the interval [0, 1] according to their desired values can be
specified as a priori. The Pareto-optimal solutions are then determined for the DFs
instead of the objective functions using an MOEA. The proposed method is a fusion of an
a priori and an a posteriori approach. Furthermore, its application in practice is quite
straightforward. In addition, in spite of the focus on the desired region, the MOEA
search can be carried out without any additional constraints on the
decision variables and/or objectives. Therefore, no individuals are 'wasted' during the
optimization process because they do not fit into constrained intervals. The MOEA itself is
not touched at all, which facilitates the use of the method in practice, as existing
optimization tools can be used for the a posteriori optimization process. In other words,
this approach is independent of the MOEA used and can easily be coupled to any of
them without any deep modification to the main structure of the method chosen. For
comparison purposes we use two of them, namely NSGA-II and MOPSO-CD.
Once the DM has agreed on the parameters of the DFs, which is the key step,
MOEAs can be applied to the transformed objective space (i.e., the DFMOOP) without
modification (Trautmann and Mehnen, 2009). By means of the DFs, the solutions
concentrate in the desired region/regions, which facilitates the solution selection process
and supports the MOEAs in finding relevant solutions. If the DM has multiple regions as
preferences for a MOOP, the DF should be modified
accordingly. For example, if the DM is interested in finding the end portions of the POF of
a bi-objective minimization problem, the DFs corresponding to both objectives can be
transformed into convex DFs. We discuss the consequence of using convex DFs in
this chapter.
Jeong and Kim (2009) proposed an interactive DF approach (IDFA) for
multiresponse optimization to facilitate the preference articulation process. One run of
122
this approach provides just one solution to the DM based on his/her preference expressed
in the form of DFs. The present work is an extension of the work of Jeong and Kim (2009).
We present an MOEA-based IDFA, i.e., MOEA-IDFA, to provide the DM with a preferred
portion of the POF rather than just a single solution.
Section 5.2 describes the prerequisites of convex DFA as a priori. The
methodology of the proposed approach is provided in Section 5.3. Necessary theorems
are also provided in this section in order to analyze the methodology theoretically.
Results corresponding to the ten test problems (Section 2.5) from both MOEAs
(NSGA-II and MOPSO-CD) are compared and discussed in Section 5.4. Finally,
concluding observations are drawn in Section 5.5.
5.2 Nonlinear (Convex) DFA as a Priori
If a DM has some preference (i.e., bias) towards one or more
objectives, a nonlinear DF may fulfill the purpose. Different shapes of DF provide
different types of bias towards different regions of the POF. The form of the
STB type of convex DF, proposed by Derringer and Suich (1980), is already discussed in
Chapter 3 and is defined as

μ_i(f_i) = 0,                                     if f_i < f_i*
         = ((f_i − f_i*)/(f_i** − f_i*))^(n_i),   if f_i* ≤ f_i ≤ f_i**       (5.1)
         = 1,                                     if f_i** < f_i

for i = 1, 2, ..., k, where the parameters f_i* and f_i** are the minimal and maximal
acceptable levels of f_i respectively (see Figure 5.1). The DFs μ_i are on the same scale
and are nondifferentiable at the points f_i* and f_i**. The values of n_i (a kind of
weighting factor) can be chosen so that the DF is easier or more difficult to satisfy. The
key parameter of a convex DF μ_i is n_i.
Use of a linear DF, discussed in Chapter 2, does not provide any bias for the DM.
The convex-concave combination of DFs provides a bias towards a corner of the
POF, as discussed in Chapter 3. The sigmoidal DF guides the search towards the
intermediate region of the POF, as elaborated in Chapter 4.
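A minimal sketch of the STB convex DF of Equation 5.1 follows; the bounds [0, 4] and the choice of n values are illustrative assumptions, not thesis settings.

```python
def stb_df(f, f_lo, f_hi, n):
    """Smaller-the-better convex DF of Equation 5.1 (Derringer and Suich, 1980).
    Maps the objective value f into [0, 1]; n >= 1 gives the convex shape."""
    if f < f_lo:
        return 0.0
    if f > f_hi:
        return 1.0
    return ((f - f_lo) / (f_hi - f_lo)) ** n

# Illustrative bounds: a larger n keeps the DF close to 0 over most of the
# range, i.e., makes it easier to satisfy near f_lo
for n in (1, 2, 5, 20):
    print(n, [round(stb_df(f, 0.0, 4.0, n), 4) for f in (1.0, 2.0, 3.0)])
```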
123
Figure 5.1 STB type of a Convex DF
So far we have discussed a single region of bias (either towards a corner or an
intermediate portion of the POF). In this chapter, we discuss the effect of a
convex DFA for a MOOP, which can guide the POF into multiple regions.
5.3 Proposed Methodology
The MOOP, already described in Chapter 1, is given by Equation P1:

Minimize f(x) = {f_1(x), f_2(x), ..., f_k(x)},  x ∈ X                    (P1)

Eliciting the DF corresponding to each objective (i.e., f_1, f_2, ..., f_k) through the
interaction of the DM, a new DF-based multi-objective optimization problem (DFMOOP)
consisting of the DFs is formulated, given by Equation P2:

Minimize μ(x) = {μ_1(x), μ_2(x), ..., μ_k(x)},  x ∈ X                    (P2)
124
where, for each value of an objective function f_i, there is a mapping called a DF, i.e., μ_i,
which prescribes the variation of the vagueness involved, as discussed in Section 2.2. The
overall (resulting) DF value (say μ) also lies between zero and one.
The Pareto-optimal solutions are then determined for this newly formed DFMOOP
instead of the original MOOP. Solutions of the DFMOOP have a unique relationship with
the original MOOP. In general, a DFMOOP is solved using different aggregators (the min
and product operators are the most common), providing a single solution of the DFMOOP.
This type of approach is applied repeatedly for different degrees of satisfaction until the
DM is satisfied. The benefit of this technique depends on how well the DM's preferences
have been captured in the a priori stage, which is quite rare due to the vague nature of
human judgment. So, in the present approach, the DFMOOP is solved in a purely
multi-objective manner (using the algorithms discussed, i.e., NSGA-II and MOPSO-CD)
without aggregation. The present approach is an attempt to combine the benefits of both
the a priori and the a posteriori methods.
We now prove two theorems, which establish the relationship between the
Pareto-optimal solutions of P2 and P1.
Theorem 5.1: The Pareto-optimal solutions of P2 corresponding to the DF in Equation
5.1 are also Pareto-optimal solutions of P1.
Proof:
Let x* be a Pareto-optimal solution of P2. Then, by the definition of a Pareto-optimal
solution, there is no x ∈ X such that

μ_i(x) < μ_i(x*) for some i ∈ {1, 2, ..., k},
and μ_j(x) ≤ μ_j(x*) for all j ∈ {1, 2, ..., k}, j ≠ i.

Now, with the convex DF of Equation 5.1,

μ_i(x) < μ_i(x*)
⇔ ((f_i(x) − f_i*)/(f_i** − f_i*))^(n_i) < ((f_i(x*) − f_i*)/(f_i** − f_i*))^(n_i);
   for i ∈ {1, 2, ..., k}; n_i ≥ 1                                            (5.2)
125
⇔ (f_i(x) − f_i*)/(f_i** − f_i*) < (f_i(x*) − f_i*)/(f_i** − f_i*),
   as n_i ≥ 1 and f_i* ≤ f_i(x), f_i(x*) ≤ f_i**                              (5.3)

and, similarly, for j ≠ i,

((f_j(x) − f_j*)/(f_j** − f_j*))^(n_j) ≤ ((f_j(x*) − f_j*)/(f_j** − f_j*))^(n_j); n_j ≥ 1
⇔ (f_j(x) − f_j*)/(f_j** − f_j*) ≤ (f_j(x*) − f_j*)/(f_j** − f_j*),
   as n_j ≥ 1 and f_j* ≤ f_j(x), f_j(x*) ≤ f_j**                              (5.4)

Inequality (5.3) ⇔ f_i(x) − f_i* < f_i(x*) − f_i*, as f_i** − f_i* > 0
                ⇔ f_i(x) < f_i(x*); for i ∈ {1, 2, ..., k}                    (5.5)

Similarly,

Inequality (5.4) ⇔ f_j(x) − f_j* ≤ f_j(x*) − f_j*, as f_j** − f_j* > 0, j ≠ i
                ⇔ f_j(x) ≤ f_j(x*); for j ≠ i                                 (5.6)

The proposition holds, as Inequalities 5.5 and 5.6 together form the condition for a
Pareto-optimal solution of P1.
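Theorem 5.1 can also be spot-checked numerically: with a convex DF that is strictly increasing on [f_i*, f_i**], Pareto dominance among the DF vectors coincides with dominance among the objective vectors. SCH1 and the parameter values below are illustrative assumptions.

```python
import random

def stb_df(f, lo, hi, n):
    # Convex STB DF (Equation 5.1); strictly increasing for lo <= f <= hi
    f = min(max(f, lo), hi)
    return ((f - lo) / (hi - lo)) ** n

def dominates(u, v):
    # u Pareto-dominates v under minimization
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

random.seed(0)
f = lambda x: (x * x, (x - 2.0) ** 2)                   # SCH1 objectives
g = lambda x: tuple(stb_df(fi, 0.0, 4.0, 3) for fi in f(x))
xs = [random.uniform(0.0, 2.0) for _ in range(30)]
checked = [(dominates(g(a), g(b)), dominates(f(a), f(b)))
           for a in xs for b in xs if a != b]
assert all(p == q for p, q in checked)
print(len(checked), "pairs checked")
```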
5.3.1 Detailed Procedure of the Methodology: MOEA-IDFA
The methodology consists of five steps. Step 0 is an initialization step. Steps 1 and 2
constitute the calculation phase, and Steps 3 and 4 the decision-making phase. Figure
1.8 shows the procedure of MOEA-IDFA. In the calculation phase, the DFs are
constructed, and then an optimization model of the DFs is solved using an MOEA. In the
decision-making phase, the DM evaluates the results of the calculation phase and then
articulates his/her preference information. More specifically, if the DM is satisfied with
the results on all the objectives, the procedure ends successfully. Otherwise, the DM
adjusts the parameters (shape and bounds) of a DF, and the procedure goes back to the
calculation phase. Each step is described below.
Step 0: Initialization of DF's Parameters
The DF parameters (preference parameters on each objective) are initialized to
construct the DF for each objective in the first iteration. The initial bounds and goal
(target) may be determined based on the DM's subjective judgment. The ideal and
anti-ideal values, f_i* and f_i**, of the
126
corresponding objective function f_i should be calculated in advance for the given MOOP
(Equation P1).
Step 1: Construction of the DFs
The calculation phase starts from this step. As mentioned earlier, the preference parameters
initialized in Step 0 are used to construct the nonlinear (convex or concave) DF μ_i for
each f_i of P1. The DM's preference can be incorporated in the construction of the DF via
the value of n_i (a kind of weighting factor that determines the shape of the DF). Thus, a
new DFMOOP (see Equation P2) is formed.
Step 2: Solving the DFMOOP
The newly formed P2 is then solved using an efficient MOEA. Since we are discussing a
general MOOP, which can be nonlinear, nonconvex, multimodal and multivariable, the
resulting DFMOOP will also be of the same nature. We therefore need a powerful
algorithm to solve this optimization problem. NSGA-II and MOPSO-CD, used here, are
two such competent techniques capable of solving complex MOOPs.
Step 3: Evaluation of the Solution
Present the solution obtained in Step 2 to the DM. Theorems 5.1 and 5.2 together imply
that the Pareto-optimal solutions obtained by solving P2 are also Pareto-optimal
solutions of P1.
However, the POF of P2 is different from that of P1 due to the different function
formulation. Therefore, it produces a section of the POF of P1, guided according to the
choice of the DF. For example, for a bi-objective minimization problem, using convex
DFs produces a bias towards both ends of the POF. If the DM is satisfied with the
Pareto-optimal solutions, the methodology terminates. Otherwise, the procedure goes
to Step 4.
Step 4: Adjusting the Preference Parameters
To improve unsatisfactory results, interaction with the DM can help modify (update)
Steps 0 and 1. On the other hand, if the DM is fully satisfied with the
127
obtained guided POF, the procedure ends successfully. The whole process may be
repeated until the DM is satisfied.
5.4 Results and Discussion
The approach proposed in this chapter has been applied to the same set of ten standard
test problems used earlier (Chapters 2-4). As discussed earlier in Section 5.2, the
all-convex combination of DFs is used as the a priori component, while
NSGA-II/MOPSO-CD is applied as the a posteriori component in the proposed approach.
As described in Step 0 of Subsection 5.3.1, the parameters of the
a priori method, i.e., the nonlinear (convex) DFs, are obtained with the help of the DM
corresponding to each objective. The initial DF parameters used for all test problems are
given in Tables 5.1(a)-5.10(a).
In Step 1, the preference parameters initialized in Step 0 are used, with the help of
the DM, to construct the convex DFs (μ_i's) for each f_i, forming the DFMOOP
(i.e., P2). This newly formed P2 is solved using an a posteriori method as shown in
Step 2. Since our methodology depends on the MOEA used, we have used both NSGA-II
and MOPSO-CD as the a posteriori method to estimate the effectiveness of the approach
on different MOEAs. The values of different sets of parameters were tested and fine-tuned
through several runs of NSGA-II and MOPSO-CD. The NSGA-II and
MOPSO-CD parameters used for each problem are given in Tables 5.1(b)-5.10(b). In
Step 2, computations have been carried out based on these parameters and the results have
been reported. To compare the efficiency of NSGA-II and MOPSO-CD, the hypervolume
metric is used here; the reference points are taken as (11, 11) and (11, 11, 11) for
bi-objective and tri-objective problems respectively. Tables 5.1(c)-5.10(c)
display the mean, median, standard deviation, best and worst values of the hypervolume
metric obtained using NSGA-II and MOPSO-CD over 10 runs each. For
the steadiness of the results for each problem, 10 runs of each MOEA (NSGA-II and
MOPSO-CD) are done under the same parameters and the best POF (based on the best
hypervolume metric) obtained is reported. The POFs of the problems are depicted in
Figures 5.2-5.11, which clearly show that use of the all-convex combination of DFs guides
the POF towards multiple regions.
128
A look at Tables 5.1(c)-5.10(c) discloses that, if we consider the mean of the
hypervolume metric as the competence parameter for both NSGA-II and MOPSO-CD,
then NSGA-II performed better in the cases of SCH2, KUR, ZDT1, ZDT2, MHHM1 and
MHHM2, while MOPSO-CD was better in the cases of SCH1, TNK, VNT and ZDT3 for
the parameter settings used. If the DM is satisfied with the POF obtained, the
procedure ends successfully, as explained in Step 4. We show just one
iteration of the approach here, as we assume that for these problems the DM is satisfied
with the results obtained in Step 3. However, if the DM is not satisfied, the procedure goes
back to Step 1 to accommodate the DM's preferences in a better manner.
5.4.1 Effect of Variations in DF's Key Parameter on POF
If the DM is unsatisfied by the outcome of Step 4 of the proposed approach, the
analyst calls for a variation in the DF. Thus, it is important to understand the effect of
the parameters of the DF on the POF. It is obvious from the results and discussions in Section
5.4 that use of the convex combination of DFs yields Pareto-optimal solutions in the
extreme portions of the POF. To investigate the effect of the DF's key parameter
further, we take SCH1 as our problem and NSGA-II as the MOEA, with the
same a posteriori settings as used in Table 5.1(b). As the key parameter of a
convex DF mu_i is n_i, we use the rest of the parameters as taken in Table 5.1(a)
and vary n_i to observe its effect. Different combinations of the n_i have yielded
different portions of the POF of SCH1, as shown in Figure 5.12. The key parameters n1
and n2 used for Figure 5.12(a) have equal values of 2. Similarly, the values of n1
and n2 used for Figure 5.12(b) are taken as 4. Figure 5.12(c) depicts the POF when
n1 and n2 are taken as 5 each, and 20 is the value used for both n1 and n2 in Figure 5.12(d).
It is clear from Figure 5.12 that incrementing the values of both key parameters
contracts the POF towards the extreme portions. Thus, if the DM wants the extreme
portions of the POF (in the case of a bi-objective problem) in a single run, then the all-convex
type of combination of DFs can be used.
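The contraction effect of the key parameter n can be previewed numerically. The sketch below assumes the increasing convex DF form used in this chapter, mu(f) = ((f - f*)/(f** - f*))^n on [f*, f**]; the function name is hypothetical, and the bounds f* = 0, f** = 4 follow the SCH1 setting of Table 5.1(a).

```python
def convex_df(f, f_star, f_2star, n):
    """Increasing DF mapping an objective onto [0, 1].

    mu = ((f - f*)/(f** - f*))**n; n >= 1 gives the convex shape, and a
    larger n flattens mu near f*, so mid-range objective values map close
    to 0 and resolution concentrates at the extremes of the POF.
    """
    if f < f_star:
        return 0.0
    if f > f_2star:
        return 1.0
    return ((f - f_star) / (f_2star - f_star)) ** n

# The midpoint f = 2 of [0, 4] maps closer and closer to 0 as n grows,
# for the same n values used in the four panels of Figure 5.12:
for n in (2, 4, 5, 20):
    print(n, convex_df(2.0, 0.0, 4.0, n))
```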
5.5 Conclusion
The main crux of this chapter is the application of the DFA to exploit the population approach
of an MOEA procedure in finding more than one solution, not on the entire POF, but in
the regions of Pareto-optimality which are of interest to the DM. In this chapter, a
partial user-preference approach named MOEA-IDFA having a nonlinear DF is
proposed. In this approach, a nonlinear DFA (convex combination of DFs) as the a priori part
and NSGA-II/MOPSO-CD as the a posteriori part are combined to provide an
interactive POF. The theoretical analysis of the proposed MOEA-IDFA is also
provided in this chapter. It has been observed that use of the convex combination of DFs
produces multiple biases (towards the extreme regions of the POF) among the Pareto-optimal
solutions of the original MOOP. This type of approach is useful if the DM does not prefer an
intermediate region of the POF. The performance of MOEA-IDFA is tested on a set
of 10 test problems. It is observed and displayed that multiple (two) regions can also be
explored using a suitable combination of DFs. The effect of the key parameters of the DF on the
POF is also presented.

To the best of the author's knowledge, there are no works to date in the literature
that address biasing among tri-objective problems. In this chapter, we
successfully apply MOEA-IDFA using convex DFs to three standard tri-objective
problems (VNT, MHHM1 and MHHM2).
Table 5.1 Parameters and Hypervolumes for SCH1 using MOEA-IDFA
(All Convex Combination)

Table 5.1 (a) A Priori Parameters for SCH1

  f1*    f1**   f2*    f2**   n1     n2
  0      4      0      4      10     10

Table 5.1 (b) A Posteriori Parameters for SCH1

  Common Parameters:   Pop. Size 200, Max. Gen. 250, Mutation Prob. 0.4
  MOPSO-CD Parameters: Arch. Size 200, w 0.4, c1 1, c2 1
  NSGA-II Parameters:  Cross. Prob. 0.9, Cross. Index 10, Mut. Index 10

Table 5.1 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for SCH1

             Mean        Median      S.D.      Best        Worst
  MOPSO-CD   396.88250   396.88254   0.00015   396.88269   396.88221
  NSGA-II    393.58259   393.58257   0.00015   393.58287   393.58232

Figure 5.2 POFs of SCH1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Convex Combination)
Table 5.2 Parameters and Hypervolumes for SCH2 using MOEA-IDFA
(All Convex Combination)

Table 5.2 (a) A Priori Parameters for SCH2

  f1*    f1**   f2*    f2**   n1     n2
  -1     1      0      16     11.0   11.0

Table 5.2 (b) A Posteriori Parameters for SCH2

  Common Parameters:   Pop. Size 200, Max. Gen. 300, Mutation Prob. 0.5
  MOPSO-CD Parameters: Arch. Size 200, w 0.6, c1 1, c2 1
  NSGA-II Parameters:  Cross. Prob. 0.7, Cross. Index 10, Mut. Index 10

Table 5.2 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for SCH2

             Mean        Median      S.D.      Best        Worst
  MOPSO-CD   405.25125   405.25123   0.00021   405.25154   405.25101
  NSGA-II    406.97663   406.97662   0.00022   406.97689   406.97623

Figure 5.3 POFs of SCH2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Convex Combination)
Table 5.3 Parameters and Hypervolumes for KUR using MOEA-IDFA
(All Convex Combination)

Table 5.3 (a) A Priori Parameters for KUR

  f1*    f1**    f2*     f2**   n1     n2
  -20    -14.4   -11.6   0      12.0   12.0

Table 5.3 (b) A Posteriori Parameters for KUR

  Common Parameters:   Pop. Size 200, Max. Gen. 200, Mutation Prob. 0.2
  MOPSO-CD Parameters: Arch. Size 200, w 0.6, c1 1.1, c2 1.1
  NSGA-II Parameters:  Cross. Prob. 0.5, Cross. Index 10, Mut. Index 10

Table 5.3 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for KUR

             Mean         Median       S.D.      Best         Worst
  MOPSO-CD   1223.60011   1223.60000   0.00013   1223.60025   1223.6000
  NSGA-II    1224.80046   1224.80060   0.0004    1224.8009    1224.800

Figure 5.4 POFs of KUR w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Convex Combination)
Table 5.4 Parameters and Hypervolumes for ZDT1 using MOEA-IDFA
(All Convex Combination)

Table 5.4 (a) A Priori Parameters for ZDT1

  f1*    f1**   f2*    f2**   n1     n2
  0      1      0      1      10     10

Table 5.4 (b) A Posteriori Parameters for ZDT1

  Common Parameters:   Pop. Size 200, Max. Gen. 350, Mutation Prob. 0.15
  MOPSO-CD Parameters: Arch. Size 200, w 0.5, c1 1.1, c2 1.1
  NSGA-II Parameters:  Cross. Prob. 0.7, Cross. Index 10, Mut. Index 10

Table 5.4 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for ZDT1

             Mean        Median      S.D.     Best        Worst
  MOPSO-CD   399.59552   399.59555   0.0004   399.59578   399.59522
  NSGA-II    399.59883   399.59555   0.0002   399.59578   399.59522

Figure 5.5 POFs of ZDT1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Convex Combination)
Table 5.5 Parameters and Hypervolumes for ZDT2 using MOEA-IDFA
(All Convex Combination)

Table 5.5 (a) A Priori Parameters for ZDT2

  f1*    f1**   f2*    f2**   n1     n2
  0      1      0      1      20.0   20.0

Table 5.5 (b) A Posteriori Parameters for ZDT2

  Common Parameters:   Pop. Size 200, Max. Gen. 350, Mutation Prob. 0.15
  MOPSO-CD Parameters: Arch. Size 200, w 0.5, c1 1.1, c2 1.1
  NSGA-II Parameters:  Cross. Prob. 0.6, Cross. Index 15, Mut. Index 15

Table 5.5 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for ZDT2

             Mean        Median       S.D.      Best        Worst
  MOPSO-CD   399.28380   399.28378    0.00005   399.28388   399.28374
  NSGA-II    399.59579   399.595814   0.00012   399.59595   399.59554

Figure 5.6 POFs of ZDT2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Convex Combination)
Table 5.6 Parameters and Hypervolumes for ZDT3 using MOEA-IDFA
(All Convex Combination)

Table 5.6 (a) A Priori Parameters for ZDT3

  f1*    f1**    f2*      f2**   n1    n2
  0      0.852   -0.773   1      2.5   2.5

Table 5.6 (b) A Posteriori Parameters for ZDT3

  Common Parameters:   Pop. Size 200, Max. Gen. 350, Mutation Prob. 0.02
  MOPSO-CD Parameters: Arch. Size 200, w 0.5, c1 1.1, c2 1.1
  NSGA-II Parameters:  Cross. Prob. 0.4, Cross. Index 15, Mut. Index 15

Table 5.6 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for ZDT3

             Mean        Median       S.D.      Best        Worst
  MOPSO-CD   414.72488   414.724914   0.00014   414.72499   414.72454
  NSGA-II    414.65120   414.65117    0.00006   414.65128   414.65112

Figure 5.7 POFs of ZDT3 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Convex Combination)
Table 5.7 Parameters and Hypervolumes for TNK using MOEA-IDFA
(All Convex Combination)

Table 5.7 (a) A Priori Parameters for TNK

  f1*    f1**   f2*    f2**   n1    n2
  0      1.05   0      1.05   8.0   8.0

Table 5.7 (b) A Posteriori Parameters for TNK

  Common Parameters:   Pop. Size 200, Max. Gen. 350, Mutation Prob. 0.5
  MOPSO-CD Parameters: Arch. Size 200, w 0.6, c1 1, c2 1
  NSGA-II Parameters:  Cross. Prob. 0.7, Cross. Index 10, Mut. Index 10

Table 5.7 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for TNK

             Mean        Median      S.D.      Best        Worst
  MOPSO-CD   397.32435   397.32435   0.00003   397.32438   397.32431
  NSGA-II    399.22263   399.22265   0.00018   399.22278   399.22212

Figure 5.8 POFs of TNK w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Convex Combination)
Table 5.8 Parameters and Hypervolumes for VNT using MOEA-IDFA (All Convex Combination)

Table 5.8 (a) A Priori Parameters for VNT

  f1*   f1**   f2*   f2**   f3*   f3**   n1    n2    n3
  0     4      1     5      2     4      5.0   5.0   5.0

Table 5.8 (b) A Posteriori Parameters for VNT

  Common Parameters:   Pop. Size 200, Max. Gen. 250, Mutation Prob. 0.5
  MOPSO-CD Parameters: Arch. Size 200, w 0.5, c1 1, c2 1
  NSGA-II Parameters:  Cross. Prob. 0.9, Cross. Index 10, Mut. Index 10

Table 5.8 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for VNT

             Mean         Median       S.D.      Best         Worst
  MOPSO-CD   6085.10015   6085.10014   0.00009   6085.10025   6085.10004
  NSGA-II    6070.40029   6070.40013   0.00031   6070.40098   6070.400012

Figure 5.9 POFs of VNT w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Convex Combination)
Table 5.9 Parameters and Hypervolumes for MHHM1 using MOEA-IDFA (All Convex
Combination)

Table 5.9 (a) A Priori Parameters for MHHM1

  f1*   f1**   f2*   f2**     f3*   f3**   n1    n2    n3
  0     0.01   0     0.0025   0     0.01   5.5   5.5   5.5

Table 5.9 (b) A Posteriori Parameters for MHHM1

  Common Parameters:   Pop. Size 200, Max. Gen. 350, Mutation Prob. 0.4
  MOPSO-CD Parameters: Arch. Size 200, w 0.5, c1 1, c2 1
  NSGA-II Parameters:  Cross. Prob. 0.9, Cross. Index 10, Mut. Index 10

Table 5.9 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for MHHM1

             Mean         Median       S.D.      Best         Worst
  MOPSO-CD   7999.00091   7999.00050   0.00170   7999.00565   7999.00001
  NSGA-II    7999.00100   7999.00056   0.00166   7999.00565   7999.00004

Figure 5.10 POFs of MHHM1 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Convex Combination)
Table 5.10 Parameters and Hypervolumes for MHHM2 using MOEA-IDFA (All Convex
Combination)

Table 5.10 (a) A Priori Parameters for MHHM2

  f1*   f1**     f2*   f2**     f3*   f3**     n1    n2    n3
  0.0   0.0125   0.0   0.0125   0.0   0.0125   5.0   5.0   5.0

Table 5.10 (b) A Posteriori Parameters for MHHM2

  Common Parameters:   Pop. Size 200, Max. Gen. 350, Mutation Prob. 0.3
  MOPSO-CD Parameters: Arch. Size 200, w 0.5, c1 1, c2 1
  NSGA-II Parameters:  Cross. Prob. 0.7, Cross. Index 10, Mut. Index 10
Table 5.10 (c) Hypervolumes w.r.t. NSGA-II and MOPSO-CD for MHHM2

             Mean         Median       S.D.      Best         Worst
  MOPSO-CD   7999.10011   7999.10010   0.00002   7999.10015   7999.10010
  NSGA-II    7995.00144   7999.00056   0.00191   7999.00565   7999.00004

Figure 5.11 POFs of MHHM2 w.r.t. NSGA-II and MOPSO-CD using MOEA-IDFA
(All Convex Combination)
Figure 5.12 Effect of variations in key parameter of DF on POF for SCH1:
(a) n1 = n2 = 2, (b) n1 = n2 = 4, (c) n1 = n2 = 5, (d) n1 = n2 = 20
CHAPTER 6
Application of the MOEA-IDFA to Reliability Optimization
Problems
In this chapter, we exemplify the application of the proposed MOEA-based IDFA, i.e.,
the MOEA-IDFA discussed in Chapters 2-5, via five well-known reliability
optimization problems of the MOOP category. In Section 6.1 an overview as well as a
review of preference incorporation in reliability optimization problems is given. In
the later sections (Sections 6.2-6.6) the application of the proposed approach is shown on five
reliability optimization problems.
6.1 Reliability Optimization (An Overview)
The reliability of a system is generally measured in terms of the probability that the system
will not fail during the delivery of its service. A system can be designed for optimal
reliability either by adding redundant components or by increasing the reliability of
components. In both cases, an increased demand for the applied resources of cost,
volume and weight must be observed. Therefore, a balance is required between
resources and reliability. Thus, it is worth considering the MOOP techniques to solve
this kind of problem. In most practical situations involving reliability optimization,
there are several mutually conflicting goals, such as maximizing system reliability and
minimizing cost, weight and volume, that need to be addressed simultaneously under the
given constraints. Some main objectives can be expressed as
Objective 1 The most important objective is the maximization of the system reliability
($R_s$). It enables the system to function satisfactorily throughout its intended service
period:

$$\text{Max } R_s$$

Since our approach considers all objectives as minimization problems, the above
objective is equivalent to the minimization of the system unreliability ($Q_s = 1 - R_s$) and can be
expressed as

$$\text{Min } Q_s$$
Objective 2 The addition of redundant components increases not only the system
reliability but also its overall cost ($C_s$). A manufacturer has to balance these
conflicting objectives, keeping in view the importance of reducing the overall cost. This
objective can be expressed as

$$\text{Min } C_s$$

Objective 3 As with cost, every added redundant component increases the weight of the
system. Usually, the overall weight of the system needs to be minimized along with its
cost even as reliability is maximized (or unreliability is minimized), i.e.,

$$\text{Min } W_s$$
When two or more elements compose a system, the reliability of the latter depends
upon the reliability of the former as well as the functional interactions amongst them.
The basic modes of interaction are series and parallel: in the former case a failure in
one component results in a failure of the whole system, thus the reliability of a series
system is expressed by $R_s = \prod_{i=1}^{n} r_i$, where $r_i$ is the reliability of
the $i$th component of the system and $n$ is the total number of components in the system.
Conversely, in parallel systems, all the components must fail to make the system fail;
therefore the reliability is assessed as $R_s = 1 - \prod_{i=1}^{n} (1 - r_i) = 1 - \prod_{i=1}^{n} q_i$, where $q_i$ is
the unreliability of the $i$th component of the system. Some systems with many
components are neither purely series nor parallel but a combination of both types. In such a
case systems are modeled as a group of subsystems, each of which might be parallel,
series or a combination of both. Some systems do not bear series-parallel structures and
therefore cannot be assessed directly using the previous formulae (for example, the complex
bridge system). Instead, the analyst should resort to other analytic tools like Fault
Tree Analysis (see (Andrews and Moss, 1993) for details) in order to identify minimal
combinations of elements whose simultaneous failure dooms the whole system to failure,
called minimal cut sets (MCS). Every system can be seen as a collection or series of MCS.
Some components of the system can be present in many MCS, so that the failure
probabilities of such MCS are not independent but depend upon the replicated components. When
the MCS are disjoint, i.e. when no element is present in more than one MCS, the failure
probabilities of the MCS are independent. Hence, the sum of the failure probabilities of all
disjoint MCS gives the system unreliability. However, in practice this sum might bear
too many terms and therefore some simplifications are made, such as considering only MCS
composed of up to m elements, where m gives the order of the cut set. There is an
extensive literature on reliability the reader can consult for more details, e.g. (Andrews
and Moss, 1993; Verma et al., 2010).
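The series and parallel formulas above can be sketched as two one-line functions. This is an illustrative sketch with hypothetical function names.

```python
from math import prod

def series_reliability(r):
    """R_s = prod(r_i): a series system works only if every component works."""
    return prod(r)

def parallel_reliability(r):
    """R_s = 1 - prod(1 - r_i): a parallel system fails only if all fail."""
    return 1.0 - prod(1.0 - ri for ri in r)

print(series_reliability([0.9, 0.9, 0.9]))   # ~0.729: series degrades reliability
print(parallel_reliability([0.9, 0.9]))      # ~0.99: redundancy improves it
```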
Optimization of reliability of complex systems is an extremely important issue in the
field of reliability engineering. Over the past three decades, reliability optimization
problems have been formulated as non-linear programming problems within either
single-objective or multi-objective environments. Tillman et al. (1980) provide an
excellent overview of a variety of optimization techniques applied to solve these
problems. However, they reviewed the application of only derivative-based optimization
techniques, as metaheuristics had not been applied to reliability optimization problems
at that time. Reliability optimization problems can be classified into three categories,
namely the component reliability problem (nonlinear problem), the redundancy allocation
problem (integer nonlinear problem), and the component reliability and redundancy allocation
problem (mixed integer nonlinear problem)
(Gopal et al., 1978; Govil and Agarwal, 1983; Dhillon, 1986; Misra, 1991, 2009;
Misra and Sharma, 1991a, 1991b; Mohamed Lawrence, 1992; Singh and Misra, 1994;
Prasad and Kuo, 2000; Kuo and Prasad, 2000; Amari et al., 2002, 2004; Amari and
McLaughlin, 2005; Kapur and Verma, 2005; Ha and Kuo, 2006; Misra et al., 2006;
Kuo and Wan, 2007; Levitin and Amari, 2008; Coelho, 2009; Aggarwal et al., 2009b,
2009a; Kuo et al.; Amari and Dill, 2010)
In this thesis we are concerned with the first one (the component reliability problem)
only, and hence will use the term reliability optimization problem in this sense
hereafter. Over the last decade, metaheuristics have also been applied to solve the
reliability optimization problems. To list a few of them, Coit and Smith (1996) were the
first to employ a GA to solve reliability optimization problems. Later, Ravi et al. (1997)
developed an improved version of non-equilibrium simulated annealing called INESA
and applied it to solve a variety of reliability optimization problems. Further, Ravi et al.
(2000) first formulated various complex system reliability optimization problems with
single and multi objectives as fuzzy global optimization problems. They also developed
145
and applied the non-combinatorial version of another meta-heuristic viz. threshold
accepting to solve these problems. Recently, Shelokar et al. (2002) applied the ant
colony optimization algorithm to these problems and obtained compared results to
those reported by Ravi et al. (1997).
Vinod et al. (2004) applied GAs to Risk Informed In-Service Inspection (RI-ISI), which
aims at prioritising the components for inspection within the permissible risk level,
thereby avoiding unnecessary inspections. A new fuzzy multi-objective optimization
method for the optimization decision-making of series and complex system reliability
with two objectives was introduced and presented by Mahapatra and Roy (2006).
Mahapatra (2009) considered a series-parallel system to find the optimum system
reliability with an additional entropy objective function.
Marseguerra et al. (2006) applied GA to solve the reliability problem. Salazar et al.
(2006, 2007) solved the system reliability optimization problem by using several EAs
and MOEAs. More recently, Ravi (2007) developed an extended version of the great
deluge algorithm and demonstrated its effectiveness in solving the reliability
optimization problems.
6.1.1 Preference Incorporation in Reliability Optimization Problems
It is very difficult for the DM to specify his/her preferences on the goals accurately a priori in
multi-objective reliability optimization problems. There are three key issues in
interactive multi-objective optimization methods: (1) how to elicit preference
information from the DM over a set of candidate solutions, (2) how to represent the
DM's preference structure in a systematic manner, and (3) how to use the DM's preference
structure to guide the search for improved solutions. A brief review of the DM's preference
articulation for reliability optimization problems is presented below.
To accommodate the preference of the DM in reliability optimization problems, Sakawa
(1978) considered a multi-objective formulation to maximize reliability and minimize
cost for reliability allocation by using the surrogate worth trade-off method. Dhingra
(1992) and Rao and Dhingra (1992) studied the reliability and redundancy
apportionment problem for a four-stage and a five-stage overspeed protection system,
using crisp and fuzzy multi-objective optimization approaches respectively. Ravi et al.
(2000) modeled the problem of optimizing the reliability of complex systems as a fuzzy
multi-objective optimization problem and studied it. Huang et al. (2007) reported a new
effective multi-objective optimization method, Intelligent Interactive multi-objective
optimization method (IIMOM), which is characterized by the way the DM's preference
structure model is built and used in guiding the search for improved solutions. IIMOM
is applied to the reliability optimization problem of a multistage mixed system. Five
different value functions are used to simulate the DM in the solution evaluation
process. The results illustrate that IIMOM is effective in capturing different kinds of
preference structures of the DM, and it is an effective tool for the DM to find the most
satisfying solution (Huang et al., 2005, 2007). Pandey et al. (2007) proposed an
enhanced particle swarm optimization (EPSO) to simulate the DM's opinion in the
solution process. Inagaki et al. (2009) solved another problem to maximize reliability
and minimize cost and weight by using an interactive optimization method. During the
review, we found that incorporation of the DM's preferences in reliability optimization
problems is infrequent and of growing interest. The main crux of the proposed
approach is exploitation of the population approach of an MOEA procedure in finding
more than one solution, not on the entire POF, but in the regions of Pareto-optimality
which are of interest to the DM. Our proposed approach can be used in an interactive
manner to guide the DM towards the preferred region of the POF. Depending on the type of
choice (preference) available from the DM, we define the following cases of preference.
We also present the appropriate combination of DFs needed to implement the particular
preference of the DM.
• No Preference
No choice is made by the DM. Although the DM does not state any preference in this case,
it is very important for starting the interaction with the DM. Before we concentrate in more
detail on the other cases, we must say something about how to start the interactive
methodology's solution process. It is possible that we ask even the first choice from the
DM. In this case, it is typically useful to first show the ideal $(Q_s^*, C_s^*, W_s^*)$ and the
anti-ideal $(Q_s^{**}, C_s^{**}, W_s^{**})$ objective values to him/her in order to give some understanding of
what kind of solutions are feasible. Alternatively, we can calculate a so-called neutral
compromise solution as the first solution (Wierzbicki, 1999). The result of this case is a
good starting point when no preference information is yet available. The DFs
$(\mu_{Q_s}, \mu_{C_s}$ and $\mu_{W_s})$ corresponding to the objectives $Q_s$, $C_s$ and $W_s$ are constructed based upon the
initialized parameters of Step 0 of the proposed approach. In the case of No Preference, no
extra information is needed from the DM to construct the DFs. In fact, in this case, the
linear DFA described in Chapter 2 may be applied; the expressions of the DFs are given
below.
$$\mu_{Q_s} = \begin{cases} 0 & \text{if } Q_s < Q_s^* \\ \dfrac{Q_s - Q_s^*}{Q_s^{**} - Q_s^*} & \text{if } Q_s^* \le Q_s \le Q_s^{**} \\ 1 & \text{if } Q_s^{**} < Q_s \end{cases} \quad (6.1)$$

$$\mu_{C_s} = \begin{cases} 0 & \text{if } C_s < C_s^* \\ \dfrac{C_s - C_s^*}{C_s^{**} - C_s^*} & \text{if } C_s^* \le C_s \le C_s^{**} \\ 1 & \text{if } C_s^{**} < C_s \end{cases} \quad (6.2)$$

$$\mu_{W_s} = \begin{cases} 0 & \text{if } W_s < W_s^* \\ \dfrac{W_s - W_s^*}{W_s^{**} - W_s^*} & \text{if } W_s^* \le W_s \le W_s^{**} \\ 1 & \text{if } W_s^{**} < W_s \end{cases} \quad (6.3)$$
• Preference 1
Next we assume that the DM is able to rank the relative importance of the objectives. In
this case, the DM prefers a region closer to the maximum of the reliability (one of
his/her objectives). In other words, for a bi-objective problem, the DM visually wants to
explore the top right portion of the POF shown to him/her in the No Preference case.
For a bi-objective problem having objectives $Q_s$ and $C_s$, the DFs $(\mu_{Q_s}, \mu_{C_s})$ can be
constructed through the concave and convex representations discussed in Chapter 3 as
below
$$\mu_{Q_s} = \begin{cases} 0 & \text{if } Q_s < Q_s^* \\ \left(\dfrac{Q_s - Q_s^*}{Q_s^{**} - Q_s^*}\right)^{n_1} & \text{if } Q_s^* \le Q_s \le Q_s^{**};\ n_1 \le 1;\ n_1 \in \mathbb{R}^+ \\ 1 & \text{if } Q_s^{**} < Q_s \end{cases} \quad (6.4)$$

$$\mu_{C_s} = \begin{cases} 0 & \text{if } C_s < C_s^* \\ \left(\dfrac{C_s - C_s^*}{C_s^{**} - C_s^*}\right)^{n_2} & \text{if } C_s^* \le C_s \le C_s^{**};\ n_2 \ge 1;\ n_2 \in \mathbb{R}^+ \\ 1 & \text{if } C_s^{**} < C_s \end{cases} \quad (6.5)$$
• Preference 2
This case is similar to the previous one; the difference here is the choice of the
objective made by the DM. In this case the DM prefers a region closer to the minimum
of the cost (one of his/her objectives). In other words, for a bi-objective problem,
the DM visually wants to explore the bottom left portion of the POF shown to him/her in the
No Preference case. For a bi-objective problem having objectives $Q_s$ and $C_s$, the
DFs $(\mu_{Q_s}, \mu_{C_s})$ can be constructed through the convex and concave representations
discussed in Chapter 3 as below
$$\mu_{Q_s} = \begin{cases} 0 & \text{if } Q_s < Q_s^* \\ \left(\dfrac{Q_s - Q_s^*}{Q_s^{**} - Q_s^*}\right)^{n_1} & \text{if } Q_s^* \le Q_s \le Q_s^{**};\ n_1 \ge 1;\ n_1 \in \mathbb{R}^+ \\ 1 & \text{if } Q_s^{**} < Q_s \end{cases} \quad (6.6)$$

$$\mu_{C_s} = \begin{cases} 0 & \text{if } C_s < C_s^* \\ \left(\dfrac{C_s - C_s^*}{C_s^{**} - C_s^*}\right)^{n_2} & \text{if } C_s^* \le C_s \le C_s^{**};\ n_2 \le 1;\ n_2 \in \mathbb{R}^+ \\ 1 & \text{if } C_s^{**} < C_s \end{cases} \quad (6.7)$$
For a tri-objective problem the situation is a bit different from the bi-objective
case: the convex-concave combinations also give rise to an all-convex
combination for any two of the objectives. Out of the many cases of a tri-objective problem
we consider a comparable one here. If the DM wants the bottom left portion of the
POF shown to him/her in the No Preference case, the DFs $(\mu_{Q_s}, \mu_{C_s})$ corresponding to the objectives
$Q_s$ and $C_s$ can be constructed through the convex and concave representations shown
above in Equations (6.6)-(6.7), while the DF $(\mu_{W_s})$ corresponding to the objective weight $W_s$ is
given by a concave representation as
$$\mu_{W_s} = \begin{cases} 0 & \text{if } W_s < W_s^* \\ \left(\dfrac{W_s - W_s^*}{W_s^{**} - W_s^*}\right)^{n_3} & \text{if } W_s^* \le W_s \le W_s^{**};\ n_3 \le 1;\ n_3 \in \mathbb{R}^+ \\ 1 & \text{if } W_s^{**} < W_s \end{cases} \quad (6.8)$$
• Preference 3
In this case, the DM prefers an intermediate portion of the POF shown to him/her in the
No Preference case. The DFs $(\mu_{Q_s}, \mu_{C_s}$ and $\mu_{W_s})$ corresponding to the objectives $Q_s$, $C_s$ and
$W_s$ can be constructed through the all-sigmoid representation discussed in Chapter 4, also
shown below
$$\mu_{Q_s} = \begin{cases} 0 & \text{if } Q_s < Q_s^* \\ \dfrac{1}{1+\exp\left(-\alpha_1\left(Q_s - \frac{Q_s^* + Q_s^{**}}{2}\right)\right)} & \text{if } Q_s^* \le Q_s \le Q_s^{**};\ \alpha_1 \in \mathbb{R}^+ \\ 1 & \text{if } Q_s^{**} < Q_s \end{cases} \quad (6.9)$$

$$\mu_{C_s} = \begin{cases} 0 & \text{if } C_s < C_s^* \\ \dfrac{1}{1+\exp\left(-\alpha_2\left(C_s - \frac{C_s^* + C_s^{**}}{2}\right)\right)} & \text{if } C_s^* \le C_s \le C_s^{**};\ \alpha_2 \in \mathbb{R}^+ \\ 1 & \text{if } C_s^{**} < C_s \end{cases} \quad (6.10)$$

$$\mu_{W_s} = \begin{cases} 0 & \text{if } W_s < W_s^* \\ \dfrac{1}{1+\exp\left(-\alpha_3\left(W_s - \frac{W_s^* + W_s^{**}}{2}\right)\right)} & \text{if } W_s^* \le W_s \le W_s^{**};\ \alpha_3 \in \mathbb{R}^+ \\ 1 & \text{if } W_s^{**} < W_s \end{cases} \quad (6.11)$$
• Preference 4
The DM can also have multiple regions together as the preference in one POF. This case is,
in a way, complementary to the Preference 3 case: the DM wants only the extreme
portions of the POF and no intermediate regions. In other words, for a bi-objective
problem, the DM visually wants to explore simultaneously the top right and bottom left
portions of the POF shown to him/her in the No Preference case. If the DM is interested in
finding multiple regions in the POF and prescribes Preference 4 as the choice, then the all-convex
representation (examined in Chapter 5) of the DFs $(\mu_{Q_s}, \mu_{C_s}$ and $\mu_{W_s})$ corresponding to
the objectives $Q_s$, $C_s$ and $W_s$ can be used, as shown below
$$\mu_{Q_s} = \begin{cases} 0 & \text{if } Q_s < Q_s^* \\ \left(\dfrac{Q_s - Q_s^*}{Q_s^{**} - Q_s^*}\right)^{n_1} & \text{if } Q_s^* \le Q_s \le Q_s^{**};\ n_1 \ge 1;\ n_1 \in \mathbb{R}^+ \\ 1 & \text{if } Q_s^{**} < Q_s \end{cases} \quad (6.12)$$

$$\mu_{C_s} = \begin{cases} 0 & \text{if } C_s < C_s^* \\ \left(\dfrac{C_s - C_s^*}{C_s^{**} - C_s^*}\right)^{n_2} & \text{if } C_s^* \le C_s \le C_s^{**};\ n_2 \ge 1;\ n_2 \in \mathbb{R}^+ \\ 1 & \text{if } C_s^{**} < C_s \end{cases} \quad (6.13)$$

$$\mu_{W_s} = \begin{cases} 0 & \text{if } W_s < W_s^* \\ \left(\dfrac{W_s - W_s^*}{W_s^{**} - W_s^*}\right)^{n_3} & \text{if } W_s^* \le W_s \le W_s^{**};\ n_3 \ge 1;\ n_3 \in \mathbb{R}^+ \\ 1 & \text{if } W_s^{**} < W_s \end{cases} \quad (6.14)$$
Through numerical experiments on five reliability optimization problems in this
chapter, we verify that a specific combination of DFs results in the corresponding
specific preferred portion of the POF, as per the wish of the DM.
6.2 Reliability Optimization of a Series System
6.2.1 Problem Description
Here a series system having five components, shown in Figure 6.1, is considered, each
component having reliability $r_i$, $i = 1, 2, \ldots, 5$. The system reliability $R_s$, unreliability $Q_s$
and system cost $C_s$ are given by

$$R_s = \prod_{i=1}^{5} r_i \quad \text{or} \quad Q_s = 1 - R_s = 1 - \prod_{i=1}^{5} r_i,$$

$$C_s = \sum_{i=1}^{5} a_i \log\left(\frac{1}{1-r_i} + b_i\right).$$

The problem is to find the decision variables $r_i$, $i = 1, 2, \ldots, 5$ which minimize both
$Q_s$ and $C_s$ subject to $0.5 \le r_i \le 0.99$, $i = 1, 2, \ldots, 5$.
In other words, the problem can be posed as a MOOP given by

$$\text{Minimize } (Q_s, C_s) \quad \text{subject to } 0.5 \le r_i \le 0.99,\ i = 1, 2, \ldots, 5; \quad (6.15)$$

where the vectors of coefficients $a_i$ and $b_i$ are $a = \{24, 8, 8.75, 7.14, 3.33\}$ and
$b = \{120, 80, 70, 50, 30\}$ respectively. Huang (1997) solved the problem given by
Equation (6.15) using a fuzzy approach and reported only three Pareto-optimal solutions
using an aggregation method. We are going to solve this problem according to the preference
of the DM.
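A sketch of the two objectives of this series system follows; the exact grouping inside the logarithm of the cost term is a reconstruction from the scanned equation, and the function name is illustrative.

```python
from math import log, prod

# Coefficient vectors a and b from Equation (6.15).
A = [24.0, 8.0, 8.75, 7.14, 3.33]
B = [120.0, 80.0, 70.0, 50.0, 30.0]

def series_objectives(r):
    """Return (Q_s, C_s) for the five-component series system.

    Q_s = 1 - prod(r_i); C_s = sum a_i * log(1/(1 - r_i) + b_i), as read
    from the scanned cost expression.
    """
    q = 1.0 - prod(r)
    c = sum(a * log(1.0 / (1.0 - ri) + b) for a, b, ri in zip(A, B, r))
    return q, c
```

For instance, the upper-bound design r_i = 0.99 for all i gives Q_s = 1 - 0.99^5 (about 0.049) at the highest cost the box constraints allow.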
6.2.2 Step-by-Step Illustration of MOEA-IDFA for Series System
Step 0: Initialization of A Priori Parameters
The DF's a priori preference parameters need to be initialized to construct the
DFs $(\mu_{Q_s}$ and $\mu_{C_s})$ for each objective (i.e., $Q_s$ and $C_s$), as shown in Table 6.3.
Step 1: Construction of DFs
The DFs $(\mu_{Q_s}$ and $\mu_{C_s})$ corresponding to the objectives $Q_s$ and $C_s$ are constructed based
upon the DM's preference parameters shown in Table 6.4 and the initialized parameters
of Step 0 (as discussed in Subsection 6.1.1):
• in the case of No Preference, Equations (6.1)-(6.2) are used,
• in the case of Preference 1, Equations (6.4)-(6.5) are used,
• in the case of Preference 2, Equations (6.6)-(6.7) are used,
• in the case of Preference 3, Equations (6.9)-(6.10) are used, and
• in the case of Preference 4, Equations (6.12)-(6.13) are used.
Step 2: Solving the DFMOOP
The following DFMOOP is formed using the DFs obtained in Step 1, which needs to be
solved by any MOEA:

$$\text{Minimize } (\mu_{Q_s}, \mu_{C_s}) \quad \text{subject to } 0.5 \le r_i \le 0.99,\ i = 1, 2, \ldots, 5; \quad (6.16)$$

For each case (preference), the combination of DFs will be different and so the problem
will be different for each case shown above. We are using NSGA-II and MOPSO-CD
to solve the DFMOOP obtained in Equation (6.16). The optimal parameter settings used
for both of these algorithms are provided in Table 6.5.
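Step 2's DFMOOP is simply the original objectives passed through the DFs of Step 1. A sketch of that composition follows (hypothetical names and bounds; the all-convex Preference 4 DF is used for illustration):

```python
def convex_df(f, lo, hi, n):
    """All-convex DF of Equations (6.12)-(6.13): clamp to [0, 1], then power n."""
    return min(1.0, max(0.0, (f - lo) / (hi - lo))) ** n

def dfmoop_objectives(r, q_bounds, c_bounds, n1, n2, raw_objectives):
    """Map the raw MOOP objectives (Q_s, C_s) through the DFs of Step 1,
    giving the vector (mu_Q, mu_C) that the MOEA minimises in Step 2.

    raw_objectives is any callable returning (Q_s, C_s) for a design r;
    q_bounds and c_bounds are the (f*, f**) pairs initialised in Step 0.
    """
    q, c = raw_objectives(r)
    return convex_df(q, *q_bounds, n1), convex_df(c, *c_bounds, n2)
```

A Pareto-optimal point of the transformed problem then maps back to a Pareto-optimal point of the original MOOP, which is what Step 3 relies on.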
Step 3: Evaluation of the Solution
As shown in Chapters 2-5, a Pareto-optimal solution of the DFMOOP (Equation 6.16) is also
a Pareto-optimal solution of the MOOP (Equation 6.15). We have plotted the best POF
(on the basis of the maximum hypervolume metric) obtained through various runs of the
algorithms on the series system for each case. The POF of the No Preference case is
shown in Figure 6.7. For the rest of the cases the plots are shown in Figure 6.10.
Step 4: Adjustment of the Parameters
If the DM is satisfied by the solution obtained in Step 3, the approach stops
successfully. Otherwise the key preference parameters shown in Table 6.4 can be
altered to meet the DM's choice and the method goes back to Step 2. The process is
repeated until the DM is satisfied. We show just one run of the approach here, as we
assume that in this problem the DM is satisfied by the results obtained in Step 3.
6.3 Reliability Optimization of Life Support System in a Space
Capsule
This application concerns the reliability design of a life-support system in a space
capsule (Shelokar et al., 2002; Salazar et al., 2006); its configuration is presented in
Figure 6.2. The system, which requires a single path for its success, has two redundant
subsystems, each comprising components 1 and 4. Each of the redundant subsystems is
in series with component 2, and the resultant pair of series-parallel arrangements forms
two equal paths. Component 3 is inserted as a third path and backup for the pair. The
continuous optimization models that were originally formulated for the reliability
design of this system approached the problem in two different ways: Shelokar et al.
(2002) adopted a single-criterion methodology in which a cost function of component
reliability was minimised, subject to constraints on system and component
reliabilities. On the other hand, Salazar et al. (2006) used a bi-criterion approach using a
number of heuristic algorithms such as ant colony optimization, tabu search, and
NSGA-II.
The block diagram is presented in Figure 6.2. The system reliability $R_s$, unreliability
$Q_s$ and system cost $C_s$ are given by (Tillman et al., 1980):

$$\text{Maximize } R_s = 1 - r_3\left[(1-r_1)(1-r_4)\right]^2 - (1-r_3)\left[1 - r_2\{1-(1-r_1)(1-r_4)\}\right]^2$$

or

$$\text{Minimize } Q_s = 1 - R_s; \qquad \text{Minimize } C_s = 2K_1 r_1^{\alpha_1} + 2K_2 r_2^{\alpha_2} + K_3 r_3^{\alpha_3} + 2K_4 r_4^{\alpha_4};$$

subject to $0.5 \le r_i \le 1$, $i = 1, 2, 3, 4$.
In other words, the problem can be posed as a MOOP given by

$$\text{Minimize } (Q_s, C_s) \quad \text{subject to } 0.5 \le r_i \le 1,\ i = 1, 2, 3, 4; \quad (6.17)$$

where the vectors of coefficients $K_i$ and $\alpha_i$ are $K = \{100, 100, 200, 150\}$ and
$\alpha = \{0.6, 0.6, 0.6, 0.6\}$ respectively.
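A sketch of the objective evaluation for this system follows. The first cost term (2 K1 r1^alpha1), lost in the scan, is restored here from the pattern of the remaining terms, and the function name is illustrative.

```python
# Coefficient vectors K and alpha from Equation (6.17).
K = [100.0, 100.0, 200.0, 150.0]
ALPHA = [0.6, 0.6, 0.6, 0.6]

def life_support_objectives(r):
    """Return (Q_s, C_s) for the space-capsule life-support system."""
    r1, r2, r3, r4 = r
    pair_fail = (1.0 - r1) * (1.0 - r4)  # failure probability of one {1, 4} pair
    rs = (1.0 - r3 * pair_fail ** 2
          - (1.0 - r3) * (1.0 - r2 * (1.0 - pair_fail)) ** 2)
    cs = (2 * K[0] * r1 ** ALPHA[0] + 2 * K[1] * r2 ** ALPHA[1]
          + K[2] * r3 ** ALPHA[2] + 2 * K[3] * r4 ** ALPHA[3])
    return 1.0 - rs, cs
```

As a sanity check, perfect components (all r_i = 1) give Q_s = 0 at cost 2(100) + 2(100) + 200 + 2(150) = 900.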
6.3.1 Step-by-Step Illustration of MOEA-IDFA for Life Support System in a
Space Capsule
Step 0: Initialization of A Priori Parameters
The DF's a priori preference parameters need to be initialized to construct the
DFs $(\mu_{Q_s}$ and $\mu_{C_s})$ for each objective (i.e., $Q_s$ and $C_s$), as shown in Table 6.6.
Step 1: Construction of DFs
The DFs $(\mu_{Q_s}$ and $\mu_{C_s})$ corresponding to the objectives $Q_s$ and $C_s$ are constructed based
upon the DM's preference parameters shown in Table 6.7 and the initialized parameters
of Step 0:
• in the case of No Preference, Equations (6.1)-(6.2) are used,
• in the case of Preference 1, Equations (6.4)-(6.5) are used,
• in the case of Preference 2, Equations (6.6)-(6.7) are used,
• in the case of Preference 3, Equations (6.9)-(6.10) are used, and
• in the case of Preference 4, Equations (6.12)-(6.13) are used.
Step 2: Solving the DFMOOP
The following DFMOOP is formed using the DFs obtained in Step 1, which needs to be
solved by any MOEA:

$$\text{Minimize } (\mu_{Q_s}, \mu_{C_s}) \quad \text{subject to } 0.5 \le r_i \le 1,\ i = 1, 2, 3, 4; \quad (6.18)$$

For each case (preference), the combination of DFs will be different and so the problem
will be different for each case shown above. We are using NSGA-II and MOPSO-CD
to solve the DFMOOP obtained in Equation (6.18). The optimal parameter settings used
for both of these algorithms are provided in Table 6.8.
Step 3: Evaluation of the Solution
As shown in Chapters 2-5, a Pareto-optimal solution of the DFMOOP (Equation 6.18) is also
a Pareto-optimal solution of the MOOP (Equation 6.17). We have plotted the best POF
obtained through various runs of the algorithms for each case. The
POF of the No Preference case is shown in Figure 6.8. For the rest of the cases the plots are
shown in Figure 6.11.
Step 4: Adjustment of the Parameters
If the DM is satisfied by the solution obtained in Step 3, the approach stops
successfully. Otherwise, the key preference parameters shown in Table 6.7 can be
altered to meet the DM's choice and the method goes back to Step 2. The process is
repeated until the DM is satisfied. We are showing just one run of the approach here, as we
assume that in this problem the DM is satisfied by the results obtained in Step 3.
6.4 Reliability Optimization of a Complex Bridge System
Here a bridge network system, as shown in Figure 6.3, has been considered, with each
component having a reliability ri, i = 1, 2, ..., 5. Misra and Agnihotri (2009) have investigated the
peculiarities associated with the system. The system reliability Rs, unreliability Qs and
system cost Cs are given by (Tillman et al., 1980):

Rs = r1r4 + r2r5 + r2r3r4 + r1r3r5 + 2r1r2r3r4r5
     - r1r2r4r5 - r1r2r3r4 - r1r3r4r5 - r2r3r4r5 - r1r2r3r5;

Cs = Σ(i=1 to 5) ai exp(bi / (1 - ri)).

The problem is to find the decision variables ri, i = 1, 2, ..., 5 which minimize both
Qs and Cs subject to 0 ≤ ri ≤ 1, i = 1, 2, ..., 5.
In other words, the problem can be posed as a MOOP given by

Minimize (Qs, Cs)
subject to 0 ≤ ri ≤ 1, i = 1, 2, ..., 5;                                    (6.19)

where ai = 1 and bi = 0.0003, ∀i, i = 1, 2, ..., 5.
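An illustrative sketch (not the thesis's code) of the two objectives of Equation (6.19), using the standard closed-form bridge-network reliability and the exponential cost with ai = 1, bi = 0.0003; the function names are hypothetical:

```python
# Illustrative sketch (not the thesis's code): the objectives of Equation
# (6.19), using the standard closed-form bridge-network reliability and the
# cost Cs = sum_i a_i * exp(b_i / (1 - r_i)) with a_i = 1, b_i = 0.0003.
import math

A_COEF, B_COEF = 1.0, 0.0003

def bridge_reliability(r):
    """Closed-form Rs of the five-component bridge (component 3 is the bridge)."""
    r1, r2, r3, r4, r5 = r
    return (r1*r4 + r2*r5 + r2*r3*r4 + r1*r3*r5 + 2*r1*r2*r3*r4*r5
            - r1*r2*r3*r4 - r1*r2*r3*r5 - r1*r2*r4*r5
            - r1*r3*r4*r5 - r2*r3*r4*r5)

def bridge_cost(r):
    """Cost diverges as any r_i approaches 1, so keep r_i strictly below 1."""
    return sum(A_COEF * math.exp(B_COEF / (1.0 - ri)) for ri in r)

def bridge_objectives(r):
    return 1.0 - bridge_reliability(r), bridge_cost(r)
```

With r3 = 0 the expression collapses to 1 - (1 - r1r4)(1 - r2r5), i.e. two series paths in parallel, which is a quick sanity check on the closed form.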
6.4.1 Step-by-Step Illustration of MOEA-IDFA for Complex Bridge System
Step 0: Initialization of a Priori Parameters
The DFs' a priori preference parameters need to be initialized to construct the
DFs (μQs and μCs) for each objective (i.e., Qs and Cs), as shown in Table 6.9.
Step 1: Construction of DFs
The DFs (μQs and μCs) corresponding to the objectives Qs and Cs are constructed based
upon the DM's preference parameters shown in Table 6.10 and the initialized
parameters of Step 0:
• in case of No Preference, Equations (6.1-6.2) are used,
• in case of Preference 1, Equations (6.4-6.5) are used,
• in case of Preference 2, Equations (6.6-6.7) are used,
• in case of Preference 3, Equations (6.9-6.10) are used and
• in case of Preference 4, Equations (6.12-6.13) are used.
Step 2: Solving the DFMOOP
The following DFMOOP is formed using the DFs obtained in Step 1 that needs to be
solved by any MOEA.

Minimize (μQs, μCs)
subject to 0 ≤ ri ≤ 1, i = 1, 2, ..., 5;                                    (6.20)

For each case (preference), the combination of DFs will be different and so the problem
will be different for each case shown above. We are using NSGA-II and MOPSO-CD
to solve the DFMOOP obtained in Equation 6.20. The optimal parameter settings used
for both of these algorithms are provided in Table 6.11.
Step 3: Evaluation of the Solution
As shown in Chapters 2-5, a Pareto-optimal solution of the DFMOOP (Equation 6.20) is also
a Pareto-optimal solution of the MOOP (Equation 6.19). We have plotted the best POF
obtained through various runs of the algorithms for the complex bridge system for each case. The
POF of the No Preference case is shown in Figure 6.9. For the rest of the cases the plots are
shown in Figure 6.12.
Step 4: Adjustment of the Parameters
If the DM is satisfied by the solution obtained in Step 3, the approach stops
successfully. Otherwise, the key preference parameters shown in Table 6.10 can be
altered to meet the DM's choice and the method goes back to Step 2. The process is
repeated until the DM is satisfied. We are showing just one run of the approach here, as we
assume that in this problem the DM is satisfied by the results obtained in Step 3.
6.5 Residual Heat Removal (RHR) System of a Nuclear Power Plant
Safety System
This problem considers the robust design of a Nuclear Power Plant safety system: the
Residual Heat Removal (RHR) system is a low-pressure system (400 psi) directly
connected to the primary system, which is at higher pressure (1200 psi), where psi
stands for pounds per square inch. The RHR constitutes an essential part of the
low-pressure core flooding system which is part of the Emergency Core Cooling System of
a nuclear reactor. Its objectives are:
• to remove decay and residual heat from the reactor so that refueling and servicing
can be performed,
• to supplement the spent fuel cooling capacity,
• to condense reactor steam so that decay and residual heat may be removed if the
main condenser is unavailable following a reactor scram.
A schematic of the system is shown in Figure 6.5 (Marseguerra et al., 2002).
The unreliability of the system Qs can be obtained from the simplified fault tree shown
in Figure 6.6, in which the 16 most important (third-order) cut sets (see Table 6.2),
originated by the combination of eight basic events, are considered (Apostolakis, 1974;
Marseguerra et al., 2002). The simplified expression for Qs reads:
Qs = q1q3q5 + q1q3q6(1 - q5) + q1q3q7(1 - q5)(1 - q6) +
     q1q3q8(1 - q5)(1 - q6)(1 - q7) + q1q4q5(1 - q3) + q1q4q6(1 - q3)(1 - q5) +
     q1q4q7(1 - q3)(1 - q5)(1 - q6) + q1q4q8(1 - q3)(1 - q5)(1 - q6)(1 - q7) +
     q2q3q5(1 - q1) + q2q3q6(1 - q1)(1 - q5) + q2q3q7(1 - q1)(1 - q5)(1 - q6) +
     q2q3q8(1 - q1)(1 - q5)(1 - q6)(1 - q7) + q2q4q5(1 - q1)(1 - q3) +
     q2q4q6(1 - q1)(1 - q3)(1 - q5) + q2q4q7(1 - q1)(1 - q3)(1 - q5)(1 - q6) +
     q2q4q8(1 - q1)(1 - q3)(1 - q5)(1 - q6)(1 - q7),
where qi and (1 - qi) = ri are respectively the component unreliability and component
reliability of the ith basic event, subject to the constraint 0 ≤ qi ≤ 1. Furthermore,
the system design problem normally also seeks to constrain the system cost, here taken
equal to a nonlinear combination of the component reliabilities:

Cs = Σ(i=1 to 8) Ki ri^ai.

In this problem, the vector of coefficients of the nonlinear combination is
K = {100, 100, 100, 150, 100, 100, 100, 150} and the exponents are ai = 0.6, ∀i, i = 1, 2, ..., 8.
In the approach proposed in this work, the system cost (Cs) is considered as an
objective to be minimized instead of a constraint, and this, together with the
minimization of the system unreliability (Qs), leads to a MOOP:

Minimize (Qs, Cs)
subject to 0 ≤ ri ≤ 1, i = 1, 2, ..., 8.                                    (6.21)
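An illustrative sketch (not the thesis's code) of the two objectives in Equation (6.21); the nested loop reproduces the 16 third-order cut-set terms of Qs, and the cost form Cs = Σ Ki ri^0.6 follows the partly garbled source, so treat it as an assumption:

```python
# Illustrative sketch (not the thesis's code): the two objectives of
# Equation (6.21). The nested loop reproduces the 16 third-order cut-set
# terms of Qs; the cost form Cs = sum_i K_i * r_i^0.6 follows the (partly
# garbled) source and should be treated as an assumption.
K = [100, 100, 100, 150, 100, 100, 100, 150]

def rhr_unreliability(q):
    """Qs from the 16 cut sets; q[0:2] are the modes of check valve F019,
    q[2:4] of valve F022, q[4:8] of valve F023 (cf. Table 6.2)."""
    q1, q2, q3, q4, q5, q6, q7, q8 = q
    qs = 0.0
    for qa, fa in ((q1, 1.0), (q2, 1.0 - q1)):        # F019 failure modes
        for qb, fb in ((q3, 1.0), (q4, 1.0 - q3)):    # F022 failure modes
            rem = 1.0                                  # product of (1 - qc) so far
            for qc in (q5, q6, q7, q8):                # F023 failure modes
                qs += qa * qb * qc * fa * fb * rem
                rem *= 1.0 - qc
    return qs

def rhr_cost(q):
    """Cs as a nonlinear combination of component reliabilities r_i = 1 - q_i."""
    return sum(k * (1.0 - qi) ** 0.6 for k, qi in zip(K, q))
```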
6.5.1 Step-by-Step Illustration of MOEA-IDFA for RHR System
Step 0: Initialization of a Priori Parameters
The DFs' a priori preference parameters need to be initialized to construct the
DFs (μQs and μCs) for each objective (i.e., Qs and Cs), as shown in Table 6.12.
Step 1: Construction of DFs
The DFs (μQs and μCs) corresponding to the objectives Qs and Cs are constructed based
upon the DM's preference parameters shown in Table 6.13 and the initialized
parameters of Step 0:
• in case of No Preference, Equations (6.1-6.2) are used,
• in case of Preference 1, Equations (6.4-6.5) are used,
• in case of Preference 2, Equations (6.6-6.7) are used and
• in case of Preference 4, Equations (6.12-6.13) are used.
We were not able to find the parameter setting for this problem corresponding to
Preference 3.
Step 2: Solving the DFMOOP
The following DFMOOP is formed using the DFs obtained in Step 1 that needs to be
solved by any MOEA.

Minimize (μQs, μCs)
subject to 0 ≤ ri ≤ 1, i = 1, 2, ..., 8;                                    (6.22)

For each case (preference), the combination of DFs will be different and so the problem
will be different for each case shown above. We are using NSGA-II and MOPSO-CD
to solve the DFMOOP obtained in Equation 6.22. The optimal parameter settings used
for both of these algorithms are provided in Table 6.14.
Step 3: Evaluation of the Solution
As shown in Chapters 2-5, a Pareto-optimal solution of the DFMOOP (Equation 6.22) is also
a Pareto-optimal solution of the MOOP (Equation 6.21). We have plotted the best POF
obtained through various runs of the algorithms for the RHR system for each case. The
plots are shown in Figure 6.13.
Step 4: Adjustment of the Parameters
If the DM is satisfied by the solution obtained in Step 3, the approach stops
successfully. Otherwise, the key preference parameters shown in Table 6.13 can be
altered to meet the DM's choice and the method goes back to Step 2. The process is
repeated until the DM is satisfied. We are showing just one run of the approach here, as we
assume that in this problem the DM is satisfied by the results obtained in Step 3.
6.6 Reliability Optimization of a Mixed Series-Parallel System
The multi-objective reliability optimization problem is taken from Sakawa (1978) and
Ravi et al. (2000). The block diagram for this problem is in Figure 6.4. Relevant data
for this problem is given in Table 6.1. A multistage mixed system is considered,
where the problem is to allocate the optimal reliabilities rj, j = 1, 2, 3, 4 of four
components whose redundancies are specified in order to achieve the following three
goals:
Maximize Rs, minimize Cs, minimize Ws

or

Minimize Qs, minimize Cs, minimize Ws
subject to Vs ≤ 65; Ps ≤ 12000; 0.5 ≤ rj ≤ 1, j = 1, 2, 3, 4.

In other words, the problem can be posed as a MOOP of three objectives given by

Minimize (Qs, Cs, Ws)
subject to Vs = Σ(j=1 to 4) Vj nj ≤ 65; Ps ≤ 12000;
0.5 ≤ rj ≤ 1, j = 1, 2, 3, 4;                                    (6.23)
where Rs, Qs, Cs, Ws and Vs are the reliability, unreliability, cost, weight and volume of the
system respectively. Here, rj represents the reliability of the jth component of the
system. In addition, we have

Ps = Ws · Vs;
Rs = Π(j=1 to 4) [1 - (1 - rj)^nj];  Cs = Σ(j=1 to 4) Cj = Σ(j=1 to 4) kj Vj nj;
Wj = αj [log10(1/(1 - rj))]^βjW;  Vj = γj [log10(1/(1 - rj))]^βjV.
Table 6.1 Data for Mixed Series-Parallel System
αj: 8.0, 6.0, 2.0; γj: 2.0, 0.5, 0.5;
βjC: 2.0, 10.0, 3.0, 18.0; βjW: 3.0, 2.0, 10.0, 8.0; βjV: 2.0, 2.0, 6.0, 8.0;
n1 = 7, n2 = 8, n3 = 7, n4 = 8.
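Only the system-reliability part of this model is unambiguous in the source, so the following sketch (illustrative, not the thesis's code) covers Rs alone, with the redundancies n = (7, 8, 7, 8) from Table 6.1:

```python
# Illustrative sketch covering only the unambiguous part of the model:
# Rs = prod_j [1 - (1 - r_j)^n_j] for the four-stage mixed system, with
# the redundancies n = (7, 8, 7, 8) from Table 6.1.
N = (7, 8, 7, 8)

def mixed_system_reliability(r):
    """System reliability of four serial stages of parallel components."""
    rs = 1.0
    for rj, nj in zip(r, N):
        rs *= 1.0 - (1.0 - rj) ** nj   # stage j: n_j components in parallel
    return rs
```

Even at rj = 0.5 the redundancy levels push the system reliability above 0.97, which is why the optimizer can trade component reliability against cost and weight.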
6.6.1 Step-by-Step Illustration of MOEA-IDFA for Mixed Series-Parallel System
Step 0: Initialization of a Priori Parameters
The DFs' a priori preference parameters need to be initialized to construct the
DFs (μQs, μCs and μWs) for each objective (i.e., Qs, Cs and Ws), as shown in Table 6.15.
Step 1: Construction of DFs
The DFs (μQs, μCs and μWs) corresponding to the objectives Qs, Cs and Ws are constructed with the
help of the DM's preference parameters shown in Table 6.16 and the initialized parameters
of Step 0:
• in case of No Preference, Equations (6.1-6.3) are used,
• in case of Preference 2, Equations (6.6-6.8) are used,
• in case of Preference 3, Equations (6.9-6.11) are used and
• in case of Preference 4, Equations (6.12-6.14) are used.
Step 2: Solving the DFMOOP
The following DFMOOP is formed using the DFs obtained in Step 1 that needs to be
solved by any MOEA.

Minimize (μQs, μCs, μWs)
subject to Vs = Σ(j=1 to 4) Vj nj ≤ 65; Ps ≤ 12000;
0.5 ≤ rj ≤ 1, j = 1, 2, 3, 4;                                    (6.24)

For each case (preference), the combination of DFs will be different and so the problem
will be different for each case shown above. We are using NSGA-II and MOPSO-CD
to solve the DFMOOP obtained in Equation 6.24. The optimal parameter settings used
for both of these algorithms are provided in Table 6.17.
Step 3: Evaluation of the Solution
As shown in Chapters 2-5, a Pareto-optimal solution of the DFMOOP (Equation 6.24) is also
a Pareto-optimal solution of the MOOP (Equation 6.23). We have plotted the best POF
obtained (having the maximum hypervolume metric) through various runs of the algorithms
for the mixed series-parallel system for each case. The plots are shown in Figure 6.14.
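Selecting the best front of a run relies on discarding dominated points and ranking fronts by the hypervolume metric. A minimal sketch of both utilities (illustrative, not the thesis's implementation; both objectives minimized, and the hypervolume routine shown covers only the bi-objective case):

```python
# Illustrative sketch (not the thesis's implementation): extracting the
# non-dominated front of a run, and scoring a bi-objective front by its
# hypervolume w.r.t. a reference point. Both objectives are minimized and
# front points are assumed to dominate the reference point.
def non_dominated(points):
    """Keep the points (tuples) not dominated by any other point."""
    front = []
    for p in points:
        dominated = any(all(o <= pi for o, pi in zip(q, p)) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

def hypervolume_2d(front, ref):
    """Area dominated by a two-objective front, bounded above by ref."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):       # ascending in the first objective
        if f2 < prev_f2:
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

A larger hypervolume indicates a front that is simultaneously closer to the ideal point and better spread, which is why it is used to pick the best run.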
Step 4: Adjustment of the Parameters
If the DM is satisfied by the solution obtained in Step 3, the approach stops
successfully. Otherwise, the key preference parameters shown in Table 6.16 can be
altered to meet the DM's choice and the method goes back to Step 2. The process is
repeated until the DM is satisfied. We are showing just one run of the approach here, as we
assume that in this problem the DM is satisfied by the results obtained in Step 3.
6.7 Conclusion
We have suggested a novel way of taking preference information coming from the DM
more closely into account in DF based methods developed for multi-objective
optimization. Our goal is to be able to produce portions (subsets) of the Pareto-optimal set
that are more relevant to the DM than the ones produced with standard MOEAs.

We have carried out several computational tests in order to compare the outputs
of the MOEA-IDFA approach using different combinations (all linear, convex-concave, all
sigmoidal and all convex) of DFs. With five reliability optimization problems of
multi-objective nature, we have tested all the four cases (types of combinations of DFs used)
without real DMs: four different sets of DFs are utilized to simulate the DM's preference in
the solution process in order to illustrate the effectiveness of the proposed methodology
in capturing different kinds of preference structures (different portions of the POF) of
the DM. In other words, we have replaced the responses of the DM by DFs. The results
are encouraging and suggest the applicability of the proposed approach to more
complex and real-world engineering problems.
Figure 6.1 Block Diagram of Series System

Figure 6.2 Block Diagram of a Life Support System in a Space Capsule

Figure 6.3 Block Diagram of Complex Bridge System

Figure 6.4 Mixed Series-Parallel System
Figure 6.5 Schematic of the RHR of a Nuclear Power Plant
Table 6.2 Third-order minimal cut sets. The 16 cut sets are formed by combining one
failure mode of each of the following basic events:
Check valve F019 fails to close (i = 1): valve clapper stuck open (j = 1); valve clapper
fails at design pressure (j = 2).
Valve F022 opened (i = 2): left open by mistake (j = 1); mechanical failure (j = 2).
Valve F023 opened (i = 3): internal valve failure (j = 1); casing rupture (j = 2);
inadvertently left open (j = 3); left open for future maintenance (j = 4).
Figure 6.6 Simplified Fault Tree of the RHR System (Apostolakis, 1974)
Figure 6.7 POFs w.r.t. NSGA-II and MOPSO-CD of a Series System (No Preference Case)
Figure 6.8 POFs w.r.t. NSGA-II and MOPSO-CD of a Life Support System in a Space Capsule
(No Preference Case)
Figure 6.9 POFs w.r.t. NSGA-II and MOPSO-CD of a Complex Bridge System (No Preference
Case)
Table 6.3 Initial a Priori Parameters for Series System
(parameters initialized in Step 0)
Qs* = 0.031; Qs** = 0.9509; Cs* = 385.5; Cs** = 585.88
Table 6.4 Other a Priori Parameters for Series System
Preference 1: n1 = 10, n2 = 0.001; Preference 2: n1 = 0.001, n2 = 10;
Preference 3: a1 = 20, a2 = 0.1; Preference 4: n1 = 15, n2 = 15.
Table 6.5 A Posteriori Parameters for Series System
(columns: Pop. Size | Max. Gen. | Mut. Prob. — common; Arch. Size | w | c1 | c2 —
MOPSO-CD; Cross. Prob. | Cross. Index | Mut. Index — NSGA-II)
No Preference: 200 | 300 | 0.4 | 200 | 0.4 | 1 | 1 | 0.8 | 10 | 10
Preference 1:  200 | 300 | 0.4 | 200 | 0.5 | 1 | 1 | 0.8 | 15 | 15
Preference 2:  200 | 350 | 0.5 | 200 | 0.5 | 1 | 1 | 0.9 | 15 | 15
Preference 3:  200 | 350 | 0.5 | 200 | 0.5 | 1 | 1 | 0.9 | 10 | 15
Preference 4:  200 | 350 | 0.5 | 200 | 0.6 | 1.1 | 1.1 | 0.9 | 10 | 15
Figure 6.10 POFs w.r.t. NSGA-II and MOPSO-CD of a Series System at different preferences:
(a) For Preference 1, (b) For Preference 2, (c) For Preference 3, (d) For Preference 4
Table 6.6 Initial a Priori Parameters for Life Support System in a Space Capsule
(parameters initialized in Step 0)
Qs* = 0.01; Qs** = 0.1; Cs* = 641; Cs** = 700
Table 6.7 Other a Priori Parameters for Life Support System in a Space Capsule
Preference 1: n1 = 10, n2 = 0.001; Preference 2: n1 = 0.001, n2 = 10;
Preference 3: a1 = 170, a2 = 170; Preference 4: n1 = 10, n2 = 10.
Table 6.8 A Posteriori Parameters for Life Support System in a Space Capsule
(columns: Pop. Size | Max. Gen. | Mut. Prob. — common; Arch. Size | w | c1 | c2 —
MOPSO-CD; Cross. Prob. | Cross. Index | Mut. Index — NSGA-II)
No Preference: 200 | 400 | 0.6 | 200 | 0.4 | 1 | 1 | 0.8 | 10 | 10
Preference 1:  200 | 400 | 0.7 | 200 | 0.5 | 1 | 1 | 0.8 | 15 | 10
Preference 2:  200 | 400 | 0.7 | 200 | 0.5 | 1 | 1 | 0.9 | 15 | 15
Preference 3:  200 | 450 | 0.7 | 200 | 0.6 | 1 | 1 | 0.9 | 10 | 10
Preference 4:  200 | 450 | 0.8 | 200 | 0.6 | 1.0 | 1.0 | 0.9 | 10 | 10
Figure 6.11 POFs w.r.t. NSGA-II and MOPSO-CD of a Life Support System in a Space Capsule
at Different Preferences: (a) For Preference 1, (b) For Preference 2, (c) For Preference 3,
(d) For Preference 4
Table 6.9 Initial a Priori Parameters for Complex Bridge System
(parameters initialized in Step 0)
Qs* = 0.0160; Qs** = 1.0; Cs* = 5.0015; Cs** = 5.01686
Table 6.10 Other a Priori Parameters for Complex Bridge System
Preference 1: n1 = 5, n2 = 0.1; Preference 2: n1 = 0.1, n2 = 5.0;
Preference 3: a1 = 1000, a2 = 1000; Preference 4: n1 = 5, n2 = 5.
Table 6.11 A Posteriori Parameters for Complex Bridge System
(columns: Pop. Size | Max. Gen. | Mut. Prob. — common; Arch. Size | w | c1 | c2 —
MOPSO-CD; Cross. Prob. | Cross. Index | Mut. Index — NSGA-II)
No Preference: 200 | 200 | 0.6 | 200 | 0.5 | 1 | 1 | 0.8 | 10 | 10
Preference 1:  200 | 200 | 0.7 | 200 | 0.5 | 1 | 1 | 0.8 | 10 | 10
Preference 2:  200 | 250 | 0.7 | 200 | 0.5 | 1 | 1 | 0.9 | 10 | 10
Preference 3:  200 | 250 | 0.7 | 200 | 0.5 | 1 | 1 | 0.6 | 10 | 10
Preference 4:  200 | 250 | 0.7 | 200 | 0.6 | 1.0 | 1.0 | 0.6 | 10 | 10
Figure 6.12 POFs w.r.t. NSGA-II and MOPSO-CD of a Complex Bridge System at different
preferences: (a) For Preference 1, (b) For Preference 2, (c) For Preference 3,
(d) For Preference 4
Table 6.12 Initial a Priori Parameters for RHR System
(parameters initialized in Step 0)
Qs* = 0; Qs** = 1; Cs* = 0.413412; Cs** = 412.0398
Table 6.13 Other a Priori Parameters for RHR System
Preference 1: n1 = 15, n2 = 0.001; Preference 2: n1 = 0.001, n2 = 15;
Preference 4: n1 = 15, n2 = 15.
Table 6.14 A Posteriori Parameters for RHR System
(columns: Pop. Size | Max. Gen. | Mut. Prob. — common; Arch. Size | w | c1 | c2 —
MOPSO-CD; Cross. Prob. | Cross. Index | Mut. Index — NSGA-II)
No Preference: 200 | 300 | 0.3 | 200 | 0.6 | 1 | 1 | 0.9 | 10 | 10
Preference 1:  200 | 300 | 0.3 | 200 | 0.6 | 1 | 1 | 0.8 | 10 | 10
Preference 2:  200 | 350 | 0.1 | 200 | 0.6 | 1 | 1 | 0.9 | 10 | 10
Preference 4:  200 | 350 | 0.1 | 200 | 0.6 | 1 | 1 | 0.9 | 10 | 15
Figure 6.13 POFs w.r.t. NSGA-II and MOPSO-CD of the RHR System at different preferences:
(a) For No Preference, (b) For Preference 1, (c) For Preference 2, (d) For Preference 4
Table 6.15 Initial a Priori Parameters for Mixed Series-Parallel System
(parameters initialized in Step 0)
Qs* = 0.0025; Qs** = 0.023; Cs* = 317.552; Cs** = 381.935; Ws* = 174.872; Ws** = 186.665
Table 6.16 Other a Priori Parameters for Mixed Series-Parallel System
Preference 2: n1 = 10, n2 = 0.001, n3 = 0.001; Preference 3: a1 = 800, a2 = 0.35, a3 = 2;
Preference 4: n1 = 10, n2 = 10, n3 = 10.
Table 6.17 A Posteriori Parameters for Mixed Series-Parallel System
(columns: Pop. Size | Max. Gen. | Mut. Prob. — common; Arch. Size | w | c1 | c2 —
MOPSO-CD; Cross. Prob. | Cross. Index | Mut. Index — NSGA-II)
No Preference: 200 | 200 | 0.4 | 200 | 0.4 | 1 | 1 | 0.8 | 10 | 10
Preference 2:  200 | 350 | 0.4 | 200 | 0.5 | 1 | 1 | 0.9 | 15 | 15
Preference 3:  200 | 350 | 0.4 | 200 | 0.5 | 1.1 | 1.1 | 0.9 | 10 | 10
Preference 4:  200 | 400 | 0.1 | 200 | 0.6 | 1 | 1 | 0.9 | 10 | 10
Figure 6.14 POFs w.r.t. NSGA-II and MOPSO-CD of the Mixed Series-Parallel System at
different preferences: (a) For No Preference, (b) For Preference 2, (c) For Preference 3,
(d) For Preference 4
CHAPTER 7
Conclusions and Scope for Future Work
The fundamental theme of this study is the development of an efficient strategy to
provide the DM with the guided or interactive POF of his/her interest. Various benchmark
problems as well as real-world engineering problems (reliability optimization
problems) are examined to ensure the efficacy of the approach proposed. This study
proposes a novel scheme for user preference incorporation in MOEAs. The results on
the single and multiple target scenarios indicate the ability of our scheme to efficiently
explore and focus the search on the regions of interest to the DM. The main advantages
of our scheme are its general applicability and the straightforward integration of
decision-making and optimization. The expert establishes his/her preference in light of
the feasible alternatives rather than specifying trade-offs a priori.
This chapter is organized as follows: Section 7.1 derives the conclusions based
on the present study and Section 7.2 enlists the suggestions for further work in this
direction.
7.1 Conclusions
In this study, we have addressed the important task of combining an MOEA with a DFA
not to find a single Pareto-optimal solution, but to find a set of solutions near the desired
regions of the DM's interest on the POF. With a number of trade-off solutions in the regions
of interest, we have argued that the DM would be able to make a better and more
reliable decision than with a single solution. A new interactive approach, named
MOEA-IDFA, is proposed to guide the POF towards the region/regions of interest of the DM.
The proposed approach is better than IDFA in that MOEA-IDFA
provides guided region/regions, unlike the single solution of IDFA, in one run of
the approach. A guided POF provides the DM flexibility (by providing more than one
solution) in choosing the final solution, and also reduces the burden on the DM by not
providing unnecessary (non-preferred) solutions. It is concluded that the use of different
types of DFs exerts different kinds of bias on the POF. We have categorized the DFs on the
basis of the choice of the DM.
MOEAs (NSGA-II and MOPSO-CD) have been employed to solve the DFMOOP and
their results have been compared. This methodology links ideas and contributions that
span the following areas:
• Preference is the main characteristic of a MOOP that produces the different
solutions suited to the various requirements of the DM. Preference is often
expressed by goals, importance, priorities, and weights. The structure of a DF is an
alternative way to denote the preference of the DM.
• It attempts to find a guided POF according to the wish of the DM.
• Structure of the method is simple and mathematical complexities are very few.
Hence, it can be easily programmed and implemented.
• MOEA-IDFA is robust and does not require assurance from the user regarding the
mathematical properties (such as continuity, differentiability and convexity, etc.) of
the objective functions and constraints.
• One of the most important practical advantages of this approach is that the
mathematical models of real-life optimization problems can be solved.
• Basic theoretical analysis of the approach is also presented in this work.
• Apart from single region of interest, multiple regions of interest of the DM are also
incorporated, which is a unique achievement of this study.
• Both bi-objective and tri-objective optimization problems are examined here in
perspective of MOEA-IDFA.
• Two very popular MOEAs (NSGA-II and MOPSO-CD) have been utilized to
explain the functioning of MOEA-IDFA.
• Several standard test problems as well as five real-world problems corresponding to
reliability optimization have been solved using MOEA-IDFA, which proved the
usefulness of the approach.
7.2 Future Scope
There may exist several interesting directions for further research and development
based on the work in this thesis. Some of the suggestions for future work in this
direction are:
• Since the proposed approach is a MOEA based approach, a comparative study of
MOEA-IDFA with other multi-objective computational algorithms such as SPEA2,
DE and ACO type methods needs to be carried out.
• Other types of DFs can be developed to guide the POF into other interesting regions
of the DM's interest.
• In this study, at most two regions of a POF have been obtained on the basis of the
DM's interest. In the future, more than two regions can be obtained simultaneously
by developing proper combinations of DFs.
• Theoretical analysis of this approach is a major area in which a lot of work needs to
be done. The POFs of the MOOP and the DFMOOP have an interesting relationship
that needs to be investigated theoretically.
• Research needs to be carried out to determine the applicability of different
combinations of DFs in different applications.
• The relationship between membership functions and DFs needs to be explored
theoretically as well as numerically.
• Effect of MOEA-IDFA needs to be investigated on MOOP having more than three
objectives.
BIBLIOGRAPHY
[1]
Acan, A. (2004). An external memory implementation in ant colony
optimization. Ant Colony, Optimization and Swarm Intelligence, 247-269.
[2]
Acan, A. (2005). An external partial permutations memory for ant colony
optimization. Evolutionary Computation in Combinatorial Optimization, 1-11.
[3]
Adra, S., Griffin, I., and Fleming, P. (1993). A comparative study of progressive
preference articulation techniques for multiobjective optimisation (Springer
Berlin / Heidelberg).
[4]
Aggarwal, K. K., Gupta, J. S., and Misra, K. B. (2009a). A new heuristic
criterion for solving a redundancy optimization problem. Reliability, IEEE
Transactions on 24, 86-87.
[5]
Aggarwal, K. K., Misra, K. B., and Gupta, J. S. (2009b). A fast algorithm for
reliability evaluation. Reliability, IEEE Transactions on 24, 83-85.
[6]
Agrawal, S., Panigrahi, B. K., and Tiwari, M. K. (2008). Multiobjective particle
swarm algorithm with fuzzy clustering for electrical power dispatch. IEEE
Transactions on Evolutionary Computation 12, 529-541.
[7]
Amari, S. V., and Dill, G. (2010). Redundancy optimization problem with
warm-standby redundancy. In Reliability and Maintainability Symposium
(RAMS), 2010 Proceedings-Annual (IEEE), 1-6.
[8]
Amari, S. V., Dugan, J. B., and Misra, R. B. (2002). Optimal reliability of
systems subject to imperfect fault-coverage. Reliability, IEEE Transactions on
48, 275-284.
184
[9]
Amari, S. V., and McLaughlin, L. (2005). Optimal design of a condition-based
maintenance model. In Reliability and Maintainability, 2004 Annual
Symposium-RAMS (IEEE), 528-533.
[10] Amari, S. V., Pham, H., and Dill, G. (2004). Optimal design of k-out-of-n: G
subsystems subjected to imperfect fault-coverage. Reliability, IEEE
Transactions on 53, 567-575.
[11] Andrews, J. D., and Moss, T. R. (1993). Reliability and risk assessment
(Longman Group, UK).
[12] Apostolakis, G. E. (1974). Mathematical methods of probabilistic safety
analysis.
[13] Athan, T. W., and Papalambros, P. Y. (1996). A note on weighted criteria
methods for compromise solutions in multi-objective optimization. Engineering
-
Optimization 27, 155 176.
[14] Babu, B. V., and Chaturvedi, G. (2000). Evolutionary computation strategy for
optimization of an alkylation reaction. In Proceedings of International
Symposium & 53rd Annual Session of IIChE (CHEMCON-2000) (Citeseer),
18-21.
[15] Babu, B. V., and Chaurasia, A. S. (2003). Optimization of pyrolysis of biomass
using differential evolution approach. In Second International Conference on
Computational Intelligence, Robotics, and Autonomous Systems (CIRAS2003), Singapore.
[16] Babu, B. V., and Jehan, M. M. L. (2004). Differential evolution for multiobjective optimization. In Evolutionary Computation, 2003. CEC'03. The 2003
185
Congress on (IEEE), 2696-2703.
[17] Barbosa, H. J. C. (2002). A coevolutionary genetic algorithm for constrained
optimization. In Evolutionary Computation, 1999. CEC 99. Proceedings of the
1999 Congress on (IEEE).
[18] Barbosa, H. J. C. (1996). A genetic algorithm for min-max problems. In
Goodman, editors, Proceedings of the First International Conference on
Evolutionary Computation and Its Applications, 99-109.
[19] Barbosa, H. J. C., and Lemonge, A. C. C. (2003). A new adaptive penalty
scheme for genetic algorithms. Information Sciences 156, 215-251.
[20] Barbosa, H. J. C., and Lemonge, A. C. C. (2008). An adaptive penalty method
for genetic algorithms in constrained optimization problems. Frontiers in
Evolutionary Robotics. Vienna: I-Tech Education and Publishing 1, 9-34.
[21] Barbosa, H. J. C., and Lemonge, A. C. C. (2002). An adaptive penalty scheme
in genetic algorithms for constrained optimization problems. In Proceedings of
the Genetic and Evolutionary Computation Conference (Morgan Kaufmann
Publishers Inc.), 287-294.
[22] Box, G. E. P., and Wilson, K. B. (1951). On the experimental attainment of
optimum conditions. Journal of the Royal Statistical Society. Series B
(Methodological) 13, 1-45.
[23] Branke, J. (2008a). Consideration of partial user preferences in evolutionary
multiobjective optimization. Multiobjective Optimization, 157-178.
[24] Branke, J., and Deb, K. (2005). Integrating user preferences into evolutionary
multi-objective optimization. Knowledge Incorporation in Evolutionary
186
Computation, 461-477.
[25] Branke, J., Deb, K., Dierolf, H., and Osswald, M. (2004). Finding knees in
multi-objective optimization,1-10.
[26] Branke, J., KauBler, T., and Schmeck, H. (2001). Guidance in evolutionary
multi-objective optimization. Advances in Engineering Software 32, 499-507.
[27] Castro, R. E., and BARBOSA, H. (2001). Otimizacao de estruturas corn multiobjetivos via algoritmos geneticos. Rio de Janeiro 206.
[28] Chakraborty, J., Konar, A., Nagar, A., and Das, S. (2009). Rotation and
translation selective Pareto optimal solution to the box-pushing problem by
mobile robots using NSGA-II. In IEEE Congress on Evolutionary Computation,
2009 (CEC'09) (IEEE), 2120-2126.
[29] Chatsirirungruang, P., and Miyakawa, M. (2008). Application of genetic
algorithm to numerical experiment in robust parameter design for signal
multi-response problem. In Proceedings of The 13th International Conference on
Industrial Engineering Theory, Applications & Practice, 7-10.
[30] Coelho, L. S. (2009). An efficient particle swarm approach for mixed-integer
programming in reliability-redundancy optimization applications. Reliability
Engineering & System Safety 94, 830-837.
[31] Coello Coello, C. A. (2009). Evolutionary multi-objective optimization: some
current research trends and topics that remain to be explored. Frontiers of
Computer Science in China 3, 18-30.
[32] Coello, C. A. (2000). An updated survey of GA-based multiobjective
optimization techniques. ACM Computing Surveys (CSUR) 32, 143.
[33] Coello, C. A. C. (1999). A comprehensive survey of evolutionary-based
multiobjective optimization techniques. Knowl. Inf. Syst. 1, 129-156.
[34] Coello, C. A. C. (1996). An empirical study of evolutionary techniques for
multiobjective optimization in engineering design (Ph.D. Thesis).
[35] Coello, C. A. C. (2009). Evolutionary multi-objective optimization: some
current research trends and topics that remain to be explored. Frontiers of
Computer Science in China 3, 18-30.
[36] Coello, C. A. C. (2004). List of references on evolutionary multiobjective
optimization. Laboratorio Nacional De Informatica Avanzada (LANIA).
[37] Coello, C. A. C. (2006). Twenty years of evolutionary multi-objective
optimization: A historical view of the field. IEEE Computational Intelligence
Magazine 1, 28-36.
[38] Coello, C. A. C., Lamont, G. B., and Van Veldhuizen, D. A. (2007).
Evolutionary algorithms for solving multi-objective problems (Springer-Verlag
New York Inc).
[39] Coello, C. A. C., and Lechuga, M. S. (2002). MOPSO: A proposal for multiple
objective particle swarm optimization. Proceedings of the Evolutionary
Computation on, 1051-1056.
[40] Coello, C. A. C., Pulido, G. T., and Lechuga, M. S. (2004). Handling multiple
objectives with particle swarm optimization. IEEE Trans. Evolutionary
Computation 8, 256-279.
[41] Coit, D. W., and Smith, A. E. (1996). Reliability optimization of series-parallel
systems using a genetic algorithm. IEEE Transactions on Reliability 45, 254-260.
[42] Collette, Y., and Siarry, P. (2003). Multiobjective optimization: principles and
case studies (Springer Verlag).
[43] Corne, D., Knowles, J., and Oates, M. (2000). The Pareto envelope-based
selection algorithm for multiobjective optimization. In Parallel Problem Solving
from Nature PPSN VI (Springer), 839-848.
[44] Cvetkovic, D., and Parmee, I. (2003). Agent-based support within an interactive
evolutionary design system. AI EDAM 16, 331-342.
[45] Cvetkovic, D., and Parmee, I. C. (1998). Evolutionary design and multi-objective
optimisation. In 6th European Congress on Intelligent Techniques and
Soft Computing EUFIT (Citeseer), 397-401.
[46] Cvetkovic, D., and Parmee, I. C. (2002). Preferences and their application in
evolutionary multiobjective optimization. IEEE Transactions on Evolutionary
Computation 6, 42-57.
[47] Cvetkovic, D., and Parmee, I. C. (1999). Use of Preferences for GA-based
Multi-objective Optimisation. In GECCO-99: Proceedings of the Genetic and
Evolutionary Computation Conference (Citeseer), 1504-1509.
[48] Deb, K. (2003). Multi-objective evolutionary algorithms: Introducing bias
among Pareto-optimal solutions. Advances in Evolutionary Computing: Theory
and Applications, 263-292.
[49] Deb, K. (1999). Multi-objective genetic algorithms: Problem difficulties and
construction of test problems. Evolutionary computation 7, 205-230.
[50] Deb, K. (2001). Multi-objective optimization using evolutionary algorithms
(Wiley).
[51] Deb, K., Agrawal, S., Pratab, A., and Meyarivan, T. (2000). A Fast Elitist
Non-Dominated Sorting Genetic Algorithm for Multi-Objective Optimization:
NSGA-II (KanGAL report 200001). Indian Institute of Technology.
[52] Deb, K., and Goel, T. (2001). Controlled elitist non-dominated sorting genetic
algorithms for better convergence. In Evolutionary Multi-Criterion
Optimization (Springer), 67-81.
[53] Deb, K., and Kumar, A. (2007). Light beam search based multi-objective
optimization using evolutionary algorithms. In Proc. of the Congress on
Evolutionary Computation (CEC 2007), 2125-2132.
[54] Deb, K., Pratap, A., Agarwal, S., and Meyarivan, T. (2002). A fast and elitist
multiobjective genetic algorithm: NSGA-II. IEEE transactions on evolutionary
computation 6, 182-197.
[55] Deb, K., and Sundar, J. (2006). Reference point based multi-objective
optimization using evolutionary algorithms. In Proceedings of the 8th annual
conference on Genetic and evolutionary computation (ACM), 642.
[56] Derringer, G., and Suich, R. (1980). Simultaneous optimization of several
response variables. Journal of quality technology 12, 214-219.
[57] Dhillon, B. S. (1986). Reliability apportionment/allocation: a survey.
Microelectronics Reliability 26, 1121-1129.
[58] Dhingra, A. K. (1992). Optimal apportionment of reliability and redundancy in
series systems under multiple objectives. IEEE Transactions on Reliability 41,
576-582.
[59] Dorigo, M., and Stutzle, T. (2010). Ant Colony Optimization: Overview and
Recent Advances. Handbook of Metaheuristics, 227-263.
[60] Drezner, T., Drezner, Z., and Salhi, S. (2005). A multi-objective heuristic
approach for the casualty collection points location problem. Journal of the
Operational Research Society 57, 727-734.
[61] Durillo, J. J., Garcia-Nieto, J., Nebro, A. J., Coello, C. A., Luna, F., and Alba,
E. (2009). Multi-Objective Particle Swarm Optimizers: An Experimental
Comparison. In Proceedings of the 5th International Conference on
Evolutionary Multi-Criterion Optimization (Springer-Verlag), 495-509.
[62] Emmerich, M., Beume, N., and Naujoks, B. (2005). An EMO algorithm using
the hypervolume measure as selection criterion. In Evolutionary Multi-Criterion
Optimization (Springer), 62-76.
[63] Fonseca, C. M., and Fleming, P. J. (1995). An overview of evolutionary
algorithms in multiobjective optimization. Evolutionary computation 3, 1-16.
[64] Fonseca, C. M., and Fleming, P. J. (1993). Genetic algorithms for
multiobjective optimization: Formulation, discussion and generalization. In
Proceedings of the fifth international conference on genetic algorithms
(Citeseer), 416-423.
[65] Fonseca, C. M., and Fleming, P. J. (1998). Multiobjective optimization and
multiple constraint handling with evolutionary algorithms. II. Application
example. IEEE Transactions on Systems, Man, and Cybernetics, Part A:
Systems and Humans 28, 38-47.
[66] Formato, R. A. (2009). Central force optimization: A new deterministic
gradient-like optimization metaheuristic. OPSEARCH 46, 25-51.
[67] Formato, R. A. (2007). Central force optimization: A new metaheuristic with
applications in applied electromagnetics. Progress In Electromagnetics Research
77, 425-491.
[68] Formato, R. A. (2010). Improved CFO algorithm for antenna optimization.
Progress In Electromagnetics Research 19, 405-425.
[69] Fourman, M. P. (1985). Compaction of symbolic layout using genetic
algorithms. In Proceedings of the 1st International Conference on Genetic
Algorithms (L. Erlbaum Associates Inc.), 141-153.
[70] Fuller, R., and Carlsson, C. (1996). Fuzzy multiple criteria decision making:
Recent developments. Fuzzy Sets and Systems 78, 139-153.
[71] Geoffrion, A. (1968). Relaxation and the dual method in mathematical
programming (California Univ Los Angeles Western Management Science
Inst).
[72] Goldberg, D. E. (1989). Genetic Algorithms in Search and Optimization
(Addison-wesley).
[73] Goldberg, D. E., and Samtani, M. P. (1986). Engineering optimization via
genetic algorithm. In Proceedings of the ninth Conference on Electronic
Computation, 471-482.
[74] Gopal, K., Aggarwal, K. K., and Gupta, J. S. (1978). An improved algorithm for
reliability optimization. IEEE Transactions on Reliability 27, 325-328.
[75] Govil, K. K., and Agarwala, R. A. (1983). Lagrange multiplier method for
optimal reliability allocation in a series system. Reliability Engineering 6, 181-190.
[76] Greenwood, G. W., Hu, X. S., and D'Ambrosio, J. G. (1997). Fitness functions
for multiple objective optimization problems: Combining preferences with
Pareto rankings. Foundations of genetic algorithms 4, 437.
[77] Ha, C., and Kuo, W. (2006). Reliability redundancy allocation: An improved
realization for nonconvex nonlinear programming problems. European Journal
of Operational Research 171, 24-38.
[78] Harrington, E. C. (1965). The desirability function. Industrial Quality Control
21, 494-498.
[79] Heike, T., and Jorn, M. (2009). Preference-based Pareto optimization in certain
and noisy environments. Engineering Optimization 41, 23-38.
[80] Holland, J. H. (1987). Genetic algorithms and classifier systems: foundations
and future directions (Michigan Univ., Ann Arbor (USA)).
[81] Horn, J. (1997). Multicriterion decision making. Handbook of evolutionary
computation (Oxford University Press).
[82] Huang, H. Z. (1997). Fuzzy multi-objective optimization decision-making of
reliability of series system. Microelectronics and Reliability 37, 447-449.
[83] Huang, H. Z., Tian, Z., and Zuo, M. (2007). Intelligent Interactive
Multiobjective Optimization of System Reliability. Computational intelligence
in reliability engineering, 215-236.
[84] Huang, H. Z., Tian, Z., and Zuo, M. J. (2005). Intelligent interactive
multiobjective optimization method and its application to reliability
optimization. IIE Transactions 37, 983-993.
[85] Huband, S., Hingston, P., Barone, L., and While, L. (2006). A review of
multiobjective test problems and a scalable test problem toolkit. IEEE
Transactions on Evolutionary Computation 10, 477-506.
[86] Hughes, E. J. (2002). Multi-Objective Evolutionary Guidance for Swarms. In
CEC'02: proceedings of the 2002 Congress on Evolutionary Computation: May
12-17, 2002, Hilton Hawaiian Village Hotel, Honolulu, Hawaii (IEEE), 2,
1127-1132.
[87] Hwang, C. L., and Lai, Y. J. (1993). ISGP-II for multiobjective optimization
with imprecise objective coefficients. Computers & Operations Research 20,
503-514.
[88] Hwang, C. L., and Masud, A. S. M. (1979). Multiple objective decision making,
methods and applications: a state-of-the-art survey (Springer).
[89] Inagaki, T., Inoue, K., and Akashi, H. (2009). Interactive optimization of
system reliability under multiple objectives. IEEE Transactions on Reliability
27, 264-267.
[90] Jakob, W., Gorges-Schleuter, M., and Blume, C. (1992). Application of genetic
algorithms to task planning and learning. In Parallel Problem Solving from
Nature, 2nd Workshop, Lecture Notes in Computer Science, 291-300.
[91] Jeong, I. J., and Kim, K. J. (2009). An interactive desirability function method
to multiresponse optimization. European Journal of Operational Research 195,
412-426.
[92] Jimenez, F., Gomez-Skarmeta, A. F., Roubos, H., and Babuska, R. (2001). A
multi-objective evolutionary algorithm for fuzzy modeling. In IFSA World
Congress and 20th NAFIPS International Conference.
[93] Jimenez, F., Sanchez, G., Cadenas, J. M., Gomez-Skarmeta, A. F., and
Verdegay, J. L. (2004). A multi-objective evolutionary approach for nonlinear
constrained optimization with fuzzy costs. In 2004 IEEE International
Conference on Systems, Man and Cybernetics, 6 , 5771-5776.
[94] Jimenez, F., Cadenas, J. M., Sanchez, G., Gomez-Skarmeta, A. F., and
Verdegay, J. L. (2006). Multi-objective evolutionary computation and fuzzy
optimization. International Journal of Approximate Reasoning 43, 59-75.
[95] Jin, Y., and Sendhoff, B. (2002). Incorporation Of Fuzzy Preferences Into
Evolutionary Multiobjective Optimization. In Proceedings of the Genetic and
Evolutionary Computation Conference (Morgan Kaufmann Publishers Inc.),
683.
[96] Jones, D., and Tamiz, M. (2010). Practical Goal Programming (Springer).
[97] Kapur, P. K., and Verma, A. K. (2005). An Optimization of Integrated
Reliability Model with Multiple Constraints. Quality, reliability and information
technology: trends and future directions, 180.
[98] Kennedy, J. (2006). Swarm intelligence. Handbook of Nature-Inspired and
Innovative Computing, 187-219.
[99] Kennedy, J., and Eberhart, R. (1995). Particle swarm optimization. In IEEE
International Conference on Neural Networks, 1995. Proceedings., 1942-1948.
[100] Kim, K. J., and Lin, D. K. J. (2006). Optimization of multiple responses
considering both location and dispersion effects. European Journal of
Operational Research 169, 133-145.
[101] Knowles, J., and Corne, D. (1999). The Pareto archived evolution strategy: A
new baseline algorithm for Pareto multiobjective optimisation. In Congress on
Evolutionary Computation (CEC99) (Citeseer), 98-105.
[102] Knowles, J., and Corne, D. (2006). Evolutionary Multiobjective Optimization
(Loire Valley, France).
[103] Krishnanand, K. N., and Ghose, D. (2006). Glowworm swarm based
optimization algorithm for multimodal functions with collective robotics
applications. Multiagent and Grid Systems 2, 209-222.
[104] Krishnanand, K. N., and Ghose, D. (2009). Glowworm swarm optimisation: a
new method for optimising multi-modal functions. International Journal of
Computational Intelligence Studies 1, 93-119.
[105] Kuo, W., and Prasad, V. R. (2000). An annotated overview of system-reliability
optimization. IEEE Transactions on Reliability 49, 176-187.
[106] Kuo, W., Prasad, V. R., Tillman, F. A., and Hwang, C. (2001). Optimal
reliability design (Cambridge University Press, Cambridge).
[107] Kuo, W., and Wan, R. (2007). Recent advances in optimal reliability allocation.
Computational Intelligence in Reliability Engineering, 1-36.
[108] Kursawe, F. (1991). A variant of evolution strategies for vector optimization.
Parallel Problem Solving from Nature, 193-197.
[109] Lam, S. W., and Tang, L. C. (2005). A graphical approach to the dual response
robust design problems. In Reliability and Maintainability Symposium, 2005.
Proceedings. Annual, 200-206.
[110] Lamont, G. B., and Van Veldhuizen, D. A. (2007). Evolutionary Algorithms for
Solving Multi-Objective Problems (Springer).
[111] Laumanns, M., Thiele, L., and Zitzler, E. (2006). An efficient, adaptive
parameter variation scheme for metaheuristics based on the epsilon-constraint
method. European Journal of Operational Research 169, 932-942.
[112] Lee, D., Jeong, I., and Kim, K. (2010). A posterior preference articulation
approach to dual-response-surface optimization. IIE Transactions 42, 161-171.
[113] Lee, M., and Park, J. (2003). More efficient consideration of dispersion effect
by a probability-based desirability function in multiresponse problem (Ph.D.
Thesis).
[114] Levitin, G., and Amari, S. V. (2008). Multi-state systems with multi-fault
coverage. Reliability Engineering & System Safety 93, 1730-1739.
[115] Li, X. (2003). A non-dominated sorting particle swarm optimizer for
multiobjective optimization. In Genetic and Evolutionary Computation, GECCO
2003 (Springer), 37-48.
[116] Mahapatra, G. S., and Roy, T. K. (2006). Fuzzy multi-objective mathematical
programming on reliability optimization model. Applied Mathematics and
Computation 174, 643-659.
[117] Mahapatra, G. (2009). Reliability Optimization of Entropy Based Series-Parallel
System Using Global Criterion Method.
[118] Mandal, A., Johnson, K., Wu, C. F. J., and Bornemeier, D. (2007). Identifying
promising compounds in drug discovery: Genetic Algorithms and some new
statistical techniques. J. Chem. Inf. Model 47, 981-988.
[119] Marler, R. T., and Arora, J. S. (2005). Function-transformation methods for
multi-objective optimization. Engineering Optimization 37, 551-570.
[120] Marler, R. T., and Arora, J. S. (2004). Survey of multi-objective optimization
methods for engineering. Structural and multidisciplinary optimization 26, 369-395.
[121] Marler, R. T., Kim, C. H., and Arora, J. S. (2006). System identification of
simplified crash models using multi-objective optimization. Computer Methods
in Applied Mechanics and Engineering 195, 4383-4395.
[122] Marseguerra, M., Zio, E., and Bosi, F. (2002). Direct Monte Carlo availability
assessment of a nuclear safety system with time-dependent failure
characteristics. Proceedings of MMR.
[123] Marseguerra, M., Zio, E., and Martorell, S. (2006). Basics of genetic algorithms
optimization for RAMS applications. Reliability Engineering & System Safety
91, 977-991.
[124] Mavrotas, G. (2009). Effective implementation of the [epsilon]-constraint
method in Multi-Objective Mathematical Programming problems. Applied
Mathematics and Computation 213, 455-465.
[125] Trautmann, H., and Mehnen, J. (2009). Preference-based Pareto optimization in
certain and noisy environments. Engineering Optimization 41, 23-38.
[126] Mehnen, J., Wagner, T., Kersting, P., Tipura, I., and Rudolph, G. (2007).
Evolutionary Five Axis Milling Path Optimization, GECCO'07, July 7-11 2007,
London, UK, 2122-2128.
[127] Merkuryeva, G. (2005). Response Surface-Based Simulation Metamodelling
Methods. Supply Chain Optimisation, 205-215.
[128] Miettinen, K. (1999). Nonlinear multiobjective optimization (Springer).
[129] Misra, K. B. (1991). An algorithm to solve integer programming problems: An
efficient tool for reliability design. Microelectronics Reliability 31, 285-294.
[130] Misra, K. B. (2009). Reliability optimization of a series-parallel system.
IEEE Transactions on Reliability 21, 230-238.
[131] Misra, K. B., and Sharma, U. (1991a). An efficient approach for multiple
criteria redundancy optimization problems. Microelectronics Reliability 31,
303-321.
[132] Misra, K. B., and Sharma, U. (1991b). Applications of a search algorithm to
reliability design problems. Microelectronics Reliability 31, 295-301.
[133] Misra, N., van der Meulen, E. C., and Vanden Branden, K. (2006). On
estimating the scale parameter of the selected gamma population under the scale
invariant squared error loss function. Journal of Computational and Applied
Mathematics 186, 268-282.
[134] Misra, R. B., and Agnihotri, G. (1979). Peculiarities in Optimal Redundancy for
a Bridge Network. IEEE Transactions on Reliability 28, 70-72.
[135] Mohamed Lawrence, M. (1992). Optimization techniques for system reliability:
a review. Reliability Engineering & System Safety 35, 137-146.
[136] Mohanty, B. K., and Vijayaraghavan, T. A. S. (1995). A multi-objective
programming problem and its equivalent goal programming problem with
appropriate priorities and aspiration levels: a fuzzy approach. Computers &
Operations Research 22, 771-778.
[137] Molina, J., Santana, L. V., Hernandez-Diaz, A. G., Coello Coello, C. A., and
Caballero, R. (2009). g-dominance: Reference point based dominance for
multiobjective metaheuristics. European Journal of Operational Research 197,
685-692.
[138] Moore, J., and Chapman, R. (1999). Application of particle swarm to
multiobjective optimization. Department of Computer Science and Software
Engineering, Auburn University.
[139] de Moura, L., Yamakami, A., and Bonfim, T. R. (2002). A genetic algorithm for
multiobjective optimization problems with fuzzy constraints. In Second
international workshop on Intelligent systems design and application (Dynamic
Publishers, Inc.), 142.
[140] Mukherjee, I., and Ray, P. K. (2008). Optimal process design of two-stage
multiple responses grinding processes using desirability functions and
metaheuristic technique. Applied Soft Computing 8, 402-421.
[141] Murty, M. (1995). The Analytic Rank of J0(N)(Q). In Number theory: Fourth
Conference of the Canadian Number Theory Association, July 2-8, 1994,
Dalhousie University, Halifax, Nova Scotia, Canada (Canadian Mathematical
Society), 263.
[142] Nguyen, H. H., Jang, N., and Choi, S. H. (2009). Multiresponse optimization
based on the desirability function for a pervaporation process for producing
anhydrous ethanol. Korean Journal of Chemical Engineering 26, 1-6.
[143] Nguyen, N. T. (2010). Intelligent Information and Database Systems: Second
International Conference, ACIIDS, Hue City, Vietnam, Mar. 24-26, 2010,
Proceedings.
[144] Noorossana, R., Davanloo Tajbakhsh, S., and Saghaei, A. (2009). An artificial
neural network approach to multiple-response optimization. The International
Journal of Advanced Manufacturing Technology 40, 1227-1238.
[145] Padhye, N., Branke, J., and Mostaghim, S. (2009). Empirical comparison of
MOPSO methods: guide selection and diversity preservation. In Proceedings of
the Eleventh conference on Congress on Evolutionary Computation (Institute of
Electrical and Electronics Engineers Inc., The), 2516-2523.
[146] Pandey, M. K., Tiwari, M. K., and Zuo, M. J. (2007). Interactive enhanced
particle swarm optimization: a multi-objective reliability application.
Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk
and Reliability 221, 177-191.
[147] Pareto, V. (1896). Cours d'Économie Politique, volume I and II. F. Rouge,
Lausanne 250.
[148] Park, K. S., and Kim, K. J. (2005). Optimizing multi-response surface
problems: How to use multi-objective optimization techniques. IIE Transactions
37, 523-532.
[149] Parmee, I. C. (2001). Evolutionary and adaptive computing in engineering
design (Springer Verlag).
[150] Parmee, I. C., Cvetkovic, D., Watson, A. H., and Bonham, C. R. (2000).
Multiobjective satisfaction within an interactive evolutionary design
environment. Evolutionary Computation 8, 197-222.
[151] Parsopoulos, K. E., and Vrahatis, M. N. (2008). Multi-Objective Particle
Swarm Optimization Approaches. Multi-objective optimization in
computational intelligence: theory and practice.
[152] Parsopoulos, K. E., and Vrahatis, M. N. (2002). Recent approaches to global
optimization problems through particle swarm optimization. Natural Computing
1, 235-306.
[153] Prasad, V. R., and Kuo, W. (2000). Reliability optimization of coherent
systems. IEEE Transactions on Reliability 49, 323-330.
[154] Rachmawati, L. (2009). Incorporation of human decision making preference
into evolutionary multi-objective optimization (Ph.D. Thesis).
[155] Rachmawati, L., and Srinivasan, D. (2009). Multiobjective evolutionary
algorithm with controllable focus on the knees of the Pareto front. IEEE
Transactions on Evolutionary Computation 13, 810-824.
[156] Rachmawati, L., and Srinivasan, D. (2006). Preference incorporation in
multi-objective evolutionary algorithms: A survey. In IEEE Congress on Evolutionary
Computation, 2006. CEC 2006, 962-968.
[157] Rangarajan, A., Ravindran, A. R., and Reed, P. (2004). An interactive
multi-objective evolutionary optimization algorithm. In Proceedings of the 34th
International Conference on Computers & Industrial Engineering, 277-282.
[158] Rao, J. R., Tiwari, R. N., and Mohanty, B. K. (1988a). A method for finding
numerical compensation for fuzzy multicriteria decision problem. Fuzzy Sets
and Systems 25, 33-41.
[159] Rao, J. R., Tiwari, R. N., and Mohanty, B. K. (1988b). A preference structure
on aspiration levels in a goal programming problem: A fuzzy approach. Fuzzy
sets and systems 25, 175-182.
[160] Rao, S. S., and Dhingra, A. K. (1992). Reliability and redundancy
apportionment using crisp and fuzzy multiobjective optimization approaches.
Reliability Engineering & System Safety 37, 253-261.
[161] Raquel, C. R., and Naval Jr, P. C. (2005). An effective use of crowding distance
in multiobjective particle swarm optimization. In Proceedings of the 2005
conference on Genetic and evolutionary computation (ACM), 264.
[162] Ravi, V. (2007). Modified great deluge algorithm versus other metaheuristics in
reliability optimization. Intelligence in Reliability Engineering, 21-36.
[163] Ravi, V., Murty, B. S. N., and Reddy, P. J. (1997). Nonequilibrium simulated
annealing-algorithm applied to reliability optimization of complex systems.
IEEE Transactions on Reliability 46, 233-239.
[164] Ravi, V., Reddy, P. J., and Zimmermann, H. J. (2000). Fuzzy global
optimization of complex system reliability. IEEE Transactions on Fuzzy
Systems 8, 241-248.
[165] Ray, T., and Liew, K. M. (2002). A swarm metaphor for multiobjective design
optimization. Engineering Optimization 34, 141-153.
[166] Reyes-Sierra, M., and Coello, C. A. C. (2006). Multi-objective particle swarm
optimizers: A survey of the state-of-the-art. International Journal of
Computational Intelligence Research 2, 287-308.
[167] Ritzel, B. J., Wayland Eheart, J., and Ranjithan, S. (1994). Using genetic
algorithms to solve a multiple objective groundwater pollution containment
problem. Water Resources Research 30, 1589-1589.
[168] Roy, R., and Mehnen, J. (2008). Technology Transfer: Academia To Industry.
Evolutionary Computation in Practice, 263.
[169] Sakawa, M. (1978). Multiobjective optimization by the surrogate worth tradeoff method. IEEE Transactions on Reliability 27, 311-314.
[170] Sakawa, M., and Kato, K. (2009). An interactive fuzzy satisficing method for
multiobjective nonlinear integer programming problems with block-angular
structures through genetic algorithms with decomposition procedures. Fuzzy
Sets and Systems, 81-99.
[171] Salazar, D. E., and Rocco, C. M. (2007). Solving advanced
multi-objective robust designs by means of multiple objective evolutionary
algorithms (MOEA): A reliability application. Reliability Engineering &
System Safety 92, 697-706.
[172] Salazar, D., Rocco, C. M., and Galvan, B. J. (2006). Optimization of
constrained multiple-objective reliability problems using evolutionary
algorithms. Reliability Engineering & System Safety 91, 1057-1070.
[173] Salhi, S., Drezner, T., and Drezner, Z. (2005). A Multi-Objective Heuristic
Approach for the Casualty Collection Points Location Problem.
[174] Salhi, S., and Petch, R. J. (2007). A GA based heuristic for the vehicle routing
problem with multiple trips. Journal of Mathematical Modelling and Algorithms
6, 591-613.
[175] Sawaragi, Y., Nakayama, H., and Tanino, T. (1985). Theory of multiobjective
optimization (Orlando Academic Press).
[176] Schaffer, J. D. (1985). Multiple objective optimization with vector evaluated
genetic algorithms. In Proceedings of the 1st International Conference on
Genetic Algorithms (L. Erlbaum Associates Inc.), 93-100.
[177] Shelokar, P. S., Jayaraman, V. K., and Kulkarni, B. D. (2002). Ant algorithm
for single and multiobjective reliability optimization problems. Quality and
Reliability Engineering International 18, 497-514.
[178] Singh, H., and Misra, N. (1994). On redundancy allocations in systems. Journal
of Applied Probability 31, 1004-1014.
[179] Sinha, N., Purkayastha, B. S., and Purkayastha, B. (2008). Optimal Combined
Non-convex Economic and Emission Load Dispatch Using NSDE. In
International Conference on Computational Intelligence and Multimedia
Applications, 2007 (IEEE), 473-480.
[180] Srinivas, N., and Deb, K. (1994). Multiobjective optimization using
nondominated sorting in genetic algorithms. Evolutionary computation 2, 221-248.
[181] Steuer, D. (2004). Multi-criteria-optimisation and desirability indices.
HT014602036.
[182] Steuer, R. E. (1986). Multiple criteria optimization: Theory, computation, and
application (Wiley, New York).
[183] Tan, K. C., Khor, E. F., and Lee, T. H. (2005). Multiobjective evolutionary
algorithms and applications (Springer Verlag).
[184] Tan, K. C., Lee, T. H., and Khor, E. F. (2002). Evolutionary algorithms for
multi-objective optimization: performance assessments and comparisons.
Artificial intelligence review. 17, 251-290.
[185] Tan, K. C., Lee, T. H., and Khor, E. F. (1999). Evolutionary algorithms with
goal and priority information for multi-objective optimization. In Proceedings
of the 1999 Congress on Evolutionary Computation: CEC99: July 6-9, 1999,
Mayflower Hotel, Washington, DC, USA (IEEE), 106.
[186] Tanaka, M., Watanabe, H., Furukawa, Y., and Tanino, T. (1995). GA-based
decision support system for multicriteria optimization. In IEEE International
Conference on Systems, Man and Cybernetics, 1995. Intelligent Systems for the
21st Century.
[187] Tang, L. C., and Paoli, P. (2004). A spreadsheet-based multiple criteria
optimization framework for quality function deployment. International Journal
of Quality and Reliability Management 21, 329.
[188] Thanh, N. H., and Vong, N. T. (2000). Determination of cropping pattern by the
multi-objective optimization model. Khoa Hoc Ky Thuat Nong Nghiep (Viet
Nam); Journal of Agricultural Sciences and Technology.
[189] Thiele, L., Miettinen, K., Korhonen, P. J., and Molina, J. (2009). A
preference-based evolutionary algorithm for multi-objective optimization. Evolutionary
Computation 17, 411-436.
[190] Tillman, F. A., Hwang, C. L., and Kuo, W. (1980). Optimization of systems
reliability (Marcel Dekker Inc).
[191] Trautmann, H., and Mehnen, J. (2005). A method for including a-priori
preference in multicriteria optimization (University of Dortmund, Germany).
[192] Trautmann, H., and Mehnen, J. (2009). Preference-based Pareto optimization in
certain and noisy environments. Engineering Optimization 41, 23-38.
[193] Trautmann, H., Wagner, T., Naujoks, B., Preuss, M., and Mehnen, J. (2009).
Statistical methods for convergence detection of multi-objective evolutionary
algorithms. Evolutionary Computation 17, 493-509.
[194] Trautmann, H., and Weihs, C. (2006). On the distribution of the desirability
index using Harrington's desirability function. Metrika 63, 207-213.
[195] del Valle, Y., Venayagamoorthy, G. K., Mohagheghi, S., Hernandez, J., and
Harley, R. G. (2008). Particle swarm optimization: Basic concepts, variants and
applications in power systems. IEEE Transactions on Evolutionary
Computation 12, 171.
[196] Van Veldhuizen, D. A., and Lamont, G. B. (1998). Multiobjective evolutionary
algorithm research: A history and analysis. Air Force Inst. Technol., Dayton,
OH, Tech. Rep. TR-98-03.
[197] Van Veldhuizen, D. A., and Lamont, G. B. (2000). Multiobjective evolutionary
algorithms: Analyzing the state-of-the-art. Evolutionary computation 8, 125-148.
[198] Verma, A. K., Ajit, S., and Karanki, D. R. (2010). Reliability and Safety
Engineering (Springer).
[199] Vinod, G., Kushwaha, H. S., Verma, A. K., and Srividya, A. (2004).
Optimisation of ISI interval using genetic algorithms for risk informed
in-service inspection. Reliability Engineering & System Safety 86, 307-316.
[200] Wang, H. F. (2000). Fuzzy multicriteria decision making: an overview. Journal
of Intelligent and Fuzzy Systems 9, 61-83.
[201] White, C. C. (1984). A generalized model of sequential decision making under
risk. European Journal of Operational Research 18, 19-26.
[202] Wierzbicki, A. P. (1982). A mathematical basis for satisficing decision making.
Mathematical modelling 3, 391-405.
[203] Wierzbicki, A. P. (1999). Multicriteria decision making: advances in MCDM
models, algorithms, theory, and applications (Kluwer Netherlands).
[204] Wierzbicki, A. P. (1979). The Use of Reference Objectives in Multiobjective
Optimization: Theoretical Implications and Practical Experience. International
Institute for Applied Systems Analysis, Laxenburg, Austria, Working Paper WP-79-66.
[205] Wilson, P. B., and Macleod, M. D. (1993). Low implementation cost IIR digital
filter design using genetic algorithms. In IEE/IEEE Workshop on Natural
Algorithms in Signal Processing, 4.
[206] Yu, P. L., Lee, Y. R., and Stam, A. (1985). Multiple-criteria decision making:
concepts, techniques, and extensions (Plenum Publishing Corporation).
[207] Zadeh, L. A. (1975). The concept of a linguistic variable and its application to
approximate reasoning. Information sciences 8, 199-249.
[208] Zimmermann, H. J. (1990). Decision making in ill-structured environments and
with multiple criteria. Readings in multiple criteria decision aid, 119-151.
[209] Zimmermann, H. J. (2001). Fuzzy set theory and its applications (Springer
Netherlands).
[210] Zimmermann, H. J. (1987). Fuzzy sets, decision making, and expert systems
(Springer).
[211] Zimmermann, H. J. (1986). Multicriteria decision making in crisp and fuzzy
environments. Fuzzy sets theory and applications. NATO ASI Series 177, 233-256.
[212] Zitzler, E., Brockhoff, D., and Thiele, L. (2007). The Hypervolume Indicator
Revisited: On the Design of Pareto-compliant Indicators Via Weighted
Integration. In Obayashi, S., Deb, K., Poloni, C., Hiroyasu, T., Murata, T.
(eds.) EMO 2007 (Springer, Heidelberg), 862-876.
[213] Zitzler, E., Deb, K., and Thiele, L. (2000). Comparison of multiobjective
evolutionary algorithms: Empirical results. Evolutionary computation 8, 173-195.
[214] Zitzler, E., and Künzli, S. (2004). Indicator-based selection in multiobjective
search. In Parallel Problem Solving from Nature-PPSN VIII (Springer), 832-842.
[215] Zitzler, E., Laumanns, M., and Thiele, L. (2001). SPEA2: Improving the
strength Pareto evolutionary algorithm. In EUROGEN (Citeseer), 95-100.
[216] Zitzler, E., and Thiele, L. (1998). An evolutionary algorithm for multiobjective
optimization: The strength pareto approach. Swiss Federal Institute of
Technology, TIK-Report 43.
[217] Zitzler, E., Thiele, L., Laumanns, M., Fonseca, C. M., and da Fonseca, V. G.
(2003). Performance assessment of multiobjective optimizers: An analysis and
review. IEEE Transactions on Evolutionary Computation 7, 117-132.