About Evolutionary Computation (EC)


Multiobjective Optimization for Locating Multiple
Optimal Solutions of Nonlinear Equation Systems and
Multimodal Optimization Problems
Yong Wang
School of Information Science and Engineering
Central South University
Changsha 410083, China
[email protected]
http://ist.csu.edu.cn/YongWang.htm
Outline of My Talk
 Part I: Multiobjective Optimization for Locating Multiple Optimal Solutions of Nonlinear Equation Systems (MONES)
 Part II: Multiobjective Optimization for Locating Multiple Optimal Solutions of Multimodal Optimization Problems (MOMMOP)
 Future Work
Nonlinear Equation Systems (NESs) (1/2)
• NESs arise in many science and engineering areas
such as chemical processes, robotics, electronic circuits,
engineered materials, and physics.
• The formulation of a NES:

  e_1(x) = 0
  e_2(x) = 0
  ...
  e_m(x) = 0

  where x = (x_1, ..., x_D) ∈ S = ∏_{i=1}^{D} [L_i, U_i]
Nonlinear Equation Systems (NESs) (2/2)
• An example
  e_1(x_1, x_2) = x_1^2 + x_2^2 − 1 = 0
  e_2(x_1, x_2) = x_1 − x_2 = 0
  −1 ≤ x_1, x_2 ≤ 1

[Figure: the two optimal solutions, the intersections of the unit circle with the line x_1 = x_2]

A NES may contain multiple optimal solutions.
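The two optimal solutions of this small system can be checked numerically; a minimal Python sketch (the function names are mine, not from the talk):

```python
import math

# The example NES: e1(x) = x1^2 + x2^2 - 1 and e2(x) = x1 - x2 on [-1, 1]^2.
def e1(x1, x2):
    return x1**2 + x2**2 - 1.0

def e2(x1, x2):
    return x1 - x2

# The two optimal solutions are the intersections of the unit circle
# with the line x1 = x2: (sqrt(1/2), sqrt(1/2)) and its negative.
r = math.sqrt(0.5)
for x1, x2 in [(r, r), (-r, -r)]:
    assert abs(e1(x1, x2)) < 1e-12 and abs(e2(x1, x2)) < 1e-12
```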
Solving NESs by Evolutionary Algorithms (1/4)
• The aim of solving NESs by evolutionary algorithms (EAs)
– Locate all the optimal solutions in a single run
• At present, there are three kinds of methods
– Single-objective optimization based methods
– Constrained optimization based methods
– Multiobjective optimization based methods
Solving NESs by Evolutionary Algorithms (2/4)
• Single-objective optimization based methods
  e_1(x) = 0, e_2(x) = 0, ..., e_m(x) = 0

  is transformed into

  minimize Σ_{i=1}^{m} |e_i(x)|   or   minimize Σ_{i=1}^{m} e_i(x)^2
• The main drawback
– Usually, only one optimal solution can be found in a single run
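This drawback is easy to reproduce: a single descent run on the scalarized residual settles into exactly one root, chosen by the start point. A toy coordinate search on the example system (my illustration, not the talk's algorithm):

```python
# Scalarized single-objective form of the example NES:
# minimize r(x) = |x1^2 + x2^2 - 1| + |x1 - x2|.
def residual(x1, x2):
    return abs(x1**2 + x2**2 - 1.0) + abs(x1 - x2)

def coordinate_search(x1, x2, step=0.1, iters=200):
    """Greedy coordinate descent; returns one local minimizer."""
    for _ in range(iters):
        best = (x1, x2)
        for cand in [(x1 + step, x2), (x1 - step, x2),
                     (x1, x2 + step), (x1, x2 - step)]:
            if residual(*cand) < residual(*best):
                best = cand
        if best == (x1, x2):
            step *= 0.5          # no improving move: refine the step
        x1, x2 = best
    return x1, x2

# Starting near one root finds only that root; the other root near
# (-0.707, -0.707) is never seen in this run.
x1, x2 = coordinate_search(0.9, 0.9)
assert residual(x1, x2) < 0.05 and x1 > 0
```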
Solving NESs by Evolutionary Algorithms (3/4)
• Constrained optimization based methods
  minimize Σ_{i=1}^{m} |e_i(x)|
  subject to e_i(x) = 0, i = 1, ..., m

  applied to the NES e_1(x) = 0, e_2(x) = 0, ..., e_m(x) = 0
• The main drawbacks
– Similar to the first kind of method, these methods can locate only one optimal solution in a single run
– Additional constraint-handling techniques must be integrated
Solving NESs by Evolutionary Algorithms (4/4)
• Multiobjective optimization based methods (CA method)
  minimize |e_1(x)|
  minimize |e_2(x)|
  ...
  minimize |e_m(x)|

  i.e., the NES e_1(x) = 0, e_2(x) = 0, ..., e_m(x) = 0 becomes an m-objective optimization problem
• The main drawbacks
– It may suffer from the "curse of dimensionality" (i.e., many objectives)
– Possibly only one solution can be found in a single run

C. Grosan and A. Abraham, "A new approach for solving nonlinear equation systems," IEEE Transactions on Systems, Man, and Cybernetics - Part A, vol. 38, no. 3, pp. 698-714, 2008.
MONES: Multiobjective Optimization for NESs (1/9)
• The main motivation
– When solving a NES by EAs, it is expected to locate
multiple optimal solutions in a single run
– Obviously, this process is similar to solving multiobjective optimization problems by EAs
– A question that naturally arises is whether a NES can be transformed into a multiobjective optimization problem, so that multiobjective EAs can be used to solve the transformed problem
W. Song, Y. Wang, H.-X. Li, and Z. Cai, “Locating multiple optimal solutions of nonlinear
equation systems based on multiobjective optimization,” IEEE Transactions on
Evolutionary Computation, Accepted.
MONES: Multiobjective Optimization for NESs (2/9)
• Multiobjective optimization problems
minimize f(x) = (f_1(x), f_2(x), ..., f_m(x))
– Pareto dominance: x_a Pareto dominates x_b iff f_i(x_a) ≤ f_i(x_b) for all i = 1, ..., m and f_j(x_a) < f_j(x_b) for at least one j
– Pareto optimal solutions: the set of all the nondominated solutions
– Pareto front: the images of the Pareto optimal solutions in the objective space

[Figure: x_a, x_b, and x_c in the f_1-f_2 objective space, where x_a Pareto dominates x_b]
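For minimization, this dominance test is a few lines of Python (the helper name is mine):

```python
def dominates(fa, fb):
    """True if objective vector fa Pareto dominates fb (minimization):
    fa is no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

assert dominates((1.0, 2.0), (1.5, 2.0))       # better in f1, equal in f2
assert not dominates((1.0, 3.0), (2.0, 1.0))   # incomparable vectors
assert not dominates((1.0, 2.0), (1.0, 2.0))   # equal vectors do not dominate
```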
MONES: Multiobjective Optimization for NESs (3/9)
• The main idea
The NES e_1(x) = 0, e_2(x) = 0, ..., e_m(x) = 0 is transformed into the biobjective optimization problem

  minimize f_1(x) = x_1 + Σ_{i=1}^{m} |e_i(x)|
  minimize f_2(x) = 1 − x_1 + m · max(|e_1(x)|, ..., |e_m(x)|)

(① the first term: x_1 and 1 − x_1; ② the second term: the residual-based penalties)
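The transformation can be sketched directly: at an exact root both residual terms vanish, so the root's image lands on the segment y = 1 − x (the function names are mine, not from the talk):

```python
import math

def mones_objectives(x, equations):
    """MONES: map a NES into two objectives to be minimized."""
    res = [abs(e(x)) for e in equations]
    m = len(equations)
    f1 = x[0] + sum(res)                 # x1 plus the summed residuals
    f2 = 1.0 - x[0] + m * max(res)       # 1 - x1 plus the scaled max residual
    return f1, f2

# Running example: unit circle intersected with the line x1 = x2.
eqs = [lambda x: x[0]**2 + x[1]**2 - 1.0,
       lambda x: x[0] - x[1]]
r = math.sqrt(0.5)
f1, f2 = mones_objectives((r, r), eqs)
assert abs(f1 + f2 - 1.0) < 1e-12        # the image lies on y = 1 - x
```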
MONES: Multiobjective Optimization for NESs (4/9)
• The principle of the first term
  minimize β_1(x) = x_1
  minimize β_2(x) = 1 − x_1

The images of the optimal solutions of the first term in the objective space are located on the line segment defined by y = 1 − x.
Each decision vector in the decision space of a NES is a Pareto optimal solution of the first term.
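The first term alone makes every decision vector Pareto optimal, because lowering β_1 = x_1 always raises β_2 = 1 − x_1 by the same amount; a two-line check (the helper names are mine):

```python
def first_term(x1):
    # The two objectives contributed by the first term of MONES.
    return (x1, 1.0 - x1)

def dominates(fa, fb):
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

# Any two distinct x1 values are mutually nondominated: improving f1
# by exactly d worsens f2 by exactly d.
a, b = first_term(0.25), first_term(0.5)
assert not dominates(a, b) and not dominates(b, a)
```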
MONES: Multiobjective Optimization for NESs (5/9)
• The principle of the second term
  minimize α_1(x) = Σ_{i=1}^{m} |e_i(x)|
  minimize α_2(x) = m · max(|e_1(x)|, ..., |e_m(x)|)

Remark I: for an optimal solution x* of a NES, α_1(x*) = α_2(x*) = 0
Remark II: the parameter m is used to make α_1(x) and α_2(x) have a similar scale
MONES: Multiobjective Optimization for NESs (6/9)
• The principle of the first term plus the second term
  minimize f_1(x) = x_1 + Σ_{i=1}^{m} |e_i(x)|
  minimize f_2(x) = 1 − x_1 + m · max(|e_1(x)|, ..., |e_m(x)|)

The images of the optimal solutions of a NES in the objective space are located on the line segment defined by y = 1 − x.

[Figure: the Pareto front is the line segment from (0, 1) to (1, 0)]
MONES: Multiobjective Optimization for NESs (7/9)
• Summary
– In MONES, a NES has been transformed into a biobjective optimization problem
– Our transformation technique has several desirable properties
– Multiobjective EAs (such as NSGA-II) can easily be used to solve the transformed biobjective optimization problem
MONES: Multiobjective Optimization for NESs (8/9)
• The differences between CA and MONES
  e_1(x_1, x_2) = x_1^2 + x_2^2 − 1 = 0
  e_2(x_1, x_2) = x_1 − x_2 = 0
  −1 ≤ x_1, x_2 ≤ 1

[Figure: the final populations of CA and MONES in the x_1-x_2 decision space]
MONES: Multiobjective Optimization for NESs (9/9)
• The differences between CA and MONES
  e_1(x_1, x_2) = x_1^2 + x_2^2 − 1 = 0
  e_2(x_1, x_2) = x_1 − x_2 = 0
  −1 ≤ x_1, x_2 ≤ 1

[Figure: the optimal solutions and the points A-F in the x_1-x_2 decision space, compared for CA and MONES]
The Experimental Results (1/4)
• Test instances
The Experimental Results (2/4)
• IGD indicator: Inverted Generational Distance

[Figure: IGD is computed between the Pareto front (the segment y = 1 − x from (0, 1) to (1, 0)) and the images of the best solutions found]
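IGD can be computed as the mean distance from each reference point on the true Pareto front to its nearest obtained point; a minimal sketch (the function name is mine):

```python
import math

def igd(reference, obtained):
    """Inverted Generational Distance: average, over the reference
    (true Pareto front) points, of the distance to the nearest
    obtained point. Lower is better; 0 means the front is covered."""
    return (sum(min(math.dist(ref, obt) for obt in obtained)
                for ref in reference) / len(reference))

# Reference points sampled on the front y = 1 - x.
ref = [(0.0, 1.0), (0.25, 0.75), (0.5, 0.5), (0.75, 0.25), (1.0, 0.0)]
assert igd(ref, ref) == 0.0                    # a perfect approximation
assert igd(ref, [(0.5, 0.5)]) > igd(ref, ref)  # one point covers less
```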
The Experimental Results (3/4)
• NOF indicator: Number of the Optimal Solutions Found
The Experimental Results (4/4)
[Figure: convergence behavior in a typical run of CA (top) and MONES (bottom) in the decision space on F1, F4, and F5]
Multimodal Optimization Problems (MMOPs) (1/2)
• Many optimization problems in real-world applications exhibit the multimodal property, i.e., multiple optimal solutions may coexist.
• The formulation of multimodal optimization problems
(MMOPs)
  maximize/minimize f(x), x = (x_1, ..., x_D) ∈ S = ∏_{i=1}^{D} [L_i, U_i]
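A concrete MMOP for illustration (my example, not from the talk): f(x) = sin(5πx) on [0, 1] has three coexisting global maxima, all with f = 1:

```python
import math

# f(x) = sin(5*pi*x) on [0, 1]: maxima where 5*pi*x = pi/2 + 2*k*pi,
# i.e. x = 0.1, 0.5, 0.9 -- three coexisting optimal solutions.
def f(x):
    return math.sin(5 * math.pi * x)

optima = [0.1, 0.5, 0.9]
assert all(abs(f(x) - 1.0) < 1e-12 for x in optima)
```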
Multimodal Optimization Problems (MMOPs) (2/2)
• Several examples
The Previous Work (1/2)
• Niching methods
– The first niching method
 The preselection method suggested by Cavicchio in 1970
– The current popular niching methods
 Clearing (Pétrowski, ICEC, 1996)
 Fitness sharing (Goldberg and Richardson, ICGA, 1987)
 Crowding (De Jong, PhD dissertation, 1975)
 Restricted tournament selection (Harik, ICGA, 1995)
 Speciation (Li et al., ECJ, 2002)
• The disadvantages
– Some problem-dependent niching parameters are required
The Previous Work (2/2)
• Multiobjective optimization based methods, usually two
objectives are considered:
– The first objective: the original multimodal function
– The second objective: the distance information (Das et al.,
IEEE TEVC, 2013) or the gradient information (Deb and
Saha, ECJ, 2012)
• The disadvantages
– It cannot guarantee that the two objectives in the transformed problem totally conflict with each other
– The relationship between the optimal solutions of the original problems and the Pareto optimal solutions of the transformed problems is difficult to verify theoretically
MOMMOP: Multiobjective Optimization for MMOPs (1/5)
• The main motivation
  minimize β_1(x) = x_1
  minimize β_2(x) = 1 − x_1

Y. Wang, H.-X. Li, G. G. Yen, and W. Song, "MOMMOP: Multiobjective optimization for locating multiple optimal solutions of multimodal optimization problems," IEEE Transactions on Cybernetics, Accepted.
MOMMOP: Multiobjective Optimization for MMOPs (2/5)
• The main idea
– Convert an MMOP into a biobjective optimization problem
  minimize f_1(x) = x_1 + (U_1 − L_1) · |f(x) − best_refer| / |worst_refer − best_refer|   ①
  minimize f_2(x) = 1 − x_1 + (U_1 − L_1) · |f(x) − best_refer| / |worst_refer − best_refer|   ②
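A minimal sketch of the two objectives, assuming best_refer and worst_refer are maintained externally during the run (the names follow the slides; the helper itself is mine):

```python
def mommop_objectives(x1, fx, best_refer, worst_refer, L1, U1):
    """MOMMOP's two objectives built on the first decision variable x1.
    fx is this individual's objective value; best_refer / worst_refer
    are the best / worst values found so far during the evolution."""
    penalty = (U1 - L1) * abs(fx - best_refer) / abs(worst_refer - best_refer)
    return (x1 + penalty, 1.0 - x1 + penalty)

# An individual attaining the best value found so far has zero penalty,
# so its image lies exactly on the segment y = 1 - x:
f1, f2 = mommop_objectives(x1=0.25, fx=1.0, best_refer=1.0,
                           worst_refer=0.0, L1=0.0, U1=1.0)
assert (f1, f2) == (0.25, 0.75)
```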
MOMMOP: Multiobjective Optimization for MMOPs (3/5)
• The principle of the second term
  (U_1 − L_1) · |f(x) − best_refer| / |worst_refer − best_refer|

where
– f(x): the objective function value of the current individual
– best_refer: the objective function value of the best individual found during the evolution
– worst_refer: the objective function value of the worst individual found during the evolution
– (U_1 − L_1): the range of the first variable, used as a scaling factor

Remark: the aim is to make the first term and the second term have the same scale.
For the optimal solutions of the original multimodal optimization problems, the values of the second term are equal to zero.
MOMMOP: Multiobjective Optimization for MMOPs (4/5)
• The principle of the first term plus the second term
  minimize f_1(x) = x_1 + (U_1 − L_1) · |f(x) − best_refer| / |worst_refer − best_refer|
  minimize f_2(x) = 1 − x_1 + (U_1 − L_1) · |f(x) − best_refer| / |worst_refer − best_refer|

The images of the optimal solutions of an MMOP in the objective space are located on the line segment defined by y = 1 − x.

[Figure: the Pareto front is the line segment from (0, 1) to (1, 0)]
MOMMOP: Multiobjective Optimization for MMOPs (5/5)
• Why does MOMMOP work?
– MOMMOP is an implicit niching method

[Figure: a one-dimensional multimodal function f(x) with three individuals, x_a at 0.1 with f(x_a) = 1, x_b at 0.15 with f(x_b) = 0.8, and x_c at 0.6 with f(x_c) = 0.8, and their images in the f_1-f_2 objective space]

  f_1(x_a) = 0.1 + 0.0 = 0.1      f_2(x_a) = 1 − 0.1 + 0.0 = 0.9
  f_1(x_b) = 0.15 + 0.2 = 0.35    f_2(x_b) = 1 − 0.15 + 0.2 = 1.05
  f_1(x_c) = 0.6 + 0.2 = 0.8      f_2(x_c) = 1 − 0.6 + 0.2 = 0.6

x_a Pareto dominates its close neighbor x_b, but x_a and x_c, which lie in different basins of attraction, do not dominate each other.
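The slide's arithmetic can be replayed directly: x_a eliminates its close, worse neighbor x_b, while x_a and x_c from different basins remain mutually nondominated (the dominance helper is mine):

```python
def dominates(fa, fb):
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

# (x1 + penalty, 1 - x1 + penalty) pairs as in the slide's example.
fa = (0.1 + 0.0,  1 - 0.1 + 0.0)    # xa: an optimum, zero penalty
fb = (0.15 + 0.2, 1 - 0.15 + 0.2)   # xb: close to xa, penalty 0.2
fc = (0.6 + 0.2,  1 - 0.6 + 0.2)    # xc: a different basin, penalty 0.2

assert dominates(fa, fb)                                # xa removes xb
assert not dominates(fa, fc) and not dominates(fc, fa)  # xa, xc coexist
```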
Two issues in MOMMOP (1/3)
• The first issue
– Some optimal solutions may have the same value in one or many decision variables
– Therefore, the transformation is constructed on every decision variable x_k (k = 1, ..., D), which yields D biobjective optimization problems (BOPs):

  BOP_k:  minimize f_1(x) = x_k + (U_k − L_k) · |f(x) − best_refer| / |worst_refer − best_refer|
          minimize f_2(x) = 1 − x_k + (U_k − L_k) · |f(x) − best_refer| / |worst_refer − best_refer|

  x_a ≻ x_b on BOP_1 ∧ x_a ≻ x_b on BOP_2 ∧ ... ∧ x_a ≻ x_b on BOP_D  ⟹  x_a dominates x_b
Two issues in MOMMOP (2/3)
• The second issue
– In some basins of attraction, there may be only a few individuals
– Meanwhile, some individuals in the same basin may be quite similar to each other
– Therefore, an additional comparison rule is used:

  f(x_a) is better than f(x_b) ∧ ‖normalization(x_a) − normalization(x_b)‖_2 < 0.01  ⟹  x_a dominates x_b
Two issues in MOMMOP (3/3)
• When comparing two individuals:

  x_a ≻ x_b on BOP_1 ∧ x_a ≻ x_b on BOP_2 ∧ ... ∧ x_a ≻ x_b on BOP_D
  or f(x_a) is better than f(x_b) ∧ ‖normalization(x_a) − normalization(x_b)‖_2 < 0.01
  ⟹  x_a dominates x_b
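Putting both rules together, the comparison used when ranking two individuals can be sketched as follows (all helper names are mine; the per-BOP objective pairs are assumed to be precomputed):

```python
import math

def pareto_dominates(fa, fb):
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

def mommop_dominates(bops_a, bops_b, f_a, f_b, norm_a, norm_b, maximize=True):
    """bops_a[k], bops_b[k]: the (f1, f2) pair of BOP_k for each individual;
    norm_a, norm_b: the individuals normalized to [0, 1]^D."""
    # Rule 1: dominance on every one of the D biobjective problems.
    if all(pareto_dominates(fa, fb) for fa, fb in zip(bops_a, bops_b)):
        return True
    # Rule 2: better objective value and very close in the decision space.
    better = f_a > f_b if maximize else f_a < f_b
    return better and math.dist(norm_a, norm_b) < 0.01

# Rule 1 fires:
assert mommop_dominates([(0.1, 0.9)], [(0.35, 1.05)], 1.0, 0.8, (0.1,), (0.15,))
# Rule 2 fires for two near-identical individuals:
assert mommop_dominates([(0.5, 0.5)], [(0.4, 0.7)], 1.0, 0.9, (0.5,), (0.505,))
# Neither rule fires for distant, incomparable individuals:
assert not mommop_dominates([(0.5, 0.5)], [(0.4, 0.7)], 1.0, 0.9, (0.5,), (0.9,))
```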
Test Instances
20 benchmark test functions developed for the IEEE CEC2013 special session and competition on niching methods for multimodal function optimization
The Experimental Results (1/3)
• Comparison with four recent methods in IEEE CEC2013
The Experimental Results (2/3)
• Comparison with four state-of-the-art single-objective
optimization based approaches
The Experimental Results (3/3)
• Comparison with two state-of-the-art multiobjective
optimization based approaches
Future Work
• We have proposed a multiobjective optimization based framework for nonlinear equation systems and for multimodal optimization problems, respectively; however:
– The principle should be analyzed in depth in the future
– The rationality should be further verified
– The framework could be further improved
• This generic framework could be generalized to solve other kinds of optimization problems
The source codes of MONES and MOMMOP can be downloaded from:
http://ist.csu.edu.cn/YongWang.htm