
Swarm Intelligence
虞台文
Content

Overview
Particle Swarm Optimization (PSO)
– Example
Ant Colony Optimization (ACO)
Swarm Intelligence
Overview
Swarm Intelligence

Collective system capable of accomplishing difficult tasks in dynamic and varied environments without any external guidance or control and with no central coordination
Achieving a collective performance which could not normally be achieved by an individual acting alone
Constituting a natural model particularly suited to distributed problem solving
Swarm Intelligence
http://www.scs.carleton.ca/~arpwhite/courses/95590Y/notes/SI%20Lecture%203.pdf
Particle Swarm Optimization (PSO)
Basic Concept
The Inventors

Russell Eberhart (electrical engineer)
James Kennedy (social psychologist)

Developed in 1995 by James Kennedy and Russell Eberhart.
Particle Swarm Optimization (PSO)

PSO is a robust stochastic optimization technique based on the movement and intelligence of swarms.
PSO applies the concept of social interaction to problem solving.
PSO Search Scheme


It uses a number of agents, i.e., particles, that constitute a swarm moving around in the search space looking for the best solution.
Each particle is treated as a point in an N-dimensional space which adjusts its “flying” according to its own flying experience as well as the flying experience of other particles.
Particle Flying Model

pbest  the best solution achieved so far by that particle.

gbest  the best value obtained so far by any particle in the
neighborhood of that particle.

The basic concept of PSO lies in
accelerating each particle
toward its pbest and the gbest
locations, with a random
weighted acceleration at each
time.
Particle Flying Model
v  w1d
k
pbest k
v
v
k
v
sk
 w2 d
k
gbest k
sk 1
d
k 1
d
pbest k
w1  c1  rand ()
w2  c2  rand ()
gbest k
gbest k
pbest k
Particle Flying Model

Each particle tries to modify its position using the following information:
– the current positions,
– the current velocities,
– the distance between the current position and pbest,
– the distance between the current position and gbest.

Δv^k = w1·d^k_pbest + w2·d^k_gbest,   w1 = c1·rand(), w2 = c2·rand()
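To make the update concrete, here is a minimal Python sketch of one velocity increment along a single dimension, assuming illustrative coefficient values c1 = c2 = 2 (the slides do not fix c1 and c2 at this point):

import random

def velocity_increment(s, v, pbest, gbest, c1=2.0, c2=2.0):
    """One-dimensional version of dv^k = w1*d_pbest + w2*d_gbest,
    with w1 = c1*rand() and w2 = c2*rand()."""
    w1 = c1 * random.random()
    w2 = c2 * random.random()
    dv = w1 * (pbest - s) + w2 * (gbest - s)   # pull toward pbest and gbest
    return v + dv                              # velocity for the next step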
Particle Flying Model
k 1
i
s
k 1
i
v
k 1
i
 s v
k
i
v  w1d
k
pbest k
vk
 v  v
k
i
k
i
vk
v
sk
 w2 d
gbest k
sk 1
d
k
11
d pbest
k
w1  c1  rand ()
w2  c2  rand ()
gbest k
gbest k
pbest k
vik  c1  rand ()  ( pbestik  sik )  c2  rand ()  ( gbest k  sik )
*
vik 1  vik  vik
vik  c1  rand ()  ( pbestik  sik )  c2  rand ()  ( gbest k  sik )
PSO Algorithm
**
sik 1  sik  vik
For each particle
    Initialize particle
End

Do
    For each particle
        Calculate fitness value
        If the fitness value is better than the best fitness value (pbest) in history
            Set current value as the new pbest
    End
    Choose the particle with the best fitness value of all the particles as the gbest
    For each particle
        Calculate particle velocity according to equation (*)
        Update particle position according to equation (**)
    End
While maximum iterations or minimum error criterion is not attained
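A minimal Python sketch of this loop, assuming a maximization problem; the swarm size, iteration count, and coefficient values c1 = c2 = 2 are illustrative defaults rather than values prescribed by the slides.

import random

def pso(fitness, dim, bounds, n_particles=30, n_iters=100, c1=2.0, c2=2.0):
    """Basic PSO following equations (*) and (**); maximizes `fitness`."""
    lo, hi = bounds
    # Initialize particles at random positions with zero velocities.
    s = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in s]
    pbest_val = [fitness(p) for p in s]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                # Equation (*): accelerate toward pbest and gbest, update velocity.
                dv = (c1 * random.random() * (pbest[i][d] - s[i][d])
                      + c2 * random.random() * (gbest[d] - s[i][d]))
                v[i][d] += dv
                # Equation (**): move the particle, clamped to the search bounds.
                s[i][d] = min(max(s[i][d] + v[i][d], lo), hi)
            val = fitness(s[i])
            if val > pbest_val[i]:            # better personal best found
                pbest[i], pbest_val[i] = s[i][:], val
                if val > gbest_val:           # better global best found
                    gbest, gbest_val = s[i][:], val
    return gbest, gbest_val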
Swarm Intelligence
Particle Swarm Optimization (PSO)
Examples
Schwefel's Function
f(x) = Σ_{i=1..n} x_i · sin(√|x_i|),   with −500 ≤ x_i ≤ 500

Global optimum (maximum): f(x) = n × 418.9829 at x_i = 420.9687, i = 1:n
Simulation  Initialization
Simulation  After 5 Generations
Simulation  After 10 Generations
Simulation  After 15 Generations
Simulation  After 20 Generations
Simulation  After 25 Generations
Simulation  After 100 Generations
Simulation  After 500 Generations
Summary

Iterations    gBest
0             416.245599
5             515.748796
10            759.404006
15            793.732019
20            834.813763
100           837.911535
5000          837.965771
Optimum       837.9658

(Plot: gBest versus generation on a logarithmic axis, rising from about 400 toward the optimum near 838.)
Exercises


Compare PSO with GA.
Can we use PSO to train neural networks? How?
Particle Swarm Optimization (PSO)
Ant Colony Optimization (ACO)
Facts


Many discrete optimization problems are difficult to solve, e.g., NP-hard.
Soft computing techniques to cope with these problems:
– Simulated Annealing (SA): based on physical systems
– Genetic Algorithm (GA): based on natural selection and genetics
– Ant Colony Optimization (ACO): modeling ant colony behavior
Ant Colony Optimization
Background



Introduced by Marco Dorigo (Milan, Italy) and others in the early 1990s.
A probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs.
It is inspired by the behaviour of ants finding paths from the colony to food.
Typical Applications

TSP  Traveling Salesman Problem

Quadratic assignment problems

Scheduling problems

Dynamic routing problems in networks
Natural Behavior of Ants
ACO Concept

Ants (blind) navigate from the nest to a food source.
The shortest path is discovered via pheromone trails:
– each ant moves at random, probabilistically
– pheromone is deposited on the path
– ants detect the lead ant’s path and are inclined to follow it, i.e., more pheromone on a path increases the probability of that path being followed
ACO System

Virtual “trail” accumulated on path segments
Starting node selected at random
Path selection philosophy:
– based on the amount of “trail” present on possible paths from the starting node
– higher probability for paths with more “trail”
Ant reaches the next node, selects the next path
Continues until the goal (e.g., the starting node for TSP) is reached
The finished “tour” is a solution
ACO System, cont.

A completed tour is analyzed for optimality.
“Trail” amount is adjusted to favor better solutions:
– better solutions receive more trail
– worse solutions receive less trail
– this gives a higher probability of an ant selecting a path that is part of a better-performing tour
A new cycle is performed.
Repeated until most ants select the same tour on every cycle (convergence to a solution).
Ant Algorithm for TSP
Randomly position m ants on n cities
Loop
    for step = 1 to n
        for k = 1 to m
            Choose the next city to move to by applying
            a probabilistic state transition rule (to be described)
        end for
    end for
    Update pheromone trails
Until End_condition
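A Python skeleton of this loop, in which each ant builds its complete tour in turn (a simplification of the step-interleaved loop above); choose_next_city (the probabilistic state transition rule) and update_pheromone are sketched after the corresponding slides below, and the fixed number of cycles used as the end condition is an assumption.

import random

def ant_colony_tsp(d, m, n_cycles=100, tau0=1.0):
    """d: n x n distance matrix, m: number of ants.
    Each ant builds a full tour per cycle; pheromone is updated at cycle end."""
    n = len(d)
    tau = [[tau0] * n for _ in range(n)]          # tau_ij(0) = C
    best_tour, best_len = None, float("inf")
    for _ in range(n_cycles):
        tours, lengths = [], []
        for _ in range(m):
            tour = [random.randrange(n)]          # random starting city
            while len(tour) < n:
                allowed = [j for j in range(n) if j not in tour]
                tour.append(choose_next_city(tour[-1], allowed, tau, d))
            length = sum(d[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))
            tours.append(tour)
            lengths.append(length)
            if length < best_len:
                best_tour, best_len = tour, length
        tau = update_pheromone(tau, tours, lengths)
    return best_tour, best_len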
Pheromone Intensity

τ_ij(t) : the pheromone intensity on edge (i, j) at time t
τ_ij(0) = C for all i, j
Ant Transition Rule
Probability of ant k going from city i to j:

p_ij^k(t) = [τ_ij(t)]^α · [η_ij]^β  /  Σ_{l ∈ J_i^k} [τ_il(t)]^α · [η_il]^β

J_i^k : the set of nodes applicable to ant k at city i
α, β ≥ 0
η_ij = 1/d_ij   (visibility)
Ant Transition Rule
Probability of ant k going from city i to j (the same rule as above), with α, β ≥ 0 and η_ij = 1/d_ij:

α = 0 : a greedy approach driven by visibility alone
β = 0 : a rapid selection of tours, driven by pheromone alone, that may not be optimal
Thus, a tradeoff is necessary.
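A possible Python rendering of this transition rule, with roulette-wheel sampling over the allowed cities; the default exponents alpha = 1 and beta = 2 are common choices, not values given on the slides.

import random

def choose_next_city(i, allowed, tau, d, alpha=1.0, beta=2.0):
    """Sample ant k's next city j from J_i^k with probability proportional to
    tau[i][j]**alpha * (1/d[i][j])**beta  (pheromone times visibility)."""
    weights = [(tau[i][j] ** alpha) * ((1.0 / d[i][j]) ** beta) for j in allowed]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for j, w in zip(allowed, weights):
        acc += w
        if acc >= r:
            return j
    return allowed[-1]   # guard against floating-point round-off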

Pheromone Update
Δτ_ij^k(t) = Q / L_k(t)   if (i, j) ∈ T_k(t)
           = 0            otherwise

Q : a constant
T_k(t) : the tour of ant k at time t
L_k(t) : the tour length for ant k at time t

Δτ_ij(t) = Σ_{k=1..m} Δτ_ij^k(t)

τ_ij(t) ← (1 − ρ) · τ_ij(t) + Δτ_ij(t),   where ρ is the evaporation rate
Demonstration
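Finally, a minimal Python sketch of the pheromone update above, completing the ant_colony_tsp and choose_next_city sketches given earlier; the values of rho and Q are illustrative, and each tour is treated as a closed cycle of a symmetric TSP.

def update_pheromone(tau, tours, lengths, rho=0.5, Q=100.0):
    """Evaporate by (1 - rho), then deposit Q / L_k on every edge of tour T_k."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)                  # tau_ij <- (1 - rho) * tau_ij
    for tour, length in zip(tours, lengths):
        for a, b in zip(tour, tour[1:] + tour[:1]):   # edges of the closed tour
            tau[a][b] += Q / length                   # delta_tau_ij^k = Q / L_k
            tau[b][a] += Q / length                   # symmetric problem
    return tau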