The Farmer, Wolf, Duck, Corn Problem

Also known as: Farmer, Wolf, Goat, Cabbage; Farmer, Fox, Chicken, Corn; Farmer, Dog, Rabbit, Lettuce.
A farmer with his wolf, duck, and bag of corn comes to the east side of a river he wishes to cross. There is a boat at the river's edge, but of course only the farmer can row. The boat can only hold two things (including the rower) at any one time. If the wolf is ever left alone with the duck, the wolf will eat it. Similarly, if the duck is ever left alone with the corn, the duck will eat it. How can the farmer get across the river so that all four arrive safely on the other side?
The Farmer, Wolf, Duck, Corn problem dates back to the eighth century and the writings of Alcuin, a poet, educator, cleric, and friend of Charlemagne.
In the diagrams that follow, each state lists which of F (farmer), W (wolf), D (duck), and C (corn) are on each bank. A state written "FWDC" means that everybody/everything is on the same side of the river; a state with W on the far bank means that we somehow got the wolf to the other side.

[Figure: search tree for "Farmer, Wolf, Duck, Corn", showing the first expansions of the initial state FWDC]
[Figure: search tree for "Farmer, Wolf, Duck, Corn", expanded further; illegal states (e.g. wolf left alone with duck) and repeated states are marked]
[Figure: the full search tree for "Farmer, Wolf, Duck, Corn"; the legend marks illegal states, repeated states, and the goal state]
Reading the solution path off the tree, starting from the initial state:

1. Farmer takes duck to left bank.
2. Farmer returns alone.
3. Farmer takes wolf to left bank.
4. Farmer returns with duck.
5. Farmer takes corn to left bank.
6. Farmer returns alone.
7. Farmer takes duck to left bank. Success!
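To make the walk-through concrete, here is a minimal sketch of the puzzle as a search problem in Python. Everything here (the frozenset encoding of the left bank, the function names, breadth-first search as the strategy) is an illustrative choice, not something prescribed by the slides.

    # A state is the set of who is on the left bank; everyone starts on the right.
    from collections import deque

    ITEMS = {"F", "W", "D", "C"}

    def illegal(state):
        """A bank is unsafe if the farmer is absent and wolf+duck or duck+corn share it."""
        for bank in (state, ITEMS - state):
            if "F" not in bank and ({"W", "D"} <= bank or {"D", "C"} <= bank):
                return True
        return False

    def successors(state):
        """The farmer crosses alone or with one item from his current bank."""
        here = state if "F" in state else ITEMS - state
        for passenger in [None] + sorted(here - {"F"}):
            movers = {"F"} | ({passenger} if passenger else set())
            new = (state - movers) if "F" in state else (state | movers)
            if not illegal(new):
                yield frozenset(new)

    def solve(start=frozenset(), goal=frozenset(ITEMS)):
        """Breadth-first search from 'everyone on the right' to 'everyone on the left'."""
        frontier, seen = deque([(start, [start])]), {start}
        while frontier:
            state, path = frontier.popleft()
            if state == goal:
                return path
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))

    print(solve())  # eight states, i.e. the seven crossings listed above
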
Problem Solving using Search

A Problem Space consists of:
• The current state of the world (initial state).
• A description of the actions we can take to transform one state of the world into another (operators).
• A description of the desired state of the world (goal state); this could be implicit or explicit.
• A solution consists of the goal state*, or a path to the goal state.

* Problems where the path does not matter are known as "constraint satisfaction" problems.
Examples of problem spaces:

8-puzzle
• Initial state: some scrambled arrangement of the tiles 1-8 and the blank.
• Operators: slide blank square left, slide blank square right, ...
• Goal state: the tiles in order:
  1 2 3
  4 5 6
  7 8 _

FWDC
• Initial state: FWDC on one bank.
• Operators: Move F, Move F with W, ...
• Goal state: FWDC on the other bank.

Algebraic simplification
• Operators: distributive property, associative property, ...

4-Queens
• Operator: add a queen such that it does not attack other, previously placed queens.
• Goal state: a 4 by 4 chessboard with 4 queens placed on it such that none are attacking each other.
Representing the states

A state space representation should describe:
• Everything that is needed to solve the problem.
• Nothing that is not needed to solve the problem.

In general, many representations are possible; choosing a good representation will make solving the problem much easier.

For the 8-puzzle:
• A 3 by 3 array:
  5, 6, 7
  8, 4, BLANK
  3, 1, 2
• A vector of length nine: 5, 6, 7, 8, 4, BLANK, 3, 1, 2
• A list of facts:
  Upper_left = 5
  Upper_middle = 6
  Upper_right = 7
  Middle_left = 8
  ...

Choose the representation that makes the operators easiest to implement.
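The three candidate representations translate directly into Python literals. This sketch uses my own variable names, with None standing for the blank:

    grid   = ((5, 6, 7), (8, 4, None), (3, 1, 2))    # 3 by 3 array
    vector = (5, 6, 7, 8, 4, None, 3, 1, 2)          # vector of length nine
    facts  = {"upper_left": 5, "upper_middle": 6,    # a list of facts
              "upper_right": 7, "middle_left": 8}    # ... and so on
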
Operators I
• Single atomic actions that can transform one state into another.
• You must specify an exhaustive list of operators, otherwise the problem may be unsolvable.
• Operators consist of:
  • Precondition: a description of any conditions that must be true before using the operator.
  • Instructions on how the operator changes the state.
• In general, for any given state, not all operators are possible.

Examples:
In FWDC, the operator Move_Farmer_Left is not possible if the farmer is already on the left bank.
In the 8-puzzle shown, the operator Move_6_down is possible, but the operator Move_7_down is not.

[Figure: an 8-puzzle configuration]
Operators II

There are often many ways to specify the operators; some will be much easier to implement...

Example: For the eight puzzle we could have:
• Move 1 left
• Move 1 right
• Move 1 up
• Move 1 down
• Move 2 left
• Move 2 right
• Move 2 up
• Move 2 down
• Move 3 left
• ... (four operators for each of the eight tiles)

Or:
• Move Blank left
• Move Blank right
• Move Blank up
• Move Blank down
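The second formulation needs only one parameterized operator. A minimal sketch on the length-nine vector representation (0 standing for the blank; the function name is my own):

    def move_blank(state, direction):
        """Return the successor state, or None if the precondition fails."""
        i = state.index(0)                      # where the blank is
        row, col = divmod(i, 3)
        delta = {"left": (0, -1), "right": (0, 1), "up": (-1, 0), "down": (1, 0)}
        r, c = row + delta[direction][0], col + delta[direction][1]
        if not (0 <= r < 3 and 0 <= c < 3):     # precondition: stay on the board
            return None
        j = r * 3 + c
        s = list(state)
        s[i], s[j] = s[j], s[i]                 # slide the neighbouring tile
        return tuple(s)
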
A complete example: The Water Jug Problem

A farm hand was sent to a nearby pond to fetch 2 gallons of water. He was given two pails, one 4 and the other 3 gallons. How can he measure the requested amount of water?

Abstracting away unimportant details:
• Two jugs of capacity 4 and 3 units.
• It is possible to empty a jug, fill a jug, or transfer the contents of one jug to the other until the former empties or the latter fills.
• Task: produce a jug with 2 units.

Define a state representation: (X, Y)
• X is the content of the 4-unit jug.
• Y is the content of the 3-unit jug.

Define an initial state: (0, 0)

Define the goal state(s): (2, n)
This may be a description rather than an explicit state.

Define all operators:
• Fill 3-jug from faucet: (a, b) → (a, 3)
• Fill 4-jug from faucet: (a, b) → (4, b)
• Fill 4-jug from 3-jug: (a, b) → (a + b, 0)
• ...
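A minimal sketch of this problem space in Python, with breadth-first search standing in for the systematic search developed below. The operator set spells out the emptying and pouring actions the slide describes; all names are my own:

    from collections import deque

    def operators(state):
        a, b = state                           # a: 4-unit jug, b: 3-unit jug
        yield (4, b)                           # fill 4-jug from faucet
        yield (a, 3)                           # fill 3-jug from faucet
        yield (0, b)                           # empty 4-jug
        yield (a, 0)                           # empty 3-jug
        pour = min(b, 4 - a)                   # fill 4-jug from 3-jug
        yield (a + pour, b - pour)
        pour = min(a, 3 - b)                   # fill 3-jug from 4-jug
        yield (a - pour, b + pour)

    def solve(start=(0, 0)):
        frontier, seen = deque([(start, [start])]), {start}
        while frontier:
            state, path = frontier.popleft()
            if state[0] == 2:                  # goal: (2, n)
                return path
            for nxt in operators(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))

    print(solve())  # e.g. (0,0) -> (0,3) -> (3,0) -> (3,3) -> (4,2) -> (0,2) -> (2,0)
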
Once we have defined the problem space (state representation, the initial state, the goal state, and operators), are we done? We start with the initial state and keep using the operators to expand the parent nodes until we find a goal state.

...but the search space might be large...

...really large...

[Figure: the full "Farmer, Wolf, Duck, Corn" search tree again, far too big to take in at a glance]

So we need some systematic way to search.
• The average number of new nodes we create when expanding a node is the (effective) branching factor b.
• The length of a path to a goal is the depth d.

So visiting every node in the search tree to depth d will take O(b^d) time, but not necessarily O(b^d) space.

[Figure: a generic search tree, with b nodes at level 1, b^2 at level 2, ..., b^d at level d]

Fringe (frontier): the set of nonterminal nodes without children, i.e. the nodes waiting to be expanded.
Branching factors for some problems

The eight puzzle has a branching factor of 2.13, so a search tree at depth 20 has about 3.7 million nodes (note that there are only 181,440 different states).

Rubik's cube has a branching factor of 13.34. There are 901,083,404,981,813,616 different states. The average depth of a solution is about 18. The best time for solving the cube in an official championship was 17.04 sec, achieved by Robert Pergl in the 1983 Czechoslovakian Championship. In 1997 the best AI computer programs took weeks (see Korf, UCLA).

Chess has a branching factor of about 35; there are about 10^120 states (there are about 10^79 electrons in the universe).
Detecting repeated states is hard...

[Figure: the "Farmer, Wolf, Duck, Corn" search tree once more, with its many repeated states highlighted]
We are going to consider different techniques to search the problem space, so we need to decide what criteria we will use to compare them.

• Completeness: Is the technique guaranteed to find an answer (if there is one)?
• Optimality: Is the technique guaranteed to find the best answer (if there is more than one)? (Operators can have different costs.)
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory does it take to find a solution?
General (Generic) Search Algorithm

function GENERAL-SEARCH(problem, QUEUEING-FUNCTION)
  nodes = MAKE-QUEUE(MAKE-NODE(problem.INITIAL-STATE))
  loop do
    if EMPTY(nodes) then return "failure"
    node = REMOVE-FRONT(nodes)
    if problem.GOAL-TEST(node.STATE) succeeds then return node
    nodes = QUEUEING-FUNCTION(nodes, EXPAND(node, problem.OPERATORS))
  end

A nice fact about this search algorithm is that we can use a single algorithm to do many kinds of search. The only difference is in how the nodes are placed in the queue.
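The pseudocode maps almost line-for-line onto Python. This sketch assumes a problem object with initial_state, goal_test, and operators attributes; the names and node layout are mine:

    def make_node(state, parent=None):
        return {"state": state, "parent": parent}

    def expand(node, operators):
        # each operator maps a state to a list of successor states
        return [make_node(s, node) for op in operators for s in op(node["state"])]

    def general_search(problem, queueing_function):
        nodes = [make_node(problem.initial_state)]
        while nodes:                                   # if EMPTY(nodes) ... "failure"
            node = nodes.pop(0)                        # REMOVE-FRONT
            if problem.goal_test(node["state"]):
                return node
            nodes = queueing_function(nodes, expand(node, problem.operators))
        return "failure"
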
Breadth First Search
Enqueue nodes in FIFO (first-in, first-out) order.
Intuition: Expand all nodes at depth i before expanding nodes at depth i + 1.
• Complete? Yes.
• Optimal? Yes, if path cost is a nondecreasing function of depth.
• Time complexity: O(b^d).
• Space complexity: O(b^d); note that every node in the fringe is kept in the queue.
Uniform Cost Search
Enqueue nodes in order of cost.

[Figure: a tree whose edges carry step costs; the cheapest frontier node is expanded first]

Intuition: Expand the cheapest node, where the cost is the path cost g(n).
• Complete? Yes.
• Optimal? Yes, if path cost is a nondecreasing function of depth.
• Time complexity: O(b^d).
• Space complexity: O(b^d); note that every node in the fringe is kept in the queue.

Note that Breadth First Search can be seen as a special case of Uniform Cost Search, where the path cost is just the depth.
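In that spirit, the strategies in this section differ only in the QUEUEING-FUNCTION handed to general_search above. A sketch, assuming each node additionally records its path cost under a "cost" key:

    def bfs_queueing(nodes, new_nodes):       # FIFO: enqueue at the back
        return nodes + new_nodes

    def dfs_queueing(nodes, new_nodes):       # LIFO: enqueue at the front
        return new_nodes + nodes

    def ucs_queueing(nodes, new_nodes):       # order by path cost g(n)
        return sorted(nodes + new_nodes, key=lambda n: n["cost"])
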
Depth First Search
Enqueue nodes in LIFO (last-in, first-out) order.
Intuition: Expand the node at the deepest level (breaking ties left to right).
• Complete? No (yes on finite trees with no loops).
• Optimal? No.
• Time complexity: O(b^m), where m is the maximum depth.
• Space complexity: O(bm), where m is the maximum depth.
Depth-Limited Search
Enqueue nodes in LIFO (last-in, first-out) order, but limit depth to L (L is 2 in this example).
Intuition: Expand the node at the deepest level, but limit depth to L.
• Complete? Yes, if there is a goal state at a depth less than L.
• Optimal? No.
• Time complexity: O(b^L), where L is the cutoff.
• Space complexity: O(bL), where L is the cutoff.

Picking the right value for L is difficult. Suppose we chose 7 for FWDC; we would fail to find a solution...
Iterative Deepening Search I
Do depth-limited search starting at L = 0, and keep incrementing L by 1.
Intuition: Combine the optimality and completeness of Breadth First Search with the low space complexity of Depth First Search.
• Complete? Yes.
• Optimal? Yes.
• Time complexity: O(b^d), where d is the depth of the solution.
• Space complexity: O(bd), where d is the depth of the solution.
Iterative Deepening Search II
Iterative deepening looks wasteful because we re-explore parts of the search space many times...

Consider a problem with a branching factor of 10 and a solution at depth 5. Breadth first search expands
1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111 nodes,
while iterative deepening expands
1
+ (1 + 10)
+ (1 + 10 + 100)
+ (1 + 10 + 100 + 1,000)
+ (1 + 10 + 100 + 1,000 + 10,000)
+ (1 + 10 + 100 + 1,000 + 10,000 + 100,000)
= 123,456 nodes.
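A sketch of the idea in Python, built from a recursive depth-limited search; the goal_test/successors interface is an assumption:

    def depth_limited(state, goal_test, successors, limit):
        """Depth-first search cut off at the given depth; returns a path or None."""
        if goal_test(state):
            return [state]
        if limit == 0:
            return None
        for nxt in successors(state):
            path = depth_limited(nxt, goal_test, successors, limit - 1)
            if path is not None:
                return [state] + path
        return None

    def iterative_deepening(start, goal_test, successors):
        limit = 0
        while True:                 # keep incrementing L by 1
            path = depth_limited(start, goal_test, successors, limit)
            if path is not None:
                return path
            limit += 1
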
Bi-directional Search
Intuition: Start searching from both the initial state and the goal state; meet in the middle.
Notes:
• It is not always possible to search backwards.
• How do we know when the trees meet?
• At least one search tree must be retained in memory.
• Complete? Yes.
• Optimal? Yes.
• Time complexity: O(b^(d/2)), where d is the depth of the solution.
• Space complexity: O(b^(d/2)), where d is the depth of the solution.
Heuristic Search
The search techniques we have seen so far:
• Breadth first search
• Uniform cost search
• Depth first search
• Depth limited search
• Iterative deepening
• Bi-directional search
are all uninformed ("blind") searches, and are all too slow for most real world problems.

Sometimes we can tell that some states appear better than others...

[Figure: pairs of 8-puzzle and FWDC states, one of each pair visibly closer to the goal]

...we can use this knowledge of the relative merit of states to guide search.
Heuristic Search (informed search)
A heuristic is a function that, when applied to a state, returns a number that is an estimate of the merit of the state with respect to the goal.
In other words, the heuristic tells us approximately how far the state is from the goal state*.
Note we said "approximately". Heuristics might underestimate or overestimate the merit of a state. But for reasons which we will see, heuristics that only underestimate are very desirable, and are called admissible.
* i.e. smaller numbers are better.
Heuristics for 8-puzzle I

The number of misplaced tiles (not including the blank).

[Figure: a current state beside the goal state; each tile is marked Y/N for whether it is misplaced; only tile 8 is marked Y]

In this case, only "8" is misplaced, so the heuristic function evaluates to 1. In other words, the heuristic is telling us that it thinks a solution might be available in just 1 more move.

Notation: h(n); here h(current state) = 1.
Heuristics for 8-puzzle II

The Manhattan Distance (not including the blank).

[Figure: a current state beside the goal state; the "3", "8", and "1" tiles must travel 2, 3, and 3 squares respectively]

In this case, only the "3", "8" and "1" tiles are misplaced, by 2, 3, and 3 squares respectively, so the heuristic function evaluates to 2 + 3 + 3 = 8. In other words, the heuristic is telling us that it thinks a solution is available in just 8 more moves.

Notation: h(n); here h(current state) = 8.
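Both heuristics are a few lines of Python on the length-nine vector representation (0 for the blank; GOAL and the function names are illustrative choices):

    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

    def misplaced_tiles(state):
        """Number of tiles (not the blank) out of place."""
        return sum(1 for s, g in zip(state, GOAL) if s != g and s != 0)

    def manhattan(state):
        """Sum over tiles of horizontal plus vertical distance to home."""
        total = 0
        for i, tile in enumerate(state):
            if tile == 0:
                continue
            j = GOAL.index(tile)
            total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
        return total
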
We can use heuristics to guide "hill climbing" search. In this example, the Manhattan Distance heuristic helps us quickly find a solution to the 8-puzzle.

[Figure: a hill-climbing trace on the 8-puzzle; each step moves to the successor with the lowest h(n)]
But "hill climbing" has a problem...

[Figure: an 8-puzzle state whose successors all have higher h(n) values]

In this example, hill climbing does not work! All the nodes on the fringe are taking a step "backwards" (a local minimum). Note that this puzzle is solvable in just 12 more steps.
We have seen two interesting algorithms.
Uniform Cost
• Measures the cost to each node.
• Is optimal and complete!
• Can be very slow.
Hill Climbing
• Estimates how far away the goal is.
• Is neither optimal nor complete.
• Can be very fast.
Can we combine them to create an optimal and complete
algorithm that is also very fast?
Uniform Cost Search
Enqueue nodes in order of cost.

[Figure: as before, a tree with step costs on its edges; the cheapest frontier node is expanded first]

Intuition: Expand the cheapest node, where the cost is the path cost g(n).
Hill Climbing Search
Enqueue nodes in order of estimated distance to goal.

[Figure: a tree with h(n) estimates at each node; the node that looks nearest to the goal is expanded first]

Intuition: Expand the node you think is nearest to the goal, where the estimate of distance to goal is h(n).
The A* Algorithm ("A-Star")
Enqueue nodes in order of estimated total cost, f(n):
g(n) is the cost to get to a node;
h(n) is the estimated distance to the goal;
f(n) = g(n) + h(n).
We can think of f(n) as the estimated cost of the cheapest solution that goes through node n.
Note that we can use the same general search algorithm we used before; all that we have changed is the queueing strategy.
If the heuristic is optimistic, that is to say, it never overestimates the distance to the goal, then...
A* is optimal and complete!
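A sketch of A* in Python, keeping a priority queue ordered by f(n) = g(n) + h(n); the successors interface (yielding (state, step_cost) pairs) is an assumption:

    import heapq
    import itertools

    def a_star(start, goal_test, successors, h):
        counter = itertools.count()        # tie-breaker so heapq never compares states
        frontier = [(h(start), next(counter), 0, start, [start])]
        best_g = {start: 0}                # cheapest known g(n) for each state
        while frontier:
            f, _, g, state, path = heapq.heappop(frontier)
            if goal_test(state):
                return path
            for nxt, step in successors(state):
                g2 = g + step
                if g2 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g2
                    heapq.heappush(frontier,
                                   (g2 + h(nxt), next(counter), g2, nxt, path + [nxt]))
        return None

With an admissible h, the first goal popped off the queue carries an optimal path, which is exactly the informal optimality argument below.
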
Informal proof outline of A* completeness
• Assume that every operator has some minimum positive cost, epsilon.
• Assume that a goal state exists; therefore some finite sequence of operators leads to it.
• Expanding nodes produces paths whose actual costs increase by at least epsilon each time. Since the algorithm will not terminate until it finds a goal state, it must expand a goal state in finite time.

Informal proof outline of A* optimality
• When A* terminates, it has found a goal state.
• All remaining nodes have an estimated cost to goal (f(n)) greater than or equal to that of the goal we have found.
• Since the heuristic function was optimistic, the actual cost to goal for these other paths can be no better than the cost of the one we have already found.
How fast is A*?
A* is the fastest search algorithm: for any given heuristic, no algorithm can expand fewer nodes than A*.
How fast is it? That depends on the quality of the heuristic.
• If the heuristic is useless (i.e. h(n) is hardcoded to equal 0), the algorithm degenerates to uniform cost search.
• If the heuristic is perfect, there is no real search; we just march down the tree to the goal.
Generally we are somewhere in between the two situations above; the time taken depends on the quality of the heuristic.

What is A*'s space complexity?
A* has worst case O(b^d) space complexity, but an iterative deepening version is possible (IDA*).
A Worked Example: Maze Traversal

Problem: To get from square A3 to square E2, one step at a time, avoiding obstacles (black squares).
Operators (in order): go_left(n), go_down(n), go_right(n); each operator costs 1.
Heuristic: Manhattan distance.

[Figure: a 5-by-5 maze, rows A-E, columns 1-5]

Expanding A3 gives A2 (g = 1, h = 4), B3 (g = 1, h = 4), and A4 (g = 1, h = 6).
On the following steps (the operator list and costs are repeated on each slide), the tree grows as follows:
• Expanding A2 adds A1 (g = 2, h = 5).
• Expanding B3 adds C3 (g = 2, h = 3) and B4 (g = 2, h = 5).
• Expanding A1 adds B1 (g = 3, h = 4).
• Expanding B4 adds B5 (g = 3, h = 6).
...and the search continues in f(n) = g(n) + h(n) order toward the goal E2.
Optimizing Search (Iterative Improvement Algorithms)
i.e. hill climbing, simulated annealing, genetic algorithms.

Optimizing search differs from the path-finding search we have studied in many ways:
• The problems are ones for which exhaustive and heuristic search are NP-hard.
• The path is not important (for that reason we typically don't bother to keep a tree around); thus we are CPU bound, not memory bound.
• Every state is a "solution".
• The search space is (often) continuous.
• Usually we abandon hope of finding the best solution, and settle for a very good solution.
• The task is usually to find the minimum (or maximum) of a function.
Example Problem I (continuous)
Finding the maximum (minimum) of some function y = f(x), within a defined range.
Example Problem II (discrete)
The Traveling Salesman Problem (TSP): a salesman spends his time visiting n cities. In one tour he visits each city just once, and finishes up where he started. In what order should he visit them to minimize the distance traveled? There are (n-1)!/2 possible tours.

Distance matrix:

        A    B    C   ...
  A     0   12   34   ...
  B    12    0   76   ...
  C    34   76    0   ...
  ...  ...  ...  ...
Example Problem III (continuous and/or discrete)
Function fitting. Depending on the way the problem is set up, this could be continuous and/or discrete.
• Discrete part: finding the form of the function. Is it x^2, or x^4, or abs(log(x)) + 75?
• Continuous part: finding the value for x. Is it x = 3.1 or x = 3.2?
Assume that we can:
• Represent a state.
• Quickly evaluate the quality of a state.
• Define operators to change from one state to another.

For example, for function maximization:
State: x = 2, y = 7.
Quality: y = log(x) + sin(tan(y - x)), so log(2) + sin(tan(7 - 2)) = 2.00305.
Operators: x = add_10_percent(x), y = subtract_10_percent(y), ...

For the TSP:
State: a tour A C F K W ... Q A.
Quality: the tour length; A to C = 234, C to F = 142, ..., total 10,231.
Operators: swap two cities, e.g. A C F K W ... Q A → A C K F W ... Q A.
Hill-Climbing I

function HILL-CLIMBING(problem) returns a solution state
  inputs: problem                // a problem
  local variables: current       // a node
                   next          // a node
  current ← MAKE-NODE(INITIAL-STATE[problem])   // make a random initial state
  loop do
    next ← a highest-valued successor of current
    if VALUE[next] < VALUE[current] then return current
    current ← next
  end

How would hill-climbing do on the following problems?

[Figure: example objective functions]

How can we improve hill-climbing? Random restarts! Intuition: call hill-climbing as many times as you can afford, and choose the best answer.
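A sketch of both ideas in Python; random_state, successors, and value are assumed to be supplied by the problem:

    import random

    def hill_climbing(random_state, successors, value):
        current = random_state()
        while True:
            nxt = max(successors(current), key=value, default=None)
            # note: "<=" also stops on plateaus, avoiding the infinite loop
            # that the pseudocode's strict "<" would allow
            if nxt is None or value(nxt) <= value(current):
                return current                 # a local maximum
            current = nxt

    def random_restart(random_state, successors, value, tries=20):
        """Call hill climbing as many times as we can afford; keep the best."""
        return max((hill_climbing(random_state, successors, value)
                    for _ in range(tries)), key=value)
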
function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem                 // a problem
          schedule                // a mapping from time to "temperature"
  local variables: current        // a node
                   next           // a node
                   T              // a "temperature" controlling the probability of downward steps
  current ← MAKE-NODE(INITIAL-STATE[problem])
  for t ← 1 to ∞ do
    T ← schedule[t]
    if T = 0 then return current
    next ← a randomly selected successor of current
    ΔE ← VALUE[next] - VALUE[current]
    if ΔE > 0 then current ← next
    else current ← next only with probability e^(ΔE/T)
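Transcribed into Python (a sketch; the schedule is any function of t, e.g. lambda t: max(0.0, 1 - t / 10_000)):

    import itertools
    import math
    import random

    def simulated_annealing(initial, successors, value, schedule):
        current = initial
        for t in itertools.count(1):
            T = schedule(t)                       # "temperature" at time t
            if T <= 0:
                return current
            nxt = random.choice(successors(current))
            delta_e = value(nxt) - value(current)
            # uphill moves are always taken; downhill moves with probability e^(dE/T)
            if delta_e > 0 or random.random() < math.exp(delta_e / T):
                current = nxt
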
Genetic Algorithms I (R and N, pages 619-621)
• Variation: members of the same species differ in some ways.
• Heritability: some of that variability is inherited.
• Finite resources: not every individual will live to reproductive age.

Given the above, the basic idea of natural selection is this: some of the characteristics that are variable will be advantageous to survival. Thus, the individuals with the desirable traits are more likely to reproduce and have offspring with similar traits... and therefore the species evolves over time.

Since natural selection is known to have solved many important optimization problems, it is natural to ask: can we exploit the power of natural selection?

[Photo: Richard Dawkins]
Genetic Algorithms II
The basic idea of genetic algorithms (evolutionary programming):
• Initialize a population of n states (randomly).
• While time allows:
  • Measure the quality of the states using some fitness function.
  • "Kill off" some of the states.
  • Allow the surviving states to reproduce (sexually or asexually or...).
• Report the best state as the answer.

All we need to do is: (A) figure out how to represent the states; (B) figure out a fitness function; (C) figure out how to allow our states to reproduce. A sketch of the loop follows.
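Here is the promised sketch of the loop, for bitstring states; the fitness interface, survivor fraction, and mutation rate are illustrative choices:

    import random

    def genetic_algorithm(fitness, n=50, length=15, generations=200):
        population = ["".join(random.choice("01") for _ in range(length))
                      for _ in range(n)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)   # measure quality
            survivors = population[: n // 2]             # "kill off" the rest
            children = []
            while len(children) < n - len(survivors):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, length)        # sexual reproduction (crossover)
                child = a[:cut] + b[cut:]
                if random.random() < 0.05:               # occasional mutation
                    i = random.randrange(length)
                    child = child[:i] + ("1" if child[i] == "0" else "0") + child[i + 1:]
                children.append(child)
            population = survivors + children
        return max(population, key=fitness)              # report best state

    # e.g. maximize the number of 1-bits: genetic_algorithm(lambda s: s.count("1"))
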
Genetic Algorithms III
One possible representation of the states is a tree structure, e.g. for log(x^y) + sin(tan(y - x)).

Another is a bitstring: 100111010101001

For problems where we are trying to find the best order to do something (TSP), a linked list might work: A → C → E → F → D → B → A.
Genetic Algorithms IV
Usually the fitness function is fairly trivial:
• For the function-maximizing problem, we can evaluate the given function with the state (the values for x, y, z, ... etc.).
• For the function-finding problem, we can evaluate the function and see how closely it matches the data.
• For TSP, the fitness function is just the length of the tour represented by the linked list, e.g. A -23- C -12- E -56- F -77- D -36- B -83- A.
Genetic Algorithms V
Sexual reproduction (crossover): two parent states exchange parts to produce a child.

[Figure: parent trees A and B exchange subtrees to produce a child of A and B]

With bitstrings: parent A 11101000 and parent B 10011000 cross over to produce the child 10011101.
Genetic Algorithms VI
Asexual reproduction (mutation): a single parent is copied with a small random change.

[Figure: a subtree of parent A is replaced to produce a child of A]

With bitstrings: parent A 10011101 mutates to the child 10011111. For a TSP tour, a mutation might swap two cities: A C E F D B A → A C E D F B A.
Discussion of Genetic Algorithms
• It turns out that the policy of "keep the best n individuals" is not the best idea...
• Genetic algorithms require many parameters (population size, fraction of the population generated by crossover, mutation rate, number of sexes...). How do we set these?
• Genetic algorithms are really just a kind of hill-climbing search, but they seem to have fewer problems with local maxima...
• Genetic algorithms are very easy to parallelize...

Applications: protein folding, circuit design, the job-shop scheduling problem, timetabling, designing wings for aircraft.