Artificial Intelligence and Software that Learns and Evolves Dr. J. Michael Moshell


Artificial Intelligence and Software that Learns and Evolves DIG 3563 – Fall 13 Dr. J. Michael Moshell University of Central Florida

Adapted from A Special Presentation for Ajou University Autumn 2013

1

The Plan of the Lecture

0: What is a problem? What is intelligence?

1. The classical approach: logic and deduction
2. The knowledge-based approach: large databases
3. Cognitive science: models of human reasoning
4. Evolutionary Computing

-2 -

0: What is a Problem?

"Something that is difficult to deal with." (Dictionary definition)

-3 -

0: What is a Problem?

"Something that is difficult to deal with." (Dictionary definition) For a small child, this is a problem:

Anna had $2.00. She spent $0.75 for candy.

How much money does Anna have now?

www.towngreendistrict.com

-4 -

0: What is a Problem?

"Something that is difficult to deal with." (Dictionary definition) For a small child, this is a problem:

Anna had $2.00. She spent $0.75 for candy.

How much money does Anna have now?

For the President of the United States, this is a problem (www.nps.gov):

Can we change the laws so that everyone has a job, and the economy grows in a safe, steady fashion?

-5 -

Classifying Problems

Problems:
- Well-formulated problems: clear goals, limited action space, clear rules
- Other problems: mixed goals, infinite action space, rules are changing

-6 -

Classifying Problems

Problems:
- Well-formulated problems (clear goals, limited action space, clear rules):
  Easy problems / Tractable problems / Intractable problems
- Other problems: mixed goals, infinite action space, rules are changing

kardwell.com

en.wikipedia.org

artsbeat.blog.nytimes.com

-7 -

Tractable: definition

Easily handled or worked.

Examples: Wood is a tractable material for making furniture.

OPPOSITE: intractable. Titanium is an intractable material for making furniture.

bizchair.com

worldchair.com

-8 -

The Traveling Salesman Problem

A man must visit 50 cities. He must visit each city ONE TIME.

Find the shortest path for his travel.

-9 -

The Traveling Salesman Problem

A man must visit 50 cities. He must visit each city ONE TIME.

Find the shortest path for his travel.

A man must visit n cities. He must visit each city ONE TIME.

Find the shortest path for his travel. How long to compute?

-10 -

The Traveling Salesman Problem

A man must visit 50 cities. He must visit each city ONE TIME.

Find the shortest path for his travel.

A man must visit n cities. He must visit each city ONE TIME.

Find the shortest path for his travel. How long to compute?

time = k · c^n (for some constants k and c).

As n gets large, time gets VERY BIG VERY FAST

-11 -

The Traveling Salesman Problem

for k=1 microsecond and c=2, 50 cities takes 313,000 hours or 35 years!
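As a quick check of that arithmetic, here is a small Python sketch using the constants assumed on the slide (k = 1 microsecond, c = 2); the loop and the printout format are just an illustration:

```python
# Exponential time estimate, time = k * c**n,
# with k = 1 microsecond per step and c = 2 as assumed on the slide.

def exponential_time_seconds(n, k_microseconds=1.0, c=2.0):
    """Return the estimated running time in seconds for n cities."""
    return k_microseconds * 1e-6 * c ** n

for n in (10, 20, 30, 40, 50):
    seconds = exponential_time_seconds(n)
    hours = seconds / 3600
    years = hours / (24 * 365)
    print(f"n = {n:2d}: {hours:16,.1f} hours  (~{years:,.1f} years)")

# n = 50 gives about 313,000 hours, roughly 35 years -- matching the slide.
```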

-12 -

Classifying Problems

Problems:
- Well-formulated problems (clear goals, limited action space, clear rules):
  Easy problems / Tractable problems / Intractable problems
- Other problems: mixed goals, infinite action space, rules are changing

In 1975:

kardwell.com

en.wikipedia.org

artsbeat.blog.nytimes.com

-13 -

IBM's Deep Blue Chess-Playing Computer

In 1997, IBM's computer and programming team defeated Garry Kasparov, world chess champion.

ibm.com

It did not defeat the exponential time cost of chess.

It simply made k and c small enough, and explored more futures than the human could.

en.wikipedia.org

-14 -

Classifying Problems

Problems:
- Well-formulated problems (clear goals, limited action space, clear rules):
  Easy problems / Tractable problems / Intractable problems
- Other problems: mixed goals, infinite action space, rules are changing

In 1990:

kardwell.com

en.wikipedia.org

artsbeat.blog.nytimes.com

-15 -

Decision Trees and Exponential Time-Cost

trim-a-tree.co.uk

Many problems are analyzed by building a decision tree and seeking a path to a winning node. Here, n = 9 (nine options).

-16 -

en.wikipedia.org

Decision Trees and Exponential Time-Cost

trim-a-tree.co.uk

If each decision leads to a growing tree of other decisions, the time required to explore all the branches is time = k · c^n, and that is too long for anything but very small n.

-17 -

en.wikipedia.org

trim-a-tree.co.uk

Heuristic: A plan to choose options that are 'most likely to succeed'

Eliminate those branches that your heuristic function tells you are not likely to succeed. Then expand the promising ones.
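As a loose illustration of that idea (not code from the lecture), a heuristic-pruning step might look like this; the branch names, scores, and the keep=2 cutoff are invented:

```python
# Minimal sketch of heuristic pruning: score every branch with a heuristic
# function, drop the unpromising ones, and expand only what is left.

def prune_and_expand(branches, heuristic, keep=2):
    """Return the `keep` highest-scoring branches for further exploration."""
    return sorted(branches, key=heuristic, reverse=True)[:keep]

branches = {"move A": 0.9, "move B": 0.2, "move C": 0.6, "move D": 0.1}
promising = prune_and_expand(branches, heuristic=branches.get)
print(promising)   # ['move A', 'move C'] -- the rest are never expanded
```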

-18 -

en.wikipedia.org

trim-a-tree.co.uk

Heuristic: A plan to choose options that are 'most likely to succeed'

A simple heuristic from chess:
Piece values: Pawn = 1 unit; Knight, Bishop = 3 pawns; Rook = 5 pawns; Queen = 9 pawns.
Do not exchange pieces if you lose more pawn-units than your opponent loses.

-19 -

trim-a-tree.co.uk

Heuristic: A plan to choose options that are 'most likely to succeed'

A simple heuristic from chess:
Piece values: Pawn = 1 unit; Knight, Bishop = 3 pawns; Rook = 5 pawns; Queen = 9 pawns.
Do not exchange pieces if you lose more pawn-units than your opponent loses.

Example: Do not exchange your queen for two knights.
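A minimal sketch of that exchange rule in Python, using the piece values from the slide (the function name and structure are an invented illustration):

```python
# Pawn-unit values from the slide; the rule says: do not trade if the
# pawn-units you lose exceed the pawn-units your opponent loses.
PIECE_VALUE = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def good_exchange(my_losses, opponent_losses):
    """Return True if the trade costs us no more pawn-units than it costs them."""
    mine = sum(PIECE_VALUE[p] for p in my_losses)
    theirs = sum(PIECE_VALUE[p] for p in opponent_losses)
    return mine <= theirs

# The slide's example: a queen (9) for two knights (3 + 3 = 6) is a bad trade.
print(good_exchange(["queen"], ["knight", "knight"]))   # False
```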

-20 -

Intelligence = Problem Solving Ability?

zmescience.com

Most people agree that an intelligent agent must be able to solve some problems (not all problems). However, many people feel that if you have a well-formed problem, the hard work has already been done. The BIG challenge is transforming a real-world problem into a well-formed symbolic problem.

-21 -

Natural Language: a great place to find ill-formed problems

zmescience.com

Imagine a computer program that could answer questions: "Can a cat drive a car?" (worldoffemale.com)

-22 -

Natural Language: a great place to find ill-formed problems

zmescience.com

Imagine a computer program that could answer questions: "Can a cat drive a car?" Computer and Program: "No. A cat has no hands and cannot drive a car."

-23 -

The Turing Test for Intelligence

Alan Turing was a British mathematician who played a key role in World War II code-breaking and helped to develop the digital computer.

He thought about intelligence and proposed a test.

thocp.net

-24 -

The Turing Test for Intelligence

Is "mystery system" intelligent?

Ask questions via a Teletype machine.

thocp.net

Is the "mystery system" a human or a machine? If you cannot accurately decide (and it's a machine), then the machine is intelligent.

-25 -

The Turing Test for Intelligence

Has any system passed the Turing Test yet?

Ask Siri ...

scoopertino.com

www.apple.com

Most people quickly conclude that Siri does not yet pass the Turing Test. But it's getting better all the time...

-26 -

1. The Classical (Logical) Approach to Artificial Intelligence

Basic concepts:

1. LOGIC

is powerful enough to solve AI problems.

2. KNOWLEDGE

must be represented in a

formal system.

hci.stanford.edu/~wino grad

3. INFERENCE

is the key mechanism to answer questions.

All humans will die. John is a human → therefore, John will die.
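A toy sketch of that inference step (a simple forward-chaining pass in Python; the fact and rule representation is an assumption for illustration, not the lecture's system):

```python
# Toy forward-chaining inference: one fact plus one universal rule
# ("all humans will die") yield the conclusion about John.

facts = {("human", "John")}
rules = [(("human", "?x"), ("will_die", "?x"))]   # if ?x is human, ?x will die

def forward_chain(facts, rules):
    """Repeatedly apply the rules until no new facts are produced."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (pred, _var), (new_pred, _) in rules:
            for f_pred, f_arg in list(derived):
                if f_pred == pred and (new_pred, f_arg) not in derived:
                    derived.add((new_pred, f_arg))
                    changed = True
    return derived

print(forward_chain(facts, rules))   # contains ('will_die', 'John')
```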

-27 -

1. The Classical (Logical) Approach to Artificial Intelligence

Knowledge representation as a "semantic net" of related concepts (hci.stanford.edu/~winograd, en.wikipedia.org)
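One common way to sketch a semantic net in code is as labelled links between concepts; the nodes and relations below are invented examples, not the contents of the slide's figure:

```python
# A tiny semantic net: (concept, relation, concept) links, plus a query
# that follows "is-a" edges upward from a starting concept.
links = [
    ("canary", "is-a", "bird"),
    ("bird",   "is-a", "animal"),
    ("bird",   "can",  "fly"),
    ("canary", "color", "yellow"),
]

def ancestors(concept, relation="is-a"):
    """Follow `relation` edges from `concept` and collect everything reached."""
    found = []
    frontier = [concept]
    while frontier:
        current = frontier.pop()
        for a, rel, b in links:
            if a == current and rel == relation:
                found.append(b)
                frontier.append(b)
    return found

print(ancestors("canary"))   # ['bird', 'animal']
```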

-28 -

1. The Classical (Logical) Approach to Artificial Intelligence

Example: Terry Winograd's SHRDLU system (hci.stanford.edu/~winograd)

A "toy world" of colored blocks (simulated by computer). Questions and commands (in English):
1) Translate into formal propositions.
2) Try to prove or disprove them from the known facts.
3) Change system state if possible.
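A very rough sketch of that three-step pipeline for a single command pattern (an illustration of the idea only; SHRDLU's real implementation was far richer):

```python
# A toy blocks-world interpreter for one command pattern:
# parse "pick up the <color> block", check the world state, then update it.
# The world model, the parser, and the hand are all simplified assumptions.

world = {"hand": None, "blocks": {"red": "table", "green": "table"}}

def handle(command):
    words = command.lower().split()
    if words[:2] == ["pick", "up"] and words[-1] == "block":
        color = words[-2]                       # 1) translate into a proposition
        if color not in world["blocks"]:        # 2) check against known facts
            return f"I don't know of a {color} block."
        if world["hand"] is not None:
            return "My hand is already full."
        world["hand"] = color                   # 3) change system state
        world["blocks"][color] = "hand"
        return "OK"
    return "I don't understand."

print(handle("Pick up the red block"))   # OK
print(handle("Pick up the blue block"))  # I don't know of a blue block.
```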

University of Utah

-29 -

1. The Classical (Logical) Approach to Artificial Intelligence

Example: Terry Winograd's SHRDLU system (hci.stanford.edu/~winograd, University of Utah)

Person: Pick up a big red block.
Computer: OK.
Person: Grasp the pyramid.

-30 -

1. The Classical (Logical) Approach to Artificial Intelligence

Example: Terry Winograd's SHRDLU system (hci.stanford.edu/~winograd, University of Utah)

Person: Pick up a big red block.
Computer: OK.
Person: Grasp the pyramid.
Computer: I don't understand which pyramid you mean.
(because there are two of them)

-31 -

1. The Classical (Logical) Approach to Artificial Intelligence

Example: Terry Winograd's SHRDLU system. Watch the SHRDLU movie (3 minutes 20 seconds of it). (University of Utah)

-32 -

1. The Classical (Logical) Approach to Artificial Intelligence

Excitement! SHRDLU worked for Blocks World.

followed by Disappointment: Most domains are MUCH harder.

hci.stanford.edu/~winograd

-33 -

2. The Knowledge-Based Approach:

Doug Lenat's talk at Google:

Brittle Software (Lenat video: first 14 minutes)

-34 -

2. The Knowledge-Based Approach:

Key concept: Today we have brittle (easily broken) software. Danger: Power is in the hands of "smart idiots".

Examples of Cyc's successes:

Request: Find a picture of someone smiling → Cyc found a picture of a man helping his daughter take her first step.

Request: Find something that could harm an airplane → Cyc located a video about an SA-7 missile.

-35 -

2. The Knowledge-Based Approach:

LARGE databases of facts.

If SHRDLU's world was too small, let's build a big world of knowledge.

Cyc Project – started in 1984 by Douglas Lenat

Estimated effort (1986): 250,000 rules and 350 man-years of effort.

Up until now: >1 million rules, and no end in sight.

-36 -

2. The Knowledge-Based Approach:

LARGE databases of facts.

If SHRDLU's world was too small, let's build a big world of knowledge.

Cyc Project – started in 1984 by Douglas Lenat

Cycorp distributes the OpenCyc 4.0 database (for free), with ~239,000 terms and ~2,093,000 "triples" (rules) that attempt to represent human common sense.

-37 -

2. The Knowledge-Based Approach:

Cycorp also has a private database with many more assertions and rules, in the CycL language.

Example:

(#$isa #$BillClinton #$UnitedStatesPresident)

cycorp.org
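The assertion above is a single triple. As a loose illustration (plain Python rather than CycL), a knowledge base of such triples can be stored and queried like this; the extra facts and the ask() helper are invented for the example:

```python
# Illustrative triple store in the spirit of the slide's CycL assertion.
kb = {
    ("isa", "BillClinton", "UnitedStatesPresident"),
    ("isa", "UnitedStatesPresident", "Person"),
    ("isa", "BillClinton", "Person"),
}

def ask(pred=None, subj=None, obj=None):
    """Return every triple matching the given fields (None = wildcard)."""
    return [t for t in kb
            if (pred is None or t[0] == pred)
            and (subj is None or t[1] == subj)
            and (obj is None or t[2] == obj)]

print(ask(subj="BillClinton"))
# both triples about BillClinton (set order may vary)
```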

-38 -

Cyc: An example of the complexity

cycorp.org

University of Utah

-39 -

Cyc: Method for Growing the Database

* Attempt to automatically read encyclopedia articles (enCYClopedia!)
* Analyze successes & failures
* Apply human "knowledge engineering" to improve rules

-40 -

Cyc example: Terrorism Database

* Analyze literature on terrorism
* Predict future events

Success: predicted anthrax mailings, 6 months before 9/11.
Miss: predicted 1000 dolphins from Al-Qaeda to attack Hoover Dam. (www.usbr.gov)

-41 -

Cyc: Status and Hope for the Future

The hope: Cyc will eventually become smart enough to teach itself.

The results thus far:
* Government sponsors basic research and the terrorism database
* Some commercial applications are being tried

-42 -

Cyc: Status and Hope for the Future

The hope: Cyc will eventually become smart enough to teach itself.

The results thus far:
* Government sponsors basic research and the terrorism database
* Some commercial applications are being tried
* Many people in the Artificial Intelligence community doubt that Cyc will play a key role in successful AI.

Why? It's too logical. Humans are inconsistent, emotional, intuitive – they act on their FEELINGS.

-43 -

wikimedia commons

3. Cognitive Science – How Humans Think

-44 -

Philosophy:

Example: the Mind-Body Problem. Is the mind part of the body? Or separate?

Metaphors: "The brain is a telephone switchboard"; "The brain is a computer" →
Mind is software (can be changed).
Brain is hardware (can be broken).

New ideas on good and evil.

-45 -

Philosophy:

Example: Deductive Logic. If A, and A → B, then B.
A: A Hyundai is a car.
B: Cars are made by humans.
So: Hyundais are made by humans.

-46 -

Philosophy:

Inductive logic: If the events in class C are probable, and A is in class C, then A is probable.

90% of humans are right-handed.

Jack is a human.

so Jack is probably right-handed.

-47 -

Philosophy and Intelligence

If a thing is intelligent, we expect it to use deductive logic and inductive logic.

-48 -

Psychology:

Definition: Study of mental functions and behaviors. Example: Memory. (wikipedia.org)

-49 -

Psychology:

Definition: Study of mental functions and behaviors
Some types of long-term memory:
- Procedural (how to do something)
everyculture.com

-50 -

Psychology:

Definition: Study of mental functions and behaviors
Some types of long-term memory:
- Procedural (how to do something)
- Topographic (where am I, where am I going)
mycharlois.com

-51 -

Psychology:

Definition: Study of mental functions and behaviors
Some types of long-term memory:
- Procedural (how to do something)
- Topographic (where am I, where am I going)
- Episodic (what happened)

-52 -

fleetowners.com

Psychology:

en.wikipedia.org

Definition: Study of mental functions and behaviors
Some types of long-term memory:
- Procedural (how to do something)
- Topographic (where am I, where am I going)
- Episodic (what happened)
- Semantic (facts, definitions, abstract knowledge)

-53 -

Psychology:

Definition: Study of mental functions and behaviors
Some types of long-term memory:
- Procedural (how to do something)
- Topographic (where am I, where am I going)
- Episodic (what happened)
- Semantic (facts, definitions, abstract knowledge)
- Visual (I've seen this before)
bic.org

-54 -

Psychology:

Definition: Study of mental functions and behaviors
Some types of long-term memory:
- Procedural (how to do something)
- Topographic (where am I, where am I going)
- Episodic (what happened)
- Semantic (facts, definitions, abstract knowledge)
- Visual (I've seen this before)
- Emotional (things I loved or hated)
gofamilyperks.com

-55 -

Psychology:

If a thing is intelligent, we expect it to need (and have) most of the types of memory that humans have.

Why? (back to Philosophy!)

-56 -

Psychology:

If a thing is intelligent, we expect it to need (and have) most of the types of memory that humans have.

Why? Inductive logic.

"Most of the intelligent creatures we have seen, had these kinds of memory".

-57 -

Linguistics: Scientific Study of Language

Key insight: Analogies carry meaning.

Definition: An analogy is a comparison of two systems. If you understand system A, it can help you to understand system B.

Analogy / Simile / Metaphor

-58 -

Linguistics: Scientific Study of Language

Key insight: Analogies carry meaning.

Definition: An analogy is a comparison of two systems. If you understand system A, it can help you to understand system B.

Analogy: "The motor of a car is like a horse pulling a wagon."
(Simile / Metaphor)

-59 -

Linguistics: Scientific Study of Language

Key insight:

Analogies

carry meaning.

Definition: An

analogy

is a comparison of two systems. If you understand system A, it can help you to understand system B.

Analogy: "His mother was a tiger!"

Simile

Metaphor

-60 -

Linguistics: Scientific Study of Language

Key insight:

Analogies

carry meaning.

Science is based on analogies.

Example: Bohr's "Solar system" model of the atom.

wikipediaorg ou.org

-61 -

Linguistics:

If a thing is intelligent, we expect that it will understand and use a natural language (like English or Korean).

and we expect that it will make and use analogies to extend and communicate its knowledge.

-62 -

AI and Cognitive Science: Marvin Minsky makes an analogy

Minsky's theory of mind: The mind is like a complex software system.

The pieces of this software system will interact in ways that are different from traditional software.

They will interact like a "society".

-63 -

AI and Cognitive Science: Marvin Minsky makes an analogy

Minsky's theory of mind:
• A mind is a large collection of small agents.
• They compete for control of the 'front office' (consciousness).
• The 'all or none' theory: you can't half walk and half sit.
• Many of these agents are working at any time.

-64 -

Interior Grounding, Reflection and Self-Consciousness * A woman named Joan is crossing the street.

A car sounds its horn.

-65 -

Interior Grounding, Reflection and Self-Consciousness * A story about a woman crossing the street.

Reaction: Joan reacted quickly to that sound.

Identification: She recognized it as being a sound.

Characterization: She classified it as the sound of a car.

Attention: She noticed certain things rather than others.

Indecision: She wondered whether to cross or retreat.

-66 -

Interior Grounding, Reflection and Self-Consciousness * A story about a woman crossing the street.

Reaction: Joan reacted quickly to that sound.

Identification: She recognized it as being a sound.

Characterization: She classified it as the sound of a car.

Attention: She noticed certain things rather than others.

Indecision: She wondered whether to cross or retreat.

Imagining: She envisioned some possible future conditions.

Selection: She selected a way to choose among options.

Decision: She chose one of several alternative actions.

Planning: She constructed a multi-step action-plan.

Reconsideration: Later she reconsidered this choice.

-67 -

Marvin Minsky:

Interior Grounding, Reflection and Self-Consciousness * These processes can be classified something like this ... which is similar to Freud's model:

-68 -

Marvin Minsky:

Interior Grounding, Reflection and Self-Consciousness

-69 -

Minsky doesn't like the bottom-up idea that sensations (alone) could lead to higher thought.

He believes in a rich set of built-in capabilities.

The details (which language, what culture, what house and street) are learned by each individual.

-70 -

Minsky's Influence

"Societies of Mind" has not yet led to a working AI system .. but Minsky's early work led to the study of Neural Nets (our next topic)

-71 -

5. Neural Nets, Perception and Learning

-72 -

4. Artificial Evolution: Nature "learns" by creating new species.

bio100.nicerweb.net

Can we model that process, to solve problems?

-73 -

Evolutionary Computing: Reviewing Genetics

• Sexual reproduction has a big payoff. What is it? (In other words: why are males worth having?)
• Observation: bacteria and viruses without sexual reproduction have evolved several mechanisms for swapping DNA.
• It's almost as if the fundamental underlying metaphor for life is a flea market.

www.ryctx.org

-74 -

Evolutionary Computing: * Genetics Reviewed

KNOWN BEFORE DNA was discovered:

• The genome is a (very) long sequence of genes.
• Each gene controls the production of one kind of protein.
• Proteins are catalysts for chemical reactions, as well as the 'structural steel' of living organisms.

A GENE represents a finite alphabet of choices.

The various versions of a gene are called alleles.

If there are 10 ways to make collagen, there would be 10 alleles for the collagen gene.

-75 -

Genotype and Phenotype

• Genotype: your collection of genes.
• Phenotype: your 'rendering' – your actual body, as built.
• Genes, encoded in DNA, are organized into chromosomes.
• Individual humans have 23 pairs of chromosomes.
• When reproducing, each parent randomly contributes one of the two chromosomes in each pair to the child.

-76 -

Genotype and Phenotype

(figure: Mom's and Dad's chromosome pairs 1, 2, 3, ..., 23, shown side by side)

-77 -

Genotype and Phenotype

(figure: Mom's and Dad's chromosome pairs, with one chromosome chosen from each pair for the child – ".. it's a GIRL!")

A given pair of parents can produce 2^23 ≈ 8 million (8,388,608) different genetic combinations.

-80 -

Why does this system pay such big dividends?

• The gene pool is a toolkit of variations.

• Consider melanin. Assume variations from black to brown in various versions of the melanin gene.

• Your tribe moves from Africa to Europe.

• Your random genome remix produces kids of various shades.

The ones with lighter skin get more vitamin D and thrive.

They have more kids. The light-skin gene increases in the gene pool. Feedback loop.

-81 -

Why does this system pay such big dividends?

• The gene pool is a toolkit of variations.

• Consider melanin. Assume variations from black to brown in various versions of the melanin gene.

• Your tribe moves from Africa to Europe.

• Your random genome remix produces kids of various shades.

The ones with lighter skin get more vitamin D and thrive.

They have more kids. The light-skin gene increases in the gene pool. Feedback loop.

NOTE: You didn't have to INVENT the variation (mutation).

You had it stored away in your toolkit (genome).

Mutation (creation of new alleles or genes) is MUCH slower than selection among existing alleles. You need BOTH mechanisms.

-82 -

Mutation – the big Disaster/Opportunity

• Mutations are rare and usually fatal.
• A copying error occurs in a chromosome:
  - some DNA is duplicated
  - some DNA is deleted
  - one codon (and its amino acid) replaces another
• Some mutations are beneficial, but most are fatal or neutral (now).
• A slightly different kind of hemoglobin might not kill you, but might turn out to be BETTER against some parasite that attacks your great great great .... grandchildren.

-83 -

Diversity yields robustness

• The environment produces an infinite suite of challenges.

• A rich gene pool provides instant options to try.

• A narrow gene pool is a ticket to extinction (Florida panthers).
• Hybrid vigor is a concept that every farmer knows.

Cross Hereford and Angus cows; calves grow faster.

-84 -

Diversity yields robustness

• The environment produces an infinite suite of challenges.

• A rich gene pool provides instant options to try.

• A narrow gene pool is a ticket to extinction (Florida panthers).
• Hybrid vigor is a concept that every farmer knows.

Cross Hereford and Angus cows; calves grow faster.

It's like NAFTA or the European Union. Win-win really is possible! Your kids will survive better if your partner's tool-set complements rather than replicates your own.

-85 -

Unnatural selection

• To build a "learning system" we need three things:
  - a genotype (a coded representation)
  - a phenotype (a rendering into a 'real world' of competition)
  - a fitness function (something to measure and kill the losers)

-86 -

Unnatural selection

• To build a "learning system" we need three things:
  - a genotype (a coded representation)
  - a phenotype (a rendering into a 'real world' of competition)
  - a fitness function (something to measure and kill the losers)

Is a self-replicating robot without a genome impossible?

That is not proven. But all examples thus far are trivial.

(Crystal growth) www.dpchallenge.com
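As a minimal sketch of those three ingredients on an invented toy problem (bit-string genomes, not the creatures discussed next), a genetic algorithm might look like this:

```python
import random

# Toy genetic algorithm: genotype = list of bits, phenotype = the pattern of
# ones expressed, fitness = how many ones. Population size and rates are
# arbitrary choices for illustration.

def random_genotype(length=20):
    return [random.randint(0, 1) for _ in range(length)]

def fitness(genotype):
    return sum(genotype)               # measure the "rendered" individual

def breed(mom, dad, mutation_rate=0.02):
    child = [random.choice(pair) for pair in zip(mom, dad)]      # crossover
    return [1 - g if random.random() < mutation_rate else g for g in child]

population = [random_genotype() for _ in range(100)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:20]        # the fitness function "kills the losers"
    population = survivors + [breed(random.choice(survivors), random.choice(survivors))
                              for _ in range(80)]

print(fitness(max(population, key=fitness)))   # typically close to the maximum, 20
```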

-87 -

A Genotype for Mini-Robots

• Karl Sims decided to use a graph-theory genome.
• It is applied twice: once for the body, once for the nervous system.
• A random pool of 300 genomes is built. They are pre-selected by removing:
  - creatures with more than N body parts
  - creatures whose body parts interpenetrate (share space)
• Rules of the universe are established, e.g. gravity, a floor.
• Goal (fitness function) is set, e.g. radius crawled in 1 minute.
• Run the simulation. Keep the best 1/5 of the population (60 individuals). Re-mix genes to replace the 240 who died. Run the simulation again.
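A hedged sketch of that outer selection loop; simulate() and remix() are placeholders standing in for Sims' physics simulation and graph-genome operators, which are not shown here:

```python
import random

# Skeleton of the selection loop described above: 300 genomes, keep the best
# fifth, refill by re-mixing survivors.

def simulate(genome):
    """Stand-in fitness, e.g. radius crawled in one simulated minute."""
    return random.random()             # a real system would run the physics here

def remix(parent_a, parent_b):
    """Stand-in for crossover/grafting of two graph genomes."""
    return random.choice([parent_a, parent_b])

population = [f"genome-{i}" for i in range(300)]
for generation in range(10):
    scored = sorted(population, key=simulate, reverse=True)
    survivors = scored[:60]                          # best 1/5 of 300
    offspring = [remix(random.choice(survivors), random.choice(survivors))
                 for _ in range(240)]                # replace the 240 who died
    population = survivors + offspring
```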

-88 -

A Genotype for Mini-Robots

• Some of the goals:
  - radial distance traveled
  - linear distance traveled
  - distance swum (or flown) through a fluid medium
  - speed of approach toward a moving target point
  - competition to capture a shared object

-89 -

A Genotype for Mini-Robots

• Some of the goals:
  - radial distance traveled
  - linear distance traveled
  - distance swum (or flown) through a fluid medium
  - speed of approach toward a moving target point
  - competition to capture a shared object

Competitive events: how do you pair them up?
- n x n takes n^2 time, and is too slow (each sim is slow!)
- pairwise often means playing against an idiot.
- n vs. best-of-last-round seemed to work well.

-90 -

A Genotype for Mini-Robots

• Some of the goals:
  - radial distance traveled
  - linear distance traveled
  - distance swum (or flown) through a fluid medium
  - speed of approach toward a moving target point
  - competition to capture a shared object

Competitive events: how do you pair them up?
- n x n takes n^2 time, and is too slow (each sim is slow!)
- pairwise often means playing against an idiot.
- n vs. best-of-last-round seemed to work well.

One-species versus two-species (breeding populations)

-91 -

NOW watch the movie at http://www.youtube.com/watch?v=JBgG_VSP7f8

www.dpchallenge.com

-92 -

A Genotype for Mini-Robots

• So ... how was this done?

NODE and LINK (The names are just to help us think.)

Example 1: From a segment, link to two other segments. Repeat any number of times, recursively.

-93 -

A Genotype for Mini-Robots

• So ... how was this done?

NODE and LINK (The names are just to help us think.)

Example 2: From a body segment, link to one other body segment and two leg segments. From a leg segment, link once to another leg segment.

-94 -

A Genotype for Mini-Robots

• So ... how was this done?

NODE and LINK (The names are just to help us think.)

Example 3: From a body, link to a head & four limbs. From a limb, link to another limb.

www.dpchallenge.com

-95 -

Brains and bodies

• Each sensor is contained in a specific body part.
• Sensors measure joint angles, forces, and properties of the world.
• The brain is a network of neurons (but not like real ones).
• Neuron functions include: sum, product, sum-threshold, greater-than, ... sin, cos, log, integrate, differentiate, ... smooth, memory, oscillate-wave, oscillate-sawtooth.
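To make the idea concrete, here is a small sketch (an invented illustration, not Sims' code) of how such named node functions might be represented and evaluated:

```python
import math

# Each "neuron" applies a simple named function to its list of input signals.
# Only a few of the listed functions are sketched; the rest follow the same pattern.
NEURON_FUNCTIONS = {
    "sum":            lambda inputs: sum(inputs),
    "product":        lambda inputs: math.prod(inputs),
    "greater-than":   lambda inputs: 1.0 if inputs[0] > inputs[1] else 0.0,
    "sin":            lambda inputs: math.sin(inputs[0]),
    "oscillate-wave": lambda inputs: math.sin(inputs[0] * inputs[1]),  # time * frequency
}

def evaluate(kind, inputs):
    """Apply the neuron function named `kind` to the input signals."""
    return NEURON_FUNCTIONS[kind](inputs)

print(evaluate("sum", [0.5, 0.25]))          # 0.75
print(evaluate("greater-than", [2.0, 1.0]))  # 1.0
```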

-96 -

Neurons in Segments

P0, P1 are photosensors

C0 and Q0 are contact sensors

E0 and E1 are effectors (joint angle drivers)

The connections are evolved, not reasoned out.

(There is a graph genome for the neurons, too.)

-97 -

Neurons in Segments

P0, P1 are photosensors

C0 and Q0 are contact sensors

E0 and E1 are effectors (joint angle drivers)

The connections are evolved, not reasoned out.

(There is a graph genome for the neurons, too.)

-98 -

Neurons in Segments

A single shared neuron group is also provided. ("Where" is it? Unspecified.) This capability allows for coordinated control.

-99 -

Neurons in Segments

A single shared neuron group is also provided. ("Where" is it? Unspecified.) This capability allows for coordinated control.

The saw and wav oscillators are key elements.

-100 -

What Changes in each Generation?

NOTE: The system mixes sexual reproduction with mutation in an un-biological way: mutation occurs in every generation.

MUTATION

1. Internal parameters (weights, oscillation frequencies) are randomly altered. Small alterations are more likely than big ones.
2. A new random node is added to the graph. (It may not connect; it will be discarded if not.)
3. New random connections are added, and existing ones are removed.
4. Unconnected elements are garbage-collected.

Outside (morphology) graphs are altered first, then inside (neuro) ones.
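A rough sketch of mutation steps 1 and 3 on a toy graph genome; the data structure and the mutation rates are assumptions for illustration, not Sims' actual representation:

```python
import random

# Toy graph genome: nodes carry parameters, edges connect node indices.
genome = {
    "nodes": [{"weight": 0.5, "freq": 2.0}, {"weight": -0.3, "freq": 1.0}],
    "edges": [(0, 1)],
}

def mutate(genome, param_rate=0.3, edge_rate=0.2):
    # 1. Perturb internal parameters; small changes are more likely than big ones.
    for node in genome["nodes"]:
        for key in node:
            if random.random() < param_rate:
                node[key] += random.gauss(0.0, 0.1)
    # 3. Occasionally add a random connection, or remove an existing one.
    if random.random() < edge_rate:
        a, b = random.sample(range(len(genome["nodes"])), 2)
        genome["edges"].append((a, b))
    elif genome["edges"] and random.random() < edge_rate:
        genome["edges"].remove(random.choice(genome["edges"]))
    return genome

mutate(genome)
```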

-101 -

What Changes in each Generation?

MATING the GRAPHS

a. Crossover operation.

A subset of parent 2 is inserted to replace a subset of parent 1

-102 -

What Changes in each Generation?

MATING the GRAPHS

a. Crossover operation.

A subset of parent 2 is inserted to replace a subset of parent 1

-103 -

What Changes in each Generation?

MATING the GRAPHS

b. Grafting operation.

Two parents are joined together (each loses one node)
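A loose sketch of the crossover idea on list-like genomes (the real operation works on graphs and must reconnect links; this simplified version only shows replacing part of parent 1 with part of parent 2):

```python
import random

# Simplified crossover: splice a slice of parent 2 into parent 1.
# Reconnecting graph links, as the real grafting operation requires, is omitted.

def crossover(parent1, parent2):
    cut = random.randint(1, min(len(parent1), len(parent2)) - 1)
    return parent1[:cut] + parent2[cut:]     # head of parent 1, tail of parent 2

parent1 = ["body", "leg", "leg", "head"]
parent2 = ["body", "fin", "fin", "tail", "tail"]
print(crossover(parent1, parent2))   # e.g. ['body', 'leg', 'fin', 'tail', 'tail']
```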

-104 -

Results

- Interbreeding populations often converge to uniformity, but successive runs often produce totally different results.
- Swimming produced: paddles, tail wagglers, specialized scullers, lots of flippers, water snakes.

-105 -

Results

- Interbreeding populations often converge to uniformity, but successive runs often produce totally different results.
- Walking produced: corner-walkers, rocking blocks, inchworms, legs, hoppers.
- Light-following worked in both walking and swimming environments.

-106 -

What happened next?

- Not much (at least, nothing so spectacular as Sims' creatures). Why?

- The leap from simple goal-seeking motor activity ("tropisms") to interesting perception and cognition is verrrrrry looooong.

- Folks like Brooks and Minsky's successors are trying to bridge the gap.

- Fundamental insights are still needed.

-107 -