
CS-378: Game Technology
Lecture #18: AI
Prof. Okan Arikan
University of Texas, Austin
Thanks to James O’Brien, Steve Chenney, Zoran Popovic, Jessica Hodgins
V2005-08-1.1
Today
Finishing path planning
A* recap
Smooth paths
Hierarchical path planning
Dynamic path planning
Potential fields
Flocking
Back to High level AI
Decision trees
Rule based systems
Half Life (original)
A* Search
Set f(n) = g(n) + h(n): cost so far plus estimated cost to go
Now we are expanding nodes according to best estimated total path cost
Is it optimal?
It depends on h(n)
Is it efficient?
It is the most efficient of any optimal algorithm that uses the
same h(n)
A* is the ubiquitous algorithm for path planning in games
Much effort goes into making it fast, and making it produce
pretty looking paths
More articles on it than you can ever hope to read
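For concreteness, a minimal A* sketch in Python (the interface is hypothetical: neighbors(n) yields (neighbor, edge_cost) pairs and h(n, goal) is the heuristic estimate):

    import heapq
    from itertools import count

    def a_star(start, goal, neighbors, h):
        # Frontier is ordered by f(n) = g(n) + h(n); the counter breaks ties.
        tie = count()
        frontier = [(h(start, goal), next(tie), start)]
        g = {start: 0.0}                       # best known cost so far
        parent = {start: None}
        while frontier:
            f, _, n = heapq.heappop(frontier)
            if n == goal:                      # goal reached: rebuild the path
                path = []
                while n is not None:
                    path.append(n)
                    n = parent[n]
                return path[::-1]
            for m, edge_cost in neighbors(n):
                g_m = g[n] + edge_cost
                if g_m < g.get(m, float("inf")):
                    g[m] = g_m
                    parent[m] = n
                    heapq.heappush(frontier, (g_m + h(m, goal), next(tie), m))
        return None                            # no path exists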
Heuristics
For A* to be optimal, the heuristic must underestimate the
true cost
Such a heuristic is admissible
Also, not mentioned in Gems, the f(n) function must be non-decreasing
along any path out of the start node
True for almost any admissible heuristic, related to the triangle inequality
If not true, can fix with the "pathmax" correction: f(n) = max(f(parent), g(n) + h(n)),
so estimates never decrease along a path
Combining heuristics:
If you have more than one heuristic, all of which underestimate, but which
give different estimates, can combine with: h(n)=max(h1(n),h2(n),h3(n),…)
Inventing Heuristics
Bigger estimates are always better than smaller ones
They are closer to the “true” value
So straight line distance is better than a small constant
Important case: Motion on a grid
If diagonal steps are not allowed, use Manhattan distance:
|x1 - x2| + |y1 - y2| is a bigger estimate than the straight-line distance sqrt((x1 - x2)^2 + (y1 - y2)^2)
General strategy: Relax the constraints on the problem
For example: Normal path planning says avoid obstacles
Relax by assuming you can go through obstacles
Result is straight line distance
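A sketch of these heuristics for a 2D grid (names are illustrative; points are (x, y) tuples):

    import math

    def manhattan(a, b):
        # Admissible only when diagonal steps are not allowed.
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def straight_line(a, b):
        # Admissible for any motion (obstacles relaxed away).
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def combined(a, b):
        # The h(n) = max(h1(n), h2(n), ...) pattern from the Heuristics
        # slide; each input must itself be admissible for the result to be.
        return max(straight_line(a, b), manhattan(a, b))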
A* Problems
Discrete Search:
Must have simple paths to connect waypoints
Typically use straight segments
Have to be able to compute cost
Must know that the object will not hit obstacles
Leads to jagged, unnatural paths
Infinitely sharp corners
Jagged paths across grids
Starcraft
Efficiency is not great
Finding paths in complex environments can be very
expensive
Path Straightening
Straight paths typically look more plausible than
jagged paths, particularly through open spaces
Option 1: After the path is generated, from each
waypoint look ahead to farthest unobstructed
waypoint on the path
Removes many segments and replaces with one straight one
Could be achieved with more connections in the waypoint graph,
but that would increase cost
Option 2: Bias the search toward straight paths
Increase the cost for a segment if using it requires turning a
corner
Reduces efficiency, because straight but unsuccessful paths will
be explored preferentially
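A sketch of Option 1, assuming a clear_line(a, b) visibility test against the obstacle set (hypothetical helper):

    def straighten(path, clear_line):
        # From each kept waypoint, skip ahead to the farthest waypoint
        # that can be reached by a single unobstructed segment.
        out = [path[0]]
        i = 0
        while i < len(path) - 1:
            j = len(path) - 1
            while j > i + 1 and not clear_line(path[i], path[j]):
                j -= 1
            out.append(path[j])
            i = j
        return out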
Smoothing While Following
Rather than smooth out the path, smooth out the
agent’s motion along it
Typically, the agent’s position linearly interpolates
between the waypoints: p = (1 - u) p_i + u p_(i+1)
u is a parameter that varies according to time and the
agent’s speed
Two primary choices to smooth the motion
Change the interpolation scheme
“Chase the point” technique
Different Interpolation Schemes
View the task as moving a point (the agent) along a curve
fitted through the waypoints
We can now apply classic interpolation techniques to
smooth the path: splines
Interpolating splines:
The curve passes through every waypoint, but may have nasty bends and
oscillations
Hermite splines:
Also pass through the points, and you get to specify the direction as you go
through the point
Bezier or B-splines:
May not pass through the points, only approximate them
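For instance, a Catmull-Rom spline (one common interpolating choice; a sketch, with points as coordinate tuples) passes through every waypoint:

    def catmull_rom(p0, p1, p2, p3, u):
        # Evaluates the curve between p1 and p2 for u in [0, 1];
        # p0 and p3 shape the tangents at the two ends.
        u2, u3 = u * u, u * u * u
        return tuple(
            0.5 * (2 * b + (-a + c) * u
                   + (2 * a - 5 * b + 4 * c - d) * u2
                   + (-a + 3 * b - 3 * c + d) * u3)
            for a, b, c, d in zip(p0, p1, p2, p3))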
Chase the Point
Instead of tracking along the path, the agent chases
a target point that is moving along the path
Start with the target on the path ahead of the agent
At each step:
Move the target along the path using linear
interpolation
Move the agent toward the point location, keeping it
a constant distance away or moving the agent at the
same speed
Works best for driving or flying games
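A sketch of the idea (helpers are illustrative; path_point(s) returns the point at arc length s along the waypoint path):

    def chase_step(agent, target_s, path_point, dt,
                   target_speed=5.0, agent_speed=4.5):
        # Advance the target along the path...
        target_s += target_speed * dt
        tx, ty = path_point(target_s)
        # ...then move the agent toward it at constant speed.
        dx, dy = tx - agent.x, ty - agent.y
        dist = max((dx * dx + dy * dy) ** 0.5, 1e-6)
        agent.x += agent_speed * dt * dx / dist
        agent.y += agent_speed * dt * dy / dist
        return target_s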
Chase the Point Demo
Still not great…
The techniques we have looked at are path post-processing: they take the output of A* and process it
to improve it
What are some of the bad implications of this?
Why do people still use these smoothing techniques?
If post-processing causes these problems, we can
move the solution strategy into A*
A* for Smooth Paths
http://www.gamasutra.com/features/20010314/pinter_01.htm
You can argue that smoothing is an attempt to avoid
infinitely sharp turns
Incorporating turn radius information can fix this
Option 1: Restrict turn radius as a post-process
But has all the same problems as other post-processes
A* for Smooth Paths
Option 2: Incorporate direction and turn radius into A*
itself
Add information about the direction of travel when
passing through a waypoint
Do this by duplicating each waypoint 8 times (for
eight directions)
Then do A* on the expanded graph
Cost of a path comes from computing bi-tangents …
Using Turning Radius
Fixed start direction, any
finish direction: 2 options
Fixed direction at both
ends: 4 options
Curved paths are used to compute cost, and also to determine whether
the path is valid (avoids obstacles)
Improving A* Efficiency
Recall, A* is the most efficient optimal algorithm for a
given heuristic
Improving efficiency, therefore, means relaxing
optimality
Basic strategy: Use more information about the
environment
Inadmissible heuristics use intuitions about which paths are
likely to be better
E.g. Bias toward getting close to the goal, ahead of exploring early
unpromising paths
Hierarchical planners use information about how the path must
be constructed
E.g. To move from room to room, you just need to go through the doors
Inadmissible Heuristics
A* will still give an answer with inadmissible
heuristics
But it won’t be optimal: May not explore a node on
the optimal path because its estimated cost is too
high
Optimal A* will eventually explore any such node
before it reaches the goal
Inadmissible Heuristics
However, inadmissible heuristics may be much
faster
Trade off computational efficiency for path efficiency
Start ignoring “unpromising” paths earlier in the
search
But not always faster – initially promising paths
may be dead ends
Recall additional heuristic restriction: estimates for
path costs must increase along any path from the
start node
Inadmissible Example
Multiply an admissible heuristic by a constant
factor (See Gems for an example of this)
Why does this work?
The frontier in A* consists of nodes that have roughly equal
estimated total cost: f = cost_so_far + estimated_to_go
Consider two nodes on the frontier: one with f=1+5, another
with f=5+1
Originally, A* would have expanded these at about the same
time
If we multiply the estimate by 2, we get: f=1+10 and f=5+2
So now, A* will expand the node that is closer to the goal long
before the one that is further from the goal
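In code this is a one-line wrapper (a sketch; any w > 1 makes an admissible heuristic inadmissible):

    def weighted(h, w=2.0):
        # Inflating the estimate biases A* toward nodes near the goal,
        # trading optimality for speed.
        return lambda n, goal: w * h(n, goal)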
Hierarchical Planning
Many planning problems can be thought of
hierarchically
To pass this class, I have to pass the exams and do the projects
To pass the exams, I need to go to class, review the material,
and show up at the exam
To go to class, I need to go to RLM 7.120 at 2:00pm TuTh
Path planning is no exception:
To go from my current location to slay the dragon, I
first need to know which rooms I will pass through
Then I need to know how to pass through each room,
around the furniture, and so on
Doing Hierarchical Planning
Define a waypoint graph for the top of the hierarchy
For instance, a graph with waypoints in doorways
(the centers)
Nodes linked if there exists a clear path between
them (not necessarily straight)
Doing Hierarchical Planning
For each edge in that graph, define another
waypoint graph
This will tell you how to get between each doorway
in a single room
Nodes from top level should be in this graph
First plan on the top level - result is a list of rooms to
traverse
Then, for each room on the list, plan a path across it
Can delay low-level planning until required - this smooths out frame time
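A sketch of the two levels, reusing the a_star sketch from earlier (the room_graph object and plan_room helper are illustrative):

    def hierarchical_path(start, goal, room_graph, plan_room):
        # Top level: A* over the doorway graph picks the rooms to traverse.
        doorways = a_star(start, goal, room_graph.neighbors, room_graph.h)
        if doorways is None:
            return None                    # no top-level route at all
        # Low level: plan each leg only when the agent needs it.
        def legs():
            for a, b in zip(doorways, doorways[1:]):
                yield plan_room(a, b)      # detailed path across one room
        return legs()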
Hierarchical Planning Example
Plan this first
Then plan each room
(second room shown)
Hierarchical Planning Advantages
The search is typically cheaper
The initial search restricts the number of nodes
considered in the latter searches
It is well suited to partial planning
Only plan each piece of path when it is actually
required
Averages out cost of path over time, helping to avoid
long lag when the movement command is issued
Makes the path more adaptable to dynamic changes
in the environment
Hierarchical Planning Issues
Result is not optimal
No information about actual cost of low level is used at top level
Top level plan locks in nodes that may be poor choices
Have to restrict the number of nodes at the top level for efficiency
So cannot include all the options that would be available to a full
planner
Solution is to allow lower levels to override higher level
Textbook example: Plan 2 lower level stages at a time
E.g. Plan from current doorway, through next doorway, to one after
When reach the next doorway, drop the second half of the path and start
again
Pre-Planning
If the set of waypoints is fixed, and the obstacles
don’t move, then the shortest path between any two
never changes
If it doesn’t change, compute it ahead of time
This can be done with all-pairs shortest paths
algorithms
Dijkstra’s algorithm run for each start point, or special
purpose all-pairs algorithms
The question is, how do we store the paths?
Storing All-Pairs Paths
Trivial solution is to store the shortest path to every
other node in every node: O(n^3) memory
A better way:
Say I have the shortest path from A to B: A-B
Every shortest path that goes through A on the way to B must use A-B
So, if I have reached A, and want to go to B, I always take the same next step
This holds for any source node: the next step from any node on the way to B
does not depend on how you got to that node
But a path is just a sequence of steps - if I keep following the “next step” I will
eventually get to B
Only store the next step out of each node, for each possible destination
Example
[Figure: a seven-node waypoint graph, A through G]
Next-step table - rows: "If I'm at", columns: "And I want to go to":

        A     B     C     D     E     F     G
  A     -    A-B   A-C   A-B   A-C   A-C   A-C
  B    B-A    -    B-A   B-D   B-D   B-D   B-D
  C    C-A   C-A    -    C-E   C-E   C-F   C-E
  D    D-B   D-B   D-E    -    D-E   D-E   D-G
  E    E-C   E-D   E-C   E-D    -    E-F   E-G
  F    F-C   F-E   F-C   F-E   F-E    -    F-G
  G    G-E   G-D   G-E   G-D   G-E   G-F    -

To get from A to G: A-C + C-E + E-G
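The next-step table can be built offline with Floyd-Warshall (a sketch; dist starts as direct edge costs, infinity where there is no edge, and nxt[a][b] = b wherever edge a-b exists):

    def build_next_step(nodes, dist, nxt):
        # Standard Floyd-Warshall, also recording the first step of the
        # shortest path for every (source, destination) pair.
        for k in nodes:
            for a in nodes:
                for b in nodes:
                    if dist[a][k] + dist[k][b] < dist[a][b]:
                        dist[a][b] = dist[a][k] + dist[k][b]
                        nxt[a][b] = nxt[a][k]   # first step goes via k
        return nxt

    def follow(a, b, nxt):
        # Recover a full path by chaining "next step" lookups.
        path = [a]
        while a != b:
            a = nxt[a][b]
            path.append(a)
        return path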
Big Remaining Problem
So far, we have treated finding a path as planning
We know the start point, the goal, and everything in
between
Once we have a plan, we follow it
What’s missing from this picture?
Hint: What if there is more than one agent?
What might we do about it?
Dynamic Path Planning
What happens when the environment changes after
the plan has been made?
The player does something
Other agents get in the way (in this case, you know that the
environment will change at the time you make the plan)
The solution strategies are highly dependent on the
nature of the game, the environment, the types of AI,
and so on
Three approaches:
Try to avoid the problem
Re-plan when something goes wrong
Reactive planning
Avoiding Plan Changes
Partial planning: Only plan short segments of path at a time
Stop A* after a path of some length is found, even if the goal is not
reached - use best estimated path found so far
Extreme case: Use greedy search and only plan one step at a time
Common case: Hierarchical planning and only plan low level when needed
Underlying idea is that a short path is less likely to change than a long
path
But, optimality will be sacrificed
Another advantage is more even frame times
Other strategies:
Wait for the blockage to pass - if you have reason to believe it will
Lock the path to other agents - but implies priorities
Re-Planning
If you discover the plan has gone wrong, create a
new one
The new plan assumes that the dynamic changes
are permanent
Usually used in conjunction with one of the
avoidance strategies
Re-planning is expensive, so try to avoid having to
do it
No point in generating a plan that will be re-done - this suggests partial planning in conjunction with re-planning
Reactive Planning
A reactive agent plans only its next step, and only uses
immediately available information
Best example for path planning is potential field planning
Set up a force field around obstacles (and other agents)
Set up a gradient field toward the goal
The agent follows the gradient downhill to the goal, while the
force field pushes it away from obstacles
Can also model velocity and momentum - field applies a force
Potential field planning is reactive because the agent just
looks at the local gradient at any instant
Has been used in real robots for navigating things like
hallways
Potential Field
Red is start point, blue
is goal
This used a quadratic
field strength around
the obstacles
Note that the
boundaries of the world
also contribute to the
field
Creating the Field
The constant gradient can be a simple linear gradient
based on distance from the goal, d_goal: f_goal = k * d_goal
The obstacles contribute a field strength based on
the distance from their boundary, f_i(d_i)
Linear, quadratic, exponential, something else
Normally truncate so that field at some distance is
zero. Why?
Strength determines how likely the agent is to avoid it
Add all the sub-fields together to get overall field
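A sketch of the summed field in 2D (quadratic obstacle falloff truncated at a cutoff radius; all constants illustrative):

    import math

    def field(p, goal, obstacles, k=1.0, k_obs=50.0, cutoff=3.0):
        # Attractive part: linear in distance to the goal.
        f = k * math.dist(p, goal)
        # Repulsive part: quadratic near each (center, radius) obstacle,
        # truncated to zero beyond the cutoff distance.
        for center, radius in obstacles:
            d = max(math.dist(p, center) - radius, 1e-6)
            if d < cutoff:
                f += k_obs * (cutoff - d) ** 2
        return f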
Following the Field
At each step, the agent needs to know which direction is
“downhill”
Compute the gradient of the field
Compute the gradients of each component and add
Need partial derivatives in x and y (for 2D planning)
Best approach is to consider the gradient as an acceleration
Automatically avoids sharp turns and provides smooth motion
Higher mass can make large objects turn more slowly
Easy to make frame-rate independent
But, high velocities can cause collisions because field is not strong enough
to turn the object in time
One solution is to limit velocity - want to do this anyway because the field is
only a guide, not a true force
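A sketch of one step, treating the numerical gradient as an acceleration and clamping velocity (assumes field(p) is a one-argument closure over the goal and obstacles, e.g. lambda p: field(p, goal, obstacles) from the previous sketch):

    def follow_step(p, v, field, dt, mass=1.0, v_max=4.0, eps=0.01):
        # Central-difference gradient of the field at p (2D).
        gx = (field((p[0] + eps, p[1])) - field((p[0] - eps, p[1]))) / (2 * eps)
        gy = (field((p[0], p[1] + eps)) - field((p[0], p[1] - eps))) / (2 * eps)
        ax, ay = -gx / mass, -gy / mass       # accelerate downhill
        vx, vy = v[0] + ax * dt, v[1] + ay * dt
        speed = (vx * vx + vy * vy) ** 0.5
        if speed > v_max:                     # clamp: the field is only a guide
            vx, vy = vx * v_max / speed, vy * v_max / speed
        return (p[0] + vx * dt, p[1] + vy * dt), (vx, vy)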
Following Examples
No momentum - choose to go to
neighbor with lowest field strength
Momentum - but with linear obstacle
field strength and moved goal
Discrete Approximation
Compute the field on a grid
Allows pre-computation of fields that do not change, such as
fixed obstacles
Moving obstacles handled as before
Use discrete gradients
Look at neighboring cells
Go to neighboring cell with lowest field value
Advantages: Faster
Disadvantages: Space cost, approximate
Left example on previous slide is a (very fine)
discrete case
Potential Field Problems
There are many parameters to tune
Strength of the field around each obstacle
Function for field strength around obstacle
Steepness of force toward the goal
Maximum velocity and mass
Goals conflict
High field strength avoids collisions, but produces big forces and
hence unnatural motion
Higher mass smoothes paths, but increases likelihood of
collisions
Local minima cause huge problems
Bloopers
Field too weak
Field too strong
Local Minima Example
The Local Minima Problem
Recall, path planning can be viewed as optimization
Potential field planning is gradient descent optimization
The biggest problem with gradient descent is that it gets
stuck in local minima
Potential field planning suffers from exactly the same
problem
Must have a way to work around this
Go back if a minimum is found, and try another path
With what sorts of environments will potential fields work
best?
What form of waypoint-based search is it similar to?
Flocking Models (Reynolds 87)
Potential fields are most often used in avoiding collisions
between the members of a group
Each member pushes on its neighbors to keep them from colliding
Additional rules for groups can be defined - the result is a
flocking model, or herding, or schooling, …
Each rule contributes a desired direction; these are
combined in some way to come up with the acceleration
The aim is to obtain emergent behavior:
Define simple rules on individuals that interact to give interesting global
behavior
For example, rules for individual birds make them form a flock, but we
never explicitly specify a leader, or the exact shape, or the speed, …
Flocking Rules
Separation: Try to avoid running into local flock-mates
Works just like potential fields
Normally, use a perception volume to limit visible flock-mates
Alignment: Try to fly in the same direction as local flock-mates
Gets everyone flying in the same direction
Cohesion: Try to move toward the average position of
local flock-mates
Spaces everyone out evenly, and pulls boundary members back toward
the group
Avoidance: Try to avoid obstacles
Just like potential fields
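A sketch of the first three rules as desired-direction vectors (boids have x, y, vx, vy fields; neighbors is the list of flock-mates inside the perception volume); weighting and combining these is covered on the Combining Commands slide:

    def separation(boid, neighbors):
        # Steer away from flock-mates that are too close.
        sx = sy = 0.0
        for n in neighbors:
            sx += boid.x - n.x
            sy += boid.y - n.y
        return sx, sy

    def alignment(boid, neighbors):
        # Steer toward the average neighbor velocity.
        if not neighbors:
            return 0.0, 0.0
        k = len(neighbors)
        return (sum(n.vx for n in neighbors) / k - boid.vx,
                sum(n.vy for n in neighbors) / k - boid.vy)

    def cohesion(boid, neighbors):
        # Steer toward the average neighbor position.
        if not neighbors:
            return 0.0, 0.0
        k = len(neighbors)
        return (sum(n.x for n in neighbors) / k - boid.x,
                sum(n.y for n in neighbors) / k - boid.y)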
Rules Illustrated
Separation: fly away from neighbors that are "too close"
Alignment: steer toward average velocity
Cohesion: steer toward average position
Avoidance: steer away from obstacles
Combining Commands
Consider commands as accelerations
Give a weight to each desire
High for avoidance, low for cohesion
Option 1: Apply in order of highest weight, until a max
acceleration is reached
Ensures that high priority things happen
Option 2: Take weighted sum and truncate
acceleration
Makes sure some part of everything happens
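A sketch of Option 2 (weighted sum of the desired accelerations, truncated to a maximum magnitude; the weights and a_max are illustrative):

    def combine(commands, weights, a_max=2.0):
        # commands: list of (ax, ay) desires, e.g. from the rules above.
        ax = sum(w * c[0] for c, w in zip(commands, weights))
        ay = sum(w * c[1] for c, w in zip(commands, weights))
        mag = (ax * ax + ay * ay) ** 0.5
        if mag > a_max:                  # truncate the acceleration
            ax, ay = ax * a_max / mag, ay * a_max / mag
        return ax, ay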
Flocking Demo
http://www.red3d.com/cwr/boids/
Flocking Evaluation
Advantages:
Complex behavior from simple rules
Many types of behavior can be expressed with
different rules and parameters
Disadvantages:
Can be difficult to set parameters to achieve desired
result
All the problems of potential fields regarding strength
of forces
General Particle Systems
Flocking is a special case of a particle system
Objects are considered point masses from the point of view
of simulating their motion (maybe with orientation)
Simple rules are applied to control how the particles move
General Particle Systems
Particles can be rendered in a variety of ways to simulate
many different things:
Fireworks
Waterfalls, spray, foam
Explosions (smoke, flame, chunks of debris)
Clouds
Crowds, herds
Widely used in movies as well as games
The ocean spray in Titanic and Perfect Storm, for instance
Particle System Step
1. Inject any new particles into the system and assign them their individual attributes
• There may be one or more sources
• Particles might be generated at random (clouds), in a constant stream (waterfall), or according to a script (fireworks)
2. Remove any particles that have exceeded their lifetime
• May have a fixed lifetime, or die on some condition
3. Move all the current particles according to their script
• Script typically refers to the neighboring particles and the environment
4. Render all the current particles
• Many options for rendering
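A sketch of the four steps as a per-frame loop (the particle fields and the emit/move/draw hooks are illustrative):

    def particle_step(particles, emit, move, draw, dt, now):
        # 1. Inject new particles from the sources.
        particles.extend(emit(dt))
        # 2. Remove particles that have exceeded their lifetime.
        particles[:] = [p for p in particles if p.death_time > now]
        # 3. Move the survivors according to their script.
        for p in particles:
            move(p, dt)
        # 4. Render.
        for p in particles:
            draw(p)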
Example: Smoke Trails
Particles are spawned at a constant rate
They have zero initial velocity, or maybe a small
velocity away from the rocket
Rules:
Particles could rise or fall slowly (drift with the wind)
Attach a parameter, density, that grows quickly then
falls over time
Extinguish when density becomes very small
Render with a billboard facing the viewer, scaled
according to the density of the puff
Smoke Trails
[Figure: puffs along the trail age over time, from newly spawned at the rocket to extinguished at the tail]
Example: Explosions
System starts when the target is hit
Target is broken into pieces and a particle assigned for
each piece
Each particle gets an initial velocity away from the center of
the explosion
Particle rules are:
Move ballistically unless there is a collision
Could take into account rigid body rotation, or just do random rotation
Collisions are resolved by reflecting the velocity about the contact normal
Rendering just draws the appropriate piece of target at the
particle's location
Rule Based Systems
Back to high level AI
Classification
Our aim is to decide which action to take given the
world state
Convert this to a classification problem:
The state of the world is a set of attributes (or features)
Who I can see, how far away they are, how much energy, …
Given any state, there is one appropriate action
Extends to multiple actions at the same time
The action is the class that a world state belongs to
Low energy, see the enemy means I should be in the retreat
state
Classification problems are very well studied
Decision Trees
Nodes represent attribute tests
One child for each possible outcome of the test
Leaves represent classifications
Can have the same classification for several leaves
Classify by descending from root to a leaf
At each node perform the test and descend the appropriate branch
When a leaf is reached return the classification (action) of that leaf
Decision tree is a “disjunction of conjunctions of constraints
on the attribute values of an instance”
Action if (A and B and C) or (A and ~B and D) or ( … ) …
Retreat if (low health and see enemy) or (low health and hear enemy) or (
…)…
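As a sketch, a hand-built decision tree is just nested attribute tests ending in actions (the attribute names follow the Quake tree on the next slide):

    def decide(death, enemy, low, sound):
        # Each level tests one attribute; leaves return the action.
        if death:
            return "Spawn"
        if enemy:
            return "Retreat" if low else "Attack"
        if sound:
            return "Retreat" if low else "Chase"
        return "Wander"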
Decision Tree for Quake
Attributes: Enemy=<t,f>, Low=<t,f>, Sound=<t,f>, Death=<t,f>
Actions: Attack, Retreat, Chase, Spawn, Wander
Just one tree:
D? (dead?)
  t: Spawn
  f: E? (see enemy?)
    t: L? (low health?)
      t: Retreat
      f: Attack
    f: S? (hear sound?)
      t: L? (low health?)
        t: Retreat
        f: Chase
      f: Wander
Could add additional trees:
If I'm attacking, which weapon should I use?
If I'm wandering, which way should I go?
Can be thought of as just extending the given tree (but easier to design)
Or, can share pieces of the tree, such as a Retreat sub-tree
Compare and Contrast
[Figure: the same behavior as a finite state machine, with transitions labeled by changes in E (enemy), D (death), S (sound), and L (low health)]
States and the attribute values they encode:
Attack-E: E, -D, -S, -L
Attack-ES: E, -D, S, -L
Retreat-E: E, -D, -S, L
Retreat-ES: E, -D, S, L
Retreat-S: -E, -D, S, L
Chase: -E, -D, S, -L
Wander: -E, -D, -S, -L
Wander-L: -E, -D, -S, L
Spawn: D (-E, -S, -L)
Handling Simultaneous Actions
Treat each output command as a separate
classification problem
Given inputs should walk => <forward, backward, stop>
Given inputs should turn => <left, right, none>
Given inputs should run => <yes, no>
Given inputs should weapon => <blaster, shotgun…>
Given inputs should fire => <yes, no>
Have a separate tree for each command
If commands are not independent, two options:
Have a general conflict resolution strategy
Put dependent actions in one tree
Deciding on Actions
Each time the AI is called:
Poll each decision tree for current output
Event driven - only call when state changes
Need current value of each input attribute
All sensor inputs describe the state of the world
Store the state of the environment
Most recent values for all sensor inputs
Change state upon receipt of a message
Or, check validity when AI is updated
Or, a mix of both (polling and event driven)
Sense, Think, Act Cycle
Sense:
Gather input sensor changes
Update state with new values
Think:
Poll each decision tree
Act:
Execute any changes to actions
Building Decision Trees
Decision trees can be constructed by hand
Think of the questions you would ask to decide what to do
For example: Tonight I can study, play games or sleep. How do I
make my decision?
But, decision trees are typically learned:
Provide examples: many sets of attribute values and resulting
actions
Algorithm then constructs a tree from the examples
Reasoning: We don’t know how to decide on an action, so let the
computer do the work
Whose behavior would we wish to learn?
Learning Decision Trees
Decision trees are usually learned by induction
Generalize from examples
Induction doesn’t guarantee correct decision trees
Bias towards smaller decision trees
Occam’s Razor: Prefer simplest theory that fits the data
Too expensive to find the very smallest decision tree
Learning is non-incremental
Need to store all the examples
ID3 is the basic learning algorithm
Induction
If X is true in every example that results in action A,
then X must always be true for action A
More examples are better
Errors in examples cause difficulty
If X is true in most examples X must always be true
ID3 does a good job of handling errors (noise) in examples
Note that induction can result in errors
It may just be coincidence that X is true in all the examples
Typical decision tree learning determines what tests
are always true for each action
Assumes that if those things are true again, then the same
action should result
Learning Algorithms
Recursive algorithms
Find an attribute test that separates the actions
Divide the examples based on the test
Recurse on the subsets
What does it mean to separate?
Ideally, there are no actions that have examples in both sets
Failing that, most actions have most examples in one set
The thing to measure is entropy - the degree of homogeneity
(or lack of it) in a set
Entropy is also important for compression
What have we seen before that tries to separate
sets?
Why is this different?
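A sketch of the entropy measure and the resulting test score (examples as (attributes, action) pairs; ID3 greedily picks the test with the largest information gain, i.e. the biggest entropy drop):

    import math
    from collections import Counter

    def entropy(examples):
        # 0 for a homogeneous set; larger when actions are mixed.
        counts = Counter(action for _, action in examples)
        total = len(examples)
        return -sum((c / total) * math.log2(c / total)
                    for c in counts.values())

    def information_gain(examples, test):
        # Entropy drop from splitting on a boolean attribute test.
        yes = [e for e in examples if test(e[0])]
        no = [e for e in examples if not test(e[0])]
        after = (len(yes) * entropy(yes)
                 + len(no) * entropy(no)) / len(examples)
        return entropy(examples) - after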
Induction requires Examples
Where do examples come from?
Programmer/designer provides examples
Capture an expert player’s actions, and the game state, while
they play
# of examples needed depends on difficulty of
concept
Difficulty: Number of tests needed to determine the action
More is always better
Training set vs. Testing set
Train on most (75%) of the examples
Use the rest to validate the learned decision trees by estimating
how well the tree does on examples it hasn’t seen
Decision Tree Advantages
Simpler, more compact representation
State is recorded in a memory
Create “internal sensors” – Enemy-Recently-Sensed
Easy to create and understand
Can also be represented as rules
Decision trees can be learned
Decision Tree Disadvantages
Decision tree engine requires more coding than FSM
Each tree is “unique” sequence of tests, so little
common structure
Need as many examples as possible
Higher CPU cost - but not much higher
Learned decision trees may contain errors
References
Mitchell: Machine Learning, McGraw-Hill, 1997
Russell and Norvig: Artificial Intelligence: A Modern
Approach, Prentice Hall, 1995
Quinlan: Induction of decision trees, Machine
Learning 1:81-106, 1986
Quinlan: Combining instance-based and model-based learning, 10th International Conference on Machine Learning, 1993
This is coincidental - I took an AI course from Quinlan
in 1993
Rule-Based Systems
Decision trees can be converted into rules
Just test the disjunction of conjunctions for each leaf
More general rule-based systems let you write the
rules explicitly
System consists of:
A rule set - the rules to evaluate
A working memory - stores state
A matching scheme - decides which rules are applicable
A conflict resolution scheme - if more than one rule is applicable,
decides how to proceed
What types of games make the most extensive use
of rules?
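A minimal sketch of the four pieces above (conditions as predicates over working memory; conflict resolution here is simple rule order):

    def run_rules(rules, memory):
        # rules: list of (condition, action) pairs - the rule set.
        # memory: dict of state - the working memory.
        matched = [(c, a) for c, a in rules if c(memory)]   # matching
        if matched:
            _, act = matched[0]          # conflict resolution: first match
            act(memory)                  # actions may update working memory

    # Illustrative rules:
    rules = [
        (lambda m: m["health"] < 20 and m["see_enemy"],
         lambda m: m.update(action="retreat")),
        (lambda m: m["see_enemy"],
         lambda m: m.update(action="attack")),
    ]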
Rule-Based Systems Structure
[Diagram]
Rule memory - the program: procedural, long-term knowledge
Working memory - the data: declarative, short-term knowledge
Execution cycle: Match -> Conflict Resolution -> Act
AI Cycle
[Diagram]
Sensing posts changes to working memory
Match produces the rule instantiations that match working memory
Conflict resolution picks the selected rule
Act executes it, sending actions back to the game
Age of Kings
; The AI will attack once at 1100 seconds and then again
; every 1400 sec, provided it has enough defense soldiers.
(defrule
    (game-time > 1100)           ; rule (conditions)
=>
    (attack-now)                 ; actions
    (enable-timer 7 1400))

(defrule
    (timer-triggered 7)
    (defend-soldier-count >= 12)
=>
    (attack-now)
    (disable-timer 7)
    (enable-timer 7 1400))
Age of Kings
(defrule
    (true)
=>
    (enable-timer 4 3600)
    (disable-self))

(defrule
    (timer-triggered 4)
=>
    (cc-add-resource food 700)
    (cc-add-resource wood 700)
    (cc-add-resource gold 700)
    (disable-timer 4)
    (enable-timer 4 2700)
    (disable-self))
What is it doing?
Implementing Rule-Based Systems
Where does the time go?
90-95% goes to Match
Matching all rules against all of working memory each
cycle is way too slow
Key observation:
# of changes to working memory each cycle is small
If conditions, and hence rules, can be associated with
changes, then we can make things fast (event driven)
Efficient Special Case
If only simple tests in conditions, compile rules into a match net
Simple means: can map changes in state to rules that must be re-evaluated
Process changes to working memory
Associate changes with tests
Expected cost: linear in the number of changes to working memory
[Match-net figure: tests A, B, C, D feed rules R1 and R2; bit vectors store which tests are true; rules with all tests true go in the conflict set]
R1: If A, B, C, then …
R2: If A, B, D, then …
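A sketch of the bit-vector scheme (the change-to-tests index and the test thunks are illustrative):

    def process_changes(changes, test_index, tests, rule_masks, state):
        # Re-evaluate only the tests affected by this cycle's changes.
        for attr in changes:
            for t in test_index[attr]:         # tests mentioning attr
                if tests[t]():
                    state |= 1 << t            # set this test's bit
                else:
                    state &= ~(1 << t)
        # A rule enters the conflict set when all of its test bits are set.
        conflict_set = [r for r, mask in rule_masks.items()
                        if state & mask == mask]
        return state, conflict_set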
General Case
Rules can be arbitrarily complex
In particular: function calls in conditions and actions
If we have arbitrary function calls in conditions:
Can’t hash based on changes
Run through rules one at a time and test conditions
Pick the first one that matches (or do something else)
Time to match depends on:
Number of rules
Complexity of conditions
Number of rules that don’t match
Baldur's Gate
IF
Heard([PC],UNDER_ATTACK)
!InParty(LastAttackerOf(LastHeardBy(Myself)))
Range(LastAttackerOf(LastHeardBy(Myself)),5)
!StateCheck(LastAttackerOf(LastHeardBy(Myself)),
STATE_PANIC)
!Class(Myself,FIGHTER_MAGE_THIEF)
THEN
RESPONSE #100
EquipMostDamagingMelee()
AttackReevaluate(LastAttackerOf(LastHeardBy(Myself)),30)
END
Conflict Resolution Strategies
What do we do if multiple rules match?
Rule order – pick the first rule that matches
Makes order of loading important – not good for big
systems
Rule specificity - pick the most specific rule
Conflict Resolution Strategies
Rule importance – pick rule with highest priority
When a rule is defined, give it a priority number
Forces a total order on the rules – is right 80% of the
time
Decide Rule 4 [80] is better than Rule 7 [70]
Decide Rule 6 [85] is better than Rule 5 [75]
Now have ordering between all of them – even if
wrong
Rule-based System: Good and Bad
Advantages
Corresponds to way people often think of knowledge
Very expressive
Modular knowledge
Easy to write and debug compared to decision trees
More concise than FSM
Disadvantages
Can be memory intensive
Can be computationally intensive
Sometimes difficult to debug
References
RETE:
Forgy, C. L. Rete: A fast algorithm for the many
pattern/many object pattern match problem. Artificial
Intelligence, 19(1) 1982, pp. 17-37
TREAT:
Miranker, D. TREAT: A new and efficient match
algorithm for AI production systems. Pitman/Morgan
Kaufmann, 1989