
Games with Chance;
Other Search Algorithms
CPSC 315 – Programming Studio
Spring 2009
Project 2, Lecture 3
Adapted from slides of
Yoonsuck Choe
Game Playing with Chance

Minimax trees work well when the game is deterministic, but many games have an element of chance.
- Include chance nodes in the tree
- Try to maximize/minimize the expected value
- Or, take a pessimistic/optimistic approach
Tree with Chance Nodes

[Diagram: tree with alternating layers of Max, Chance, Min, and Chance nodes]

For each die roll (blue lines), evaluate each possible move (red lines).
Expected Value

For a random variable x, the expected value is:

E(x) = Σ x · Pr(x)

where Pr(x) is the probability of x occurring.

Example: rolling a pair of dice:

x      | 2     3     4     5     6     7     8     9     10    11    12
Pr(x)  | 1/36  1/18  1/12  1/9   5/36  1/6   5/36  1/9   1/12  1/18  1/36

E(x) = 2·(1/36) + 3·(1/18) + ... + 12·(1/36) = 7
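The expected value for a pair of dice can be checked with a short enumeration. A minimal sketch in Python (the `totals` and `expected` names are just for illustration):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of rolling a pair of dice.
totals = [a + b for a, b in product(range(1, 7), repeat=2)]

# E(x) = sum of x * Pr(x); each individual outcome has Pr = 1/36.
expected = sum(Fraction(t, 36) for t in totals)
print(expected)  # 7
```

Using `Fraction` keeps the probabilities exact, so the result is the integer 7 rather than a floating-point approximation.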
Expectiminimax: Evaluating the Tree

[Diagram: expectiminimax tree with alternating Max, Chance, and Min layers]
Choosing a maximum (same idea for a minimum):
- Evaluate all chance nodes from a move
- Find the expected value for that move
- Choose the largest expected value
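These steps can be sketched as a recursive evaluator. A minimal illustration, assuming a tree encoded as nested tuples (the encoding is an assumption for this sketch, not part of the slides):

```python
def expectiminimax(node):
    """Evaluate a game tree containing max, min, and chance nodes.

    A node is ('leaf', value), ('max', children), ('min', children),
    or ('chance', [(probability, child), ...]).
    """
    kind, body = node
    if kind == 'leaf':
        return body
    if kind == 'max':
        return max(expectiminimax(c) for c in body)
    if kind == 'min':
        return min(expectiminimax(c) for c in body)
    # Chance node: expected value over the weighted outcomes.
    return sum(p * expectiminimax(c) for p, c in body)

# Max chooses between two moves; each leads to a 50/50 chance node.
tree = ('max', [
    ('chance', [(0.5, ('leaf', 2)), (0.5, ('leaf', 4))]),   # EV = 3
    ('chance', [(0.5, ('leaf', 0)), (0.5, ('leaf', 10))]),  # EV = 5
])
print(expectiminimax(tree))  # 5.0
```

Note that Max picks the second move for its higher expected value even though it risks the worst single outcome (0), which motivates the alternative strategies on the next slide.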
More on Chance

Rather than the expected value, other approaches can be used:
- Maximize the worst-case value (avoid catastrophe)
- Give high weight if a very good position is possible (a "knockout" move)
- Form a hybrid approach, weighting all of these options

Note: time complexity increases to O(b^m · n^m), where n is the number of possible chance outcomes and m is the depth.
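The alternatives above only change how a chance node's outcomes are scored. A sketch of one way to weight them; the `style` and `risk_weight` parameters are illustrative assumptions, not a standard API:

```python
def evaluate_chance(outcomes, style="expected", risk_weight=0.5):
    """Score a chance node's [(probability, value), ...] outcomes.

    'expected'    - standard expectiminimax value
    'pessimistic' - score by the worst case (avoid catastrophe)
    'hybrid'      - blend expected value with the worst case
    """
    expected = sum(p * v for p, v in outcomes)
    worst = min(v for _, v in outcomes)
    if style == "expected":
        return expected
    if style == "pessimistic":
        return worst
    # Hybrid: interpolate between expected value and worst case.
    return (1 - risk_weight) * expected + risk_weight * worst

rolls = [(0.5, 6.0), (0.5, -10.0)]
print(evaluate_chance(rolls))                       # -2.0
print(evaluate_chance(rolls, style="pessimistic"))  # -10.0
```

A "knockout" variant would analogously mix in `max(v for _, v in outcomes)` with a high weight.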
More on Game Playing

Rigorous approaches to imperfect-information games are still being studied.
- Assume random moves by the opponent
- Assume some sort of model based on a perfect-information model
- Indications are that modeling the behavior of the opponent is often of more value than evaluating the board position
AI in Larger-Scale and Modern Computer Games

The idealized situations described often don't extend to extremely complex and more continuous games.
- Even just listing possible moves can be difficult
- Consider writing the AI controller for a non-player opponent in a modern strategy game

The larger situation can be broken down into subproblems:
- Hierarchical approach
- Use of state diagrams
- Some subproblems are more easily solved (e.g., path planning)
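A hierarchical controller is often built from state diagrams like the following minimal finite-state machine. The guard states and events here are hypothetical, not from any particular game:

```python
# Transition table for a hypothetical NPC guard:
# (current state, event) -> next state.
TRANSITIONS = {
    ("patrol", "see_player"):  "chase",
    ("chase",  "lost_player"): "patrol",
    ("chase",  "low_health"):  "flee",
    ("flee",   "healed"):      "patrol",
}

def step(state, event):
    # Stay in the current state if no transition matches the event.
    return TRANSITIONS.get((state, event), state)

state = "patrol"
for event in ["see_player", "low_health", "healed"]:
    state = step(state, event)
print(state)  # patrol
```

Each state can itself delegate to a sub-controller (e.g., "chase" invoking a path planner), which is the hierarchical decomposition the slide describes.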
AI in Larger-Scale and Modern Computer Games

Use of simulation as opposed to a deterministic solution:
- Helps to explore a large range of states
- Can create complex behavior wrapped up in autonomous agents

Fun vs. competent:
- The goal of the game is not necessarily for the computer to win
- Often a collection of ad-hoc rules
- "Cheating" allowed (e.g., Civilization)
General State Diagrams

- List of possible states one can reach in the game (nodes)
- Ways of moving from one state to another (edges)
  - Can be abstracted, general conditions
  - Not necessarily a set move; could be a general approach
- Forms a directed (and often cyclic) graph
  - Our minimax tree is a state diagram, but we hide any cycles
  - Sometimes we want to avoid repeated states
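Avoiding repeated states amounts to keeping a visited set while traversing the graph. A minimal BFS sketch over an adjacency-list state diagram (the state labels are illustrative):

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search over a state graph given as adjacency lists.

    The visited set prevents re-expanding repeated states, which is
    what keeps the search finite on a cyclic state diagram.
    """
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # Goal unreachable from start.

# A small cyclic state diagram (note the A <-> B cycle).
graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["D"], "D": []}
print(bfs_path(graph, "A", "D"))  # ['A', 'B', 'D']
```

Without the visited set, the A ↔ B cycle would make the frontier grow forever.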
State Diagram

[Diagram: states A through K connected by directed edges]
Exploring the State Diagram

- Explore for solutions using BFS or DFS
- Depth-limited search: DFS, but only to a limited depth in the tree
- Iterative deepening search: DFS one level deep, then two levels (repeating the first level), then three levels, etc.
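A sketch of depth-limited and iterative deepening search, assuming an acyclic adjacency-list graph (with cycles, the depth-limited step would also need a visited set):

```python
def depth_limited(graph, node, goal, limit):
    """DFS that gives up below a fixed depth; returns a path or None."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for nxt in graph.get(node, []):
        found = depth_limited(graph, nxt, goal, limit - 1)
        if found is not None:
            return [node] + found
    return None

def iterative_deepening(graph, start, goal, max_depth=20):
    """Repeat depth-limited DFS with limits 0, 1, 2, ...

    Shallow levels are re-explored each round, but the total cost is
    dominated by the deepest level, so the overhead is small.
    """
    for limit in range(max_depth + 1):
        path = depth_limited(graph, start, goal, limit)
        if path is not None:
            return path
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["G"], "D": []}
print(iterative_deepening(graph, "A", "G"))  # ['A', 'C', 'G']
```

Like BFS, this finds a shallowest goal, but with DFS's small memory footprint.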
- If there is a specific goal state, can use bidirectional search
  - Search forward from the start and backward from the goal; try to meet in the middle
  - Think of maze puzzles
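A sketch of bidirectional search, assuming the adjacency map can be followed in both directions (i.e., an undirected, maze-like graph). For brevity it only reports whether the two frontiers meet, omitting path reconstruction:

```python
def bidirectional_search(graph, start, goal):
    """Alternate BFS layers from start and from goal until they meet."""
    if start == goal:
        return True
    seen_f, seen_b = {start}, {goal}
    frontier_f, frontier_b = {start}, {goal}
    while frontier_f and frontier_b:
        # Expand the smaller frontier by one BFS layer.
        if len(frontier_f) <= len(frontier_b):
            frontier, seen, other = frontier_f, seen_f, seen_b
        else:
            frontier, seen, other = frontier_b, seen_b, seen_f
        layer = set()
        for node in frontier:
            for nbr in graph.get(node, []):
                if nbr in other:
                    return True  # The two searches met in the middle.
                if nbr not in seen:
                    seen.add(nbr)
                    layer.add(nbr)
        if frontier is frontier_f:
            frontier_f = layer
        else:
            frontier_b = layer
    return False

# Undirected corridor A - B - C - D.
maze = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(bidirectional_search(maze, "A", "D"))  # True
```

Each side explores only about half the depth, which is the source of the speedup: two searches of depth d/2 visit far fewer nodes than one of depth d.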
More Informed Search

- Traversing links: goal states are not always equal
- Can have a heuristic function h(x): how close the state is to the "goal" state
  - Similar to the board evaluation/utility function in game playing
- Can use this to order other searches
- Can use this to create a greedy approach
A* Algorithm

- Avoid expanding paths that are already expensive
- f(n) = g(n) + h(n)
  - g(n) = current path cost from start to node n
  - h(n) = estimate of remaining distance to goal
    - h(n) should never overestimate the actual cost of the best solution through that node
- Then apply a best-first search
  - The value of f will only increase as paths are evaluated (when h is also consistent)
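A compact A* sketch using a priority queue keyed on f(n) = g(n) + h(n); the graph and heuristic values below are illustrative:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search. graph maps node -> [(neighbor, edge_cost), ...];
    h maps node -> admissible estimate of remaining cost to goal."""
    frontier = [(h[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if g > best_g.get(node, float("inf")):
            continue  # Stale queue entry; a cheaper path was found.
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None  # Goal unreachable.

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 2)], "D": []}
h = {"A": 3, "B": 2, "C": 2, "D": 0}
print(a_star(graph, h, "A", "D"))  # (4, ['A', 'B', 'C', 'D'])
```

Because h never overestimates, the first time the goal is popped with a consistent h, its g value is the optimal path cost; here A* finds the cost-4 route A→B→C→D instead of the cost-6 alternatives.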