CS 415 – A.I.
Slide Set 3
Representation and Search
• Representational System – function is to capture the
essential features of a problem domain and make the
information accessible
– Abstraction – being able to efficiently store the features of
the problem domain
• Note: the features will undoubtedly change
– Balance trade-offs between efficiency and expressiveness
Representation in Mobile Robotics
• Kinematics – basic study of how mechanisms
move
– Basic goal: given all the angles and movement,
what is your point in space at this time
– 2 Frames of Reference
• Global Frame of Reference
– Robot gets through layers of representations (maps, etc)
• Local Frame of Reference
– Don't know what the world looks like
– Remember how far I've traveled
• Given global frame of reference
– If the robot moves, how do we keep up with where
we are in space?
– Kinematic Equations
• A system of equations that determines our x,y position
and our rotation (angle Θ) after k control steps
• See second page of ARL paper
– Synchro Drive Robots
– Also exist for Differential Drive Robots
• Store as a matrix system
– Perform matrix operations to transform and solve
• Regardless, everyone should work in the same
frame of reference
– Homogeneous Transform
• Do matrix operations to transform from one frame of
reference to another
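A minimal Python sketch of these two ideas, assuming a differential-drive robot with per-wheel encoder distances; the function names, example numbers, and the exact update form are illustrative, not the equations from the ARL paper:

```python
import numpy as np

def update_pose(x, y, theta, d_left, d_right, wheelbase):
    """One dead-reckoning step for a differential-drive robot.

    d_left / d_right are the distances each wheel traveled during the
    control step (e.g., from encoder ticks); wheelbase is the distance
    between the wheels. Assumes each step is short enough that the
    motion is approximately an arc.
    """
    d = (d_left + d_right) / 2.0               # distance traveled by the center
    dtheta = (d_right - d_left) / wheelbase    # change in heading
    x += d * np.cos(theta + dtheta / 2.0)
    y += d * np.sin(theta + dtheta / 2.0)
    theta += dtheta
    return x, y, theta

def local_to_global(x, y, theta):
    """Homogeneous transform taking points in the robot's local frame
    to the global frame, given the robot's global pose (x, y, theta)."""
    return np.array([[np.cos(theta), -np.sin(theta), x],
                     [np.sin(theta),  np.cos(theta), y],
                     [0.0,            0.0,           1.0]])

# Example: where is a point 1 m in front of the robot, in the global frame?
x, y, theta = update_pose(0.0, 0.0, 0.0, 0.95, 1.05, 0.4)
p_local = np.array([1.0, 0.0, 1.0])            # homogeneous coordinates
p_global = local_to_global(x, y, theta) @ p_local
```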
How far has the robot moved?
• Apply power, robot moves, right?
– Power does not relate well to speed
• So, other options:
– Check the particular motor velocity (left/right)
– Some visual cue for how fast you're going
– Encode inside motor
• How many 'ticks' has the motor made as it rotated
• Use PID (proportional-integral-derivative) control
– The integral and derivative terms smooth the error and stabilize the result
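A minimal PID sketch, with made-up gains and signal names, showing how the integral and derivative terms modify the raw proportional response when holding a motor at a target encoder rate:

```python
class PID:
    """Proportional-integral-derivative controller (illustrative gains)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt                   # accumulates past error
        derivative = (error - self.prev_error) / dt   # reacts to the error trend
        self.prev_error = error
        # P pushes toward the setpoint, I removes steady-state offset,
        # D damps overshoot and oscillation.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: drive a wheel toward 200 encoder ticks/sec, sampled every 50 ms.
pid = PID(kp=0.8, ki=0.2, kd=0.05)
power = pid.step(setpoint=200.0, measured=150.0, dt=0.05)
```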
Mapping
• How do we abstract a map?
• Efficiency vs Expressiveness
– What are the tradeoffs
• Dealing with Errors
– Examples:
• Synchro Drive
• Differential Drive
Accurate Relative Localization Using
Odometry
• Drawing Maps
– Depends on relative localization
• Can't escape the use of odometry
– Has error built in
• Overcoming Error
1. Odometry error modeling
2. Error parameter estimation
3. Covariance matrix estimation

1 and 2 – systematic errors
3 – non-systematic errors
Systematic Errors
• Define these for the robot based on the
appropriate error model for the drive type
– Differential Drive → given in the literature
• Borenstein paper
– Synchro Drive → given in this paper
• Major source: wheel misalignment
– Major source of distortion in theta (the heading angle): misaligned wheels both drag AND
rotate the robot
– Provable by geometric analysis of the kinematic equations
Non-systematic Errors
• PC (POSTECH CMU)-method
– 1st get the error model
– 2nd use PC-method to generate error parameters
and covariance matrix
• Based on sensor-based navigation through the
Generalized Voronoi Graph (GVG)
– Voronoi extensively covered in literature
– Creates a well-understood path based on obstacles
– Robot drives it forward (FOP) and backward (BOP)
• 2 different odometry paths, same real-world path
• Give an initial CFOP and CBOP based on the error
model and an initial guess of the error parameters
• Then, find the error parameters that minimize
error between CFOP and CBOP
– Steepest descent method
• Now, build an error covariance matrix based on
3 assumptions and worst-case analysis
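A hedged sketch of the parameter-fitting step only (step 2): given some correction model `correct(path, params)` standing in for the drive type's error model, pick the error parameters that minimize the squared mismatch between the corrected forward and backward paths, using finite-difference steepest descent. The names and defaults are illustrative; the paper's actual PC-method and its covariance construction are not reproduced here.

```python
import numpy as np

def steepest_descent(correct, params, raw_fop, raw_bop,
                     lr=1e-3, eps=1e-6, iters=500):
    """Estimate error parameters by steepest descent.

    `correct(raw_path, params)` is a caller-supplied stand-in for the
    drive type's error model: it returns the path corrected with the
    candidate parameters. The objective is the squared mismatch between
    the corrected forward path (CFOP) and corrected backward path (CBOP),
    which should describe the same real-world path.
    """
    def disagreement(p):
        return float(np.sum((correct(raw_fop, p) - correct(raw_bop, p)) ** 2))

    params = np.asarray(params, dtype=float)
    for _ in range(iters):
        base = disagreement(params)
        grad = np.zeros_like(params)
        for i in range(len(params)):
            bumped = params.copy()
            bumped[i] += eps
            grad[i] = (disagreement(bumped) - base) / eps   # finite difference
        params = params - lr * grad                         # step downhill
    return params
```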
A little error, uncorrected, tends to
flourish
• See Fig 8
– Note: one possible approach, reset the odometry
before the error gets too bad
• See Fig 9
• See Fig 12
• See Fig 13
Representation/Search Considerations
• Real-time Systems
– Is it schedulable?
• Has a lot to do with efficiency/expressiveness
– How are we storing things (fast to slow)
• Registers
• Cache
• Memory
• I/O
– What Language are we using (fast to slow)
• Assembly
• C
• C++
• Java
• Python
Other Forms of Representation
• Example: A robot might be stacking elements from a
table on top of one another
• Might give the following predicates (state facts about
our domain):
clear(c)
clear(a)
ontable(a)
ontable(b)
on(c,b)
cube(b)
cube(a)
pyramid(c)
• Might also define a set of rules
which relate to these predicates
For all X, if there does not exist
any Y where on(Y,X), then this
implies clear(X)
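In standard first-order notation this rule can be written as (one common rendering, not necessarily the slide's exact formula):

```latex
\forall X \,\bigl(\neg\,\exists Y\; on(Y,X) \;\Rightarrow\; clear(X)\bigr)
```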
Using Predicate Calculus
• Predicates can also be more advanced
hassize(bluebird,small)
hascovering(bird,feathers)
hascolor(bluebird,blue)
hasproperty(bird,flies)
isa(bluebird,bird)
isa(bird,vertebrate)
• Predicates are not functions in the sense of higher-level
languages, nor should you think of them in terms of
programming
– There is no set of predicate functions
– Any predicate can be defined
• They are useful strictly for representing knowledge, in conjunction with
rules
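A toy sketch (not a real predicate-calculus engine) of how such predicates plus a rule support inference: store the facts as tuples and let properties flow along isa links.

```python
# Toy knowledge base: each fact is a (predicate, argument, value) tuple.
facts = {
    ("hassize", "bluebird", "small"),
    ("hascovering", "bird", "feathers"),
    ("hascolor", "bluebird", "blue"),
    ("hasproperty", "bird", "flies"),
    ("isa", "bluebird", "bird"),
    ("isa", "bird", "vertebrate"),
}

def holds(pred, subj, value):
    """True if (pred subj value) is stated directly or inherited via isa links."""
    if (pred, subj, value) in facts:
        return True
    # Rule: if isa(X, Y) and pred(Y, V) then pred(X, V).
    for (p, x, parent) in facts:
        if p == "isa" and x == subj and holds(pred, parent, value):
            return True
    return False

print(holds("hasproperty", "bluebird", "flies"))     # True, inherited from bird
print(holds("hascovering", "bluebird", "feathers"))  # True, inherited from bird
```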
Search
• What are the possible moves?
– The computer knows because of the knowledge
representation
• All moves are either stored or can be inferred from the
stored knowledge and set of rules.
• What is the best move?
– This is the domain of search
– Example: Tic Tac Toe
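A compact sketch of what "find the best move by search" looks like for Tic Tac Toe: plain minimax over the full state space. The board encoding and names are illustrative.

```python
def winner(board):
    """board is a 9-char string of 'X', 'O', or ' '; returns 'X', 'O', or None."""
    lines = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for the player to move; +1 = X wins, -1 = O wins, 0 = draw."""
    w = winner(board)
    if w is not None:
        return (1 if w == "X" else -1), None
    moves = [i for i, s in enumerate(board) if s == " "]
    if not moves:
        return 0, None                               # board full: draw
    best = None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, "O" if player == "X" else "X")
        if (best is None
                or (player == "X" and score > best[0])
                or (player == "O" and score < best[0])):
            best = (score, m)
    return best

# X to move on an empty board; exhaustive, so this explores the whole game tree.
score, move = minimax(" " * 9, "X")
print(score, move)   # 0 (perfect play is a draw) and an opening move
```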
Limitations of State-Space Search
• Not sufficient to automate intelligent behavior
– How big is the state-space for chess?
• 10^120 different board configurations
– Larger than # molecules in the universe
– Larger than the number of seconds since the “big bang”
– How big is the state-space for human language?
• Untold possibilities
• State-Space Representation and Search is an
important tool only
Exhaustive Search vs. Heuristic Search
• Exhaustive Search
– Brute force attempting all possible combinations
till an optimized solution is found
• Heuristic Search
– Humans don’t use exhaustive search
– Instead, we use rules of thumb based on what
seems most “promising”
– Heuristic – a strategy for selectively searching a
state space ---- Examples?
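For contrast with exhaustive search, a small illustrative sketch of heuristic search: greedy best-first on a grid, always expanding whichever cell looks most "promising" by Manhattan distance to the goal. The grid and names are made up, and the result is not guaranteed optimal.

```python
import heapq

def greedy_best_first(grid, start, goal):
    """Greedy best-first search on a 2D grid of 0 (open) / 1 (blocked) cells.

    The heuristic is Manhattan distance to the goal; cells are expanded in
    order of how promising they look rather than exhaustively.
    """
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), start, [start])]
    visited = {start}
    while frontier:
        _, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        r, c = cell
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in visited):
                visited.add((nr, nc))
                heapq.heappush(frontier, (h((nr, nc)), (nr, nc), path + [(nr, nc)]))
    return None  # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(greedy_best_first(grid, (0, 0), (2, 3)))
```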
Autonomous Robotics
• Key: operating in an unknown environment
– Exploration: the act of moving through an unknown
environment while building a map that can be used
for subsequent navigation
• The world is not made of right angles
– Kinds of space: Open, Closed, Unknown
– Frontiers: regions on the boundary between open
and unknown space
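A toy sketch of frontier detection under these definitions: in an occupancy grid of open / closed / unknown cells, a frontier cell is an open cell adjacent to unknown space. Cell values and names are illustrative.

```python
OPEN, CLOSED, UNKNOWN = 0, 1, -1

def find_frontiers(grid):
    """Return the open cells that border unknown space in a 2D occupancy grid."""
    frontiers = []
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != OPEN:
                continue
            neighbors = ((r+1, c), (r-1, c), (r, c+1), (r, c-1))
            if any(0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == UNKNOWN
                   for nr, nc in neighbors):
                frontiers.append((r, c))
    return frontiers

grid = [[OPEN, OPEN,   UNKNOWN],
        [OPEN, CLOSED, UNKNOWN],
        [OPEN, OPEN,   OPEN   ]]
print(find_frontiers(grid))   # the open cells touching unknown space
```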
The Many Faces of Control
• Deliberative Control
• Reactive Control
• Hybrid Control
• Behavior-Based Control
  – Emergent Behavior
• Note: some info taken from The Robotics Primer by Maja J. Mataric.
  – Highly recommend this book
Deliberative Control
• Think Hard, Act Later
– Throwback to the early days of AI
• Example: Chess
• Useful When:
– There's time to do it
– Without strategy, things go bad
• Planning
– i.e. - Programming Assignment 1
• But, done automatically
– Search
• (DFS, BFS, etc), Goal-Oriented, Forward-Oriented
Deliberative-planning based
architecture
• 3 Steps (done in order)
– Sensing
– Planning
– Acting (executing the plan)
• SPA Architectures
• The Good
– Can expect to find many (all possible?) paths to the goal
– Can cherry-pick the one that's best
• Optimization
• The Bad
– All the time it takes isn't always necessary
– Assumes you have stored knowledge
• The Ugly
– Not possible except in relatively small search spaces
• Almost never possible in real-time
Additional Drawbacks
• Sensor data is coming all the time
• Planning for all possible actions is memory
intensive
– Only need one action really
• World model must always be accurate and
updated
– The real world is a dynamic place
– Can't make it hold still while I act
Reactive Control
• Don't think, react
– Tight link between sensors and actuators
– Doesn't have a world model
– A set of rules is executed based on sensor states
• Mutually-exclusive Conditions
– One sensor state, one action
– Keeps control system simple
• But, how many sensor states are there? Many.
• Giant lookup table, slow response
• Rules should be generated at Design-Time
– Designer thinks, robot does not
– Designer identifies important sensor states /
situations
• Actions can get stuck in loop
– Use a little randomness
– Keep a bit of history
• We want our robot to do multiple things (multitasking)
– Action Selection
• Command Arbitration (choose a command)
• Command Fusion (combine the commands)
– Example
• Avoid Object
• Follow-wall
• Subsumption Architecture
– More on this later
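A toy sketch of the reactive idea (not the Subsumption Architecture itself): a fixed set of mutually exclusive sensor-state → action rules chosen at design time, with priority-order command arbitration. Thresholds and action names are made up.

```python
def reactive_step(front_dist, right_dist):
    """Map a sensor state directly to one action: no world model, no planning.

    Rules are checked in priority order (command arbitration): avoiding a
    collision wins over following the wall, which wins over wandering.
    """
    if front_dist < 0.3:                     # obstacle dead ahead
        return "turn_left"
    if right_dist < 0.5:                     # wall on the right: follow it
        return "drive_forward"
    return "veer_right"                      # otherwise wander toward a wall

# Example sensor readings (meters) at two control steps.
print(reactive_step(front_dist=0.2, right_dist=0.8))   # "turn_left"
print(reactive_step(front_dist=1.0, right_dist=0.4))   # "drive_forward"
```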
Hybrid Control
• Reactive – fast but inflexible
• Deliberative – slow but smart
• Hybrid – best of both worlds
– 3 components
• A reactive layer (on bottom)
• A planner (on top)
• A layer that links the above two together
– (in the middle)
– 3-layer architecture
Middle Layer
• Has to:
– Compensate for shortcomings of reactive and
deliberative layers
– Reconcile different time-scales
– Deal with different representations
– Reconcile any contradictory commands
• Gopher bots (hospital deliveries): The What-ifs Problem
– Need to get to room fast, but no plan
– Taking optimal route, suddenly blocked
• By priority personnel, because of outdated map
– Always having to go to same room
• Dealing with changes
– This is where the middle layer comes in
– No one “correct” answer
• Dynamic Replanning
• Off-line Planning
• On-line Planning
• Building in domain knowledge
• The middle-layer is hard
– Best solutions are case-specific (so far)
Behavior-based Control
• What we know
– Reactive is inflexible: no representation or learning
– Deliberative systems are slow
– Hybrid systems are hard and complex
– Biology (our model) has evolved from simple and
consistent components
• BBC → implemented as collections of behaviors
What are behaviors
• Many answers to this, but some commonalities
– Achieve/maintain particular goals
– Take time, are not instantaneous
• More complex than simple actions
– Take input from sensors and other behaviors
– Send output to effectors and other behaviors
• Behaviors can work at multiple levels of
abstraction (levels of specificity)
How they are used
• Executed in parallel, controller can respond immediately when
needed
• Networks of behaviors store state and construct world models
• Designed to work on the same time-scale
• Internal behavior structure is not necessarily one-to-one with
external behavior
– Interesting behavior is the result of complex interaction
between internal behavior structures
– Example: Robot flocking
– Emergent Behavior
• Key challenge: distributing knowledge over the behavior
structure
Example: Mapping
• Build a plant-watering robot that also builds a map
of plants as it goes
– Distribute map data over behaviors
– Link behaviors that are adjacent in the environment
• Examples:
– Toto: Behavior-based robot for mapping and
navigation at MIT