Agents that reason logically
Tuomas Sandholm
Carnegie Mellon University
Computer Science Department
Agents that reason logically
Logic:
- formal language in which knowledge can be expressed
- means of carrying out reasoning in such a language
Knowledge base (KB) consisting of sentences
- Background knowledge
- TELL’ed
function KB-AGENT(percept) returns an action
  static: KB, a knowledge base
          t, a counter, initially 0, indicating time
  TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
  action ← ASK(KB, MAKE-ACTION-QUERY(t))
  TELL(KB, MAKE-ACTION-SENTENCE(action, t))
  t ← t + 1
  return action
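A minimal Python sketch of this agent loop, assuming a toy KB (just a list of sentences) and a placeholder ask: the class and method names are illustrative stand-ins, not a real inference engine.

```python
# Sketch of the generic knowledge-based agent loop above.
# The KB is a plain list of sentences; ask() is a stub where a
# real agent would run logical inference.

class KBAgent:
    def __init__(self):
        self.kb = []   # knowledge base: a list of sentences
        self.t = 0     # time counter

    def tell(self, sentence):
        self.kb.append(sentence)

    def ask(self, query):
        # Placeholder: a real agent would derive an action here.
        return "forward"

    def agent_program(self, percept):
        self.tell(("percept", percept, self.t))   # TELL the percept
        action = self.ask(("action?", self.t))    # ASK for an action
        self.tell(("action", action, self.t))     # TELL the chosen action
        self.t += 1
        return action
```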
Syntax vs. semantics
[Diagram: two levels. Representation level: sentences; world level: facts; semantics connects each sentence to the fact it represents. "KB entails α" (KB ⊨ α): the sentence α follows from the KB, just as the corresponding fact follows in the world. "α is derived from KB by i" (KB ⊢ᵢ α): inference procedure i produces α from the KB by purely syntactic operations.]
An inference procedure that generates only entailed sentences is called sound (truth-preserving).
Proof = record of operation of a sound inference procedure.
Proof theory specifies the sound inference steps for a logic.
An inference procedure is complete if it can find a proof for any entailed sentence.
Interpretation example:
"The pope is in Denver" under the interpretation
  Pope = the microfilm
  Denver = the pumpkin on the porch
means that the microfilm is in the pumpkin on the porch.
A sentence is true under a particular interpretation if the state of affairs it represents is the case.
A sentence is valid (tautology, necessarily true) if it is true under all possible worlds, i.e. regardless of what it is supposed to mean and regardless of the state of affairs in the universe being described. E.g. A ∨ ¬A
A sentence is satisfiable if there is some interpretation of some world for which it is true. E.g. A ∧ B (satisfiable by setting A = True, B = True)
Unsatisfiable: e.g. A ∧ ¬A
Language              Ontological commitment            Epistemological commitment
                      (what exists in the world)        (what an agent believes about facts)
Propositional logic   Facts                             True/false/unknown
First-order logic     Facts, objects, relations         True/false/unknown
Temporal logic        Facts, objects, relations, times  True/false/unknown
Probability theory    Facts                             Degree of belief 0…1
Fuzzy logic           Degree of truth                   Degree of belief 0…1
Propositional Logic (PL): Syntax
Sentence  AtomicSentence | ComplexSentence
Logic constants
AtomicSentence  True | False
Propositional symbols
|P|Q|R|…
ComplexSentence  ( Sentence )
| Sentence Connective Sentence
| ¬Sentence
Connective   |  |  | 
Conjunction (and’ed together)
Disjunction (or’ed together)
Precedence: ¬    
E.g. ¬ P  Q  R  S
is equivalent to
((¬ P)  (Q  R))  S
Propositional Logic: Semantics
Truth table defines the semantics
Validity and inference
Truth tables can be used for inference. E.g.
((P ∨ H) ∧ ¬H) ⇒ P
If the sentence is true in every row, then the sentence is valid.
This can be used for machine inference by building a truth table for
Premises ⇒ Conclusions
and checking all rows.
Slow, so need more powerful inference rules…
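The truth-table method can be sketched directly: enumerate all 2^n truth assignments and check that the sentence holds in each. The functional encoding of sentences is my own choice, not from the slides.

```python
from itertools import product

def is_valid(sentence, symbols):
    """Check validity by enumerating all 2^n truth assignments.
    `sentence` maps an assignment dict {symbol: bool} to a bool."""
    return all(sentence(dict(zip(symbols, values)))
               for values in product([True, False], repeat=len(symbols)))

# ((P ∨ H) ∧ ¬H) ⇒ P, the example above; A ⇒ B is encoded as ¬A ∨ B.
valid = is_valid(lambda m: (not ((m["P"] or m["H"]) and not m["H"])) or m["P"],
                 ["P", "H"])
```

Exhaustive enumeration makes the exponential cost visible: every added symbol doubles the table.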
Inference rules in propositional logic
E.g. to prove that P follows from (P ∨ H) and ¬H, we require only one application of the resolution rule (from α ∨ β and ¬β ∨ γ, infer α ∨ γ) with α as P, β as H, and γ empty.
Proving soundness of inference rules for
propositional logic …
The truth-table demonstrating soundness of the resolution
inference rule for propositional logic.
An inference rule is sound if the conclusion is true in all
cases where the premises are true.
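That truth-table soundness check can itself be sketched by enumeration: verify that the resolution conclusion is true in every row where both premises are true (plain booleans stand in for α, β, γ here).

```python
from itertools import product

# Verify the resolution rule: from (a ∨ b) and (¬b ∨ c), infer (a ∨ c).
# The rule is sound iff the conclusion holds in every row where
# both premises hold.
sound = all((a or c)                             # conclusion
            for a, b, c in product([True, False], repeat=3)
            if (a or b) and ((not b) or c))      # premises hold
```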
Complexity of propositional inference
• Truth table method needs to check 2^n rows for any proof involving n propositional symbols
• NP-Complete [Cook 1971]
  3SAT: ∃x s.t. (x1 ∨ x5 ∨ x6) ∧ (x2 ∨ x5 ∨ x6) ∧ …
Most instances may be easy
Monotonicity: When we add new sentences to KB, all the
sentences entailed by the original KB are still entailed.
Propositional logic (and first-order logic) are monotonic.
Monotonicity allows local inference rules.
Probability theory is not monotonic.
Complexity of propositional inference:
a tractable special case
A class of sentences that allow polynomial time inference in
propositional logic:
Horn sentence:
P1  P2  …  Pn  Q
where Pi’s and Q are non-negated
Inference procedure: apply Modus Ponens (from α ⇒ β and α, infer β) whenever possible until no more inferences are possible.
Models (= dark regions in the Venn diagrams below)
= those parts of the world where the sentence is true, i.e. those assignments of {True, False} to the propositions.
A sentence α is entailed by a KB if the models of the KB are all models of α.
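This model-based definition of entailment can be checked directly, if exponentially: enumerate every assignment and confirm that each model of the KB is also a model of α. The functional encoding is my own.

```python
from itertools import product

def entails(kb, alpha, symbols):
    """KB entails alpha iff every model of KB is a model of alpha.
    `kb` and `alpha` map an assignment dict to a bool."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False   # found a model of KB that is not a model of alpha
    return True

# (P ∨ H) ∧ ¬H entails P:
result = entails(lambda m: (m["P"] or m["H"]) and not m["H"],
                 lambda m: m["P"], ["P", "H"])
```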
Another method for inference in
propositional logic:
Model finding
Postulate
¬(Premises ⇒ Conclusions)
and try to find a model. (If no model exists, Premises ⇒ Conclusions is valid.)
Applications of model finding
• Logic, theorem proving (e.g. Robbins algebra)
• Planning (e.g. SATPLAN)
• Boolean circuits
• Satisfiability checking
• Constraint satisfaction
• Vision interpretation [e.g. Reiter & Mackworth 89]
Model finding algorithms
Davis-Putnam procedure [1960]
E.g. for 3SAT: ∃p s.t. (p1 ∨ p3 ∨ p4) ∧ (p1 ∨ p2 ∨ p3) ∧ …
[Figure: binary search tree assigning p1, p2, p3, p4 the values T/F in turn, backtracking on failure; each clause is checked as variables get assigned.]
• Complete
• Backtrack when some clause becomes empty
• Unit propagation (for variable & value ordering): if some clause only has one literal left, assign that variable the value that satisfies the clause (never need to check the other branch)
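A compact sketch of the Davis-Putnam (DPLL) backtracking procedure with unit propagation. The clause encoding (positive/negative integers as literals) is my choice, not from the slides.

```python
def dpll(clauses, assignment=None):
    """Clauses: lists of int literals (p > 0 positive, -p negated).
    Returns a satisfying assignment dict {var: bool}, or None."""
    if assignment is None:
        assignment = {}
    # Simplify clauses under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                       # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None                    # empty clause: backtrack
        simplified.append(rest)
    if not simplified:
        return assignment                  # all clauses satisfied
    # Unit propagation: a one-literal clause forces its variable's value,
    # so the other branch never needs to be checked.
    for clause in simplified:
        if len(clause) == 1:
            lit = clause[0]
            return dpll(clauses, {**assignment, abs(lit): lit > 0})
    # Otherwise branch on the first unassigned variable.
    var = abs(simplified[0][0])
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

model = dpll([[1, 3, 4], [-1, 2], [-2, -3]])
```

Because branches are only cut when they provably cannot lead to a model, the procedure stays complete.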
A helpful observation for the
Davis-Putnam procedure
P1 ∧ P2 ∧ … ∧ Pn ⇒ Q        (Horn)
is equivalent to
¬(P1 ∧ P2 ∧ … ∧ Pn) ∨ Q     (Horn)
is equivalent to
¬P1 ∨ ¬P2 ∨ … ∨ ¬Pn ∨ Q     (Horn clause)
Thrm. If a propositional theory consists only of Horn clauses
(i.e., clauses that have at most one non-negated variable) and
unit propagation does not result in an explicit contradiction
(i.e., Pi and ¬Pi for some Pi), then the theory is satisfiable.
Proof. On the next page.
…so, the Davis-Putnam algorithm does not need to branch on
variables that occur only in Horn clauses
Proof of the thrm
Assume the theory is Horn, and that unit propagation has completed
(without contradiction). We can remove all the clauses that were satisfied
by the assignments that unit propagation made. From the unsatisfied
clauses, we remove the variables that were assigned values by unit
propagation. The remaining theory has the following two types of clauses
that contain unassigned variables only:
P1  P2  …  Pn  Q
and
P1  P2  …  Pn
Each remaining clause has at least two variables (otherwise unit
propagation would have applied to the clause). Therefore, each remaining
clause has at least one negated variable. Therefore, we can satisfy all
remaining clauses by assigning each remaining variable to False.
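The polynomial-time Horn inference procedure mentioned earlier (apply Modus Ponens until no new conclusions appear) can be sketched as follows; the rule encoding is my own.

```python
def forward_chain(rules, facts, query):
    """Horn-clause inference: each rule is (premises, conclusion),
    `facts` is a set of symbols known to be true. Applies Modus
    Ponens to a fixed point; runs in polynomial time."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)      # Modus Ponens fires
                changed = True
    return query in known

# P ∧ Q ⇒ R,  R ⇒ S,  with facts P, Q: does S follow?
derived = forward_chain([(["P", "Q"], "R"), (["R"], "S")], {"P", "Q"}, "S")
```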
Variable ordering heuristic for the Davis-Putnam procedure [Crawford & Auton AAAI-93]
Heuristic: Pick a non-negated variable that occurs in a non-Horn clause (more than 1 non-negated variable) with a
minimal number of non-negated variables.
Motivation: This is effectively a “most constrained first”
heuristic if we view each non-Horn clause as a “variable”
that has to be satisfied by setting one of its non-negated
variables to True. In that view, the branching factor is the
number of non-negated variables the clause contains.
Q: Why is branching constrained to non-negated variables?
A: We can ignore any negated variables in the non-Horn
clauses because
– whenever any one of the non-negated variables is set to True the
clause becomes redundant (satisfied), and
– whenever all but one of the non-negated variables is set to False
the clause becomes Horn.
“Order parameter” for 3SAT
[Mitchell, Selman, Levesque AAAI-92]
• α = #clauses / #variables
• This predicts
– satisfiability
– hardness of finding a model
Generality of the order parameter α
• The results seem quite general across model
finding algorithms
• Other constraint satisfaction problems have
order parameters as well
…but the complexity peak does
not occur under all ways of
generating the 3SAT instances
GSAT [Selman, Levesque, Mitchell AAAI-92]
(= a local search algorithm for model finding)
Incomplete (unless restarted many times)
[Plot: avg. total flips (400–2000) vs. max-climbs (100, 200), for 50 variables and 215 3SAT clauses.]
Greediness is not essential as long
as climbs and sideways moves are
preferred over downward moves.
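A minimal GSAT sketch: random restarts plus a greedy flip of whichever variable leaves the fewest unsatisfied clauses (which permits sideways and even downward moves on ties). The representation and parameter names are mine.

```python
import random

def gsat(clauses, n_vars, max_restarts=10, max_flips=100, seed=0):
    """GSAT: local search for a model. Incomplete: returning None
    does not prove unsatisfiability."""
    rng = random.Random(seed)

    def unsat(a):
        return sum(1 for c in clauses
                   if not any(a[abs(l)] == (l > 0) for l in c))

    for _ in range(max_restarts):
        a = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if unsat(a) == 0:
                return a                   # found a model
            # Greedy step: flip the variable giving the best score.
            def score(v):
                a[v] = not a[v]
                s = unsat(a)
                a[v] = not a[v]
                return s
            best = min(range(1, n_vars + 1), key=score)
            a[best] = not a[best]
    return None                            # incomplete: may miss a model

model = gsat([[1, 2], [-1, 2], [2, 3]], 3)
```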
Restarting
vs.
Escaping
BREAKOUT algorithm [Morris AAAI-93]
Initialize all variables Pi randomly
UNTIL current state is a solution
  IF current state is not a local minimum
  THEN make any local change that reduces the total cost (i.e. flip one Pi)
  ELSE increase the weights of all unsatisfied clauses by one
Incomplete, but very efficient on large (easy) satisfiable problems.
Reason for incompleteness: the cost increase of the current local
optimum spills to other solutions because they share unsatisfied
clauses.
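The BREAKOUT pseudocode above can be sketched with explicit clause weights; the clause encoding and parameter names are my own assumptions.

```python
import random

def breakout(clauses, n_vars, max_steps=1000, seed=0):
    """BREAKOUT: hill-climb on the weighted count of unsatisfied
    clauses; at a local minimum, raise the weight of every
    unsatisfied clause by 1 to escape. Incomplete."""
    rng = random.Random(seed)
    weights = [1] * len(clauses)
    a = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}

    def cost(assign):
        return sum(w for w, c in zip(weights, clauses)
                   if not any(assign[abs(l)] == (l > 0) for l in c))

    for _ in range(max_steps):
        current = cost(a)
        if current == 0:
            return a                       # solution found
        # Look for a single flip that reduces the weighted cost.
        improving = None
        for v in range(1, n_vars + 1):
            a[v] = not a[v]
            if cost(a) < current:
                improving = v
            a[v] = not a[v]
            if improving:
                break
        if improving:
            a[improving] = not a[improving]
        else:
            # Local minimum: increase weights of unsatisfied clauses.
            for i, c in enumerate(clauses):
                if not any(a[abs(l)] == (l > 0) for l in c):
                    weights[i] += 1
    return None                            # incomplete

model = breakout([[1, 2], [-1, 2], [2, 3]], 3)
```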
Summary of the algorithms we covered
for inference in propositional logic
• Truth table method
• Inference rules
• Model finding algorithms
– Davis-Putnam (Systematic backtracking)
• Early backtracking when a clause is empty
• Unit propagation
• Variable (& value?) ordering heuristics
– GSAT
– BREAKOUT
Propositional logic is too weak a
representational language
- Too many propositions to handle, and the truth table has 2^n rows. E.g. in the wumpus world, the simple rule "don't go forward if the wumpus is in front of you" requires 64 rules (= 16 squares × 4 orientations for the agent).
- Hard to deal with change. Propositions might be true at some times but not at others. Need a proposition P_i^t for each time step t, because one should not always forget what held in the past (e.g. where the agent came from)
  - don't know the # of time steps
  - need time-dependent versions of the rules
- Hard to identify "individuals", e.g. Mary, 3
- Cannot directly talk about properties of individuals or relations between individuals, e.g. Tall(bill)
- Generalizations and patterns cannot easily be represented, e.g. "all triangles have 3 sides."