
“Belief Revision” and Truth-Finding
Kevin T. Kelly
Department of Philosophy
Carnegie Mellon University
[email protected]
Further Reading
• (with O. Schulte and V. Hendricks) “Reliable Belief Revision,” in Logic and Scientific Methods, Dordrecht: Kluwer, 1997.
• “The Learning Power of Iterated Belief Revision,” in Proceedings of the Seventh TARK Conference, 1998.
• “Iterated Belief Revision, Reliability, and Inductive Amnesia,” Erkenntnis 50, 1998.
The Idea
• Belief revision theory ....... “rational” belief change
• Learning theory .............. reliable belief change
• Conflict?
Truth
Part I
Iterated Belief Revision
Bayesian (Vanilla) Updating
• Propositional epistemic state B
• New belief B’ is the intersection of B with the new evidence E
• Perfect memory
• No inductive leaps
“Epistemic Hell” (a.k.a. Nirvana)
• Surprise! When the new evidence E is inconsistent with B, the intersection is empty and vanilla updating lands in epistemic hell
• Scientific revolutions
• Suppositional reasoning
• Conditional pragmatics
• Decision theory
• Game theory
• Databases
Ordinal Epistemic States
Spohn 88
• Ordinal-valued degrees of “implausibility”: levels 0, 1, 2, …, ω, ω+1, … of the epistemic state S
• Belief state b(S) is the bottom (level-0) layer of S (sketch below)
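A minimal sketch of this picture, assuming natural-number ranks as a finite stand-in for the ordinal levels; the dictionary encoding and the name belief are mine, not the slides'.

```python
# A world is any hashable object; an epistemic state S maps each world to a
# natural-number implausibility rank (0 = most plausible).

def belief(S):
    """b(S): the set of worlds at the bottom (lowest-rank) level of S."""
    low = min(S.values())
    return {w for w, r in S.items() if r == low}

S = {"w0": 0, "w1": 1, "w2": 2}   # a three-world epistemic state
print(belief(S))                  # {'w0'}
```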
Iterated Belief Revision
• Epistemic state trajectory: S0, S1, S2, S3, …
• Input propositions: E0, E1, E2, …
• The revision operator * carries each state to the next: Sn+1 = Sn * En
• Belief state trajectory: b(S0), b(S1), b(S2), b(S3), … (sketch below)
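A minimal sketch of the iteration over such states; star and belief are placeholders for any revision operator and belief map (my naming, an assumption rather than the slides' notation).

```python
def trajectories(S0, inputs, star, belief):
    """Return the epistemic state trajectory and the belief state trajectory."""
    states, beliefs = [S0], [belief(S0)]
    S = S0
    for E in inputs:
        S = star(S, E)            # S_{n+1} = S_n * E_n
        states.append(S)
        beliefs.append(belief(S))
    return states, beliefs
```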
Generalized Conditioning *C
Spohn 88
• Condition the entire epistemic state: discard the non-E worlds and slide the surviving E-worlds rigidly to the bottom, preserving their order; the result is S *C E with new belief state B’ (sketch below)
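A minimal sketch of *C on the finite ranking representation above; the function name star_C and the encoding are assumptions, not the slides' formulation.

```python
def star_C(S, E):
    """Generalized conditioning: keep only E-worlds, slid rigidly to the bottom."""
    survivors = {w: r for w, r in S.items() if w in E}
    if not survivors:
        return {}                          # epistemic hell: nothing survives
    low = min(survivors.values())
    return {w: r - low for w, r in survivors.items()}
```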
Lexicographic Updating *L
Spohn 88, Nayak 94
• Lift refuted possibilities above non-refuted possibilities, preserving the order within each group; the result is S *L E with new belief state B’ (sketch below)
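A minimal sketch of *L under the same assumed representation.

```python
def star_L(S, E):
    """Lexicographic updating: all E-worlds below all non-E worlds, order kept."""
    e_grp = {w: r for w, r in S.items() if w in E}
    x_grp = {w: r for w, r in S.items() if w not in E}
    new, top = {}, 0
    if e_grp:
        low = min(e_grp.values())
        new = {w: r - low for w, r in e_grp.items()}
        top = max(new.values()) + 1
    if x_grp:
        low = min(x_grp.values())
        new.update({w: r - low + top for w, r in x_grp.items()})
    return new
```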
Minimal or “Natural” Updating *M
Spohn 88, Boutilier 93
• Drop the lowest possibilities consistent with the data to the bottom and raise everything else up one notch; the result is S *M E (sketch below)
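A minimal sketch of *M under the same assumed representation.

```python
def star_M(S, E):
    """Minimal updating: only the lowest E-worlds drop; everything else rises one notch."""
    e_ranks = [r for w, r in S.items() if w in E]
    if not e_ranks:
        return dict(S)                     # no world is consistent with the datum
    best = min(e_ranks)
    return {w: 0 if (w in E and r == best) else r + 1 for w, r in S.items()}
```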
The Flush-to-a Method *F,a
Goldszmidt and Pearl 94
• Send non-E worlds to the fixed level a (the “boost parameter”) and drop E-worlds rigidly to the bottom; the result is S *F,a E (sketch below)
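A minimal sketch of *F,a under the same assumed representation; a is the boost parameter.

```python
def star_F(S, E, a):
    """Flush-to-a: E-worlds drop rigidly to the bottom; all non-E worlds go to level a."""
    e_grp = {w: r for w, r in S.items() if w in E}
    new = {}
    if e_grp:
        low = min(e_grp.values())
        new = {w: r - low for w, r in e_grp.items()}
    new.update({w: a for w in S if w not in E})
    return new
```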
Ordinal Jeffrey Conditioning *J,a
Spohn 88
• Drop the E-worlds to the bottom; drop the non-E worlds to the bottom as well, and then jack them up to level a; the result is S *J,a E (sketch below)
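A minimal sketch of *J,a under the same assumed representation.

```python
def star_J(S, E, a):
    """Ordinal Jeffrey conditioning: both groups drop rigidly; non-E worlds are raised to a."""
    e_grp = {w: r for w, r in S.items() if w in E}
    x_grp = {w: r for w, r in S.items() if w not in E}
    new = {}
    if e_grp:
        low = min(e_grp.values())
        new = {w: r - low for w, r in e_grp.items()}
    if x_grp:
        low = min(x_grp.values())
        new.update({w: r - low + a for w, r in x_grp.items()})
    return new
```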
Empirical Backsliding
• Ordinal Jeffrey conditioning can increase the plausibility of a refuted possibility: a refuted world sitting above level a gets pulled back down to level a
The Ratchet Method *R,a
Darwiche and Pearl 97
• Like ordinal Jeffrey conditioning, except that refuted possibilities move up by a from their current positions (from level b to level b + a); the result is S *R,a E (sketch below)
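A minimal sketch of *R,a under the same assumed representation.

```python
def star_R(S, E, a):
    """Ratchet: E-worlds drop rigidly to the bottom; each non-E world moves up by a."""
    e_grp = {w: r for w, r in S.items() if w in E}
    new = {}
    if e_grp:
        low = min(e_grp.values())
        new = {w: r - low for w, r in e_grp.items()}
    new.update({w: r + a for w, r in S.items() if w not in E})
    return new
```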
Part II
Properties of the Methods
Timidity and Stubbornness
• Timidity: no inductive leaps without refutation
• Stubbornness: no retractions without refutation
• Examples: all of the above
• Nutty!
Local Consistency
• Local consistency: the new belief state must be consistent with the current consistent datum
• Examples: all of the above
Positive Order-invariance
• Positive order-invariance: preserve the original ranking inside the conjunction of the data
• Examples: *C, *L, *R,a, *J,a
Data-Precedence
• Data-precedence: each world satisfying all the data is placed above each world failing to satisfy some datum
• Examples: *C, *L; also *R,a and *J,a, if a is above S
Enumerate and Test
• Enumerate-and-test:
  – locally consistent
  – positively invariant
  – data-precedent
• Examples:
  – *C, *L
  – *R,a and *J,a, if a is above S
• (figure: an epistemic dump for refuted possibilities above a preserved implausibility structure)
Part III
Belief Revision as Learning
A Very Simple Learning Paradigm
• A mysterious system emits a data trajectory, one datum at a time
Possible Outcome Trajectories
• e ranges over the possible data trajectories
• e|n is the finite initial segment of e up to stage n
Finding the Truth
• (*, S0) identifies e ⟺ for all but finitely many n, b(S0 * ([0, e(0)], …, [n, e(n)])) = {e}
• That is, on data from e the belief state trajectory eventually stabilizes to the completely true belief {e} (sketch below)
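A minimal sketch of the identification criterion on a finite horizon, assuming length-k binary tuples stand in for outcome trajectories; star and belief are the (assumed) operator and belief map from the sketches above, and the helper name is mine.

```python
def identifies_on_horizon(star, S0, e, belief):
    """Feed the data [0, e(0)], ..., [n, e(n)] and check b(...) == {e} at the final stage."""
    worlds = list(S0)
    S = S0
    for n, bit in enumerate(e):
        E = {w for w in worlds if w[n] == bit}    # the datum [n, e(n)]
        S = star(S, E)
    return belief(S) == {e}
```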
Reliability is No Accident
• Let K be a range of possible outcome trajectories
• (*, S0) identifies K ⟺ (*, S0) identifies each e in K
• Fact: K is identifiable ⟺ K is countable
Completeness
• * is complete ⟺ for each identifiable K there is an S0 such that K is identifiable by (*, S0)
• Else * is restrictive
Completeness
• Proposition: If * enumerates and tests, then * is complete.
• Proof sketch: enumerate K and let S0 rank the trajectories of K in order of enumeration; choose an arbitrary e in K and feed its data. Data precedence lifts every refuted trajectory above every unrefuted one, positive invariance preserves the enumeration order among the unrefuted ones, and local consistency keeps the belief state consistent with the data, so the beliefs converge to {e}. (A usage sketch follows.)
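A toy instance of this construction, under the same assumptions as the earlier sketches; the tiny class K, the chosen e, and the compact star_L here are illustrative, not the slides' example.

```python
from itertools import product

def star_L(S, E):
    """Lexicographic updating (as in the *L sketch above), with consecutive ranks."""
    e_grp = sorted((r, w) for w, r in S.items() if w in E)
    x_grp = sorted((r, w) for w, r in S.items() if w not in E)
    return {w: i for i, (_, w) in enumerate(e_grp + x_grp)}

K = list(product((0, 1), repeat=4))          # a tiny identifiable class of "trajectories"
S0 = {w: i for i, w in enumerate(K)}         # rank worlds by enumeration order

e = (1, 0, 1, 1)                             # an arbitrary member of K
S = S0
for n, bit in enumerate(e):
    S = star_L(S, {w for w in K if w[n] == bit})   # feed the datum [n, e(n)]
low = min(S.values())
print({w for w, r in S.items() if r == low})       # {(1, 0, 1, 1)}
```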
Amnesia
• Without data precedence, memory can fail
• Same example, using *J,1: the earlier datum E is forgotten (a toy demonstration follows)
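A toy illustration of such forgetting with boost a = 1 (my own three-world example, not the slides'), reusing the *J,a sketch from above.

```python
def star_J(S, E, a):
    """Ordinal Jeffrey conditioning, as in the *J,a sketch above."""
    e_grp = {w: r for w, r in S.items() if w in E}
    x_grp = {w: r for w, r in S.items() if w not in E}
    new = {}
    if e_grp:
        m = min(e_grp.values())
        new = {w: r - m for w, r in e_grp.items()}
    if x_grp:
        m = min(x_grp.values())
        new.update({w: r - m + a for w, r in x_grp.items()})
    return new

S  = {"x": 2, "y": 0, "z": 0}
E  = {"x", "y"}                  # first datum
D  = {"x", "z"}                  # second datum, jointly consistent with E
S2 = star_J(star_J(S, E, 1), D, 1)
print({w for w, r in S2.items() if r == min(S2.values())})   # {'z'}, which violates E
```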
Duality
• Conjectures and refutations: predicts, but may forget
• Tabula rasa: remembers, but doesn’t predict
“Rationally” Imposed Tension
• Compression for memory
• Rarefaction for inductive leaps
• Can both be accommodated?
Inductive Amnesia
• Compression for memory vs. rarefaction for inductive leaps: Bang!
• Restrictiveness: no possible initial state resolves the pressure
Question
• Which methods are guilty?
• Are some worse than others?
Part IV:
The Goodman Hierarchy
The Grue Operation
Nelson Goodman
• e‡n agrees with e before stage n and with ¬e from stage n onward (sketch below)
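A minimal sketch of the grue operation, assuming finite binary tuples as truncated trajectories.

```python
def grue(e, n):
    """e‡n: agree with e before stage n, with the complement of e from stage n on."""
    return tuple(b if i < n else 1 - b for i, b in enumerate(e))

e = (0, 0, 0, 0, 0, 0)
print(grue(e, 3))           # (0, 0, 0, 1, 1, 1)
print(grue(grue(e, 3), 5))  # (0, 0, 0, 1, 1, 0): two grues, a finite variant of e
```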
Grue Complexity Hierarchy
• Arbitrary grues: G0(e) ⊆ G1(e) ⊆ G2(e) ⊆ G3(e) ⊆ G4(e) ⊆ … ⊆ Gω(e) (the finite variants of e and ¬e)
• Even grues: G0even(e) ⊆ G1even(e) ⊆ G2even(e) ⊆ … ⊆ Gωeven(e) (the finite variants of e)
Classification: even grues

             Min   Flush      Jeffrey   Ratch    Lex    Cond
Gωeven(e)    no    a = ω      a = 1     a = 1    yes    yes
Gneven(e)    no    a = n+1    a = 1     a = 1    yes    yes
G2even(e)    no    a = 3      a = 1     a = 1    yes    yes
G1even(e)    no    a = 2      a = 1     a = 1    yes    yes
G0even(e)    yes   a = 0      a = 0     a = 0    yes    yes
Hamming Algebra
• a ≤H b mod e ⟺ a differs from e only where b does
• (figure: the three-bit Hamming cube, 111 at the top and 000 at the bottom)
*R,1, *J,1 can identify Gωeven(e)
• Learning as rigid hypercube rotation: as the data from e come in, the cube rotates rigidly until e sits alone at the bottom; convergence (a toy demonstration follows)
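A toy demonstration of this behavior (an assumption-laden stand-in for the slides' figure): ratchet updating with a = 1 from the Hamming rank over three-bit worlds converges to the fed trajectory.

```python
from itertools import product

def star_R(S, E, a):
    """Ratchet *R,a, as in the sketch above (assumes E overlaps the state)."""
    e_grp = {w: r for w, r in S.items() if w in E}
    m = min(e_grp.values())
    new = {w: r - m for w, r in e_grp.items()}
    new.update({w: r + a for w, r in S.items() if w not in E})
    return new

e = (0, 0, 0)
worlds = list(product((0, 1), repeat=3))
S = {w: sum(x != y for x, y in zip(w, e)) for w in worlds}   # Hamming rank around e

t = (1, 0, 1)                                                # the true trajectory
for n, bit in enumerate(t):
    S = star_R(S, {w for w in worlds if w[n] == bit}, 1)     # feed the datum [n, t(n)]
print({w for w, r in S.items() if r == 0})                   # {(1, 0, 1)}
```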
Classification: even grues
(table repeated from above)
Classification: arbitrary grues

           Min   Flush      Jeffrey   Ratch    Lex    Cond
Gω(e)      no    a = ω      a = 2     a = 2    yes    yes
G3(e)      no    a = n+1    a = 2     a = 2    yes    yes
G2(e)      no    a = 3      a = 2     a = 2    yes    yes
G1(e)      no    a = 2      a = 2     a = 1    yes    yes
G0(e)      yes   a = 0      a = 0     a = 0    yes    yes
*R,2 is Complete
• Impose the Hamming-distance ranking on each finite-variant class C0, C1, C2, C3, C4, …
• Now raise the nth Hamming ranking by n
• Data streams in the same column just barely make it, because they jump by 2 for each difference from the truth (1 difference from the truth, 2 differences from the truth, …) (sketch below)
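A minimal sketch of the initial state described above, truncated to finite tuples; the class list and function names are mine.

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def r2_initial_state(classes):
    """classes: list of (representative, worlds) pairs standing in for C0, C1, ...
    Within the nth class, rank by Hamming distance to the representative,
    then raise the whole ranking by n."""
    S = {}
    for n, (rep, worlds) in enumerate(classes):
        for w in worlds:
            S[w] = hamming(w, rep) + n
    return S
```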
Classification: arbitrary grues
(table repeated from above, with the callout “Can’t use Hamming rank”)
Wrench In the Works
• Suppose *J,2 succeeds with the Hamming rank. Feed ¬e until, by convergent success, ¬e is uniquely at the bottom, say by stage k.
• So for some later n, worlds a and b lie as pictured, by the Hamming rank and positive invariance (if the level is empty, things go even worse); ¬e is still alone at the bottom, since the method is timid and stubborn.
• b moves up at most 1 step, since ¬e is still alone (by the rule); refuted worlds touch bottom and get lifted by at most two.
• So b never rises above a when a is true (positive invariance). Now a and b agree forever, so they can never be separated; hence the method either never converges on a or forgets the refutation of b.
Hamming vs. Goodman Algebras
• a ≤H b mod e ⟺ a differs from e only where b does
• a ≤G b mod e ⟺ a grues e only where b does
• (figure: the three-bit Hamming cube beside the corresponding Goodman cube)
Epistemic States as Boolean Ranks
(figure: epistemic states built from the Hamming and Goodman ranks, each with e at the bottom; labels: Gωeven(e), Gωodd(e), Gω(e))
*J,2 can identify Gω(e)
• Proof: use the Goodman ranking as the initial state (sketch below).
• Then *J,2 always believes that the observed grues are the only ones that will ever occur.
• Note: Ockham with respect to the reversal-counting problem.
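A minimal sketch of the Goodman ranking named in the proof, on finite binary tuples; grue_count and the example values are my assumptions.

```python
from itertools import product

def grue_count(w, e):
    """Number of grue operations (reversals) needed to turn e into w."""
    prev, switches = 0, 0
    for d in (int(x != y) for x, y in zip(w, e)):
        if d != prev:
            switches, prev = switches + 1, d
    return switches

e = (0, 0, 0, 0)
S0 = {w: grue_count(w, e) for w in product((0, 1), repeat=4)}   # Goodman rank
print(S0[(0, 0, 1, 1)], S0[(0, 1, 0, 1)])                       # 1 grue vs. 3 grues
```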
Classification: arbitrary grues
(table repeated from above)
Methods *J,1; *M Fail on G1(e)
• Proof: Suppose otherwise. Feed e until e is uniquely at the bottom.
• By the well-ordering condition, the state above e is as pictured; otherwise there would be an infinite descending chain.
• Now feed e’ forever. By stage n the picture is the same, by positive order invariance and by timidity and stubbornness.
• At stage n+1, e stays at the bottom (timid and stubborn), so e’ can’t travel down (by the rule), and e’’ doesn’t rise (by the rule).
• So e’’ makes it to the bottom at least as soon as e’, contradicting the supposed success on G1(e).
Classification: arbitrary grues
(table repeated from above, with the callout “forced backsliding”)
Method *R,1 Fails on G2(e)
with Oliver Schulte
• Proof: Suppose otherwise. Bring e uniquely to the bottom, say at stage k.
• Start feeding a = e‡k.
• By some stage k’, a is uniquely down; so between k+1 and k’ there is a first stage j at which no finite variant of e is at the bottom.
• Let c in G2(e) be a finite variant of e that rises to level 1 at j. So c(j−1) is not a(j−1).
• Let d be a up to j and e thereafter, so d is in G2(e). Since d differs from e, d is at least as high as level 1 at j.
• Show: c agrees with e after j.
  – Case j = k+1: then c could have been chosen as e, since e is uniquely at the bottom at k.
  – Case j > k+1: then c wouldn’t have been at the bottom if it hadn’t agreed with a (disagreed with e), so c has already used up its two grues against e.
• Feed c forever after. By positive invariance, the method either never projects or forgets the refutation of c at j−1.
Without Well-Ordering
Min / Flush / Jeffrey / Ratch / Lex / Cond
Gω(e): no, yes, yes
G3(e): no, yes, yes
G2(e): no, yes, yes
G1(e): yes, yes, yes
G0(e): yes, yes, yes
Infinite descending chains can help!
Summary
• Belief revision constrains possible inductive strategies
• “No induction without contradiction” (?!!)
• “Rationality” weakens the learning power of ideal agents
• Prediction vs. memory
• Precise recommendations for rationalists:
  – boosting by 2 vs. 1
  – backslide vs. ratchet
  – well-ordering
  – Hamming vs. Goodman rank