Probabilistic Inference


PROBABILISTIC INFERENCE
AGENDA
Conditional probability
 Independence
 Intro to Bayesian Networks

REMEMBER: PROBABILITY NOTATION
LEAVES VALUES IMPLICIT

P(AB) = P(A)+P(B)- P(AB) means
P(A=a  B=b) = P(A=a) + P(B=b)
- P(A=a  B=b)
For all aVal(A) and bVal(B)
A and B are random variables.
A=a and B=b are events.
Random variables indicate many possible
combinations of events
CONDITIONAL PROBABILITY


P(A,B)
= P(A|B) P(B)
= P(B|A) P(A)
P(A|B) is the posterior probability of A given
knowledge of B
Axiomatic definition:
P(A|B) = P(A,B)/P(B)
CONDITIONAL PROBABILITY
P(A|B) is the posterior probability of A given
knowledge of B
 “For each value of b: given that I know B=b, what
do I believe about A?”
 If a new piece of information C arrives, the
agent’s new belief (if it obeys the rules of
probability) should be P(A|B,C)

CONDITIONAL DISTRIBUTIONS
State        P(state)
C, T, P      0.108
C, T, ¬P     0.012
C, ¬T, P     0.072
C, ¬T, ¬P    0.008
¬C, T, P     0.016
¬C, T, ¬P    0.064
¬C, ¬T, P    0.144
¬C, ¬T, ¬P   0.576
(C = Cavity, T = Toothache, P = PCatch)

P(Cavity|Toothache)
= P(Cavity ∧ Toothache)/P(Toothache)
= (0.108+0.012)/(0.108+0.012+0.016+0.064) = 0.6
Interpretation: After observing
Toothache, the patient is no longer an
“average” one, and the prior probability
(0.2) of Cavity is no longer valid
P(Cavity|Toothache) is calculated by
keeping the ratios of the probabilities of
the 4 cases of Toothache unchanged,
and normalizing their sum to 1
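A minimal Python sketch (not part of the original slides) of this computation, summing and normalizing rows of the joint table above:

```python
# Sketch: compute P(Cavity | Toothache) from the joint table over
# (Cavity, Toothache, PCatch) by summing the relevant rows.
joint = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

p_toothache = sum(p for (c, t, pc), p in joint.items() if t)
p_cavity_and_toothache = sum(p for (c, t, pc), p in joint.items() if c and t)
print(p_cavity_and_toothache / p_toothache)   # 0.12 / 0.2 = 0.6
```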
UPDATING THE BELIEF STATE
State        P(state)
C, T, P      0.108
C, T, ¬P     0.012
C, ¬T, P     0.072
C, ¬T, ¬P    0.008
¬C, T, P     0.016
¬C, T, ¬P    0.064
¬C, ¬T, P    0.144
¬C, ¬T, ¬P   0.576
 The patient walks in through the dentist's door
 Let D now observe
evidence E: Toothache
holds with probability
0.8 (e.g., “the patient
says so”)
 How should D update its
belief state?
UPDATING THE BELIEF STATE
State        P(state)
C, T, P      0.108
C, T, ¬P     0.012
C, ¬T, P     0.072
C, ¬T, ¬P    0.008
¬C, T, P     0.016
¬C, T, ¬P    0.064
¬C, ¬T, P    0.144
¬C, ¬T, ¬P   0.576

P(Toothache|E) = 0.8
 We want to compute P(C ∧ T ∧ P|E) = P(C ∧ P|T,E) P(T|E)
 Since E is not directly related to the cavity or the probe catch, we consider that C and P are independent of E given T, hence: P(C ∧ P|T,E) = P(C ∧ P|T)
 P(C ∧ T ∧ P|E) = P(C ∧ P ∧ T) P(T|E)/P(T)

UPDATING THE BELIEF STATE
State        P(state)
C, T, P      0.108
C, T, ¬P     0.012
C, ¬T, P     0.072
C, ¬T, ¬P    0.008
¬C, T, P     0.016
¬C, T, ¬P    0.064
¬C, ¬T, P    0.144
¬C, ¬T, ¬P   0.576

P(Toothache|E) = 0.8
The rows where Toothache holds should be scaled to sum to 0.8; the rows where ¬Toothache holds should be scaled to sum to 0.2.
 We want to compute P(C ∧ T ∧ P|E) = P(C ∧ P|T,E) P(T|E)
 Since E is not directly related to the cavity or the probe catch, we consider that C and P are independent of E given T, hence: P(C ∧ P|T,E) = P(C ∧ P|T)
 P(C ∧ T ∧ P|E) = P(C ∧ P ∧ T) P(T|E)/P(T)

UPDATING THE BELIEF STATE
State        P(state)   P(state|E)
C, T, P      0.108      0.432
C, T, ¬P     0.012      0.048
C, ¬T, P     0.072      0.018
C, ¬T, ¬P    0.008      0.002
¬C, T, P     0.016      0.064
¬C, T, ¬P    0.064      0.256
¬C, ¬T, P    0.144      0.036
¬C, ¬T, ¬P   0.576      0.144

P(Toothache|E) = 0.8
The rows where Toothache holds are scaled to sum to 0.8; the rows where ¬Toothache holds are scaled to sum to 0.2.
 We want to compute P(C ∧ T ∧ P|E) = P(C ∧ P|T,E) P(T|E)
 Since E is not directly related to the cavity or the probe catch, we consider that C and P are independent of E given T, hence: P(C ∧ P|T,E) = P(C ∧ P|T)
 P(C ∧ T ∧ P|E) = P(C ∧ P ∧ T) P(T|E)/P(T)

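A hedged Python sketch (not from the slides) of this update, scaling the Toothache rows to sum to 0.8 and the ¬Toothache rows to sum to 0.2:

```python
# Sketch: soft-evidence update P(Toothache | E) = 0.8 applied to the joint table.
joint = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

p_t = sum(p for (c, t, pc), p in joint.items() if t)   # P(Toothache) = 0.2
p_t_given_e = 0.8

updated = {
    state: p * (p_t_given_e / p_t if state[1] else (1 - p_t_given_e) / (1 - p_t))
    for state, p in joint.items()
}
print(updated[(True, True, True)])    # 0.108 * 0.8/0.2 = 0.432
print(updated[(True, False, True)])   # 0.072 * 0.2/0.8 = 0.018
```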
ISSUES
If a state is described by n propositions, then a
belief state contains 2^n states (possibly, some
have probability 0)
  Modeling difficulty: many numbers must be
entered in the first place
  Computational issue: memory size and time

INDEPENDENCE OF EVENTS

Two events A=a and B=b are independent if
P(A=a ∧ B=b) = P(A=a) P(B=b)
hence P(A=a|B=b) = P(A=a)
 Knowing B=b doesn’t give you any information
about whether A=a is true
INDEPENDENCE OF RANDOM VARIABLES

Two random variables A and B are independent
if
P(A,B) = P(A) P(B)
hence P(A|B) = P(A)
 Knowing B doesn’t give you any information
about A

[This equality has to hold for all combinations of
values that A and B can take on, i.e., all events
A=a and B=b are independent]
SIGNIFICANCE OF INDEPENDENCE

If A and B are independent, then
P(A,B) = P(A) P(B)
=> The joint distribution over A and B can be
defined as a product of the distribution of A and
the distribution of B
 Rather than storing a big probability table over
all combinations of A and B, store two much
smaller probability tables!


To compute P(A=a ∧ B=b), just look up P(A=a)
and P(B=b) in the individual tables and multiply
them together
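A tiny illustration (hypothetical values, not from the slides) of the factored storage described above:

```python
# Sketch: under independence, store P(A) and P(B) separately and multiply on lookup.
P_A = {"a1": 0.3, "a2": 0.7}   # hypothetical distribution of A
P_B = {"b1": 0.9, "b2": 0.1}   # hypothetical distribution of B

def p_joint(a, b):
    # P(A=a ∧ B=b) = P(A=a) P(B=b) when A and B are independent
    return P_A[a] * P_B[b]

print(p_joint("a1", "b2"))   # 0.3 * 0.1 = 0.03
```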
CONDITIONAL INDEPENDENCE

Two random variables A and B are conditionally
independent given C, if
P(A, B|C) = P(A|C) P(B|C)
hence P(A|B,C) = P(A|C)
 Once you know C, learning B doesn’t give you
any information about A

[again, this has to hold for all combinations of
values that A,B,C can take on]
SIGNIFICANCE OF CONDITIONAL
INDEPENDENCE
Consider Rainy, Thunder, and RoadsSlippery
 Ostensibly, thunder doesn’t have anything
directly to do with slippery roads…
 But they happen together more often when it
rains, so they are not independent…
 So it is reasonable to believe that Thunder and
RoadsSlippery are conditionally independent
given Rainy
 So if I want to estimate whether or not I will hear
thunder, I don’t need to think about the state of
the roads, just whether or not it’s raining!

State        P(state)
C, T, P      0.108
C, T, ¬P     0.012
C, ¬T, P     0.072
C, ¬T, ¬P    0.008
¬C, T, P     0.016
¬C, T, ¬P    0.064
¬C, ¬T, P    0.144
¬C, ¬T, ¬P   0.576
Toothache and PCatch
are independent given
Cavity, but this
relation is hidden in
the numbers! [Quiz]
 Bayesian networks
explicitly represent
independence among
propositions to reduce
the number of
probabilities defining
a belief state

BAYESIAN NETWORK
Notice that Cavity is the “cause” of both Toothache and
PCatch, and represent the causality links explicitly
 Give the prior probability distribution of Cavity
 Give the conditional probability tables of Toothache and
PCatch

P(CTP) = P(TP|C) P(C)
= P(T|C) P(P|C) P(C)
P(T|C)
Cavity
Cavity
0.6
0.1
P(Cavity)
0.2
Cavity
P(P|C)
Toothache
PCatch
5 probabilities, instead of 7
Cavity
Cavity
0.9
0.02
CONDITIONAL PROBABILITY TABLES
P(CTP) = P(TP|C) P(C)
= P(T|C) P(P|C) P(C)
P(Cavity)
0.2
Cavity
P(T|C)
Cavity
Cavity
0.6
0.1
P(P|C)
Toothache
Cavity
Cavity
P(T|C)
0.6
0.1
P(T|C)
0.4
0.9
Cavity
Cavity
0.9
0.02
PCatch
Columns sum to 1
If X takes n values, just store n-1 entries
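A small sketch (not from the slides) that computes entries of the joint from the CPTs above, e.g. P(C ∧ T ∧ P) = P(T|C) P(P|C) P(C):

```python
# Sketch: joint entries of the dentist network from its CPTs (numbers from the slide).
p_cavity = 0.2
p_t_given = {True: 0.6, False: 0.1}    # P(Toothache=true | Cavity)
p_p_given = {True: 0.9, False: 0.02}   # P(PCatch=true | Cavity)

def joint(c, t, p):
    pc = p_cavity if c else 1 - p_cavity
    pt = p_t_given[c] if t else 1 - p_t_given[c]
    pp = p_p_given[c] if p else 1 - p_p_given[c]
    return pt * pp * pc

print(joint(True, True, True))    # 0.6 * 0.9 * 0.2 = 0.108
print(joint(True, True, False))   # 0.6 * 0.1 * 0.2 = 0.012
```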
SIGNIFICANCE OF CONDITIONAL
INDEPENDENCE
Consider Grade(CS101), Intelligence, and SAT
 Ostensibly, the grade in a course doesn’t have a
direct relationship with SAT scores
 but good students are more likely to get good
SAT scores, so they are not independent…
 It is reasonable to believe that Grade(CS101) and
SAT are conditionally independent given
Intelligence

BAYESIAN
NETWORK
Explicitly represent independence among propositions
 Notice that Intelligence is the “cause” of both Grade and
SAT, and the causality is represented explicitly

P(I,G,S) = P(G,S|I) P(I)
= P(G|I) P(S|I) P(I)

[Network: Intelligence → Grade, Intelligence → SAT]

P(I=x)
high   0.3
low    0.7

P(G=x|I)   I=low   I=high
'A'        0.2     0.74
'B'        0.34    0.17
'C'        0.46    0.09

P(S=x|I)   I=low   I=high
low        0.95    0.2
high       0.05    0.8

6 probabilities, instead of 11
SIGNIFICANCE OF BAYESIAN NETWORKS
If we know that variables are conditionally
independent, we should be able to decompose
joint distribution to take advantage of it
 Bayesian networks are a way of efficiently
factoring the joint distribution into conditional
probabilities
 And also building complex joint distributions
from smaller models of probabilistic relationships
 But…

What knowledge does the BN encode about the
distribution?
 How do we use a BN to compute probabilities of
variables that we are interested in?

A MORE COMPLEX BN
[Directed acyclic graph: Burglary, Earthquake (causes) → Alarm → JohnCalls, MaryCalls (effects)]

Intuitive meaning of an arc from x to y: "x has direct influence on y"
A MORE COMPLEX BN
[Burglary, Earthquake → Alarm → JohnCalls, MaryCalls]

P(B)      P(E)
0.001     0.002

B  E  P(A|B,E)
T  T  0.95
T  F  0.94
F  T  0.29
F  F  0.001

A  P(J|A)
T  0.90
F  0.05

A  P(M|A)
T  0.70
F  0.01

Size of the CPT for a node with k parents: 2^k
10 probabilities, instead of 31
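One possible way (a sketch, not from the slides) to store these CPTs as Python dictionaries, indexed by parent values:

```python
# Sketch: the burglary network's CPTs as dictionaries keyed by parent values.
P_B = 0.001                       # P(Burglary)
P_E = 0.002                       # P(Earthquake)
P_A = {                           # P(Alarm | Burglary, Earthquake)
    (True, True): 0.95, (True, False): 0.94,
    (False, True): 0.29, (False, False): 0.001,
}
P_J = {True: 0.90, False: 0.05}   # P(JohnCalls | Alarm)
P_M = {True: 0.70, False: 0.01}   # P(MaryCalls | Alarm)

# 1 + 1 + 4 + 2 + 2 = 10 stored probabilities, versus 2**5 - 1 = 31 for the joint.
```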
WHAT DOES THE BN ENCODE?
[Burglary, Earthquake → Alarm → JohnCalls, MaryCalls]

P(B ∧ J) ≠ P(B) P(J)
P(B ∧ J|A) = P(B|A) P(J|A)

Each of the beliefs JohnCalls and MaryCalls is independent of Burglary and Earthquake given Alarm or ¬Alarm

For example, John does not observe any burglaries directly
WHAT DOES THE BN ENCODE?
[Burglary, Earthquake → Alarm → JohnCalls, MaryCalls]

P(B ∧ J|A) = P(B|A) P(J|A)
P(J ∧ M|A) = P(J|A) P(M|A)

The beliefs JohnCalls and MaryCalls are independent given Alarm or ¬Alarm

A node is independent of its non-descendants given its parents

For instance, the reasons why John and Mary may not call if there is an alarm are unrelated
WHAT DOES THE BN ENCODE?
[Burglary, Earthquake → Alarm → JohnCalls, MaryCalls]

Burglary and Earthquake are independent

The beliefs JohnCalls and MaryCalls are independent given Alarm or ¬Alarm

A node is independent of its non-descendants given its parents

For instance, the reasons why John and Mary may not call if there is an alarm are unrelated
LOCALLY STRUCTURED WORLD
A world is locally structured (or sparse) if each of
its components interacts directly with relatively
few other components
 In a sparse world, the CPTs are small and the
BN contains much fewer probabilities than the
full joint distribution
 If the # of entries in each CPT is bounded by a
constant, i.e., O(1), then the # of probabilities in a
BN is linear in n – the # of propositions – instead
of 2^n for the joint distribution

EQUATIONS INVOLVING RANDOM VARIABLES
GIVE RISE TO CAUSALITY RELATIONSHIPS
C = A ∨ B
 C = max(A,B)
 Constrains joint probability P(A,B,C)
 Nicely encoded as a causality relationship

[Network: A, B → C]
Conditional probability given by equation rather than a CPT
NAÏVE BAYES MODELS

P(Cause, Effect1, …, Effectn)
= P(Cause) Πi P(Effecti | Cause)

[Network: Cause → Effect1, Effect2, …, Effectn]
BAYES’ RULE AND OTHER PROBABILITY
MANIPULATIONS

P(A,B)
= P(A|B) P(B)
= P(B|A) P(A)

P(A|B) = P(B|A) P(A) / P(B)

Gives us a way to manipulate distributions
e.g., P(B) = Σa P(B|A=a) P(A=a)
 Can derive P(A|B), P(B) using only P(B|A) and P(A)

NAÏVE BAYES CLASSIFIER

P(Class, Feature1, …, Featuren)
= P(Class) Πi P(Featurei | Class)

[Network: Class → Feature1, Feature2, …, Featuren]
Class: e.g., Spam / Not Spam, or English / French / Latin
Features: word occurrences
Given features, what class?

P(C|F1,…,Fk) = P(C,F1,…,Fk)/P(F1,…,Fk)
= 1/Z P(C) Πi P(Fi|C)
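A short, self-contained sketch (not from the slides; the word probabilities below are invented for illustration) of the posterior computation P(C|F1,…,Fk) ∝ P(C) Πi P(Fi|C):

```python
# Sketch: naive Bayes posterior over classes, normalized by Z.
p_class = {"spam": 0.4, "not_spam": 0.6}            # hypothetical priors
p_word_given_class = {                              # hypothetical P(word | class)
    "spam":     {"offer": 0.7, "meeting": 0.1},
    "not_spam": {"offer": 0.2, "meeting": 0.6},
}

def posterior(words):
    scores = {}
    for c, prior in p_class.items():
        score = prior
        for w in words:
            score *= p_word_given_class[c][w]       # P(C) * prod_i P(Fi | C)
        scores[c] = score
    z = sum(scores.values())                        # normalization constant Z
    return {c: s / z for c, s in scores.items()}

print(posterior(["offer"]))   # "spam" gets the higher posterior
```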
BUT DOES A BN REPRESENT A
BELIEF STATE?
IN OTHER WORDS, CAN WE COMPUTE
THE FULL JOINT DISTRIBUTION OF
THE PROPOSITIONS FROM IT?
CALCULATION OF JOINT PROBABILITY
[Burglary, Earthquake → Alarm → JohnCalls, MaryCalls]

P(B) = 0.001    P(E) = 0.002

B  E  P(A|B,E)
T  T  0.95
T  F  0.94
F  T  0.29
F  F  0.001

A  P(J|A)    A  P(M|A)
T  0.90      T  0.70
F  0.05      F  0.01

P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = ??

P(J ∧ M ∧ A ∧ ¬B ∧ ¬E)
= P(J ∧ M|A,¬B,¬E) × P(A ∧ ¬B ∧ ¬E)
= P(J|A,¬B,¬E) × P(M|A,¬B,¬E) × P(A ∧ ¬B ∧ ¬E)
(J and M are independent given A)
 P(J|A,¬B,¬E) = P(J|A)
 P(M|A,¬B,¬E) = P(M|A)
(J and B, and J and E, are independent given A)
 P(A ∧ ¬B ∧ ¬E) = P(A|¬B,¬E) × P(¬B|¬E) × P(¬E)
= P(A|¬B,¬E) × P(¬B) × P(¬E)
(B and E are independent)
 P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J|A)P(M|A)P(A|¬B,¬E)P(¬B)P(¬E)
CALCULATION OF JOINT PROBABILITY
P(J ∧ M ∧ A ∧ ¬B ∧ ¬E)
= P(J|A)P(M|A)P(A|¬B,¬E)P(¬B)P(¬E)
= 0.9 × 0.7 × 0.001 × 0.999 × 0.998
= 0.00062

[Same network and CPTs as above]
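As a quick check (a sketch, not from the slides), the same product in Python:

```python
# Sketch: P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) as a product of CPT entries.
p = 0.90 * 0.70 * 0.001 * (1 - 0.001) * (1 - 0.002)
#   P(J|A)  P(M|A) P(A|¬B,¬E)  P(¬B)       P(¬E)
print(p)   # 0.000628..., about 0.0006 (the slide rounds this to 0.00062)
```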
CALCULATION OF JOINT PROBABILITY
P(J ∧ M ∧ A ∧ ¬B ∧ ¬E)
= P(J|A)P(M|A)P(A|¬B,¬E)P(¬B)P(¬E)
= 0.00062

P(x1 ∧ x2 ∧ … ∧ xn) = Πi=1,…,n P(xi|parents(Xi))
→ full joint distribution table

[Same network and CPTs as above]
CALCULATION OF JOINT PROBABILITY
Since a BN defines the full joint distribution of a set of propositions, it represents a belief state

P(J ∧ M ∧ A ∧ ¬B ∧ ¬E)
= P(J|A)P(M|A)P(A|¬B,¬E)P(¬B)P(¬E)
= 0.00062

P(x1 ∧ x2 ∧ … ∧ xn) = Πi=1,…,n P(xi|parents(Xi))
→ full joint distribution table

[Same network and CPTs as above]
PROBABILISTIC INFERENCE
Assume we are given a Bayes net
 Usually we aren’t interested in calculating the
probability of a setting of all of the variables

For some variables we observe their values directly:
observed variables
 For others we don’t care: nuisance variables


How can we enforce observed values and ignore
nuisance variables, while strictly adhering to the
rules of probability?

Probabilistic inference problems
PROBABILITY MANIPULATION REVIEW…
Three fundamental operations
 Conditioning
 Marginalization
 Applying (conditional) independence assumptions

TOP-DOWN INFERENCE
Suppose we want to compute P(Alarm)
[Burglary, Earthquake → Alarm → JohnCalls, MaryCalls, with the same CPTs as above]
TOP-DOWN INFERENCE
Suppose we want to compute P(Alarm)
1. P(Alarm) = Σb,e P(A, b, e)
2. P(Alarm) = Σb,e P(A|b,e) P(b) P(e)

[Same network and CPTs as above]
TOP-DOWN INFERENCE
Suppose we want to compute P(Alarm)
1. P(Alarm) = Σb,e P(A, b, e)
2. P(Alarm) = Σb,e P(A|b,e) P(b) P(e)
3. P(Alarm) = P(A|B,E) P(B) P(E) +
   P(A|B,¬E) P(B) P(¬E) +
   P(A|¬B,E) P(¬B) P(E) +
   P(A|¬B,¬E) P(¬B) P(¬E)

[Same network and CPTs as above]
TOP-DOWN INFERENCE
Suppose we want to compute P(Alarm)
1. P(A) = Σb,e P(A, b, e)
2. P(A) = Σb,e P(A|b,e) P(b) P(e)
3. P(A) = P(A|B,E) P(B) P(E) +
   P(A|B,¬E) P(B) P(¬E) +
   P(A|¬B,E) P(¬B) P(E) +
   P(A|¬B,¬E) P(¬B) P(¬E)
4. P(A) = 0.95×0.001×0.002 +
   0.94×0.001×0.998 +
   0.29×0.999×0.002 +
   0.001×0.999×0.998
   = 0.00252

[Same network and CPTs as above]
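A small Python sketch (not from the slides) of steps 1–4, marginalizing Burglary and Earthquake out of the product of CPT entries:

```python
# Sketch: P(Alarm) = sum over b, e of P(A|b,e) P(b) P(e).
P_B, P_E = 0.001, 0.002
P_A_given = {(True, True): 0.95, (True, False): 0.94,
             (False, True): 0.29, (False, False): 0.001}

p_alarm = sum(
    P_A_given[(b, e)] * (P_B if b else 1 - P_B) * (P_E if e else 1 - P_E)
    for b in (True, False)
    for e in (True, False)
)
print(p_alarm)   # ≈ 0.00252
```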
TOP-DOWN INFERENCE
Now, suppose we want to compute P(MaryCalls)
[Burglary, Earthquake → Alarm → JohnCalls, MaryCalls, with the same CPTs as above]
TOP-DOWN INFERENCE
Now, suppose we want to compute P(MaryCalls)
1. P(M) = P(M|A) P(A) + P(M|¬A) P(¬A)

[Same network and CPTs as above]
TOP-DOWN INFERENCE
Now, suppose we want to compute P(MaryCalls)
1. P(M) = P(M|A) P(A) + P(M|¬A) P(¬A)
2. P(M) = 0.70×0.00252 + 0.01×(1 − 0.00252)
   = 0.0117

[Same network and CPTs as above]
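And the last step as a short sketch (not from the slides), reusing P(Alarm) = 0.00252 from above:

```python
# Sketch: P(M) = P(M|A) P(A) + P(M|¬A) P(¬A).
p_alarm = 0.00252
p_m = 0.70 * p_alarm + 0.01 * (1 - p_alarm)
print(p_m)   # ≈ 0.0117
```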
QUERYING THE BN
The BN gives P(T|C)
 What about P(C|T)?

[Network: Cavity → Toothache]

P(C)
0.1

C  P(T|C)
T  0.4
F  0.01111
BAYES’ RULE

P(AB)

So…
= P(A|B) P(B)
= P(B|A) P(A)
P(A|B) = P(B|A) P(A) / P(B)

A convenient way to manipulate probability
equations
APPLYING BAYES’ RULE


Let A be a cause, B be an effect, and let’s say we know
P(B|A) and P(A) (conditional probability tables)
What’s P(B)?
APPLYING BAYES’ RULE


Let A be a cause, B be an effect, and let’s say we know
P(B|A) and P(A) (conditional probability tables)
What’s P(B)?
P(B) = Σa P(B, A=a)            [marginalization]
P(B, A=a) = P(B|A=a) P(A=a)    [conditional probability]
So, P(B) = Σa P(B|A=a) P(A=a)
APPLYING BAYES’ RULE


Let A be a cause, B be an effect, and let’s say we know
P(B|A) and P(A) (conditional probability tables)
What’s P(A|B)?
APPLYING BAYES’ RULE


Let A be a cause, B be an effect, and let’s say we know
P(B|A) and P(A) (conditional probability tables)
What’s P(A|B)?
P(A|B) = P(B|A) P(A) / P(B)              [Bayes' rule]
P(B) = Σa P(B|A=a) P(A=a)                [last slide]
So, P(A|B) = P(B|A) P(A) / [Σa P(B|A=a) P(A=a)]
HOW DO WE READ THIS?



P(A|B) = P(B|A)P(A) / [Σa P(B|A=a) P(A=a)]
[An equation that holds for all values A can take on, and
all values B can take on]
P(A=a|B=b) =
HOW DO WE READ THIS?



P(A|B) = P(B|A)P(A) / [Σa P(B|A=a) P(A=a)]
[An equation that holds for all values A can take on, and
all values B can take on]
P(A=a|B=b) = P(B=b|A=a)P(A=a) /
[Σa P(B=b|A=a) P(A=a)]
Are these the same a?
HOW DO WE READ THIS?



P(A|B) = P(B|A)P(A) / [Σa P(B|A=a) P(A=a)]
[An equation that holds for all values A can take on, and
all values B can take on]
P(A=a|B=b) = P(B=b|A=a)P(A=a) /
[Σa P(B=b|A=a) P(A=a)]
Are these the same a?
NO!
HOW DO WE READ THIS?



P(A|B) = P(B|A)P(A) / [Σa P(B|A=a) P(A=a)]
[An equation that holds for all values A can take on, and
all values B can take on]
P(A=a|B=b) = P(B=b|A=a)P(A=a) /
[Σa' P(B=b|A=a') P(A=a')]
Be careful about indices!
QUERYING THE BN

The BN gives P(T|C)
What about P(C|T)?

P(Cavity|T=t)
= P(Cavity ∧ T=t) / P(T=t)
= P(T=t|Cavity) P(Cavity) / P(T=t)    [Bayes' rule]

[Network: Cavity → Toothache]

P(C)
0.1

C  P(T|C)
T  0.4
F  0.01111

Querying a BN is just applying
Bayes’ rule on a larger scale…
algorithms next time
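A minimal sketch (not from the slides) of this query on the two-node network above:

```python
# Sketch: P(Cavity | Toothache=true) via Bayes' rule on the two-node BN.
p_c = 0.1
p_t_given_c = {True: 0.4, False: 0.01111}   # P(Toothache=true | Cavity)

p_t = p_t_given_c[True] * p_c + p_t_given_c[False] * (1 - p_c)   # P(T=true) ≈ 0.05
p_c_given_t = p_t_given_c[True] * p_c / p_t
print(p_c_given_t)   # ≈ 0.8
```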
MORE COMPLICATED
SINGLY-CONNECTED BELIEF NET
[Network over Battery, Radio, Gas, SparkPlugs, Starts, Moves]
SOME APPLICATIONS OF BN
Medical diagnosis
 Troubleshooting of hardware/software systems
 Fraud/uncollectible debt detection
 Data mining
 Analysis of genetic sequences
 Data interpretation, computer vision, image
understanding

[Example: image interpretation with regions R1–R4, Region ∈ {Sky, Tree, Grass, Rock}, and "Above" relations; example: BN to evaluate insurance risks]
PURPOSES OF BAYESIAN NETWORKS
Efficient and intuitive modeling of complex
causal interactions
 Compact representation of joint distributions
O(n) rather than O(2^n)
 Algorithms for efficient inference with given
evidence (more on this next time)

HOMEWORK

Read R&N 14.1-3