Applications of belief propagation in low-level vision
Bill Freeman
Massachusetts Institute of Technology
Jan. 12, 2010
Joint work with: Egon Pasztor, Jonathan Yedidia, Yair Weiss,
Thouis Jones, Edward Adelson, Marshall Tappen.
Derivation of belief propagation

[Figure: a three-node chain MRF. Hidden scene nodes $x_1, x_2, x_3$ are linked by compatibilities $\Psi(x_1, x_2)$ and $\Psi(x_2, x_3)$; each $x_i$ is tied to its observation $y_i$ by $\Phi(x_i, y_i)$.]

$$x_1^{\mathrm{MMSE}} = \mathrm{mean}_{x_1} \sum_{x_2} \sum_{x_3} P(x_1, x_2, x_3, y_1, y_2, y_3)$$
The posterior factorizes

$$x_1^{\mathrm{MMSE}} = \mathrm{mean}_{x_1} \sum_{x_2} \sum_{x_3} P(x_1, x_2, x_3, y_1, y_2, y_3)$$

$$x_1^{\mathrm{MMSE}} = \mathrm{mean}_{x_1} \sum_{x_2} \sum_{x_3} \Phi(x_1, y_1)\,\Phi(x_2, y_2)\,\Psi(x_1, x_2)\,\Phi(x_3, y_3)\,\Psi(x_2, x_3)$$

$$x_1^{\mathrm{MMSE}} = \mathrm{mean}_{x_1}\,\Phi(x_1, y_1) \sum_{x_2} \Phi(x_2, y_2)\,\Psi(x_1, x_2) \sum_{x_3} \Phi(x_3, y_3)\,\Psi(x_2, x_3)$$

[Figure: the same three-node chain MRF, with $\Phi(x_i, y_i)$ on the image-scene links and $\Psi(x_i, x_{i+1})$ on the scene-scene links.]
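As a quick sanity check of the factorization, here is a small Python/NumPy sketch (mine, not part of the slides) with arbitrary compatibility tables, confirming that the sums can be pushed inside without changing the marginal over $x_1$:

```python
# A minimal numerical check (not from the slides): with made-up tables
# Phi_i(x_i) = Phi(x_i, y_i) and Psi(x_i, x_j), pushing the sums inside gives the
# same unnormalized marginal over x1 as brute-force summation of the joint.
import numpy as np

K = 4                                        # number of discrete states per node
rng = np.random.default_rng(0)
Phi = [rng.random(K) for _ in range(3)]      # local evidence for x1, x2, x3
Psi12, Psi23 = rng.random((K, K)), rng.random((K, K))

# Brute force: build the full joint, then sum over x2 and x3.
joint = np.einsum('a,b,c,ab,bc->abc', Phi[0], Phi[1], Phi[2], Psi12, Psi23)
p1_brute = joint.sum(axis=(1, 2))

# Factorized: innermost sum first, exactly as on the slide.
m32 = Psi23 @ Phi[2]                         # sum_x3 Psi(x2,x3) Phi(x3,y3)
m21 = Psi12 @ (Phi[1] * m32)                 # sum_x2 Psi(x1,x2) Phi(x2,y2) m32(x2)
p1_fact = Phi[0] * m21

assert np.allclose(p1_brute, p1_fact)
```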
Propagation rules

Pushing the sums inside the factorized posterior gives

$$x_1^{\mathrm{MMSE}} = \mathrm{mean}_{x_1}\,\Phi(x_1, y_1) \sum_{x_2} \Phi(x_2, y_2)\,\Psi(x_1, x_2) \sum_{x_3} \Phi(x_3, y_3)\,\Psi(x_2, x_3)$$

Each inner sum is a message. For example, the message that node 2 sends to node 1 is

$$M_1^2(x_1) = \sum_{x_2} \Psi(x_1, x_2)\,\Phi(x_2, y_2)\,M_2^3(x_2)$$

[Figure: the three-node chain MRF with compatibilities $\Phi(x_i, y_i)$ and $\Psi(x_i, x_{i+1})$.]
Belief propagation messages

A message can be thought of as a set of weights on each of your possible states.

To send a message: multiply together all the incoming messages, except the one from the node you're sending to, then multiply by the compatibility matrix and marginalize over the sender's states.

$$M_i^j(x_i) = \sum_{x_j} \psi_{ij}(x_i, x_j) \prod_{k \in N(j)\setminus i} M_j^k(x_j)$$
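A minimal sketch of this update in Python (not the slides' code); here nodes are integer ids, `psi[(i, j)]` is the compatibility table indexed `[x_i, x_j]`, and the evidence $\Phi(x_j, y_j)$ is treated as just another incoming message, sent to $j$ by its observation node:

```python
import numpy as np

def send_message(i, j, psi, messages, neighbors):
    """Message M_i^j(x_i) that node j sends to node i."""
    n_states_j = psi[(i, j)].shape[1]
    prod = np.ones(n_states_j)
    for k in neighbors[j]:
        if k != i:                       # multiply all incoming messages, except from i
            prod *= messages[(k, j)]
    # multiply by the compatibility matrix and marginalize over the sender's states x_j
    return psi[(i, j)] @ prod
```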
Belief propagation: the nosey neighbor rule

"Given everything that I've heard, here's what I think is going on inside your house."

(Given my incoming messages, affecting my state probabilities, and knowing how my states affect your states, here's how I think you should modify the probabilities of your states.)
Beliefs

To find a node's beliefs: multiply together all the messages coming in to that node.

$$b_j(x_j) = \prod_{k \in N(j)} M_j^k(x_j)$$

(Show this for the toy example.)
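For the toy three-node chain, here is a sketch (again mine, with arbitrary compatibility tables) that computes beliefs this way and checks them against the exact posterior marginals:

```python
# Sum-product BP on the chain x1 - x2 - x3; beliefs should equal exact marginals.
import numpy as np

K = 3
rng = np.random.default_rng(1)
phi = [rng.random(K) for _ in range(3)]              # Phi(x_i, y_i), y_i fixed
psi = {(0, 1): rng.random((K, K)), (1, 2): rng.random((K, K))}
psi.update({(j, i): m.T for (i, j), m in list(psi.items())})   # symmetric access

def msg(src, dst):
    """Message the source node sends to the destination node."""
    incoming = phi[src].copy()                       # evidence = message from y_src
    for k in (0, 1, 2):
        if k != dst and (k, src) in psi:             # other neighbours of src
            incoming *= msg(k, src)
    return psi[(dst, src)] @ incoming                # marginalize over x_src

beliefs = []
for j in (0, 1, 2):
    b = phi[j].copy()                                # Phi plays the role of y_j's message
    for k in (0, 1, 2):
        if (k, j) in psi:
            b *= msg(k, j)
    beliefs.append(b / b.sum())

# Exact marginals from the full joint, for comparison.
joint = np.einsum('a,b,c,ab,bc->abc', phi[0], phi[1], phi[2], psi[(0, 1)], psi[(1, 2)])
joint /= joint.sum()
exact = [joint.sum(axis=(1, 2)), joint.sum(axis=(0, 2)), joint.sum(axis=(0, 1))]
assert all(np.allclose(b, e) for b, e in zip(beliefs, exact))
```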
Optimal solution in a chain or tree:
Belief Propagation
• “Do the right thing” Bayesian algorithm.
• For Gaussian random variables over time:
Kalman filter.
• For hidden Markov models:
forward/backward algorithm (and MAP
variant is Viterbi).
10
Markov Random Fields
• Allows rich probabilistic models for
images.
• But built in a local, modular way. Learn
local relationships, get global effects out.
11
MRF nodes as pixels

[Figure: an MRF defined over a pixel grid; from Winkler, 1995, p. 32.]
MRF nodes as patches

[Figure: a grid of image patches (observations $y_i$) connected to a grid of scene patches (hidden $x_i$); each image-scene pair is linked by $\Phi(x_i, y_i)$, and neighboring scene patches are linked by $\Psi(x_i, x_j)$.]
Network joint probability

$$P(x, y) = \frac{1}{Z} \prod_{i,j} \Psi(x_i, x_j) \prod_i \Phi(x_i, y_i)$$

Here $\Psi(x_i, x_j)$ is the scene-scene compatibility function between neighboring scene nodes, and $\Phi(x_i, y_i)$ is the image-scene compatibility function tying each scene node to its local observation.
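As a concrete illustration (a sketch, assuming discrete labels on a 4-connected grid with tabulated compatibilities), the unnormalized joint of a labeling x given observations y can be evaluated as:

```python
import numpy as np

def unnormalized_joint(x, y, psi, phi):
    """Product over grid edges of Psi(x_i, x_j) times product over nodes of
    Phi(x_i, y_i); the partition function Z is left out (intractable in general)."""
    H, W = x.shape
    p = 1.0
    for r in range(H):
        for c in range(W):
            p *= phi[x[r, c], y[r, c]]               # image-scene compatibility
            if r + 1 < H:
                p *= psi[x[r, c], x[r + 1, c]]       # vertical neighbour
            if c + 1 < W:
                p *= psi[x[r, c], x[r, c + 1]]       # horizontal neighbour
    return p
```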
In order to use MRFs:
• Given observations y and the parameters of the MRF, how do we infer the hidden variables x?
• How do we learn the parameters of the MRF?
Inference in Markov Random Fields
Gibbs sampling, simulated annealing
Iterated conditional modes (ICM)
Belief propagation
Application examples:
super-resolution
motion analysis
shading/reflectance separation
Graph cuts
Variational methods
16
Derivation of belief propagation

[Figure: the three-node chain MRF with compatibilities $\Phi(x_i, y_i)$ and $\Psi(x_i, x_{i+1})$.]

$$x_1^{\mathrm{MMSE}} = \mathrm{mean}_{x_1} \sum_{x_2} \sum_{x_3} P(x_1, x_2, x_3, y_1, y_2, y_3)$$
No factorization with loops!

$$x_1^{\mathrm{MMSE}} = \mathrm{mean}_{x_1}\,\Phi(x_1, y_1) \sum_{x_2} \Phi(x_2, y_2)\,\Psi(x_1, x_2) \sum_{x_3} \Phi(x_3, y_3)\,\Psi(x_2, x_3)\,\Psi(x_1, x_3)$$

The extra compatibility $\Psi(x_1, x_3)$ closes a loop: the sum over $x_3$ now depends on $x_1$ as well as $x_2$, so it can no longer be pushed inside as a local message.

[Figure: the three hidden nodes $x_1, x_2, x_3$ connected in a triangle, each with its observation $y_i$.]
Belief, and message updates

$$b_j(x_j) = \prod_{k \in N(j)} M_j^k(x_j)$$

$$M_i^j(x_i) = \sum_{x_j} \psi_{ij}(x_i, x_j) \prod_{k \in N(j)\setminus i} M_j^k(x_j)$$
Justification for running belief propagation in networks with loops
• Experimental results:
  – Error-correcting codes: Kschischang and Frey, 1998; McEliece et al., 1998
  – Vision applications: Freeman and Pasztor, 1999; Frey, 2000
• Theoretical results:
  – For Gaussian processes, means are correct. Weiss and Freeman, 1999
  – Large neighborhood local maximum for MAP. Weiss and Freeman, 2000
  – Equivalent to Bethe approx. in statistical physics. Yedidia, Freeman, and Weiss, 2000
  – Tree-weighted reparameterization. Wainwright, Willsky, Jaakkola, 2001
Results from Bethe free energy analysis
• Fixed point of the belief propagation equations iff stationary point of the Bethe approximation.
• Belief propagation always has a fixed point.
• Connection with variational methods for inference: both minimize approximations to the free energy,
  – variational: usually use primal variables.
  – belief propagation: fixed-point equations for dual variables.
• Kikuchi approximations lead to more accurate belief propagation algorithms.
• Other Bethe free energy minimization algorithms: Yuille, Welling, etc.
References on BP and GBP
• J. Pearl, 1985
  – classic
• Y. Weiss, NIPS 1998
  – inspires application of BP to vision
• W. Freeman et al., Learning low-level vision, IJCV 1999
  – applications in super-resolution, motion, shading/paint discrimination
• H. Shum et al., ECCV 2002
  – application to stereo
• M. Wainwright, T. Jaakkola, A. Willsky
  – reparameterization version
• J. Yedidia, AAAI 2000
  – the clearest place to read about BP and GBP
Inference in Markov Random Fields
Gibbs sampling, simulated annealing
Iterated conditional modes (ICM)
Belief propagation
Application examples:
super-resolution
motion analysis
shading/reflectance separation
Graph cuts
Variational methods
26
Super-resolution
• Image: low-resolution image
• Scene: high-resolution image

[Figure: an example low-resolution image and the corresponding high-resolution scene; the ultimate goal.]
Pixel-based images are not resolution independent; polygon-based graphics images are.

[Figure: the same image enlarged by pixel replication, cubic spline, cubic spline with sharpening, and training-based super-resolution.]
3 approaches to perceptual sharpening
(1) Sharpening: boost existing high frequencies.
(2) Use multiple frames to obtain a higher sampling rate in a still frame.
(3) Estimate high frequencies not present in the image, although implicitly defined.

In this talk, we focus on (3), which we'll call "super-resolution".

[Figure: amplitude vs. spatial frequency plots illustrating the three approaches.]
Super-resolution: other approaches
• Schultz and Stevenson, 1994
• Pentland and Horowitz, 1993
• fractal image compression (Polvere, 1998; Iterated Systems)
• astronomical image processing (e.g., Gull and Daniell, 1978; "pixons" http://casswww.ucsd.edu/puetter.html)
• Follow-on: Jianchao Yang, John Wright, Thomas S. Huang, Yi Ma: Image super-resolution as sparse representation of raw image patches. CVPR 2008
Training images,
~100,000 image/scene patch pairs
Images from two Corel database categories:
“giraffes” and “urban skyline”.
31
Do a first interpolation

[Figure: the low-resolution input and its zoomed (interpolated) version.]
[Figure: the low-resolution input, the zoomed low-resolution image, and the full-frequency original for comparison.]
Representation

[Figure: the zoomed low-frequency image and the full-frequency original.]
Representation

[Figure: the zoomed low-frequency image; the low-band input (contrast normalized, PCA fitted); the full-frequency original; and the true high frequencies.]

(To minimize the complexity of the relationships we have to learn, we remove the lowest frequencies from the input image and normalize the local contrast level.)
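A rough sketch of this pre-processing (the specific filter and constants below are assumptions, not the slides' values):

```python
# Remove the lowest frequencies from the interpolated image and normalize the
# local contrast; sigma_low and eps are illustrative choices, not the paper's.
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(zoomed_lowres, sigma_low=3.0, eps=0.01):
    low = gaussian_filter(zoomed_lowres, sigma_low)        # lowest frequencies
    band = zoomed_lowres - low                             # band kept as the input
    local_energy = np.sqrt(gaussian_filter(band ** 2, sigma_low)) + eps
    return band / local_energy                             # contrast-normalized input
```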
Gather ~100,000 patches

[Figure: training data samples (magnified), shown as paired high-frequency and low-frequency patches.]
Nearest neighbor estimate

[Figure: the input low frequencies, the estimated high frequencies from the nearest training patch, and the true high frequencies, alongside the training data samples (magnified).]
Example: input image patch, and closest matches from the database

[Figure: the input patch, its closest image patches from the database, and the corresponding high-resolution patches from the database.]
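A rough sketch of the nearest-neighbour lookup behind these matches (the flattened database layout is an assumption):

```python
# For a contrast-normalized low-frequency input patch, return the high-frequency
# patch paired with its closest match in the training database.
import numpy as np

def nearest_neighbor_highfreq(input_patch, train_low, train_high):
    """train_low: (N, d) flattened low-freq patches; train_high: (N, d_hi) paired
    high-freq patches."""
    d2 = np.sum((train_low - input_patch.ravel()) ** 2, axis=1)   # squared distances
    return train_high[np.argmin(d2)]                              # paired high-freq patch
```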
Scene-scene compatibility function, Ψ(x_i, x_j)

Assume the overlapped regions, d, of candidate high-resolution patches differ by Gaussian observation noise: a uniqueness constraint, not smoothness.

[Figure: two neighboring candidate patches with their shared overlap region d.]
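A plausible form for this compatibility, following the Learning Low-Level Vision formulation (the overlap notation and the noise parameter $\sigma_s$ here are assumptions, not taken from the slide), is

$$\Psi(x_i, x_j) = \exp\!\left(-\,\frac{\lVert d_{ij}(x_i) - d_{ij}(x_j) \rVert^2}{2\sigma_s^2}\right),$$

where $d_{ij}(x_i)$ and $d_{ij}(x_j)$ are the pixels of the two candidate high-resolution patches that fall in their shared overlap region $d$.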
Image-scene compatibility function, Φ(x_i, y_i)

Assume Gaussian noise takes you from the observed image patch y to the synthetic candidate sample x.

[Figure: an observed image patch y and a candidate scene patch x.]
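Under the same hedged assumptions (with $y(x_i)$ the low-resolution patch stored with candidate $x_i$ in the training set and $\sigma_i$ an assumed noise level), a plausible form is

$$\Phi(x_i, y_i) = \exp\!\left(-\,\frac{\lVert y(x_i) - y_i \rVert^2}{2\sigma_i^2}\right).$$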
Markov network

[Figure: the Markov network over image patches and scene patches, with image-scene compatibilities Φ(x_i, y_i) and scene-scene compatibilities Ψ(x_i, x_j).]
Belief Propagation

After a few iterations of belief propagation, the algorithm selects spatially consistent high-resolution interpretations for each low-resolution patch of the input image.

[Figure: the input and the reconstructions after iterations 0, 1, and 3.]
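Below is a heavily simplified sketch (mine, not the authors' code) of this selection step. Assumptions: `phi[n]` holds Φ(x_n, y_n) for the candidate high-resolution patches at scene node n, `psi[(m, n)]` holds the scene-scene compatibilities for each grid edge (e.g., from the overlap agreement above), and `edges` lists neighboring node pairs; a few sum-product iterations are run and the most probable candidate at each node is returned.

```python
import numpy as np

def bp_select(phi, psi, edges, n_iters=3):
    """Return, for every node, the index of the candidate patch with the highest belief."""
    msgs = {}
    for m, n in edges:                               # messages in both directions per edge
        msgs[(m, n)] = np.ones(len(phi[n]))
        msgs[(n, m)] = np.ones(len(phi[m]))
    for _ in range(n_iters):
        new = {}
        for (m, n) in msgs:                          # update message m -> n
            prod = phi[m].copy()                     # local evidence Phi at the sender
            for (k, mm) in msgs:
                if mm == m and k != n:
                    prod = prod * msgs[(k, m)]       # other messages arriving at m
            compat = psi[(m, n)] if (m, n) in psi else psi[(n, m)].T
            out = compat.T @ prod                    # sum over the sender's candidates
            new[(m, n)] = out / out.sum()            # normalize for stability
        msgs = new
    choice = {}
    for n in phi:
        b = phi[n].copy()
        for (k, nn) in msgs:
            if nn == n:
                b = b * msgs[(k, n)]
        choice[n] = int(np.argmax(b))                # most probable candidate at node n
    return choice
```

Pasting the chosen high-frequency patches back into the interpolated image would then give the super-resolved output.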
Zooming 2 octaves

We apply the super-resolution algorithm recursively, zooming up two powers of 2, or a factor of 4 in each dimension.

[Figure: the 85 x 51 input, a cubic spline zoom to 340 x 204, and the maximum-likelihood zoom to 340 x 204.]
Now we examine the effect of the prior assumptions made about images on the high-resolution reconstruction. First, cubic spline interpolation (cubic spline implies a thin-plate prior).

[Figure: the original 50x58 image and the true 200x232 image.]
[Figure: the original 50x58 image, the cubic spline zoom (thin-plate prior), and the true 200x232 image.]
Next, train the Markov network algorithm on a world of random-noise images.

[Figure: the original 50x58 image, example training images, and the true high-resolution image.]
The algorithm learns that, in such a world, we add random noise when zooming to a higher resolution.

[Figure: the original 50x58 image, example training images, the Markov network result, and the true high-resolution image.]
Next, train on a world of vertically oriented rectangles.

[Figure: the original 50x58 image, example training images, and the true high-resolution image.]