http://cms.brookes.ac.uk/research/visiongroup/talks/russell/presentationc.ppt

Filling Algorithms
Pixelwise MRFs
● Pixel distributions are determined by comparison with pixels that have similar neighbourhoods.
● These distributions are then either sampled from directly, or heuristics are applied to them to decide how to fill each pixel.
Chaos Mosaics
● Patch segments are pasted, overlapping, across the image.
● Ambiguities in the overlaps are then removed either by smoothing (Chaos Mosaics, MSR), or by finding a least-cost path through the (chosen) overlapping patches. Efros '01 uses dynamic programming, while Graphcut Textures '03 uses… graphcut.
Synthesizing One Pixel
(Taken from Efros' original presentation)
[Figure: a pixel p in the generated image, its neighbourhood window matched against an infinite sample image]
● Assuming the Markov property, what is the conditional probability distribution of p, given the neighbourhood window?
● Instead of constructing a model, let's directly search the input image for all such neighbourhoods to produce a histogram for p.
● To synthesize p, just pick one match at random.
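As an illustration of this idea (a minimal sketch of my own, not code from the slides), the exact-match version can be written directly in Python/NumPy: collect every location whose neighbourhood agrees with the known pixels around p, and draw one of the corresponding centre values at random.

    import numpy as np

    def synthesize_pixel_exact(sample, neigh, known):
        """Pick a value for the centre pixel p given its partly-known window.

        sample : 2-D greyscale array to search (the 'infinite' sample image)
        neigh  : (w, w) window of values around p (centre value unknown)
        known  : (w, w) boolean mask, True where the window value is valid
        """
        w = neigh.shape[0]
        half = w // 2
        matches = []
        # Slide the window over every fully-contained position in the sample.
        for i in range(half, sample.shape[0] - half):
            for j in range(half, sample.shape[1] - half):
                patch = sample[i - half:i + half + 1, j - half:j + half + 1]
                # Exact agreement on the known pixels -> candidate for p's histogram.
                if np.array_equal(patch[known], neigh[known]):
                    matches.append(sample[i, j])
        # The collected centre values form the histogram; sample it for p.
        return np.random.choice(matches) if matches else None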
Really Synthesizing One Pixel
(Taken from Efros' original presentation)
[Figure: the same setup, but matching against a finite sample image]
● However, since our sample image is finite, an exact neighbourhood match might not be present.
● So we find the best match using SSD error (weighted by a Gaussian to emphasize local structure), and take all samples within some distance of that match.
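A minimal sketch of this relaxed search, again with names and defaults of my own choosing (the sigma = window/6.4 and the 10% tolerance are conventional Efros–Leung settings, not stated on the slide): compute a Gaussian-weighted SSD over the known pixels for every candidate window, keep all candidates close to the best score, and sample one.

    import numpy as np

    def gaussian_kernel(w, sigma):
        # 2-D Gaussian used to weight the SSD so nearby pixels dominate.
        ax = np.arange(w) - w // 2
        xx, yy = np.meshgrid(ax, ax)
        return np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))

    def synthesize_pixel_ssd(sample, neigh, known, eps=0.1):
        """sample, neigh as float arrays; unknown entries of neigh may hold
        any finite value, they are masked out by the weights."""
        w = neigh.shape[0]
        half = w // 2
        weights = gaussian_kernel(w, sigma=w / 6.4) * known
        weights /= weights.sum()
        candidates, errors = [], []
        for i in range(half, sample.shape[0] - half):
            for j in range(half, sample.shape[1] - half):
                patch = sample[i - half:i + half + 1, j - half:j + half + 1]
                ssd = np.sum(weights * (patch - neigh) ** 2)  # weighted SSD over known pixels
                candidates.append(sample[i, j])
                errors.append(ssd)
        errors = np.asarray(errors)
        best = errors.min()
        # Keep every match within a tolerance of the best, then pick one at random.
        close = [c for c, e in zip(candidates, errors) if e <= best * (1.0 + eps)]
        return np.random.choice(close)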
The pixel metric doesn't matter
● One of the surprising things about this is that the choice of pixel metric doesn't seem to matter.
● Efros uses the squared L2 norm (||·||₂²) on RGB space, I've been using ||·||₁, while Criminisi uses a metric based on the CIELab colour space.
● It doesn't seem to matter which you use, presumably because we are only concerned with nearest neighbours, and the metrics are all topologically equivalent.
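Because the metric is only used to compare candidate neighbourhoods, swapping it is a one-line change. A small sketch of the two norms mentioned above as interchangeable callables (my own framing, not code from the slides):

    import numpy as np

    # Interchangeable pixel metrics: each takes two neighbourhood windows
    # (float arrays) plus a weight mask and returns a scalar distance.
    def ssd(a, b, w):          # ||.||_2 squared, as used by Efros
        return np.sum(w * (a - b) ** 2)

    def sad(a, b, w):          # ||.||_1, the variant used in these slides
        return np.sum(w * np.abs(a - b))

    # The synthesis loop only ranks distances to find nearest neighbours,
    # so the metric can simply be passed in:  err = metric(patch, neigh, weights)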
Chaotic Mosaics
Dynamic Programming solution
[Figure: neighbouring blocks B1 and B2 constrained to agree over their overlap; a minimal-error boundary cut is found through the overlap region]
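The dynamic-programming cut itself is short enough to sketch. Assuming the two blocks overlap in a vertical strip (greyscale, float arrays), accumulate the per-pixel mismatch down the strip and backtrack the cheapest path; this is an illustrative reading of the slide, not code taken from it.

    import numpy as np

    def min_error_boundary_cut(overlap_a, overlap_b):
        """Return, per row, the column of the minimal-error vertical seam
        through the (H, W) overlap of two neighbouring blocks."""
        err = (overlap_a - overlap_b) ** 2            # per-pixel mismatch
        H, W = err.shape
        cost = err.copy()
        # Forward pass: cost of the cheapest seam reaching each pixel.
        for i in range(1, H):
            for j in range(W):
                lo, hi = max(j - 1, 0), min(j + 2, W)
                cost[i, j] += cost[i - 1, lo:hi].min()
        # Backtrack from the cheapest pixel in the last row.
        seam = np.empty(H, dtype=int)
        seam[-1] = int(np.argmin(cost[-1]))
        for i in range(H - 2, -1, -1):
            j = seam[i + 1]
            lo, hi = max(j - 1, 0), min(j + 2, W)
            seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
        return seam   # pixels left of seam[i] come from B1, the rest from B2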
Chaotic Mosaics
Graphcuts solution
● Takes full advantage of the power of the graphcut method: it treats the whole image as one patch and finds optimal joins along it.
● Pros: finds optimal (and often seamless) matches.
● Cons: doesn't find anything else; recycling the optimal matches still leaves you with tiling artefacts.
Chaotic Mosaics
Graphcuts solution
[Figure: side-by-side comparison of Image Quilting and Graphcut Textures results]
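For completeness, a toy sketch of the seam-as-min-cut idea, using networkx for the max-flow computation. The graph construction (terminal links on the two outer columns, pairwise capacities from the colour mismatch) follows the usual Graphcut Textures formulation rather than anything stated on the slide.

    import numpy as np
    import networkx as nx

    def graphcut_seam(overlap_a, overlap_b):
        """Label each pixel of a (H, W) overlap region as coming from patch A
        or patch B by solving a min-cut between the two sides."""
        H, W = overlap_a.shape
        diff = np.abs(overlap_a - overlap_b)
        G = nx.DiGraph()
        inf = float("inf")
        for i in range(H):
            # Terminal links: leftmost column must stay A, rightmost must stay B.
            G.add_edge("A", (i, 0), capacity=inf)
            G.add_edge((i, 0), "A", capacity=inf)
            G.add_edge((i, W - 1), "B", capacity=inf)
            G.add_edge("B", (i, W - 1), capacity=inf)
        for i in range(H):
            for j in range(W):
                for di, dj in ((0, 1), (1, 0)):          # 4-connected neighbours
                    ni, nj = i + di, j + dj
                    if ni < H and nj < W:
                        w = diff[i, j] + diff[ni, nj]    # mismatch cost of cutting here
                        G.add_edge((i, j), (ni, nj), capacity=w)
                        G.add_edge((ni, nj), (i, j), capacity=w)
        _, (side_a, _) = nx.minimum_cut(G, "A", "B")
        labels = np.zeros((H, W), dtype=bool)            # True -> take pixel from B
        for i in range(H):
            for j in range(W):
                labels[i, j] = (i, j) not in side_a
        return labels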
Pixel choice and Filling Algorithms
Onion skin, or Outside-in
● The first.
● The simplest?
● Works with single textures or simple convex filling regions.
● Just picks away at the image one layer at a time (sketched below).
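A minimal sketch of the onion-skin ordering, assuming scipy is available (the helper name is mine): repeatedly erode the unknown-region mask and fill the pixels that drop off each layer first.

    import numpy as np
    from scipy.ndimage import binary_erosion

    def onion_skin_order(unknown):
        """Yield boolean masks of pixels to fill, one 'layer' at a time,
        peeling the unknown region from the outside in."""
        unknown = unknown.copy()
        while unknown.any():
            core = binary_erosion(unknown)       # everything except the outer layer
            layer = unknown & ~core              # the current outermost shell
            yield layer
            unknown = core

    # Each layer's pixels would then be synthesized (e.g. with the Efros-style
    # per-pixel search above) before moving on to the next, deeper layer.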
Pixel choice and Filling Algorithms
Linear structure propagation
● Onion skin + pushing in on linear textures.
● Better than onion skin for multi-textural environments.
● When all you have is a hammer, everything starts to look like a nail: artefacts from trying too hard.
(Missing Data Correction in Still Images and Image Sequences, Bornard et al. 2002)
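One hedged reading of "pushing in on linear textures" (an illustrative heuristic of my own, not the method of Bornard et al.): among the fill-front pixels exposed by the onion skin, synthesize those lying on strong image edges first, so linear structures are extended before the surrounding texture closes over them.

    import numpy as np
    from scipy.ndimage import binary_erosion, sobel

    def edge_priority_front(image, unknown):
        """Order the current fill-front pixels so that those on strong
        edges of the known image are synthesized first (illustrative only)."""
        front = unknown & ~binary_erosion(unknown)        # outermost unknown shell
        gx, gy = sobel(image, axis=1), sobel(image, axis=0)
        strength = np.hypot(gx, gy)
        strength[unknown] = 0.0                           # trust gradients only where known
        ys, xs = np.nonzero(front)
        # For each front pixel, the strongest nearby known gradient is its priority.
        priority = [strength[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2].max()
                    for y, x in zip(ys, xs)]
        order = np.argsort(priority)[::-1]                # strongest edges first
        return list(zip(ys[order], xs[order]))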
Filling Algorithms
Onion Peel vs. Linear Propagation
[Figure: the same fill completed by the push-in (linear propagation) order and by the onion-peel order]
Pixel choice and Filling Algorithms
Max. entropy fill
● Consistent with MRF assumptions.
● Locally convex, with a minimum of occlusions at the point of fill (a rough sketch of this choice follows).
● Spirals in on simple shapes.
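A hedged sketch of one way to realise "a minimum of occlusions at the point of fill" (my own reading; names hypothetical): at each step pick the unknown pixel whose neighbourhood window contains the most known pixels, which naturally spirals inwards on simple shapes.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def most_constrained_pixel(unknown, window=15):
        """Return the (row, col) of the unknown pixel whose window
        contains the largest fraction of already-known pixels."""
        known_fraction = uniform_filter((~unknown).astype(float), size=window)
        known_fraction[~unknown] = -1.0          # only consider pixels still to be filled
        return np.unravel_index(np.argmax(known_fraction), unknown.shape)

    # Filling loop (illustrative): repeatedly pick the most constrained pixel,
    # synthesize it with the neighbourhood search above, mark it known, repeat.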
Why Coarse to Fine?
The same but quicker...
[Figure panels: standard Efros, 21-pixel neighbourhood; coarse-to-fine, 15-pixel neighbourhood; standard Efros, 15-pixel neighbourhood; coarse-to-fine, uniform pixel weighting; Efros, uniform pixel weighting, 15; new-metric Efros, 15 pixels]
Why Coarse to Fine? Structures
[Figure: example structures, and the same structures treated as textures]
Why Coarse to Fine? Texture as structure?
● Strong linear propagation comes naturally from Efros-style fills.
● It is just not readily apparent, because of the onion-skin fill order.
[Figure: Efros fill vs. linear propagation]
The Efros Algorithm with my pixel choice
● No guarantee it's any better than linear propagation.
● The algorithm often spots at the coarser levels that it has insufficient data to complete the fill.
● Annoyingly, this problem is not solvable by more data in the sense of higher-resolution images: information about how to propagate edges at the lowest resolution is still needed.
● Compare it with a smaller fill.
Why Coarse to Fine?
Is manifold learning possible?
● Up to a point, we don't care what colour the books are when propagating the shelf.
● Similar structural edge patterns are apparent everywhere.
Why Coarse to Fine?
Problems
● Speed: massively slower (hours rather than seconds) than patch-based synthesis, even with coarse-to-fine reducing the neighbourhood size.
● Can we reduce the search space via image segmentation?
● Alternatively, turn our soft coarse-to-fine constraints into something harder by only testing pixels from the neighbourhoods of the k closest fits at the coarser level (a sketch of this restriction follows).
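A hedged sketch of that harder constraint (the structure is my guess at what the slide proposes; names and the scale factor of 2 are assumptions): find the k best coarse-level matches for the coarse neighbourhood, then at the fine level only evaluate candidate centres that project into those coarse neighbourhoods.

    import numpy as np

    def restricted_candidates(coarse_sample, coarse_neigh, coarse_known, k=5, radius=8):
        """Return fine-level candidate centres derived from the k best
        coarse-level neighbourhood matches (coarse-to-fine scale factor of 2)."""
        w = coarse_neigh.shape[0]
        half = w // 2
        scores = []
        for i in range(half, coarse_sample.shape[0] - half):
            for j in range(half, coarse_sample.shape[1] - half):
                patch = coarse_sample[i - half:i + half + 1, j - half:j + half + 1]
                ssd = np.sum(coarse_known * (patch - coarse_neigh) ** 2)
                scores.append((ssd, i, j))
        scores.sort()
        candidates = set()
        for _, ci, cj in scores[:k]:
            # Project the coarse match up one pyramid level and take a small
            # window of fine-level centres around it.
            fi, fj = 2 * ci, 2 * cj
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    candidates.add((fi + di, fj + dj))
        return candidates   # the fine-level search is then limited to these centres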
Problems
● Surprisingly, this restriction could even increase robustness, by preventing the growth of mis-fits.