Accelerating Spatially Varying Gaussian Filters


Summer Seminar
Ruizhen Hu
Sampling
• Spectral Sampling of Manifolds (Siggraph Asia 2010)
• Accurate Multidimensional Poisson-Disk Sampling (TOG)
• Efficient Maximal Poisson-Disk Sampling
• Blue-Noise Point Sampling using Kernel Density Model
• Differential Domain Analysis for Non-uniform Sampling
Noise & filtering
• Filtering Solid Gabor Noise
• Accelerating Spatially Varying Gaussian Filters
Spectral Sampling of Manifolds
A. Cengiz Öztireli
ETH Zürich
Marc Alexa
TU Berlin
Markus Gross
ETH Zürich
Authors
A. Cengiz Öztireli
PhD Candidate
Computer Graphics Laboratory
ETH Zürich
Research interests:
Reconstruction, sampling and
processing of surfaces, and
sketch-based modeling.
Marc Alexa
Professor
Electrical Engineering
& Computer Science
TU Berlin
Markus Gross
Professor
Department of Computer Science
ETH Zürich
Research interests:
Computer graphics, image
generation, geometric modeling,
computer animation, and
scientific visualization
Motivation
• Goal: finding optimal sampling conditions for a given
surface representation
• Work: propose a new
method to solve this
problem based on
spectral analysis of
manifolds, kernel
methods and matrix
perturbation theory
Contributions
• Efficient, simple to implement, easy to control through
intuitive parameters, feature sensitive
• Results in accurate reconstructions with kernel-based
approximation methods and high-quality isotropic
samplings
• A discrete spectral analysis of manifolds using results from
kernel methods and matrix perturbation theory
Main Algorithm
• Input: a set of points lying near a manifold with normals + a
kernel function definition = a continuous surface
Algorithms for Sampling
• Subsampling: measuring the effect of a point on the
manifold using the Laplace-Beltrami spectrum
Algorithms for Sampling
• Resampling: maximizing and equalizing s(x) for all points
– use local operations and move points in a simple gradient ascent (a rough sketch follows)
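Not the paper's exact procedure, but as a rough illustration (the gradient of s(x) is assumed given, and the local surface re-projection is omitted):

```python
import numpy as np

def resample(points, grad_s, step=0.1, iters=50):
    """Plain gradient ascent on the per-point measure s(x).
    grad_s: callable returning the gradient of s at a point (assumed given).
    The paper additionally keeps points on the surface via local
    operations, which this sketch omits."""
    pts = np.asarray(points, dtype=float)
    for _ in range(iters):
        grads = np.array([grad_s(p) for p in pts])
        pts = pts + step * grads
    return pts
```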
Results
Conclusions
• New algorithms for the simplification and resampling of
manifolds depending on a measure that restricts changes
to the Laplace-Beltrami spectrum
• Limitations: the algorithms are greedy and thus not
theoretically guaranteed to give the optimal sampling
• Future Directions:
– texture on a surface
– isotropic adaptive remeshing
Accurate Multi-Dimensional
Poisson-Disk Sampling
Manuel N. Gamito
Lightwork Design Ltd
Steve Maddock
The University of Sheffield
Authors
Manuel Noronha Gamito
Software Engineer
Lightwork Design Ltd
Steve Maddock
Senior Lecturer
The University of Sheffield
Research interests:
character animation,
specifically modelling and
animating faces
Poisson-Disk Sampling
• Definition:
– Each sample is placed with uniform probability density
– No two samples are closer than 2r, where r is some chosen
distribution radius
– A distribution is maximal if no more samples can be inserted
• Poisson-Disk sampling is useful for:
– Distributed ray tracing [Cook 1986; Hachisuka et al. 2008]
– Object placement and texturing
[Lagae and Dutré 2006; Cline et al. 2009]
– Stippling and dithering [Deussen et al. 2000; Secord et al. 2002]
– Global Illumination [Lehtinen et al. 2008]
Previous Methods
• Approximate Methods
– Relax at least one of the sampling conditions
• Accurate Methods
– Brute force
• Dart Throwing [Dippé and Wold 1985] (a minimal sketch follows this list)
– Assisted by a spatial data structure
• Voronoi diagram [Jones 2006]
• Scalloped sectors [Dunbar and Humphreys 2006]
• Uniform grid [Bridson 2007]
• Simplified subdivision tree and uniform grid [White et al. 2007]
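As a point of reference, here is a minimal brute-force dart-throwing sketch over the unit square (the baseline the accurate methods accelerate; the termination heuristic and the 2r spacing convention are our own choices):

```python
import random

def dart_throwing(r, max_failures=10000):
    """Brute-force Poisson-disk sampling in the unit square.
    A dart is accepted only if it lies at least 2r from every existing
    sample; we stop after max_failures consecutive rejections, so the
    result is only approximately maximal."""
    samples, failures = [], 0
    while failures < max_failures:
        p = (random.random(), random.random())
        if all((p[0] - q[0])**2 + (p[1] - q[1])**2 >= (2 * r)**2 for q in samples):
            samples.append(p)
            failures = 0
        else:
            failures += 1
    return samples
```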
Main Algorithm
Radius vs. Number of Samples
• A distribution can be specified by supplying either
– The distribution radius r
– The desired number of samples N
• When the number of samples is specified
– The algorithm uses a radius r based on N and on the measured
packing density of sample disks
• The packing density was obtained by averaging the packing
densities measured from 100 distributions generated by our
algorithm
• The number of samples of the resulting maximal distribution is
approximately equal to the desired number N (error < 5%); the
radius relation is sketched below
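Assuming a unit-area domain and defining the packing density $\eta$ as the fraction of area covered by the $N$ disks of radius $r$, the radius computation presumably inverts

$$\eta = N \pi r^2 \quad\Longrightarrow\quad r = \sqrt{\frac{\eta}{N\pi}},$$

with $\eta$ taken as the average measured over the 100 trial distributions.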
Results
• Number of samples
• Sampling time
• Samples per second
Conclusions
• A Poisson-Disk Sampling Algorithm that
– Is statistically correct (see proof in paper)
– Is efficient through the use of a subdivision tree
– Works in any number of dimensions
• Subject to available physical memory
– Generates maximal distributions
– Allows approximate control over the number of samples
– Can enforce periodic or wall boundary conditions on the
boundaries of the domain
Future Work
• Make it multi-threaded
– Distant parts of the domain can be sampled in parallel with
different threads
– Some synchronisation between threads is still required
• Generate non-uniform distributions
– Have the distribution radius be a function of the position in the
domain
• Work over irregular domains
– Discard subdivided tree nodes that fall outside the domain
Efficient Maximal Poisson-Disk Sampling
Authors
Mohamed S. Ebeida
Postdoc
Carnegie Mellon University
Andrew Davidson
PhD student
University of California, Davis
Anjul Patney
PhD student
University of California, Davis
Patrick M. Knupp
Distinguished Member of Technical Staff
Sandia National Laboratories
Scott A. Mitchell
Principal Member of Technical Staff
Sandia National Laboratories
John D. Owens
Associate Professor
University of California, Davis
Work
• generating a uniform Poisson-disk sampling that is both
maximal and unbiased over bounded non-convex domains
Motivation
• Maximal Poisson-disk sampling distributions:
– Avoid aliasing
– Have blue noise property
• Bias-free:
– Crucial in fracture propagation simulations
Conditions
• Maximal: the sample disks together cover the whole
domain, leaving no room to insert an additional point
• Bias-free: the likelihood of a sample being inside any subdomain is
proportional to the area of the subdomain, provided the
subdomain is completely outside all prior samples’ disks
Previous methods
• Relax the unbiased or maximal conditions, or require
potentially unbounded time or space
• Dart-throwing
– unbiased but not maximal
• Tile-based
– biased and requires relatively large storage
Main Algorithm
• First phase:
– an unbiased, near-maximal covering
– voids: the part of a grid cell outside all circles
• Second phase:
– completes the maximal covering
– darts are thrown directly into the voids, maintaining the bias-free condition
– A maximal distribution is achieved when the domain is
completely covered, leaving no room for new points to be
selected
Sequential Sampling
1. Generate a background grid; mark interior and boundary
cells
2. Phase I. Throw darts into square cells; remove hit cells
3. Generate polygonal approximations to the remaining
voids
4. Phase II. Throw darts into voids; update remaining areas
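A compressed sketch of this two-phase structure follows. One simplification to note: Phase II here re-throws darts into the bounding squares of surviving cells instead of the paper's polygonal void approximations, so this sketch is neither strictly bias-free nor guaranteed maximal.

```python
import math
import random

def two_phase_sampling(r, phase1_throws=4, phase2_cap=100000):
    """Two-phase Poisson-disk sketch in the unit square. Cell size r/sqrt(2)
    means a cell's diagonal is r, so each cell holds at most one sample;
    the minimum inter-sample distance is r."""
    cell = r / math.sqrt(2)
    n = int(math.ceil(1.0 / cell))
    grid = {}  # (i, j) -> accepted sample

    def fits(p):
        ci, cj = int(p[0] / cell), int(p[1] / cell)
        for di in range(-2, 3):
            for dj in range(-2, 3):
                q = grid.get((ci + di, cj + dj))
                if q and (p[0] - q[0])**2 + (p[1] - q[1])**2 < r * r:
                    return False
        return True

    def throw(i, j):
        p = ((i + random.random()) * cell, (j + random.random()) * cell)
        if fits(p):
            grid[(i, j)] = p
            empty.discard((i, j))

    empty = {(i, j) for i in range(n) for j in range(n)}
    # Phase I: throw darts into square cells; a hit removes the cell.
    for _ in range(phase1_throws * n * n):
        throw(*random.choice(tuple(empty)))
        if not empty:
            break
    # Phase II (simplified): keep throwing into the surviving cells. The
    # paper instead samples the exact polygonal voids and discards cells
    # whose void area reaches zero, which is what makes its result
    # provably maximal and bias-free.
    for _ in range(phase2_cap):
        if not empty:
            break
        throw(*random.choice(tuple(empty)))
    return list(grid.values())
```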
Algorithm through Phase I
Voids
• Polygonal approximations to arc-voids
Results
Implementation Performance
Conclusions
• An efficient algorithm for maximal Poisson-disk sampling in
two-dimensions
– the final result is provably maximal
– the sampling is unbiased
– it is O(n log n) in expected time
– it is O(n) in deterministic memory required
– not limited to convex domains
– efficiently implemented in both sequential and parallel forms
• Future work: a 3D maximal Poisson-disk sampling algorithm
Blue-Noise Point Sampling using
Kernel Density Model
Raanan Fattal
Hebrew University of Jerusalem, Israel
Author
Raanan Fattal
Alon faculty member
School of Computer Science and
Engineering
The Hebrew University of Jerusalem
Work
• A new approach for generating point sets with high-quality
blue noise properties that formulates the problem using a
statistical mechanics interacting particle model
Contributions
• present a new approach that formulates the problem using
a statistical mechanics interacting particle model
• derive a highly efficient multi-scale sampling scheme for
drawing random point distributions
• avoid the critical slowing-down phenomenon that plagues
this type of model
Previous work
• Dart throwing
– constrain a minimal distance between every pair of points
• Relaxation
– follow a greedy strategy that maximizes this distance
– two main shortcomings:
• termination
• imprecision: apparent blur
New Approach
• model the target density as a sum of nonnegative
radially-symmetric kernels, the j-th kernel centered around
the point x_j (a plausible form follows)
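The kernel formula itself did not survive extraction; a plausible form, with a radial profile $K$ and bandwidth $h$ (both notational choices ours), is

$$K_j(\mathbf{x}) = K\!\left(\frac{\lVert \mathbf{x} - \mathbf{x}_j \rVert}{h}\right), \qquad \tilde\rho(\mathbf{x}) = \sum_j K_j(\mathbf{x}).$$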
New Approach
• The error E of this approximation (a plausible form follows this list)
• Minimizing E with respect to the kernel centers
– is equivalent to the result obtained by converged Lloyd iterations
– has the ability to achieve spectral enhancement
• We unify error minimization and randomness by defining a
statistical mechanics particle model using E
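The error formula is likewise missing; for a target density $\rho(\mathbf{x})$ it plausibly takes the least-squares form

$$E(\mathbf{x}_1,\dots,\mathbf{x}_n) = \int \Bigl(\rho(\mathbf{x}) - \sum_j K_j(\mathbf{x})\Bigr)^{2} d\mathbf{x}.$$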
New Approach
• Assigning each configuration a probability density according
to the following Boltzmann-Gibbs distribution (reconstructed
below):
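With $E$ as above and a temperature parameter $T$ controlling the amount of randomness, the distribution is presumably

$$P(\mathbf{x}_1,\dots,\mathbf{x}_n) \propto \exp\bigl(-E(\mathbf{x}_1,\dots,\mathbf{x}_n)/T\bigr).$$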
Drawing samples
• Markov-chain Monte Carlo (MCMC)
• Gibbs sampler
• Langevin method
• Metropolis-Hastings (MH) test
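A bare-bones Metropolis-Hastings sweep for such a particle model might look as follows (recomputing the full energy per move is deliberately naive; the paper's scheme uses local updates and the multi-scale strategy below precisely to avoid this cost and the critical slowing down):

```python
import math
import random

def mh_sweep(points, energy, step, T):
    """One Metropolis-Hastings sweep over all particles (illustrative only).
    points: list of [x, y]; energy: callable evaluating E on all points;
    T: temperature of the Boltzmann-Gibbs distribution."""
    E = energy(points)
    for i in range(len(points)):
        old = points[i]
        points[i] = [old[0] + random.gauss(0, step),
                     old[1] + random.gauss(0, step)]
        E_new = energy(points)
        # Accept with probability min(1, exp(-(E_new - E) / T)).
        if E_new <= E or random.random() < math.exp(-(E_new - E) / T):
            E = E_new            # keep the move
        else:
            points[i] = old      # reject: restore the old position
    return points
```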
Critical slowing down
Multi-scale sampling
Results
Differential Domain Analysis for
Non-uniform Sampling
Li-Yi Wei
Microsoft Research
Rui Wang
University of Massachusetts Amherst
Authors
Li-Yi Wei
Researcher
Microsoft Research
Rui Wang
Assistant Professor
Department of Computer Science
University of Massachusetts
Work
• new methods for analyzing non-uniform sample
distributions
Previous work
• Two common methodologies exist for evaluating the
quality of samples
– spatial uniformity: discrepancy & relative radius
– power spectrum analysis, including radial mean and anisotropy
• However, existing methods are primarily designed for
uniform Euclidean domains and cannot be easily extended
to general non-uniform scenarios, such as
– adaptive
– anisotropic
– non-Euclidean domains
Contributions
• A reformulation of standard Fourier spectrum analysis into
a form that depends on sample location differentials
• A generalization of this basic formulation, including
different distance transformations for various domains, and
range selection for better control of quality and speed
• Applications in spectral and spatial analysis for non-uniform
sample distributions
Core idea
• Fourier power spectrum
– Fourier transform:
– power spectrum:
• Differential representation:
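The formulas were lost in extraction; reconstructed from the standard definitions for $N$ samples $\mathbf{s}_k$, they should read approximately

$$F(\mathbf{f}) = \sum_{k=1}^{N} e^{-2\pi i\,\mathbf{f}\cdot\mathbf{s}_k}, \qquad P(\mathbf{f}) = \frac{|F(\mathbf{f})|^2}{N} = \frac{1}{N}\sum_{j,k} e^{-2\pi i\,\mathbf{f}\cdot(\mathbf{s}_j-\mathbf{s}_k)},$$

so the power spectrum depends on the samples only through the differentials $\mathbf{d}_{jk} = \mathbf{s}_j - \mathbf{s}_k$.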
Core idea
• Integral form:
• General kernel:
– Range selection (for computational reasons):
where d(s, s′) is a local distance measure of s′ with respect to the local
frame centered at s; in uniform Euclidean domains, d(s, s′) = s′ − s.
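A plausible reconstruction of the integral and kernel forms (our notation: kernel $\kappa$, range $r_{\max}$):

$$P(\mathbf{f}) = \int p(\mathbf{d})\, e^{-2\pi i\,\mathbf{f}\cdot\mathbf{d}}\, d\mathbf{d}, \qquad p(\mathbf{d}) = \frac{1}{N}\sum_{j\neq k} \kappa\bigl(\mathbf{d} - d(\mathbf{s}_j,\mathbf{s}_k)\bigr),$$

with $\kappa$ a cosine kernel for exact Fourier analysis or a Gaussian for smoother estimates, and range selection keeping only $\lVert\mathbf{d}\rVert \le r_{\max}$.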
Core idea
• Non-uniform domain:
where the differential domain transformation function
locally warps each differential d from a non-uniform domain
to a (hypothetical) uniform one
Kernel selection
• a cos kernel may amplify some information while obscuring
others
• A Gaussian kernel, in contrast, displays the main peak
clearly without undulations; it can manifest the distribution
properties more clearly, e.g. more apparent characteristic
structures
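To make the Gaussian-kernel variant concrete in the uniform Euclidean case, here is a direct accumulation of $p(\mathbf{d})$ (all names and parameters are ours, and the $O(N^2)$ loop is a sketch, not the paper's implementation):

```python
import numpy as np

def differential_histogram(points, bins=64, extent=0.2, sigma=0.01):
    """Estimate p(d) by splatting pairwise differentials d = s_j - s_k
    onto a 2D grid with a Gaussian kernel. Restricting to the grid
    extent plays the role of range selection."""
    pts = np.asarray(points, dtype=float)
    d = (pts[None, :, :] - pts[:, None, :]).reshape(-1, 2)
    d = d[np.any(d != 0.0, axis=1)]              # drop the j == k terms
    d = d[np.max(np.abs(d), axis=1) <= extent]   # range selection
    axis = np.linspace(-extent, extent, bins)
    gx, gy = np.meshgrid(axis, axis, indexing="ij")
    p = np.zeros((bins, bins))
    for dx, dy in d:
        p += np.exp(-((gx - dx)**2 + (gy - dy)**2) / (2.0 * sigma**2))
    return p / len(pts)
```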
Spectral Analysis
• Exact
– Isotropic Euclidean domain
– Anisotropic Euclidean domain
• Range selection
• Radial measures
– We can compute the circular average and variance of p(d)
– The former gives the radial mean, indicating the overall
distance-based property of p(d)
– The latter gives the anisotropy, which reveals if there is any
directional bias/structure in the distribution
Comparisons
• How does our method relate and compare to traditional
Fourier spectrum analysis?
Results
Filtering Solid Gabor Noise
Ares Lagae
Katholieke Universiteit Leuven
George Drettakis
REVES/INRIA Sophia-Antipolis
Authors
Ares Lagae
Postdoctoral Fellow
Computer Graphics Research Group
Katholieke Universiteit Leuven
George Drettakis
Group Leader
REVES/Inria Sophia-Antipolis
Work
• we show that a slicing approach is required to preserve
continuity across sharp edges, and we present a new noise
function that supports anisotropic filtering of sliced solid
noise
Filtering
• Filtering the noise on the surface using frequency clamping
works better if the power spectrum of the noise on the
surface is bandpass.
• The Fourier Slice Theorem:
projecting in the spatial
domain corresponds to
slicing in the frequency
domain, and vice versa
Previous work
• Perlin noise[Perlin 1985]:
– use slicing and frequency clamping
– Filtering introduces an aliasing vs. detail loss trade-off since the
power spectrum of Perlin noise is not band-pass
• Wavelet noise [Cook and DeRose 2005]:
– use projection and frequency clamping
– Even though the power spectrum of the noise on the surface is
band-pass, filtering does not fully solve the aliasing vs. detail
loss trade-off
• Gabor noise [Lagae et al. 2009]:
– use projection and a filtering approach specific to Gabor noise
– This results in high-quality anisotropic filtering; however, it
introduces discontinuities at sharp edges
New noise
• solid random-phase Gabor noise:
– use slicing (preserves continuity at sharp edges)
– Since filtering is inherently a 2D operation, we have to explicitly
model the slicing of the 3D Gabor kernels to be able to filter the
resulting 2D Gabor kernels. This requires the introduction of
• a new Gabor kernel, the phase-augmented Gabor kernel
• a new Gabor noise, random-phase Gabor noise
Slicing Solid Gabor Noise
• The Gabor kernel of
Lagae et al. [2009] is
not closed under
slicing:
Phase-augmented Gabor kernel
• The phase-augmented Gabor kernel is closed under slicing:
• solid random-phase Gabor noise (also closed under slicing):
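The kernel formulas did not survive extraction. In the spirit of Lagae et al.'s notation (magnitude $K$, bandwidth $a$, frequency $\mathbf{F}_0$, phase $\phi$; the reconstruction is ours), the phase-augmented kernel is roughly

$$g(\mathbf{x};\phi) = K\, e^{-\pi a^2 \lVert\mathbf{x}\rVert^2} \cos\bigl(2\pi\,\mathbf{F}_0\cdot\mathbf{x} + \phi\bigr),$$

and slicing a 3D kernel at $z = z_0$ gives

$$K\, e^{-\pi a^2 z_0^2}\, e^{-\pi a^2 (x^2+y^2)} \cos\bigl(2\pi(F_x x + F_y y) + 2\pi F_z z_0 + \phi\bigr),$$

again a phase-augmented 2D Gabor kernel. The phase-free kernel of Lagae et al. [2009] cannot absorb the extra $2\pi F_z z_0$ term, which is why it is not closed under slicing.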
Slicing Random-Phase Gabor Noise
• solid random-phase Gabor noise is also closed under slicing
• the statistical properties of a sliced solid random-phase
Gabor noise are obtained using the analytical expressions
for the statistical properties of a 2D random-phase Gabor
noise
Filtering Sliced Solid Gabor Noise
Conclusion
• A new procedural noise function, random-phase Gabor
noise, that supports
– continuity across sharp edges
– high-quality anisotropic filtering
– anisotropy
• Future work:
– exploring volumetric filtering of solid Gabor noise
– further exploring anisotropy in the context of solid noise
– designing user interfaces for interacting with anisotropic solid
noise
Results
Accelerating Spatially Varying
Gaussian Filters
Jongmin Baek
David E. Jacobs
Stanford University
Authors
Jongmin Baek
Ph.D. student
Stanford University
David E. Jacobs
Ph.D. candidate
Stanford University
Motivation
(Figure: an input image, its result under a Gaussian filter, and its result under a spatially varying Gaussian filter.)
Roadmap
1) Gaussian Filters
2) Spatially Varying Gaussian Filters
3) Accelerating Spatially Varying Gaussian Filters
4) Applications
Gaussian Filters
• Each input sample has a position and a value.
• Each output value is a weighted sum of input values, whose weight is a Gaussian in the space of the associated positions.
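In symbols, with positions $\mathbf{p}_i$ and values $v_i$ (unnormalized, for brevity):

$$v_i' = \sum_j \exp\!\Bigl(-\tfrac{1}{2}\,(\mathbf{p}_i-\mathbf{p}_j)^{\top}\Sigma^{-1}(\mathbf{p}_i-\mathbf{p}_j)\Bigr)\, v_j.$$

Choosing the position space recovers the uses listed next: $\mathbf{p} = (x, y)$ gives a Gaussian blur, $\mathbf{p} = (x, y, I)$ the bilateral filter, and $\mathbf{p}$ = a patch descriptor the non-local means filter.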
Gaussian Filters: Uses
• Gaussian Blur
• Bilateral Filter
• Non-local Means Filter
Applications
• Denoising images and meshes
• Data fusion and upsampling
• Abstraction / Stylization
• Tone-mapping
• ...
Previous work on fast Gaussian Filters
• Bilateral Grid (Chen, Paris, Durand; 2007)
• Gaussian KD-Tree (Adams et al.; 2009)
• Permutohedral Lattice (Adams, Baek, Davis; 2010)
Gaussian Filters: Implementations
Summary of previous implementations:
• A separable blur flanked by resampling operations.
• They exploit the separability of the Gaussian kernel.
Spatially Varying Gaussian Filters
• Spatially invariant: a single covariance matrix for the whole image.
• Spatially varying: the covariance matrix changes from pixel to pixel.
Spatial Variance in Previous Work
Trilateral Filter (Choudhury and Tumblin, 2003)
• Tilts the kernel of a bilateral filter along the image gradient.
• A "piecewise linear" instead of a "piecewise constant" model.
Spatially Varying Gaussian Filters: Tradeoff
Benefits:
• Can adapt the kernel spatially.
• Better filtering performance.
Cost:
• No longer separable.
• No existing acceleration schemes.
(Figure: input, bilateral-filtered, and trilateral-filtered results.)
Acceleration
Problem:
• A spatially varying (thus non-separable) Gaussian filter.
Existing tool:
• Fast algorithms for spatially invariant Gaussian filters.
Solution:
• Re-formulate the problem to fit the tool.
• Need to obey the "piecewise-constant" assumption.
Naïve Approach (Toy Example)
(Figure: a 1D toy example. The input signal is filtered once with each of the kernels 1-4; the output at each pixel takes the value from the result filtered with that pixel's desired kernel.)
Challenge #1
In practice, the # of kernels can be very large.
(Figure: the desired kernel K(x) varies with pixel location x, so the range of kernels needed is large.)
Solution #1
Sample a few kernels and interpolate.
(Figure: a few kernels K1, K2, K3 are sampled along the range of K(x), and intermediate kernels are handled by interpolating their results.)
Assumptions
Interpolation needs an extra assumption to work:
• The covariance matrix Σi is either piecewise-constant or smoothly varying.
• The kernel is spatially varying, but locally spatially invariant.
Challenge #2
Runtime scales with the # of sampled kernels.
Filter only some regions of the image with each kernel (its "support").
(Figure: the sampled kernels K1, K2, K3 and the pixel regions each must cover.)
Defining the Support
In this example, x needs to be in the support of K1 & K2.
(Figure: x lies between the samples of K1 and K2, so both supports must contain it.)
Dilating the Support
(Figure: each kernel's support is dilated so that the neighborhoods needed for filtering are included.)
Algorithm
1) Identify kernels to sample.
2) For each kernel, compute the support needed.
3) Dilate each support.
4) Filter each dilated support with its kernel.
5) Interpolate from the filtered results.
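A minimal sketch of the five steps for the isotropic, scalar-σ special case (we filter the full image with each sampled kernel and skip the support/dilation speedup of steps 2-3; every name here is ours, not the paper's):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def kernel_sampling(image, sigma_map, sigmas):
    """Spatially varying Gaussian blur via sampled kernels + interpolation.
    image: 2D array; sigma_map: per-pixel desired sigma;
    sigmas: the sampled kernel sigmas."""
    sigmas = np.sort(np.asarray(sigmas, dtype=float))
    # Step 4 (without steps 2-3): filter everything with each sampled kernel.
    filtered = [gaussian_filter(image.astype(float), s) for s in sigmas]
    # Step 5: interpolate each pixel between its two bracketing kernels.
    idx = np.clip(np.searchsorted(sigmas, sigma_map), 1, len(sigmas) - 1)
    lo, hi = sigmas[idx - 1], sigmas[idx]
    t = np.clip((sigma_map - lo) / np.maximum(hi - lo, 1e-12), 0.0, 1.0)
    out = np.zeros_like(image, dtype=float)
    for k in range(1, len(sigmas)):
        m = idx == k
        out[m] = (1 - t[m]) * filtered[k - 1][m] + t[m] * filtered[k][m]
    return out
```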
Applications
• HDR Tone-mapping
• Joint Range Data Upsampling
Application #1: HDR Tone-mapping
(Diagram: the input HDR image is split into a base layer and a detail layer; the base is range-compressed and recombined with the detail to produce the output. A sketch follows.)
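The diagram corresponds to the generic base/detail pipeline; a compact sketch (a plain Gaussian stands in for the paper's edge-preserving filter, and all parameters are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter  # stand-in for the edge-preserving filter

def tonemap(luminance, compression=0.5, sigma=16):
    """Base/detail tone-mapping sketch: compress the base, keep the detail."""
    log_l = np.log10(np.maximum(luminance, 1e-6))
    base = gaussian_filter(log_l, sigma)      # smooth base layer
    detail = log_l - base                     # detail layer, kept intact
    out_log = compression * base + detail     # compress only the base
    return 10.0 ** (out_log - out_log.max())  # normalize so the max is 1
```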
Tone-mapping Example
(Figure: results with the bilateral filter vs. kernel sampling.)
Application #2: Joint Range Data Upsampling
Range finder data is:
• Sparse
• Unstructured
• Noisy
(Figure: scene image, range finder data, and the upsampled output.)
Synthetic Example
(Figures: the scene image shown with its ground-truth depth, and with the simulated sensor data.)
Synthetic Example : Result
(Figure: depth results with the bilateral filter vs. kernel sampling.)
Synthetic Example: Relative Error
• Bilateral filter: 2.41% mean relative error
• Kernel sampling: 0.95% mean relative error
Real-World Example
Scene Image
Range Finder Data
*Dataset courtesy of Jennifer Dolson, Stanford University
Real-World Example: Result
(Figure: input, bilateral filter, naïve approach, and kernel sampling results.)
Performance

Tone-mapping runtimes:
            Kernel Sampling    Choudhury and Tumblin (2003)    Naïve
Tonemap1    5.10 s             41.54 s                         312.70 s
Tonemap2    6.30 s             88.08 s                         528.99 s

Depth-upsampling runtimes:
            Kernel Sampling    Kernel Sampling (no segmentation)
Depth1      3.71 s             57.90 s
Depth2      9.18 s             131.68 s
Conclusion
1. A generalization of Gaussian filters
• Spatially varying kernels
• Lose the piecewise-constant assumption.
2. Acceleration via Kernel Sampling
• Filter only necessary pixels (and their support)
and interpolate.
3. Applications