Journal Club - Image Fusion

JOURNAL CLUB:
Yang and Ni, Xidian University, China
“Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform.”
Jul 21, 2014
Jason Su
Motivation
• Visualization of multiple image modalities or contrasts
is difficult
– Side by side comparisons are often not precise
– Flipping back and forth helps to highlight changes, but some modalities may have no structural landmarks
• Beginning to collect time-resolved “MP-nRAGE” with
view-sharing methods
– What is the best way to visualize such data, esp. for
thalamic nuclei?
– Can we do something else other than fitting a T1 map?
Goal: Image Fusion
• The combination of multiple images into one while
preserving the important information from each
• Common examples:
– fMRI overlays on structural images
– Segmentation overlays
– Nuclear medicine overlays
– HDR photography
• Compared to quantitative imaging, the goal is an effect pleasing to the eye rather than a fit to a model
– Thus there are many possible algorithms and no
necessarily “correct” way to do things
Background: Types of data fusion
• Signal level, pixel level
– Image fusion, e.g. averaging, SOS, MIP (sketched below)
– Region-based: consider neighborhood around
current pixel
• Feature level
– Label fusion segmentation: combine multiple
candidate labels to identify features
• Decision level
– Image biomarkers
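
A minimal numpy sketch of the pixel-level rules named above; fuse_pixelwise is a hypothetical helper name, and the inputs are assumed to be registered images of the same shape:

```python
import numpy as np

def fuse_pixelwise(a, b, mode="average"):
    """Pixel-level fusion of two registered images (illustrative only)."""
    if mode == "average":
        return 0.5 * (a + b)          # simple mean of the two inputs
    if mode == "sos":
        return np.sqrt(a**2 + b**2)   # sum-of-squares combination
    if mode == "mip":
        return np.maximum(a, b)       # maximum intensity projection
    raise ValueError(f"unknown mode: {mode}")
```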
Gaussian and Laplacian Pyramid
• Pyramids are multiresolution decompositions of images
• Each level is subsampled by a factor of 2, i.e. each level is an octave
• GP: successively blurred and downsampled versions of the image
– Gives the scale of features in the image
• LP: take differences between adjacent Gaussian levels (sketched below)
– Gives information about edges of varying widths
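
A minimal sketch of both pyramids using OpenCV's pyrDown/pyrUp; the function names are my own, not from the paper:

```python
import cv2
import numpy as np

def gaussian_pyramid(img, levels=4):
    """GP: successively blur and downsample by 2 (one octave per level)."""
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    return gp

def laplacian_pyramid(gp):
    """LP: differences between adjacent Gaussian levels (band-pass detail)."""
    lp = []
    for i in range(len(gp) - 1):
        up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
        lp.append(gp[i] - up)  # edge detail lost between levels i and i+1
    lp.append(gp[-1])          # keep the coarsest level for reconstruction
    return lp
```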
ROLP/Contrast Pyramid
• Ratio of low-pass pyramid: take ratios between adjacent Gaussian levels
• Contrast: C = (L - L_b)/L_b, where L_b is the blurred (low-pass) background
• Ratio: R = L/L_b = C + 1 (see the sketch below)
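
A minimal sketch of the ratio pyramid, built from the Gaussian pyramid helper above; the eps guard against division by zero is my addition:

```python
import cv2

def rolp_pyramid(gp, eps=1e-6):
    """ROLP: ratio of each Gaussian level to its blurred background,
    R = L / L_b = C + 1 (eps avoids division by zero; my addition)."""
    rp = []
    for i in range(len(gp) - 1):
        up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
        rp.append(gp[i] / (up + eps))  # local contrast ratio per pixel
    rp.append(gp[-1])
    return rp
```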
Background: The -lets
• Discontinuities destroy the sparsity of a Fourier series (the Gibbs phenomenon)
• Wavelets – are localized and multi-scale
– Perform well in 1D, but poor sense of orientation for 2D
– Only horizontal, vertical or diagonal
• How to better represent a 2D image?
– Want multiresolution, localization, critical sampling, directionality, anisotropy
• Curvelets – Candès et al.
– Developed in continuous domain then adapted to discrete
– Optimally sparse representation
for smooth 2D functions except
for a discontinuity along a curve
– Models wave propagation
• Contourlets – Do and Vetterli
– Developed in discrete domain
[Figure: pointillism-like approximation of an image]
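
To illustrate the limited directionality that motivates these transforms (contourlets have no mainstream Python implementation, so this only shows the separable-wavelet baseline), a small PyWavelets example, assuming the pywt package:

```python
import numpy as np
import pywt

# 2-level separable wavelet decomposition: each detail level carries only
# three orientation subbands (horizontal, vertical, diagonal).
img = np.random.rand(256, 256)
coeffs = pywt.wavedec2(img, "db2", level=2)
for level, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    print(f"level {level}: 3 orientations, subband shape {cH.shape}")
```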
Background: Curvelet Decomposition
[Figure: directional filter bank. (a) Frequency partitioning where l = 3, giving 2^3 = 8 real wedge-shaped frequency bands.]
Methods: Algorithm
• Take 2 input images; how do we combine them?
• Take the contourlet transform of each
– Each level appears to gain a factor of 2 in angular resolution
[Figure: Yang’s decomposition]
– How does this affect the quality?
Algorithm: Lowpass Subband
• Treat the lowest level of pyramid differently
– This is a tiny thumbnail of the original information
– Higher-level detail is added to this to reconstruct the whole image
• 2 modes of operation: selection or averaging
• Choose based on a threshold criterion: salience
– If the correlation between the input windows in a 3x3 patch in
curvelet space is above a threshold -> weighted averaging
– Else choose the one with more energy (sum sq. over window)
• Averaging is only done at a fixed alpha blend amount
– Not variable dependent on data
• A bit ad hoc in that there are many unspecified preset tunable parameters: thresholds, blend factors (see the sketch below)
– They could be optimized for thalamic nuclei
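
A minimal sketch of this selection-or-averaging rule; the 0.75 threshold and 0.6 blend factor are placeholders since the paper's presets are unspecified, and the match measure is the standard normalized local correlation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_lowpass(a, b, match_thresh=0.75, alpha=0.6):
    """Selection-or-averaging rule for the coarsest subband (a sketch;
    the threshold and blend factor here are placeholder values)."""
    # Local energy: sum of squares over a 3x3 window (up to a constant).
    ea = uniform_filter(a * a, size=3)
    eb = uniform_filter(b * b, size=3)
    # Normalized local match between the two subbands, in [-1, 1].
    match = 2 * uniform_filter(a * b, size=3) / (ea + eb + 1e-12)
    # Where the subbands agree: fixed-alpha blend favoring the stronger one.
    w = np.where(ea >= eb, alpha, 1 - alpha)
    blended = w * a + (1 - w) * b
    # Otherwise: select the input with more local energy.
    selected = np.where(ea >= eb, a, b)
    return np.where(match > match_thresh, blended, selected)
```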
Highpass Subband Algo
• Contrast: C = (L - L_b)/L_b = L_h/L_b
– Ratio of a high-level curvelet coefficient to the lowest level
1. Compute contrast as above; L_b comes from the pixels in the lowest level that contribute to the highest level
2. Blur this to get weighted neighborhood contrast
3. Select coefficients from the image that has the higher
value on this metric, i.e. the one that has more local
contrast
• “Using contourlet contrast, more dominant features
can be preserved precisely at all the resolution levels”
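
A minimal sketch of this contrast-driven selection for one detail subband; the helper name, the 3x3 smoothing window, and the eps guard are my assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_highpass(ha, hb, la, lb, eps=1e-6):
    """Contrast-based coefficient selection for one detail subband (sketch).
    ha, hb: highpass coefficients of inputs A and B.
    la, lb: corresponding lowpass values resampled to the same grid."""
    # Step 1: contrast C = L_h / L_b for each input (eps is my addition).
    ca = np.abs(ha) / (np.abs(la) + eps)
    cb = np.abs(hb) / (np.abs(lb) + eps)
    # Step 2: blur to get a weighted neighborhood contrast.
    ca = uniform_filter(ca, size=3)
    cb = uniform_filter(cb, size=3)
    # Step 3: keep the coefficient from the input with more local contrast.
    return np.where(ca >= cb, ha, hb)
```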
Reconstruction
• Take the inverse curvelet transform of the
blended pyramid
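
Putting the pieces together: a toy end-to-end pipeline using a wavelet transform from PyWavelets as a stand-in for the contourlet transform, with simple average/max-abs rules in place of the paper's salience and contrast rules:

```python
import numpy as np
import pywt

def fuse_images(img_a, img_b, wavelet="db2", levels=3):
    """Decompose both inputs, blend subbands, invert (wavelet stand-in)."""
    ca = pywt.wavedec2(img_a, wavelet, level=levels)
    cb = pywt.wavedec2(img_b, wavelet, level=levels)
    fused = [0.5 * (ca[0] + cb[0])]      # lowpass: plain averaging here
    for da, db in zip(ca[1:], cb[1:]):   # detail: max-abs selection per band
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```

The fuse_lowpass and fuse_highpass sketches above could be substituted for these simple rules to get closer to the paper's algorithm.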
Methods
• Test cases
– CT-MR
– Gd and T2w
– PD and T1w
• Compared against existing methods: average, PCA, wavelet
maximum
• Metrics
– Standard deviation – image variability
– Entropy – how much information is in the image
– Overall cross entropy – how close are the distributions, is information
preserved in the fused result?
– Spatial frequency – amount of energy in high frequencies
• Only looking at horizontal and vertical freqs.
– Correlation – how similar is the fusion to the inputs
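
Minimal sketches of some of these metrics (function names are mine; the paper's exact definitions may differ in normalization):

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram, in bits."""
    counts, _ = np.histogram(img, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def spatial_frequency(img):
    """Row/column difference energy; only horizontal and vertical freqs."""
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf**2 + cf**2)

def correlation(fused, src):
    """Pearson correlation between the fused image and one input."""
    return np.corrcoef(fused.ravel(), src.ravel())[0, 1]
```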
Results: Metrics
• The proposed algorithm is generally shown to have more variability and to capture more information from the inputs
Notes
• PCA table values seem off?
• There is a Matlab implementation of curvelets
by the creators
• How to handle multiple image fusion?