Transcript Slide 1

A Multi-level Approach to Quantization
Yair Koren, Irad Yavneh, Alon Spira
Department of Computer Science
Technion, Haifa 32000
Israel
One dimensional (scalar) quantization
Consider the image I consisting of G representation
(gray) levels. We would like to represent I with n < G
representation levels as well as possible.
More formally, given a signal X (image, voice,
etc.), with probability density function (histogram)
p(x), we would like an approximation q(x) of X,
which minimizes the distortion:
$$D(q) \;=\; E\big[\,\|X - q(X)\|_2^2\,\big] \;=\; \sum_{i=1}^{n} \int_{d_{i-1}}^{d_i} \|x - r_i\|_2^2 \, p(x)\, dx.$$
Here, all $x \in [d_{i-1}, d_i)$ are represented by $r_i$.
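As a concrete illustration (not on the slides), D(q) can be evaluated directly on a discrete histogram. A minimal Python sketch, assuming NumPy; the function and variable names are our own:

```python
import numpy as np

def distortion(hist, d, r):
    """D(q) for a scalar quantizer on a discrete, normalized histogram.

    hist[g] ~ p(g) for gray levels g = 0, ..., G-1 (NumPy array),
    d = [d_0, ..., d_n] decision levels, r = [r_1, ..., r_n] representation levels.
    """
    x = np.arange(len(hist), dtype=float)
    D = 0.0
    for i in range(len(r)):
        # All x in [d_{i-1}, d_i) are represented by r_i.
        mask = (x >= d[i]) & (x < d[i + 1])
        D += np.sum((x[mask] - r[i]) ** 2 * hist[mask])
    return D
```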
Example (Lena, 512x512)
Left – Lena (gray level image),
Right – Lena’s histogram, p(x).
Lena, 8 gray levels
Representing Lena with fewer levels
128 gray levels
64 gray levels
Lena, 4 gray levels
Naïve vs. Optimal Quantization
Lena, 8 levels. Left – optimal, right – naive.
The Lloyd-Max Iterative Process
We wish to minimize
$$D(q) \;=\; E\big[\,\|X - q(X)\|_2^2\,\big] \;=\; \sum_{i=1}^{n} \int_{d_{i-1}}^{d_i} \|x - r_i\|_2^2 \, p(x)\, dx.$$
Differentiating w.r.t. r and d yields the Lloyd-Max equations:
$$r_i \;=\; \frac{\int_{d_{i-1}}^{d_i} x\,p(x)\,dx}{\int_{d_{i-1}}^{d_i} p(x)\,dx}\,, \qquad d_i \;=\; \frac{r_i + r_{i+1}}{2}\,, \qquad i = 1,\ldots,n-1.$$
Max and Lloyd proposed a simple iterative process:
The Lloyd-Max Iterative Process
Given some initial guess, $d^0$, iterate for $k = 1, 2, \ldots$
until some convergence criterion is satisfied:
$$r_i^k \;=\; \frac{\int_{d_{i-1}^{k-1}}^{d_i^{k-1}} x\,p(x)\,dx}{\int_{d_{i-1}^{k-1}}^{d_i^{k-1}} p(x)\,dx}\,, \qquad d_i^k \;=\; \frac{r_i^k + r_{i+1}^k}{2}\,.$$
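A minimal Python sketch of this iteration on a discrete histogram (our own illustration; the slides give no implementation, and the uniform initial guess and the stopping test are assumptions):

```python
import numpy as np

def lloyd_max(hist, n, max_iter=100, tol=1e-6):
    """Lloyd-Max iteration for n representation levels on a histogram hist[g], g = 0..G-1."""
    G = len(hist)
    x = np.arange(G, dtype=float)
    d = np.linspace(0.0, G, n + 1)      # initial decision levels: uniform
    r = np.zeros(n)
    for _ in range(max_iter):
        d_old = d.copy()
        for i in range(n):
            # r_i^k: centroid of p over [d_{i-1}^{k-1}, d_i^{k-1})
            mask = (x >= d[i]) & (x < d[i + 1])
            mass = hist[mask].sum()
            r[i] = (x[mask] * hist[mask]).sum() / mass if mass > 0 else 0.5 * (d[i] + d[i + 1])
        # d_i^k: midpoints of adjacent representation levels (interior boundaries only)
        d[1:-1] = 0.5 * (r[:-1] + r[1:])
        if np.max(np.abs(d - d_old)) < tol:   # simple convergence criterion
            break
    return d, r
```

For the Lena example, hist would be the normalized 256-bin gray-level histogram and, say, n = 8.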
The Lloyd-Max Iterative Process
We can rewrite the Lloyd-Max equations in terms of
d alone:
$$d_i \;=\; \frac{1}{2}\left(\frac{\int_{d_{i-1}}^{d_i} x\,p(x)\,dx}{\int_{d_{i-1}}^{d_i} p(x)\,dx} \;+\; \frac{\int_{d_i}^{d_{i+1}} x\,p(x)\,dx}{\int_{d_i}^{d_{i+1}} p(x)\,dx}\right), \qquad i = 1,\ldots,n-1.$$
This is generally a nonlinear system.
The Lloyd-Max Iterative Process
However, for the simple case, p = 1, L-M reduces to
$$d_i^{k+1} \;=\; \frac{1}{4}\left(d_{i-1}^k + 2\,d_i^k + d_{i+1}^k\right), \qquad i = 1,\ldots,n-1.$$
This is nothing but a damped Jacobi relaxation with
damping factor 1/2 for the discrete Laplace
equation. Evidently, multigrid acceleration is likely to
help.
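To spell out the reduction (our own intermediate step, not on the slide): for $p \equiv 1$ the centroid of $[d_{i-1}, d_i)$ is its midpoint, $r_i^{k+1} = \tfrac{1}{2}(d_{i-1}^k + d_i^k)$, so
$$d_i^{k+1} \;=\; \frac{r_i^{k+1} + r_{i+1}^{k+1}}{2} \;=\; \frac{1}{2}\left(\frac{d_{i-1}^k + d_i^k}{2} + \frac{d_i^k + d_{i+1}^k}{2}\right) \;=\; \frac{1}{4}\left(d_{i-1}^k + 2\,d_i^k + d_{i+1}^k\right),$$
i.e., a weighted Jacobi sweep with weight 1/2 for $d_{i-1} - 2d_i + d_{i+1} = 0$.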
We employ a nonlinear multigrid algorithm, using
the Lloyd-Max process for relaxation (with over-relaxation factor 4/3)
and a nonlinear interpolation which retains the ordering of d.
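One possible Python realization of the over-relaxed smoothing step (the weight 4/3 is from the slide; the helper below and the way ordering is preserved are our assumptions):

```python
import numpy as np

def lloyd_max_sweep(d, hist):
    """One plain Lloyd-Max update of the decision levels d, as in lloyd_max above."""
    x = np.arange(len(hist), dtype=float)
    r = np.empty(len(d) - 1)
    for i in range(len(r)):
        mask = (x >= d[i]) & (x < d[i + 1])
        mass = hist[mask].sum()
        r[i] = (x[mask] * hist[mask]).sum() / mass if mass > 0 else 0.5 * (d[i] + d[i + 1])
    d_new = d.copy()
    d_new[1:-1] = 0.5 * (r[:-1] + r[1:])
    return d_new

def relax(d, hist, omega=4.0 / 3.0):
    """Over-relaxed Lloyd-Max smoothing: d <- d + omega * (LM(d) - d).

    Ordering of d is assumed to be maintained (the slides enforce it in the interpolation).
    """
    return d + omega * (lloyd_max_sweep(d, hist) - d)
```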
Numerical Tests
We compare three algorithms:
1. Lloyd-Max, starting with a uniform representation
2. Our multigrid algorithm, starting similarly
3. LBG (Linde et al., 1980): sequential refinement
(coarse-to-fine).
In all the algorithms, the basic iteration is Lloyd-Max.
Result plots: P(x) = x; step function and local minima; local minima; convergence criterion.
Discrete Vector Quantization
The 1D problem is used mainly as a preliminary
study towards higher-dimensional problems, viz.,
vector quantization (e.g., for color images).
Also, the histogram p is discrete in practice and usually
quite sparse and patchy, and there are many different
“solutions” (local minima). “Standard” multigrid methods
do not seem appropriate.
Decision regions (Voronoi cells) and
representation levels (centers of mass) for
P(x,y)≡1
Equal height contours of P(x,y) = x*y
Decision regions for P(x,y)=x*y
Discrete Vector Quantization
Let G denote the number of possible representation levels
(D-tuples), P the number of such levels for which p does not
vanish, and R the number of quantized representation levels.
Typically,
$$R \ll P \ll G.$$
A Lloyd-Max iteration costs at least O(P) operations.
As it doesn’t seem possible to usefully coarsen p,
coarse-level iterations will be equally expensive,
resulting in O(P log(R)) complexity for the multigrid
cycle.
Discrete Vector Quantization
Sketch of the algorithm (V-cycle):
For $j = 0, 1, \ldots, \log(R)-1,\ \log(R),\ \log(R)-1, \ldots, 1, 0$:
    Relax(j)
Sketch of the relaxation algorithm:
Partition the R variables into, say, $2^{\log(R) - j}$ aggregates
of $2^j$ representation levels each. Then sweep over the
aggregates, changing each in turn so as to (approximately)
minimize the fine-level functional (Gauss-Seidel style).
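A schematic Python rendering of this cycle (our own sketch: the update_aggregate callback stands in for the approximate minimization over one aggregate, which the slides do not spell out, and R is taken to be a power of two):

```python
import math

def v_cycle(levels, update_aggregate):
    """One V-cycle: relax with aggregate sizes 2^j for j = 0, 1, ..., log2(R), ..., 1, 0."""
    R = len(levels)                       # number of representation levels (power of 2 here)
    J = int(math.log2(R))
    schedule = list(range(J + 1)) + list(range(J - 1, -1, -1))
    for j in schedule:
        sweep_aggregates(levels, j, update_aggregate)

def sweep_aggregates(levels, j, update_aggregate):
    """Sweep over 2^(log2(R) - j) aggregates of 2^j levels each, Gauss-Seidel style:
    each aggregate is changed in turn to (approximately) minimize the fine-level functional."""
    size = 2 ** j
    for start in range(0, len(levels), size):
        update_aggregate(levels, range(start, start + size))
```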
Conclusions
The multi-level approach is very promising for the problem
of quantization. In 1D with (semi-)continuous p we get:
• Much faster convergence.
• Often better minima.
• Sounder convergence criterion.
The real dividends are expected for vector quantization (as
in color images). This is a significantly harder and
more important problem. Research on this is in
progress, led by Yair Koren.
A Multigrid Approach to Binarization
Ron Kimmel and Irad Yavneh
Image Binarization
Original Image
Nonuniform Illumination
Tilted
Spherical
Naïve (threshold) binarization
Tilted
Naïve (threshold) binarization
Spherical
Yanowitz-Bruckstein Binarization
• Isolate the locations of edge centers, for example, the
set of points,
$$s = \{(x, y) : |\nabla I| \geq T\}$$
for some threshold T.
• Use the values I(x,y), for (x,y) in s, as constraints for
a threshold surface, u, which elsewhere satisfies the
equation
$$\Delta u = 0.$$
For this we use our version of a multigrid algorithm with
matrix-dependent prolongations.
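A single-grid Python sketch of this construction (our illustration only: plain Jacobi relaxation with periodic boundaries replaces the authors' multigrid solver with matrix-dependent prolongations, and a gradient-magnitude threshold stands in for the edge-center detection):

```python
import numpy as np

def yb_binarize(I, T, sweeps=500):
    """Yanowitz-Bruckstein-style binarization via a relaxed threshold surface u.

    Constraints: u = I on the edge set s; elsewhere u is relaxed toward
    a discrete solution of Laplace's equation, Delta u = 0.
    """
    I = I.astype(float)
    gy, gx = np.gradient(I)
    s = np.hypot(gx, gy) > T                  # edge-center set (constraint mask)
    u = np.full_like(I, I.mean())
    u[s] = I[s]
    for _ in range(sweeps):                   # Jacobi sweeps; multigrid would be far faster
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(s, I, avg)               # keep constraints, relax elsewhere
    return I >= u                             # binarize against the threshold surface
```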
Edges
Tilted
Spherical
Results
Tilted
Spherical