Edge Preserving Image Restoration using L1 norm

Vivek Agarwal
The University of Tennessee, Knoxville
Outline
• Introduction
• Regularization based image restoration
  – L2 norm regularization
  – L1 norm regularization
• Tikhonov regularization
• Total Variation regularization
• Least Absolute Shrinkage and Selection Operator (LASSO)
• Results
• Conclusion and future work
Introduction - Physics of Image Formation
[Block diagram: the true scene f(x',y') passes through the imaging system with kernel K(x,y,x',y') to produce g(x,y); the registration system adds noise, so the recorded image is g(x,y) + noise. Blurring is the forward process; restoration is the reverse process.]
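In symbols, the diagram corresponds to the standard continuous forward model, a Fredholm integral equation of the first kind (this equation is my rendering of the diagram, using the slide's notation):

$$g(x, y) = \iint K(x, y, x', y')\, f(x', y')\, dx'\, dy' + n(x, y)$$

Everything that follows is about inverting this relation stably: recovering $f$ from the noisy observation $g$.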
Image Restoration
• Image restoration is a subset of image processing.
• It is a highly ill-posed problem.
• Most image restoration algorithms use least squares.
• L2 norm based algorithms produce smooth restorations, which are inaccurate if the image contains edges.
• L1 norm algorithms preserve the edge information in the restored images, but the algorithms are slow.
Well-Posed Problem
In 1923, the French mathematician Hadamard introduced the notion of well-posed problems. According to Hadamard, a problem is called well-posed if
1. A solution for the problem exists (existence).
2. This solution is unique (uniqueness).
3. This unique solution is stable under small perturbations in the data; in other words, small perturbations in the data should cause only small perturbations in the solution (stability).
If at least one of these conditions fails, the problem is called ill-posed (incorrectly posed) and demands special consideration.
Existence
To deal with non-existence we have to enlarge the domain where the solution is sought.
Example: a quadratic equation $ax^2 + bx + c = 0$ in general form has two solutions:

$$x_{1,2} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$

If $b^2 - 4ac < 0$ there are no real roots; however, there are complex ones:

$$x_{1,2} = -\frac{b}{2a} \pm \frac{\sqrt{4ac - b^2}}{2a}\, i$$

In the real domain there is no solution; in the complex domain there is one. Non-existence is harmful.
Uniqueness
Non-uniqueness is usually caused by a lack or absence of information about the underlying model.
Example: neural networks. The error surface has multiple local minima, and many of these minima fit the training data very well; however, the generalization capabilities of these different solutions (predictive models) can be very different, ranging from poor to excellent. How do we pick a model that will generalize well?
[Figure: several candidate solutions, each labeled "Bad or good?"]
Uniqueness
• Non-uniqueness is not always harmful. It depends on what we are looking for. If we are looking for a desired effect, that is, we know what a good solution looks like, then we can be happy with multiple solutions, simply picking a good one from the variety of solutions.
• Non-uniqueness is harmful if we are looking for an observed effect, that is, we do not know what a good solution looks like.
• The best way to combat non-uniqueness is to specify a model using prior knowledge of the domain, or at least to restrict the space where the desired model is searched.
Instability
Instability is caused by an attempt to reverse cause-effect relationships. Nature always solves only the forward problem, because of the arrow of time: cause always precedes effect.
[Diagram: cause, through the forward operation, produces effect]
In practice we very often have to reverse the relationship, that is, to go from effect to cause.
Example: convolution-deconvolution, Fredholm integral equations of the first kind.
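A minimal NumPy sketch of this instability (my illustration, not from the slides): deconvolution by naive spectral division turns tiny measurement noise into a huge reconstruction error, because the blur kernel's spectrum is nearly zero at high frequencies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
f = np.zeros(n)
f[100:150] = 1.0                                  # true signal (the cause)
sigma = 2.0
k = np.exp(-0.5 * ((np.arange(n) - n // 2) / sigma) ** 2)
k /= k.sum()                                      # normalized Gaussian blur kernel

K = np.fft.fft(np.fft.ifftshift(k))               # kernel spectrum
g = np.real(np.fft.ifft(K * np.fft.fft(f)))       # forward problem: blur the signal
g_noisy = g + 1e-3 * rng.standard_normal(n)       # add tiny measurement noise

f_naive = np.real(np.fft.ifft(np.fft.fft(g_noisy) / K))  # naive inversion
print(np.abs(f_naive).max())   # enormous: 1/K blows the noise up at high frequencies
```

Regularization, introduced below, is exactly what tames this unstable division.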
L1 and L2 Norms
The general expression for the norm is given as

$$\|x\|_p = \Bigl( \sum_i |x_i|^p \Bigr)^{1/p}$$

L2 norm: $\|x\|_2 = \sqrt{\sum_i x_i^2}$ is the Euclidean distance, or vector distance.
L1 norm: $\|x\|_1 = \sum_i |x_i|$ is also known as the Manhattan norm, because it corresponds to the sum of the distances along the coordinate axes.
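A quick numerical check of the two norms (a sketch using NumPy, not part of the original slides):

```python
import numpy as np

x = np.array([3.0, -4.0])
print(np.linalg.norm(x, 2))   # L2 (Euclidean): sqrt(3^2 + 4^2) = 5.0
print(np.linalg.norm(x, 1))   # L1 (Manhattan): |3| + |-4| = 7.0
```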
Why Regularization?
• Most restoration is based on least squares, but if the problem is ill-posed, the least squares method fails.
[Figure: failed least-squares restoration of the test image]
Regularization
The general formulation for regularization techniques is

$$\|Ax - g\|_2^2 + \lambda \|Lx\|_2^2$$

where
• $\|Ax - g\|_2^2$ is the error term,
• $\lambda$ is the regularization parameter,
• $\|Lx\|_2^2$ is the penalty term.
Tikhonov Regularization
• Tikhonov is an L2 norm, or classical, regularization technique.
• Tikhonov regularization produces a smoothing effect on the restored image.
• In zero-order Tikhonov regularization, the regularization operator L is the identity matrix.
• The Tikhonov-regularized solution can be computed as

$$f = \left( A^T A + \lambda L^T L \right)^{-1} A^T g$$

• In higher-order Tikhonov, L is either a first-order or second-order differentiation matrix.
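A minimal dense sketch of this formula in NumPy (assuming a blur matrix A, observed image g, and parameter lam are given; an illustration, not the presenter's code):

```python
import numpy as np

def tikhonov_restore(A, g, lam, L=None):
    """Tikhonov solution f = (A^T A + lam L^T L)^{-1} A^T g."""
    n = A.shape[1]
    L = np.eye(n) if L is None else L     # zero order: L is the identity
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ g)
```

For higher-order Tikhonov one passes a first- or second-order difference matrix as L. Real deblurring codes never form A densely; this is only to make the formula concrete.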
Tikhonov Regularization
[Figure: original image (left) and blurred image (right)]
Tikhonov Regularization - Restoration
[Figure: reconstructed image for $\lambda$ = 7.9123e-12]
Total Variation
• Total Variation is a deterministic approach.
• This regularization method preserves the edge information in the restored images.
• The TV regularization penalty function obeys the L1 norm.
• The mathematical expression for TV regularization is given as

$$T(x) = \frac{1}{2}\,\|Ax - g\|^2 + \alpha \int \sqrt{|\nabla x|^2 + \beta^2}$$
Difference between Tikhonov Regularization and Total Variation

S.No | Tikhonov Regularization | Total Variation Regularization
1 | $\|Ax - g\|_2^2 + \lambda \|Ix\|_2^2$ | $\|Ax - g\|_2^2 + \lambda \|\nabla x\|_1$
2 | Assumes smooth and continuous information | Smoothness is not assumed
3 | Computationally less complex | Computationally more complex
4 | Restored image is smooth | Restored image is blocky and preserves the edges
Computation Challenges
Total Variation:

$$T(x) = \frac{1}{2}\,\|Ax - g\|^2 + \alpha \int \sqrt{|\nabla x|^2 + \beta^2}$$

Gradient:

$$\nabla \mathrm{TV}(x) = -\,\nabla \cdot \left( \frac{\nabla x}{|\nabla x|} \right)$$

Non-linear PDE:

$$-\,\alpha\, \nabla \cdot \left( \frac{\nabla x}{|\nabla x|} \right) + A^* A x - A^* g = 0$$
Computation Challenges (Contd.)
• An iterative method is necessary to solve it.
• The TV function is non-differentiable at zero.
• The operator $\nabla \cdot \left( \frac{\nabla x}{|\nabla x|} \right)$ is non-linear.
• The ill conditioning of this operator causes numerical difficulties.
• Good preconditioning is required.
Computation of Regularization Operator
Total Variation is computed using the formulation

$$T(f) = \frac{1}{2}\,\|Af - g\|^2 + \alpha \int \sqrt{|\nabla f|^2 + \beta^2}$$

where the first term is the least squares (data) term and the second is the Total Variation penalty function, associated with the operator L.
Minimizing T(f) leads to the fixed-point iteration

$$f_{v+1} = \left( A^T A + \alpha L(f_v) \right)^{-1} A^T g,$$

equivalently written as the update

$$f_{v+1} = f_v - \left( A^T A + \alpha L(f_v) \right)^{-1} \nabla T(f_v).$$
Computation of Regularization Operator
Discretization of the Total Variation function:

$$T(f) = \frac{1}{2} \sum_{i=1}^{n_x} \sum_{j=1}^{n_y} \psi\!\left( \left(D_{i,j}^x f\right)^2 + \left(D_{i,j}^y f\right)^2 \right), \qquad \psi(t) = 2\sqrt{t + \beta^2}$$

with the difference operators

$$D_{i,j}^x f = \frac{f_{i,j} - f_{i-1,j}}{\Delta x}, \qquad D_{i,j}^y f = \frac{f_{i,j} - f_{i,j-1}}{\Delta y}$$

The gradient of the Total Variation is given by

$$\nabla T(f) = \frac{1}{2} \sum_{i=1}^{n_x} \sum_{j=1}^{n_y} \psi'_{i,j}\, \nabla\!\left( \left(D_{i,j}^x f\right)^2 + \left(D_{i,j}^y f\right)^2 \right)$$
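The discretized penalty is direct to evaluate; a sketch in NumPy (assuming unit grid spacing, $\Delta x = \Delta y = 1$, and a 2-D array f):

```python
import numpy as np

def smoothed_tv(f, beta):
    """T(f) = 0.5 * sum psi((Dx f)^2 + (Dy f)^2), with psi(t) = 2 sqrt(t + beta^2)."""
    dx = np.diff(f, axis=0, prepend=f[:1, :])   # D^x: f[i,j] - f[i-1,j]
    dy = np.diff(f, axis=1, prepend=f[:, :1])   # D^y: f[i,j] - f[i,j-1]
    t = dx**2 + dy**2
    return 0.5 * np.sum(2.0 * np.sqrt(t + beta**2))
```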
Regularization Operator
The regularization operator is computed using the expression

$$L(f) = D_x^T\, \operatorname{diag}\!\left(\psi'(f)\right) D_x + D_y^T\, \operatorname{diag}\!\left(\psi'(f)\right) D_y$$

where

$$\psi'_{i,j}(f) = \psi'\!\left( \left(D_{i,j}^x f\right)^2 + \left(D_{i,j}^y f\right)^2 \right)$$

or, in block form,

$$L(f) = \begin{bmatrix} D_x^T & D_y^T \end{bmatrix} \begin{bmatrix} \operatorname{diag}\!\left(\psi'(f)\right) & 0 \\ 0 & \operatorname{diag}\!\left(\psi'(f)\right) \end{bmatrix} \begin{bmatrix} D_x \\ D_y \end{bmatrix}$$
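Putting the last three slides together, a dense lagged-diffusivity sketch (my realization of the fixed-point iteration above, in the spirit of Vogel and Oman; A, g, alpha, and beta are assumed inputs, and a real implementation would use sparse matrices with preconditioned CG, as the slides note):

```python
import numpy as np

def diff_matrix(n):
    """1-D backward difference; first row zeroed for the boundary."""
    D = np.eye(n) - np.eye(n, k=-1)
    D[0, 0] = 0.0
    return D

def tv_restore(A, g, alpha, beta, n, iters=20):
    """Fixed point f_{v+1} = (A^T A + alpha L(f_v))^{-1} A^T g on an n x n image."""
    D = diff_matrix(n)
    Dx = np.kron(D, np.eye(n))            # row differences of the flattened image
    Dy = np.kron(np.eye(n), D)            # column differences
    f = A.T @ g                            # crude initial guess
    for _ in range(iters):
        t = (Dx @ f) ** 2 + (Dy @ f) ** 2
        w = 1.0 / np.sqrt(t + beta ** 2)   # psi'(t) for psi(t) = 2 sqrt(t + beta^2)
        L = Dx.T @ (w[:, None] * Dx) + Dy.T @ (w[:, None] * Dy)
        f = np.linalg.solve(A.T @ A + alpha * L, A.T @ g)
    return f
```

Each pass rebuilds L(f) from the current iterate and re-solves the linearized system, which is exactly why good preconditioning matters at scale.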
Lasso Regression
• Lasso, for "Least Absolute Shrinkage and Selection Operator", is a shrinkage and selection method for linear regression introduced by Tibshirani (1996).
• It minimizes the usual sum of squared errors, with a bound on the sum of the absolute values of the coefficients:

$$\min_{\beta} \sum_i \Bigl( y_i - \sum_j x_{ij}\,\beta_j \Bigr)^{2} \quad \text{subject to} \quad \sum_j |\beta_j| \le s$$

• Computing the Lasso solution is a quadratic programming problem that is best solved by the least angle regression (LARS) algorithm.
• Lasso also uses the L1 penalty norm.
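The constrained form above is usually solved in its equivalent Lagrangian form; a sketch with scikit-learn's off-the-shelf solver (my example data, not the presenter's):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [2.0, -1.0, 0.5]               # sparse ground truth
y = X @ beta_true + 0.1 * rng.standard_normal(100)

# minimizes (1/(2m)) ||y - X b||^2 + alpha ||b||_1
model = Lasso(alpha=0.05).fit(X, y)
print(model.coef_)                              # most coefficients are exactly zero
```

The exact zeros in the coefficient vector are the "selection" half of shrinkage-and-selection, and the same L1 mechanism is what keeps edges sharp in TV restoration.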
Ridge Regression and Lasso Equivalence
• The cost function of ridge regression is given as

$$C = \sum_{i=1}^{m} \bigl( \hat{y}_i - f(x_i) \bigr)^2 + \lambda \sum_{j=1}^{p} w_j^2$$

• Ridge regression is identical to zero-order Tikhonov regularization.
• The analytical solutions of ridge and Tikhonov are similar:

$$w = \bigl( X^T X + \lambda I \bigr)^{-1} X^T y$$

• The bias introduced favors solutions with small weights, and the effect is to smooth the output function.
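The analytical solution above is two lines of NumPy (a sketch; for the per-coefficient penalties $\lambda_j$ of the next slide, replace lam * I with a diagonal matrix of the $\lambda_j$):

```python
import numpy as np

def ridge(X, y, lam):
    """w = (X^T X + lam I)^{-1} X^T y, identical in form to zero-order Tikhonov."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
```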
Ridge Regression and Lasso Equivalence
• Instead of a single value of λ, different values of λ can be used for different pixels:

$$C = \sum_{i=1}^{m} \bigl( \hat{y}_i - f(x_i) \bigr)^2 + \sum_{j=1}^{p} \lambda_j w_j^2$$

• This should provide the same solution as Lasso regression (regularization).
• Thus we establish a relation between Lasso and zero-order Tikhonov. Our aim is to prove the corresponding relation between Total Variation and Lasso, both of which are L1 norm penalties.
L1 norm regularization - Restoration
[Figure: synthetic images; input image (left) and blurred and noisy image (right)]
L1 norm regularization - Restoration
[Figure: Total Variation restoration (left) and LASSO restoration (right)]
L1 norm regularization - Restoration
[Figure grid: blurred and noisy images, Total Variation regularization, and LASSO regularization for three degrees of blur (I, II, III)]
L1 norm regularization - Restoration
[Figure grid: blurred and noisy images, Total Variation regularization, and LASSO regularization for three levels of noise (I, II, III)]
Cross Section of Restoration
[Figure: cross-sections of Total Variation and LASSO restorations for different degrees of blurring]
Cross Section of Restoration
[Figure: cross-sections of Total Variation and LASSO restorations for different levels of noise]
Comparison of Algorithms
[Figure: original image, Tikhonov restoration, LASSO restoration, and Total Variation restoration]
Effect of Different Levels of Noise and Blurring
[Figure: blurred and noisy image, Tikhonov restoration, LASSO restoration, and Total Variation restoration]
Numerical Analysis of Results - Airplane

First Level of Noise (image: Plane)

Method | PD Iterations | CG Iterations | Lambda | Blurring Error (%) | Residual Error (%) | Restoration Time (min)
Total Variation | 2 | 10 | 2.05e-02 | 81.4 | 1.74 | 2.50
LASSO Regression | 1 | 6 | 1.00e-04 | 81.4 | 1.81 | 0.80
Tikhonov Regularization | -- | -- | 1.288e-10 | 81.4 | 9.85 | 0.20

Second Level of Noise (image: Plane)

Method | PD Iterations | CG Iterations | Lambda | Blurring Error (%) | Residual Error (%) | Restoration Time (min)
Total Variation | 1 | 15 | 1e-03 | 83.5 | 3.54 | 1.4
LASSO Regression | 1 | 2 | 1e-03 | 83.5 | 4.228 | 0.8
Tikhonov Regularization | -- | -- | 1.12e-10 | 83.5 | 11.2 | 0.30
Numerical Analysis of Results - Shelves and Plane

Image: Shelves

Method | PD Iterations | CG Iterations | Lambda | Blurring Error (%) | Residual Error (%) | Restoration Time (min)
Total Variation | 2 | 11 | 1.00e-04 | 84.1 | 2.01 | 2.00
LASSO Regression | 1 | 8 | 1.00e-06 | 84.1 | 1.23 | 0.90

Image: Plane

Method | PD Iterations | CG Iterations | Lambda | Blurring Error (%) | Residual Error (%) | Restoration Time (min)
Total Variation | 2 | 10 | 1.00e-03 | 81.2 | 3.61 | 2.10
LASSO Regression | 1 | 14 | 1.00e-03 | 81.2 | 3.59 | 1.00
Graphical Representation - 5 Real Images: Different Degrees of Blur
[Charts: "Image Restoration Time" (time in minutes vs. image 1-5) and "Residual Error" (error in % vs. image 1-5), comparing the TV and Lasso methods, one pair of charts per degree of blur]
Graphical Representation - 5 Real Images: Different Levels of Noise
[Charts: "Image Restoration Time" (time in minutes vs. image 1-5) and "Residual Error" (error in % vs. image 1-5), comparing the TV and Lasso methods, one pair of charts per noise level]
Effect of Blurring and Noise
[Charts: "Effect of Blurring" and "Effect of Noise" show error in % vs. image 1-5 for three degrees of blurring and three levels of noise; "Effect of Blurring on Error and Time" and "Effect of Noise on Error and Restoration Time" show residual error in % and restoration time in minutes vs. degree of blurring and noise level 1-3]
Conclusion
• The Total Variation method preserves the edge information in the restored image.
• Restoration time for Total Variation regularization is high.
• LASSO provides an impressive alternative to TV regularization.
• The restoration time of LASSO regularization is about half the restoration time of TV regularization.
• The restoration quality of LASSO is better than or equal to the restoration quality of TV regularization.
Conclusion
• Both LASSO and TV regularization fail to suppress the noise in the restored images.
• Analysis shows that an increase in the degree of blur increases the restoration error.
• An increase in the noise level does not have a significant influence on the restoration time, but it affects the residual error.