Barcode Recognition Team - University of California, Irvine


BARCODE RECOGNITION TEAM
CHRISTINE LEW
DHEYANI MALDE
EVERARDO URIBE
YIFAN ZHANG
SUPERVISORS:
ERNIE ESSER
YIFEI LOU
UPC BARCODE
What is a barcode? What type of barcode do we use?
What is its structure?
Our barcode representation:
β€’ A vector of 0s and 1s
MATHEMATICAL REPRESENTATION
Barcode distortion mathematical representation:
f = k βˆ— u + n
where u is the clean barcode signal, k is the blur kernel, and n is noise.
What is convolution?
β€’ Every value in the blurred signal is given by the same weighted combination of nearby values in the original signal, and the kernel determines those weights.
Kernel
β€’ For our case, the blur kernel k, or point-spread function, is assumed to be a Gaussian.
Noise
β€’ The noise we deal with is white Gaussian noise.
[Figures: blurred signals with Gaussian kernel standard deviations of 0.2, 0.5, and 0.9]
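As a concrete illustration of the model f = k βˆ— u + n, here is a minimal Python/NumPy sketch; the signal length, kernel width, and noise level are illustrative assumptions, not the team's actual parameters.

import numpy as np

rng = np.random.default_rng(0)

# Clean barcode signal u: a vector of 0s and 1s
u = rng.integers(0, 2, size=200).astype(float)

# Gaussian blur kernel k (point-spread function), normalized to sum to 1
std = 2.0
t = np.arange(-10, 11)
k = np.exp(-t**2 / (2 * std**2))
k /= k.sum()

# Observed signal f = k * u + n, with white Gaussian noise n
blurred = np.convolve(u, k, mode='same')
f = blurred + 0.05 * rng.standard_normal(u.size)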
DECONVOLUTION
What is deconvolution?
β€’ It is basically solving for the clean barcode signal, u.
Difference between non-blind deconvolution and
blind deconvolution:
β€’ Non-blind deconvolution: we know how the signal was blurred, i.e., we assume k is known.
β€’ Blind deconvolution: we have only partial or no information about how the signal was blurred. Very difficult.
SIMPLE METHODS OF DECONVOLUTION
Thresholding
β€’ Basically converting the signal to a binary signal: check whether the amplitude at each point is closer to 0 or to 1 and round it to the value it is closer to (a minimal sketch appears at the end of this slide).
Wiener filter
β€’ A classical method of reconstructing a distorted signal, using knowledge of the kernel and the noise.
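A rough sketch of the thresholding idea above, assuming the signal has already been scaled to lie roughly between 0 and 1:

import numpy as np

def threshold(f):
    """Round each sample of the signal to whichever of 0 or 1 it is closer to."""
    return (np.asarray(f, dtype=float) >= 0.5).astype(float)

# Example: threshold([0.1, 0.7, 0.45]) -> array([0., 1., 0.])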
WIENER FILTER
We have: f = k βˆ— u + n

The Wiener filter solves:

min over u of (1/2)β€–k βˆ— u βˆ’ fβ€–β‚‚Β² + (r/2)β€–uβ€–β‚‚Β²

The filter is easily described in the frequency domain. The Wiener filter defines g such that x = g βˆ— f, where x is the estimated original signal. With capital letters denoting Fourier transforms and K̄ the complex conjugate of K:

G = K̄ / (|K|Β² + r)
X = G Β· F
x = ifft(X)

Note that if there is no noise, r = 0 and G = K̄ / |K|Β², so X = G Β· F = K̄ K U / |K|Β², which reduces to U.
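A minimal frequency-domain sketch of this filter in Python/NumPy follows. It assumes the kernel k is centered at index 0 (otherwise the result is circularly shifted); the function and variable names are illustrative.

import numpy as np

def wiener_filter(f, k, r):
    """Wiener deconvolution: G = conj(K) / (|K|^2 + r), X = G * F, x = ifft(X).

    f : observed (blurred + noisy) signal
    k : blur kernel, zero-padded to len(f) and centered at index 0
    r : regularization constant (roughly the noise-to-signal power ratio)
    """
    F = np.fft.fft(f)
    K = np.fft.fft(k, n=len(f))
    G = np.conj(K) / (np.abs(K) ** 2 + r)
    return np.real(np.fft.ifft(G * F))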
[Figures: Wiener filter reconstructions with kernel standard deviation 0.7 and noise sigma of 0.05, 0.2, and 0.5]
Non-blind Deblurring Using Yu Mao's Method
By: Christine Lew, Dheyani Malde
Overview
β€’ 2 general approaches:
o Yifei's (blind: we don't know the blur kernel)
o Yu Mao's (non-blind: we know the blur kernel)
β€’ General goal:
o Take a blurry barcode with noise and make it as clear as possible through gradient projection.
o Find the method with the best results and the least error.
Data Model
β€’ The method's goal is to solve
min over u of (1/2)β€–k βˆ— u βˆ’ bβ€–Β²  subject to 0 ≀ u ≀ 1
o Convex model
o k: blur kernel
o u: clear barcode
o b: blurry barcode with noise
β€’ b = k βˆ— u + noise
β€’ Find the minimum through gradient projection (see the sketch below)
β€’ Exactly like gradient descent, only we project onto [0, 1] every iteration
β€’ Once we find the minimizing u, we can predict the clear signal
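A minimal sketch of this projected gradient iteration in Python/NumPy; the step size, iteration count, and 'same'-mode convolution are illustrative assumptions, not the team's exact code.

import numpy as np

def gradient_projection(b, k, step=0.5, iters=500):
    """Minimize 0.5 * ||k * u - b||^2 subject to 0 <= u <= 1 by gradient
    projection: take a gradient descent step, then project onto [0, 1]."""
    u = np.clip(b, 0.0, 1.0)                                 # start from the observed signal
    for _ in range(iters):
        residual = np.convolve(u, k, mode='same') - b
        grad = np.convolve(residual, k[::-1], mode='same')   # gradient of the data term
        u = np.clip(u - step * grad, 0.0, 1.0)               # descent step, then projection
        # the fixed step is safe for a normalized (sum = 1) blur kernel
    return u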
Classical Method
β€’ Compare with the Wiener filter in terms of error rate
o Error rate: difference between the reconstructed signal and the ground truth
Error Rate = (1/N) Ξ£_{i=1}^{N} |x_i βˆ’ u_i|
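In code, this error rate is simply the mean absolute difference between the two signals (a trivial sketch):

import numpy as np

def error_rate(x, u):
    """Error rate = (1/N) * sum_i |x_i - u_i| for reconstruction x and ground truth u."""
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    return float(np.mean(np.abs(x - u)))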
Comparisons for Yu Mao's Method
[Figures: Yu Mao's gradient projection vs. Wiener filter reconstructions]
Comparisons for Yu Mao's Method (Cont.)
[Figures: Yu Mao's gradient projection vs. Wiener filter reconstructions]
Jumps
β€’ How does the number of jumps affect the result?
β€’ What happens when we apply the different de-blurring methods to barcodes with different numbers of jumps?
β€’ Compared Yu Mao's method & the Wiener filter
β€’ Wrote code to calculate the number of jumps
β€’ 3 levels of jumps:
o Easy: 4 jumps
o Medium: 22 jumps
o Hard: 45 jumps (a regular barcode)
What are Jumps?
β€’ Jump: when the binary signal goes from 0 to 1 or from 1 to 0
β€’ Wrote code to calculate the number of jumps (a sketch follows below)
β€’ 3 levels of jumps:
o Easy: 4 jumps
o Medium: 22 jumps
o Hard: 45 jumps (a regular barcode)
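A minimal sketch of such a jump counter; the 0.5 binarization threshold is an illustrative assumption.

import numpy as np

def count_jumps(signal):
    """Count the transitions 0 -> 1 or 1 -> 0 in a (possibly real-valued) barcode signal."""
    bits = (np.asarray(signal, dtype=float) >= 0.5).astype(int)  # binarize first
    return int(np.count_nonzero(np.diff(bits)))

# Example: count_jumps([0, 0, 1, 1, 0, 1]) -> 3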
Analyzing Jumps
β€’ How does the number of jumps affect the result (the clear barcode)?
β€’ Compare Yu Mao's method & the Wiener filter
Comparison for Small Jumps (4 jumps)
[Figures: Yu Mao's gradient projection vs. Wiener filter reconstructions]
Comparison for Hard Jumps (45 jumps)
[Figures: Yu Mao's gradient projection vs. Wiener filter reconstructions]
Wiener Filter with Varying Jumps
- More jumps, greater error
- Gets drastically worse with more jumps
Yu Mao's Gradient Projection with Varying Jumps
- More jumps, greater error
- Gets only slightly worse with more jumps
Conclusion
Yu Mao's method is better overall: it produces less error.
Across the jump cases, it has a consistent error rate of 20%-30%.
The Wiener filter did not have a consistent error rate: it was consistent only for small and medium jumps; at 45 jumps, its error rate was 40%-50%.
BLIND DECONVOLUTION
Yifan Zhang
Everardo Uribe
DERIVATION OF MODEL
We have:
y = k βˆ— u + n
For our approach, we assume that k, the kernel, is a symmetric point-spread function.
Since it is symmetric, flipping it produces an equivalent kernel. Writing ỹ, ũ, ñ for the flipped signals and F for the flip matrix (so that ũ = Fu):
k̃ = Fk = k
We flip the entire equation and begin the reconfiguration:
ỹ = k βˆ— ũ + ñ
Convolving y = k βˆ— u + n with ũ, and the flipped equation with u, the blur term k βˆ— u βˆ— ũ is common to both, so
ỹ βˆ— u = y βˆ— ũ βˆ’ n βˆ— ũ + ñ βˆ— u
In matrix form, where Y and N are matrix representations of y and n (and Ỹ, Ñ of their flips):
Ỹu = YFu βˆ’ NFu + Ñu
(Ỹ βˆ’ YF + NF βˆ’ Ñ)u = 0
Dropping the unknown noise terms:
A = Ỹ βˆ’ YF
Au = 0
so we must solve Ax = 0 for the unknown signal x.
DERIVATION OF MODEL
Signal Segmentation & Final Equation:
π‘₯1
[𝐴1 𝐴2 𝐴3] π‘₯2
= 𝐴1 · π‘₯1 + 𝐴2·π‘₯2 + 𝐴3·π‘₯3 = 0
π‘₯3
β€’
Middle bars are always the same, represented
as vector [0 1 0 1 0] in our case.
𝐴π‘₯ = 𝐴1 · π‘₯1 + 𝐴3·π‘₯3 = 𝑏 ; 𝑏 = βˆ’π΄2·π‘₯2
We have to solve for x in:
𝐴π‘₯ = 𝑏
Gradient Projection
u^(n+1) = Ο€_[0,1](u^n βˆ’ dt Β· βˆ‡F(u^n))
β€’ Projection of gradient descent (a first-order optimization method)
β€’ Advantage:
  β€’ Allows us to restrict the solution to a range
β€’ Disadvantages:
  β€’ Takes a very long time
  β€’ Results are not extremely accurate
  β€’ Underestimates the signal
Least Squares
β€’ Estimates unknown parameters
β€’ Minimizes the sum of the squares of the errors
β€’ Considers observational errors (errors in the right-hand side b)
Least Squares (cont.)
β€’ Advantages:
  β€’ Returns results faster than the other methods
  β€’ Easy to implement (see the sketch after this list)
  β€’ Reasonably accurate results
  β€’ Great results for low and high noise
β€’ Disadvantage:
  β€’ Doesn't work well when there are errors in the matrix A
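A minimal least-squares sketch for the reduced system Ax = b, using NumPy's built-in solver; thresholding the result back to 0/1 afterwards is an illustrative extra step, not part of the slide.

import numpy as np

def least_squares_solve(A, b):
    """Ordinary least squares: the x minimizing ||A x - b||_2 (errors assumed only in b)."""
    x, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    return x

# The recovered barcode estimate can then be thresholded, e.g. (x >= 0.5).astype(float)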
Total Least Squares
β€’ A variant of least squares data modeling
β€’ Also considers errors in the matrix A
β€’ Computed using the SVD of a matrix C
β€’ SVD: Singular Value Decomposition, a matrix factorization
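A minimal sketch of the classical total least squares solution via the SVD of the augmented matrix C = [A b]. This is the textbook formulation, offered as an assumption of what "SVD of C" refers to, not the team's exact code.

import numpy as np

def total_least_squares(A, b):
    """Total least squares for A x ~ b: form C = [A b], take the right singular
    vector belonging to the smallest singular value, and rescale it."""
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                       # right singular vector of the smallest singular value
    if np.isclose(v[-1], 0.0):
        raise ValueError("TLS solution does not exist (last component is zero)")
    return -v[:-1] / v[-1]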
Total Least Squares (Cont.)
β€’ Advantages:
  β€’ Works on data where the other methods do not
  β€’ Better than least squares when there are more errors in the matrix A
β€’ Disadvantages:
  β€’ Doesn't work well for most data outside those extreme cases
  β€’ Overfits the data
  β€’ Not accurate
  β€’ Takes a long time