
Department of Physics and Astronomy
DIGITAL IMAGE PROCESSING
Course 3624
Topic 10 - Image Analysis
Professor Bob Warwick
10. Image Analysis
Mapping, filtering and restoration techniques are all aimed at producing an “enhanced” image as an input to IMAGE ANALYSIS.
At one extreme, image analysis can be carried out by visual inspection, perhaps supported by a number of interactive software tools. At the other extreme, it may involve automated processing utilising very complex computer algorithms.
Here we consider a few relevant techniques under the headings:
10.1 Simple interactive techniques
10.2 Image segmentation
10.3 Feature description & recognition
10.4 Pattern recognition via cross-correlation
10.1 Simple Interactive Tools
(i) Description of a Point (selected via a mouse/cursor)
Position = $(x, y)$ (and, via the calibration, the scene coordinates $(\theta, \phi)$)
Gray level = $f_{xy}$ (and, via the calibration, the flux density in W/m²)
(ii) Description of an Extended Object
Position = $\langle x\rangle, \langle y\rangle$ (the centroid within the defined object region, weighted by gray level)
Average gray level = $\langle f\rangle$
$$\langle x\rangle = \frac{\sum_{\rm box} x\,f_{xy}}{\sum_{\rm box} f_{xy}} \qquad \langle y\rangle = \frac{\sum_{\rm box} y\,f_{xy}}{\sum_{\rm box} f_{xy}} \qquad \langle f\rangle = \frac{\sum_{\rm box} f_{xy}}{\sum_{\rm box} 1}$$
(iii) Subtraction of a Background Signal
i.e. net $= \langle f\rangle - \langle b\rangle$, where:

$$\langle b\rangle = \frac{\sum_{\rm annulus} f_{xy}}{\sum_{\rm annulus} 1}$$

(the background level $\langle b\rangle$ is the mean gray level in an annulus surrounding the object)
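As a minimal sketch of (ii) and (iii) in Python with NumPy, assuming the object region and background annulus are supplied as boolean masks (hypothetical helpers, not part of the lecture):

```python
import numpy as np

def describe_object(image, box_mask, annulus_mask):
    """Centroid, average gray level and background-subtracted net signal."""
    y, x = np.indices(image.shape)
    f = image[box_mask]

    # Gray-level-weighted centroid: <x> = sum(x f) / sum(f), likewise <y>
    cx = np.sum(x[box_mask] * f) / np.sum(f)
    cy = np.sum(y[box_mask] * f) / np.sum(f)

    # Average gray level: <f> = sum(f) / sum(1)
    f_mean = f.sum() / box_mask.sum()

    # Background per pixel from the annulus, then net = <f> - <b>
    b_mean = image[annulus_mask].sum() / annulus_mask.sum()
    return cx, cy, f_mean, f_mean - b_mean
```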
(iv) Distribution along a 1-d Cut through the Image
(v) Distribution Radially and Azimuthally around a Point
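One way to sketch item (v): build a radial profile by binning pixels on their distance from the chosen point. This is a NumPy illustration with an assumed bin width, not a prescribed algorithm:

```python
import numpy as np

def radial_profile(image, x0, y0, bin_width=1.0):
    """Mean gray level as a function of radius about (x0, y0)."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - x0, y - y0)
    bins = (r / bin_width).astype(int)

    # Mean gray level per radial bin: sum(f) / pixel count in each bin
    sums = np.bincount(bins.ravel(), weights=image.ravel())
    counts = np.bincount(bins.ravel())
    return sums / np.maximum(counts, 1)
```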
10.2 Image Segmentation
This is the process of sub-dividing an image into its constituent parts (i.e. into the features or objects which comprise the image). Segmentation is often the first step in an automated “feature-identification” procedure.
Segmentation often relies on either:
(i) The identification of the gray-level range which “characterises” the features/objects of interest (against the confusion of the surrounding scene).
(ii) The identification of edges/discontinuities which suitably delineate the features/objects (against the confusion of the surrounding scene).
Method (i) often reduces simply to defining a suitable threshold in the gray-level distribution.
Method (ii) often involves the use of an edge-detection filter.
Segmentation by applying a gray-level threshold - I
Example 1: The detection of bright point sources in an image.
[Figure: intensity profile with the source detection threshold marked]
Setting the threshold level:

Constant background: $T = b + n\,\sigma_{\rm noise}$

Varying background: $T = b_{\rm model} + n\,\sigma_{\rm noise}$
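A minimal sketch of this threshold test in NumPy, for the constant-background case; estimating b from the median and the noise from the standard deviation is an assumption made here for illustration (for a varying background, b would be replaced by a smooth model evaluated at each pixel):

```python
import numpy as np

def threshold_segment(image, n=5.0):
    """Flag pixels above T = b + n * sigma_noise (constant background)."""
    b = np.median(image)           # background estimate
    sigma = np.std(image)          # noise estimate (crude: includes sources)
    return image > b + n * sigma   # boolean source mask
```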
Segmentation by applying a gray-level threshold - II
Example 2: The detection of a set of bright point sources against a varying background in an image.
Segmentation by applying a gray-level threshold - III
Example 3: How well might the rectangles be “identified” by a suitable choice of gray-level threshold?
[Figure: two test images of rectangles, labelled EASY! and HARD!]
Segmentation via Edge Detection - I
Discontinuities/edges in images can be detected by the use of either
“gradient” or “Laplacian” spatial filters.
Example 1: Using the Sobel filters

$$\begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix} \qquad \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}$$
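A sketch of Sobel edge detection using SciPy's 2-d convolution; combining the two responses into a gradient magnitude is one common choice rather than the lecture's prescription:

```python
import numpy as np
from scipy.signal import convolve2d

# The two Sobel masks shown above
sobel_h = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]])
sobel_v = sobel_h.T

def sobel_edges(image):
    """Gradient magnitude from the two Sobel filter responses."""
    gx = convolve2d(image, sobel_v, mode="same", boundary="symm")
    gy = convolve2d(image, sobel_h, mode="same", boundary="symm")
    return np.hypot(gx, gy)
```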
Segmentation via Edge Detection - II
Example 2: Using a Laplacian mask and a “zero-crossing” algorithm.

$$\begin{pmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{pmatrix}$$
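A sketch of the zero-crossing idea: mark a pixel as an edge where the Laplacian response changes sign between adjacent pixels (the neighbour test used here is an assumption; practical implementations often also require a minimum slope across the crossing):

```python
import numpy as np
from scipy.signal import convolve2d

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])

def zero_crossings(image):
    """Mark pixels where the Laplacian response changes sign."""
    lap = convolve2d(image, laplacian, mode="same", boundary="symm")
    edges = np.zeros(image.shape, dtype=bool)
    # Sign change between horizontal neighbours ...
    edges[:, :-1] |= np.sign(lap[:, :-1]) != np.sign(lap[:, 1:])
    # ... or between vertical neighbours
    edges[:-1, :] |= np.sign(lap[:-1, :]) != np.sign(lap[1:, :])
    return edges
```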
A Full Segmentation Process
1. Compute a gradient image or “zero-crossing” image.
2. Apply a threshold to remove clutter and noise.
3. Apply “edge growing” and “edge thinning” algorithms.
4. Apply “edge linking” and “stray-filament removal” algorithms.
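As an illustration only, the first three steps might look as follows with scikit-image building blocks (sobel for the gradient image, skeletonize standing in for edge thinning; the threshold factor k is arbitrary, and edge growing, linking and stray-filament removal are not shown):

```python
from skimage.filters import sobel
from skimage.morphology import skeletonize

def segment_edges(image, k=2.0):
    """Gradient image -> threshold -> thinned edge map."""
    grad = sobel(image)               # step 1: gradient image
    mask = grad > k * grad.mean()     # step 2: threshold out clutter/noise
    return skeletonize(mask)          # step 3: edge thinning
```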
Image Segmentation - Example
10.3 Feature Description and Recognition
Once an image has been segmented into a set of individual objects or features, the next step is to characterize the image in terms of these components. The possibilities range from:
Object identification (easy): how many objects are there of a given type?
Scene analysis (hard): what is the inter-relation of the objects?
A common requirement is to identify the individual objects against a specified (and restricted) set of possibilities, i.e. a RECOGNITION problem.
How should the objects be represented? The two possibilities are:
• In terms of their (external) boundary characteristics, i.e. the morphology (shape) of the object.
• In terms of the (internal) object pixel characteristics, i.e. the gray level, colour or texture of the object.
A quantitative representation of the object characteristics is then provided by its descriptor values. Once measured, the object descriptor values are compared to the possible values for known types of object (held in a look-up table) so as to search for a match.
Some Object Descriptors
1. Compactness Parameter

$$\text{Compactness} = \frac{\text{Perimeter}^2}{\text{Area}}$$

2. pq'th Central Moment

$$m_{pq} = \sum_{\rm object} (x - \langle x\rangle)^p\, (y - \langle y\rangle)^q\, f_{xy}$$

with

$$m_{00} = \sum_{\rm object} f_{xy} \;\;\text{(normalization)}, \qquad m_{10} = \sum_{\rm object} (x - \langle x\rangle)\, f_{xy} = 0 \;\;\text{(defines } \langle x\rangle), \qquad m_{01} = \sum_{\rm object} (y - \langle y\rangle)\, f_{xy} = 0 \;\;\text{(defines } \langle y\rangle)$$

3. Texture

$$t_n = \sum_{\rm object} (f - \langle f\rangle)^n\, P(f)$$

with $t_0 = 1$ (normalization) and $t_1 = \sum (f - \langle f\rangle) P(f) = 0$, while $t_2 = \sum (f - \langle f\rangle)^2 P(f)$ is the variance of the gray-level distribution $P(f)$.
Ideally, descriptors are insensitive to variations in the object's size, translation and rotation.
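A minimal NumPy sketch of the first two descriptors, assuming the object arrives as a boolean mask over the image; the perimeter estimate simply counts object pixels that touch a non-object pixel, which is a crude assumption:

```python
import numpy as np

def compactness(mask):
    """Perimeter^2 / Area, with the perimeter taken as the count of
    object pixels having at least one 4-neighbour outside the object."""
    area = mask.sum()
    interior = (mask &
                np.roll(mask, 1, 0) & np.roll(mask, -1, 0) &
                np.roll(mask, 1, 1) & np.roll(mask, -1, 1))
    return (area - interior.sum()) ** 2 / area

def central_moment(image, mask, p, q):
    """m_pq = sum over object of (x - <x>)^p (y - <y>)^q f_xy."""
    y, x = np.indices(image.shape)
    f = np.where(mask, image, 0.0)
    cx = (x * f).sum() / f.sum()   # <x>, so that m_10 = 0
    cy = (y * f).sum() / f.sum()   # <y>, so that m_01 = 0
    return ((x - cx) ** p * (y - cy) ** q * f).sum()
```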
Shape Recognition via Fourier Descriptors
• Consider the set of (x, y) values of the boundary pixels as a set of complex numbers x + iy.
• Compute the DFT of this set of complex numbers.
• Extract a subset of the Fourier values and use these as descriptors.
• As a check, apply the inverse DFT to the restricted set (setting the non-descriptors to zero).
• Compare the limited set of descriptors with look-up tables for the “target” objects.
Aircraft silhouettes reconstructed from 32 DFT Descriptor values
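A sketch of the procedure with NumPy's FFT. Keeping the lowest-frequency coefficients (split between positive and negative frequencies) is an assumption about which subset to retain; the slide's example keeps 32 values:

```python
import numpy as np

def fourier_descriptors(boundary_xy, n_keep=32):
    """DFT descriptors of a boundary given as an ordered (N, 2) array."""
    z = boundary_xy[:, 0] + 1j * boundary_xy[:, 1]   # x + iy
    Z = np.fft.fft(z)
    kept = np.zeros_like(Z)
    half = n_keep // 2
    kept[:half] = Z[:half]        # lowest positive frequencies
    kept[-half:] = Z[-half:]      # lowest negative frequencies
    return kept

def reconstruct(descriptors):
    """Inverse DFT of the restricted set, as a check on the descriptors."""
    z = np.fft.ifft(descriptors)
    return np.column_stack([z.real, z.imag])
```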
10.4 Pattern recognition via cross-correlation
Convolution: $f(x, y) * h(x, y) \Leftrightarrow F(u, v) \times H(u, v)$

Cross-correlation: $f(x, y) \star t(x, y) \Leftrightarrow F(u, v) \times T^*(u, v)$
Pattern Recognition via Cross-Correlation cont.
Cross-correlation techniques are very powerful when searching for objects/patterns of fixed size and orientation.
Pattern Recognition via Cross-Correlation cont.
In 2-d the process involves cross-correlating the image with a sub-image (i.e. the template):

$$R_{xy} = \sum_a \sum_b f_{x+a,\,y+b}\; t_{a,b}$$
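A direct NumPy sketch of this sum; an FFT-based version via the correlation theorem above would be far faster, but the literal form shows the definition:

```python
import numpy as np

def cross_correlate(image, template):
    """R_xy = sum_{a,b} f_{x+a, y+b} t_{a,b} at every valid offset (x, y)."""
    ih, iw = image.shape
    th, tw = template.shape
    R = np.zeros((ih - th + 1, iw - tw + 1))
    for x in range(R.shape[0]):
        for y in range(R.shape[1]):
            R[x, y] = np.sum(image[x:x + th, y:y + tw] * template)
    return R

# The best match is where R peaks:
# x, y = np.unravel_index(np.argmax(R), R.shape)
```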