CS589-04 Digital Image Processing Lecture 6. Image Segmentation Spring 2008 New Mexico Tech.


CS589-04 Digital Image Processing
Lecture 6. Image Segmentation
Spring 2008
New Mexico Tech
Fundamentals
►
Let R represent the entire spatial region occupied by an
image. Image segmentation is a process that partitions R
into n subregions, R1, R2, …, Rn, such that
(a) $\bigcup_{i=1}^{n} R_i = R$.
(b) $R_i$ is a connected set, $i = 1, 2, \ldots, n$.
(c) $R_i \cap R_j = \varnothing$ for all $i$ and $j$, $i \neq j$.
(d) $Q(R_i) = \text{TRUE}$ for $i = 1, 2, \ldots, n$.
(e) $Q(R_i \cup R_j) = \text{FALSE}$ for any adjacent regions $R_i$ and $R_j$.
11/6/2015
2
Background
►
First-order derivative
$\frac{\partial f}{\partial x} = f'(x) = f(x+1) - f(x)$
►
Second-order derivative
$\frac{\partial^2 f}{\partial x^2} = f(x+1) + f(x-1) - 2f(x)$
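The two differences above are easy to check on a 1-D intensity profile; a minimal NumPy sketch (the ramp-plus-step profile values are made up for illustration):

```python
import numpy as np

def first_derivative(f):
    # f'(x) = f(x+1) - f(x); result is one sample shorter than the input
    return f[1:] - f[:-1]

def second_derivative(f):
    # f''(x) = f(x+1) + f(x-1) - 2 f(x); defined at interior samples only
    return f[2:] + f[:-2] - 2 * f[1:-1]

# A ramp followed by a step: the classic 1-D edge profile
profile = np.array([5, 5, 4, 3, 2, 1, 1, 1, 6, 6, 6], dtype=float)
d1 = first_derivative(profile)    # constant (-1) along the ramp, one spike at the step
d2 = second_derivative(profile)   # nonzero only at ramp/step onsets, double sign at the step
```

Note the second difference responds with a +5/−5 pair at the step, the double response discussed on the next slide.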
Characteristics of First and Second Order
Derivatives
►
First-order derivatives generally produce thicker edges in
an image
►
Second-order derivatives have a stronger response to fine
detail, such as thin lines, isolated points, and noise
►
Second-order derivatives produce a double-edge response
at ramp and step transitions in intensity
►
The sign of the second derivative can be used to determine
whether a transition into an edge is from light to dark or
dark to light
Detection of Isolated Points
►
The Laplacian
$\nabla^2 f(x,y) = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$
$= f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4f(x,y)$

With the mask response $R = \sum_{k=1}^{9} w_k z_k$ at each point, an isolated point is detected by thresholding:
$g(x,y) = \begin{cases} 1 & \text{if } |R(x,y)| \geq T \\ 0 & \text{otherwise} \end{cases}$
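A sketch of point detection with the 4-neighbour Laplacian mask above, using SciPy for the mask correlation (the 7×7 test image and the threshold value are arbitrary choices for illustration):

```python
import numpy as np
from scipy.ndimage import correlate

# Laplacian mask matching the definition above (4-neighbour version)
laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def detect_points(f, T):
    # g(x,y) = 1 where |R(x,y)| >= T, else 0
    R = correlate(f.astype(float), laplacian, mode='constant')
    return (np.abs(R) >= T).astype(np.uint8)

# Flat image with one isolated bright point
img = np.zeros((7, 7))
img[3, 3] = 10.0
g = detect_points(img, T=35)   # only the isolated point survives the threshold
```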
Line Detection
►
Second derivatives result in a stronger response and
produce thinner lines than first derivatives
►
The double-line effect of the second derivative must be
handled properly
Detecting Lines in Specified Directions
►
Let R1, R2, R3, and R4 denote the responses of the masks in
Fig. 10.6. If, at a given point in the image, |Rk|>|Rj|, for all
j≠k, that point is said to be more likely associated with a
line in the direction of mask k.
Edge Detection
►
Edges are pixels where the brightness function changes
abruptly
►
Edge models
Basic Edge Detection by Using First-Order
Derivative
$\nabla f = \mathrm{grad}(f) = \begin{bmatrix} g_x \\ g_y \end{bmatrix} = \begin{bmatrix} \partial f/\partial x \\ \partial f/\partial y \end{bmatrix}$

The magnitude of $\nabla f$:
$M(x,y) = \mathrm{mag}(\nabla f) = \sqrt{g_x^2 + g_y^2}$

The direction of $\nabla f$:
$\alpha(x,y) = \tan^{-1}(g_y / g_x)$

The direction of the edge:
$\alpha - 90^\circ$
Basic Edge Detection by Using First-Order
Derivative
Edge normal:
$\nabla f = \mathrm{grad}(f) = \begin{bmatrix} g_x \\ g_y \end{bmatrix} = \begin{bmatrix} \partial f/\partial x \\ \partial f/\partial y \end{bmatrix}$

Edge unit normal: $\nabla f / \mathrm{mag}(\nabla f)$

In practice, the magnitude is sometimes approximated by
$\mathrm{mag}(\nabla f) \approx \left|\frac{\partial f}{\partial x}\right| + \left|\frac{\partial f}{\partial y}\right| \quad \text{or} \quad \mathrm{mag}(\nabla f) \approx \max\left(\left|\frac{\partial f}{\partial x}\right|, \left|\frac{\partial f}{\partial y}\right|\right)$
Advanced Techniques for Edge Detection
►
The Marr-Hildreth edge detector
$G(x,y) = e^{-\frac{x^2+y^2}{2\sigma^2}}, \quad \sigma: \text{space constant}$

Laplacian of Gaussian (LoG):
$\nabla^2 G(x,y) = \frac{\partial^2 G(x,y)}{\partial x^2} + \frac{\partial^2 G(x,y)}{\partial y^2}$
$= \frac{\partial}{\partial x}\left[-\frac{x}{\sigma^2} e^{-\frac{x^2+y^2}{2\sigma^2}}\right] + \frac{\partial}{\partial y}\left[-\frac{y}{\sigma^2} e^{-\frac{x^2+y^2}{2\sigma^2}}\right]$
$= \left[\frac{x^2}{\sigma^4} - \frac{1}{\sigma^2}\right] e^{-\frac{x^2+y^2}{2\sigma^2}} + \left[\frac{y^2}{\sigma^4} - \frac{1}{\sigma^2}\right] e^{-\frac{x^2+y^2}{2\sigma^2}}$
$= \left[\frac{x^2 + y^2 - 2\sigma^2}{\sigma^4}\right] e^{-\frac{x^2+y^2}{2\sigma^2}}$
Marr-Hildreth Algorithm
1. Filter the input image with an n×n Gaussian lowpass
filter, where n is the smallest odd integer greater than or
equal to 6σ.
2. Compute the Laplacian of the image resulting from step 1.
3. Find the zero crossings of the image from step 2.
Equivalently,
$g(x,y) = \nabla^2[G(x,y) \star f(x,y)]$
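The three steps might be sketched as follows. Note that SciPy's `gaussian_filter` chooses its own kernel size rather than applying the n ≥ 6σ rule, and the zero-crossing test here checks sign changes between horizontal and vertical neighbours only, so this is an approximation of the algorithm, not a literal transcription:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def marr_hildreth(f, sigma):
    # Steps 1-2: Laplacian of the Gaussian-smoothed image
    log_img = laplace(gaussian_filter(f.astype(float), sigma))
    # Step 3: zero crossings -- sign changes between adjacent samples
    zc = np.zeros_like(log_img, dtype=np.uint8)
    s = np.sign(log_img)
    zc[:, :-1] |= (s[:, :-1] * s[:, 1:] < 0)   # horizontal neighbours
    zc[:-1, :] |= (s[:-1, :] * s[1:, :] < 0)   # vertical neighbours
    return zc

# Vertical step edge between columns 9 and 10
img = np.zeros((20, 20))
img[:, 10:] = 100.0
edges = marr_hildreth(img, sigma=2)   # zero crossings line up along the step
```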
The Canny Edge Detector
►
Optimal for step edges corrupted by white noise.
►
The Objective
1. Low error rate
All true edges should be found, and there should be no spurious
responses
2. Edge points should be well localized
The edges located must be as close as possible to the true edges
3. Single edge point response
The number of local maxima around the true edge should be
minimum
The Canny Edge Detector: Algorithm (1)
Let f ( x, y ) denote the input image and
G ( x, y ) denote the Gaussian function:
$G(x,y) = e^{-\frac{x^2+y^2}{2\sigma^2}}$
We form a smoothed image, $f_s(x,y)$, by
convolving $G$ and $f$:
$f_s(x,y) = G(x,y) \star f(x,y)$
The Canny Edge Detector: Algorithm (2)
Compute the gradient magnitude and direction (angle):
$M(x,y) = \sqrt{g_x^2 + g_y^2}$
and
$\alpha(x,y) = \arctan(g_y / g_x)$
where $g_x = \partial f_s / \partial x$ and $g_y = \partial f_s / \partial y$
Note: any of the filter mask pairs in Fig. 10.14 can be used
to obtain $g_x$ and $g_y$
The Canny Edge Detector: Algorithm (3)
The gradient magnitude $M(x,y)$ typically contains wide ridges around
local maxima. The next step is to thin those ridges.
Nonmaxima suppression:
Let $d_1, d_2, d_3,$ and $d_4$ denote the four basic edge directions for
a 3×3 region: horizontal, −45°, vertical, +45°, respectively.
1. Find the direction $d_k$ that is closest to $\alpha(x,y)$.
2. If the value of $M(x,y)$ is less than at least one of its two
neighbors along $d_k$, let $g_N(x,y) = 0$ (suppression);
otherwise, let $g_N(x,y) = M(x,y)$.
The Canny Edge Detector: Algorithm (4)
The final operation is to threshold $g_N(x,y)$ to reduce
false edge points.
Hysteresis thresholding:
$g_{NH}(x,y) = g_N(x,y) \geq T_H$
$g_{NL}(x,y) = g_N(x,y) \geq T_L$
and
$g_{NL}(x,y) = g_{NL}(x,y) - g_{NH}(x,y)$
The Canny Edge Detector: Algorithm (5)
Depending on the value of $T_H$, the edges in $g_{NH}(x,y)$
typically have gaps. Longer edges are formed using
the following procedure:
(a) Locate the next unvisited edge pixel, $p$, in $g_{NH}(x,y)$.
(b) Mark as valid edge pixels all the weak pixels in $g_{NL}(x,y)$
that are connected to $p$ using 8-connectivity.
(c) If all nonzero pixels in $g_{NH}(x,y)$ have been visited, go to
step (d); else return to (a).
(d) Set to zero all pixels in $g_{NL}(x,y)$ that were not marked as
valid edge pixels.
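Steps (a) through (d) amount to keeping every weak-edge component that touches a strong pixel. A compact sketch using SciPy's connected-component labeling in place of the explicit pixel-by-pixel scan (the small `gN` array and the two thresholds are made-up examples):

```python
import numpy as np
from scipy.ndimage import label

def hysteresis(gN, TL, TH):
    # Strong and weak edge maps from the two thresholds
    gNH = gN >= TH
    gNL = (gN >= TL) & ~gNH          # weak pixels only, as on the slide
    # Keep every 8-connected component of (weak | strong) that contains
    # at least one strong pixel
    lbl, n = label(gNL | gNH, structure=np.ones((3, 3)))
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(lbl[gNH])] = True
    keep[0] = False                  # background label never kept
    return keep[lbl]

gN = np.array([[0.0, 0.06, 0.12, 0.06, 0.0],
               [0.0, 0.0,  0.0,  0.0,  0.06],
               [0.0, 0.06, 0.0,  0.0,  0.0]])
edges = hysteresis(gN, TL=0.05, TH=0.10)
# The weak pixels 8-connected to the strong one at (0, 2) survive;
# the isolated weak pixel at (2, 1) is discarded.
```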
The Canny Edge Detection: Summary
►
Smooth the input image with a Gaussian filter
►
Compute the gradient magnitude and angle images
►
Apply nonmaxima suppression to the gradient magnitude
image
►
Use double thresholding and connectivity analysis to detect
and link edges
Figure: results with $T_L = 0.04$, $T_H = 0.10$, $\sigma = 4$, and a mask of size 25×25
Figure: results with $T_L = 0.05$, $T_H = 0.15$, $\sigma = 2$, and a mask of size 13×13
Edge Linking and Boundary Detection
►
Edge detection typically is followed by linking algorithms
designed to assemble edge pixels into meaningful edges
and/or region boundaries
►
Three approaches to edge linking
Local processing
Regional processing
Global processing
Local Processing
►
Analyze the characteristics of pixels in a small
neighborhood about every point (x,y) that has been
declared an edge point
►
All points that are similar according to predefined criteria are
linked, forming an edge of pixels.
Establishing similarity: (1) the strength (magnitude) and (2)
the direction of the gradient vector.
A pixel with coordinates (s,t) in Sxy is linked to the pixel at
(x,y) if both magnitude and direction criteria are satisfied.
Local Processing
Let S xy denote the set of coordinates of a neighborhood
centered at point (x, y ) in an image. An edge pixel with
coordinate (s, t ) in S xy is similar in magnitude to the pixel
at (x, y ) if
$|M(s,t) - M(x,y)| \leq E$
An edge pixel with coordinate (s, t) in $S_{xy}$ is similar in angle
to the pixel at (x, y) if
$|\alpha(s,t) - \alpha(x,y)| \leq A$
Local Processing: Steps (1)
1.
Compute the gradient magnitude and angle arrays,
M(x,y) and  ( x, y) , of the input image f(x,y)
2.
Form a binary image, g, whose value at any pair of
coordinates (x,y) is given by
$g(x,y) = \begin{cases} 1 & \text{if } M(x,y) > T_M \text{ and } \alpha(x,y) = A \pm T_A \\ 0 & \text{otherwise} \end{cases}$
$T_M$: threshold
$A$: specified angle direction
$T_A$: a "band" of acceptable directions about $A$
Local Processing: Steps (2)
3.
Scan the rows of g and fill (set to 1) all gaps (sets of 0s)
in each row that do not exceed a specified length, K.
4.
To detect gaps in any other direction, rotate g by this
angle and apply the horizontal scanning procedure in
step 3.
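Step 3's gap filling can be sketched as below (the toy row and the value of K are arbitrary); step 4 would rotate the image and reuse the same routine:

```python
import numpy as np

def fill_row_gaps(g, K):
    # In each row, set to 1 any run of 0s of length at most K
    # that is bounded by 1s on both sides
    out = g.copy()
    for row in out:
        ones = np.flatnonzero(row)
        for a, b in zip(ones[:-1], ones[1:]):
            if 1 < b - a <= K + 1:       # gap of length (b - a - 1) <= K
                row[a + 1:b] = 1
    return out

g = np.array([[1, 0, 0, 1, 0, 0, 0, 0, 1]], dtype=np.uint8)
filled = fill_row_gaps(g, K=2)   # the 2-gap is filled; the 4-gap is left alone
```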
Regional Processing
►
The locations of regions of interest in an image are known
or can be determined
►
Polygonal approximations can capture the essential shape
features of a region while keeping the representation of
the boundary relatively simple
►
Open or closed curve
Open curve: a large distance between two consecutive
points in the ordered sequence relative to the distance
between other points
Regional Processing: Steps
1. Let P be the sequence of ordered, distinct, 1-valued
points of a binary image. Specify two starting points,
A and B.
2. Specify a threshold, T, and two empty stacks, OPEN
and CLOSED.
3. If the points in P correspond to a closed curve, put A
into OPEN and put B into OPEN and CLOSED. If the
points correspond to an open curve, put A into OPEN
and B into CLOSED.
4. Compute the parameters of the line passing from the
last vertex in CLOSED to the last vertex in OPEN.
Regional Processing: Steps
5. Compute the distances from the line in step 4 to all
the points in P whose sequence places them between
the vertices from step 4. Select the point, Vmax, with
the maximum distance, Dmax.
6. If Dmax > T, place Vmax at the end of the OPEN stack
as a new vertex. Go to step 4.
7. Else, remove the last vertex from OPEN and insert it
as the last vertex of CLOSED.
8. If OPEN is not empty, go to step 4.
9. Else, exit. The vertices in CLOSED are the vertices of
the polygonal fit to the points in P.
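The OPEN/CLOSED stack procedure can be written more compactly as a recursive version of the same split rule (shown here for the open-curve case; the sample points and threshold are made up):

```python
def point_line_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b
    (ax, ay), (bx, by), (px, py) = a, b, p
    dx, dy = bx - ax, by - ay
    return abs(dx * (py - ay) - dy * (px - ax)) / (dx * dx + dy * dy) ** 0.5

def poly_fit(points, T):
    # Keep the farthest point as a new vertex if its distance exceeds T,
    # otherwise approximate the whole run by the segment (A, B)
    A, B = points[0], points[-1]
    if len(points) <= 2:
        return [A, B]
    i = max(range(1, len(points) - 1),
            key=lambda j: point_line_dist(points[j], A, B))
    if point_line_dist(points[i], A, B) <= T:
        return [A, B]
    # Recurse on both halves; drop the duplicated split vertex when joining
    return poly_fit(points[:i + 1], T)[:-1] + poly_fit(points[i:], T)

pts = [(0, 0), (1, 0.1), (2, 0), (3, 4), (4, 0), (5, 0)]
vertices = poly_fit(pts, T=0.5)   # the small 0.1 wiggle is absorbed, the peak is kept
```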
Global Processing Using the Hough Transform
►
“The Hough transform is a general technique for
identifying the locations and orientations of certain types of
features in a digital image. Developed by Paul Hough in
1962 and patented by IBM, the transform consists of
parameterizing a description of a feature at any given
location in the original image’s space. A mesh in the space
defined by these parameters is then generated, and at each
mesh point a value is accumulated, indicating how well an
object generated by the parameters defined at that point
fits the given image. Mesh points that accumulate relatively
larger values then describe features that may be projected
back onto the image, fitting to some degree the features
actually present in the image.”
http://planetmath.org/encyclopedia/HoughTransform.html
Edge-linking Based on the Hough Transform
1.
Obtain a binary edge image
2.
Specify subdivisions in the ρθ-plane
3.
Examine the counts of the accumulator cells for high
pixel concentrations
4.
Examine the relationship between pixels in chosen cell
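A minimal accumulator for the normal line representation ρ = x cos θ + y sin θ, matching steps 1–3 (the grid resolutions `n_theta` and `n_rho` are arbitrary choices for this sketch):

```python
import numpy as np

def hough_lines(binary, n_theta=180, n_rho=200):
    # Accumulate votes over the (rho, theta) parameter plane
    ys, xs = np.nonzero(binary)
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    diag = np.hypot(*binary.shape)           # |rho| never exceeds the diagonal
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    for x, y in zip(xs, ys):
        # Each edge pixel votes once per theta subdivision
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    return acc, thetas, rhos

# A perfect vertical line x = 7 should concentrate all 20 votes in one cell
img = np.zeros((20, 20), dtype=np.uint8)
img[:, 7] = 1
acc, thetas, rhos = hough_lines(img)
```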
Thresholding
1
g ( x, y)  
0
if f ( x, y )  T (object point)
if f ( x, y)  T (background point)
T : global thresholding
Multiple thresholding
a

g ( x, y )   b
c

11/6/2015
if f ( x, y )  T2
if T1  f ( x, y )  T2
if f ( x, y )  T1
The Role of Noise in Image Thresholding
The Role of Illumination and Reflectance
Basic Global Thresholding
1. Select an initial estimate for the global threshold, T.
2. Segment the image using T. This produces two groups of pixels: G1,
consisting of all pixels with intensity values > T, and G2, consisting of
pixels with values ≤ T.
3. Compute the average intensity values m1 and m2 for the pixels in
G1 and G2, respectively.
4. Compute a new threshold value:
$T = \frac{1}{2}(m_1 + m_2)$
5. Repeat steps 2 through 4 until the difference between values of T in
successive iterations is smaller than a predefined parameter $\Delta T$.
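The five steps above can be sketched directly; the bimodal test data (two Gaussian clusters of intensities) is synthetic:

```python
import numpy as np

def basic_global_threshold(f, dT=0.5):
    # Iterate T = (m1 + m2) / 2 until T changes by less than dT
    T = f.mean()                         # initial estimate (step 1)
    while True:
        G1, G2 = f[f > T], f[f <= T]     # step 2
        T_new = 0.5 * (G1.mean() + G2.mean())   # steps 3-4
        if abs(T_new - T) < dT:          # step 5
            return T_new
        T = T_new

# Bimodal toy data: dark background around 20, bright object around 200
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(20, 5, 500), rng.normal(200, 5, 500)])
T = basic_global_threshold(img)          # converges near the midpoint, ~110
```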
Optimum Global Thresholding Using Otsu’s
Method
►
Principle: maximizing the between-class variance
Let $\{0, 1, 2, \ldots, L-1\}$ denote the L distinct intensity levels
in a digital image of size $M \times N$ pixels, and let $n_i$ denote the
number of pixels with intensity $i$.
$p_i = n_i / MN \quad \text{and} \quad \sum_{i=0}^{L-1} p_i = 1$
If $k$ is a threshold value, $C_1 = [0, k]$ and $C_2 = [k+1, L-1]$:
$P_1(k) = \sum_{i=0}^{k} p_i \quad \text{and} \quad P_2(k) = \sum_{i=k+1}^{L-1} p_i = 1 - P_1(k)$
Optimum Global Thresholding Using Otsu’s
Method
The mean intensity value of the pixels assigned to class
$C_1$ is
$m_1(k) = \sum_{i=0}^{k} i P(i/C_1) = \frac{1}{P_1(k)} \sum_{i=0}^{k} i p_i$
The mean intensity value of the pixels assigned to class
$C_2$ is
$m_2(k) = \sum_{i=k+1}^{L-1} i P(i/C_2) = \frac{1}{P_2(k)} \sum_{i=k+1}^{L-1} i p_i$
$P_1 m_1 + P_2 m_2 = m_G$ (global mean value)
Optimum Global Thresholding Using Otsu’s
Method
Between-class variance, $\sigma_B^2$, is defined as
$\sigma_B^2 = P_1 (m_1 - m_G)^2 + P_2 (m_2 - m_G)^2$
$= P_1 P_2 (m_1 - m_2)^2$
$= \frac{(m_G P_1 - m_1 P_1)^2}{P_1(1 - P_1)} = \frac{(m_G P_1 - m)^2}{P_1(1 - P_1)}$
where $m = m(k) = \sum_{i=0}^{k} i p_i$ is the cumulative mean up to level $k$.
Optimum Global Thresholding Using Otsu’s
Method
The optimum threshold is the value, $k^*$, that maximizes
$\sigma_B^2(k)$:
$\sigma_B^2(k^*) = \max_{0 \leq k \leq L-1} \sigma_B^2(k)$
$g(x,y) = \begin{cases} 1 & \text{if } f(x,y) > k^* \\ 0 & \text{if } f(x,y) \leq k^* \end{cases}$
Separability measure: $\eta = \dfrac{\sigma_B^2}{\sigma_G^2}$
Otsu’s Algorithm: Summary
1. Compute the normalized histogram of the input
image. Denote the components of the histogram
by $p_i$, $i = 0, 1, \ldots, L-1$.
2. Compute the cumulative sums, $P_1(k)$, for $k = 0,
1, \ldots, L-1$.
3. Compute the cumulative means, $m(k)$, for $k = 0,
1, \ldots, L-1$.
4. Compute the global intensity mean, $m_G$.
5. Compute the between-class variance, $\sigma_B^2(k)$, for $k = 0,
1, \ldots, L-1$.
Otsu’s Algorithm: Summary
6. Obtain the Otsu threshold, $k^*$, as the value of $k$ that maximizes $\sigma_B^2(k)$.
7. Obtain the separability measure, $\eta$.
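Steps 1–7 in vectorized form, using the identity $\sigma_B^2(k) = (m_G P_1 - m)^2 / [P_1(1-P_1)]$ from the earlier slide (the two-level test image is synthetic):

```python
import numpy as np

def otsu(f, L=256):
    # Step 1: normalized histogram
    hist = np.bincount(f.ravel(), minlength=L).astype(float)
    p = hist / hist.sum()
    # Steps 2-4: cumulative sums P1(k), cumulative means m(k), global mean mG
    P1 = np.cumsum(p)
    m = np.cumsum(np.arange(L) * p)
    mG = m[-1]
    # Step 5: between-class variance for every k (0/0 at the ends -> 0)
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_B2 = (mG * P1 - m) ** 2 / (P1 * (1.0 - P1))
    sigma_B2 = np.nan_to_num(sigma_B2)
    # Step 6: the threshold maximizing sigma_B2
    k_star = int(np.argmax(sigma_B2))
    # Step 7: separability measure eta = sigma_B2(k*) / sigma_G2
    sigma_G2 = np.sum((np.arange(L) - mG) ** 2 * p)
    return k_star, sigma_B2[k_star] / sigma_G2

# Two-level image: perfect separability, so eta should be 1
img = np.concatenate([np.full(600, 50, np.uint8), np.full(400, 180, np.uint8)])
k_star, eta = otsu(img)
```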
Using Image Smoothing to Improve Global Thresholding
Using Edges to Improve Global Thresholding
Using Edges to Improve Global Thresholding
1. Compute an edge image as either the magnitude of the
gradient or the absolute value of the Laplacian of f(x,y).
2. Specify a threshold value, T.
3. Threshold the edge image to produce a binary image, which
is used as a mask to select pixels from f(x,y)
corresponding to "strong" edge pixels.
4. Compute a histogram using only the chosen pixels in
f(x,y).
5. Use the histogram from step 4 to segment f(x,y) globally.
Multiple Thresholds
In the case of K classes, $C_1, C_2, \ldots, C_K$, the between-class
variance is
$\sigma_B^2 = \sum_{k=1}^{K} P_k (m_k - m_G)^2$
where
$P_k = \sum_{i \in C_k} p_i \quad \text{and} \quad m_k = \frac{1}{P_k} \sum_{i \in C_k} i p_i$
The optimum threshold values, $k_1^*, k_2^*, \ldots, k_{K-1}^*$, maximize
$\sigma_B^2(k_1^*, k_2^*, \ldots, k_{K-1}^*) = \max_{0 < k_1 < \cdots < k_{K-1} < L-1} \sigma_B^2(k_1, k_2, \ldots, k_{K-1})$
Variable Thresholding: Image Partitioning
►
Subdivide an image into nonoverlapping rectangles
►
The rectangles are chosen small enough so that the
illumination of each is approximately uniform.
Variable Thresholding Based on Local Image
Properties
Let $\sigma_{xy}$ and $m_{xy}$ denote the standard deviation and mean value
of the set of pixels contained in a neighborhood $S_{xy}$, centered
at coordinates (x, y) in an image. The local thresholds are
$T_{xy} = a\sigma_{xy} + b m_{xy}$
If the background is nearly constant,
$T_{xy} = a\sigma_{xy} + b m_G$
$g(x,y) = \begin{cases} 1 & \text{if } f(x,y) > T_{xy} \\ 0 & \text{if } f(x,y) \leq T_{xy} \end{cases}$
Variable Thresholding Based on Local Image
Properties
A modified thresholding:
$g(x,y) = \begin{cases} 1 & \text{if } Q(\text{local parameters}) \text{ is true} \\ 0 & \text{otherwise} \end{cases}$
e.g.,
$Q(\sigma_{xy}, m_{xy}) = \begin{cases} \text{true} & \text{if } f(x,y) > a\sigma_{xy} \text{ AND } f(x,y) > b m_{xy} \\ \text{false} & \text{otherwise} \end{cases}$
Figure: a = 30, b = 1.5, $m_{xy} = m_G$
Variable Thresholding Using Moving Averages
►
Thresholding based on moving averages works well when
the objects are small with respect to the image size
►
Quite useful in document processing
►
The scanning (moving) typically is carried out line by line in
a zigzag pattern to reduce illumination bias
Variable Thresholding Using Moving Averages
Let $z_{k+1}$ denote the intensity of the point encountered in
the scanning sequence at step $k+1$. The moving average
(mean intensity) at this new point is given by
$m(k+1) = \frac{1}{n} \sum_{i=k+2-n}^{k+1} z_i = m(k) + \frac{1}{n}(z_{k+1} - z_{k+1-n})$
where $n$ denotes the number of points used in computing
the average and $m(1) = z_1/n$; the borders of the image are
padded with $n-1$ zeros.
Variable Thresholding Using Moving Averages
1
g ( x, y )  
0
if f ( x, y)  Txy
if f ( x, y )  Txy
Txy  bmxy
Figure: n = 20, b = 0.5
Region-Based Segmentation
►
Region Growing
1. Region growing is a procedure that groups pixels or subregions into
larger regions.
2. The simplest of these approaches is pixel aggregation, which starts
with a set of "seed" points and from these grows regions by
appending to each seed point those neighboring pixels that have
similar properties (such as gray level, texture, color, shape).
3. Region-growing-based techniques are better than edge-based
techniques in noisy images, where edges are difficult to detect.
Region-Based Segmentation
Example: Region Growing based on 8-connectivity
f ( x, y ) : input image array
S ( x, y ): seed array containing 1s (seeds) and 0s
Q( x, y ): predicate
Region Growing based on 8-connectivity
1. Find all connected components in S(x, y) and erode each
connected component to one pixel; label all such pixels
found as 1. All other pixels in S are labeled 0.
2. Form an image fQ such that, at a pair of coordinates (x, y),
fQ(x, y) = 1 if Q is satisfied; otherwise fQ(x, y) = 0.
3. Let g be an image formed by appending to each seed point
in S all the 1-valued points in fQ that are 8-connected to that
seed point.
4. Label each connected component in g with a different region
label. This is the segmented image obtained by region growing.
$Q = \begin{cases} \text{TRUE} & \text{if the absolute difference of the intensities between the seed and the pixel at } (x,y) \text{ is} \leq T \\ \text{FALSE} & \text{otherwise} \end{cases}$
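A sketch of steps 2–3 under one simplifying assumption: a single seed intensity (the mean over all seed pixels) rather than per-seed predicates, with SciPy's component labeling providing the 8-connectivity:

```python
import numpy as np
from scipy.ndimage import label

def region_grow(f, seeds, T):
    # Step 2: fQ = 1 where |f - seed intensity| <= T.
    # Assumption of this sketch: one seed intensity, the mean over seed pixels.
    seed_val = f[seeds > 0].mean()
    fQ = np.abs(f.astype(float) - seed_val) <= T
    # Step 3: keep the 8-connected components of fQ that contain a seed
    lbl, n = label(fQ, structure=np.ones((3, 3)))
    wanted = np.unique(lbl[(seeds > 0) & fQ])
    # Step 4 would relabel each component; here a binary mask is returned
    return np.isin(lbl, wanted[wanted > 0])

f = np.array([[10, 12, 50, 52],
              [11, 10, 51, 53],
              [90, 90, 90, 90]])
seeds = np.zeros_like(f)
seeds[0, 0] = 1
grown = region_grow(f, seeds, T=5)   # grows over the four pixels near intensity 10
```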
4-connectivity
8-connectivity
Region Splitting and Merging
R: entire image; $R_i$: a subregion of R; Q: a predicate
1. For any region $R_i$, if $Q(R_i)$ = FALSE,
divide the region $R_i$ into quadrants.
2. When no further splitting is possible,
merge any adjacent regions $R_j$ and $R_k$
for which $Q(R_j \cup R_k)$ = TRUE.
3. Stop when no further merging is possible.
$Q = \begin{cases} \text{TRUE} & \text{if } \sigma > a \text{ and } 0 < m < b \\ \text{FALSE} & \text{otherwise} \end{cases}$
Segmentation Using Morphological
Watersheds
►
Three types of points in a topographic interpretation:
 Points belonging to a regional minimum
 Points at which a drop of water would fall to a single
minimum. (The catchment basin or watershed of that
minimum.)
 Points at which a drop of water would be equally likely
to fall to more than one minimum. (The divide lines or
watershed lines.)
Watershed lines
Segmentation Using Morphological
Watersheds: Backgrounds
http://www.icaen.uiowa.edu/~dip/LECTURE/Segmentation3.html#watershed
Watershed Segmentation: Example
►
The objective is to find watershed lines.
►
The idea is simple:
 Suppose that a hole is punched in each regional minimum and that
the entire topography is flooded from below by letting water rise
through the holes at a uniform rate.
 When rising water in distinct catchment basins is about to merge, a
dam is built to prevent merging. These dam boundaries correspond
to the watershed lines.
Watershed Segmentation Algorithm
►
Start with all pixels with the lowest possible value.
 These form the basis for initial watersheds.
►
For each intensity level k, for each group of pixels of intensity k:
1. If adjacent to exactly one existing region, add these
pixels to that region.
2. Else if adjacent to more than one existing region,
mark as boundary.
3. Else start a new region.
Watershed Segmentation: Examples
The watershed algorithm is often used on the gradient image instead of the original image.
Watershed Segmentation: Examples
Due to noise and other local irregularities of the gradient, over-segmentation
might occur.
Watershed Segmentation: Examples
A solution is to limit the number of regional minima. Use markers to specify
the only allowed regional minima.
Watershed Segmentation: Examples
A solution is to limit the number of regional minima. Use markers to specify
the only allowed regional minima. (For example, gray-level values might be
used as a marker.)
Use of Motion in Segmentation
K-means Clustering
►
Partition the data points into K clusters randomly. Find the
centroids of each cluster.
►
For each data point:
 Calculate the distance from the data point to each cluster centroid.
 Assign the data point to the closest cluster.
►
Recompute the centroid of each cluster.
►
Repeat steps 2 and 3 until there is no further change in
the assignment of data points (or in the centroids).
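The iteration above can be sketched as follows; as an assumption, this sketch initializes centroids from evenly spaced samples rather than a random partition, to keep the example deterministic:

```python
import numpy as np

def kmeans(X, K, iters=100):
    # Deterministic init (an assumption of this sketch): evenly spaced samples
    centroids = X[:: max(1, len(X) // K)][:K].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # Recompute centroids (assumes no cluster ever becomes empty)
        new_centroids = np.array([X[assign == k].mean(axis=0) for k in range(K)])
        if np.allclose(new_centroids, centroids):   # no further change: stop
            break
        centroids = new_centroids
    return assign, centroids

# Two well-separated 2-D blobs
X = np.vstack([np.random.default_rng(1).normal(0, 0.5, (50, 2)),
               np.random.default_rng(2).normal(10, 0.5, (50, 2))])
assign, centroids = kmeans(X, K=2)   # each blob ends up in its own cluster
```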
Clustering
►
Example: D. Comaniciu and P. Meer, "Robust Analysis of Feature Spaces: Color Image Segmentation," 1997.