Transcript Slides

Image Segmentation
with Level Sets
Group reading
22-04-05
Outline
1. Active Contours
2. Level sets in Malladi 1995
3. Prior knowledge in Leventon 2000
1. Active Contours
1. Active Contours
1.1 Introduction to AC
1.2 Types of curves
1.3 Types of influences
1.4 Summary
1.1 Introduction to AC
• AC = curve fitted iteratively to an image
– based on its shape and the image values
– until it stabilises (ideally on an object’s boundary).
• curve:
– polygon = parametric AC
– continuous = geometric AC
1.1 Introduction to AC
• Example: cell nucleus
• Initialisation: external
• Fitting: iterative
– Using the current geometry
(“Contour influence”)
and the image contents
(“Image influence”)
– “Influence”:
scalar = energy
vector = force
– Until the energy is minimum,
or the forces are null
Image: Ian Diblik 2004
1.2 Types of curves
• Parametric AC:
– Stored as vertices
– Each vertex is moved
iteratively
• Geometric AC:
– Stored as coefficients
– Sampled before each
iteration
– Each sample is moved
– New coefficients are
computed (interpolation)
1.3 Types of influences
• Each vertex / sample vi is moved:
– in a direction that minimises its energy:
E(vi) = α · Econtour(vi) + β · Eimage(vi)
– or along the direction of the force:
F(vi) = α · Fcontour(vi) + β · Fimage(vi)
( linear combinations of influences )
1.3 Types of influences
• Contour influence:
– meant to control the geometry locally around vi
– corresponds to the prior knowledge on shape
• Energy:
example: Econtour(vi) = α · ||v′i||² + β · ||v″i||²
• Force:
example: Fcontour(vi) = α · v′i + β · v″i + γ · ni
where ni is the normal of the contour at vi
and γ is a constant pressure (“balloon”)
1.3 Types of influences
• Image influence:
– meant to drive the contour locally
– corresponds to the prior knowledge on the object’s boundaries
• Energy:
examples: Eimage(vi) = ( I(vi) − Itarget )²
Eimage(vi) = 1 / ( 1 + ||∇I(vi)|| )  ← Malladi 1995
Eimage(vi) = − ||∇I(vi)||  ← Leventon 2000
• Force:
examples: Fimage(vi) = Eimage(vi) · ni
Fimage(vi) = − ∇I(vi)
1.3 Types of influences
• Exotic influences:
– Econtour(vi) proportional to the deformations of a template (Hamarneh et al. 2001)
– Eimage(vi) based on region statistics Iinside / Ioutside (Bredno et al. 2000)
1.4 Summary
• Active Contours:
– function v : N → P( [ 0 , Iwidth ] × [ 0 , Iheight ] )
t ↦ { (x,y) }
– v(0) is provided externally
– v(t+1) is computed by moving sample points vi of v(t)
– moved using a linear combination of influences
– on convergence, v(tend) is the border of the object
• Issues:
– fixed topology
– possible self-crossing
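The update rule summarised above can be sketched in a few lines of numpy. This is a toy illustration, not the deck’s reference implementation: the weights alpha, beta, gamma and the image_force callback are hypothetical placeholders.

```python
import numpy as np

def snake_step(v, image_force, alpha=0.1, beta=0.01, gamma=0.05, dt=1.0):
    """One iteration of a force-driven parametric active contour.

    v           -- (n, 2) array of vertex coordinates (closed polygon)
    image_force -- function mapping (n, 2) vertices to (n, 2) force vectors
    alpha, beta -- weights of the 1st/2nd derivative terms (contour influence)
    gamma       -- constant "balloon" pressure along the normals
    """
    # Finite differences on the closed contour (wrap around the polygon).
    v1 = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / 2.0    # v'
    v2 = np.roll(v, -1, axis=0) - 2.0 * v + np.roll(v, 1, axis=0)  # v''
    # Outward normal: rotate the tangent by 90 degrees and normalise.
    n = np.stack([v1[:, 1], -v1[:, 0]], axis=1)
    n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-12
    # Linear combination of contour and image influences.
    f = alpha * v1 + beta * v2 + gamma * n + image_force(v)
    return v + dt * f
```

With a zero image force and only the second-derivative term active, a circular contour shrinks slightly at each step, as the smoothing term predicts.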
2. Level sets in Malladi 1995
2. Level sets in Malladi 1995
2.1 Introduction to LS
2.2 Equation of motion
2.3 Algorithm 1
2.4 Narrow-band extension
2.5 Summary
2.1 Introduction to LS
• Transition from Active Contours:
– contour v(t) → front Γ(t)
– contour energy → forces FA, FG
– image energy → speed function kI
• Level set:
– The level set c0 at time t of a function φ(x,y,t)
is the set of arguments { (x,y) , φ(x,y,t) = c0 }
– Idea: define a function φ(x,y,t) so that at any time,
Γ(t) = { (x,y) , φ(x,y,t) = 0 }
• there are many such φ
• φ has many other level sets, more or less parallel to Γ
• only Γ has a meaning for segmentation, not any other level set of φ
2.1 Introduction to LS
Usual choice for φ: signed distance to the front Γ(0)
           − d(x,y,Γ)  if (x,y) inside the front
φ(x,y,0) =   0          if (x,y) on the front
           + d(x,y,Γ)  if (x,y) outside the front
[Figure: grid of φ(x,y,t) values (signed distance, e.g. 5 outside, 0 on the front, −2 inside); the zero level set marks the front Γ(t)]
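A signed-distance grid like the one above can be computed from a binary mask of the initial region. A minimal sketch using scipy.ndimage (the disk-shaped mask in the usage below is made up for illustration):

```python
import numpy as np
from scipy import ndimage

def signed_distance(mask):
    """phi(x,y,0): negative inside the front, positive outside.

    mask -- boolean array, True for pixels inside the initial front Gamma(0)
    """
    # Distance from each inside pixel to the nearest outside pixel, and
    # vice versa; their difference is the signed distance of the slide.
    d_in = ndimage.distance_transform_edt(mask)
    d_out = ndimage.distance_transform_edt(~mask)
    return d_out - d_in
```

Note that on a discrete grid the zero level set generally falls between samples, as the next slide points out.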
2.1 Introduction to LS
[Figure: grid of φ(x,y,t) values before the update]
• no movement, only change of values
• the front may change its topology
• the front location may be between samples
φ(x,y,t+1) = φ(x,y,t) + ∆φ(x,y,t)
[Figure: grid of updated φ(x,y,t+1) values; the new zero level set has changed topology]
2.1 Introduction to LS
Segmentation with LS:
• Initialise the front Γ(0)
• Compute φ(x,y,0)
• Iterate:
φ(x,y,t+1) = φ(x,y,t) + ∆φ(x,y,t)
until convergence
• Mark the front Γ(tend)
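The iterated update can be sketched in numpy. This is a simplified full-grid version (no narrow band, no reinitialisation); the weights F_A, eps and the time step dt are hypothetical placeholders.

```python
import numpy as np

def level_set_step(phi, speed, F_A=1.0, eps=0.5, dt=0.5):
    """One update phi(t+1) = phi(t) + dt * dphi of a Malladi-style level set.

    phi   -- current embedding function (negative inside the front)
    speed -- image-based speed k_I, same shape as phi
             (~1 in uniform areas, ~0 on strong edges)
    F_A   -- constant advection ("balloon") force
    eps   -- weight of the curvature smoothing force F_G = -eps * kappa
    """
    gy, gx = np.gradient(phi)
    grad_norm = np.hypot(gx, gy) + 1e-12
    # Curvature kappa = div( grad(phi) / |grad(phi)| )
    nyy, _ = np.gradient(gy / grad_norm)
    _, nxx = np.gradient(gx / grad_norm)
    kappa = nxx + nyy
    # dphi/dt + k_I * ( F_A + F_G(kappa) ) * |grad(phi)| = 0
    return phi - dt * speed * (F_A - eps * kappa) * grad_norm
```

With speed = 1 everywhere and a positive F_A, φ decreases at every pixel, so the zero level set expands outward.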
2.2 Equation of motion
• Equation 17 p162 (a product of influences):
∂φ/∂t + k̂I ( FA + FG(κ) ) |∇φ| = 0
– ∂φ/∂t ≈ φ(x,y,t+1) − φ(x,y,t): link between spatial and temporal derivatives, but not the same type of motion as contours!
– k̂I: extension of the speed function kI (image influence)
– FA: constant “force” (balloon pressure)
– FG(κ): smoothing “force” depending on the local curvature κ (contour influence)
– |∇φ|: spatial derivative of φ
– κ = div( ∇φ / |∇φ| )
2.2 Equation of motion
• Speed function:
– kI is meant to stop the front on the object’s boundaries
– similar to image energy: kI(x,y) = 1 / ( 1 + ||∇I(x,y)|| )
– only makes sense on the front (level set 0)
– yet, the same equation holds for all level sets
→ extend kI to all level sets, defining k̂I
– possible extension:
k̂I(x,y) = kI(x′,y′)
where (x′,y′) is the point of the front closest to (x,y)
( such a k̂I(x,y) depends on the front location )
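A brute-force version of this extension simply copies kI from the nearest front sample to every grid point. In this sketch, the |phi| < 0.5 test used to locate front samples is an arbitrary discretisation choice, not Malladi’s:

```python
import numpy as np

def extend_speed(k_I, phi):
    """Extend the speed k_I, defined on the front, to every grid point by
    copying the value at the nearest front sample (brute force, O(N^2)).

    k_I -- image speed values on the full grid
    phi -- embedding function; front samples approximated by |phi| < 0.5
    """
    fy, fx = np.nonzero(np.abs(phi) < 0.5)  # samples lying on the front
    k_hat = np.empty_like(k_I)
    for y in range(phi.shape[0]):
        for x in range(phi.shape[1]):
            i = np.argmin(np.hypot(fy - y, fx - x))  # nearest front sample
            k_hat[y, x] = k_I[fy[i], fx[i]]
    return k_hat
```

This is exactly the costly step that the narrow-band extension (section 2.4) avoids.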
2.3 Algorithm 1 (p163)
[Figure: grid of φ(x,y,t) values at the start of the iteration]
1. compute the speed kI on the front,
extend it to all other level sets (k̂I)
2. compute φ(x,y,t+1) = φ(x,y,t) + ∆φ(x,y,t)
3. find the front location (for the next iteration)
and modify φ(x,y,t+1) by linear interpolation
[Figure: grid of φ(x,y,t+1) values after the update and reinterpolation]
2.4 Narrow-band extension
• Weaknesses of algorithm 1:
– update of all φ(x,y,t): inefficient, we only care about the front
– speed extension: computationally expensive
• Improvement:
– narrow band: only update a few level sets around Γ
– other extended speed: k̂I(x,y) = 1 / ( 1 + ||∇I(x,y)|| )
[Figure: narrow band of φ values (|φ| ≤ 2) around the front; pixels outside the band hold no value]
2.4 Narrow-band extension
• Caution:
– extrapolate the curvature κ at the edges of the band
– re-select the narrow band regularly:
an empty pixel cannot get a value → this may restrict
the evolution of the front
[Figure: narrow band of φ values around the front; the front cannot move past the band’s edge until the band is re-selected]
2.5 Summary
• Level sets:
– function φ : [ 0 , Iwidth ] × [ 0 , Iheight ] × N → R
( x , y , t ) ↦ φ(x,y,t)
– embeds a curve: Γ(t) = { (x,y) , φ(x,y,t) = 0 }
– Γ(0) is provided externally, φ(x,y,0) is computed
– φ(x,y,t+1) is computed by changing the values of φ(x,y,t)
– changes using a product of influences
– on convergence, Γ(tend) is the border of the object
• Issue:
– computation time (improved with narrow band)
3. Prior knowledge in Leventon 2000
3. Prior knowledge in Leventon 2000
3.1 Introduction
3.2 Level set model
3.3 Learning prior shapes
3.4 Using prior knowledge
3.5 Summary
3.1 Introduction
• New notations:
– function φ(x,y,t) → shape u(t) ( still a function of (x,y,t) )
– front Γ(t) → parametric curve C(q) ( = level set 0 of u(t) )
– extended speed k̂I → stopping function g(I)
• Malladi’s model:
∂u/∂t + g(I) ( FA + FG(κ) ) |∇u| = 0
• choose: FA = −c (constant value), FG(κ) = −κ (penalises high curvatures),
with κ = div( ∇u / |∇u| ):
∂u/∂t = g(I) |∇u| ( div( ∇u / |∇u| ) + c )
3.1 Introduction
u(t)
→ use a level set model (based on current geometry and local image values):
u(t+1) = u(t) + ∆u(t)
→ use prior knowledge and global image values to find the prior shape
which is closest to u(t): call it u*(t)
→ combine linearly these two new shapes to find the next iteration:
u(t+1) = u(t) + λ1 ∆u(t) + λ2 ( u*(t) − u(t) )
3.2 Level set model
• not Malladi’s model ( implicit geometric active contour ):
∂u/∂t = g(I) |∇u| ( div( ∇u / |∇u| ) + c )
• it’s Caselles’ model ( implicit geodesic active contour ):
∂u/∂t = |∇u| div( g(I) ∇u / |∇u| ) + c g(I) |∇u|
       = g(I) ( c + κ ) |∇u| + ∇u · ∇g
• use it in: u(t+1) = u(t) + λ1 ∆u(t) + λ2 ( u*(t) − u(t) )
That’s all about level sets. Now it’s all about u*.
3.3 Learning prior shapes
From functions to vectors:
• u(t) (continuous)
• sample it on the image grid
[Figure: grid of sampled values of u]
• put the values in one column → vector u(t) (discrete)
3.3 Learning prior shapes
• Storing the training set:
– set of n images manually segmented: { Ci , 1 ≤ i ≤ n }
– compute the embedding functions: T = { ui , 1 ≤ i ≤ n }
– compute the mean and offsets of T:
μ = 1/n Σi ui
ûi = ui − μ
– build the matrix M of offsets:
M = ( û1 û2 … ûn )
– build the covariance matrix 1/n M Mᵀ, decomposed as:
1/n M Mᵀ = U Σ Uᵀ
U orthogonal, columns: orthogonal modes of shape variation
Σ diagonal, values: corresponding singular values
3.3 Learning prior shapes
• Reducing the number of dimensions:
– u: dimension Nd (number of samples, e.g. Iwidth × Iheight)
– keep only the first k columns of U: Uk (most significant modes)
– represent any shape u as:
α = Ukᵀ ( u − μ )
α = “shape parameter” of u, dimension k only
– restore a shape from α:
u ≈ Uk α + μ
– other parameter defining a shape u: pose p (position)
– prior shape: u* ↔ ( α , p )
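The training and projection steps can be sketched with numpy’s SVD. The function names are made up for illustration; a real implementation would also store the retained eigenvalues as Σk for the shape prior.

```python
import numpy as np

def train_shape_model(U_shapes, k):
    """PCA shape model from n training embedding functions.

    U_shapes -- (Nd, n) matrix whose columns are the flattened u_i
    k        -- number of modes to keep
    Returns the mean mu, mode matrix U_k (Nd, k), and eigenvalues of
    (1/n) M M^T for the kept modes.
    """
    mu = U_shapes.mean(axis=1, keepdims=True)        # mean shape
    M = U_shapes - mu                                # offsets u_i - mu
    U, s, _ = np.linalg.svd(M, full_matrices=False)  # (1/n) M M^T = U S U^T
    return mu, U[:, :k], (s[:k] ** 2) / U_shapes.shape[1]

def to_alpha(u, mu, U_k):
    """Shape parameter alpha = U_k^T (u - mu)."""
    return U_k.T @ (u - mu)

def from_alpha(alpha, mu, U_k):
    """Approximate shape u = U_k alpha + mu."""
    return U_k @ alpha + mu
```

When k equals the number of training shapes, projecting a training shape to α and back restores it exactly; smaller k gives the low-dimensional approximation used on the slide.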
3.3 Learning prior shapes
• At the end of the training (toy example, k = 3):
Σk = diag( 10 , 5 , 2 )  →  Σk⁻¹ = diag( 0.1 , 0.2 , 0.5 )
( first mode: most common variation; last mode: least common variation )
P(α) = 1 / ( (2π)^(k/2) |Σk|^(1/2) ) · exp( −½ αᵀ Σk⁻¹ α )
– α = ( 2 , 0 , 0.1 )ᵀ → αᵀ Σk⁻¹ α = 0.405 → P(α) big
– α = ( 0.1 , 1 , 5 )ᵀ → αᵀ Σk⁻¹ α = 7.901 → P(α) small
The most common shapes have a higher probability.
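The Gaussian shape prior can be evaluated in a few lines, assuming (as on the slide) a diagonal Σk:

```python
import numpy as np

def shape_prior(alpha, sigma_diag):
    """Gaussian shape prior P(alpha) ~ exp(-1/2 alpha^T Sigma_k^-1 alpha).

    alpha      -- shape parameter, length k
    sigma_diag -- diagonal of Sigma_k, length k
    Returns the Mahalanobis term alpha^T Sigma_k^-1 alpha and P(alpha).
    """
    alpha = np.asarray(alpha, dtype=float)
    sigma_diag = np.asarray(sigma_diag, dtype=float)
    m = np.sum(alpha ** 2 / sigma_diag)  # diagonal Mahalanobis distance
    p = np.exp(-0.5 * m) / np.sqrt((2 * np.pi) ** len(alpha) * np.prod(sigma_diag))
    return m, p
```

With Σk = diag(10, 5, 2) and α = (2, 0, 0.1)ᵀ this gives αᵀΣk⁻¹α = 0.405, matching the first toy value above.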
3.4 Using prior knowledge
u(t)
→ use a level set model (based on current geometry and local image values):
u(t+1) = u(t) + ∆u(t)
→ use prior knowledge and global image values to find the prior shape
which is closest to u(t): call it u*(t)
→ combine linearly these two new shapes to find the next iteration:
u(t+1) = u(t) + λ1 ∆u(t) + λ2 ( u*(t) − u(t) )
3.4 Using prior knowledge
• Finding the prior shape u* closest to u(t):
– Given the current shape u(t) and the image I, we want:
u*map = ( αmap , pmap ) = arg max P( α , p | u(t) , I )
– Bayes rule:
P( α , p | u , I ) = P( u | α , p ) · P( I | u , α , p ) · P(α) · P(p) / P( u , I )
· P( u , I ): constant
· the left-hand side has to be maximised
· P(p) = U(−∞,∞): uniform distribution of poses (no prior knowledge)
· P( u | α , p ) = P( u | u* ) = exp( − Voutside ): compares u and u*
· P( I | u , α , p ) = P( I | u , u* ) ≈ P( I | u* ) = exp( − | h(u*) − |∇I| |² ):
compares u* with the image features
· P(α) = 1 / ( (2π)^(k/2) |Σk|^(1/2) ) · exp( −½ αᵀ Σk⁻¹ α ):
measures how likely u* is as a prior shape
3.4 Using prior knowledge
• P( u | u* ):
P( u | u* ) = exp( − Voutside ) , Voutside: volume of u outside of u*
Voutside = Card { (x,y) , u(x,y) < 0 < u*(x,y) }
– meant to choose the prior shape u* most similar to u
– only useful when C(q) gets close to the final boundaries
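With the signed-distance convention (negative inside a shape), Voutside is a simple pixel count; a minimal sketch:

```python
import numpy as np

def v_outside(u, u_star):
    """Pixels inside the current shape u (u < 0) but outside the prior
    u* (u* > 0): the volume of u lying outside of u*."""
    return int(np.count_nonzero((u < 0) & (u_star > 0)))
```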
3.4 Using prior knowledge
[Figure: a current shape u(t) compared with three prior shapes:
P( u(t) | u*1 ) = 1 , P( u(t) | u*2 ) = 0.6 , P( u(t) | u*3 ) = 1]
3.4 Using prior knowledge
• P( I | u , u* ) :
– meant to choose the prior shape u* closest to the image
– depends on the boundary features (image influence)
P( I | u* ) = exp( − | h(u*) − |∇I| |² )
[Figure: image gradient |∇I(x,y)| along a scan line, compared with h(u*), a function peaked on the zero level set of u* (values u* = −2, 0, 2)]
3.4 Using prior knowledge
[Figure: u(t) pulled toward the prior u*1, i.e. u(t) + λ2 ( u*1 − u(t) );
P( I | u*1 ) = 0.9 , P( I | u*2 ) = 0.6]
3.5 Summary
• Training:
– compute mean and covariance of training set
– reduce dimension
• Segmentation:
– initialise C, compute u(0) as a signed distance
– to find u(t+1) from u(t), compute:
a variation ∆u(t) using the current shape and local image values
a prior shape u*(t) so that:
• u* is a likely prior shape
• u* encloses u(t)
• u* is close to the object boundaries
– combine ∆u(t) and u* to get u(t+1)
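The final combination step is a one-liner; the default λ1, λ2 values here are hypothetical, not Leventon’s:

```python
def leventon_step(u, du, u_star, lam1=0.8, lam2=0.2):
    """u(t+1) = u(t) + lam1 * du(t) + lam2 * (u*(t) - u(t)).

    u      -- current shape u(t) (scalar or numpy array)
    du     -- level-set variation delta-u(t)
    u_star -- closest prior shape u*(t)
    """
    return u + lam1 * du + lam2 * (u_star - u)
```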
Conclusions
• Active contours:
– use image values and a target geometry
– topology: fixed, and not robust (possible self-crossing)
• Level sets:
– also use image values and a target geometry
– but in higher dimension, with a different motion
– topology: flexible and robust
• Prior knowledge:
– prior shapes influence the segmentation
– not specific to level sets
– ideas can be adapted to other features (boundaries, pose)
That’s all folks!