Transcript Document

Advanced Topics – I (EENG 4010) Computer Vision & Image Analysis (EENG 5640)

Introduction to Computer Vision

[Diagram: an Image Processing System maps an image to an image; a Computer Vision / Image Analysis / Image Understanding System maps an image to a scene description; a Pattern Recognition System maps a pattern vector (with image measurements as components in the current application) to a pattern classification label.]

Computer Vision generally involves pattern recognition

Typical Computer Vision Applications

 Medical Imaging
 Automated Manufacturing (some experts use "machine vision" as the term to describe computer vision for industrial applications; others use it as a synonym for computer vision)
 Remote Sensing
 Character Recognition
 Robotics

Binary Image Analysis

 Grey scale to binary transformation (Otsu’s method)
 Counting holes
 Counting objects
 Connected Component Labeling Algorithms
– Recursive Algorithm
– Two-Pass Row-by-Row Labeling Algorithm

Two-Pass Algorithm: Illustrative Example

[Figure: a binary image in which connected regions receive provisional labels 1–4 during the first pass, together with a label/parent table recording the discovered equivalences so that the second pass can replace each provisional label by its class representative.]
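The worked grid itself survives only as a stream of labels, so here is a minimal Python sketch (my own illustration, not the course's exact pseudocode) of the two-pass row-by-row algorithm with union-find bookkeeping of label equivalences:

import numpy as np

def two_pass_label(binary):
    # binary: 2-D array of 0s and 1s; 4-connectivity assumed.
    labels = np.zeros(binary.shape, dtype=int)
    parent = {}                      # label -> parent label (union-find)

    def find(l):                     # follow parents to the class representative
        while parent[l] != l:
            l = parent[l]
        return l

    next_label = 1
    rows, cols = binary.shape
    # Pass 1: assign provisional labels and record equivalences.
    for r in range(rows):
        for c in range(cols):
            if binary[r, c] == 0:
                continue
            up = labels[r - 1, c] if r > 0 else 0
            left = labels[r, c - 1] if c > 0 else 0
            neighbors = [l for l in (up, left) if l > 0]
            if not neighbors:
                labels[r, c] = next_label
                parent[next_label] = next_label
                next_label += 1
            else:
                m = min(neighbors)
                labels[r, c] = m
                for l in neighbors:  # merge the equivalence classes
                    ra, rb = find(l), find(m)
                    if ra != rb:
                        parent[max(ra, rb)] = min(ra, rb)
    # Pass 2: replace each provisional label by its representative.
    for r in range(rows):
        for c in range(cols):
            if labels[r, c] > 0:
                labels[r, c] = find(labels[r, c])
    return labels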

Binary Image analysis (Contd.)

 Morphological Processing
– Dilation, Erosion, Opening and Closing operations
– Example to illustrate the effects of the operations (see the sketch below)
 Region Properties
– Area
– Perimeter
– Circularity
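As a rough illustration of the four operations listed above, a minimal NumPy sketch (assuming a small, symmetric 0/1 structuring element se; not library-grade code):

import numpy as np

def dilate(binary, se):
    # 1 wherever the structuring element, centered at the pixel,
    # overlaps at least one foreground pixel.
    k = se.shape[0] // 2
    padded = np.pad(binary, k)
    out = np.zeros_like(binary)
    for r in range(binary.shape[0]):
        for c in range(binary.shape[1]):
            window = padded[r:r + se.shape[0], c:c + se.shape[1]]
            out[r, c] = 1 if np.any(window[se == 1]) else 0
    return out

def erode(binary, se):
    # 1 only where the structuring element fits entirely in the foreground.
    k = se.shape[0] // 2
    padded = np.pad(binary, k)
    out = np.zeros_like(binary)
    for r in range(binary.shape[0]):
        for c in range(binary.shape[1]):
            window = padded[r:r + se.shape[0], c:c + se.shape[1]]
            out[r, c] = 1 if np.all(window[se == 1]) else 0
    return out

def opening(binary, se):
    return dilate(erode(binary, se), se)   # erosion followed by dilation

def closing(binary, se):
    return erode(dilate(binary, se), se)   # dilation followed by erosion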

Medical Application of Morphology

Industrial Application of Morphology

Grey Level Image Processing

 Image Enhancement Methods
– Histogram Equalization and Contrast Stretching
– Mitigation of Noise Effects
 Image Smoothing
– Median Filtering
– Frequency Domain Operations (Low-Pass Filtering)
 Image Sharpening and Edge Detection
– High-Pass Filtering
– Differencing Masks (Prewitt, Sobel, Roberts, Marr-Hildreth operators)
– Canny Edge Detection and Linking

Histogram Equalization- Original Image

Histogram Equalization- Equalized Image

Low Contrast Image

Contrast Stretching (Linear Interpolation between 79-136)

Histogram Equalization
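A minimal NumPy sketch of the two enhancement methods illustrated by these slides, using the 79-136 stretch range quoted above (function names are mine):

import numpy as np

def equalize(img):
    # Histogram equalization of an 8-bit grey image via the cumulative
    # distribution: map each grey level through the scaled CDF.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]

def stretch(img, lo=79, hi=136):
    # Contrast stretching: linearly map [lo, hi] onto [0, 255],
    # clipping values outside the range.
    out = (img.astype(float) - lo) * 255.0 / (hi - lo)
    return np.clip(out, 0, 255).astype(np.uint8)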

Color Fundamentals

[Figure: the RGB color cube, with corners [0, 0, 0] Black, [1, 0, 0] Red, [0, 1, 0] Green, [0, 0, 1] Blue, [1, 1, 0] Yellow, [0, 1, 1] Cyan, [1, 0, 1] Magenta, and [1, 1, 1] White.]

RGB and HSI (HSV) Systems

RGB-HSI Conversion

RGB to HSI Conversion- Final Formulae

HSI-RGB Conversion

The method is given; you need to reason out why it works, or explore the web for the answer.
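The final formulae themselves did not survive the transcript; the standard geometric RGB-to-HSI formulas found in common textbooks look like this in Python (inputs assumed in [0, 1]):

import math

def rgb_to_hsi(r, g, b):
    # Intensity: average of the three channels.
    i = (r + g + b) / 3.0
    # Saturation: 1 - min/intensity (defined as 0 for black).
    s = 1.0 - min(r, g, b) / i if i > 0 else 0.0
    # Hue from the geometric formula; guard the degenerate grey case.
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    theta = math.acos(max(-1.0, min(1.0, num / den))) if den > 0 else 0.0
    h = theta if b <= g else 2.0 * math.pi - theta
    return h, s, i   # hue in radians, saturation and intensity in [0, 1]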

YIQ and YUV Systems for TVs, etc.

The YIQ system is used in TV signals. Its components are:

Luminance: Y = 0.30R + 0.59G + 0.11B

In-phase (red-cyan axis): I = 0.60R - 0.28G - 0.32B

Quadrature (magenta-green axis): Q = 0.21R - 0.52G + 0.31B

In some digital products and in JPEG/MPEG compression algorithms, the YUV system is used, as follows:

Y = 0.30R + 0.59G + 0.11B
U = 0.493 (B - Y)
V = 0.877 (R - Y)

Advantage: the luminance and chromaticity components can be coded with different numbers of bits.
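A direct transcription of these coefficients into Python (r, g, b assumed in [0, 1]):

def rgb_to_yiq(r, g, b):
    # YIQ coefficients from the slide
    y = 0.30 * r + 0.59 * g + 0.11 * b
    i = 0.60 * r - 0.28 * g - 0.32 * b
    q = 0.21 * r - 0.52 * g + 0.31 * b
    return y, i, q

def rgb_to_yuv(r, g, b):
    # YUV: chrominance as scaled differences from luminance
    y = 0.30 * r + 0.59 * g + 0.11 * b
    u = 0.493 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v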

Optical Illusion - I

Optical Illusion - II

Optical Illusion - III

Texture

 Pattern caused by a regular spatial arrangement of pixel colors or intensities.

 Two approaches:
– Structural or syntactic (usually used for synthetic images, by defining a grammar on texels).
– Statistical or quantitative (more useful in natural texture analysis; can be used to identify texture primitives (texels) in the image).

Quantitative Texture Measures

 Edge related
– Edginess (the proportion of strong edges in a small window around each pixel).
– Edge direction histograms (the pattern vector constituted by the proportions of edgels in the horizontal, vertical, and other quantized directions among the total pixels in a chosen window around the pixel (i, j) under consideration).
 Co-occurrence matrix based

Co-occurrence matrix based Measures

 Construction of the co-occurrence matrix C_d[i, j], where d is the displacement of j from i (e.g., (0, 1), (1, 1), etc.)
 Normalized and symmetric co-occurrence matrices N_d[i, j] and S_d[i, j]
 Zucker and Terzopoulos's chi-square metric to choose the best d (i.e., the d with most structure)
 Numeric measures from N_d[i, j]
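A minimal sketch of the construction just listed (grey levels assumed already quantized to `levels` values; the symmetric version shown adds the opposite displacement, one common definition):

import numpy as np

def cooccurrence(img, d, levels):
    # C_d[i, j]: count of pixel pairs where a pixel of grey level i has a
    # pixel of grey level j at displacement d = (dr, dc) from it.
    dr, dc = d
    C = np.zeros((levels, levels), dtype=int)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                C[img[r, c], img[r2, c2]] += 1
    return C

C = cooccurrence(img, (0, 1), 4)
N = C / C.sum()                          # normalized matrix N_d
S = C + cooccurrence(img, (0, -1), 4)    # symmetric matrix S_d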

Choice of the Best Co-occurrence Matrix and Computation of Features

Laws’ Texture Energy Measures

 Simple because masks are used
 2-D masks are created using 1-D masks:
– L5 (Level) = [ 1 4 6 4 1]
– E5 (Edge) = [-1 -2 0 2 1]
– S5 (Spot) = [-1 0 2 0 -1]
– W5 (Wave) = [-1 2 0 -2 1] (not in the text!)
– R5 (Ripple) = [ 1 -4 6 -4 1]
(e.g., the 5x5 L5E5 mask is obtained by multiplying the transpose of L5 by E5).
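A sketch of building the 2-D masks and a per-pixel texture energy map (the 15x15 averaging window is my assumption; SciPy's convolve2d does the filtering):

import numpy as np
from scipy.signal import convolve2d

# 1-D Laws masks from the slide
L5 = np.array([ 1,  4, 6,  4,  1])   # Level
E5 = np.array([-1, -2, 0,  2,  1])   # Edge
S5 = np.array([-1,  0, 2,  0, -1])   # Spot
W5 = np.array([-1,  2, 0, -2,  1])   # Wave
R5 = np.array([ 1, -4, 6, -4,  1])   # Ripple

# 5x5 L5E5 mask: transpose of L5 multiplied by E5 (outer product)
L5E5 = np.outer(L5, E5)

def texture_energy(img, mask, win=15):
    # Filter the image, then average the absolute filter response over a
    # win x win window to get the per-pixel texture energy measure.
    filtered = convolve2d(img, mask, mode='same')
    window = np.ones((win, win)) / float(win * win)
    return convolve2d(np.abs(filtered), window, mode='same')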

Laws’ Algorithm for Texture Energy Pattern Vector Construction

Laws’ Texture Segmentation Results- I

Laws’ Texture Segmentation Results- II

Laws’ Texture Segmentation Results- III

Gabor Filter Based Texture Analysis

The Gabor filter is mathematically represented by (refer to Wikipedia):

g(x, y; λ, θ, ψ, σ, γ) = exp(-(x'² + γ²y'²) / (2σ²)) · cos(2πx'/λ + ψ),
where x' = x cos θ + y sin θ and y' = -x sin θ + y cos θ, and

θ = orientation of the normal to the parallel stripes
λ = wavelength (inverse of the frequency) of the cosine factor
σ = sigma of the Gaussian function
ψ = phase offset of the cosine function
γ = spatial aspect ratio of the Gaussian
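A sketch that samples this function on a grid (parameter names match the list above; the kernel half-width is an assumption):

import numpy as np

def gabor_kernel(sigma, theta, lam, psi, gamma, half=15):
    # Sample the Gabor function on a (2*half+1) x (2*half+1) grid.
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xp = xs * np.cos(theta) + ys * np.sin(theta)    # rotated coordinates
    yp = -xs * np.sin(theta) + ys * np.cos(theta)
    envelope = np.exp(-(xp ** 2 + (gamma * yp) ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * xp / lam + psi)
    return envelope * carrier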

Image Segmentation

Image Segmentation
 Contour-Based Methods (e.g., Canny edge detection and linking)
 Region-Based Methods
– Partitioning/Clustering (e.g., K-means clustering, Isodata clustering, Ohlander et al.'s recursive histogram-based technique)
– Region Growing (e.g., the Haralick and Shapiro method)

Clustering Algorithms

 Clustering (partitioning) of pixels in the pattern space.
 Each pixel is represented by a pattern vector of properties; in the case of a colored image, for example, the vector of (r, g, b) values at the pixel. The pattern vector could be of any dimensionality (even 1, i.e., it could be a scalar, as in the case of a grey-level image).
 Depending upon the problem, it may include other measurements on texture, shading, etc. that constitute additional dimensional components of the pattern vector.

Note: i and j denote the pixel row and column.

K-Means Clustering algorithm
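The transcript keeps only the slide title, so here is a generic K-means sketch over pattern vectors (initialization and iteration cap are my choices):

import numpy as np

def kmeans(X, k, iters=100):
    # X: (n, d) array of pattern vectors; k: number of clusters.
    rng = np.random.default_rng(0)
    means = X[rng.choice(len(X), k, replace=False)]   # initial means
    for _ in range(iters):
        # Assign each vector to the nearest cluster mean.
        dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # Recompute each mean from its assigned vectors.
        new_means = np.array([X[assign == j].mean(axis=0)
                              if np.any(assign == j) else means[j]
                              for j in range(k)])
        if np.allclose(new_means, means):
            break
        means = new_means
    return assign, means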

Isodata Clustering Algorithm

ISODATA Clustering Problem

[Worked example: three clusters with means m_1, m_2, m_3 (numeric values given in the original figure) and per-cluster scatter matrices Σ_1 = Σ_2 = [[1, 0], [0, 1]] and Σ_3 = [[3, 1], [1, 3]].]

 To which cluster does X = (2, 2)^T belong?
 If the split threshold is T_S = 3.0 and the merge threshold is T_M = 1.0, what will be the new cluster configuration? Get the new cluster means in case of a split.

Image Databases- Content-Based Image Retrieval

Any Problem with Traditional Text (in Caption) Based Retrieval?

Typical SQL (Structured Query Language) query:

SELECT * FROM IMAGEDB
WHERE CATEGORY = 'GEMS'
  AND SOURCE = 'SMITHSONIAN'
  AND (KEYWORD = 'AMETHYST' OR KEYWORD = 'CRYSTAL' OR KEYWORD = 'PURPLE')

This will retrieve the gem collection of the Smithsonian Institution from its IMAGEDB database, restricting the search based on the logical combination of the keywords specified.

Looks like no problem here!

Limitations of the Key Word Based Retrieval

 Human coding of key words is expensive; and even so, some keywords by which one might later want to retrieve the image cannot be foreseen and hence may be left out. Key words may sometimes retrieve unexpected images as well!

 What kind of images do you expect with the key word ‘pigs’?

Unexpected Retrieval- An Example

Content-Based Image Retrieval

 Uses the Query-By-Example (QBE) concept
 IBM's QBIC (Query By Image Content) was the first system
 In QBE systems, you specify an example plus some constraints
 Typical example images for specification:
– A digital photograph
– A user-painted drawing
– A line-drawing sketch

Matching- Image Distance (Similarity Measures)

4 Major classes:  Color Similarity  Texture Similarity  Shape Similarity  Object and relationship similarity

Color Similarity Measures

 QBIC lets the user choose up to 5 colors from the color table and specify their percentages  Color histograms (K-bin) can be used

The histogram distance between image I and query Q is

D_hist(I, Q) = (h(I) - h(Q))^T A (h(I) - h(Q)),

where h(I) and h(Q) are K-bin histograms of images I and Q, and A is a (K x K) similarity matrix.
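In NumPy this quadratic-form distance is essentially one line; A would encode the perceptual similarity between bin colors (for A = I it reduces to the squared Euclidean distance between histograms):

import numpy as np

def d_hist(h_i, h_q, A):
    # D_hist(I, Q) = (h(I) - h(Q))^T A (h(I) - h(Q))
    diff = h_i - h_q
    return float(diff @ A @ diff)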

Color-Layout-Based Similarity

 Distances between corresponding grid squares of the database and example images are found and summed up:

D_gridded_color(I, Q) = Σ_g D_color(C_I(g), C_Q(g))

 Each grid square spans multiple pixels. Then how do you compare grid squares?
– Use mean color
– Use mean and standard deviation
– Use a multi-bin histogram

Texture-Based Similarity Measure

 Pick-and-click distance:

D_pick_and_click(I, Q) = min_{i ∈ I} ||T(i) - T(Q)||²

 Grid-based texture similarity can be found by the same process as in the gridded-color case:

D_gridded_texture(I, Q) = Σ_g D_texture(T_I(g), T_Q(g))

Shape-Based Similarity Measures

 Histogram approach is difficult to apply particularly when you want scale and rotation invariance.

 Boundary matching
 Granlund's Fourier descriptors for translation-, scale-, starting-point- (for boundary tracing), and rotation-invariant matching.

Boundary (Sketch) Matching

 Obtain a normalized image: reduce the original image to a fixed size, e.g., 64x64, and median filter.
 Two-stage edge detection with global and local thresholds.
 Perform linking and thinning.
 Find the correlation between the line drawing L's grid square and various shifts n of the DB image A's grid square, and sum up the best correlations:

D_sketch(I, Q) = 1 - Σ_g max_n [D_correlation(shift_n(A_I(g)), L_Q(g))]

Line Sketch of a Horse

Retrieved Images of Paintings

Granlund’s Fourier Descriptors

 Let u_n = x_n + j·y_n, n = 0, …, N-1, be the points on the boundary of the query shape, taken as complex numbers.
 The k-th discrete Fourier coefficient is given by a_k = (1/N) Σ_{n=0}^{N-1} u_n e^{-j2πnk/N}.
 Leaving out a_0 and a_1, we can compute translation-, rotation-, starting-point- and scale-invariant shape descriptors as follows: d_k = |a_k| / |a_1|, k = 2, 3, ….

Invariant Properties of Fourier Descriptors

 Translation: affects only the coefficient a_0, which is left out.
 Rotation: the |.| operation takes care of the problem, because a rotation multiplies every coefficient by the same unit-magnitude factor e^{jθ}.
 Scale: by using |a_1| for scale normalization, we eliminate the scale factor c.
 Starting point: a shift of the starting point multiplies a_k by a unit-magnitude factor; once again the |.| operation helps!
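A compact NumPy sketch of these descriptors (the number of coefficients kept is arbitrary):

import numpy as np

def fourier_descriptors(boundary, num=16):
    # boundary: (N, 2) array of (x, y) points traced around the shape.
    u = boundary[:, 0] + 1j * boundary[:, 1]   # points as complex numbers
    a = np.fft.fft(u) / len(u)                 # DFT coefficients a_k
    mags = np.abs(a)                           # |.| removes rotation and starting point
    return mags[2:2 + num] / mags[1]           # drop a_0 (translation), divide by |a_1| (scale)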

Relational Similarity

 Spatial relationships: relational graphs indicating inter-object relationships can be constructed. Once objects are identified, relationships can be matched by graph-matching techniques.
 Abstract relationships, e.g., a happy face: this involves separating and identifying a face region first, and then checking whether it is a happy or a sad face.

Matching in 2D

 Transformation: a mapping from one coordinate space to another.
– Linear, or nonlinear (nonlinear transformations are called warps)
– One-to-one correspondence between points if linear
– Invertible or non-invertible
 Image registration: the process of establishing a point-by-point correspondence between two images of a scene.

Affine Transformation (Mapping)

 Wikipedia definition: an affine (Latin affinis, "connected with") mapping between two vector (affine) spaces is a linear transformation followed by a translation.
 It preserves:
– Collinearity of points
– Ratios of distances along a line

Image Operations Represented by Affine Transformation

 Can we write the affine transformation Y = A·X + t with t absorbed into A?
 For scaling and rotation, yes.
 For translation, it does not seem to be possible. What is the way out?

Homogeneous Coordinates

 Introduced by August Ferdinand Möbius (1790-1868), a German mathematician and theoretical astronomer.
 (x, y) → (w·x, w·y, w); e.g., (2x, 2y, 2) = (3x, 3y, 3) as homogeneous points, since the coordinates are defined only up to the scale factor w.
 (x, y, z) → (w·x, w·y, w·z, w); the same concept can be extended to any n-dimensional space.
 You can represent a point at infinity (how?)

Usage of Homogeneous Coordinates

 The rotation, scaling, and translation can all be modeled as matrix multiplication operations, as in the sketch below.
 If control (easily distinguishable) points of two images are identified and registered with an affine mapping, it is easy to identify the presence of the same object(s) in both: Recognition by Alignment.
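The slide's matrices did not survive the transcript; a sketch of the three 3x3 homogeneous matrices and their composition (hypothetical numbers):

import numpy as np

def translation(tx, ty):
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def scaling(sx, sy):
    return np.array([[sx, 0.0, 0.0],
                     [0.0, sy, 0.0],
                     [0.0, 0.0, 1.0]])

# Rotate a point by 45 degrees, then translate, as one matrix product:
T = translation(2.0, 3.0) @ rotation(np.pi / 4)
p = T @ np.array([1.0, 0.0, 1.0])   # point (1, 0) in homogeneous form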

Shears and Reflections

 Horizontal shear: x' = x + e_y·y, y' = y (the unit-square corners (0, 1) and (1, 1) map to (e_y, 1) and (1+e_y, 1)).
 Vertical shear: x' = x, y' = y + e_x·x (the corners (1, 0) and (1, 1) map to (1, e_x) and (1, 1+e_x)).
 Reflection about the x-axis: (x, y) → (x, -y).
 Reflection about the y-axis: (x, y) → (-x, y).
You can express these too as affine transformations.

General Affine Transformation

 You may consolidate all the previous affine transformations (translation, rotation, scale, shear, and reflection) into the general affine transformation as follows:
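The matrix on the slide did not survive the transcript; the usual consolidated form, in homogeneous coordinates, is

\[
\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} =
\begin{pmatrix} a_{11} & a_{12} & t_x \\ a_{21} & a_{22} & t_y \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
\]

with the four a's encoding rotation, scale, shear, and reflection, and (t_x, t_y) the translation.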

Best 2D Transformation with Least Squares Fitting

 Let (x_j, y_j), j = 1, …, n, be the control points in an image, and let (x'_j, y'_j) be the corresponding points in the transformed image. The least-squares fit method then seeks to minimize the error

E = Σ_j [(a_11 x_j + a_12 y_j + t_x - x'_j)² + (a_21 x_j + a_22 y_j + t_y - y'_j)²].

 Setting the 6 partial derivatives of E, corresponding to the 6 transformation parameters, to zero, we get 6 equations for the 6 unknowns.
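A sketch of that least-squares solution posed as a linear system (NumPy's lstsq; the parameter ordering is my choice):

import numpy as np

def fit_affine(src, dst):
    # src, dst: (n, 2) arrays of corresponding control points.
    # Unknowns ordered as [a11, a12, tx, a21, a22, ty].
    n = len(src)
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src   # rows for the x' equations
    M[0::2, 2] = 1.0
    M[1::2, 3:5] = src   # rows for the y' equations
    M[1::2, 5] = 1.0
    b = dst.reshape(-1)  # interleaved x'_1, y'_1, x'_2, y'_2, ...
    params, *_ = np.linalg.lstsq(M, b, rcond=None)
    A = params[[0, 1, 3, 4]].reshape(2, 2)
    t = params[[2, 5]]
    return A, t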

2D-Object Recognition via Affine Mapping- Local Feature Focus Method

Local-Feature-Focus-Method {
  For each pair of model and image features {
    Find the maximal subset of matching neighboring features;
    Find the best T;
  }
  If enough features align, confirm the presence of the model object;
}

[Figure: model F with local features F1-F4, model E with features E1-E4, and an image with features G1-G8]

2D-Object Recognition via Affine Mapping- Pose Clustering Method

Pose-Clustering-Method (P, L) {
  // P is the set of image features
  // L is the set of stored model features
  For each pair (P_i, P_j) of image features
    For each pair (L_m, L_n) of model features of the same type {
      Compute the affine (RST) parameters a;
      Each a will be a point in the parameter space;
    }
  Examine the parameter space for large cluster modes;
  Return all a_k's corresponding to dominant modes;
}

[Figure: some typical model features – 'L', 'Y', 'T', Arrow, and 'X' junctions]

2D-Object Recognition via Affine Mapping- Geometric Hashing

 Useful when the model database (DB) is very large and an object in the image is known to be an affine transform of one of the DB models.
 If e_00, e_01, and e_10 are any three non-collinear feature points from the model feature point set M, any point x ∈ M can be represented as follows using the affine basis set (coordinate set) constructed from these 3 points:

x = ξ(e_10 - e_00) + η(e_01 - e_00) + e_00

 We use the property that under an affine transform T, the same relation holds:

Tx = ξ(Te_10 - Te_00) + η(Te_01 - Te_00) + Te_00

Geometric Hashing Method (Contd.)

GH-Offline-Preprocessing (D, H) {
  // D - database model set
  // H - initially empty hash table
  for each model M in D {
    Extract feature set F_M;
    for each non-collinear triplet E of F_M {
      for each other point x of F_M {
        Calculate (ξ, η) for x with respect to E;
        Store (M, E) in the table H at index (ξ, η);
      }
    }
  }
}

Geometric Hashing Method (Contd.)

GH-Online-Recognition (D, H) {
  // D - database model set; H - hash table constructed in the offline processing
  for each possible (M, E) tuple in the database {
    set Bin-Count(M, E) = 0;
  }
  Extract image feature set F_I;
  for each non-collinear triplet E of F_I {
    for each other point x of F_I {
      Calculate (ξ, η) for x with respect to E;
      Retrieve the (M, E) pairs in the table H at index (ξ, η);
      Increment the Bin-Count of those (M, E) pairs;
    }
  }
  Return the (M, E) values with the highest Bin-Count values.
}
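A toy Python sketch of the offline stage (the quantization step and data layout are my assumptions):

import itertools
from collections import defaultdict
import numpy as np

def affine_coords(x, e00, e01, e10):
    # Solve x - e00 = xi*(e10 - e00) + eta*(e01 - e00) for (xi, eta).
    B = np.column_stack([e10 - e00, e01 - e00])
    xi, eta = np.linalg.solve(B, x - e00)
    return xi, eta

def build_hash_table(models, step=0.1):
    # models: dict mapping a model name to an (n, 2) array of feature points.
    H = defaultdict(list)
    for name, pts in models.items():
        for E in itertools.permutations(range(len(pts)), 3):
            e00, e01, e10 = pts[E[0]], pts[E[1]], pts[E[2]]
            v1, v2 = e10 - e00, e01 - e00
            if abs(v1[0] * v2[1] - v1[1] * v2[0]) < 1e-9:
                continue                       # collinear triplet: unusable basis
            for k in range(len(pts)):
                if k in E:
                    continue
                xi, eta = affine_coords(pts[k], e00, e01, e10)
                # Quantize (xi, eta) so nearby coordinates hash to one bin.
                H[(round(xi / step), round(eta / step))].append((name, E))
    return H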

Practical Problems in Geometric Hashing

 Errors in feature point coordinates
 Missing and extra feature points
 Occlusions and multiple objects
 Unstable bases
 Weird affine transforms on subsets of points, and the consequent hypotheses for the presence of hallucinated objects (this problem is present in the pose clustering and focus feature methods also)

[Figure: (a) image points, (b) hallucinated object]

General Framework for 2D-Object Recognition via Relational Matching

 A Consistent Labeling Problem is a 5-tuple (P, L, R_P, R_L, f)
 P – object Parts (found in the image)
 L – object Labels (names of stored model features)
 R_P – set of Relationships between Parts
 R_L – set of constraint Relationships between Labels
 f is a mapping such that if (p_i, p_j) ∈ R_P, then (f(p_i), f(p_j)) ∈ R_L

Brute Force Method for Consistent Labeling - Interpretation Tree Search

Bool Interpretation-Tree-Search(P, L, R_P, R_L, f) {
  p = first(P);
  for each l in L {
    f' = f ∪ {(p, l)};   // add the part-label pair to the interpretation
    OK = true;
    for each N-tuple (p_1, …, p_N) in R_P containing p
      if (f'(p_1), …, f'(p_N)) is not in R_L { OK = false; break; }
    if OK then {
      P' = P - {p};
      if isempty(P') then output(f');
      else Interpretation-Tree-Search(P', L, R_P, R_L, f');
    }
  }
}

[Figure: (a) labels C1-C8 and H1-H4, (b) image parts P1-P5, (c) the interpretation tree rooted at Nil with branches such as P1 = H1 and P1 = C1]

Discrete Relaxation Labeling

Discrete-Relaxation-Labeling(P, S, R) {
  // P is the set of detected image features P_i, i = 1, …, D
  // S is the set of sets S(P_i), i = 1, …, D, of initially compatible labels for the P_i's
  // R is the set of relationships over which compatibility is determined
  repeat {
    for each (P_i, S(P_i))
      for each label L_k ∈ S(P_i)
        for each relation R(P_i, P_j) over image parts
          if there exists L_m ∈ S(P_j) with R(L_k, L_m) in the model
            then keep L_k in S(P_i)
            else delete L_k from S(P_i)
  } until no change in any S(P_i)
  return (S)
}

Continuous (Probabilistic) Relaxation

iter ← 0

pr_i^0(l) = pr_i(l)   (the initial label probabilities)

q_i^iter(l) = Σ_{ j : (i, j) ∈ R_P } C_ij · Σ_{ l' ∈ L_j } r_ij(l, l') · pr_j^iter(l')

pr_i^{iter+1}(l) = [ pr_i^iter(l) · (1 + q_i^iter(l)) ] / [ Σ_{ l' ∈ L_i } pr_i^iter(l') · (1 + q_i^iter(l')) ]

where the r_ij(l, l') are compatibility values and the C_ij weight the contributions of the related parts.

Extraction of 3D Information from 2D Images Shape from “X” Techniques

 Direct 3-D perception: range imaging (costly)
 How do humans perceive depth/3-D shape?
– Stereo
– Shading
– Monocular cues, e.g., relative depth information from occlusions of background objects by foreground objects, and perspective, with farther objects appearing smaller with distance
 Shape from X (X = binocular stereo / photometric stereo / shading / texture / boundary / motion)

Binocular Stereo

[Figure: binocular stereo geometry with baseline d, focal length f, scene point P at depth z, and image coordinates x' and x'']

Depth can be inferred from the disparity (x'' - x'); the only problem that remains to be solved is the correspondence problem.

x'/f = OP/(f + z)
x''/f = (OP + d)/(f + z)
(x'' - x')/f = d/(f + z)
z = f·d/(x'' - x') - f
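A one-line check of the last relation, with hypothetical numbers (all in mm):

def depth_from_disparity(x1, x2, f, d):
    # z = f*d/(x'' - x') - f, from the similar-triangle relations above
    return f * d / (x2 - x1) - f

# e.g., f = 50, baseline d = 100, disparity x'' - x' = 2 gives z = 2450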

Surface Orientation from Reflectance Models

[Figure: imaging geometry with unit vectors n (surface normal), n_g (view direction), and n_s (source direction)]

i = angle of incidence
e = angle of emittance
g = phase angle

n, n_g, and n_s are unit vectors along the surface normal, view direction, and source direction, respectively.

For specular (smooth, mirror-like) surfaces, the maximum amount of light is reflected in the direction of what is called the specular angle, and it reduces in directions away from this one. An estimate of the cosine of the difference between the specular and viewing angles is given by: C = 2·cos(i)·cos(e) - cos(g).

For dusty/matte (Lambertian) surfaces, reflectivity in any direction is proportional to the cosine of the angle of incidence. A general formula that includes both effects is:

L(i, e, g) = s·C^n + (1 - s)·cos(i),  0 <= s <= 1.

The larger the n, the sharper the peaking in the specular direction.

Reflectance Map for Lambertian Surfaces

[Figure: gradient-space (p, q) plot with nested reflectance contours R(p, q) = 0.7, 0.8, 0.9 and the surface normal n]

R = r_0 · (n · n_s)   … (1)

The spatially varying reflectance factor r_0 is called the albedo. For a surface z = f(x, y), let p = ∂z/∂x and q = ∂z/∂y. A step δx in x changes z by δz = (∂z/∂x)·δx = p·δx, so the vector (δx, 0, p·δx)^T, parallel to (1, 0, p)^T = r_x, say, lies in the surface; similarly (0, 1, q)^T = r_y, say. The surface normal lies along their cross product:

n = (r_x × r_y) / |r_x × r_y| = (-p, -q, 1)^T / √(1 + p² + q²)

Similarly, n_s = (-p_s, -q_s, 1)^T / √(1 + p_s² + q_s²). Putting these in (1), we get

R(p, q) = r_0 · (1 + p·p_s + q·q_s) / (√(1 + p² + q²) · √(1 + p_s² + q_s²)).

Photometric Stereo

With 3 sources of light,

R_k(x, y) = I_k(x, y) = r_o (n_k · n),  k = 1, …, 3

or, in matrix form, I = r_0 N n, where I = [I_1(x, y), I_2(x, y), I_3(x, y)]^T and N is the matrix whose rows are the source direction vectors n_k^T. Hence

r_o = |N⁻¹ I|  and  n = (1/r_0) N⁻¹ I.

[Figure: in gradient space, the contours R_1(p, q) = 0.9, R_2(p, q) = 0.75, and R_3(p, q) = 0.5 intersect at the solution point]
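Per-pixel recovery is then just a 3x3 linear solve; a minimal sketch:

import numpy as np

def photometric_stereo(I, N):
    # I: 3-vector of measured intensities [I1, I2, I3] at a pixel.
    # N: 3x3 matrix whose rows are the unit source direction vectors.
    m = np.linalg.solve(N, I)      # m = r0 * n, since I = r0 * N n
    r0 = np.linalg.norm(m)         # albedo = |N^-1 I|
    n = m / r0                     # unit surface normal
    return r0, n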

Shape from Shading

We know that (-p, -q, 1) is a vector in the direction of the surface normal and (-p_s, -q_s, 1) is the corresponding vector for the source direction, so the reflectance (i.e., image intensity) for a Lambertian surface is given by the reflectance map R(p, q) of the earlier slide. At each pixel site (k, l) we need to find the best (p_kl, q_kl) pair that gives an R_kl matching the image intensity E_kl; in other words, we minimize Σ_kl (E_kl - R(p_kl, q_kl))². This may not have a unique solution; hence, in photometric stereo we used 3 images to resolve the ambiguity. Here we instead use surface continuity: we add a smoothness term penalizing the departure of each p_kl and q_kl from the average of its neighbors, and overall we minimize the weighted sum of the two terms. We get the final solution by setting the partial derivatives with respect to p_kl and q_kl to zero.

Shape from Shading: Ikeuchi's Relaxation Approach

Continuing from the previous slide by setting the partial derivatives with respect to p_kl and q_kl to zero, we get the equations for obtaining p and q iteratively by Ikeuchi's relaxation approach: at each iteration the new (p_kl, q_kl) is computed from the local averages of p and q (obtained with an averaging mask) plus a correction term proportional to the residual E_kl - R(p_kl, q_kl).

[Figure: the mask used to compute the average p and q]

Advantage: the relaxation method can enforce the boundary conditions and get good solutions.

Limitation: in the current formulation, the occluding boundary, with p and q at ∞, causes a problem. How to solve this problem?

Photometric Stereo by Relaxation Approach

The same relaxation equations can be extended to the photometric stereo problem involving n (= 2 or more) images. Here I_k and R_k represent the intensity at the pixel site in the image captured with the k-th light source, and the corresponding reflectance map, respectively.

Advantage: the relaxation method can enforce the boundary conditions and get good solutions.

Limitation: in the current formulation, the occluding boundary, with p and q at ∞, causes a problem. How to solve this problem? See the next slide!

Solution to the Problem of Infinite Gradients at Occluding Boundary

Ikeuchi suggested formulating the problem in polar coordinates and solving!

If P is a point on a surface patch and OP is the unit normal there, the x, y, and z components of this vector are given by (sin θ · cos φ, sin θ · sin φ, cos θ) … (1). We can similarly represent the x, y, and z components of the unit vector in the direction of the light source, with polar coordinates (θ_s, φ_s), as (sin θ_s · cos φ_s, sin θ_s · sin φ_s, cos θ_s).

Since, for Lambertian surfaces, R = r_o · cos i, where i is the angle between the two directions, we can rewrite R as the following function of θ and φ:

R(θ, φ) = r_o · (sin θ · sin θ_s · cos(φ - φ_s) + cos θ · cos θ_s).

At the occluding boundary, θ = π/2 and φ = tan⁻¹(δy/δx), so both are known there and serve as boundary conditions. The same equations as before hold, with θ and φ replacing p and q. Conversion back to the x, y, z components may be done in the end using (1).

[Figure: polar-coordinate geometry with origin O, surface point P, and angles θ and φ]

Needle Diagram (Display of Unit Normal Vectors in the Image Space)

(a) Image of a resin Droplet on a flower of a plant (b) Needle diagram for the image in (a)

Depth Construction from the (p, q) tuples (Needle Diagram)

Since p = ∂z/∂x and q = ∂z/∂y, depth can in principle be recovered by integration: z(x, y) = z(x_0, y_0) + ∫ (p dx + q dy). Because of imperfect (p, q) values, this integral may not yield correct values: the result may be sensitive to the path chosen for integration, and the integral around a closed loop of pixels may not vanish. Hence, the better approach is to choose the z that minimizes ∫∫ [(z_x - p)² + (z_y - q)²] dx dy. From the calculus of variations, an integral of the form ∫∫ F(z, z_x, z_y) dx dy can be minimized by solving the Euler equation ∂F/∂z - (∂/∂x)(∂F/∂z_x) - (∂/∂y)(∂F/∂z_y) = 0. For our problem, the Euler equation yields ∇²z = p_x + q_y.

Z-map Construction Algorithm

1. Convolve the p and q maps with the horizontal and vertical Prewitt/Sobel masks to get the p_x and q_y maps, and add them up to get the s = p_x + q_y map.

2. Start with a random configuration of z-values on the image grid. Or, preferably, obtain a crude z-map by starting with z(0, 0) = 0 and applying the integral for computing z from the p and q values, visiting the cell sites in raster-scan fashion.

3. Convolve the z-image with the horizontal Prewitt/Sobel mask to get the ∂z/∂x image, then convolve the result with the same mask to get ∂²z/∂x². Similarly, convolve the z-image with the vertical Prewitt/Sobel mask to get the ∂z/∂y image, and convolve that result again to get ∂²z/∂y². Add the two resultant images to get the ∇²z image.

4. For all (i, j), update z_ij by z_ij + ε·(∇²z_ij - s_ij), where ε is a small constant.

5. If none of the updated z_ij's change significantly, stop. Otherwise, go to step 3.
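A NumPy/SciPy sketch of steps 1-5 (a zero initialization stands in for step 2's raster-scan integral, and a fixed iteration count replaces step 5's convergence test):

import numpy as np
from scipy.signal import convolve2d

def construct_zmap(p, q, iters=1000, eps=0.05):
    kx = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]]) / 6.0   # horizontal Prewitt mask
    ky = kx.T                           # vertical Prewitt mask
    # Step 1: s = p_x + q_y
    s = convolve2d(p, kx, mode='same') + convolve2d(q, ky, mode='same')
    # Step 2: crude initialization
    z = np.zeros_like(p, dtype=float)
    for _ in range(iters):
        # Step 3: Laplacian of z via repeated first-derivative convolutions
        zxx = convolve2d(convolve2d(z, kx, mode='same'), kx, mode='same')
        zyy = convolve2d(convolve2d(z, ky, mode='same'), ky, mode='same')
        # Step 4: update toward satisfying the Euler equation
        z = z + eps * (zxx + zyy - s)
    return z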

Moving Objects- Optical Flow

Whether the observer or the scene objects are moving, the relative motion gives a lot of information about depth, because points closer to the observer seem to move faster. Motion stereo is the name of the phenomenon (or body of techniques) for depth perception based on motion information. Brightness patterns in the image move as the objects that give rise to them move; optical flow is the apparent motion of the brightness pattern.

Basic optical flow equation: I(x + u·δt, y + v·δt, t + δt) = I(x, y, t). Expanding the left-hand side by a Taylor series and equating terms, we get the constraint line

I_x u + I_y v + I_t = 0, where I_x = ∂I/∂x, I_y = ∂I/∂y, and I_t = ∂I/∂t.

The velocity in the direction of the brightness gradient (I_x, I_y) is given by -I_t / √(I_x² + I_y²). We cannot determine the flow in the iso-brightness direction (at right angles to the gradient); this is called the aperture problem.

Optical Flow Estimation - Horn and Schunck's Relaxation Method

As before, we need to minimize conjointly two constraints: the optical flow (brightness) constraint and the smoothness of the flow field. A sketch of the resulting iteration follows below.

[Figure: I_x, I_y, and I_t are each estimated by differencing and averaging over the 2x2x2 cube of pixels spanning sites (i, j) through (i+1, j+1) in frames t and t+1]
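A sketch of the classic Horn-Schunck iterative update (α weights the smoothness term; the averaging mask and iteration cap are my choices):

import numpy as np
from scipy.signal import convolve2d

def horn_schunck(Ix, Iy, It, alpha=1.0, iters=100):
    u = np.zeros_like(Ix, dtype=float)
    v = np.zeros_like(Ix, dtype=float)
    avg = np.array([[1, 2, 1],
                    [2, 0, 2],
                    [1, 2, 1]]) / 12.0   # local-average mask
    for _ in range(iters):
        u_bar = convolve2d(u, avg, mode='same')
        v_bar = convolve2d(v, avg, mode='same')
        # Move from the neighborhood average toward the constraint line.
        t = (Ix * u_bar + Iy * v_bar + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_bar - Ix * t
        v = v_bar - Iy * t
    return u, v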

Shape/Structure from Motion

If the camera moves with translational velocity (v_x, v_y, v_z) and angular velocity (w_x, w_y, w_z) relative to the scene, the velocity of a 3-D point P = (X, Y, Z)^T is given by the rigid-motion equation; equating each component gives the time derivatives of X, Y, and Z … (1). If (x, y) is the image point corresponding to the 3-D object point P, we have from perspective projection x = X/Z and y = Y/Z (assuming f = 1 and Z >> f).

Now the optical flow components u = dx/dt and v = dy/dt follow by differentiating these projection equations, and using (1) we can get expressions for u and v in terms of Z and the 6 motion parameters.

[Figure: camera coordinate frame with origin O and axes x, y, z; translational velocity components v_x, v_y, v_z; rotational components w_x, w_y, w_z; object point P = (X, Y, Z)^T]

Shape from Motion- Pure Translation Case

From the imaging geometry, the (x, y) coordinates of the image point corresponding to the 3-D object point (X, Y, Z) are given by x = X/Z and y = Y/Z. Now, if we ignore the rotation-related terms in the equations for u and v on the previous slide, we get (up to sign conventions) u = (x·v_z - v_x)/Z and v = (y·v_z - v_y)/Z. In order to match the estimated (u, v) values with the computed values, we need to minimize, over the motion parameters and depths, the sum of squared differences between them.

Shape/Object Representation and Recognition

Objects/Shapes:
 2-D (Planar)
– Representation of closed boundaries (recognition sensitive to noise and occlusions), e.g., Fourier descriptors
– Noise- and occlusion-tolerant representation of parts and their interrelationships (e.g., connectivity, adjacency). Parts could be regular shaped objects such as circles, squares, and triangles, recognizable by the Hough transform, or curve segments represented using polynomial forms (B-splines)
 3-D
– Volumetric/binary voxel representation (e.g., for a 3-D medical image constructed from 2-D slices)
– Surface-based representations (e.g., Coons surface patches represented by 2-D polynomials, generalized cylinders)

Reconstruction Imaging- Computer Aided Tomography (CAT) Scans

[Figure: an X-ray source projecting through the object to form the projection g; geometry for CAT-scan reconstruction imaging]