3D Geometry for Computer Graphics

SVD (Singular Value Decomposition) and Its Applications
Joon Jae Lee
2006.01.10

The plan today

- Singular Value Decomposition
  - Basic intuition
  - Formal definition
  - Applications

LCD Defect Detection

- Defect types of TFT-LCD: Point, Line, Scratch, Region

Shading

- Scanners give us raw point cloud data
- How to compute normals to shade the surface?

[Figure: a normal vector at a point of the scanned surface]

Hole filling

Eigenfaces

- Same principal components analysis can be applied to images

Video Copy Detection

- Same image content, different source
- Same video source, different image content

[Figure: AVI vs. MPEG versions of a face image and their hue histograms]

3D animations

- Connectivity is usually constant (at least on large segments of the animation)
- The geometry changes in each frame → vast amount of data, huge filesize!

13 seconds, 3000 vertices/frame, 26 MB

Geometric analysis of linear transformations

- We want to know what a linear transformation A does
- Need some simple and "comprehensible" representation of the matrix of A
- Let's look at what A does to some vectors
  - Since A(λv) = λA(v), it's enough to look at vectors v of unit length

The geometry of linear transformations

- A linear (non-singular) transform A always takes hyper-spheres to hyper-ellipses.

The geometry of linear transformations

- Thus, one good way to understand what A does is to find which vectors are mapped to the "main axes" of the ellipsoid.

Geometric analysis of linear transformations

- If we are lucky: A = V Λ V^T, V orthogonal (true if A is symmetric)
- The eigenvectors of A are the axes of the ellipse

Symmetric matrix: eigen decomposition

- In this case A is just a scaling matrix. The eigen decomposition of A tells us which orthogonal axes it scales, and by how much:

  A = [v_1 v_2 … v_n] diag(λ_1, λ_2, …, λ_n) [v_1 v_2 … v_n]^T

  A v_i = λ_i v_i

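This is easy to check numerically; a minimal sketch with numpy (the 2×2 symmetric matrix is made up for illustration):

    import numpy as np

    # A symmetric matrix (made up for illustration).
    A = np.array([[3.0, 1.0],
                  [1.0, 2.0]])

    # eigh is specialized for symmetric matrices: real eigenvalues,
    # orthonormal eigenvectors (columns of V).
    lam, V = np.linalg.eigh(A)

    # A = V diag(lambda) V^T  and  A v_i = lambda_i v_i
    assert np.allclose(A, V @ np.diag(lam) @ V.T)
    assert np.allclose(A @ V[:, 0], lam[0] * V[:, 0])
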
General linear transformations: SVD

- In general A will also contain rotations, not just scales:

  A = U Σ V^T

  A = [u_1 u_2 … u_n] diag(σ_1, σ_2, …, σ_n) [v_1 v_2 … v_n]^T

General linear transformations: SVD

  A V = U Σ

  A [v_1 v_2 … v_n] = [u_1 u_2 … u_n] diag(σ_1, σ_2, …, σ_n)    (the v_i and the u_i are orthonormal)

  A v_i = σ_i u_i ,   σ_i ≥ 0

SVD more formally

- SVD exists for any matrix
- Formal definition:
  For square matrices A ∈ R^(n×n), there exist orthogonal matrices U, V ∈ R^(n×n) and a diagonal matrix Σ, such that all the diagonal values σ_i of Σ are non-negative and

  A = U Σ V^T

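A minimal numpy sketch of this definition (the random square matrix is only for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))        # any square matrix

    # U, the singular values s (already sorted, descending), and V^T
    U, s, Vt = np.linalg.svd(A)

    assert np.all(s >= 0)                          # non-negative diagonal
    assert np.allclose(A, U @ np.diag(s) @ Vt)     # A = U Sigma V^T
    assert np.allclose(U.T @ U, np.eye(4))         # U orthogonal
    assert np.allclose(Vt @ Vt.T, np.eye(4))       # V orthogonal
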
SVD more formally

- The diagonal values of Σ (σ_1, …, σ_n) are called the singular values. They are customarily sorted: σ_1 ≥ σ_2 ≥ … ≥ σ_n.
- The columns of U (u_1, …, u_n) are called the left singular vectors. They are the axes of the ellipsoid.
- The columns of V (v_1, …, v_n) are called the right singular vectors. They are the preimages of the axes of the ellipsoid.

  A = U Σ V^T

SVD is the "workhorse" of linear algebra

- There are numerical algorithms to compute SVD. Once you have it, you have many things:
  - Matrix inverse → can solve square linear systems
  - Numerical rank of a matrix
  - Can solve least-squares systems
  - PCA
  - Many more…

Matrix inverse and solving linear systems

- Matrix inverse:

  A = U Σ V^T
  A^(-1) = (U Σ V^T)^(-1) = (V^T)^(-1) Σ^(-1) U^(-1) = V diag(1/σ_1, …, 1/σ_n) U^T

- So, to solve Ax = b:

  x = V Σ^(-1) U^T b

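A short numpy sketch of solving Ax = b this way (the matrix and right-hand side are random placeholders, and A is assumed non-singular):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))        # assumed non-singular
    b = rng.standard_normal(3)

    U, s, Vt = np.linalg.svd(A)

    # x = V Sigma^(-1) U^T b  (valid here because all sigma_i > 0)
    x = Vt.T @ ((U.T @ b) / s)

    assert np.allclose(A @ x, b)
    assert np.allclose(x, np.linalg.solve(A, b))
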
Matrix rank

- The rank of A is the number of non-zero singular values

  [Figure: A (m×n) = U Σ V^T, with σ_1, σ_2, …, σ_n on the diagonal of Σ]

Numerical rank

- If there are very small singular values, then A is close to being singular. We can set a threshold t, so that numeric_rank(A) = #{i | σ_i > t}.
- If rank(A) < n then A is singular. It maps the entire space R^n onto some subspace, like a plane (so A is some sort of projection).

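A possible numeric_rank helper along these lines (the threshold t and the example matrix are arbitrary choices for illustration):

    import numpy as np

    def numeric_rank(A, t=1e-10):
        """Number of singular values strictly above the threshold t."""
        s = np.linalg.svd(A, compute_uv=False)
        return int(np.sum(s > t))

    # Rank-deficient example: the third row repeats the first one.
    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0],
                  [1.0, 2.0, 3.0]])
    print(numeric_rank(A))   # 2 -- one singular value is numerically zero
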
Solving least-squares systems

- We tried to solve Ax = b when A was rectangular:

  [Figure: a tall m×n matrix A times x equals b]

- Seeking solutions in the least-squares sense:

  x̃ = arg min_x ‖Ax − b‖²

Solving least-squares systems

- We proved that when A is full-rank, x = (A^T A)^(-1) A^T b (the normal equations).
- So, with the full SVD A = U Σ V^T:

  A^T A = (U Σ V^T)^T (U Σ V^T) = V Σ^T U^T U Σ V^T = V Σ^T Σ V^T = V Σ² V^T

  where Σ^T Σ = Σ² = diag(σ_1², σ_2², …, σ_n²).

Solving least-squares systems

- Substituting in the normal equations:

  x = (A^T A)^(-1) A^T b
    = (V Σ² V^T)^(-1) (U Σ V^T)^T b
    = V Σ^(-2) V^T V Σ^T U^T b
    = V Σ^(-2) Σ^T U^T b
    = V Σ^(-1) U^T b

  where Σ^(-2) = diag(1/σ_1², 1/σ_2², …, 1/σ_n²).

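A small numpy sketch of least-squares via the SVD, assuming a tall, full-column-rank A (random data for illustration); it should match numpy's own lstsq:

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((6, 3))        # tall, assumed full column rank
    b = rng.standard_normal(6)

    # Reduced ("economy") SVD: U is 6x3, s has 3 entries, Vt is 3x3.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    # x = V Sigma^(-1) U^T b
    x = Vt.T @ ((U.T @ b) / s)

    x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
    assert np.allclose(x, x_ref)
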
Pseudoinverse

- The matrix we found is called the pseudoinverse of A.
- Definition using the reduced SVD:

  A^+ = V diag(1/σ_1, 1/σ_2, …, 1/σ_n) U^T

  If some of the σ_i are zero, put zero instead of 1/σ_i.

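A sketch of building the pseudoinverse exactly this way (the tolerance tol and the rank-1 example matrix are illustrative choices), compared against numpy's pinv:

    import numpy as np

    def pinv_svd(A, tol=1e-12):
        """Pseudoinverse via the reduced SVD: invert the non-zero singular
        values, put zero where sigma_i is (numerically) zero."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        s_inv = np.array([1.0 / x if x > tol else 0.0 for x in s])
        return Vt.T @ np.diag(s_inv) @ U.T

    A = np.array([[1.0, 2.0],
                  [2.0, 4.0],
                  [3.0, 6.0]])             # rank 1: column 2 = 2 * column 1
    assert np.allclose(pinv_svd(A), np.linalg.pinv(A))
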
Pseudoinverse

- The pseudoinverse A^+ exists for any matrix A.
- Its properties:
  - If A is m×n then A^+ is n×m
  - It acts a little like a real inverse:

    A A^+ A = A
    A^+ A A^+ = A^+
    (A A^+)^T = A A^+
    (A^+ A)^T = A^+ A

Solving least-squares systems

- When A is not full-rank, A^T A is singular
  - There are multiple solutions to the normal equations: (A^T A) x = A^T b
  - Thus, there are multiple solutions to the least-squares problem.
  - The SVD approach still works! In this case it finds the minimal-norm solution:

    x̂ = A^+ b   such that   x̂ = arg min_x ‖Ax − b‖²  and  ‖x̂‖ is minimal

PCA – the general idea

- PCA finds an orthogonal basis that best represents a given data set.

  [Figure: data points shown in the original x, y axes and in rotated axes x', y']

- The sum of squared distances from the x' axis is minimized.

PCA – the general idea

[Figure: a 2D point set with its principal directions v_1 and v_2]

- This line segment approximates the original data set
- The projected data set approximates the original data set

PCA – the general idea

- PCA finds an orthogonal basis that best represents a given data set.

  [Figure: 3D point set in the standard basis x, y, z]

- PCA finds a best approximating plane (again, in terms of squared distances)

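A minimal PCA-via-SVD sketch in numpy (the elongated 2D toy data set is made up for illustration): center the data and take the right singular vectors as the new axes:

    import numpy as np

    rng = np.random.default_rng(3)
    # Toy 2D data set, elongated along the 45-degree direction.
    R = np.array([[np.cos(np.pi / 4), -np.sin(np.pi / 4)],
                  [np.sin(np.pi / 4),  np.cos(np.pi / 4)]])
    pts = rng.standard_normal((500, 2)) @ np.diag([3.0, 0.3]) @ R.T

    m = pts.mean(axis=0)                   # centroid of the data
    U, s, Vt = np.linalg.svd(pts - m, full_matrices=False)

    # Rows of Vt are the principal directions (the x', y' axes);
    # the first row is the direction of largest variance, here ~ (1,1)/sqrt(2).
    print(Vt[0])
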
For approximation

- In general dimension d, the eigenvalues are sorted in descending order:

  λ_1 ≥ λ_2 ≥ … ≥ λ_d

- The eigenvectors are sorted accordingly.
- To get an approximation of dimension d' < d, we take the first d' eigenvectors and look at the subspace they span (d' = 1 is a line, d' = 2 is a plane…)

For approximation

- To get an approximating set, we project the original data points onto the chosen subspace:

  x_i = m + α_1 v_1 + α_2 v_2 + … + α_{d'} v_{d'} + … + α_d v_d

  Projection:

  x_i' = m + α_1 v_1 + α_2 v_2 + … + α_{d'} v_{d'} + 0·v_{d'+1} + … + 0·v_d

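A small sketch of this projection (the helper name pca_project and the random data are illustrative):

    import numpy as np

    def pca_project(X, d_prime):
        """Project each row of X onto the affine subspace through the
        centroid m spanned by the first d_prime principal directions."""
        m = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - m, full_matrices=False)
        V = Vt[:d_prime].T                 # d x d' matrix of chosen directions
        alphas = (X - m) @ V               # coefficients alpha_1 .. alpha_d'
        return m + alphas @ V.T            # x_i' = m + sum_j alpha_j v_j

    rng = np.random.default_rng(4)
    X = rng.standard_normal((100, 3))
    X_line = pca_project(X, 1)             # d' = 1: all points end up on a line
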
Principal components

- Eigenvectors that correspond to big eigenvalues are the directions in which the data has strong components (= large variance).
- If the eigenvalues are more or less the same, there is no preferable direction.
- Note: the eigenvalues are always non-negative. Think why…

Principal components

- When there is no preferable direction, S looks like this:

  S = V diag(λ, λ) V^T

  and any vector is an eigenvector.

- When there is a clear preferable direction, S looks like this:

  S = V diag(λ_1, λ_2) V^T

  where λ_2 is close to zero, much smaller than λ_1.

Application: finding tight bounding box

- An axis-aligned bounding box: agrees with the axes

  [Figure: 2D point set inside a box spanning minX…maxX and minY…maxY]

Application: finding tight bounding box

- Oriented bounding box: we find better axes!

  [Figure: the same point set with rotated axes x', y']

Application: finding tight bounding box

- Oriented bounding box: we find better axes!

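One way to sketch the PCA-based oriented bounding box (a common heuristic; it is not guaranteed to be the minimal box, and the point set here is synthetic):

    import numpy as np

    def oriented_bounding_box(pts):
        """PCA-based oriented bounding box of a point set: returns the
        centroid, the axes (rows) and the min/max extents along them."""
        m = pts.mean(axis=0)
        _, _, Vt = np.linalg.svd(pts - m, full_matrices=False)
        local = (pts - m) @ Vt.T           # coordinates in the PCA frame
        return m, Vt, local.min(axis=0), local.max(axis=0)

    rng = np.random.default_rng(5)
    pts = rng.standard_normal((200, 2)) @ np.array([[2.0, 1.0],
                                                    [0.0, 0.5]])
    m, axes, lo, hi = oriented_bounding_box(pts)
    print("extents along the PCA axes:", hi - lo)
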
Scanned meshes

Point clouds

- Scanners give us raw point cloud data
- How to compute normals to shade the surface?

[Figure: a normal vector at a point of the scanned surface]

Point clouds

- Local PCA, take the third vector

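A sketch of that idea for one local neighborhood (the synthetic planar patch stands in for the k nearest neighbors of a scanned point):

    import numpy as np

    def estimate_normal(neighbors):
        """Run PCA on a local neighborhood of points and return the third
        principal direction (the direction of smallest variance)."""
        m = neighbors.mean(axis=0)
        _, _, Vt = np.linalg.svd(neighbors - m, full_matrices=False)
        return Vt[-1]

    # Toy patch: points near the plane z = 0, so the true normal is the z axis.
    rng = np.random.default_rng(6)
    patch = np.column_stack([rng.uniform(-1, 1, 50),
                             rng.uniform(-1, 1, 50),
                             rng.normal(0, 0.01, 50)])
    print(estimate_normal(patch))          # approximately (0, 0, +/-1)
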
Eigenfaces

- Same principal components analysis can be applied to images

Eigenfaces

- Each image is a vector in R^(250·300)
- Want to find the principal axes – vectors that best represent the input database of images

Reconstruction with a few vectors

- Represent each image by the first few (n) principal components:

  v = α_1 u_1 + α_2 u_2 + … + α_n u_n  ↔  (α_1, α_2, …, α_n)

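A hedged sketch of computing such a representation with numpy; the faces array here is hypothetical random data standing in for the flattened 250×300 images, and with images stored as rows the principal vectors come out as rows of V^T:

    import numpy as np

    # Hypothetical database: each row is one flattened 250*300 grayscale image.
    rng = np.random.default_rng(7)
    faces = rng.random((80, 250 * 300))

    mean_face = faces.mean(axis=0)
    # Reduced SVD of the centered database; rows of Vt are the principal images.
    _, _, Vt = np.linalg.svd(faces - mean_face, full_matrices=False)

    n = 20                                 # keep the first n components
    U_n = Vt[:n]                           # n x (250*300) matrix of "eigenfaces"
    alphas = (faces - mean_face) @ U_n.T   # each row: (alpha_1, ..., alpha_n)

    # Reconstruct image 0 from its n coefficients alone.
    recon = mean_face + alphas[0] @ U_n
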
Face recognition

- Given a new image of a face, w ∈ R^(250·300):
  - Represent w using the first n PCA vectors:

    w = α_1 u_1 + α_2 u_2 + … + α_n u_n  ↔  (α_1, α_2, …, α_n)

  - Now find an image in the database whose representation in the PCA basis is the closest:

    w' ↔ (α'_1, α'_2, …, α'_n),  such that ⟨w, w'⟩ is the largest

- The angle between w and w' is then the smallest

[Figure: pairs of vectors w, w' with a small vs. a large angle between them]

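A sketch of the matching step under the same assumptions (db_coeffs, mean_face and U_n are the hypothetical outputs of the eigenface step above):

    import numpy as np

    def match_face(w, db_coeffs, mean_face, U_n):
        """Project a new image w into the PCA basis and return the index of
        the database face whose coefficient vector has the largest cosine
        similarity (i.e. the smallest angle)."""
        a = (w - mean_face) @ U_n.T                  # (alpha_1, ..., alpha_n)
        sims = (db_coeffs @ a) / (np.linalg.norm(db_coeffs, axis=1)
                                  * np.linalg.norm(a) + 1e-12)
        return int(np.argmax(sims))

    # db_coeffs, mean_face and U_n would come from the eigenface step above.
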
SVD for animation compression

Chicken animation

3D animations

- Each frame is a 3D model (mesh)
  - Connectivity – mesh faces

3D animations

- Each frame is a 3D model (mesh)
  - Connectivity – mesh faces
  - Geometry – 3D coordinates of the vertices

3D animations

- Connectivity is usually constant (at least on large segments of the animation)
- The geometry changes in each frame → vast amount of data, huge filesize!

13 seconds, 3000 vertices/frame, 26 MB

Animation compression by dimensionality reduction

- The geometry of each frame is a vector in R^(3N) space (N = #vertices):

  (x_1, …, x_N, y_1, …, y_N, z_1, …, z_N)^T

- Stacking the f frame vectors as columns gives a 3N × f matrix.

Animation compression by dimensionality reduction

- Find a few vectors of R^(3N) that will best represent our frame vectors!

  [frame_1 … frame_f]  =  U · [α^1 … α^f]

  where the left-hand matrix is 3N × f with the frame vectors as columns, U is 3N × f with the basis vectors u_i as columns, and the f × f matrix on the right holds in its j-th column the coefficients α^j = (α_1^j, α_2^j, …, α_f^j)^T of frame j in that basis.

Animation compression by dimensionality reduction

- The first principal components are the important ones

  [Figure: only the first columns u_1, u_2, u_3 of U (and the corresponding rows of V^T) are kept]

Animation compression by dimensionality reduction

- Approximate each frame by a linear combination of the first principal components
- The more components we use, the better the approximation
- Usually, the number of components needed is much smaller than f

  frame ≈ α_1 u_1 + α_2 u_2 + α_3 u_3

Animation compression by dimensionality reduction

- Compressed representation:
  - The chosen principal component vectors u_i
  - The coefficients α_i for each frame

[Videos: the animation reconstructed with only 2 principal components vs. with 20 out of 400 principal components]

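A sketch of the whole pipeline on hypothetical data (random frames stand in for the real animation, so the reconstruction quality is not meaningful, but the bookkeeping is):

    import numpy as np

    # Hypothetical animation: f frames, N vertices, each frame flattened to 3N values.
    rng = np.random.default_rng(8)
    f, N = 400, 3000
    frames = rng.standard_normal((3 * N, f))   # columns are the frame vectors

    U, s, Vt = np.linalg.svd(frames, full_matrices=False)

    k = 20                                     # keep 20 out of 400 components
    U_k = U[:, :k]                             # the chosen principal vectors u_i
    coeffs = np.diag(s[:k]) @ Vt[:k]           # the alpha_i for every frame

    # Decompression: every frame is a linear combination of the k vectors.
    frames_approx = U_k @ coeffs
    print("stored floats:", U_k.size + coeffs.size, "vs. original:", frames.size)
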
LCD Defect Detection

- Defect types of TFT-LCD: Point, Line, Scratch, Region

Pattern Elimination Using SVD

Pattern Elimination Using SVD

- Cutting the test images

Pattern Elimination Using SVD

- Singular value determination for the image reconstruction

Pattern Elimination Using SVD

- Defect detection using SVD