Surface normals and PCA


Surface normals and principal
component analysis (PCA)
3DM slides by
Marc van Kreveld
1
Normal of a surface
• Defined at points on the surface: normal of the
tangent plane to the surface at that point
• Well-defined and unique inside the facets of any
polyhedron
• At edges and vertices, the tangent plane is not unique or not defined (convex/reflex edge) → the normal is undefined
2
Normal of a surface
• On a smooth surface without a boundary, the
normal is unique and well-defined everywhere
(smooth simply means that the derivatives of the
surface exist everywhere)
• On a smooth surface (manifold) with boundary,
the normal is not defined on the boundary
3
Normal of a surface
4
Normal of a surface
• The normal at edges or vertices is often defined
in some convenient way: some average of normals
of incident triangles
5
Normal of a surface
• No matter what choice we make at a vertex, a
piecewise linear surface will not have a continuously
changing normal
→ visible after computing illumination
(figure annotation: not normals! they would be parallel)
6
Curvature
• The rate of change of the normal is the curvature
(figure: curves with higher, lower, infinite, and zero curvature)
7
Curvature
• A circle is a shape that has constant curvature
everywhere
• The same is true for a line, whose curvature is zero
everywhere
8
Curvature
• Curvature can be positive or negative
• Intuitively, the magnitude of the curvature is the
curvature of the circle that looks most like the curve,
close to the point of interest
(figure: examples of negative and positive curvature)
9
Curvature
• The curvature at any point on a circle is the inverse of its radius r: curvature = 1/r
• The (absolute) curvature at any point on a curve is the curvature of the circle through that point that has the same first and second derivative at that point (so it is defined only for C² curves)
10
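As a quick sanity check of the 1/r rule (not part of the original slides), the short Python sketch below measures the curvature of a circle of radius 2 as the rate of change of the unit normal per unit arc length, the definition used a few slides earlier; the sampled circle and all names are made up for the illustration.

```python
import numpy as np

# Curvature as the rate of change of the unit normal along the curve:
# for a circle of radius r this rate should come out as 1/r.
r = 2.0
t = np.linspace(0.0, 2.0 * np.pi, 10001)
points = np.stack([r * np.cos(t), r * np.sin(t)], axis=1)  # circle of radius r
normals = points / r                                       # unit outward normals

ds = np.linalg.norm(np.diff(points, axis=0), axis=1)   # arc-length steps
dn = np.linalg.norm(np.diff(normals, axis=0), axis=1)  # change of the normal
print(dn.sum() / ds.sum())                             # ~0.5 = 1/r
```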
Curvature
• For a 3D surface, there are curvatures in all
directions in the tangent plane
11
Curvature
(figure labels: negative, positive, inside)
12
Properties at a point
• A point on a smooth surface has various properties:
– location
– normal (first derivative) / tangent plane
– two/many curvatures (second derivative)
13
Normal of a point in a point set?
• Can we estimate the normal for each point in a
scanned point cloud? This would help reconstruction
(e.g. for RANSAC)
14
Normal of a point in a point set
• Main idea of various different methods, to estimate
the normal of a point q in a point cloud:
– collect some nearest neighbors of q, for instance 12
– fit a plane for q and its 12 neighbors
– use the normal of this plane as the estimated normal for q
15
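A minimal sketch of the neighbor-collection step, assuming a NumPy array `cloud` of scanned points and SciPy's k-d tree (all names and data here are made up); the plane fit itself is done with PCA later in these slides, so only the "12 nearest neighbors of q" part is shown.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
cloud = rng.random((1000, 3))     # stand-in for a scanned point cloud
q = cloud[0]                      # the point whose normal we want

tree = cKDTree(cloud)
_, idx = tree.query(q, k=13)      # q itself plus its 12 nearest neighbors
neighborhood = cloud[idx]         # 13 x 3 block used for the plane fit
print(neighborhood.shape)
```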
Normal estimation at a point
• Risk: the 12 nearest neighbors of q are not nicely
spread in all directions on the plane
→ the computed normal could even be perpendicular to the real normal!
16
Normal estimation at a point
• Also: the quality of normals of points
near edges of the scanned shape is
often not so good
• We want a way of knowing how good
the estimated normal seems to be
17
Principal component analysis
• General technique for data analysis
• Uses the statistical concept of correlation
• Uses the linear algebra concept of eigenvectors
• Can be used for normal estimation and tells something about the quality (clearness, obviousness) of the normal
18
Correlation
• Degree to which two variables, measured on the same objects, correspond / change together
– in a population of people, length and weight are correlated
– in decathlon, performance on 100 meters and long jump
are correlated (so are shot put and discus throw)
(Pearson’s correlation coefficient)
19
Covariance, correlation
• For two variables x and y, their covariance is defined as
σ(x, y) = E[ (x – E[x]) (y – E[y]) ] = E[xy] – E[x] E[y]
• E[x] is the expected value of x, equal to the mean x̄
• Note that the variance σ²(x) = σ(x, x), the covariance of x with itself, where σ(x) is the standard deviation
• Correlation ρ(x, y) = σ(x, y) / ( σ(x) σ(y) )
20
Covariance
• For a data set of pairs (x1, y1), (x2, y2), …, (xn, yn), the covariance can be computed as
(1/n) Σ_{i=1}^{n} (xi – x̄)(yi – ȳ)
where x̄ and ȳ are the mean values of the xi and yi
21
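A small numeric check of the covariance formulas and the correlation definition above, using made-up values for x and y:

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 5.0, 8.0])
y = np.array([2.0, 1.0, 3.0, 6.0, 8.0])

cov_xy = np.mean((x - x.mean()) * (y - y.mean()))  # (1/n) sum (xi - mean_x)(yi - mean_y)
alt    = np.mean(x * y) - x.mean() * y.mean()      # E[xy] - E[x] E[y]
corr   = cov_xy / (x.std() * y.std())              # Pearson correlation

print(cov_xy, alt)                    # both forms give the same covariance
print(corr, np.corrcoef(x, y)[0, 1])  # matches NumPy's Pearson coefficient
```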
Data matrix
• Suppose we have weight w, length l, and blood
pressure b of seven people
• Let the mean of w, l, and b be w̄, l̄, and b̄
• Assume the measurements have been adjusted by subtracting the appropriate mean
• Then the data matrix is X = [[w1, w2, …, w7], [l1, l2, …, l7], [b1, b2, …, b7]]
• Note: Each row has zero mean, the data is
mean-centered
22
Covariance matrix
• The covariance matrix is (1/n) X Xᵀ
• This is in the example: [[σ(w,w), σ(w,l), σ(w,b)], [σ(l,w), σ(l,l), σ(l,b)], [σ(b,w), σ(b,l), σ(b,b)]]
• The covariance matrix is square and symmetric
• The main diagonal contains the variances
• Off-diagonal are the covariance values
23
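The sketch below builds the covariance matrix as (1/n) X Xᵀ for a made-up 3 x 7 data matrix (rows playing the role of w, l, b) and checks it against NumPy's own covariance routine:

```python
import numpy as np

data = np.array([[72.0, 80.0, 65.0, 90.0, 77.0, 70.0, 84.0],         # w
                 [1.80, 1.75, 1.62, 1.90, 1.78, 1.70, 1.85],         # l
                 [120.0, 130.0, 115.0, 140.0, 125.0, 118.0, 135.0]]) # b

X = data - data.mean(axis=1, keepdims=True)   # mean-center every row
n = X.shape[1]
C = (X @ X.T) / n                             # 3 x 3 covariance matrix

print(np.allclose(C, np.cov(data, bias=True)))  # True: same as NumPy's 1/n covariance
print(np.allclose(C, C.T), np.diag(C))          # symmetric; variances on the diagonal
```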
Principal component analysis
• PCA is a linear transformation (3 x 3 in our example)
that makes new base vectors such that
– the first base vector has a direction that realizes the largest
possible variance (when projected onto a line)
– the second base vector is orthogonal to the first and
realizes the largest possible variance among those vectors
– the third base vector is orthogonal to the first and second
base vector and …
– … and so on …
• Hence, PCA is an orthogonal linear transformation
24
Principal component analysis
• In 2D, after finding the first base vector, the second
one is immediately determined because of the
requirement of orthogonality
25
Principal component analysis
• In 3D, after the first base vector is found, the data is
projected onto a plane with this base vector as its
normal, and we find
the second base vector
in this plane as the
direction with largest
variance in that plane
(this “removes” the
variance explained by
the first base vector)
26
Principal component analysis
• After the first two base vectors are found, the data is
projected onto a line orthogonal to the first two base
vectors and the third
base vector is found
on this line
→ it is simply given by the cross product of the first two base vectors
27
Principal component analysis
• The subsequent variances we find are decreasing in
value and give an “importance” to the base vectors
• This thought process explains why principal component analysis can be used for dimension reduction: maybe all the variance in, say, 10 measurement types can be explained using 4 or 3 (new) dimensions
28
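A small sketch of PCA as dimension reduction, on made-up data with 10 measurement types that really only vary in 3 directions. Note that the convention here is samples in rows (the transpose of the slides' data matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 3))                                   # 3 hidden "directions"
data = latent @ rng.normal(size=(3, 10)) + 0.01 * rng.normal(size=(200, 10))

X = data - data.mean(axis=0)             # mean-center (samples in rows here)
C = (X.T @ X) / X.shape[0]               # 10 x 10 covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)     # ascending eigenvalues
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 3
reduced = X @ eigvecs[:, :k]             # 200 x 3 instead of 200 x 10
print(eigvals[:k].sum() / eigvals.sum()) # fraction of variance explained, close to 1
```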
Principal component analysis
• In actual computation, all base vectors are found at
once using linear algebra techniques
29
Eigenvectors of a matrix
• A non-zero vector v is an eigenvector of a matrix X if X v = λ v for some scalar λ, and λ is called an eigenvalue corresponding to v
• Example 1: (1, 1) is an eigenvector of [[2, 1], [1, 2]], because [[2, 1], [1, 2]] applied to (1, 1) gives (3, 3) = 3 · (1, 1)
In words: the matrix leaves the direction of an eigenvector the same, but its length is scaled by the eigenvalue 3
30
Eigenvectors of a matrix
• A non-zero vector v is an eigenvector of a matrix X if X v = λ v for some scalar λ, and λ is called an eigenvalue corresponding to v
• Example 2: (1, –1) is also an eigenvector of [[2, 1], [1, 2]], because the matrix applied to (1, –1) gives (1, –1) again
In words: the matrix leaves the direction and length of (1, –1) the same because its eigenvalue is 1
31
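Both examples can be checked numerically; the snippet below applies the matrix to (1, 1) and (1, –1) and also asks NumPy for the eigenvalues and eigenvectors directly:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

print(A @ np.array([1.0, 1.0]))    # [3. 3.]  = 3 * (1, 1)
print(A @ np.array([1.0, -1.0]))   # [1. -1.] = 1 * (1, -1)

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)    # 3 and 1 (order not guaranteed)
print(eigvecs)    # columns: unit-length eigenvectors, parallel to (1,1) and (1,-1)
```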
Eigenvectors of a matrix
• Consider the transformation [[2, 1], [1, 2]] (animated)
Blue vectors: (1, 1)
Pink vectors: (1, –1) and (–1, 1)
Red vectors are not eigenvectors (they change direction)
32
Eigenvectors of a matrix
• If v is an eigenvector, then any vector parallel to v is
also an eigenvector (with the same eigenvalue!)
• If the eigenvalue is –1 (negative in general), then
the eigenvector will be reversed in direction by the
matrix
• Only square matrices have eigenvectors and values
33
Eigenvectors, a 2D example
• Find the eigenvectors and eigenvalues of A = [[–2, 1], [–3, 2]]
• We need: Av = λv by definition, or (A – λI) v = 0
(in words: our matrix minus λ times the identity matrix, applied to v, is the zero vector)
• This is the case exactly when det(A – λI) = 0
• det(A – λI) = det [[–2 – λ, 1], [–3, 2 – λ]] = (–2 – λ)(2 – λ) – (–3) = λ² – 1 = 0
34
Eigenvectors, a 2D example
• 2 – 1 = 0 gives  = 1 or  = –1
• The corresponding eigenvectors can be obtained by
filling in each  and solving a set of equations
• The polynomial in  given by det(A – I) is called the
characteristic polynomial
35
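A quick check of this worked example: the eigenvalues of [[–2, 1], [–3, 2]] are 1 and –1, and filling each λ back in gives eigenvectors such as (1, 3) and (1, 1):

```python
import numpy as np

A = np.array([[-2.0, 1.0],
              [-3.0, 2.0]])

print(np.linalg.eigvals(A))        # 1 and -1 (possibly in another order)

# Eigenvector for lambda = 1: solve (A - I)v = 0, e.g. v = (1, 3)
print(A @ np.array([1.0, 3.0]))    # [1. 3.]   -> scaled by 1
# Eigenvector for lambda = -1: solve (A + I)v = 0, e.g. v = (1, 1)
print(A @ np.array([1.0, 1.0]))    # [-1. -1.] -> scaled by -1
```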
Questions
1. Determine the eigenvectors and eigenvalues of [[–2, 0], [–2, 0]]
What does the matrix do?
Does that explain the eigenvectors and values?
2. Determine the eigenvectors and eigenvalues of [[0, 1], [–1, 0]]
What does the matrix do?
Does that explain the eigenvectors and values?
3. Determine the eigenvectors and eigenvalues of [[1, 0, 4], [1, 2, 0], [1, 1, 2]]
36
Principal component analysis
• Recall: PCA is an orthogonal linear transformation
• The new base vectors are the eigenvectors of the
covariance matrix!
• The eigenvalues are the variances of the data points
when projected onto a line with the direction of the
eigenvector
• Geometrically, PCA is a rotation around the multidimensional mean (point) so that the base vectors
align with the principal components
(which is why the data matrix must be mean centered)
37
PCA example
• Assume the data pairs (1,1), (1,2), (3,2), (4,2), and (6,3)
• x̄ = 15/5 = 3 and ȳ = 10/5 = 2
• The mean-centered data becomes (-2,-1), (-2,0), (0,0), (1,0), and (3,1)
• The data matrix X = [[-2, -2, 0, 1, 3], [-1, 0, 0, 0, 1]]
• The covariance matrix (1/5) X Xᵀ = (1/5) [[18, 5], [5, 2]]
• The characteristic polynomial is (1/5) · det [[18 – λ, 5], [5, 2 – λ]]
38
PCA example
• The characteristic polynomial is (1/5) · det [[18 – λ, 5], [5, 2 – λ]] = (1/5) ((18 – λ)(2 – λ) – 25) = (1/5) (λ² – 20λ + 11)
• When setting it to zero we can omit the factor 1/5
• We get λ = (20 ± √(400 – 44)) / 2, so λ1 ≈ 19.43 and λ2 ≈ 0.57; these are the eigenvalues of X Xᵀ, i.e. five times the variances, but the eigenvectors are the same as those of the covariance matrix
• We always choose the eigenvalues to be in decreasing order: λ1 > λ2 > …
39
PCA example
• The first eigenvalue λ1 ≈ 19.43 corresponds to an eigenvector (1, 0.29) or anything parallel to it
• The second eigenvalue λ2 ≈ 0.57 corresponds to an eigenvector (–0.29, 1) or anything parallel to it
40
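The whole worked example can be reproduced in a few lines of NumPy; dividing the eigenvalues of X Xᵀ by 5 gives the actual variances, and the eigenvectors match (1, 0.29) and (–0.29, 1) up to normalization and sign:

```python
import numpy as np

data = np.array([[1, 1], [1, 2], [3, 2], [4, 2], [6, 3]], dtype=float).T  # 2 x 5
X = data - data.mean(axis=1, keepdims=True)   # mean-centered data matrix

M = X @ X.T                                   # [[18, 5], [5, 2]]
eigvals, eigvecs = np.linalg.eigh(M)          # ascending order
print(M)
print(eigvals[::-1])        # ~[19.43, 0.57]
print(eigvals[::-1] / 5)    # variances along the principal directions
print(eigvecs[:, ::-1])     # columns ~ (1, 0.29) and (-0.29, 1), normalized, sign may flip
```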
PCA example
• The data points and the mean-centered data points
41
PCA example
• The first principal component (purple): (1, 0.29)
• Orthogonal projection onto
the orange line (direction of
first eigenvector) yields the
largest possible variance
• The first eigenvalue λ1 ≈ 19.43 is the sum of the squared distances to the mean (variance times 5) for this projection
42
PCA example
• Enlarged, and the non-squared distances shown
43
PCA example
• The second principal component (green): (–0.29, 1)
• Orthogonal projection onto
the dark blue line (direction
of second eigenvector)
yields the remaining
variance
• The second eigenvalue λ2 ≈ 0.57 is the sum of the squared distances to the mean (variance times 5) for this projection
44
PCA example
• The fact that the first eigenvalue is much larger
than the second means that there is a direction
that explains most of the variance of the data
→ a line exists that fits well with the data
• When both eigenvalues are
equally large, the data is
spread equally in all
directions
45
PCA, eigenvectors and eigenvalues
• In the pictures, identify
the eigenvectors and
state how different the
eigenvalues appear to be
46
PCA observations in 3D
• If the first eigenvalue is large and the other two are
small, then the data points lie approximately on a line
– through the 3D mean
– with orientation parallel to the first eigenvector
• If the first two eigenvalues are large and the third
eigenvalue is small, then the points lie approximately
on a plane
– through the 3D mean
– with orientation spanned by the first two eigenvectors /
with normal parallel to the third eigenvector
47
PCA and local normal estimation
• Recall that we wanted to estimate the normal at
every point in a point cloud
• Recall that we decided to use the 12 nearest
neighbors for any point q, and find a fitting plane
for q and its 12 nearest neighbors
Assume we have the 3D coordinates of these points, measured in meters
48
PCA and local normal estimation
• Treat the 13 points and their three coordinates as data with three measurements, x, y, and z:
→ we have a 3 x 13 data matrix
• Apply PCA to get three eigenvalues λ1, λ2, and λ3 (in decreasing order) and eigenvectors v1, v2, and v3
• If the 13 points lie roughly in a plane, then λ3 is small and the plane contains directions parallel to v1, v2
• The estimated normal is perpendicular to v1 and v2, so it is the third eigenvector v3
49
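Putting the last few slides together, here is a minimal sketch of PCA-based normal estimation (the function name and test data are made up, not from the slides): it gathers q and its k nearest neighbors, builds the mean-centered 3 x (k+1) matrix, and returns the eigenvector of the smallest eigenvalue together with the eigenvalues as a rough quality indicator.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normal(cloud, q, k=12):
    """Estimate the normal at q from q and its k nearest neighbors via PCA.
    Returns the eigenvector of the smallest eigenvalue (the normal, up to sign)
    and the eigenvalues in decreasing order (lambda1, lambda2, lambda3)."""
    tree = cKDTree(cloud)                    # for many queries, build this once
    _, idx = tree.query(q, k=k + 1)          # q itself plus its k neighbors
    nbrs = cloud[idx]

    X = (nbrs - nbrs.mean(axis=0)).T         # 3 x (k+1), mean-centered
    C = (X @ X.T) / X.shape[1]               # 3 x 3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)     # ascending eigenvalues
    return eigvecs[:, 0], eigvals[::-1]      # v3 and (lambda1, lambda2, lambda3)

# Example: points scattered around the plane z = 0 -> normal close to (0, 0, 1)
rng = np.random.default_rng(2)
cloud = np.column_stack([rng.uniform(-1, 1, 500),
                         rng.uniform(-1, 1, 500),
                         rng.normal(0.0, 0.01, 500)])
normal, lams = estimate_normal(cloud, cloud[0])
print(normal, lams)   # lambda3 much smaller than lambda1 and lambda2
```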
PCA and local normal estimation
• How large should the eigenvalues λ1, λ2 be, and how small should λ3 be, for this to be true?
• This depends on
– scanning density
– point distribution
– scanning accuracy
– curvature of the surface
50
PCA and local normal estimation
• Example 1: Assume
– the surface is a perfect plane (no curvature)
– scanning yields uniform distribution
– density is 100 pt/m²
– accuracy is ± 0.03 m
• Then λ3 will be less than 13 x 0.03² = 0.0117; we expect the 13 points to roughly lie in a circle of radius 0.203 m, and λ1 and λ2 should each be about 0.1 (10x as large as λ3, but this is just a rough guess)
51
PCA and local normal estimation
• Example 2: Assume
– the surface is a perfect plane (no curvature)
– scanning yields uniform distribution
– density is 500 pt/m²
– accuracy is ± 0.03 m
• Then λ3 will be less than 13 x 0.03² = 0.0117; we expect the 13 points to roughly lie in a circle of radius 0.09 m, and λ1 and λ2 should each be about 0.026 (maybe 3 times as large as λ3)
52
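The rough numbers in Examples 1 and 2 can be reproduced with a small back-of-the-envelope helper. This is a sketch under assumptions not spelled out on the slides: the 13 points are spread uniformly over a disk, giving roughly r²/4 of in-plane variance per axis, and the measurement error is at most ± accuracy in the normal direction; for Example 1 the in-plane value comes out near 0.13, the same ballpark as the slide's "about 0.1".

```python
import numpy as np

def rough_eigenvalues(density, accuracy, k=13):
    radius = np.sqrt(k / density / np.pi)   # disk that holds about k points
    lam_inplane = k * radius**2 / 4         # rough guess for lambda1, lambda2
    lam3_max = k * accuracy**2              # upper bound for lambda3
    return radius, lam_inplane, lam3_max

print(rough_eigenvalues(100, 0.03))  # radius ~0.20 m, in-plane ~0.13, lambda3 < 0.0117
print(rough_eigenvalues(500, 0.03))  # radius ~0.09 m, in-plane ~0.026, lambda3 < 0.0117
```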
PCA and local normal estimation
• Example 3: Assume
– the surface is a perfect plane (no curvature)
– scanning yields uniform distribution
– density is 500 pt/m²
– accuracy is ± 0.07 m
• Then λ3 will be close to 0.02 (≈ 13 x 0.04², where 0.04 m is roughly the standard deviation of an error that is uniform within ± 0.07 m); we expect the 13 points to roughly lie in a circle of radius 0.09 m, and λ1 and λ2 should each be about 0.026 (maybe hardly larger than λ3)
[ note: these values are not possible! ]
53
PCA and local normal estimation
• When the density goes up and/or the accuracy goes
down, we may need to use more than 12 nearest
neighbors to observe a considerable difference in
eigenvalues for points on a plane, and estimate the
normal correctly
54
PCA and local normal estimation
• More neighbors means more reliable covariance
estimations, so a better normal, for flat surfaces
• … but for surfaces with considerable curvature, or
close to edges of a flat surface, more neighbors
means that the quality of normals goes down
55
PCA and local normal estimation
• Example 4: Assume
– the surface is a perfect plane (no curvature)
– scanning is by LiDAR, with dense lines
– density is 100 pt/m²
– accuracy is ± 0.03 m
• We will find one large eigenvalue and two small ones
56
PCA and local normal estimation
• Example 5: Assume
– point q lies in a tree (leaf or thin branch)
– distribution is uniform
– density is 100 pt/m²
– accuracy is ± 0.03 m
• We will find three
eigenvalues that don’t
differ too much (but it
is rather unpredictable)
57
Another way for normal estimation
• Compute the Voronoi diagram of the points and
choose the direction from each point to the furthest
Voronoi vertex bounding its cell
• For unbounded cells,
take the middle
direction of the
unbounded rays
• Need to resolve
inside/outside
• Works in any
dimension
58
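For completeness, a hedged 2D sketch of the "furthest Voronoi vertex" idea using SciPy (random points and made-up names); it only shows the mechanics for bounded cells and leaves out the unbounded-cell rule and the inside/outside decision mentioned on the slide:

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(3)
points = rng.random((30, 2))
vor = Voronoi(points)

directions = {}
for i, region_index in enumerate(vor.point_region):
    region = vor.regions[region_index]
    if len(region) == 0 or -1 in region:     # unbounded cell: skipped in this sketch
        continue
    verts = vor.vertices[region]             # Voronoi vertices bounding this cell
    d = np.linalg.norm(verts - points[i], axis=1)
    far = verts[np.argmax(d)]                # furthest Voronoi vertex of the cell
    v = far - points[i]
    directions[i] = v / np.linalg.norm(v)    # estimated normal direction (unoriented)

print(len(directions), "directions estimated")
```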
Curvature estimation
• Recall: a curvature at point p exists for any direction
in the tangent plane of p
• “The” curvature at p is the maximum occurring
curvature
• Basic approach: fit a suitable ball or quadratic surface
with p on its boundary that locally is a good fit for
the point set. Then compute the curvature of the ball
or surface at p
59
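One simple way to realize the "fit a suitable ball" idea is an algebraic least-squares sphere fit to the neighborhood, after which 1/radius serves as the curvature estimate; the sketch below (made-up data, and without the constraint that p lies exactly on the sphere) recovers curvature 0.5 for points near a sphere of radius 2.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: ||x - c||^2 = r^2 rewritten as
    2 c.x + (r^2 - c.c) = x.x, which is linear in c and d = r^2 - c.c."""
    A = np.column_stack([2.0 * points, np.ones(len(points))])
    b = np.einsum('ij,ij->i', points, points)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c, d = sol[:3], sol[3]
    return c, np.sqrt(d + c @ c)

rng = np.random.default_rng(4)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = 2.0 * dirs + rng.normal(0.0, 0.01, size=(200, 3))  # noisy sphere of radius 2

c, r = fit_sphere(pts)
print(c, r, 1.0 / r)   # center ~ (0, 0, 0), radius ~ 2, curvature ~ 0.5
```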
Curvature estimation
• Or: choose many directions in the tangent plane and, for each direction, fit a circle tangent at p in the plane spanned by the surface normal and that direction
• Choose the circle with smallest radius over all
directions
60
Summary
• Eigenvectors and eigenvalues are a central concept in
linear algebra and they are generally useful
• Local normal estimation can be done by principal
component analysis on the 12 nearest neighbors,
which essentially comes down to eigenvector and
eigenvalue computation of 3x3 matrices
• There are other methods for local normal estimation,
but these are generally less reliable and may not
indicate how “good” the normal is
61
Questions
1. Perform principal component analysis on the following four
data points: (1, 3), (2, 2), (4, 2), (5, 5) [mean-center first!]
2. Estimate the normal from 5 points in 3D, namely:
(0,1,1), (4,2,0), (8,5,1), (-6,-2,1), (-6,-6,-3)
How clear do you think the estimated normal is?
3. Can the estimated normal always be obtained as the third
eigenvector? If not, what can you do?
4. Suppose the density and accuracy suggest that you need
the 100 nearest neighbors to get a good normal estimate,
but you don’t want to manipulate large matrices (3 x 100)
for every point, what can you do?
62