
On the Dimensionality of Face Space
Marsha Meytlis and Lawrence Sirovich
IEEE Transactions on PAMI, July 2007
Outline
• Introduction
• Background
• Experiment
• Analysis of Data
• Results
• Discussion
Introduction
• A low-dimensional description of face space first appeared in [1], for face recognition.
• The eigenface approach [2], [3] was then built on the premise that a small number of elements, or features, could be used efficiently.

[1] L. Sirovich and M. Kirby, "Low-Dimensional Procedure for the Characterization of Human Faces," J. Optical Soc. Am., vol. 4, pp. 519-524, 1987.
[2] M. Turk and A. Pentland, "Eigenfaces for Recognition," J. Cognitive Neuroscience, vol. 3, pp. 71-86, 1991.
[3] M. Turk and A. Pentland, "Face Recognition Using Eigenfaces," Proc. IEEE Computer Vision and Pattern Recognition, pp. 586-591, 1991.
Introduction
• The dimension of face space may reasonably be defined as the acceptable threshold number of dimensions necessary to specify an identifiable face.
• How can this threshold number of dimensions be found?
Background
• Eigenface approach:
– Acquire the training set of face images and calculate the eigenfaces, which define the face space.
– Calculate a set of weights for a new face image by projecting the input image onto each of the M eigenfaces.
– Determine whether the image is a face, and classify the weight pattern as either a known person or unknown.
Background
• Calculating eigenfaces:
– A face image, a two-dimensional N × N array of intensity values, can be considered a vector of dimension N².
– Face images are similar in overall configuration, so a set of images maps to a collection of points in this huge space that can be described by a relatively low-dimensional subspace.
– Principal component analysis (PCA, or the Karhunen-Loève expansion) finds the vectors that best account for the distribution of face images.

[Figure: a face image as a point in the N²-dimensional image space.]
Background
• Face images of the training set are Γ1, Γ2, …, ΓM, and the average face of the set is defined by

  Ψ = (1/M) Σn=1..M Γn.

• Each face differs from the average by the vector Φi = Γi − Ψ, and the covariance matrix is

  C = (1/M) Σn=1..M Φn Φnᵀ = AAᵀ, where A = [Φ1, Φ2, …, ΦM].

• The N orthonormal vectors un which best describe the data are the eigenvectors of C.
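As a sketch of the computation above in NumPy: random arrays stand in for face images, and the standard "snapshot" trick from the eigenface literature diagonalizes the small M×M matrix L = (1/M)ΦΦᵀ instead of the N²×N² covariance C, since Φᵀv is an eigenvector of C whenever v is an eigenvector of L.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the training set: M face images flattened to N^2 pixels.
M, N2 = 20, 32 * 32
Gamma = rng.random((M, N2))           # rows are the face vectors Γ_1 … Γ_M

Psi = Gamma.mean(axis=0)              # average face Ψ
Phi = Gamma - Psi                     # difference vectors Φ_i = Γ_i − Ψ

# Snapshot method: eigendecompose the small M×M matrix L = (1/M) Φ Φᵀ,
# which shares its nonzero eigenvalues with C = (1/M) Φᵀ Φ.
L = Phi @ Phi.T / M
eigvals, V = np.linalg.eigh(L)
order = np.argsort(eigvals)[::-1]     # sort descending by eigenvalue
eigvals, V = eigvals[order], V[:, order]

U = Phi.T @ V[:, :M - 1]              # drop the ~zero mode left by centering
U /= np.linalg.norm(U, axis=0)        # unit-norm eigenfaces u_1 … u_{M−1}
```

Mean subtraction leaves at most M − 1 nonzero eigenvalues, which is why the last mode is dropped before normalization.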
Background
• Using eigenfaces to classify a face image: a new face Γ is transformed into its eigenface components (projected into "face space") by the simple operation

  ωk = ukᵀ (Γ − Ψ), for k = 1, 2, …, N.

• The weights form a vector Ωᵀ = [ω1, ω2, …, ωN], which is compared with the other face classes' Ωn to determine which face class, if any, the image belongs to.
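A minimal sketch of this projection-and-compare step, assuming eigenfaces U and an average face Ψ from the training stage; the names, random stand-in faces, and the distance threshold are all illustrative, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the training outputs: k unit-norm eigenfaces as columns
# of U, plus the average face Psi.
N2, k = 32 * 32, 10
U, _ = np.linalg.qr(rng.standard_normal((N2, k)))  # orthonormal columns
Psi = rng.random(N2)

def project(face, U, Psi):
    """Weights ω_k = u_kᵀ (Γ − Ψ); stacked, they form the vector Ω."""
    return U.T @ (face - Psi)

# One stored weight vector Ω_n per known person (hypothetical names/faces).
faces = {"alice": rng.random(N2), "bob": rng.random(N2)}
known = {name: project(f, U, Psi) for name, f in faces.items()}

def classify(face, known, U, Psi, threshold=1e3):
    """Nearest face class by distance in face space; 'unknown' if too far."""
    omega = project(face, U, Psi)
    name, dist = min(((n, np.linalg.norm(omega - w)) for n, w in known.items()),
                     key=lambda t: t[1])
    return name if dist < threshold else "unknown"
```

The distance threshold implements the "known person or unknown" decision: a face whose Ω is far from every stored class is rejected rather than misassigned.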
Background
• From the SVD of the training set we obtain the eigenfunctions (eigenfaces) ψn(x) and the corresponding eigenvalues λn, as in [2], [3].
• For the experiment, we can consider the average probability that an eigenface ψn(x) appears in the representation of a face.
Background
[Figure: the eigenface spectrum with a signal line and a noise line; the signal reaches the noise line near n ≈ 200.]
Background
• The remnants of facial structure in the
eigenfaces decay slowly after the first 100
components.
Background
• The SNR (signal-to-noise ratio),

  SNR = log₂(‖f‖² / ‖err_N‖²),

is the measure of error in the reconstruction, i.e., the amount of variance that has been captured in the reconstruction.
• In [4], most of the face-identity information necessary for recognition is captured within an SNR span of approximately 7-7.5 octaves.

[4] P. Penev and L. Sirovich, "The Global Dimensionality of Face Space," Proc. IEEE CS Int'l Conf. Automatic Face and Gesture Recognition, pp. 264-270, 2000.
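The log-base-2 form makes the unit an octave: each additional unit of SNR halves the reconstruction-error power relative to the signal. A contrived numerical check, with a random error vector scaled so its power is exactly 1/2⁷ of the signal power:

```python
import numpy as np

rng = np.random.default_rng(2)

def snr_octaves(f, f_hat):
    """SNR = log2(‖f‖² / ‖err‖²): one unit per halving of the error power."""
    err = f - f_hat
    return np.log2(np.dot(f, f) / np.dot(err, err))

# Scale a random error so its power is 1/2^7 of the signal power; the
# resulting "reconstruction" then sits at exactly 7 octaves.
f = rng.standard_normal(1024)
err = rng.standard_normal(1024)
err *= np.sqrt(np.dot(f, f) / (2 ** 7 * np.dot(err, err)))
f_hat = f - err
```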
Experiment
• The goal was to arrive at an estimate of the
dimension of face space, that is, the threshold
number of dimensions.
• Human observers were shown partial
reconstructions of faces and asked whether
there was recognition.
• Human observers: five men and five women,
mean age 27, range 20-35, all right handed.
Experiment
• The first part: assess a baseline for the observers' knowledge of familiar faces.
• The observers rated 46 people (three images of each) with one of the following options:
– high familiarity
– medium familiarity
– low or no familiarity
Experiment
• The second part: the observers viewed the
truncated versions of 80 faces, referred to as
test faces.
• The test faces included:
– 20 familiar faces in the FERET training set
– 20 unfamiliar faces in the FERET training set
– 20 familiar faces not in the FERET training set
– 20 unfamiliar faces not in the FERET training set
Experiment
• All 80 test faces were reconstructed to an SNR
of 5.0, and the observers viewed them in a
random sequence.
• In the same manner, SNR was incremented in
even steps of 0.5 until 10 was reached, with
11 steps in all.
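As a quick check of the presentation schedule, stepping from SNR 5.0 up to 10.0 in even increments of 0.5 does give 11 levels in all:

```python
import numpy as np

# SNR presentation levels: 5.0, 5.5, …, 10.0 (the step 0.5 is exactly
# representable in binary, so arange is safe here).
levels = np.arange(5.0, 10.5, 0.5)
```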
Experiment
• Observers distinguish the degree to which a face is familiar or unfamiliar and respond with one of the following options:
– 1. high certainty a face is unfamiliar
– 2. medium certainty a face is unfamiliar
– 3. low certainty a face is unfamiliar
– 4. low certainty a face is familiar
– 5. medium certainty a face is familiar
– 6. high certainty a face is familiar
Experiment
• The third part: using the 80 faces to furnish a baseline comparison of reconstruction error.

[Figure: reconstruction error per face; in-population faces are better reconstructed.]
Analysis of Data
• Data gathered in the second part of the experiment were analyzed using Receiver Operating Characteristic (ROC) curves to classify familiar versus unfamiliar faces.
Analysis of Data
• The ROC can also be represented equivalently by plotting the fraction of true positives (TPR) vs. the fraction of false positives (FPR):

  TPR = TP / P = TP / (TP + FN)
  FPR = FP / N = FP / (FP + TN)
Analysis of Data
• For classification, the six-point response must be transformed into a binary recognition, using five different thresholds on the observer's response r: r > 5, r > 4, r > 3, r > 2, and r > 1.
• Then r > 5 may be regarded as the probability that an observer is certain he is viewing a familiar face; r > 4 is this probability plus the probability of medium certainty, and so forth.
Analysis of Data
• An image that received a score above a specific threshold was classified as familiar and, otherwise, as unfamiliar.
• The proportion of true positive responses was determined as the percentage of familiar faces that were classified as familiar at that threshold.
Analysis of Data
• For each observer, we obtain a series of ROC curves.

[Figure: ROC curves per observer; high-SNR curves carry a high signal while low-SNR curves are noisy. The 45° line is pure chance, and the area between each curve and the 45° line corresponds to classification accuracy, an increasing function of SNR.]
Analysis of Data
• Following [5], we use the area under the ROC curve (AUC) as a measure of classifier performance.
• The numerical classification accuracy is the area under the ROC curve, for which pure chance contributes a baseline value of 0.5.

[5] A. Bradley, "The Use of the Area under the ROC Curve in the Evaluation of Machine Learning Algorithms," Pattern Recognition, vol. 30, pp. 1145-1159, 1997.
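The whole analysis chain of the preceding slides can be sketched end to end: six-point ratings are binarized at the five thresholds r > 5 … r > 1, each threshold yields one (FPR, TPR) point, and the AUC is the trapezoid-rule area under those points. The ratings here are random stand-ins, not the experiment's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 6-point ratings (1 = high certainty unfamiliar … 6 = high
# certainty familiar) for 40 familiar and 40 unfamiliar test faces.
familiar = rng.integers(3, 7, size=40)       # familiar faces rated 3–6
unfamiliar = rng.integers(1, 5, size=40)     # unfamiliar faces rated 1–4

# Each threshold r > 5 … r > 1 turns the ratings into a binary decision
# and yields one (FPR, TPR) point on the ROC curve.
points = [(0.0, 0.0)]
for r in range(5, 0, -1):
    tpr = float(np.mean(familiar > r))       # TP / (TP + FN)
    fpr = float(np.mean(unfamiliar > r))     # FP / (FP + TN)
    points.append((fpr, tpr))
points.append((1.0, 1.0))

# Area under the ROC curve by the trapezoid rule.
auc = sum((f2 - f1) * (t1 + t2) / 2
          for (f1, t1), (f2, t2) in zip(points, points[1:]))
```

Because the familiar ratings are skewed high and the unfamiliar ratings low, the resulting AUC lands well above the 0.5 chance baseline.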
Results
• In the first part of the experiment, we obtained the familiarity rating of each observer.
• Not all observers were equally familiar with the faces.

[Figure: baseline familiarity ratings per observer; those with high ratings have a good representation of the familiar faces in memory.]
Results
• In the second part of the experiment, the ROC curves are used to analyze classification accuracy:
– for the 3 best observers and for all observers.
Results
• For all observers, we averaged face classification accuracy as a function of SNR.

[Figure: accuracy vs. SNR for the 3 best observers and for all observers; the functions are fitted by the Weibull distribution's cumulative distribution function, 1 − e^(−(x/λ)^k).]
Results
• The fitted functions take the form

  p(SNR) = 1 − 0.5 · e^(−(SNR/λ)^k).

• A classification accuracy of 1.0 indicates perfect stimulus detection.
• The point at which there is a 50% improvement over chance (p = 0.75) in classification accuracy is chosen as the detection threshold [6].

[6] R. Quick, "A Vector Magnitude Model of Contrast Detection," Kybernetik, vol. 16, pp. 65-67, 1974.
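The p = 0.75 threshold has a closed form under this fit: setting p(SNR) = 0.75 gives e^(−(SNR/λ)^k) = 0.5, hence SNR* = λ · (ln 2)^(1/k). A small check, with illustrative parameter values rather than the paper's fitted ones:

```python
import numpy as np

def p_correct(snr, lam, k):
    """Fitted accuracy function p(SNR) = 1 − 0.5·exp(−(SNR/λ)^k)."""
    return 1.0 - 0.5 * np.exp(-((snr / lam) ** k))

def detection_threshold(lam, k, p=0.75):
    """Solve p(SNR) = p in closed form: SNR* = λ·(−ln(2(1 − p)))^(1/k).
    For p = 0.75 this reduces to SNR* = λ·(ln 2)^(1/k)."""
    return lam * (-np.log(2.0 * (1.0 - p))) ** (1.0 / k)

# Illustrative (hypothetical) parameter values.
lam, k = 8.0, 3.0
snr_star = detection_threshold(lam, k)
```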
Results
• Parameter values for the Weibull distribution were obtained for each fit.
• With a classification accuracy threshold of 0.75, the average of all observers reaches it at an SNR of 7.74, and the 3 best observers reach it at an SNR of 7.24.
Results
161
7.24
7.74
0.75
107
196
124
30
Results
• The dimensionality measure based on the observers with the highest baseline familiarity ratings is significantly lower than the estimate based on the average observer.
• A person's measure of dimensionality might depend on how well these familiar faces are coded in memory.
Discussion
• On average, the dimension of face space is in the range of 100 to 200 eigenfeatures.
• The error tolerance of observers may be related to their prior familiarity with the familiar faces.