Transcript slides

Deception Detection with
micro-expressions using
Kernelized Perceptron
Abhijit Roy (ar3579)
Why micro-expressions?
•
There has been a lot of research interest in the study of micro-expressions
•
They are considered one of the most effective cues for detecting lies and
dangerous behavior
•
Haggard and Isaacs first discovered micro-expressions and interpreted them as
signs of repressed emotion
•
Micro-expressions have recently gained popularity because of their potential
applications in clinical diagnosis and national security
•
The Transportation Security Administration in the USA already employs
Screening Passengers by Observation Techniques (SPOT), which uses
micro-expression analysis
Why micro-expressions with machine
learning?
•
Classification accuracy tends to improve when the data provides a larger number of informative features.
•
Each data point or example is a set of features representing the human face.
•
Too few features make it harder to train a good model, while a larger
number of features can support better classification.
•
Micro-expressions can be used to create feature-rich examples, which can then be
used to train classifiers with the expectation of better accuracy.
Perceptron
Positive Example - Truth
Negative Example - Lie
Perceptron
•
A very efficient classification algorithm for linearly separable data (see the sketch below).
•
The positive and negative examples in the lie detection experiment are highly likely to
be linearly separable.
•
Works well with low-dimensional data, that is, data with few features.
•
Becomes computationally expensive when examples have many features; that
is, high-dimensional data causes performance issues.
•
Micro-expression data is high-dimensional, with many features for
each example point.
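A minimal sketch of the primal perceptron described above, assuming the training data is already available as a NumPy feature matrix X with labels y in {-1, +1}; the function names, epoch limit, and data shapes are illustrative assumptions, not part of the original slides.

```python
import numpy as np

def perceptron_train(X, y, epochs=100):
    """Primal perceptron: learns a weight vector w and bias b.

    X: (n_samples, n_features) array of feature vectors
    y: (n_samples,) array of labels in {-1, +1}
    """
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for x_i, y_i in zip(X, y):
            # Mistake-driven update: adjust w and b only on a misclassification.
            if y_i * (np.dot(w, x_i) + b) <= 0:
                w += y_i * x_i
                b += y_i
                mistakes += 1
        if mistakes == 0:  # no mistakes in a full pass: the data is separated
            break
    return w, b

def perceptron_predict(X, w, b):
    # Sign of the linear decision function gives the predicted label.
    return np.sign(X @ w + b)
```

Note that the update works directly on the explicit weight vector, which is why the cost grows with the number of features.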
Kernelized Perceptron
•
The perceptron has a dual form that can be run on high-dimensional data, even
with an infinite-dimensional feature space (see the sketch below).
•
This is made possible by a kernel function, which computes inner products in the
feature space through a simple, computationally inexpensive calculation.
•
It can be used with micro-expression data, where each example is a high-dimensional
vector whose dimensions correspond to individual features.
•
It is expected to run in a reasonable amount of time and still produce a
classifier with high accuracy in predicting lies.
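A minimal sketch of the dual (kernelized) perceptron, assuming a Gaussian/RBF kernel; the kernel choice, its bandwidth gamma, and the epoch limit are illustrative assumptions rather than values fixed by the slides.

```python
import numpy as np

def rbf_kernel(x, z, gamma=0.1):
    # Assumed kernel: k(x, z) = exp(-gamma * ||x - z||^2).
    return np.exp(-gamma * np.sum((x - z) ** 2))

def kernel_perceptron_train(X, y, kernel=rbf_kernel, epochs=100):
    """Dual perceptron: keeps a mistake count alpha_i per training example
    instead of an explicit weight vector, so only kernel values are needed."""
    n = len(X)
    alpha = np.zeros(n)
    # Precompute the Gram matrix of pairwise kernel values.
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    for _ in range(epochs):
        mistakes = 0
        for i in range(n):
            # Decision value is a kernel-weighted sum over past mistakes.
            f = np.sum(alpha * y * K[:, i])
            if y[i] * f <= 0:
                alpha[i] += 1
                mistakes += 1
        if mistakes == 0:
            break
    return alpha

def kernel_perceptron_predict(x, X, y, alpha, kernel=rbf_kernel):
    # f(x) = sum_j alpha_j * y_j * k(x_j, x); its sign is the prediction.
    f = sum(a * y_j * kernel(x_j, x) for a, y_j, x_j in zip(alpha, y, X))
    return 1.0 if f > 0 else -1.0
```

Because only the per-example counts alpha and kernel evaluations are stored, the feature vectors are never mapped explicitly, which is what keeps high-dimensional data tractable.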
Data
•
The Chinese Academy of Sciences Micro-expression (CASME) database
•
Contains 195 micro-expressions filmed at 60 fps
•
They were selected from more than 1500 elicited facial movements
•
These data points will be preprocessed into labeled example vectors, which will
then be fed to the perceptron algorithm as training data (see the sketch below)
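A hypothetical sketch of turning preprocessed clips into labeled example vectors; the slides do not specify the preprocessing pipeline, so the assumption that each clip already yields a fixed-length feature descriptor, the +1/-1 label convention, and the standardization step are all illustrative.

```python
import numpy as np

def build_dataset(clips, labels):
    """Turn preprocessed micro-expression clips into labeled example vectors.

    clips:  list of per-clip feature arrays (assumed fixed-length descriptors)
    labels: list of labels, +1 = truth and -1 = lie in this sketch
    """
    # Flatten each clip's features into one example vector per clip.
    X = np.array([np.asarray(c, dtype=float).ravel() for c in clips])
    y = np.array(labels, dtype=float)

    # Standardize features so no single dimension dominates the dot products.
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)
    return X, y
```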
Results and Other Methods
•
The experiment will split the data 70/30 into a training set and a holdout
validation set.
•
The training error will be calculated on the training data set, and the test
error will be calculated on the holdout validation set.
•
We can also apply Principal Component Analysis to reduce the data to its most
significant components and then run other algorithms on the reduced data.
•
For data that is not clearly linearly separable, we can use soft-margin SVMs
with kernel functions to achieve good accuracy (see the sketch below).
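A sketch of the evaluation plan using scikit-learn, assuming the labeled vectors X and y from the preprocessing step; the PCA variance threshold and the SVM hyperparameters are illustrative choices, not values from the slides.

```python
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def evaluate(X, y, seed=0):
    # 70/30 split into training set and holdout validation set.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=seed)

    # Optional PCA step: keep the components explaining most of the variance.
    pca = PCA(n_components=0.95)  # illustrative threshold
    X_train_p = pca.fit_transform(X_train)
    X_test_p = pca.transform(X_test)

    # Soft-margin SVM with an RBF kernel for data that is not cleanly separable.
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X_train_p, y_train)

    # Training error on the training set, test error on the holdout set.
    train_error = 1.0 - clf.score(X_train_p, y_train)
    test_error = 1.0 - clf.score(X_test_p, y_test)
    return train_error, test_error
```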