A Real-Time Automated System for
the Recognition of Human Facial
Expressions
IEEE, February 2006
Keith Anderson and Peter W. McOwan
Advisor: 顏國郎
Presenter: 廖崧棋
Outline
• Introduction
– Problems & Rationales
– Purpose & Specific Aims
– Background & Literature Review
• Materials & Methods
• Experiments
• Results & Discussions
Problems & Rationales
• Development of interactive science and technology
– Application to robots
– Tools for psychological measurement
– Detection of deception
– Development of educational tools
Purpose
• The system uses facial motion to characterize
monochrome frontal views of facial
expressions and is able to operate effectively
in cluttered and dynamic scenes, recognizing
the six emotions universally associated with
unique facial expressions.
Specific Aims
• Define the set of emotion terms to be recognized
• Locate, identify, and track the face
• Algorithms used for facial expression recognition
• Application in practical use
BACKGROUND & LITERATURE REVIEW
Background & Literature Review
• R. W. Picard, Affective Computing. Cambridge, MA: MIT Press, 1995.
• C. Breazeal, “Emotion and sociable humanoid robots,” Int. J. Human-Comput. Stud., vol. 59, pp. 119–155, 2003.
• K. Mase and A. Pentland, “Recognition of facial expression from optical flow,” IEICE Trans. E, vol. 74, pp. 408–410, 1991.
MATERIALS & METHODS
Optical flow
• Detects changes in light intensity, whether strong or weak
• Based on the distribution of gray-scale changes (see the sketch after this list)
• Occlusion problem
• Aperture problem
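A minimal illustrative sketch of dense optical flow between two consecutive frames, using OpenCV's Farneback method purely as a stand-in for the paper's motion estimation; the file names and parameter values below are placeholders, not the authors' settings.

```python
import cv2
import numpy as np

# Two consecutive monochrome frames (file names are placeholders).
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Dense flow: one (dx, dy) vector per pixel, estimated from gray-scale changes.
flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# Motion magnitude at every pixel; large values mark moving facial regions.
magnitude = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2)
print("mean motion magnitude:", float(magnitude.mean()))
```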
SVM (Support Vector Machine)
Emotion
• Namely happiness, sadness, disgust, surprise, fear, and anger (a classifier sketch follows below)
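A hedged illustration of how a support vector machine could map motion features to these six emotion labels; the data, feature size, and kernel settings below are invented placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["happiness", "sadness", "disgust", "surprise", "fear", "anger"]

# Synthetic stand-in data: one motion-feature vector per training sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 24))            # e.g. averaged motion per facial region
y = rng.integers(0, len(EMOTIONS), 120)   # placeholder emotion labels

# Multi-class SVM (one-vs-one by default in scikit-learn).
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y)

probe = rng.normal(size=(1, 24))
print("predicted emotion:", EMOTIONS[int(clf.predict(probe)[0])])
```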
System summary
EXPERIMENTS
Face tracker summary
Table of error rates under different restrictions
Comparison of classifiers
Optimal data
Optimised expression recognition approach
RESULTS & DISCUSSIONS
Results & Discussions
• The integration and extension of existing techniques produces a complete expression recognition system that can be used by anyone
• This can be contrasted with other approaches that use optical flow in conjunction with computationally heavy 3-D face models or more complicated classifier architectures
Results & Discussions
• There is no reason to believe that these six expressions are the only ones of importance
• Further work could look into learning facial regions when the size and shape of the regions are not restricted
• Users can feel that technology can understand and respond to their facial cues
Regions for motion averaging based on the co-articulation regions of Fidaleo and Neumann (see the sketch below)
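A rough sketch of the region-averaging idea: optical-flow vectors are averaged inside a few facial regions to produce a compact motion feature vector. The region boxes below are made-up placeholders, not the actual Fidaleo-Neumann co-articulation regions.

```python
import numpy as np

def region_motion_features(flow, regions):
    """flow: HxWx2 array of (dx, dy); regions: list of (x, y, w, h) boxes."""
    feats = []
    for (x, y, w, h) in regions:
        patch = flow[y:y + h, x:x + w]                   # motion inside one region
        feats.extend(patch.reshape(-1, 2).mean(axis=0))  # mean dx, mean dy
    return np.array(feats)

# Example: a synthetic flow field and three placeholder regions.
flow = np.random.default_rng(1).normal(size=(240, 320, 2))
regions = [(100, 60, 60, 30), (160, 60, 60, 30), (110, 150, 100, 40)]
print(region_motion_features(flow, regions).shape)  # (6,) = 2 values per region
```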