Slide 1 - Kobe University


ISS-P-252
Human Emotions Estimation by Adaboost Based on User's Facial Expression and Average Face from Different Directions
Jinhui Chen, Tetsuya Takiguchi, Yasuo Ariki (Kobe University)
Overview

Background
• Automatic content recommendation of TV programs for customers: the expression of a customer's face is directly related to sales content recommendation.
• Understanding human emotions can improve a robot's cognitive and interactive abilities.

Conventional methods
Adaptive Boosting (Adaboost) [1] is a basic method widely used in face feature extraction and recognition. It is easy to operate and its classification is quite precise. However, it is sensitive to noisy data and outliers, and it is not good at processing events that lack previous training, so it needs to be used in conjunction with other algorithms to improve performance.

[1] Yoav Freund and Robert Schapire, 1995

Proposed method

Flowchart
The system includes two parts. The first is the training system: during this stage, face features and emotion features are extracted and saved individually as databases (the face features data and the emotion features data). In the other stage, the detecting system, facial expressions are classified, and the processing cost of the analysis, which needs to collect the user's facial-expression data, is cut down.

Training system: input video → obtain the face region → recover into the 3D models → create average face by 3D features
Detecting system: recover 3D features → estimate emotions
[3] C. Goodall, 1991
Problems & Approaches

Problem 1
• When the user's head rotates significantly, obvious errors arise.

Approach 1
• Combine the three-dimensional average face (3DAF) with Adaboost: the original face data is recovered as a 3D model to get more feature points.

Problem 2
• The cost of processing data and time is high for real-time detecting.

Approach 2
• The average face models are projected into an 8-bit gray image, which is used instead of the original data.

Features Extraction

Average face creation

Procrustes analysis [3]: we adjust the difference between the model data and the original data with the following function, to control the error:

$E_I(x, y) = \lVert I_o(x, y) - U^T I_m(x, y, z) \rVert$

where $E_I$ is the control factor ($\leq 0.001$), $I_o$ is the original face data, $I_m$ is the 3D model data, and $U^T$ is the projection vector.
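As a rough illustration, the error check could be implemented as follows (a minimal numpy sketch; the function and variable names are my assumptions, not from the poster):

```python
import numpy as np

def projection_error(original_2d, model_3d, U):
    """Error between the original face data I_o and the projected 3D model,
    as in E_I(x, y) = ||I_o(x, y) - U^T I_m(x, y, z)||.

    original_2d : (n, 2) array, original face points I_o
    model_3d    : (n, 3) array, 3D model points I_m
    U           : (3, 2) projection matrix, so U.T maps 3D -> 2D
    """
    projected = model_3d @ U  # row-wise U^T applied to each point
    return np.linalg.norm(original_2d - projected)

def within_tolerance(original_2d, model_3d, U, e_max=0.001):
    """Accept the model only when the error stays below the control factor."""
    return projection_error(original_2d, model_3d, U) <= e_max
```

A recovered 3D model whose projection drifts from the observed face points would thus be rejected or re-adjusted before it contributes to the average face.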
Having obtained the 3D features, the coordinates $S_i$ and the RGB values $T_i$ are obtained easily:

$S_i = (x_{i1}, y_{i1}, z_{i1}, \dots, x_{in}, y_{in}, z_{in})^T$

$T_i = (R_{i1}, G_{i1}, B_{i1}, \dots, R_{in}, G_{in}, B_{in})^T$
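The flattening of the $n$ feature points into the vectors $S_i$ and $T_i$ might look like this (a small sketch; the names are assumptions):

```python
import numpy as np

def feature_vectors(points_3d, colors):
    """Flatten n feature points into the vectors S_i and T_i.

    points_3d : (n, 3) array of (x, y, z) coordinates per feature point
    colors    : (n, 3) array of (R, G, B) values per feature point
    Returns S_i = (x_1, y_1, z_1, ..., x_n, y_n, z_n)^T and the analogous T_i.
    """
    S_i = np.asarray(points_3d, dtype=float).reshape(-1)
    T_i = np.asarray(colors, dtype=float).reshape(-1)
    return S_i, T_i
```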
Boost training

The features are cut out following the SDAM [2] draft, giving the face marked with feature points.

Suppose the samples are $(x_1, y_1), \dots, (x_N, y_N)$, where $x_i \in X$ and $y_i \in \{-1, +1\}$. To judge the emotion features, we use the final classifier

$H(x) = \operatorname{sign}\Big(\sum_{t=1}^{T} w_t h_t(x)\Big)$

where $H(x)$ is obtained after $T$ rounds of training, $h_t(x)$ is the classifier of the $t$-th iteration, and $w_t$ is its weight. The data are also used to calculate average values that are updated at each iteration; in the model below, $\bar{S}_i$ and $\bar{T}_i$ denote the previous $i$-th iteration average values, $\alpha_j$ and $\beta_j$ are adjustment factors, and $N$ is the total number of elements.

[2] T. Wang et al., 1995
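The boosting step is not spelled out on the poster; the sketch below shows generic Adaboost in the style of Freund and Schapire, selecting the lowest-error weak learner each round and combining them into $H(x) = \operatorname{sign}(\sum_t w_t h_t(x))$ (all names are assumptions):

```python
import numpy as np

def train_adaboost(X, y, weak_learners, T):
    """For T rounds, pick the weak learner with the lowest weighted error
    and weight it by w_t = 0.5 * ln((1 - e_t) / e_t).

    X             : list of samples
    y             : (N,) array of labels in {-1, +1}
    weak_learners : candidate functions h(x) -> {-1, +1}
    """
    N = len(X)
    D = np.full(N, 1.0 / N)               # sample weights, uniform at start
    chosen, weights = [], []
    for _ in range(T):
        preds = [np.array([h(x) for x in X]) for h in weak_learners]
        errors = [np.sum(D[p != y]) for p in preds]
        t = int(np.argmin(errors))
        e = max(errors[t], 1e-10)         # guard against a perfect learner
        w = 0.5 * np.log((1.0 - e) / e)
        D = D * np.exp(-w * y * preds[t]) # up-weight misclassified samples
        D = D / D.sum()
        chosen.append(weak_learners[t])
        weights.append(w)
    return chosen, weights

def adaboost_predict(x, classifiers, weights):
    """Final classifier H(x) = sign(sum_t w_t * h_t(x))."""
    score = sum(w * h(x) for h, w in zip(classifiers, weights))
    return 1 if score >= 0 else -1
```

In the poster's setting, the samples $x_i$ would be the average-face features and the two classes would separate one emotion group from the rest.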
$S_{model} = \sum_{m=1}^{N} \frac{\bar{S}_i + \sum_j \alpha_j S_{ij}}{N}, \qquad T_{model} = \sum_{m=1}^{N} \frac{\bar{T}_i + \sum_j \beta_j T_{ij}}{N}$

The average model is projected into an 8-bit gray image to obtain the final average-face features, which are used for emotion classification:

$f_E = U^T \begin{pmatrix} s_{i,model} \\ t_{i,model} \end{pmatrix}$
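To make the averaging-and-projection step concrete, here is a simplified sketch (the per-component adjustment factors $\alpha_j$, $\beta_j$ are replaced by a plain mean, and all names are assumptions):

```python
import numpy as np

def average_face_projection(shapes, textures, U):
    """Average the per-sample S_i / T_i vectors and project the stacked model.

    shapes   : (N, 3n) array of shape vectors S_i
    textures : (N, 3n) array of texture vectors T_i
    U        : (6n, k) projection matrix, so U.T maps the model to f_E
    """
    s_model = np.mean(shapes, axis=0)    # simplified stand-in for S_model
    t_model = np.mean(textures, axis=0)  # simplified stand-in for T_model
    stacked = np.concatenate([s_model, t_model])
    return U.T @ stacked                 # f_E = U^T (s_model; t_model)
```

Choosing $U$ so that the projected features fit in an 8-bit gray image is what cuts the processing cost relative to working on the original data.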
Experiment

Experiment information
• Trained samples: 20 × 3 × 12 + 75 = 795
• Persons tested: 2
• Emotion groups: Natural (Nau), Happy (Hap), Unhappy (Unh)
• Conditions compared: head kept static vs. head freely rotating; 2D features vs. 3D features

Data explanation
• The classification performance using the two methods' features is compared.
• The correct rates under the two motion-state conditions are compared.
Conclusion
In our research, we proposed a novel method for improving emotion estimation. Our experiments have shown that our approach improves the emotion classification rate, especially when the head is freely rotating.