BEST-PRISM-12mReview


Building a Robust Speaker Recognition System

Oldřich Plchot, Ondřej Glembek, Pavel Matějka

December 9th, 2012

The PRISM Team

SRI International: Harry Bratt, Lukas Burget, Luciana Ferrer, Martin Graciarena, Aaron Lawson, Yun Lei, Nicolas Scheffer, Sachin Kajarekar, Elizabeth Shriberg, Andreas Stolcke

Brno University of Technology: Jan H. Černocký, Ondřej Glembek, Pavel Matějka, Oldřich Plchot

PRISM Robustness

“How did we achieve these results?” “What are the outstanding research issues?”

[Chart: error rates by condition (match, channel, noise, reverb, vocal effort, language) for the PRISM baseline at the start of BEST vs. the full PRISM system; error rates lowered across all conditions]

BEST Phase I PI conference, Nov. 29th, 2011

Robustness

• A need for effectiveness on non-ideal conditions
  – Moving beyond biometric evaluation on clean, controlled acquisition environments
  – Extract robust and discriminative biometric features, invariant to such variability types

• A need for predictability
  – A system claiming 99% accuracy should not give 80% on unseen data, unless the system warns otherwise

A comprehensive approach

• Multi-stream high-order and low-order features
• Advanced speaker modeling and system combination
• Prediction of difficult scenarios – the QM vector
• Robustness vs. the unknown – careful testing on held-out data, beware of overtraining

A comprehensive approach

• Multi-stream high-order and low-order features: prosody, MLLR, constraints, and MFCC, PLP, …
  – Multiple HOFs: new complementary information
  – Multiple LOFs: ditto, plus redundancy for increased robustness
• Advanced speaker modeling and system combination: a unified modeling framework, i-vector / probabilistic linear discriminant analysis
  – Robust variation-compensation scheme for multiple features and variability types
  – i-vector / PLDA framework adapted to all high- and low-level features
  – Discriminative training for more compact, thus more robust, systems

THE MAGIC? – iVectors

• The iVector extractor is a model similar to JFA, but with a single subspace T
  – Easier to train: no need for speaker labels
  – The subspace can therefore be trained on a large amount of unlabeled recordings
• We assume a standard normal prior on the factors i
• The iVector is the point estimate of i: it can be extracted for every recording as its low-dimensional, fixed-length representation (typically 200 dimensions)
• However, the iVector contains information about both speaker and channel; hopefully these can be separated by the following classifier

Dehak, N., et al., "Support Vector Machines versus Fast Scoring in the Low-Dimensional Total Variability Space for Speaker Verification," in Proc. Interspeech 2009, Brighton, UK, September 2009.
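As a sketch of what "point estimate" means here: with a standard-normal prior on i, the posterior mean of i given a recording's sufficient statistics has a closed form. A minimal NumPy version follows; the function and argument names and the statistics layout are our own illustration, not the PRISM implementation.

```python
import numpy as np

def extract_ivector(N, F, T, Sigma):
    """Posterior-mean iVector for one recording.

    N     : (C,)      zeroth-order statistics per UBM component
    F     : (C, D)    first-order statistics, centered by the UBM means
    T     : (C, D, R) total-variability subspace (R ~ 200 per the slides)
    Sigma : (C, D)    diagonal UBM covariances
    """
    C, D, R = T.shape
    precision = np.eye(R)                    # standard-normal prior on i
    proj = np.zeros(R)
    for c in range(C):
        Tc_inv = T[c] / Sigma[c][:, None]    # Sigma_c^{-1} T_c, shape (D, R)
        precision += N[c] * (T[c].T @ Tc_inv)
        proj += Tc_inv.T @ F[c]              # T_c^T Sigma_c^{-1} f_c
    return np.linalg.solve(precision, proj)  # posterior mean of i
```

Note that nothing here uses speaker labels, which is why the subspace T can be trained on large unlabeled corpora.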

Illustration

A low-dimensional vector can represent complex patterns in a high-dimensional space. For each recording, the supervector means are modeled as μ = m + T·i; e.g. with a 2-dimensional supervector and a 3-dimensional iVector:

  [μ1]   [m1]   [t11 t12 t13]   [i1]
  [μ2] = [m2] + [t21 t22 t23] · [i2]
                                [i3]

Probabilistic Linear Discriminant Analysis (PLDA)

• Let every speech recording be represented by its iVector.
• What is now the appropriate probabilistic model for verification?
  – iVectors are assumed to be normally distributed
  – the iVector still contains channel information, so our model should consider both speaker and channel variability, just like in JFA
• The natural choice is a simplified JFA model with only a single Gaussian. Such a model is known as PLDA and is described by the familiar equation:

  φ = μ + V y + U x + ε

where y are the speaker factors, x the channel factors, and ε a residual term.

Why PLDA?

• For our low-dimensional iVectors, we usually choose the matrix U to be full rank, so there is no need to consider the residual ε.
• We can rewrite the definition of PLDA as

  φ = μ + V y + U x

or equivalently as

  p(φ | y) = N(φ; μ + V y, U Uᵀ),  p(y) = N(y; 0, I)

… familiar LDA assumptions: Gaussian classes sharing a within-class covariance U Uᵀ, with across-class covariance V Vᵀ.

PLDA based verification

• Let's again consider the verification score given by the log-likelihood ratio between the same-speaker and different-speaker hypotheses, now in the context of modeling iVectors with PLDA.
  – Before (with JFA): intractable. With iVectors: feasible.
• All the integrals are now convolutions of Gaussians and can be solved analytically, giving, after some manipulation, a closed-form score. FAST!
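The closed-form score can be sketched as a comparison of two zero-mean Gaussians: under the same-speaker hypothesis the pair of i-vectors shares a speaker factor and is jointly Gaussian with correlated blocks; under the different-speaker hypothesis the two recordings are independent. A NumPy illustration in our own notation (B = VVᵀ is the across-class covariance, W = UUᵀ the within-class covariance; not code from the PRISM system):

```python
import numpy as np

def gauss_logpdf(x, cov):
    """Log-density of a zero-mean multivariate Gaussian."""
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (len(x) * np.log(2 * np.pi) + logdet
                   + x @ np.linalg.solve(cov, x))

def plda_llr(eta1, eta2, B, W):
    """Same-speaker vs. different-speaker log-likelihood ratio.

    eta1, eta2 : mean-centered iVectors
    B : across-class (speaker) covariance, V V^T
    W : within-class (channel) covariance, U U^T
    """
    T = B + W                                   # total covariance of one iVector
    x = np.concatenate([eta1, eta2])
    same = np.block([[T, B], [B, T]])           # speaker factor shared
    diff = np.block([[T, 0 * B], [0 * B, T]])   # independent recordings
    return gauss_logpdf(x, same) - gauss_logpdf(x, diff)
```

In practice this expression is rearranged into a bilinear form in η₁ and η₂, so scoring a whole trial matrix reduces to a couple of matrix multiplications, which is where the speed comes from.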

Performance compared to Eigenchannels and JFA

[DET comparison on NIST SRE 2010, tel-tel (cond. 5): baseline (relevance MAP), eigenchannel adaptation, JFA, iVector+PLDA]

• iVector+PLDA system:
  – implementation simpler than for JFA
  – allows for extremely fast verification
  – provides significant improvements, especially in the important low false-alarm region

iVector+PLDA – enhancements

[Chart on NIST SRE 2010, tel-tel (cond. 5): iVector+PLDA; + full-covariance UBM; + LDA to 150 dims + length normalization; + condition-based mean normalization]

• Ideas behind the enhancements:
  – make it easier for PLDA by preprocessing the data with LDA
  – make the heavy-tailed iVectors more Gaussian (length normalization)
  – help a little more with channel compensation by condition-based mean normalization
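The preprocessing chain behind these enhancements is short to write down. A sketch with names and shapes of our own choosing (e.g. an LDA projection to 150 dimensions, as on the slide):

```python
import numpy as np

def length_normalize(X):
    """Project iVectors onto the unit sphere: a cheap way to make the
    heavy-tailed iVector distribution look more Gaussian for PLDA."""
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def preprocess(X, lda, cond_means, cond_ids):
    """Condition-based mean normalization -> LDA -> length normalization.

    X          : (n, R)   raw iVectors
    lda        : (R, 150) LDA projection matrix (trained elsewhere)
    cond_means : (K, R)   mean iVector per detected condition
    cond_ids   : (n,)     detected condition index for each iVector
    """
    X = X - cond_means[cond_ids]   # remove per-condition offset
    X = X @ lda                    # discriminative dimensionality reduction
    return length_normalize(X)
```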

Diverse systems unified

“New technologies for prosody modeling, e.g. subspace multinomial modeling”

[Chart: per-condition error rates (match, channel, noise, reverb, vocal effort, language) for the Prospol, MFCC mic, MFCC tel, PLP, LPCC, and MLLR subsystems]

“All features are now modeled using the i-vector paradigm, even for combination”

BEST Phase I Final review, Nov. 3rd, 2011 14

BEST evaluation submissions

Early iVector fusion, optimal MFCC

[Chart: per-condition error rates (match, channel, noise, reverb, vocal effort, language) for the submitted systems]
  – PRISM_3: MFCC + Prospol
  – PRISM_2: MFCC + PLP + LPCC + Prospol + MLLR
  – PRISM_1: MFCC + PLP + LPCC + Prospol + MLLR + condition-detection metadata
  – PRISM_7 (PRIMARY): all subsystems + condition-detection metadata

• Complex multi-feature combination of low- and high-level systems
• ½% false alarms @ 10% miss for our PRISM MFCC system: look at another operating point? (if that is too low for the evaluation)

A comprehensive approach

• Prediction of difficult scenarios: universal audio characterization for system combination
  – Detect the difficulty of the problem, e.g. enroll on noise, test on telephone
  – React appropriately, e.g. calibrate scores for sound decisions

Predicting challenging scenarios

• Unified acoustic characterization: a novel approach to extract any metadata in a unified way
  – Designed with the BEST program goal in mind: the ability to handle unseen data or compounded variability types
  – Avoids the unnecessary burden of developing a new system for each new type of metadata
• A condition IDentification system, where the training data is divided into conditions
• Investigating how to integrate intrinsic conditions: language and vocal effort

[Confusion-matrix excerpt over conditions: microphone, telephone, reverb (0.3 / 0.5 / 0.7), noise (8 / 15 / 20 dB); off-diagonal entries around 0.001, diagonal entries around 0.3–0.4]
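One minimal way to realize such a condition-ID system, sketched here with hypothetical names: model each training condition by a Gaussian (with a shared precision) in the audio-characterization i-vector space and return the posterior over conditions, which can serve as the metadata vector passed downstream to calibration.

```python
import numpy as np

def condition_posteriors(iv, cond_means, cond_prec):
    """Posterior over training conditions (mic, tel, reverb 0.3/0.5/0.7,
    noise 8/15/20 dB, ...) for one audio-characterization iVector.

    iv         : (R,)   iVector of the recording
    cond_means : (K, R) per-condition mean iVectors
    cond_prec  : (R, R) shared precision matrix
    """
    diffs = iv - cond_means                        # (K, R)
    ll = -0.5 * np.sum(diffs * (diffs @ cond_prec), axis=1)
    ll -= ll.max()                                 # numerical stability
    p = np.exp(ll)
    return p / p.sum()                             # sums to 1 over conditions
```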

Robust calibration / fusion

• Condition-prediction features as new higher-order information for calibration
  – Calibration: scale and shift scores for sound decision making at all operating points
  – Confidence under matched vs. mismatched conditions will differ
• Discriminative training of the bilinear form
  – The model gives a bias for each condition type
• Further research
  – Assess generalization
  – Affect system fusion weights, not just calibration
  – Earlier inclusion of the information

[Chart: per-condition error rates (match, channel, noise, reverb, vocal effort, language) for PRISM Robust + Prosody, with and without audio characterization]

Fusion with QM

  s' = α₀ + Σₖ αₖ sₖ + q₁ᵀ B q₂

where
  – α₀: offset
  – αₖ: linear combination weights
  – sₖ: score from system k
  – q₁, q₂: vectors of metadata (enrollment and test sides)
  – B: bilinear combination matrix
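The fused score is one line of code once the pieces are in hand; a sketch with hypothetical names (the equation itself is reconstructed from the slide's legend):

```python
import numpy as np

def qm_fused_score(scores, alpha0, alpha, q_enroll, q_test, B):
    """QM fusion: offset + linear combination of subsystem scores +
    a bilinear metadata term that shifts the score for each
    (enrollment condition, test condition) pair.

    scores   : (K,)   scores from the K subsystems
    alpha0   : float  offset
    alpha    : (K,)   linear combination weights
    q_enroll : (M,)   metadata vector for the enrollment side
    q_test   : (M,)   metadata vector for the test side
    B        : (M, M) bilinear combination matrix
    """
    return alpha0 + alpha @ scores + q_enroll @ B @ q_test
```

All the parameters (α₀, α, B) would be trained discriminatively on held-out trials, as the previous slide describes.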

A comprehensive approach

• Robustness vs. the unknown: the PRISM data set
  – Expose systems to a diverse enough set of the variability types of interest
  – Aim for generalization to non-ideal or unseen data scenarios
  – Use advanced strategies to compensate for these degradations

The PRISM data set

• A multi-variability, large-scale speaker recognition evaluation set
  – Unprecedented design effort across many data sets
  – Simulation of extrinsic variability types: reverb & noise
  – Incorporation of intrinsic and cross-language variability
  – 1000 speakers, 30K audio files and more than 70M trials
• Open design: recipe published at the SRE11 analysis workshop [Ferrer11]
• Extrinsic data simulation
  – Degradation of a clean interview data set from SRE'08 and '10 (close mics)
  – A variety of degradations aiming at generalization: diversity of SNRs / reverbs to cover unseen data

Noisy data set
• Noises from freesound.org, mixed using FaNT (Aurora)
• Real noise samples: cocktail-party type, office noises
• Different noises for training and evaluation

Reverb data set
• Uses RIR + Fconv
• Three RT30 values chosen: 0.3, 0.5, 0.7
• 15 different room configurations: 9 for training, 3 for enrollment, 3 for test
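Both degradation steps reduce to a gain computation and a convolution. A simplified sketch (a plain energy-based SNR, without the speech-activity weighting that FaNT performs, and a toy room impulse response):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale a noise sample and add it to speech at a target SNR.
    Simplified version of the noisy-set construction: the noise is
    looped/cropped to the speech length and scaled so the energy
    ratio matches snr_db."""
    noise = np.resize(noise, speech.shape)             # loop/crop noise
    p_s = np.mean(speech ** 2)
    p_n = np.mean(noise ** 2)
    gain = np.sqrt(p_s / (p_n * 10 ** (snr_db / 10)))
    return speech + gain * noise

def add_reverb(speech, rir):
    """Convolve with a room impulse response (the RIR + Fconv step),
    truncated back to the original length."""
    return np.convolve(speech, rir)[: len(speech)]
```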

Research opportunities

• Multi-feature systems
  – Use novel low-level features for noise robustness
  – Noise- / reverb-robust pitch extraction algorithms
• Deeper understanding of combination, aiming for simpler systems
  – Information fusion at an earlier stage than the score level (early fusion)
• Acoustic characterization
  – Deep integration of condition prediction in the pipeline
  – Affecting fusion weights during system combination
  – Integrating language and intrinsic variations
• Hard extrinsic variations bring in new domains of expertise borrowed from speech recognition and elsewhere (noise-robust modeling, speech enhancement: de-reverberation, de-noising, binary masks, …)

Research opportunities: relaxing constraints even more

• Compounded variations: reverb + noise + language switch
• Explore new types of variations
  – New kinds of intrinsic variations: vocal effort (furtive, oration), aging, sickness
  – Naturally occurring reverberant and noisy speech
• Other parametric relaxations
  – Unconstrained duration for speaker enrollment and testing (as low as a second?)
  – Robustness to multi-speaker audio in enrollment and testing: another kind of variability, VERY important for interview data processing

Questions?