Transcript: Tracking (VALSE)

Robust Visual Tracking –
Algorithms, Evaluations and Problems
Haibin Ling
Department of Computer and Information Sciences
Temple University
Philadelphia, PA 19122
October 15, 2014
Visual Tracking
Continuous localization of a visual entity or visual entities.
Single-target tracking (model-free) (PAMI’11, CVPR’11, ICCV’11, CVPR’12, ICCV’13, ECCV’14)
Pose tracking (Sigal et al. 2004)
Contour tracking (CVPR’14b)
Multi-target tracking (CVPR’13, CVPR’14a)
Visual Tracking
Related work
- Too many to list here
- A survey by Yilmaz, Javed & Shah in 2006
- Many influential trackers have appeared after 2006
Outline
• Problem formulation and particle filter
tracking framework
• Visual tracking using sparse representation
• Reducing bias in tracking evaluation
• Recent and future work
Problem formulation
Input:
• A sequence of images: I0, I1, …, It, …
• Target of interest at the initial frame: x0
A target is represented by a state vector
x = (position, scale, orientation)^T
Output:
• Targets in each of the following frames
– x1, …, xt, …
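For concreteness, here is a minimal sketch of how such a state vector might look in code; the field names and the particular parameterization (center, scale, in-plane rotation) are illustrative assumptions, not the talk's exact choice:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class TargetState:
    """Hypothetical state x = (position, scale, orientation)^T."""
    cx: float     # target center, x coordinate (pixels)
    cy: float     # target center, y coordinate (pixels)
    scale: float  # size relative to the initial bounding box
    theta: float  # in-plane orientation (radians)

    def as_vector(self) -> np.ndarray:
        return np.array([self.cx, self.cy, self.scale, self.theta])
```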
Tracking by Bayesian Estimation
Bayesian estimation:
At frame t, find the best $x_t$ by
$$x_t = \arg\max_{x_t} \; p(x_t \mid I_t, I_{t-1}, \dots, I_1, I_0)$$
Bayesian inference
Using observations (features) $y_0, y_1, \dots, y_t$ extracted from images $I_0, I_1, \dots, I_t$:
$$y_{0:t} := \{\, y_0, y_1, \dots, y_t \,\}$$
We have
$$x_t = \arg\max_{x_t} \; p(x_t \mid y_{0:t})$$
Kalman filter
– Gaussian assumptions everywhere → closed-form solution
– But probabilities in visual tracking are usually not Gaussian
Particle filter
– Probability propagation: iterative prediction and updating
– Sampling techniques
Particle Filter (Isard & Blake 98)
Visual tracking: $x_t = \arg\max_{x_t} p(x_t \mid y_{0:t})$
Probability propagation
Prediction:
$$p(x_t \mid y_{0:t-1}) = \int_{x_{t-1}} p(x_t \mid x_{t-1})\, p(x_{t-1} \mid y_{0:t-1})\, dx_{t-1}$$
Update:
$$p(x_t \mid y_{0:t}) \propto p(y_t \mid x_t)\, p(x_t \mid y_{0:t-1})$$
Particle sampling (sequential Monte Carlo)
Approximate the posterior density by a set of weighted samples:
$$\{\, (x_t^{(i)}, w_t^{(i)}) : i = 1, 2, \dots, N \,\}$$
where $w_t^{(i)}$ is the weight for particle $x_t^{(i)}$, e.g., $p(y_t \mid x_t^{(i)})$.
Now we need to decide:
$p(x_t \mid x_{t-1})$: state transition probability (drift, motion, etc.)
$p(y_t \mid x_t)$: observation likelihood
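As a rough illustration of the prediction/update loop above, here is a minimal bootstrap particle filter step in Python; the Gaussian random-walk transition and the `observation_likelihood` function are placeholder assumptions standing in for $p(x_t \mid x_{t-1})$ and $p(y_t \mid x_t)$, not the talk's specific models:

```python
import numpy as np

def particle_filter_step(particles, weights, frame, observation_likelihood,
                         motion_std=(4.0, 4.0, 0.02, 0.02)):
    """One prediction/update step of a bootstrap particle filter.

    particles: (N, 4) array of states (cx, cy, scale, theta).
    weights:   (N,) normalized weights approximating p(x_{t-1} | y_{0:t-1}).
    observation_likelihood(frame, state) -> float, stands in for p(y_t | x_t).
    """
    n = len(particles)

    # Resample according to the previous weights (sequential Monte Carlo).
    idx = np.random.choice(n, size=n, p=weights)
    particles = particles[idx]

    # Prediction: propagate through p(x_t | x_{t-1}); here a Gaussian random walk.
    particles = particles + np.random.randn(n, 4) * np.asarray(motion_std)

    # Update: re-weight by the observation likelihood p(y_t | x_t).
    weights = np.array([observation_likelihood(frame, s) for s in particles])
    weights = weights / (weights.sum() + 1e-12)

    # Point estimate: the highest-weighted particle.
    return particles, weights, particles[np.argmax(weights)]
```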
Outline
• Problem formulation and particle filter
tracking framework
• Visual tracking using sparse representation
• Reducing bias in tracking evaluation
• Recent and future work
Motivation
Intuition
• During tracking, there is a large redundancy in the observation of
target appearance
• It is common to represent the target appearance using a linear
representation
Idea
• Introduce sparsity constraints in the linear target representation
• Non-negativity constraints
Advantages
• Models observation redundancy naturally
• Addresses discrete appearance corruption such as occlusion (Wright et al. 2009)
• Benefits from recent advances in sparse coding / compressive sensing (Candes et al. 2006, Donoho 2006)
• A flexible framework (as illustrated in many extensions)
Sparse Representation for Tracking
• A candidate y approximately lies in a linear subspace spanned by templates from past observations
$$y \approx a_1 t_1 + a_2 t_2 + \dots + a_n t_n$$
$$y = a_1 t_1 + a_2 t_2 + \dots + a_n t_n + \varepsilon$$
Rewrite as
$$y = a_1 t_1 + a_2 t_2 + \dots + a_n t_n + e_1 i_1 + e_2 i_2 + \dots + e_d i_d$$
Task: find a sparse solution for $a$ and $e$:
$$y = [\,T,\ I\,] \begin{bmatrix} a \\ e \end{bmatrix}$$
Non-negativity Constraints
• In addition to the (positive) trivial templates I, we include
negative trivial templates -I.
$$y = a_1 t_1 + \dots + a_n t_n + e_1 i_1 + \dots + e_d i_d + e_1^{-}(-i_1) + \dots + e_d^{-}(-i_d)$$
where $a_i,\ e_i,\ e_i^{-} \ge 0$.
The formula can be rewritten as
$$y = [\,T,\ I,\ -I\,] \begin{bmatrix} a \\ e \\ e^{-} \end{bmatrix} \triangleq Bc, \qquad c \ge 0$$
Example Templates
[Figure: a candidate $y$ is decomposed as $y = Bc$, with $B = [\,T,\ I,\ -I\,]$ and $c = [\,a;\ e;\ e^{-}\,]$.]
Comparing Good and Bad Candidates
Achieving Sparse Solutions
Our task is to find a sparse solution to the following linear system:
$$y = Bc, \qquad c \ge 0$$
It leads to an L0 minimization task, such as
$$\min_{c \ge 0}\ \| Bc - y \|_2^2 + \lambda \| c \|_0$$
This can be well approximated, under very flexible conditions, by an L1 minimization:
$$\min_{c \ge 0}\ \| Bc - y \|_2^2 + \lambda \| c \|_1$$
Extensions
• Our extensions
– Speed up: bounded particle resampling (CVPR’11)
– Speed up: accelerated proximal gradient (CVPR’12)
– Blurred target tracking (ICCV’11)
• Other sparse-representation trackers
– Liu et al. ECCV’10
– Li, Shen & Shi CVPR’11; Liu et al. CVPR’11; Kwak et al. ICCV’11
– Zhong, Lu & Yang CVPR’12; Jia, Lu & Yang CVPR’12; Zhang, Zhang & Yang CVPR’12; Zhang, T. et al. CVPR’12
– Zhang, T. et al. IJCV’13; Hu et al. PAMI’14
– …
Outline
• Problem formulation and particle filter
tracking framework
• Visual tracking using sparse representation
• Reducing bias in tracking evaluation
• Recent and future work
Reducing Subjective Bias
• Which trackers are the best overall?
• Implementing and testing on a large benchmark
(e.g., Wu et al 2013) is a huge project.
• Recent trend: compare the authors’ own tracker
with many other trackers.
• Their own tracker typically performs the best.
– It has advantages that the authors want to highlight.
– Optimizing all trackers is non-trivial, if not impossible.
• We aim to reduce such biases and provide a more
practical comparison.
An example
Average Center Location Error:

        A      B      C      D       E
Seq 1   17.5   56.7   11.3   10.5    5.0
Seq 2   7.0    39.2   8.5    39.2    6.1
…       …      …      …      …       …
Seq N   30.7   66.2   20.4   120.4   24.9

• In the original slide, two of the columns are labeled as the authors’ previous tracker and the proposed tracker.
• The best two results are shown in red and blue.
Partial ranking representation
Each row of the error table above induces a partial ranking of the trackers, with higher rank (smaller center location error) on the left and lower rank on the right:

Seq 1: E < D < A < B
Seq 2: A < B = D
…
Seq N: A < B < D
Pairwise representation
Each partial ranking is further broken into pairwise records (X, Y, s), where s = 1 means X beats Y (smaller error) and a draw contributes 0.5 to each direction:

Seq 1: (D, A, 1), (D, B, 1), (A, B, 1)
Seq 2: (A, B, 1), (A, D, 1), (B, D, 0.5), (D, B, 0.5)
…
Seq N: (A, B, 1), (A, D, 1), (B, D, 1)
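A small sketch of how a per-sequence error row could be turned into pairwise records like those above; the tracker names and the draw convention (0.5 to each side) follow the slide, while the function itself is an illustrative assumption:

```python
from itertools import combinations

def pairwise_records(errors):
    """Convert per-sequence center location errors into pairwise records.

    errors: dict mapping tracker name -> error on one sequence (only trackers
            actually reported on that sequence are included).
    Returns a list of (winner, loser, score) tuples; ties contribute
    (X, Y, 0.5) and (Y, X, 0.5).
    """
    records = []
    for a, b in combinations(sorted(errors), 2):
        if errors[a] < errors[b]:
            records.append((a, b, 1.0))
        elif errors[b] < errors[a]:
            records.append((b, a, 1.0))
        else:
            records.append((a, b, 0.5))
            records.append((b, a, 0.5))
    return records

# Example: Seq 2 from the table above.
print(pairwise_records({"A": 7.0, "B": 39.2, "D": 39.2}))
# [('A', 'B', 1.0), ('A', 'D', 1.0), ('B', 'D', 0.5), ('D', 'B', 0.5)]
```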
Data Statistics
• PAMI (2000 Vol. 22 – 2013 Vol. 35), IJCV (2000 Vol. 36 – 2013 Vol. 104)
• ICCV, CVPR, ECCV (2005 – 2013)
• 45 papers (each treated as a tournament) contain useful table data
• 48 trackers appear in the data at the first stage
• 15 trackers are left after cleaning
• 664 partial rankings
• 6280 pairs of records, including 151 draw records
Paper selection and data cleaning
• More than 2 trackers must remain after removing unqualified trackers
• Independence assumption
– Conference-to-journal extensions
– Duplicate experimental results
• Significant lack of data
– Compared in only one tournament
– #records ≤ 10
Rank aggregation
• Rank aggregation (Ailon 2010)
– Find a full ranking that minimizes the total violation of pairwise comparisons.
– NP-hard; LpKwikSort algorithm
• PageRank-like ranking (Page et al. 1999)
– Graph-based solution
• Elo’s rating (Elo 1978)
– Widely used in sports ranking (chess, football, …)
– Sequentially updates scores based on each game
• Glicko’s rating (Glickman 1999)
– Extension of Elo’s rating that introduces a confidence measure
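For intuition, a minimal sketch of an Elo-style update applied to one pairwise record; the 400-point scale and K-factor are conventional chess defaults, not necessarily the settings used in the evaluation work described here:

```python
def elo_update(r_winner, r_loser, score=1.0, k=32.0):
    """Update two ratings after one pairwise record (score=1 win, 0.5 draw).

    Expected score follows the standard logistic curve with a 400-point scale.
    """
    expected = 1.0 / (1.0 + 10.0 ** ((r_loser - r_winner) / 400.0))
    delta = k * (score - expected)
    return r_winner + delta, r_loser - delta

# Example: two trackers start at 1500; the first beats the second once.
print(elo_update(1500.0, 1500.0, score=1.0))   # -> (1516.0, 1484.0)
```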
Ranking results
Outline
• Problem formulation and particle filter
tracking framework
• Visual tracking using sparse representation
• Reducing bias in tracking evaluation
• Recent and future work
Tracking with GPR (TGPR)
Transfer Learning Based Visual Tracking with Gaussian Processes Regression
Gao, Ling, Hu & Xing, ECCV 2014
Source code of TGPR available:
http://www.dabi.temple.edu/~hbling/code/TGPR.htm or http://jingao.weebly.com/
Promising Results
• CVPR2013 Benchmark (Wu et al. 2013): 50 sequences
• Princeton Benchmark (Song & Xiao 2013): 100 sequences
• VOT2013 (Kristan et al. 2013): 16 sequences
Acknowledgement
• Collaborators
Chenglong Bao, Erik Blasch, Jin Gao, Weiming Hu
Hui Ji, Xue Mei, Yu Pang, Yi Wu
• Funding
• National Science Foundation
• Air Force Research Laboratory
Thank You!
&
Questions?