
Diversity-Based Classifier Selection for Adaptive Object Tracking

Ingrid Visentini¹, Josef Kittler², and Gian Luca Foresti¹

¹ Dept. of Mathematics and Computer Science, University of Udine, 33100 Udine, Italy
² CVSSP, University of Surrey, Guildford, Surrey, UK, GU2 7XH

Abstract. In this work we propose a novel pairwise diversity measure, reminiscent of the Fisher linear discriminant, to construct a classifier ensemble for tracking a non-rigid object in a complex environment. A subset of constantly updated classifiers is selected by exploiting their capability to distinguish the target from the background and, at the same time, by promoting independent errors. This reduced ensemble is employed in the target search phase, speeding up the system while keeping its performance comparable to state-of-the-art algorithms. Experiments conducted on a Pan-Tilt-Zoom camera video sequence demonstrate the effectiveness of the proposed approach in coping with pose variations of the target.

1 Introduction

It is well known that the aim of ensemble methods is to aggregate multiple learned models in order to improve classification accuracy. Boosting, bagging and other forms of classifier combination [8,17,22] provide experimental confirmation and a theoretical explanation that diverse hypotheses joined together produce a strong ensemble, whose error is reduced with respect to the average error of its members.
At the same time, the concept of diversity arises from the intuition that a set of very dissimilar classifiers should perform better than a single good decision maker, because the error of each member is compensated by the decisions of the others [12]. Intuitively, the more diverse the classifiers, the wider the collective knowledge and the more tolerant the ensemble is to unpredictable events. Since identical classifiers produce the same (redundant) output, combining the responses of several classifiers is useful only when they disagree on some inputs. We refer to this measure of disagreement, which initially appeared under the name of ambiguity in [10], as diversity.
The construction of a classifier ensemble is expected to take advantage of the diversity of its components [4,7,3]; the empirical evidence has been deduced from experiments [13,6], and the use of diversity in designing ensembles has been intensively analysed [19,5,2,20]. However, despite all the work done to date, there is no agreed definition and formalization of diversity, only different representations of the same intuition [11,18]. It is acknowledged that diversity measures can be categorised into two types: pairwise, when the measure considers a pair of classifiers, and non-pairwise, when the diversity refers to the whole ensemble and its performance. Yule's Q statistic [21], the correlation coefficient and the disagreement measure, for instance, belong to
the first set, while the Kohavi–Wolpert measure [9] and Kuncheva's entropy [13] represent the second group.
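As a concrete illustration (not taken from the paper), the following Python sketch computes two of the pairwise measures mentioned above, Yule's Q statistic and the disagreement measure, from the oracle (correct/incorrect) outputs of two classifiers on a common validation set; the function and variable names are our own.

```python
import numpy as np

def pairwise_diversity(pred_i, pred_j, labels):
    """Yule's Q statistic and disagreement measure for two classifiers.

    pred_i, pred_j : arrays of predicted labels from classifiers i and j
    labels         : array of ground-truth labels
    """
    correct_i = (pred_i == labels)
    correct_j = (pred_j == labels)

    # Contingency counts: both correct, only i correct, only j correct, both wrong.
    n11 = np.sum(correct_i & correct_j)
    n10 = np.sum(correct_i & ~correct_j)
    n01 = np.sum(~correct_i & correct_j)
    n00 = np.sum(~correct_i & ~correct_j)

    # Yule's Q: close to +1 for similarly behaving classifiers, 0 for
    # independent ones, negative when they tend to err on different samples.
    q = (n11 * n00 - n01 * n10) / float(n11 * n00 + n01 * n10 + 1e-12)

    # Disagreement: fraction of samples on which exactly one of the two is correct.
    disagreement = (n01 + n10) / float(n11 + n10 + n01 + n00)

    return q, disagreement
```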
In this work we propose to exploit a criterion reminiscent of the Fisher linear discriminant as a pairwise diversity measure for the construction of an effective selection of online trained classifiers. This criterion is used to select a subset of classifiers, promoting those with independent errors; the resulting ensemble is then employed to track a moving object in a video sequence via classification, following the idea pioneered in [1]. Unlike most of the literature on classifiers, in our case the ensemble does not require any prior information about the data: it is initially built on-the-fly with random hypotheses, and then updated with significant information.
Starting with a minimal set of training examples, the pool of experts is updated with further patterns coming from the tracking phase: the target found at time (t − 1) is employed as a positive sample to update the ensemble parameters at time t. The training set is thus collected two observations at a time, one for the target and one for a negative sample. This mechanism offers the advantage of selecting fresh, constantly updated classifiers at each step, keeping the knowledge of the ensemble coherent with the object appearance and, at the same time, allowing the most appropriate classifiers for the current context to be selected. Compared with similar methods, such as the Online Boosting algorithm [15], which implicitly promotes diversity between classifiers by modifying their weights in the linear combination at each step, the proposed technique allows a dynamic replacement of the participating classifiers without affecting the overall performance.
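A minimal sketch of this tracking-by-classification loop is given below, purely to fix ideas; `select_diverse_subset`, `search_candidates`, `sample_negative` and the `.features`/`.predict`/`.update` interfaces are hypothetical placeholders of ours and do not reproduce the exact procedure of the paper.

```python
import numpy as np

def track(frame_stream, ensemble, select_diverse_subset,
          search_candidates, sample_negative, subset_size=10):
    """Tracking by classification: the target found at time t-1 becomes the
    positive sample used to update the online classifiers at time t."""
    target = None
    for frame in frame_stream:
        # 1. Select a small, diverse subset of the online classifiers and use
        #    it to score candidate windows around the previous target position.
        subset = select_diverse_subset(ensemble, subset_size)
        candidates = search_candidates(frame, target)
        scores = [np.mean([s.predict(c.features) for s in subset])
                  for c in candidates]
        target = candidates[int(np.argmax(scores))]

        # 2. Online update: one positive pattern (the new target) and one
        #    negative pattern (background) per frame.
        negative = sample_negative(frame, target)
        for s in ensemble:
            s.update(target.features, +1)
            s.update(negative.features, -1)
    return target
```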
Preliminary experiments conducted on an outdoor video sequence demonstrate that this expert fusion framework can be employed as a robust tracking system that copes with pose variations while following moving objects in a dynamic environment. Moreover, our selection strategy reduces the computational cost with respect to the Online Boosting approach, while keeping the accuracy comparable.

2 Proposed Solution

Given an ensemble S of R binary classifiers {s1, s2, . . . , sR} and a set of vector-valued samples X, so that sr : X → {+1, −1}, we define the average prediction of the ensemble at time t on a sample x as the average of the individual scores (mean rule)

$$s_t(x) = \frac{1}{R} \sum_{r=1}^{R} s_{r,t}(x) \tag{1}$$
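As a minimal illustration, the mean rule of (1) can be written as follows in NumPy; `classifiers` is assumed here to be a list of objects exposing a `predict` method that returns ±1, an interface of our own choosing rather than one defined in the paper.

```python
import numpy as np

def ensemble_score(classifiers, x):
    """Mean rule of Eq. (1): average of the individual scores s_{r,t}(x) in {-1, +1}."""
    return np.mean([s.predict(x) for s in classifiers])
```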

The outputs of this classifier on a training set of samples (xn, yn), with n = 1, . . . , N, can be divided according to the class yn ∈ Y = {−1, +1} of the sample xn ∈ X. At time t, these two separate sets model two distinct probability density functions P(st(x)|y) with priors P(y), means μ_t^y = E(st) and variances (σ_t^y)². For each class y ∈ Y, we can define the variance of the ensemble of (1) as

$$\operatorname{Var}_y(s_t) = E\left[\left(s_t(x) - \mu_t^y\right)^2\right] = E\left[\left(\frac{1}{R}\sum_{r=1}^{R}\left(s_{r,t}(x) - \mu_{r,t}^y\right)\right)^{2}\right]$$
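These class-conditional statistics can be estimated from labelled scores; the sketch below (our own illustrative code, reusing the hypothetical `predict` interface assumed above) computes μ_t^y and Var_y(s_t) for each class.

```python
import numpy as np

def class_conditional_stats(classifiers, samples, labels):
    """Estimate the per-class mean and variance of the ensemble score s_t(x).

    samples : iterable of feature vectors x_n
    labels  : array of class labels y_n in {-1, +1}
    """
    labels = np.asarray(labels)
    scores = np.array([np.mean([s.predict(x) for s in classifiers])
                       for x in samples])
    stats = {}
    for y in (-1, +1):
        class_scores = scores[labels == y]
        stats[y] = (class_scores.mean(), class_scores.var())  # (mu_t^y, Var_y(s_t))
    return stats
```

In the spirit of the Fisher linear discriminant, one would then favour classifiers whose two class means are well separated relative to these within-class variances.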
