A measurement $z_k^{(i)}$ is associated with the $j$th predicted GIW component if it satisfies

$$\left(z_k^{(i)} - m_{k+1|k}^{(j),d}\right)^{\mathrm{T}} \left(X_{k+1|k}^{(j)}\right)^{-1} \left(z_k^{(i)} - m_{k+1|k}^{(j),d}\right) \le \Delta_d(p) \quad (2)$$

and measurements associated with the same component are put into the same cell. Here, a $d$-dimensional extension estimate $X_{k+1|k}^{(j)}$ is computed as in [3]

$$X_{k+1|k}^{(j)} = \frac{V_{k|k-1}^{(j)}}{v_{k|k-1}^{(j)} - 2d - 2} \quad (3)$$

and $\Delta_d(p)$ is computed using the inverse cumulative $\chi^2$ distribution with $d$ degrees of freedom for probability $p = 0.99$. If a measurement falls into two or more extension estimates, it is only put into the cell formed by the component with the highest weight. If a measurement does not satisfy (2) for any GIW component, it is put into a cell containing only that measurement.
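As a concreteness check, the gating rule (2)-(3) can be sketched as follows; the function and variable names are assumptions, not the authors' implementation, and `scipy.stats.chi2.ppf` supplies the inverse cumulative χ² distribution:

```python
import numpy as np
from scipy.stats import chi2

def prediction_partition(Z, means, extents, weights, p=0.99):
    """Sketch of Prediction partition (hypothetical names).

    Z       : (n, d) array of measurements
    means   : predicted kinematic positions m^(j),d
    extents : predicted extension estimates X^(j) from (3), each d x d
    weights : predicted GIW component weights w^(j)
    Returns a list of cells, each a list of measurement indices.
    """
    d = Z.shape[1]
    gate = chi2.ppf(p, df=d)            # Delta_d(p): inverse cumulative chi^2
    cells = {j: [] for j in range(len(means))}
    singletons = []
    for i, z in enumerate(Z):
        best, best_w = None, -np.inf
        for j, (m, X, w) in enumerate(zip(means, extents, weights)):
            diff = z - m
            # Mahalanobis test of eq. (2) against the extension estimate
            if diff @ np.linalg.solve(X, diff) <= gate and w > best_w:
                best, best_w = j, w     # highest-weight component wins ties
        if best is None:
            singletons.append([i])      # cell containing only this measurement
        else:
            cells[best].append(i)
    return [c for c in cells.values() if c] + singletons
```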
For EM partition [11], a partition is also obtained from the predicted GIW components. For components $j$ with weight $w_{k+1|k}^{(j)} \ge 0.5$, the initial values of the Gaussian mixture parameters are set as means $\mu_l = m_{k+1|k}^{(j),d}$, covariances $\Sigma_l = X_{k+1|k}^{(j)}$ and mixing coefficients $\pi_l \propto \gamma_{k+1|k}^{(j)}$. The mixing coefficients $\pi_l$ are normalised to satisfy $\sum_l \pi_l = 1$ before the first E-step. The details of the EM algorithm for Gaussian mixtures can be found in, e.g., [14].
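The initialisation described above can be sketched as one EM iteration over a Gaussian mixture (a minimal numpy sketch; the names and the final hard-assignment step are assumptions):

```python
import numpy as np

def em_partition_init(Z, means, covs, gammas):
    """One E-step of EM for a Gaussian mixture, initialised from predicted
    GIW components as described above (a sketch with assumed names).
    gammas are the expected measurement rates gamma^(j) of the components."""
    K, (n, d) = len(means), Z.shape
    pi = np.asarray(gammas, float)
    pi = pi / pi.sum()                       # normalise so sum_l pi_l = 1
    # E-step: responsibilities r[i, l] proportional to pi_l N(z_i | mu_l, Sigma_l)
    r = np.empty((n, K))
    for l in range(K):
        diff = Z - means[l]
        inv = np.linalg.inv(covs[l])
        norm = 1.0 / np.sqrt((2*np.pi)**d * np.linalg.det(covs[l]))
        r[:, l] = pi[l] * norm * np.exp(
            -0.5*np.einsum('ij,jk,ik->i', diff, inv, diff))
    r /= r.sum(axis=1, keepdims=True)
    # Hard assignment of each measurement gives one cell per mixture component
    labels = r.argmax(axis=1)
    return labels, r
```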
Fig. 1 Separating tracks
www.ietdl.org
IET Signal Process., 2014, Vol. 8, Iss. 4, pp. 330-338
doi: 10.1049/iet-spr.2013.0150
© The Institution of Engineering and Technology 2014
Note that for a given set of initial predicted GIW components, the EM algorithm will converge to the closest local optimum of the likelihood function; that is, there is no guarantee that EM converges to the global optimum. This implies that EM partition can obtain a correct partition only if the predicted GIW components correctly represent the likelihood function; otherwise EM partition is likely to obtain a wrong partition. The same problem exists in Prediction partition. Thus, Prediction partition and EM partition are sensitive to the predicted GIW components. In particular, for extended targets of different sizes that are spatially close, if we start from wrong predicted GIW components, we cannot obtain the correct partition. This means that measurements from more than one measurement source will be included in the same cell in all partitions, and subsequently the ET-GIW-PHD filter will interpret measurements from multiple targets as having originated from just one target, leading to the cardinality underestimation problem. For the separating tracks of Fig. 1, since the correct predicted GIW components cannot be obtained at first, Prediction partition and EM partition do not work.
3 Robust Bayesian partition
3.1 Fuzzy ART
The fuzzy ART [13] (see Fig. 2) is a neural network architecture that includes an input field $F_0$ storing the current input vector, a choice field $F_2$ containing the active categories and a matching field $F_1$ receiving bottom-up input from $F_0$ and top-down input from $F_2$. All input patterns are complement-coded by the field $F_0$ in order to avoid category proliferation: each input $a$ is represented as a $2M$-dimensional vector $A = (a, a^c)$. The weight associated with each $F_2$ category node $j$ ($j = 1, \ldots, N$) is denoted as $w_j = (u_j, v_j^c)$, which subsumes both the bottom-up and top-down weight vectors of fuzzy ART. Initially, all weights are set to one and each category is said to be uncommitted; when a category is first coded it becomes committed. Each long-term memory trace $w_{ji}$ ($i = 1, \ldots, 2M$) is monotonically non-increasing with time and hence converges to a limit. Categorisation with the fuzzy ART is performed by category choice, resonance or reset, and learning. The details of the fuzzy ART model can be found in [13].
Category choice: The category choice function $T_j$ for each input $A$ is defined by

$$T_j = \frac{|A \wedge w_j|}{\alpha + |w_j|} \quad (4)$$

where $(p \wedge q)_i = \min(p_i, q_i)$ and $|p| = \sum_{i=1}^{M} |p_i|$. The winner node $J$ in $F_2$ is selected by

$$T_J = \max\left\{ T_j : j = 1, \ldots, N \right\} \quad (5)$$

In particular, nodes become committed in the order $j = 1, 2, 3, \ldots$
Resonance or reset: When category $J$ is chosen, hypothesis testing is performed in order to measure the degree to which $A$ is a fuzzy subset of $w_J$. Resonance occurs if the match function of the chosen category meets the vigilance criterion with vigilance parameter $\rho \in [0, 1]$

$$\frac{|A \wedge w_J|}{|A|} \ge \rho \quad (6)$$

Then the chosen category is said to win (match) and learning is performed. Otherwise, the value of $T_J$ is set to zero for the rest of this pattern presentation to prevent the persistent selection of the same category during search. A new category is then chosen by (5), and the search process continues until the chosen category satisfies (6).
Learning: Once search is finished, the weight vector $w_J$ is updated according to

$$w_J^{\text{(new)}} = \beta\left(A \wedge w_J^{\text{(old)}}\right) + (1 - \beta)\, w_J^{\text{(old)}} \quad (7)$$

where $\beta \in [0, 1]$ is the learning rate parameter; fast learning corresponds to $\beta = 1$. By (7), the fuzzy ART can learn new data without forgetting past data.
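Equations (4)-(7) can be collected into a minimal fuzzy ART sketch (assumed parameter names; the uncommitted-node case is handled by committing a new category whenever no existing one passes the vigilance test):

```python
import numpy as np

def fuzzy_art(inputs, rho=0.5, alpha=0.001, beta=1.0):
    """Minimal fuzzy ART sketch following (4)-(7). Inputs are assumed to be
    scaled to [0, 1]^M; fast learning corresponds to beta = 1."""
    W = []                                   # committed category weights
    labels = []
    for a in inputs:
        A = np.concatenate([a, 1.0 - a])     # complement coding, A = (a, a^c)
        T = [np.minimum(A, w).sum() / (alpha + w.sum()) for w in W]   # eq. (4)
        order = np.argsort(T)[::-1]          # search categories by choice value
        for J in order:
            # vigilance test, eq. (6)
            if np.minimum(A, W[J]).sum() / A.sum() >= rho:
                # learning, eq. (7)
                W[J] = beta*np.minimum(A, W[J]) + (1 - beta)*W[J]
                labels.append(int(J))
                break
        else:
            W.append(A.copy())               # commit a new category
            labels.append(len(W) - 1)
    return labels, W
```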
Compared with other clustering algorithms, the fuzzy ART has the distinct merit of rapid, stable learning. Moreover, it is a global clustering algorithm that is not constrained by initial values. Of course, k-means clustering [14], its improved version k-means++ [15] and EM clustering [14] can also be used to partition the measurement set. However, Granström et al. [11] illustrated that k-means clustering and k-means++ do not apply to extended targets of different sizes that are spatially close. Additionally, from the description in Section 2.2, EM clustering is limited by its initial values, so there is no guarantee that EM converges to the global optimum. Therefore, for the ET-GIW-PHD filter, choosing the fuzzy ART as the architecture of the partitioning algorithm has a potential advantage.
3.2 Bayesian partition
In order to solve the cardinality underestimation problem, a
novel robust clustering algorithm, Bayesian partition, is
proposed in this section. It is based on the fuzzy ART [13]
and Bayesian theorem. In Bayesian partition, we adopt the
neural network architecture (including category choice,
resonance or reset, and learning) that is similar to the fuzzy
ART to achieve partitioning the measurement set. However,
in Bayesian partition, we use a Bayesian posterior
probability as category choice function of category choice
stage, which can ensure that the input measurement is put
into the correct category. Therefore we call the proposed
algorithm as Bayesian partition. Similar to the fuzzy ART,
Bayesian partition is a neural network architecture that
includes an input eld F
0
storing the current input vector, a
choice eld F
2
containing the active categories and a
matching eld F
1
receiving bottom-up input from F
0
and
top-down input from F
2
. The mean vector and covariance
matrix associated with each F
2
category nodes j( j = 1,, Fig. 2 Fuzzy ART architecture
www.ietdl.org
332
& The Institution of Engineering and Technology 2014
IET Signal Process., 2014, Vol. 8, Iss. 4, pp. 330338
doi: 10.1049/iet-spr.2013.0150
N
cat
) is denoted as
j
and
j
, respectively, where N
cat
is the
number of categories. Initially, all mean vectors are set to
the rst measurement vector that chooses this category, all
covariance matrices are set to be so large enough that
satises the following vigilance criterion, and each category
is said to be uncommitted. When a category is rst learned
it becomes committed. In other words, if the initial
covariance matrix of a category is learned, this category
becomes committed (namely, learned). Otherwise, it is
uncommitted. Bayesian partition is performed by category
choice, resonance or reset (vigilance test) and learning for
every measurement vector.
Category choice: In this stage, all existing categories compete to represent an input measurement. The choice function is the Bayesian posterior probability of the $j$th category given the input measurement vector $z$, represented as

$$P(c_j \mid z) = \frac{p(z \mid c_j)\, P(c_j)}{\sum_{l=1}^{N_{\text{cat}}} p(z \mid c_l)\, P(c_l)} \quad (8)$$
where $c_j$ is the $j$th category, $P(c_j)$ is the estimated prior probability of the $j$th category, and $p(z \mid c_j)$ is a Gaussian likelihood function used to measure the similarity between $z$ and $c_j$, defined as

$$p(z \mid c_j) = \frac{1}{(2\pi)^{d/2} |\Sigma_j|^{1/2}} \exp\left(-\frac{1}{2}\left(z - \mu_j\right)^{\mathrm{T}} \Sigma_j^{-1} \left(z - \mu_j\right)\right) \quad (9)$$
where $\mu_j$ and $\Sigma_j$ are the mean vector and covariance matrix of the $j$th category. Note that if $z$ is the first input vector of all measurement vectors, $N_{\text{cat}} = 1$; if $z$ is the first input vector of one category, $P(c_j) = 1$. The winning category $J$ is selected by

$$J = \arg\max_j \left\{ P(c_j \mid z) \right\} \quad (10)$$
Resonance or reset (vigilance test): The purpose of this stage is to determine whether the input measurement $z$ matches the shape of the chosen category $J$'s distribution, which is characterised by a match function. In this paper, we use the normalised similarity between $z$ and $c_J$ to define the match function, namely

$$M(z, J) = (2\pi)^{d/2}\, |\Sigma_J|^{1/2}\, p(z \mid c_J) \quad (11)$$

Resonance occurs if the match function of $c_J$ meets the vigilance criterion

$$M(z, J) \ge \rho \quad (12)$$

where $\rho \in [0, 1]$ is a vigilance parameter. Learning then ensues, as defined below. Mismatch reset occurs if

$$M(z, J) < \rho \quad (13)$$

Category $J$ is then removed from the competition for the measurement $z$, and Bayesian partition searches for another category via (8) and (10) until finding one that satisfies (12). If all existing categories fail the vigilance test, a new category characterised by mean vector $z$ and an initial covariance matrix $\Sigma_{\text{ini}}$ is formed, and

$$N_{\text{cat}} = N_{\text{cat}} + 1 \quad (14)$$

Note that the initial distribution shape represented by $\Sigma_{\text{ini}}$ must be large enough that (12) is met.
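Substituting (9) into (11) shows that the match function reduces to $\exp(-\frac{1}{2} d_M^2)$, where $d_M$ is the Mahalanobis distance of $z$ from $\mu_J$; it therefore always lies in $(0, 1]$, which is why $\rho \in [0, 1]$ is a sensible vigilance range. A small sketch verifying this identity:

```python
import numpy as np

# The match function (11) normalises the Gaussian likelihood (9); substituting
# (9) into (11) gives M(z, J) = exp(-0.5 * mahalanobis^2), which lies in
# (0, 1], so the vigilance rho in (12) effectively gates a Mahalanobis distance.
def match(z, mu, Sigma):
    d = len(z)
    diff = z - mu
    m2 = diff @ np.linalg.solve(Sigma, diff)          # squared Mahalanobis
    p = np.exp(-0.5*m2) / np.sqrt((2*np.pi)**d * np.linalg.det(Sigma))  # eq. (9)
    M = (2*np.pi)**(d/2) * np.sqrt(np.linalg.det(Sigma)) * p            # eq. (11)
    assert np.isclose(M, np.exp(-0.5*m2))             # the claimed identity
    return M
```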
Learning: When the chosen category satisfies the vigilance criterion (12), the category parameters are updated. Learning in Bayesian partition involves adjusting the centre (i.e., mean vector), distribution shape (i.e., covariance matrix), count and estimated prior probability of the winning category. In order to retain the fuzzy ART merit of learning new data without forgetting past data, we adopt the following measures. Firstly, the centre of the chosen category $J$ is updated by weighting the current measurement $z$ and the mean vector $\mu_J$, where the weights reflect the respective contributions of $z$ and $\mu_J$ to the category centre. Let $N_J$ be the number of measurements that have been clustered by the $J$th category. Taking the current measurement $z$ into account, the share contributed by the original mean vector $\mu_J$ should be $N_J/(N_J + 1)$; similarly, the share contributed by $z$ is $1/(N_J + 1)$. Thus, the centre of the chosen category $J$ is updated by

$$\mu_J = \frac{N_J}{N_J + 1}\, \mu_J + \frac{1}{N_J + 1}\, z \quad (15)$$
Secondly, and analogously, the distribution shape of the chosen category is updated by weighting the original covariance matrix $\Sigma_J$ and the temporary covariance matrix derived from $z$ and the updated $\mu_J$. From probability theory, this temporary covariance matrix is simply computed as $(z - \mu_J)(z - \mu_J)^{\mathrm{T}}$. Therefore the distribution shape of the chosen category $J$ is updated using

$$\Sigma_J = \frac{N_J}{N_J + 1}\, \Sigma_J + \frac{1}{N_J + 1}\, \left(z - \mu_J\right)\left(z - \mu_J\right)^{\mathrm{T}} \quad (16)$$

Finally, $N_J$ and $P(c_J)$ are updated using

$$N_J = N_J + 1 \quad (17)$$

$$P(c_J) = \frac{N_J}{\sum_{j=1}^{N_{\text{cat}}} N_j} \quad (18)$$
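The learning updates (15)-(18) amount to running-mean and running-covariance recursions; a minimal sketch (class and function names are assumptions):

```python
import numpy as np

class Category:
    """Running (mean, covariance, count) update of the winning category,
    following (15)-(18); a sketch with assumed names."""
    def __init__(self, z, Sigma_ini):
        self.mu = np.array(z, float)
        self.Sigma = np.array(Sigma_ini, float)
        self.N = 1

    def learn(self, z):
        w = self.N / (self.N + 1)
        self.mu = w*self.mu + (1 - w)*z                # eq. (15)
        diff = z - self.mu                             # uses the updated mu_J
        self.Sigma = w*self.Sigma + (1 - w)*np.outer(diff, diff)   # eq. (16)
        self.N += 1                                    # eq. (17)

def priors(categories):
    """Estimated prior of each category, eq. (18)."""
    Ns = np.array([c.N for c in categories], float)
    return Ns / Ns.sum()
```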
Note that (17) and (18) are commonly used in clustering. Bayesian partition is also illustrated in Fig. 3. When Bayesian partition is applied to ETT, one partition is generated per preset vigilance parameter, and a partition contains a number of categories, that is, cells. In the next section, we give the detailed process of generating partitions.
3.3 Generating alternative partitions

For Bayesian partition, given a vigilance parameter, a partition is obtained by iterating over all current measurement vectors. Therefore $N_V$ alternative partitions of the measurement set can be generated by selecting $N_V$ different vigilance parameters, that is

$$\left\{ \rho_l \right\}_{l=1}^{N_V}, \quad \rho_{l+1} = \rho_l + \Delta, \quad l = 1, \ldots, N_V - 1 \quad (19)$$

where $\rho_l \in [0, 1]$ and $\Delta$ is a step length, which is an empirical value. A larger step length generates fewer alternative partitions with less computation time, but sacrifices performance. The alternative partitions contain more categories as $\rho_l$ increases, and the categories typically contain fewer measurements.
If one uses all vigilance parameters satisfying $\rho_l \in [0, 1]$ to form alternative partitions, $N_V = \text{round}(1/\Delta) + 1$ partitions are obtained. Note that adjacent vigilance parameters may generate the same partition; some partitions might therefore be identical and must be discarded so that each partition is unique. Finally, we can obtain $\tilde{N}_V$ different alternative partitions by deleting the identical ones, where $\tilde{N}_V \le N_V$. If $\Delta$ is small, $N_V$ is quite large. In order to reduce the computational burden, we only choose the vigilance parameters in a confidence region, defined by the vigilance thresholds $\delta_V^L$ and $\delta_V^U$ for the lower and upper vigilance parameters. Thus, only the vigilance parameters satisfying

$$\delta_V^L \le \rho_l \le \delta_V^U \quad (20)$$

are used to form the alternative partitions. Similar to $\Delta$, $\delta_V^L$ and $\delta_V^U$ are also empirical values. Note that the subscript $V$ in $N_V$, $\tilde{N}_V$, $\delta_V^L$ and $\delta_V^U$ denotes 'vigilance'.
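The sweep (19)-(20) with duplicate removal can be sketched as follows; `partition_fn` stands in for one pass of Bayesian partition at a fixed vigilance and is an assumed interface:

```python
import numpy as np

def alternative_partitions(Z, partition_fn, lo=0.05, hi=0.2, step=0.05):
    """Generate the alternative partitions of (19)-(20): run the partitioning
    routine once per vigilance value in [lo, hi] and discard duplicates.
    `partition_fn(Z, rho)` is assumed to return a list of cells (index lists)."""
    partitions, seen = [], set()
    for rho in np.arange(lo, hi + 1e-9, step):     # rho_l = lo, lo+step, ...
        cells = partition_fn(Z, rho)
        key = frozenset(frozenset(c) for c in cells)   # order-independent key
        if key not in seen:                        # delete identical partitions
            seen.add(key)
            partitions.append(cells)
    return partitions
```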
4 Simulation results

In this section, we illustrate our robust Bayesian partition on four different ETT scenarios: crossing tracks, parallel tracks, separating tracks and turning tracks. To demonstrate the tracking performance of the proposed method, the joint partitioning strategy presented in [11], which combines Distance partition with sub-partition and Prediction partition, is used as a comparison method. For notational clarity, we call this partitioning strategy Joint partition. Note that in order to track extended targets of different sizes that are spatially close, only Prediction partition is adopted; the reason is that of these strategies only Prediction partition and EM partition can handle this type of true target scenario [11].
The true target extensions, the model for the expected number of measurements, the motion model parameters and the scenario parameters are very similar to those presented in [11]. The true tracks for the four test cases are shown in Figs. 1 and 4. The true target extensions are defined by

$$X_k^{(i)} = R_k^{(i)}\, \text{diag}\left(\left[A_i^2 \; a_i^2\right]\right) \left(R_k^{(i)}\right)^{\mathrm{T}} \quad (21)$$
where $R_k^{(i)}$ is a rotation matrix that aligns the $i$th extension's major axis with the $i$th target's direction of motion at time step $k$, and $A_i$ and $a_i$ are the lengths of the major and minor axes, respectively. In the separating tracks, parallel tracks and turning tracks scenarios, the major and minor axes are set to $(A_1, a_1) = (25, 6.5)$ and $(A_2, a_2) = (15, 4)$ for the two targets, respectively. In the crossing tracks scenario, $(A_1, a_1) = (25, 6.5)$, $(A_2, a_2) = (15, 4)$ and $(A_3, a_3) = (10, 2.5)$ for the three targets, respectively. Suppose that
the expected number of measurements generated by the targets is a function of the extended target volume $V_k^{(i)} = \pi\sqrt{\det X_k^{(i)}} = \pi A_i a_i$. We here adopt the following simple model for the expected number of measurements generated by the extended targets:

$$\gamma_k^{(i)} = \left\lfloor \sqrt{\frac{4}{\pi}\, V_k^{(i)}} + 0.5 \right\rfloor = \left\lfloor 2\sqrt{A_i a_i} + 0.5 \right\rfloor \quad (22)$$

where $\lfloor \cdot \rfloor$ is the floor function. This model is equivalent to assuming a uniform expected number of measurements per square root of surveillance area. The target dynamic motion
model is represented as [3]

$$x_{k+1}^{(i)} = \left(F_{k+1|k} \otimes I_d\right) x_k^{(i)} + w_{k+1}^{(i)} \quad (23)$$

where $w_{k+1}^{(i)}$ is zero-mean Gaussian process noise with covariance $Q_{k+1|k} \otimes X_{k+1}^{(i)}$, $d$ is the dimension of the target extent, $X_k^{(i)}$ is a $d \times d$ symmetric positive-definite matrix, $I_d$ is a $d \times d$ identity matrix, $A \otimes B$ is the Kronecker product, and $F_{k+1|k}$ and $Q_{k+1|k}$ are defined by Koch [3]

$$F_{k+1|k} = \begin{bmatrix} 1 & T_s & 0.5\,T_s^2 \\ 0 & 1 & T_s \\ 0 & 0 & e^{-T_s/\theta} \end{bmatrix} \quad (24)$$

$$Q_{k+1|k} = \Sigma^2 \left(1 - e^{-2T_s/\theta}\right) \text{diag}([0\; 0\; 1]) \quad (25)$$

where $T_s$ is the sampling time, $\Sigma$ is the scalar acceleration standard deviation and $\theta$ is the manoeuvre correlation time.
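Under the parameter values used later ($T_s = \theta = 1$ s), the matrices of (24)-(25) can be built directly; a sketch:

```python
import numpy as np

def motion_matrices(Ts=1.0, theta=1.0, Sigma=0.1):
    """Build F_{k+1|k} and Q_{k+1|k} of (24)-(25); defaults follow the
    simulation parameters quoted below (a sketch)."""
    F = np.array([[1.0, Ts, 0.5*Ts**2],
                  [0.0, 1.0, Ts],
                  [0.0, 0.0, np.exp(-Ts/theta)]])
    Q = Sigma**2 * (1.0 - np.exp(-2.0*Ts/theta)) * np.diag([0.0, 0.0, 1.0])
    # The full d-dimensional model (23) uses the Kronecker products
    # F ⊗ I_d and Q ⊗ X, e.g. np.kron(F, np.eye(d)).
    return F, Q
```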
The measurement model is represented as [3]

$$z_k^{(j)} = \left(H_k \otimes I_d\right) x_k^{(j)} + e_k^{(j)} \quad (26)$$

where $e_k^{(j)}$ is white Gaussian noise with covariance $X_k^{(j)}$, and $H_k = [1\; 0\; 0]$. Each target generates a Poisson-distributed number of measurements, where the Poisson rate $\gamma_k^{(j)}$ is defined by (22). In the four simulation scenarios, the model parameters are set to $T_s = 1\,\text{s}$, $\theta = 1\,\text{s}$, $\Sigma = 0.1\,\text{m/s}^2$ and $\tau = 5\,\text{s}$. Here, the parameters of the $J_{b,k} = 2$ and $3$ birth extended targets are set to $w_{b,k}^{(j)} = 0.1$, $m_{b,k}^{(j)} = \left[\left(x_0^{(j)}\right)^{\mathrm{T}} \; 0_4^{\mathrm{T}}\right]^{\mathrm{T}}$, $P_{b,k}^{(j)} = \text{diag}\left(\left[100^2 \; 25^2 \; 25^2\right]\right)$, $v_{b,k}^{(j)} = 7$ and $V_{b,k}^{(j)} = \text{diag}([1\; 1])$.
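For simulation purposes, one scan of target-generated measurements under (22) and (26) can be sketched as follows (names are assumptions; clutter and the kinematic state update are omitted):

```python
import numpy as np

def simulate_measurements(pos, X, A, a, rng=None):
    """Draw one scan of measurements from one extended target: a Poisson
    number of detections with rate gamma from (22), each scattered around the
    target position with covariance X as in (26). A sketch; `pos` is the
    target position and X its d x d extent matrix."""
    rng = rng or np.random.default_rng()
    gamma = np.floor(2.0*np.sqrt(A*a) + 0.5)        # eq. (22)
    n = rng.poisson(gamma)                          # Poisson-distributed count
    return pos + rng.multivariate_normal(np.zeros(len(pos)), X, size=n)
```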
In Fig. 1, the two extended targets are born at 1 s and die at 100 s. Similarly, in Figs. 4b and c, the two extended targets are also born at 1 s and die at 100 s, whereas in Fig. 4a extended targets 1 and 2 are born at 1 s and die at 100 s, and extended target 3 is born at 20 s and dies at 92 s. A total of 500 Monte Carlo simulations are performed for the four tracking scenarios (see Figs. 1 and 4), with a clutter rate of ten clutter measurements per scan, and clutter and target measurements generated independently. The probabilities of survival and detection are set to 0.99 and 0.98, respectively.

Fig. 3 Bayesian partition

According to the empirical
simulations with good target tracking results, the initial covariance matrix is set to $\Sigma_{\text{ini}} = \text{diag}([2000\; 2000])$, the vigilance thresholds are set to $\delta_V^L = 0.05$ and $\delta_V^U = 0.2$, and the step length is set to $\Delta = 0.05$. Therefore the alternative partitions approximating all possible partitions are generated by setting the vigilance parameters to 0.05, 0.1, 0.15 and 0.2, respectively. In Bayesian partition, the larger the vigilance parameter, the smaller the ellipse areas of the formed categories, but the greater the number of ellipses. Through a few simulations, we find that extended targets of different sizes that are spatially close cannot be split if $\delta_V^L < 0.05$, and the cardinality overestimation problem will occur if $\delta_V^U > 0.2$. In Joint partition, for Prediction partition $\Delta_d(p)$ in (2) is computed using the inverse cumulative $\chi^2$ distribution with $d = 2$ degrees of freedom for probability $p = 0.99$ [11], and for Distance partition with sub-partition the distance thresholds $d_l$ satisfy the condition $d_{P_L} < d_l < d_{P_U}$ for lower and upper probabilities $P_L = 0.3$ and $P_U = 0.8$, where the definitions of $d_{P_L}$ and $d_{P_U}$ are given in [2].

Fig. 4 True target tracks used in simulations
a Crossing tracks
b Parallel tracks
c Turning tracks

Fig. 5 Results of separating tracks
a Average cardinality estimates
b Average OSPA distances
c Average computation time for Joint partition and Bayesian partition
The results are shown in Figs. 5-8, which show the corresponding average cardinality estimates, optimal subpattern assignment (OSPA) distances [16] and computation times, respectively. Compared with previous miss-distances used in multi-target tracking, the OSPA distance overcomes limitations such as inconsistent behaviour and the lack of a meaningful physical interpretation when the cardinalities of the two finite sets under consideration differ. It allows a natural physical interpretation even if the two sets' cardinalities are not the same, and does not exhibit the arbitrariness inherent in ad hoc assignment approaches [16]. It has two adjustable parameters, $p \in [1, \infty]$ and $c > 0$, with meaningful interpretations: outlier sensitivity and cardinality penalty, respectively. In this paper, the parameters of the OSPA metric are set to $p = 2$ and $c = 60$.

Fig. 6 Results of crossing tracks
a Average cardinality estimates
b Average OSPA distances
c Average computation time for Joint partition and Bayesian partition

Fig. 7 Results of parallel tracks
a Average cardinality estimates
b Average OSPA distances
c Average computation time for Joint partition and Bayesian partition
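A compact OSPA implementation in the spirit of [16], using an optimal assignment for the localisation term (a sketch; not the authors' code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, p=2, c=60.0):
    """OSPA distance of [16] between two finite sets of vectors.
    X, Y: (m, d) and (n, d) arrays of point estimates; p is the order and c
    the cut-off (cardinality penalty) parameter."""
    m, n = len(X), len(Y)
    if m > n:                       # ensure m <= n; the metric is symmetric
        X, Y, m, n = Y, X, n, m
    if n == 0:
        return 0.0                  # both sets empty
    if m == 0:
        return float(c)             # one set empty: pure cardinality penalty
    # pairwise distances, cut off at c
    D = np.minimum(np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2), c)
    rows, cols = linear_sum_assignment(D**p)       # optimal subpattern match
    cost = (D[rows, cols]**p).sum() + c**p * (n - m)   # + cardinality penalty
    return (cost / n) ** (1.0/p)
```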
As seen from Fig. 5a, for the separating tracks scenario, the average cardinality estimated by Joint partition is far smaller than the true value while the two differently sized, spatially close extended targets move in parallel, which is caused by the cardinality underestimation problem discussed in Section 2. In comparison, the cardinality estimate of Bayesian partition is closer to the true value. This is because Bayesian partition takes the distribution shape information into account: with each measurement input, the true distribution shape of the category is iteratively refined by the update (16). This is verified by Fig. 5b, which shows that the average OSPA distance of Bayesian partition is far smaller than that of Joint partition while the targets move in parallel. However, Bayesian partition requires slightly more computation time, as shown in Fig. 5c, which is acceptable for a target tracking system: the ET-GIW-PHD filter using Bayesian partition requires 7.1798 s on average for one MC run, while the ET-GIW-PHD filter using Joint partition requires 6.1872 s. Note that the computational requirements of both algorithms are compared via CPU processing time; the simulations are implemented in MATLAB on an Intel Core i5-2400 3.10 GHz processor with 4 GB RAM. As seen from Figs. 7 and 8, for the parallel tracks and turning tracks, Bayesian partition also obtains good tracking performance, at the cost of slightly more computation time. For the crossing tracks, however, the performance of the two partitioning algorithms is comparable. This is because both Bayesian partition and Joint partition can split spatially well-separated extended targets, so for separated extended targets both partitioning algorithms achieve good tracking performance. Note that both Bayesian partition and Joint partition are sensitive to manoeuvres that are modelled poorly by the motion model, but the results of Bayesian partition are better than those of Joint partition, as shown in Fig. 8.
5 Conclusions

In order to solve the cardinality underestimation problem caused by separating tracks, a novel robust clustering algorithm based on the Bayesian theorem, called Bayesian partition, is proposed in this paper. The partitioning method generates alternative partitions of the measurement set via different vigilance parameters. The simulation results show that the proposed algorithm effectively solves the cardinality underestimation problem and has better tracking performance than Joint partition for two or more differently sized, spatially close extended targets. For well-separated extended targets, however, the tracking performance of the proposed algorithm and Joint partition is comparable.
6 Acknowledgments

This work was supported by the National Natural Science Foundation of China (grant no. 61372003). The authors acknowledge the anonymous reviewers and associate editor for their valuable suggestions and comments.

Fig. 8 Results of turning tracks
a Average cardinality estimates
b Average OSPA distances
c Average computation time for Joint partition and Bayesian partition
7 References

1 Gilholm, K., Godsill, S., Maskell, S., Salmond, D.: 'Poisson models for extended target and group tracking'. Proc. SPIE Signal Data Process. Small Targets, San Diego, CA, August 2005, vol. 5913, pp. 230-241
2 Granström, K., Lundquist, C., Orguner, U.: 'Extended target tracking using a Gaussian-mixture PHD filter', IEEE Trans. Aerosp. Electron. Syst., 2012, 48, (4), pp. 3268-3286
3 Koch, J.W.: 'Bayesian approach to extended object and cluster tracking using random matrices', IEEE Trans. Aerosp. Electron. Syst., 2008, 44, (3), pp. 1042-1059
4 Angelova, D., Mihaylova, L.: 'Extended object tracking using Monte Carlo methods', IEEE Trans. Signal Process., 2008, 56, (2), pp. 825-832
5 Feldmann, M., Franken, D.: 'Tracking of extended objects and group targets using random matrices: a new approach'. Proc. Int. Conf. Information Fusion, Cologne, Germany, July 2008, pp. 1-8
6 Mahler, R.: 'PHD filters for nonstandard targets, I: extended targets'. Proc. Int. Conf. Information Fusion, Seattle, WA, July 2009, pp. 915-921
7 Granström, K., Lundquist, C., Orguner, U.: 'A Gaussian mixture PHD filter for extended target tracking'. Proc. Int. Conf. Information Fusion, Edinburgh, Scotland, July 2010, pp. 1-8
8 Orguner, U., Lundquist, C., Granström, K.: 'Extended target tracking with a cardinalized probability hypothesis density filter'. Proc. Int. Conf. Information Fusion, Chicago, Illinois, USA, 5-8 July 2011, pp. 1-8
9 Lundquist, C., Granström, K., Orguner, U.: 'An extended target CPHD filter and a gamma Gaussian inverse Wishart implementation', IEEE J. Sel. Topics Signal Process., 2013, 7, (3), pp. 472-483
10 Lian, F., Han, C., Liu, W., Liu, J., Sun, J.: 'Unified cardinalized probability hypothesis density filters for extended targets and unresolved targets', Signal Process., 2012, 92, (7), pp. 1729-1744
11 Granström, K., Orguner, U.: 'A PHD filter for tracking multiple extended targets using random matrices', IEEE Trans. Signal Process., 2012, 60, (11), pp. 5657-5671
12 Zhang, Y.Q., Ji, H.B.: 'A novel fast partitioning algorithm for extended target tracking using a Gaussian mixture PHD filter', Signal Process., 2013, 93, (11), pp. 2975-2985
13 Carpenter, G.A., Grossberg, S., Rosen, D.B.: 'Fuzzy ART: fast stable learning and categorization of analog patterns by an adaptive resonance system', Neural Netw., 1991, 4, (1), pp. 759-771
14 Bishop, C.M.: 'Pattern recognition and machine learning' (Springer, New York, 2006)
15 Arthur, D., Vassilvitskii, S.: 'k-means++: the advantages of careful seeding'. Proc. ACM-SIAM Symp. Discrete Algorithms, Philadelphia, PA, USA, January 2007, pp. 1027-1035
16 Schuhmacher, D., Vo, B.T., Vo, B.N.: 'A consistent metric for performance evaluation of multi-object filters', IEEE Trans. Signal Process., 2008, 56, (8), pp. 3447-3457