Abstract—We present a method that is suitable for clustering of vehicle trajectories obtained by an automated vision system. We combine ideas from two spectral clustering methods and propose a trajectory-similarity measure based on the Hausdorff distance, with modifications to improve its robustness and account for the fact that trajectories are ordered collections of points. We compare the proposed method with two well-known trajectory-clustering methods on a few real-world data sets.

Index Terms—Clustering of trajectories, time-series similarity measures, unsupervised learning.

Manuscript received April 13, 2008; revised April 11, 2008, June 13, 2009, December 26, 2009, and March 13, 2010; accepted March 16, 2010. Date of publication May 10, 2010; date of current version September 3, 2010. This work was supported in part by the U.S. Army Research Laboratory and the U.S. Army Research Office under Contract 911NF-08-1-0463 (Proposal 55111-CI), by the National Science Foundation under Grant IIS-0219863, Grant IIP-0443945, Grant IIP-0726109, Grant CNS-0708344, and Grant IIP-0934327, by the Minnesota Department of Transportation, and by the ITS Institute at the University of Minnesota. The Associate Editor for this paper was Q. Ji.

S. Atev is with the Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN 55455 USA, and also with Vital Images, Inc., Minnetonka, MN 55343-4411 USA (e-mail: atev@cs.umn.edu).

G. Miller was with the Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN 55455 USA. He is now with Concrete Software Inc., Eden Prairie, MN 55344-7707 USA (e-mail: gmiller@cs.umn.edu).

N. P. Papanikolopoulos is with the Department of Computer Science and Engineering and the Security in Transportation Technology Research and Applications, Center for Transportation Studies, University of Minnesota, Minneapolis, MN 55455 USA (e-mail: npapas@cs.umn.edu).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TITS.2010.2048101

I. INTRODUCTION

THE trajectories of vehicles through a traffic scene are an information-rich feature that reveals the structure of the scene, provides clues to the events taking place, and allows inference about the interactions between traffic objects. Several vision algorithms are designed to collect trajectory data, either as their primary output or as a by-product of their operation.

The focus of this paper is to present a method for unsupervised clustering of vehicle trajectories with widely varying lengths. We augment the preliminary results presented in [1] with a comparison of the proposed method with two well-established approaches for trajectory clustering on several sets of trajectories for which manual ground truth was obtained. The intended application for the method is as an extension to the traffic data-collection system in [2], but automatic clustering also has applications in collision prediction, as in [3].

Researchers in data mining and management are chiefly concerned with deriving similarity measures on trajectories. The sophistication of these measures is necessitated by the use of relatively simple clustering and indexing schemes, e.g., agglomerative clustering or k-means. One example of this approach is discussed in [4], which uses the longest common subsequence (LCSS) measure to cluster vehicle trajectories. The edit distance on real sequences in [5] and [6] is closely related to LCSS. The need for compact representation and fast retrieval of trajectory data has resulted in dimensionality reduction approaches, e.g., the segmented principal component analysis (PCA) in [7]. Another popular distance measure for trajectories and general time series is dynamic time warping (DTW), which is exemplified in [8], where DTW is applied to a rotation-invariant representation of trajectories. In an earlier work, [9] argued in favor of LCSS over DTW, a view shared by [10]. In both cases, LCSS only significantly outperforms DTW on data with large amounts of pointwise outliers that are ignored by the distance threshold of LCSS; the method proposed in this paper is similarly robust to the presence of outlying points, but that aspect turns out to be unimportant for our experiments, where DTW clearly outperforms LCSS.

This paper is similar to [11], which compares the unmodified Hausdorff distance, LCSS, and DTW on a task that is very similar to the task considered here. In addition, the authors test a plain Euclidean distance and a Euclidean distance after linear dimensionality reduction with PCA. The experiments with the last two distances require resampling of trajectories, which loses information (in the particular case, speed information and spatial fidelity). We deliberately do not attempt to compare methods that require a fixed-dimensional trajectory representation, because they require a commitment to a specific trajectory model or features. Our only assumption is that the spatial aspects of the problem are more important than the dynamic aspects, i.e., a vehicle's path through the scene is more important than its speed and acceleration. This assumption is also implicitly made in DTW and LCSS, and we will not consider the modifications necessary to remove it from these methods. Another work that compares an unmodified Hausdorff variant with LCSS is [12], which finds the measures to perform similarly when outliers are accounted for.

The first objective of this paper is to compare DTW, LCSS, and a modified Hausdorff (MH) distance when applied to the task of clustering trajectories of vehicles at traffic scenes. Our focus is not on indexing and retrieval but, rather, on unsupervised learning. The second purpose of this paper is to investigate the performance of spectral clustering methods for the problem domain. As our results show, there is no undue computational burden when working with moderately sized data sets. The clustering method that we use is a modification of the method in [13], which uses a symmetrically normalized graph Laplacian as the affinity matrix. Rather than using a fixed global scale, as in [13], we have adopted the scale-selection strategy suggested in [14], where local scaling was
The LCSS recursion, where a′ and b′ denote the series a and b with their last points removed:

$$
\mathrm{LCSS}_{\epsilon,\delta}(a, b) =
\begin{cases}
0, & n = 0 \lor m = 0\\
1 + \mathrm{LCSS}_{\epsilon,\delta}(a', b'), & |a_n - b_m| < \epsilon \land |n - m| \le \delta\\
\max\left\{\mathrm{LCSS}_{\epsilon,\delta}(a', b),\ \mathrm{LCSS}_{\epsilon,\delta}(a, b')\right\}, & \text{otherwise}
\end{cases}
\tag{1}
$$

Fig. 2. Trajectories A and B are close together when treated as unordered sets, as each point in A is close to one in B. The proposed modification increases the reported distance by a factor of 4.6.
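The recursion in (1) is usually evaluated bottom-up with dynamic programming rather than by literal recursion. The following is a minimal 1-D sketch for illustration (not the authors' implementation; the table layout and names are ours):

```python
def lcss(a, b, eps, delta):
    """Count matched points between 1-D series a and b: samples a[i-1] and
    b[j-1] match when their difference is below eps and |i - j| <= delta."""
    n, m = len(a), len(b)
    # L[i][j] = LCSS of the first i points of a and the first j points of b
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(a[i - 1] - b[j - 1]) < eps and abs(i - j) <= delta:
                L[i][j] = 1 + L[i - 1][j - 1]
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[n][m]
```

The O(mn) table mirrors the three cases of (1) directly; the integer result is what makes the LCSS distance unitless and coarsely quantized, as discussed below.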
The splitting of the squared distance into the product of d(p, q) and d(q, p) suggests that there is no need for d to be symmetric (the kernel will be symmetric regardless), which means that we can directly use the directed Hausdorff distance h_{α,w}. Intuitively, d(p, q)/σ(p) is the distance from p to q, measured on a scale appropriate for the neighborhood of p, and d(q, p)/σ(q) is the distance from q to p, measured on a scale appropriate for q's neighborhood. If the underlying distance is symmetric, k_D is equivalent to k_L.

We choose the scale σ(p) based on the distance of p to its ninth nearest neighbor, using similar reasoning as in [14], i.e., the distance to a close neighbor is representative of the sample density in the region. Although the neighbor index should theoretically increase with the number of points, that condition would prevent clustering in data sets with large sample imbalances; therefore, a small fixed constant is preferable. This scaling approach is insensitive to the dimensionality of the data, because the increasing distances between perturbed points in high dimensions also affect the nearest neighbors. The algorithm produces practically identical results with the fifth and seventh nearest neighbors; thus, we kept a value consistent with our earlier work. In practice, the algorithm is much less sensitive to the choice of nearest neighbor than to the choice of a global scale [14]. If the value of σ falls outside the range of 0.4 to 2 m, we clip it to the appropriate boundary. With that definition, we can write the final similarity measure that we use as

$$
k(p, q) = \exp\left(-\frac{h_{\alpha,w}(p, q)\, h_{\alpha,w}(q, p)}{2\,\sigma(p)\,\sigma(q)}\right)
\tag{20}
$$

which is used whenever we need the MH distance in a spectral clustering framework.

At this point, we summarize the differences and similarities between the proposed distance and the LCSS and DTW methods. The actual distances computed by both DTW and the MH distance directly depend on an underlying metric and have the same units as the point distance itself; in contrast, LCSS always returns a unitless distance that is based on a count. The 1-D LCSS distance between two series of length N can take on only N + 1 distinct values, because it ultimately depends on an integer count that can range between 0 and N. The length normalization of the DTW distance [the factor of 1/n in (3)] is somewhat ambiguous when two sequences of widely differing lengths are compared, because the normalization factor depends on the number of points of only one of the series, whereas the MH distance is based on a robust norm that needs no normalization by the number of points at all. The parameterization of LCSS used to control the "slack" in the matching is time based (i.e., the parameter δ limits a correspondence neighborhood based on time), whereas the proposed Hausdorff distance controls the matching based on arc length, which is important when the series under comparison have long periods without change (e.g., a vehicle stopped to wait for a traffic light does not change its position). Although both DTW and the MH distance allow for a meaningful computation of relative distances, the LCSS distance is less discriminating because of the early application of the threshold ε. The time complexity of all the distances, in general, is O(mn).

IV. TRAJECTORY CLUSTERING

In this section, we describe the two clustering methods used and also clarify the rules that we use to decide which of the trajectories produced by the tracking method in Section II are considered outliers for our purposes. In the following exposition, d_ij will denote the distance between the ith and jth input trajectories (used by the agglomerative method), and k_ij will denote their corresponding similarity after appropriate scale selection (used in the spectral method).

A. Outlier Filtering

Visual tracking can fail under various circumstances; therefore, not all automatically collected trajectories correspond to a single vehicle, and in extreme cases, the objects being tracked are not vehicles at all. This case is particularly true for traffic intersections, where tracking is very difficult in moderately dense traffic due to occlusions. Such outlier trajectories, if included in the input to the clustering algorithms, produce many singleton clusters and impede the learning process. We remove outliers in advance, to the extent possible. Because our tracking method produces dimension estimates and positions in real-world units, we have a sensible method for prefiltering most trajectories generated by tracking failure or by merged targets. A trajectory of a tracked object that violates any of the following rules is considered an outlier and is not used in our experiments.

1) At least 90 samples (30 for highways) must be available.
2) The object length must exceed the width.
3) The object length must be between 1.5 and 6 m.
4) The object width must be between 1 and 4 m.
5) The maximum instantaneous velocity over the trajectory must be between 10 and 100 km/h.
6) The object's turning rate must be below 45° per frame.

A certain number of trajectories that conform to all of the aforementioned criteria were still considered outliers in the ground truth. We kept these outliers in the input to the algorithms to test the methods' robustness. This approach is in contrast with [11], which manually eliminated all outliers in their experiments. None of the clustering methods explicitly detects outliers; therefore, we excluded the outlier trajectories from the clustering confusion matrix (i.e., the score that an algorithm received was not influenced by the cluster that it assigned to the outliers).

B. Agglomerative Clustering

The agglomerative clustering method that we use is identical to the one in [4], i.e., each trajectory is initially assigned a singleton cluster. The two closest clusters are iteratively merged until all clusters are separated by a distance greater than a prespecified threshold τ. If I and J denote the sets of indices of the trajectories of two disjoint clusters, the distance between the clusters is

$$
d_{\mathrm{avg}}(I, J) = \frac{1}{|I|\,|J|} \sum_{i \in I} \sum_{j \in J} d_{ij}.
\tag{21}
$$
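The merge loop and the average-linkage rule of (21) can be sketched as follows (a naive, unoptimized illustration over a precomputed distance matrix, not the authors' implementation):

```python
import itertools

def agglomerate(D, tau):
    """Merge clusters (sets of trajectory indices) until every pair of
    remaining clusters is farther apart than tau under average linkage (21)."""
    clusters = [{i} for i in range(len(D))]

    def d_avg(I, J):
        # Equation (21): mean pairwise distance between the two index sets
        return sum(D[i][j] for i in I for j in J) / (len(I) * len(J))

    while len(clusters) > 1:
        # Find the closest pair of clusters under average linkage
        (a, b), best = min(
            (((a, b), d_avg(clusters[a], clusters[b]))
             for a, b in itertools.combinations(range(len(clusters)), 2)),
            key=lambda t: t[1])
        if best > tau:          # all clusters separated by more than tau
            break
        clusters[a] |= clusters.pop(b)   # merge b into a (b > a, so pop is safe)
    return clusters
```

Recomputing d_avg from scratch on every iteration keeps the sketch short; a production version would update the linkage matrix incrementally.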
652 IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, VOL. 11, NO. 3, SEPTEMBER 2010
C. Spectral Clustering

As already explained, we use a variant of the clustering algorithm proposed in [13], but with the local scaling method in [14]. We include the description of the algorithm here for completeness. Note that we automatically choose the number of clusters by using the distortion metric that [13] uses to select the optimal global scale.

The input to the spectral clustering method is the affinity matrix K with elements k_ij for 1 ≤ i, j ≤ n. The first step of the process is to normalize the rows and columns of K, which yields a normalized affinity L, i.e.,

$$
L = W^{-1/2} K W^{-1/2}
\tag{22}
$$

where the ith element of the diagonal matrix W is defined as

$$
w_i = \sum_{1 \le j \le n} k_{ij}.
\tag{23}
$$

In an ideal noise-free case, the matrix L is block diagonal and has g eigenvalues with value 1 and n − g zero eigenvalues, where g is the actual number of clusters in the data. Unfortunately, counting the number of eigenvalues of L that are equal to 1 is not a practical way of determining the number of clusters, as pointed out in [14]. However, we can at least find a suitable range for the number of clusters using the spectrum of L. Let 1 ≥ λ_1 ≥ λ_2 ≥ · · · ≥ λ_n ≥ 0 be the eigenvalues of L. We estimate the minimum number of clusters in the data g_min by counting how many eigenvalues are greater than 0.99, and we estimate the maximum number of clusters g_max by counting how many eigenvalues are greater than 0.8. These thresholds were experimentally chosen. For some of our tests, the possible range was very narrow, whereas for others it was quite large.

Once we choose the search range for the number of clusters, we compute the eigenvectors of L that correspond to the top g_max eigenvalues. Let v_i denote these vectors for 1 ≤ i ≤ g_max. In the case of a repeated eigenvalue, we require that the corresponding eigenvectors be mutually orthogonal. The clustering algorithm proceeds with the following steps, which are repeated for each integer value of g, inclusively ranging from g_min to g_max.

1) Form the n × g matrix V = [v_1, . . . , v_g].
2) Normalize each row of V to unit length, obtaining R = SV, where S is diagonal with elements s_i = (Σ_{j=1}^{g} V_{ij}²)^{−1/2}.
3) Treat the rows of R as g-dimensional data points and cluster them using k-means. Let μ_1, μ_2, . . . , μ_g denote the g cluster centers (as row vectors), and let c(i) denote the cluster that corresponds to the ith row r^{(i)}.
4) Compute the within-class scatter W = Σ_{i=1}^{n} ‖r^{(i)} − μ_{c(i)}‖²₂ and the total scatter T = Σ_{i=1}^{n} Σ_{j=1}^{g} ‖r^{(i)} − μ_j‖²₂. Compute the distortion score ρ_g = W/(T − W) as the ratio of the within-class to between-class scatter.

The value of g that produces the least distortion ρ_g is the automatically determined number of clusters, and the c(i) obtained with that number of clusters is the class indicator function for the input trajectories. We show a plot of ρ_g versus g in Fig. 4. Note that the actual cost ρ_g is not monotonic or well behaved; thus, a search over all reasonable values of g is necessary.

Fig. 4. Distortion ρ_g for data set 1 [see Fig. 5(a)] using the Hausdorff distance.

V. EXPERIMENTS

In this section, we explain the experiments performed. After discussing the metrics for comparing the different clustering algorithms, we show the test data with the manual ground-truth labeling and finally discuss the choice of parameters for the various algorithms.

A. Comparing Clusterings

The clustering similarity measure used in our earlier work [1] can be shown to be an upper bound on the misclassification error probability. As such, it is well suited to comparing clusterings that are generally very similar or are produced by very similar methods. Although such a measure is good for testing the stability and sensitivity of a single method, it is not very sensible for comparing different methods. To compare the different clustering methods, we use the variation of information distance in [33], which discusses several desirable properties for clustering-comparison metrics. It is proven there that only unbounded distances, e.g., the variation of information metric, satisfy all desirable properties for the comparison of clustering methods. The misclassification error measure that we use is also discussed in [33], and we include results from it in all cases where a method produced the same number of clusters as the ground truth.

We compare two clusterings C_1 : [1, . . . , n] → [1, . . . , k_1] (mapping the n trajectories to k_1 labels) and C_2 : [1, . . . , n] → [1, . . . , k_2] (mapping the same n trajectories to k_2 labels) by
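The spectral procedure above can be sketched end to end with NumPy and SciPy. This is a simplified illustration under our own assumptions, not the authors' implementation: it uses `scipy.cluster.vq.kmeans2` for step 3 (with a fixed seed for reproducibility), and it relies on `numpy.linalg.eigh` returning mutually orthogonal eigenvectors for repeated eigenvalues of the symmetric L.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_cluster(K):
    """Cluster from the affinity matrix K, choosing the number of clusters g
    by minimizing the distortion rho_g = W / (T - W) of step 4."""
    w = K.sum(axis=1)                                # (23): degree of each node
    L = K / np.sqrt(np.outer(w, w))                  # (22): W^{-1/2} K W^{-1/2}
    lam, vec = np.linalg.eigh(L)                     # ascending for symmetric L
    lam, vec = lam[::-1], vec[:, ::-1]               # sort descending
    g_min = max(1, int((lam > 0.99).sum()))          # eigenvalue-count bounds
    g_max = max(g_min, int((lam > 0.8).sum()))
    best_rho, best_c = np.inf, None
    for g in range(g_min, g_max + 1):
        V = vec[:, :g]                               # step 1: top-g eigenvectors
        R = V / np.linalg.norm(V, axis=1, keepdims=True)   # step 2: unit rows
        mu, c = kmeans2(R, g, minit='++', seed=0)          # step 3: k-means
        W_sc = ((R - mu[c]) ** 2).sum()                    # within-class scatter
        T = ((R[:, None, :] - mu[None, :, :]) ** 2).sum()  # total scatter
        rho = W_sc / (T - W_sc) if T > W_sc else np.inf    # distortion rho_g
        if rho < best_rho:
            best_rho, best_c = rho, c
    return best_c
```

On a block-diagonal affinity matrix, the eigenvalue thresholds alone pin down g, and the distortion search is a formality; on real data the search over [g_min, g_max] is what the non-monotonic ρ_g in Fig. 4 makes necessary.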
ATEV et al.: CLUSTERING OF VEHICLE TRAJECTORIES 653
Fig. 5. Four scenes from which the trajectories were collected and the distribution of their lengths. (a) Scene 1: 403 trajectories, 38 (9.4%) outliers. (b) Scene 2:
391 trajectories, six (1.5%) outliers. (c) Scene 3: 161 trajectories, one (0.6%) outlier. (d) Scene 4: 240 trajectories, 13 (5.4%) outliers. (e) Distribution of trajectory
lengths.
TABLE II
COMPARISON TO GROUND-TRUTH CLUSTERING FOR THE FOUR SCENES IN FIG. 6
Fig. 6. Result of the proposed method on the four data sets. The grid units are in meters. (a) Data set 1. (b) Data set 2. (c) Data set 3. (d) Data set 4.
reach the specified correct number of clusters. The problem was that LCSS produced only two types of distances (very close to the minimum possible and ∞), and any outliers or well-separated clusters produced many infinite distances. Increasing ε improved the performance of the method for the freeway data sets but had the opposite effect on the intersection data sets and negatively affected the spectral clustering results.

Although the performance of DTW was not very good, we believe that this result is mostly due to the very strict constraints of (4) and (5), which require the starting points of the trajectories to be matched. Due to the unpredictability of target initialization in the tracker and the effects of foreshortening, trajectories in the same cluster could have varying starting and ending positions. The Hausdorff distance does not impose such a constraint; therefore, it has a clear advantage over DTW on data set 3 and, to a lesser extent, on data set 4. We were surprised by the poor results when using DTW in a spectral framework and have no definitive explanation for this observation; however, note that, on the data set with the largest intracluster variation in trajectory lengths, data set 1, DS and DM did not perform well, whereas the same methods outperformed DA for data set 2, which exhibits the least variation in intracluster trajectory lengths.

B. Time Cost of Methods

Two significant factors affect the runtime of the compared methods: 1) the computation of a pairwise distance matrix and 2) the actual clustering. The asymptotic complexity of the first step is very similar for all of the compared methods; in practice, as Table III shows, the MH distance and LCSS have very similar runtimes, and DTW is slower by a constant factor. The reason for the longer runtime of DTW is that we do not use a band-limited version of the distance (the slack parameters δ for LCSS and w for the proposed distance reduce the runtime). All implementations of this component are in C++ but have not been heavily optimized.
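The band limiting mentioned above constrains the warping path to |i − j| ≤ w (a Sakoe–Chiba-style band), which cuts the work from O(mn) to roughly O(w · max(m, n)). A sketch under our own assumptions (absolute difference as the point cost, and the 1/n length normalization of (3)); this is an illustration, not the paper's C++ code:

```python
import math

def dtw_banded(a, b, w):
    """DTW between 1-D series, restricted to the band |i - j| <= w."""
    n, m = len(a), len(b)
    w = max(w, abs(n - m))          # widen the band so the corner is reachable
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        # Only cells inside the band are ever computed
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m] / n              # length normalization, as in (3)
```

Cells outside the band stay at infinity and never enter the minimization, which is exactly how the slack parameters δ and w reduce runtime for LCSS and the proposed distance.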
[5] L. Chen, M. T. Özsu, and V. Oria, "Symbolic representation and retrieval of moving object trajectories," in Proc. ACM SIGMM Int. Workshop Multimedia Inf. Retrieval, Oct. 2004, pp. 227–234.
[6] L. Chen, M. T. Özsu, and V. Oria, "Robust and fast similarity search for moving object trajectories," in Proc. ACM SIGMOD Int. Conf. Manag. Data, 2005, pp. 491–502.
[7] F. I. Bashir, A. A. Khokhar, and D. Schoenfeld, "Segmented trajectory based indexing and retrieval of video data," in Proc. IEEE Int. Conf. Image Process., Sep. 2003, vol. 2, pp. 623–626.
[8] M. Vlachos, D. Gunopulos, and G. Das, "Rotation invariant distance measures for trajectories," in Proc. ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, Aug. 2004, pp. 707–712.
[9] M. Vlachos, G. Kollios, and D. Gunopoulos, "Discovering similar multidimensional trajectories," in Proc. 18th Int. Conf. Data Eng., Feb. 2002, pp. 673–684.
[10] F. Duchene, C. Garbay, and V. Rialle, "Similarity measure for heterogeneous multivariate time series," in Proc. Eur. Signal Process. Conf., Sep. 2004, pp. 1605–1608.
[11] Z. Zhang, K. Huang, and T. Tan, "Comparison of similarity measures for trajectory clustering in outdoor surveillance scenes," in Proc. Int. Conf. Pattern Recog., Aug. 2006, pp. 1135–1138.
[12] G. Antonini and J. P. Thiran, "Trajectories clustering in ICA space: An application to automatic counting of pedestrians in video sequences," in Proc. Adv. Concepts Intell. Vis. Syst., Aug. 2004. [Online]. Available: http://infoscience.epfl.ch/record/87110/
[13] A. Y. Ng, M. I. Jordan, and Y. Weiss, "On spectral clustering: Analysis and an algorithm," in Advances in Neural Information Processing Systems, T. G. Dietterich, S. Becker, and Z. Ghahramani, Eds. Cambridge, MA: MIT Press, 2002, pp. 849–856.
[14] L. Zelnik-Manor and P. Perona, "Self-tuning spectral clustering," in Advances in Neural Information Processing Systems, L. K. Saul, Y. Weiss, and L. Bottou, Eds. Cambridge, MA: MIT Press, 2005, pp. 1601–1608.
[15] D. Verma and M. Meilă, "A comparison of spectral clustering algorithms," Dept. Comput. Sci. Eng., Univ. Washington, Seattle, WA, Tech. Rep. UW-CSE-03-05-01, 2003.
[16] K. Kalpakis, D. Gada, and V. Puttagunta, "Distance measures for effective clustering of ARIMA time series," in Proc. IEEE Int. Conf. Data Mining, Nov. 2001, pp. 273–280.
[17] J. Alon, S. Sclaroff, G. Kollios, and V. Pavlovic, "Discovering clusters in motion time-series data," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., Jun. 2003, vol. 1, pp. 375–381.
[18] F. M. Porikli, "Learning object trajectory patterns by spectral clustering," in Proc. IEEE Int. Conf. Multimedia Expo, Jun. 2004, vol. 2, pp. 1171–1174.
[19] Y. Xiong and D.-Y. Yeung, "Mixtures of ARMA models for model-based time series clustering," in Proc. IEEE Int. Conf. Data Mining, Dec. 2002, pp. 717–720.
[20] W. Hu, X. Xiao, D. Xie, T. Tan, and S. Maybank, "Traffic accident prediction using 3-D model-based vehicle tracking," IEEE Trans. Veh. Technol., vol. 53, no. 3, pp. 677–694, May 2004.
[21] C. Stauffer and W. E. L. Grimson, "Learning patterns of activity using real-time tracking," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 8, pp. 747–757, Aug. 2000.
[22] D. R. Magee, "Tracking multiple vehicles using foreground, background, and motion models," Image Vis. Comput., vol. 22, no. 2, pp. 143–155, Feb. 2004.
[23] J. W. Hsieh, S. H. Yu, Y. S. Chen, and W. F. Hu, "Automatic traffic surveillance system for vehicle tracking and classification," IEEE Trans. Intell. Transp. Syst., vol. 7, no. 2, pp. 175–187, Jun. 2006.
[24] B. Morris and M. Trivedi, "Robust classification and tracking of vehicles in traffic video streams," in Proc. IEEE Conf. Intell. Transp. Syst., Sep. 2006, pp. 1078–1083.
[25] Z. Fu, W. Hu, and T. Tan, "Similarity-based vehicle trajectory clustering and anomaly detection," in Proc. IEEE Int. Conf. Image Process., Sep. 2005, vol. 2, pp. 602–605.
[26] S. Atev, O. Masoud, R. Janardan, and N. P. Papanikolopoulos, "A collision prediction system for traffic intersections," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Aug. 2005, pp. 169–174.
[27] S. Atev and N. P. Papanikolopoulos, "Multiview vehicle tracking with constrained filter and partial measurement support," in Proc. IEEE Int. Conf. Robot. Autom., May 2008, pp. 2277–2282.
[28] J. W. Hunt and T. G. Szymanski, "A fast algorithm for computing longest common subsequences," Commun. ACM, vol. 20, no. 5, pp. 350–353, May 1977.
[29] D. S. Hirschberg, "Algorithms for the longest common subsequence problem," J. Assoc. Comput. Mach., vol. 24, no. 4, pp. 664–675, Oct. 1977.
[30] M.-P. Dubuisson and A. K. Jain, "A modified Hausdorff distance for object matching," in Proc. Int. Conf. Pattern Recog., Oct. 1994, vol. 1, pp. 566–568.
[31] D. P. Huttenlocher, G. A. Klanderman, and W. J. Rucklidge, "Comparing images using the Hausdorff distance," IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 9, pp. 850–863, Sep. 1993.
[32] N. Meratnia and R. A. de By, "Aggregation and comparison of trajectories," in Proc. ACM Int. Symp. Adv. Geographic Inf. Syst., Nov. 2002, pp. 49–54.
[33] M. Meilă, "Comparing clusterings—An information based distance," J. Multivariate Anal., vol. 98, no. 5, pp. 873–895, May 2007.

Stefan Atev received the B.A. degree in computer science and mathematics from Luther College, Decorah, IA, in 2003 and the M.S. degree in computer science from the University of Minnesota, Minneapolis, in 2007. He is currently working toward the Ph.D. degree with the Department of Computer Science and Engineering, University of Minnesota.

He is a Senior Algorithm Development Scientist with Vital Images, Inc., Minnetonka, MN. His research interests include computer vision, machine learning, and medical image processing.

Grant Miller received the B.S. degree in computer science from the University of Minnesota, Minneapolis, in 2009.

He was with the Department of Computer Science and Engineering, University of Minnesota, working on mobile applications. He is currently with Concrete Software Inc., Eden Prairie, MN. His research interests include computer vision and programming languages.

Mr. Miller received an honorable mention at the 2008 Computing Research Association Outstanding Undergraduate Award Competition.

Nikolaos P. Papanikolopoulos (F'07) received the Diploma degree in electrical and computer engineering from the National Technical University of Athens, Athens, Greece, in 1987 and the M.S.E.E. degree in electrical engineering and the Ph.D. degree in electrical and computer engineering from Carnegie Mellon University, Pittsburgh, PA, in 1988 and 1992, respectively.

He is currently a Distinguished McKnight University Professor with the Department of Computer Science and Engineering, University of Minnesota, Minneapolis, and the Director of the Security in Transportation Technology Research and Applications, Center for Transportation Studies. His research interests include computer vision, sensors for transportation applications, robotics, and control. He is the author or coauthor of more than 250 journal papers and conference proceedings in the aforementioned research areas.

Dr. Papanikolopoulos was a finalist for the Anton Philips Award for Best Student Paper at the 1991 IEEE International Conference on Robotics and Automation and the recipient of the Best Video Award at the 2000 IEEE International Conference on Robotics and Automation. Furthermore, he was the recipient of the Kritski Fellowship in 1986 and 1987. He was a McKnight Land Grant Professor with the University of Minnesota for the period 1995–1997 and has received the National Science Foundation (NSF) Research Initiation and Faculty Early Career Development Awards. He also received the Faculty Creativity Award from the University of Minnesota. One of his papers (with O. Masoud as a coauthor) received the IEEE Vehicular Technology Society 2001 Best Land Transportation Paper Award. Finally, he has received grants from the Defense Advanced Research Projects Agency, the Department of Homeland Security, the U.S. Army, the U.S. Air Force, Sandia National Laboratories, NSF, Microsoft, Lockheed Martin, the Idaho National Engineering and Environmental Laboratory, the U.S. Department of Transportation, the Minnesota Department of Transportation, Honeywell, and 3M, totaling more than $17 M.