In this paper, we focus on the investigation of appearance-based validation approaches to HV. Seeking solutions that boost the vehicle detection accuracy and reduce the false alarm rate while meeting real-time requirements, we propose a machine learning algorithm based on Haar-like features and the support vector machine (SVM). Specifically, we first design a Haar-like feature extraction method to represent a vehicle's edges and structures, and then propose a rapid feature selection algorithm using AdaBoost to cope with the large pool of Haar-like features. Finally, we present an improved normalization method for feature values. Experimental results demonstrate that the proposed approaches not only speed up the feature selection process with AdaBoost, but also outperform the state-of-the-art methods in terms of classification ability.

The rest of this paper is organized as follows. In Section II, we review the related work for vehicle detection using appearance-based approaches. In Section III, we present an algorithm for computing Haar-like features. A fast feature selection method based on AdaBoost is reported in Section IV. Section V gives an introduction to SVMs and introduces an improved normalization method for the original feature values while training the SVM. The experimental results and analysis are described in Section VI. Finally, the conclusion is drawn in Section VII.

II. RELATED WORK

Machine learning methods are becoming increasingly popular for their high performance, good robustness, and ease of use, and have been applied to many fields (such as image retrieval, image annotation, visual recognition, and vehicle detection) [1]-[4]. HV using machine learning methods is treated as a two-class pattern classification problem: vehicle versus nonvehicle. In general, machine learning methods consist of two processes: 1) feature representation and 2) classification.

A. Feature Representation

Given the huge intra-class variability of the vehicle class, one feasible approach is to learn the decision boundary by training a classifier on feature sets extracted from a training set. Various feature extraction methods have been investigated in the context of vehicle detection. Based on the method used, the extracted features can be classified as either global or local.

Global features are obtained by considering all the pixels in an image. Usually, dimensionality reduction techniques [5], [6] are required for the high-dimensional features. Wu and Zhang [7] used standard principal component analysis (PCA) for feature extraction, together with a nearest-neighbor classifier, reporting an 89% accuracy on a vehicle data set. However, their evaluation database was quite small (93 vehicle images and 134 nonvehicle images), which makes it difficult to draw any useful conclusions. Although detection schemes based on global features such as those described in [7]-[13] perform reasonably well, an inherent problem with global feature extraction approaches is that they are sensitive to local or global image variations (e.g., viewpoint changes, illumination changes, and partial occlusion).

Local features, on the other hand, are less sensitive to the effects faced by global features. In addition, geometric information and constraints in the configuration of different local features can be utilized either explicitly or implicitly. An overcomplete dictionary of Haar wavelet features was utilized in [14] for vehicle detection. They argued that this representation provided a richer model and spatial resolution and that it was more suitable for capturing complex patterns. Sun et al. [15] went one step further by arguing that the actual values of the wavelet coefficients are not very important for vehicle detection. They proposed using quantized coefficients to improve detection performance. Using Gabor filters for vehicle feature extraction was investigated in [16]. Gabor filters [17] provide a mechanism for obtaining orientation- and scale-related features. The hypothesized vehicle subimages were divided into nine overlapping subwindows, and then Gabor filters were applied on each subwindow separately. Furthermore, Sun et al. [18] combined Haar wavelet with Gabor features to describe the properties of a vehicle. Scale-invariant feature transform features [19] were used in [20] to detect the rear faces of vehicles. In [21], histogram of oriented gradients (HOG) features were extracted from a given image patch for vehicle detection. In [22], a combination of speeded-up robust features [23] and edges was used to detect vehicles in the blind spot.

The main drawback of the above local features is that they are quite slow to compute. In recent years, there has been a transition from complex image features such as Gabor filters and HOG to simpler and more efficient feature sets for vehicle detection. Haar-like features are sensitive to vertical, horizontal, and symmetric structures, and they can be computed efficiently, making them well suited for real-time detection of vehicles [24], as also demonstrated by their good performance in the object detection literature [25]-[27]. Accordingly, we choose Haar-like features as the feature representation for our vehicle detection system.

B. Classification

Classification methods can be broadly split into two categories: 1) discriminative and 2) generative. Discriminative classifiers, which learn a decision boundary between two classes, have been more widely used in vehicle detection. Generative classifiers, which learn the underlying distribution of a given class, have been less common in the vehicle detection literature. While in [28] and [29] artificial neural network classifiers were used for vehicle detection, they have recently fallen somewhat out of favor. Neural networks have many parameters to tune, and the training tends to converge to a local optimum. The research community has moved toward classifiers whose training converges to a global optimum over the training set, such as SVMs and AdaBoost. SVMs have been widely used for vehicle detection. In [30] and [31], an SVM was used to classify feature vectors consisting of Haar wavelet coefficients. The combination of HOG features and an SVM classifier has also been used in [28], [32], and [33].
510 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 25, NO. 3, MARCH 2015
TABLE I
NUMBER OF FEATURES FOR AN IMAGE PATCH WITH SIZE OF 32 × 32
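Counts like those in Table I come from enumerating every admissible position and scale of each prototype inside the patch. The brute-force sketch below is illustrative only: it counts a single upright two-rectangle (side-by-side) prototype, whereas Table I tallies all the prototypes of Fig. 2.

```python
# Sketch: count placements of an upright two-rectangle (side-by-side) Haar
# prototype in an n-by-n window by brute-force enumeration. Illustrative
# only; Table I covers all prototypes of Fig. 2, not just this one.

def count_two_rect(n):
    count = 0
    for w in range(1, n + 1):        # width of ONE of the paired rectangles
        for h in range(1, n + 1):    # shared height of both rectangles
            if 2 * w > n:
                continue             # the combined (2w x h) block must fit
            # number of top-left positions where a 2w x h block fits
            count += (n - 2 * w + 1) * (n - h + 1)
    return count

print(count_two_rect(4))   # -> 40 placements in a tiny 4 x 4 window
print(count_two_rect(32))  # -> 135168 for the paper's 32 x 32 patches
```

Repeating the enumeration for each prototype and summing gives the kind of per-prototype totals tabulated in Table I.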
Fig. 2. Haar-like feature prototypes used in our method: upright ones (the first
row) and rotated ones (the second row).
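The efficiency of the upright prototypes in Fig. 2 stems from the integral image: once it is built, any upright rectangle sum costs four array lookups, so a two-rectangle feature costs eight lookups regardless of its scale. A minimal sketch (hypothetical code, not the paper's implementation):

```python
# Sketch: a two-rectangle (left/right) Haar-like feature via an integral
# image. All names are illustrative, not from the paper's implementation.

def integral_image(img):
    """ii[y][x] = sum of img over rows < y and cols < x (padded with a
    zero row/column), so any rectangle sum needs only 4 lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle with top-left corner (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """Left-minus-right two-rectangle feature: responds to vertical edges."""
    return rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h)

# A patch with a dark left half and a bright right half responds strongly.
patch = [[0, 0, 9, 9],
         [0, 0, 9, 9]]
ii = integral_image(patch)
print(haar_two_rect(ii, 0, 0, 2, 2))  # left sum 0 - right sum 36 = -36
```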
takes the class labels into account, i.e., only the middle location of the two adjacent feature values with different labels is considered as the latent classification location.

C. Theoretical Analysis for the Proposed Approach

In Section IV-B, we have presented the proposed feature selection method that combines the feature values with their class labels. In this section, we theoretically analyze our approach in terms of the property of the class labels. For convenience, we assume l is the latent classification location, and the classification results to the left of l are k (k ∈ {-1, +1}); on the contrary, the results to the right of l are -k.

1) When k = 1, (3) turns into

    ε(l) = 1/2 + (1/2) Σ_{j=1}^{n} w_j y_j - Σ_{j=1}^{l-1} w_j y_j.    (4)

Therefore, finding min ε(l) is to compute max(Σ_{j=1}^{l-1} w_j y_j). As w_j > 0, only when y_{l-1} = 1 and y_{l+1} = -1 does Σ_{j=1}^{l-1} w_j y_j reach the maximum.

2) When k = -1, (3) turns into

    ε(l) = 1/2 - (1/2) Σ_{j=1}^{n} w_j y_j + Σ_{j=1}^{l-1} w_j y_j.    (5)

Therefore, finding min ε(l) is to compute min(Σ_{j=1}^{l-1} w_j y_j). Only when y_{l-1} = -1 and y_{l+1} = 1 does Σ_{j=1}^{l-1} w_j y_j reach the minimum.
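The two cases above can be checked numerically. The sketch below (illustrative names, not the paper's code) evaluates ε(l) from (4) and (5) at every latent location for pre-sorted samples; the minimizer indeed falls on a boundary between oppositely labeled neighbors, as the analysis predicts.

```python
# Numerical check of the case analysis above; all names are illustrative.
# Samples are assumed sorted by feature value, with positive AdaBoost
# weights w summing to 1 and labels y in {-1, +1}.

def split_errors(w, y):
    """For each latent location l = 1..n, return the pair of weighted
    errors from (4) and (5): labeling j < l as +1 (resp. -1) and the
    remaining samples as the opposite class."""
    n = len(w)
    total = sum(wj * yj for wj, yj in zip(w, y))         # sum_{j=1}^{n} w_j y_j
    errors = []
    for l in range(1, n + 1):
        prefix = sum(w[j] * y[j] for j in range(l - 1))  # sum_{j=1}^{l-1} w_j y_j
        eps_plus = 0.5 + 0.5 * total - prefix            # (4): k = +1 left of l
        eps_minus = 0.5 - 0.5 * total + prefix           # (5): k = -1 left of l
        errors.append((eps_plus, eps_minus))
    return errors

# Labels flip between j = 2 and j = 3, so the best split is l = 3.
w = [0.25, 0.25, 0.25, 0.25]
y = [+1, +1, -1, -1]
errors = split_errors(w, y)
best_l = min(range(1, len(w) + 1), key=lambda l: min(errors[l - 1]))
print(best_l)  # -> 3, a boundary between oppositely labeled neighbors
```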
Fig. 6. Examples of training images. (a) Vehicle samples. (b) Nonvehicle samples.

Fig. 7. Examples of test images of Test data II. (a) Vehicle samples. (b) Nonvehicle samples.

feature vector set is used to train the RBF-SVM classifier with cross-validation to select the optimal parameters γ and C.

C. Testing Process

For a given test ROI image patch, we first normalize it to a 32 × 32 grayscale patch, then compute the feature values according to the selected Haar-like features and normalize the feature values to [0, 1] with the improved normalization algorithm shown in Algorithm 3. Finally, we assemble the normalized feature values into a vector and input it to the trained RBF-SVM classifier to obtain the classification result.

VI. EXPERIMENTAL RESULTS AND ANALYSIS

To evaluate the proposed approaches, we apply them to a monocular vision-based detection system for static rear-vehicle images. This system includes two modules. The first module aims to segment ROIs accurately according to [30] and [31]. The second module, which is the focus of this paper, performs classification on the ROIs. Vehicle existence validation is a two-class pattern classification problem: vehicle versus nonvehicle.

Different videos recorded by a camera mounted on a vehicle are collected for evaluating the presented algorithms. The videos are taken in different daytime scenes, including highways, common urban roads, narrow urban roads, and so on. Some roads are covered with japanning, smear, and so on.

At the first stage, 23 687 samples from the same videos were collected for training and testing; 17 647 samples were selected randomly for training, including 8774 vehicle samples (positive samples) and 8873 nonvehicle samples (negative samples), and the remaining 6040 samples (denoted as Test data I) were used for testing, including 4266 vehicle samples and 1774 nonvehicle samples. At the second stage, 29 698 samples from videos different from those at the first stage were collected for testing only (denoted as Test data II), including 7901 vehicle samples (positive samples) and 4602 nonvehicle samples (negative samples). The vehicle samples at both the first stage and the second stage include various kinds of vehicles, such as cars, trucks, and buses, as well as different colors, such as red, blue, black, gray, and white. Furthermore, the vehicle samples include both vehicles near the vehicle mounted with the camera and those that are far away. The nonvehicle samples at both stages include roads, buildings, green plants, advertisement boards, bridges, traffic signs, guardrails, and so on. Fig. 6 shows some training examples of vehicle and nonvehicle images, and Fig. 7 shows some test examples of vehicle and nonvehicle images at the second stage.

To evaluate the performance of the approaches, the true positive rate (or vehicle detection rate) tp and the false positive rate fp were recorded. They are defined in

    tp = NTP / (NTP + NFN),    fp = NFP / (NFP + NTN)    (10)

where NTP, NFP, NTN, and NFN are the numbers of objects identified as true positives, false positives, true negatives, and false negatives, respectively. Three experiments are conducted on a PC (CPU: Intel Core2 2.13 GHz, memory: 2 GB, operating system: Windows 7, implementation: MATLAB 2012b).

The first experiment aims to validate the classification accuracy of the proposed machine learning method compared with the state-of-the-art ones that perform reasonably well in vehicle classification and for which the code can be obtained or reproduced from the original papers. The second experiment compares the designed normalization algorithm for the feature vector set with other normalization methods. The third experiment aims to validate the time efficiency of the proposed AdaBoost-based feature selection algorithm compared with the state-of-the-art selection algorithms and the traditional one. All ROIs are normalized to 32 × 32 grayscale image patches.

In the first experiment, since different data sets will induce different optimal parameters for feature extraction methods and classifiers, we select the optimal parameters in terms of classification ability. For the feature extraction of PCA [7], we choose the 79 eigenvectors associated with the 79 biggest eigenvalues, which generate the best classification accuracy. For the feature extraction of Gabor [16], we select six angles and four orientations. For the feature extraction of wavelet, we select the simplest Haar wavelet, perform a 6-level decomposition, and then remove the HH part of the first level according to [15]. For the feature extraction of Gabor combined with wavelet according to [18], the computation of the Gabor features is similar to [16] and that of the wavelet features is similar to [15].
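The rates in (10) follow directly from the four confusion-matrix counts. As a hedged illustration (the counts below are hypothetical, not results from the paper):

```python
# Sketch of the evaluation metrics defined in (10); counts are hypothetical.

def rates(n_tp, n_fp, n_tn, n_fn):
    """True positive (vehicle detection) rate and false positive rate."""
    tp = n_tp / (n_tp + n_fn)  # detected vehicles / all vehicle samples
    fp = n_fp / (n_fp + n_tn)  # false alarms / all nonvehicle samples
    return tp, fp

# Test data I has 4266 vehicle and 1774 nonvehicle samples; suppose a
# classifier finds 4200 of the vehicles and raises 50 false alarms.
tp, fp = rates(n_tp=4200, n_fp=50, n_tn=1724, n_fn=66)
print(round(tp, 4), round(fp, 4))  # -> 0.9845 0.0282
```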
TABLE IV
EVALUATION RESULTS ON THE TWO PUBLIC DATA SETS
From Table II, one can conclude that, compared with the state-of-the-art detection methods, the proposed algorithm produces not only a higher vehicle detection rate (tp) but also a lower false positive rate (fp) on both Test data I and Test data II. On Test data II, although the vehicle detection rate of the proposed algorithm is only 0.44% better than that of the method in [46] and [47], the false positive rate (fp) of the proposed algorithm is 7.84% lower than that achieved by that method. From Fig. 8, one can conclude that the proposed algorithm shows the best performance among all methods.

From Table IV, one can conclude that the proposed algorithm shows its superiority on the two public data sets compared with the other methods. In Table IV, all methods have better classification results on MIT CBCL than on the Caltech rear-viewed vehicle data set, because most of the vehicle images in MIT CBCL are frontal-viewed vehicles that are more similar to our training samples in distribution.

From Figs. 9 and 10, one can conclude that, compared with the original data, attribute normalization improves the classification performance significantly, and that, compared with the other two popular normalization methods in vehicle detection, our improved normalization algorithm is the best choice for the RBF-SVM classifier on both Test data I and Test data II. The original data are sensitive to illumination and easily dominated by overly large attribute values in classification, and the statistical normalization method requires that the attribute data follow a normal distribution, which is not always satisfied in real applications. Although min-max normalization applied directly to the original attribute data overcomes the domination of overly large attribute values, it is still sensitive to illumination. The improved normalization method overcomes both of these problems.

VII. CONCLUSION

In this paper, we have proposed a solution based on Haar-like features and RBF-SVM for vehicle detection. First, due to the huge pool of Haar-like features, a fast feature selection algorithm via AdaBoost has been proposed by combining a sample's feature value with its class label. Then, an improved normalization algorithm for feature values has been presented, which can effectively reduce the within-class variation and increase the between-class variability. The experimental results show that the proposed approaches not only speed up the feature selection process but also show superiority in vehicle classification ability compared with the state-of-the-art methods.

ACKNOWLEDGMENT

The authors would like to thank all the anonymous reviewers for their valuable comments.

REFERENCES

[1] D. Tao, X. Tang, X. Li, and X. Wu, "Asymmetric bagging and random subspace for support vector machines-based relevance feedback in image retrieval," IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 7, pp. 1088-1099, Jul. 2006.
[2] W. Liu and D. Tao, "Multiview Hessian regularization for image annotation," IEEE Trans. Image Process., vol. 22, no. 7, pp. 2676-2687, Jul. 2013.
[3] F. Zhu and L. Shao, "Weakly-supervised cross-domain dictionary learning for visual recognition," Int. J. Comput. Vis., vol. 109, nos. 1-2, pp. 42-59, Aug. 2014.
[4] S. Sivaraman and M. M. Trivedi, "Looking at vehicles on the road: A survey of vision-based vehicle detection, tracking, and behavior analysis," IEEE Trans. Intell. Transp. Syst., vol. 14, no. 4, pp. 1773-1795, Dec. 2013.
[5] J. Li and D. Tao, "Simple exponential family PCA," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 3, pp. 485-497, Mar. 2013.
[6] D. Tao, X. Li, X. Wu, and S. J. Maybank, "Geometric mean for subspace selection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 2, pp. 260-274, Feb. 2009.
[7] J. Wu and X. Zhang, "A PCA classifier and its application in vehicle detection," in Proc. Int. Joint Conf. Neural Netw., vol. 1, 2001, pp. 600-604.
[8] T. Kato, Y. Ninomiya, and I. Masaki, "Preceding vehicle recognition based on learning from sample images," IEEE Trans. Intell. Transp. Syst., vol. 3, no. 4, pp. 252-260, Dec. 2002.
[9] N. D. Matthews, P. E. An, D. Charnley, and C. J. Harris, "Vehicle detection and recognition in greyscale imagery," Control Eng. Pract., vol. 4, no. 4, pp. 473-479, Apr. 1996.
[10] S. L. Phung, D. Chai, and A. Bouzerdoum, "A distribution-based face/nonface classification technique," Austral. J. Intell. Inf. Process. Syst., vol. 7, nos. 3-4, pp. 132-138, 2001.
[11] A. N. Rajagopalan, P. Burlina, and R. Chellapa, "Higher order statistical learning for vehicle detection in images," in Proc. 7th IEEE Int. Conf. Comput. Vis., vol. 2, Sep. 1999, pp. 1204-1209.
[12] Z. Sun, G. Bebis, and R. Miller, "Object detection using feature subset selection," Pattern Recognit., vol. 37, no. 11, pp. 2165-2176, Nov. 2004.
[13] K.-K. Sung and T. Poggio, "Example-based learning for view-based human face detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 1, pp. 39-51, Jan. 1998.
[14] C. Papageorgiou and T. Poggio, "A trainable system for object detection," Int. J. Comput. Vis., vol. 38, no. 1, pp. 15-33, 2000.
[15] Z. Sun, G. Bebis, and R. Miller, "Quantized wavelet features and support vector machines for on-road vehicle detection," in Proc. 7th Int. Conf. Control, Autom., Robot. Vis., 2002, pp. 1641-1646.
[16] Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection using Gabor filters and support vector machines," in Proc. 14th Int. Conf. Digit. Signal Process., 2002, pp. 1019-1022.
[17] D. Tao, X. Li, X. Wu, and S. J. Maybank, "General tensor discriminant analysis and Gabor features for gait recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 10, pp. 1700-1715, Oct. 2007.
[18] Z. Sun, G. Bebis, and R. Miller, "Improving the performance of on-road vehicle detection by combining Gabor and wavelet features," in Proc. IEEE 5th Int. Conf. Intell. Transp. Syst., 2002, pp. 130-135.
[19] D. G. Lowe, "Object recognition from local scale-invariant features," in Proc. 7th IEEE Int. Conf. Comput. Vis., Sep. 1999, pp. 1150-1157.
[20] X. Zhang, N. Zheng, Y. He, and F. Wang, "Vehicle detection using an extended hidden random field model," in Proc. 14th Int. IEEE Conf. ITSC, Oct. 2011, pp. 1555-1559.
[21] M. Cheon, W. Lee, C. Yoon, and M. Park, "Vision-based vehicle detection system with consideration of the detecting location," IEEE Trans. Intell. Transp. Syst., vol. 13, no. 3, pp. 1243-1252, Sep. 2012.
[22] B. F. Lin et al., "Integrating appearance and edge features for sedan vehicle detection in the blind-spot area," IEEE Trans. Intell. Transp. Syst., vol. 13, no. 2, pp. 737-747, Jun. 2012.
[23] H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, "Speeded-up robust features (SURF)," Comput. Vis. Image Understand., vol. 110, no. 3, pp. 346-359, 2008.
[24] X. Wen and Y. Zheng, "An improved algorithm based on AdaBoost for vehicle recognition," in Proc. 2nd Int. Conf. Inf. Sci. Eng. (ICISE), Hangzhou, China, Dec. 2010, pp. 981-984.
[25] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., Jan. 2001, pp. 511-518.
[26] P. Viola and M. J. Jones, "Robust real-time face detection," Int. J. Comput. Vis., vol. 57, no. 2, pp. 137-154, 2004.
[27] P. Viola and M. J. Jones, "Robust real-time object detection," in Proc. IEEE ICCV Workshop Statist. Computat. Theories Vis., Vancouver, BC, Canada, Jul. 2001, pp. 1-30.
[28] R. Miller, Z. Sun, and G. Bebis, "Monocular precrash vehicle detection: Features and classifiers," IEEE Trans. Image Process., vol. 15, no. 7, pp. 2019-2034, Jul. 2006.
[29] O. Ludwig and U. Nunes, "Improving the generalization properties of neural networks: An application to vehicle detection," in Proc. 11th Int. IEEE Conf. ITSC, Oct. 2008, pp. 310-315.
[30] X. Wen, H. Zhao, N. Wang, and H. Yuan, "A rear-vehicle detection system for static images based on monocular vision," in Proc. 9th Int. Conf. Control, Autom., Robot. Vis., Singapore, Mar. 2006, pp. 2421-2424.
[31] W. Liu, X. Wen, B. Duan, H. Yuan, and N. Wang, "Rear vehicle detection and tracking for lane change assist," in Proc. IEEE Intell. Veh. Symp., Istanbul, Turkey, Jun. 2007, pp. 252-257.
[32] S. S. Teoh and T. Bräunl, "Symmetry-based monocular vehicle detection system," Mach. Vis. Appl., vol. 23, no. 5, pp. 831-842, Sep. 2012. [Online]. Available: http://dx.doi.org/10.1007/s00138-011-0355-7
[33] S. Sivaraman and M. M. Trivedi, "Active learning for on-road vehicle detection: A comparative study," Mach. Vis. Appl., vol. 25, no. 3, pp. 599-611, Dec. 2011.
[34] Q. Yuan, A. Thangali, V. Ablavsky, and S. Sclaroff, "Learning a family of detectors via multiplicative kernels," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 3, pp. 514-530, Mar. 2011.
[35] N. Blanc, B. Steux, and T. Hinz, "LaRASideCam: A fast and robust vision-based blindspot detection system," in Proc. IEEE Intell. Veh. Symp., Jun. 2007, pp. 480-485.
[36] Z. Kim, "Realtime obstacle detection and tracking based on constrained Delaunay triangulation," in Proc. IEEE ITSC, Sep. 2006, pp. 548-553.
[37] Y. Zhang, S. J. Kiselewich, and W. A. Bauson, "Legendre and Gabor moments for vehicle recognition in forward collision warning," in Proc. IEEE ITSC, Sep. 2006, pp. 1185-1190.
[38] T. Liu, N. Zheng, L. Zhao, and H. Cheng, "Learning based symmetric features selection for vehicle detection," in Proc. IEEE Intell. Veh. Symp., Jun. 2005, pp. 124-129.
[39] A. Khammari, F. Nashashibi, Y. Abramson, and C. Laurgeau, "Vehicle detection combining gradient analysis and AdaBoost classification," in Proc. IEEE Intell. Transp. Syst., Sep. 2005, pp. 66-71.
[40] J. Cui, F. Liu, Z. Li, and Z. Jia, "Vehicle localisation using a single camera," in Proc. IEEE Intell. Veh. Symp., Jun. 2010, pp. 871-876.
[41] D. Withopf and B. Jähne, "Learning algorithm for real-time vehicle tracking," in Proc. IEEE ITSC, Sep. 2006, pp. 516-521.
[42] I. Kallenbach, R. Schweiger, G. Palm, and O. Löhlein, "Multi-class object detection in vision systems using a hierarchy of cascaded classifiers," in Proc. IEEE Intell. Veh. Symp., 2006, pp. 383-387.
[43] T. T. Son and S. Mita, "Car detection using multi-feature selection for varying poses," in Proc. IEEE Intell. Veh. Symp., Jun. 2009, pp. 507-512.
[44] D. Acunzo, Y. Zhu, B. Xie, and G. Baratoff, "Context-adaptive approach for vehicle detection under varying lighting conditions," in Proc. IEEE ITSC, Sep./Oct. 2007, pp. 654-660.
[45] C. Szegedy, A. Toshev, and D. Erhan, "Deep neural network for object detection," in Advances in Neural Information Processing Systems. Red Hook, NY, USA: Curran Associates, Inc., 2013, pp. 2553-2561.
[46] R. Lienhart and J. Maydt, "An extended set of Haar-like features for rapid object detection," in Proc. Int. Conf. Image Process., Jan. 2002, pp. 900-903.
[47] R. Lienhart, A. Kuranov, and V. Pisarevsky, "Empirical analysis of detection cascades of boosted classifiers for rapid object detection," in Proc. 25th German Pattern Recognit. Symp., 2003, pp. 297-304.
[48] Y. Freund and R. E. Schapire, "Experiments with a new boosting algorithm," in Proc. 13th Int. Conf. Mach. Learn., 1996, pp. 148-156.
[49] V. Vapnik, The Nature of Statistical Learning Theory. New York, NY, USA: Springer-Verlag, 1995.
[50] C. J. C. Burges, "A tutorial on support vector machines for pattern recognition," Data Mining Knowl. Discovery, vol. 2, no. 2, pp. 955-974, 1998.
[51] Y. Li, B. Fang, L. Guo, and Y. Chen, "Network anomaly detection based on TCM-KNN algorithm," in Proc. 2nd ASIACCS, 2007, pp. 13-19.
[52] W. Ma, D. Tran, and D. Sharma, "A study on the feature selection of network traffic for intrusion detection purpose," in Proc. IEEE Int. Conf. ISI, Jun. 2008, pp. 245-247.
[53] Y. Liao, V. R. Vemuri, and A. Pasos, "Adaptive anomaly detection with evolving connectionist systems," J. Netw. Comput. Appl., vol. 30, no. 1, pp. 60-80, 2007.
[54] C.-C. R. Wang and J.-J. J. Lien, "Automatic vehicle detection using local features - A statistical approach," IEEE Trans. Intell. Transp. Syst., vol. 9, no. 1, pp. 83-96, Mar. 2008.
[55] J. Wu, S. C. Brubaker, M. D. Mullin, and J. M. Rehg, "Fast asymmetric learning for cascade face detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 3, pp. 369-382, Mar. 2008.

Xuezhi Wen received the Ph.D. degree in computer application technique from Northeastern University, Shenyang, China, in 2008.

He is an Associate Professor with the Jiangsu Engineering Center of Network Monitoring, Nanjing University of Information Science and Technology, Nanjing, China, where he is also an Associate Professor with the School of Computer and Software.

Dr. Wen is a member of the Association for Computing Machinery. His research interests include pattern recognition, image processing, and intelligent transportation.
WEN et al.: EFFICIENT FEATURE SELECTION AND CLASSIFICATION FOR VEHICLE DETECTION 517
Ling Shao (M'09-SM'10) received the B.Eng. degree in electronic and information engineering from the University of Science and Technology of China, Hefei, China, and the M.Sc. degree in medical image analysis and the Ph.D. (D.Phil.) degree in computer vision from the Robotics Research Group, University of Oxford, Oxford, U.K.

He was a Senior Lecturer with the Department of Electronic and Electrical Engineering, University of Sheffield, Sheffield, U.K., from 2009 to 2014, and a Senior Scientist with Philips Research, The Netherlands, from 2005 to 2009. He is currently a Full Professor with the Department of Computer Science and Digital Technologies, Northumbria University, Newcastle upon Tyne, U.K. He has authored or co-authored over 150 academic papers in refereed journals and conference proceedings, and holds over 10 European Union/U.S. patents. His research interests include computer vision, image/video processing, pattern recognition, and machine learning.

Dr. Shao is a fellow of the British Computer Society and the Institution of Engineering and Technology. He has been an Associate or Guest Editor of the IEEE Transactions on Cybernetics, IEEE Transactions on Image Processing, Pattern Recognition, IEEE Transactions on Neural Networks and Learning Systems, and several other journals. He has organized several workshops with top conferences, such as ICCV, ECCV, and ACM Multimedia. He has served as a Program Committee Member for many international conferences, including ICCV, CVPR, ECCV, and ACM MM.

Wei Fang received the Ph.D. degree in computer science from Soochow University, Suzhou, China, in 2009.

He is an Associate Professor with the Jiangsu Engineering Center of Network Monitoring, Nanjing University of Information Science and Technology, Nanjing, China. His research interests include data mining, big data analytics, and cloud computing.

Dr. Fang is a Senior Member of the China Computer Federation and a member of the Association for Computing Machinery.

Yu Xue received the Ph.D. degree from the College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China, in 2013.

He is a Lecturer with the Jiangsu Engineering Center of Network Monitoring, Nanjing University of Information Science and Technology, Nanjing, where he is also a Lecturer with the School of Computer and Software. His research interests include computational intelligence, electronic countermeasure, and the Internet of Things.

Dr. Xue is a member of the China Computer Federation.