
Workshop on Vehicle Retrieval in Surveillance (VRS) in conjunction with

2013 10th IEEE International Conference on Advanced Video and Signal Based Surveillance

Vehicle Make and Model Recognition Using Symmetrical SURF


Jun-Wei Hsieh
Dep. of C.S.E., NTOU
Keelung, Taiwan
shieh@ntou.edu.tw

Li-Chih Chen, Duan-Yu Chen
Dep. of E.E., YZU
Chung-Li, Taiwan
lcchen@mail.lit.edu.tw, dychen@saturn.yzu.edu.tw

Shyi-Chyi Cheng
Dep. of C.S.E., NTOU
Keelung, Taiwan
csc@ntou.edu.tw

Abstract
SURF (Speeded Up Robust Features) is a robust and
useful feature detector for various vision-based
applications but lacks the ability to detect symmetrical
objects. This paper proposes a new symmetrical SURF
descriptor to enrich the power of SURF to detect all
possible symmetrical matching pairs through a mirroring
transformation. A vehicle make-and-model recognition
(MMR) application is then adopted to prove the
practicability and feasibility of the method. To detect
vehicles, the proposed symmetrical descriptor is first
applied to determine the ROI of each vehicle on the road
without using any motion features. This scheme provides
two advantages: there is no need for background
subtraction, and it is extremely efficient for real-time
applications. Two MMR challenges, i.e.,
multiplicity and ambiguity problems, are then addressed.
The multiplicity problem stems from one vehicle model
often having different model shapes on the road. The
ambiguity problem results from vehicles from different
companies often sharing similar shapes. To address these
two problems, a grid division scheme is proposed to
separate a vehicle into several grids; different weak
classifiers that are trained on these grids are then
integrated to build a strong ensemble classifier. Because
of the rich representation power of the grid-based method
and the high accuracy of vehicle detection, the ensemble
classifier can accurately recognize each vehicle.

1. Introduction
One important task in computer vision is to find
correspondences between two images of the same object or
scene. To perform this task reliably, interest features with
high repeatability should first be detected from both
images, even under various lighting changes and
geometrical transformations. One of the most commonly
used methods to extract and represent interest points is
SIFT (Scale-Invariant Feature Transform) [1]. However, it
is not fast enough for online (real-time) applications. Thus,
in [2], Bay et al. proposed a novel scale-invariant feature
detector and descriptor, called SURF (Speeded Up Robust
Features), to compute and


compare feature points much faster. It relies on integral
images for image convolutions and is more efficient than
the SIFT method. However, it lacks support for finding
symmetrical pairs of feature points. In the real world,
objects with various symmetry properties are commonly
seen, especially horizontally symmetric ones; for example,
symmetry exists in many man-made objects, natural
scenes, and animals. To handle this problem, this paper
proposes a symmetrical SURF matching scheme and
applies it to vehicle analysis.
Vehicle analysis is an important task in various
applications, such as self-guided vehicles, driver assistance
systems, electronic toll collection, intelligent parking
systems, or in the measurement of traffic parameters such
as vehicle count, speed, and flow. A pre-requisite for
enabling this analysis is to accurately locate vehicles in
video images so that attribute extraction and comparison
can be performed. In actual cases, there are significant
variations in vehicles, including their colors, sizes,
orientations, shapes, and poses. To handle this variation
in vehicle appearance, different approaches [3]-[8]
have been proposed that use different features and learning
algorithms to locate vehicles. In the existing literature,
most techniques [3]-[5] adopt background subtraction to
detect moving vehicles from video sequences. For example,
Faro et al. [3] used a background modeling technique to
subtract possible vehicle pixels from the road and then
applied a segmentation scheme to remove partial and full
occlusions among vehicle blobs. In Unno et al. [4], motion
information and symmetry properties were integrated to
detect vehicles from videos. Jazayeri et al. [5] used an HMM
to probabilistically model vehicle motion and then
distinguished vehicles from the road. However, such
motion features cannot be found in still images. To address
this problem, this paper proposes a novel vehicle detection
scheme that searches for areas with high vertical symmetry
and locates vehicles from pairs of symmetrical SURF
matching points.
Once a vehicle is detected, the next task is vehicle
identification. For a high-security area, the identification of
vehicle MM (make and model) can offer valuable
assistance to the police when searching for suspect vehicles
[9]-[11]. With the increasing need for security awareness
and traffic control, this paper proposes a novel vehicle

MMR system to detect vehicles and recognize their makes
and models using a symmetrical SURF descriptor. Figure 1
shows the flowchart of our system. Figure 1(a) and Figure
1(b) are the components of vehicle detection and vehicle
MMR, respectively. Without using any motion features,
this paper proposes a symmetrical SURF descriptor to
detect all possible symmetric matching pairs. Each desired
vehicle ROI can then be accurately located through a
projection technique. In actual cases, one vehicle MM
often displays different shapes on the road. This paper
designates this property as the multiplicity of MMR. In
addition, vehicles manufactured by different companies
often share similar shapes. These multiplicity and
ambiguity problems cause many challenges in MMR. To
address both of these problems, this paper presents a
grid-based scheme to separate the vehicle's front region
into different grids. The SURF points are then extracted
from each grid to train various weak vehicle classifiers by
using an SVM learning algorithm [13]. With a Bayesian
averaging technique, these weak classifiers are then
integrated to form an ensemble classifier to recognize
vehicle types with extreme accuracy. The major
contributions of this work are noted as follows:
1) A novel symmetrical transformation is proposed to
translate a non-symmetrical SURF descriptor into a
symmetrical one.
2) A new vehicle detection scheme is proposed to detect
vehicles from moving cameras. The advantage of this
scheme is that it requires no background modeling or
subtraction.
3) An ensemble scheme is proposed to separate a
vehicle's front region into several grids and then
integrate the different classifiers trained on these grids
to build an accurate ensemble classifier.

2. Symmetrical SURFs

SURF [2] is more efficient than SIFT [1] and has thus
become one of the most commonly used detectors in the
field of computer vision. However, it lacks the capability of
finding symmetrical pairs of feature points. To provide this
symmetrical support, the relations between the SURF
descriptors of two symmetrical points should be derived.

Let B_o denote the original square extracted from an
interest point and B_m be its horizontally mirrored version.
For illustrative purposes, an 8x8 square example is shown
as follows:
$$B_o = \begin{bmatrix}
b_{0,0} & b_{0,1} & \cdots & b_{0,7} \\
b_{1,0} & b_{1,1} & \cdots & b_{1,7} \\
\vdots & \vdots & \ddots & \vdots \\
b_{7,0} & b_{7,1} & \cdots & b_{7,7}
\end{bmatrix}
\quad\text{and}\quad
B_m = \begin{bmatrix}
b_{0,7} & b_{0,6} & \cdots & b_{0,0} \\
b_{1,7} & b_{1,6} & \cdots & b_{1,0} \\
\vdots & \vdots & \ddots & \vdots \\
b_{7,7} & b_{7,6} & \cdots & b_{7,0}
\end{bmatrix},$$
where b_{y,x} is the intensity of B_o at the pixel (x, y). Let

$$B_{ij} = \begin{bmatrix} b_{2i,2j} & b_{2i,2j+1} \\ b_{2i+1,2j} & b_{2i+1,2j+1} \end{bmatrix}
\quad\text{and}\quad
B_{ij}^{m} = \begin{bmatrix} b_{2i,2j+1} & b_{2i,2j} \\ b_{2i+1,2j+1} & b_{2i+1,2j} \end{bmatrix}.$$

We can then divide B_o and B_m into 4x4 sub-regions as follows:
$$B_o = \begin{bmatrix}
B_{00} & B_{01} & B_{02} & B_{03} \\
B_{10} & B_{11} & B_{12} & B_{13} \\
B_{20} & B_{21} & B_{22} & B_{23} \\
B_{30} & B_{31} & B_{32} & B_{33}
\end{bmatrix}
\quad\text{and}\quad
B_m = \begin{bmatrix}
B_{03}^{m} & B_{02}^{m} & B_{01}^{m} & B_{00}^{m} \\
B_{13}^{m} & B_{12}^{m} & B_{11}^{m} & B_{10}^{m} \\
B_{23}^{m} & B_{22}^{m} & B_{21}^{m} & B_{20}^{m} \\
B_{33}^{m} & B_{32}^{m} & B_{31}^{m} & B_{30}^{m}
\end{bmatrix}. \quad (1)$$
For each sub-region B_{ij}, the sums of wavelet responses can
be calculated in the form

$$f_{ij} = \Big( \sum_{b \in B_{ij}} dx(b),\ \sum_{b \in B_{ij}} dy(b),\ \sum_{b \in B_{ij}} |dx(b)|,\ \sum_{b \in B_{ij}} |dy(b)| \Big), \quad (2)$$

where dx(b) = b_{y,x+1} - b_{y,x} and dy(b) = b_{y+1,x} - b_{y,x}. For
illustrative convenience, we use d^x_{i,j}, d^y_{i,j}, |d^x_{i,j}|, and
|d^y_{i,j}| to denote the four sums of wavelet responses. Then, we
have

$$d^x_{i,j} = b_{2i,2j+1} + b_{2i+1,2j+1} - b_{2i,2j} - b_{2i+1,2j},$$

and

$$d^y_{i,j} = b_{2i+1,2j} + b_{2i+1,2j+1} - b_{2i,2j} - b_{2i,2j+1}.$$

Let B_i = (B_{i0}, B_{i1}, B_{i2}, B_{i3}) and B_i^m = (B_{i3}^m, B_{i2}^m, B_{i1}^m, B_{i0}^m). From
B_i, a new feature vector f_i can be constructed, i.e.,

$$f_i = ( d^x_{i,0}, d^y_{i,0}, |d^x_{i,0}|, |d^y_{i,0}|,\ d^x_{i,1}, d^y_{i,1}, |d^x_{i,1}|, |d^y_{i,1}|,\ d^x_{i,2}, d^y_{i,2}, |d^x_{i,2}|, |d^y_{i,2}|,\ d^x_{i,3}, d^y_{i,3}, |d^x_{i,3}|, |d^y_{i,3}| ). \quad (3)$$

Similarly, from B_i^m, another feature vector f_i^m can be
constructed:

$$f_i^m = ( -d^x_{i,3}, d^y_{i,3}, |d^x_{i,3}|, |d^y_{i,3}|,\ -d^x_{i,2}, d^y_{i,2}, |d^x_{i,2}|, |d^y_{i,2}|,\ -d^x_{i,1}, d^y_{i,1}, |d^x_{i,1}|, |d^y_{i,1}|,\ -d^x_{i,0}, d^y_{i,0}, |d^x_{i,0}|, |d^y_{i,0}| ), \quad (4)$$

where the sign change on the d^x terms follows from the
horizontal mirroring, which reverses the direction of the
horizontal wavelet responses.

Figure 1: Flowchart of front vehicle MMR from pairs of
symmetrical SURF points. (a) Vehicle detection. (b) Vehicle
make and model recognition.
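As a concreteness check on Eq. (2) and the sums d^x_{i,j} and d^y_{i,j} above, the following minimal Python sketch computes the four wavelet-response sums for every 2x2 sub-region of an 8x8 patch. It only illustrates the notation; the function name and the NumPy layout are our own choices, not part of the original method.

    import numpy as np

    def block_features(B):
        # B: 8x8 intensity patch. Returns a 4x4x4 array whose (i, j) entry
        # holds (d^x_{i,j}, d^y_{i,j}, sum|dx|, sum|dy|) for sub-region B_ij.
        B = B.astype(float)
        feats = np.zeros((4, 4, 4))
        for i in range(4):
            for j in range(4):
                blk = B[2*i:2*i+2, 2*j:2*j+2]
                dx = blk[:, 1] - blk[:, 0]   # horizontal responses dx(b)
                dy = blk[1, :] - blk[0, :]   # vertical responses dy(b)
                feats[i, j] = (dx.sum(), dy.sum(),
                               np.abs(dx).sum(), np.abs(dy).sum())
        return feats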

With f_i and f_i^m, the SURF descriptors f_o and f_mir of
B_o and B_m can be constructed, respectively, as follows:

$$f_o = (f_0, f_1, f_2, f_3) \quad\text{and}\quad f_{mir} = (f_0^m, f_1^m, f_2^m, f_3^m). \quad (5)$$
The transformation between f_o and f_mir can be easily
built by converting each row f_i into f_i^m using the relations
between Eqs. (3) and (4). Then, given two SURF
descriptors f^p and f^q, their distance is defined as

$$SURF(f^p, f^q) = \sum_{m=1}^{4} \sum_{n=1}^{16} [f^p(m,n) - f^q(m,n)]^2. \quad (6)$$
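The mirroring transformation itself is cheap. The sketch below converts a 64-dimensional SURF descriptor into the descriptor of its horizontally mirrored patch, assuming the standard 4x4 sub-region layout with channel order (sum dx, sum dy, sum |dx|, sum |dy|); this layout assumption and the function name are ours.

    import numpy as np

    def mirror_surf_descriptor(f):
        # f: 64-D SURF descriptor viewed as (row, column, channel).
        g = f.reshape(4, 4, 4)[:, ::-1, :].copy()  # reverse sub-region columns
        g[..., 0] = -g[..., 0]  # mirroring negates the signed dx sums (Eq. (4))
        return g.reshape(64)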

Figure 2(a) shows the matching result of two symmetrical
points without using the proposed transformation, and
Figure 2(b) shows the matching result after the symmetric
transformation. It is clear that the transformed SURF
descriptor (denoted by a red line) is very similar to the
blue one. Figure 3 shows further examples of symmetry
matching on a vehicle and a lake.
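Built on this transformation, symmetry matching within one image can be sketched as follows: every descriptor is compared, using the distance of Eq. (6), against the mirrored versions of all other descriptors (mirror_surf_descriptor is the sketch above). A brute-force nearest-neighbor search is assumed here for clarity.

    import numpy as np

    def symmetric_matches(descriptors):
        # descriptors: (N, 64) array of SURF descriptors from one image.
        mirrored = np.array([mirror_surf_descriptor(d) for d in descriptors])
        pairs = []
        for p, d in enumerate(descriptors):
            dist = np.sum((mirrored - d) ** 2, axis=1)  # Eq. (6)
            dist[p] = np.inf                            # forbid self-matching
            pairs.append((p, int(np.argmin(dist))))
        return pairs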

Figure 2: Matching results of two symmetrical SURF points
before/after the symmetric transformation. (a) Before
transformation. (b) After transformation. Lines with different
colors denote the two symmetrical SURF points.

Figure 3: Matching results of SURF points with symmetry.

3. Vehicle Detection
Vehicle detection is an important task for many
applications such as navigation systems, driver assistance
systems, intelligent parking systems, or the measurement of
traffic parameters such as vehicle count, speed, and flow.
To detect moving vehicles in video sequences, the most
commonly adopted approach is to extract motion features
through background subtraction. However, this technique
is not stable when the background undergoes
environmental variations such as lighting changes or
camera vibrations. In addition, motion features are of no
use in still images. By taking advantage of the proposed
symmetrical SURF descriptor, a novel approach for
detecting vehicles on the road without using any motion
features will be proposed in this section.
Figure 1 shows the flowchart of our vehicle extraction
scheme. After matching, a set of pairs of symmetrical
SURF points is extracted. Let S denote this set of matching
pairs and c_{pq} denote the central position of the matching
pair (p, q) in S. With c_{pq}, a histogram-based method is
proposed to determine the central line L_vehicle of each
vehicle candidate. The histogram (denoted by H) is
calculated by counting the x-coordinates of the central
positions c_{pq} of all matching pairs (p, q) in S. The peak of
H then corresponds to the central line of the desired vehicle
candidate. As shown in Figure 4, (b) is the matching result
of symmetrical SURF points from (a); the histogram is
plotted along the bottom of (b), and (c) shows the line
L_vehicle, denoted by a yellow line.

Figure 4: Matching results of symmetrical SURF points.

After determining L_vehicle, an ROI R_v should be defined
to locate the vehicle candidate, i.e., R_v = (l_{R_v}, r_{R_v}, u_{R_v}, b_{R_v}),
where l_{R_v}, r_{R_v}, u_{R_v}, and b_{R_v} denote the left, right,
upper, and bottom boundaries of R_v, respectively. In
actual cases, a shadow region often exists underneath a
vehicle; as shown in Figure 4(c), a shadow area (marked by
a red rectangle) is found underneath the vehicle. The
shadow region can be used to define this ROI and to
determine whether R_v is an actual vehicle. The area
between the shadow line and the vehicle bumper forms a
horizontal edge line l_bumper (as shown in Figure 5), which
is used to define the bottom boundary of R_v, i.e.,
b_{R_v} = l_bumper. l_bumper can be easily detected using a
horizontal Sobel edge detector.
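A minimal sketch of the histogram voting step used to find L_vehicle is given below; the keypoint coordinates of each pair and the bin width are assumed inputs that the text above does not fix.

    import numpy as np

    def central_line(pairs_xy, image_width, bin_width=4):
        # pairs_xy: list of ((x_p, y_p), (x_q, y_q)) symmetrical matches.
        centers = np.array([(p[0] + q[0]) / 2.0 for p, q in pairs_xy])
        bins = np.arange(0, image_width + bin_width, bin_width)
        hist, edges = np.histogram(centers, bins=bins)
        k = int(np.argmax(hist))                # peak of H
        return 0.5 * (edges[k] + edges[k + 1])  # x-coordinate of L_vehicle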


As for the upper boundary u_{R_v}, because the hood and the
vehicle window form a longer horizontal boundary, this
paper uses this boundary to define u_{R_v}. Figure 5 shows
the notations of the different symbols used for extracting
R_v, where the green and yellow colors show the
y-projections of gradients from both sides of the central line.

To extract the left and right boundaries of R_v, the vehicle
width must first be defined. Let W_v and w_v denote its
width in the real world and on the image plane, respectively.
The relation between W_v and w_v can be derived as
follows [12]:

$$w_v(y) = ay + b, \quad (7)$$
where a and b are the parameters used to build the relation
between w_v and y. They can be easily estimated at the
training stage because the camera is fixed. With w_v(y),
the left and right boundaries of R_v are defined,
respectively, as follows:

$$l_{R_v} = x_{vehicle} - w_v(b_{R_v})/2 \quad\text{and}\quad r_{R_v} = x_{vehicle} + w_v(b_{R_v})/2, \quad (8)$$

where x_vehicle is the x position of L_vehicle.
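In code, Eqs. (7)-(8) amount to one linear fit and two subtractions. The least-squares fit from training observations (y_i, w_i) is our assumed way of estimating a and b for the fixed camera.

    import numpy as np

    def fit_width_model(ys, widths):
        # Fit w_v(y) = a*y + b from training rows ys and observed widths.
        a, b = np.polyfit(ys, widths, 1)
        return a, b

    def left_right_bounds(x_vehicle, b_rv, a, b):
        w = a * b_rv + b  # w_v evaluated at the bottom boundary row (Eq. (7))
        return x_vehicle - w / 2.0, x_vehicle + w / 2.0  # Eq. (8)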

Figure 5: Notations for finding the upper and bottom boundaries
of the ROI for vehicle MMR (u_{R_v}: upper boundary at the hood
boundary; l_bumper: shadow line; L_vehicle: central line at
x_vehicle).

4. Vehicle Make and Model Recognition


Once the front vehicle Rv is extracted, different features
will be extracted from Rv for vehicle MMR.

4.1. Grid-based Representation


To represent a vehicle MM, this paper divides R_v into
m x n grids of equal size. As shown in Figure 6(a), 3x6
grids are extracted from R_v and used for vehicle
classification, where m and n are set to three and six,
respectively. In Section 3, the hood boundary is used to
define the upper boundary u_{R_v} of R_v because of its
robustness to environmental changes. However, in terms
of recognition, the hood area contains little information for
vehicle MMR. Thus, we exclude the hood region when
dividing R_v into m x n grids. More precisely, R_v is
divided into (m+1) x n grids, from which m x n grids are
selected by eliminating the first row. This method is named
FID (full inside division). As shown in Figure 6(a), R_v is
divided into four rows; the last three rows are then selected
and further divided into six columns. Figure 6(b) shows the
result of this division on an actual vehicle image.

Figure 6: 3x6 grids extracted from the front view for vehicle
MMR. (a) Division method, with grids labeled 0L-8L and 0R-8R
on either side of the symmetric center line. (b) Result on a front
vehicle.
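A sketch of the FID division follows: the ROI is cut into (m+1) x n cells and the first (hood) row is dropped. Equal-sized cells with integer boundaries are an implementation assumption.

    import numpy as np

    def fid_grids(roi, m=3, n=6):
        # roi: cropped vehicle front image R_v; returns the m*n kept grids.
        h, w = roi.shape[:2]
        rows = np.linspace(0, h, m + 2, dtype=int)  # (m+1) row bands
        cols = np.linspace(0, w, n + 1, dtype=int)
        return [roi[rows[i]:rows[i+1], cols[j]:cols[j+1]]
                for i in range(1, m + 1)            # skip the hood row
                for j in range(n)]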
4.2. Vehicle MMR Using SURF

To train a specific vehicle model classifier using SVM, a
dataset X = {(x_i, y_i)}_{i=1,...,|X|} is collected, where x_i is a
vehicle example and y_i is the label of x_i. Each sample x_i is
further divided into m x n grids {g_{ij}}_{j=1,...,J}, where J = mn
and g_{ij} is the collection of SURF points extracted from the
jth grid of x_i. Then, X can be separated into J subsets G_j,
i.e., G_j = {(g_{ij}, y_i)}_{i=1,...,|X|}. From G_j, we train its
corresponding classifier h_j by SVM.

Let Ω denote the set of class labels, i.e., Ω = {l_1, l_2, ..., l_C}.
The problem of recognizing a front vehicle V can then be
modeled as a Bayesian MAP optimization, i.e.,

$$l_{opt} = \arg\max_{l} P(l \mid V) = \arg\max_{l} \sum_{j=1}^{J} P(l \mid h_j) P(h_j \mid V), \quad (9)$$

where l_opt is the optimal vehicle class to interpret V,
P(l | h_j) is the probability of the SURF points in V being
classified into the lth class by h_j, and P(h_j | V) is the
importance of the SURF points in V contributing to h_j.

With the grid-based representation, V is divided into J
grids, i.e., V = {g_j^v}_{j=1,...,J}. Let n_j^surf denote the number
of SURF points extracted from g_j^v, and let n_l^j denote the
number of SURF points in g_j^v classified into the class l by
h_j. Then, we can calculate P(l | h_j) in the form

$$P(l \mid h_j) = \frac{n_l^j}{n_j^{surf}}. \quad (10)$$

As to the term P(h_j | V) in Eq. (9), from the Bayesian rule,
it can be further decomposed into

$$P(h_j \mid V) = \frac{P(V \mid h_j) P(h_j)}{P(V)} \propto P(V \mid h_j) P(h_j), \quad (11)$$

where P(V | h_j) is the likelihood of h_j being important to V,
and P(V) and P(h_j) are the priors of V and h_j,
respectively. Then, Eq. (9) can be rewritten as

$$P(l \mid V) \propto \sum_{j=1}^{J} P(l \mid h_j) P(V \mid h_j) P(h_j). \quad (12)$$

Let $n_V^{surf} = \sum_{j=1}^{J} n_j^{surf}$. Then, P(V | h_j) can be calculated as

$$P(V \mid h_j) = \frac{n_j^{surf}}{n_V^{surf}}, \quad (13)$$
Plugging Eq. (13) and Eq. (10) into Eq. (12), the ensemble
scheme to determine the optimal vehicle type is

$$l_{opt} = \arg\max_{l} \sum_{j=1}^{J} n_l^j P(h_j), \quad (14)$$

where P(h_j) is set according to the importance of h_j.
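Since n_l^j is just a count of per-point votes, Eq. (14) reduces to a weighted voting loop. The sketch below assumes each grid classifier h_j has already labeled every SURF point in its grid; the data structures are illustrative.

    import numpy as np

    def ensemble_vote(grid_labels, grid_priors, num_classes):
        # grid_labels[j]: predicted class of every SURF point in grid j
        # (the output of classifier h_j); grid_priors[j] = P(h_j).
        scores = np.zeros(num_classes)
        for labels, prior in zip(grid_labels, grid_priors):
            for l in labels:
                scores[l] += prior     # accumulates n_l^j * P(h_j)
        return int(np.argmax(scores))  # l_opt of Eq. (14)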

5. Experimental Results

To train the vehicle classifier, a database containing
2846 vehicles was collected; the SVM library of [13] was
used for training. In addition, another testing database
with 4090 vehicles was also collected. Twenty-nine vehicle
MMs were collected in this paper for performance
evaluation. The speed of our system is about 21 fps, and
the velocity of observed vehicles is permitted to be up to
65 km/h.
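For concreteness, a hypothetical per-grid training loop is shown below; scikit-learn's SVC (a LIBSVM wrapper) stands in for the library of [13], and the per-grid dataset layout is our assumption.

    from sklearn.svm import SVC

    def train_grid_classifiers(grid_datasets):
        # grid_datasets[j] = (X_j, y_j): SURF descriptors of grid j over all
        # training vehicles (subset G_j) and their vehicle-model labels.
        return [SVC(kernel='rbf').fit(X_j, y_j) for X_j, y_j in grid_datasets]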

Figure 7: Results of front vehicle detection when side views were
handled.

Figure 8: Results of vehicle detection on cloudy days.

Figure 9: Results of vehicle detection on rainy days.

Figure 10: Detection results when the car's front lights are turned
on.

Figure 11: Results of vehicle detection when irrelevant objects
appeared in the scenes.

Figure 12: Detection results of vehicles in night scenes.

Figure 13: Detection results of vehicles at different scales in the
same scene. (a) and (b): Results from daytime. (c): Result from a
night scene.

Figure 7 shows cases of vehicle detection with different
pan rotations. Our method still works with viewing angle
variations panning from -20° to 20°. Figure 8 shows cases
of vehicle detection on cloudy days. On cloudy days, the
weak lighting casts no strong shadows on vehicles and
produces no strong reflections; thus, our method of front
vehicle detection works better under cloudy conditions
than on sunny days. Rainy days are another challenge
because rain can blur features and thus lead to failures of
vehicle detection. Figure 9 shows cases of vehicle detection
on rainy days.

In the daytime, the car lights can form a noisy pattern,
affecting the task of vehicle detection. Figure 10 shows the
results of vehicle detection when the front lights were
turned on. Another challenge is the appearance of
irrelevant objects in the analyzed scene; they produce
various occlusions that can affect the accuracy of vehicle
detection. Figure 11 shows cases in which irrelevant
objects appeared in the analyzed scenes. Our method also
works well in night scenes. For vehicles in night scenes,
the front lights form a pair of symmetrical SURF points;
by finding these symmetrical pairs of front lights, vehicles
in night scenes can be easily detected. Figure 12 shows the
detection results of vehicles in night scenes.


In addition to front-view vehicle detection, our proposed
method also works well for rear-view vehicle detection.
Figure 13 shows detection results for multiple vehicles at
different scales in the same scene from rear views. Notice
that the result in (c) was obtained from a night scene.
Performance       Unno et al. [4]   Dai et al. [6]   Teoh et al. [7]   Cheon et al. [8]   Our Method
Precision (%)     67.88             76.81            88.35             91.26              98.48
False Alarm (%)   8.61              10.49            3.65              2.92               1.34
Miss Rate (%)     9.51              8.4              2.3               6.82               0.49
Speed (fps)       36.98             12.95            15.83             14.99              43.83

Table 1. Performance comparisons of vehicle detection among
our method and other symmetry-based methods [4], [6]-[8].

In the existing literature, there are different methods [4],
[6]-[8] that use the symmetry feature to detect vehicles.
Table 1 lists the performance comparisons of vehicle
detection among these methods [4], [6]-[8]. In [4], vehicle
candidates were detected through background subtraction
and then verified by their symmetries. Its accuracy was the
worst because its performance strongly depends on the
subtraction results and on a proper window scale for
measuring vehicle symmetry. As for the method in [6],
non-road regions were isolated from roads with a
thresholding technique and then verified by searching for
their best symmetrical lines. Its precision is only slightly
better than that of [4] because its performance depends on
the thresholding results and a proper choice of window
scale; moreover, if the scene is highly textured, a high
false alarm rate is returned. In [7], possible symmetric
objects were detected by a contour-based symmetry
detector and then further verified by a two-class SVM
classifier. Its accuracy is better than that of [4] and [6]; its
failure cases were produced by poor edge detection results
and improper window sizes for calculating the symmetry
measure. In [8], an HOG-based symmetry detector was
proposed to detect vehicles from the result of shadow
detection. Its precision is better than that of [4], [6], and
[7]; its failure cases were frequently caused by inaccurate
generation of vehicle hypotheses and inaccurate HOG
symmetry vectors.
Types   Toyota  Toyota  Toyota  Toyota  Toyota  Toyota  Toyota  Toyota
        Altis   Camry   Vios    Wish    Yaris   Previa  Inno.   Surf
[9]     92.41   87.68   93.21   90.68   93.61   83.53   86.37   88.37
[10]    82.18   52.61   67.08   77.27   76.59   44.83   40.91   76.74
[11]    89.70   83.58   90.54   93.71   85.73   86.81   66.93   86.17
FID     99.46   99.77   99.69   100     100     98.28   97.73   100

Types   Toyota  Toyota  Honda   Honda   Honda   Nissan  Nissan  Nissan
        Tercel  Rav4    CRV     Civic   FIT     March   Livna   Teana
[9]     90.48   95      95      88.85   91.52   88.17   94.47   91.84
[10]    51.19   96.43   85.47   69.26   64.63   64.75   56.5    58.33
[11]    76.91   86.52   86.88   85.78   90.36   85.43   88.56   82.34
FID     100     100     100     99.68   98.78   96.40   100     100

Types   Nissan  Nissan  Nissan  Nissan  Mitsub.  Mitsub.   Mitsub.  Mitsub.
        Sentra  Cefiro  Xtrail  Tiida   Zinger   Outland.  Savrin   Lancer
[9]     81.10   72.50   81.57   93.94   82.04    92.32     95       83.13
[10]    65.85   60.53   33.3    56.42   77.27    54.93     86.36    25
[11]    81.10   75.0    76.77   89.69   82.04    84.30     95       57
FID     92.68   94.74   97.98   100     95.45    94.37     100      95

Types   Suzuki  Ford    Ford    Ford    Ford     Average
        Solio   Liata   Escape  Mond.   Tierra   Accuracy
[9]     87.61   95      84.20   76      91.93    90.52
[10]    73.33   81.82   47.73   57.1    74.19    67.33
[11]    83.39   86.36   75.57   51.58   85.80    85.90
FID     98.89   100     96.59   88.57   96.77    99.07

Table 2. Performance comparisons (accuracy, %) of vehicle MMR
among our method (FID) and other schemes [9]-[11].

Three methods [9]-[11] were implemented in this paper
to make fair comparisons in vehicle MMR. The method in
[9] used a license plate detector to define a vehicle ROI and
applied a SIFT matching scheme to retrieve the desired
vehicle MMs. Its accuracy strongly depends on the success
of license plate detection, and it is inefficient in vehicle
MMR due to the use of SIFT matching. In [10], a global
feature vector of edges was extracted from each vehicle
ROI and then classified by KNN classifiers. Compared
with the other methods, its accuracy was the worst because
the global edge feature it uses lacks the discriminative
power to classify vehicles into detailed categories. As for
the method proposed in [11], similar to [9] and [10], a
vehicle ROI was defined from the license plate location;
a global feature vector (including gradient and corner
responses) was then extracted and classified into different
types by a naive Bayes classifier. Its accuracy is better than
that of [10] because more features are adopted in vehicle
MMR.

The accuracies of the above methods strongly depend on
the success of license plate detection. From the above
analyses, our method is the best in both accuracy and
efficiency.

6. Conclusions

This paper has presented a novel symmetrical SURF
descriptor and applied it to a vehicle MMR system that
detects vehicles and recognizes their makes and models
with high accuracy. Experimental results have demonstrated
the superiority of our proposed system in vehicle MMR.

References



[1] D. G. Lowe, Distinctive image features from scale-invariant
keypoints, International Journal of Computer Vision, vol. 60, no. 2,
pp. 91-110, 2004.
[2] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, Speeded-Up
Robust Features (SURF), Computer Vision and Image
Understanding, vol. 110, no. 3, pp. 346-359, 2008.
[3] A. Faro, D. Giordano, and C. Spampinato, Adaptive background
modeling integrated with luminosity sensors and occlusion
processing for reliable vehicle detection, IEEE Transactions on
Intelligent Transportation Systems, vol. 12, no. 4, pp. 1398-1412,
2011.
[4] H. Unno, K. Ojima, K. Hayashibe, and H. Saji, Vehicle motion
tracking using symmetry of vehicle and background subtraction,
IEEE Intelligent Vehicles Symposium, 2007.
[5] A. Jazayeri, H.-Y. Cai, J.-Y. Zheng, and M. Tuceryan, Vehicle
detection and tracking in car video based on motion model, IEEE
Transactions on Intelligent Transportation Systems, vol. 12, no. 2,
pp. 583-595, 2011.
[6] B. Dai, Y. Fu, and T. Wu, A vehicle detection method via
symmetry in multi-scale windows, IEEE Industrial Electronics and
Applications, pp. 1827-1831, May 2007.
[7] S. Teoh and T. Bräunl, Symmetry-based monocular vehicle
detection system, Machine Vision and Applications, pp. 1-12, 2011.
[8] M. Cheon, W. Lee, C. Y. Yoon, and M. Park, Vision-based vehicle
detection system with consideration of the detecting location, IEEE
Transactions on Intelligent Transportation Systems, vol. 13, no. 3,
pp. 1243-1252, Sept. 2012.
[9] L. Dlagnekov, Video-based car surveillance: license plate, make,
and model recognition, M.S. thesis, Computer Science, University
of California, San Diego, 2005.
[10] D. T. Munroe and M. G. Madden, Multi-class and single-class
classification approaches to vehicle model recognition from images,
16th Irish Conference on Artificial Intelligence and Cognitive
Science, pp. 93-104, Sep. 2005.
[11] G. Pearce and N. Pears, Automatic make and model recognition
from frontal images of cars, 8th IEEE International Conference on
Advanced Video and Signal-Based Surveillance (AVSS), pp. 373-378,
Sep. 2011.
[12] S.-Y. Chen and J.-W. Hsieh, Jointing edge labeling and
geometrical constraint for lane detection and its application to
suspicious driving behavior analysis, Journal of Information
Science and Engineering, vol. 27, no. 2, pp. 715-732, 2011.
[13] C.-C. Chang and C.-J. Lin, LIBSVM: a library for support vector
machines, 2001. Software available at
http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[14] http://vbie.eic.nctu.edu.tw/en/introduction
