
Vehicle Detection and Counting by Using Headlight Information in the Dark Environment
Thou-Ho (Chao-Ho) Chen¹, Jun-Liang Chen², Chin-Hsing Chen² and Chao-Ming Chang³

¹ Department of Electronic Engineering, National Kaohsiung University of Applied Sciences, Kaohsiung, Taiwan, R.O.C.
² Institute of Computer and Communication Engineering, National Cheng Kung University, Tainan, Taiwan, R.O.C.
³ Huper Laboratories Co., Taipei, Taiwan, R.O.C.
¹ thouho@cc.kuas.edu.tw, ² q3895122@mail.ncku.edu.tw

Abstract
This paper is dedicated to detecting and counting vehicles in a dark (nighttime) environment by using headlight information. The basic idea is to use the variation ratio in color space to detect the ground-illumination produced by vehicle headlights. Headlight classification then provides the headlight information for determining the moving-object region and for compensating pixels that are wrongly classified as ground-illumination back into the object mask. Besides, shadow is detected by prediction rules and excluded to derive better results for vehicle segmentation and counting. Experimental results show that the proposed algorithm can detect vehicles and reduce the effects of both ground-illumination and shadow. Under the normal (non-crowded) condition, the average accuracy approaches 90%.

1. Introduction
Owing to the fast development of segmentation technology in recent years, intelligent transportation based on computer vision is becoming more and more practicable. Traffic monitoring is important, especially in the dark environment, where many traffic problems arise, such as traffic jams and accidents. When driving in the dark, drivers normally turn on the headlights to obtain a clear view of the road. These headlamps illuminate the ground, and the illuminated region is classified as a moving object even though it actually belongs to the background. Ground-illumination therefore deeply decreases the accuracy of object segmentation and makes segmentation in the dark environment difficult.

The reported researches [1]-[3] accomplish segmentation in the nighttime condition. In [1] and [2], reasoning rules are used to distinguish between a headlight pair and ground-illumination, and the headlight information is then employed to represent the vehicle instead of the object region. In [3], infrared images, which provide the level of thermal radiation, are adopted for the detection and classification of obstacles in night-vision traffic scenes. By edge detection, symmetry detection, pyramid linking and classification, cars and pedestrians in the image can be detected. Nevertheless, that approach relies only on spatial information, which is not enough for traffic-flow counting unless temporal data is also available. Based on both spatial and temporal information, this paper proposes a vehicle-flow analysis method for the dark (nighttime) environment that achieves detection and counting by headlight information.

2. Methodology

The flowchart of the proposed vehicle detection and counting method is depicted in Fig. 1. First, change detection [4] is employed to obtain the initial object mask $OM_i$. Afterward, the ground-illumination detection module detects the illumination on the ground and removes it from $OM_i$. The vehicle headlight detection module includes high-intensity region detection and headlight classification for cars and bikes. The headlight information is then used for object compensation, shadow region prediction and vehicle counting. Pixels detected as ground-illumination inside the compensation region are compensated back to the object mask. Finally, shadow detection finds the shadow pixels and removes them to acquire the final object mask.

Fig. 1. The proposed method for vehicle detection and counting.

2.1. Ground-illumination Detection


In order to reduce the error region of object segmentation, we employ the idea of the color variation ratio [5] under different light-source conditions to detect pixels of ground-illumination. Generally speaking, the color of a streetlamp is either yellow or white, and so is that of a vehicle headlight. Yellow streetlamps make the ground seem yellow, while white streetlamps make it look blue (if there are trees near the streetlamps, it may appear green due to reflection from the leaves). Likewise, ground-illumination produced by yellow headlights appears yellow, while that produced by white ones looks blue.

Ground-illumination belongs to the foreground area and has two color situations similar to those of streetlamps, so Table 1 displays four cases of ground-illumination. Since a headlight is much closer to the ground than a streetlamp, the color of the ground-illumination resembles that of the headlight.

Table 1. Ground-illumination color under different lights of streetlamp.

Obviously, the ground-illumination always lies in front of the vehicle. Moreover, it increases the intensity value in the illuminated area. Using the properties introduced above, we roughly divide the background into the yellow and white conditions by estimating the average values $R_{mean}$, $G_{mean}$ and $B_{mean}$ of the background region in $OM_i$, as equation (1) shows. Equation (2) describes the classification of the yellow and white streetlamp conditions.

$$R_{mean} = \frac{\sum_{OM_i(x,y)=0} R_{back}(x,y)}{\sum_{OM_i(x,y)=0} 1},\qquad G_{mean} = \frac{\sum_{OM_i(x,y)=0} G_{back}(x,y)}{\sum_{OM_i(x,y)=0} 1},\qquad B_{mean} = \frac{\sum_{OM_i(x,y)=0} B_{back}(x,y)}{\sum_{OM_i(x,y)=0} 1} \tag{1}$$

$$\text{if } (R_{mean} > G_{mean} > B_{mean}) \Rightarrow \text{yellow streetlamp condition}$$
$$\text{else if } (B_{mean} \ge G_{mean} > R_{mean} \text{ or } G_{mean} > B_{mean} > R_{mean}) \Rightarrow \text{white streetlamp condition} \tag{2}$$
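As a minimal illustration of equations (1) and (2), the following Python/NumPy sketch (our own, not the authors' code) averages the background colors over the pixels outside the initial object mask and applies the ordering tests:

```python
import numpy as np

def classify_streetlamp(background_rgb, om_i):
    """Classify the streetlamp condition following equations (1)-(2).

    background_rgb: H x W x 3 float array (R, G, B background frame)
    om_i:           H x W initial object mask (nonzero = foreground)
    """
    bg = (om_i == 0)                                  # background pixels of OM_i
    r_mean = background_rgb[..., 0][bg].mean()        # equation (1)
    g_mean = background_rgb[..., 1][bg].mean()
    b_mean = background_rgb[..., 2][bg].mean()

    if r_mean > g_mean > b_mean:                      # equation (2)
        return "yellow"
    if (b_mean >= g_mean > r_mean) or (g_mean > b_mean > r_mean):
        return "white"
    return "undetermined"                             # ordering not covered by (2)
```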

We define three ratios, $R_{ratio}$, $G_{ratio}$ and $B_{ratio}$, in equation (3). These values represent the level of variation in each color channel. In the following, we treat each condition described in Table 1 in turn to implement ground-illumination detection.

$$R_{ratio} = \frac{R_c}{R_{back}};\qquad G_{ratio} = \frac{G_c}{G_{back}};\qquad B_{ratio} = \frac{B_c}{B_{back}} \tag{3}$$

where $R_c$, $G_c$, $B_c$ are the values of the current frame and $R_{back}$, $G_{back}$, $B_{back}$ those of the background frame.
In condition (a) of Table 1, the level order of the R, G and B components does not change. Furthermore, the saturation value decreases and the intensity value increases due to the illumination, hence the variation of the B channel is larger than that of the R or G channel. Equation (4-a) shows the conditions of ground-illumination detection.

$$\text{if } (I_{diff} > 0 \text{ and } R_c > G_c > B_c \text{ and } B_{ratio} > R_{ratio} \text{ and } B_{ratio} > G_{ratio}) \Rightarrow \text{ground-illumination} \tag{4-a}$$

$$\text{if } (I_{diff} > 0 \text{ and } B_c > R_c \text{ and } B_c > G_c \text{ and } B_{ratio} > R_{ratio} \text{ and } B_{ratio} > G_{ratio}) \Rightarrow \text{ground-illumination} \tag{4-b}$$

$$\text{if } (I_{diff} > 0 \text{ and } R_c > G_c > B_c \text{ and } R_{ratio} > G_{ratio} \text{ and } R_{ratio} > B_{ratio}) \Rightarrow \text{ground-illumination} \tag{4-c}$$

$$\text{if } (I_{diff} > 0 \text{ and } B_c > R_c \text{ and } B_c > G_c \text{ and } R_{ratio} > G_{ratio} \text{ and } R_{ratio} > B_{ratio}) \Rightarrow \text{ground-illumination} \tag{4-d}$$
where $I_{diff} = I_c - I_{back}$, $I_c$ is the intensity value of the current frame and $I_{back}$ is the intensity value of the background frame. If $I_{diff} < 0$, it is set to zero.

In condition (b), the value of the B component is smaller than those of R and G in the yellow background but bigger in the ground-illumination region. Consequently, the variation of B is larger than that of the other two channels. The conditions of ground-illumination detection are as equation (4-b) shows.
In condition (c), the relation $B_{back} > G_{back} > R_{back}$ is satisfied. Due to the yellow illumination, the value of the R component becomes bigger than those of the other two channels, hence the variation of R is stronger. Equation (4-c) shows the conditions for this type of illumination.
Condition (d) is similar to (a): the order does not change and the saturation value decreases, so that $R_{ratio}$ is bigger than $G_{ratio}$ or $B_{ratio}$. The conditions are shown in equation (4-d).

After the detection, pixels belonging to ground-illumination are removed from the initial object mask to obtain a more correct segmentation result. However, some vehicles illuminated by streetlamps may exhibit features similar to ground-illumination and thus be removed in error, so we propose a method to compensate such object pixels back to the object mask.
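To make the rules above concrete, the following NumPy sketch is our own vectorized reading of equations (3) and (4-a)-(4-d); approximating the intensity $I$ by the RGB mean and guarding the division with a small epsilon are assumptions, since the paper does not specify these details.

```python
import numpy as np

def detect_ground_illumination(cur, back, yellow_lamp, eps=1e-6):
    """Return a boolean mask of ground-illumination pixels.

    cur, back:   H x W x 3 float arrays (current and background RGB frames)
    yellow_lamp: True under the yellow streetlamp condition, False under white
    """
    rc, gc, bc = cur[..., 0], cur[..., 1], cur[..., 2]
    rb, gb, bb = back[..., 0], back[..., 1], back[..., 2]

    # equation (3): per-channel variation ratios
    r_ratio, g_ratio, b_ratio = rc / (rb + eps), gc / (gb + eps), bc / (bb + eps)

    # I_diff = I_c - I_back, set to zero when negative (intensity ~ RGB mean)
    i_diff = np.clip(cur.mean(axis=2) - back.mean(axis=2), 0.0, None)
    lit = i_diff > 0

    yellowish = (rc > gc) & (gc > bc)                 # R > G > B appearance
    bluish = (bc > rc) & (bc > gc)                    # B-dominant appearance
    b_dominant = (b_ratio > r_ratio) & (b_ratio > g_ratio)
    r_dominant = (r_ratio > g_ratio) & (r_ratio > b_ratio)

    if yellow_lamp:
        # conditions (a) and (b): equations (4-a) and (4-b)
        return lit & b_dominant & (yellowish | bluish)
    # conditions (c) and (d): equations (4-c) and (4-d)
    return lit & r_dominant & (yellowish | bluish)
```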

2.2. Vehicle Headlight Detection


To implement the compensation of vehicles, we detect the headlights first. A headlight is a light source, hence it appears quite bright in comparison with the background. To avoid detecting other bright objects in the background, we only deal with the region of $OM_i$. The intensity value of a white headlamp is almost 255; however, it is not so high for a yellow headlight. We therefore use the R component, which takes extremely large values for both kinds of headlights, instead of the intensity frame. First, the dynamic range of intensity in the foreground region is estimated and the maximum $Max_{gray}$ and minimum $Min_{gray}$ are obtained. Afterward we obtain the gray range $Gr$ by equation (5), and then determine the gray interval of the light source. Finally, as equation (6) shows, we get the detection result.
$$Gr = (Max_{gray} - Min_{gray})\,/\,c \tag{5}$$

where $c$ is a constant, set to 10 in our method.


$$\text{if } (R > Max_{gray} - Gr \text{ and } Max_{gray} - 2Gr \le I \le Max_{gray}) \Rightarrow \text{yellow headlight}$$
$$\text{if } (B > Max_{gray} - Gr \text{ and } Max_{gray} - Gr \le I \le Max_{gray}) \Rightarrow \text{white headlight} \tag{6}$$
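The gray-range test of equations (5) and (6) might be coded as follows; approximating the intensity $I$ by the RGB mean and restricting the search to the foreground of $OM_i$ reflect the description above, but the exact intensity definition is our assumption.

```python
import numpy as np

def detect_headlight_pixels(cur, om_i, c=10):
    """Label candidate headlight pixels following equations (5)-(6).

    cur:  H x W x 3 float RGB frame
    om_i: H x W initial object mask (nonzero = foreground)
    Returns two boolean masks: (yellow_headlight, white_headlight).
    """
    intensity = cur.mean(axis=2)                      # intensity approximation
    fg = om_i != 0                                    # search only inside OM_i
    max_gray, min_gray = intensity[fg].max(), intensity[fg].min()
    gr = (max_gray - min_gray) / c                    # equation (5), c = 10

    r, b = cur[..., 0], cur[..., 2]
    in_wide = (intensity >= max_gray - 2 * gr) & (intensity <= max_gray)
    in_narrow = (intensity >= max_gray - gr) & (intensity <= max_gray)
    yellow = fg & (r > max_gray - gr) & in_wide       # equation (6), yellow case
    white = fg & (b > max_gray - gr) & in_narrow      # equation (6), white case
    return yellow, white
```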
After this detection we acquire the initial headlight information, but some situations, such as overly bright ground-illumination, may lead to erroneous detections. For this reason we add some rules to distinguish the headlights from the errors and to further classify car and bike lamps.

Each car has a pair of headlights while each bike has only one. For convenience, the center point of each light mass stands for the whole mass. First, for a car the slope of the line through the two headlight points must be small (in our method, the slope threshold is set to 0.2). Moreover, the pixels on the line segment between the two points of a car should all belong to the foreground object, whereas part of the pixels on the line segment between the points of two bikes belong to the background, as shown in Fig. 2. Since a bike appears thin, we compare the number of object pixels in the horizontal direction of the light point with the number in the vertical direction; a light is classified as a bike headlight if it satisfies this thinness condition.
Fig. 2. Headlight pair and two bike headlights.
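The pairing rules might be sketched as follows. The centroid inputs, the 0.2 slope threshold and the line-segment foreground test follow the description above; how the light masses are extracted (e.g., connected components) is left outside the sketch, and the bike thinness test is applied afterwards to the remaining single lights.

```python
import numpy as np

def pair_headlights(centers, mask, slope_max=0.2):
    """Group headlight centroids into car pairs using the rules above.

    centers:   list of (x, y) integer light-mass center points
    mask:      H x W boolean object mask
    slope_max: maximum slope of the line through a headlight pair
    Returns (car_pairs, singles); singles are bike candidates, to be
    confirmed afterwards by the horizontal-vs-vertical thinness test.
    """
    used, pairs = set(), []
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            if i in used or j in used:
                continue
            (x1, y1), (x2, y2) = centers[i], centers[j]
            if x1 == x2:
                continue                              # vertical pair: not a car
            if abs(y2 - y1) / abs(x2 - x1) > slope_max:
                continue                              # line not nearly horizontal
            # all pixels on the segment between the two lights must be foreground
            n = max(abs(x2 - x1), abs(y2 - y1)) + 1
            xs = np.linspace(x1, x2, n).round().astype(int)
            ys = np.linspace(y1, y2, n).round().astype(int)
            if mask[ys, xs].all():
                pairs.append((i, j))
                used.update((i, j))
    singles = [k for k in range(len(centers)) if k not in used]
    return pairs, singles
```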

2.3. Object Compensation


After headlight classification, we use the distance between the two light points of a car to determine a compensation region whose pixels should belong to the object; for a bike, the width of the headlight mass determines the region. Pixels previously detected as ground-illumination inside these regions are compensated back to the object mask to increase the accuracy of segmentation.
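A possible reading of the compensation step for a car is sketched below. The paper only states that the pair distance (or the bike lamp width) determines the region, so the rectangular geometry and the depth_factor parameter are our assumptions.

```python
import numpy as np

def compensate_car_region(mask, ground, p1, p2, depth_factor=1.0):
    """Re-admit ground-illumination pixels lying between a car's headlights.

    mask:   H x W boolean object mask
    ground: H x W boolean ground-illumination mask
    p1, p2: (x, y) integer headlight centers of one car pair
    depth_factor: hypothetical knob scaling the region height from the
                  headlight-pair distance (not specified in the paper)
    """
    (x1, y1), (x2, y2) = p1, p2
    width = abs(x2 - x1)                              # headlight-pair distance
    x_lo, x_hi = min(x1, x2), max(x1, x2)
    y_lo = min(y1, y2)
    y_hi = min(mask.shape[0] - 1, y_lo + int(width * depth_factor))
    region = np.zeros_like(mask)
    region[y_lo:y_hi + 1, x_lo:x_hi + 1] = True
    return mask | (region & ground)                   # compensated object mask
```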

2.4. Shadow Region Prediction and Detection


Shadows following an object reduce the intensity value without changing the features of the ground. We use the headlight information to predict an area below each light in which shadow detection is carried out. We adopt the concepts of [6]: a shadow darkens each color component of the point on which it is cast; moreover, the color components do not change their level order, and the photometric invariant feature describing the color configuration does not change either. In the predicted region we employ these concepts to detect possible shadow pixels; these pixels are eliminated in the end and the final object mask is obtained.
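Following the concepts borrowed from [6], a shadow pixel is darker in every channel while the channel level order is preserved. A minimal per-pixel test is sketched below; the darkening bounds dark_lo and dark_hi are hypothetical, as the paper gives no numeric thresholds.

```python
import numpy as np

def detect_shadow(cur, back, predicted, dark_lo=0.4, dark_hi=0.95):
    """Mark shadow pixels inside the headlight-predicted region.

    cur, back: H x W x 3 float RGB frames
    predicted: H x W boolean region predicted below each headlight
    dark_lo, dark_hi: hypothetical bounds on the darkening ratio
    """
    ratio = cur / np.clip(back, 1e-6, None)
    all_darker = ((ratio > dark_lo) & (ratio < dark_hi)).all(axis=2)
    # the level order of the color components must be preserved
    same_order = (np.argsort(cur, axis=2) == np.argsort(back, axis=2)).all(axis=2)
    return predicted & all_darker & same_order
```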

2.5. Vehicle Counting


In addition to object segmentation, the headlight information can also be used to implement vehicle counting. We draw a horizontal counting line in the frame; when a headlight point passes through the line, the vehicle counter is incremented by one.
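The counting step is straightforward. The sketch below is our own formulation; it assumes headlight points are tracked across frames with stable IDs (a detail the paper does not elaborate) and counts a vehicle when its point crosses the line moving downward, matching the oncoming-traffic setup of Section 3.

```python
def update_count(prev_points, cur_points, line_y, count):
    """Increment the vehicle counter for headlight points crossing the line.

    prev_points, cur_points: dict track_id -> (x, y) headlight center
    line_y: row index of the horizontal counting line
    count:  running vehicle count
    """
    for tid, (_, y) in cur_points.items():
        if tid in prev_points:
            _, y_prev = prev_points[tid]
            if y_prev < line_y <= y:          # crossed the line moving downward
                count += 1
    return count
```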

3. Experimental results

In order to obtain the headlight information, we focus on a one-way traffic condition with oncoming vehicles: vehicles appear at the top of the frame and vanish after passing through the bottom. The results of vehicle segmentation on three sequences are shown in Fig. 3. We use equation (7) to estimate the accuracy of the initial and final object masks, and the error improvement ratio is then calculated using equation (8).
$$Accuracy = \left(1 - \frac{\sum_{(x,y)} \left[ OM_{seg}(x,y) \oplus OM_{ref}(x,y) \right]}{\sum_{(x,y)} \left[ OM_{seg}(x,y) + OM_{ref}(x,y) \right]} \right) \times 100\% \tag{7}$$

where $OM_{ref}(x,y)$ is the ideal alpha map, $OM_{seg}(x,y)$ is the object mask obtained from the proposed algorithm, $\oplus$ is the exclusive-OR operator and $+$ is the OR operator.

$$Improvement = \frac{error_{initial} - error_{final}}{error_{initial}} \times 100\% \tag{8}$$

where $error_{initial} = \sum_{(x,y)} \left[ OM_{initial}(x,y) \oplus OM_{ref}(x,y) \right]$ and $error_{final} = \sum_{(x,y)} \left[ OM_{final}(x,y) \oplus OM_{ref}(x,y) \right]$.
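Equations (7) and (8) translate directly into NumPy; the sketch below assumes binary masks and is only an illustration of the two metrics, not the authors' evaluation code.

```python
import numpy as np

def accuracy(om_seg, om_ref):
    """Equation (7): XOR error normalized by the OR of the two binary masks."""
    xor = np.logical_xor(om_seg, om_ref).sum()
    union = np.logical_or(om_seg, om_ref).sum()
    return (1.0 - xor / union) * 100.0

def improvement(om_initial, om_final, om_ref):
    """Equation (8): relative reduction of the XOR error."""
    e_init = np.logical_xor(om_initial, om_ref).sum()
    e_final = np.logical_xor(om_final, om_ref).sum()
    return (e_init - e_final) / e_init * 100.0
```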

Fig. 3. Vehicle segmentation results with original frame, initial object mask, and final object mask for (a) one bike; (b) one car; (c) case-1 of several vehicles; (d) case-2 of several vehicles.

Table 2. Results of vehicle counting.

          Vehicle   Count   Error Positive   Error Negative   Accuracy (%)
Case 1       6        6           0                0              100
Case 2       8        7           1                0              87.5
Case 3      11       11           1                1              81.8
Average                                                           89.8


The average accuracy of segmentation is raised from 30.78%, 18.16% and 33.44% for the initial object masks to 56.39%, 47.99% and 47.22% for the final object masks, respectively; the corresponding average error improvement ratios are 63.36%, 72.87% and 47.22%.

To analyze the traffic flow, cars and bikes are combined as vehicles. Three sequences are used in our experiment and Table 2 shows the results of vehicle counting: the average accuracy of vehicle counting is near 90%.


4. Conclusions
In this paper we propose a method that exploits the characteristics of color variation and headlight information to implement vehicle segmentation in nighttime traffic scenes. Ground-illumination is roughly eliminated from the initial object mask to obtain a more acceptable result. Besides, the headlight information is utilized to achieve vehicle-flow counting instead of relying on the whole vehicle body. Experimental results reveal that vehicles can be detected, when drivers normally turn on their headlights in the dark environment, under a moderate vehicle-flow condition.

5. References
[1] R. Cucchiara and M. Piccardi, "Vehicle detection under day and night illumination," in Proc. ISCS-IIA99, Genoa, Italy, June 1999, pp. 789-794.
[2] R. Cucchiara, M. Piccardi and P. Mello, "Image Analysis and Rule-Based Reasoning for a Traffic Monitoring System," IEEE Transactions on Intelligent Transportation Systems, vol. 1, no. 2, June 2000.
[3] U. Meis, W. Ritter and H. Neumann, "Detection and Classification of Obstacles in Night Vision Traffic Scenes Based on Infrared Imagery," in Proc. IEEE International Conference on Intelligent Transportation Systems, Shanghai, Oct. 2003, pp. 1140-1145.
[4] T.-H. (C.-H.) Chen, Y.-C. Chiou, M.-K. Wu and Y.-F. Li, "An Efficient Video Segmentation Algorithm Using Change Detection and Background Updating Technique," in Proc. International Conference on Systems and Signals (ICSS), 2005.
[5] S. Nadimi and B. Bhanu, "Moving Shadow Detection Using a Physics-based Approach," in Proc. 16th IEEE International Conference on Pattern Recognition, Aug. 2002, pp. 701-704.
[6] A. Cavallaro, E. Salvador and T. Ebrahimi, "Shadow-aware Object-based Video Processing," IEE Proceedings - Vision, Image and Signal Processing, vol. 152, no. 4, pp. 14-22, Aug. 2005.
