W. Shin et al.
usually has a moderate noise level. According to the taxonomies mentioned above,
our approach belongs to the statistical methods, using color information as the spectral characteristic and region-level rather than
pixel-level information as the spatial characteristic [1], [4], [6]. However, we do not use the
temporal feature. We also adopt similar assumptions to the color constancy model.
The novelty of our algorithm lies in the development of a new color model
that can effectively discriminate moving shadows from moving objects.
We assume that the distributional characteristics of the background figure and the
shadow have similar directional distributive patterns with a locational difference.
To find the directional distributive pattern, we first transform the color space
from RGB to PCA space and then tilt the PCA-transformed coordinate system. As a result, we find two main axes representing those distributive patterns
respectively. The shadow-removal algorithm is developed based on this directional distributive characteristic. We also consider various weather conditions
and car darkness, which affect the distributive direction.
The structure of this paper is as follows. We discuss the shadow model in
section 2, the algorithm to detect and remove shadows within object regions in
section 3, the experimental results in section 4, and the conclusions in section 5.
2 Shadow Model

2.1 Foreground and Background Figure
Fig. 1. Foreground figure and background figure (left); RGB color space of the foreground figure and background figure (right)
The light gray dots display the RGB values of the foreground figure, which
contains the shadow and the car object, while the dark gray dots display
those of the background image. The background figure shows a narrow and short bar,
while the foreground figure shows a wider and longer one. This shows that the
road in the background figure is brighter than the shadow. Thus, it is expected
that the foreground figure has a wider distribution of RGB values than
the background figure.
Table 1. Means and variances of the input figures

                     Mean (R,G,B)            Variance (R,G,B)
Foreground figure    158.2, 170.1, 165.9     7865.9, 6517.4, 6827.7
Background figure    169.3, 182.3, 164.9     60.8, 61.1, 56.1

2.2 Shadow Feature
Shadows can be classified into the self-shadow, which is created as a part of the
object, and the cast shadow, which is created outside of the object. Self-shadows
can be recognized as a part of the object, but cast shadows are not part of
the object. Figure 2 shows the distributional characteristics of the background
and the shadows under different sunlight intensities. Under less bright sunlight,
the background and the shadows are distributed in a straight line
in the same direction. Under bright sunlight, however, the two directional lines start to
separate from each other. As the sunlight increases, the degree of separation between
the two directional lines increases.
Fig. 2. RGB color space of the background and shadow: bright sunlight (left), less bright sunlight (right)
3 Algorithm Description

3.1 Shadow Region Detection
Let X^F and X^B be the foreground figure and the background figure, respectively. Let \bar{X}^B and C_X^B be the mean and covariance of the background figure as in Equation (1).

X_i^F = [R_i^F G_i^F B_i^F]^T ,  X_i^B = [R_i^B G_i^B B_i^B]^T ,  for i = 1, ..., M

\bar{X}^B = [\bar{R} \bar{G} \bar{B}]^T = (1/M) \sum_{i=1}^{M} X_i^B ,  C_X^B = Covariance(X^B)    (1)

The PCA transformation matrix A^B is composed of the eigenvectors e_1, e_2, e_3 of C_X^B:

A^B = [e_1 e_2 e_3]^T    (2)
Then we transform each value of the foreground figure using Equation (3).
The left of Figure 3 shows the distribution of Y^B and the distribution of
Y^F, where Y^F is given by Equation (3).

Y^F = A^B (X^F - \bar{X}^B)    (3)
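As a sketch of this step, the background statistics, the PCA matrix of Equation (2), and the transform of Equation (3) might be computed as follows. This is NumPy-based and the variable names and descending eigenvalue ordering are our own conventions, not from the paper:

```python
import numpy as np

def pca_transform(foreground, background):
    # foreground: (N, 3), background: (M, 3) arrays of RGB pixel values.
    mean_b = background.mean(axis=0)              # \bar{X}^B of Eq. (1)
    cov_b = np.cov(background, rowvar=False)      # C_X^B of Eq. (1)
    eigvals, eigvecs = np.linalg.eigh(cov_b)      # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]             # principal axis (Y1) first
    A = eigvecs[:, order].T                       # A^B = [e1 e2 e3]^T, Eq. (2)
    Y_f = (foreground - mean_b) @ A.T             # Y^F = A^B (X^F - \bar{X}^B), Eq. (3)
    Y_b = (background - mean_b) @ A.T
    return Y_f, Y_b, A, mean_b
```

Because A^B is orthogonal, the inverse mapping of Equation (7) is simply `Y @ A + mean_b` in this row-vector convention.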
The center of the background figure becomes the origin of the PCA space.
Since shadows are darker than the road, the location of the shadow area can be traced
along the principal axis. As shown in Figure 3 (right), the shadow is located to the
left of Y_1, the transformed value on the principal axis of the background.
However, the predicted shadow area may include a part of the car: dark sections
of a car, such as windshields, might be recognized as a shadow. A similar problem may
arise in the case of a dark car.
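A minimal sketch of this candidate selection follows; the sign convention (darker pixels falling on the negative side of Y_1 after centering on the background mean) and the zero threshold are assumptions of the sketch, since the paper only states that shadows lie to the left of the background center on the principal axis:

```python
import numpy as np

def shadow_candidates(Y_f, y1_thresh=0.0):
    # Y_f: (N, 3) foreground pixels in the PCA space (Y1, Y2, Y3).
    # Pixels darker than the background center project to the left
    # (negative side, by assumption) of the principal axis Y1.
    return Y_f[:, 0] < y1_thresh
```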
Fig. 3. RGB color space and principal component vectors of the foreground figure and background figure (left); candidate region of shadow (right)
3.2
Spatial data of the foreground figure and the background figure have a specific
directional pattern, as shown in Figure 4. Let (Y_1, Y_2, Y_3) be the coordinates of the
PCA space. The distribution of the background figure is densely accumulated
in a narrow area along the principal axis. Like the background distribution, the
foreground figure is distributed along the principal axis of the background, but with
wider and thicker dispersion. So it is highly probable that the two distributions have
an identical distributive direction. When we project the two distributions onto the Y_2Y_3
plane, there is a locational difference due to the different distributional shapes.

The Y_1 axis, the principal axis of the background figure, and the direction of
the foreground pattern are similar but not identical. Even though the background
figure and foreground figure are distributed along the principal axis, they are not
exactly aligned to the principal direction. Therefore, we tilt the coordinate system to
find the direction of alignment by using a hill-climbing binary search that minimizes the number of projected points. If an accurate pattern direction is found
by tilting the coordinate system, each pattern direction becomes accurately aligned to the tilted Y_2Y_3 plane. Figure 4 shows the tilted result.
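The tilting search can be sketched as follows. The paper does not give the search details, so the rotation parameterization (small tilts about the Y_2 and Y_3 axes), the bin size used to count distinct projected points, and the span-halving schedule are all assumptions of this sketch:

```python
import numpy as np

def projected_point_count(Y, theta2, theta3, bin_size=1.0):
    # Tilt the PCA frame by theta2 (about Y2) and theta3 (about Y3),
    # project onto the Y2-Y3 plane, and count distinct occupied bins.
    c2, s2 = np.cos(theta2), np.sin(theta2)
    c3, s3 = np.cos(theta3), np.sin(theta3)
    R2 = np.array([[c2, 0.0, s2], [0.0, 1.0, 0.0], [-s2, 0.0, c2]])
    R3 = np.array([[c3, -s3, 0.0], [s3, c3, 0.0], [0.0, 0.0, 1.0]])
    tilted = Y @ (R3 @ R2).T
    bins = np.round(tilted[:, 1:] / bin_size).astype(int)   # drop Y1, quantize
    return len({tuple(b) for b in bins})

def find_tilt(Y, span=np.pi / 8, iters=20):
    # Hill-climbing binary search: probe +/- span around the best tilt
    # found so far, then halve the span. Keeping the current best among
    # the candidates makes the count monotonically non-increasing.
    best = (0.0, 0.0)
    for _ in range(iters):
        candidates = [(best[0] + d2, best[1] + d3)
                      for d2 in (-span, 0.0, span)
                      for d3 in (-span, 0.0, span)]
        best = min(candidates, key=lambda t: projected_point_count(Y, *t))
        span /= 2.0
    return best
```

When the pattern direction is found, the points of the elongated distribution stack on top of each other in the projection, which is why minimizing the occupied-bin count aligns the tilted plane with the pattern.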
Fig. 4. Foreground pattern (left); projecting the foreground onto the Y_2Y_3 plane (middle); projecting the foreground onto the tilted Y_2Y_3 plane (right).

Fig. 5. Projected points and peaks in the tilted Y_2Y_3 plane: bright-color car (left), dark-color car (right)
Peak_1 is located at the center of the background. Peak_2 is located in the center area of a shadow,
and its value differs according to the darkness of the vehicle. To simplify the
computation, we align the line connecting Peak_1 and Peak_2 with the Y_2 axis.
3.3
(4)
Here, \mu_{background} is the mean of the background and \sigma_{background} is the standard
deviation of the background.
Step 2. Determine the length of the minor axis of the ellipse. Use |Peak_2 - Peak_1| as the length of the
minor axis of the ellipse. In the case of little sunlight,
|Peak_2 - Peak_1| becomes so small that we cannot build an ellipse of an appropriate size. In such cases, we use the standard deviation of Y_2 instead of
|Peak_2 - Peak_1|, as in Equation (5).

D_{peak} = max( |Peak_2 - Peak_1| , \beta ) ,  \beta = 1.5 \sigma_{Y_2}    (5)
Step 3. Determine the length of the major axis of the ellipse. Let \gamma be the ratio
of the standard deviation of Y_2 to the standard deviation of Y_3. Then we set
the length of the major axis to \gamma |Peak_2 - Peak_1|. Equation (6) shows the resulting
ellipse.

Y_2^2 / a^2 + Y_3^2 / b^2 = 1 ,  a = D_{peak} ,  b = \gamma D_{peak}    (6)
Step 4. Eliminate the pixels that fall inside the ellipse. The resulting
image shows the object with the shadow removed.
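Steps 2 through 4 can be sketched as follows. Centering the ellipse on Peak_2 (the shadow peak) in the tilted Y_2Y_3 plane is an assumption of this sketch, as the paper does not state the ellipse center explicitly:

```python
import numpy as np

def remove_shadow_pixels(Y_f, peak1_y2, peak2_y2, sigma_y2, sigma_y3):
    # Y_f: (N, 3) foreground pixels in the tilted (Y1, Y2, Y3) frame,
    # with the Peak1-Peak2 line aligned to the Y2 axis.
    d_peak = max(abs(peak2_y2 - peak1_y2), 1.5 * sigma_y2)   # Eq. (5)
    gamma = sigma_y2 / sigma_y3                              # std-dev ratio
    a, b = d_peak, gamma * d_peak                            # Eq. (6)
    y2 = Y_f[:, 1] - peak2_y2        # assumed ellipse center: Peak2
    y3 = Y_f[:, 2]
    inside = (y2 / a) ** 2 + (y3 / b) ** 2 <= 1.0            # Step 4 test
    return Y_f[~inside], inside      # surviving pixels, shadow mask
```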
3.4
Since the window area of a car is usually darker than any other part of the car, the
window area may be included in the eliminated region. To increase the shadow discrimination rate while keeping a high shadow detection rate, we detect the window
in foreground images using template-based searching. Constructing a rectangular template similar to a window, we search for the window area in the car.
If an appropriate area is found to be the window by template masking, we protect
the pixels in that area during the shadow-removal process.
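A minimal version of this template-based search might look as follows; the darkness threshold, the stride, and the template size are illustrative assumptions, not values from the paper:

```python
import numpy as np

def find_window_region(gray, tmpl_h, tmpl_w, darkness_thresh=60.0):
    # Slide a rectangular template over the grayscale car region and return
    # the darkest window-sized rectangle (r, c, h, w), provided its mean
    # intensity is dark enough to plausibly be a window; otherwise None.
    best, best_mean = None, np.inf
    H, W = gray.shape
    for r in range(0, H - tmpl_h + 1, 2):          # stride 2 keeps it cheap
        for c in range(0, W - tmpl_w + 1, 2):
            m = gray[r:r + tmpl_h, c:c + tmpl_w].mean()
            if m < best_mean:
                best, best_mean = (r, c, tmpl_h, tmpl_w), m
    return best if best_mean < darkness_thresh else None
```

Pixels inside the returned rectangle would then be protected from elimination in Step 4.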
4 Experimental Results
We used fifteen 320x240 24-bit RGB color videos recorded on the
road during daytime with different sunlight intensities and locations. Data was
obtained by using background subtraction to extract the vehicle object including the shadows. The extracted results were the foreground figure, including the
vehicle and the shadow, and the background figure, including the image of the
road. We applied the proposed algorithm to these background/foreground figures for the detection and removal of shadows. Figure 6 shows the removed
area, which is predicted to be the shadow, in the Y_1Y_2Y_3 space. After removing
the shadow area, we transform the result in the Y_1Y_2Y_3 space back to the original RGB space X^F
by Equation (7).
X^F = (A^B)^T Y^F + \bar{X}^B    (7)
\eta = TP_{Shadow} / (TP_{Shadow} + FN_{Shadow}) ,  \xi = TP_{Foreground} / (TP_{Foreground} + FN_{Foreground})    (8)
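Equation (8), the shadow detection rate and shadow discrimination rate of [8], computes directly from the pixel counts:

```python
def shadow_rates(tp_shadow, fn_shadow, tp_fg, fn_fg):
    # Shadow detection rate (eta) and shadow discrimination rate (xi), Eq. (8).
    eta = tp_shadow / (tp_shadow + fn_shadow)
    xi = tp_fg / (tp_fg + fn_fg)
    return eta, xi
```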
[Table 2. Detection and discrimination results by weather condition and car darkness; only the object counts are recoverable here.]

Weather condition          Car darkness [# of objects]
                           Dark [44], Mid [51], Bright [169], Average [264]
Sunny&Cloudy (2 videos)    Dark [6], Mid [14], Bright [21], Average [41]
Cloudy (5 videos)          Dark [30], Mid [17], Bright [99], Average [146]
Summary                    Dark [80], Mid [82], Bright [289], Average [451]
Whereas the experimental results show very successful shadow detection and
an acceptable shadow discrimination rate, one can see that this algorithm is relatively vulnerable to dark objects. Under that condition there are a few occasions
with very low detection/discrimination rates, which make the variance bigger
than under other conditions. Such cases occur because some parts
of the vehicle, such as the windshields, are recognized as shadows and removed. For the
windshields, we protect a part of them by template masking in the shadow-removal
procedure. With this compensation, the discrimination rate improves by about 5% on
average without losing the detection rate.
The encouraging part of the experiment is the high shadow detection rate. Since
this algorithm is designed for ITS applications, as long as we do not need
an image restoration process for over-removed shadow pixels, such as the one [10] is equipped with,
this side effect may not be as harmful in real applications as the numbers in the table
suggest.
Regarding the performance comparison with other related algorithms, one
may argue that no comparison is fair unless all algorithms are tested
on the same set of videos. However, Table 3 provides at least reasonable
evidence that our approach is meaningful. We found that the Highway II video
comparison of [8] has the environment most similar to our experiment.
Table 3. Comparison on the Highway II video of [8]

Method      Shadow Detection Accuracy (%)    Shadow Discrimination Accuracy (%)
SNP         51.20                            78.92
SP          46.93                            91.49
DNM1        54.07                            78.93
DNM2        60.24                            72.50
Proposed    88.10                            85.76

Key: SNP (statistical non-parametric approach), SP (statistical parametric approach),
DNM (deterministic non-model-based approach).
5 Conclusion
In this paper, we propose an algorithm that creates a new color model and
uses it to detect and remove shadows through a series of statistical analyses
of the background and shadows. It can remove shadows regardless of
the intensity of the sunlight and the darkness of the vehicle. The power of our
approach comes from the separability of the shadow and the object in the new
space based on their distributional patterns. It does not add any new strong constraints or assumptions about the position of the light source, preprocessing of model
fitting to forecast the object shape [11], or an image restoration process [10]. In the
experiment, the algorithm is particularly strong in the shadow detection measure.
Since our main target applications are ITS-related, such as vehicle recognition,
vehicle analysis, and tracking, our approach can be effective.
However, as many statistical approaches with a color constancy model share
the same problem, our new color space does not completely separate the moving
shadow from the moving object. Also, in the current implementation, we do not utilize
temporal features. We should examine the color space and the algorithm more deeply
and try to generalize them to scenes noisier than the static background
used here.
Acknowledgments. This research was supported by the MIC (Ministry of
Information and Communication), Korea, under the ITRC (Information Technology Research Center) support program supervised by the IITA (Institute of
Information Technology Assessment), and was partially supported
by Hansung University in the year 2006.
References
1. Cucchiara, R., Grana, C., Piccardi, M., Prati, A.: Detecting objects, shadows and ghosts in video streams by exploiting color and motion information. In: Proceedings of the IEEE Int'l Conference on Image Analysis and Processing, pp. 360-365 (2001)
2. Cucchiara, R., Grana, C., Prati, A., Piccardi, M.: Effective detection of moving objects, shadows and ghosts in surveillance videos. In: Australia-Japan Advanced Workshop on Computer Vision (2003)
3. Haritaoglu, I., Harwood, D., Davis, L.S.: W4: real-time surveillance of people and their activities. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(8), 809-830 (2000)
4. Horprasert, T., Harwood, D., Davis, L.S.: A statistical approach for real-time robust background subtraction and shadow detection. In: Proceedings of IEEE ICCV 1999 FRAME-RATE Workshop (1999)
5. Jolliffe, I.T.: Principal Component Analysis. Springer, New York (1996)
6. Mikic, I., Cosman, P., Kogut, G., Trivedi, M.M.: Moving shadow and object detection in traffic scenes. In: Proceedings of Int'l Conference on Pattern Recognition, pp. 321-324 (2000)
7. Nadimi, S., Bhanu, B.: Physical models for moving shadow and object detection in video. IEEE Transactions on Pattern Analysis and Machine Intelligence 26(8), 1079-1087 (2004)
8. Prati, A., Cucchiara, R., Mikic, I., Trivedi, M.M.: Analysis and detection of shadows in video streams: a comparative evaluation. In: Proc. of Third Workshop on Empirical Evaluation Methods in Computer Vision - IEEE Int'l Conference on Computer Vision and Pattern Recognition, pp. 571-576 (2001)
9. Stauder, J., Mech, R., Ostermann, J.: Detection of moving cast shadows for object segmentation. IEEE Transactions on Multimedia 1(1), 65-76 (1999)
10. Wang, J.M., Chung, Y.C., Chang, C.L., Chen, S.W.: Shadow detection and removal for traffic image. In: Proceedings of the IEEE Int'l Conference on Networking, Sensing & Control, pp. 649-654 (2004)
11. Yoneyama, A., Yeh, C.H., Kuo, C.-C.J.: Moving cast shadow elimination for robust vehicle extraction based on 2D joint vehicle/shadow models. In: Proceedings of IEEE Conference on Advanced Video and Signal Based Surveillance, pp. 229-236 (2003)