1 Introduction
Automatic detection of moving objects is a challenging and essential task in video
surveillance. It has many applications in diverse disciplines, such as automatic video
monitoring systems, intelligent transportation systems, airport security systems, and so
on. Detailed reviews of moving object detection algorithms can be found in [1] and [2].
Background subtraction based methods are the most common approaches used for
moving object detection. In these methods, background modeling is an important and
unavoidable part that accumulates the illumination and other changes of the
background scene for proper detection [3]. However, most background-modeling
methods are computationally complex and time-consuming for real-time processing [4].
Moreover, they often suffer from poor performance due to their inability to compensate
for the dynamics of the background scene [5].
Edge-based methods are robust against illumination change. In [6] and [7], edge-based
methods that utilize double edge maps are proposed for moving object detection.
* Corresponding author.
B. Apolloni et al. (Eds.): KES 2007/WIRN 2007, Part I, LNAI 4692, pp. 501–509, 2007.
© Springer-Verlag Berlin Heidelberg 2007
In [6], one edge map is generated from the difference image of the background and the
current frame, In; another edge map is generated from the difference image of In and In+1.
Finally, moving edge points are detected by applying a logical OR operation on these
two edge maps. However, due to illumination change and random noise in the
background scene, false edges may appear in the first edge map and cause false
detections in the final result. In [7], the first edge map is computed from the
difference image of In-1 and In, and similarly the second is obtained from In and In+1.
Finally, the moving edges of In are extracted by applying a logical AND operation on these
two edge maps. However, because of noise and illumination change, the edge pixels of
one edge map may be slightly displaced with respect to the other. Exact matching
through the AND operation therefore extracts scattered edge pixels, which fail to
represent a reliable shape of the moving objects. Moreover, pixel-based processing for
moving edge detection is computationally expensive. A pseudo-gradient based moving
edge extraction method is proposed in [8]. Although this method is computationally
fast, its background is not updated to handle the situation in which a moving object
stops in the scene; the stopped object is then continuously detected as a moving
object. As no background update method is adopted, the method is also not very robust
against illumination change. Additionally, it suffers from scattered edge pixels of
moving objects.
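The weakness of exact pixel-wise matching described above can be seen in a minimal sketch (hypothetical, synthetic edge maps; not the authors' code): an edge displaced by a single pixel between two maps survives a logical AND at almost no positions.

```python
def and_match(edge_map_a, edge_map_b):
    """Pixel-wise logical AND of two binary edge maps (lists of rows)."""
    return [[a & b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(edge_map_a, edge_map_b)]

# A short horizontal edge, and the same edge shifted down by one pixel
# (as noise or illumination change might displace it between frames).
map_a = [[0, 0, 0, 0],
         [1, 1, 1, 1],
         [0, 0, 0, 0]]
map_b = [[0, 0, 0, 0],
         [0, 0, 0, 0],
         [1, 1, 1, 1]]

matched = and_match(map_a, map_b)
print(sum(map(sum, matched)))  # prints 0: the entire edge is lost
```

A one-pixel displacement wipes out the whole segment, which is exactly the scattered-edge behavior criticized in the text.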
Fig. 1. Difference between pixel-based and segment-based matching. (a) Edge image at time t;
(b) Edge image of the same scene at time t+1; (c) Result obtained by pixel-based matching; (d)
Result obtained by segment-based matching.
A simple edge differencing approach suffers considerably from random noise. This is due
to the fact that the noise appearing in one frame differs from that in its successive
frames, which shifts edge locations to some extent between successive frames. Hence,
instead of using a simple edge differencing approach, we utilize difference images for
moving edge detection. Edges extracted from a difference image are robust to noise and
comparatively stable, and hence partially solve the edge localization problem.
Two difference image edge maps are utilized in our proposed method for moving
object detection. To compute the difference image edge maps, we compute two difference
images, Dn-1 and Dn, utilizing three successive frames In-1, In, and In+1 as follows:
Dn = In − In+1   (1)
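As a concrete sketch of eq. (1) on tiny synthetic frames (the absolute value is our assumption, so that the result is a valid grayscale image; the frames and values are hypothetical):

```python
def difference_image(frame_a, frame_b):
    """Per-pixel difference of two grayscale frames (lists of rows).
    The abs() is an assumption on top of eq. (1), which writes a plain
    subtraction, so that the result stays a valid grayscale image."""
    return [[abs(a - b) for a, b in zip(ra, rb)]
            for ra, rb in zip(frame_a, frame_b)]

# Three tiny frames: a bright "object" moves one column to the right.
i_prev = [[0, 9, 0, 0]]   # In-1
i_curr = [[0, 0, 9, 0]]   # In
i_next = [[0, 0, 0, 9]]   # In+1

d_prev = difference_image(i_prev, i_curr)   # Dn-1 from In-1 and In
d_curr = difference_image(i_curr, i_next)   # Dn from In and In+1
print(d_prev)  # prints [[0, 9, 9, 0]]
print(d_curr)  # prints [[0, 0, 9, 9]]
```

Both difference images highlight the old and new positions of the moving object, and edges extracted from them are what the following steps match.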
After computing Dn-1 and Dn, the Canny edge detection algorithm [11] is applied to
generate the difference image edge maps, DEn-1 and DEn, respectively. In the difference
image edge maps, edge pixels are grouped together and represented as segments using an
efficiently designed edge class [9]. To make the edge segments more suitable for the
moving edge detection procedure, we maintain the following constraints during edge
segment generation:
a) If an edge segment contains multiple branches, the branches are broken into
multiple edge segments from the branching point.
b) If an edge segment bends more than a certain limit at an edge point, the edge is
broken into two edge segments at that position.
c) If the length of an edge segment exceeds a certain limit, the segment is divided
into a number of smaller edge segments within the permitted length.
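Constraint (c) above can be sketched as follows; the point-list representation and the length limit (max_len) are hypothetical simplifications, since the paper does not specify the actual limit:

```python
def split_by_length(segment, max_len):
    """Constraint (c): divide an edge segment, given as an ordered list of
    (x, y) points, into chunks no longer than max_len points.
    max_len is a hypothetical parameter; the paper leaves the limit unstated."""
    return [segment[i:i + max_len] for i in range(0, len(segment), max_len)]

chain = [(x, 0) for x in range(10)]   # a 10-point straight edge segment
pieces = split_by_length(chain, 4)
print([len(p) for p in pieces])       # prints [4, 4, 2]
```

Constraints (a) and (b) would split the same point list at branching points and high-curvature points instead of at fixed intervals, but the mechanics are analogous.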
The segment-based representation allows the proposed system to use the geometric
shape of edges during matching for moving edge detection. It also helps to extract
solid edge segments of moving objects instead of scattered or very small edges.
No edge pixel is processed independently; rather, all the edge pixels in an edge
segment are processed together during matching and any other operation. Fig. 3(d)
shows the difference image edge map generated from Fig. 3(a) and Fig. 3(b).
Similarly, the edge map in Fig. 3(e) is obtained from Fig. 3(b) and Fig. 3(c).
Fig. 3. DT image generation and matching. (a) In-1; (b) In; (c) In+1; (d) DEn-1; (e) DEn; (f) DT
image of DEn-1; (g) Edge matching using DT image. Here, Matching_confidence = 0.91287.
The edge maps DEn-1 and DEn are used in this step to extract moving edges for moving
object detection in the video sequence. DEn-1 contains the moving edges of In-1 and In,
and DEn contains the moving edges of In and In+1. Thus, the moving edges of
In are common to both edge maps. Therefore, to find the moving edges, we
superimpose one edge map on the other and compute the matching between them.
If two edge segments are almost similar in size and shape, and are situated at
almost the same positions in the two edge maps, they are considered moving edges
of In. However, the appearance of noise may cause slight changes in these parameters as
well. Hence, instead of exact matching, introducing some variability reduces the
localization problem and yields better results. Considering these issues, we have
adopted an efficient edge-matching algorithm in the proposed method, known as
chamfer ¾ matching [10]. In chamfer matching, a distance transform (DT) image is
generated from one difference image edge map, the edge segments of the other are
superimposed on it, and a matching confidence is computed. If the matching confidence
is less than a certain threshold, the edge segment is enlisted as a moving edge. This
threshold value provides the variability during matching. In our method, we utilize
DEn-1 to generate the DT image, and thereafter the edge segments of DEn are
superimposed on it to compute the matching confidence.
To compute the DT image, we use an integer approximation of the exact Euclidean
distance to minimize the computation time [10]. Each pixel in the DT image holds the
distance to the nearest edge pixel in the edge map. In DT image generation, a two-pass
algorithm is used to calculate the distance values sequentially. Initially, the edge
pixels are set to zero and the rest of the positions are set to infinity. The first
(forward) pass modifies the distance image as follows:
vi,j = min( vi-1,j-1 + 4, vi-1,j + 3, vi-1,j+1 + 4, vi,j-1 + 3, vi,j )   (2)
where vi,j is the distance at pixel position (i, j). Fig. 3(f) illustrates a DT image
computed from the difference image edge map shown in Fig. 3(d). In Fig. 3(f), the
distance values of the DT image are normalized to the range 0–255 for better visualization.
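The two-pass computation can be sketched as follows (a pure-Python illustration, not the authors' implementation; the backward pass uses the mirrored mask, which the two-pass scheme implies but the text does not write out):

```python
INF = 10**9  # stands in for "infinity" on non-edge pixels

def chamfer_dt(edge_map):
    """Two-pass chamfer 3/4 distance transform: cost 3 for axial steps,
    4 for diagonal steps (integer approximation of Euclidean distance)."""
    h, w = len(edge_map), len(edge_map[0])
    v = [[0 if edge_map[i][j] else INF for j in range(w)] for i in range(h)]
    # Forward pass, top-left to bottom-right: eq. (2).
    for i in range(h):
        for j in range(w):
            for di, dj, cost in ((-1, -1, 4), (-1, 0, 3), (-1, 1, 4), (0, -1, 3)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    v[i][j] = min(v[i][j], v[ni][nj] + cost)
    # Backward pass, bottom-right to top-left, with the mirrored mask.
    for i in range(h - 1, -1, -1):
        for j in range(w - 1, -1, -1):
            for di, dj, cost in ((1, 1, 4), (1, 0, 3), (1, -1, 4), (0, 1, 3)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    v[i][j] = min(v[i][j], v[ni][nj] + cost)
    return v

edge = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(chamfer_dt(edge))  # prints [[4, 3, 4], [3, 0, 3], [4, 3, 4]]
```

Axial neighbors of the single edge pixel get distance 3 and diagonal neighbors get 4, matching the chamfer ¾ weights.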
During matching, an edge segment of DEn is superimposed on the DT image of DEn-1
to accumulate the corresponding distance values. A normalized average (root mean
square) of these values is the measure of the matching confidence of the edge segment
in DEn, as shown in the following equation:
Matching_confidence[l] = (1/3) * sqrt( (1/k) * Σi=1..k {dist(li)}² )   (4)
where k is the number of edge points in the lth edge segment of DEn, and dist(li) is the
distance value at position i of edge segment l. The average is divided by 3 to compensate
for the unit distance 3 of the chamfer ¾ distance transformation. Edge segments are
removed from DEn if their matching confidence is higher than the threshold. The
existence of a similar edge segment in both DEn-1 and DEn produces a low
Matching_confidence value for that segment. We allow some flexibility by introducing a
disparity threshold τ; empirically, we set τ = 1.3 in our implementation. We consider
that a match occurs between edge segments if Matching_confidence[l] ≤ τ. The corresponding
edge segment is considered a moving edge and consequently enlisted in the moving
edge list. Finally, the resultant edge list contains the edge segments of MEn that
belong to the moving objects in In. Fig. 3(g) illustrates the procedure of computing
the matching confidence using the DT image.
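The confidence computation of eq. (4) and the τ test can be sketched as follows (the DT image and segment coordinates here are hypothetical illustration data):

```python
import math

def matching_confidence(segment, dt):
    """Eq. (4): chamfer 3/4 matching confidence of one edge segment of DEn,
    given the DT image of DEn-1. The 1/3 factor compensates for the unit
    distance 3 of the chamfer 3/4 transform."""
    k = len(segment)
    return (1.0 / 3.0) * math.sqrt(sum(dt[y][x] ** 2 for x, y in segment) / k)

TAU = 1.3  # disparity threshold from the paper

# Hypothetical 3x3 DT image of DEn-1 (single edge pixel at the center)
# and a candidate vertical segment from DEn passing near that edge.
dt = [[4, 3, 4],
      [3, 0, 3],
      [4, 3, 4]]
segment = [(1, 0), (1, 1), (1, 2)]   # (x, y) edge points of the segment

c = matching_confidence(segment, dt)
print(c <= TAU, round(c, 3))  # prints: True 0.816
```

The segment sits close to the DEn-1 edge, so its confidence falls below τ = 1.3 and it would be enlisted as a moving edge; a segment far from any DEn-1 edge accumulates large distances and is rejected.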
3 Experimental Results
Experiments have been carried out with several video sequences captured in indoor
as well as outdoor environments to verify the effectiveness of the proposed method.
We have applied the proposed method to video sequences of size 640x520, using an
Intel Pentium IV 1.5 GHz processor and 512 MB of RAM. Visual C++ 6.0 and MTES
[12] have been used as our working tools for implementation.
Fig. 4. (a) I150; (b) I151; (c) I152; (d) DE150; (e) DE151; (f) Detected moving edges of I151
In their method, the difference between the background and the current frame incorporates
most of the noise pixels. Fig. 5(f) shows the result of applying the method proposed by
Dailey and Cathey [7]. The result obtained from this method is more robust against
illumination changes, as it uses the most recent successive frame differences for moving
edge detection. However, it suffers from scattered edge pixels, as it uses a logical AND
operation on the difference image edge maps for matching. Illumination variation and
quantization error induce an edge localization problem in the difference image edge maps.
As a result, some portions of the same edge segment are matched and some are not,
producing scattered edges in the final detection result. Our method does not suffer from
this problem because it applies flexible matching between difference image edge maps
containing edge segments. The result obtained from our proposed method is
shown in Fig. 5(g).
Fig. 5. (a) Background; (b) I172; (c) I173; (d) I174; (e) Detected moving edges of I173 using Kim
and Hwang method; (f) Detected moving edges of I173 using Dailey and Cathey method; (g)
Detected moving edges of I173 using our proposed method
References
1. Radke, R., Andra, S., Al-Kofahi, O., Roysam, B.: Image Change Detection Algorithms: A
Systematic Survey. IEEE Trans. on Image Processing 14(3), 294–307 (2005)
2. Kastrinaki, V., Zervakis, M., Kalaitzakis, K.: A Survey of Video Processing Techniques
for Traffic Applications. Image and Vision Computing 21(4), 359–381 (2003)
3. Chien, S.Y., Ma, S.Y., Chen, L.: Efficient Moving Object Segmentation Algorithm Using
Background Registration Technique. IEEE Transactions on Circuits and Systems for
Video Technology 12(7), 577–586 (2002)
4. Sappa, A.D., Dornaika, F.: An Edge-Based Approach to Motion Detection. In:
Alexandrov, V.N., van Albada, G.D., Sloot, P.M.A., Dongarra, J.J. (eds.) ICCS 2006.
LNCS, vol. 3991, pp. 563–570. Springer, Heidelberg (2006)
5. Gutchess, D., Trajkovics, M., Cohen-Solal, E., Lyons, D., Jain, A.K.: A Background
Model Initialization Algorithm for Video Surveillance. Proc. of IEEE Intl. Conf. on
Computer Vision 1, 733–740 (2001)
6. Kim, C., Hwang, J.N.: Fast and Automatic Video Object Segmentation and Tracking for
Content-based Applications. IEEE Trans. on Circuits and Systems for Video Tech. 12,
122–129 (2002)
7. Dailey, D.J., Cathey, F.W., Pumrin, S.: An Algorithm to Estimate Mean Traffic Speed
using Un-calibrated Cameras. IEEE Trans. on Intelligent Transportation Sys. 1(2), 98–107
(2000)
8. Makarov, A., Vesin, J.M., Kunt, M.: Intrusion Detection Using Extraction of Moving
Edges. In: Proc. of International Conference on Pattern Recognition 1, 804–807 (1994)
9. Ahn, K.O., Hwang, H.J., Chae, O.S.: Design and Implementation of Edge Class for Image
Analysis Algorithm Development based on Standard Edge. In: Proc. of KISS Autumn
Conference, pp. 589–591 (2003)
10. Borgefors, G.: Hierarchical Chamfer Matching: A Parametric Edge Matching Algorithm.
IEEE Trans. on PAMI 10(6), 849–865 (1988)
11. Canny, J.: A Computational Approach to Edge Detection. IEEE Trans. on PAMI 8(6),
679–698 (1986)
12. Lee, J., Cho, Y.K., Heo, H., Chae, O.S.: MTES: Visual Programming for Teaching and
Research in Image Processing. In: Sunderam, V.S., van Albada, G.D., Sloot, P.M.A.,
Dongarra, J.J. (eds.) ICCS 2005. LNCS, vol. 3514, pp. 1035–1042. Springer, Heidelberg
(2005)