
Optik 122 (2011) 2011–2015 · doi:10.1016/j.ijleo.2010.12.022


Parallel precise speed measurement for multiple moving objects


Peng Zhao a,b,*

a Information and Computer Engineering Institute, Northeast Forestry University, Harbin 150040, China
b Department of Computer Science and Engineering, Beijing Institute of Technology, Beijing 100081, China

* Corresponding author at: Information and Computer Engineering Institute, P. O. Box 319, Northeast Forestry University, Harbin 150040, China. Tel.: +86 0451 82191523; fax: +86 0451 82190421. E-mail address: bit_zhao@yahoo.com.cn.

Article history: Received 12 July 2010; accepted 5 December 2010

Keywords: Machine vision; Speed measurement; Geodesic active contour; Image matte; Interlaced scan

Abstract

In this paper, we propose a novel scheme for the parallel speed measurement of multiple moving objects with a translational motion. First, the scheme uses one interlaced scan CCD camera to obtain a single interlaced scan image of the multiple moving objects, from which the odd and even field images are extracted and resized. Second, image matting is applied to the two field images to extract every moving object's silhouette simultaneously. Third, a geodesic active contour is applied to the two field alpha matte images to extract every moving object's contour edge simultaneously, and shape matching is performed with a moment shape descriptor for every object's contour edge in the two field alpha matte images. Finally, the distance between the centroids of the two matching silhouettes is computed, and every object's speed is calculated from this distance and the camera imaging parameters. Simulation and real experiments show that our scheme performs the speed measurement for multiple objects with a translational motion accurately and efficiently.

© 2011 Elsevier GmbH. All rights reserved.

1. Introduction

Moving object speed measurement is of central importance in many fields such as object tracking and intelligent traffic. Two popular but expensive speed measurement schemes use a RADAR (Radio Detection and Ranging) or LIDAR (Light Detection and Ranging) device to measure the speed of one moving object (e.g. a vehicle) [1]. A RADAR device bounces a radio signal off a moving object, and the reflected signal is picked up by a receiver; the receiver measures the frequency difference between the original and reflected signals and converts it into a speed. A LIDAR device records how long it takes a light pulse to travel to the object and back; from a series of such distance measurements and comparisons, LIDAR can perform the speed measurement accurately.

In recent years, image-based schemes have been proposed for speed measurement, owing to the availability of cheap, high-performance imaging hardware [2–4]. These schemes usually use reference information in the scene, such as the distance traveled between adjacent image frames; the object's speed is computed by dividing this traveled distance by the inter-frame time. Two or more image frames are required in these schemes. However, due to the limited imaging frame rate (usually 30 frames per second), the video camera has to be installed far away from the moving object to keep the object within adjacent image frames.

Lin et al. propose schemes for vehicle speed [5] and spherical ball speed [6] measurement based on a single motion blurred image taken by a stationary camera. Due to the relative motion between the camera and the moving object during the exposure time, motion blur occurs in the dynamic image region, and this blur provides a visual cue for the object's speed measurement. An approximate object region is first segmented and blur parameters are estimated from the motion blurred sub-image; the sub-image is then deblurred and used to derive other parameters. Finally, the object's speed is calculated from the imaging geometry, camera pose and blur extent in the image.

All the above machine-vision-based schemes measure the speed of only a single moving object, which results in a low measurement efficiency. To address this issue, we propose in this paper an efficient parallel speed measurement scheme for multiple moving objects with a translational motion (e.g. multiple moving vehicles), based on a single motion blurred interlaced scan image. In the next section, we briefly outline the formulation of machine-vision-based speed measurement. Section 3 describes our scheme in detail. Experimental results and comparisons are presented in Section 4, and conclusions are drawn in Section 5.
2. Formulation of machine-vision-based speed measurement

The machine-vision-based speed measurement is based on a pinhole camera model, as shown in Fig. 1.


Fig. 1. Camera model for object’s speed measurement.

Define the angle between the object's motion direction and the image plane of the camera as θ, and let the object's displacement be d over a fixed time interval T. The object's speed is then given as follows:

    v = d/T ≈ zKsx / (Tf cos θ cos β)    (1)

if f ≫ sx, where f is the focal length of the camera, sx is the CCD pixel size in the horizontal direction, and z is the distance between the object and the camera in the direction parallel to the optical axis. The parameter K is the motion blur extent (in pixels) along the horizontal (x) direction for one motion blurred image, or the displacement of the object (in pixels) between two consecutive image frames along the horizontal direction. The parameter β is the angle between the motion direction and the x direction in the image plane.

For the special case in which the object moves along a direction parallel to the image plane of the camera (i.e., θ = 0), Eq. (1) simplifies to:

    v = d/T ≈ zKsx / (Tf cos β)    (2)
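To make Eq. (1) concrete, the following Python sketch evaluates it directly. The function name and the sample values of K, z and T are our own illustrative assumptions; f and sx are the camera parameters reported later in Section 4.2.

```python
import math

def speed_from_blur(K, z, s_x, f, T, theta=0.0, beta=0.0):
    """Eq. (1): v = d/T ~ z*K*s_x / (T*f*cos(theta)*cos(beta)).
    K: blur extent or displacement (pixels); z: object distance (m);
    s_x: horizontal pixel size (m); f: focal length (m); T: time interval (s);
    theta, beta: the angles of Eq. (1), in radians (theta = 0 gives Eq. (2))."""
    return (z * K * s_x) / (T * f * math.cos(theta) * math.cos(beta))

# Illustrative only: f and s_x from Section 4.2; K, z and T assumed.
print(speed_from_blur(K=30.0, z=15.0, s_x=0.011e-3, f=10e-3, T=1 / 60))
# -> about 29.7 m/s
```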

3. Interlaced scan based parallel speed measurement

In this section, we propose a novel parallel speed measurement scheme for multiple moving objects based on a single interlaced scan image frame. The scheme uses only one image frame, yet it retains the advantage of the speed measurement schemes based on two image frames [2–4]: it uses the distance traveled by every object between the odd and even field images, and every object's speed is computed by dividing this traveled distance by the inter-field time. The four implementation steps are explained in detail as follows.

3.1. Odd and even field images

First, the odd and even field images are extracted from an interlaced scan image frame I(x, y), y = 0, 1, ..., N − 1, as Io(x, y) = I(x, y) for y = 2k and Ie(x, y) = I(x, y) for y = 2k + 1, with k = 0, 1, ..., N/2 − 1. Then Io(x, y) and Ie(x, y) are enlarged to form Io1(x, y) and Ie1(x, y), identical in size to I(x, y), using an interpolation algorithm; nearest-neighbor, bilinear or bicubic interpolation can be used. For instance, Fig. 2 illustrates an interlaced scan blurred vehicle image and its corresponding resized odd/even field images.

Fig. 2. Real interlaced scan vehicle image. (a) Original blurred interlaced scan car image; (b) its odd field image; (c) its even field image.
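A minimal Python/OpenCV sketch of this step, assuming the interlaced frame is a standard top-to-bottom image array whose row 0 belongs to the odd field (the row convention is our assumption):

```python
import cv2

def split_and_resize_fields(frame):
    """Split an interlaced frame into its two fields and resize each
    back to full frame size (here with bilinear interpolation)."""
    h, w = frame.shape[:2]
    odd_field = frame[0::2]    # rows y = 2k      -> Io
    even_field = frame[1::2]   # rows y = 2k + 1  -> Ie
    io1 = cv2.resize(odd_field, (w, h), interpolation=cv2.INTER_LINEAR)
    ie1 = cv2.resize(even_field, (w, h), interpolation=cv2.INTER_LINEAR)
    return io1, ie1
```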
3.2. Image matte

Image matting schemes take as input one image I which is assumed to be a composite of a foreground image F and a background image B. The color of the ith pixel is assumed to be a linear combination of the corresponding foreground and background colors [7]:

    Ii = αi Fi + (1 − αi) Bi    (3)

where αi is the pixel's foreground opacity. The alpha matte contains fractional values which can be physically interpreted as the percentage of the capture time during which the foreground covers the pixel. Each pixel has a value between 0 and 1: pixels in an object's internal region have value 1, pixels around the boundary of a blurred object have fractional values, and background pixels have value 0. Since the foreground always occludes the background, the matte also describes the partial occlusion of the background scene by the foreground object during the exposure; essentially, the alpha matte abstracts the object's silhouette information. For example, Fig. 3 shows the alpha matte of a motion blurred image, produced by the scribble-based matting scheme of [8].

Fig. 3. Moving car image. (a) Original blurred car image; (b) its scribble image; (c) final alpha matte.
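The paper produces its mattes with the scribble-based closed-form solver of [8], which is too involved for a short sketch. For intuition, in the special case where F and B are known constant colors (as in the red-on-green simulation of Section 4.1), Eq. (3) can be inverted per pixel in closed form; the sketch below is ours and covers only that special case:

```python
import numpy as np

def alpha_from_known_colors(img, fg, bg):
    """Invert Eq. (3), I = alpha*F + (1 - alpha)*B, per pixel by least
    squares across the color channels; valid only when F and B are
    known constant RGB colors."""
    img = img.astype(np.float64)
    fg = np.asarray(fg, dtype=np.float64)
    bg = np.asarray(bg, dtype=np.float64)
    d = fg - bg
    alpha = ((img - bg) * d).sum(axis=2) / (d @ d)
    return np.clip(alpha, 0.0, 1.0)

# e.g. for the simulation images of Section 4.1:
# alpha = alpha_from_known_colors(field_img, fg=(255, 0, 0), bg=(0, 255, 0))
```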

3.3. Parallel contour edge extraction for every object

The parallel contour edge extraction for every object in the two alpha matte images of the odd and even fields is performed with a geodesic active contour. Geodesic active contours have two advantages over traditional parametric active contours [9]. First, a geodesic active contour represented by a level set functional may break or merge naturally during the evolution, so topological changes are handled automatically; it can therefore detect multiple objects in an image simultaneously. Second, the level set functional always remains a functional on a fixed grid, which allows efficient numerical schemes.

In the level set formulation, the moving active contour is represented by the zero level set C(t) = {(x, y) | ϕ(t, x, y) = 0} of a level set functional ϕ(t, x, y). The evolution equation of ϕ can be written as follows [10]:

    ∂ϕ/∂t + F |∇ϕ| = 0    (4)

where F is called the speed functional; for image segmentation, F depends on the image data and on the level set functional ϕ.

Early geodesic active contours usually evolve the level set functional with a partial differential equation (PDE). Compared to purely PDE-driven level set schemes, variational schemes are more convenient and natural for incorporating additional information. In this work, we use the variational level set formulation of curve evolution proposed by Li et al. [11]. For example, Fig. 4 illustrates the convergence of this variational geodesic active contour in an image with two rectangles.

Fig. 4. Geodesic active contour. (a) Initial geodesic active contour; (b) result after 100 iterations; (c) result after 450 iterations.
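The variational formulation of [11] is not packaged in common libraries. As an illustrative stand-in, the sketch below uses the morphological geodesic active contour from scikit-image, which shares the properties noted above (automatic topology handling on a fixed grid); the initialization and parameter values are our own assumptions, not the paper's:

```python
import numpy as np
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def extract_object_masks(alpha_matte, n_iter=450):
    """Evolve a geodesic active contour on a field alpha matte.
    Returns a binary mask; because the level set can split, the mask
    can capture several moving objects simultaneously."""
    # Edge-stopping image: small values near strong matte gradients.
    gimage = inverse_gaussian_gradient(alpha_matte.astype(np.float64))
    # Start from a contour just inside the image border and shrink it.
    init = np.ones(alpha_matte.shape, dtype=np.int8)
    init[:10, :] = init[-10:, :] = init[:, :10] = init[:, -10:] = 0
    return morphological_geodesic_active_contour(
        gimage, n_iter, init_level_set=init, smoothing=1, balloon=-1)
```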

Next, every object's contour edge in the odd field alpha matte must be matched with its corresponding contour edge in the even field alpha matte. Shape descriptors such as the Fourier descriptor [12], the curvature descriptor [13] or moment descriptors [14,15] can be used for this contour edge matching task. In this paper, the shape matching is performed with a moment shape descriptor for every object's contour edge in the two field alpha matte images.
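One simple realization of this step substitutes OpenCV's Hu-moment shape distance (cv2.matchShapes) for the paper's moment descriptor; the greedy nearest-shape pairing below is our assumption:

```python
import cv2

def match_field_contours(contours_odd, contours_even):
    """Pair each odd-field contour with the even-field contour whose
    Hu-moment distance (cv2.matchShapes) is smallest. Contours are as
    returned by cv2.findContours on the binarized alpha mattes."""
    pairs = []
    for c_odd in contours_odd:
        c_best = min(contours_even,
                     key=lambda c: cv2.matchShapes(
                         c_odd, c, cv2.CONTOURS_MATCH_I1, 0.0))
        pairs.append((c_odd, c_best))
    return pairs
```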
3.4. Speed calculation

After the shape matching has been fulfilled for every object's contour edge in the two field alpha matte images, we calculate each object's two centroids in the two field alpha mattes:

    xo1 = Σ x αo1(x, y) / Σ αo1(x, y),    xe1 = Σ x αe1(x, y) / Σ αe1(x, y)
    yo1 = Σ y αo1(x, y) / Σ αo1(x, y),    ye1 = Σ y αe1(x, y) / Σ αe1(x, y)    (5)

Then we compute the displacement between these two centroids:

    K = Δx = |xo1 − xe1|    (6)

    cos β = Δx / √(Δx² + Δy²)    (7)

where Δy = |yo1 − ye1|. The other parameters in Eq. (1) or Eq. (2) can be obtained by camera calibration. Therefore, Eq. (1) or Eq. (2) can be used to perform the speed measurement for every moving object, with T = te/2, where te is the exposure time of the interlaced scan camera.

Table 1
Computed result and error of horizontal displacement of the left rectangle by our scheme.

Measuring times       1      2      3      4      5      6      7      2σ     Average
K (in pixels)         30.3   30.5   30.1   29.8   29.7   30.3   29.7   0.7    30.1
Relative error (%)    1.0    1.6    0.3    −0.6   −1.0   1.0    −1.0   2.2    0.2

Table 2
Computed result and error of vertical displacement of the right rectangle by our scheme.

Measuring times       1      2      3      4      5      6      7      2σ     Average
K (in pixels)         53.2   53.5   53.5   53.6   52.7   52.6   52.7   0.8    53.1
Relative error (%)    0.3    0.9    0.9    1.1    −0.5   −0.7   −0.5   1.7    0.2

4. Results and errors

4.1. Simulation experiments

To evaluate the scheme of Section 3 quantitatively, a first simulation experiment is performed. Fig. 5a shows two red rectangles on a green background, and Fig. 5b shows the translated red rectangles on the same background (the left rectangle has a horizontal translation, while the right one has a vertical translation). These two images are used as the odd and even field images to synthesize an interlaced scan color image manually, as illustrated in Fig. 6a. Our scheme is then used to calculate the translated distances of the centroids of the two red rectangles. The computed distance K is reported in Tables 1 and 2; the ground-truth horizontal displacement of the left rectangle is 30 pixels, and the ground-truth vertical displacement of the right rectangle is 53 pixels.
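The field-weaving step of this synthesis takes only a few lines of NumPy; the row convention matches the sketch in Section 3.1:

```python
import numpy as np

def weave_fields(odd_img, even_img):
    """Synthesize an interlaced frame from two equal-size field images:
    even rows come from the first image, odd rows from the second."""
    frame = np.empty_like(odd_img)
    frame[0::2] = odd_img[0::2]
    frame[1::2] = even_img[1::2]
    return frame
```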
A second simulation experiment is performed with the same pair of images. This time, the left rectangle is degraded by a horizontal motion blur, the right rectangle by a vertical motion blur, and both images by an overall defocus blur, using the Matlab filter functions. The two blurred images are used as the odd and even field images to synthesize an interlaced scan color image manually, as illustrated in Fig. 6b, and our scheme is used again to calculate the translated distances of the centroids of the two red rectangles. The computed distance K is reported in Tables 3 and 4. By comparison, we can see again that our scheme performs the displacement measurement accurately.

Fig. 5. Red rectangle image. (a) Original red rectangle image; (b) translated rectangle image. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of the article.)

Fig. 6. Synthesized interlaced scan rectangle image. (a) Synthesized clear image; (b) synthesized motion blurred image.

4.2. Real experiments

The real experiment is carried out in an outdoor environment to measure the speeds of multiple moving cars traveling at constant velocity, using the speed calculation scheme of Section 3; an interlaced scan CCD image is shown in Fig. 7. A SONY Cyber-shot DSC P100/P120 digital camera is used to produce the interlaced scan blurred image; its imaging parameters are f = 10 mm, sx = 0.011 mm, and shutter speed te ∈ [1/1000 s, 30 s].

Fig. 7. Real interlaced scan vehicle image. (a) Original vehicle image; (b) its alpha matte in the odd field image; (c) its alpha matte in the even field image.

Table 3
Computed result and error of horizontal displacement of the left rectangle by our scheme.

Measuring times       1      2      3      4      5      6      7      2σ     Average
K (in pixels)         30.6   30.5   30.6   29.5   29.7   30.3   29.5   1.0    30.1
Relative error (%)    2.0    1.6    2.0    −1.6   −1.0   1.0    −1.6   3.4    0.3

Table 4
Computed result and error of vertical displacement of the right rectangle by our scheme.

Measuring times       1      2      3      4      5      6      7      2σ     Average
K (in pixels)         53.8   52.5   53.9   53.6   52.7   52.4   52.5   1.3    53.1
Relative error (%)    1.5    −0.9   1.6    1.1    −0.5   −1.1   −0.9   2.5    0.1

Table 5
Computed result and error of vehicle speed by our scheme (ground-truth value 22.5 m/s).

Measuring times       1      2      3      4      5      6      7      2σ     Average
Speed (m/s)           22.0   21.2   22.7   21.8   22.8   21.9   22.9   1.3    22.2
Relative error (%)    −2.2   −5.7   0.8    −3.1   1.3    −2.6   1.7    5.6    −1.4

Table 6
Computed result and error of vehicle speed by our scheme (ground-truth value 35.5 m/s).

Measuring times       1      2      3      4      5      6      7      2σ     Average
Speed (m/s)           35.9   34.9   35.7   34.8   35.8   34.4   35.1   1.2    35.2
Relative error (%)    1.1    −1.6   0.5    −1.9   0.8    −3.0   −1.1   3.2    −0.8

The unknown parameter z is computed using the calibration method of Ref. [5], from the true length of the vehicle and its pixel length in the image.

To obtain the ground truth of the speed and verify our experimental results, we also use a video camera to record a progressive scan image sequence. The frame rate is set to 30 fps, so the speed of the moving vehicle is v = d/T with T = 1/30 s, where d is the displacement of the object between two adjacent image frames. Since motion blur can occur even at a frame rate of 30 fps, image deblurring is performed when the object moves very fast in the image sequence. The computed vehicle speeds are reported in Tables 5 and 6. By comparison, we can see again that our scheme performs the speed measurement accurately and efficiently for multiple moving objects.
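For concreteness, one plausible reading of this calibration under the pinhole model of Section 2 is sketched below; the formula and the sample numbers are ours, not taken from Ref. [5]:

```python
def depth_from_length(f, s_x, true_length, pixel_length):
    """Pinhole estimate of z: an object of true length L that spans
    l pixels horizontally gives z ~ f * L / (l * s_x)."""
    return f * true_length / (pixel_length * s_x)

# e.g. f = 10 mm, s_x = 0.011 mm, a 4.5 m car spanning 300 pixels:
# depth_from_length(10e-3, 0.011e-3, 4.5, 300) -> about 13.6 m
```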
5. Conclusions

In this paper, a parallel and accurate speed calculation scheme is proposed based on image matting, a geodesic active contour and a single interlaced scan image. The scheme is suitable for practical speed measurement and can be applied in traffic monitoring systems.

Acknowledgement

This research is supported by the 2010 Northeast Forestry University Youth Talents Foundation and the 2010 Heilongjiang Province Natural Science Foundation under grant No. F201005.

References

[1] D. Sawicki, Traffic Radar Handbook: A Comprehensive Guide to Speed Measuring Systems, AuthorHouse, 2002.
[2] T. Schoepflin, D. Dailey, Dynamic camera calibration of roadside traffic management cameras for vehicle speed estimation, IEEE Trans. Intell. Transport. Syst. 4 (2) (2003) 90–98.
[3] J. Dailey, S. Pumrin, An algorithm to estimate mean traffic speed using uncalibrated cameras, IEEE Trans. Intell. Transport. Syst. 1 (2) (2000) 98–107.
[4] Z. Zhu, B. Yang, G. Xu, D. Shi, A real time vision system for automatic traffic monitoring based on 2D spatiotemporal images, in: Proc. of Workshop on Computer Vision, 1996, pp. 162–167.
[5] H.Y. Lin, K.J. Li, C.H. Chang, Vehicle speed detection from a single motion blurred image, Image Vis. Comput. 26 (2008) 1327–1337.
[6] H.Y. Lin, C.H. Chang, Automatic speed measurements of spherical objects using an off-the-shelf digital camera, in: Proc. of 2005 IEEE Conf. on Mechatronics, 2005, pp. 66–71.
[7] J. Sun, J. Jia, C.K. Tang, H.Y. Shum, Poisson matting, ACM Trans. Graphics 23 (3) (2004) 315–321.
[8] A. Levin, D. Lischinski, Y. Weiss, A closed-form solution to natural image matting, IEEE Trans. PAMI 30 (2) (2008) 228–242.
[9] J.A. Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press, Cambridge, 1999.
[10] V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, Int. J. Comput. Vis. 22 (1) (1997) 61–79.
[11] C. Li, C. Xu, C. Gui, M.D. Fox, Level set evolution without re-initialization: a new variational formulation, in: Proc. of 2005 IEEE CVPR, vol. 1, 2005, pp. 430–436.
[12] T.P. Wallace, P. Wintz, An efficient three-dimensional aircraft recognition algorithm using normalized Fourier descriptors, Comput. Graphics Image Process. 13 (1980) 99–126.
[13] F. Mokhtarian, A.K. Mackworth, A theory of multi-scale, curvature-based shape representation for planar curves, IEEE Trans. PAMI 14 (5) (1992) 789–805.
[14] S. Dudani, K. Breeding, R.B. McGhee, Aircraft identification by moment invariants, IEEE Trans. Comput. C-26 (1977) 39–45.
[15] M.K. Hu, Visual pattern recognition by moment invariants, IRE Trans. Inf. Theory 8 (1962) 179–187.
