Optik
journal homepage: www.elsevier.de/ijleo
Article history:
Received 12 July 2010
Accepted 5 December 2010

Keywords:
Machine vision
Speed measurement
Geodesic active contour
Image matte
Interlaced scan

Abstract

In this paper, we propose a novel scheme for the parallel speed measurement of multiple moving objects with a translational motion. First, the scheme uses one interlaced scan CCD camera to obtain only one interlaced scan image of the multiple moving objects, from which the odd and even field images are extracted and resized. Second, image matting is applied to these two field images to extract every moving object's silhouette simultaneously. Third, a geodesic active contour is applied to the two field alpha matte images to extract every moving object's contour edge simultaneously, and shape matching is performed with a moment shape descriptor for every object's contour edge in the two field alpha matte images. Finally, the distance between the centroids of the two matched silhouettes is computed, and every object's speed is calculated from this distance and the camera imaging parameters. Simulation and real experiments show that our scheme can measure the speed of multiple objects with translational motion accurately and efficiently.

© 2011 Elsevier GmbH. All rights reserved.
1. Introduction

The speed measurement of moving objects is of central importance in many fields such as object tracking and intelligent traffic. Two popular but expensive speed measurement schemes use a RADAR (Radio Detection and Ranging) or LIDAR (Light Detection and Ranging) device to measure the speed of one moving object (e.g. a vehicle) [1]. A RADAR device bounces a radio signal off a moving object, and the reflected signal is picked up by a receiver. This receiver measures the frequency difference between the original and reflected signals and converts it into the speed. A LIDAR device records how long it takes for a light pulse to travel to the object and come back. By making a series of distance measurements and comparing them, LIDAR can perform the speed measurement accurately.

In recent years, some image-based schemes have been proposed for speed measurement due to the availability of cheap and high-performance imaging hardware [2–4]. These schemes usually use reference information in the scene, such as the distance traveled between adjacent image frames. The object's speed is computed by dividing this traveled distance by the inter-frame time. Two or more image frames are required in these schemes. However, due to the limited imaging frame rate (usually 30 frames per second), the video camera has to be installed far away from the moving object to keep the object within adjacent image frames.

Lin et al. propose one scheme for vehicle speed [5] or spherical ball speed [6] measurement based on a single motion blurred image. In their work, one single image taken by a stationary camera is used for speed measurement. Due to the relative motion between the camera and the moving object during the camera exposure time, motion blur occurs in a dynamic image region. This motion blur provides a visual cue for the object's speed measurement. An approximate object region is first segmented, and blur parameters are calculated from the motion blurred sub-image. This image is then deblurred and used to derive other parameters. Finally, the object's speed is calculated using the imaging geometry, camera pose and blur extent in the image.

All the above machine-vision-based speed measurement schemes perform the speed measurement only for a single moving object, which results in low measurement efficiency. To address this issue, in this paper we propose an efficient parallel speed measurement scheme for multiple moving objects with a translational motion (e.g. multiple moving vehicles), based on a single motion blurred interlaced scan image. In the next section, we briefly outline the formulation of speed measurement based on machine vision. Section 3 describes our novel and efficient scheme in detail. Experimental results and comparisons are illustrated in Section 4. Some conclusions are drawn in Section 5.

∗ Corresponding author at: Information and Computer Engineering Institute, P. O. Box 319, Northeast Forestry University, Harbin, 150040, China. Tel.: +86 0451 82191523; fax: +86 0451 82190421. E-mail address: bit zhao@yahoo.com.cn
doi:10.1016/j.ijleo.2010.12.022

2. Formulation of machine-vision-based speed measurement

The machine-vision-based speed measurement is based on a pinhole camera model, as shown in Fig. 1. Define the angle between the object's motion direction and the image plane of the camera as α.
2012 P. Zhao / Optik 122 (2011) 2011–2015
v = d/T ≈ zKs_x/(Tf cos α cos β)   (1)

v = d/T ≈ zKs_x/(Tf cos β)   (2)

where d is the distance traveled by the object during the time interval T, z is the distance from the camera to the object, K is the object's displacement in the image in pixels, s_x is the physical size of one pixel on the sensor, f is the focal length of the camera, α is the angle between the object's motion direction and the image plane, and β is the angle between the object's displacement in the image and the image x axis. Eq. (2) is the special case in which the motion is parallel to the image plane (α = 0).

Fig. 2. Real interlaced scan vehicle image. (a) Original blurred interlaced scan car image; (b) its odd field image; (c) its even field image.
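As a concrete illustration, Eq. (1) can be evaluated directly once the calibration parameters are known. The function name and all numerical values below are purely hypothetical:

```python
import math

def object_speed(K, z, s_x, f, T, alpha=0.0, beta=0.0):
    """Speed from Eq. (1): v = z*K*s_x / (T*f*cos(alpha)*cos(beta)).
    Angles are in radians; alpha = 0 reduces this to Eq. (2)."""
    return (z * K * s_x) / (T * f * math.cos(alpha) * math.cos(beta))

# Hypothetical example: a 30-pixel displacement, object 10 m away,
# 10 um pixels, a 10 mm lens, and an inter-field time of 1/60 s,
# with the motion parallel to the image plane and to the image x axis.
v = object_speed(K=30, z=10.0, s_x=10e-6, f=0.01, T=1 / 60)
# v = (10 * 30 * 10e-6) / ((1/60) * 0.01) = 18.0 m/s
```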
3. The proposed scheme

In this section, we propose a novel parallel speed measurement scheme for multiple moving objects based on one interlaced scan image frame. This scheme uses only one image frame, yet it retains the advantage of the speed measurement schemes based on two image frames [2–4]: it uses the distance traveled by every object between the odd and even field images, and every object's speed is computed by dividing this traveled distance by the inter-field time. The four implementation steps are explained in detail as follows.

3.1. Odd and even field images

First, the odd and even field images are extracted from an interlaced scan image frame I(x, y), y = 0, 1, ..., N − 1: I_o(x, y) = I(x, y) for y = 2k, and I_e(x, y) = I(x, y) for y = 2k + 1, k = 0, 1, 2, ..., N/2 − 1. Then I_o(x, y) and I_e(x, y) are enlarged to form I_o1(x, y) and I_e1(x, y), identical in size with I(x, y), using an interpolation algorithm; nearest-neighbor, bilinear or bi-cubic interpolation can be used. For instance, Fig. 2 illustrates an interlaced scan blurred vehicle image and its corresponding resized odd and even field images.

3.2. Image matting

Image matting schemes take as input one image I which is assumed to be a composite of a foreground image F and a background image B. The color of the ith pixel is assumed to be a linear combination of the corresponding foreground and background colors [7]:

I_i = α_i F_i + (1 − α_i)B_i   (3)

where α_i is the pixel's foreground opacity, a fractional value between 0 and 1. These values can be physically interpreted as the percentage of the capture time during which the foreground is exposed at each pixel. Since the foreground always occludes the background, the alpha matte also describes the partial occlusion of the background scene by the foreground object during the exposure. Essentially, then, the alpha matte abstracts the object's silhouette information: pixels in the object's internal region have value 1, pixels around the boundary of a blurred object have fractional values, and pixels in the background have value 0. For example, the alpha matte for a motion blurred image is illustrated in Fig. 3, where the alpha matte is produced by the "Scribble matte" scheme [8].
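The field extraction and resizing of Section 3.1 can be sketched in a few lines; `split_and_resize` is a hypothetical helper, and nearest-neighbor interpolation (row duplication) stands in for the bilinear or bi-cubic alternatives:

```python
import numpy as np

def split_and_resize(frame):
    """Extract the odd and even field images from an interlaced frame
    (rows y = 2k and y = 2k + 1 respectively) and enlarge each back to
    the full frame height by nearest-neighbor row duplication."""
    I_o = frame[0::2, :]              # odd field: rows 0, 2, 4, ...
    I_e = frame[1::2, :]              # even field: rows 1, 3, 5, ...
    I_o1 = np.repeat(I_o, 2, axis=0)  # resized to the original height
    I_e1 = np.repeat(I_e, 2, axis=0)
    return I_o1, I_e1
```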
Fig. 3. Moving car image. (a) Original blurred car image; (b) its scribble image; (c) its final alpha matte.
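A toy numerical reading of the compositing model in Eq. (3), with made-up intensities for three pixels (one inside the object, one on the blurred boundary, one in the background):

```python
import numpy as np

F = np.array([200.0, 200.0, 200.0])  # foreground intensities
B = np.array([50.0, 50.0, 50.0])     # background intensities
alpha = np.array([1.0, 0.5, 0.0])    # foreground opacity per pixel

# Eq. (3), applied per pixel: I_i = alpha_i*F_i + (1 - alpha_i)*B_i
I = alpha * F + (1.0 - alpha) * B
# I = [200.0, 125.0, 50.0]: the half-covered boundary pixel is an
# even blend of foreground and background
```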
3.3. Contour extraction and shape matching

A geodesic active contour evolves a level set functional ϕ according to

∂ϕ/∂t + F|∇ϕ| = 0   (4)

where the functional F is called the speed functional. For image segmentation, F depends on the image data and on the level set functional ϕ.

Early geodesic active contours usually evolve the level set functional with a partial differential equation (PDE). Compared to pure PDE-driven level set schemes, variational schemes are more convenient and natural for incorporating additional information. In this work, we use the variational level set formulation of curve evolution proposed by Ming et al. [11]. For example, Fig. 4 illustrates the convergence of this variational geodesic active contour in an image with two rectangles.

Next, every object's contour edge in the odd field alpha matte must be matched with its corresponding contour edge in the even field alpha matte. Shape descriptors such as the Fourier descriptor [12], the curvature descriptor [13] or the moment descriptor [14,15] can be used to fulfill this contour edge matching task. In this paper, the shape matching is performed by using a moment shape descriptor for every object's contour edge in the two field alpha matte images.

3.4. Speed calculation

After the shape matching is fulfilled for every object's contour edge in the two field alpha matte images, we calculate every object's two centroids in the two field alpha mattes:

x̄_o1 = Σ_{x,y} x·α_o1(x, y) / Σ_{x,y} α_o1(x, y),   x̄_e1 = Σ_{x,y} x·α_e1(x, y) / Σ_{x,y} α_e1(x, y)
ȳ_o1 = Σ_{x,y} y·α_o1(x, y) / Σ_{x,y} α_o1(x, y),   ȳ_e1 = Σ_{x,y} y·α_e1(x, y) / Σ_{x,y} α_e1(x, y)   (5)

Then we compute the displacement between these two centroids:

K = Δx = |x̄_o1 − x̄_e1|   (6)

cos β = Δx / √(Δx² + Δy²)   (7)

where Δy = |ȳ_o1 − ȳ_e1| is defined analogously. The other parameters in Eq. (1) or Eq. (2) can be obtained by camera calibration. Therefore, we can use Eq. (1) or Eq. (2) to perform the speed measurement for every moving object, with T = t_e/2, where t_e is the exposure time of the interlaced scan camera.
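The centroid and displacement computations of Eqs. (5)–(7) amount to a few array reductions. The sketch below assumes each input matte contains a single matched silhouette (so the object's centroid is the alpha-weighted mean position); the helper names are ours, not the paper's:

```python
import numpy as np

def matte_centroid(alpha):
    """Alpha-weighted centroid of a matte, as in Eq. (5)."""
    ys, xs = np.mgrid[0:alpha.shape[0], 0:alpha.shape[1]]
    s = alpha.sum()
    return (xs * alpha).sum() / s, (ys * alpha).sum() / s

def displacement(alpha_o1, alpha_e1):
    """Centroid displacement components and direction cosine between
    the odd and even field mattes, following Eqs. (6) and (7)."""
    x_o, y_o = matte_centroid(alpha_o1)
    x_e, y_e = matte_centroid(alpha_e1)
    dx, dy = abs(x_o - x_e), abs(y_o - y_e)   # Eq. (6)
    d = float(np.hypot(dx, dy))
    cos_beta = dx / d if d > 0 else 1.0       # Eq. (7)
    return dx, dy, cos_beta
```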
Table 1
Computed result and error of horizontal displacement of left rectangle by our scheme.
K (in pixel) 30.3 30.5 30.1 29.8 29.7 30.3 29.7 0.7 30.1
Relative error (%) 1.0 1.6 0.3 −0.6 −1.0 1.0 −1.0 2.2 0.2
Table 2
Computed result and error of vertical displacement of right rectangle by our scheme.
K (in pixel) 53.2 53.5 53.5 53.6 52.7 52.6 52.7 0.8 53.1
Relative error (%) 0.3 0.9 0.9 1.1 −0.5 −0.7 −0.5 1.7 0.2
Fig. 5. Red rectangle image. (a) Original red rectangle image; (b) translated rectangle image. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of the article.)

Fig. 7. Real interlaced scan vehicle image. (a) Original vehicle image; (b) its alpha matte in the odd field image; (c) its alpha matte in the even field image.
Table 3
Computed result and error of horizontal displacement of left rectangle by our scheme.
K (in pixel) 30.6 30.5 30.6 29.5 29.7 30.3 29.5 1.0 30.1
Relative error (%) 2.0 1.6 2.0 −1.6 −1.0 1.0 −1.6 3.4 0.3
Table 4
Computed result and error of vertical displacement of right rectangle by our scheme.
K (in pixel) 53.8 52.5 53.9 53.6 52.7 52.4 52.5 1.3 53.1
Relative error (%) 1.5 −0.9 1.6 1.1 −0.5 −1.1 −0.9 2.5 0.1
Table 5
Computed result and error of vehicle speed by our scheme (ground-truth value 22.5 m/s).
Speed (m/s) 22.0 21.2 22.7 21.8 22.8 21.9 22.9 1.3 22.2
Relative error (%) −2.2 −5.7 0.8 −3.1 1.3 −2.6 1.7 5.6 −1.4
Table 6
Computed result and error of vehicle speed by our scheme (ground-truth value 35.5 m/s).
Speed (m/s) 35.9 34.9 35.7 34.8 35.8 34.4 35.1 1.2 35.2
Relative error (%) 1.1 −1.6 0.5 −1.9 0.8 −3.0 −1.1 3.2 −0.8