
DETECTION OF MOVING TARGETS IN A RADAR IMAGE SEQUENCE

D. TOUAHRI, B. TIGHIOUART

08 Mai 45 University, 24000 Guelma, Algeria
LRI, Badji Mokhtar University, Annaba, Algeria
tdjamel2007@yahoo.fr, b_tighiouart@yahoo.fr

Abstract: Image processing is of major importance; it uses the image as a data source to reach a goal after several processing steps.
In this paper we are interested in radar images for the monitoring field: we seek to detect a moving target in an area from a radar image sequence and to follow its trajectory.
The images were taken under different conditions by the same acquisition system, an inverse synthetic aperture radar (ISAR), and were improved by filtering and by a grayscale enhancement function. Finally, the results are presented as the variation of the coordinates of the moving target over time, and they are discussed.

Keywords: Imaging radar, Pattern recognition, Region segmentation, Spatio-temporal monitoring of visual clues.

1 INTRODUCTION
Detecting and tracking moving targets is applied in many fields, both civil and military, and relies on radar images as the data source.
A radar system is based on artificial illumination of the scene and on the analysis of the reflected waves; this field is the subject of much research [SA09] because of the good quality of the images produced.
Remote sensing is the scientific discipline devoted to the monitoring, analysis, interpretation and management of the environment from measurements obtained with airborne, space, land and maritime platforms. It involves the acquisition of information at a distance, depending on several parameters.

2 IMAGE PRETREATMENT
Radar images are affected by a multiplicative noise that increases with the intensity, called speckle; it renders the usual image processing methods, such as segmentation or classification, ineffective. We must therefore improve the quality of these images by enhancing the existing contrasts in order to obtain good results.
Grayscale enhancement and filtering are the two techniques needed in this paper.

2.1 Enhancement of grayscale

This consists in applying a transformation to the intensity of each image pixel in order to spread the image intensities over the whole grayscale range, from 0 to 255 levels [MA97].
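As an illustration, the following is a minimal sketch of such a linear stretch in Python, assuming 8-bit grayscale images stored as NumPy arrays (the exact transformation used in the paper is not specified):

import numpy as np

def stretch_grayscale(image):
    # Linearly spread the pixel intensities over the full [0, 255] range.
    img = image.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:
        # Flat image: nothing to stretch.
        return np.zeros_like(image, dtype=np.uint8)
    return ((img - lo) * 255.0 / (hi - lo)).astype(np.uint8)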

2.2 Improving images by filtering

This consists in modifying the values of some pixels. In our case we have chosen the median filter, a low-pass filter realized by a non-linear operation, which eliminates noise while preserving contours.
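For illustration, a median filtering pass can be written with SciPy as follows (a sketch; the 3x3 window size is an assumption, the paper does not state the one it used):

import numpy as np
from scipy.ndimage import median_filter

def denoise(image, size=3):
    # Replace each pixel by the median of its size x size neighbourhood:
    # impulsive noise is removed while contours are preserved.
    return median_filter(image, size=size)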
3 MOTION DETECTION IN A SPATIO-TEMPORAL IMAGE SEQUENCE
Motion detection is not limited to a few images; it consists in locating, over time, the position of one or more landmarks. This correspondence across the spatio-temporal image sequence can be used for various purposes: it allows the motion parameters or the displacement over time to be recovered, and thus the trajectory of a moving object to be identified and even predicted.
The problem of motion detection in an image sequence is to separate, in each image of the sequence, the moving areas from the static areas; at every moment, each pixel must be labeled with a binary fixed/mobile identifier. If the radar is fixed, this detection can be made from the temporal differences of each pixel [LA07].
3.1 Algorithm « Frame Difference »
Most techniques for detecting motion in an image sequence I(x, y, t) are based on an estimate of the modulus of the temporal gradient. If the lighting conditions are constant between two consecutive images, then a significant change in the grayscale of a pixel between the two images implies the existence of motion.
Other methods are applied to binary images; they use the image-difference concept and are based on the logical operators (XOR) and (AND) [LA07].
At moments t and t-dt, noting I(x, y, t) the light intensity at moment t of a point p with coordinates (x, y) in the image plane, the temporal difference is expressed by:

DF(x, y, t) = |I(x, y, t) - I(x, y, t-dt)|

- I(x, y, t-dt) is the brightness of pixel (x, y) in image I at moment t-dt.
- I(x, y, t) is the brightness of pixel (x, y) in image I at moment t.
- DF(x, y, t) is the brightness of pixel (x, y) in the difference image at moment t.

The image DF(It, It-dt) obtained is zero at any point where the input signal I is constant. Where the points (pixels) of the two images differ, ones ("1") are obtained.
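A minimal sketch of this temporal difference and its binarization, assuming two co-registered grayscale frames and an illustrative threshold value (the paper does not give the threshold it used):

import numpy as np

def frame_difference(curr, prev, thresh=30):
    # DF(x, y, t) = |I(x, y, t) - I(x, y, t-dt)|, computed on signed integers
    # to avoid unsigned underflow, then binarized into a fixed/mobile mask.
    df = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (df > thresh).astype(np.uint8)  # 1 where the pixel changed, 0 where it is static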
We can express the difference using the logical operator (XOR) by the expression:

DF(x, y, t) = I(x, y, t) XOR I(x, y, t-dt)

The result of applying it to two binary images gives the initial and final positions of the moving object.
The image difference using (XOR and AND) is given by the expression:

DF(x, y, t) = [I(x, y, t) XOR I(x, y, t-dt)] AND [I(x, y, t) XOR background]
The resulting image is an indication of the final position of the moving object.
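A sketch of this combination on binary images, assuming the current frame, the previous frame and the background image have already been binarized (names and types are illustrative):

import numpy as np

def xor_and_difference(curr_bin, prev_bin, background_bin):
    # DF = (I_t XOR I_{t-dt}) AND (I_t XOR background):
    # changed pixels that also differ from the background, i.e. the final position.
    return np.logical_and(np.logical_xor(curr_bin, prev_bin),
                          np.logical_xor(curr_bin, background_bin)).astype(np.uint8)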

3.2 The connected component labelling

This consists in assigning the same label to pixels belonging to the same connected component.
The connected component labelling algorithm detects the adjacencies between pixels and assigns the label of the current point according to its neighboring points. The most traditional method is based on a sequential scan of the image. We suppose that each point p has neighbors which are the predecessors of p in a sequential scan of the image.
In order to reduce the number of scans of the image to two, we must construct an equivalence table T that manages the label equivalences that appear during the sequential scan.
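The two-pass scheme with an equivalence table can be sketched as follows, here for 4-connectivity with the left and top pixels as the already-scanned neighbors (a didactic sketch; library routines such as scipy.ndimage.label perform the same labelling):

import numpy as np

def label_components(binary):
    # Two-pass connected component labelling with an equivalence table T (here 'parent').
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    parent = [0]  # parent[k] is the representative of label k

    def find(k):
        # Follow the recorded equivalences up to the representative label.
        while parent[k] != k:
            k = parent[k]
        return k

    next_label = 1
    # First pass: provisional labels from the already-scanned neighbors (left, top).
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            neighbors = [l for l in ((labels[y, x - 1] if x > 0 else 0),
                                     (labels[y - 1, x] if y > 0 else 0)) if l > 0]
            if not neighbors:
                labels[y, x] = next_label
                parent.append(next_label)
                next_label += 1
            else:
                m = min(neighbors)
                labels[y, x] = m
                for l in neighbors:
                    parent[find(l)] = find(m)  # record the equivalence in T
    # Second pass: replace each provisional label by its representative.
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels

The resulting labels are not necessarily consecutive; they only need to be identical inside each connected component.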
3.3 Proposed Approach

- Image improvement.
- Calculation of the image difference using the (XOR and AND) function.
- Segmentation using connected component labelling.
- Localization of the main object.
- Calculation of the coordinates of the center of gravity of the main object, and tracing of the trajectory.
In order to calculate the coordinates of the center of gravity of the target, we must detect the bounding box of the main object.

Fig1: target coordinates (x1, y1), (x2, y2) in a 2D benchmark
The coordinates of the center of gravity are given by the expressions:
X = x1 + (x2 - x1)/2
Y = y1 + (y2 - y1)/2
After that, we trace the target's trajectory.
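Putting the steps together, a minimal end-to-end sketch of the proposed chain for one pair of frames, assuming grayscale NumPy images, an illustrative binarization threshold and the SciPy labelling routine (none of these parameter values come from the paper):

import numpy as np
from scipy.ndimage import label, median_filter

def track_target(prev, curr, background, thresh=30):
    # Return the center of gravity (X, Y) of the main moving object between two frames.
    def enhance(img):
        # Pretreatment: contrast stretching to [0, 255] followed by a 3x3 median filter.
        img = img.astype(np.float64)
        img = (img - img.min()) * 255.0 / max(img.max() - img.min(), 1e-9)
        return median_filter(img.astype(np.uint8), size=3)

    prev, curr, background = enhance(prev), enhance(curr), enhance(background)
    # Binarize, then combine: (curr XOR prev) AND (curr XOR background).
    b_prev, b_curr, b_back = prev > thresh, curr > thresh, background > thresh
    diff = np.logical_and(np.logical_xor(b_curr, b_prev), np.logical_xor(b_curr, b_back))
    # Connected component labelling; the largest component is taken as the main object.
    labels, n = label(diff)
    if n == 0:
        raise ValueError("no moving object detected")
    sizes = np.bincount(labels.ravel())[1:]          # component sizes, background excluded
    ys, xs = np.nonzero(labels == 1 + int(np.argmax(sizes)))
    x1, x2, y1, y2 = xs.min(), xs.max(), ys.min(), ys.max()
    # Center of gravity of the bounding box: X = x1 + (x2 - x1)/2, Y = y1 + (y2 - y1)/2.
    return x1 + (x2 - x1) / 2.0, y1 + (y2 - y1) / 2.0

The trajectory is then traced by applying this function to successive pairs of the sequence and plotting the resulting (X, Y) positions against time.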
Fig2: pair1 of ISAR images ((a), (b))

Fig3: pair2 of ISAR images ((b), (c))


3.4 Results
The results (Fig4, Fig5) concern three spatio-temporal images from an inverse synthetic aperture radar (ISAR) taken under good conditions. The target is a fighter aircraft.

Fig4: results of applying the proposed approach on pair1 ((a), (b)): grayscale enhancement + median filter; difference image ((b) XOR (a)) AND ((b) XOR background); connected component labelling and target location (calculation of the center of gravity).


Fig5: results of applying the proposed approach on pair2 ((b), (c)): grayscale enhancement + median filter; difference image ((c) XOR (b)) AND ((c) XOR background); connected component labelling and target location (calculation of the center of gravity); final result (the trajectory of the moving target).


4. CONCLUSION
The results are for images taken by a fixed radar of a very specific area. They appear to be encouraging; nevertheless, the systematic pretreatment may lead to the loss of relevant information and to many difficulties in localizing the main object because of the other objects in the radar area. Next, we intend to generalize this tracking approach by incorporating other features such as pattern, context and processing time.

5. BIBLIOGRAPHY
[BL05] I. Bloch et al., « Le traitement des images (tome 2) », Polycopié du cours ANIM, TSI - Télécom-Paris, version 5.0, 2005.
[GH08] A. Ghaleb, « Reconnaissance de cibles mobiles en imagerie Radar », DEMR / TSI, 2008.
[HI89] A. Hillinton et al., « Le traitement des images de télédétection : aperçu et perspectives », Ecole nationale supérieure de télécommunication de Bretagne, France, 1989.
[LA07] D. Laloui, « Détection de mouvement et poursuite vidéo », Ecole Militaire Polytechnique, 2007.
[MA97] N. Maher, « Filtrage et analyse des images radar », Mémoire de Maître ès Sciences, Faculté des études supérieures, Université Laval, 1997.
[PO04a] Y. Pointin, « Introduction aux radars », LaMP/OPGC CNRS/UBP, 2004; http://wwwobs.univ-bpclermont.fr/atmos/radar
[PO04b] L. Polidori, « Introduction à la télédétection spatiale », 2004.
[RO08] C. Ronse, « Documentation de Traitement d'Images », LSIIT UMR 7005 CNRS-ULP, Département d'Informatique de l'ULP; http://arthur.u-strasbg.fr, 2008.
[SA09] M. N. Saidi et al., « Système automatique de reconnaissance de cibles radar ; problématique de l'extraction de la forme », 2009.
[TO06a] A. Toumi et al., « Préparation des données Radar pour la reconnaissance/identification de cibles aériennes », Laboratoire E3I2 - EA 3876 (ENSIETA), 2006.
[TO06b] A. Toumi et al., « Classification des images ISAR pour la reconnaissance des cibles », (ENSIETA), 2006.
