
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 10, NO. 3, MAY 1999, p. 615

Object Detection Using Pulse Coupled Neural Networks


H. S. Ranganath and G. Kuntimad

Abstract: This paper describes an object detection system based on pulse coupled neural networks. The system is designed and implemented to illustrate the power, flexibility, and potential that pulse coupled neural networks have in real-time image processing. In the preprocessing stage, a pulse coupled neural network suppresses noise by smoothing the input image. In the segmentation stage, a second pulse coupled neural network iteratively segments the input image. During each iteration, with the help of a control module, the segmentation network deletes regions that do not satisfy the retention criteria from further processing and produces an improved segmentation of the retained image. In the final stage, each group of connected regions that satisfies the detection criteria is identified as an instance of the object of interest.

Index Terms: Image smoothing, image segmentation, iterative segmentation, object detection, pulse coupled neural network, pulse coupled neuron.

Manuscript received January 28, 1997; revised January 3, 1999.
H. S. Ranganath is with the Computer Science Department, University of Alabama, Huntsville, AL 35899 USA.
G. Kuntimad is with the Rocketdyne Division, Boeing North American, Huntsville, AL 35806 USA.
Publisher Item Identifier S 1045-9227(99)03181-1.

I. INTRODUCTION

If a digital image is applied as input to a two-dimensional Eckhorn's neural network, the network groups image pixels based on spatial proximity and brightness similarity [1]. During grouping, the network bridges small spatial gaps and minor local intensity variations. This is a desirable property that can be exploited in many window-based image processing applications.

When Eckhorn's model was analyzed from an image processing perspective, we noticed a few practical limitations [2]. Therefore, we modified Eckhorn's neuron model and proposed a new pulse coupled neuron (PCN) model for image processing applications [1]. We eliminated the feeding leaky integrators, assumed that all linking leaky integrators are identical, altered the operation of the threshold signal generator, and described a method for determining the lower limit for the maximum value of the threshold signal. In this model, every time a neuron pulses, its threshold signal is changed to a predetermined maximum value regardless of its value just prior to the pulsing.

During the last two years, we have used pulse coupled neural networks (PCNNs) for image smoothing, segmentation, and even feature extraction. A PCNN is a single-layered, two-dimensional, laterally connected network of pulse coupled neurons. There exists a one-to-one correspondence between the image pixels and network neurons; that is, each pixel is associated with a unique neuron and vice versa. Interested readers may find a detailed description of the PCNN and its operation in our previously published papers [1].

The PCNN-based iterative image smoothing technique is shown to smooth images without blurring, eroding, or dilating edges. The thin lines and curves present in the image that are often completely removed by the median filter degrade gracefully in the new approach [3]. The PCNN-based segmentation method, under certain conditions, guarantees perfect segmentation of the input image even when the intensity ranges of adjacent regions overlap. Perfect segmentation conditions are derived in [1]. The inclusion of an inhibition receptive field in the neuron model significantly improves the segmentation capabilities of the PCNN by compressing the intensity ranges of the individual regions in the image and by reducing the extent of overlap of the intensity ranges of adjacent regions. Simulation results indicate that the PCNN yields better segmentation results than most of the commonly used methods [2].

II. OBJECT DETECTION SYSTEM

A typical object or target recognition system consists of a detection system and a recognition system. The detection system finds potential objects in the input image and extracts a subimage from each potential object area. These subimages are further processed by the recognition system. This paper describes an object detection system which utilizes PCNNs for smoothing and segmentation of digital images.

A. System Organization

The object detection system, shown in Fig. 1, consists of five major components: 1) smoothing module; 2) segmentation module; 3) control module; 4) detection module; and 5) knowledge base.

Any a priori information about the objects to be detected can be stored in the knowledge base. However, it is desirable to store the attributes that are easily computed from the output of the PCNN. Moment-based features such as area, centroid, radius of gyration, and Hu's invariant moments can easily be computed for each segment identified by the PCNN [4]. The smoothing module, a PCNN, reduces the random noise present in the image before presenting the image to the segmentation module. The segmentation module is also a PCNN. During each pulsing cycle, the segmentation module forwards the resulting segments to the control module [2]. The control module, based on the computed attributes and the attributes stored in the knowledge base, checks each segment to determine whether the segment is acceptable as a part of the desired object. The segments that are not acceptable are eliminated from further consideration. This is accomplished by disabling the group of neurons in the PCNN that correspond
to the rejected segment. A neuron, once disabled, ceases to pulse during the subsequent pulsing cycles. This completes the current iteration. For the next iteration, the value of the linking coefficient β is updated and the image is resegmented.

The process of segmentation and elimination of segments continues until the termination condition is attained. The rules for updating β and the termination condition are image dependent. Note that the control module has the ability to enable the previously disabled neurons. This feature provides an elegant way to backtrack to recover from errors when needed. The segments obtained during the final iteration are presented to the detection module. The detection module identifies each grouping of the input segments which satisfies the constraints specified in the knowledge base as a potential object of interest.

Fig. 1. Object detection system.
Fig. 2. Input image to the object detection system.
Fig. 3. Smoothed image of the tank. The tank in Fig. 2 was smoothed with β = 0.02, r = 1.5, and C = 2 over four pulsing cycles.

B. System Operation

In order to understand the operation of each module and the overall system in detail, consider the task of detecting tanks in input images. Several 128 × 128 gray scale images (256 levels) obtained from a television camera were used as inputs to the detection system. One such image is shown in Fig. 2. The detection approach is described in steps below.

Step 1: The input image is smoothed by a PCNN to reduce the effects of random noise.

Image smoothing using a PCNN is accomplished by modifying the intensities of noisy pixels based on the neuron firing patterns. In general, the intensity of a noisy pixel is expected to be significantly different from the intensities of the surrounding pixels. Therefore, neurons corresponding to most of the noisy pixels do not pulse with their neighbors. In other words, neurons corresponding to noisy pixels neither capture neighboring neurons nor are captured by them. Image smoothing can be accomplished by adjusting the intensity of each pixel so that the corresponding neuron either captures its neighbors or is captured by them [2]. The intensity of each neuron is adjusted when it pulses, and the adjustment procedure is described below.

When a neuron pulses, if a majority of the neurons linked to it have not yet pulsed during the current pulsing cycle, the intensity is adjusted down by subtracting C, where C is a predetermined constant.

When a neuron pulses and a majority of the neurons linked to it have already pulsed during the current pulsing cycle, the intensity is adjusted up by adding C.

If there is no clear majority, then the intensity is not altered.

At the end of each pulsing cycle, the PCNN is reset, the modified image is applied to the PCNN, and the smoothing process is repeated. The smoothing process continues until the termination condition is attained; the unidirectional termination condition is used [2]. The image in Fig. 3 is obtained by smoothing the image in Fig. 2 with linking coefficient β = 0.02, linking receptive field radius r = 1.5, and C = 2. The smoothed image of the tank is applied as input to the segmentation module, which is also a PCNN.
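The adjustment rule above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the representation of firing order as a per-pixel pulse time within the cycle, and the eight-neighbor linking field are all our assumptions.

```python
# Hypothetical sketch of the PCNN smoothing adjustment rule.
# pulse_time[i][j] is assumed to hold the time step within the current
# pulsing cycle at which the neuron for pixel (i, j) pulsed.

def smooth_step(image, pulse_time, C=2):
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]  # adjusted copy of the image
    for i in range(rows):
        for j in range(cols):
            earlier = not_yet = linked = 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == 0 and dj == 0:
                        continue
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols:
                        linked += 1
                        if pulse_time[ni][nj] < pulse_time[i][j]:
                            earlier += 1   # linked neighbor pulsed first
                        elif pulse_time[ni][nj] > pulse_time[i][j]:
                            not_yet += 1   # neighbor had not yet pulsed
            if not_yet > linked / 2:       # majority not yet pulsed: down
                out[i][j] = image[i][j] - C
            elif earlier > linked / 2:     # majority already pulsed: up
                out[i][j] = image[i][j] + C
            # no clear majority: intensity unchanged
    return out
```

A bright noisy pixel pulses before its neighbors, so a majority of its linked neurons have not yet pulsed and its intensity is pulled down toward them; a dark noisy pixel pulses after its neighbors and is pulled up.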
Step 2: The image is segmented into several regions using a PCNN.

The general approach to segmenting images using a PCNN is to adjust the parameters of the network so that the neurons corresponding to the pixels of a given region pulse together and the neurons corresponding to the pixels of adjacent regions do not pulse together.

Assume that the image to be segmented consists of n regions and is applied as input to a PCNN. The network neurons pulse based on their feeding and linking inputs. Note that the feeding input to a neuron is equal to the intensity of its corresponding pixel. Due to the capture phenomenon, the neurons associated with each group of spatially connected pixels with similar intensities tend to pulse together. Thus, each contiguous set of synchronously pulsing neurons identifies a segment of the image. The value of the linking coefficient β has a significant effect on the segmentation process. If the value of β is low, the pixels that belong to a single region are partitioned by the PCNN into several segments. This is known as over-segmentation. On the other hand, if the value of β is high, the PCNN groups pixels that belong to two or more regions to form a single segment. This is known as under-segmentation. It may be possible to find the optimal value of β based on the intensity probability density function of the image and the geometry of the objects present in the image. However, the determination and use of the optimal value of β does not guarantee perfect results.

Since the determination of the optimal values of parameters is not always possible, an iterative approach is used to segment the image. In this approach, the values of all parameters except β remain constant throughout the object detection process. The PCNN begins the first iteration with a low value of β to ensure over-segmentation. Its value is increased slightly by the control module for each subsequent iteration. The rationale for choosing this approach is described later. At the end of a pulsing cycle, all the segments are presented to the control module.

Step 3: Image segments to be discarded are determined.

The knowledge base includes information to facilitate the evaluation of individual segments and the merging of adjacent segments. The actual size and shape of the tank in the image vary depending on the orientation, the imaging system, and the imaging geometry. Model-based algorithms and special purpose hardware are available for the real-time estimation of the values of these attributes for any desired object. Therefore, it was decided to store in the knowledge base 1) the maximum and minimum sizes of the tank; 2) the maximum value of the radius of gyration of segments; and 3) the maximum distance between the centroids of adjacent segments that are candidates for merging. The control module examines each segment and discards segments that violate any of the following constraints.

Area constraint: The upper and lower limits of the area of the tank in the image are determined by the imaging system parameters and the distance between the tank and the camera. If the area of a segment exceeds the upper limit, it is unlikely to represent a tank. Therefore, such large segments are eliminated from further consideration. For the set of images used in simulation, the lower and upper limits for the area of the tank were determined to be 480 and 700 pixels, respectively.

Elongation constraint: In addition to its area, the elongation (spatial extent) of a segment is an important property. The radius of gyration of a segment is a good measure of the segment's elongation. For a given area, the radius of gyration assumes its minimum value when the segment is circular. Its value is maximum when all pixels are collinear. A segment whose radius of gyration exceeds the expected maximum value of the radius of gyration of the tank cannot represent the tank or a part of the tank. The maximum value of the radius of gyration of a single segment (which is the same as the radius of gyration of the tank) was determined to be 14 pixels.

In order to eliminate a segment from further consideration, the feeding inputs (pixel intensities) of the corresponding neurons are set to a unique negative value. This also allows the control module to enable the previously disabled segments selectively.

If the number of active segments at the end of the current iteration is less than a prespecified value (25 segments), the segmentation process is terminated. The resulting segments are input to the detection module for further processing. Otherwise, the value of β is increased by a constant amount and the process of segmentation (Step 2) and segment elimination (Step 3) is repeated.

The justification for gradually increasing the value of β starting from a low initial value is as follows. It is assumed that there is some contrast between the regions of the tank and the surrounding background regions. If this is not true, the tank cannot be distinguished from the background by any method. When the contrast between the tank and its surrounding regions is low, the risk of the tank or a part of the tank pulsing with the background increases. Therefore, a low initial value of β is chosen to ensure that the tank segments do not merge with the background segments due to increased linking activity. During the early iterations, any segment that is larger than a tank represents homogeneous background regions such as sky and meadow. Similarly, large and elongated regions represent features such as roads and rivers. Thus, they may be eliminated from further consideration. After eliminating large homogeneous regions, the image is fragmented and consists of many less homogeneous regions. The value of β is then increased to reduce fragmentation.

In the first iteration, the segmentation of the image in Fig. 3 with the initial parameter values yielded 61 segments, as shown in Fig. 4(a). Six regions that did not satisfy the size or elongation constraints were deleted. The black regions in Fig. 4(b) represent the deleted regions. Note that more than 50 percent of the pixels were eliminated from further consideration. Since the number of segments was greater than 25, the image in Fig. 4(b) was segmented again. The value of β was increased by 0.03 at the end of each iteration. The results of five iterations are summarized in Table I. Note that the number of segments decreases as the processing continues from one iteration to the next. During the fifth iteration, only one segment was deleted, to yield a total of 21 segments as shown in Fig. 4(c). Therefore, the segmentation process was terminated.
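The control loop over Steps 2 and 3 can be sketched as follows. The names `segment` and `acceptable` are hypothetical placeholders for the PCNN pulsing pass and the knowledge-base check (area, radius of gyration), and the default initial β is an assumed value; the 0.03 increment and the 25-segment threshold are the values reported in the text.

```python
# Sketch of the iterative segmentation loop run by the control module.
# `segment(image, beta)` and `acceptable(seg)` are hypothetical placeholders.

def iterative_segmentation(image, segment, acceptable,
                           beta0=0.02, d_beta=0.03, max_segments=25):
    beta = beta0
    while True:
        segments = segment(image, beta)                # one PCNN pulsing pass
        kept = [s for s in segments if acceptable(s)]  # control-module check
        # Rejected segments are disabled by giving their pixels a unique
        # negative feeding input so they never pulse again (not shown here).
        if len(kept) < max_segments:
            return kept          # hand the surviving segments to detection
        beta += d_beta           # raise linking strength to reduce fragmentation
```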
Step 4: Detection of a group of connected regions that could form an object of interest.

Each segment received by the detection module may be a tank or a part of a tank. The function of the detection module is to select groups of segments that could constitute a potential tank. This task is accomplished in two steps: cluster seeking and cluster analysis. First, in cluster seeking, the input segments are grouped into several clusters so that segments that belong to different clusters, together, cannot form a tank (based on the attributes in the knowledge base). After the clusters are formed, for each cluster, all combinations of segments that satisfy the constraints given in the knowledge base are detected as potential tanks (cluster analysis).

Fig. 4. Iterative segmentation. (a) Segments at the end of the first iteration. (b) Segments remaining at the end of the first iteration after elimination by the control module. (c) Segments remaining at the end of the final iteration.
TABLE I. RESULTS OF ITERATIVE SEGMENTATION.
Fig. 5. Cluster seeking. The clusters that result from the segments in Fig. 4(c). The numbers of segments (in addition to the root segment) in the clusters are (a) 3, (b) 0, (c) 3, and (d) 5.

1) Cluster Seeking: There are (2^n - 1) nonempty subsets that can be formed with n input segments. In real-time or near real-time image processing systems, it is not feasible to evaluate each subset to determine whether the subset can constitute a tank. Therefore, it is necessary to reduce the number of segment combinations that need to be analyzed. This is accomplished by grouping the segments into several clusters so that only the segments that belong to the same cluster can be combined to form a tank. The method used to form a cluster is described below.

The set of input segments is sorted in nonincreasing order of area.

The segment that has the largest area in the input set forms the root segment of a new cluster.

A segment in the input set is included in the cluster of the current root segment if the union of the segment and the root segment does not violate the area, elongation, and separation constraints. The separation constraint states that the distance between the centroids of the root and any segment in the cluster must be less than the prespecified value (40 pixels). The process continues until all the segments in the input set are processed. This completes the formation of a new cluster. Now, the root segment of the cluster is deleted from the set of input segments.

The largest possible object that can be assembled from the segments of a cluster consists of all the segments in the cluster. Therefore, if the sum of the areas of all the segments in a cluster is less than the minimum area of the tank, the cluster is eliminated.

If the sum of the areas of all the segments remaining in the input set is less than the minimum area of the tank, the clustering algorithm is terminated. Otherwise, control is passed to Step 2 to identify the next cluster.
It may be noted that the clusters do not form mutually exclusive subsets of the input segment set. It is also possible that a cluster may consist of only the root segment.

The cluster seeking algorithm processed the 21 segments of the image in Fig. 4(c) and identified the four clusters shown in Fig. 5(a)-(d). In each figure, the bright region represents the root segment of the cluster, and the boundary of each member segment of the cluster is highlighted. There are three member segments in the first cluster [Fig. 5(a)], zero member segments in the second cluster [Fig. 5(b)], three member segments in the third cluster [Fig. 5(c)], and five member segments in the fourth cluster [Fig. 5(d)].

Fig. 6. Regions detected as potential tanks. (a)-(d) Potential tanks in the first cluster. (e) Potential tank in the second cluster. (f) Potential tank in the third cluster.
Fig. 7. Regions that contain the tank. (a)-(f) Potential tanks in the fourth cluster.

2) Cluster Analysis: The union of every subset of segments in a cluster constitutes a potential tank if the union satisfies the following conditions.

The union must include the root segment. This eliminates duplication by ensuring that any subset of segments that can be formed from the segments of a cluster cannot be formed from the segments of any other cluster.

The union of segments must satisfy the area and elongation constraints.

The union of segments must form a connected region.

The application of the above constraints to the four clusters in Fig. 5 identified only 12 of the 49 possible unions of segments as potential tanks, as shown in Figs. 6 and 7.
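The cluster-analysis enumeration can be sketched as follows. Since every candidate union contains the root plus any subset of the members, a cluster with k members yields 2^k candidates; with member counts 3, 0, 3, and 5 this gives 8 + 1 + 8 + 32 = 49 candidate unions, matching the count in the text. The name `passes_constraints` is a hypothetical placeholder for the area, elongation, and connectivity checks.

```python
# Sketch of cluster analysis: test every root-containing union of a
# cluster's segments against the knowledge-base constraints.
from itertools import combinations

def analyze_cluster(root, members, passes_constraints):
    detections = []
    for k in range(len(members) + 1):
        for combo in combinations(members, k):
            union = (root,) + combo            # root is always included
            if passes_constraints(union):
                detections.append(union)
    return detections
```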
III. CONCLUSIONS AND RECOMMENDATIONS

The PCNN-based object detection system described in this paper illustrates that PCNNs can be easily integrated into image processing systems. The iterative segmentation approach, to a great extent, has alleviated the problem of determining the optimal value of the linking coefficient. The use of very simple knowledge has succeeded in reducing the number of regions that need to be processed by the recognition system. The use of additional knowledge is likely to improve the performance further.

REFERENCES

[1] H. S. Ranganath and G. Kuntimad, "Image segmentation using pulse coupled neural networks," in Proc. Int. Conf. Neural Networks, Orlando, FL, June-July 1994, pp. 1285-1290.
[2] G. Kuntimad, "Pulse coupled neural network for image processing," Ph.D. dissertation, Dept. Comput. Sci., Univ. Alabama, Huntsville, 1995.
[3] H. S. Ranganath, G. Kuntimad, and J. L. Johnson, "Pulse coupled neural networks for image processing," in Proc. IEEE Southeastcon, Raleigh, NC, Mar. 1995.
[4] M. K. Hu, "Visual pattern recognition by moment invariants," IRE Trans. Inform. Theory, vol. IT-8, pp. 179-187, 1962.
[5] H. S. Ranganath and G. Kuntimad, "Iterative segmentation using pulse coupled neural networks," in Applicat. Sci. Artificial Neural Networks II, Proc. SPIE Aerosense, Orlando, FL, vol. 2760, pp. 543-554, Apr. 1994.
