OVERVIEW
1.1 INTRODUCTION
Many techniques have been applied to solve the problem of traffic congestion. Sensors
based on closed-loop control algorithms were designed around the emission of ultrasonic
waves, magnetic loops, and similar principles, but they suffer from high installation and
maintenance costs and poor accuracy under varying traffic conditions. Hence, sensors based
on image processing were considered an attractive alternative.
In the past, however, image-based sensors also suffered from the complexity of image
processing algorithms and the high cost of the hardware needed for processing. Due to recent
developments in the field of computing, hardware costs have fallen significantly and
dedicated image processing software is now available. This has made image
processing a clear winner for solving traffic congestion problems.
Two main trends are noticeable in studies related to the measurement of traffic flow. One
aims at algorithms that use reduced parts of the image (Inigo, 1985; Inigo, 1989;
Michalopoulos, 1991); the other is concerned with algorithms that use the complete image
(Blosseville and Lenoir, 1989; Hoose, 1991). According to the available literature, there
does not yet exist a system that can overcome all the inconvenient distortions due to
perspective, changing weather conditions, shades and reflections, varied vehicle shapes,
processing time, and so on; each system seems to have solved these problems only
partially. Therefore the problem is still open. However, due to the expectations of
commercialization of these systems, the most recent results on these topics have been kept
secret [1][2].
Image processing using a powerful tool like Matlab can give very good results for the above
problems.
The main step in our system is thresholding of the grayscale image. The threshold must be
chosen appropriately so that the counting of cars on the roads is accurate, and it is affected by
the lighting in the area. Therefore an intensity normalization step can be introduced as a
preprocessing step.
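As a sketch of such a preprocessing step (illustrative only; the report's implementation is in Matlab, and the function name here is our own), a simple min-max intensity normalization in Python/NumPy could look like:

```python
import numpy as np

def normalize_intensity(gray):
    """Stretch a grayscale image's intensities to the full 0-255 range."""
    gray = gray.astype(np.float64)
    lo, hi = gray.min(), gray.max()
    if hi == lo:                      # flat image: nothing to stretch
        return np.zeros_like(gray, dtype=np.uint8)
    return ((gray - lo) / (hi - lo) * 255).astype(np.uint8)
```

With the full range restored, a single fixed threshold behaves more consistently as the ambient lighting changes between frames.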
In electrical engineering and computer science, image processing is any form of signal
processing for which the input is an image, such as a photograph or video frame; the
output of image processing may be either an image or a set of characteristics or
parameters related to the image. Most image-processing techniques involve treating the
image as a two-dimensional signal and applying standard signal-processing techniques to
it.
Digital image processing is the use of computer algorithms to perform image processing
on digital images. It allows a much wider range of algorithms to be applied to the input
data and can avoid problems such as the build-up of noise and signal distortion during
processing. Digital image processing allows the use of much more complex algorithms
for image processing, and hence, can offer both more sophisticated performance at simple
tasks, and the implementation of methods which would be impossible by analog
means[4].
Typical operations
[Figure: the red, green, and blue color channels of a photograph by Sergei Mikhailovich Prokudin-Gorskii; the fourth image is a composite.]
Euclidean geometry transformations such as enlargement, reduction, and rotation
Color corrections such as brightness and contrast adjustments, color mapping, color
balancing, quantization, or color translation to a different color space
Digital compositing or optical compositing (combination of two or more images), which
is used in film-making to make a "matte"
Interpolation, demosaicing, and recovery of a full image from a raw image format using a
Bayer filter pattern
Image registration, the alignment of two or more images
Image differencing and morphing
Image recognition, which may, for example, extract the text from an image using optical
character recognition, or checkbox and bubble values using optical mark recognition
Image segmentation
High dynamic range imaging by combining multiple images
Geometric hashing for 2-D object recognition with affine invariance
Applications
Imaging
Computer vision
Optical sorting
Augmented Reality
Face detection
Feature detection
Lane departure warning system
Non-photorealistic rendering
Medical image processing
Microscope image processing
Morphological image processing
Remote sensing
Classification
Feature extraction
Pattern recognition
Projection
Multi-scale signal analysis
MATLAB
MATLAB was created in the late 1970s by Cleve Moler, the chairman of the computer
science department at the University of New Mexico. He designed it to give his students
access to LINPACK and EISPACK without having to learn Fortran. It soon spread to
other universities and found a strong audience within the applied mathematics
community. Jack Little, an engineer, was exposed to it during a visit Moler made to
Stanford University in 1983. Recognizing its commercial potential, he joined with Moler
and Steve Bangert. They rewrote MATLAB in C and founded MathWorks in 1984 to
continue its development. These rewritten libraries were known as JACKPAC. In 2000,
MATLAB was rewritten to use a newer set of libraries for matrix manipulation,
LAPACK.
MATLAB was first adopted by control design engineers, Little's specialty, but quickly
spread to many other domains. It is now also used in education, in particular the teaching
of linear algebra and numerical analysis, and is popular amongst scientists involved with
image processing.
MATLAB is an interactive system whose basic data element is an array that does not
require dimensioning. This allows you to solve many technical computing problems,
especially those with matrix and vector formulations, in a fraction of the time it would
take to write a program in a scalar noninteractive language such as C or Fortran.
The name MATLAB stands for matrix laboratory. MATLAB was originally written to
provide easy access to matrix software developed by the LINPACK and EISPACK
projects. Today, MATLAB engines incorporate the LAPACK and BLAS libraries,
embedding the state of the art in software for matrix computation.
MATLAB has evolved over a period of years with input from many users. In university
environments, it is the standard instructional tool for introductory and advanced courses
in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for
high-productivity research, development, and analysis.
The MATLAB application is built around the MATLAB language. The simplest way to
execute MATLAB code is to type it in the Command Window, which is one of the
elements of the MATLAB Desktop. When code is entered in the Command Window,
MATLAB can be used as an interactive mathematical shell. Sequences of commands can
be saved in a text file, typically using the MATLAB Editor, as a script or encapsulated
into a function, extending the commands available.
The MATLAB system includes the Development Environment. This is the set of tools
and facilities that help you use MATLAB functions and files. Many of these tools are
graphical user interfaces. It includes the MATLAB desktop and Command Window, a
command history, an editor and debugger, and browsers for viewing help, the workspace,
files, and the search path.
The MATLAB Language. This is a high-level matrix/array language with control flow
statements, functions, data structures, input/output, and object-oriented programming
features. It allows both "programming in the small" to rapidly create quick and dirty
throw-away programs, and "programming in the large" to create large and complex
application programs.
Features
There is no universal or exact definition of what constitutes a feature, and the exact
definition often depends on the problem or the type of application. Given that, a feature is
defined as an "interesting" part of an image, and features are used as a starting point for
many computer vision algorithms. Since features are used as the starting point and main
primitives for subsequent algorithms, the overall algorithm will often only be as good as
its feature detector. Consequently, the desirable property for a feature detector is
repeatability: whether or not the same feature will be detected in two or more different
images of the same scene.
Feature detection is a low-level image processing operation. That is, it is usually
performed as the first operation on an image, and examines every pixel to see if there is a
feature present at that pixel. If this is part of a larger algorithm, then the algorithm will
typically only examine the image in the region of the features. As a built-in pre-requisite
to feature detection, the input image is usually smoothed by a Gaussian kernel in a
scale-space representation, and one or several feature images are computed, often expressed
in terms of local derivative operations.
Occasionally, when feature detection is computationally expensive and there are time
constraints, a higher level algorithm may be used to guide the feature detection stage, so
that only certain parts of the image are searched for features.
Many computer vision algorithms use feature detection as the initial step, so as a result, a
very large number of feature detectors have been developed. These vary widely in the
kinds of feature detected, the computational complexity and the repeatability.
Edges:
Edges are points where there is a boundary (or an edge) between two image regions. In
general, an edge can be of almost arbitrary shape, and may include junctions. In practice,
edges are usually defined as sets of points in the image which have a strong gradient
magnitude. Furthermore, some common algorithms will then chain high gradient points
together to form a more complete description of an edge. These algorithms usually place
some constraints on the properties of an edge, such as shape, smoothness, and gradient value.
Locally, edges have a one-dimensional structure.
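The "strong gradient magnitude" criterion can be made concrete with a small sketch (a Python/NumPy illustration, not code from this project; the Sobel kernels are one common choice of derivative approximation):

```python
import numpy as np

# Sobel kernels approximate the horizontal and vertical image derivatives.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def filter2d(img, k):
    """Naive 'valid' 2-D cross-correlation with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * k)
    return out

def gradient_magnitude(img):
    gx = filter2d(img, SOBEL_X)
    gy = filter2d(img, SOBEL_Y)
    return np.hypot(gx, gy)           # sqrt(gx^2 + gy^2)
```

Points where this magnitude is large are the edge-point candidates; chaining and smoothness constraints then build complete edges from them.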
Corners
The terms corners and interest points are used somewhat interchangeably and refer to point-like features in an image, which have a local two-dimensional structure. The name "corner"
arose since early algorithms first performed edge detection, and then analysed the edges to
find rapid changes in direction (corners). These algorithms were then developed so that
explicit edge detection was no longer required, for instance by looking for high levels of
curvature in the image gradient. It was then noticed that the so-called corners were also being
detected on parts of the image which were not corners in the traditional sense (for instance a
small bright spot on a dark background may be detected). These points are frequently known
as interest points, but the term "corner" is used by tradition.
Ridges
For elongated objects, the notion of ridges is a natural tool. A ridge descriptor computed from
a grey-level image can be seen as a generalization of a medial axis.
Blob Detection
In the area of computer vision, blob detection refers to visual modules aimed at
detecting points and/or regions in the image that are either brighter or darker than their
surroundings. There are two main classes of blob detectors: (i) differential methods based on
derivative expressions and (ii) methods based on local extrema in the intensity landscape.
With the more recent terminology used in the field, these operators can also be referred to as
interest point operators, or alternatively interest region operators (see also interest point
detection and corner detection).
There are several motivations for studying and developing blob detectors. One main reason is
to provide complementary information about regions, which is not obtained from edge
detectors or corner detectors. In early work in the area, blob detection was used to obtain
regions of interest for further processing. These regions could signal the presence of objects
or parts of objects in the image domain with application to object recognition and/or object
tracking. In other domains, such as histogram analysis, blob descriptors can also be used for
peak detection with application to segmentation. Another common use of blob descriptors is
as main primitives for texture analysis and texture recognition. In more recent work, blob
descriptors have found increasingly popular use as interest points for wide baseline stereo
matching and to signal the presence of informative image features for appearance-based
object recognition based on local image statistics. There is also the related notion of ridge
detection to signal the presence of elongated objects.
Pattern Recognition
In machine learning, pattern recognition is the assignment of some sort of output value
(or label) to a given input value (or instance), according to some specific algorithm. An
example of pattern recognition is classification, which attempts to assign each input value
to one of a given set of classes (for example, determine whether a given email is "spam"
or "non-spam"). However, pattern recognition is a more general problem that
encompasses other types of output as well. Other examples are regression, which assigns
a real-valued output to each input; sequence labeling, which assigns a class to each
member of a sequence of values (for example, part of speech tagging, which assigns a
part of speech to each word in an input sentence); and parsing, which assigns a parse tree
to an input sentence, describing the syntactic structure of the sentence.
Pattern recognition algorithms generally aim to provide a reasonable answer for all
possible inputs and to do "fuzzy" matching of inputs. This is opposed to pattern matching
algorithms, which look for exact matches in the input with pre-existing patterns. A
common example of a pattern-matching algorithm is regular expression matching, which
looks for patterns of a given sort in textual data and is included in the search capabilities
of many text editors and word processors. In contrast to pattern recognition, pattern
matching is generally not considered a type of machine learning, although pattern-matching
algorithms (especially with fairly general, carefully tailored patterns) can
sometimes succeed in providing similar-quality output to the sort provided by
pattern-recognition algorithms. [8]
Edge Detection
Edge detection is a fundamental tool in image processing and computer vision, particularly in
the areas of feature detection and feature extraction, which aim at identifying points in a
digital image at which the image brightness changes sharply or more formally has
discontinuities.
In the ideal case, the result of applying an edge detector to an image may lead to a set of
connected curves that indicate the boundaries of objects, the boundaries of surface markings
as well as curves that correspond to discontinuities in surface orientation. Thus, applying an
edge detection algorithm to an image may significantly reduce the amount of data to be
processed and may therefore filter out information that may be regarded as less relevant,
while preserving the important structural properties of an image. If the edge detection step is
successful, the subsequent task of interpreting the information contents in the original image
may therefore be substantially simplified. However, it is not always possible to obtain such
ideal edges from real-life images of moderate complexity. Edges extracted from non-trivial
images are often hampered by fragmentation (edge curves that are not connected), missing
edge segments, and false edges that do not correspond to interesting phenomena in the
image, thus complicating the subsequent task of interpreting the image data.
Edge detection is one of the fundamental steps in image processing, image analysis, image
pattern recognition, and computer vision techniques.[6]
Canny Edge Detection
John Canny considered the mathematical problem of deriving an optimal smoothing filter
given the criteria of detection, localization and minimizing multiple responses to a single
edge. He showed that the optimal filter given these assumptions is a sum of four exponential
terms. He also showed that this filter can be well approximated by first-order derivatives of
Gaussians. Canny also introduced the notion of non-maximum suppression, which means that
given the presmoothing filters, edge points are defined as points where the gradient
magnitude assumes a local maximum in the gradient direction. Looking for the zero crossing
of the 2nd derivative along the gradient direction was first proposed by Haralick. It took less
than two decades to find a modern geometric variational meaning for that operator that links
it to the Marr-Hildreth (zero crossing of the Laplacian) edge detector. That observation was
presented by Ron Kimmel and Alfred Bruckstein.
Although his work was done in the early days of computer vision, the Canny edge detector
(including its variations) is still a state-of-the-art edge detector. Unless the preconditions are
particularly suitable, it is hard to find an edge detector that performs significantly better than
the Canny edge detector. [5]
The Canny edge detection algorithm is a method that uses local extreme values to detect
edges, and Canny defined the following three optimal criteria according to edge detection
requirements:
Optimal detection. No real edges are missed and no non-edge points are falsely detected,
i.e. the output signal-to-noise ratio should be maximal.
Optimal detection accuracy. The detected edge points should be as near as possible to the
real edge points.
Detection point and edge point correspondence. Every real edge point should correspond
to exactly one detected edge point.
Canny first expressed the above criteria in mathematical form and then used an optimized
numerical method to obtain the optimal edge detection template. For a two-dimensional
image, a template must be convolved with the image in several directions, and the most
probable edge direction then selected. For step-type edges, the shape of the optimal edge
detector derived by Canny is similar to the first derivative of a Gaussian function. Owing
to the symmetry and separability of the two-dimensional Gaussian function, the directional
derivative of the Gaussian in any direction, and its convolution with the image, can be
computed very easily. Therefore, in practical applications, the first derivative of the
Gaussian function can be selected as a suboptimal detection operator for step-type
edges. [5]
The algorithm runs in 5 separate steps:
Smoothing: Blurring of the image to remove noise.
Finding gradients: The edges should be marked where the gradients of the image have
large magnitudes.
Non-maximum suppression: Only local maxima should be marked as edges.
Double thresholding: Potential edges are determined by thresholding.
Edge tracking by hysteresis: Final edges are determined by suppressing all edges that are
not connected to a very certain (strong) edge.
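Steps 4 and 5 (double thresholding and edge tracking by hysteresis) can be sketched as follows. This is an illustrative Python/NumPy fragment, not the project's code; the function name and the `low`/`high` parameters are our own, and it operates on a precomputed gradient-magnitude array:

```python
import numpy as np

def hysteresis_threshold(mag, low, high):
    """Double thresholding + edge tracking by hysteresis.

    Pixels with magnitude >= high are strong edges; pixels between low and
    high are weak and kept only if connected (8-neighbourhood) to a strong
    edge, directly or through other weak pixels.
    """
    strong = mag >= high
    weak = (mag >= low) & ~strong
    edges = strong.copy()
    changed = True
    while changed:                    # grow strong edges into adjacent weak pixels
        changed = False
        ys, xs = np.nonzero(edges)
        for y, x in zip(ys, xs):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < mag.shape[0] and 0 <= nx < mag.shape[1]
                            and weak[ny, nx] and not edges[ny, nx]):
                        edges[ny, nx] = True
                        changed = True
    return edges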
When a pixel meets the following three conditions, it is regarded as an edge point of the
image:
The edge strength of this point is greater than the edge strengths of the two adjacent
pixels in the gradient direction of the point.
The directional difference between this point and the two adjacent points in its gradient
direction is less than π/4.
The maximum edge strength value in a 3×3 neighborhood centered on this point is
greater than a certain threshold.
Canny edge detection is widely regarded as one of the most effective algorithms for detecting edges in images.
Thresholding
Thresholding is the simplest method of image segmentation. From a grayscale image,
thresholding can be used to create binary images
During the thresholding process, individual pixels in an image are marked as object pixels
if their value is greater than some threshold value (assuming an object to be brighter than the
background) and as background pixels otherwise. This convention is known as threshold
above. Variants include threshold below, which is the opposite of threshold above; threshold
inside, where a pixel is labeled "object" if its value is between two thresholds; and threshold
outside, which is the opposite of threshold inside. Typically, an object pixel is given a value
of 1 while a background pixel is given a value of 0. Finally, a binary image is created by
coloring each pixel white or black, depending on the pixel's label.
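A minimal sketch of the threshold above and threshold inside conventions (Python/NumPy used here for illustration; the function names are our own):

```python
import numpy as np

def threshold_above(gray, t):
    """Label pixels brighter than t as object (1), the rest as background (0)."""
    return (gray > t).astype(np.uint8)

def threshold_inside(gray, t_low, t_high):
    """Label pixels whose value lies between the two thresholds as object."""
    return ((gray >= t_low) & (gray <= t_high)).astype(np.uint8)
```

Threshold below and threshold outside are simply the complements of these two masks.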
Threshold selection
The key parameter in the thresholding process is the choice of the threshold value (or values,
as mentioned earlier). Several different methods for choosing a threshold exist; users can
manually choose a threshold value, or a thresholding algorithm can compute a value
automatically, which is known as automatic thresholding. A simple method would be to
choose the mean or median value, the rationale being that if the object pixels are brighter than
the background, they should also be brighter than the average. In a noiseless image with
uniform background and object values, the mean or median will work well as the threshold,
however, this will generally not be the case. A more sophisticated approach might be to
create a histogram of the image pixel intensities and use the valley point as the threshold. The
histogram approach assumes that there is some average value for the background and object
pixels, but that the actual pixel values have some variation around these average values.
However, this may be computationally expensive, and image histograms may not have
clearly defined valley points, often making the selection of an accurate threshold difficult.
One method that is relatively simple, does not require much specific knowledge of the image,
and is robust against image noise, is the following iterative method:
An initial threshold (T) is chosen; this can be done randomly or according to any other
method desired.
The image is segmented into object and background pixels as described above,
creating two sets:
o G1 = {f(m,n) : f(m,n) > T} (object pixels)
o G2 = {f(m,n) : f(m,n) ≤ T} (background pixels) (note: f(m,n) is the value of the
pixel located in the mth column, nth row)
The average of each set is computed:
o m1 = average value of G1
o m2 = average value of G2
A new threshold is created that is the average of m1 and m2:
o T = (m1 + m2)/2
Go back to step two, now using the new threshold, and keep repeating until the new
threshold matches the one before it (i.e. until convergence has been reached).
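The iterative method above can be sketched as follows (an illustrative Python/NumPy version; the starting threshold is taken here as the image mean, which is one of the simple choices discussed earlier):

```python
import numpy as np

def iterative_threshold(gray, t0=None, eps=0.5):
    """Iterative threshold selection: split pixels into object/background,
    move the threshold to the midpoint of the two class means, repeat
    until the threshold stops changing."""
    t = float(gray.mean()) if t0 is None else float(t0)
    while True:
        g1 = gray[gray > t]           # object pixels
        g2 = gray[gray <= t]          # background pixels
        if g1.size == 0 or g2.size == 0:
            return t                  # degenerate split: keep current threshold
        t_new = (g1.mean() + g2.mean()) / 2.0
        if abs(t_new - t) < eps:      # convergence
            return t_new
        t = t_new
```

For a cleanly bimodal image the method converges in one or two iterations to a value between the two modes.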
Adaptive thresholding
Thresholding is called adaptive thresholding when a different threshold is used for different
regions in the image. This may also be known as local or dynamic thresholding.
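A minimal sketch of adaptive thresholding, assuming a local-mean rule (one common variant; illustrative Python/NumPy, not from this project):

```python
import numpy as np

def adaptive_threshold(gray, block=3, c=0):
    """Local-mean adaptive thresholding: each pixel is compared against the
    mean of its block x block neighbourhood, minus a constant c."""
    h, w = gray.shape
    r = block // 2
    padded = np.pad(gray.astype(float), r, mode='edge')
    out = np.zeros((h, w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            local_mean = padded[i:i+block, j:j+block].mean()
            out[i, j] = 1 if gray[i, j] > local_mean - c else 0
    return out
```

Because the threshold follows the local mean, a bright object is still detected even when overall illumination varies across the frame.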
Our literature survey has shown that current systems still have many flaws and are still
not able to measure traffic accurately. The use of edge detection algorithms adds to the
problem: an image may contain too many edges, making it difficult to count the number of
cars. It is therefore better to use thresholding, where clear-cut blobs of cars can be seen
and counted, to count cars accurately.
2. PROPOSAL
The system will automatically extract the information for each individual road from a single
image; individual images for each road will not be taken.
The traffic density information for each road is passed to the microcontroller controlling
the signal LEDs. The microcontroller checks the case and then switches the LEDs
appropriately. Image processing will be done using Matlab; the microcontroller controls the
LEDs and also checks the traffic case.
[Figure: block diagram - the camera captures an image for the system.]
[Figure: system flow - the camera captures the image, the image is processed, and the signal timings are changed automatically.]
[Figure: system flowchart - take a screenshot of the image, convert the image into grayscale, separate the images of each lane, and count the no. of cars in each lane; the microcontroller senses the traffic on each lane and automatically changes the signal timings.]
Traffic management is becoming one of the most important issues in rapidly growing
cities. Due to bad traffic management, a lot of man-hours are being wasted. One solution to
these problems is to construct new roads, under-passes, and fly-overs, enhance public
transport, and introduce intercity trains. However, the availability of free space poses a
serious problem for new infrastructure, and the environmental damage due to these
developments also has to be considered. For this reason, there is a need to improve the
existing traffic light system in order to manage the traffic flow in a smooth and efficient
way. This leads to the development of an adaptive traffic control system which can
monitor traffic conditions and adjust the timing of traffic lights according to the actual
road conditions.
We propose to develop a self-adaptive system which can help in better traffic
management. The advantages of efficient traffic control are unquestionable. Initially,
several sensors were designed based on various principles such as the emission of ultrasonic
waves, pneumatic tubes, and magnetic loops. All these techniques, however, had some
drawbacks. Camera-based sensing has difficulties of its own. Firstly, a typical video camera
with the NTSC system captures 30 frames/second; since the digitized image has 512 x 512
pixels and 256 gray levels, the data rate exceeds 60 Mbps, which is impractically large to
process continuously. The second difficulty lies in the complexity of the image-analysis
algorithms; the diverse conditions produced by rain, fog, shades, etc. make such algorithms
even more difficult to implement.
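The "over 60 Mbps" figure follows from simple arithmetic:

```python
# 512 x 512 pixels, 256 gray levels (8 bits/pixel), 30 frames/second (NTSC)
pixels_per_frame = 512 * 512            # 262,144 pixels
bits_per_pixel = 8                      # 256 gray levels = 2**8
frames_per_second = 30

bit_rate = pixels_per_frame * bits_per_pixel * frames_per_second
print(bit_rate / 1e6, "Mbps")           # ~62.9 Mbps
```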
These two problems can be minimized if one exploits the fact that it is not necessary to
process every frame. Images can be captured at regular intervals, processed, and any
changes recorded. So instead of capturing continuous images, we capture images from
time to time.
Vehicle detection is the main step in the freeway monitoring process. It has applications
in many fields, both military and civilian. It was traditionally implemented by installing
loop detectors in the highway. However, loop detector installation has many drawbacks.
One of them is the disruption of highway traffic during installation. In addition, loop
detectors cannot give detailed information regarding the traffic status, such as queue length,
number of vehicles in a given cross section, and the quality of service. Vision-based
techniques, on the other hand, have many advantages. They are easy to install at any time
without interfering with the traffic. Cameras can be mounted in many alternative places
such as buildings, poles, bridges, or towers. From these locations, vehicles can be counted,
tracked, or classified. More importantly, different traffic parameters can be easily extracted.
3.2 PLANNING
SN | Task | Subtask | Activity | Duration | Start Date | End Date
1. | Problem Formulation | Discussion of the problem amongst the group members. | Definition of the problem statement. | 6-7 hrs | 10-8-2010 | 07-9-2010
2. | Problem evaluation | Searching for multiple alternative solutions of the main objective. | Discussion and searching IEEE papers. | 10-12 hrs | 21-8-2010 | 30-9-2010
3. | Study of MATLAB and its features | Getting acquainted with MATLAB. | Implementation of its features. | 14-15 hrs | 28-9-2010 | 3-11-2010
4. | Build functions and test them | Working on existing functions in MATLAB. | Analyzing the concept in terms of functions. | 10 hrs | 5-10-2010 | 20-11-2010
5. | Developing hardware | Building the working with the microcontroller. | Formulating the working of the process; collaboration of the various functions. | - | 28-02-2010 | -
6. | Designing and Interfacing | Working on the modes by which output is to be displayed. | Interfacing the MATLAB code developed. | 8-9 hrs | 20-2-2011 | 15-03-2011
7. | Real-Time Implementation | Images and changes in the signal timings. | Implementing the various functions. | 10-12 hrs | 10-03-2011 | 31-03-2011
8. | Testing | Ensuring proper running of the System. | Observing the System under different circumstances. | 8-9 hrs | 25-03-2011 | 10-04-2011
9. | Documentation | Working completely on the Black Book. | Formatting the documentation to a desirable need. | 10-12 hrs | 11-04-2011 | 25-04-2011
[Gantt chart: weekly schedule from August 2010 to April 2011 for the tasks Problem Definition, Problem Evaluation, Study of MATLAB and its features, Build functions and test them, Developing Hardware, Designing and Interfacing, Real-Time Implementation, Testing, and Documentation.]
[Figure: main module - the camera captures an image and sends it to the system for processing.]
[Flowchart: take a screenshot of the traffic signal; if the images are not accurate, take another image; separate the images of each lane; count the no. of cars in each lane; the microcontroller processes the results; change the signal timing.]
[State diagram: Start -> Acquisition -> Image Capture; if unsuccessful, return to Waiting; if successful -> Image Processing -> Communication with Microcontroller -> Change Signal Timings.]
4. DESIGN
Initialize Camera
Initialize Serial Port
Take the snapshot
The image will be of size 320 x 240.
Convert the image into grayscale
For grayscaling, only the green plane of the image is considered. Green is chosen
because it contributes the most to perceived brightness.
Perform thresholding
Thresholding has to be done carefully. The upper and lower threshold values
have to be adjusted depending on the intensity.
Divide the image into 4 parts, each comprising a single road.
The four lanes of the junction have to be separated to count the cars on each. For this
we define the pixel coordinates of each road and then divide the image.
Calculate the number of blobs for each road.
These blobs are nothing but the thresholded cars.
Send the data to the Microcontroller
The technique of sending data to the microcontroller is as follows:
o Each separate road is identified by a number.
Road 1 is identified by 4
Road 2 is identified by 5
Road 3 is identified by 6
Road 4 is identified by 7
o We consider four cases for cars: either 0, 1, 2, or 3 cars can be there on each
road.
o So a double-digit number is sent to the microcontroller.
The lower nibble indicates the no. of cars.
The higher nibble indicates the road number.
E.g., if we send 53, it means:
on road no. 2 there are 3 cars.
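The pipeline above can be sketched end-to-end (an illustrative Python/NumPy version, not the project's Matlab code; the even split into four quadrants and the reading of "53" as the hexadecimal byte 0x53 are our assumptions):

```python
import numpy as np

def count_blobs(binary):
    """Count 4-connected white blobs with a simple flood fill."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not seen[i, j]:
                count += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and binary[y, x] and not seen[y, x]:
                        seen[y, x] = True
                        stack += [(y-1, x), (y+1, x), (y, x-1), (y, x+1)]
    return count

def process_frame(rgb, threshold=128):
    """Green plane -> threshold -> split into 4 quadrant 'roads' -> count cars."""
    gray = rgb[:, :, 1]                               # green plane as grayscale
    binary = gray > threshold
    h, w = binary.shape
    roads = [binary[:h//2, :w//2], binary[:h//2, w//2:],
             binary[h//2:, :w//2], binary[h//2:, w//2:]]
    counts = [min(count_blobs(r), 3) for r in roads]  # 0-3 cars per road
    # Roads are identified by 4..7; high nibble = road id, low nibble = count.
    return [(road_id << 4) | n for road_id, n in zip(range(4, 8), counts)]
```

With this encoding, road 2 (id 5) carrying 3 cars produces the byte 0x53, matching the "53" example above.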
4.2 MICROCONTROLLER
Definition
An embedded microcontroller is a chip which has a computer processor with all its
support functions (clock and reset), memory (both program and data), and I/O (including bus
interface) built into the device. These built-in functions minimize the need for external
circuits and devices in the final application.
Types of Microcontroller
Creating applications for microcontrollers is completely different than any other
development job in computing and electronics. In most other applications one probably have
a number of subsystem and interfaces already available for his/her use. This is not the case
with a microcontroller where one is responsible for
Power distribution
System clocking
Interface design and wiring
System programming
Application programming
Device programming
Before selecting a particular device for an application, it s important to understand what the
different options and features are and what they can mean with regard to developing
application.
Embedded Microcontroller
When all the hardware required to run the application is provided on the chip, it is referred
to as an Embedded Microcontroller. All that is typically required to operate the device is
power, reset, and a clock. Digital I/O pins are provided to allow interfacing with external
devices.
External Memory Microcontroller
Sometimes, the program memory is insufficient for an application or, during debug; a
separate ROM (or even RAM) would make the work easier. Some microcontrollers including
the 8051 allow the connection of external memory.
An external memory microcontroller seems to primarily differ from a microprocessor
in the areas of built-in-peripheral features. These features could include memory device
selection (avoiding the need for external address decoders or DRAM address multiplexers),
timers, interrupt controllers, DMA, and I/O devices like serial ports.
Brief Description
The AT89S52 is a low-power, high-performance CMOS 8-bit microcomputer with 8K bytes
of in-system programmable Flash memory. The device is manufactured using Atmel's
high-density nonvolatile memory technology and is compatible with the industry-standard
MCS-51 instruction set and pinout. The on-chip Flash allows the program memory to be
reprogrammed in-system or by a conventional nonvolatile memory programmer. By
combining a versatile 8-bit CPU with Flash on a monolithic chip, the Atmel AT89S52 is a
powerful microcomputer which provides a highly flexible and cost-effective solution to
many embedded control applications.
Pin Description
VCC
Supply voltage.
GND
Ground.
Port 0
Port 0 is an 8-bit open-drain bi-directional I/O port. As an output port, each pin can sink eight
TTL inputs. When 1s are written to port 0 pins, the pins can be used as high impedance
inputs. Port 0 may also be configured to be the multiplexed low order address/data bus during
accesses to external program and data memory. In this mode P0 has internal pull-ups. Port 0
also receives the code bytes during Flash programming, and outputs the code bytes during
program verification. External pull-ups are required during program verification.
Port 1
Port 1 is an 8-bit bi-directional I/O port with internal pull-ups. The Port 1 output buffers can
sink/source four TTL inputs. When 1s are written to Port 1 pins they are pulled high by the
internal pull-ups and can be used as inputs. As inputs, Port 1 pins that are externally being
pulled low will source current (IIL) because of the internal pull-ups. Port 1 also receives the
low-order address bytes during Flash programming and verification.
Port 2
Port 2 is an 8-bit bi-directional I/O port with internal pull-ups. The Port 2 output buffers can
sink/source four TTL inputs. When 1s are written to Port 2 pins they are pulled high by the
internal pull-ups and can be used as inputs. As inputs, Port 2 pins that are externally being
pulled low will source current (IIL) because of the internal pull-ups. Port 2 emits the
high-order address byte during fetches from external program memory and during accesses to
external data memory that use 16-bit addresses (MOVX @DPTR). In this application, it
uses strong internal pull-ups when emitting 1s. During accesses to external data memory that
use 8-bit addresses (MOVX @Ri), Port 2 emits the contents of the P2 Special Function
Register.
Port 2 also receives the high-order address bits and some control signals during Flash
programming and verification.
Port 3
Port 3 is an 8-bit bi-directional I/O port with internal pull-ups. The Port 3 output buffers can
sink/source four TTL inputs. When 1s are written to Port 3 pins they are pulled high by the
internal pull-ups and can be used as inputs. As inputs, Port 3 pins that are externally being
pulled low will source current (IIL) because of the pull-ups. Port 3 also serves the functions
of various special features of the AT89S52 as listed below:
P3.0  RXD (serial input port)
P3.1  TXD (serial output port)
P3.2  INT0 (external interrupt 0)
P3.3  INT1 (external interrupt 1)
P3.4  T0 (timer 0 external input)
P3.5  T1 (timer 1 external input)
P3.6  WR (external data memory write strobe)
P3.7  RD (external data memory read strobe)
Port 3 also receives some control signals for Flash programming and verification.
RST
Reset input. A high on this pin for two machine cycles while the oscillator is running
resets the device.
ALE/PROG
Address Latch Enable output pulse for latching the low byte of the address during
accesses to external memory. This pin is also the program pulse input (PROG) during
Flash programming. In normal operation ALE is emitted at a constant rate of 1/6 the
oscillator frequency, and may be used for external timing or clocking purposes. Note,
however, that one ALE pulse is skipped during each access to external Data Memory. If
desired, ALE operation can be disabled by setting bit 0 of SFR location 8EH. With the bit
set, ALE is active only during a MOVX or MOVC instruction. Otherwise, the pin is
weakly pulled high. Setting the ALE-disable bit has no effect if the microcontroller is in
external execution mode.
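Two details above lend themselves to a quick numeric check: ALE normally toggles at 1/6 of the oscillator frequency, and setting bit 0 of the SFR at address 8EH disables it. The sketch below computes the ALE rate for a common 11.0592 MHz crystal and models the disable bit as a plain byte; the name AUXR for that SFR is an assumption for illustration.

```python
def ale_frequency_hz(osc_hz):
    """ALE is emitted at a constant 1/6 of the oscillator frequency."""
    return osc_hz // 6

# Model of the SFR at 8EH (here called AUXR, an assumed label):
# setting bit 0 disables the constant ALE output.
auxr = 0x00
auxr |= 0x01                      # set the ALE-disable bit
ale_disabled = bool(auxr & 0x01)  # True once the bit is set
```

For an 11.0592 MHz crystal, ale_frequency_hz(11_059_200) gives 1,843,200 Hz.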
PSEN
Program Store Enable (PSEN) is the read strobe to external program memory. When the
AT89S52 is executing code from external program memory, PSEN is activated twice
each machine cycle, except that two PSEN activations are skipped during each access to
external data memory.
EA/VPP
External Access Enable. EA must be strapped to GND in order to enable the device to
fetch code from external program memory locations starting at 0000H up to FFFFH.
Note, however, that if lock bit 1 is programmed, EA will be internally latched on reset.
EA should be strapped to VCC for internal program executions. This pin also receives the
12-volt programming enable voltage (VPP) during Flash programming, for parts that
require 12-volt VPP.
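The effect of the EA strapping can be summarised as a small decision function: with EA low the part always fetches code externally, while with EA high it fetches from on-chip Flash until the address passes the end of internal memory (8K bytes on the AT89S52). This is an illustrative sketch of that rule, not device firmware.

```python
FLASH_SIZE = 0x2000  # 8K bytes of on-chip Flash on the AT89S52

def fetch_source(ea_high, address):
    """Return where a code fetch at `address` comes from.

    ea_high: True if EA is strapped to VCC, False if strapped to GND.
    """
    if not ea_high:
        return "external"      # EA low: all code fetches are external
    if address < FLASH_SIZE:
        return "internal"      # EA high: on-chip Flash serves low addresses
    return "external"          # beyond on-chip Flash, fetches go external
```
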
XTAL1
Input to the inverting oscillator amplifier and input to the internal clock operating circuit.
XTAL2
Output from the inverting oscillator amplifier.[7]
Diagrams
Working of project
Our project consists of Matlab code and hardware. The hardware consists of a
microcontroller circuit, a power supply, a MAX232 level converter, and traffic signal lights,
all mounted on a model of a junction.
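The Matlab side has to pass the per-road car counts to the microcontroller through the MAX232 serial link. The framing below (a start byte followed by four count bytes) is a hypothetical protocol used only to illustrate the idea; the report does not specify the actual byte format.

```python
START_BYTE = 0xAA  # hypothetical frame marker, not from the report

def pack_counts(counts):
    """Frame the four per-road car counts as bytes for the serial link."""
    if len(counts) != 4:
        raise ValueError("expected one count per road (4 roads)")
    return bytes([START_BYTE] + [c & 0xFF for c in counts])

def unpack_counts(frame):
    """Inverse of pack_counts, as the microcontroller side would decode it."""
    if len(frame) != 5 or frame[0] != START_BYTE:
        raise ValueError("malformed frame")
    return list(frame[1:])
```

A fixed start byte lets the receiver resynchronise if a byte is lost on the line, which matters on a simple polled UART link.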
Steps to use project:
1.
2.
3.
4.
5.
6.
7.
5.1.1 OUTPUT/SCREENSHOTS
OUTPUT 1:
sizeblob1 =
5 671
The number of cars on 1st road detected are
1
sizeblob2 =
1 663 483
The number of cars on 2nd road detected are
2
sizeblob3 =
632 652 671
The number of cars on 3rd road detected are
3
The number of cars on 4th road detected are
Absence of cars
OUTPUT 2:
sizeblob1 =
675 634 613
The number of cars on 1st road detected are
3
sizeblob3 =
6 659 7
The number of cars on 3rd road detected are
1
sizeblob4=
634 5 667
The number of cars on 4th road detected are
2
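The sizeblob values printed above are consistent with counting only those blobs whose area exceeds a noise threshold: in OUTPUT 1, for example, areas [5, 671] yield one car because the 5-pixel blob is rejected as noise. A sketch of that counting step follows; the threshold of 100 pixels is an assumed value, not stated in the report.

```python
AREA_THRESHOLD = 100  # assumed minimum blob area, in pixels, for a vehicle

def count_cars(blob_areas):
    """Count blobs large enough to be vehicles, discarding small noise blobs."""
    return sum(1 for area in blob_areas if area > AREA_THRESHOLD)
```

Applied to the printed outputs: count_cars([5, 671]) gives 1, count_cars([1, 663, 483]) gives 2, and count_cars([632, 652, 671]) gives 3, matching the detected counts above.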
SCREENSHOTS 1
SCREENSHOTS 2
The present system uses a single camera mounted at a particular junction. In future, a
separate camera for each road at an intersection would allow the system to use video
processing, which could improve efficiency further.
Vehicle objects could also be categorized into classes based on their geometrical shape,
for example to block the passage of large vehicles such as trucks during the daytime. This
would further help in managing traffic.
The entire system could also be integrated with a GSM interface, enabling the signals to
be controlled via mobile phone in an emergency. This might benefit ambulance, police, or
fire-brigade services.
REFERENCES
[8] R. Rad and M. Jamzad, "Real time classification and tracking of multiple vehicles in
highways," Pattern Recognition Letters, vol. 26, no. 10, pp. 1597-1607, July 2005.