
CHAPTER 1
INTRODUCTION

This project applies a video image processing technique to problems in real-time road traffic control. Growing congestion, together with the shortcomings of existing detectors, has spawned interest in new vehicle detection technologies, but such systems still struggle with congestion, shadows and lighting transitions. Any practical image processing application in road traffic must work on real-world images processed in real time. Various algorithms, mainly based on background differencing, have been developed for this purpose; because background-based algorithms are very sensitive to ambient lighting conditions, they have not yielded the expected results. A real-time image tracking approach using edge detection was therefore developed to detect vehicles under these troublesome conditions.

Ever since traffic signals were automated, traffic management has become easier and flow has improved. Highway traffic has benefited too: safety has been enhanced by speed detectors and automated red-light-running detection, and drivers, knowing that violations are recorded and prosecuted, have become more wary of breaking the law. Better equipment and better traffic plans have reduced the number of violations, and traffic safety has never been better. Government departments and agencies have helped to improve flow, security and systems, and schemes have been revised to accommodate the increasing number of vehicles moving in and out of cities, localities and even rural areas.

Traffic lights involve a rather complicated automated system that relies on sensors and programs. There are basically two types. The first type has fixed timing: the green light may be on for a minute and off for the next few minutes while the other signals at the intersection turn green, with a fixed time allotted to every street meeting at the intersection. The second, variable type relies on underground sensors that detect the flow of traffic approaching the intersection: if traffic is heavy, the green light stays on longer than it would if traffic were light. Coupled with the traffic light are the speed detector and the red-light-running detector. The speed detector uses a device that registers the speed of an oncoming vehicle; the red-light-running detector arose because of drivers beating the stoplight. Nowadays, camera-surveillance-controlled traffic lights have replaced the above methods. Such a system uses various parameters to find the vehicle count and thereby controls the signal period through timers. Our project uses image processing to count the vehicles on each side of the junction using edge detection. Its development was divided into four phases:

1. Taking snapshots using a camera at each interval
2. Vehicle counting using edge detection
3. Calculation of timer values to control the traffic light
4. Uploading the timers with the timings (in seconds)

These functions are integrated to make the system operate. Vehicle traffic can thus be monitored periodically and controlled to relieve congestion. The main components are a camera, Matlab and a timer.
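The four phases can be outlined as one control loop. The sketch below is in Python rather than the Matlab used later; the function names, and the choice to skip sides with no waiting vehicles, are illustrative assumptions rather than details of the described system.

```python
def schedule_phases(timers):
    """Order the road sides for one signal cycle, given one green-light
    duration (seconds) per side; sides with no vehicles are skipped."""
    return [(side, t) for side, t in enumerate(timers, start=1) if t > 0]

def run_cycle(timers, set_light):
    """Drive one full cycle: `set_light` stands in for the hardware hook
    that loads a timer and switches the light for one side (phase 4)."""
    log = []
    for side, green_time in schedule_phases(timers):
        set_light(side, green_time)   # upload the timer for this side
        log.append((side, green_time))
    return log
```

In the full system, the `timers` list would come from phases 1 to 3 (snapshot, edge-detection count, timer calculation) rather than being passed in directly.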

CHAPTER 2
LITERATURE REVIEW

2.1 Fuzzy Expert System

A fuzzy expert system has been used to control the traffic lights in many cities and is the most common system in major areas. It is composed of seven elements: a radio frequency identification (RFID) reader, an active RFID tag, a personal digital assistant (PDA), a wireless network, a database, a knowledge base and a backend server. The following figure shows how these elements are connected.

Fig 2.1.1 Fuzzy Expert System

In this system, the RFID reader detects an RF-ACTIVE code at 1024 MHz from the active tag pasted on the car. The tag has a built-in battery, so it can periodically and actively transmit the messages stored in it. As soon as data is received, the reader saves the information in the PDA. When the PDA has accumulated the required amount of data, it uses its wireless card to connect to the backend server and stores the data in the server's database. The server then uses the stored data to calculate the maximum flow, the inter-arrival time and the average car speed. Once all potentially congested roads and car speeds have been collected, these data are used as input parameters for the traffic light control simulation model on the server. From the simulation results, the system can automatically offer different alternatives for a variety of traffic situations, and the red or green light duration is then set via a traffic light control interface to relieve congestion. All rules and reasoning use the IF-THEN format. The system uses the forward chaining approach, a data-driven approach that starts from basic facts and tries to draw conclusions. The simulation model gives three optimal alternatives: the best, second-best and third-best traffic light durations. The system uses these alternatives, together with the collected data, to choose the most suitable solution for the particular congestion situation.
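The IF-THEN, forward-chaining style of reasoning described above can be illustrated with a toy rule base in Python. The rules and fact names here are invented for illustration and are not the actual rules of the cited system.

```python
def forward_chain(facts, rules):
    """Data-driven inference: keep firing rules whose IF-part holds
    until no rule can add a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition <= facts and conclusion not in facts:
                facts.add(conclusion)   # THEN-part becomes a new fact
                changed = True
    return facts

# Toy rule base (illustrative only):
#   IF flow is heavy AND speed is low THEN congestion
#   IF congestion                     THEN extend the green light
rules = [
    ({"heavy_flow", "low_speed"}, "congestion"),
    ({"congestion"}, "extend_green"),
]
```

Starting from the facts `{"heavy_flow", "low_speed"}`, the engine first derives `congestion` and then, from that new fact, `extend_green`, which is the data-driven chaining the review describes.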

Fig 2.1.2 Traffic control simulation

The figure shows six road traffic control simulation sub-models. Sub-models A, B and C are similar, as are sub-models D, E and F. Model A, whose logic diagram is shown below, is explained next.

Fig 2.1.3 The simulation flow diagram for Traffic Light A1

The upper dashed area in the figure controls the traffic signal at the first intersection on road A. When all the required data has been collected and the simulation has been run, the results take the form of different alternatives, which are the input for Light control A1. Light control A1 generates an entity to control the traffic light signal. The element Assign for light A1 clock time gets the current simulation time. Prompt seizes the resource Switch A1 with first priority. Delay for red light A1 sets the duration of the red light. Release switch A1 releases the resource Switch A1, allowing the cars to seize it. Finally, Delay for green light A1 sets the duration of the green light. Light control A2 and Light control A3 follow the same process as Light control A1.

2.2 Artificial Neural Network Approach

The adaptive traffic light problem has been modelled using an artificial neural network (ANN). The researchers (Patel and Ranganathan) created an ANN model that predicts the traffic parameters for the next time frame and computes the cycle time adjustment values. The model consists of nine input nodes (one for each past and present traffic parameter), one hidden layer with 70 hidden nodes, and three output nodes. Sketched, the ANN model looks like the figure shown below.
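A forward pass through a network of that shape (nine inputs, one hidden layer of 70 nodes, three outputs) can be sketched in Python as follows. The sigmoid activation keeps every output in the [0, 1] membership range discussed below; the weights are random placeholders, not trained values.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer with sigmoid activation."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def ann_forward(inputs, w_hidden, b_hidden, w_out, b_out):
    hidden = layer(inputs, w_hidden, b_hidden)   # 9 inputs -> 70 hidden
    return layer(hidden, w_out, b_out)           # 70 hidden -> 3 outputs

random.seed(0)  # placeholder weights, for illustration only
w_hidden = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(70)]
b_hidden = [0.0] * 70
w_out = [[random.uniform(-1, 1) for _ in range(70)] for _ in range(3)]
b_out = [0.0] * 3

outputs = ann_forward([0.5] * 9, w_hidden, b_hidden, w_out, b_out)
```

In the real model, the nine inputs would be the sensor-collected traffic parameters and the three outputs the membership-valued timing adjustments.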

Fig 2.2.1 ANN traffic Model

The input to the ANN model is the data collected by sensors placed around the traffic lights, covering both past and present traffic parameters. Given this input, the model uses the hidden layer to decide which nodes suit the current traffic situation; each hidden node is given a membership value between 0 and 1. After the nodes are compared and matched against the current situation using these membership values, the most suitable results or alternatives are selected as output and used by the traffic lights to set the timing of the red and green lights. The output of the ANN model is a set of membership values ranging from 0 to 1.

2.3 An Intelligent Decision-Making System for Urban Traffic Control

IDUTC is a real-time intelligent decision-making system that computes decisions within a dynamically changing application environment. The architecture of the IDUTC is shown in the figure below.

Fig 2.3.1 The architecture of the IDUTC

The IDUTC model consists of seven elements:

1. Artificial neural network (ANN)
2. Fuzzification element
3. Fuzzy expert system (FES)
4. Defuzzification element
5. Application environment
6. Controllers
7. Sensors

The IDUTC is a self-adjusting traffic light control system. Its sensors are placed in the road to sense the various parameters of the traffic conditions and form the actual input of the model. The sensors collect past data about traffic conditions, known collectively as the application environment shown in the figure above. After sensing the surrounding conditions, the sensors send crisp data inputs to the artificial neural network. The ANN processes this data through its hidden layers and produces the desired output. The ANN outputs are then assigned fuzzy labels indicating the degree to which each crisp value is a member of a domain, and the fuzzy expert system fires rules based on these fuzzy values. The defuzzification unit converts the computed decisions into crisp values that control the environment through the controllers installed at the traffic lights. After the simulation has run on the traffic lights, past data is collected along with present data by the sensors, and the cycle repeats, adjusting the traffic light timings. The system is thus self-adjusting to the situation.
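The fuzzification and defuzzification stages can be illustrated in miniature. The triangular membership function and centroid defuzzifier below are standard textbook choices assumed for illustration; the labels, ranges and candidate green times are invented, not taken from the IDUTC paper.

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def defuzzify(values_and_degrees):
    """Centroid defuzzification: membership-weighted average of the
    crisp candidate values proposed by the fired rules."""
    num = sum(v * d for v, d in values_and_degrees)
    den = sum(d for _, d in values_and_degrees)
    return num / den if den else 0.0

# Fuzzify a crisp queue length of 12 cars against two invented labels
short = triangular(12, 0, 5, 15)    # degree of "short queue" -> 0.3
long_ = triangular(12, 5, 15, 25)   # degree of "long queue"  -> 0.7
# Each label's rule proposes a crisp green time; blend them back
green = defuzzify([(20, short), (60, long_)])
```

Here a 12-car queue is 0.3 "short" and 0.7 "long", so the blended green time lands at 48 seconds, between the two rule proposals and weighted toward the stronger label.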

2.4 Automated Traffic Control Using Image Processing

Automated traffic flow measurement using video image sequences can be classified as an application of computer vision. A computer vision system accepts an image or a set of images as input and attempts to produce results similar in nature to those produced by a human viewer of the same images. The symbolic information required to produce results is generated in the process of image analysis; for this particular task, automated traffic flow measurement, the input is a sequence of images. Image sequence analysis uses temporal relations and properties of the objects in a scene, in addition to the results gathered by image analysis. This review therefore consists of three main sections. The first and second sections review research in the preliminary areas of computer vision and image sequence analysis respectively; the third reviews research in the specific area of traffic image sequence analysis.

Computer vision has been an active research area since the 1960s. Its applications include document processing, microscopy, industrial automation and surveillance, to name a few major areas. By 1983, many ad hoc techniques for analysing images had been developed and the subject had gradually begun to develop a scientific basis. The researcher (Rosenfeld) identified the basic steps in a general image analysis process, and the researcher (Binford) suggested criteria to evaluate the performance of an industrial computer vision system.

A general computer vision system performs the following basic functions:

(1) Pre-processing
(2) Feature extraction
(3) Image segmentation
(4) Analysis of the results of (2) and (3) to produce final results

A brief description of the state of the art in each step follows.

2.4.1 Pre-processing

The input images (or image sequences) to most computer vision systems contain some amount of noise. These images are preprocessed to reduce the noise so that its effect on system performance is minimized. Another objective of pre-processing is to enhance certain image features while suppressing others, so that subsequent feature extraction and segmentation become easier.

2.4.2 Feature extraction

A computer vision system extracts a selected set of features from an image. These features are then analysed, or matched against those of other images or a model of real-world objects, to produce results. The features to be extracted vary from one system to another; the most commonly used ones are outlined in the following paragraphs.

The set of edges of objects in the scene is an important feature for a computer vision system working on images of that scene. Sharp, directional intensity variations in the image correspond to these edges, making them easy to detect. Techniques for edge detection have been available since 1965. The Roberts operator, one of the earliest edge detectors developed, is based on analysing the first-order derivatives of pixel intensity; the Griffith, Hueckel, Canny and Sobel detectors are based on the same principle. The researchers (Marr and Hildreth) developed a method to detect edges in noisy images using the Laplacian of Gaussian operator, and also identified criteria for evaluating the performance of edge detectors. The researchers (Kashyap and Nevatia) have suggested the use of stochastic methods for extracting edges, and the researchers (Haralick and Abdou et al.) have compared the performance of several edge detectors, including those mentioned above.

Detection of corners has been another popular technique in image analysis. Corners are easier to detect and match than edges for two main reasons: they are localized in a smaller area, and they are less sensitive to affine transforms of the objects in the scene. Several corner detectors with acceptable performance are available. The Plessey corner detector provides a reliable set of corners but consumes a lot of processing time; the Wang and Brady corner detector consumes less processing time but produces a large number of spurious corners. Both are based on intensity gradients of the image. The SUSAN corner detector takes a different approach, examining the intensities of the pixels in the neighbourhood of a particular pixel to decide whether it is a corner.

Curves are another feature of interest in image analysis. The Hough transform can be used to identify various types of curves in an image, and B-spline curves can be constructed by combining feature points (such as corners) where curves cannot be detected directly. For applications where objects of known shape are sought in an image, several shape-matching techniques exist; Hough transforms can be used effectively to identify basic geometric shapes, and descriptions of techniques for shape analysis and shape matching are available in the literature.

2.4.3 Image segmentation

The next important step in digital image analysis is image segmentation: the process of partitioning the given image into meaningful regions and labeling the individual regions.
This can be performed in several ways. Simpler techniques use grey levels and local feature values such as edges, corners and closed curves to segment images. A more sophisticated technique is splitting and merging, in which an image is split into small regions that are then merged together to form larger regions; this has been employed successfully in texture segmentation. There are two main problems associated with image segmentation. First, it is highly scene-dependent and problem-dependent. Second, determining the number of regions of interest is difficult, and may be obvious only in very simple situations such as a uniform background with one object. Attempts have been made to use expert systems to improve segmentation performance.
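The splitting half of the split-and-merge technique can be sketched as follows. This Python sketch assumes a square image with a power-of-two side length and omits the merge step for brevity; the variance threshold is an arbitrary illustrative choice.

```python
def variance(region):
    """Intensity variance of a region, used as a homogeneity test."""
    vals = [v for row in region for v in row]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def split(img, threshold=0.01):
    """Recursively quarter a square region until each part is
    homogeneous (variance below threshold) or a single pixel."""
    n = len(img)
    if n == 1 or variance(img) <= threshold:
        return [img]                      # homogeneous leaf region
    h = n // 2
    quads = [
        [row[:h] for row in img[:h]], [row[h:] for row in img[:h]],
        [row[:h] for row in img[h:]], [row[h:] for row in img[h:]],
    ]
    regions = []
    for q in quads:
        regions.extend(split(q, threshold))
    return regions

# A 4x4 image whose left half is dark and right half is bright
img = [[0, 0, 1, 1] for _ in range(4)]
regions = split(img)
```

On this image, the full frame is inhomogeneous, so it is quartered once; each resulting 2x2 block is uniform, yielding four leaf regions that the merge step would then recombine into the two meaningful halves.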


CHAPTER 3
SYSTEM ANALYSIS

3.1 EXISTING SYSTEM

In a normal traffic light system, the signals change at predetermined times. This method is suitable when the number of vehicles (fig 3.1.1) on all sides of a junction is roughly the same.

Fig 3.1.1 Traffic in Road 1

When the queues hold unequal numbers of vehicles (fig 3.1.2), however, this method is inefficient. We therefore use image processing techniques with a queue detection algorithm to change the traffic signals.

Fig 3.1.2 Traffic in Road 2


3.2 PROPOSED SYSTEM

In the existing system, even if there are no vehicles on a path, the vehicles on all other paths must wait until that path's count reaches zero, which wastes time. Our system aims to avoid the time wasted waiting for longer than the vehicles actually need to pass through. Instead of being loaded with a fixed value of sixty seconds, the timer is loaded with a time based on the vehicle count on the corresponding road. The vehicle count is found from the camera image by object counting using image processing. After the count is found, the timer is loaded with a time (in seconds) based on it; the timer then runs and the traffic light is controlled accordingly, so traffic congestion is reduced. In the queue detection algorithm, the number of vehicles on each side of the road is calculated using the edge detection method.
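The time saved relative to a fixed phase can be made concrete. The sketch below uses the sixty-second fixed allocation mentioned above and the 10-seconds-per-vehicle allowance assumed in Chapter 4:

```python
FIXED_GREEN = 60          # conventional fixed allocation (seconds)
SECONDS_PER_VEHICLE = 10  # per-vehicle allowance assumed in Chapter 4

def adaptive_green(count):
    """Green time proportional to the measured queue length."""
    return count * SECONDS_PER_VEHICLE

def time_saved(counts):
    """Idle green time avoided, versus a fixed 60 s phase per side."""
    return sum(max(FIXED_GREEN - adaptive_green(c), 0) for c in counts)
```

For counts of 5, 0, 3 and 1 vehicles on the four sides, the adaptive timers are 50, 0, 30 and 10 seconds, saving 150 seconds of idle green time per cycle compared with four fixed 60-second phases.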

Fig 3.2.1 Queue Detection and Edge Detection


CHAPTER 4
MODULE DESCRIPTION

1. Taking snapshots using a camera at each interval
2. Vehicle counting using edge detection
3. Calculation of timer values to control the traffic light
4. Uploading the timers with the timings (in seconds)

4.1 Taking snapshots using a camera at each interval

Here, background images are compared with images of each side of the road, and the number of vehicles on each side is found using the edge detection method. In a real-time implementation, an Axis P1343 network camera with superb image quality at SVGA resolution can be used. These cameras are capable of functioning both day and night, so lighting at night poses no problem for taking photos, and they offer digital PTZ. The images captured by the cameras can be stored for a few seconds until the next image is obtained and then erased automatically. Since vehicle security is not a concern, local storage of the images or a large image database is not needed, which reduces the system's memory requirement to a great extent. These cameras have an IP66 rating: they are protected against dust, rain, snow and sun, and can operate in temperatures as low as -40 C (-40 F). The camera's Power over Ethernet support also eliminates the need for separate power cables, reducing installation costs.
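The comparison of a background image with a live image can be sketched as follows. This is a simplified Python stand-in for the Matlab pipeline in Appendix A (graythresh, im2bw, imsubtract); the threshold value and the list-of-lists image representation are assumptions made for illustration.

```python
def binarize(img, threshold):
    """Convert a grey-level image to binary (im2bw-style thresholding)."""
    return [[1 if v > threshold else 0 for v in row] for row in img]

def difference(background, current):
    """Absolute difference of two binary images (imsubtract-like):
    1 marks pixels where the scene changed relative to the background."""
    return [[abs(b - c) for b, c in zip(brow, crow)]
            for brow, crow in zip(background, current)]
```

The changed-pixel mask produced by `difference` is what the edge-profile counting of Section 4.2 then scans for vehicles.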


Fig 4.1.1 Background

Fig 4.1.2 Side 1

Fig 4.1.3 Side 2


Fig 4.1.4 Side 3

Fig 4.1.5 Side 4

4.2 Vehicle counting using edge detection

A vehicle detection operation is applied to a profile of the unprocessed image. To implement the algorithm in real time, two strategies are often applied: key-region processing and simple algorithms. Most vehicle detection algorithms developed so far are based on a background differencing technique, which is sensitive to variations in ambient lighting. The method used here instead applies edge detector operators to a profile of the image: edges are less sensitive to variations in ambient lighting and are used in full-frame applications (detection). The Matlab syntax below specifies the Canny method, using sigma as the standard deviation of the Gaussian filter; the default sigma is 1, and the size of the filter is chosen automatically based on sigma.

BW = edge(I,'canny',thresh,sigma)
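The counting step that follows edge detection (given in full in Appendix A) scans one profile line of the edge-difference image and treats every run of more than 25 consecutive edge pixels as a single vehicle. A minimal Python transcription of that logic, with the 25-pixel run threshold taken from the appendix code:

```python
RUN_THRESHOLD = 25  # minimum run of edge pixels per vehicle (Appendix A)

def count_vehicles(profile, run_threshold=RUN_THRESHOLD):
    """Count vehicles along one profile line of a binary edge image:
    each sufficiently long run of 1-pixels is taken as one vehicle."""
    count = run = 0
    for pixel in profile:
        if pixel == 1:
            run += 1
            if run > run_threshold:
                count += 1
                run = 0        # start looking for the next vehicle
        else:
            run = 0            # a gap ends the current run
    return count
```

A run of 26 edge pixels counts as one vehicle, 52 as two, while shorter runs broken by gaps count as none, mirroring the `length > 25` test in the appendix.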

4.3 Calculation of timer values to control the traffic light

The above process yields the vehicle count, and the timing is calculated from the number of vehicles. Say 10 seconds are allotted for each vehicle: in the simulation output above, the vehicle count for side 1 was found to be five, so the total time needed for signalling the phase is 5 * 10 = 50 seconds. The timings are calculated in this way and the lighting sequence is issued. When the timer count reaches half the loaded value, the image of the traffic on the road is captured, fed to the Matlab database, processed, and the vehicle count found. When the timer count drops to zero, the timer is loaded with the new time value and the lighting sequence is carried out.

4.4 Uploading the timers with timings (in seconds)

By the end of the last step, the timings needed for controlling the signal have been calculated in Matlab. These values are downloaded to the timers, which control the traffic lights. Matlab continuously monitors the decrement of the loaded timer value; when it reaches half the loaded value, the next image is taken and processed, the count is calculated, and the result is downloaded to the timer from Matlab. The timer and Matlab thus communicate to control the traffic light efficiently.
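The timer/Matlab handshake described above, capture at the halfway point and reload at zero, can be sketched as follows; the event-log structure and function names are illustrative, not from the report.

```python
def run_timer(loaded_value, capture, reload_value):
    """Count the timer down; at half the loaded value trigger the next
    capture-and-count, and at zero load the newly computed value."""
    events = []
    remaining = loaded_value
    while remaining > 0:
        remaining -= 1
        if remaining == loaded_value // 2:
            events.append(("capture", remaining))
            capture()              # phases 1-2 run here in the real system
    events.append(("reload", reload_value))
    return events
```

For a 50-second phase, the capture fires at 25 seconds remaining, leaving the image processing half a phase to finish before the reload at zero.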


CHAPTER 5
CONCLUSION AND FUTURE WORK

This novel approach would be more efficient than conventional systems in several ways. Applying the queue detection algorithm with the aid of image processing would yield a phenomenal decrease in vehicle congestion, and the automatic control system would reduce the manual work involved. Economical time control is yet another advantage, contributing to a remarkable reduction in the number of accidents. For future work, we intend to take real images using a camera at each interval and to interface the camera with MATLAB.


APPENDIX A
SAMPLE CODING

clc;
clear all;
close all;

%% The program for side 1: it senses the vehicles and calculates the
%% total number of vehicles
% These lines read the background picture
s1 = imread('C:\Users\Happy\Desktop\project\ip\final\back1.png');
u = imresize(s1, [256, 66], 'bilinear');
level = graythresh(u);
s1 = im2bw(u, level);
s1 = s1 .* ones(size(s1));
% imshow(s1);
s1 = edge(s1, 'canny', 0.14, 3.2);
% figure, imshow(s1)
t1 = imread('C:\Users\Happy\Desktop\project\ip\final\image1.png');
u = imresize(t1, [256, 66], 'bilinear');
level = graythresh(u);
t1 = im2bw(u, level);
t1 = t1 .* ones(size(t1));
t1 = edge(t1, 'canny', 0.14, 3.2);
% figure, imshow(t1);


side1 = imsubtract(s1, t1);
for x = 1:256
    for y = 1:66
        if side1(x,y) == 1
            side1(x,y) = 0;
        else
            side1(x,y) = 1;
        end
    end
end
imview(side1);

count = 0;
length = 0;
y = 1;
for y = 23:1:24
    for x = 1:250
        if side1(x,y) == 1
            length = length + 1;
        end
        if side1(x,y) == 0
            length = 0;
        end
        if length > 25
            length = 0;
            count = count + 1;
        end
    end
end

w1 = count;

%% The program for side 2: it senses the vehicles and calculates the
%% total number of vehicles
% These lines read the background picture
s2 = imread('C:\Users\Happy\Desktop\project\ip\final\back2.png');
u = imresize(s2, [55, 237], 'bilinear');
level = graythresh(u);
s2 = im2bw(u, level);
% imshow(s2);
s2 = s2 .* ones(size(s2));
s2 = edge(s2, 'canny', 0.14, 3.2);
% figure, imshow(s2);
t2 = imread('C:\Users\Happy\Desktop\project\ip\final\image2.png');
u = imresize(t2, [55, 237], 'bilinear');
level = graythresh(u);
t2 = im2bw(u, level);
t2 = t2 .* ones(size(t2));
t2 = edge(t2, 'canny', 0.14, 3.2);
% figure, imshow(t2);
side2 = imsubtract(t2, s2);
imview(side2);
count = 0;
length = 0;
y = 1;

for x = 40:1:50
    for y = 1:230
        if side2(x,y) == 1
            length = length + 1;
        end
        if side2(x,y) == 0
            length = 0;
        end
        if length > 25
            length = 0;
            count = count + 1;
        end
    end
end
w2 = count;

%% The program for side 3: it senses the vehicles and calculates the
%% total number of vehicles
% These lines read the background picture
s3 = imread('C:\Users\Happy\Desktop\project\ip\final\back3.png');
u = imresize(s3, [93 255], 'bilinear');
level = graythresh(u);
s3 = im2bw(u, level);
% imshow(s3);
s3 = s3 .* ones(size(s3));
s3 = edge(s3, 'canny', 0.14, 3.2);
% figure, imshow(s3);


t3 = imread('C:\Users\Happy\Desktop\project\ip\final\image3.png');
u = imresize(t3, [93 255], 'bilinear');
level = graythresh(u);
t3 = im2bw(u, level);
t3 = t3 .* ones(size(t3));
t3 = edge(t3, 'canny', 0.14, 3.2);
% figure, imshow(t3);
side3 = imsubtract(s3, t3);
imview(side3);
count = 0;
length = 0;
y = 1;
for y = 1:255
    for x = 1:93
        if side3(x,y) == 1
            length = length + 1;
        end
        if side3(x,y) == 0
            length = 0;
        end
        if length > 25
            length = 0;
            count = count + 1;
        end
    end
end
w3 = count;

%% The program for side 4: it senses the vehicles and calculates the
%% total number of vehicles
% These lines read the background picture
s4 = imread('C:\Users\Happy\Desktop\project\ip\final\back4.png');
u = imresize(s4, [152 118], 'bilinear');
level = graythresh(u);
s4 = im2bw(u, level);
% imshow(s4);
s4 = s4 .* ones(size(s4));
s4 = edge(s4, 'canny', 0.14, 3.2);
% figure, imshow(s4);
t4 = imread('C:\Users\Happy\Desktop\project\ip\final\image4.png');
u = imresize(t4, [152 118], 'bilinear');
level = graythresh(u);
t4 = im2bw(u, level);
t4 = t4 .* ones(size(t4));
t4 = edge(t4, 'canny', 0.14, 3.2);
% figure, imshow(t4);
side4 = imsubtract(s4, t4);
for x = 1:152
    for y = 1:118
        if side4(x,y) == 1
            side4(x,y) = 0;
        else
            side4(x,y) = 1;
        end

    end
end
imview(side4);
count = 0;
length = 0;
y = 1;
for y = 60:1:64
    for x = 1:150
        if side4(x,y) == 1
            length = length + 1;
        end
        if side4(x,y) == 0
            length = 0;
        end
        if length > 25
            length = 0;
            count = count + 1;
        end
    end
end
w4 = count;

whitebox = [w1 w2 w3 w4];
time = [];
ontime = [];
% We assume that a vehicle takes 10 sec to move
for i = 1:4
    if i == 1

        time(i) = 10 * whitebox(i);
    else
        time(i) = time(i-1) + 10 * whitebox(i);
    end
    ontime(i) = 10 * whitebox(i);
    % The time for each of the four sides is stored in an array.
    % We give it to the traffic light through the output ports.
end
% figure, barh(whitebox, ontime);
% xlabel('OnTime in sec'), ylabel('Vehicles Count');
% title('Graph on Vehicles Vs Ontime');
% figure, barh(whitebox, time);
% xlabel('TotalTime in sec'), ylabel('Vehicles Count');
% title('Graph on Vehicles Vs TotalTime');
road = [1, 2, 3, 4];
figure, bar(road, ontime, 'g');
xlabel('OnTime in sec'), ylabel('Vehicles');
title('Graph on Vehicles Vs Ontime');


APPENDIX B
SCREEN SHOTS

Fig 5.2.1 Output Image for Side 1


Fig 5.2.2 Output Image for Side 2

Fig 5.2.3 Output Image for Side 3


Fig 5.2.4 Output Image for Side 4


Fig 5.2.5 Graph for vehicles count


Fig 5.2.6 Graph for Calculation of timer values

REFERENCES


Gonzalez, Rafael C., and Woods, Richard E.: Digital Image Processing.

Hoose, N.: Computer Image Processing in Traffic Engineering.

Rourke, A., and Bell, M.G.H.: Queue detection and congestion monitoring using image processing, Traffic Engineering and Control.

Coifman, Benjamin: A Real-time Computer Vision System for Vehicle Tracking and Traffic Surveillance.

