
1. INTRODUCTION

Vehicle detection is very important for civilian and military applications, such as highway monitoring and urban traffic planning. For traffic management, vehicle detection is the critical step. Vehicle detection can be achieved using the common magnetic loop detectors, which are still used even though they are not very effective: loop detectors are point detectors and cannot give traffic information for a whole highway. Vision-based techniques are more suitable than magnetic loop sensors; they do not disturb traffic while being installed, and they are easy to modify. Their applicability is also broader, because they can be used for many tasks such as vehicle detection, counting, classification, tracking, and monitoring, and a single camera can monitor a large section of highway. In spite of the apparent advantages of vision-based methods, there are still many challenges: weather changes, changes in sunlight direction and intensity, building shadows, and the fact that vehicles have different sizes, shapes, and colors. In this paper one digital camera is installed over the freeway to capture successive images. The captured images are analyzed to extract the background automatically. Each image contains the background of the highway and the moving vehicles. It is difficult to obtain a freeway image without moving vehicles (the background), so the freeway background must be extracted from the image sequence. The extracted background is then used in subsequent analysis to detect moving vehicles. Current approaches for vehicle detection try to overcome these environmental changes. Some approaches achieve detection using background subtraction only, predicting the background through the next update interval. In these approaches the background is not extracted but detected and then updated during the processing of the following images. Intensity changes, stopped (or very slowly moving) vehicles, and camera motion lead to missed detections with these techniques, so they are used to detect vehicles in simple scenes only. Another approach uses edge-based techniques, in which a 3D model is proposed for the vehicle. This 3D model depends on edge detection of the vehicle, and it is applicable under ideal conditions and for passenger vehicles only. An edge in image processing is an abrupt change in the intensity values. Edge detection suffers from many difficulties such as vehicle shadows, dark colors, and ambient light, and the edge detection process becomes even more difficult when the vehicle

color is close to the freeway color. Other approaches, such as probabilistic and statistical methods, have no strict distribution for the vehicle model, so they use approximations of the unknown distributions; this leads to intensive computation and is time consuming. Applying these methods results in a high rate of missed detections, and they cannot be applied to complicated scenes; their detection rate is low compared with the other approaches. Some approaches use an explicit detailed model, which requires a detailed model and a hierarchy of detail levels. One such model contains substructures like the windshield, roof, and hood, and radiometric features such as color constancy between the hood color and the roof color (where the gray level is higher than the median of the histogram). It is apparent that a large number of models are needed to cover all types of vehicles. In another approach, a hierarchical model is used in the detection step (which identifies and clusters the image pixels) to decide which pixels have a strong probability of belonging to vehicles. In this case a huge amount of computation is needed to detect the vehicles, and detections are still missed for vehicles of different shapes. In yet another approach, vehicle detection is implemented by calculating various characteristic features in the image of a monochrome camera; the detection process uses shadow and symmetry features of the vehicle to generate vehicle hypotheses. This is beneficial for driver assistance, but it is not applicable to vehicle counting or complicated scenes. Neural networks have also been used for vehicle detection. Neural networks have drawbacks; the main one is that there is no guarantee that they reach the global minimum (there is no closed-form solution for modeling vehicle detection). They also require learning from a data set representative of the real world, and there is no universally optimal network model. Fuzzy measures have likewise been used to detect vehicles. The detection depends on the light intensity value: when the intensity falls in a certain interval, fuzzy measures are used to decide whether it is a vehicle or not; when the intensity is above this interval it represents a vehicle, and when it is below the interval no vehicle exists. This approach suffers from environmental light changes and from determining the interval needed to apply the fuzzy measures. In this paper, background extraction and edge detection are used together to detect vehicles. This is useful in two ways: first, it combines the advantages of background subtraction and edge detection; second, it is able to deal with complex scenes and to treat the

intensity-change problems encountered in different environments where the light and the traffic status are changing.
1.1 Objective: The main objective of our project is to detect and track vehicles driving through a controlled area. This can help prevent the occurrence of abnormal events such as traffic congestion, speeding violations, illegal driving behaviors, and accidents.

1.2 Problem Definition: The road region represents an important part of the problem domain knowledge and can thus be useful in detecting traffic events. It consists of the set of road pixels from the camera image's perspective and contains enter, exit, and rail regions. Each vehicle moving along the analyzed road region is characterized by a unique identification number that is assigned when it is first detected passing through any enter region. The state of a tracked vehicle st at frame t consists of its identification number, id, which is a consecutive number assigned after detecting the vehicle passing through an enter region; a Boolean flag ft indicating whether the vehicle's current position is known at frame t; two spatial coordinates denoting the target's center-of-mass position (xt and yt, respectively); two scalars representing the sides of the vehicle bounding box (Lxt and Lyt, respectively); and the corresponding velocity coordinates (vx,t and vy,t, respectively). Therefore, the vehicle state at frame t is represented by the eight-tuple st = (id, ft, xt, yt, Lxt, Lyt, vx,t, vy,t).
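As a concrete illustration of this state representation, the following is a minimal sketch in C#; the type and field names are illustrative assumptions and are not taken from the project code.

using System;

// Hypothetical sketch of the tracked-vehicle state (id, ft, xt, yt, Lxt, Lyt, vx,t, vy,t).
public struct VehicleState
{
    public int Id;        // identification number assigned at an enter region
    public bool Known;    // flag f_t: is the position known at frame t?
    public double X, Y;   // center-of-mass position (x_t, y_t)
    public double Lx, Ly; // bounding-box side lengths (Lx_t, Ly_t)
    public double Vx, Vy; // velocity components (vx_t, vy_t)

    public VehicleState(int id, bool known,
                        double x, double y,
                        double lx, double ly,
                        double vx, double vy)
    {
        Id = id; Known = known;
        X = x; Y = y;
        Lx = lx; Ly = ly;
        Vx = vx; Vy = vy;
    }
}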

2. LITERATURE REVIEW
1. V. Kastrinaki, M. Zervakis, and K. Kalaitzakis, A Survey of Video

Processing Techniques for Traffic Applications, Image and Vision Computing, vol. 21, no. 4, 2003, pp. 359-381. Video-based traffic flow monitoring is a fast emerging field based on the continuous development of computer vision. A survey of the state-of-the-art video processing techniques in traffic flow monitoring is presented in this paper. Firstly, vehicle detection is the first step of video processing, and detection methods are classified into background modeling based methods and non-background modeling based methods. In particular, nighttime detection is more challenging due to bad illumination and sensitivity to light. Then tracking techniques, including 3D model-based, region-based, active contour-based and feature-based tracking, are presented. A variety of algorithms including the MeanShift algorithm, the Kalman filter and the particle filter are applied in the tracking process. In addition, shadow detection and vehicle occlusion bring much trouble into vehicle detection, tracking and so on. Based on the aforementioned video processing techniques, a discussion on behavior understanding, including traffic incident detection, is carried out. Finally, key challenges in traffic flow monitoring are discussed.
2. E. Bas, M. Tekalp, and F.S. Salman, Automatic Vehicle Counting from Video for Traffic Flow Analysis, Proc. IEEE Intelligent Vehicles Symp., IEEE Press, 2007, pp. 392-397. We propose a new video analysis method for counting vehicles, where we use an adaptive bounding box size to detect and track vehicles according to their estimated distance from the camera, given the scene-camera geometry. We employ adaptive background subtraction and Kalman filtering for road/vehicle detection and tracking, respectively. The effectiveness of the proposed method for vehicle counting is demonstrated on several video recordings taken at different time periods in a day at one location in the city of Istanbul.
3. G. Zhang, R.P. Avery, and Y. Wang, A Video-Based Vehicle Detection and Classification System for Real-Time Traffic Data Collection Using Uncalibrated Cameras, J. Transportation Research Board, IEEE Press, 2007, pp. 138-147. On-board video analysis has attracted a lot of interest over the last two decades, mainly for safety improvement (through e.g. obstacle detection or driver assistance). In this context, our study aims at providing a video-based real-time understanding of the urban road traffic. Considering a video camera fixed on

the front of a public bus, we propose a cost-effective approach to estimate the speed of the vehicles on the adjacent lanes when the bus operates on its reserved lane. We propose to work on 1-D segments drawn in the image space, aligned with the road lanes. The relative speed of the vehicles is computed by detecting and tracking features along each of these segments, while the absolute speed of the vehicles is estimated from the relative one thanks to odometer and/or GPS data. Using predefined speed thresholds, the traffic can be classified in real time into different categories such as "fluid" or "congested". As demonstrated in the evaluation stage, the proposed solution offers both good performance and low computing complexity, and is also compatible with cheap video cameras, which allows its adoption by city traffic management authorities.
4. S.C. Cheung and C. Kamath, Robust Techniques for Background Subtraction in Urban Traffic Video, Proc. Int'l Conf. Visual Communications and Image Processing, Int'l Soc. for Optics and Photonics (SPIE), 2004, pp. 881-892. Video processing has become an efficient technique for collecting parameters of urban traffic. Detection and tracking of multiple targets with an uncalibrated CCD camera is developed in this paper. In order to obtain moving targets from the video sequence efficiently, the paper presents a mixture Gaussian background model based on the object level, and moving objects are extracted after background subtraction. Moving multi-targets are tracked through integration of the motion and shape features by Kalman filter modeling. In order to ensure continuity and stabilization, occlusion processing is performed. The proposed approach is validated under real traffic scenes. Experimental results show that detection and tracking are robust and adaptive and can be applied well in real-world traffic scenes.

3. SYSTEM STUDY
3.1 HARDWARE SPECIFICATION
System : Pentium IV 2.4 GHz
Hard Disk : 40 GB
Monitor : 15 inch
Mouse : Logitech
RAM : 1 GB
3.2 SOFTWARE SPECIFICATION
Operating system : Windows XP Professional
Front End : Visual Studio .NET 2005
Coding Language : Visual C# .NET
Input : Video
3.3 EXISTING SYSTEM There are three major stages, vehicle detection, tracking, and classification, in estimating the desired traffic parameters of vehicles. Under the assumption that the camera is stationary, most methods detect the vehicle by background subtraction or image differencing. After that, several vehicle features, such as shape, aspect ratio, and texture, are extracted for classification, and detected objects with temporal consistency are classified as vehicles. 3.3.1. Drawbacks of Existing System If the object is moving smoothly, we receive only small changes from frame to frame, so it is impossible to recover the whole moving object. Things become worse when the object is moving very slowly, in which case the algorithm gives no result at all (a small sketch illustrating this is given at the end of this section). 3.4 PROPOSED SYSTEM The focus of this work is to propose a vision-based system that can successfully detect and track vehicles in highway scenes. The system overview of the proposed approach is shown below. First, we apply the background subtraction method to extract the possible foreground regions from the highway scene. False regions are identified by introducing a geometric constraint, and shadow pixels are eliminated. To construct the temporal correspondence between the vehicles detected at different times, we formulate vehicle tracking as a frame-to-frame correspondence problem.
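To make the drawback described in 3.3.1 concrete, the following is a minimal, hypothetical sketch of the simple two-frame differencing used by the existing system, written against plain grayscale pixel arrays rather than the project's actual classes; the class and method names are assumptions made for illustration.

using System;

public class FrameDifferenceDemo
{
    // Simple two-frame differencing: mark pixels whose intensity changed
    // by more than a threshold. With a slowly moving object, almost no
    // pixels change between consecutive frames, so the mask stays empty,
    // which is exactly the drawback noted in 3.3.1.
    public static bool[] FrameDifference(byte[] previous, byte[] current, int threshold)
    {
        bool[] motionMask = new bool[current.Length];
        for (int i = 0; i < current.Length; i++)
        {
            int diff = Math.Abs(current[i] - previous[i]);
            motionMask[i] = diff > threshold;   // true = "moving" pixel
        }
        return motionMask;
    }
}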

3.4.1. Advantage of Proposed System It is much simpler to understand, and the implementation of the filter is more efficient, so the filter produces better performance. 3.5 FEASIBILITY STUDY The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis the feasibility study of the proposed system is carried out, to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential. The three key considerations involved in the feasibility analysis are economical feasibility, operational feasibility, and technical feasibility. 3.5.1 Economical Feasibility: This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available; only the customized products had to be purchased. 3.5.2 Operational Feasibility: This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make the user familiar with it. The user's level of confidence must be raised so that he is also able to offer some constructive criticism, which is welcomed, as he is the final user of the system.

3.5.3 Technical Feasibility: This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would in turn place high demands on the client. 3.6 SOFTWARE DESCRIPTION 3.6.1. OVERVIEW OF .NET .NET is a "Software Platform". It is a language-neutral environment for developing rich .NET experiences and building applications that can easily and securely operate within it. When developed applications are deployed, they target .NET and execute wherever .NET is implemented, instead of targeting a particular hardware/OS combination. The components that make up the .NET platform are collectively called the .NET Framework. The .NET Framework is a managed, type-safe environment for developing and executing applications. It manages all aspects of program execution, such as allocating memory for the storage of data and instructions, granting and denying permissions to the application, managing execution of the application, and reclaiming memory for resources that are no longer needed. The .NET Framework is designed for cross-language compatibility. Cross-language compatibility means that an application written in Visual Basic .NET may reference a DLL file written in C# (C-Sharp), and a Visual Basic .NET class might be derived from a C# class or vice versa. The .NET Framework consists of two main components: the Common Language Runtime (CLR) and the Class Libraries. 3.6.2. Common Language Runtime (CLR) The CLR is described as the "execution engine" of .NET. It provides the environment within which programs run. It is the CLR that manages the execution of programs and provides core services, such as code compilation, memory allocation, thread management, and garbage collection. The software version of .NET is actually the CLR version.

3.6.3. Working of the CLR When a .NET program is compiled, the output of the compiler is not an executable file but a file that contains a special type of code called Microsoft Intermediate Language (MSIL), a low-level set of instructions understood by the common language runtime. MSIL defines a set of portable instructions that are independent of any specific CPU. It is the job of the CLR to translate this intermediate code into executable code when the program is executed, allowing the program to run in any environment for which the CLR is implemented; that is how the .NET Framework achieves portability. The MSIL is turned into executable code using a JIT (Just-In-Time) compiler. The process works like this: when a .NET program is executed, the CLR activates the JIT compiler, which converts MSIL into native code on demand as each part of the program is needed. Thus the program executes as native code even though it was compiled to MSIL, so it runs as fast as it would if it had been compiled directly to native code, while still gaining the portability benefits of MSIL.

Fig 3.6.3.1 Compilation Process
3.6.4. Class Libraries The class library is the second major component of the .NET Framework and is designed to integrate with the common language runtime. This library gives programs access to the runtime environment. The class library consists of a large amount of prewritten code that all the applications created in VB .NET and Visual Studio .NET will use; the code for elements like forms, controls, and the rest of a VB .NET application actually comes from the class library. 3.6.5. Common Language Specification (CLS) If we want the code we write in one language to be used by programs in other languages, then it should adhere to the Common Language Specification (CLS). The CLS describes a set of features that different languages have in common. The CLS defines the minimum standards that .NET language compilers must conform to, and ensures that any source code compiled by a .NET compiler can interoperate with the .NET Framework.

Some reasons why developers are building applications using the .NET Framework:
Improved Reliability
Increased Performance
Developer Productivity
Powerful Security
Integration with Existing Systems
Ease of Deployment
Mobility Support
XML Web Service Support
Support for over 20 Programming Languages
Flexible Data Access
3.6.6. DOTNET FRAMEWORK

Fig 3.6.6.1 .NET framework


3.6.7. COMPILATION AND EXECUTION

Fig 3.6.7.1 Compilation and Execution
3.6.8. OVERVIEW OF C# C# 2.0 introduces several language extensions, including Generics, Anonymous Methods, Iterators, Partial Types, and Nullable Types. Generics permit classes, structs, interfaces, delegates, and methods to be parameterized by the types of data they store and manipulate. Generics are useful because they provide stronger compile-time type checking, require fewer explicit conversions between data types, and reduce the need for boxing operations and run-time type checks. Anonymous methods allow code blocks to be written in-line where delegate values are expected. Anonymous methods are similar to lambda functions in the Lisp programming language. C# 2.0 supports the creation of closures where anonymous methods access surrounding local variables and parameters. Iterators are methods that incrementally compute and yield a sequence of values. Iterators make it easy for a type to specify how the foreach statement will iterate over its elements.

Partial types allow classes, structs, and interfaces to be broken into multiple pieces stored in different source files for easier development and maintenance. Additionally, partial types allow separation of machine-generated and user-written parts of types so that it is easier to augment code generated by a tool. Nullable types represent values that may be unknown. A nullable type supports all values of its underlying type plus an additional null state. Any value type can be the underlying type of a nullable type. A nullable type supports the same conversions and operators as its underlying type, but additionally provides null value propagation similar to SQL. This section gives an introduction to these new features; they are specified in full in the C# 2.0 language specification, which also describes a number of smaller extensions. The language extensions in C# 2.0 were designed to ensure maximum compatibility with existing code. For example, even though C# 2.0 gives special meaning to the words where, yield, and partial in certain contexts, these words can still be used as identifiers. The C# foreach statement is used to iterate over the elements of an enumerable collection. In order to be enumerable, a collection must have a parameterless GetEnumerator method that returns an enumerator. Generally, enumerators are difficult to implement, but the task is significantly simplified with iterators. An iterator is a statement block that yields an ordered sequence of values. An iterator is distinguished from a normal statement block by the presence of one or more yield statements: the yield return statement produces the next value of the iteration, and the yield break statement indicates that the iteration is complete.
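As a small illustration of the features described above, the following sketch (illustrative code only, not taken from the project) shows a generic method, an iterator built with yield return and yield break, and a nullable value type.

using System;
using System.Collections.Generic;

class CSharp2FeaturesDemo
{
    // Generic method: works for any element type T.
    static T FirstOrDefault<T>(IEnumerable<T> items, T fallback)
    {
        foreach (T item in items)
            return item;
        return fallback;
    }

    // Iterator: yield return produces the sequence lazily,
    // yield break ends the iteration early.
    static IEnumerable<int> EvenNumbers(int limit)
    {
        for (int i = 0; i <= limit; i += 2)
        {
            if (i > 100)
                yield break;
            yield return i;
        }
    }

    static void Main()
    {
        // Nullable type: an int that can also hold the null state.
        int? firstEven = null;
        foreach (int n in EvenNumbers(10))
        {
            if (firstEven == null)
                firstEven = n;
            Console.WriteLine(n);
        }
        Console.WriteLine("First even: " + firstEven);
        Console.WriteLine(FirstOrDefault<int>(new int[] { 7, 8 }, -1));
    }
}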


3.6.9. OVERVIEW OF 3-TIER ARCHITECTURE WebSphere Application Server provides the application logic layer in a three-tier architecture, enabling client components to interact with data resources and legacy applications. Collectively, three-tier architectures are programming models that enable the distribution of application functionality across three independent systems, typically:
--Client components running on local workstations (tier one)
--Processes running on remote servers (tier two)
--A discrete collection of databases, resource managers, and mainframe applications (tier three)
These tiers are logical tiers. They might or might not be running on the same physical server.

Fig 3.6.9.1 Three Tier architecture


3.6.9.1. First tier: Responsibility for presentation and user interaction resides with the first-tier components. These client components enable the user to interact with the second-tier processes in a secure and intuitive manner. WebSphere Application Server supports several client types. Clients do not access the third-tier services directly. For example, a client component provides a form on which a customer orders products. The client component submits this order to the second-tier processes, which check the product databases and perform tasks that are needed for billing and shipping. 3.6.9.2. Second tier: The second-tier processes are commonly referred to as the application logic layer. These processes manage the business logic of the application and are permitted access to the third-tier services. The application logic layer is where most of the processing work occurs. Multiple client components can access the second-tier processes simultaneously, so this application logic layer must manage its own transactions. In the previous example, if several customers attempt to place an order for the same item, of which only one remains, the application logic layer must determine who has the right to that item, update the database to reflect the purchase, and inform the other customers that the item is no longer available. Without an application logic layer, client components would access the product database directly. Separating the second and third tiers reduces the load on the third-tier services, supports more effective connection management, and can improve overall network performance. 3.6.9.3. Third tier: The third-tier services are protected from direct access by client components because they reside within a secure network; interaction must occur through the second-tier processes. 3.6.9.4. Communication among tiers: All three tiers must communicate with each other. Open, standard protocols and exposed APIs simplify this communication. Client components can be written in any programming language, such as Java, C++, or C#, and can run on any operating system, since they communicate only with the application logic layer. Databases in the third tier can be of any design, as long as the application layer can query and manipulate them. The key to this architecture is the application logic layer.

4. SYSTEM DESIGN
4.1 MODULE DESCRIPTION
4.1.1. Input Video: Freeways were originally designed to provide high mobility to road users. However, the increase in vehicle numbers has led to congestion forming on freeways around the world. Daily recurrent congestion substantially reduces freeway capacity when it is most needed. Expanding existing freeways cannot provide a complete solution to the congestion problem due to economic and space constraints. 4.1.2. Background Extraction: To extract the freeway background automatically, a sufficient number of successive frames must be available for processing. The automatic background extraction starts by processing the first three successive frames (images). The automatic background extraction results are very good and promising. The most effective parameters, which play the main role in automatic background extraction, are the threshold level and the dilation. 4.1.3. Vehicle Detection: To detect vehicles, the extracted background must be subtracted from the current image: subtract the extracted background from the current image, find the edges of the current image and of the background image, and subtract the edge of the background image from the edge of the current image. After the background is subtracted from the current image, the resulting image is filtered to keep only the moving vehicles. Using this technique most of the vehicles are detected; moving vehicles are detected easily once the background is subtracted. 4.1.4. Vehicle Count: A number of successive frames are used to extract the background. A digital camera is used to take the shots; the camera is placed directly over the highway and shoots six frames per second. The images are taken at midday to decrease the effect of vehicle shadows. 4.2 INPUT DESIGN Input design is a part of overall system design that requires special attention because it is the most common source of data processing errors. The goal of designing input data is to make data entry easy and free from errors.

The privacy-protected query processing of the user has been designed in a way in which the user's information is protected from hacking. The query of the user is sent to the location-based server, where it is processed. The form has been designed to accept and process the user's query, and the cursor marks the place where data must be entered into the form. The input steps are: select the video and load the video; the video is then converted into frames. 4.3 OUTPUT DESIGN Computer output is the most important and direct source of information to the user. Output design aims at communicating the results of processing to the user and management. The application is successful only when it generates effective reports. To improve the relationship between the system and the user, the outputs are provided as a soft copy or a hard copy of the report, according to the user's needs.
4.4 SYSTEM ARCHITECTURE

Fig 4.4.1 Architecture Diagram

4.5 UML DIAGRAMS
4.5.1 USECASE DIAGRAM
The use case diagram shows the user interacting with the following use cases: input video clip, grayscale conversion, segmentation, key frame selection, calculation of width and height, ROI identification, and display of the key frames with highlighted ROIs.
Fig 4.5.1.1 Usecase Diagram

4.5.2 Dataflow Diagram
The data flows from the input video, which is converted into continuous frames, through grayscale conversion and segmentation into background and foreground, then through the blob counter algorithm, ROI identification, and calculation of width and height, and finally to the display of the result with the vehicle count.
Fig 4.5.2.1 Dataflow Diagram

4.5.3 ACTIVITY DIAGRAM
The activity flow starts with the input video, followed by grayscale conversion, segmentation into foreground and background, the blob counter, key frame selection, calculation of width and height, and ROI identification.
Fig 4.5.3.1 Activity Diagram

4.5.4 SEQUENCE DIAGRAM
The sequence diagram involves the following participants: Input Video, Grayscale conversion, Segmentation, Keyframe selection, Calculate width and height, ROI identification, and Keyframe with HI. The messages exchanged are the input frames, the grayscale image (GI), the foreground and background (FG, BG), the selected frames, the object width and height (OWH), and the identified ROI.
Abbreviations: GI - Grayscale Image; FG - Foreground; BG - Background; OWH - Object Width and Height; ROI - Region Of Interest; HI - Highlighted Identification.
Fig 4.5.4.1 Sequence Diagram

5. CODE REVIEW AND TESTING

5.1 CODE REVIEW
5.1.1 AUTOMATIC BACKGROUND EXTRACTION
To extract the freeway background automatically, a sufficient number of successive frames must be available for processing. The automatic background extraction starts by processing the first three successive frames (images) as in the following steps:
STEP 1: Take a movie of the freeway and then convert it to a number of successive frames (images).
STEP 2: Use the first three successive frames Ct-2, Ct-1, Ct to calculate the differences Dt-1,t-2 = |Ct-1 - Ct-2| and Dt,t-1 = |Ct - Ct-1|.
STEP 3: Specify the gray threshold level T.
STEP 4: Convert the differences to binary images DBt-1,t-2 and DBt,t-1 depending on the threshold.
STEP 5: Calculate the Difference Product (DP) using the bitwise logical AND operation: DPt = DBt-1,t-2 & DBt,t-1.
STEP 6: Apply binary dilation (DLT) to DPt(i,j).
STEP 7: Apply image closing.
STEP 8: Calculate the moving object region (MOR) by filtering the closed image.
STEP 9: Fill the moving object region.
STEP 10: Estimate the initial background B(kk) and store this region information in the EF (Extraction Flag): B(kk) = MOR(kk) | Ct, where | is the bitwise logical OR operator, and EF(kk) = MOR(kk).
STEP 11: For the first three successive frames, calculate MOR for the current input image and calculate the background extraction target area (ETA): ETA(kk) = EF(kk-1) & ~MOR(kk), where ~MOR(kk) is the 1's complement of MOR(kk).
STEP 12: For the subsequent frames, extract the background pixels in the current input image and update EF: B(kk) = B(kk-1) & (Ct | ETA(kk)), EF(kk) = EF(kk-1) ⊕ ETA(kk), where ⊕ is the bitwise logical XOR (exclusive OR) operator.
Repeat steps 1-12 until the background is obtained. A minimal code sketch of Steps 2-6 is given below.
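The following is a minimal, hypothetical C# sketch of Steps 2-6 (thresholded frame differences, their bitwise AND, and a simple dilation), written against plain grayscale byte arrays rather than the AForge-based classes used in the appendix; the helper names and the 3x3 structuring element are illustrative assumptions.

using System;

class BackgroundExtractionSketch
{
    // Steps 2-4: absolute difference of two grayscale frames, thresholded to a binary mask.
    static bool[,] BinaryDifference(byte[,] a, byte[,] b, int threshold)
    {
        int h = a.GetLength(0), w = a.GetLength(1);
        bool[,] mask = new bool[h, w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                mask[y, x] = Math.Abs(a[y, x] - b[y, x]) > threshold;
        return mask;
    }

    // Step 5: difference product DP = DB(t-1,t-2) AND DB(t,t-1).
    static bool[,] And(bool[,] p, bool[,] q)
    {
        int h = p.GetLength(0), w = p.GetLength(1);
        bool[,] r = new bool[h, w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                r[y, x] = p[y, x] && q[y, x];
        return r;
    }

    // Step 6: binary dilation with a 3x3 structuring element (assumed size).
    static bool[,] Dilate(bool[,] src)
    {
        int h = src.GetLength(0), w = src.GetLength(1);
        bool[,] dst = new bool[h, w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                for (int dy = -1; dy <= 1 && !dst[y, x]; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                    {
                        int ny = y + dy, nx = x + dx;
                        if (ny >= 0 && ny < h && nx >= 0 && nx < w && src[ny, nx])
                        {
                            dst[y, x] = true;
                            break;
                        }
                    }
        return dst;
    }

    // Usage for three successive frames Ct2, Ct1, Ct with threshold T (Steps 2-6).
    static bool[,] DifferenceProduct(byte[,] Ct2, byte[,] Ct1, byte[,] Ct, int T)
    {
        bool[,] db1 = BinaryDifference(Ct1, Ct2, T);
        bool[,] db2 = BinaryDifference(Ct, Ct1, T);
        return Dilate(And(db1, db2));
    }
}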

5.1.2 VEHICLE DETECTION:
To detect vehicles, the extracted background must be subtracted from the current image as in the following steps:
Step 1: Subtract the extracted background from the current image.
Step 2: Find the edges of the current image and of the background image.
Step 3: Subtract the edge of the background image from the edge of the current image.
Step 4: Fill the resulting images from Steps 2 and 3, and apply a logical AND operation to the results of Steps 2 and 3.
Step 5: Filter the resulting image.
Step 6: Count the resulting moving vehicles.
A minimal sketch of the edge-based part of these steps (Steps 2 and 3) is given below.
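The following is a minimal, hypothetical C# sketch of Steps 2 and 3 (edge detection and edge subtraction), again written against plain grayscale byte arrays; the Sobel operator and the threshold parameter are assumptions made for illustration, since the project code in the appendix relies on AForge filters instead.

using System;

class EdgeDetectionSketch
{
    // Step 2: Sobel gradient magnitude of a grayscale image.
    static int[,] SobelMagnitude(byte[,] img)
    {
        int h = img.GetLength(0), w = img.GetLength(1);
        int[,] mag = new int[h, w];
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++)
            {
                int gx = -img[y - 1, x - 1] + img[y - 1, x + 1]
                         - 2 * img[y, x - 1] + 2 * img[y, x + 1]
                         - img[y + 1, x - 1] + img[y + 1, x + 1];
                int gy = -img[y - 1, x - 1] - 2 * img[y - 1, x] - img[y - 1, x + 1]
                         + img[y + 1, x - 1] + 2 * img[y + 1, x] + img[y + 1, x + 1];
                mag[y, x] = Math.Abs(gx) + Math.Abs(gy);
            }
        return mag;
    }

    // Step 3: keep only edges present in the current frame but not in the background.
    static bool[,] EdgeDifference(byte[,] current, byte[,] background, int edgeThreshold)
    {
        int[,] edgeCur = SobelMagnitude(current);
        int[,] edgeBg = SobelMagnitude(background);
        int h = current.GetLength(0), w = current.GetLength(1);
        bool[,] vehicleEdges = new bool[h, w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                vehicleEdges[y, x] = (edgeCur[y, x] - edgeBg[y, x]) > edgeThreshold;
        return vehicleEdges;
    }
}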

5.2. SYSTEM TESTING Software testing is an important element of software quality assurance and represents the ultimate review of specification, design, and coding. The increasing visibility of software as a system element and the costs associated with software failure are motivating forces for well-planned, thorough testing. Though the test phase is often thought of as separate and distinct from the development effort--first develop, then test--testing is a concurrent process that provides valuable information for the development team. There are at least three options for integrating Project Builder into the test phase: testers do not install Project Builder, and developers use Project Builder functionality to compile and source-control the modules to be tested and hand them off to the testers, whose process remains unchanged; the testers import the same project or projects that the developers use; or a project is created based on the development project but customized for the testers (for example, it does not include support documents, specs, or source), who then import it.
5.2.1 Testing objectives: There are several rules that can serve as testing objectives.

They are: testing is a process of executing a program with the intent of finding an error; a good test case is one that has a high probability of finding an undiscovered error; and a successful test is one that uncovers an undiscovered error. If testing is conducted successfully according to the objectives stated above, it will uncover errors in the software. 5.2.2 Types of Testing: Testing is the process of executing the program with the intent of finding errors. Testing cannot show the absence of defects; it can only show that software errors are present. The testing principles used are: tests are traceable to customer requirements; 80% of errors will likely be traceable to 20% of program modules; and testing should begin in the small and progress toward testing in the large. 5.2.2.1 White Box Testing: This test is conducted during the code generation phase itself, and all errors were rectified at the moment of their discovery. During this testing, it is ensured that all independent paths within a module have been exercised at least once, all logical decisions are exercised on both their true and false sides, and all loops are executed at their boundaries. 5.2.2.2 Black Box Testing: It is focused on the functional requirements of the software. It is not an alternative to White Box Testing; rather, it is a complementary approach that is likely to uncover a different class of errors than White Box methods. It attempts to find errors in the following categories: incorrect or missing functions, interface errors, errors in data structures or external database access, performance errors, and

initialization errors. It has already been stated that the methodology used for program development is the Component Assembly Model. Before integrating the module interfaces, each module interface is tested separately; this is called Unit Testing. 5.2.2.3 Unit Testing: This is the first level of testing. Here the different modules are tested against the specifications produced during the design of the module. During this testing the number of arguments is compared to the input parameters, parameters are matched against arguments, and so on. It is also ensured that file attributes are correct, that files are opened before use, and that input/output errors are handled. A unit test is usually conducted using a test driver; a hedged example of such a driver appears at the end of this subsection. 5.2.2.4 Integration Testing: Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with the interfaces. Bottom-up integration is used for this phase. It begins construction and testing with atomic modules. This strategy is implemented with the following steps: low-level modules are combined to form clusters that perform a specific software sub-function; the cluster is tested; then drivers are removed and clusters are combined moving upward in the program structure. 5.2.2.5 Alpha Testing: A series of acceptance tests were conducted to enable the employees of the firm to validate requirements. The end user conducted it. The suggestions, along with the additional requirements of the end user, were included in the project. 5.2.2.6 Beta Testing: It is to be conducted by the end user without the presence of the developer. It can be conducted over a period of weeks or months. Since it is a long, time-consuming activity, its result is out of the scope of this project report, but its result will help to enhance the product at a later time.
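As an illustration of a simple test driver for unit testing, the following is a hypothetical sketch written with the NUnit framework; NUnit is an assumption (it is not mentioned in the project), and the method under test is the illustrative FrameDifferenceDemo.FrameDifference sketch from Section 3, not the project's own code.

using NUnit.Framework;

[TestFixture]
public class FrameDifferenceTests
{
    [Test]
    public void IdenticalFramesProduceNoMotion()
    {
        byte[] frame = new byte[] { 10, 20, 30, 40 };
        bool[] mask = FrameDifferenceDemo.FrameDifference(frame, frame, 15);
        foreach (bool moving in mask)
            Assert.IsFalse(moving);   // nothing changed, so no pixel is "moving"
    }

    [Test]
    public void LargeIntensityChangeIsDetected()
    {
        byte[] previous = new byte[] { 10, 20, 30, 40 };
        byte[] current  = new byte[] { 10, 20, 30, 200 };
        bool[] mask = FrameDifferenceDemo.FrameDifference(previous, current, 15);
        Assert.IsTrue(mask[3]);       // only the last pixel changed significantly
        Assert.IsFalse(mask[0]);
    }
}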

5.2.2.7 Validation Testing: This provides the final assurance that the software meets all functional, behavioral, and performance requirements. The software is completely assembled as a package. Validation succeeds when the software functions in the manner the user expects. 5.2.2.8 Output Testing: After performing validation testing, the next step is output testing of the proposed system, since no system can be useful if it does not produce the required output. The generated output is considered in two ways: one is on screen and the other is in printed format. The output conforms to the requirements specified by the user; hence output testing did not result in any corrections to the system. 5.2.2.9 User Acceptance Testing: User acceptance of a system is the key factor for the success of any system. The system under consideration was tested for user acceptance by constantly keeping in touch with the prospective system users at the time of development and making changes wherever required, covering the input screen design, the output screen design, the on-line messages that guide the user, and the format of the ad-hoc reports and other outputs. The above testing is done by taking various kinds of test data. Preparation of test data plays a vital role in system testing. After preparing the test data, the system under study is tested using that data. While testing the system with the test data, errors are again uncovered and corrected using the testing steps above, and the corrections are noted for future use.

5.3 TEST REPORT BY SYSTEM ANALYST/PROGRAMMER

1. INTERFACE TESTING - Result: OK
   a) Mouse / tab navigation
   b) User friendliness
   c) Consistent menus
   d) Consistent graphical buttons
2. VALIDATION TESTING - Result: OK
   a) Check for improper or inconsistent typing
   b) Check for erroneous initialization or default values
   c) Check for incorrect variable names
   d) Check for inconsistent data types
   e) Check for relational / arithmetic operators
3. DATA SECURITY / SECURITY TESTING - Result: OK
   a) Data insert / delete / update
   b) Conditions (underflow, overflow exceptions)
   c) Check for unauthorized access of data
   d) Check for data availability
4. EFFICIENCY TESTING - Result: OK
   a) Throughput of the system
   b) Response time of the system
   c) Online disk storage
   d) Primary memory required by the system
5. ERROR HANDLING ROUTINES - Result: OK
   a) Error descriptions are intelligent / understandable
   b) Error recovery is smooth
   c) All error handling routines are tested and executed at least once
Table 5.3.1 Test Report by System Analyst / Programmer

5.4 USER TEST REPORT (TO BE FILLED BY THE USER)

1. TEST FOR PULLED-DOWN MENUS AND MOUSE OPERATIONS - Observation: YES
   a) Are all the relevant pull-down menus, scroll bars, dialog boxes and buttons functioning properly?
   b) Is the appropriate menu bar displayed in the appropriate context?
   c) Are all menu functions and pull-down sub-functions properly listed?
   d) Does each menu function perform according to the design specification?
   e) Is it possible to invoke each menu function using its alternative keys?
   f) Is all data content within the window properly addressable with the mouse, function keys and keyboard?
   g) Does the window properly regenerate when it is overwritten and then recalled?
   h) Is the active window properly highlighted?
2. TEST FOR DATA ENTRY LEVEL - Observation: YES
   a) Is alphanumeric data entry properly echoed and input to the system?
   b) Do graphical modes of data entry such as scroll bars work properly?
   c) Are data input messages intelligible?
   d) Is invalid data properly recognized?
   e) Is all input data entry properly saved?
3. TEST FOR VERIFYING OUTPUTS - Observation: YES
   a) Is the output displayed according to the requirements and printed with proper alignment?
   b) If calculations are present, were they checked?
   c) Are report formats according to need?
   d) Can the reports be printed on the printer?
Table 5.4.1 User Test Report

6. SCOPE FOR FUTURE ENHANCEMENT

The background subtraction and edge detection technique is used for vehicle detection. This technique combines the advantages of both approaches, and practical application has confirmed its effectiveness. The method consists of two procedures: first, an automatic background extraction procedure, in which the background is extracted automatically from the successive frames; second, a vehicle detection procedure, which depends on edge detection and background subtraction. Experimental results show the effective application of this algorithm. It could be improved and used as a basis for automatic freeway traffic monitoring. Missed detections resulted from large vehicles occluding small ones and from far-away moving vehicles that appear as a single point in the image. These difficulties could be addressed in future work by installing the camera on a tall building near the highway and taking shots of the cross-section of the highway.

7. CONCLUSION

The experimental results of applying this approach show that moving vehicles are detected efficiently. The approach gives promising and effective results, with a vehicle detection rate higher than 91%. In this approach the advantages of background subtraction and edge detection are combined. It could be improved and used as a basis for automatic freeway traffic monitoring. Missed detections resulted from large vehicles occluding small ones and from far-away moving vehicles that appear as a single point in the image. These difficulties could be addressed in future work by installing the camera on a tall building near the highway and taking shots of the cross-section of the highway.

8. APPENDICES

8.1 Source Code


Select Video Code

private void openFileItem_Click( object sender, System.EventArgs e )
{
    if ( ofd.ShowDialog( ) == DialogResult.OK )
    {
        // create video source
        VideoFileSource fileSource = new VideoFileSource( );
        fileSource.VideoSource = ofd.FileName;
        // open it
        OpenVideoSource( fileSource );
    }
}

private void OpenVideoSource( IVideoSource source )
{
    // set busy cursor
    this.Cursor = Cursors.WaitCursor;
    // close previous file
    CloseFile( );
    // enable/disable motion alarm
    // create camera
    Camera camera = new Camera( source, detector );
    // start camera
    camera.Start( );
    // attach camera to camera window
    cameraWindow.Camera = camera;
    // reset statistics
    statIndex = statReady = 0;
    // set event handlers
    camera.NewFrame += new EventHandler( camera_NewFrame );
    camera.Alarm += new EventHandler( camera_Alarm );
    // start timer
    timer.Start( );
    this.Cursor = Cursors.Default;
}

BackgroundExtraction:

namespace motion
{
    using System;
    using System.Drawing;
    using System.Drawing.Imaging;
    using AForge.Imaging;


    using AForge.Imaging.Filters;

    /// <summary>
    /// MotionDetector1
    /// </summary>
    public class MotionDetector1 : IMotionDetector
    {
        private IFilter grayscaleFilter = new GrayscaleBT709( );
        private Difference differenceFilter = new Difference( );
        private Threshold thresholdFilter = new Threshold( 15 );
        private IFilter erosionFilter = new Erosion( );
        private Merge mergeFilter = new Merge( );
        private IFilter extrachChannel = new ExtractChannel( RGB.R );
        private ReplaceChannel replaceChannel = new ReplaceChannel( RGB.R, null );

        private Bitmap backgroundFrame;
        private BitmapData bitmapData;

        private bool calculateMotionLevel = false;
        private int width;   // image width
        private int height;  // image height
        private int pixelsChanged;

        // Motion level calculation - calculate or not motion level
        public bool MotionLevelCalculation
        {
            get { return calculateMotionLevel; }
            set { calculateMotionLevel = value; }
        }

        // Motion level - amount of changes in percents
        public double MotionLevel
        {
            get { return (double) pixelsChanged / ( width * height ); }
        }

        // Constructor
        public MotionDetector1( ) { }

        // Reset detector to initial state
        public void Reset( )
        {
            if ( backgroundFrame != null )
            {
                backgroundFrame.Dispose( );
                backgroundFrame = null;
            }
        }

        // Process new frame
        public void ProcessFrame( ref Bitmap image )
        {


            if ( backgroundFrame == null )
            {
                // create initial background image
                backgroundFrame = grayscaleFilter.Apply( image );
                // get image dimension
                width = image.Width;
                height = image.Height;
                // just return for the first time
                return;
            }

            Bitmap tmpImage;

            // apply the grayscale filter
            tmpImage = grayscaleFilter.Apply( image );
            // set background frame as an overlay for difference filter
            differenceFilter.OverlayImage = backgroundFrame;
            // apply difference filter
            Bitmap tmpImage2 = differenceFilter.Apply( tmpImage );

            // lock the temporary image and apply some filters on the locked data
            bitmapData = tmpImage2.LockBits( new Rectangle( 0, 0, width, height ),
                ImageLockMode.ReadWrite, PixelFormat.Format8bppIndexed );
            // threshold filter
            thresholdFilter.ApplyInPlace( bitmapData );
            // erosion filter
            Bitmap tmpImage3 = erosionFilter.Apply( bitmapData );
            // unlock temporary image
            tmpImage2.UnlockBits( bitmapData );
            tmpImage2.Dispose( );

            // calculate amount of changed pixels
            pixelsChanged = ( calculateMotionLevel ) ? CalculateWhitePixels( tmpImage3 ) : 0;

            // dispose old background
            backgroundFrame.Dispose( );
            // set background to current
            backgroundFrame = tmpImage;

            // extract red channel from the original image
            Bitmap redChannel = extrachChannel.Apply( image );
            // merge red channel with moving object
            mergeFilter.OverlayImage = tmpImage3;
            Bitmap tmpImage4 = mergeFilter.Apply( redChannel );
            redChannel.Dispose( );
            tmpImage3.Dispose( );

            // replace red channel in the original image


            replaceChannel.ChannelImage = tmpImage4;
            Bitmap tmpImage5 = replaceChannel.Apply( image );
            tmpImage4.Dispose( );
            image.Dispose( );
            image = tmpImage5;
        }

        // Calculate white pixels
        private int CalculateWhitePixels( Bitmap image )
        {
            int count = 0;
            // lock difference image
            BitmapData data = image.LockBits( new Rectangle( 0, 0, width, height ),
                ImageLockMode.ReadOnly, PixelFormat.Format8bppIndexed );
            int offset = data.Stride - width;
            unsafe
            {
                byte * ptr = (byte *) data.Scan0.ToPointer( );
                for ( int y = 0; y < height; y++ )
                {
                    for ( int x = 0; x < width; x++, ptr++ )
                    {
                        count += ( (*ptr) >> 7 );
                    }
                    ptr += offset;
                }
            }
            // unlock image
            image.UnlockBits( data );
            return count;
        }
    }
}

CaptureDeviceForm.cs

using System;
using System.Drawing;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;
using dshow;
using dshow.Core;

namespace motion
{
    /// <summary>
    /// Summary description for CaptureDeviceForm.
    /// </summary>
    public class CaptureDeviceForm : System.Windows.Forms.Form
    {


FilterCollection filters; private System.Windows.Forms.Label label1; private System.Windows.Forms.ComboBox deviceCombo; private System.Windows.Forms.Button cancelButton; private System.Windows.Forms.Button okButton; private string device; /// <summary> /// Required designer variable. /// </summary> private System.ComponentModel.Container components = null; // Device public string Device { get { return device; } } // Constructor public CaptureDeviceForm() { // // Required for Windows Form Designer support // InitializeComponent(); // try { filters = new FilterCollection(FilterCategory.VideoInputDevice); if (filters.Count == 0) throw new ApplicationException(); // add all devices to combo foreach (Filter filter in filters) { deviceCombo.Items.Add(filter.Name); } } catch (ApplicationException) { deviceCombo.Items.Add("No local capture devices"); deviceCombo.Enabled = false; okButton.Enabled = false; } deviceCombo.SelectedIndex = 0; } /// <summary> /// Clean up any resources being used. /// </summary> protected override void Dispose( bool disposing ) { if( disposing ) {


                if (components != null)
                {
                    components.Dispose();
                }
            }
            base.Dispose( disposing );
        }

        #region Windows Form Designer generated code
        /// <summary>
        /// Required method for Designer support - do not modify
        /// the contents of this method with the code editor.
        /// </summary>
        private void InitializeComponent()
        {
            this.label1 = new System.Windows.Forms.Label();
            this.deviceCombo = new System.Windows.Forms.ComboBox();
            this.cancelButton = new System.Windows.Forms.Button();
            this.okButton = new System.Windows.Forms.Button();
            this.SuspendLayout();
            //
            // label1
            //
            this.label1.Location = new System.Drawing.Point(10, 10);
            this.label1.Name = "label1";
            this.label1.Size = new System.Drawing.Size(156, 14);
            this.label1.TabIndex = 0;
            this.label1.Text = "Select capture device:";
            //
            // deviceCombo
            //
            this.deviceCombo.DropDownStyle = System.Windows.Forms.ComboBoxStyle.DropDownList;
            this.deviceCombo.Location = new System.Drawing.Point(10, 30);
            this.deviceCombo.Name = "deviceCombo";
            this.deviceCombo.Size = new System.Drawing.Size(325, 21);
            this.deviceCombo.TabIndex = 6;
            //
            // cancelButton
            //
            this.cancelButton.DialogResult = System.Windows.Forms.DialogResult.Cancel;
            this.cancelButton.FlatStyle = System.Windows.Forms.FlatStyle.Flat;
            this.cancelButton.Location = new System.Drawing.Point(180, 78);
            this.cancelButton.Name = "cancelButton";
            this.cancelButton.TabIndex = 9;
            this.cancelButton.Text = "Cancel";
            //
            // okButton
            //
            this.okButton.DialogResult = System.Windows.Forms.DialogResult.OK;
            this.okButton.FlatStyle = System.Windows.Forms.FlatStyle.Flat;
            this.okButton.Location = new System.Drawing.Point(90, 78);
            this.okButton.Name = "okButton";
            this.okButton.TabIndex = 8;
            this.okButton.Text = "Ok";
            this.okButton.Click += new System.EventHandler(this.okButton_Click);


            //
            // CaptureDeviceForm
            //
            this.AcceptButton = this.okButton;
            this.AutoScaleBaseSize = new System.Drawing.Size(5, 13);
            this.CancelButton = this.cancelButton;
            this.ClientSize = new System.Drawing.Size(344, 113);
            this.Controls.AddRange(new System.Windows.Forms.Control[] {
                this.cancelButton,
                this.okButton,
                this.deviceCombo,
                this.label1});
            this.FormBorderStyle = System.Windows.Forms.FormBorderStyle.FixedToolWindow;
            this.MaximizeBox = false;
            this.MinimizeBox = false;
            this.Name = "CaptureDeviceForm";
            this.ShowInTaskbar = false;
            this.StartPosition = System.Windows.Forms.FormStartPosition.CenterParent;
            this.Text = "Open Local Capture Device";
            this.ResumeLayout(false);
        }
        #endregion

        // On "Ok" button
        private void okButton_Click(object sender, System.EventArgs e)
        {
            device = filters[deviceCombo.SelectedIndex].MonikerString;
        }
    }
}

Camera.cs

namespace motion
{
    using System;
    using System.Drawing;
    using System.Threading;
    using VideoSource;

    /// <summary>
    /// Camera class
    /// </summary>
    public class Camera
    {
        private IVideoSource videoSource = null;
        private IMotionDetector motionDetecotor = null;
        private Bitmap lastFrame = null;

        // image width and height
        private int width = -1, height = -1;
        // alarm level
        private double alarmLevel = 0.005;

        // events
        public event EventHandler NewFrame;
        public event EventHandler Alarm;

        // LastFrame property
        public Bitmap LastFrame
        {
            get { return lastFrame; }
        }
        // Width property
        public int Width
        {
            get { return width; }
        }
        // Height property
        public int Height
        {
            get { return height; }
        }
        // FramesReceived property
        public int FramesReceived
        {
            get { return ( videoSource == null ) ? 0 : videoSource.FramesReceived; }
        }
        // BytesReceived property
        public int BytesReceived
        {
            get { return ( videoSource == null ) ? 0 : videoSource.BytesReceived; }
        }
        // Running property
        public bool Running
        {
            get { return ( videoSource == null ) ? false : videoSource.Running; }
        }

8.2 Screen shots

Load a Video file

Input a video file

Grayscale Object

Vehicle Movement Tracking Detector 1

Detector 2

Detector 3


Detector 4 (Vehicle Count)

9. BIBLIOGRAPHY

i. S. Gupte, O. Masoud, R. Martin, and N. Papanikolopoulos, Detection and classification of vehicles, IEEE Trans. Intelligent Transportation Systems. ii. Z. Kim and J. Malik, Fast vehicle detection with probabilistic feature grouping and its application to vehicle tracking, Ninth IEEE International Conference on Computer Vision, 13-16 Oct. 2003, pp. 524-531.

iii. H. Schneiderman and T. Kanade, A statistical method for 3D object detection applied to faces and cars, IEEE Conf. Computer Vision and Pattern Recognition, vol. 1, 13-15 Jun. 2000, pp. 746-751. iv. T. Zhao and R. Nevatia, Car detection in low resolution aerial image, Image and Vision Computing, vol. 21, 18 March 2003, pp. 693-703.

v. A. Rajagopalan, P. Burlina, and R. Chellappa, Higher order statistical learning for vehicle detection in images, in Proc. 7th IEEE Int. Conf. Computer Vision, vol. 2, 20-27 Sept. 1999, pp. 1204-1209.

