
SOFTWARE REQUIREMENT SPECIFICATION ON VISUAL FEATURE EXTRACTION

Submitted by PRAVITHA E P S6 MCA NO:24

INDEX
1. Introduction
1.1 Purpose
1.2 Scope
1.3 Definitions, Acronyms, and Abbreviations
1.4 References
1.5 Overview
2. Overall Description
2.1 Product Perspective
2.2 Product Functions
2.3 User Characteristics
2.4 Constraints
2.5 Assumptions and Dependencies
3. Specific Requirements
3.1 Functional Requirements
3.2 Use Case Diagram
3.3 Non-Functional Requirements
3.3.1 Usability
3.3.2 Reliability
3.4 Interfaces
3.4.1 User Interfaces
3.4.2 Hardware Interfaces
3.4.3 Software Interfaces
4. Analysis Models
4.1 Data Flow Diagram
5. Conclusion

1. Introduction
An image is digitized to convert it to a form that can be stored in a computer's memory or on some form of storage media such as a hard disk or CD-ROM. This digitization can be done by a scanner, or by a video camera connected to a frame grabber board in a computer. Once the image has been digitized, it can be operated upon by various image processing operations. Image processing operations can be roughly divided into three major categories: image compression, image enhancement and restoration, and measurement extraction. Image compression is familiar to most people; it involves reducing the amount of memory needed to store a digital image. Image defects caused by the digitization process or by faults in the imaging set-up (for example, bad lighting) can be corrected using image enhancement techniques. Once the image is in good condition, measurement extraction operations can be used to obtain useful information from it. Some examples of image enhancement and measurement extraction are given below. The examples all operate on 256 grey-scale images. This means that each pixel in the image is stored as a number between 0 and 255, where 0 represents a black pixel, 255 represents a white pixel, and values in between represent shades of grey. These operations can be extended to operate on colour images.
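As an illustration of the 0 to 255 grey-scale representation described above, the following MATLAB sketch (assuming the Image Processing Toolbox and a hypothetical input file named input.jpg) loads an image, converts it to an 8-bit grey-scale array, and reports the range of grey levels present:

% Read an image and convert it to an 8-bit grey-scale array
I = imread('input.jpg');        % hypothetical file name, any RGB image
G = rgb2gray(I);                % each pixel becomes a uint8 value in [0, 255]

% Inspect the range of grey levels actually present in the image
minLevel = min(G(:));           % 0 would be pure black
maxLevel = max(G(:));           % 255 would be pure white
fprintf('Grey levels range from %d to %d\n', minLevel, maxLevel);

imshow(G);                      % display the digitized grey-scale image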

1.1 Purpose: An image refers to a 2D light intensity function f(x,y), where (x,y) denotes spatial coordinates and the value of f at any point (x,y) is proportional to the brightness (gray level) of the image at that point. A digital image is an image f(x,y) that has been discretized in both spatial coordinates and brightness. The elements of such a digital array are called image elements, or pixels.
DIGITAL IMAGE

A digital remotely sensed image is typically composed of picture elements (pixels) located at the intersection of each row i and column j in each of the K bands of imagery. Associated with each pixel is a number known as the Digital Number (DN) or Brightness Value (BV), which depicts the average radiance of a relatively small area within a scene (Fig. 1). A smaller number indicates low average radiance from the area, and a higher number indicates high radiant properties of the area. The size of this area affects the reproduction of detail within the scene: as pixel size is reduced, more scene detail is preserved in the digital representation.

1.2 Scope: Digital image processing allows the use of much more complex algorithms for image processing, and hence can offer both more sophisticated performance at simple tasks and the implementation of methods which would be impossible by analog means. In particular, digital image processing is the only practical technology for:

Classification, feature extraction, pattern recognition, projection, and multi-scale signal analysis.

Some techniques which are used in digital image processing include: pixelization, linear filtering, principal components analysis, independent component analysis, hidden Markov models, anisotropic diffusion, partial differential equations, self-organizing maps, neural networks, and wavelets. A minimal linear filtering sketch is given below.
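To make one of these techniques concrete, the following MATLAB sketch (assuming the Image Processing Toolbox and a hypothetical file input.jpg) applies a simple linear filter, a 3-by-3 averaging kernel, to smooth an image; it is an illustrative sketch, not part of the specified system:

% Linear filtering: convolve the image with a 3x3 averaging kernel
I = imread('input.jpg');            % hypothetical input image
h = fspecial('average', 3);         % 3x3 box (mean) filter kernel
J = imfilter(I, h, 'replicate');    % replicate border pixels at the image edges

imshowpair(I, J, 'montage');        % original and smoothed image side by side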

1.3 Definitions, Acronyms and Abbreviations:
SRS: Software Requirement Specification
MATLAB: Matrix Laboratory

1.4 References:
www.wikipedia.org
Russ, John C., The Image Processing Handbook, 2nd ed., CRC Press, 1995.
Jähne, Bernd, Digital Image Processing: Concepts, Algorithms, and Scientific Applications, 2nd ed., Springer-Verlag.

1.5 Overview: The SRS will provide a detailed description of visual feature extraction in Digital image processing.

2. General Description
2.1 Product Perspective: In a typical image understanding task such as object identification, an essential step is to segment an image into different regions corresponding to different objects in the scene. Edge detection is a term in image processing and computer vision. Corner detection is an essential part of low-level image processing and computer vision. Since information about a shape is concentrated at the corners, and corners can be considered descriptive primitives in shape representation and image interpretation, corner detection is useful in many vision applications.

2.2 Product Functions: Visual feature extraction includes the following modules: edge detection and corner detection.

EDGE DETECTION
Edge detection is a well-developed field in its own right within image processing. Region boundaries and edges are closely related, since there is often a sharp adjustment in intensity at region boundaries. Edge detection techniques have therefore been used as the basis of another segmentation technique.

The edges identified by edge detection are often disconnected. To segment an object from an image, however, one needs closed region boundaries; the desired edges are the boundaries between such objects. Segmentation methods can also be applied to the edges obtained from edge detectors. Lindeberg and Li developed an integrated method that segments edges into straight and curved edge segments for parts-based object recognition, based on a minimum description length (MDL) criterion that is optimized by a split-and-merge-like method, with candidate breakpoints obtained from complementary junction cues to obtain more likely points at which to consider partitions into different segments.

CORNER DETECTION
The terms corners and interest points are used somewhat interchangeably and refer to point-like features in an image which have a local two-dimensional structure. The name "corner" arose because early algorithms first performed edge detection and then analyzed the edges to find rapid changes in direction (corners). These algorithms were later developed so that explicit edge detection was no longer required, for instance by looking for high levels of curvature in the image gradient. It was then noticed that the so-called corners were also being detected on parts of the image which were not corners in the traditional sense (for instance, a small bright spot on a dark background may be detected). Such points are frequently known as interest points, but the term "corner" is used by tradition. A minimal corner detection sketch is given at the end of this subsection.

2.3 User Characteristics: The user must select and upload an image for the extraction of features.

2.4 General Constraints: The software is developed to work on the Windows platform. The system must respond in a reasonable time.
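For the corner detection module described in 2.2, a minimal MATLAB sketch (assuming the Image Processing Toolbox's corner function, which provides Harris and minimum-eigenvalue detectors; the file name is hypothetical) could look like:

% Detect corners (interest points) using the Harris detector
I = imread('input.jpg');                 % hypothetical input image
G = rgb2gray(I);                         % corner() expects a grey-scale image
C = corner(G, 'Harris', 100);            % up to 100 strongest corners, one [x y] per row

% Overlay the detected corners on the image
imshow(G); hold on;
plot(C(:,1), C(:,2), 'r*');              % mark corner locations with red asterisks
hold off;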

2.5 Assumptions and Dependencies: The users have sufficient knowledge of computers. The users should have a working knowledge of English.

3. Specific Requirements
This section describes all the functional requirements in detail.

3.1 Functional Requirements:

EDGE DETECTION
An edge in an image is a boundary or contour at which a significant change occurs in some physical aspect of the image, such as the surface reflectance, the illumination, or the distances of the visible surfaces from the viewer. Changes in physical aspects manifest themselves in a variety of ways, including changes in intensity, color, and texture. Detecting edges is useful in a number of contexts. For example, in a typical image understanding task such as object identification, an essential step is to segment an image into different regions corresponding to different objects in the scene; edge detection is the first step in image segmentation. Edge detection is a term in image processing and computer vision that refers to algorithms which aim at identifying points in a digital image at which the image brightness changes abruptly or, more formally, has discontinuities; in simple terms, where there is a jump in intensity from one pixel to the next. There are many ways to perform edge detection; however, the majority of methods may be grouped into two categories (a minimal sketch of the gradient category is given below):

Gradient: The gradient method detects the edges by looking for the maximum and minimum in the first derivative of the image.

Laplacian: The Laplacian method searches for zero crossings in the second derivative of the image to find edges.

An edge has the one-dimensional shape of a ramp, so calculating the derivative of the image can highlight its location.
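As a minimal sketch of the gradient category (assuming the Image Processing Toolbox; the file name is hypothetical), the following MATLAB fragment computes the first-derivative gradient magnitude with the Sobel operator and thresholds it into a binary edge map:

% Gradient-based edge detection: a large first derivative indicates an edge
I = imread('input.jpg');                    % hypothetical input image
G = rgb2gray(I);

[Gmag, Gdir] = imgradient(G, 'sobel');      % gradient magnitude and direction
BW = edge(G, 'sobel');                      % thresholded Sobel edge map

imshowpair(mat2gray(Gmag), BW, 'montage');  % gradient magnitude next to binary edges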

REVIEW OF EDGE DETECTORS
A. The Marr-Hildreth Edge Detector
The Marr-Hildreth edge detector was a very popular edge operator before Canny released his paper. It uses the Laplacian to take the second derivative of a Gaussian-smoothed image. The idea is that if there is a step change in the intensity of the image, it will be represented in the second derivative by a zero crossing.
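In MATLAB (Image Processing Toolbox assumed), the Marr-Hildreth approach corresponds to the 'log' (Laplacian of Gaussian) method of the edge function; the sketch below is illustrative, with a hypothetical file name and an arbitrary smoothing scale:

% Marr-Hildreth style edge detection: Laplacian of Gaussian, zero crossings
I = imread('input.jpg');            % hypothetical input image
G = rgb2gray(I);

sigma = 2;                          % standard deviation of the Gaussian smoothing
BW = edge(G, 'log', [], sigma);     % [] lets MATLAB choose the zero-crossing threshold

imshow(BW);                         % binary edge map from the zero crossings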

B. The Canny Edge Detector
The Canny edge detector is widely considered to be the standard edge detection algorithm in the industry. It was first described by John Canny in his Master's thesis at MIT in 1983 and still outperforms many of the newer algorithms that have been developed. Canny saw edge detection as a signal processing optimization problem, so he developed an objective function to be optimized. The solution to this problem was a rather complex exponential function, but Canny found several ways to approximate and optimize the edge-searching problem.

C. The Local Threshold and Boolean Function Based Edge Detection
This edge detector is fundamentally different from many of the modern edge detectors derived from Canny's original. It does not rely on the gradient or on Gaussian smoothing. It takes advantage of both local and global thresholding to find edges. Unlike other edge detectors, it converts a window of pixels into a binary pattern based on a local threshold, and then applies masks to determine whether an edge exists at a certain point or not. By calculating the threshold on a per-pixel basis, the edge detector should be less sensitive to variations in lighting throughout the picture. It does not rely on blurring to reduce noise in the image; it instead looks at the variance on a local level.
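For the Canny detector described in B above, MATLAB's edge function provides a 'canny' method (Image Processing Toolbox assumed; the file name and parameter values are hypothetical):

% Canny edge detection: Gaussian smoothing, gradient computation, non-maximum
% suppression and hysteresis thresholding are all handled internally by edge()
I = imread('input.jpg');                  % hypothetical input image
G = rgb2gray(I);

BW = edge(G, 'canny', [0.05 0.15], 1.4);  % [low high] hysteresis thresholds, Gaussian sigma

imshow(BW);                               % binary Canny edge map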

D. Color Edge Detection Using Euclidean Distance and Vector Angle
Most edge detectors work on the grayscale representation of the image. This cuts down the amount of data to be processed (one channel instead of three), but some information about the scene is lost. By including the color component of the image, the edge detector should be able to detect edges in regions with high color variation but low intensity variation. This edge detector uses two operators: Euclidean distance and vector angle. The Euclidean distance is a good operator for finding edges based on intensity, and the vector angle is a good operator for finding edges based on hue and saturation. The detector applies both operators to the RGB color space of an image and then combines the results from each based on the amount of color in a region.

E. Color Edge Detection Using the Canny Operator
Another approach to edge detection using color information is simply to extend a traditional intensity-based edge detector into the color space. This method seeks to take advantage of the known strengths of the traditional edge detector and tries to overcome its weaknesses by providing more information in the form of three color channels rather than a single intensity channel. As the Canny edge detector is the current standard for intensity-based edge detection, it is logical to use this operator as the basis for color edge detection. A per-channel sketch is given at the end of this subsection.

F. Depth Edge Detection Using Multi-Flash Imaging
This is another edge detector following the principle that using more data in the edge detection process should result in better detection of edges. However, in this case, rather than merely extending from one channel of intensity to three channels of color, this edge detector makes use of multiple different images. The approach is based on taking successive photos of a scene, each with a different light source close to and around the camera's center of projection. The locations of the shadows abutting depth discontinuities are used as a robust cue to create a depth edge map in both static and dynamic scenes. The idea is that rather than using complicated mathematical techniques to try to extract edges from existing photographs, we should change the way we take photographs in general.
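As a simplified sketch of the per-channel idea in E (not the full method described in D; Image Processing Toolbox assumed, file name hypothetical), the fragment below runs Canny on each RGB channel and combines the binary edge maps with a logical OR:

% Color edge detection sketch: Canny on each RGB channel, then combine
I = imread('input.jpg');             % hypothetical color input image

BWr = edge(I(:,:,1), 'canny');       % red channel edges
BWg = edge(I(:,:,2), 'canny');       % green channel edges
BWb = edge(I(:,:,3), 'canny');       % blue channel edges

BW = BWr | BWg | BWb;                % a pixel is an edge if any channel says so
imshow(BW);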

3.2 Use Case Diagram
The use case diagram models the following user interactions with the system: Log In, Select Image, Process Image, and Detect Edge.

3.3 Non-Functional Requirements:
3.3.1 Usability: The system provides a simple user interface and is user friendly and self-explanatory.
3.3.2 Reliability: The system has to be very reliable because of the importance of the data and the damage that incorrect or incomplete data can do.
3.4 Interfaces:

3.4.1 User Interfaces: The system will use MATLAB as its platform. The user interface of the system shall be designed in MATLAB.

3.4.2 Hardware Interfaces:
Operating System: Windows Vista/7/XP
Processor: Core i3 or higher
RAM: 256 MB or more
Platform: MATLAB

3.4.3 Software Interfaces: A firewall will be used with the server to prevent unauthorized access to the system.

4. Analysis Models
4.1 Data Flow Diagram: The data flow diagram comprises the following elements: an image file as input, an edge detection process carried out in MATLAB, and a display of the detected edges as output.

5. Conclusion: In this project, we detect the edges and corners of a digital image. Different methods are used for edge detection and corner detection; these are important tasks in visual feature extraction.
