
SEMINAR REPORT ON

DIGITAL IMAGE PROCESSING

Submitted to: Mrs. Dhaarna Arora, Mr. Abhishek Sharma

Submitted by: Kanupriya Choudhary (EC 4th year)

(Electronics & Communication Engineering Department)

INTRODUCTION
An image may be defined as a 2-D function f(x,y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the intensity or gray level of the image at that point. When x, y, and the intensity values of f are all finite, discrete quantities, we call the image a digital image. A digital image is composed of a finite number of elements, called pixels, each of which has a particular location and value.

In digital imaging, a pixel (or picture element) is a single point in an image. The pixel is the smallest addressable screen element; it is the smallest unit of picture that can be controlled. Each pixel has its own address, and the address of a pixel corresponds to its coordinates. Pixels are normally arranged in a two-dimensional grid and are often represented using dots or squares. Each pixel is a sample of the original image; more samples typically provide a more accurate representation of the original, and the intensity of each pixel is variable. The word pixel is a contraction of pix ("pictures") and el ("element"). The pixels are so close together that they appear connected. The number of bits used to represent each pixel determines how many colors or shades of gray can be displayed. For example, in 8-bit color mode, the monitor uses 8 bits per pixel, making it possible to display 2 to the 8th power (256) different colors or shades of gray.
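As a minimal sketch (assuming NumPy is available; the pixel values below are arbitrary illustrations), the snippet builds a tiny 8-bit image array and shows how the bit depth fixes the number of representable gray levels:

import numpy as np

# A tiny 4x4 digital image: each entry is one pixel, addressed by its
# (row, column) coordinates. With an 8-bit unsigned type, every pixel
# can take one of 2**8 = 256 gray levels (0 = black, 255 = white).
image = np.array([[  0,  64, 128, 255],
                  [ 32,  96, 160, 224],
                  [ 16,  80, 144, 208],
                  [  8,  72, 136, 200]], dtype=np.uint8)

bits_per_pixel = image.dtype.itemsize * 8
print("pixel at (1, 2):", image[1, 2])                 # a single sample of the scene
print("gray levels available:", 2 ** bits_per_pixel)   # 256 for an 8-bit image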

IMAGE SAMPLING AND QUANTIZATION
Before an image can be processed, it must be converted into digital form. Digitization involves sampling the image and quantizing the sampled values. An image may be continuous with respect to the x- and y-coordinates, and also in amplitude.

To convert it into digital form, we have to sample the function in both coordinates and in amplitude. Digitizing the coordinate values is called sampling; digitizing the amplitude values is called quantization. The 1-D function in fig (b) is a plot of the amplitude (intensity level) values of the continuous image along the line segment AB in fig (a); the random variations are due to image noise. To sample this function, we take equally spaced samples along line AB, as shown in fig (c). The samples are shown as small white squares superimposed on the function, and the set of these discrete locations gives the sampled function. In order to form a digital function, the intensity values must also be converted (quantized) into discrete quantities. The right side of fig (c) shows the intensity scale divided into eight discrete intervals, ranging from black to white. The vertical tick marks indicate the specific value assigned to each of the eight intensity intervals. The continuous intensity levels are quantized by assigning one of the eight values to each sample, depending on the vertical proximity of the sample to a tick mark. The digital samples resulting from both sampling and quantization are shown in fig (d). Starting at the top of the image and carrying out this procedure line by line produces a 2-D digital image.
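The following sketch (assuming NumPy; the sinusoidal profile is only a stand-in for the intensity along line AB) illustrates sampling a 1-D intensity profile at equally spaced points and then quantizing each sample to the nearest of eight discrete levels:

import numpy as np

# "Continuous" intensity profile along a scan line, approximated densely.
x_fine = np.linspace(0.0, 1.0, 1000)
profile = 0.5 + 0.4 * np.sin(2 * np.pi * 3 * x_fine)   # stand-in for f along AB

# Sampling: keep equally spaced samples of the profile.
num_samples = 32
sample_idx = np.linspace(0, len(x_fine) - 1, num_samples).astype(int)
samples = profile[sample_idx]

# Quantization: map each sample to the nearest of 8 discrete intensity levels.
levels = np.linspace(profile.min(), profile.max(), 8)
quantized = levels[np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)]

print(np.round(quantized, 2))   # the digital (sampled + quantized) scan line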

FUNDAMENTAL STEPS
Image sensing and acquisition-Most of the images in which we are interested are generated by a combination of an illumination source and the reflection or absorption of energy from that source by the elements of the scene being imaged. Incoming energy is transformed into a voltage by the combination of input electrical power and a sensor material that is responsive to the particular type of energy being detected. The output voltage waveform is the response of the sensor(s), and a digital quantity is obtained from each sensor by digitizing its response.

Image acquisition using a single sensor-Perhaps the most familiar sensor of this type is the photodiode, which is constructed of silicon materials and whose output voltage waveform is proportional to light. The use of a filter in front of a sensor improves selectivity. For example, a green (pass) filter in front of a light sensor favors light in the green band, so the sensor output is stronger for green light than for the other components of the visible spectrum. In order to generate a 2-D image using a single sensor, there have to be relative displacements in both the x- and y-directions between the sensor and the area to be imaged. The figure shows an arrangement used in high-precision scanning, where a film negative is mounted on a drum whose mechanical rotation can be controlled with high precision. This method is an inexpensive (but slow) way to obtain high-resolution images.

Image enhancement-It is the process of manipulating an image so that the result is more suitable than the original for a specific application. There is no general theory of image enhancement; when an image is processed for visual interpretation, the viewer is the ultimate judge of how well a particular method works. Using Digital Processing to Change Image Contrast-Here we use a very simple image to show how digital processing can change image contrast. The image consists of a background area and a small square object in the center. In the low-contrast image on the left, the background area has pixel values of 40 and the object in the center has pixel values of 30. The numerical contrast (the object relative to the background) is the difference (40 - 30 = 10). Look-up tables (LUTs) are data stored in the computer that are used to substitute new values for each pixel during processing. In this example we keep it simple and work with an image that has only two pixel values, 40 and 30. The processing uses a LUT that substitutes 90 for 40 and 10 for 30. The effect is to increase the image contrast (90 - 10 = 80). As we are about to discover, it is usually possible for the user to select from a variety of LUTs, each one designed to produce specific contrast characteristics.
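A minimal sketch of this LUT substitution (assuming NumPy; the 8x8 array is a stand-in for the background-plus-square example):

import numpy as np

# Low-contrast image: background = 40, central object = 30 (contrast 40 - 30 = 10).
low_contrast = np.full((8, 8), 40, dtype=np.uint8)
low_contrast[3:5, 3:5] = 30

# Build a 256-entry LUT that maps 40 -> 90 and 30 -> 10, as in the example.
lut = np.arange(256, dtype=np.uint8)
lut[40] = 90
lut[30] = 10

high_contrast = lut[low_contrast]                   # LUT applied to every pixel
print(high_contrast.max() - high_contrast.min())    # new contrast: 90 - 10 = 80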

Digital Image Windowing-The ability to window is a valuable feature of all digital images. Windowing is the process of selecting some segment of the total pixel value range (the wide dynamic range of the receptors) and then displaying the pixel values within that segment over the full brightness (shades of gray) range from white to black. An important point: contrast will be visible only for the pixel values that fall within the selected window. All pixel values that are either below or above the window will be rendered all white or all black and display no contrast. The person controlling the display can adjust both the center and the width of the window; the combination of these two parameters determines the range of pixel values that will be displayed with contrast in the image.
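A minimal sketch of the windowing operation (assuming NumPy; the 12-bit pixel values and the chosen window center and width are hypothetical):

import numpy as np

def apply_window(pixels, center, width):
    # Map pixel values inside the window [center - width/2, center + width/2]
    # onto the full 0-255 display range; values below the window saturate to 0
    # and values above it saturate to 255, so they show no contrast.
    low = center - width / 2.0
    high = center + width / 2.0
    scaled = (pixels.astype(float) - low) / (high - low) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

raw = np.array([100, 1800, 2000, 2200, 4000])        # hypothetical 12-bit values
print(apply_window(raw, center=2000, width=400))     # -> [  0   0 127 255 255]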

Effect of Changing the Window Level-One of the advantages of windowing is that it makes it possible to display and enhance the contrast in selected segments of the total pixel value range. This can be compared with the limitations of images displayed on film, where the full range of exposure is displayed in one image and cannot be changed. With windowing we can create many displayed images, each one focusing on a specific range of pixel values. When the window is set to cover the lower segment of the total pixel value range, we see good contrast in the lighter areas, such as the mediastinum; setting the window to the higher segment produces good contrast in the darker areas, such as the lungs.

SPATIAL FILTERING
A characteristic of remotely sensed images is a parameter called spatial frequency, defined as the number of changes in brightness value per unit distance for any particular part of an image. If there are very few changes in brightness value over a given area of an image, this is referred to as a low-frequency area. Conversely, if the brightness value changes dramatically over short distances, the area is one of high frequency. Spatial filtering is the process of dividing the image into its constituent spatial frequencies and selectively altering certain spatial frequencies to emphasize some image features. This technique increases the analyst's ability to discriminate detail. The three types of spatial filters used in remote sensing data processing are low-pass filters, band-pass filters, and high-pass filters.
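As an illustrative sketch (assuming NumPy and SciPy are available; the kernels and the random stand-in image are just examples), a low-pass filter can be realized as an averaging convolution and a high-pass filter as an edge-emphasizing convolution:

import numpy as np
from scipy.ndimage import convolve

# Averaging kernel: smooths the image, keeping low spatial frequencies.
low_pass = np.full((3, 3), 1.0 / 9.0)

# Kernel that emphasizes rapid brightness changes (high spatial frequencies).
high_pass = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)

image = np.random.randint(0, 256, size=(64, 64)).astype(float)   # stand-in image
smoothed = convolve(image, low_pass, mode="reflect")    # low-frequency content
edges = convolve(image, high_pass, mode="reflect")      # high-frequency content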

Image restoration-It is an area that also deals with improving the appearance of an image. However, unlike image enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Noise in an image-Periodic noise in an image typically arises from electrical or electromechanical interference during image acquisition. It can be reduced significantly via frequency-domain filtering. The noise is sinusoidal, with various frequencies. The Fourier transform of a pure sinusoid is a pair of conjugate impulses located at the conjugate frequencies of the sine wave. Thus, if the amplitude of a sine wave is strong enough, we would expect to see in the spectrum of the image a pair of impulses for each sine wave in the image. Restoration in the presence of noise-When the only degradation present is noise, g(x,y) = f(x,y) + n(x,y) and G(u,v) = F(u,v) + N(u,v). The noise terms are unknown, so subtracting them from g(x,y) and G(u,v) is not a realistic option. In the case of periodic noise, however, it is usually possible to estimate N(u,v) from the spectrum of G(u,v); N(u,v) can then be subtracted from G(u,v) to obtain an estimate of the original image.
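The sketch below (assuming NumPy; the noise frequency, amplitude, and image size are hypothetical) illustrates the idea for a single sinusoidal component: the sinusoid appears as a conjugate pair of impulses in the spectrum of g(x,y), and zeroing (notching) those impulses suppresses the periodic noise:

import numpy as np

rows, cols = 128, 128
y, x = np.mgrid[0:rows, 0:cols]
clean = np.zeros((rows, cols))                       # stand-in for f(x, y)
noise = 20 * np.sin(2 * np.pi * 16 * x / cols)       # periodic noise n(x, y)
g = clean + noise                                    # degraded image g = f + n

G = np.fft.fftshift(np.fft.fft2(g))                  # centered spectrum G(u, v)
cy, cx = rows // 2, cols // 2
G[cy, cx - 16] = 0                                   # notch the conjugate impulse pair
G[cy, cx + 16] = 0                                   # produced by the sinusoid

restored = np.real(np.fft.ifft2(np.fft.ifftshift(G)))   # estimate of f(x, y)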

Image compression-The term data compression refers to the process of reducing the amount of data required to represent a given quantity of information. It deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. Compression is most familiar to users in the form of image file extensions, such as the jpg extension used by the JPEG (Joint Photographic Experts Group) image compression standard.

Because various amounts of data can be used to represent the same amount of information, representations that contain irrelevant or repeated information are said to contain redundant data. If we let b and b' denote the number of bits (or information-carrying units) in two representations of the same information, the relative data redundancy R of the representation with b bits is

R = 1 - 1/C

where C, commonly known as the compression ratio, is defined as

C = b/b'
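As a small worked example (the image size and bit rates below are hypothetical):

# An image coded with 8 bits per pixel uncompressed and an average of
# 2 bits per pixel after compression.
b = 8 * 512 * 512          # bits in the original representation
b_prime = 2 * 512 * 512    # bits in the compressed representation

C = b / b_prime            # compression ratio, C = b / b'
R = 1 - 1 / C              # relative data redundancy, R = 1 - 1/C
print(C, R)                # 4.0 and 0.75: 75% of the original data is redundant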

Two-dimensional intensity arrays suffer from three principal types of data redundancy that can be identified and exploited:

1. Coding redundancy-A code is a system of symbols (letters, numbers, bits, and the like) used to represent a body of information or set of events. Each piece of information or event is assigned a sequence of code symbols, called a code word. The number of symbols in each code word is its length. The 8-bit codes that are used to represent the intensities in most 2-D intensity arrays contain more bits than are needed to represent those intensities.

2. Spatial and temporal redundancy-Because the pixels of most 2-D intensity arrays are correlated spatially (i.e., each pixel is similar to or dependent on neighboring pixels), information is unnecessarily replicated in the representations of the correlated pixels. In a video sequence, temporally correlated pixels (i.e., those similar to or dependent on pixels in nearby frames) also duplicate information.

3. Irrelevant information-Most 2-D intensity arrays contain information that is ignored by the human visual system and/or is extraneous to the intended use of the image. It is redundant in the sense that it is not used.

Some basic compression methods are Huffman coding, Shannon-Fano coding, Golomb coding, arithmetic coding, LZW coding, run-length coding, symbol-based coding, bit-plane coding, and block transform coding.

Lossless compression: In this technique, data is not altered in the process of compression or decompression; decompression generates an exact replica of the original image. Text compression is a good example: spreadsheets and word-processor files usually contain repeated sequences of characters, and by reducing repeated characters to a count we can reduce the number of bits required. Grayscale images also contain repetitive information, and this repetitiveness in graphic images and sound allows bits to be replaced by codes. In many color images, however, adjacent pixels have different color values; such images do not have sufficient repetitiveness to be compressed this way, and the technique is not applicable. Lossless compression techniques have been able to achieve reductions in size in the range of 1/10 to 1/50 of the original uncompressed size. Lossy compression: It is used for compressing audio, grayscale or color images, and video objects. Here, compression results in a loss of information. When a video image is decompressed, the loss of data in one frame will not be perceived by the eye; even if several bits are missing, the information is still perceived in an acceptable manner, as the eye fills in gaps in the shading gradient. A minimal sketch of the lossless idea is given after this paragraph.
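Returning to the lossless case, the run-length coding sketch below (plain Python; the example pixel row is hypothetical) replaces runs of repeated pixel values with (value, count) pairs, and decoding restores the data exactly:

def run_length_encode(pixels):
    # Replace each run of repeated values with a [value, count] pair.
    encoded = []
    for value in pixels:
        if encoded and encoded[-1][0] == value:
            encoded[-1][1] += 1
        else:
            encoded.append([value, 1])
    return encoded

def run_length_decode(encoded):
    # Expand each [value, count] pair back into the original run.
    return [value for value, count in encoded for _ in range(count)]

row = [40, 40, 40, 40, 30, 30, 40, 40, 40]
packed = run_length_encode(row)            # [[40, 4], [30, 2], [40, 3]]
assert run_length_decode(packed) == row    # exact replica: no information lost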

APPLICATIONS

X-ray imaging
Satellite imaging
MRI
Checking of goods (circuit board controllers, packaged pills, air-bubble detection, etc.)
Thumbprint identification
Paper currency checking
Automated license plate reading
Study of microscopic images
Digital cameras
Scanners
Ultrasound imaging

CONCLUSIONS

Digital image processing of satellite data can be grouped primarily into three categories: image rectification and restoration, enhancement, and information extraction. Image rectification is the pre-processing of satellite data for geometric and radiometric corrections. Enhancement is applied to image data in order to display the data effectively for subsequent visual interpretation. Information extraction is based on digital classification and is used for generating a digital thematic map.

REFERENCES
R.C. Gonzalez and R.E. Woods, Digital Image Processing, 3rd Ed., Prentice-Hall, 2008.
http://www.sprawls.org/resources/DIGPROCESS/module.htm#1
