
IMAGE PROCESSING

ABSTRACT
In the era of multimedia and the Internet, image processing is a key technology. Image processing is any form of information processing for which the input is an image, such as a photograph or a frame of video; the output is not necessarily an image, and may instead be, for instance, a set of features of the image. Image processing is of two types: analog image processing and digital image processing. Digital image processing has the same advantages over analog image processing as digital signal processing has over analog signal processing: it allows a much wider range of algorithms to be applied to the input data, and it can avoid problems such as the build-up of noise and signal distortion during processing. The cost of analog image processing is also fairly high compared to digital image processing. An analog image can be converted affordably to a digital image, which can then be processed in many more ways; the steps of this conversion, such as sampling, quantization, image acquisition, and image segmentation, are explained in this report. Image processing has very good scope in the signal-processing aspects of imaging systems and in image scanning, display, and printing. This includes theory, algorithms, and architectures for image coding, filtering, enhancement, restoration, segmentation, and motion estimation; image formation in tomography, radar, sonar, geophysics, astronomy, microscopy, and crystallography; and image scanning, digital half-toning and display, and color reproduction.

History
Many of the techniques of digital image processing, or digital picture processing as it was often called, were developed in the 1960s at the Jet Propulsion Laboratory, MIT, Bell Labs, the University of Maryland, and a few other places, with applications to satellite imagery, wirephoto standards conversion, medical imaging, videophone, character recognition, and photo enhancement. But the cost of processing was fairly high with the computing equipment of that era. In the 1970s, digital image processing proliferated when cheaper computers and dedicated hardware became available. Images could then be processed in real time for some dedicated problems such as television standards conversion. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized and compute-intensive operations. With the fast computers and signal processors available in the 2000s, digital image processing has become the most common form of image processing; it is generally used because it is not only the most versatile method but also the cheapest.

Introduction

Digital image processing is the use of computer algorithms to perform image processing on digital images. It has the same advantages over analog image processing as digital signal processing has over analog signal processing: it allows a much wider range of algorithms to be applied to the input data, and it can avoid problems such as the build-up of noise and signal distortion during processing. We will restrict ourselves to two-dimensional (2D) image processing, although most of the concepts and techniques to be described can be extended easily to three or more dimensions. We begin with certain basic definitions. An image defined in the "real world" is considered to be a function of two real variables, for example a(x, y), with a as the amplitude (e.g. brightness) of the image at the real coordinate position (x, y).

An image may be considered to contain sub-images, sometimes referred to as regions-of-interest (ROIs), or simply regions. This concept reflects the fact that images frequently contain collections of objects, each of which can be the basis for a region. In a sophisticated image processing system it should be possible to apply specific image processing operations to selected regions. Thus one part of an image (a region) might be processed to suppress motion blur while another part might be processed to improve color rendition. The amplitudes of a given image will almost always be either real numbers or integers. The latter is usually the result of a quantization process that converts a continuous range (say, between 0 and 100%) to a discrete number of levels. In certain image-forming processes, however, the signal may involve photon counting, which implies that the amplitude is inherently quantized. In other image-forming procedures, such as magnetic resonance imaging, the direct physical measurement yields a complex number in the form of a real magnitude and a real phase.

Image
It is a 2D function f(x, y), where x and y are spatial coordinates and f (the amplitude of the function) is the intensity of the image at (x, y). Thus, an image is a two-dimensional function of the coordinates x and y.

Digital Image
If x, y, and the amplitude of f are all discrete quantities, then the image is called a digital image. A digital image is a collection of elements called pixels, where each pixel has a specific coordinate value and a particular gray level. Processing such an image using a digital computer is called digital image processing. Examples include fingerprint scanning, handwriting recognition systems, face recognition systems, and the biometric scanning used for authentication in modern pen drives. The effect of digitization is shown in Figure 1. The 2D continuous image a(x, y) is divided into N rows and M columns. The intersection of a row and a column is termed a pixel. The value assigned to the integer coordinates [m, n], with m = 0, 1, 2, ..., M-1 and n = 0, 1, 2, ..., N-1, is a[m, n]. In fact, in most cases a(x, y) -- which we might consider to be the physical signal that impinges on the face of a 2D sensor -- is actually a function of many variables including depth (z), color (wavelength), and time (t). Unless otherwise stated, we will consider the case of 2D, monochromatic, static images in this report.

Fig 1: Digitization of a continuous image. The pixel at coordinates [m=10, n=3] has the integer brightness value 110.

The image shown in Figure 1 has been divided into N = 16 rows and M = 16 columns. The value assigned to each pixel is the average brightness within the pixel, rounded to the nearest integer value. The process of representing the amplitude of the 2D signal at a given coordinate as an integer value with L different gray levels is usually referred to as amplitude quantization, or simply quantization.

Image processing is the analysis and manipulation of images with a computer. It generally involves three steps:
1. Import an image with an optical scanner or directly through digital photography.
2. Manipulate or analyze the image in some way. This stage can include image enhancement and data compression, or the image may be analyzed to find patterns that are not visible to the human eye. For example, meteorologists use image processing to analyze satellite photographs.
3. Output the result. The result might be the image altered in some way, or it might be a report based on analysis of the image.
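The three steps above can be sketched in a few lines of Python with NumPy. This is only an illustration: a synthetic gradient stands in for a scanned photograph, and the "output" is a one-line report rather than a saved file.

```python
import numpy as np

# Step 1: "import" an image. A synthetic 8-bit grayscale gradient
# stands in for a scanned photograph (an assumption for illustration).
image = np.tile(np.arange(256, dtype=np.uint8), (64, 1))

# Step 2: manipulate or analyze -- here a simple enhancement (the
# photographic negative) and a simple analysis (mean brightness).
negative = 255 - image
mean_brightness = image.mean()

# Step 3: output -- either the altered image or a report based on it.
report = f"mean brightness: {mean_brightness:.1f}"
print(report)
```

In a real pipeline, step 1 would read a file produced by a scanner or camera, and step 3 would write the processed image back to disk or a display.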

Converting from Analog image to Digital Image


The captured real image must be converted to digital form. This involves two processes: sampling and quantization.

Sampling and Quantization


A scan-line is chosen, and samples are taken at a fixed interval; the gray-level intensity at each sample point is recorded. This is called sampling. Sampling is the process of digitizing the coordinate values.

After sampling, quantization is done. The process of digitizing (approximating) the amplitude of f, f(x, y), is called quantization.
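Both steps can be sketched as follows. The continuous image is a made-up brightness function f(x, y) standing in for a real scene, and L = 4 gray levels are used purely for readability (256 is typical in practice).

```python
import numpy as np

# A stand-in for the continuous image f(x, y): brightness varies
# smoothly across the scene (an illustrative choice, not a real scene).
def f(x, y):
    return (x + y) / 2.0   # amplitude in [0, 1]

# Sampling: take values of f at fixed intervals along each scan-line,
# digitizing the coordinates into an M x N grid.
M, N = 8, 8
xs = np.linspace(0.0, 1.0, M)
ys = np.linspace(0.0, 1.0, N)
samples = np.array([[f(x, y) for x in xs] for y in ys])

# Quantization: approximate each sampled amplitude by one of L discrete
# gray levels by rounding to the nearest level.
L = 4
quantized = np.floor(samples * (L - 1) + 0.5).astype(int)
```

Sampling fixes *where* the image is measured; quantization fixes *which* discrete values those measurements may take. Together they turn f(x, y) into the integer array a[m, n] described earlier.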

Color and Grayscale Images


Gray-scale images have only a luminosity component (grayscale intensity levels). Color images have luminosity plus red, green, and blue channels, so we apply sampling and quantization to all four components. Throughout this subject, we will mostly be dealing with gray-scale images.
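A common way to obtain the luminosity component from the three color channels is a weighted sum; the ITU-R BT.601 weights used below are one standard choice, assumed here for illustration.

```python
import numpy as np

# A tiny 2x2 color image: each pixel has R, G, B values in [0, 255].
rgb = np.array([[[255,   0,   0], [  0, 255,   0]],
                [[  0,   0, 255], [255, 255, 255]]], dtype=np.float64)

# Luminosity as a weighted sum of the channels (ITU-R BT.601 weights).
weights = np.array([0.299, 0.587, 0.114])
gray = rgb @ weights   # shape (2, 2): one intensity value per pixel
```

Green receives the largest weight because the eye is most sensitive to it; a pure white pixel maps to the full intensity 255.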

Digital Image Processing (DIP) System


A digital image processing system consists of the following components. The image is acquired using image sensors and passed to specialized input hardware, which consists of a digitizer for digitizing the image and an ALU-like device that can perform basic operations such as addition, subtraction, logical AND, and logical OR on images. Such a system has a central processor (CPU) to perform processing and mass storage to store the images. Display devices are used to show the output on screen, and hard-copy devices such as printers are used to obtain a physical copy of the output. Image processing software such as MATLAB is used to give instructions to the computer. Storage can be temporary, online, or archival. Temporary storage takes the form of RAM or a frame buffer. A frame buffer is a special memory chip used to store images; it is preferred over ordinary RAM because images can be read from or written to it at TV rate (also called the refresh rate, about 30 Hz), which is very difficult with ordinary RAM. This is done to avoid flickering on the screen. Some basic operations, such as scrolling and panning, are also built in. Display devices are of two types: random scan and raster scan. Raster scan follows a zigzag pattern over the screen, whereas random scan follows the boundary of the image. Random scan gives higher resolution, whereas raster scan is much more realistic.

Digital Image Processing Steps


First the image is captured and converted into a digital image. This requires two types of devices: sensors and digitizers. This step is called image acquisition. Once we have acquired the image, we may wish to pre-process it before any further processing. For example, while shooting a picture with a camera, we sometimes use certain types of filters to give a particular look and feel to the image; using a filter in front of the lens is thus a type of pre-processing. Pre-processing is done mainly to enhance image quality. Image enhancement can be done in the spatial domain as well as the frequency domain.
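One of the simplest spatial-domain enhancements is a linear contrast stretch: the occupied range of gray levels is mapped onto the full 0-255 scale. A minimal sketch, using a made-up low-contrast image:

```python
import numpy as np

# A low-contrast image: gray levels occupy only the range 100..150.
image = np.array([[100, 110, 120],
                  [130, 140, 150]], dtype=np.float64)

# Linear contrast stretch: map the occupied range [lo, hi] onto the
# full 0..255 scale, spreading the gray levels apart.
lo, hi = image.min(), image.max()
stretched = (image - lo) / (hi - lo) * 255.0
```

Frequency-domain enhancement would instead transform the image (e.g. with a 2D FFT), modify selected frequency components, and transform back.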

Image acquisition
Until the early 1990s, most image acquisition in video microscopy applications was done with an analog video camera, often a simple closed-circuit TV camera. Today, acquisition is usually done using a CCD camera mounted in the optical path of the microscope. The camera may be full color or monochrome. Very often, very high-resolution cameras are employed to gain as much direct information as possible, and cryogenic cooling is common, to minimize noise. Digital cameras used for this application often provide pixel intensity data at a resolution of 12-16 bits, much higher than is used in consumer imaging products. Ironically, in recent years much effort has been put into acquiring data at video rates or higher (25-30 frames per second or more). What was once easy with off-the-shelf video cameras now requires special high-speed electronics to handle the vast digital data bandwidth.

Image Segmentation
Image segmentation is the process of dividing an image into its constituent parts. The process of extracting the characteristic features, also called descriptors, is called description; the features are represented using various representation schemes. Recognition is the process of assigning labels to individual objects; after recognizing the individual objects, we assign meaning to the recognized collection of objects, which is called interpretation. This is the output of the system. In computer vision, segmentation refers to the process of partitioning a digital image into multiple regions (sets of pixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images.

The result of image segmentation is a set of regions that collectively cover the entire image, or a set of contours extracted from the image (see edge detection). Each of the pixels in a region is similar with respect to some characteristic or computed property, such as color, intensity, or texture. Adjacent regions are significantly different with respect to the same characteristic(s).
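The simplest segmentation scheme based on such a characteristic is global thresholding on intensity: pixels above a threshold form one region (the object), the rest form the background. A minimal sketch, with a hand-picked threshold (in practice the threshold is often chosen automatically, e.g. by Otsu's method):

```python
import numpy as np

# A small grayscale image: bright "object" pixels on a dark background.
image = np.array([[ 10,  12, 200, 210],
                  [ 11, 220, 230,  13],
                  [ 12,  14,  15,  11]])

# Global thresholding: every pixel is assigned to one of two regions
# according to whether its intensity exceeds the threshold.
threshold = 128          # chosen by inspection for this toy image
mask = image > threshold # True = object region, False = background
```

The two regions cover the entire image and are, by construction, similar within themselves and different from each other with respect to intensity.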

2D image techniques
Image processing for microscopy applications begins with fundamental techniques intended to reproduce as accurately as possible the information contained in the microscopic sample. This might include adjusting the brightness and contrast of the image, averaging images to reduce image noise, and correcting for illumination non-uniformities. Such processing involves only basic arithmetic operations between images (i.e. addition, subtraction, multiplication, and division). The vast majority of processing done on microscope images is of this nature. Another class of common 2D operations, called image convolutions, is often used to reduce or enhance image details. Such "blurring" and "sharpening" algorithms in most programs work by altering a pixel's value based on a weighted sum of that pixel and the surrounding pixels.
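The weighted-sum idea can be made concrete with a small convolution routine. This is a naive sketch (real programs use optimized library routines); the averaging and sharpening kernels below are the textbook examples of "blur" and "sharpen" weights.

```python
import numpy as np

def convolve2d(image, kernel):
    """Replace each pixel by a weighted sum of itself and its
    neighbours; the border is handled by replicating edge pixels."""
    k = kernel.shape[0] // 2
    padded = np.pad(image, k, mode="edge")
    out = np.zeros_like(image, dtype=np.float64)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + 2*k + 1, j:j + 2*k + 1]
            out[i, j] = np.sum(window * kernel)
    return out

# A single bright pixel on a dark background.
image = np.array([[0, 0, 0],
                  [0, 9, 0],
                  [0, 0, 0]], dtype=np.float64)

blur = np.full((3, 3), 1 / 9)                        # averaging: spreads detail out
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=np.float64) # emphasizes the centre pixel

blurred = convolve2d(image, blur)
sharpened = convolve2d(image, sharpen)
```

Blurring redistributes the bright pixel's value over its neighbourhood, while sharpening amplifies it relative to its (dark) neighbours.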

Analysis
Analysis of images will vary considerably according to application. Typical analysis includes determining where the edges of an object are, counting similar objects, calculating the area, perimeter length and other useful measurements of each object. A common approach is to create an image mask which only includes pixels that match certain criteria, and then perform simpler scanning operations on the resulting mask. It is also possible to label objects and track their motion over a series of frames in a video sequence.
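The mask-then-scan approach can be sketched as follows: build a binary mask of pixels matching a criterion, then count the separate objects in it with a simple flood fill. This is an illustrative implementation; analysis packages provide optimized connected-component labeling.

```python
import numpy as np

# Binary mask: pixels that match the criterion (here, intensity > 100).
image = np.array([[150,   0,   0, 180],
                  [160,   0,   0, 170],
                  [  0,   0,   0,   0]])
mask = image > 100

def count_objects(mask):
    """Count 4-connected regions of True pixels via flood fill."""
    seen = np.zeros_like(mask, dtype=bool)
    count = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and not seen[i, j]:
                count += 1                 # found a new object
                stack = [(i, j)]
                while stack:               # flood-fill that object
                    y, x = stack.pop()
                    if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                            and mask[y, x] and not seen[y, x]):
                        seen[y, x] = True
                        stack += [(y+1, x), (y-1, x), (y, x+1), (y, x-1)]
    return count

n_objects = count_objects(mask)
```

The same mask also yields areas directly (`mask.sum()` is the total object area in pixels), and tracking repeats this labeling frame by frame.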

Noise
Images acquired through modern sensors may be contaminated by a variety of noise sources. By noise we refer to stochastic variations, as opposed to deterministic distortions such as shading or lack of focus. We will assume for this section that we are dealing with images formed from light using modern electro-optics. In particular, we will assume the use of modern charge-coupled device (CCD) cameras, where photons produce electrons that are commonly referred to as photoelectrons. Nevertheless, most of the observations we shall make about noise and its various sources hold equally well for other imaging modalities. While modern technology has made it possible to reduce the noise levels associated with various electro-optical devices to almost negligible levels, one noise source can never be eliminated, and it thus forms the limiting case when all other noise sources are "eliminated".

Some of the practical applications of image processing are:

Medical imaging
- Locate tumors and other pathologies
- Measure tissue volumes
- Computer-guided surgery
- Diagnosis
- Treatment planning
- Study of anatomical structure

Other applications
- Locate objects in satellite images (roads, forests, etc.)
- Face recognition
- Automatic traffic-control systems
- Machine vision
- Newspaper industry and media
- Astronomy and space programs
- Medicine, in CAT and X-ray imaging
- Image enhancement, used by geographers to study pollution patterns
- In science, to process degraded images
- In archaeology, to restore blurred images of rare or lost artifacts

Conclusion

In recent years, image processing has been very widely used in medicine for locating minor tumors inside the body, and it is also applied to satellite images to identify roads, forests, and other features. Image processing has taken technology into a new era, still has much to achieve, and offers very good scope for us to explore and discover new things. In this way, image processing is helping technology reach greater heights in the future.

