
Q. Explain the process of formation of an image in the human eye?
Ans: The eye is an optical image-forming system.

Many parts of the eye described on the page about the anatomy of the eye play important roles in the formation of an image on the retina (the back surface of the eye, which consists of layers of cells whose function is to transmit to the brain information corresponding to the image formed on it). The following notes explain the basic ray diagram of image formation within the human eye:

1. Representation of an object: First consider the object, which is represented by a simple red arrow pointing upwards (left-hand side of the diagram). Most real objects have complicated shapes, textures, and so on. This arrow is used to represent a very simple object for which just two clearly defined points on the object are traced through the eye to the retina.

2. Light leaves the object, propagating in all directions: It is assumed for simplicity that this is a scattering object, meaning that after light in the area (which may be called "ambient light") reaches the object, it leaves the surface of the object travelling in a wide range of directions.

3. Some of the light leaving the object reaches the eye: Although the object is scattering light in all directions, only a small proportion of the light scattered from it reaches the eye. The longer pink and green lines with the arrows marked along them are called "rays". These represent the direction of travel of light. The pink rays indicate paths taken by light leaving the top point of the object (that eventually reaches the retina), while the green rays indicate paths taken by light leaving the lower point of the object (that eventually reaches the retina). Only two rays are shown leaving each point on the object; this simplification keeps the diagram clear. The two rays drawn in each case are the extreme rays, that is, those that only just get through the optical system of the eye. Together they represent a cone of light that propagates all the way through the system from the object to the image.

4. Light changes direction when it passes from the air into the eye: When light travelling away from the object, towards the eye, arrives at the eye, the first surface it reaches is the cornea. The ray diagram shows the rays changing direction when they pass through the cornea. This change in direction is due to refraction (i.e. the re-direction of light as it passes from one medium into another, different, medium). Refraction is covered in more detail on the next page. To describe this ray diagram it is sufficient to say that several structures in the eye contribute to image formation by re-directing the light passing through them in such a way as to improve the quality of the image formed on the retina.
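The bending of the rays at the cornea follows Snell's law, n_1 \sin\theta_1 = n_2 \sin\theta_2. Below is a minimal sketch of this calculation, assuming a refractive index of about 1.0 for air and about 1.376 for the cornea (a commonly quoted value); the function name and the sample angle are illustrative only.

import math

def refraction_angle(theta_incident_deg, n1=1.0, n2=1.376):
    # Snell's law: n1 * sin(theta1) = n2 * sin(theta2)
    sin_theta2 = n1 * math.sin(math.radians(theta_incident_deg)) / n2
    return math.degrees(math.asin(sin_theta2))

# A ray hitting the cornea at 30 degrees from the normal is bent
# towards the normal, to roughly 21.3 degrees:
print(refraction_angle(30.0))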
2. Q. Explain different linear methods for noise cleaning?
Ans: Noise reduction is the process of removing noise from a signal. Noise reduction techniques are conceptually very similar regardless of the signal being processed; however, a priori knowledge of the characteristics of an expected signal can mean that the implementations of these techniques vary greatly depending on the type of signal. All recording devices, both analogue and digital, have traits which make them susceptible to noise. Noise can be random or white noise with no coherence, or coherent noise introduced by the device's mechanism or processing algorithms. In electronic recording devices, a major form of noise is hiss caused by random electrons that, heavily influenced by heat, stray from their designated path. These stray electrons influence the voltage of the output signal and thus create detectable noise. In the case of photographic film and magnetic tape, noise (both visible and audible) is introduced due to the grain structure of the medium. In photographic film, the size of the grains in the film determines the film's sensitivity, more sensitive film having larger grains. In magnetic tape, the larger the grains of the magnetic particles (usually ferric oxide or magnetite), the more prone the medium is to noise.

One method to remove noise is to convolve the original image with a mask that represents a low-pass filter or smoothing operation. For example, the Gaussian mask comprises elements determined by a Gaussian function. This convolution brings the value of each pixel into closer harmony with the values of its neighbors. In general, a smoothing filter sets each pixel to the average value, or a weighted average, of itself and its nearby neighbors; the Gaussian filter is just one possible set of weights. Smoothing filters tend to blur an image, because pixel intensity values that are significantly higher or lower than the surrounding neighborhood "smear" across the area. Because of this blurring, linear filters are seldom used in practice for noise reduction; they are, however, often used as the basis for nonlinear noise reduction filters. A sketch of Gaussian smoothing is given below.

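As a concrete illustration of the smoothing operation just described, the following sketch builds a small Gaussian mask and convolves it with a noisy image. It assumes numpy and scipy are available; the kernel size, sigma, and test image are arbitrary choices for the example.

import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size=5, sigma=1.0):
    # Mask elements are samples of a 2-D Gaussian, normalized to sum to 1
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()

# A flat gray image corrupted by additive random noise
noisy = np.random.default_rng(0).normal(loc=128.0, scale=20.0, size=(64, 64))
smoothed = convolve(noisy, gaussian_kernel(5, 1.0), mode="nearest")
# Each output pixel becomes a weighted average of its 5x5 neighborhood,
# which suppresses the noise at the cost of blurring sharp edges.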
3. Q. Which are the two quantitative approaches used for the evaluation of image features?
Ans: The theory of histogram modification of continuous real-valued pictures is developed. It is shown that the transformation of gray levels taking a picture's histogram to a desired histogram is unique under the constraint that the transformation be monotonic increasing. Algorithms for implementing this solution on digital pictures are discussed. A gray-level transformation is useful for increasing visual contrast, but may destroy some of the information content. It is shown that solutions to the problem of minimizing the sum of the information loss and the histogram discrepancy are solutions to certain differential equations, which can be solved numerically.
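In the discrete case, the monotonic increasing gray-level transformation described above is commonly implemented by matching cumulative histograms. A minimal sketch, assuming 8-bit gray levels and numpy, is given below; histogram equalization appears as the special case where the desired histogram is flat.

import numpy as np

def match_histogram(image, desired_cdf):
    # Monotonic gray-level transform: remap each level so that the
    # image's cumulative histogram follows the desired one.
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = np.cumsum(hist) / image.size
    mapping = np.searchsorted(desired_cdf, cdf)   # monotonic increasing
    return mapping.astype(np.uint8)[image]

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
flat_cdf = np.linspace(0.0, 1.0, 256)   # flat histogram = equalization
equalized = match_histogram(img, flat_cdf)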
4. Q. Explain with a diagram the digital image restoration model?
Ans: Digital Image Restoration is a current research project at IMM led by Prof. Per Christian Hansen. Digital image restoration, in which a noisy, blurred image is restored on the basis of a mathematical model of the blurring process, is a well-known example of a 2-D deconvolution problem. A recent survey of this topic, including a discussion of many practical aspects, can be found in [1]. There are many sources of blur. Here we focus on atmospheric turbulence blur, which arises, e.g., in remote sensing and astronomical imaging due to long-term exposure through the atmosphere, where the turbulence in the atmosphere gives rise to random variations in the refractive index. For many practical purposes, this blurring can be modelled by a Gaussian point spread function, and the discretized problem is a linear system of equations whose coefficient matrix is a block Toeplitz matrix with Toeplitz blocks.

Discretizations of deconvolution problems are solved by regularization methods - such as those implemented in the Matlab package Regularization Tools - that seek to balance the noise suppression and the loss of details in the restored image. Unfortunately, classical regularization algorithms tend to produce smooth solutions, and as a consequence it is difficult to recover sharp edges in the image. We have developed a 2-D version [2] of a new algorithm [3] that is much better able to reconstruct the sharp edges that are typical in digital images. The algorithm, called PP-TSVD, is a modification of the truncated-SVD method and incorporates the solution of a linear l1-problem, and it includes a parameter k that controls the amount of noise reduction. The algorithm is implemented in Matlab and is available as the Matlab function pptsvd. The four images at the top of this page show various fundamental solutions that can be computed by means of the PP-TSVD algorithm. The underlying basis functions are delta functions, piecewise constant functions, piecewise linear functions, and piecewise second-degree polynomials, respectively. We are currently investigating the use of the PP-TSVD algorithm in such areas as astronomy and geophysics.
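PP-TSVD itself is distributed as Matlab code, but the truncated-SVD idea it modifies can be sketched briefly. The illustration below, in Python with numpy, builds a small Toeplitz blurring matrix from a 1-D Gaussian point spread function and solves Ax = b keeping only the k largest singular values; as in PP-TSVD, the truncation parameter k controls the amount of noise reduction. The sizes and values here are illustrative assumptions, not part of the original algorithm.

import numpy as np

def tsvd_solve(A, b, k):
    # Truncated SVD: discard the small singular values, which mostly
    # amplify noise, and reconstruct from the first k components.
    U, s, Vt = np.linalg.svd(A)
    coeffs = (U.T @ b)[:k] / s[:k]
    return Vt[:k].T @ coeffs

# Toeplitz blurring matrix from a 1-D Gaussian point spread function
n, sigma = 64, 2.0
i = np.arange(n)
A = np.exp(-((i[:, None] - i[None, :]) ** 2) / (2.0 * sigma**2))
A /= A.sum(axis=1, keepdims=True)

x_true = (i > n // 3).astype(float)     # signal with one sharp edge
b = A @ x_true + 1e-3 * np.random.default_rng(2).standard_normal(n)
x_restored = tsvd_solve(A, b, k=12)     # larger k keeps more detail (and noise)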

5. Q. Discuss orthogonal gradient generation for first-order derivative edge detection
Ans: First-Order Derivative Edge Detection
There are two fundamental methods for generating first-order derivative edge gradients. One method involves generation of gradients in two orthogonal directions in an image; the second utilizes a set of directional derivatives. Orthogonal gradient generation is described below.

Orthogonal Gradient Generation
An edge in a continuous-domain image F(x,y) can be detected by forming the continuous one-dimensional gradient G(x,y) along a line normal to the edge slope, which is at an angle θ with respect to the horizontal axis. If the gradient is sufficiently large (i.e., above some threshold value), an edge is deemed present. The gradient along the line normal to the edge slope can be computed in terms of the derivatives along orthogonal axes according to

G(x,y) = \frac{\partial F(x,y)}{\partial x} \cos\theta + \frac{\partial F(x,y)}{\partial y} \sin\theta

so that the gradient amplitude is

G(x,y) = \left[ \left( \frac{\partial F}{\partial x} \right)^2 + \left( \frac{\partial F}{\partial y} \right)^2 \right]^{1/2} \qquad (8.3a)

For computational efficiency, the gradient amplitude is sometimes approximated by the magnitude combination

G(x,y) \approx \left| \frac{\partial F}{\partial x} \right| + \left| \frac{\partial F}{\partial y} \right|

The orientation of the spatial gradient with respect to the row axis is

\theta(x,y) = \arctan\left( \frac{\partial F / \partial y}{\partial F / \partial x} \right)

The remaining issue for discrete-domain orthogonal gradient generation is to choose a good discrete approximation to the continuous differentials of Eq. 8.3a. The simplest method of discrete gradient generation is to form the running difference of pixels along rows and columns of the image. The row gradient is defined as

G_R(j,k) = F(j,k) - F(j,k-1)

and the column gradient is

G_C(j,k) = F(j,k) - F(j+1,k)

Diagonal edge gradients can be obtained by forming running differences of diagonal pairs of pixels. This is the basis of the Roberts cross-difference operator, which is defined in magnitude form as

G(j,k) = |F(j,k) - F(j+1,k+1)| + |F(j,k+1) - F(j+1,k)|

and in square-root form as

G(j,k) = \left[ \left( F(j,k) - F(j+1,k+1) \right)^2 + \left( F(j,k+1) - F(j+1,k) \right)^2 \right]^{1/2}

Prewitt has introduced a pixel edge gradient operator described by the pixel numbering

A0  A1  A2
A7  F   A3
A6  A5  A4

The Prewitt operator square-root edge gradient is defined as

G(j,k) = \left[ G_R(j,k)^2 + G_C(j,k)^2 \right]^{1/2}

with

G_R(j,k) = \frac{1}{K+2} \left[ (A_2 + K A_3 + A_4) - (A_0 + K A_7 + A_6) \right]

G_C(j,k) = \frac{1}{K+2} \left[ (A_0 + K A_1 + A_2) - (A_6 + K A_5 + A_4) \right]

where K = 1. In this formulation, the row and column gradients are normalized to provide unit-gain positive and negative weighted averages about a separated edge position. The Sobel operator edge detector differs from the Prewitt edge detector in that the values of the north, south, east and west pixels are doubled (i.e., K = 2). The motivation for this weighting is to give equal importance to each pixel in terms of its contribution to the spatial gradient.
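A short sketch of these operators in Python, assuming numpy and scipy, is given below; the 3x3 masks follow directly from the G_R and G_C formulas above, with K = 1 giving Prewitt and K = 2 giving Sobel.

import numpy as np
from scipy.ndimage import convolve

def edge_gradient(image, K=1):
    # K = 1: Prewitt operator; K = 2: Sobel operator
    gr_mask = np.array([[-1, 0, 1],
                        [-K, 0, K],
                        [-1, 0, 1]]) / (K + 2)
    gc_mask = np.array([[ 1,  K,  1],
                        [ 0,  0,  0],
                        [-1, -K, -1]]) / (K + 2)
    gr = convolve(image.astype(float), gr_mask)
    gc = convolve(image.astype(float), gc_mask)
    amplitude = np.sqrt(gr**2 + gc**2)    # discrete form of Eq. 8.3a
    orientation = np.arctan2(gc, gr)      # angle with respect to row axis
    return amplitude, orientation

# Edges are marked where the amplitude exceeds a chosen threshold
img = np.zeros((32, 32)); img[:, 16:] = 255.0
amp, ang = edge_gradient(img, K=2)
edges = amp > 100.0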
Second-Order Derivative Edge Detection
Second-order derivative edge detection techniques employ some form of spatial second-order differentiation to accentuate edges. An edge is marked if a significant spatial change occurs in the second derivative. We consider the Laplacian second-order derivative method. The edge Laplacian of an image function F(x,y) in the continuous domain is defined as

G(x,y) = -\nabla^2 F(x,y) \qquad (8.4a)

where the Laplacian is

\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}

The Laplacian G(x,y) is zero if F(x,y) is constant or changing linearly in amplitude. If the rate of change of F(x,y) is greater than linear, G(x,y) exhibits a sign change at the point of inflection of F(x,y). The zero crossing of G(x,y) indicates the presence of an edge. The negative sign in the definition of Eq. 8.4a is present so that the zero crossing of G(x,y) has a positive slope for an edge whose amplitude increases from left to right or bottom to top in an image.

Torre and Poggio have investigated the mathematical properties of the Laplacian of an image function. They have found that if F(x,y) meets certain smoothness constraints, the zero crossings of G(x,y) are closed curves. In the discrete domain, the simplest approximation to the continuous Laplacian is to compute the difference of slopes along each axis:

G(j,k) = \left[ F(j,k) - F(j,k-1) \right] - \left[ F(j,k+1) - F(j,k) \right] + \left[ F(j,k) - F(j+1,k) \right] - \left[ F(j-1,k) - F(j,k) \right]

This four-neighbor Laplacian can be generated by the convolution operation

G(j,k) = F(j,k) \circledast H(j,k)

where

H = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}

The four-neighbor Laplacian is often normalized to provide unit-gain averages of the positive weighted and negative weighted pixels in the 3x3 pixel neighborhood. The gain-normalized four-neighbor Laplacian impulse response is defined by

H = \frac{1}{4} \begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}

Prewitt has suggested an eight-neighbor Laplacian defined by the gain-normalized impulse response array

H = \frac{1}{8} \begin{bmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix}
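The following sketch applies the gain-normalized four-neighbor Laplacian and marks zero crossings, under the same numpy/scipy assumptions as the previous example; the sign-change test and threshold are one simple choice among many.

import numpy as np
from scipy.ndimage import convolve

def laplacian_zero_crossings(image, threshold=1.0):
    # Gain-normalized four-neighbor Laplacian impulse response
    H = np.array([[ 0, -1,  0],
                  [-1,  4, -1],
                  [ 0, -1,  0]]) / 4.0
    g = convolve(image.astype(float), H)
    # An edge is present where G(j,k) changes sign against a right or
    # lower neighbor and the response is large enough to matter.
    sign_flip = (np.sign(g[:-1, :-1]) != np.sign(g[1:, :-1])) | \
                (np.sign(g[:-1, :-1]) != np.sign(g[:-1, 1:]))
    return sign_flip & (np.abs(g[:-1, :-1]) > threshold)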

6. Q. Explain region splitting and merging with an example.
Ans: Region Splitting
The basic idea of region splitting is to break the image into a set of disjoint regions which are coherent within themselves. Initially, take the image as a whole to be the area of interest. Look at the area of interest and decide if all pixels contained in the region satisfy some similarity constraint. If TRUE, then the area of interest corresponds to a region in the image. If FALSE, split the area of interest (usually into four equal sub-areas) and consider each of the sub-areas as the area of interest in turn. This process continues until no further splitting occurs; in the worst case this happens when the areas are just one pixel in size. This is a divide-and-conquer or top-down method.

If only a splitting schedule is used, the final segmentation would probably contain many neighbouring regions that have identical or similar properties. Thus, a merging process is used after each split which compares adjacent regions and merges them if necessary. Algorithms of this nature are called split-and-merge algorithms. To illustrate the basic principle of these methods, consider an imaginary image; a code sketch of the splitting phase follows the example.

1. Let R denote the whole image, shown in Fig 35(a).
2. Not all the pixels in R are similar, so the region is split into four quadrants R1, R2, R3 and R4, as in Fig 35(b).
3. Assume that all pixels within regions R1, R2 and R3 respectively are similar, but those in R4 are not.
4. Therefore R4 is split next, as in Fig 35(c).
5. Now assume that all pixels within each region are similar with respect to that region, and that after comparing the split regions, two of them are found to be identical.
6. These are thus merged together.
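A compact sketch of the splitting phase in Python, assuming numpy and using intensity variance as the similarity constraint, is shown below; the threshold, helper name, and test image are illustrative assumptions. A merge pass would then compare adjacent leaves and join those whose statistics agree.

import numpy as np

def split_regions(img, thresh, region=None, leaves=None):
    # Recursive quadtree split: keep the area of interest if it satisfies
    # the similarity constraint (low intensity variance), otherwise split
    # it into four equal sub-areas and examine each in turn.
    if region is None:
        region, leaves = (0, 0, img.shape[0], img.shape[1]), []
    r0, c0, r1, c1 = region
    block = img[r0:r1, c0:c1]
    if block.var() <= thresh or min(r1 - r0, c1 - c0) <= 1:
        leaves.append(region)   # coherent: corresponds to a region
    else:
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        for sub in [(r0, c0, rm, cm), (r0, cm, rm, c1),
                    (rm, c0, r1, cm), (rm, cm, r1, c1)]:
            split_regions(img, thresh, sub, leaves)
    return leaves

# A toy image that splits once into four coherent quadrants
img = np.zeros((8, 8)); img[4:, 4:] = 1.0
print(split_regions(img, thresh=0.01))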
