
SRI BALAJI CHOCKALINGAM ENGG COLLEGE DEPT OF ELECTRONICS & COMMUNICATION ENGG.

MODEL EXAMINATION
DS7201-ADVANCED DIGITAL IMAGE PROCESSING
YEAR/SEM: I/II    DATE:    TIME: 3 Hrs.    MARKS: 100

PART-A (10 X 2 = 20) (Answer All Questions)

1. Define Mach band effect.
2. What are image transforms?
3. Define segmentation.
4. Define texture.
5. Define moments.
6. What is skeletonization?
7. Define points.
8. Define image fusion.
9. Define 3D image.
10. State the sources of 3D data sets.

PART-B (5 X 16 = 80) (Answer All Questions)

11 (a). Explain the human visual perception system. (16)
(Or)
(b). Explain the properties of D%&'()&. (16)

12 (a). Explain wavelet based segmentation method. (16)
(Or)
(b). Explain region growing based segmentation. (Digital Image Processing) (16)

13 (a). Explain the edge detection techniques. (p. 936, Digital Image Processing) (16)
(Or)
(b). Explain the image descriptors. (16)

14 (a). Explain the following: (16)
(i) Line detection using Hough transforms. (p. 149, Feature Extraction)
(ii) Least square line fitting.
(Or)
(b). Explain the image fusion based on discrete wavelet transforms. (p. 64, Blum R) (16)

15 (a). Explain the following: (16)
(i) Volumetric display. (p. 936, Image Processing Handbook)
(ii) Ray tracing. (p. 911, Image Processing Handbook)
(Or)
(b). Explain briefly about measurements on 3D images. (p. 963, Image Processing Handbook)


1) One inherent property of the eye, known as Mach bands, affects the way we perceive images. These are illustrated in Figure 1.4 and are the bands that appear to be where two stripes of constant shade join. By assigning values to the image brightness levels, the cross-section of plotted brightness is shown in Figure 1.4(a). This shows that the picture is formed from stripes of constant brightness. Human vision perceives an image for which the cross-section is as plotted in Figure 1.4(c). These Mach bands do not really exist, but are introduced by your eye. The bands arise from overshoot in the eyes' response at boundaries of regions of different intensity (this aids us to differentiate between objects in our field of view). The real cross-section is illustrated in Figure 1.4(b). Note also that a human eye can distinguish only relatively few grey levels. It has a capability to discriminate between 32 levels (equivalent to five bits), whereas the image of Figure 1.4(a) could have many more brightness levels. This is why your perception finds it more difficult to discriminate between the low-intensity bands on the left of Figure 1.4(a).
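The effect is easy to reproduce. The following is a minimal sketch (not part of the exam source) that builds an image from stripes of constant brightness and plots the true staircase cross-section; it assumes numpy and matplotlib are available, and all sizes and grey levels are illustrative choices.

import numpy as np
import matplotlib.pyplot as plt

# Eight stripes of constant brightness, each 32 pixels wide.
levels = np.linspace(32, 224, 8).astype(np.uint8)
stripe = np.repeat(levels, 32)              # one row: a staircase of grey levels
image = np.tile(stripe, (128, 1))           # replicate the row to form the image

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(6, 5))
ax1.imshow(image, cmap='gray', vmin=0, vmax=255)
ax1.set_title('Constant-brightness stripes (Mach bands appear at the joins)')
ax2.plot(image[0, :])                       # the real cross-section has no overshoot
ax2.set_title('Actual brightness cross-section')
plt.tight_layout()
plt.show()

Viewing the top panel, bright and dark bands appear beside each join even though every stripe is perfectly uniform; the overshoot is introduced by the eye, exactly as described above.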

2) The image transformations covered practical effects that can change image appearance, and were: rotation, scale change, viewpoint change, image blur, JPEG compression, and illumination. For some of these there were two scene types available, which allowed for separation of understanding of scene type and transformation.
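These transformations are simple to apply in code. Below is a hedged sketch using OpenCV; the input file name scene.png and all parameter values are illustrative assumptions, not details from the source.

import cv2

img = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input image
h, w = img.shape

# Rotation and scale change via one affine warp (30 degrees, half scale).
M = cv2.getRotationMatrix2D((w / 2, h / 2), 30, 0.5)
rotated_scaled = cv2.warpAffine(img, M, (w, h))

# Image blur.
blurred = cv2.GaussianBlur(img, (9, 9), 2.0)

# JPEG compression at low quality, decoded back to pixels.
ok, buf = cv2.imencode('.jpg', img, [cv2.IMWRITE_JPEG_QUALITY, 10])
jpeg_degraded = cv2.imdecode(buf, cv2.IMREAD_GRAYSCALE)

# Illumination change approximated as a contrast/brightness adjustment.
darker = cv2.convertScaleAbs(img, alpha=0.6, beta=-20)

Viewpoint change is usually modelled as a perspective warp (cv2.warpPerspective) given a homography between the two views.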
3) To segment an image according to its texture, we can measure the texture in a chosen region and then classify it. This is equivalent to template convolution, but where the result applied to pixels is the class to which they belong, as opposed to the usual result of template convolution. Here, we shall use a 7×7 template size: the texture measures will be derived from the 49 points within the template. First, though, we need data from which we can make a classification decision, the training data. This depends on a chosen application. Here, we shall consider the problem of segmenting the eye image into regions of hair and skin. This is a two-class problem for which we need samples of each class, samples of skin and hair. We will take samples of each of the two classes; in this way, the classification decision is as illustrated in Figure 8.5. The texture measures are the energy, entropy and inertia of the co-occurrence matrix of the 7×7 region, so the feature space is three-dimensional. The training data is derived from regions of hair and from regions of skin, as shown in Figure 8.6(a) and (b), respectively. The first half of this data is the samples of hair, the other half is samples of the skin, as required for the k-nearest neighbour classifier of Code 8.5. We can then segment the image by classifying each pixel according to the description obtained from its 7×7 region. Clearly, the training samples of each class should be classified correctly. The result is shown in Figure 8.7(a). Here, the top left corner is first (correctly) classified as hair, and the top row of the image is classified as hair until the skin commences.
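A minimal numpy-only sketch of this approach follows. The function names (cooccurrence, texture_features, segment) are invented for illustration, and the quantisation to 8 grey levels is an assumption rather than a detail from the source; it is not the textbook's own code.

import numpy as np

def cooccurrence(window, levels=8):
    # Grey-level co-occurrence matrix for horizontally adjacent pixels.
    q = (window.astype(int) * levels) // 256      # quantise 0..255 to 0..levels-1
    P = np.zeros((levels, levels))
    for r in range(q.shape[0]):
        for c in range(q.shape[1] - 1):
            P[q[r, c], q[r, c + 1]] += 1
    return P / max(P.sum(), 1.0)                  # normalise to probabilities

def texture_features(window):
    # The three measures named above: energy, entropy and inertia (contrast).
    P = cooccurrence(window)
    i, j = np.indices(P.shape)
    energy = np.sum(P ** 2)
    entropy = -np.sum(P[P > 0] * np.log(P[P > 0]))
    inertia = np.sum((i - j) ** 2 * P)
    return np.array([energy, entropy, inertia])

def segment(image, train_feats, train_labels, k=3):
    # Label each pixel by the majority class among the k nearest training
    # samples, using the features of its 7x7 neighbourhood. Pixels within
    # 3 of the border keep the default label 0. train_labels must be a
    # numpy array of non-negative integer class labels.
    out = np.zeros(image.shape, dtype=int)
    for r in range(3, image.shape[0] - 3):
        for c in range(3, image.shape[1] - 3):
            f = texture_features(image[r - 3:r + 4, c - 3:c + 4])
            d = np.linalg.norm(train_feats - f, axis=1)
            out[r, c] = np.bincount(train_labels[np.argsort(d)[:k]]).argmax()
    return out

In practice the three features sit on quite different numeric ranges, so they may need normalising before the Euclidean distance in the k-NN step is meaningful.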

4) Texture is a very nebulous concept, often attributed to human perception, as either the feel or the appearance of (woven) fabric. Everyone has their own interpretation as to the nature of texture; there is no mathematical definition of texture, it simply exists. By way of reference, let us consider one of the dictionary definitions (Oxford Dictionary, 1996): texture n., & v.t. 1. n. arrangement of threads etc. in textile fabric; characteristic feel due to this; arrangement of small constituent parts, perceived structure (of skin, rock, soil, organic tissue, literary work, etc.); representation of structure and detail of objects in art; ... That covers quite a lot. If we change 'threads' for 'pixels', the definition could apply to images (except for the bit about artwork).

5) Moments describe a shape's layout (the arrangement of its pixels), a bit like combining area, compactness, irregularity and higher order descriptions together. Moments are a global description of a shape, accruing this same advantage as Fourier descriptors since there is selectivity, which is an in-built ability to discern and filter noise. Further, in image analysis, they are statistical moments, as opposed to mechanical ones, but the two are analogous. For example, the mechanical moment of inertia describes the rate of change in momentum; the statistical second order moment describes the rate of change in a shape's area. In this way, statistical moments can be considered as a global region description. Moments for image analysis were originally introduced in the 1960s (Hu, 1962) (an exciting time for computer vision researchers too!) and an excellent review is available (Prokop and Reeves, 1992). Moments are often associated more with statistical pattern recognition than with model-based vision, since a major assumption is that there is an unoccluded view of the target shape. Target images are often derived by thresholding, usually one of the optimal forms that can require a single object in the field of view.
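As a short sketch of the idea (the helper names are illustrative, not from the source), the raw moment m_pq sums x^p y^q over the pixel intensities, and the central moment mu_pq measures the same quantity about the centroid, which makes it translation invariant:

import numpy as np

def raw_moment(img, p, q):
    # m_pq = sum over pixels of x^p * y^q * I(x, y)
    y, x = np.indices(img.shape).astype(float)
    return np.sum((x ** p) * (y ** q) * img)

def central_moment(img, p, q):
    # mu_pq is measured about the centroid, giving translation invariance.
    m00 = raw_moment(img, 0, 0)
    xc = raw_moment(img, 1, 0) / m00
    yc = raw_moment(img, 0, 1) / m00
    y, x = np.indices(img.shape).astype(float)
    return np.sum(((x - xc) ** p) * ((y - yc) ** q) * img)

# Example: a small binary shape; mu_20 and mu_02 describe its horizontal
# and vertical spread, the statistical second order moments mentioned above.
shape = np.zeros((8, 8))
shape[2:6, 3:7] = 1
print(central_moment(shape, 2, 0), central_moment(shape, 0, 2))

Hu's invariant moments (Hu, 1962) are particular combinations of these central moments chosen to also be invariant to rotation and scale.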

6)
