
Image Enhancement

Spatial domain techniques


Point operations
Linear transformation
Logarithmic transformation
Power-law transformation
Piecewise linear transformation
Contrast stretching
Histogram equalization and matching
Arithmetic operations
Filtering: filter mask, convolution
Smoothing spatial filters: averaging (low-pass) filters, weighted average
Sharpening spatial filters

Point Operations Overview


Point operations are zero-memory operations where a given gray level x in [0, L] is mapped to another gray level y in [0, L] according to a transformation

y = f(x)

where L = 255 for grayscale images. The identity mapping y = x leaves the gray levels unchanged and has no influence on visual quality at all.
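A generic point operation can be applied efficiently through a lookup table. The following is a minimal sketch (the file name and the particular mapping are only examples):

% Apply an arbitrary point operation y = f(x) via a 256-entry lookup table
a = imread('test.jpg');          % example file name, 8-bit image assumed
a = a(:,:,1);                    % keep a single channel
x = 0:255;                       % all possible input gray levels
f = uint8(255 - x);              % example mapping: digital negative
y = f(double(a) + 1);            % index the table (MATLAB indices start at 1)
imshow(y)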

Digital Negative

y = L - x,   for 0 <= x <= L

% Load an image and compute its digital negative
a = imread('test.jpg');              % example file name
hh = 255 - a(1:1536, 1:2048, 1);     % digital negative of one channel
figure, imshow(hh)
hh = 255 - a(1:1536, 1:2048, 1:3);   % digital negative of all three channels
g = a(:,:,2);                        % extract the green channel
imshow(g)

Contrast Stretching
Contrast stretching (also known as normalization) operates by stretching the range of pixel values in the input image over a larger dynamic range in the output image. Because it applies only a linear scaling of the dynamic range, it is less sophisticated than histogram equalization and provides a visually less harsh enhancement. The upper and lower limits of the output range (e.g. 0 and 255) and the input limits (a, b) should be known.
In its simplest form, this transform scans the present image to determine the maximum and minimum pixel values currently present, denoted c and d respectively.

The MATLAB function for the same purpose is imadjust:

g = imadjust(f, [low_in high_in], [low_out high_out], gamma)

maps the intensity values in image f to new values in g; values between low_in and high_in map to values between low_out and high_out. For example:

a = imread('test.jpg');
g1 = imadjust(a, [0 1], [1 0]);    % inverts the intensity range (negative)
imshow(g1)

In reality this method of choosing c and d is very naive, as a single outlier in the image will affect the overall enhancement result.
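The min-max normalization described above can be sketched in a few lines (the file name is an example; a grayscale image is assumed):

a = double(imread('test.jpg'));            % example file name
a = a(:,:,1);
c = min(a(:));                             % current minimum gray level
d = max(a(:));                             % current maximum gray level
g = uint8((a - c) * 255 / (d - c));        % stretch [c,d] onto [0,255]
imshow(g)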

Contrast Stretching
y = alpha * x                 for 0 <= x < a
y = beta * (x - a) + ya       for a <= x < b
y = gamma * (x - b) + yb      for b <= x < L

function contraststretch(filename, alph, bet, gam, a, b)
sir = imread(filename);             % read the image
ir = rgb2gray(sir);
[l, m] = size(ir);                  % row and column sizes
nir = zeros(l, m);                  % output image
for i = 1:l
    for j = 1:m
        if (ir(i,j) >= 0) && (ir(i,j) <= a)
            nir(i,j) = alph * double(ir(i,j));
        elseif (ir(i,j) > a) && (ir(i,j) <= b)
            nir(i,j) = bet * (double(ir(i,j)) - a) + alph*a;
        elseif (ir(i,j) > b) && (ir(i,j) <= 255)
            nir(i,j) = gam * (double(ir(i,j)) - b) + bet*(b - a) + alph*a;
        end
    end
end
subplot(1,2,1), imshow(ir)
subplot(1,2,2), imshow(uint8(nir))

a = 50, b = 150, alpha = 0.2, beta = 2, gamma = 1, ya = 30, yb = 200



Clipping
y = 0                    for 0 <= x < a
y = beta * (x - a)       for a <= x < b
y = beta * (b - a)       for b <= x < L

Logarithmic transform
It can also be used to enhance the contrast of an image: the effect of the logarithmic transform is to increase the dynamic range of dark regions in an image and decrease the dynamic range of light regions. Effectively this spreads out the low (dark) values and compresses the high (light) values into a smaller range. Applying this operator to an image with a high intensity range may lead to a loss of information.

In practice, since the logarithm is undefined at zero, the following transform is used:

y = c * log10(1 + x)

with additional scaling factors sigma and c: sigma controls the input range to the logarithmic function, while c scales the output to the range 0-255. The addition of 1 prevents problems when Iinput(i, j) = 0.
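A minimal sketch of this logarithmic transform, with c chosen so that the output fills the 8-bit range (the file name and the constant are illustrative):

a = double(imread('test.jpg'));      % example file name
a = a(:,:,1);                        % single channel
c = 255 / log10(1 + 255);            % scales the output to [0, 255]
g = uint8(c * log10(1 + a));         % y = c * log10(1 + x)
imshow(g)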

Logarithmic transform (continued)
A common variant from the same family is the square-root transform. Here (1 + alpha) is the basis and c is the scaling factor that keeps the output in an appropriate range. When Iinput(i,j) = 0 this results in Ioutput(i,j) = c unless the -1 is included to counter this potential offset appearing in the output image.

Gamma Correction

Gamma correction is the term used to describe the correction required for the non-linear output curve of modern computer displays. When we display a given intensity on a monitor we vary the analogue voltage in proportion to the intensity we require. The problem that all monitors have in common is that input voltage and output intensity are not linearly related. The correction can be done using the 'raise to power' (power-law) transform.

Both square-root and logarithmic transforms increase the contrast of low pixel values (i.e. dark image areas) and compress the contrast (i.e. range) of high pixel values (i.e. bright image areas).

Exponential Transform

The exponential is the inverse of the logarithmic transform: Ioutput(i, j) = exp(Iinput(i, j)). This transform enhances detail in high-value (bright) regions of the image while decreasing the dynamic range in low-value (dark) regions, the opposite of the logarithmic transform.
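A sketch of the exponential transform on a normalized image; the final rescaling constant is an assumption used here only to keep the display in range:

f = im2double(imread('test.jpg'));   % example file name, values in [0, 1]
f = f(:,:,1);
g = exp(f) - 1;                      % Ioutput = exp(Iinput), shifted so 0 maps to 0
g = g / (exp(1) - 1);                % rescale back to [0, 1] for display
imshow(g)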

MATLAB code: gamma (power-law) transform and contrast enhancement

The gamma transform can make pixels brighter or darker depending on its value.

% Power-law (gamma) intensity transformation. When f(x,y) is in the range
% [0,1] and gamma is larger than one it makes the image darker;
% when gamma is smaller than one it makes the image brighter.
clear all
close all
f = imread('winter.jpg');
f = rgb2gray(f);
f = im2double(f);
figure, imshow(f), title('original image')
[m, n] = size(f);
c = 1;
y = input('Gamma value: ');
for i = 1:m
    for j = 1:n
        s(i,j) = c * (f(i,j) ^ y);
    end
end
figure, imshow(s), title('gamma transformed')

% Contrast enhancement with a sigmoid-type intensity transformation
close all
f = imread('test.jpg');
f = im2double(f);
figure, imshow(f), title('gray scale')
m = 0.75;                 % contrast midpoint
E = 0.55;                 % slope of the function
g = 1 ./ (1 + (m ./ (f + eps)) .^ E);
figure, imshow(g), title('enhanced image')


Summary of Point Operations

So far, we have discussed various forms of the mapping function f(x) that lead to different enhancement results.

MATLAB function: imadjust

Histogram Processing

The histogram of a digital image with gray levels in the range [0, L-1] is the discrete function

h(rk) = nk

where
rk is the kth gray level,
nk is the number of pixels in the image having gray level rk, and
h(rk) is the histogram of a digital image with gray levels rk.

The natural question is: how do we select an appropriate f(x) for an arbitrary image? One systematic solution is based on the histogram information of the image.


Histogram equalization and specification

What is a histogram?

A histogram is a graph showing the number of pixels in an image at each intensity value found in that image. For an 8-bit grayscale image there are 256 possible intensities, so the histogram graphically displays 256 numbers showing the distribution of pixels among those grayscale values.

Example

A 4x4 image with gray scale [0, 9]:

2 4 3 2
3 2 2 4
3 4 3 2
2 3 5 4

Its histogram (number of pixels at each gray level):

Gray level:    0 1 2 3 4 5 6 7 8 9
No. of pixels: 0 0 6 5 4 1 0 0 0 0

Normalized Histogram

Obtained by dividing the histogram value at each gray level rk by the total number of pixels in the image, n.

Histogram based Enhancement

The histogram of an image represents the relative frequency of occurrence of the various gray levels in the image:

p(rk) = nk / n,   for k = 0, 1, ..., L-1

p(rk) gives an estimate of the probability of occurrence of gray level rk, and the sum of all components of a normalized histogram is equal to 1.

MATLAB function: imhist(x)
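A short sketch comparing the definition p(rk) = nk / n with imhist (the file name is an example):

a = imread('test.jpg');
a = a(:,:,1);
nk = imhist(a, 256);        % counts nk for each gray level rk
p  = nk / numel(a);         % normalized histogram; sum(p) is 1
bar(0:255, p)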

Example

Dark image: the components of the histogram are concentrated on the low side of the gray scale.

Bright image: the components of the histogram are concentrated on the high side of the gray scale.

Low-contrast image: the histogram is narrow and centered toward the middle of the gray scale.

High-contrast image: the histogram covers a broad range of the gray scale and the distribution of pixels is not too far from uniform, with very few vertical lines being much higher than the others.

(The figures plot h(rk) or p(rk) against rk for each case.)

Why Histogram?

It is a baby in the cradle! The histogram information reveals that the image is under-exposed.

Another Example

An over-exposed image, revealed by its histogram in the same way.

Histogram Equalization of an Image

One of the several techniques to enhance an image in such a manner is histogram equalization, which is commonly used to compare images made in entirely different circumstances. The concept of histogram equalization is to spread otherwise cluttered frequencies more evenly over the length of the histogram; frequencies that lie close together will be dramatically stretched out.

Histogram equalization assigns the intensity values of pixels in the input image such that the output image contains a uniform distribution of intensities; it redistributes intensity distributions. If the histogram of an image has many peaks and valleys, it will still have peaks and valleys after equalization, but they will be shifted.

Histogram Equalization

As a low-contrast image's histogram is narrow and centered toward the middle of the gray scale, distributing the histogram over a wider range improves the quality of the image. We can do this by adjusting the probability density function of the original histogram of the image so that the probability is spread equally.

Example: before and after histogram equalization.

Example: before and after histogram equalization. Here the quality is not improved much because the original image already has a broad gray-level scale.

General working

Histogram equalization operates on an image in three steps:
1). Histogram formation.
2). Calculation of a new intensity value for each intensity level.
3). Replacement of the previous intensity values with the new intensity values.

The new intensity value Oi for each intensity level i is calculated by applying the following equation (the cumulative histogram scaled to the output range):

Oi = round( (count) x sum(j = 0..i) nj / (No. of pixels) )

If the image is in the grayscale domain, the count (maximum output level) is 255; and if the image is of size 256x256, the number of pixels is 65536. The next step is to replace the previous intensity level with the new intensity level: the value Oi is put into the image for all pixels that had intensity level i.

Example

4x4 input image, gray scale = [0, 9]:

2 4 3 2
3 2 2 4
3 4 3 2
2 3 5 4

New intensity values are obtained from the running (cumulative) sum of the histogram, scaled by the maximum gray level (9) and rounded; here n = 16 pixels in total.

Gray level j | No. of pixels nj | Running sum | Running sum / n | s = 9 x running sum / n
     0       |        0         |      0      |      0/16       |   0
     1       |        0         |      0      |      0/16       |   0
     2       |        6         |      6      |      6/16       |   3.3 -> 3
     3       |        5         |     11      |     11/16       |   6.1 -> 6
     4       |        4         |     15      |     15/16       |   8.4 -> 8
     5       |        1         |     16      |     16/16       |   9
     6       |        0         |     16      |     16/16       |   9
     7       |        0         |     16      |     16/16       |   9
     8       |        0         |     16      |     16/16       |   9
     9       |        0         |     16      |     16/16       |   9

Example

Output image after histogram equalization (each input level is replaced by its new value s: 2 -> 3, 3 -> 6, 4 -> 8, 5 -> 9), gray scale = [0, 9]:

3 8 6 3
6 3 3 8
6 8 6 3
3 6 9 8

Histogram equalization in MATLAB:

clear all
close all
a = imread('humd.jpg');
h = imhist(a(:,:,1));
h1 = h(1:10:256);                  % sample the histogram every 10 levels
horz = 1:10:256;
figure, bar(horz, h1), title('histogram')
g = histeq(a(:,:,1), 256);         % histogram equalization
gg = imhist(g);                    % histogram of the equalized image
gg = gg(1:10:256);
figure, bar(horz, gg), title('histogram after equalization')
figure, imshow(a(:,:,1)), title('original image')
figure, imshow(g), title('equalized image')

Histogram equalization implementation

clear all, close all
x0 = imread('histest1.jpg');
imshow(x0)
x0 = rgb2gray(x0);
figure, imshow(x0), title('gray')
[m, n] = size(x0);
len = m * n;
x = reshape(x0, len, 1);
L = 256;
xpdf = hist(double(x), 0:L-1);                    % histogram (pdf), 1 x L
tr = round(xpdf * triu(ones(L)) * (L-1) / len);   % cdf scaled to the range 0 to L-1
y0 = zeros(m, n);
for i = 1:L
    if xpdf(i) > 0
        y0 = y0 + (x0 == i-1) * tr(i);            % apply the mapping level by level
    end
end
ypdf = hist(reshape(y0, len, 1), 0:L-1);          % histogram of the equalized image
figure
subplot(211), stem(0:L-1, xpdf), title('histogram, original'), axis([0 256 0 500])
subplot(212), stem(0:L-1, ypdf), title('histogram, equalized'), axis([0 256 0 500])
figure
subplot(121), imshow(uint8(x0)), title('before')
subplot(122), imshow(uint8(y0)), title('after')
figure, stairs(0:L-1, tr), title('transformation'), axis([0 256 0 256])

Assignment: apply histogram equalization to a colour image. Hint: apply it to a single colour frame (channel) at a time, then recombine the colour values; see lecture 1.

Arithmetic operations on images

Arithmetic and logic operations on images are used extensively in most image processing applications, and may cover the entire image or a subset of it. Arithmetic operations between pixels p and q are defined as follows.

Addition (p + q): adding a value to each image pixel value.
Contrast adjustment: adding a positive constant value to each pixel location increases its value and hence its brightness.
Blending: adding images together produces a composite image of both input images.

Image example: before and after (figure).

Image subtraction

Subtraction: subtracting a value from each pixel location.
Contrast adjustment: as per addition (subtracting a constant darkens the image).
Image differencing: subtracting one image from another shows us the difference between the images. If we subtract two images in a video sequence, taken from a static camera, we get a difference image showing the movement that has occurred between those video frames in the scene. A useful variation on subtraction is the absolute difference Ioutput = | IA - IB | between the images (see the sketch below).

Division: dividing each pixel value by a constant or by the corresponding pixel of another image.
Contrast adjustment: uniformly scales image contrast by a given factor (e.g. reducing contrast to 25% = division by 4, since 100/25 = 4); often referred to as image colour scaling.
Image differencing: dividing one image by another gives a result of 1 where the image pixel values are identical and a value != 1 where differences occur. Notably, image differencing using subtraction is computationally more efficient.
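A sketch of frame differencing with the absolute difference; the file names are illustrative, and imabsdiff (Image Processing Toolbox) computes |IA - IB| directly:

f1 = imread('frame1.jpg');      % example frames from a static camera
f2 = imread('frame2.jpg');
d  = imabsdiff(f1, f2);         % movement between the frames shows up as bright pixels
imshow(d)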

Multiplication

Contrast adjustment: image colour scaling, as per division.

Saturation: when an arithmetic result exceeds the representational capacity of the image, this is commonly known as saturation of the image space. A solution is to detect this overflow and avoid it by setting all such values to the maximum value for the image representation (e.g. 255). We must also be aware of negative pixel values resulting from subtraction and deal with these accordingly; commonly they are set to zero.

Image blending: this can be used to produce ghosting or overlay effects between different images.

Alpha Blending

Transparency in a given image can be introduced by giving a weight between 0 and 1 to individual pixels in the image. The combined alpha values for each pixel location in the image form an additional image channel, the alpha channel, indexed as alpha(i,j). An alpha value of 0 is transparent (no pixel colour visible, only background) and a value of 1 is opaque (full pixel colour visible, no background visible). A value in between gives a level of blending with the background, so that the displayed pixel colour is

Idisplay(i,j) = alpha(i,j) * Iforeground(i,j) + (1 - alpha(i,j)) * Ibackground(i,j)

Using equal weights (1/N) for each of N images produces an equally weighted composite of the input images 1...N. Alternatively, different weights can be used between images to enhance or suppress the features of different images in the final result.
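A minimal alpha-blending sketch following the equation above; the constant alpha, the file names and the equal image sizes are assumptions (a full alpha image of the same size would be used in the same way):

fg = im2double(imread('foreground.jpg'));     % example foreground image
bg = im2double(imread('background.jpg'));     % example background image of the same size
alpha = 0.7;                                  % 1 = opaque, 0 = transparent
blended = alpha .* fg + (1 - alpha) .* bg;    % Idisplay = alpha*Ifg + (1 - alpha)*Ibg
imshow(blended)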

Adding an image

When we add 128, all grey values of 127 or greater will be mapped to 255; when we subtract 128, all grey values of 128 or less will be mapped to 0. In general, adding a constant lightens an image and subtracting a constant darkens it.

b = imread('blocks.tif');
b1 = b + 128;                      % older MATLAB versions raise an error here; newer ones saturate at 255
b1 = uint8(double(b) + 128);       % explicit conversion
% or
b1 = imadd(b, 128);
b2 = imsubtract(b, 128);

% Adding two images
a1 = imread('sence1.jpg');
a2 = imread('sence2.jpg');
ad = a2(:, 1:138, 1:3);            % the two images must have the same size
b = imadd(a1, ad);
imshow(a1)
imshow(a2)
imshow(b)

We can also lighten or darken an image by multiplication:

b3 = immultiply(b, 0.5);                 % or b3 = imdivide(b, 2)
b4 = immultiply(b, 2);
b5 = imadd(immultiply(b, 0.5), 128);     % or b5 = imadd(imdivide(b, 2), 128)

Filtering

Filtering is a technique for modifying or enhancing an image to emphasize certain features or remove other features. Filtering is a neighborhood operation, in which the value of any given pixel in the output image is determined by applying some algorithm to the values of the pixels in the neighborhood of the corresponding input pixel. It includes smoothing, sharpening, and edge enhancement. In digital image processing, the filter is also referred to as the subimage, mask, kernel, template, or window.

Spatial filtering is a pixel neighborhood operation. Linear filtering is accomplished by convolution and correlation, meaning that the value of any given pixel in the output image is represented by a weighted sum of the pixel values in its neighborhood.

Common elements of a filter are:
a neighborhood;
an operation on the neighborhood, including the pixel itself;
a filter mask of some size, e.g. 3x3 or 5x5.

Filter Mask

The sum of the weights in a mask affects the overall intensity of the resulting image. Typically, a mask is normalized such that the sum of weights is equal to one. When a mask contains negative values, it is normalized such that the sum of weights is equal to zero.

Process of filtering

The process consists simply of moving the filter mask from point to point in an image. Spatial filtering requires three steps (a sketch follows below):
1. position the mask over the current pixel,
2. form all products of filter elements with the corresponding elements of the neighbourhood,
3. add up all the products.
This must be repeated for every pixel in the image.
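A direct sketch of these three steps for a single output pixel with a 3x3 mask (variable names are illustrative; border handling is ignored here):

f = double(imread('test.jpg'));      % example image
f = f(:,:,1);
w = ones(3,3) / 9;                   % example mask: 3x3 average
x = 50; y = 50;                      % a pixel away from the border
nbhd = f(x-1:x+1, y-1:y+1);          % step 1: mask positioned over the current pixel
products = w .* nbhd;                % step 2: products of mask elements and neighbourhood
g_xy = sum(products(:));             % step 3: add up all the products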


Frequencies in an image

Fundamentally, the frequencies of an image are the amount by which grey values change with distance. High-frequency components are characterized by large changes in grey values over small distances, i.e. edges and noise. Low-frequency components, on the other hand, are parts of the image characterized by little change in the grey values; these may include backgrounds and skin textures.

Spatial Filtering

There are two types of filtering:
Linear filtering, e.g. average filtering and Gaussian filtering.
Order-statistic filtering, e.g. median filtering.

Filters can be classified as:
Lowpass (preserve low frequencies, reduce or eliminate high-frequency components).
Highpass (preserve high frequencies, reduce or eliminate low-frequency components).
Bandpass (preserve frequencies within a band).
Bandreject (preserve frequencies outside a band).

Correlation in linear spatial filtering

The mask coefficients w(s,t) are placed over the image pixels f(x+s, y+t); the result is the sum of products of the mask coefficients with the corresponding pixels directly under the mask. The coefficient w(0,0) coincides with the image value f(x,y), indicating that the mask is centered at (x,y) when the computation of the sum of products takes place.

For a 3x3 mask:

Mask coefficients:             Image pixels under the mask:
w(-1,-1)  w(-1,0)  w(-1,1)     f(x-1,y-1)  f(x-1,y)  f(x-1,y+1)
w(0,-1)   w(0,0)   w(0,1)      f(x,y-1)    f(x,y)    f(x,y+1)
w(1,-1)   w(1,0)   w(1,1)      f(x+1,y-1)  f(x+1,y)  f(x+1,y+1)

g(x,y) = w(-1,-1) f(x-1,y-1) + w(-1,0) f(x-1,y) + w(-1,1) f(x-1,y+1)
       + w(0,-1)  f(x,y-1)   + w(0,0)  f(x,y)   + w(0,1)  f(x,y+1)
       + w(1,-1)  f(x+1,y-1) + w(1,0)  f(x+1,y) + w(1,1)  f(x+1,y+1)

In general, linear filtering of an image f of size M x N with a filter mask of size m x n is given by the expression

g(x,y) = sum(s = -a..a) sum(t = -b..b) w(s,t) f(x+s, y+t)        (Eq. A)

where a = (m-1)/2 and b = (n-1)/2.

Issues in spatial filtering

What happens when the center of the filter approaches the image border? For an n x n mask, when the center is at distance (n-1)/2 from the border, at least one edge of the mask coincides with the border of the image; moving closer, one or more rows or columns of the mask will be located outside the image plane.

Remedy: limit the excursions of the center of the mask to a distance no less than (n-1)/2 pixels from the border. Problem: the resulting image will be smaller than the original, but all the pixels in the filtered image will have been processed by the full mask.

Padding: add rows and columns of zeros (or another constant gray level), or pad by replicating rows and columns. The padding is then stripped off at the end of filtering, so the filtered and original images have the same size. The value of the padding affects the result at the edges, and this effect increases with the mask size. (A short padding sketch follows below.)
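The two padding schemes can be sketched with padarray from the Image Processing Toolbox; a pad width of (n-1)/2 matches an n x n mask:

f = magic(6);                            % small example "image"
padZ = padarray(f, [1 1], 0);            % zero padding for a 3x3 mask
padR = padarray(f, [1 1], 'replicate');  % padding by replicating border rows and columns
% after filtering, the extra border rows and columns are stripped off again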

Smoothing Spatial Filters

Smoothing filters are used for blurring and for noise reduction. Blurring is used in preprocessing steps, such as the removal of small details from an image prior to object extraction, and the bridging of small gaps in lines or curves. Noise reduction can be accomplished by blurring because sharp transitions in gray levels are reduced. Image blurring can be obtained in the spatial domain by pixel averaging or integration in a neighborhood; the image can be blurred further by using an averaging filter of larger size.

Linear Filters

Linear filters include averaging, Gaussian and highpass filters.

Averaging filter: this is a lowpass filter, meaning it passes only the low-frequency content and blocks the higher-frequency content. That is why edges, which are attributes of sharpness in an image due to their large change in frequency, are blurred after an averaging filter. The idea is to replace the value of every pixel in an image by the average of the gray levels in its neighborhood, i.e. the output pixel value is the mean of its neighborhood. The size of the mask controls the degree of smoothing and the loss of detail.

Two 3x3 Smoothing (Average) Filter Masks

Standard average (box filter):

        1  1  1
(1/9) x 1  1  1
        1  1  1

An averaging filter in which all coefficients are equal is called a box filter.

Weighted average:

         1  2  1
(1/16) x 2  4  2
         1  2  1

Here the pixels are multiplied by different coefficients, giving more weight to some pixels: the pixel at the centre is given the most weight, and the weights of the other pixels decrease with their distance from the centre.

Smoothing Linear Filters

The general implementation for filtering an M x N image with a weighted averaging filter of size m x n is given by the expression

g(x,y) = [ sum(s = -a..a) sum(t = -b..b) w(s,t) f(x+s, y+t) ] / [ sum(s = -a..a) sum(t = -b..b) w(s,t) ]

Example of an Average Filter

The effect of computing the average of a neighborhood of pixels is to eliminate any sudden jumps in the grey level which could be caused by noise, i.e. large deviations from the norm. Suppose we have the 3x3 neighborhood:

2  2  3
3 30  2
1  3  2

Compared to the numbers 1, 2 and 3, the number 30 is relatively large and can be taken to be a digital representation of a noise spike. The average value of this group of numbers is 5.3. Assigning this value to the central pixel gives the neighborhood:

2  2    3
3  5.3  2
1  3    2

Average filter: imfilter examples

a = imread('p.jpg');
h = ones(5,5) / 25;
f1 = imfilter(a, h);                   % zero padding (default)
h10 = ones(10,10) / 100;
f2 = imfilter(a, h10);                 % zero padding (default)
f3 = imfilter(a, h10, 'replicate');    % replicate padding
figure
subplot(2,2,1), imshow(a),  title('original')
subplot(2,2,2), imshow(f1), title('5x5 with zero padding')
subplot(2,2,3), imshow(f2), title('10x10 with zero padding')
subplot(2,2,4), imshow(f3), title('10x10 with replicate padding')

Effect of imfilter with and without zero padding ('replicate' means without): with zero padding a black line is observed at the boundaries, especially for larger filters, but not when 'replicate' is used.



Gaussian filtering

a = imread('p.jpg');
h = fspecial('gaussian', 5, 1);              % 5x5 Gaussian, sigma = 1
f1 = imfilter(a, h, 'conv');                 % zero padding (default)
h10 = fspecial('gaussian', 10, 1);           % 10x10 Gaussian, sigma = 1
f2 = imfilter(a, h10, 'conv');               % zero padding (default)
f3 = imfilter(a, h10, 'replicate', 'conv');  % replicate padding
figure
subplot(2,2,1), imshow(a),  title('original')
subplot(2,2,2), imshow(f1), title('5x5 with zero padding')
subplot(2,2,3), imshow(f2), title('10x10 with zero padding')
subplot(2,2,4), imshow(f3), title('10x10 with replicate padding')

The result does not differ much from the average filter. Test different values of sigma.

Highpass filter

The sum of the coefficients (that is, the sum of all elements in the matrix) in a highpass filter is zero. This means that in a low-frequency part of an image, where the grey values are similar, the corresponding grey values in the new image will be close to zero, which is the expected result of applying a highpass filter to a low-frequency component.

a = imread('coins.bmp');
h = fspecial('laplacian', 0.9);
f1 = imfilter(a, h);                      % zero padding (default)
h10 = fspecial('laplacian', 0.9);
f2 = imfilter(a, h10);                    % zero padding (default)
f3 = imfilter(a, h10, 'replicate');       % replicate padding
figure
subplot(2,2,1), imshow(a),  title('original')
subplot(2,2,2), imshow(f1), title('with zero padding')
subplot(2,2,3), imshow(f2), title('with zero padding')
subplot(2,2,4), imshow(f3), title('with replicate padding')

Median filter

The median filter is also a sliding-window spatial filter, but it replaces the center value in the window with the median of all the pixel values in the window. As for the mean filter, the kernel is usually square but can be any shape. An example of median filtering of a single 3x3 window of values is shown below.

Process of the median filter:
1. Crop the region of the neighborhood.
2. Sort the values of the pixels in the region.
3. In an M x N mask the median is the (M*N div 2 + 1)th value.

10  20  20
15 100  20
20  20  25

Sorted: 10, 15, 20, 20, 20, 20, 20, 25, 100 - the median is the 5th value, 20.
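A sketch of this process with explicit sorting (medfilt2 does the same job; the border pixels are simply copied here to keep the example short):

f = double(imread('test.jpg'));      % example image
f = f(:,:,1);
[M, N] = size(f);
g = f;                               % border pixels left unfiltered
for i = 2:M-1
    for j = 2:N-1
        win = f(i-1:i+1, j-1:j+1);   % crop the 3x3 neighbourhood
        v = sort(win(:));            % sort the values
        g(i,j) = v(5);               % the median is the 5th of the 9 values
    end
end
imshow(uint8(g))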

Sharpening Spatial Filters

The highlighting of fine detail in an image, or the enhancement of detail that has been blurred, is called sharpening. Applications include electronic printing, medical imaging, industrial inspection and autonomous guidance in military systems. Blurring is accomplished in the spatial domain by pixel averaging or integration in a neighborhood; sharpening can be accomplished by spatial differentiation.

Image Processing operations

Vertical image flipping / transposition: the transpose image B (M x N) of A (N x M) can be obtained as B(j,i) = A(i,j):

for i = 1:512
    for j = 1:512
        B(j,i) = A(i,j);
    end
end
% or simply: B = A';

Thresholding

Thresholding is a vital part of image segmentation; it produces a binary image from a grayscale or colour image by setting pixel values to 1 or 0 depending on whether they are above or below the threshold value T. It is used to separate out a region or object within the image based upon its pixel value: a pixel becomes white if its grey level is > T, and black if its grey level is <= T.

Thresholding can be done very simply in MATLAB. Suppose we have an 8-bit image stored as the variable X; then the command X > T will perform the thresholding:

r = imread('rice.tif');
imshow(r), figure, imshow(r > 110)

The command X > T returns 1 (true) for all pixels whose grey values are greater than T, and 0 (false) for all pixels whose grey values are less than or equal to T. We thus end up with a matrix of 0's and 1's, which can be viewed as a binary image. MATLAB also has the im2bw function, which thresholds an image of any data type using the general syntax im2bw(image, level).

Noise

Noise is any degradation in the image signal caused by external disturbance. Cleaning an image corrupted by noise is thus an important area of image restoration.

Types of noise:
Salt and pepper noise
Gaussian noise
Periodic noise

Salt and pepper noise

Salt and pepper noise is the presence of white or black (or both) pixels in the image; it is also called binary noise. It can be caused by sharp, sudden disturbances in the image signal. To add noise, we use the MATLAB function imnoise, which takes a number of different parameters. To add salt and pepper noise:

t_sp = imnoise(t, 'salt & pepper');

We can include an optional parameter, a value between 0 and 1 indicating the fraction of pixels to be corrupted.

Exercise: create 40% salt and pepper noise in a given image (a hint follows below).
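As a hint for the exercise, the optional density parameter of imnoise is the fraction of corrupted pixels:

t_sp40 = imnoise(t, 'salt & pepper', 0.4);   % roughly 40% of the pixels corrupted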

Cleaning salt and pepper noise

Given that pixels corrupted by salt and pepper noise are high-frequency components of an image, we should expect a lowpass filter to reduce them:

a3 = fspecial('average');
t_sp_a3 = filter2(a3, double(t_sp));

Exercise: observe the effect with a larger averaging filter:

a7 = fspecial('average', [7, 7]);
t_sp_a7 = filter2(a7, double(t_sp));


Cleaning salt and pepper noise with a median filter

The median filter will in general replace a noisy value with one closer to its surroundings. A median filter is more effective than convolution when the goal is to simultaneously reduce noise and preserve edges.

I = imread('coins.tif');
J = imnoise(I, 'salt & pepper', 0.02);
K = medfilt2(J);
imshow(J), figure, imshow(K)

Exercise: observe the difference between the average and median filters with increasing mask size (5x5 etc.) and increasing salt and pepper noise (0.4 or more).

Gaussian noise

Gaussian noise is an idealized form of white noise, caused by random fluctuations in the signal; it is white noise which is normally distributed. If the image is represented as I and the Gaussian noise as N, we can model a noisy image by simply adding the two: I + N.

t_ga = imnoise(t, 'gaussian');

As with salt and pepper noise, the 'gaussian' parameter can also take optional values, giving the mean and variance of the noise.

Cleaning Gaussian noise

There may be many copies of an image corrupted by Gaussian noise, e.g. satellite images: if a satellite passes over the same spot many times, we obtain many different images of the same place. In such a case a very simple approach to cleaning Gaussian noise is to take the average (the mean) of all the images, as sketched below.
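A sketch of this averaging approach; here the N noisy copies are simulated with imnoise, whereas in practice they would be separate acquisitions of the same scene:

clean = im2double(imread('test.jpg'));     % example reference image
clean = clean(:,:,1);
N = 10;                                    % number of noisy copies
acc = zeros(size(clean));
for k = 1:N
    noisy = imnoise(clean, 'gaussian');    % simulate one noisy observation
    acc = acc + noisy;
end
avg = acc / N;                             % the noise variance drops roughly as 1/N
imshow(avg)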

Edges

Edges contain some of the most useful information in an image. We may use edges to measure the size of objects in an image, to isolate particular objects from their background, and to recognize or classify objects. An edge may be loosely defined as a line of pixels showing an observable difference.

Origin of Edges

Edges are caused by a variety of factors: surface normal discontinuity, depth discontinuity, surface color discontinuity, and illumination discontinuity.

Edge Types

Step edges and line edges. Change is measured by the derivative in 1D: at the biggest change the derivative has maximum magnitude, or the 2nd derivative is zero. Edge detection approaches include difference operators and parametric-model matchers.

Gray level profile

Example profile (gray values along a line): 0 0 0 1 2 3 2 0 0 2 2 6 3 3 2 2 3 3 0 0 0 0 0 0 7 7 6 5 5 3

Edges by difference

Suppose that the values of the "ramp" edge in the figure are, from left to right:
20, 20, 20, 20, 20, 20, 100, 180, 180, 180, 180, 180
If we form the differences, by subtracting each value from its successor, we obtain:
0, 0, 0, 0, 0, 80, 80, 0, 0, 0, 0
and it is these values which are plotted in the next figure. It appears that the difference tends to enhance edges and reduce other components. The difference can be defined in three separate ways: the forward difference f(x+1) - f(x), the backward difference f(x) - f(x-1), and the central difference (f(x+1) - f(x-1))/2.
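The same differencing can be reproduced directly with diff, using the values quoted above:

x = [20 20 20 20 20 20 100 180 180 180 180 180];
d = diff(x)          % nonzero only where the gray level changes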

Edge detectors

Edge detection involves three steps:
1. Noise reduction, e.g. with a median filter or a mean filter. Dilemma: a large filter removes noise but also removes edges; a small filter keeps edges but also keeps noise.
2. Edge enhancement.
3. Edge localisation.

Edge detectors (continued)

The three steps again:
1. Noise reduction.
2. Edge enhancement: calculate candidates for the edges.
3. Edge localisation: decide which edge candidates to keep.

We will look at four methods (but others exist!), ordered by complexity, simplest first: Prewitt, Sobel, Laplacian, Canny.


Gradient operators

(a) Roberts cross operator; (b) 3x3 Prewitt operator; (c) Sobel operator; (d) 4x4 Prewitt operator.

clear
ic = imread('ic.jpg');
figure, imshow(ic), title('original')
ic = ic(:,:,1);
px = [-1 0 1; -1 0 1; -1 0 1];       % Prewitt mask: highlights vertical edges
icx = filter2(px, double(ic));
% divide by 255 to bring the result into the [0,1] range expected by imshow
figure, imshow(icx/255), title('px')
py = px';                            % highlights horizontal edges
icy = filter2(py, double(ic));
figure, imshow(icy/255), title('py')
% we can create a figure containing all the edges with:
edge_p = sqrt(icx.^2 + icy.^2);
figure, imshow(edge_p/255), title('all edges')
% a binary image containing edges only can be produced by thresholding:
edge_t = im2bw(edge_p/255, 0.3);
figure, imshow(edge_t), title('thresholded edges')
% we can obtain edges by the Prewitt filters directly by using the command:
edge_p = edge(ic, 'prewitt');
figure, imshow(edge_p), title('direct Prewitt')

Exercise: use the Roberts and Sobel filters to do the same job.

The Canny Edge Detector

Figures: original image (Lena); magnitude of the gradient.

The Canny Edge Detector (continued)

Figure: after non-maximum suppression.

Second differences

A basic definition of the first-order derivative of a one-dimensional function f(x) is

df/dx = f(x+1) - f(x)

and the second-order derivative is defined as the difference

d2f/dx2 = f(x+1) + f(x-1) - 2 f(x)


Derivative of an image profile

Profile:           0 0 0 1 2 3 2 0 0 2 2 6 3 3 2 2 3 3 0 0 0 0 0 0 7 7 6 5 5 3
First derivative:  0 0 1 1 1 -1 -2 0 2 0 4 -3 0 -1 0 1 0 -3 0 0 0 0 0 -7 0 -1 -1 0 -2
Second derivative: 0 -1 0 0 -2 -1 2 2 -2 4 -7 3 -1 1 1 -1 -3 3 0 0 0 0 -7 7 -1 0 1 -2

Analysis

The 1st-order derivative is nonzero along the entire ramp, while the 2nd-order derivative is nonzero only at the onset and end of the ramp: the 1st derivative produces thick edges and the 2nd derivative produces thin edges. The response at and around an isolated point is much stronger for the 2nd-order than for the 1st-order derivative: a 2nd-order derivative enhances fine detail more than a 1st-order one.

Isotropic filters

Isotropic filters are rotation invariant, i.e. rotating the image and then applying the filter gives the same result as applying the filter first and then rotating the result. It was shown by Rosenfeld and Kak [1982] that the simplest isotropic derivative operator is the Laplacian, defined as

Laplacian(f) = d2f/dx2 + d2f/dy2

As a derivative operator, it highlights gray-level discontinuities in an image.

Discrete form of the derivative

d2f/dx2 = f(x+1, y) + f(x-1, y) - 2 f(x, y)
d2f/dy2 = f(x, y+1) + f(x, y-1) - 2 f(x, y)

The digital implementation of the two-dimensional Laplacian is obtained by summing the two components:

Laplacian(f) = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4 f(x, y)

Laplacian masks

Basic mask:
 0  1  0
 1 -4  1
 0  1  0

The diagonal directions can be incorporated by adding two more terms, one for each diagonal:
 1  1  1
 1 -8  1
 1  1  1

The corresponding masks with the opposite sign of the centre coefficient are also used:
 0 -1  0        -1 -1 -1
-1  4 -1        -1  8 -1
 0 -1  0        -1 -1 -1

The Laplacian-filtered image is obtained by convolving the image with the mask.


Implementation

The Laplacian-filtered image appears superimposed on a dark, featureless background. The background (and the tonal range of the original) can be recovered, while preserving the sharpening effect of the Laplacian, simply by combining the original image and the Laplacian image:

g(x, y) = f(x, y) - Laplacian(f)(x, y)   if the center coefficient of the mask is negative
g(x, y) = f(x, y) + Laplacian(f)(x, y)   if the center coefficient of the mask is positive

where f(x, y) is the original image, Laplacian(f)(x, y) is the Laplacian-filtered image, and g(x, y) is the sharpened image.

close all
f = imread('moon.jpg');
% fspecial generates a 2-D filter
w = fspecial('laplacian', 0);
% implementation of the equation above
g1 = imfilter(f, w, 'replicate');
figure, imshow(g1), title('Laplacian-filtered image without im2double')
% in general we expect the Laplacian image to have negative values,
% but because of uint8 the negative values have been truncated;
% convert the image into double
f2 = im2double(f);
g2 = imfilter(f2, w, 'replicate');
figure, imshow(g2), title('Laplacian-filtered image with im2double')
% restore the gray tones lost by the Laplacian by subtracting,
% since the centre coefficient is negative
g = f2 - g2;
figure, imshow(g), title('sharpened image using the Laplacian')

Using a Laplacian operator that includes the diagonal directions gives a sharper result:

close all
f = imread('moon.jpg');
w4 = fspecial('laplacian', 0);              % 4-neighbour Laplacian
w8 = [1 1 1; 1 -8 1; 1 1 1];                % 8-neighbour (diagonal) Laplacian
f = im2double(f);
g4 = f - imfilter(f, w4, 'replicate');
g8 = f - imfilter(f, w8, 'replicate');
imshow(f)
figure, imshow(g4)
figure, imshow(g8)

Zero Crossing

In general these are the places where the filtered result changes sign. A zero crossing is defined at a pixel of the filtered image satisfying either of the following: it has a negative value and is next to a pixel whose value is positive, or it has a value of zero and lies between a negative-valued and a positive-valued pixel.

One more method of edge detection

Take the Laplacian filter and apply zero crossing:

I = imread('coins.bmp');
LP_filter = fspecial('laplacian', 0);
icz = edge(I(:,:,1), 'zerocross', LP_filter);

Is this a very good result? To eliminate the spurious edges, we may first smooth the image with a Gaussian filter. This is the Marr-Hildreth method of edge detection:
1. smooth the image with a Gaussian filter,
2. convolve the result with a Laplacian,
3. find the zero crossings.
The method was designed to be as close as possible to biological vision. The first two steps can be combined into one, to produce a Laplacian of Gaussian or LoG filter.

h = fspecial('log', hsize, sigma) returns a rotationally symmetric Laplacian of Gaussian filter of size hsize with standard deviation sigma (positive). hsize can be a vector specifying the number of rows and columns in h, or it can be a scalar, in which case h is a square matrix. The default value for hsize is [5 5] and 0.5 for sigma.

log_filter = fspecial('log', 13, 2);
edge(ic, 'zerocross', log_filter);


Edge Enhancement: Unsharp Masking

A related operation is to make the edges in an image slightly sharper rather than isolating them, which generally results in an image more pleasing to the human eye. A process to sharpen images consists of subtracting a blurred version of an image from the image itself. This process, called unsharp masking, is expressed as

fs(x, y) = f(x, y) - fblurred(x, y)

where fs(x, y) denotes the sharpened image obtained by unsharp masking, and fblurred(x, y) is a blurred version of f(x, y).

Sharpening an image:

close all
x = imread('butterfly.bmp');
figure, imshow(x(:,:,1)), title('original')
x = x(:,:,1);
f = fspecial('average');
xf = filter2(f, double(x));
xu = double(x) - xf/1.5;
figure, imshow(xu/70), title('sharpened edges')

The last command scales the result so that imshow displays an appropriate image.

High-boost filtering

Allied to the unsharp masking filters are the high-boost filters, which are obtained as

highboost = A x (original) - lowpass

A high-boost filtered image fhb is defined at any point (x, y) as

fhb(x, y) = A f(x, y) - fblurred(x, y),   where A >= 1
          = (A - 1) f(x, y) + f(x, y) - fblurred(x, y)
          = (A - 1) f(x, y) + fs(x, y)

This equation is applicable in general and does not state explicitly how the sharp image fs is obtained.

High-boost filtering and the Laplacian

If we choose to use the Laplacian to obtain fs(x, y), then

fhb = A f(x, y) - Laplacian(f)(x, y)   if the center coefficient of the mask is negative
fhb = A f(x, y) + Laplacian(f)(x, y)   if the center coefficient of the mask is positive

corresponding to the masks

 0  -1    0        -1   -1   -1
-1  A+4  -1        -1   A+8  -1
 0  -1    0        -1   -1   -1
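A sketch of high-boost sharpening using the negative-centre Laplacian mask shown above; the boost factor A and the file name are illustrative:

f = im2double(imread('moon.jpg'));          % example image, as used earlier
f = f(:,:,1);
A = 1.2;                                    % boost factor, A >= 1
hb = [0 -1 0; -1 A+4 -1; 0 -1 0];           % high-boost mask (negative centre coefficient)
g = imfilter(f, hb, 'replicate');
imshow(g)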
