
Islamic University Of Gaza Digital Image Processing

Faculty of Engineering Discussion


Computer Department Chapter 1
Eng. Ahmed M. Ayash Date: 10/02/2013

Chapter 1
Introduction

1. Theoretical

 Image
 An image may be defined as a two-dimensional function, f(x, y), where x and y are
spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is
called the intensity or gray level of the image at that point.

 Digital Image
 When x, y, and the amplitude values of f are all finite, discrete quantities, we call the
image a digital image.
 Pixel is the term most widely used to denote the elements of a digital image.
 A finite array of data values.

 Digital Image Processing
 The field of digital image processing refers to processing digital images by means of
a digital computer.

 Image processing typically attempts to accomplish one of four things:

 Image Restoration
 Image Enhancement
 Image Understanding (or Computer Vision)
 Image Data Compression

• Restoration takes a corrupted image and attempts to recreate a clean original.


• Enhancement alters an image to make its meaning clearer to human observers.
• Understanding usually attempts to mimic the human visual system in extracting
meaning from an image.
• Compression reduces the amount of data required to represent an image.

 Three Types of Processes


 Low-level Processes :
Involve primitive operations such as image preprocessing to reduce noise, contrast
enhancement, and image sharpening.
A low-level process is characterized by the fact that both its inputs and outputs are
images.

 Mid-level Processes:
Involves tasks such as segmentation (partitioning an image into regions or objects),
description of those objects to reduce them to a form suitable for computer processing,
and classification (recognition) of individual objects.
Its inputs generally are images, but its outputs are attributes extracted from those
images (e.g., edges, contours, and the identity of individual objects).

 High-level Processes :
Processing involves "making sense" of an ensemble of recognized objects, as in
image analysis, and, at the far end of the continuum, performing the cognitive
functions normally associated with vision.

2. Practical

 Image formats supported by Matlab


The following image formats are supported by Matlab:

Format   Description                        Extension

TIFF     Tagged Image File Format           .tiff, .tif
JPEG     Joint Photographic Experts Group   .jpg, .jpeg
GIF      Graphics Interchange Format        .gif
BMP      Windows Bitmap                     .bmp
PNG      Portable Network Graphics          .png
XWD      X Window Dump                      .xwd

 Examples

Example1: read, write and display image


%Example1.m
a=imread('lab2_1.tif');        % read the image from disk
imwrite(a,gray(256),'b.bmp');  % write it as a BMP with a 256-level gray colormap
imshow('b.bmp')                % imshow is used to display an image

Example2: Resize Image


%Example2.m
a=imread('lab2_1.tif');
disp('Original Size:');
s=size(a)
a1 = imresize(a, [300 400]);
disp('New Size:');
s2=size(a1)
imshow(a1);
Output:

>> Example2
Original Size:
s =
291 240
New Size:
s2 =
300 400

Homework:
1. Write a Matlab code to read two images and display the result of adding and subtracting
the two images.

2. Write a Matlab code to implement im2double(img) function.

Islamic University Of Gaza Digital Image Processing
Faculty of Engineering Discussion
Computer Department Chapter 2
Eng. Ahmed M. Ayash Date: 17/02/2013

Chapter 2
Digital Image Fundamentals
Part 1

1. Theoretical

 Brightness Adaptation
 The ability of the eye to discriminate between changes in light intensity at any
specific adaptation level.
 The eye's ability to discriminate between different intensity levels
 For example, when you enter a dark theater on a bright day, it takes an appreciable
interval of time before you can see well enough to find an empty seat.

 How do we see colors?


 The colors that we perceive are determined by the nature of the light reflected from an
object.
 For example, if white light is shone onto a green object, most wavelengths are
absorbed, while green light is reflected from the object.

 White is the color that contains all the wavelengths of visible light without
absorption; it has maximum brightness.
 Black is the darkest color, the result of the absence or complete absorption of light.
 Green objects reflect light with wavelengths primarily in the 500 to 570 nm range
(the green color range) while absorbing most of the energy at other wavelengths.

 A Simple Image Formation Model

f(x,y) = i(x,y) · r(x,y)

 f(x,y): intensity at the point (x,y)

 i(x,y): illumination at the point (x,y) (determined by the illumination source)
0 < i(x,y) < ∞
 r(x,y): reflectance (determined by the imaged object)
0 < r(x,y) < 1
 In real situations:
o Lmin ≤ L = f(x,y) ≤ Lmax
o Lmin = imin · rmin
o Lmax = imax · rmax
o L: gray level

 Sampling & Quantization


 Digitization of an analog signal involves two operations:
o Sampling: Digitizing the x- and y-coordinates.
o Quantization: Digitizing the amplitude values.

 Representing Digital Image


 Images can easily be represented as matrices.
 Pixel values are most often grey levels in the range 0-255(black-white).
 Discrete intensity interval [0, L-1], L=2k
 The number b of bits required to store a M × N digitized image
b=M×N×k

 Spatial and Intensity Resolution


Spatial Resolution
 The spatial resolution of an image is determined by how sampling was carried out
 Spatial resolution simply refers to the smallest discernable detail in an image

 Changing the spatial resolution of a digital image, by zooming or shrinking. Simply,
zooming and shrinking are the operations of oversampling (increase resolution) and
undersampling (decrease resolution) a digital image, respectively.

 Interpolation
Process of using known data to estimate unknown values
 Interpolation (sometimes called resampling)
an imaging method to increase (or decrease) the number of pixels in a digital
image.

 Zooming a digital image requires two steps:


1. The creation of new pixel locations,
2. And assignment of gray levels to those new locations.

Assigning gray levels to the new pixel locations is the challenging step. It can be
performed using three approaches:

1) Nearest Neighbor Interpolation (Pixel Replication): each pixel in the zoomed


image is assigned the gray level value of its closest pixel in the original image.
2) Bilinear Interpolation: the value of each pixel in the zoomed image is a
weighted average of the gray level values of the pixels in the nearest 2-by-2
neighborhood, in the original image.

V(x, y) = ax + by + cxy + d

3) Bicubic Interpolation: The intensity value assigned to point (x,y) is obtained

by the following equation:

V(x, y) = Σ (i=0..3) Σ (j=0..3) aij · x^i · y^j

The sixteen coefficients aij are determined by using the sixteen nearest neighbors.

In MATLAB, this can be performed using the imresize function. This function
accepts a factor greater than 1.0 for zooming and smaller than 1.0 for shrinking. It
also accepts the required interpolation method: 'nearest', 'bilinear', or 'bicubic'.
Bicubic interpolation is the default method.

imresize(A, SCALE, METHOD)
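A minimal usage sketch (the file name 'cameraman.tif' is an assumption; it ships with the Image Processing Toolbox):

I = imread('cameraman.tif');
Z1 = imresize(I, 2, 'nearest');    % zoom x2 by pixel replication
Z2 = imresize(I, 2, 'bilinear');   % zoom x2 by bilinear interpolation
S1 = imresize(I, 0.5, 'bicubic');  % shrink to half size (bicubic is the default)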


 Shrinking is done in the same manner as just described for zooming

Intensity (gray level) resolution


 Gray-level resolution is the smallest discernible change in gray level.

2. Practical

Example1: Reducing the Spatial Resolution of an Image


% Reading the image and converting it to a gray-level image.
I=imread('Fig','jpg');
s=size(I); %256*256*3
I=rgb2gray(I);
s1=size(I); %256*256
% Reducing the Size of I using bilinear Interpolation
I128=imresize(I,0.5,'bilinear'); imshow(I128),pause
I64=imresize(I,0.25,'bilinear');close,imshow(I64),pause
I32=imresize(I,0.125,'bilinear');close,imshow(I32),pause
I16=imresize(I,0.0625,'bilinear');close,imshow(I16),pause

% Resizing the reduced images to the original size (256 x 256) and compare
% them:
I16=imresize(I16,16,'bilinear');
I32=imresize(I32,8,'bilinear');
I64=imresize(I64,4,'bilinear');
I128=imresize(I128,2,'bilinear');

close
figure
subplot(121),imshow(I),title('I')
subplot(122),imshow(I128),title('I128')
pause,close

figure
subplot(221),imshow(I),title('I')
subplot(222),imshow(I64),title('I64')
subplot(223),imshow(I32),title('I32')
subplot(224),imshow(I16),title('I16')
pause,close

Output:

Example2: Reducing the Number of Gray Levels of an Image

% Reading the image and converting it to a gray-level image.


I=imread('Fig.jpg');
I=rgb2gray(I);

% A 64 gray-level image:
[I64,map64]=gray2ind(I,64);
% A 32 gray-level image:
[I32,map32]=gray2ind(I,32);
% A 16 gray-level image:
[I16,map16]=gray2ind(I,16);
% An 8 gray-level image:
[I8,map8]=gray2ind(I,8);
% A 2 gray-level image:
[I2,map2]=gray2ind(I,2);

figure(1)
subplot(321),subimage(I),title('I'),axis off
subplot(322),subimage(I64,map64),title('I64'),axis off
subplot(323),subimage(I32,map32),title('I32'),axis off
subplot(324),subimage(I16,map16),title('I16'),axis off
subplot(325),subimage(I8,map8),title('I8'),axis off
subplot(326),subimage(I2,map2),title('I2'),axis off

3. Homework:
1. Write a MATLAB code like example1 to reduce the size of an image using nearest-neighbor
interpolation. Which gives better image quality, nearest-neighbor or bilinear interpolation?

2. Write a MATLAB code that reads a gray scale image and generates the vertically flipped image
of original image. (Typically like the following result)

Islamic University Of Gaza Digital Image Processing
Faculty of Engineering Discussion
Computer Department Chapter 2
Eng. Ahmed M. Ayash Date: 24/02/2013

Chapter 2
Digital Image Fundamentals
Part 2

1. Theoretical

 Basic Relationships Between Pixels


 Neighborhood
 Adjacency
 Connectivity
 Paths
 Regions and boundaries

Neighbors of a Pixel
 The 4-neighbors of pixel p are: N4(p)
Any pixel p(x,y) has two vertical and two horizontal neighbors,
given by:
(x+1, y), (x-1, y), (x, y+1), (x, y-1)

 The 4 diagonal neighbors are: ND(p)


given by:
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

 The 8-neighbors are : N8(p)


The 8-neighbors of a pixel p are its vertical, horizontal, and 4 diagonal neighbors,
denoted by N8(p):
N8(p) = N4(p) U ND(p)
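These definitions translate directly into coordinate offsets. A minimal sketch (the pixel location is an assumed example):

x = 5; y = 7;                               % assumed pixel p = (x, y)
N4 = [x+1 y; x-1 y; x y+1; x y-1];          % 4-neighbors
ND = [x+1 y+1; x+1 y-1; x-1 y+1; x-1 y-1];  % diagonal neighbors
N8 = [N4; ND];                              % 8-neighbors = N4(p) U ND(p)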

Connectivity
Two pixels are said to be connected if they are adjacent in some sense.
o They are neighbors (N4, ND, N8) and
o Their intensity values (gray levels) are similar.

Adjacency
Let V be the set of intensity values used to define adjacency; e.g., V = {1} in a binary image or
V = {100, 101, 102, …, 120} in a gray-scale image.
We consider three types of adjacency:
o 4-adjacency:
Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).

o 8-adjacency:
Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).

o m-adjacency (mixed adjacency):


Two pixels p and q with values from V are m-adjacent if:
(i) q is in N4(p), or
(ii) q is in ND(p) and the set N4(p) ∩ N4(q) is empty (has no pixels whose values
are from V).

Two image subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some
pixel in S2.

Question1:
Consider the two image subsets, S1 and S2, shown in the following figure. For V={1},
determine whether these two subsets are (a) 4-adjacent, (b) 8-adjacent, or (c) m-adjacent.

Solution:

Let p and q be as shown in Fig. Then:


(a) S1 and S2 are not 4-connected because q is not in the set N4(p);
(b) S1 and S2 are 8-connected because q is in the set N8(p);
(c) S1 and S2 are m-connected because
(i) q is in ND(p), and
(ii) the set N4(p) ∩ N4(q) is empty.

Paths
A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with
coordinates (s, t) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), …, (xn, yn)

o where (x0, y0) = (x, y), (xn, yn) = (s, t),


o and pixels (xi, yi) and (xi-1, yi-1) are adjacent for 1≤ i ≤ n.
o In this case, n is the length of the path.
o If (x0, y0) = (xn, yn) the path is a closed path.
o The path can be defined 4-,8-,m-paths depending on adjacency type.

Let S be a subset of pixels in an image. Two pixels p and q are said to be connected
in S if there exists a path between them consisting entirely of pixels in S

 For any pixel p in S, the set of pixels that are connected to it in S is called a
connected component of S.
 If it only has one connected component, then set S is called a connected set.

Question2:
Consider the image segment shown.
Let V={0, 1} and compute the lengths of the shortest 4-, 8-, and m-path between p
and q. If a particular path does not exist between these two points, explain why.

Solution:

 When V = {0,1}, 4-path does not exist between p and q because it is impossible
to get from p to q by traveling along points that are both 4-adjacent and also have
values from V . Fig. a shows this condition; it is not possible to get to q.
 The shortest 8-path is shown in Fig. b; its length is 4.
 The length of the shortest m-path (shown dashed) is 5.
 Both of these shortest paths are unique in this case.

Regions and boundaries


 Let R be a subset of pixels in an image. We call R a region of the image if R is a
connected set.
 The boundary (also called border or contour) of a region R is the set of pixels in
the region that have one or more neighbors that are not in R.

 Distance Measures
Given pixels p, q and z with coordinates (x, y), (s, t), (u, v) respectively, the distance
function D has following properties:
o D(p, q) ≥ 0 [D(p, q) = 0, iff p = q]
o D(p, q) = D(q, p)
o D(p, z) ≤ D(p, q) + D(q, z)

 The following are the different Distance measures:


a. Euclidean Distance:

De(p, q) = [(x − s)^2 + (y − t)^2]^(1/2)

- The pixels having a distance less than or equal to some value r from (x, y) are
the points contained in a disk of radius r centered at (x, y).

b. City Block Distance:

D4(p, q) = |x-s| + |y-t|

- The pixels having a D4 distance from (x, y) less than or equal to some value r
form a diamond centered at (x, y). For example, the pixels with D4 distance ≤ 2
from (x, y) (the center point) form the following contours of constant distance:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

- The pixels with D4 = 1 are the 4-neighbors of (x, y).

c. Chess Board Distance:

D8(p, q) = max(|x-s|, |y-t|)

- The pixels with D8 distance from (x, y) less than or equal to some value r form a
square centered at (x, y). For example, the pixels with D8 distance ≤ 2 from (x, y)
(the center point) form the following contours of constant distance:

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2

- The pixels with D8 = 1 are the 8-neighbors of (x, y).
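A small sketch comparing the three measures for two assumed pixels p = (x, y) and q = (s, t):

x = 2; y = 3; s = 6; t = 1;        % assumed coordinates
De = sqrt((x-s)^2 + (y-t)^2)       % Euclidean distance: 4.4721
D4 = abs(x-s) + abs(y-t)           % city block distance: 6
D8 = max(abs(x-s), abs(y-t))       % chessboard distance: 4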

2. Practical

Example1: Image rotate


%ex1.m
clear all;
I = imread('1.jpg');
J = imrotate(I,35); %rotates an image 35° counterclockwise
subplot(121),imshow(I),title('Original');
subplot(122),imshow(J),title('Rotated by 35');

Output:

Example2: Cropping an Image
%ex2.m
clear all;
I = imread('1.jpg');
J = imrotate(I,35);
J2= imcrop(J);
subplot(131),imshow(I),title('Original');
subplot(132),imshow(J),title('Rotated by 35');
subplot(133),imshow(J2),title('Rotated by 35 cropped');

Output:

3. Homework:
1. Consider the image segment shown.
Let V= {1, 2} and compute the lengths of the shortest 4-, 8-, and m-path between p and q. If a
particular path does not exist between these two points, explain why.

2. Write a Matlab code to rotate an image by 180° (do not use the imrotate function, you should
implement it)
Hint (Use end function)

3. Given the two images below, perform an enhancement operation to get Fig (1).
Hint (Use a logical operation)

Fig (1)

Islamic University Of Gaza Digital Image Processing
Faculty of Engineering Discussion
Computer Department Chapter 3
Eng. Ahmed M. Ayash Date: 03/03/2013

Chapter 3
Image Enhancement in the Spatial Domain
Part 1

1. Theoretical

 Enhancement
 The principal objective of enhancement is to process an image so that the result is
more suitable than the original image for a specific application.

 Image enhancement approaches fall into two broad categories:


o Spatial domain methods: Techniques are based on direct manipulation of pixels in
an image.
o Frequency domain methods: Techniques are based on modifying the Fourier
transform of an image

 The general form of the enhancement approach is:

s = T(r)

where T is a transformation that maps a pixel value (intensity) r into a pixel value s.

 Gray Level Transformations


 Three basic types of functions used frequently for image enhancement
o Linear (negative and identity transformation)
o Logarithmic (log and inverse-log transformation)
o Power-law (nth power and nth root transformation)

Linear
 Identity: s = r, no transformation

 Image Negatives: assume the gray-level range is [0, L−1]:

s = L − 1 − r

(e.g., for L = 5 the gray levels are 0, 1, 2, 3, 4)

Logarithmic
 Log Transformations

s = c log(1 + r)

- where c is a constant and it is assumed that r ≥ 0.

- Stretches low gray levels and compresses high gray levels.
- Maps a narrow range of dark input values into a wider range of output values.

 The opposite of this applies for inverse-log transform.

Power-law
 Power-Law Transformations:

s = c · r^γ

 where c and γ are positive constants

 γ < 1 → behaves like the log transformation.

 γ > 1 → behaves like the inverse-log transformation.
 c = γ = 1 → identity function.
 This transformation function is also called gamma correction.

2. Practical
Example1: Image Negatives
%img_neg.m
close all;
clear all;
I=imread('ch3.jpg');
I=im2double(I);                  % scale intensities to [0, 1]

I1=zeros(size(I,1),size(I,2));   % preallocate the output image
for i=1:size(I,1)
    for j=1:size(I,2)
        I1(i,j)=1-I(i,j);        % negative: s = 1 - r (here L-1 = 1 for double images)
    end
end
subplot(121),imshow(I),title('original image')
subplot(122),imshow(I1),title('enhanced image (image negative)')

Output:

Example2: Power-Law Transformations


%power_tr.m
close all;
clear all;
clc;
I=imread('ch3.tif');
I=im2double(I);
c=input('Enter the value of the constant c=');
g=input('Enter the value of gamma g=');
for i=1:size(I,1)
for j=1:size(I,2)
I3(i,j)=c*I(i,j)^g;
end
end
subplot(121), imshow(I),title('original image')
subplot(122), imshow(I3),title('power-low transformation')

Output:

Enter the value of the constant c=1


Enter the value of gamma g=0.2 % for gamma < 1 you get a brighter image

Enter the value of the constant c=1


Enter the value of gamma g=1 % for gamma = 1 the result is the same image

Enter the value of the constant c=1
Enter the value of gamma g=5 % for gamma > 1 you get a darker image

3. Homework:
1. Repeat Example1 (Image Negatives) without converting the image to double, your output should
be the same as example1 output.

2. Write a Matlab code to apply Log Transformations function.

Islamic University Of Gaza Digital Image Processing
Faculty of Engineering Discussion
Computer Department Chapter 3
Eng. Ahmed M. Ayash Date: 17/03/2013

Chapter 3
Image Enhancement in the Spatial Domain
Part 2

1. Theoretical

 Histogram Processing
 Histogram of a digital image with gray levels in the range [0,L-1] is a discrete
function:

h(rk) = nk

Where rk is the kth gray level and nk is the number of pixels in the image having level rk.


It is a common practice to normalize the histogram by dividing each of its values by the
total number of pixels in the image, denoted by n.

Thus a normalized histogram is given by:

p(rk) = nk/n = nk/(MN),  k = 0, 1, …, L−1

 p(rk) gives an estimate of the probability of occurrence of gray level rk.
 The sum of all components of a normalized histogram is equal to 1.

 Histogram Equalization
 It is an enhancement technique, based on the probability density function (pdf) of the gray
levels of the input image, that gives an output image with uniformly distributed gray levels.

 The goal of histogram equalization is to spread out the contrast of a given image evenly
throughout the entire available dynamic range.

 In the histogram equalization technique, it is the probability density function (pdf) that is
being manipulated.

The pdf can be approximated using the probability based on the histogram as follows:

p(rk) = nk / (MN),  k = 0, 1, …, L−1

From this pdf, we can then obtain the cumulative distribution function (cdf) as follows:

cdf(rk) = Σ (j=0..k) p(rj)

where p(rk) is the probability of a pixel of intensity rk.

The output pixel from the histogram equalization operation is then equal to the cdf of
the image, or mathematically:

sk = cdf(rk)

To get the output pixel value, sk needs to be multiplied by (L − 1) and then rounded to the
nearest integer.

Example2:

Consider the following 4x4 matrix of a 3-bit image.

a) Find image Histogram.

Solution:

b) Find the PDF for this image.

Solution:

The pdf for this image can be computed simply by taking the histogram above and
dividing each value by the total number of pixels (16 in this example).

c) Find the CDF for this image.

Solution:

The cdf of this image matrix is computed as an accumulation of the above histogram.

To get the output pixel value s, the cdf needs to be multiplied by L − 1 = 7 and rounded to the
nearest integer. The output is:

 Histogram Matching (Histogram Specification)
Histogram matching is an extension of the histogram equalization technique. In histogram
equalization, what we are trying to achieve is an output histogram that follows the uniform
pdf. For histogram matching, however, we want the output histogram to follow a histogram
we specify. To achieve this, we first histogram-equalize the input image; the pdf of the
resulting equalized image is then matched to the pdf of the desired histogram.
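A minimal sketch of the matching step, assuming pdf_in and pdf_des are the input and desired pdfs over L gray levels (these names and inputs are assumptions of the sketch; compare with Example3 below):

% assumed inputs: pdf_in, pdf_des (1-by-L vectors, each summing to 1)
L = numel(pdf_in);
cdf_in  = cumsum(pdf_in);          % cdf of the (equalized) input image
cdf_des = cumsum(pdf_des);         % cdf of the desired histogram
map = zeros(1, L);
for r = 1:L
    % for each input level pick the desired level with the nearest cdf value
    [~, z] = min(abs(cdf_des - cdf_in(r)));
    map(r) = z - 1;                % back to a 0-based gray level
end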

Example3:
Consider the following 4x4 matrix of a 3-bit image and the desired histogram:

Find the Histogram Matching of this image.

Solution:

From this desired histogram, we come up with the desired cdf i.e. by accumulating the
probability of the desired histogram. This is shown in the table below:

r 0 1 2 3 4 5 6 7
1.PDF = p(rk) 0.1875 0.25 0.3125 0 0.0625 0.0625 0 0.125
2.CDF = p(sk) 0.1875 0.4375 0.75 0.75 0.8125 0.875 0.875 1
3.PDF = p(rrk) 0.125 0.1875 0.1875 0.1875 0.125 0.0625 0.0625 0.0625
4.CDF = p(zk) 0.125 0.3125 0.5 0.6875 0.8125 0.875 0.9375 1
s 0 2 3 3 4 5 5 7

 1 and 2 for the given image.


 3 and 4 for the desired histogram.

For p(s0) = 0.1875, the nearest value for p(zk) is 0.125, which is for z0. Hence, the output
pixel corresponding to r = 0 is s = 0. Then p(s1) = 0.4375, the nearest value for p(zk) is
0.5, which is for z2. Hence, the output pixel corresponding to r = 1 is s = 2.

The process is continued until we’ve done the last pixel. The output matrix then will be
as follows:

2. Practical
Example1: Image Histogram
%hist_ex.m
close all;
clear all;
i=imread('ch3_1.jpg');
I=rgb2gray(i);
I=im2double(I);
m=im2bw(I,1); % threshold level 1 gives an all-black binary image
subplot 121,imhist(I),title('Gray Image'),ylim('auto');
subplot 122,imhist(m),title('Binary Image'),ylim('auto');

Output:

Example2: Histogram Equalization


%histeq_ex.m
clc
close all
clear all
f = imread('pollen.tif');
g = histeq(f);
subplot 221,imshow(f),title('original image')
subplot 222,imhist(f),title('original Image Histogram'),ylim('auto')
subplot 223, imshow(g),title('Image after Histogram Equalization')
subplot 224, imhist(g),title('Histogram Equalization'),ylim('auto')

Output:

3. Homework (Enhancement Using Arithmetic Operations):


Assume you have an image of a human and you need to hide the features of the face only. Using
Matlab, develop a function that can be used to do this. (Consider '2.gif' in the attached images
as a mask and use it to hide the face in the '1.gif' image.)

Quiz Next Week


Islamic University Of Gaza Digital Image Processing
Faculty of Engineering Discussion
Computer Department Chapter 3
Eng. Ahmed M. Ayash Date: 24/03/2013

Chapter 3
Image Enhancement in the Spatial Domain
Part 3

1. Theoretical

 Spatial Filtering
The idea of spatial filters is to define a filter (mask) in the spatial domain, slide it over the
image to be processed, and apply some operation to the gray levels of the image pixels under
the mask. This process is repeated at each pixel in the input image and a new pixel is formed
in the output image at the same position as the pixel under the center of the mask.

There are several types of spatial filters:

I) Linear Spatial Filters:


The linear operations of interest in this section consist of multiplying each pixel in the
neighborhood by a corresponding coefficient of the mask and summing the results to obtain
a pixel in the output image.
For a mask of size m × n, we assume typically that m = 2a+1 and n = 2b+1, where a and b
are nonnegative integers. Thus, linear filtering is described by:

g(x,y) = Σ (s=−a..a) Σ (t=−b..b) w(s,t) · f(x+s, y+t)

Figure (1) illustrates the mechanics of spatial filtering. The result (or response), R, of
linear filtering with the filter mask at a point (x, y) in the image is the sum of products of
the mask coefficients with the image pixels under the mask: R = w1·z1 + w2·z2 + … + wmn·zmn.

Figure (1): The Mechanics of Spatial Filtering

Spatial averaging filters are examples of linear filters, used for blurring (smoothing)
images and for noise reduction. The output of the spatial averaging filter is simply the average
of the pixels contained in the neighborhood of the mask, as shown by the following equation:

g(x,y) = [ Σ (s=−a..a) Σ (t=−b..b) w(s,t) · f(x+s, y+t) ] / [ Σ (s=−a..a) Σ (t=−b..b) w(s,t) ]

The idea of the averaging filter is straightforward: it replaces each pixel value in an
image with the average value of its filter neighborhood.
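A minimal sketch of a 3x3 averaging filter applied by sliding the window over an assumed grayscale image (border pixels are left unchanged for simplicity; imfilter in the practical section handles borders properly):

f = im2double(imread('cameraman.tif'));  % assumed grayscale test image
g = f;                                   % borders kept unchanged
for x = 2:size(f,1)-1
    for y = 2:size(f,2)-1
        g(x,y) = mean(mean(f(x-1:x+1, y-1:y+1)));  % 3x3 neighborhood average
    end
end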

 3x3 Smoothing Linear Filters

The two standard 3x3 smoothing masks are the box filter, (1/9)·[1 1 1; 1 1 1; 1 1 1], and the
weighted-average filter, (1/16)·[1 2 1; 2 4 2; 1 2 1].

Example1:

II) Statistical-Order Filters (Nonlinear Filters):

Statistical-order filters also use pixel neighborhoods but do not explicitly use coefficients.
The most famous application is noise reduction by computing the median gray-level value
in the neighborhood of the mask, which is called median filtering. The median filter is
excellent at removing impulsive noise (called salt-and-pepper noise).

The median filter is based on the sliding neighborhood operation, but instead of simply
replacing the pixel value with the average in the neighborhood spanned by the filter, it
replaces the pixel with the median of those values.

Example2:

 Convolution and Correlation
Convolution:

Linear filtering of an image is accomplished through an operation called convolution.


Convolution is a neighborhood operation in which each output pixel is the weighted sum of
neighboring input pixels. The matrix of weights is called the convolution kernel, also known
as the filter. A convolution kernel is a correlation kernel that has been rotated 180 degrees.

For example, suppose the image is


A = [17 24 1 8 15
23 5 7 14 16
4 6 13 20 22
10 12 19 21 3
11 18 25 2 9]
And the convolution kernel is
h = [8 1 6
3 5 7
4 9 2]
The following figure shows how to compute the (2,4) output pixel using these steps:
1. Rotate the convolution kernel 180 degrees about its center element.
2. Slide the center element of the convolution kernel so that it lies on top of the (2,4)
element of A.
3. Multiply each weight in the rotated convolution kernel by the pixel of A underneath.
4. Sum the individual products from step 3.

Computing the (2,4) Output of Convolution

Hence the (2,4) output pixel is 2(1) + 9(8) + 4(15) + 7(7) + 5(14) + 3(16) + 6(13) + 1(20) + 8(22) = 575.

Correlation:

The operation called correlation is closely related to convolution. In correlation, the value of
an output pixel is also computed as a weighted sum of neighboring pixels. The difference is
that the matrix of weights, in this case called the correlation kernel, is not rotated during the
computation.
The following figure shows how to compute the (2,4) output pixel of the correlation of A,
assuming h is a correlation kernel instead of a convolution kernel, using these steps:

1. Slide the center element of the correlation kernel so that it lies on top of the
(2,4) element of A.
2. Multiply each weight in the correlation kernel by the pixel of A underneath.
3. Sum the individual products from step 2.

Computing the (2,4) Output of Correlation

The (2,4) output pixel from the correlation is 8(1) + 1(8) + 6(15) + 3(7) + 5(14) + 7(16) + 4(13) + 9(20) + 2(22) = 585.
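Both hand computations can be checked in MATLAB with the A and h given above (a small verification sketch; imfilter correlates by default and convolves with the 'conv' option):

A = [17 24 1 8 15; 23 5 7 14 16; 4 6 13 20 22; 10 12 19 21 3; 11 18 25 2 9];
h = [8 1 6; 3 5 7; 4 9 2];
C = imfilter(A, h, 'conv');  % convolution: C(2,4) is 575
R = imfilter(A, h);          % correlation (default): R(2,4) is 585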

2. Practical
Example1: Average Filter for blurring and noise reduction
%ex1.m
close all
clear all
clc
%%%%%averaging filter%%%%
%%%%I)for blurring and noise reduction:%%%
I=imread('1.tif');
imshow(I)
%fspecial('average',HSIZE) returns an averaging filter H of size HSIZE.
M3=fspecial('average',3);

M9=fspecial('average',9);
M15=fspecial('average',15);
M35=fspecial('average',35);
J3=imfilter(I,M3); % correlation is the default,
J9=imfilter(I,M9);
J15=imfilter(I,M15);
J35=imfilter(I,M35);
figure(2),
subplot(221),imshow(J3),title('Filtered by 3X3')
subplot(222),imshow(J9),title('Filtered by 9X9')
subplot(223),imshow(J15),title('Filtered by 15X15')
subplot(224),imshow(J35),title('Filtered by 35X35')

Output:
Original:

After averaging filter

Example2: Median Filter
%ex2.m
close all
clear all
clc
% 2)Median Filter:

I=imread('3.jpg');
In=imnoise(I,'salt & pepper',0.2);
Ic=medfilt2(In); % 3x3 Median Filter(default)

subplot(131),imshow(I),title('original')
subplot(132),imshow(In),title('Noisy')
subplot(133),imshow(Ic),title('Filtered')

Output:

Quiz Next Week

Islamic University Of Gaza Digital Image Processing
Faculty of Engineering Discussion
Computer Department Chapter 4
Eng. Ahmed M. Ayash Date: 31/03/2013

Chapter 4
Image Enhancement in the Frequency Domain
Part 1

1. Theoretical

 Fourier Transform
 The Fourier transform is a representation of an image as a sum of complex exponentials
of varying magnitudes, frequencies, and phases.

 Working with the Fourier transform on a computer usually involves a form of the
transform known as the discrete Fourier transform (DFT). There are two principal
reasons for using this form:
1) The input and output of the DFT are both discrete, which makes it convenient
for computer manipulation.
2) There is a fast algorithm for computing the DFT known as the fast Fourier
transform (FFT).

The DFT is usually defined for a discrete function f(x,y) that is nonzero only over the
finite region 0 ≤ x ≤ M-1 and 0 ≤ y ≤ N-1.
The general idea is that the image (f(x,y) of size M x N) will be represented in the
frequency domain (F(u,v)). The equation for the two-dimensional discrete Fourier
transform (DFT) is:

M 1 N 1
1
F (u, v) 
MN
  f ( x, y)e
x 0 y 0
 j 2 ( ux / M vy / N )

For u=0, 1, 2,……., M-1


For v=0, 1, 2,……., N-1

The concept behind the Fourier transform is that any waveform can be constructed
as a sum of sine and cosine waves of different frequencies. The exponential in the
above formula can be expanded into sines and cosines, with the variables u and v
determining these frequencies.
The inverse of the above discrete Fourier transform is given by the following equation:

M 1 N 1
f ( x, y )   F (u, v)e
u 0 v 0
j 2 ( ux / M vy / N )

For x=0, 1, 2,……., M-1


For y=0, 1, 2,……., N-1

Thus, if we have F(u,v), we can obtain the corresponding image (f(x,y)) using the inverse
discrete Fourier transform.

 In MATLAB we can compute DFT and inverse DFT using fft2 and ifft2 functions
respectively.

 The function fftshift is used to shift the zero-frequency component to the center of the spectrum.

How does the Discrete Fourier Transform relate to Spatial Domain Filtering?
The following convolution theorem shows an interesting relationship between the spatial
domain and frequency domain:

f(x,y) * h(x,y) ⇔ F(u,v) H(u,v)

And, conversely,

f(x,y) h(x,y) ⇔ F(u,v) * H(u,v)

The symbol "*" indicates convolution of the two functions. The important thing to extract out
of this is that the multiplication of two Fourier transforms corresponds to the convolution of the
associated functions in the spatial domain.
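A small numeric check of the theorem (a sketch with assumed test arrays; both transforms are zero-padded to the full convolution size so the circular convolution computed via the DFT matches the linear one):

f = magic(4); h = ones(3)/9;                  % assumed small test arrays
g1 = conv2(f, h);                             % spatial-domain convolution (6x6)
g2 = real(ifft2(fft2(f,6,6) .* fft2(h,6,6))); % product of zero-padded DFTs
max(abs(g1(:) - g2(:)))                       % ~0, up to roundoff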

 Filters in the Frequency Domain


Basic Steps of Filtering in the frequency domain:

The following summarize the basic steps in DFT Filtering

1. Obtain the Fourier transform:


F=fft2(f);

2. Generate a filter function, H
3. Multiply the transform by the filter:
G=H.*F;
4. Compute the inverse DFT:
g=ifft2(G);
5. Obtain the real part of the inverse FFT of g:
g2=real(g);

You can create filters directly in the frequency domain. There are three commonly discussed
filters in the frequency domain:

 Lowpass filters, sometimes known as smoothing filters.


 Highpass filters, sometimes known as sharpening filters.
 Bandpass filters.

 A lowpass filter attenuates high frequencies and retains low frequencies unchanged.
 A highpass filter blocks all frequencies smaller than D0 and leaves the others unchanged.
 Bandpass filters are a combination of both lowpass and highpass filters. They attenuate all
frequencies smaller than a frequency D0 and higher than a frequency D1, while the
frequencies between the two cut-offs remain in the resulting output image.

In Matlab, to get a lowpass filter we use this command:

H = fspecial('gaussian',HSIZE,SIGMA)

- Returns a Gaussian lowpass filter of size HSIZE with standard deviation SIGMA (positive).

- The default HSIZE is [3 3], the default SIGMA is 0.5.

In Matlab, to get a highpass filter we use this command:


H = fspecial('laplacian',ALPHA)

- Returns a 3-by-3 filter approximating the shape of the two-dimensional Laplacian operator.

- The parameter ALPHA controls the shape of the Laplacian and must be in the range
0.0 to 1.0; the default ALPHA is 0.2.

2. Practical
Example1: Apply FFT and IFFT.
%ex1.m
close all
clear
clc
%====================================
% 1) Displaying the Fourier Spectrum:
%====================================
I=imread('1.jpg');
I=im2double(I);
FI=fft2(I); %(DFT) get the frequency for the image
FI_S=abs(fftshift(FI)); % shift zero-frequency component to center of spectrum
I1=ifft2(FI);
I2=real(I1);
subplot(131),imshow(I),title('Original'),
subplot(132),imagesc(0.5*log(1+FI_S)),title('Fourier Spectrum'),axis off
subplot(133),imshow(I2),title('Reconstructed')
%imagesc: the data is scaled to use the full colormap.

Output:

Example2: Apply lowpass filter.
%ex2.m
close all
clear
clc
%=============================
% 2) Low-Pass Gaussian Filter:
%=============================
I=imread('1.jpg');
I=im2double(I);
FI=fft2(I); %1.Obtain the Fourier transform
LP=fspecial('gaussian',[11 11],1.3); %2.Generate a Low-Pass filter
FLP=fft2(LP,size(I,1),size(I,2)); %3. Filter padding
LP_OUT=FLP.*FI; %4.Multiply the transform by the filter
I_OUT_LP=ifft2(LP_OUT); %5.inverse DFT
I_OUT_LP=real(I_OUT_LP); %6.Obtain the real part(Output)

%%%%spectrum%%%%
FLP_S=abs(fftshift(FLP));%Filter spectrum
LP_OUT_S=abs(fftshift(LP_OUT));%output spectrum

subplot(221),imshow(I),title('Original'),
subplot(222),imagesc(0.5*log(1+FLP_S)),title('LowPass Filter Spectrum'),axis off
subplot(223),imshow(I_OUT_LP),title('LowPass Filtered Output')
subplot(224),imagesc(0.5*log(1+LP_OUT_S)),title('LowPass Output Spectrum'),axis off

Output:

3. Homework
Write a Matlab code to apply highpass Laplacian filter on 1.jpg figure.

Islamic University Of Gaza Digital Image Processing
Faculty of Engineering Discussion
Computer Department Chapter 5
Eng. Ahmed M. Ayash Date: 16/04/2013

Chapter 5
Image Restoration
Part 1

1. Theoretical

Image Restoration
 Image restoration attempts to restore images that have been degraded, by using
prior knowledge of the degradation phenomenon.
o model the degradation
 Identify the degradation process and attempt to reverse it.
 Similar to image enhancement, but more objective

Degradation/restoration process model

Degradation model
A degradation function and additive noise that operate on an input image f(x, y) to
produce a degraded image g(x, y).

 If H is a linear, position-invariant process:

o In the spatial domain: g(x,y) = h(x,y) * f(x,y) + η(x,y)

o In the equivalent frequency domain: G(u,v) = H(u,v) F(u,v) + N(u,v)
Order Statistics Filters


 Spatial filters that are based on ordering the pixel values that make up the
neighborhood operated on by the filter.

o Useful spatial filters include:
o Median filter
o Max and min filter
o Midpoint filter
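A minimal sketch of these filters using ordfilt2 from the Image Processing Toolbox (the file name is an assumption):

g = im2double(imread('cameraman.tif'));  % assumed grayscale test image
gmed = medfilt2(g, [3 3]);               % median filter
gmin = ordfilt2(g, 1, true(3));          % min filter (1st of the 9 sorted values)
gmax = ordfilt2(g, 9, true(3));          % max filter (9th of the 9)
gmid = (gmin + gmax)/2;                  % midpoint filter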

Example1:
A 4x4 grayscale image is given by

1) Filter the image with a 3x3 median filter, after zero-padding at the image borders.

2) Filter the image with a 3x3 median filter, after replicate-padding at the image borders

Adaptive Filters
The filters discussed so far are applied to an entire image without any regard for how
image characteristics vary from one point to another. The behavior of adaptive filters
changes depending on the characteristics of the image inside the filter region.
This approach often produces better results than linear filtering. The adaptive filter is
more selective than a comparable linear filter, preserving edges and other high-frequency
parts of an image.

Adaptive Filter example:


 Adaptive Median Filter



As an advanced method compared with standard median filtering, the Adaptive Median
Filter performs spatial processing to preserve detail and smooth non-impulsive noise. A
prime benefit of this adaptive approach is that repeated applications of the filter do not
erode away edges or other small structures in the image.

Adaptive median filtering was introduced as an improvement to standard median filtering:
as explained before, the median filter can remove noise, but at the same time it cannot
differentiate between fine detail and noise. The main idea of the Adaptive Median Filter is
therefore to perform spatial processing to determine which pixels in an image have been
affected by impulse noise, and to run the filter only on those pixels. The Adaptive Median
Filter classifies pixels as noise by comparing each pixel in the image to its surrounding
neighbor pixels. The size of the neighborhood is adjustable, as is the threshold for the
comparison. A pixel that differs from a majority of its neighbors, and that is not
structurally aligned with those pixels to which it is similar, is labeled as impulse noise.
These noise pixels are then replaced by the median value of the pixels in the neighborhood
that have passed the noise labeling test.

The standard median filter does not perform well when the impulse noise density is greater
than 0.2, while the adaptive median filter can better handle such noise.

Keep in mind that the output of the filter is a single value used to replace the value of the
pixel at (x,y), the particular point on which the window Sxy is centered at a given time.
Consider the following notation:

zmin = minimum gray level in Sxy


zmax = maximum gray level in Sxy
zmed = median of gray levels in Sxy
zxy = gray level at coordinates (x, y)
Smax = maximum allowed size of Sxy

The adaptive median filtering algorithm works in two levels, denoted level A and level
B, as follows:
 Level A:
A1 = zmed – zmin
A2 = zmed – zmax
If A1 > 0 and A2 < 0, Go to level B
Else increase the window size
If window size ≤ Smax repeat level A
Else output zxy

 Level B:
B1 = zxy – zmin
B2 = zxy – zmax
If B1 > 0 and B2 < 0, output zxy
Else output zmed

► The key to understanding the algorithm is to remember that the adaptive median
filter has three purposes:
 Remove salt-and-pepper (impulse) noise.
 Provide smoothing of other noise.
 Reduce distortion.

With Another Expression:


 Level A:
If zmin < zmed < zmax, go to level B
Else increase the window size
If window size ≤ Smax, repeat level A
Else output zxy

 Level B:
If zmin < zxy < zmax, output zxy (→ do not filter)
Else output zmed (→ filter by replacing the pixel with zmed)
● Explanation
Level A: IF Zmin < Zmed < Zmax, then
• Zmed is not an impulse
(1) Go to level B to test if Zxy is an impulse...
ELSE
• Zmed is an impulse
(2) The size of the window is increased and
(3) Level A is repeated until...
(a) Zmed is not an impulse and go to level B or
(b) Smax reached: output is Zxy

Level B: IF Zmin < Zxy < Zmax, then


• Zxy is not an impulse
(1) Output is Zxy (distortion reduced)
ELSE
• Either Zxy = Zmin or Zxy = Zmax
(2) Output is Zmed (standard median filter)
• Zmed is not an impulse (from level A)

Example2: Apply 3x3 adaptive median filter on pixel (2,2) , with maximum allowed
size of 3x3.
4 0 0
5 7 7
3 6 0
Solution:
Zxy = 7 , Zmed = 4, Zmin = 0, Zmax = 7

Level A
Test if Zmin < Zmed < Zmax  True, Go to level B

Level B
Test if Zmin < Zxy < Zmax  False, then output = Zmed = 4

2. Practical
Example1: Median Filter and Adaptive Median Filter
%%ex1.m%%%
clc;
clear all;
close all;
I=imread('Lab9_1.jpg');
In=imnoise(I,'salt & pepper',0.25);
Im=medfilt2(In,[7 7]); %7*7 median filter
Iam = adpmedian(In, 7); % adpmedian is from the DIPUM toolbox, not built into MATLAB

subplot(221),imshow(I),title('original image');
subplot(222),imshow(In),title('image corrupted by salt & pepper');
subplot(223),imshow(Im),title('filtered image by median filter');
subplot(224),imshow(Iam),title('filtered image by adaptive median filter');

Output:

Quiz Next Week

Islamic University Of Gaza Digital Image Processing
Faculty of Engineering Discussion
Computer Department Chapter 5
Eng. Ahmed M. Ayash Date: 21/04/2013

Chapter 5
Image Restoration
Part 2

1. Theoretical

Bandpass Filter
A bandpass filter attenuates very low and very high frequencies, but retains a middle range band
of frequencies. Bandpass filtering can be used to enhance edges (suppressing low
frequencies) while reducing the noise at the same time (attenuating high frequencies). We
obtain the filter function of a bandpass by multiplying the filter functions of a low pass
and of a high pass in the frequency domain, where the cut-off frequency of the low pass is
higher than that of the high pass.
Bandpass filtering is attractive but there is always a trade-off between blurring and noise:
lowpass reduces noise but accentuates blurring; high pass reduces blurring but accentuates
noise.

Ideal Bandpass Filter


The ideal bandpass filter passes only frequencies within the pass band. An ideal
bandpass filter is defined as follows:

H(u,v) = 1 if DH ≤ D(u,v) ≤ DL, and H(u,v) = 0 otherwise

where DH and DL are the cut-off frequencies of the highpass and lowpass filters respectively
(the lowpass cut-off DL is the higher of the two) and D(u,v) is the distance from the center.

Butterworth Bandpass Filter


This filter can be derived mathematically by multiplying the transfer functions of a low
and a high pass filter, where the lowpass filter has the higher cut-off frequency:

H(u,v) = [1 / (1 + (D(u,v)/DL)^(2n))] · [1 − 1 / (1 + (D(u,v)/DH)^(2n))]

where DH and DL are the cut-off frequencies of the highpass and lowpass filters respectively;
n is the order of the filter and D(u,v) is the distance from the center.

Gaussian Bandpass Filter


A Gaussian bandpass filter is defined as follows:

H(u,v) = e^(−D(u,v)² / (2·DL²)) · [1 − e^(−D(u,v)² / (2·DH²))]

where DH and DL are the cut-off frequencies of the highpass and lowpass filters respectively
and D(u,v) is the distance from the center.

BandReject Filter
 Removing periodic noise from an image involves removing a particular range of
frequencies from that image.
 Performs the opposite operation of a bandpass filter: Hbr(u,v) = 1 − Hbp(u,v).

2. Practical
Example1: Bandpass Filter
close all
clear
clc
a=imread('1.tif');
[M,N]=size(a);
a=im2double(a);
F=fft2(a);
% Set up range of variables.
u = 0:(M-1);
v = 0:(N-1);

% Compute the indices for use in meshgrid


idx = find(u > M/2);
u(idx) = u(idx) - M;
idy = find(v > N/2);
v(idy) = v(idy) - N;

% Set up the meshgrid arrays needed for
% computing the required distances.
[V, U] = meshgrid(v, u); % note the (v, u) order so V and U are M-by-N, matching F

% Compute the distances D(U, V).


D = sqrt(U.^2 + V.^2);

disp('Band PASS FILTERING IN FREQUENCY DOMAIN');


disp('PRESS 1 FOR IDEAL BPF');
disp('PRESS 2 FOR BUTTERWORTH BPF');
disp('PRESS 3 FOR GAUSSIAN BPF');

type=input('Enter the filter type==>');


D0=input('Enter the cutoff d0 distance==>');
D1=input('Enter the cutoff d1 distance==>');

% Begin filter computations.


switch type
case 1
H = double(D <= D1 & D>=D0);
case 2
n=input('Enter the filter ORDER==>');
Hl = 1./(1 + (D./D1).^(2*n));
Hh = 1-(1./(1 + (D./D0).^(2*n)));

H=Hl.*Hh;
case 3
Hl = exp(-(D.^2)./(2*(D1^2)));
Hh = 1 - (exp(-(D.^2)./(2*(D0^2))));
H=Hl.*Hh;
otherwise
error('Unknown filter type.')
end

G=H.*F;
G=real(ifft2(G));
ff=abs(fftshift(H));
subplot(131)
imshow(a)
title('original image')
subplot(132),imshow(ff)

switch type

case 1
title('IDEAL BPF Image')
case 2
title('BUTTERWORTH BPF Image')
case 3
title('GAUSSIAN BPF Image')
end
subplot(133),imshow(G)

switch type
case 1
title('IDEAL BPF Filtered Image')
case 2
title('BUTTERWORTH BPF Filtered Image')
case 3
title('GAUSSIAN BPF Filtered Image')
end
figure, mesh(ff),axis off,grid off

The output:
Band PASS FILTERING IN FREQUENCY DOMAIN
PRESS 1 FOR IDEAL BPF
PRESS 2 FOR BUTTERWORTH BPF
PRESS 3 FOR GAUSSIAN BPF
Enter the filter type==>1
Enter the cutoff d0 distance==>30
Enter the cutoff d1 distance==>100

Band PASS FILTERING IN FREQUENCY DOMAIN
PRESS 1 FOR IDEAL BPF
PRESS 2 FOR BUTTERWORTH BPF
PRESS 3 FOR GAUSSIAN BPF
Enter the filter type==>2
Enter the cutoff d0 distance==>30
Enter the cutoff d1 distance==>120
Enter the filter ORDER==>2

Band PASS FILTERING IN FREQUENCY DOMAIN
PRESS 1 FOR IDEAL BPF
PRESS 2 FOR BUTTERWORTH BPF
PRESS 3 FOR GAUSSIAN BPF
Enter the filter type==>3
Enter the cutoff d0 distance==>30
Enter the cutoff d1 distance==>100

Example2: BandReject Filter


close all
clear
clc
a=imread('1.tif');
[M,N]=size(a);
a=im2double(a);
F=fft2(a);
% Set up range of variables.
u = 0:(M-1);
v = 0:(N-1);

% Compute the indices for use in meshgrid


idx = find(u > M/2);
u(idx) = u(idx) - M;
idy = find(v > N/2);
v(idy) = v(idy) - N;

% Set up the meshgrid arrays needed for
% computing the required distances.
[V, U] = meshgrid(v, u); % note the (v, u) order so V and U are M-by-N, matching F

% Compute the distances D(U, V).


D = sqrt(U.^2 + V.^2);

disp('Band Reject FILTERING IN FREQUENCY DOMAIN');


disp('PRESS 1 FOR IDEAL BRF');
disp('PRESS 2 FOR BUTTERWORTH BRF');
disp('PRESS 3 FOR GAUSSIAN BRF');

type=input('Enter the filter type==>');


D0=input('Enter the cutoff d0 distance==>');
D1=input('Enter the cutoff d1 distance==>');

% Begin filter computations.


switch type
case 1
H = double(D <= D1 & D>=D0);
H=1-H;
case 2
n=input('Enter the filter ORDER==>');
Hl = 1./(1 + (D./D1).^(2*n));
Hh = 1-(1./(1 + (D./D0).^(2*n)));
H=Hl.*Hh;
H=1-H;
case 3
Hl = exp(-(D.^2)./(2*(D1^2)));
Hh = 1 - (exp(-(D.^2)./(2*(D0^2))));
H=Hl.*Hh;
H=1-H;
otherwise
error('Unknown filter type.')
end

G=H.*F;
G=real(ifft2(G));
ff=abs(fftshift(H));
subplot(131)
imshow(a)
title('original image')
subplot(132),imshow(ff)

switch type

case 1
title('IDEAL BRF Image')
case 2
title('BUTTERWORTH BRF Image')
case 3
title('GAUSSIAN BRF Image')
end
subplot(133),imshow(G)
switch type
case 1
title('IDEAL BRF Filtered Image')
case 2
title('BUTTERWORTH BRF Filtered Image')
case 3
title('GAUSSIAN BRF Filtered Image')
end
figure, mesh(ff),axis off,grid off

The output:
Band Reject FILTERING IN FREQUENCY DOMAIN
PRESS 1 FOR IDEAL BRF
PRESS 2 FOR BUTTERWORTH BRF
PRESS 3 FOR GAUSSIAN BRF
Enter the filter type==>1
Enter the cutoff d0 distance==>30
Enter the cutoff d1 distance==>100

Band Reject FILTERING IN FREQUENCY DOMAIN
PRESS 1 FOR IDEAL BRF
PRESS 2 FOR BUTTERWORTH BRF
PRESS 3 FOR GAUSSIAN BRF
Enter the filter type==>2
Enter the cutoff d0 distance==>30
Enter the cutoff d1 distance==>120
Enter the filter ORDER==>2

Band Reject FILTERING IN FREQUENCY DOMAIN


PRESS 1 FOR IDEAL BRF
PRESS 2 FOR BUTTERWORTH BRF
PRESS 3 FOR GAUSSIAN BRF
Enter the filter type==>3
Enter the cutoff d0 distance==>30
Enter the cutoff d1 distance==>100

Digital Image processing Lab

Islamic University – Gaza


Engineering Faculty
Department of Computer Engineering
2013
EELE 5110: Digital Image processing Lab
Eng. Ahmed M. Ayash

Lab # 10

Image Segmentation

April 27, 2013


1. Objectives
 To understand image segmentation.
 To understand object detection.
 Implement a Cell Detection Using Image Segmentation

2. Theory
Image segmentation
Segmentation subdivides an image into its constituent regions or objects that have similar
features (intensity, histogram, mean, variance, energy, texture, etc.) according to a set
of predefined criteria.
Most segmentation algorithms we will consider are based on one of two basic properties
of intensity values:

Discontinuity
The strategy is to partition an image based on abrupt changes in intensity
 Detection of gray level discontinuities:
– Point detection
– Line detection
– Edge detection
• Gradient operators
• LoG: Laplacian of Gaussian

Similarity
The strategy is to partition an image into regions that are similar according to a set of
predefined criteria.

In our Lab we will focus on Detection of gray level discontinuities.

Goal of Image analysis
• Extracting information from an image
– Step1: segment the image ---> objects or regions
– Step2: describe and represent the segmented regions in a form suitable for
computer processing
– Step3: image recognition and interpretation

2.1 Point, Line and Edge Detection


The most common way to look for discontinuities is to run a mask through the image.
The response R of the mask (spatial filter) at any point in the image is given by:

R = w1·z1 + w2·z2 + … + w9·z9 = Σ (i=1..9) wi·zi   (for a 3x3 mask)

where zi is the intensity of the pixel associated with mask coefficient wi.

2.1.1 Point Detection


The detection of isolate points embedded in constant or nearly constant areas is detected
as:

Fig1: A mask for point detection

where T is a nonnegative threshold. Point detection is implemented in matlab using
function imfilter. If T is given, the following command implements the point detection
approach:
g = abs(imfilter(double(f),w)) >= T;

where f is the input image, w is an appropriate point-detection mask, and g is the


resulting image.

Example1:
%%example1.m
clc;
clear all;
close all;
f = imread('Lab10_1.jpg');
w=[-1,-1,-1;-1,8,-1;-1,-1,-1];
g=abs(imfilter(double(f),w));
T=max(g(:));
g=g>=T; %(T given threshold)

subplot(121),imshow(f),title('original image');
subplot(122),imshow(g),title('Result of point detection');

Output:

2.1.2 Line Detection
The next level of complexity is to try to detect lines. Masks for lines of different
directions:

Fig2: Line detection masks

Notes:
o These filter masks would respond more strongly to lines.
o Note that the coefficients in each mask sum to zero, indicating a zero response from
the masks in areas of constant gray level.
o If we are interested in detecting lines in a specified direction (e.g. vertical), we could
use the mask associated with that direction and threshold its output.
o If interested in lines of any directions run all 4 masks and select the highest response.
o These filters respond strongly to lines of one pixel thick of the designated direction
and correspond closest to the direction defined by the mask.

Let R1, R2, R3, R4 denotes the responses of the masks in Fig2, from left to right.
If at a certain point in the image, |Ri|>|Rj|, for all j ≠ i, that point is said to be more likely
associated with a line in the direction of mask i.
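A small sketch that runs all four masks and keeps, at each pixel, the strongest absolute response (the masks are the standard horizontal, +45°, vertical, and −45° detectors; the image is assumed to be grayscale):

f = im2double(imread('Lab10_2.jpg'));        % image used in Example2 below
w = cat(3, [-1 -1 -1; 2 2 2; -1 -1 -1], ...  % horizontal
           [-1 -1 2; -1 2 -1; 2 -1 -1], ...  % +45 degrees
           [-1 2 -1; -1 2 -1; -1 2 -1], ...  % vertical
           [2 -1 -1; -1 2 -1; -1 -1 2]);     % -45 degrees
R = zeros([size(f) 4]);
for i = 1:4
    R(:,:,i) = abs(imfilter(f, w(:,:,i)));   % |Ri| for each direction
end
[Rmax, dir] = max(R, [], 3);  % strongest response and its direction index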

Example2:
%%example2.m
clc;
clear all;
close all;
f = imread('Lab10_2.jpg');
w=[2,-1,-1;-1,2,-1;-1,-1,2]; %%-45
g1=imfilter(double(f),w);

g=abs(g1);
gtop=g1(1:120,1:120);
gtop=pixeldup(gtop,4); %% duplicates pixels of an image in both directions

gbot=g1(end-119:end,end-119:end);
gbot=pixeldup(gbot,4);

T=max(g(:));
gg=g>=T;

subplot(121),imshow(f),title('original image');
subplot(122),imshow(g1,[]),title('g1: Result of processing with -45 detector');
figure
subplot(121),imshow(gtop,[]),title('Zoomed view of the top left region of g1');
subplot(122),imshow(gbot,[]),title('Zoomed view of the bottom right region of g1');
figure
subplot(121),imshow(g,[]),title('Absolute value of g1');
subplot(122),imshow(gg),title('Final Result');

Output:

2.1.3 Edge Detection using Function Edge
What is an edge?
A set of connected pixels that lie on the boundary between two regions.

Edge detection is the most common approach for detecting meaningful discontinuities.
Edge is detected by:
 First‐order derivative (gradient operator)
 Second-order derivative (Laplacian operator)

The gradient of a 2-D function f(x,y) is defined as the vector:

∇f = [Gx; Gy] = [∂f/∂x; ∂f/∂y]

The magnitude of this vector is:

|∇f| = [Gx² + Gy²]^(1/2)

Second-order derivatives are generally computed using the Laplacian, which is defined as follows:

∇²f(x,y) = ∂²f(x,y)/∂x² + ∂²f(x,y)/∂y²
Two general criteria for edge detection:


 Find places where the first derivative of the intensity is greater in magnitude than a
specified threshold
 Find places where the second derivative of the intensity has a zero crossing.

Edge function
edge function Find edges in intensity image.

Syntax
[g, t] = edge(f, 'method', parameters)

where f is the input image and 'method' is one of the approaches listed in Table1 below.
The parameters you can supply differ depending on the method you specify. g is the binary
output image and t is the threshold value.
If you do not specify a method, edge uses the Sobel method.

Description
Edge takes an intensity or a binary image f as its input, and returns a binary image g of the same
size as f, with 1's where the function finds edges in f and 0's elsewhere.

Edge supports six different edge-finding methods (Table1):

Sobel: finds edges using the Sobel approximation to the derivative. It returns edges at those
points where the gradient of I is maximum.
Prewitt: finds edges using the Prewitt approximation to the derivative. It returns edges at
those points where the gradient of I is maximum.
Roberts: finds edges using the Roberts approximation to the derivative. It returns edges at
those points where the gradient of I is maximum.
Laplacian of Gaussian: finds edges by looking for zero crossings after filtering I with a
Laplacian of Gaussian filter.
Zero-cross: finds edges by looking for zero crossings after filtering I with a filter you specify.
Canny: finds edges by looking for local maxima of the gradient of I. The gradient is
calculated using the derivative of a Gaussian filter. The method uses two thresholds to
detect strong and weak edges, and includes the weak edges in the output only if they are
connected to strong edges. This method is therefore less likely than the others to be
"fooled" by noise, and more likely to detect true weak edges.

The most powerful edge-detection method that edge provides is the Canny method.

Example3:
%%example3.m
clc;
clear all;
close all;
f = imread('Lab10_3.gif');
[gv,t]=edge(f,'sobel','vertical'); %% automatic threshold
subplot(121),imshow(f),title('original image');
subplot(122),imshow(gv),title('Results using vertical Sobel with automatic t');

gv=edge(f,'sobel',0.15,'vertical'); %%specified threshold
gboth=edge(f,'sobel',0.15); %%ver and hor
figure
subplot(121),imshow(gv),title('Results using vertical Sobel with specified t');
subplot(122),imshow(gboth),title('Results using ver and hor Sobel with specified t');

w45=[-2 -1 0;-1 0 1;0 1 2];%%45


g45=imfilter(double(f),w45);
T=0.3*max(abs(g45(:)));
g45=g45>=T;
figure
subplot(121),imshow(g45),title('The edge oriented at 45 ');
ww45=[0 1 2;-1 0 1;-2 -1 0]; %%-45
gg45=imfilter(double(f),ww45);
T2=0.3*max(abs(gg45(:)));
gg45=gg45>=T2;
subplot(122),imshow(gg45),title('The edge oriented at -45 ');

Output:

2.2 Detecting a Cell Using Image Segmentation
An object can be easily detected in an image if the object has sufficient contrast from the
background. We use edge detection and basic morphology tools to detect a prostate
cancer cell.

Example4:
%%example4.m
clc;
clear all;
close all;
%%%%%Step 1: Read Image%%%%%
I = imread('Lab10_5.gif');
%%%%Step 2: Detect Entire Cell%%
BWs = edge(I, 'sobel', (graythresh(I) * .1)); %% graythresh computes a threshold
%%%%%%Step 3: Fill Gaps%%%%
%%% line, length 3 pixels, angle 90 degrees
se90 = strel('line', 3, 90); % strel creates a morphological structuring element
se0 = strel('line', 3, 0); %%% line, length 3, angle 0 degrees
%%%%%%%%Step 4: Dilate the Image%%%%%%%%%%%%%%%%%%
BWsdil = imdilate(BWs, [se90 se0]);
%%%%%%%%%%Step 5: Fill Interior Gaps%%%%
BWdfill = imfill(BWsdil, 'holes'); %% fill holes not reachable from the image edge
%%%%%%%Step 6: Remove Connected Objects on Border%%%
BWnobord = imclearborder(BWdfill, 4);
%%%%%%%%%Step 7: Smooth the Object%%%%%%
seD = strel('diamond',1); %% R=1 is the distance from the structuring element origin to the points of the diamond
BWfinal = imerode(BWnobord,seD);
BWfinal = imerode(BWfinal,seD);
%%%%%Step 8: Displaying the Segmented Object%%%%%

BWoutline = bwperim(BWfinal); %% find the perimeter of objects in the binary image

Segout = I;
Segout(BWoutline) = 255;

subplot(421),imshow(I),title('1.original image');
subplot(422),imshow(BWs),title('2.binary gradient mask using sobel');
subplot(423),imshow(BWsdil),title('3.dilated gradient mask');
subplot(424),imshow(BWdfill),title('4.binary image with filled holes');
subplot(425),imshow(BWnobord),title('5.cleared border image');
subplot(426),imshow(BWfinal),title('6.segmented image');
subplot(4,2,7:8),imshow(Segout),title('7.outlined original image');

Output:

Example4 Explanation:
Step 1: Read Image
Read in ' Lab10_5.gif', which is an image of a prostate cancer cell.

Step 2: Detect Entire Cell


Two cells are present in this image, but only one cell can be seen in its entirety. We detected this
cell. Another word for object detection is segmentation. The object to be segmented differs
greatly in contrast from the background image. Changes in contrast can be detected by operators
that calculate the gradient of an image. One way to calculate the gradient of an image is the
Sobel operator, which creates a binary mask using a user-specified threshold value. We
determine a threshold value using the graythresh function. To create the binary gradient mask,
we use the edge function.

Step 3: Fill Gaps


The binary gradient mask shows lines of high contrast in the image. These lines do not quite
delineate the outline of the object of interest. Compared to the original image, you can see gaps
in the lines surrounding the object in the gradient mask. These linear gaps will disappear if the
Sobel image is dilated using linear structuring elements, which we can create with the strel
function.
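As a quick check (a minimal sketch, assuming the Image Processing Toolbox is available), the neighborhoods of the two linear structuring elements can be inspected with the getnhood function:

se90 = strel('line', 3, 90); % 3-pixel vertical line
se0 = strel('line', 3, 0); % 3-pixel horizontal line
getnhood(se90) % 3x1 logical column [1;1;1]
getnhood(se0) % 1x3 logical row [1 1 1]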

Step 4: Dilate the Image


The binary gradient mask is dilated using the vertical structuring element followed by the
horizontal structuring element. The imdilate function dilates the image.

Step 5: Fill Interior Gaps


The dilated gradient mask shows the outline of the cell quite nicely, but there are still holes in the
interior of the cell. To fill these holes we use the imfill function.

Step 6: Remove Connected Objects on Border


The cell of interest has been successfully segmented, but it is not the only object that has been
found. Any objects that are connected to the border of the image can be removed using the
imclearborder function. The connectivity in the imclearborder function was set to 4 to remove
diagonal connections.

Step 7: Smooth the Object


Finally, in order to make the segmented object look natural, we smooth the object by eroding the
image twice with a diamond structuring element. We create the diamond structuring element
using the strel function.

Step 8: Displaying the Segmented Object


An alternate method for displaying the segmented object would be to place an outline around the
segmented cell. The outline is created by the bwperim function.

4. Appendix

Pixeldup.m
function B = pixeldup(A, m, n)
%PIXELDUP Duplicates pixels of an image in both directions.
% B = PIXELDUP(A, M, N) duplicates each pixel of A, M times in the
% vertical direction and N times in the horizontal direction.
% Parameters M and N must be integers. If N is not included, it
% defaults to M.

% Check inputs.
if nargin < 2
error('At least two inputs are required.');
end
if nargin == 2
n = m;
end

% Generate a vector with elements 1:size(A, 1).


u = 1:size(A, 1);
% Duplicate each element of the vector m times.
m = round(m); % Protect against nonintegers.
u = u(ones(1, m), :);
u = u(:);

% Now repeat for the other direction.


v = 1:size(A, 2);
n = round(n);
v = v(ones(1, n), :);
v = v(:);
B = A(u, v);
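A small usage sketch (the variable names are illustrative): duplicating each pixel of a 2 x 2 matrix 2 times vertically and 3 times horizontally yields a 4 x 6 result.

A = magic(2); % 2 x 2 test matrix
B = pixeldup(A, 2); % 4 x 4: each pixel becomes a 2 x 2 block
C = pixeldup(A, 2, 3); % 4 x 6: 2 copies vertically, 3 horizontally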

Homework

Write a Matlab code to find the edges of the Lab10_4.jpg image using the Prewitt and
Canny methods.
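A minimal sketch of one possible solution (assuming Lab10_4.jpg is an RGB image, so it is converted to gray scale first):

f = rgb2gray(imread('Lab10_4.jpg'));
gp = edge(f, 'prewitt'); % Prewitt with an automatically chosen threshold
gc = edge(f, 'canny'); % Canny with automatically chosen thresholds
subplot(131), imshow(f), title('original image');
subplot(132), imshow(gp), title('Prewitt');
subplot(133), imshow(gc), title('Canny');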


Chapter#1 10/02/2013

Homework:
1. Write a Matlab code to read two images and display the result of addition and
subtraction of the two images.

Solution:

clear all;
clc;
close all;
I1 = imread('first.jpg'); I1 = imresize(I1,[256 256]);
I2 = imread('second.jpg'); I2 = imresize(I2,[256 256]);
%%% The two images must be the same size
Idiff = I1-I2; %%% negative values saturate to 0 for uint8 images
subplot(131),imshow(I1),title('First Image');
subplot(132),imshow(I2),title('Second Image');
subplot(133),imshow(Idiff),title('Diff.');

figure

Iadd = Idiff+I2; % Idiff is used instead of I1 to test the results


subplot(131),imshow(Idiff),title('Difference Image');
subplot(132),imshow(I2),title('Second Image');
subplot(133),imshow(Iadd),title('Add.');
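Note that uint8 arithmetic saturates: negative differences clip to 0 and sums above 255 clip to 255. The Image Processing Toolbox functions below behave the same way but make the intent explicit (a brief alternative sketch using the same images):

Iadd2 = imadd(I1, I2); % sums above 255 saturate to 255
Idiff2 = imsubtract(I1, I2); % negative values saturate to 0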


2. Write a Matlab code to implement im2double(img) function.

Solution:

a=imread('b.bmp'); % assumes a uint8 image
c=double(a)/255; % scale [0,255] to [0,1], as im2double does for uint8
imshow(c)
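The built-in im2double also handles other input classes. A slightly more general sketch (my_im2double is a hypothetical name; only the common cases are covered):

function c = my_im2double(a)
%MY_IM2DOUBLE Scale an integer image to double values in [0,1].
switch class(a)
    case 'uint8', c = double(a)/255;
    case 'uint16', c = double(a)/65535;
    otherwise, c = double(a); % double or logical: values already in [0,1]
end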


Chapter#2(Part 1) 17/02/2013

Homework:
1. Write a MATLAB code like example1 to reduce the size of an image using nearest interpolation.
Which gives better image quality, nearest or bilinear interpolation?
Solution:

% Reading the image and converting it to a gray-level image.


I=imread('Fig.jpg');
I=rgb2gray(I);

% Reducing the Size of I using nearest Interpolation


I128=imresize(I,0.5,'nearest'); imshow(I128),pause
I64=imresize(I,0.25,'nearest');close,imshow(I64),pause
I32=imresize(I,0.125,'nearest');close,imshow(I32),pause
I16=imresize(I,0.0625,'nearest');close,imshow(I16),pause

% Resizing the reduced images to the original size (256 x 256) to
% compare them:
I16=imresize(I16,16,'nearest');
I32=imresize(I32,8,'nearest');
I64=imresize(I64,4,'nearest');
I128=imresize(I128,2,'nearest');

close
figure
subplot(121),imshow(I),title('I')
subplot(122),imshow(I128),title('I128')
pause,close

figure
subplot(221),imshow(I),title('I')
subplot(222),imshow(I64),title('I64')
subplot(223),imshow(I32),title('I32')
subplot(224),imshow(I16),title('I16')
pause,close


 Nearest-neighbor interpolation is the fastest method, but it has the lowest quality.
So bilinear interpolation gives better quality than nearest-neighbor.
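A quick way to see the difference (a minimal sketch reusing the gray-level image I from the solution above):

In = imresize(imresize(I, 0.125, 'nearest'), 8, 'nearest');
Ib = imresize(imresize(I, 0.125, 'bilinear'), 8, 'bilinear');
subplot(121), imshow(In), title('nearest: blocky');
subplot(122), imshow(Ib), title('bilinear: smoother');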


2. Write a MATLAB code that reads a gray-scale image and generates the vertically flipped
image of the original image.
Solution:

close all;
clear all;
a=imread('Fig2.tif'); % gray-scale image
[r c]=size(a);
b=zeros(r,c,class(a)); % preallocate the output image
for i = 1:r
    for j = 1:c
        b(r+1-i,j)=a(i,j); % row i of a becomes row r+1-i of b
    end
end
subplot(1,2,1),imshow(a),title('Original Image') ;
subplot(1,2,2),imshow(b),title('Flipped Image');

OR (using end indexing)

close all;
clear all;
a=imread('Fig2.tif');
b=a(end:-1:1,:);
subplot(1,2,1),imshow(a),title('Original Image') ;
subplot(1,2,2),imshow(b),title('Flipped Image');
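Equivalently, the built-in function flipud performs the same vertical flip in one call:

b = flipud(a); % same result as a(end:-1:1,:)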


Chapter#2(Part 2) 24/02/2013

Homework:
1. Consider the image segment shown.
Let V={1, 2} and compute the lengths of the shortest 4-, 8-, and m-path between p and q. If a
particular path does not exist between these two points, explain why.

Solution:

 One possibility for the shortest 4-path when V = {1,2} is shown in Fig c;
its length is 6. It is easily verified that another 4-path of the same length exists
between p and q.
 One possibility for the shortest 8-path (it is not unique) is shown in Fig d;
its length is 4.
 The length of the shortest m-path (shown dashed) is 6. This path is not unique.

2. Write a Matlab code to rotate an image by 180° (do not use the imrotate function; implement
the rotation yourself).
Hint: use end indexing.

Solution:

%hw2.m
clear all;
clc;
close all;
I = imread('3.jpg');

%%%%% 180-degree rotation %%%%%
r180 = I(end:-1:1,end:-1:1,:); % reverse rows and columns; ':' keeps the color channels intact

r1=imrotate(I,180);
subplot(131),imshow(I),title('original');
subplot(132),imshow(r180),title('my r180');
subplot(133),imshow(r1),title('build r180');

Output:

3. Given the two images below, perform an enhancement operation to get Fig (1).
Hint (Use a logical operation)

Fig (1)

Solution:

%hw3.m
clear all;
clc;
close all;
FIG1 = imread('4.gif'); %binary image
FIG2 = imread('5.gif'); %binary image
RESULT = xor(FIG1,FIG2);

subplot(131),imshow(FIG1,[]),title('First')
subplot(132),imshow(FIG2,[]),title('Second')
subplot(133),imshow(RESULT,[]),title('X-ORing')


Output:


Chapter#3(Part 2) 17/03/2013

Homework (Enhancement Using Arithmetic Operations):


Assume you have an image of a person and you need to hide the features of the face only.
Using Matlab, develop a program that does this. (Consider '2.gif' as a mask for the
attached images and use it to hide the face in the '1.gif' image.)

Solution:

%hw.m
clear all;
clc;
close all;
FACE = imread('1.gif');
MASK = imread('2.gif');
%% USING ARITHMETIC OPERATIONS
DIFFERENCE = FACE - MASK; %% the result is dark on the face region
RESULT = DIFFERENCE + MASK; %% the mask covers the dark area only

subplot(221),imshow(FACE),title('original')
subplot(222),imshow(MASK),title('mask')
subplot(223),imshow(DIFFERENCE),title('difference')
subplot(224),imshow(RESULT),title('Result')

Output:

Islamic University Of Gaza Image Processing
Faculty of Engineering Quiz 1 (Chapter 3)
Computer Department Time: 10 minutes.
Eng. Ahmed M. Ayash Date: 24/03/2013

Student Name: …Solution…… ID: ………………….. Grade: ………………….

Question:
a. Consider the following 4x4 matrix of a 3-bit image, find histogram equalization of this
image.

Solution:

r 0 1 2 3 4 5 6 7
1.PDF = p(rk) 0 0 0.375 0.3125 0.25 0.0625 0 0
2.CDF = p(sk) 0 0 0.375 0.6875 0.9375 1 1 1
s 0 0 3 5 7 7 7 7
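The s row follows from the mapping s = round((L-1) × CDF(r)) with L = 8; for example,
s(2) = round(7 × 0.375) = round(2.625) = 3. A one-line MATLAB check (assuming cdf holds the CDF row above):

cdf = [0 0 0.375 0.6875 0.9375 1 1 1];
s = round(7 * cdf) % gives 0 0 3 5 7 7 7 7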

Equalized image:

3 5 5 3
7 3 7 5
5 3 5 7
3 7 3 7

b. Consider the following 4x4 matrix of a 3-bit image, find histogram matching of this
image using the following desired histogram.

Solution:

r 0 1 2 3 4 5 6 7
1.PDF = p(rk) 0 0 0.375 0.3125 0.25 0.0625 0 0
2.CDF = p(sk) 0 0 0.375 0.6875 0.9375 1 1 1
3.PDF = p(rk) 0.125 0.1875 0.1875 0.1875 0.125 0.0625 0.0625 0.0625
4.CDF = p(zk) 0.125 0.3125 0.5 0.6875 0.8125 0.875 0.9375 1
s 0 0 1 3 6 7 7 7
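Each input level is mapped to the desired level z whose CDF is closest to the input CDF; for example,
for r = 2 the input CDF is 0.375, which is closest to p(z1) = 0.3125, so s = 1. A small MATLAB check
(the variable names are illustrative):

cdf_in = [0 0 0.375 0.6875 0.9375 1 1 1];
cdf_z = [0.125 0.3125 0.5 0.6875 0.8125 0.875 0.9375 1];
s = zeros(1,8);
for k = 1:8
    [~, idx] = min(abs(cdf_z - cdf_in(k)));
    s(k) = idx - 1; % gray levels are 0-based
end
s % gives 0 0 1 3 6 7 7 7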

Matched image:

1 3 3 1
6 1 6 3
3 1 3 7
1 6 1 6

Islamic University Of Gaza Image Processing
Faculty of Engineering Quiz 2 (Chapter 3)
Computer Department Time: 10 minutes.
Eng. Ahmed M. Ayash Date: 31/03/2013

Student Name: …Solution… ID: ………………….. Grade: ………………….

Q1) The following figure shows a 3-bit image of size 5-by-5 (the pixel values are shown in the square).

Compute the following:

(a) The output of a 3 × 3 mean filter at (3,3)

(4+6+1+7+2+5+0+6+2)/9 = 3.67 ≈ 4

(b) The output of a 3 × 3 median filter at (3,3)

median(0,1,2,2,4,5,6,6,7) = 4

(c) The histogram of the whole image.

r 0 1 2 3 4 5 6 7

h 2 4 5 2 3 3 3 3

(d) The result of histogram equalization at the point (3,3). Show steps in obtaining your solution.

r 0 1 2 3 4 5 6 7

pdf 0.08 0.16 0.2 0.08 0.12 0.12 0.12 0.12

cdf 0.08 0.24 0.44 0.52 0.64 0.76 0.88 1

s 1 2 3 4 4 5 6 7

The pixel at (3,3) has value 2, so s = 7 × cdf(2) = 7 × 0.44 = 3.08 ≈ 3

Q2)
a) The following figure shows a 3-bit image of size 5-by-5 (the pixel values are shown in the square):

Compute the following:

(1) Apply the filter kernel given below to calculate the filter response at pixel location (3,3).

(4+12+1+14+8+10+0+12+2)/16 = 3.9375 ≈ 4

(2) The histogram of the whole image.

r 0 1 2 3 4 5 6 7

h 2 4 5 2 3 3 3 3

(3) The result of histogram equalization at the point (4,5). Show steps in obtaining your solution.

r 0 1 2 3 4 5 6 7

pdf 0.08 0.16 0.2 0.08 0.12 0.12 0.12 0.12

cdf 0.08 0.24 0.44 0.52 0.64 0.76 0.88 1

s 1 2 3 4 4 5 6 7

The pixel at (4,5) has value 1, so s = 7 × cdf(1) = 7 × 0.24 = 1.68 ≈ 2

b) The following figure shows an 8-bit image of size 5-by-5 (the pixel values are shown in the square).

Compute the following:

(1) The output of a 3 × 3 mean filter at (2,3)

(25+95+65+95+40+65+25+65+40)/9 = 57.2222 ≈ 57

(2) The output of a 3 × 3 median filter at (2,3)

median(25,25,40,40,65,65,65,95,95) = 65

Islamic University Of Gaza Image Processing
Faculty of Engineering Quiz 3 (Chapter 5)
Computer Department Time: 20 minutes.
Eng. Ahmed M. Ayash Date: 28/04/2013

Student Name: .Solution… ID: ………………….. Grade: ………………….

Q) The 7 x 7 input image shown below was filtered using a 3 x 3 adaptive
median filter with a maximum allowed window size of 5 x 5.

Input image (pixel A unknown):

3 3 4 3 3 3 0
3 0 0 0 0 0 3
3 4 0 0 4 4 3
4 5 7 7 0 0 3
3 3 6 0 0 7 0
3 0 A 3 3 5 3
3 4 3 3 0 0 4

Output image (only the marked pixels are given): Z at position (3,3), X at (4,3),
the value 1 at (4,4), and Y at (5,4).

A. What is the value of the pixel A in the input image?

B. What are the values of the pixels X, Y, and Z in the output image?

Solution:

Answer summary:

Pixel Value
A     1
X     4
Y     3
Z     3

 Find X: use the 3 x 3 filter

4 0 0
5 7 7
3 6 0

Zxy = 7 , Zmed = 4, Zmin = 0, Zmax = 7

Level A
Test if Zmin < Zmed < Zmax  True, Go to level B

Level B
Test if Zmin < Zxy < Zmax  False, then output = Zmed = 4  X=4

 Find Z: use 3 x 3 filter

0 0 0
4 0 0
5 7 7

Zxy = 0 , Zmed = 0, Zmin = 0, Zmax = 7

Level A
Test if Zmin < Zmed < Zmax  False, use 5 x 5 filter, and repeat level A

3 3 4 3 3
3 0 0 0 0
3 4 0 0 4
4 5 7 7 0
3 3 6 0 0

Zxy = 0 , Zmed = 3, Zmin = 0, Zmax = 7

Level A
Test if Zmin < Zmed < Zmax  True, Go to level B

Level B
Test if Zmin < Zxy < Zmax  False, then output = Zmed = 3  Z=3

<><><><><><><><><><><><><><><><><><><><><><><><><><><><>

 Find A: use the 5 x 5 filter (the given output at this location, where Zxy = 7, is 1)

0 0 0 0 0
4 0 0 4 4
5 7 7 0 0
3 6 0 0 7
0 A 3 3 5

Sorted values: (0,0,0,0,0,0,0,0,0,0,0,0), (3,3,3,4,4,4,5,5,6,7,7,7), plus the unknown A.
Zxy = 7 , Zmed = ?, Zmin = 0, Zmax = 7

First assumption: level A fails.
Level A
Test if Zmin < Zmed < Zmax  False, then output = Zxy = 7 ≠ 1, so this
assumption is wrong.

Second assumption: level A passes.
Level A
Test if Zmin < Zmed < Zmax  True, go to level B

Level B
Test if Zmin < Zxy < Zmax  False (Zxy = Zmax = 7), then output = Zmed
Since the given output is 1, Zmed = 1. The window holds 12 zeros and 12
values ≥ 3, so its median (the 13th sorted value) is A itself; hence A = 1.

A = 1

<><><><><><><><><><><><><><><><><><><><><><><><><><><><>

 Find Y: use 3 x 3 filter

7 7 0
6 0 0
1 3 3
Zxy = 0 , Zmed = 3, Zmin = 0, Zmax = 7

Level A
Test if Zmin < Zmed < Zmax  True, Go to level B

Level B
Test if Zmin < Zxy < Zmax  False, then output = Zmed = 3  Y=3
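For reference, a minimal MATLAB sketch of this adaptive median procedure (an illustrative
implementation, not toolbox code; adpmedian is a hypothetical name, border pixels are simply left
unchanged, and when the maximum window is reached without passing level A the median is returned,
which is one common convention):

function g = adpmedian(f, Smax)
%ADPMEDIAN Adaptive median filter; window grows from 3x3 up to Smax x Smax.
f = double(f);
[M, N] = size(f);
g = f; % pixels closer than Smax/2 to the border are left unchanged
r = (Smax - 1) / 2; % radius of the largest allowed window
for x = 1+r : M-r
    for y = 1+r : N-r
        for k = 1 : r % current window radius: 3x3, then 5x5, ...
            w = f(x-k:x+k, y-k:y+k);
            Zmin = min(w(:)); Zmax = max(w(:)); Zmed = median(w(:));
            if Zmin < Zmed && Zmed < Zmax % level A passed
                if Zmin < f(x,y) && f(x,y) < Zmax % level B
                    g(x,y) = f(x,y); % keep the pixel (not an impulse)
                else
                    g(x,y) = Zmed; % pixel looks like impulse noise
                end
                break; % this pixel is done
            elseif k == r
                g(x,y) = Zmed; % maximum window reached
            end
        end
    end
end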
