
EXPERIMENT No. 1
OBJECTIVE:
Program to generate a synthetic image of size 256x256 using the equation f(x,y) = A sin(ux + vy).
SOFTWARE REQUIRED:
Matlab 2009 and above
THEORY:
A digital image is a numeric representation of a two-dimensional image. An image can be represented as a two-dimensional function of the form f(x,y). The value of f at spatial coordinates (x,y) is a positive scalar quantity whose physical meaning is determined by the source of the image. When an image is generated from a physical process, its intensity value is proportional to the energy radiated by a physical source (e.g. EM waves). As a consequence, f(x,y) must be nonzero and finite:

0 < f(x,y) < ∞    (1)

The function f(x,y) may be characterized by two components:
a) the amount of source illumination incident on the scene being viewed;
b) the amount of illumination reflected by the objects in the scene.
These are called the illumination and reflectance components and are denoted by i(x,y) and r(x,y). The two functions combine as a product to form f(x,y):

f(x,y) = i(x,y) . r(x,y)    (2)

where

0 < i(x,y) < ∞    (3)
0 < r(x,y) < 1    (4)

Let f(s,t) represent a continuous image function of two variables, s and t. We convert this function into a digital image by sampling and quantization. Suppose the resulting array f(x,y) contains M rows and N columns, where x and y are discrete coordinates; therefore x = 0, 1, ..., M-1 and y = 0, 1, ..., N-1.
A digital image can then be represented as an M x N matrix whose elements are a_ij = f(x=i, y=j) = f(i,j).
For this experiment, the function is f(x,y) = A sin(ux + vy).

CODE:
clear; clc;
A = 2;                      % amplitude
u = 3; v = 4;               % spatial frequencies (radians per pixel)
f = zeros(256, 256);        % preallocate the image
for x = 1:256
    for y = 1:256
        f(x,y) = A*sin(u*x + v*y);
    end
end
imshow(f, []);              % scale the range [-A, A] into the display range
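The double loop can also be vectorized; a minimal sketch (assuming the same A, u and v as above):

[X, Y] = meshgrid(1:256, 1:256);   % X varies across columns, Y down rows
f2 = A*sin(u*Y + v*X);             % x in f(x,y) is the row index, so it pairs with Y
imshow(f2, []);                    % identical output, no explicit loops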

RESULT AND OBSERVATION:

EXPERIMENT No. 2
OBJECTIVE:
Program to reduce the spatial resolution of an image and observe the point at which the image becomes unrecognizable.
SOFTWARE REQUIRED:
Matlab 2009 and above
THEORY:
Spatial resolution is the number of pixels (dots) per unit distance; higher resolution means more image detail. Image resolution can be measured in various ways. Basically, resolution quantifies how close lines can be to each other and still be visibly resolved. Resolution units can be tied to physical sizes (e.g. lines per mm, lines per inch) or to the overall size of a picture. In short, spatial resolution comparisons are only meaningful between images of the same size: to judge which of two images is clearer, i.e. which has the higher spatial resolution, we must compare images of equal size.
EXAMPLE: consider a chart with alternating black and white vertical lines, each of width W units. The width of a line pair is 2W, so there are 1/(2W) line pairs per unit distance. If W = 0.1 mm, there are 5 line pairs (10 lines) per mm. Dots per unit distance is a measure of image resolution; the publishing industry uses dpi (dots per inch), a US standard.
Below is an illustration of how the same image might appear at different pixel resolutions, if the pixels were poorly rendered as sharp squares (normally a smooth image reconstruction from pixels would be preferred, but for illustrating pixels the sharp squares make the point better).

CODE:
clear; clc;
U = imread('C:\Users\Garg\Desktop\smile.jpg');
subplot(1,2,1);
imshow(U);
title('original image');
cd = imresize(U, 1/10);     % reduce the spatial resolution by a factor of 10
subplot(1,2,2); imshow(cd);
title('reduced image');
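A single fixed factor does not by itself reveal where recognizability is lost; one way to observe that point, sketched here under the assumption that the same image U is used, is to display a series of progressively stronger reductions:

figure;
factors = [2 4 8 16 32];                % illustrative reduction factors
for k = 1:numel(factors)
    R = imresize(U, 1/factors(k));      % reduce the spatial resolution
    subplot(1, numel(factors), k);
    imshow(R);
    title(['Reduced by 1/' num2str(factors(k))]);
end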
RESULT AND OBSERVATION:

EXPERIMENT No. 3
OBJECTIVE:
Program to plot the bit planes of grayscale image.
SOFTWARE REQUIRED:
Matlab 2009 and above
THEORY:
Pixels are digital numbers composed of bits. For example, a grayscale image is composed of 8-bit pixels, giving 256 levels. Instead of highlighting intensity-level ranges, we can highlight the contribution made to total image appearance by specific bits.
In terms of 8-bit bytes, plane 0 contains all the lowest-order bits of the pixels in the image and plane 7 contains all the highest-order bits.

Separating a digital image into its bit planes is useful for analyzing the relative importance of each bit of the image; it helps determine the adequacy of the number of bits used to quantize each pixel, which is useful for image compression.

In terms of bit-plane extraction for an 8-bit image, the binary image for bit plane 7 is obtained by processing the input image with a thresholding gray-level transformation that maps all levels between 0 and 127 to one level (e.g. 0) and all levels from 128 to 255 to another (e.g. 255).

CODE:
clear; clc;
U = imread('C:\Users\Garg\Desktop\smile.jpg');
cd = rgb2gray(U);
imshow(cd);
title('original image');
cd = double(cd);
c0 = mod(cd, 2);              % bit plane 0 (least significant)
c1 = mod(floor(cd/2), 2);     % bit plane 1
c2 = mod(floor(cd/4), 2);
c3 = mod(floor(cd/8), 2);
c4 = mod(floor(cd/16), 2);
c5 = mod(floor(cd/32), 2);
c6 = mod(floor(cd/64), 2);
c7 = mod(floor(cd/128), 2);   % bit plane 7 (most significant)
figure;
subplot(2,4,1); imshow(c0); title('Plane 0');
subplot(2,4,2); imshow(c1); title('Plane 1');
subplot(2,4,3); imshow(c2); title('Plane 2');
subplot(2,4,4); imshow(c3); title('Plane 3');
subplot(2,4,5); imshow(c4); title('Plane 4');
subplot(2,4,6); imshow(c5); title('Plane 5');
subplot(2,4,7); imshow(c6); title('Plane 6');
subplot(2,4,8); imshow(c7); title('Plane 7');
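To connect this with the compression remark in the theory, the image can be approximately rebuilt from only the most significant planes; a minimal sketch reusing the planes computed above:

% Reconstruct from the four most significant bit planes only
recon = 16*c4 + 32*c5 + 64*c6 + 128*c7;     % weights are 2^4 ... 2^7
figure, imshow(uint8(recon));
title('Reconstruction from bit planes 4-7');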

RESULT AND OBSERVATION:

EXPERIMENT No. 4
OBJECTIVE:
Program to perform histogram equalization of grayscale and color images.
SOFTWARE REQUIRED:
Matlab 2009 and above
THEORY:
Histograms
Given a grayscale image, its histogram is a graph indicating the number of times each gray level occurs in the image. We can infer a great deal about the appearance of an image from its histogram, as the following examples indicate:
- In a dark image, the gray levels (and hence the histogram) are clustered at the lower end.
- In a uniformly bright image, the gray levels are clustered at the upper end.
- In a well-contrasted image, the gray levels are well spread out over much of the range.
Histogram Processing
Intensity transformations based on histogram information can be used to produce a desired histogram:
- Histogram equalization: make the histogram approximately uniform.
- Histogram matching: make the histogram approximate a specified (desired) shape, as sketched below.
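The code below performs only equalization; as an illustration of matching, histeq also accepts a target histogram as a second argument (the flat target here is just an assumed example):

G = rgb2gray(imread('C:\Users\Garg\Desktop\smile2.jpg'));
hgram = ones(1, 256);        % assumed target shape: a flat 256-bin histogram
Gm = histeq(G, hgram);       % remap intensities so the histogram approximates hgram
figure, imshow(Gm); title('Histogram-matched image');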

CODE:

clear; clc;
I = imread('C:\Users\Garg\Desktop\smile2.jpg');
IER = histeq(I(:,:,1));      % equalize each colour channel independently
IEG = histeq(I(:,:,2));
IEB = histeq(I(:,:,3));
IE(:,:,1) = IER;
IE(:,:,2) = IEG;
IE(:,:,3) = IEB;
subplot(2,4,1); imshow(I);
title('Original image in rgb');
subplot(2,4,2); imhist(I(:,:,1));
title('Histogram of red');
subplot(2,4,3); imhist(I(:,:,2));
title('Histogram of green');
subplot(2,4,4); imhist(I(:,:,3));
title('Histogram of blue');
subplot(2,4,5); imshow(IE);
title('Equalized image in rgb');
subplot(2,4,6); imhist(IER);
title('Histogram of equalized image (r)');
subplot(2,4,7); imhist(IEG);
title('Histogram of equalized image (g)');
subplot(2,4,8); imhist(IEB);
title('Histogram of equalized image (b)');
J = rgb2gray(I);
JE = histeq(J);
figure; subplot(2,2,1); imshow(J);
title('Original image in grayscale');
subplot(2,2,2); imhist(J);
title('Histogram of original image');
subplot(2,2,3); imshow(JE);
title('Equalized image in grayscale');
subplot(2,2,4); imhist(JE);
title('Histogram of equalized image');
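Equalizing R, G and B independently, as above, can shift the hues of the image. A common alternative, sketched here as an aside rather than part of the prescribed experiment, is to equalize only the brightness channel in HSV space:

Ihsv = rgb2hsv(I);                    % I is the RGB image loaded above
Ihsv(:,:,3) = histeq(Ihsv(:,:,3));    % equalize only the value (brightness) channel
IE2 = hsv2rgb(Ihsv);
figure, imshow(IE2); title('Equalization on the V channel only');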

RESULT AND OBSERVATION:

EXPERIMENT No. 5
OBJECTIVE:
Write a program to load the mandrill image and apply the following mask to the loaded image:

SOFTWARE REQUIRED:
Matlab 2009 and above
THEORY:
A mask is a filter; the concept of masking is also known as spatial filtering. In this concept we deal with a filtering operation that is performed directly on the image.
The mask used here (the Laplacian that appears in the code below) is:

 0  -1   0
-1   4  -1
 0  -1   0

FILTERING
The process of filtering is also known as convolving a mask with an image; since this process is the same as convolution, filter masks are also known as convolution masks.

The general process of filtering with masks consists of moving the filter mask from point to point in an image. At each point (x,y) of the original image, the response of the filter is calculated by a predefined relationship. The filter values are predefined and standardized.
TYPES OF FILTERS
Broadly there are two types of filters: linear (smoothing) filters and frequency-domain filters.
Use of Filters:
Filters are applied to images for multiple purposes. The two most common uses are:
- blurring and noise reduction;
- edge detection and sharpening.
BLURRING AND NOISE REDUCTION:

Filters are most commonly used for blurring and for noise reduction. Blurring is used in preprocessing steps, such as
removal of small details from an image prior to large object extraction.
MASKS FOR BLURRING
The common masks for blurring are:
- the box filter
- the weighted average filter

In the process of blurring we reduce the edge content of an image and try to make the transitions between different pixel intensities as smooth as possible. Noise reduction is also possible with the help of blurring; both masks are sketched below.
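A minimal sketch of both blurring masks using fspecial (the 3x3 sizes and the Gaussian sigma are assumed values for illustration):

load mandrill X map
Xg = ind2gray(X, map);                    % grayscale version of the indexed image
hbox = fspecial('average', 3);            % 3x3 box filter: all weights equal to 1/9
hw   = fspecial('gaussian', 3, 0.5);      % a weighted-average mask (Gaussian weights)
figure;
subplot(1,2,1); imshow(imfilter(Xg, hbox)); title('Box filter');
subplot(1,2,2); imshow(imfilter(Xg, hw));   title('Weighted average filter');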
CODE:
load mandrill X map              % built-in indexed test image
figure
image(X)
colormap(map)
axis off
title('Mandrill');
Xg = ind2gray(X, map);           % indexed images must be converted before filtering
%%% h stores the desired mask
h = [0 -1 0; -1 4 -1; 0 -1 0];   % the Laplacian mask
a = imfilter(Xg, h);
figure, imshow(a, [])
title('Image after applying the Laplacian mask')

RESULT AND OBSERVATION:

EXPERIMENT No. 6
OBJECTIVE:
Program to create a box of size 100x100 in an image of size 512x512 and plot the Fourier transform.
SOFTWARE REQUIRED:
Matlab 2009 and above
THEORY:
The Fourier transform is a representation of an image as a sum of complex exponentials of varying magnitudes,
frequencies, and phases. The Fourier transform plays a critical role in a broad range of image processing applications,
including enhancement, analysis, restoration, and compression.
For an image f(m,n), the two-dimensional Fourier transform is

F(ω1,ω2) = Σ over all m Σ over all n of f(m,n) e^(-jω1m) e^(-jω2n)

The variables ω1 and ω2 are frequency variables; their units are radians per sample. F(ω1,ω2) is often called the frequency-domain representation of f(m,n). F(ω1,ω2) is a complex-valued function that is periodic in both ω1 and ω2 with period 2π. Because of this periodicity, usually only the range -π ≤ ω1, ω2 ≤ π is displayed. Note that F(0,0) is the sum of all the values of f(m,n). For this reason, F(0,0) is often called the constant component or DC component of the Fourier transform. (DC stands for direct current; it is an electrical engineering term that refers to a constant-voltage power source, as opposed to a power source whose voltage varies sinusoidally.)
The inverse of a transform is an operation that, when performed on a transformed image, produces the original image. The inverse two-dimensional Fourier transform is given by

f(m,n) = (1/(4π²)) ∫ from ω1 = -π to π ∫ from ω2 = -π to π of F(ω1,ω2) e^(jω1m) e^(jω2n) dω1 dω2

Roughly speaking, this equation means that f(m,n) can be represented as a sum of an infinite number of complex exponentials (sinusoids) with different frequencies. The magnitude and phase of the contribution at the frequencies (ω1,ω2) are given by F(ω1,ω2).
Visualizing the Fourier Transform
To illustrate, consider a function f(m,n) that equals 1 within a rectangular region and 0 everywhere else. To simplify the
diagram, f(m,n) is shown as a continuous function, even though the variables m and n are discrete.
(Figures: the rectangular function f(m,n) and the magnitude image of its Fourier transform.)

The peak at the center of the plot is F(0,0), which is the sum of all the values in f(m,n). The plot also shows that F(ω1,ω2) has more energy at high horizontal frequencies than at high vertical frequencies. This reflects the fact that horizontal cross sections of f(m,n) are narrow pulses, while vertical cross sections are broad pulses. Narrow pulses have more high-frequency content than broad pulses.
Discrete Fourier Transform

Working with the Fourier transform on a computer usually involves a form of the transform known as the discrete
Fourier transform (DFT). A discrete transform is a transform whose input and output values are discrete samples,
making it convenient for computer manipulation. There are two principal reasons for using this form of the transform:

- The input and output of the DFT are both discrete, which makes it convenient for computer manipulation.
- There is a fast algorithm for computing the DFT known as the fast Fourier transform (FFT).

The DFT is usually defined for a discrete function f(m,n) that is nonzero only over the finite region 0 ≤ m ≤ M-1 and 0 ≤ n ≤ N-1. The two-dimensional M-by-N DFT and inverse M-by-N DFT relationships are given by

F(p,q) = Σ from m=0 to M-1 Σ from n=0 to N-1 of f(m,n) e^(-j2πpm/M) e^(-j2πqn/N),   p = 0, 1, ..., M-1,  q = 0, 1, ..., N-1

and

f(m,n) = (1/(MN)) Σ from p=0 to M-1 Σ from q=0 to N-1 of F(p,q) e^(j2πpm/M) e^(j2πqn/N),   m = 0, 1, ..., M-1,  n = 0, 1, ..., N-1
The values F(p,q) are the DFT coefficients of f(m,n). The zero-frequency coefficient, F(0,0), is often called the "DC component." DC is an electrical engineering term that stands for direct current. (Note that matrix indices in MATLAB always start at 1 rather than 0; therefore, the matrix elements f(1,1) and F(1,1) correspond to the mathematical quantities f(0,0) and F(0,0), respectively.)
The MATLAB functions fft, fft2, and fftn implement the fast Fourier transform algorithm for computing the one-dimensional DFT, two-dimensional DFT, and N-dimensional DFT, respectively. The functions ifft, ifft2, and ifftn compute the inverse DFT.
Relationship to the Fourier Transform
The DFT coefficients F(p,q) are samples of the continuous Fourier transform F(ω1,ω2):

F(p,q) = F(ω1,ω2) evaluated at ω1 = 2πp/M and ω2 = 2πq/N,   p = 0, 1, ..., M-1,  q = 0, 1, ..., N-1

CODE:
I = imread('snowflakes.png');
I1 = imresize(I, [512 512]);
img = imshow(I1); title('Original grayscale image of size 512x512 with a box of size 100x100')
h = imrect(gca, [60 40 100 100]);    % draw the 100x100 box on the image
BW = createMask(h, img);             % binary mask of the box region
J = imcrop(I1, [60 40 100 100]);     % the 100x100 region itself
F = fft2(J);
F = fftshift(F);   % center the FFT
F = abs(F);        % get the magnitude
F = log(F+1);      % log for perceptual scaling; +1 since log(0) is undefined
F = mat2gray(F);   % scale the result between 0 and 1
figure, imshow(F, []); title('Plot of Fourier transform of the 100x100 box in the image')
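The objective can also be read as transforming a purely synthetic image; a minimal sketch of that interpretation (the box position is an assumption, placed at the centre):

f = zeros(512, 512);
f(207:306, 207:306) = 1;                 % 100x100 white box in a 512x512 image
Fm = log(1 + abs(fftshift(fft2(f))));    % centered, log-scaled magnitude spectrum
figure, imshow(mat2gray(Fm));
title('Fourier magnitude of a 100x100 box in a 512x512 image');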

RESULT AND OBSERVATION:

EXPERIMENT No. 7
OBJECTIVE:
Program to demonstrate the effect of various noise models on a grayscale image, e.g. Gaussian and salt & pepper noise.
SOFTWARE REQUIRED:
Matlab 2009 and above
THEORY:
Gaussian Noise - Principal sources of Gaussian noise in digital images arise during acquisition (e.g. sensor noise caused by poor illumination and/or high temperature) and/or transmission (e.g. electronic circuit noise). A typical model of image noise is Gaussian, additive, independent at each pixel, and independent of the signal intensity, caused primarily by Johnson-Nyquist noise (thermal noise), including that which comes from the reset noise of capacitors. Amplifier noise is a major part of the "read noise" of an image sensor, that is, of the constant noise level in dark areas of the image.[4] In colour cameras, where more amplification is used in the blue colour channel than in the green or red channel, there can be more noise in the blue channel.

The Gaussian noise PDF is

p(z) = (1/(σ√(2π))) e^(-(z-μ)²/(2σ²))

where z is the gray level, μ is the mean (average) value of z, and σ is the standard deviation of z.

Salt and Pepper Noise- Fat-tail distributed or "impulsive" noise is sometimes called salt-and-pepper noise or spike
noise. An image containing salt-and-pepper noise will have dark pixels in bright regions and bright pixels in dark
regions. This type of noise can be caused by analog-to-digital converter errors, bit errors in transmission, etc. It can be
mostly eliminated by using dark frame subtraction and interpolating around dark/bright pixels.

CODE:
clear; clc;
I = imread('C:\Users\Garg\Desktop\smile2.jpg');
J1 = imnoise(I, 'gaussian', 0, .02);   % zero-mean Gaussian noise, variance 0.02
J2 = imnoise(I, 'salt & pepper');      % default noise density 0.05
imshow(I);
title('original image');
figure; subplot(1,3,1); imshow(J1);
title('Gaussian');
subplot(1,3,2); imshow(J2);
title('Salt & pepper');
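As the theory notes, salt-and-pepper noise is largely removable by interpolating around the corrupted pixels; a sketch using a 3x3 median filter (the filter choice is an assumption, applied to the grayscale version of J2, which assumes the image is RGB) fills the third subplot:

K = medfilt2(rgb2gray(J2), [3 3]);     % 3x3 median filter on the noisy image
subplot(1,3,3); imshow(K);
title('Salt & pepper after 3x3 median');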

RESULT AND OBSERVATION:

EXPERIMENT No. 8
OBJECTIVE:
Program to perform removal of periodic noise in an image.

SOFTWARE REQUIRED:
Matlab 2009 and above
THEORY:
Periodic noise in images is typically caused by electrical and/or mechanical systems, such as mechanical jitter
(vibration) or electrical interference in the system during image acquisition. It appears in the frequency domain as
impulses corresponding to sinusoidal interference. Removing periodic noise from an image involves removing a
particular range of frequencies from that image.
It can be removed with band-reject and notch filters. An ideal band-reject filter of width W centered at radius D0 from the origin of the (centered) spectrum is given by H(u,v) = 0 for D0 - W/2 ≤ D(u,v) ≤ D0 + W/2, and H(u,v) = 1 otherwise, where D(u,v) is the distance from the center of the frequency rectangle; a sketch follows.
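A sketch of that ideal band-reject transfer function (the spectrum size, D0 and W are assumed values for illustration):

P = 512; Q = 512;                          % assumed spectrum size
[u, v] = meshgrid(-Q/2:Q/2-1, -P/2:P/2-1);
D = sqrt(u.^2 + v.^2);                     % distance from the centre of the spectrum
D0 = 200; W = 60;                          % assumed band centre and width
H = double(~(D >= D0 - W/2 & D <= D0 + W/2));  % 0 inside the band, 1 outside
figure, imshow(H); title('Ideal band-reject filter H(u,v)');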

CODE:

clc;
clear all;
close all;
im = imread('moonlanding.png');
figure, imshow(im);
FT = fft2(double(im));
FT1 = fftshift(FT);
imtool(abs(FT1), []);            % view the centered spectrum
m = size(im,1);
n = size(im,2);
t = 0:pi/20:2*pi;
xc = (m+150)/2;                  % point around which we filter the image
yc = (n-150)/2;
r  = 200;                        % outer radius of the band-reject region
r1 = 40;                         % inner radius of the circular region of interest (for BRF)
xcc  = r*cos(t) + xc;
ycc  = r*sin(t) + yc;
xcc1 = r1*cos(t) + xc;
ycc1 = r1*sin(t) + yc;
mask  = poly2mask(double(xcc),  double(ycc),  m, n);   % generate masks for filtering
mask1 = poly2mask(double(xcc1), double(ycc1), m, n);
mask(mask1) = 0;                 % keep the low-frequency core: mask is now an annulus
FT2 = FT1;
FT2(mask) = 0;                   % band-reject filtering: zero out the annular band
imtool(abs(FT2), []);
output = real(ifft2(ifftshift(FT2)));   % back to the spatial domain
imtool(output, []);

RESULT AND OBSERVATION:

EXPERIMENT No. 9
OBJECTIVE:
Program to perform thresholding operations on a grayscale image:
1. Two level
2. Multilevel

SOFTWARE REQUIRED:
Matlab 2009 and above
THEORY:
The simplest method of image segmentation is the thresholding method. This method uses a clip level (or threshold value) to turn a grayscale image into a binary image; the key to the method is selecting the threshold value. The input to a thresholding operation is typically a grayscale or colour image. In the simplest implementation, the output is a binary image representing the segmentation: black pixels correspond to background and white pixels correspond to foreground (or vice versa). In simple implementations, the segmentation is determined by a single parameter known as the intensity threshold. In a single pass, each pixel in the image is compared with this threshold; if the pixel's intensity is higher than the threshold, the pixel is set to, say, white in the output, and if it is less than the threshold, it is set to black.
Two Level Thresholding - In bi-level thresholding, the image is segmented into two regions: pixels with gray values greater than a certain value T are classified as object pixels, and the others with gray values less than T are classified as background pixels.
Multilevel Thresholding - Multilevel thresholding segments a gray-level image into several distinct regions. This technique determines more than one threshold for the given image and segments the image into certain brightness regions, which correspond to one background and several objects. The method works well for objects with coloured or complex backgrounds.

CODE:
%%%%% Program for two-level thresholding of a grayscale image %%%%%
I = imread('snowflakes.png');
imshow(I); title('Original image');
level = graythresh(I);            % Otsu's method picks the threshold
BW = im2bw(I, level);
figure, imshow(BW); title('Image after two-level thresholding')
%%%%% Program for multi-level thresholding of a grayscale image %%%%%
X = grayslice(I, 16);
%%%%% grayslice uses the threshold values 1/16, 2/16, ..., 15/16 %%%%%
figure, imshow(X, jet(16)); title('Image after multilevel thresholding')
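In releases newer than the 2009 baseline assumed here (R2012b onward), multilevel thresholds can also be computed automatically with Otsu's method; a sketch:

thresh = multithresh(I, 3);        % three Otsu thresholds give four regions
seg = imquantize(I, thresh);       % label each pixel with its region index (1..4)
figure, imshow(label2rgb(seg)); title('Otsu multilevel thresholding');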

RESULT AND OBSERVATION:

EXPERIMENT No. 10
OBJECTIVE:
Program to perform various edge detection techniques on the cameraman.tif image:
1. Roberts
2. Prewitt
3. Sobel
4. Laplacian

SOFTWARE REQUIRED:
Matlab 2009 and above
THEORY:
Edge detection is the name for a set of mathematical methods which aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges. The goal of edge detection is to produce a line drawing of a scene from an image of that scene. Important features can be extracted from the edges of an image (for example corners, lines, and curves); these features are used by higher-level computer vision algorithms (for example recognition).
Roberts :The Roberts operator performs a simple, quick to compute, 2-D spatial gradient measurement on an image. It
thus highlights regions of high spatial gradient which often correspond to edges. In its most common usage, the input to
the operator is a greyscale image, as is the output. Pixel values at each point in the output represent the estimated
absolute magnitude of the spatial gradient of the input image at that point.
Prewitt: The Prewitt edge detection masks are among the oldest and best-understood methods of detecting edges in images. Basically, there are two masks, one for detecting image derivatives in X and one for detecting image derivatives in Y. To find edges, a user convolves an image with both masks, producing two derivative images (dx and dy). The strength of the edge at any given image location is then the square root of the sum of the squares of these two derivatives, as sketched below.
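A minimal sketch of that dx/dy computation with imfilter (the 'replicate' boundary option is an assumption; the experiment's own code below uses the edge function instead):

I = im2double(imread('cameraman.tif'));
px = [-1 0 1; -1 0 1; -1 0 1];     % Prewitt mask for derivatives in X
py = px';                          % its transpose detects derivatives in Y
gx = imfilter(I, px, 'replicate');
gy = imfilter(I, py, 'replicate');
g  = sqrt(gx.^2 + gy.^2);          % edge strength at each pixel
figure, imshow(g, []); title('Prewitt gradient magnitude');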
Sobel: The Sobel operator performs a 2-D spatial gradient measurement on an image and so emphasizes regions of high spatial gradient that correspond to edges. Typically it is used to find the approximate absolute gradient magnitude at each point in an input greyscale image. In theory at least, the operator consists of a pair of 3x3 convolution masks, one simply the other rotated by 90 degrees. These masks are designed to respond maximally to edges running vertically and horizontally relative to the pixel grid, one mask for each of the two perpendicular orientations. The masks can be applied separately to the input image to produce separate measurements of the gradient component in each orientation (call these Gx and Gy); these can then be combined to find the absolute magnitude of the gradient at each point and the orientation of that gradient.
Laplacian: The zero crossing detector looks for places in the Laplacian of an image where the value of the Laplacian
passes through zero --- i.e. points where the Laplacian changes sign. Such points often occur at `edges' in images --- i.e.
points where the intensity of the image changes rapidly, but they also occur at places that are not as easy to associate
with edges. It is best to think of the zero crossing detector as some sort of feature detector rather than as a specific edge
detector. Zero crossings always lie on closed contours and so the output from the zero crossing detector is usually a
binary image with single pixel thickness lines showing the positions of the zero crossing points.

CODE:
clear;
clc;
I = imread('cameraman.tif');
figure, imshow(I);
BW1 = edge(I, 'prewitt');
BW2 = edge(I, 'sobel');
BW3 = edge(I, 'roberts');
BW4 = edge(I, 'log');    % Laplacian of Gaussian (zero-crossing detector)
figure;
subplot(2,2,1), imshow(BW1), title('Prewitt');
subplot(2,2,2), imshow(BW2), title('Sobel');
subplot(2,2,3), imshow(BW3), title('Roberts');
subplot(2,2,4), imshow(BW4), title('Laplacian');

RESULT AND OBSERVATION:

(Output panels: Original Image, Prewitt, Sobel, Roberts, Laplacian)
