Supervised by
Prepared by
Muhammad Ikram
Department of Computer Systems Engineering
N-W.F.P University of Engineering and Technology
Peshawar
mikram@nwfpuet.edu.pk
February 2008
Preface
This Digital Image Processing (DIP) Lab Manual is designed for the 8th-semester students
of the Department of Computer Systems Engineering (DCSE), Spring 2008. The following are the
prerequisites for the labs.
DSP First
Signals and Systems
MATLAB Basics
The students are advised to obtain a copy of the following.
MATLAB
DIPUM toolbox p-code
Each lab is carried out by first simulating its objective in MATLAB. The code for the
labs is not given in the manual; instead, the students are guided through the procedures,
in order to encourage them to use their own thoughts and ideas in implementing the labs.
Students will observe strict discipline in following all the labs and will submit a DIP-related
project at the end of the semester. The details of the projects will be decided during the
labs.
The textbooks used in this lab are [?] and [?].
Contents
1 MATLAB Specifics for Digital Image Processing
1.1 Basic MATLAB Functions Related to a Digital Image
1.2 Data Classes and Image Types
1.3 Image Types
1.3.1 Intensity Images
1.3.2 Binary Images
1.4 Converting between Data Classes and Image Types
1.4.1 Converting between Data Classes
1.4.2 Converting between Image Classes and Types
1.5 Indexing Rows and Columns in a 2-D Image and Image Rotation
1.6 Practical
1.6.1 Activity No.1
1.6.2 Activity No.2
1.6.3 Hint for Activity No.2
1.6.4 Activity No.3
1.6.5 Hint for Activity No.3
1.6.6 Questions
3.4.5 Activity No.3
3.4.6 Activity No.4
3.4.7 Questions
6 Histogram Processing
6.1 Histogram of an Image
6.2 Histogram Equalization
6.3 Histogram Specification
6.4 Practical
6.4.1 Activity No.1
6.4.2 Activity No.2
6.4.3 Hint for Activity No.2
6.4.4 Activity No.3
6.4.5 Activity No.4
6.4.6 Questions
7.4 Practical
7.4.1 Activity No.1
7.4.2 Activity No.2
7.4.3 Hint for Activity No.2
7.4.4 Activity No.3
7.4.5 Activity No.4
7.4.6 Hint for Activity No.4
7.4.7 Questions
16 Image Segmentation
16.1 Practical
16.1.1 Activity No.1
16.1.2 Activity No.2
16.1.3 Questions
Abbreviations
IPT: Image Processing Toolbox
D-Neighbors: Diagonal Neighbors
4-Neighbors: Four Neighbors
8-Neighbors: All Neighbors
FT: Fourier Transform
DFT: Discrete Fourier Transform
LSB: Least Significant Bit
MSB: Most Significant Bit
FFT: Fast Fourier Transform
LUT: Look Up Table
CDF: Cumulative Distribution Function
LPF: Lowpass Filter
HPF: Highpass Filter
IFFT: Inverse Fast Fourier Transform
IIR: Infinite Impulse Response
LF: Linear Filter
NLF: Nonlinear Filter
DFS: Discrete Fourier Series
ILPF: Ideal Lowpass Filter
BLPF: Butterworth Lowpass Filter
GLPF: Gaussian Lowpass Filter
IHPF: Ideal Highpass Filter
BHPF: Butterworth Highpass Filter
GHPF: Gaussian Highpass Filter
DIPUM: Digital Image Processing Using MATLAB
LMS: Least Mean Squares
LZW: Lempel-Ziv-Welch coding
RGB: Red, Green, Blue
DWT: Discrete Wavelet Transform
DCT: Discrete Cosine Transform
Lab 1
1.1 Basic MATLAB Functions Related to a Digital Image
A digital image is a two-dimensional array having a finite number of rows and columns. In MATLAB
it is stored in a two-dimensional matrix. For example, Fig. 1.1 shows the pixels in an image.
The top left corner of this array is the origin (0,0). In MATLAB, however, the origin
starts from (1,1), because MATLAB addresses the first element of a two-dimensional matrix as
<matrixname>(1,1).
This digital image can be read into a MATLAB matrix using the imread() function. The
imread() function loads an image stored at a specific location into a 2-D array. Its syntax is:
F = imread('<filename.extension>')
The imwrite() function is used for writing an image to the disk. The syntax of the imwrite()
function is:
imwrite(<2-D array>, '<filename.extension>', ...)
The imshow() function is used for displaying the stored images. The syntax of the imshow()
function is:
imshow('<filename.extension>')
The imfinfo command collects the statistics about an image into a structure variable. The
output of the command is given in Fig. 1.2.
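The read-display-write round trip described above can be sketched as follows; the file name pout.tif is an assumption (it ships with the Image Processing Toolbox), so substitute any image available on your path.

```matlab
% Read an image into a 2-D (or 3-D, for RGB) array.
f = imread('pout.tif');        % assumed demo image; use any file you have

% Display it in a figure window.
imshow(f);

% Write it back to disk, e.g. as a JPEG.
imwrite(f, 'pout_copy.jpg');

% Collect statistics about the stored file in a structure.
info = imfinfo('pout_copy.jpg');
disp(info.Width);              % image width in pixels
disp(info.FileSize);           % size of the file on disk, in bytes
```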
1.2 Data Classes and Image Types
MATLAB and the IPT support various data classes for representing pixel values. These data
classes include: double, uint8, uint16, uint32, int8, int16, int32, single, char, and logical. The
first eight are numeric data classes and the last two are the character and logical data classes,
respectively. The double data type is the most frequently used, and all numeric computations
in MATLAB require quantities represented in it. The uint8 data class is also used frequently,
especially when reading data from a storage device, as 8-bit images are the most common
representation found in practice. We will focus mostly on these two data types in MATLAB.
The double data type requires that a quantity be represented in 8 bytes; uint8 and int8
require 1 byte each; uint16 and int16 require 2 bytes; and uint32, int32, and single require
4 bytes each. The char data class holds characters in Unicode representation. A character
string is merely a 1 x n array of characters. A logical array contains only the values 0 and 1,
with each element stored in memory using one byte. Logical arrays are
created by using the function logical, or by using relational operations.
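A minimal sketch of how these data classes behave in practice; the variable names are arbitrary.

```matlab
a = uint8([0 128 255]);    % 1-byte unsigned integers, range [0, 255]
b = double(a);             % 8-byte floating point, required for arithmetic
c = b / 255;               % scale to [0, 1], the usual double image range

whos a b c                 % lists each array with its class and bytes used

L = c > 0.4;               % relational operation yields a logical array
islogical(L)               % returns logical 1 (true)
```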
1.3 Image Types
In this section we will study the four types of images supported by the IPT:
Intensity images
Binary images
Indexed images
RGB images
We will initially focus on intensity images and binary images, because most
monochrome image processing operations are carried out using these two image types.
1.3.1
Intensity Images
An intensity image is a data matrix whose values have been scaled to represent intensities.
When the elements of an intensity image are of class uint8 or class uint16, they have integer
values in the ranges [0, 255] and [0, 65535], respectively. Values are scaled to the range [0, 1]
when the double class is used to represent the values in the data matrix.
1.3.2
Binary Images
Binary images are logical arrays of 0s and 1s. A logical array, B, is obtained from a numeric array,
A, using the logical function:
B = logical(A)  % converting a numeric array into a logical array
The islogical function is used to test the logical status of B:
islogical(B)    % to check whether B is logical or not
1.4 Converting between Data Classes and Image Types
It is frequently necessary in IPT applications to convert between data classes and image types.
1.4.1 Converting between Data Classes
1.4.2 Converting between Image Classes and Types
The table in Fig. 1.3 contains the necessary functions for converting between image classes and
types. The conversion of a 3 x 3 double image into a uint8 image is given in Fig. 1.4.
Figure 1.3: Functions for converting between image classes and types.
1.5 Indexing Rows and Columns in a 2-D Image and Image Rotation
As we studied in Sec. 1.1, a digital image is represented by a 2-D matrix, and its pixels, rows,
and columns are addressed as:
Individual pixel: <matrixname>(row, col)
Complete row: <matrixname>(row, :)
Complete column: <matrixname>(:, col)
The above array indexing is used in the 90° rotation of an image, as shown in Fig. 1.5.
Figure 1.5: Original Lena (left) and 90° rotated Lena (right)
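The rotation in Fig. 1.5 can be realized with pure array indexing; a sketch for a grayscale image f, with the built-in rot90 shown as a cross-check:

```matlab
f = imread('pout.tif');        % assumed demo image

% 90-degree counterclockwise rotation by indexing:
% transpose, then reverse the row order.
g = f.';                       % swap rows and columns
g = g(end:-1:1, :);            % flip vertically

% The built-in equivalent:
h = rot90(f);

isequal(g, h)                  % returns logical 1 (true)
```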
1.6
Practical
1.6.1
Activity No.1
Read various images, having different file extensions, stored in the work directory and at
c:\mydoc\mypic, using the imread() function, and show them using the imshow() function.
1.6.2
Activity No.2
Use MATLAB help to compress an image, with .jpeg extension, using the imwrite() function,
and find the compression ratio with respect to the original image.
1.6.3 Hint for Activity No.2
Use Fig. 1.2 to find the sizes of the original and compressed images, and then divide the size
of the original image by the size of the compressed image to find the compression ratio.
1.6.4 Activity No.3
Read an image and rotate it by 180° and 270° using array indexing. Add the original
image and the rotated image using the imadd() function. Repeat this activity for the imsubtract(),
immultiply(), imabsdiff(), imlincomb(), and imcomplement() functions.
1.6.5 Hint for Activity No.3
First convert the images from gray levels into double, then apply these functions, and then
convert back to gray levels using the mat2gray() function.
1.6.6
Questions
1. Why do we convert gray levels into double before using numeric operations?
2. What is the effect of compression of an image on its quality?
Lab 2
2.1 Shrinking of an Image
Spatial resolution refers to the smallest number of discernible line pairs per unit distance, e.g.,
500 line pairs per millimeter. Shrinking of an image, Fig. 2.1, in the spatial domain can be achieved
by deleting its rows and columns. In Fig. 2.2, alternate rows and columns are deleted to
shrink the image. Similarly, we can extend this to produce the outputs in Fig. 2.3, Fig. 2.4, and
Fig. 2.5.
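Deleting alternate rows and columns halves each dimension; a sketch using array indexing, assuming a grayscale image f:

```matlab
f = imread('pout.tif');            % assumed demo image

% Keep every second row and every second column: half-size image.
g = f(1:2:end, 1:2:end);

% Repeating the step shrinks the image further (quarter size, etc.).
g2 = g(1:2:end, 1:2:end);

size(f), size(g), size(g2)
```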
2.2 Zooming of an Image
As discussed in Sec. 2.1, shrinking requires undersampling, while zooming requires
oversampling, i.e., the creation of new pixel locations and the assignment of gray levels to those new
locations, as shown in Fig. 2.7. We can assign values to the newly created pixel locations
using one of three techniques:
Nearest Neighbor Interpolation
Pixel Replication
Bilinear Interpolation
Figure 2.2: The zoomed-out Lena (256 x 256) at left and (128 x 128) at right
In nearest neighbor interpolation, the values of the newly created pixel locations, in the imaginary
zoomed image, are assigned from the pixels in the nearest neighborhood. In pixel
replication, the pixel locations and gray levels are predetermined; it is used when we
want to increase the size of an image an integer number of times. It is achieved by duplicating
the rows in the horizontal direction and the columns in the vertical direction of the image. Bilinear
interpolation is a more sophisticated gray level assignment technique. The four neighbors,
Fig. 2.6, of a new pixel location v(x', y') are used for calculating its gray level, using the
relation given by
v(x', y') = a x' + b y' + c x' y' + d
where the four coefficients are determined from the four equations in four unknowns that can be
written using the four nearest neighbors of the point (x', y').
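Pixel replication for an integer zoom factor can be written as pure index duplication; a sketch for a 2x zoom of a grayscale image f:

```matlab
f = imread('pout.tif');            % assumed demo image

% Duplicate every row index and every column index: 2x zoom.
rows = ceil((1:2*size(f,1)) / 2);  % 1 1 2 2 3 3 ...
cols = ceil((1:2*size(f,2)) / 2);
g = f(rows, cols);

size(g)                            % twice the size of f in each dimension
```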
Figure 2.3: The zoomed-out Lena (64 x 64) at left and (32 x 32) at right
Figure 2.4: The zoomed-out Lena (16 x 16) at left and (8 x 8) at right
2.3
Practical
2.3.1 Activity No.1
Write MATLAB scripts (your own programming code) for obtaining the results discussed in
Sec. 2.1.
Figure 2.6: Newly created pixel (red) and four neighbors (blue)
Figure 2.7: The zoomed Lena (512 x 512) at right, from the 256 x 256 Lena at left
2.3.2 Hint for Activity No.1
Take Fig. 2.1 as input and delete the relevant numbers of rows and columns by using array indexing
to obtain the required results.
2.3.3 Activity No.2
Write MATLAB scripts for zooming a digital image from various spatial resolutions, i.e., 32 x
32, 64 x 64, and 128 x 128, to a spatial resolution of 512 x 512, using pixel replication.
2.3.4 Hint for Activity No.2
Use Fig. 1.2 for finding the sizes of the original and compressed images, and then divide
the size of the original image by the size of the compressed image to find the compression ratio.
2.3.5 Activity No.3
Repeat the problem solved in Sec. 2.3.3 using nearest neighbor interpolation and bilinear
interpolation.
2.3.6 Questions
1. What is the effect of severe shrinking of a digital image in terms of its size and its quality?
2. What is the effect of zooming from a low resolution to a high resolution?
3. What are the advantages and disadvantages of pixel replication over nearest neighbor
interpolation?
Lab 3
3.1 Neighbors of a Pixel
The spatial coordinates of the 4-neighbors (blue), N4(V), of a pixel (red), V(x, y), in Fig. 3.1(a) are:
(x - 1, y) spatial coordinate of V1
(x, y - 1) spatial coordinate of V2
(x, y + 1) spatial coordinate of V3
(x + 1, y) spatial coordinate of V4
The spatial coordinates of the D-neighbors (blue), ND(V), of a pixel (red), V(x, y), in Fig. 3.1(b) are:
(x - 1, y - 1) spatial coordinate of V1
(x - 1, y + 1) spatial coordinate of V2
(x + 1, y - 1) spatial coordinate of V3
(x + 1, y + 1) spatial coordinate of V4
The spatial coordinates of the 8-neighbors (blue), N8(V), of a pixel (red), V(x, y), in Fig. 3.1(c) are:
(x - 1, y - 1) spatial coordinate of V1
(x - 1, y) spatial coordinate of V2
(x - 1, y + 1) spatial coordinate of V3
(x, y - 1) spatial coordinate of V4
(x, y + 1) spatial coordinate of V5
(x + 1, y - 1) spatial coordinate of V6
(x + 1, y) spatial coordinate of V7
(x + 1, y + 1) spatial coordinate of V8
3.2 Adjacency between Pixels
Connectivity between pixels provides the foundation for understanding other digital image
processing concepts. Two pixels are said to be connected if they are neighbors and either have the
same gray level or their gray levels satisfy a pre-decided criterion of similarity.
Let Z = {1} be the set of gray level values used to define adjacency between the pixels of
the binary image shown in Fig. 3.2(a). There are three types of adjacency between pixels:
4-adjacency. Two pixels p and q with values from Z are 4-adjacent if q is in the set
N4(p). The 4-adjacency of the pixel p is shown by the green line in Fig. 3.2(b).
8-adjacency. Two pixels p and q with values from Z are 8-adjacent if q is in the set
N8(p). The 8-adjacency of the pixel p is shown by the green lines in Fig. 3.2(c).
m-adjacency. Two pixels p and q with values from Z are m-adjacent, shown in Fig. 3.3, if
q is in N4(p), or
q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from Z.
Figure 3.2: (a) Binary image, (b) pixels that are 4-adjacent (shown by green line) to the
center pixel, (c) pixels that are 8-adjacent to the center pixel
Figure 3.3: Pixels that are m-adjacent (shown by green line) to the center pixel,
3.3 Paths and Connectivity
As discussed in Sec. 3.2, connectivity between pixels provides the foundation for understanding
other digital image processing concepts, i.e., digital paths, regions, and boundaries. Two
pixels are said to be connected if they are neighbors and either have the same gray level or their
gray levels satisfy a pre-decided criterion of similarity.
A digital path or curve from a pixel p with coordinates (x, y) to a pixel q with coordinates
(s, t) is a sequence of distinct pixels with coordinates
(x0, y0), (x1, y1), ..., (xn, yn)
where (x0, y0) = (x, y), (xn, yn) = (s, t), and pixels (xi, yi) and (xi-1, yi-1) are adjacent for
1 <= i <= n. In this case, n is the length of the path. For a closed path, (x0, y0) = (xn, yn). Depending
upon the 4-, 8-, or m-adjacency used, we can define a 4-path, 8-path, or m-path, respectively.
Two pixels p and q are said to be connected if there exists a path between them and
they belong to S, a subset of pixels in an image. For any pixel p in S, the set of pixels that
are connected to it is called a connected component of S.
3.4 Practical
3.4.1 Activity No.1
3.4.2 Hint for Activity No.1
Take an arbitrary 5 x 5 image using the magic() function and prompt the user to input the pixel for
finding the required neighborhoods.
3.4.3 Activity No.2
3.4.4 Hint for Activity No.2
Take an arbitrary 3 x 3 binary image using the rand() and round() functions and prompt the user to
input the pixel for finding the required adjacency.
3.4.5
Activity No.3
Write a MATLAB script to find the 4-adjacency, 8-adjacency, and m-adjacency of the image shown
in Fig. 2.1, taking Z = {50:80}.
3.4.6
Activity No.4
Modify your MATLAB code from Act. 3.4.3 to find the 4-path, 8-path, and m-path of a pixel in a
binary image, taking Z = 0. Extend this idea to find the regions of the image in Fig. 2.1
by taking R = {65:80}.
3.4.7
Questions
1. Why do we convert gray levels into double before using numeric operations?
2. What is the effect of compression of an image on its quality?
Lab 4
4.1 Thresholding
Fig. 4.1 shows the thresholded version of Fig. 2.1. In this method we increase the contrast by
darkening the gray levels below m and brightening the levels above m in the original image.
As a result, we obtain a narrower range of gray levels, Fig. 4.2.
4.2 Negative Transformation
Generally, the negative transformation of an image having gray levels in [0, L-1] is obtained as
s = L - 1 - r
Fig. 4.3 shows the negative-transformed image of Lena and its histogram, while the input
image, the original Lena, and its histogram are shown in Fig. 4.4. The intensity levels in
the histogram of the negative-transformed image are reversed relative to those of the input image.
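For a uint8 image (L = 256) the negative transformation is one line; imcomplement is the IPT equivalent. A sketch:

```matlab
f = imread('pout.tif');    % assumed demo image, class uint8

g = 255 - f;               % s = L - 1 - r with L = 256
h = imcomplement(f);       % IPT built-in negative

isequal(g, h)              % returns logical 1 (true)
```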
4.3 Log Transformation
The log transformation can be obtained by the following relation between the input image, r, and
the output image, s:
s = c log(1 + r)
The log transformation expands the values of the dark pixels in an image while compressing the
higher-level values. The inverse is true for the inverse log transformation. The log transformation
compresses the dynamic range of images with large variations in pixel values, as shown in
Fig. 4.5.
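A sketch of the log transformation on a double image; the choice of scaling constant c (normalizing the output to [0, 1]) is an assumption:

```matlab
f = im2double(imread('pout.tif'));   % assumed demo image, scaled to [0, 1]

c = 1 / log(1 + max(f(:)));          % normalizing constant (assumption)
g = c * log(1 + f);                  % s = c log(1 + r)

imshow(g)                            % dark regions appear expanded
```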
4.4 Power-law Transformation
The power-law transformation between the input image, r, and the output image, s, is given by
s = c r^γ
where c and γ are positive constants.
Figure 4.3: (b) Negative-transformed image, (hB(l)) Histogram of the negative-transformed image
For example, Fig. 4.6(b) shows the power-law transformation of the image in Fig. 4.6(a).
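A sketch of the power-law (gamma) transformation with c = 1 and γ = 0.2, the value used in Fig. 4.6; both constants are assumptions you can vary:

```matlab
f = im2double(imread('pout.tif'));   % assumed demo image, scaled to [0, 1]

c = 1;                               % positive constants (assumptions)
gamma = 0.2;
g = c * f.^gamma;                    % s = c r^gamma

imshow(g)                            % gamma < 1 brightens dark regions
```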
4.5 Practical
4.5.1 Activity No.1
4.5.2 Activity No.2
Figure 4.5: (a) Fourier spectrum (b) Result after applying the log transformation
Figure 4.6: (a) Original Lena (b) Result after applying the power-law transformation with γ = 0.2
4.5.3 Activity No.3
4.5.4 Activity No.4
4.5.5 Questions
3. In the negative transformation, are the lower gray levels assigned to lower gray levels or to
higher gray levels?
4. How are the input gray levels transformed into the output gray levels after the
log/antilog transformation?
Lab 5
5.1 Contrast Stretching
Low contrast images can result from poor illumination, lack of dynamic range in the imaging
sensor, a wrong setting of the lens aperture during image acquisition, and many other factors. So, our goal
is to increase the dynamic range of the gray levels to enhance the visual appearance of an image.
Consider the contrast stretching function in Eq. 5.1, graphed in Fig. 5.1(a), applied to a low
contrast image, Fig. 5.1(b), to produce the high contrast image shown in Fig. 5.1(c).

s = T(r) = { a1 r,              0 <= r < r1;
             a2 (r - r1) + s1,  r1 <= r < r2;
             a3 (r - r2) + s2,  r2 <= r <= (L - 1). }    (5.1)

Here a1, a2, and a3 control the result of the contrast stretching. If a1 = a2 = a3, then there will
be no change in the gray levels. Conversely, if a1 = a3 = 0 and r1 = r2, then T(.) is the
thresholding function discussed in Sec. 4.1. In general, if r1 <= r2 and T(r1) <= T(r2), then
T(.) is a single-valued and monotonically increasing function, which preserves the order of
gray levels and prevents the creation of intensity artifacts.
5.2 Gray Level Slicing
Gray level slicing is used to highlight a specific range of gray levels in an image. It is used
to enhance features such as masses of water in satellite images, or flaws in x-ray images.
Figure 5.1: (a) Contrast stretching function, (b) Low contrast input image, (c) High contrast processed output image
The methods used to achieve gray level slicing are divided into two categories, given by Eq. 5.2
and Eq. 5.3, graphed in Fig. 5.2(a) and Fig. 5.2(b), respectively.

s = T(r) = { sH, r in the range of interest;
             sL, otherwise. }    (5.2)

s = T(r) = { sH, r in the range of interest;
             r,  otherwise. }    (5.3)

5.3 Bit Plane Slicing
In this section we will discuss the importance of the individual bits in image appearance. Bit
plane slicing aids in determining the adequacy of the number of bits used to quantize each
pixel, which is useful for compression. In a gray scale image each pixel is represented by
8 bits. We can imagine the image as consisting of eight 1-bit planes, Fig. 5.4, ranging from
bit-plane 0 for the least significant bit (LSB) to bit-plane 7 for the most significant bit
(MSB). The results of bit plane slicing with specific masking bit patterns are shown in Fig. 5.3.
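Bit planes can be extracted and masked with the built-in bitget and bitset functions; a sketch that isolates and then zeroes the 7th bit-plane (MSB) of a uint8 image:

```matlab
f = imread('pout.tif');        % assumed demo image, class uint8

% Extract bit-plane 7 (MSB) as a 0/1 image.
% bitget counts bits from 1 (LSB) to 8 (MSB).
p7 = bitget(f, 8);

% Zero out the MSB plane and observe the effect on appearance.
g = bitset(f, 8, 0);

imshow(p7, []), figure, imshow(g)
```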
5.4 Practical
5.4.1 Activity No.1
Write a MATLAB script to obtain g(l), the piecewise transformation function of l, as shown in
Fig. 5.5.
Figure 5.2: (a) Function that highlights the [A,B] gray levels and reduces others (b) Function
that highlights the [A,B] gray levels and preserves others (c) Input image (d)
Processed output image using the transfer function in (a)
5.4.2 Activity No.2
5.4.3 Activity No.3
Write MATLAB code to take an input image and implement the gray level slicing discussed
in Sec. 5.2.
5.4.4 Activity No.4
5.4.5 Activity No.5
Take an image, find its histogram, and match it with the histogram of the equalized image.
Figure 5.3: (a) 4th bit-plane set to 0 (b) 4th & 5th bit-planes set to 0 (c) 4th, 5th, & 6th
bit-planes set to 0 (d) 4th, 5th, 6th, & 7th bit-planes set to 0
5.4.6 Questions
1. What is the impact on the appearance of an image when bit-plane 0 is set to zero?
2. What is the impact on the appearance of an image when bit-plane 7 is set to zero?
3. What effect would setting to zero the lower-order bit planes have on the histogram of
an image in general?
4. What would be the effect on the histogram if we set to zero the higher order bit
planes instead?
Lab 6
Histogram Processing
In this lab we will find the histogram of an image and interpret an image in terms of its
histogram. We will focus on different histogram processing techniques to enhance an image.
6.1 Histogram of an Image
The histogram provides us with global statistics about an image. Let S be a set and define |S| to be
the cardinality of this set, i.e., |S| is the number of elements in S. The histogram hA(l)
(l = 0, ..., 255) of the image A is defined as:

hA(l) = |{(i, j) | A(i, j) = l, i = 0, ..., N - 1, j = 0, ..., M - 1}|    (6.1)

The histogram entries sum to the total number of pixels:

sum_{l=0}^{255} hA(l) = N M    (6.2)

The normalized histogram is

p(rk) = nk / n    (6.3)

rk = kth gray level (0 <= k <= L - 1)    (6.4)

where nk is the number of pixels with gray level rk and n is the total number of pixels in the
image. The function p(rk) estimates the probability of occurrence of gray level rk.
The histogram of a 250 x 250 image A, Fig. 6.1(a), is shown in Fig. 6.1(b). As we can see,
the image A has half of its portion black and half white, and the histogram of this image shows
an equal number of occurrences of black and white pixels.
The histogram of an image does not depend upon its shape. This is demonstrated in Fig. 6.2. Again, we
have equal parts of black and white, indicated in Fig. 6.2(a), but the shape is
different from the image in Fig. 6.1(a). Nevertheless, the histogram in Fig. 6.2(b) is exactly
the same as the histogram in Fig. 6.1(b).
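Computing the normalized histogram can be sketched with the IPT's imhist, cross-checked against a direct count matching Eq. 6.1:

```matlab
f = imread('pout.tif');        % assumed demo image, class uint8

% Histogram via the toolbox: counts for the 256 gray levels.
h = imhist(f);

% Normalized histogram p(rk) = nk / n.
p = h / numel(f);

% Direct computation of one entry, matching Eq. 6.1.
l = 100;
nl = sum(f(:) == l);
isequal(nl, h(l + 1))          % bins are 1-indexed in MATLAB
```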
6.2 Histogram Equalization
Consider a transformation of the input gray levels r to output levels s,

s = T(r)    (6.5)

where T(r) satisfies two conditions: (1) T(r) is single-valued and monotonically increasing in
the interval 0 <= r <= 1, and (2) 0 <= T(r) <= 1 for 0 <= r <= 1.
Let pr(r) and ps(s) be the probability density functions of the input and output gray levels,
respectively. If the above two conditions are true, then

r = T^{-1}(s)    (6.6)

Generally, we know pr(r) and T(r), and if T^{-1}(s) satisfies condition (1), then the probability
density function ps(s) of the transformed variable s can be obtained using a rather simple
formula:

ps(s) = pr(r) |dr/ds|    (6.7)

For discrete variables, the probability density function is defined by Eq. 6.3, and Eq. 6.8
defines histogram equalization, or histogram linearization:

sk = T(rk) = sum_{j=0}^{k} pr(rj) = sum_{j=0}^{k} nj / n    (6.8)

Fig. ?? shows the equalized histogram of the input image and the processed output image. We can
see that the output histogram has a wider dynamic range of gray levels compared to the
histogram of the input image.
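The IPT implements Eq. 6.8 as histeq; a sketch comparing the input and equalized histograms:

```matlab
f = imread('pout.tif');        % assumed demo image, known to be low contrast

g = histeq(f);                 % discrete histogram equalization (Eq. 6.8)

subplot(2, 2, 1), imshow(f), title('input')
subplot(2, 2, 2), imhist(f), title('input histogram')
subplot(2, 2, 3), imshow(g), title('equalized')
subplot(2, 2, 4), imhist(g), title('equalized histogram')
```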
6.3 Histogram Specification
Histogram specification transforms the input levels rk so that the output histogram matches a
specified histogram pz(z). First equalize the input, as in Eq. 6.8:

sk = T(rk) = sum_{j=0}^{k} pr(rj) = sum_{j=0}^{k} nj / n,  k = 0, ..., L - 1    (6.9)

Then equalize the specified histogram:

vk = G(zk) = sum_{j=0}^{k} pz(zj) = sk,  k = 0, ..., L - 1    (6.10)

The output levels are obtained from the inverse mapping:

zk = G^{-1}(sk),  k = 0, ..., L - 1    (6.11)
6.4 Practical
6.4.1 Activity No.1
Prove that the images in Fig. 6.1(a) and Fig. 6.2 (a) have the same histogram.
6.4.2 Activity No.2
Prove that dark, bright, low contrast, and high contrast images have histograms
similar to those in Fig. 6.4(a), Fig. 6.4(b), Fig. 6.4(c), and Fig. 6.4(d),
respectively.
6.4.3 Hint for Activity No.2
Take the dark, bright, low contrast, and high contrast images that accompany this lab and
sketch their histograms, either using a LUT or for loops.
6.4.4 Activity No.3
Apply histogram specification on a low contrast image. Sketch and compare the histograms
of the input and output images.
6.4.5 Activity No.4
Figure 6.4: (a) Histogram of dark image (b) Histogram of bright image (c) Histogram of
low contrast image (d) Histogram of high contrast image
6.4.6 Questions
1. In what situations are histogram equalization and histogram specification suitable?
2. What issues are related to the use of histogram equalization?
3. Explain the difference between the histogram of a low contrast image and that of a high
contrast image.
4. What will be the visual effect of histogram equalization on a high contrast input image?
5. What will be the visual effect of histogram equalization on a low contrast input image?
6. Explain why the discrete histogram equalization technique does not, in general, yield
a flat histogram.
7. Suppose that a digital image is subjected to histogram equalization. Show that a
second pass of histogram equalization will produce exactly the same result as the first
pass.
Lab 7
7.1 Local Histogram Processing
Figure 7.1: (a) Input noised and blurred image (b) Output image after global histogram
processing (c) Output image after local histogram processing
7.2
In addition to histogram processing, local enhancement functions can also be based on other statistical properties of the gray levels in a block, i.e., the mean m_S(x, y) and the variance σ²_S(x, y). The mean measures average brightness and the variance measures contrast in an image. Let S_xy represent a neighborhood subimage (a block) of size N_Sxy centered at (x, y); m_S(x, y) in Eq. 7.1 and σ_S(x, y) in Eq. 7.2 represent the gray level mean and standard deviation in S_xy, respectively. Let M_G and σ_G denote the global mean and standard deviation of the image f(x, y).
m_S(x, y) = (1 / N_Sxy) · Σ_{(s,t)∈Sxy} f(s, t)    (7.1)

σ²_S(x, y) = (1 / N_Sxy) · Σ_{(s,t)∈Sxy} [f(s, t) − m_S(x, y)]²    (7.2)
There are two methods to implement these statistical properties for local enhancement. Mathematically, the first method is shown in Eq. 7.3, where A(x, y) is called the local gain factor and is inversely proportional to the standard deviation, as shown in Eq. 7.4.
g(x, y) = A(x, y) · [f(x, y) − m_S(x, y)] + m_S(x, y)    (7.3)

A(x, y) = k · M_G / σ_S(x, y),  0 < k < 1    (7.4)
The method represented by Eq. 7.3 is applied to the input image of Fig. 7.2(a) with a 15 x 15 block, which results in the locally enhanced output image shown in Fig. 7.2(b).
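As an illustrative NumPy sketch of the gain method of Eq. 7.3/7.4 (the block size, the edge padding, and the guard against zero standard deviation are my own choices, not prescribed by the manual):

```python
import numpy as np

def local_enhance(f, block=15, k=0.8):
    """Local enhancement per Eq. 7.3/7.4:
    g = A * (f - local_mean) + local_mean, with gain A = k * M_G / sigma_S."""
    f = f.astype(float)
    M_G = f.mean()                      # global mean
    pad = block // 2
    fp = np.pad(f, pad, mode='edge')    # replicate borders (illustrative choice)
    g = np.empty_like(f)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            win = fp[x:x + block, y:y + block]
            m_s, s_s = win.mean(), win.std()
            A = k * M_G / max(s_s, 1e-6)      # avoid division by zero
            g[x, y] = A * (f[x, y] - m_s) + m_s
    return g
```

Low-variance blocks receive a large gain A, which is exactly what boosts the faint local detail.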
The second method is represented by Eq. 7.2(a). Here the successful selection of the parameters (E, k0, k1, k2) requires experimentation with different images: E cannot be so large that it upsets the general visual balance of the image, and typically k0 < 0.5. The block size N_Sxy should be as small as possible to preserve detail and reduce the computational load.

g(x, y) = E · f(x, y), if m_S(x, y) ≤ k0 · M_G and k1 · σ_G ≤ σ_S(x, y) ≤ k2 · σ_G; otherwise g(x, y) = f(x, y)    . . . Eq. 7.2(a)
Figure 7.2: (a)Input image (b) Locally enhanced output image with 15 x 15 block
7.3
In this section we will focus on spatial filtering and its contribution to the enhancement of an image. A subimage, Fig. 7.3(b), called a filter, mask, kernel, template, or window, is masked with the input image, Fig. 7.3(a), as in Eq. 7.5. The values in the window are called filter coefficients. Based upon the filter coefficients, spatial filters can be classified as linear filters (LF), nonlinear filters (NLF), lowpass filters (LPF), and highpass filters (HPF).
g(x, y) = Σ_{i=−a}^{a} Σ_{j=−b}^{b} w(i, j) · f(x + i, y + j)    (7.5)
Specifically, the response of a 3 x 3 mask to a subimage with gray levels z1, z2, z3, ..., z9 is given in Eq. 7.6.

R = w1·z1 + w2·z2 + ... + w9·z9 = Σ_{i=1}^{9} w_i·z_i    (7.6)
When the mask operates near the image border, some part of the mask lies outside the image. Two common remedies are:
1. Discard the problem pixels (e.g., a 512 x 512 input becomes a 510 x 510 output image if the mask is 3 x 3).
2. Zero padding of the first and last rows and columns, so a 512 x 512 image becomes a 514 x 514 intermediate image. To get the final 512 x 512 output image, the first and last rows and columns are discarded again.
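The linear filtering of Eq. 7.5 with the zero-padding remedy (option 2 above) can be sketched in NumPy as follows; the function name is my own, and the shift-and-accumulate formulation is one of several equivalent ways to write the double sum:

```python
import numpy as np

def spatial_filter(f, w):
    """Linear spatial filtering of Eq. 7.5 with zero padding,
    so an M x N input yields an M x N output."""
    a, b = w.shape[0] // 2, w.shape[1] // 2
    fp = np.pad(f.astype(float), ((a, a), (b, b)))   # zeros at the borders
    g = np.zeros_like(f, dtype=float)
    for i in range(-a, a + 1):
        for j in range(-b, b + 1):
            # add w(i, j) * f(x + i, y + j) for every output pixel at once
            g += w[i + a, j + b] * fp[a + i : a + i + f.shape[0],
                                      b + j : b + j + f.shape[1]]
    return g
```

With an averaging mask the zero padding is visible at the borders: corner outputs only see a partial neighborhood.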
7.3.1
Smoothing Filters
Smoothing filters are also called LPFs, because they attenuate or eliminate the high-frequency components that characterize edges and sharp details in an input image, resulting in blurring or noise reduction. Blurring is usually used in preprocessing steps, e.g., to remove small details from an image prior to object extraction or to bridge small gaps in lines or curves. These filters are based on neighborhood averaging; in general, an M x M mask with identical filter coefficients, Eq. 7.7, is used to design a LPF. Some filters have weighted masks: to emphasize the nearest neighbors, the coefficients are weighted by some weight factor. These filters produce some undesirable edge blurring.
w_i = 1 / M²,  1 ≤ i ≤ M²    (7.7)

7.3.2
Order-Statistic Filters
The order statistic filters are nonlinear filters whose response is based on ordering (ranking) the pixels contained in the image area covered by the filter; the center value is then replaced with the value determined by the ranking result. Based on the ranking criterion, order statistic filters can be classified into three types.
7.3.3
Median Filters
The transformation function of the median filter is given in Eq. 7.8. Median filters are useful in situations where impulse (salt-and-pepper) noise is present. The median of the masked neighbors z_k(x, y) is found by ranking them in ascending order of gray level, so that half of the pixels lie above the median and half below it; the median is then assigned to the output pixel R(x, y) at (x, y).

R(x, y) = median{z_k(x, y) | k = 1, 2, ..., 9}    (7.8)
Generally, the transfer function of the median filter forces the output gray levels to be more similar to their neighbors. Isolated groups of pixels whose area is less than n²/2 are eliminated by an n x n median filter; conversely, larger clusters are less affected by it.
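A minimal NumPy sketch of the median filter of Eq. 7.8 (the edge-replication border handling is an illustrative choice; MATLAB's medfilt2 uses zero padding by default):

```python
import numpy as np

def median_filter(g, n=3):
    """Order-statistic (median) filter: rank the n x n neighborhood
    and assign the median to the output pixel, per Eq. 7.8."""
    pad = n // 2
    gp = np.pad(g, pad, mode='edge')
    out = np.empty_like(g)
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            out[x, y] = np.median(gp[x:x + n, y:y + n])
    return out

# a single salt pixel is an isolated cluster of area 1 < n^2/2,
# so a 3 x 3 median filter removes it completely
img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 255
```

This demonstrates the cluster-elimination property stated above.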
7.3.4
Min Filters

R(x, y) = min{z_k(x, y) | k = 1, 2, ..., 9}    (7.9)

These filters are applied to input images in the same way as median filters, producing masked intermediate outputs z_k(x, y), but the ranking and assignment criteria differ from those of median filters. The assignment of the minimum value in the neighborhood to the output R(x, y) makes them useful for removing salt noise.
7.3.5
Max Filters

R(x, y) = max{z_k(x, y) | k = 1, 2, ..., 9}    (7.10)

These filters are applied to input images in the same way as min filters, producing masked intermediate outputs z_k(x, y), but the ranking and assignment criteria differ from those of both mean and median filters. The assignment of the maximum value in the neighborhood to the output R(x, y) makes them useful for removing pepper noise.
7.4
Practical
7.4.1
Activity No.1
Use the figures that accompany this lab and generate the results discussed in Sec. 7.1.
7.4.2
Activity No.2
Take the image in Fig. 7.4 and bring out the hidden object in the background using the techniques discussed in Sec. 7.1.
7.4.3
Hint for Activity No.2
Apply a 3 x 3 moving average filter to the input image to detect areas of higher contrast, such as edges, and then use the coefficients in g(x, y) to produce the output image.
7.4.4
Activity No.3
Take the input image shown in Fig. 7.5 and apply smoothing filter masks of order: (i) 3 x 3, (ii) 5 x 5, (iii) 9 x 9, and (iv) 15 x 15. Sketch the output images and comment on your results.
7.4.5
Activity No.4
Take a noisy image and determine which order statistic filter performs best.
7.4.6
Hint for Activity No.4
Read an image and add salt-and-pepper noise to it using the imnoise() function, then write your own code to implement Eq. 7.8, Eq. 7.9, and Eq. 7.10.
7.4.7
Questions
Lab 8
8.1
As discussed in Sec. 7.3, spatial domain filtering is performed by designing a mask or window and convolving it with the input image. These filters are isotropic (rotation invariant) and are obtained by approximating the second order derivatives of Eq. 8.1 in the digital domain, as described in Eq. 8.2 to Eq. 8.4. The implementation of Eq. 8.4 as the simplest window is given in Fig. 8.1, and its application to the input image of Fig. 8.2(a) is shown in Fig. 8.2(b).
∇²f = ∂²f/∂x² + ∂²f/∂y²    (8.1)
8.2
∂²f/∂x² = f(x + 1, y) + f(x − 1, y) − 2f(x, y)    (8.2)

∂²f/∂y² = f(x, y + 1) + f(x, y − 1) − 2f(x, y)    (8.3)

∇²f = [f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1)] − 4f(x, y)    (8.4)
As noted, sharpening (Laplacian) filters deemphasize regions of slowly varying gray levels and highlight gray level discontinuities (edges) in an image. The background can therefore be recovered by simply adding the Laplacian output to the input image, as given in Eq. 8.5 (when the center coefficient of the Laplacian mask is negative) and Eq. 8.6 (when the center coefficient is positive). In this way the background tonality is preserved while details are enhanced, as shown in Fig. 8.3.

g(x, y) = f(x, y) − ∇²f(x, y)    (8.5)

g(x, y) = f(x, y) + ∇²f(x, y)    (8.6)

Figure 8.3: (a) Input image (b) Laplacian output (c) Enhanced Image
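The Laplacian of Eq. 8.4 combined with the subtraction form of Eq. 8.5 can be sketched in NumPy as follows (the border replication and the clip to [0, 255] are my own illustrative choices):

```python
import numpy as np

def laplacian_sharpen(f):
    """Sharpening with the 4-neighbor Laplacian of Eq. 8.4 and the
    subtraction form of Eq. 8.5 (negative center coefficient)."""
    f = f.astype(float)
    fp = np.pad(f, 1, mode='edge')
    # f(x-1,y) + f(x+1,y) + f(x,y-1) + f(x,y+1) - 4 f(x,y)
    lap = (fp[:-2, 1:-1] + fp[2:, 1:-1] +
           fp[1:-1, :-2] + fp[1:-1, 2:] - 4.0 * f)
    g = f - lap                  # Eq. 8.5: details added back to background
    return np.clip(g, 0, 255)
```

On a perfectly flat region the Laplacian is zero, so the output equals the input, which is the background-preservation property stated above.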
8.3
Images can be sharpened, f_s(x, y), by subtracting a blurred version, f̄(x, y), from the image f(x, y). This sharpening technique is called unsharp masking and is represented in Eq. 8.7.

f_s(x, y) = f(x, y) − f̄(x, y)    (8.7)

Highboost filtering is a generalized version of Eq. 8.7 and is represented in Eq. 8.8, where A ≥ 1. From Eq. 8.7 and Eq. 8.8 we can express high-boost filtering in terms of unsharp masking, Eq. 8.9.

f_hb(x, y) = A·f(x, y) − f̄(x, y)    (8.8)

f_hb(x, y) = (A − 1)·f(x, y) + f_s(x, y)    (8.9)
We can also represent high boost sharpening in terms of the Laplacian masks in Fig. 8.4. Eq. 8.10 and Eq. 8.11 give high boost filtering when the center coefficient of the Laplacian mask is negative and positive, respectively.

f_hb(x, y) = A·f(x, y) − ∇²f(x, y)    (8.10)

f_hb(x, y) = A·f(x, y) + ∇²f(x, y)    (8.11)

The constant A is inversely related to the sharpening: the sharpening contribution decreases as A increases.
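An illustrative NumPy sketch of Eq. 8.8: a box blur stands in for the generic blurred version f̄ (the blur choice and the function name are assumptions of this sketch, not prescribed by the manual):

```python
import numpy as np

def high_boost(f, A=1.5, blur_size=3):
    """High-boost filtering per Eq. 8.8: f_hb = A*f - blurred(f)."""
    f = f.astype(float)
    pad = blur_size // 2
    fp = np.pad(f, pad, mode='edge')
    blur = np.zeros_like(f)
    for i in range(blur_size):
        for j in range(blur_size):
            blur += fp[i:i + f.shape[0], j:j + f.shape[1]]
    blur /= blur_size ** 2       # box average = blurred version of f
    return A * f - blur
```

With A = 1 this reduces to unsharp masking, Eq. 8.7; on a flat region of value c the output is (A − 1)·c, which shows how A re-scales the background.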
8.4
The 2-D first order derivatives in image processing are implemented using the magnitude of the gradient, ∇f(x, y). As gray levels increase or decrease, f(x, y) has local maxima or local minima, respectively, and ∇²f(x, y) has zero crossings wherever there are discontinuities in the image. The gradient is defined in Eq. 8.12.

∇f = [Gx  Gy]^T = [∂f/∂x  ∂f/∂y]^T    (8.12)
The magnitude of the gradient, Eq. 8.13, is approximated by the Prewitt operator in Eq. 8.14 and implemented as 3 x 3 masks in Fig. 8.5. We can use another operator, the Sobel operator of Eq. 8.15, to find the magnitude of the gradient in the horizontal and vertical directions, as represented in Fig. 8.6.

∇f(x, y) = [(∂f(x, y)/∂x)² + (∂f(x, y)/∂y)²]^{1/2}    (8.13)

Gx = (z7 + z8 + z9) − (z1 + z2 + z3),  Gy = (z3 + z6 + z9) − (z1 + z4 + z7)    (8.14)
Figure 8.5: (a) Pixels arrangement (b) Mask for extracting horizontal edges (c) Mask for
extracting vertical edges
Gx = (z7 + 2z8 + z9) − (z1 + 2z2 + z3),  Gy = (z3 + 2z6 + z9) − (z1 + 2z4 + z7)    (8.15)
Figure 8.6: (a) Pixels arrangement (b)Sobel Mask for extracting horizontal edges (c) Sobel
Mask for extracting vertical edges
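A NumPy sketch of the Sobel masks of Eq. 8.15 follows; it uses the common |Gx| + |Gy| absolute-value approximation to the square-root magnitude of Eq. 8.13 (that approximation and the edge-replicating border handling are choices of this sketch):

```python
import numpy as np

# Sobel masks of Eq. 8.15 for horizontal and vertical edge responses
SOBEL_X = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(f):
    """Approximate the gradient magnitude of Eq. 8.13 by |Gx| + |Gy|."""
    f = f.astype(float)
    fp = np.pad(f, 1, mode='edge')
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    for i in range(3):
        for j in range(3):
            win = fp[i:i + f.shape[0], j:j + f.shape[1]]
            gx += SOBEL_X[i, j] * win
            gy += SOBEL_Y[i, j] * win
    return np.abs(gx) + np.abs(gy)
```

A vertical step edge produces a strong response in Gy and none in Gx, which is the directional behavior the two masks in Fig. 8.6 are designed for.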
8.5
Practical
8.5.1
Activity No.1
Take a blurred image, apply Laplacian masks to it, and sketch the resultant image. Also, prove that the first mask in Fig. 8.7 is isotropic for rotations in increments of 90° and that the second mask in Fig. 8.7 is isotropic for rotations in increments of 45°.
8.5.2
Compare the results of Laplacian masks on input images with those on rotated images (discussed in Sec. 1.5).
8.5.3
Activity No.2
Prove that we can obtain the same results for the masks given in Fig. 8.8.
8.5.4
Activity No.3
Obtain a sharpened image from a blurred image using unsharp masking and high-boost masks. Repeat the boosting of the image for different values of A.
8.5.5
Activity No.4
8.5.6
Activity No. 5
Compare and contrast the sharpened images obtained after applying the Sobel operators and the Laplacian masks.
8.5.7
Questions
Lab 9
9.1
Frequency is the number of times that a periodic function repeats the same sequence of values during a unit variation of the independent variable. In image processing we deal with digital images, which are 2-D signals, and frequency can be defined as the number of repeated patterns of gray levels in an image. We can transform the spatial domain representation into the frequency domain with the help of the Fourier Transform (FT). MATLAB uses the Discrete Fourier Transform (DFT) and the Fast Fourier Transform (FFT) algorithm to obtain the FT of an image. Let us consider an image f(x, y), Eq. 9.1. This aperiodic sequence can be represented as one period of a periodic sequence with period M x N, Eq. 9.2, where M and N are the periods in the x and y spatial coordinates, respectively. The Discrete Fourier Series (DFS) pair of the periodic image f̃(x, y) and the image in the frequency domain F(u, v) is given in Exp. 9.3 and illustrated in Fig. 9.1.
f(x, y),  x = 0, 1, ..., M − 1,  y = 0, 1, ..., N − 1    (9.1)

f̃(x, y) = Σ_{r1=−∞}^{∞} Σ_{r2=−∞}^{∞} f(x − r1·M, y − r2·N)    (9.2)

f̃(x, y) ⟷ F(u, v)    (9.3)
After mathematical simplification we can express the DFT in the analysis equation, Eq. 9.4, and its inverse in the synthesis equation, Eq. 9.5.
Figure 9.1: (a) Image in spatial domain (x,y) (b) Image in frequency domain(u,v)
F(u, v) = (1/MN) · Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) · exp[−j2π(ux/M + vy/N)]    (9.4)

f(x, y) = Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} F(u, v) · exp[j2π(ux/M + vy/N)]    (9.5)
Figure 9.2: (a) Input image (b) Fourier spectra (c) Fourier spectra after application of log
transformation
9.2
The basic steps for taking Fourier transform of an image are illustrated in Fig. 9.3 and are
listed below.
1. Multiply the input image by (−1)^{x+y} to center the transform
2. Compute the DFT of the shifted image, i.e., F(u, v)
3. Multiply F(u, v) by a filter function H(u, v) to obtain the output image, i.e., G(u, v)

G(u, v) = H(u, v) · F(u, v)
(9.6)
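The three steps above can be sketched in NumPy (a stand-in for the MATLAB workflow with fft2/ifft2; the function name is my own):

```python
import numpy as np

def freq_filter(f, H):
    """Frequency-domain filtering: center the spectrum with (-1)^(x+y),
    apply Eq. 9.6, and transform back."""
    M, N = f.shape
    x, y = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')
    centered = f * (-1.0) ** (x + y)     # step 1: shift spectrum to center
    F = np.fft.fft2(centered)            # step 2: DFT
    G = H * F                            # step 3: G(u,v) = H(u,v) F(u,v)
    g = np.real(np.fft.ifft2(G))
    return g * (-1.0) ** (x + y)         # undo the centering
```

With H ≡ 1 the round trip reproduces the input, a quick sanity check before trying real filters.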
9.3
In this section we will implement some basic types of smoothing filters: the ideal lowpass filter (ILPF) H_ilp(u, v), Eq. 9.7; the Butterworth lowpass filter (BLPF) H_blp(u, v), Eq. 9.8; and the Gaussian lowpass filter (GLPF) H_glp(u, v), Eq. 9.9.
H_ilp(u, v) = 1, if D(u, v) ≤ D0;  0, if D(u, v) > D0    (9.7)

H_blp(u, v) = 1 / (1 + [D(u, v)/D0]^{2n})    (9.8)

H_glp(u, v) = exp[−D²(u, v) / 2σ²]    (9.9)
where D0 is a specified nonnegative number (the cutoff frequency), D(u, v) is the distance of point (u, v) from the center of the frequency rectangle, n is the order of the Butterworth lowpass filter, and σ is the standard deviation of the Gaussian.
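As an illustrative NumPy sketch of the three transfer functions of Eq. 9.7 to Eq. 9.9 (the function names are my own; σ = D0 in the Gaussian is a common convention, assumed here):

```python
import numpy as np

def distance_grid(M, N):
    """D(u, v): distance from the center of the M x N frequency rectangle."""
    u, v = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')
    return np.hypot(u - M / 2, v - N / 2)

def ilpf(M, N, D0):
    return (distance_grid(M, N) <= D0).astype(float)             # Eq. 9.7

def blpf(M, N, D0, n=2):
    return 1.0 / (1.0 + (distance_grid(M, N) / D0) ** (2 * n))   # Eq. 9.8

def glpf(M, N, D0):
    D = distance_grid(M, N)
    return np.exp(-D ** 2 / (2.0 * D0 ** 2))                     # Eq. 9.9, sigma = D0
```

Note the Butterworth filter passes exactly half the amplitude at D(u, v) = D0, which is why D0 is called its cutoff.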
9.4
Sharpening filters attenuate the lower frequencies without disturbing the higher frequencies. In this section we will discuss highpass filters, generally expressed in Eq. 9.10. We will implement the ideal highpass filter (IHPF) H_ihp(u, v), Eq. 9.11; the Butterworth highpass filter (BHPF) H_bhp(u, v), Eq. 9.12; and the Gaussian highpass filter (GHPF) H_ghp(u, v), Eq. 9.13.
H_hp(u, v) = 1 − H_lp(u, v)    (9.10)

H_ihp(u, v) = 0, if D(u, v) ≤ D0;  1, if D(u, v) > D0    (9.11)

H_bhp(u, v) = 1 / (1 + [D0/D(u, v)]^{2n})    (9.12)

H_ghp(u, v) = 1 − exp[−D²(u, v) / 2σ²]    (9.13)

9.5
The background of an image is reduced to near black when it is passed through a highpass filter, so we need to add the original image to the highpass filtered image to preserve the gray levels in the background. This technique is called unsharp masking, Eq. 9.14, and more generally highboost filtering, Eq. 9.15.

f_um(x, y) = f(x, y) − f_lp(x, y)    (9.14)

f_hb(x, y) = A·f(x, y) − f_lp(x, y)    (9.15)

where f_lp(x, y) is a filtered image obtained by applying any of the lowpass filters discussed in Sec. 9.3. The outputs f_um(x, y) and f_hb(x, y) are highboosted images.
Sometimes it is advantageous to emphasize the contribution made to enhancement by the high-frequency components of an image. In this case we simply multiply the HPF by a constant, b, and add an offset, a, so that the zero frequency component is not eliminated by the filter. This process is called high frequency emphasis. We will implement this process using the transfer function H_hfe(u, v) in Eq. 9.16.

H_hfe(u, v) = a + b·H_hp(u, v)    (9.16)

9.6
Practical
Lab 9
9.6.1
Activity No.1
Compute the Fourier transform of an image and show the Fourier spectrum. Comment on the PSD of the image by indicating lower and higher frequencies.
9.6.2
Use the fft2() and fftshift() functions to take the Fourier transform of the image and shift it, and then display the Fourier spectrum.
9.6.3
Activity No. 2
Implement and compare the results of the ILPF, BLPF, and GLPF for different values of D0.
9.6.4
You can use both your own MATLAB code and the DIPUM function lpfilter() to find the transfer functions of these filters, and then apply them to the DFT of the image to obtain the final output.
9.6.5
Activity No. 3
Implement and compare the results of the IHPF, BHPF, and GHPF for different values of D0.
9.6.6
You can use both your own MATLAB code and the DIPUM function hpfilter() to find the transfer functions of these filters, and then apply them to the DFT of the image to obtain the final output.
9.6.7
Activity No. 4
Take an image and implement the sharpening techniques discussed in Sec. 9.5.
9.6.8
Questions
1. Predict the frequencies of slowly varying and rapidly varying gray levels in an image and show them in the frequency rectangle.
2. Why do we need to multiply an image by (−1)^{x+y}? What happens if we do not?
3. What are the effects on the output image when the cutoff frequency in the ILPF and BLPF is increased or decreased?
4. What causes ringing effects and how can we resolve them?
5. What are the effects on the output image when the standard deviation is changed in the GLPF and GHPF?
6. Why do we use an offset in high frequency emphasis filters?
Lab 10
10.1
Introduction
Image restoration is an objective process, in contrast to the subjective process of image enhancement. The basic concept of restoration is to reconstruct or recover a degraded image by using a priori knowledge of the degradation phenomenon and the statistical nature of the noise. Statistical models are used to model the noise produced during image acquisition, transmission, sudden switching of the camera, and so on. Our approach in this lab will be to apply the reverse process of degradation. We will use spatial domain restoration when noise is additive to the input image, and frequency domain restoration for degradations such as image blur.
Fig. ?? shows a model of the degradation process and its restoration. The input image f(x, y) is degraded by a degradation function H and additive noise η(x, y). The degraded output g(x, y), Eq. 10.1, is estimated as f̂(x, y), Eq. 10.2, by restoration filters.

g(x, y) = H[f(x, y)] + η(x, y)    (10.1)
(10.2)
Principal sources of noise are image acquisition and transmission. Noise is either assumed to be independent of the spatial coordinates, or periodic noise that depends upon the spatial coordinates.
10.2
We will use spatial filtering when only additive noise is present in the image. Its application is exactly like that of the spatial filters discussed in Lab 8, but here the filters have a different computational nature.
10.2.1
Mean Filters
In this section of the lab we will implement spatial filters that restore a noisy image. Let S_xy represent the set of coordinates in a rectangular subimage window of size m x n, centered at point (x, y). The arithmetic mean filter, Eq. 10.3, computes the average value of the corrupted image g(s, t) in the area defined by S_xy.
f̂(x, y) = (1/mn) · Σ_{(s,t)∈Sxy} g(s, t)    (10.3)
Geometric mean filtering, Eq. 10.4, of the noisy image yields a smoother result but loses some detail.

f̂(x, y) = [ Π_{(s,t)∈Sxy} g(s, t) ]^{1/mn}    (10.4)
The harmonic mean and contraharmonic mean filters are given in Eq. 10.5 and Eq. 10.6, respectively. The harmonic mean works well for salt noise but fails to produce the desired restored image in the case of pepper noise, while the contraharmonic filter, Eq. 10.6, depends upon the value of the filter order Q.
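An illustrative NumPy sketch of the contraharmonic filter of Eq. 10.6 (window size, border handling, and the zero-denominator guard are choices of this sketch):

```python
import numpy as np

def contraharmonic(g, Q, n=3):
    """Contraharmonic mean filter of Eq. 10.6. Q > 0 removes pepper noise,
    Q < 0 removes salt noise, and Q = 0 reduces to the arithmetic mean."""
    g = g.astype(float)
    pad = n // 2
    gp = np.pad(g, pad, mode='edge')
    out = np.empty_like(g)
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            win = gp[x:x + n, y:y + n]
            num = np.sum(win ** (Q + 1))
            den = np.sum(win ** Q)
            out[x, y] = num / den if den != 0 else 0.0
    return out
```

The sign of Q is the key design choice: raising the pixels to Q + 1 over Q weights the average toward either the bright or the dark extreme of the window.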
f̂(x, y) = mn / Σ_{(s,t)∈Sxy} [1 / g(s, t)]    (10.5)

f̂(x, y) = Σ_{(s,t)∈Sxy} g(s, t)^{Q+1} / Σ_{(s,t)∈Sxy} g(s, t)^{Q}    (10.6)

10.2.2
Order-Statistic Filters
As discussed in Sec. 7.2, the response of order statistic filters is based on the ranking of the pixels in a neighborhood area. We will use the median filter, Eq. 10.7, the max filter, Eq. 10.8, the min filter, Eq. 10.9, the midpoint filter, Eq. 10.10, and the alpha-trimmed mean filter, Eq. 10.11, to restore a noisy image.
f̂(x, y) = median_{(s,t)∈Sxy} {g(s, t)}    (10.7)

f̂(x, y) = max_{(s,t)∈Sxy} {g(s, t)}    (10.8)

f̂(x, y) = min_{(s,t)∈Sxy} {g(s, t)}    (10.9)

f̂(x, y) = (1/2) · [ max_{(s,t)∈Sxy} {g(s, t)} + min_{(s,t)∈Sxy} {g(s, t)} ]    (10.10)

f̂(x, y) = (1/(mn − d)) · Σ_{(s,t)∈Sxy} g_r(s, t)    (10.11)
10.2.3
Fig. 10.1 compares the implementation of least mean square (Wiener) filtering with inverse filtering. The Wiener filter incorporates both the degradation function and the noise: it restores an image f̂(x, y), Eq. 10.12, from the original image f(x, y) such that the mean square error between them is minimized.
F̂(u, v) = [ (1/H(u, v)) · |H(u, v)|² / (|H(u, v)|² + S_η(u, v)/S_f(u, v)) ] · G(u, v)    (10.12)

10.3
Practical
10.3.1
Activity No.1
Compare and contrast the image given in Fig. ?? and its histogram after adding each of the following noise types to it.
1. Gaussian
2. Raleigh
3. Gamma
4. Exponential
5. Uniform
6. Salt-and-pepper
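In MATLAB these noise types come from imnoise(); as an illustrative reference, two of the models above can be sketched in NumPy as follows (the function names, parameter defaults, and the fixed seed are my own):

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed for reproducible sketches

def add_gaussian(f, mean=0.0, sigma=10.0):
    """Additive Gaussian noise, clipped back to the uint8 range."""
    g = f.astype(float) + rng.normal(mean, sigma, f.shape)
    return np.clip(g, 0, 255).astype(np.uint8)

def add_salt_pepper(f, density=0.05):
    """Salt-and-pepper (impulse) noise: a NumPy stand-in for
    imnoise(f, 'salt & pepper', density)."""
    g = f.copy()
    r = rng.random(f.shape)
    g[r < density / 2] = 0              # pepper: forced to black
    g[r > 1 - density / 2] = 255        # salt: forced to white
    return g
```

Comparing the histograms before and after each model is exactly the point of the activity: Gaussian noise spreads the histogram, while impulse noise adds spikes at 0 and 255.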
10.3.2
Activity No. 2
10.3.3
You can use both your own MATLAB code and the DIPUM function lpfilter() to find the transfer functions of these filters, and then apply them to the DFT of the image to obtain the final output.
10.3.4
Activity No. 3
Implement and compare the results of the IHPF, BHPF, and GHPF for different values of D0.
10.3.5
You can use both your own MATLAB code and the DIPUM function hpfilter() to find the transfer functions of these filters, and then apply them to the DFT of the image to obtain the final output.
10.3.6
Activity No. 4
10.3.7
Questions
1. Which noise model best describes the electrical and electromechanical interference during image acquisition?
2. Which noise model best describes electronic noise and sensor noise due to poor illumination and/or high temperature?
3. Which noise model best characterizes the phenomena in range imaging?
4. Which noise model best describes the phenomena of laser imaging?
5. Which noise model best describes faulty switching during image acquisition?
6. The uniform density noise model is used to describe which type of noisy situation?
Lab 11
11.1
Histogram equalization of a color image spreads the intensity component without affecting hue and saturation.
11.2
Practical
11.2.1
Activity No.1
Write MATLAB code that takes a color image and plot its various color space components.
11.2.2
Activity No. 2
Prove that RGB transformation functions and complement of a color image are identical.
11.2.3
Activity No. 3
Find the histogram of a color image and use histogram equalization to uniformly distribute the intensity components.
11.2.4
Read a color image and take its complement and its RGB transformation.
11.2.5
Questions
Lab 12
12.1
The main difference between gray scale image smoothing and color image smoothing is that we deal with component vectors of the form given in Eq. 12.1.

c(x, y) = [c_R(x, y), c_G(x, y), c_B(x, y)]^T = [R(x, y), G(x, y), B(x, y)]^T    (12.1)
The average of the K pixels in the neighborhood S_xy centered at (x, y) is given by Eq. 12.2, and the average in component form in Eq. 12.3.

c̄(x, y) = (1/K) · Σ_{(s,t)∈Sxy} c(s, t)    (12.2)

c̄(x, y) = [ (1/K)·Σ_{(s,t)∈Sxy} R(s, t), (1/K)·Σ_{(s,t)∈Sxy} G(s, t), (1/K)·Σ_{(s,t)∈Sxy} B(s, t) ]^T    (12.3)
The averaging is carried out on a per-color-plane basis, i.e., averaging the Red, Green, and Blue color planes separately.
12.2
Color image sharpening is carried out using Laplacian masks; the Laplacian of a color image is equal to the Laplacian of each individual scalar component of the input vector, as given in Eq. 12.4.

∇²[c(x, y)] = [∇²c_R(x, y), ∇²c_G(x, y), ∇²c_B(x, y)]^T = [∇²R(x, y), ∇²G(x, y), ∇²B(x, y)]^T    (12.4)
12.3
Practical
12.3.1
Activity No.1
12.3.2
Activity No. 2
12.3.3
Questions
1. Sharpen the edges of a color image using the gradient masks in Fig. 12.1 by summing the three individual gradient vectors. Are there any erroneous results? How can we resolve them?
Figure 12.1: (a) Pixels arrangement (b) Mask for extracting horizontal edges (c) Mask for
extracting vertical edges
Lab 13
13.1
The Discrete Wavelet Transform (DWT) provides powerful insight into the spatial and frequency characteristics of an image. The DWT of an image f(x, y) of size M x N is given in Eq. 13.1 and its inverse is given in Eq. 13.2.
T(u, v, ...) = Σ_{x,y} f(x, y) · g_{u,v,...}(x, y)    (13.1)

f(x, y) = Σ_{u,v,...} T(u, v, ...) · h_{u,v,...}(x, y)    (13.2)
where u, v, ... are transform domain variables and g_{u,v,...} and h_{u,v,...} are the forward and inverse transform kernels, respectively. We will use the Wavelet Toolbox's wavedec2 function and the fast wavelet transform function wavefast to compute the wavelet transform of an image.
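Before using wavedec2/wavefast, it helps to see what one decomposition level produces. A minimal NumPy sketch of a single-level 2-D Haar transform, an illustrative stand-in only (function name and even-size assumption are my own):

```python
import numpy as np

def haar_dwt2(f):
    """One level of a 2-D Haar wavelet transform. Returns the approximation
    (LL) and the three detail subbands (LH, HL, HH). Sides must be even."""
    f = f.astype(float)
    # rows: lowpass (scaled average) and highpass (scaled difference)
    lo = (f[:, 0::2] + f[:, 1::2]) / np.sqrt(2)
    hi = (f[:, 0::2] - f[:, 1::2]) / np.sqrt(2)
    # columns of each intermediate result
    LL = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    LH = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    HL = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    HH = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return LL, LH, HL, HH
```

For a smooth image almost all the energy lands in LL, which is why zeroing the approximation coefficients (as in Activity No. 2 below) is so destructive.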
13.2
Practical
13.2.1
Activity No.1
Read a color image and take its Wavelet transform using wavef ast and wavedec2 functions
and compare the speed of calculation using tic and toc commands or etime command.
13.2.2
Activity No. 2
In Act. 13.2.1, magnify the details and absolute values. Show the result of zeroing all the approximation coefficients and taking the absolute value of the wavelet transform. Comment on your result.
13.2.3
Questions
Lab 14
Image Compression
14.1
Practical
14.1.1
Activity No.1
Read a simple 4 x 4 image whose histogram is given in Fig. ??. Now model the symbol probabilities and find the entropy and average code word length of this image. Extend your idea to 512 x 512 or other resolution gray scale images.
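The entropy calculation asked for above can be sketched in NumPy (the function name and the toy 4 x 4 image are my own; they are not the image of Fig. ??):

```python
import numpy as np

def entropy_bits(img):
    """First-order estimate of entropy (bits/pixel) from the
    normalized histogram of the image."""
    _, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()          # symbol probabilities
    return -np.sum(p * np.log2(p))

# four gray levels, each occurring with probability 1/4
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]], dtype=np.uint8)
```

Four equally likely symbols give an entropy of exactly 2 bits/pixel, the lower bound on the average code word length for any lossless code on this source.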
14.1.2
Activity No.2
Huffman encoding uses variable length coding, based upon the probabilities of occurrence of the gray levels, to compress an image. Huffman encoding can be accomplished using the huffman() function. Now read a simple 16-byte 4 x 4 image in which each gray level is represented by 8 bits. Use variable length coding schemes to compress the image and find the compression ratio. Extend your implementation to other resolutions. Also perform the reverse process, decoding, to obtain the original image.
14.1.3
Activity No.3
Use LZW encoding/decoding for the images that you read in Act. 14.1.2.
14.1.4
Activity No.4
In MATLAB an IGS image can be obtained using the quantize() function. Now take an input image, quantize it, and comment on the results. Find the compression ratio after quantization and implement further (Huffman) encoding for additional compression.
14.1.5
Questions
1. Write MATLAB code for converting a Gray-coded number to its binary equivalent and use it to decode a given binary number.
2. Write code for the LZW and Huffman compression algorithms on an image and find the relative compression ratios.
Lab 15
15.1
T(u, v) = Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x, y) · g(x, y, u, v)    (15.1)

f(x, y) = Σ_{u=0}^{N−1} Σ_{v=0}^{N−1} T(u, v) · h(x, y, u, v)    (15.2)

g(x, y, u, v) = h(x, y, u, v) = α(u)·α(v)·cos[(2x + 1)uπ / 2N]·cos[(2y + 1)vπ / 2N]    (15.3)

where

α(u) = √(1/N), for u = 0;  α(u) = √(2/N), for u = 1, 2, ..., N − 1.    (15.4)
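A direct NumPy evaluation of the kernel of Eq. 15.3 and Eq. 15.4 follows; it is O(N⁴) and intended for study only (MATLAB's dct2 computes the same coefficients efficiently; the function names here are my own):

```python
import numpy as np

def alpha(u, N):
    """Normalization factor of Eq. 15.4."""
    return np.sqrt(1.0 / N) if u == 0 else np.sqrt(2.0 / N)

def dct2_direct(f):
    """Direct 2-D DCT of an N x N image using the kernel of Eq. 15.3."""
    N = f.shape[0]
    T = np.zeros((N, N))
    x = np.arange(N)
    for u in range(N):
        for v in range(N):
            cu = np.cos((2 * x + 1)[:, None] * u * np.pi / (2 * N))
            cv = np.cos((2 * x + 1)[None, :] * v * np.pi / (2 * N))
            T[u, v] = alpha(u, N) * alpha(v, N) * np.sum(f * cu * cv)
    return T
```

For a constant image only the DC coefficient T(0, 0) is nonzero, which is the energy-compaction property the quantization activities below rely on.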
15.2
Practical
15.2.1
Activity No. 1
Take an arbitrary 8 x 8 image using the magic or round(a*rand(8)) functions and take its 2-D DCT using the dct2 function; show the DCT coefficient matrix. Set the DCT coefficients with magnitude less than 10 to zero, then take the inverse DCT using the idct2 function.
15.2.2
Activity No. 2
Extend Act. 15.2.1 to a gray scale image. Set the values with magnitude less than 20 in the DCT matrix to zero, and then reconstruct the image using the inverse DCT function idct2.
15.2.3
Questions
Lab 16
Image Segmentation
16.1
Practical
16.1.1
Activity No.1
Read an image with a nearly invisible black point in the dark gray area of the north-east quadrant. Letting f denote this image, find the location of the point, as in Fig. 16.1.
16.1.2
Activity No. 2
Thresholding plays an important role in image extraction: a threshold is selected that separates a region from its background. Now read an image of a text and extract the text from it using global, local, and adaptive thresholding.
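The global variant can be sketched with the standard iterative threshold-selection scheme, shown below in NumPy purely as an illustration (the tolerance and the assumption of a roughly bimodal image are my own; MATLAB's graythresh, which implements Otsu's method, is an alternative):

```python
import numpy as np

def global_threshold(f, tol=0.5):
    """Basic global thresholding: iterate T = (mean below + mean above) / 2
    until the change falls below tol, then segment. Assumes the image has
    both pixels below and above the running threshold (roughly bimodal)."""
    f = f.astype(float)
    T = f.mean()                        # initial estimate
    while True:
        lo = f[f <= T]
        hi = f[f > T]
        T_new = 0.5 * (lo.mean() + hi.mean())
        if abs(T_new - T) < tol:
            break
        T = T_new
    return f > T_new, T_new             # binary mask and final threshold
```

For text extraction the mask directly separates dark ink from a light page; local and adaptive variants repeat the same idea per block or per neighborhood.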
16.1.3
Questions
1. A binary image contains straight lines oriented horizontally, vertically, at 45°, and at −45°. Give a set of 3 x 3 masks that can be used to detect 1-pixel-long breaks in these lines and implement them in your MATLAB code.