
ACKNOWLEDGEMENT

Every project bears the imprint of many people, and it becomes the duty of the
team to express deep gratitude for the same.

We are very grateful to the Head of the Computer Technology Department, Prof.
N. J. Janwe, for providing us with an environment in which to complete our project successfully,
and for his unwavering support during the entire course of this project.

We take this opportunity to express our gratitude towards our esteemed project guide,
Prof. N. J. Janwe, for his valuable guidance and support. We are also earnestly thankful to our
project in-charges, Prof. Manisha Pise and Prof. Manisha More, for their encouragement and
timely guidance.

We thank all the staff members for their indispensable support, priceless suggestions and
the valuable time they lent as and when required.

Finally, we take this opportunity to extend our deep appreciation to our project members
and friends, for all that they meant to us during the crucial times of the completion of our project.

INDEX

1. CHAPTER 1: ABSTRACT
2. CHAPTER 2: INTRODUCTION
   2.1 DEFINITION
   2.2 APPLICATIONS OF CBIR
   2.3 CBIR SYSTEMS
3. CHAPTER 3: LITERATURE SURVEY
4. CHAPTER 4: PROBLEM STATEMENT
   4.1 PROBLEM MOTIVATION
   4.2 PROBLEM STATEMENT
5. CHAPTER 5: PROPOSED SOLUTION
6. CHAPTER 6: SYSTEM REQUIREMENTS
   6.1 SOFTWARE REQUIREMENTS
   6.2 HARDWARE REQUIREMENTS
7. CHAPTER 7: TECHNOLOGIES (FEATURES)
   7.1 COLOR FEATURE
      7.1.1 COLOR SPACE
         7.1.1.a RGB COLOR MODEL
         7.1.1.b HSV COLOR MODEL
      7.1.2 COLOR CONVERSION
      7.1.3 HISTOGRAM-BASED SEARCH METHOD
      7.1.4 COLOR HISTOGRAM DEFINITION
      7.1.5 COLOR UNIFORMITY
      7.1.6 COLOR HISTOGRAM DISCRIMINATION
      7.1.7 HISTOGRAM EUCLIDEAN DISTANCE
      7.1.8 QUANTIZATION
      7.1.9 ADVANTAGES
      7.1.10 FLOWCHART FOR COLOR
   7.2 TEXTURE FEATURE
      7.2.1 DEFINITION
      7.2.2 METHODS OF REPRESENTATION
         7.2.2.1 CO-OCCURRENCE MATRIX
         7.2.2.2 TAMURA TEXTURE
         7.2.2.3 WAVELET TRANSFORM
         7.2.2.4 PYRAMID-STRUCTURED WAVELET TRANSFORM
            7.2.2.4.1 ENERGY LEVEL
            7.2.2.4.2 EUCLIDEAN DISTANCE
            7.2.2.4.3 FLOWCHART FOR TEXTURE
   7.3 SHAPE FEATURE
      7.3.1 DEFINITION
      7.3.2 METHODS OF REPRESENTATION
      7.3.3 HOUGH TRANSFORM
         7.3.3.1 BRIEF DESCRIPTION
         7.3.3.2 HOW IT WORKS
      7.3.4 FLOWCHART FOR SHAPE
8. CHAPTER 8: ARCHITECTURE OF PROJECT
9. CHAPTER 9: GRAPHICAL USER INTERFACE & SOURCE CODE
   9.1 GUI
   9.2 SOURCE CODE
10. CHAPTER 10: OUTPUT
   10.1 COLOR
   10.2 TEXTURE
   10.3 SHAPE
11. CHAPTER 11: ADVANTAGES AND DISADVANTAGES
   11.1 ADVANTAGES
   11.2 DISADVANTAGES
12. CHAPTER 12: CONCLUSION
13. CHAPTER 13: FUTURE EXPANSION
14. CHAPTER 14: REFERENCES
15. CHAPTER 15: BIBLIOGRAPHY

FIGURE INDEX

1. BLOCK DIAGRAM OF CBIR
2. RGB CO-ORDINATE SYSTEM
3. RGB COLOR MODEL
4. HSV CO-ORDINATE SYSTEM
5. HSV COLOR MODEL
6. OBTAINABLE HSV COLOR FROM RGB COLOR SPACE
7. FLOWCHART FOR COLOR
8. EXAMPLE OF TEXTURE
9. IMAGE EXAMPLE
10. CLASSICAL CO-OCCURRENCE MATRIX
11. HAAR WAVELET TRANSFORM
12. DAUBECHIES WAVELET TRANSFORM
13. PYRAMID-STRUCTURED WAVELET TRANSFORM
14. FLOWCHART FOR TEXTURE
15. BOUNDARY-BASED AND REGION-BASED
16. HOUGH TRANSFORM
17. CO-ORDINATE POINTS AND POSSIBLE STRAIGHT LINE FITTINGS
18. PARAMETRIC DESCRIPTION OF A STRAIGHT LINE
19. FLOWCHART FOR SHAPE
20. ARCHITECTURE OF PROJECT

CHAPTER 1

ABSTRACT
The purpose of this report is to describe our research on, and solution to, the problem of
designing a Content Based Image Retrieval (CBIR) system. It outlines the problem, the proposed
solution, the final solution and the accomplishments achieved. The need for CBIR development
arose from the enormous increase in the size of image databases, as well as their wide
deployment in various applications. This report first describes the primitive features of an
image: colour, texture and shape. These features are extracted and used as the basis for a
similarity check between images. The algorithms used to calculate the similarity between
extracted features are then explained. Our final result was a MATLAB-built software application,
with an image database, that utilized the texture and colour features of the images in the database
as the basis of comparison and retrieval. The structure of the final software application is
illustrated, and its performance is demonstrated by a detailed example.

CHAPTER 2

INTRODUCTION

2. Introduction to CBIR:-

As processors become increasingly powerful and memory becomes increasingly
cheap, the deployment of large image databases for a variety of applications has now become
realisable. Databases of art works, satellite imagery and medical imagery have been attracting more and
more users in various professional fields — for example, geography, medicine, architecture,
advertising, design, fashion, and publishing. Effectively and efficiently accessing desired images
from large and varied image databases is now a necessity.

2.1 Definition:-
CBIR or Content Based Image Retrieval is the retrieval of images based on visual features
such as colour, texture and shape. Reasons for its development are that in many large image
databases, traditional methods of image indexing have proven to be insufficient, laborious, and
extremely time consuming. These old methods of image indexing, ranging from storing an image
in the database and associating it with a keyword or number, to associating it with a categorized
description, have become obsolete. This is not CBIR. In CBIR, each image that is stored in the
database has its features extracted and compared to the features of the query image. It involves
two steps:

• Feature Extraction: The first step in the process is extracting image features to a
distinguishable extent.
• Matching: The second step involves matching these features to yield a result that is
visually similar.
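
As a rough illustration of these two steps, the sketch below uses a simple normalized grayscale histogram as the feature and a Euclidean distance for matching (a minimal sketch only; the file names are hypothetical, and the project itself uses the colour, texture and shape features described in Chapter 7):

% Minimal two-step CBIR sketch: feature extraction, then matching.
query = imread('query.jpg');            % hypothetical query image
qFeat = imhist(rgb2gray(query));        % Step 1: extract a feature
qFeat = qFeat / sum(qFeat);             % normalize the histogram

files = {'img1.jpg', 'img2.jpg'};       % hypothetical database list
best = ''; bestDist = Inf;
for k = 1:numel(files)
    img  = imread(files{k});
    feat = imhist(rgb2gray(img));
    feat = feat / sum(feat);
    d = norm(qFeat - feat);             % Step 2: match by distance
    if d < bestDist, bestDist = d; best = files{k}; end
end
disp(best)                              % most visually similar image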

2.2 Applications of CBIR:-


Examples of CBIR applications are:

• Crime prevention: Automatic face recognition systems, used by police forces.
• Security check: Fingerprint or retina scanning for access privileges.
• Medical diagnosis: Using CBIR in a database of medical images to aid diagnosis by
identifying similar past cases.
• Intellectual property: Trademark image registration, where a new candidate mark is
compared with existing marks to ensure there is no risk of confusing property ownership.

2.3 CBIR Systems:-


Several CBIR systems currently exist, and are being constantly developed. Examples are:

• QBIC, or Query By Image Content, was developed by IBM, Almaden Research Centre, to
allow users to graphically pose and refine queries based on multiple visual properties
such as colour, texture and shape. It supports queries based on input images, user-constructed
sketches, and selected colour and texture patterns.
• VIR Image Engine, by Virage Inc., like QBIC, enables image retrieval based on primitive
attributes such as colour, texture and structure. It examines the pixels in the image and
performs an analysis process, deriving image characterization features.
• VisualSEEK and WebSEEK were developed by the Department of Electrical
Engineering, Columbia University. Both these systems support colour and spatial
location matching as well as texture matching.

• NeTra was developed by the Department of Electrical and Computer Engineering,
University of California. It supports colour, shape, spatial layout and texture matching, as
well as image segmentation.
• MARS, or Multimedia Analysis and Retrieval System, was developed by the Beckman
Institute for Advanced Science and Technology, University of Illinois. It supports colour,
spatial layout, texture and shape matching.
• Viper, or Visual Information Processing for Enhanced Retrieval, was developed at the
Computer Vision Group, University of Geneva. It supports colour and texture matching.

CHAPTER 3

LITERATURE SURVEY
Histograms computed for the query image and the corresponding database images
in RGB color space serve as local descriptors of color and texture. From
Reference [1], we studied and implemented Color Histograms and the Quadratic Distance
formula for comparing two histograms.
The Hough Transform technique for shape detection was implemented with
reference to [2], as given in the reference chapter.
The concept of edge detection was implemented with reference to
Reference [3].
The concepts of color space and color segmentation were studied with
reference to Reference [4].

12
CHAPTER 4

PROBLEM STATEMENT

4.1. Problem Motivation:-

Image databases and collections can be enormous in size, containing hundreds, thousands
or even millions of images. The conventional method of image retrieval is searching for a
keyword that matches a descriptive keyword assigned to the image by a human
categorizer [6]. Although several systems already exist, the retrieval
of images based on their content, called Content Based Image Retrieval (CBIR), is still under
development. While computationally more expensive, its results are far more accurate than
conventional image indexing, so there exists a tradeoff between accuracy and computational
cost. This tradeoff decreases as more efficient algorithms are developed and computational
power becomes less expensive.

4.2. Problem Statement:-

The problem involves entering an image as a query into a software application that is
designed to employ CBIR techniques to extract visual properties and match them, in order to
retrieve images in the database that are visually similar to the query image.

CHAPTER 5

PROPOSED SOLUTION:-
The solution initially proposed was to extract the primitive features of a query image and
compare them to those of database images. The image features under consideration were colour,
texture and shape. Thus, using matching and comparison algorithms, the colour, texture and
shape features of one image are compared and matched to the corresponding features of another
image. This comparison is performed using colour, texture and shape distance metrics. In the
end, these metrics are performed one after another, so as to retrieve database images that are
similar to the query. The similarity between features was to be calculated using algorithms used
by well known CBIR systems such as IBM's QBIC. For each specific feature there was a specific
algorithm for extraction and another for matching.

Figure: Block Diagram of CBIR

CHAPTER 6

SYSTEM REQUIREMENTS

6.1. Software Requirements:-

MATLAB:-

MATLAB is a software program that was developed for engineers, scientists,
mathematicians, and educators who need technical computing applications. MATLAB can be
used for several kinds of application, including data analysis and exploration, mathematical
algorithms, modeling and simulation, visualization and image processing, and programming and
application development.

MATLAB is a numerical computing environment and fourth-generation programming
language. Developed by The MathWorks, MATLAB allows matrix manipulation, plotting of
functions and data, implementation of algorithms, creation of user interfaces, and interfacing
with programs in other languages.

Images can be conveniently represented as matrices in MATLAB. One can open an image
as a matrix using the imread command. Depending on the image type, the matrix may simply be
an m x n array (grayscale), a three-dimensional m x n x 3 array (truecolor), or an indexed matrix
with an associated colormap. Image processing may then be done simply by matrix calculation
or matrix manipulation.
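
For example (a small illustrative snippet; the file name is hypothetical):

A = imread('flower.jpg');   % truecolor image: an m x n x 3 uint8 array
size(A)                     % e.g. 384 512 3
G = rgb2gray(A);            % m x n grayscale matrix
B = G / 2;                  % simple matrix manipulation: darken the image
imshow(B);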

6.2. Hardware Requirements:-

• RAM of 1 GB capacity
• Hard disk of 40 GB capacity

CHAPTER 7

TECHNOLOGIES

7. Feature:-
Feature is anything that is localized, meaningful and detectable. In an image noticeable
features include corners, lines, objects, color, shape, spatial location, motion and texture. No
particular visual feature is most suitable for retrieval of all types of images.

• The color feature is most suitable for describing and representing color images.

• Texture is most suitable for describing and representing visual patterns, surface properties
and scene depth. A CBIR system using texture is particularly useful for satellite images,
medical images and natural scenes such as clouds.
• Shape is suitable for representing and describing the boundaries and edges of real-world
objects. In reality, no one particular feature can completely describe an image.

7.1. Color Feature:-

Histogram-based search method can be applied in two different color spaces. Histogram
search characterizes an image by its color distribution, or histogram. Many histogram distances
can be used to define the similarity of two color histogram representations. Euclidean distance
and its variations can be used.

7.1.1. Color Space:-

A color space is defined as a model for representing color in terms of intensity values.
Typically, a color space defines a one- to four-dimensional space. The following two models are
commonly used in color image retrieval systems.

7.1.1.a. RGB Color Model:-

The RGB color model is composed of the primary colors Red, Green, and Blue. They
are considered the "additive primaries", since the colors are added together to produce the desired
color. The RGB model uses the Cartesian coordinate system, as shown in Figure 1(a). Notice
the diagonal from (0,0,0) black to (1,1,1) white, which represents the grey-scale. Figure 1(b) is a
view of the RGB color model looking down from "White" to the origin.

Figure 1. (a) RGB coordinate system; (b) RGB color model

7.1.1.b. HSV Color Model:-

HSV stands for Hue, Saturation, and Value, based on the artist's concepts of tint, shade,
and tone. The coordinate system is a hexacone, shown in Figure 2(a); Figure 2(b) shows a view of the
HSV color model. The Value represents the intensity of a color. The hue and saturation components
are intimately related to the way the human eye perceives color, resulting in image processing
algorithms with a physiological basis. As hue varies from 0 to 1.0, the corresponding colors vary
from red, through yellow, green, cyan, blue, and magenta, back to red, so that there are actually
red values both at 0 and 1.0. As saturation varies from 0 to 1.0, the corresponding colors (hues)
vary from unsaturated (shades of gray) to fully saturated (no white component). As value, or
brightness, varies from 0 to 1.0, the corresponding colors become increasingly brighter.

Figure 2. (a) HSV coordinate system; (b) HSV color model

7.1.2. Color Conversion:-

In order to use a good color space for a specific application, color conversion is needed
between color spaces. A good color space for an image retrieval system should preserve
perceived color differences; in other words, the numerical Euclidean difference should
approximate the human-perceived difference.

RGB to HSV conversion

In Figure 3, the obtainable HSV colors lie within a triangle whose vertices are defined by the
three primary colors in RGB space:

Figure 3. Obtainable HSV color from RGB color space 

The hue of the point P is the angle between the line connecting P to the triangle
center and the line connecting the RED vertex to the triangle center. The saturation of the point P is the
distance between P and the triangle center. The value (intensity) of the point P is represented as the
height on a line perpendicular to the triangle and passing through its center. The grayscale points
are situated along that same line. The conversion formulae are as follows:

$H = \cos^{-1}\!\left(\frac{\frac{1}{2}\left[(R-G)+(R-B)\right]}{\sqrt{(R-G)^2+(R-B)(G-B)}}\right), \qquad S = 1 - \frac{3\min(R,G,B)}{R+G+B}, \qquad V = \frac{R+G+B}{3}$
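
In MATLAB, this conversion is available directly through rgb2hsv; a minimal sketch (the file name is hypothetical):

rgbImg = im2double(imread('query.jpg'));   % hypothetical image, scaled to [0,1]
hsvImg = rgb2hsv(rgbImg);                  % H, S and V planes, each in [0,1]
H = hsvImg(:,:,1);                         % hue
S = hsvImg(:,:,2);                         % saturation
V = hsvImg(:,:,3);                         % value (intensity)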
                    

7.1.3. Histogram-Based Search Method:-

The color histogram for an image is constructed by counting the number of pixels of
each color. The following steps are followed for computing the color histogram:

(1) Selection of a color space.

(2) Quantization of the color space.

(3) Computation of histograms.

(4) Derivation of the histogram distance function.

(5) Identification of indexing shortcuts.

Each of these steps may be crucial towards developing a successful algorithm. 
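
A compact sketch of steps (1)-(3), assuming the RGB color space and the 8x8x8 quantization described in Section 7.1.8 (the file name is hypothetical):

img = imread('query.jpg');                  % (1) image in RGB color space
q   = floor(double(img) / 32);              % (2) quantize each channel to 8 levels (0..7)
idx = q(:,:,1)*64 + q(:,:,2)*8 + q(:,:,3);  % combined bin index, 0..511
h   = histc(idx(:), 0:511);                 % (3) count pixels per bin
h   = h / numel(idx);                       % normalized 512-bin histogram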

7.1.4. Color Histogram Definition:-

An image histogram refers to the probability mass function of the image intensities. This
is extended for color images to capture the joint probabilities of the intensities of the three color
channels. More formally, the color histogram is defined by

$h_{A,B,C}(a,b,c) = N \cdot \mathrm{Prob}(A = a,\ B = b,\ C = c)$

where A, B and C represent the three color channels (R, G, B or H, S, V) and N is the number of
pixels in the image.

7.1.5. Color Uniformity:-

The RGB color space is far from perceptually uniform. To obtain a good color
representation of an image by uniformly sampling the RGB space, the quantization step sizes
must be fine enough that distinct colors are not assigned to the same bin. The drawback is that
such oversampling also produces a larger set of colors than may be needed, and the increase in
the number of histogram bins impacts the performance of database retrieval.
    However, the HSV color space offers improved perceptual uniformity. It represents with
equal emphasis the three components that characterize color: Hue, Saturation and Value
(intensity). This separation is attractive because color image processing performed independently
on the color channels does not introduce false colors, and it is easier to compensate for
many artifacts and color distortions; for example, lighting and shading artifacts can typically be
isolated in the lightness channel. Thus we have converted the RGB color space to the HSV color
space and computed the histogram there.

7.1.6. Color Histogram Discrimination:-

There are several distance formulas for measuring the similarity of color histograms.
This is because visual perception, rather than closeness of the probability distributions,
determines similarity. Essentially, the color distance formulas arrive at a measure of similarity
between images based on the perception of color content.
Three distance formulas have been used for image retrieval: histogram Euclidean
distance, histogram intersection and histogram quadratic (cross) distance. These three histogram
distance measures are used in RGB color space and in HSV color space separately, which makes
up the six retrieval methods.

7.1.7. Histogram Euclidean Distance:-

Let h and g represent two color histograms. The Euclidean distance between the color
histograms h and g can be computed as:

$d^2(h,g) = \sum_{a}\sum_{b}\sum_{c}\left(h(a,b,c) - g(a,b,c)\right)^2$

In this distance formula, only identical bins in the respective histograms are compared.
Two different bins may represent perceptually similar colors, but they are not
compared cross-wise. All bins contribute equally to the distance.
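
A minimal sketch in MATLAB, assuming h and g are two histograms of equal length (for example, the 512-bin histograms from the sketch in Section 7.1.3):

% Euclidean distance between two color histograms h and g.
d = sqrt(sum((h - g).^2));
% Equivalently: d = norm(h - g);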

7.1.8. Quantization:-

In order to reduce computation time, a 256x256x256 = 16,777,216-color image is
quantized into an 8x8x8 = 512-color image in RGB color space; otherwise, it would take a very
long time to compare 16,777,216 histogram bins against one another. This is a trade-off
between performance and time. An RGB color space image is first transformed into an HSV
color space image. In HSV color space, H is quantized to 18 levels and S and V are quantized to 3
levels each, so the quantized HSV space has 18x3x3 = 162 histogram bins. Once a query image and a
retrieval method are chosen by the user, the rest of the process is done automatically. However,
the histogram data for all images in the database are computed and saved in the database (DB) in
advance, so that only the image indexes and histogram data are needed to compare the query
image with the images in the DB.
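
A sketch of this 18x3x3 HSV quantization (a minimal illustration, not the project's exact binning code; the file name is hypothetical):

hsvImg = rgb2hsv(im2double(imread('query.jpg')));   % hypothetical image
Hq = min(floor(hsvImg(:,:,1) * 18), 17);   % H quantized to 18 levels (0..17)
Sq = min(floor(hsvImg(:,:,2) * 3), 2);     % S quantized to 3 levels (0..2)
Vq = min(floor(hsvImg(:,:,3) * 3), 2);     % V quantized to 3 levels (0..2)
idx = Hq*9 + Sq*3 + Vq;                    % bin index, 0..161
h162 = histc(idx(:), 0:161);               % 162-bin HSV histogram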

7.1.9. Advantage:-

From the viewpoint of computation time, using HSV color space is faster than using RGB
color space. Thus, we conclude that histogram-Euclidean-based image retrieval in HSV
color space is the most desirable of the six retrieval methods mentioned, considering both
computation time and retrieval effectiveness.

7.1.10. Flowchart for Color:-

7.2. Texture Feature:-

7.2.1. Definition:-

Texture is that innate property of all surfaces that describes visual patterns, each having
properties of homogeneity. It contains important information about the structural arrangement of
surfaces such as clouds, leaves, bricks and fabric, and it describes the relationship of a
surface to its surrounding environment. In short, it is a feature that describes the distinctive
physical composition of a surface.


Figure: Examples of textures: (a) Clouds, (b) Bricks, (c) Rocks

Texture properties include:

• Coarseness
• Contrast
• Directionality
• Line-likeness
• Regularity
• Roughness

Texture is one of the most important defining features of an image. It is characterized by
the spatial distribution of gray levels in a neighborhood [8]. In order to capture the spatial
dependence of gray-level values, which contributes to the perception of texture, a
two-dimensional texture analysis matrix is taken into consideration. This two-dimensional
matrix is obtained by decoding the image file (JPEG, BMP, etc.).

7.2.2. Methods of Representation:-

There are three principal approaches used to describe texture: statistical, structural and
spectral.

• Statistical techniques characterize textures using the statistical properties of the grey
levels of the points/pixels comprising a surface image. Typically, these properties are
computed using the grey-level co-occurrence matrix of the surface, or the wavelet
transformation of the surface.
• Structural techniques characterize textures as being composed of simple primitive
structures called "texels" (or texture elements), arranged regularly on a surface
according to some surface arrangement rules.
• Spectral techniques are based on properties of the Fourier spectrum and describe the
global periodicity of the grey levels of a surface by identifying high-energy peaks in
the spectrum.

For classification purposes, what concerns us here are the statistical techniques of
characterization, because it is these techniques that yield computable texture properties.
The most popular statistical representations of texture are:

• Co-occurrence Matrix
• Tamura Texture
• Wavelet Transform

7.2.2.1. Co-occurrence Matrix:-


Originally proposed by R. M. Haralick, the co-occurrence matrix representation of texture
features explores the grey-level spatial dependence of texture. A mathematical definition of the
co-occurrence matrix is as follows:

- Given a position operator P(i,j),
- let A be an n x n matrix
- whose element A[i][j] is the number of times that points with grey level (intensity)
g[i] occur, in the position specified by P, relative to points with grey level g[j].
- Let C be the n x n matrix that is produced by dividing A by the total number of
point pairs that satisfy P. C[i][j] is a measure of the joint probability that a pair of
points satisfying P will have values g[i], g[j].
- C is called a co-occurrence matrix defined by P.

Examples of the operator P are: "i above j", or "i one position to the right and two below j", etc.
This can also be illustrated as follows. Let t be a translation; then a co-occurrence matrix C_t of
a region is defined for every grey-level pair (a, b) by [1]:

$C_t(a,b) = \mathrm{card}\,\{\,(s,\ s+t) \in R^2 \mid A[s] = a,\ A[s+t] = b\,\}$


Here, C_t(a, b) is the number of site-couples, denoted by (s, s + t), that are separated by a
translation vector t, with a being the grey level of s and b being the grey level of s + t. For
example, with an 8-grey-level image representation and a vector t that considers only one
neighbor, we would find:

Figure: Image example

Figure: Classical Co-occurrence matrix

At first the co-occurrence matrix is constructed, based on the orientation and distance
between image pixels. Then meaningful statistics are extracted from the matrix as the texture
representation. Haralick proposed the following texture features:

1. Angular Second Moment


2. Contrast
3. Correlation
4. Variance
5. Inverse Second Differential Moment
6. Sum Average
7. Sum Variance
8. Sum Entropy
9. Entropy

10. Difference Variance
11. Difference Entropy
12. Measure of Correlation 1
13. Measure of Correlation 2
14. Local Mean

Hence, for each chosen distance and angle we obtain a co-occurrence matrix. These co-occurrence
matrices represent the spatial distribution and the dependence of the grey levels
within a local area. Each (i,j)-th entry of a matrix represents the probability of going from one
pixel with grey level i to another with grey level j under the predefined distance and
angle. From these matrices, sets of statistical measures, called feature vectors, are computed.
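
MATLAB's Image Processing Toolbox provides this representation directly. Below is a brief sketch (the image file name is hypothetical, and the chosen offset and statistics are illustrative, not the project's exact configuration):

I = rgb2gray(imread('texture.jpg'));   % hypothetical image file
% Co-occurrence matrix for the offset "one pixel to the right",
% with grey levels quantized to 8, as in the example above.
glcm = graycomatrix(I, 'NumLevels', 8, 'Offset', [0 1]);
% A few Haralick-style statistics collected into a feature vector:
stats = graycoprops(glcm, {'Contrast', 'Correlation', 'Energy', 'Homogeneity'});
fv = [stats.Contrast, stats.Correlation, stats.Energy, stats.Homogeneity];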

7.2.2.2 Tamura Texture:-

By observing psychological studies of human visual perception, Tamura explored
texture representation using computational approximations to the three main texture features of
coarseness, contrast, and directionality. Each of these texture features is approximately
computed by an algorithm.

• Coarseness is a measure of the granularity of an image, or the average size of the regions
that have the same intensity.
• Contrast is a measure of the vividness of the texture pattern; the bigger the
blocks that make up the image, the higher the contrast. It is affected by the use of
varying black and white intensities.
• Directionality is a measure of the directions of the grey values within the image [12].

7.2.2.3. Wavelet Transform:-


Textures can be modeled as quasi-periodic patterns with a spatial/frequency representation.
The wavelet transform converts the image into a multi-scale representation with both spatial
and frequency characteristics, which allows effective multi-scale image analysis at a lower
computational cost. Under this transformation, a function, which can represent an image,
a curve, a signal, etc., can be described in terms of a coarse-level description together with
details that range from broad to narrow scales.

Unlike the Fourier transform, which uses sine functions to represent signals, the wavelet
transform uses functions known as wavelets. Wavelets are finite in time, yet the average value
of a wavelet is zero; in a sense, a wavelet is a waveform that is bounded in both frequency and
duration. The Fourier transform converts a signal into a continuous series of sine waves,
each of constant frequency and amplitude and of infinite duration, whereas most real-world
signals (such as music or images) have a finite duration and abrupt changes in frequency. This
accounts for the efficiency of wavelet transforms: they convert a signal into a series of wavelets,
which can be stored more efficiently than sine waves because wavelets are finite in time, and
which can be constructed with rough edges, thereby better approximating real-world signals.

Examples of wavelets are the Coiflet, Morlet, Mexican Hat, Haar and Daubechies wavelets. Of these,
the Haar wavelet is the simplest and most widely used, while the Daubechies wavelets have fractal
structures and are vital for current wavelet applications. These two are outlined below:

Haar Wavelet

The Haar wavelet family is generated from the Haar mother wavelet, which is defined as:

$\psi(t) = \begin{cases} 1, & 0 \le t < \tfrac{1}{2} \\ -1, & \tfrac{1}{2} \le t < 1 \\ 0, & \text{otherwise} \end{cases}$

Figure: Haar Wavelet Example

Daubechies Wavelet

The Daubechies wavelet family has no closed-form expression; each member is defined by a set
of filter coefficients. An example is shown below:

Figure: Daubechies Wavelet Example

7.2.2.4. Pyramid-Structured Wavelet Transform:-

We used a method called the pyramid-structured wavelet transform for texture
classification. Its name comes from the fact that it recursively decomposes the sub-signals in the low
frequency channels. It is most significant for textures with dominant frequency channels, and for
this reason it is most suitable for signals whose information is concentrated in the lower
frequency channels. Because of the innate property of images that most of their information
exists in the lower sub-bands, the pyramid-structured wavelet transform is highly suitable.

Using the pyramid-structured wavelet transform, the texture image is decomposed into
four sub-images, in the low-low, low-high, high-low and high-high sub-bands. At this point, the
energy level of each sub-band is calculated; this is the first-level decomposition. Using the low-low
sub-band for further decomposition, we reached a fifth-level decomposition in our project. The
reason for this is the basic assumption that the energy of an image is concentrated in the low-low
band. The wavelet function used is the Daubechies wavelet.

Figure: Pyramid-Structured Wavelet Transform.
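
One level of this decomposition can be sketched with the Wavelet Toolbox's dwt2 function, which is also what the decompose function in Chapter 9 uses ('db1' denotes the first Daubechies wavelet; the image file name is hypothetical):

I = im2double(rgb2gray(imread('texture.jpg')));   % hypothetical image file
% First-level decomposition into the four sub-bands:
[LL, LH, HL, HH] = dwt2(I, 'db1');   % low-low, low-high, high-low, high-high
% Recursing on the low-low sub-band gives the next pyramid level:
[LL2, LH2, HL2, HH2] = dwt2(LL, 'db1');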

7.2.2.4.1. Energy Level:-


Energy Level Algorithm:

1. Decompose the image into four sub-images

2. Calculate the energy of all decomposed images at the same scale, using [2]:

$E = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left|X_{i,j}\right|$

where M and N are the dimensions of the image, and X is the intensity of the pixel located at
row i and column j in the image map.

3. Increment the level index ind and repeat from step 1 for the low-low sub-band image,
until ind is equal to 5.

Using the above algorithm, the energy levels of the sub-bands were calculated and the
low-low sub-band image was further decomposed. This is repeated five times, to reach the fifth-level
decomposition. The resulting energy level values are stored, to be used in the Euclidean distance
algorithm.
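
A one-level sketch of this energy computation is shown below (the full implementation appears as the energyLevel and obtainEnergies functions in Chapter 9; I is assumed to be a grayscale image matrix, as in the previous sketch):

[LL, LH, HL, HH] = dwt2(I, 'db1');          % one decomposition step
E = @(X) sum(sum(abs(X))) / numel(X);       % energy of one sub-band
energies = [E(LL), E(LH), E(HL), E(HH)];    % first-level energy levels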

7.2.2.4.2. Euclidean Distance:-


Euclidean Distance Algorithm:

1. Decompose query image.


2. Get the energies of the first dominant k channels.
3. For image i in the database obtain the k energies.
4. Calculate the Euclidean distance between the two sets of energies, using [2]:

$D_i = \sqrt{\sum_{k=1}^{K}\left(x_k - y_{i,k}\right)^2}$

5. Increment i. Repeat from step 3.


Using the above algorithm, the query image is searched for in the image database: the
Euclidean distance is calculated between the query image and every image in the database, until
all the images have been compared with the query image. Upon completion of the Euclidean
distance algorithm, we have an array of Euclidean distances, which is then sorted. The five
topmost images are then displayed as the result of the texture search.

7.2.2.4.3. Flowchart for Texture:-

7.3. Shape Feature:-

7.3.1. Definition:-
Shape may be defined as the characteristic surface configuration of an object: an outline or
contour. It permits an object to be distinguished from its surroundings by its outline. Shape
representations can generally be divided into two categories:

• Boundary-based, and
• Region-based.

Figure: Boundary-based & Region-based…

Boundary-based shape representation uses only the outer boundary of the shape. This is
done by describing the considered region using its external characteristics, i.e. the pixels along
the object boundary. Region-based shape representation uses the entire shape region, describing
the considered region using its internal characteristics, i.e. the pixels contained in that
region.

7.3.2. Methods of Representation:-


For representing shape features mathematically, we have:

Boundary-based:

• Polygonal Models, boundary partitioning
• Fourier Descriptors
• Splines, higher order constructs
• Curvature Models

Region-based:

• Superquadrics
• Fourier Descriptors
• Implicit Polynomials
• Blum's skeletons
The most successful representations for shape categories are Fourier Descriptors and Moment
Invariants:

• The main idea of Fourier Descriptors is to use the Fourier-transformed boundary as the
shape feature.
• The main idea of Moment Invariants is to use region-based moments, which are invariant
to transformations, as the shape feature.

7.3.3. Hough Transform:-


7.3.3.1. Brief Description:-

The Hough transform is a technique which can be used to isolate features of a
particular shape within an image. Because it requires that the desired features be specified in
some parametric form, the classical Hough transform is most commonly used for the detection
of regular curves such as lines, circles and ellipses. A generalized Hough transform can be
employed in applications where a simple analytic description of a feature is not possible. Due
to the computational complexity of the generalized Hough algorithm, we restrict the main focus
of this discussion to the classical Hough transform. Despite its domain restrictions, the classical
Hough transform (hereafter referred to without the "classical" prefix) retains many applications, as
most manufactured parts (and many anatomical parts investigated in medical imagery) contain
feature boundaries which can be described by regular curves. The main advantage of the Hough
transform technique is that it is tolerant of gaps in feature boundary descriptions and is relatively
unaffected by image noise.

7.3.3.2. How It Works:-

The Hough technique is particularly useful for computing a global description of a
feature (where the number of solution classes need not be known a priori), given (possibly
noisy) local measurements. The motivating idea behind the Hough technique for line detection is
that each input measurement (e.g. a coordinate point) indicates its contribution to a globally
consistent solution (e.g. the physical line which gave rise to that image point).

As a simple example, consider the common problem of fitting a set of line segments to
a set of discrete image points (e.g. pixel locations output from an edge detector). Figure 1 shows
some possible solutions to this problem. Here, the lack of a priori knowledge about the number
of desired line segments (and the ambiguity about what constitutes a line segment) renders this
problem under-constrained.

Figure 1 a) Coordinate points. b) and c) Possible straight line fittings.

We can analytically describe a line segment in a number of forms. However, a
convenient equation for describing a set of lines uses parametric or normal notation:

$r = x\cos\theta + y\sin\theta$

where r is the length of a normal from the origin to this line and θ is the orientation of r with
respect to the X-axis (see Figure 2). For any point (x, y) on this line, r and θ are constant.

Figure 2 Parametric description of a straight line.

In an image analysis context, the coordinates of the points of edge segments (i.e.
(x_i, y_i)) in the image are known and therefore serve as constants in the parametric line equation,
while r and θ are the unknown variables we seek. If we plot the possible (r, θ) values defined by
each (x_i, y_i), points in Cartesian image space map to curves (i.e. sinusoids) in the polar Hough
parameter space. This point-to-curve transformation is the Hough transformation for straight
lines. When viewed in Hough parameter space, points which are collinear in the Cartesian image
space become readily apparent, as they yield curves which intersect at a common (r, θ) point.

The transform is implemented by quantizing the Hough parameter space into finite
intervals, or accumulator cells. As the algorithm runs, each (x_i, y_i) is transformed into a
discretized (r, θ) curve, and the accumulator cells which lie along this curve are incremented.
Peaks in the resulting accumulator array represent strong evidence that a corresponding straight
line exists in the image.
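
MATLAB implements exactly this accumulator scheme; the same hough function is used by the shape function in Chapter 9. A short sketch (the image file name is hypothetical):

BW = edge(rgb2gray(imread('shape.jpg')), 'canny');   % hypothetical image file
[H, theta, rho] = hough(BW);                 % accumulator array over (rho, theta)
peaks = houghpeaks(H, 5);                    % strongest accumulator cells
lines = houghlines(BW, theta, rho, peaks);   % corresponding line segments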

We can use this same procedure to detect other features with analytical descriptions.
For instance, in the case of circles, the parametric equation is

$(x - a)^2 + (y - b)^2 = r^2$

where a and b are the coordinates of the center of the circle and r is the radius. In this case, the
computational complexity of the algorithm begins to increase, as we now have three parameters
and hence a 3-D accumulator. (In general, the computation and the size of the
accumulator array increase polynomially with the number of parameters, so the basic Hough
technique described here is only practical for simple curves.)

7.3.4. Flowchart for Shape:-

CHAPTER 8

ARCHITECTURE OF PROJECT

Figure: Architecture of Project

CHAPTER 9

GRAPHICAL USER INTERFACE & SOURCE CODE

9.1. Graphical User Interface:-

9.2. Source Code:-

Main function:-

function varargout = main(varargin)


% MAIN M-file for main.fig
% MAIN, by itself, creates a new MAIN or raises the existing
% singleton*.

%
% H = MAIN returns the handle to a new MAIN or the handle to
% the existing singleton*.
%
% MAIN('CALLBACK',hObject,eventData,handles,...) calls the local
% function named CALLBACK in MAIN.M with the given input arguments.
%
% MAIN('Property','Value',...) creates a new MAIN or raises the
% existing singleton*. Starting from the left, property value pairs are
% applied to the GUI before main_OpeningFcn gets called. An
% unrecognized property name or invalid value makes property application
% stop. All inputs are passed to main_OpeningFcn via varargin.
%
% *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
% instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help main

% Last Modified by GUIDE v2.5 08-Feb-2011 20:35:49

% Begin initialization code - DO NOT EDIT


gui_Singleton = 1;
gui_State = struct('gui_Name', mfilename, ...
'gui_Singleton', gui_Singleton, ...
'gui_OpeningFcn', @main_OpeningFcn, ...
'gui_OutputFcn', @main_OutputFcn, ...
'gui_LayoutFcn', [] , ...
'gui_Callback', []);
if nargin && ischar(varargin{1})

gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before main is made visible.


function main_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% varargin command line arguments to main (see VARARGIN)

% Choose default command line output for main


handles.output = hObject;

% Update handles structure


guidata(hObject, handles);

% UIWAIT makes main wait for user response (see UIRESUME)


% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function varargout = main_OutputFcn(hObject, eventdata, handles)

% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure


varargout{1} = handles.output;

function edit2_Callback(hObject, eventdata, handles)


% hObject handle to edit2 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit2 as text


% str2double(get(hObject,'String')) returns contents of edit2 as a double

% --- Executes during object creation, after setting all properties.


function edit2_CreateFcn(hObject, eventdata, handles)
% hObject handle to edit2 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.


% See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
set(hObject,'BackgroundColor','white');
end

% --- Executes on button press in loadImage.
function loadImage_Callback(hObject, eventdata, handles)
% hObject handle to loadImage (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
%for i=1:60
%cd('C:\Users\compaq\desktop\pics');
[filename, pathname]= uigetfile({'*.bmp';'*.jpg';'*.gif';'*.*';}, 'pick an image file');
S=imread([pathname,filename]);
%cd('C:\Users\compaq\desktop\cbir1');

axes(handles.axes1);
imshow(S);
handles.S=S;
handles.a=[pathname,filename];
guidata(hObject, handles);
%fid=fopen('colorbase1.txt','a+');
% fprintf(fid,'%s\r',filename);
%end
%cd('C:\Users\compaq\desktop\cbir1');

% --- Executes on button press in colorSearch.


function colorSearch_Callback(hObject, eventdata, handles)
% hObject handle to colorSearch (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
I=handles.a;
A=imread(I);

[X,map] = rgb2ind(A,256);
%querymap=rgb2hsv(A);
%figure
%imshow(A);
[handles.queryx, handles.querymap]=imread(I);
cd('C:\Users\compaq\Desktop\pics');
fid=fopen('colorbase1.txt');
resultValues=[];
resultNames={};
i=1;
j=1;
while 1
imagename=fgetl(fid);
if ~ischar(imagename),break,end
disp(imagename);
%figure
%imshow(imagename);
Z=imread(imagename);
[Y,map1] = rgb2ind(Z,256);
%HSVmap=rgb2hsv(RGBmap);
D=quadratic(X,map,Y,map1);
resultValues(i)=D;
resultNames(j)={imagename};
i=i+1;
j=j+1;
end
fclose(fid);

%sorting color results...


[sortedvalues,index]=sort(resultValues);
cd('C:\Users\compaq\Desktop\cbir1');

fid=fopen('colorResults.txt','w+');
for i=1:10
tempstr=char(resultNames(index(i)));
fprintf(fid,'%s\r',tempstr);
disp(resultNames(index(i)));
disp(sortedvalues(i));
disp(' ')
end
fclose(fid);
disp('color part done')
%cd('C:\Users\compaq\Desktop\cbir1');
displayResults('colorResults.txt','Color Results');
cd('C:\Users\compaq\Desktop\cbir1');

% --- Executes on button press in textureSearch.


function textureSearch_Callback(hObject, eventdata, handles)
% hObject handle to textureSearch (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% Texture search...
I=handles.a;
A=imread(I);
[handles.queryx, handles.querymap]=imread(I);
querymap=rgb2hsv(A);
cd('C:\Users\compaq\Desktop\cbir1');
fid=fopen('colorbase.txt');
queryEnergies = obtainEnergies(handles.queryx, 6); % Obtain top 6 energies of the image.

% Open colourResults txt file... for reading...

fresultValues = []; % Results matrix...
fresultNames = {};
i = 1; % Indices...
j = 1;

while 1
imagename = fgetl(fid);
if ~ischar(imagename), break, end % Meaning: End of File...

X = imread(imagename);

imageEnergies = obtainEnergies(X, 6);

E = euclideanDistance(queryEnergies, imageEnergies);

fresultValues(i) = E;
fresultNames(j) = {imagename};
i = i + 1;
j = j + 1;
end

fclose(fid);
disp('Texture results obtained...');
% Sorting final results...

[sortedValues, index] = sort(fresultValues); % Sorted results... the vector index


% is used to find the resulting files.

fid = fopen('textureResults.txt', 'w+'); % Create a file, over-write old ones.

for i = 1:10 % Store top 10 matches...

imagename = char(fresultNames(index(i)));
fprintf(fid, '%s\r', imagename);

disp(imagename);
disp(sortedValues(i));
disp(' ');

end

fclose(fid);
disp('Texture parts done...');
disp('Texture results saved...');
disp(' ');
displayResults('textureResults.txt', 'Texture Results...');
cd('C:\Users\compaq\Desktop\cbir1');

% --- Executes on button press in shapeSearch.


function shapeSearch_Callback(hObject, eventdata, handles)
% hObject handle to shapeSearch (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
I=handles.a;
A=imread(I);
B=shape(A);

cd('C:\Users\compaq\Desktop\cbir1');
fid=fopen('colorbase.txt');
sresultValues = []; % Results matrix...
sresultNames = {};
i = 1; % Indices...
j = 1;

%fid = fopen('shapeResults.txt', 'w+');
while 1
imagename=fgetl(fid);
if ~ischar(imagename), break, end
K=imread(imagename);
result=shape(K);
%B=similarityMatrix(shape,result);
%d = s.'*A*s;
%d = d^1/2;
%d = d / 1e8;
if strcmp(result,'Square')
result='Square';
end
if strcmp(result,'Round')
result='Round';
end
if strcmp(result,'Triangle')
result='Triangle';
end
C=strcmp(B,result);
if C>0
sresultValues(i)=C;
sresultNames(j)={imagename};
%fprintf(fid,'%s\r',imagename);
%disp(imagename);
i=i+1;
j=j+1;
end
end
disp('Shape results obtained');
fclose(fid);

% Sorting final results...
fid = fopen('shapeResults.txt', 'w+');

[sortedValues, index] = sort(sresultValues); % Sorted results... the vector index


% is used to find the resulting files.

% Create a file, over-write old ones.


for i = 1:10 % Store top 10 matches...
imagename =char(sresultNames(index(i)));
fprintf(fid, '%s\r', imagename);
end
disp(imagename);
disp(sortedValues(i));
disp(' ');
fclose(fid);
disp('Shape parts done...');
disp('Shape results saved...');
disp(' ');
displayResults('shapeResults.txt', 'Shape Results...');
cd('C:\Users\compaq\Desktop\cbir1');

% --- Executes on button press in colorTexture.


function colorTexture_Callback(hObject, eventdata, handles)
% hObject handle to colorTexture (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
I=handles.a;
A=imread(I);
querymap=rgb2hsv(A);
[handles.queryx, handles.querymap]=imread(I);
cd('C:\Users\compaq\Desktop\CBIR\imagedatabase');

fid=fopen('database.txt');
resultValues=[];
resultNames={};
i=1;
j=1;
while 1
imagename=fgetl(fid);
if ~ischar(imagename),break,end
disp(imagename);
[X,RGBmap]=imread(imagename);
HSVmap=rgb2hsv(RGBmap);

D=quadratic(handles.queryx, handles.querymap,X,HSVmap);
resultValues(i)=D;
resultNames(j)={imagename};
i=i+1;
j=j+1;
end
fclose(fid);

%sorting color results...


[sortedvalues,index]=sort(resultValues);

fid=fopen('colorResults.txt','w+');
for i=1:20
tempstr=char(resultNames(index(i)));
fprintf(fid,'%s\r',tempstr);
disp(resultNames(index(i)));
disp(sortedvalues(i));
disp(' ')
end

fclose(fid);
fid=fopen('colorResults.txt');
queryEnergies = obtainEnergies(handles.queryx, 6);

% Open colourResults txt file... for reading...

fresultValues = []; % Results matrix...


fresultNames = {};
i = 1; % Indices...
j = 1;

while 1
imagename = fgetl(fid);
if ~ischar(imagename), break, end % Meaning: End of File...

X = imread(imagename);

imageEnergies = obtainEnergies(X, 6);

E = euclideanDistance(queryEnergies, imageEnergies);

fresultValues(i) = E;
fresultNames(j) = {imagename};
i = i + 1;
j = j + 1;
end

fclose(fid);
disp('Color texture results obtained...');
% Sorting final results...

[sortedValues, index] = sort(fresultValues); % Sorted results... the vector index
% is used to find the resulting files.

fid = fopen('colorTexture.txt', 'w+'); % Create a file, over-write old ones.

for i = 1:10 % Store top 10 matches...


imagename = char(fresultNames(index(i)));
fprintf(fid, '%s\r', imagename);

disp(imagename);
disp(sortedValues(i));
disp(' ');

end

fclose(fid);
disp('Texture parts done...');
disp('Texture results saved...');
disp(' ');
displayResults('colorTexture.txt', 'Color Texture Results...');
cd('C:\Users\compaq\Desktop\cbir1');

% --- Executes on button press in colorShape.


function colorShape_Callback(hObject, eventdata, handles)
% hObject handle to colorShape (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
I=handles.a;
A=imread(I);
querymap=rgb2hsv(A);
[handles.queryx, handles.querymap]=imread(I);

cd('C:\Users\compaq\Desktop\CBIR\imagedatabase');
fid=fopen('database.txt');
resultValues=[];
resultNames={};
i=1;
j=1;
while 1
imagename=fgetl(fid);
if ~ischar(imagename),break,end
disp(imagename);
[X,RGBmap]=imread(imagename);
HSVmap=rgb2hsv(RGBmap);

D=quadratic(handles.queryx, handles.querymap,X,HSVmap);
resultValues(i)=D;
resultNames(j)={imagename};
i=i+1;
j=j+1;
end
fclose(fid);

%sorting color results...


[sortedvalues,index]=sort(resultValues);

fid=fopen('colorResults.txt','w+');
for i=1:20
tempstr=char(resultNames(index(i)));
fprintf(fid,'%s\r',tempstr);
disp(resultNames(index(i)));
disp(sortedvalues(i));
disp(' ')

end
fclose(fid);
% Shape part starting......

I=handles.a;
A=imread(I);
B=shape(A);
cd('C:\Users\compaq\Desktop\CBIR\imagedatabase');
fid=fopen('colorResults.txt');
sresultValues = []; % Results matrix...
sresultNames = {};
i = 1; % Indices...
j = 1;
%fid = fopen('shapeResults.txt', 'w+');
while 1
imagename=fgetl(fid);
if ~ischar(imagename), break, end
K=imread(imagename);
result=shape(K)
%B=similarityMatrix(shape,result);
%d = s.'*A*s;
%d = d^1/2;
%d = d / 1e8;
if strcmp(result,'Square')
result='Square';
end
if strcmp(result,'Round')
result='Round';
end
if strcmp(result,'Triangle')
result='Triangle';

end
C=strcmp(B,result);
if C>0
sresultValues(i)=C;
sresultNames(j)={imagename};
%fprintf(fid,'%s\r',imagename);
%disp(imagename);
i=i+1;
j=j+1;
end
end
disp('Shape results obtained');
fclose(fid);
% Sorting final results...
fid = fopen('colorShape.txt', 'w+');

[sortedValues, index] = sort(sresultValues); % Sorted results... the vector index


% is used to find the resulting files.

% Create a file, over-write old ones.


for i = 1:10 % Store top 10 matches...
imagename =char(sresultNames(index(i)));
fprintf(fid, '%s\r', imagename);
end
disp(imagename);
disp(sortedValues(i));
disp(' ');
fclose(fid);
disp('Color-Shape parts done...');
disp('Color-Shape results saved...');
disp(' ');

displayResults('colorShape.txt', 'Color Shape Results...');
guidata(hObject,handles);

Quadratic function:-

% Works to obtain the Quadratic Distance between two colour images...


% ------------------------------------------------------------
% Executes on being called, with inputs:
% X1 - indexed pixel data of 1st image
% X2 - indexed pixel data of 2nd image
% map1 - colour map of 1st image
% map2 - colour map of 2nd image
% ------------------------------------------------------------
function value = quadratic(X1, map1, X2, map2)

% Obtain the histograms of the two images...


[count1, y1] = imhist(X1, map1);
[count2, y2] = imhist(X2, map2);

% Obtain the difference between the pixel counts...


q = count1 - count2;
s = abs(q);

% Obtain the similarity matrix...


A = similarityMatrix(map1, map2);

% Obtain the quadratic distance...


d = s.'*A*s;
d = sqrt(d); % square root of the quadratic form (note: d^1/2 would compute (d^1)/2)
d = d / 1e8;

% Return the distance metric.
value = d;

% ------------------------------------------------------------

Euclidean Distance function:-

% Works to obtain the Euclidean Distance of the passed vector...

% ------------------------------------------------------------
% Executes on being called, with input vectors X and Y.
% ------------------------------------------------------------
function value = euclideanDistance(X, Y)

[r, c] = size(X); % The length of the vector...

e = [];

% Euclidean Distance = sqrt [ (x1-y1)^2 + (x2-y2)^2 + (x3-y3)^2 ...]

for i = 1:c
e(i) = (X(i)-Y(i))^2;
end

Euclid = sqrt(sum(e));

%Obtain energyLevel...

value = Euclid;

Energy Level function:-

% Works to obtain the energy level of the passed matrix...

% ------------------------------------------------------------
% Executes on being called, with input matrix.
% ------------------------------------------------------------
function value = energyLevel(aMatrix)

% Obtain the Matrix elements... r - rows, c - columns.


[r, c] = size(aMatrix);

%Obtain energyLevel...
value = sum(sum(abs(aMatrix)))/(r*c);

Obtain Energies function:-

% Works to obtain the first 'n' energies of the passed grayscale image...

% ------------------------------------------------------------
% Executes on being called, with input matrix & constant 'n'.
% ------------------------------------------------------------
function value = obtainEnergies(iMatrix, n)

dm = iMatrix; % The matrix to be decomposed...

energies = [];

i = 1;

for j = 1:5
[tl, tr, bl, br] = decompose(dm);

energies(i) = energyLevel(tl);
energies(i+1) = energyLevel(tr);
energies(i+2) = energyLevel(bl);
energies(i+3) = energyLevel(br);

i = i + 4;
dm = tl;
end

%Obtain array of energies...


sorted = -sort(-energies); % Sorted in descending order...
value = sorted(1:n);

Decompose function:-

% Works to decompose the passed image matrix.....


%--------------------------------------------------------------------------
% Executes on being called, with input image matrix.
%--------------------------------------------------------------------------

function [Tl, Tr, Bl, Br]=decompose(imMatrix)

[A,B,C,D]=dwt2(imMatrix,'db1');

Tl=wcodemat(A); % Top left...

Tr=wcodemat(B); % Top right...
Bl=wcodemat(C); % Bottom left...
Br=wcodemat(D); % Bottom right...

Similarity Matrix function:-

% Works to obtain the Similarity Matrix between two HSV color


% histograms. This is to be used in the Histogram Quadratic
% Distance equation.

% ------------------------------------------------------------
% Executes on being called, with input matrices I and J.
% ------------------------------------------------------------
function value = similarityMatrix(I, J)

% Obtain the Matrix elements... r - rows, c - columns. The


% general assumption is that these dimensions are the same
% for both matrices.
[r, c] = size(I);

A = [];

for i = 1:r
for j = 1:r
% (sj * sin hj - si * sin hi)^2
M1 = (I(i, 2) * sin(I(i, 1)) - J(j, 2) * sin(J(j, 1)))^2;
% (sj * cos hj - si * cos hi)^2
M2 = (I(i, 2) * cos(I(i, 1)) - J(j, 2) * cos(J(j, 1)))^2;

% (vj - vi)^2
M3 = (I(i, 3) - J(j, 3))^2;

M0 = sqrt(M1 + M2 + M3);

%A(i, j) = 1 - 1/sqrt(5) * M0;


A(i, j) = 1 - (M0/sqrt(5));
end
end

%Obtain Similarity Matrix...


value = A;

Shape function:-

function result=shape(S)
% Classifies the dominant shape in image S as 'Square', 'Round' or
% 'Triangle', using the distribution of values in its Hough accumulator.
S=im2bw(S); % binarize the image
[H, theta,rho]=hough(S); % Hough accumulator of the binary image
%axes(handles.axes2);
%figure
%imshow(H,[],'xdata',theta,'ydata',rho);
%xlabel('\theta'),ylabel('\rho')
%axis on, axis normal;
%title('Hough Matrix');
datas=[];
%clear datas;
% Histogram of accumulator intensities: datas(cnt) counts how many
% accumulator cells hold the value cnt.
for cnt = 1:max(max(H))
datas(cnt) = sum(sum(H == cnt));
end
%axes(handles.axes3);
datas(datas==0)=NaN; % ignore empty intensity levels
%figure
%plot(datas,'--x');
%xlabel('Hough Matrix Intensity'),ylabel('Counts')
%handles.data = data;
%data = handles.data;
[maxval,maxind] = max(datas); % peak of the intensity histogram
medval = median(datas);

% Quadratic fit to the histogram below the peak; its constant term
% helps separate squares from round shapes.
[p]=polyfit(1:maxind-5,datas(1:maxind-5),2);

if maxval<3*medval % a weak peak suggests a triangle
result='Triangle';
elseif p(3)>100
result='Square';
else
result='Round';
end

Display Results function:-

% Works to display the images named in a text file passed to it...

% ------------------------------------------------------------
% Executes on being called, with inputs:
% filename - the name of the text file that has the
% list of images
% header - the figure header name
% ------------------------------------------------------------
function displayResults(filename, header)

figure('Position',[200 100 700 400], 'MenuBar', 'none', 'Name', header, ...
'Resize', 'off', 'NumberTitle', 'off');

% Open 'filename' file... for reading...


fid = fopen(filename);

i = 1; % Subplot index on the figure...

while 1
imagename = fgetl(fid);
if ~ischar(imagename), break, end % Meaning: End of File...

x = imread(imagename);

subplot(2,5,i);
subimage(x);
xlabel(imagename);

i = i + 1;

end

fclose(fid);

CHAPTER 10

OUTPUT

10.1. For Color:-

Loading an Image:-

Output:-

10.2. For Texture:-

Loading an Image:-

Output:-

10.3 For Shape:-

Loading an Image:-

Output:-

CHAPTER 11

ADVANTAGES & DISADVANTAGES

11.1. ADVANTAGES:-

• An image retrieval system is a computer system for browsing, searching and retrieving
images in an image database. In text-based retrieval, images are indexed using keywords,
subject headings or classification codes, which in turn are used as retrieval keys during
search and retrieval.
• Text-based retrieval is non-standardized, because different users use different keywords
for annotation. Text descriptions are sometimes subjective and incomplete, because they
cannot depict complicated image features very well; texture images, for example, cannot
be described by text.
• In text-based retrieval, humans are required to personally describe every image in the
database, so for a large image database the technique is cumbersome, expensive and
labour-intensive.
• Content-based image retrieval (CBIR) techniques use image content to search and retrieve
digital images. Content-based image retrieval systems were introduced to address the
problems associated with text-based image retrieval.
• Using the application on an intranet provides the advantage that the application can be
accessed by multiple users.

CHAPTER 12

CONCLUSION

The dramatic rise in the sizes of image databases has spurred the development of
effective and efficient retrieval systems. The development of these systems started with
retrieving images using textual connotations, but later introduced image retrieval based on
content. This came to be known as CBIR, or Content Based Image Retrieval. Systems using
CBIR retrieve images based on visual features such as colour, texture and shape, as opposed to
depending on image descriptions or textual indexing. In this project, we have researched various
modes of representing and retrieving the image properties of colour, texture and shape. Due to
lack of time, we were only able to fully construct an application that retrieved image matches
based on colour and texture.

The application performs a simple colour-based search in an image database for an input
query image, using colour histograms. It then compares the colour histograms of different
images using the Quadratic Distance Equation. Further enhancing the search, the application
performs a texture-based search within the colour results, using wavelet decomposition and
energy level calculation, and compares the texture features obtained using the Euclidean
Distance Equation. A further step would enhance these texture results using a shape-based
search.

CBIR is still a developing science. As image compression, digital image processing, and
image feature extraction techniques become more developed, CBIR maintains a steady pace of
development in the research field. Furthermore, the development of more powerful processors
and of faster and cheaper memory contributes heavily to CBIR development, which promises an
immense range of future applications.

CHAPTER 13

Future Expansion:-

• Providing methods for Web searching, allowing users to identify images of interest in remote
sites by a variety of image and textual cues.
• Providing video retrieval techniques, including automatic segmentation, query-by-motion
facilities, and integration of sound and video searching.
• Better user interaction, including improved techniques for image browsing and exploiting
user feedback.
• Automatic or semi-automatic methods of capturing image semantics for retrieval.

CHAPTER 14

REFERENCES:-
[1] P. S. Hiremath and Jagadeesh Pujari, "Content Based Image Retrieval based on Color,
Texture and Shape features using Image and its complement", Dept. of P.G. Studies and
Research in Computer Science, Gulbarga University, Gulbarga, Karnataka, India.
[2] R. Fisher, S. Perkins, A. Walker, and E. Wolfart, "Hough Transform".
[3] Dr. H. B. Kekre, Priyadarshini Mukherjee, and Shobhit Wadhwa, "Image Retrieval with
Shape Features Extracted using Gradient Operators and Slope Magnitude Technique with
BTC", SVKM's NMIMS University, MPSTME Mumbai.
[4] "Content Based Image Retrieval using Low-Dimensional Shape Index".

CHAPTER 15

BIBLIOGRAPHY:-

• P. S. Suhasini, Dr. K. Shri Rama Krishna, and Dr. I. V. Murali Krishna, "CBIR Using Color
Histogram Processing".
• Michele Saad, "Content Based Image Retrieval Literature Survey".
• Website: http://mathworks.com.
• Dengsheng Zhang, Aylwin Wong, Maria Indrawan, and Guojun Lu, "Content-based Image
Retrieval Using Gabor Texture Features".
• D. H. Ballard (1987), "Generalizing the Hough transform to detect arbitrary shapes".
• Subhransu Maji and Jitendra Malik, "Object Detection using a Max-Margin Hough Transform".
• Michael Eziashi Osadebey, "Integrated Content-Based Image Retrieval using Texture, Shape
and Spatial Information".
• Mariana Stoeva, "Content Based Image Retrieval System Modeling".

