
nuMAP

A Content Based Image Retrieval Project


BSCS Final Year Evening
Group Members
• Mohammad Umer Sheikh EP046125
• Syed Arbab Ahmed EP046142
• Pervaiz Ahmed EP04A6136
• Noman Iqbal EP046133
• Mustafa Turab Ali EP04A6132

Project Supervisors
• Dr. Aqil Burny
• Badar Sami
• Syed Arbab Ahmed
• EP046142
Definition

• Content-based image retrieval (CBIR), also known as query by image content (QBIC) and content-based visual information retrieval (CBVIR), is the application of computer vision to the image retrieval problem, that is, the problem of searching for digital images in large databases.
Scope of the project
• Content-based image retrieval potentially provides new opportunities to overcome the constraints and limitations imposed by the traditional information retrieval paradigm on image collections.

• The growing number of CBIR systems is extremely encouraging.
CBIR Systems
Potential uses for CBIR include
• Photograph archives
• Retail catalogs
• Medical diagnosis
• Crime prevention
• The military
• Art collections
• Intellectual property
• Architectural and engineering design
• Geographical information and remote sensing
systems
Difference between humans and computers
• The basic reason why image retrieval is more difficult than text retrieval is that the digital representation of most images is a collection of pixels.

• The only information that is explicit in such a representation is the color value at each pixel.
CBIR software systems and techniques

• Query by example

• Semantic retrieval

• Other query methods


• Pervaiz Ahmed
• EP04A6136
Our CBIR System Design
Problem Statement

The problem involves entering an image as a query into a software application that is designed to employ CBIR techniques to extract visual properties and match them. This is done to retrieve images in the database that are visually similar to the query image.
Requirement Analysis

At the very first step we require an algorithm which extracts features from images.

SIFT algorithm for feature extraction

NNS for matching


SIFT Algorithm
(Scale-Invariant Feature Transform)

• SIFT is an image processing algorithm which can be used to detect distinct features in an image.
• Once features have been detected for two different images, one can use these features to answer questions like “are the two images taken of the same object?”
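As a rough illustration (not part of the original project code), the sketch below uses OpenCV's SIFT implementation to detect key points and compute descriptors for one image; the file name query.jpg is a placeholder.

import cv2

# Load the query image in grayscale (file name is a placeholder).
image = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)

# Create a SIFT detector and extract key points plus 128-dimensional descriptors.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(image, None)

print(f"Detected {len(keypoints)} key points")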
Output of SIFT
• Noman Iqbal
• EP046133
Algorithm working phases

Four phases of SIFT

1. Scale-space extrema detection
2. Key point localization
3. Orientation assignment
4. Key point descriptor
Phase 1: Scale-space Extrema Detection

The first phase of the computation seeks to identify potential interest points. It searches over all scales and image locations. The computation is accomplished by using a difference-of-Gaussian (DoG) function. The resulting interest points are invariant to scale and rotation, meaning that they persist across image scales and rotations.
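The phase above can be illustrated with a minimal difference-of-Gaussian sketch; it is only an approximation of SIFT's full scale-space construction, and the sigma values are illustrative assumptions.

import cv2

image = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE).astype("float32")

# Blur the image at two nearby scales and subtract them.
blur_fine = cv2.GaussianBlur(image, (0, 0), sigmaX=1.6)
blur_coarse = cv2.GaussianBlur(image, (0, 0), sigmaX=1.6 * 1.414)
dog = blur_coarse - blur_fine

# Candidate interest points are local extrema of the DoG response
# compared against their neighbors in space and in adjacent scales.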
Phase 2: Key point localization
For all interest points found in phase 1, a detailed model is created to determine location and scale.

Key points are selected based on their stability. A stable key point is thus a key point resistant to image distortion.
Phase 3: Orientation Assignment

For each of the key points identified in phase 2, SIFT computes the direction of the gradients around it.

One or more orientations are assigned to each key point based on local image gradient directions.
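A rough sketch of orientation assignment for a single neighborhood is shown below; it omits the Gaussian weighting and peak interpolation used in the full SIFT algorithm, and the 16x16 patch location is an arbitrary assumption.

import cv2
import numpy as np

image = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE).astype("float32")
patch = image[100:116, 100:116]  # assumes the image is at least 116x116 pixels

# Gradient components, magnitudes, and orientations inside the patch.
gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
magnitude = np.sqrt(gx ** 2 + gy ** 2)
orientation = np.degrees(np.arctan2(gy, gx)) % 360

# 36-bin histogram of orientations weighted by gradient magnitude;
# the dominant bin gives the key point's assigned orientation.
hist, _ = np.histogram(orientation, bins=36, range=(0, 360), weights=magnitude)
dominant_orientation = np.argmax(hist) * 10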
Phase 4: Key point descriptor
The local image gradients are measured in the region around each key point.

These are transformed into a representation that allows for significant levels of local shape distortion and change in illumination.
• Mustafa Turab Ali
• EP04A6132
NNS Algorithm
(Nearest Neighbor Search)

For matching we use NNS, an algorithm that is able to detect similarities between key points.
Output of NNS
KD-tree
The KD-tree is the most important multidimensional structure: it decomposes a multidimensional space into hyper-rectangles.

• A binary tree with a dimension number and a splitting value at each node
• Each node corresponds to a hyper-rectangle

Fields of a KD-tree node
KD-Tree
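One way to realize the KD-tree described above is SciPy's cKDTree; this is only an assumed implementation choice, and the random arrays below stand in for real 128-dimensional SIFT descriptors.

import numpy as np
from scipy.spatial import cKDTree

# Random stand-ins for the descriptors of the database and query images.
database_descriptors = np.random.rand(1000, 128).astype("float32")
query_descriptors = np.random.rand(50, 128).astype("float32")

# Build the KD-tree once, then look up the two nearest database
# descriptors for every query descriptor.
tree = cKDTree(database_descriptors)
distances, indices = tree.query(query_descriptors, k=2)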
Image matching
• A match where the whole of one image matches the whole of another image.

• Part of one image matching the whole of another image.

• Part of one image matching part of another image.
Image Test 1
Image Test 2
Image Test 3
• Muhammad Umer Sheikh
• EP046125
Key point generation
Key point matching

1. Select a node from the set of all nodes not yet selected.
2. Mark the node as selected.
3. Locate the two nearest neighbors of the selected node.
4. If the distance between the two neighbors is less than or equal to a given distance, we have a match.
5. Mark the key points as matched.
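A sketch of these matching steps is given below. It reads "the two nearest neighbors" as the two closest descriptors in the compare image and "a given distance" as an assumed threshold whose value depends on how the descriptors are scaled; the function name and threshold are illustrative, not the project's actual code.

import numpy as np
from scipy.spatial import cKDTree

def match_keypoints(source_desc, compare_desc, max_distance):
    # Index the compare image's descriptors, then visit every source
    # descriptor and locate its two nearest neighbors.
    tree = cKDTree(compare_desc)
    distances, indices = tree.query(source_desc, k=2)

    matches = []
    for i in range(len(source_desc)):
        # Accept the closest neighbor as a match when it lies within
        # the given distance (threshold value is data dependent).
        if distances[i][0] <= max_distance:
            matches.append((i, indices[i][0]))
    return matches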
Key points matching
Quality of Match

• KS: the number of key points in the source image
• KC: the number of key points in the compare image
• KM: the number of matched key points
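The exact quality formula is not spelled out on this slide; purely as an assumption, the sketch below treats quality as the fraction of key points in the smaller image that were matched.

def match_quality(ks, kc, km):
    # KS, KC, KM as defined above; the ratio below is an illustrative
    # choice, not necessarily the project's actual metric.
    return km / min(ks, kc)

# Example: 400 source key points, 350 compare key points, 120 matches.
print(match_quality(400, 350, 120))  # about 0.34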
Summary and Conclusion
SIFT does what it is designed to do, and it does it well. The most obvious drawback with SIFT is the time it takes to compare two images. The running time of an NNS query is so large that it effectively renders SIFT useless for a system like M2S. However, with modifications like the quality-of-match measure and the utilization of other metadata, SIFT could be an extremely robust resource for object detection and image matching.
Thank you

Questions ?
