
Dissertation on

REGISTRATION OF MULTISENSOR REMOTELY SENSED

IMAGES

For the Degree of

Master of Engineering in
Computer Science and Engineering
By

Student Name
Under the Guidance of

Guide Name

Department of Computer Science and Engineering


Marathwada Institute of Technology, Aurangabad
Maharashtra State, India

2013-2014

CERTIFICATE
This is to certify that the dissertation entitled REGISTRATION OF
MULTISENSOR REMOTELY SENSED IMAGES, which has been submitted
herewith in partial fulfillment of the requirements for the award of the Master of
Engineering in Computer Science and Engineering/Software Engineering of Dr. Babasaheb
Ambedkar Marathwada University, Aurangabad (M.S.), is the result of original
work and contribution by Student Name under my supervision and guidance.

Place: Aurangabad
Date:

Guide Name
Guide

Dr. Radhakrishna Naik


Head

Department of Computer Science and Engineering

Dr. C. L. Gogte
Principal
Marathwada Institute of Technology
Aurangabad (M.S.) - 431 005

Contents

List of Abbreviations
List of Figures
List of Tables
Abstract

1. INTRODUCTION
   1.1 Introduction
   1.2 Image Registration
   1.3 Various Methods
   1.4 Need of the Work

2. LITERATURE SURVEY
   2.1 History of Image Registration
   2.2 Survey of Image Registration Techniques

3. SYSTEM DEVELOPMENT
   3.1 Disadvantages of the Previous System
   3.2 Methodology

Conclusion
References

List of Figures

1.1 Flow Diagram of the Process of Image Registration
1.2 System Architecture
2.1 Registration Flow Diagram

Introduction

Image registration is the process of aligning two or more images of the same scene. This process
involves designating one image as the reference (also called the fixed image), and applying
geometric transformations to the other images so that they align with the reference.
Images can be misaligned for a variety of reasons. Commonly, the images are captured under
variable conditions that can change camera perspective. Misalignment can also be the result of lens
and sensor distortions or differences between capture devices.

A geometric transformation maps locations in one image to new locations in another image.
Determining the correct geometric transformation parameters is the key step in the image
registration process. Image registration is often used as a preliminary step in other image
processing applications. For example, you can use image registration to align satellite images or to
align medical images captured with different diagnostic modalities (MRI and SPECT). Image
registration allows you to compare common features in different images. For example, you might
discover how a river has migrated over time.

Image registration is the process of transforming different sets of data into one coordinate system.
The data may be multiple photographs, or data from different sensors, different times, or different
viewpoints. It is used in computer vision, medical imaging, military automatic target recognition,
and the compilation and analysis of images and data from satellites. Registration is necessary in
order to be able to compare or integrate the data obtained from these different measurements.

Image registration is the process of overlaying two or more images of the same scene taken at
different times, from different viewpoints, and/or by different sensors. It is also a classical problem
in image processing applications in which the final information is gained by combining various
data sources, as in image fusion, change detection, and multichannel image restoration, and it finds
application in change detection, cartography, medical imaging, and photogrammetry.

Image registration is the process by which we determine a transformation that provides the most
accurate match between two images. The search for the matching transformation can be automated
with the use of a suitable metric, but it can be very time-consuming and tedious. Computational
time becomes even more critical with the current increase in data. As a result, high performance
image registration is needed.

In remote sensing applications, registration is generally performed manually, which is not feasible
when a large number of images must be registered, because the control points have to be selected by
hand. This leads to the need for automatic image registration, which performs the registration task
without the guidance or intervention of users. The tremendous amount of incoming satellite images
from the Earth Observing System (EOS) program and from new missions with hyperspectral
instruments mandates the need for automatic image registration.

The flow diagram for the process of image registration is shown in Figure 1.

[Figure 1: Flow Diagram of the Process of Image Registration: feature detection -> feature matching -> transformation parameter estimation -> transformation and resampling]

In general, the process of image registration can be described in four steps as follows.

Feature detection: Salient and distinctive objects such as closed-boundary regions, edges, contours,
line intersections, corners, etc. are detected in both the reference and sensed images.

Feature matching: The correspondence between the features in the reference and sensed images is
established.

Transform model estimation: The type and parameters of the so-called mapping functions, which
align the sensed image with the reference image, are estimated.

Image resampling and transformation: The sensed image is transformed by means of the mapping
functions.

Image registration is widely used in remote sensing, medical imaging, computer vision, etc. In
general, its applications can be divided into four main groups according to the manner of image
acquisition [08]:

Different viewpoints (multiview analysis): Images of the same scene are acquired from different
viewpoints. The aim is to gain a larger 2D view or a 3D representation of the scanned scene. For
example, in remote sensing: mosaicing of images of the surveyed area.

Different times (multitemporal analysis): Images of the same scene are acquired at different times,
often on a regular basis, and possibly under different conditions. The aim is to find and evaluate
changes in the scene that appeared between consecutive image acquisitions. For example, in
computer vision: automatic change detection for security monitoring and motion tracking; in
medical imaging: monitoring of healing therapy and of tumor evolution.

Different sensors (multimodal analysis): Images of the same scene are acquired by different
sensors. The aim is to integrate the information obtained from different source streams to gain a
more complex and detailed scene representation. For example, in remote sensing: fusion of
information from sensors with different characteristics, such as panchromatic images offering better
spatial resolution, color/multispectral images with better spectral resolution, or radar images
independent of cloud cover and solar illumination.

Scene-to-model registration: Images of a scene and a model of the scene are registered. The model
can be a computer representation of the scene, for instance maps or digital elevation models (DEM)
in GIS, or another scene with similar content (another patient), etc. The aim is to localize the
acquired image in the scene/model and to compare them. For example, in medical imaging:
comparison of the patient's image with digital anatomical atlases, specimen classification.

II. IMAGE REGISTRATION METHODS

Various image registration methods are as follows:

Intensity-based and feature-based methods: Intensity-based methods compare intensity patterns in
images via correlation metrics, while feature-based methods find correspondence between image
features such as points, lines, and contours. Intensity-based methods register entire images or
subimages.

Spatial and frequency domain methods: Spatial methods operate in the image domain, matching
intensity patterns or features in images, whereas frequency domain methods find the transformation
parameters for registration while working in the transform domain. Such methods work for simple
transformations, such as translation, rotation, and scaling.

Single and multi-modality methods: Single-modality methods register images acquired by the same
scanner/sensor type, while multi-modality methods register images acquired by different
scanner/sensor types.

Automatic and interactive methods: Based on the level of automation they provide, registration
methods are classified as manual, interactive, semi-automatic, and automatic. Manual methods
provide tools to align the images by hand. Interactive methods reduce user bias by performing
certain key operations automatically while still relying on the user to guide the registration.
Semi-automatic methods perform more of the registration steps automatically but depend on the
user to verify the correctness of a registration. Automatic methods do not allow any user interaction
and perform all registration steps automatically.

Similarity measures for image registration: Image similarity methods are used mostly in medical
imaging. An image similarity measure quantifies the degree of similarity between intensity patterns
in two images. The choice of an image similarity measure depends on the modality of the images to
be registered.

The authors performed experiments within the same image, across different images of the same
type, and across different images of the same scene. Performance was evaluated against SIFT
during affine template matching, under varying scene types, and for matching in real-world scenes.
The results show that the FAST algorithm deals best with photometric changes as well as with blur
and JPEG compression.

In another approach, registration is performed by applying region-based segmentation to the
images, taking into account several attributes such as perimeter, fractal dimension, and structural
features. The image first undergoes a pre-processing stage, after which features are extracted from
the enhanced image; matching is then done using these features, and finally the rotation differences
between the images to be registered are detected. The purpose of this work is to perform automatic
image registration accurately through segmentation, which satisfies the constraints present in the
image and thus can effectively improve the quality of the registered images.

In another method, registration is performed by dividing the image into regions and using SIFT
keypoints. The real image is first partitioned into four sub-regions, and SIFT keypoints are
extracted from both the real and reference images. Location-constraint relations of the keypoints in
all four sub-regions are then established, i.e., constraint relations between all SIFT keypoints and
their corresponding center points. Next, SIFT matching is performed on the real and reference
images, and mapping is done by calculating the minimum distance. Based on these mappings, the
corresponding coordinates in the images are corrected, and finally the four center points and the
target region are output. Because of the location constraints and the mapping relationship, this
multi-sub-region SIFT registration gives better performance.

In another study, automated image registration in dynamic, lake-rich environments is done using
lake centroids as features. Histogram thresholding is first applied to both the master and slave
images. The segmentation result is used to create training samples to classify lakes against the land
background, and the segmentation is further refined by a supervised classification technique using
multispectral bands. Areas where pixel values are less than the mean value of the water-body
segment serve as the training set for water bodies; similarly, pixels with values greater than the
mean background value form the training set for the background class. Based on these two training
sets, the classifier produces a lake map from all four bands of the slave image, and the master image
is processed in the same way to produce its lake map.

Classification of Image Registration Techniques

In general, image registration techniques can be categorized according to various distinguishing
criteria. Based on image acquisition, Brown (1992) divided image registration into four classes.
Barbara Zitova and Jan Flusser categorized image registration techniques into area-based and
feature-based methods. Maintz proposed a nine-dimensional scheme that provides a categorization
used mainly in medical imaging applications.

2.1.1 Classification Based on Application

Image registration can be categorized into four different classes according to the application:

Different viewpoints: The same scene is captured from different viewpoints. The purpose is to
achieve a larger two-dimensional representation. Example: remote sensing, for the mosaicking of
images.

Different times: The same scene is acquired at different times, possibly under different conditions.
The purpose is to find and evaluate changes that occur in the scene over time. Examples: remote
sensing and computer vision.

Different sensors: The same scene is acquired from different sources. The purpose is to combine
the information from the different sources to obtain more detailed information. Examples: remote
sensing and medical imaging.

Scene-to-model registration: An image of the scene and a model of the scene are registered. The
purpose is to localize the acquired image in the model and compare them. Examples: computer
vision, remote sensing, and medical imaging.

2.1.2 Classification Based on the Nine-Dimensional Scheme

Maintz proposed a nine-dimensional scheme to characterize image registration, used mainly in
medical imaging:

Dimensionality: the number of geometric dimensions of the image space; it can be two-dimensional
or three-dimensional.

Nature of registration: the aspect of the two views used to effect registration; it may be
intensity-based or surface-based.

Nature of transformation: the property that holds for the transformation, such as affine, projective,
or rigid.

Domain of transformation: whether the transformation is computed locally or globally.

Degree of interaction: the control given to the human over the registration algorithm.

Optimization procedure: the approach by which the quality of registration is estimated continually
during registration.

Modalities involved: the means by which the images to be registered are acquired.

Subject: the sensed image.

Object: the region to be registered.

2.1.3 Classification Based on Essentials

Barbara Zitova and Jan Flusser classified image registration techniques as follows:

Area-based methods: applied when salient structural information about the image is absent and the
distinctive information is provided by gray levels or colors.

Feature-based methods: applied when local structural information about the image is available.

2.2 Steps Involved in the Image Registration Process

Due to the diversity of images to be registered and the various degradations present in them, it is
impossible to define a universal image registration method; that is why each method has its own
importance. The majority of image registration methods consist of four basic steps:

Feature detection: Salient and distinctive objects in an image (edges, lines, contours) are detected
manually or automatically. These feature points are represented by their descriptors.

Feature matching: The features detected in the reference image are matched with those detected in
the sensed image. Feature descriptors and similarity measures are used for this purpose.

Transform model estimation: The type and parameters of the mapping function used to align the
sensed image with the reference image are established.

Image resampling and transformation: The sensed image is transformed by means of the
established mapping functions.

The implementation of each registration step has its own typical problems. First, we have to decide
what kind of feature is appropriate for the given task. The features should be spread over the image
and easily detectable, and the feature sets detected in the sensed and reference images must have
enough common elements. The detection method should have good spatial accuracy and should not
be affected by the assumed degradation.

In the feature matching step, problems caused by incorrect feature detection and image degradation
can arise. Features can appear dissimilar due to different imaging conditions. The feature descriptor
and similarity measure must therefore be chosen carefully so that they do not hinder the matching;
they have to be discriminable enough.

The type of mapping function should be chosen according to the prior information about the image
degradation. If no such information is available, the model should be flexible enough to handle all
kinds of degradation.

Finally, the choice of the appropriate resampling technique is based on the accuracy of the
interpolation and the computational complexity. Bilinear and nearest-neighbor approaches are used
in most cases.
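As a concrete illustration of the bilinear choice, the following sketch samples an image at a fractional location by weighting the four surrounding pixels. The helper is hypothetical, written for exposition only, not taken from any surveyed method.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img at fractional (x, y): x indexes columns, y indexes rows.
    The value is a weighted mix of the four surrounding pixels."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    # Clamp the far corner so samples on the last row/column stay in bounds.
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot
```

A nearest-neighbor resampler would simply round (x, y) to the closest pixel, which is cheaper but produces blockier results.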

3. Feature Detection

Formerly, features were objects detected manually by an expert. There are two main approaches to
feature understanding.

3.1 Area-Based Methods

Area-based methods put emphasis on feature matching rather than on feature detection; no features
are detected in this approach, so the first image registration step is omitted.

3.2 Feature-Based Methods

The feature-based approach is based on the extraction of salient features in the images. Regions,
lines, and points are understood as features here. They should be spread over the image and
efficiently detectable in both images, and the number of common elements in the detected sets of
features should be high, regardless of image geometry and additive noise. As opposed to area-based
methods, feature-based methods do not work directly on image intensity values.

3.2.1 Region Feature Detection

Region features can be closed-boundary regions of appropriate size, such as reservoirs, forests, or
urban areas. Regions are generally represented by their centers of gravity and are detected by
means of segmentation methods. The accuracy of the segmentation significantly influences the
resulting registration. S. K. Pal [4] proposed a refinement to segmentation that produces better
registration accuracy. In 2004, the Harris-Laplace region detector was introduced, which locates
potentially relevant points with the Harris corner detector and then selects the points with a
characteristic scale. A new region-feature-descriptor-based image registration was proposed in 2012.

3.2.2 Line Feature Detection

Line features can be represented by line segments, object contours, roads, or anatomical structures
in medical imaging. Standard edge detection methods like the Canny detector or the Laplacian
detector are used for line feature detection. The Marr-Hildreth edge detector was a very popular
edge detector before Canny presented his paper; the Canny edge detector is now widely considered
the standard edge detection algorithm. A recent detailed comparative study of various edge
detection algorithms can be found in the literature. Apart from this, Li proposed a new method to
detect lines in the reference and source images. Figure 1 shows an artificial image and its
corresponding line segments and edges.

3.2.3 Interest Point and Corner Feature Detection

Corners are estimated as points of high curvature on region boundaries. The first corner detectors
appeared in the late 1970s: in 1977, Moravec defined the concept of a point of interest as a
distinctive region in an image, and based on Moravec's concept Harris developed the algorithm
now known as the Harris corner detector. Figure 2 shows an image containing different types of
corners.

A comparative study of interest point performance on a data set can be found in work from 2011,
and different detection methods in work from 2009.

3.2.4 Shape Feature Detection

Some objects may be recognized by their outline shape, which is a very powerful feature in image
processing. Yang Mingqiang and Kpalma Kidiyo (2008) discussed the essential properties of a
shape feature, including invariance to rotation, translation, and scaling, noise resistance, and
reliability.

After detecting the features, we have to match them, i.e., determine which features come from
corresponding locations in the different images. Again there are two different aspects of feature
matching: area-based and feature-based.

4.1 Area-Based Methods

All techniques in area-based methods merge the feature detection step with the feature matching
step and deal with the images without detecting salient objects.

4.1.1 Correlation or Pixel-Based Methods

Cross-correlation is the first basic approach to the registration process and is generally used for
pattern matching; it is the classical area-based method.

Cross-correlation is a similarity measure or match metric C(u, v) of image I(x, y) under a
displacement u in the X direction and v in the Y direction. The two-dimensional cross-correlation
function with template T is

    C(u, v) = sum_x sum_y I(x, y) * T(x - u, y - v)

This similarity measure is computed for window pairs from the sensed and target images, and the
window pair for which the maximum is found is set as the corresponding one.
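A minimal sketch of this window search, using the normalized form of C(u, v) so the score is insensitive to local brightness. The function name and exhaustive search are illustrative, not from any surveyed paper.

```python
import numpy as np

def ncc_match(image, template):
    """Slide `template` over `image` and return the offset (u, v) where the
    normalized cross-correlation peaks, together with the peak score."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    best, best_uv = -np.inf, (0, 0)
    for u in range(image.shape[0] - th + 1):
        for v in range(image.shape[1] - tw + 1):
            w = image[u:u + th, v:v + tw]
            wc = w - w.mean()
            denom = tn * np.sqrt((wc ** 2).sum())
            if denom == 0:  # flat window: correlation undefined, skip it
                continue
            c = (wc * t).sum() / denom
            if c > best:
                best, best_uv = c, (u, v)
    return best_uv, best
```

The double loop makes the cost of the exhaustive search explicit, which is exactly the computational burden that motivates the Fourier methods discussed next.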

The two main disadvantages of correlation-like methods are the flatness of the similarity-measure
maxima and the high computational complexity.

4.1.2 Fourier Methods

The correlation theorem states that the Fourier transform of the correlation of two images is the
product of the Fourier transform of one image and the complex conjugate of the Fourier transform
of the other [10]. The Fourier transform of an image f(x, y) is a complex function with a real part
R(u, v) and an imaginary part I(u, v) at each frequency (u, v) of the frequency spectrum:

    F(u, v) = R(u, v) + j I(u, v) = |F(u, v)| e^{j phi(u, v)}

where |F(u, v)| is the magnitude and phi(u, v) is the phase angle.

The phase correlation method is based on the Fourier shift theorem and was proposed for the
registration of translated images.

Recently, an image mosaic based on phase correlation and the Harris operator was proposed by
Yang, in which the scaling and translation are performed first and the unregistered image is
adjusted; after that, feature points are detected and matched.

It is observed that when computational speed is required, or when the images are corrupted by
noise, Fourier methods are preferred over correlation methods.
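The shift theorem underlying these methods can be demonstrated in a few lines of NumPy: the normalized cross-power spectrum of two translated images is a pure phase ramp whose inverse transform peaks at the shift. This is a sketch assuming integer translations and periodic boundaries, not a production registration routine.

```python
import numpy as np

def phase_correlate(fixed, moving):
    """Estimate the integer translation (dy, dx) such that
    moving == np.roll(fixed, (dy, dx), axis=(0, 1))."""
    F = np.fft.fft2(fixed)
    G = np.fft.fft2(moving)
    # Normalized cross-power spectrum: magnitude 1, phase encodes the shift.
    R = G * np.conj(F)
    R /= np.abs(R) + 1e-12
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts beyond half the image size wrap around to negative values.
    if dy > fixed.shape[0] // 2:
        dy -= fixed.shape[0]
    if dx > fixed.shape[1] // 2:
        dx -= fixed.shape[1]
    return int(dy), int(dx)
```

Because the FFT costs O(N log N), this replaces the exhaustive correlation search with a handful of transforms.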

4.1.3 Mutual Information Methods

Mutual-information-based registration begins with the estimation of the joint probability of the
intensities of corresponding pixels in the two images. The mutual information between two random
variables X and Y is given by the formula [3]:

    MI(X, Y) = H(X) + H(Y) - H(X, Y)

where H(X) = -E_X[log P(X)] is the entropy of the random variable X and P(X) is its probability
distribution. The method is based on maximizing MI.

It is observed that MI gives more accurate results than other registration methods, but when the
images have low resolution or contain little information it gives worse results.
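A minimal sketch of estimating MI(X, Y) from the joint histogram of corresponding pixel intensities. The bin count and the base-2 logarithm are illustrative choices, not values from the cited work.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI(X, Y) = H(X) + H(Y) - H(X, Y), estimated from the joint
    histogram of corresponding pixel intensities in images a and b."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()        # joint probability P(X, Y)
    px = pxy.sum(axis=1)             # marginal P(X)
    py = pxy.sum(axis=0)             # marginal P(Y)

    def entropy(p):
        p = p[p > 0]                 # 0 * log 0 is taken as 0
        return -(p * np.log2(p)).sum()

    return entropy(px) + entropy(py) - entropy(pxy.ravel())
```

A registration loop would evaluate this measure over candidate transformations and keep the one that maximizes it.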

4.2 Feature-Based Methods

Feature-based methods use image features derived by feature extraction algorithms, instead of
intensity values, for matching.

4.2.1 Methods Using Spatial Relations

Methods based on spatial relations are usually applied if the detected features are ambiguous or
their neighborhoods are distorted.

Barrow (1977) introduced chamfer matching for image registration, in which line features detected
in the images are matched by minimizing the distance between them. Recently, Gongjian Wen
developed a high-performance feature matching method for image registration by combining
spatial and similarity information.

4.2.2 Relaxation Methods

One of the well-known relaxation methods addresses the consistent labeling problem (CLP), in
which each feature from the sensed image is labeled with the label of a feature from the target
image in a way that is consistent with the labels of the other features.

Another solution to the CLP, and to image registration, is backtracking, where a consistent labeling
is generated recursively.

4.2.3 Pyramids and Wavelets

In 1977, a sub-window was used to find the probable candidates for the corresponding window in
the reference image, and then the full-size window was applied. Later, a rectangular grid of
windows was taken, on which cross-correlation was performed to reduce the computational load.
These techniques are examples of early pyramidal methods.

Recently, wavelet decomposition of the images was proposed for the pyramidal approach, and
many comparison tests have been carried out to establish which wavelet family performs best.

4.2.4 Methods Using Invariant Descriptors

Another way of exploiting spatial relations is to estimate the correspondence of features by their
descriptors, which contain information about the feature points detected in both the reference and
source images. The most common approach is to use closed-boundary regions as features.
Theoretically, any invariant and sufficiently discriminative shape descriptor, such as a shape vector,
can be employed in region matching. Various types of descriptors are used, but we will proceed
with the examination of SIFT, SIFT variants, and SURF feature matching.

SIFT Descriptor and Its Variants

A different feature extraction algorithm is registration based on the Scale-Invariant Feature
Transform (SIFT), proposed by David G. Lowe in 1999. Lowe (2004) refined SIFT for extracting
distinctive features from images that are invariant to rotation and scaling. It has been widely applied
in mosaicking, recognition, retrieval, etc. There are various variants of SIFT proposed for different
types of images that provide better results; some of them are listed below with a comparison to SIFT
on the basis of time and computational complexity.

Centroids are used as tie points for image registration. Identical lakes in two images are associated
by comparing the Euclidean distance between their centroid points: two lakes are considered
identical if their centroids are closest in the image plane and the Euclidean distance is minimal. In
this work, the approach is automated, achieves sub-pixel accuracy, and is a feasible way to register
images for lake change detection.
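The centroid association step described above can be sketched as a nearest-neighbor search over Euclidean distances. The distance threshold here is a hypothetical parameter for the sketch, not a value taken from the paper.

```python
import numpy as np

def match_centroids(master, slave, max_dist=5.0):
    """Pair each slave-image lake centroid with its closest master-image
    centroid, keeping only pairs within max_dist pixels.
    master, slave: (N, 2) and (M, 2) arrays of (row, col) centroids."""
    pairs = []
    # Full distance matrix: d[i, j] = distance(master[i], slave[j]).
    d = np.linalg.norm(master[:, None, :] - slave[None, :, :], axis=2)
    for j in range(slave.shape[0]):
        i = int(np.argmin(d[:, j]))
        if d[i, j] <= max_dist:
            pairs.append((i, j, float(d[i, j])))
    return pairs
```

The resulting centroid pairs serve as tie points for estimating the transformation between the two images.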

In another work, image registration is done using SIFT and image segmentation. The input image
with multiple spectral bands is first transformed to a single band using PCA. The PCA image is
then segmented by threshold segmentation: a threshold value is determined and the image is
segmented into intersecting and non-intersecting areas accordingly. After segmentation, SIFT
keypoints are calculated and applied to a bi-variate histogram, from which the final set of keypoints
is obtained and evaluation metrics (e.g., rms low, rms all) are calculated; these are the keypoints
used for analysis. The paper proposes a robust and efficient automatic image registration method in
which pairs of images with different pixel sizes, translation and rotation effects, and to some extent
different spectral content, are registered with subpixel accuracy. The only drawback is that the
computation time is high, since all the SIFT keypoints must be compared.

LITERATURE SURVEY

Image registration is an important and useful area of study in computer vision. This section
documents a brief review of all the research papers studied, together with a brief account of related
work in the same area.

The papers reviewed are summarized below (author, title, date and venue of publication, method
used, and result).

1. Zhili Song, Shuigeng Zhou. "A Novel Image Registration Algorithm for Remote Sensing Under
Affine Transformation." IEEE Transactions, Aug 2014.
Method: A remote-sensing image registration approach based on a novel and robust transformation
parameter estimation algorithm called HTSC, which is also developed in the paper.
Result: HTSC can boost the accuracy of feature matching in the top-25 correspondences of the
sorted queues for remote-sensing image registration to 80% or higher, in some cases to 90%.

2. Bin Fan, Chunlei Huo, Chunhong Pan, Qingqun Kong. "Registration of Optical and SAR
Satellite Images by Exploring the Spatial Relationship of the Improved SIFT." IEEE, Jul 2014.
Method: An improved version of the scale-invariant feature transform for optical and SAR
(synthetic aperture radar) images.
Result: The effectiveness of the method is shown by the experimental results; one disadvantage is
that the utilized low-distortion constraint restricts its direct use for registration.

3. Rachna P. Gajre, Dr. Leena Ragha. "Comparison of Image Registration Methods for Satellite
Images." IJSCR, Nov 2013.
Method: Compared the work done by various authors on the registration of satellite images.
Result: Methods including SIFT are time consuming and depend on the size and resolution of the
image, which can be improved further.

4. Mr. D. P. Khunt, Prof. Y. N. Makwana. "Image Registration Using Intensity Based Technique."
JOIKR, Oct 2013.
Method: A method for image registration using an intensity-based registration algorithm, together
with applications of image registration.
Result: A limitation of this method is that if the difference between the two images is very large,
they cannot be registered properly.

5. R. V. Prasad CH, S. Suresh. "A Comprehensive Image Segmentation Approach For Image
Registration." IJCTT, Oct 2012.
Method: A way to perform automatic image registration through histogram-based image
segmentation.
Result: The enhanced segmentation approach gives accurate results, even in the presence of a
considerable amount of noise.

6. S. Govindarajulu, K. Nihar Kumar Reddy. "Image Registration on Satellite Images."
IOSR-JECE, Sep 2012.
Method: A robust and efficient method for AIR (Automatic Image Registration) which combines
image segmentation and SIFT, complemented by an outlier removal stage and PCA (Principal
Component Analysis).
Result: The performance of the method is evaluated through several measures.

7. Shiv Bhagwan Ojha, D. V. Ravi Kumar. "Automatic Image Registration through
Histogram-Based Image Segmentation." IJERA, May 2012.
Method: A method for AIR through histogram-based image segmentation, able to estimate the
rotation and/or translation between two images.
Result: The method is very advantageous for generating the accurate rotation and shift of a base
image with respect to the unregistered image, and for registering the base image with the
unregistered image.

8. Hernâni Gonçalves, Luís Corte-Real. "Automatic Image Registration through Image
Segmentation and SIFT." IEEE Transactions on Geoscience and Remote Sensing, Jul 2011.
Method: An AIR method based on the combination of image segmentation and SIFT,
complemented by a robust outlier removal procedure.
Result: A benefit of this method is that accurate segmentation of the objects present in the image is
not needed; the combination of different techniques provides a vigorous and precise method for AIR.

9. Jianglin Ma, Jonathan Cheung-Wai Chan, Frank Canters. "Fully Automatic Subpixel Image
Registration of Multiangle CHRIS/Proba Data." IEEE, Jul 2010.
Method: A two-step nonrigid automatic registration scheme for multiangle satellite images, using
techniques such as SIFT and NCC; the method was tested on Compact High Resolution Imaging
Spectrometer (CHRIS) data.
Result: The method works well in areas with little difference in topography.
Zitova and Flusser [1] describe the various approaches to image registration, such as area-based
and feature-based methods, which are further classified into subcategories according to the basic
ideas of the matching methods. The four basic steps of the image registration procedure (feature
detection, feature matching, mapping function design, and image transformation and resampling)
are also covered, along with major goals and an outlook for future research, and the advantages and
drawbacks of the methods regardless of the particular application area.

Ezzeldeen et al. [2] present a comparative study between a Fast Fourier Transform (FFT)-based technique, a contour-based technique, a wavelet-based technique, a Harris-Pulse Coupled Neural Network (PCNN)-based technique, and a Harris-moment-based technique for remote sensing images, comparing the RMSE ranges, timing results, and the average number of control points.

They conclude that the most suitable technique is the FFT-based one, although it has the largest RMSE (above 2); the contour-based technique has the least running time (2.103 s for a 256x256 image and 2.214 s for a 512x512 image), and the wavelet-based technique yields the largest number of control points (30).

Maes et al. [3] note that mutual information is a time-consuming but highly precise image registration criterion. To improve computational efficiency, the images are first registered at low resolution, computing the entropy of the reference and current images and the joint entropy of both. The pixels are then mapped using an affine transform between the approximation coefficients, parameterized with the six degrees of freedom of the transformation.

An adaptive search for the optimum transformation parameters is performed in order to maximize the mutual information. The method is user independent and requires no prior data, which makes it completely automatic and highly robust. The approach, with robustness evaluation and maximization of mutual information, is applied to rigid bodies in CT, MR, and PET images.
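The joint-histogram estimate of mutual information described above can be sketched in a few lines of numpy; the bin count and the base-2 logarithm below are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate MI between two equally sized grayscale images from
    their joint intensity histogram (a sketch of the measure that
    Maes et al. maximize during registration)."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()          # joint probability
    px = pxy.sum(axis=1)             # marginal of a
    py = pxy.sum(axis=0)             # marginal of b
    nz = pxy > 0                     # avoid log(0) terms
    return float(np.sum(pxy[nz] *
                        np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```

For perfectly aligned identical images, MI equals the image entropy; against a constant image it drops to zero, which is why maximizing it drives the alignment.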

Li et al. [4] propose an efficient multiscale deformable registration framework that combines an edge-preserving scale space (EPSS) with free-form deformation (FFD) for medical image registration. The proposed method shows better accuracy and robustness than traditional methods for medical image processing by applying multiscale decomposition criteria to medical images. The implemented framework also increases the efficiency of the registration process and improves its applicability to image-guided radiation therapy with current medical systems.

Huang et al. [5] evaluate a hybrid method that, in contrast with purely feature-based or intensity-based methods, integrates the merits of both approaches. A small number of scale-invariant salient region features are extracted automatically, and their interior intensities are matched using robust similarity measures. The goal is to identify as many good feature correspondences as possible and to fully utilize these correspondences to estimate an appropriate transformation model for registration.

The feature matching algorithm consists of two steps: region component matching (RCPM) and region configural matching (RCFM). The procedure first finds correspondences between individual region features, then detects joint correspondences between multiple pairs of salient region features using a generalized expectation-maximization framework, and finally uses the joint correspondence to recover the optimal transformation parameters.

Huang and Li [6] introduce a feature-based image registration method using shape context for object recognition and handwritten digits, with thin-plate spline interpolation as the mapping function. The method first computes the shape context and the control points in both the reference image and the target image. A Lee filter is designed to eliminate speckle noise, control points are extracted using the Harris operator, and edge features or corners are extracted from both images with the Canny operator.

Based on the shape context, the control points are matched within a described NxN pixel area. Invalid control points are removed, and an affine transformation estimated from both images serves as the mapping model. Finally, a thin-plate spline is used to warp the images. The proposed method targets optical-SAR images and multiband SAR images.

Mekky et al. [7] introduce the concept of wavelet-based image registration. Four image registration techniques are compared: cross-correlation-based registration, mutual information (MI)-based hierarchical registration, the scale-invariant feature transform (SIFT), and a hybrid approach using MI and SIFT.

The proposed method performs a wavelet-based decomposition of the reference and the new image, followed by intensity-based registration with MI; the rough transformation results are refined using the SIFT algorithm, and outliers are removed with the RANSAC algorithm. Results obtained by the hybrid approach have the same impact as MI and SIFT used independently.

Sarvaiya and Patnaik [8] propose a combined feature-based approach to image registration using the Mexican hat wavelet, invariant moments, and the Radon transform. Feature points are extracted using the Mexican hat wavelet with invariant moments, and the corresponding control points are registered using the Radon transform.

Feature point extraction with the Mexican hat wavelet consists of convolving the image with the wavelet and finding the local maxima; the Laplacian of Gaussian combined with a Gabor wavelet further improves the feature extraction. A circular template around each feature point is used to determine the invariant moments. The Radon transform handles the scaling and rotation while matching the feature points. The proposed method shows good performance under high degrees of rotation and scaling up to 1.8.

Lowe [9] designed the SIFT (scale-invariant feature transform) algorithm, first proposed in 1999. SIFT is a feature detection algorithm used to identify similar objects in two different images; it identifies objects using a corner-detection-like approach that is invariant to scale. Its main advantage is that it extracts a large number of features from an image, enabling reliable identification. The procedure first finds the best features from a single reference image or a set of reference images and stores them in a database. The features from this database are then matched individually against a new or target image, with matches determined by the Euclidean distance between their feature vectors. The set of image features is generated through four major stages of computation: scale-space extrema detection, to identify interest points invariant to scale and orientation; keypoint localization, to select keypoints in the reference image based on measures of their stability; orientation assignment, based on local image gradient directions and orientations, thereby providing invariance to these transformations; and the keypoint descriptor, which transforms the image features into a representation that allows for significant levels of shape distortion and change in illumination.
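The database matching step described above, nearest-neighbour search on descriptor Euclidean distance with Lowe's ratio test, can be sketched in plain numpy; the toy 2-D descriptors below stand in for real 128-dimensional SIFT vectors:

```python
import numpy as np

def match_descriptors(des_ref, des_new, ratio=0.8):
    """For each reference descriptor, find its nearest neighbour in
    the new image by Euclidean distance, and accept the match only
    if it is clearly closer than the second-nearest candidate
    (Lowe's ratio test; 0.8 is the threshold from the paper)."""
    matches = []
    for i, d in enumerate(des_ref):
        dists = np.linalg.norm(des_new - d, axis=1)
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches
```

The ratio test is what makes the matching robust: ambiguous descriptors, whose two closest candidates are nearly equidistant, are simply discarded.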

Liu et al. [10] propose a multiscale registration method for remote sensing images using SIFT features in the steerable domain, which handles large variations in scale, rotation, and illumination between images. The reference and sensed images are registered with a first-in-last-stage gradual optimization. Because of the external image feature measurement in the transformed image, the dominant gradient orientation around each point is computed, and the steerable pyramid transform decomposes the image for computer vision applications. The authors compare the performance of the proposed robust S-RSIFT algorithm with the SIFT and SIFT+SVD approaches and obtain good results under large variations in scale, rotation, and intensity.

Chen et al. [11] introduce a new method based on linear search with SIFT and nearest-neighbor algorithms, proposed for accelerating the registration of partially overlapping images. Using a low-resolution correspondence between the candidate images obtained by a SIFT-based method, the overlapping areas of the images are rapidly estimated. The proposed approach reduces the computational cost to 10%-30% with only a small compromise in accuracy.

Hongbo et al. [12] propose a rapid automatic image registration method based on an improved SIFT for narrow-baseline images. The approach achieves a great improvement in the speed and accuracy of image registration by accelerating the matching and reducing the number of candidate keypoints through a lower-complexity feature descriptor; the time consumed in extracting keypoints and finding correspondences is shortened by 1.451 s.

The authors use the SIFT algorithm to extract candidate keypoints from both images as well as to determine the number of inliers. The corner response of each keypoint is computed with the Harris measure, and the keypoints are filtered by their corner responses. Keypoints are matched with a best-bin-first search, consistency is checked and outliers are removed with RANSAC, a least-squares approach computes the transformation matrix, and finally the target image is overlaid on the reference image.
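The RANSAC outlier-removal stage followed by a least-squares refit, as used in this pipeline, can be illustrated with a minimal translation-only variant. This is only a sketch: the paper estimates a full transformation matrix, and the iteration count and inlier tolerance below are arbitrary:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=1.0, seed=0):
    """Hypothesize a pure translation from one random correspondence,
    keep the hypothesis with the most inliers, then refit it by
    least squares over the consensus set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        k = rng.integers(len(src))
        t = dst[k] - src[k]                       # hypothesis from one pair
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # least-squares refit over the inliers only
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers
```

Even with a gross outlier among the correspondences, the consensus set isolates the true motion, which is exactly the role RANSAC plays before the least-squares transformation estimate.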

Vini Vidyadharan and Subu Surendran [13] present an automatic image registration technique using the scale-invariant feature transform (SIFT) and normalized cross-correlation (NCC) to determine the feature points of the overlapping area in both the reference and target images. The authors describe a combination of best-bin-first search using a k-d tree for feature matching; for images containing large numbers of speckles, noise, and some distortion, the bad matches are eliminated using RANSAC.

The approach works successfully with different sets of images when tested against various scales, rotations, and illuminations.
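The NCC similarity measure paired with SIFT in this scheme reduces to a single formula; a minimal numpy sketch:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two same-sized patches:
    subtract each mean, then correlate and normalize, yielding a
    value in [-1, 1] (1 = identical up to gain and offset)."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))
```

Because the means are removed and the result is normalized, NCC is insensitive to linear brightness and contrast changes between the two patches, which is what makes it useful across differently illuminated images.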

Moorthi et al. [14] design a registration framework for remote sensing images from different sensors using a corner detection algorithm. The authors use Harris corner detection together with random sample consensus (RANSAC) to remove the outliers. The proposed work involves feature point extraction in both images; the control points play a vital role in the feature matching step, a spatial transformation is estimated by least squares, and finally the image is resampled. Unwanted control points, the outliers, are removed using the RANSAC algorithm.

The results show accurate image registration on four images from the Indian Remote Sensing satellite (IRS), with 599, 608, 587, and 469 control points and RMSEs (in pixels) of 0.57, 0.62, 0.48, and 0.54, respectively.
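The Harris corner response underlying this framework can be sketched directly from its definition R = det(M) - k * trace(M)^2 over the local gradient structure matrix M. This is a simplified illustration: finite-difference gradients and a 3x3 box window stand in for the usual Gaussian smoothing:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Per-pixel Harris corner response: corners give R > 0,
    edges give R < 0, and flat regions give R near 0."""
    iy, ix = np.gradient(img.astype(float))        # image gradients
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy
    def smooth(a):
        # 3x3 box filter: average of the nine shifted copies
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]] / 9.0
                   for i in range(3) for j in range(3))
    sxx, syy, sxy = smooth(ixx), smooth(iyy), smooth(ixy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace ** 2
```

Thresholding R and keeping local maxima yields the control points, and the sign structure (corner positive, edge negative) is why the response works as a filter on keypoints.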

Mahesh and Subramanyam [15] propose a new corner detection algorithm using steerable filters and the Harris algorithm, with broad applications in image processing and computer vision, and compare its performance with the SUSAN and Harris corner detection algorithms. Steerable filters are used as the basic filter bank, and the transformation may be a translation, shift, or rotation. The proposed method first decomposes the image using steerable filters, detects the corners, combines all detected corners into one by dilation, and finally finds the centroid of these corners.

The approach obtains good results even after rotation, scaling, and translation of an image, yielding true corners with a minimum number of false or missed corners.

Nichat and Shandilya [16] propose a scheme for area matching using different transform-based methods. The paper implements an image registration technique in which the reference image is compared with the target image by locating an object or area from the unregistered image with an area-based approach. The HAAR and WALSH transforms are compared, with the root-mean-square error (RMSE) used as the similarity measure.

The approach is simple, fast, and easy; the Walsh transform reduces the computational time by a considerable amount and thus greatly reduces the computational complexity.
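The RMSE similarity measure used to compare the two transforms is itself a one-liner; a numpy sketch:

```python
import numpy as np

def rmse(img_a, img_b):
    """Root-mean-square error between two equally sized images:
    the per-pixel difference, squared, averaged, then rooted.
    Lower values indicate a closer match after registration."""
    diff = img_a.astype(float) - img_b.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))
```

An RMSE of zero means the registered image coincides with the reference pixel for pixel, so comparing RMSE values across transforms directly ranks their registration quality.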

Pandya et al. [17] implement the speeded-up robust features (SURF) detector and increase the number of matching points for automatic image registration. SURF is widely used because of its fast feature detection and low time consumption, and increasing the matching points leads to a more accurate registration. SURF, which is based on an approximated Hessian matrix, is used for feature extraction; a nearest-neighbor algorithm matches keypoints by the minimum Euclidean distance between invariant descriptor vectors; RANSAC eliminates the outliers; and an affine transformation serves as the transformation model.

As in panoramic imaging, an increase in matching points can improve the quality of the result, and using the SURF algorithm for feature extraction leads to quick image registration.

Korman et al. [18] propose Fast-Match, a fast algorithm for approximate template matching under 2-D affine transformations that minimizes the sum-of-absolute-differences (SAD) error measure. SAD errors are approximated by examining a random sample of pixels, and the transformation parameters are solved with a branch-and-bound algorithm.
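The SAD error measure that Fast-Match minimizes can be illustrated with the naive baseline it improves on, an exhaustive translation-only search; the actual algorithm instead samples pixels randomly and searches the full 2-D affine group with branch-and-bound:

```python
import numpy as np

def best_sad_offset(image, template):
    """Slide the template over every valid offset, computing the
    sum of absolute differences at each position, and return the
    offset with the smallest SAD together with that error."""
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            sad = np.abs(image[y:y + th, x:x + tw] - template).sum()
            if sad < best:
                best, best_pos = sad, (y, x)
    return best_pos, float(best)
```

The quadratic cost of this scan, multiplied across the six affine parameters, is precisely what motivates the paper's pixel sampling and branch-and-bound pruning.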

Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements.

Author: S. Leprince, S. Barbot, F. Ayoub, and J. Avouac

Year: 2011.

This paper presents a complete procedure for automatic and precise orthorectification and coregistration of optical satellite images. The process is based on SPOT images and the SRTM DEM, without any external information such as GPS measurements.

Advantage: The technique allows optical satellite images, possibly acquired from different satellite systems, to be co-registered with unprecedented accuracy, reducing or eliminating measurement uncertainties and biases for any change detection application.

Disadvantage: The measurement of disparities between a set of satellite images is subject to several kinds of noise, which causes some difficulties during registration.

Evaluation of color descriptors for object and scene recognition

Author: K. van de Sande, T. Gevers, and C. Snoek.

Year: 2014

In this paper, the invariance properties of color descriptors are studied using a taxonomy of invariance with respect to photometric transformations. The distinctiveness of the color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain.

Advantage: The study shows how invariance to light intensity changes and light color changes affects object and scene category recognition; for light intensity changes, the usefulness of invariance is category-specific.

Disadvantage: The paper only provides information about the effect of invariance to light intensity.

Shape retrieval using triangle-area representation and dynamic space warping

Author: N. Alajlan, I. El Rube.

Year: Jul. 2007.

This paper presents a closed-boundary shape representation that measures the convexity/concavity of each boundary point. Two shapes are matched using a dynamic space warping algorithm. Among the image features used to achieve this objective, such as color and texture, shape is considered the most promising for the identification of entities in an image.

Advantage: The method is invariant to translation, rotation, and scaling, and robust against noise and moderate amounts of occlusion. It constitutes a comprehensive shape retrieval test.

Disadvantage: Some noise remains present in the output of this method.
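The triangle-area representation at the heart of this descriptor can be sketched as the signed area of the triangle each boundary point forms with its neighbours. This is a minimal single-scale version; the full method computes TAR over a range of neighbour separations:

```python
import numpy as np

def triangle_area_rep(contour, step=1):
    """TAR of a closed boundary: for each point, the signed area of
    the triangle formed with its neighbours `step` positions away.
    The sign encodes convexity (+) vs. concavity (-) for a
    counter-clockwise contour."""
    n = len(contour)
    tar = np.empty(n)
    for i in range(n):
        p0 = contour[(i - step) % n]
        p1 = contour[i]
        p2 = contour[(i + step) % n]
        # signed area via the cross product of the two edge vectors
        tar[i] = 0.5 * ((p1[0] - p0[0]) * (p2[1] - p0[1])
                        - (p2[0] - p0[0]) * (p1[1] - p0[1]))
    return tar
```

On a convex counter-clockwise polygon every entry is positive; reversing the traversal direction flips all the signs, which is the convexity/concavity measurement the paper exploits.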

Efficient FFT-Accelerated Approach to Invariant Optical-LIDAR Registration

Author: Alexander Wong, Jeff Orchard,

Year: November 2008.

This paper deals with an FFT-accelerated approach to the registration of optical and light detection and ranging (LIDAR) images. The differences in intensity mapping between optical and LIDAR images are addressed through an integrated local intensity-mapping transformation optimization process. Frequency-based techniques are also used to exploit characteristics such as phase to determine the alignment between two images; a popular frequency-based technique is phase correlation.

Advantage: The cost of an exhaustive search is substantially reduced by exploiting the fast Fourier transform. Experimental results demonstrate good intersensor registration accuracy on various difficult optical-LIDAR image pairs.

Disadvantage: The method is not suitable for investigating higher-order intensity transformation models.
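Phase correlation, the frequency-based technique mentioned above, can be sketched with numpy's FFT routines. This is a minimal version for pure cyclic translation; the paper's method layers the local intensity-mapping optimization on top of such machinery:

```python
import numpy as np

def phase_correlation(ref, shifted):
    """Locate the translation between two images: the normalized
    cross-power spectrum keeps only phase information, and its
    inverse FFT peaks at the (cyclic) shift between the images."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(shifted)
    cross /= np.abs(cross) + 1e-12       # keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real
    return np.unravel_index(int(np.argmax(corr)), corr.shape)
```

Because only the phase is retained, the correlation surface is a sharp impulse at the true offset, which is what makes the technique both fast (two FFTs) and robust to uniform intensity differences.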

Multifocus color image fusion based on quaternion curvelet transform

Author: Liqiang Guo, Ming Dai.

Year:

Image fusion is commonly described as the task of combining information from multiple images. It is difficult to capture an image with all objects in focus, mainly because of the limited depth of field of a camera lens; multifocus image fusion algorithms can solve this problem. The quaternion curvelet transform (QCT) is used in a multifocus image fusion algorithm to remove image blur, and the fused color image is obtained by the inverse quaternion curvelet transform.

Advantage: Unlike previous fusion algorithms, the QCT-based method excludes the blurred regions from both source images well and removes the image blur in multifocus color image fusion tasks.

Disadvantage: The quaternion curvelet transform cannot be used in the general color image processing domain.

PROPOSED SYSTEM

The existing process is intensity based, and the transformation of the images requires complex mechanisms. The measured accuracy shows that the process is affected by physical disturbances in the images, and its computation time is increased by the employment of a large number of complex algorithms.

Considering the aforementioned pros and cons, we propose a novel feature-based image registration approach that makes the best use of transformation information. In the feature matching step, it adopts a novel robust estimation algorithm named Histogram of Triangle Area Representation (TAR) Sample Consensus (HTSC). This algorithm can effectively find the correct correspondences (CCs) among the candidates, which addresses the critical drawback of existing feature-based methods when they are applied to multimodal image registration. Experimental results show that the proposed method can effectively deal with the problems mentioned earlier. Furthermore, with the help of this method, most existing feature-point-based algorithms (e.g., SIFT and SURF) can register remote sensing images automatically.

Major contributions of this project include the following.

An algorithm for calculating the statistical information of correspondences based on the histogram of TAR (HTAR) is presented.

A novel robust estimation algorithm named HTSC is developed, which can substitute for the RANSAC and PROSAC algorithms and is more effective than them in some cases.

An approach for remote-sensing image registration based on HTSC is proposed.

The proposed image registration approach is generalized to align remote-sensing images under 2-D planar projective transformation.

Extensive experiments are conducted to validate the proposed approach.

ADVANTAGES:

The proposed process can detect the feature points accurately even when the intensity variations in the images are high, and the SIFT matching points are more exact for all types of images. The performance of the process is measured by calculating the PSNR and MSE values; these measures show that the proposed method performs better than the existing HTSC-based image registration. The process is computationally efficient, and its overall performance is also improved.
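The PSNR and MSE performance measures referred to above are defined as follows; a numpy sketch, where the peak value of 255 assumes 8-bit images:

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between reference and registered image."""
    return float(np.mean((ref.astype(float) - test.astype(float)) ** 2))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE).
    Higher PSNR (lower MSE) indicates a closer registration;
    identical images give infinite PSNR."""
    e = mse(ref, test)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```

Since PSNR is a monotone function of MSE, the two measures always rank registration results the same way; PSNR is simply the more interpretable, decibel-scaled form.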

SYSTEM ARCHITECTURE:

Preprocessing -> SIFT feature extraction -> FCM clustering -> Transformation -> Performance measures

FLOW DIAGRAM:

CONCLUSION

An image registration process that registers the images based on the extracted SIFT feature points is proposed. The proposed approach improves on the existing HTSC-based image registration. Satellite images are collected from different sensors, so the pixel values and the objects located at those pixels differ between the reference and the target image; the reference image mainly exhibits intensity variations and differences in object location. The registration process transforms the target image to match the reference image, which helps in the remote sensing process. The input reference and sensed images are preprocessed by applying a Gaussian filter, and the feature points are extracted from the preprocessed images using the SIFT algorithm. The feature points refer to the locations of the image pixels that have the same coordinate values in different orientations, and the extracted feature points are the same for the different images. Correspondence is established between the features detected in the sensed and reference images; the correspondence process refers to the matching of the pixels. Clustering is then applied to both the target and the reference image, and the clustered portions common to both are identified. The identification of the correct matching pixels yields the pixels that need to be modified in the sensed image. The sensed image is transformed to resemble the reference image by means of a piecewise transformation of the pixels from the sensed image to the reference image. Performance is analysed by measuring the accuracy and the error rate, where the error rate is the rate of the number of pixels modified in the sensed image corresponding to the target image. The proposed method overcomes the difficulties of the existing complex HTSC and other methods, and the accuracy is also improved in the proposed process.

REFERENCES

[1] Barbara Zitova and Jan Flusser, "Image registration methods: a survey," Image and Vision Computing, vol. 21, pp. 977-1000, 2003.

[2] R. M. Ezzeldeen, H. H. Ramadan, T. M. Nazmy, M. Adel Yehia, and M. S. Abdel-Wahab, "Comparative study for image registration techniques of remote sensing images," The Egyptian Journal of Remote Sensing and Space Sciences, vol. 13, pp. 31-36, 2011.

[3] Frederik Maes, Andre Collignon, Dirk Vandermeulen, Guy Marchal, and Paul Suetens, "Multimodality image registration by maximization of mutual information," IEEE Transactions on Medical Imaging, vol. 16, no. 2, pp. 187-198, April 1997.

[4] Dengwang Li, Honglin Wan, Hongjun Wang, and Yong Yin, "Medical image registration framework using multiscale edge information," International Workshop on Information and Electronics Engineering (IWIEE), Procedia Engineering, vol. 29, pp. 2480-2484, 2012.

[5] Xiaolei Huang, Yiyong Sun, Dimitris Metaxas, Frank Sauer, and Chenyang Xu, "Hybrid image registration based on configural matching of scale-invariant salient region features," IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW'04), 2004.

[6] Lei Huang and Zhen Li, "Feature-based image registration using the shape context," International Journal of Remote Sensing, vol. 31, pp. 2169-2177, 2010.

[7] Nagham E. Mekky, F. E.-Z. Abou-Chadi, and S. Kishk, "Wavelet-based image registration techniques: a study of performance," International Journal of Computer Science and Network Security (IJCSNS), vol. 11, no. 2, pp. 188-196, February 2011.

[8] Jignesh N. Sarvaiya and Suprava Patnaik, "Automatic image registration using Mexican hat wavelet, invariant moment, and Radon transform," International Journal of Advanced Computer Science and Applications (IJACSA), Special Issue on Image Processing and Analysis, pp. 75-84, 2012.

[9] David G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision (IJCV), vol. 60, no. 2, pp. 91-110, 2004.

[10] Xiangzeng Liu, Zheng Tian, Chunyan Chai, and Huijing Fu, "Multiscale registration of remote sensing image using robust SIFT features in steerable-domain," The Egyptian Journal of Remote Sensing and Space Sciences, vol. 14, pp. 63-72, 2011.

[11] Shu-qing Chen, Hua-wen Chang, and Hua Yang, "An efficient registration method for partially overlapping images," Advances in Control Engineering and Information Science (ACEIS), Procedia Engineering, vol. 15, pp. 2266-2270, 2011.

[12] Zhu Hongbo, Xu Xuejun, Wang Jing, Chen Xuesong, and Jiang Shaohua, "A rapid automatic image registration method based on improved SIFT," Procedia Environmental Sciences, vol. 11, pp. 85-91, 2011.

[13] Vini Vidyadharan and Subu Surendran, "Automatic image registration using SIFT-NCC," Special Issue of International Journal of Computer Applications on Advanced Computing and Communication Technologies for HPC Applications (ACCTHPCA), ISSN 0975-8887, June 2012.

[14] S. Manthira Moorthi, Indranil Misra, Debajyoti Dhar, and R. Ramakrishnan, "Automatic image registration framework for remote sensing data using Harris corner detection and random sample consensus (RANSAC) model," International Journal of Computer Engineering and Architecture (IJACEA), vol. 2, no. 2, June-December 2012.

[15] Mahesh and M. V. Subramanyam, "Invariant corner detection using steerable filters and Harris algorithm," Signal & Image Processing: An International Journal (SIPIJ), vol. 3, no. 5, October 2012.

[16] Nayana M. Nichat and V. K. Shandilya, "Image registration for area matching by using transform based methods," International Journal of Advanced Research in Computer Science and Software Engineering (IJARCSSE), vol. 3, issue 4, April 2013.

[17] Megha M. Pandya, Nehal G. Chitaliya, and Sandip R. Panchal, "Accurate image registration using SURF algorithm by increasing the matching points of images," International Journal of Computer Science and Communication Engineering (IJCSCE), vol. 2, issue 1, February 2013.

[18] Simon Korman, Daniel Reichman, Gilad Tsur, and Shai Avidan, "Fast-Match: fast affine template matching," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1940-1947, 2013.
