Image Fusion by Non-Subsampled Contourlet Transform and Singular Value Decomposition

CHAPTER-1
INTRODUCTION

1.1 Image
In common usage, an image or picture is an artifact that produces the likeness of
some subject–usually a physical object or a person. Images may be two dimensional (e.g.
a photograph) or three dimensional (e.g. a statue). They are typically produced by optical
devices–such as cameras, mirrors, lenses, telescopes, microscopes, etc. and natural
objects and phenomena, such as the human eye or water surfaces. The word image is also
used in the broader sense of any two dimensional figures or illustration, e.g. a map, a
graph, a pie chart, an abstract painting, etc. In this wider sense, images can also be
produced manually (by drawing painting, carving, etc.), by computer graphics
technology, or a combination of the two.
Digital Image
A digital image is a representation of a two-dimensional image as a finite set of digital values, called picture elements or pixels. Typically, the pixels are stored in computer memory as a raster image or raster map, a two-dimensional array of small integers. These values are often transmitted or stored in a compressed form. Digital images can be created by a variety of input devices and techniques, such as digital cameras, scanners, coordinate-measuring machines, seismographic profiling, airborne radar, and more.
1.1.1 Pixel
A pixel is one of the many tiny dots that make up the representation of a picture in a computer's memory. Each such information element is not really a dot, nor a square, but an abstract sample. With care, pixels in an image can be reproduced at any size without the appearance of visible dots or squares; but in many contexts, they are reproduced as dots or squares and can be visibly distinct when not fine enough.
The intensity of each pixel is variable; in color systems, each pixel typically has three or four dimensions of variability, such as red, green, and blue, or cyan, magenta, yellow, and black.

Sub Pixels
Many display and image-acquisition systems are, for various reasons, not capable of handling the different color channels at the same site. This limitation is generally resolved by using multiple sub-pixels, each of which handles a single color channel. For example, LCD displays typically divide each pixel into three or four sub-pixels, such as one red, one green, and one or two blue. Most digital camera sensors also use sub-pixels, by using colored filters.
For systems with sub-pixels, two different approaches can be taken: the sub-pixels can be ignored, with pixels being treated as the smallest addressable imaging element, or the sub-pixels can be included in rendering calculations, which requires more analysis and processing time but can produce apparently superior images in some cases. The latter approach has been used to increase the apparent resolution of color displays.
Megapixel
A megapixel is 1 million pixels, and the term is usually used to express the resolution capabilities of digital cameras. For example, a camera that can take pictures with a resolution of 2048 x 1536 pixels is commonly said to have 3.1 megapixels (2048 x 1536 = 3,145,728). Digital cameras use photosensitive electronics, either charge-coupled devices (CCDs) or CMOS sensors, which record brightness levels on a per-pixel basis.
1.1.2 Images in Matlab
The basic data structure in MATLAB is the array, an ordered set of real or complex elements. This object is naturally suited to the representation of images, which are real-valued ordered sets of color or intensity data.
MATLAB stores most images as two-dimensional arrays (i.e., matrices), in which
each element of the matrix corresponds to a single pixel in the displayed image. (Pixel is
derived from picture element and usually denotes a single dot on a computer display.)
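As a minimal sketch (using pout.tif, one of the sample images shipped with the Image Processing Toolbox), an image can be read and inspected like any other matrix:

    I = imread('pout.tif');   % read a grayscale sample image into a uint8 matrix
    size(I)                   % number of rows and columns, one element per pixel
    class(I)                  % storage class of the pixel data, here uint8
    I(2,15)                   % intensity of the pixel at row 2, column 15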
1.1.3 Image Representation
An image is stored as a matrix using standard MATLAB matrix conventions.
There are four basic types of images supported by MATLAB.
1. Binary images
2. Intensity images
3. RGB images
4. Indexed images
1. Binary Image
In a binary image, each pixel assumes one of only two discrete values: 1 or 0. A
binary image is stored as a logical array. By convention, this documentation uses the
variable name BW to refer to binary images.
The following figure shows a binary image with a close-up view of some of the
pixel values.

Fig 1.1: Pixel Values in a Binary Image
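A short sketch of how a binary image might be created and stored as a logical array (im2bw is the thresholding function that also appears later in Table 1.3):

    BW = false(4,4);          % 4-by-4 binary image, all pixels 0
    BW(2:3,2:3) = true;       % set a 2-by-2 block of pixels to 1
    I  = imread('pout.tif');  % a grayscale sample image
    BW2 = im2bw(I, 0.5);      % binarize with a luminance threshold of 0.5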

2. Grayscale Image
A grayscale image (also called gray-scale or gray-level) is a data matrix whose values represent intensities within some range. MATLAB stores a grayscale image as an individual matrix, with each element of the matrix corresponding to one image pixel. By convention, this documentation uses the variable name I to refer to grayscale images. The matrix can be of class uint8, uint16, int16, single, or double. While grayscale images are rarely saved with a color map, MATLAB uses a color map to display them.

For a matrix of class single or double, using the default grayscale color map, the intensity 0 represents black and the intensity 1 represents white. For a matrix of type uint8, uint16, or int16, the intensity intmin(class(I)) represents black and the intensity intmax(class(I)) represents white. The figure below depicts a grayscale image of class double.

Fig 1.2: Pixel Values in a Grayscale Image Define Gray Levels

3. RGB Image
A color image is an image in which each pixel is specified by three values, one each for the red, green, and blue components of the pixel's color. MATLAB stores color images as an m-by-n-by-3 data array that defines red, green, and blue color components for each individual pixel. Color images do not use a color map. The color of each pixel is determined by the combination of the red, green, and blue intensities stored in each color plane at the pixel's location.
A color array can be of class uint8, uint16, single, or double. In a color array of class single or double, each color component is a value between 0 and 1. A pixel whose color components are (0, 0, 0) is displayed as black, and a pixel whose color components are (1, 1, 1) is displayed as white. The three color components for each pixel are stored along the third dimension of the data array. For example, the red, green, and blue color components of the pixel (10,5) are stored in RGB(10,5,1), RGB(10,5,2), and RGB(10,5,3), respectively. The following figure depicts a color image of class double.

Fig 1.3: Color Planes of a Truecolor Image
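As a small illustration (peppers.png is a sample image shipped with MATLAB), the three components of one pixel can be read directly from the third dimension:

    RGB = im2double(imread('peppers.png'));  % m-by-n-by-3 array of class double
    r = RGB(10,5,1);                         % red component of pixel (10,5)
    g = RGB(10,5,2);                         % green component
    b = RGB(10,5,3);                         % blue component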

4. Indexed Image
An indexed image consists of an array and a color map matrix. The pixel values in the array are direct indices into a color map. By convention, this documentation uses the variable name X to refer to the array and map to refer to the color map.
The relationship between the values in the image matrix and the color map depends on the class of the image matrix. If the image matrix is of class single or double, it normally contains integer values 1 through p, where p is the length of the color map. The value 1 points to the first row in the color map, the value 2 points to the second row, and so on. If the image matrix is of class logical, uint8 or uint16, the value 0 points to the first row in the color map, the value 1 points to the second row, and so on.

The following figure illustrates the structure of an indexed image. In the figure,
the image matrix is of class double, so the value 5 points to the fifth row of the color
map.

Fig 1.4: Pixel Values Index to Colormap Entries in an Indexed Image
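A brief sketch (trees.tif is an indexed sample image shipped with the toolbox) showing the array/color-map pair and a conversion to truecolor:

    [X, map] = imread('trees.tif');  % X holds indices, map is a p-by-3 color map
    imshow(X, map)                   % each pixel value selects a row of the map
    RGB = ind2rgb(X, map);           % convert the indexed image to truecolor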

1.1.4 Digital Image File Types


The five most common digital image file types are as follows:
1. JPEG (Joint Photographic Experts Group)
It is a compressed file format that supports 24-bit color (millions of colors). This is the best format for photographs to be shown on the web or sent as email attachments, because the color information in the file is compressed (reduced) and download times are minimized.
2. GIF (Graphics Interchange Format)
It is a losslessly compressed file format that supports only 256 distinct colors. It is best used for web clip art and logo-type images. GIF is not suitable for photographs because of its limited color support.
3. TIFF (Tagged Image File Format)
It is an uncompressed file format with 24- or 48-bit color support. Uncompressed means that all of the color information from your scanner or digital camera for each individual pixel is preserved when you save as TIFF. TIFF is the best format for saving digital images that you will want to print. TIFF supports embedded file information, including exact color space, output profile information and EXIF data. There is a lossless compression for TIFF called LZW. LZW is much like 'zipping' the image file because there is no quality loss. An LZW TIFF decompresses (opens) with all of the original pixel information unaltered.
4. BMP (Windows Bitmap)
It is a Windows-only, uncompressed file format that supports 24-bit color. BMP does not support embedded information like EXIF, calibrated color space and output profiles. Avoid using BMP for photographs because it produces approximately the same file sizes as TIFF without any of the advantages of TIFF.
5. Camera Raw
It is a losslessly compressed file format that is proprietary to each digital camera manufacturer and model. A camera RAW file contains the 'raw' data from the camera's imaging sensor. Some image editing programs have their own version of RAW too; however, camera RAW is the most common type of RAW file. The advantage of camera RAW is that it contains the full range of color information from the sensor. This means the RAW file contains 12 to 14 bits of color information for each pixel. If you shoot JPEG, you only get 8 bits of color for each pixel. These extra color bits make shooting camera RAW much like shooting negative film: you have a little more latitude in setting your exposure and a slightly wider dynamic range.
1.1.5 Image Coordinate Systems
Pixel Coordinates
Generally, the most convenient method for expressing locations in an image is to
use pixel coordinates. In this coordinate system, the image is treated as a grid of discrete
elements, ordered from top to bottom and left to right, as illustrated by the following
figure.

Fig 1.5: Pixel Coordinate System


For pixel coordinates, the first component r (the row) increases downward, while
the second component c (the column) increases to the right. Pixel coordinates are integer
values and range between 1 and the length of the row or column.
For example, the data for the pixel in the fifth row, second column is stored in
the matrix element (5, 2). You use normal MATLAB matrix subscripting to access
values of individual pixels.
For example, the MATLAB code I (2, 15) returns the value of the pixel at row 2,
column 15 of the image I.
Spatial Coordinates
In the pixel coordinate system, a pixel is treated as a discrete unit, uniquely
identified by a single coordinate pair, such as (5, 2). From this perspective, a location
such as (5.3, 2.2) is not meaningful.
At times, however, it is useful to think of a pixel as a square patch. From this
perspective, a location such as (5.3, 2.2) is meaningful, and is distinct from (5, 2). In this
spatial coordinate system, locations in an image are positions on a plane, and they are
described in terms of x and y (not r and c as in the pixel coordinate system).
The following figure illustrates the spatial coordinate system used for images.

Fig 1.6: Spatial Coordinate System


This spatial coordinate system corresponds closely to the pixel coordinate system
in many ways. For example, the spatial coordinates of the center point of any pixel are
identical to the pixel coordinates for that pixel.
Differences between the Pixel Coordinate System and the Spatial Coordinate System
There are some important differences, however. In pixel coordinates, the upper left corner of an image is (1, 1), while in spatial coordinates, this location by default is (0.5, 0.5). This difference is due to the pixel coordinate system being discrete, while the spatial coordinate system is continuous. Also, the upper left corner is always (1, 1) in pixel coordinates, but you can specify a non-default origin for the spatial coordinate system. See Using a Non-default Spatial Coordinate System for more information.
1.1.6 Storage Classes Supported by MATLAB and IPT
By default, MATLAB stores numeric data in double-precision arrays. For image processing, however, this data representation is not always ideal. The number of pixels in an image can be very large. To reduce memory requirements, MATLAB supports storing image data in arrays of 8-bit or 16-bit unsigned integers, classes uint8 and uint16. A uint8 array requires one-eighth as much memory as a double array, and these classes still support many standard MATLAB array manipulations.
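A minimal sketch of the memory saving (im2uint8 is the Image Processing Toolbox conversion function; the byte counts reported by whos are the ones Table 1.1 predicts):

    D = rand(256);      % 256-by-256 double array, 8 bytes per element
    U = im2uint8(D);    % same image as uint8, 1 byte per element, scaled to [0,255]
    whos D U            % compare the Bytes column of the two variables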
The data classes listed in the table below are supported by most IPT functions.

TABLE 1.1: Data Classes supported by IPT functions

Data Class   Description
double       Double-precision floating-point numbers in the approximate
             range -10^308 to 10^308 (8 bytes per element).
uint8        Unsigned 8-bit integers in the range [0, 255] (1 byte per
             element).
uint16       Unsigned 16-bit integers in the range [0, 65535] (2 bytes per
             element).
uint32       Unsigned 32-bit integers in the range [0, 4294967295] (4 bytes
             per element).
int8         Signed 8-bit integers in the range [-128, 127] (1 byte per
             element).
int16        Signed 16-bit integers in the range [-32768, 32767] (2 bytes
             per element).
int32        Signed 32-bit integers in the range [-2147483648, 2147483647]
             (4 bytes per element).
single       Single-precision floating-point numbers in the approximate
             range -10^38 to 10^38 (4 bytes per element).
char         Characters (2 bytes per element).
logical      Values are 0 or 1 (1 byte per element).

Certain MATLAB functions, including the find, all, any, conv2, convn, fft2, fftn,
and sum functions, accept uint8 or uint16 data but return data in double-precision format.

Summary of Image Types and Data Classes


TABLE 1.2: Image Types and Numeric Classes

Image Type   Data Class       Interpretation
Binary       logical          Array of zeros (0) and ones (1).
Indexed      double           Array of integers in the range [1, p]. The
                              associated color map is a p-by-3 array of
                              floating-point values in the range [0, 1].
Indexed      uint8 or uint16  Array of integers in the range [0, p-1]. The
                              associated color map is a p-by-3 array of
                              floating-point values in the range [0, 1].
Intensity    double           Array of floating-point values. The typical
                              range of values is [0, 1]. The associated
                              color map, typically grayscale, is a p-by-3
                              array of floating-point values in the range
                              [0, 1].
Intensity    uint8 or uint16  Array of integers. The typical range of
                              values is [0, 255] or [0, 65535]. The
                              associated color map, typically grayscale,
                              is a p-by-3 array of floating-point values
                              in the range [0, 1].
RGB          double           m-by-n-by-3 array of floating-point values
(truecolor)                   in the range [0, 1].
RGB          uint8 or uint16  m-by-n-by-3 array of integers in the range
(truecolor)                   [0, 255] or [0, 65535].

Image Type Conversion Functions


For certain operations, it is helpful to convert an image to a different image type.
TABLE 1.3: Image Type Conversion Functions

Function   Description
dither     Create a binary image from a grayscale intensity image by
           dithering; create an indexed image from an RGB image by
           dithering.
gray2ind   Create an indexed image from a grayscale intensity image.
grayslice  Create an indexed image from a grayscale intensity image by
           thresholding.
im2bw      Create a binary image from an intensity image, indexed image,
           or RGB image, based on a luminance threshold.
ind2gray   Create a grayscale intensity image from an indexed image.
ind2rgb    Create an RGB image from an indexed image.
mat2gray   Create a grayscale intensity image from data in a matrix, by
           scaling the data.
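A short sketch exercising some of these functions (rgb2gray is used in addition to the table's entries; peppers.png is a sample image shipped with MATLAB):

    RGB = imread('peppers.png');
    I   = rgb2gray(RGB);           % truecolor to grayscale intensity
    BW  = im2bw(I, 0.5);           % intensity to binary via a luminance threshold
    [X, map] = gray2ind(I, 16);    % intensity to an indexed image with 16 levels
    J   = mat2gray(double(I));     % rescale matrix data into the range [0, 1]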

1.2 Digital Image Processing


Image processing is a method of performing operations on an image in order to obtain an enhanced image or to extract useful information from it. It is a type of signal processing in which the input is an image and the output may be an image or characteristics/features associated with that image. Nowadays, image processing is among the most rapidly growing technologies. It also forms a core research area within the engineering and computer science disciplines.
Image processing basically includes the following three steps:
1. Importing the image via image acquisition tools;
2. Analyzing and manipulating the image;
3. Producing output, in which the result can be an altered image or a report based on image analysis.
There are two types of methods used for image processing, namely analogue and digital image processing. Analogue image processing can be used for hard copies like printouts and photographs. Image analysts use various fundamentals of interpretation while using these visual techniques. Digital image processing techniques help in the manipulation of digital images by using computers. The three general phases that all types of data have to undergo while using the digital technique are pre-processing, enhancement and display, and information extraction.
This chapter covers a few fundamental definitions such as image, digital image, and digital image processing. Different sources of digital images are discussed, with examples for each source, along with the continuum from image processing to computer vision, image acquisition, and different types of image sensors.
Digital image processing is the use of computer algorithms to perform image
processing on digital images. As a subfield of digital signal processing, digital image
processing has many advantages over analog image processing; it allows a much wider
range of algorithms to be applied to the input data, and can avoid problems such as the
build-up of noise and signal distortion during processing.
1.2.1 Image Digitization
An image captured by a sensor is expressed as a continuous function f(x, y) of two co-ordinates in the plane. Image digitization means that the function f(x, y) is sampled into a matrix with M rows and N columns. Image quantization then assigns to each continuous sample an integer value, splitting the continuous range of the image function f(x, y) into K intervals. The finer the sampling (i.e., the larger M and N) and the quantization (the larger K), the better the approximation of the continuous image function f(x, y).
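As a rough sketch of quantization (the sampling itself is already done by the sensor array; pout.tif is a sample image), the intensity range can be split into K intervals like this:

    f = im2double(imread('pout.tif'));  % sampled image as an M-by-N double matrix
    K = 8;                              % number of quantization intervals
    q = floor(f * K) / K;               % map each sample to the start of its interval
    q = min(q, (K - 1) / K);            % keep intensity 1.0 inside the top interval
    imshowpair(f, q, 'montage')         % smaller K gives a visibly coarser image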
1.2.2 Image Pre-Processing
Pre-processing is a common name for operations on images at the lowest level of abstraction; both input and output are intensity images. These iconic images are of the same kind as the original data captured by the sensor, with an intensity image usually represented by a matrix of image function values (brightness). The aim of pre-processing is an improvement of the image data that suppresses unwanted distortions or enhances image features important for further processing. The four categories of image pre-processing methods, according to the size of the pixel neighbourhood used to calculate a new pixel brightness, are:
1. Pixel brightness transformations.
2. Geometric transformations.
3. Pre-processing methods that use a local neighbourhood of the processed pixel.
4. Image restoration that requires knowledge about the entire image.
1.2.3 Image Segmentation
Image segmentation is one of the most important steps leading to the analysis of processed image data. Its main goal is to divide an image into parts that have a strong correlation with the objects or areas of the real world contained in the image. There are two kinds of segmentation.
1.2.3.1. Complete Segmentation
This results in a set of disjoint regions corresponding uniquely with objects in the input image. Cooperation with higher processing levels, which use specific knowledge of the problem domain, is necessary.

1.2.3.2. Partial Segmentation
The image is divided into separate regions that are homogeneous with respect to a chosen property such as brightness, colour, reflectivity, or texture. In a complex scene, a set of possibly overlapping homogeneous regions may result. The partially segmented image must then be subjected to further processing, and the final image segmentation may be found with the help of higher-level information.
Segmentation methods can be divided into three groups according to the dominant features they employ:
1. Segmentations based on global knowledge about an image or its part, where the knowledge is usually represented by a histogram of image features;
2. Edge-based segmentations; and
3. Region-based segmentations.
1.2.4 Image Enhancement
The aim of image enhancement is to improve the interpretability or perception of information in images for human viewers, or to provide 'better' input for other automated image processing techniques. Image enhancement techniques can be divided into two broad categories:
1. Spatial domain methods, which operate directly on pixels, and
2. Frequency domain methods, which operate on the Fourier transform of an image.
Unfortunately, there is no general theory for determining what 'good' image enhancement is when it comes to human perception: if it looks good, it is good. However, when image enhancement techniques are used as pre-processing tools for other image processing techniques, quantitative measures can determine which techniques are most appropriate. An example of each category is sketched below.
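A minimal sketch of one method from each category (histeq is the toolbox's histogram equalization; the frequency-domain step here simply removes the DC term, as an illustration of the mechanism rather than a recommended enhancement):

    I = imread('pout.tif');
    J = histeq(I);              % spatial domain: operate directly on pixel values
    F = fft2(im2double(I));     % frequency domain: transform ...
    F(1,1) = 0;                 % ... modify coefficients (here, zero the DC term)
    K = real(ifft2(F));         % ... and invert the transform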
1.3 Applications of Digital Image Processing
1. Image sharpening and restoration.
2. Medical field.
3. Remote sensing.
4. Transmission and encoding.
5. Machine/robot vision.
6. Color processing.
7. Pattern recognition.
8. Video processing.
1.4 Motivation
Image fusion is a technique of fusing multiple images to obtain more information and a more accurate image than the input images provide individually. Different types of image fusion process are used to increase the visibility of an image, and the type chosen varies with the requirement. The proposed methodology uses the benefits of NSCT and SVD to fuse two images. The non-subsampled contourlet transform (NSCT) family of fusion methods has advantages such as multi-scale, multi-directional expansion and better frequency selectivity. Singular value decomposition (SVD) based image processing techniques have been applied to compression, watermarking, and quality measurement.
1.5 Need of Image Fusion
Fusion techniques can be applied in various ways. Compared with other methods, the NSCT (non-subsampled contourlet transform) preserves quality at the edges of the image by removing noise, and analyzes the features of the fused images better. NSCT provides better frequency selectivity and regularity when compared to the CT (contourlet transform). Using only NSCT or only SVD makes the algorithm complex, so we use a combination of NSCT and SVD.
The SVD is the optimal matrix decomposition in a least-squares sense, in that it packs the maximum signal energy into as few coefficients as possible. The major applications of SVD include image compression, watermarking, and quality measurement. Among all the techniques, NSCT and SVD are the most important transformation techniques for the image fusion process.
1.6 Objectives and Goals
Image fusion is a technology in which images are the main research content; it refers to techniques that integrate multiple images of the same scene, or multiple images from one sensor. The main goal of this work is to investigate algorithms that can be used to implement image fusion in various applications, such as runway extraction, and to evaluate their performance with different image quality measures, chosen for their impact in discriminating between image fusion algorithms.
The aim is to obtain an informative image using NSCT and SVD. The major advantage is that NSCT is a multi-scale and multi-direction computation framework for discrete images, which can be divided into two stages, the non-subsampled pyramid (NSP) and the non-subsampled directional filter bank (NSDFB), to improve the quality of the image, while SVD reduces the noise of the image. The major goal is to implement the fusion process by using the NSCT and SVD techniques.
1.7 Software Requirement
MATLAB, Version: R2017a, 64-bit
1.8 Organization of Thesis
Chapter 1 gives an introduction to digital image processing, the different types of images, and their applications.
Chapter 2 presents the literature survey and different methods in image fusion such as DCT, SVD, NSCT, and SWT, and also covers the existing methods and their drawbacks.
Chapter 3 explains the definition of image fusion, its techniques, its steps, and its applications in different domains.
Chapter 4 explains the proposed algorithm and the major aim of the project.
Chapter 5 introduces MATLAB and its working environment.
Chapter 6 discusses the performance characteristics of the NSCT- and SVD-based image fusion technique and its results.
Chapter 7 concludes the thesis and outlines the scope for further research.

CHAPTER-2
LITERATURE SURVEY

2.1 Introduction
At present, image resolution is one of the major issues when operating under low-visibility conditions, so various techniques have been proposed to improve visibility. Among these, NSCT and SVD are the most important transformation techniques for the image fusion process.
2.2 Research Papers
As part of the literature survey, the research papers considered for implementing the image fusion technique for vision systems are as follows.
Research Paper-1 (1)
Maes, Vandermeulen, and Suetens observe that analysis of multispectral or multitemporal images requires proper geometric alignment of the images to compare corresponding regions in each image volume. Retrospective three-dimensional alignment or registration of multimodal medical images based on features intrinsic to the image data itself is complicated by their different photometric properties, by the complexity of the anatomical objects in the scene, and by the large variety of clinical applications in which registration is involved. While the accuracy of registration approaches based on matching of anatomical landmarks or object surfaces suffers from segmentation errors, voxel-based approaches consider all voxels in the image without the need for segmentation. The recent introduction of the criterion of maximization of mutual information, a basic concept from information theory, has proven to be a breakthrough in the field. While solutions for intra-patient affine registration based on this concept are already commercially available, current research in the field focuses on inter-patient non-rigid matching.

Research Paper-2 (2)
A. Cardinali and G. P. Nason propose an algorithm to adaptively segment and fuse images by alternating wavelet packet and local cosine transforms, each involving best-basis selection and thresholding. Within segmented regions, fusion is informed by multiple hypothesis testing based on a log-linear factorial model. This fusion identifies homogeneous regions from which to select wavelet or local cosine packets, possibly from the original images. The successful performance of the fusion and segmentation algorithm is demonstrated on some multispectral thematic mapper imagery. With the increasing availability of large amounts of various kinds of data sources, each characterizing different kinds of phenomena, a need has arisen for statistical and mathematical methods that are capable of capturing complementary information and merging it in an efficient way. The purpose of merging might be human presentation, or further processing with techniques that might not be able to handle the original data. As an example, they fuse some multispectral images: images of the same scene but sensed using different frequencies. Their example depicts Chew Valley Lake in Somerset, UK, which is available at 12 frequencies; the task attempted is to fuse two of these bands into one that contains important features available separately in one of the images but not the other.
Research Paper-3 (3)
Ding Li proposes a new method of fusing panchromatic and multispectral images based on NSCT and PCA. PCA reduces the dimensionality while preserving the maximum possible information of the data sources; it transforms a vector of multivariate data with correlated variables into uncorrelated variables. Considering the advantages of NSCT, this paper proposes a new method of fusing remote sensing images combining NSCT with PCA. The final experimental results show that it has a better subjective visual effect and objective evaluation compared to other methods. First, PCA is performed on the multispectral image to obtain the principal components. Then histogram matching is applied between the original panchromatic image and PC1 to obtain an approximate mean value. Next, NSCT is applied to PC1 and the panchromatic image to obtain low-frequency and high-frequency sub-bands. The PC1 and panchromatic images are fused and NSCT reconstruction is applied with the new coefficients to obtain the new PC1. Finally, the inverse PCA transform is performed to obtain the fused image.
Research Paper-4 (4)
Gaurav Bhatnagar, Q. M. Jonathan Wu, and Balasubramanian Raman note that image fusion is a technique which attempts to combine complementary information from multiple images of the same scene, so that the fused image is more suitable for computer processing tasks and the human visual system. In this paper, a simple yet efficient real-time image fusion algorithm is proposed considering human visual properties in the spatial domain. The algorithm is computationally simple and easily implemented in real-time applications. Experimental results highlight the expediency and suitability of the algorithm, and its efficiency is demonstrated by the comparison made between the proposed and existing algorithms.
Research Paper-5 (5)
Kurakula Sravya, Dr. P. Govardhan, and Naresh Goud M present a paper on image fusion of multi-focused images using NSCT. Image fusion is the process that combines information in multiple images of the same scene. These images may be captured from different sensors, acquired at different times, or have different spatial and spectral characteristics. The object of image fusion is to retain the most desirable characteristics of each image. With the availability of multi-sensor data in many fields, image fusion has been receiving increasing attention in research for a wide spectrum of applications. An image fusion algorithm based on the wavelet transform, which developed rapidly, performs multi-resolution image fusion and has good time-frequency characteristics.
To overcome the limitations of the wavelet transform, the curvelet transform was put forward, which consists of a special filtering process and the multi-scale ridgelet transform. This includes realization, sub-band division, smoothing blocks, normalization, and so on. NSCT is a multi-scale and multi-direction computation framework for discrete images which can be divided into two stages, the non-subsampled pyramid (NSP) and the non-subsampled directional filter bank (NSDFB). To improve the quality of the image at the edges by removing noise, the non-subsampled contourlet transform is used. The multi-scale property is obtained using a two-channel filter bank; one low-frequency image and one high-frequency image are produced at each level of NSP decomposition. The subsequent NSP decomposition stages are carried out to decompose the low-frequency component of the image. The multi-scale property of the NSP is obtained from an NSFB structure similar to that of the Laplacian pyramid, achieved by using non-subsampled filter banks.
This paper gives an image fusion algorithm based on transformation techniques: the discrete wavelet transform, the discrete curvelet transform, and the non-subsampled contourlet transform. It includes multi-resolution analysis to check the ability of the wavelet transform, and also has better direction identification ability for the edge features of the images being described in the non-subsampled curvelet transform. This method can better describe the edge direction of images, and NSCT analyzes the features of the fused images better.
2.3 Existing Methods
There are many techniques for the image fusion process, depending upon the requirement, and they have some drawbacks. These methods are as follows:
(1) Intensity-hue-saturation (IHS) transform based fusion
(2) Principal component analysis (PCA) based fusion
(3) Multi-scale transform based fusion:
a. High-pass filtering method
b. Pyramid method:
(i) Gaussian pyramid
(ii) Laplacian Pyramid
(iii) Gradient pyramid
(iv) Morphological pyramid
(v) Ratio of low pass pyramid
c. Wavelet transforms:
(i) Discrete wavelet transforms (DWT)
(ii) Stationary wavelet transforms
(iii) Multi-wavelet transforms
d. Curvelet transforms
2.3.1 IHS Transform
The IHS transform is the oldest method for the image fusion process. Intensity, hue, and saturation are three properties of colour that together determine the visual perception of an image. Hue and saturation contain more spectral information than the intensity of the image. This method gives a fused output but is not a recent technique.
2.3.2 Principal Component Analysis (PCA)
The advantage of PCA compared with IHS is that it can use an arbitrary number of bands. It is one of the important methods for performing the fusion process. Uncorrelated principal components are formed from the low-resolution multispectral images. The first component (PC1) carries the variance information and gives the most effective information of the panchromatic image. An inverse PCA is then used to obtain the fused image.
PCA is a mathematical tool that transforms correlated variables into uncorrelated variables called principal components. It is mainly used in image classification and image compression. The first principal component accounts for the maximum variance in an image, and the second principal component lies in the subspace perpendicular to the first. The third principal component lies in the subspace perpendicular to the first two, and so on. This is how the principal components arise in the PCA method.

The image fusion process using principal component analysis (PCA) is shown in Fig 2.1:

Image I1(x, y), Image I2(x, y) -> Principal Component Analysis -> Fused Image

Fig 2.1: Image Fusion Process in PCA
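A minimal sketch of PCA-weighted fusion, assuming two registered, same-size grayscale source images (the file names here are hypothetical): the eigenvector belonging to the larger eigenvalue of the 2-by-2 covariance of the sources supplies the fusion weights.

    I1 = im2double(imread('image1.png'));  % hypothetical registered input images
    I2 = im2double(imread('image2.png'));
    C  = cov([I1(:) I2(:)]);               % 2-by-2 covariance of the two sources
    [V, D] = eig(C);
    [~, k] = max(diag(D));                 % principal eigenvector (max variance)
    w = V(:,k) / sum(V(:,k));              % normalize its entries to weights
    F = w(1)*I1 + w(2)*I2;                 % weighted combination is the fused image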


2.3.3 Pyramid Techniques
This is an old technique for binocular fusion in vision systems. By forming a pyramid structure, the original image is represented at different data levels. A composite image is formed by selecting a pattern-based fusion approach. The pyramid decomposition is performed on each image, and the inverse pyramid transform is used to obtain the fused image. The resultant image is clearer compared with the techniques above, such as PCA and IHS, but its contrast is high, so its perceived visibility to a normal human observer will be lower.
• Gaussian Pyramid
The Gaussian pyramid is generated by starting with an initial image and then low-pass filtering it to obtain a reduced image. The image is reduced in the sense that both spatial density and resolution are decreased. The low-pass filtering is done by a procedure equivalent to convolution with a set of local, symmetric weighting functions (for example, a Gaussian distribution). In a Gaussian pyramid, subsequent images are weighted down using a Gaussian blur and scaled down. Each pixel contains a local average that corresponds to a pixel neighbourhood on a lower level of the pyramid; this technique is used especially in texture synthesis.
• Laplacian Pyramid
The Laplacian pyramid is computed as the difference between the original image and the low-pass filtered image. The Laplacian pyramid is a set of band-pass filtered images; it can be used to represent an image as a series of band-pass filtered copies, each sampled at successively sparser density. It is frequently used in image processing and pattern recognition.
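A minimal sketch of one pyramid level, using the toolbox's impyramid function (the imresize step just makes the expanded image exactly match the original size):

    I  = im2double(imread('pout.tif'));
    G1 = impyramid(I, 'reduce');     % Gaussian level: low-pass filter and downsample
    E1 = impyramid(G1, 'expand');    % upsample back towards the original size
    E1 = imresize(E1, size(I));      % make the sizes match exactly
    L0 = I - E1;                     % Laplacian level: original minus its low-pass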
• Ratio of Low-Pass Pyramid
In the ratio of low-pass (ROLP) pyramid, the ratio of two successive layers is taken. The ROLP pyramid is a complete representation of the original image. A ROLP pyramid is constructed for each of the source images, and the composite image is constructed by selecting, from the corresponding nodes in the component pyramids, the one with maximum absolute contrast.
• Morphological Pyramid
Morphological pyramids systematically split the input signal into approximation and detail signals by repeatedly applying morphological filters followed by down-sampling. The fundamental morphological operators are erosion, dilation, opening, and closing. Consistent analysis of these techniques helps in deciding the suitability of a particular technique for the fusion of a large number of images.
2.3.4 Wavelet Transforms
• Discrete Cosine Transform (DCT)
The DCT underlies compressed image and video formats such as JPEG and MPEG. In this transform technique, the spatial-domain image is converted into a frequency-domain image: the two-dimensional DCT is applied to the grayscale image, converting it from the spatial domain into the frequency domain. Fused DCT coefficients are obtained by the fusion rule, and the fused image is obtained using the inverse DCT. Spatial-domain fusion methods, by comparison, are complex, time-consuming, and hard to perform. In real-time applications where the source images are coded in JPEG format, fusion approaches applied in the DCT domain are very efficient. The DCT operation is performed on each 8x8 block and generates 64 coefficients, whose magnitudes are then reduced for compression.
Those coefficients are rearranged in a nonlinear manner for their further encoding. If the spatial domain were used instead, the images would have to be decoded, fused, and then coded again.
• Stationary Wavelet Transform (SWT)
The stationary wavelet transform (SWT) is an extension of the standard discrete wavelet transform (DWT) that uses high- and low-pass filters. The SWT applies the high- and low-pass filters to the data at every level, producing two sequences at the next stage, each with the same length as the original sequence. In the SWT, rather than decimating, the filters at each level are modified by padding them with zeros. The SWT is therefore computationally more complex. The DWT is a translation-variant transform; one way to restore translation invariance is to average several slightly different DWTs, called the undecimated DWT, which characterizes the stationary wavelet transform. The SWT does this by suppressing the down-sampling step of the DWT and instead up-sampling the filters by padding with zeros between the filter coefficients. After decomposition, four images are produced (one approximation and three detail coefficient images); these are at half the resolution of the original image in the DWT, whereas in the SWT the approximation and detail coefficients have the same size as the input images. The SWT is thus like the DWT except that the down-sampling is suppressed, which means the SWT is shift-invariant: it executes the transform at each point of the image, saves the detail coefficients, and uses the low-frequency information at each level.
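A minimal sketch of the size-preserving property (swt2 and iswt2 are Wavelet Toolbox functions; each image dimension must be divisible by 2^N, hence the crop):

    I = im2double(imread('pout.tif'));
    I = I(1:288, 1:240);              % crop so both dimensions are divisible by 2
    [A, H, V, D] = swt2(I, 1, 'db1'); % one undecimated level: all bands 288-by-240
    R = iswt2(A, H, V, D, 'db1');     % inverse SWT reconstructs the image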

• Discrete Wavelet Transform (DWT)
The discrete wavelet transform (DWT) is a linear transformation that operates on a data vector whose length is an integer power of two, transforming it into a numerically different vector of the same length. It separates the data into distinct frequency components, and studies each component with a resolution matched to its scale. The DWT of an image delivers a non-redundant image representation, which gives better spatial and spectral localization compared to existing multi-scale representations. It is computed with a cascade of filters followed by sub-sampling by a factor of 2, and the principal feature of the DWT is multi-scale representation. By utilizing wavelets, given functions can be analyzed at different levels of resolution. DWT decomposition utilizes a cascade of low-pass and high-pass filters and a sub-sampling operation. The outputs from the 2D-DWT are four images, each half the size of the input image. So from the first input image the HHa, HLa, LHa, and LLa images are obtained, and from the second input image the HHb, HLb, LHb, and LLb images are obtained. Here the LL image contains the approximation coefficients, the LH image contains the horizontal detail coefficients, the HL image contains the vertical detail coefficients, and HH contains the diagonal detail coefficients. One of the significant disadvantages of the wavelet transform is its lack of translation invariance.
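A minimal sketch of one-level DWT fusion under this naming scheme (dwt2 and idwt2 are Wavelet Toolbox functions; the input file names are hypothetical; averaging the approximations and a maximum-absolute rule for the details is one common choice, not necessarily the rule used later in this thesis):

    Ia = im2double(imread('image1.png'));    % hypothetical registered inputs
    Ib = im2double(imread('image2.png'));
    [LLa, LHa, HLa, HHa] = dwt2(Ia, 'db1');  % one-level decomposition of each
    [LLb, LHb, HLb, HHb] = dwt2(Ib, 'db1');
    pick = @(a,b) a.*(abs(a) >= abs(b)) + b.*(abs(b) > abs(a)); % max-abs rule
    F = idwt2((LLa + LLb)/2, pick(LHa,LHb), ...
              pick(HLa,HLb), pick(HHa,HHb), 'db1');  % inverse DWT = fused image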
• Curvelet Transform
In addition to shift-invariance, it has been recognized that an efficient image representation has to account for the geometrical structure pervasive in natural scenes. In this direction, the contourlet transform is a multi-directional and multi-scale transform constructed by combining the Laplacian pyramid with the directional filter bank (DFB); it represents edges better than wavelets. The pyramidal filter bank structure of the contourlet transform has very little redundancy, which is important for compression applications. However, designing good filters for the contourlet transform is a difficult task. In addition, due to the down-samplers and up-samplers present in both the Laplacian pyramid and the DFB, the contourlet transform is not shift-invariant.

2.4 Drawbacks of Existing Methods
The existing methods have various drawbacks, owing to the techniques applied to the image, as follows:
• They fail under low-visibility conditions such as fog, pollution, and darkness.
• The resolution of the fused image is low.
• Performance is poor.
• Time consumption is high.
• Delay is also higher.
• The complexity of performing the operation is high.

CHAPTER-3
IMAGE FUSION

Introduction
Image fusion is the process of combining two or more images into a single image. The resulting image is more informative than the source images. These images may be captured from different sensors, acquired at different times, or have different spatial and spectral characteristics. With the availability of multi-sensor data in many fields, image fusion has been receiving increasing attention in research for a wide spectrum of applications. Image fusion has been widely used in military, remote sensing, robot vision, medical image processing, and other areas.
3.1 Definition of Image Fusion
Image fusion is a type of data fusion, which can be defined as the process of combining two or more source images of the same scene into a composite image with extended information content, using a certain algorithm. The fused image may provide increased interpretation capabilities and more reliable results, since it combines data with different characteristics. Moreover, image fusion can be performed at three different processing levels, according to the stage at which the fusion takes place: pixel, feature, and decision level.

(a) Left blurred (b) Right blurred (c) Fused image


Fig 3.1: Image fusion
3.2 Image Fusion Techniques
The image fusion techniques can be divided into two categories, depending upon the requirement:
• Spatial domain fusion methods
• Transform domain fusion methods


3.2.1 Spatial Domain Fusion Method
This technique operates directly on the pixel values of an image; the pixel values are manipulated to achieve the desired output. The simplest spatial-based method is to take the average of the input images pixel by pixel, as sketched below. However, along with its simplicity, this method leads to several side effects, such as reduced contrast. To improve the quality of the fused image, some researchers have proposed fusing input images by dividing them into uniform-sized blocks and letting those blocks take the place of single pixels. In spatial domain fusion, results are obtained directly by altering the pixel values.
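A minimal sketch of the pixel-averaging rule (hypothetical file names; the inputs must be registered and the same size):

    I1 = im2double(imread('image1.png'));  % hypothetical registered inputs
    I2 = im2double(imread('image2.png'));
    F  = (I1 + I2) / 2;                    % simple average; tends to lower contrast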
3.2.2 Transform Domain Fusion Method
Image fusion is applied in many fields, such as remote sensing, computer vision, and monitoring, and fusion methods are especially effective for vision systems. The transform domain fusion method is also called the high-pass-filter-based technique: it passes the high range of pixel values and suppresses the low range of pixel values in the image. Compared with spatial domain fusion methods, transform domain methods give a more effective fusion output. In spatial domain fusion methods, fusion operates directly on the pixel gray levels or color space of the source images, so spatial domain fusion methods are also known as single-scale fusion methods. For transform domain based methods, each source image is first decomposed into a sequence of images through a particular mathematical transformation. Then, the fused coefficients are obtained through some fusion rules for combination. Finally, the fused image is obtained by means of the mathematical inverse transform. Thus, the transform domain fusion methods are also known as multi-scale fusion methods.
3.3 Steps in Image Fusion
The steps for performing an image fusion operation and obtaining effective information are as follows (see Fig 3.2):
• First, take the required input images, from a single sensor or from multiple sensors viewing the same scene.
• Split/decompose the images into two or more (up to n) components, depending upon the requirement.
• Apply the fusion process using techniques such as NSCT, PCA, or DCT.
• Finally, the fused image is obtained; it contains all the information and is more effective compared with the input images.

The basic steps of the image fusion process are as follows:

Input Image -> Image Decomposition/Split -> Image Fusion -> Fused Output Image

Fig 3.2: Image fusion steps


3.4 Applications of Image Fusion in Different Domains
• Remote Sensing
The field of remote sensing is a continuously growing market, with applications like vegetation mapping and observation of the environment. The increase in applications is due to the availability of high-quality images at a reasonable price and improved computation power.
However, as a result of the demand for higher classification accuracy and the need for enhanced positioning precision, there is always a need to improve the spectral and spatial resolution of remotely sensed imagery. These requirements can be fulfilled either by building new satellites with superior resolving power, or by the utilization of image processing techniques. The main advantage of the second alternative is its significantly lower expense.

Methods for fusing multispectral low-resolution remotely sensed images with a more highly resolved panchromatic image have been described. The goal is to obtain a high-resolution multispectral image which combines the spectral characteristics of the low-resolution data with the spatial resolution of the panchromatic image.
• Computer Vision
Computer vision is a field that includes methods for acquiring, processing, analyzing, and understanding images and, in general, high-dimensional data from the real world, in order to produce numerical or symbolic information, e.g., in the form of decisions. A theme in the development of this field has been to duplicate the abilities of human vision by electronically perceiving and understanding an image. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory. Computer vision has also been described as the enterprise of automating and integrating a wide range of processes and representations for vision perception.
Applications range from tasks such as industrial machine vision systems which, say, inspect bottles speeding by on a production line, to research into artificial intelligence and computers or robots that can comprehend the world around them. The computer vision and machine vision fields have significant overlap, and computer vision covers the core technology of automated image analysis, which is used in many fields.
• Robotics
Mobile robots are providing great assistance operating in hazardous environments such as nuclear cores, battlefields, natural disasters, and even at the nano-level of human cells. These robots are usually equipped with a wide variety of sensors in order to collect data and guide their navigation. Whether a single robot operates all the sensors or a swarm of cooperating robots operates their special sensors, the captured data can be too large to be transferred across limited resources (e.g., bandwidth, battery, processing, and response time) in hazardous environments.
• Medical Imaging
Diagnostic imaging lets doctors look inside your body for clues about a medical condition. A variety of machines and techniques can create pictures of the structures and activities inside your body. The type of imaging your doctor uses depends on your symptoms and the part of your body being examined. Each type of technology gives different information about the area of the body being studied or treated, related to possible disease, injury, or the effectiveness of medical treatment. Medical imaging techniques include:
• X-rays
• MRI scans
• CT scans, etc.
Many imaging tests are painless and easy. Some require you to stay still for a long time inside a machine, which can be uncomfortable. Certain tests involve exposure to a small amount of radiation.
3.5 Image Fusion Categories
An image fusion operation can be performed at four levels: signal level, pixel level, feature level, and decision level.
• In signal level fusion, the signals from different sensors are combined to produce a new signal with a better signal-to-noise ratio than the original signals.
• In pixel level fusion, the operation is performed on every pixel, producing the fused image from a set of pixels in the source images to improve the performance of the image.
• In feature level fusion, objects are extracted from the various data sources, using information from pixel intensities, edges, and textures.
• In decision level fusion, the information is merged at a higher level of abstraction; it combines the results from multiple algorithms to finally obtain the fused image.

CHAPTER-4
PROPOSED METHODOLOGY

4.1 Introduction
Image fusion methods widely use the wavelet transform, the contourlet transform, and the non-subsampled contourlet transform. The wavelet transform can preserve spectral information efficiently but cannot capture the geometric structure of an image. Furthermore, isotropic wavelets lack shift invariance and multi-directionality, and fail to provide an optimal representation of highly anisotropic edges and contours in images. The CT and NSCT overcome these shortcomings; they have the advantages of localization, directionality, and anisotropy.
The NSCT is a fully shift-invariant, multi-scale, and multi-directional expansion whose core is a non-separable two-channel non-subsampled filter bank (NSFB). The less stringent design conditions of the NSFB lead to an NSCT with better frequency selectivity and regularity when compared to the CT. To achieve shift invariance, the NSCT is built by coupling a non-subsampled pyramid (NSP) with a non-subsampled directional filter bank (NSDFB). The multi-scale property of the NSCT is obtained from the NSP, which is a two-channel NSFB. The NSP is completely different from the Laplacian pyramid (LP) in the CT because it has no down-sampling or up-sampling; hence it is shift-invariant.
The salient features of NSCT over existing methodology are as follows:
 Two different fusion rules are proposed for combining the low- and high-frequency
coefficients.
 For fusing the low-frequency coefficients, a phase congruency based model is
used. The main benefit of phase congruency is that it selects and combines the
contrast- and brightness-invariant representation contained in the low-frequency
coefficients.
 For the high-frequency coefficients, a new definition of directive contrast in the
NSCT domain is proposed. Using directive contrast, the most prominent texture
and edge information is selected from the high-frequency coefficients and
combined into the fused ones.
 The definition of directive contrast is consolidated by incorporating a visual
constant into the SML-based definition of directive contrast, which provides a
richer representation of the contrast.

4.2 Flowchart of Proposed Algorithm

[Flowchart: each input image (Image 1, Image 2) is decomposed by NSCT into low- and
high-frequency coefficients; MSVD is applied to each coefficient set; the coefficients are
fused with an MSVD-based rule and reconstructed with the inverse MSVD (IMSVD);
the inverse NSCT (INSCT) then produces the fused image.]

Fig 4.1: Flowchart of proposed Algorithm


4.3 NSCT (Non sub-sampled contourlet transform)
NSCT, based on the theory of the CT, is a multi-scale and multi-directional
computation framework for discrete images. It can be divided into two stages: the
non-subsampled pyramid (NSP) and the non-subsampled directional filter bank
(NSDFB).
4.3.1 NSCT Pyramid Method
To retain the directional and multiscale properties of the transform, the Laplacian
Pyramid is replaced with a non-subsampled pyramid structure, which retains the
multiscale property, and a non-subsampled directional filter bank, which provides the
directionality. The first notable difference is that upsampling and downsampling are
removed from both stages; instead, the filters in both the pyramid and the directional
filter banks are upsampled. Although this resolves the shift-invariance issue, a new issue
arises with aliasing in the directional filter bank: when processing the coarser levels of
the pyramid there is potential for aliasing and loss of resolution. This issue is avoided by
upsampling the directional filter bank filters, as was done with the filters of the
pyramidal filter bank.

Fig 4.2: Pyramid decomposition

4.3.2 Filter Bank


The directional filter bank is constructed by combining critically-sampled two-channel
fan filter banks and resampling operations. The result is a tree-structured filter bank that
splits the 2-D frequency plane into directional wedges. A shift-invariant directional
expansion is obtained with a non-subsampled DFB (NSDFB). The NSDFB is
constructed by eliminating the down-samplers and up-samplers in the DFB: the
down-samplers and up-samplers in each two-channel filter bank of the DFB tree
structure are switched off and the filters are upsampled accordingly. This results in a
tree composed of two-channel NSFBs.
The non-subsampled contourlet transform (NSCT) is used in the proposed
framework. The NSCT offers multi-scale analysis, localization, multi-directionality, and
shift invariance, although it limits the signal analysis to the time-frequency domain. Two
different fusion rules are proposed for combining the low- and high-frequency
coefficients. For fusing the low-frequency coefficients, the phase congruency based
model is used. A new definition of directive contrast in the NSCT domain is proposed
and used to combine the high-frequency coefficients. Finally, the fused image is
constructed by the inverse NSCT applied to all the composite coefficients.
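As a minimal sketch, the decomposition and reconstruction steps can be exercised with
the nsctdec and nsctrec routines of the non-subsampled contourlet toolbox, which are
the same calls used in the Appendix; the filter names, level setting, and file name below
simply mirror the Appendix and are illustrative.

% Minimal NSCT round-trip sketch (requires the non-subsampled contourlet toolbox).
nlevels = 1;               % one pyramid level with 2^1 = 2 directional subbands
dfilter = 'dmaxflat7';     % directional filter, as in the Appendix
pfilter = 'maxflat';       % pyramidal filter, as in the Appendix

im = double(imread('11.png'));                    % illustrative grayscale image
coeffs = nsctdec(im, nlevels, dfilter, pfilter);  % coeffs{1,1}: lowpass; coeffs{1,2}{1,d}: directional
imrec = nsctrec(coeffs, dfilter, pfilter);        % inverse NSCT

% The NSCT allows perfect reconstruction, so the round-trip error should be tiny.
fprintf('max reconstruction error: %g\n', max(abs(im(:) - imrec(:))));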

[Schematic: Image A and Image B are each decomposed by NSCT into low- and high-
frequency coefficients; fusion rule 1 combines the low-frequency coefficients and fusion
rule 2 combines the high-frequency coefficients; the fused coefficients are passed
through the inverse NSCT (INSCT) to obtain the fused image.]

Fig 4.3: Schematic diagram of NSCT based fusion Algorithm

Aditya Engineering College (A) Page 36


Image Fusion by Non-Subsampled Contourlet Transform and Singular Value Decomposition

Fig 4.4: Non-subsampled contourlet decomposition schematic diagram


4.3.3 Applications of NSCT
The major applications of non-sub sampled contourlet transform are
1. Image De-noising
2. Image Enhancement
3. Image Restoration
1. Image De-noising
Image de-noising is an important image processing task, both as a process in
itself and as a component in other processing pipelines. Many ways exist to de-noise an
image or a set of data. The main property of a good image de-noising model is that it
removes noise well while preserving edges.
2. Image Enhancement
Image Enhancement is the process of adjusting digital images, so that the results
are more suitable for display or further image analysis. For example, you can remove
noise, sharpen, or brighten an image, making it easier to identify the key features.
3. Image Restoration
Image restoration is the operation of taking a corrupt/noisy image and estimating
the clean, original image. Corruption may come in many forms, such as motion blur,
noise, and camera mis-focus. Image restoration is performed by reversing the process
that blurred the image; this is done by imaging a point source and using the point-source
image.
4.4 SVD (Singular Value Decomposition)
Singular Value Decomposition (SVD) has recently emerged as a new paradigm
for processing different types of images. SVD is an attractive algebraic transform for
image processing applications.
The SVD is the optimal matrix decomposition in the least-squares sense: it packs
the maximum signal energy into as few coefficients as possible. SVD is a stable and
effective method to split a system into a set of linearly independent components, each
bearing its own energy contribution. Numerically, SVD is a technique used to
diagonalize matrices. SVD is attractive for image processing because of its many
advantages, such as maximum energy packing, which is commonly exploited in
compression, and the ability to split an image into two distinctive subspaces, a data
subspace and a noise subspace, which is commonly used in noise filtering and has also
been utilized in watermarking applications.
Each of these applications exploits key properties of the SVD. It is also widely
used for solving least-squares problems, computing the pseudo-inverse of a matrix, and
multivariate analysis. SVD is a robust and reliable orthogonal matrix decomposition
method which, for conceptual and stability reasons, is becoming more and more popular
in the signal processing area. SVD can adapt to variations in the local statistics of an
image. Nevertheless, many attractive SVD properties are still not fully utilized. Existing
SVD-based image processing techniques have focused on compression, watermarking,
and quality measures. Experiments are performed to validate some well-known but
under-utilized properties of SVD in image processing applications; this contributes
toward exploiting SVD properties that remain unexploited in image processing, and it
introduces new trends and challenges for using SVD in image processing applications.
Some of these new trends are examined experimentally and validated; others are
demonstrated and need more work to be fully validated. This opens many tracks for
future work in using SVD as an imperative tool in signal processing.
The singular value decomposition of a matrix A is the factorization of A into the
product of three matrices, A = UDV^T, where the columns of U and V are orthonormal
and the matrix D is diagonal with positive real entries. The SVD is useful in many tasks;
here we mention some examples. First, in many applications the data matrix A is close to
a matrix of low rank, and it is useful to find a low-rank matrix that is a good
approximation to the data matrix. We will show that from the singular value
decomposition of A we can obtain the matrix B of rank k which best approximates A; in
fact, we can do this for every k. Also, the singular value decomposition is defined for all
matrices (rectangular or square), unlike the more commonly used spectral decomposition
in linear algebra. The reader familiar with eigenvectors and eigenvalues (we do not
assume familiarity here) will also realize that conditions on the matrix are needed to
ensure orthogonality of eigenvectors. In contrast, the columns of V in the singular value
decomposition, called the right singular vectors of A, always form an orthogonal set with
no assumptions on A. The columns of U are called the left singular vectors and they also
form an orthogonal set. A simple consequence of the orthogonality is that for a square
and invertible matrix A, the inverse of A is VD^{-1}U^T, as the reader can verify.
To gain insight into the SVD, treat the rows of an n × d matrix A as n points in a
d-dimensional space and consider the problem of finding the best k-dimensional
subspace with respect to this set of points. Here best means minimizing the sum of the
squares of the perpendicular distances of the points to the subspace. We begin with a
special case where the subspace is 1-dimensional, a line through the origin. We will see
later that the best-fitting k-dimensional subspace can be found by k applications of the
best-fitting line algorithm. Finding the best-fitting line through the origin with respect to
a set of points {x_i | 1 ≤ i ≤ n} in the plane means minimizing the sum of the squared
distances of the points to the line, where distance is measured perpendicular to the line.
This problem is called the best least-squares fit. In the best least-squares fit, one
minimizes the distance to a subspace. An alternative problem is to find the function that
best fits some data: here one variable y is a function of the variables x_1, x_2, ..., x_d,
and one wishes to minimize the vertical distance, i.e., the distance in the y direction, to
the subspace of the x_i, rather than the perpendicular distance to the subspace being fit
to the data.
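A minimal MATLAB sketch of the decomposition and of the best rank-k approximation
it yields is given below; the matrix and the choice of k are illustrative.

% SVD of a matrix and its best rank-k approximation (Eckart-Young).
A = magic(6);                 % illustrative 6x6 matrix
[U, D, V] = svd(A);           % A = U*D*V' with orthonormal U, V and diagonal D

k = 2;                        % illustrative target rank
Bk = U(:,1:k) * D(1:k,1:k) * V(:,1:k)';   % best rank-k approximation to A

% In the spectral norm, the approximation error equals the (k+1)-th singular value.
fprintf('||A - Bk||_2 = %g, sigma_(k+1) = %g\n', norm(A - Bk), D(k+1,k+1));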
4.4.1. Singular Vectors
We now define the singular vectors of an n × d matrix A. Consider the rows of A
as n points in a d-dimensional space, and consider the best-fit line through the origin. Let
v be a unit vector along this line. The length of the projection of a_i, the i-th row of A,
onto v is |a_i · v|; from this we see that the sum of the squared lengths of the projections
is |Av|^2. The best-fit line is the one maximizing |Av|^2 and hence minimizing the sum
of the squared distances of the points to the line. With this in mind, define the first
singular vector v_1 of A, a column vector, via the best-fit line through the origin for the
n points in d-space that are the rows of A:

v_1 = arg max_{|v|=1} |Av|.

The value σ_1(A) = |Av_1| is called the first singular value of A. Note that σ_1^2 is the
sum of the squares of the projections of the points onto the line determined by v_1.
The greedy approach to finding the best-fit 2-dimensional subspace for a matrix A
takes v_1 as the first basis vector for the 2-dimensional subspace and finds the best
2-dimensional subspace containing v_1. The fact that we are using the sum of squared
distances helps again: for every 2-dimensional subspace containing v_1, the sum of
squared lengths of the projections onto the subspace equals the sum of squared
projections onto v_1 plus the sum of squared projections along a vector perpendicular to
v_1 in the subspace. Thus, instead of looking for the best 2-dimensional subspace
containing v_1, look for a unit vector v_2 perpendicular to v_1 that maximizes |Av|^2
among all such unit vectors. Using the same greedy strategy to find the best three- and
higher-dimensional subspaces defines v_3, v_4, ... in a similar manner. This is captured
in the following definitions. There is no a priori guarantee that the greedy algorithm
gives the best fit; but, in fact, the greedy algorithm does work and yields the best-fit
subspaces of every dimension, as we will show.
The second singular vector v_2 is defined by the best-fit line perpendicular to v_1:

v_2 = arg max_{v ⊥ v_1, |v|=1} |Av|.

The value σ_2(A) = |Av_2| is called the second singular value of A. The third singular
vector v_3 is defined similarly by v_3 = arg max_{v ⊥ v_1, v_2, |v|=1} |Av|, and so on.
The process stops when we have found v_1, v_2, ..., v_r as singular vectors and
max_{v ⊥ v_1, ..., v_r, |v|=1} |Av| = 0. If, instead of finding v_1 that maximizes |Av|
and then the best-fit 2-dimensional subspace containing v_1, we had directly found the
best-fit 2-dimensional subspace, we might have done better. This is not the case. We
now give a simple proof that the greedy algorithm indeed finds the best subspaces of
every dimension.
4.4.2. Power Method for Computing SVD
Computing the singular value decomposition is an important branch of numerical
analysis in which there have been many sophisticated developments over a long period of
time. Here we present an “in-principle” method to establish that the approximate SVD of
a matrix A can be computed in polynomial time. The reader is referred to numerical
analysis texts for more details. The method we present, called the Power Method, is
simple and is in fact the conceptual starting point for many algorithms.
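As an illustrative sketch of the idea, the power method can be applied to B = A^T A,
whose dominant eigenvector is the first right singular vector v_1 with eigenvalue
σ_1^2; the matrix, tolerance, and iteration cap below are arbitrary choices, not a
production implementation.

% Power-method sketch for the first singular vector and singular value of A.
A = magic(6);                      % illustrative matrix
B = A' * A;                        % dominant eigenvector of B is v1
v = randn(size(A, 2), 1);          % random starting vector
for it = 1:1000                    % iteration cap (arbitrary)
    w = B * v;
    vNew = w / norm(w);            % normalize to a unit vector
    if norm(vNew - v) < 1e-10      % convergence tolerance (arbitrary)
        v = vNew;
        break
    end
    v = vNew;
end
sigma1 = norm(A * v);              % estimate of the first singular value
fprintf('sigma1 (power method) = %g, svd check = %g\n', sigma1, max(svd(A)));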
4.4.3. Multi-Resolution Singular Value Decomposition
Multi-resolution singular value decomposition is very similar to the wavelet
transform, where the signal is filtered separately by low-pass and high-pass finite
impulse response (FIR) filters and the output of each filter is decimated by a factor of
two to achieve the first level of decomposition. The decimated low-pass output is again
filtered separately by low-pass and high-pass filters and decimated by a factor of two to
provide the second level of decomposition. Successive levels of decomposition are
achieved by repeating this procedure. The idea behind the MSVD is to replace the FIR
filters with the singular value decomposition.
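A sketch of one decomposition level follows. The helper name and its [Y, U] interface
mirror the MSVD/IMSVD calls used in the Appendix, but the body is an illustrative
reconstruction of the standard multi-resolution SVD (2x2 blocks projected onto an SVD
basis), not the exact project code.

% One-level MSVD sketch: rearrange 2x2 blocks into columns, compute an SVD
% basis, and read the four subbands off the projected rows.
function [Y, U] = msvdSketch(X)
[m, n] = size(X);                  % image dimensions, assumed even
A = im2col(X, [2 2], 'distinct');  % each 2x2 block becomes a 4x1 column
[U, ~, ~] = svd(A, 'econ');        % 4x4 orthonormal basis (eigen-matrix)
T = U' * A;                        % project every block onto the basis
Y.LL = reshape(T(1,:), m/2, n/2);  % approximation (largest-energy component)
Y.LH = reshape(T(2,:), m/2, n/2);  % detail subbands
Y.HL = reshape(T(3,:), m/2, n/2);
Y.HH = reshape(T(4,:), m/2, n/2);
end

The inverse step (IMSVD) simply reverses this: stack the subbands back into T, form
U*T, and reassemble the 2x2 blocks with col2im.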
4.4.4. Fusion by MSVD
One can observe that the modification in the present scheme is the use of MSVD
instead of wavelets or pyramids. The images to be fused are decomposed into L (l = 1,
2, ..., L) levels using MSVD. At each decomposition level (l = 1, 2, ..., L), the fusion rule
selects the larger absolute value of the two MSVD detail coefficients, since the detail
coefficients correspond to sharper brightness changes in the images, such as edges and
object boundaries; these coefficients fluctuate around zero. At the coarsest level (l = L),
the fusion rule takes the average of the MSVD approximation coefficients, since the
approximation coefficients at the coarser level are the smoothed and subsampled version
of the original image. Similarly, at each decomposition level (l = 1, 2, ..., L), the fusion
rule takes the average of the two MSVD eigen-matrices, as sketched below.
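A minimal sketch of this fusion rule, written against the illustrative msvdSketch helper
defined above (the Appendix applies the same rule, subband by subband, to the NSCT
coefficients):

% Fusion rule sketch: max-absolute selection for the detail subbands,
% averaging for the approximation subband and the eigen-matrices.
[X1, U1] = msvdSketch(double(imread('11.png')));   % illustrative input 1
[X2, U2] = msvdSketch(double(imread('12.png')));   % illustrative input 2

X.LL = 0.5 * (X1.LL + X2.LL);          % average the approximations
D = abs(X1.LH) >= abs(X2.LH);          % keep the larger-magnitude detail
X.LH = D .* X1.LH + (~D) .* X2.LH;
D = abs(X1.HL) >= abs(X2.HL);
X.HL = D .* X1.HL + (~D) .* X2.HL;
D = abs(X1.HH) >= abs(X2.HH);
X.HH = D .* X1.HH + (~D) .* X2.HH;
U = 0.5 * (U1 + U2);                   % average the eigen-matrices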
4.5. Applications of SVD
 Noise reduction
 Image compression
 Image forensic tracks
 Steganography
 Authentication
 Labeling
 Captioning
 Fingerprinting
 Copy control for DVD
 Hardware/software watermarking
 Executable watermarks
 Signaling (signal information for automatic counting) for the purpose of
broadcast monitoring
CHAPTER-5
INTRODUCTION TO MATLAB

MATLAB is a high-performance language for technical computing. It integrates
computation, visualization, and programming in an easy-to-use environment where
problems and solutions are expressed in familiar mathematical notation. Typical uses
include
 Math and computation
 Algorithm development
 Data acquisition
 Modeling, simulation, and prototyping
 Data analysis, exploration, and visualization
 Scientific and engineering graphics
 Application development, including graphical user interface building.
MATLAB is an interactive system whose basic data element is an array that does
not require dimensioning. This allows you to solve many technical computing problems,
especially those with matrix and vector formulations, in a fraction of the time it would
take to write a program in a scalar non interactive language such as C or FORTRAN.
The name MATLAB stands for matrix laboratory. MATLAB was originally
written to provide easy access to matrix software developed by the LINPACK and
EISPACK projects. Today, MATLAB engines incorporate the LAPACK and BLAS
libraries, embedding the state of the art in software for matrix computation.
MATLAB also provides an external interface library that allows you to write C and
FORTRAN programs that interact with MATLAB. It includes facilities for calling
routines from MATLAB (dynamic linking), calling MATLAB as a computational
engine, and reading and writing MAT-files.
5.1 Using Matlab Editor to Create M-Files
The MATLAB editor is both a text editor specialized for creating M-files and a
graphical MATLAB debugger. The editor can appear in a window by itself, or it can be a
sub-window in the desktop. M-files are denoted by the extension .m, as in pixelup.m.
The MATLAB editor window has numerous pull-down menus for tasks such as saving,
viewing, and debugging files. Because it performs some simple checks and also uses
colour to differentiate between various elements of code, this text editor is recommended
as the tool of choice for writing and editing M-functions.
To open the editor, typing edit filename at the prompt opens the M-file filename.m
in an editor window, ready for editing. As noted earlier, the file must be in the current
directory, or in a directory in the search path.
5.2 Getting Help
The principal way to get help online is to use the MATLAB Help Browser, opened
as a separate window either by clicking on the question mark symbol (?) on the desktop
toolbar, or by typing helpbrowser at the prompt in the command window. The Help
Browser is a web browser integrated into the MATLAB desktop that displays Hypertext
Markup Language (HTML) documents. The Help Browser consists of two panes: the
help navigator pane, used to find information, and the display pane, used to view the
information. Self-explanatory tabs in the navigator pane are used to perform a search.
For example, help on a specific function is obtained by selecting the Search tab,
selecting Function Name as the search type, and then typing the function name in the
Search For field. It is good practice to open the Help Browser at the beginning of a
MATLAB session to have help readily available during code development or other
MATLAB tasks.
Another way to obtain help for a specific function is by typing doc followed by the
function name at the command prompt. For example, typing doc format displays the
documentation for the function called format in the display pane of the Help Browser.
This command opens the browser if it is not already open.
M-functions have two types of information that can be displayed for the user. The
first is called the H1 line, which contains the function name and a one-line description.
The second is a block of explanation called the Help text block. Typing help at the
prompt followed by a function name displays both the H1 line and the Help text for that
function in the command window. Occasionally, this information can be more up to
date than the documentation of the M-function in question. Typing lookfor followed by
a keyword displays all the H1 lines that contain that keyword. This function is useful
when looking for a particular topic without knowing the names of applicable functions.
For example, typing lookfor edge at the prompt displays the H1 lines containing
that keyword. Because the H1 line contains the function name, it then becomes possible
to look at specific functions using the other help methods. Typing lookfor edge -all at
the prompt displays the H1 line of all functions that contain the word edge in either the
H1 line or the Help text block. Words that contain the characters edge are also detected;
for example, the H1 line of a function containing the word poly-edge in the H1 line or
Help text would also be displayed.
5.3 Saving and Retrieving A Work Session
There are several ways to save and load an entire work session or selected
workspace variables in MATLAB. The simplest is as follows.
To save the entire workspace, simply right-click on any blank space in the
Workspace Browser window and select Save Workspace As from the menu that appears.
This opens a directory window that allows naming the file and selecting any folder in the
system in which to save it. Then simply click Save. To save a selected variable from the
workspace, select the variable with a left click and then right-click on the highlighted
area. Then select Save Selection As from the menu that appears. This again opens a
window from which a folder can be selected to save the variable.
To select multiple variables, use shift-click or control-click in the familiar manner,
and then use the procedure just described for a single variable. All files are saved in the
double-precision, binary format with the extension .mat. These saved files commonly are
referred to as MAT-files.
For example, a session named mywork_2003_02_10 would appear as the MAT-file
mywork_2003_02_10.mat when saved. Similarly, a saved video called final video will
appear as final_video.mat when saved.
To load saved workspaces or variables, left-click on the folder icon on the toolbar
of the Workspace Browser window. This causes a window to open from which a folder
containing the desired MAT-file can be chosen; selecting Open causes the contents of
the file to be restored in the Workspace Browser window.


It is possible to achieve the same results described in the preceding paragraphs by
typing save and load at the prompt, with the appropriate file names and path information.
This approach is not as convenient, but it is used when formats other than those available
in the menu method are required.

Fig 5.1: MATLAB command window


5.4 Plotting Tools
Plotting tools are attached to figures and create an environment for creating
graphs. These tools enable you to do the following:
 Select from a wide variety of graph types
 Change the type of graph that represents a variable
 See and set the properties of graphics objects
 Annotate graphs with text, arrows, etc.
 Create and arrange subplots in the figure
 Drag and drop data into graphs
Display the plotting tools from the View menu or by clicking the plotting tools
icon in the figure toolbar, as shown in the following picture.

Fig 5.2: Plotting window


5.5 Editor/Debugger
Use the Editor/Debugger to create and debug M-files, which are programs you
write to run MATLAB functions. The Editor/Debugger provides a graphical user
interface for text editing, as well as for M-file debugging. To create or edit an M-file use
File > New or File > Open, or use the edit function.

Fig 5.3: Editor window

MATLAB is a high-level language and interactive environment that enables you
to perform computationally intensive tasks faster than with traditional programming
languages such as C, C++, and FORTRAN.

 Open the new script

 New script

 Write the code

 Save the code

 Run the code

 Observe the output

CHAPTER-6
RESULTS

Brain-Axial

(a) Input image-1 (b) Input image-2

After applying NSCT on image-1, we get the coefficients of each level

After applying NSCT on image-2, we get the coefficients of each level

The final output is

(c) Fused image

Brain-Hemisphere

(a) Input image-1 (b) Input image-2

(c) Fused image

6.1 PERFORMANCE CHARACTERISTICS:


6.1.1. PEAK SIGNAL TO NOISE RATIO:
Peak signal-to-noise ratio, often abbreviated PSNR, is an engineering term for the
ratio between the maximum possible power of a signal and the power of corrupting noise
that affects the fidelity of its representation. Because many signals have a very wide
dynamic range, PSNR is usually expressed in terms of the logarithmic decibel scale.
PSNR is most commonly used to measure the quality of reconstruction of lossy
compression codecs (e.g., for image compression). The signal in this case is the original
data, and the noise is the error introduced by compression. When comparing compression
codecs, PSNR is an approximation to human perception of reconstruction quality.
Although a higher PSNR generally indicates that the reconstruction is of higher quality,
in some cases it may not. One has to be extremely careful with the range of validity of
this metric; it is only conclusively valid when it is used to compare results from the same
codec (or codec type) and same content.
PSNR is most easily defined via the mean squared error (MSE). Given a noise-free
m×n monochrome image I and its noisy approximation K, the MSE is defined as

MSE = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left[ I(i,j) - K(i,j) \right]^2

The PSNR (in dB) is defined as

PSNR = 10 \log_{10} \left( \frac{MAX_I^2}{MSE} \right)

Here, MAX_I is the maximum possible pixel value of the image.
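A direct MATLAB computation of both quantities is sketched below; the file names are
illustrative, and the Appendix instead relies on the built-in psnr function.

% MSE and PSNR between a reference image I and a test image K (same size).
I = double(imread('11.png'));           % illustrative reference image
K = double(imread('12.png'));           % illustrative test image
mseVal = mean((I(:) - K(:)).^2);        % mean squared error
maxI = 255;                             % peak value for 8-bit images
psnrVal = 10 * log10(maxI^2 / mseVal);  % PSNR in dB
fprintf('MSE = %.4f, PSNR = %.4f dB\n', mseVal, psnrVal);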
6.1.2. Structural Similarity Index (SSIM)

The structural similarity (SSIM) index is a method for predicting the perceived
quality of digital television and cinematic pictures, as well as other kinds of digital
images and videos. The first version of the model was developed in the Laboratory for
Image and Video Engineering (LIVE) at The University of Texas at Austin and further
developed jointly with the Laboratory for Computational Vision (LCV) at New York
University.

SSIM is used for measuring the similarity between two images. The SSIM index
is a full reference metric; in other words, the measurement or prediction of image quality
is based on an initial uncompressed or distortion-free image as reference. SSIM is
designed to improve on traditional methods such as peak signal-to-noise ratio (PSNR)
and mean squared error (MSE).

The difference with respect to other techniques mentioned previously, such as
MSE or PSNR, is that those approaches estimate absolute errors; on the other hand, SSIM
is a perception-based model that considers image degradation as perceived change in
structural information, while also incorporating important perceptual phenomena,
including both luminance masking and contrast masking terms. Structural information is
the idea that the pixels have strong inter-dependencies especially when they are spatially
close. These dependencies carry important information about the structure of the objects
in the visual scene. Luminance masking is a phenomenon whereby image distortions (in
this context) tend to be less visible in bright regions, while contrast masking is a
phenomenon whereby distortions become less visible where there is significant activity
or "texture" in the image.

6.1.3. Entropy

The entropy of a system as defined by Shannon gives a measure of our uncertainty
about the image's actual structure. Shannon's function is based on the concept that the
information gained from an event is inversely related to its probability of occurrence.
Several authors have used Shannon's concept for image processing and pattern
recognition problems. Many used Shannon's concept to define the entropy of an image,
assuming that an image is entirely represented by its gray-level histogram alone. As a
result, segmentation algorithms using Shannon's function produced an unappealing
outcome: the same entropy and threshold values for different images with identical
histograms.

Shannon defined the entropy of an n-state system as

H = -\sum_{i=1}^{n} p_i \log p_i

where p_i is the probability of occurrence of the event i, with

\sum_{i=1}^{n} p_i = 1, \quad 0 \le p_i \le 1
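A histogram-based computation of this entropy for an 8-bit grayscale image is sketched
below; the Appendix uses MATLAB's built-in entropy function, which performs the
same computation with base-2 logarithms.

% Shannon entropy of an 8-bit grayscale image from its normalized histogram.
I = imread('11.png');          % illustrative input image
p = imhist(I) / numel(I);      % gray-level probabilities; sum(p) == 1
p = p(p > 0);                  % drop empty bins (0*log(0) is taken as 0)
H = -sum(p .* log2(p));        % entropy in bits per pixel
fprintf('Entropy = %.4f bits\n', H);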

6.2. Performance Comparison

Table 6.1: Performance Comparison

S.No   Image              PSNR                    Entropy    SSIM
1.     Brain-hemisphere   31.591840 / 31.434193   1.876440   0.332138 / 0.688603
2.     Brain-axial        30.461982 / 30.794717   1.634661   0.670912 / 0.521033

(The two PSNR and SSIM values in each row are measured between the fused image
and input image 1 and input image 2, respectively, following the Appendix.)

By observing the performance characteristics, the brain-hemisphere images are
fused more effectively than the brain-axial images, because their PSNR and entropy
values are higher than those of the brain-axial images.
CHAPTER-7
CONCLUSION AND FUTURE SCOPE

7.1 CONCLUSION
Image fusion is a technique that combines the relevant information from two or
more images into a single image that contains all the information of the input images.
The input images may be multi-sensor images of the same scene or multiple images of
different scenes from the same sensor. The fusion algorithm has been implemented with
many techniques, such as DCT, DWT, and IHS. Compared with all these techniques, the
combination of NSCT and SVD gives the most effective result. This technique splits the
images into two or more (up to n) sub-bands. From the comparison on the basis of
various performance metrics, it is concluded that the proposed work performs effectively
compared with the existing standalone NSCT and SVD algorithms.
7.2 FUTURE SCOPE
Image fusion means combining multiple images into a single image that retains the
maximum information content without introducing details that are absent from the
source images. Image fusion can be designed into multi-focus cameras to combine data
from several images of the same scene in order to obtain a single, fully focused image.
The proposed work has been designed and implemented in MATLAB. Image
fusion using NSCT and SVD offers future scope in that fusion need not be restricted to
gray images but can also be applied to RGB images. The results indicate that the NSCT
provides better performance than competing transforms such as the curvelet transform.
APPENDIX:
Matlab code:
clc;
clear all;
close all;

nlevels=1;
dfilter='dmaxflat7';
pfilter='maxflat';

im1=imread('11.png');
figure
imshow(im1);
title('input image 2');
% im1=rgb2gray(im1);
im=imread('12.png');
figure
imshow(im);
title('input image 1');
% im=rgb2gray(im);
im=double(im);
coeffs = nsctdec( double(im), nlevels, dfilter, pfilter );

figure;
imshow(mat2gray(coeffs{1,1}));
title('NSCT coeffs{1,1}');% subplot(1,3,1);

figure;

imshow(mat2gray(coeffs{1,2}{1,1}));
title('NSCT coeffs{1,2}{1,1}');

figure;
imshow(mat2gray(coeffs{1,2}{1,2}));
title('coeffs{1,2}{1,2}');

im1=double(im1);

coeffs1 = nsctdec( double(im1), nlevels, dfilter, pfilter );

% disp('Displaying the contourlet coefficients of image 1...') ;


% shownsct(coeffs ) ;

figure;
imshow(mat2gray(coeffs1{1,1}));
title('NSCT coeffs1{1,1}');
% subplot(1,3,1);
figure;
imshow(mat2gray(coeffs1{1,2}{1,1}));
title('NSCT coeffs1{1,2}{1,1}');

% subplot(1,3,2);

figure;
imshow(mat2gray(coeffs1{1,2}{1,2}));
title('NSCT coeffs1{1,2}{1,2}');

[X1, U1] = MSVD(coeffs{1,1});


[X2, U2] = MSVD(coeffs1{1,1});

%fusion starts
X.LL = 0.5*(X1.LL+X2.LL);

D = (abs(X1.LH)-abs(X2.LH)) >= 0;
X.LH = D.*X1.LH + (~D).*X2.LH;
D = (abs(X1.HL)-abs(X2.HL)) >= 0;
X.HL = D.*X1.HL + (~D).*X2.HL;
D = (abs(X1.HH)-abs(X2.HH)) >= 0;
X.HH = D.*X1.HH + (~D).*X2.HH;

%XX = [X.LL, X.LH; X.HL, X.HH];


U = 0.5*(U1+U2);

%apply IMSVD
coe{1,1} = IMSVD(X,U);

[X1, U1] = MSVD(coeffs{1,2}{1,1});


[X2, U2] = MSVD(coeffs1{1,2}{1,1});

%fusion starts
X.LL = 0.5*(X1.LL+X2.LL);

D = (abs(X1.LH)-abs(X2.LH)) >= 0;
X.LH = D.*X1.LH + (~D).*X2.LH;
D = (abs(X1.HL)-abs(X2.HL)) >= 0;
X.HL = D.*X1.HL + (~D).*X2.HL;
D = (abs(X1.HH)-abs(X2.HH)) >= 0;

X.HH = D.*X1.HH + (~D).*X2.HH;

%XX = [X.LL, X.LH; X.HL, X.HH];


U = 0.5*(U1+U2);

%apply IMSVD
coe{1,2}{1,1} = IMSVD(X,U);

[X1, U1] = MSVD(coeffs{1,2}{1,2});


[X2, U2] = MSVD(coeffs1{1,2}{1,2});

%fusion starts
X.LL = 0.5*(X1.LL+X2.LL);

D = (abs(X1.LH)-abs(X2.LH)) >= 0;
X.LH = D.*X1.LH + (~D).*X2.LH;
D = (abs(X1.HL)-abs(X2.HL)) >= 0;
X.HL = D.*X1.HL + (~D).*X2.HL;
D = (abs(X1.HH)-abs(X2.HH)) >= 0;
X.HH = D.*X1.HH + (~D).*X2.HH;

%XX = [X.LL, X.LH; X.HL, X.HH];


U = 0.5*(U1+U2);

coe{1,2}{1,2} = IMSVD(X,U);

%%
imrec=nsctrec(coe,dfilter,pfilter);

disp('Displaying the reconstructed image...') ;


disp('It should be a perfect reconstruction' ) ;
disp(' ') ;

% Show the reconstruction image and the original image


figure;
subplot(1,3,1), imagesc( im, [0, 255] );
title('Original image1' ) ;
colormap(gray);
axis image off;

subplot(1,3,2), imagesc( im1, [0, 255] );


title('Original image2' ) ;
colormap(gray);
axis image off;

subplot(1,3,3), imagesc( imrec, [0, 255] );


title('Reconstructed image' ) ;
colormap(gray);
axis image off;

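% Note (assumption): the sign flips below (x = x*-1, y = y*-1) only make sense if
% psnr here is a custom helper returning negative dB values; with MATLAB's
% built-in psnr function these negations should be removed.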
x=psnr(imrec,im);
x=x*-1;
fprintf('\nThe psnr is: %f\n',x );
y=psnr(im1,imrec);
y=y*-1;
fprintf('The psnr1 is: %f\n',y );
entrv=entropy(imrec);
fprintf('\n The entropy is: %f\n\n',entrv);

ssimval = ssim(imrec,im);
fprintf('\nThe ssim value with input image1 is: %f\n',ssimval);
ssimval1 = ssim(imrec,im1);
fprintf('The ssim value with input image2 is: %f\n',ssimval1);

Aditya Engineering College (A) Page 65


Image Fusion by Non-Subsampled Contourlet Transform and Singular Value Decomposition

CONTACT DETAILS

Name :
Roll Number. :
Mail Id :
Contact Number :

Name :
Roll Number. :
Mail Id :
Contact Number :

Name :
Roll Number. :
Mail Id :
Contact Number :

Name :
Roll Number. :
Mail Id :
Contact Number :

Name :
Roll Number. :
Mail Id :
Contact Number :
