
Bonfring International Journal of Advances in Image Processing, Vol. 5, No. 3, July 2015

Image Fusion Algorithms for Medical Images - A Comparison
M.D. Nandeesh and Dr.M. Meenakshi
Abstract--- This paper presents a comparative study of medical image fusion algorithms along with their performance analysis. Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) images are fused to form a composite image, so as to combine the complementary and redundant information for diagnostic purposes. For this, Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Principal Component Analysis (PCA) and curvelet transform techniques are employed, and their experimental results are evaluated and compared. The comparison of fusion performance is based on root mean square error (RMSE), peak signal to noise ratio (PSNR), Mutual Information (MI) and Entropy (H). The comparison results demonstrate that better fusion performance is achieved by using the curvelet transform.
Keywords--- Image Fusion, PCA, DWT, SWT, Curvelet Transform, Entropy, PSNR and MI

I. INTRODUCTION

The demand for image fusion in image processing applications has increased drastically due to the limitations of a single image sensor, such as optical limitations, improper image capture, and lack of clarity and quality [1].
Any segment of information is useful only if it conveys the actual content with clarity and quality. Image fusion implies the integration of multiple images acquired by multiple sensors, with the intention of providing a better perspective of a scene that contains more information [2]. It extracts the useful information from several images into a single image. Applications where image fusion is mainly used include medical imaging, microscopic imaging, remote sensing, computer vision, and robotics. Image fusion combines high spatial and high spectral information in a single image. It blends the complementary as well as the common features of a set of images, which gives superior information for both subjective and objective analysis. The integration of multi-source images offers immense potential for further research, as each fusion rule emphasizes different characteristics of the source images. Fused images are better suited to human/machine perception, for object detection in the field of remote sensing and for diagnosis in the case of medical imaging.

M.D. Nandeesh, Assistant Professor, Department of Instrumentation Technology, M.S.R.I.T, Bangalore, India. E-mail: mdnandeesh@yahoo.com
Dr.M. Meenakshi, Professor, Department of Instrumentation Technology, Dr.A.I.T, Bangalore, India. E-mail: meenakshi_mbhat@yahoo.com
DOI: 10.9756/BIJAIP.8051

During the last two decades, many image fusion techniques have been developed. The work by Pohl et al. [3] categorized image fusion algorithms into pixel, feature, and decision levels. Pixel level is the basic level of fusion, which analyzes the collective information from different images of the same scene before the original data is estimated and recognized. Feature level is an intermediate level of fusion, which extracts significant attributes from an image such as shape, length, edges, segments and direction. Decision level is a high level of fusion, which indicates the actual target. Image fusion methods can be broadly classified into two types, spatial domain and transform domain fusion. Spatial domain methods deal directly with the pixel values of an image; the pixel values are manipulated to achieve the required result. In transform domain methods, the image is first transformed into the frequency domain, the fusion rules are applied to the frequency components, and the fused image is obtained by the inverse transform.
Images obtained from different imaging systems such as CT, MRI, and PET play an important role in medical diagnosis and other clinical applications by imparting distinct levels of information. The study by Vivek et al. [4] showed that the accurate size and location of a brain tumor can be detected. For example, CT is commonly used for visualizing dense structures and is not suitable for soft tissues. MRI, on the other hand, provides better visualization of soft tissues and is commonly used for the detection of tumors and other tissue abnormalities. Therefore, fusion of images obtained from different modalities is desirable to extract sufficient information for clinical diagnosis and treatment. This information includes the size of the tumour and its location, which enables better detection when compared to the source images.
The work by Shih-Gu Huang [5] classifies the various algorithms available in the literature to perform image fusion. The image fusion algorithms existing in the literature are the Intensity-Hue-Saturation (IHS) transform, PCA, arithmetic combinations such as the Brovey transform and the ratio enhancement technique, multi-scale transform based fusion such as the HPF method, pyramid methods (Gaussian, Laplacian, Gradient, Morphological pyramid), DWT, SWT, the dual-tree discrete wavelet transform and the lifting wavelet transform, neural networks, fuzzy logic methods and sparse techniques. The work by Naidu et al. [6] suggested that pixel-level image fusion implemented using DWT and PCA shows better performance for aeronautical applications. Further, the work by Anjali et al. [7] used pixel- and region-based fusion schemes for infrared images with DWT for feature selection. The works by Srikanth [8], Vipin [9] and Neetu et al. [10] presented various performance measures such as PSNR and Entropy for DWT applied to multimodal medical and general images; these were implemented and evaluated for both subjective and objective parameters.


Further work by Pajares [11], H. Li [12], Sonali [13] and Zhijun et al. [14] demonstrated the implementation of different DWT families and coefficients, and a performance comparison was carried out by computing PSNR and MI.
The studies by Gaurav [16], Deepak Kumar [17] and Yudong et al. [18] have categorized the results of fused images based on a comparison and evaluation of existing methods. The main limitation of DWT is its translation-variant property, which can be overcome by using SWT. In SWT, even if the signal is shifted, the obtained coefficients do not change, and it performs better in denoising and edge detection. In contrast to DWT, SWT can be applied to images of arbitrary size rather than only sizes that are powers of two. SWT fusion has shown better fusion performance on both medical and other images in the works of Kusum [19], Chavez [20], Mirajkar et al. [21] and Houkui et al. [22]. The basic limitation of wavelet fusion algorithms lies in the fusion of curved shapes, which can be handled efficiently by the curvelet transform. Hence, applying the curvelet transform to the fusion of images with curved objects results in better fusion efficiency, as reported by Choi [23], Shriniwas [24] and Navneet et al. [25]. The study by Nandeesh et al. [26] has compared PCA, DWT and SWT, with a performance analysis based on PSNR and entropy.
This paper gives a comparative study of the performance of image fusion techniques. The results of fused images obtained with PCA, DWT, SWT and the curvelet technique applied to medical images are compared. The organization of this paper is as follows: Section 2 explains the principle of the PCA, DWT, SWT and curvelet image fusion techniques; Section 3 explains the fusion performance assessment measures; results and analysis are given in Section 4; and finally conclusions are drawn in Section 5.
II. PRINCIPLE OF IMAGE FUSION TECHNIQUES

The methods used in this work for medical image fusion are PCA, DWT, SWT and the curvelet transform.
A Principal Component Analysis
PCA projects data from the original space to the eigenspace so as to increase the variance and reduce the covariance, by retaining the components corresponding to the significant eigenvalues and discarding the others, which enhances the signal-to-noise ratio [6, 7]. PCA is extensively used in data compression and pattern matching, highlighting the similarities and differences with no loss of information. PCA is a statistical technique that transforms a multivariate dataset of correlated variables into a dataset of uncorrelated linear combinations of the original variables. The input images (the images to be fused) are arranged as two column vectors and their empirical means are subtracted. The eigenvectors and eigenvalues of the resulting covariance matrix are computed, and the eigenvector corresponding to the larger eigenvalue is selected. The normalized components P1 and P2 are computed from this eigenvector, and the fused image is obtained as I(i,j) = P1*I1(i,j) + P2*I2(i,j).
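A minimal sketch of this PCA weighting rule is shown below in Python/NumPy. The paper itself reports a Matlab implementation; the function name, the use of registered 8-bit grayscale inputs of equal size, and the final clipping to the 0-255 range are illustrative assumptions.

```python
import numpy as np

def pca_fuse(img1, img2):
    """Fuse two registered grayscale images of equal size using PCA weights."""
    # Arrange each image as a column vector and remove the empirical mean.
    data = np.stack([img1.ravel(), img2.ravel()]).astype(np.float64)
    data -= data.mean(axis=1, keepdims=True)

    # Eigen-decomposition of the 2x2 covariance matrix of the two vectors.
    cov = np.cov(data)
    eigvals, eigvecs = np.linalg.eigh(cov)

    # Keep the eigenvector of the largest eigenvalue and normalise it so
    # that the two weights P1 and P2 sum to one.
    v = eigvecs[:, np.argmax(eigvals)]
    p1, p2 = v / v.sum()

    # Weighted combination I(i,j) = P1*I1(i,j) + P2*I2(i,j).
    fused = p1 * img1.astype(np.float64) + p2 * img2.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)  # assumes 8-bit inputs
```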


B Discrete Wavelet Transform


DWT is a time-scale representation of a digital signal obtained by using digital filtering techniques. The signal to be analyzed is passed through filters with different cut-off frequencies at different scales. The wavelet transform provides a time-frequency representation of the signal that is capable of revealing aspects of the data which other signal analysis techniques overlook, such as trends, breakdown points, discontinuities in higher derivatives, and self-similarity. Delicate information such as medical images and complex information such as speech signals can be analysed effectively using wavelets. Images and patterns are decomposed into elementary forms at different positions and scales and subsequently reconstructed with high precision. The wavelet transform [10, 11] decomposes the image into spatial frequency bands at various levels, namely the low-high, high-low, high-high and low-low sub-bands. A wavelet-transform fusion method decomposes an image into various sub-images based on local frequency content and identifies the prominent wavelet coefficients. A general fusion rule is to select the coefficients with the higher values, so that the more dominant features at each scale are preserved in the new multi-resolution representation. A new image is then constructed by performing an inverse wavelet transform. In the decimated algorithm [12], the signal is down-sampled after each level of transformation. Down-sampling is performed by keeping one out of every two rows and columns, making the transformed image one quarter of the original size and half the original resolution. The decimated algorithm can therefore be represented visually as a pyramid, where the spatial resolution becomes coarser as the image becomes smaller. The steps involved [12] in the fusion of images through the wavelet transform are given in Figure 1. This type of image fusion provides better PSNR, but suffers from spectral degradation [20].
Figure 1: Discrete Wavelet Fusion Scheme (each source image is wavelet-transformed (W), fusion rules are applied to the coefficients, and the fused image is obtained by the inverse transform)
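As an illustration of this scheme, a minimal single-level DWT fusion sketch using the PyWavelets package is given below. The choice of the 'db1' wavelet, the maximum-absolute-coefficient rule for the detail bands and averaging of the approximation band are assumed choices for the sketch, not rules prescribed by the paper.

```python
import numpy as np
import pywt

def dwt_fuse(img1, img2, wavelet="db1"):
    """One-level DWT fusion of two registered grayscale images."""
    # Decompose each image into approximation (LL) and detail (LH, HL, HH) bands.
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img1.astype(np.float64), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img2.astype(np.float64), wavelet)

    # Fusion rules: average the approximation band, and for each detail band
    # keep the coefficient with the larger magnitude (the more dominant feature).
    fuse_max = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    cA = (cA1 + cA2) / 2.0
    details = tuple(fuse_max(a, b) for a, b in ((cH1, cH2), (cV1, cV2), (cD1, cD2)))

    # Reconstruct the fused image with the inverse wavelet transform.
    return pywt.idwt2((cA, details), wavelet)
```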


C Stationary Wavelet Transform
The Stationary Wavelet Transform is a time-invariant transform. Translation invariance is obtained by suppressing the down-sampling step of the decimated algorithm and instead up-sampling the filters by inserting zeros between the filter coefficients; the resulting undecimated DWT is called the SWT. The approximation images from the undecimated algorithm are therefore represented as levels in a parallelepiped, with the spatial resolution becoming coarser at each higher level while the size remains the same.


Algorithms in which the filters are up-sampled in this way are called à trous ("with holes") algorithms. The SWT is thus similar to the DWT; the main difference is that the down-sampling is suppressed, which makes the transform translation-invariant.
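A corresponding SWT fusion sketch with PyWavelets is shown below. As with the DWT example, the wavelet, the decomposition depth and the fusion rules are illustrative assumptions; note also that this particular library routine requires the image height and width to be divisible by 2**level.

```python
import numpy as np
import pywt

def swt_fuse(img1, img2, wavelet="db1", level=1):
    """Undecimated (stationary) wavelet fusion of two registered grayscale images.

    Image height and width must be divisible by 2**level for pywt.swt2.
    """
    c1 = pywt.swt2(img1.astype(np.float64), wavelet, level=level)
    c2 = pywt.swt2(img2.astype(np.float64), wavelet, level=level)

    fused = []
    for (cA1, (cH1, cV1, cD1)), (cA2, (cH2, cV2, cD2)) in zip(c1, c2):
        # Average the approximation band, take the larger-magnitude detail coefficients.
        cA = (cA1 + cA2) / 2.0
        details = tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                        for a, b in ((cH1, cH2), (cV1, cV2), (cD1, cD2)))
        fused.append((cA, details))

    # The inverse SWT reconstructs an image of the same size as the inputs.
    return pywt.iswt2(fused, wavelet)
```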
D Curvelet Transform
The curvelet transform is an advanced tool for graphical applications for the representation of curved shapes, and it can be extended to the field of edge detection. Curvelet transforms use special filtering processes and multi-scale directional transforms that allow an optimal non-adaptive sparse representation of objects with edges. The curvelet transform can represent the contours of an image more sparsely and provide more information for image processing. The first-generation curvelet transform used a complex series of steps involving ridgelet analysis of the Radon transform of an image to extract edge information. In this curvelet approach, the input image is first decomposed into a set of sub-bands, each of which is then partitioned into several blocks for ridgelet analysis; this is a complicated and time-consuming process. The second-generation curvelet transform reduces the amount of redundancy in the transform and increases the speed considerably, based on the wrapping of specially selected Fourier samples.

III. PERFORMANCE ASSESSMENT

In this work, the performance assessment of the fusion algorithms is carried out using measurements of Entropy, MSE, PSNR and Mutual Information. This comparison enables identification of the best image fusion method for medical applications.
a. Entropy: Entropy is a measure of the quantity of information contained in an image. If the fused image has relatively uniform frequency content, it contains maximum entropy, and a larger entropy for the fused image indicates that more of the information content of the original images has been retained. Mathematically, entropy is defined as
   E = -\sum_{i} p(x_i) \log_2 p(x_i)
   where p(x_i) is the probability of occurrence of gray level x_i.

b. Peak Signal to Noise Ratio: The PSNR indicates the similarity between two images; the higher the value of PSNR, the better the fused image is. It is computed as
   PSNR = 10 \log_{10} \left( \frac{255^2}{\mathrm{RMSE}^2} \right)
   where the root mean square error (RMSE) between the reference image I_r and the fused image I_f of size M \times N is defined as
   RMSE = \sqrt{ \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( I_r(i,j) - I_f(i,j) \right)^2 }

c. Mutual Information (MI): MI measures the degree of dependency of two images; its value is zero if I1 and I2 are independent of each other. The MI between two source images is given by
   MI(I_1, I_2) = \sum_{x,y} p(x,y) \log_2 \frac{p(x,y)}{p(x)\, p(y)}
   where p(x,y) denotes the joint probability distribution function and p(x) and p(y) the marginal probability distribution functions.
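A compact NumPy sketch of these three measures is given below. It assumes 8-bit grayscale images and 256-bin histograms; the paper's own Matlab implementation may differ in binning and normalisation, so the numbers it produces need not match Table 1.

```python
import numpy as np

def entropy(img):
    # Shannon entropy of the gray-level histogram: E = -sum p(x) log2 p(x).
    p, _ = np.histogram(img, bins=256, range=(0, 256), density=True)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def psnr(ref, fused):
    # PSNR = 10 log10(255^2 / MSE) for 8-bit images (infinite for identical images).
    mse = np.mean((ref.astype(np.float64) - fused.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def mutual_information(img1, img2):
    # MI computed from the joint gray-level histogram of the two images.
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=256)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(y)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))
```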

IV. RESULT AND ANALYSIS

The techniques for image fusion described in the above sections are implemented in Matlab and their results are compared. Figures 2(a) and 2(b) show the original CT and MRI input images used as inputs to the various fusion algorithms; figures 2(c), 2(d), 2(e) and 2(f) show the fusion results obtained using PCA, DWT, SWT and the curvelet transform respectively. Table 1 summarises the comparison of the performance of the image fusion techniques for the MRI and CT images.

Figure 2: Original images: 2(a) CT, 2(b) MRI; fused images: 2(c) PCA, 2(d) DWT, 2(e) SWT, 2(f) Curvelet transform

Table 1: Comparison of Performance of Images

Method      Entropy   MSE   PSNR    NMI
PCA         0.012     8     15.38   0.8736
DWT         0.007     241   15.95   0.466
SWT         0.07      475   21.35   0.4854
Curvelet    0.07      419   21.94   0.4533


V. CONCLUSION

In this work, the DWT, PCA, SWT and curvelet algorithms are applied to medical images. The results demonstrated better performance in terms of smaller entropy, higher PSNR and smaller normalised mutual information values. Although DWT and SWT showed good performance, still better performance is obtained by using the curvelet transform, which has a better ability to identify edge-direction features and allows better analysis and tracking of the important characteristics of the image.
ACKNOWLEDGEMENT
Our thanks to the experts who have contributed to the development of the image fusion algorithms and database system. Our thanks also to the experts who have contributed to the development of the European ST-T datasets of the PhysioNet database.
REFERENCES
[1] L. Wald, Some terms of reference in data fusion, IEEE Trans. Geosci. Remote Sens., Vol. 37, No. 3, Pp. 1190-1193, May 1999.
[2] Vincent Barra and Jean-Yves Boire, A General Framework for the Fusion of Anatomical and Functional Medical Images, NeuroImage 13, Pp. 410-424, 2001.
[3] C. Pohl and J.L. Van Genderen, Multisensor image fusion in remote sensing: concepts, methods and applications, IJRS, Vol. 19, No. 5, Pp. 823-854, 1998.
[4] Vivek Angoth, CYN Dwith and Amarjot Singh, A Novel Wavelet Based Image Fusion for Brain Tumor Detection, International Journal of Computer Vision and Signal Processing, 2(1), Pp. 1-7, 2013.
[5] Shih-Gu Huang, Wavelet for Image Fusion.
[6] V.P.S. Naidu and J.R. Raol, Pixel-Level Image Fusion using Wavelets and Principal Component Analysis, Defence Science Journal, Vol. 58, No. 3, May 2008.
[7] Anjali Malviya, S.G. Bhirud, Image Fusion of Digital Images, International Journal of Recent Trends in Engineering, Vol. 2, No. 3, Pp. 146-148, November 2009.
[8] J. Srikanth, C.N. Sujatha, Image Fusion Based On Wavelet Transform For Medical Diagnosis, International Journal of Engineering Research and Applications, Vol. 3, Issue 6, Pp. 252-256, Nov-Dec 2013.
[9] Vipin Wani, Mukesh Baghe, Hitesh Gupta, A Comparative Study of Image Fusion Technique Based on Feature Using Transforms Function, International Journal of Emerging Technology and Advanced Engineering, Vol. 3, Issue 11, November 2013.
[10] Neetu Mittal, Rachana Gupta, Comparative analysis of medical images fusion using different fusion methods for Daubechies Complex Wavelet Transform, IJARCSSE, Vol. 3, Issue 6, June 2013.
[11] G. Pajares, J. Manuel, A wavelet-based image fusion tutorial, Pattern Recognition, Vol. 37, Pp. 1855-1872, July 2004.
[12] H. Li, B.S. Manjunath, S.K. Mitra, Multisensor Image Fusion Using the Wavelet Transform, Graphical Models and Image Processing, Vol. 57, No. 3, Pp. 235-245, 1995.
[13] Sonali Mane, S.D. Sawant, Image Fusion of CT/MRI using DWT, PCA Methods and Analog DSP Processor, International Journal of Engineering Research and Applications, Vol. 4, Issue 2, Pp. 557-563, February 2014.
[14] Zhijun Wang, Djemel Ziou, Costas Armenakis, Deren Li and Qingquan Li, A Comparative Analysis of Image Fusion Methods, IEEE Transactions on Geoscience and Remote Sensing, Vol. 43, No. 6, Pp. 1391-1402, June 2005.
[15] Rosemin Thanga Joy R, Priyadharshini A, A Comparative Analysis of CT and MRI Image Fusion using Wavelet and Framelet Transform, IJERT, Vol. 2, Issue 2, February 2013.
[16] Gaurav Bhatnagar and Balasubramanian Raman, A new image fusion technique based on directive contrast, Electronic Letters on Computer Vision and Image Analysis, 8(2), Pp. 18-38, 2009.
[17] Deepak Kumar Sahu, M.P. Parsai, Different Image Fusion Techniques - A Critical Review, IJMER, Vol. 2, Issue 5, Pp. 4298-4301, Sep.-Oct. 2012.


[18] Yudong Zhang, Zhengchao Dong, Lenan Wu, Shuihua Wang, Zhenyu Zhou, Feature extraction of brain MRI by stationary wavelet transform, IEEE, 2010.
[19] Kusum Rani, Reecha Sharma, Study of Different Image Fusion Algorithm, International Journal of Emerging Technology and Advanced Engineering, Vol. 3, Issue 5, May 2013.
[20] P.S. Chavez, S.C. Sides, J.A. Anderson, Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic, Photogrammetric Engineering and Remote Sensing, 57(3), Pp. 295-303, 1991.
[21] Mirajkar Pradnya P., Sachin D. Ruikar, Image fusion based on stationary wavelet transform, International Journal of Advanced Engineering Research and Studies, Pp. 99-101, July-Sept. 2013.
[22] Houkui Zhou, An Stationary Wavelet Transform and Curvelet Transform Based Infrared and Visible Images Fusion Algorithm, International Journal of Digital Content Technology and its Applications, Vol. 6, No. 1, January 2012.
[23] M. Choi, R.Y. Kim and M.G. Kim, The curvelet transform for image fusion, International Society for Photogrammetry and Remote Sensing, ISPRS 2004, Vol. 35, Part B8, Pp. 59-64, Istanbul, 2004.
[24] Shriniwas T. Budhewar, Wavelet and Curvelet Transform based Image Fusion Algorithm, International Journal of Computer Science and Information Technologies (IJCSIT), Vol. 5(3), Pp. 3703-3707, 2014.
[25] Navneet Kaur, Madhu Bahl, Harsimran Kaur, Review On: Image Fusion Using Wavelet and Curvelet Transform, International Journal of Computer Science and Information Technologies, Vol. 5(2), Pp. 2467-2470, 2014.
[26] M.D. Nandeesh and Dr. M. Meenakshi, A comparative study of different Image fusion algorithms, The 3rd National Conference on Computational Control System and Optimization (CCSO 2015), Dr. AIT, Bangalore, Volume I, 23rd and 24th April 2015.
M.D. Nandeesh was born in Karnataka, India. He received his B.E. degree in Instrumentation Technology from Mysore University, Karnataka, in 1998, and his M.Tech degree in Biomedical Instrumentation from VTU, Karnataka, in 2000. He is currently working as Assistant Professor (Sr. grade) in the Department of Instrumentation Technology, M.S. Ramaiah Institute of Technology, Bangalore, and has more than 14 years of experience in teaching. He is presently a research scholar at Visvesvaraya Technological University (VTU). His research interests are Artificial Intelligence, Image Processing and Biomedical Instrumentation. He has presented many papers in national conferences and published papers in international journals. He is a Life Member of MISTE. (E-mail: hasnimurthy@rediffmail.com)
Dr. M. Meenakshi graduated from SJCE Mysore in the field of Instrumentation Technology. She received her Master's degree in the field of Controls, Guidance and Instrumentation from I.I.T. Madras and her Ph.D. degree from the Department of Aerospace Engineering, I.I.Sc. Bangalore, in the field of Controls and Instrumentation. She has a teaching experience of 20 years and has published more than 40 research publications, including international journals, international conferences, national conferences, workshops and seminars. She is currently Professor and Head of the Department of Instrumentation Technology, Dr. AIT, Bangalore-56. (E-mail: meenakshi_mbhat@yahoo.com)
