
A Novel Approach to Multi-Modal Hybrid Image Fusion Using Wavelet and Contourlet Transform for Medical Diagnosis Applications

P C VIJAYA LAKSHMI1, T MAHEBOOB RASOOL2
2Associate Professor & HOD
1,2 Dept. of Electronics and Communication Engineering
Bharath College of Engineering and Technology for Women, C.K. Dinne, Kadapa

Abstract— Image fusion is used to enhance the quality of images by combining two images of the same scene obtained from single or multiple modalities. In medical diagnosis, different types of imaging modalities such as X-ray, computed tomography (CT), magnetic resonance imaging (MRI), magnetic resonance angiography (MRA), PET scan etc. provide limited information, where some information is common and some unique. This paper presents a hybrid combination of the Discrete Wavelet and Contourlet transforms. For the fused images obtained in this way, it is proposed to compute performance metrics such as entropy, peak signal to noise ratio and mean square error, and to compare them with existing methods so as to arrive at the combination of transformations that yields the most informative fused image. The proposed method is simulated using the MATLAB tool.

Index Terms— Fusion, wavelet, contourlet, filter banks, multi-resolution, multi-direction, geometric image processing, MSE, PSNR, entropy.

I. INTRODUCTION

Medical image fusion is the process in which complementary information from multiple images, from single or multiple imaging modalities, is combined to improve the imaging quality so that the resultant fused image is more informative and suitable for processing tasks, thereby increasing its clinical applicability in the diagnosis and assessment of medical problems.
In recent years, multimodal image fusion algorithms and devices have evolved into a powerful tool in the clinical applications of medical imaging techniques. They have shown significant achievements in improving the clinical accuracy of diagnosis based on medical images. The main motivation is to bring the most relevant information from different sources into a single output, which plays a crucial role in medical diagnosis.
Medical imaging has gained significant attention due to its predominant role in health care. Some of the different types of imaging modalities used nowadays are X-ray, computed tomography (CT), magnetic resonance imaging (MRI), magnetic resonance angiography (MRA), etc. These imaging techniques are used for extracting clinical information which, although complementary most of the time, can also be unique depending on the specific imaging modality used. For example:

X-ray: used to detect fractures and abnormalities in bone position.
CT: used to provide more accurate information about calcium deposits, air and dense structures like bones with less distortion, as well as acute bleeds and tumours; however, it cannot detect physiological changes.
MRI: under a strong magnetic field and radio-wave energy, information about the nervous system and structural abnormalities of soft tissue and muscles can be better visualized.
MRA: used to evaluate blood vessels and their abnormalities.
PET: positron emission tomography offers quantitative analyses, allowing relative changes over time to be monitored as a disease process evolves or in response to a specific stimulus, by looking at blood flow, metabolism, neurotransmitters and radio-labelled drugs.
SPECT: single photon emission computed tomography provides functional and metabolic information; it helps to diagnose and stage a cancer.
fMRI: functional magnetic resonance imaging is a functional neuro-imaging procedure using MRI technology that measures brain activity by detecting changes associated with blood flow.

Hence, none of these modalities is able to carry all relevant information in a single image. Generally, for a physician to analyse the condition of a patient, in most cases different images such as MRI, CT, PET and SPECT must be studied simultaneously, which is time consuming. The anatomical and functional medical images therefore need to be combined for a concise view. For this purpose, multimodal medical image fusion has been identified as an approach with great potential. It aims to integrate information from multiple modalities to obtain a more complete and accurate description of the same object, which facilitates more precise diagnosis and better treatment. The fused image provides higher accuracy and reliability by removing redundant information.
The applications of image fusion are found in radiology, molecular and brain imaging, oncology, diagnosis of cardiac diseases, neuro-radiology and ultrasound. Multimodal medical image fusion helps in diagnosing diseases and is also cost effective, since it minimises storage to a single fused image instead of multiple source images.
II. BACKGROUND AND RELATED WORK

A. Image fusion techniques:
Image fusion methods can be broadly classified into two groups, namely spatial domain and transform domain fusion. In spatial domain techniques we deal directly with the image pixels: the pixel values are manipulated to achieve the desired result. Fusion methods such as averaging, the Brovey method, principal component analysis (PCA) and IHS-based methods fall under the spatial domain approaches. The disadvantage of spatial domain approaches is that they produce spatial distortion in the fused image. They do not give directional information and also lead to spectral distortion, while the arithmetic combination loses original details as a result of the low contrast of the fused image. This becomes a negative factor when the fused image is used for further processing, such as classification.
This can be overcome in the transform domain, which involves decomposing the source image into sub-bands that are then selectively processed using an appropriate fusion algorithm. In frequency domain methods the image is first transferred into the frequency domain, i.e. the Fourier transform of the image is computed first; all fusion operations are performed on the Fourier transform of the image, and then the inverse Fourier transform is applied to obtain the resultant image. Image fusion is applied in every field where images ought to be analysed.
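For reference, the simplest of the spatial-domain rules mentioned above, pixel averaging, can be written in a few lines. This is only an illustrative sketch; the function name is the editor's and is not part of the paper:

```python
import numpy as np

def average_fusion(img1, img2):
    """Naive spatial-domain fusion: pixel-wise average of two registered images."""
    a = img1.astype(np.float64)
    b = img2.astype(np.float64)
    return ((a + b) / 2.0).round().astype(img1.dtype)
```

The averaging rule illustrates the drawback noted above: complementary structures from the two modalities are blended rather than selected, which lowers contrast in the fused result.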
B. General image fusion based on DWT:
The DWT comes under the classification of multi-scale decomposition and maps the wavelet transform to the digital world. Filter banks are used to approximate the behaviour of the continuous wavelet transform; a two-channel filter bank is used in the discrete wavelet transform (DWT), and the coefficients of these filters are evaluated using mathematical analysis. The wavelet transform is used to identify local features in an image. It is also used for the decomposition of two-dimensional (2-D) signals, such as 2-D grey-scale images, for multi-resolution analysis. The available filter banks decompose the image into two different components, i.e. high and low frequency. When decomposition is carried out, the approximation and detail components can be separated. The 2-D discrete wavelet transform converts the image from the spatial domain to the transform domain. The image is divided by vertical and horizontal lines, representing the first order of the DWT, and the image can be separated into four parts: LL1, LH1, HL1 and HH1.

Wavelet decomposition of images:
Wavelet decomposition can be implemented by the two-channel filter bank shown in Fig. 1.

Fig. 1 Two-level WT decomposition (keep one column out of two, then one row out of two, i.e. down-sampling along columns and rows)

The wavelet transform separately filters and down-samples the 2-D data (image) in the vertical and horizontal directions (separable filter bank). The input (source) image I(x, y) is filtered by a low pass filter L and a high pass filter H in the horizontal direction and then down-sampled by a factor of two (keeping alternate samples) to create the coefficient matrices IL(x, y) and IH(x, y). The coefficient matrices IL(x, y) and IH(x, y) are both low pass and high pass filtered in the vertical direction and down-sampled by a factor of two to create the sub-bands (sub-images) ILL(x, y), ILH(x, y), IHL(x, y) and IHH(x, y).
The discrete wavelet transform has the property that the spatial resolution is small in the low-frequency bands but large in the high-frequency bands. This is because the scaling function is treated as a low pass filter and the mother wavelet as a high pass filter in the DWT implementation. The wavelet transform decomposition and reconstruction take place column- and row-wise: first, row-by-row decomposition is performed and then column-by-column, as shown in Fig. 2.

Fig. 2 Image decomposition using DWT

The ILL(x, y) sub-band is the original image at the coarser resolution level, which can be considered as a smoothed and sub-sampled version of the original image. Most of the information of the source images is kept in this low-frequency sub-band; it usually contains the slowly varying grey value information in an image, the so-called approximation.
The ILH(x, y), IHL(x, y) and IHH(x, y) sub-bands contain the detail coefficients of an image, whose large absolute values correspond to sharp intensity changes and preserve salient information in the image.

Fig. 3 Different levels of decomposition (1, 2, 3: decomposition levels; H: high-frequency bands; L: low-frequency bands)

There are different levels of decomposition, as shown in Fig. 3. After one level of decomposition there are four frequency bands, as listed above. By recursively applying the same scheme to the LL sub-band, a multi-resolution decomposition with the desired number of levels can be achieved.
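The decomposition just described maps directly onto PyWavelets. The short sketch below is illustrative only; the random test array and the choice of the 'db2' wavelet are assumptions by the editor, not taken from the paper:

```python
import numpy as np
import pywt

img = np.random.rand(256, 256)      # stand-in for a registered grey-scale source image

# One level of the separable 2-D DWT: approximation plus three detail sub-bands,
# corresponding to the LL1, LH1, HL1 and HH1 parts described above.
cA, (cH, cV, cD) = pywt.dwt2(img, 'db2')

# Recursively decomposing the approximation gives the multi-level structure;
# wavedec2 does this directly for K levels (K = 3 gives 3K + 1 = 10 sub-bands).
coeffs = pywt.wavedec2(img, 'db2', level=3)
```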
The schematic diagram of the wavelet-based image fusion algorithm is shown in Fig. 4. In the wavelet image fusion scheme, the source images I1(x, y) and I2(x, y) are decomposed into approximation and detail coefficients at the required level using the DWT. The approximation and detail coefficients of both images are combined using a fusion rule ø. The fused image If(x, y) is obtained by taking the inverse discrete wavelet transform (IDWT) as:

If(x, y) = IDWT [ø {DWT (I1(x, y)), DWT (I2(x, y))}]    (1)

Fig. 4 Multi-level image fusion using DWT (block labels: registered source images, wavelet coefficients, fused wavelet coefficients)

The fusion rule used in this paper simply averages the approximation coefficients and, in each detail sub-band, picks the coefficient with the largest magnitude.

C. Procedure for implementation of wavelet transforms:
Step 1: The images to be fused must be registered to ensure that the corresponding pixels are aligned.
Step 2: These images are decomposed into wavelet-transformed images based on the wavelet transformation. The transformed images with K-level decomposition will finally have 3K+1 different frequency bands, which include one low-frequency portion (ILL) and 3K high-frequency portions (low-high, high-low and high-high bands).
Step 3: The transform coefficients of the different portions or bands are combined with a certain fusion rule (a sketch of this rule follows the steps):
a) fuse the approximation coefficients of the source images using the average method;
b) fuse the detail coefficients of the source images using the maxima method.
Step 4: The fused image in the spatial domain is constructed by performing an inverse wavelet transform on the combined transform coefficients from Step 3.
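A minimal sketch of Steps 2 to 4 with this fusion rule, assuming PyWavelets and two registered grey-scale arrays of equal size (single-level decomposition shown for brevity; function and variable names are the editor's):

```python
import numpy as np
import pywt

def dwt_fuse(I1, I2, wavelet='db2'):
    """Average the approximation coefficients, pick max-magnitude detail coefficients,
    then invert the transform (Eq. 1 with this particular fusion rule)."""
    A1, details1 = pywt.dwt2(I1.astype(float), wavelet)
    A2, details2 = pywt.dwt2(I2.astype(float), wavelet)

    fused_A = (A1 + A2) / 2.0                                         # average rule (approximation)
    fused_details = tuple(np.where(np.abs(d1) >= np.abs(d2), d1, d2)  # maxima rule (details)
                          for d1, d2 in zip(details1, details2))

    return pywt.idwt2((fused_A, fused_details), wavelet)              # Step 4: back to spatial domain
```

For a K-level decomposition the same two rules would be applied band by band to the output of `pywt.wavedec2` before calling `pywt.waverec2`.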
III. GENERAL IMAGE FUSION BASED ON CONTOURLET TRANSFORM

The contours of the original images can be captured effectively with few coefficients by using the contourlet transform, which provides a multi-scale and multi-directional decomposition. It consists of two stages:
 Laplacian Pyramid (LP)
 Directional Filter Bank (DFB)
This construction results in a flexible multi-resolution, local and directional image expansion using contour segments, and thus it is named the contourlet transform. It is of interest to study the limit behaviour when such schemes are iterated over scale and/or direction, which has been analysed in connection with filter banks, their iteration, and the associated wavelet construction. The general block diagram is shown in Fig. 5.

Fig. 5 General block diagram

A. Need for the contourlet transform:
An efficient image representation should provide multi-resolution, localization, critical sampling, directionality and anisotropy: to capture smooth contours, the representation should contain basis elements with a variety of elongated shapes and different aspect ratios, and it should represent curved singularities with few coefficients. Among these desiderata, the first three are effectively provided by separable wavelets, while the last two require a new construction; wavelets offer only limited orientation (vertical, horizontal and diagonal) and lack anisotropic elements (isotropic scaling). Besides, a significant challenge in capturing geometry and directionality in images comes from the discrete nature of the data. For this reason, a multi-resolution and multi-directional image expansion is constructed using non-separable filter banks. The overall result is an image expansion using basic elements like contour segments, hence the name contourlet. In particular, contourlets have elongated supports at various scales, directions and aspect ratios, which allows them to efficiently approximate a smooth contour at multiple resolutions.

B. Laplacian Pyramid:
In the LP, at each stage the original image is filtered with a Gaussian filter and down-sampled to obtain a low pass (blurred) version of the original image. The obtained low pass image is then up-sampled and subtracted from the original image to obtain the band pass image (the high-frequency components). This band pass image is applied to the directional filter bank to obtain directionality information. The decomposition process of the Laplacian pyramid is shown in Fig. 7. In this way the Laplacian pyramid separates the low-frequency and high-frequency components; the obtained low-frequency (scaled) sub-band image is further decomposed until the desired number of levels is reached.

Fig. 7 Laplacian pyramid decomposition process
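One LP stage as described above can be sketched as follows. This is an illustrative approximation using scipy's Gaussian filter and bilinear interpolation; a real contourlet implementation uses dedicated LP analysis and synthesis filters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def lp_stage(img, sigma=1.0):
    """One Laplacian-pyramid stage: return (coarse low-pass image, band-pass image).
    The band-pass image is what would be fed to the directional filter bank."""
    img = img.astype(float)
    low = gaussian_filter(img, sigma)[::2, ::2]                  # low-pass filter + down-sample by 2
    up = zoom(low, (img.shape[0] / low.shape[0],
                    img.shape[1] / low.shape[1]), order=1)       # up-sample back to the original grid
    return low, img - up                                         # band-pass = original - prediction
```

Repeating the call on the returned low-pass image yields the multi-level pyramid of Fig. 7.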
C. Directional Filter Bank:
The high-frequency components are given to the directional filter bank, which links point discontinuities into linear structures. The high pass sub-band images are applied to the directional filter bank to further decompose the frequency spectrum using an n-level iterated tree-structured filter bank, as shown in Fig. 8. By doing this, smooth contours and edges at any orientation are captured; finally, the scaled information is combined by scaled multiplication. Since the directional filter bank (DFB) was designed to capture the high-frequency content (representing directionality) of the input image, the low-frequency content is poorly handled: with this frequency partition, low frequencies would "leak" into several directional sub-bands, hence the DFB alone does not provide a sparse representation for images. This fact provides another reason to combine the DFB with a multi-scale decomposition, where the low frequencies of the input image are removed before applying the DFB.

Fig. 8 Frequency partitioning in the DFB
IV. PROPOSED FUSION METHODOLOGY

A. Pre-processing stage:
The source images need to be aligned or registered as perfectly as possible prior to the main fusion process in order to produce the best fusion results. The image registration process can also be called image alignment. If the input image datasets are not aligned to each other, it is impossible to obtain the best fusion results even when the fusion framework, scheme and algorithm are optimal for the subsequent contourlet transformation.
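Registration itself is outside the scope of the paper. As a minimal illustration of the alignment step, assuming the misalignment is a pure translation, a phase-correlation check can be written as follows; the function name and the translation-only assumption are the editor's:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def align_by_translation(ref, mov):
    """Estimate an integer-pixel translation between two images by phase correlation
    and shift the moving image onto the reference grid."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real    # normalized cross-power spectrum
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:                                   # wrap to signed offsets
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return nd_shift(mov.astype(float), (dy, dx), order=1), (dy, dx)
```

In the remainder of the paper the source image pairs are assumed to be already registered.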
B. Decomposition-1:
After pre-processing, the source images are first decomposed using the Wavelet Transform (WT) in order to realize a multi-scale sub-band decomposition with no redundancy. These sub-band coefficients are predominantly the low- and high-frequency sub-bands of the image. The obtained approximation and detail coefficients, after application of the appropriate fusion rule, are reconstructed using the inverse DWT. The process carried out in this stage provides significant localization, leading to better preservation of features in the fused image.

C. Decomposition-2:
After reconstruction at Stage 1, the fusion algorithm is applied again at Stage 2 in the contourlet domain. The significance of such an approach is to overcome the limited directionality of the wavelets used in Stage 1. The contourlet transform (CT) is applied in order to achieve angular decompositions. Sub-band decomposition using the CT is applied to both images, and a set of coefficients is obtained, until the different frequencies and resolutions are reached. These frequency coefficients are fused together based on certain fusion rules and are reconstructed using the inverse CT. The schematic diagram of the proposed methodology, the hybrid image fusion using the wavelet-contourlet transform, is shown in Fig. 9. Thus a hybrid of wavelet and contourlet leads to better results for medical diagnosis when compared with the existing methods, i.e. the individual wavelet and contourlet transforms.

Fig. 9 Schematic of the proposed fusion method

D. Flow chart:
The flow chart of the proposed method ends with:
Step 8: To get the final hybrid fused image in the spatial domain, the inverse contourlet transform is applied.
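Since Fig. 9 and the flow chart are not reproduced in this text, the exact wiring of the two stages cannot be read off the paper. The sketch below is therefore only one plausible arrangement assumed for illustration, reusing `dwt_fuse` and `lp_stage` from the earlier sketches, with a single Laplacian-pyramid level standing in for the full contourlet (LP + DFB) stage:

```python
import numpy as np
from scipy.ndimage import zoom

def hybrid_fuse(I1, I2):
    """Two-stage cascade (assumed wiring, not the paper's exact Fig. 9):
    Stage 1 fuses the sources in the wavelet domain; Stage 2 re-selects band-pass
    detail across the stage-1 result and both sources, then resynthesizes."""
    F1 = dwt_fuse(I1, I2)[:I1.shape[0], :I1.shape[1]]      # Stage 1 (idwt2 may pad odd sizes)

    lowF, bandF = lp_stage(F1)                             # Stage 2: multi-scale stand-in
    band1 = lp_stage(I1)[1]
    band2 = lp_stage(I2)[1]
    band = np.where(np.abs(band1) >= np.abs(band2), band1, band2)
    band = np.where(np.abs(bandF) >= np.abs(band), bandF, band)   # maxima rule on detail

    up = zoom(lowF, (I1.shape[0] / lowF.shape[0],
                     I1.shape[1] / lowF.shape[1]), order=1)
    return up + band                                       # synthesis (analogue of Step 8)
```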
V. PERFORMANCE EVALUATION OF IMAGE FUSION TECHNIQUES

The quantitative performance of the various hybrid image fusion techniques in the transform domain can be assessed using the following metrics.

A. Entropy
Entropy measures the amount of information contained in an image. A higher entropy value of the fused image indicates the presence of more information and an improvement of the fused image. If L indicates the total number of grey levels and p = {p0, p1, ..., pL-1} is the probability distribution of each level, entropy is defined as

H = − Σ_{i=0}^{L−1} pi log2(pi)
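A direct reading of this definition in numpy (assuming an 8-bit grey-scale fused image, so L = 256):

```python
import numpy as np

def entropy(img, levels=256):
    """Shannon entropy H = -sum(p_i * log2(p_i)) over the occupied grey levels."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]                                   # 0 * log2(0) is taken as 0
    return -np.sum(p * np.log2(p))
```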
B. MSE
The Mean Square Error (MSE) and the Peak Signal to Noise Ratio (PSNR) are the two error metrics used to compare image quality. The MSE represents the cumulative squared error between the reconstructed (fused) image and the original image, whereas the PSNR represents a measure of the peak error. The lower the value of the MSE, the lower the error.

C. PSNR
The PSNR computes the peak signal-to-noise ratio, in decibels, between two images. This ratio is often used as a quality measurement between an original image and a reconstructed one. The higher the PSNR, the better the quality of the reconstructed (fused) image.
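The two metrics follow directly from their standard definitions; the sketch below assumes 8-bit images, so the peak value is 255:

```python
import numpy as np

def mse(ref, test):
    """Mean square error between a reference image and a test (fused) image."""
    ref, test = ref.astype(float), test.astype(float)
    return np.mean((ref - test) ** 2)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    m = mse(ref, test)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```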

VI. RESULTS AND ANALYSIS

In this section the effectiveness of the proposed scheme, the hybrid of the DWT and contourlet transforms, is evaluated. Parameters such as entropy, MSE and PSNR are used to assess the effectiveness, and the performance metrics are compared between the proposed method and the existing methods. The source images are assumed to be in perfect registration. Different source images are considered, such as CT, MRI and PET images of a brain tumour, skull, MRI_T1 and MRI_T2.

A comparison of the various parameters of the proposed and existing methods is listed in tabular form, together with the related bar charts:

Table 1: Fusion of CT and MRI of a coronal section of lungs affected with tuberculosis
Table 2: Fusion of X-ray and MRI images of the knee
Table 3: Fusion of CT and PET scans of lung cancer

These bar charts and tabulated values give the experimental results obtained on the different images discussed previously. From these it can be observed that the hybrid wavelet-contourlet transform gives better results: its PSNR and entropy values are higher than those of the other transformation techniques, and its MSE is lower than that of the other methods, which satisfies the conditions for better image quality of the fused result.
VII. CONCLUSION
In this work, a hybrid technique for image fusion using the combination of the DWT and contourlet transforms has been simulated. The simulation results obtained with the proposed method for different combinations of medical images such as CT, MRI and PET, and for various input image sizes, are compared with existing techniques.
In all cases, the DWT-Contourlet based hybrid technique is observed to outperform the other fusion techniques, providing the best quality fused image in terms of PSNR, MSE and entropy. It is therefore well suited for better medical diagnosis.

