RESEARCH ARTICLE                                                OPEN ACCESS
International Journal of Computer Science Trends and Technology (IJCST) Volume 3 Issue 2, Mar-Apr 2015
ISSN: 2347-8578        www.ijcstjournal.org        Page 208

I. INTRODUCTION

II. DIFFERENT METHODS
d) Wavelet transformation (WT): based on pyramidal
transformation. The high-resolution image is first decomposed
into a series of low-resolution images; these low-resolution
images are then replaced by the original multispectral images
of the corresponding resolution. We then apply an inverse
pyramidal transformation to obtain the new fused image. [9]
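The decompose-substitute-reconstruct scheme above can be sketched in a few lines. The following is a minimal illustration using a single level of the 2-D Haar transform in numpy (an assumption for simplicity; the methods surveyed typically use multi-level pyramids and other wavelet bases):

```python
import numpy as np

def haar_decompose(img):
    """One level of the 2-D Haar transform: returns the low-resolution
    approximation and the three detail sub-bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    low        = (a + b + c + d) / 4.0
    horizontal = (a + b - c - d) / 4.0
    vertical   = (a - b + c - d) / 4.0
    diagonal   = (a - b - c + d) / 4.0
    return low, (horizontal, vertical, diagonal)

def haar_reconstruct(low, details):
    """Exact inverse of haar_decompose."""
    h, v, d = details
    out = np.empty((low.shape[0] * 2, low.shape[1] * 2))
    out[0::2, 0::2] = low + h + v + d
    out[0::2, 1::2] = low + h - v - d
    out[1::2, 0::2] = low - h + v - d
    out[1::2, 1::2] = low - h - v + d
    return out

def wt_fuse(high_res, ms_band):
    """Substitute the multispectral band (already at the approximation's
    half resolution) for the low-resolution sub-band, then invert."""
    _, details = haar_decompose(high_res)
    return haar_reconstruct(ms_band, details)
```

The fused result keeps the detail sub-bands (spatial structure) of the high-resolution image while carrying the spectral content of the substituted band.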
III. PROPOSED METHODOLOGY
That is,

Davg(x, y) = Ip,H(x, y) - If,H(x, y)        (1)
As mentioned before, the pixel intensity in If,H is higher
(brighter) than the pixel intensity in Ip,H. Therefore, the
value Davg(x, y) in (1) is always negative, so we can lower
the intensity level of each non-WM pixel of the
high-activity region in the I-component fused image If,H:

Ilf,H(x, y) = If,H(x, y) + w * Davg(x, y)        (2)

where the percentage w is set to 50% or 70% in our
implementation.
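A minimal numpy sketch of this intensity adjustment follows. The `nonwm_mask` argument is an assumed input here (a boolean mask of non-white-matter pixels in the high-activity region, which the method obtains from its region segmentation step):

```python
import numpy as np

def lower_nonwm_intensity(i_fused, i_pet, nonwm_mask, w=0.5):
    """Sketch of Eq. (2): at each non-WM pixel, shift the fused
    I-component toward the PET I-component by a fraction w of the
    (negative) difference Davg = Ip,H - If,H."""
    d_avg = i_pet - i_fused          # negative where the fused image is brighter
    out = i_fused.copy()
    out[nonwm_mask] = i_fused[nonwm_mask] + w * d_avg[nonwm_mask]
    return out
```

With w = 0.5, a non-WM pixel whose fused intensity is 200 against a PET intensity of 100 is lowered to 150; pixels outside the mask are left unchanged.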
Now we can combine If,H with Ilf,H to form a new gray-level
image Ifused as the I-component of the final fused image. By
taking Ifused, Sp (saturation component of the PET image),
and Hp (hue component of the PET image) as the inputs for the
inverse IHS transform, we can obtain the final fused image
for the PET and MR images. The contrast of the final fused
gray-scale image is then enhanced by the adapthisteq method,
which increases the AG and reduces the SD of the image.
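adapthisteq is MATLAB's contrast-limited adaptive histogram equalization (CLAHE). As a simplified stand-in, the following numpy sketch shows the underlying idea with plain (global) histogram equalization; the adaptive variant applies the same remapping per tile with clip limits:

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Global histogram equalization of a gray-level image with integer
    values in [0, levels-1]: remap intensities through the normalized
    cumulative histogram to stretch the dynamic range."""
    img = np.asarray(img, dtype=np.int64)
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                      # normalize CDF to [0, 1]
    return np.round(cdf[img] * (levels - 1)).astype(np.int64)
```

A low-contrast image occupying only a few gray levels is spread across the full [0, 255] range, which is what raises the average gradient of the fused result.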
Table 1. Performance comparison based on spectral discrepancy (SD)

Fusion method            SD
IHS+RIM method           7.7061
Our method (w=0.5)       0.22781
IV. EXPERIMENTAL RESULTS
The PET and MR images and the fusion results of our proposed
method are shown in Fig. 2 and Fig. 3. In this study, we used
three metrics for objective assessment. First, the spectral
discrepancy (SD) was used to measure the spectral quality of
a fused image [7]; a small discrepancy means an acceptable
fusion result. Second, the average gradient (AG) was used to
measure the spatial resolution of an image [7]; the larger
the AG, the higher the spatial resolution of the fused image.
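For concreteness, the two metrics can be computed as follows. These are common definitions assumed here (mean absolute band difference for SD, mean local gradient magnitude for AG); the exact formulas in [7] may differ in normalization:

```python
import numpy as np

def spectral_discrepancy(fused, source):
    """SD: mean absolute difference between a fused band and the
    corresponding source band; smaller means better spectral fidelity."""
    return np.mean(np.abs(fused.astype(float) - source.astype(float)))

def average_gradient(img):
    """AG: mean magnitude of the local intensity gradient; larger values
    indicate higher spatial resolution / more detail."""
    img = img.astype(float)
    gx = img[:, 1:] - img[:, :-1]      # horizontal first differences
    gy = img[1:, :] - img[:-1, :]      # vertical first differences
    # combine the two differences over the common interior region
    g = np.sqrt((gx[:-1, :] ** 2 + gy[:, :-1] ** 2) / 2.0)
    return g.mean()
```

A flat image has AG = 0, while an image with a constant unit ramp in one direction has AG = sqrt(0.5), so sharper structure directly raises the score.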
Table 1 shows the performance comparison between the
IHS+RIM method and our method. As we can see in Table 1, our
method has smaller SD values than the IHS+RIM method. It is
noted that the fused image obtained from our method has
better color quality than that of the IHS+RIM method.
Table 2 shows the performance comparison based on the AG
metric. It is also obvious that our method is better than the
IHS+RIM method in terms of AG, because our method always has
higher AG values.
V. CONCLUSION
In this paper, a new fusion method for fusing PET and MR
brain images based on the wavelet transform with structural
information adjustment and spectral information patching is
proposed. Our method differs from the regular simple DWT
fusion method in that it performs wavelet decomposition at
different levels for the low- and high-activity regions,
respectively, in the PET and MR brain images. In addition, a
novel adjustment of the pixel intensity in the non-WM area of
the high-activity region in the gray-level fused image brings
more anatomical structural information into the final color
fused image. Spectral information patching in the white-matter
area of the high-activity region preserves more color
information from the PET image for that area. Experimental
results demonstrated that our fused results for PET-MRI brain
images have less color distortion and richer anatomical
structural information than those obtained from the IHS+RIM
method, both visually and quantitatively.
Table 2. Performance comparison based on average gradient (AG)

Fusion method            AG
IHS+RIM method           5.3603
Our method (w=0.5)       19.5908

Performance of our method before and after using the adapthisteq method

                                   AG         SD
Before using adapthisteq method    19.5908    0.22781
After using adapthisteq method     30.2543    0.48248
REFERENCES
[1] P.L. Lin, P.W. Huang, C.I. Chen, T.T. Tsai, and
    C.H. Chan, Brain medical image fusion based on IHS and
    Log-Gabor with suitable decomposition scale and
    orientation for different regions, IADIS Multi Conference
    on Computer Science and Information Systems 2011
    (CGVCVIP), Rome, Italy, 2011.
[2] T. Tu, S. Su, H. Shyu, and P.S. Huang, A new look at
    IHS-like image fusion methods, Information Fusion,
    vol. 2, no. 3, pp. 177-186, September 2001.
[3] A. Wang, H. Sun, and Y. Guan, The application of wavelet
    transform to multimodality medical image fusion,
    Proceedings of IEEE International Conference on
    Networking, Sensing and Control, pp. 270-274, 2006.
[4] Y. Chibani and A. Houacine, Redundant versus orthogonal
    wavelet decomposition for multisensor image fusion,
    Pattern Recognition, vol. 36, pp. 879-887, 2003.
[5] G. Pajares and J. M. D. L. Cruz, A wavelet-based image
    fusion tutorial, Pattern Recognition, vol. 37,
    pp. 1855-1872, 2004.
[6] H. Zhang, C. Chang, L. Liu, and N. Lin, A novel wavelet
    medical image fusion method, International Conference on
    Multimedia and Ubiquitous Engineering, pp. 548-553, 2007.
[7] S. Daneshvar and H. Ghassemian, MRI and PET image fusion
    by combining IHS and retina-inspired models, Information
    Fusion, vol. 11, pp. 114-123, 2010.
[8] M. Haribabu, CH. Hima Bindu, and K. Satya Prasad,
    Multimodal medical image fusion of MRI and PET using
    wavelet transform, International Conference on Advances
    in Mobile Network, Communication and Its Applications,
    2012.
[9] D.-C. He, L. Wang, and M. Amani, A new technique for
    multi-resolution image fusion, Proceedings of IEEE, 2004.