
Proceedings of 2010 IEEE 17th International Conference on Image Processing September 26-29, 2010, Hong Kong

IMAGE FUSION USING BLUR ESTIMATION

Seyfollah Soleimani1,2 , Filip Rooms1 , Wilfried Philips1 , Linda Tessens1


1 TELIN - IPI - IBBT - Ghent University - St-Pietersnieuwstraat 41, B-9000 Gent, Belgium
tel: +32 9 264 34 12, fax: +32 9 264 42 95
2 Arak University, Shahid Beheshti Street, Arak, Iran
{Seyfollah.Soleimani, Filip.Rooms, Wilfried.Philips, Linda.Tessens}@telin.ugent.be

ABSTRACT

In this paper, a new wavelet-based image fusion method is proposed. In this method, the blur levels of the edge points are estimated for every slice in the stack of images. Then, from corresponding edge points in different slices, the sharpest one is brought to the final image and the others are eliminated. The intensities of non-edge pixels are assigned from the slice of their nearest neighbor edge. The results are promising and outperform the other tested methods in most cases.

Index Terms— image fusion, blur estimation, local maxima, wavelet transform

1. INTRODUCTION

Image fusion is an important technique in image processing. It is needed when we image non-flat objects while the depth of field of the optics is not enough to sharply image the whole object at once. The solution is to combine several images, each focusing on other parts of the object. Those images should be fused to reach an image with as many sharp parts as possible.

In the literature, many methods have been proposed for image fusion; a survey is given in [1]. Some of them use variance-based fusion, real and complex wavelets [2, 3] and curvelets [4].

In existing wavelet-based methods, the fusion step is done in the transform domain by keeping the coefficients that are larger in amplitude, because the assumption is that larger coefficients come from the in-focus parts. As explained in [3, 4], after the inverse transform the fused image may contain intensities that are not present in any of the slices in the stack, so a post-processing step is needed to overcome this problem.

Here we propose a new wavelet-based method where the fusion step is done in the spatial domain, so no post-processing is needed. In this method, first the edge pixels are detected and their blur levels are estimated using Ducottet's method [5]. In addition, as an improvement to Ducottet's method, we limit the edge detection to sharp parts by setting a scale-dependent threshold. Then, from every corresponding set of edge pixels in the stack, the sharpest one is kept and the others are eliminated. Suppose the slices are f1, f2, ..., fn. To find the edge pixels corresponding to an edge pixel in slice k located at (xk, yk), we look in the neighborhood of that position in the other slices fi, i ≠ k. The radius of the neighborhood under inspection is set equal to the blur level of (xk, yk). Now, for every detected edge pixel, we know from which slice its intensity should be assigned. To assign the intensity of the non-edge pixels, we use the slice of their nearest neighbor edge pixel.

In Section 2 we present an overview of Ducottet's method. In Section 3 the changes that we have made to Ducottet's method are explained. In Section 4 the new fusion method is presented. In Section 5, the results for synthetic and real images are shown and a comparison with other methods is made. Finally, a conclusion is given in Section 6.

2. EDGE DETECTION AND BLUR ESTIMATION

In Ducottet's method, singularities of images are modeled as transitions, lines or peaks.

Transitions are modeled as the convolution of a Heaviside function (H) and a two-dimensional Gaussian (G) with variance σ² and amplitude A:

T_σ(x, y) = A H(x, y) ∗ G_σ(x, y) = (A/2) (1 + erf(x/(σ√2)))

(∗ denotes convolution). The line edge model is the convolution of a Dirac line function and the Gaussian function:

L_σ(x, y) = 2πσ² A G_σ(x, 0)

The peak edge model is the convolution of a Dirac point function with the Gaussian function:

P_σ(x, y) = 2πσ² A G_σ(x, y)

In the above equations, σ is the blur level of every edge model.
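As a quick illustration of these three models, the short Python sketch below evaluates the transition, line and peak profiles for a given amplitude A and blur level σ. These are our own helper functions, assuming NumPy and SciPy; they are not taken from [5].

import numpy as np
from scipy.special import erf

def gaussian_2d(x, y, sigma):
    # isotropic two-dimensional Gaussian with variance sigma**2
    return np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)

def transition_model(x, y, amplitude, sigma):
    # T_sigma: Heaviside step convolved with the Gaussian (closed form above)
    return 0.5 * amplitude * (1.0 + erf(x / (sigma * np.sqrt(2.0))))

def line_model(x, y, amplitude, sigma):
    # L_sigma: Dirac line convolved with the Gaussian
    return 2.0 * np.pi * sigma**2 * amplitude * gaussian_2d(x, 0.0, sigma)

def peak_model(x, y, amplitude, sigma):
    # P_sigma: Dirac point convolved with the Gaussian
    return 2.0 * np.pi * sigma**2 * amplitude * gaussian_2d(x, y, sigma)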



Ducottet's method can be summarized as follows (a short code sketch of steps 1-4 is given at the end of this summary):

1. The undecimated wavelet transform of the input image is calculated for scales ranging from 1 to a selected maximum scale, with a scale step of at most 0.5, using the following complex wavelet:

ψ_s = ψ_s¹ + i ψ_s²,

where

ψ_s¹(x, y) = s ∂G_s/∂x (x, y),   ψ_s²(x, y) = s ∂G_s/∂y (x, y)

and

G_s(x, y) = (1/(2πs²)) e^(−(x²+y²)/(2s²)).
2. In every scale of the wavelet domain, the local maxima of the moduli of the wavelet coefficients are found.

3. For every local maximum in the finest scale, its candidate corresponding local maxima are found in the next coarser scale. This procedure is repeated until the coarsest scale is reached. For every local maximum in the finest scale, the maxima function m(s) is defined as the value of the corresponding maximum at scale s. Setting the scale step to at most 0.5 guarantees that the local maximum in the next scale will not move by more than one pixel compared with the location of the corresponding local maximum in the current scale, so for finding correspondences only the 8 neighbors of every location in the coarser scale are considered.
4. Every extracted maxima function is compared with the maxima functions of the three edge models (transition, line and peak) that have been derived analytically, and the best fitting model is selected.

The maxima functions of the edge models are respectively [5]:

MT_σ(s) = (A/√(2π)) · s/√(s² + σ²)

ML_σ(s) = (A/√e) · sσ/(s² + σ²)

MP_σ(s) = (A/√e) · sσ²/(s² + σ²)^(3/2)

These maxima functions are shown in Figure 1 for σ = 4 and A = 1. When the type of the extracted maxima function has been determined, the blur level and the amplitude for that maxima function are calculated by curve fitting. For more details of this method, we refer to [5].
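To make the summary concrete, the following Python sketch strings steps 1-4 together. It is our own simplified illustration, assuming NumPy and SciPy: the local-maximum test uses a plain 3x3 maximum filter instead of the directional comparison of [5, 6], and only the transition model is fitted; the other models are handled the same way.

import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter
from scipy.optimize import curve_fit

def wavelet_response(image, s):
    # step 1: complex response s*d(G_s*f)/dx + i*s*d(G_s*f)/dy at scale s
    img = image.astype(float)
    dx = gaussian_filter(img, sigma=s, order=(0, 1))   # derivative along x (axis 1)
    dy = gaussian_filter(img, sigma=s, order=(1, 0))   # derivative along y (axis 0)
    w = s * (dx + 1j * dy)
    return np.abs(w), np.angle(w)

def local_maxima(modulus):
    # step 2, simplified: keep pixels not smaller than any of their 8 neighbours
    # (the method of [5, 6] compares along the gradient direction instead)
    return (modulus == maximum_filter(modulus, size=3)) & (modulus > 0)

def maxima_function(maxima_masks, moduli, y, x):
    # step 3: follow one finest-scale maximum through the 3x3 neighbourhood
    # of its current position in every coarser scale and sample m(s)
    values = [moduli[0][y, x]]
    for k in range(1, len(moduli)):
        y0, x0 = max(y - 1, 0), max(x - 1, 0)
        window = maxima_masks[k][y0:y + 2, x0:x + 2]
        if not window.any():
            break                                      # the chain stops at this scale
        local = np.where(window, moduli[k][y0:y + 2, x0:x + 2], -np.inf)
        dy, dx = np.unravel_index(np.argmax(local), local.shape)
        y, x = y0 + dy, x0 + dx
        values.append(moduli[k][y, x])
    return np.array(values)

def mt_model(s, amplitude, sigma):
    # analytic maxima function of the transition model, MT_sigma(s)
    return amplitude / np.sqrt(2.0 * np.pi) * s / np.sqrt(s**2 + sigma**2)

def fit_transition(scales, m_values):
    # step 4, for the transition model only: least-squares estimate of (A, sigma)
    (amplitude, sigma), _ = curve_fit(mt_model, scales, m_values, p0=(1.0, 1.0))
    return amplitude, abs(sigma)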
Fig. 1. Maxima functions for σ = 4, A = 1 (maxima function moduli versus scale, for the transition, line and peak models).

3. IMPROVEMENTS TO DUCOTTET'S METHOD

3.1. Thresholding

To decrease the computational cost of the edge detection and blur estimation steps, we decrease the number of local maxima in every scale by removing the weak edges. The wavelet used here is the derivative of a Gaussian function, so the moduli of the complex wavelet coefficients are intensity differences of adjacent pixels of the input image, and in blurred parts these differences are small. So the moduli of the wavelet coefficients in these parts are smaller than in the sharp parts.

To remove weak local maxima, we set a threshold in the local maxima finding step. We should emphasize that Ducottet does not use any threshold. He proposed this method for segmentation, and that is why he keeps all local maxima. But here, for fusion, we can eliminate weak maxima that represent blurred parts.

If we set the threshold to a fixed value for all scales, it would remove some pixels in one scale and keep their correspondences in other scales. As a result, the process of creating maxima functions fails. The threshold should be proportional to the values of the wavelet coefficients in every scale, so for every scale we set the threshold to a factor of the average of the moduli of all wavelet coefficients in that scale. This way of setting the threshold is very important, because it preserves the parent-child links across scales.

3.2. Rounding the arguments

In finding the local maxima, we round the argument of the wavelet coefficients to one of the following values:

0, ±π/4, ±π/2, ±3π/4, ±π

This decreases the time complexity of finding the local maxima even further. As explained in [5, 6], to check whether a point is a local maximum, we should compare its modulus with the moduli of two points in the image grid along the direction of the argument of that point. When the argument is not one of the above values, we should interpolate the moduli of these points. Ducottet's method uses linear interpolation between the two nearest neighbors, but by rounding the arguments to one of the above values, we in fact use nearest neighbor interpolation.
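A compact sketch of these two modifications follows. This is our own illustration; the parameter factor corresponds to the threshold factor of 1 or 2 used in the experiments of Section 5.

import numpy as np

def threshold_mask(modulus, factor=1.0):
    # keep only coefficients whose modulus exceeds factor times the mean
    # modulus of the current scale, so the threshold adapts to every scale
    return modulus > factor * modulus.mean()

def round_argument(argument):
    # round the argument to the nearest multiple of pi/4:
    # 0, +/-pi/4, +/-pi/2, +/-3pi/4, +/-pi
    return np.round(argument / (np.pi / 4.0)) * (np.pi / 4.0)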

3.3. Edge Localization

If we connect the pixels of a given maxima function across scales, we obtain a three-dimensional curve, because they are not at the same locations in different scales. A difficulty here is at which location we should report an edge. In our work, we have selected the third scale for edge localization for all maxima functions, because the finer scales are more sensitive to noise and in coarser scales edges are affected by adjacent edges. Another problem is that the maxima functions may not include a coefficient at the coarser scales, because the process of finding correspondences may stop at these scales. Since we only take into account maxima functions with at least three values, the maxima functions always have an edge pixel at the third scale.
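A minimal sketch of this selection rule, assuming each maxima function has been stored together with the chain of positions it visited (a data layout of our own, not from the paper):

def localize_edges(chains, min_values=3, report_index=2):
    # keep only maxima functions with at least three values and report the
    # edge pixel at the position reached at the third scale (index 2)
    edges = []
    for positions, values in chains:
        if len(values) >= min_values:
            edges.append(positions[report_index])
    return edges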

4. PROPOSED FUSION METHOD

In image fusion, we have a stack of slices in which some parts are sharp, and we want to combine all slices into one fused image. We apply edge detection and blur estimation to every slice in the stack. Then, for every slice, we have the edge locations and their blur levels. Now we combine all this information to compose the fused image. A big difficulty that arises here is that corresponding lines and peaks in different slices are reported at different locations. This problem is illustrated in Figure 2. Figure 2(a) shows the profiles of two lines (one as a dashed line and the other as a solid line) with different blur levels, and Figure 2(b) shows the positions and amplitudes of the local maxima of their wavelet coefficients (edge pixels). For every line, one pair of peaks (local maxima) is reported: one pair represents the sharper line and the other one represents the blurrier line. The sharper pair should come into the fused image and the blurrier pair should be eliminated. Because the locations of corresponding peaks are different, we cannot just compare the blur levels of edge pixels at the same positions in different slices and select the sharpest one. The reason for this displacement is that the wavelet used (the gradient of the Gaussian) is an odd function, so the locations of corresponding lines and peaks will not be the same. Figure 2 suggests one possible solution.

Fig. 2. (a) Profiles of two blurred lines with blur levels (σ) of 2 and 3. (b) Modulus maxima of the wavelet coefficients of (a).

The lines are Gaussians at the same location with different blur levels. In our models, the blur level is the σ of the Gaussians. Suppose that the variances of the lines are σ1² and σ2², that σ1 > σ2, and that the centers of the Gaussians are at 0. We know that the peaks (local maxima here) of the moduli of the first derivatives of the Gaussians are located at x = ±σ1 and x = ±σ2, so the distance between corresponding peaks is σ1 − σ2. Since the estimated blur level is equal to or larger than 0, σ2 is at least 0 and the distance between two corresponding peaks is at most σ1. So if there is a sharper local maximum, it should be within a neighborhood of size σ1.

To solve the problem of displacement of corresponding local maxima, we look for every reported local maximum within a neighborhood of the size of its blur level in all other slices, and if there is one with the same argument and less blur, we can infer that they are from the same feature and only one of them should be kept, so we eliminate the more blurred one.
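A brute-force sketch of this elimination step is given below. It assumes every slice's edge pixels are stored as dictionaries with the fields 'x', 'y', 'sigma' (blur level) and 'arg' (rounded argument); this data layout is our own choice, not part of the paper.

def eliminate_blurred(edges):
    # edges[k] is the list of edge-pixel dictionaries detected in slice k;
    # an edge pixel is discarded when another slice contains, within a radius
    # equal to its own blur level, an edge with the same rounded argument
    # and a smaller blur level
    kept = []
    for k, slice_edges in enumerate(edges):
        for e in slice_edges:
            sharper_elsewhere = False
            for j, other_edges in enumerate(edges):
                if j == k:
                    continue
                for o in other_edges:
                    close = (o['x'] - e['x'])**2 + (o['y'] - e['y'])**2 <= e['sigma']**2
                    if close and o['arg'] == e['arg'] and o['sigma'] < e['sigma']:
                        sharper_elsewhere = True
                        break
                if sharper_elsewhere:
                    break
            if not sharper_elsewhere:
                kept.append((k, e))        # remember the slice index and the edge
    return kept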
After this elimination step, we inspect all slices of the stack for every location. If an edge pixel has been reported at a location in only one slice, we assign the label of that slice to that location in the pre-final image; otherwise, we do not assign that location. Now we have a labeled image that shows, for some locations, from which slice the intensity should be assigned. For the locations that have not been labeled yet, we assign the label of the closest labeled pixel.
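A sketch of this labeling and filling step for a grayscale stack is shown below; it propagates the label of the nearest labeled pixel with a Euclidean distance transform (scipy.ndimage.distance_transform_edt) and reuses the kept-edge list of the previous sketch. The names are our own.

import numpy as np
from scipy.ndimage import distance_transform_edt

def fuse(stack, kept_edges):
    # stack: (n_slices, height, width); kept_edges: list of (slice_index, edge)
    n, h, w = stack.shape
    labels = np.full((h, w), -1, dtype=int)            # -1 means "not labeled yet"
    for k, e in kept_edges:
        labels[e['y'], e['x']] = k                     # pre-final label image
    # for every pixel, the coordinates of the nearest labeled pixel
    _, (iy, ix) = distance_transform_edt(labels < 0, return_indices=True)
    labels = labels[iy, ix]                            # fill the unlabeled locations
    rows, cols = np.indices((h, w))
    return stack[labels, rows, cols]                   # pick intensities slice-wise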

5. RESULTS

5.1. Application to synthetic images

We tested this method on some synthetic images and compared the results with several existing methods. The synthetic images are the same as the ones used in [4], which is one of the reference methods we compare with. From every ground truth image, three partially blurred slices are created; every location is sharp in only one slice. One test image and the stack derived from it are shown in Figure 3. We then apply the method to the created stack, and the resulting image is compared with the initial ground truth image using the PSNR (peak signal-to-noise ratio). The resulting PSNRs for the different methods are shown in Table 1 (all numbers are in dB). The results for the new method have been calculated for a scale step of 0.1 and a maximum scale of 4. The threshold factor is set to 1 or 2 and the best result in each case is shown. Our method outperforms the others for 3 stacks. For the Clouds stack, our method outperforms the curvelet method but not the variance method. The proposed method performs worse than the curvelet method for Algae and Eggs. This can be explained by the failure of the thresholding to sufficiently remove blurred parts, because in these two images the smoothness of the blurred and the sharp parts is very similar.

Fig. 3. Fabric: ground truth image (a) and the slices derived from it (b-d).

Table 1. Results of the different methods (PSNR in dB).

Stack   | Variance | Complex Db6 | Complex Db6 with checks | Curvelet | New Method
Leaves  | 28.75    | 39.20       | 34.97                   | 41.27    | 45.35
Metal   | 32.50    | 41.24       | 36.62                   | 44.18    | 45.43
Fabric  | 41.47    | 41.25       | 35.50                   | 43.14    | 47.15
Eggs    | 47.76    | 59.80       | 59.73                   | 65.82    | 46.16
Algae   | 53.34    | 62.17       | 58.77                   | 63.92    | 52.98
Clouds  | 54.79    | 49.26       | 49.21                   | 52.73    | 53.40
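The comparison measure can be computed as follows (a standard PSNR helper for 8-bit images; this is our own code, not that of [4]):

import numpy as np

def psnr(reference, fused, peak=255.0):
    # peak signal-to-noise ratio in dB between the ground truth and the fused image
    mse = np.mean((reference.astype(float) - fused.astype(float))**2)
    return 10.0 * np.log10(peak**2 / mse)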
5.2. Application to real images

As a real-world data test, we applied this method to a stack of color microscopic images of Peyer plaques from the intestine of a mouse¹. The stack size is 15. The fused image obtained with the curvelet method is shown in Figure 4(a) and the one obtained with our method in Figure 4(b). We can clearly see that in some parts our proposed method works better. One of these parts is highlighted in the two output images. These parts, together with the most suitable slice in the stack, are enlarged and shown in Figure 5. The curvelet method failed in this part, but our proposed method worked well.

¹ The images are courtesy of Jelena Mitic, Laboratoire d'Optique Biomédicale at EPF Lausanne, Zeiss and MIM at ISREC Lausanne.

Fig. 4. Fused images: (a) curvelet method, (b) proposed method.

Fig. 5. Enlarged corresponding parts: (a) real image, (b) curvelet result, (c) our result. (To enhance the contrast, we applied the color Stretch Contrast tool of the GIMP editor to these images.)

6. CONCLUSION

The new method outperforms the other methods in half of the test cases and has the second best result in one case. For the other two cases, the results are still acceptable. One advantage of the new method is that it is based only on the edge pixels of the images, while the other methods are based on all the information in the images. Another advantage is that the fusion step is done in the spatial domain, so no post-processing is needed to check whether the fused image contains intensities that do not occur in any slice of the stack.

7. REFERENCES

[1] A.G. Valdecasas, D. Marshall, J.M. Becerra, and J.J. Terrero, "On the extended depth of focus algorithms for bright field microscopy," Micron, vol. 32, pp. 559–569, 2001.

[2] H. Li, B.S. Manjunath, and S.K. Mitra, "Multisensor image fusion using the wavelet transform," Graphical Models and Image Processing, vol. 57, no. 3, pp. 235–245, 1995.

[3] B. Forster, D. Van De Ville, J. Berent, D. Sage, and M. Unser, "Complex wavelets for extended depth-of-field: A new method for the fusion of multichannel microscopy images," Microscopy Research and Technique, vol. 65, no. 1-2, pp. 33–42, 2004.

[4] L. Tessens, A. Ledda, A. Pizurica, and W. Philips, "Extending the depth of field in microscopy through curvelet-based frequency-adaptive image fusion," in Proc. of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Honolulu, Hawaii, USA, 2007, pp. 861–864.

[5] C. Ducottet, T. Fournel, and C. Barat, "Scale-adaptive detection and local characterization of edges based on wavelet transform," Signal Processing, vol. 84, pp. 2115–2137, 2004.

[6] C.L. Tu and W.L. Hwang, "Analysis of singularities from modulus maxima of complex wavelets," IEEE Transactions on Information Theory, vol. 51, no. 3, pp. 1049–1062, 2005.
