
2010 Second International Conference on Computer Engineering and Applications

Automatic Localized Deblurring in Digital Images

Cherin Joseph, Pradip Harindran Vallathol, Raj Kumar Gupta*


Physics Department
Birla Institute of Technology and Science, Pilani
Rajasthan, India
e-mail: raj@bits-pilani.ac.in

Abstract— This paper suggests a new approach for automatic selective deblurring of the blurred regions in an image. We have evaluated the degree of blurriness (β) at the edges and applied the deblurring algorithm where β is significantly large. Blurring increases the width of the edges, and the original gray level values of the pixels are lost. We have formulated the deblurring algorithm based on the edge direction and on linear interpolation, which assigns gray level values to the pixels in the selected blurred regions.

Keywords- deblurring; laplacian; interpolation

I. INTRODUCTION

Image deblurring is a long-standing problem in the field of image processing. Loss in sharpness, or blurring, of an image occurs mainly due to the limitations of the image capturing technology. While capturing the image of a scene where different objects are at different distances from the camera, the lens is focused only on objects at a specific distance from it; the other objects in the image appear blurred. It may even happen that none of the objects are in focus, resulting in the entire image being blurred. Another type of blurring is due to the relative motion of the camera with respect to the object.

Numerous algorithms have been proposed and implemented for improving the sharpness of an image. Most of these algorithms are applied globally to the entire image, which often degrades the image quality in sharp regions. This raises the need for a selective deblurring algorithm that automatically identifies only the blurred regions of an image for processing. The remaining portions of the image are left unaltered, thus avoiding the undesirable effects of applying the algorithm in the sharp regions.

In this paper, we propose an automatic localized deblurring algorithm which utilizes the response of the laplacian operator to an image to identify the blurred edge regions. Further, a linear interpolation function is used to reassign the values of pixels in a region of interest surrounding the blurred edges. The proposed algorithm was implemented in MATLAB and applied to a variety of test images.

Section II briefly outlines the principles behind a few existing deblurring algorithms. Section III contains a detailed description of the methodology involved in arriving at the proposed algorithm. Section IV contains the final algorithm. Section V discusses the results obtained on applying the algorithm, using MATLAB, on test images, and Section VI concludes.

II. RELATED WORK

A blurred image can be modeled as [1]

y(i, j) = Σ_{k,l} h(i, j; k, l) x(k, l) + η(i, j)

where x is the original image, h is the point spread function (PSF) that models the blurring, and η is an additive noise term. The equivalent expression in the frequency domain is

Y(u, v) = H(u, v) X(u, v) + N(u, v)

One of the earliest approaches to image deblurring [2] was to estimate the point spread function h (or H) and to find its inverse transformation in order to obtain x from y. Later, algorithms were developed [3,4] for finding h such that the sharp transitions of gray levels were preserved. But these algorithms reassigned values to more or less all the pixels in the image.

These methods also had the inherent disadvantage of being based on the ill-posed problem of predicting x from y, where a small additive noise in y results in a large change in the predicted image x. This ill-posedness was overcome [5] by using Tikhonov-Miller regularization. However, such regularization techniques had the drawback of relying on a single global regularization parameter, which led to ringing effects in regions with already sharp gray level transitions.

Global deblurring techniques [6] such as unsharp masking, which involves subtracting a blurred version of the image from itself, and high-boost filtering, which relies on adding the laplacian of an image to itself, were also developed long ago and are still widely in use. Other global deblurring algorithms are based on a variety of concepts such as nonparametric regression [7], wavelet transformations and neural networks [8], tonal corrections [9] and fuzzy projections onto convex sets [10]. All these global deblurring algorithms process the entire image.

Lagendijk et al. [11] proposed a spatially adaptive deblurring technique based on a weighting matrix constructed as a function of the local variance instead of a global regularization parameter. Huang et al. [1] proposed yet another weighting-matrix-based spatially adaptive regularization algorithm that uses the Abdou operator [12]. Spatially adaptive algorithms take special care in ensuring that the regions with sharp changes in gray level are shielded to a certain extent from the deblurring mechanism.
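For concreteness, the degradation model introduced at the start of this section can be simulated directly. The following MATLAB sketch blurs a test image with an assumed Gaussian PSF and adds Gaussian noise; the PSF size, its standard deviation and the noise level are illustrative assumptions, not values taken from any of the cited works.

% Sketch: simulate y = h*x + eta with an assumed Gaussian PSF (illustrative values only)
x   = double(imread('cameraman.tif'));            % sharp test image shipped with MATLAB
h   = fspecial('gaussian', 9, 2);                 % assumed 9x9 Gaussian PSF, sigma = 2
eta = 5 * randn(size(x));                         % additive zero-mean Gaussian noise
y   = imfilter(x, h, 'conv', 'replicate') + eta;  % blurred, noisy observation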

In our proposed algorithm, we automatically select the blurred regions and apply a simple linear interpolation function to achieve deblurring. The remaining parts of the image are left untouched.

III. METHODOLOGY

Initially we identified the edges in the image without discriminating between blurred and sharp edges. For this we found the gradient ∇f of the input image as the square root of the sum of squares of the x derivative fx and the y derivative fy of the input image. The 3x3 Sobel masks were used for finding fx and fy. The gradient was then passed through a threshold stage to extract the regions of prominent edges, referred to as the edge region henceforth.

The next step was to identify a region of interest on either side of the edge where interpolation is to be carried out. For any given point on the edge [say point A in Figure 2], a set of points was selected in the direction of the gradient (α) as well as in the negative direction of the gradient (π - α), where

α = tan⁻¹(fy / fx)

We proceed until we move out of the edge region at point C (along α) and point B (along π - α). Then N additional pixels are selected along the directions α and π - α beyond points C and B respectively. The points thus selected along the line joining B and C, together with the 2N pixels, N on either side, constitute the line of interpolation corresponding to point A. While selecting the N pixels, it is ensured that the line of interpolation does not extend into another edge region. The set of all lines of interpolation encompasses the region of interest. Figure 1 shows the gradient of a diagonal edge and Figure 2 its associated edge region.

Figure 1. The gradient of a diagonal edge

Figure 2. Extracted edge region along with a line of interpolation

Now a pseudo centre of the line of interpolation is chosen as the point equidistant from points B and C, and the pseudo centres of all lines of interpolation define the new sharp edge. The next step is to perform interpolation on either side of the pseudo centre. Each half of the line of interpolation separated by the pseudo centre is called a line of interest. Performing the interpolation along all lines of interest in the edge region would result in a sharp image.

It was observed that the pseudo centres of the lines of interpolation formed a discontinuous edge. This is because the edge region does not have parallel and straight boundaries on either side, as can be seen in Figure 2. To overcome this problem, the edge region was averaged using an averaging mask and subsequently passed through a threshold filter to obtain an updated edge region that yielded acceptable continuity of the centres.

There was one problem with the above-mentioned technique of detecting edges: it does not differentiate between sharp and blurred edges. For instance, consider a really sharp edge separating two regions of highly contrasting gray levels, and a second edge that is blurred but has similar gray levels on either side. The above algorithm would select the former and reject the latter. To overcome this problem, we used the laplacian (the 2nd derivative of the image, ∇²f), which gives a double edge for every edge in the original image. The distance between the double edges can be used to measure the degree of blurriness of the edge.

It was observed, however, that the double edges in the laplacian were very close to each other, making it virtually impossible to obtain a satisfactory parameter to quantify the degree of blurriness (β). The difference D of the gradient and the laplacian was therefore used to obtain really thin lines for sharp edges and relatively thicker lines for blurred edges:

D = a∇f - b∇²f, where a and b are scaling constants.

For finding D, both the gradient and the laplacian were scaled to a common range of 0-255 in order to limit the storage size to one byte. D was averaged and then an automatic threshold algorithm was applied to obtain the new edge region, which is called picdiff. Picdiff carries information about the degree of blurriness (β) of the different edges in the image.

The automatic threshold algorithm starts with an arbitrary choice of the gray level value 127 as the threshold limit. Thresholding is performed, and the average gray level value of all the pixels thresholded to 255 is calculated, as is the average gray level value of all the pixels thresholded to 0. The mean of these two averages is then used as the threshold limit for the next iteration. This process is repeated until the threshold limits in two consecutive iterations are almost equal.
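The iterative scheme just described can be written compactly. The sketch below is our MATLAB reading of it; the 0.5 gray-level stopping tolerance is an assumption, since the text only requires consecutive threshold limits to be almost equal.

function T = autoThreshold(img)
% Iterative-mean automatic threshold: start at 127, then repeatedly replace the
% threshold by the mean of the average gray levels above and below it.
% (Assumes both classes stay non-empty.)
T = 127;
while true
    above = mean(img(img >  T));   % average of the pixels thresholded to 255
    below = mean(img(img <= T));   % average of the pixels thresholded to 0
    newT  = (above + below) / 2;
    if abs(newT - T) < 0.5         % "almost equal" (tolerance assumed)
        break;
    end
    T = newT;
end
end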

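Correspondingly, the edge measures of this section can be sketched as follows: the Sobel gradient magnitude, the laplacian, scaling of both to 0-255, and the difference image D, here with a = b = 1 since the scaling constants are not given in the text. The 3x3 averaging mask, the use of the laplacian magnitude, and the sample image are additional assumptions; autoThreshold refers to the sketch above.

% Sketch: gradient magnitude, laplacian and difference image D = a*grad - b*lap (a = b = 1)
f     = double(rgb2gray(imread('peppers.png')));               % input image as double grayscale
sx    = [-1 0 1; -2 0 2; -1 0 1];                              % 3x3 Sobel mask, x derivative
sy    = sx';                                                   % 3x3 Sobel mask, y derivative
grad  = sqrt(conv2(f, sx, 'same').^2 + conv2(f, sy, 'same').^2);
lap   = abs(conv2(f, [0 1 0; 1 -4 1; 0 1 0], 'same'));         % magnitude of the laplacian response
scale = @(u) 255 * (u - min(u(:))) / (max(u(:)) - min(u(:)));  % map any image to the range 0-255
D     = scale(grad) - scale(lap);
Davg  = conv2(D, ones(3)/9, 'same');                           % averaging mask (3x3 assumed)
picdiff = 255 * (Davg > autoThreshold(Davg));                  % new edge region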
Even while working with picdiff for identifying the edge regions, we also kept track of the gradient to ensure that the lines of interpolation do not cross over sharp edges which are not in picdiff.

One of the problems encountered was that some really blurred edges were left undetected, as the scaling of the derivatives greatly emphasizes the prominent edges. To bring out those less prominent edges, the scaling of the gradient and the laplacian to the range 0-255 and the calculation of D were performed locally in cells of size MxM.

For interpolation, we first tried a polynomial function. Initially a polynomial of degree n was chosen, with n depending on the number of points in the line of interest. This resulted in deformed images containing pure black and pure white pixels. The reason could have been noise in the images leading to polynomials that yield unreasonably large and small [even negative] gray level values. We later tried second degree polynomials and also attempted averaging the image to reduce noise before applying the polynomial interpolation. Although the results improved considerably, they were still not satisfactory. Linear interpolation, however, gave good results. For some images, iterating the interpolation algorithm gave better results.

It was further observed that the algorithm selected some small edges in areas of fine detail in the image. Manipulation of pixel values in these regions resulted in considerable distortion of the image. To avoid this, the selected edge regions of small size were eliminated by specifying a cutoff of r times the size of the original image (r << 1). The following parameters were defined so that their values could be easily changed for any given input image.

GRID: The dimension of the square cells that divide the input image for the gradient, laplacian and D calculations (M).

FRAC: The minimum size of an edge region, as a fraction of the input image, that is accepted as a region to be operated on (r).

ITER: The number of times the linear interpolation algorithm is executed on the line of interest.

HALFLEN: The extent of the line selected for interpolation, in number of pixels, after the end of the edge region (N).

IV. PROPOSED ALGORITHM

The overall procedure that is finally employed is explained in detail in this section.

• Read the image and convert it to a grayscale image in case it is a color image. This image is denoted by the name f.
• Find the gradient ∇f of the input image as the square root of the sum of squares of the x derivative fx and the y derivative fy of the input image.
• Evaluate the laplacian ∇²f of the original image.
• Normalize both the laplacian and the gradient to the range 0-255 in each sub-image of size GRIDxGRID.
• Find the difference between the gradient and the laplacian to get D. Average this image and threshold it using the automatic threshold algorithm described in Section III. The resultant image is called picdiff.
• The gradient ∇f is also operated on by the automatic threshold algorithm. The resultant image is called picbound.
• The edge regions in picdiff that are smaller than a fraction FRAC of the original image are ignored.
• A backup of picdiff is created in picedgedup, and a smoothened version picsmooth of the original image is created for use in the linear interpolation function.
• For every pixel which is equal to 255 in picedgedup, perform Step 1 and Step 2.

Step 1: Find a line of interpolation
This is done according to the algorithm explained in Section III and illustrated by Figures 1 and 2. Once the line of interpolation moves out of the region where picdiff is equal to 255, it is ensured that it does not re-enter a region where either picdiff or picbound is 255. For all points traced by the line of interpolation, picedgedup is set to 0 so that we do not construct other lines of interpolation corresponding to these points.

Step 2: Perform linear interpolation separately on either side of the pseudo centre along the line of interpolation
Figure 3 shows the points in the line of interest numbered from 1 to n, where c is the pseudo centre pixel. Pixels 1 to c-1 constitute the negative side of the edge, whereas pixels c to n constitute the positive side. Pixel 1 is left as it is. Pixel 2 is replaced by the average of pixel 1 and pixel 2. A linear function is constructed using pixels 1 and 2, and this function is used to re-assign the value of pixel 3. Similarly, pixels 1, 2 and 3 are used to construct a linear function which is used to re-assign the value of pixel 4. In this way, the values of each of the pixels up to pixel c-1 are recalculated. Interpolation is done in the same way on the positive side of the edge. The linear interpolation is repeated as many times as specified by the parameter ITER (a sketch of this step is given after Figure 3).

Figure 3. Pixel numbering along line of interpolation
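Step 2 above, for one line of interest, can be sketched in MATLAB as follows. Here v holds the gray values (as doubles) ordered from the outer end (pixel 1) towards the pseudo centre, and the "linear function" is read as a degree-1 least-squares fit through all previously visited pixels; that reading, and the helper's name, are our assumptions.

function v = interpolateLineOfInterest(v, iter)
% Sketch of Step 2 for one side of the pseudo centre (assumes numel(v) >= 3).
% v(1) is left unchanged; v(2) becomes the mean of v(1) and v(2); every later
% pixel is re-assigned by extrapolating a linear fit through the pixels before it.
for k = 1:iter                               % repeat ITER times
    v(2) = (v(1) + v(2)) / 2;
    for p = 3:numel(v)
        c    = polyfit(1:p-1, v(1:p-1), 1);  % linear fit through pixels 1..p-1
        v(p) = polyval(c, p);                % extrapolated value at pixel p
    end
end
end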

V. RESULTS

The above algorithm was applied on a large number of trial images. The algorithm successfully selected blurred edges and sharpened them. The results of applying the algorithm on a set of trial images are shown in Table I. Table II lists the parameter values used with each sample image.

TABLE I. SAMPLE IMAGES AND THE CORRESPONDING DEBLURRED RESULTS
[Samples 1-4: input image alongside the corresponding deblurred image]

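As a usage illustration only, the procedure of Section IV with the Sample 1 parameters from Table II below might be driven as in the sketch; the function name, its parameter interface and the file name are hypothetical, not part of the paper.

% Hypothetical driver call for Sample 1 (function name, interface and file name assumed)
f = imread('sample1.png');
g = localizedDeblur(f, 'GRID', 500, 'FRAC', 0.001, 'ITER', 3, 'HALFLEN', 5);
imshow(uint8(g));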
TABLE II. PARAMETER VALUES

Sample No.   Sample 1   Sample 2   Sample 3   Sample 4
Size         320x240    320x240    400x282    320x240
GRID         500        50         200        500
FRAC         0.001      0.01       0.008      0.001
ITER         3          2          1          1
HALFLEN      5          4          10         4

VI. CONCLUSION AND FUTURE WORK

The results of applying the algorithm on various test images were found to be satisfactory. The algorithm successfully selects the blurred edges in the image automatically and sharpens them. Future work could aim at automating the generation of the four input parameters of the algorithm. While proposing the novel idea of selectively processing the blurred edges, and demonstrating it successfully, this paper also opens up scope for further work in reducing the computational complexity of the selection process.

REFERENCES

[1] J. Huang, J. Zhang, "Spatially Adaptive Image Deblurring Algorithm Based on Abdou Operator", Proceedings of the Fourth International Conference on Image and Graphics (ICIG), 2007, pp. 67-70.
[2] H.C. Andrews, B.R. Hunt, Digital Image Restoration, Prentice Hall, 1977.
[3] D. Geman, G. Reynolds, "Constrained restoration and the recovery of discontinuities", IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992, 14(3), pp. 767-783.
[4] M. Hurn, C. Jennison, "An extension of Geman and Reynolds' approach to constrained restoration and the recovery of discontinuities", IEEE Transactions on Pattern Analysis and Machine Intelligence, 1996, 18(6), pp. 657-662.
[5] J.G. Nagy, K.M. Palmer, "Iterative methods for image deblurring: a Matlab object-oriented approach", Numerical Algorithms, 2004, 36(11), pp. 73-93.
[6] R.A. Schowengerdt, Remote Sensing: Models and Methods for Image Processing, Academic Press, 1983.
[7] V. Katkovnik, K. Egiazarian, J. Astola, "A Spatially Adaptive Nonparametric Regression Image Deblurring", IEEE Transactions on Image Processing, 2005, 14(10), pp. 1469-1478.
[8] A.P. Yang, Z.X. Hou, C.Y. Wang, "Image Deblurring Based on Wavelet and Neural Network", Proceedings of the International Conference on Wavelet Analysis and Pattern Recognition, 2007, vol. 2, pp. 647-651.
[9] Q.R. Razligh, N. Kehtarnavaz, "Image Blur Reduction for Cell-Phone Cameras Via Adaptive Tonal Correction", Proceedings of the IEEE International Conference on Image Processing (ICIP), 2007, vol. 1, pp. 113-116.
[10] S.W. Jung, T.H. Kim, S.J. Ko, "A Novel Multiple Image Deblurring Technique Using Fuzzy Projection onto Convex Sets", IEEE Signal Processing Letters, 2009, 16(3), pp. 192-195.
[11] R.L. Lagendijk, J. Biemond, "Regularized iterative image restoration with ringing reduction", IEEE Transactions on Acoustics, Speech, and Signal Processing, 1988, 36(12), pp. 1874-1888.
[12] W.K. Pratt, Digital Image Processing, Mechanical Industry Press, 2005.

