as red curves in Fig. 3. The Gaussian component with larger [...]

[Figure 3 residue: two panels of fitted log-probability curves; legend: Gaussian component 1, Gaussian component 2, Mixture of Gaussian.]

with significant structure contrast but relatively small gradi- [...]
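The legend entries above (Gaussian component 1, Gaussian component 2, Mixture of Gaussian) describe a two-component Gaussian mixture fit. Below is a minimal sketch of such a fit, assuming scikit-learn's `GaussianMixture`; the 1-D samples and their means and variances are synthetic stand-ins chosen for illustration, not the paper's actual per-patch statistics.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic 1-D data drawn from two Gaussians (stand-ins for the
# per-patch feature values being modeled; the parameters here are
# arbitrary, chosen only so the two components are well separated).
rng = np.random.default_rng(0)
samples = np.concatenate([
    rng.normal(-2.0, 0.5, 2000),   # component 1: narrower
    rng.normal(2.0, 1.0, 2000),    # component 2: wider
]).reshape(-1, 1)

# Fit the two-component mixture ("Mixture of Gaussian" in the legend).
gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)

# Report each component's weight, mean, and standard deviation;
# the text singles out the component with the larger variance.
for k in np.argsort(gmm.means_[:, 0]):
    sigma = float(np.sqrt(gmm.covariances_[k, 0, 0]))
    print(f"weight={gmm.weights_[k]:.2f} "
          f"mean={gmm.means_[k, 0]:+.2f} sigma={sigma:.2f}")
```

Evaluating each fitted component's density alongside the full mixture density reproduces the three curves the legend names.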
rithm to rank the confidence of blur for an image. In [...] segmentation. [...] saturation [...]

unblurred images from photo sharing websites, such as Flickr.com and PBase.com, to form our dataset. In each category, half of the images are used for training and the other half for testing. All the images are manually segmented into square patches. The size of each patch ranges from 50 × 50 to 500 × 500 pixels, which occupies 5% ∼ 20% of the size of the original images. Examples of images and patches in our datasets are shown in Fig. 6. Then we label each patch as one of the following three types: "sharp", "motion blur", or "focal blur". In total, we generated 290 focal blur patches, 217 motion blur patches, and 516 unblurred patches from the training set and 223, 139, and 271 patches, respectively, from the testing set.

We evaluate the performance of our classifier using the Precision-Recall curve. Let N be the number of patches to be classified, f_i be the label for patch i, and a_i be the ground-truth label for patch i. We define the measurements as

Recall = |{i; f_i = a_i & a_i = true}| / |{i; a_i = true}|,

Precision = |{i; f_i = a_i & a_i = true}| / |{i; f_i = true}|,

Accuracy = |{i; f_i = a_i}| / N.

In the first step of our algorithm, patches labeled as "blur" are considered as true instances in evaluating the blur/nonblur classifier whereas, in the second step, patches labeled as "motion" are considered as true in evaluating the motion/focal blur classifier.

For evaluating individual features in blur detection, we plot precision-recall curves to show their discriminatory power. Similar to the definition of η_a, we set η_i = P(q_i | Blur) / P(q_i | Sharp), where i = {1, 2, 3}, and show in Fig. 5(a) the precision-recall curves for each blur metric η_i and the combined metric η_a in classifying [...]

Figure 5. Precision-Recall curves of our classification. (a) Blur/nonblur classification results. The curve corresponding to the classifier using η_a is shown in red, whereas the curves for classifiers using η_1, η_2, and η_3 are shown in green, blue, and purple, respectively. (b) Precision-Recall curve for motion/focal blur classification.

Classification Tasks    Overall Accuracy Rate    Maximum Accuracy Rate
Blur / Nonblur          63.78%                   76.98%
Motion / Focal blur     65.45%                   78.84%

Table 1. Accuracy rates on the testing dataset.

In the second step, we test motion/focal blur classification using η_b. Fig. 5(b) shows the precision-recall curve. As listed in Table 1, the maximum accuracy rate is 78.84% when η_b = 1.3. This threshold is also used later for blur segmentation.

Because of the similarity of the blurred and low-contrast regions in natural images, our classification results inevitably contain errors. We examined incorrectly classified patches and found that latent ambiguous texture or structure in patches is the main cause of errors. For example, Fig. 7(a) and (b) show patches wrongly classified as blurred regions due to shadow or low-contrast pixels. The patch in Fig. 7(c) is mistakenly classified as motion blur because of its strong directional structures.

The patch-based blur detection and classification can serve as a foundation for ranking the degree of image blur. In experiments, we collect a set of flower pictures searched from Google and Flickr. Each image is segmented into [...]

Figure 6. Selected examples of images (first row) and manually labeled patches (second row) from our datasets. Patch labels: sharp, motion blur, focal blur.

Figure 9. Partial blur recognition for flower images. Images are shown in ascending order in terms of the size of blurry regions.
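The evaluation pipeline in this section, thresholding a likelihood ratio η = P(q | Blur)/P(q | Sharp) and scoring the predicted labels with the Recall, Precision, and Accuracy measurements defined above, can be sketched as follows. The η scores, the class proportions, and the threshold of 1.0 are synthetic placeholders, not the paper's trained quantities.

```python
import numpy as np

def precision_recall_accuracy(f, a):
    """f: predicted labels, a: ground-truth labels (boolean arrays).
    Follows the set definitions in the text: precision counts correct
    positives among predicted positives, recall among ground-truth
    positives."""
    f = np.asarray(f, bool)
    a = np.asarray(a, bool)
    tp = np.sum(f & a)                  # |{i; f_i = a_i and a_i = true}|
    precision = tp / max(np.sum(f), 1)  # denominator |{i; f_i = true}|
    recall = tp / max(np.sum(a), 1)     # denominator |{i; a_i = true}|
    accuracy = np.mean(f == a)          # |{i; f_i = a_i}| / N
    return precision, recall, accuracy

# Hypothetical likelihood-ratio scores eta = P(q|Blur)/P(q|Sharp):
# blurred patches tend to score above 1, sharp patches below.
rng = np.random.default_rng(1)
truth = rng.random(500) < 0.4                       # 40% blurred
eta = np.where(truth,
               rng.lognormal(0.8, 0.5, 500),        # blurred scores
               rng.lognormal(-0.8, 0.5, 500))       # sharp scores

pred = eta > 1.0                                    # threshold the ratio
p, r, acc = precision_recall_accuracy(pred, truth)
print(f"precision={p:.2f} recall={r:.2f} accuracy={acc:.2f}")
```

Sweeping the threshold instead of fixing it at 1.0 and recording (recall, precision) pairs traces out a precision-recall curve of the kind shown in Fig. 5.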
6. Conclusion and Future Work

In this paper, we have proposed a partial-blur image detection and analysis framework for automatically classifying whether one image contains blurred regions and what types of blur occur, without needing to perform image deblurring. Several blur features, measuring image color, gradient, and spectrum information, are utilized in a parameter training process in order to robustly classify blurred images. Extensive experiments show that our method works satisfactorily on challenging image data and can be applied to partial-blur image detection and blur segmentation.

Our method, in principle, provides a foundation for solving many blur-oriented and region-based computer vision problems, such as content-based image retrieval, image enhancement, high-level image segmentation, and object extraction.