
INTERNATIONAL JOURNAL OF ADVANCED RESEARCH IN ENGINEERING AND TECHNOLOGY (IJARET)

ISSN 0976 - 6480 (Print)
ISSN 0976 - 6499 (Online)
Volume 5, Issue 11, November (2014), pp. 37-45
IAEME: www.iaeme.com/IJARET.asp
Journal Impact Factor (2014): 7.8273 (Calculated by GISI)
www.jifactor.com
ANALYSIS OF COLLABORATIVE LEARNING METHODS FOR IMAGE CONTRAST ENHANCEMENT

Santhosh Kumar K. L. (1), Jharna Majumdar (2)

(1) Assistant Professor, Dept. of CSE (PG), Nitte Meenakshi Institute of Technology, Bangalore, India
(2) Dean R&D, Professor and Head, CSE (PG), Nitte Meenakshi Institute of Technology, Bangalore, India

ABSTRACT
Image enhancement is an important area of digital image processing. The basic idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. In this paper, we propose a modified version of the collaborative learning method first proposed by Chang et al. [4]. We combine the random spatial sampling concept of the existing collaborative learning method with block-based histogram equalization using a sliding-window concept. The experimental study is done using a set of underwater and medical images. It is seen that the method proposed in this paper gives better results than the conventional collaborative learning method. To demonstrate the effectiveness of our method, we have used a set of quality metric parameters which measure the quality of enhancement.

Keywords: Image Enhancement, Histogram Equalization, Collaborative Learning, Quality Metric Parameters
I. INTRODUCTION
Contrast enhancement adjusts the brightness intensity of an image by stretching the brightness values between its dark and bright areas. The output of this process is an image that is clearer to the eye, or that assists feature extraction in a computer vision system. There are two approaches to contrast enhancement: global and local [1].
Global approaches improve image quality by redistributing the brightness intensity values of the whole image. However, this leads to a washed-out effect because the average intensity is shifted to the middle level. The problem often occurs in low-contrast images with a narrow gray-level distribution, and it also amplifies noise, an effect termed over-equalization. These issues were
overcome by varying the gray-scale transformation on each small block of the image. This is termed
as local contrast enhancement [1-2].
The traditional histogram equalization technique is based on a transformation using the
histogram of the entire image to obtain a contrast-enhanced image with a more uniform histogram.
Although histogram equalization used on the entire image enhances the contrast to a large extent to
produce a better visualization effect, it still cannot discriminate details in homogeneous regions in
the image [3].
Block-based histogram equalization methods such as adaptive histogram equalization (AHE)
and contrast limited adaptive histogram equalization (CLAHE) consider only a local window or
neighboring windows for contrast enhancement [3]. The collaborative learning (CL) enhancement
algorithm [4-5] is derived from collaborative learning in knowledge-creating communities. The use of collaborative learning helps to determine each pixel's final gray level from multiple perspectives.
In this paper, we propose a method which combines the collaborative learning method with block-based histogram equalization using a sliding-window approach. Each pixel's gray level is determined from multiple randomly selected sliding windows; the conventional histogram equalization method is applied to each of the sliding windows, and the monotonically increasing ordering is modified in order to enhance resolution with contextual information. The rest of the paper describes the quality metric parameters and presents results and analysis using underwater and medical images.
II. EXISTING METHODS
A. HISTOGRAM EQUALIZATION (HE)
Histogram equalization is a process which transforms a histogram with closely grouped values into a spread-out, flat (equalized) histogram [1-5]. This method is widely used for contrast enhancement in a variety of applications due to its simple function and effectiveness. If r is a random variable that represents one gray level of an image, the transformation of histogram equalization can be represented as

    v = T(r) = ∫ from 0 to r of pr(x) dx

where v has a uniform distribution that can be used for the construction of the output image. Typically r is set to lie within the closed interval [0, 1], where r = 0 represents black and r = 1 represents white. The original image and the contrast-enhanced image can be characterized by probability density functions (PDFs) pr(r) and pv(v), respectively. The cumulative distribution function (CDF) can be considered as monotonically increasing in the interval [0, 1], and the transformation based on the CDF of r outputs a uniform density distribution that enhances the contrast of the original image.
One drawback of histogram equalization is that the brightness of an image can change after equalization, mainly due to the flattening property of the transformation. HE often changes the brightness of the input image significantly, making some of the uniform regions of the output image become saturated with very bright or very dark intensities. Although a wide variety of approaches have been used for enhancement, histogram equalization continues to be one of the most commonly used contrast enhancement techniques, and it is the basis of many derivatives.
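To make the discrete form of this transformation concrete, the following minimal sketch (Python with NumPy, an illustrative choice not taken from the paper) equalizes an 8-bit grayscale image by mapping each gray level through the normalized cumulative histogram:

import numpy as np

def histogram_equalize(img):
    # Global histogram equalization of an 8-bit grayscale image via the discrete CDF transform.
    hist = np.bincount(img.ravel(), minlength=256)   # histogram of gray levels 0..255
    cdf = np.cumsum(hist) / img.size                 # cumulative distribution in [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)       # v = T(r), rescaled to 0..255
    return lut[img]

# Example: a synthetic low-contrast image is spread over the full intensity range
low_contrast = np.random.randint(100, 140, size=(64, 64), dtype=np.uint8)
enhanced = histogram_equalize(low_contrast)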

B. COLLABORATIVE LEARNING (CL) METHOD


The collaborative learning (CL) enhancement algorithm is derived from collaborative learning in knowledge-creating communities [4]. The use of collaborative learning helps to determine each pixel's final gray level from multiple perspectives. It does not restrict the determination of each pixel's gray level to the context of a single local window. A strategy to set each
pixel's gray level according to both local and global information is used. The contrast is enhanced by
random spatial sampling and global normalization of histogram equalized sub-images.
Algorithm: Collaborative Learning for Image Contrast Enhancement
Manual Input: NIL
1. For a given image I, the centre (CenX(i), CenY(i)) of each randomly chosen window is calculated; the width and height of the window are calculated as WinW(i) and WinH(i).
2. N such sub-images S(i) of size WinW(i) x WinH(i) are extracted and histogram equalized.
3. For every pixel (m, n) in the image I, the number of sub-images that cover (m, n) is calculated as Count(m, n).
4. The histogram-equalized gray level of each pixel (m, n) in image I is accumulated over the N sub-images as Iacc(m, n).
5. The average histogram-equalized gray level of each pixel (m, n) is then calculated as Iave(m, n).
6. The gray level of each pixel in the image Iave is normalized into the range 0-255; this gives the collaborative-learning-enhanced image.

The average histogram-equalized gray level can reflect the current viewing perspective as one individual's viewpoint in the community. Because these N sub-images (N is set to 500) have different window sizes and locations, the histogram equalization is performed with different image information, which provides different image enhancement perspectives. The N distinct individuals focus on N different portions of the image from different perspectives, and the combined result is a global view for image enhancement. Conceptually this is similar to a community of distinct human learners, each with a unique knowledge background and a unique understanding of shared information. By combining the perspectives from their unique viewpoints, they benefit from each other's understanding, and the overall group achieves a more knowledgeable and informed state.
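A rough sketch of this procedure is given below (Python/NumPy). The lower bound on the window size, the top-left parameterization of the window position (the paper specifies window centres) and the helper equalize_block are illustrative assumptions, not the authors' implementation:

import numpy as np

def equalize_block(block):
    # Histogram-equalize one 8-bit block via its normalized cumulative histogram.
    hist = np.bincount(block.ravel(), minlength=256)
    cdf = np.cumsum(hist) / block.size
    return (255.0 * cdf)[block]

def collaborative_learning(img, n_windows=500, seed=0):
    # Accumulate histogram-equalized random sub-images and average them per pixel.
    rng = np.random.default_rng(seed)
    h, w = img.shape
    acc = np.zeros((h, w))                        # accumulated equalized gray levels (Iacc)
    count = np.zeros((h, w), dtype=np.int64)      # sub-images covering each pixel (Count)
    for _ in range(n_windows):
        win_h = int(rng.integers(8, h + 1))       # random window height (assumed lower bound of 8)
        win_w = int(rng.integers(8, w + 1))       # random window width
        top = int(rng.integers(0, h - win_h + 1)) # random window position
        left = int(rng.integers(0, w - win_w + 1))
        sub = img[top:top + win_h, left:left + win_w]
        acc[top:top + win_h, left:left + win_w] += equalize_block(sub)
        count[top:top + win_h, left:left + win_w] += 1
    ave = acc / np.maximum(count, 1)              # Iave
    ave = (ave - ave.min()) / (ave.max() - ave.min() + 1e-12)
    return np.round(255 * ave).astype(np.uint8)   # normalized CL-enhanced image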
III. PROPOSED METHODOLOGY
The main aim of our approach is to provide better contrast enhancement than the existing collaborative learning method. As mentioned above, we add block-based histogram equalization with a sliding-window concept to the collaborative learning strategy. The window size is chosen randomly N times; for every window size, the window is slid over the image, histogram equalization is computed for each block, and the enhanced sub-images are accumulated in the output image. Because of the N different window sizes, the histogram equalization is performed with different image information, which provides different image enhancement perspectives. Finally, the different portions of the image, seen from different perspectives, are combined to give a global view for image enhancement.
Algorithm: Modified Collaborative Learning for Image Contrast Enhancement
Manual Input: N, the number of passes
1. For a given image I, the width and height of the i-th window are calculated randomly as WinW(i) and WinH(i):

    WinW(i) = 2 * floor(rand(1) * W / 2) + 1

    WinH(i) = 2 * floor(rand(1) * H / 2) + 1

where rand(1) is a uniform random number in [0, 1] and W, H are the image width and height.

2. Use this window to extract a sub-image, histogram equalize it and accumulate it in Iacc; slide the window horizontally and vertically to cover the entire image:

    Iacc(m, n) = Σ_{i=1}^{N} Φ(m, n, S(i)_{WinW(i)xWinH(i)}) * EquS(i)_{WinW(i)xWinH(i)}(m, n)

where EquS(i) denotes the histogram-equalized sub-image S(i) and Φ(m, n, S(i)) indicates whether pixel (m, n) is covered by sub-image S(i).

3. For every pixel (m, n) in the image I, the number of sub-images that cover (m, n) is calculated as Count(m, n).
4. Repeat steps 1 to 3 for N passes, accumulating the result of each pass in Iacc.
5. The average Iave is calculated using Iacc and Count(m, n) and normalized to the range 0-255; ICE gives the enhanced image:
    Iave(m, n) = Iacc(m, n) / Count(m, n)

    ICE(m, n) = 255 * (Iave(m, n) - min(Iave)) / (max(Iave) - min(Iave))

The proposed method is a modified form of the collaborative learning algorithm. In each of the N passes, a window size is calculated randomly. With this window size a sub-image is extracted from image I, histogram equalized and placed in an output image. The window is slid horizontally and vertically over the entire image in a single pass. In the next pass another window size is generated and the procedure is repeated. A counter keeps track of the number of times each pixel is histogram equalized. The results from each pass are accumulated in the output image. The accumulated output image is averaged with the help of the counter and normalized to the range 0 to 255.
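Under the same illustrative assumptions as the earlier sketch (Python/NumPy, with a non-overlapping slide whose last step is clamped to the image border, which the paper does not specify), the proposed pass structure looks roughly as follows:

import numpy as np

def equalize_block(block):
    # Histogram-equalize one 8-bit block via its normalized cumulative histogram.
    hist = np.bincount(block.ravel(), minlength=256)
    cdf = np.cumsum(hist) / block.size
    return (255.0 * cdf)[block]

def modified_cl(img, n_passes=10, seed=0):
    # Modified collaborative learning: each pass draws one random (odd) window size,
    # slides that window over the whole image, histogram equalizes every block and
    # accumulates the result; the accumulated image is averaged and normalized.
    rng = np.random.default_rng(seed)
    h, w = img.shape
    acc = np.zeros((h, w))
    count = np.zeros((h, w), dtype=np.int64)
    for _ in range(n_passes):
        win_h = 2 * int(rng.random() * h // 2) + 1   # WinH(i) = 2*floor(rand*H/2) + 1
        win_w = 2 * int(rng.random() * w // 2) + 1   # WinW(i) = 2*floor(rand*W/2) + 1
        tops = list(range(0, h - win_h + 1, win_h))
        lefts = list(range(0, w - win_w + 1, win_w))
        if tops[-1] + win_h < h:
            tops.append(h - win_h)                   # clamp last step so the border is covered
        if lefts[-1] + win_w < w:
            lefts.append(w - win_w)
        for top in tops:
            for left in lefts:
                sub = img[top:top + win_h, left:left + win_w]
                acc[top:top + win_h, left:left + win_w] += equalize_block(sub)
                count[top:top + win_h, left:left + win_w] += 1
    ave = acc / np.maximum(count, 1)                 # Iave = Iacc / Count
    ave = (ave - ave.min()) / (ave.max() - ave.min() + 1e-12)
    return np.round(255 * ave).astype(np.uint8)      # ICE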
The main limitation of a histogram-based sliding window is its high computational cost. For an image of size n x n, a window of size r x r and a histogram of dimension B, a straightforward method scans n² windows, scans r² pixels per window to construct the histogram, and scans B bins of the histogram to evaluate the objective function.
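A standard way to reduce this cost, not part of the proposed method but relevant to the extension suggested in the conclusion, is to update the window histogram incrementally as the window slides: removing the outgoing column and adding the incoming one costs O(r) per step instead of O(r²). A rough sketch:

import numpy as np

def sliding_histograms_along_row(img, r):
    # Maintain the histogram of an r x r window while it slides one pixel at a time
    # along the top row of the image. Each step removes the column leaving the window
    # and adds the column entering it, an O(r) update instead of an O(r^2) rebuild.
    h, w = img.shape
    hist = np.bincount(img[0:r, 0:r].ravel(), minlength=256)
    histograms = [hist.copy()]
    for left in range(1, w - r + 1):
        outgoing = img[0:r, left - 1]          # column that just left the window
        incoming = img[0:r, left + r - 1]      # column that just entered the window
        np.subtract.at(hist, outgoing, 1)
        np.add.at(hist, incoming, 1)
        histograms.append(hist.copy())
    return histograms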
IV. QUALITY METRICS FOR IMAGE ENHANCEMENT
Image quality measurement is crucial for most image processing applications. The measures used to determine the quality of an image are called quality metrics (QM). Generally speaking, an
image quality metric has three kinds of applications: First, it can be used to monitor image quality
for quality control systems. Second, it can be employed to benchmark image processing systems and
algorithms. Third, it can be embedded into an image processing system to optimize the algorithms
and the parameter settings. There are two kinds of quality measurements: Subjective and Objective
quality metrics. The principle of subjective methods is that groups of assessors (or even a single
assessor) judge the quality of an image being presented to them. The subjective quality measurement
Mean Opinion Score (MOS) has been used for many years. However, the MOS method is too
inconvenient, slow and expensive for practical usage. Objective image quality metrics, in contrast, can predict perceived image and video quality automatically [5].
In this paper we implemented eight objective quality metrics and studied their suitability for assessing the quality of image enhancement. The parameters are Entropy, Visibility, Global Contrast (GC), Spatial Frequency (SF), Fitness Measure (FM), and the Average Local Variances (ALVs): ALVS (ALV in the smooth region), ALVD (ALV in the detail region) and ALVE (ALV in the edge region). The detailed calculation of the parameters is given in Appendix-1.
Fig. 1 shows the general framework of the algorithm used for the quality parameter study. We have taken two sample images of very high and very low contrast and applied the above parameters to study their effectiveness for different algorithms. Table I shows the sample result.

Fig.1. General Framework of Single Input Single Output Algorithm Quality Study

Fig. 2. (a) Low Contrast image, (b) High Contrast image

Table-I. Comparison of Quality Metrics of Low and High Contrast Images

Quality Metric      Low Contrast Image    High Contrast Image
Entropy                   4.889                  7.143
GC                       71.848               2809.554
Visibility              165.465               1782.335
SF                        2.845                 28.156
FM                       10.994                 18.999
ALVS                      0.000                  2.958
ALVD                     11.989                  9.036
ALVE                     12.670                 13.763


It is seen from Table-I that the values of Entropy, Global Contrast, Visibility, Spatial Frequency, Fitness Measure, ALVS and ALVE increase as the image goes from low contrast to high contrast, while the value of ALVD decreases. We use these quality metric parameters to judge the effectiveness of the proposed algorithm in comparison with the conventional collaborative learning method.
V. RESULTS AND DISCUSSION
We used two underwater images (Image 1 and Image 2) and two medical test images (Image 3 and Image 4) for our experiment. We compared the proposed method with the traditional histogram equalization and collaborative learning algorithms (Fig. 3 to Fig. 6). We set N = 10 in the experiment. The main goal of histogram equalization is to stretch the dynamic range of the pixel values in such a way that light pixels turn lighter while comparatively dark pixels become even darker.
Table-II. Comparison of Quality Metric Parameters for Test Images

Image    Method                    Entropy       GC      Visibility      SF        FM      ALVS      ALVD      ALVE
Image 1  Original Image              6.852   1218.553     1167.172    14.496     8.946    0.000   100.288   148.182
         Histogram Equalization      6.771   5491.869     1697.728    37.657    12.419    1.333    89.432   158.490
         Collaborative Learning      7.872   4921.837     1814.569    39.798    21.674    5.484    78.001   189.580
         Proposed Method             7.944   5713.111     2542.647    44.753    22.009    6.089    69.844   192.411
Image 2  Original Image              6.112    676.742     1535.379    11.648     3.532    0.000    86.719   157.274
         Histogram Equalization      5.958   5152.155     1698.787    21.904     9.631    0.892    74.021   161.641
         Collaborative Learning      7.803   3693.134     2557.098    26.274    20.879    7.023    61.196   189.992
         Proposed Method             7.810   5548.870     3575.928    27.696    21.153    7.101    39.058   198.521
Image 3  Original Image              6.633   1052.361     1641.020    12.562     6.112    0.922    48.352   130.826
         Histogram Equalization      6.507   5038.691     1739.003    29.334    11.322    0.948    43.455   142.353
         Collaborative Learning      7.718   4658.443     1917.394    28.985    20.825    5.677    35.004   195.301
         Proposed Method             7.877   5205.184     2362.796    32.130    21.470    6.209    31.342   197.771
Image 4  Original Image              7.755   3790.265     1307.996    10.135     5.322    3.750    83.293   102.041
         Histogram Equalization      7.517   4966.780     1680.597    14.822     7.673    4.996    80.868   188.567
         Collaborative Learning      7.876   4391.498     1695.379    16.536    20.657    5.263    78.770   193.565
         Proposed Method             7.932   5386.849     1777.053    22.936    21.170    5.387    75.313   203.157

Further, the collaborative learning algorithm not only enhances the whole-image contrast, but also discriminates details in relatively homogeneous regions, and the proposed method is more effective still. As can be seen from Table II, the proposed method gives better quality-metric scores than the histogram equalization and collaborative learning methods.

Fig.3. Image 1: (a) Original, (b) Histogram Equalization, (c) Collaborative Learning, (d) Proposed Method


Fig.4. Image 2: (a) Original, (b) Histogram Equalization, (c) Collaborative Learning, (d) Proposed Method

Fig.5. Image 3: (a) Original, (b) Histogram Equalization, (c) Collaborative Learning, (d) Proposed Method

Fig.6. Image 4: (a) Original, (b) Histogram Equalization, (c) Collaborative Learning, (d) Proposed Method

VI. CONCLUSION
We have proposed a new image contrast enhancement method which combines collaborative learning with a histogram-based sliding window. First, we explained the importance of image contrast enhancement. Then, we described different histogram equalization methodologies. To validate our results we computed several quality metric parameters, which show that our method provides better enhancement than the other histogram-based methods, namely histogram equalization and collaborative learning. In the future, the proposed method can be extended with an efficient histogram-based sliding window to reduce the high computational cost.
VII. ACKNOWLEDGMENTS
The authors express their sincere gratitude to Prof N R Shetty, Director, Nitte Meenakshi
Institute of Technology and Dr H C Nagaraj, Principal, Nitte Meenakshi Institute of Technology for
providing encouragement, support and the infrastructure to carry out the research.

VIII. REFERENCES
[1] Siti Arpah Bt Ahmadi, Mohd Nasir Taib, Noor Elaiza A. Khalid. The Effect of Sharp Contrast-Limited Adaptive Histogram Equalization (SCLAHE) on Intra-oral Dental Radiograph Images. 2010 IEEE EMBS Conference on Biomedical Engineering & Sciences (IECBES 2010), Kuala Lumpur, Malaysia.
[2] Ramyashree N, Pavithra P, Shruthi T V, Dr. Jharna Majumdar. Enhancement of Aerial and Medical Image using Multi Resolution Pyramid. Special Issue of IJCCT, Vol. 1, Issue 2, 3, 4; International Conference ACCTA-2010.
[3] Stephen M. Pizer, E. Philip Amburn, John D. Austin, Robert Cromartie. Adaptive Histogram Equalization and Its Variations. Computer Vision, Graphics, and Image Processing, 39, 355-368 (1987).
[4] Yuchou Chang, Dah-Jye Lee, James Archibald and Yi Hong. Using Collaborative Learning for Image Contrast Enhancement. IEEE, 2008.
[5] Zhou Wang, Alan C. Bovik, Ligang Lu. Why Is Image Quality Assessment So Difficult? IBM Research Lab, 2003.
[6] Zhengmao Ye. Objective Assessment of Nonlinear Segmentation Approaches to Gray Level Underwater Images. ICGST-GVIP Journal, ISSN 1687-398X, Volume 9, Issue II, April 2009.
[7] Jia-Guu Leu. Image Contrast Enhancement Based on the Intensities of Edge Pixels. CVGIP: Graphical Models and Image Processing, Vol. 54, No. 6, November, pp. 497-506, 1992.
[8] Sonja Grgic, Mislav Grgic, Marta Mrak. Reliability of Objective Picture Quality Measures. Journal of Electrical Engineering, Vol. 55, No. 1-2, 2004, 3-10.
[9] Munteanu C and Rosa A. Gray-Scale Image Enhancement as an Automatic Process Driven by Evolution. IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, Vol. 34, No. 2, April 2004.
[10] Iyad Jafar, Hao Ying. A New Method for Image Contrast Enhancement Based on Automatic Specification of Local Histograms. IJCSNS International Journal of Computer Science and Network Security, Vol. 7, No. 7, July 2007.
[11] Xiaoyuan Su and Taghi M. Khoshgoftaar. A Survey of Collaborative Filtering Techniques. Advances in Artificial Intelligence, Volume 2009.
[12] Manikanta Arrepu. Adaptive Enhancement of Aerial & Medical Images. M.Tech Thesis, March 2010.
[13] Kapoor, A., Caicedo, J., Lischinski, D., and Kang, S. Collaborative Personalization of Image Enhancement. IJCV, 2013.
[14] Peter O'Donovan, Aseem Agarwala, Aaron Hertzmann. Collaborative Filtering of Color Aesthetics. Proceedings of the Workshop on Computational Aesthetics, CAe 2014.
[15] Manav Jaiswal, Akshay Gavandi, Kundan Srivastav and Dr. Srija Unnikrishnan. Motion-Sensed RTOS-Based Application Control Using Image Processing. International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 6, 2013, pp. 337-346, ISSN Print: 0976-6367, ISSN Online: 0976-6375.

APPENDIX-1 QUALITY METRIC PARAMETERS


A. ENTROPY
The entropy [6], also called discrete entropy, is a measure of the information content in an image and is given by

    Entropy = - Σ_{k=0}^{255} p(k) * log2(p(k))


where p(k) is the probability distribution function of the gray levels. The larger the entropy, the larger the information contained in the image and hence the more details are visible in the image.
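A direct computation of this measure (Python/NumPy, used here purely for illustration):

import numpy as np

def entropy(img):
    # Discrete entropy of an 8-bit image: -sum over k of p(k) * log2(p(k)).
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]                 # empty bins contribute nothing and would break log2
    return float(-np.sum(p * np.log2(p)))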
B. GLOBAL CONTRAST (GC)
The global contrast [7] value of an image is defined as the second central moment of its histogram divided by N, the total number of pixels in the image:

    GC = Σ_{i=0}^{L} (i - μ)² * hist(i) / N

where μ is the average intensity of the image, hist(i) is the number of pixels in the image with intensity value i, and L is the highest intensity value.
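Expressed directly from the histogram (again a Python/NumPy illustration):

import numpy as np

def global_contrast(img):
    # Second central moment of the gray-level histogram divided by the pixel count N.
    hist = np.bincount(img.ravel(), minlength=256)
    levels = np.arange(256)
    mu = img.mean()                                   # average intensity of the image
    return float(np.sum((levels - mu) ** 2 * hist) / img.size)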
C. VISIBILITY
The visibility is a measure of the clarity of the detail visible in the image:

    Visibility = Σ_{m=1}^{M} Σ_{n=1}^{N} |F(m, n) - μ| / μ^(α+1)

where F(m, n) is the pixel intensity, μ is the mean intensity value of the image and α is a visual constant which varies from 0.6 to 0.7.
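On the assumption that the formula above is the intended one (the extraction left the normalization ambiguous), a sketch:

import numpy as np

def visibility(img, alpha=0.65):
    # Sum of |F(m,n) - mu| / mu^(alpha + 1) over all pixels; alpha is the visual
    # constant (0.6-0.7). The absence of a 1/(M*N) factor is an assumption.
    mu = float(img.mean())
    return float(np.sum(np.abs(img.astype(np.float64) - mu)) / mu ** (alpha + 1))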

D. SPATIAL FREQUENCY (SF)


The SF [8] indicates the overall activity level in an image. SF is defined as follows:

    SF = sqrt(R² + C²)

    R = sqrt( (1/(M*N)) Σ_{j=1}^{M} Σ_{k=2}^{N} (x_{j,k} - x_{j,k-1})² )

    C = sqrt( (1/(M*N)) Σ_{k=1}^{N} Σ_{j=2}^{M} (x_{j,k} - x_{j-1,k})² )

where R is the row frequency, C is the column frequency, x_{j,k} denotes the pixel intensity values of the image, and M and N are the numbers of pixels in the horizontal and vertical directions.
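In code (Python/NumPy illustration), the row and column frequencies are simply the root-mean-square of the first differences along each axis:

import numpy as np

def spatial_frequency(img):
    # SF = sqrt(R^2 + C^2), with R and C the RMS of horizontal and vertical first differences.
    x = img.astype(np.float64)
    M, N = x.shape
    R2 = np.sum(np.diff(x, axis=1) ** 2) / (M * N)   # row frequency squared
    C2 = np.sum(np.diff(x, axis=0) ** 2) / (M * N)   # column frequency squared
    return float(np.sqrt(R2 + C2))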
E. FITNESS MEASURE (FM)
The fitness measure [9] depends on the entropy H(I), the number of edges n(I) and the intensity of the edges E(I). Compared to the original image, the enhanced version should have a higher intensity of edges:

    FM = ln(ln(E(I)) + e) * (n(I) / (width * height)) * H(I)
F. AVERAGE LOCAL VARIANCES (ALVs)
A set of three measures of local variance called ALVs (average local variances) has been
used to evaluate the extent of enhancement. The steps involved in computing the ALVs can be
summarized as:

For each pixel do the following:


a. Calculate the local standard deviation (LSD) in the 3x3 window centred on the pixel.
b. Classify each pixel according to the following rules
LSD < T1 -> Smooth region (calculate the average local variance in the smooth region, ALVS)
T1 <= LSD < T2 -> Detail region (calculate the average local variance in the detail region, ALVD)
T2 <= LSD -> Edge region (calculate the average local variance in the edge region, ALVE)

We have taken the default values T1 = 3 and T2 = 12.
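Putting these steps together (Python/NumPy; the clipping of the 3x3 window at the image border is an assumption):

import numpy as np

def average_local_variances(img, t1=3.0, t2=12.0):
    # Classify each pixel by the standard deviation of its 3x3 neighbourhood and average
    # the local variance inside the smooth (LSD < T1), detail (T1 <= LSD < T2) and
    # edge (LSD >= T2) regions. Windows are clipped at the image border.
    x = img.astype(np.float64)
    h, w = x.shape
    local_var = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            block = x[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            local_var[i, j] = block.var()
    lsd = np.sqrt(local_var)
    smooth = lsd < t1
    detail = (lsd >= t1) & (lsd < t2)
    edge = lsd >= t2
    alvs = float(local_var[smooth].mean()) if smooth.any() else 0.0
    alvd = float(local_var[detail].mean()) if detail.any() else 0.0
    alve = float(local_var[edge].mean()) if edge.any() else 0.0
    return alvs, alvd, alve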
