$$C(u,v) = D(u)\,D(v)\sum_{x=0}^{N-1}\sum_{y=0}^{N-1} f(x,y)\cos\frac{(2x+1)u\pi}{2N}\cos\frac{(2y+1)v\pi}{2N} \quad \ldots(1)$$

Where u, v = 0, 1, 2, 3, ..., N-1, and

$$D(u) = \begin{cases}\sqrt{1/N}, & u = 0\\[2pt] \sqrt{2/N}, & u = 1, 2, 3, \ldots, (N-1)\end{cases}$$

with $D(v)$ defined in the same way.
The inverse 2D-DCT transformation is given by the
following equation:
$$f(x,y) = \sum_{u=0}^{N-1}\sum_{v=0}^{N-1} D(u)\,D(v)\,C(u,v)\cos\frac{(2x+1)u\pi}{2N}\cos\frac{(2y+1)v\pi}{2N} \quad \ldots(2)$$

Where x, y = 0, 1, 2, ..., (N-1).
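Equations (1) and (2) can be checked numerically by building the DCT transform matrix row by row from D(u) and the cosine terms; a minimal NumPy sketch (the 8-by-8 size matches the JPEG block size used later):

```python
import numpy as np

def dct_matrix(N=8):
    """Build the N-by-N DCT transform matrix T, where
    T[u, x] = D(u) * cos((2x + 1) * u * pi / (2N)), with
    D(0) = sqrt(1/N) and D(u) = sqrt(2/N) for u >= 1,
    as in equations (1) and (2)."""
    T = np.zeros((N, N))
    for u in range(N):
        d = np.sqrt(1.0 / N) if u == 0 else np.sqrt(2.0 / N)
        for x in range(N):
            T[u, x] = d * np.cos((2 * x + 1) * u * np.pi / (2 * N))
    return T

T = dct_matrix(8)
# T is orthogonal, so the inverse transform is simply its transpose:
print(np.allclose(T @ T.T, np.eye(8)))  # True

# Forward 2D-DCT of a block f: C = T f T^T; inverse: f = T^T C T
f = np.arange(64, dtype=float).reshape(8, 8)
C = T @ f @ T.T
print(np.allclose(T.T @ C @ T, f))  # True
```

Because T is orthogonal, equation (2) follows from equation (1) by transposing the matrix products.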
3.1.2 JPEG Process
The JPEG process is shown below:
1. The original image is divided into blocks of 4-by-4 or 8-by-8 pixels.
2. Pixel values of a black-and-white image range from 0 to 255, but the DCT is designed to work on pixel values ranging from -128 to 127. Therefore each block is shifted into that range.
3. Equation (1) is used to calculate the DCT matrix.
4. The DCT is applied to each block by multiplying the modified block by the DCT matrix on the left and by the transpose of the DCT matrix on the right.
5. Each block is then compressed through quantization.
6. The quantized matrix is then entropy encoded.
7. The compressed image is reconstructed through the reverse process.
8. The inverse DCT is used for decompression.
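The steps above (without the entropy-coding stage) can be sketched in NumPy; the 8-by-8 block is a random example, and the quantization matrix is the standard JPEG luminance table discussed in section 3.1.3:

```python
import numpy as np

def dct_matrix(N=8):
    # Transform matrix from equation (1): row u holds D(u)*cos((2x+1)u*pi/2N)
    return np.array([[(np.sqrt(1 / N) if u == 0 else np.sqrt(2 / N))
                      * np.cos((2 * x + 1) * u * np.pi / (2 * N))
                      for x in range(N)] for u in range(N)])

# Standard JPEG luminance quantization matrix (quality level 50, see 3.1.3)
Q50 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

T = dct_matrix(8)
block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)

shifted = block - 128.0              # step 2: shift pixels to [-128, 127]
coeffs = T @ shifted @ T.T           # steps 3-4: 2D-DCT via the matrix
quantized = np.round(coeffs / Q50)   # step 5: quantization
# steps 7-8: reverse process -- dequantize, inverse DCT, shift back
restored = T.T @ (quantized * Q50) @ T + 128.0
```

The only lossy step here is the rounding in quantization; with no quantization, `restored` would equal `block` up to floating-point error.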
3.1.3 Quantization
Quantization is achieved by compressing a range of values to a single quantum value. When the number of distinct symbols in a given stream is reduced, the stream becomes more compressible. A quantization matrix is used in combination with the DCT coefficient matrix to carry out the transformation. Quantization is the step where most of the compression takes place; the DCT itself does not really compress the image, since it is almost lossless. Quantization makes use of the fact that higher-frequency components are less important than lower-frequency components. It allows varying levels of image compression and quality through the selection of specific quantization matrices. Thus quality levels ranging from 1 to 100 can be selected, where 1 gives the poorest image quality and highest compression, while 100 gives the best quality and lowest compression. As a result, the quality-to-compression ratio can be selected to meet different needs. The JPEG committee suggests the matrix with quality level 50 as the standard matrix. To obtain quantization matrices with other quality levels, scalar multiplications
International Journal of EmergingTrends & Technology in Computer Science(IJETTCS)
Web Site: www.ijettcs.org Email: editor@ijettcs.org, editorijettcs@gmail.com
Volume 3, Issue 1, January February 2014 ISSN 2278-6856
Volume 3, Issue 1 January February 2014 Page 93
of the standard quantization matrix are used. Quantization is achieved by dividing the transformed image matrix by the chosen quantization matrix; the values of the resultant matrix are then rounded off. In the resultant matrix, coefficients situated near the upper left corner have lower frequencies. The human eye is more sensitive to lower frequencies, so higher frequencies are discarded and the lower frequencies are used to reconstruct the image.
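The paper only states that scalar multiples of the quality-50 matrix are used; one common scaling rule (the convention used by the IJG reference implementation, shown here as an illustrative assumption) is:

```python
import numpy as np

# Standard JPEG luminance quantization matrix (quality level 50)
Q50 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

def quality_matrix(quality):
    """Scale Q50 to another quality level in 1..100 (IJG-style rule;
    the exact scaling convention is an assumption, not from the paper)."""
    scale = 5000 / quality if quality < 50 else 200 - 2 * quality
    Q = np.floor((Q50 * scale + 50) / 100)
    # Clamp entries so every divisor stays a valid positive step size
    return np.clip(Q, 1, 255)
```

With this rule, `quality_matrix(50)` reproduces Q50 exactly, and quality 100 yields an all-ones matrix (rounding is then the only loss).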
3.1.4 Huffman Algorithm
The basic idea in Huffman coding is to assign short code
words to those input blocks with high probabilities and
long code words to those with low probabilities. A
Huffman code is designed by merging together the two
least probable characters, and repeating this process until
there is only one character remaining. A code tree is thus
generated and the Huffman code is obtained from the
labeling of the code tree.
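The merge-and-label procedure above can be sketched with a priority queue; the symbol probabilities below are example values:

```python
import heapq

def huffman_code(freqs):
    """Build Huffman code words by repeatedly merging the two least
    probable entries, as described above. freqs: {symbol: probability}."""
    # Each heap entry: (probability, tiebreak index, {symbol: codeword})
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(freqs.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, i, c2 = heapq.heappop(heap)
        # Label the two merged subtrees with leading 0 and 1 bits
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, i, merged))
    return heap[0][2]

codes = huffman_code({"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10})
# The most probable symbol gets the shortest code word:
print(len(codes["a"]) <= len(codes["d"]))  # True
```

The tiebreak index only keeps the heap comparable when probabilities are equal; it does not affect code-word lengths.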
3.1.5 Peak Signal to Noise Ratio
The formula for calculating the peak signal to noise ratio
is:
$$\mathrm{SNR}_{db} = 10\log_{10}\left(\frac{A_{signal}}{A_{noise}}\right)^{2} = 20\log_{10}\left(\frac{A_{signal}}{A_{noise}}\right)$$

Where $A_{signal}$, $A_{noise}$ = root mean square (RMS) amplitudes of the signal and the noise.
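In code, the two logarithmic forms are equivalent, since $10\log_{10}(r^2) = 20\log_{10}(r)$; a minimal sketch:

```python
import numpy as np

def snr_db(signal, noise):
    """SNR in decibels from RMS amplitudes, per the formula above."""
    a_signal = np.sqrt(np.mean(np.square(signal)))
    a_noise = np.sqrt(np.mean(np.square(noise)))
    return 20 * np.log10(a_signal / a_noise)

# A signal with 10x the RMS amplitude of the noise gives 20 dB:
print(snr_db(np.full(10, 10.0), np.full(10, 1.0)))  # 20.0
```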
4 RESULTS AND DISCUSSION
In the JPEG image compression algorithm, the input
image is divided into 4-by-4 or 8-by-8 blocks, and the
two-dimensional DCT is computed for each block. The
DCT coefficients are then quantized, coded, and
transmitted. The JPEG receiver (or JPEG file reader)
decodes the quantized DCT coefficients, computes the
inverse two-dimensional DCT of each block, and then
puts the blocks back together into a single image. For
typical images, many of the DCT coefficients have values
close to zero; these coefficients can be discarded without
seriously affecting the quality of the reconstructed image.
The example code below computes the two-dimensional
DCT of 8-by-8 blocks in the input image, discards (sets to
zero) all but 10 of the 64 DCT coefficients in each block,
and then reconstructs the image using the two-
dimensional inverse DCT of each block. The transform
matrix computation method is used.
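The example described above can be sketched in NumPy using the transform-matrix method; keeping the 10 lowest-frequency coefficients (upper-left corner, u + v <= 3) is one reasonable reading of "all but 10 of the 64", not necessarily the exact mask the original code used:

```python
import numpy as np

def dct_matrix(N=8):
    # DCT transform matrix built from equation (1)
    return np.array([[(np.sqrt(1 / N) if u == 0 else np.sqrt(2 / N))
                      * np.cos((2 * x + 1) * u * np.pi / (2 * N))
                      for x in range(N)] for u in range(N)])

T = dct_matrix(8)
# Keep only the 10 lowest-frequency coefficients (u + v <= 3)
mask = np.array([[u + v <= 3 for v in range(8)] for u in range(8)])

def process_block(block):
    C = T @ block @ T.T          # forward 2D-DCT of one 8-by-8 block
    C = C * mask                 # discard 54 of the 64 coefficients
    return T.T @ C @ T           # inverse 2D-DCT reconstructs the block

def compress(image):
    """Apply process_block block-wise; sides must be multiples of 8."""
    out = np.empty_like(image, dtype=float)
    for i in range(0, image.shape[0], 8):
        for j in range(0, image.shape[1], 8):
            out[i:i+8, j:j+8] = process_block(
                image[i:i+8, j:j+8].astype(float))
    return out
```

Discarding 54 of 64 coefficients is the "almost 85%" figure quoted in the caption of Figure 2 (54/64 ≈ 84.4%).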
Figure 2: Although there is some loss of quality in the reconstructed image, it is clearly recognizable, even though almost 85% of the DCT coefficients were discarded
Figure 3. (Left-Bottom) Lena, 8-by-8 DCT, 4-by-4 DCT
(Right-Bottom) Apple, 8-by-8 DCT, 4-by-4 DCT
TABLE 1: Compression PSNR Results for Adaptive Huffman Coding

                Image of Lena             Image of Mona Lisa
            8-by-8 DCT  4-by-4 DCT     8-by-8 DCT  4-by-4 DCT
  JPEG-1      6.70%       6.31%          2.97%       2.76%
  JPEG-2      6.24%       4.86%          2.47%       1.73%
  JPEG-3      6.14%       4.43%          2.29%       1.56%
  JPEG-4      6.04%       4.17%          2.14%       1.34%
  JPEG-5      5.19%       3.76%          1.51%       1.26%
  JPEG-6      4.47%       3.20%          1.26%       0.93%
  JPEG-7      3.79%       2.44%          1.11%       0.69%
  JPEG-8      3.02%       1.63%          0.81%       0.44%
  JPEG-9      2.25%       0.00%          0.26%       0.00%
5 CONCLUSION
DCT is used for transformation in JPEG standard. DCT
performs efficiently at medium bit rates. Disadvantage
with DCT is that only spatial correlation of the pixels
inside the single 2-D block is considered and the
correlation from the pixels of the neighboring blocks is
neglected. Blocks cannot be de-correlated at their
boundaries using DCT.
A new lossless image compression scheme based on the DCT was developed. This method caused a significant reduction in entropy, thus making it possible to achieve compression using a traditional entropy coder. The method performed well when compared to popular lossless compression methods.
AUTHORS
Deepak Kumar Jain received Bachelor of Engineering in 2010 and Master of Technology in 2012 from Jaypee University of Engineering and Technology, Guna Campus, India.

Devansh Gaur received Bachelor of Engineering from Birla Institute of Technology, Mesra, Ranchi in 2010.

Kavya Gaur received Bachelor of Technology from Rajasthan Institute of Engineering and Technology, Jaipur, Rajasthan in 2013.

Neha Jain received Bachelor of Engineering in 2008 and Master of Technology in 2012 from Jaypee University of Engineering and Technology, Guna Campus, India.