
International Journal of Computer Science Engineering Techniques – Volume 2, Issue 2, Jan - Feb 2017

RESEARCH ARTICLE – OPEN ACCESS

An Improved Color Image Compression Approach based on Run Length Encoding
Uqba bn Naffa
Computer Science Department, University of Mustansiriyah, Baghdad, Iraq

Abstract:
Image compression is a central requirement of modern digital image processing, as it allows large digital images to be stored and transmitted in a much smaller form. For this reason, compression algorithms are needed that deliver optimum compression performance without losing the visual quality of the image. This paper presents an improved color image compression approach. The proposed approach divides the color image into its RGB bands, and each band is partitioned into non-overlapping blocks according to specific criteria; a set of operations is then applied to each block. A particular implementation of this approach was tested, and its performance was quantified using the peak signal-to-noise ratio and the structural similarity index. Numerical results indicate a general improvement in the visual quality of coded color images.

Keywords — Image compression, Color image, DCT transform, Image coding.

I. INTRODUCTION
Compression is any method that reduces an amount of data to a smaller quantity. Existing algorithms can be categorized as lossless or lossy techniques. In the first case, the quality is totally preserved and the concern is reducing storage or transmission requirements. In contrast, lossy algorithms seek the best possible rate-distortion trade-off: the quality of the decompressed data must stay within the tolerable bounds defined by each specific application while reaching the maximum possible compression ratio [1].
In recent years there has been an astronomical increase in the usage of computers for a variety of tasks. One of the most common usages has been the storage, manipulation, and transfer of digital images. The files that comprise these images, however, can be quite large and can quickly take up precious memory space on the computer's hard drive. In multimedia applications, most of the images are in color. Color images contain a lot of data redundancy and require a large amount of storage space. Image compression refers to the reduction of the size of the data that images contain. Generally, image compression schemes such as those in [2, 3] exploit certain data redundancies to convert the image to a smaller form.
For compression, a luminance-chrominance representation is considered superior to the RGB representation. Therefore, RGB images are transformed to one of the luminance-chrominance models, the compression process is performed, and the result is transformed back to the RGB model, because displays are most often driven directly with RGB output. The luminance component represents the intensity of the image and looks like a grayscale version of it; the chrominance components represent the color information in the image [4].
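One common choice of luminance-chrominance model (the paper does not name a specific one) is the ITU-R BT.601 YCbCr transform:

$$Y = 0.299R + 0.587G + 0.114B, \qquad C_b = 0.564\,(B - Y), \qquad C_r = 0.713\,(R - Y)$$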

Douak et al. [5] have proposed a new algorithm for color image compression. Mohamed et al. [6] proposed a hybrid image compression method in which the background of the image is compressed using lossy compression and the rest of the image is compressed using lossless compression. In this hybrid compression of color images with a large trivial background by histogram segmentation, the input color image is subjected to binary segmentation using its histogram in order to detect the background. The color image is compressed by a standard lossy compression method, and the difference between the lossy image and the original image, called the residue, is computed. The residue in the background area is dropped, and the rest of the area is compressed by a standard lossless compression method. This method gives a lower bit rate than the lossless compression methods and is well suited to any color image with a large trivial background.
II. ARCHITECTURE FOR THE COMPRESSION TECHNIQUE STANDARD
The compression technique defines four "modes of operation". For each mode, one or more distinct codecs are specified. Codecs within a mode differ according to the precision of the source image samples they can handle or the entropy coding method they use. Although the word codec (encoder/decoder) is used frequently in this project, there is no requirement that implementations include both an encoder and a decoder; many applications will have systems or devices which require only one or the other [7].
The four modes of operation and their various codecs have resulted from JPEG's goal of being generic and from the diversity of image formats across applications. The multiple pieces can give the impression of undesirable complexity, but they should actually be regarded as a comprehensive "toolkit" which can span a wide range of continuous-tone image applications. It is unlikely that many implementations will utilize every tool; indeed, most of the early implementations now on the market (even before final ISO approval) have implemented only the Baseline sequential codec.
Figures (1) and (2) show the key processing steps which are the heart of the DCT-based modes of operation. These figures illustrate the special case of single-component (grayscale) image compression [8].

Fig. 1 DCT-Based Encoder Processing Steps
Fig. 2 DCT-Based Decoder Processing Steps

A. The 8x8 FDCT and IDCT
At the input to the encoder, source image samples are grouped into 8x8 blocks, processed from left to right and top to bottom, and input to the Forward DCT (FDCT). At the output from the decoder, the Inverse DCT (IDCT) outputs 8x8 sample blocks to form the reconstructed image. The following equations are the idealized mathematical definitions of the 8x8 FDCT and 8x8 IDCT [8]:

$$F(u,v) = \frac{1}{4}\,C(u)\,C(v)\sum_{x=0}^{7}\sum_{y=0}^{7} f(x,y)\,\cos\frac{(2x+1)u\pi}{16}\,\cos\frac{(2y+1)v\pi}{16}$$

$$f(x,y) = \frac{1}{4}\sum_{u=0}^{7}\sum_{v=0}^{7} C(u)\,C(v)\,F(u,v)\,\cos\frac{(2x+1)u\pi}{16}\,\cos\frac{(2y+1)v\pi}{16}$$

where $C(u) = 1/\sqrt{2}$ for $u = 0$ and $C(u) = 1$ otherwise (and likewise for $C(v)$).

In short, the JPEG compression algorithm works in three steps: the image is first transformed, then the coefficients are quantized, and finally the quantized coefficients are encoded with a variable-length lossless code.
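As a quick illustration (not part of the original paper), the FDCT/IDCT round trip can be checked in Python; SciPy's orthonormal 2-D DCT-II coincides with the definitions above:

```python
import numpy as np
from scipy.fft import dctn, idctn

# One 8x8 block of level-shifted 8-bit samples (values in [-128, 127]).
rng = np.random.default_rng(0)
block = rng.integers(-128, 128, size=(8, 8)).astype(float)

coeffs = dctn(block, norm='ortho')       # FDCT: 64 DCT coefficients
restored = idctn(coeffs, norm='ortho')   # IDCT: reconstructed samples

# Without quantization the transform pair is (numerically) lossless.
assert np.allclose(block, restored)
```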
B. Quantization
After output from the FDCT, each of the 64 DCT coefficients is uniformly quantized in conjunction with a 64-element quantization table, which must be specified by the application (or user) as an input to the encoder. Each element can be any integer value from 1 to 255 and specifies the step size of the quantizer for its corresponding DCT coefficient. The purpose of quantization is to achieve further compression by representing the DCT coefficients with no greater precision than is necessary to achieve the desired image quality. Stated another way, the goal of this processing step is to discard information which is not visually significant. Quantization is a many-to-one mapping, and therefore is fundamentally lossy; it is the principal source of lossiness in DCT-based encoders.
Quantization is defined as division of each DCT coefficient by its corresponding quantizer step size, followed by rounding to the nearest integer [8]:

$$F^{Q}(u,v) = \operatorname{round}\!\left(\frac{F(u,v)}{Q(u,v)}\right)$$

This output value is normalized by the quantizer step size. Dequantization is the inverse function, which in this case simply means that the normalization is removed by multiplying by the step size, returning the result to a representation appropriate for input to the IDCT:

$$F'(u,v) = F^{Q}(u,v)\times Q(u,v)$$
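A minimal NumPy sketch of these two operations (the flat table here is purely illustrative; a real application supplies its own 8x8 table):

```python
import numpy as np

Q = np.full((8, 8), 16)   # illustrative quantization table (steps may be 1..255)

def quantize(F, Q):
    # Divide each coefficient by its step size and round to the nearest integer.
    return np.rint(F / Q).astype(int)

def dequantize(Fq, Q):
    # Undo the normalization; the result is ready for input to the IDCT.
    return Fq * Q
```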
The quantized values are then reordered according to the zigzag pattern shown in Figures (3) and (4).

Fig. 3 The Quantization Table.
Fig. 4 Zigzag ordering of the coefficients of the 8x8 blocks in the JPEG algorithm.

To take advantage of the slowly varying nature of most natural images, the DC coefficient is predicted from the DC coefficient of the previous block and differentially encoded with a variable-length code. The rest of the coefficients (referred to as AC coefficients) are run-length coded. JPEG produces relatively good performance at medium to high coding rates, but suffers from blocking artifacts at lower rates because of the block-based encoding. At low rates, in order to meet the target coding rate, the quantization table chosen is so coarse that only the DC coefficient is encoded, without the higher-frequency details. At the decoder this creates visually noticeable discontinuities at the block boundaries [7].
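The zigzag order itself is easy to enumerate; the following small Python sketch (ours, not the paper's) lists the coordinates of an 8x8 block in that order:

```python
def zigzag_indices(n=8):
    # JPEG zigzag scan: walk the anti-diagonals (constant r + c),
    # alternating the traversal direction on each diagonal.
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

# zigzag_indices()[:6] == [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```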
C. Entropy Coding
The final DCT-based encoder processing step is entropy coding. This step achieves additional compression losslessly by encoding the quantized DCT coefficients more compactly based on their statistical characteristics. The JPEG proposal specifies two entropy coding methods: Huffman coding [7] and run-length coding [5].

D. Run-length encoding
Run-length encoding is a data compression algorithm that encodes long runs of repeating items by sending only one item from the run together with a counter showing how many times that item is repeated. This technique is of little use for compressing natural-language texts, because they do not contain long runs of repeating elements. On the other hand, RLE is useful for image compression, because images tend to have long runs of pixels with identical color: consecutive pixels can be compressed by replacing each run with one pixel from it and a counter showing how many items the run contains [9].
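For instance, a minimal run-length coder over a row of pixel values might look like this (a generic sketch, not the paper's implementation):

```python
def rle_encode(seq):
    # Collapse each run of identical items into a (value, count) pair.
    pairs = []
    for item in seq:
        if pairs and pairs[-1][0] == item:
            pairs[-1][1] += 1
        else:
            pairs.append([item, 1])
    return [tuple(p) for p in pairs]

def rle_decode(pairs):
    # Expand every (value, count) pair back into its run.
    return [value for value, count in pairs for _ in range(count)]

row = [255, 255, 255, 255, 0, 0, 7]
assert rle_encode(row) == [(255, 4), (0, 2), (7, 1)]
assert rle_decode(rle_encode(row)) == row
```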

III. THE PROPOSED IMAGE COMPRESSION APPROACH
In order to obtain the best possible compression ratio (CR), the discrete cosine transform (DCT) has been widely used in image and video coding systems, where a zigzag scan is usually employed to organize the DCT coefficients; this is the last stage of processing in a transform coder before the data are fed to the final entropy encoding stage. The basic idea of the new approach is to divide the image into 8×8 blocks and then extract the consecutive non-zero coefficients that precede the zero coefficients in each block. The decompression process can be performed systematically, since the number of zero coefficients in each block can be computed by subtracting the number of non-zero coefficients from 64. The block diagram of the proposed image compression approach is shown in Fig. 5.

Fig. 5 The block diagram of the proposed image compression approach.
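A minimal sketch of this block coding step, assuming (as the text implies) that the non-zero coefficients form a single leading run in the zigzag-ordered block; the function names are ours, not the paper's:

```python
import numpy as np

def encode_block(zz):
    # zz: the 64 zigzag-ordered quantized coefficients of one 8x8 block.
    # Keep only the leading run of non-zero coefficients; the trailing
    # zeros are implied and need not be stored.
    k = 0
    while k < 64 and zz[k] != 0:
        k += 1
    return list(zz[:k])

def decode_block(coeffs):
    # The number of zero coefficients is 64 minus the number of stored
    # non-zero coefficients, so the block is rebuilt by zero-padding.
    return np.array(coeffs + [0] * (64 - len(coeffs)))
```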

IV. EXPERIMENTAL RESULTS
For the implementation and evaluation of the approach we developed a Visual Basic 6 program and performed the testing on the standard color test images Lena, Fruit, and Airplane, each of size 256×256 (see Fig. 6). We analyze the results obtained with the first, the second, and finally the proposed algorithm; all images and tables from the experiments are given. Standard measures for image compression [9], namely the compression ratio (CR), the peak signal-to-noise ratio (PSNR), and the structural similarity index (SSIM), were used to compare the performance of the proposed approach, as per the following representations:

$$CR = \frac{\text{size of the original image}}{\text{size of the compressed image}}$$

$$PSNR = 10\log_{10}\frac{255^{2}}{MSE}, \qquad MSE = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\big(I(i,j)-\hat{I}(i,j)\big)^{2}$$

$$SSIM(x,y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^{2} + \mu_y^{2} + c_1)(\sigma_x^{2} + \sigma_y^{2} + c_2)}$$

Fig. 6 The results of the proposed approach; the left column shows the original images and the right column the compressed images.
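As one concrete example, PSNR between an original and a reconstructed 8-bit image can be computed as follows (a standard implementation, not the paper's VB6 code):

```python
import numpy as np

def psnr(original, reconstructed):
    # Peak signal-to-noise ratio in dB for 8-bit images.
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```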


From the results listed in Table (1), the proposed codec achieves high performance.

TABLE I
NUMERICAL RESULTS FOR THE PROPOSED METHOD

Image      PSNR (dB)   SSIM    CR
Lena       44.67       0.903   10.5
Fruit      42.32       0.879   16.7
Airplane   43.88       0.894   14.5

V. CONCLUSIONS
In this paper an improved color image compression approach was proposed, together with its application to compressing color images. The obtained results show the improvement of the proposed method over recently published work, both in quantitative PSNR terms and, most particularly, in the visual quality of the reconstructed images. Furthermore, the proposed method increased the compression ratio.

REFERENCES
[1]. Pallavi N. Save and Vishakha Kelkar, "An Improved Image Compression Method using LBG with DCT", IJERT Journal, vol. 3, issue 06, June 2014.
[2]. X. O. Zhao and Z. H. He, "Lossless image compression using super-spatial structure prediction," IEEE Signal Processing Letters, vol. 17, no. 4, pp. 383–386, 2010.
[3]. W. M. Abd-Elhafiez, "Image compression algorithm using a fast curvelet transform," International Journal of Computer Science and Telecommunications, vol. 3, no. 4, pp. 43–47, 2012.
[4]. M. Sonka, V. Hlavac, and R. Boyle, "Image Processing, Analysis and Machine Vision", Brooks/Cole Publishing Company, 2nd ed., 1999.
[5]. F. Douak, R. Benzid, and N. Benoudjit, "Color image compression algorithm based on the DCT transform combined to an adaptive block scanning," Int. J. Electron. Commun. (AEU), vol. 65, pp. 16–26, 2011.
[6]. M. Mohamed Sathik, K. Senthamarai Kannan, and Y. Jacob Vetha Raj, "Hybrid Compression of Color Images with Larger Trivial Background by Histogram Segmentation", (IJCSIS) International Journal of Computer Science and Information Security.
[7]. S. Makrogiannis, G. Economou, and S. Fotopoulos, "Region oriented compression of color images using fuzzy inference and fast merging," Pattern Recognition, vol. 35, pp. 1807–1820, 2002.
[8]. A. Alkholidi, A. Alfalou, and H. Hamam, "A new approach for optical colored image compression using the JPEG standards," Signal Processing, vol. 87, pp. 569–583, 2007.
[9]. G. Sreelekha and P. S. Sathidevi, "An HVS based adaptive quantization scheme for the compression of color images", Digital Signal Processing, vol. 20, no. 4, pp. 1129–1149, 2010.
