
International Journal of Research and Innovation in Electronics and Communication Engineering (IJRIECE)
JOINT DATA HIDING AND COMPRESSION BASED ON SALIENCY AND SMVQ

S. Girish1, E. Balakrishna2, S. Rehana Banu3.


1 Research Scholar, Department of Electronics and Communication Engineering, Chiranjeevi Reddy Institute of Engineering and Technology, Anantapur, A. P., India.
2 Assistant Professor, Department of Electronics and Communication Engineering, Chiranjeevi Reddy Institute of Engineering and Technology, Anantapur, A. P., India.
3 Assistant Professor, Department of Electronics and Communication Engineering, Chiranjeevi Reddy Institute of Engineering and Technology, Anantapur, A. P., India.

Abstract
Global interconnect planning becomes a challenge as semiconductor technology continues to scale. Because of the increasing wire resistance and higher capacitive coupling in smaller features, the delay of global interconnects becomes large compared with the delay of a logic gate, introducing a huge performance gap that needs to be resolved. A novel equalized global link architecture and a driver-receiver co-design flow are proposed for high-speed and low-energy on-chip communication by utilizing a continuous-time linear equalizer (CTLE). The proposed global link is analyzed using a linear-system method, and a formula for the CTLE eye opening is derived to provide high-level design guidelines and insights. Compared with a separate driver-receiver design flow, over 50% energy reduction is observed.
*Corresponding Author:
S. Girish
Research Scholar,
Department of Electronics and Communication Engineering, Chiranjeevi Reddy Institute of Engineering and Technology, Anantapur, A. P., India.
Email: girish.seela@gmali.com.
Year of publication: 2016
Review Type: peer reviewed
Volume: I, Issue: I
Citation: S. Girish, Research Scholar, "Joint Data Hiding and Compression Based on Saliency and SMVQ," International Journal of Research and Innovation in Science, Engineering and Technology (IJRISET) (2016) 01-05.
DATA HIDING FOR IMAGE AUTHENTICATION
Introduction
For years, audio, images, and video have played an important role in journalism, archiving, and litigation. The videotape of the Rodney King incident played an important role in prosecution, and a secretly recorded conversation between Monica Lewinsky and Linda Tripp touched off the 1998 presidential impeachment, to name just two examples. Keeping our focus on still pictures, we have no difficulty in realizing that the validity of the old saying "a picture never lies" has been challenged in the digital world of multimedia. Compared with traditional analog media, seamless alteration is much easier on digital media using software editing tools. With the popularity of consumer-level scanners, printers, digital cameras, and digital camcorders, detecting tampering has become an important concern. In this chapter, we discuss using digital watermarking techniques to partially solve this problem by embedding authentication data invisibly into digital images. In general, authenticity is a relative concept: whether an item is authentic or not is relative to a reference or a certain type of representation that is regarded as authentic.

The following features are desirable in an effective authentication scheme:

1. the ability to determine whether an image has been altered or not;
2. the ability to integrate the authentication data with the host image rather than storing it as a separate data file;
3. invisibility of the embedded authentication data under normal viewing conditions;
4. the ability to locate alterations made to the image;
5. the ability to store the watermarked image in a lossy-compression format or, more generally, to distinguish moderate distortion that does not change the high-level content from content tampering.
This chapter presents a general framework of watermarking for authentication and proposes a new authentication scheme that embeds a visually meaningful watermark and a set of features in the transform domain of an image via table look-up. The embedding is a Type-II embedding according to Chapter 3. Making use of the pre-distortion nature of Type-II embedding, the proposed approach can be applied to images compressed with JPEG or other compression techniques, and the watermarked image can be kept in the compressed format. The proposed approach therefore allows distinguishing moderate distortion that does not change the high-level content from content tampering. Alterations made to the marked image can also be localized.
These features make the proposed scheme suitable for building a trustworthy digital camera. We also demonstrate the use of shuffling (Chapter 4) in this specific problem to equalize uneven embedding capacity and to enhance both the embedding rate and security.
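To make the table look-up idea concrete, the following is a minimal sketch (not the exact scheme of this chapter) of Type-II style embedding: a shared look-up table maps each quantized transform coefficient value to a bit, and a coefficient is nudged to the nearest value whose table entry equals the bit to be embedded. The table, the coefficient range, and the coefficient selection are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=7)   # seed shared by embedder and verifier

# Hypothetical look-up table: maps each possible quantized coefficient value
# in [-64, 63] to a bit. (A real design would constrain run lengths in the
# table so that the embedding change always stays small.)
TABLE = rng.integers(0, 2, size=128)

def embed_bit(coeff_q, bit):
    """Move a quantized coefficient to the nearest value whose
    look-up table entry equals the bit to embed."""
    idx = int(coeff_q) + 64
    if TABLE[idx] == bit:
        return coeff_q
    # search outward for the closest admissible value
    for d in range(1, 64):
        for cand in (idx - d, idx + d):
            if 0 <= cand < 128 and TABLE[cand] == bit:
                return cand - 64
    raise ValueError("no admissible value found")

def extract_bit(coeff_q):
    """Extraction needs only the marked coefficient and the shared table."""
    return int(TABLE[int(coeff_q) + 64])

# toy usage: embed one bit into one coefficient, then read it back
marked = embed_bit(coeff_q=13, bit=0)
assert extract_bit(marked) == 0
```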
Previous Work
The existing works on image authentication can be classified into several categories: digital signature based, pixel-domain embedding, and transform-domain embedding. The latter two categories are forms of invisible fragile or semi-fragile watermarking. Digital signature schemes are built on the ideas of hashing (also called a message digest) and public-key encryption, which were originally developed for verifying the authenticity of text data in network communication. Friedman extended them to digital images as follows. A signature computed from the image data is stored separately for future verification. This image signature can be regarded as a special encrypted checksum. It is unlikely that two different natural images have the same signature, and even if a single bit of the image data changes, the signature may be totally different. Furthermore, public-key encryption makes it very difficult to forge a signature, ensuring a high security level. Following his work, Schneider et al. and Storck proposed content-based signatures. They produce signatures from low-level content features, such as block mean intensity, to protect the image content instead of its exact representation. Another content-signature approach, by Lin et al., developed the signature based on a relation between coefficient pairs that is invariant before and after compression [41, 76]. Strictly speaking, these signature schemes do not belong to watermarking, since the signature is stored separately instead of being embedded into the image.
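As a simple illustration of the signature idea (a generic sketch, not Friedman's exact construction; the file name is hypothetical), a message digest of the image data can be computed at capture time and checked later. In a full scheme this digest would additionally be signed with the camera's private key.

```python
import hashlib

def image_digest(image_bytes: bytes) -> str:
    """Compute a cryptographic digest of the raw image data.
    In a trustworthy-camera scheme this digest would be signed with a
    private key; only the digest/verification step is shown here."""
    return hashlib.sha256(image_bytes).hexdigest()

# capture time: store the digest alongside (not inside) the image
original = open("photo.raw", "rb").read()          # hypothetical file
stored_digest = image_digest(original)

# verification time: any single-bit change yields a different digest
received = open("photo.raw", "rb").read()
print("authentic" if image_digest(received) == stored_digest else "altered")
```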
A STUDY OF VARIOUS IMAGE COMPRESSION
TECHNIQUES
IMAGE
An image is essentially a 2-D signal processed by the human visual system. The signals representing images are usually in analog form; however, for processing, storage, and transmission by computer applications, they are converted from analog to digital form. A digital image is basically a two-dimensional array of pixels. Images form a significant part of data, particularly in remote sensing, biomedical, and video-conferencing applications. As the use of and dependence on information and computers continues to grow, so too does our need for efficient ways of storing and transmitting large amounts of data.
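For instance, a grayscale digital image is simply a two-dimensional array of intensity values; a small illustrative sketch, assuming 8-bit pixels, is:

```python
import numpy as np

# a hypothetical 4x4 grayscale image: one 8-bit intensity value per pixel
image = np.array([[ 12,  50,  50,  13],
                  [ 52, 200, 198,  49],
                  [ 51, 199, 201,  50],
                  [ 11,  49,  52,  12]], dtype=np.uint8)

print(image.shape)   # (rows, columns) -> (4, 4)
print(image[1, 2])   # intensity of the pixel in row 1, column 2 -> 198
```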
IMAGE COMPRESSION
Image compression addresses the problem of reducing the amount of data required to represent a digital image. It is a process intended to yield a compact representation of an image, thereby reducing the image storage and transmission requirements. Compression is achieved by the removal of one or more of the three basic data redundancies:

1. Coding redundancy
2. Interpixel redundancy
3. Psychovisual redundancy

Coding redundancy is present when less than optimal code words are used. Interpixel redundancy results from correlations between the pixels of an image. Psychovisual redundancy is due to data that is ignored by the human visual system (i.e., visually nonessential information).

BENEFITS OF COMPRESSION
It provides potential cost savings by sending less data over switched telephone networks.
It reduces not only storage requirements but also overall execution time.
It reduces the probability of transmission errors, since fewer bits are transferred.
It also provides a level of security against illicit monitoring.

SALIENCY MODEL
Automatic scene analysis
In the last decade, security cameras have become a common sight in the urban landscape. The ability to monitor a number of different places from one location has aided crime prevention. A vast number of security cameras are being installed in everything from public streets to train carriages, and this has spawned a fundamental problem. On the London Underground alone, there are at least 9,000 security cameras. Security staff can have as many as 60 cameras to watch at any one time, so monitoring is an extremely difficult task, requiring considerable concentration for long periods of time (Donald, 1999). It is easy to imagine that manpower on its own is not enough to deal with the vast quantities of data that are being recorded. Automation is clearly the next step.

INFORMATION HIDING USING VECTOR QUANTIZATION
In the earlier part of the thesis, different methods in the spatial domain and transform domain were studied. This chapter deals with techniques for hiding information in the compressed domain. One of the most commonly studied image compression techniques is Vector Quantization (VQ) [60], a lossy image compression technique based on the principle of block coding. VQ is a clustering technique in which every cluster is represented by a codevector. It is widely used to compress grey-level images because of its low bit rate. The main concept of VQ is to utilize templates instead of blocks to do the image compression. These templates, also referred to as codewords or codevectors, are stored in a codebook, and the codebook is shared only between the sender and the receiver. Hence, the index value of the template is used to represent all the pixel values of the block, so that data compression can be achieved. Such a mechanism is extremely easy to implement, although the organization of the templates affects the quality of the compressed image.

Experimental Results
Table: Average values of PSNR, MSE and AFCPV using 1-bit, 2-bit, 3-bit, 4-bit, and variable-bit information hiding in the vector-quantized codebook method on LBG, KPE, KMCG and KFCG codebooks of size 2048.
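For reference, the MSE and PSNR reported in the table can be computed as follows (standard definitions, assuming 8-bit grayscale images; the AFCPV measure is not reproduced here):

```python
import numpy as np

def mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Mean squared error between two images of identical shape."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit images (peak = 255)."""
    m = mse(original, reconstructed)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```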


Remark: It is observed that KFCG performs better than LBG, KPE, and KMCG in terms of MSE, PSNR, and AFCPV.

The figure shows the results for the cover image Lioness and the secret image Work logo.

Remark: It is observed that the stego image is similar to the original image for any of the four codebook generation algorithms.

The table shows the hiding capacity for all cover and message images for 1, 2, 3, 4, and variable bits for all four codebook generation techniques. Figures 6.4, 6.5, 6.6, and 6.7 show the hiding capacity, PSNR, MSE, and AFCPV for all four algorithms and the 1-, 2-, 3-, 4-, and variable-bit hiding methods.

Table: Hiding capacity in bits using the 1-bit, 2-bit, 3-bit, 4-bit, and variable-bit methods on LBG, KPE, KMCG and KFCG codebooks of size 2048.

Related work and our contribution

We should first note that classical image denoising algorithms do not apply to image inpainting, since the regions to be inpainted are usually large. That is, regions occupied by top-to-bottom scratches along several film frames, long cracks in photographs, superimposed large fonts, and so on, are of significantly larger size than the type of noise assumed in common image enhancement algorithms. In addition, in common image enhancement applications the pixels contain both information about the real data and the noise, while in image inpainting there is no significant information in the region to be inpainted; the information lies mainly in the regions surrounding the areas to be inpainted. There is therefore a need to develop specific techniques to address these problems. Mainly three groups of works related to digital inpainting can be found in the literature. The first one deals with the restoration of films, the second is related to texture synthesis, and the third, a significantly less studied class though very influential to the work presented here, is related to disocclusion. Joyeux et al. [4] devised a two-step frequency-based reconstruction system for detecting and removing line scratches in films; they propose to first recover low and then high frequencies. Although good results are obtained for their specific application, the algorithm cannot handle large loss areas. Frequency-domain algorithms trade a good recovery of the overall structure of the regions for poor spatial results regarding, for instance, the continuity of lines. Kokaram et al. [6] use motion estimation and autoregressive models to interpolate losses in films from adjacent frames. The basic idea is to copy the right pixels from neighboring frames into the gap. The technique cannot be applied to still images or to films where the regions to be inpainted span many frames.

Our contribution

Algorithms devised for film restoration are not appropriate for our application, since they normally work on relatively small regions and rely on the existence of information from several frames. On the other hand, algorithms based on texture synthesis can fill large regions, but require the user to specify what texture to put where. This is a significant limitation of these approaches, as may be seen in examples presented later in this paper, where the region to be inpainted is surrounded by hundreds of different backgrounds, some of them being structure and not texture. The technique we propose does not require any user intervention once the region to be inpainted has been selected. The algorithm is able to simultaneously fill regions surrounded by different backgrounds, without the user specifying what to put where. No assumptions are made on the topology of the region to be inpainted, or on the simplicity of the image. The algorithm is devised for inpainting structured regions (e.g., regions crossing through boundaries), though it is not devised to reproduce large textured areas. As we will discuss later, the combination of our proposed approach with texture synthesis techniques is the subject of current research.

RESULTS

Input Image

The input image is first given to the saliency-extraction block; a simple contrast-based sketch of this step is shown below, followed by its output (the saliency map).
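The saliency model actually used is not specified in the text above; as an illustrative stand-in, a very simple center-surround contrast map (assuming a grayscale NumPy image and SciPy available) can be obtained by comparing each pixel with a blurred version of the image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simple_saliency(gray: np.ndarray, sigma: float = 8.0) -> np.ndarray:
    """Toy center-surround saliency: |pixel - local average|, normalized
    to [0, 1]. Real detectors (and the one used in the paper) are more
    elaborate; this only illustrates the salient / non-salient split."""
    img = gray.astype(np.float64)
    surround = gaussian_filter(img, sigma=sigma)   # local average
    sal = np.abs(img - surround)
    return sal / sal.max() if sal.max() > 0 else sal

# thresholding the map splits the image into salient and non-salient blocks
# mask = simple_saliency(image) > 0.5
```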


Saliency output of the input image.

CONCLUSION AND FUTURE SCOPE

In this project, new methods of information hiding in the compressed domain using vector quantization and SMVQ are proposed: information hiding in salient images using a vector-quantized codebook and SMVQ. In this paper, we proposed a joint data-hiding and compression scheme using SMVQ, PDE-based image inpainting, and saliency detection. The blocks, except for those in the leftmost column and topmost row of the image, can be embedded with secret data and compressed simultaneously, and the adopted compression method switches between SMVQ and image inpainting adaptively according to the embedded bits. VQ is also utilized for some complex blocks to control the visual distortion and error diffusion. On the receiver side, after segmenting the compressed codes into a series of sections by the indicator bits, the embedded secret bits can be easily extracted according to the index values in the segmented sections, and the decompression of all blocks can also be achieved successfully by VQ, SMVQ, and image inpainting.
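To illustrate the flavor of index-based hiding in the compressed domain (a generic parity-of-index scheme, not the paper's SMVQ/indicator-bit format; the random codebook is hypothetical), a secret bit can be embedded in each block by constraining which VQ index is allowed:

```python
import numpy as np

def encode_block_with_bit(block: np.ndarray, codebook: np.ndarray, bit: int) -> int:
    """Pick the closest codevector whose index parity equals the secret bit.
    block: flattened image block; codebook: (N, block_size) array.
    A simple parity scheme for illustration, not the SMVQ scheme of the paper."""
    dists = np.sum((codebook - block.astype(np.float64)) ** 2, axis=1)
    admissible = np.where(np.arange(len(codebook)) % 2 == bit)[0]
    return int(admissible[np.argmin(dists[admissible])])

def decode_block(index: int, codebook: np.ndarray) -> np.ndarray:
    """Decompression: look the index up in the shared codebook."""
    return codebook[index]

def extract_bit(index: int) -> int:
    """The hidden bit is recovered from the index parity alone."""
    return index % 2

# toy usage with a hypothetical random codebook of 2048 codevectors (4x4 blocks)
rng = np.random.default_rng(0)
codebook = rng.integers(0, 256, size=(2048, 16)).astype(np.float64)
block = rng.integers(0, 256, size=16)
idx = encode_block_with_bit(block, codebook, bit=1)
assert extract_bit(idx) == 1
```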
The existing codebook generation can be improved to produce better results, and other techniques can be incorporated. The data size should also be increased beyond the present one.
REFERENCES
[1] W. B. Pennebaker and J. L. Mitchell, The JPEG Still Image Data Compression Standard. New York, NY, USA: Van Nostrand Reinhold, 1993.

Saliency map: the image is divided into a salient part and a non-salient part.

The original image is then compressed using SMVQ and VQ together with data hiding; the output is shown below.

[2] D. S. Taubman and M. W. Marcellin, JPEG2000: Image Compression Fundamentals, Standards and Practice. Norwell, MA, USA: Kluwer, 2002.
[3] A. Gersho and R. M. Gray, Vector Quantization and Signal Compression. Norwell, MA, USA: Kluwer, 1992.
[4] N. M. Nasrabadi and R. King, Image coding using vector quantization: A review, IEEE Trans. Commun., vol. 36, no. 8, pp. 957-971, Aug. 1988.
[5] Announcing the Advanced Encryption Standard (AES), National Institute of Standards & Technology, Gaithersburg, MD, USA, Nov. 2001.
[6] R. L. Rivest, A. Shamir, and L. Adleman, A method for obtaining digital signatures and public-key cryptosystems, Commun. ACM, vol. 21, no. 2, pp. 120-126, 1978.
[7] F. A. P. Petitcolas, R. J. Anderson, and M. G. Kuhn, Information hiding: A survey, Proc. IEEE, vol. 87, no. 7, pp. 1062-1078, Jul. 1999.

Compressed Image

[8] C. D. Vleeschouwer, J. F. Delaigle, and B. Macq, Invisibility and application functionalities in perceptual watermarking: An overview, Proc. IEEE, vol. 90, no. 1, pp. 64-77, Jan. 2002.
[9] C. C. Chang, T. S. Chen, and L. Z. Chung, A steganographic method based upon JPEG and quantization table modification, Inf. Sci., vol. 141, no. 1, pp. 123-138, 2002.
[10] H. W. Tseng and C. C. Chang, High capacity data hiding in JPEG-compressed images, Informatica, vol. 15, no. 1, pp. 127-142, 2004.

The final reconstructed image at the output.

[11] P. C. Su and C. C. Kuo, Steganography in JPEG2000 compressed images, IEEE Trans. Consum. Electron., vol. 49, no. 4, pp. 824-832, Nov. 2003.


[12] W. J. Wang, C. T. Huang, and S. J. Wang, VQ applications in steganographic data hiding upon multimedia images, IEEE Syst. J., vol. 5, no. 4, pp. 528-537, Dec. 2011.
[13] Y. C. Hu, High-capacity image hiding scheme based on vector quantization, Pattern Recognit., vol. 39, no. 9, pp. 1715-1724, 2006.
[14] Y. P. Hsieh, C. C. Chang, and L. J. Liu, A two-codebook combination and three-phase block matching based image-hiding scheme with high embedding capacity, Pattern Recognit., vol. 41, no. 10, pp. 3104-3113, 2008.
[15] C. H. Yang and Y. C. Lin, Fractal curves to improve the reversible data embedding for VQ-indexes based on locally adaptive coding, J. Vis. Commun. Image Represent., vol. 21, no. 4, pp. 334-342, 2010.
[16] Y. Linde, A. Buzo, and R. M. Gray, An algorithm for vector quantizer design, IEEE Trans. Commun., vol. 28, no. 1, pp. 84-95, Jan. 1980.
[17] C. C. Chang and W. C. Wu, Fast planar-oriented ripple search algorithm for hyperspace VQ codebook, IEEE Trans. Image Process., vol. 16, no. 6, pp. 1538-1547, Jun. 2007.
[18] W. C. Du and W. J. Hsu, Adaptive data hiding based on VQ compressed images, IEE Proc. Vis., Image Signal Process., vol. 150, no. 4, pp. 233-238, Aug. 2003.
[19] C. C. Chang and W. C. Wu, Hiding secret data adaptively in vector quantisation index tables, IEE Proc. Vis., Image Signal Process., vol. 153, no. 5, pp. 589-597, Oct. 2006.
[20] C. C. Lin, S. C. Chen, and N. L. Hsueh, Adaptive embedding techniques for VQ-compressed images, Inf. Sci., vol. 179, no. 3, pp. 140-149, 2009.
[21] C. H. Hsieh and J. C. Tsai, Lossless compression of VQ index with search-order coding, IEEE Trans. Image Process., vol. 5, no. 11, pp. 1579-1582, Nov. 1996.
[22] C. C. Lee, W. H. Ku, and S. Y. Huang, A new steganographic scheme based on vector quantisation and search-order coding, IET Image Process., vol. 3, no. 4, pp. 243-248, 2009.
[23] S. C. Shie and S. D. Lin, Data hiding based on compressed VQ indices of images, Comput. Standards Inter., vol. 31, no. 6, pp. 1143-1149, 2009.
[24] C. C. Chang, G. M. Chen, and M. H. Lin, Information hiding based on search-order coding for VQ indices, Pattern Recognit. Lett., vol. 25, no. 11, pp. 1253-1261, 2004.
[25] T. Kim, Side match and overlap match vector quantizers for images, IEEE Trans. Image Process., vol. 1, no. 2, pp. 170-185, Apr. 1992.

AUTHOR

S. Girish,

Research Scholar,
Department of Electronics and Communication Engineering,
Chiranjeevi Reddy Institute of Engineering and Technology,
Anantapur, A. P, India.

E. Balakrishna,

Assistant Professor,
Department of Electronics and Communication Engineering,
Chiranjeevi Reddy Institute of Engineering and Technology,
Anantapur, A. P, India.

S. Rehana Banu,

Assistant Professor,
Department of Electronics and Communication Engineering,
Chiranjeevi Reddy Institute of Engineering and Technology,
Anantapur, A. P, India.
