
Image Compression

Data vs Information

Information = the matter (content) that is conveyed

Data = The means by which information is conveyed

Image Compression

Reducing the amount of data required to represent
a digital image while preserving as much information as possible
Relative Data Redundancy and Compression Ratio

Relative Data Redundancy

R_D = 1 - 1/C_R

Compression Ratio

C_R = n1 / n2
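A quick numeric check of the two definitions, using the run length coding totals that appear later in these notes (function names are mine):

```python
def compression_ratio(n1, n2):
    """C_R = n1 / n2: bits in the original vs. the compressed representation."""
    return n1 / n2

def relative_redundancy(cr):
    """R_D = 1 - 1/C_R: the fraction of the original data that is redundant."""
    return 1 - 1 / cr

cr = compression_ratio(351232, 133826)    # binary image vs. its run length code
print(round(cr, 2))                       # → 2.62
print(round(relative_redundancy(cr), 2))  # → 0.62
```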

Types of data redundancy


1. Coding redundancy
2. Interpixel redundancy
3. Psychovisual redundancy
Coding Redundancy

Different coding methods yield different amounts of data needed to
represent the same information.

Example of Coding Redundancy :


Variable Length Coding vs. Fixed Length Coding

Fixed length code: L_avg = 3 bits/symbol; variable length code: L_avg = 2.7 bits/symbol

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Variable Length Coding

Concept: assign the longest code word to the symbol with the least
probability of occurrence.

Interpixel Redundancy

Interpixel redundancy: parts of an image are highly correlated.

In other words, we can predict a given pixel from its neighbors.
Run Length Coding

The gray scale image is of size 343x1024 pixels.

As a binary image: 343 x 1024 x 1 = 351232 bits

Run length coding of Line No. 100:

Line 100: (1,63) (0,87) (1,37) (0,5) (1,4) (0,556) (1,62) (0,210)

Total: 12166 runs, each run using 11 bits; total = 133826 bits
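A minimal run length encoder over one binary line; run on line 100's pattern it reproduces the pairs above:

```python
def run_length_encode(bits):
    """Encode a binary sequence as (value, run-length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((b, 1))               # start a new run
    return runs

# Line 100 of the binary image, built from the runs listed above
line = ([1] * 63 + [0] * 87 + [1] * 37 + [0] * 5 +
        [1] * 4 + [0] * 556 + [1] * 62 + [0] * 210)
print(run_length_encode(line))
# → [(1, 63), (0, 87), (1, 37), (0, 5), (1, 4), (0, 556), (1, 62), (0, 210)]
```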
Psychovisual Redundancy
8-bit gray scale image | 4-bit gray scale image | 4-bit IGS image

Plain 4-bit quantization shows false contours.

The eye does not respond with equal sensitivity to all visual
information.
Improved Gray Scale Quantization

Pixel   Gray level   Sum         IGS Code
i-1     N/A          0000 0000   N/A
i       0110 1100    0110 1100   0110
i+1     1000 1011    1001 0111   1001
i+2     1000 0111    1000 1110   1000
i+3     1111 0100    1111 0100   1111
Algorithm
1. Add the least significant 4 bits of the previous value of Sum to
the 8-bit current pixel. If the most significant 4 bits of the pixel
are 1111, add 0000 instead. Keep the result in Sum.

2. Keep only the most significant 4 bits of Sum as the IGS code.
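A sketch of the two-step algorithm; run on the gray levels from the table above, it reproduces the IGS codes:

```python
def igs_quantize(pixels):
    """Improved Gray Scale quantization to 4 bits, following the
    two steps above."""
    total = 0
    codes = []
    for p in pixels:
        if p & 0xF0 == 0xF0:
            total = p                 # high nibble is 1111: add 0000 instead
        else:
            total = p + (total & 0x0F)  # add low 4 bits of previous Sum
        codes.append(total >> 4)        # keep the most significant 4 bits
    return codes

# The gray levels from the table: 0110 1100, 1000 1011, 1000 0111, 1111 0100
print(igs_quantize([0b01101100, 0b10001011, 0b10000111, 0b11110100]))
# → [6, 9, 8, 15], i.e. 0110, 1001, 1000, 1111
```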
Fidelity Criteria: how good is the compression algorithm?

- Objective fidelity criteria: RMSE, PSNR

- Subjective fidelity criteria: human ratings

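The two objective criteria can be sketched directly (images as flat pixel lists, 8-bit peak assumed):

```python
import math

def rmse(f, g):
    """Root-mean-square error between two equally sized images."""
    n = len(f)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f, g)) / n)

def psnr(f, g, peak=255):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    e = rmse(f, g)
    return float('inf') if e == 0 else 20 * math.log10(peak / e)

orig = [10, 20, 30, 40]
recon = [12, 18, 30, 44]
print(round(rmse(orig, recon), 3))   # → 2.449
```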
Image Compression Models
f(x,y) → Source encoder (reduces data redundancy) → Channel encoder (increases noise immunity)
       → Noisy channel → Channel decoder → Source decoder → f^(x,y)

Source Encoder and Decoder Models

Source encoder:
f(x,y) → Mapper (reduces interpixel redundancy) → Quantizer (reduces
psychovisual redundancy) → Symbol encoder (reduces coding redundancy)

Source decoder:
Symbol decoder → Inverse mapper → f^(x,y)

Channel Encoder and Decoder

- Hamming code, Turbo code,


Information Theory

Measuring information:

I(E) = log(1/P(E)) = -log P(E)

Entropy or Uncertainty: average information per symbol

H = -Σ_j P(a_j) log P(a_j)
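A quick sketch of both formulas (log base 2, so information is measured in bits):

```python
import math

def information(p):
    """Self-information I(E) = -log2 P(E), in bits."""
    return -math.log2(p)

def entropy(probs):
    """H = -sum_j P(a_j) log2 P(a_j): average information per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(information(0.5))               # → 1.0 bit
print(round(entropy([2/3, 1/3]), 3))  # → 0.918, the H of the extension coding example later on
```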
Simple Information System

Binary Symmetric Channel

Source: A = {a1, a2} = {0, 1}, z = [P(a1), P(a2)]
Destination: B = {b1, b2} = {0, 1}, v = [P(b1), P(b2)]

Each bit is received correctly with probability (1-Pe) and flipped
with probability Pe, where Pe = probability of error, so:

P(b1) = P(a1)(1-Pe) + (1-P(a1))Pe
P(b2) = (1-P(a1))(1-Pe) + P(a1)Pe
Binary Symmetric Channel

H(z) = -P(a1) log2 P(a1) - P(a2) log2 P(a2)

H(z|b1) = -P(a1|b1) log2 P(a1|b1) - P(a2|b1) log2 P(a2|b1)
H(z|b2) = -P(a1|b2) log2 P(a1|b2) - P(a2|b2) log2 P(a2|b2)

H(z|v) = P(b1) H(z|b1) + P(b2) H(z|b2)

Mutual information: I(z,v) = H(z) - H(z|v)

Capacity: C = max over z of I(z,v)
Binary Symmetric Channel

Let pe = probability of error and pbs = P(a1). Then

z = [pbs, 1-pbs]

v = [pbs(1-pe) + (1-pbs)pe,  (1-pbs)(1-pe) + pbs·pe]

H(z) = -pbs log2(pbs) - (1-pbs) log2(1-pbs) = Hbs(pbs)

H(z|v) = -Σ_j Σ_k P(a_j, b_k) log2 P(a_j|b_k), with joint probabilities
P(a1,b1) = pbs(1-pe),  P(a1,b2) = pbs·pe,
P(a2,b1) = (1-pbs)pe,  P(a2,b2) = (1-pbs)(1-pe)

I(z,v) = Hbs(pbs(1-pe) + (1-pbs)pe) - Hbs(pe)

C = 1 - Hbs(pe)
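These closed forms are easy to check numerically; a sketch, with Hbs the binary entropy function:

```python
import math

def h_bs(p):
    """Binary entropy function Hbs(p), in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(pe):
    """C = 1 - Hbs(pe): capacity of a binary symmetric channel."""
    return 1 - h_bs(pe)

print(bsc_capacity(0.0))   # → 1.0 (noiseless channel: one full bit per use)
print(bsc_capacity(0.5))   # → 0.0 (the output is independent of the input)
```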
Binary Symmetric Channel

Communication System Model

2 Cases to be considered: Noiseless and noisy

Noiseless Coding Theorem
Problem: How can data be coded as compactly as possible?

Shannon's first theorem defines the minimum average code word
length per source symbol that can be achieved.

Let the source {A, z} be a zero-memory source with J symbols
(zero memory = each outcome is independent of the other outcomes),
and consider blocks of n source symbols, giving the extension

A' = {α1, α2, α3, ..., α_{J^n}}

Example:
A = {0, 1}
for n = 3,
A' = {000, 001, 010, 011, 100, 101, 110, 111}
Noiseless Coding Theorem (cont.)
The probability of each αj is

P(αj) = P(a_j1) P(a_j2) ... P(a_jn)

The entropy of the extended source is

H(z') = -Σ_{i=1}^{J^n} P(αi) log P(αi) = n H(z)

Each code word length l(αi) can be chosen so that

log(1/P(αi)) ≤ l(αi) < log(1/P(αi)) + 1

Multiplying by P(αi) and summing over i, the average code word length satisfies

Σ_i P(αi) log(1/P(αi)) ≤ Σ_i P(αi) l(αi) < Σ_i P(αi) log(1/P(αi)) + 1
Noiseless Coding Theorem (cont.)
We get

H(z') ≤ L'avg < H(z') + 1

From H(z') = n H(z),

H(z) ≤ L'avg / n < H(z) + 1/n

or

lim_{n→∞} (L'avg / n) = H(z)

The minimum average code word length per source symbol cannot be
lower than the entropy.

Coding efficiency:

η = n H(z) / L'avg
Extension Coding Example

First extension:  H = 0.918, Lavg = 1      →  η1 = (1)(0.918)/1 = 0.918
Second extension: H' = 1.83, L'avg = 1.89  →  η2 = (2)(0.918)/1.89 = 0.97
Noisy Coding Theorem
Problem: How can data be coded as reliably as possible?

Example: repeat each code 3 times:

Source data = {1,0,0,1,1}

Data to be sent = {111,000,000,111,111}

Shannon's second theorem: the maximum rate of coded information is

R = (log φ) / r

φ = code size
r = block length
Rate Distortion Function for BSC

Error-Free Compression: Huffman Coding

Huffman coding gives the smallest possible number of code
symbols per source symbol.

Step 1: Source reduction — repeatedly combine the two least
probable symbols into a single compound symbol.
Error-Free Compression: Huffman Coding

Step 2: Code assignment procedure — work back from the reduced
source, appending 0 and 1 to distinguish each split symbol.

The code is instantaneous and uniquely decodable without referencing
succeeding symbols.
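A runnable sketch of both steps, using a heap for the repeated source reduction (the six-symbol source is the usual Gonzalez & Woods example; the names a1..a6 are mine):

```python
import heapq

def huffman_code(probs):
    """Huffman code for {symbol: probability}: repeatedly merge the two
    least probable nodes (source reduction), prefixing '0'/'1' on the
    way back up (code assignment)."""
    heap = [(p, i, {s: ''}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in c0.items()}
        merged.update({s: '1' + c for s, c in c1.items()})
        heapq.heappush(heap, (p0 + p1, count, merged))
        count += 1
    return heap[0][2]

probs = {'a1': 0.1, 'a2': 0.4, 'a3': 0.06, 'a4': 0.1, 'a5': 0.04, 'a6': 0.3}
code = huffman_code(probs)
l_avg = sum(probs[s] * len(code[s]) for s in probs)
print(round(l_avg, 2))   # → 2.2 bits/symbol (the source entropy is about 2.14)
```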
Near Optimal Variable Length Codes

Arithmetic Coding
Arithmetic coding is a nonblock code: a one-to-one correspondence
between source symbols and code words does not exist.

Concept: the entire sequence of source symbols is assigned a single
arithmetic code word, in the form of a number in an interval of real
numbers between 0 and 1.
Arithmetic Coding Example
[Figure: successive subdivision of the interval [0,1) as the symbols
a1 a2 a3 a3 a4 are encoded.]

Any number in the final interval, between 0.06752 and 0.0688, can be
used to represent the sequence a1 a2 a3 a3 a4.
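A sketch of the interval-narrowing process; the source model is assumed to be a1 = 0.2, a2 = 0.2, a3 = 0.4, a4 = 0.2, which reproduces the interval above:

```python
def arithmetic_interval(seq, probs):
    """Return the final [low, high) interval for a symbol sequence."""
    # cumulative lower bound of each symbol's sub-interval
    cum, c = {}, 0.0
    for s in probs:
        cum[s] = c
        c += probs[s]
    low, high = 0.0, 1.0
    for s in seq:
        width = high - low
        high = low + width * (cum[s] + probs[s])  # narrow to the symbol's
        low = low + width * cum[s]                # sub-interval
    return low, high

probs = {'a1': 0.2, 'a2': 0.2, 'a3': 0.4, 'a4': 0.2}
low, high = arithmetic_interval(['a1', 'a2', 'a3', 'a3', 'a4'], probs)
print(round(low, 5), round(high, 5))   # → 0.06752 0.0688
```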
LZW Coding
Lempel-Ziv-Welch coding assigns fixed length code words (9 bits in
the example below) to variable length sequences of source symbols.
LZW Coding Algorithm
0. Initialize a dictionary with all possible gray values (0-255).
1. Input the current pixel.
2. If the current pixel combined with the previous pixels forms one
   of the existing dictionary entries:
   Then
     2.1 Move to the next pixel and repeat Step 1.
   Else
     2.2 Output the dictionary location of the currently recognized
         sequence (which does not include the current pixel).
     2.3 Create a new dictionary entry by appending the currently
         recognized sequence in 2.2 with the current pixel.
     2.4 Move to the next pixel and repeat Step 1.

LZW Coding Example
Dictionary (locations 0-255 hold the gray values 0-255):
256: 39-39    257: 39-126    258: 126-126    259: 126-39
260: 39-39-126    261: 126-126-39    262: 39-39-126-126

Input    Currently recognized    Encoded output
pixel    sequence                (9 bits)
39       39
39       39                      39      (new entry 256: 39-39)
126      126                     39      (new entry 257: 39-126)
126      126                     126     (new entry 258: 126-126)
39       39                      126     (new entry 259: 126-39)
39       39-39
126      126                     256     (new entry 260: 39-39-126)
126      126-126
39       39                      258     (new entry 261: 126-126-39)
39       39-39
126      39-39-126
126      126                     260     (new entry 262: 39-39-126-126)
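A sketch of the encoder; run on the 4x4 example image (four identical rows of 39 39 126 126), it reproduces the outputs traced in the table:

```python
def lzw_encode(pixels):
    """LZW encoding for 8-bit pixel values: the dictionary starts
    with the gray values 0-255, per Step 0 above."""
    dictionary = {(v,): v for v in range(256)}
    w = ()
    out = []
    for p in pixels:
        wp = w + (p,)
        if wp in dictionary:
            w = wp                            # keep growing the sequence
        else:
            out.append(dictionary[w])         # output recognized sequence
            dictionary[wp] = len(dictionary)  # new entry: sequence + pixel
            w = (p,)
    if w:
        out.append(dictionary[w])
    return out

row = [39, 39, 126, 126] * 4   # the 4x4 example image, row by row
print(lzw_encode(row))   # → [39, 39, 126, 126, 256, 258, 260, 259, 257, 126]
```

Ten 9-bit codes (90 bits) replace sixteen 8-bit pixels (128 bits).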
Bit-Plane Coding

An 8-bit original image is decomposed into eight bit-plane images
(Bit 7, Bit 6, ..., Bit 0); each bit plane is a binary image and is
compressed with a binary image compression method.

Example of binary image compression: run length coding
Bit Planes

[Figure: the original gray scale image and its eight bit planes,
Bit 7 (most significant) down to Bit 0 (least significant).]
Gray-coded Bit Planes
The original bit planes a_i are converted to Gray-coded bit planes g_i:

g_i = a_i ⊕ a_(i+1)   for 0 ≤ i ≤ 6
g_7 = a_7

a_i = original bit planes
⊕ = XOR
Gray-coded Bit Planes (cont.)

[Figure: bit planes a3-a0 and their Gray-coded counterparts g3-g0.]

There are fewer 0-1 and 1-0 transitions in Gray-coded bit planes, so
Gray-coded bit planes are more efficient for coding.
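For a whole 8-bit value, the bit-plane rule g_i = a_i ⊕ a_(i+1) collapses into one XOR with a shifted copy; a sketch:

```python
def to_gray(a):
    """Gray-code an 8-bit value: g_i = a_i XOR a_(i+1), with g7 = a7."""
    return a ^ (a >> 1)

# Neighboring gray levels 127 (0111 1111) and 128 (1000 0000) differ in
# all 8 bit planes, but their Gray codes differ in only one:
print(format(to_gray(127), '08b'))   # → 01000000
print(format(to_gray(128), '08b'))   # → 11000000
```

This is exactly why Gray-coded planes have fewer transitions: small changes in gray level touch few bit planes.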
Relative Address Coding (RAC)
Concept: track the binary transitions that begin and end each black
and white run.
Contour tracing and Coding
Represent each contour by a set of boundary points and directionals.

Error-Free Bit-Plane Coding

Lossless VS Lossy Coding

Lossless coding — source encoder:
f(x,y) → Mapper (reduces interpixel redundancy) → Symbol encoder
(reduces coding redundancy)

Lossy coding — source encoder:
f(x,y) → Mapper (reduces interpixel redundancy) → Quantizer (reduces
psychovisual redundancy) → Symbol encoder (reduces coding redundancy)
Transform Coding (for fixed resolution transforms)

Encoder:
Input image (NxN) → Construct nxn subimages → Forward transform →
Quantizer → Symbol encoder → Compressed image

The quantization process is what makes transform coding lossy.

Decoder:
Compressed image → Symbol decoder → Inverse transform → Reassemble
nxn subimages → Decompressed image

Examples of transformations used for image compression: DFT and DCT
Transform Coding (for fixed resolution transforms)

3 Parameters that affect transform coding performance:

1. Type of transformation

2. Size of subimage

3. Quantization algorithm
2D Discrete Transformation

Forward transform:

T(u,v) = Σ_{x=0}^{N-1} Σ_{y=0}^{N-1} f(x,y) g(x,y,u,v)

where g(x,y,u,v) = forward transformation kernel or basis function.
T(u,v) is called the transform coefficient image.

Inverse transform:

f(x,y) = Σ_{u=0}^{N-1} Σ_{v=0}^{N-1} T(u,v) h(x,y,u,v)

where h(x,y,u,v) = inverse transformation kernel or inverse basis
function.
Transform Example: Walsh-Hadamard Basis Functions
g(x,y,u,v) = h(u,v,x,y) = (1/N) (-1)^( Σ_{i=0}^{m-1} [b_i(x) p_i(u) + b_i(y) p_i(v)] )

N = 2^m
b_k(z) = the kth bit of z

p_0(u) = b_{m-1}(u)
p_1(u) = b_{m-1}(u) + b_{m-2}(u)
p_2(u) = b_{m-2}(u) + b_{m-3}(u)
...
p_{m-1}(u) = b_1(u) + b_0(u)

(sums performed modulo 2; basis shown for N = 4)

Advantage: simple, easy to implement
Disadvantage: not good packing ability
Transform Example: Discrete Cosine Basis Functions
g(x,y,u,v) = h(u,v,x,y) = α(u) α(v) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]

α(u) = sqrt(1/N)  for u = 0
α(u) = sqrt(2/N)  for u = 1, ..., N-1

The DCT is one of the most frequently used transforms for image
compression; for example, the DCT is used in JPEG files.

Advantage: good packing ability, moderate computational complexity
(basis shown for N = 4)
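A direct (slow, O(N^4)) sketch of the 2D DCT straight from the basis functions above; real implementations use fast separable transforms:

```python
import math

def alpha(u, N):
    """Normalization factor of the DCT basis."""
    return math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)

def dct2(f):
    """2D DCT of an NxN block, evaluated term by term from the formula."""
    N = len(f)
    T = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (f[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            T[u][v] = alpha(u, N) * alpha(v, N) * s
    return T

flat = [[10.0] * 4 for _ in range(4)]   # a constant 4x4 block...
T = dct2(flat)
print(round(T[0][0], 3))                # → 40.0: all energy packs into the DC term
```

This illustrates the "packing ability": a smooth block concentrates its energy in a few low-frequency coefficients.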
Transform Coding Examples
Original image: 512x512 pixels; subimage size: 8x8 pixels = 64 pixels.
Quantization by truncating 50% of the coefficients (only the 32
largest coefficients are kept).

Fourier:   RMS error = 1.28
Hadamard:  RMS error = 0.86
DCT:       RMS error = 0.68
DCT vs DFT Coding

DFT coefficients have abrupt changes at the boundaries of blocks.

The advantage of the DCT over the DFT is that the DCT coefficients
are more continuous at the boundaries of blocks.
Subimage Size and Transform Coding Performance

In this experiment, quantization is performed by truncating 75% of
the transform coefficients.

- The DCT performs best.
- A subimage size of 8x8 is sufficient.
Subimage Size and Transform Coding Performance

Reconstructed by using 25% of the DCT coefficients (CR = 4:1), with
subimage sizes of 2x2, 4x4, and 8x8 pixels; zoomed details compared
against the original.
Quantization Process: Bit Allocation

Assign different numbers of bits to transform coefficients based on
the importance of each coefficient:

- More important coefficients are assigned a larger number of bits.
- Less important coefficients are assigned a smaller number of bits,
  or no bits at all.

2 popular bit allocation methods:

1. Zonal coding: allocate bits on the basis of maximum variance,
   using a fixed mask for all subimages.

2. Threshold coding: allocate bits based on the maximum magnitudes
   of the coefficients.
Example: Results with Different Bit Allocation Methods

Both images are reconstructed using 12.5% of the coefficients:
- Threshold coding: the 8 coefficients with the largest magnitude are used.
- Zonal coding: the 8 coefficients with the largest variance are used.

[Figure: reconstructed images, error images, and zoomed details.]
Zonal Coding Example

[Figure: zonal mask and zonal bit allocation.]
Threshold Coding Example

[Figure: threshold mask and thresholded coefficient ordering.]
Thresholding Coding Quantization
3 Popular Thresholding Methods

Method 1: Global thresholding: use a single global threshold value
for all subimages.
Method 2: N-largest coding: keep only the N largest coefficients.
Method 3: Normalized thresholding: each subimage is normalized by a
normalization matrix before rounding:

T^(u,v) = round( T(u,v) / Z(u,v) )

Restoration before decompressing:

T~(u,v) = T^(u,v) Z(u,v)

[Figure: example of a normalization matrix Z(u,v).]
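A sketch of the normalized-thresholding round trip; the T and Z values below are made up for illustration (a real codec uses a tuned normalization matrix):

```python
def quantize(T, Z):
    """Normalized threshold coding: divide each coefficient by the
    normalization matrix entry and round (the lossy step)."""
    return [[round(t / z) for t, z in zip(tr, zr)] for tr, zr in zip(T, Z)]

def dequantize(Q, Z):
    """Restoration before decompressing: multiply back by Z(u,v)."""
    return [[q * z for q, z in zip(qr, zr)] for qr, zr in zip(Q, Z)]

T = [[500, -30], [12, 4]]   # hypothetical 2x2 corner of a coefficient block
Z = [[16, 11], [12, 12]]    # hypothetical normalization entries
Q = quantize(T, Z)
print(Q)                    # → [[31, -3], [1, 0]]
print(dequantize(Q, Z))     # → [[496, -33], [12, 0]]
```

Note how the small coefficient (4) is rounded away entirely: that is where the compression comes from.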
DCT Coding Example

Results at CR = 38:1 and CR = 67:1.

Method: normalized thresholding; subimage size: 8x8 pixels.
Error image at CR = 38:1: RMS error = 3.42.

Zoomed details show blocking artifacts at subimage boundaries.
Wavelet Transform Coding: Multiresolution approach

Encoder:
Input image (NxN) → Wavelet transform → Quantizer → Symbol encoder →
Compressed image

Decoder:
Compressed image → Symbol decoder → Inverse wavelet transform →
Decompressed image

Unlike the DFT and DCT, the wavelet transform is a multiresolution transform.


What is a Wavelet Transform
Once upon a time, humans used tally marks to represent a number.
With that numerical system, we need a lot of space to represent
a number like 1,000,000.

After the Arabic number system was invented, life became much easier:
we can represent a number with a few digits:

X,XXX,XXX

An Arabic number is one kind of multiresolution representation:
The 1st digit = 1x
The 2nd digit = 10x
The 3rd digit = 100x

Like a number, any signal can also be represented by a multiresolution
data structure: the wavelet transform.
What is a Wavelet Transform

The wavelet transform has its background in multiresolution
analysis and subband coding.

Other important background:

- Nyquist theorem: the minimum sampling rate needed to sample a
signal without loss of information is twice the maximum frequency
of the signal.

- We can perform a frequency shift by multiplying by a complex
sinusoidal signal in the spatial domain:

f(x,y) e^(j2π(u0·x + v0·y))  ↔  F(u - u0, v - v0)
Wavelet History: Image Pyramid
If we smooth and then down-sample an image repeatedly, we get a
pyramidal image structure:

Coarser levels: decreased resolution
Finer levels: increased resolution
Image Pyramid and Multiscale Decomposition

Image (NxN) → Smooth → Down-sample by 2 → Image (N/2 x N/2)

Question: what information is lost after down-sampling?

Answer: the lost information is the prediction error image
(the lost details):

Prediction error (NxN) = Image (NxN) - Predicted image (NxN)

where the predicted image is obtained by up-sampling the
N/2 x N/2 image by 2 and interpolating.
Image Pyramid and Multiscale Decomposition (cont.)
Hence we can decompose an image with the following process:

Image (NxN) → [smooth, down-sample by 2] → Approximation image
(N/2 x N/2) → [smooth, down-sample by 2] → Approximation image
(N/4 x N/4) → ...

At each level, the approximation is up-sampled by 2, interpolated,
and subtracted from the previous level to give the prediction error
images (NxN, N/2 x N/2, ...).
Image Pyramid and Multiscale Decomposition (cont.)

Original image (NxN) =
  approximation image (N/8 x N/8)
  + prediction error (N/4 x N/4)
  + prediction error (N/2 x N/2)
  + prediction error (residue) (NxN)

This is a multiresolution representation.

Note that this process is not a wavelet decomposition process!
Example of Pyramid Images

[Figure: approximation images (using Gaussian smoothing) and the
corresponding prediction residues.]
Subband Coding
Subband decomposition process:

x(n), N points → h0(n) (LPF) → down-sample by 2 → approximation a(n), N/2 points
x(n), N points → h1(n) (HPF) → down-sample by 2 → detail d(n), N/2 points
(the detail branch corresponds to a frequency shift by N/2)

All information of x(n) is completely preserved in a(n) and d(n).
Subband Coding (cont.)
Subband reconstruction process:

a(n), N/2 points → up-sample by 2 → interpolate with g0(n) ─┐
                                                            ├→ sum → x(n), N points
d(n), N/2 points → up-sample by 2 → interpolate with g1(n), ┘
                   frequency shift by N/2
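A Haar-filter sketch of the two-band split and its perfect reconstruction; the averaging/differencing pair below is the simplest choice of analysis/synthesis filters:

```python
def haar_analysis(x):
    """One-level subband split: a(n) is the low-pass approximation,
    d(n) the high-pass detail, each of half length."""
    a = [(x[2*i] + x[2*i + 1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i + 1]) / 2 for i in range(len(x) // 2)]
    return a, d

def haar_synthesis(a, d):
    """Perfect reconstruction: all information of x(n) is in a and d."""
    x = []
    for ai, di in zip(a, d):
        x += [ai + di, ai - di]
    return x

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = haar_analysis(x)
print(a)                           # → [5.0, 11.0, 7.0, 5.0]
print(haar_synthesis(a, d) == x)   # → True
```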
Subband Coding (cont.)

2D Subband Coding

Example of 2D Subband Coding
Approximation: filtering in both x and y directions using h0(n).
Vertical detail: filtering in the x-direction using h0(n) and in the
y-direction using h1(n).
Horizontal detail: filtering in the x-direction using h1(n) and in the
y-direction using h0(n).
Diagonal detail: filtering in both x and y directions using h1(n).
1D Discrete Wavelet Transformation

x(n), N points
├→ hψ(n), ↓2 → d1(n), N/2 points
└→ hφ(n), ↓2 ─┬→ hψ(n), ↓2 → d2(n), N/4 points
              └→ hφ(n), ↓2 ─┬→ hψ(n), ↓2 → d3(n), N/8 points
                            └→ hφ(n), ↓2 → a3(n), N/8 points

ψ(n) = a wavelet function (hψ is the high-pass filter)
φ(n) = a scaling function (hφ is the low-pass filter)

Note that the number of points of x(n) and of the wavelet
coefficients (N points in total) are equal.
2D Discrete Wavelet Transformation

d = diagonal detail
h = horizontal detail
v = vertical detail
a = approximation

Original image (NxN):
Level 1: d1, h1, v1, a1 (a1 is decomposed further)
Level 2: d2, h2, v2, a2 (a2 is decomposed further)
Level 3: d3, h3, v3, a3
2D Discrete Wavelet Transformation (cont.)
Original image (NxN) → wavelet coefficients (NxN), arranged as:

+---------+------+----------+
| a3 | h3 |  h2  |          |
| v3 | d3 |      |    h1    |
+---------+------+          |
|   v2    |  d2  |          |
+---------+------+----------+
|       v1       |    d1    |
+----------------+----------+

d = diagonal detail: filtering in both x and y directions using hψ(n)
h = horizontal detail: filtering in the x-direction using hψ(n) and in
the y-direction using hφ(n)
v = vertical detail: filtering in the x-direction using hφ(n) and in
the y-direction using hψ(n)
a = approximation: filtering in both x and y directions using hφ(n)
Example of 2D Wavelet Transformation

[Figure: the original image.]
Example of 2D Wavelet Transformation (cont.)

LL1 HL1

LH1 HH1

The first level wavelet decomposition


Example of 2D Wavelet Transformation (cont.)

LL2 HL2

HL1
LH2 HH2

LH1 HH1

The second level wavelet decomposition


Example of 2D Wavelet Transformation (cont.)

LL3 HL3
HL2
LH3 HH3

HL1
LH2 HH2

LH1 HH1

The third level wavelet decomposition


Example of 2D Wavelet Transformation

Examples: Types of Wavelet Transform

Haar wavelets | Daubechies wavelets
Symlets       | Biorthogonal wavelets
Wavelet Transform Coding for Image Compression

Encoder:
Input image (NxN) → Wavelet transform → Quantizer → Symbol encoder →
Compressed image

Decoder:
Compressed image → Symbol decoder → Inverse wavelet transform →
Decompressed image

Unlike the DFT and DCT, the wavelet transform is a multiresolution transform.


Wavelet Transform Coding Example

CR = 38:1:  error image RMS error = 2.29
CR = 67:1:  error image RMS error = 2.96

Zoomed details show no blocking artifacts.
Wavelet Transform Coding Example (cont.)

CR = 108:1: error image RMS error = 3.72
CR = 167:1: error image RMS error = 4.73

[Figure: reconstructed images, error images, and zoomed details.]
Wavelet Transform Coding vs. DCT Coding

At the same compression ratio (CR = 67:1):

Wavelet coding:  RMS error = 2.96
DCT 8x8 coding:  RMS error = 6.33

[Figure: error images and zoomed details.]
Type of Wavelet Transform and Performance

No. of Wavelet Transform Level and Performance
Threshold Level and Performance

Table 8.14 (Cont)

Table 8.19 (Cont)

Lossless Predictive Coding Model

Lossless Predictive Coding Example

Lossy Predictive Coding Model

Delta Modulation

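Delta modulation is the simplest lossy predictive coder: predict each sample as the previous reconstructed value and quantize the prediction error to a single bit (±δ). A sketch, with made-up sample values:

```python
def delta_modulation(x, delta=1.0):
    """1-bit predictive coder: transmit only the sign of the
    prediction error, stepping the reconstruction by ±delta."""
    recon = [x[0]]                # assume the first sample is sent as-is
    bits = []
    for s in x[1:]:
        e = s - recon[-1]                    # prediction error
        q = delta if e >= 0 else -delta      # 1-bit quantizer
        bits.append(1 if q > 0 else 0)
        recon.append(recon[-1] + q)
    return bits, recon

x = [10.0, 10.5, 11.5, 13.0, 13.2, 13.0]
bits, recon = delta_modulation(x, delta=1.0)
print(bits)    # → [1, 1, 1, 1, 0]
print(recon)   # → [10.0, 11.0, 12.0, 13.0, 14.0, 13.0]
```

With too small a δ the reconstruction cannot keep up with steep slopes (slope overload); with too large a δ it oscillates around flat regions (granular noise).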
Linear Prediction Techniques: Examples

Quantization Function

Lloyd-Max Quantizers

Lossy DPCM
DPCM Result Images
Error Images of DPCM
