
UNIT – I

2 MARKS

1. Define a discrete memoryless channel. APRIL/MAY 2004


2. State the channel capacity theorem. NOV/DEC 2008, APRIL/MAY 2004
3. Draw the Huffman code tree and find the code for the given data:
AAAABBCDAB
4. What is prefix coding? Give one example. APRIL/MAY 2008
5. What is the channel coding theorem?
(Or)
What is the condition to achieve error free communication over the channel?
6. Define Entropy.
7. Write down the properties of entropy.
8. What is source encoding?
Answer:
Source encoding is the process of generating an efficient representation of the
data produced by a discrete source.
9. Define a variable-length code. Give an example.
10. Define the Code efficiency of the source encoder.
11. State Source Coding theorem. APRIL/MAY 2008
12. Draw the transition probability diagram of a Binary Symmetric Channel.
13. A source emits one of four possible symbols during each signaling
interval. The symbols occur with probabilities P0 = 0.4, P1 = 0.3, P2 = 0.2, P3 = 0.1.
Find the amount of information gained by observing each symbol.
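Worked check (a minimal Python sketch; the information gained from a symbol of probability p is I = -log2 p bits):

from math import log2

for p in (0.4, 0.3, 0.2, 0.1):
    print(p, -log2(p))   # information in bits: 1.322, 1.737, 2.322, 3.322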
14. What are the Nyquist rate and the Nyquist interval? NOV/DEC 2006
15. Write down the expression for the entropy of a binary memoryless source.
NOV/DEC 2006 , MAY/JUNE 2006
19. If X represents the outcome of a single roll of a fair die, what is the entropy of X?
NOV/DEC 2007
20. A code is composed of dots and dashes. Assume that a dash is three times as
long as a dot and has one third the probability of occurrence. Calculate the average
information in the dot-dash code. NOV/DEC 2007
21. Calculate the entropy H(X) for a discrete memoryless source X, which has four
symbols x1, x2, x3 and x4 with probabilities p(x1) = 0.4, p(x2) = 0.3, p(x3) = 0.2
and p(x4) = 0.1. MAY/JUNE 2007
22. Consider an additive white Gaussian noise channel with 4 kHz bandwidth and
noise power spectral density η/2 = 10^-2 W/Hz. The signal power required at
the receiver is 0.1 mW. Calculate the capacity of this channel. MAY/JUNE 2007
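Worked check (a minimal Python sketch; the PSD is taken exactly as printed above, which makes the S/N, and hence the capacity, come out very small — substitute the intended value if it differs):

from math import log2

B = 4e3                   # bandwidth, Hz
psd_half = 1e-2           # two-sided noise PSD η/2, W/Hz, as printed
S = 0.1e-3                # signal power, W
N = 2 * psd_half * B      # noise power in bandwidth B
print(B * log2(1 + S / N))   # Shannon capacity, bits/sec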
23. What is a transition matrix? Give its significance. NOV/DEC 2008

8 or 16 Marks:

1. A discrete memoryless source has an alphabet of five symbols whose probabilities
of occurrence are as described here:
Symbols : X1 X2 X3 X4 X5
Probability: 0.2 0.2 0.1 0.1 0.4
Use Huffman encoding for the symbols and find the average codeword length.
Also prove that it satisfies the source coding theorem. MAY/JUNE 2006
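For checking answers to Huffman problems like this one, a minimal Python sketch of the binary Huffman construction (the function and variable names are illustrative, not from any prescribed text):

import heapq

def huffman(probs):
    # each heap entry: (probability, tie-breaker, {symbol: partial codeword})
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least probable nodes
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, n, merged))
        n += 1
    return heap[0][2]

probs = {"X1": 0.2, "X2": 0.2, "X3": 0.1, "X4": 0.1, "X5": 0.4}
code = huffman(probs)
L = sum(probs[s] * len(w) for s, w in code.items())
print(code, "average length L =", L)      # L = 2.2 bits/symbol here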
2. Apply the Shannon-Fano encoding procedure to the following ensemble.
X = { x1, x2, x3, x4, x5, x6, x7, x8, x9 }
P = { 0.49, 0.14, 0.14, 0.07, 0.07, 0.04, 0.02, 0.02, 0.01 }
Find the efficiency η, the average codeword length L, and the redundancy.
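A minimal Python sketch of the Shannon-Fano construction, assuming the usual rule of splitting the probability-sorted list where the two halves have the most nearly equal total probability:

def shannon_fano(items):
    # items: list of (symbol, probability), sorted by descending probability
    if len(items) == 1:
        return {items[0][0]: ""}
    total = sum(p for _, p in items)
    run, split, best = 0.0, 1, float("inf")
    for i in range(1, len(items)):
        run += items[i - 1][1]
        if abs(total - 2 * run) < best:
            best, split = abs(total - 2 * run), i
    code = {s: "0" + w for s, w in shannon_fano(items[:split]).items()}
    code.update({s: "1" + w for s, w in shannon_fano(items[split:]).items()})
    return code

P = [0.49, 0.14, 0.14, 0.07, 0.07, 0.04, 0.02, 0.02, 0.01]
items = [("x%d" % (i + 1), p) for i, p in enumerate(P)]
code = shannon_fano(items)
L = sum(p * len(code[s]) for s, p in items)
print(code, "average length L =", L)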
3. A discrete memoryless source has an alphabet of seven symbols whose
probabilities of occurrence are as described below:
Symbol : S0 S1 S2 S3 S4 S5 S6
Probability : 0.25 0.25 0.0625 0.0625 0.125 0.125 0.125
a) Compute the Huffman code for this source, moving a "combined"
symbol as high as possible.
b) Calculate the coding efficiency and redundancy. APRIL/MAY 2004
4. A voice-grade channel of a telephone network has a bandwidth of 3.4 kHz. Calculate
a) the information capacity of the telephone channel for a signal-to-noise
ratio of 30 dB, and
b) the minimum signal-to-noise ratio required to support information
transmission through the telephone channel at the rate of 9.6 kb/s.
NOV/DEC 2003
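Worked check for both parts (a minimal Python sketch; remember to convert dB to a power ratio before applying C = B log2(1 + S/N)):

from math import log2

B = 3.4e3
snr = 10 ** (30 / 10)            # 30 dB as a power ratio
C = B * log2(1 + snr)            # part (a): ~33.9 kb/s
snr_min = 2 ** (9.6e3 / B) - 1   # part (b): invert C = B log2(1 + S/N)
print(C, snr_min)                # snr_min ~ 6.1, i.e. about 7.8 dB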
5. Find the capacity of a binary symmetric channel in bits/sec, when the probability
of error is 0.1 and the symbol rate is 1000 symbols/sec.
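Worked check (a minimal Python sketch, using the standard BSC result C = 1 - H(p) bits per symbol):

from math import log2

p = 0.1
H = -p * log2(p) - (1 - p) * log2(1 - p)   # binary entropy of the error rate
print((1 - H) * 1000)                      # ~531 bits/sec at 1000 symbols/sec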
6. Develop a Huffman code for a source which emits 6 symbols which are
equiprobable.
X = { X1, X2, X3, X4, X5, X6 }
P = { 1/6, 1/6, 1/6, 1/6, 1/6, 1/6 }
Find the efficiency, H(S) and the average codeword length.
7. Consider that two sources S1 and S2 emit messages x1, x2, x3 and y1, y2, y3
with joint probability P(X,Y) as shown in matrix form:

P(X,Y) =        Y1     Y2     Y3
         X1    3/40   1/40   1/40
         X2    1/20   3/20   1/20
         X3    1/8    1/8    3/8

Calculate the entropies H(X), H(Y), H(X/Y), H(Y/X) and H(X,Y). MAY/JUNE 2007
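Worked check (a minimal Python/NumPy sketch; it uses the identities H(X/Y) = H(X,Y) - H(Y) and H(Y/X) = H(X,Y) - H(X)):

import numpy as np

P = np.array([[3/40, 1/40, 1/40],
              [1/20, 3/20, 1/20],
              [1/8,  1/8,  3/8 ]])       # rows x1..x3, columns y1..y3

def H(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

Hxy = H(P.flatten())                     # joint entropy H(X,Y)
Hx, Hy = H(P.sum(axis=1)), H(P.sum(axis=0))
print(Hx, Hy, Hxy - Hy, Hxy - Hx, Hxy)   # H(X), H(Y), H(X/Y), H(Y/X), H(X,Y)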


8. A discrete memoryless source X has five symbols x1, x2, x3, x4 and x5 with
probabilities p(x1) = 0.4, p(x2) = 0.19, p(x3) = 0.16, p(x4) = 0.15 and p(x5) = 0.1.
(i) Construct a Shannon-Fano code for X, and calculate the efficiency of the
code.
(ii) Repeat for the Huffman code and compare the results. MAY/JUNE 2007
9. A statistical encoding algorithm is being considered for the transmission of a
number of long text files over a public network. Analysis of the file contents has
shown that each file comprises only the six different characters M, F, Y, N, O and L,
each of which occurs with a relative frequency of occurrence of 0.25, 0.25, 0.125,
0.125, 0.125 and 0.125 respectively. Use the Huffman algorithm for encoding these
characters and find the following:
(i) Codewords for each of the characters
(ii) Average number of bits per codeword
(iii) Entropy of the source. NOV/DEC 2006
10. State and explain the source coding theorem. NOV/DEC 2006, MAY/JUNE 2007
11. State and explain the channel coding theorem. NOV/DEC 2006, NOV/DEC 2008
12. An analog signal having 4 kHz bandwidth is sampled at 1.25 times the Nyquist rate
and each sample is quantized into one of 256 equally likely levels. Assume that the
successive samples are statistically independent.
(i) What is the information rate of this source?
(ii) Can the output of the source be transmitted without error over an AWGN
channel with a bandwidth of 10 kHz and an S/N ratio of 20 dB?
(iii) Find the bandwidth required for an AWGN channel for error-free transmission of
the output of this source if the S/N ratio is 25 dB. NOV/DEC 2007
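Worked check for all three parts (a minimal Python sketch):

from math import log2

fs = 1.25 * 2 * 4e3           # sampling rate: 1.25 x Nyquist rate, Hz
R = fs * log2(256)            # part (i): information rate = 80 kb/s
snr20 = 10 ** (20 / 10)
C = 10e3 * log2(1 + snr20)    # capacity of the 10 kHz, 20 dB channel
print(R, C, R <= C)           # part (ii): R > C ~ 66.6 kb/s, so no
snr25 = 10 ** (25 / 10)
print(R / log2(1 + snr25))    # part (iii): minimum bandwidth ~9.63 kHz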
13. A discrete memoryless source X has five equally likely symbols.
(i) Construct a Shannon-Fano code for X, and calculate the efficiency of the
code.
(ii) Repeat for the Huffman code and compare the results. NOV /DEC 2007
14. Encode the following messages with their respective probabilities using the basic
Huffman algorithm:

Message M1 M2 M3 M4 M5 M6 M7 M8
Probability 1/2 1/8 1/8 1/16 1/16 1/16 1/32 1/32

Also calculate the efficiency of the coding and comment on the result.
APRIL/MAY 2008
15. Alphanumeric data are entered into a computer from a remote terminal through a
voice-grade telephone channel. The channel has a bandwidth of 3.4 kHz and an output
signal-to-noise ratio of 20 dB. The terminal has a total of 128 symbols. Assume that
the symbols are equiprobable and the successive transmissions are statistically
independent. Calculate the information capacity of the channel, and the maximum
symbol rate for which error-free transmission over the channel is possible.
APRIL/MAY 2008
16. Find the capacity of a binary symmetric channel in bits/sec, when the probability
of error is 0.1 and the symbol rate is 1000 symbols/sec. NOV/DEC 2005
17. Apply the Huffman encoding procedure to the following message ensemble and
determine the average length of the encoded message. Also determine the coding
efficiency. Use coding alphabet D = 4. There are 10 symbols.
[X] = [x1, x2, x3, ... x10]
P[X] = [0.18, 0.17, 0.16, 0.15, 0.10, 0.08, 0.05, 0.05, 0.04, 0.02] NOV/DEC 2005
18. A discrete memoryless source has an alphabet of five symbols with their
probabilities for its output as given here:
[X] = [X1, X2, X3, X4, X5]
P[X] = [0.45, 0.15, 0.15, 0.10, 0.15]
Compute two different Huffman codes for this source. For these two codes find
(i) Average Code word length
(ii) Efficiency and Redundancy APRIL/MAY 2008

UNIT – II

2 MARKS

1. What is ADPCM?
2. What will happen if speech is coded at low bit rates?
3. Compare and contrast DPCM and ADPCM.
4. Explain adaptive subband coding.
5. Give the principle behind DPCM. NOV/DEC 2008
6. What is granular noise? NOV/DEC 2008
7. Write the condition required to avoid the slope overload distortion in delta
modulation. MAY/JUNE 2007
8. Why is subband coding preferred for speech coding? NOV/DEC 2007
9. What do you mean by slope overload distortion in delta modulation?
NOV/DEC 2007
10. What is quantization noise and on which parameters does it depend?
APRIL/MAY 2008
11. Differentiate vocoder and waveform coder. APRIL/MAY 2008
12. Draw the block diagram for a differential pulse code modulator. NOV/DEC 2005
13. Explain slope overloading. NOV/DEC 2005

8 or 16 Marks:

1. With a block diagram, explain the operation of a basic ADPCM. Also,
explain how a basic ADPCM scheme obtains improved performance over a DPCM
scheme. NOV/DEC 2007
2. Explain the working principle of Delta Modulation. State the drawbacks of
delta modulation and suggest solutions. NOV/DEC 2007
3. Explain Adaptive Quantization and Prediction with Backward estimation
in ADPCM system with block diagrams. APRIL/MAY 2008
4. Explain the Delta Modulation (DM) system with block diagrams. What is slope
overload error? State the condition to avoid slope overload errors. How are granular
noise and slope overload error minimized in ADM systems? APRIL/MAY 2008
5. Consider a sinusoidal signal m(t) = A cos(ωm t) applied to a delta modulator with
step size Δ. Show that slope overload noise will occur if A > Δ/(ωm Ts).
NOV/DEC 2003
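For reference, a sketch of the standard argument: the maximum slope of m(t) = A cos(ωm t) is A ωm, while the delta modulator can track a slope of at most Δ/Ts (one step Δ per sample interval Ts). Slope overload therefore begins when A ωm > Δ/Ts, i.e. when A > Δ/(ωm Ts).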
6. A delta modulator is designed to operate at 3 times the Nyquist rate for a
signal with 3 kHz bandwidth. The quantization step size is 250 mV. Determine the
maximum amplitude of a 1 kHz input sinusoid for which the DM does not have slope
overload noise.
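Worked check (a minimal Python sketch, applying the condition from question 5 with ωm = 2πfm and Ts = 1/fs):

from math import pi

fs = 3 * 2 * 3e3                     # 3 x Nyquist rate of a 3 kHz signal, Hz
delta = 0.25                         # step size, volts
fm = 1e3                             # input frequency, Hz
A_max = delta * fs / (2 * pi * fm)   # no overload while A <= delta/(wm*Ts)
print(A_max)                         # ~0.716 V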
7. With a neat block diagram, explain Pulse Code Modulation (PCM).
8. Compare PCM, DM, ADM, DPCM and ADPCM.
9. Write notes on:
(i) Adaptive Subband coding. NOV/DEC 2008
(ii) Adaptive Delta Modulation.
10. Write notes on:
(i) LPC. NOV/DEC 2008
(ii) Adaptive Differential Pulse Code Modulation.

11. With a neat sketch and supporting mathematical expressions, briefly explain the
working principle of differential pulse code modulation. MAY/JUNE 2007
12. Briefly describe the two schemes available for coding speech signals
at low bit rates, namely adaptive differential pulse code modulation and
adaptive subband coding. MAY/JUNE 2007
13. Explain a PCM system to digitize a speech signal. What are A-law and µ-law?
APRIL/MAY 2008
14. A PCM system uses a uniform quantizer followed by a 7-bit binary encoder. The
bit rate of the system is equal to 50 × 10^6 bits/sec.
(i) What is the maximum message bandwidth for which the system operates
satisfactorily?
(ii) Determine the output signal-to-quantization-noise ratio when a full-load
sinusoidal modulating wave of frequency 1 MHz is applied to the input.
APRIL/MAY 2008
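Worked check (a minimal Python sketch; it uses the standard full-load sinusoid result SQNR ≈ 1.8 + 6n dB for an n-bit quantizer):

bit_rate = 50e6              # bits/sec
n = 7                        # bits per sample
fs = bit_rate / n            # sampling rate, samples/sec
W = fs / 2                   # part (i): maximum message bandwidth (Nyquist)
sqnr_db = 1.8 + 6 * n        # part (ii): output SQNR in dB
print(W, sqnr_db)            # ~3.57 MHz, 43.8 dB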

UNIT – III

2 MARKS

1. What is a Generator polynomial? Give some standard generator polynomials.


2. What are Convolutional Codes? How are they different from block Codes?
3. What are Code Tree, Code Trellis and State diagrams for Convolutional encoders?
4. Define Hamming distance and Hamming Weight. MAY/JUNE 2007
5. Show that C = { 000, 001, 101 } is not a linear code. MAY/JUNE 2007
6. What is meant by systematic and non-systematic codes?
7. What is meant by Linear block Codes?
8. What is meant by Cyclic Codes?
9. Define syndrome.
10. What are the properties of Linear Block Codes?
11. What are the properties of Cyclic Codes?
12. What are the properties of Syndrome?
13. When is a code said to be linear? NOV/DEC 2006
14. What is the essence of Huffman coding? NOV/DEC 2006
15. State two properties of the syndrome (used in linear block codes). NOV/DEC 2007
16. What do you mean by code rate and constraint length in a convolutional code?
NOV/DEC 2007
17. Define syndrome in error correction coding. APRIL/MAY 2008, NOV/DEC 2008
18. Why are cyclic codes extremely well suited for error detection? APRIL/MAY 2008
19. Give the error correcting capability of a linear block code. NOV/DEC 2008

8 MARKS

1. a) Define a linear block code. (2)
b) How do you find the parity check matrix? (4)
c) Give the syndrome decoding algorithm. (4)
d) Design a linear block code with dmin ≥ 3 for some block length n = 2^m - 1. (6)
e) Consider a Hamming code C which is determined by the parity check matrix
1 1 0 1 1 0 0
 
H = 1 0 1 1 0 1 0
0 1 1 1 0 0 1
 

i) Show that the two vectors C1 = (0010011) and C2 = (0001111) are
codewords of C and calculate the Hamming distance between them.
ii) Assume that a codeword c was transmitted and that a vector r = c + e is
received. Show that the syndrome s = r·H^T depends only on the error vector e.
iii) Calculate the syndromes for all possible error vectors e with Hamming
weight ≤ 1 and list them in a table. How can this be used to correct a single
bit error in an arbitrary position?
iv) What are the length n and the dimension k of the code? Why can the minimum
Hamming distance dmin not be larger than three?
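Parts (i) and (iii) can be checked numerically (a minimal Python/NumPy sketch; all arithmetic is modulo 2):

import numpy as np

H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

for c in ([0,0,1,0,0,1,1], [0,0,0,1,1,1,1]):
    print(c, (H @ np.array(c)) % 2)   # all-zero syndrome => codeword of C

for i in range(7):                    # syndrome table for weight-1 errors
    e = np.zeros(7, dtype=int)
    e[i] = 1
    print(i + 1, (H @ e) % 2)         # distinct syndrome per error position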
2. The generator matrix for a (6,3) block code is given below. Find all the codewords
of this code.

        | 1 0 0 0 1 1 |
    G = | 0 1 0 1 0 1 |
        | 0 0 1 1 1 0 |

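Worked check (a minimal Python/NumPy sketch; the codewords are c = mG over GF(2) for all 2^3 messages):

import numpy as np
from itertools import product

G = np.array([[1,0,0,0,1,1],
              [0,1,0,1,0,1],
              [0,0,1,1,1,0]])

for m in product([0, 1], repeat=3):   # all 2^3 messages
    print(m, np.dot(m, G) % 2)        # codeword = m G (mod 2)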
3. Consider the (7,4) code defined by the generator polynomial g(x) = 1 + x + x^3.
The codeword 0111001 is sent over a noisy channel, producing a received word
0101001 that has a single error. Determine the syndrome polynomial S(x) and error
polynomial e(x).
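Worked check (a minimal Python sketch of polynomial division over GF(2); it assumes the leftmost bit of a word is the coefficient of x^0 — reverse the lists for the opposite convention):

def gf2_rem(dividend, divisor):
    # remainder of polynomial division over GF(2); coefficients low-to-high
    r = dividend[:]
    for i in range(len(r) - 1, len(divisor) - 2, -1):
        if r[i]:
            for j, d in enumerate(divisor):
                r[i - len(divisor) + 1 + j] ^= d
    return r[:len(divisor) - 1]

g = [1, 1, 0, 1]                  # g(x) = 1 + x + x^3
r = [0, 1, 0, 1, 0, 0, 1]         # received word 0101001
print(gf2_rem(r, g))              # [0, 0, 1], i.e. S(x) = x^2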
4. For a (6,3) systematic linear block code, the three parity check bits C4, C5, C6 are
formed from the following equations: MAY/JUNE 2007
C4 = d1+d3
C5 = d1+d2+d3
C6 = d1+d2
i) Write down the generator matrix
ii) Construct all possible codewords
iii) Suppose that the received word is 01011. Decode this received word by
finding the location of the error and the transmitted data bits.

5. Consider a (7,4) cyclic code with generator polynomial g(x) = 1 + x + x^3. Let the
data be d = (1010). Find the corresponding systematic codeword. MAY/JUNE 2007
6. Consider the (7,4) Hamming code defined by the generator polynomial
g(x) = 1 + x + x^3. The codeword 1000101 is sent over a noisy channel, producing the
received word 0000101 that has a single error. Determine the syndrome
polynomial s(x) for this received word. Find its corresponding message vector m
and express m in polynomial form m(x). NOV/DEC 2006
7. How is the syndrome calculated in cyclic codes? NOV/DEC 2006
8. The SEC (7,4) Hamming code can be converted into a double error detecting and
single error correcting (8,4) code by using an extra parity check. Construct the
generator matrix for the code and also construct encoder and decoder for the code.
NOV/DEC 2007
9. A convolutional encoder has a single shift register with 2 stages, 3 modulo-2 adders
and an output multiplexer. The generator sequences of the encoder are as follows:
g1 = (1,0,1), g2 = (1,1,0) and g3 = (1,1,1). Draw (i) the block diagram of
the encoder and (ii) the state diagram, and also explain the working principle of the
encoder. NOV/DEC 2007
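A minimal Python sketch of such an encoder (it assumes each generator's taps apply to the current input followed by the two shift-register stages, matching the sequences given above):

def conv_encode(bits, generators):
    # feed bits through a 2-stage shift register; multiplex one output per adder
    state = [0, 0]
    out = []
    for b in bits:
        window = [b] + state
        for gen in generators:
            out.append(sum(g * w for g, w in zip(gen, window)) % 2)
        state = [b] + state[:-1]
    return out

gens = [(1, 0, 1), (1, 1, 0), (1, 1, 1)]
print(conv_encode([1, 0, 1, 1], gens))   # three output bits per input bit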
10. Construct a convolutional encoder for the following specifications:
rate efficiency = 1/2, constraint length = 4. The connections from the shift registers
to the modulo-2 adders are described by the following equations:
g1(x) = 1 + x
g2(x) = x
Determine the output codeword for the input message 1110. APRIL/MAY 2008
11. Explain cyclic codes with their generator polynomial and parity check polynomial.
NOV/DEC 2008
12. Consider the (7,4) Hamming code with

        | 1 1 0 |
    P = | 0 1 1 |
        | 1 1 1 |
        | 1 0 1 |

Determine the codeword for the message 0010. Suppose the codeword 1100010 is
received. Determine if the codeword is correct. If it is in error, correct the error.
NOV/DEC 2008
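Worked check (a minimal Python/NumPy sketch, assuming the systematic forms G = [I | P] and H = [P^T | I]):

import numpy as np

P = np.array([[1,1,0],
              [0,1,1],
              [1,1,1],
              [1,0,1]])
G = np.hstack([np.eye(4, dtype=int), P])      # systematic generator [I | P]
H = np.hstack([P.T, np.eye(3, dtype=int)])    # parity-check matrix [P^T | I]

m = np.array([0, 0, 1, 0])
print(np.dot(m, G) % 2)                       # codeword for message 0010
r = np.array([1, 1, 0, 0, 0, 1, 0])
print((H @ r) % 2)                            # nonzero syndrome => error; match
                                              # it to a column of H to locate it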

UNIT – IV
2 MARKS

1. Differentiate lossless and lossy compression techniques and give one example
for each.
2. Compare Static Coding and Dynamic Coding.
3. Compare Arithmetic Coding and Huffman Coding
4. What is Run length Coding?
5. What is GIF Interlaced mode?
6. What is JPEG standard?
7. Give some examples of lossy and lossless compression algorithms.
NOV/DEC 2006
8. What is the major advantage of adaptive Huffman coding over static
Huffman coding? NOV/DEC 2007, MAY/JUNE 2007
9. Write the formula for quantization which is used in JPEG compression.
NOV/DEC 2007
10. List the three tokens available at the output of the entropy encoder in the JPEG
algorithm. MAY/JUNE 2007
11. Distinguish between global color table and local color table in GIF.
MAY/JUNE 2007
12. How is dynamic Huffman coding different from basic Huffman coding?
APRIL/MAY 2008
13. Why is the Graphics Interchange Format used extensively on the internet?
APRIL/MAY 2008
14. Define Statistical encoding. NOV/DEC 2008
15. What is differential encoding? NOV/DEC 2008

8 MARKS

1. Explain the principles of Arithmetic coding. APRIL/MAY 2008


2. Explain Static Coding with an example.
3. Explain Dynamic Coding with an example.
4. Explain in detail GIF and TIFF.
5. With a neat block diagram explain in detail about JPEG encoder /Decoder.
6. Discuss the various stages in the JPEG standard.
7. With the aid of an example, describe how arithmetic coding can be used for text
compression. NOV/DEC 2006
8. With the aid of a block diagram explain how digitized pictures are compressed.
NOV/DEC 2006
9. With suitable examples, briefly explain static Huffman coding and dynamic Huffman
coding. Also compare them. NOV/DEC 2007
10. With the aid of a diagram, describe the interlaced mode of operation of GIF and
also describe the principles of TIFF. NOV/DEC 2007
11. Briefly describe the procedures followed in two of the text compression algorithms
given below:
(i) Dynamic Huffman coding MAY/JUNE 2007
(ii) Arithmetic coding
12. With suitable block diagram, briefly explain JPEG encoder and JPEG decoder.
MAY/JUNE 2007
13. Consider the transmission of a message comprising a string of characters with
probabilities of:
e = 0.3, n = 0.3, t = 0.2, w = 0.1, z = 0.1
Use the arithmetic coding technique to encode this string. APRIL/MAY 2008
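A minimal Python sketch of the interval-narrowing step of arithmetic coding (the sample string "went" is only illustrative, since the question does not reproduce the string to be encoded):

def arithmetic_encode(msg, probs):
    # return the final [low, high) interval; any number inside encodes msg
    ranges, c = {}, 0.0
    for s, p in probs.items():        # cumulative range for each symbol
        ranges[s] = (c, c + p)
        c += p
    low, high = 0.0, 1.0
    for s in msg:
        span = high - low
        lo, hi = ranges[s]
        low, high = low + span * lo, low + span * hi   # narrow the interval
    return low, high

probs = {"e": 0.3, "n": 0.3, "t": 0.2, "w": 0.1, "z": 0.1}
print(arithmetic_encode("went", probs))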
14. List the different types of lossless and lossy data compression techniques.
APRIL/MAY 2008
15. Why are lossy compression techniques used for speech, audio and video? Justify
your answer with numeric calculations. APRIL/MAY 2008
16. Explain dynamic Huffman coding. Illustrate it for the message "This is".
NOV/DEC 2008
17. Write notes on the following:
(i) GIF.
(ii) Digitized Documents. NOV/DEC 2008

UNIT – V

2 MARKS

1. What is LPC?
2. What is CELP?
3. Define pitch, period and loudness.
4. What is Perceptual Coding?
5. What is MPEG?
6. How does CELP provide better quality than LPC in speech coding? APRIL/MAY 2008
8. Mention two basic properties of linear prediction. MAY/JUNE 2007
9. List the three features which determine the perception of a signal by the ear.
NOV/DEC 2007
10. Define the terms 'group of pictures' and 'prediction span' with respect to video
compression. NOV/DEC 2007

8 MARKS

1. With block schematic diagram explain an LPC coder and decoder.


APRIL/MAY 2008
2. Compare H.261 and MPEG – 1 standard. APRIL/MAY 2008
3. With a suitable block diagram, briefly explain the implementation schematic
of H.261. Also briefly explain the macroblock and frame/picture encoding formats
of H.261. MAY/JUNE 2007
4. In connection with perceptual coding, briefly describe the following concepts
(i) Frequency Masking (ii) Temporal Masking MAY/JUNE 2007
5. With a neat block diagram, explain the Dolby AC-1 and Dolby AC-2 audio coders.
6. State the intended applications of MPEG-1, MPEG-2 and MPEG-4. APRIL/MAY 2008
7. Explain the principles of LPC. Draw the schematic of an LPC encoder and decoder,
and identify and explain the perception parameters and associated vocal tract
excitation parameters. NOV/DEC 2007
8. State and explain the encoding procedure used with (i) the motion vector and (ii) P
and B frame. Draw the necessary sketches. NOV/DEC 2007
9. Explain the encoding procedure of I, P, and B frame in video compression using
necessary diagram. NOV/DEC 2008
10. Explain principles of video encoding based on H.261 standard. NOV/DEC 2008
