Contents
1 Parameters of Block Codes I
2 Parameters of Block Codes II
3 Parity Check Code
4 Weight Distribution
5 Hard-Decision Decoding
6 Soft-Decision Decoding
7 ML vs. MAP Decoding
8 Generator Matrix, Parity Check Matrix
9 Cyclic Block Codes I
10 Cyclic Block Codes II
11 Viterbi Decoding with Soft Decision
12 Punctured Convolutional Codes
13 Weight Enumerating Function
14 Concatenated Codes/Log-Likelihood Ratio
15 Group, Field
16 Euclidean Algorithm
17 Prime Field GF(7)
18 RS Code over GF(7)
19 Theorems of the DFT
20 Properties of RS Codes
Information and Coding Theory Lab, Faculty of Engineering, University of Kiel, Germany
1 Parameters of Block Codes I

Consider a channel code with parameters (3, 2, 2)_2.
a) What does the set U of info words look like? Determine the length k of a (binary) info word if there are |U| info words altogether.
b) How many different block codes with the above parameters exist? By which code rate are they characterized?
c) Suppose one of these block codes is selected. How many different encoders exist?
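For part b), a brute-force enumeration (a Python sketch, not the intended pen-and-paper argument) can serve as a cross-check:

```python
from itertools import combinations, product

# Enumerate all 4-element subsets of {0,1}^3 (|U| = 2^2 = 4 code words) and
# keep those whose minimum pairwise Hamming distance is exactly dmin = 2.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

words = list(product([0, 1], repeat=3))          # all 2^3 candidate words
codes = [c for c in combinations(words, 4)
         if min(hamming(a, b) for a, b in combinations(c, 2)) == 2]
print(len(codes))                                # number of (3, 2, 2)_2 codes
```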
3 Parity Check Code

A binary parity check code shall be constructed according to the following rules:
Derive the code table and determine the parameters of the code. Is the code able to correct errors?
4 Weight Distribution

The weight distribution {a_0, a_1, ..., a_n} of a code C with length n gives the number a_j of code words having Hamming weight j. Derive the weight distributions of the following codes:
a) the (3, 2, 2)_2 single parity check code,
b) the (3, 1, 3)_2 repetition code,
c) the parity check code of exercise 3.
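The distributions of parts a) and b) can be cross-checked by enumeration (a sketch; part c) needs the code table of exercise 3, which is not reproduced here):

```python
from itertools import product

# Count, for each Hamming weight j, the number a_j of code words of weight j.
def weight_distribution(code, n):
    dist = [0] * (n + 1)
    for c in code:
        dist[sum(c)] += 1
    return dist

spc = [c for c in product([0, 1], repeat=3) if sum(c) % 2 == 0]  # (3,2,2)_2 SPC
rep = [(0, 0, 0), (1, 1, 1)]                                     # (3,1,3)_2 repetition
print(weight_distribution(spc, 3))    # [a_0, a_1, a_2, a_3]
print(weight_distribution(rep, 3))
```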
5 Hard-Decision Decoding
Consider a transmission over a BSC (binary symmetric channel) characterized by the error probability P_BSC. In this context, the block code of exercise 3 shall be employed. The decoder receives the word y = [011001].
a) Calculate the Hamming distances between the received word and all valid code words. Which code word will be selected by the decoder? Determine the corresponding info word.
b) Calculate the probability that y is received if the info word [010] is transmitted.
c) Calculate the probability that the decoder yields the info word [110] although the info word [011] was encoded and transmitted. Assume that a BMD decoder (bounded minimum distance decoder) with maximum decoding radius is used.
d) In which cases is the decoder unable to detect transmission errors? Calculate the probability of this event.
6 Soft-Decision Decoding
Consider a transmission over an AWGN channel, where a (3, 2, 2)_2 single parity check code is used. The bits are mapped to ±1 according to the rule 0 → +1, 1 → −1.
The demodulator yields real values (soft outputs) that are passed on to the decoder. Consider the received word [−1.3, −0.1, −0.2] (demodulator output).
a) Calculate the squared Euclidean distances between the received word and all valid code words. Which (estimated) info word is obtained by means of soft-decision decoding?
b) Which received word is passed on to the decoder if the demodulator yields hard outputs only, i.e., zeros and ones? Calculate the Hamming distances between this received word and all valid code words. Which (estimated) info word is obtained by means of hard-decision decoding?
c) Carry out both soft-decision and hard-decision decoding of the received word [−1.3, +0.1, −0.2]. Is the result surprising compared to that for the received word [−1.3, −0.1, −0.2]?
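Both decoding rules can be sketched as follows; note that the signs of the received values used here are reconstructed assumptions, since they did not survive the original typesetting:

```python
from itertools import product

# Soft vs. hard decision for the (3, 2, 2)_2 SPC code with BPSK 0 -> +1, 1 -> -1.
code = [c for c in product([0, 1], repeat=2)]
codewords = [u + ((u[0] + u[1]) % 2,) for u in code]           # append parity bit
bpsk = lambda x: tuple(1 - 2 * b for b in x)                    # 0 -> +1, 1 -> -1

r = [-1.3, -0.1, -0.2]                                          # assumed signs
# Soft decision: minimize the squared Euclidean distance.
soft = min(codewords, key=lambda x: sum((ri - si) ** 2 for ri, si in zip(r, bpsk(x))))
# Hard decision: threshold first, then minimize the Hamming distance.
hard_in = tuple(0 if ri >= 0 else 1 for ri in r)                # here: (1, 1, 1)
hard = min(codewords,                                           # note the three-way
           key=lambda x: sum(a != b for a, b in zip(hard_in, x)))  # distance tie here;
print(soft, hard)                                               # min keeps the first
```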
7 ML vs. MAP Decoding

Speech data shall be transmitted over a BSC with bit error probability 0.2. For error protection, the systematic (6, 3, 3)_2 parity check code of exercise 3 is employed. As the speech encoder does not work ideally, the different info words are not equiprobable; they occur according to the following distribution:

u       P(u)
[000]   0.10
[001]   0.05
[010]   0.20
[011]   0.05
[100]   0.05
[101]   0.10
[110]   0.35
[111]   0.10
Over the BSC, the decoder receives the word y = [100 111].
a) Perform ML decoding of y.
b) Perform MAP decoding of y.
c) Under which conditions are ML and MAP decoding equivalent?
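The two decoding rules can be contrasted in code. Since the code table of exercise 3 is not reproduced on this sheet, the sketch below uses a stand-in systematic (6, 3, 3)_2 parity check code (its parity equations are an assumption, chosen purely for illustration), together with the priors and the received word from the sheet:

```python
p = 0.2                                   # BSC error probability

def encode(u):                            # hypothetical (6, 3, 3)_2 stand-in code
    u1, u2, u3 = u
    return (u1, u2, u3, (u2 + u3) % 2, (u1 + u3) % 2, (u1 + u2) % 2)

priors = {(0,0,0): 0.10, (0,0,1): 0.05, (0,1,0): 0.20, (0,1,1): 0.05,
          (1,0,0): 0.05, (1,0,1): 0.10, (1,1,0): 0.35, (1,1,1): 0.10}

y = (1, 0, 0, 1, 1, 1)

def likelihood(u):
    d = sum(a != b for a, b in zip(y, encode(u)))   # Hamming distance d(y, x(u))
    return p ** d * (1 - p) ** (6 - d)              # P(y | x(u))

ml   = max(priors, key=likelihood)                          # argmax P(y|x)
mapd = max(priors, key=lambda u: likelihood(u) * priors[u]) # argmax P(y|x) P(u)
print(ml, mapd)   # with these priors the two rules disagree
```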
8 Generator Matrix, Parity Check Matrix

The generator matrix of an (n, k, dmin)_2 block code consists of k linearly independent rows of length n. It is easy to see that the individual rows of the generator matrix are themselves valid code words (to which info words do they belong?). Conversely, a generator matrix can be constructed from any set of k linearly independent code words. In the following, consider the parity check code of exercise 3.
a) Construct a generator matrix from the code words [001 011], [011 101], [101 110] and derive the code table.
b) How can one find a systematic generator matrix? What does the corresponding code table look like?
c) In which way are the code and the encoder altered if one permutes the rows of the generator matrix, and in which way if one permutes its columns?
d) Given a systematic generator matrix G = [I_k | P], the parity check matrix is determined by H = [P^T | I_{n−k}]. According to this rule, calculate the parity check matrix of the given code.
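Parts b) and d) can be cross-checked with a small Gaussian-elimination sketch over GF(2), starting from the three code words given in a):

```python
# Build G from the given code words, bring it to systematic form [I_k | P]
# by row operations over GF(2), and verify H = [P^T | I_{n-k}] against it.
G = [[0,0,1,0,1,1],
     [0,1,1,1,0,1],
     [1,0,1,1,1,0]]
k, n = len(G), len(G[0])

def rref_gf2(M):
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][c]), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                M[i] = [(a + b) % 2 for a, b in zip(M[i], M[r])]
        r += 1
        if r == len(M):
            break
    return M

Gs = rref_gf2(G)                 # systematic form (works here because the
P = [row[k:] for row in Gs]      # first k columns are linearly independent)
H = [[P[j][i] for j in range(k)] + [1 if t == i else 0 for t in range(n - k)]
     for i in range(n - k)]
# Every row of G (and hence every code word) must satisfy x H^T = 0.
checks = [sum(g[t] * h[t] for t in range(n)) % 2 for g in Gs for h in H]
print(Gs, checks)
```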
9 Cyclic Block Codes I

The (7, 4, 3)_2 Hamming code consists of the following code words:
The generator polynomial of the equivalent cyclic code is given by g(D) = 1 + D + D^3.
a) On the basis of the code C_0, derive a systematic generator matrix G_s.
b) On the basis of the generator polynomial, derive a generator matrix G_p.
c) By permuting the columns of G_s and applying elementary row operations, transform the matrix G_s into G_p.
d) Encode the info word u = [0111] by means of (a) the generator matrix G_p and (b) the generator polynomial g(D).
10 Cyclic Block Codes II

In the following, the info word u = [0111] shall first be encoded non-systematically and then systematically.
a) The equation x(D) = u(D) · g(D) defines the non-systematic encoding. Calculate the code word associated with u. What does the corresponding shift-register representation look like?
b)* In the case of a systematic cyclic block code, the info bits are typically located in the upper bit positions of the code word, i.e., the info word u(D) = u_0 + u_1 D + u_2 D^2 + u_3 D^3 is encoded into the code word x(D) = r_0 + r_1 D + r_2 D^2 + u_0 D^3 + u_1 D^4 + u_2 D^5 + u_3 D^6. How can one determine the polynomial r(D) = r_0 + r_1 D + r_2 D^2? Calculate the code word associated with u. What does the corresponding shift-register representation look like here?
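Both encoding variants can be sketched with simple polynomial arithmetic over GF(2) (coefficient lists, lowest power first):

```python
g = [1, 1, 0, 1]          # g(D) = 1 + D + D^3
u = [0, 1, 1, 1]          # u(D) = D + D^2 + D^3

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % 2
    return out

def poly_mod(a, m):        # remainder of a(D) / m(D) over GF(2)
    a = a[:]
    for i in range(len(a) - 1, len(m) - 2, -1):
        if a[i]:
            for j, mj in enumerate(m):
                a[i - len(m) + 1 + j] = (a[i - len(m) + 1 + j] + mj) % 2
    return a[:len(m) - 1]

x_nonsys = poly_mul(u, g)            # a) x(D) = u(D) g(D)
shifted = [0, 0, 0] + u              # b) D^3 u(D)
r = poly_mod(shifted, g)             #    r(D) = D^3 u(D) mod g(D)
x_sys = r + u                        #    [r_0 r_1 r_2 | u_0 u_1 u_2 u_3]
print(x_nonsys, x_sys)
```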
11 Viterbi Decoding with Soft Decision
Consider a transmission over an AWGN channel, where a terminated convolutional code with (octal) generator polynomials (5, 7)_octal is used. The info word length is 5 (BPSK mapping 0 → +1, 1 → −1). In the following, soft-decision decoding shall be performed by means of the Viterbi algorithm.
a) Sketch the trellis diagram of the above code. By which code rate is the code characterized?
b) In which way does one calculate the branch metrics?
c) Decode the received word
y = [1.2, 0.4, 0.3, 0.2, 0.8, 1.2, 0.5, 0.8, 1.4, 1.3, 0.2, 0.6, 1.5, 0.1]
using the Viterbi algorithm. For this purpose, first calculate the branch metrics, then carry out the add-compare-select operations, and finally perform the traceback.
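The procedure can be sketched as follows. Because the signs of the received word above were lost in the original typesetting, the sketch instead verifies the decoder on the noiseless BPSK image of an assumed info word:

```python
# Soft-decision Viterbi decoding for the (5, 7)_octal rate-1/2 code
# (memory 2, zero-terminated). Branch metric: correlation with BPSK symbols.
TAPS = ((1, 0, 1), (1, 1, 1))           # 5_octal = 101, 7_octal = 111

def encode(bits):                        # bits include the two tail zeros
    state, out = (0, 0), []
    for b in bits:
        reg = (b,) + state
        out += [sum(r * t for r, t in zip(reg, taps)) % 2 for taps in TAPS]
        state = (b, state[0])
    return out

def viterbi(r):                          # r: soft values, BPSK 0 -> +1, 1 -> -1
    metrics = {(0, 0): (0.0, [])}        # state -> (path metric, survivor path)
    for step in range(0, len(r), 2):
        new = {}
        for state, (m, path) in metrics.items():
            for b in (0, 1):             # add-compare-select for each branch
                reg = (b,) + state
                sym = [1 - 2 * (sum(x * t for x, t in zip(reg, taps)) % 2)
                       for taps in TAPS]
                metric = m + sym[0] * r[step] + sym[1] * r[step + 1]
                nxt = (b, state[0])
                if nxt not in new or metric > new[nxt][0]:
                    new[nxt] = (metric, path + [b])
        metrics = new
    return max(metrics.values())[1]      # best surviving path (traceback result)

info = [1, 0, 1, 1, 0]                   # assumed info word
tx = [1 - 2 * c for c in encode(info + [0, 0])]
decoded = viterbi([float(v) for v in tx])
print(decoded[:5])
```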
12 Punctured Convolutional Codes
By means of the puncturing matrix
P = [1 1; 1 0],
a code of higher rate shall be derived from the convolutional code given by the generator polynomials (5, 7)_octal.
a) What does the trellis diagram look like if the info word length is 5 and zero tailing is applied? Which code rate does one obtain?
b) In which way does one calculate the branch metrics for the punctured code if soft-decision decoding shall be applied?
c) By which free distance d_free is the mother code characterized, and by which the punctured code?
d) Which asymptotic coding gain results for the mother code and which for the punctured code when the info word length is 5 (zero tailing applied)? Which asymptotic coding gains result in the case of an infinite info word length?
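The effect of the puncturing matrix can be sketched as follows (the interleaving order of the two output streams is an assumption): per period, 2 info bits produce 4 mother-coded bits, of which one is deleted, giving rate 2/3.

```python
PUNCT = [[1, 1],        # first output stream:  transmit, transmit
         [1, 0]]        # second output stream: transmit, puncture

def puncture(coded):
    """coded: interleaved stream c1[0], c2[0], c1[1], c2[1], ..."""
    out = []
    for t in range(len(coded) // 2):
        for stream in range(2):
            if PUNCT[stream][t % 2]:
                out.append(coded[2 * t + stream])
    return out

def depuncture(received, n_steps):
    """Re-insert neutral soft values (0.0) at the punctured positions."""
    out, it = [], iter(received)
    for t in range(n_steps):
        for stream in range(2):
            out.append(next(it) if PUNCT[stream][t % 2] else 0.0)
    return out

coded = [1, 1, 0, 1, 1, 0, 0, 0]       # two periods of a rate-1/2 stream
kept = puncture(coded)                  # 8 bits in, 6 bits out -> rate 2/3
print(kept, depuncture([float(b) for b in kept], 4))
```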
13 Weight Enumerating Function
For the convolutional code defined by the generator polynomials (3, 1)_octal, the weight distribution shall be derived. For this purpose determine the following items:
a) generator polynomials, encoder, and trellis segment,
b) state representation and modified state representation,
c) weight enumerating function T(w_c, w_i) and thus the coefficients a_d.
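As a cross-check for c), instead of closing the modified signal flow graph analytically, the coefficients a_d can be obtained by brute-force enumeration of detours (paths leaving the zero state and returning to it):

```python
from collections import Counter

# (3, 1)_octal: g1 = 1 + D, g2 = 1; one memory element, state = previous input.
def step(state, b):
    """One trellis segment: returns (output weight, next state)."""
    return (b + state) % 2 + b, b      # outputs are (u + u_prev) and u

def detour_weights(max_steps):
    a = Counter()
    w0, s0 = step(0, 1)                # every detour starts with input 1
    paths = [(s0, w0)]                 # (state, accumulated output weight)
    for _ in range(max_steps - 1):
        nxt = []
        for s, w in paths:
            for b in (0, 1):
                dw, s2 = step(s, b)
                if s2 == 0:
                    a[w + dw] += 1     # back at the zero state: detour ends
                else:
                    nxt.append((s2, w + dw))
        paths = nxt
    return dict(sorted(a.items()))

print(detour_weights(8))   # a_d for all detours of up to 8 trellis segments
```

The smallest key is the free distance of the code.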
14 Concatenated Codes/Log-Likelihood Ratio
In the following, a two-dimensional block code, referred to as a product code, shall be considered. Such a product code could, for example, be employed to encode an image file. The product code regarded here consists of a horizontal and a vertical (3, 2, 2)_2 single parity check code. The systematic part of the product code is given by the four info bits u11, u12, u21, and u22. The horizontal parity bits are denoted p1→, p2→ and the vertical ones p1↓, p2↓ (see figure).
For transmission, BPSK mapping (0 → +1, 1 → −1) is applied and an AWGN channel is assumed.
a) Compare the distance properties of the product code to those of the component codes.
b) Perform iterative decoding of the product code given the following received symbols:
u11 = +1.0, u12 = +3.0, u21 = +1.5, u22 = +0.5,
p1→ = +2.5, p2→ = −0.5
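One horizontal decoding pass can be sketched with the min-sum approximation for a single parity check constraint (a simplification of exact log-likelihood-ratio combining; the sign of the last received value is an assumption, as signs did not survive the original typesetting):

```python
# Min-sum extrinsic information for one SPC row: the extrinsic value of a bit
# is the product of the signs times the minimum magnitude of the OTHER bits.
def spc_extrinsic(llrs):
    ext = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1
        for v in others:
            sign *= 1 if v >= 0 else -1
        ext.append(sign * min(abs(v) for v in others))
    return ext

row1 = [1.0, 3.0, 2.5]      # u11, u12, horizontal parity (values from the sheet)
row2 = [1.5, 0.5, -0.5]     # u21, u22, horizontal parity (assumed sign)
print(spc_extrinsic(row1))
print(spc_extrinsic(row2))
```

In a full iteration, these extrinsic values would be added to the channel values and passed to the vertical decoder, and vice versa.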
15 Group, Field
Consider the sets M_q = {0, 1, ..., q − 1} together with the operations
⊕ : addition mod q,
⊙ : multiplication mod q,
for q = 6, 7. Determine the result tables for the two operations and check whether
a) (M_q, ⊕) are groups (commutative?),
b) (M_q \ {0}, ⊙) are groups (commutative?),
c) (M_q, ⊕, ⊙) are fields.
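The checks can be automated as a sketch (associativity and commutativity hold by construction mod q; what actually decides the question is closure and the existence of inverses):

```python
# Brute-force group test: find an identity, then check that every element has
# an inverse and that the set is closed under the operation.
def is_group(elems, op):
    e = next((c for c in elems if all(op(c, a) == a for a in elems)), None)
    if e is None:
        return False
    return all(any(op(a, b) == e for b in elems) for a in elems) and \
           all(op(a, b) in elems for a in elems for b in elems)

for q in (6, 7):
    add = lambda a, b: (a + b) % q
    mul = lambda a, b: (a * b) % q
    Mq = list(range(q))
    print(q, is_group(Mq, add), is_group(Mq[1:], mul))
```

For q = 6 the multiplicative test fails (e.g. 2 ⊙ 3 = 0 leaves the set, and 2 has no inverse), while q = 7 passes both, consistent with GF(7) being a field.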
16 Euclidean Algorithm

a) Calculate the greatest common divisor (gcd) of 275 and 80.
b) Consider the polynomials
a(x) = x^12 + x^10 + x^7 + x^4 + x^3 + x^2 + x + 1,
b(x) = x^11 + x^9 + x^7 + x^6 + x^5 + x + 1
with coefficients in GF(2). Determine the greatest common divisor gcd(a(x), b(x)).
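Both computations follow the same Euclidean recursion; a sketch, with polynomials represented as coefficient lists over GF(2), lowest power first:

```python
def gcd_int(a, b):
    while b:
        a, b = b, a % b
    return a

def poly_mod2(a, b):                    # remainder of a(x) / b(x) over GF(2)
    a = a[:]
    while len(a) >= len(b) and any(a):
        if a[-1]:                       # cancel the leading coefficient
            for i, c in enumerate(b):
                a[len(a) - len(b) + i] ^= c
        a.pop()
    while a and not a[-1]:              # strip trailing zero coefficients
        a.pop()
    return a

def gcd_poly2(a, b):
    while b:                            # b == [] represents the zero polynomial
        a, b = b, poly_mod2(a, b)
    return a

a = [1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1]   # x^12+x^10+x^7+x^4+x^3+x^2+x+1
b = [1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1]      # x^11+x^9+x^7+x^6+x^5+x+1
print(gcd_int(275, 80), gcd_poly2(a, b))
```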
17 Prime Field GF(7)

Consider the prime field GF(7).
a) Determine the order of its elements.
b) Determine the primitive elements.
Hint: A smart way of performing modulo calculations is as follows: (a · b) · c mod q = ((a · b mod q) · c) mod q.
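A quick cross-check, using the stepwise reduction from the hint:

```python
# Order of a in GF(q)*: smallest n with a^n = 1; primitive elements have
# maximum order q - 1.
def order(a, q):
    power, n = a % q, 1
    while power != 1:
        power = (power * a) % q      # reduce after every multiplication
        n += 1
    return n

orders = {a: order(a, 7) for a in range(1, 7)}
primitive = [a for a, n in orders.items() if n == 6]
print(orders, primitive)
```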
18 RS Code over GF(7)

As an example, a Reed-Solomon (RS) code over GF(7) that is able to correct t = 2 symbol errors was constructed in the lecture. The code word length was n = 6. Accordingly, the element used for the transform was chosen to be the primitive element 5 ∈ GF(7).
a) What properties does the RS code have if an element not having maximum order is chosen for the transform?
b) Construct an RS code which is able to correct t = 1 symbol error but which otherwise has the same parameters as the RS code constructed in the lecture. Do you obtain a larger or a smaller code rate than in the lecture? Determine the generator polynomial g(D).
c) For the code of b), encode the info word [4, 3, 2, 1] (i) using the IDFT and (ii) using the generator polynomial g(D).
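A sketch for parts b) and c): whether the generator roots are z^1, z^2 or z^0, z^1 depends on the lecture's convention; z^1 and z^2 are assumed here, with z = 5.

```python
q, z, n, t = 7, 5, 6, 1
k = n - 2 * t                                   # 2t parity symbols -> k = 4

def poly_mul(a, b):                             # polynomial product over GF(q)
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

g = [1]
for j in range(1, 2 * t + 1):                   # g(D) = (D - z)(D - z^2)
    g = poly_mul(g, [(-pow(z, j, q)) % q, 1])

u = [4, 3, 2, 1]                                # info word from the sheet
x = poly_mul(u, g)                              # non-systematic encoding

def evaluate(p, v):                             # p(v) mod q, Horner scheme
    acc = 0
    for c in reversed(p):
        acc = (acc * v + c) % q
    return acc

print(g, x, [evaluate(x, pow(z, j, q)) for j in (1, 2)])
```

By construction, every code word evaluates to zero at the generator roots, which is exactly the spectral null constraint exploited by the IDFT view.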
19 Theorems of the DFT

Let a(x) ∘—• A(x) and b(x) ∘—• B(x) be DFT pairs, where z denotes the element used for the transform. On the basis of the definition of the DFT, prove the following theorems:
a) b(x) = x^l · a(x) ⟺ B_j = z^(−jl) · A_j (shift theorem)
b) b_i = z^(il) · a_i ⟺ B(x) = x^l · A(x) (modulation theorem)
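The shift theorem can be checked numerically in GF(7); the DFT convention A_j = Σ_i a_i z^(−ij) with z = 5 is an assumption that must match the lecture's definition:

```python
q, z, n = 7, 5, 6
z_inv = pow(z, q - 2, q)                          # z^(-1) = 3 in GF(7)

def dft(a):                                       # A_j = sum_i a_i z^(-ij) mod q
    return [sum(ai * pow(z_inv, i * j, q) for i, ai in enumerate(a)) % q
            for j in range(n)]

a = [4, 3, 2, 1, 0, 6]                            # arbitrary example vector
l = 2
b = a[-l:] + a[:-l]                               # b(x) = x^l a(x) mod (x^n - 1)
A, B = dft(a), dft(b)
shifted = [(pow(z_inv, j * l, q) * A[j]) % q for j in range(n)]
print(B == shifted)                               # B_j = z^(-jl) A_j
```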
20 Properties of RS Codes
For the following two tasks, the theorems of exercise 19 shall be utilized. As an abbreviation, F_q = GF(q) will be used below.
a) Prove that RS codes are cyclic codes.
b) Let the codes C_a and C_b be defined as follows:
C_a := { a ∈ F_q^n : a ∘—• A = [A_0, ..., A_{k−1}, 0, ..., 0] } (n − k trailing zeros)
C_b := { b ∈ F_q^n : b ∘—• B = [0, ..., 0, B_0, ..., B_{k−1}] } (n − k leading zeros)
[Attached: three pages of blank trellis diagram templates (states 00, 01, 10, 11) for working out the Viterbi decoding exercises.]