EECS 376, Foundations of Computer Science

Fall 2015, University of Michigan, Ann Arbor

Date: January 26, 2016

Homework 2 Solutions

Problem 1. There are multiple solutions to this problem. Consider the n-fold repetition code, which
encodes a single bit by repeating it n times. For this code, we have d = n and k = 1, and
k + d − 1 = 1 + d − 1 = d = n.
Therefore, the n-fold repetition code satisfies the Singleton bound, and hence it is an MDS code! It can correct ⌊(d − 1)/2⌋ errors.
Now observe that if we used the n-fold repetition code to send m bits, we would be encoding each bit individually, and our total communication would cost mn bits. That's an n-fold increase in the number of bits that comprise our message! The reason we don't consider the repetition code to be optimal is that if we want to send more than 1 bit but still correct the same number of errors, we can get away with much less than an n-fold increase over our message length. The trick is to send blocks of k bits at a time.
As a concrete example, suppose we want to send 4 bits, with the ability to correct 1 error. We can use
the 3-fold repetition code, encoding each bit in turn; we would end up sending 12 bits. Or, we could use the
Hamming[7,4] code and accomplish the same task with only 7 bits. Even though the n-fold repetition code
is an MDS code, it is really a way to encode 1 bit at a time; that is, k = 1. If we allow k to be larger, then
we can greatly improve our efficiency: note that the Hamming[7,4] is not an MDS code.
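The savings can be checked concretely. The sketch below (our own illustration, not part of the original solution) encodes the same 4 bits both ways; the Hamming[7,4] encoder uses one standard parity layout, which is an assumption on our part.

```python
# Compare total bits sent: 3-fold repetition vs. Hamming[7,4].

def repetition_encode(bits, n=3):
    """Encode each bit by repeating it n times."""
    return [b for bit in bits for b in [bit] * n]

def hamming74_encode(d):
    """Encode 4 data bits into 7 bits using one standard parity layout."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

msg = [1, 0, 1, 1]
print(len(repetition_encode(msg)), len(hamming74_encode(msg)))  # 12 7
```

Both schemes correct one error, but the repetition code spends 12 bits where Hamming[7,4] spends 7, because the latter protects the 4 bits as a single block.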

Problem 2.
(a) We start with the polynomial f(x) = 3 + x + 3x², working over Z₁₁ (the modulus under which the values below reduce). Then, we calculate the following values:
f(0) = 3 + 0 + 0 = 3
f(1) = 3 + 1 + 3 = 7
f(2) = 3 + 2 + 12 = 17 ≡ 6 (mod 11)
f(3) = 3 + 3 + 27 = 33 ≡ 0 (mod 11)
f(4) = 3 + 4 + 48 = 55 ≡ 0 (mod 11)

So the final answer is (3, 7, 6, 0, 0).
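These evaluations can be verified with a short script; the modulus 11 is inferred from the reduced values above, and the helper name is our own.

```python
# Evaluate f(x) = 3 + x + 3x^2 over Z_11 at x = 0, 1, 2, 3, 4.

def poly_eval(coeffs, x, q):
    """Evaluate a polynomial given low-to-high coefficients, mod q."""
    return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q

codeword = [poly_eval([3, 1, 3], x, 11) for x in range(5)]
print(codeword)  # [3, 7, 6, 0, 0]
```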


(b) We know that the original polynomial p(x) is of the form p(x) = a + bx + cx², with arithmetic over Z₁₁. Thus, using three of the coordinates we know (i.e., (1,6), (3,9), and (4,8)), we have the following system of equations:

1. a + 1b + 1c = 6
2. a + 3b + 9c = 9
3. a + 4b + 16c = a + 4b + 5c = 8

Subtracting equation 1 from equations 2 and 3 yields:

1a. 2b + 8c = 3
2a. 3b + 15c = 3b + 4c = 2

Adding −3·(1a) + 2·(2a), we get 6c = 6 (since −16c ≡ 6c and −9 + 4 = −5 ≡ 6 (mod 11)), so c = 1.
Substituting this into equation 2a and solving for b, we get 3b = 2 − 4 ≡ 9, so b = 3.
Substituting c = 1 and b = 3 into equation 1, we get that a = 2.
So, the solution to our system of equations is a = 2, b = 3, and c = 1. Thus we know that the original
message is (2,3,1) with a polynomial of 2 + 3x + x².
It should be noted that any combination of three of the four known points would work for finding the
original polynomial.
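As a cross-check, the same polynomial can be recovered by Lagrange interpolation over Z₁₁ instead of elimination. This is a sketch with our own helper name; it assumes only that the modulus is the prime 11, as above.

```python
# Recover the part (b) polynomial by Lagrange interpolation over Z_11.

def lagrange_interpolate(points, q):
    """Coefficients (low to high) of the unique polynomial of degree
    < len(points) through the given (x, y) pairs, mod a prime q."""
    k = len(points)
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        basis = [1]  # running product of (x - xj), low-to-high coefficients
        denom = 1    # running product of (xi - xj)
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            # Multiply the basis polynomial by (x - xj).
            new = [0] * (len(basis) + 1)
            for t, c in enumerate(basis):
                new[t] = (new[t] - xj * c) % q
                new[t + 1] = (new[t + 1] + c) % q
            basis = new
            denom = (denom * (xi - xj)) % q
        scale = yi * pow(denom, q - 2, q) % q  # Fermat inverse; q is prime
        for t, c in enumerate(basis):
            coeffs[t] = (coeffs[t] + scale * c) % q
    return coeffs

print(lagrange_interpolate([(1, 6), (3, 9), (4, 8)], 11))  # [2, 3, 1]
```

The output [2, 3, 1] matches the message found by elimination.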

Problem 3.
Forward direction: Suppose that H is a check matrix for C, with dist(C) = d. Since dist(C) = w_H(C), there is some ~v ∈ Im(C) with weight d. Since ~v ∈ Im(C), we have H~v = 0, but H~v is just the sum of the columns of H in the positions where ~v has a nonzero entry. This means the sum of those d columns is 0, so condition (1) is satisfied.
Now suppose condition (2) were not satisfied, i.e. that there exists some subset of d − 1 columns of H which are linearly dependent. Without loss of generality assume ~v₁, ..., ~v_{d−1} are the dependent columns, so there exist c₁, ..., c_{d−1} ∈ {0, 1}, not all zero, such that c₁~v₁ + ··· + c_{d−1}~v_{d−1} = 0. But if this is the case, there must be some vector ~w of weight at most d − 1, with ones in the positions of the columns with c_i ≠ 0, such that H~w = 0. But this is a contradiction because w_H(C) = d. Thus condition (2) must be satisfied.
Backward direction: Suppose conditions (1) and (2) are satisfied. Then let ~v₁, ..., ~v_d be a set of linearly dependent columns of H, so there exist c₁, ..., c_d ∈ {0, 1}, not all zero, such that c₁~v₁ + ··· + c_d~v_d = 0. Consider the vector ~w with ones in the positions corresponding to the ~v_i's where c_i ≠ 0. Note that ~w is a codeword because H~w = c₁~v₁ + ··· + c_d~v_d = 0, and ~w has weight at most d. Thus w_H(C) ≤ d.
Now suppose w_H(C) < d, i.e. there is some nonzero codeword ~v with weight strictly less than d. Then H~v = 0, but note that H~v is just the sum of at most d − 1 columns of H, which means those columns must be linearly dependent. This contradicts condition (2), so it must be the case that w_H(C) = d. By (2.1), this means that dist(C) = d.
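For a concrete instance of this column criterion, consider the Hamming[7,4] check matrix, whose columns are the 3-bit binary expansions of 1 through 7 and whose distance is 3. The sketch below (our own illustration) checks both conditions over Z₂.

```python
# Check the column criterion for the Hamming[7,4] check matrix (distance 3):
# (1) some 3 columns sum to 0, and (2) every 2 columns are independent.
from itertools import combinations

# Columns of H: binary expansions of 1..7 (all nonzero and distinct).
H_cols = [tuple((j >> b) & 1 for b in range(3)) for j in range(1, 8)]

def sums_to_zero(cols):
    """True if the given columns sum to the zero vector over Z_2."""
    return all(sum(bits) % 2 == 0 for bits in zip(*cols))

# Over Z_2, distinct nonzero columns form a dependent pair iff they sum to 0.
two_dependent = any(sums_to_zero(pair) for pair in combinations(H_cols, 2))
three_zero = any(sums_to_zero(trip) for trip in combinations(H_cols, 3))
print(two_dependent, three_zero)  # False True
```

Both conditions hold with d = 3, matching the known distance of the Hamming[7,4] code.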

Problem 4.
(a) ⌈d/2⁰⌉ = d, and ⌈d/2^i⌉ ≥ 1 for i = 1, 2, ..., k − 1. Therefore:

n ≥ ∑_{i=0}^{k−1} ⌈d/2^i⌉ ≥ d + (k − 1) = Singleton Bound

Since the Griesmer Bound is always at least the Singleton Bound, it is a tighter bound.

(b) n = 5. Since we can have up to one error, we must have d = 3 (recall that e = ⌊(d − 1)/2⌋). Then by the Griesmer Bound:

n ≥ ⌈3/2⁰⌉ + ⌈3/2¹⌉ = 3 + 2 = 5

Furthermore, the task can indeed be done with 5 bits (this was an in-class exercise for Lecture 1).
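The two parts can be checked together with a small script (our own sketch): it evaluates the Griesmer sum for part (b) and verifies numerically that it dominates the Singleton bound over a range of parameters.

```python
# Griesmer bound for a binary [n, k, d] code vs. the Singleton bound.
from math import ceil

def griesmer(k, d):
    """Lower bound on n: sum of ceil(d / 2^i) for i = 0, ..., k - 1."""
    return sum(ceil(d / 2**i) for i in range(k))

def singleton(k, d):
    """Lower bound on n from the Singleton bound: n >= d + k - 1."""
    return d + (k - 1)

print(griesmer(2, 3))  # 5, matching part (b)

# The first Griesmer term is d and each remaining term is >= 1,
# so the Griesmer bound is always at least the Singleton bound.
for k in range(1, 6):
    for d in range(1, 10):
        assert griesmer(k, d) >= singleton(k, d)
```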

Problem 5. We want to show that the Reed-Solomon codes achieve equality in the Singleton Bound; that is: d = n − k + 1. Let RS[n, k]_q be a Reed-Solomon code. The Singleton Bound tells us that
d ≤ n − k + 1.
Therefore, it remains to prove that
d ≥ n − k + 1;
if we do this, then the two inequalities together will show that d = n − k + 1.
Since the Reed-Solomon codes are linear, it holds that d is equal to the minimum weight of a nonzero codeword.
Let ~y be any nonzero codeword. Then, from the way Reed-Solomon codes work, we have that
~y = (p(0), p(1), . . . , p(n − 1))
for some nonzero polynomial p of degree at most k − 1. Since the modulus q is necessarily prime from the definition of Reed-Solomon codes, we are working over a field, so p has at most k − 1 roots. Therefore, p(i) = 0 for at most k − 1 values of i ∈ {0, 1, . . . , n − 1}. Equivalently, p(i) ≠ 0 for at least n − (k − 1) values of i ∈ {0, 1, . . . , n − 1}. Looking at the coordinates of ~y, we see that the number of these i for which p(i) ≠ 0 is equal to the Hamming weight of ~y. Hence,
w_H(~y) ≥ n − (k − 1) = n − k + 1.
Since ~y was arbitrary, we have proved that the above holds for every nonzero codeword ~y; therefore,
d ≥ n − k + 1.
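The equality d = n − k + 1 can also be seen by brute force on a small example. The sketch below (our own check, with parameters chosen for illustration) enumerates every nonzero codeword of RS[5, 3] over Z₇ and records the minimum Hamming weight.

```python
# Brute-force the minimum distance of RS[5, 3] over Z_7:
# evaluate every nonzero polynomial of degree < 3 at x = 0, ..., 4.
from itertools import product

q, n, k = 7, 5, 3
min_weight = n
for coeffs in product(range(q), repeat=k):
    if all(c == 0 for c in coeffs):
        continue  # skip the zero codeword
    y = [sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
         for x in range(n)]
    min_weight = min(min_weight, sum(1 for v in y if v != 0))
print(min_weight, n - k + 1)  # 3 3
```

The minimum weight is exactly n − k + 1 = 3, achieved e.g. by p(x) = x(x − 1), which vanishes at two of the five evaluation points.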

Problem 6.
(a) Multiple answers are possible. Basically, any setting where data must be converted to a binary (digital) representation is a good example.
(b) The rates are identical. C has message length k and code length n, for a rate of k/n. C′ has message length kℓ and code length nℓ, for a rate of kℓ/nℓ = k/n.
(c) Suppose C has distance d. Then that means there is some codeword ~v in C with Hamming weight d, i.e. with d nonzero entries. Its binary expansion v̄ (~v written in binary, ℓ bits per symbol) is a codeword of C′. Note that each zero entry of ~v corresponds to ℓ zero-bits in v̄, but each nonzero entry of ~v corresponds to at most ℓ one-bits in v̄. This means that w_H(v̄) ≤ ℓ · w_H(~v) = ℓd. Thus the distance d′ of C′ is at most ℓd. The relative distance of C is d/n, and the relative distance of C′ is at most w_H(v̄)/(ℓn) ≤ ℓd/(ℓn) = d/n. Thus the relative distance of C′ is less than or equal to the relative distance of C (with equality only when the minimum Hamming weight codeword in C has entries that are only 0 and 2^ℓ − 1).
(d) No, C′ need not be linear. Here is a counterexample: Let C : Z₈² → Z₈³ be a linear code with

C(2, 1) = (0, 0, 1)
C(3, 7) = (0, 1, 0)
C(5, 0) = C(2, 1) + C(3, 7) = (0, 1, 1)
C(1, 6) = 7 · C(2, 1) + 1 · C(3, 7) = (0, 1, 7)

Writing each Z₈ symbol as 3 bits, this means that we have

C′(010001) = (000000001)
C′(011111) = (000001000)
C′(101000) = (000001001)
C′(001110) = (000001111)

However, if C′ were linear, we would have

C′(010001 + 011111) = (000000001) + (000001000) = (000001001),

but instead we have

C′(010001 + 011111) = C′(001110) = (000001111).

And thus C′ must not be linear.
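The counterexample can be checked numerically. The sketch below (our own illustration) expands each Z₈ symbol into 3 bits and tests whether C′ respects bitwise (Z₂) addition.

```python
# Verify the part (d) counterexample: bit-expand Z_8 symbols and test
# whether C' is linear over Z_2.

def to_bits(symbols, ell=3):
    """Concatenate the ell-bit binary expansions of symbols in Z_{2^ell}."""
    return tuple((s >> (ell - 1 - b)) & 1 for s in symbols for b in range(ell))

cw = {  # the four codewords of C used above
    (2, 1): (0, 0, 1),
    (3, 7): (0, 1, 0),
    (5, 0): (0, 1, 1),
    (1, 6): (0, 1, 7),
}

# The bitwise sum of the messages 010001 and 011111 is 001110, i.e. (1, 6).
assert tuple(a ^ b for a, b in zip(to_bits((2, 1)), to_bits((3, 7)))) == to_bits((1, 6))

# If C' were linear, C'(001110) would equal C'(010001) xor C'(011111).
lhs = tuple(a ^ b for a, b in zip(to_bits(cw[(2, 1)]), to_bits(cw[(3, 7)])))
rhs = to_bits(cw[(1, 6)])
print(lhs == rhs)  # False: C' is not linear
```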
