
Linear Block Code
Kavi Pandya (131020)

TOPIC: LINEAR BLOCK CODE

TECHNICAL DETAILS
This section consists of the technical terms, formulas, and working of Linear Block Codes (LBC):

TECHNICAL TERMS
1. Block Code: An error-correcting code that encodes data in blocks, acting on a block
of k bits of input data to produce n bits of output data.

2. Linear Code: It is an error-correcting code for which any linear combination of
codewords is also a codeword.

3. Code Efficiency/Rate: The rate of a block code is defined as the ratio between its
message length and its block length: Rate = k/n.

4. Hamming Distance: The number of components in which two codewords differ.

5. Hamming Sphere: A Hamming sphere of radius t contains all possible received
vectors that are at a Hamming distance of at most t from a codeword.

6. Hamming Bound: A code corrects t errors if spheres of radius t around the codewords
do not overlap.

7. Modulo-2 Addition: Add two bits and take modulo 2 of their sum; it is the same as an
XOR gate.

8. Parity Bit: A bit which acts as a check on a set of binary values.

9. Generator Matrix: An (n, k) LBC can be specified by any set of k linearly independent
codewords c0, c1, ..., c(k-1). If we arrange the k codewords into a k x n matrix G,
then G is called a generator matrix for the code C. G consists of an identity matrix and
a parity matrix (see the illustrative example after this list).

10. Parity-Check Matrix: The matrix H that consists of an identity matrix and the transpose of the parity matrix.

11. Received Word: The word received at the receiving end of the system.

12. Error Vector: A vector showing the positions of the wrong/flipped bits in the received word.

13. Syndrome: The syndrome is the received sequence multiplied by the transpose of the
parity-check matrix, H^T. The syndrome depends only on the error pattern.

14. Minimum Weight Vector: A vector with the smallest number of 1s.

15. Maximum Likelihood Rule: Given a received vector r, the rule that minimises the
probability of error is to find the codeword ci which maximises P(c = ci | r). This is
called the Maximum A Posteriori decision rule; when all codewords are equally likely it
coincides with the Maximum Likelihood Rule, which maximises P(r | c = ci).
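
As an illustration of terms 9 and 10 (the matrices below are one common choice for a systematic (7, 4) Hamming code, not the only possibility), a generator matrix G = [I | P] is

G =
1 0 0 0 1 1 0
0 1 0 0 1 0 1
0 0 1 0 0 1 1
0 0 0 1 1 1 1

where the first four columns form the identity matrix I and the last three columns form the parity matrix P. The corresponding parity-check matrix is H = [P^T | I], and every valid codeword c satisfies c*H^T = 0.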

FORMULAS:
1. c = d*G: The coded output word c is obtained by multiplying the input word d by the
generator matrix G. Note: we follow modulo-2 addition while summing after multiplication.
2. cp = d*P: The check digits cp are obtained by multiplying the input word d by the
parity matrix P.
3. s = r*H^T: The syndrome s is calculated by multiplying the received word r by the
transpose of the parity-check matrix H.
4. c = r XOR e: The actual coded output is obtained by modulo-2 addition of the
received word r and the error vector e.
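
A short worked example of these formulas, using the illustrative (7, 4) matrices shown after the technical terms: for the input d = 1011, the check digits are cp = d*P = 010, so the codeword is c = d*G = 1011010. If the third bit is flipped in the channel, then r = 1001010 and s = r*H^T = 011, which equals the third row of H^T; hence e = 0010000 and c = r XOR e = 1011010, the original codeword.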

WORKING:
ENCODING: In an LBC, k input bits are coded into n output bits, where n > k. In doing so,
the n bits not only contain information about the input bits but also give us m = n - k
parity-check bits. These parity-check bits will help us while decoding.
The output codeword c of length n is generated using the formula c = d*G, where d is the
input word and G is the generator matrix. G consists of an identity matrix and a parity matrix.
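
Below is a minimal Java sketch of this encoding step, assuming the illustrative (7, 4) generator matrix shown earlier (the class and method names are hypothetical, not a standard library):

public class LbcEncoder {
    // Illustrative (7, 4) generator matrix G = [I | P] (one common choice).
    static final int[][] G = {
        {1, 0, 0, 0, 1, 1, 0},
        {0, 1, 0, 0, 1, 0, 1},
        {0, 0, 1, 0, 0, 1, 1},
        {0, 0, 0, 1, 1, 1, 1}
    };

    // c = d*G with modulo-2 addition: each output bit is the XOR of the
    // products d[i] * G[i][j].
    static int[] encode(int[] d) {
        int[] c = new int[G[0].length];
        for (int j = 0; j < c.length; j++) {
            for (int i = 0; i < d.length; i++) {
                c[j] ^= d[i] & G[i][j];
            }
        }
        return c;
    }

    public static void main(String[] args) {
        int[] d = {1, 0, 1, 1};
        // Prints [1, 0, 1, 1, 0, 1, 0]: the data bits followed by the parity bits.
        System.out.println(java.util.Arrays.toString(encode(d)));
    }
}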

DECODING: While decoding we make use of H^T (the transpose of the parity-check matrix).
H^T consists of the identity matrix and the parity matrix which we had used in encoding.
The construction of c and H^T guarantees that c*H^T = 0. But because of errors we don't
receive c in its original form; rather, we get the received word r. Corresponding to r we
find the syndrome s, where s = r*H^T. Now we find the error vector e, which shows us the
bits in r that contain errors. The formula we use for finding e is s = (c XOR e)*H^T; as
c*H^T is 0, we get s = e*H^T. We know s and H^T, and thus we can find e. At this point we
know both e and r, and thus we can calculate c by the formula c = r XOR e (modulo-2 addition).
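
A minimal Java sketch of this syndrome-decoding procedure for a single-bit error, using the same illustrative (7, 4) code as above (the class layout and method names are assumptions for illustration, not a library API):

public class LbcDecoder {
    // Illustrative H^T for the (7, 4) code: rows 0-3 are the rows of P,
    // rows 4-6 are the 3x3 identity matrix.
    static final int[][] HT = {
        {1, 1, 0},
        {1, 0, 1},
        {0, 1, 1},
        {1, 1, 1},
        {1, 0, 0},
        {0, 1, 0},
        {0, 0, 1}
    };

    // s = r*H^T with modulo-2 addition.
    static int[] syndrome(int[] r) {
        int[] s = new int[HT[0].length];
        for (int j = 0; j < s.length; j++) {
            for (int i = 0; i < r.length; i++) {
                s[j] ^= r[i] & HT[i][j];
            }
        }
        return s;
    }

    // A single-bit error in position i produces a syndrome equal to row i
    // of H^T, so matching s against the rows of H^T locates the error.
    static int[] correct(int[] r) {
        int[] s = syndrome(r);
        for (int i = 0; i < HT.length; i++) {
            if (java.util.Arrays.equals(s, HT[i])) {
                r[i] ^= 1; // flip the erroneous bit
                break;
            }
        }
        return r; // an all-zero syndrome matches no row, so r is returned unchanged
    }

    public static void main(String[] args) {
        int[] r = {1, 0, 0, 1, 0, 1, 0}; // codeword 1011010 with its third bit flipped
        // Prints [1, 0, 1, 1, 0, 1, 0]: the corrected codeword.
        System.out.println(java.util.Arrays.toString(correct(r)));
    }
}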

HISTORY AND ADVANCES:
HISTORY:
Research into the field of codes started as early as the 1930s. In 1937, Claude Shannon used
Boolean algebra in the design and analysis of logic circuits. In the 1950s, Edward Moore and
George Mealy made fundamental contributions to automata theory.

In 1937, George Stibitz designed the first binary relay computer, and with this, researchers
began looking at error correction as a way to build fault-tolerant computing hardware.

Richard Hamming laid the foundations of error-correcting codes in the late 1940s. Soon after,
Edgar Gilbert found a lower bound on the size of an error-correcting code with a given minimum
distance, and David Slepian established the underpinnings of algebraic coding theory.
In 1968, Hamming was awarded the prestigious Turing Award for his pioneering
work in error-correcting codes.

In the 1970s, Goppa studied new code constructions starting from algebraic curves (Goppa's
geometric codes, or algebraic-geometric codes), which rely on algebraic-geometric results
such as the Riemann-Roch theorem.

In the early 1990s the first efficient decoding algorithms for these codes came from Justesen
et al., Skorobogatov and Vladut, and Porter, but they fell short of the full correction
capacity of the codes or needed restrictive conditions on the types of codes they could
handle, until the appearance of the Ehrhard and Duursma algorithms.

ADVANCES IN CODING
Two other types of codes are Turbo Codes and LDPC (Low-Density Parity-Check) codes. LDPC
codes were introduced in 1963 by Robert Gallager, but they did not find any practical
implementation because of the lack of suitable hardware. Turbo Codes were introduced in 1993
and have found application in the low code rate range, such as mobile communication. In the
late 1990s and early 2000s LDPC codes saw a revival and found application in the high code
rate range.

Many more codes have been introduced since, such as Space-Time Codes, Joint Source and
Channel Coding, Cyclic Codes, Cross-Layer Recursive Forward Error Correction Coding, etc.

Currently, faster and more efficient algorithms are being developed based on the
majority-voting decoding design of Feng and Rao, which use linear recurrence relations (the
Sakata algorithm) or Gröbner bases.
Moreover, the parameters of algebraic-geometric codes are better than those of classical
codes in the asymptotic sense, which matters as we now deal with larger and more complicated
messages.

WHY AND HOW IT IS USEFUL TO SOCIETY
The Internet, communication systems, and digital circuits are huge sources of data sharing.
Sometimes we want to transmit bulky data, and at other times very confidential data.

In either case we want to encode our data for transmission through the channel so that we can
reduce the transmission cost and maintain the privacy of the data transferred.
Many times, noise in the channel disturbs the data, which may result in wrong
transmission of information.

Thus, we need a methodology which encodes the data and transmits it over the channel, and,
if an error occurs in the transmitted data, is capable of finding the error and
correcting it.

Linear Block Code (LBC) is a code which satisfies the needs raised above. LBC is not only a
theoretical concept but has found application in varied fields; some of them are listed
below along with examples.

1. COMMUNICATION SYSTEMS/INTERNET
a. Teletext systems, satellite communication, broadcasting (radio and digital TV),
telecommunications (digital phones)
b. Each Ethernet frame carries a CRC-32 checksum

2. INFORMATION SYSTEMS
a. Logic circuits, semiconductor memories.
b. Magnetic disks (HDD), optical discs (CD-ROM).

3. AUDIO AND VIDEO SYSTEMS
a. Digital sound (CD) and digital video (DVD)

Hence, we find that the concept of LBC is ingrained in all the electrical equipment we use.
This concept thus helps to transmit encoded data and is error-correcting too, which increases
the accuracy of the data we receive, resulting in effective communication and data sharing.

LARGER CONCLUSION WITH YOUR CREATIVE INPUTS
Linear Block Code was a great project to work on, as it not only showed the practical
application of Linear Algebra in real life, but we also came to know how it works by
programming it in Java. It was a hands-on experience.

There is a need to modify and improve Linear Block Codes because in their existing structure
they have several drawbacks or disadvantages, such as:
a. They require more transmission bandwidth (as n > k).
b. The smaller the ratio k/n, the lower the rate of code transmission and thus the lower
the efficiency.
c. Code efficiency is dependent upon n and k.
d. If more than one error vector has the same number of ones and the Hamming distance
for both error vectors is the same, then it is difficult to choose the e that would
maximise the probability of correct output. E.g. e1 = 000111 and e2 = 111000.
e. Even the syndrome lookup table method of error correction is very tedious and
complex, and is inapplicable and impractical for all but the shortest of code vectors.

There are possible ways to counter these drawbacks:
a. Through cyclic codes we can compute the syndrome using shift registers, which is
practical and accurate over larger codes (see the sketch after this list).
b. We can devise a method or code that is independent of both the input (k) and output (n)
bit lengths.
c. Cyclic codes involve shifting of bits to produce new codewords; this increases the
probability of finding the correct error bit and reduces the transmission bandwidth.
d. We can devise a method where an error bit can be classified depending upon earlier
error bits; that is, create an error-identifying circuit with memory which can
store and analyse all the previous errors and give us a high-probability error vector.
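
Point (a) can be illustrated with a rough Java sketch (the generator polynomial g(x) = x^3 + x + 1, which generates a cyclic (7, 4) Hamming code, and the register layout are assumptions for illustration):

public class CyclicSyndrome {
    // Remainder of r(x) divided by g(x) = x^3 + x + 1, computed the way a
    // hardware shift register would: one shift per received bit.
    static int[] lfsrSyndrome(int[] r) {
        int[] reg = new int[3];          // reg[0] = constant term, reg[2] = x^2 term
        for (int b : r) {
            int carry = reg[2];          // coefficient overflowing into x^3
            reg[2] = reg[1];
            reg[1] = reg[0] ^ carry;     // fold the carry back at the x term of g(x)
            reg[0] = b ^ carry;          // fold the carry back at the constant term
        }
        return reg;                      // all zeros for a valid codeword
    }

    public static void main(String[] args) {
        int[] valid = {0, 0, 0, 1, 0, 1, 1};     // a codeword of the cyclic (7, 4) code
        int[] corrupted = {0, 0, 0, 1, 0, 1, 0}; // the same word with its last bit flipped
        System.out.println(java.util.Arrays.toString(lfsrSyndrome(valid)));     // [0, 0, 0]
        System.out.println(java.util.Arrays.toString(lfsrSyndrome(corrupted))); // [1, 0, 0]
    }
}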
