
Digital Communication

1 How many bits can fit in a signal?


So the key to digital communication is the sending and receiving of a signal that can be interpreted as 0s and 1s. Most commonly, the signal used is electrical or optical and is a continuous function of time, x(t). The function of the digital transmitter is to control the signal x(t) output from the transmitting system, and the digital receiver interprets the signal it receives as 0s and 1s. The received signal will in general be different from x(t) as a result of distortions that occur along the way and the addition of noise from external sources, so we will denote it as y(t). We will look into the effects of distortion and noise later; initially we consider the ideal system where the transmitted signal x(t) is received perfectly, so that y(t) = x(t).

So how many bits can be put in a signal x(t)? Intuitively we would expect that the longer the duration of transmission, the more bits are possible, so the more relevant question to ask is how many bits per unit time can be put in the signal x(t). This question is closely tied to the number of degrees of freedom of the signal x(t). That is, if we expand x(t) in a set of orthonormal functions, how many of the components are independent? The periodic functions sin(t) and cos(t) provide a very useful orthonormal set for representing the signals x(t) and y(t).

Let's begin by considering an ideal extreme case where the transmitter is allowed to use only a single frequency ω₀, for example because the medium between the transmitter and the receiver absorbs energy at all other frequencies. Then there are two degrees of freedom, two independent components: one corresponding to sin(ω₀t) and one corresponding to cos(ω₀t). Since these two functions are orthogonal, their contributions are independent and do not interfere with each other. This is an important principle that will stay with us: for each frequency, there are two degrees of freedom.
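The two-degrees-of-freedom claim can be checked numerically. The sketch below (the 5 Hz frequency, the symbol values, and the integration grid are illustrative assumptions, not from the text) projects a single-frequency signal onto cos(ω₀t) and sin(ω₀t) over a full period; because the two basis functions are orthogonal, each projection recovers one symbol with no interference from the other:

```python
import math

# Illustrative assumptions: a 5 Hz tone observed over T = 1 s (an integer
# number of periods), carrying two arbitrary "symbols" A and B.
w0 = 2 * math.pi * 5.0   # angular frequency omega_0
A, B = 0.7, -1.3         # the two degrees of freedom at this frequency
T = 1.0
N = 100_000
dt = T / N

def x(t):
    return B * math.sin(w0 * t) + A * math.cos(w0 * t)

# Project x(t) onto each basis function.  Orthogonality of sin(w0 t) and
# cos(w0 t) over a full period means each integral isolates one symbol.
A_rec = (2 / T) * sum(x(n * dt) * math.cos(w0 * n * dt) for n in range(N)) * dt
B_rec = (2 / T) * sum(x(n * dt) * math.sin(w0 * n * dt) for n in range(N)) * dt

print(round(A_rec, 6), round(B_rec, 6))  # recovers 0.7 and -1.3
```

The same projection works at any single frequency: two independent numbers in, two independent numbers out.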
These two degrees of freedom in

x(t) = B sin(ω₀t) + A cos(ω₀t)   (1-1)

are sometimes put into the form

x(t) = B sin(ω₀t) + A cos(ω₀t) = a cos(ω₀t + φ),   (1-2)

with a cos(φ) = A and a sin(φ) = −B, and referred to as amplitude and phase. Yet another form used is

x(t) = B sin(ω₀t) + A cos(ω₀t) = A+ e^{iω₀t} + A- e^{−iω₀t},   (1-3)

with A+ = (A − iB)/2 and A- = (A + iB)/2,

where A+ and A- become the amplitudes of the positive- and negative-frequency components¹. Regardless of the form: one frequency → two degrees of freedom.

O.K., so if we had only one frequency to use, we would have two degrees of freedom, such that we could send two arbitrary values using this frequency from the beginning of time to the end of time! For the time being, and in preparation for later, we will call these arbitrary values "symbols". A communication channel that consists of a single frequency is therefore of very limited use: we can only send two symbols for all eternity! The capacity of a communication channel is usually measured as information per unit time, so a single-frequency channel has 2/∞ = 0 symbols/sec capacity. On the positive side, any communication channel will usually pass more than a single frequency. Typically, a continuous range of frequencies is passed through a channel. This is typically
¹ Note: we will use i = √−1, like most of the rest of the world. Electrical engineers often use j = √−1.

referred to as the pass band of the channel. This is hopeful, then: although each individual frequency has infinitesimally small capacity, an infinite number of them within a band might add up to a finite capacity. Let's look at this more closely.

At first glance, the number of frequencies in a finite but continuous range would be anticipated to be uncountably infinite, so it is not immediately clear that the above considerations lead to a finite capacity for a finite frequency range. In order to be able to send independent symbols using different frequencies, these frequencies need to be independent, such that what we are sending at one frequency does not interfere with what is at another frequency. In other words, they need to be orthogonal. Any two different frequencies are orthogonal IF the measurement time is INFINITE. If the measurement time is finite, however, frequencies that are spaced closer than the inverse of the measurement time, δf = 1/T, will no longer be orthogonal. We can therefore use frequencies that are spaced roughly at least 1/T apart. In a frequency band of width Δf there are Δf/δf = ΔfT such frequencies. Using these frequencies, and recalling that each frequency carries two degrees of freedom, we can transmit 2ΔfT symbols. We can therefore send 2Δf symbols per unit time. As we take the limit T → ∞, the above sequence of arguments becomes mathematically exact, and this simple result, known as Nyquist's theorem after its originator, becomes one of the two fundamental components of the foundation of digital communication. The other component, due to Claude Shannon, will be covered in the next lecture.

Another way to look at Nyquist's sampling theorem is that any signal band-limited to a frequency range Δf can be described by a finite set of numbers per unit time, 2Δf to be exact. The most popular choice for these numbers is the value of the signal itself at 2Δf points per unit time. Usually they are spaced in equal intervals, so we end up with 2Δf equally spaced samples of the signal as the descriptive set of variables.
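The 1/T spacing rule can be illustrated numerically. In the sketch below (the window length and test frequencies are arbitrary assumptions), tones spaced exactly 1/T apart have zero inner product over a window of length T, while more closely spaced tones leave a large residual overlap and therefore interfere:

```python
import math

T = 1.0        # finite measurement window, so the minimum spacing is 1/T = 1 Hz
N = 100_000
dt = T / N

def inner(f1, f2):
    """Inner product of cos(2*pi*f1*t) and cos(2*pi*f2*t) over [0, T)."""
    return sum(math.cos(2 * math.pi * f1 * n * dt) *
               math.cos(2 * math.pi * f2 * n * dt)
               for n in range(N)) * dt

# Spaced exactly 1/T apart: orthogonal over the finite window.
print(round(inner(10.0, 11.0), 6))
# Spaced by only 0.25/T: clearly non-orthogonal over the same window.
print(round(inner(10.0, 10.25), 6))
```

The first inner product vanishes (to numerical precision) while the second does not, which is exactly why usable frequencies must be spaced at least about 1/T apart.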
(Hence the name, sampling theorem.) They don't need to be equally spaced samples, as long as the average number of samples per unit time is 2Δf. If we have more samples, then we have an over-determined system: samples beyond the 2Δf samples per unit time are redundant with respect to the rest. We will see that this is not always a bad idea; if there is a lot of noise in our system, some of the redundancy can be used to minimize the sensitivity of our calculations to the noise present in each of the samples. If we have fewer than 2Δf samples, then we have an under-determined, or degenerate, system. The samples we have are not sufficient to determine all aspects of the signal. This does not necessarily mean that we are dead in the water; it depends on what we want to do and how under-sampled we are.

Let's look a little further at what happens with an under-sampled signal. Say the signal bandwidth was in fact Δf, such that we would need 2Δf samples per unit time to fully represent the signal, but we have only 2Δfs < 2Δf samples per unit time. We know that 2Δfs samples will completely determine a signal of bandwidth Δfs. There is a component of bandwidth Δfs in the signal of bandwidth Δf, as illustrated in Figure 2-1. In fact, there are infinitely many different components of bandwidth Δfs. The 2Δfs samples can represent any one of these subsets, one at a time, but they can never represent a signal that has frequency components separated by more than 2Δfs. The reason for this is illustrated in Figure 2-2 in the time domain.

The most vivid illustration of this condition is watching the wagon wheels turn in a western movie. The movie camera in this case performs the sampling: it takes snapshots, or samples, of the picture about 30 times a second. When the wheel is turning slowly at the beginning, everything appears to be normal.
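Under-sampling can be seen directly in a short sketch (the specific frequencies are assumed for illustration): a 26 Hz tone sampled 30 times per second produces exactly the same samples as a 4 Hz tone, so the two are indistinguishable from the samples alone:

```python
import math

fs = 30.0               # sampling rate, like the movie camera's frame rate
f_true = 26.0           # actual tone frequency, well above fs/2 = 15 Hz
f_alias = fs - f_true   # 4 Hz: the frequency the samples appear to have

ts = [n / fs for n in range(60)]  # two seconds of sample instants
high = [math.cos(2 * math.pi * f_true * t) for t in ts]
low  = [math.cos(2 * math.pi * f_alias * t) for t in ts]

# The two sample sequences coincide to machine precision: the 26 Hz tone
# has "aliased" down to 4 Hz.
max_diff = max(abs(h - l) for h, l in zip(high, low))
print(max_diff < 1e-9)  # True
```

Between the sample instants the two tones are completely different; it is only the samples that agree, which is the essence of aliasing.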
As the speed increases toward the point where the spokes of the wheel line up exactly 30 times a second, the wheel first appears to turn backwards, then appears to stop at exactly the speed corresponding to 30 spoke alignments per second. It is not possible to distinguish whether the wheel has turned forward by some fraction of the spoke spacing or backwards by the complementary fraction. This is the effect of aliasing.

Problem 1: Prove Nyquist's Theorem as outlined in Lecture 2. Hint: Impose periodic boundary conditions with period T and let T → ∞.
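The wagon-wheel effect itself can be sketched with a toy calculation (the spoke count and wheel speeds below are invented for illustration): per frame, only the fractional part of the spoke advance is observable, so a wheel advancing slightly less than one spoke spacing per frame seems to creep backwards:

```python
fps = 30.0      # camera frame rate: 30 samples of the scene per second
n_spokes = 12   # assumed number of identical spokes on the wheel

def apparent_step(rev_per_sec):
    """Apparent motion per frame, in units of one spoke spacing.
    Positive -> wheel seems to roll forward, negative -> backward,
    zero -> wheel seems frozen."""
    step = rev_per_sec * n_spokes / fps   # true spoke spacings per frame
    frac = step % 1.0                     # only this fraction is observable
    return frac if frac <= 0.5 else frac - 1.0

for speed in (2.4, 2.5, 2.6):             # wheel speeds in revolutions/sec
    print(speed, round(apparent_step(speed), 3))
# 2.4 rev/s -> -0.04 (appears to creep backward)
# 2.5 rev/s ->  0.0  (spokes line up every frame: appears frozen)
# 2.6 rev/s -> +0.04 (appears to creep forward)
```

Crossing the "frozen" speed flips the apparent direction, just as in the movie-wheel description above.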

Recommended Reading to Complement Lecture 2: Lee & Messerschmitt, Chapter 2, Deterministic Signal Processing
