
Chapter 5 Signal Energy Considerations, Orthogonal Signals

SUMMARY: The collection of vectors that corresponds to the signal waveforms is called the signal structure. If the signal structure is translated or rotated, the error probability need not change. In this chapter we determine the translation vector that minimizes the average signal energy. After that we demonstrate that binary orthogonal signaling needs twice as much energy as antipodal signaling to achieve a certain error performance; for orthogonal sets with many signals, however, this energy loss becomes negligible. In the second part of the chapter we investigate orthogonal signaling, where the waveforms that correspond to the messages are mutually orthogonal. We determine the average error probability PE for such signal sets. It turns out that PE can be made arbitrarily small by increasing the number of waveforms, provided that the energy per transmitted bit exceeds $N_0 \ln 2$. Note that this is a capacity result!

5.1 Vladimir A. Kotelnikov


Vladimir Aleksandrovich Kotelnikov (1908 - 2005) was a member of the Russian Academy of Science, in the Department of Technical Science (radio technology). He is mostly known for having discovered, independently of others, the sampling theorem in 1933. This result of Fourier analysis was known in harmonic analysis since the end of the 19th century and circulated in the engineering community in the 1920s and 1930s. He was the first to write down a precise statement of this theorem in relation to signal transmission. He also worked on jet technology, devices for the control of rocket trajectories, code systems, and radio and radar planetology, both earthbound and from spacecraft. When Kotelnikov received the IEEE Alexander Graham Bell Medal, IEEE President Bruce A. Eisenstein said that over the years the West had its Shannon and the East had its Kotelnikov. (from Wikipedia)


Figure 5.1: V.A. Kotelnikov, one of the inventors of the sampling theorem.

5.2 Problem Description


In this chapter we want to find out whether simple changes to signal structures (translation or rotation) can lead to a smaller average signal energy. Moreover, we consider sets of signals that are all orthogonal to each other. A first question is how an optimal receiver for such a signal set should be implemented and what its error probability is. A second question is what happens when we increase the number of signals while keeping the energy per transmitted bit constant. The answer to this second question leads to a capacity result.

5.3 Translation and Rotation of Signal Structures


In the additive white¹ Gaussian noise case, if a signal structure, i.e. the collection of vectors $\mathbf{s}_1, \mathbf{s}_2, \ldots, \mathbf{s}_{|\mathcal{M}|}$, is translated or rotated, the error probability $P_E$ need not change. To see why, just assume that the corresponding decision regions $I_1, I_2, \ldots, I_{|\mathcal{M}|}$ (which need not be optimum) are translated or rotated in the same way. Then, because the additive white Gaussian noise vector is spherically symmetric in all dimensions of the signal space, and since the distances between decision regions and signal points did not change, the error probability remains the same. However, in general the average signal energy changes if a signal structure is translated. Rotation about the origin has no effect on the average signal energy.

¹Although we are concerned with an AGN vector channel, we use the word white to emphasize that this vector channel is equivalent to a waveform channel with additive white Gaussian noise.


5.4 Signal Energy


Here we want to stress again that

\[
E_{s_m} = \int_0^{\infty} s_m^2(t)\,dt = \|\mathbf{s}_m\|^2,
\tag{5.1}
\]

therefore we only need to know the collection of signal vectors (the signal structure) and the message probabilities to determine the average signal energy, which is defined as

\[
E_{av} = \sum_{m \in \mathcal{M}} \Pr\{M = m\} E_{s_m} = \sum_{m \in \mathcal{M}} \Pr\{M = m\} \|\mathbf{s}_m\|^2 = E[\|\mathbf{S}\|^2].
\tag{5.2}
\]

5.5 Translating a Signal Structure

The smallest possible average error probability of a waveform communication system depends only on the signal structure, i.e. on the collection of vectors $\mathbf{s}_1, \mathbf{s}_2, \ldots, \mathbf{s}_{|\mathcal{M}|}$. It does not change if the entire signal structure is translated; the optimum decision regions simply translate too. Translation may affect the average signal energy, however. We next determine the translation vector that minimizes the average signal energy.

Figure 5.2: Translation of a signal structure, moving the origin of the coordinate system to a.

Consider a certain signal structure. Let $E_{av}(\mathbf{a})$ denote the average signal energy when the structure is translated over $\mathbf{a}$, or equivalently when the origin of the coordinate system is moved to $\mathbf{a}$ (see figure 5.2). Then

\[
E_{av}(\mathbf{a}) = \sum_{m \in \mathcal{M}} \Pr\{M = m\} \|\mathbf{s}_m - \mathbf{a}\|^2 = E[\|\mathbf{S} - \mathbf{a}\|^2].
\tag{5.3}
\]

We now have to find out how to choose the translation vector $\mathbf{a}$ that minimizes $E_{av}(\mathbf{a})$. It turns out that the best choice is

\[
\mathbf{a} = \sum_{m \in \mathcal{M}} \Pr\{M = m\} \mathbf{s}_m = E[\mathbf{S}].
\tag{5.4}
\]


This follows from considering an alternative translation vector $\mathbf{b}$. The energy of the signal structure after translation by $\mathbf{b}$ is

\[
\begin{aligned}
E_{av}(\mathbf{b}) = E[\|\mathbf{S} - \mathbf{b}\|^2]
&= E[\|(\mathbf{S} - \mathbf{a}) + (\mathbf{a} - \mathbf{b})\|^2] \\
&= E[\|\mathbf{S} - \mathbf{a}\|^2 + 2(\mathbf{S} - \mathbf{a}) \cdot (\mathbf{a} - \mathbf{b}) + \|\mathbf{a} - \mathbf{b}\|^2] \\
&= E[\|\mathbf{S} - \mathbf{a}\|^2] + 2(E[\mathbf{S}] - \mathbf{a}) \cdot (\mathbf{a} - \mathbf{b}) + \|\mathbf{a} - \mathbf{b}\|^2 \\
&= E[\|\mathbf{S} - \mathbf{a}\|^2] + \|\mathbf{a} - \mathbf{b}\|^2 \\
&\geq E[\|\mathbf{S} - \mathbf{a}\|^2],
\end{aligned}
\tag{5.5}
\]

where the cross term vanishes because $\mathbf{a} = E[\mathbf{S}]$. Observe that $E_{av}(\mathbf{b})$ is minimized only for $\mathbf{b} = \mathbf{a} = E[\mathbf{S}]$. If we do not translate, i.e. when $\mathbf{b} = \mathbf{0}$, the average signal energy is

\[
E_{av}(\mathbf{0}) = E[\|\mathbf{S} - \mathbf{a}\|^2] + \|\mathbf{a}\|^2,
\tag{5.6}
\]

hence we can save $\|\mathbf{a}\|^2$ by translating the origin of the coordinate system to $\mathbf{a}$.

RESULT 5.1 To minimize the average signal energy we should choose the center of gravity of the signal structure as the origin of the coordinate system. If the center of gravity of the signal structure $\mathbf{a} \neq \mathbf{0}$, we can decrease the average signal energy by $\|\mathbf{a}\|^2$ by doing an optimal translation of the coordinate system.

In what follows we will discuss some examples.
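As a quick numerical illustration of Result 5.1, the sketch below (in Python; the four-point signal structure and equal probabilities are made-up values, not a set from the text) verifies that translating the origin to the center of gravity saves exactly $\|\mathbf{a}\|^2$:

```python
# Hypothetical 2-D signal structure with equal message probabilities
# (made-up points, chosen only to illustrate Result 5.1).
signals = [(3.0, 1.0), (3.0, -1.0), (5.0, 1.0), (5.0, -1.0)]
probs = [0.25] * 4

def avg_energy(structure, probs, a=(0.0, 0.0)):
    """Average energy E_av(a) after moving the origin to a, as in (5.3)."""
    return sum(p * sum((si - ai) ** 2 for si, ai in zip(s, a))
               for p, s in zip(probs, structure))

# Center of gravity a = E[S], equation (5.4)
a = tuple(sum(p * s[i] for p, s in zip(probs, signals)) for i in range(2))

e0 = avg_energy(signals, probs)        # E_av(0) = 18
ea = avg_energy(signals, probs, a)     # E_av(a) = 2, the minimum
print(e0, ea, e0 - ea)                 # the saving equals ||a||^2 = 16
```

Here the center of gravity is $(4, 0)$, and the saving $E_{av}(\mathbf{0}) - E_{av}(\mathbf{a}) = 16 = \|\mathbf{a}\|^2$, exactly as (5.6) predicts.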

5.6 Comparison of Orthogonal and Antipodal Signaling


Let $\mathcal{M} = \{1, 2\}$ and $\Pr\{M = 1\} = \Pr\{M = 2\} = 1/2$. Consider a first signal set of two orthogonal waveforms (see the two sub-figures in the top row of figure 5.3)

\[
\begin{aligned}
s_1(t) &= \sqrt{2E_s} \sin(10\pi t), \\
s_2(t) &= \sqrt{2E_s} \sin(12\pi t),
\end{aligned}
\qquad \text{for } 0 \leq t < 1,
\tag{5.7}
\]

and a second signal set of two antipodal signals (see the bottom row sub-figures in figure 5.3)

\[
\begin{aligned}
s_1(t) &= \sqrt{2E_s} \sin(10\pi t), \\
s_2(t) &= -\sqrt{2E_s} \sin(10\pi t),
\end{aligned}
\qquad \text{also for } 0 \leq t < 1.
\tag{5.8}
\]

Note that binary FSK (frequency-shift keying) is the same as orthogonal signaling, while binary PSK (phase-shift keying) is identical to antipodal signaling. The vector representations of the signal sets are

\[
\mathbf{s}_1 = (\sqrt{E_s}, 0), \qquad \mathbf{s}_2 = (0, \sqrt{E_s}),
\tag{5.9}
\]



Figure 5.3: The two signal sets of (5.7) and (5.8) in waveform representation.

Figure 5.4: Vector representation of the orthogonal (a) and the antipodal signal set (b).

for the first (orthogonal) set, and

\[
\mathbf{s}_1 = (\sqrt{E_s}, 0), \qquad \mathbf{s}_2 = (-\sqrt{E_s}, 0),
\tag{5.10}
\]

for the second (antipodal) set. The average signal energy for both sets is equal to $E_s$. For the error probabilities in the case

Figure 5.5: Probability of error for binary antipodal and orthogonal signaling as a function of $E_s/N_0$ in dB. It is assumed that $\Pr\{M = 1\} = \Pr\{M = 2\} = 1/2$. Observe the difference of 3 dB between both curves.

of additive white Gaussian noise with power spectral density $N_0/2$ we get

\[
\begin{aligned}
P_E^{\text{orthog.}} &= Q\!\left(\frac{\sqrt{2E_s}}{2\sqrt{N_0/2}}\right) = Q\!\left(\sqrt{E_s/N_0}\right), \\
P_E^{\text{antipod.}} &= Q\!\left(\frac{2\sqrt{E_s}}{2\sqrt{N_0/2}}\right) = Q\!\left(\sqrt{2E_s/N_0}\right).
\end{aligned}
\tag{5.11}
\]

Note that the distance $d$ between the signal points differs by a factor $\sqrt{2}$, and $P_E = Q(d/2\sigma)$.

These error probabilities are depicted in figure 5.5. We observe a difference in the error probabilities of 3.0 dB. By this we mean that, in order to achieve a certain value of $P_E$, we have to make the signal energy twice as large for orthogonal signaling as for antipodal signaling. The better performance of antipodal signaling relative to orthogonal signaling is best explained by the fact that for antipodal signaling the center of gravity is the origin of the coordinate system, while for orthogonal signaling this is certainly not the case. We may conclude that binary PSK modulation achieves a better error performance than binary FSK. An advantage of FSK modulation is, however, that efficient FSK receivers can be designed that do not recover the phase. PSK, on the other hand, requires coherent demodulation.
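The 3 dB gap in (5.11) is easy to reproduce numerically. A minimal sketch using the identity $Q(x) = \tfrac{1}{2}(1 - \operatorname{erf}(x/\sqrt{2}))$ (the $E_s/N_0$ values are arbitrary illustrative choices):

```python
from math import erf, sqrt

def Q(x):
    """Gaussian tail function Q(x) = Pr{N(0,1) > x}."""
    return 0.5 * (1.0 - erf(x / sqrt(2.0)))

N0 = 1.0
for snr_db in (4.0, 8.0, 12.0):          # arbitrary Es/N0 values in dB
    Es = N0 * 10 ** (snr_db / 10.0)
    pe_orth = Q(sqrt(Es / N0))           # binary orthogonal (FSK), (5.11)
    pe_anti = Q(sqrt(2.0 * Es / N0))     # binary antipodal (PSK), (5.11)
    print(f"{snr_db:5.1f} dB  orth {pe_orth:.3e}  anti {pe_anti:.3e}")

# Doubling Es (i.e. +3 dB) for the orthogonal set gives exactly the
# antipodal performance: Q(sqrt(2*Es/N0)) == Q(sqrt((2*Es)/N0)).
```

The final comment makes the 3 dB statement concrete: the orthogonal curve is the antipodal curve shifted right by a factor of two in $E_s/N_0$.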


5.7 Orthogonal Signal Structures


Consider $|\mathcal{M}|$ signals $s_m(t)$, or in vector representation $\mathbf{s}_m$, with a-priori probabilities $\Pr\{M = m\} = 1/|\mathcal{M}|$ for $m \in \mathcal{M} = \{1, 2, \ldots, |\mathcal{M}|\}$. We now define an orthogonal signal set in the following way:

Definition 5.1 All signals in an orthogonal set are assumed to have equal energy and to be orthogonal, i.e.

\[
\mathbf{s}_m = \sqrt{E_s}\,\boldsymbol{\phi}_m \quad \text{for } m \in \mathcal{M},
\tag{5.12}
\]

where $\boldsymbol{\phi}_m$ is the unit vector corresponding to dimension $m$. There are as many building-block waveforms $\phi_m(t)$ and dimensions in the signal space as there are messages.

The signals that we have defined are indeed orthogonal, since

\[
\int s_m(t) s_k(t)\,dt = (\mathbf{s}_m \cdot \mathbf{s}_k) = E_s (\boldsymbol{\phi}_m \cdot \boldsymbol{\phi}_k) = E_s \delta_{mk} \quad \text{for } m \in \mathcal{M} \text{ and } k \in \mathcal{M}.
\tag{5.13}
\]

Note that all signals have energy equal to $E_s$. So far we have not been very explicit about the actual signals. However, we can e.g. think of (disjoint) shifts of a pulse (pulse-position modulation, PPM) or sines and cosines with an integer number of periods over $[0, T)$ (frequency-shift keying, FSK).
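For the FSK example, orthogonality of sines with an integer number of periods over $[0, 1)$ can be checked by approximating the integral in (5.13) with a Riemann sum; a small sketch (the sample count is an ad-hoc choice):

```python
from math import pi, sin

def inner(m, k, n=20000):
    """Riemann-sum approximation of the integral of
    sin(2*pi*m*t) * sin(2*pi*k*t) over [0, 1)."""
    dt = 1.0 / n
    return sum(sin(2 * pi * m * i * dt) * sin(2 * pi * k * i * dt)
               for i in range(n)) * dt

# With sm(t) = sqrt(2*Es) sin(2*pi*m*t), the inner product is
# <sm, sk> = 2 * Es * inner(m, k): it should be Es for m == k and
# 0 otherwise, matching (5.13).
Es = 1.0
for m in (1, 2, 3):
    print([round(2.0 * Es * inner(m, k), 6) for k in (1, 2, 3)])
```

Because each sine completes an integer number of periods over the interval, the off-diagonal inner products vanish and the diagonal ones equal $E_s$, as (5.13) requires.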

5.8 Optimum Receiver


How does the optimum receiver decide when it receives the vector $\mathbf{r} = (r_1, r_2, \ldots, r_{|\mathcal{M}|})$? It has to choose the message $m \in \mathcal{M}$ that minimizes the squared Euclidean distance between $\mathbf{s}_m$ and $\mathbf{r}$, i.e.

\[
\|\mathbf{r} - \mathbf{s}_m\|^2 = \|\mathbf{r}\|^2 + \|\mathbf{s}_m\|^2 - 2(\mathbf{r} \cdot \mathbf{s}_m) = \|\mathbf{r}\|^2 + E_s - 2\sqrt{E_s}\, r_m.
\tag{5.14}
\]

RESULT 5.2 Since only the term $-2\sqrt{E_s}\, r_m$ depends on $m$, an optimum receiver has to choose $\hat{m}$ such that

\[
r_{\hat{m}} \geq r_m \quad \text{for all } m \in \mathcal{M}.
\tag{5.15}
\]

Now we can find an expression for the error probability.
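Result 5.2 can be exercised with a short Monte-Carlo sketch; the values of $|\mathcal{M}|$, $E_s$, $N_0$ and the trial count below are illustrative choices, not from the text:

```python
import random
from math import sqrt

# Monte-Carlo sketch of the max-correlation receiver of Result 5.2.
# |M|, Es, N0 and the trial count are made-up illustrative values.
random.seed(1)
M, Es, N0 = 8, 4.0, 1.0
sigma = sqrt(N0 / 2.0)            # per-dimension noise std. dev.

trials, errors = 20000, 0
for _ in range(trials):
    m = random.randrange(M)       # transmitted message (uniform a-priori)
    # received vector: sqrt(Es) in dimension m plus white Gaussian noise
    r = [sqrt(Es) * (i == m) + random.gauss(0.0, sigma) for i in range(M)]
    m_hat = max(range(M), key=r.__getitem__)   # choose the largest r_m
    errors += (m_hat != m)

print("empirical P_E ~", errors / trials)
```

The empirical error rate can be compared against the exact expression derived in the next section.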


5.9 Error Probability


To determine the error probability for orthogonal signaling we may assume, because of symmetry, that signal $s_1(t)$ was actually sent. Then

\[
r_1 = \sqrt{E_s} + n_1, \qquad r_m = n_m \text{ for } m = 2, 3, \ldots, |\mathcal{M}|, \qquad \text{with } p_N(\mathbf{n}) = \prod_{m=1}^{|\mathcal{M}|} \frac{1}{\sqrt{\pi N_0}} \exp\!\left(-\frac{n_m^2}{N_0}\right).
\tag{5.16}
\]

Note that the noise vector $\mathbf{n} = (n_1, n_2, \ldots, n_{|\mathcal{M}|})$ consists of $|\mathcal{M}|$ independent components, all with mean 0 and variance $N_0/2$. Suppose that the first component of the received vector is $\alpha$. Then we can write for the probability of a correct decision, conditional on the fact that message 1 was sent and that the first component of $\mathbf{r}$ is $\alpha$,

\[
\Pr\{\hat{M} = 1 \mid M = 1, R_1 = \alpha\} = \Pr\{N_2 < \alpha, N_3 < \alpha, \ldots, N_{|\mathcal{M}|} < \alpha\} = \left(\Pr\{N_2 < \alpha\}\right)^{|\mathcal{M}|-1} = \left(\int_{-\infty}^{\alpha} p_N(\beta)\,d\beta\right)^{|\mathcal{M}|-1},
\tag{5.17}
\]

therefore the probability of a correct decision is

\[
P_C = \int_{-\infty}^{\infty} p_N(\alpha - \sqrt{E_s}) \left(\int_{-\infty}^{\alpha} p_N(\beta)\,d\beta\right)^{|\mathcal{M}|-1} d\alpha.
\tag{5.18}
\]

We rewrite this correct probability as follows:

\[
\begin{aligned}
P_C &= \int_{-\infty}^{\infty} \frac{1}{\sqrt{\pi N_0}} \exp\!\left(-\frac{(\alpha - \sqrt{E_s})^2}{N_0}\right) \left(\int_{-\infty}^{\alpha} p_N(\beta)\,d\beta\right)^{|\mathcal{M}|-1} d\alpha \\
&= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{\left(\gamma - \sqrt{2E_s/N_0}\right)^2}{2}\right) \left(\int_{-\infty}^{\gamma\sqrt{N_0/2}} p_N(\beta)\,d\beta\right)^{|\mathcal{M}|-1} d\gamma
\end{aligned}
\tag{5.19}
\]

with $\gamma = \alpha/\sqrt{N_0/2}$. Furthermore,

\[
\int_{-\infty}^{\gamma\sqrt{N_0/2}} \frac{1}{\sqrt{\pi N_0}} \exp\!\left(-\frac{\beta^2}{N_0}\right) d\beta = \int_{-\infty}^{\gamma} \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{\delta^2}{2}\right) d\delta
\tag{5.20}
\]



Figure 5.6: Error probability $P_E$ for $|\mathcal{M}|$ orthogonal signals as a function of $E_s/N_0$ in dB for $\log_2 |\mathcal{M}| = 1, 2, \ldots, 16$. Note that $P_E$ increases with $|\mathcal{M}|$ for fixed $E_s/N_0$.

with $\delta = \beta/\sqrt{N_0/2}$. Therefore

\[
P_C = \int_{-\infty}^{\infty} p(\gamma - b) \left(\int_{-\infty}^{\gamma} p(\delta)\,d\delta\right)^{|\mathcal{M}|-1} d\gamma,
\tag{5.21}
\]

with $p(\gamma) = \frac{1}{\sqrt{2\pi}} \exp(-\frac{\gamma^2}{2})$ and $b = \sqrt{2E_s/N_0}$. Note that $p(\gamma)$ is the probability density function of a Gaussian random variable with mean zero and variance 1. From (5.21) we conclude that for a given $|\mathcal{M}|$ the correct probability $P_C$ depends only on $b$, i.e. on the ratio $E_s/N_0$. This can be regarded as a signal-to-noise ratio, since $E_s$ is the signal energy and $N_0$ is twice the variance of the noise in each dimension.

Example 5.1 In figure 5.6 the error probability $P_E = 1 - P_C$ is depicted for values of $\log_2 |\mathcal{M}| = 1, 2, \ldots, 16$ as a function of $E_s/N_0$.
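Equation (5.21) can also be evaluated numerically. The sketch below uses a simple trapezoidal rule (the integration limits and step count are ad-hoc choices) and, for $|\mathcal{M}| = 2$, reproduces $Q(\sqrt{E_s/N_0})$ from (5.11):

```python
from math import erf, exp, pi, sqrt

def p(x):
    """Standard Gaussian density."""
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def Phi(x):
    """Standard Gaussian CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def Q(x):
    return 1.0 - Phi(x)

def pe_orthogonal(M, es_over_n0, lo=-10.0, hi=14.0, steps=4000):
    """P_E = 1 - P_C with P_C from (5.21), by the trapezoidal rule.
    Integration limits and step count are ad-hoc numerical choices."""
    b = sqrt(2.0 * es_over_n0)
    h = (hi - lo) / steps
    pc = 0.0
    for i in range(steps + 1):
        g = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        pc += w * p(g - b) * Phi(g) ** (M - 1)
    return 1.0 - pc * h

# Sanity check against (5.11): for |M| = 2 this must equal Q(sqrt(Es/N0)).
print(pe_orthogonal(2, 4.0), Q(sqrt(4.0)))
```

Evaluating `pe_orthogonal` over a grid of $E_s/N_0$ values for several $|\mathcal{M}|$ reproduces the curves of figure 5.6.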

5.10 Capacity

Consider the following experiment. We keep increasing $|\mathcal{M}|$ and want to know how we should increase $E_s/N_0$ such that the error probability $P_E$ gets smaller and smaller. It turns out that it is the energy per bit that counts. We define $E_b$, the energy per transmitted bit of information, as

\[
E_b = \frac{E_s}{\log_2 |\mathcal{M}|}.
\tag{5.22}
\]

Reliable communication is possible if $E_b > N_0 \ln 2$. It can be shown that:

Figure 5.7: Error probability for $|\mathcal{M}| = 2, 4, 8, \ldots, 32768$, now as a function of the ratio $E_b/N_0$ in dB.

RESULT 5.3 The error probability for orthogonal signaling satisfies

\[
P_E \leq
\begin{cases}
2 \exp\!\left(-\log_2 |\mathcal{M}| \left[\dfrac{E_b}{2N_0} - \ln 2\right]\right), & E_b/N_0 \geq 4 \ln 2, \\[2ex]
2 \exp\!\left(-\log_2 |\mathcal{M}| \left[\sqrt{E_b/N_0} - \sqrt{\ln 2}\,\right]^2\right), & \ln 2 \leq E_b/N_0 \leq 4 \ln 2.
\end{cases}
\tag{5.23}
\]

The consequence of (5.23) is that if $E_b$, the energy per bit, is larger than $N_0 \ln 2$, we can get an arbitrarily small error probability by increasing $|\mathcal{M}|$, the dimensionality of the signal space. This is a capacity result! Note that the number of bits that we transmit each time is $\log_2 |\mathcal{M}|$, and this number grows much more slowly than $|\mathcal{M}|$ itself.
Example 5.2 In figure 5.7 we have plotted the error probability $P_E$ as a function of the ratio $E_b/N_0$. It appears that for ratios larger than $\ln 2$ (i.e. $-1.5917$ dB; this value is called the Shannon limit) the error probability decreases when we make $|\mathcal{M}|$ larger.

Finally, note that if we use a transmitter with power $P_s$ then, since reliable transmission of a bit requires energy $N_0 \ln 2$, up to

\[
C = \frac{P_s}{N_0 \ln 2} \text{ bits/second}
\tag{5.24}
\]

can be transmitted reliably. We will see later that this is the so-called wideband capacity.
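Both the Shannon limit and (5.24) are one-line computations; in the sketch below the transmitter power and noise density are made-up values, used only for illustration:

```python
from math import log, log10

# Shannon limit for E_b/N0: ln 2, i.e. about -1.59 dB.
shannon_limit_db = 10.0 * log10(log(2.0))
print(shannon_limit_db)            # ~ -1.59 dB

# Wideband capacity (5.24) for hypothetical link parameters:
Ps = 1.0e-12                       # transmitter power in W (made up)
N0 = 4.0e-21                       # noise density in W/Hz (made up)
C = Ps / (N0 * log(2.0))
print(C, "bits/second")
```

Note that the limit $\ln 2 \approx 0.693$ is a ratio smaller than one, which is why its dB value is negative.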


5.11 Energy of |M| Orthogonal Signals


The average energy $E_{av}$ of the signals in an orthogonal set (all signals having energy $E_s$) is

\[
E_{av} = E[\|\mathbf{S}\|^2] = \sum_{m \in \mathcal{M}} \Pr\{M = m\} \|\mathbf{s}_m\|^2 = E_s.
\tag{5.25}
\]

The center of gravity of our orthogonal signal structure is

\[
\mathbf{a} = \left(\frac{1}{|\mathcal{M}|}, \frac{1}{|\mathcal{M}|}, \ldots, \frac{1}{|\mathcal{M}|}\right) \sqrt{E_s}.
\tag{5.26}
\]

We know from (5.6) that

\[
E_{av}(\mathbf{0}) = E_{av}(\mathbf{a}) + \|\mathbf{a}\|^2,
\tag{5.27}
\]

thus, since $E_{av}(\mathbf{0}) = E_{av}$,

\[
E_s = E_{av}(\mathbf{a}) + \frac{E_s}{|\mathcal{M}|},
\tag{5.28}
\]

or

\[
E_{av}(\mathbf{a}) = E_s \left(1 - \frac{1}{|\mathcal{M}|}\right).
\tag{5.29}
\]

Observe that $E_{av}(\mathbf{a})$ is the average signal energy after translating the coordinate system such that its origin is the center of gravity of the signal structure. For $|\mathcal{M}| \to \infty$ the difference between $E_{av}(\mathbf{a})$ and $E_s$ can be neglected. Therefore we conclude that an orthogonal signal set is not optimal in the sense of needing the smallest possible average signal energy for a given error performance, but the difference diminishes as $|\mathcal{M}| \to \infty$.
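A direct check of (5.29), with made-up values for $E_s$ and $|\mathcal{M}|$:

```python
from math import sqrt

# Verify (5.29): after translating an orthogonal set to its center of
# gravity, the average energy is Es * (1 - 1/|M|). Es and M are made up.
Es, M = 9.0, 4
signals = [[sqrt(Es) if i == m else 0.0 for i in range(M)] for m in range(M)]
a = [sqrt(Es) / M] * M                      # center of gravity, (5.26)

e_av_a = sum(sum((s[i] - a[i]) ** 2 for i in range(M))
             for s in signals) / M          # equal probabilities 1/M
print(e_av_a, Es * (1.0 - 1.0 / M))         # both equal 6.75
```

With $E_s = 9$ and $|\mathcal{M}| = 4$ the translated set has average energy $9 \cdot 3/4 = 6.75$, a saving of $E_s/|\mathcal{M}| = 2.25$ over the untranslated set.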

5.12 Exercises
1. Either of the two waveform sets illustrated in figures 5.8 and 5.9 may be used to communicate one of four equally likely messages over an additive white Gaussian noise channel.

(a) Show that both sets use the same energy.

(b) Exploit the union bound to show that the set of figure 5.9 uses energy almost 3 dB more effectively than the set of figure 5.8 when a small $P_E$ is required.

(Exercise 4.18 from Wozencraft and Jacobs [25].)

2. In a communication system based on an additive white Gaussian noise waveform channel four signals (waveforms) are used. All signals are zero for $t < 0$ and $t \geq 8$. For $0 \leq t < 8$ the signals are

\[
\begin{aligned}
s_1(t) &= 0, \\
s_2(t) &= +2\cos(\pi t/4), \\
s_3(t) &= +2\cos(\pi t/2), \\
s_4(t) &= +2\cos(\pi t/4) + 2\cos(\pi t/2).
\end{aligned}
\tag{5.30}
\]


Figure 5.8: Signal set (a).

The messages corresponding to the signals all have probability 1/4. The power spectral density of the noise process $N_w(t)$ is $N_0/2 = 2$ for all $f$. The receiver observes the received waveform $r(t) = s_m(t) + n_w(t)$ in the time interval $0 \leq t < 8$.

(a) Determine a set of building-block waveforms for the four signals. Sketch these building-block waveforms. Show that they are orthonormal over $[0, 8)$. Give the vector representations of all four signals and sketch the resulting signal structure.

(b) Describe for what received vectors $\mathbf{r}$ an optimum receiver chooses $\hat{M} = 1$, $\hat{M} = 2$, $\hat{M} = 3$, and $\hat{M} = 4$. Sketch the corresponding decision regions $I_1, I_2, I_3, I_4$. Give an expression for the error probability $P_E$ obtained by an optimum receiver. Use the $Q(\cdot)$-function.

(c) Sketch and specify a matched-filter implementation of an optimum receiver for the four signals (all occurring with equal a-priori probability). Use only two matched filters.

(d) Compute the average energy of the signals. We can translate the signal structure over a vector $\mathbf{a}$ such that the average signal energy is minimal. Determine this vector $\mathbf{a}$. What is the minimal average signal energy? Specify the modified signals $s_1(t)$, $s_2(t)$, $s_3(t)$, and $s_4(t)$ that correspond to this translated signal structure.

(e) Consider the signals as specified in (5.30) again. There we assumed that all messages



Figure 5.9: Signal set (b).

were equally likely. Assume next that

\[
\begin{aligned}
\Pr\{M = 1\} &= \frac{1}{1 + 2e^{-2} + e^{-4}}, \\
\Pr\{M = 2\} = \Pr\{M = 3\} &= \frac{e^{-2}}{1 + 2e^{-2} + e^{-4}}, \\
\Pr\{M = 4\} &= \frac{e^{-4}}{1 + 2e^{-2} + e^{-4}}.
\end{aligned}
\]

Sketch the corresponding decision regions $I_1, I_2, I_3, I_4$. Give an expression for the resulting error probability $P_E$ obtained by an optimum receiver. Use the $Q(\cdot)$-function again.

(Exam Communication Theory, July 9, 2004)

3. Consider a communication system based on frequency-shift keying (FSK). There are 8 equiprobable messages, hence $\Pr\{M = m\} = 1/8$ for $m \in \{1, 2, \ldots, 8\}$. The signal waveform corresponding to message $m$ is

\[
s_m(t) = A\sqrt{2}\cos(2\pi m t), \quad \text{for } 0 \leq t < 1.
\]

For $t < 0$ and $t \geq 1$ all signals are zero. The signals are transmitted over an additive white Gaussian noise waveform channel. The power spectral density of the noise is $S_n(f) = N_0/2$ for all frequencies $f$.

(a) First show that the signals $s_m(t)$, $m \in \{1, 2, \ldots, 8\}$, are orthogonal². Give the energies of the signals. What are the building-block waveforms $\phi_1(t), \phi_2(t), \ldots, \phi_8(t)$
²Hint: $2\cos(a)\cos(b) = \cos(a - b) + \cos(a + b)$.

that result in the signal vectors

\[
\begin{aligned}
\mathbf{s}_1 &= (A, 0, 0, 0, 0, 0, 0, 0), \\
\mathbf{s}_2 &= (0, A, 0, 0, 0, 0, 0, 0), \\
&\;\;\vdots \\
\mathbf{s}_8 &= (0, 0, 0, 0, 0, 0, 0, A)?
\end{aligned}
\]

(b) The optimum receiver first determines the correlations $r_i = \int_0^1 r(t)\phi_i(t)\,dt$ for $i = 1, 2, \ldots, 8$. Here $r(t)$ is the received waveform, hence $r(t) = s_m(t) + n_w(t)$. For what values of the vector $\mathbf{r} = (r_1, r_2, \ldots, r_8)$ does the receiver decide that message $m$ was transmitted?

(c) Give an expression for the error probability $P_E$ obtained by the optimum receiver.

(d) Next consider a system with 16 messages all having the same a-priori probability. The signal waveform corresponding to message $m$ is now given by

\[
\begin{aligned}
s_m(t) &= A\sqrt{2}\cos(2\pi m t), && \text{for } 0 \leq t < 1, \text{ for } m = 1, 2, \ldots, 8, \\
s_m(t) &= -A\sqrt{2}\cos(2\pi (m - 8) t), && \text{for } 0 \leq t < 1, \text{ for } m = 9, 10, \ldots, 16.
\end{aligned}
\]

For $t < 0$ and $t \geq 1$ these signals are again zero. What are the signal vectors $\mathbf{s}_1, \mathbf{s}_2, \ldots, \mathbf{s}_{16}$ if we use the building-block waveforms mentioned in part (a) again?

(e) The optimum receiver again determines the correlations $r_i = \int_0^1 r(t)\phi_i(t)\,dt$ for $i = 1, 2, \ldots, 8$. For what values of the vector $\mathbf{r} = (r_1, r_2, \ldots, r_8)$ does the receiver now decide that message $m$ was transmitted? Give an expression for the error probability $P_E$ obtained by the optimum receiver now.

(Exam Communication Principles, October 6, 2003)

4. A Hadamard matrix is a matrix whose elements are $\pm 1$. When $n$ is a power of 2, an $n \times n$ Hadamard matrix is constructed by means of the recursion

\[
H_2 = \begin{pmatrix} +1 & +1 \\ +1 & -1 \end{pmatrix}, \qquad
H_{2n} = \begin{pmatrix} +H_n & +H_n \\ +H_n & -H_n \end{pmatrix}.
\tag{5.31}
\]

Let $n$ be a power of 2 and $\mathcal{M} = \{1, 2, \ldots, n\}$. Consider for $m \in \mathcal{M}$ the signal vectors $\mathbf{s}_m = \sqrt{E_s/n}\,\mathbf{h}_m$, where $\mathbf{h}_m$ is the $m$-th row of the Hadamard matrix $H_n$.

(a) Show that the signal set $\{\mathbf{s}_m, m \in \mathcal{M}\}$ consists of orthogonal vectors all having energy $E_s$.

(b) What is the error probability $P_E$ if the signal vectors correspond to waveforms that are transmitted over a waveform channel with additive white Gaussian noise having spectral density $N_0/2$?



(c) What is the advantage of using the Hadamard signal set over the orthogonal set from definition 5.1 if we assume that the building-block waveforms are in both cases time-shifts of a pulse? And the disadvantage?

(d) The $2n \times n$ matrix

\[
H = \begin{pmatrix} +H_n \\ -H_n \end{pmatrix}
\tag{5.32}
\]

defines a set of $2n$ signals which is called bi-orthogonal. Determine the error probability of this signal set if the energy of each signal is $E_s$. A bi-orthogonal code with $n = 32$ was used for an early deep-space mission (Mariner, 1969). A fast Hadamard transform was used as decoding method [6].
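The recursion (5.31) and the orthogonality claimed in exercise 4(a) can be checked directly; a small sketch:

```python
# Hadamard recursion (5.31); n must be a power of 2.
def hadamard(n):
    if n == 1:
        return [[1]]
    h = hadamard(n // 2)
    return ([row + row for row in h] +
            [row + [-x for x in row] for row in h])

H8 = hadamard(8)
# Rows satisfy h_i . h_j = n * delta_ij, so sqrt(Es/n) * h_m has energy Es
# and the set is orthogonal, as exercise 4(a) asks.
for i in range(8):
    for j in range(8):
        dot = sum(H8[i][k] * H8[j][k] for k in range(8))
        assert dot == (8 if i == j else 0)
print("rows of H8 are mutually orthogonal")
```

Since every entry of $H_n$ is $\pm 1$, the corresponding waveforms spread their energy over all $n$ building-block pulses, which is the time-shift advantage asked about in part (c).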
