
SIGNAL SPACE ANALYSIS

1/45
Outline
• 1 Introduction
• 2 Geometric Representation of Signals
– Gram-Schmidt Orthogonalization Procedure
• 3 Conversion of the AWGN Channel into a Vector Channel

2/45
Introduction – the Model
• We consider the following model of a generic
transmission system (digital source):
– A message source transmits one symbol every T sec
– Symbols belong to an alphabet of M symbols (m1, m2, …, mM)
• Binary – symbols are 0s and 1s
• Quaternary PCM – symbols are 00, 01, 10, 11

3/45
Transmitter Side
• Symbol generation (message) is probabilistic, with a priori probabilities p1, p2, …, pM, or
• symbols are equally likely.
• So the probability that symbol mi will be emitted is:

p_i = P(m_i) = \frac{1}{M}, \quad \text{for } i = 1, 2, \ldots, M   (1)

4/45
• Transmitter takes the symbol (data) mi (digital
message source output) and encodes it into a
distinct signal si(t).
• The signal si(t) occupies the whole slot T allotted
to symbol mi.
• si(t) is a real-valued energy signal (a signal with finite energy):

E_i = \int_0^T s_i^2(t) \, dt, \quad i = 1, 2, \ldots, M   (2)

6/45
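As a quick numerical sketch of the energy definition in eq. (2): the example signal s(t) = sin(2πt/T) and the use of Python/NumPy are illustrative assumptions, not part of the slides.

```python
import numpy as np

# Energy of a finite-energy signal, eq. (2): E = integral over [0, T] of s^2(t) dt.
# The signal s(t) = sin(2*pi*t/T) is an illustrative choice.
T = 1.0
n = 100_000
t = np.linspace(0.0, T, n, endpoint=False)
dt = t[1] - t[0]
s = np.sin(2 * np.pi * t / T)
E = np.sum(s**2) * dt   # Riemann-sum approximation of eq. (2)
print(E)                # close to T/2 = 0.5
```

Any finite-duration, square-integrable waveform can be plugged in for `s` the same way.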
Channel Assumptions:
• Linear, with a bandwidth wide enough to accommodate the signal si(t) with no or negligible distortion
• The channel noise w(t) is a zero-mean white Gaussian noise process – AWGN
– additive noise
– the received signal may be expressed as:

x(t) = s_i(t) + w(t), \quad 0 \le t \le T, \quad i = 1, 2, \ldots, M   (3)

7/45
Receiver Side
• Observes the received signal x(t) for a duration of time T sec
• Makes an estimate of the transmitted signal si(t) (equivalently, of symbol mi).
• Process is statistical
– presence of noise
– errors
• So, receiver has to be designed for minimizing the average
probability of error (Pe)
What is this?

P_e = \sum_{i=1}^{M} p_i \, P(\hat{m} \ne m_i \mid m_i)   (4)

where p_i is the a priori probability that the ith symbol was sent and P(\hat{m} \ne m_i \mid m_i) is the conditional error probability given that the ith symbol was sent.
8/45
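A tiny numerical illustration of eq. (4); the priors and conditional error probabilities below are made-up values, not from the slides.

```python
# Average probability of symbol error, eq. (4): Pe = sum over i of p_i * P(error | m_i).
# Priors follow eq. (1) with M = 4; the conditional error probabilities are hypothetical.
priors = [0.25, 0.25, 0.25, 0.25]
cond_err = [0.01, 0.02, 0.02, 0.01]   # hypothetical P(m_hat != m_i | m_i)
Pe = sum(p * pe for p, pe in zip(priors, cond_err))
print(Pe)  # 0.015
```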
Outline
• 1 Introduction
• 2 Geometric Representation of Signals
– Gram-Schmidt Orthogonalization Procedure
• 3 Conversion of the AWGN Channel into a Vector Channel

9/45
2. Geometric Representation of Signals
• Objective: To represent any set of M energy signals {si(t)} as linear combinations of N orthonormal basis functions, where N ≤ M
• Real-valued energy signals s1(t), s2(t), …, sM(t), each of duration T sec, are expressed as:

s_i(t) = \sum_{j=1}^{N} s_{ij} \phi_j(t), \quad 0 \le t \le T, \quad i = 1, 2, \ldots, M   (5)

where the s_{ij} are the expansion coefficients and the \phi_j(t) are the orthonormal basis functions.
10/45
• Coefficients:

s_{ij} = \int_0^T s_i(t) \phi_j(t) \, dt, \quad i = 1, 2, \ldots, M; \quad j = 1, 2, \ldots, N   (6)

• Real-valued basis functions are orthonormal:

\int_0^T \phi_i(t) \phi_j(t) \, dt = \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases}   (7)
11/45
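A numerical sketch of eqs. (5)-(7), assuming sampled waveforms and NumPy; the sinusoidal basis and the coefficients 3 and −2 are illustrative choices.

```python
import numpy as np

# Orthonormal basis check, eq. (7), and signal expansion, eqs. (5)-(6).
T = 1.0
t = np.linspace(0.0, T, 200_000, endpoint=False)
dt = t[1] - t[0]
phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T)
phi2 = np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)

# eq. (7): inner products of basis functions approximate the Kronecker delta
g11 = np.sum(phi1 * phi1) * dt   # ~ 1
g12 = np.sum(phi1 * phi2) * dt   # ~ 0

# A signal living in the span of the basis, and its coefficients via eq. (6)
s = 3.0 * phi1 - 2.0 * phi2
s1 = np.sum(s * phi1) * dt       # ~ 3
s2 = np.sum(s * phi2) * dt       # ~ -2

# eq. (5): reconstruction from the coefficients
s_hat = s1 * phi1 + s2 * phi2
err = np.max(np.abs(s - s_hat))  # ~ 0
```

The same inner-product code recovers the coefficients for any signal in the span of the basis.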
• The set of coefficients can be viewed as an N-dimensional vector, denoted by si
• Bears a one-to-one relationship with the
transmitted signal si(t)

12/45
Figure 2
(a) Synthesizer for generating the signal si(t). (b) Analyzer for
generating the set of signal vectors si.

13/45
So,
• Each signal in the set is completely determined by the vector of its coefficients:

s_i = [s_{i1}, s_{i2}, \ldots, s_{iN}]^T, \quad i = 1, 2, \ldots, M   (8)

14/45
Finally,
• The signal vector si concept can be extended to 2D, 3D, …, N-dimensional Euclidean space
• Provides the mathematical basis for the geometric representation of energy signals that is used in noise analysis
• Allows definition of
– Length of vectors (norm)
– Angles between vectors
– Squared length (inner product of si with itself):

\|s_i\|^2 = s_i^T s_i = \sum_{j=1}^{N} s_{ij}^2, \quad i = 1, 2, \ldots, M   (9)

where T denotes matrix transposition.

15/45
Figure 3
Illustrating the geometric representation of signals for the case when N = 2 and M = 3
(two-dimensional space, three signals).

16/45
Also,
What is the relation between the vector representation of a signal and its energy value?

• …start with the definition of the energy of a signal:

E_i = \int_0^T s_i^2(t) \, dt   (10)

• where si(t) is as in (5):

s_i(t) = \sum_{j=1}^{N} s_{ij} \phi_j(t)   (5)

17/45
• After substitution:

E_i = \int_0^T \left( \sum_{j=1}^{N} s_{ij} \phi_j(t) \right) \left( \sum_{k=1}^{N} s_{ik} \phi_k(t) \right) dt

• After regrouping:

E_i = \sum_{j=1}^{N} \sum_{k=1}^{N} s_{ij} s_{ik} \int_0^T \phi_j(t) \phi_k(t) \, dt   (11)

• The \phi_j(t) are orthonormal, so finally we have:

E_i = \sum_{j=1}^{N} s_{ij}^2 = \|s_i\|^2   (12)

The energy of a signal is equal to the squared length of its vector.
18/45
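Eq. (12) can be checked numerically; the sinusoidal basis on [0, T] and the vector (3, −2) are illustrative assumptions.

```python
import numpy as np

# eq. (12): the energy of s_i(t) equals the squared length of its vector s_i.
T = 1.0
t = np.linspace(0.0, T, 200_000, endpoint=False)
dt = t[1] - t[0]
phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T)
phi2 = np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)
s = 3.0 * phi1 - 2.0 * phi2            # vector representation (3, -2)
E_time = np.sum(s**2) * dt             # eq. (10): integral of s^2(t)
E_vec = 3.0**2 + (-2.0)**2             # eq. (12): ||s||^2 = 13
```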
Formulas for two signals
• Assume we have a pair of signals si(t) and sk(t), each represented by its vector.
• Then their inner product over [0, T] is:

\int_0^T s_i(t) s_k(t) \, dt = s_i^T s_k   (13)

The inner product of the signals is equal to the inner product of their vector representations; it is invariant to the selection of the basis functions.
19/45
Euclidean Distance
• The Euclidean distance between two points represented by signal vectors is ||si − sk||, and its squared value is given by:

\|s_i - s_k\|^2 = \sum_{j=1}^{N} (s_{ij} - s_{kj})^2 = \int_0^T (s_i(t) - s_k(t))^2 \, dt   (14)

20/45
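A sketch of eq. (14), computing the squared distance both from the vectors and from the waveforms; the basis and the two vectors are illustrative assumptions.

```python
import numpy as np

# eq. (14): distance between signal vectors equals the L2 distance between waveforms.
T = 1.0
t = np.linspace(0.0, T, 200_000, endpoint=False)
dt = t[1] - t[0]
phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T)
phi2 = np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)
si = np.array([1.0, 2.0])                      # illustrative signal vectors
sk = np.array([4.0, -2.0])
si_t = si[0] * phi1 + si[1] * phi2             # corresponding waveforms
sk_t = sk[0] * phi1 + sk[1] * phi2
d2_vec = np.sum((si - sk)**2)                  # (1-4)^2 + (2+2)^2 = 25
d2_wave = np.sum((si_t - sk_t)**2) * dt        # integral form of eq. (14)
```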
Angle between two signals
• The cosine of the angle θik between two signal vectors si and sk is equal to the inner product of the two vectors, divided by the product of their norms:

\cos \theta_{ik} = \frac{s_i^T s_k}{\|s_i\| \|s_k\|}   (15)

• So the two signal vectors are orthogonal if their inner product s_i^T s_k is zero (cos θik = 0).

21/45
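Eq. (15) in a few lines; the two vectors below are illustrative assumptions.

```python
import numpy as np

# eq. (15): cosine of the angle between two signal vectors.
si = np.array([1.0, 0.0])
sk = np.array([1.0, 1.0])
cos_theta = si @ sk / (np.linalg.norm(si) * np.linalg.norm(sk))  # 1/sqrt(2)

# Orthogonality: inner product zero means cos(theta) = 0
ortho = np.array([0.0, 3.0])
inner = si @ ortho   # 0.0 -> si and ortho are orthogonal
```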
Schwarz Inequality
• Defined as:

\left( \int_{-\infty}^{\infty} s_1(t) s_2(t) \, dt \right)^2 \le \int_{-\infty}^{\infty} s_1^2(t) \, dt \int_{-\infty}^{\infty} s_2^2(t) \, dt   (16)

• …accepted here without proof.

22/45
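A numerical spot-check of the Schwarz inequality, eq. (16), on two arbitrary finite-energy waveforms (both chosen purely for illustration).

```python
import numpy as np

# eq. (16): (integral of s1*s2)^2 <= (integral of s1^2) * (integral of s2^2).
t = np.linspace(0.0, 1.0, 100_000, endpoint=False)
dt = t[1] - t[0]
s1 = np.exp(-t)          # illustrative finite-energy signals on [0, 1]
s2 = np.cos(5 * t)
lhs = (np.sum(s1 * s2) * dt) ** 2
rhs = (np.sum(s1**2) * dt) * (np.sum(s2**2) * dt)
print(lhs <= rhs)        # True for any pair of square-integrable signals
```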
Outline
• 1 Introduction
• 2 Geometric Representation of Signals
– Gram-Schmidt Orthogonalization Procedure
• 3 Conversion of the AWGN Channel into a Vector Channel

23/45
Gram-Schmidt Orthogonalization Procedure
Assume a set of M energy signals denoted by s1(t), s2(t), …, sM(t).

1. Define the first basis function starting with s1(t) (where E1 is the energy of the signal, as in (12)):

\phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}}   (19)

2. Then express s1(t) using the basis function and an energy-related coefficient s11 = \sqrt{E_1}:

s_1(t) = \sqrt{E_1} \, \phi_1(t) = s_{11} \phi_1(t)   (20)

3. Then, using s2(t), define the coefficient s21 as:

s_{21} = \int_0^T s_2(t) \phi_1(t) \, dt   (21)
24/45
4. Introduce the intermediate function g2(t), which is orthogonal to φ1(t):

g_2(t) = s_2(t) - s_{21} \phi_1(t)   (22)

5. Define the second basis function φ2(t) as:

\phi_2(t) = \frac{g_2(t)}{\sqrt{\int_0^T g_2^2(t) \, dt}}   (23)

6. After substituting for g2(t) in terms of s1(t) and s2(t), this becomes:

\phi_2(t) = \frac{s_2(t) - s_{21} \phi_1(t)}{\sqrt{E_2 - s_{21}^2}}   (24)

• Note that φ1(t) and φ2(t) are orthonormal, that is:

\int_0^T \phi_2^2(t) \, dt = 1 \quad \text{(see (23))}, \qquad \int_0^T \phi_1(t) \phi_2(t) \, dt = 0
25/45
And so on for N-dimensional space…
• In general, the intermediate function gi(t) can be defined using the following formula:

g_i(t) = s_i(t) - \sum_{j=1}^{i-1} s_{ij} \phi_j(t)   (25)

• where the coefficients are defined by:

s_{ij} = \int_0^T s_i(t) \phi_j(t) \, dt, \quad j = 1, 2, \ldots, i-1   (26)
26/45
Special case:
• For the special case of i = 1, gi(t) reduces to si(t).

General case:
• Given the functions gi(t), we can define a set of basis functions which form an orthonormal set:

\phi_i(t) = \frac{g_i(t)}{\sqrt{\int_0^T g_i^2(t) \, dt}}, \quad i = 1, 2, \ldots, N   (27)

27/45
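The procedure in eqs. (25)-(27) can be sketched on sampled waveforms; the three input signals (1, t, t² on [0, T]) are an illustrative assumption.

```python
import numpy as np

# Gram-Schmidt on sampled waveforms, following eqs. (25)-(27).
T, n = 1.0, 100_000
t = np.linspace(0.0, T, n, endpoint=False)
dt = t[1] - t[0]
signals = [np.ones(n), t.copy(), t**2]      # illustrative s_1, s_2, s_3

basis = []
for s in signals:
    g = s.copy()
    for phi in basis:                       # eq. (25): subtract projections
        s_ij = np.sum(s * phi) * dt         # eq. (26): coefficient
        g -= s_ij * phi
    norm = np.sqrt(np.sum(g**2) * dt)
    if norm > 1e-9:                         # skip linearly dependent signals
        basis.append(g / norm)              # eq. (27): normalize

# Check orthonormality, eq. (7): Gram matrix should be the identity
G = np.array([[np.sum(a * b) * dt for b in basis] for a in basis])
```

Linearly dependent inputs simply produce a zero `g` and are skipped, so N ≤ M comes out automatically.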
Outline
• 1 Introduction
• 2 Geometric Representation of Signals
– Gram-Schmidt Orthogonalization Procedure
• 3 Conversion of the AWGN Channel into a Vector Channel

28/45
Conversion of the Continuous AWGN Channel
into a Vector Channel
• Suppose that si(t) is not an arbitrary signal, but specifically the signal at the receiver side, defined in accordance with an AWGN channel:

x(t) = s_i(t) + w(t), \quad 0 \le t \le T, \quad i = 1, 2, \ldots, M   (28)

• So the output of each correlator (Fig. 2b) can be defined as:

x_j = \int_0^T x(t) \phi_j(t) \, dt = s_{ij} + w_j, \quad j = 1, 2, \ldots, N   (29)
29/45
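A simulation sketch of the correlator output in eq. (29); the basis function, the coefficient, N0, and the discrete white-noise approximation (samples of variance N0/(2·dt)) are all assumptions of this illustration.

```python
import numpy as np

# eq. (29): correlator output x_j = s_ij + w_j for an AWGN channel.
rng = np.random.default_rng(0)
T, n = 1.0, 10_000
t = np.linspace(0.0, T, n, endpoint=False)
dt = t[1] - t[0]
phi = np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)   # unit-energy basis function
s_ij = 2.0                                          # transmitted coefficient (assumed)
N0 = 0.04                                           # noise PSD parameter (assumed)
w = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), n)      # discrete white-noise samples
x = s_ij * phi + w                                  # received waveform, eq. (28)
x_j = np.sum(x * phi) * dt                          # correlator output, eq. (29)
# x_j is s_ij plus a zero-mean Gaussian perturbation of variance N0/2
```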
s_{ij} = \int_0^T s_i(t) \phi_j(t) \, dt   (30)
– a deterministic quantity, contributed by the transmitted signal si(t)

w_j = \int_0^T w(t) \phi_j(t) \, dt   (31)
– a random quantity, the sample value of the random variable Wj, due to the noise
30/45
Now,
• Consider a new process X′(t), with sample function x′(t), which is related to the received signal x(t) as follows:

x'(t) = x(t) - \sum_{j=1}^{N} x_j \phi_j(t)   (32)

• Using (28), (29) and (30), and the expansion (5), we get:

x'(t) = s_i(t) + w(t) - \sum_{j=1}^{N} (s_{ij} + w_j) \phi_j(t) = w(t) - \sum_{j=1}^{N} w_j \phi_j(t) = w'(t)   (33)

which means that the sample function x′(t) depends only on the channel noise!

31/45
• The received signal can then be expressed as:

x(t) = \sum_{j=1}^{N} x_j \phi_j(t) + x'(t) = \sum_{j=1}^{N} x_j \phi_j(t) + w'(t)   (34)

NOTE: This is an expansion similar to the one in (5), but it is random, due to the additive noise.

32/45
Statistical Characterization
• The received signal (output of the correlator of Fig.2
b) is a random signal. To describe it we need to use
statistical methods – mean and variance.
• The assumptions are:
– X(t) denotes a random process, a sample function of which
is represented by the received signal x(t).
– Xj denotes a random variable whose sample value is
represented by the correlator output xj, j = 1, 2, …, N.
– We have assumed AWGN, so the noise is Gaussian; X(t) is
therefore a Gaussian process and, being a Gaussian RV, each Xj is
described fully by its mean value and variance.

33/45
Mean Value
• Let Wj denote a random variable, represented by its sample value wj, produced by the jth correlator in response to the Gaussian noise component w(t).
• It has zero mean (by definition of the AWGN model),
• …then the mean of Xj depends only on sij:

\mu_{X_j} = E[X_j] = E[s_{ij} + W_j] = s_{ij} + E[W_j] = s_{ij}   (35)

34/45
Variance
• Starting from the definition, we substitute using (29) and (31):

\sigma_{X_j}^2 = \mathrm{var}[X_j] = E\left[ (X_j - s_{ij})^2 \right] = E\left[ W_j^2 \right]   (36)

• Substituting (31) for Wj:

\sigma_{X_j}^2 = E\left[ \int_0^T W(t) \phi_j(t) \, dt \int_0^T W(u) \phi_j(u) \, du \right] = E\left[ \int_0^T \int_0^T \phi_j(t) \phi_j(u) W(t) W(u) \, dt \, du \right]   (37)

• Exchanging expectation and integration:

\sigma_{X_j}^2 = \int_0^T \int_0^T \phi_j(t) \phi_j(u) \, E[W(t)W(u)] \, dt \, du = \int_0^T \int_0^T \phi_j(t) \phi_j(u) \, R_W(t, u) \, dt \, du   (38)

where R_W(t, u) is the autocorrelation function of the noise process.
35/45
• Because the noise is stationary with a constant power spectral density, the autocorrelation can be expressed as:

R_W(t, u) = \frac{N_0}{2} \delta(t - u)   (39)

• After substituting for the variance, we get:

\sigma_{X_j}^2 = \frac{N_0}{2} \int_0^T \int_0^T \phi_j(t) \phi_j(u) \delta(t - u) \, dt \, du = \frac{N_0}{2} \int_0^T \phi_j^2(t) \, dt   (40)

• And since φj(t) has unit energy, for the variance we finally have:

\sigma_{X_j}^2 = \frac{N_0}{2} \quad \text{for all } j   (41)

• All correlator outputs Xj have variance equal to N0/2, the power spectral density of the noise process W(t).
36/45
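Eq. (41) can be checked by Monte Carlo; the discrete-time white-noise approximation (samples of variance N0/(2·dt)), the basis function, and the value of N0 are assumptions of this sketch.

```python
import numpy as np

# eq. (41): the projection of white Gaussian noise with PSD N0/2 onto a
# unit-energy basis function has variance N0/2.
rng = np.random.default_rng(1)
T, n, trials = 1.0, 500, 5_000
t = np.linspace(0.0, T, n, endpoint=False)
dt = t[1] - t[0]
phi = np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)   # unit-energy basis function
N0 = 0.2
W = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), (trials, n))  # noise sample paths
w_j = (W * phi).sum(axis=1) * dt                   # eq. (31) for each trial
var_est = w_j.var()                                # should approach N0/2 = 0.1
```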
Properties (without proof)
• Xj are mutually uncorrelated
• Xj are statistically independent (follows from above
because Xj are Gaussian)
• and for a memoryless channel the following equation is true:

f_{\mathbf{X}}(\mathbf{x} \mid m_i) = \prod_{j=1}^{N} f_{X_j}(x_j \mid m_i), \quad i = 1, 2, \ldots, M   (44)

37/45
• Define (construct) a vector X of N random variables X1, X2, …, XN, whose elements are independent Gaussian RVs with mean values sij (the deterministic part of the correlator output, defined by the transmitted signal) and variance equal to N0/2 (the random part, the noise added by the channel).
• Then the elements X1, X2, …, XN of X are statistically independent.
• So, we can express the conditional probability density of X, given si(t) (correspondingly, symbol mi), as a product of the conditional density functions fXj of its individual elements.
NOTE: This amounts to finding an expression for the probability of a received symbol given that a specific symbol was sent, assuming a memoryless channel.

38/45
• …that is:

f_{\mathbf{X}}(\mathbf{x} \mid m_i) = \prod_{j=1}^{N} f_{X_j}(x_j \mid m_i), \quad i = 1, 2, \ldots, M   (44)

• where the vector x and the scalar xj are sample values of the random vector X and the random variable Xj.

39/45
f_{\mathbf{X}}(\mathbf{x} \mid m_i) = \prod_{j=1}^{N} f_{X_j}(x_j \mid m_i), \quad i = 1, 2, \ldots, M   (44)

• The vector x is called the observation vector; the scalar xj is called an observable element.
• Vector x and scalar xj are sample values of the random vector X and the random variable Xj.

40/45
• Since each Xj is Gaussian with mean sij and variance N0/2:

f_{X_j}(x_j \mid m_i) = (\pi N_0)^{-1/2} \exp\left[ -\frac{1}{N_0} (x_j - s_{ij})^2 \right], \quad j = 1, 2, \ldots, N; \quad i = 1, 2, \ldots, M   (45)

• we can substitute into (44) to get (46):

f_{\mathbf{X}}(\mathbf{x} \mid m_i) = (\pi N_0)^{-N/2} \exp\left[ -\frac{1}{N_0} \sum_{j=1}^{N} (x_j - s_{ij})^2 \right], \quad i = 1, 2, \ldots, M   (46)

41/45
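A small numerical check that the vector form, eq. (46), agrees with the product of per-component densities, eqs. (44)-(45); the values of s_i, x, and N0 below are assumptions (N = 2).

```python
import numpy as np

# eq. (46) versus the product form of eqs. (44)-(45).
N0 = 0.5
s_i = np.array([1.0, -1.0])        # assumed signal vector for symbol m_i
x = np.array([1.2, -0.9])          # assumed observation vector
N = len(x)
f_vec = (np.pi * N0) ** (-N / 2) * np.exp(-np.sum((x - s_i) ** 2) / N0)   # eq. (46)
f_prod = np.prod((np.pi * N0) ** -0.5 * np.exp(-(x - s_i) ** 2 / N0))     # eqs. (44)-(45)
# The two expressions are algebraically identical, so f_vec == f_prod
```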
• If we go back to the formulation of the received signal through an AWGN channel, (34):

x(t) = \sum_{j=1}^{N} x_j \phi_j(t) + x'(t) = \sum_{j=1}^{N} x_j \phi_j(t) + w'(t)   (34)

• The vector that we have constructed fully defines the first part.
• Only the projections of the noise onto the basis functions of the signal set {si(t)}, i = 1, …, M, affect the significant statistics of the detection problem.

42/45
Finally,
• The AWGN channel is equivalent to an N-dimensional vector channel, described by the observation vector:

\mathbf{x} = \mathbf{s}_i + \mathbf{w}, \quad i = 1, 2, \ldots, M   (48)

43/45
