
2.4 Signal Space [P2.4]

You are used to the association between n-tuples of numbers and points in
space, and you even call them both vectors. It's a powerful link, because
you can visualize many operations on n-tuples as corresponding operations
in Euclidean 3-space.

In this section, we extend spatial visualization to finite-energy functions
and get a big payback. Our vectors are now arrows in space, n-tuples of
numbers, and finite-energy waveforms.
It is easiest to relate functions and tuples by considering functions to be
indexed by a continuous variable t, rather than by a discrete variable, as
in the case of tuples. However, this is not necessary; the formal link rests
on the structure below, which can even treat zero-mean random variables
as vectors (see the expanded description of vector spaces in Appendix D).






2.4.1 Vector Space

A vector space consists of a set of vectors that forms a commutative group
under a binary operation +:
$$\mathbf{x}+\mathbf{y}=\mathbf{y}+\mathbf{x}, \qquad (\mathbf{x}+\mathbf{y})+\mathbf{z}=\mathbf{x}+(\mathbf{y}+\mathbf{z}), \qquad \mathbf{x}+\mathbf{0}=\mathbf{x}, \qquad \mathbf{x}+(-\mathbf{x})=\mathbf{0}$$
with a field of scalars and scalar multiplication
$$a(b\mathbf{x})=(ab)\mathbf{x}, \qquad 1\,\mathbf{x}=\mathbf{x}, \qquad 0\,\mathbf{x}=\mathbf{0}, \qquad a(\mathbf{x}+\mathbf{y})=a\mathbf{x}+a\mathbf{y}, \qquad (a+b)\mathbf{x}=a\mathbf{x}+b\mathbf{x}$$
Usually the scalars are in $\mathbb{R}$ or $\mathbb{C}$, but $\mathbb{Z}$ or even $B=\{0,1\}$ will do, too.

o Summary: we can add different vectors together. There's a zero element
and a negative counterpart for every vector. Also, we can multiply by
scalars.

o All three of our vectors qualify.



o A set of vectors spans the space consisting of all linear combinations of
them; if they are also linearly independent, they form a basis of the space.

o Vectors are linearly independent iff no linear combination of them
equals 0 (except all zero coefficients).
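As a quick numeric aside, a minimal numpy sketch (the example tuples are hypothetical, not from the notes): linear independence of tuple vectors can be tested by the rank of the matrix that stacks them.

```python
import numpy as np

# Three 3-tuples; the third is the sum of the first two, so the set
# is linearly dependent: 1*v1 + 1*v2 + (-1)*v3 = 0.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + v2

A = np.stack([v1, v2, v3])          # rows are the vectors
print(np.linalg.matrix_rank(A))     # 2 < 3 -> linearly dependent

# v1 and v2 alone are independent; they span a 2-D subspace (a plane).
print(np.linalg.matrix_rank(np.stack([v1, v2])))   # 2 -> independent
```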



2.4.2 Inner Product Space

Adding an inner product operator to a vector space gives us length of and
distance between vectors. The inner product maps pairs of vectors to
complex (or real) numbers, $V \times V \to \mathbb{C}$, and it's denoted $(\mathbf{x},\mathbf{y})$.

o Properties:

$$(\mathbf{x},\mathbf{y}) = (\mathbf{y},\mathbf{x})^*, \qquad (a\mathbf{x}+b\mathbf{y},\,\mathbf{z}) = a(\mathbf{x},\mathbf{z}) + b(\mathbf{y},\mathbf{z}), \qquad (\mathbf{x},\mathbf{x}) \ge 0, \text{ with equality iff } \mathbf{x}=\mathbf{0}$$

o Familiar examples include the dot product (for arrows in space), the array
product (for tuples), correlation (for time functions) and covariance (for
zero-mean random variables):
$$\mathbf{y}^{\dagger}\mathbf{x}, \qquad \int_0^T x(t)\,y^*(t)\,dt, \qquad E[XY^*]$$
Vectors are orthogonal iff their inner product is zero.

The inner product lets us define the squared norm, or squared length, of a
vector. For a waveform, it's the energy:
$$\|\mathbf{x}\|^2 = (\mathbf{x},\mathbf{x}) = \int_0^T |x(t)|^2\,dt$$




The inner product also gives us the squared distance between vectors:
$$d^2(\mathbf{x},\mathbf{y}) = \|\mathbf{x}-\mathbf{y}\|^2 = (\mathbf{x}-\mathbf{y},\,\mathbf{x}-\mathbf{y}) = \int_0^T |x(t)-y(t)|^2\,dt$$
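In discrete samples these definitions become sums. A minimal numpy sketch (the waveforms and the sample grid are illustrative assumptions), approximating the integrals by Riemann sums:

```python
import numpy as np

T, M = 1.0, 1000                # observation interval and sample count
dt = T / M
t = np.arange(M) * dt           # sample grid on [0, T)

x = np.sin(2 * np.pi * t)       # two example finite-energy waveforms
y = np.cos(2 * np.pi * t)

inner = np.sum(x * np.conj(y)) * dt        # (x, y) = int x(t) y*(t) dt
norm2 = np.sum(np.abs(x) ** 2) * dt        # ||x||^2 = energy
dist2 = np.sum(np.abs(x - y) ** 2) * dt    # d^2(x, y) = ||x - y||^2

print(inner)   # ~0: sin and cos are orthogonal over a full period
print(norm2)   # ~0.5
print(dist2)   # ~1.0 = ||x||^2 + ||y||^2, by Pythagoras (orthogonal)
```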

2.4.3 Waveforms as Vectors

Waveforms with finite energy (square integrable) obey the same rules as
$\mathbb{R}^n$ or $\mathbb{C}^n$ (see the summary at the end of this section). Consequently, we
can make vector diagrams of signals and use geometric intuition.

Example

o Assume these basis waveforms:

They have unit energy and are orthogonal (inner product is zero), hence
orthonormal.







o We can form other waveforms as linear combinations, such as
$$x(t) = \tfrac{1}{2}\varphi_1(t) + 2\varphi_2(t), \qquad y(t) = \varphi_1(t) - \varphi_2(t)$$
They are in the space spanned by $\varphi_1(t)$ and $\varphi_2(t)$.

o A vector diagram for x(t) and y(t):

o We can form the sum and difference from the diagram. For example,
$x(t) + y(t)$ is represented by




o We can calculate the energy as the squared length (values are real, but
pro forma treated as complex):

Vector:
$$\|\mathbf{y}\|^2 = (\mathbf{y},\mathbf{y}) = (\varphi_1-\varphi_2,\;\varphi_1-\varphi_2) = \|\varphi_1\|^2 + \|\varphi_2\|^2 = 2$$
or, by coordinates, $\|\mathbf{y}\|^2 = 1^2 + (-1)^2 = 2$.

Waveform:
$$E_y = \int_0^T y^2(t)\,dt = \int_0^T \big(\varphi_1(t)-\varphi_2(t)\big)^2\,dt = \int_0^T \varphi_1^2(t)\,dt + \int_0^T \varphi_2^2(t)\,dt = 2$$

Similarly, the squared length of x by coordinates is
$$\|\mathbf{x}\|^2 = \left(\tfrac{1}{2}\right)^2 + 2^2 = 4\tfrac{1}{4}$$
and, for x(t),
$$E_x = \int_0^T x^2(t)\,dt = \int_0^T \left(\tfrac{1}{2}\varphi_1(t) + 2\varphi_2(t)\right)^2 dt = \tfrac{1}{4} + 4 = 4\tfrac{1}{4}$$

Verify them by direct integration.



o We also get the squared distance between x(t) and y(t). First, by direct
integration:



$$\int_0^T \big(x(t)-y(t)\big)^2\,dt = 9\tfrac{1}{4}$$

and, second, from the vector diagram:


$$d^2(\mathbf{x},\mathbf{y}) = (x_1-y_1)^2 + (x_2-y_2)^2 = \left(-\tfrac{1}{2}\right)^2 + 3^2 = \tfrac{1}{4} + 9 = 9\tfrac{1}{4}$$
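These numbers are easy to check numerically. A sketch in numpy; since the basis figure isn't reproduced here, I assume for concreteness that $\varphi_1$ and $\varphi_2$ are unit-energy rectangles on the first and second halves of $[0, T]$ (the results hold for any orthonormal pair):

```python
import numpy as np

T, M = 1.0, 1000
dt = T / M
t = np.arange(M) * dt

# Assumed orthonormal basis: half-interval rectangles with unit energy.
phi1 = np.where(t < T / 2, np.sqrt(2 / T), 0.0)
phi2 = np.where(t >= T / 2, np.sqrt(2 / T), 0.0)

x = 0.5 * phi1 + 2.0 * phi2       # x(t) = (1/2) phi1(t) + 2 phi2(t)
y = phi1 - phi2                   # y(t) = phi1(t) - phi2(t)

print(np.sum(x**2) * dt)          # E_x = 1/4 + 4 = 4.25
print(np.sum(y**2) * dt)          # E_y = 1 + 1   = 2.0
print(np.sum((x - y)**2) * dt)    # d^2 = 1/4 + 9 = 9.25
```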







2.4.4 Common Orthonormal Bases

Although any set of N linearly independent vectors can provide a basis for
an N-dimensional space, most operations become simpler if the basis
vectors are orthogonal and have unit energy (i.e., are orthonormal).

With a non-orthogonal pair of basis vectors, finding the coefficients of
some other vector requires work; but with an orthonormal pair, the
coefficients of some other vector are just the inner products with the
basis vectors.

Below is a collection of common orthonormal waveform basis sets.

The Fourier series basis is familiar (defined over $[-T/2,\,T/2]$ here):
$$\varphi_k(t) = \frac{1}{\sqrt{T}}\exp\!\left(j2\pi k\,\frac{t}{T}\right), \qquad -\frac{T}{2}\le t\le \frac{T}{2}, \qquad -L\le k\le L$$




o To represent another waveform in this space (not quite the conventional
Fourier series pair):
$$x(t) = \frac{1}{\sqrt{T}}\sum_{k=-L}^{L} x_k\, e^{j2\pi kt/T}, \qquad x_k = \frac{1}{\sqrt{T}}\int_{-T/2}^{T/2} x(t)\, e^{-j2\pi kt/T}\, dt$$


o The Fourier transforms of these basis signals are
$$\Phi_k(f) = \sqrt{T}\,\mathrm{sinc}(fT-k) \quad \text{(Why?)}$$





o So, for large L, the range of frequencies (positive and negative) is
roughly $(2L+1)/T$. Rule of thumb: there are about $D = 2WT$ real
dimensions in (positive-frequency) bandwidth W and time T.







The Walsh-Hadamard basis looks like this, generated by the recursion
$$\mathbf{H}^{(0)} = [1], \qquad \mathbf{H}^{(n)} = \begin{bmatrix} \mathbf{H}^{(n-1)} & \mathbf{H}^{(n-1)} \\ \mathbf{H}^{(n-1)} & -\mathbf{H}^{(n-1)} \end{bmatrix}$$

o They are orthogonal (prove by induction on $n$, using the recursion for $\mathbf{H}^{(n)}$).




o They span the space of staircase-like functions with $2^n$ possible
equispaced zero crossings.







The rectangular pulse translates span the same space as Walsh-
Hadamard:



o Their Fourier transforms are, for N pulses in time T,
$$\Phi_n(f) = \sqrt{\frac{T}{N}}\; e^{-j\pi fT/N}\, e^{-j2\pi fnT/N}\, \mathrm{sinc}\!\left(\frac{fT}{N}\right)$$





Translated square root raised cosine pulses (Section 2.5) form another
orthonormal set. For $\beta = 0.5$, a few of them are shown below:

[Figure: root cosine translates vs. normalized time t/T]


o Why are they orthogonal? Because the inner product of two such pulses
separated by a multiple of T is
$$\int_{-\infty}^{\infty} g(t)\,g(t-nT)\,dt = R_g(nT)$$

The autocorrelation function of this pulse is the raised cosine function
(Appendix C), which equals zero at all nonzero multiples of T:

[Figure: $R_g(t)$ for the square root raised cosine pulse, $\beta = 1,\ 0.5,\ 0.35$, vs. normalized time t/T]

Hence they are orthogonal. We also normalized them to unit energy.





2.4.5 Summary of Equivalences
Concept | $\mathbb{R}^n$ or $\mathbb{C}^n$ | $L^2$ space
Scaling | $a\mathbf{x}$ | $a\,x(t)$
Sum | $\mathbf{x}+\mathbf{y}$ | $x(t)+y(t)$
Inner product | $\mathbf{y}^{\dagger}\mathbf{x}$ | $\int x(t)\,y^*(t)\,dt$
Orthogonality | $\mathbf{y}^{\dagger}\mathbf{x} = 0$ | $\int x(t)\,y^*(t)\,dt = 0$
Norm | $\|\mathbf{x}\| = \sqrt{\mathbf{x}^{\dagger}\mathbf{x}}$ | $\int |x(t)|^2\,dt = E_x$
Distance | $\|\mathbf{x}-\mathbf{y}\|^2 = \sum_i |x_i-y_i|^2$ | $\int |x(t)-y(t)|^2\,dt$
Normalized inner product | $\dfrac{\mathbf{y}^{\dagger}\mathbf{x}}{\|\mathbf{x}\|\,\|\mathbf{y}\|}$ | $\dfrac{\int x(t)\,y^*(t)\,dt}{\sqrt{E_x E_y}}$
Triangle inequality, equality when proportional | $\|\mathbf{x}+\mathbf{y}\| \le \|\mathbf{x}\| + \|\mathbf{y}\|$ | $\sqrt{E_{x+y}} \le \sqrt{E_x} + \sqrt{E_y}$
Projection coefficients (orthonormal basis) | $x_k = \boldsymbol{\varphi}_k^{\dagger}\mathbf{x}$ | $x_k = \int x(t)\,\varphi_k^*(t)\,dt$
Schwarz inequality, equality when proportional | $|\mathbf{y}^{\dagger}\mathbf{x}|^2 \le \|\mathbf{x}\|^2\,\|\mathbf{y}\|^2$ | $\left|\int x(t)\,y^*(t)\,dt\right|^2 \le E_x E_y$
Pythagoras, for orthogonal vectors | $\|\mathbf{x}+\mathbf{y}\|^2 = \|\mathbf{x}\|^2 + \|\mathbf{y}\|^2$ | $E_{x+y} = E_x + E_y$
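A quick numeric sanity check of two of these equivalences (a sketch; the random sampled waveforms are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
M, dt = 1000, 1e-3
x = rng.standard_normal(M)        # two arbitrary sampled waveforms
y = rng.standard_normal(M)

Ex = np.sum(np.abs(x)**2) * dt
Ey = np.sum(np.abs(y)**2) * dt
ip = np.sum(x * np.conj(y)) * dt

print(abs(ip)**2 <= Ex * Ey)                        # Schwarz: True
Exy = np.sum(np.abs(x + y)**2) * dt
print(np.sqrt(Exy) <= np.sqrt(Ex) + np.sqrt(Ey))    # Triangle: True
```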


2.4.6 Projections

For any waveform $x(t)$, we want the coefficients $\{x_i\}$ with respect to a
given basis set $\{u_i(t)\}$. A practical reason is that, in a computer, it's easier
to manipulate a vector of coefficients than a waveform.

The corresponding representation and error are, respectively,
$$\hat{x}(t) = \sum_{n=1}^{N} x_n u_n(t) \qquad\text{and}\qquad e(t) = x(t) - \hat{x}(t).$$
Why could there be an error?
o If $N$ is finite, then most square-integrable functions lie at least partially
outside $\mathcal{U}$, the space spanned by the basis functions $\{u_n(t)\}$.

o If $N$ is infinite, the basis set may not be complete; i.e., the norm of the
error may not be guaranteed to converge to zero as $N$ becomes
arbitrarily large.

How to pick the best coefficients? What do we mean by best?
o A quadratic measure, like the integrated squared error $\int |e(t)|^2\,dt$?

o Or the integrated absolute value $\int |e(t)|\,dt$?

o Or the worst-case error $\max_t |e(t)|$?

o We'll use integrated squared error, since quadratic measures like energy,
power and variance are simple, useful, tractable and well understood.

So the objective is: given any $x(t)$, find $\hat{x}(t) \in \mathcal{U}$ that minimizes
$E_e = \int |e(t)|^2\,dt$, the squared norm of $e(t)$.

We can use the parallel natures of function space and Euclidean space to
establish an important property of the representation error.


o Whether $u_1$ and $u_2$ are orthogonal or not, the $\hat{x}$ that minimizes the norm
of e is the orthogonal projection of x onto $\mathcal{U}$. Therefore the error is
orthogonal to the representation, to the basis vectors and to any other
vector in $\mathcal{U}$:
$$\int e(t)\,\hat{x}^*(t)\,dt = 0 \qquad\text{and}\qquad \int e(t)\,u_n^*(t)\,dt = 0$$
Also Pythagoras: $E_x = E_{\hat{x}} + E_e$. We'll prove these properties later.


2.4.7 Coefficient Calculation: Orthonormal Basis

The easy case: find the coefficients if the basis is orthonormal.
o The error and its squared norm are
$$e(t) = x(t) - \sum_{n=1}^{N} x_n u_n(t)$$
$$E_e = \int \Big|\, x(t) - \sum_{n=1}^{N} x_n u_n(t) \Big|^2 dt = E_x - \sum_{n=1}^{N} x_n^* \int x(t)\,u_n^*(t)\,dt - \sum_{n=1}^{N} x_n \int x^*(t)\,u_n(t)\,dt + \sum_{n=1}^{N}\sum_{k=1}^{N} x_n x_k^* \underbrace{\int u_k(t)\,u_n^*(t)\,dt}_{\delta_{nk}}$$
$$= E_x - 2\,\mathrm{Re}\!\left\{ \sum_{n=1}^{N} x_n^* \int x(t)\,u_n^*(t)\,dt \right\} + \sum_{n=1}^{N} |x_n|^2$$

o Take the gradient of $E_e$ with respect to the complex coefficients $x_n$ (see
Appendix E!) and equate it to zero:
$$x_n = \int x(t)\,u_n^*(t)\,dt, \qquad n = 1, \ldots, N,$$
which is how you calculate Fourier series coefficients, for example.
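To make the correlation concrete, a numpy sketch (the waveform, the basis, and the grid are illustrative assumptions, not from the notes): project a waveform onto an orthonormal set of rectangular pulse translates.

```python
import numpy as np

T, M, N = 1.0, 1000, 8
dt = T / M
t = np.arange(M) * dt

# Assumed orthonormal basis: N unit-energy rectangular pulse translates.
U = np.zeros((M, N))
for n in range(N):
    U[(t >= n * T / N) & (t < (n + 1) * T / N), n] = np.sqrt(N / T)

x = np.sin(2 * np.pi * t)         # waveform to project

# x_n = int x(t) u_n*(t) dt: one correlation per basis function
coeffs = U.conj().T @ x * dt
x_hat = U @ coeffs                # representation in the span of U
e = x - x_hat                     # error

print(U.conj().T @ e * dt)        # ~0: error orthogonal to the basis
print(np.sum(x**2) * dt, np.sum(x_hat**2) * dt + np.sum(e**2) * dt)
# The two printed energies agree: Pythagoras, E_x = E_xhat + E_e.
```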


o This correlation operation is at the core of most modems, implemented as
the inner product of two arrays of samples. For this discussion, denote the
i-th correlation as $p_i$.


So projection looks like this:


o Check Pythagoras. What is the value of the minimized norm?
From the previous page,
$$E_e = E_x - \mathbf{x}^{\dagger}\mathbf{p} - \mathbf{p}^{\dagger}\mathbf{x} + \mathbf{x}^{\dagger}\mathbf{x} \quad\text{in general}$$
$$= E_x - \mathbf{p}^{\dagger}\mathbf{p} \quad\text{after substitution of } \mathbf{x} = \mathbf{p}$$

o Check orthogonality of the error and an arbitrary basis function:
$$\int \Big( x(t) - \sum_{n=1}^{N} x_n u_n(t) \Big)\, u_k^*(t)\,dt = p_k - \sum_{n=1}^{N} x_n \delta_{nk} = p_k - x_k = 0$$

2.4.8 Coefficient Calculation: General Basis

Slightly more difficult is the case of a non-orthonormal basis set.
o To obtain the coefficients $x_n$ (the components of $\mathbf{x}$), adapt the
expression for $E_e$ from Section 2.4.7:
$$E_e = \int \Big|\, x(t) - \sum_{n=1}^{N} x_n u_n(t) \Big|^2 dt = E_x - \sum_{n=1}^{N} x_n^* p_n - \sum_{n=1}^{N} x_n p_n^* + \sum_{n=1}^{N}\sum_{k=1}^{N} x_n x_k^* \underbrace{\int u_k^*(t)\,u_n(t)\,dt}_{G_{k,n}}$$
$$= E_x - \mathbf{x}^{\dagger}\mathbf{p} - \mathbf{p}^{\dagger}\mathbf{x} + \mathbf{x}^{\dagger}\mathbf{Gx}$$
where G is the Gram matrix, the matrix of inner products of the basis functions.

o Set the gradient to zero: $\mathbf{Gx} = \mathbf{p}$, which gives the optimum coefficients as
$$\mathbf{x} = \mathbf{G}^{-1}\mathbf{p}$$
These are the normal equations. Projection looks like this:

Note that p contains all the information needed to construct $\hat{x}(t)$.
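A numpy sketch of the general case, under illustrative assumptions (the ramp/parabola/constant basis and the exponential target are made up for the demo): build G and p, solve the normal equations, and confirm the orthogonality of the error.

```python
import numpy as np

T, M = 1.0, 1000
dt = T / M
t = np.arange(M) * dt

# Assumed non-orthonormal basis: columns of U overlap heavily.
U = np.stack([t, t**2, np.ones_like(t)], axis=1)

x = np.exp(-t)                    # waveform to approximate

G = U.conj().T @ U * dt           # Gram matrix of basis inner products
p = U.conj().T @ x * dt           # correlations p_n = (x, u_n)
coeffs = np.linalg.solve(G, p)    # normal equations: G x = p

x_hat = U @ coeffs
e = x - x_hat
print(U.conj().T @ e * dt)        # ~0: error orthogonal to the basis
print(np.sum(x**2) * dt - p.conj() @ coeffs, np.sum(e**2) * dt)
# E_x - p^H G^{-1} p equals the error energy E_e, matching the
# Pythagoras check that follows.
```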


o The energy of the representation $\hat{x}(t)$ is
$$E_{\hat{x}} = \int |\hat{x}(t)|^2\,dt = \sum_{n=1}^{N}\sum_{k=1}^{N} x_n x_k^* \int u_k^*(t)\,u_n(t)\,dt = \mathbf{x}^{\dagger}\mathbf{Gx} \quad\text{in general}$$
$$= \mathbf{p}^{\dagger}\mathbf{G}^{-1}\mathbf{G}\,\mathbf{G}^{-1}\mathbf{p} = \mathbf{p}^{\dagger}\mathbf{G}^{-1}\mathbf{p} \quad\text{for optimum coefficients}$$
Check Pythagoras:
$$E_e = E_x - \mathbf{x}^{\dagger}\mathbf{p} - \mathbf{p}^{\dagger}\mathbf{x} + \mathbf{x}^{\dagger}\mathbf{Gx} \quad\text{in general}$$
$$= E_x - \mathbf{p}^{\dagger}\mathbf{G}^{-1}\mathbf{p} - \mathbf{p}^{\dagger}\mathbf{G}^{-1}\mathbf{p} + \mathbf{p}^{\dagger}\mathbf{G}^{-1}\mathbf{p} = E_x - E_{\hat{x}} \quad\text{for optimum coefficients}$$

Check orthogonality of the error and the basis functions. Concisely, let
$$\mathbf{u}(t) = [u_1(t), u_2(t), \ldots, u_N(t)],$$
a row vector (or a matrix, if time runs downwards). Then
$$\hat{x}(t) = \sum_{n=1}^{N} x_n u_n(t) = [u_1(t), u_2(t), \ldots, u_N(t)] \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix} = \mathbf{u}(t)\,\mathbf{x}$$
and
$$\mathbf{p} = \int \begin{bmatrix} u_1^*(t) \\ \vdots \\ u_N^*(t) \end{bmatrix} x(t)\,dt = \int \mathbf{u}^{\dagger}(t)\,x(t)\,dt.$$



Similarly, the Gram matrix is
$$\mathbf{G} = \int \begin{bmatrix} |u_1(t)|^2 & u_1^*(t)\,u_2(t) & \cdots & u_1^*(t)\,u_N(t) \\ u_2^*(t)\,u_1(t) & |u_2(t)|^2 & & \vdots \\ \vdots & & \ddots & \\ u_N^*(t)\,u_1(t) & \cdots & & |u_N(t)|^2 \end{bmatrix} dt = \int \mathbf{u}^{\dagger}(t)\,\mathbf{u}(t)\,dt.$$


As for orthogonality (our goal, before we developed notation),
$$\int \mathbf{u}^{\dagger}(t)\,e(t)\,dt = \int \mathbf{u}^{\dagger}(t)\big(x(t) - \mathbf{u}(t)\,\mathbf{x}\big)\,dt = \mathbf{p} - \mathbf{Gx} \quad\text{in general}$$
$$= \mathbf{0} \quad\text{for optimum coefficients } \mathbf{x} = \mathbf{G}^{-1}\mathbf{p}.$$

This notation also lets us write the projection concisely as
$$\hat{x}(t) = \mathbf{u}(t)\,\mathbf{x} = \mathbf{u}(t)\left[\int \mathbf{u}^{\dagger}(\tau)\,\mathbf{u}(\tau)\,d\tau\right]^{-1} \underbrace{\int \mathbf{u}^{\dagger}(\tau)\,x(\tau)\,d\tau}_{\mathbf{p}}$$
