
Space-Time Coding

Space-time coding (STC) systems make use of the MIMO channel generated by a multiple-antenna transmit/receive setup:
[Figure: MIMO channel. A transmit antenna array with inputs x_1, ..., x_{N_t} is connected through the channel gains h_{ij} to a receive array with outputs y_1, ..., y_{N_r}, each corrupted by additive noise n_1, ..., n_{N_r}.]
Each antenna transmits a DSB-SC signal:
y_j(t) = \sum_{i=1}^{N_t} \sqrt{\frac{E_s}{N_t}}\, h_{ij}\, x_i(t) + n_j(t)

and

y(t) = \sqrt{\frac{E_s}{N_t}}\, H x(t) + n(t),

where y = (y_1, \ldots, y_{N_r}) and x = (x_1, \ldots, x_{N_t}) (EE5520 Lecture Notes).
Channel Gains: These are modeled as independent complex fading coefficients, i.e.,

p(h) = \frac{1}{2\pi} \exp\left(-\frac{|h|^2}{2}\right); \qquad E[h_i h_j^*] = 0, \; i \neq j,

which leads to a Rayleigh-distributed amplitude

p(a = |h|) = a \exp\left(-\frac{a^2}{2}\right); \qquad p(a = |h|^2) = \frac{1}{2} \exp\left(-\frac{a}{2}\right)
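As a quick numerical illustration, the following NumPy sketch draws one channel use of this model; the antenna counts, symbol energy, noise level, and QPSK inputs are arbitrary choices made for the example, not values from the notes.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr = 2, 2                 # illustrative antenna counts
Es, N0 = 1.0, 0.1             # assumed symbol energy and noise level

# i.i.d. complex Gaussian gains h_ij: |h| is Rayleigh, |h|^2 chi-square with 2 DOF
H = rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))

# one vector of unit-energy QPSK constellation points
x = (rng.choice([-1.0, 1.0], Nt) + 1j * rng.choice([-1.0, 1.0], Nt)) / np.sqrt(2)

# complex AWGN with variance N0 (N0/2 per real dimension)
n = np.sqrt(N0 / 2) * (rng.normal(size=Nr) + 1j * rng.normal(size=Nr))

y = np.sqrt(Es / Nt) * H @ x + n      # received samples y_1, ..., y_{Nr}
print(y)
```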
MIMO Channels: Transmit Diversity

The Alamouti scheme [1] uses two transmit antennas and one or two receive antennas:
[Figure: Alamouti transmit diversity. Antenna 0 sends the sequence [x_0, -x_1^*] and antenna 1 sends [x_1, x_0^*] over the channel gains h_0 and h_1; the receiver combines the two received samples and forms the estimates [\hat{x}_0, \hat{x}_1].]
The transmitted 2 \times 2 STC codeword is

X = \begin{pmatrix} x_0 & -x_1^* \\ x_1 & x_0^* \end{pmatrix}

where the symbols x_i can be any quadrature-modulated symbols.
The received signal is now

r = [r_0, r_1] = [h_0 x_0 + h_1 x_1,\; -h_0 x_1^* + h_1 x_0^*] + [n_0, n_1] = [h_0, h_1] X + n
The demodulator calculates

[\hat{x}_0, \hat{x}_1] = \begin{pmatrix} h_0^* & h_1 \\ h_1^* & -h_0 \end{pmatrix} \begin{pmatrix} r_0 \\ r_1^* \end{pmatrix}
= \left[(|h_0|^2 + |h_1|^2)\, x_0 + \underbrace{h_0^* n_0 + h_1 n_1^*}_{n_0'},\;\; (|h_0|^2 + |h_1|^2)\, x_1 + \underbrace{h_1^* n_0 - h_0 n_1^*}_{n_1'}\right]
If the channel paths h_0 and h_1 are uncorrelated, the noise sources n_i' have twice the variance of the original noise sources.
The system provides dual diversity due to the factor (|h_0|^2 + |h_1|^2), which exhibits a chi-square distribution of fourth order.
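A minimal NumPy sketch of the Alamouti encoding, transmission over two fading gains, and linear combining described above; the QPSK symbols, channel realization, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N0 = 0.05                                           # assumed noise variance

# two unit-energy QPSK symbols
x0, x1 = [(a + 1j * b) / np.sqrt(2) for a, b in rng.choice([-1, 1], size=(2, 2))]

# Alamouti codeword: rows = antennas, columns = the two time slots
X = np.array([[x0, -np.conj(x1)],
              [x1,  np.conj(x0)]])

h = rng.normal(size=2) + 1j * rng.normal(size=2)    # channel gains h0, h1
n = np.sqrt(N0 / 2) * (rng.normal(size=2) + 1j * rng.normal(size=2))
r = h @ X + n                                       # received samples r0, r1

# combining: x0_hat = h0* r0 + h1 r1*,  x1_hat = h1* r0 - h0 r1*
x0_hat = np.conj(h[0]) * r[0] + h[1] * np.conj(r[1])
x1_hat = np.conj(h[1]) * r[0] - h[0] * np.conj(r[1])

gain = np.abs(h[0]) ** 2 + np.abs(h[1]) ** 2        # dual-diversity amplitude factor
print(x0_hat / gain, x0)                            # estimate is close to x0
print(x1_hat / gain, x1)                            # estimate is close to x1
```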
Multiple Receive Antennas
The Alamouti scheme can be extended to multiple receive antennas. In
this case
R = \begin{pmatrix} r_0 & r_1 \\ r_2 & r_3 \end{pmatrix} = \begin{pmatrix} h_0 & h_1 \\ h_2 & h_3 \end{pmatrix} \begin{pmatrix} x_0 & -x_1^* \\ x_1 & x_0^* \end{pmatrix} + N
Multiplying the received signals R with the channel estimate H we obtain

[\hat{x}_0, \hat{x}_1] = \begin{pmatrix} h_0^* & h_1 \\ h_1^* & -h_0 \end{pmatrix}\begin{pmatrix} r_0 \\ r_1^* \end{pmatrix} + \begin{pmatrix} h_2^* & h_3 \\ h_3^* & -h_2 \end{pmatrix}\begin{pmatrix} r_2 \\ r_3^* \end{pmatrix}

= \Big[(|h_0|^2 + |h_1|^2 + |h_2|^2 + |h_3|^2)\, x_0 + \underbrace{h_0^* n_0 + h_1 n_1^* + h_2^* n_2 + h_3 n_3^*}_{n_0'},

\;\;(|h_0|^2 + |h_1|^2 + |h_2|^2 + |h_3|^2)\, x_1 + \underbrace{h_1^* n_0 - h_0 n_1^* + h_3^* n_2 - h_2 n_3^*}_{n_1'}\Big]
Note that this system provides 4-fold diversity as expressed by the amplitude factor

A = |h_0|^2 + |h_1|^2 + |h_2|^2 + |h_3|^2

This is possible due to the fact that the transmitted rows of X are orthogonal. In this sense the Alamouti scheme is the most basic representative of what are known as orthogonal designs.
Example: The following is an example of a 4 \times 4 orthogonal design:

X = \begin{pmatrix} x_1 & x_2 & x_3 & x_4 \\ -x_2 & x_1 & -x_4 & x_3 \\ -x_3 & x_4 & x_1 & -x_2 \\ -x_4 & -x_3 & x_2 & x_1 \end{pmatrix}

The 4 rows of X are orthogonal for any real x_i.
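A short check, with arbitrary real symbols, that this design satisfies X X^T = (\sum_i x_i^2) I, i.e., that its rows are orthogonal:

```python
import numpy as np

def real_od4(x1, x2, x3, x4):
    """4x4 real orthogonal design (rows = transmit antennas, columns = time slots)."""
    return np.array([[ x1,  x2,  x3,  x4],
                     [-x2,  x1, -x4,  x3],
                     [-x3,  x4,  x1, -x2],
                     [-x4, -x3,  x2,  x1]], dtype=float)

x = np.random.default_rng(2).normal(size=4)        # arbitrary real symbols
X = real_od4(*x)

# X X^T = (x1^2 + ... + x4^2) * I, i.e. all rows are mutually orthogonal
print(np.allclose(X @ X.T, np.sum(x**2) * np.eye(4)))   # True
```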
Space-Time Codes
A space-time codeword is an N_t \times N_c matrix (array) of complex signal points, which is transmitted at time t:
X(t) = [x_1(t), \ldots, x_{N_c}(t)] = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1 N_c} \\ \vdots & & & \vdots \\ x_{N_t 1} & x_{N_t 2} & \cdots & x_{N_t N_c} \end{pmatrix}
Each column of X is a space-time symbol, and the received STC word is given by

Y(t) = \sqrt{\frac{E_s}{N_t}}\, H(t) X(t) + N(t)

where N is a matrix of complex noise samples with variance N_0, that is, variance \sigma^2 = N_0/2 in each of the two dimensions.
The signal energy per space-time symbol is E_s, which means that the signal energy per constellation point is E_s/N_t, and each constellation point x_{ij} is energy-normalized to unity.
Error Calculation (Gaussian)
At any given time instant, the channel is fixed and the impairment is Gaussian noise. The conditional probability of error between two space-time codewords therefore depends only on the squared Euclidean distance between the codewords:

P(X \rightarrow X') = Q\left(\sqrt{\frac{d^2(X, X')}{2 N_0}}\right)

where d^2(X, X') is the squared Euclidean distance between these two points:

d^2(X, X') = \frac{E_s}{N_t} \sum_{n=1}^{N_c} \sum_{j=1}^{N_r} \left| \sum_{i=1}^{N_t} h_{ij} (x_{in} - x'_{in}) \right|^2
Squared Euclidean Distance
We proceed to express d^2(X, X') in terms of the linear-algebraic properties of H, X and X':

d^2(X, X') = \frac{E_s}{N_t} \sum_{j=1}^{N_r} \sum_{i=1}^{N_t} \sum_{i'=1}^{N_t} h_{ij} h^*_{i'j} \underbrace{\sum_{n=1}^{N_c} (x_{in} - x'_{in})(x_{i'n} - x'_{i'n})^*}_{K_{ii'}}
The matrix K is a kernel matrix with entries K_{ii'}. Define h_j = [h_{1j}, \ldots, h_{N_t j}]^T as the signature vector of receive antenna j. Then d^2(X, X') can be expressed as a quadratic form:

d^2(X, X') = \frac{E_s}{N_t} \sum_{j=1}^{N_r} h_j^+ K h_j
Since K is Hermitian (K = K^+), we can spectrally decompose d^2(X, X'):

d^2(X, X') = \frac{E_s}{N_t} \sum_{j=1}^{N_r} h_j^+ V D V^+ h_j = \frac{E_s}{N_t} \sum_{j=1}^{N_r} v_j^+ D v_j
The components of these equations are:
- V is a unitary matrix,
- D is a diagonal matrix with the eigenvalues of K; it can be shown that these eigenvalues are all nonnegative real,
- h is a vector of complex Gaussian gains,
- v = V^+ h is a vector of rotated complex channel gains.
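The following sketch verifies numerically that the direct squared Euclidean distance, the quadratic form in K, and the eigenvalue expression all agree; the two 2 \times 2 codewords and the channel realization are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
Nt, Nr, Es = 2, 2, 1.0

# two distinct 2x2 codewords (rows = antennas, columns = time), unit-energy entries
X  = np.array([[1 + 1j, -1 + 1j], [1 - 1j,  1 + 1j]]) / np.sqrt(2)
Xp = np.array([[1 + 1j,  1 - 1j], [-1 - 1j, 1 + 1j]]) / np.sqrt(2)

H = rng.normal(size=(Nt, Nr)) + 1j * rng.normal(size=(Nt, Nr))   # H[i, j] = h_ij

# direct form: (Es/Nt) * sum_n sum_j | sum_i h_ij (x_in - x'_in) |^2
B = X - Xp
d2_direct = Es / Nt * np.sum(np.abs(H.T @ B) ** 2)

# quadratic form in the kernel K_{ii'} = sum_n (x_in - x'_in)(x_{i'n} - x'_{i'n})^*
K = B @ B.conj().T
d2_quad = Es / Nt * sum(H[:, j].conj() @ K @ H[:, j] for j in range(Nr)).real

# eigenvalue form: K = V diag(d_i) V^+ and v_j = V^+ h_j
d_i, V = np.linalg.eigh(K)
v = V.conj().T @ H
d2_eig = Es / Nt * np.sum(d_i[:, None] * np.abs(v) ** 2)

print(d2_direct, d2_quad, d2_eig)    # the three expressions agree
```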
STC Error Probability
The squared Euclidean distance can now be expressed in terms of the eigenvalues d_i of the STC kernel matrix K:

d^2(X, X') = \frac{E_s}{N_t} \sum_{j=1}^{N_r} \sum_{i=1}^{N_t} d_i |v_{ij}|^2

and, given a fixed channel H,

P(X \rightarrow X') = Q\left(\sqrt{\frac{E_s \sum_{j=1}^{N_r} \sum_{i=1}^{N_t} d_i |v_{ij}|^2}{2 N_0 N_t}}\right)
Often one works with the Chernoff bound on the error probability, which is easier to manipulate:

P(X \rightarrow X') \leq \exp\left(-\frac{E_s}{4 N_0 N_t} \sum_{j=1}^{N_r} \sum_{i=1}^{N_t} d_i |v_{ij}|^2\right) = \prod_{j=1}^{N_r} \exp\Big(-\frac{E_s}{4 N_0 N_t} \underbrace{\sum_{i=1}^{N_t} d_i |v_{ij}|^2}_{D_j}\Big)
Fading channels: If independent fading is assumed, then
- h is a vector of independent complex Gaussian random variables,
- v = V^+ h is also a vector of independent complex Gaussian random variables, because

E[v v^+] = V^+ E[h h^+] V = I
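A small Monte Carlo check (with an arbitrary unitary V and illustrative dimensions) that the sample covariances E[h h^+] and E[v v^+] are both close to the identity:

```python
import numpy as np

rng = np.random.default_rng(4)
Nt, trials = 3, 200_000

# columns of h are i.i.d. unit-variance complex Gaussian vectors
h = (rng.normal(size=(Nt, trials)) + 1j * rng.normal(size=(Nt, trials))) / np.sqrt(2)

# an arbitrary unitary matrix V from the QR decomposition of a random complex matrix
V, _ = np.linalg.qr(rng.normal(size=(Nt, Nt)) + 1j * rng.normal(size=(Nt, Nt)))
v = V.conj().T @ h

print(np.round(h @ h.conj().T / trials, 2))   # sample E[h h^+], close to I
print(np.round(v @ v.conj().T / trials, 2))   # sample E[v v^+], also close to I
```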
Fading Error Analysis
The components v_{ij} are unit-variance complex Gaussian random variables, and their absolute value squared is therefore chi-square distributed with two degrees of freedom:

p(a = |v|^2) = \exp(-a); \qquad p(a = d\,|v|^2) = \frac{1}{d} \exp\left(-\frac{a}{d}\right)

d^2(X, X') is now a weighted sum of chi-square random variables. The PDF of a sum of independent random variables is best evaluated via the characteristic function:

\Phi(\omega) = \int_{-\infty}^{\infty} f_X(x) \exp(j\omega x)\, dx; \qquad \Phi_{d_i|v|^2}(\omega) = \frac{1}{1 - j\omega d_i}

With this we have

\Phi_{D_j}(\omega) = \prod_{i=1}^{N_t} \frac{1}{1 - j\omega d_i} \quad \Longrightarrow \quad \Phi_{d^2}(\omega) = \prod_{j=1}^{N_r} \prod_{i=1}^{N_t} \frac{1}{1 - j\omega \frac{E_s d_i}{N_t}}
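As a sanity check on these characteristic-function expressions, the sketch below compares a Monte Carlo estimate of \Phi_{|v|^2}(\omega) for a unit-variance complex Gaussian v against the closed form 1/(1 - j\omega); the frequency point and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
samples, omega = 500_000, 0.7                  # illustrative sample size and frequency

# |v|^2 for a unit-variance complex Gaussian v is exponential (chi-square, 2 DOF)
v = (rng.normal(size=samples) + 1j * rng.normal(size=samples)) / np.sqrt(2)
a = np.abs(v) ** 2

phi_mc = np.mean(np.exp(1j * omega * a))       # Monte Carlo characteristic function
phi_cf = 1 / (1 - 1j * omega)                  # closed form 1/(1 - j*omega)
print(phi_mc, phi_cf)                          # the two values agree closely
```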
Probability Density Function (PDF): The PDF is found via a partial fraction expansion and a back-transform of the individual terms:

p(x = d^2) = \sum_{i=1}^{N_t} \sum_{m=1}^{N_r} p_{im} \frac{x^{m-1}}{d_i^m (m-1)!} \exp\left(-\frac{x}{d_i}\right)

where the p_{im} are the partial-fraction coefficients.
This PDF can be integrated in closed form to

P_e = \frac{1}{2} \sum_{i=1}^{N_t} \sum_{m=1}^{N_r} p_{im} \left[ 1 - \frac{1}{\sqrt{1 + \frac{4 N_0 N_t}{E_s d_i}}} \sum_{n=0}^{m-1} \binom{2n}{n} \frac{1}{2^{2n} \left(1 + \frac{E_s d_i}{4 N_0 N_t}\right)^n} \right]
Chernoff Error Bounds

The application of the Chernoff error bounding technique avoids many of these algebraic technicalities. The Chernoff bound is calculated from the characteristic function as:

P(X \rightarrow X') \leq E\left[\exp\left(-\frac{E_s}{4 N_0 N_t} \sum_{j=1}^{N_r} \sum_{i=1}^{N_t} d_i |v_{ij}|^2\right)\right] = \prod_{j=1}^{N_r} \Phi_{D_j}(\omega)\Big|_{j\omega = -\frac{E_s}{4 N_0 N_t}} = \prod_{j=1}^{N_r} \prod_{i=1}^{N_t} \frac{1}{1 + \frac{E_s d_i}{4 N_0 N_t}}
The final form of the bound is:

P(X \rightarrow X') \leq \left[\frac{1}{\prod_{i=1}^{N_t}\left(1 + \frac{E_s d_i}{4 N_0 N_t}\right)}\right]^{N_r} \leq \left(\prod_{i=1}^{N_t} d_i\right)^{-N_r} \left(\frac{E_s}{4 N_0 N_t}\right)^{-N_r N_t}
Design Criteria: The maximum diversity of N_t N_r can be achieved if the following criteria are satisfied.
- The Rank Criterion: To achieve maximum performance the matrix K has to have full rank.
- The Determinant Criterion: The product \prod_{i=1}^{N_t} d_i = \det(K) needs to be maximized to give maximum coding advantage.
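Both criteria can be evaluated mechanically for any candidate code. The sketch below does this for a toy set of 2 \times 2 codewords; the codebook is a made-up example, not a design from the notes or from [2].

```python
import numpy as np
from itertools import combinations

def kernel(X, Xp):
    """K = (X - X')(X - X')^H for two Nt x Nc codewords."""
    B = X - Xp
    return B @ B.conj().T

# a toy 'codebook' of three 2x2 codewords (illustrative only)
codebook = [
    np.array([[ 1,  1], [ 1, -1]], dtype=complex),
    np.array([[ 1, -1], [-1, -1]], dtype=complex),
    np.array([[-1,  1], [ 1,  1]], dtype=complex),
]

min_rank, min_det = np.inf, np.inf
for X, Xp in combinations(codebook, 2):
    d = np.linalg.eigvalsh(kernel(X, Xp))       # eigenvalues d_i of K (nonnegative)
    r = int(np.sum(d > 1e-10))                  # rank criterion: diversity is r * Nr
    min_rank = min(min_rank, r)
    if r == len(d):                             # determinant criterion on full-rank pairs
        min_det = min(min_det, float(np.prod(d)))

print("minimum rank over all codeword pairs:", min_rank)
print("minimum det(K) over full-rank pairs:", min_det)
```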
Rank and Rate
Rate: Each of the signals x_{ij} is drawn from a signal constellation A with 2^b signal points. With k information symbols carried per codeword of length N_c, we define two rates, the symbol rate R_s and the bit rate R_b:

R_s = \frac{k}{N_c}; \qquad R_b = \frac{k b}{N_c}
Now consider the space-time codeword

X(t) = [x_1(t), \ldots, x_{N_c}(t)] = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1 N_c} \\ \vdots & & & \vdots \\ x_{N_t 1} & x_{N_t 2} & \cdots & x_{N_t N_c} \end{pmatrix}
Consider the rows of X as the symbols X_i in a code of length N_t. Then a rank-r matrix X corresponds to a Hamming weight-r codeword in this code. The maximum rate is therefore

R_{b,\max} = \frac{\log\left(A(N_t, d_H)\right)}{N_t}

where A(N_t, d_H) is the number of codewords in the code of length N_t with minimum Hamming distance d_H.
If we want full-rank codes we require d_H = N_t, and

R_{b,\max} = \frac{\log\left(A(N_t, N_t)\right)}{N_t} = b

The maximum symbol rate of a full-rank code is therefore

R_{s,\max} = 1
Code Construction
Following these criteria, Tarokh et al. [2] have constructed trellis codes which provide maximal diversity. Two examples are shown below:

[Trellis diagrams of two QPSK space-time trellis codes from [2]; each row lists the branch labels (antenna-1 symbol, antenna-2 symbol) leaving one state.]

4-state code:
00 01 02 03
10 11 12 13
20 21 22 23
30 31 32 33

8-state code:
00 01 02 03
10 11 12 13
20 21 22 23
30 31 32 33
22 23 20 21
32 33 30 31
02 03 00 01
12 13 10 11

These codes achieve full diversity on two-antenna systems. The transmission rate is 2 bits per symbol using two QPSK signals over two antennas, because there are four choices at each state.
Decoding: Decoding follows the trellis using the Viterbi algorithm as a sequence metric calculator.

Branch Metrics: The Viterbi algorithm works by computing metrics m(r) along the branches and accumulating them to find the global minimum, where

m(r) = \sum_{j=1}^{N_r} \left| y_{jr} - \sum_{i=1}^{N_t} h_{ij} x_{ir} \right|^2

which requires a channel estimate at the receiver. The metric is simply the squared Euclidean distance between hypothesis and received signal.
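A sketch of this branch metric for a single trellis branch; the channel estimate, hypothesized symbols, and noise level are illustrative, and the surrounding Viterbi recursion is omitted.

```python
import numpy as np

def branch_metric(y_r, H_est, x_r):
    """Squared Euclidean distance between the received vector y_r (length Nr)
    and the hypothesis sum_i h_ij * x_ir, using the channel estimate
    H_est[i, j] = h_ij (channel state information is required at the receiver)."""
    return np.sum(np.abs(y_r - H_est.T @ x_r) ** 2)

rng = np.random.default_rng(5)
H_est = rng.normal(size=(2, 1)) + 1j * rng.normal(size=(2, 1))   # 2 tx, 1 rx antenna
x_branch = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)              # hypothesized labels
y_r = H_est.T @ x_branch + 0.1 * (rng.normal(size=1) + 1j * rng.normal(size=1))

print(branch_metric(y_r, H_est, x_branch))     # small for the correct hypothesis
```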
Error Performance of STCs
[Figure: frame error probability versus SNR (dB, 4 to 14) for 4-, 8-, 16-, 32-, and 64-state space-time trellis codes. Performance results copied from [2].]
Orthogonal Designs
Example: The following is an example of a 4 \times 4 orthogonal design:

X = \begin{pmatrix} x_1 & x_2 & x_3 & x_4 \\ -x_2 & x_1 & -x_4 & x_3 \\ -x_3 & x_4 & x_1 & -x_2 \\ -x_4 & -x_3 & x_2 & x_1 \end{pmatrix}
Properties:
- Orthogonal designs provide full diversity. This condition is equivalent to requiring that X - X' is non-singular for any X \neq X'.
Proof: The determinant of X is

\det(X) = \sqrt{\det(X X^T)} = \sqrt{\det \operatorname{diag}\left(\sum_i x_i^2, \ldots, \sum_i x_i^2\right)} = \left(\sum_i x_i^2\right)^{N_t/2}

and therefore

\det(X - X') = \left(\sum_i |x_i - x'_i|^2\right)^{N_t/2} \neq 0

Therefore the maximum diversity N_t N_r is achieved with orthogonal designs.
- Real orthogonal designs exist only for n = 2, 4 and n = 8.
- There exist very simple maximum-likelihood decoding rules (see homework).
- Applications of real orthogonal designs are in single-sideband (SSB) modulated signals.

Complex Orthogonal Designs: There exist no complex orthogonal designs for n > 2.
Generalized Orthogonal Designs
The generalized designs relax the tight conditions of the orthogonal designs.

[Figure: k real symbols [x_1, \ldots, x_k] are mapped onto the transmit antennas through a generalized design, e.g. the 3 \times 4 array

X = \begin{pmatrix} x_1 & x_2 & x_3 & x_4 \\ -x_2 & x_1 & -x_4 & x_3 \\ -x_3 & x_4 & x_1 & -x_2 \end{pmatrix} ]
k real signals are packed into an array X of size N_t \times N_c, N_t \leq N_c. The symbol rate of this transmission system is

R = k/N_c \quad [\text{real symbols/use}]

A generalized orthogonal design has X X^T = \left(\sum_i x_i^2\right) I, i.e., all rows are orthogonal. The design goal is to minimize N_c for a given R and N_t. Full-rate (R = 1) real orthogonal designs exist for n \leq 8.

Theory: Using the theory of the Hurwitz-Radon problem it is known that [2]
- for any rate R \leq 1, real generalized designs exist,
- for any rate R \leq 0.5, complex generalized designs exist.
Example: A rate R = 0.5 complex design is

X = \begin{pmatrix} x_1 & x_2 & x_3 & x_4 & x_1^* & x_2^* & x_3^* & x_4^* \\ -x_2 & x_1 & -x_4 & x_3 & -x_2^* & x_1^* & -x_4^* & x_3^* \\ -x_3 & x_4 & x_1 & -x_2 & -x_3^* & x_4^* & x_1^* & -x_2^* \end{pmatrix}
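A numerical check that the rows of this design are orthogonal, i.e., X X^+ = 2(|x_1|^2 + \cdots + |x_4|^2) I; the sign pattern follows the 3 \times 4 real design shown above and the symbols are arbitrary complex values.

```python
import numpy as np

def complex_design_3tx(x):
    """Rate-1/2 complex generalized design for Nt = 3: X = [A, A*],
    with A following the 3x4 real-design sign pattern shown earlier."""
    x1, x2, x3, x4 = x
    A = np.array([[ x1,  x2,  x3,  x4],
                  [-x2,  x1, -x4,  x3],
                  [-x3,  x4,  x1, -x2]])
    return np.hstack([A, A.conj()])

rng = np.random.default_rng(6)
x = rng.normal(size=4) + 1j * rng.normal(size=4)   # four complex information symbols
X = complex_design_3tx(x)                          # 3 x 8: four symbols in eight uses

# rows are orthogonal: X X^+ = 2 * (|x1|^2 + ... + |x4|^2) * I
print(np.allclose(X @ X.conj().T, 2 * np.sum(np.abs(x) ** 2) * np.eye(3)))
```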
References
[1] S. M. Alamouti, "A simple transmit diversity technique for wireless communications," IEEE J. Select. Areas Commun., vol. 16, no. 8, October 1998.

[2] V. Tarokh, N. Seshadri, and A. R. Calderbank, "Space-time coding for high data rate wireless communication: Performance criterion and code construction," IEEE Trans. Inform. Theory, pp. 744-765, March 1998.

[3] V. Tarokh, H. Jafarkhani, and A. R. Calderbank, "Space-time block codes from orthogonal designs," IEEE Trans. Inform. Theory, vol. 45, no. 5, July 1999.

[4] C. Schlegel, "Error probability calculation for multibeam Rayleigh channels," IEEE Trans. Commun., vol. 44, no. 3, March 1996.
