IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, NO. 8, AUGUST 2013
I. INTRODUCTION
Wireless communication systems with multiple transmit and receive antennas offer significant advantages
in terms of increased data rates and reliability over
single-antenna systems [1], [2]. In order to benefit from
the advantages of multiple-input multiple-output (MIMO) systems, it is essential for the transmitters and receivers to have
an accurate estimate of the channel state information (CSI).
However, this is a challenging task in practice, especially for
systems with a large number of transmit and receive antennas.
Typical channel estimation techniques include training-symbol
based methods [3] [4] and blind channel estimation strategies
[5] [6]. However, regardless of the estimation approach, the
CSI estimate is prone to measurement, quantization and other
sources of error.
The main objective in MIMO receiver design is to obtain
low symbol error rates (SER) with acceptable computational
complexity. Receiver design under the assumption of perfect
CSI has been an area of research for decades; some of the
well known low-complexity receivers assuming perfect CSI
are linear receivers such as zero-forcing and minimum mean-squared error receivers [7], and nonlinear receivers such as
decision feedback equalizers [8] and sphere decoders [9].
Manuscript received July 12, 2012; revised November 12, 2012 and March
5, 2013; accepted May 24, 2013. The associate editor coordinating the review
of this paper and approving it for publication was J. R. Luo.
B. S. Thian is with the Institute for Infocomm Research, Singapore (e-mail:
thianbs@i2r.a-star.edu.sg).
A. Goldsmith is with the Department of Electrical Engineering, Stanford
University.
This work is supported in part by ONR under grant N00014-09-072-P00006
and by the DARPA ITMANET program under grant 110574-1-TFIND.
Digital Object Identifier 10.1109/TWC.2013.071913.121019
A. Motivation
Practical MIMO systems must consider the design of receivers without the assumption of perfect CSI. Works that
consider receiver design under imperfect CSI include the
joint channel estimation and signal detection approach [10],
transceiver designs based on the sum minimum mean-squared
error (SMMSE) criterion [11], [12], and the maximin
criterion [13], [14].
However, most of these studies have focused on polynomial-time suboptimum decoders, with few considering the use of
the optimum decoding scheme. In addition, these studies also
consider very simple error models, such as only having upper
bounds on the magnitude of the errors. Furthermore, there
have been very few analytical results on the SER performance
of the decoding schemes in the current literature.
B. Contributions
In this paper, we consider the effects of channel estimation
errors on the SER performance of MIMO systems. The
following are our main contributions:
1) Using a correlated multivariate Gaussian channel estimation error model, we derive the optimum decoding
metric (which is the maximum likelihood (ML) metric)
by utilizing the known second-order statistics of the
errors. This is important because in practice, channel
estimation errors are correlated due to channel correlation
as well as estimation methods which induce correlation
in the errors [4] [16] [17].
2) Using the optimum decoding metric for detection requires
an exhaustive search through all possible transmitted
signal vectors, and this is not implementable in a practical
setting. To overcome this problem, we propose an alternative decoding metric which approximates the optimum
rule. This approximated metric still accounts for error
correlation in that the main block diagonals of the error
covariance matrix are used.
3) Using the approximated metric, we formulate a tree
search algorithm that has substantially lower complexity
than the brute-force search of ML detection. We term this
algorithm the robust sphere decoder.
4) We derive analytical upper bounds on the pairwise codeword error rate (and symbol error rate) performance of
the optimum ML metric as well as the robust sphere
decoder.
C. Organization
The remainder of this paper is organized as follows. We
present our system model in Section II. We present the optimal
ML decoder and the robust sphere decoder in Section III.
Analytical results on the upper bounds of pairwise error
rates of the proposed decoders are presented in Section IV.
The complex system model is given by

\tilde{y} = \tilde{H}\tilde{x} + \tilde{n},   (1)

where \tilde{x} = [\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_{\tilde{N}}]^T denotes the complex transmitted
signal vector of dimension \tilde{N} \times 1 and \tilde{x}_i is drawn from a
set \tilde{\mathcal{X}} of finite cardinality; \tilde{y} = [\tilde{y}_1, \tilde{y}_2, \ldots, \tilde{y}_{\tilde{P}}]^T denotes
the noisy received signal vector of dimension \tilde{P} \times 1; \tilde{H} is
the channel matrix of dimension \tilde{P} \times \tilde{N} (where \tilde{P} \geq \tilde{N})
with independent elements \tilde{H}_{i,j} \sim \mathcal{CN}(0, 1) representing
uncorrelated Rayleigh fading [19]; and \tilde{n} = [\tilde{n}_1, \tilde{n}_2, \ldots, \tilde{n}_{\tilde{P}}]^T
represents a vector of independent complex Gaussian noise
with \tilde{n} \sim \mathcal{CN}(0, 2\sigma_n^2 I_{\tilde{P}}).

The complex system model in (1) can be represented equivalently in the real domain by the following transformation:

[ Re(\tilde{y}) ; Im(\tilde{y}) ]
= [ Re(\tilde{H})  -Im(\tilde{H}) ; Im(\tilde{H})  Re(\tilde{H}) ] [ Re(\tilde{x}) ; Im(\tilde{x}) ] + [ Re(\tilde{n}) ; Im(\tilde{n}) ],   (2)

where Re(\cdot) and Im(\cdot) denote the real and imaginary components, respectively. Letting y, H, x and n denote the first,
second, third and fourth terms of (2), respectively, we obtain
the equivalent real system model y = Hx + n. In this
representation, the dimensionality of the system vectors is
doubled, i.e., P = 2\tilde{P} and N = 2\tilde{N}. For the rest of the paper,
we work with the real-domain representation; the analysis is
identical in the complex domain.

When the transmitted symbols are uniformly distributed,
the optimum decoder (in the sense of minimizing SER) is the
maximum likelihood (ML) decoder. It is given by

\hat{x}_{ML} = \arg\max_{x \in \mathcal{X}^N} p(y | H, x)   (3)
            = \arg\min_{x \in \mathcal{X}^N} \| y - Hx \|^2.   (4)

The receiver, however, has access only to an estimate \hat{H} of
the channel, related to the true channel through the estimation-error matrix E:

\hat{H} = H + E.   (5)

The second-order statistics of the errors are captured by the
error covariance matrix

R_E = E[ vec(E) vec(E)^T ]
    = [ E[E_{1,1}^2]        E[E_{1,1}E_{1,2}]   \cdots  E[E_{1,1}E_{P,N}]
        E[E_{1,2}E_{1,1}]   E[E_{1,2}^2]        \cdots  E[E_{1,2}E_{P,N}]
        \vdots                                  \ddots  \vdots
        E[E_{P,N}E_{1,1}]   E[E_{P,N}E_{1,2}]   \cdots  E[E_{P,N}^2] ].   (6)
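The complex-to-real stacking in (2) can be checked numerically. The following is a minimal sketch (all sizes and values are illustrative assumptions, and the helper name to_real_model is ours):

```python
import numpy as np

# Minimal numerical sketch of the complex-to-real transformation in (2).
def to_real_model(H_c, x_c, n_c):
    """Stack real/imaginary parts so y = Hx + n holds in the real domain."""
    H = np.block([[H_c.real, -H_c.imag],
                  [H_c.imag,  H_c.real]])
    x = np.concatenate([x_c.real, x_c.imag])
    n = np.concatenate([n_c.real, n_c.imag])
    return H, x, n

rng = np.random.default_rng(0)
H_c = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
x_c = np.array([1 + 1j, -1 - 1j])
n_c = 0.1 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

y_c = H_c @ x_c + n_c                  # complex model (1)
H, x, n = to_real_model(H_c, x_c, n_c)
y = H @ x + n                          # equivalent real model

# The real model reproduces Re(y) stacked on Im(y).
assert np.allclose(y, np.concatenate([y_c.real, y_c.imag]))
```

Note the dimension doubling: the 2x2 complex channel becomes a 4x4 real channel, as stated after (2).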
The average signal-to-interference-plus-noise ratio (SINR) is
defined as

SINR = \frac{1}{P} \sum_{i=1}^{P} SINR[i].   (8)

During training, the receiver observes the pilot output

y = P h + n,   (9)

where P denotes the known pilot matrix. The linear MMSE
estimate of the channel vector is

\hat{h} = K y,   (10)

where

K = \Sigma_h P^T ( P \Sigma_h P^T + \Sigma_n )^{-1}.   (11)

Defining the estimation error as e = \hat{h} - h, it can be shown
that e is a zero-mean Gaussian vector with the following
covariance matrix [15]:

\Sigma_e = E[ (\hat{h} - h)(\hat{h} - h)^T ]
         = \Sigma_h - \Sigma_h P^T ( P \Sigma_h P^T + \Sigma_n )^{-1} P \Sigma_h.   (12)

Given the channel estimate, the received vector is conditionally Gaussian,

y | \hat{H}, x \sim \mathcal{N}( \hat{H} x, \Sigma ),   (14)

where

\Sigma_{i,j} = x^T R_E[(i-1)N+1:iN, (j-1)N+1:jN] x + \sigma_n^2 \delta_{i,j},   (15)

and the ML estimate of the transmitted signal vector is

\hat{x}_{ML} = \arg\min_{x \in \mathcal{X}^N} [ \log\det\Sigma + ( y - \hat{H} x )^T \Sigma^{-1} ( y - \hat{H} x ) ].   (16)
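The LMMSE estimator of (10)-(12) can be sketched numerically as follows; the pilot matrix and the covariances here are arbitrary illustrative assumptions:

```python
import numpy as np

# Numerical sketch of the LMMSE channel estimator in (10)-(12).
rng = np.random.default_rng(1)
d = 4                                    # dimension of h (assumed)
P = rng.standard_normal((6, d))          # pilot/observation matrix (assumed)
Sigma_h = np.eye(d)                      # prior channel covariance
Sigma_n = 0.1 * np.eye(6)                # noise covariance

# K = Sigma_h P^T (P Sigma_h P^T + Sigma_n)^{-1}, as in (11)
G = np.linalg.inv(P @ Sigma_h @ P.T + Sigma_n)
K = Sigma_h @ P.T @ G

# Error covariance, as in (12)
Sigma_e = Sigma_h - Sigma_h @ P.T @ G @ P @ Sigma_h

# Sanity checks: Sigma_e is symmetric PSD, and estimation reduces uncertainty.
assert np.allclose(Sigma_e, Sigma_e.T)
assert np.min(np.linalg.eigvalsh(Sigma_e)) > -1e-10
assert np.trace(Sigma_e) < np.trace(Sigma_h)
```

The trace inequality reflects the standard fact that conditioning on the pilot observation cannot increase the posterior uncertainty.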
We can simplify the ML decoding metric in the special case
where the channel estimation errors are independent and identically
distributed (i.i.d.), as given in the following proposition.

Proposition 1. When the channel estimation errors are i.i.d.
with zero mean and variance \sigma_u^2, the off-diagonal elements
of R_E are zero and the diagonal elements are \sigma_u^2. Then, the
maximum-likelihood estimate of the transmitted vector x is

\hat{x}_{ML} = \arg\min_{x \in \mathcal{X}^N} \sum_{m=1}^{N} \log( \sigma_n^2 + \sigma_u^2 \sum_{k=1}^{N} |x_k|^2 )
+ \sum_{m=1}^{N} \frac{ |(y - \hat{H} x)_m|^2 }{ \sigma_n^2 + \sigma_u^2 \sum_{k=1}^{N} |x_k|^2 }.
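The metric of Proposition 1 can be evaluated by exhaustive search over a small real alphabet. A minimal sketch follows; the noise and error variances and all sizes are illustrative assumptions:

```python
import numpy as np
from itertools import product

# Sketch of evaluating the Proposition 1 metric by exhaustive search.
def ml_metric_iid(y, H_hat, x, sigma_n2, sigma_u2):
    s = sigma_n2 + sigma_u2 * np.sum(x ** 2)   # common variance term
    r = y - H_hat @ x
    return len(y) * np.log(s) + np.sum(r ** 2) / s

rng = np.random.default_rng(2)
N, sigma_n2, sigma_u2 = 2, 0.1, 0.05
alphabet = (-3.0, -1.0, 1.0, 3.0)
H = rng.standard_normal((N, N))
x_true = np.array([1.0, -3.0])
y = H @ x_true + np.sqrt(sigma_n2) * rng.standard_normal(N)
H_hat = H + np.sqrt(sigma_u2) * rng.standard_normal((N, N))  # imperfect CSI

best = min(product(alphabet, repeat=N),
           key=lambda c: ml_metric_iid(y, H_hat, np.array(c), sigma_n2, sigma_u2))

# By construction, the minimizer scores no worse than the true vector.
assert ml_metric_iid(y, H_hat, np.array(best), sigma_n2, sigma_u2) \
    <= ml_metric_iid(y, H_hat, x_true, sigma_n2, sigma_u2)
```

The exhaustive search over |X|^N candidates is exactly the cost that the tree search of the next section is designed to avoid.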
Proof: With i.i.d. errors, \Sigma is diagonal with entries

\Sigma_{i,i} = \sigma_n^2 + \sigma_u^2 \sum_{k=1}^{N} |x_k|^2.

By substituting \Sigma and \Sigma^{-1} into (16), we obtain Proposition 1.

To exploit a triangular structure in a tree search-based
decoding strategy, we pre-multiply y by Q^T to obtain

\bar{y} = Q^T y = Q^T \hat{H} x - Q^T E x + Q^T n,   (17)

where Q is the orthonormal matrix from the QR decomposition

\hat{H} = Q R   (18)

of the estimated channel matrix \hat{H}, and R is upper triangular.
Since Q^T \hat{H} = R, (17) can be written as

\bar{y} = R x - \bar{E} x + \bar{n},   (19)

where \bar{E} = Q^T E and \bar{n} = Q^T n. The covariance matrix of
the transformed errors is

R_{\bar{E}} = E[ vec(Q^T E) vec(Q^T E)^T ].   (20)
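The QR pre-processing of (17)-(19) can be verified numerically. A minimal sketch, with all sizes and values as illustrative assumptions:

```python
import numpy as np

# Sketch of the QR pre-processing: Hhat = Q R with R upper triangular,
# so y_bar = Q^T y = R x - Q^T E x + Q^T n.
rng = np.random.default_rng(3)
N = 4
H = rng.standard_normal((N, N))
E = 0.1 * rng.standard_normal((N, N))
H_hat = H + E                      # imperfect channel estimate
x = rng.choice([-1.0, 1.0], size=N)
n = 0.01 * rng.standard_normal(N)
y = H @ x + n

Q, R = np.linalg.qr(H_hat)         # Hhat = Q R
y_bar = Q.T @ y

# y_bar = R x - Q^T E x + Q^T n, since H = Hhat - E
assert np.allclose(y_bar, R @ x - Q.T @ E @ x + Q.T @ n)
# R is upper triangular, which is what enables the tree search.
assert np.allclose(R, np.triu(R))
```

Because Q is orthonormal, the transformed noise Q^T n keeps the same covariance, so the statistics assumed by the decoder are unchanged.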
If R_{\bar{E}} is a dense matrix (each entry of R_{\bar{E}} is nonzero), the
transformed model retains the full complexity of (16): the exact
ML decoder becomes

\hat{x}_{ML} = \arg\min_{x \in \mathcal{X}^N} [ \log\det\bar{\Sigma} + ( \bar{y} - R x )^T \bar{\Sigma}^{-1} ( \bar{y} - R x ) ],   (21)

since

\bar{y} | R, x \sim \mathcal{N}( R x, \bar{\Sigma} ),   (22)

where

\bar{\Sigma}_{i,j} = { x^T R_{\bar{E}}[(i-1)N+1:iN, (j-1)N+1:jN] x + \sigma_n^2,  i = j;
                       x^T R_{\bar{E}}[(i-1)N+1:iN, (j-1)N+1:jN] x,  otherwise. }   (23)

To obtain a metric of manageable complexity, we retain only
the main (block) diagonal of \bar{\Sigma}:

\hat{\Sigma}_{i,j} = { \bar{\Sigma}_{i,i},  i = j;  0,  otherwise. }   (24)

A reduced-complexity decoding algorithm can be implemented based on the simplified matrix \hat{\Sigma}, as will be shown
in the next section. We define an approximate ML decoder
(which we call the robust near-ML decoding metric) as

\hat{x}_{RNML} = \arg\min_{x \in \mathcal{X}^N} \sum_{i=1}^{N} [ \log \hat{\Sigma}_{i,i} + \frac{ |(\bar{y} - R x)_i|^2 }{ \hat{\Sigma}_{i,i} } ].   (25)
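The diagonal entries of the simplified matrix in (24), and the metric (25) built from them, can be sketched as follows (the covariance, the triangular matrix, and all values are illustrative assumptions):

```python
import numpy as np

# Sketch of the robust near-ML metric (25): only the N main diagonal
# blocks of the error covariance R_Ebar are used.
def sigma_hat_diag(R_Ebar, x, sigma_n2):
    N = len(x)
    d = np.empty(N)
    for i in range(N):
        blk = R_Ebar[i * N:(i + 1) * N, i * N:(i + 1) * N]
        d[i] = x @ blk @ x + sigma_n2    # diagonal entries of (24)
    return d

def robust_near_ml_metric(y_bar, R, x, R_Ebar, sigma_n2):
    d = sigma_hat_diag(R_Ebar, x, sigma_n2)
    r = y_bar - R @ x
    return np.sum(np.log(d) + r ** 2 / d)

rng = np.random.default_rng(4)
N, sigma_n2 = 2, 0.1
A = rng.standard_normal((N * N, N * N))
R_Ebar = A @ A.T / (N * N)               # a valid (PSD) covariance
R = np.triu(rng.standard_normal((N, N)))
x = np.array([1.0, -1.0])
y_bar = R @ x + 0.05 * rng.standard_normal(N)

m = robust_near_ml_metric(y_bar, R, x, R_Ebar, sigma_n2)
assert np.isfinite(m)
```

Since R_Ebar is positive semidefinite and sigma_n2 > 0, every diagonal entry is strictly positive, so the logarithms and divisions in (25) are always well defined.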
In the special case of i.i.d. channel estimation errors (Proposition 1), the approximate ML decoder (which coincides with
the exact ML decoder in this case, and which we call the ML
decoding metric with uncorrelated errors) is given by

\hat{x}_{GUML} = \arg\min_{x \in \mathcal{X}^N} \sum_{i=1}^{N} \log( \sigma_n^2 + \sigma_u^2 \sum_{j=1}^{N} |x_j|^2 )
+ \sum_{i=1}^{N} \frac{ |(\bar{y} - R x)_i|^2 }{ \sigma_n^2 + \sigma_u^2 \sum_{j=1}^{N} |x_j|^2 }.   (26)
Define V_i(x_1^{j-1} | x_j^N = \bar{x}_j^N) to be the lower bound of
\hat{\Sigma}_{i,i}, given that the decided symbols x_j^N take the value \bar{x}_j^N.
It is obtained by solving

minimize    x^T A^{(i)} x + \sigma_n^2   (27)
subject to  x_j^N = \bar{x}_j^N,   (28)

where A^{(i)} = R_{\bar{E}}[(i-1)N+1:iN, (i-1)N+1:iN]. In (28),
b^{(i)} = A^{(i)}_{1:j-1, j:N} \bar{x}_j^N. The solution to (28) is

x_1^{j-1} = -( A^{(i)}_{1:j-1, 1:j-1} )^{-1} b^{(i)}.   (29)
To find a lower bound for the second term of (25), it
is required that we find an upper bound for \hat{\Sigma}_{i,i}. Define
U_i(x_1^{j-1} | x_j^N = \bar{x}_j^N) to be the upper bound of \hat{\Sigma}_{i,i}, given that
x_j^N takes the value \bar{x}_j^N. An upper bound is given by

U_i(x_1^{j-1} | x_j^N = \bar{x}_j^N)
= \sum_{m=j}^{N} \sum_{n=j}^{N} A^{(i)}_{m,n} \bar{x}_m \bar{x}_n
+ \gamma^2 \sum_{m=1}^{j-1} \sum_{n=1}^{j-1} |A^{(i)}_{m,n}|
+ 2\gamma \sum_{m=j}^{N} \sum_{n=1}^{j-1} |A^{(i)}_{m,n} \bar{x}_m|
+ \sigma_n^2,   (30)

where \gamma denotes the largest symbol magnitude in \mathcal{X}.
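The two bounds can be illustrated numerically: the constrained minimum over the undecided symbols has the closed form of (29), and an absolute-value bound in the spirit of (30) dominates the quadratic whenever the undecided symbols are bounded by gamma. All sizes and values below are illustrative assumptions:

```python
import numpy as np

# Numerical sketch of the bounds behind (29)-(30).
rng = np.random.default_rng(5)
N, j, gamma = 5, 3, 3.0                  # symbols x_j..x_N are decided
M = rng.standard_normal((N, N))
A = M @ M.T + np.eye(N)                  # positive definite block (assumed)
x_tail = rng.choice([-3.0, -1.0, 1.0, 3.0], size=N - j + 1)

A11 = A[:j - 1, :j - 1]
b = A[:j - 1, j - 1:] @ x_tail
x_head_opt = -np.linalg.solve(A11, b)    # minimizer, as in (29)

def quad(head):
    x = np.concatenate([head, x_tail])
    return x @ A @ x

v = quad(x_head_opt)
for _ in range(100):                     # random perturbations never do better
    assert quad(x_head_opt + 0.1 * rng.standard_normal(j - 1)) >= v - 1e-9

# Upper bound in the spirit of (30): valid whenever |head entries| <= gamma.
ub = (x_tail @ A[j - 1:, j - 1:] @ x_tail
      + gamma ** 2 * np.abs(A[:j - 1, :j - 1]).sum()
      + 2 * gamma * np.abs(A[:j - 1, j - 1:] @ x_tail).sum())
for _ in range(100):
    head = rng.uniform(-gamma, gamma, size=j - 1)
    assert quad(head) <= ub + 1e-9
```

The minimizer follows from setting the gradient of the partitioned quadratic to zero, which is why solving one linear system per node suffices.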
The nodal metric function (NMF) at a node of the search tree
lower-bounds the decoding metric of all leaf nodes below it; the
expression \Phi(x_1^{j-1} | x_j^N = \bar{x}_j^N) indicates the lower bound, given
that x_j^N takes the value \bar{x}_j^N (i.e., the signal components x_j^N have
been detected as \bar{x}_j^N). Combining (27) and (30), we can now
define the NMF at level j of the tree to be

\Phi(x_1^{j-1} | x_j^N = \bar{x}_j^N)
= \sum_{i=1}^{N} \log V_i(x_1^{j-1} | x_j^N = \bar{x}_j^N)
+ \sum_{i=j}^{N} \frac{ (\bar{y} - R x)_i^2 }{ U_i(x_1^{j-1} | x_j^N = \bar{x}_j^N) }.   (31)

In addition, at each of the leaf nodes, \Phi(\cdot | x = \bar{x}) is the exact
decoding metric (25).
In the special case of i.i.d. channel estimation errors, the
NMF at level j of the tree can be defined as

\Phi(x_1^{j-1} | x_j^N = \bar{x}_j^N)
= \sum_{i=1}^{N} \log( \sigma_n^2 + \sigma_u^2 \sum_{m=j}^{N} |\bar{x}_m|^2 + \sigma_u^2 \gamma^2 (j-1) )
+ \sum_{i=j}^{N} \frac{ |(\bar{y} - R x)_i|^2 }{ \sigma_n^2 + \sigma_u^2 \sum_{m=j}^{N} |\bar{x}_m|^2 + \sigma_u^2 \gamma^2 (j-1) }.   (32)
RE =
2.55
0.79
2.83 0.83
1.17 0.54 0.83 3.32
(34)
1
|xN = 3 = 6.21,
xN
1N 1
(35)
x1 |xN = 1 = 204,
N 1
x1 |xN = 1 = 823,
1
xN
|xN = 3 = 910.
1
Algorithm 1: Robust sphere decoder
Input: \sigma_n^2, R_{\bar{E}}, R, N, P
Initialization: x^* \leftarrow 0, C_min(x^*) \leftarrow \infty, j \leftarrow N
1:  for each node at level N do
2:    Compute \Phi(x_1^{N-1} | x_N = \bar{x}_N)
3:  end for
4:  Sort the \Phi(x_1^{N-1} | x_N = \bar{x}_N) in ascending order and put
    them (and the associated j and \bar{x}_N) into a stack
5:  while stack is not empty do
6:    Pop the top element (j, \bar{x}_j^N, \Phi(x_1^{j-1} | x_j^N = \bar{x}_j^N))
7:    if \Phi(x_1^{j-1} | x_j^N = \bar{x}_j^N) \leq C_min(x^*) then
8:      if j = 1 then
9:        x^* \leftarrow \bar{x}_1^N
10:       C_min(x^*) \leftarrow \Phi(\cdot | x = \bar{x})
11:     else
12:       j \leftarrow j - 1
13:       for each \bar{x}_j \in \mathcal{X} do
14:         Compute \Phi(x_1^{j-1} | x_{j+1}^N = \bar{x}_{j+1}^N, x_j = \bar{x}_j)
15:       end for
16:       Sort the new \Phi(x_1^{j-1} | x_j^N = \bar{x}_j^N) in ascending order
          and put them (and the associated j and \bar{x}_j) into the stack
17:     end if
18:   end if
19: end while
Output: x^*, C_min(x^*)
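A simplified executable sketch of this best-first tree search is given below. To keep it short it uses the plain metric ||y_bar - Rx||^2 with an upper-triangular R, so the node bound is the exact partial residual energy; the robust decoder replaces this bound with the NMF of (31). All sizes and values are illustrative assumptions:

```python
import heapq
import numpy as np
from itertools import product

def best_first_search(y_bar, R, alphabet):
    N = len(y_bar)

    def partial_cost(tail):
        # For upper-triangular R, rows i >= j depend only on x_j..x_N,
        # so this is a valid lower bound for every completion.
        j = N - len(tail)
        x = np.concatenate([np.zeros(j), tail])
        r = y_bar[j:] - (R @ x)[j:]
        return float(r @ r)

    heap = [(0.0, ())]                   # (lower bound, decided tail)
    best, best_cost = None, np.inf
    while heap:
        cost, tail = heapq.heappop(heap)
        if cost >= best_cost:
            continue                     # prune: bound exceeds incumbent
        if len(tail) == N:
            best, best_cost = tail, cost
            continue
        for s in alphabet:
            child = (s,) + tail
            c = partial_cost(np.array(child))
            if c < best_cost:
                heapq.heappush(heap, (c, child))
    return np.array(best), best_cost

rng = np.random.default_rng(6)
N, alphabet = 3, (-3.0, -1.0, 1.0, 3.0)
R = np.triu(rng.standard_normal((N, N))) + 2 * np.eye(N)
x_true = rng.choice(alphabet, size=N)
y_bar = R @ x_true + 0.1 * rng.standard_normal(N)

x_hat, c = best_first_search(y_bar, R, alphabet)
c_brute = min(np.sum((y_bar - R @ np.array(x)) ** 2)
              for x in product(alphabet, repeat=N))
assert np.isclose(c, c_brute)
```

Because the node bound never exceeds the cost of any leaf below it, pruning against the incumbent C_min cannot discard the optimum, so the search returns the same answer as exhaustive enumeration while visiting far fewer nodes.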
The four nodes are sorted in ascending order and put into a
stack, and the node with the smallest NMF is removed from
the stack for further computation/search. Hence, the nodes
with x_N = -1, x_N = 1 and x_N = 3 remain in the stack
and the node with x_N = -3 will be searched next. Further
computation gives us

\Phi(\cdot | x_N = -3, x_{N-1} = -3) = 449,
\Phi(\cdot | x_N = -3, x_{N-1} = -1) = 6.26,
\Phi(\cdot | x_N = -3, x_{N-1} = 1) = 467,
\Phi(\cdot | x_N = -3, x_{N-1} = 3) = 694.   (36)

Similar to the previous step, the four nodes are sorted and
put into the same stack, and the node with the smallest NMF
(the node with x_{N-1} = -1) will be searched next. Since the
level is now at j = 1 and \Phi(\cdot | x_N = -3, x_{N-1} = -1) <
C_min(x^*), we update C_min(x^*) and x^* accordingly to
C_min(x^*) = 6.26 and x^* = [-1, -3]^T.

In the following steps, each remaining node is removed
from the stack and investigated sequentially. Since all remaining nodes (in the stack) have NMF values greater
than C_min(x^*), they are discarded. The stack is now empty;
we have completed the search and the decoded signal vector
is declared to be x^* = [-1, -3]^T. In this example, the number
of nodes searched is 8, which is less than 4^2 = 16. However,
more savings can be achieved if a higher-order modulation is used.
Fig. 1. Example: In step 1, the nodal metric function (NMF) values of the
four children of the root node are evaluated. They are then sorted and
placed into a stack in descending order; the node with the smallest NMF value
is placed at the bottom of the stack. At this stage, C_min = +\infty.
TABLE I
COMPUTATIONAL COMPLEXITY OF THE k-TH STEP OF QR DECOMPOSITION

No. of real additions: (N-k+1)^2 + (N-k+1)(N-k) + 2
TABLE II
COMPUTATIONAL COMPLEXITY OF EVALUATING \Phi(x_1^{j-1} | x_j^N = \bar{x}_j^N)
FOR I.I.D. CHANNEL ESTIMATION ERRORS

Term                          | No. of real additions | No. of real multiplications
1st term of \Phi              | N-j+5                 | N-j+4
Numerator of 2nd term of \Phi | 2(N-j+1)              | N-j+2
Denominator of 2nd term of \Phi | N-j+5               | N-j+5
Total                         | 4N-4j+12              | 2N-3j+11

TABLE III
COMPUTATIONAL COMPLEXITY OF EVALUATING (26)

Term                                                  | No. of real additions | No. of real multiplications
\|\bar{y} - R x\|^2                                   | …                     | …
\sigma_n^2 + \sigma_u^2 \sum_{i=1}^{N} |x_i|^2        | N+2                   | N
N \log( \sigma_n^2 + \sigma_u^2 \sum_{i=1}^{N} |x_i|^2 ) | N+2                | N+1
Total                                                 | N^2+4N+4              | N^2+3N+1

Fig. 2. Example: In step 2, the children of the node with the smallest
NMF value are searched and evaluated. Since the level is now at j = 1, the
values of the nodes are compared to C_min, which is updated accordingly to
6.26. Next, the remaining nodes in the stack are compared to C_min. It is
found that their NMF values are greater than C_min, so they are discarded
(their children will not be evaluated). Since the stack is now empty,
the decoding is completed.

Since V_i(x_1^{j-1} | x_j = \bar{x}_j) involves solving a system of linear
equations, the computational complexity is O((j-1)^3). The
computational complexity of evaluating U_i(x_1^{j-1} | x_j^N = \bar{x}_j^N)
is O((j-1)^2) and that of |\bar{y} - R x|^2 is O(N^2). Therefore, the
computational complexity of evaluating \Phi(x_1^{j-1} | x_j^N = \bar{x}_j^N) is

O( N(j-1)^3 ) + O( (N-j+1)(j-1)^2 ) + O( N^2 ).   (37)

Instead of solving (28) each time the search arrives at a
node, further savings can be achieved if we solve (28) once
and reuse the result at other nodes on the same level. This can
be explained as follows: (29) can be written as

x_1^{j-1} = -( A^{(i)}_{1:j-1, 1:j-1} )^{-1} \sum_{m=j}^{N} A^{(i)}_{1:j-1, m} \bar{x}_m.   (38)

From (38), it is seen that at any level j of the tree, the pseudo-inverse of the matrix A^{(i)}_{1:j-1, 1:j-1} is computed once and used
several times.
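The reuse suggested by (38) can be sketched as follows: at level j, factor the head block once and solve for many different decided tails. All sizes and values are illustrative assumptions:

```python
import numpy as np

# Sketch of the reuse in (38): invert A11 = A[:j-1, :j-1] once per level
# and apply it to every sibling node's right-hand side.
rng = np.random.default_rng(7)
N, j = 6, 4
M = rng.standard_normal((N, N))
A = M @ M.T + np.eye(N)                  # positive definite block (assumed)

A11 = A[:j - 1, :j - 1]
A11_inv = np.linalg.inv(A11)             # computed once per level

tails = [rng.choice([-1.0, 1.0], size=N - j + 1) for _ in range(5)]
for t in tails:
    b = A[:j - 1, j - 1:] @ t
    x_head = -A11_inv @ b                # as in (29), reusing the inverse
    # agrees with solving the linear system from scratch
    assert np.allclose(x_head, -np.linalg.solve(A11, b))
```

In practice one would cache a Cholesky or LU factorization rather than an explicit inverse, but the point is the same: the O((j-1)^3) factorization cost in (37) is paid once per level instead of once per node.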
E. Further Improvements

We can further improve the computational complexity
(in terms of reducing the number of nodes visited and hence
the total number of real additions and multiplications) by
doing more pre-processing before decoding, or by using a
different search strategy from the one described above. Some
of the improvements that can be made are as follows:

1) Channel ordering: Order the columns of the estimated
channel matrix, \hat{H}, according to some criterion, such
as Euclidean norm or V-BLAST ordering [18]. In the
Euclidean-norm ordering, the columns of the estimated
channel matrix are sorted in increasing order of their
norm. Therefore, the resultant re-ordered \hat{H} adheres to
the following criterion:

\| \hat{H}_{1:P,1} \|^2 \leq \| \hat{H}_{1:P,2} \|^2 \leq \cdots \leq \| \hat{H}_{1:P,N} \|^2.   (41)
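Euclidean-norm column ordering can be sketched in a few lines; sizes and values are illustrative assumptions:

```python
import numpy as np

# Sketch of Euclidean-norm column ordering (41): permute the columns of
# the estimated channel so their norms are nondecreasing, and permute
# the symbol indices the same way.
rng = np.random.default_rng(8)
P, N = 4, 4
H_hat = rng.standard_normal((P, N))

order = np.argsort(np.sum(H_hat ** 2, axis=0))   # increasing column norm
H_ord = H_hat[:, order]

norms = np.sum(H_ord ** 2, axis=0)
assert np.all(np.diff(norms) >= 0)               # criterion (41)

# The model is unchanged: H x = H_ord x[order] for any x.
x = rng.standard_normal(N)
assert np.allclose(H_hat @ x, H_ord @ x[order])
```

The decoder runs on the permuted model and simply undoes the permutation on the decoded symbol vector at the end.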
The received signal can be written as

y = X h + n,   (42)

where y \in R^M is the received signal, X \in R^{M \times PN} is the
transmitted codeword, h = vec(H) is the channel, and n \in
R^M represents a vector of white Gaussian noise. The channel
estimate at the receiver, \hat{h}, is given by

\hat{h} = h + e.   (43)

The pairwise error probability under hypothesis J_1 admits the
Chernoff bound

P_{UB;J_1}(s) = \frac{1}{2} \exp[ -\psi(s|J_1) ],   0 \leq s \leq 1,   (44)

where

\psi(s|J_1) = -\ln E[ \exp\{ s \ln P(y|J_0) - s \ln P(y|J_1) \} | J_1 ].   (45)

An upper bound on the pairwise error probability for codewords
X and Z is then given by

P_{UB}(s) = \frac{1}{2} [ P_{UB;J_1}(s) + P_{UB;J_0}(s) ].   (46)

The upper bound in (44) is always valid if (45) exists.
In addition, the optimal Chernoff bound is obtained by the
value of s which minimizes (44). In most cases, finding the
optimum value of s is difficult; a less complicated bound
can be obtained by setting s = 1/2. The resulting bound is
known as the Bhattacharyya bound [23]. An upper bound for
the different scenarios is computed using a key lemma [20],
stated below.
Lemma 1 ([20]): For x \in R^L,

\int_{R^L} ( x^T R x + r^T x + u ) \exp[ -( x^T S x + s^T x + v ) ] dx
= \pi^{L/2} [\det S]^{-1/2} \exp( \tfrac{1}{4} s^T S^{-1} s - v )
  [ \tfrac{1}{2} tr( R S^{-1} ) - \tfrac{1}{2} s^T S^{-1} r + \tfrac{1}{4} s^T S^{-1} R S^{-1} s + u ].   (47)

Proof: The proof is detailed in Theorem 15.12.1 in [20].
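The Chernoff/Bhattacharyya machinery of (44)-(46) can be illustrated with two scalar Gaussian hypotheses, for which the exponent has a simple closed form; the means and variance below are arbitrary illustrative assumptions:

```python
import numpy as np

# Scalar-Gaussian illustration of (44)-(46): hypotheses N(mu0, v) and
# N(mu1, v). With equal variances, psi(s) = s(1-s)(mu0-mu1)^2/(2v),
# which is maximized (tightest bound) at s = 1/2, the Bhattacharyya point.
mu0, mu1, v = 0.0, 2.0, 1.0

def psi(s):
    return s * (1 - s) * (mu0 - mu1) ** 2 / (2 * v)

s_grid = np.linspace(0.01, 0.99, 99)
bounds = 0.5 * np.exp(-psi(s_grid))

# Symmetric exponent: the minimum of the bound sits at s = 1/2.
assert np.isclose(s_grid[np.argmin(bounds)], 0.5)

# Monte Carlo check: the bound dominates the actual error probability of
# the likelihood-ratio test under J1 (declare J0 when y < (mu0+mu1)/2).
rng = np.random.default_rng(9)
y = rng.normal(mu1, np.sqrt(v), size=200_000)
p_err = np.mean(y < (mu0 + mu1) / 2)
assert p_err <= 0.5 * np.exp(-psi(0.5))
```

For unequal covariances the optimal s generally moves away from 1/2, which is exactly why the paper falls back to s = 1/2 when optimizing s is intractable.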
A. Imperfect Channel Estimates with Known Error Covariance Matrix

We first consider the scenario where the receiver has an
imperfect channel estimate and perfect knowledge of the error
covariance matrix. The channel estimation errors are assumed
to be zero-mean Gaussian. In this case, the expectation in (45)
becomes

\exp[ -\psi(s|J_1) ]
= \int_{R^M} (2\pi)^{-M/2} (\det \Sigma_X)^{-s/2} (\det \Sigma_Z)^{-(1-s)/2}
  \exp[ -\tfrac{s}{2} ( y - X\hat{h} )^T \Sigma_X^{-1} ( y - X\hat{h} )
        -\tfrac{1-s}{2} ( y - Z\hat{h} )^T \Sigma_Z^{-1} ( y - Z\hat{h} ) ] dy.   (48)
To apply Lemma 1 to (48), we change variables to x = y - Z\hat{h}
and set R = 0, r = 0,

S = \tfrac{s}{2} \Sigma_X^{-1} + \tfrac{1-s}{2} \Sigma_Z^{-1},
s = s \Sigma_X^{-1} (Z - X) \hat{h},
u = (2\pi)^{-M/2} (\det \Sigma_X)^{-s/2} (\det \Sigma_Z)^{-(1-s)/2},
v = \tfrac{s}{2} \hat{h}^T (Z - X)^T \Sigma_X^{-1} (Z - X) \hat{h},

to obtain

\psi(s|J_1)
= \tfrac{1}{2} \ln( \det[ s\Sigma_X^{-1} + (1-s)\Sigma_Z^{-1} ] \det[\Sigma_X]^{s} \det[\Sigma_Z]^{1-s} )
+ \tfrac{s}{2} \hat{h}^T (Z-X)^T [ \Sigma_X^{-1} - s \Sigma_X^{-1} ( s\Sigma_X^{-1} + (1-s)\Sigma_Z^{-1} )^{-1} \Sigma_X^{-1} ] (Z-X) \hat{h}   (49a)

= \tfrac{1}{2} \ln( \det[ s\Sigma_X^{-1} + (1-s)\Sigma_Z^{-1} ] \det[\Sigma_X]^{s} \det[\Sigma_Z]^{1-s} )
+ \tfrac{s(1-s)}{2} \hat{h}^T (Z-X)^T \{ s\Sigma_Z + (1-s)\Sigma_X \}^{-1} (Z-X) \hat{h}   (49b)

= -\tfrac{1}{2} \ln( \frac{ \det[\Sigma_Z]^{s} \det[\Sigma_X]^{1-s} }{ \det[ s\Sigma_Z + (1-s)\Sigma_X ] } )
+ \tfrac{s(1-s)}{2} \hat{h}^T (Z-X)^T \{ s\Sigma_Z + (1-s)\Sigma_X \}^{-1} (Z-X) \hat{h},   (49c)
where the step from (49a) to (49b) requires the use of the
matrix inversion lemma in [20] (equation (2.22) on page 424),
given by

\Sigma_X^{-1} - s \Sigma_X^{-1} [ s\Sigma_X^{-1} + (1-s)\Sigma_Z^{-1} ]^{-1} \Sigma_X^{-1}
= (1-s) [ s\Sigma_Z + (1-s)\Sigma_X ]^{-1}.   (50)

Defining

\alpha(s|J_1) = [ \frac{ \det[\Sigma_Z]^{s} \det[\Sigma_X]^{1-s} }{ \det[ s\Sigma_Z + (1-s)\Sigma_X ] } ]^{1/2}   (51)

and

T(s|J_1) = \tfrac{s(1-s)}{2} (Z-X)^T \{ s\Sigma_Z + (1-s)\Sigma_X \}^{-1} (Z-X),   (52)
we obtain

P_{UB;J_1}(s) = \frac{ \alpha(s|J_1) }{2} \exp( -\hat{h}^T T(s|J_1) \hat{h} ).   (53)

Similarly, under hypothesis J_0,

P_{UB;J_0}(s) = \frac{ \alpha(s|J_0) }{2} \exp( -\hat{h}^T T(s|J_0) \hat{h} ),   (54)

where

\alpha(s|J_0) = [ \frac{ \det[\Sigma_X]^{s} \det[\Sigma_Z]^{1-s} }{ \det[ s\Sigma_X + (1-s)\Sigma_Z ] } ]^{1/2}   (55)

and

T(s|J_0) = \tfrac{s(1-s)}{2} (X-Z)^T \{ s\Sigma_X + (1-s)\Sigma_Z \}^{-1} (X-Z).   (56)

Averaging over the distribution of \hat{h}, the pairwise error
probability is bounded by

\bar{P}_{UB}(s)
= \frac{1}{4} \frac{ \alpha(s|J_1) }{ \{ \det[ I_{PN} + 2 T(s|J_1) \Sigma_{\hat{h}} ] \}^{1/2} }
+ \frac{1}{4} \frac{ \alpha(s|J_0) }{ \{ \det[ I_{PN} + 2 T(s|J_0) \Sigma_{\hat{h}} ] \}^{1/2} },   (57)

where \Sigma_{\hat{h}} is the covariance matrix of \hat{h}. There is no simple
way to obtain the optimal value of s. Instead, we obtain the
Bhattacharyya bound by setting s = 1/2.

When the receiver instead uses a mismatched error covariance matrix, the detector decides between the two hypotheses
according to the (mismatched) likelihood-ratio test

\bar{P}(y|J_0) \gtrless \bar{P}(y|J_1).   (58)
The corresponding exponent is

\bar{\psi}(s|J_1)
= -\ln E[ \exp\{ s \ln \bar{P}(y|J_0) - s \ln \bar{P}(y|J_1) \} | J_1 ]
= -\ln \int_{R^M} \frac{ \det[2\pi\bar{\Sigma}_Z]^{s/2} }{ \det[2\pi\bar{\Sigma}_X]^{s/2} \det[2\pi\Sigma_Z]^{1/2} }
  \exp[ -\tfrac{s}{2} ( y - X\hat{h} )^T \bar{\Sigma}_X^{-1} ( y - X\hat{h} )
        +\tfrac{s}{2} ( y - Z\hat{h} )^T \bar{\Sigma}_Z^{-1} ( y - Z\hat{h} )
        -\tfrac{1}{2} ( y - Z\hat{h} )^T \Sigma_Z^{-1} ( y - Z\hat{h} ) ] dy,   (59)

where, due to the mismatched error covariance matrix, \bar{\Sigma}_X
and \bar{\Sigma}_Z are the covariance matrices of y assumed by the
receiver under hypotheses J_0 and J_1, respectively. In (59), \Sigma_Z is
the covariance matrix of y with the perfect error covariance
matrix, under hypothesis J_1. Applying Lemma 1 to (59), with
x = y - Z\hat{h}, R = 0, r = 0,

S = \tfrac{s}{2} ( \bar{\Sigma}_X^{-1} - \bar{\Sigma}_Z^{-1} ) + \tfrac{1}{2} \Sigma_Z^{-1},
s = s \bar{\Sigma}_X^{-1} (Z - X) \hat{h},
u = \frac{ \det[2\pi\bar{\Sigma}_Z]^{s/2} }{ \det[2\pi\bar{\Sigma}_X]^{s/2} \det[2\pi\Sigma_Z]^{1/2} },
v = \tfrac{s}{2} \hat{h}^T (Z - X)^T \bar{\Sigma}_X^{-1} (Z - X) \hat{h},

we obtain

\bar{\psi}(s|J_1)
= -\tfrac{1}{2} \ln( \frac{ \det[\bar{\Sigma}_Z]^{s} }{ \det[\bar{\Sigma}_X]^{s} \det[\Sigma_Z] \det[ s\bar{\Sigma}_X^{-1} + \Sigma_Z^{-1} - s\bar{\Sigma}_Z^{-1} ] } )
+ \tfrac{s}{2} \hat{h}^T (Z-X)^T [ \bar{\Sigma}_X^{-1} - s \bar{\Sigma}_X^{-1} ( s\bar{\Sigma}_X^{-1} + \Sigma_Z^{-1} - s\bar{\Sigma}_Z^{-1} )^{-1} \bar{\Sigma}_X^{-1} ] (Z-X) \hat{h}.   (60)

Defining

\gamma(s|J_1) = [ \frac{ \det[\bar{\Sigma}_Z]^{s} }{ \det[\bar{\Sigma}_X]^{s} \det[\Sigma_Z] \det[ s\bar{\Sigma}_X^{-1} + \Sigma_Z^{-1} - s\bar{\Sigma}_Z^{-1} ] } ]^{1/2}   (61)

and

F(s|J_1) = \tfrac{s}{2} (Z-X)^T [ \bar{\Sigma}_X^{-1} - s \bar{\Sigma}_X^{-1} ( s\bar{\Sigma}_X^{-1} + \Sigma_Z^{-1} - s\bar{\Sigma}_Z^{-1} )^{-1} \bar{\Sigma}_X^{-1} ] (Z-X),   (62)

we obtain

P_{UB;J_1}(s) = \frac{ \gamma(s|J_1) }{2} \exp( -\hat{h}^T F(s|J_1) \hat{h} ).   (63)

Similarly,

P_{UB;J_0}(s) = \frac{ \gamma(s|J_0) }{2} \exp( -\hat{h}^T F(s|J_0) \hat{h} ),   (64)

where

\gamma(s|J_0) = [ \frac{ \det[\bar{\Sigma}_X]^{s} }{ \det[\bar{\Sigma}_Z]^{s} \det[\Sigma_X] \det[ s\bar{\Sigma}_Z^{-1} + \Sigma_X^{-1} - s\bar{\Sigma}_X^{-1} ] } ]^{1/2}   (65)

and

F(s|J_0) = \tfrac{s}{2} (X-Z)^T [ \bar{\Sigma}_Z^{-1} - s \bar{\Sigma}_Z^{-1} ( s\bar{\Sigma}_Z^{-1} + \Sigma_X^{-1} - s\bar{\Sigma}_X^{-1} )^{-1} \bar{\Sigma}_Z^{-1} ] (X-Z).   (66)

Averaging (63) and (64) over the distribution of \hat{h}, we get

\bar{P}_{UB}(s) = \tfrac{1}{2} E_{\hat{h}}[ P_{UB;J_1}(s) + P_{UB;J_0}(s) ]
= \frac{1}{4} \frac{ \gamma(s|J_1) }{ \{ \det[ I_{PN} + 2 F(s|J_1) \Sigma_{\hat{h}} ] \}^{1/2} }
+ \frac{1}{4} \frac{ \gamma(s|J_0) }{ \{ \det[ I_{PN} + 2 F(s|J_0) \Sigma_{\hat{h}} ] \}^{1/2} }.   (67)
The SER versus average SINR performance for the different
schemes shows that the exact ML decoding metric gives the
best SER performance. It is also observed that the robust
near-ML decoding metric (25) incurs only a very small
performance loss; at an SER of 10^{-3}, the coding loss is only
about 0.5 dB. In addition, the ML decoding metric which
assumes i.i.d. errors (26) has a coding loss of 3 dB. In contrast,
using the ML metric which assumes perfect CSI and the
worst-case errors decoder proposed in [14] results in
significantly worse performance than our proposed decoding metric.
2) i.i.d. Channel Estimation Errors: In the second part
of this set of numerical simulations, we study a MIMO
system with i.i.d. channel estimation errors and compare the
performance of the proposed ML decoding metric with
i.i.d. errors (26) against the conventional ML metric (3), which
assumes perfect CSI (i.e., \hat{H} = H), as well as the worst-case
errors decoding metric [14].
Figure 4 illustrates the SER versus SINR performance
for the three different metrics. Our proposed robust ML decoding
metric achieves the best SER performance. Interestingly,
in this scenario, the conventional ML metric which assumes
perfect CSI (\hat{H} = H) performs much better than worst-case
errors decoding [14]. The following coding gains are observed:
at an SER of 10^{-3}, the proposed ML decoding metric (26)
achieves a 4.5 dB gain over the conventional ML decoder, and
more than 8 dB of gain over the worst-case errors decoding
of [14].
B. Performance of the Robust Sphere Decoder
In the second set of numerical simulations, we present SER
performance and average computational complexity results for
the robust sphere decoder.
1) Correlated Channel Estimation Errors: We first consider a 4x4 real MIMO system with correlated channel estimation errors, and with 64-QAM and 256-QAM modulation.
Figure 5 illustrates the SER versus SINR performance with
the two different metrics; we verify via numerical simulation
that the robust sphere decoder does in fact find the solution to
the robust near-ML decoding metric (obtained via exhaustive search).
Fig. 7. MIMO system with correlated channel estimation errors: computational savings ratio that the proposed robust sphere decoder achieves over exhaustive search.
[Figure panels: results versus SINR for 4x4 MIMO with 64QAM and 4x4 MIMO with 256QAM.]
Fig. 10. (8,4,4) Hamming code: average pairwise codeword error probability (CWER) vs. signal-to-interference-plus-noise ratio (SINR) for the different decoding schemes for 8x8 MIMO with correlated channel estimation errors.