
J Med Syst (2010) 34:83–89

DOI 10.1007/s10916-008-9218-9

ORIGINAL PAPER

Determination of Sleep Stage Separation Ability of Features Extracted from EEG Signals Using Principal Component Analysis

Cabir Vural & Murat Yildiz

Received: 31 July 2008 / Accepted: 15 September 2008 / Published online: 11 October 2008
© Springer Science + Business Media, LLC 2008

C. Vural (*) · M. Yildiz
Department of Electrical and Electronics Engineering, Sakarya University,
54187 Adapazari, Turkey
e-mail: cvural@sakarya.edu.tr

Abstract In this study, a method was proposed to determine how well features extracted from EEG signals for the purpose of sleep stage classification separate the sleep stages. The proposed method is based on principal component analysis, also known as the Karhunen–Loève transform. Features frequently used in sleep stage classification studies were divided into three main groups: (i) time-domain features, (ii) frequency-domain features, and (iii) hybrid features. How well the features in each group separate the sleep stages was determined by performing extensive simulations, and the results obtained were seen to be in agreement with those available in the literature. Considering the fact that sleep stage classification algorithms consist of two steps, namely feature extraction and classification, the proposed method makes it possible to tell a priori whether the classification step will provide successful results, without actually carrying it out.

Keywords EEG · Sleep stage classification · Sleep scoring · Principal component analysis

Introduction
Automatic determination of sleep stages from electroencephalogram (EEG) signals is an important problem for which numerous algorithms have been suggested [1–7]. All the algorithms consist of two stages, as shown in Fig. 1: (i) feature extraction from the EEG signal, and (ii) application of the extracted features to a classifier.

Fig. 1 Steps involved in the classification of the EEG signals


If we denote the extracted features as an N-dimensional vector, the objective of the feature extraction step is to represent an EEG signal corresponding to a sleep stage as a point in the N-dimensional feature space. The classification step has two goals. During the training phase, the goal is to partition the N-dimensional feature space into as many regions as there are sleep stages. During the classification phase, on the other hand, the goal is to determine the region, and hence the sleep stage, to which a given EEG signal, represented as a point in the N-dimensional feature space, belongs.
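
As a concrete illustration of this two-step structure (a hypothetical sketch, not part of the paper), a minimal nearest-centroid classifier represents each sleep stage by the mean of its training feature vectors, which implicitly partitions the N-dimensional feature space into regions; a new feature vector is then assigned to the stage whose centroid lies closest. All names below are illustrative.

```python
import numpy as np

def train_centroids(features_by_stage):
    """Compute one centroid per sleep stage from training feature vectors.

    features_by_stage: dict mapping stage name -> array of shape (num_epochs, N).
    Returns a dict mapping stage name -> centroid of shape (N,).
    """
    return {stage: vecs.mean(axis=0) for stage, vecs in features_by_stage.items()}

def classify(feature_vector, centroids):
    """Assign a feature vector (a point in the N-dimensional feature space)
    to the stage whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda s: np.linalg.norm(feature_vector - centroids[s]))

# Toy example with N = 2 features and two stages.
rng = np.random.default_rng(0)
train = {
    "awake":  rng.normal([0.0, 0.0], 0.2, size=(50, 2)),
    "stage2": rng.normal([2.0, 1.0], 0.2, size=(50, 2)),
}
centroids = train_centroids(train)
print(classify(np.array([1.9, 1.1]), centroids))  # expected: "stage2"
```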
As is clear from the above discussion, the feature extraction step affects the overall classification accuracy. An ideal feature vector should cluster the signals corresponding to the same sleep stage into a hypersphere with a small radius, while generating nonintersecting hyperspheres for different sleep stages in the feature space. If the hyperspheres corresponding to different sleep stages intersect, the classifier will make errors when making decisions. To the best of our knowledge, an analysis showing how well a particular feature vector separates the sleep stages has not been performed for any of the available sleep stage classification algorithms. However, such an analysis is very important for two reasons. For one thing, the analysis provides an upper bound for the accuracy of the classifier. Without such knowledge it is not possible to know how much the performance of the classifier deviates from the upper bound. For another, if the upper bound determined in the analysis step is not sufficient for a given sleep stage classification problem, one can deduce that using a classifier is meaningless and that the chosen feature vector needs to be modified.

In this study, a method was proposed in order to determine how well features separate the sleep stages. The method is based on the eigenvectors, or principal components, of a covariance matrix computed from the feature vectors. Principal component analysis, also known as the Karhunen–Loève transform, has been successfully applied in numerous applications such as pattern recognition, image segmentation and classification [8]. Features frequently used in sleep stage classification studies were divided into three main groups: (i) features extracted from the EEG signal itself (time-domain features), (ii) features computed in the frequency domain by transforming the EEG signal with a suitable transform such as the discrete-time Fourier or discrete-time wavelet transform (frequency-domain features), and (iii) features composed of both time-domain and frequency-domain features (hybrid features). How well the features in each group separate the sleep stages was determined by performing extensive simulations, and the results obtained were seen to be in agreement with those available in the literature.

The paper is organized as follows. Brief information about the sleep stages and the definitions of the features used in this study are given in the 'The sleep stages and the features used' section. The proposed method is introduced in the 'The proposed method' section. Simulation results are provided in the 'Simulation results' section. Finally, the main results are highlighted and the study is summarized in the 'Conclusions' section.

The sleep stages and the features used

Sleep stages

The EEG signals are taken from the brain by using special electrodes; they have a maximum amplitude of 200 μV and contain frequency components in the 0.5–100 Hz range. It is known that the EEG signals obtained from healthy people and from those who have sleep disorders exhibit different characteristics. As a result, physicians are able to diagnose sleep disorders by using the information extracted from the EEG signals. Studies have shown that the EEG signals obtained during sleep, rest and physical activity differ in terms of their waveform and the frequency components they contain. The frequency components of the EEG signals have been divided into five bands: delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (12–35 Hz), and gamma (35–50 Hz). An EEG signal obtained from a person is classified into one of the following sleep stages by taking its waveform and spectrum into account [9].

Stage awake: The EEG signals belonging to this sleep stage contain frequency components in more than one band and have low amplitudes, usually in the 10–30 μV range.

Stage 1: In this case, the EEG signals contain frequencies in the 2–7 Hz range, and the maximum value of the waveform can be as high as 200 μV. The frequency components in the alpha band should not exceed 50% of the total frequency components.

Stage 2: One of the most important characteristics of this sleep stage is the K-complex, defined as a steep negative wave followed by a smooth positive wave. The other characteristic is the sleep spindle, defined as waves of at least 0.5 s duration that contain frequency components in the 12–14 Hz range.

Stage 3: In order to classify an EEG signal as Stage 3, it should contain frequencies in the delta band, its amplitude should be higher than 75 μV, and the delta activity should occupy 20% to 50% of the epoch. Sleep spindles and K-complexes can be observed in EEG signals belonging to this stage.

Stage 4: The EEG signals belonging to this class exhibit the characteristics mentioned for Stage 3. However, the delta activity should occupy more than 50% of the epoch.

Stage REM: For this sleep stage, the EEG signals have low amplitude, contain frequency components in several bands, have a waveform similar to a sawtooth, and may contain alpha activity.

The features used

The features used for the classification of the sleep stages can be divided into three main groups: (i) time-domain features computed from the EEG signal, (ii) frequency-domain features obtained from the frequency spectrum of the EEG signal, and (iii) hybrid features formed by using the time-domain and frequency-domain features together. It is impossible to analyze every feature vector in each group because of space limitations. Brief definitions of the most frequently used feature vectors investigated in this study are provided below for the sake of completeness.

Time-domain features: The mean value, standard deviation, median, the Hjorth parameters called activity, mobility and complexity, the correlation coefficient between two EEG signals, and the mean square value are usually used as the time-domain features. Computation of these parameters, except the median, the correlation coefficient and the mean square value, is given in Eqs. 1 to 5. In the calculations, it is assumed that each analog EEG signal is recorded in 30 s segments called epochs. Analog signals are sampled according to the Nyquist sampling theorem, and the sampled EEG signal is represented as a vector x of dimension n×1. Hence, the notation $x_i$ stands for the ith element of the vector x. Furthermore, in the equations defining activity, mobility and complexity, the notation $\sigma_i$ denotes the standard deviation of the ith derivative of the vector x, with $\sigma_0$ the standard deviation of the vector itself. A known method can be used in order to calculate the derivatives of a vector [10].

Mean:

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i \qquad (1)$$

Standard deviation:

$$s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x})^2} \qquad (2)$$

Activity:

$$A = \sigma_0^2 \qquad (3)$$

Mobility:

$$M = \frac{\sigma_1}{\sigma_0} \qquad (4)$$

Complexity:

$$C = \sqrt{\left(\frac{\sigma_2}{\sigma_1}\right)^2 - \left(\frac{\sigma_1}{\sigma_0}\right)^2} \qquad (5)$$
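
As a rough sketch (not the authors' implementation), the 5×1 time-domain feature vector of Eqs. 1–5 could be computed from a sampled 30 s epoch as below. The discrete derivatives are approximated here with np.diff, which is one common choice; the paper only states that a known method [10] can be used.

```python
import numpy as np

def hjorth_parameters(x):
    """Hjorth activity, mobility and complexity of a 1-D epoch x (Eqs. 3-5).

    sigma0, sigma1, sigma2 are the standard deviations of the signal and of its
    first and second discrete derivatives (approximated with np.diff)."""
    d1 = np.diff(x)
    d2 = np.diff(d1)
    s0, s1, s2 = np.std(x), np.std(d1), np.std(d2)
    activity = s0 ** 2                                       # Eq. 3
    mobility = s1 / s0                                       # Eq. 4
    complexity = np.sqrt((s2 / s1) ** 2 - (s1 / s0) ** 2)    # Eq. 5 as reconstructed above
    return activity, mobility, complexity

def time_domain_features(x):
    """5x1 time-domain feature vector: mean, standard deviation (Eqs. 1-2)
    and the three Hjorth parameters (Eqs. 3-5)."""
    mean = np.mean(x)                # Eq. 1
    std = np.std(x, ddof=1)          # Eq. 2, with 1/(n-1) normalization
    return np.array([mean, std, *hjorth_parameters(x)])

# Example: a synthetic 30 s epoch sampled at 100 Hz.
fs = 100
t = np.arange(0, 30, 1 / fs)
epoch = 50 * np.sin(2 * np.pi * 2 * t) + 10 * np.random.randn(t.size)
print(time_domain_features(epoch))
```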

Frequency-domain features: Relative spectral powers ($RSP_i$), defined as the ratios of the powers in the frequency bands given in Table 1 to the total spectral power, the harmonic parameters called the central frequency ($f_c$), the bandwidth ($f_\sigma$) and the power at the central frequency ($P_{f_c}$), the spectral edge frequency ($f_{spe}$) [11], and the Itakura distance are generally used as the frequency-domain features. Computation of these parameters, except the Itakura distance, is provided in Eqs. 6 to 10. In the equations, P(f) denotes the power spectrum obtained from the sampled EEG signal by using a suitable method such as Yule–Walker, Burg or Welch [12], and $f_L$ and $f_H$ represent the lowest (0.5 Hz) and the highest (45 Hz) frequency contained in the EEG signal, respectively.

Relative spectral power for the ith frequency band $B_i$:

$$RSP_i = \frac{\sum_{f \in B_i} P(f)}{\sum_{f=f_L}^{f_H} P(f)} \qquad (6)$$

Central frequency:

$$f_c = \frac{\sum_{f=f_L}^{f_H} f\,P(f)}{\sum_{f=f_L}^{f_H} P(f)} \qquad (7)$$

Bandwidth:

$$f_\sigma = \sqrt{\frac{\sum_{f=f_L}^{f_H} (f - f_c)^2\,P(f)}{\sum_{f=f_L}^{f_H} P(f)}} \qquad (8)$$

Power at the central frequency:

$$P_{f_c} = P(f_c) \qquad (9)$$

Spectral edge frequency ($f_{spe}$): the minimum frequency $f_{spe}$ that satisfies the following equality:

$$\frac{\sum_{f=f_L}^{f_{spe}} P(f)}{\sum_{f=f_L}^{f_H} P(f)} = 0.9 \qquad (10)$$

Table 1 Spectral energy bands of the EEG signals

Band (Bi)    Frequency range (Hz)
Delta 1      0.5–2.5
Delta 2      2.5–4
Theta 1      4–6
Theta 2      6–8
Alpha        8–12
Beta 1       12–20
Beta 2       20–45
The proposed method


Since the dimension of the feature vectors is usually greater than three, it is not possible to visualize a feature vector as a point in the feature space. In order to visualize the feature vector as a point, its dimension must be reduced to three or less while preserving as much of its information content as possible. For this purpose, the information most significant for sleep stage classification must be retained. In this study, the dimensionality was reduced to two, though it is straightforward to extend the procedure to more than two dimensions.
The procedure used for the dimensionality reduction is principal component analysis (PCA). The idea behind principal component analysis is to define a new coordinate system aligned with the directions that carry the information useful for classification. The two vectors defining the new coordinate system, which have the same dimension as the feature vectors, should be determined such that the following properties are satisfied: the points obtained by projecting the feature vectors corresponding to the same sleep stage onto these two vectors are clustered in a narrow region, while the projections corresponding to different sleep stages are clustered in different regions. In the proposed method, these two vectors are chosen as two eigenvectors of a generalized covariance matrix computed as a weighted sum of the covariance matrices of the sleep stages. The method is shown as a block diagram in Fig. 2 and its details are provided below. It is easy to generalize the method to m dimensions.
Step 1: Let us assume that the dimension of a feature vector is p×1, and that the numbers of available feature vectors for the awake, REM, Stage 1, Stage 2, Stage 3 and Stage 4 sleep stages are N1, N2, ..., N6, respectively. The mean feature vector $m_i$ of size p×1 and the covariance matrix $C_i$ of size p×p of each sleep stage are computed by using Eqs. 11 and 12:
$$m_i = \frac{1}{N_i}\sum_{m=1}^{N_i} x_{i,m}, \qquad i = 1, 2, \ldots, 6 \qquad (11)$$

$$C_i = \frac{1}{N_i}\sum_{m=1}^{N_i} (x_{i,m} - m_i)(x_{i,m} - m_i)^T, \qquad i = 1, 2, \ldots, 6 \qquad (12)$$

In Eqs. 11 and 12, the notation $x_{i,m}$ stands for the mth feature vector of the ith sleep stage (i = 1, 2, ..., 6; m = 1, 2, ..., $N_i$).

Fig. 2 Block diagram of the proposed method
Step 2: The generalized covariance matrix C, defined as the weighted sum of the covariance matrices of the six sleep stages, is determined by

$$C = \sum_{i=1}^{6} p_i C_i \qquad (13)$$

where $p_1, p_2, \ldots, p_6$ are the a priori probabilities of the sleep stages.
Step 3: The eigenvalues and eigenvectors of the generalized covariance matrix are computed.

Step 4: Let $q_1$, $q_2$ denote the two eigenvectors, or principal components, corresponding to the two eigenvalues largest in magnitude. In order to represent a given feature vector x as a point in the two-dimensional principal component space, the feature vector is projected onto $q_1$ and $q_2$ by using Eqs. 14 and 15:

$$p_1 = \langle x, q_1 \rangle = x^T q_1 \qquad (14)$$

$$p_2 = \langle x, q_2 \rangle = x^T q_2 \qquad (15)$$

The projection values give the amount of energy of the feature vector in the directions of the two most important principal components.

Step 5: Since $[p_1, p_2]^T$ represents a point in the two-dimensional principal component space, in which $q_1$ and $q_2$ define the horizontal and vertical axes, a given feature vector can be visualized.
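
A sketch of Steps 1–5, assuming the feature vectors of the six stages are supplied as a list of (N_i × p) NumPy arrays; the stage means, covariances, generalized covariance matrix, its two leading eigenvectors and the 2-D projections follow Eqs. 11–15. Equal a priori probabilities are used by default, as in the simulations below; the function name is illustrative.

```python
import numpy as np

def principal_projection(stage_features, priors=None):
    """stage_features: list of arrays, one per sleep stage, each of shape (N_i, p).
    Returns the two leading eigenvectors q1, q2 (as columns of Q) and, for every
    stage, the projections [p1, p2] of its feature vectors (Eqs. 11-15)."""
    k = len(stage_features)
    priors = np.full(k, 1.0 / k) if priors is None else np.asarray(priors)

    # Steps 1-2: per-stage mean/covariance and the generalized covariance matrix.
    covs = []
    for X in stage_features:
        m = X.mean(axis=0)                          # Eq. 11
        D = X - m
        covs.append(D.T @ D / X.shape[0])           # Eq. 12 (1/N_i normalization)
    C = sum(p * Ci for p, Ci in zip(priors, covs))  # Eq. 13

    # Steps 3-4: eigen-decomposition; keep the two eigenvectors whose
    # eigenvalues are largest in magnitude.
    eigvals, eigvecs = np.linalg.eigh(C)            # C is symmetric
    order = np.argsort(np.abs(eigvals))[::-1]
    Q = eigvecs[:, order[:2]]                       # columns q1, q2

    # Step 5: project every feature vector onto q1 and q2 (Eqs. 14-15).
    projections = [X @ Q for X in stage_features]
    return Q, projections

# Toy example: 6 "stages" with p = 5 features each.
rng = np.random.default_rng(2)
stages = [rng.normal(loc=i, scale=1.0, size=(40, 5)) for i in range(6)]
Q, proj = principal_projection(stages)
print(Q.shape, proj[0].shape)   # (5, 2) (40, 2)
```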


Table 2 The number of feature vectors used for calculating the mean feature vector and covariance matrix of each sleep stage, and for testing the proposed method

              Awake   REM   Stage 1   Stage 2   Stage 3   Stage 4
Calculation     60     40      15        70        20        40
Testing        240    160      40       280        70       160
Total          300    200      55       350        90       200

Simulation results

The EEG signals used in this study were obtained from the International Database PhysioNet Sleep Records [13]. The database provides each EEG signal as 30 s epochs sampled at 100 Hz. The time-domain feature vectors are of size 5×1, composed of the mean value, the standard deviation and the Hjorth parameters defined in the 'The sleep stages and the features used' section. The frequency-domain feature vectors are of size 11×1, composed of the relative spectral power for each frequency band, the central frequency, the bandwidth, the power at the central frequency and the spectral edge frequency, also defined in the 'The sleep stages and the features used' section. Hence, the dimension of the hybrid feature vectors is 16×1.

The EEG signals were applied to a sixth-order Butterworth digital bandpass filter with passband 0.5–45 Hz before computing the feature vectors from the sampled EEG signals, in order to eliminate undesired distortions such as noise. The Yule–Walker algorithm was used for computing the power spectrum of the EEG signals. The number of feature vectors used for each sleep stage is provided in Table 2. The first row shows the numbers of feature vectors used for the mean feature vector and covariance matrix calculations, and the second row gives the numbers used to test the sleep stage separation ability of the feature vectors. Since a priori information about the sleep stages is not available, we chose to use equal a priori probabilities for the sleep stages, i.e., $p_1 = p_2 = \ldots = p_6 = 1/6$, even though the true probabilities might give better results.
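
A sketch of the preprocessing described above, assuming raw 100 Hz epochs: a sixth-order Butterworth bandpass (0.5–45 Hz) applied with zero-phase filtering, followed by an autoregressive Yule–Walker power-spectrum estimate. The AR model order (8 here), the FFT length and the use of zero-phase filtering are assumptions not specified in the paper.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.linalg import solve_toeplitz

def bandpass(epoch, fs=100, low=0.5, high=45.0, order=6):
    """Sixth-order Butterworth bandpass, applied forward and backward (zero phase)."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, epoch)

def yule_walker_psd(x, fs=100, ar_order=8, nfft=512):
    """AR power spectrum estimated via the Yule-Walker (autocorrelation) equations."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[x.size - 1:] / x.size    # biased autocorrelation
    a = solve_toeplitz(r[:ar_order], r[1:ar_order + 1])          # AR coefficients
    sigma2 = r[0] - a @ r[1:ar_order + 1]                        # driving-noise variance
    f = np.arange(nfft // 2 + 1) * fs / nfft
    z = np.exp(-2j * np.pi * np.outer(f / fs, np.arange(1, ar_order + 1)))
    psd = sigma2 / (fs * np.abs(1 - z @ a) ** 2)
    return f, psd

# Example: filter a synthetic epoch and estimate its spectrum.
fs = 100
t = np.arange(0, 30, 1 / fs)
raw = 30 * np.sin(2 * np.pi * 10 * t) + 20 * np.random.randn(t.size)
clean = bandpass(raw, fs)
freqs, psd = yule_walker_psd(clean, fs)
print(freqs[np.argmax(psd)])   # the peak should lie near 10 Hz
```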
Sleep stage separation ability of the time-domain feature vectors is illustrated in Fig. 3. For ease of interpretation, clustering results for only three sleep stages are shown in the same figure. Figure 3b shows that the projection values for Stage 2, REM and the awake stage cluster in different regions, while Fig. 3a demonstrates that those for Stage 2, Stage 3 and Stage 4 mix into each other. Specifically, Stage 2 mixes with Stage 3, and Stage 3 gets mixed up with Stage 4. When these features are used for sleep stage classification, one can tell beforehand that the classification accuracy of a classifier for Stage 2, Stage 3 and Stage 4 will not be high, while that for the REM and awake stages will be high. This observation is in harmony with the results reported in the literature [14, 15].

How well the sleep stages separate when the frequency-domain feature vectors are used is shown in Fig. 4. It is clear from Fig. 4b that the projection values obtained from Stage 2, Stage 4 and the awake stage cluster in different regions, while it is obvious from Fig. 4a that those obtained from Stage 2, Stage 3 and Stage 4 get mixed up. This result is also in agreement with the results reported in the literature. Classification results obtained by using frequency-domain features are reported in [16]. These results are repeated here in Table 3 and support the claims made for Fig. 4a, b. For example, the classification accuracy for Stage 3 is only 3% (recall that Stage 3 gets mixed up with Stage 2 and Stage 4). On the other hand, the classification accuracies for Stage 4 and the awake stage are very high (95% and 88%, respectively).

Fig. 3 Sleep stage separation ability of the time-domain features. a Sleep stages that mix up, b sleep stages that separate well


Fig. 4 Sleep stage separation ability of the frequency-domain features. a Sleep stages that mix up, b sleep stages that separate well
Table 3 Sleep stage classification results obtained by using the frequency-domain features given in [16]

           Awake   Stage 1   Stage 2   Stage 3   Stage 4   REM   Success (%)
Awake        59       0         0         0         5       3        88
Stage 1      11       0        17         0         2      24         0
Stage 2       3       1       291         0        21      31        84
Stage 3       0       0        39         3        52      13         3
Stage 4       1       0         9         2       278       2        95
REM           7       0        16         0         6     204        88

Fig. 5 Sleep stage separation ability of the hybrid features. a Sleep stages that mix up, b sleep stages that separate well
Table 4 Sleep stage classification results obtained by using the hybrid features given in [14]

           Awake   Stage 1   Stage 2   Stage 3   Stage 4   REM   Success (%)
Awake        23      25         7         1         0       0       41.1
Stage 1       3      31        46         3         0       9       33.7
Stage 2       0      28       650        21         0       0       92.6
Stage 3       0       0        10        84        16       0       76.4
Stage 4       0       0         0         6       159       0       96.4
REM           0      11        41         0         0     204       79.7

One can observe by comparing Figs. 3 and 4 that the projection values cluster in narrower regions when the frequency-domain feature vectors are used than when the time-domain feature vectors are used. In other words, it can be conjectured that the sleep stage classification accuracy for the frequency-domain features will be better than that for the time-domain features. This might explain why the time-domain features alone have not been used for sleep stage classification.

Finally, the clustering results obtained from the projections in the case of the hybrid feature vectors are shown in
Fig. 5 for Stage 1, Stage 2, Stage 4, REM and awake stages.
A situation similar to that obtained in the previous two
cases takes place. In other words, Stage 1, Stage 2 and
REM stages are not expected to be classified accurately,
though Stage 2, Stage 4 and awake stages are expected to
be classified with a high accuracy when these hybrid
feature vectors are provided to a classifier. For example,
classification results for hybrid feature vectors are given in
[14] and repeated in Table 4. Out of 92 epochs belonging to
Stage 1, 46 epochs are declared as Stage 2. Also, out of 256
REM epochs, 41 epochs are classified as Stage 2. These
two observations support the claim made for Fig. 5a.
Similarly, results of Table 4 and Fig. 5b agree.

Conclusions
In this study, a method was proposed in order to evaluate the
sleep stage separation ability of the most frequently used
feature vectors extracted from the EEG signals without
performing the classification step. Implementation of the
method is straightforward. Once the decision about the EEG
database is made, it is sufficient to calculate the generalized
covariance matrix and store its two eigenvectors corresponding to the largest two eigenvalues in magnitude.
The feature vectors were classified into three groups, and the sleep stage separation ability of each group was determined via detailed simulations. From the simulation results, one can deduce that the time-domain features alone provide the worst separation, while the hybrid features give the best separation. This fact explains why most of the sleep stage classification algorithms proposed in the literature use hybrid feature vectors.
The analysis made in this study has two important consequences. First, it is possible to tell in advance whether the classification step will provide successful results for a chosen feature vector. Consequently, if the features used do not provide acceptable sleep stage separation, there is no need to implement the classification step in order to see its classification performance. Second, the features that give successful sleep stage separation results will provide an upper bound for the classification step. Hence, one can tell how close the classification accuracy obtained by using a particular classifier is to the upper bound. If there is a significant deviation from the upper bound, either the parameters of the classifier used should be modified or a different classifier should be employed. Without such an analysis, it is not possible to know beforehand the performance improvement that might be obtained for a given feature vector.

References

1. Sinha, R. K., Artificial neural network and wavelet based automated detection of sleep spindles, REM sleep and wake states. J. Med. Syst. 32:291–299, 2008. doi:10.1007/s10916-008-9134-z.
2. Kim, M. S., Cho, Y. C., Berdakh, A., and Seo, H. D., Analysis of brain function and classification of sleep stage EEG using Daubechies wavelet. Sens. Mater. 20:1001014, 2008.
3. Virkkala, J., Hasan, J., Värri, A., Himanen, S. L., and Müller, K., Automatic sleep stage classification using two-channel electrooculography. J. Neurosci. Methods 166:109–115, 2007. doi:10.1016/j.jneumeth.2007.06.016.
4. Doroshenkov, L., Konyshev, V., and Selishchev, S., Classification of human sleep stages based on EEG processing using hidden Markov models. Biomed. Eng. (N.Y.) 41:125–128, 2007. doi:10.1007/s10527-007-0006-5.
5. Held, C. M., Heiss, J. E., Estévez, P. A., Perez, C. A., Garrido, M., Algarin, C., and Peirano, P., Extracting rules from polysomnographic recordings for infant sleep stage classification. IEEE Trans. Biomed. Eng. 53(10):1954–1962, 2006. doi:10.1109/TBME.2006.881798.
6. Hanaoka, M., Kobayashi, M., and Yamazaki, H., Automatic sleep stage scoring based on waveform recognition method and decision-tree learning. Syst. Comput. Jpn. 33:11113, 2002. doi:10.1002/scj.10248.
7. Caffarel, J., Gibson, G. J., Harrison, J. P., Griffiths, C. J., and Drinnan, M. J., Comparison of manual sleep staging with automated neural network-based analysis in clinical practice. Med. Biol. Eng. Comput. 44:105–110, 2006. doi:10.1007/s11517-005-0002-4.
8. Duda, R. O., Hart, P. E., and Stork, D. G., Pattern classification. Wiley, New York, 2001.
9. Rechtschaffen, A., and Kales, A. (eds.), A manual of standardized terminology, techniques and scoring system for sleep stages of human subjects. Brain Information Service/Brain Research Institute, 1968.
10. Gonzalez, R. C., and Woods, R. E., Digital image processing, 3rd edition. Pearson Prentice Hall, Englewood Cliffs, NJ, 2008.
11. Szeto, H. H., Spectral edge frequency as a simple quantitative measure of the maturation of electrocortical activity. Pediatr. Res. 27(3):289–292, 1990. doi:10.1203/00006450-199003000-00018.
12. Kay, S. M., Modern spectral estimation: Theory and application. Prentice Hall, Englewood Cliffs, NJ, 1988.
13. International Database PhysioNet Sleep Recordings: http://www.physionet.org
14. Agarwal, R., and Gotman, J., Computer-assisted sleep staging. IEEE Trans. Biomed. Eng. 48(12):1412–1423, 2001. doi:10.1109/10.966600.
15. Van Hese, P., Philips, W., De Koninck, J., Van de Walle, R., and Lemahieu, I., Automatic detection of sleep stages using the EEG. Proceedings of the 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 2, pp. 1944–1947, 2001.
16. Kerkeni, N., Alexandre, F., Bedoui, M. H., Bougrain, L., and Dogui, M., Automatic classification of sleep stages on a EEG signal by artificial neural network. 5th WSEAS International Conference on Signal, Speech and Image Processing (WSEAS SSIP'05), Corfu, Greece, 2005.
