
4th International Conference on Electrical Engineering (ICEE 2015)

IGEE, Boumerdes, December 13th-15th, 2015

CSP Features Extraction and FLDA Classification of EEG-Based Motor Imagery for Brain-Computer Interaction
Sid Ahmed Belhadj
Department of Automation Engineering
AVCIS Laboratory
University of Science and Technology of Oran, P.O. Box 1505 El M'Naouer, Algeria
Email: sidahmed.belhadj@univ-usto.dz

Nawal Benmoussat
Department of Automation Engineering
AVCIS Laboratory
University of Science and Technology of Oran, P.O. Box 1505 El M'Naouer, Algeria
Email: nawalbb@yahoo.com

Mohamed Della Krachai
Department of Automation Engineering
LEPESA Laboratory
University of Science and Technology of Oran, P.O. Box 1505 El M'Naouer, Algeria
Email: kmdella@yahoo.fr

Abstract: A Brain-Computer Interface (BCI) is a revolutionary Human-Computer Interface system which is still in a developing state. BCI research aims to develop systems that help disabled people communicate with computers using their brain waves, without any muscular action between the person and the computer. Motor Imagery (MI) is one of the most popular paradigms used to design BCI systems. Besides the BCI's complicated architecture, the required computational load is heavy. BCIs based on the electroencephalogram (EEG) are growing fast, and several EEG-based techniques have been proposed for this purpose. However, EEG signals are characterised by a low spatial resolution and a limited frequency range. Moreover, they are often contaminated by noise caused by cardiac activity (electrocardiography, ECG) and/or ocular artefacts (electrooculography, EOG). To handle this problem, we present in this paper an efficient approach based on Common Spatial Pattern (CSP) for spatial feature extraction and Fisher Linear Discriminant Analysis (FLDA) for classification. In this study, CSP and FLDA are used to reduce common channel artefacts and to find projections that maximise the discrimination between the different classes. A CSP feature extraction of EEG-based motor imagery is conducted, then an offline classification of the motor imagery is performed. Simulation results demonstrate the efficiency and the accuracy of the approach, which can be used in real-life applications.

Keywords: Fisher Linear Discriminant Analysis; Common Spatial Pattern; Electroencephalogram; Motor Imagery; Brain-Computer Interface.
I. INTRODUCTION
People with severe neurological diseases can be highly paralysed and therefore need an enhanced technological means of communication. Indeed, when the paralysis is almost total, as in the locked-in state, the use of brain activity remains the ultimate way to communicate [1]. Brain-Computer Interface (BCI) research focuses on developing systems that help such disabled people communicate through the use of their brain waves.
A BCI is a communication system in which the messages or commands sent by the brain to the external environment do not pass through the brain's normal output pathways of nerves and muscles. Hence, a BCI can be described as a system of communication and direct control based only on brain activity, without any need for muscular action between the person and an electrical or mechanical system (computer, wheelchair, etc.) [2].
In this paper, we are interested in a brain-computer interaction based on the Motor Imagery paradigm, better known as asynchronous or self-paced BCI [3], [4]. In this paradigm the user interacts with the system whenever he decides to, by voluntarily changing his brain activity [5], [6]. In an asynchronous BCI, the control signals are generated continuously, since the system detects and manages brain signals constantly, in contrast to a synchronous BCI [7]. This allows a permanent control of the interface elements.
Several brain activity signals are exploited in asynchronous BCIs; in our case we are interested in the signals produced by sensorimotor activity. When the user imagines, plans or performs a movement, the power in the µ [9-13] Hz and β [16-30] Hz frequency bands changes. These changes, which affect both magnitude and frequency, vary from one person to another and also over time. The challenge is therefore to detect the µ and β bands in the EEG signals, which are directly related to the motor imagery tasks. It is important to notice that with a motor imagery-based BCI, the user can view feedback, for example the position of a cursor on a screen, and thus learn how to control his brain activity [8]. However, the problem is the detection of these bands in the EEG signals.
Hence, in this study, we present a methodology to correctly recognise the motor imagery task in noisy EEG signals while taking account of their variability over time. For this purpose, we exploit the performance of Common Spatial Pattern (CSP) filters combined with a supervised classifier, Fisher Linear Discriminant Analysis (FLDA).
This paper is organized as follows. Section II describes the dataset used to evaluate the performance of the CSP+FLDA algorithm. Section III introduces the space-time filters used to identify and extract the motor imagery tasks from the EEG signals. Section IV describes the FLDA classification method for mental task discrimination and prediction. Section V summarises the simulation results. Finally, Section VI draws conclusions.

II. DATASET DESCRIPTION AND ANALYSIS

The Brain-Computer Interface used in this study is a motor imagery-based BCI. This interface is the most widely used asynchronous BCI for applications requiring continuous monitoring, such as computer browsing, wheelchair driving or robotic prosthesis control [2], [8], [9].

Fig. 1. (a) The extended international 10-20 system [11]. Even numbers (2, 4, 6, 8) refer to electrode positions on the right hemisphere, whereas odd numbers (1, 3, 5, 7) refer to those on the left hemisphere. (b) The letters contained in the electrode names correspond to the different zones.

A. Experimental protocol

The EEG dataset used in this study is provided by the Berlin BCI Group [10]. The EEG data were collected from one healthy subject, from 118 electrodes distributed according to the extended international 10-20 system (see Fig. 1). The subject was sitting in a comfortable chair with arms resting on armrests. The dataset includes only the first seven acquisition sessions, recorded without feedback. The first three sessions are given with labels. Visual stimuli (letter appearances) were presented for 3.5 seconds, indicating one of two motor imagery tasks to be performed by the subject: (L) for left hand and (F) for right foot. Each target stimulus was followed by a short break of random length, 1.75 to 2.25 seconds, in which the subject could relax. The remaining acquisition sessions (4 to 7) are continuous: the EEG signals are given without any stimulus information (class label/stimulus timing). During these sessions the tasks left hand, right foot and relaxation were cued by acoustic stimuli for time intervals between 1.5 and 8 seconds. Relaxation periods after each stimulus are the same as those of the first three sessions.

B. Dataset analysis

From the dataset, one can extract the EEG response to a stimulus on the channels (C4, C2, Cz, C1, C3) located over the motor cortex. Fig. 2 shows the response during left hand movement imagination. One can clearly see the redundancy of the EEG signals over the motor cortex, which makes the discrimination of the mental task difficult.

Fig. 2. The EEG recordings over the motor cortex (central lobe) corresponding to channels C4, C2, Cz, C1 and C3 during motor imagery of the left hand.

III. DATA PREPROCESSING AND FEATURES EXTRACTION


A. Temporal filter

As the phenomenon we have to recognise appears within about 3.5 seconds after the visual stimulus, we used the interval from 0.5 to 3.5 seconds after the stimulus for each trial, i.e. a time range of 3.0 seconds, as the EEG segment submitted to classification. In addition, to increase the SNR, each channel was band-pass filtered with a 20th-order Butterworth filter in the [7-30] Hz band to extract the frequency range corresponding to the sensorimotor rhythms µ and β. The aim of this filtering is also to cancel signals which do not originate from the brain: eye movements and blinks (electrooculography, EOG) are slow, low-frequency signals, and cardiac activity (electrocardiography, ECG) also contaminates the recordings. These EEG artefacts have to be reduced while preserving as much as possible the signals of cerebral origin. Fig. 3 shows an example of two different trials, right foot and left hand motor imagery, over the channels C1 and C4. One can observe that the artefacts in the EEG signals are considerably reduced for the two trials.
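As an illustration of this preprocessing step, a minimal Python sketch is given below (NumPy/SciPy are assumed; the sampling rate fs, the continuous recording raw_eeg and the cue positions are hypothetical placeholders rather than part of the dataset interface described in the paper, and the exact filter design used by the authors may differ).

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 100.0  # assumed sampling rate in Hz (dataset-dependent)

    # Band-pass Butterworth design in the [7-30] Hz band; order 10 here
    # yields a 20th-order band-pass, comparable to the filter in the text.
    sos = butter(10, [7.0, 30.0], btype='bandpass', fs=fs, output='sos')

    def preprocess_trial(raw_eeg, cue_sample):
        """Filter the continuous EEG (channels x samples) and extract the
        0.5-3.5 s window following a visual cue (a 3.0 s segment)."""
        filtered = sosfiltfilt(sos, raw_eeg, axis=1)  # zero-phase filtering
        start = cue_sample + int(0.5 * fs)
        stop = cue_sample + int(3.5 * fs)
        return filtered[:, start:stop]

Applied to each labelled cue of the training sessions, such a routine yields the band-limited trials used for the spatial filtering described next.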
B. Spatial filter and features extraction

1) Background: After solving the problem of eliminating artefacts from the EEG signals, one can proceed to the identification of the µ and β rhythms. However, the motor imagery activity associated with these rhythms must first be spatially located. For each motor imagery, neural activity is conducted through the brain volume to the scalp and the EEG sensors by an effect called volume conduction (see Fig. 4). This effect may lead to interference between the different EEG channels, which makes it necessary to use spatial filters to reconstruct the source signal and thus identify the mental task performed by the user.
The volume conduction problem has been largely addressed in the literature [2], [12], [13]. However, almost all of these works did not converge to a single EEG spatial filter. Indeed, in [12], McFarland et al. used a simple Laplacian filter to subtract an average of the EEG signals measured on neighbouring electrodes. Wolpaw et al., in [2], proposed a Common Average Reference (CAR) approach which subtracts the average calculated over all electrodes. In [13], Logar et al. proposed a different approach that used Principal Component Analysis (PCA) to reduce the number of variables and make the signals less redundant.

However, all these approaches do not take into account the temporal distribution of the motor imagery activity sources, which is, in the time window of interest, jointly Gaussian [14] (see Fig. 5(f)). This makes the problem of source reconstruction analytically solvable, and that is what Common Spatial Pattern (CSP) can exploit [15]. Indeed, CSP is a supervised learning approach to spatial filtering that analyses the spatial differences existing between two distinct classes. Its major objective is to maximise the gap between the signal variances of the two classes. In this work, we apply the CSP method for the filtering and spatial characterisation of the EEG signals.

Fig. 3. Illustration of EEG signals recorded over the motor cortex (channels C1 and C4) before and after preprocessing. (a) and (b) Left hand motor imagery. (c) and (d) Right foot motor imagery.

Fig. 4. The volume conduction effect.

2) Common Spatial Pattern (CSP): An appropriate spatial filter should provide signals that are easy to classify. The goal of this section is to design CSP spatial filters that lead to optimal variances for the discrimination of the two populations of EEG signals related to left hand and right foot motor imagery. CSP finds spatial filters such that the variance of the filtered data of one class is maximised while the variance of the filtered data of the other class is minimised. Thus, the resulting feature vectors enhance the discriminability between the two classes. Since the variance of EEG signals filtered in a given frequency band corresponds to the power inside this band, CSP essentially maximises the discriminability of the features used in the BCI [15].

Let us consider the input data $\{X_i\}_{i=1}^{N}$, where $X_i$ is the $i$-th trial and belongs to class $c \in \{1, 2\}$. Each $X_i$ is a $[C \times T]$ matrix, where $C$ is the number of channels and $T$ is the number of time samples per channel. The goal of CSP is to find spatial filters, given by a $[C \times J]$ matrix $W$ (each column $\mathbf{w}_j$ is a spatial filter), that linearly transform the input signals according to

$y(t) = W^{T} x(t)$   (1)

where $x(t)$ is the vector of input signals at time $t$ from all channels. In order to find the filters, the two class-conditional covariance matrices are first estimated as

$\Sigma_c = \frac{1}{N_c} \sum_{i \in c} X_i X_i^{T}$   (2)

for $c \in \{1, 2\}$. The CSP technique involves determining a matrix $W$ such that

$W^{T} \Sigma_1 W = \Lambda_1, \qquad W^{T} \Sigma_2 W = \Lambda_2$   (3)

where the $\Lambda_c$ are diagonal matrices and $\Lambda_1 + \Lambda_2 = I$, with $I$ the identity matrix. This can be done by solving the generalized eigenvalue problem

$\Sigma_1 \mathbf{w} = \lambda \Sigma_2 \mathbf{w}$   (4)

The generalized eigenvectors $\mathbf{w}_j$ that satisfy the above equation form the columns of $W$ and represent the CSP spatial filters. The generalized eigenvalues $\lambda_{1,j} = \mathbf{w}_j^{T} \Sigma_1 \mathbf{w}_j$ and $\lambda_{2,j} = \mathbf{w}_j^{T} \Sigma_2 \mathbf{w}_j$ form the diagonal elements of $\Lambda_1$ and $\Lambda_2$, respectively. Since $\lambda_{1,j} + \lambda_{2,j} = 1$, a high value of $\lambda_{1,j}$ means that the output of filter $\mathbf{w}_j$ has a high variance for input signals of class 1 and a low variance for signals of class 2 (and vice versa). Fig. 5 illustrates this property of the CSP filters for EEG signals. A CSP analysis was performed to obtain two spatial filters that discriminate left hand from right foot motor imagery; the figure shows an example of a continuous EEG signal after applying the CSP filters.

The resulting signals in Fig. 5(b) have a larger variance during left hand motor imagery (the blue curve), while the signals in Fig. 5(d) have a larger variance during right foot motor imagery (the red curve). Spatial filtering with such filters can therefore significantly enhance the discriminability.

Fig. 5(e) and Fig. 5(f) illustrate an example of CSP filtering in 2D. Two sets of samples, marked by red and blue asterisks, are drawn from two trial distributions; (e) and (f) show the distribution of the samples before and after filtering, respectively. One can notice that the two classes are strongly correlated before applying the CSP, whereas after filtering they are uncorrelated: the horizontal (vertical) axis gives the largest variance for the blue (red) class and the smallest for the red (blue) class, respectively.

Fig. 5. Examples of 2-D CSP applied on two trials from the training set: (a), (c) and (e) before CSP filtering; (b), (d) and (f) after CSP filtering.
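For concreteness, the CSP computation of Eqs. (1)-(4) and the associated variance features can be sketched as follows. This is a minimal NumPy/SciPy illustration under the assumption that each trial is a channels x samples array; the eigenvalue problem is solved against the composite covariance, which yields the same eigenvectors as Eq. (4), and the normalised log-variance feature is a standard choice rather than a detail spelled out in the paper.

    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(trials_1, trials_2, n_filters=2):
        """Compute CSP spatial filters from two lists of band-passed trials.
        Returns a (channels x n_filters) matrix W whose columns are taken
        from both ends of the eigenvalue spectrum."""
        def avg_cov(trials):
            # class-conditional covariance, cf. Eq. (2)
            return np.mean([np.cov(X) for X in trials], axis=0)

        S1, S2 = avg_cov(trials_1), avg_cov(trials_2)
        # Generalized eigenvalue problem S1 w = lambda (S1 + S2) w,
        # which has the same eigenvectors as Eq. (4).
        eigvals, eigvecs = eigh(S1, S1 + S2)
        order = np.argsort(eigvals)  # ascending eigenvalues
        sel = np.concatenate([order[:n_filters // 2],
                              order[-(n_filters - n_filters // 2):]])
        return eigvecs[:, sel]

    def csp_features(trial, W):
        """Project a trial through the CSP filters (Eq. (1)) and use the
        normalised log-variance of each output as a feature."""
        Y = W.T @ trial
        var = Y.var(axis=1)
        return np.log(var / var.sum())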

IV. CLASSIFICATION

Once the feature extraction is done, we can apply the resulting data set to classification. In the following, we outline the idea of Fisher Linear Discriminant Analysis (FLDA), which is discussed in more detail in [16]. An easy way to perform binary classification is to construct a hyperplane defined by a weight vector $\mathbf{w}$ and an offset $b$, as depicted in Fig. 6. Based on a training set of patterns with data vectors $\mathbf{x}_i$ and corresponding class labels $y_i$,

$(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_N, y_N), \quad y_i \in \{-1, 1\}$   (5)

a machine learning approach has to find such a hyperplane according to some proper optimality criterion. In the test phase, the class label of a new data vector $\mathbf{x}$ can be predicted by projecting $\mathbf{x}$ onto $\mathbf{w}$ according to Eq. (6):

$f(\mathbf{x}) = \mathbf{w}^{T}\mathbf{x} + b$   (6)

The sign of this function determines the class to which the vector $\mathbf{x}$ is assigned. For two-class, separable training data sets, such as the one in Fig. 6, there are many possible linear separators. Intuitively, a decision boundary drawn in the middle of the void between the data items of the two classes seems better than one that lies very close to the patterns of one or both classes. FLDA gives an optimal projection $\mathbf{w}$ such that the distribution of the projected data $\mathbf{w}^{T}\mathbf{x}$ is easy to discriminate (see Fig. 6).

Fig. 6. FLDA finds the optimal hyperplane (solid line) to separate two classes. It can be described by the weight vector $\mathbf{w}$ and the offset term $b$.

Fisher's criterion is given by

$J(\mathbf{w}) = \dfrac{(\mu_1 - \mu_2)^2}{\sigma_1^2 + \sigma_2^2}$   (7)

where $\mu_1$ and $\mu_2$ denote the means of the projections $\mathbf{w}^{T}\mathbf{x}$ for classes 1 and 2, respectively, and $\sigma_1^2$ and $\sigma_2^2$ denote the corresponding variances. We have

$(\mu_1 - \mu_2)^2 = (\mathbf{w}^{T}\mathbf{m}_1 - \mathbf{w}^{T}\mathbf{m}_2)^2 = \mathbf{w}^{T}(\mathbf{m}_1 - \mathbf{m}_2)(\mathbf{m}_1 - \mathbf{m}_2)^{T}\mathbf{w} = \mathbf{w}^{T} S_B \mathbf{w}$   (8)

and

$\sigma_1^2 + \sigma_2^2 = \mathbf{w}^{T} S_1 \mathbf{w} + \mathbf{w}^{T} S_2 \mathbf{w} = \mathbf{w}^{T}(S_1 + S_2)\mathbf{w} = \mathbf{w}^{T} S_W \mathbf{w}$   (9)

where $\mathbf{m}_1$ and $\mathbf{m}_2$ denote the mean vectors of the two classes and $S_1$ and $S_2$ their scatter matrices. By substituting (8) and (9) in (7), the cost function $J(\mathbf{w})$ becomes

$J(\mathbf{w}) = \dfrac{\mathbf{w}^{T} S_B \mathbf{w}}{\mathbf{w}^{T} S_W \mathbf{w}}$   (10)

The solution is then given by

$\mathbf{w} \propto S_W^{-1}(\mathbf{m}_1 - \mathbf{m}_2)$   (11)

Finally, we choose an optimal threshold $b$; then we can classify any data vector $\mathbf{x}$ by

$\mathbf{w}^{T}\mathbf{x} \ge b \;\Rightarrow\; \mathbf{x} \in \text{class } 1,$   (12)

$\mathbf{w}^{T}\mathbf{x} < b \;\Rightarrow\; \mathbf{x} \in \text{class } 2.$   (13)

For example, $b = (\mu_1 + \mu_2)/2$ could be used.
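A minimal sketch of the FLDA training and decision rules of Eqs. (5)-(13) is given below (NumPy assumed; the CSP feature vectors are stacked row-wise per class, and the threshold follows the $(\mu_1 + \mu_2)/2$ suggestion above; this is an illustration, not the authors' actual implementation).

    import numpy as np

    def flda_train(X1, X2):
        """Fisher LDA for two classes.
        X1, X2: (n_trials x n_features) arrays of CSP features.
        Returns the projection vector w (Eq. (11)) and a threshold b."""
        m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
        # within-class scatter S_W = S_1 + S_2, cf. Eq. (9)
        S_w = np.cov(X1, rowvar=False) * (len(X1) - 1) \
            + np.cov(X2, rowvar=False) * (len(X2) - 1)
        w = np.linalg.solve(S_w, m1 - m2)   # w proportional to S_W^{-1}(m1 - m2)
        b = 0.5 * (w @ m1 + w @ m2)         # threshold midway between projected means
        return w, b

    def flda_predict(x, w, b):
        """Assign class 1 if w.x >= b, otherwise class 2 (Eqs. (12)-(13))."""
        return 1 if w @ x >= b else 2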

V. SIMULATION RESULTS

Let us first visualise the spatial filter effect and the corresponding pattern of activation in the brain, and see how they correspond to the neurophysiological understanding of right foot and left hand motor imagery. Fig. 7 displays a pair of CSP spatial filters; we can notice that the source signals associated with the motor imagery tasks have been estimated correctly.

For the classification process, a training phase is first applied to the classifier in order to determine its parameters. Next, the evaluation stage is conducted, where the test set consists of continuous EEG signals concatenated from four acquisition sessions (about 12 minutes). Fig. 8 shows the response of the classifier together with the actual (true) class over 400 seconds of test data. One can see that the classification achieved is satisfactory, even though some errors occur due to overlapping data that were not found in the training set.

Then, to assess the efficiency of the method used (CSP + FLDA), we varied the frequency band of the Butterworth filter and the number of CSP spatial filters. Table I shows the prediction rate, in terms of the true positive rate [16], of the FLDA classifier applied on the test set under the assumptions mentioned above. One can see that the CSP methods are clearly superior to the non-CSP method and that the frequency band is also an important factor.


Fig. 8. BCI performance over a 400-second test time window. The blue curve indicates the output of the FLDA classifier. Green plateaus indicate the actual class (+1 is the imagined right foot movement; 0 is rest; -1 is the imagined left hand movement).

TABLE I
THE CLASSIFICATION PERFORMANCE WITH DIFFERENT FREQUENCY BANDS

Band          Number of CSP filters   Accuracy
[7-30] Hz     Non CSP                 66.1%
[7-30] Hz     2                       85.3%
[7-30] Hz     6                       88.1%
[7-30] Hz     10                      88.0%
[10-30] Hz    Non CSP                 66.9%
[10-30] Hz    2                       89.4%
[10-30] Hz    6                       89.1%
[10-30] Hz    10                      89.4%
[9-13] Hz     Non CSP                 56.8%
[9-13] Hz     2                       81.0%
[9-13] Hz     6                       83.7%
[9-13] Hz     10                      82.0%
[16-30] Hz    Non CSP                 51.9%
[16-30] Hz    2                       83.7%
[16-30] Hz    6                       81.5%
[16-30] Hz    10                      79.5%
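The sweep behind Table I could be scripted along the following lines, reusing the sketches above (load_trials is a hypothetical loader returning band-passed training trials per class and a labelled test set; it is a placeholder, not part of the original work).

    import numpy as np

    bands = [(7, 30), (10, 30), (9, 13), (16, 30)]
    n_csp_choices = [2, 6, 10]

    for low, high in bands:
        train_1, train_2, test_trials, test_labels = load_trials(band=(low, high))
        for n_csp in n_csp_choices:
            W = csp_filters(train_1, train_2, n_filters=n_csp)
            F1 = np.array([csp_features(X, W) for X in train_1])
            F2 = np.array([csp_features(X, W) for X in train_2])
            w, b = flda_train(F1, F2)
            preds = [flda_predict(csp_features(X, W), w, b) for X in test_trials]
            acc = np.mean(np.array(preds) == np.array(test_labels))
            print(f"[{low}-{high}] Hz, {n_csp} CSP filters: accuracy = {acc:.1%}")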

Fig. 7. Example of CSP analysis. (a) Right foot motor imagery. (b) Left hand motor imagery. The patterns (left in (a) and (b)) illustrate how the presumed sources project onto the scalp; they can be used to verify neurophysiological plausibility. The filters (right in (a) and (b)) illustrate the reconstructed sources; they are used to project the original signals.

VI. CONCLUSION

This paper has presented a spatial feature extraction and classification methodology for EEG-based motor imagery tasks in Brain-Computer Interface design. The classification problem essentially amounts to recognising a signal containing the µ and β rhythms related to a motor imagery task. The major difficulty of such a problem is the variability of the EEG signal, due to volume conduction, changes in the acquisition conditions, and the user's attention level and fatigue. In our study, we dealt with these problems by using a set of CSP spatial filters for feature extraction, followed by an efficient FLDA classification. We have demonstrated that the prediction of mental tasks is strongly related to an optimal choice of some key parameters, namely the frequency band and the number of spatial filters used. Simulation results demonstrate the efficiency and the accuracy of the proposed approach, which can be used in real-life applications.
ACKNOWLEDGEMENTS

The authors would like to thank the Berlin Brain-Computer Interface group: Fraunhofer FIRST, Intelligent Data Analysis Group (Klaus-Robert Müller, Benjamin Blankertz), and Campus Benjamin Franklin of the Charité - University Medicine Berlin, Department of Neurology, Neurophysics Group (Gabriel Curio), for their support in the collection of the data used in this study.
REFERENCES

[1] L. A. Farwell and E. Donchin, "Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials," Electroencephalography and Clinical Neurophysiology, vol. 70, no. 6, pp. 510-523, 1988.
[2] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control," Clinical Neurophysiology, vol. 113, no. 6, pp. 767-791, 2002.
[3] R. Scherer, F. Lee, A. Schlogl, R. Leeb, H. Bischof, and G. Pfurtscheller, "Toward self-paced brain-computer communication: navigation through virtual worlds," IEEE Transactions on Biomedical Engineering, vol. 55, no. 2, pp. 675-682, 2008.
[4] A. R. Satti, D. Coyle, and G. Prasad, "Self-paced brain-controlled wheelchair methodology with shared and automated assistive control," in Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), 2011 IEEE Symposium on. IEEE, 2011, pp. 1-8.
[5] F. Cabestaing and A. Rakotomamonjy, "Introduction aux interfaces cerveau-machine," in 21e colloque GRETSI sur le traitement du signal et des images, 2007.
[6] J. F. Borisoff, S. G. Mason, A. Bashashati, and G. E. Birch, "Brain-computer interface design for asynchronous control applications: improvements to the LF-ASD asynchronous brain switch," IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, pp. 985-992, 2004.
[7] S. A. Belhadj, N. Benmoussat, and M. Della Krachai, "Mixed SVM classification of EEG-based palliative communication for brain-computer interface," in Proceedings of the 2nd World Congress on Computer Applications and Information Systems. N&N Global Technology, 2015, pp. 2-6.
[8] J. R. Wolpaw and D. J. McFarland, "Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans," Proceedings of the National Academy of Sciences of the United States of America, vol. 101, no. 51, pp. 17849-17854, 2004.
[9] C. Neuper, R. Scherer, S. Wriessnegger, and G. Pfurtscheller, "Motor imagery and action observation: modulation of sensorimotor brain rhythms during mental control of a brain-computer interface," Clinical Neurophysiology, vol. 120, no. 2, pp. 239-247, 2009.
[10] G. Dornhege, B. Blankertz, G. Curio, and K. Müller, "Boosting bit rates in noninvasive EEG single-trial classifications by feature combination and multiclass paradigms," IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, pp. 993-1002, 2004.
[11] V. Jurcak, D. Tsuzuki, and I. Dan, "10/20, 10/10, and 10/5 systems revisited: their validity as relative head-surface-based positioning systems," NeuroImage, vol. 34, no. 4, pp. 1600-1611, 2007.
[12] D. J. McFarland, L. M. McCane, S. V. David, and J. R. Wolpaw, "Spatial filter selection for EEG-based communication," Electroencephalography and Clinical Neurophysiology, vol. 103, no. 3, pp. 386-394, 1997.
[13] V. Logar and A. Belic, "Brain-computer interface analysis of a dynamic visuo-motor task," Artificial Intelligence in Medicine, vol. 51, no. 1, pp. 43-51, 2011.
[14] W. Ou, A. Nummenmaa, M. Hamalainen, and P. Golland, "Multimodal functional imaging using fMRI-informed regional EEG/MEG source estimation," in Information Processing in Medical Imaging. Springer, 2009, pp. 88-100.
[15] R. P. N. Rao, Brain-Computer Interfacing: An Introduction. Cambridge University Press, 2013.
[16] C. M. Bishop, Pattern Recognition and Machine Learning. Springer New York, 2006, vol. 4, no. 4.
