email: saptocondro@ieee.org
supervised by
Hiermit versichere ich, dass ich meine Masterarbeit ohne fremde Hilfe angefertigt habe
und dass ich keine anderen als die angegebenen Quellen und Hilfsmittel benutzt habe.
Alle Stellen, die wörtlich oder sinngemäß aus Veröffentlichungen entnommen sind, habe
Declaration
Herewith I declare that I wrote my Master Thesis without external support and that I did
not use other than quoted sources and auxiliary means. All statements which are literally
the research topic in this institute is the steady-state visual evoked potential (SSVEP), which
deals with a resonance frequency from the visual cortex arising as a response when a person focuses
his visual attention on a rapidly flickering stimulus. The SSVEP response is detected using
Time passes from the moment a subject looks at a blinking source until the BCI system
detects the SSVEP response. The purpose of this master thesis is to shorten this response
time. Time series prediction algorithms are presented here. The algorithms are first tested
in simulation using MATLAB. The algorithms that pass are then implemented
as C++ classes and tested in experiments involving 11 human subjects. This thesis
found that the regression method and the Kalman Filter can help improve the performance
of SSVEP-based BCI.
Keywords:
Brain-computer interface (BCI), Steady-state visual evoked potential (SSVEP), EEG, time
ago. During this time, I have learned a lot. I previously had no experience in object-oriented
self-esteem and has proven that I can get through all difficulties in programming and
writing. This thesis has opened my eyes to promising research in the areas of neuroscience,
brain-computer interfaces and so on, and it has built my confidence to continue studying after this
master program. I would like to express my gratitude after completing this thesis.
Volosyak and Prof. Dr.-Ing. Axel Gräser, for all their support and guidance, so that I could finish
my thesis. I want to express my gratitude to Prof. Dr.-Ing. Axel Gräser, who gave me
the chance to do my project and thesis in his institute and who taught many courses during my
master study. Mr. Thorsten Lüth showed his patience as a supervisor, especially during
the writing of the thesis report. I am also thankful to my past and present colleagues in
the IAT for their help and friendship: Dr. Hubert Cecotti, Dr. Brendan Allison, Diana
me financially in my first two years in Bremen. The DAAD gave me the opportunity to study
in Germany and to attend seminars which built my interest in science and technology. I
would like to thank Ms. Rachmawaty as the DAAD contact in Indonesia, as well as Ms. Helga
Islam and Ms. Susanne Kammüller as the DAAD contacts in Bonn, Germany. They were very
helpful during my first time in Germany and the preparation of my master study.
Germany. Without him, I would still have been afraid to send my application to the DAAD and
enough experiment subjects for this master thesis. Their warmth and friendship also helped
me through hard days in Germany. I would like to thank Felix Pasila and Indar Sugiarto,
who greeted me in Bremen and provided me shelter for the first three days in Germany.
I thank the Rayendra family and Sannyo Galingging, who lent me money and provided
I would like to thank my parents, who support me morally and financially, with love
3.2 Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
4 Simulation 13
4.1 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4.3 Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
5 Implementation 20
5.1 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
5.1.1 Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.1.2 BCI2000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
5.1.3 ClassifierConnect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
5.2 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
5.2.3 Computer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
6 Experiment 32
6.1 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
6.1.1 Subjects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
6.1.3 Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
6.2 Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
7.1.1 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
7.1.2 Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.1 Enhanced SNR (red) using Three-Point Quadratic Model and estimated SNR (blue) of one frequency
4.2 Enhanced SNR (red) using Regression with 5 delay taps and estimated SNR (blue) of one frequency
4.3 Enhanced SNR (red) using Regression with 8 delay taps and estimated SNR (blue) of one frequency
4.4 The command made by Logical Trend-Based Decision algorithm, with 2 delay taps . . . 16
4.5 The command made by Logical Trend-Based Decision algorithm, with 5 delay taps . . . 17
4.6 The command made by Logical Trend-Based Decision algorithm, with 8 delay taps . . . 17
4.7 Enhanced SNR (red) using simplified form of Kalman Filter with 5 steps ahead prediction and estimated SNR (blue) of one frequency
4.8 Enhanced SNR (red) using Särkää's form of Kalman Filter with 5 steps ahead prediction and estimated SNR (blue) of one frequency
4.9 Enhanced SNR (red) using Joseph's form of Kalman Filter with 5 steps ahead prediction and estimated SNR (blue) of one frequency
5.4 BCI2000 Filtering Module, showing segment length and flickering frequencies 28
5.6 ClassifierConnect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
5.7 HTerm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
6.2 Average speed, from the first experiment: 7 and 8 subjects, idle periods of 1 s 37
6.3 Average accuracy, from the first experiment: 7 and 8 subjects, idle periods of 1 s . . . 38
6.4 Average information transfer rate (ITR), from the first experiment:
6.5 Average speed, from the second experiment: 7 subjects, idle periods of 2 s . 41
6.6 Average accuracy, from the second experiment: 7 subjects, idle periods of 2 s 42
6.7 Average information transfer rate (ITR), from the second experiment:
6.8 Average speed, from the third experiment: 3 subjects, idle periods of 2 s . . 44
6.9 Average accuracy, from the third experiment: 3 subjects, idle periods of 2 s 45
6.10 Average information transfer rate (ITR), from the third experiment:
6.2 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
6.4 The result summary of the first experiment, with the idle period of 1 s . . . 40
6.5 The result summary of the second experiment, with the idle period of 2 s . 40
6.6 The result summary of the third experiment, with the idle period of 2 s . . 43
Introduction
1.1 Thesis Background
Brain-Computer Interface (BCI) has been studied in the Institute of Automation (IAT)
of the University of Bremen since 2005 [1]. The first publication from the IAT was in 2005
[2]. At the CeBIT fair in Hannover, Germany, in March 2008, the BrainRobot research
group from the IAT made a study of the steady-state visually evoked potential (SSVEP) for BCI
applications, and 106 subjects participated in this study [3]. Later on, in October 2008, the IAT
The IAT has developed an algorithm, called Minimum Energy Combination [5], to detect the SSVEP
of a certain frequency from EEG signals (from different channels or electrodes).
This algorithm can estimate the signal-to-noise ratio (SNR) for every time
sequence. In this case, the signal in the SNR is the one with the desired frequency and the noise
consists of the other signals. The SNR is then used for classification. If an SNR increases and reaches a
certain threshold, a command is assigned. If more than one SNR reaches a threshold, then
Time is needed to do the classification. The response time is measured, and the
parameter used is the information transfer rate (ITR) [6],[7]. The shorter the response time,
the higher the ITR, and the better the BCI system. The ITR calculation includes not only the raw
speed but also the accuracy: the more accurate the classification, the higher the ITR.
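The ITR cited here from [6],[7] is commonly computed with Wolpaw's formula, B = log2(N) + P·log2(P) + (1−P)·log2((1−P)/(N−1)), scaled by the number of classifications per minute. A minimal sketch under that assumption (the function names are illustrative, not from the thesis code):

```cpp
#include <cassert>
#include <cmath>

// Wolpaw ITR in bits per classification:
// B = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1)),
// with N possible commands and classification accuracy P.
double bitsPerSelection(int N, double P) {
    double B = std::log2(static_cast<double>(N));
    if (P > 0.0) B += P * std::log2(P);
    if (P < 1.0) B += (1.0 - P) * std::log2((1.0 - P) / (N - 1));
    return B;
}

// ITR in bits per minute, given the average response time T (in seconds)
// needed per classification: a shorter T gives a higher ITR.
double itrBitsPerMinute(int N, double P, double Tseconds) {
    return bitsPerSelection(N, P) * 60.0 / Tseconds;
}
```

With 4 commands, perfect accuracy yields 2 bits per selection; accuracy at chance level (P = 1/N) yields 0 bits.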
This thesis proposes time series manipulation and prediction algorithms to improve the
response time. In Figure 1.1, one black curve is reaching a threshold slowly. This black
curve is the SNR. With a specific time series manipulation algorithm, shown by the orange
¹ see the master project report of I.S.C. Atmawan, page 1 [4]
curve, this may be sped up. The orange curve, as the enhanced SNR, approaches the
threshold faster than the black curve. There is a time gap dt in the figure showing the
Figure 1.1: The classication based on SNR and response time improvement
The main goal of this master thesis is to improve the response time. It is done by using
• Regression
• Kalman Filter
The response times of the algorithms are compared, as well as their accuracies. It is intended
• experiments with a total of 11 subjects, divided into 3 to 8 subjects per
algorithm
Chapter 1 introduces the background and the objectives of this master thesis. Chapter 2
describes the definitions of BCI, EEG and SSVEP. This chapter also gives an idea of
Chapter 3 covers the different kinds of Time Series Prediction algorithms which are used in
Chapter 4 shows simulation results using MATLAB R2006b (version 7.3). Chapter 5
describes the hardware and the software used in this thesis. This chapter also gives an
idea of how the Time Series Prediction algorithms are implemented as standard C++ classes.
Chapter 6 describes the experiments. This chapter gives an idea of how the experiments
are conducted and shows the experimental results. In the end, Chapter 7 concludes the master
thesis.
commands that an individual sends to the external world do not pass through the brain's
normal output pathways of peripheral nerves and muscles [6]. With a BCI, people may
communicate without movement [3], e.g. without making hand or mouth movements. A BCI
system enables people to send messages or commands to electronic devices, e.g. computers,
robots, telecommunication devices and so on, by means of brain activity [6],[8]. The
communication channel connects the brain directly to a computer or a machine [7]. The
channel does not go through the normal muscles and nerves, i.e. those in the hands and fingers.
This is possible because electrodes are put on the head to measure the neural
activity of the brain [3],[9]. The communication between the brain and the computer goes
There are dependent and independent BCIs [3],[6]. A BCI is called dependent if a few
muscular activities are needed. In an SSVEP-based BCI, people have to look in a certain
direction, for example at the stimuli. In this system, the extraocular muscles and the cranial
nerves are needed, which means the eyes, as well as their lenses, are moving, and sometimes the neck
is also turning. An independent BCI is a pure BCI because it does not need muscular
There are invasive and non-invasive BCIs [6]. An invasive BCI needs surgery to implant
electrodes, either on the surface of the brain or penetrating into it. A non-invasive
BCI does not; it only requires electrodes which are placed on the head, not inside
¹ see master project report of I.S.C. Atmawan, page 4 [4]
² idem
³ idem
The neurons of the human brain process information by changing the flow of electrical
currents across their membranes. These changing currents generate electric and magnetic
fields. By putting electrodes on the scalp, the change of the electrical field can be measured at
those different electrodes. EEG means the writing out of the electrical activity of the brain.⁴
An EEG-based BCI is a kind of human-computer interface which enables direct
activities (EEG) that reflect the function of the brain [11]. An EEG-based BCI is mostly
non-invasive since the electrodes are usually put on the surface of the scalp.⁵
The evoked potential is the change in the brain's electrical activity that follows a stimulus [10].
This potential is mostly measured using EEG. The visually evoked potential (VEP) relates to
visual activities: seeing, gazing and focusing. The VEP is recorded from the scalp over the visual
cortex [6]. This cortex is located in the occipital lobe [7], so the EEG is taken from the
back of the head. Light stimuli, e.g. flickering lights or images, videos and so on, are used
to stimulate the visual system. These stimuli elicit a response in the visual cortex. This
response is called the VEP [12]. The VEP reflects the attention (to the stimuli) and can determine
There are transient VEPs and steady-state VEPs. A transient VEP arises if the electrical
excitation of the visual system is able to abate before new stimuli are presented [12]. P300
is an example of a transient VEP. There is a resonance phenomenon arising in the visual
attention to a flickering light source, and this resonance happens in the steady-state VEP [5].⁷
the visual cortex to a rapidly repeating (flickering) stimulus [13]. This response generally
has a sinusoidal waveform with the same temporal frequency as the driving stimulus [13].
This means that the SSVEP is periodic [14]. The periodic pattern of the SSVEP can be detected
and correlated with the periodic pattern of the stimuli. One example is if a subject watches
⁴ see Terence W. Picton's definition of EEG on the website of the Rotman Research Institute [10] and the master project report of I.S.C. Atmawan, page 5 [4]
⁵ see master project report of I.S.C. Atmawan, page 5 [4]
⁶ idem
⁷ idem
a flickering LED with a frequency of 15 Hz, then a signal with the same frequency can
be extracted from the EEG, and the amplitude as well as the power of this 15 Hz signal is
⁸ see master project report of I.S.C. Atmawan, page 56 [4]
combination method to estimate the SNR [5]. This SNR is going to be enhanced in this thesis.
Time series prediction is used to do the signal processing. Figure 3.1 shows the block
diagram and the signal flow. As can be seen from the time series prediction block in
the figure, the input is the estimated SNR and the output is the enhanced SNR. This chapter
This method works by modeling the time series with a quadratic equation. Three values in the time
series are sampled to get three quadratic parameters: b0, b1 and b2. These parameters are
used to predict the next values. Eq. (3.1) shows the model, with time t and signal y.¹ In this

y = b2 · t² + b1 · t + b0   (3.1)
Equations (3.2) and (3.3) show the same quadratic model, but they emphasize the
relation between the present value yk and the two past values, yk−1 and yk−2. If we put
t = 0 in Eq. (3.1) then we get yk in (3.3). Putting t = −1 means getting yk−1 and
There are model errors e0, e1 and e2, which are added in Eq. (3.2) and (3.3). These
equations are similar to the ones in the regression method in Section 3.2 [15],[16],[17].
y =X·β+e (3.2)
[ yk   ]   [ 1  0  0 ]   [ b0 ]   [ e0 ]
[ yk−1 ] = [ 1 −1  1 ] · [ b1 ] + [ e1 ]   (3.3)
[ yk−2 ]   [ 1 −2  4 ]   [ b2 ]   [ e2 ]
Due to the presence of errors, the parameters β cannot be obtained exactly. Instead, they can only
be estimated. The estimates of the quadratic parameters, β̂, are obtained using Eq. (3.4).
The time series prediction is then given by Eq. (3.5) for an m-step prediction. It is
better to use m ≤ 2.
β̂ = X⁻¹ · y   (3.4)

ŷk+m = [ 1  m  m² ] · β̂   (3.5)
The enhanced SNR is the maximum value of ŷk+m and yk, so the enhanced one
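Since X here is the fixed 3×3 matrix of Eq. (3.3), the prediction reduces to a closed form. A minimal C++ sketch under that assumption (illustrative names, not the thesis' cRegression3 class):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Three-point quadratic prediction: fit y = b2*t^2 + b1*t + b0 through
// (t = -2, ykm2), (t = -1, ykm1), (t = 0, yk), i.e. beta_hat = X^-1 * y
// with the fixed X of Eq. (3.3), then evaluate [1 m m^2] * beta_hat (Eq. (3.5)).
double quadraticPredict(double yk, double ykm1, double ykm2, int m) {
    double b0 = yk;                                    // row for t = 0
    double b1 = (3.0 * yk - 4.0 * ykm1 + ykm2) / 2.0;  // closed-form inverse
    double b2 = (yk - 2.0 * ykm1 + ykm2) / 2.0;
    return b0 + b1 * m + b2 * m * m;
}

// The enhanced SNR is never smaller than the current estimate yk.
double enhanceSNR(double yk, double ykm1, double ykm2, int m) {
    return std::max(yk, quadraticPredict(yk, ykm1, ykm2, m));
}
```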
3.2 Regression
With general linear regression [15],[16],[17], the quadratic parameters from Eq. (3.1) can
be obtained by sampling more than 3 points of the time series. More samples can reduce the noise
present in the 3-point model above. But too much sampling makes the response slower:
adding delay taps means a larger matrix. The computational cost should be taken into
consideration here.
Eq. (3.4) is modified into Eq. (3.6) for the parameter estimation:

β̂ = (Xᵀ · X)⁻¹ · Xᵀ · y   (3.6)

Eq. (3.5) is still used for the time series prediction. The enhanced SNR in the regression method is calculated the same
In [15] and [16], (Xᵀ · X) is called the auto-correlation matrix and (Xᵀ · y) the
cross-correlation matrix.
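As a sketch of this normal-equations route with a configurable number of delay taps (illustrative code, not the thesis' cRegression3 implementation):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Least-squares fit of y = b2*t^2 + b1*t + b0 over the last n samples
// (n "delay taps"), via the normal equations beta = (X^T X)^{-1} X^T y.
// series[0] is the newest sample (t = 0); series[i] was taken at t = -i.
std::vector<double> fitQuadratic(const std::vector<double>& series) {
    const int n = static_cast<int>(series.size());
    double A[3][3] = {{0}};   // X^T X (auto-correlation matrix)
    double b[3] = {0};        // X^T y (cross-correlation matrix)
    for (int i = 0; i < n; ++i) {
        double t = -static_cast<double>(i);
        double row[3] = {1.0, t, t * t};
        for (int r = 0; r < 3; ++r) {
            for (int c = 0; c < 3; ++c) A[r][c] += row[r] * row[c];
            b[r] += row[r] * series[i];
        }
    }
    // Solve the 3x3 system A * beta = b by Gaussian elimination
    // (A is symmetric positive definite for n >= 3, so no pivoting needed).
    for (int p = 0; p < 3; ++p) {
        for (int r = p + 1; r < 3; ++r) {
            double f = A[r][p] / A[p][p];
            for (int c = p; c < 3; ++c) A[r][c] -= f * A[p][c];
            b[r] -= f * b[p];
        }
    }
    std::vector<double> beta(3);
    for (int r = 2; r >= 0; --r) {
        double s = b[r];
        for (int c = r + 1; c < 3; ++c) s -= A[r][c] * beta[c];
        beta[r] = s / A[r][r];
    }
    return beta;   // {b0, b1, b2}
}

// m-step-ahead prediction, Eq. (3.5): y_hat = [1 m m^2] * beta.
double predictAhead(const std::vector<double>& beta, int m) {
    return beta[0] + beta[1] * m + beta[2] * m * m;
}
```

Growing n enlarges only the accumulation loop here, but in a direct matrix formulation it enlarges X, which is the computational cost noted above.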
Another method to improve the response time of the BCI system uses the logical trend. This
method is based on the gradient of a time series signal. Eq. (3.7) shows the model, where
yk is the value of the estimated SNR at the current time and yk−1 and yk−2 are the past values.
The decision is positive if the trend is increasing, which means the gradient is positive.

IF (yk > yk−1) & (yk−1 > yk−2) THEN Yes   (3.7)

Another model applies if more than three points are used. Eq. (3.8) gives an idea of this
model. Using more points may reduce the effect of fluctuating signals.

IF (yk > yk−1) & (yk−1 > yk−2) & ··· & (yk−(n−1) > yk−n) THEN Yes   (3.8)
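The rule of Eq. (3.8) can be sketched in a few lines (illustrative code, not the thesis implementation):

```cpp
#include <cassert>
#include <vector>

// Logical trend-based decision: "yes" only when the last samples are
// strictly increasing, y_k > y_{k-1} > ... > y_{k-n} (Eq. (3.8)).
// series[0] is the newest sample; series[i] lies i steps in the past.
bool trendIsIncreasing(const std::vector<double>& series) {
    for (std::size_t i = 0; i + 1 < series.size(); ++i)
        if (!(series[i] > series[i + 1])) return false;
    return true;
}
```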
The Kalman Filter is another alternative for time series prediction [18]. This method uses a
recursive algorithm, so it may save memory in the matrix calculations. The
algorithm was developed by R.E. Kalman in 1960 [19]. There are now various applications
of this filter in Brain-Computer Interfaces.² Below is the Kalman Filter algorithm, derived
from [15],[18],[27],[28],[29].
² see G.S. Dharwarkar's master thesis, chapter 4 [20] and other published papers [21],[22],[23],[24],[25],[26]
In Kalman filtering, the process is modeled by Eq. (3.9) and the measurement by Eq. (3.10).

xk = A · xk−1 + B · uk−1 + qk−1   (3.9)

zk = H · xk + rk   (3.10)

A, B and H are known parameters. qk is the process noise with p(q) ∼ N(0, Q) and
from [18]. Then, Eq. (3.12) is from [28]. For both equations, the noise variance is the same:
qx = σq².
" #
∆t3 /3 ∆t2 /2
Q= · qx (3.11)
∆t2 /2 ∆t
" #
∆t4 /4 ∆t3 /2
Q= · σq2 (3.12)
∆t3 /2 ∆t2
The measurement parameter is H = [ 1  0 ]. The measurement noise covariance is
R = σz² = σx². The problem with the Kalman Filter is tuning the noise covariance matrix. In
this thesis, qx = σq² = σx² is proposed.
The Kalman Filter algorithm contains two steps: the time update [27] or prediction step
[18],[28], and then the measurement update [27] or update step [18],[28]. In the first step, the
a priori state is predicted (Eq. (3.13)) and the a priori covariance is estimated (Eq. (3.14)).
In the next step, the innovation or measurement residual (Eq. (3.15)) and the innovation
covariance (Eq. (3.16)) are calculated to get the Kalman Gain (Eq. (3.17)). Then the Kalman
Gain is used to update the a posteriori state estimate (Eq. (3.18)) and the a posteriori
covariance estimate.
x̂k⁻ = A · x̂k−1 + B · uk−1   (3.13)

Pk⁻ = A · Pk−1 · Aᵀ + Qk   (3.14)

vk = zk − H · x̂k⁻   (3.15)

Sk = H · Pk⁻ · Hᵀ + Rk   (3.16)

Kalman Gain:

Kk = Pk⁻ · Hᵀ · (Sk)⁻¹   (3.17)

x̂k = x̂k⁻ + Kk · vk   (3.18)
The next step is to get the updated (a posteriori) covariance estimate. There are three possible
ways to get it. The first (Eq. (3.19)) is the simplified formula from [15],[27],[28]. Another
method is Eq. (3.20) from [18]. The last is Eq. (3.21), which is called the Joseph Form [28].
Pk = (I − Kk · H) · Pk⁻   (3.19)

Pk = Pk⁻ − Kk · Sk · Kkᵀ   (3.20)

Pk = (I − Kk · H) · Pk⁻ · (I − Kk · H)ᵀ + Kk · Rk · Kkᵀ   (3.21)
For the time series prediction, Eq. (3.22) is used. In this master thesis, the parameters
are C = [ 1  mΔt ] and D = 0, for m-step-ahead prediction.

y = C · x̂ + D · u   (3.22)
In this thesis, the estimated SNR is measured as zk. If we look at Figure 3.1,
then zk is the output of the minimum energy combination [5], as well as the input of the time
series prediction algorithm. In order to avoid an enhanced value which is less than zk, the
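The prediction/update cycle above, specialized to the two-state constant-velocity model with H = [1 0], Q from Eq. (3.12) and the simplified covariance update of Eq. (3.19), can be sketched as follows (an illustrative sketch, not the thesis' cKalmanIATBCI class; the parameter values are arbitrary):

```cpp
#include <cassert>
#include <cmath>

// Constant-velocity Kalman filter over the estimated SNR.
// State x = [snr, snr_rate], A = [[1, dt], [0, 1]], H = [1, 0],
// Q as in Eq. (3.12), simplified covariance update of Eq. (3.19).
struct KalmanSNR {
    double dt, q, R;                            // sample period, noise variances
    double x[2] = {0.0, 0.0};                   // state estimate [value, rate]
    double P[2][2] = {{1.0, 0.0}, {0.0, 1.0}};  // estimate covariance

    KalmanSNR(double dt_, double q_, double R_) : dt(dt_), q(q_), R(R_) {}

    void step(double z) {                       // one predict + update cycle
        // Prediction: x^- = A x, P^- = A P A^T + Q (Eqs. (3.13), (3.14))
        double x0 = x[0] + dt * x[1];
        double x1 = x[1];
        double P00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1]
                     + q * dt * dt * dt * dt / 4.0;
        double P01 = P[0][1] + dt * P[1][1] + q * dt * dt * dt / 2.0;
        double P10 = P[1][0] + dt * P[1][1] + q * dt * dt * dt / 2.0;
        double P11 = P[1][1] + q * dt * dt;
        // Innovation v, its covariance S, Kalman gain K (Eqs. (3.15)-(3.17))
        double v = z - x0;
        double S = P00 + R;
        double K0 = P00 / S;
        double K1 = P10 / S;
        // State update (Eq. (3.18)) and simplified covariance update (Eq. (3.19))
        x[0] = x0 + K0 * v;
        x[1] = x1 + K1 * v;
        P[0][0] = (1.0 - K0) * P00;
        P[0][1] = (1.0 - K0) * P01;
        P[1][0] = P10 - K1 * P00;
        P[1][1] = P11 - K1 * P01;
    }

    // m-step-ahead output y = C x with C = [1, m*dt], D = 0 (Eq. (3.22)).
    double predictAhead(int m) const { return x[0] + m * dt * x[1]; }
};
```

Feeding it the estimated SNR sequence and taking max(zk, predictAhead(m)) reproduces the "never smaller than zk" rule described above.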
Simulation
4.1 Procedure
The time series prediction algorithms are tested in simulation. The software used is
MATLAB R2006b (Version 7.3) from Mathworks.¹ The data used are from an SSVEP-BCI
experiment using a wearable SSVEP stimulator [30]. The experiment was conducted in
2008. There were 5 LEDs with different flickering frequencies, which were put on the front
part of a visor cap worn by the subject. The subject then looked at only one LED of the five
while the EEG data were collected. Six EEG electrodes were used and the sampling frequency
was 128 Hz. So each EEG data set contains signals with only one dominant frequency.
The signal processing algorithm used for estimating the SNR was minimum energy
combination [5]. In this thesis, for the simulation, there are 5 SNRs which correlate to the
5 flickering frequencies, but only one dominant SNR, which correlates to the frequency of the
LED that the subject focused on. The segment length used is 2 s and one sample is equal
After estimating the SNRs, the prediction algorithms from Chapter 3 are tested. The
algorithms enhance the estimated SNRs, but only the enhancement results which are
larger than the estimated SNRs become the final enhanced SNRs. Otherwise, the enhanced SNR
is the same as the non-enhanced SNR, so the enhanced SNR can never be smaller. The enhanced
Figure 3.1 in Chapter 3 shows the block diagram and the signal flow for the simulation.
Figure 4.1 shows the SNR of the EEG signals for one frequency only. The SNRs of the remaining
4 frequencies are not shown here because only one is necessary for classification using
thresholding, and they are much smaller than the important SNR. It can be seen from the
figure that the enhanced SNR (red) is larger than the estimated SNR (blue).
Figure 4.1: Enhanced SNR (red) using Three-Point Quadratic Model and estimated SNR (blue)
of one frequency
The three-point quadratic method may lead to false decisions in the classification
because it is sensitive when the SNRs are fluctuating. In the figure, small changes of the estimated
4.3 Regression
The regression method is an algorithm that deals with fluctuating signals or SNRs better than
the three-point method. The results can be seen in Figure 4.2 and Figure 4.3. The first figure
shows regression with 5 delay taps and the latter with 8 taps. They have more taps than
From both figures, it can be seen that the enhanced SNRs are larger than the estimated
SNRs. The effect of fluctuating SNRs is reduced with regression. Having more taps helps in
dealing with fluctuation. In this simulation, using 8 taps is better than 5 taps. The
regression method has a low-pass filter effect if many taps are set. The high frequency, in
this case the fluctuation, is filtered out while the algorithm is predicting the future value. But
the more taps are set, the larger the matrix used,² and the higher the computational cost.
² see Section 3.2
Figure 4.2: Enhanced SNR (red) using Regression with 5 delay taps and estimated SNR (blue)
of one frequency
Also, too many taps may lead to false decisions in the classification because some
important SNRs could be treated as noise and filtered out instead of being used for classification.
From this simulation result, 8 taps are considered enough to be the maximum.
Figure 4.3: Enhanced SNR (red) using Regression with 8 delay taps and estimated SNR (blue)
of one frequency
Unlike the other algorithms, which use the values of the SNR for classification (for the threshold),
the logical trend-based method mainly uses the gradient of the SNR. The logical trend in
Section 3.3 only tells whether an SNR is ascending or descending. There are 5 SNRs, so if
more than one SNR has an ascending trend, the SNR values are taken into consideration.
The one with the greatest value is chosen if there are conflicting SNR logical trends. This
means that classification methods which are based only on the gradient or trend cannot be
realized.
Figure 4.4 shows the simulation result if 2 delay taps are used. The decision should be
command 4, but there is fluctuation: the system decides on other commands too.
Figure 4.4: The command made by Logical Trend-Based Decision algorithm, with 2 delay taps
Using more delay taps may reduce the fluctuation. So 5 delay taps are used, and the result
can be seen in Figure 4.5. This result shows that, instead of command 4, the system
mostly chooses command 0. From the figure, command 4 is chosen several times, as well
as other commands, but there is more waiting time before command 4 is decided. It means that
the system mostly does not make a decision and spends time waiting. False classifications can
arise as well.
If 8 delay taps are used then there is no fluctuation. After 2330 samples, the system
decides correctly on command 4, as we can see from Figure 4.6. This is good for reducing noise or
fluctuation, but 2330 samples take more than a minute. The purpose of improving the response
Using mainly the gradient for classification is not a good idea. This algorithm performs badly if
the SNRs are fluctuating: the system decides on either false commands or none. The
simulation results confirmed that. So the logical trend-based algorithm is not implemented
in the experiments.
Figure 4.5: The command made by Logical Trend-Based Decision algorithm, with 5 delay taps
Figure 4.6: The command made by Logical Trend-Based Decision algorithm, with 8 delay taps
Unlike regression, changing the parameters of the Kalman Filter algorithm does not change the
computational cost. The sizes of the matrices remain the same. The parameters that can
be changed are the covariances and the number of steps ahead. In this simulation, the number
of steps is 5.
Three forms of the Kalman Filter are tested: the simplified form, Särkää's and Joseph's. Figure 4.7
shows that the simplified form is a good choice. Särkää's form also performs well, as seen in
Figure 4.8. In both figures, the enhanced SNR is larger than the estimated one, but
small fluctuations do not have a big effect. So these two forms are good candidates to be
Figure 4.7: Enhanced SNR (red) using simplied form of Kalman Filter with 5 steps ahead
prediction and estimated SNR (blue) of one frequency
Figure 4.8: Enhanced SNR (red) using Särkää's form of Kalman Filter with 5 steps ahead
prediction and estimated SNR (blue) of one frequency
Unlike Särkää's and the simplified form, Joseph's form is too sensitive. A small fluctuation results
in a great change of the enhanced SNR. The enhanced SNR may become 15 times larger than the
estimated SNR. This can lead to false classifications. So Joseph's form is not implemented
in the experiments.
Figure 4.9: Enhanced SNR (red) using Joseph's form of Kalman Filter with 5 steps ahead
prediction and estimated SNR (blue) of one frequency
Implementation
5.1 Software
of this thesis. The software is designed in the standard C++ programming language. The
it can be used with any application and any C++ compiler. The BCI2000¹ system is used in
In the implementation part, after the enhanced SNRs are calculated, they are normalized
into probability values. Eq. (5.1) shows how to calculate the normalized SNRs (or
probability values) Pvi, with i as the SNR channel and N as the total number of channels.
Figure 5.1 gives an idea of the signal flow. The normalized SNR is used for classification
using a threshold.
Pvi = SNRi / ( Σ_{k=1}^{N} SNRk )   (5.1)
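Eq. (5.1) amounts to dividing each SNR by the channel sum; a minimal sketch (illustrative names, not the thesis code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Normalize the enhanced SNRs into probability-like values (Eq. (5.1)):
// Pv_i = SNR_i / sum_k SNR_k.
std::vector<double> normalizeSNR(const std::vector<double>& snr) {
    double sum = 0.0;
    for (double s : snr) sum += s;
    std::vector<double> pv(snr.size());
    for (std::size_t i = 0; i < snr.size(); ++i) pv[i] = snr[i] / sum;
    return pv;
}
```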
5.1.1 Classes
There are two C++ classes designed: cRegression3 and cKalmanIATBCI. The first one
implements the three-point method and the regression algorithm, and the latter the Kalman
Filter. Each class is designed so that only one line of code is needed to embed it in the
IAT SSVEP BCI application. Changing the input variables results in different algorithms.
Table 5.1 shows the class cRegression3 arguments and Table 5.2 shows its methods.
Changing the variable iNumberofTap to 2 (or less) results in the three-point method algorithm;
other numbers produce the regression algorithm. Below are examples of how to use the cRegression3
class.
// Three-point method
dNewSeries[i] = SNR.getRegression(dSeries[i],2);
// Regression
dNewSeries[i] = SNR.getRegression(dSeries[i],8);
Table 5.3 shows the class cKalmanIATBCI arguments and Table 5.4 shows its methods.
Due to the results of the simulation in Chapter 4, there are only two forms used in the
If bSimpleOrSarka is set to true then the simplified form is used; otherwise, Särkää's is used.
Below are examples of how to use the cKalmanIATBCI class, with the noise variance qx = 250
and m = 10 steps ahead prediction.³
³ see Section 3.4
Table 5.4: Class cKalmanIATBCI methods (cont.)
Method Type Comment
getUpdatedCovariance(int,int) double accessing dPk
getPredictedCovariance(int,int) double accessing dPkminus
getMeasuredOutput() double accessing dzk
getInnovation() double accessing dvk
getInnovationCovariance() double accessing dSk
getKalmanGain(int) double accessing dKk
getKalmanError() double accessing dKalmanError
ShowQ() void showing dQ (process noise covariance)
Ignatius Sapto Condro Atmawan Bisawarna
5.1.2 BCI2000
BCI2000 is a general-purpose system for brain-computer interface (BCI) research [31],[32].
BCI2000 supports different kinds of signal acquisition devices, a variety of signal processing
methods and many BCI applications. It can connect to other software, such as MATLAB [31].
BCI2000 version 2, which is used in this thesis, is open-source software, but it can be
compiled only with Borland C++ Builder 6.0 and Borland/CodeGear Development Studio
Figure 5.2 shows the BCI2000 operator. There are three command buttons: 'Config',
'Set Config' and 'Start' (or 'Resume'). If the 'Config' button is clicked then we can see the
modules. In the modules, users can change the parameters needed for the experiments. Figure 5.3
shows the storage module after the 'Config' button is clicked. There we can set the folder in which
to save the data. The 'Set Config' button is clicked to apply the configuration. Then the measurement
The important module in this thesis is the filtering module. The time series prediction
algorithm is embedded here. If the 'Config' button is clicked, we can see the filtering
module. Figure 5.4 and Figure 5.5 show this module. The prediction part (3) in this module
can be seen in both figures. We can choose whether we want to use the prediction calculation
or not (4). The provided algorithms (5) are Regression, the simplified form and Särkää's form
is chosen.
in the minimum energy combination algorithm to estimate the SNRs [5]. The segment length (8) is
for classification. The idle period means that the system waits for some time before it
Figure 5.4: BCI2000 Filtering Module, showing segment length and flickering frequencies
Figure 5.5: BCI2000 Filtering Module, showing idle period and threshold
5.1.3 ClassifierConnect
ClassifierConnect is a program to measure some parameters of the classification [33]. This
software produces random commands for SSVEP. A subject can hear the commands through
the speaker of the computer. The commands are 1, 2, 3 and 4, and they correlate to the
4 LEDs. If a subject selects the right command, this program will give another command
ClassifierConnect measures how many tasks are completed, the time the subject needs to do
each task and the errors he makes, as can be seen in Figure 5.6. These parameters are
important for the calculation of speed, accuracy and information transfer rate.
connection uses an RS232 port. HTerm 0.8.1 beta is used to send commands to the controller
and to receive information from it. This software was developed by Tobias Hammer in 2006.⁴
⁴ http://www.der-hammer.info/terminal
5.2 Hardware
general-purpose physiological signal acquisition device [34]. Besides EEG, it can measure
ECG, EGG, EOG and so on. In this thesis, only EEG signals from 8 sites on the scalp are
controlled by a PIC 16F877. Their flickering frequency can be adjusted after connecting the PIC to a
computer via an RS232 port. A terminal program should be used to communicate with
the controller. Figure 5.9 shows the LED array and the controller. In this thesis, the
5.2.3 Computer
Any computer which has a USB port and an RS232 port can be used. In this thesis, an Acer
Extensa 5620 with an Intel Core 2 Duo T5250 processor is used. This processor works at
1.5 GHz, with a 667 MHz FSB and a 2 MB L2 cache. The computer has 1 GB of DDR2 random
access memory. The operating system used for the experiments is Microsoft Windows XP.
⁵ Twente Medical System International, http://www.tmsi.com/
Experiment
6.1 Procedure
6.1.1 Subjects
Eleven volunteers participated as subjects in the 3 experiments.
Table 6.1 shows the subject information. There are 2 females and 9 males. They are
students between 21 and 30 years old. Five subjects who normally use glasses did not
wear them during the experiments, and one wore them inconsistently. The others do not need eye
correction. Only one subject has previous experience with BCI experiments, in particular
SSVEP.
site AFz [3]. The electrodes were connected to the Porti7 channels (2).
[Electrode montage: AFz; Pz; PO3, PO4; O1, Oz, O2; O9, O10]
6.1.3 Protocol
After the subject is ready and all the equipment is set up properly, the subject has to look at
the LED indicated by the voice command from the ClassifierConnect program. If the subject
does so and the system classifies it correctly, another command follows.
There are three experiments in this thesis. Each experiment is done with different
parameters. Table 6.2 shows the different parameters of each experiment. The idle period is
1 adding O1 and O2 from previous experiments [3]
2 see Figure 5.8 on page 31
3 see Chapter 4 and 5 and [5]
Most subjects did not take part in all experiments, as can be seen in Table 6.3. Three
subjects who took part in the second experiment participated again in the third one. Each
experiment took 1.5 to 2 hours, so only a few algorithms and parameters were tested.
Each algorithm was tested at least three times. The result of the second experiment is used
In the third experiment, the sessions without time series prediction were conducted
once more. The results of the second and third experiments were then combined to
calculate the mean and the standard deviation together, as noted in Table 6.2.
This is done because the subject's performance might vary between sessions. This session
Some sessions were not conducted again in the third experiment. Instead, the result
This is done out of time convenience for the subject. A long experiment is
task count. Unfortunately, they cannot be recorded automatically, so the data is manually
The important parameters for SSVEP BCI research, especially in this thesis, are
• speed
• accuracy
subject. In this thesis, a task is called a trial. e is the number of errors made and t is the time.
All three of them are measurement data from ClassifierConnect. Eq. (6.2) is used to calculate the accuracy P.

v = (n + e) / t    (6.1)

P = n / (n + e) × 100%    (6.2)
As an example, Figure 5.6 shows some measured parameters, from which we can calculate the speed
and the accuracy:

n = 10 + 13 + 10 + 10 = 43 tasks
e = 4 + 1 + 1 + 0 = 6 errors

The speed is

v = (43 + 6) / 132.927 s = 0.3686 trials/second = 22.12 trials/minute

and the accuracy is

P = 43 / (43 + 6) × 100% = 87.8%
The information transfer rate (ITR) is a function of speed and accuracy. The ITR is used in
most brain-computer interface research. To get the ITR, the bits per trial B have to be calculated
using Eq. (6.3) [6],[7]. In this equation, N is the number of commands; in this thesis, it is
also equal to the number of LEDs. The bits per trial are a function of the accuracy, which enters
Eq. (6.3) as a fraction rather than a percentage. The speed v is mostly given in tasks or trials
per minute, so it must be multiplied by the bits per trial B, as in Eq. (6.4), to obtain the ITR in
bits per minute.

B = log2 N + P · log2 P + (1 − P) · log2((1 − P) / (N − 1))    (6.3)

ITR = B · v    (6.4)
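Eqs. (6.3) and (6.4) can be sketched the same way. The guards for the endpoints P = 0 and P = 1 are my addition (log2 of zero is undefined), not something stated in the thesis:

```cpp
#include <cmath>

// Bits per trial, Eq. (6.3): B = log2 N + P*log2 P + (1-P)*log2((1-P)/(N-1)),
// where N is the number of commands and P the accuracy as a fraction.
double bits_per_trial(int N, double P) {
    double B = std::log2(static_cast<double>(N));
    if (P > 0.0) B += P * std::log2(P);
    if (P < 1.0) B += (1.0 - P) * std::log2((1.0 - P) / (N - 1));
    return B;
}

// ITR, Eq. (6.4): bits per minute, given v in trials per minute.
double itr_bits_per_minute(int N, double P, double v) {
    return bits_per_trial(N, P) * v;
}
```

For a perfect accuracy of P = 1 with N = 4 commands, B is exactly log2 4 = 2 bits per trial.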
Using the same example from Figure 5.6, we can calculate the ITR.
N = 4 LEDs
The ITR is
6.2 Result
participated in the experiment. One subject did not take part in one part of the experiment.
For this experiment, the sampling frequency of the EEG signal acquisition is 2048 Hz.
The segment length is 2 s and the idle period is half of it. The threshold is set to 70.
The variance for the Kalman Filter is set to 1250 for all experiments. This variance should
be large: larger than the SNR threshold, or larger than the average of all SNRs over
time and across the different frequencies. But analyzing the SNRs would be time consuming
and inconvenient for the subject. The experiment also uses probability values, which are
normalized SNRs. These values depend not only on the SNR of one frequency but also on the
others. The variance of the Kalman Filter is designed for the enhanced SNR of one frequency,
which is not affected by the others. The design is based on the old IAT BCI system. The
variance of 1250 was chosen during the experiment with the first subject. This subject had
SNRs ranging from 600 to 1800 as SSVEP response while the probability value calculation
was turned off, so 1250 was chosen as a rough value in between.
The maximum number of delay taps is 8, so the regression algorithm uses this value. For the
Kalman Filter, 25 steps are chosen. The idea behind this is to make the BCI system around
10 ms faster:

25 / 2048 Hz = 12.2 ms
Figure 6.2: Average speed, from the first experiment: 7 and 8 subjects, idle periods of 1 s
Improvement of Response Times in SSVEP-based Brain-Computer Interface 37
Ignatius Sapto Condro Atmawan Bisawarna Chapter 6. Experiment
Not only Table 6.4 but also Figure 6.2 gives an idea of the speed comparison between the
algorithms. All time series prediction algorithms improve the average speed. The Kalman
Filter increases it the most, especially the simplified form, with 49.0 trials per minute. The
regression improves it the least, with 31.1 trials per minute, but its speed is still higher than the
Figure 6.3: Average accuracy, from the first experiment: 7 and 8 subjects, idle periods of 1 s
Although the speed increases with time series prediction, the accuracy decreases.
Using a prediction algorithm is less accurate than using none at all (46.2%). The Kalman Filter is
the worst, especially Särkkä's form, with 35.4%. The comparison of average accuracies can
Figure 6.4: Average information transfer rate (ITR), from the first experiment: 7 and 8 subjects,
idle periods of 1 s
The ITR, as a function of speed and accuracy, is compared in Figure 6.4 and Table 6.4.
None of the time series prediction algorithms improves the bit rate. No average ITR is larger
than 4.8 bits per minute. The three-point quadratic method is the worst, with 2.3 bits per minute.
Table 6.5: The result summary of the second experiment, with the idle period of 2 s

Algorithm               Parameter  Number of  Speed (trials per minute)  Accuracy (%)             ITR (bits per minute)
                                   Subjects   mean  stdev  range         mean  stdev  range       mean   stdev  range
None                    –          7          14.5  5.1    4.9–24.1      74.6  14.6   39.3–91.4   13.20  6.40   0.43–25.00
Three-point quadratic   –          7          19.1  2.0    14.4–22.6     57.1  17.2   28.9–95.5   8.52   8.75   0.08–34.09

sampling frequency = 2048 Hz, segment length = 2 s, idle period = 2 s, threshold = 70, Kalman Filter variance = 1250
participated in the experiment. The parameters for signal processing and classification are
the same as in the first experiment, except the idle period. In the second one, the idle period
Figure 6.5: Average speed, from the second experiment: 7 subjects, idle periods of 2 s
Figure 6.5 as well as Table 6.5 shows the average speed. In the second experiment, the
speed increases with time series prediction. The Kalman Filter achieves the best speed,
especially Särkkä's form, with 24.5 trials per minute. Without any prediction algorithm,
the average speed is 14.5 trials per minute. With regression, the speed increases slightly
As in the first experiment, time series prediction reduces the accuracy in the second
one, as can be seen from Figure 6.6 and Table 6.5. The Kalman Filter shows the worst
accuracy, especially the simplified form, with 48.6%. The average accuracy is 74.6% if no
prediction algorithm is used. The accuracy decreases slightly to 71.6% with the
regression method.
Figure 6.6: Average accuracy, from the second experiment: 7 subjects, idle periods of 2 s
Figure 6.7: Average information transfer rate (ITR), from the second experiment: 7 subjects,
idle periods of 2 s
Table 6.5 and Figure 6.7 show that regression can improve the ITR. Without time series
prediction, the ITR is 13.2 bits per minute. The regression increases the ITR slightly to 13.45
bits per minute. Time series prediction can improve response times, as the main purpose
From the second experiment, it is found that the idle period should not be less than
the segment length. Table 6.4 and Table 6.5 show that the ITRs in the second experiment
                                25 steps ahead  3  23.8  1.5  21.9–26.5  55.3  24.7  26.8–88.9  10.65  11.97  0.03–32.91
Särkkä's form of Kalman Filter  5 steps ahead   3  21.0  4.7  14.2–28.4  71.9  14.5  53.2–90.2  16.37  11.91  4.88–33.19
                                10 steps ahead  3  22.6  3.9  18.3–28.2  67.9  14.7  50.0–88.0  15.64  12.81  3.79–32.16
                                25 steps ahead  3  25.4  1.4  22.6–27.2  62.0  25.6  29.5–88.2  15.34  13.57  0.20–33.81

sampling frequency = 2048 Hz, segment length = 2 s, idle period = 2 s, threshold = 70, Kalman Filter variance = 1250
and the Kalman Filter prediction steps. Regression using 5 delay taps is tested. With the
The third experiment also uses the same signal processing and classification parameters
as the second one. The results of the previous experiments confirmed that these
parameters are the most suitable. Besides, this way we can compare the results with the second
Table 6.6 summarizes the results of the third experiment. The number of subjects is 3. These
subjects also took part in the second experiment, so the results of the second and the third
Figure 6.8: Average speed, from the third experiment: 3 subjects, idle periods of 2 s
Table 6.6 and Figure 6.8 show that not all algorithms improve the speed.
Without time series prediction, the average speed is 17.3 trials per minute. The regression
with 8 delay taps is slower than that, at 16.1 trials per minute. The Kalman Filter gives the
best speed, especially Särkkä's form, with 25.4 trials per minute.
In the third experiment, the time series prediction algorithms can improve the accuracy, as
we can see from Figure 6.9 and Table 6.6. Without any algorithm, the average accuracy
is 71.8%. Regression with 8 delay taps increases it to 72.3%. The simplified form
of the Kalman Filter prediction, for 5 and 10 steps ahead, does so as well, with 78.5% (the
maximum) and 73.4%. Särkkä's form performs well with 5 steps ahead, reaching
71.9% accuracy. The Kalman Filter with only 5 steps is the best choice for an accurate BCI
Figure 6.9: Average accuracy, from the third experiment: 3 subjects, idle periods of 2 s
Figure 6.10: Average information transfer rate (ITR), from the third experiment: 3 subjects, idle
periods of 2 s
Due to the better accuracy, the time series prediction in the third experiment shows a
better bit rate, as shown in Figure 6.10 and Table 6.6. The ITR for a system without a
prediction algorithm is 14.83 bits per minute. The Kalman Filter increases the ITR, except for
the simplified form with 25 steps ahead. The maximum is 19.24 bits per minute, with the
simplified form and 10 steps. But 5 steps shows an only slightly different result: 19.15
It is optimal to have only 5 steps for the Kalman Filter algorithm. Särkkä's form
also confirms that 5 steps is better than 10 and 25; from the table, its ITR is 16.37.
Regression with 8 taps performs well, with 17.59 bits per minute. Regression, which
uses more points, is better than using only three points. The three-point quadratic method
cannot increase the ITR; the first and second experiments also confirmed that. It is optimal
7.1.1 Simulation
From the simulation results, the following conclusions can be drawn.
• The quadratic model, whether using only three points or more points (with regression), can
enhance the SNR so that it reaches the threshold faster and thus improves the response time.
• Classification cannot be realized based on the gradient (or trend) alone, without the
values. The logical trend-based method still needs the SNR values if there is a conflict in
decision making.
the fluctuation.
• The Kalman Filter can enhance the SNR, but not every form can be used. The simplified
form and Särkkä's form can.
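The three-point quadratic idea in the first conclusion can be illustrated with a minimal sketch. This is the generic parabola extrapolation for equally spaced samples, not the thesis implementation: the parabola through the last three SNR samples y0, y1, y2 evaluated one step further gives 3·y2 − 3·y1 + y0.

```cpp
// Quadratic (parabolic) extrapolation through three equally spaced samples:
// the parabola through (0, y0), (1, y1), (2, y2), evaluated at t = 3, is
// y(3) = 3*y2 - 3*y1 + y0.
double predict_three_point_quadratic(double y0, double y1, double y2) {
    return 3.0 * y2 - 3.0 * y1 + y0;
}
```

On a rising SNR this crosses the threshold one sample earlier than the measurements themselves; on noisy samples it also amplifies the fluctuation, which is why the prediction can overshoot.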
7.1.2 Experiment
The conclusions from the experiments are listed below.
• Time series prediction can help improve the response time, as a result of the second
• Increasing speed does not mean increasing bit rate, since the accuracy also has to be
considered. Choosing the right parameters makes speed and accuracy optimal
• The quadratic model with only three points cannot increase the information transfer rate, as
• Regression helps improve response times. The ITR is best if 8 delay taps are
used.
• The Kalman Filter can improve the response time. Too many steps can lead to false
steps.
• It cannot be concluded which of the simplified and Särkkä's forms of the Kalman Filter
is better.
• The idle period should not be less than the segment length to improve the response time.
Time series prediction algorithms also do not work well if the idle period is shorter.
The performance of a BCI system depends not only on the computer but also on the human.
The experiments in this thesis show that the ranges of speed, accuracy and information
transfer rate are wide. The time series prediction algorithms perform differently for each
subject. A subject who participated in all three experiments could perform differently
in each experiment, although some parameters were the same. The third experiment used
only 3 subjects, and most of the algorithms suited them. The second experiment with
7 subjects showed that only regression (with 8 delay taps) performed well. Another
experiment with more subjects and more variety among them would be useful to find out whether these
The experiments using ClassifierConnect and 4 LEDs are boring and tiring for some
subjects. For future work, the time series algorithms in this thesis could be embedded
into other BCI applications, for example spelling, games and so on. This would make SSVEP
The data from ClassifierConnect is not recorded automatically. In this thesis, it took
a lot of work to get the data. First, a screenshot has to be taken. Then the data has to be
manually copied from the picture into Microsoft Excel so that the data can be analyzed.
Developing ClassifierConnect is not part of this thesis, but it would be helpful for future
In the experiments using the Kalman Filter, a transient analysis could not be done. The
changes of the Kalman Filter parameters over time are not recorded in this thesis. Adding
C++ code to store the parameters would be good for observing its performance. Then we could
analyze how fast the Kalman Filter predicts, or how long it needs until convergence is reached.
With this code addition, the effect of the Kalman Filter variance could also be analyzed.
[2] O. Friman, Brain Signals for Robot Control, in 27th Colloquium of Automation,
(Salzhausen), November 2005.
How many (and what kinds of) people can use an SSVEP BCI?, in Proc.
4th International Brain-computer Interface Workshop and Training Course, (Graz,
[4] I. S. C. Atmawan, Final Preparation of the CeBit Data, Master Project Report,
progress and prospects, Expert Review of Medical Devices, vol. 4, no. 4, pp. 463–474,
2007.
Baycrest.
[11] N. Xu, X. Gao, B. Hong, X. Miao, S. Gao, and F. Yang, BCI Competition
2003 – Data Set IIb: Enhancing P300 Wave Detection Using ICA-Based Subspace
February 1998.
to multiple locations within one hemifield, Neuroscience Letters, vol. 414, pp. 65–70,
April 2007.
[14] Z. Lin, C. Zhang, W. Wu, and X. Gao, Frequency Recognition Based on Canonical
[15] B. Abraham and J. Ledolter, Statistical Methods for Forecasting. John Wiley & Sons,
1983.
[16] R. E. Walpole and R. H. Myers, Probability and Statistics for Engineers and Scientists.
New York: MacMillan Publishing Co., 2 ed., 1978.
[18] S. Särkkä, A. Vehtari, and J. Lampinen, Time Series Prediction by Kalman Smoother
March 1960.
for controlling a robot simulator: an online event labelling paradigm and an extended
Kalman filter based algorithm for online training, Medical and Biological Engineering
and Computing, vol. 47, pp. 257–265, 2009.
parameters and band power estimates, Biomedizinische Technik, vol. 50, pp. 350–354,
November 2005.
Parameters as A New EEG Feature Vector for BCI Applications, in Proc. 13th
European Signal Processing Conference, 4–8 September 2005.
[27] G. Welch and G. Bishop, An Introduction to the Kalman Filter, Technical Report:
[29] M. B. Priestley, Spectral Analysis and Time Series, vol. 2: Multivariate series,
[33] P. Hustedt, Entwicklung eines Programms zur Untersuchung von Parametersätzen für
[34] TMS International, Oldenzaal, User Manual for the portable physiological
measurement system Porti7, November 2007.