Fumiya Iida
Bachelor-Thesis
Contents
Abstract
Symbols
1 Introduction
2 EMG Sensors
2.1 Functionality of Electromyography
2.2 The layout of an EMG sensor
2.3 Adapting the sensors to the Arduino
3 Pattern Recognition
3.1 Pattern Recognition with a Neural Network
3.2 Activity Check Pattern Recognition
4 Sensor Placement
5 Experiments
5.1 Simulation
5.2 Measurements
6 Results
6.1 Simulation
6.2 Measurements
7 The Robot
7.1 Characteristics
7.2 Controller
7.3 Experiments with the Robot
8 Conclusions and Future Work
Bibliography
A Description of the Neural Network
B Code
B.1 Matlab Code for Neural Network
B.2 Arduino Code for Activity Check
C Sketches
C.1 Sketches of the parts of the robot
Abstract
This thesis describes the control of a four-degrees-of-freedom robot arm through
nerve signals. The robot can successfully pick a spoon out of a cup and place it
into another cup. The nerve signals are measured on the forearm of the subject
whenever the subject moves the hand. Three EMG sensors are used to measure the
nerve signals; an activity check algorithm or a neural network is used to process
them. As the measurements show, the activity check is superior to the neural network.
Symbols
i_val   input values
h_val   hidden values
o_val   output values
hw      weights of the connections to the hidden values
ow      weights of the connections to the output values
h_sum   hidden sum: weighted sum of all the input values
o_sum   output sum: weighted sum of all the hidden values
Chapter 1
Introduction
Chapter 2
EMG Sensors
can be seen in figure 2.1. Figure 2.2 shows two EMG sensors, one with and one
without isolation.
Figure 2.2: EMG sensor with electrodes, potentiometer and yellow shrink tube
isolation
Figure 2.3: Schematic of the circuit that adds a 2.5V offset to the signal
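The offset stage can also be undone in software: the Arduino's 10-bit ADC maps 0 to 5 V onto the counts 0 to 1023, so subtracting the 2.5 V offset recovers the bipolar EMG voltage. A minimal Python sketch of this conversion (the helper name is an assumption; the thesis performs the equivalent step in the Arduino C code of appendix B.2):

```python
def adc_to_emg_volts(adc_count):
    """Convert a 10-bit ADC reading (0-1023) of the offset signal
    back to the bipolar EMG voltage around 0 V."""
    volts = adc_count * 5.0 / 1024.0  # ADC count -> voltage in [0, 5) V
    return volts - 2.5                # remove the 2.5 V hardware offset

# A reading of 512 corresponds to the 2.5 V offset, i.e. a silent muscle.
print(adc_to_emg_volts(512))  # 0.0
```

The same scaling, with the offset expressed as the count 512, appears in the Arduino code in appendix B.2.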
Chapter 3
Pattern Recognition
To control a robot with four DOF, eight input patterns are necessary because each
DOF can rotate forward and backward. Two algorithms were used to distinguish
different patterns from the three EMG signals: a neural network and an activity
check.
From this filtered curve, samples are taken at the frequencies 20, 40, 60, 80, 100, 120,
140, 160, 180, 200, 220, 250, 300, 350, 400 and 450 Hz. More samples are chosen
below 250 Hz because for higher frequencies the amplitude of the FFT approaches
zero. The 16 samples of all three EMG signals are combined into a 48 × 1 input
vector for the NN.
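The assembly of the 48 × 1 input vector can be sketched as follows. This is a Python illustration, not the thesis code; the 1 Hz frequency resolution of the FFT and the helper name are assumptions:

```python
# Frequencies (Hz) at which the FFT magnitude is sampled (from the text)
SAMPLE_FREQS = [20, 40, 60, 80, 100, 120, 140, 160, 180, 200,
                220, 250, 300, 350, 400, 450]

def build_input_vector(fft_magnitudes, freq_resolution=1.0):
    """Pick the 16 frequency samples per sensor and stack the three
    sensors into one 48-element input vector for the NN.

    fft_magnitudes: three FFT magnitude lists (one per EMG sensor),
    indexed by frequency bin; freq_resolution is Hz per bin (assumed).
    """
    vector = []
    for spectrum in fft_magnitudes:        # three EMG sensors
        for f in SAMPLE_FREQS:             # 16 frequencies each
            vector.append(spectrum[int(round(f / freq_resolution))])
    return vector                          # 48 x 1 input vector

spectra = [[0.0] * 512 for _ in range(3)]
print(len(build_input_vector(spectra)))    # 48
```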
The NN was designed similarly to [1] using backpropagation learning. It has 48 input
nodes, one layer of 53 hidden nodes and 8 output nodes. The exact number of
hidden nodes is not critical for the NN performance. Past experience has shown
that using 10% more hidden nodes than input nodes works fine. Eight output
nodes were used because eight patterns have to be recognized. To indicate pattern
n, node n is set to 1 while all the other nodes are set to 0. For a detailed description
of the NN structure and the learning algorithm, see appendix A.
The Matlab code that implements the NN learning is shown in appendix B.1. It
trains all the recorded patterns in a loop until the maximum of the sum squared
error is smaller than 0.001 or a time mark is reached. A time mark of 4 minutes
was chosen because the major changes in the NN structure happen in the first 2
minutes of learning; after 4 minutes almost no further changes occur. Figure 3.2
shows the progress of a NN learning to differentiate eight patterns. The output
(green) approaches the desired output (black) with every learning iteration. Pictures
were taken after 10, 20, 30 and 150 iterations.
Figure 3.2: Green: NN Output values after 10, 20, 30, 150 learning iterations;
Black: desired values
To determine the limit value, a constant value of 0.3 V is integrated over the whole
measurement time. Since the noise amplitude coming from an inactive muscle lies
around 0.2 V and the EMG signal of an active muscle normally has an amplitude of
1.5 V, this provides a good limit value.
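The comparison behind the activity check can be sketched in a few lines. This is a Python illustration of the idea, not the thesis implementation (the sampling interval and the helper name are assumptions; the actual check runs on the Arduino, see appendix B.2):

```python
def is_active(samples, dt, limit_level=0.3):
    """Activity check: integrate the rectified EMG signal over the
    measurement window and compare it against the integral of a
    constant limit_level (0.3 V) over the same time."""
    signal_integral = sum(abs(s) for s in samples) * dt
    limit = limit_level * len(samples) * dt  # 0.3 V over the window
    return signal_integral > limit

# Noise around 0.2 V from an inactive muscle stays below the limit ...
print(is_active([0.2, -0.2, 0.2, -0.2], dt=0.001))  # False
# ... while an active muscle with ~1.5 V amplitude exceeds it.
print(is_active([1.5, -1.5, 1.5, -1.5], dt=0.001))  # True
```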
Chapter 4
Sensor Placement
Figure 4.1: Top: Sensor placement on the forearm, Bottom: Sensor placement on
the neck and jaw.
Chapter 5
Experiments
Two different pattern recognition algorithms and two different sensor placements
were used in this thesis. The following experiments compare these four setups. The
results can be seen in the next chapter.
5.1 Simulation
Before measuring the success rate of these four setups, the recognition capabilities of
the NN were simulated. The Activity Check algorithm cannot be simulated:
it is specifically designed for the EMG signals and cannot be tested with randomly
generated patterns. 10, 20, 30 and up to 100 patterns were used to test the NN. Each
pattern was generated as a 48 × 1 vector of random numbers between 0 and 1. This
results in signals with an amplitude of 0.5 and a mean value of 0.5.
To create different samples of the same pattern, the original sample was overlaid
with artificial measurement noise. Noise to signal ratios of 0.6, 1.0 and 1.4
were examined. The noise was generated as a vector of random values between
±noise_amp. For each pattern the NN had to learn, there were three training
samples and seven testing samples. A pattern is considered recognizable if at least
6 out of 7 testing samples were recognized correctly. The training samples were
fed to the NN in a loop, as shown in figure 5.1. First the first sample of the first
pattern is fed in, then the first sample of the second pattern and so on up to the
first sample of the nth pattern. Then the same loop is repeated with the second set
of training samples and then again with the third set.
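The generation of the simulation data can be sketched as follows; a Python illustration under the assumptions stated in the text (signal amplitude 0.5, uniform noise in ±noise_amp, three training and seven testing samples per pattern):

```python
import random

def make_pattern():
    """One random pattern: 48 values uniform in [0, 1]
    (amplitude 0.5, mean value 0.5)."""
    return [random.random() for _ in range(48)]

def add_noise(pattern, noise_amp):
    """Overlay a pattern with uniform noise in [-noise_amp, +noise_amp].
    With a signal amplitude of 0.5, noise_amp = 0.3, 0.5 and 0.7
    correspond to the noise to signal ratios 0.6, 1.0 and 1.4."""
    return [v + random.uniform(-noise_amp, noise_amp) for v in pattern]

random.seed(0)
p = make_pattern()
samples = [add_noise(p, 0.5) for _ in range(10)]  # 3 training + 7 testing
print(len(p), len(samples))  # 48 10
```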
5.2 Measurements
The following measurements were conducted to find out which of the four setups
works best to control the robot. For both sensor placements, eight muscle activity
patterns were designed, each with a unique combination of relaxed and tensed
muscles. A NN with three training samples, a NN with one training sample and
the Activity Check algorithm were used to recognize these eight patterns. The
measurement data is shown in the tables in figure 5.2.
Figure 5.2: Pattern recognition measurements from the forearm or the neck
In an additional test series, an attempt was made to identify as many signals as possible
with the NN. As the tables in figure 5.3 show, the NN performed very poorly in distin-
guishing these patterns, even though in the simulation it could easily recognize 20
patterns. This difference in performance is due to the totally different structure of
the patterns. The patterns for the simulation were generated randomly; random
patterns do not have any correlation, they are all unique. The patterns measured
from the different movements of the hand (or neck) are often similar to each other.
Movements that require actions of the same muscle groups produce almost identical
patterns. For these patterns the fluctuations between two samples of the same pat-
tern are bigger than the differences between the patterns themselves. Using almost
identical patterns with different desired output values confuses the NN. Sometimes
all of these patterns lead to one desired output, which explains why a few patterns
could still be recognized. Mostly the NN will not assign the unclear pattern to
either one of the desired outputs, so the pattern will never be recognized.
Figure 5.3: Pattern recognition measurements from the forearm or the neck
Chapter 6
Results
6.1 Simulation
The NN can differentiate at most 50 patterns with the smallest noise to signal
ratio. Increasing the measurement noise decreases the number of patterns the NN
can learn to recognize as well as the chance of a sample being recognized correctly.
If the NN is only trained with few patterns, the relative performance is high but
the absolute performance is low. For every noise_amp there is a maximal absolute
performance, which stays constant over an interval of 20 to 40 patterns. Once
it is reached, a further increase of the number of patterns does not change the
absolute performance of the NN, yet the relative performance already decreases. If
the number of patterns is increased beyond this interval, both the relative and the
absolute performance decrease.
For a noise to signal ratio of 1.4, the NN can rarely recognize a pattern 6 or 7
times out of 7 tests. However, up to 50 test patterns, most patterns are recognized
3, 4 or 5 times, which indicates that the NN can still learn different patterns, but
the fluctuations from one testing sample to the other are too big to allow a reliable
classification. A better performance is expected if the noise to signal ratio of the
testing samples were decreased to 1.0 or even 0.6. Figures 6.1 and 6.2 show the
results of the simulation.
6.2 Measurements
As the measurements show, the Activity Check algorithm performed best in recog-
nizing eight muscle activity patterns. If this algorithm fails to recognize a pattern
reliably, it is because the subject cannot control the muscle tension well enough when
performing the movement. The more the subject practices controlling the muscle ten-
sion, the better these eight patterns can be distinguished. This could be observed
during the measurements as well as during the experiments with the robot.
Against expectations, using one training sample for the NN worked better than
using three. The explanation for this observation is that the fluctuations between
two samples of a pattern can still have a similar order of magnitude as the differences
between two patterns. Therefore it is better to use only one typical learning sample
per pattern. The fluctuations will prevent some samples from being recognized
correctly, but at least the learning samples are clearly distinguishable.
Whenever the NN could not recognize a pattern at all, it did not learn to do so
during the learning session. If it could recognize a pattern only a few times, it
managed to learn during the learning session, but the fluctuations of the samples of
this pattern were bigger than the difference to another pattern. Therefore some-
times the one and sometimes the other pattern is recognized, resulting in a poor
identification performance.
Chapter 7
The Robot
7.1 Characteristics
The following table shows the characteristics of the robot. Figure 7.2 contains
a photograph of the robot with all four DOF sketched in. Sketches of the parts can
be seen in appendix C.
Figure 7.2: Photograph of the Robot with the four DOF sketched in
7.2 Controller
The control of the robot is feedforward only; the visual feedback of the user ensures
position control with zero steady-state error. The Activity Check algorithm is
implemented on the Arduino microcontroller. The code for the Arduino is written in
C and can be seen in appendix B.2.
The code uses the 'switch' statement to determine which pattern is recognized at the
moment. To use this statement, the EMG patterns have to be converted to integers.
The three sensors produce values 0 or 1. These can be interpreted as binary numbers
from 000 to 111, which are then converted to decimal numbers from 0 to 7. A servo
action is assigned to every one of these eight cases. The table in figure 7.3 shows
the correlation between the user's action, the EMG pattern and the robot's action.
The pattern where all muscles are inactive has to be assigned to not moving any
servo, else the robot can never stand still. This leaves only seven patterns to control
eight servo movements. This problem is solved by designing the gripper as an
open/close switch: the same signal opens the gripper if it is closed and closes the
gripper if it is open. The remaining six patterns are used to control the other
three DOF. Since they have to be controllable over their full range of motion, each
of them needs one pattern to turn to the left and one to turn to the right.
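The pattern encoding and the gripper toggle can be sketched in a few lines. This is a Python illustration (the class and function names are hypothetical); the actual controller is the Arduino C code in appendix B.2:

```python
def emg_to_pattern(s0, s1, s2):
    """Interpret the three thresholded sensor values (0 or 1) as a
    binary number 000-111 and convert it to a decimal pattern 0-7."""
    return 4 * s0 + 2 * s1 + s2

class Gripper:
    """The gripper as an open/close switch: the same pattern toggles it."""
    def __init__(self):
        self.closed = False
    def toggle(self):
        self.closed = not self.closed

print(emg_to_pattern(0, 0, 0))  # 0: all muscles relaxed -> no servo moves
print(emg_to_pattern(1, 1, 1))  # 7

g = Gripper()
g.toggle()
print(g.closed)  # True: the first signal closes the gripper
g.toggle()
print(g.closed)  # False: the same signal opens it again
```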
Figure 7.3: Correlation between the user's action, the EMG pattern and the robot's
action
In figure 7.5, the control signal and the corresponding changes of the servo angles
are shown.
Further experiments were conducted with different muscle groups of the same subject
and with different subjects. The setup with the neck and jaw was tested, as well
as two other setups using the chest (Musculus pectoralis major) and the waist
(Musculus rectus abdominis) or the legs (Musculus quadriceps femoris) and the
calves (Musculus gastrocnemius). In all these tests, including the ones with different
subjects, the subject could make the robot move, but was not capable of completing
a task. From this it is concluded that all the tested muscles generate the same kind
of signals. However, the amplitude of the signal differs between subjects and between
muscles. To allow a precise control of the robot, the limit values for the Activity
Check would have to be adapted to the specific setup. The subject would also have
to train with each setup to improve coordination; otherwise the subject will not be
able to control the robot.
The noise has a big influence on the NN performance. Increasing its amplitude
decreases the number of patterns the NN can learn to recognize as well as the
chance of a sample being recognized correctly. For a noise to signal ratio of 1.4, the
NN could hardly recognize any patterns 6 times out of 7. However, it could recognize
many patterns 3, 4 or 5 times. This indicates that the NN could still learn the
patterns during the learning session, but the noise was too big to allow a reliable
classification during the tests. Future experiments could analyze whether decreasing
the noise to signal ratio of the testing samples to 1.0 or even 0.6 would improve the
NN performance.
The Activity Check could best recognize eight patterns that have unique combi-
nations of tensed and relaxed muscles. A NN with one or three training samples
was used as well to recognize these patterns. The training with one sample was
more successful, because the fluctuations of the samples can still have the same
order of magnitude as the differences between the patterns.
The attempt to measure as many muscle activity patterns as possible for a spe-
cific sensor placement was not successful. The NN could not distinguish most of
the patterns because two or three were always too similar. To recognize more pat-
terns, they would have to be made more distinct. Future tests could use more
sensors to get individual signals from different muscles of a muscle group. Another
approach could include more than one limit value for the Activity Check to
distinguish between different levels of activity.
While conducting the experiments it was observed that the placement of the elec-
trodes has a huge influence on the quality of the measured signal. Placing the sensor
two centimeters away from its intended position can lead to totally different signal
amplitudes or even to a loss of the signal.
Chapter 8
Conclusions and Future Work
The robot could successfully be controlled using the Activity Check algorithm and
the sensor placement on the forearm. Further experiments were conducted with
different sensor placements and with different subjects. In all these tests the subject
could make the robot move but never succeeded in controlling the robot well enough
to perform a simple task. The limit values of the Activity Check would have to be
adapted and the subject would have to practice with the specific setup.
It was observed that controlling the robot is a learning process. The longer the
subject tried to perform the predefined task, the better the subject learned to generate
the input signals to smoothly control the robot. This learning process improves the
subject's coordination of the observed muscles as well as the control over the tension
in each muscle while performing the movements. It is concluded that every subject
and every sensor placement can be used to control the robot if the subject
practices long enough to learn these abilities for the specific setup.
Bibliography
Background research
[1] Author Not Found: Chapter 3 Supervised learning: Multilayer Networks I.
http://www.cs.umbc.edu/~ypeng/F04NN/lecture-notes/NN-Ch3.ppt
Figures
[2] Figure of the head muscles:
http://www.edoctoronline.com/medical-atlas.asp?c=4&id=21651&m=1&p=10&cid=1051&s=
http://commons.wikimedia.org/wiki/File:Forearm_muscles_front_deep.png?uselang=de
http://commons.wikimedia.org/wiki/File:Forearm_muscles_back_deep.png?uselang=de
Appendix A
Description of the Neural Network
A NN consists of many nodes and connections between these nodes. The nodes
are organized in layers. A schematic of the NN used in this thesis can be seen in
figure A.1. It is designed similarly to the NN described in [1].
The first layer is the input layer, followed by the hidden layers. The last layer is the
output layer. Generally a NN can have many hidden layers, but this NN only has
one. Every node of the NN has a connection to every node one layer before and one
layer after itself. Every connection has a weight. The weights of the connections to
the hidden layer are called 'hidden weights', hw, and the weights of the connections
to the output nodes 'output weights', ow. The first index of a weight describes the
node in the target layer, the second one the node in the origin layer. The value of a
node is a function of the weighted sum of all the values of the nodes one layer before
the observed node. Each value is weighted with the weight of its connection to the
examined node. The weighted sum of the input values is called h_sum because the
hidden values, h_val, are a function of h_sum. Accordingly, the weighted sum of the
hidden values is called o_sum and the output values, o_val, are a function of o_sum.
The input values are called i_val. Equation A.1 shows how the weighted sums are
calculated.

$$h\_sum_i = \sum_{j=0}^{n} hw_{ij} \cdot i\_val_j \qquad o\_sum_i = \sum_{j=0}^{n} ow_{ij} \cdot h\_val_j \tag{A.1}$$
The sigmoid function is used to calculate the value of a node from its weighted sum.
Equation A.2 shows the sigmoid function as a function of x.

$$f(x) = \frac{1}{1 + e^{-x}} \tag{A.2}$$
The sigmoid function is shown in figure A.2. It has a slope of 0.25 at the point (0, 0.5)
and asymptotically approaches 0 as the argument goes to −∞. As the argument goes
to ∞, the function approaches 1. The sigmoid function is a common choice for a NN
node function. It constrains the values of the nodes to a finite interval. At the same
time it is still sensitive to changes of the function variable around x = 0, allowing
changes of the input values to introduce changes of the output values.
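The forward pass defined by equations A.1 and A.2 can be sketched as follows; a minimal Python illustration, not the thesis code (which is the MATLAB listing in appendix B.1):

```python
import math

def sigmoid(x):
    """Node function f(x) = 1 / (1 + e^-x), equation A.2."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(values, weights):
    """Weighted sums of equation A.1 passed through the sigmoid;
    weights[i][j] connects node j of the previous layer to node i."""
    return [sigmoid(sum(w * v for w, v in zip(row, values)))
            for row in weights]

def forward(i_val, hw, ow):
    h_val = layer(i_val, hw)   # input layer -> hidden layer
    return layer(h_val, ow)    # hidden layer -> output layer

print(sigmoid(0))  # 0.5, the point (0, 0.5) from figure A.2
o_val = forward([1.0, 0.0], [[0.5, 0.5], [0.1, 0.2]], [[1.0, -1.0]])
print(all(0.0 < v < 1.0 for v in o_val))  # True: outputs stay in (0, 1)
```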
A NN creates an output vector for each input vector. The weights of the connec-
tions store the information of how these two vectors correspond. A NN can learn to
recognize patterns at its input and indicate them through its output. There are
various algorithms for NN learning; in this thesis backpropagation learning was
used. Backpropagation is a supervised learning algorithm: the system knows the
desired output values and compares them to the actual output values. The
sum of the squares of all these output errors is an indicator of how closely the NN
output resembles the desired output. After every iteration, each weight of the NN
is adapted proportionally to the partial derivative of the sum squared error with
respect to the weight itself, see equation A.3. This teaches the NN to match the
current input to the current desired output.
$$hw_{ij,new} = hw_{ij} - \lambda \cdot \frac{\partial E}{\partial hw_{ij}} \tag{A.3}$$
The parameter λ is the learning step size. Experiments have shown that setting
λ to 0.5 works fine with the used NN. If the learning step size is too big, the NN
learning overshoots and can become unstable. Setting λ too small leads to a slow
learning rate; if λ → 0, the NN does not adapt at all. If a NN has
to recognize several patterns, it is important to train all of them in a loop. If only
one pattern at a time is trained, the NN forgets the ones it learned before. This
happens because the NN adapts its weights after every learning iteration. All the
weights are adapted to match the current input vector to the current desired output
vector. This decreases the ability of the NN to match the previous input vector to
its corresponding desired output. If this process is repeated too often, the NN can
no longer match the previous patterns to their corresponding outputs at all.
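The weight update of equation A.3 can be sketched for the output weights; a Python illustration under the assumptions of a sigmoid output layer and λ = 0.5 (the helper name is hypothetical; the thesis implementation is the MATLAB code in appendix B.1):

```python
def update_output_weights(ow, h_val, o_val, o_val_true, lam=0.5):
    """One backpropagation step for the output weights, equation A.3:
    each weight moves against the gradient of the squared error.
    For a sigmoid output, -dE/dow_ij = 2 * err_i * o_i*(1-o_i) * h_j."""
    new_ow = []
    for i, row in enumerate(ow):
        err = o_val_true[i] - o_val[i]
        deriv = o_val[i] * (1.0 - o_val[i])  # sigmoid derivative
        new_ow.append([w + 2.0 * lam * err * deriv * h_val[j]
                       for j, w in enumerate(row)])
    return new_ow

ow = [[0.1, -0.2]]
h_val = [0.6, 0.4]
o_val = [0.4]  # current output; the desired output is 1.0
new_ow = update_output_weights(ow, h_val, o_val, [1.0])
print(new_ow[0][0] > ow[0][0])  # True: the error pushes the weight up
```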
Appendix B
Code
% NN_2_learning_EMG_signals
% Assumes the workspace already contains: I_VAL (recorded patterns),
% n, m, p (hidden, input and output layer sizes), lambda (learning
% step size) and sensor_placement (string for the output file name).
number_of_patterns=size(I_VAL,2);
I_VAL_save=I_VAL;
% I_VAL=I_VAL(:,1:number_of_patterns/3);
% % only train with one sample per pattern
% number_of_patterns=size(I_VAL,2);
% I_VAL(:,1)=I_VAL_save(:,1+number_of_patterns)
sum_sq_error=ones(number_of_patterns,1);
% I_VAL((1:p),1)=[1 0 0 0 0 0 0 0]'
% weights to hidden layer from input, n rows, m columns, start range -1,1
hw=2*rand(n,m)-ones(n,m);
% weights to output from hidden layer, p rows, n columns, start range -1,1
ow=2*rand(p,n)-ones(p,n);
tic
while max(sum_sq_error)>.001
    for j=1:3
        for i=1:number_of_patterns
            i_val=I_VAL((p+1:end),i);
            o_val_true=I_VAL((1:p),i);
            h_sum=hw*i_val; % weighted sum of all input values
            h_val=1./(1+exp(-h_sum)); % sigmoid function
            o_sum=ow*h_val;
            o_val=1./(1+exp(-o_sum));
            % modify weights ow
            derivative_o_val=o_val.*(1-o_val);
            o_error=(o_val_true-o_val); % p x 1 vector
            delta_ow=2*lambda*( o_error.*derivative_o_val )*h_val';
            ow_new=ow+delta_ow;
            % modify weights hw
            derivative_h_val=h_val.*(1-h_val);
            delta_hw=2*lambda*( ow'*( o_error.*derivative_o_val ) .* ...
                derivative_h_val*i_val' );
            hw_new=hw+delta_hw;
            sum_sq_error(i)=o_error'*o_error;
            ow=ow_new;
            hw=hw_new;
            figure(1)
            hold on
            plot(o_val,'g')
            plot(o_val_true,'black')
        end % for
    end % for
    max_sum_sq_error=max(sum_sq_error)
    % pause(.5)
    if toc > 60*4
        break % stop after 4 minutes of learning
    end
    if max_sum_sq_error>.001
        clf(1,'reset') % clear figure 1
    end
end % while
I_VAL=I_VAL_save;
name=['EMG_recording_' sensor_placement '.mat'];
save(name,'I_VAL','hw','ow')
% save 'EMG_learning_neck_NN.mat' hw ow;
'theEEEEEnd'
#include <Servo.h>
Servo shoulder_rot; // 90 = middle
Servo shoulder_flex; // 0 = vertical up
Servo ellbow; // 0 = straight outwards
Servo grip; // 180 = closed, 90 = open
// The extracted listing does not show these declarations; they are
// added here with assumed start values so that the code compiles.
int pos_rot=90, pos_flex=90, pos_ellbow=0, pos_grip=90;
int angle=1; // servo step per recognized pattern
int i, j, pattern;
int samples=200;
float limit=round((.3)*samples);
float EMG[3];

void setup(){
  Serial.begin(9600);
  pinMode(13, OUTPUT);
  shoulder_rot.attach(3);
  shoulder_flex.attach(9);
  ellbow.attach(10);
  grip.attach(11);
  shoulder_rot.write(pos_rot);
  shoulder_flex.write(pos_flex);
  ellbow.write(pos_ellbow);
  grip.write(pos_grip);
  for (j=0;j<4;j++){ // blink the LED to signal start-up
    digitalWrite(13, HIGH);
    delay(500);
    digitalWrite(13, LOW);
    delay(500);
  }
}

void loop(){
  EMG[0]=0; EMG[1]=0; EMG[2]=0; // reset the integrators (not visible in the extracted listing)
  digitalWrite(13, HIGH);
  for (i=0; i<samples; i++){ // integrate the rectified signals
    EMG[0]=EMG[0]+abs(analogRead(0)-512)*5.0/1024.0;
    EMG[1]=EMG[1]+abs(analogRead(1)-512)*5.0/1024.0;
    EMG[2]=EMG[2]+abs(analogRead(2)-512)*5.0/1024.0*1.2;
  }
  digitalWrite(13, LOW);
  // threshold each channel: 1 = active muscle, 0 = inactive
  if (EMG[0]>limit) {EMG[0]=1;} else {EMG[0]=0;}
  if (EMG[1]>limit) {EMG[1]=1;} else {EMG[1]=0;}
  if (EMG[2]>limit) {EMG[2]=1;} else {EMG[2]=0;}
  pattern=4*EMG[0]+2*EMG[1]+EMG[2]; // convert EMG to pattern: [0-7]
  switch (pattern){
    case 4:
      if (pos_rot>=180){break;} else {pos_rot=pos_rot+angle; break;}
    case 2:
      if (pos_rot<=1){break;} else {pos_rot=pos_rot-angle; break;}
    case 1:
      // the cases for the remaining servos are missing from the
      // extracted listing
      break;
  }
  shoulder_rot.write(pos_rot);
  shoulder_flex.write(pos_flex);
  ellbow.write(pos_ellbow);
  grip.write(pos_grip);
  // Serial.println(pattern);
  // Serial.println(pos_rot);
  // Serial.println(pos_flex);
  // Serial.println(pos_ellbow);
  // Serial.println(pos_grip);
  // Serial.println(millis()); // time between two loops: 180 ms
  // Serial.println( );
}
Appendix C
Sketches