
Bachelor-Thesis

Controlling a crane arm with EMG sensors

Spring Term 2011

Author:
Amos Zweig

Supervised by:
Prof. Dr. Fumiya Iida
Dr. Alejandro Arieta
Keith Gunura
Contents

Abstract ii

Symbols iii

1 Introduction 1

2 EMG Sensors 2
2.1 Functionality of Electromyography . . . . . . . . . . . . . . . . . . . 2
2.2 The layout of an EMG sensor . . . . . . . . . . . . . . . . . . . . . . 2
2.3 Adapting the sensors to the Arduino . . . . . . . . . . . . . . . . . . 3

3 Pattern Recognition 5
3.1 Pattern Recognition with a Neural Network . . . . . . . . . . . . . . 5
3.2 Activity Check Pattern Recognition . . . . . . . . . . . . . . . . . . 6

4 Sensor Placement 7

5 Experiments 8
5.1 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
5.2 Measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

6 Results 11
6.1 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
6.2 Measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

7 The Robot 14
7.1 Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
7.2 Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
7.3 Experiments with the Robot . . . . . . . . . . . . . . . . . . . . . . 17

8 Conclusions and Future Work 19

Bibliography 21

A Description of the Neural Network 22

B Code 25
B.1 Matlab Code for Neural Network . . . . . . . . . . . . . . . . . . . . 25
B.2 Arduino Code for Activity Check . . . . . . . . . . . . . . . . . . . . 26

C Sketches 29
C.1 Sketches of the parts of the robot . . . . . . . . . . . . . . . . . . . . 29

Abstract

This thesis describes the control of a robot arm with four degrees of freedom through
nerve signals. The robot can successfully pick a spoon out of a cup and place it
into another cup. The nerve signals are measured on the forearm of the subject
whenever the hand is moved. Three EMG sensors are used to measure the nerve
signals; either an activity check algorithm or a neural network is used to process them.
As the measurements show, the activity check is superior to the neural network.

Symbols

Symbols
i_val    input values
h_val    hidden values
o_val    output values
hw       weights of the connections to the hidden values
ow       weights of the connections to the output values
h_sum    hidden sum = weighted sum of all the input values
o_sum    output sum = weighted sum of all the hidden values

Acronyms and Abbreviations


DOF Degrees of freedom
FFT Fast Fourier transform
EMG Electromyography
NN Neural Network

Chapter 1

Introduction

Controlling a robot through nerve signals is a fascinating idea. It recalls the
dream of leaving one's own body and being inside a different one. It would allow a person to
influence a faraway environment through the body of a robot. Of course, with a
computer this is already possible today, but not with the feeling of actually being
inside the robot.
Now imagine using your nerve signals to control a robot attached to your body.
The often mentioned wish for a third arm could all of a sudden become reality. Or
the wish to have a second arm again... If it were possible to attach a robot hand to
the forearm of a hand amputee and let him control it like his own hand, the robot
could replace his lost hand, thus greatly improving his quality of life.
To improve the understanding of nerve signals and how they can be used to control
a robot, the goal of this thesis was to build a robot gripper that can be controlled
through nerve signals. Picking up the idea of a hand prosthesis, it was decided
to use nerve signals measured on the forearm of the subject.

Chapter 2

EMG Sensors

2.1 Functionality of Electromyography


Electromyography, EMG, is a method of recording nerve signals that are sent from
the brain to the muscles of our body. Every time a muscle is contracted, the EMG
sensor measures a nerve signal. There are two types of EMG sensors, surface EMG
sensors and implantable ones. For the sake of simplicity only surface EMG sensors
were used in this thesis. An EMG sensor consists of two electrodes, which are placed
on a muscle, in orientation of its fibers. The sensor measures the voltage between
these two electrodes, which is caused by nerve impulses. Even though a voltage
difference is measured, nerve impulses are not transmitted electrically, as is commonly
believed. They are caused by the diffusion of Na+, K+
and Cl− ions. In the relaxed state, the interior of a nerve tract, also known as an axon,
holds mostly negative ions while the positive ions are outside of the membrane. The
potential inside a nerve cell in its relaxed state is −70mV . When a signal arrives
through the axon, the local potential inside the cell starts to rise. As soon as it
reaches −55mV , all positive ions from outside are forced inside the cell through ion
carriers while the negative ions are forced out of the cell. The local potential inside
the cell reaches a peak of +30mV . After that the cell returns to its relaxed state.
The signal is passed on through the axon because some of the positive ions inside
the cell diffuse along the axon, causing the potential next to the location that just
reacted to rise to −55mV . There the process repeats itself.
At the very moment an EMG sensor measures a voltage difference, a signal is trav-
eling along a nerve cell beneath the two electrodes. The force of contraction of a
muscle is proportional to the number of muscle cells that are contracted simultane-
ously. Every muscle cell has its own nerve cell that controls it. If more nerve cells
are sending signals simultaneously, the muscle contracts harder. The sensor sums up
all the signals passing through all the nerve cells beneath its electrodes. Therefore
the amplitude of the measured signal is proportional to the force of contraction of
the muscle.

2.2 The layout of an EMG sensor


Surface EMG sensors compare the voltage between two electrodes on the skin of
the subject. An extra electrode is used as a ground reference for all the measured
signals. The amplitude of the measured voltage difference is around 70mV . This
signal is first amplified with a differential amplifier, then filtered and then amplified
again. A low pass filter avoids aliasing and a high pass filter helps against low
frequency distortions like the heart beat. An example of the resulting EMG signal
can be seen in figure 2.1. Figure 2.2 shows two EMG sensors, one with and one
without isolation.

Figure 2.1: Typical EMG signal
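
As a rough digital counterpart to this analog signal conditioning, the band-pass behaviour can be sketched in Matlab. This is only an illustrative sketch: the sampling rate of 1000Hz and the cutoff frequencies of 20Hz and 450Hz are assumptions, not values taken from the sensor hardware.

% Sketch of digitally band-pass filtering a raw EMG recording (assumed values)
fs      = 1000;                 % assumed sampling rate in Hz
f_high  = 20;                   % assumed high-pass cutoff against heart beat and drift
f_low   = 450;                  % assumed low-pass cutoff against aliasing
raw_emg = randn(fs,1);          % placeholder for one second of raw sensor data

[b,a] = butter(4, [f_high f_low]/(fs/2), 'bandpass');  % 4th order Butterworth band-pass
emg   = filtfilt(b, a, raw_emg);                       % zero-phase filtering

plot((0:fs-1)/fs, emg); xlabel('time [s]'); ylabel('EMG [V]');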

2.3 Adapting the sensors to the Arduino


The EMG sensors had to be adapted to work with the Arduino microcontroller.
The Arduino can only read input values between 0V and 5V, whereas the sensors
produce output values between −5V and +5V. The level of amplification of the signal can
be adjusted over a resistor on the circuit board of the sensor. The resistor was
replaced with a potentiometer ranging from 0 to 5kΩ. By changing the resistance
of the potentiometer the amplitude of the signal was set to 2.5V . Figure 2.2 shows
the EMG sensor with the potentiometer.
In order to change the signal range from [−2.5V, 2.5V ] to [0V, 5V ], an offset voltage
of 2.5V was added to each signal. For this task, the circuit in figure 2.3 was
soldered. The Op-Amps A and B stabilize the 2.5V reference signal and the input
signal. Op-Amp C adds them together but also inverts the sign. D inverts the sign
again from negative to positive.
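
On the Arduino side, the shifted signal is later read back as a 10-bit value between 0 and 1023. The following lines sketch, in Matlab for consistency with the other listings, how such a reading maps back to the original bipolar signal; the variable names are chosen here for illustration only.

% Sketch: map a 10-bit ADC reading of the offset signal back to the bipolar EMG voltage
adc_reading = 700;                       % example value as returned by analogRead (0..1023)
v_pin       = adc_reading * 5.0 / 1024;  % voltage at the Arduino pin, 0V..5V
v_emg       = v_pin - 2.5;               % remove the 2.5V offset -> -2.5V..+2.5V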

Figure 2.2: EMG sensor with electrodes, potentiometer and yellow shrink tube
isolation

Figure 2.3: Schematic of the circuit that adds a 2.5V offset to the signal
Chapter 3

Pattern Recognition

To control a robot with four DOF, eight input patterns are necessary, because each
DOF can rotate forward and backward. Two algorithms were used to distinguish
different patterns from the three EMG signals: A neural network and an activity
check.

3.1 Pattern Recognition with a Neural Network


At first glance, an EMG signal looks like white noise. Except for differences in
amplitude, nothing can be recognized in the time domain. However, in the frequency domain
a typical pattern can be observed. In figure 3.1 the left curve shows a typical EMG
signal and the right curve shows a plot of its FFT. Because the FFT of the EMG
signal has large fluctuations, it is filtered with a moving average filter over the last
20 samples. The resulting curve can be seen in figure 3.1 in red.

Figure 3.1: left: EMG curve, right: FFT of the EMG.

From this filtered curve samples are taken at the frequencies 20, 40, 60, 80, 100, 120,
140, 160, 180, 200, 220, 250, 300, 350, 400 and 450Hz. More samples are chosen
below 250Hz because for higher frequencies the amplitude of the FFT approaches
zero. The 16 samples of all three EMG signals are combined into a 48 × 1 input
vector for the NN.
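
A minimal Matlab sketch of this feature extraction for one EMG channel is given below. The sampling rate and the record length are assumptions made for the sketch; the thesis code in appendix B.1 starts from already assembled input vectors.

% Sketch: build the 16 spectral features of one EMG channel (assumed fs and length)
fs   = 1000;                            % assumed sampling rate in Hz
emg  = randn(fs,1);                     % placeholder for one second of EMG data

spec = abs(fft(emg));                   % amplitude spectrum
spec = spec(1:floor(end/2));            % keep positive frequencies, 1Hz resolution here
spec = filter(ones(20,1)/20, 1, spec);  % moving average over the last 20 samples

freqs    = [20:20:220 250 300:50:450];  % the 16 sampling frequencies in Hz
features = spec(freqs + 1);             % +1 because index 1 corresponds to 0Hz

% Concatenating the features of all three sensors gives the 48 x 1 NN input vector.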
The NN was designed similar to [1] using backpropagation learning. It has 48 input
nodes, one layer of 53 hidden nodes and 8 output nodes. The exact number of
hidden nodes is not important for the NN performance. Past experience has shown
that using 10% more hidden nodes than input nodes works fine. Eight output
nodes were used, because eight patterns have to be recognized. To indicate pattern
n, node n is set to 1 while all the other nodes are set to 0. For a detailed description
of the NN structure and the learning algorithm, see appendix A.
The Matlab code that implements the NN learning is shown in appendix B.1. It
trains all the recorded patterns in a loop, until the maximum of the sum squared
error is smaller than 0.001 or a time mark is reached. A time mark of 4 minutes
was chosen because the major changes in the NN structure happen in the first 2
minutes of learning. After 4 minutes almost no further changes occur. Figure 3.2
shows the process of a NN learning to differentiate eight patterns. The output
(green) approaches the desired output (black) with every learning iteration. Pictures
were taken after 10, 20, 30 and 150 iterations.

Figure 3.2: Green: NN Output values after 10, 20, 30, 150 learning iterations;
Black: desired values

3.2 Activity Check Pattern Recognition


A simpler way to recognize different EMG patterns is to check, for each muscle indi-
vidually, whether it is active or inactive. If the average EMG amplitude is bigger than a
limit value, the observed muscle is considered active. In that case, the corresponding sensor
variable is set to 1. In case of inactivity it is set to 0. Using n sensors, this algorithm
can recognize 2^n patterns. Since the average amplitude is equal to the integral
over the whole signal divided by the length of the signal, the integral can be
compared to a limit value just as well. The integral is computed as the sum
of the absolute values of the signal, as in equation 3.1.
EMG_i = \sum_{j=1}^{n} |value_{ij}|        (3.1)

To determine the limit value, a constant value of 0.3V is integrated over the whole
measurement time. Since the noise amplitude coming from an inactive muscle lies
around 0.2V and the EMG signal of an active muscle normally has an amplitude of
1.5V , this provides a good limit value.
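
A minimal Matlab sketch of this activity check, assuming three already recorded signal vectors, could look as follows; it mirrors the Arduino implementation in appendix B.2.

% Sketch: activity check on three EMG channels (emg is an N x 3 matrix of voltages)
N     = 200;                         % number of samples per decision, as on the Arduino
emg   = 0.2*randn(N,3);              % placeholder for the three measured channels
limit = 0.3 * N;                     % integrated threshold: 0.3V times the sample count

activity = sum(abs(emg), 1) > limit; % 1 = active muscle, 0 = relaxed muscle
pattern  = activity * [4; 2; 1];     % interpret the three bits as an integer 0..7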
Chapter 4

Sensor Placement

In the introduction, the placement of the sensors on the forearm is mentioned. A
second sensor placement was tested as well, placing the sensors on the neck
and the jaw of the subject. In the main test series, the sensors were placed on the
forearm, the first one on the finger flexor muscle group, the second one on the finger
extensor muscle group and the third one on the small thumb flexor (musculus flexor
pollicis brevis). In the second test series, sensor one was placed on the left head
turner muscle (Musculus sternocleidomastoideus sinister), sensor two on the right
head turner muscle (Musculus sternocleidomastoideus dexter) and sensor three on
the chewing muscle (Musculus masseter). The locations of the sensors in the two
different setups are shown in figure 4.1. The sources of the pictures are [2] and [3].

Figure 4.1: Top: Sensor placement on the forearm, Bottom: Sensor placement on
the neck and jaw.

Chapter 5

Experiments

Two different pattern recognition algorithms and two different sensor placements
were used in this thesis. The following experiments compare these four setups. The
results can be seen in the next chapter.

5.1 Simulation
Before measuring the success rate of these four setups, the recognition capabilities of
the NN were simulated. The Activity Check algorithm cannot be simulated:
it is specifically designed for the EMG signals and cannot be tested with randomly
generated patterns. Sets of 10, 20, 30, and so on up to 100 patterns were used to test the NN. Each
pattern was generated as a 48 × 1 vector of random numbers between 0 and 1. This
results in signals with an amplitude of 0.5 and a mean value of 0.5.
To create different samples of the same pattern, the original sample was overlaid
with artificial measurement noise. Noise to signal ratios of 0.6, 1.0 and 1.4
were examined. The noise was generated as a vector of random values between
±noise_amp. For each pattern the NN had to learn, there were three training
samples and seven testing samples. A pattern is considered recognizable if at least
6 out of 7 testing samples were recognized correctly. The training samples were
fed to the NN in a loop, as shown in figure 5.1. First the first sample of the first
pattern is fed in, then the first sample of the second pattern and so on up to the
first sample of the nth pattern. Then the same loop is repeated with the second set
of training samples and then again with the third set.

Figure 5.1: Schematic of the feeding loop for the NN
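
The generation of the patterns and their noisy samples can be sketched as follows; the variable names and the noise amplitude of 0.3 are chosen here for illustration and do not claim to reproduce the original simulation script.

% Sketch: generate random patterns and their noisy training/testing samples
num_patterns = 20;                    % one of the tested set sizes (10, 20, ..., 100)
noise_amp    = 0.3;                   % assumed noise amplitude for this sketch

patterns = rand(48, num_patterns);    % one 48 x 1 base vector per pattern

train = zeros(48, num_patterns, 3);   % three training samples per pattern
test  = zeros(48, num_patterns, 7);   % seven testing samples per pattern
for k = 1:3
    train(:,:,k) = patterns + noise_amp*(2*rand(48,num_patterns)-1);
end
for k = 1:7
    test(:,:,k) = patterns + noise_amp*(2*rand(48,num_patterns)-1);
end
% A pattern counts as recognizable if at least 6 of its 7 testing samples are classified correctly.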


5.2 Measurements
The following measurements were conducted to find out which of the four setups
works best to control the robot. For both sensor placements eight muscle activity
patterns were designed which have a unique combination of relaxed and tensed
muscles. A NN with three training samples, a NN with one training sample and
the Activity Check algorithm were used to recognize these eight patterns. The
measurement data is shown in the tables in figure 5.2.

Figure 5.2: Pattern recognition measurements from the forearm or the neck

In an additional test series an attempt was made to identify as many signals as possible with
the NN. As the tables in figure 5.3 show, the NN performed very poorly in distin-
guishing these patterns, even though in the simulation it could easily recognize 20
patterns. This difference in performance is due to the totally different structure of
the patterns. The patterns for the simulation were generated randomly. Random
patterns do not have any correlation, they are all unique. The patterns measured
from the different movements of the hand (or neck) are often similar to each other.
Movements that require actions of the same muscle groups produce almost identical
patterns. For these patterns the fluctuations between two samples of the same pat-
tern are bigger than the differences between the patterns themselves. Using almost
identical patterns with different desired output values confuses the NN. Sometimes
all of these patterns lead to one desired output, which explains why few patterns
could still be recognized. Mostly the NN will not assign the unclear pattern to
either one of the desired outputs, thus the pattern will never be recognized.

Figure 5.3: Pattern recognition measurements from the forearm or the neck
Chapter 6

Results

6.1 Simulation
The NN can differentiate at most 50 patterns with the smallest noise to signal
ratio. Increasing the measurement noise decreases the amount of patterns the NN
can learn to recognize as well as the chance of a sample being recognized correctly.
If the NN is trained with only a few patterns, the relative performance is high but
the absolute performance is low. For every noise_amp there is a maximal absolute
performance, which stays constant over an interval of 20 to 40 patterns. Once
it is reached, a further increase of the number of patterns does not change the
absolute performance of the NN. Yet the relative performance decreases already. If
the number of patterns is increased beyond this interval, both the relative and the
absolute performance decrease.
For a noise to signal ratio of 1.4, the NN can rarely recognize a pattern 6 or 7
times out of 7 tests. However, up to 50 test patterns, most patterns are recognized
3, 4 or 5 times, which indicates that the NN can still learn different patterns, but
the fluctuations of one testing sample to the other are too big to allow a reliable
classification. A better performance is expected if the noise to signal ratio of the
testing samples were decreased to 1.0 or even 0.6. Figures 6.1 and 6.2 show the
results of the simulation.


Figure 6.1: Simulation results of absolute NN performance

Figure 6.2: Simulation results of relative NN performance



6.2 Measurements
As the measurements show, the Activity Check algorithm performed best in recog-
nizing eight muscle activity patterns. If this algorithm fails to recognize a pattern
reliably, it is because the subject cannot control their muscle tension well enough when
performing the movement. The more the subject practices controlling their muscle ten-
sion, the better these eight patterns can be distinguished. This could be observed
during the measurements as well as during the experiments with the robot.
Contrary to expectations, using one training sample for the NN worked better than
using three. The explanation for this observation is that the fluctuations between
two samples of a pattern can still have a similar order of magnitude as the differences
between two patterns. Therefore it is better to use only one typical learning sample
per pattern. The fluctuations will prevent some samples from being recognized
correctly, but at least the learning samples are clearly distinguishable.
Whenever the NN could not recognize a pattern at all, it had not learned to do so
during the learning session. If it could recognize a pattern only a few times, it
had managed to learn during the learning session, but the fluctuations of the samples of
this pattern were bigger than the difference to another pattern. Therefore some-
times the one and sometimes the other pattern is recognized, resulting in a poor
identification performance.
Chapter 7

The Robot

7.1 Characteristics
The following table shows the characteristics of the robot. Figure 7.2 contains
a photograph of the robot with all four DOF sketched in. Sketches of the parts can
be seen in appendix C.

Figure 7.1: Table showing the characteristics of the robot


Figure 7.2: Photograph of the Robot with the four DOF sketched in

7.2 Controller
The control of the robot is feedforward only. The visual feedback of the user ensures
a position control with zero steady state error. On the Arduino microcontroller the
Activity Check algorithm is implemented. The code for the Arduino is written in
C and can be seen in appendix B.2.
The code uses the 'switch' statement to determine which pattern is currently
recognized. To use this statement, the EMG patterns have to be converted to integers.
The three sensors produce values 0 or 1. These can be interpreted as binary numbers
from 000 to 111, which are then converted to decimal numbers from 0 to 7. A servo
action is assigned to each of these eight cases. The table in figure 7.3 shows
the correlation between the user's action, the EMG pattern and the robot's action.
The pattern where all muscles are inactive has to be assigned to not moving any
servo; otherwise the robot could never stand still. This leaves only seven patterns to control
eight servo movements. This problem is solved by designing the gripper as an
open/close toggle: the same signal opens the gripper if it is closed and closes the
gripper if it is open. The remaining six patterns are used to control the other
three DOF. Since they have to be controllable over their full range of motion, each
of them needs one pattern to turn to the left and one to turn to the right.

Figure 7.3: Correlation between the user's action, the EMG pattern and the robot's
action

7.3 Experiments with the Robot


The author was the main test subject; other subjects were merely used to analyze whether
the robot works with other users as well. The main test used the sensor placement
on the forearm. The subject had to use the robot to pick a teaspoon out of a cup
and place it into another cup. The actions this task requires are listed below.

Figure 7.4: Actions the Experiment includes

In figure 7.5, the control signal and the corresponding changes of the servo angles
are shown.
Further experiments were conducted with different muscle groups of the same subject
and with different subjects. The setup with the neck and jaw was tested, as well
as two other setups using the chest (Musculus pectoralis major) and the waist
(Musculus rectus abdominis) or the legs (Musculus quadriceps femoris) and the
calves (Musculus gastrocnemius). In all these tests, including the ones with different
subjects, the subject could make the robot move, but was not capable of completing
a task. From this it is concluded that all the tested muscles generate the same kind
of signals. However, the amplitude of the signal differs individually for each
subject and for each muscle. To allow precise control of the robot, the limit
values for the Activity Check would have to be adapted to the specific setup. Also,
the subject would have to train with each setup to improve their coordination. Otherwise
they will not be able to control the robot.

Figure 7.5: EMG signal and corresponding servo angles


Chapter 8

Conclusions and Future


Work

In the simulation of the NN pattern recognition capacities, the NN could recognize
at most 50 input patterns. Depending on the noise_amp, the maximum absolute
performance lies between 30 and 70 tested patterns. Using too few patterns does
not fully exploit the capacities of the NN, while using too many confuses the NN and
degrades its performance. The highest relative performance has been observed for
10 to 20 patterns. It decreases with a growing amount of tested patterns.

The noise has a big influence on the NN performance. Increasing its amplitude
decreases the amount of patterns the NN can learn to recognize as well as the
chance of a sample being recognized correctly. For a noise to signal ratio of 1.4, the
NN could hardly recognize any patterns 6 times out of 7. However it could recognize
many patterns 3, 4 or 5 times. This indicates that the NN could still learn the
patterns during the learning session but the noise was too big to allow a reliable
classification during the tests. Future experiments could analyze if decreasing the
noise to signal ratio of the testing samples to 1.0 or even 0.6 would improve the NN
performance.

The Activity Check could best recognize eight patterns that have unique combi-
nations of tensed and relaxed muscles. A NN with one or three training samples
was used as well to recognize these patterns. The training with one sample was
more successful, because the fluctuations of the samples can still have the same
order of magnitude as the differences between the patterns.

The attempt to measure as many muscle activity patterns as possible for a spe-
cific sensor placement was not successful. The NN could not distinguish most of
the patterns because two or three were always too similar. To recognize more pat-
terns, they would have to be made more distinct. Future tests could use more
sensors to get individual signals from different muscles of a muscle group. Another
approach could also include more than one limit value for the Activity Check to
distinguish between different levels of activity.

While conducting the experiments it was observed that the placement of the elec-
trodes has a huge influence on the quality of the measured signal. Placing the sensor
two centimeters away from its intended position can lead to totally different signal
amplitudes or even to a loss of the signal.


The robot could successfully be controlled using the Activity Check algorithm and
the sensor placement on the forearm. Further experiments were conducted with
different sensor placements and with different subjects. In all these tests the subject
could make the robot move but never succeeded in controlling the robot well enough
to perform a simple task. The limit values of the Activity Check would have to be
adapted and the subject would have to practice with the specific setup.
It was observed that controlling the robot is a learning process. The longer the
subject tried to perform the predefined task, the better they learned to generate the
input signals to smoothly control the robot. This learning process improves the
subject's coordination of the observed muscles as well as their control over the tension
in each muscle while performing the movements. It is concluded that every subject
and every different sensor placement can be used to control the robot if the subject
practices long enough to learn these abilities for the specific setup.
Bibliography

Background research
[1] Author Not Found: Chapter 3 Supervised learning: Multilayer Networks I.

http://www.google.ch/url?sa=t&source=web&cd=1&ved=0CCAQFjAA&url=http%3A%2F%2Fwww.cs.umbc.edu%2F~ypeng%2FF04NN%2Flecture-notes%2FNN-Ch3.ppt&rct=j&q=Chapter%203%20Supervised%20learning%3A%20Multilayer%20Networks%20I&ei=bu3YTYnSBc-j-gasr-WfDw&usg=AFQjCNHUz_VkHQQpgaCv9iSrCbi0EadGOA&cad=rja

Figures
[2] Figure of the head muscles:

http://www.edoctoronline.com/medical-atlas.asp?c=4&id=21651&m=1&p=10&cid=1051&s=

[3] Figures of the hand muscles:

http://commons.wikimedia.org/wiki/File:Forearm_muscles_front_deep.png?uselang=de
http://commons.wikimedia.org/wiki/File:Forearm_muscles_back_deep.png?uselang=de

Appendix A

Description of the Neural


Network

A NN consists of many nodes and connections between these nodes. The nodes
are organized in layers. A schematic of the NN used in this thesis can be seen in
figure A.1. It is designed similar to the NN described in [1].

Figure A.1: Schematic of the neural network

The first layer is the input layer, followed by the hidden layers. The last layer is the
output layer. Generally a NN can have many hidden layers, but this NN only has
one. Every node of the NN has a connection to every node one layer before and one
layer after itself. Every connection has a weight. The weights of the connections to
the hidden layer are called ’hidden weights’, hw, and the weights of the connections
to the output nodes ’output weights’, ow. The first index of a weight describes the
node in the target layer, the second one the node in the origin layer. The value of a
node is a function of the weighted sum of all the values of the nodes one layer before
the observed node. Each value is weighted with the weight of its connection to the
examined node. The weighted sum of the input values is called h_sum because the
hidden values, h_val, are a function of h_sum. Accordingly, the weighted sum of the
hidden values is called o_sum and the output values, o_val, are a function of o_sum.


The input values are called i_val. Equation A.1 shows how the weighted sums are
calculated.

h_sum_i = \sum_{j=0}^{n} hw_{ij} * i_val_j        o_sum_i = \sum_{j=0}^{n} ow_{ij} * h_val_j        (A.1)

The sigmoid function is used to calculate the value of a node from its weighted sum.
Equation A.2 shows the sigmoid function as a function of x.

f(x) = 1 / (1 + e^{-x})        (A.2)
The sigmoid function is shown in figure A.2. It has a slope of 0.25 at the point (0, 0.5)
and asymptotically approaches 0 if the argument goes to −∞. If the argument goes
to ∞, the function approaches 1. The sigmoid function is a common choice for a NN

Figure A.2: Sigmoid function

node function. It constrains the values of the nodes to a finite interval. At the same
time it is still sensitive to changes of the function variable around x = 0, allowing
changes of the input values to introduce changes of the output values.
A NN creates an output vector for each input vector. The weights of the connec-
tions store the information about how these two vectors correspond. A NN can learn to
recognize patterns at its input and indicate them through its output. There are
various algorithms for NN learning, in this thesis backpropagation learning was
used. Backpropagation is a supervised learning algorithm. The system knows the
desired output values and compares them to the actual output values. The
sum of the squares of all these output errors is an indicator of how closely the NN
output resembles the desired output. After every iteration, each weight of the NN
is adapted proportionally to the partial derivative of the sum squared error with
respect to the weight itself, see equation A.3. This teaches the NN to match the
current input to the current desired output.

hw_{ij,new} = hw_{ij} - \lambda * \partial E / \partial hw_{ij}        (A.3)
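
Spelled out for the sum squared error E = \sum_k (t_k - o_val_k)^2 and the sigmoid node function, the resulting weight updates take the form below. This explicit form is a derivation sketch consistent with the Matlab code in appendix B.1; t_k denotes the desired value of output node k.

\Delta ow_{ij} = 2\lambda * (t_i - o_val_i) * o_val_i (1 - o_val_i) * h_val_j

\Delta hw_{ij} = 2\lambda * [ \sum_k (t_k - o_val_k) * o_val_k (1 - o_val_k) * ow_{ki} ] * h_val_i (1 - h_val_i) * i_val_j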
The parameter λ is the learning step size. Experiments have shown that setting
λ to 0.5 works fine with the used NN. If the learning step size is too big, the NN
learning overshoots and can become unstable. Setting λ too small leads to a slow
learning rate. Of course if λ → 0, the NN does not adapt at all. If a NN has
to recognize several patterns, it is important to train all these in a loop. If only
one pattern at a time is trained, the NN forgets the ones it learned before. This
happens because the NN adapts its weights after every learning iteration. All the
weights are adapted to match the current input vector to the current desired output
vector. This decreases the ability of the NN to match the previous input vector to
its corresponding desired output. If this process is repeated too often, the NN can
no longer match the previous patterns to their corresponding outputs at all.
Appendix B

Code

B.1 Matlab Code for Neural Network

% NN_2_learning_EMG_signals

global num_patterns sensor_placement

name = [’EMG_recording_’ sensor_placement ’.mat’];


load (name, ’I_VAL’);

m=48; % number of inputs


n=48+5; % number of hidden values
p=num_patterns; % number of outputs
lambda=.5; % size of the learning step

number_of_patterns=size(I_VAL,2);
I_VAL_save=I_VAL;
% I_VAL=I_VAL(:,1:number_of_patterns/3);
% % only train with one sample per pattern
% number_of_patterns=size(I_VAL,2);
% I_VAL(:,1)=I_VAL_save(:,1+number_of_patterns)

sum_sq_error=ones(number_of_patterns,1);

% I_VAL((1:p),1)=[1 0 0 0 0 0 0 0]’

% weights to hidden layer from input, n rows, m columns, start range -1,1
hw=2*rand(n,m)-ones(n,m);
% weights to output layer from hidden layer, p rows, n columns, start range -1,1
ow=2*rand(p,n)-ones(p,n);

tic
while max(sum_sq_error)>.001

for j=1:3
for i=1:number_of_patterns

i_val=I_VAL((p+1:end),i);
o_val_true=I_VAL((1:p),i);
h_sum=hw*i_val; % weighted sum of all input values
h_val=1./(1+exp(-h_sum)); % sigmoid function


o_sum=ow*h_val;
o_val=1./(1+exp(-o_sum));

% modify weights ow
derrivative_o_val=o_val.*(1-o_val);
o_error=(o_val_true-o_val); % p x 1 vector
delta_ow=2*lambda* ( o_error.*derrivative_o_val ) *h_val’;
ow_new=ow+delta_ow;

% modify weights hw
derrivative_h_val=h_val.*(1-h_val);
delta_hw = 2*lambda* ( ow’*( o_error.*derrivative_o_val ) .*...
derrivative_h_val*i_val’ );
hw_new=hw+delta_hw;

sum_sq_error(i)=o_error’*o_error;
ow=ow_new;
hw=hw_new;

figure(1)
hold on
plot(o_val,’g’)
plot(o_val_true,’black’)

end % for
end % for
max_sum_sq_error=max(sum_sq_error)
% pause(.5)
if toc > 60*4
break % break after 4 min
end
if max_sum_sq_error>.001
clf(1,’reset’) % clear figure 1
end

end % while

I_VAL= I_VAL_save;
name = [’EMG_recording_’ sensor_placement ’.mat’];
save (name, ’I_VAL’, ’hw’, ’ow’)
% save ’EMG_learning_neck_NN.mat’ hw ow;

’theEEEEEnd’

B.2 Arduino Code for Activity Check


// Activity_Check_controller

#include <Servo.h>
Servo shoulder_rot; // 90 middle
Servo shoulder_flex; // 0=vertical up
Servo ellbow; // 0=straight outwards
Servo grip; // 180=closed, 90=open

int samples=200;
float limit=round((.3)*samples); float EMG[3];

int pattern; int i; int j; int angle=3;


int pos_rot=90; int pos_flex=1; int pos_ellbow=1; int pos_grip=90;

void setup(){
Serial.begin(9600);
pinMode(13, OUTPUT);
shoulder_rot.attach(3);
shoulder_flex.attach(9);
ellbow.attach(10);
grip.attach(11);

shoulder_rot.write(pos_rot);
shoulder_flex.write(pos_flex);
ellbow.write(pos_ellbow);
grip.write(pos_grip);

for (j=0;j<4;j++){
digitalWrite(13, HIGH);
delay(500);
digitalWrite(13, LOW);
delay(500);
}
}
void loop(){
digitalWrite(13, HIGH);
EMG[0]=0; EMG[1]=0; EMG[2]=0; // reset the integrated values before sampling
for (i=0; i<samples; i++){
EMG[0]=EMG[0]+abs(analogRead(0)-512)*5.0/1024.0;
EMG[1]=EMG[1]+abs(analogRead(1)-512)*5.0/1024.0;
EMG[2]=EMG[2]+abs(analogRead(2)-512)*5.0/1024.0*1.2;
}
digitalWrite(13, LOW);
if (EMG[0]>limit) {EMG[0]=1;} else{EMG[0]=0;}
if (EMG[1]>limit) {EMG[1]=1;} else{EMG[1]=0;}
if (EMG[2]>limit) {EMG[2]=1;} else{EMG[2]=0;}
pattern= 4*EMG[0]+2*EMG[1]+EMG[2]; // convert EMG to pattern: [0-7]

// EMG    pattern  user action   robot action
// 1 0 0    4      turn right    turn right shoulder
// 0 1 0    2      turn left     turn left shoulder
// 0 0 1    1      jaw           shoulder down
// 1 1 0    6      hard front    shoulder up
// 1 0 1    5      jaw+right     ellbow down
// 0 1 1    3      jaw+left      ellbow up
// 1 1 1    7      all           gripper
// 0 0 0    0      nothing       nothing
Serial.println(EMG[0]);
Serial.println(EMG[1]);
Serial.println(EMG[2]);
Serial.println( );
Serial.println( );

switch (pattern){
case 4:
if (pos_rot>=180){break;} else{pos_rot=pos_rot+angle; break;}
case 2:
if (pos_rot<=1){break;} else{pos_rot=pos_rot-angle; break;}
case 1:
if (pos_flex>=90){break;} else{pos_flex=pos_flex+angle/3; break;}


case 6:
if (pos_flex<=1){break;} else{pos_flex=pos_flex-angle/3; break;}
case 3:
if (pos_ellbow<=1){break;} else{pos_ellbow=pos_ellbow-angle; break;}
case 5:
if (pos_ellbow>=90){break;} else{pos_ellbow=pos_ellbow+angle; break;}
case 7:
if (pos_grip<135){pos_grip=180;}
else {pos_grip=90;}
break;
// default: and case 0: do nothing=write the same pos again...

}
shoulder_rot.write(pos_rot);
shoulder_flex.write(pos_flex);
ellbow.write(pos_ellbow);
grip.write(pos_grip);

// Serial.println(pattern);
// Serial.println(pos_rot);
// Serial.println(pos_flex);
// Serial.println(pos_ellbow);
// Serial.println(pos_grip);
// // Serial.println(millis()); // time between two loops: 180 ms
// Serial.println( );
}
Appendix C

Sketches

C.1 Sketches of the parts of the robot

Figure C.1: Parts of the robot 1


Figure C.2: Parts of the robot 2
