Fig. 2. The intelligent control system
position of joint k to its limits, and v_rk is the absolute relative velocity of joint k to its maximum velocity limit. s_rk and v_rk are used as crisp inputs to the k-th model of the fuzzy-neuro system to generate the hint value q_hk. s_rk can be calculated from the current value of the joint angle, q_k(t), of the manipulator. The universe of discourse of s_rk is divided into three partitions: far (F), close (C), and very close (VC). By using this input, we can determine whether each joint is far from, close to, or very close to its limits. Since almost all robotic manipulators have velocity limits in addition to joint limits, the relative velocity of each joint to its maximum limit must also be taken into consideration. v_rk can be calculated from the current and previous values of each joint angle of the manipulator, q_k(t) and q_k(t-1). The universe of discourse of v_rk is divided into three partitions: low (L), medium (M), and high (H). By using this second input, we can determine whether each joint is moving with low, medium, or high velocity compared to its maximum velocity limit. Whether the joint is moving towards or away from its limits is another important factor. All these requirements can be modeled as described below.
The relative position of joint k, s_rk, is given as

$$
s_{rk} = \begin{cases}
\dfrac{q_k(t) - q_{km}}{q_{k\max} - q_{km}} & \text{if } q_k(t) \ge q_{km} \\[2ex]
\dfrac{q_{km} - q_k(t)}{q_{k\max} - q_{km}} & \text{otherwise}
\end{cases}
\qquad (1)
$$
in which $q_{km} = (q_{k\max} + q_{k\min})/2$, where q_kmax is the maximum angle limit of joint k, and q_kmin is the minimum one.
The velocity of joint k, v_k, is given as

$$
v_k = \begin{cases}
\dfrac{q_k(t) - q_k(t-1)}{\Delta t} & \text{if } q_k(t) \ge q_{km} \\[2ex]
\dfrac{q_k(t-1) - q_k(t)}{\Delta t} & \text{otherwise}
\end{cases}
\qquad (2)
$$

where Δt denotes the sampling width. Note that joint k moves towards the limits if v_k > 0; otherwise it moves away from the limits.
The absolute relative velocity, v_rk, is given as

$$
v_{rk} = \frac{|v_k|}{v_{k\max}} \qquad (3)
$$

where v_kmax is the maximum velocity limit of joint k.
The general concept of constructing the fuzzy rules for each joint angle is to generate an approximate value of each joint angle for the next step, from the current and previous ones, according to the situation of s_rk, v_k, and v_rk, as follows:

$$
q_{hk}(t+1) - q_k(t) = c\,\bigl(q_k(t) - q_k(t-1)\bigr) \qquad (4)
$$

where c is a constant. It can take any real value: for example, c = 0 to stop the motion of the axis, 0 < c < 1 for motion with deceleration, c = 1 for motion with constant velocity, and c > 1 for motion with acceleration. Rearranging Eq. 4, we obtain

$$
q_{hk}(t+1) = q_k(t) + c\,\bigl(q_k(t) - q_k(t-1)\bigr) \qquad (5)
$$

The first term of Eq. 5 represents the part related to the first input s_rk to the fuzzy-neuro system, while the second term represents the part related to the second input v_rk.
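Equation 5 is a one-line update. The sketch below shows how the constant c shapes the hint; the values of c are the paper's own cases, while the joint history is made up for illustration:

```python
def hint(q_t, q_prev, c):
    """Eq. 5: hint for the next joint angle, q_hk(t+1) = q_k(t) + c (q_k(t) - q_k(t-1))."""
    return q_t + c * (q_t - q_prev)

q_t, q_prev = 50.0, 48.0          # illustrative joint history (deg)
print(hint(q_t, q_prev, 0.0))     # c = 0: stop the axis -> 50.0
print(hint(q_t, q_prev, 0.5))     # 0 < c < 1: decelerate -> 51.0
print(hint(q_t, q_prev, 1.0))     # c = 1: constant velocity -> 52.0
print(hint(q_t, q_prev, 2.0))     # c > 1: accelerate -> 54.0
```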
Note that H_1, shown in Fig. 2, denotes the relation between s_r and q(t), where s_rk and q_k(t) are related according to Eq. 1, while H_2 denotes the relation between v_r and (q(t) - q(t-1)), where v_rk and (q_k(t) - q_k(t-1)) are related according to Eqs. 2 and 3.
2.2 Illustrative example
As a nonunique problem that has many solutions for one input, this example illustrates the concept of guiding the output of an NN by feeding hint values for the solution to the NN as an additional input vector, rather than as training data.

Consider the equation

$$
z = x^2 + y^2 \qquad (6)
$$

The inverse problem of this equation has many solutions, i.e., many pairs (x, y) for a given value of z. Therefore, in constructing an NN to generate this solution, two input vectors are presented. The first vector is i_1 = [z]^T, while the second is i_2 = [x y]^T, which is considered to be the hint in the training mode.
A three-layered resilient back-propagation NN^11 with 45 neurons in each hidden layer was constructed as shown in Fig. 3, where the input vector is i = [i_1^T i_2^T]^T = [z x_h y_h]^T and the output vector is o = [x y]^T in the testing mode. Two cases, each with the same value of z in its input vector but different values of x_h and y_h, were obtained as follows:
Case 1.  i = [31.25  5  2.5]^T,  o = [5.2031  2.0727]^T

Case 2.  i = [31.25  5.2  5]^T,  o = [5.1696  2.1231]^T
In short, for the same input z but different hints, the NN responds with correspondingly different, appropriate outputs.
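The non-uniqueness of the inverse of Eq. 6 is easy to see numerically. This sketch simply enumerates several (x, y) pairs for a single z, the kind of ambiguity the hint input resolves; the NN itself is not reproduced here, and the sample x values are arbitrary:

```python
import math

def inverse_solutions(z, xs):
    """Return many (x, y) pairs satisfying z = x^2 + y^2 for one value of z."""
    return [(x, math.sqrt(z - x * x)) for x in xs if x * x <= z]

pairs = inverse_solutions(31.25, [0.0, 1.0, 2.5, 5.0])
for x, y in pairs:
    print(x, y, x * x + y * y)   # every pair reproduces the same z
```

Note that (5, 2.5) is among the exact solutions for z = 31.25, which is consistent with the hint values used in Case 1 above.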
Fig. 3. The input–output relations of the NN modes. a Training mode. b Testing mode
3 Control system modules
3.1 Fuzzy-neuro system
The integration of neural networks and fuzzy logic results in a combined approach that provides a powerful approximate reasoning framework with both learning and generalization abilities. The fuzzy-neuro system is therefore a good candidate as the hint generator in the proposed system. A first-order Sugeno fuzzy model is used to construct the k-th model of this system, as shown in Fig. 4, where k = 1, . . . , n. Here, Σ and Π denote the sum and the product of the inputs, respectively. The construction is described below.
Input layer. Two input variables are presented to the model, namely s_rk and v_rk.

Fuzzification layer. The outputs of this layer are the membership values of the antecedent part. Three Gaussian membership functions are assigned to each input over its universe of discourse (s_rk ∈ [0, 1], v_rk ∈ [0, 1]) as follows:
$$
\mu_{1i}(s_{rk}) = \exp\!\left(-\frac{(s_{rk} - r_{1i})^2}{2\sigma_{1i}^2}\right) \qquad (7)
$$

$$
\mu_{2j}(v_{rk}) = \exp\!\left(-\frac{(v_{rk} - r_{2j})^2}{2\sigma_{2j}^2}\right) \qquad (8)
$$

where r_1i and σ_1i are the center and width parameters, respectively, of the membership functions of s_rk, i = 1, . . . , 3, while r_2j and σ_2j, respectively, are those of v_rk, j = 1, . . . , 3.
Rule layer. Nine rules are constructed for this model. In this layer, the T-norm of the antecedent part is computed. The algebraic product is taken as that norm; therefore the output of this layer is the firing strength of every rule:

$$
\mu_{lk} = \mu_{1i}(s_{rk})\,\mu_{2j}(v_{rk}) \qquad (9)
$$

where l = (i - 1)·3 + j. There are two outputs from each neuron in this layer. The first is multiplied by a crisp function in the consequent part, F_lk, and then summed in the next layer, while the other is just summed and then inverted in the next layer.
We now discuss the construction of the consequent part. If joint k is moving towards its limits (v_k ≥ 0), then we have to penalize it, but if it is moving away from its limits (v_k < 0), then there is no need to penalize it. Moreover, if there is a critical axis, the situation is different. Note that the critical axis is the axis which is forced to work very close to its limit but must still move towards it to achieve a given task. In this case, the concept of cooperation between links must be incorporated: stop the motion of the critical axis at its limit at the expense of more compensation from the most relaxed axis, where the most relaxed axis is the axis that is far from its limits and still moving away from them. To distinguish between the situations of the axes, all the axes are ranked at each time-step according to their situations of s_rk, v_k, and v_rk within 24 nonintersecting categories, and then sorted from the most critical axis to the most relaxed one. Thus, we can summarize the rules as follows:

if v_k ≥ 0
    F_lk = a_l s_rk + b_l v_rk + c_l
elseif v_k < 0
    if joint k is the most relaxed axis and a critical axis exists
        F_lk = a_l s_rk + f_l b_l v_rk + c_l
    else
        F_lk = a_l s_rk + b_l v_rk + c_l
    end
end

where a_l, b_l, and c_l are the consequent parameters, and f_l is the cooperation parameter. Note that the cooperation parameter multiplies the second term of the right-hand side of the equation dedicated to the most relaxed axis, because this term represents the previous rate of change of the joint angle. It is used if and only if there is a critical axis, to prevent unnecessary self-motion.
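The consequent selection logic above can be written compactly. In this sketch the parameter values are placeholders, not trained values, and the ranking of axes is abstracted into a single flag:

```python
def consequent(s_rk, v_rk, v_k, a, b, c, f, is_relaxed_with_critical):
    """Crisp consequent F_l of one rule (first-order Sugeno form).

    The cooperation parameter f scales the velocity term only for the most
    relaxed axis when a critical axis exists, as in the rules above.
    """
    if v_k >= 0:                      # moving towards the limits: penalize
        return a * s_rk + b * v_rk + c
    if is_relaxed_with_critical:      # moving away, asked to compensate
        return a * s_rk + f * b * v_rk + c
    return a * s_rk + b * v_rk + c    # moving away, no critical axis

# Placeholder parameters
a, b, c, f = 0.5, -0.2, 0.1, 2.0
F_towards = consequent(0.3, 0.4, 1.0, a, b, c, f, False)    # towards the limits
F_coop = consequent(0.3, 0.4, -1.0, a, b, c, f, True)       # cooperating relaxed axis
print(F_towards, F_coop)
```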
Defuzzification layer. The defuzzification layer consists of two neurons. The first neuron, denoted by Σ, sums all incoming signals from the rule layer multiplied by the crisp functions F_lk, as follows:

$$
O_{1k} = \sum_{i=1}^{3}\sum_{j=1}^{3} \mu_{lk}\, F_{lk} \qquad (10)
$$
Fig. 4. The k-th model of the fuzzy-neuro system
The second neuron, denoted by 1/Σ, inverts the sum of all incoming signals from the rule layer as follows:

$$
O_{2k} = \frac{1}{\displaystyle\sum_{i=1}^{3}\sum_{j=1}^{3} \mu_{lk}} \qquad (11)
$$
Output layer. The output layer multiplies the two signals output by the defuzzification layer as follows:

$$
q_{hk} = O_{1k}\, O_{2k} = \frac{\displaystyle\sum_{i=1}^{3}\sum_{j=1}^{3} \mu_{lk}\, F_{lk}}{\displaystyle\sum_{i=1}^{3}\sum_{j=1}^{3} \mu_{lk}} \qquad (12)
$$
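Equations 7–12 amount to a standard Sugeno inference pass. The sketch below uses made-up centers, widths, and constant consequent values; in the actual model the F_l are the rule-dependent crisp functions described above, not constants:

```python
import math

def gaussian(x, r, sigma):
    """Eqs. 7-8: Gaussian membership value."""
    return math.exp(-((x - r) ** 2) / (2.0 * sigma ** 2))

def sugeno_output(s_rk, v_rk, centers_s, centers_v, sigma, F):
    """Eqs. 9-12: firing strengths, then the weighted average q_hk."""
    num, den = 0.0, 0.0
    for i, rs in enumerate(centers_s):
        for j, rv in enumerate(centers_v):
            l = i * 3 + j                                               # rule index (0-based)
            mu = gaussian(s_rk, rs, sigma) * gaussian(v_rk, rv, sigma)  # Eq. 9
            num += mu * F[l]                                            # Eq. 10 (Sigma neuron)
            den += mu                                                   # Eq. 11 (1/Sigma neuron)
    return num / den                                                    # Eq. 12

# Three partitions per input on [0, 1]; placeholder consequent values
centers = [0.0, 0.5, 1.0]
F = [0.1 * l for l in range(9)]
q_hk = sugeno_output(0.8, 0.2, centers, centers, 0.25, F)
print(q_hk)   # a weighted average of the F values
```

Because Eq. 12 is a normalized weighted average, the output always stays within the range of the consequent values, which is easy to verify with constant consequents.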
Thus, the output vector of the whole system is q_h = [q_h1 q_h2 . . . q_hn]^T. Let the reference output of the training data be q*_h; the error vector at iteration l is then e_1 = q*_h - q_h, and the least-squares cost of the output error is E_1 = (1/2) Tr{e_1 e_1^T}, where Tr{·} is the trace operator. The learning algorithm for updating the consequent parameters by the back-propagation algorithm is given as

$$
\beta_l(l+1) = \beta_l(l) - \eta\,\frac{\partial E_1}{\partial \beta_l}(l) \qquad (13)
$$

where β_l stands for each of the consequent parameters a_l, b_l, c_l, and f_l, η is the learning rate, and the gradients ∂E_1/∂β_l are subject to the logical rules given below.
Gradient calculations

if v_k ≥ 0

$$
\frac{\partial E_1}{\partial a_l} = -\!\!\sum_{k=1,\,k\notin g_s}^{n}\!\! A_{lk}\, s_{rk},\qquad
\frac{\partial E_1}{\partial b_l}\bigg|_1 = -\!\!\sum_{k=1,\,k\notin g_s}^{n}\!\! A_{lk}\, v_{rk},\qquad
\frac{\partial E_1}{\partial c_l} = -\!\!\sum_{k=1,\,k\notin g_s}^{n}\!\! A_{lk}
$$

elseif v_k < 0

$$
\frac{\partial E_1}{\partial a_l} = -A_{lk}\, s_{rk},\qquad
\frac{\partial E_1}{\partial c_l} = -A_{lk},\qquad k \in g_s
$$

if joint k is the most relaxed axis and a critical axis exists

$$
\frac{\partial E_1}{\partial f_l} = -A'_{lk}\, v_{rk},\qquad
\frac{\partial E_1}{\partial b_l}\bigg|_1 = -A''_{lk}\, v_{rk}
$$

else

$$
\frac{\partial E_1}{\partial b_l}\bigg|_2 = -A_{lk}\, v_{rk},\qquad k \in g_s,\ k \neq h_m
$$

end
end

$$
\frac{\partial E_1}{\partial b_l} = \frac{\partial E_1}{\partial b_l}\bigg|_1 + \frac{\partial E_1}{\partial b_l}\bigg|_2
$$

where the g_s are the joint numbers that are going away from their limits at each time-step, and h_m is the most relaxed joint number in the case of the existence of a critical axis. Also note that

$$
A_{lk} = \bigl(q^*_{hk} - q_{hk}\bigr)\,\frac{\mu_{lk}}{\displaystyle\sum_{i=1}^{3}\sum_{j=1}^{3}\mu_{lk}},\qquad
A'_{lk} = A_{lk}\, b_l,\qquad
A''_{lk} = A_{lk}\, f_l
$$
3.2 Back-propagation neural network
Because of its ability to realize a complex nonlinear input–output relation, a back-propagation NN with two hidden layers, as shown in Fig. 5, is used for the inverse kinematic problem. It has been reported^12 that, as the input layer size increases, an NN with two hidden layers makes the learning process more successful because of its ability to extract the local and global features from the input space by the first and second hidden layers, respectively.

Two input vectors are presented to that NN. The first carries the differential change in the Cartesian variable vector, Δp = [Δx Δy]^T, while the second carries the hint values Δq_h = [Δq_h1 Δq_h2 . . . Δq_hn]^T, where Δq_hi = q_hi - q_i for i = 1, . . . , n. The output vector carries the differential change in the joint variables, Δq = [Δq_1 Δq_2 . . . Δq_n]^T. The forward calculation through the NN to compute the output, based on the matrix calculus approach, is described below.
Letting z_1 represent the input vector z_1 = [1 Δp^T Δq_h^T]^T, the outputs of the first and second hidden layers and of the output layer are given by

$$
\bar{z}_2 = \varphi(U^T z_1) \qquad (14)
$$

$$
\bar{z}_3 = \varphi(V^T z_2) \qquad (15)
$$

$$
\Delta q = \varphi(W^T z_3) \qquad (16)
$$

with

$$
z_2 = [1\ \ \bar{z}_2^T]^T;\qquad z_3 = [1\ \ \bar{z}_3^T]^T \qquad (17)
$$

Fig. 5. The structure of the NN (input and hint vectors, two hidden layers, and an output layer)
Here, φ(·) is the hyperbolic tangent activation function vector, whose elements are given by

$$
\varphi(x) = \frac{2}{1 + e^{-2x}} - 1
$$

U, V, and W consist of the threshold and weight matrices for the first and second hidden layers and the output layer, respectively, which are given by

$$
U = [U_0^T\ \ \bar{U}^T]^T \qquad (18)
$$

$$
V = [V_0^T\ \ \bar{V}^T]^T \qquad (19)
$$

$$
W = [W_0^T\ \ \bar{W}^T]^T \qquad (20)
$$
where

$$
U_0 \in \mathbb{R}^{1\times l_1},\quad V_0 \in \mathbb{R}^{1\times l_2},\quad W_0 \in \mathbb{R}^{1\times m_2},\quad
\bar{U} \in \mathbb{R}^{m_1\times l_1},\quad \bar{V} \in \mathbb{R}^{l_1\times l_2},\quad \bar{W} \in \mathbb{R}^{l_2\times m_2}
$$

in which l_1, l_2, and m_2 are the numbers of first hidden, second hidden, and output units, respectively, and m_1 is the number of input units.
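The forward pass of Eqs. 14–17 can be sketched with numpy. The layer sizes below are arbitrary small values, and the random matrices are stand-ins for the trained U, V, and W:

```python
import numpy as np

def phi(x):
    """Hyperbolic tangent activation, phi(x) = 2/(1 + exp(-2x)) - 1 = tanh(x)."""
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

rng = np.random.default_rng(0)
m1, l1, l2, m2 = 6, 8, 8, 4            # input, hidden-1, hidden-2, output sizes
U = rng.standard_normal((m1 + 1, l1))  # first row plays the role of the threshold U_0
V = rng.standard_normal((l1 + 1, l2))
W = rng.standard_normal((l2 + 1, m2))

dp = rng.standard_normal(2)            # differential Cartesian change (Delta p)
dqh = rng.standard_normal(4)           # hint vector (Delta q_h)
z1 = np.concatenate(([1.0], dp, dqh))  # z_1 = [1 Delta p^T Delta q_h^T]^T

z2 = np.concatenate(([1.0], phi(U.T @ z1)))   # Eqs. 14 and 17
z3 = np.concatenate(([1.0], phi(V.T @ z2)))   # Eqs. 15 and 17
dq = W.T @ z3                                 # Eq. 16; linear output as assumed for Eq. 22
print(dq.shape)                               # one differential joint change per output unit
```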
Since the reference output (i.e., the desired differential change in the joint variables) is not known at the output layer, the output vector must be mapped to a measurable error space, namely the differential change in the Cartesian variables. Thus, the output vector Δq is mapped through the Jacobian matrix J to Δp, where Δp = JΔq. Letting Δp̄ represent the desired reference differential change in the Cartesian variable vector, the error at iteration l can be calculated as e(l) = Δp̄ - Δp(l), and the least-squares cost of the error is

$$
E(l) = \tfrac{1}{2}\,\mathrm{Tr}\bigl\{\mathbf{e}(l)\,\mathbf{e}^T(l)\bigr\} \qquad (21)
$$

Assuming a linear activation function at the output layer, the cost function can be rewritten as

$$
E(l) = \tfrac{1}{2}\,\mathrm{Tr}\Bigl\{\bigl(\Delta\bar{p} - J\bar{W}^T(l)\, z_3\bigr)\bigl(\Delta\bar{p} - J\bar{W}^T(l)\, z_3\bigr)^T\Bigr\} \qquad (22)
$$
The weight-updating algorithm, according to the generalized delta rule,^12 is given by

$$
\bar{W}(l+1) = \bar{W}(l) + \alpha\,\Delta\bar{W}(l-1) - \eta\,\frac{\partial E(l)}{\partial \bar{W}(l)} \qquad (23)
$$

where α is the momentum factor, η is the learning rate, and ∂E(l)/∂W̄(l) can be calculated as

$$
\frac{\partial E(l)}{\partial \bar{W}(l)} = -z_3(l)\,\boldsymbol{\delta}_3^T(l) \qquad (24)
$$

Here, δ_3(l) = J^T e(l) is the m_2 × 1 local gradient vector of the output layer, while those of the second and first hidden layers are as in the original algorithm.
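The output-layer update of Eqs. 21–24 can be sketched as follows. The Jacobian, weights, hidden output, and reference are random stand-ins, and only the output-layer weights are updated, as in Eqs. 23 and 24:

```python
import numpy as np

rng = np.random.default_rng(1)
m, m2, l2 = 2, 4, 8                     # Cartesian dim, output units, hidden-2 units
J = rng.standard_normal((m, m2))        # manipulator Jacobian (stand-in)
W = rng.standard_normal((l2 + 1, m2))   # output-layer weights (with threshold row)
z3 = rng.standard_normal(l2 + 1)        # second-hidden-layer output (bias included)
dp_ref = rng.standard_normal(m)         # desired Delta p

dW_prev = np.zeros_like(W)              # previous weight change (momentum term)
eta, alpha = 0.01, 0.9                  # learning rate and momentum factor

dp = J @ (W.T @ z3)                     # map the output to Cartesian space: Delta p = J Delta q
e = dp_ref - dp                         # error, Eq. 21
delta3 = J.T @ e                        # local gradient of the output layer, Eq. 24
grad = -np.outer(z3, delta3)            # dE/dW = -z3 delta3^T, Eq. 24
dW = alpha * dW_prev - eta * grad       # generalized delta rule, Eq. 23
W = W + dW
e_new = dp_ref - J @ (W.T @ z3)
print(np.linalg.norm(e), np.linalg.norm(e_new))  # error norms before and after the update
```

For this linear output layer, one can verify in closed form that the step transforms the error as e_new = (I - η‖z_3‖² J Jᵀ) e, so a sufficiently small η shrinks the Cartesian error.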
4 Simulation results
Simulations have been performed for a four-link planar manipulator. Each link has a length of 1 m, and it is assumed that the manipulator has the joint limits q_min = [-10 -80 -100 -40]^T and q_max = [90 40 15 80]^T (in degrees). Simulation results are presented to show the performance of the proposed position control system for a redundant manipulator tracking a given trajectory while avoiding the joint limits. Only the end-effector position is considered in this simulation, which leaves two degrees of redundancy with which to achieve joint limit avoidance. As shown in Fig. 6, the required trajectory is an inclined straight line with an inclination angle of 80° to the X-axis. To reach the maximum height along this trajectory, three joints of the manipulator, q_2, q_3, and q_4, are forced to work very close to their limits, so the number of critical axes exceeds the number of degrees of redundancy. The manipulator starts the trajectory at point (3.5, 0) m with an initial configuration of q(0) = [41 -49 -32 48]^T, and ends it at the maximum travel point (3.64, 0.79) m.
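The stated start point can be checked with planar forward kinematics. The signs of the initial joint angles are not fully legible in the source, so the signed configuration below is a reconstruction chosen to satisfy the stated joint limits and start point:

```python
import math

def fk_planar(q_deg, link_len=1.0):
    """End-effector position of a planar serial arm with equal link lengths."""
    x = y = theta = 0.0
    for q in q_deg:
        theta += math.radians(q)        # cumulative joint angle
        x += link_len * math.cos(theta)
        y += link_len * math.sin(theta)
    return x, y

# Assumed signed initial configuration (deg); signs are a reconstruction
q0 = [41.0, -49.0, -32.0, 48.0]
x, y = fk_planar(q0)
print(round(x, 3), round(y, 3))   # close to the stated start point (3.5, 0)
```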
To obtain the simulation results, the fuzzy-neuro system is first trained off-line by the back-propagation algorithm, as described in subsection 3.2, with a learning rate η = 0.01, and the trained fuzzy-neuro system is then used in the overall control system. The training data presented to that system are extracted from the nonviolating results of tracking some trajectories by using the GPM. The NN, which is constructed as a two-hidden-layer NN with 30 neurons in each hidden layer, is trained online at each time-step by presenting the hint values Δq_h in addition to Δp. Thus, the NN converges rapidly to the required Δq, which is very close to Δq_h. The terminating criterion at each time-step is taken as a trajectory tracking error |e_i| ≤ 10^-5 for i = 1, . . . , m. The sampling width Δt is taken as 0.075 s. The initial values of the weights are selected at random on [-1, 1]. The learning rate and the momentum factor are chosen as η = 0.1 and α = 0.9, respectively, for the best results.

Fig. 6. The final configuration
Figure 7 shows the simulation results; Fig. 7a and b show the successive manipulator configurations throughout the desired trajectory and the joint angle profiles, respectively. The results demonstrate the efficacy of the proposed control system in tracking the trajectory while avoiding the joint limits. This performance is due to the NN's ability to follow the hint generated by the fuzzy-neuro system. Joints 2 and 3 stop their motions at their limits, while joints 1 and 4 change their rates as a form of cooperation to achieve the task, until joint 4 reaches its limit at the end of the trajectory. Neither a violation nor an abrupt change appears in the joint angle profiles, in contrast to the results of the GPM (Fig. 8), which verifies the good performance of the proposed control system. Figure 8 shows the results of the GPM for the same trajectory and the same initial manipulator configuration, in which joint 2 violates its maximum limit and abrupt changes occur at the end of the trajectory.
5 Conclusion
The problem of tracking a given trajectory while avoiding the joint limits of a redundant manipulator has been addressed by an intelligent control system. The concept of a cooperative hint between the joints, provided by a fuzzy-neuro system, was introduced to achieve a given task. The results verified the good performance of the proposed method, which can be regarded as a general NN-based redundancy resolution scheme because it can be applied to many other problems, such as collision avoidance,^13 fault tolerance, and drift-free redundancy.
References
1. Liégeois A (1977) Automatic supervisory control of the configuration and behavior of multibody mechanisms. IEEE Trans Syst Man Cybern SMC-7(12):868–871
2. Zghal H, Dubey RV, Euler JA (1990) Efficient gradient projection optimization for manipulators with multiple degrees of redundancy. Proceedings of the IEEE International Conference on Robotics and Automation, vol 2, pp 1006–1011
3. Chan TF, Dubey RV (1995) A weighted least-norm solution-based scheme for avoiding joint limits for redundant joint manipulators. IEEE Trans Robot Autom 11:286–292
4. Cheng F-T, Chen T-H, Sun Y-Y (1994) Resolving manipulator redundancy under inequality constraints. IEEE Trans Robot Autom 10:65–71
5. Cheng F-T, Sheu RJ, Chen T-H (1995) The improved compact QP method for resolving manipulator redundancy. IEEE Trans Syst Man Cybern 25:1521–1530
6. Ahson SI, Sharkey NE, Nicolas B (1996) Avoiding joint limits and obstacles for kinematically redundant manipulators: a fuzzy-logic-based approach. Proceedings of the 5th IEEE International Conference on Fuzzy Systems, vol 3, pp 1777–1781
7. Ramos MC, Koivo AJ (2002) Fuzzy-logic-based optimization for redundant manipulators. IEEE Trans Fuzzy Syst 10:498–509
8. Daachi B, Madani T, Ramdane-Cherif A, et al. (2003) Memory neural network for kinematic inversion of constrained redundant robot manipulators. Proceedings of the 11th International Conference on Advanced Robotics, pp 1855–1862
9. Assal SFM, Watanabe K, Izumi K (2004) Cooperative fuzzy hint acquisition for avoiding joint limits of redundant manipulators. Proceedings of the 30th Annual Conference of the IEEE Industrial Electronics Society (IECON 2004), November, WC5-6 in CD
10. Zhang Y, Wang J, Xia Y (2003) A dual neural network for redundancy resolution of kinematically redundant manipulators subject to joint limits and joint velocity limits. IEEE Trans Neural Netw 14:658–667

Fig. 7. The results of the proposed control system. a Manipulator configurations. b Joint angle profiles

Fig. 8. The results of the GPM
11. Riedmiller M, Braun H (1993) A direct adaptive method for faster back-propagation learning: the RPROP algorithm. Proceedings of the IEEE International Conference on Neural Networks, pp 586–591
12. Haykin S (1994) Neural networks: a comprehensive foundation. Macmillan, New York
13. Assal SFM, Watanabe K, Izumi K (2004) Exploring motion acquisition of manipulators with multiple degrees of redundancy using soft computing techniques. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), pp 3086–3091