
Robot control part 1: Forward transformation matrices

I'm doing a tour of learning down at the Brains in Silicon lab run by Dr. Kwabena Boahen for the next month or so, working on learning a bunch about building and controlling robots and some other things, and one of the super interesting things that I'm reading about is effective methods for force control of robots. I've mentioned operational space (or task space) control of robotic systems before, in the context of learning the inverse kinematic transformation, but down here the approach is to analytically derive the dynamics of the system (as opposed to learning them) and use these to explicitly calculate control signals to move in task space that take advantage of the passive dynamics of the system.

In case you don't remember what those words mean, operational (or task) space refers to a different configuration space than the basic / default robot configuration space. FOR EXAMPLE: if we have a robot arm with three degrees of freedom (DOF), that looks something like this:

where two joints rotate, and are referred to as $q_1$ and $q_2$, respectively, then the most obvious representation for the system state, $\textbf{q}$, is the joint-angle space $\textbf{q} = [q_1, q_2]^T$. So that's great, but often when we're using this robot we're going to be more interested in controlling the position of the end-effector rather than the angles of the joints. We would like to operate to complete our task in terms of $\textbf{x} = [x, y]$, where $x$ and $y$ are the Cartesian coordinates of our hand in 2D space. So then $\textbf{x}$ is our operational (or task) space.

I also mentioned the phrase 'passive dynamics'. It's true, go back and check if you don't believe me. Passive dynamics refer to how the system moves from a given initial condition when no control signal is applied. For example, passive dynamics incorporate the effects of gravity on the system. If we put our arm up in the air and remove any control signal, it falls down by our side. The reason that we're interested in passive dynamics is because they're movement for free. So if my goal is to move my arm to be down by my side, I want to take advantage of the fact that the system naturally moves there on its own simply by removing my control signal, rather than using a bunch of energy to force my arm to move down.

There are a bunch of steps leading up to building controllers that can plan trajectories in a task space. As a first step, it's important that we characterize the relationship of each of the reference coordinate frames of the robot's links to the origin, or base, of the robot. The characterization of these relationships is done using what are called forward transformation matrices, and they will be the focus of the remainder of this post.

Forward transformation matrices in 2D

As I mentioned mere sentences ago, forward transformation matrices capture the relationship between the reference frames of different links of the robot. A good question might be: what is a reference frame? A reference frame is basically the point of view of each of the robotic links; if you were an arm joint yourself, it is what you would consider 'looking forward'. To get a feel for these, and why it's necessary to be able to move between them, let's look at the reference frames of each of the links from the above drawn robot:

We know that our end-effector point $(x, y)$ is length $d_2$ away from reference frame 2 along its x-axis. Similarly, we know that reference frame 2 is length $d_1$ away from reference frame 1 along its x-axis, and that reference frame 1 is length $d_0$ away from the origin along its y-axis. The question is, then: in terms of the origin's coordinate frame, where is our point $(x, y)$?

In the configuration pictured above it's pretty straightforward to figure out: it's simply $(x, y) = (d_1 + d_2, d_0)$. So you're feeling pretty cocky, this stuff is easy. OK hotshot, what about NOW:

It's not as straightforward once rotations start being introduced. So what we're looking for is a method of automatically accounting for the rotations and translations of points between different coordinate frames, such that if we know the current angles of the robot joints and the relative positions of the coordinate frames, we can quickly calculate the position of the point of interest in terms of the origin coordinate frame.

Accounting for rotation

So let's take a quick look at rotating axes. Here's a picture:

The above image displays two frames of reference with the same origin, rotated from each other by $\theta$ degrees. Imagine a point $\textbf{p} = (p_{x_1}, p_{y_1})$ specified in reference frame 1. To find its coordinates in terms of the origin reference frame, or $(x_0, y_0)$ coordinates, it is necessary to find out the contributions of the $x_1$ and $y_1$ axes to the $x_0$ and $y_0$ axes. The contributions to the $x_0$ axis from $p_{x_1}$ are calculated

$$\cos(\theta) \, p_{x_1},$$

which takes the position along the $x_1$ axis and maps it to the $x_0$ axis.

To calculate how $p_{y_1}$ affects the position along $x_0$ we calculate $\cos(90^\circ + \theta) \, p_{y_1}$, because the $y_1$ axis is offset from the $x_0$ axis by $90^\circ + \theta$, as shown above. This term can be rewritten as $-\sin(\theta) \, p_{y_1}$, because $\sin$ and $\cos$ are phase shifted 90 degrees from one another. This leads to the total contribution to the $x_0$ axis from a point defined in the $(x_1, y_1)$ axes being

$$p_{x_0} = \cos(\theta) \, p_{x_1} - \sin(\theta) \, p_{y_1}.$$

Similarly, for the $y_0$ axis contributions we have

$$p_{y_0} = \sin(\theta) \, p_{x_1} + \cos(\theta) \, p_{y_1}.$$

Rewriting the above equations in matrix form gives:

$$\begin{bmatrix} p_{x_0} \\ p_{y_0} \end{bmatrix} = \begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix} \begin{bmatrix} p_{x_1} \\ p_{y_1} \end{bmatrix},$$

where the matrix of sines and cosines is called a rotation matrix.

The notation used here for these matrices is that the reference frame number being rotated from is denoted in the superscript before, and the reference frame being rotated into is in the subscript. $^0\textbf{R}_1$ denotes a rotation from reference frame 0 into reference frame 1.

To find the location of a point defined in reference frame 1 in reference frame 0 coordinates, we then multiply by the rotation matrix $^1\textbf{R}_0$.
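To make this concrete, here's a minimal NumPy sketch of the rotation; the angle and point values are made up purely for illustration:

    import numpy as np

    theta = np.pi / 4  # rotation of reference frame 1 relative to frame 0
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    p1 = np.array([1.0, 0.0])  # a point defined in reference frame 1
    p0 = np.dot(R, p1)         # the same point in reference frame 0 coordinates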

Accounting for translation

Alrighty, rotation is great, but as you may have noticed, our robot's joints are not all right on top of each other. The second part of transformation is translation, and so it is also necessary to account for the distances between reference frame origins.

Let's look at the reference frames 1 and 0 shown in the above figure, where point $\textbf{p}$ is defined in reference frame 1, and reference frame 1 is rotated 45 degrees from and located at $(d_x, d_y)$ in reference frame 0. To account for this translation and rotation a new matrix will be created that includes both. It is generated by appending the distances, denoted $\textbf{d}$, to the rotation matrix, along with a row of zeros ending in a 1, to get a transformation matrix:

$$^1\textbf{T}_0 = \begin{bmatrix} \cos(\theta) & -\sin(\theta) & d_x \\ \sin(\theta) & \cos(\theta) & d_y \\ 0 & 0 & 1 \end{bmatrix}.$$

To make the matrix-vector multiplications work out, a homogeneous representation must be used, which adds an extra row with a 1 to the end of the vector, to give

$$\textbf{p} = [p_x, p_y, 1]^T.$$

When the position vector $\textbf{p}$ is multiplied by the transformation matrix, the result is the point's position in reference frame 0, and it matches what visual inspection of the figure suggests. To get the $(x, y)$ coordinates of the point in reference frame 0, simply take the first two elements of the resulting vector.
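As a quick sketch of this in NumPy, with a made-up 45 degree rotation and translation:

    theta = np.pi / 4                 # frame 1 is rotated 45 degrees from frame 0
    dx, dy = 1.0, 2.0                 # and translated by (1, 2) (values made up)
    T10 = np.array([[np.cos(theta), -np.sin(theta), dx],
                    [np.sin(theta),  np.cos(theta), dy],
                    [0.,             0.,            1.]])
    p = np.array([1.0, 0.0, 1.0])     # homogeneous representation of the point (1, 0)
    p0 = np.dot(T10, p)[:2]           # (x, y) coordinates of the point in frame 0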

Applying multiple transformations

We can also string these things together! What if we have a 3-link, planar (i.e. rotating in the $(x, y)$ plane) robot arm? A setup like this:

We know that our end-effector is at point $\textbf{p}$ in reference frame 2, which is at an 80 degree angle from reference frame 1 and located at $(d_x, d_y)$ in that frame. That gives us a transformation matrix

$$^2\textbf{T}_1 = \begin{bmatrix} \cos(80^\circ) & -\sin(80^\circ) & d_x \\ \sin(80^\circ) & \cos(80^\circ) & d_y \\ 0 & 0 & 1 \end{bmatrix}.$$

To get our point in terms of reference frame 0, we account for the transform from reference frame 1 into reference frame 2 with $^2\textbf{T}_1$, and then account for the transform from reference frame 0 into reference frame 1 with our previously defined transformation matrix $^1\textbf{T}_0$:

$$\textbf{p}_0 = \; ^1\textbf{T}_0 \; ^2\textbf{T}_1 \, \textbf{p}.$$

So let's give it a shot! By eyeballing the figure we can guess roughly where the answer should land, and it's a good thing we don't have to rely on just eyeballing it: accurate drawings might have helped, but the math gives us an exact answer. Super!

And one more note: if we're often performing this computation, then instead of performing 2 matrix multiplications every time, we can work out

$$^2\textbf{T}_0 = \; ^1\textbf{T}_0 \; ^2\textbf{T}_1$$

and simply multiply our point in reference frame 2 by this new transformation matrix to calculate the coordinates in reference frame 0.
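A sketch of the chaining, reusing the T10 matrix built above (the frame 2 offset values are again made up):

    theta21 = 80 * np.pi / 180        # frame 2 is at 80 degrees from frame 1
    dx2, dy2 = 1.0, 0.0               # offset of frame 2 in frame 1 (made up)
    T21 = np.array([[np.cos(theta21), -np.sin(theta21), dx2],
                    [np.sin(theta21),  np.cos(theta21), dy2],
                    [0.,               0.,              1.]])
    T20 = np.dot(T10, T21)            # compose the transforms once...
    p0 = np.dot(T20, p)               # ...then one multiply per point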

Forward transform matrices in 3D

The example here is taken from Samir Menon's RPP control tutorial.

It turns out it's trivial to add in the $z$ dimension and start accounting for 3D transformations. Let's say we have a standard revolute-prismatic-prismatic robot, which looks exactly like this, or roughly like this:

where the base rotates around the $z$ axis, and the distance from reference frame 0 to reference frame 1 is 1 unit, also along the $z$ axis. The rotation matrix from reference frame 0 to reference frame 1 is:

$$^1\textbf{R}_0 = \begin{bmatrix} \cos(q_0) & -\sin(q_0) & 0 \\ \sin(q_0) & \cos(q_0) & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

and the translation vector is

$$^1\textbf{d}_0 = [0, 0, 1]^T.$$

The transformation matrix from reference frame 0 to reference frame 1 is then:

$$^1\textbf{T}_0 = \begin{bmatrix} \cos(q_0) & -\sin(q_0) & 0 & 0 \\ \sin(q_0) & \cos(q_0) & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix},$$

where the third column indicates that there was no rotation around the $z$ axis in moving between reference frames, and the fourth (translation) column shows that we move 1 unit along the $z$ axis. The fourth row is again only present to make the multiplications work out and provides no information.
For the transformation from reference frame 1 to reference frame 2, there is no rotation (because it is a prismatic joint), and there is translation along the $x$ axis of reference frame 1 equal to $q_1$. This gives a transformation matrix:

$$^2\textbf{T}_1 = \begin{bmatrix} 1 & 0 & 0 & q_1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

The final transformation, from the origin of reference frame 2 to the end-effector position, is similarly another transformation with no rotation (because this joint is also prismatic), that translates along the $z$ axis:

$$^{ee}\textbf{T}_2 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & q_2 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

The full transformation from reference frame 0 to the end-effector is found by combining all of the above transformation matrices:

$$^{ee}\textbf{T}_0 = \; ^1\textbf{T}_0 \; ^2\textbf{T}_1 \; ^{ee}\textbf{T}_2.$$

To transform a point from the end-effector reference frame into terms of the origin reference frame, simply multiply the transformation matrix by the point of interest relative to the end-effector. If it is the end-effector position that is of interest to us, $\textbf{p} = [0, 0, 0, 1]^T$. For example, given specific values of $q_0$, $q_1$, and $q_2$, substituting in and multiplying out $^{ee}\textbf{T}_0 \, \textbf{p}$ gives the end-effector location in base coordinates.
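Here's a minimal NumPy sketch of the whole 3D chain; the joint values and helper function names are my own choices, following the axis conventions above:

    import numpy as np

    def T_revolute_z(q):
        # rotation about the z axis, plus the 1 unit offset up the z axis
        c, s = np.cos(q), np.sin(q)
        return np.array([[c, -s, 0., 0.],
                         [s,  c, 0., 0.],
                         [0., 0., 1., 1.],
                         [0., 0., 0., 1.]])

    def T_prismatic(d):
        # pure translation by the vector d = (dx, dy, dz)
        T = np.eye(4)
        T[:3, 3] = d
        return T

    q0, q1, q2 = np.pi / 4, 0.5, 0.25   # arbitrary joint values
    T = np.dot(T_revolute_z(q0),
               np.dot(T_prismatic((q1, 0., 0.)), T_prismatic((0., 0., q2))))
    p = np.array([0., 0., 0., 1.])      # end-effector position in its own frame
    print(np.dot(T, p)[:3])             # end-effector position in base coordinates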

Inverting our transformation matrices

What if we know where a point is defined in reference frame 0, but we want to know where it is relative to our end-effector's reference frame? Fortunately this is straightforward, thanks to the way that we've defined our transform matrices. Continuing the same robot example and configuration as above, and denoting the rotation part of the transform matrix $\textbf{R}$ and the translation part $\textbf{d}$, the inverse transform is defined:

$$\left( ^{ee}\textbf{T}_0 \right)^{-1} = \begin{bmatrix} \textbf{R}^T & -\textbf{R}^T \textbf{d} \\ \textbf{0} & 1 \end{bmatrix}.$$

So if we have a point defined in reference frame 0, multiplying it by this inverse transformation matrix gives its coordinates relative to the end-effector.
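And a sketch of the inversion, for any transformation matrix T of the above form (such as the one built in the previous sketch):

    def invert_transform(T):
        # [R d; 0 1]^-1 = [R^T  -R^T d; 0 1]
        R, d = T[:3, :3], T[:3, 3]
        T_inv = np.eye(4)
        T_inv[:3, :3] = R.T
        T_inv[:3, 3] = -np.dot(R.T, d)
        return T_inv

    p0 = np.array([1., 1., 1., 1.])         # a point in frame 0 (values made up)
    p_ee = np.dot(invert_transform(T), p0)  # the same point relative to the end-effector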

Conclusions

These are, of course, just the basics of forward transformation matrices. There are numerous ways to go about this, but this method is fairly straightforward. If you're interested in more, there are a bunch of YouTube videos and detailed tutorials all over the web. There's a bunch of neat stuff about why the matrices are set up like they are (search: homogeneous transformations) and more complex examples.

The robot example for the 3D case here didn't have any spherical joints; each joint only moved in 2 dimensions. It is also possible to derive the forward transformation matrix in that case, it's just more complex, and it's not necessary to move onward here since spherical joints aren't used in the robots I'll be looking at. This introduction is enough to get started and move on to some more exciting things, so let's do that!

Robot control part 2: Jacobians, velocity, and force

Jacobian matrices are a super useful tool, and heavily used throughout robotics and control theory. Basically, a Jacobian defines the dynamic relationship between two different representations of a system. For example, if we have a 2-link robotic arm, there are two obvious ways to describe its current position: 1) the end-effector position and orientation (which we will denote $\textbf{x}$), and 2) as the set of joint angles (which we will denote $\textbf{q}$). The Jacobian for this system relates how movement of the elements of $\textbf{q}$ causes movement of the elements of $\textbf{x}$. You can think of a Jacobian as a transform matrix for velocity.

Formally, a Jacobian is a matrix of partial derivatives:

$$\textbf{J}(\textbf{q}) = \frac{\partial \textbf{x}}{\partial \textbf{q}}.$$

With a bit of manipulation we can get a neat result:

$$\frac{\partial \textbf{x}}{\partial t} = \frac{\partial \textbf{x}}{\partial \textbf{q}} \, \frac{\partial \textbf{q}}{\partial t},$$

or

$$\dot{\textbf{x}} = \textbf{J}(\textbf{q}) \, \dot{\textbf{q}},$$

where $\dot{\textbf{x}}$ and $\dot{\textbf{q}}$ represent the time derivatives of $\textbf{x}$ and $\textbf{q}$. This tells us that the end-effector velocity is equal to the Jacobian, $\textbf{J}(\textbf{q})$, multiplied by the joint angle velocity.

Why is this important? Well, this goes back to our desire to control in operational (or task) space. We're interested in planning a trajectory in a different space than the one that we control directly. In our robot arm, control is effected through a set of motors that apply torque to the joint angles, BUT what we'd like is to plan our trajectory in terms of end-effector position (and possibly orientation), generating control signals in terms of forces to apply in $(x, y, z)$ space. Jacobians allow us a direct way to calculate what the control signal is in the space that we control (torques), given a control signal in one we don't (end-effector forces). The above equivalence is a first step along the path to operational space control. As just mentioned, though, what we're really interested in isn't relating velocities, but forces. How can we do this?
Energy equivalence and Jacobians

Conservation of energy is a property of all physical systems: the amount of energy expended is the same no matter how the system in question is being represented. The planar two-link robot arm shown below will be used for illustration.

Let the joint angle positions be denoted $\textbf{q} = [q_0, q_1]^T$, and the end-effector position be denoted $\textbf{x} = [x, y]^T$.

Work is the application of force over a distance:

$$\textbf{W} = \int \textbf{F}^T \textbf{v} \; dt,$$

where $\textbf{W}$ is work, $\textbf{F}$ is force, and $\textbf{v}$ is velocity.

Power is the rate at which work is performed:

$$\textbf{P} = \frac{d\textbf{W}}{dt},$$

where $\textbf{P}$ is power.

Substituting the equation for work into the equation for power gives:

$$\textbf{P} = \textbf{F}^T \textbf{v}.$$

Because of energy equivalence, work is performed at the same rate regardless of the characterization of the system. Rewriting this in terms of end-effector space gives:

$$\textbf{P} = \textbf{F}_{\textbf{x}}^T \, \dot{\textbf{x}},$$

where $\textbf{F}_{\textbf{x}}$ is the force applied to the hand, and $\dot{\textbf{x}}$ is the velocity of the hand. Rewriting the above in terms of joint-space gives:

$$\textbf{P} = \textbf{F}_{\textbf{q}}^T \, \dot{\textbf{q}},$$

where $\textbf{F}_{\textbf{q}}$ is the torque applied to the joints, and $\dot{\textbf{q}}$ is the angular velocity of the joints. Setting these two equations (in end-effector and joint space) equal to each other and substituting in our equation for the Jacobian gives:

$$\textbf{F}_{\textbf{q}}^T \, \dot{\textbf{q}} = \textbf{F}_{\textbf{x}}^T \, \dot{\textbf{x}} = \textbf{F}_{\textbf{x}}^T \, \textbf{J}(\textbf{q}) \, \dot{\textbf{q}},$$

$$\textbf{F}_{\textbf{q}} = \textbf{J}^T(\textbf{q}) \, \textbf{F}_{\textbf{x}},$$

where $\textbf{J}(\textbf{q})$ is the Jacobian for the end-effector of the robot, and $\textbf{F}_{\textbf{q}}$ represents the forces in joint-space that affect movement of the hand. This says that not only does the Jacobian relate velocity from one state-space representation to another, it can also be used to calculate what the forces in joint space should be to effect a desired set of forces in end-effector space.

Building the Jacobian

First, we need to define the relationship between the position of the end-effector and the
robots joint angles, . However will we do it? Well, we know the distances from the
shoulder to the elbow, and elbow to the wrist, as well as the joint angles, and were interested in
finding out where the end-effector is relative to a base coordinate frameOH MAYBE we
should use those forward transformation matrices from the previous post. Lets do it!

The forward transformation matrix

Recall that transformation matrices allow a given point to be transformed between different reference frames. In this case, the position of the end-effector relative to the second joint of the robot arm is known, but where it is relative to the base reference frame (the first joint's reference frame, in this case) is of interest. This means that only one transformation matrix is needed, transforming from the reference frame attached to the second joint back to the base.

The rotation part of this matrix is straightforward to define, as in the previous section:

$$^1\textbf{R}_0 = \begin{bmatrix} \cos(q_0) & -\sin(q_0) & 0 \\ \sin(q_0) & \cos(q_0) & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$

The translation part of the transformation matrix is a little different than before, because reference frame 1 changes as a function of the angle of the previous joint. From trigonometry, given a vector of length $r$ and an angle $q$, the $x$ position of the end point is $r \cos(q)$, and the $y$ position is $r \sin(q)$. The arm is operating in the $(x, y)$ plane, so the $z$ position will always be 0.

Using this knowledge, the translation part of the transformation matrix is defined:

$$^1\textbf{d}_0 = [L_0 \cos(q_0), \; L_0 \sin(q_0), \; 0]^T,$$

giving the forward transformation matrix:

$$^1\textbf{T}_0 = \begin{bmatrix} \cos(q_0) & -\sin(q_0) & 0 & L_0 \cos(q_0) \\ \sin(q_0) & \cos(q_0) & 0 & L_0 \sin(q_0) \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},$$

which transforms a point from reference frame 1 (elbow joint) to reference frame 0 (shoulder joint / base).

The point of interest is the end-effector, which is defined in reference frame 1 (in homogeneous coordinates) as a function of the joint angle $q_1$ and the length of the second arm segment, $L_1$:

$$\textbf{x} = [L_1 \cos(q_1), \; L_1 \sin(q_1), \; 0, \; 1]^T.$$

To find the position of our end-effector in terms of the origin reference frame, multiply the point by the transformation matrix $^1\textbf{T}_0$:

$$^1\textbf{T}_0 \, \textbf{x} = \begin{bmatrix} L_1 \cos(q_0)\cos(q_1) - L_1 \sin(q_0)\sin(q_1) + L_0 \cos(q_0) \\ L_1 \sin(q_0)\cos(q_1) + L_1 \cos(q_0)\sin(q_1) + L_0 \sin(q_0) \\ 0 \end{bmatrix},$$

where, by pulling out the $L_1$ term and using the trig identities

$$\cos(\alpha)\cos(\beta) - \sin(\alpha)\sin(\beta) = \cos(\alpha + \beta)$$

and

$$\sin(\alpha)\cos(\beta) + \cos(\alpha)\sin(\beta) = \sin(\alpha + \beta),$$

the position of our end-effector can be rewritten:

$$\begin{bmatrix} L_0 \cos(q_0) + L_1 \cos(q_0 + q_1) \\ L_0 \sin(q_0) + L_1 \sin(q_0 + q_1) \\ 0 \end{bmatrix},$$

which is the position of the end-effector in terms of joint angles. As mentioned above, however, both the position of the end-effector and its orientation are needed; the rotation of the end-effector relative to the base frame must also be defined.

Accounting for orientation

Fortunately, defining orientation is simple, especially for systems with only revolute and prismatic joints (spherical joints will not be considered here). With prismatic joints, which are linear and move in a single plane, the rotation introduced is 0. With revolute joints, the rotation of the end-effector in each axis is simply the sum of the rotations of each joint around their respective axes of rotation.

In the example case, the joints are rotating around the $z$ axis, so the rotation part of our end-effector state is

$$\boldsymbol{\omega} = [\omega_x, \; \omega_y, \; \omega_z]^T = [0, \; 0, \; q_0 + q_1]^T,$$

where $\omega$ denotes angular rotation. If the first joint had been rotating in a different plane, e.g. in the $(y, z)$ plane around the $x$ axis instead, then the orientation would be

$$\boldsymbol{\omega} = [q_0, \; 0, \; q_1]^T.$$

Partial differentiation

Once the position and orientation of the end-effector have been calculated, the partial derivatives of these equations need to be calculated with respect to the elements of $\textbf{q}$. For simplicity, the Jacobian will be broken up into two parts, $\textbf{J}_v$ and $\textbf{J}_\omega$, representing the linear and angular velocity, respectively, of the end-effector.

The linear velocity part of our Jacobian is:

$$\textbf{J}_v = \begin{bmatrix} -L_0 \sin(q_0) - L_1 \sin(q_0 + q_1) & -L_1 \sin(q_0 + q_1) \\ L_0 \cos(q_0) + L_1 \cos(q_0 + q_1) & L_1 \cos(q_0 + q_1) \\ 0 & 0 \end{bmatrix}.$$

The angular velocity part of our Jacobian is:

$$\textbf{J}_\omega = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 1 \end{bmatrix}.$$

The full Jacobian for the end-effector is then:

$$\textbf{J}(\textbf{q}) = \begin{bmatrix} \textbf{J}_v \\ \textbf{J}_\omega \end{bmatrix} = \begin{bmatrix} -L_0 \sin(q_0) - L_1 \sin(q_0 + q_1) & -L_1 \sin(q_0 + q_1) \\ L_0 \cos(q_0) + L_1 \cos(q_0 + q_1) & L_1 \cos(q_0 + q_1) \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 1 & 1 \end{bmatrix}.$$
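As a quick check of the algebra, here's this Jacobian as a NumPy function; the function name and the default unit segment lengths are my own choices:

    import numpy as np

    def jacobian(q, L=(1.0, 1.0)):
        # full (6 x 2) end-effector Jacobian of the planar 2-link arm
        s0, c0 = np.sin(q[0]), np.cos(q[0])
        s01, c01 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
        return np.array([
            [-L[0]*s0 - L[1]*s01, -L[1]*s01],  # dx/dq0, dx/dq1
            [ L[0]*c0 + L[1]*c01,  L[1]*c01],  # dy/dq0, dy/dq1
            [0., 0.],                          # z position never changes
            [0., 0.],                          # no rotation around the x axis
            [0., 0.],                          # no rotation around the y axis
            [1., 1.]])                         # both joints rotate around z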

Analyzing the Jacobian

Once the Jacobian is built, it can be analysed for insight about the relationship between $\dot{\textbf{x}}$ and $\dot{\textbf{q}}$.

For example, there is a large block of zeros in the middle of the Jacobian defined above: the row corresponding to linear velocity along the $z$ axis, and the rows corresponding to angular velocity around the $x$ and $y$ axes. This means that the $z$ position, and the rotations $\omega_x$ and $\omega_y$, are not controllable. This can be seen by going back to the first Jacobian equation:

$$\dot{\textbf{x}} = \textbf{J}(\textbf{q}) \, \dot{\textbf{q}}.$$

No matter what the values of $\dot{\textbf{q}}$, it is impossible to affect $z$, $\omega_x$, or $\omega_y$, because the corresponding rows during the above multiplication with the Jacobian are all zeros. These rows of zeros in the Jacobian are referred to as its 'null space'. Because these variables can't be controlled, they will be dropped from both $\textbf{x}$ and $\textbf{J}(\textbf{q})$.

Looking at the variables that can be affected, it can be seen that given any two of $x$, $y$, and $\omega_z$, the third can be calculated, because the robot only has 2 degrees of freedom (the shoulder and elbow). This means that only two of the end-effector variables can actually be controlled. In the situation of controlling a robot arm, it is most useful to control the $(x, y)$ coordinates, so $\omega_z$ will be dropped from the force vector and Jacobian.

After removing the redundant term, the force vector representing the controllable end-effector forces is

$$\textbf{F}_{\textbf{x}} = \begin{bmatrix} F_x \\ F_y \end{bmatrix},$$

where $F_x$ is force along the $x$ axis, $F_y$ is force along the $y$ axis, and the Jacobian is written

$$\textbf{J}(\textbf{q}) = \begin{bmatrix} -L_0 \sin(q_0) - L_1 \sin(q_0 + q_1) & -L_1 \sin(q_0 + q_1) \\ L_0 \cos(q_0) + L_1 \cos(q_0 + q_1) & L_1 \cos(q_0 + q_1) \end{bmatrix}.$$

If instead $\omega_z$, i.e. torque around the $z$ axis, were chosen as a controlled force, then the force vector and Jacobian would be (assuming force along the $x$ axis was also chosen):

$$\textbf{F}_{\textbf{x}} = \begin{bmatrix} F_x \\ F_{\omega_z} \end{bmatrix}, \quad \textbf{J}(\textbf{q}) = \begin{bmatrix} -L_0 \sin(q_0) - L_1 \sin(q_0 + q_1) & -L_1 \sin(q_0 + q_1) \\ 1 & 1 \end{bmatrix}.$$

But we'll stick with control of the $x$ and $y$ forces instead, as it's a little more straightforward.

Using the Jacobian

With our Jacobian, we can find out what different joint angle velocities will cause in terms of the end-effector linear and angular velocities, and we can also transform desired $(x, y)$ forces into torques. Let's do a couple of examples. Note that in the former case we'll be using the full Jacobian, while in the latter case we can use the simplified Jacobian specified just above.

Example 1

Given known joint angle velocities $\dot{\textbf{q}}$, an arm configuration $\textbf{q}$, and the arm segment lengths $L_0$ and $L_1$, the velocities of the end-effector can be calculated by substituting the system state at the current time into the equation for the Jacobian:

$$\dot{\textbf{x}} = \textbf{J}(\textbf{q}) \, \dot{\textbf{q}}.$$

The first three elements of the result are the end-effector's linear velocity, and the last three its angular velocity.
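For instance, using the jacobian() sketch from above with some made-up values:

    q = np.array([np.pi / 4, 3 * np.pi / 8])  # arm configuration (made up)
    dq = np.array([1.0, 1.0])                 # joint angle velocities (made up)
    dx = np.dot(jacobian(q), dq)              # [linear x, y, z, angular x, y, z]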

Example 2

Given the same system and configuration as in the previous example, as well as a trajectory planned in $(x, y)$ space, the goal is to calculate the torques required to get the end-effector to move as desired. The controlled variables will be forces along the $x$ and $y$ axes, and so the reduced Jacobian from the previous section will be used. Given the desired end-effector forces $\textbf{F}_{\textbf{x}}$, the corresponding joint torques are found by substituting the desired end-effector forces and current system state parameters into the equation relating forces in end-effector and joint space:

$$\textbf{F}_{\textbf{q}} = \textbf{J}^T(\textbf{q}) \, \textbf{F}_{\textbf{x}}.$$

These are the torques to apply to the system, given the current configuration, to get the end-effector to move as desired, without accounting for the effects of inertia and gravity.
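Continuing the sketch from Example 1, with a made-up desired force:

    Fx = np.array([1.0, 1.0])           # desired (x, y) end-effector force (made up)
    J_reduced = jacobian(q)[:2, :]      # keep only the x and y rows
    tau = np.dot(J_reduced.T, Fx)       # the joint torques to apply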

And now we are able to transform end-effector forces into torques, and joint angle velocities into
end-effector velocities! What a wonderful, wonderful tool to have at our disposal! Hurrah for
Jacobians!

Conclusions

In this post I've gone through how to use Jacobians to relate the movement of joint angle and end-effector system state characterizations, but Jacobians can be used to relate any two characterizations. All you need to do is define one in terms of the other and do some partial differentiation. The above example scenarios were of course very simple, and didn't worry about compensating for anything like gravity. But don't worry, that's exactly what we're going to look at in our exciting next chapter!

Something that I found interesting to consider is the need for the orientation of the end-effector and finding the angular velocities. Often in simpler robot arms we're only interested in the position of the end-effector, so it's easy to write off orientation. But if we had a gripper attached to the end-effector, then suddenly the orientation becomes very important, often determining whether or not an object can be picked up.

And finally, if you're interested in reading more about all this, I recommend checking out 'Velocity kinematics: The manipulator Jacobian', available online; it's a great resource.

Robot control part 3: Accounting for mass and gravity

In the exciting previous post we looked at how to go about generating a Jacobian matrix, which we could use to transform both from joint angle velocities to end-effector velocities, and from desired end-effector forces into joint angle torques. I briefly mentioned right at the end that using just this force transformation to build your control signal is only appropriate for very simple systems that don't have to account for things like arm-link mass or gravity.

In general, however, mass and gravity must be accounted for and cancelled out. The full dynamics of a robot arm are

$$\textbf{M}(\textbf{q}) \, \ddot{\textbf{q}} + \textbf{C}(\textbf{q}, \dot{\textbf{q}}) + \textbf{g}(\textbf{q}) = \textbf{u},$$

where $\ddot{\textbf{q}}$ is joint angle acceleration, $\textbf{u}$ is the control signal (specifying torque), $\textbf{C}(\textbf{q}, \dot{\textbf{q}})$ is a function describing the Coriolis and centrifugal effects, $\textbf{g}(\textbf{q})$ is the effect of gravity in joint space, and $\textbf{M}(\textbf{q})$ is the mass matrix of the system in joint space.

There are a lot of terms involved in the system acceleration, so while the Jacobian can be used to transform forces between coordinate systems, it is clear that just setting the control signal $\textbf{u} = \textbf{J}^T(\textbf{q}) \, \textbf{F}_{\textbf{x}}$ is not sufficient, because a lot of the dynamics affecting acceleration aren't accounted for. In this section an effective PD controller operating in joint space will be developed that will allow for more precise control by cancelling out unwanted acceleration terms. To do this, the effects of inertia and gravity need to be calculated.

Accounting for inertia

The fact that systems have mass is a pain in our controller's side, because it introduces inertia into the system, making control of how the system will move at any given point in time more difficult. Mass can be thought of as an object's unwillingness to respond to applied forces. The heavier something is, the more resistant it is to acceleration, and the force required to move a system along a desired trajectory depends on both the object's mass and its current acceleration.

To effectively control a system, the system inertia needs to be calculated so that it can be included in the control signal and cancelled out.

Consider the robot arm above, operating in the $(x, y)$ plane, with the $z$ axis extending into the picture, where the yellow circles represent each link's centre of mass (COM). The position of each link's COM is defined relative to that link's reference frame, and the goal is to figure out how much each link's mass will affect the system dynamics.

The first step is to transform the representation of each of the COMs from Cartesian coordinates in the reference frame of their respective arm segments into terms of joint angles, such that the Jacobian for each COM can be calculated.

Let the COM positions relative to each segment's coordinate frame be (taking each COM to sit at the midpoint of its link):

$$\textbf{com}_0 = \begin{bmatrix} \frac{L_0}{2} \cos(q_0) \\ \frac{L_0}{2} \sin(q_0) \\ 0 \end{bmatrix}, \quad \textbf{com}_1 = \begin{bmatrix} \frac{L_1}{2} \cos(q_1) \\ \frac{L_1}{2} \sin(q_1) \\ 0 \end{bmatrix}.$$

The first segment's COM is already in base coordinates (since the first link and the base share the same coordinate frame), so all that is required is the position of the second link's COM in the base reference frame, which can be found with the transformation matrix $^1\textbf{T}_0$ from the previous post. Using $^1\textbf{T}_0$ to transform $\textbf{com}_1$ (in homogeneous coordinates) gives

$$^1\textbf{T}_0 \, \textbf{com}_1 = \begin{bmatrix} L_0 \cos(q_0) + \frac{L_1}{2} \cos(q_0 + q_1) \\ L_0 \sin(q_0) + \frac{L_1}{2} \sin(q_0 + q_1) \\ 0 \end{bmatrix}.$$

To see the full computation worked out explicitly, please see my previous robot control post.

Now that we have the COM positions in terms of joint angles, we can find the Jacobians for each point through our Jacobian equation:

$$\textbf{J}^{com_i} = \frac{\partial \, \textbf{com}_i}{\partial \textbf{q}}.$$

Using this for each link gives us:

$$\textbf{J}^{com_0} = \begin{bmatrix} -\frac{L_0}{2} \sin(q_0) & 0 \\ \frac{L_0}{2} \cos(q_0) & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 1 & 0 \end{bmatrix}, \quad \textbf{J}^{com_1} = \begin{bmatrix} -L_0 \sin(q_0) - \frac{L_1}{2} \sin(q_0 + q_1) & -\frac{L_1}{2} \sin(q_0 + q_1) \\ L_0 \cos(q_0) + \frac{L_1}{2} \cos(q_0 + q_1) & \frac{L_1}{2} \cos(q_0 + q_1) \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 1 & 1 \end{bmatrix}.$$

Kinetic energy

The total energy of a system can be calculated as a sum of the energy introduced from each source. The Jacobians just derived will be used to calculate the kinetic energy each link generates during motion. Each link's kinetic energy will be calculated and summed to get the total energy introduced into the system by the mass and configuration of each link.

Kinetic energy (KE) is one half of mass times velocity squared:

$$KE = \frac{1}{2} \, \dot{\textbf{x}}^T \textbf{M}_{\textbf{x}} \, \dot{\textbf{x}},$$

where $\textbf{M}_{\textbf{x}}$ is the mass matrix of the system, with the subscript $\textbf{x}$ denoting that it is defined in Cartesian space, and $\dot{\textbf{x}}$ is a velocity vector of the form

$$\dot{\textbf{x}} = [\dot{x}, \; \dot{y}, \; \dot{z}, \; \omega_x, \; \omega_y, \; \omega_z]^T,$$

and the mass matrix is structured

$$\textbf{M}_{\textbf{x}_i} = \begin{bmatrix} m_i & 0 & 0 & 0 & 0 & 0 \\ 0 & m_i & 0 & 0 & 0 & 0 \\ 0 & 0 & m_i & 0 & 0 & 0 \\ 0 & 0 & 0 & I_{xx} & I_{xy} & I_{xz} \\ 0 & 0 & 0 & I_{yx} & I_{yy} & I_{yz} \\ 0 & 0 & 0 & I_{zx} & I_{zy} & I_{zz} \end{bmatrix},$$

where $m_i$ is the mass of COM $i$, and the $I$ terms are the moments of inertia, which define the object's resistance to change in angular velocity about the axes, in the same way that the mass element defines the object's resistance to changes in linear velocity.

As mentioned above, the mass matrix for the COM of each link is defined in Cartesian coordinates in its respective arm segment's reference frame. The effects of mass need to be found in joint angle space, however, because that is where the controller operates. Looking at the summation of the KE introduced by each COM:

$$KE = \frac{1}{2} \sum_i \dot{\textbf{x}}_i^T \, \textbf{M}_{\textbf{x}_i} \, \dot{\textbf{x}}_i,$$

and substituting in $\dot{\textbf{x}}_i = \textbf{J}^{com_i} \, \dot{\textbf{q}}$,

$$KE = \frac{1}{2} \sum_i \dot{\textbf{q}}^T \left( \textbf{J}^{com_i} \right)^T \textbf{M}_{\textbf{x}_i} \, \textbf{J}^{com_i} \, \dot{\textbf{q}},$$

and moving the $\dot{\textbf{q}}$ terms outside the summation,

$$KE = \frac{1}{2} \, \dot{\textbf{q}}^T \left( \sum_i \left( \textbf{J}^{com_i} \right)^T \textbf{M}_{\textbf{x}_i} \, \textbf{J}^{com_i} \right) \dot{\textbf{q}}.$$

Defining

$$\textbf{M}(\textbf{q}) = \sum_i \left( \textbf{J}^{com_i} \right)^T \textbf{M}_{\textbf{x}_i} \, \textbf{J}^{com_i}$$

gives

$$KE = \frac{1}{2} \, \dot{\textbf{q}}^T \, \textbf{M}(\textbf{q}) \, \dot{\textbf{q}},$$

which is the equation for calculating kinetic energy in joint space. Thus, $\textbf{M}(\textbf{q})$ denotes the inertia matrix in joint space.
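Here's a minimal NumPy sketch of this sum for the 2-link arm; the helper name, masses, and moments of inertia are all placeholder choices of mine:

    import numpy as np

    def gen_jcoms(q, L=(1.0, 1.0)):
        # (6 x 2) Jacobians for each link's COM, taken at the link midpoints
        s0, c0 = np.sin(q[0]), np.cos(q[0])
        s01, c01 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
        Jcom0 = np.array([[-L[0]/2.*s0, 0.], [L[0]/2.*c0, 0.],
                          [0., 0.], [0., 0.], [0., 0.], [1., 0.]])
        Jcom1 = np.array([[-L[0]*s0 - L[1]/2.*s01, -L[1]/2.*s01],
                          [ L[0]*c0 + L[1]/2.*c01,  L[1]/2.*c01],
                          [0., 0.], [0., 0.], [0., 0.], [1., 1.]])
        return Jcom0, Jcom1

    def gen_Mq(q, m=(1.0, 1.0), izz=(0.1, 0.1)):
        # M(q) = sum of J^T Mx J over the links (masses and inertias made up;
        # for a planar arm only the Izz moment matters)
        Mq = np.zeros((2, 2))
        for Jcom, mass, inertia in zip(gen_jcoms(q), m, izz):
            Mx = np.diag([mass, mass, mass, 0., 0., inertia])
            Mq += np.dot(Jcom.T, np.dot(Mx, Jcom))
        return Mq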

Now that we've successfully calculated the mass matrix of the system in joint space, we can incorporate it into our control signal and cancel out its effects on the system dynamics! On to the next problem!

Accounting for gravity

With the forces of inertia accounted for, we can now address the problem of gravity. To compensate for gravity, the concept of conservation of energy (i.e. that the work done by gravity is the same in all coordinate systems) will once again be pulled out. The controller operates by applying torque on joints, so it is necessary to be able to calculate the effect of gravity in joint space to cancel it out.

While the effect of gravity in joint space isn't obvious, it is quite easily defined in Cartesian coordinates in the base frame of reference. Here, the work done by gravity is simply the summation of the distance each link's centre of mass has moved, multiplied by the force of gravity. Where the force of gravity in Cartesian space is the mass of the object multiplied by $-9.8 \, \textrm{m/s}^2$ along the $y$ axis, the equation for the work done by gravity is written:

$$\textbf{W}_g = \sum_i \textbf{F}_{g_i}^T \, \textbf{x}_i,$$

where $\textbf{F}_{g_i}$ is the force of gravity on the $i$th arm segment. Because of the conservation of energy, work is performed at the same rate when calculated in joint space, so differentiating with respect to time and equating the joint-space and Cartesian expressions for the rate of work gives:

$$\textbf{g}(\textbf{q})^T \, \dot{\textbf{q}} = \sum_i \textbf{F}_{g_i}^T \, \dot{\textbf{x}}_i,$$

and then, substituting in $\dot{\textbf{x}}_i = \textbf{J}^{com_i} \, \dot{\textbf{q}}$,

$$\textbf{g}(\textbf{q})^T \, \dot{\textbf{q}} = \sum_i \textbf{F}_{g_i}^T \, \textbf{J}^{com_i} \, \dot{\textbf{q}},$$

and cancelling out the $\dot{\textbf{q}}$ terms on both sides,

$$\textbf{g}(\textbf{q}) = \sum_i \left( \textbf{J}^{com_i} \right)^T \textbf{F}_{g_i},$$

which says that to find the effect of gravity in joint space, simply multiply the mass of each link by its Jacobian, multiplied by the force of gravity in $(x, y, z)$ space, and sum over each link. This summation gives the total effect of gravity on the system.
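And a matching sketch of the gravity term, reusing the gen_jcoms() helper from above (sign convention chosen to match the control signal below):

    def gen_gq(q, m=(1.0, 1.0)):
        # effect of gravity in joint space: sum of J^T F_g over the links
        gq = np.zeros(2)
        for Jcom, mass in zip(gen_jcoms(q), m):
            Fg = np.array([0., -9.8 * mass, 0., 0., 0., 0.])  # gravity along -y
            gq += np.dot(Jcom.T, Fg)
        return -gq  # g(q): the torque the controller must add to counteract gravity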

Making a PD controller in joint space

We are now able to account for the energy in the system caused by inertia and gravity, great! Let's use this to build a simple PD controller in joint space. Control should be very straightforward, because once we cancel out the effects of gravity and inertia we can almost pretend that the system behaves linearly. This means that we can also treat control of each of the joints independently, since their movements no longer affect one another. So in our control system we're actually going to have a PD controller for each joint.

The above-mentioned nonlinearity that's left in the system dynamics is due to the Coriolis and centrifugal effects. Now, these can be accounted for, but they require a highly accurate model of the moments of inertia. If the moments are incorrect then the controller can actually introduce instability into the system, so it's better if we just don't address them.

Rewriting the system dynamics presented at the very top in terms of acceleration gives

$$\ddot{\textbf{q}} = \textbf{M}^{-1}(\textbf{q}) \left( \textbf{u} - \textbf{C}(\textbf{q}, \dot{\textbf{q}}) - \textbf{g}(\textbf{q}) \right).$$

Ideally, the control signal would be constructed

$$\textbf{u} = \textbf{M}(\textbf{q}) \, \ddot{\textbf{q}}_{des} + \textbf{C}(\textbf{q}, \dot{\textbf{q}}) + \textbf{g}(\textbf{q}),$$

where $\ddot{\textbf{q}}_{des}$ is the desired acceleration of the system. This would result in system acceleration

$$\ddot{\textbf{q}} = \ddot{\textbf{q}}_{des},$$

which would be ideal. As mentioned, because the Coriolis and centrifugal effects are tricky to account for, we'll leave them out, so instead the control signal is

$$\textbf{u} = \textbf{M}(\textbf{q}) \, \ddot{\textbf{q}}_{des} + \textbf{g}(\textbf{q}).$$

Using a standard PD control formula to generate the desired acceleration,

$$\ddot{\textbf{q}}_{des} = k_p \, (\textbf{q}_{des} - \textbf{q}) + k_v \, (\dot{\textbf{q}}_{des} - \dot{\textbf{q}}),$$

where $k_p$ and $k_v$ are our gain values, the control signal is now fully defined:

$$\textbf{u} = \textbf{M}(\textbf{q}) \left( k_p (\textbf{q}_{des} - \textbf{q}) + k_v (\dot{\textbf{q}}_{des} - \dot{\textbf{q}}) \right) + \textbf{g}(\textbf{q}),$$

and we've successfully built an effective PD controller in joint space!
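Putting the pieces together, one step of this controller might look like the following, with gen_Mq() and gen_gq() as sketched above and made-up gains and targets:

    kp, kv = 100.0, 10.0                          # made-up gain values
    q_des = np.array([np.pi / 4, np.pi / 4])      # target joint angles (made up)
    qdd_des = kp * (q_des - q) + kv * (0. - dq)   # PD signal, desired velocity = 0
    u = np.dot(gen_Mq(q), qdd_des) + gen_gq(q)    # torque command to the motors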

Conclusions

Here we looked at building a PD controller that operates in the joint space of a robotic arm and that can cancel out the effects of inertia and gravity. By cancelling out the effects of inertia, we can treat control of each of the joints independently, effectively orthogonalizing their control. This makes PD control super easy: we just set up a simple controller for each joint. Also a neat thing is that all of the required calculations can be performed with algorithms of linear complexity, so it's not a problem to do all of this super fast.

One of the finer points was that we ignored the Coriolis and centrifugal effects on the robot's dynamics. This is because the moments of inertia in the mass matrix model are notoriously hard to accurately capture on actual robots. Often you go based off of a CAD model of your robot and then have to do some fine-tuning by hand. So they will be unaccounted for in our control signal, but most of the time, as long as you have a very short feedback loop, you'll be fine.

I am really enjoying working through this, as things build on each other so well here, and we're starting to be able to do some really interesting things with the relatively straightforward forward transformation matrices and Jacobians that we learned how to build in the previous posts. This was for a very simple robot, but excitingly the next step after this is moving on to operational space control! At last. From there, we'll go on to look at more complex robotic situations where things like configuration redundancy are introduced and it's not quite so straightforward.

Robot control part 4: Operational space control


In this post we'll look at operational space control and how to derive the control equations. I'd like to mention again that these posts have all come about as a result of me reading and working through Samir Menon's operational space control tutorial, where he works through an implementation example on a revolute-prismatic-prismatic robot arm.

Generalized coordinates vs operational space

The term generalized coordinates refers to a characterization of the system that uniquely defines its configuration. For example, if our robot has 7 degrees of freedom, then there are 7 state variables, such that when all these variables are given we can fully account for the position of the robot. In the previous posts of this series we've been describing robotic arms in joint space, and for these systems joint space is an example of generalized coordinates. This means that if we know the angles of all of the joints, we can draw out exactly what position the robot is in. An example of a coordinate system that does not uniquely define the configuration of a robotic arm would be one that describes only the position of the end-effector.

So generalized coordinates tell us everything we need to know about where the robot is, that's great. The problem with generalized coordinates, though, is that planning trajectories in this space for tasks that we're interested in performing tends not to be straightforward. For example, if we have a robotic arm and we want to control the position of the end-effector, it's not obvious how to do so by specifying a trajectory for each of the arm's joints to follow through joint space.

The idea behind operational space control is to abstract away from the generalized coordinates of the system and plan a trajectory in a coordinate system that is directly relevant to the task that we wish to perform. Going back to the common end-effector position control situation, we would like to operate our arm in 3D $(x, y, z)$ Cartesian space. In this space, it's obvious what trajectory to follow to move the end-effector between two positions (most of the time it will just be a straight line in each dimension). So our goal is to build a control system that lets us specify a trajectory in our task space and will transform this signal into generalized coordinates that it can then send out to the system for execution.

Operational space control of simple robot arm

Alright, we're going to work through an example. The generalized coordinates for this example are going to be joint space, and the operational space is going to be the end-effector Cartesian coordinates relative to a reference frame attached to the base. Recycling the robot from the second post in this series, here's the set up we'll be working with:

Once again, we're going to need to find the Jacobian for the end-effector of the robot. Fortunately, we've already done this:

$$\textbf{J}(\textbf{q}) = \begin{bmatrix} -L_0 \sin(q_0) - L_1 \sin(q_0 + q_1) & -L_1 \sin(q_0 + q_1) \\ L_0 \cos(q_0) + L_1 \cos(q_0 + q_1) & L_1 \cos(q_0 + q_1) \end{bmatrix}.$$

Great! So now that we have $\textbf{J}(\textbf{q})$, we can go ahead and transform forces from end-effector (hand) space to joint space, as we discussed in the second post:

$$\textbf{u} = \textbf{J}^T(\textbf{q}) \, \textbf{F}_{\textbf{x}}.$$

Rewriting $\textbf{F}_{\textbf{x}}$ as its component parts,

$$\textbf{F}_{\textbf{x}} = \textbf{M}_{\textbf{x}}(\textbf{q}) \, \ddot{\textbf{x}},$$

where $\ddot{\textbf{x}}$ is end-effector acceleration, and $\textbf{M}_{\textbf{x}}(\textbf{q})$ is the inertia matrix in operational space.


Unfortunately, this isn't just the normal inertia matrix, so let's take a look here at how to go about deriving it.

Inertia in operational space

Being able to calculate $\textbf{M}(\textbf{q})$ allows inertia to be cancelled out in joint space by incorporating it into the control signal, but to cancel out the inertia of the system in operational space more work is still required. The first step will be calculating the acceleration in operational space. This can be found by taking the time derivative of our original Jacobian equation:

$$\dot{\textbf{x}} = \textbf{J}(\textbf{q}) \, \dot{\textbf{q}},$$

$$\ddot{\textbf{x}} = \dot{\textbf{J}}(\textbf{q}) \, \dot{\textbf{q}} + \textbf{J}(\textbf{q}) \, \ddot{\textbf{q}}.$$

Substituting in the dynamics of the system, as defined in the previous post, but ignoring the effects of gravity (and the unmodeled Coriolis and centrifugal terms) for now, gives:

$$\ddot{\textbf{x}} = \dot{\textbf{J}}(\textbf{q}) \, \dot{\textbf{q}} + \textbf{J}(\textbf{q}) \, \textbf{M}^{-1}(\textbf{q}) \, \textbf{u}.$$

Define the control signal

$$\textbf{u} = \textbf{J}^T(\textbf{q}) \, \textbf{F}_{\textbf{x}},$$

where substituting in $\textbf{M}_{\textbf{x}}(\textbf{q}) \, \ddot{\textbf{x}}_{des}$ for $\textbf{F}_{\textbf{x}}$, the desired end-effector force, gives

$$\textbf{u} = \textbf{J}^T(\textbf{q}) \, \textbf{M}_{\textbf{x}}(\textbf{q}) \, \ddot{\textbf{x}}_{des},$$

where $\ddot{\textbf{x}}_{des}$ denotes the desired end-effector acceleration. Substituting the above equation into our equation for acceleration in operational space gives

$$\ddot{\textbf{x}} = \dot{\textbf{J}}(\textbf{q}) \, \dot{\textbf{q}} + \textbf{J}(\textbf{q}) \, \textbf{M}^{-1}(\textbf{q}) \, \textbf{J}^T(\textbf{q}) \, \textbf{M}_{\textbf{x}}(\textbf{q}) \, \ddot{\textbf{x}}_{des}.$$

Rearranging terms leads to

$$\ddot{\textbf{x}} = \textbf{J}(\textbf{q}) \, \textbf{M}^{-1}(\textbf{q}) \, \textbf{J}^T(\textbf{q}) \, \textbf{M}_{\textbf{x}}(\textbf{q}) \, \ddot{\textbf{x}}_{des} + \dot{\textbf{J}}(\textbf{q}) \, \dot{\textbf{q}},$$

where the last term is ignored due to the complexity of modeling it, resulting in

$$\ddot{\textbf{x}} = \textbf{J}(\textbf{q}) \, \textbf{M}^{-1}(\textbf{q}) \, \textbf{J}^T(\textbf{q}) \, \textbf{M}_{\textbf{x}}(\textbf{q}) \, \ddot{\textbf{x}}_{des}.$$

At this point, to get the dynamics to be equal to the desired acceleration $\ddot{\textbf{x}}_{des}$, the end-effector inertia matrix needs to be chosen carefully. By setting

$$\textbf{M}_{\textbf{x}}(\textbf{q}) = \left( \textbf{J}(\textbf{q}) \, \textbf{M}^{-1}(\textbf{q}) \, \textbf{J}^T(\textbf{q}) \right)^{-1},$$

we now get

$$\ddot{\textbf{x}} = \ddot{\textbf{x}}_{des}.$$

And that's why and how the inertia matrix in operational space is defined!
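In NumPy this is a one-liner wrapped in a function (singularities are ignored here; they're the subject of a later post):

    def gen_Mx(Mq, J):
        # operational space inertia matrix: Mx = (J Mq^-1 J^T)^-1
        Mx_inv = np.dot(J, np.dot(np.linalg.inv(Mq), J.T))
        return np.linalg.inv(Mx_inv)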

The whole signal


Going back to the control signal we were building, let's now add in a term to cancel the effects of gravity in joint space. This gives

$$\textbf{u} = \textbf{J}^T(\textbf{q}) \, \textbf{M}_{\textbf{x}}(\textbf{q}) \, \ddot{\textbf{x}}_{des} + \textbf{g}(\textbf{q}),$$

where $\textbf{g}(\textbf{q})$ is the same as defined in the previous post. This controller converts desired end-effector acceleration into torque commands, and compensates for inertia and gravity.

Defining a basic PD controller in operational space,

$$\ddot{\textbf{x}}_{des} = k_p \, (\textbf{x}_{des} - \textbf{x}) + k_v \, (\dot{\textbf{x}}_{des} - \dot{\textbf{x}}),$$

the full equation for the operational space control signal in joint space is:

$$\textbf{u} = \textbf{J}^T(\textbf{q}) \, \textbf{M}_{\textbf{x}}(\textbf{q}) \left( k_p (\textbf{x}_{des} - \textbf{x}) + k_v (\dot{\textbf{x}}_{des} - \dot{\textbf{x}}) \right) + \textbf{g}(\textbf{q}).$$

Hurray! That was relatively simple. The great thing about this, though, is that it's the same process for any robot arm! So go out there and start building controllers! Find your robot's mass matrix and gravity term in generalized coordinates, the Jacobian for the end-effector, and you're in business.
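One step of the whole thing, sketched with the helper functions built up over these posts (jacobian(), gen_Mq(), gen_gq(), gen_Mx()), q and dq as in the earlier examples, and made-up gains and targets:

    def position(q, L=(1.0, 1.0)):
        # (x, y) position of the end-effector
        return np.array([L[0]*np.cos(q[0]) + L[1]*np.cos(q[0] + q[1]),
                         L[0]*np.sin(q[0]) + L[1]*np.sin(q[0] + q[1])])

    kp, kv = 100.0, 10.0                    # made-up gains
    x_des = np.array([1.0, 1.0])            # made-up target in task space
    J = jacobian(q)[:2, :]                  # x and y rows only
    Mx = gen_Mx(gen_Mq(q), J)
    xdd_des = kp * (x_des - position(q)) - kv * np.dot(J, dq)  # desired velocity = 0
    u = np.dot(J.T, np.dot(Mx, xdd_des)) + gen_gq(q)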

Conclusions

So, this feels a little anticlimactic without an actual simulation / implementation of operational space control, but don't worry! As avid readers (haha) will remember, a while back I worked out how to import some very realistic MapleSim arm simulations into Python for use with some Python controllers. This seems a perfect application opportunity, so that's next! A good chance to work through writing the controllers for different arms, and also a chance to play with controllers operating in null spaces and all the like.

Actual simulation implementations will also be a good chance to play with trying to incorporate those other force terms into the control equation, and to see the results without worrying about breaking an actual robot. In actual robots, a lot of the time you leave out anything where your model might be inaccurate, because the last thing you want is to falsely compensate for some forces and end up injecting energy into your system, making it unstable.

There's still some more theory to work through, though, so I'd like to do that before I get to implementing simulations. One more theory post, and then we'll get back to code!

Robot control part 5: Controlling in the null space


In the last post, I went through how to build an operational space controller. It was surprisingly easy after we've worked through all the other posts. But maybe that was a little too easy for you. Maybe you want to do something more interesting, like implement more than one controller at the same time. In this post we'll go through how to work inside the null space of a controller to implement several separate controllers simultaneously without interference.

Buckle up.

Null space forces

The last example comprises the basics of operational space control: describe the system, calculate the system dynamics, transform desired forces from operational space to the generalized coordinates, and build the control signal to cancel out the undesired system dynamics. Basic operational space control works quite well, but it is not uncommon to have several control goals at once, such as 'move the end-effector to this position' (primary goal) and 'keep the elbow raised high' (secondary goal) in the control of a robot arm.

If the operational space can also serve as generalized coordinates, i.e. if the system state specified in operational space constrains all of the degrees of freedom of the robot, then there is nothing that can be done without affecting the performance of the primary controller. This is the case when controlling a two-link robot arm: the end-effector Cartesian space (chosen as the operational space) could also serve as a generalized coordinate system, because a specific $(x, y)$ position fully constrains the position of the arm.

But often when using operational space control for more complex robots this is not the case. In
these situations, the forces controlled in operational space have fewer dimensions than the robot
has degrees of freedom, and so it is possible to accomplish the primary goal in a number of ways.
The null space of this primary controller is the region of state space where there is a redundancy
of solutions; the system can move in a number of ways and still not affect the completion of the
goals of the primary controller. An example of this is all the different configurations the elbow
can be in while a person moves their hand in a straight line. In these situations, a secondary
controller can be created to operate in the null space of the primary controller, and the full
control signal sent to the system is a sum of the primary control signal and a filtered version of
the secondary control signal. In this section the derivation of the null-space filter will be worked
through for a system with only a primary and secondary controller, but note that the process can
be applied iteratively for systems with further controllers.

The filtering of the secondary control signal means that the secondary controller's goals will only be accomplished if it is possible to do so without affecting the performance of the first controller. In other words, the secondary controller must operate in the null space of the first controller. Denote the primary operational space control signal, e.g. the control signal defined in the previous post, as $\textbf{u}_{task}$, and the control signal from the secondary controller as $\textbf{u}_{null}$. Define the force to apply to the system

$$\textbf{u} = \textbf{u}_{task} + \left( \textbf{I} - \textbf{J}^T(\textbf{q}) \, \bar{\textbf{J}}^T(\textbf{q}) \right) \textbf{u}_{null},$$

where $\bar{\textbf{J}}^T(\textbf{q})$ is the pseudo-inverse of $\textbf{J}^T(\textbf{q})$.

Examining the filtering term that was added,

$$\left( \textbf{I} - \textbf{J}^T(\textbf{q}) \, \bar{\textbf{J}}^T(\textbf{q}) \right) \textbf{u}_{null},$$

it can be seen that the Jacobian transpose multiplied by its pseudo-inverse will be 1s all along the diagonal, except in the null space. This means that $\textbf{u}_{null}$ is subtracted from itself everywhere that affects the operational space movement, and is left to apply any arbitrary control signal in the null space of the primary controller.

Unfortunately, this initial set up does not adequately filter out the effects of forces that might be generated by the secondary controller. The Jacobian is defined as a relationship between the velocities of two spaces, and so operating in the null space defined by the Jacobian ensures that no velocities are applied in operational space; but the required filter must also prevent any accelerations from affecting movement in operational space. The standard Jacobian pseudo-inverse null space is a velocity null space, and so a filter built using it will allow forces affecting the system's acceleration to still get through. What is required is a pseudo-inverse Jacobian defined to filter signals through an acceleration null space.

To acquire this acceleration filter, our control signal will be substituted into the equation for acceleration in operational space, which, after cancelling out the gravity effects with the control signal and removing the unmodeled dynamics, gives

$$\ddot{\textbf{x}} = \textbf{J} \, \textbf{M}^{-1} \left( \textbf{u}_{task} + \left( \textbf{I} - \textbf{J}^T \bar{\textbf{J}}^T \right) \textbf{u}_{null} \right).$$

Rewriting this to separate the secondary controller into its own term,

$$\ddot{\textbf{x}} = \textbf{J} \, \textbf{M}^{-1} \, \textbf{u}_{task} + \textbf{J} \, \textbf{M}^{-1} \left( \textbf{I} - \textbf{J}^T \bar{\textbf{J}}^T \right) \textbf{u}_{null},$$

it becomes clear that to not cause any unwanted movement in operational space the second term must be zero.

There is only one free term left in the second term, and that is the pseudo-inverse. There are numerous different pseudo-inverses that can be chosen for a given situation, and here what is required is to engineer a pseudo-inverse such that the term multiplying $\textbf{u}_{null}$ in the above operational space acceleration equation is guaranteed to go to zero:

$$\textbf{J} \, \textbf{M}^{-1} \left( \textbf{I} - \textbf{J}^T \bar{\textbf{J}}^T \right) \textbf{u}_{null} = \textbf{0}.$$

This needs to be true for all $\textbf{u}_{null}$, so it can be removed:

$$\textbf{J} \, \textbf{M}^{-1} - \textbf{J} \, \textbf{M}^{-1} \, \textbf{J}^T \bar{\textbf{J}}^T = \textbf{0},$$

$$\bar{\textbf{J}}^T = \left( \textbf{J} \, \textbf{M}^{-1} \, \textbf{J}^T \right)^{-1} \textbf{J} \, \textbf{M}^{-1},$$

and substituting in our inertia matrix for operational space, $\textbf{M}_{\textbf{x}} = \left( \textbf{J} \, \textbf{M}^{-1} \, \textbf{J}^T \right)^{-1}$, this defines

$$\bar{\textbf{J}}^T = \textbf{M}_{\textbf{x}} \, \textbf{J} \, \textbf{M}^{-1}.$$

This specific Jacobian inverse was presented in this 1987 paper by Dr. Oussama Khatib and is called the 'dynamically consistent generalized inverse'. Using this pseudo-inverse guarantees that any signal coming from the secondary controller will not affect movement in the primary controller's operational space. Just as a side note, the name pseudo-inverse is a bit of a misnomer here, since it doesn't try to produce the identity when multiplied by the original Jacobian transpose, but hey. That's what they're calling it.
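In NumPy the filter is then just the following, with Mq, Mx, and J computed as in the earlier sketches, and u_task and u_null assumed given:

    Jbar = np.dot(np.linalg.inv(Mq), np.dot(J.T, Mx))    # dynamically consistent inverse
    null_filter = np.eye(len(q)) - np.dot(J.T, Jbar.T)   # I - J^T Jbar^T
    u = u_task + np.dot(null_filter, u_null)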

The null space filter cancels out the acceleration effects in operational space of forces from a signal that is being applied as part of the control system. But it can also be used to cancel out the effects of any unwanted signal that can be modeled. Given some undesirable force signal $\textbf{u}_f$ interfering with the system that can be effectively modeled, a null space filtering term can be implemented to cancel out its effect on operational space movement. The control signal in this case, with one primary operational space controller and a null space filter for the undesired force, looks like:

$$\textbf{u} = \textbf{J}^T \textbf{M}_{\textbf{x}} \, \ddot{\textbf{x}}_{des} + \textbf{g}(\textbf{q}) - \textbf{J}^T \bar{\textbf{J}}^T \, \textbf{u}_f.$$

We did it! This will now allow a high-priority operational space controller to execute without interference from a secondary controller operating in its null space to complete its own set of goals (when possible).

Example:

Given a three-link arm (revolute-revolute-revolute) operating in the $(x, y)$ plane, shown below, this example will construct the control system for a primary controller controlling the end-effector and a secondary controller working to keep the arm near its joint angles' default resting positions.

Let the system state be $\textbf{q} = [q_0, q_1, q_2]^T$, with the default resting positions denoted $\textbf{q}^0$. The control signal of the secondary controller is the difference between the target state and the current system state,

$$\textbf{u}_{null} = k_{p_{null}} \, (\textbf{q}^0 - \textbf{q}),$$

where $k_{p_{null}}$ is a gain term.

Using the shorthand $s_0 = \sin(q_0)$, $c_{01} = \cos(q_0 + q_1)$, $s_{012} = \sin(q_0 + q_1 + q_2)$, and so on, and taking the centres of mass to sit at the midpoint of each link, the COM positions in the base frame are

$$\textbf{com}_0 = \begin{bmatrix} \frac{L_0}{2} c_0 \\ \frac{L_0}{2} s_0 \end{bmatrix}, \quad \textbf{com}_1 = \begin{bmatrix} L_0 c_0 + \frac{L_1}{2} c_{01} \\ L_0 s_0 + \frac{L_1}{2} s_{01} \end{bmatrix}, \quad \textbf{com}_2 = \begin{bmatrix} L_0 c_0 + L_1 c_{01} + \frac{L_2}{2} c_{012} \\ L_0 s_0 + L_1 s_{01} + \frac{L_2}{2} s_{012} \end{bmatrix},$$

and the (linear velocity) Jacobians for the COMs are

$$\textbf{J}^{com_0} = \begin{bmatrix} -\frac{L_0}{2} s_0 & 0 & 0 \\ \frac{L_0}{2} c_0 & 0 & 0 \end{bmatrix}, \quad \textbf{J}^{com_1} = \begin{bmatrix} -L_0 s_0 - \frac{L_1}{2} s_{01} & -\frac{L_1}{2} s_{01} & 0 \\ L_0 c_0 + \frac{L_1}{2} c_{01} & \frac{L_1}{2} c_{01} & 0 \end{bmatrix},$$

$$\textbf{J}^{com_2} = \begin{bmatrix} -L_0 s_0 - L_1 s_{01} - \frac{L_2}{2} s_{012} & -L_1 s_{01} - \frac{L_2}{2} s_{012} & -\frac{L_2}{2} s_{012} \\ L_0 c_0 + L_1 c_{01} + \frac{L_2}{2} c_{012} & L_1 c_{01} + \frac{L_2}{2} c_{012} & \frac{L_2}{2} c_{012} \end{bmatrix}.$$

The Jacobian for the end-effector of this three-link arm is

$$\textbf{J}(\textbf{q}) = \begin{bmatrix} -L_0 s_0 - L_1 s_{01} - L_2 s_{012} & -L_1 s_{01} - L_2 s_{012} & -L_2 s_{012} \\ L_0 c_0 + L_1 c_{01} + L_2 c_{012} & L_1 c_{01} + L_2 c_{012} & L_2 c_{012} \end{bmatrix},$$

where $s_{01} = \sin(q_0 + q_1)$ and $c_{012} = \cos(q_0 + q_1 + q_2)$, as above.

Taking the control signal developed in the previous post,

$$\textbf{u}_{task} = \textbf{J}^T \textbf{M}_{\textbf{x}} \left( k_p (\textbf{x}_{des} - \textbf{x}) + k_v (\dot{\textbf{x}}_{des} - \dot{\textbf{x}}) \right) + \textbf{g}(\textbf{q}),$$

where $\textbf{M}_{\textbf{x}}$ was defined in the previous post, $\textbf{g}(\textbf{q})$ is defined two posts ago, and $k_p$ and $k_v$ are gain terms, usually set such that $k_v = \sqrt{k_p}$, and adding in the null space control signal and filter gives

$$\textbf{u} = \textbf{J}^T \textbf{M}_{\textbf{x}} \left( k_p (\textbf{x}_{des} - \textbf{x}) + k_v (\dot{\textbf{x}}_{des} - \dot{\textbf{x}}) \right) + \textbf{g}(\textbf{q}) + \left( \textbf{I} - \textbf{J}^T \bar{\textbf{J}}^T \right) \textbf{u}_{null},$$

where $\bar{\textbf{J}}$ is the dynamically consistent generalized inverse defined above, and $\textbf{u}_{null}$ is our null space signal!
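Sketching just the new pieces in NumPy (the resting posture and gain are made up; u_task and null_filter are assembled as in the earlier sketches, now with the 3-link Jacobians):

    q_rest = np.array([np.pi / 4, np.pi / 4, np.pi / 4])  # made-up resting posture
    kp_null = 10.0
    u_null = kp_null * (q_rest - q)           # secondary signal in joint space
    u = u_task + np.dot(null_filter, u_null)  # filtered into the primary's null space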

Conclusions

It's a lot of math, but when you start to get a feel for it, what's really awesome is that this is it. We're describing the whole system, and so by working with these equations we can get a super effective controller. Which is pretty cool. Especially in relation to other possible controllers.

Alright! We've now worked through all the basic theory for operational space control; it's time to get some implementations going.

Robot control part 6: Handling singularities


We're back! Another exciting post about robotic control theory, but don't worry, it's short and ends with simulation code. The subject of today's post is handling singularities.

What is a singularity

This came up recently when I had built this beautiful controller for a simple two-link arm that would occasionally go nuts. After looking at it for a while, it became obvious this was happening whenever the elbow angle reached or got close to 0 or $\pi$. Here's an animation:

What's going on here? Here's what. The Jacobian has dropped rank and become singular (i.e. non-invertible), and when we try to calculate our mass matrix for operational space,

$$\textbf{M}_{\textbf{x}}(\textbf{q}) = \left( \textbf{J}(\textbf{q}) \, \textbf{M}^{-1}(\textbf{q}) \, \textbf{J}^T(\textbf{q}) \right)^{-1},$$

the values explode in the inverse calculation. Dropping rank means that the rows of the Jacobian are no longer linearly independent, which means that the matrix can be rotated such that it gives a matrix with a row of zeros. This row of zeros is the degenerate direction, and the problems come from trying to send forces in that direction.

To determine when the Jacobian becomes singular its determinant can be examined; if the determinant of the matrix is zero, then it is singular. Looking at the Jacobian for the end-effector:

$$\textbf{J}(\textbf{q}) = \begin{bmatrix} -L_0 \sin(q_0) - L_1 \sin(q_0 + q_1) & -L_1 \sin(q_0 + q_1) \\ L_0 \cos(q_0) + L_1 \cos(q_0 + q_1) & L_1 \cos(q_0 + q_1) \end{bmatrix}.$$

When $q_1 = 0$ we have $\sin(q_0 + q_1) = \sin(q_0)$ and $\cos(q_0 + q_1) = \cos(q_0)$, so the Jacobian becomes

$$\textbf{J}(\textbf{q}) = \begin{bmatrix} -(L_0 + L_1) \sin(q_0) & -L_1 \sin(q_0) \\ (L_0 + L_1) \cos(q_0) & L_1 \cos(q_0) \end{bmatrix},$$

which gives a determinant of

$$-(L_0 + L_1) \sin(q_0) \, L_1 \cos(q_0) + L_1 \sin(q_0) \, (L_0 + L_1) \cos(q_0) = 0.$$

Similarly, when $q_1 = \pi$, where $\sin(q_0 + \pi) = -\sin(q_0)$ and $\cos(q_0 + \pi) = -\cos(q_0)$, the Jacobian is

$$\textbf{J}(\textbf{q}) = \begin{bmatrix} (L_1 - L_0) \sin(q_0) & L_1 \sin(q_0) \\ (L_0 - L_1) \cos(q_0) & -L_1 \cos(q_0) \end{bmatrix}.$$

Calculating the determinant of this, we again get 0.

Note that while in these cases the Jacobian is a square matrix, in the event that it is not square, the determinant of $\textbf{J}(\textbf{q}) \, \textbf{J}^T(\textbf{q})$ can be found instead.

Fixing the problem

When a singularity is occurring it can be detected, but then it must be handled such that the controller behaves appropriately. This can be done by identifying the degenerate dimensions and setting the force in those directions to zero.

First, the SVD decomposition of $\textbf{M}_{\textbf{x}}^{-1}(\textbf{q}) = \textbf{J} \, \textbf{M}^{-1} \, \textbf{J}^T$ is found. To get the inverse of this matrix (i.e. to find $\textbf{M}_{\textbf{x}}(\textbf{q})$) from the returned $\textbf{U}$, $\textbf{S}$, and $\textbf{V}$ matrices is a matter of inverting the matrix $\textbf{S}$:

$$\textbf{M}_{\textbf{x}}(\textbf{q}) = \textbf{V} \, \textbf{S}^{-1} \, \textbf{U}^T,$$

where $\textbf{S}$ is a diagonal matrix of singular values.

Because $\textbf{S}$ is diagonal it is very easy to find its inverse: simply take the reciprocal of each of the diagonal elements.

Whenever the system approaches a singularity, some of the values of $\textbf{S}$ will start to get very small, and when we take the reciprocal of them we start getting huge numbers, which is where the value explosion comes from. Instead of allowing this to happen, a check for approaching the singularity can be implemented, which sets the singular value entries smaller than the threshold equal to zero, cancelling out any forces that would be sent in that direction.

Here's the code:

    import numpy as np

    Mx_inv = np.dot(JEE, np.dot(np.linalg.inv(Mq), JEE.T))
    if abs(np.linalg.det(np.dot(JEE, JEE.T))) > .005**2:
        # if we're not near a singularity
        Mx = np.linalg.inv(Mx_inv)
    else:
        # in the case that the robot is entering near singularity
        u, s, v = np.linalg.svd(Mx_inv)
        for i in range(len(s)):
            if s[i] < .005:
                s[i] = 0
            else:
                s[i] = 1.0 / float(s[i])
        # note: NumPy's svd returns V already transposed
        Mx = np.dot(v.T, np.dot(np.diag(s), u.T))

And here's an animation of the controlled arm now that we've accounted for movement near singular configurations:

As always, the code for this can be found up on my Github. The default is to run using a two-link arm simulator written in Python. To run, simply download everything and run the run_this.py file.

Everything required to run the MapleSim arm simulator is also included. To do this, go into the TwoLinkArm folder and run python setup.py build_ext -i. This should compile the arm simulation to a shared object library that Python can access on your system. To use it, edit the run_this.py file to import from TwoLinkArm/arm instead of TwoLinkArm/arm_python, and you should be good to go!

Robot control part 7: OSC of a 3-link arm

So we've done control for the 2-link arm, and control of the one-link arm is trivial (where we control the joint angle, or the x or y coordinate of the pendulum), so here I'll just show an implementation of operational space control for a more interesting arm model, the 3-link arm model. The code can all be found up on my Github.

In theory there's nothing different or more difficult about controlling a 3-link arm versus a 2-link arm. For the inertia matrix, what I ended up doing here is just jacking up all the values of the matrix to about 100, which causes the controller to way over-control the arm, and you can see the torques are much larger than they would need to be if we had an accurate inertia matrix. But the result is the same super straight trajectories that we've come to expect from operational space control:

It's a little choppy because I cut out a bunch of frames to keep the gif size down. But you get the point: it works. And quite well!

Because this is now a 3-link arm and our task space force signal is 2D, we have a redundant space of solutions, meaning that the task space control signal can be carried out in a number of ways. In other words, a null space exists for this controller. This means that we can throw another controller into our system to operate inside the null space of the first controller. We've already worked through all the math for this, so it's straightforward to implement.

What kind of null space controller should we put in? Well, you may have noticed in the above animation that the arm goes through itself; here's another example:

Often it's desirable to avoid this (because of physics or whatever), so what we can do is add a secondary controller that works to keep the arm's elbow and wrist near some comfortable default angles. When we do this we get the following:

And there you have it! Operational space control of a three-link arm with a secondary controller in the null space to try to keep the angles near their default / resting positions.

I also added mouse-based control to the arm, so it will try to follow your mouse when you move over the figure, which makes it pretty fun to explore the power of the controller. It's interesting to see where the singularities become an issue, and how having a null space controller that's operating in joint space can actually help the system move through those problem points more smoothly.
