
2.12 Lecture Notes

H. Harry Asada
Ford Professor of Mechanical Engineering

Fall 2005


Chapter 1
Introduction
Many definitions have been suggested for what we call a robot. The word may conjure up various
levels of technological sophistication, ranging from a simple material handling device to a
humanoid. The image of robots varies widely with researchers, engineers, and robot
manufacturers. However, it is widely accepted that today's robots used in industry originated in
the invention of a programmed material handling device by George C. Devol. In 1954, Devol
filed a U.S. patent for a new machine for part transfer, and he claimed the basic concept of teaching/playback to control the device. This scheme is now extensively used in most of today's
industrial robots.

1.1 Era of Industrial Robots


Devol's industrial robots have their origins in two preceding technologies: numerical control for
machine tools, and remote manipulation. Numerical control is a scheme to generate control
actions based on stored data. Stored data may include coordinate data of points to which the
machine is to be moved, clock signals to start and stop operations, and logical statements for
branching control sequences. The whole sequence of operations and its variations are prescribed
and stored in a form of memory, so that different tasks can be performed without requiring major
hardware changes. Modern manufacturing systems must produce a variety of products in small
batches, rather than a large number of the same products for an extended period of time, and
frequent changes of product models and production schedules require flexibility in the
manufacturing system. The transfer line approach, which is most effective for mass production,
is not appropriate when such flexibility is needed (Figure 1-1). When a major product change is
required, a special-purpose production line becomes useless and often ends up being abandoned,
despite the large capital investment it originally involved.

Figure 1-1 General trend of manufacturing cost vs. batch size

Flexible automation has been a central issue in manufacturing innovation for a few decades, and numerical control has played a central
role in increasing system flexibility. Contemporary industrial robots are programmable machines
that can perform different operations by simply modifying stored data, a feature that has evolved
from the application of numerical control.
Another origin of today's industrial robots can be found in remote manipulators. A remote
manipulator is a device that performs a task at a distance. It can be used in environments that
human workers cannot easily or safely access, e.g. for handling radioactive materials, or in some
deep sea and space applications. The first master-slave manipulator system was developed by
1948. The concept involves an electrically powered mechanical arm installed at the operation
site, and a control joystick of geometry similar to that of the mechanical arm (Figure 1-2). The
joystick has position transducers at individual joints that measure the motion of the human
operator as he moves the tip of the joystick. Thus the operator's motion is transformed into
electrical signals, which are transmitted to the mechanical arm and cause the same motion as the
one that the human operator performed. The joystick that the operator handles is called the
master manipulator, while the mechanical arm is called the slave manipulator, since its motion is
ideally the replica of the operator's commanded motion. A master-slave manipulator has
typically six degrees of freedom to allow the gripper to locate an object at an arbitrary position
and orientation. Most joints are revolute, and the whole mechanical construction is similar to that
of the human arm. This analogy with the human arm results from the need of replicating human
motions. Further, this structure allows dexterous motions in a wide range of workspaces, which
is desirable for operations in modern manufacturing systems.

Figure 1-2 Master-slave manipulator system: a human operator moves a joystick (the master), and the manipulator (the slave) replicates the motion in a hazardous environment. Figure by MIT OCW.

Contemporary industrial robots retain some similarity in geometry with both the human arm and
remote manipulators. Further, their basic concepts have evolved from those of numerical control
and remote manipulation. Thus a widely accepted definition of today's industrial robot is that of a
numerically controlled manipulator, where the human operator and the master manipulator in the
figure are replaced by a numerical controller.

Figure removed for copyright reasons.


See Figure 1-4 in Asada and Slotine, 1986.

Figure 1-3 White body assembly lines using spot welding robots
1.2 Creation of Robotics
The merging of numerical control and remote manipulation created a new field of engineering, and
with it emerged a number of scientific issues in design and control that are substantially different from
those of the original technologies.
Robots are required to have much higher mobility and dexterity than traditional machine tools.
They must be able to work in a large reachable range, access crowded places, handle a variety of
workpieces, and perform flexible tasks. The high mobility and dexterity requirements result in the
unique mechanical structure of robots, which parallels the human arm structure. This structure,
however, significantly departs from traditional machine design. A robot mechanical structure is
basically composed of cantilevered beams, forming a sequence of arm links connected by hinged
joints. Such a structure has inherently poor mechanical stiffness and accuracy, hence is not
appropriate for the heavy-duty, high-precision applications required of machine tools. Further, it
also implies a serial sequence of servoed joints, whose errors accumulate along the linkage. In
order to exploit the high mobility and dexterity uniquely featured by the serial linkage, these
difficulties must be overcome by advanced design and control techniques.
The serial linkage geometry of manipulator arms is described by complex nonlinear equations.
Effective analytical tools are necessary to understand the geometric and kinematic behavior of
the manipulator, globally referred to as the manipulator kinematics. This represents an important
and unique area of robotics research, since research in kinematics and design has traditionally
focused upon single-input mechanisms with single actuators moving at constant speeds, while
robots are multi-input spatial mechanisms which require more sophisticated analytical tools.
The dynamic behavior of robot manipulators is also complex, since the dynamics of multi-input
spatial linkages are highly coupled and nonlinear. The motion of each joint is significantly
affected by the motions of all the other joints. The inertial load imposed at each joint varies
widely depending on the configuration of the manipulator arm. Coriolis and centrifugal effects
are prominent when the manipulator arm moves at high speeds. The kinematic and dynamic
complexities create unique control problems that are not adequately handled by standard linear
control techniques, and thus make effective control system design a critical issue in robotics.


Figure removed for copyright reasons.


See http://www.adept.com

Figure 1-4 Adept Direct-Drive robot


Finally, robots are required to interact much more heavily with peripheral devices than traditional
numerically-controlled machine tools. Machine tools are essentially self-contained systems that
handle workpieces in well-defined locations. By contrast, the environment in which robots are
used is often poorly structured, and effective means must be developed to identify the locations
of the workpieces as well as to communicate to peripheral devices and other machines in a
coordinated fashion. Robots are also critically different from master-slave manipulators, in that
they are autonomous systems. Master-slave manipulators are essentially manually controlled

Photo removed for copyright reasons.


http://www.dlr.de/rm/en/Desktopdefault.aspx/tabid-426/569_read-76/
Robot hand holding a lightbulb.

Figure 1-5 Dexterous fingers


Photo removed for copyright reasons.
See http://menzelphoto.peripix.com/viewdetails/item/313/size/300/3/
Also published in Menzel, Peter, and Faith D'Aluisio. Robo Sapiens: Evolution of a New
Species. Cambridge, MA: MIT Press, 2001, p. 176.

Figure 1-6 Medical robots for minimally invasive surgery


systems, where the human operator takes the decisions and applies control actions. The operator
interprets a given task, finds an appropriate strategy to accomplish the task, and plans the
procedure of operations. He/she devises an effective way of achieving the goal on the basis of
his/her experience and knowledge about the task. His/her decisions are then transferred to the
slave manipulator through the joystick. The resultant motion of the slave manipulator is
monitored by the operator, and necessary adjustments or modifications of control actions are
provided when the resultant motion is not adequate, or when unexpected events occur during the
operation. The human operator is, therefore, an essential part of the control loop. When the
operator is eliminated from the control system, all the planning and control commands must be
generated by the machine itself. The detailed procedure of operations must be set up in advance,
and each step of motion command must be generated and coded in an appropriate form so that
the robot can interpret it and execute it accurately. Effective means to store the commands and
manage the data file are also needed. Thus, programming and command generation are critical
issues in robotics. In addition, the robot must be able to fully monitor its own motion. In order to
adapt to disturbances and unpredictable changes in the work environment, the robot needs a
variety of sensors, so as to obtain information both about the environment (using external
sensors, such as cameras or touch sensors) and about itself (using internal sensors, such as joint
encoders or joint torque sensors). Effective sensor-based strategies that incorporate this
information require advanced control algorithms. But they also imply a detailed understanding of
the task.

1.3 Manipulation and Dexterity


Contemporary industrial needs drive the applications of robots to ever more advanced tasks.
Robots are required to perform highly skilled jobs with minimum human assistance or
intervention. To extend the applications and abilities of robots, it becomes important to develop a
sound understanding of the tasks themselves.
In order to devise appropriate arm mechanisms and to develop effective control algorithms, we
need to precisely understand how a given task should be accomplished and what sort of motions
the robot should be able to achieve. To perform an assembly operation, for example, we need to
know how to guide the assembly part to the desired location, mate it with another part, and
secure it in an appropriate way. In a grinding operation, the robot must properly position the
grinding wheel while accommodating the contact force. We need to analyze the grinding process
itself in order to generate appropriate force and motion commands.

Figure 1-7 Remote-center compliance hand
A detailed understanding of the underlying principles and "know-how" involved in the task must
be developed in order to use industrial robots effectively, while there is no such need for making
control strategies explicit when the assembly and grinding operations are performed by a human
worker. Human beings perform sophisticated manipulation tasks without being aware of the
control principles involved. We have trained ourselves to be capable of skilled jobs, but in
general we do not know exactly what the acquired skills consist of. A sound and explicit
understanding of manipulation operations, however, is essential for the long-term progress of
robotics. This scientific aspect of manipulation has never been studied systematically before, and
represents an emerging and important part of robotics research.


1.4 Locomotion and Navigation


Robotics has found a number of important application areas in broad fields beyond
manufacturing automation. These range from space and under-water exploration, hazardous
waste disposal, and environment monitoring to robotic surgery, rehabilitation, home robotics, and
entertainment. Many of these applications entail some locomotive functionality so that the robot
can freely move around in an unstructured environment. Most industrial robots sit on a
manufacturing floor and perform tasks in a structured environment. In contrast, those robots for
non-manufacturing applications must be able to move around on their own. See Figure 1-8.
Locomotion and navigation are increasingly important, as robots find challenging applications in
the field. This opened up new research and development areas in robotics. Novel mechanisms are
needed to allow robots to move through crowded areas, rough terrain, narrow channels, and even
staircases. Various types of legged robots have been studied, since, unlike standard wheels, legs
can negotiate uneven floors and rough terrain. Among others, biped robots have been
studied most extensively, resulting in the development of humanoids, as shown in Figure 1-9.
Combining leg mechanisms with wheels has accomplished superior performance in both
flexibility and efficiency. The Mars Rover prototype shown below has a rocker-bogie
mechanism combined with advanced wheel drives in order to adapt itself to diverse terrain
conditions. See Figure 1-10.

Photo removed for copyright reasons.

Figure 1-8 Automatically guided vehicle for meal delivery in hospitals

Photo removed for copyright reasons.

Figure 1-9 Honda's P3 humanoid robot

Navigation is another critical functionality needed for mobile robots, in particular in
unstructured environments. Such robots are equipped with range sensors and vision systems, and
are capable of interpreting the data to locate themselves. Often the robot has a map of the
environment, and uses it for estimating its location. Furthermore, based on the real-time data
obtained in the field, the robot is capable of updating and augmenting the map, which is
incomplete and uncertain in an unstructured environment. As depicted in Figure 1-10, location
estimation and map building are simultaneously executed in an advanced navigation system.
Such Simultaneous Localization and Mapping (SLAM) is exactly what we humans do in our daily
lives, and is an important functionality of intelligent robots.


The goal of robotics is thus two-fold: to extend our understanding about manipulation,
locomotion, and other robotic behaviors and to develop engineering methodologies to actually
perform desired tasks. The goal of this book is to provide entry-level readers and experienced
engineers with fundamentals of understanding robotic tasks and intelligent behaviors as well as
with enabling technologies needed for building and controlling robotic systems.

Figure 1-10 JPL's planetary exploration robot, an early version of the Mars Rover (Source: JPL), with its navigation loop: sensor data feed map building and location estimation, which in turn drive robot control.


Chapter 2
Actuators and Drive Systems
Actuators are one of the key components of a robotic system. A robot has many
degrees of freedom, each of which is a servoed joint generating a desired motion. We begin with
basic actuator characteristics and drive amplifiers to understand the behavior of servoed joints.
Most of today's robotic systems are powered by electric servomotors. Therefore, we
focus on electromechanical actuators.

2.1 DC Motors
Figure 2-1 illustrates the construction of a DC servomotor, consisting of a stator, a rotor,
and a commutation mechanism. The stator consists of permanent magnets, creating a magnetic
field in the air gap between the rotor and the stator. The rotor has several windings arranged
symmetrically around the motor shaft. An electric current applied to the motor is delivered to
individual windings through the brush-commutation mechanism, as shown in the figure. As the
rotor rotates the polarity of the current flowing to the individual windings is altered. This allows
the rotor to rotate continually.

Figure 2.1.1 Construction of DC motor: stator, rotor windings, brush, commutator, shaft, bearings, and inertia load. Figure by MIT OCW.


Let $\tau_m$ be the torque created at the air gap, and $i$ the current flowing to the rotor
windings. The torque is in general proportional to the current, and is given by

$$\tau_m = K_t \, i \qquad\qquad (2.1.1)$$

where the proportionality constant $K_t$ is called the torque constant, one of the key parameters
describing the characteristics of a DC motor. The torque constant is determined by the strength of
the magnetic field, the number of turns of the windings, the effective area of the air gap, the
radius of the rotor, and other parameters associated with materials properties.
In an attempt to derive other characteristics of a DC motor, let us first consider an
idealized energy transducer having no power loss in converting electric power into mechanical
power. Let $E$ be the voltage applied to the idealized transducer. The electric power is then given
by $E \, i$, which must be equivalent to the mechanical power:

$$P_{in} = E \, i = \tau_m \omega_m \qquad\qquad (2.1.2)$$

where $\omega_m$ is the angular velocity of the motor rotor. Substituting eq.(1) into eq.(2) and dividing
both sides by $i$ yields the second fundamental relationship of a DC motor:

$$E = K_t \, \omega_m \qquad\qquad (2.1.3)$$

The above expression dictates that the voltage across the idealized power transducer is
proportional to the angular velocity and that the proportionality constant is the same as the torque
constant given by eq.(1). This voltage E is called the back emf (electro-motive force) generated at
the air gap, and the proportionality constant is often called the back emf constant.
Note that, based on eq.(1), the unit of the torque constant is Nm/A in the metric system,
whereas that of the back emf constant is V/(rad/s) based on eq.(3).
Exercise 2.1

Show that the two units, Nm/A and V/rad/s, are identical.

The actual DC motor is not a loss-less transducer, having resistance at the rotor windings
and the commutation mechanism. Furthermore, windings may exhibit some inductance, which
stores energy. Figure 2.1.2 shows the schematic of the electric circuit, including the windings
resistance R and inductance L. From the figure,

$$u = R \, i + L \, \frac{di}{dt} + E \qquad\qquad (2.1.4)$$

where u is the voltage applied to the armature of the motor.

Figure 2.1.2 Electric circuit of armature


Combining eqs.(1), (3) and (4), we can obtain the actual relationship among the applied
voltage u, the rotor angular velocity $\omega_m$, and the motor torque $\tau_m$:

$$\frac{K_t}{R} u = \tau_m + T_e \frac{d\tau_m}{dt} + \frac{K_t^2}{R} \omega_m \qquad\qquad (2.1.5)$$

where the time constant $T_e = L / R$, called the motor reactance, is often negligibly small. Neglecting
this second term, the above equation reduces to an algebraic relationship:

$$\tau_m = \frac{K_t}{R} u - \frac{K_t^2}{R} \omega_m \qquad\qquad (2.1.6)$$

This is called the torque-speed characteristic. Note that the motor torque increases in proportion
to the applied voltage, but the net torque reduces as the angular velocity increases. Figure 2.1.3
illustrates the torque-speed characteristics. The negative slope of the straight lines, $-K_t^2 / R$,
implies that the voltage-controlled DC motor has an inherent damping in its mechanical behavior.
The power dissipated in the DC motor is given by

$$P_{dis} = R \, i^2 = \frac{R}{K_t^2} \tau_m^2 \qquad\qquad (2.1.7)$$

from eq.(1). Taking the square root of both sides yields

$$\sqrt{P_{dis}} = \frac{\tau_m}{K_m}, \qquad K_m = \frac{K_t}{\sqrt{R}} \qquad\qquad (2.1.8)$$

where the parameter $K_m$ is called the motor constant. The motor constant represents how
effectively electric power is converted to torque. The larger the motor constant becomes, the
larger the output torque generated with less power dissipation. A DC motor with more powerful
magnets, thicker winding wires, and a larger rotor diameter has a larger motor constant. A motor
with a larger motor constant, however, has a larger damping, as the negative slope of the
torque-speed characteristics becomes steeper, as illustrated in Figure 2.1.3.

Figure 2.1.3 Torque-speed characteristics and output power: straight lines of slope $-K_m^2$ for increasing armature voltage u, and the net output power $P_{out}$, which peaks at one half of the maximum angular velocity $\omega_{max}$.


Taking into account the internal power dissipation, the net output power of the DC motor
is given by

$$P_{out} = \tau_m \omega_m = \left( \frac{K_t}{R} u - K_m^2 \, \omega_m \right) \omega_m \qquad\qquad (2.1.9)$$

This net output power is a parabolic function of the angular velocity, as illustrated in Figure 2.1.3.
It should be noted that the net output power becomes maximum at the middle point of the
velocity axis, i.e. 50 % of the maximum angular velocity for a given armature voltage u. This
implies that the motor is operated most effectively at 50 % of the maximum speed. As the speed
departs from this middle point, the net output power decreases, and it vanishes at the zero speed
as well as at the maximum speed. Therefore, it is important to select the motor and gearing
combination so that maximum power transfer is achieved.
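As a quick numerical illustration of eqs.(2.1.6) and (2.1.9), the short sketch below evaluates the torque-speed line and the net output power over the speed range and confirms that the power peaks near 50 % of the no-load speed. The motor parameters (K_t, R, u) are made-up values chosen only for illustration.

```python
import numpy as np

# Hypothetical motor parameters, for illustration only
K_t = 0.05   # torque constant [Nm/A]
R   = 1.2    # armature resistance [ohm]
u   = 24.0   # applied armature voltage [V]

omega_max = u / K_t                       # no-load speed: torque in eq.(2.1.6) drops to zero
omega = np.linspace(0.0, omega_max, 501)  # angular velocity range [rad/s]

tau   = (K_t / R) * u - (K_t**2 / R) * omega   # eq.(2.1.6): torque-speed characteristic
P_out = tau * omega                            # eq.(2.1.9): net output power

i_peak = int(np.argmax(P_out))
print(f"no-load speed       : {omega_max:7.1f} rad/s")
print(f"speed at max power  : {omega[i_peak]:7.1f} rad/s  (about 50% of the no-load speed)")
print(f"maximum output power: {P_out[i_peak]:6.1f} W")
```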

2.2 Dynamics of Single-Axis Drive Systems


DC motors and other types of actuators are used to drive individual axes of a robotic
system. Figure 2.2.1 shows a schematic diagram of a single-axis drive system consisting of a DC
motor, a gear head, and arm links. An electric motor, such as a DC motor, produces a relatively
small torque and rotates at a high speed, whereas a robotic joint axis in general rotates slowly,
and needs a high torque to bear the load. In other words, the impedance of the actuator:

$$Z_m = \frac{\text{torque}}{\text{angular velocity}} = \frac{\tau_m}{\omega_m} \qquad\qquad (2.2.1)$$

is much smaller than that of the load.

Figure 2.2.1 Joint axis drive system: a DC motor drives the joint axis and arm links through gearing; the torque and angular velocity at the joint axis are $\tau_{load}$ and $\omega_{load}$.

Although a robotic system has multiple axes driven by multiple actuators having dynamic
interactions, we consider behavior of an independent single axis in this section, assuming that all
the other axes are fixed.


To fill the gap we need a gear reducer, as shown in Figure 2.2.1. Let r > 1 be a gear reduction
ratio (if d1 and d2 are the diameters of the two gears, the gear reduction ratio is r = d2 / d1). The
torque and angular velocity are changed to:

$$\tau_{load} = r \, \tau_m, \qquad \omega_{load} = \frac{1}{r} \omega_m \qquad\qquad (2.2.2)$$

where $\tau_{load}$ and $\omega_{load}$ are the torque and angular velocity at the joint axis, as shown in the figure.
Note that the gear reducer of gear ratio r increases the impedance to $r^2$ times that of the
motor axis $Z_m$:

$$Z_{load} = r^2 Z_m \qquad\qquad (2.2.3)$$

Let $I_m$ be the inertia of the motor rotor. From the free body diagram of the motor rotor,

$$I_m \dot{\omega}_m = \tau_m - \frac{1}{r} \tau_{load} \qquad\qquad (2.2.4)$$

where $\frac{1}{r}\tau_{load}$ is the torque acting on the motor shaft from the joint axis through the gears, and
$\dot{\omega}_m$ is the time rate of change of angular velocity, i.e. the angular acceleration. Let $I_l$ be the
inertia of the arm link about the joint axis, and b the damping coefficient of the bearings
supporting the joint axis. Considering the free body diagram of the arm link and joint axis yields

$$I_l \dot{\omega}_{load} = \tau_{load} - b \, \omega_{load} \qquad\qquad (2.2.5)$$

Eliminating $\tau_{load}$ from the above two equations and using eqs.(2.1.6) and (2.2.2) yields

$$I \dot{\omega}_{load} + B \omega_{load} = k \, u \qquad\qquad (2.2.6)$$

where I, B, k are the effective inertia, damping, and input gain reflected to the joint axis:

$$I = I_l + r^2 I_m \qquad\qquad (2.2.7)$$

$$B = b + r^2 K_m^2 \qquad\qquad (2.2.8)$$

$$k = r \frac{K_t}{R} \qquad\qquad (2.2.9)$$

Note that the effective inertia of the motor rotor is $r^2$ times larger than the original value $I_m$ when
reflected to the joint axis. Likewise, the damping due to the motor constant becomes $r^2$ times larger when
reflected to the joint axis. The gear ratio of a robotic system is typically 20 ~ 100, which means that the
effective inertia and damping become 400 ~ 10,000 times larger than those of the motor itself.
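To make the $r^2$ scaling concrete, here is a minimal sketch that evaluates eqs.(2.2.7)-(2.2.9) for a few gear ratios. All numerical values (motor inertia, link inertia, bearing damping, and the motor parameters) are hypothetical, chosen only to show the orders of magnitude involved.

```python
# Effective inertia, damping, and input gain reflected to the joint axis, eqs.(2.2.7)-(2.2.9).
I_m = 1.0e-4    # motor rotor inertia [kg m^2]       (hypothetical)
I_l = 0.5       # link inertia about the joint axis [kg m^2]
b   = 0.01      # bearing damping at the joint axis [Nm s/rad]
K_t = 0.05      # torque constant [Nm/A]
R   = 1.2       # armature resistance [ohm]
K_m = K_t / R**0.5   # motor constant, eq.(2.1.8)

for r in (1, 20, 100):                  # typical robotic gear ratios are 20 ~ 100
    I = I_l + r**2 * I_m                # eq.(2.2.7): effective inertia
    B = b + r**2 * K_m**2               # eq.(2.2.8): effective damping
    k = r * K_t / R                     # eq.(2.2.9): input gain
    print(f"r = {r:3d}: I = {I:7.3f} kg m^2, B = {B:7.4f} Nm s/rad, k = {k:5.3f} Nm/V")
```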
For fast dynamic response, the inertia of the motor rotor must be small. This is a crucial
requirement as the gear ratio gets larger, as in robotics applications. There are two ways of
reducing the rotor inertia in motor design. One is to reduce the diameter and make the rotor
longer, as shown in Figure 2.2.2-(a). The other is to make the motor rotor very thin, like a
pancake, as shown in Figure 2.2.2-(b).

(a) Long and slender

(b) Pancake

Figure 2.2.2 Two ways of reducing the motor rotor inertia


Most robots use long and slender motors, as in Figure 2.2.2-(a), and some heavy-duty robots use the
pancake type motor. Figure 2.2.3 shows a pancake motor by Mavilor Motors, Inc.

Figure removed for copyright reasons.

Figure 2.2.3 Pancake DC motor


Exercise 2-2 Assuming that the angular velocity of a joint axis is approximately zero, obtain the
optimal gear ratio r in eq.(7) that maximizes the acceleration of the joint axis.


2.3 Power Electronics


Performance of servomotors used for robotics applications highly depends on electric
power amplifiers and control electronics, broadly termed power electronics. Power electronics
has shown rapid progress in the last two decades, as semiconductors became faster, more
powerful, and more efficient. In this section we will briefly summarize power electronics relevant
to robotic system development.
2.3.1 Pulse width modulation (PWM)
In many robotics applications, actuators must be controlled precisely so that desired
motions of arms and legs may be attained. This requires a power amplifier to drive a desired level
of voltage (or current indirectly) to the motor armature, as discussed in the previous section. Use
of a linear amplifier (like an operational amplifier), however, is power-inefficient and impractical,
since it entails a large amount of power loss. Consider a simple circuit consisting of a single
transistor for controlling the armature voltage, as shown in Figure 2.3.1. Let V be the supply
voltage connected to one end of the motor armature. The other end of the armature is connected
to the collector of the transistor. As the base voltage varies the emitter-collector voltage varies,
and thereby the voltage drop across the motor armature, denoted u in the figure, varies
accordingly. Let i be the collector current flowing through the transistor. Then the power loss that
is dissipated at the transistor is given by

$$P_{loss} = (V - u) \, i = \frac{1}{R} (V - u) \, u \qquad\qquad (2.3.1)$$

where R is the armature resistance. Figure 2.3.2 plots the internal power loss at the transistor
against the armature voltage. The power loss becomes the largest in the middle, where half the
supply voltage V/2 acts on the armature. This large heat loss is not only wasteful but also harmful,
burning the transistor in the worst case scenario. Therefore, this type of linear power amplifier is
seldom used except for driving very small motors.

Figure 2.3.1 Analogue power amplifier for driving the armature voltage

Figure 2.3.2 Power loss at the transistor vs. the armature voltage: the loss is zero at u = 0 (OFF) and u = V (ON), and is largest in the middle, around u = V/2.

An alternative is to control the voltage via ON-OFF switching. Pulse Width Modulation,
or PWM for short, is the most commonly used method for varying the average voltage to the
motor. In Figure 2.3.2 it is clear that the heat loss is zero when the armature voltage is either 0 or
V. This means that the transistor is completely shutting down the current (OFF) or completely
admitting the current (ON). For all armature voltages other than these complete ON-OFF states,
some fraction of power is dissipated in the transistor. Pulse Width Modulation (PWM) is a
technique to control the effective armature voltage by using ON-OFF switching alone. It varies
the ratio of the time length of the complete ON state to that of the complete OFF state. Figure 2.3.3
illustrates PWM signals. A single cycle of ON and OFF states is called the PWM period, whereas
the percentage of the ON state in a single period is called the duty rate. The first PWM signal is of
60% duty, and the second one is of 25% duty. If the supply voltage is V = 10 volts, the average voltage is
6 volts and 2.5 volts, respectively.
The PWM period is set to be much shorter than the time constant associated with the
mechanical motion. The PWM frequency, that is, the reciprocal of the PWM period, is usually 2 ~
20 kHz, whereas the bandwidth of a motion control system is at most 100 Hz. Therefore, the
discrete switching does not influence the mechanical motion in most cases.
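The duty-rate arithmetic above is easy to check. The sketch below builds an ON-OFF waveform for an assumed 10 V supply and a 5 kHz PWM frequency (both values are illustrative, not tied to any particular amplifier) and verifies that the time-averaged armature voltage equals the duty rate times the supply voltage.

```python
import numpy as np

V_supply = 10.0         # supply voltage [V] (illustrative)
f_pwm    = 5_000.0      # PWM frequency [Hz], i.e. a 0.2 ms PWM period
T_pwm    = 1.0 / f_pwm

t = np.linspace(0.0, 10 * T_pwm, 10_000, endpoint=False)   # ten PWM periods

for duty in (0.60, 0.25):
    # ON during the first `duty` fraction of each period, OFF for the remainder
    u = np.where((t % T_pwm) < duty * T_pwm, V_supply, 0.0)
    print(f"duty = {duty:4.0%}: average armature voltage = {u.mean():.2f} V")
```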

Figure 2.3.3 Pulse width modulation: ON-OFF waveforms of period T_PWM with 60% and 25% duty rates


As modeled in eq.(2.1.4), the actual rotor windings have some inductance L. If the
electric time constant Te is much larger than the PWM period, the actual current flowing to the
motor armature is a smooth curve, as illustrated in Figure 2.3.4-(a). In other words, the inductance
works as a low-pass filter, filtering out the sharp ON-OFF profile of the input voltage. In contrast,
if the electric time constant is too small, compared to the PWM period, the current profile
becomes zigzag, following the rectangular voltage profile, as shown in Figure 2.3.4-(b). As a
result, unwanted high frequency vibrations are generated at the motor rotor. This happens for
some types of pancake motors with low inductance and low rotor inertia.

(a) Te >> TPWM

(b) Te << TPWM

Figure 2.3.4 Current to the motor is smoothed due to inductance


2.3.2 PWM switching characteristics
As the PWM frequency increases, the current driven to the motor becomes smoother, and
the nonlinearity due to discrete switching disappears. Furthermore, high PWM frequencies cause
no audible noise of switching. The noise disappears as the switching frequency becomes higher
than the human audible range, say 15 kHz. Therefore, a higher PWM frequency is in general
desirable. However, it causes a few adverse effects. As the PWM frequency increases:
(1) the heat loss increases and the transistor may over-heat,
(2) harmful large voltage spikes and noise are generated, and
(3) radio frequency interference and electromagnetic interference become prominent.
The first adverse effect is the most critical one, which limits the capacity of a PWM
amplifier. Although no power loss occurs at the switching transistor when it is completely ON or
OFF, a significant amount of loss is caused during transition. As the transistor state is switched
from OFF to ON or vice versa, the transistor in Figure 2.3.1 goes through intermediate states,
which entail heat loss, as shown in Figure 2.3.2. Since it takes some finite time for a
semiconductor to make a transition, every time it is switched, a certain amount of power is
dissipated. As the PWM frequency increases, more power loss and, more importantly, more heat
generation occur. Figure 2.3.5 illustrates the turn-on and turn-off transitions of a switching
transistor. When turned on, the collector current Ic increases and the voltage Vce decreases. The
product of these two values provides the switching power loss as shown by broken lines in the
figure. Note that turn-off takes a longer time, hence it causes more heat loss.

Figure 2.3.5 Transient responses of transistor current Ic and voltage Vce, and the associated switching power loss, during turn-on (about 0.5 μs) and turn-off (about 1.0 μs) state transitions
From Figure 2.3.5 it is clear that a switching transistor having fast turn-on and turn-off
characteristics is desirable, since it causes less power loss and heat generation. Power MOSFETs
(Metal-Oxide-Semiconductor Field-Effect Transistors) have very fast switching characteristics,
enabling 15 ~ 100 kHz of switching frequencies. For relatively small motors, MOSFETs are
widely used in industry due to their fast switching characteristics. For larger motors, IGBTs
(Insulated Gate Bipolar Transistor) are the rational choice because of their larger capacity and
relatively fast response.
As the switching speed increases, the heat loss becomes smaller. However, fast switching
causes other problems. Consider eq.(2.1.4) again, the dynamic equation of the armature:

$$u = R \, i + L \, \frac{di}{dt} + E \qquad\qquad (2.1.4)$$

High speed switching means that the time derivative of current i is large. This generates a large
inductance-induced kickback voltage $L \, \frac{di}{dt}$ that often damages switching semiconductors. As
illustrated in Figure 2.3.6-(a), a large spike is induced when turning on the semiconductor. To get
rid of this problem a free-wheeling diode (FWD) is inserted across the motor armature, as shown
in Figure 2.3.6-(b). As the voltage across the armature exceeds a threshold level, the FWD kicks in to
bypass the current so that the voltage is clamped, as shown in figure (c).

Figure 2.3.6 Voltage spike induced by inductance (a), free-wheeling diode (FWD) inserted across the motor armature (b), and the spike clamped with the FWD (c)
High speed PWM switching also generates Electromagnetic Interference (EMI),
particularly when the wires between the PWM amplifier and the motor get longer. Furthermore,
high speed PWM switching may incur Radio-Frequency Interference (RFI). Since the PWM
waveforms are square, significant RFI can be generated. Whenever PWM switching edges are
faster than 10 μs, RFI is induced to some extent. An effective method for reducing EMI and RFI
is to put the PWM amplifier inside the motor body. This motor architecture, called Integrated
Motor or Smart Motor, allows confining EMI and RFI within the motor body by minimizing the
wire length between the motor armature and the power transistors.
2.3.3 The H-bridge and bipolar PWM amplifiers
In most robotics applications, bi-directional control of motor speed is necessary. This
requires a PWM amplifier to be bipolar, allowing for both forward and backward rotations. The
architecture described in the previous section needs to be extended to meet this bipolar
requirement. The H-Bridge architecture is commonly used for bipolar PWM amplifiers. As
shown in Figure 2.3.7, the H-Bridge architecture resembles the letter H in the arrangement of
switching transistors around the motor armature. Switching transistors A and B are pulled up to
the supply voltage V, whereas transistors C and D are connected to ground. Combinations of
these four switching transistors provide a variety of operations. In figure (i), gates A and D are
ON, and B and C are OFF. This gate combination delivers a current to the armature in the
forward direction. When the gate states are reversed, as shown in figure (ii), the direction of
current is reversed. Furthermore, the motor coasts when all the gates are turned OFF, since the
armature is totally isolated or disconnected, as shown in figure (iii). On the other hand, the
armature windings are short-circuited when both gates C and D are turned ON and A and B are turned
OFF. See figure (iv). This short circuit provides a braking effect when the motor rotor is
rotating.
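The four operating modes just described, together with the forbidden shoot-through states discussed at the end of this section, can be summarized as a small truth table. This is only an illustrative sketch following the gate labels of Figure 2.3.7, not a driver for any particular H-bridge device.

```python
# H-bridge gate states (True = ON) mapped to the resulting behavior, per Figure 2.3.7.
# A and B are the high-side switches; C and D are the low-side switches.
def h_bridge_mode(A: bool, B: bool, C: bool, D: bool) -> str:
    if (A and C) or (B and D):
        return "FORBIDDEN: direct short circuit across the supply"
    if A and D and not (B or C):
        return "forward rotation"
    if B and C and not (A or D):
        return "reverse rotation"
    if C and D and not (A or B):
        return "braking (armature windings short-circuited)"
    if not (A or B or C or D):
        return "coast (armature disconnected)"
    return "other / unused combination"

for gates in [(1, 0, 0, 1), (0, 1, 1, 0), (0, 0, 0, 0), (0, 0, 1, 1), (1, 0, 1, 0)]:
    print(gates, "->", h_bridge_mode(*map(bool, gates)))
```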

Figure 2.3.7 H-bridge and four quadrant control: (i) forward motion (gates A and D ON), (ii) reverse motion (gates B and C ON), (iii) the motor armature coasts (all gates OFF), (iv) the motor windings are short-circuited (gates C and D ON), causing a braking effect


It should be noted that there is a fundamental danger in the H-bridge circuit. A direct
short circuit can occur if the top and bottom switches connected to the same armature terminal are
turned on at the same time. A catastrophic failure results when one of the switching transistors on
the same vertical line in Figure 2.3.7 fails to turn off before the other turns on. Most of H-bridge
power stages commercially available have several protection mechanisms to prevent the direct
short circuit.

2.4 Robot Controls and PWM Amplifiers of the 2.12 Laboratory


DC motors and PWM amplifiers, the two most important components involved in a robot
power train, have been described. Now we are ready to introduce the specific drive system and
controls to be used for building robots for the design project.
This term we will use controllers and drives produced by Innovation First, Inc. The
system consists of bipolar PWM amplifiers, a PIC-based on-board robot controller with a wireless
modem, and a stationary controller hooked up to a laptop computer. Potentiometers are used for
measuring the angular displacement of joint axes. They are connected to A/D converter ports of
the on-board controller for position feedback control. Additional sensors can be hooked up to the
on-board controllers. A C-language based development environment is available for the system.

Figure 2.4.1 Bipolar PWM amplifier with a built-in cooling fan, IFI, Inc.
Courtesy of IFI Robotics. Used with permission.

Figure 2.4.2 On-board robot controller and stationary operator interface connected by wireless communication, IFI, Inc.

Courtesy of IFI Robotics. Used with permission.


Courtesy of IFI Robotics. Used with permission.

Figure 2.4.3 Control system operation diagram, IFI, Inc.

2.5 Optical Shaft Encoders


The servomechanism described in the previous section is based on analogue feedback
technology, using a potentiometer and a tachometer generator. These analogue feedbacks,
although simple, are no longer used in industrial robots and other industrial applications, due to
limited reliability and performance. A potentiometer, for example, is poor in reliability,
resolution, accuracy, and signal to noise ratio. The output tap of the variable resistance slides on a
track of resistive material, making a mechanical contact all the time. This slide contact causes not
only electric noise but also wear of the contacting surfaces. The resolution and S/N ratio of the
sensor are also limited by the mechanical contact. Furthermore, linearity depends on the
uniformity of the resistive material coated on the substrate, and that is a limiting factor of a
potentiometer's accuracy. Today's industrial standard is optical shaft encoders, having no sliding
contact. This will be discussed next.


2.5.1 Basic principle


An optical encoder consists of a rotating disk with grids, light sources, photodetectors,
and electronic circuits. As shown in Figure 2.5.1, a pattern of alternating opaque and translucent
grids is printed on the rotating disk. A light source and a photodetector are placed on opposite
sides of the rotating disk. When an opaque grid comes in, the light beam is blocked, while it is
transmitted through the disk when a translucent segment comes in. The light beam is then detected
by the photodetector. The disk is coupled to the motor shaft or robot joint to be measured. As it
rotates, an alternating ON-OFF signal is obtained with the photodetector. The number of grids
passing through the optical elements represents the distance traveled.

Figure 2.5.1 Basic construction of an optical shaft encoder: an LED light source, a rotating disk with a pattern of opaque and translucent grids, and a photodetector

This optical shaft encoder has no mechanical component making a slide contact, and has
no component wear. An optical circuit is not disturbed by electric noise, and the photodetector
output is a digital signal, which is more stable than an analogue signal. These make an optical
shaft encoder reliable and robust; it is a suitable choice as a feedback sensor for servomotors.
2.5.2 Position measurement
One problem with the above optical encoder design is that the direction of rotation cannot
be distinguished from the single photodetector output. The photodetector output is the same for
both clockwise and counter-clockwise rotations. There is no indication as to which way the disk
is rotating. Counting the pulse number merely gives the total distance the shaft has rotated back
and forth. To measure the angular position, the direction of rotation must be distinguished.
One way of obtaining the directional information is to add another pair of light
source/photodetector and a second track of opaque/translucent grids with 90 degrees of phase
difference from the first track. Figure 2.5.2 illustrates a double track pattern and resultant output
signals for clockwise and counter-clockwise rotations. Note that track A leads track B by 90
degrees for clockwise rotation and that track B leads track A for counter-clockwise rotation. By
detecting the phase angle the direction of rotation can be distinguished, and this can be done
easily with an up-down counter.
By simply feeding both A-phase and B-phase encoder signals to an up-down counter, the
direction of rotation is first detected, and the number of rising and falling edges of both
signals is counted in such a way that the counter adds the incoming edge count for clockwise
rotation and subtracts it for counter-clockwise rotation. The up-down counter
indicates the cumulative number of edges, that is, the angular position of the motor. The output
of the up-down counter is a binary n-bit signal ready to be sent to a digital controller without A/D
conversion.
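As a rough software illustration of this decoding logic (a sketch only, not the actual hardware counter), the function below processes successive (A, B) logic levels and accumulates a signed edge count using the standard quadrature transition table. Which sign corresponds to clockwise rotation depends on which phase leads, as described above.

```python
# Software model of quadrature (A/B phase) decoding with an up-down edge count.
# Each valid transition of the 2-bit state (A, B) contributes +1 or -1 to the count.
_DELTA = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    """samples: iterable of (A, B) logic levels; returns the cumulative edge count."""
    count, prev = 0, None
    for a, b in samples:
        state = (a << 1) | b
        if prev is not None and state != prev:
            count += _DELTA.get((prev, state), 0)   # invalid double-step transitions are ignored
        prev = state
    return count

# One full quadrature cycle in one direction, then half a cycle back:
forward  = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
backward = [(0, 0), (1, 0), (1, 1)]
print(decode(forward + backward))   # 4 edges forward, 2 edges back -> 2
```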


Figure 2.5.2 Double-track encoder for detection of the direction of rotation: track A leads track B by 90 degrees for clockwise rotation, and track B leads track A by 90 degrees for counter-clockwise rotation
It should be noted that this type of encoder requires initialization of the counter prior to
actual measurement. Usually a robot is brought to a home position and the up-down counters are
set to the initial state corresponding to the home position. This type of encoder is referred to as an
incremental encoder, since A-phase and B-phase signals provide relative displacements from an
initial point. Whenever the power supply is shut down, the initialization must be performed for
incremental encoders.

Figure 2.5.3 Up-down counter for an incremental encoder: the A-phase and B-phase signals feed the counter, which outputs an n-bit parallel word (most to least significant bit) and has a clear/initialization input


An absolute encoder provides an n-bit absolute position as well as the direction of
rotation without use of an up-down counter and initialization. As shown in Figure 2.5.4, the
rotating disk has n-tracks of opaque-translucent grid patterns and n pairs of light sources and
photodetectors. The n-tracks of grid patterns differ in the number of grids; the innermost track has
only $1 = 2^0$ pair of opaque and translucent slits, the second track has $2 = 2^1$ pairs, and the i-th track
has $2^{i-1}$ pairs. The n outputs from the photodetectors directly indicate the n-bit absolute position
of the rotating disk. In general, absolute encoders are more complex and expensive than
incremental encoders. In case of power failure, incremental encoders need a laborious
initialization procedure for recovery. For quick recovery as well as for safety, absolute encoders
are often needed in industrial applications.
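A small sketch of how the n photodetector outputs map to an absolute angle, assuming the plain binary track pattern described above; the innermost track (one slit pair) is taken as the most significant bit. Practical absolute encoders often use a Gray-code pattern instead, which is not shown here.

```python
# Convert the n photodetector outputs of an absolute encoder into an absolute shaft angle.
# bits[0] is the innermost track (2^0 = 1 slit pair, most significant bit);
# bits[n-1] is the outermost track (2^(n-1) pairs, least significant bit).
def absolute_angle_deg(bits):
    n, value = len(bits), 0
    for b in bits:                       # innermost (most significant) bit first
        value = (value << 1) | b
    return 360.0 * value / (2 ** n)      # angular resolution is 360 / 2^n degrees

print(absolute_angle_deg([1, 0, 1, 1]))  # 4-bit reading 1011 -> 11/16 of a turn = 247.5 deg
```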

Figure 2.5.4 Absolute encoder
2.5.3 Velocity estimate
Velocity feedback is needed for improving accuracy of speed control as well as for
compensating for system dynamics. A salient feature of optical encoders is that velocity
information can be obtained along with position measurement. Without use of a dedicated
tachometer generator, velocity measurement can be attained by simply processing pulse
sequences generated by an optical encoder.
Figure 2.5.5 shows a pulse sequence coming from an optical encoder (for simplicity, only an
incremental encoder is considered here). Each pulse indicates a rising edge or a falling edge of the
phase A and B signals. Therefore, the density of this pulse train, i.e. the pulse frequency, is
approximately proportional to the angular velocity of the
rotating shaft. The pulse density can be measured by counting the number of incoming pulses in
every fixed period, say T=10 ms, as shown in the figure. This can be done with another up-down
counter that counts A phase and B phase pulses. Counting continues only for the fixed sampling
period T, and the result is sent to a controller at the end of every sampling period. Then the
counter is cleared to re-start counting for the next period.
As the sampling period gets shorter, the velocity measurement is updated more
frequently, and the delay of velocity feedback gets shorter. However, if the sampling period is too
short, discretization error becomes prominent. The problem is more critical when the angular
velocity is very small. Not many pulses are generated, and just a few pulses can be counted for a
very short period. As the sampling period gets longer, the discretization error becomes smaller,
but the time delay may cause instability of the control system.
Figure 2.5.5 Velocity estimate based on pulse frequency measurement: pulses are counted over each fixed sampling period T, and the counter is cleared at the start of the next period



An effective method for resolving these conflicting requirements is to use a dual mode
velocity measurement. Instead of counting the number of pulses, the interval of adjacent pulses is
measured at low speed. The reciprocal of the pulse interval gives the angular velocity. As shown
in Figure 2.5.6, the time interval can be measured by counting clock pulses. The resolution of this
pulse interval measurement is much higher than that of encoder pulse counting in a lower speed
range. In contrast, the resolution gets worse at high speed, since the adjacent pulse interval
becomes small. Therefore, these two methods complement each other. The dual mode velocity
measurement uses both counters and switches between them depending on the speed.
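The trade-off between the two estimates can be seen numerically. The sketch below (encoder resolution, sampling period, and clock frequency are all made-up values) computes the pulse-counting estimate and the pulse-interval estimate for a low and a moderate speed; the interval method resolves the low speed far more finely, while the counting method is better at high speed.

```python
import math

PPR     = 2000       # encoder edges per revolution (hypothetical)
T       = 0.010      # sampling period for pulse counting [s]
f_clock = 1.0e6      # clock frequency for pulse-interval timing [Hz]

def estimates(omega):
    """omega: true angular velocity [rad/s]; returns (count-based, interval-based) estimates."""
    pulses_per_sec = omega * PPR / (2 * math.pi)
    n = math.floor(pulses_per_sec * T)                 # whole pulses counted in one period T
    omega_count = n * 2 * math.pi / (PPR * T)
    ticks = math.floor(f_clock / pulses_per_sec)       # clock ticks between adjacent pulses
    omega_interval = 2 * math.pi * f_clock / (PPR * ticks)
    return omega_count, omega_interval

for w in (0.5, 50.0):                                  # a very low and a moderate speed
    c, i = estimates(w)
    print(f"true {w:6.2f} rad/s -> count-based {c:6.3f}, interval-based {i:6.3f}")
```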

Figure 2.5.6 Dual mode velocity measurement with a low-speed counter (pulse interval) and a high-speed counter (pulse count)

2.6 Brushless DC Motors


The DC motor described in the previous section is the simplest, yet efficient motor
among various actuators applied to robotic systems. Traditional DC motors, however, are limited
in reliability and robustness due to wear of the brush and commutation mechanism. In industrial
applications where a high level of reliability and robustness is required, DC motors have been
replaced by brushless motors and other types of motors having no mechanical commutator. Since
brushless motors, or AC synchronous motors, are increasingly used in robotic systems and other
automation systems, this section briefly describes their principle and drive methods.

Figure 2.6.1 Construction of a brushless DC motor compared with a conventional DC motor: (a) conventional DC motor with a permanent-magnet stator and wound rotor; (b) brushless DC motor with a wound stator and permanent-magnet rotor. Figure by MIT OCW.



In the brushless motor, the commutation of currents is performed with an electronic
switching system. Figure 2.6.1 shows the construction of a brushless motor, compared with a
traditional DC motor. In the brushless motor, the rotor and the stator are swapped. Unlike the
traditional DC motor, the stator of the brushless motor consists of windings, whereas the rotor
comprises permanent magnets. The commutation is accomplished by measuring the rotor position
using a position sensor. Depending on the rotor position, currents are delivered to the
corresponding windings through electronic switching circuits. The principle of torque generation
remains the same, and the torque-speed characteristics and other properties are mostly preserved.
Therefore, the brushless motor is highly efficient with added reliability.
A drawback of this brushless motor design is that the torque may change discontinuously
when switches are turned on and off as the rotor position changes. In the traditional DC motor
this torque ripple is reduced by simply increasing the commutator segments and dividing the
windings to many segments. For the brushless motor, however, it is expensive to increase the
number of electronic switching circuits. Instead, in the brushless motor the currents flowing into
individual windings are varied continuously so that the torque ripple is minimized. A common
construction of the windings is a three-phase winding, as shown in Figure 2.6.2.
Let $I_A$, $I_B$ and $I_C$ be the individual currents flowing into the three windings shown in the
figure. These three currents are varied such that:

$$I_A = I_O \sin\theta, \qquad I_B = I_O \sin\left(\theta + \frac{2\pi}{3}\right), \qquad I_C = I_O \sin\left(\theta + \frac{4\pi}{3}\right) \qquad\qquad (2.6.1)$$

where $I_O$ is the scalar magnitude of the desired current, and $\theta$ is the rotor position. The torque
generated is the summation of the three torques generated at the three windings. Taking into
account the angle between the magnetic field and the force generated at each air gap, we obtain

$$\tau_m = k_O \left[ I_A \sin\theta + I_B \sin\left(\theta + \frac{2\pi}{3}\right) + I_C \sin\left(\theta + \frac{4\pi}{3}\right) \right] \qquad\qquad (2.6.2)$$

where $k_O$ is a proportionality constant. Substituting eq.(1) into eq.(2) yields

$$\tau_m = \frac{3}{2} k_O I_O \qquad\qquad (2.6.3)$$

The above expression indicates a linear relationship between the output torque and the scalar
magnitude of the three currents. The torque-current characteristics of a brushless motor are
thus the same as those of the traditional DC motor.
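A quick numerical check of eqs.(2.6.1)-(2.6.3): with sinusoidal commutation the summed torque stays at (3/2) k_O I_O regardless of the rotor position. The values of k_O and I_O below are arbitrary illustrative numbers.

```python
import math

k_O = 0.08   # proportionality constant [Nm/A] (illustrative)
I_O = 2.0    # commanded current magnitude [A] (illustrative)

def torque(theta):
    """Total torque of the three phase windings, eqs.(2.6.1) and (2.6.2)."""
    total = 0.0
    for k in range(3):                       # phases A, B, C, shifted by 2*pi/3
        phase = theta + 2.0 * math.pi * k / 3.0
        I = I_O * math.sin(phase)            # eq.(2.6.1): phase current
        total += k_O * I * math.sin(phase)   # eq.(2.6.2): torque contribution of this phase
    return total

for deg in (0, 30, 77, 200):
    print(f"theta = {deg:3d} deg: torque = {torque(math.radians(deg)):.4f} Nm")
print(f"expected (3/2) k_O I_O  = {1.5 * k_O * I_O:.4f} Nm")   # eq.(2.6.3)
```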


Figure 2.6.2 Brushless DC motor and drive amplifier: a resolver measures the rotor position through a resolver-to-digital converter; read-only memories (ROM), multiplying D/A converters driven by an analogue reference input, pulse width modulators with a triangular wave, and power amps deliver the three phase currents to the windings. Figure by MIT OCW.

Chapter 3
Robot Mechanisms
A robot is a machine capable of physical motion for interacting with the environment.
Physical interactions include manipulation, locomotion, and any other tasks changing the state of
the environment or the state of the robot relative to the environment. A robot has some form of
mechanism for performing a class of tasks. A rich variety of robot mechanisms has been
developed in the last few decades. In this chapter, we will first overview various types of
mechanisms used for generating robotic motion, and introduce some taxonomy of mechanical
structures before going into a more detailed analysis in the subsequent chapters.

3.1 Joint Primitives and Serial Linkages


A robot mechanism is a multi-body system with multiple bodies connected together.
We begin by treating each body as rigid, ignoring elasticity and any deformations caused by large
load conditions. Each rigid body involved in a robot mechanism is called a link, and a
combination of links is referred to as a linkage. In describing a linkage it is fundamental to
represent how a pair of links is connected to each other. There are two types of primitive
connections between a pair of links, as shown in Figure 3.1.1. The first is a prismatic joint where
the pair of links makes a translational displacement along a fixed axis. In other words, one link
slides on the other along a straight line. Therefore, it is also called a sliding joint. The second type
of primitive joint is a revolute joint, where a pair of links rotates about a fixed axis. This type of
joint is often referred to as a hinge, articulated, or rotational joint. (It is interesting to note that all
biological creatures are made of revolute-type joints; there are no sliding joints involved in their
extremities.)

(a) Prismatic joint

(b) Revolute joint

Figure 3.1.1 Primitive joint types: (a) a prismatic joint and (b) a revolute joint
Combining these two types of primitive joints, we can create many useful mechanisms
for robot manipulation and locomotion. These two types of primitive joints are simple to build
and are well grounded in engineering design. Most of the robots that have been built are
combinations of only these two types. Let us look at some examples.
Robot mechanisms analogous to coordinate systems
One of the fundamental functional requirements for a robotic system is to locate its end-effecter,
e.g. a hand, a leg, or any other part of the body performing a task, in three-dimensional space. If
the kinematic structure of such a robot mechanism is analogous to a coordinate system, it may
satisfy this positioning requirement. Figures 3.1.2 ~ 4 show three types of robot arm structures
corresponding to the Cartesian coordinate system, the cylindrical coordinate system, and the
spherical coordinate system, respectively. The Cartesian coordinate robot shown in Figure 3.1.2
has three prismatic joints, corresponding to three axes denoted x, y, and z. The cylindrical robot
consists of one revolute joint and two prismatic joints, with r, θ, and z representing the
coordinates of the end-effecter. Likewise, the spherical robot has two revolute joints denoted θ
and φ and one prismatic joint denoted r.
Photo removed for copyright reasons. Robot by Cincinnati Milacron.

Figure 3.1.2 Cartesian coordinate robot. Figure by MIT OCW.

Photo removed for copyright reasons.


GMF Robotics model M-100.

Figure by MIT OCW.

Figure 3.1.3 Cylindrical coordinate robot


Photo removed for copyright reasons.

Figure by MIT OCW.

Figure 3.1.4 Spherical coordinate robot.

There are many other ways of locating an end-effecter in three-dimensional space. Figures 3.1.5 ~ 7
show three other kinematic structures that allow the robot to locate its end-effecter in three-dimensional
space. Although these mechanisms have no analogy with common coordinate systems, they are capable of
locating the end-effecter in space, and have salient features desirable for specific tasks. The first one is
the so-called SCARA robot, consisting of two revolute joints and one prismatic joint. This robot structure
is particularly desirable for assembly automation in manufacturing systems, having a wide workspace in
the horizontal direction and an independent vertical axis appropriate for insertion of parts.

Photo removed for copyright reasons. Robot by Adept.

Figure 3.1.5 SCARA type robot. Figure by MIT OCW.


The second type, called an articulated robot or an elbow robot, consists of three revolute joints, like a human arm. This type of robot has a great degree of flexibility and versatility, being the most standard structure of robot manipulators. The third kinematic structure, also consisting of three revolute joints, has a unique mass balancing structure. The counter balance at the elbow eliminates the gravity load for all three joints, thus reducing the torque requirements for the actuators. This structure has been used for direct-drive robots having no gear reducer.

Photo removed for copyright reasons. SCORBOT model ER 1X. Figure by MIT OCW.
Figure 3.1.6 Articulated robot

Note that all the above robot structures are made of serial connections of primitive joints.
This class of kinematic structures, termed a serial linkage, constitutes the fundamental makeup of
robot mechanisms. They have no kinematic constraint in each joint motion, i.e. each joint
displacement is a generalized coordinate. This facilitates the analysis and control of the robot
mechanism. There are, however, different classes of mechanisms used for robot structures.
Although more complex, they do provide some useful properties. We will look at these other
mechanisms in the subsequent sections.


Photo removed for copyright reasons.

Figure 3.1.7 Gravity-balanced robot with three revolute joints

3.2 Parallel Linkages


Primitive joints can be arranged in parallel as well as in series. Figure 3.2.1 illustrates such a parallel link mechanism. It is a five-bar linkage consisting of five links, including the base link, connected by five joints. This can be viewed as two serial linkage arms connected at a particular point, point A in the figure. It is important to note that there is a closed kinematic chain formed by the five links and, thereby, the two serial link arms must conform to a certain geometric constraint. It is clear from the figure that the end-effecter position is determined if two of the five joint angles are given. For example, if angles $\theta_1$ and $\theta_3$ of joints 1 and 3 are determined, then all the link positions are determined, as is the end-effecter's. Driving joints 1 and 3 with two actuators, we can move the end-effecter within the vertical plane. It should be noted that, if more than two joints were actively driven by independent actuators, a conflict among the three actuators would occur due to the closed-loop kinematic chain. Three of the five joints should be passive joints, which are free to rotate. Only two joints should be active joints, driven by independent actuators.

Figure 3.2.1 Five-bar-link parallel link robot (joints 1 and 3 at the base link are the active joints; joints 2, 4, and 5 are passive; the two serial arms form a closed loop and join at point A near the end-effecter)



This type of parallel linkage, having a closed-loop kinematic chain, has significant features. First, placing both actuators at the base link makes the robot arm lighter, compared to the serial link arm with the second motor fixed to the tip of link 1. Second, a larger end-effecter load can be borne with the two serial linkage arms sharing the load. Figure 3.2.2 shows a heavy-duty robot having a parallel link mechanism.

Diagram removed for copyright reasons.

Figure 3.2.2 Heavy-duty robot with parallel link mechanism


Figure 3.2.3 shows the Stewart mechanism, which consists of a moving platform, a fixed base, and six powered cylinders connecting the moving platform to the base frame. The position and orientation of the moving platform are determined by the six independent actuators. The load acting on the moving platform is borne by the six "arms". Therefore, the load capacity is generally large, and the dynamic response is fast for this type of robot mechanism. Note, however, that this mechanism has spherical joints, a different type of joint from the primitive joints we considered initially.

Figure by MIT OCW.
Figure 3.2.3 Stewart mechanism parallel-link robot (moving platform and fixed base connected by six legs, each with spherical joints (S) at the ends and an actuated prismatic joint (P) in between)


Chapter 4
Planar Kinematics
Kinematics is Geometry of Motion. It is one of the most fundamental disciplines in
robotics, providing tools for describing the structure and behavior of robot mechanisms. In this
chapter, we will discuss how the motion of a robot mechanism is described, how it responds to
actuator movements, and how the individual actuators should be coordinated to obtain desired
motion at the robot end-effecter. These are questions central to the design and control of robot
mechanisms.
To begin with, we will restrict ourselves to a class of robot mechanisms that work within
a plane, i.e. Planar Kinematics. Planar kinematics is much more tractable mathematically,
compared to general three-dimensional kinematics. Nonetheless, most of the robot mechanisms of
practical importance can be treated as planar mechanisms, or can be reduced to planar problems.
General three-dimensional kinematics, on the other hand, needs special mathematical tools, which
will be discussed in later chapters.

4.1 Planar Kinematics of Serial Link Mechanisms


Example 4.1 Consider the three degree-of-freedom planar robot arm shown in Figure 4.1.1.
The arm consists of one fixed link and three movable links that move within the plane. All the
links are connected by revolute joints whose joint axes are all perpendicular to the plane of the
links. There is no closed-loop kinematic chain; hence, it is a serial link mechanism.


Figure 4.1.1 Three dof planar robot with three revolute joints
To describe this robot arm, a few geometric parameters are needed. First, the length of each link is defined to be the distance between adjacent joint axes. Let points O, A, and B be the locations of the three joint axes, respectively, and point E be a point fixed to the end-effecter. Then the link lengths are $\ell_1 = \overline{OA}$, $\ell_2 = \overline{AB}$, $\ell_3 = \overline{BE}$. Let us assume that Actuator 1 driving


link 1 is fixed to the base link (link 0), generating angle $\theta_1$, while Actuator 2 driving link 2 is fixed to the tip of link 1, creating angle $\theta_2$ between the two links, and Actuator 3 driving link 3 is fixed to the tip of link 2, creating angle $\theta_3$, as shown in the figure. Since this robot arm performs tasks by moving its end-effecter at point E, we are concerned with the location of the end-effecter. To describe its location, we use a coordinate system, O-xy, fixed to the base link with the origin at the first joint, and describe the end-effecter position with coordinates $x_e$ and $y_e$. We can relate the end-effecter coordinates to the joint angles determined by the three actuators by using the link lengths and joint angles defined above:

$x_e = \ell_1 \cos\theta_1 + \ell_2 \cos(\theta_1 + \theta_2) + \ell_3 \cos(\theta_1 + \theta_2 + \theta_3)$   (4.1.1)

$y_e = \ell_1 \sin\theta_1 + \ell_2 \sin(\theta_1 + \theta_2) + \ell_3 \sin(\theta_1 + \theta_2 + \theta_3)$   (4.1.2)

This three dof robot arm can locate its end-effecter at a desired orientation as well as at a desired position. The orientation of the end-effecter can be described as the angle of the centerline of the end-effecter measured from the positive x coordinate axis. This end-effecter orientation $\phi_e$ is related to the actuator displacements as

$\phi_e = \theta_1 + \theta_2 + \theta_3$   (4.1.3)

The above three equations describe the position and orientation of the robot end-effecter
viewed from the fixed coordinate system in relation to the actuator displacements. In general, a
set of algebraic equations relating the position and orientation of a robot end-effecter, or any
significant part of the robot, to actuator or active joint displacements, is called Kinematic
Equations, or more specifically, Forward Kinematic Equations in the robotics literature.
Exercise 4.1
Shown below in Figure 4.1.2 is a planar robot arm with two revolute joints and one
prismatic joint. Using the geometric parameters and joint displacements, obtain the kinematic
equations relating the end-effecter position and orientation to the joint displacements.

Figure 4.1.2 Three dof robot with two revolute joints and one prismatic joint


Now that the above Example and Exercise problems have illustrated kinematic equations, let us obtain a formal expression for kinematic equations. As mentioned in the previous chapter, two types of joints, prismatic and revolute joints, constitute robot mechanisms in most cases. The displacement of the i-th joint is described by distance $d_i$ if it is a prismatic joint, and by angle $\theta_i$ for a revolute joint. For a formal expression, let us use a generic notation: $q_i$. Namely, joint displacement $q_i$ represents either distance $d_i$ or angle $\theta_i$ depending on the type of joint.

$q_i = \begin{cases} d_i & \text{Prismatic joint} \\ \theta_i & \text{Revolute joint} \end{cases}$   (4.1.4)

We collectively represent all the joint displacements involved in a robot mechanism with a column vector: $\mathbf{q} = [q_1 \; q_2 \; \cdots \; q_n]^T$, where n is the number of joints. Kinematic equations relate these joint displacements to the position and orientation of the end-effecter. Let us collectively denote the end-effecter position and orientation by vector p. For planar mechanisms, the end-effecter location is described by three variables:

$\mathbf{p} = \begin{bmatrix} x_e \\ y_e \\ \phi_e \end{bmatrix}$   (4.1.5)

Using these notations, we represent kinematic equations as a vector function relating p to q:

$\mathbf{p} = f(\mathbf{q}), \qquad \mathbf{p} \in \Re^{3 \times 1}, \; \mathbf{q} \in \Re^{n \times 1}$   (4.1.6)

For a serial link mechanism, all the joints are usually active joints driven by individual
actuators. Except for some special cases, these actuators uniquely determine the end-effecter
position and orientation as well as the configuration of the entire robot mechanism. If there is a
link whose location is not fully determined by the actuator displacements, such a robot
mechanism is said to be under-actuated. Unless a robot mechanism is under-actuated, the
collection of the joint displacements, i.e. the vector q, uniquely determines the entire robot
configuration. For a serial link mechanism, these joints are independent, having no geometric
constraint other than their stroke limits. Therefore, these joint displacements are generalized
coordinates that locate the robot mechanism uniquely and completely. Formally, the number of
generalized coordinates is called degrees of freedom. Vector q is called joint coordinates, when
they form a complete and independent set of generalized coordinates.
4.2 Inverse Kinematics of Planar Mechanisms
The vector kinematic equation derived in the previous section provides the functional
relationship between the joint displacements and the resultant end-effecter position and
orientation. By substituting values of joint displacements into the right-hand side of the kinematic
equation, one can immediately find the corresponding end-effecter position and orientation. The
problem of finding the end-effecter position and orientation for a given set of joint displacements
is referred to as the direct kinematics problem. This is simply to evaluate the right-hand side of
the kinematic equation for known joint displacements. In this section, we discuss the problem of
moving the end-effecter of a manipulator arm to a specified position and orientation. We need to
find the joint displacements that lead the end-effecter to the specified position and orientation.
This is the inverse of the previous problem, and is thus referred to as the inverse kinematics
problem. The kinematic equation must be solved for joint displacements, given the end-effecter


position and orientation. Once the kinematic equation is solved, the desired end-effecter motion
can be achieved by moving each joint to the determined value.
In the direct kinematics problem, the end-effecter location is determined uniquely for any
given set of joint displacements. On the other hand, the inverse kinematics is more complex in the
sense that multiple solutions may exist for the same end-effecter location. Also, solutions may not
always exist for a particular range of end-effecter locations and arm structures. Furthermore,
since the kinematic equation is comprised of nonlinear simultaneous equations with many
trigonometric functions, it is not always possible to derive a closed-form solution, which is the
explicit inverse function of the kinematic equation. When the kinematic equation cannot be
solved analytically, numerical methods are used in order to derive the desired joint displacements.
Example 4.2
Consider the three dof planar arm shown in Figure 4.1.1 again. To solve its inverse kinematics problem, the kinematic structure is redrawn in Figure 4.2.1. The problem is to find three joint angles $\theta_1, \theta_2, \theta_3$ that lead the end-effecter to a desired position and orientation, $x_e, y_e, \phi_e$. We take a two-step approach. First, we find the position of the wrist, point B, from $x_e, y_e, \phi_e$. Then we find $\theta_1, \theta_2$ from the wrist position. Angle $\theta_3$ can then be determined from $\theta_1, \theta_2$, and the end-effecter orientation $\phi_e$.

Figure 4.2.1 Skeleton structure of the robot arm of Example 4.1
Let $x_w$ and $y_w$ be the coordinates of the wrist. As shown in Figure 4.2.1, point B is at distance $\ell_3$ from the given end-effecter position E. Moving in the opposite direction to the end-effecter orientation $\phi_e$, the wrist coordinates are given by

$x_w = x_e - \ell_3 \cos\phi_e$
$y_w = y_e - \ell_3 \sin\phi_e$   (4.2.1)


Note that the right hand sides of the above equations are functions of $x_e, y_e, \phi_e$ alone. From these wrist coordinates, we can determine the angle $\alpha$ shown in the figure:¹

$\alpha = \tan^{-1}\dfrac{y_w}{x_w}$   (4.2.2)

Next, let us consider the triangle OAB and define angles $\beta$ and $\gamma$, as shown in the figure. This triangle is formed by the wrist B, the elbow A, and the shoulder O. Applying the law of cosines to the elbow angle $\beta$ yields

$\ell_1^2 + \ell_2^2 - 2\ell_1\ell_2 \cos\beta = r^2$   (4.2.3)

where $r^2 = x_w^2 + y_w^2$, the squared distance between O and B. Solving this for angle $\theta_2$ yields

$\theta_2 = \pi - \beta = \pi - \cos^{-1}\dfrac{\ell_1^2 + \ell_2^2 - x_w^2 - y_w^2}{2\ell_1\ell_2}$   (4.2.4)

Similarly,

$r^2 + \ell_1^2 - 2r\ell_1 \cos\gamma = \ell_2^2$   (4.2.5)

Solving this for $\gamma$, we obtain $\theta_1$ as

$\theta_1 = \alpha - \gamma = \tan^{-1}\dfrac{y_w}{x_w} - \cos^{-1}\dfrac{x_w^2 + y_w^2 + \ell_1^2 - \ell_2^2}{2\ell_1\sqrt{x_w^2 + y_w^2}}$   (4.2.6)

From the above $\theta_1, \theta_2$ we can obtain

$\theta_3 = \phi_e - \theta_1 - \theta_2$   (4.2.7)

Eqs. (4), (6), and (7) provide a set of joint angles that locates the end-effecter at the desired position and orientation. It is interesting to note that there is another way of reaching the same end-effecter position and orientation, i.e. another solution to the inverse kinematics problem. Figure 4.2.2 shows two configurations of the arm leading to the same end-effecter location: the elbow-down configuration and the elbow-up configuration. The former corresponds to the solution obtained above. The latter, having the elbow at point A', is symmetric to the former configuration with respect to line OB, as shown in the figure. Therefore, the two solutions are related as

$\theta_1{}' = \theta_1 + 2\gamma$
$\theta_2{}' = -\theta_2$
$\theta_3{}' = \phi_e - \theta_1{}' - \theta_2{}' = \theta_3 + 2\theta_2 - 2\gamma$   (4.2.8)
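The two-step solution above, including both the elbow-down and elbow-up branches, can be written compactly as follows. This is a minimal sketch under the notation of Example 4.2; the function name, the `elbow` flag, and the handling of unreachable poses are hypothetical choices not specified in the text.

```python
import numpy as np

def inverse_kinematics(xe, ye, phi_e, l1, l2, l3, elbow="down"):
    """Inverse kinematics of the three dof planar arm, following eqs. (4.2.1)-(4.2.8)."""
    # Wrist position, eq. (4.2.1)
    xw = xe - l3 * np.cos(phi_e)
    yw = ye - l3 * np.sin(phi_e)
    r2 = xw**2 + yw**2

    # Elbow angle from the law of cosines, eqs. (4.2.3)-(4.2.4)
    cos_beta = (l1**2 + l2**2 - r2) / (2.0 * l1 * l2)
    if abs(cos_beta) > 1.0:
        return None                       # wrist is outside the reachable workspace
    theta2 = np.pi - np.arccos(cos_beta)  # elbow-down branch

    # Shoulder angle, eqs. (4.2.2), (4.2.5)-(4.2.6)
    alpha = np.arctan2(yw, xw)            # proper-quadrant arc tangent (see footnote)
    gamma = np.arccos((r2 + l1**2 - l2**2) / (2.0 * l1 * np.sqrt(r2)))
    theta1 = alpha - gamma

    if elbow == "up":                     # second solution, eq. (4.2.8)
        theta1 += 2.0 * gamma
        theta2 = -theta2

    theta3 = phi_e - theta1 - theta2      # eq. (4.2.7)
    return theta1, theta2, theta3
```

Feeding either solution back through the forward kinematic equations (4.1.1)-(4.1.3) should reproduce the commanded $x_e, y_e, \phi_e$, which is a convenient consistency check.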

Inverse kinematics problems often possess multiple solutions, like the above example,
since they are nonlinear. Specifying end-effecter position and orientation does not uniquely
determine the whole configuration of the system. This implies that vector p, the collective
position and orientation of the end-effecter, cannot be used as generalized coordinates.
The existence of multiple solutions, however, provides the robot with an extra degree of
flexibility. Consider a robot working in a crowded environment. If multiple configurations exist
for the same end-effecter location, the robot can take a configuration having no interference with
¹ Unless noted specifically, we assume that the arc tangent function takes an angle in the proper quadrant consistent with the signs of the two operands.


the environment. Due to physical limitations, however, the solutions to the inverse kinematics
problem do not necessarily provide feasible configurations. We must check whether each solution
satisfies the constraint of movable range, i.e. stroke limit of each joint.

Figure 4.2.2 Multiple solutions to the inverse kinematics problem of Example 4.2

4.3 Kinematics of Parallel Link Mechanisms


Example 4.3

Consider the five-bar-link planar robot arm shown in Figure 4.3.1. The end-effecter position is given by

$x_e = \ell_1 \cos\theta_1 + \ell_2 \cos\theta_2$
$y_e = \ell_1 \sin\theta_1 + \ell_2 \sin\theta_2$   (4.3.1)

where $\theta_1$ and $\theta_2$ are the absolute angles of links 1 and 2 measured from the x axis, as shown in the figure.
Note that Joint 2 is a passive joint. Hence, angle $\theta_2$ is a dependent variable. Using $\theta_2$, however, we can obtain the coordinates of point A:

$x_A = \ell_1 \cos\theta_1 + \ell_5 \cos\theta_2$
$y_A = \ell_1 \sin\theta_1 + \ell_5 \sin\theta_2$   (4.3.2)

where $\ell_5$ is the distance between joint 2 and point A along link 2.
Point A must be reached via the branch comprising Links 3 and 4. Therefore,

$x_A = \ell_3 \cos\theta_3 + \ell_4 \cos\theta_4$
$y_A = \ell_3 \sin\theta_3 + \ell_4 \sin\theta_4$   (4.3.3)
Equating these two sets of equations yields two constraint equations:


$\ell_1 \cos\theta_1 + \ell_5 \cos\theta_2 = \ell_3 \cos\theta_3 + \ell_4 \cos\theta_4$
$\ell_1 \sin\theta_1 + \ell_5 \sin\theta_2 = \ell_3 \sin\theta_3 + \ell_4 \sin\theta_4$   (4.3.4)

Note that there are four variables and two constraint equations. Therefore, two of the variables, such as $\theta_1, \theta_3$, are independent. It should also be noted that multiple solutions exist for these constraint equations.
Figure 4.3.1 Five-bar-link mechanism


Although the forward kinematic equations are difficult to write out explicitly, the inverse kinematic equations can be obtained for this parallel link mechanism. The problem is to find $\theta_1, \theta_3$ that lead the endpoint to a desired position: $x_e, y_e$. We will take the following procedure:
Step 1 Given $x_e, y_e$, find $\theta_1, \theta_2$ by solving the two-link inverse kinematics problem.
Step 2 Given $\theta_1, \theta_2$, obtain $x_A, y_A$. This is a forward kinematics problem.
Step 3 Given $x_A, y_A$, find $\theta_3, \theta_4$ by solving another two-link inverse kinematics problem.
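The three-step procedure can be sketched in code as follows. The helper names are hypothetical, and the sketch assumes (as eqs. (4.3.1)-(4.3.3) imply) that joints 1 and 3 are both located at the origin and that all angles are absolute angles measured from the x axis; only one of the two branches of each two-link sub-problem is returned.

```python
import numpy as np

def two_link_ik(x, y, la, lb, branch=+1):
    """Absolute-angle inverse kinematics of a two-link chain reaching (x, y)."""
    r = np.hypot(x, y)
    gamma = np.arccos((la**2 + r**2 - lb**2) / (2.0 * la * r))   # law of cosines
    phi_a = np.arctan2(y, x) + branch * gamma                    # angle of the first link
    phi_b = np.arctan2(y - la * np.sin(phi_a), x - la * np.cos(phi_a))
    return phi_a, phi_b

def five_bar_ik(xe, ye, l1, l2, l3, l4, l5):
    """Steps 1-3 of Example 4.3 for the five-bar-link mechanism (hypothetical sketch)."""
    # Step 1: theta1, theta2 from the end-effecter position (links 1 and 2)
    th1, th2 = two_link_ik(xe, ye, l1, l2, branch=-1)
    # Step 2: forward kinematics to point A on link 2, eq. (4.3.2)
    xA = l1 * np.cos(th1) + l5 * np.cos(th2)
    yA = l1 * np.sin(th1) + l5 * np.sin(th2)
    # Step 3: theta3, theta4 from point A (links 3 and 4), eq. (4.3.3)
    th3, th4 = two_link_ik(xA, yA, l3, l4, branch=+1)
    return th1, th3          # the two active joint angles
```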

Example 4.4

Obtain the joint angles of the dog's legs, given the body position and orientation.


Figure 4.3.2 A doggy robot with two legs on the ground


The inverse kinematics problem:
Step 1 Given $x_B, y_B, \phi_B$, find $x_A, y_A$ and $x_C, y_C$
Step 2 Given $x_A, y_A$, find $\theta_1, \theta_2$
Step 3 Given $x_C, y_C$, find $\theta_3, \theta_4$

4.4 Redundant mechanisms


A manipulator arm must have at least six degrees of freedom in order to locate its end-effecter at an arbitrary point with an arbitrary orientation in space. Manipulator arms with fewer than six degrees of freedom are not able to perform such arbitrary positioning. On the other hand, if a manipulator arm has more than six degrees of freedom, there exists an infinite number of solutions to the kinematic equation. Consider for example the human arm, which has seven degrees of freedom, excluding the joints at the fingers. Even if the hand is fixed on a table, one can change the elbow position continuously without changing the hand location. This implies that there exists an infinite set of joint displacements that lead the hand to the same location. Manipulator arms with more than six degrees of freedom are referred to as redundant manipulators. We will discuss redundant manipulators in detail in the following chapter.


Chapter 5
Differential Motion
In the previous chapter, the position and orientation of the manipulator end-effecter were evaluated in
relation to joint displacements. The joint displacements corresponding to a given end-effecter location
were obtained by solving the kinematic equation for the manipulator. This preliminary analysis
permitted the robotic system to place the end-effecter at a specified location in space. In this chapter,
we are concerned not only with the final location of the end-effecter, but also with the velocity at
which the end-effecter moves. In order to move the end-effecter in a specified direction at a specified
speed, it is necessary to coordinate the motion of the individual joints. The focus of this chapter is the
development of fundamental methods for achieving such coordinated motion in multiple-joint robotic
systems. As discussed in the previous chapter, the end-effecter position and orientation are directly
related to the joint displacements. Hence, in order to coordinate joint motions, we derive the
differential relationship between the joint displacements and the end-effecter location, and then solve
for the individual joint motions.

5.1 Differential Relationship


We begin by considering a two degree-of-freedom planar robot arm, as shown in Figure 5.1.1. The kinematic equations relating the end-effecter coordinates $x_e$ and $y_e$ to the joint displacements $\theta_1$ and $\theta_2$ are given by

$x_e(\theta_1, \theta_2) = \ell_1 \cos\theta_1 + \ell_2 \cos(\theta_1 + \theta_2)$   (5.1.1)

$y_e(\theta_1, \theta_2) = \ell_1 \sin\theta_1 + \ell_2 \sin(\theta_1 + \theta_2)$   (5.1.2)

Figure 5.1.1 Two dof planar robot with two revolute joints


We are concerned with small movements of the individual joints at the current position,
and we want to know the resultant motion of the end-effecter. This can be obtained by the total
derivatives of the above kinematic equations:

$dx_e = \dfrac{\partial x_e(\theta_1, \theta_2)}{\partial \theta_1} d\theta_1 + \dfrac{\partial x_e(\theta_1, \theta_2)}{\partial \theta_2} d\theta_2$   (5.1.3)

$dy_e = \dfrac{\partial y_e(\theta_1, \theta_2)}{\partial \theta_1} d\theta_1 + \dfrac{\partial y_e(\theta_1, \theta_2)}{\partial \theta_2} d\theta_2$   (5.1.4)

where $x_e, y_e$ are functions of both $\theta_1$ and $\theta_2$; hence two partial derivatives are involved in the total derivatives. In vector form the above equations reduce to

$d\mathbf{x} = \mathbf{J}\, d\mathbf{q}$   (5.1.5)

where

$d\mathbf{x} = \begin{bmatrix} dx_e \\ dy_e \end{bmatrix}, \qquad d\mathbf{q} = \begin{bmatrix} d\theta_1 \\ d\theta_2 \end{bmatrix}$   (5.1.6)

and J is a 2 by 2 matrix given by

$\mathbf{J} = \begin{bmatrix} \dfrac{\partial x_e(\theta_1,\theta_2)}{\partial \theta_1} & \dfrac{\partial x_e(\theta_1,\theta_2)}{\partial \theta_2} \\[2mm] \dfrac{\partial y_e(\theta_1,\theta_2)}{\partial \theta_1} & \dfrac{\partial y_e(\theta_1,\theta_2)}{\partial \theta_2} \end{bmatrix}$   (5.1.7)

The matrix J comprises the partial derivatives of the functions $x_e(\theta_1, \theta_2)$ and $y_e(\theta_1, \theta_2)$ with respect to joint displacements $\theta_1$ and $\theta_2$. The matrix J, called the Jacobian Matrix, represents the differential relationship between the joint displacements and the resulting end-effecter motion. Note that most robot mechanisms have a multitude of active joints, hence a matrix is needed for describing the mapping of the vectorial joint motion to the vectorial end-effecter motion.
For the two-dof robot arm of Figure 5.1.1, the components of the Jacobian matrix are computed as

$\mathbf{J} = \begin{bmatrix} -\ell_1 \sin\theta_1 - \ell_2 \sin(\theta_1+\theta_2) & -\ell_2 \sin(\theta_1+\theta_2) \\ \ell_1 \cos\theta_1 + \ell_2 \cos(\theta_1+\theta_2) & \ell_2 \cos(\theta_1+\theta_2) \end{bmatrix}$   (5.1.8)

By definition, the Jacobian collectively represents the sensitivities of individual end-effecter


coordinates to individual joint displacements. This sensitivity information is needed in order to
coordinate the multi dof joint displacements for generating a desired motion at the end-effecter.
Consider the instant when the two joints of the robot arm are moving at joint velocities $\dot{\mathbf{q}} = (\dot\theta_1, \dot\theta_2)^T$, and let $\mathbf{v}_e = (\dot{x}_e, \dot{y}_e)^T$ be the resultant end-effecter velocity vector. The Jacobian provides the relationship between the joint velocities and the resultant end-effecter velocity. Indeed, dividing eq.(5) by the infinitesimal time increment dt yields

$\dfrac{d\mathbf{x}_e}{dt} = \mathbf{J}\dfrac{d\mathbf{q}}{dt}, \quad \text{or} \quad \mathbf{v}_e = \mathbf{J}\dot{\mathbf{q}}$   (5.1.9)

Thus the Jacobian determines the velocity relationship between the joints and the end-effecter.
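A small numerical sketch of eqs. (5.1.8)-(5.1.9) may be useful here; the link lengths, joint angles, and joint velocities below are hypothetical values chosen only for illustration.

```python
import numpy as np

def jacobian(theta1, theta2, l1=1.0, l2=1.0):
    """2-by-2 Jacobian of the two-link planar arm, eq. (5.1.8)."""
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

# End-effecter velocity induced by given joint velocities, eq. (5.1.9)
q_dot = np.array([0.1, -0.2])                         # joint velocities [rad/s]
v_e = jacobian(np.deg2rad(30.0), np.deg2rad(45.0)) @ q_dot
print(v_e)
```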


5.2 Properties of the Jacobian


The Jacobian plays an important role in the analysis, design, and control of robotic
systems. It will be used repeatedly in the following chapters. It is worth examining basic
properties of the Jacobian, which will be used throughout this book.
We begin by dividing the 2-by-2 Jacobian of eq.(5.1.8) into two column vectors:

$\mathbf{J} = (\mathbf{J}_1, \mathbf{J}_2), \qquad \mathbf{J}_1, \mathbf{J}_2 \in \Re^{2 \times 1}$   (5.2.1)

Then eq.(5.1.9) can be written as

$\mathbf{v}_e = \mathbf{J}_1 \dot\theta_1 + \mathbf{J}_2 \dot\theta_2$   (5.2.2)

The first term on the right-hand side accounts for the end-effecter velocity induced by the first
joint only, while the second term represents the velocity resulting from the second joint motion
only. The resultant end-effecter velocity is given by the vectorial sum of the two. Each column
vector of the Jacobian matrix represents the end-effecter velocity generated by the corresponding
joint moving at a unit velocity when all other joints are immobilized.
Figure 5.2.1 illustrates the column vectors $\mathbf{J}_1, \mathbf{J}_2$ of the 2 dof robot arm in the two-dimensional space. Vector $\mathbf{J}_2$, given by the second column of eq.(5.1.8), points in the direction perpendicular to link 2. Note, however, that vector $\mathbf{J}_1$ is not perpendicular to link 1 but is perpendicular to line OE, the line from joint 1 to the endpoint E. This is because $\mathbf{J}_1$ represents the endpoint velocity induced by joint 1 when joint 2 is immobilized. In other words, links 1 and 2 are rigidly connected, becoming a single rigid body of link length OE, and $\mathbf{J}_1$ is the tip velocity of the link OE.

Figure 5.2.1 Geometric interpretation of the column vectors of the Jacobian


In general, each column vector of the Jacobian represents the end-effecter velocity and
angular velocity generated by the individual joint velocity while all other joints are immobilized.
Let p be the end-effecter velocity and angular velocity, or the end-effecter velocity for short, and


$\mathbf{J}_i$ be the i-th column of the Jacobian. The end-effecter velocity is given by a linear combination of the Jacobian column vectors weighted by the individual joint velocities:

$\dot{\mathbf{p}} = \mathbf{J}_1 \dot{q}_1 + \cdots + \mathbf{J}_n \dot{q}_n$   (5.2.3)

where n is the number of active joints. The geometric interpretation of the column vectors is that
J i is the end-effecter velocity and angular velocity when all the joints other than joint i are
immobilized and only the i-th joint is moving at a unit velocity.
Exercise
Consider the two-dof articulated robot shown in Figure 5.2.1 again. This time we use absolute joint angles measured from the positive x-axis, as shown in Figure 5.2.2. Note that angle $\theta_2$ is measured from the fixed frame, i.e. the x-axis, rather than a relative frame, e.g. link 1. Obtain the 2-by-2 Jacobian and illustrate the two column vectors on the xy plane. Discuss the result in comparison with the previous case shown in Figure 5.2.1.

Figure 5.2.2 Absolute joint angles measured from the x-axis

Note that the elements of the Jacobian are functions of joint displacements, and thereby vary with the arm configuration. As expressed in eq.(5.1.8), the partial derivatives, $\partial x_e / \partial \theta_i$, $\partial y_e / \partial \theta_i$, are functions of $\theta_1$ and $\theta_2$. Therefore, the column vectors $\mathbf{J}_1, \mathbf{J}_2$ vary depending on the arm posture. Remember that the end-effecter velocity is given by the linear combination of the Jacobian column vectors $\mathbf{J}_1, \mathbf{J}_2$. Therefore, the resultant end-effecter velocity varies depending on the direction and magnitude of the Jacobian column vectors $\mathbf{J}_1, \mathbf{J}_2$ spanning the two-dimensional space. If the two vectors point in different directions, the whole two-dimensional space is covered with the linear combination of the two vectors. That is, the end-effecter can be moved in an arbitrary direction with an arbitrary velocity. If, on the other hand, the two Jacobian column vectors are aligned, the end-effecter cannot be moved in an arbitrary direction. As shown in Figure 5.2.3, this may happen for particular arm postures where the two links are fully contracted or extended. These arm configurations are referred to as singular configurations. Accordingly, the Jacobian matrix becomes singular at these positions. Using the determinant of a matrix, this condition is expressed as

$\det \mathbf{J} = 0$   (5.2.4)


In fact, the Jacobian degenerates at the singular configurations, where joint angle $\theta_2$ is 0 or 180 degrees. Substituting $\theta_2 = 0$ into eq.(5.1.8) yields

$\det \mathbf{J} = \begin{vmatrix} -(\ell_1 + \ell_2)\sin\theta_1 & -\ell_2 \sin\theta_1 \\ (\ell_1 + \ell_2)\cos\theta_1 & \ell_2 \cos\theta_1 \end{vmatrix} = 0$   (5.2.5)

Note that both column vectors point in the same direction and thereby the determinant becomes
zero.
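The determinant condition can also be checked symbolically. The short sketch below, using sympy (an assumption about available tooling, not something the notes prescribe), confirms that the determinant of eq.(5.1.8) reduces to $\ell_1 \ell_2 \sin\theta_2$, which vanishes exactly at $\theta_2 = 0$ or $\pi$.

```python
import sympy as sp

t1, t2, l1, l2 = sp.symbols('theta1 theta2 l1 l2')

# Jacobian of eq. (5.1.8)
J = sp.Matrix([[-l1*sp.sin(t1) - l2*sp.sin(t1 + t2), -l2*sp.sin(t1 + t2)],
               [ l1*sp.cos(t1) + l2*sp.cos(t1 + t2),  l2*sp.cos(t1 + t2)]])

print(sp.simplify(J.det()))   # -> l1*l2*sin(theta2): singular at theta2 = 0 or pi
```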

Figure 5.2.3 Singular configurations of the two-dof articulated robot


5.3 Inverse Kinematics of Differential Motion
Now that we know the basic properties of the Jacobian, we are ready to formulate the
inverse kinematics problem for obtaining the joint velocities that allow the end-effecter to move
at a given desired velocity. For the two dof articulated robot, the problem is to find the joint velocities $\dot{\mathbf{q}} = (\dot\theta_1, \dot\theta_2)^T$ for the given end-effecter velocity $\mathbf{v}_e = (v_x, v_y)^T$. If the arm configuration is not singular, this can be obtained by taking the inverse of the Jacobian matrix in eq.(5.1.9),

$\dot{\mathbf{q}} = \mathbf{J}^{-1}\mathbf{v}_e$   (5.3.1)

Note that the solution is unique. Unlike the inverse kinematics problem discussed in the previous
chapter, the differential kinematics problem has a unique solution as long as the Jacobian is nonsingular.
The above solution determines how the end-effecter velocity $\mathbf{v}_e$ must be decomposed, or resolved, to individual joint velocities. If the controls of the individual joints regulate the joint velocities so that they can track the resolved joint velocities $\dot{\mathbf{q}}$, the resultant end-effecter velocity will be the desired $\mathbf{v}_e$. This control scheme is called Resolved Motion Rate Control, attributed to Daniel Whitney (1969). Since the elements of the Jacobian matrix are functions of joint displacements, the inverse Jacobian varies depending on the arm configuration. This means that although the desired end-effecter velocity is constant, the joint velocities are not. Coordination is


thus needed among the joint velocity control systems in order to generate a desired motion at the
end-effecter.
Example
Consider the two dof articulated robot arm again. We want to move the endpoint of the robot at a constant speed along a path starting at point A on the x-axis, (+2, 0), going around the origin through point B on the positive x-axis near the origin and point C on the positive y-axis near the origin, and reaching the final point D (0, +2) on the y-axis. See Figure 5.3.1. For simplicity, each arm link is of unit length. Obtain the profiles of the individual joint velocities as the end-effecter tracks the path at the constant speed.
Substituting $\mathbf{v}_e = (v_x, v_y)^T$ into eq.(1) yields

$\dot\theta_1 = \dfrac{v_x \cos(\theta_1+\theta_2) + v_y \sin(\theta_1+\theta_2)}{\sin\theta_2}$   (5.3.2)

$\dot\theta_2 = -\dfrac{v_x [\cos\theta_1 + \cos(\theta_1+\theta_2)] + v_y [\sin\theta_1 + \sin(\theta_1+\theta_2)]}{\sin\theta_2}$   (5.3.3)
Figure 5.3.1 Trajectory tracking near the singular points


Figure 5.3.2 shows the resolved joint velocities $\dot\theta_1, \dot\theta_2$ computed along the specified trajectory. Note that the joint velocities are extremely large near the initial and final points, and are unbounded at points A and D. These are at the arm's singular configurations, $\theta_2 = 0$. As the end-effecter gets close to the origin, the velocity of the first joint becomes very large in order to quickly turn the arm around from point B to C. At these configurations, the second joint is almost 180 degrees, meaning that the arm is near a singularity. This result agrees with the singularity condition using the determinant of the Jacobian:

$\det \mathbf{J} = \sin\theta_2 = 0, \qquad \theta_2 = k\pi, \quad k = 0, \pm 1, \pm 2, \ldots$   (5.3.4)


In eqs.(2) and (3) above, the numerators are divided by $\sin\theta_2$, the determinant of the Jacobian. Therefore, the joint velocities $\dot\theta_1, \dot\theta_2$ blow up as the arm configuration gets close to the singular configuration.
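The blow-up near the singularity is easy to reproduce numerically. The sketch below evaluates eq.(5.3.1) for one fixed desired endpoint velocity at configurations that approach $\theta_2 = 0$; the numbers are hypothetical and serve only to illustrate the trend seen in Figure 5.3.2.

```python
import numpy as np

def resolved_rate(theta, v_e, l1=1.0, l2=1.0):
    """Joint velocities for a desired endpoint velocity, q_dot = J^-1 v_e, eq. (5.3.1)."""
    t1, t2 = theta
    J = np.array([[-l1*np.sin(t1) - l2*np.sin(t1 + t2), -l2*np.sin(t1 + t2)],
                  [ l1*np.cos(t1) + l2*np.cos(t1 + t2),  l2*np.cos(t1 + t2)]])
    return np.linalg.solve(J, v_e)        # denominator is det J = l1*l2*sin(theta2)

# Same desired endpoint velocity, configurations approaching the singularity theta2 = 0
for t2_deg in (90.0, 45.0, 20.0, 10.0, 5.0, 1.0):
    theta = np.array([np.deg2rad(30.0), np.deg2rad(t2_deg)])
    print(t2_deg, resolved_rate(theta, np.array([0.1, 0.0])))   # joint rates blow up
```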

Figure 5.3.2 Joint velocity profiles for tracking the trajectory in Figure 5.3.1
Furthermore, the arm's behavior near the singular points can be analyzed by substituting $\theta_2 = 0$ into the Jacobian, as obtained in eq.(5.2.5). For $\ell_1 = \ell_2 = 1$ and $\theta_2 = 0$, the Jacobian column vectors reduce to ones pointing in the same direction:

$\mathbf{J}_1 = \begin{bmatrix} -2\sin\theta_1 \\ 2\cos\theta_1 \end{bmatrix}, \quad \mathbf{J}_2 = \begin{bmatrix} -\sin\theta_1 \\ \cos\theta_1 \end{bmatrix}, \qquad \text{for } \theta_2 = 0$   (5.3.5)

As illustrated in Figure 5.2.3 (singular configuration A), both joints $\theta_1, \theta_2$ generate endpoint velocity along the same direction. Note that no endpoint velocity can be generated in the direction perpendicular to the aligned arm links. For $\theta_2 = \pi$,

$\mathbf{J}_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad \mathbf{J}_2 = \begin{bmatrix} \sin\theta_1 \\ -\cos\theta_1 \end{bmatrix}, \qquad \text{for } \theta_2 = \pi$   (5.3.6)

The first joint cannot generate any endpoint velocity, since the arm is fully contracted. See
singular configuration B in Figure 5.2.3.


At a singular configuration, there is at least one direction in which the robot cannot
generate a non-zero velocity at the end-effecter. This agrees with the previous discussion; the
Jacobian is degenerate at a singular configuration, and the linear combination of the Jacobian
column vectors cannot span the whole space.
Exercise 5.2
A three-dof spatial robot arm is shown in the figure below. The robot has three revolute
joints that allow the endpoint to move in the three dimensional space. However, this robot
mechanism has singular points inside the workspace. Analyze the singularity, following the
procedure below.
Step 1 Obtain each column vector of the Jacobian matrix by considering the endpoint velocity
created by each of the joints while immobilizing the other joints.
Step 2 Construct the Jacobian by concatenating the column vectors, and set the determinant of
the Jacobian to zero for singularity: det J = 0 .
Step 3 Find the joint angles that make det J = 0 .
Step 4 Show the arm posture that is singular. Show where in the workspace it becomes singular.
For each singular configuration, also show in which direction the endpoint cannot have a nonzero velocity.

Figure 5.3.3 Schematic of a three dof articulated robot


5.4 Singularity and Redundancy


We have seen in this chapter that singular configurations exist for many robot
mechanisms. Sometimes, such singular configurations exist in the middle of the workspace,
seriously degrading mobility and dexterity of the robot. At a singular point, the robot cannot
move in certain directions with a non-zero velocity. To overcome this difficulty, several methods
can be considered. One is to plan a trajectory of the robot motion such that it will not go into
singular configurations. Another method is to include additional degrees of freedom, so that even
when some degrees of freedom are lost at a certain configuration, the robot can still maintain the
necessary degrees of freedom. Such a robot is referred to as a redundant robot. In this section we
will discuss singularity and redundancy, and obtain general properties of differential motion for
general n degree of freedom robots.
As studied in Section 5.3, a unique solution exists to the differential kinematic equation,
(5.3.1), if the arm configuration is non-singular. However, when a planar (spatial) robot arm has
more than three (six) degrees of freedom, we can find an infinite number of solutions that provide
the same motion at the end-effecter. Consider for instance the human arm, which has seven
degrees of freedom excluding the joints at the fingers. When the hand is placed on a desk and
fixed in its position and orientation, the elbow position can still vary continuously without
moving the hand. This implies that a certain ratio of joint velocities exists that does not cause any
velocity at the hand. This specific ratio of joint velocities does not contribute to the resultant
endpoint velocity. Even if these joint velocities are superimposed onto other joint velocities, the
resultant end-effecter velocity is the same. Consequently, we can find different solutions of the
instantaneous kinematic equation for the same end-effecter velocity. In the following, we
investigate the fundamental properties of the differential kinematics when additional degrees of
freedom are involved.
To formulate a differential kinematic equation for a general n degree-of-freedom robot
mechanism, we begin by modifying the definition of the vector dxe representing the end-effecter
motion. In eq. (5.1.6), dxe was defined as a two-dimensional vector that represents the
infinitesimal translation of an end-effecter. This must be extended to a general m-dimensional
vector. For planar motion, m may be 3, and for spatial motion, m may be six. However, the
number of variables that we need to specify for performing a task is not necessarily three or six.
In arc welding, for example, only five independent variables of torch motion need be controlled.
Since the welding torch is usually symmetric about its centerline, we can locate the torch with an
arbitrary orientation about the centerline. Thus five degrees of freedom are sufficient to perform
arc welding. In general, we describe the differential end-effecter motion by m independent
variables dp that must be specified to perform a given task.

$d\mathbf{p} = [dp_1 \; dp_2 \; \cdots \; dp_m]^T \in \Re^{m \times 1}$   (5.4.1)

Then the differential kinematic equation for a general n degree-of-freedom robot is given by

dp = J dq

(5.4.2)

where the dimension of the Jacobian J is m by n: $\mathbf{J} \in \Re^{m \times n}$. When n is larger than m and J is of full rank, there are (n-m) arbitrary variables in the general solution of eq.(2). The robot is then said to have (n-m) redundant degrees of freedom for the given task.
Associated with the above differential equation, the velocity relationship can be written as

$\dot{\mathbf{p}} = \mathbf{J}\dot{\mathbf{q}}$   (5.4.3)

where $\dot{\mathbf{p}}$ and $\dot{\mathbf{q}}$ are velocities of the end-effecter and the joints, respectively.


Equation (3) can be regarded as a linear mapping from n-dimensional vector space Vn to
m-dimensional space Vm. To characterize the solution to eq.(3), let us interpret the equation using
the linear mapping diagram shown in Figure 5.4.1. The subspace R(J) in the figure is the range
space of the linear mapping. The range space represents all the possible end-effecter velocities
that can be generated by the n joints at the present arm configuration. If J is of full row rank, the range space covers the entire vector space Vm. Otherwise, there exists at least one
direction in which the end-effecter cannot be moved with non-zero velocity. The subspace N(J)
of Figure 5.4.1 is the null space of the linear mapping. Any element in this subspace is mapped
into the zero vector in Vm. Therefore, any joint velocity vector $\dot{\mathbf{q}}$ that belongs to the null space
does not produce any velocity at the end-effecter. Recall the human arm discussed before. The
elbow can move without moving the hand. Joint velocities for this motion are involved in the null
space, since no end-effecter motion is induced. If the Jacobian is of full rank, the dimension of the
null space, dim N(J), is the same as the number of redundant degrees of freedom (n-m). When the
Jacobian matrix is degenerate, i.e. not of full rank, the dimension of the range space, dim R(J),
decreases, and at the same time the dimension of the null space increases by the same amount.
The sum of the two is always equal to n:

$\dim R(\mathbf{J}) + \dim N(\mathbf{J}) = n$   (5.4.4)

Let $\dot{\mathbf{q}}^*$ be a particular solution of eq.(3) and $\dot{\mathbf{q}}^0$ be a vector involved in the null space; then the vector of the form $\dot{\mathbf{q}} = \dot{\mathbf{q}}^* + k\dot{\mathbf{q}}^0$ is also a solution of eq.(3), where k is an arbitrary scalar quantity. Namely,

$\mathbf{J}\dot{\mathbf{q}} = \mathbf{J}\dot{\mathbf{q}}^* + k\mathbf{J}\dot{\mathbf{q}}^0 = \mathbf{J}\dot{\mathbf{q}}^* = \dot{\mathbf{p}}$   (5.4.5)

Since the second term $k\dot{\mathbf{q}}^0$ can be chosen arbitrarily within the null space, an infinite number of solutions exist for the linear equation, unless the dimension of the null space is 0. The null space accounts for the arbitrariness of the solutions. The general solution to the linear equation involves the same number of arbitrary parameters as the dimension of the null space.
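These statements are easy to verify numerically. The sketch below uses a hypothetical 2-by-3 Jacobian (m = 2, n = 3, so one redundant degree of freedom) to show that adding any multiple of a null-space vector to a particular solution leaves the endpoint velocity unchanged; the numbers are illustrative, not a configuration from the text.

```python
import numpy as np

# Hypothetical Jacobian of a redundant planar arm at some fixed configuration (m = 2, n = 3)
J = np.array([[-1.2, -0.7, -0.3],
              [ 0.9,  0.6,  0.4]])

p_dot = np.array([0.1, -0.05])            # desired endpoint velocity

q_star = np.linalg.pinv(J) @ p_dot        # a particular solution of p_dot = J q_dot
_, _, Vt = np.linalg.svd(J)
q_null = Vt[-1]                           # spans the one-dimensional null space N(J)

for k in (0.0, 1.0, -2.5):                # any k gives the same endpoint velocity
    print(J @ (q_star + k * q_null))      # ~ [0.1, -0.05] every time
```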

Figure 5.4.1 Linear mapping diagram


Chapter 6
Statics
Robots physically interact with the environment through mechanical contacts. Mating
work pieces in a robotic assembly line, manipulating an object with a multi-fingered hand, and
negotiating a rough terrain through leg locomotion are just a few examples of mechanical
interactions. All of these tasks entail control of the contacts and interference between the robot
and the environment. Force and moment acting between the robot end-effecter and the environment must be accommodated in order to control the interactions. In this chapter we
will analyze the force and moment that act on the robot when it is at rest.
A robot generates a force and a moment at its end-effecter by controlling individual
actuators. To generate a desired force and moment, the torques of the multiple actuators must be
coordinated. As seen in the previous chapter, the sensitivities of the individual actuators upon the
end-effecter motion, i.e. the Jacobian matrix, are essential in relating the actuator (joint) torques
to the force and moment at the end-effecter. We will obtain a fundamental theorem for force and
moment acting on a multi degree-of-freedom robot, which we will find is analogous to the
differential kinematics discussed previously.
6.1 Free Body Diagram
We begin by considering the free body diagram of an individual link involved in an open
kinematic chain. Figure 6.1.1 shows the forces and moments acting on link i, which is connected
to link i-1 and link i+1 by joints i and i+1, respectively. Let Oi be a point fixed to link i located
on the joint axis i+1 and Oi-1 be a point fixed to link i-1 on the joint axis i. Through the
connections with the adjacent links, link i receives forces and moments from both sides of the
link. Let fi-1,i be a three-dimensional vector representing the linear force acting from link i-1 to
link i. Likewise let $\mathbf{f}_{i,i+1}$ be the force from link i to link i+1. The force applied to link i from link i+1 is then given by $-\mathbf{f}_{i,i+1}$. The gravity force acting at the mass centroid $C_i$ is denoted $m_i\mathbf{g}$, where $m_i$ is the mass of link i and $\mathbf{g}$ is the 3x1 vector representing the acceleration of gravity. The balance of linear forces is then given by

$\mathbf{f}_{i-1,i} - \mathbf{f}_{i,i+1} + m_i\mathbf{g} = 0, \qquad i = 1, \ldots, n$   (6.1.1)

Note that all the vectors are defined with respect to the base coordinate system O-xyz.
Next we derive the balance of moments. The moment applied to link i by link i-1 is denoted $\mathbf{N}_{i-1,i}$, and therefore the moment applied to link i by link i+1 is $-\mathbf{N}_{i,i+1}$. Furthermore, the linear forces $\mathbf{f}_{i-1,i}$ and $\mathbf{f}_{i,i+1}$ also cause moments about the centroid $C_i$. The balance of moments with respect to the centroid $C_i$ is thus given by

$\mathbf{N}_{i-1,i} - \mathbf{N}_{i,i+1} - (\mathbf{r}_{i-1,i} + \mathbf{r}_{i,C_i}) \times \mathbf{f}_{i-1,i} + (-\mathbf{r}_{i,C_i}) \times (-\mathbf{f}_{i,i+1}) = 0, \qquad i = 1, \ldots, n$   (6.1.2)

where ri-1,i is the 3x1 position vector from point Oi-1 to point Oi with reference to the base
coordinate frame, and ri,Ci represents the position vector from point Oi to the centroid Ci.


Figure 6.1.1 Free body diagram of the i-th link
The force f i 1, i and moment N i 1, i are called the coupling force and moment between
the adjacent links i and i-1. For i=1, the coupling force and moment are f 0,1 and N 0, 1 . These are
interpreted as the reaction force and moment applied to the base link to which the arm mechanism
is fixed. See Figure 6.1.2-(a). When i = n, on the other hand, the above coupling force and
moment become f n , n +1 and N n , n +1 . As the end-effecter, i.e. link n, contacts the environment, the
reaction force acts on the end-effecter. See Figure 6.1.2-(b). For convenience, we regard the
environment as an additional link, numbered n+1, and represent the reaction force and moment
by - f n , n +1 and - N n , n +1 , respectively.
Figure 6.1.2 Force and moment that the base link exerts on link 1 (a), and the ones that the environment exerts on the end-effecter, the final link (b)


The above equations can be derived for all the link members except for the base link, i.e. i = 1, 2, ..., n. This allows us to form 2n simultaneous equations of 3x1 vectors. The number of
coupling forces and moments involved is 2(n+1). Therefore two of the coupling forces and
moments must be specified; otherwise the equations cannot be solved. The final coupling force
and moment, f n , n +1 and N n , n +1 , are the force and moment that the end-effecter applies to the
environment. It is this pair of force and moment that the robot needs to accommodate in order to
perform a given task. Thus, we specify this pair of coupling force and moment, and solve the
simultaneous equations. For convenience we combine the force f n , n +1 and the moment N n , n +1 ,
to define the following six-dimensional vector:

$\mathbf{F} = \begin{bmatrix} \mathbf{f}_{n,n+1} \\ \mathbf{N}_{n,n+1} \end{bmatrix}$   (6.1.3)

We call the vector F the endpoint force and moment vector, or the endpoint force for
short.

6.2 Energy Method and Equivalent Joint Torques


In this section we will obtain the functional relationship between the joint torques and the
endpoint force, which will be needed for accommodating interactions between the end-effecter
and the environment. Such a functional relationship may be obtained by solving the simultaneous
equations derived from the free body diagram. However, we will use a different methodology,
which will give an explicit formula relating the joint torques to the endpoint force without
solving the simultaneous equations. The methodology we will use is the energy method,
sometimes referred to as the indirect method. Since the simultaneous equations based on the
balance of forces and moments are complex and difficult to solve, we will find that the energy
method is the ideal choice when dealing with complex robotic systems.
In the energy method, we describe a system with respect to energy and work. Therefore,
terms associated with forces and moments that do not produce, store, or dissipate energy are
eliminated in its basic formula. In the free body diagram shown in Figure 6.1.1, many
components of the forces and moments are so called constraint forces and moments merely
joining adjacent links together. Therefore, constraint forces and moments do not participate in
energy formulation. This significantly reduces the number of terms and, more importantly, will
provide an explicit formula relating the joint torques to the endpoint force.
To apply the energy method, two preliminary formulations must be performed. One is to
separate the net force or moment generating energy from the constraint forces and moments
irrelevant to energy. Second, we need to find independent displacement variables that are
geometrically admissible satisfying kinematic relations among the links.
Figure 6.2.1 shows the actuator torques and the coupling forces and moments acting at
adjacent joints. The coupling force f i 1, i and moment N i 1, i are the resultant force and moment
acting on the individual joint, comprising the constraint force and moment as well as the torque
generated by the actuator. Let $\mathbf{b}_{i-1}$ be the 3x1 unit vector pointing in the direction of joint axis i, as shown in the figure. If the i-th joint is a revolute joint, the actuator generates joint torque $\tau_i$ about the joint axis. Therefore, the joint torque generated by the actuator is one component of the coupling moment $\mathbf{N}_{i-1,i}$ along the direction of the joint axis:

$\tau_i = \mathbf{b}_{i-1}^T \mathbf{N}_{i-1,i}$   (6.2.1)


For a prismatic joint, such as the (i+1)-st joint illustrated in Figure 6.2.1, the actuator generates a linear force in the direction of the joint axis. Therefore, it is the component of the linear coupling force $\mathbf{f}_{i-1,i}$ projected onto the joint axis,

$\tau_i = \mathbf{b}_{i-1}^T \mathbf{f}_{i-1,i}$   (6.2.2)

Note that, although we use the same notation as that of a revolute joint, the scalar quantity $\tau_i$ has the unit of a linear force for a prismatic joint. To unify the notation we use $\tau_i$ for both types of joints, and call it a joint torque regardless of the type of joint.

Figure 6.2.1 Joint torques as components of coupling force and moment


We combine all the joint torques from joint 1 through joint n to define the nx1 joint torque vector:

$\boldsymbol{\tau} = (\tau_1 \; \tau_2 \; \cdots \; \tau_n)^T$   (6.2.3)

The joint torque vector collectively represents all the actuators' torque inputs to the linkage system. Note that all the other components of the coupling force and moment are borne by the mechanical structure of the joint. Therefore, the constraint forces and moments irrelevant to the energy formula have been separated from the net energy inputs to the linkage system.
In the free body diagram, the individual links are disjointed, leaving constraint forces and moments at both sides of the link. The freed links are allowed to move in any direction. In the energy formulation, we describe the link motion using independent variables alone. Remember that in a serial link open kinematic chain the joint coordinates $\mathbf{q} = (q_1 \; \cdots \; q_n)^T$ are a complete and independent set of generalized coordinates that uniquely locate the linkage system with independent variables. Therefore, these variables conform to the geometric and kinematic constraints. We use these joint coordinates in the energy-based formulation.


The explicit relationship between the n joint torques and the endpoint force F is given by the following theorem:
Theorem 6.1
Consider an n degree-of-freedom, serial link robot having no friction at the joints. The joint torques $\boldsymbol{\tau} \in \Re^{n \times 1}$ that are required for bearing an arbitrary endpoint force $\mathbf{F} \in \Re^{6 \times 1}$ are given by

$\boldsymbol{\tau} = \mathbf{J}^T \mathbf{F}$   (6.2.4)

where J is the 6 x n Jacobian matrix relating infinitesimal joint displacements dq to infinitesimal end-effecter displacements dp:

$d\mathbf{p} = \mathbf{J}\, d\mathbf{q}$   (6.2.5)

Note that the joint torques in the above expression do not account for gravity and friction. They are the net torques that balance the endpoint force and moment. We call $\boldsymbol{\tau}$ of eq.(4) the equivalent joint torques associated with the endpoint force F.
Proof
We prove the theorem by using the Principle of Virtual Work. Consider virtual displacements at individual joints, $\delta\mathbf{q} = (\delta q_1, \ldots, \delta q_n)^T$, and at the end-effecter, $\delta\mathbf{p} = (\delta\mathbf{x}_e^T, \delta\boldsymbol{\theta}_e^T)^T$, as shown in Figure 6.2.2. Virtual displacements are arbitrary infinitesimal displacements of a mechanical system that conform to the geometric constraints of the system. Virtual displacements are different from actual displacements, in that they must only satisfy geometric constraints and do not have to meet other laws of motion. To distinguish the virtual displacements from the actual displacements, we use the Greek letter $\delta$ rather than the roman d.

Figure 6.2.2 Virtual displacements of the end-effecter and individual joints
We assume that joint torques $\boldsymbol{\tau} = (\tau_1 \; \tau_2 \; \cdots \; \tau_n)^T$ and the endpoint force and moment, $-\mathbf{F}$, act on the serial linkage system, while the joints and the end-effecter are moved in geometrically admissible directions. Then the virtual work done by the forces and moments is given by

$\delta \mathit{Work} = \tau_1 \delta q_1 + \tau_2 \delta q_2 + \cdots + \tau_n \delta q_n - \mathbf{f}_{n,n+1}^T \delta\mathbf{x}_e - \mathbf{N}_{n,n+1}^T \delta\boldsymbol{\theta}_e = \boldsymbol{\tau}^T \delta\mathbf{q} - \mathbf{F}^T \delta\mathbf{p}$   (6.2.6)

According to the principle of virtual work, the linkage system is in equilibrium if, and only if, the virtual work $\delta \mathit{Work}$ vanishes for arbitrary virtual displacements that conform to geometric constraints. Note that the virtual displacements $\delta\mathbf{q}$ and $\delta\mathbf{p}$ are not independent, but are related by the Jacobian matrix given in eq.(5). The kinematic structure of the robot mechanism dictates that the virtual displacement $\delta\mathbf{p}$ is completely dependent upon the virtual displacement of the joints, $\delta\mathbf{q}$. Substituting eq.(5) into eq.(6) yields

$\delta \mathit{Work} = \boldsymbol{\tau}^T \delta\mathbf{q} - \mathbf{F}^T \mathbf{J} \delta\mathbf{q} = (\boldsymbol{\tau} - \mathbf{J}^T\mathbf{F})^T \delta\mathbf{q}$   (6.2.7)

Note that the vector of the virtual displacements $\delta\mathbf{q}$ consists of all independent variables, since the joint coordinates of an open kinematic chain are generalized coordinates that are complete and independent. Therefore, for the above virtual work to vanish for arbitrary virtual displacements we must have

$\boldsymbol{\tau} = \mathbf{J}^T \mathbf{F}$

This is eq.(6.2.4), and the theorem has been proven.

The above theorem has broad applications in robot mechanics, design, and control. We
will use it repeatedly in the following chapters.
Example 6.1
Figure 6.2.3 shows a two-dof articulated robot having the same link dimensions as the previous examples. The robot is interacting with the environment surface in a horizontal plane. Obtain the equivalent joint torques $\boldsymbol{\tau} = (\tau_1, \tau_2)^T$ needed for pushing the surface with an endpoint force of $\mathbf{F} = (F_x, F_y)^T$. Assume no friction.
The Jacobian matrix relating the end-effecter coordinates $x_e$ and $y_e$ to the joint displacements $\theta_1$ and $\theta_2$ has been obtained in the previous chapter:

$\mathbf{J} = \begin{bmatrix} -\ell_1 \sin\theta_1 - \ell_2 \sin(\theta_1+\theta_2) & -\ell_2 \sin(\theta_1+\theta_2) \\ \ell_1 \cos\theta_1 + \ell_2 \cos(\theta_1+\theta_2) & \ell_2 \cos(\theta_1+\theta_2) \end{bmatrix}$   (5.1.8)

From Theorem 6.1, the equivalent joint torques are obtained by simply taking the transpose of the Jacobian matrix:

$\begin{bmatrix} \tau_1 \\ \tau_2 \end{bmatrix} = \begin{bmatrix} -\ell_1 \sin\theta_1 - \ell_2 \sin(\theta_1+\theta_2) & \ell_1 \cos\theta_1 + \ell_2 \cos(\theta_1+\theta_2) \\ -\ell_2 \sin(\theta_1+\theta_2) & \ell_2 \cos(\theta_1+\theta_2) \end{bmatrix} \begin{bmatrix} F_x \\ F_y \end{bmatrix}$   (6.2.8)


Figure 6.2.3 Two-dof articulated robot pushing the environment surface
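A quick numerical sketch of eq.(6.2.8): the function below evaluates $\boldsymbol{\tau} = \mathbf{J}^T\mathbf{F}$ for the two-dof arm. The joint angles, unit link lengths, and the 10 N pushing force are hypothetical values used only for illustration.

```python
import numpy as np

def joint_torques(theta1, theta2, F, l1=1.0, l2=1.0):
    """Equivalent joint torques balancing an endpoint force, tau = J^T F, eq. (6.2.4)."""
    J = np.array([[-l1*np.sin(theta1) - l2*np.sin(theta1 + theta2), -l2*np.sin(theta1 + theta2)],
                  [ l1*np.cos(theta1) + l2*np.cos(theta1 + theta2),  l2*np.cos(theta1 + theta2)]])
    return J.T @ F

# Push against the surface with 10 N in the -x direction at a sample configuration
tau = joint_torques(np.deg2rad(45.0), np.deg2rad(60.0), np.array([-10.0, 0.0]))
print(tau)   # torques that joints 1 and 2 must exert to bear the endpoint force
```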


6.3 Duality of Differential Kinematics and Statics
We have found that the equivalent joint torques are related to the endpoint force by the
Jacobian matrix, which is the same matrix that relates the infinitesimal joint displacements to the
end-effecter displacement. Thus, the static force relationship is closely related to the differential
kinematics. In this section we discuss the physical meaning of this observation.
To interpret the similarity between differential kinematics and statics, we can use the
linear mapping diagram of Figure 5.4.1. Recall that the differential kinematic equation can be
regarded as a linear mapping when the Jacobian matrix is fixed at a given robot configuration.
Figure 6.3.1 reproduces Figure 5.4.1 and completes it with a similar diagram associated with the
static analysis. As before, the range space R(J) represents the set of all the possible end-effecter
velocities generated by joint motions. When the Jacobian matrix is degenerate, or the robot
configuration is singular, the range space does not span the whole vector space. Namely, there
exists a direction in which the end-effecter cannot move with a non-zero velocity. See the
subspace S2 in the figure. The null space N(J), on the other hand, represents the set of joint
velocities that do not produce any velocity at the end-effecter. If the null space contains a nonzero element, the differential kinematic equation has an infinite number of solutions that cause
the same end-effecter velocity.
The lower half of Figure 6.3.1 is the linear mapping associated with the static force
relationship given by eq.(6.2.4). Unlike differential kinematics, the mapping of static forces is
given by the transpose of the Jacobian, generating a mapping from the m-dimensional vector
space Vm, associated with the Cartesian coordinates of the end-effecter, to the n-dimensional
vector space Vn, associated with the joint coordinates. Therefore the joint torques are always
determined uniquely for any arbitrary endpoint force F. However, for given joint torques, a
balancing endpoint force does not always exist. As in the case of the differential kinematics, let us
define the null space N(JT) and the range space R(JT) of the static force mapping. The null space
N(JT) represents the set of all endpoint forces that do not require any torques at the joints to bear
the corresponding load. In this case the endpoint force is borne entirely by the structure of the


linkage mechanism, i.e. constraint forces. The range space R(JT), on the other hand, represents the
set of all the possible joint torques that can balance the endpoint forces.

[Linear mapping diagram: J maps joint velocities q̇ ∈ Vⁿ to end-effecter velocities ṗ ∈ Vᵐ, with null space N(J) and range R(J); Jᵀ maps endpoint forces F ∈ Vᵐ to joint torques τ ∈ Vⁿ, with null space N(Jᵀ) and range R(Jᵀ). Subspaces S1, S2, S3, and S4 mark the orthogonal complements of N(J), R(J), R(Jᵀ), and N(Jᵀ), respectively.]

Figure 6.3.1 Duality of differential kinematics and statics


The ranges and null spaces of J and JT are closely related. According to the rules of linear
algebra, the null space N(J) is the orthogonal complement of the range space R(JT). Namely, if a
non-zero n-vector x is in N(J) , it cannot also belong to R(JT), and vice-versa. If we denote by S1
the orthogonal complement of N(J), then the range space R(JT) is identical to S1, as shown in the
figure. Also, space S3, i.e., the orthogonal complement of R(JT) is identical to N(J). What this
implies is that, in the direction in which joint velocities do not cause any end-effecter velocity, the
joint torques cannot be balanced with any endpoint force. In order to maintain a stationary
configuration, the joint torques in this space must be zero.
There is a similar correspondence in the end-effecter coordinate space Vm. The range
space R(J) is the orthogonal complement to the null space N(JT). Hence, the subspace S2 in the
figure is identical to N(JT), and the subspace S4 is identical to R(J). Therefore, no joint torques are
required to balance the end point force when the external force acts in the direction in which the
end-effecter cannot be moved by joint movements. Also, when the external endpoint force is
applied in the direction along which the end-effecter can move, the external force must be borne
entirely by the joint torques. When the Jacobian matrix is degenerate or the arm is in a singular
configuration, the null space N(JT) has a non-zero dimension, and the external force can be borne
in part by the mechanical structure. Thus, differential kinematics and statics are closely related.
This relationship is referred to as the duality of differential kinematics and statics.
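The duality can also be checked numerically. The sketch below (an added illustration with assumed link lengths) places the two-link arm of the earlier examples at the singular configuration θ2 = 0 and uses the singular value decomposition of J to find the endpoint direction that spans N(Jᵀ); an endpoint force applied along that direction indeed requires no joint torque.

    import numpy as np

    l1, l2 = 0.3, 0.25                       # assumed link lengths
    th1, th2 = 0.6, 0.0                      # th2 = 0: singular (arm fully stretched)

    J = np.array([
        [-l1*np.sin(th1) - l2*np.sin(th1+th2), -l2*np.sin(th1+th2)],
        [ l1*np.cos(th1) + l2*np.cos(th1+th2),  l2*np.cos(th1+th2)],
    ])

    U, s, Vt = np.linalg.svd(J)
    print("singular values:", s)             # one value is ~0 at the singularity

    # Direction in the end-effecter space that cannot be generated by joint motion:
    # the left singular vector paired with the (near-)zero singular value.
    u_unreachable = U[:, np.argmin(s)]
    print("unreachable endpoint direction (spans N(J^T)):", u_unreachable)

    # An endpoint force along that direction is borne entirely by the structure:
    F = 10.0 * u_unreachable
    print("J^T F =", J.T @ F)                # (numerically) zero joint torques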


6.4 Closed-Loop Kinematic Chains


The relationship between joint torques and the endpoint force obtained in Theorem 6.1
can be extended to a class of parallel-link mechanisms with closed kinematic-chains. It can also
be extended to multi-fingered hands, leg locomotion, and other robot mechanisms having closed
kinematic chains. In this section we discuss classes of closed kinematic chains based on the
principle of virtual work.

[Figure: five-bar linkage with Links 0 through 4; active Joints 1 and 3 at the base, passive Joints 2, 4, and 5; Point A at (xA, yA); the endpoint at (xe, ye) exerts force components Fx and Fy; θ1 and θ3 are the active joint angles.]

Figure 6.4.1 Five-bar-link robot exerting endpoint force


We begin by revisiting the five-bar-link planar robot shown in Figure 6.4.1. This robot
has two degrees of freedom, comprising two active joints, Joints 1 and 3, and three passive joints,
Joints 2, 4, and 5. Therefore the virtual work associated with the endpoint force and joint torques
is given by

    δWork = τ1δθ1 + τ2δθ2 + τ3δθ3 + τ4δθ4 + τ5δθ5 − Fxδxe − Fyδye        (6.4.1)

We assume no friction at the joints. Therefore the three passive joints cannot bear any torque load
about their joint axes. Substituting τ2 = τ4 = τ5 = 0 into the above yields

    δWork = (τ1  τ3) ⎡ δθ1 ⎤ − (Fx  Fy) ⎡ δxe ⎤        (6.4.2)
                     ⎣ δθ3 ⎦            ⎣ δye ⎦

For any given configuration of the robot, the virtual displacements of the end-effecter are
uniquely determined by the virtual displacements of Joints 1 and 3. In fact, the former is related
to the latter via the Jacobian matrix:

    J = ⎡ ∂xe/∂θ1   ∂xe/∂θ3 ⎤        (6.4.3)
        ⎣ ∂ye/∂θ1   ∂ye/∂θ3 ⎦

Using this Jacobian,

    δWork = τᵀδq − FᵀJδq = (τ − JᵀF)ᵀδq = 0,   ∀δq        (6.4.4)

where

    δq = (δθ1  δθ3)ᵀ,   δp = (δxe  δye)ᵀ        (6.4.5)

Eq.(4) implies

    τ = JᵀF        (6.4.6)

which is the same form as eq.(6.2.4).


In general the following corollary holds.
Corollary 6.1
Consider an n degree-of-freedom robot mechanism with n active joints. Assume that all
the joints are frictionless, and that, for a given configuration of the robot mechanism, there exists
a unique Jacobian matrix relating the virtual displacements of its end-effecter, δp ∈ ℝᵐˣ¹, to the
virtual displacements of the active joints, δq ∈ ℝⁿˣ¹,

    δp = Jδq        (6.4.7)

Then the equivalent joint torques τ ∈ ℝⁿˣ¹ needed to bear an arbitrary endpoint force F ∈ ℝᵐˣ¹
are given by

    τ = JᵀF        (6.4.8)

Note that the joint coordinates associated with the active joints are not necessarily
generalized coordinates that uniquely locate the system. For example, the arm configuration of
the five-bar-link robot shown in Figure 6.4.1 is not uniquely determined with joint angles θ1 and
θ3 alone. There are two configurations for given θ1 and θ3. The corollary requires the
differential relation to be defined uniquely in the vicinity of the given configuration.

6.5 Over-Actuated Systems


If an n degree-of-freedom robot system has more than n active joints or fewer than n active
joints, the above corollary does not apply. Such systems are called over-actuated and under-actuated
systems, respectively. Over-actuated systems are of particular importance in many manipulation
and locomotion applications. In the following, we consider the static relationship between joint
torques and endpoint forces for a class of over-actuated systems.
Figure 6.5.1 shows a two-fingered hand manipulating an object within a plane. Note that
both fingers are connected at the fingertips holding the object. While holding the object, the
system has three degrees of freedom. Since each finger has two active joints, the total number of
active joints is four. Therefore the system is over-actuated.
Using the notation shown in the figure, the virtual work is given by

    δWork = τ1δθ1 + τ2δθ2 + τ3δθ3 + τ4δθ4 − Fxδxe − Fyδye        (6.5.1)

Note that only three virtual displacements of the four joint angles are independent. There exists a
differential relationship between one of the joints, say δθ4, and the other three due to the
kinematic constraint. Let us write it as

    δθ4 = Jc δq        (6.5.2)

where δq = (δθ1  δθ2  δθ3)ᵀ are independent, and Jc is the 1 x 3 Jacobian associated with the
constraint due to the closed kinematic chain. Substituting this equation, together with the Jacobian
relating the end-effecter displacements to the three joint displacements, into eq.(1),

    δWork = τᵀδq + τ4 Jcδq − FᵀJδq = 0,   ∀δq        (6.5.3)

The virtual work vanishes for an arbitrary δq only when

    τ = −Jcᵀ τ4 + JᵀF        (6.5.4)

The two-fingered hand is at equilibrium only when the above condition is met. When the external
endpoint force is zero, F = 0, we obtain

    τ = −Jcᵀ τ4        (6.5.5)

This gives a particular combination of joint torques that do not influence the force balance with
the external endpoint load F. The joint torques having this particular proportion generate the
internal force applied to the object, as illustrated in the figure. This internal force is a grasp force
that is needed for performing a task.
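The decomposition in eq.(6.5.4) can be visualized with a few lines of code. The sketch below (added; both Jacobians are hypothetical numerical placeholders, not derived from the hand geometry of Figure 6.5.1) splits the joint torques into the part JᵀF that balances the external load and the part −Jcᵀτ4 that produces only internal grasp force.

    import numpy as np

    # Hypothetical numbers for illustration only: a 2x3 Jacobian J relating the
    # end-effecter displacement to the three independent joints, and a 1x3
    # constraint Jacobian Jc relating the dependent joint 4 to the same joints.
    J  = np.array([[0.30, 0.12, 0.05],
                   [0.05, 0.22, 0.18]])
    Jc = np.array([[0.4, -0.7, 0.5]])

    F    = np.array([3.0, 1.0])    # external endpoint force
    tau4 = 2.0                     # torque chosen for the redundant actuator

    # Equilibrium torques of the three independent joints, eq.(6.5.4)
    tau_external = J.T @ F                     # part balancing the external load
    tau_internal = -Jc.T.flatten() * tau4      # part producing only internal force
    tau = tau_internal + tau_external
    print("tau           =", tau)
    print("external part =", tau_external)
    print("internal part =", tau_internal)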

[Figure: two-fingered hand, Fingers 1 and 2, each with two active joints, holding a grasped object at point (xA, yA); the grasp force acts internally on the object, while an external endpoint force acts on it.]

Figure 6.5.1 Two-fingered hand manipulating a grasped object


Exercise 6.2
Define geometric parameters needed in Figure 6.5.1, and obtain the two Jacobian
matrices associated with the two-fingered hand holding an object. Furthermore, obtain the grasp
force using the Jacobian matrices and the joint torques.


Chapter 7
Dynamics
In this chapter, we analyze the dynamic behavior of robot mechanisms. The dynamic
behavior is described in terms of the time rate of change of the robot configuration in relation to
the joint torques exerted by the actuators. This relationship can be expressed by a set of
differential equations, called equations of motion, that govern the dynamic response of the robot
linkage to input joint torques. In the next chapter, we will design a control system on the basis of
these equations of motion.
Two methods can be used in order to obtain the equations of motion: the Newton-Euler
formulation, and the Lagrangian formulation. The Newton-Euler formulation is derived by the
direct interpretation of Newton's Second Law of Motion, which describes dynamic systems in
terms of force and momentum. The equations incorporate all the forces and moments acting on
the individual robot links, including the coupling forces and moments between the links. The
equations obtained from the Newton-Euler method include the constraint forces acting between
adjacent links. Thus, additional arithmetic operations are required to eliminate these terms and
obtain explicit relations between the joint torques and the resultant motion in terms of joint
displacements. In the Lagrangian formulation, on the other hand, the system's dynamic behavior
is described in terms of work and energy using generalized coordinates. This approach is the
extension of the indirect method discussed in the previous chapter to dynamics. Therefore, all the
workless forces and constraint forces are automatically eliminated in this method. The resultant
equations are generally compact and provide a closed-form expression in terms of joint torques
and joint displacements. Furthermore, the derivation is simpler and more systematic than in the
Newton-Euler method.
The robot's equations of motion are basically a description of the relationship between
the input joint torques and the output motion, i.e. the motion of the robot linkage. As in
kinematics and in statics, we need to solve the inverse problem of finding the necessary input
torques to obtain a desired output motion. This inverse dynamics problem is discussed in the last
section of this chapter. Efficient algorithms have been developed that allow the dynamic
computations to be carried out on-line in real time.

7.1 Newton-Euler Formulation of Equations of Motion


7.1.1. Basic Dynamic Equations
In this section we derive the equations of motion for an individual link based on the direct
method, i.e. Newton-Euler Formulation. The motion of a rigid body can be decomposed into the
translational motion with respect to an arbitrary point fixed to the rigid body, and the rotational
motion of the rigid body about that point. The dynamic equations of a rigid body can also be
represented by two equations: one describes the translational motion of the centroid (or center of
mass), while the other describes the rotational motion about the centroid. The former is Newton's
equation of motion for a mass particle, and the latter is called Euler's equation of motion.
We begin by considering the free body diagram of an individual link. Figure 7.1.1 shows
all the forces and moments acting on link i. The figure is the same as Figure 6.1.1, which
describes the static balance of forces, except for the inertial force and moment that arise from the
dynamic motion of the link. Let v ci be the linear velocity of the centroid of link i with reference


to the base coordinate frame O-xyz, which is an inertial reference frame. The inertial force is then
given by −mi v̇ci, where mi is the mass of the link and v̇ci is the time derivative of vci. Based
on D'Alembert's principle, the equation of motion is then obtained by adding the inertial force to
the static balance of forces in eq.(6.1.1) so that

    f_{i-1,i} − f_{i,i+1} + mi g − mi v̇ci = 0,   i = 1, ⋯, n        (7.1.1)

where, as in Chapter 6, f_{i-1,i} and f_{i,i+1} are the coupling forces applied to link i by links i-1 and
i+1, respectively, and g is the acceleration of gravity.

[Free body diagram: link i with centroid Ci and velocity vci, joints i and i+1 at Oi-1 and Oi, coupling forces f_{i-1,i} and f_{i,i+1}, coupling moments N_{i-1,i} and N_{i,i+1}, position vectors r_{i-1,i} and r_{i,Ci}, gravity force mi g, all referred to the base frame O-xyz.]

Figure 7.1.1 Free body diagram of link i in motion
Rotational motions are described by Euler's equations. In the same way as for
translational motions, adding inertial torques to the static balance of moments yields the
dynamic equations. We begin by describing the mass properties of a single rigid body with
respect to rotations about the centroid. The mass properties are represented by an inertia tensor,
or an inertia matrix, which is a 3 x 3 symmetric matrix defined by
        ⎡  ∫{(y−yc)² + (z−zc)²} ρ dV     −∫(x−xc)(y−yc) ρ dV            −∫(z−zc)(x−xc) ρ dV          ⎤
    I = ⎢ −∫(x−xc)(y−yc) ρ dV             ∫{(z−zc)² + (x−xc)²} ρ dV     −∫(y−yc)(z−zc) ρ dV          ⎥        (7.1.2)
        ⎣ −∫(z−zc)(x−xc) ρ dV            −∫(y−yc)(z−zc) ρ dV             ∫{(x−xc)² + (y−yc)²} ρ dV   ⎦

where ρ is the mass density, xc, yc, zc are the coordinates of the centroid of the rigid body, and
each integral is taken over the entire volume V of the rigid body. Note that the inertia matrix
varies with the orientation of the rigid body. Although the inherent mass property of the rigid


body does not change when viewed from a frame fixed to the body, its matrix representation
when viewed from a fixed frame, i.e. inertial reference frame, changes as the body rotates.
The inertial torque acting on link i is given by the time rate of change of the angular
momentum of the link at that instant. Let ωi be the angular velocity vector and Ii be the
centroidal inertia tensor of link i; then the angular momentum is given by Ii ωi. Since the inertia
tensor varies as the orientation of the link changes, the time derivative of the angular momentum
includes not only the angular acceleration term Ii ω̇i, but also a term resulting from changes in the
inertia tensor viewed from a fixed frame. This latter term is known as the gyroscopic torque and
is given by ωi × (Ii ωi). Adding these terms to the static balance of moments yields

    N_{i-1,i} − N_{i,i+1} − (r_{i-1,i} + r_{i,Ci}) × f_{i-1,i} + (−r_{i,Ci}) × (−f_{i,i+1}) − Ii ω̇i − ωi × (Ii ωi) = 0,   i = 1, ⋯, n        (7.1.3)

using the notations of Figure 7.1.1. Equations (1) and (3) govern the dynamic behavior of an
individual link. The complete set of equations for the whole robot is obtained by evaluating both
equations for all the links, i = 1, ⋯, n.
7.1.2. Closed-Form Dynamic Equations
The Newton-Euler equations we have derived are not in an appropriate form for use in dynamic
analysis and control design. They do not explicitly describe the input-output relationship, unlike
the relationships we obtained in the kinematic and static analyses. In this section, we modify the
Newton-Euler equations so that explicit input-output relations can be obtained. The Newton-Euler
equations involve the coupling forces and moments f_{i-1,i} and N_{i-1,i}. As shown in eqs.(6.2.1) and
(6.2.2), the joint torque τi, which is the input to the robot linkage, is included in the coupling force
or moment. However, τi is not explicitly involved in the Newton-Euler equations. Furthermore,
the coupling force and moment also include workless constraint forces, which act internally so
that individual link motions conform to the geometric constraints imposed by the mechanical
structure. To derive explicit input-output dynamic relations, we need to separate the input joint
torques from the constraint forces and moments. The Newton-Euler equations are described in
terms of centroid velocities and accelerations of individual arm links. Individual link motions,
however, are not independent, but are coupled through the linkage. They must satisfy certain
kinematic relationships to conform to the geometric constraints. Thus, individual centroid
position variables are not appropriate for output variables since they are not independent.
The appropriate form of the dynamic equations therefore consists of equations described
in terms of all independent position variables and input forces, i.e., joint torques, that are
explicitly involved in the dynamic equations. Dynamic equations in such an explicit input- output
form are referred to as closed-form dynamic equations. As discussed in the previous chapter, joint
displacements q are a complete and independent set of generalized coordinates that locate the
whole robot mechanism, and joint torques are a set of independent inputs that are separated
from constraint forces and moments. Hence, dynamic equations in terms of joint displacements q
and joint torques are closed-form dynamic equations.
Example 7.1
Figure 7.1.2 shows the two-dof planar manipulator that we discussed in the previous
chapter. Let us obtain the Newton-Euler equations of motion for the two individual links, and
then derive closed-form dynamic equations in terms of joint displacements θ1 and θ2, and joint
torques τ1 and τ2. Since the link mechanism is planar, we represent the velocity of the centroid of


each link by a 2-dimensional vector vi and the angular velocity by a scalar ωi. We
assume that the centroid of link i is located on the center line passing through adjacent joints at a
distance ℓci from joint i, as shown in the figure. The axis of rotation does not vary for the planar
linkage. The inertia tensor in this case is reduced to a scalar moment of inertia denoted by Ii.
From eqs. (1) and (3), the Newton-Euler equations for link 1 are given by

    f_{0,1} − f_{1,2} + m1 g − m1 v̇c1 = 0
    N_{0,1} − N_{1,2} + r_{1,c1} × f_{1,2} − r_{0,c1} × f_{0,1} − I1 ω̇1 = 0        (7.1.4)

Note that all vectors are 2 x 1, so that the moments N_{i-1,i} and the other vector products are scalar
quantities. Similarly, for link 2,

    f_{1,2} + m2 g − m2 v̇c2 = 0
    N_{1,2} − r_{1,c2} × f_{1,2} − I2 ω̇2 = 0        (7.1.5)

[Figure: two-link planar arm with link lengths ℓ1, ℓ2, centroid distances ℓc1, ℓc2, masses m1, m2, moments of inertia I1, I2, joint angles θ1, θ2, joint torques τ1, τ2, coupling force f_{1,2}, and centroid velocities vc1, vc2.]

Figure 7.1.2 Mass properties of two dof planar robot
To obtain closed-form dynamic equations, we first eliminate the constraint forces and separate
them from the joint torques, so as to explicitly involve the joint torques in the dynamic equations.
For the planar manipulator, the joint torques τ1 and τ2 are equal to the coupling moments:

    N_{i-1,i} = τi ,   i = 1, 2        (7.1.6)

Substituting eq.(6) into eq.(5) and eliminating f_{1,2}, we obtain

    τ2 − r_{1,c2} × m2 v̇c2 + r_{1,c2} × m2 g − I2 ω̇2 = 0        (7.1.7)

Similarly, eliminating f_{0,1} yields

    τ1 − τ2 − r_{0,c1} × m1 v̇c1 − r_{0,1} × m2 v̇c2 + r_{0,c1} × m1 g + r_{0,1} × m2 g − I1 ω̇1 = 0        (7.1.8)

Next, we rewrite v̇ci, ω̇i, and r_{i,i+1} using the joint displacements θ1 and θ2, which are independent
variables. Note that ω2 is the angular velocity relative to the base coordinate frame, while θ̇2 is
measured relative to link 1. Then, we have

    ω1 = θ̇1 ,   ω2 = θ̇1 + θ̇2        (7.1.9)

The linear velocities can be written as

    vc1 = ⎡ −ℓc1 θ̇1 sinθ1 ⎤
          ⎣  ℓc1 θ̇1 cosθ1 ⎦
                                                                                (7.1.10)
    vc2 = ⎡ −{ℓ1 sinθ1 + ℓc2 sin(θ1 + θ2)} θ̇1 − ℓc2 sin(θ1 + θ2) θ̇2 ⎤
          ⎣  {ℓ1 cosθ1 + ℓc2 cos(θ1 + θ2)} θ̇1 + ℓc2 cos(θ1 + θ2) θ̇2 ⎦

Substituting eqs. (9) and (10) along with their time derivatives into eqs. (7) and (8), we obtain the
closed-form dynamic equations in terms of θ1 and θ2:

    τ1 = H11 θ̈1 + H12 θ̈2 − h θ̇2² − 2h θ̇1 θ̇2 + G1        (7.1.11-a)
    τ2 = H22 θ̈2 + H21 θ̈1 + h θ̇1² + G2        (7.1.11-b)

where

    H11 = m1 ℓc1² + I1 + m2 (ℓ1² + ℓc2² + 2 ℓ1 ℓc2 cosθ2) + I2        (7.1.12-a)
    H22 = m2 ℓc2² + I2        (7.1.12-b)
    H12 = m2 (ℓc2² + ℓ1 ℓc2 cosθ2) + I2        (7.1.12-c)
    h = m2 ℓ1 ℓc2 sinθ2        (7.1.12-d)
    G1 = m1 ℓc1 g cosθ1 + m2 g {ℓc2 cos(θ1 + θ2) + ℓ1 cosθ1}        (7.1.12-e)
    G2 = m2 g ℓc2 cos(θ1 + θ2)        (7.1.12-f)

The scalar g represents the acceleration of gravity along the negative y-axis.
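As a numerical companion to eqs.(7.1.11)–(7.1.12), the sketch below (added here; all mass and geometric parameter values are assumed) computes the joint torques required to realize given joint positions, velocities, and accelerations, i.e. the inverse dynamics of the two-dof arm.

    import numpy as np

    def inverse_dynamics(th, thd, thdd,
                         m1=1.0, m2=0.8, l1=0.3, lc1=0.15, lc2=0.12,
                         I1=0.01, I2=0.008, g=9.81):
        """Joint torques of the 2-dof planar arm, eqs.(7.1.11)-(7.1.12).
        All parameter values are illustrative assumptions."""
        th1, th2 = th
        H11 = m1*lc1**2 + I1 + m2*(l1**2 + lc2**2 + 2*l1*lc2*np.cos(th2)) + I2
        H22 = m2*lc2**2 + I2
        H12 = m2*(lc2**2 + l1*lc2*np.cos(th2)) + I2
        h   = m2*l1*lc2*np.sin(th2)
        G1  = m1*lc1*g*np.cos(th1) + m2*g*(lc2*np.cos(th1+th2) + l1*np.cos(th1))
        G2  = m2*g*lc2*np.cos(th1+th2)

        tau1 = H11*thdd[0] + H12*thdd[1] - h*thd[1]**2 - 2*h*thd[0]*thd[1] + G1
        tau2 = H22*thdd[1] + H12*thdd[0] + h*thd[0]**2 + G2
        return np.array([tau1, tau2])

    print(inverse_dynamics(th=[0.5, 0.8], thd=[1.0, -0.5], thdd=[2.0, 1.0]))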

More generally, the closed-form dynamic equations of an n-degree-of-freedom robot can
be given in the form

    τi = Σ_{j=1}^n Hij q̈j + Σ_{j=1}^n Σ_{k=1}^n hijk q̇j q̇k + Gi ,   i = 1, ⋯, n        (7.1.13)

where the coefficients Hij, hijk, and Gi are functions of the joint displacements q1, q2, ⋯, qn. When
external forces act on the robot system, the left-hand side must be modified accordingly.
7.1.3. Physical Interpretation of the Dynamic Equations
In this section, we interpret the physical meaning of each term involved in the closed-form
dynamic equations for the two-dof planar robot.
The last term in each of eqs. (11-a, b), Gi, accounts for the effect of gravity. Indeed, the
terms G1 and G2, given by (12-e, f), represent the moments created by the masses m1 and m2 about
their individual joint axes. The moments are dependent upon the arm configuration. When the
arm is fully extended along the x-axis, the gravity moments become maximum.
Next, we investigate the first terms in the dynamic equations. When the second joint is
immobilized, i.e. θ̇2 = 0 and θ̈2 = 0, the first dynamic equation reduces to τ1 = H11 θ̈1, where the
gravity term is neglected. From this expression it follows that the coefficient H11 accounts for the
moment of inertia seen by the first joint when the second joint is immobilized. The coefficient H11
given by eq. (12-a) is interpreted as the total moment of inertia of both links reflected to the first
joint axis. The first two terms, m1 ℓc1² + I1, in eq. (12-a) represent the moment of inertia of link 1
with respect to joint 1, while the other terms are the contribution from link 2. The inertia of the
second link depends upon the distance L between the centroid of link 2 and the first joint axis, as
illustrated in Figure 7.1.3. The distance L is a function of the joint angle θ2 and is given by

    L² = ℓ1² + ℓc2² + 2 ℓ1 ℓc2 cosθ2        (7.1.14)

Using the parallel axes theorem of moment of inertia (Goldstein, 1981), the inertia of link 2 with
respect to joint 1 is m2 L² + I2, which is consistent with the last two terms in eq. (12-a). Note that
the inertia varies with the arm configuration. The inertia is maximum when the arm is fully
extended (θ2 = 0), and minimum when the arm is completely contracted (θ2 = π).

Figure 7.1.3 Varying inertia depending on the arm configuration


Let us now examine the second terms on the right hand side of eq. (11). Consider the
instant when θ̇1 = θ̇2 = 0 and θ̈1 = 0; then the first equation reduces to τ1 = H12 θ̈2, where the
gravity term is again neglected. From this expression it follows that the second term accounts for
the effect of the second link motion upon the first joint. When the second link is accelerated, the
reaction force and torque induced by the second link act upon the first link. This is clear in the
original Newton-Euler equations (4), where the coupling force −f_{1,2} and moment −N_{1,2} from link 2
are involved in the dynamic equation for link 1. The coupling force and moment must be balanced
by a torque τint about the first joint axis, given by

    τint = N_{1,2} + r_{0,1} × f_{1,2}
         = I2 ω̇2 + r_{0,c2} × m2 v̇c2        (7.1.15)
         = {I2 + m2 (ℓc2² + ℓ1 ℓc2 cosθ2)} θ̈2

where N_{1,2} and f_{1,2} are evaluated using eq. (5) for θ̇1 = θ̇2 = 0 and θ̈1 = 0. This agrees with the
second term in eq. (11-a). Thus, the second term accounts for the interaction between the two
joints.
The third terms in eq. (11) are proportional to the square of the joint velocities. We
consider the instant when θ̇2 = 0 and θ̈1 = θ̈2 = 0, as shown in Figure 7.1.4-(a). In this case, a
centrifugal force acts upon the second link. Let fcent be the centrifugal force. Its magnitude is
given by

    fcent = m2 L θ̇1²        (7.1.16)

where L is the distance between the centroid C2 and the first joint O. The centrifugal force acts in
the direction of the position vector r_{0,C2}. This centrifugal force must be balanced by a torque
τcent at the second joint. Using eq.(16), the torque τcent is computed as

    τcent = −r_{1,c2} × fcent = m2 ℓ1 ℓc2 θ̇1² sinθ2        (7.1.17)

This agrees with the third term h θ̇1² in eq. (11-b). Thus we conclude that the third term is caused
by the centrifugal effect on the second joint due to the motion of the first joint. Similarly, rotating
the second joint at a constant velocity causes a torque of −h θ̇2² due to the centrifugal effect upon
the first joint.

[Figure: (a) centrifugal force fcent acting at the centroid C2 of link 2 when joint 1 rotates at θ̇1; (b) Coriolis force fCor on link 2, viewed in the moving frame Ob-xbyb attached to the tip of link 1.]

Figure 7.1.4 Centrifugal (a) and Coriolis (b) effects


Finally we discuss the fourth term of eq. (11-a), which is proportional to the product of
the joint velocities. Consider the instant when the two joints rotate at velocities θ̇1 and θ̇2 at the
same time. Let Ob-xbyb be the coordinate frame attached to the tip of link 1, as shown in Figure
7.1.4-(b). Note that the frame Ob-xbyb is parallel to the base coordinate frame at the instant
shown. However, the frame rotates at the angular velocity θ̇1 together with link 1. The mass
centroid of link 2 moves at a velocity of ℓc2 θ̇2 relative to link 1, i.e. the moving coordinate frame
Ob-xbyb. When a mass particle m moves at a velocity vb relative to a moving coordinate frame
rotating at an angular velocity ω, the mass particle experiences the so-called Coriolis force, given by
−2m (ω × vb). Let fCor be the force acting on link 2 due to the Coriolis effect. The Coriolis force
is given by

    fCor = ⎡ 2 m2 ℓc2 θ̇1 θ̇2 cos(θ1 + θ2) ⎤        (7.1.18)
           ⎣ 2 m2 ℓc2 θ̇1 θ̇2 sin(θ1 + θ2) ⎦

This Coriolis force must be countered by a torque τCor at the first joint, which is given by

    τCor = −r_{0,c2} × fCor = −2 m2 ℓ1 ℓc2 θ̇1 θ̇2 sinθ2        (7.1.19)

The right-hand side of the above equation agrees with the fourth term in eq. (11-a). Since the
Coriolis force given by eq. (18) acts in parallel with link 2, the force does not create a moment
about the second joint in this particular case.
Thus, the dynamic equations of a robot arm are characterized by a configuration-dependent
inertia, gravity torques, and interaction torques caused by the accelerations of the other
joints and the existence of centrifugal and Coriolis effects.

7.2. Lagrangian Formulation of Robot Dynamics


7.2.1. Lagrangian Dynamics
In the Newton-Euler formulation, the equations of motion are derived from Newton's
Second Law, which relates force and momentum, as well as torque and angular momentum. The
resulting equations involve constraint forces, which must be eliminated in order to obtain closed-form dynamic equations. In the Newton-Euler formulation, the equations are not expressed in
terms of independent variables, and do not include input joint torques explicitly. Arithmetic
operations are needed to derive the closed-form dynamic equations. This represents a complex
procedure that requires physical intuition, as discussed in the previous section.
An alternative to the Newton-Euler formulation of manipulator dynamics is the
Lagrangian formulation, which describes the behavior of a dynamic system in terms of work and
energy stored in the system rather than of forces and moments of the individual members
involved. The constraint forces involved in the system are automatically eliminated in the
formulation of Lagrangian dynamic equations. The closed-form dynamic equations can be
derived systematically in any coordinate system.
Let q1, ⋯, qn be generalized coordinates that completely locate a dynamic system. Let T
and U be the total kinetic energy and potential energy stored in the dynamic system. We define
the Lagrangian L by

    L(qi, q̇i) = T(qi, q̇i) − U(qi)        (7.2.1)

Note that the potential energy is a function of the generalized coordinates qi and that the kinetic
energy is a function of the generalized velocities q̇i as well as the generalized coordinates qi. Using the
Lagrangian, the equations of motion of the dynamic system are given by

    d/dt (∂L/∂q̇i) − ∂L/∂qi = Qi ,   i = 1, ⋯, n        (7.2.2)

where Qi is the generalized force corresponding to the generalized coordinate qi. The generalized
forces acting on the system can be identified by considering the virtual work done by the
non-conservative forces.
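As a minimal illustration of eqs.(7.2.1) and (7.2.2) (added here; the single-link example is mine, not from the notes), the symbolic computation below forms L = T − U for one link rotating about a fixed joint, with the angle measured from the horizontal x-axis, and applies Lagrange's equation with the joint torque as the generalized force.

    import sympy as sp

    t = sp.symbols('t')
    m, l, I, g, tau = sp.symbols('m l I g tau')   # mass, centroid distance, inertia, gravity, joint torque
    th = sp.Function('theta')(t)

    # Kinetic and potential energy of a single link rotating about a fixed joint
    T = sp.Rational(1, 2)*(m*l**2 + I)*sp.diff(th, t)**2
    U = m*g*l*sp.sin(th)
    L = T - U

    # Lagrange's equation: d/dt(dL/dthdot) - dL/dth = tau
    thd = sp.diff(th, t)
    eom = sp.diff(sp.diff(L, thd), t) - sp.diff(L, th) - tau
    print(sp.simplify(eom))   # (I + m*l**2)*theta'' + g*l*m*cos(theta) - tau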
7.2.2 Planar Robot Dynamics
Before discussing general robot dynamics in three-dimensional space, we consider the 2
dof planar robot, for which we have derived the equations of motion based on Newton-Euler
Formulation. Figure 7.2.1 shows the same robot mechanism with a few new variables needed for
the Lagrangian Formulation.

Figure 7.2.1 Two dof robot


The total kinetic energy stored in the two links moving at linear velocity v ci and angular
velocity i at the centroids, as shown in the figure, is given by
2
1
1
2
2
T = ( mi v ci + I i i )
2
i =1 2

(7.2.3)

where v ci represents the magnitude of the velocity vector. Note that the linear velocities and the
angular velocities are not independent variables, but are functions of joint angles and joint

Department of Mechanical Engineering

Massachusetts Institute of Technology

Introduction to Robotics, H. Harry Asada

10

angular velocities, i.e. the generalized coordinates and the generalized velocities that locate the
dynamic state of the system uniquely. We need to rewrite the above kinetic energy so that it is
with respect to i and i . The angular velocities are given by

1 = 1 , 2 = 1 + 2

(7.2.4)

The linear velocity of the first link is simply


2
2
2
v c1 = A c1 1

(7.2.5)

However, the centroidal linear velocity of the second link vc2 needs more computation. Treating
the centroid C2 as an endpoint and applying the formula for computing the endpoint velocity yield
the centroidal velocity. Let J c 2 be the 2x2 Jacobian matrix relating the centroidal velocity vector
to joint velocities. Then,

v c 2 = J c 2q = q T J c 2 J c 2q
2

where q = 1

T=

(7.2.6)

2 ) . Substituting eqs.(4~6) to eq.(3) yields


T

1
1
1
H1112 + H1212 + H 2222 = 1 2
2
2
2

) HH
T

11

12

H12 1

H 22 2

(7.2.7)

where coefficients Hij are the same as the ones in eq.(7.1.12).

    H11 = m1 ℓc1² + I1 + m2 (ℓ1² + ℓc2² + 2 ℓ1 ℓc2 cosθ2) + I2 = H11(θ2)        (7.1.12-a)
    H22 = m2 ℓc2² + I2        (7.1.12-b)
    H12 = m2 (ℓc2² + ℓ1 ℓc2 cosθ2) + I2 = H12(θ2)        (7.1.12-c)

Note that the coefficients H11 and H12 are functions of θ2.


The potential energy stored in the two links is given by

    U = m1 g ℓc1 sinθ1 + m2 g {ℓ1 sinθ1 + ℓc2 sin(θ1 + θ2)}        (7.2.8)

Now we are ready to obtain Lagrange's equations of motion by differentiating the above
kinetic energy and potential energy. For the first joint,

    ∂L/∂θ1 = −∂U/∂θ1 = −[m1 ℓc1 g cosθ1 + m2 g {ℓc2 cos(θ1 + θ2) + ℓ1 cosθ1}] = −G1        (7.2.9)

    ∂L/∂θ̇1 = H11 θ̇1 + H12 θ̇2

    d/dt (∂L/∂θ̇1) = H11 θ̈1 + H12 θ̈2 + (∂H11/∂θ2) θ̇2 θ̇1 + (∂H12/∂θ2) θ̇2²        (7.2.10)


Substituting the above two equations into eq.(2) yields the same result as eq.(7.1.11-a). The
equation of motion for the second joint can be obtained in the same manner; it is identical to
eq.(7.1.11-b). Thus, the same equations of motion have been obtained based on the Lagrangian
Formulation. Note that the Lagrangian Formulation is simpler and more systematic than the
Newton-Euler Formulation. To formulate the kinetic energy, velocities must be obtained, but
accelerations are not needed. Remember that the acceleration computation was complex in the
Newton-Euler Formulation, as discussed in the previous section. This acceleration computation is
dealt with automatically in the computation of Lagrange's equations of motion. The difference
between the two methods becomes more significant as the degrees of freedom increase, since many
workless constraint forces and moments are present and the acceleration computation becomes
more complex in the Newton-Euler Formulation.
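The same procedure can be automated symbolically. The sketch below (added; it assumes SymPy and uses my own symbol names) builds T and U of eqs.(7.2.7)–(7.2.8) from the centroid velocities and recovers the inertia coefficients of eq.(7.1.12) by differentiating the Lagrangian.

    import sympy as sp

    t = sp.symbols('t')
    m1, m2, l1, lc1, lc2, I1, I2, g = sp.symbols('m1 m2 l1 lc1 lc2 I1 I2 g')
    th1, th2 = sp.Function('th1')(t), sp.Function('th2')(t)
    th1d, th2d = sp.diff(th1, t), sp.diff(th2, t)

    # Centroid velocities of the two links, eq.(7.1.10)
    vc1 = sp.Matrix([-lc1*sp.sin(th1)*th1d,
                      lc1*sp.cos(th1)*th1d])
    vc2 = sp.Matrix([-(l1*sp.sin(th1) + lc2*sp.sin(th1+th2))*th1d - lc2*sp.sin(th1+th2)*th2d,
                      (l1*sp.cos(th1) + lc2*sp.cos(th1+th2))*th1d + lc2*sp.cos(th1+th2)*th2d])

    # Kinetic and potential energy, eqs.(7.2.3) and (7.2.8)
    T = (m1*vc1.dot(vc1) + I1*th1d**2 + m2*vc2.dot(vc2) + I2*(th1d + th2d)**2) / 2
    U = m1*g*lc1*sp.sin(th1) + m2*g*(l1*sp.sin(th1) + lc2*sp.sin(th1+th2))
    L = T - U

    # Lagrange's equation for the first joint: tau1 = d/dt(dL/dth1d) - dL/dth1
    tau1 = sp.simplify(sp.diff(sp.diff(L, th1d), t) - sp.diff(L, th1))

    # Coefficients of the joint accelerations reproduce H11 and H12 of eq.(7.1.12)
    print(sp.simplify(sp.expand(tau1).coeff(sp.diff(th1, t, 2))))   # H11
    print(sp.simplify(sp.expand(tau1).coeff(sp.diff(th2, t, 2))))   # H12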

7.2.3 Inertia Matrix


In this section we will extend Lagrange's equations of motion obtained for the two-d.o.f.
planar robot to those for a general n-d.o.f. robot. Central to the Lagrangian formulation is the
derivation of the total kinetic energy stored in all of the rigid bodies involved in a robotic system.
Examining the kinetic energy will provide useful physical insights into robot dynamics. Such physical
insights based on the Lagrangian formulation will supplement the ones we have obtained based on
the Newton-Euler formulation.
As seen in eq.(3) for the planar robot, the kinetic energy stored in an individual arm link
consists of two terms; one is the kinetic energy attributed to the translational motion of mass mi
and the other is due to rotation about the centroid. For a general three-dimensional rigid body, this can
be written as

    Ti = ½ mi vciᵀ vci + ½ ωiᵀ Ii ωi ,   i = 1, ⋯, n        (7.2.11)

where ωi and Ii are, respectively, the 3 x 1 angular velocity vector and the 3 x 3 inertia matrix of
the i-th link viewed from the base coordinate frame, i.e. inertial reference. The total kinetic
energy stored in the whole robot linkage is then given by

    T = Σ_{i=1}^n Ti        (7.2.12)

since energy is additive.


The expression for the kinetic energy is written in terms of the velocity and angular
velocity of each link member, which are not independent variables, as mentioned in the previous
section. Let us now rewrite the above equations in terms of an independent and complete set of
generalized coordinates, namely joint coordinates q = [q1, .. ,qn]T. For the planar robot example,
we used the Jacobian matrix relating the centroid velocity to joint velocities for rewriting the
expression. We can use the same method for rewriting the centroidal velocity and angular
velocity for three-dimensional multi-body systems.

    vci = JiL q̇
    ωi = JiA q̇        (7.2.13)

where JiL and JiA are, respectively, the 3 x n Jacobian matrices relating the centroid linear
velocity and the angular velocity of the i-th link to the joint velocities. Note that the linear and
angular velocities of the i-th link are dependent only on the first i joint velocities, and hence the
last n-i columns of these Jacobian matrices are zero vectors. Substituting eq.(13) into eqs.(11) and
(12) yields

    T = ½ Σ_{i=1}^n ( mi q̇ᵀ JiLᵀ JiL q̇ + q̇ᵀ JiAᵀ Ii JiA q̇ ) = ½ q̇ᵀ H q̇        (7.2.14)

where H is an n x n matrix given by

    H = Σ_{i=1}^n ( mi JiLᵀ JiL + JiAᵀ Ii JiA )        (7.2.15)

The matrix H incorporates all the mass properties of the whole robot mechanism, as reflected to
the joint axes, and is referred to as the Multi-Body Inertia Matrix. Note the difference between the
multi-body inertia matrix and the 3 x 3 inertia matrices of the individual links. The former is an
aggregate inertia matrix including the latter as components. The multi-body inertia matrix,
however, has properties similar to those of individual inertia matrices. As shown in eq. (15), the
multi-body inertia matrix is a symmetric matrix, as is the individual inertia matrix defined by eq.
(7.1.2). The quadratic form associated with the multi-body inertia matrix represents kinetic
energy, as does that of an individual inertia matrix. Kinetic energy is always strictly positive unless the
system is at rest. The multi-body inertia matrix of eq. (15) is positive definite, as are the
individual inertia matrices. Note, however, that the multi-body inertia matrix involves Jacobian
matrices, which vary with linkage configuration. Therefore the multi-body inertia matrix is
configuration-dependent and represents the instantaneous composite mass properties of the whole
linkage at the current linkage configuration. To manifest the configuration-dependent nature of
the multi-body inertia matrix, we write it as H(q), a function of joint coordinates q.
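Eq.(7.2.15) lends itself to direct numerical evaluation. The sketch below (added; the two-link planar arm with assumed parameters is used, so the "angular" Jacobians reduce to 1 x 2 row vectors) assembles H(q) from the centroidal Jacobians and confirms that it is symmetric and positive definite.

    import numpy as np

    def inertia_matrix(th, m=(1.0, 0.8), l1=0.3, lc=(0.15, 0.12), I=(0.01, 0.008)):
        """Multi-body inertia matrix H(q) of the planar 2-link arm via eq.(7.2.15).
        Parameter values are illustrative assumptions."""
        th1, th2 = th
        # Centroidal linear Jacobians (2 x 2) of links 1 and 2
        JL1 = np.array([[-lc[0]*np.sin(th1), 0.0],
                        [ lc[0]*np.cos(th1), 0.0]])
        JL2 = np.array([[-l1*np.sin(th1) - lc[1]*np.sin(th1+th2), -lc[1]*np.sin(th1+th2)],
                        [ l1*np.cos(th1) + lc[1]*np.cos(th1+th2),  lc[1]*np.cos(th1+th2)]])
        # Angular Jacobians (1 x 2): omega1 = th1d, omega2 = th1d + th2d
        JA1 = np.array([[1.0, 0.0]])
        JA2 = np.array([[1.0, 1.0]])

        H = (m[0]*JL1.T @ JL1 + I[0]*JA1.T @ JA1
             + m[1]*JL2.T @ JL2 + I[1]*JA2.T @ JA2)
        return H

    H = inertia_matrix([0.5, 0.8])
    print(H)                        # symmetric
    print(np.linalg.eigvalsh(H))    # eigenvalues > 0: positive definite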
Using the components of the multi-body inertia matrix H={Hij}, we can write the total
kinetic energy in scalar quadratic form:

    T = ½ Σ_{i=1}^n Σ_{j=1}^n Hij q̇i q̇j        (7.2.16)

Most of the terms involved in Lagrange's equations of motion can be obtained directly by
differentiating the above kinetic energy. From the first term in eq.(2),

    d/dt (∂T/∂q̇i) = d/dt ( Σ_{j=1}^n Hij q̇j ) = Σ_{j=1}^n Hij q̈j + Σ_{j=1}^n (dHij/dt) q̇j        (7.2.17)

The first term of the last expression, Σ_{j=1}^n Hij q̈j, comprises the diagonal term Hii q̈i as well as the
off-diagonal terms Σ_{j≠i} Hij q̈j, representing the dynamic interactions among the multiple joints due to
accelerations, as discussed in the previous section. It is important to note that a pair of joints, i
and j, have the same coefficient of dynamic interaction, Hij = Hji, since the multi-body inertia
matrix H is symmetric. In vector-matrix form these terms can be written collectively as

           ⎡ H11   ⋯         ⋯   H1n ⎤ ⎡ q̈1 ⎤
    Hq̈ =   ⎢  ⋮     ⋱    Hij       ⋮  ⎥ ⎢  ⋮  ⎥        (7.2.18)
           ⎢  ⋮    Hji    ⋱        ⋮  ⎥ ⎢  ⋮  ⎥
           ⎣ Hn1   ⋯         ⋯   Hnn ⎦ ⎣ q̈n ⎦

It is clear that the interactive inertial torque Hij q̈j caused by the j-th joint acceleration upon the
i-th joint has the same coefficient as that of Hji q̈i caused by joint i upon joint j. This property is
called Maxwell's Reciprocity Relation.
The second term of eq.(17) is non-zero in general, since the multi-body inertia matrix is
configuration-dependent, being a function of the joint coordinates. Applying the chain rule,

    dHij/dt = Σ_{k=1}^n (∂Hij/∂qk)(dqk/dt) = Σ_{k=1}^n (∂Hij/∂qk) q̇k        (7.2.19)

The second term in eq.(2), Lagrange's equation of motion, also yields partial derivatives of Hij.
From eq.(16),

    ∂T/∂qi = ∂/∂qi ( ½ Σ_{j=1}^n Σ_{k=1}^n Hjk q̇j q̇k ) = ½ Σ_{j=1}^n Σ_{k=1}^n (∂Hjk/∂qi) q̇j q̇k        (7.2.20)

Substituting eq.(19) into the second term of eq.(17) and combining the resultant term with
eq.(20), let us write these nonlinear terms as

    hi = Σ_{j=1}^n Σ_{k=1}^n Cijk q̇j q̇k        (7.2.21)

where the coefficients Cijk are given by

    Cijk = ∂Hij/∂qk − ½ ∂Hjk/∂qi        (7.2.22)

This coefficient Cijk is called Christoffel's Three-Index Symbol. Note that eq.(21) is nonlinear,
having products of joint velocities. Eq.(21) can be divided into the terms proportional to squared
joint velocities, i.e. j = k, and the ones for j ≠ k: the former represent centrifugal torques and the
latter Coriolis torques.

    hi = Σ_{j=1}^n Cijj q̇j² + Σ_{j=1}^n Σ_{k≠j} Cijk q̇j q̇k = (Centrifugal) + (Coriolis)        (7.2.23)


These centrifugal and Coriolis terms are present only when the multi-body inertia matrix is
configuration dependent. In other words, the centrifugal and Coriolis torques are interpreted as
nonlinear effects due to the configuration-dependent nature of the multi-body inertia matrix in
Lagrangian formulation.
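The Christoffel symbols of eq.(7.2.22), and hence the centrifugal and Coriolis torques of eq.(7.2.21), can be computed from H(q) alone. The sketch below (added; it approximates ∂H/∂qk by central finite differences and assumes an inertia-matrix function such as the one in the earlier sketch) illustrates this.

    import numpy as np

    def christoffel(H_func, q, eps=1e-6):
        """C[i,j,k] = dH[i,j]/dq[k] - 0.5*dH[j,k]/dq[i], eq.(7.2.22),
        with the partial derivatives taken by central finite differences."""
        q = np.asarray(q, dtype=float)
        n = len(q)
        dH = np.zeros((n, n, n))                  # dH[:, :, k] = dH/dq_k
        for k in range(n):
            dq = np.zeros(n); dq[k] = eps
            dH[:, :, k] = (H_func(q + dq) - H_func(q - dq)) / (2*eps)
        C = np.zeros((n, n, n))
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    C[i, j, k] = dH[i, j, k] - 0.5*dH[j, k, i]
        return C

    def velocity_torques(H_func, q, qd):
        """h_i = sum_j sum_k C_ijk qd_j qd_k, eq.(7.2.21)."""
        C = christoffel(H_func, q)
        return np.einsum('ijk,j,k->i', C, qd, qd)

    # Example (assumes the inertia_matrix function from the earlier sketch):
    # print(velocity_torques(inertia_matrix, [0.5, 0.8], [1.0, -0.5]))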

7.2.4 Generalized Forces


Forces acting on a system of rigid bodies can be represented as conservative forces and
non-conservative forces. The former are given by partial derivatives of the potential energy U in
Lagrange's equations of motion. If gravity is the only conservative force, the total potential
energy stored in the n links is given by

    U = − Σ_{i=1}^n mi gᵀ r_{0,ci}        (7.2.24)

where r_{0,ci} is the position vector of the centroid Ci, which is dependent on the joint coordinates.
Substituting this potential energy into Lagrange's equations of motion yields the following
gravity torque seen by the i-th joint:

    Gi = ∂U/∂qi = − Σ_{j=1}^n mj gᵀ (∂r_{0,cj}/∂qi) = − Σ_{j=1}^n mj gᵀ J^L_{j,i}        (7.2.25)

where J^L_{j,i} is the i-th column vector (a 3 x 1 vector) of the Jacobian matrix J^L_j relating the linear centroid
velocity of the j-th link to the joint velocities.
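Eq.(7.2.25) also maps directly into code. The sketch below (added; it reuses the two-link planar arm with assumed parameters and a gravity vector along the negative y-axis) evaluates the gravity torque vector and reproduces G1 and G2 of eq.(7.1.12-e, f).

    import numpy as np

    def gravity_torques(th, m=(1.0, 0.8), l1=0.3, lc=(0.15, 0.12), g=9.81):
        """G_i = -sum_j m_j g^T J^L_{j,i}, eq.(7.2.25), for the planar 2-link arm.
        The gravity vector points along the negative y-axis; values are assumed."""
        th1, th2 = th
        gvec = np.array([0.0, -g])
        JL1 = np.array([[-lc[0]*np.sin(th1), 0.0],
                        [ lc[0]*np.cos(th1), 0.0]])
        JL2 = np.array([[-l1*np.sin(th1) - lc[1]*np.sin(th1+th2), -lc[1]*np.sin(th1+th2)],
                        [ l1*np.cos(th1) + lc[1]*np.cos(th1+th2),  lc[1]*np.cos(th1+th2)]])
        return -(m[0]*JL1.T @ gvec + m[1]*JL2.T @ gvec)

    print(gravity_torques([0.5, 0.8]))   # matches G1, G2 of eq.(7.1.12-e, f)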


Non-conservative forces acting on the robot mechanism are represented by generalized
forces Qi in Lagrangian formulation. Let Work be virtual work done by all the non-conservative
forces acting on the system. Generalized forces Qi associated with generalized coordinates qi, e.g.
joint coordinates, are defined by
n

Work = Qiqi

(7.2.26)

i =1

If the virtual work is given by the inner product of joint torques and virtual joint displacements,
1q1 +" + nqn , the joint torque itself is the generalized force corresponding to the joint
coordinate. However, generalized forces are often different from joint torques. Care must be
taken for finding correct generalized forces. Let us work out the following example.

Example 7.2
Consider the same 2-d.o.f. planar robot as in Example 7.1. Instead of using joint angles θ1
and θ2 as generalized coordinates, let us use the absolute angles, φ1 and φ2, measured from the
positive x-axis. See the figure below. Changing generalized coordinates entails changes to the
generalized forces. Let us find the generalized forces for the new coordinates.


Figure 7.2.2 Absolute joint angles φ1 and φ2 and disjointed links


As shown in the figure, joint torque 2 acts on the second link, whose virtual

displacement is 2 , while joint torque 1 and the reaction torque 2 act on the first link for

virtual displacement 1 . Therefore the virtual work is

Work = ( 1 2 )1 + 22

(7.2.27)

Comparing this equation with eq.(26) where generalized coordinates are 1 = q1 , 2 = q2 , we can
conclude that the generalized forces are:

Q1 = 1 2 , Q2 = 2

(7.2.28)

The two sets of generalized coordinates 1 and 2 vs. 1 and 2 are related as

1 = 1 , 2 = 1 + 2

(7.2.29)

Substituting eq.(29) into eq.(27) yields

Work = ( 1 2 )1 + 2 (1 + 2 ) = 11 + 22

(7.2.30)

This confirms that the generalized forces associated with the original generalized coordinates, i.e.
joint coordinates, are 1 and 2 .
Non-conservative forces acting on a robot mechanism include not only these joint torques
but also any other external force Fext. If an external force acts at the endpoint, the generalized
forces Q = (Q1, ⋯, Qn)ᵀ associated with the generalized coordinates q are, in vector form, given by

    δWork = τᵀ δq + Fextᵀ δp = (τ + Jᵀ Fext)ᵀ δq = Qᵀ δq
    ∴ Q = τ + Jᵀ Fext        (7.2.31)


When the external force acts at position r, the above Jacobian must be replaced by

    Jr = ∂r/∂q        (7.2.32)

Note that, since generalized coordinates q can uniquely locate the system, the position vector r
must be written as a function of q alone.


Chapter 9
Force and Compliance Controls
A class of simple tasks may need only trajectory control where the robot end-effecter is
moved merely along a prescribed time trajectory. However, a number of complex tasks, including
assembly of parts, manipulation of tools, and walking on a terrain, entail the control of physical
interactions and mechanical contacts with the environment. Achieving a task goal often requires
the robot to comply with the environment, react to the force acting on the end-effecter, or adapt
its motion to uncertainties of the environment. Strategies are needed for performing those tasks.
Force and compliance controls are fundamental task strategies for performing a class of
tasks entailing the accommodation of mechanical interactions in the face of environmental
uncertainties. In this chapter we will first present hybrid position/force control: a basic principle
of strategic task planning for dealing with geometric constraints imposed by the task environment.
An alternative approach to accommodating interactions will also be presented based on
compliance or stiffness control. Characteristics of task compliances and force feedback laws will
be analyzed and applied to various tasks.

9.1 Hybrid Position/Force Control


9.1.1 Principle
To begin with let us consider a daily task. Figure 9.1.1 illustrates a robot drawing a line
with a pencil on a sheet of paper. Although we humans can perform this type of task without
considering any detail of hand control, the robot needs specific control commands and an
effective control strategy. To draw a letter, A, for example, we first conceive a trajectory of the
pencil tip, and command the hand to follow the conceived trajectory. At the same time we
accommodate the pressure with which the pencil is contacting the sheet of paper. Let o-xyz be a
coordinate system with the z-axis perpendicular to the sheet of paper. Along the x and y axes, we
provide positional commands to the hand control system. Along the z-axis, on the other hand, we
specify a force to apply. In other words, controlled variables are different between the horizontal
and vertical directions. The controlled variable of the former is x and y coordinates, i.e. a position,
while the latter controlled variable is a force in the z direction. Namely, two types of control loops
are combined in the hand control system, as illustrated in Figure 9.1.2.

Figure 9.1.1 Robot drawing a line with a pencil on a sheet of paper


[Block diagram: a position reference drives a position control loop and a force reference drives a force control loop; both act on the robot to produce the controlled variables.]

Figure 9.1.2 Position and force control loops


The above example is one of the simplest tasks illustrating the need for integrating
different control loops in such a way that the control mode is consistent with the geometric
constraint imposed on the robot system. As the geometric constraint becomes more complex and
the task objective is more involved, an intuitive method may not suffice. In the following we will
obtain a general principle that will help us find proper control modes consistent with both
geometric constraints and task objectives. Let us consider the following six-dimensional task to
derive a basic principle behind our heuristics and empiricism.
Example 9.1
Shown below is a task to pull up a peg from a hole. We assume that the peg can move in
the vertical direction without friction when sliding in the hole. We also assume that the task
process is quasi-static in that any inertial force is negligibly small. A coordinate system O-xyz,
referred to as C-frame, is attached to the task space, as shown in the figure. The problem is to find
a proper control mode for each of the axes: three translational and three rotational axes.

Figure 9.1.3 Pulling up a peg from a hole


The key question is how to assign a control mode, position control or force control, to
each of the axes in the C-frame in such a way that the control action may not conflict with the
geometric constraints and physics. M. Mason addressed this issue in his seminal work on hybrid
position/force control. He called conditions dictated by physics Natural Constraints, and
conditions determined by task goals and objectives Artificial Constraints. Table 9.1.1 summarizes
these conditions.
From Figure 9.1.3 it is clear that the peg cannot be moved in the x and y directions due to
the geometric constraint. Therefore, the velocities in these directions must be zero:
vx = 0, vy = 0. Likewise, the peg cannot be rotated about the x and y axes. Therefore, the angular
velocities are zero: ωx = 0, ωy = 0. These conditions constitute the natural constraints in the
kinematic domain. The remaining directions are the linear and angular z axes. Velocities along these
two directions can be assigned arbitrarily, and may be controlled with position control mode. The
reference inputs to these position control loops must be determined such that the task objectives
may be satisfied. Since the task is to pull up the peg, an upward linear velocity must be given:
vz = V > 0. The orientation of the peg about the z-axis, on the other hand, doesn't have to be
changed. Therefore, the angular velocity remains zero: ωz = 0. These reference inputs constitute
the artificial constraints in the kinematic domain.
Table 9.1.1 Natural and artificial constraints of the peg-in-the-hole problem

                              Kinematic                  Static
    Natural Constraints       vx = 0,  vy = 0            fz = 0
                              ωx = 0,  ωy = 0            τz = 0
    Artificial Constraints    vz = V > 0,  ωz = 0        fx = 0,  fy = 0
                                                         τx = 0,  τy = 0
In the statics domain, forces and torques are specified in such a way that the quasi-static
condition is satisfied. This means that the peg motion must not be accelerated by any
unbalanced force, i.e. a non-zero inertial force. Since we have assumed that the process is
frictionless, no resistive force acts on the peg in the directions that are not constrained by geometry.
Therefore, the linear force in the z direction must be zero: fz = 0. The rotation about the z axis,
too, is not constrained. Therefore, the torque about the z axis must be zero: τz = 0. These
conditions are dictated by physics, and are called the natural constraints in the statics domain. The
remaining directions are geometrically constrained. In these directions, forces and torques can be
assigned arbitrarily, and may be controlled with force control mode. The reference inputs to these
control loops must be determined so as to meet the task objectives. In this task, it is not required
to push the peg against the wall of the hole, nor to twist it. Therefore, the reference inputs are set to
zero: fx = 0, fy = 0, τx = 0, τy = 0. These constitute the artificial constraints in the statics
domain.
In the above example, it is clear that the axes involved in the natural constraints and the
artificial constraints are orthogonal to each other in both kinematic and static domains. Moreover,
the axes involved in the natural kinematic constraints and the artificial static constraints are the
same, and the ones listed in the natural static constraints and the artificial kinematic constraints
are the same. These relationships are rather obvious in the above example where the direction of
each C-frame axis is aligned with the direction along which each control mode, position or force,
is assigned. If such a C-frame exists, these orthogonality properties are simply the consequence of
the following assumptions and rule:

- Each C-frame axis must have only one control mode, either position or force,
- The process is quasi-static and frictionless, and
- The robot motion must conform to the geometric constraints.

In general, the axes of a C-frame are not necessarily the same as the direction of a separate
control mode. Nevertheless, the orthogonality properties hold in general. We prove this next.
Let V⁶ be a six-dimensional vector space, and let Va ⊂ V⁶ be the admissible motion space,
that is, the entire collection of admissible motions conforming to the geometric constraints
involved in a given task. Let Vc be the constraint space, the orthogonal complement of the
admissible motion space:

    Vc = Va^⊥        (9.1.1)

Let F ∈ V⁶ be a six-dimensional endpoint force acting on the end-effecter, and δp ∈ V⁶ be an
infinitesimal displacement of the end-effecter. The work done on the end-effecter is given by

    δWork = Fᵀ δp        (9.1.2)

Decomposing each vector into its component in the admissible motion space and its component in
the constraint space,

    F = Fa + Fc ;   Fa ∈ Va ,  Fc ∈ Vc
    δp = δpa + δpc ;   δpa ∈ Va ,  δpc ∈ Vc        (9.1.3)

and substituting them into eq.(2) yield

    δWork = (Fa + Fc)ᵀ (δpa + δpc) = Faᵀ δpa + Faᵀ δpc + Fcᵀ δpa + Fcᵀ δpc
          = Faᵀ δpa + Fcᵀ δpc        (9.1.4)

since Fa ⊥ δpc and Fc ⊥ δpa by definition. For the infinitesimal displacement δp to be a virtual
displacement, its component in the constraint space must be zero: δpc = 0. Then
δpa = δp becomes a virtual displacement, and eq.(4) reduces to the virtual work. Since the system
is in static equilibrium, the virtual work must vanish for all virtual displacements δpa:

    δWork = Faᵀ δpa = 0 ,   ∀ δpa        (9.1.5)

This implies that any force and moment in the admissible motion space must be zero, i.e. the
natural static constraints:

    0 = Fa ∈ Va        (9.1.6)

Furthermore, the properties of the artificial static constraints can be derived from eqs.(4) and (5).
Since δpc = 0 in eq.(4), the static equilibrium condition holds although Fc ∈ Vc takes an
arbitrary value. This implies that to meet a task goal we can assign arbitrary values to the force
and moment in the constraint space, i.e. the artificial static constraints:

    arbitrary : Fc ∈ Vc        (9.1.7)

Converting infinitesimal displacements to velocities, ṗa and ṗc, we can obtain the natural and
artificial kinematic constraints:

    arbitrary : ṗa ∈ Va ,   0 = ṗc ∈ Vc        (9.1.8)

Table 9.1.2 summarizes the above results.


Table 9.1.2 Mason's Principle of Hybrid Position/Force Control

                              Kinematic                  Static
    Natural Constraints       0 = ṗc ∈ Vc                0 = Fa ∈ Va
    Artificial Constraints    arbitrary: ṗa ∈ Va         arbitrary: Fc ∈ Vc

The reader will appreciate Mason's Principle when considering the following exercise
problem, in which the partition between the admissible motion space and the constraint space cannot
be described by a simple division of C-frame axes. Rather, the admissible motion space lies along an
axis where a translational axis and a rotational axis are coupled.
Exercise 9.2 (The same as PS)
9.1.2 Architecture of Hybrid Position/Force Control System
Based on Mason's Principle, a hybrid position/force control system can be constructed in
such a way that the robot control system does not conflict with the natural constraints of
the task process, while performing the task towards the task goal. Figure 9.1.4 shows the block
diagram of a hybrid position/force control system. The upper half of the diagram is the position
control loops, where artificial kinematic constraints are provided as reference inputs to the system
and are compared with the actual position of the end-effecter. The lower half of the diagram is the
force control loops, where artificial static constraints are provided as reference inputs to the


feedback loops and are compared with the actual force and moment at the end-effecter. Note that
the feedback signals are described in an appropriate C-frame attached to the end-effecter.
If the feedback signals are noise-less and the C-frame is perfectly aligned with the actual
task process, the position signal of the feedback loop must lie in the admissible motion space, and
the force being fed back must lie in the constraint space. However, the feedback signals are in
general corrupted with sensor noise, and the C-frame may be misaligned. Therefore, the position
signal may contain some component in the constraint space, and some fraction of the force signal
may be in the admissible motion space. These components contradict the natural constraints, and
therefore should not be fed back to the individual position and force controls. To filter out the
contradicting components, the feedback errors are projected onto their own subspaces, i.e. the
positional error ep is mapped to the admissible motion space Va and the force feedback error
ef is mapped to the constraint space Vc. In the block diagram these filters are shown by the projection
matrices Pa and Pc:

    e*p = Pa ep ,   e*f = Pc ef        (9.1.9)

When the C-frame axes are aligned with the directions of the individual position and force control
loops, the projection matrices are diagonal, consisting of only 1's and 0's in the diagonal components.
In the case of the peg-in-the-hole problem, they are:

    Pa = diag(1 1 0 1 1 0) ,   Pc = diag(0 0 1 0 0 1)        (9.1.10)

In the case of Example 9.2, where the C-frame axes are not the directions of the individual position
and force control loops, the projection matrices are not diagonal.

[Figure 9.1.4 Block diagram of the hybrid position/force control system. Position reference
inputs are compared with the position feedback; the error is projected by P_a, converted to joint
coordinates through J^{-1}, and sent to a position/velocity control compensator. Force reference
inputs are compared with the force feedback; the error is projected by P_c, converted to joint
torques through J^T, and sent to a force/torque control compensator. Both compensator outputs
drive the robot interacting with the task environment.]


These projected feedback errors, e_p* and e_f*, are expressed in the C-frame; hence they must be
converted to the joint space in order to generate control commands to the actuators. Assuming that
the position error vector is small and that the robot is not at a singular configuration, the position
feedback error in joint coordinates is given by

	e_q = J^{-1} e_p*		(9.1.11)

where J is the Jacobian relating the end-effecter velocities in the C-frame to the joint velocities.
The force feedback error in joint coordinates, on the other hand, is obtained from the duality
principle:

	e_τ = J^T e_f*		(9.1.12)

These two error signals in the joint coordinates are combined after going through dynamic
compensation in the individual joint controls.
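To make the data flow concrete, the following is a minimal MATLAB sketch of one cycle of the error-projection stage described above, using the peg-in-the-hole projection matrices of eq.(9.1.10). The Jacobian, references, measurements, and compensator gains are illustrative placeholders, not values from the text.

% Minimal sketch of the error-projection stage of hybrid position/force control.
% All numerical values below are illustrative placeholders, not from the text.
J  = eye(6);                        % C-frame Jacobian (placeholder)
p_ref = [0.1 0.2 0 0 0 0]';  p_meas = zeros(6,1);   % position reference / measurement
F_ref = [0 0 5 0 0 0]';      F_meas = zeros(6,1);   % force reference / measurement
Kp_gain = 100;  Kf_gain = 1;        % placeholder compensator gains

Pa = diag([1 1 0 1 1 0]);           % admissible-motion-space projection, eq.(9.1.10)
Pc = diag([0 0 1 0 0 1]);           % constraint-space projection

e_p_star = Pa * (p_ref - p_meas);   % filtered position error, eq.(9.1.9)
e_f_star = Pc * (F_ref - F_meas);   % filtered force error

e_q   = J \ e_p_star;               % position error in joint coordinates, eq.(9.1.11)
e_tau = J' * e_f_star;              % force error mapped to joint torques, eq.(9.1.12)
tau_cmd = Kp_gain*e_q + Kf_gain*e_tau;   % combined joint command (placeholder compensation)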

9.2 Compliance Control


9.2.1 Task strategy
Use of both position and force information is a unique feature in the control of robots
physically interacting with the environment. In hybrid position/force control, separation was
made explicitly between position and force control loops through projections of feedback signals
onto admissible motion space and constraint space. An alternative to this space separation
architecture is to control a relationship between position and force in the task space. Compliance
Control is a basic control law relating the displacement of the end-effecter to the force and
moment acting on it. Rather than totally separating the task space into subspaces of either position
or force control, compliance control reacts to the endpoint force such that a given functional
relationship, typically a linear map, is held between the force and the displacement. Namely, the
functional relationship to be generated is

	Δp = C F		(9.2.1)

where C is an m × m compliance matrix, and Δp and F are the endpoint displacement and force
represented in an m-dimensional task coordinate system. Note that the inverse of the compliance
matrix, if it exists, is a stiffness matrix:

	K = C^{-1}		(9.2.2)


	The components of the compliance matrix, or the stiffness matrix, are design parameters
to be determined so as to meet task objectives and constraints. Opening a door, for example, can
be performed with the compliance illustrated in Figure 9.2.1. The trajectory of the doorknob is
geometrically constrained to the circle of radius R centered at the door hinge. The robot hand
motion must comply with the constrained doorknob trajectory, even though the trajectory is not
exactly known; the robot must not break the doorknob although the presumed trajectory differs
from the actual one. This task requirement can be met by assigning a small stiffness, i.e. a high
compliance, to the radial direction perpendicular to the trajectory. As illustrated in the figure,
such a small spring constant generates only a small restoring force in response to the discrepancy
between the actual doorknob trajectory and the reference trajectory of the robot hand. Along the
direction tangent to the doorknob trajectory, on the other hand, a large stiffness, or a small
compliance, is assigned. This forces the doorknob to move along the trajectory despite friction
and other resistive forces. The stiffness matrix is therefore given by

	K = [ k_x   0
	       0   k_y ] ,	k_x ≪ 1,  k_y ≫ 1		(9.2.3)

with reference to the task coordinate system O-xy. Holding the doorknob with this stiffness, the
robot can open the door smoothly and dexterously, although the exact trajectory of the
doorknob is not known.

[Figure 9.2.1 Door opening with compliance control. The task frame O-xy is attached at the
doorknob with the x axis pointing radially (small stiffness k_x ≪ 1) and the y axis tangent to the
doorknob trajectory of radius R about the hinge (large stiffness k_y ≫ 1).]
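As a quick numerical illustration of eqs.(9.2.1)-(9.2.3), the following MATLAB fragment is a minimal sketch of the restoring force a compliant hand would generate when its presumed doorknob position is off radially but on track tangentially; the stiffness values and displacement error are made-up numbers.

% Illustrative sketch of the door-opening compliance law, eqs.(9.2.1)-(9.2.3).
% Stiffness values and the displacement error are made-up numbers.
kx = 0.01;   ky = 100;             % soft radial, stiff tangential (k_x << 1, k_y >> 1)
K  = diag([kx ky]);                % stiffness matrix in the task frame O-xy, eq.(9.2.3)
C  = inv(K);                       % equivalent compliance matrix, eq.(9.2.2)

dp = [0.02; 0.001];                % error: 2 cm radial mismatch, 1 mm tangential lag
F  = K * dp;                       % restoring force, F = K*dp (inverse of eq.(9.2.1))
% F(1) stays tiny (the radial force remains gentle), while F(2) dominates,
% driving the knob along its circular arc despite friction.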


9.2.2 Compliance control synthesis
	Now that a desired compliance has been specified, let us consider how to generate it.
There are multiple ways of synthesizing a compliance control system. The simplest method is to
tune the proportional gains of the joint feedback controls so that the desired restoring forces are
generated in proportion to the discrepancies between the actual and reference joint angles. As
shown in Figure 9.2.2, a feedback control error e_i is generated when a disturbance force or
torque acts on the joint. At steady state a static balance is reached, as an actuator torque τ_i
proportional to the control error e_i cancels out the disturbance torque. The proportionality
constant is determined by the position feedback gain k_i, when friction is neglected. Therefore a
desired stiffness or compliance can be obtained by tuning the position feedback gain.
	Compliance synthesis is trivial for single-joint control systems. For general n degree-of-freedom
robots, however, multiple feedback loops must be coordinated. We now consider how to
generate a desired m × m compliance or stiffness matrix specified at the endpoint by tuning the
joint feedback gains.


[Figure 9.2.2 Single joint position feedback control system. The reference q_ri(t) is compared
with the measured joint angle q_i; the error e_i drives a PD controller and the actuator, while a
disturbance force/torque acts on the robot mechanism.]

Theorem

	Let J be the Jacobian relating the endpoint velocity ṗ ∈ R^{m×1} to the joint velocities
q̇ ∈ R^{n×1}, and let τ ∈ R^{n×1} be the joint torques associated with the joint coordinates q. Let
Δp ∈ R^{m×1} be the endpoint displacement measured from a nominal position, and F ∈ R^{m×1} be
the endpoint force associated with the endpoint displacement Δp. Let K_p be a desired endpoint
stiffness matrix defined by

	F = K_p Δp		(9.2.4)

The necessary condition for the joint feedback gain matrix K_q to generate the endpoint stiffness
K_p is

	K_q = J^T K_p J		(9.2.5)

assuming no friction at the joints and linkage mechanisms.


Proof
	Using the Jacobian and the duality principle together with eq.(4),

	τ = J^T F = J^T K_p Δp = J^T K_p J Δq		(9.2.6)

Using eq.(5), the above relationship reduces to

	τ = K_q Δq		(9.2.7)

This implies that K_q is the joint feedback gain matrix that realizes the endpoint stiffness K_p.


Example 9.2.1 Consider a two-link, planar robot arm with absolute joint angles and joint torques,
as shown in Figure 9.2.3. Obtain the joint feedback gain matrix producing the endpoint stiffness
K_p:

	K_p = [ k_1   0
	         0   k_2 ]		(9.2.8)

Assuming that the link length is 1 for both links, the Jacobian is given by

	J = [ -s_1  -s_2
	       c_1   c_2 ]		(9.2.9)

where s_i = sin θ_i and c_i = cos θ_i. From eq.(5),

	K_q = J^T K_p J = [ k_q1  k_q3
	                    k_q3  k_q2 ]		(9.2.10)

where

	k_q1 = k_1 s_1^2 + k_2 c_1^2
	k_q2 = k_1 s_2^2 + k_2 c_2^2		(9.2.11)
	k_q3 = k_1 s_1 s_2 + k_2 c_1 c_2

Note that the joint feedback gain matrix K_q is symmetric and that K_q degenerates when the
robot is at a singular configuration. If it is non-singular, then

	Δp = J Δq = J K_q^{-1} τ = J K_q^{-1} J^T F = J (J^T K_p J)^{-1} J^T F = K_p^{-1} F = C F		(9.2.12)

Therefore, the obtained joint feedback gain provides the desired endpoint stiffness given by eq.(8),
or the equivalent compliance.

Figure 9.2.3 Two link robot
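As a quick numerical check of eqs.(9.2.5) and (9.2.12), the following MATLAB fragment builds K_q = J^T K_p J at an arbitrary non-singular configuration and verifies that the resulting endpoint compliance J K_q^{-1} J^T equals K_p^{-1}. The stiffness values and joint angles are made-up illustrative numbers, under the example's assumptions of unit link lengths and absolute joint angles.

% Numerical check of the endpoint-stiffness theorem for the two-link arm of Example 9.2.1.
% Joint angles and stiffness values below are arbitrary illustrative numbers.
k1 = 100; k2 = 400;  Kp = diag([k1 k2]);   % desired endpoint stiffness, eq.(9.2.8)
th1 = 30*pi/180;  th2 = 100*pi/180;        % absolute joint angles (non-singular)

J  = [-sin(th1) -sin(th2);                 % Jacobian for unit link lengths, eq.(9.2.9)
       cos(th1)  cos(th2)];

Kq = J' * Kp * J;                          % joint feedback gain matrix, eq.(9.2.5)
C_endpoint = J * (Kq \ J');                % resulting endpoint compliance, eq.(9.2.12)
err = norm(C_endpoint - inv(Kp));          % should be ~0 up to round-off
disp(err)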


Massachusetts Institute of Technology


Department of Mechanical Engineering

2.12 Introduction to Robotics

Problem Set No.1


Out: September 14, 2005

Due: September 21, 2005

Problem 1
	The torque-speed characteristics of a DC motor have been determined, as shown
in the figure below. Note that u is the voltage applied to the motor armature, and τ_m and
ω_m are, respectively, the output torque and angular velocity of the motor shaft. From this
plot,
(a) Obtain the motor constant, torque constant, and armature resistance of the motor.
(b) When the motor is rotating at 100 radians per second, what is the back-emf
voltage induced at the armature?
(c) When the motor is producing 2×10⁻³ Nm of torque, how much power is dissipated
as heat in the motor windings?

[Figure 1 Torque-speed characteristics of a DC motor: two straight lines for armature voltages
u = 24 volts and u = 20 volts; the torque axis (τ_m) is marked at 4×10⁻³ Nm and the speed axis
(ω_m) at 160 rad/s and 200 rad/s.]


Problem 2
The DC motor in Problem 1 is now used for driving a single axis robot arm with a
gear reducer. See the figure below. The inertia of the motor rotor including the shaft and
the pinion gear is Im = 1.0 x 10-4 kgm2 , while the arm inertia about the joint axis
including the joint axis and the large gear is Ia= 1.6 x 10-1 kgm2. Obtain the gear ratio that
maximizes the angular acceleration of the joint axis, when the angular velocity is almost
zero. Ignore gravity and viscous damping. Also obtain the maximum angular acceleration
when the maximum armature voltage is 30 volts.

[Figure 2 Single-axis robot arm driven by a DC motor through gearing: motor rotor inertia
I_m = 1.0×10⁻⁴ kg m², arm inertia about the joint axis I_a = 1.6×10⁻¹ kg m².]


Problem 3
	Consider a single-axis robot control system with an optical shaft encoder
measuring both the position and velocity of the joint axis. The transfer function from the input
armature voltage u to the joint angle θ is given by

	G(s) = θ(s)/u(s) = 1 / [ s (a_1 s + a_2) ]

where the parameters are a_1 = 0.16 and a_2 = 0.53. Answer the following questions.
(a) Consider the position and velocity feedback control, as shown in the first figure
below. Determine the position feedback gain k_p and the velocity feedback gain k_v
so that the 0-100% rise time is 0.4 sec and the maximum overshoot is 5%. (A
gain-checking sketch is given after Figure 3.)
(b) Consider the integral control along with the position and velocity feedbacks, as
shown in the second figure below. Sketch a root locus for appropriate values of
the velocity feedback gain kv and the integral gain kI , and discuss stability,
settling time, and steady-state error. [I believe that you have learned these in
2.004. If not, please let me know.]

[Figure 3 Block diagrams: (top) position feedback with gain k_p around an inner velocity
feedback loop k_v s and the plant 1/(s(a_1 s + a_2)); (bottom) the same loop with the position
gain replaced by the PI term k_p(1 + k_I/s).]
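For part (a), one way to sanity-check a candidate gain pair is to form the closed-loop transfer function and inspect its step response numerically. The MATLAB sketch below (Control System Toolbox) does this for placeholder gains; it assumes the inner velocity loop k_v s closes around the plant before the outer unity position loop, and it is not a worked solution.

% Sketch: checking candidate gains for the position/velocity loop of Problem 3(a).
% kp and kv below are placeholder values, not the answer.
a1 = 0.16;  a2 = 0.53;
kp = 10;  kv = 1;                          % candidate gains (placeholders)

P     = tf(1, [a1 a2 0]);                  % plant 1/(s(a1*s + a2))
inner = feedback(P, tf([kv 0], 1));        % inner velocity-feedback loop (kv*s)
T     = feedback(kp*inner, 1);             % outer unity position loop

S = stepinfo(T);                           % note: RiseTime here is 10-90%, not 0-100%
fprintf('rise time %.3f s, overshoot %.1f %%\n', S.RiseTime, S.Overshoot);
step(T)                                    % visual check of the response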

Problem 4
A DC motor connected to a switching power transistor is shown in Figure 4-a. Answer
the following questions.
a). The torque constant of the motor is 5.0×10⁻² Nm/A. What is the voltage across the motor
armature as the motor rotates at 100 rad/s with a zero torque load?
b). Figure 4-b shows profiles of the transistor voltage and current, Vce and Ic, when the transistor
turns on and off. The maximum voltage is 10 volts, and the maximum current is 0.5 A. It takes
5 μs for the transistor to turn on, and 10 μs to turn off. During the turn-on and turn-off transition
periods both voltage and current vary linearly, as shown in the figure. How much heat (cal) is
generated at the transistor in one second when this transistor is used for a uni-polar PWM
amplifier with a 10 kHz PWM frequency? Note that the mechanical equivalent of heat is 4.2
joule/cal.

[Figure 4-a DC motor connected to a switching transistor.]

[Figure 4-b Switching characteristics of the transistor: voltage Vce and current Ic vary linearly
during the 5 μs turn-on and 10 μs turn-off transitions; the 100 μs PWM period is indicated on
the time axis.]

Issued: September 26, 2005


Due: Wednesday, October 5, 2005
2.12 Introduction to Robotics
Problem Set 2: Robot Programming
The goal of this problem set and the associated lab sessions on Thursday, September 28th
and Friday, September 29th, is for you to develop a mobile robot motion planning and control
algorithm for a simulated demining robot.
1. First, using matlab, write an algorithm to generate a series of waypoints that will cover a 5
by 5 meter area (the box.txt environment in simple sim, our demining robot simulator)
using each of three different strategies: (a) back-and-forth ("mowing the lawn") motions,
(b) spiraling, and (c) random motions. Your program should write the waypoints out to
a file, in the form of (x, y) coordinates in two columns. Read in the data from the file
and generate a plot of the waypoints. (Don't specify waypoints too close to one another;
a separation of at least 0.5 meters, for example, is desirable.) A minimal sketch of
strategy (a) is given after this list.
2. Next, write a C program to generate the same waypoint files that you generated using matlab
in part 1.
3. Next, run simple sim on an athena linux workstation (we'll show you how in lab). First,
teach yourself to manually steer the robot around a simple environment with four mines.
Perform two or three runs where you create a data log file for manual control to activate
all the mines, and load and plot the data to reconstruct the trajectory that you manually
executed. Details of the data logging format will be provided in lab. Plot both the true
(x, y) trajectory based on the simulated robot state (columns 2 and 3 of the data file),
and the dead-reckoned robot trajectory computed for you by the simulator by integrating
the encoders on the robot wheels. Compare the two trajectories. Try to do this for two
different motion control strategies (e.g., conservative vs. aggressive velocity control). How
quickly can you reach all the mines?
4. As discussed in class, simple sim comes with a prebuilt trajectory controller that can read in a
set of waypoints and then systematically perform trajectory control to try to reach each of
the waypoints in turn (assuming no obstacles!). Run simple sim in this mode, providing the
waypoint files that you automatically generated above in part 1, to see how the waypoint
controller of the simulator performs on your waypoint lists. Does it do a good job? Can
you think of ways to do better?
5. Now, add your own C code to the file user_code.cpp to implement your own trajectory
control algorithm in simple sim. To start, try simply to integrate your waypoint generator
with a simple waypoint controller, and see how it performs. Log data and plot the
results in matlab. How does your controller compare to the built-in simple sim controller
tested in part 4, and to the results that you obtained under manual control?
6. (Optional): Develop a more complex motion controller that would be capable of running in
a more complex environment with numerous obstacles (such as maze.txt) and/or could
achieve good coverage despite large amounts of dead-reckoning error. For example, can you
implement a finite state machine that switches between the three modes of long transits,
bouncing off walls, and spiraling motions, which seems to be the mode of operation of the
Roomba robot vacuum cleaner? Does your code outperform a simple controller in a more
cluttered environment? How fast can you find all the mines? (If you do not feel that you
have the prior C programming experience to attempt this, then feel free to sketch out a
potential solution strategy on paper.)
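The following is a minimal MATLAB sketch of the back-and-forth strategy of part 1(a); the row spacing, file name, and plotting details are illustrative choices, not requirements.

% Minimal back-and-forth ("mowing the lawn") waypoint generator for a 5 x 5 m area.
% Spacing and file name below are illustrative choices.
d  = 0.5;                               % spacing between waypoints and rows (m)
xs = 0:d:5;                             % stations along each row
wp = [];
for k = 0:floor(5/d)                    % one row per pass
    y = k*d;
    if mod(k,2) == 0
        row = [xs',         y*ones(numel(xs),1)];   % left-to-right pass
    else
        row = [fliplr(xs)', y*ones(numel(xs),1)];   % right-to-left pass
    end
    wp = [wp; row];
end
save('waypoints_backforth.txt', 'wp', '-ascii');    % two columns: x y
data = load('waypoints_backforth.txt');
plot(data(:,1), data(:,2), '.-'); axis equal; title('Back-and-forth waypoints');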

Massachusetts Institute of Technology


Department of Mechanical Engineering

2.12 Introduction to Robotics

Problem Set No.3


Out: October 3, 2005 Due: October 12, 2005
Problem 1
	Shown below is a construction robot having two revolute joints and one prismatic joint.
Notice that the axis of the prismatic joint has an offset of ℓ_1 from the first revolute joint at the
origin O. Namely, the distance between point O and point A is ℓ_1, a constant, and the angle
between OA and AB is 90 degrees, a constant as well. Joint 2 is a prismatic joint, whose
displacement is given by the distance d, a variable. Using the geometric parameters and joint
displacements shown in the figure, answer the following questions.

[Figure 1 Construction robot with two revolute joints and one prismatic joint: revolute Joint 1
(angle θ_1) at the origin O on Link 0, prismatic Joint 2 (displacement d along Link 2, offset ℓ_1
from O), and revolute Joint 3 (angle θ_3) at point B connecting Link 3 of length ℓ_3 to the
end-effecter E at (x_e, y_e).]
(a) Obtain the kinematic equations relating the end-effecter position and orientation to the
joint displacements.
(b) Joint 1 can rotate between 45 degrees and 135 degrees, and joint 3 can rotate from −90
degrees to +90 degrees, while joint 2 can move from ℓ_1 to 2ℓ_1. Sketch the workspace of
the end-effecter E within the xy plane.
(c) Solve the inverse kinematics problem to find the joint displacements leading the end-effecter
to a desired position and orientation: x_e, y_e, φ_e.

Problem 2
	Shown below is the schematic of a three-d.o.f. articulated robot arm. Although this arm
looks three-dimensional, its kinematic equations can be obtained in the same way as those of planar
robots. For joints 2 and 3 alone, consider a vertical plane containing links 2 and 3. As for joint 1,
consider the projection of the endpoint onto the xy plane. Answer the following questions, using
the notation shown in the figure.
(a) Obtain the kinematic equations relating the endpoint coordinates x_e, y_e, z_e to the joint
angles θ_1, θ_2, θ_3.
(b) Solve the inverse kinematics problem, i.e. obtain the joint coordinates given the endpoint
coordinates. Obtain all of the multiple solutions, assuming that each joint is allowed to
rotate 360 degrees.
(c) Sketch the arm configuration for each of the multiple solutions.

[Figure 2 Schematic of the 3-d.o.f. articulated robot arm: joint 1 (angle θ_1) rotates Link 1 about
the vertical axis at the base; joints 2 and 3 carry links 2 and 3 of lengths ℓ_1 and ℓ_2 in a vertical
plane; the endpoint is at (x_e, y_e, z_e).]

Problem 3
	Shown below is a robot arm with three revolute joints. The coordinate system O-x₀y₀z₀ is
fixed to Link 0. Axis x₁ is fixed to Link 1. Joint angle θ_1 is measured about the joint axis OA (the
z₀ axis) from x₀ to x₁. The second joint axis BC is horizontal, and joint angle θ_2 is measured from
axis x₁ to axis x₂, which is fixed to Link 2, as shown in the figure. Joint angle θ_3 is measured
about the joint axis CD from axis x₂ to Link 3, i.e. line DE. Link dimensions are OA = 1, AB = 1,
BC = 1, CD = 0, and DE = 1. (For the purpose of explaining the kinematic structure, points C and D
are shown as different points, but they are the same point, i.e. the length CD is zero.) Note also that
∠OAB = ∠ABC = ∠BCD = ∠CDE = 90°. Answer the following questions.

[Figure 3 Kinematic structure of the 3-DOF robot: base frame O-x₀y₀z₀ on Link 0, joint angle θ_1
about the z₀ axis, axes x₁ and x₂ fixed to Links 1 and 2, and joint angles θ_2 and θ_3 as defined
in the text, with the unit link dimensions given above.]

a). Obtain the coordinates of point C viewed from the base coordinate system O x0 y0 z0 .
b). Assuming that all the joints are allowed to rotate 360 degrees, determine the workspace of the
robot. Sketch the workspace envelope, and show the size and dimensions of the envelope in your
sketch.

Massachusetts Institute of Technology


Department of Mechanical Engineering

2.12 Introduction to Robotics

Problem Set No.4


Out: October 12, 2005 Due: October 19, 2005
Problem 1
	An astronaut is operating a shuttle manipulator with an inspection end effecter attached to
the tip of the arm. For the sake of simplicity we consider only the three revolute joints, θ_1, θ_2, θ_3,
and the three links, as shown in the figure. A Cartesian coordinate system, O-xy, is attached to the
object to be inspected. The distance of the coordinate origin O from the location of the first
joint is L. Answer the following questions using the notation shown in the figure.
a) Obtain the forward kinematic equations relating the end effecter position and orientation,
x_e, y_e, φ_e, to the three joint angles, θ_1, θ_2, θ_3. Note that the end effecter position and
orientation are viewed from the Cartesian coordinate system attached to the object, O-xy.
b) Obtain the Jacobian matrix associated with the kinematic equations of part a). Sketch a
block diagram of Resolved Motion Rate Control where the astronaut uses a joystick to
generate velocity commands, v_x = ẋ_e, v_y = ẏ_e, ω = φ̇_e, with reference to the
Cartesian coordinate system O-xy.
c) The inspection end effecter must be moved along the object surface, i.e. the x-axis, at a
constant speed, v_xd = 20 cm/sec. The gap between the inspection sensor and the object
surface must be constant, y_ed = 10 cm, and the orientation must be kept horizontal,
φ_ed = 90°. Compute the time trajectories of the three joint angles as well as the joint
velocities when the end effecter moves from Point A at x = 0, y = 10 cm, to Point B at
x = 15 m, y = 10 cm. The link lengths are ℓ_1 = 10 m, ℓ_2 = 10 m, ℓ_3 = 40 cm, and the
distance to the object is L = 10.5 m. Plot the position and velocity profiles using MATLAB.
(A minimal integration sketch is given after Figure 1.)

[Figure 1 Shuttle manipulator inspecting an object surface: three revolute joints with link lengths
ℓ_1, ℓ_2, ℓ_3, the inspection end effecter at (x_e, y_e) with its orientation measured in the object
frame O-xy, and the object surface at distance L from the first joint.]
All the angles are measured in the right-hand sense; θ_2 in the figure is therefore negative.
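For part c), one common approach is resolved-rate integration: at each time step compute the required endpoint velocity, solve for the joint rates through the Jacobian, and integrate. The MATLAB fragment below is a minimal sketch of that loop only; jacobian_fn() stands for the Jacobian you derive in part b), and q0 is an initial configuration you must supply, so both are placeholders.

% Sketch of resolved-rate integration for a constant endpoint velocity (Problem 1c).
% jacobian_fn(q) and q0 are placeholders for your own part a)/b) results.
dt = 0.01;  T = 15/0.20;                 % 15 m at 0.20 m/s -> 75 s of motion
t  = 0:dt:T;
q  = zeros(3, numel(t));  q(:,1) = q0;   % q0: initial joint angles (user-supplied)
v  = [0.20; 0; 0];                       % desired [vx; vy; omega] in the O-xy frame
for k = 1:numel(t)-1
    J = jacobian_fn(q(:,k));             % 3x3 Jacobian at the current configuration
    qdot = J \ v;                        % resolved joint rates
    q(:,k+1) = q(:,k) + qdot*dt;         % Euler integration step
end
plot(t, q);  xlabel('time (s)');  ylabel('joint angles (rad)');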

Problem 2
Shown below is the same articulated robot as the one in the previous problem set. The robot
has three revolute joints that allow the endpoint to move in three-dimensional space. However, this
robot has some singular points inside the workspace. Answer the following questions.
a) Obtain each column vector of the Jacobian matrix based on its geometric interpretation,
as discussed in class. (Consider the endpoint velocity created by each of the joints while
immobilizing the other joints.)
b) Obtain the Jacobian via direct differentiation of the kinematic equations relating the
end-effecter coordinates to the joint displacements. Compare the result with the geometric
approach in part a). (A numerical finite-difference check is sketched after part f).)
c) For this question and the next question only, assume ℓ_1 = ℓ_2 = 1 for brevity. Compute
the Jacobian matrix for the arm configuration:

1 =

, 2 =

, 3 =

2
3

and obtain the determinant of the Jacobian.


d) Obtain the joint velocities that move the endpoint with the desired velocities v_x = 1, v_y = 2, v_z = 0
at the instant of the arm configuration in part c).
e) Obtain the joint angles of the singular configurations by solving the singularity condition
det J = 0.
f) Based on the results of part e), sketch the arm posture for each of the singular
configurations. Show where in the workspace the arm becomes singular and in which direction
the endpoint cannot be moved at a non-zero velocity.
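A quick way to cross-check parts a) and b) is to compare the analytic Jacobian against a finite-difference approximation of the forward kinematics. The MATLAB fragment below is a generic sketch of that check; fk(q) and jacobian_fn(q) are placeholders for your own kinematics and Jacobian functions, and q_test is any non-singular test configuration.

% Generic finite-difference check of an analytic Jacobian (any serial arm).
% fk(q) -> 3x1 endpoint position, jacobian_fn(q) -> 3x3 Jacobian: user-supplied placeholders.
q_test = [0.3; -0.7; 1.1];          % arbitrary non-singular test configuration (rad)
h = 1e-6;                           % finite-difference step
J_fd = zeros(3,3);
for i = 1:3
    dq = zeros(3,1);  dq(i) = h;
    J_fd(:,i) = (fk(q_test + dq) - fk(q_test - dq)) / (2*h);   % central difference
end
J_an = jacobian_fn(q_test);
disp(norm(J_an - J_fd));            % should be close to zero if the two agree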

[Figure 2 Schematic of the three-d.o.f. articulated robot (the same arm as in Problem Set 3):
joint 1 (angle θ_1) about the vertical axis, joints 2 and 3 carrying links of lengths ℓ_1 and ℓ_2, and
the endpoint at (x_e, y_e, z_e).]

Problem 3
	Shown below is a planar 3-d.o.f. robotic leg standing on the ground. Three joint angles,
θ_1, θ_2, θ_3, all measured from the ground, are used as an independent set of generalized
coordinates uniquely locating the system. The second figure below shows the front view of the
robot including actuators and transmission mechanisms. Actuator 1 generates torque τ_1 between
Link 0 and Link 1. Note that the body of Actuator 1 is fixed to Link 0, while its output shaft is
connected to Link 1. Actuator 2 is fixed to Link 3, and its output torque τ_2 is transmitted to Joint
2, i.e. the knee joint, through the mass-less belt-pulley system with a gear ratio of 1:1. Actuator 3
is fixed to Link 3, while its output shaft is connected to Link 2. All actuator torques τ_1, τ_2, τ_3 are
measured in the right-hand sense, as shown by the arrows in the figure. Displacements of the
individual actuators are denoted φ_1, φ_2, φ_3, and are measured in the same direction as the torques.
The location of the hip, i.e. Link 3, is represented by the coordinates of its center of mass, x_h, y_h,
and angle φ measured from the base coordinate system fixed to Joint 1, as shown in the figure.
a). Obtain the Jacobian matrix relating infinitesimal joint angle changes δθ_1, δθ_2, δθ_3 to infinitesimal
changes in the hip position and orientation, δx_h, δy_h, δφ.
b). Obtain the Jacobian matrix relating joint velocities θ̇_1, θ̇_2, θ̇_3 to actuator angular velocities
φ̇_1, φ̇_2, φ̇_3.
c). Obtain the actuator angular velocities φ̇_1, φ̇_2, φ̇_3 when the hip is moving horizontally at a constant
speed, ẋ_h = V, ẏ_h = 0, φ̇ = 0.

[Figure 3 Leg robot. (a) Side view: Links 1 and 2 of lengths ℓ_1 and ℓ_2 with joint angles measured
from the ground, and the hip (Joint 3, Link 3) at (x_h, y_h). (b) Front view: Actuator 1 between
Link 0 and Link 1; Actuators 2 and 3 fixed to Link 3, with the mass-less belt-pulley transmission
from Actuator 2 to Joint 2.]

Massachusetts Institute of Technology


Department of Mechanical Engineering

2.12 Introduction to Robotics

Problem Set No.5


Out: October 19, 2005 Due: October 26, 2005
Problem 1
	Consider a mass-less rod of length ℓ constrained by two sliding joints at its ends
A and B, as shown in the figure below. The rod is connected to a spring of spring
constant k at A and is pulled down by a mass m at B. Friction is negligible. Let θ be the
angle between the horizontal line and the rod. Using the Principle of Virtual Work, show
that the rod is in equilibrium at the angle θ that satisfies the following relationship:

	(1 − cos θ) tan θ = mg / (kℓ)

where g is the acceleration of gravity. Assume that at θ = 0 the spring force is zero.

[Figure 1 Mass-less rod of length ℓ with a sliding joint and spring (constant k) at end A, a sliding
joint with the hanging mass m (weight mg) at end B, and angle θ between the rod and the
horizontal.]

Problem 2
	A planar robot with three revolute joints is shown below. Let θ_i and ℓ_i be the
angle of joint i and the length of link i, respectively, and let x_e, y_e, φ_e be the end-effecter
position and orientation viewed from the base coordinate frame, as shown in the figure.
	In performing a class of tasks, the end-effecter orientation doesn't have to be
specified. Namely, the number of controlled variables is two, while the number of
degrees of freedom is three. Therefore the robot has a redundant degree of freedom.
	At the arm configuration θ_1 = 135°, θ_2 = 45°, θ_3 = 225°, obtain the 2×3 Jacobian
matrix relating the end-effecter position to the joint displacements. We want to generate an
endpoint force of F_x = 10 N, F_y = 2 N. Obtain the equivalent joint torques needed for
generating the endpoint force.

[Figure 2 Three degree-of-freedom redundant robot arm with link lengths ℓ_1 = 3 m, ℓ_2 = 2 m,
ℓ_3 = 1 m and endpoint at (x_e, y_e).]


Note: In the following problem, numerical values of link lengths and other geometric
parameters are not given, but you can solve the problem using the given functions
h_1(s_1), h_2(s_1, s_2) alone.

Problem 3

'Text and diagram removed for copyright reasons. See Problem 4.2, description and
figure, in Asada and Slotine, 1986.'
(1) At a given configuration of θ_1 and θ_2 we want to move the endpoint at a specified
velocity, v = [v_x, v_y]^T, with reference to the base coordinate system O-xy. Obtain the
cylinder speeds, ṡ_1 and ṡ_2, that produce the desired endpoint velocity. Hint: Use the
derivatives of the functions h_1(s_1) and h_2(s_1, s_2).
(2) Let f_1 and f_2 be the forces exerted by the cylinders HC1 and HC2, respectively. Each
force acts in the longitudinal direction of the cylinder, and is defined to be positive in the
direction of expanding the cylinder. We want to push an object at the arm's endpoint.
Obtain the cylinder forces, f_1 and f_2, required for exerting an endpoint force of F_x = 0
and F_y = F, assuming that all the joints are frictionless. Also ignore gravity.

Massachusetts Institute of Technology


Department of Mechanical Engineering

2.12 Introduction to Robotics

Problem Set No.6


Out: October 31, 2005 Due: November 9, 2005
Problem 1
	A robot arm is drawing a line with a ruler, as shown below. The ruler is held by
another robot arm, which is not shown. Assume no friction and a quasi-static process.
a). Obtain the natural and artificial constraints, using the C-frame attached to the ruler.
b). Sketch the block diagram of the hybrid position/force control system in accordance
with the natural and artificial constraints obtained above. Also obtain the projection
matrices P_a and P_c associated with the natural and artificial constraints.

[Figure 1 Ruler and C-frame: the C-frame velocity axes v_x, v_y, v_z are attached to the ruler
along which the line is drawn.]


Problem 2
Shown below is an office robot drying ink with blotting paper attached to a
semicircular roller of radius R. The roller should not slide but roll on the paper in order to
avoid smearing the wet signature. Assuming that the process is quasi-static and frictionless, we want to perform the task using hybrid position/force control. Obtain the natural and
artificial constraints in terms of velocities and forces at the robot endpoint E. Describe the
constraints with respect to the coordinate system fixed to the desk, O-xyz. Note that Point
E is in the middle of the top surface of the semicircular roller. Is the rolling-contact
requirement a natural constraint or an artificial constraint? [The key is to differentiate
constraints that physics dictates, i.e. natural constraints, from the type of trajectories that
you want the robot to follow in order to accomplish a given task, i.e. artificial
constraints.]

[Figure 2 Office robot drying ink with blotting paper: a semicircular roller of radius R carries the
blotting paper and rolls over a signature on the desk; Point E is in the middle of the roller's flat
top surface, and O-xyz is the coordinate system fixed to the desk.]

Problem 3
	The planar 3-d.o.f. robotic leg considered in a previous problem set is shown
below. Three joint angles, θ_1, θ_2, θ_3, all measured from the ground, are used as an
independent set of generalized coordinates uniquely locating the system. The second
figure below shows the front view of the robot including actuators and transmission
mechanisms. As before, Actuator 1 generates torque τ_1 between Link 0 and Link 1.
Actuator 2 is fixed to Link 3, and its output torque τ_2 is transmitted to Joint 2, i.e. the
knee joint, through the mass-less belt-pulley system with a gear ratio of 1:1. Actuator 3 is
fixed to Link 3, while its output shaft is connected to Link 2. All actuator torques
τ_1, τ_2, τ_3 are measured in the right-hand sense, as shown by the arrows in the figure.
Displacements of the individual actuators are denoted φ_1, φ_2, φ_3, and are measured in the
same direction as the torques. The location of the hip, i.e. Link 3, is represented by the
coordinates of its center of mass, x_h, y_h, and angle φ measured from the base coordinate
system fixed to Joint 1, as shown in the figure.
	In order to walk on rough terrain, the robot wants to make its knee, ankle, and hip
joints compliant so that disturbances acting on the body may be alleviated. All the
disturbance forces acting on the foot can be represented collectively by an equivalent
linear force and moment acting at the hip: F = [F_x, F_y, M]^T. See the figure below. Using
compliance (stiffness) control, we want to support the hip with a desired stiffness defined
as:

	[F_x]   [k_x   0    0 ] [Δx_h]
	[F_y] = [ 0   k_y   0 ] [Δy_h]
	[ M ]   [ 0    0   k_φ] [Δφ ]

where Δp = [Δx_h, Δy_h, Δφ]^T is the vector of linear and angular displacements of the hip, and the
elements of the stiffness matrix, k_x, k_y, k_φ, are appropriate positive values. For the leg
configuration and the joint angles shown in the figure, obtain the joint feedback gain
matrix K that provides the desired stiffness given above. Also obtain the feedback gain
matrix in the actuator space.
[Figure 3 Leg robot revisited. (a) Side view: Links 1 and 2 (lengths ℓ_1 and ℓ_2) with joint angles
θ_1, θ_2, θ_3 measured from the ground and the hip (Joint 3, Link 3) at (x_h, y_h). (b) Front view:
Actuator 1 between Link 0 and Link 1; Actuators 2 and 3 fixed to Link 3, with the mass-less
belt-pulley transmission from Actuator 2 to Joint 2. (c) Hip compliance: disturbance force and
moment F = [F_x, F_y, M]^T act at the hip. The figure specifies ℓ_1 = ℓ_2 = 1 and the joint-angle
values (fractions of π) for the configuration to be used.]

Massachusetts Institute of Technology


Department of Mechanical Engineering

2.12 Introduction to Robotics

Problem Set No.7


Out: November 9, 2005

Due: November 16, 2005

Problem 1
	A two degree-of-freedom robot arm with one prismatic joint is shown below. The
direction of the prismatic joint is perpendicular to the centerline of the first link. As shown in the
figure, the joint angle θ and the distance z between the tip of the first link and the mass centroid of the
second link are used as generalized coordinates. The first actuator, fixed to the base link, produces
torque τ about the first joint, while the second actuator, located at the tip of the first link, generates a
linear force f acting on the second link. Using the parameters shown in the figure, answer the
following questions.

[Figure 1 Mass properties and link parameters of the two-d.o.f. arm: Link 1 (mass m_1, inertia I_1,
length l_1, centroid at l_c1 with velocity V_c1) rotates about the revolute Joint 1 at O under torque τ;
Link 2 (mass m_2, inertia I_2, centroid velocity V_c2) rides on the prismatic Joint 2 at displacement z
under force f.]


a). Obtain the moment of inertia reflected to Joint 1 when the second joint is fixed at z = z_0. At
which arm configuration does the moment of inertia become minimal?
b). Obtain the centrifugal force acting on Link 2 when the first joint is rotating at a constant
angular velocity θ̇. Also obtain the torque induced by the centrifugal force upon Joint 1, i.e. the
joint torque needed for canceling out the centrifugal effect.
c). Obtain the Coriolis force acting on Link 2 when the first joint is rotating at a constant angular
velocity θ̇ and the second joint is moving at a constant linear velocity ż. Also obtain the torque
induced by the Coriolis force upon Joint 1.
d). Obtain the linear velocity vector of each mass centroid, V_ci, as a function of the generalized
coordinates and their time derivatives.
e). Obtain the linear acceleration vector of each mass centroid, a_ci.
f). Obtain the Newton-Euler equations of motion by drawing free-body diagrams of the individual
links.
g). Eliminate the constraint forces involved in the Newton-Euler equations, and obtain closed-form
dynamic equations relating the actuator torque and force, τ and f, to θ, θ̇, θ̈ and z, ż, z̈.

Problem 2
	Figure 2 shows the schematic of a three degree-of-freedom rehabilitation bed/chair
system. The seat is tilted with Actuator 1, which is fixed to the base frame. The back leaf and the
footrest are driven together by Actuator 2, which is fixed to the seat. Note that the motor shaft of
Actuator 2 is connected to a belt-pulley mechanism to move the footrest together with the back leaf.
The headrest is moved with Actuator 3, fixed to the seat through another belt-pulley mechanism as
shown in the figure. Figure 3 shows the kinematic structure and joint variables along with the
geometric and mass parameters of the individual links. Note that joint angles θ_2 and θ_3 are
measured from the seat, while angle θ_1 is measured from the base frame.

[Figure 2 Powered rehabilitation bed-chair: base frame, seat (Actuator 1), back leaf and footrest
(Actuator 2 with belt), and headrest (Actuator 3 with belt).]


Link 3
l3
lc3
l2

lc2 C2

l1
l0
O0

lc1
C1

O1

Link 0

lc0

C0

m 0, I 0

O3

Endpoint E
C3
3

m 3, I 3

m 2, I 2
Link 2

O2
x

m 1, I 1

Link 1

Figure 3 Mass parameters of the bed-chair system Figures by MIT OCW. 2

The closed-form equations of motion are of the form:

	τ_1 = H_11 θ̈_1 + H_12 θ̈_2 + H_13 θ̈_3 + ⋯
	τ_2 = H_21 θ̈_1 + H_22 θ̈_2 + H_23 θ̈_3 + ⋯		(1)
	τ_3 = H_13 θ̈_1 + H_23 θ̈_2 + H_33 θ̈_3 + ⋯

where H_ij is the i-j element of the 3×3 inertia matrix H = {H_ij} associated with the joint
coordinates. Answer the following questions.
a) Explain the physical meaning of the inertia matrix elements H_11 and H_22, respectively.
Show which part of the link inertia is associated with each of H_11 and H_22. Be sure to
identify which type of motion, translation and/or rotation, is involved in H_11 and H_22.
b) Based on the physical interpretation in part a), obtain H_11 and H_22, respectively. Use the
mass parameters shown in the figure: m_i is the mass, I_i the moment of inertia about the centroid
C_i, and l_ci the distance between the i-th joint axis and the mass centroid of the link.

Massachusetts Institute of Technology


Department of Mechanical Engineering

2.12 Introduction to Robotics

Problem Set No.8


Out: November 21, 2005

Due: November 30, 2005

Problem 1
	Shown below is a vehicle similar to the 2.12 mobile robot, having a pair of powered
wheels and a frictionless caster. The radius of the wheels is r = 3 cm, while the distance between
the two wheels is 2b = 20 cm. The angular velocity of the right wheel is ω_r, and that of the left
wheel is ω_l. Each powered wheel is equipped with a shaft encoder to measure the angular
velocity. Answer the following questions.

[Figure 1 Vehicle trajectory: the vehicle starts at (X_0, Y_0) with orientation θ_0 in the inertial
frame O-XY and ends at X(t_f), Y(t_f), θ(t_f); wheel radius r, wheel separation 2b.]

[Figure 2 Time profiles of the wheel angular velocities ω_r and ω_l (rad/sec, axis marked 0 to 10)
versus time (sec), with breakpoints at t = 3 s and t_f = 10 s.]

a). At time t = 0, the vehicle was at position X_0 = 20 cm, Y_0 = 20 cm with reference to the
inertial reference frame O-XY and at orientation θ_0 = 0 measured from the positive X axis. See
Figure 1. Then the vehicle moved. The time profiles of the wheel angular velocities during the
movement were recorded, as shown in Figure 2. Compute the position and orientation of the
vehicle at time t_f = 10 sec based on the time profiles shown in Figure 2. Assume no slip.
	To go back to the initial position and orientation, X_0, Y_0, θ_0, a feedback control law is now
employed. Let us consider the following control method.
As illustrated in Figure 3, let α be the angle between the direction of the vehicle, i.e. line AB, and
the direction of the destination from the current position of the vehicle, line AC:

	α = arctan2[ Y_0 − Y(t), X_0 − X(t) ] − θ(t)

The primary goal is to reduce the distance between the current position X(t), Y(t) and the
destination X_0, Y_0:

	D = sqrt[ (X(t) − X_0)² + (Y(t) − Y_0)² ]

To reduce this distance D the vehicle should move in the direction given by angle α. At the
same time the vehicle should be oriented in the direction θ_0 = 0 at the destination. Therefore,
the vehicle should reduce the difference in orientation,

	Δθ = θ_0 − θ(t),

during the movement towards the destination.

Combining these, let us consider the following feedback law:

	v = k_D D
	ω = k_α α + k_θ Δθ

where v is the vehicle forward velocity and ω is the angular velocity of the vehicle rotation.
Answer the following questions.

[Figure 3 Feedback law: the vehicle at X(t), Y(t) with heading θ(t) (point A), the destination at
X_0, Y_0 at distance D, and the bearing angle α between the vehicle heading and the line to the
destination.]

b). Obtain the Jacobian relating the vehicle forward velocity v and rotational velocity ω to the
angular velocities of the right and left wheels, ω_r and ω_l.
c). Sketch an approximate trajectory of the vehicle from the final position obtained in part a), i.e.
X(t_f), Y(t_f), θ(t_f), back to the original position and orientation, X_0, Y_0, θ_0. Find appropriate
values for the feedback gains k_D, k_α, k_θ. (A minimal simulation sketch is given at the end of
this problem.)
For Extra Credit:
d). Discuss whether the vehicle can reach the exact destination when the feedback gains are
changed. What will happen if k_α < k_D?
e). If |α| < π/2, moving forward may be better than moving backward. What will happen if the
vehicle moves backward when π/2 < |α| < π? Considering these alternative routes, how would you
modify the control law in order to move quickly towards the destination?
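For part c), a quick way to explore gain values is to simulate the unicycle model Ẋ = v cos θ, Ẏ = v sin θ, θ̇ = ω under the feedback law above. The MATLAB fragment below is a minimal sketch with arbitrary placeholder gains and a made-up starting pose (replace it with your part a) result); it is not a tuned solution.

% Sketch: simulating the go-to-goal feedback law of Problem 1 (placeholder gains).
X0 = 0.20; Y0 = 0.20; th0 = 0;          % destination pose (m, m, rad)
X = 0.8; Y = 0.5; th = 1.0;             % starting pose: replace with your part a) result
kD = 0.5; ka = 2.0; kth = -0.5;         % placeholder feedback gains
dt = 0.01;  traj = [];
for k = 1:5000
    D     = hypot(X0 - X, Y0 - Y);                    % distance to destination
    alpha = atan2(Y0 - Y, X0 - X) - th;               % bearing error
    alpha = atan2(sin(alpha), cos(alpha));            % wrap to (-pi, pi]
    v = kD*D;  w = ka*alpha + kth*(th0 - th);         % feedback law
    X  = X + v*cos(th)*dt;  Y = Y + v*sin(th)*dt;     % unicycle kinematics
    th = th + w*dt;
    traj = [traj; X Y];
    if D < 0.005, break; end                          % close enough to the goal
end
plot(traj(:,1), traj(:,2)); axis equal; hold on; plot(X0, Y0, 'o');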

Problem 2
	The objective of this assignment is to build the dynamic model of the 2.12 arm being
used for the final project, and to obtain feedforward torques for manipulating an end-effecter in a
vertical plane. As you already know, both actuators of the 2.12 arm are fixed to the base link, and
the actuator torque of the second motor, τ_2, is transmitted from joint 1 to joint 2 through a
belt-pulley mechanism. Actuator displacements θ_1 and θ_2, which are absolute angles measured from
the base axis, are used as generalized coordinates, and actuator torques τ_1 and τ_2 correspond to the
actuator displacements, forming the virtual work: δWork = τ_1 δθ_1 + τ_2 δθ_2.

a) Obtain the mass properties of each link, i.e. m_i, ℓ_ci, I_i, as defined in Figure 4. Figure 6
illustrates the disjointed arm links (details are ignored for computing the mass properties).
Each arm link consists of an aluminum bar of 275 mm × 50 mm × 20 mm and two masses
at both ends of the link. For simplicity, the masses at both ends are treated as mass
particles having no moment of inertia. Obtain the mass, the location of the center of mass,
and the moment of inertia about the center of mass, m_i, ℓ_ci, I_i, for each link.
b) Obtain the feedforward actuator torques for compensating for the gravity load of the arm
when displacements θ_1 and θ_2 are measured.
For extra credit:
c) Obtain the equations of motion in terms of the generalized coordinates θ_1 and θ_2 and the actuator
torques τ_1 and τ_2. Discuss why no Coriolis term is involved in the equations of motion.
d) Consider the cosine curves shown in Figure 7 for the trajectories of θ_1 and θ_2. Compute
the angular velocities and accelerations, θ̇_1, θ̇_2 and θ̈_1, θ̈_2, along the trajectories, and then
obtain the feedforward torques for tracking the trajectories from time t = 0 to t = t_f.

[Figure 4 Variables and parameters of the 2.12 arm: Link 1 (m_1, I_1, centroid at ℓ_c1, length ℓ_1)
with displacement θ_1 and torque τ_1, and Link 2 (m_2, I_2, centroid at ℓ_c2) with displacement θ_2
and torque τ_2.]
[Figure 5 Mechanism of the arm.]
[Figure 6 Link dimensions and masses attached to the links: each link is an aluminum bar of
275 mm × 50 mm × 20 mm (Link 2 has the same dimensions as Link 1), with a 230 mm spacing
marked on Link 1 and point masses of 1 kg, 0.5 kg, 0.3 kg, and 0.1 kg attached at the link ends
as marked in the figure.]

[Figure 7 Trajectories: cosine curves over 0 ≤ t ≤ t_f = 4 sec, with θ_1 varying between 90° and
45° and θ_2 varying between 45° and −45°.]

Massachusetts Institute of Technology


Department of Mechanical Engineering

2.12 Introduction to Robotics


Mid-Term Examination
October 27, 2004
2:30 pm - 4:30 pm
Closed-book. Two sheets of notes are allowed.
Show how you arrived at your answer.
Do not leave multiple answers. Indicate which one is your correct answer.
Problem 1
	Shown below is a robotic arm rowing a boat at the Charles Regatta. For the sake
of simplicity, we consider only planar motion, assuming that the two-link robot arm and
the oar are constrained to a horizontal plane. The oar is free to rotate about the pin, called
an oarlock, and the robot pulls the oar at the distance d from the oarlock. The robot
endpoint is connected to the oar at point E through a friction-less passive joint. A
hydraulic force f acts on the blade at a distance D from the oarlock. Assume that the
hydraulic force is perpendicular to the oar. The oarlock is at coordinates (x_o, y_o) with
reference to the base coordinate system attached to the first joint of the robot arm.
Assume no friction and no gravity. Using the notation shown in the figure, answer the
following questions.
a). Obtain the coordinates of the robot endpoint x_e, y_e as functions of the joint angles θ_1, θ_2,
and differentiate the endpoint coordinates with respect to the joint angles in order to derive
the Jacobian.
b). Let φ be the angle of the oar measured from the x axis. Obtain the endpoint
coordinates x_e, y_e as functions of φ. Under what condition is the angle φ a generalized
coordinate locating the entire system?
c). Using the principle of virtual work, show that the joint torques τ_1, τ_2 needed for
pulling the oar against the hydraulic force f must satisfy the following relationship:

	f = [ d / (D sin(θ_2 − θ_1)) ] [ (τ_1/ℓ_1) sin(θ_2 − φ) + (τ_2/ℓ_2) sin(φ − θ_1) ]

d). DC motors are used for driving the two joints of the robot. Let K_m1 be the motor
constant of the first motor, and K_m2 be that of the second motor. Show that the total
power loss at the two motors when generating joint torques τ_1, τ_2 is given by

	P_loss = τ_1² / (r_1 K_m1)² + τ_2² / (r_2 K_m2)²

where r_1, r_2 are the gear ratios of the motors. (Note that the second motor is fixed to the base
and its output torque is transmitted to the second joint, just like the 2.12 lab manipulator.)
e). Obtain the optimal values of the joint torques τ_1, τ_2 that minimize the total power loss
in both motors, P_loss, while bearing the hydraulic force f at the configuration
θ_1 = 45°, θ_2 = 135°, φ = 0. Assume that the link dimensions are ℓ_1 = ℓ_2 = 1/√2, d = D,
and that the products of the motor constant and the gear ratio are r_1 K_m1 = 1, r_2 K_m2 = 0.5,
in dimensionless form.

[Figure 1 Two-d.o.f. robot rowing a boat: the oar pivots about the oarlock at (x_o, y_o), with the
blade at distance D on one side and the robot endpoint (x_e, y_e), connected through a free joint,
at distance d on the other; the two-link arm has link lengths ℓ_1, ℓ_2 and joint angles θ_1 (Joint 1)
and θ_2 (Joint 2).]

Problem 2
	Shown below is a robot arm with three revolute joints. The coordinate system O-xyz fixed to the
base link, Link 0, represents the Cartesian coordinates of the endpoint, x_e, y_e, z_e. Joint angle
θ_1 is measured about the joint axis OA (the z axis) from the x axis to line OB′, where point B′ is the
projection of point B onto the xy plane. Another coordinate system B-uvw is placed at point B in
such a way that the u and w axes are parallel to the xy plane and the v axis is parallel to the z
axis. The second joint axis AB is horizontal, and joint angle θ_2 is measured from the u axis to line
BD. Joint angle θ_3 is measured about the joint axis CD from line BD to Link 3, i.e. line DE. Link
dimensions are OA = ℓ_0, AB = ℓ_1, BD = ℓ_2, and DE = ℓ_3. (For the purpose of explaining the
kinematic structure, points C and D are shown as different points, but they are the same point, i.e. the
length CD is zero.) Note also that ∠OAB = ∠ABD = 90°. Answer the following questions.

[Figure 1 Kinematic structure of a 3 d.o.f. robot: base frame O-xyz on Link 0, joint 1 (angle θ_1)
about the z axis, frame B-uvw at point B with joint 2 (angle θ_2) about the horizontal axis AB,
joint 3 (angle θ_3) about CD, the endpoint E at (x_e, y_e, z_e) with coordinates (u_e, v_e) in the
B-uvw frame, and endpoint forces F_x, F_y, F_z.]


a). Obtain the coordinates of the endpoint E viewed from the frame B-uvw, that is, u_e and v_e in the
figure.
b). Obtain the endpoint coordinates x_e, y_e, z_e viewed from the base coordinate system O-xyz.
c). For a given endpoint position, how many solutions exist to the inverse kinematics problem?
Sketch all the different configurations leading to the same endpoint coordinates.
d). Obtain the 3×3 Jacobian matrix relating the endpoint velocities ẋ_e, ẏ_e, ż_e to the joint velocities
θ̇_1, θ̇_2, θ̇_3.
e). Forces F_x = 10 N, F_y = 0, F_z = 0 act at the endpoint when the joint angles are
θ_1 = 0°, θ_2 = 45°, θ_3 = 90°. Assume that ℓ_2 = ℓ_3. Obtain the joint torques needed for bearing
the force acting at the endpoint, F_x = 10 N, F_y = 0, F_z = 0. Discuss the physical sense of the
result.
The following question is for extra credit. If you have time, work on it.
f). Obtain the joint angles of all the singular configurations by solving the determinant condition
of the Jacobian matrix. Sketch the singular configurations, and determine in which direction the
endpoint cannot be moved with a non-zero velocity.

Massachusetts Institute of Technology


Department of Mechanical Engineering

2.12 Introduction to Robotics


Mid-Term Examination
November 2, 2005
2:30 pm - 4:30 pm
Closed-book. Two sheets of notes are allowed.
Show how you arrived at your answer.
Do not leave multiple answers. Indicate which one is your correct answer.
Problem 1 (50 points + 5 points extra)
	The MIT Department of Mechanical Engineering is developing a dinosaur robot for a local
science museum. Figure 1 shows a schematic of the robot, which has three degrees of
freedom. The joint angles θ_1, θ_2 and θ_3 are all measured from the positive x axis fixed to
the floor, and the link lengths are OA = AB = 1 m. The head of the dinosaur is at Point E,
(x_e, y_e)^T, which is 3 meters from Joint 3 (Point B), and its orientation φ_e is measured
from the positive x axis to line EH, the head centerline. Note that line EH bends down
π/3 from line BE, as shown in the figure. Answer the following questions.

[Figure 1 Overall structure of the dinosaur robot: Link 1 (OA) and Link 2 (AB), each 1 m long,
with joint angles θ_1, θ_2, θ_3 measured from the positive x axis; Link 3 (the whole body) carries
the head at Point E, 3 m from Joint 3 at Point B, with head orientation φ_e along line EH, which
bends down π/3 from line BE; the center of mass C lies on line BE.]

[Figure 2 Actuators and leg mechanism: Actuator 1 between Link 0 and Link 1 (torque τ_1),
Actuators 2 and 3 fixed to Link 3 (torques τ_2, τ_3), with a mass-less belt-pulley transmission of
gear ratio 1:1 from Actuator 2 to Joint 2.]

a). Obtain the position and orientation of the dinosaur head, p = (x_e, y_e, φ_e)^T, as
functions of the joint angles.
b). Obtain the Jacobian matrix relating the head velocity ṗ = (ẋ_e, ẏ_e, φ̇_e)^T to the joint
velocities q̇ = (θ̇_1, θ̇_2, θ̇_3)^T.
c). Obtain the singular configurations using the above Jacobian from part b). First,
mathematically solve the singularity condition, then interpret the result and sketch the
singular configurations.
For the rest of the questions, refer to Figure 2 along with Figure 1, depicting the actuators
and transmission mechanisms. Actuator 1 generates torque τ_1 between Link 0 and Link 1.
Note that the body of Actuator 1 is fixed to Link 0, while its output shaft is connected to
Link 1. Actuator 2 is fixed to Link 3, and its output torque τ_2 is transmitted to Joint 2, i.e.
the knee joint, through the mass-less belt-pulley system with a gear ratio of 1:1. Actuator
3 is fixed to Link 3, while its output shaft is connected to Link 2. All actuator torques τ_1,
τ_2, and τ_3 are measured in the right-hand sense, as shown by the arrows in the figure.
Displacements of the individual actuators are denoted φ_1, φ_2, and φ_3, and are measured in the
same direction as the torques.
d). The body of the dinosaur weighs 2,000 kg. The center of mass is at Point C, which is
1 meter from point B on line BE, as shown in Figure 1. How much actuator torque τ_1,
τ_2, and τ_3 is needed to bear the weight of the body at the configuration θ_1 = π/4,
θ_2 = 3π/4, and θ_3 = π/3? Ignore friction and the mass of the leg.
e). If one uses the Jacobian matrix relating the head velocity to the actuator velocities for
examining the singular configurations, are the singular configurations different from the
ones obtained in part c)? Explain why.

Problem 2 (50 points + 10 points extra)

	A rescue robot is working at a disaster site, moving a heavy object at the tip of its
arm. To reduce the load, the rescue robot rests on a solid stationary wall at Point A and
slides its second link on the wall, as shown in the figure. The contact between the second
link and the wall at Point A is assumed to be friction-less. The coordinates of Point A are
x_A, y_A with reference to the base coordinate system. Similar to the 2.12 robot, this rescue
robot has a two-d.o.f. arm with both motors fixed to the base. (The torque of the second
motor is transmitted through a belt-pulley mechanism to the second joint. The two pulley
diameters are the same.) Using the parameters and variables shown in the figure, answer
the following questions.

[Figure 3 Rescue robot moving a heavy object: the two-link arm (link lengths ℓ_1, ℓ_2, joint
torques τ_1 at Joint 1 and τ_2 at Joint 2, joint angles θ_1 and θ_2 with θ_2 < 0) carries the object
at its endpoint, which bears a vertical force −F_y; Link 2 rests on the wall at Point A (x_A, y_A)
through a friction-less sliding contact.]

a). Under what conditions can we use the length L between Joint 2 and Point A as a
generalized coordinate that locates the system uniquely, assuming that Link 2 is in
contact with Point A at all times?
b). When Link 2 slides on the wall at Point A, the robot motion has to satisfy a
geometric constraint. Obtain the constraint equations by writing the coordinates of Point A,
x_A and y_A, as functions of the joint angles θ_1, θ_2 and the length L, i.e. x_A = x_A(θ_1, θ_2, L) and
y_A = y_A(θ_1, θ_2, L).
c). Differentiating the functions x_A = x_A(θ_1, θ_2, L) and y_A = y_A(θ_1, θ_2, L) from part b),
find the relationship among the virtual displacements δθ_1, δθ_2 and δL that satisfies the
geometric constraints.
d). Obtain the virtual work done by the motor torques, τ_1, τ_2, and the vertical endpoint
force −F_y.
e). Using the Principle of Virtual Work, show that the joint torques τ_1, τ_2 and the vertical
endpoint force F_y satisfy the following form of condition when the system is in
equilibrium:

	F_y = a τ_1 + b τ_2

If time permits, obtain a and b by solving the equilibrium condition derived from the
Virtual Work Principle for given joint angles θ_1, θ_2 and length L. You will get an additional
10 points. (If you get stuck computing a and b, proceed to the next question, which can
be solved without knowing a and b.)
f). DC motors are used for driving the two joints of the robot. Let K_m1 be the motor
constant of the first motor, and K_m2 be that of the second motor. Show that the total
power loss at the two motors when generating joint torques τ_1, τ_2 is given by

	P_loss = τ_1² / (r_1 K_m1)² + τ_2² / (r_2 K_m2)²

where r_1, r_2 are the gear ratios of the motors.

g). Using the parameters a and b from part e), obtain the optimal values of the joint torques
τ_1, τ_2 that minimize the total power loss in both motors, P_loss, while bearing the vertical
endpoint force −F_y.

Class Average: 75.28
Standard Deviation: 11.40

Point distribution
Problem 1
	1-a	10
	1-b	10
	1-c	10
	1-d	10 + 5 extra
	1-e	10
	subtotal	50
Problem 2
	2-a	6
	2-b	6
	2-c	6
	2-d	6
	2-e	10 + 10 extra
	2-f	6
	2-g	10
	subtotal	50
TOTAL	100 + 15 extra

2.12 Final Project

Photo by Robert Kaufmann/FEMA

Rescue Scenario

The location of a casualty is unknown. The rescue robot must be able to find a casualty.
The casualty is not directly accessible. The rescue robot must remove some wreckage.

A magnet is imbedded in the body

Detection using magnetic proximity sensors


The same lead sensor as the one used for the
de-mining robot

Grasp with a solenoid


End effecter design

Search Method 1.
The rescue robot has magnetic sensors at the end effecter.
The disaster site is probed with the magnetic sensors by
moving the end effecter.
Wreckage/obstacles must be pushed or grabbed by the
robot to find a way for accessing a casualty.

Magnetic sensors

Project Teams
Four project teams will be formed.

Team 1 : Thursday 12 - 2
Team 2 : Thursday 3-5
Team 3 : Friday 12 - 2
Team 4 : Friday 3-5

Each project team will be divided into three subgroups:


Mobile platform design and control
End effecter, arm, and manipulation
Sensing and navigation

Mobile platform design and control


Chassis for mounting motors, drives, an arm,
batteries, and controllers
Layout design and system integration

End Effecter, Arm, and


Manipulation
End effecter design
Grasp: Solenoids
Magnetic sensors

Control
Closed-loop control
Arm trajectory generation and
Search control

Sensing and Navigation


Dead reckoning using optical shaft encoders.
An optical beacon (North Star) based navigation
Navigation route generation and path planning

Schedule and Milestones


Week 1: 11/4,5
Group strategy brainstorming
Forming subgroups

Week 2: Idle, Veterans Day Holiday


11/9(W) Game plan presentation
11/10(Th) Game plan due

Week 3: 11/18, 19
Machining and hardware fabrication

Thanksgiving Holiday
Week 4: 12/1,2
Assembly and check

Week 5: 12/8,9
Programming and tuning

Week 6: 12/12 (Monday, Final check) and 12/14 Presentation



Must

Should

Would be
nice


Search Method 2: Embedding an LED
LED spread light for long-range search for the body
Modulated LED spread light, 600-800 nm; partially visible

Vision-based Sensors: Hardware

Photos of hardware removed for copyright reasons.


2048 x 2028 CCD array
Orangemicro iBOT Firewire webcam
Sony DFW-X700 color machine vision camera
Canon IXUS 300 digital camera


Searching for a casualty

The rescue robot has to go around the disaster site to search for a casualty.
With a set of photo detectors, the rescue robot can detect the modulated LED light and
determine the direction of a casualty.
To determine the position of the detected casualty based on triangulation, the robot has to
obtain the direction of the LED light at two different locations.
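As a small illustration of the two-bearing triangulation idea mentioned above, the MATLAB fragment below intersects two bearing rays taken from two known robot positions; the positions and bearings are made-up illustrative numbers.

% Sketch: locating a beacon from two bearing measurements (triangulation).
% Robot positions and measured bearings below are made-up illustrative numbers.
p1 = [0.0; 0.0];  b1 = 35*pi/180;     % first observation point and bearing (rad, world frame)
p2 = [2.0; 0.0];  b2 = 120*pi/180;    % second observation point and bearing

d1 = [cos(b1); sin(b1)];              % unit vectors along each bearing ray
d2 = [cos(b2); sin(b2)];

% Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters t1, t2.
A = [d1, -d2];
t = A \ (p2 - p1);
beacon = p1 + t(1)*d1;                % estimated beacon (casualty) position
fprintf('estimated beacon position: (%.2f, %.2f)\n', beacon(1), beacon(2));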

Robot system development

Design decisions: strategy, algorithm, sensor selection, end effecter design, programming,
control tuning.
Sensor considerations: modality, range, accuracy, sampling rate, complexity/cost.

Hero/Heroine Robot

Magnetic Lead Switches

[Diagram: a glass tube filled with inert gas contains the leads; bringing a permanent magnet
(S-N) near the switch turns it ON, and removing it turns it OFF. For extending the detectable
range, a permanent magnet is combined with the magnetic lead switch; a piece of magnetic
material serves as a fake mine.]

Optical Proximity Sensors

[Circuit sketch: an LED and a photo detector (photodiode) with resistors R1 and R2, a 3 V
supply, and output voltage Vout.]

A candidate final project theme: A Grand Challenge (Photo by Robert Kaufmann/FEMA)

Sonar sensor on a pan-tilt base
Beacon: Ultrasound?
Proximity / Range sensor
Soft and fragile: Grasp force sensor

Massachusetts Institute of Technology


Department of Mechanical Engineering

2.12 Introduction to Robotics


Laboratory No. 1
Objective
The objectives of today's laboratory are:
Set up a robot control system and check the basic functions of the controller,
Drive a DC motor using a PWM power amplifier, and
If time permits, download a user C-code to the robot controller.

Lab Report
You are expected to turn in the questions/check sheet. Each group turns in one
report at the end of lab session.
SAFETY
The hardware and motor that you will be using for 2.12 this semester use
considerable current. The Robot controller uses up to 1.5A continuously and the
motor draws up to 4A of current. These currents are dangerous, so please use
common sense when operating the equipment. Most importantly, when you are
wiring, make sure all the power is turned off.
READING
1. Read materials from the Innovation First website.
a. 2004 Control System Overview
b. 2004 Operator Interface (OI) Reference Guide
c. 2004 Full-Size Robot Controller (RC) Reference Guide
d. 2004 Full-Size Control System Quick Start Guide
You have a half hour to skim these materials.
INITIAL SET UP
2. Turn on your laptop and click on Usern. The password is: [password
removed]. Next, you should change your password. Open the control panel, click
on User Accounts, and click your section again. Here you will be able to change
your password. Write down the new password and give it to the laboratory
instructor with your names and laptop number.
3. Retrieve switches for the 12 V power supply (the big one). Use the SPST switch
(single pole single throw) and connect it as shown in the diagram below. Before
connecting to the RC, check with a multi-meter that when the switch is toggled,
12 V and ground are read. Keep the switch OFF for the time being. Also, plug in
the backup battery. NOTE: If you ever turn the switch off, power will be drained
from the backup battery to keep the system functioning. This is a nice feature but
not now. You must press the RESET button on the RC to stop the backup battery
from being drained.

4. Use the tether cable (RED) to connect the Robot Controller (RC) to the Operator
Interface (OI). NOTE: The tether cable (RED) will provide power to the OI from
the RC when it is tethered: you do not need the AC Adapter when it is connected.
However, the AC adapter must be used when you are using the radios.
5. Plug in the joystick to port 1.
PWM
6. Connect the PWM Extension cable (the black-red-white flat cable) to the PWM
output 1 on the RC and connect the other end to a breadboard. Connect
oscilloscope probes to the GND (Black) and the signal (White).
7. Connect a black DB9 connector from the DASHBOARD on the OI to the serial
port on your laptop.
8. Before turning the power ON, ask the lab instructor to check your wiring.
9. Ensure that the Robot Controller Jumper is set to RC (see P.12 of Operator
Interface Reference Guide).
10. Open the Dashboard program on the laptop.
11. Look at the PWM values on the Dashboard interface as you move the joystick
horizontally and vertically.
a. What PWM numbers (there are sixteen of them) on the dashboard do X
and Y motion correspond to?
b. What is the maximum value? Minimum? What is the value when the
joystick is in the neutral setting? How many bits is the signal?
c. What variables in the dashboard program do the buttons on the joystick
correspond to? What lights are illuminated on the OI when the joystick
buttons are pressed?
d. As you move the joystick, what lights are illuminated on the OI and under
what circumstances?

12. Turn on the oscilloscope and observe the waveform. As you move the joystick,
the duty cycle should vary.
a. What is the frequency of the PWM signal?
b. What is the maximum duty cycle? Minimum? What about at the neutral
position? Is this what you expected? Any hypotheses why these are like
this?
c. What is the voltage the PWM signal operates at?
d. What is the risetime? Falltime?
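It may help, when interpreting the Dashboard numbers and the duty cycle on the scope, to relate the 8-bit PWM command to a signed speed. The standalone sketch below is only an illustration (it is not IFI code); it assumes the command runs from 0 to 255 with 127 as neutral, which is exactly what steps 11 and 12 ask you to verify.

    #include <stdio.h>

    /* Illustration only: map an 8-bit PWM command (assumed 0..255, 127 =
     * neutral) to a signed speed fraction: +1.0 full forward, -1.0 full
     * reverse, 0.0 stopped.
     */
    static double pwm_to_speed(int pwm)
    {
        return (pwm - 127) / 127.0;
    }

    int main(void)
    {
        int samples[] = { 0, 64, 127, 191, 254 };
        int i;
        for (i = 0; i < 5; i++)
            printf("pwm = %3d  ->  speed = %+5.2f\n",
                   samples[i], pwm_to_speed(samples[i]));
        return 0;
    }

If your measured neutral value or range differs, adjust the constants accordingly.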
DC MOTOR
13. Turn off the power and set up one axis of the DC motor with the PWM amplifier
(Victor 884), referring to page 1 of the Victor 884 User Manual. You can connect
the PWM Extension cable (the black-red-white flat cable) that was in the
breadboard into the Victor 884. Also, you can connect the power to the Victor 884
by fastening the circle lug to the power terminals of the RC (this is called daisy chaining).
Before turning on the power, have the laboratory instructor check your wiring.
14. Send a command from the joystick and verify that you can drive the motor in both
directions.
15. Using a multi-meter, look at the voltage that drives the motor (DO NOT use the
oscilloscope for this: it has an earth ground that will spark when attached to the
floating motor load. If you want a further explanation, talk to the lab instructor).
a. What is the voltage range?
b. Is the voltage the motor sees linear with position of the joystick?
c. Does this correspond to the PWM signal you saw earlier?
Optional: You can calibrate the motor by following the directions on page 2 of the
Victor 884 Manual.
16. Refer to page 2 of the Victor 884 User Manual to switch between Brake/Coast
configurations. How does this change the behavior of the motor? Observe the
time for stopping the motor after the joystick command turns to zero.

DIGITAL INPUT
17. The RC makes Digital I/O very easy. Connect a different PWM extension cable to
DIGITAL IN/OUT (DIO) pin 1 and connect the other end to a breadboard. Note
that the +5V and ground are already provided for you.
a. As you switch the logic input to the DI pin between +5V and GND, what
lights on the OI are illuminated? When the light turns ON, are you grounding
the pin or giving it +5V?
b. When you move the PWM extension cable to DIO pins 2 and 3, what lights
are illuminated on the OI?
c. Looking at the Dashboard on the laptop, what variables represent the
assertion and disassertion of these logic pins?
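If you later want to react to this digital input from your own C code (you will do this in Lab 2), the fragment below sketches the idea. The names rc_dig_in01 (DIGITAL IN/OUT pin 1) and Pwm1_green (one of the OI LEDs) are assumed to follow the aliases in the IFI default code; check ifi_aliases.h for the exact spelling before relying on them.

    /* Sketch only: intended to sit inside the main user routine of the IFI
     * default code.  rc_dig_in01 and Pwm1_green are assumed alias names.
     * The RC's internal pull-up holds an open input high, so a switch wired
     * between the signal pin and ground reads 0 when closed.
     */
    if (rc_dig_in01 == 0)          /* input pulled to ground                  */
    {
        Pwm1_green = 1;            /* turn an OI LED on as a visual indicator */
        printf("DIO 1 is grounded\r");
    }
    else                           /* input open: pull-up gives logic 1       */
    {
        Pwm1_green = 0;
    }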

RADIO LINK
18. Now you will connect to the RC wirelessly. The RC will eventually be mounted on
the robot that you will be controlling, away from the OI. Follow the directions on
page 5 of the Quick Start Guide. When you choose a team number, choose a unique
number and write it on the white board with your names next to it (this is just
temporary). Duplicate team numbers will cause interference. Note: Your team
number is set by changing the 12 bits on the OI with the MSB (Most Significant
Bit) on the far left.
19. After you connect wirelessly, turn off the power and disassemble your setup,
returning all parts to the state in which you received them.
The remainder of this lab is optional, although the first item is strongly
recommended if you have time. It involves downloading a program to the RC,
which you will be doing in the next lab.
1. The RC is able to capture analogue inputs. Connect a PWM extension cable to
analogue input 1 and the other end to the breadboard. Connect a variable resistor
such that you can get a range from 0 to 5 V. Unfortunately, neither the dashboard
nor the OI has any easy way to look at the signal you are changing. In order to do
this we have to download a file to the RC. Although you program in the C
language, the program must be compiled into .hex format before it can be
downloaded. We have made a program for you to download. Disconnect the
dashboard connection on the RC and connect it to the Program connection. Open
the IFI Loader program and download Lab1_Code.hex. After the program is
successfully downloaded, a text window pops up with three different items: X axis,
Y axis, and analogue input. As you change the potentiometer, the analogue input
should change between 0 and 255. This number is easily retrieved for use in
programming (a short sketch of how this is typically done appears after item 3
below). You can also move the joystick to see the X axis and Y axis numbers
change. IF YOU DOWNLOADED THE .HEX FILE, PLEASE REDOWNLOAD
THE DEFAULT CODE FRC_default.hex.
2. Try connecting the joystick to the other three ports and find what variables the
joystick movement and buttons correspond to on both the Dashboard and OI.
Also, it is interesting to see a new Dashboard interface if you switch the jumper
near the Dashboard to the other setting. If you do this, please switch it back after
you are done.
3. See what happens when you have 2 identical team numbers.
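For optional item 1 above (reading the potentiometer), the fragment below sketches how that number is typically pulled into your own program. Get_Analog_Value(), rc_ana_in01, and Pwm1_red are assumed names from the IFI default code, and the resolution of the returned value depends on how the A/D converter is configured, so verify both in user_routines.c and ifi_aliases.h before using this.

    /* Sketch only: read the potentiometer wired to analogue input 1 and act
     * on it.  All names are assumptions; the returned range (8-bit vs 10-bit)
     * depends on the A/D configuration, so check it before picking thresholds.
     */
    unsigned int pot;

    pot = Get_Analog_Value(rc_ana_in01);   /* read the A/D channel             */
    printf("pot = %u\r", pot);             /* show it in the IFI Loader window */

    if (pot > 200)                         /* arbitrary example threshold      */
    {
        Pwm1_red = 1;                      /* light an OI LED above threshold  */
    }
    else
    {
        Pwm1_red = 0;
    }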

Items Needed for the Lab


Laptop and AC Adapter
Robot Controller
Operator Interface
Both Radios
3 DB9 connectors (BLACK)
1 Tether cable (RED)
Joystick
Switch with soldered ends
12 V Battery or Power Supply
Backup Battery
Multi-meter (we only have 2 of them currently)
Motor
PWM Amplifier (Victor 884)
Variable Resistor Ask TA
Breadboard Wires (Communal) Ask TA
Oscilloscope
2 PWM extension cables (black, red, white)
Phillips screwdriver Ask TA
Small flat-head screwdriver Ask TA
1 black wire with a blue circle lug on the end.
1 dual black wire with Red/Blue circle lugs and Red/Blue U lugs on the ends

Massachusetts Institute of Technology


Department of Mechanical Engineering

2.12 Introduction to Robotics

Laboratory No.1
Section: ________________________ ,

Date: ____________________________

Students' Names: __________________________ and ____________________________

Read the reference guides.


Initial set up and wiring checked by the lab instructor.
11. Look at the PWM values on the Dashboard interface as you move the joystick
horizontally and vertically.
a. What PWM numbers (there are sixteen of them) on the dashboard do X and Y motion
correspond to?

b. What is the maximum value? Minimum? What is the value when the joystick is in the
neutral setting? How many bits is the signal?

c. What variables in the dashboard program do the buttons on the joystick correspond
to? What lights are illuminated on the OI when the joystick buttons are pressed?

d. As you move the joystick, what lights are illuminated on the OI and under what
circumstances?

12. Turn on the oscilloscope and observe the waveform. As you move the joystick, the
duty cycle should vary.
a. What is the frequency of the PWM signal?

b. What is the maximum duty cycle? Minimum? What about at the neutral position?
Is this what you expected? Any hypotheses why these are like this?

c. What is the voltage the PWM signal operates at?

d. What is the risetime? Falltime?

DC motor wiring checked by the lab instructor.


15. Using a multi-meter, look at the voltages that drive the motor.
a. What is the voltage range as you vary the joystick position?

b. Is this voltage linear with joystick position?

c. Does this correspond to the PWM signal you saw earlier? Why or
why not?

16. Comment on the switching between brake and coast modes. How long does the motor
take to stop in the Brake mode? How long for the coast mode?

Digital inputs checked.


Radio link checked.

Extra
Program down load successfully performed.
What happens when you have 2 identical team numbers?

Other comments and observations

Massachusetts Institute of Technology


Department of Mechanical Engineering

2.12 Introduction to Robotics


Laboratory No. 2
Section: ________________________ ,

Date: ____________________________

Student's Name: __________________________

Objective
The objectives of today's laboratory are:
Download a user C-code to the robot controller.
Get acquainted with the C language and how it affects the robot.
Create actions based on sensor inputs.

Lab Report
You are expected to turn in the questions/check sheet. Each group turns in one
report at the end of the lab session.
SAFETY
The hardware and motor that you will be using for 2.12 this semester use
considerable current. The Robot Controller uses up to 1.5 A continuously and the
motor draws up to 4 A. These currents are dangerous, so please use common sense
when operating the equipment. Most importantly, make sure all the power is
turned off when you are wiring.
1. Retrieve all the items listed on the last page. At the end of the lab please return all
items to where you got them.
2. Mount all components like the example in class and electrically wire all the
components like in Lab 1. This time there are two motors. Hook them up to PWM
outputs 1 and 5. There is no need to connect to the robot wirelessly. Find
something to lift the mobile robot's wheels off the ground or workbench.
3. Before you turn on your robot, have the lab instructor check your wiring.
4. Download the Default program to the RC by following the directions on page 22
of the Programming reference guide. The FRC_default.hex file should be on
your desktop. If not, you can download it from the Innovation First website.
5. Send commands from the joystick and verify that you can drive the motors in both
directions.
a. Set up the Dashboard program and connect it to the OI so that information is
updated on the computer screen. Hook up the motors to
PWM outputs 13 and 15. Notice how those channels are mixed with the X
and Y motions of the joystick. Also note that pushing the joystick forward

moves the wheels in opposite directions. This is because the motors are
mirrored from each other. You will be able to change this later in the lab.
b. Squeeze the two thumb buttons and see that the Relay 1 lights on the OI
are illuminated. Try this again as you assert and disassert Digital Inputs 1
and 2. Write down the relationship you find in the space below.

c. Hook your joystick into Port 3 and connect the motors to PWM Outputs 3
and 7. Try controlling the motors as you assert and disassert Digital
Outputs 5 and 6. Write down the relationship you find in the space below.

6. Open MPLAB IDE. Now, open the compiler and open the workspace
FrcCode.mcw. What C file do you primarily edit? More specifically, what
function do you primarily edit? If you do not have this program, talk to the
laboratory instructor. Examine the code and verify the relationships you
found in part 5. Do your relationships from part 5 correspond to the C language
logic?

7. Take note of the syntax they used when calling the functions and look to see what
the functions do. Also look at the syntax of the if-else logic towards the end of
this function. What do the Limit_Switch_Min, Limit_Switch_Max, and
Limit_Mix functions do?

8. Close the Current Workspace without saving your changes, and return the joystick
to Port 1.

9. Make a copy of the FrcCode folder and rename it so that you don't change the
default program. Open the workspace FrcCode.mcw in your new folder.
Immediately use "Save Project As" and choose a name different from the
default. You must "Make" (build) the current project to create a new .hex file.
Load your new .hex file. It should have the same name as your project. Verify that
you can download this to the RC. You should work solely with your new project
from this point on.
10. Find the printf commands. The information printed by these commands is the
data displayed in the text box after you use the IFI Loader. This is a handy
debugging tool. Make changes to this command and have it show different information.
11. Switch PWM outputs 1 through 4 to be the Y, X, Aux, and Wheel axes from your
joystick in port 1. Load the program again and prove that these work. You will
have to alter the printf statement or look in the dashboard program to see the differences.
12. (Optional) Find the standard place to put a macro and add #define
MY_NEUTRAL 127. Then find all the instances of 127 in user_routines.c.
13. Obtain a rare earth magnet from the laboratory instructor and move it in front of
the magnetic switch. You should be able to hear a faint click of the leads coming
together. This will be your mine detector. Electronically connect this to the RC
using a breadboard and a PWM wire. The RC already has a pull-up resistor built in,
which makes it easy to connect. If you are confused about this, ask the laboratory
instructor. This is important for most of the sensors we will be working with.
What is the approximate detection range of the magnetic switch? Does the behavior
change with the magnet orientation? If so, how?

Finally, what is the value of having an internal pull-up resistor? What would
happen if we did not have this pull-up resistor?

14. Using the printf C command or the lights on the OI, make a visual indicator
when you find a mine (close the magnet switch). Show the instructor when you
accomplish this.
15. Determine the electronic behavior of the whisker switches using a multi-meter.
Electronically connect the whisker switches to the robot using the breadboard and
a PWM cable. Ask the laboratory instructor if you are confused.
16. Using the printf C command or the lights on the OI, make a visual indicator
when you hit a wall (switch goes on). Show the instructor when you accomplish
this.

17. Find the variable resistor you used in Lab 1 and connect it to analog input 1. Find
the correct function to read its value and display it in the printf
statement. Show the instructor when you accomplish this.
18. This is the fun part. Be clever, use your imagination, and stop/start the motors
based on various sensor readings, joystick positions, and joystick buttons. You
can fix the mirror problem with the default PWM mixing functions. Maybe even
pretend that the robot is moving autonomously: if it hits a wall it must turn, go
backwards, or anything else you can think of. This is what you will continue to
program at home and in the extra laboratory hours. A rough starting-point sketch
is given after this step. Show the instructor when you accomplish this. He is very
interested in what you can do.
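As a rough starting point for steps 13 through 18, the fragment below sketches one way the pieces can fit together: the MY_NEUTRAL macro from step 12, the magnetic switch as a mine detector on one digital input, a whisker switch on another, and the joystick driving two PWM outputs. Every name used here (rc_dig_in01, rc_dig_in02, pwm01, pwm05, p1_y, Pwm1_green) is an assumption based on the IFI default code and one possible wiring; substitute the pins and aliases you actually used, and note that the reverse and mirror directions depend on how your motors are mounted.

    #define MY_NEUTRAL 127   /* neutral joystick / PWM value (see step 12) */

    /* Sketch only: assumed to live inside the main user routine of your copy
     * of user_routines.c.  All pin and variable names are assumptions.
     */
    if (rc_dig_in01 == 0)                 /* magnetic switch closed: mine found   */
    {
        Pwm1_green = 1;                   /* visual indicator on the OI           */
        printf("Mine detected!\r");
        pwm01 = MY_NEUTRAL;               /* stop both drive motors               */
        pwm05 = MY_NEUTRAL;
    }
    else if (rc_dig_in02 == 0)            /* whisker closed (assumed): hit a wall */
    {
        Pwm1_green = 0;
        pwm01 = MY_NEUTRAL - 50;          /* back up; flip the sign on whichever  */
        pwm05 = MY_NEUTRAL - 50;          /* channel is mirrored in your setup    */
    }
    else                                  /* nothing detected: drive normally     */
    {
        Pwm1_green = 0;
        pwm01 = p1_y;                     /* joystick Y drives one wheel          */
        pwm05 = 254 - p1_y;               /* mirror the other wheel (step 5a)     */
    }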

Items Needed for the Lab


1x Laptop with IFI Loader, the C-compiler, and Dashboard.
1x Pre-made Mobile Robot
Access to all the Innovation First Documents
1x Robot Controller
1x Operator Interface
1x Tether cable (RED)
1x Joystick (the second is optional)
1x Battery switch
1x 12 V Battery
1x Backup Battery
2x PWM Amplifier (Victor 884)
1x Battery Ground Connector
2x Motor Connectors
2x Whisker Switches
1x AC Adapter
4x PWM extension cables (you may need more of these later in the lab)
2x DB9 connectors (BLACK)

Massachusetts Institute of Technology


Department of Mechanical Engineering

2.12 Introduction to Robotics


Laboratory No. 3
Section: ________________________ ,

Date: ____________________________

Student's Name: __________________________

Objective
The objectives of today's laboratory are:
Implement basic functions of sensor-based control on the lab robot, and
Plan a strategy for performing the de-mining task described below.
Lab Report
You are expected to turn in the questions/check sheet. Each group turns in one report at the end of
the lab session.
SAFETY
The hardware and motor that you will be using for 2.12 this semester use considerable current. The
Robot Controller uses up to 1.5 A of current continuously, while the motor draws up to 4 A. These
currents are dangerous, so please use common sense when operating the equipment. Most
importantly, make sure all the power is turned off when you are wiring.

Basic robot functions:


Implement the following basic functions on your robot system. These are needed for
building your de-mining robot.
1. Timers are sometimes very important in programming if you need a delay or need
synchronization. The website has a tutorial to get a timer to work. You can read
the documentation below if you want to know what the program you will
download does.
http://www.ifirobotics.com/docs/timers_white_paper_2004-jan-14.pdf

The timers are built into the program, and the corresponding variables are available in your code.
There is one variable counting 25 millisecond intervals and one counting 1 second intervals. When
you download the code to the RC, you will see the variables that I have defined on the
printf screen. Also, digital outputs 3 and 4 oscillate at 25 ms and 1 second, respectively.

In order to use these timers, you have to program a bit. For example, since the
timers are always counting from the start of the program, if you want to count 2
seconds with the 25 ms timer, you would have to follow this pseudo-code (a C
sketch of this pattern, applied to item 2 below, is given after item 4):
a. Initialize a variable, say
TwoSecondTimer = Current Value of 25 ms timer;
b. Then, write a statement that will trigger when
(Current Value of 25 ms timer - TwoSecondTimer) > 80    (80 ticks x 25 ms = 2 s)
c. When that triggers, do something and reinitialize variables.
Now, practice with the timers to do the following items. This may be very difficult
depending on your background in programming.
2. Turn both motors backwards for approximately 2 seconds when detecting a wall
with one of the whisker sensors.
3. When detecting a mine with the magnetic proximity sensor, turn the buzzer (or an
LED) on for 2 seconds.
4. Optional. Make a zigzag motion by turning one of the two motors at a time and
alternating between the two.
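Referring back to the pseudo-code in item 1, here is a hedged C sketch of that pattern applied to item 2 (back both motors up for about 2 seconds after a whisker hit). The names ms25_timer (the free-running 25 ms tick counter), rc_dig_in02 (the whisker input), and pwm01/pwm05 (the two drive outputs) are assumptions; use whatever variables the downloaded timer code and your own wiring actually define.

    /* Sketch only: back up for roughly 2 s after a whisker hit, using the
     * 25 ms tick counter.  All variable names are assumptions.
     */
    static unsigned char backing_up   = 0;   /* currently reversing?              */
    static unsigned int  backup_start = 0;   /* tick count when reversing started */

    if (!backing_up && rc_dig_in02 == 0)     /* step (a): whisker closed          */
    {
        backing_up   = 1;
        backup_start = ms25_timer;           /* remember the current tick count   */
    }

    if (backing_up)
    {
        if ((unsigned int)(ms25_timer - backup_start) > 80)  /* (b): 80 x 25 ms = 2 s */
        {
            backing_up = 0;                  /* step (c): done, stop the motors   */
            pwm01 = 127;
            pwm05 = 127;
        }
        else
        {
            pwm01 = 127 - 50;                /* reverse; flip the sign on the     */
            pwm05 = 127 - 50;                /* mirrored channel if needed        */
        }
    }

The same pattern, with the buzzer output in place of the motors, handles item 3.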

De-Mining Robot Project Task Statement


Goal and Rules: Find the location of fake mines buried in a minefield. Identify as many
mines as possible within a given time, say 2 minutes. Every time a fake mine is found,
the robot must stop at that position and turn the buzzer on. If the mine is a newly found
one, a credit point is given to the project team. If the mine has already been found by the
robot, no credit is given. The robot resumes searching for mines when a joystick command
is given.
The Minefield: About 10 magnetic fake mines are buried in a 12 ft x 12 ft minefield, as
shown in the figure below. The minefield is surrounded by a 6 in. wall with a cushion. The
four corners of the square minefield are rounded (10 in. radius) so that the robot can
easily turn when hitting a corner. A start point is marked in the minefield. Each project
team places its robot at the start point for the mine search.

(Figure: layout of the 12 ft x 12 ft fake minefield, showing the surrounding wall and the marked Start point.)
