
IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 62, NO. 4, APRIL 2015

Human-Like Motion Generation and Control for Humanoid's Dual Arm Object Manipulation

Sung Yul Shin and ChangHwan Kim

Abstract—The robot manipulation in human environment is a challenging issue because the human environment is complex, dynamic, unstructured, and difficult to perceive reliably. In order to implement promising robot applications in our daily lives, robots need to perform manipulation tasks within human environment. Particularly for a humanoid robot, the manipulability of objects is essential to assist humans in human environment. This paper presents a method for manipulating an object with both arms of a humanoid robot. We focus on the generation of human-like movements by using human motion capture data. Then, a control method based on the virtual dynamics model is proposed to control both the motion and the force under a uniform control system. This method empowers the robot to perform the object manipulation task including reaching, grasping, and moving an object in sequence. The proposed algorithm is implemented on a humanoid robot with an independent joint controller at each motor; its performance is demonstrated by manipulating an object with both arms.

Index Terms—Dexterous manipulation, grasping, human-like motion generation, humanoid robot, virtual dynamics model (VDM).

Manuscript received February 19, 2014; revised June 9, 2014; accepted July 23, 2014. Date of publication August 28, 2014; date of current version March 6, 2015. This work was supported in part by the Korea Institute of Science and Technology under Project 2E24721. (Corresponding author: ChangHwan Kim.)
S. Y. Shin is with the Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX 78712 USA (e-mail: syshin0228@utexas.edu; syshin0228@gmail.com).
C. Kim is with the Center for Bionics, Korea Institute of Science and Technology, Seoul 130-650, Korea (e-mail: ckim@kist.re.kr).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIE.2014.2353017

I. INTRODUCTION

VERSATILE and flexible manipulation skills of robot systems are necessary to perform complicated tasks in the way a human handles tools and objects [1]. Particularly for a humanoid robot, manipulability in human environments and circumstances, such as handling tools, objects, and equipment, is still a challenging issue since the human environment is complex, dynamic, and difficult to perceive and control reliably [1]–[6]. Human-like motion is one of the significant issues for a humanoid robot performing dexterous manipulation tasks in human environments [7], [8]. For a personal service robot, people may also instinctively and empirically perceive a human-like motion-based manipulation as comfortable, cooperative, and friendly [8]. People usually manipulate a number of objects by using both arms for a certain task, keeping proper forces between the objects and the hands. For a humanoid robot to perform similar tasks, this paper deals with the generation and control of human-like arm motions and the interaction between the robot hand and environments.

A number of studies on the kinematic and dynamic aspects of human arm movements have been conducted [9], [10]. Flash and Hogan introduced a mathematical model based on minimal jerk for an unconstrained point-to-point arm movement and experimentally showed the bell-shaped velocity profiles of the curved motion computed from the model [9]. Arimoto and Sekimoto employed the bell-shaped velocity profile for the point-to-point arm movement to verify the human-likeness of their controller for a robot arm [10]. These models may not be good enough to deal with the more complicated arm movements required for such dexterous tasks as writing letters on a white board, rotating a screw driver, moving an object with both arms, and so on.

To our knowledge, human-like movements cannot be characterized just by bell-shaped velocity profiles, but depend on the characteristics and the purposes of given tasks. A possible approach to produce human-like motions or behaviors is, first, to analyze a given task and solve it with robot programming by a human [11], [12]. To reduce those human efforts and perform more complex tasks, another approach based on learning theories has been studied by many researchers [12]–[16]. Robot programming by demonstration (PbD), which is also referred to as imitation learning, appeared to automate tedious manual programming for manipulating robots [14]. Ijspeert et al. [15] designed a motor representation based on dynamical systems for encoding movements and replaying them in various conditions. Lim et al. [16] employed principal component analysis (PCA) and a dynamics-based optimization algorithm to generate torque-minimized human-like motions. However, the dynamics-based optimization algorithm was merely applied offline, and the inverse kinematics problem has to be considered to obtain the final arm posture, which may not look like a human arm posture.

We herein introduce a motion generation method that preserves the human-like characteristics of a given task for a robotic arm. Our assumption is that the characteristics of a specific task are implicitly absorbed within the human demonstrations; therefore, we use human motion capture data for generating the motion of the robot. The human-like arm motion can be characterized by the movements of the hand and elbow. In this paper, we define the criteria of human-likeness as the trajectories of the hand and the trajectories of the elbow generated by the elbow elevation angle (EEA) [7]. The human-like characteristics of the arm motion are evaluated by comparing the criteria of human-likeness of the generated motion with those of the captured human motion.

0278-0046 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

Fig. 1. Acquisition of reaching motion data. (a) Human motion capturing. (b) (red crosses) Center positions of the target object. (c) Sample
reaching motion data. (d) Sample reaching motion data in the y-axis of left hand (hand position is defined as the center point between the thumb
and pinky positions).

The approaches in [12]–[16] are exclusively based on motion trajectory generation at the kinematics level. These approaches still require an additional force controller to deal with the interaction between a robot and an environment. The impedance behavior for dual-arm manipulation was studied by Wimbock et al. [17], [18] with a humanoid robot. They demonstrated object manipulation with both robot arms by implementing the impedance behaviors in the controller. Ozawa et al. [19] and Yoshida et al. [20] proposed a method to grasp an object with dual-finger robots. However, these algorithms are based on a torque control method where sensors measuring the torque (or current) are necessary to implement the approaches in [17]–[20].

We proposed a control method based on the virtual dynamics model (VDM) [21]. The VDM is an ideal dynamic model that exists in the simulation space, neglecting nonlinear effects such as friction and uncertainty terms and compensating gravitational effects [21]. The target trajectory of a motion (or a force) to be controlled is filtered through the VDM-based controller before being executed by the robot motor controller. Using the VDM-based controller, the target motion for a robot arm generated in a Cartesian space can be directly controlled without solving inverse kinematics and while avoiding geometrical singularities. Another advantage of the method is that this approach does not need to sense the torque (or current) at each joint motor, but only requires the encoders to acquire the current joint positions and the force and torque (FT) sensors (attached to the wrists) to measure the external forces applied to the robot arms. Such measured forces and torques are kept constant at a certain level for the robot arms to physically interact with an environment or an object, such as when holding an object.

In this paper, we present a control method for manipulating an object with both arms of a humanoid robot by reducing human programming efforts. The human-like motion generation for the dual-arm object manipulation and the control method based on VDM are presented in Sections II and III, respectively. The experimental results and comparisons with other approaches are described in Section IV, followed by discussion and conclusion.

II. HUMAN-LIKE ARM MOTION GENERATION

A. Acquisition of Human Motion Data

The human-like movement is generated based on captured human motion data. The human motion data were captured by Xsens's Moven motion capturing suit (http://www.xsens.com), which is a cameraless inertial motion capture system that provides 3-D kinematics of the full human body. The initial step for constructing motion data is to acquire multiple sets of human motions by using the motion capture system. The captured motion data are composed of the discrete time trajectories of virtual points of interest on the human body. After performing geometrical scaling to resolve the length differences between the human and robot arms [22], the scaled motion is stored. Each motion primitive is composed of the trajectories of ten virtual points of interest, which are defined on the upper body of the human: three points on the hand (thumb, middle, and pinky), one point on the elbow, and one point on the shoulder for both arms.

1) Object Reaching Motion Data: First, the object reaching motions with both arms are captured from a human subject. It is assumed that an object is placed in front of the human subject's chest, as shown in Fig. 1(a) and (b). In Fig. 1(b), the red crosses in front of the human subject's chest are assumed to be
the center positions of the target object (3 × 3 × 3 = 27 positions). The human subject is asked to naturally reach to the positions while wearing the motion capturing suit for recording the reaching motion data. The captured motion data are segmented and stored into 27 sets of vector time series data at 120 Hz and used as the training and testing data for the experiments. Each motion data set is timescaled to normalize the number of frames so that the data can be arranged in matrix form. One of the stored sample reaching motion data sets in 3-D space and the sample data of the left hand motion in the y-axis are shown in Fig. 1(c) and (d), respectively (hand position is defined as the center point between the thumb and pinky positions).

Fig. 2. Acquisition of object moving motion data. (a) Sample object moving data (object position is defined as the center point between the left and right hands). (b) Sample object moving data in the z-axis.

2) Object Moving Motion Data: Second, the object moving motions are captured from a human subject. The goal for the object moving task is set to move on the ∞ profile of movement while holding an object (the same ball tested with the robot shown in Fig. 6) with both arms; the human subject kept holding the object in order to maintain the distance between both hands. The human subject is asked to move on the profile as naturally as possible while wearing the motion capturing suit for recording the moving motion data. Similar to the reaching motion data acquisition, the captured moving data are segmented and stored into 20 sets of vector time series data at 120 Hz, and each moving data set is timescaled to normalize the number of frames. Five sample sets of the object trajectories in 3-D space and the moving trajectories in the z-axis are shown in Fig. 2(a) and (b), respectively (the object position is defined as the center point between the left and right hands).

B. Eigenmotion: Reaching Motion Generation

Here, we explain the motion generation method for reaching an object by using the reaching motion data obtained in Section II-A1. Given arbitrary initial and final points, numerous paths may exist between those two points. Among the paths, our goal is to find a path that preserves the features of the human motion data. Herein, our assumption is that the human-like features are implicitly absorbed within the motion data since they are acquired by human motion capturing. PCA is a way of identifying patterns and expressing the similarities and differences of the data [16]. We employ PCA to extract the principal components from the motion data, and they are used to generate a new trajectory that connects the given initial and final positions. The human-like motion characteristics may still be inherent within the new trajectory since the principal components involve the human movement features.

Fig. 3. Example of the reaching motion generated by the eigenmotion and EEA (the trajectories of thumb, middle, pinky, elbow, and shoulder in 3-D space).

Among all ten virtual points of interest, we use eight virtual points of interest at the thumb, middle, and pinky for both hands and the shoulder for both arms; both elbow positions are excluded since they are obtained by using the EEA [7] (see Fig. 3). Thus, the number of parameters (NOP) is 24 (8 × 3) since each position contains three Cartesian coordinates. The motion data matrix of each parameter, i.e., $P_i \in \mathbb{R}^{f \times N}$ ($i = 1, \ldots, \mathrm{NOP}$), consists of the vector time series data, where $f$ is the number of motion frames, and $N$ is the number of motion data sets.

The principal components of the motion data matrix are extracted with the sample mean and the sample covariance matrix from the motion data. The principal components are formed as a set of eigenvectors with corresponding eigenvalues of the sample covariance matrix; a higher eigenvalue indicates that its eigenvector contains more dominant characteristics of the data. Thereby, the eigenvector that corresponds to the highest eigenvalue is the most dominant principal component [16]. If the number of motion data sets is less than the number of motion frames ($f > N$), only $N - 1$ meaningful eigenvectors will exist, whereas the remaining eigenvectors will have corresponding eigenvalues of zero [23]. Therefore, in order to obtain $N$ meaningful eigenvectors (principal components), the number of motion data sets should be larger than $N + 1$.
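The PCA step above can be sketched in a few lines. This is an illustrative reconstruction in NumPy with synthetic trial data, not the authors' code; with $f = 100$ frames and $N = 8$ trials, centering leaves at most $N - 1 = 7$ nonzero eigenvalues of the sample covariance:

```python
import numpy as np

def principal_motion_components(P):
    """Extract time-domain principal components from a motion data
    matrix P of shape (f, N): f time frames, N captured trials.

    Returns the mean trajectory (f,), the eigenvectors of the sample
    covariance sorted by descending eigenvalue, and the eigenvalues.
    """
    f, N = P.shape
    mean_traj = P.mean(axis=1)                 # sample mean over trials
    X = P - mean_traj[:, None]                 # centered data, (f, N)
    cov = X @ X.T / (N - 1)                    # sample covariance, (f, f)
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending order
    order = np.argsort(eigvals)[::-1]          # reorder to descending
    return mean_traj, eigvecs[:, order], eigvals[order]

# Synthetic example: f = 100 frames, N = 8 trials (as in the paper's
# reaching experiment); with f > N, at most N - 1 eigenvalues survive.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
trials = np.stack([np.sin(np.pi * t) + 0.05 * rng.standard_normal(100)
                   for _ in range(8)], axis=1)
mean_traj, pcs, lams = principal_motion_components(trials)
meaningful = int(np.sum(lams > 1e-10 * lams[0]))
print(meaningful)  # at most N - 1 = 7
```

The descending sort makes `pcs[:, 0]` the most dominant component, matching the convention used for the eigenmotion basis below.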
To calculate a new trajectory, we formulate a function as the linear combination of the principal components and the mean of the human trajectories, the so-called eigenmotion function. The eigenmotion function is given by

$$p_i(t) = p_{i,\mathrm{mean}}(t) + \sum_{j=1}^{\mathrm{NPC}} \alpha_{i,j}\,\phi_{i,j}(t), \qquad i = 1, \ldots, \mathrm{NOP} \tag{1}$$

where $p_i(t)$ is the $i$th new trajectory, $p_{i,\mathrm{mean}}(t)$ is the mean of the $i$th trajectories from the motion data matrix $P_i$, $\phi_{i,j}(t)$ ($i = 1, \ldots, \mathrm{NOP}$, $j = 1, \ldots, \mathrm{NPC}$) is the $j$th most dominant principal component of the $i$th trajectory, $\alpha_{i,j}$ are the unknown coefficients, and NPC is the number of principal components used. In this paper, we use up to the fourth most dominant principal components as the basis functions of the linear combination (NPC = 4), because four principal components are sufficient to restore over 99% of the motion data, as evident in Table I [16]. Thus, the number of motion data sets must satisfy $N > 5$ since we use four principal components in this work (we used $N = 8$ for the training data in the reaching experiment).

TABLE I
AVERAGE PERCENTAGE EXPLAINED BY PRINCIPAL COMPONENTS OF POINTS OF INTEREST

When the boundary conditions for the $i$th trajectory at the initial and final time points are given as $p_i(t_0) = p_{i,0}$ and $p_i(t_f) = p_{i,f}$, the unknown coefficients in (1) can be calculated by solving the following optimization problem. We assume that the mean of the human trajectories involves human-like characteristics since it is obtained from the human movements. Therefore, we formulate an optimization problem that finds a trajectory closest to the mean of the human trajectories and satisfying the given initial (the initial job position) and final (the final position for grasping) time points. Using the boundary conditions at the initial and final time points, the unknown coefficients can be obtained with the Lagrangian multiplier optimization method. From (1), the optimization problem for calculating the $i$th trajectory ($i = 1, \ldots, \mathrm{NOP}$) is given in the following form:

$$\min_{s_i}\; J_i(s_i) = \frac{1}{2} \sum_{k=0}^{f} \left\{p_i(s_i, t_k) - p_{i,\mathrm{mean}}(t_k)\right\}^2 = \frac{1}{2}\, s_i^T A_i^T A_i s_i \tag{2}$$

$$\text{subject to } d_i = C_i s_i \tag{3}$$

$$A_i = \begin{bmatrix} \phi_{i,1}(t_0) & \phi_{i,2}(t_0) & \phi_{i,3}(t_0) & \phi_{i,4}(t_0) \\ \vdots & \vdots & \vdots & \vdots \\ \phi_{i,1}(t_f) & \phi_{i,2}(t_f) & \phi_{i,3}(t_f) & \phi_{i,4}(t_f) \end{bmatrix}, \qquad s_i = \begin{bmatrix} \alpha_{i,1} \\ \alpha_{i,2} \\ \alpha_{i,3} \\ \alpha_{i,4} \end{bmatrix} \tag{4}$$

$$d_i = \begin{bmatrix} p_{i,0} - p_{i,\mathrm{mean}}(t_0) \\ p_{i,f} - p_{i,\mathrm{mean}}(t_f) \end{bmatrix} \tag{5}$$

$$C_i = \begin{bmatrix} \phi_{i,1}(t_0) & \phi_{i,2}(t_0) & \phi_{i,3}(t_0) & \phi_{i,4}(t_0) \\ \phi_{i,1}(t_f) & \phi_{i,2}(t_f) & \phi_{i,3}(t_f) & \phi_{i,4}(t_f) \end{bmatrix} \tag{6}$$

where $A_i$ is the matrix of principal components, $s_i$ is the unknown coefficient vector, and $d_i$ and $C_i$ are the boundary conditions of the optimization problem. The objective function in (2) and the matrices in (4) are obtained by rearranging (1) in matrix form, which is determined to minimize the difference between the new trajectory $p_i(s_i, t)$ and the mean trajectory $p_{i,\mathrm{mean}}(t)$. Equations (3), (5), and (6) are the boundary conditions, which are obtained by substituting the given initial and final time points, i.e., $p_i(t_0) = p_{i,0}$ and $p_i(t_f) = p_{i,f}$, into (1).

Then, the Lagrangian is

$$L(s_i, \lambda_i) = \frac{1}{2}\, s_i^T A_i^T A_i s_i + \lambda_i^T (C_i s_i - d_i) \tag{7}$$

where $\lambda_i$ is the Lagrangian multiplier vector. With the optimality conditions, the unknown coefficient vector is obtained in the following form:

$$s_i = \left(A_i^T A_i\right)^{-1} C_i^T \left[C_i \left(A_i^T A_i\right)^{-1} C_i^T\right]^{-1} d_i. \tag{8}$$

The optimal solution of (8) yields the new trajectory $p_i(s_i, t)$, which satisfies the following two conditions.
• The new trajectory is closest to the mean trajectory $p_{i,\mathrm{mean}}(t)$, from the objective function (2).
• The new trajectory connects the given initial and final time points, from the boundary condition (3).

Based on the obtained trajectories of the hand and shoulder, the elbow trajectory is calculated with the EEA from [7], and they are applied as the reference input of the VDM controller to conduct the reaching task [21].

C. Motion Generation for Object Moving With Dual Arms

Here, we explain the motion generation method for moving an object by using the object moving motion data obtained in Section II-A2. Basically, the eigenmotion and the optimization in (1)–(8) are similarly implemented for the object moving motion generation. However, while the robot merely moves in free space and does not have any contact or interaction with the environment during the reaching task, the robot has to interact with the object continually in order to maintain the holding state during the object moving task; otherwise, the robot will drop the object or just move in free space without holding it. Thus, in the object moving task, not only the movement of the robot arms but also the constraint for holding the object should be simultaneously considered to keep the holding state while moving the object.

In order to solve this problem, the parameter transformation, which maps between the hand positions (thumb, middle, and pinky points for both hands) and the object pose (object position and orientation), is executed on the object moving motion data before implementing (1)–(8).
If the object moving motion data are directly applied to (1)–(8) without the parameter transformation, the robot will not be able to hold the object (apply force on the object), but will just perform the moving motion in free space. We make a few assumptions in the object moving task. First, the object position is defined as the center point between the left and right hands (each hand position is defined as the center point between the thumb and pinky positions, denoted by $p_l$ and $p_r$ in Fig. 4). Second, the left palm is parallel to the right palm (the palm is defined as the plane through the thumb, middle, and pinky points). Third, the orientation of both hands is the same as that of the object. Based on these assumptions, the parameters of the hand positions in the object moving motion data are transformed into the parameters of the object pose, as shown in Fig. 4, where $r_o$ is the radius of the object; $p_o \in \mathbb{R}^3$ is the position vector of the object; and $\phi$, $\theta$, and $\psi$ are the roll, pitch, and yaw angles for the object orientation, respectively. Instead of the eight virtual points of interest used in the reaching motion generation (thumb, middle, and pinky for both hands and the shoulder for both arms), the transformed parameters are implemented with (1)–(8) to generate the object moving motion. Thereby, NOP for the transformed parameters becomes 12: 3 for the position of the object, 3 for the orientation of the object, and 6 for the positions of both shoulders (similar to the reaching motion generation, both elbow positions are obtained by using the EEA [7]).

Fig. 4. Parameter transformation, which maps between the hand position (thumb, middle, and pinky for both hands) and the object pose (object position and orientation).

After solving the eigenmotion by executing (1)–(8) to obtain the new trajectories of the transformed parameters, a parameter retransformation is performed to reconstruct the positions of both hands (thumb, middle, and pinky). The distance between both hands can be determined by adjusting the radius of the object $r_o$ (the appropriate choice of $r_o$ enables the robot to apply force on the object while moving it [21]). Consequently, based on the obtained trajectories of the hand and shoulder, the elbow trajectory is calculated with the EEA [7], and they are applied as the reference input of the VDM controller to conduct the object moving task [21].

III. CONTROL BASED ON VDM

Many researchers have developed various control schemes for robot manipulation [24]–[28]. Hogan and Buerger introduced the mechanical impedance and admittance for controlling the robot [27]. Mechanical impedance is the dynamic operator that determines an output force from an input behavioral element [27]. Conversely, mechanical admittance is the dynamic operator that determines an output behavioral element from an input force [27]. Our method uses a similar concept of mechanical impedance and admittance to control the motion and force by using virtual spring-damper elements [21]. While the main focus in [21] was to perform grasping tasks with several types of objects and to test the compliance of the control system, in this paper, we concentrate more on applying the motion generation to the VDM controller to perform the sequential object manipulation task including reaching, grasping, and moving. Here, we briefly introduce the control method based on VDM, which is implemented to perform the object manipulation tasks with the dual arms of the humanoid robot.

Fig. 5. Control of the VDM with virtual spring-damper force elements (three points at the hand, one point at the elbow, and one point at the shoulder for both arms) for the object manipulation task. (Right) Spring-like force applied on the object with the virtual spring-damper elements when both hands contact the object.

A. Control Based on VDM

The VDM is not an estimated dynamic model of the actual robot but a nominal dynamic model existing in the simulation space, which has simplified dynamic properties of the actual robot [21]. Basically, the low-level controller for the VDM method is based on the independent joint controller at each motor, which has an intrinsically stiff characteristic against external forces, because the controller merely acts to maintain the target joint position at all times [29]. To enable the system to react to external forces (e.g., compliance control), the VDM is used to filter the joint trajectories for the low-level control input, determined by the virtual spring-damper forces (see Fig. 5) and the external forces obtained by the FT sensor attached to the wrist of the robot (see Fig. 7) [21].

One systematic advantage of the VDM control method is that it does not need sensors to measure the torque (or current) at each motor, but only requires the encoders to acquire the current joint positions and the FT sensors to measure the external forces. Another algorithmic advantage is that this method does not need accurate system identification of the actual robot, because the behavioral performance does not depend on the dynamic parameters of the actual system (due to our low-level independent joint controller), but depends on the parameters of the VDM. This means that the actual robot can be controlled without considering nonlinear effects such as gravity, friction, and uncertainties of the actual model, since the parameters of the VDM can be determined and simplified by the user [21].
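For illustration, the hand-to-object parameter transformation assumed in Section II-C (object position at the midpoint of the two hand positions, orientation taken from the palm plane, palms parallel) can be sketched as follows; the point names and axis conventions here are hypothetical, not the paper's implementation:

```python
import numpy as np

def object_pose_from_hands(thumb_l, middle_l, pinky_l, thumb_r, pinky_r):
    """Map captured hand points to an object pose (position + RPY).

    Assumptions from Section II-C: each hand position is the midpoint
    of its thumb and pinky, the object position is the midpoint of the
    two hand positions, and the object orientation is taken from the
    left palm plane (the palms are assumed parallel).
    """
    p_l = 0.5 * (thumb_l + pinky_l)            # left hand position
    p_r = 0.5 * (thumb_r + pinky_r)            # right hand position
    p_o = 0.5 * (p_l + p_r)                    # object position

    # Build an orthonormal frame from the left palm plane.
    x = middle_l - p_l
    x = x / np.linalg.norm(x)                  # in-plane axis
    n = np.cross(middle_l - thumb_l, pinky_l - thumb_l)
    n = n / np.linalg.norm(n)                  # palm normal
    y = np.cross(n, x)                         # completes right-handed frame
    R = np.column_stack([x, y, n])             # rotation matrix

    # Roll, pitch, yaw (Z-Y-X convention) from the rotation matrix.
    pitch = -np.arcsin(R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return p_o, np.array([roll, pitch, yaw])

# Hypothetical example: palms lying in the XY plane, hands 0.2 m apart.
thumb_l = np.array([0.0, -0.05, 0.0]); pinky_l = np.array([0.0, 0.05, 0.0])
middle_l = np.array([0.1, 0.0, 0.0])
thumb_r = np.array([0.0, -0.05, 0.2]); pinky_r = np.array([0.0, 0.05, 0.2])
p_o, rpy = object_pose_from_hands(thumb_l, middle_l, pinky_l, thumb_r, pinky_r)
print(p_o)  # midpoint between the two hand positions
```

The inverse retransformation would place the hand points back at distance $r_o$ from $p_o$ along the object frame, which is how the grasping force is modulated by shrinking $r_o$.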
As shown in Fig. 5, we apply translational virtual spring-damper forces on three points for each hand, denoted as thumb, middle, and pinky. These three points are defined to avoid the singularity of the orientation. Likewise, for the motion expression of the waist rotation and for the elbow posture, two other translational virtual spring-damper forces are applied on both the shoulder and elbow positions; therefore, a total of ten virtual spring-damper forces ($10 \times 3 = 30$ since each point contains three Cartesian coordinates) is applied to control the dual arms of the VDM [21], [22]. These ten virtual spring-dampers connect the ten aforementioned points and their target trajectories obtained by the eigenmotion from Section II.

Fig. 6. Object manipulation task, including reaching, grasping, and moving, by the humanoid robot Mahru from KIST.

The movement of the VDM is represented by the following dynamic equations of motion:

$$M_{\mathrm{VDM}}(q)\ddot{q} + C_{\mathrm{VDM}}(q, \dot{q})\dot{q} = \tau_c + \tau_{ex} \tag{9}$$

where $q \in \mathbb{R}^m$ is the joint angle vector, $M_{\mathrm{VDM}}(q) \in \mathbb{R}^{m \times m}$ is the symmetric and positive definite inertia matrix, and $C_{\mathrm{VDM}}(q, \dot{q})\dot{q} \in \mathbb{R}^m$ represents the Coriolis and centrifugal forces. $\tau_c \in \mathbb{R}^m$ is the control torque vector, and $\tau_{ex} \in \mathbb{R}^m$ contains the external generalized forces acting on the robot. Note that the gravity term and nonlinear effects, including friction and uncertainty terms, are not modeled in (9).

Fig. 7. Kinematic structure of the humanoid robot Mahru from KIST. The position of the hand is defined as the center point on the hand grasper, but the fingers are not actuated in this experiment.

The control torque vector for the motion control and the external generalized force vector for the physical interaction with the environment are represented by

$$\tau_c = \sum_{j=1}^{10} J_j^T(q)\, F_{vsd,j} - C_0 \dot{q} + \tau_v \tag{10}$$

$$F_{vsd,j} = k_p \left(p_{d,j} - p_{c,j}\right) - k_v\, \dot{p}_{c,j} \tag{11}$$

$$\tau_{ex} = \alpha \sum_{k=1}^{2} J_k^T(q)\, F_{FT,k} \tag{12}$$

where $F_{vsd,j}$ is the virtual spring-damper force applied on the $j$th point, where the $j$th virtual spring-damper is connected; $j$ can be the thumb, middle, pinky, elbow, or shoulder of each arm ($j$ is 10 for both arms). The matrix $J_j(q) \in \mathbb{R}^{6 \times m}$ is the Jacobian matrix of the $j$th point; $C_0 \in \mathbb{R}^{m \times m}$ and $\tau_v \in \mathbb{R}^m$ are the damping coefficient matrix and the limit reaction torque for the joint constraints, respectively [21], [22]; $k_p$ and $k_v$ denote the spring and damping coefficients, respectively [21], [22]; $p_{d,j} \in \mathbb{R}^6$ and $p_{c,j} \in \mathbb{R}^6$ are the $j$th target and current position vectors (with zero orientations), respectively. $F_{FT,k} \in \mathbb{R}^6$ is the measured force and moment from the FT sensor, which is attached to the wrist; $k$ can be the left or the right wrist; and $\alpha$ $(0 < \alpha < 1)$ is the scalar force scaling coefficient, which can be used for adjusting the sensitivity of the measured force [21].

The current position of the robot follows the target position, which is obtained by the eigenmotion in Section II (object reaching or moving motion). The current position vector is calculated by solving forward kinematics with the joint angle vector obtained from the encoders. By solving (9)–(12) with the fourth-order Runge–Kutta method, the acceleration, velocity, and position trajectories of the joints are obtained, and they are used to control the humanoid robot, which is composed of independent joint controllers.

By using the control method based on VDM, both the motion and the force can be simultaneously controlled under the uniform control system. The proposed algorithm is implemented to perform the object manipulation task with the dual arms of the actual humanoid robot. More details about the VDM control algorithm can be found in [21].

B. Dual-Arm Object Manipulation

Here, the method for manipulating an object with both arms of a humanoid robot is presented by integrating the proposed motion generation with the VDM control method [21]. As shown in Fig. 6, the object manipulation task including reaching, grasping, and moving an object is performed in sequence with a humanoid robot. First, for the reaching task, the reaching trajectories generated by using the eigenmotion from Section II-B are used as the reference input of the VDM controller. Second, after the reaching motion, the object grasping task is performed by applying force on the object (the force is applied by reducing the radius of the object $r_o$ defined in Section II-C) [21]. We made a few assumptions for the object grasping task: 1) after the reaching motion, the object is placed between both hands, and 2) situations such as dropping or slipping the object are not considered since the contact surface of the actual robot hand is sufficient to withstand the moments. More details of the object grasping (involving force determination for different types of objects) are described in [21]. Finally, after the grasping, the object moving task is performed while holding the object. In this paper, the goal of the object moving task is set to move on the ∞ profile of movement.
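A minimal single-joint sketch of the VDM update (9)–(12) with fourth-order Runge–Kutta integration is given below. The gains, step size, and external force are illustrative values rather than the paper's tuning, and the Jacobian reduces to identity in one dimension:

```python
import numpy as np

# 1-DOF sketch of the VDM filter: a virtual spring-damper pulls the
# virtual model toward the target position, and the resulting joint
# trajectory is what the low-level independent joint controller tracks.
M, C0 = 1.0, 0.5          # virtual inertia and joint damping (scalars here)
kp, kv = 100.0, 20.0      # virtual spring and damping coefficients
alpha, f_ext = 0.5, 0.0   # force scaling and measured FT-sensor force

def qddot(q, qd, p_d):
    f_vsd = kp * (p_d - q) - kv * qd          # eq. (11), with J = I
    tau_c = f_vsd - C0 * qd                   # eq. (10), tau_v omitted
    tau_ex = alpha * f_ext                    # eq. (12)
    return (tau_c + tau_ex) / M               # eq. (9), no gravity term

def rk4_step(q, qd, p_d, h):
    """Fourth-order Runge-Kutta on the state (q, qd)."""
    def f(state):
        return np.array([state[1], qddot(state[0], state[1], p_d)])
    s = np.array([q, qd])
    k1 = f(s)
    k2 = f(s + 0.5 * h * k1)
    k3 = f(s + 0.5 * h * k2)
    k4 = f(s + h * k3)
    s = s + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return s[0], s[1]

q, qd, h = 0.0, 0.0, 0.002
for _ in range(2000):                         # 4 s of simulated filtering
    q, qd = rk4_step(q, qd, 1.0, h)
print(round(q, 3))  # settles near the target position 1.0
```

With a nonzero `f_ext`, the same update drifts the filtered trajectory in the direction of the measured force, which is how the compliant holding behavior arises without torque sensing at the joints.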
Fig. 8. Simulation results of hand trajectories during the reaching motion: (red solid line) EM, (magenta dotted line) EM-VDM, (blue dashed line) VSDH, and (black dash-dot line) HM. EM is the eigenmotion trajectory, and EM-VDM is the eigenmotion trajectory filtered by the VDM controller. (a) Reaching motion in 3-D space. (b)-(d) Time-normalized hand position and velocity trajectories in the x-, y-, and z-axes, respectively.

In the same way as in the reaching task, the object moving trajectories obtained in Section II-C are used as the reference input of the VDM controller. The simulation and experimental results of the object manipulation with the actual humanoid robot are demonstrated in the subsequent section.

IV. SIMULATION AND EXPERIMENTAL RESULTS

The simulations and experiments performed by adopting the method are demonstrated here. In the experiments, the humanoid robot Mahru, which was built at the Korea Institute of Science and Technology (KIST), was used as a test bed. The upper body of Mahru (except the neck) is composed of a total of 15 DOF: 7 DOF for each arm and 1 DOF for the waist (see Fig. 7). The simulations are performed with the model of Mahru, programmed with kinematic and dynamic formulations based on C++ and OpenGL.

This section is composed of (A) a comparison of human-like hand trajectories, (B) a comparison of human-like elbow trajectories, and (C) object manipulation with the humanoid robot.

A. Comparison of Human-Like Hand Trajectories

In order to evaluate the human-like characteristics of the hand motion in the reaching task, we compare the trajectories of four groups: 1) the eigenmotion trajectory (EM: red solid line); 2) the eigenmotion trajectory filtered by the VDM controller (EM-VDM: magenta dotted line); 3) the trajectory connected with the virtual spring-damper hypothesis (VSDH) from [10] (VSDH: blue dashed line); and 4) the captured human trajectory (HM: black dash-dot line).

TABLE II
RMS ERROR OF LEFT HAND TRAJECTORY

From the 27 sets of captured reaching motion data obtained in Section II-A1, the trajectories toward the eight target positions in the corners are selected and used as the training data, whereas the trajectory toward the center position is used as the testing data [see Fig. 1(b)]. The trajectories of both hands of the four groups in 3-D space are shown in Fig. 8(a), and the left hand position and velocity trajectories in the x-, y-, and z-axis directions are shown in Fig. 8(b)-(d), respectively. For the quantitative evaluation, the root-mean-square (RMS) errors between HM and the other three trajectories are given in Table II.

B. Comparison of Human-Like Elbow Trajectories

Here, we compare the human-like elbow trajectories, which are calculated by using the EEA [7]. The comparisons are

Fig. 9. Simulation results of elbow trajectories during the reaching motion: (red solid line) EEA, (magenta dotted line) EEA-VDM, (blue dashed line) VSDH, and (black dash-dot line) HM. EEA is the elbow trajectory obtained by EEA, and EEA-VDM is the elbow trajectory obtained by EEA and filtered by the VDM controller. (a) Reaching motion in 3-D space. (b)-(d) Time-normalized elbow position and velocity trajectories in the x-, y-, and z-axes, respectively.
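The comparisons in Figs. 8 and 9 repeatedly invoke the bell-shaped velocity profile, which comes from the minimum-jerk model of Flash and Hogan [9]: a point-to-point move from x0 to xf over duration T follows x(t) = x0 + (xf - x0)(10t^3 - 15t^4 + 6t^5) in normalized time t/T, and its derivative is the bell curve. A short sketch of that closed form (an illustration of the cited model, not of the eigenmotion method itself):

```python
import numpy as np

def min_jerk(x0, xf, T, t):
    """Minimum-jerk position and velocity (Flash and Hogan, 1985) for a
    point-to-point move from x0 to xf over duration T."""
    tau = np.clip(np.asarray(t, dtype=float) / T, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5          # position blend, 0 -> 1
    sd = (30 * tau**2 - 60 * tau**3 + 30 * tau**4) / T  # its time derivative
    return x0 + (xf - x0) * s, (xf - x0) * sd
```

The velocity starts and ends at zero and peaks at the midpoint with value 1.875(xf - x0)/T, which is the bell shape the VSDH comparison refers to.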
divided into two parts: the reaching motion and the object moving motion from Sections II-B and II-C, respectively. The elbow trajectories are calculated with the EEA by implementing the trajectories of the shoulder and hand [7].

First, we compare the elbow trajectories of the four groups during the reaching motion compared in Section IV-A: 1) the elbow trajectory obtained by the EEA (EEA: red solid line); 2) the elbow trajectory obtained by the EEA and filtered by the VDM controller (EEA-VDM: magenta dotted line); 3) the elbow trajectory connected with VSDH (VSDH: blue dashed line); and 4) the elbow trajectory captured from the human (HM: black dash-dot line). VSDH is obtained by connecting the initial and final elbow points of the given reaching motion with the VSDH controller. The elbow trajectories of the four groups in 3-D space are shown in Fig. 9(a), and the elbow position and velocity trajectories in the x-, y-, and z-axis directions are shown in Fig. 9(b)-(d), respectively. The RMS errors between HM and the other three trajectories are listed in Table III.

TABLE III
RMS ERROR OF LEFT ELBOW TRAJECTORY IN REACHING MOTION

Second, we compare the elbow trajectories of the four groups during the object moving motion. The profile of movement obtained from Section II-C is used as the reference hand motion [see the thick black line in Fig. 10(a)]. The initial and final points for the object moving motion are given based on the final positions of both hands after the reaching motion for the sequential performance. The elbow trajectory of the human (HM) is calculated by averaging the 20 sets of moving motion data acquired in Section II-A2, and VSDH is obtained by allowing the elbow movement to depend on the hand and shoulder movements without applying the virtual spring-damper force at the elbow position [10]. The elbow trajectories of the four groups in 3-D space are shown in Fig. 10(a), and the elbow position trajectories in the x-, y-, and z-axis directions are shown in Fig. 10(b)-(d), respectively. The RMS errors between HM and the other three trajectories are listed in Table IV.

C. Object Manipulation With Humanoid Robot

The object manipulation task including reaching, grasping, and moving an object (ball, diameter: 30 cm, weight: approximately 280 g) is performed in sequence with the humanoid robot Mahru. Initially, the human-like reaching motion generated by the eigenmotion from Section II-B is controlled through the VDM controller in order to reach the object with both hands. After the reaching, force is imposed on the object to grasp it (more details on grasping different types of objects are given in [21]). Then, the object moving motion generated by the eigenmotion from Section II-C is once again controlled through the VDM controller to move the object on the profile of movement.
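The grasp in this sequence is produced by commanding each hand's target slightly inside the object: the object radius is reduced by ro, and the resulting error between the target and current hand positions generates a virtual spring-damper force [21]. A one-axis sketch of that idea follows; the gains k and d here are illustrative placeholders, not values from the paper.

```python
def virtual_sd_force(x_target, x_current, v_current, k=140.0, d=30.0):
    """Virtual spring-damper force on one axis: the spring pulls the hand
    toward its (penetrating) target, and the damper resists hand velocity.
    The gains k (N/m) and d (N*s/m) are illustrative, not from the paper."""
    return k * (x_target - x_current) - d * v_current

# Each hand target sits ro = 0.05 m inside the ball's surface; at rest
# (zero hand velocity), the virtual spring alone sets the squeeze force.
ro = 0.05
squeeze = virtual_sd_force(x_target=ro, x_current=0.0, v_current=0.0)
```

With these placeholder gains, the static squeeze is k * ro = 7 N, the same order as the roughly 7 N measured by the FT sensor in the experiment; the actual value depends on the paper's (unreported) controller gains.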

Fig. 10. Simulation results of elbow trajectories during the object moving motion: (red solid line) EEA, (magenta dotted line) EEA-VDM, (blue dashed line) VSDH, and (black dash-dot line) HM. EEA is the elbow trajectory obtained by EEA, and EEA-VDM is the elbow trajectory obtained by EEA and filtered by the VDM controller. (a) Trajectories of the object and both elbows in 3-D space. (b)-(d) Elbow position trajectories in the x-, y-, and z-axes, respectively.
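The RMS errors reported in Tables II-IV can be reproduced by resampling each trajectory onto a common normalized time base and averaging the pointwise distance to the captured human trajectory. A minimal sketch, with made-up arrays standing in for the captured data (which are not available here):

```python
import numpy as np

def time_normalize(traj, n_samples=101):
    """Resample an (N, 3) trajectory onto a common normalized time base
    by per-axis linear interpolation."""
    traj = np.asarray(traj, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n_samples)
    return np.column_stack(
        [np.interp(t_new, t_old, traj[:, k]) for k in range(traj.shape[1])]
    )

def rms_error(traj_a, traj_b, n_samples=101):
    """RMS of the pointwise Euclidean distance between two
    time-normalized trajectories."""
    a = time_normalize(traj_a, n_samples)
    b = time_normalize(traj_b, n_samples)
    d = np.linalg.norm(a - b, axis=1)
    return float(np.sqrt(np.mean(d**2)))
```

For instance, a trajectory compared against itself gives zero error, and a constant 0.1-m offset along one axis gives an RMS error of exactly 0.1 m.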
TABLE IV
RMS ERROR OF LEFT ELBOW TRAJECTORY IN OBJECT MOVING MOTION

The trajectories of the object and both hands in 3-D space during the whole manipulation session are depicted in Fig. 11(a), and the target and current trajectories of the left hand in the x- and z-axis directions (both hands in the y-axis direction) are shown in Fig. 11(b). For simplicity of depiction, only the left hand is shown in the x- and z-axis directions, whereas both hands are shown in the y-axis direction [check the coordinates in Fig. 11(a)]. To observe the force applied on the object, the force amplitude measured by the FT sensor attached to the left wrist is shown in Fig. 11(c).

V. DISCUSSION

One merit of motion generation methods based on robot PbD, or imitation learning, is that they automate the tedious manual programming of manipulation tasks [12]-[16]. In this paper, the eigenmotion, one of these motion generation methods, is adopted for generating human-like arm movements. The human-like characteristics of the method are evaluated through experiments by comparing the hand and elbow movements obtained by the eigenmotion and the EEA with those obtained from the captured human movements. Many studies have employed bell-shaped velocity profiles as the key factor characterizing human-likeness [9], [10]. One such method is the VSDH controller, which uses the bell-shaped velocity profile of point-to-point motions to verify the human-likeness of the controller [10]. However, particularly for specific tasks such as object reaching or moving, we claim that human-like characteristics cannot be argued from bell-shaped velocity profiles alone but depend on the characteristics and purpose of the given task; evidence can be found in our experimental results. By observing HM (see the black dash-dot lines in Figs. 8-10), it can be seen that the position and velocity profiles of the hand and elbow do not simply follow bell-shaped velocity profiles but draw curves in which the characteristics of the reaching motion (or the object moving motion) are implicitly absorbed.

In order to evaluate the human-like characteristics of our method, we compared it with the VSDH controller [10]. In Figs. 8(b)-(d) and 9(b)-(d), it is shown that both the hand and elbow trajectories given by EM and EM-VDM are qualitatively closer to HM than VSDH in both position and velocity; in particular, large errors appear in VSDH in the x- and y-axis directions (see Tables II and III for the quantitative evaluation). Another example of human-like movement was to move on the profile of movement after reaching and

grasping the object with both arms. Note that the profile of movement cannot be produced by VSDH alone; therefore, only the elbow trajectories are compared, whereas the hand trajectory for the movement is identically given by the eigenmotion from Section II-C. Referring to Fig. 10(b)-(d), it is observed that EEA and EEA-VDM are also qualitatively closer to HM than VSDH, particularly in the y-axis direction. As a matter of fact, VSDH does not control the elbow position but simply lets it depend on the positions of the hand and shoulder. This means that the elbow posture cannot be characterized with VSDH alone; rather, the elbow position should be controlled to characterize the elbow posture, particularly for robots with redundant arms (7 DOF for each arm in this experiment).

Through the experiments, we verified that the VDM controller can deal with the following: 1) the motion control using the target trajectories generated by the eigenmotion (reaching and moving) and 2) the force control that enables the physical interaction between the robot and the environment (grasping and moving). As shown in Fig. 11, the actual humanoid robot successfully performed the given object manipulation task (reaching, grasping, and moving an object) in sequence. It can be seen that, after the reaching motion, the object is moved on the profile of movement [see the thick black line in Fig. 11(a)]. Note that the distance between the current position trajectories of both hands is kept equal during the object grasping and moving period because the object remained placed between both hands. In particular, in the y-axis direction in Fig. 11(b), it can be seen that the target trajectories (black solid lines) stay within the current trajectories (red dotted lines) during the object moving task, since the difference between the target and current positions generates the force applied on the object [21]. The difference between the target and current positions is determined by reducing the radius of the object ro. In this experiment, ro is set to 5 cm based on pretrials. In Fig. 11(c), it can be seen that force is kept applied on the object during the object grasping and moving period (approximately 7 N from the FT sensor on the left wrist).

Fig. 11. Object manipulation task by the humanoid robot Mahru with ro = 0.05 (5 cm). (a) Trajectories of the object and both hands in 3-D space. (b) Hand position trajectories in the x-, y-, and z-axes. (c) Force amplitude applied on the left hand. The difference between the target and current trajectories generates the force applied on the object (approximately 7 N).

As shown in the experiments, the physical interaction between the robot and the environment is essential to perform the object manipulation task with both arms. This means that not only the position control for the motion generation but also the compliance of the control system should be realized to deal with the interaction with the environment (e.g., the object and external forces) [30]. For our experimental case, accurate position control is desirable for the object reaching motion, whereas compliance control is necessary for the object moving motion, since the object moving task is executed while holding the object. The VDM control method is one of the approaches that can simultaneously deal with both motion and compliance under a uniform control system. However, motion and compliance are in a tradeoff relation: a position error between the target and current trajectories must be allowed as external forces are applied to the control system. Although control gain tuning is not in the scope of this paper, the amount of this error can be modulated by tuning the virtual spring-damper force gains of the controller [21]. The error can also be seen in our simulation and experimental results. For instance, although it is not desirable to have such an error during the object reaching period, a tracking error can be observed as the target trajectories (EM and EEA in Tables II and III) are filtered by the VDM (EM-VDM and EEA-VDM in Tables II and III). Considerable error can also be observed during the object grasping and moving period in our experiment [the difference between the target and current trajectories, particularly in the y-axis direction in Fig. 11(b)]. However, this error during the object moving period would be

beneficial, since it is caused by the force generated for holding the object. Indeed, it is shown that the force is applied on the object during the object grasping and moving period [see approximately 11-43 s in Fig. 11(c)]. Thus, this error is meaningful from the perspective of compliance control, and this is the reason why we adopted the VDM control method in this work.

Finally, the method for object manipulation in this paper still leaves room for improvement. For example, situations such as dropping or slipping of the object are not considered in this work. To overcome this issue, a vision system could be integrated for position feedback of the object to assure grasping stability. In addition, we only tested with a light object (ball, weight: 280 g) in this work; grasping heavy objects should also be investigated (such as by integrating finger control) to improve the object manipulation task with both arms of the robot.

VI. CONCLUSION

We have presented a method for manipulating an object, including reaching, grasping, and moving, with both arms of a humanoid robot. The main purpose of this work is human-like motion generation and control for object manipulation. To achieve this goal, the eigenmotion and the EEA are implemented to generate human-like arm movements by using human motion capture data. Then, the VDM controller is used to control both the motion and force under a uniform control system, and the trajectories generated by the eigenmotion are used as the reference input of the VDM controller. Through simulations and experiments, the human-like characteristics are evaluated by comparing the hand and elbow trajectories obtained by the proposed method with the captured human trajectories. Finally, the object manipulation task including reaching, grasping, and moving an object is demonstrated in sequence with the actual humanoid robot. While this approach based on human motion capture data may provide broader freedom for performing human-like manipulation tasks with robots, the generalization of the manipulation tasks is missing in the presented approach; the extendability of the manipulation tasks highly depends on the database, which is true for all imitation learning methods. The reduction of the database is part of our ongoing work.

REFERENCES

[1] C. Smith et al., "Dual arm manipulation—A survey," Robot. Auton. Syst., vol. 60, no. 10, pp. 1340-1353, Oct. 2012.
[2] K. C. Tan, Y. J. Chen, K. K. Tan, and T. H. Lee, "Task-oriented developmental learning for humanoid robots," IEEE Trans. Ind. Electron., vol. 52, no. 3, pp. 906-914, Jun. 2005.
[3] W. Chung, C. Rhee, Y. Shim, H. Lee, and S. Park, "Door-opening control of a service robot using the multifingered robot hand," IEEE Trans. Ind. Electron., vol. 56, no. 10, pp. 3975-3984, Oct. 2009.
[4] C. Kemp, A. Edsinger, and E. Torres-Jara, "Challenges for robot manipulation in human environments," IEEE Robot. Autom. Mag., vol. 14, no. 1, pp. 20-29, Mar. 2007.
[5] L. M. Capisani and A. Ferrara, "Trajectory planning and second-order sliding mode motion/interaction control for robot manipulators in unknown environments," IEEE Trans. Ind. Electron., vol. 59, no. 8, pp. 3189-3198, Aug. 2012.
[6] R. C. Luo and C. C. Lai, "Multisensor fusion-based concurrent environment mapping and moving object detection for intelligent service robotics," IEEE Trans. Ind. Electron., vol. 61, no. 8, pp. 4043-4051, Aug. 2014.
[7] S. Kim, C. H. Kim, and J. H. Park, "Human-like arm motion generation for humanoid robots using motion capture database," in Proc. IEEE/RSJ IROS, Beijing, China, Oct. 2006, pp. 3486-3491.
[8] V. Potkonjak, S. Tzafestas, D. Kostic, and G. Djordjevic, "Human-like behavior of robot arms: General considerations and the handwriting task—Part I: Mathematical description of human-like motion: Distributed positioning and virtual fatigue," Robot. Comput.-Integr. Manuf., vol. 17, no. 4, pp. 305-315, Aug. 2001.
[9] T. Flash and N. Hogan, "The coordination of arm movements: An experimentally confirmed mathematical model," J. Neurosci., vol. 5, no. 7, pp. 1688-1703, Jul. 1985.
[10] S. Arimoto and M. Sekimoto, "Human-like movements of robotic arms with redundant DOFs: Virtual spring-damper hypothesis to tackle the Bernstein problem," in Proc. IEEE ICRA, Orlando, FL, USA, May 2006, pp. 1860-1866.
[11] E. Sisbot, L. Marin-Urias, X. Broquere, D. Sidobre, and R. Alami, "Synthesizing robot motions adapted to human presence," Int. J. Soc. Robot., vol. 2, no. 3, pp. 329-343, 2010.
[12] S. Schaal, "Is imitation learning the route to humanoid robots?" Trends Cognitive Sci., vol. 3, no. 6, pp. 233-242, 1999.
[13] R. Zollner, T. Asfour, and R. Dillmann, "Programming by demonstration: Dual-arm manipulation tasks for humanoid robots," in Proc. IEEE/RSJ IROS, Sendai, Japan, 2004, pp. 479-488.
[14] A. Billard, S. Calinon, R. Dillmann, and S. Schaal, "Robot programming by demonstration," in Handbook of Robotics, B. Siciliano and O. Khatib, Eds. New York, NY, USA: Springer-Verlag, 2008, ch. 59.
[15] A. Ijspeert, J. Nakanishi, and S. Schaal, "Movement imitation with nonlinear dynamical systems in humanoid robots," in Proc. IEEE ICRA, Washington, DC, USA, 2002, pp. 1398-1403.
[16] B. Lim, S. Ra, and F. C. Park, "Movement primitives, principal component analysis, and the efficient generation of natural motions," in Proc. IEEE ICRA, 2005, pp. 4630-4635.
[17] T. Wimbock, C. Ott, A. Albu-Schaffer, and G. Hirzinger, "Comparison of object-level grasp controllers for dynamic dexterous manipulation," Int. J. Robot. Res., vol. 31, no. 1, pp. 3-23, 2012.
[18] T. Wimbock, C. Ott, and G. Hirzinger, "Impedance behaviors for two-handed manipulation: Design and experiments," in Proc. IEEE ICRA, Roma, Italy, May 2007, pp. 4182-4189.
[19] R. Ozawa, S. Arimoto, S. Nakamura, and J. Bae, "Control of an object with parallel surfaces by a pair of finger robots without object sensings," IEEE Trans. Robot., vol. 21, no. 5, pp. 965-976, Oct. 2005.
[20] M. Yoshida, S. Arimoto, and J. Bae, "Blind grasp and manipulation of a rigid object by a pair of robot fingers with soft tips," in Proc. IEEE ICRA, Roma, Italy, May 2007, pp. 4707-4714.
[21] S. Y. Shin and C. H. Kim, "Humanoid's dual arm object manipulation based on virtual dynamics model," in Proc. IEEE ICRA, Saint Paul, MN, USA, 2012.
[22] S. Y. Shin and C. H. Kim, "On-line human motion transition and control for humanoid upper body manipulation," in Proc. IEEE/RSJ IROS, Taipei, Taiwan, 2010, pp. 477-482.
[23] M. Turk and A. Pentland, "Eigenfaces for recognition," J. Cognitive Neurosci., vol. 3, no. 1, pp. 71-86, 1991.
[24] C. Canudas De Wit and S. B. Brogliato, "Direct adaptive impedance control including transition phases," Automatica, vol. 33, no. 4, pp. 643-649, Apr. 2004.
[25] J. Lee, P. H. Chang, and R. S. Jamisola, Jr., "Relative impedance control for dual-arm robots performing asymmetric bimanual tasks," IEEE Trans. Ind. Electron., vol. 61, no. 7, pp. 3786-3796, Jul. 2014.
[26] A. Dumlu and K. Erenturk, "Trajectory tracking control for a 3-DOF parallel manipulator using fractional-order PI^λD^μ control," IEEE Trans. Ind. Electron., vol. 61, no. 7, pp. 3417-3426, Jul. 2014.
[27] N. Hogan and S. P. Buerger, "Impedance and interaction control," in Robotics and Automation Handbook. Boca Raton, FL, USA: CRC Press, Oct. 2004, ch. 19.
[28] H. Kim and B. Kim, "Online minimum-energy trajectory planning and control on a straight-line path for three-wheeled omnidirectional mobile robots," IEEE Trans. Ind. Electron., vol. 61, no. 9, pp. 4771-4779, Sep. 2014.
[29] M. W. Spong, S. Hutchinson, and M. Vidyasagar, Robot Modeling and Control. Hoboken, NJ, USA: Wiley, 2005.
[30] N. Motoi, T. Shimono, R. Kubo, and A. Kawamura, "Task realization by a force-based variable compliance controller for flexible motion control system," IEEE Trans. Ind. Electron., vol. 61, no. 2, pp. 1009-1021, Feb. 2014.

Sung Yul Shin was born in Korea. He received the B.S. degree in mechanical design engineering from Korea Polytechnic University, Siheung, Korea, in 2009 and the M.S. degree in robotics engineering from the University of Science and Technology, Daejeon, Korea, in 2011. He is currently working toward the Ph.D. degree in the Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX, USA. From 2011 to 2014, he was a Researcher with the Center for Bionics, Korea Institute of Science and Technology, Seoul, Korea. His research interests include motion generation and control for humanoid robots and rehabilitation robots.

ChangHwan Kim received the B.S. degree in mechanical engineering and the M.S. degree in machine design engineering from Hanyang University, Seoul, Korea, in 1993 and 1995, respectively, and the Ph.D. degree in mechanical engineering from the University of Iowa, Iowa City, IA, USA, in 2002. From 2002 to 2004, he was a Research Associate with the Robotics and Automation Laboratory, University of Notre Dame, Notre Dame, IN, USA. From 2004 to 2007 and from 2007 to 2011, he was with the Center for Intelligent Robotics and the Center for Cognitive Robotics, respectively, with the Korea Institute of Science and Technology (KIST), Seoul, where he is currently with the Center for Bionics. His research interests include human motion imitation and motion generation of a humanoid, human modeling, motion planning of mobile robots, cooperation of multiple robots, and rehabilitation robots.
