Kazuko Itoh 1,2, Hiroyasu Miwa 3,4,5, Massimiliano Zecca 2,4, Hideaki Takanobu 2,5,6, Stefano Roccella 7, Maria Chiara Carrozza 2,7, Paolo Dario 2,7 and Atsuo Takanishi 1,2,4,5,8

1 Department of Mechanical Engineering, Waseda University, Tokyo, Japan
2 RoboCasa, Tokyo, Japan
3 Digital Human Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tokyo, Japan
4 Institute for Biomedical Engineering, ASMeW, Waseda University, Tokyo, Japan
5 Humanoid Robotics Institute (HRI), Waseda University, Tokyo, Japan
6 Department of Mechanical Systems Engineering, Kogakuin University, Tokyo, Japan
7 ARTS Lab, Scuola Superiore Sant'Anna, Pontedera, Italy
8 Advanced Research Institute for Science and Engineering, Waseda University, Tokyo, Japan

e-mail: itoh@suou.waseda.jp (K. Itoh), takanisi@waseda.jp (A. Takanishi)
1 Introduction
Robots have become indispensable to human life. The most widespread robots are industrial ones with functions such as assembly and conveyance. However, we expect that personal robots, which share work and community life with humans, will become popular in the future. Such personal robots must adapt to their partners and environment and communicate with humans naturally. Therefore, we develop new mechanisms and functions in order to realize natural communication by expressing emotions, behaviors and personality in a human-like manner.
Research on communication robots is being conducted in the field of robotics. Breazeal and Scassellati (1999) developed an expressive robotic creature that produces facial expressions using its eyes, eyelids, eyebrows and mouth, and communicates with humans using visual information from CCD cameras. Kobayashi and Hara (1996) developed a head robot that uses fourteen of Ekman's Action Units (see Ekman and Friesen, 1978). It can express the six basic facial expressions as quickly as a human, using 24 DOFs actuated by air compressors. It can also recognize human facial expressions with CCD cameras and respond with the same facial expression.

* We would like to thank the Humanoid Robotics Institute (HRI) consortium, the General Directorate for Cultural Promotion and Cooperation of the Italian Ministry of Foreign Affairs, ARTS Lab, NTT Docomo and SolidWorks Corp. for their support. In addition, this research was supported by the Ministry of Education, Science, Sports and Culture through a Grant-in-Aid for Young Scientists (B) (No. 17700211), and by a Grant-in-Aid for the WABOT-HOUSE Project from Gifu Prefecture.
We have been developing humanoid robots from the viewpoints of both robotics and psychology in order to realize human-like motion. The Human-like Head Robot WE-3 (Waseda Eye No.3) series has been developed as a mechanical model since 1995 (Miwa et al., 2002). So far, we have achieved coordinated head-eye motion with the vestibulo-ocular reflex (VOR), depth perception using the angle of convergence between the two eyes, adjustment to the brightness of an object with the eyelids, and four sensations: visual, auditory, cutaneous and olfactory. In addition, we produced emotional expressions using not only the face but also the upper body with the Emotion Expression Humanoid Robot WE-4 (Waseda Eye No.4) series, which has a waist, 9-DOF emotion expression humanoid arms (Itoh et al., 2004) and the humanoid robot hands RCH-1 (RoboCasa Hand No.1) (Zecca et al., 2006). Moreover, a mental model based on psychology was developed for the humanoid robots. We introduced into this mental model a mental space with three independent parameters, a mood, second-order equations of emotion, a robot personality, and need, consciousness, behavior and memory models (Itoh et al., 2005). In this paper, we describe the mechanical features of the latest Emotion Expression Humanoid Robot WE-4RII (Waseda Eye No.4 Refined II).
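To make the idea of second-order equations of emotion concrete, the following is a minimal sketch of how an emotion vector in a three-parameter mental space could respond to stimuli as a damped second-order system. The state layout, coefficients and class name are illustrative assumptions, not the actual model of (Itoh et al., 2005).

    import numpy as np

    # Illustrative only: the paper names "second-order equations of emotion"
    # but does not give them; everything below is hypothetical.
    class SecondOrderEmotion:
        """Damped second-order response of an emotion vector E in a 3D
        mental space to a stimulus force F:  M E'' + D E' + K E = F."""

        def __init__(self, m=1.0, d=2.0, k=1.0, dt=0.01):
            self.m, self.d, self.k, self.dt = m, d, k, dt
            self.e = np.zeros(3)      # emotion vector E
            self.e_dot = np.zeros(3)  # its rate of change E'

        def step(self, stimulus):
            """Advance one time step by explicit Euler integration."""
            f = np.asarray(stimulus, dtype=float)
            e_ddot = (f - self.d * self.e_dot - self.k * self.e) / self.m
            self.e_dot = self.e_dot + e_ddot * self.dt
            self.e = self.e + self.e_dot * self.dt
            return self.e

    # Example: a brief "pleasant" stimulus; E rises, then decays to neutral.
    emotion = SecondOrderEmotion()
    for t in range(300):
        f = [1.0, 0.5, 0.0] if t < 100 else [0.0, 0.0, 0.0]
        state = emotion.step(f)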
2 Mechanical Hardware
Figure 1 presents a hardware overview of the Emotion Expression Humanoid Robot WE-4RII. The robot has 59 DOFs for expressing motions and emotions, as shown in Table 1, and four of the five human senses for detecting external stimuli: visual, auditory, olfactory and tactile. The height and weight of its upper body are 970[mm] and 59.3[kg], respectively.
Table 1. DOF configuration of WE-4RII.

    Part       DOF
    Neck       4
    Eyes       3
    Eyelids    6
    Eyebrows   8
    Lips       4
    Jaw        1
    Lungs      1
    Waist      2
    Arms       18
    Hands      12
    Total      59

[Figure 1. Emotion Expression Humanoid Robot WE-4RII: (a) whole view; (b) head part, with the sensors (visual: CCD camera; auditory: microphone; olfactory: gas sensor; tactile: FSR; temperature: thermistor) and the expressive parts (eyebrows, eyelids, facial color, lips, voice, neck) indicated.]
[Figure: eyelid and eye mechanism, with the upper and lower lids, eye yaw and eye pitch axes, eyebrows, motors and pulleys labeled; dimensions of about 140 mm, 180 mm and 130 mm are indicated.]
The hand is the Humanoid Robot Hand RCH-1 (RoboCasa Hand No.1), which was designed and developed by the ARTS Lab, Scuola Superiore Sant'Anna, in order to express not only emotions but also active behaviors.
Table 2. Sensors on WE-4RII.

    Part   Sensation                 Device                     Quantity
    Head   Visual                    CCD Camera                 2
           Auditory                  Microphone                 2
           Cutaneous (Tactile)       FSR                        26
           Cutaneous (Temperature)   Thermistor                 1
           Weight                    Current Sensor             2
           Olfactory                 Semiconductor Gas Sensor   4
    Hand   Cutaneous (Tactile)       Contact Sensor             16
           Cutaneous (Tactile)       FSR                        4
           Force                     3D Force Sensor            2

Figure 9. RCH-1.
there was too much backlash. Moreover, it was necessary to redesign the forearms to mount the fingers' motors. Therefore, we changed the gear system to small harmonic drive systems in order to reduce the backlash and miniaturize the wrist mechanism. In particular, we designed the link mechanism shown in Figure 8(a) for the pitch axis of the wrist. This mechanism transmits the motor power through two links, each supported by four ball bearings to reduce the tilt between the inner and outer rims.
WE-4RII has visual, auditory, olfactory and tactile sensors on its head, and tactile sensors on its hands, as shown in Table 2. Regarding the visual sensors, WE-4RII has two color CCD cameras (CS6550, Tokyo Electronic Industry Co. Inc.) in its eyes. WE-4RII calculates the center of gravity and the area of targets. The robot can treat any color as a target and can recognize eight targets at the same time. It can also estimate the distance from its eyes to a target from the angle of convergence between the two eyes. If there are multiple targets in the robot's view, WE-4RII autonomously selects a suitable color as the target according to the situation.
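These two computations can be illustrated with a short sketch: a center-of-gravity and area calculation on a binary color mask, and the standard vergence geometry for distance. The camera baseline and all names below are assumptions for illustration; the paper does not specify the actual image processing.

    import numpy as np

    # Hypothetical sketch; EYE_BASELINE_MM is an assumed camera separation,
    # not WE-4RII's actual geometry.
    EYE_BASELINE_MM = 120.0

    def centroid_and_area(mask):
        """Center of gravity (x, y) and pixel area of a boolean color mask."""
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return None, 0
        return (xs.mean(), ys.mean()), int(xs.size)

    def distance_from_vergence(left_in_rad, right_in_rad):
        """Distance to a fixated target on the midline: each eye rotates
        inward by theta, so d = (baseline / 2) / tan(theta)."""
        theta = 0.5 * (left_in_rad + right_in_rad)
        return (EYE_BASELINE_MM / 2.0) / np.tan(theta)

    # Example: a small color blob, and a 2-degree convergence per eye
    # (about 1.7 m away with the assumed baseline).
    mask = np.zeros((240, 320), dtype=bool)
    mask[100:120, 150:180] = True
    (cx, cy), area = centroid_and_area(mask)
    d_mm = distance_from_vergence(np.radians(2.0), np.radians(2.0))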
With regard to the auditory sensor, WE-4RII has a condenser microphone in each ear, and it can localize the direction of a sound in 3D space from its loudness. For olfactory sensation, we set four semiconductor gas sensors (SB-19, SB-30, SB-AQ1A and SB-E32, FIS Inc.) in WE-4RII's nose. The robot can quickly distinguish the smells of alcohol, ammonia and cigarette smoke.
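The paper does not describe how loudness is mapped to direction. As one plausible sketch, an interaural-level-difference heuristic can map the loudness ratio of the two microphones to a horizontal angle; the decibel scale and the function below are hypothetical.

    import numpy as np

    # Hypothetical mapping from the loudness ratio of the two ear
    # microphones to a horizontal sound direction (interaural level
    # difference); the +/-20 dB full-scale range is an assumption.
    def sound_azimuth_deg(left_rms, right_rms, max_angle_deg=90.0):
        """0 deg = front; positive = toward the louder right microphone."""
        ild_db = 20.0 * np.log10(right_rms / left_rms)
        return float(np.clip(ild_db / 20.0, -1.0, 1.0)) * max_angle_deg

    # Example: the right microphone is twice as loud (about +6 dB),
    # giving roughly 27 degrees to the right.
    angle = sound_azimuth_deg(left_rms=0.10, right_rms=0.20)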
In addition, we used FSRs (model 406, Interlink Electronics, Inc.) as tactile sensors and placed them on the cheeks, forehead, and the top and sides of the head of WE-4RII. The robot can recognize differences in touching behaviors such as "push", "hit" and "stroke".
Meanwhile, WE-4RII has on/off contact sensors and FSRs on RCH-1. The on/off contact sensors, which are film-shaped switches, detect contact with objects during grasping. The FSRs on the dorsum of RCH-1 are used for interaction with humans.
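The decision rule for distinguishing the touching behaviors named above is not given in the paper. As a hedged illustration, the sketch below labels an FSR force trace by contact duration, peak force and whether the contact moved across neighboring sensors; all thresholds are made-up placeholders.

    import numpy as np

    # Made-up heuristic: the paper states the robot distinguishes "push",
    # "hit" and "stroke" but not how; thresholds below are placeholders.
    def classify_touch(force_trace, moved_across_sensors, dt=0.01,
                       contact_threshold=0.5):
        """Label one FSR force trace as 'hit', 'push' or 'stroke'."""
        trace = np.asarray(force_trace, dtype=float)
        duration_s = float((trace > contact_threshold).sum()) * dt
        peak = float(trace.max())
        if moved_across_sensors:
            return "stroke"   # contact sliding over neighboring sensors
        if duration_s < 0.15 and peak > 2.0:
            return "hit"      # short, strong impact
        return "push"         # sustained contact

    # Example: a slow, sustained press on a forehead FSR reads as "push".
    trace = np.concatenate([np.linspace(0.0, 1.5, 30), np.full(70, 1.5)])
    label = classify_touch(trace, moved_across_sensors=False)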
5 Emotional Expressions
WE-4RII can express its emotions using its upper-body motion: the facial expression and the arm, hand, waist and neck motions. First, the seven facial patterns of "Happiness", "Anger", "Disgust", "Fear", "Sadness", "Surprise" and "Neutral" were defined using the Six Basic Facial Expressions of Ekman (Ekman and Friesen, 1978). Next, we made 13 upper-body patterns for each emotional expression using the relation between emotion and posture proposed by Hama et al. (2001), and defined the pattern with the highest recognition rate in a preliminary evaluation as the emotional pattern. In addition, both the posture and the motion velocity were controlled in order to realize effective emotional expression, since the motion velocity is as important as the posture. For example, WE-4RII moves its body quickly for the surprise expression and slowly for the sadness expression. We defined the motion velocities by comparing various candidate velocities. Figure 10 shows the emotional expressions exhibited by WE-4RII.

Figure 10. Emotional expressions of WE-4RII: (a) Neutral, (b) Disgust, (c) Fear, (d) Sadness.
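The posture-plus-velocity idea can be sketched as a joint-space trajectory whose duration depends on the emotion, so that the same posture is reached quickly for surprise and slowly for sadness. The durations and the 9-DOF example below are placeholders, not WE-4RII's actual patterns.

    import numpy as np

    # Placeholder durations; WE-4RII's actual emotional patterns are not
    # given numerically in the paper.
    EMOTION_DURATION_S = {"surprise": 0.3, "happiness": 0.8,
                          "neutral": 1.0, "sadness": 2.5}

    def emotional_trajectory(q_start, q_goal, emotion, dt=0.01):
        """Joint-space trajectory to an emotional posture; the same goal is
        reached quickly for surprise and slowly for sadness."""
        steps = max(2, int(EMOTION_DURATION_S[emotion] / dt))
        s = np.linspace(0.0, 1.0, steps)
        s = 3.0 * s**2 - 2.0 * s**3  # smooth ease-in/ease-out profile
        q0, q1 = np.asarray(q_start, float), np.asarray(q_goal, float)
        return q0 + s[:, None] * (q1 - q0)

    # Example: one raised-arm posture for a 9-DOF arm, executed in 0.3 s
    # for surprise and in 2.5 s for sadness.
    q0, q1 = np.zeros(9), np.full(9, 0.8)
    fast = emotional_trajectory(q0, q1, "surprise")
    slow = emotional_trajectory(q0, q1, "sadness")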
[Figure: recognition rate (%) of each emotional expression; the recognition rate of all the emotional expressions was 66.7%.]

Conclusions
In this paper, we presented the Emotion Expression Humanoid Robot WE-4RII, developed by integrating the Humanoid Robot Hands RCH-1 into the previous version WE-4R. We confirmed that WE-4RII can effectively express its emotions.
In the future, we would like to increase the number of emotional expressions and behaviors, and to develop a model that generates the most suitable emotional expression and behavior for a given situation. In addition, a method for the objective evaluation of robots has not been proposed yet. It is very important to measure the psychological effect of robots on humans in real time and with high reliability by evaluating robots objectively. Therefore, we would like to develop a new evaluation system for humanoid robots.
References

Breazeal, C., and Scassellati, B. (1999). How to build robots that make friends and influence people. Proceedings of the 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems, 858-863.
Ekman, P., and Friesen, W. V. (1978). Facial Action Coding System. Consulting Psychologists Press Inc.
Hama, H., et al. (2001). Kanjo Shinrigaku heno Syotai [An Invitation to Emotion Psychology] (in Japanese). Saiensu. 162-170.
Itoh, K., Miwa, H., Takanobu, H., and Takanishi, A. (2004). Mechanical Design and Motion Control of Emotion Expression Humanoid Robot WE-4R. Proceedings of the 15th CISM-IFToMM Symposium on Robot Design, Dynamics, and Control, ROM04-14.
Itoh, K., et al. (2005). Application of Neural Network to Humanoid Robots - Development of Co-Associative Memory Model. Neural Networks, Vol. 18, No. 5-6, 666-673.
Kobayashi, H., and Hara, F. (1996). Real Time Dynamic Control of 6 Basic Facial Expressions on Face Robot. Journal of the Robotics Society of Japan, Vol. 14, No. 5, 677-685.
Miwa, H., Takanobu, H., and Takanishi, A. (2002). Human-Like Head Robot WE-4RV for Emotional Human-Robot Interaction. ROMANSY 14 - Theory and Practice of Robots and Manipulators, 519-526.
Zecca, M., et al. (2006). From the Human Hand to the Humanoid Hand: Development of RoboCasa Hand #1. To be submitted to the 16th CISM-IFToMM Symposium on Robot Design, Dynamics, and Control.