1 Introduction
Table 1. Use case categories for collaborative robot applications (H = human, R = robot).

category | umbrella term                    | interaction level
A        | encapsulation                    | interaction-free operation
B        | co-existence                     | safety stop
C        | static H-R collaboration         | static collaboration
D        | static/dynamic H-R collaboration | static/dynamic collaboration
E        | static/dynamic R-R collaboration | static/dynamic R-R collaboration
F        | dynamic H-R-R collaboration      | dynamic H-R-R collaboration
The use case presented in this article is to solve a five-piece Tangram puzzle (see
Fig. 1(a)). For this purpose, a two-arm robot called ABB YuMi either solves this
task alone (corresponding to the use case categories A, B and E from Table 1)
or it collaborates with the human towards this goal (corresponding to the use
case categories C, D and F from Table 1).
Initially, the puzzle pieces are placed by hand at random positions in a delimited area reachable by both the robot and the human. At the beginning, the user can select the shape that shall be built with the puzzle (a square, a swan or any other predefined shape). The robot can also give feedback to the user (e.g. via the FlexPendant or a computer screen), for instance in cases in which two asymmetric parts are placed in a way that no solution can be found or in which not enough valid puzzle pieces are present in the game. It can also happen that more pieces are present in the game than necessary for building the puzzle. In this case,
the robot and human select the pieces they need for completing the task. The
building process stops when the human presses an emergency button or when
the robot collides with the human or any other object. Figure 4 presents a formal
representation of the different sequential steps necessary to realize the use case.
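As a complement to the flowchart, the described sequence can be summarized as a simple state machine. The following Python sketch is our own illustrative summary of the steps above; the state names, the `world` object, and its flags are hypothetical and not taken from the actual controller program:

```python
from enum import Enum, auto

class State(Enum):
    SELECT_SHAPE   = auto()  # user picks square, swan, or another shape
    DETECT_PIECES  = auto()  # camera locates the pieces in the shared area
    CHECK_SOLVABLE = auto()  # enough valid pieces, solvable arrangement?
    PICK_AND_PLACE = auto()  # robot and human move pieces one by one
    FEEDBACK       = auto()  # e.g. "no solution can be found" on the screen
    DONE           = auto()
    STOPPED        = auto()  # emergency button pressed or collision

def next_state(state, world):
    # Safety transitions take priority over the game logic.
    if world.emergency_pressed or world.collision_detected:
        return State.STOPPED
    if state is State.SELECT_SHAPE:
        return State.DETECT_PIECES
    if state is State.DETECT_PIECES:
        return State.CHECK_SOLVABLE
    if state is State.CHECK_SOLVABLE:
        # Extra pieces are fine (the needed ones get selected); an
        # unsolvable layout or missing pieces triggers user feedback.
        return State.PICK_AND_PLACE if world.solvable else State.FEEDBACK
    if state is State.PICK_AND_PLACE:
        return State.DONE if world.puzzle_complete else State.DETECT_PIECES
    return state
```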
Fig. 3. (a) The located puzzle pieces with the pattern and their object frames. (b) Mismatch of the pieces.
A higher threshold value leads to faster and more reliable results. However, a change in lighting conditions may result in false negatives (no piece is detected although it is present). On the other hand, with a lower threshold value, the possibility of false positives increases. As shown in the example of Fig. 3(b), with a too low threshold and the triangle piece absent, the polygon is recognized as a triangle, since the latter is a sub-shape of the former. To overcome this error and ensure the individuality of the pieces, specific patterns were painted on some parts, see Fig. 3(a). Adding such patterns improved the robustness of detection under different lighting conditions. Another important parameter is the rotation tolerance, which was set such that detection is rotation-invariant over the whole range of 360°.
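The acceptance-threshold tradeoff described above can be illustrated with a minimal Python sketch that uses OpenCV template matching as a stand-in for the PatMax tool; the threshold value, the function names, and the use of OpenCV are our assumptions and are not part of the ABB vision interface. Unlike PatMax, plain template matching is not rotation-invariant, so a full implementation would additionally have to sweep over rotations:

```python
import cv2

# Acceptance threshold on the normalized match score. Too high: lighting
# changes push true matches below it (false negatives). Too low: sub-shapes
# match as well, e.g. the triangle template fires inside the polygon.
ACCEPT_THRESHOLD = 0.80  # illustrative value, not from the paper

def find_piece(scene_gray, template_gray):
    """Return (x, y, score) of the best match, or None if none passes."""
    scores = cv2.matchTemplate(scene_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    if max_val < ACCEPT_THRESHOLD:
        return None  # reported as "piece not present"
    return (max_loc[0], max_loc[1], max_val)
```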
With horizontal and vertical offsets, the position of the object frame can be tuned. To simplify the program, we defined only one pick-up position for all parts. Since the robot moves in the object frame, we had to find an appropriate location of the frame for each part so that the robot is always able to pick up the parts in the same way. For some object shapes, such as the triangles in Fig. 3(a), there may be only one feasible way of picking up the piece with a two-finger gripper. After setting up the parameters, we taught the place-down position by “jogging” the robot to the location where the puzzle should be solved. We set this location outside the camera detection area to avoid detecting parts that had already been handled.
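To illustrate why a single pick-up position suffices, the sketch below maps a grasp point, defined once per part in its object frame, into the robot's base frame using the pose reported by the vision tool. This is a simplified 2D version with hypothetical names; the real system works with the frames of the ABB controller:

```python
import numpy as np

def base_frame_grasp(obj_x, obj_y, obj_theta, grasp_offset=(0.0, 0.0)):
    """Map a grasp point given in the object frame to the base frame.

    obj_theta is the piece orientation reported by the vision tool;
    grasp_offset is the single pick-up position tuned per part so that
    the two-finger gripper always closes on the same features.
    """
    c, s = np.cos(obj_theta), np.sin(obj_theta)
    gx, gy = grasp_offset
    x = obj_x + c * gx - s * gy
    y = obj_y + s * gx + c * gy
    return x, y, obj_theta  # the gripper inherits the piece orientation
```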
Note that the computation time of a PatMax pattern is about 0.8 s and that no more than three PatMax patterns can be handled within one vision job due to time-out errors. In the ABB integrated vision manual [9], two possible solutions to this problem are presented: i) create an individual vision job for each part and load them into the camera one after the other, or ii) use one vision job and disable the unused vision tools. In our experiments, we analyzed both solutions (see Table 2); the second option proved to be more time-efficient. As described in the manual, the disabled tools still send information to the program, which must be filtered out. However, when implementing the reference solution presented in the manual, we could not achieve the desired result. Accordingly, we developed an alternative, simplified solution to this problem.
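The idea behind this second option can be sketched in Python as follows: run the single vision job, enable only the tools that are still needed, and discard whatever the disabled tools report. The `camera` interface here is a hypothetical wrapper, not the ABB API; our actual solution is implemented in the robot program:

```python
def detect_pieces(camera, needed_tools):
    """Run the single vision job and keep only results of enabled tools."""
    for name in camera.tool_names():        # e.g. the 5 PatMax patterns
        camera.set_tool_enabled(name, name in needed_tools)
    results = camera.run_vision_job()       # one image acquisition
    # Disabled tools may still emit (stale or empty) results, so they
    # are filtered out explicitly before use.
    return [r for r in results if r.tool_name in needed_tools]
```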
Table 2. Time required for completing the puzzle with the different methods.

Method                    Time
One hand,  5 vision jobs  78 s
Two hands, 5 vision jobs  64 s
One hand,  1 vision job   59 s
Two hands, 1 vision job   42 s
Due to the hardware restriction, the robot is blind between two images, i.e., the human may pick up the very piece that the robot intends to grasp. In such a case, the system can detect that it did not grab anything and check whether the piece exists at the target location or not.

On the other hand, the human can also assist the robot, for instance if an asymmetric piece is placed down with a wrong orientation or if the upcoming piece is not present. We show that it is possible to fulfill these tasks despite the mentioned hardware limitations and how the human-robot collaboration helps to cope with these constraints.
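As an illustration, the recovery check after a grasp attempt can be sketched as follows; the gripper and camera helpers are hypothetical stand-ins for the actual controller interfaces:

```python
def handle_grasp_result(gripper, camera, piece_id, target_area):
    """Decide how to continue after a possibly failed grasp."""
    if gripper.holding_object():     # e.g. finger gap larger than when empty
        return "place_piece"
    # The robot grabbed air: the human may have taken the piece meanwhile.
    detections = camera.detect_pieces(region=target_area)
    if piece_id in detections:
        return "skip_piece"          # the human already placed it
    return "retry_detection"         # piece still missing: re-detect, retry
```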
5 Results

In this section, first results concerning the developed robotic Tangram puzzle piecing algorithms are presented. In Table 2, four different methods are compared in terms of the time required for successfully completing the puzzle piecing task. To make the measurements reproducible, the same initial configuration of the puzzle pieces, the same lighting conditions, and the same moving instructions at the same speed were used. Furthermore, the robot solved the puzzle without human intervention. As indicated in Fig. 4, one important prerequisite for achieving a robust recognition of the puzzle pieces independent of the lighting conditions was to set the exposure time automatically at the beginning of the program by reading the brightness of the image. As expected, when only one robot arm was used both to detect the pieces and to place them into their final positions, the robot was slower than when using one arm for image acquisition and the other for puzzle placing. Furthermore, Table 2 illustrates the time saved by using only one vision job for the puzzle part recognition instead of 5 vision jobs.
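This exposure setup can be sketched as a simple feedback loop over the mean image brightness; the camera interface and the target values below are our assumptions, not the ABB integrated vision API:

```python
TARGET_BRIGHTNESS = 128   # desired mean gray value (8-bit image), assumed
TOLERANCE = 10

def auto_exposure(camera, exposure_ms=10.0, max_iterations=10):
    """Adjust exposure until the mean brightness is inside the target band."""
    for _ in range(max_iterations):
        frame = camera.grab(exposure_ms)    # grayscale image
        brightness = frame.mean()
        if abs(brightness - TARGET_BRIGHTNESS) <= TOLERANCE:
            break
        # Proportional correction: a dark image lengthens the exposure.
        exposure_ms *= TARGET_BRIGHTNESS / max(brightness, 1.0)
    return exposure_ms
```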
Fig. 4. Flowchart showing the steps and components for puzzle solving.
6 Conclusion
In this article, a collaborative robotics use case for gaming applications was pre-
sented in which a two-arm ABB YuMi robot pieces a Tangram puzzle together
with a human game partner. Employed methods for puzzle piece recognition,
grasping, game sequencing, and emergency stop in case of collision were pre-
sented and evaluated. Planned next steps include, among others, the integration of a dynamic puzzle solver, natural human-robot communication concepts, and matrix-based proximity sensors for avoiding collisions between the robotic and human game partners.
7 Acknowledgement
This work has been supported by the Austrian Ministry for Transport, In-
novation and Technology (bmvit) within the project framework Collaborative
Robotics.
References
1. Mathieu Bélanger-Barrette. Robotiq Collaborative Robot Ebook. Robotiq, 2016. 6th
edition.
2. Y Uny Cao, Alex S Fukunaga, and Andrew Kahng. Cooperative mobile robotics:
Antecedents and directions. Autonomous robots, 4(1):7–27, 1997.
3. Oxford English Dictionary. Oxford: Oxford University Press, 1989.
4. Jeff Fryman and Björn Matthias. Safety of industrial robots: From conventional
to collaborative applications. In Robotics; Proceedings of ROBOTIK 2012; 7th
German Conference on Robotics, pages 1–5. VDE, 2012.
5. ISO. 10218-1: 2011: Robots and robotic devices–safety requirements for indus-
trial robots–part 1: Robots. Geneva, Switzerland: International Organization for
Standardization, 2011.
6. ISO. 10218-2: 2011: Robots and robotic devices–safety requirements for industrial
robots–part 2: Robot systems and integration. Geneva, Switzerland: International
Organization for Standardization, 2011.
7. ISO. 15066: Robots and robotic devices–collaborative robots. Geneva, Switzerland:
International Organization for Standardization, 2016.
8. George Michalos, Sotiris Makris, Panagiota Tsarouchi, Toni Guasch, Dimitris Kon-
tovrakis, and George Chryssolouris. Design considerations for safe human-robot
collaborative workplaces. Procedia CIRP, 37:248–253, 2015.
9. ABB AB Robotics Products. Application manual - integrated vision, 2015. Doc-
ument ID: 3HAC044251-001, Revision: E.
10. Paul G Ranky. Collaborative, synchronous robots serving machines and cells.
Industrial Robot: An International Journal, 30(3):213–217, 2003.
11. Rosemarie Velik, Bernhard Dieber, Saeed Yahyanejad, Mathias Brandstötter, David Kirschner, Lucas Paletta, Ferdinand Fuhrmann, Patrick Luley, Herwig Zeiner, Gerhard Paar, and Michael Hofbaur. Step forward in human-robot collaboration: the project CollRob. In OAGM & ARW Joint Workshop on Computer Vision and Robotics, pages 1–8, Wels, Austria, 2016.