Practical Applications for Robotic Arms Using Image Processing
Mihai Dragusu1, Anca Nicoleta Mihalache1 and Razvan Solea2, Member, IEEE
I. INTRODUCTION
The influence and impact of digital images on modern society, science, technology and art are huge. Image processing has become such a critical component in contemporary science and technology that many tasks would not be attempted without it. It is a truly interdisciplinary subject, used in computer vision, robotics, medical imaging, microscopy, astronomy, geology and many other fields [1]-[5].

One of the core technologies of the intelligent robot was, of course, vision, and this research area came to be called "computer vision", now regarded as a fundamental and scientific approach to investigating how artificial vision can best be achieved and what principles underlie it [6].

OpenCV (Open Source Computer Vision) is a library of programming functions for real-time computer vision. OpenCV is free for both academic and commercial use. It has C++, C, Python and soon Java interfaces running on Windows, Linux, Android and Mac. The library has more than 2500 optimized algorithms [7].

OpenCV played a role in the growth of computer vision by enabling thousands of people to do more productive work in vision. With its focus on real-time vision, OpenCV helps students and professionals efficiently implement projects.

The contribution of this paper is to report new practical applications for the student laboratory that provide the hands-on experience that is critical for robotics education. These exercises supply the student with hands-on experience that complements classroom lectures and software development. Through this experience, the student confronts the hard realities of robot systems (sensor noise, nonzero calibration residuals, low fidelity of models, and real-time constraints) and learns to deal with them.

The goal of this paper is to provide an overview of the practical laboratory applications and the required facilities. As an overview, the paper will provide enough practical information to replicate the applications.

1 M. Dragusu and A.N. Mihalache are students at the Faculty of Automatic Control, Computers, Electric and Electronic Engineering, "Dunarea de Jos" University of Galati, Romania; dragusumihai@yahoo.com, mihalache_anca@ymail.com
2 R. Solea is with the Faculty of Automatic Control, Computers, Electric and Electronic Engineering, "Dunarea de Jos" University of Galati, Romania; razvan.solea@ugal.ro

Fig. 1. System hardware components

II. PRACTICAL APPLICATIONS - ALGORITHMS

The system is designed for processing images captured by a web cam, so that important information can be extracted and sent to the robotic arm. The robotic arm has software implemented so that it can use the received information to complete its given task: either shape recognition for sorting or coordinate extraction from a contour for drawing.

The system hardware components (see Fig. 1) are: video camera and processing tool, robotic arm, and mobile platform.

A. Contour Coordinates Extraction

The web cam captures the image to be processed. In order to obtain information about this capture, edge detection must be applied to the image.

Of all the edge detection techniques (Canny, Sobel, Laplace, etc.), the Canny method [10], [11] was chosen because of its good performance on noisy images. This algorithm can only be applied to a blurred grayscale image, and it will find all the edges in the image, from which contours will be formed.

After applying the Canny algorithm to the blurred image, it was decided that each bounded entity's contour would be associated with a different shade of red, starting from 254 and decrementing this value in successive steps of 1 until it reaches the value 10. This allows up to 244 discontinuous contours to be detected and distinguished.
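For illustration, a minimal sketch of this pipeline using OpenCV's C++ API; the camera index, blur kernel, and Canny thresholds are assumed values, not taken from the paper:

// Sketch of the pipeline above (OpenCV C++ API). Blur kernel size and
// Canny thresholds are assumptions, not values from the paper.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cam(0);                 // web cam, as in the paper
    if (!cam.isOpened()) return 1;
    cv::Mat frame, gray, blurred, edges;
    cam >> frame;
    if (frame.empty()) return 1;

    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 1.5);  // Canny needs a smoothed grayscale input
    cv::Canny(blurred, edges, 50, 150);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);

    // Label each contour with its own shade of red, 254 down to 10,
    // distinguishing up to 244 separate contours as described above.
    cv::Mat labelled = cv::Mat::zeros(frame.size(), CV_8UC3);
    int shade = 254;
    for (size_t i = 0; i < contours.size() && shade >= 10; ++i, --shade)
        cv::drawContours(labelled, contours, (int)i, cv::Scalar(0, 0, shade));  // BGR: red channel only
    cv::imwrite("contours.png", labelled);
    return 0;
}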
Fig. 2. The Moore neighborhood in coordinates
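Fig. 2 refers to the Moore neighborhood, the classic construct for following a contour pixel by pixel. A generic sketch of the standard clockwise Moore-neighborhood boundary trace on a binary edge map, not necessarily the authors' exact variant:

// Generic Moore-neighborhood boundary tracing (cf. Fig. 2); a sketch of
// the textbook algorithm, assuming a binary image stored as a 2D grid.
#include <vector>
#include <utility>

using Pt = std::pair<int, int>;  // (row, col)

// Clockwise Moore neighborhood offsets, starting from the west neighbor.
static const int DR[8] = {0, -1, -1, -1, 0, 1, 1, 1};
static const int DC[8] = {-1, -1, 0, 1, 1, 1, 0, -1};

std::vector<Pt> traceBoundary(const std::vector<std::vector<char>>& img, Pt start) {
    std::vector<Pt> contour{start};
    Pt cur = start;
    int dir = 0;  // where the clockwise scan of the neighborhood begins
    do {
        int i;
        for (i = 0; i < 8; ++i) {
            int k = (dir + i) % 8;
            int r = cur.first + DR[k], c = cur.second + DC[k];
            if (r >= 0 && r < (int)img.size() &&
                c >= 0 && c < (int)img[0].size() && img[r][c]) {
                cur = {r, c};
                dir = (k + 6) % 8;  // restart the scan just behind the move direction
                break;
            }
        }
        if (i == 8) break;          // isolated pixel: no neighbor to follow
        contour.push_back(cur);
    } while (cur != start);         // simple stop: back at the start pixel
    return contour;
}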
Fig. 12. Angular position and angular error [deg] versus step time
x_r = x · K_1 - 7.8
y_r = y · K_2 + 44.3        (1)
where x_r and y_r are the rescaled coordinates and x, y are the initial coordinates provided by the extraction algorithm.

After rescaling the drawing space and transforming the coordinates received from the extraction algorithm into the polar coordinate system, the arm can draw the shapes presented to the camera with a low degree of difficulty, considering its limitations.
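A minimal numeric sketch of this step: Eq. (1) with its offsets from the text, followed by the polar conversion; the gains K1 and K2 are placeholders, since their values are not given here:

#include <cmath>
#include <cstdio>

int main() {
    const double K1 = 0.5, K2 = 0.5;    // placeholder gains; actual values not given in the text
    double x = 120.0, y = 80.0;         // pixel coordinates from the extraction algorithm
    double xr = x * K1 - 7.8;           // Eq. (1)
    double yr = y * K2 + 44.3;
    double r = std::hypot(xr, yr);      // polar radius for the arm
    double theta = std::atan2(yr, xr);  // polar angle [rad]
    std::printf("r = %.2f, theta = %.3f rad\n", r, theta);
    return 0;
}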
Fig. 12 shows the desired, command and real angular position of the first joint of the manipulator, together with the angular error.
IV. SECOND PRACTICAL LABORATORY APPLICATION - EXPERIMENTAL RESULTS FOR SORTING ALGORITHM

A. Step 1 - Learning Process (object extraction)

During this process the user helps the application learn which objects to detect, using the following steps: i) the image is captured (Fig. 13) and edge detection is applied ("c" button); ii) the contours are determined ("d" button); iii) the objects captured by the contours are browsed ("n" button) until the user finds the desired object (see Fig. 15) and saves it ("s" button).

Fig. 14. Canny result

Fig. 15. Saved objects - example
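A hedged sketch of the save step ("s" button), assuming the contour list produced by the extraction sketch in Section II; the output file name is illustrative:

// Crop the region bounded by the currently browsed contour and store it
// for later matching. "saved_object.jpg" is a hypothetical file name.
#include <opencv2/opencv.hpp>
#include <vector>

void saveObject(const cv::Mat& frame,
                const std::vector<std::vector<cv::Point>>& contours,
                size_t index) {
    cv::Rect box = cv::boundingRect(contours[index]);  // box around the object
    cv::Mat object = frame(box).clone();               // standalone copy of the crop
    cv::imwrite("saved_object.jpg", object);           // e.g. "stick.jpg"
}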
B. Step 2 - Recognition principle

During this process (see the diagram in Fig. 17) the camera feedback is constantly checked: the same object extraction is applied to each frame, only now template matching is applied between every extracted object and every saved object. When the sum squared error between two objects is below a predetermined threshold, we can say that the object found (see Fig. 16) is the one we are looking for. The described process can find multiple objects in a single image.

In the following example the searched object is a USB flash stick. In Fig. 18 the found object is declared ("stick.jpg gasit", i.e. "stick.jpg found") with a sum squared error of 0.0736217 against the saved object; the other saved objects were also compared against the found object in the current image, but none had a sum squared error (0.284048 and 1) low enough for a match.
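The text does not name the exact matching call; one plausible reading of "match template" with a sum-squared-error score is OpenCV's cv::matchTemplate with the normalized squared-difference method. A sketch, with the acceptance threshold as an assumption:

#include <opencv2/opencv.hpp>

// Returns true if savedObj appears somewhere in frame with a normalized
// SSE below the threshold (cf. the 0.0736217 match in Fig. 18).
bool matchesSavedObject(const cv::Mat& frame, const cv::Mat& savedObj,
                        double threshold = 0.1) {   // threshold is an assumption
    cv::Mat result;
    cv::matchTemplate(frame, savedObj, result, cv::TM_SQDIFF_NORMED);
    double minSse;
    cv::minMaxLoc(result, &minSse, nullptr, nullptr, nullptr);
    return minSse < threshold;                      // low SSE means a match
}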
C. Step 3 - Object depositing

After the object has been found, it is assigned a code that is communicated to the arm's control software, which has specified places where each object type is to be deposited.

To show the effectiveness of the proposed algorithm, experiments were made using a Pioneer mobile robot platform (as in Fig. 1). The mobile platform passes through three states: i) positioning at the input buffer coordinates; ii) positioning under the camera in order to determine the object and to grab it with the robotic arm; iii) returning to the initial position.
All three states are communicated (in the form of Cartesian coordinates) to the mobile platform using the ARIA (Advanced Robot Interface for Applications) software. The main application writes the desired position for the platform, and the control application of the platform reads the file and commands the robot to go to that position.

Fig. 17. The object recognition algorithm

ARIA can be used to control mobile robots such as Pioneer, PatrolBot, PeopleBot, Seekur, etc. It is an object-oriented Application Programming Interface (API), written in C++ and intended for the creation of intelligent high-level client-side software. ARIA provides tools to integrate I/O and includes comprehensive support for all MobileRobots robot accessories, including the laser range finders, control of the pan-tilt-zoom camera or pan-tilt unit, the Pioneer Gripper and Arm, and more.

Fig. 18. Result of recognition algorithm
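A minimal sketch (not the authors' code) of this handoff: an ARIA-based control program reads the poses written by the main application and drives the platform through the three states. The file name is hypothetical, and ArActionGotoStraight is taken from the standard ARIA examples as a plausible go-to-point action:

#include <Aria.h>
#include <fstream>
#include <vector>

int main(int argc, char** argv) {
    Aria::init();
    ArArgumentParser parser(&argc, argv);
    parser.loadDefaultArguments();
    ArRobot robot;
    ArRobotConnector connector(&parser, &robot);
    if (!connector.connectRobot())
        Aria::exit(1);
    robot.runAsync(true);
    robot.enableMotors();

    // Poses written by the main application: input buffer, camera, home.
    std::vector<ArPose> targets;
    std::ifstream in("platform_targets.txt");   // hypothetical file name
    double x, y;
    while (in >> x >> y)
        targets.push_back(ArPose(x, y));

    ArActionGotoStraight gotoAction("goto");    // simple go-to-point action
    robot.addAction(&gotoAction, 50);
    for (const ArPose& p : targets) {
        robot.lock();
        gotoAction.setGoal(p);
        robot.unlock();
        while (!gotoAction.haveAchievedGoal())
            ArUtil::sleep(100);                 // wait out the current state
    }
    Aria::exit(0);
}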
V. CONCLUSIONS

This paper presented practical laboratory applications in robotic manipulation, computer vision, and mechatronics. These applications provide the hands-on experience that is essential for robotics education.

The integration of real-world problems with complex imagery can improve image-processing applications. Effective applications-based exercises should be closely aligned with lecture concepts and should be assigned with little time delay. An OpenCV environment allows a focus on the imaging concepts and techniques with minimal programming difficulty.

Both applications, the object detection and the contour coordinates extraction methods, are implemented using a series of image processing techniques (border extraction, contour detection, contour extraction) along with template matching, with the aid of the OpenCV library.

In future work, motivated by this work, we will consider the situation in which mobile manipulators sort objects and place them in a known location.

REFERENCES

[1] C. Theis, I. Iossifidis and A. Steinhage, Image Processing Methods for Interactive Robot Control, Proceedings of the 10th IEEE International Workshop on Robot and Human Interactive Communication, 2001, pp. 424-429.
[2] G. Dougherty, Digital Image Processing for Medical Applications, Cambridge University Press, 2009.
[3] M. Pesenson, W. Roby and B. McCollum, Multiscale Astronomical Image Processing Based on Nonlinear Partial Differential Equations, The Astrophysical Journal, 683(1), 2008, pp. 566-576.
[4] S. A. Drury, Image Interpretation in Geology, Nelson Thornes, UK, 2001.
[5] Yu-fai Fung, H. Lee and M. F. Ercan, Image Processing Application in Toll Collection, IAENG International Journal of Computer Science, 32(4), 2006, 6 pages.
[6] M. Ejiri, T. Miyatake, H. Sako, A. Nagasaka and S. Nagaya, Evolution of Real-time Image Processing in Practical Applications, IAPR Workshop on Machine Vision Applications, Japan, 2000, pp. 177-186.
[7] G. Bradski and A. Kaehler, Learning OpenCV, O'Reilly Media, Inc., USA, 2008.
[8] Robai, CYTON V2 7DOF - Operating Manual, USA, 2010, 36 pages.
[9] Energid Technologies, Energid ActinSE Documentation, http://www.robai.com/content/docs/ActinSE_docs/.
[10] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd Edition, Prentice Hall, 2008.
[11] W. K. Pratt, Digital Image Processing: PIKS Inside, 3rd Edition, John Wiley & Sons, Inc., 2001.