Abstract: This paper designs a restaurant service robot that provides basic services, such as taking orders, fetching and delivering food, and settling bills, for the customers in a robot restaurant. In this study, both a three-landmark positioning algorithm and a landmark-based localization algorithm are proposed to localize the mobile robot with rough precision in the restaurant, and Optical Character Recognition (OCR) technology is used to distinguish the unique table numbers of different landmarks. When high localization precision must be guaranteed around the pantry table, an RFID-based localization algorithm is proposed to localize the mobile robot. Various experiments show that the proposed algorithms estimate the robot pose reasonably well and evaluate the localization performance accurately. Finally, the proposed service robot realizes real-time self-localization in the restaurant.
Keywords: service robot; landmark; localization; OCR; RFID
1. Introduction
Mobile robot localization and object pose estimation in a working environment have been central research activities in mobile robotics. Solving the mobile robot localization problem requires addressing two main problems (R. Siegwart et al., 2004): the robot must have a representation of the environment, and the robot must have a representation of its belief regarding its pose in this environment. Sensors are the basis for addressing both problems. Many of the sensors available to mobile robots are introduced in (R. Siegwart et al., 2004), together with their basic principles and performance limitations. Based on this introduction, ultrasonic sensors (J. Leonard et al., 1991), goniometers (P. Bonnifait et al., 1998), laser range finders (A. Arsenio et al., 1998), and CCD cameras (Y. Yamamoto et al., 2005) are the sensors most commonly applied in mobile robot localization projects to gather information for high-precision pose estimation. In this project, we mainly use a binocular stereo camera and RFID sensors to gather enough information to realize self-localization and self-navigation.
Vision systems are passive and of high resolution, and they are the most promising sensors for future generations of mobile robots. Therefore, localization using vision sensors has drawn much attention in recent years. The data provided by a single camera is essentially 2D, but using a stereo camera or monocular image sequences, full 3D information can be reconstructed. Much research has studied the application of stereo vision or image sequences. For example, a vision-based mobile robot localization and mapping algorithm which uses SIFT (scale-invariant feature transform) features is applied for
Yu Qing-xiao, Yuan Can, Fu Zhuang and Zhao Yan-zheng: Research of the Localization of Restaurant Service Robot
The depth z of a point can be recovered from its disparity d, i.e. the difference between the horizontal image coordinates x_L and x_R of its projections in the left and right images:

    z = b·f / d = b·f / (x_L − x_R)    (4)

where b is the baseline between the two cameras and f is the focal length.
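As a quick illustration, the depth recovery z = b·f/(x_L − x_R) can be sketched as follows; the baseline, focal length and pixel coordinates are illustrative values, not the robot's actual calibration:

```python
def depth_from_disparity(b_cm, f_px, x_left_px, x_right_px):
    """Depth z = b*f / (x_L - x_R) for a point matched in a rectified stereo pair."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return b_cm * f_px / disparity

# Illustrative numbers: 12 cm baseline, 700 px focal length,
# a point imaged at x = 420 px (left) and x = 385 px (right).
z = depth_from_disparity(12.0, 700.0, 420.0, 385.0)
print(round(z, 1))  # 240.0
```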
Let (x_o, y_o) denote the robot position, (x_o1, y_o1), (x_o2, y_o2) and (x_o3, y_o3) the global positions of the three landmarks, and D1, D2, D3 the measured distances to them:

    (x_o1 − x_o)² + (y_o1 − y_o)² = D1²
    (x_o2 − x_o)² + (y_o2 − y_o)² = D2²    (5)
    (x_o3 − x_o)² + (y_o3 − y_o)² = D3²
Supposing x_o1 − x_o = Δx and y_o1 − y_o = Δy, and subtracting the first equation from the other two, the formulas can be simplified as:

    (x_o2 − x_o1)² + (y_o2 − y_o1)² + 2(x_o2 − x_o1)·Δx + 2(y_o2 − y_o1)·Δy = D2² − D1²    (6)
    (x_o3 − x_o1)² + (y_o3 − y_o1)² + 2(x_o3 − x_o1)·Δx + 2(y_o3 − y_o1)·Δy = D3² − D1²
Solving this linear system for Δy gives:

    Δy = [(x_o3 − x_o1)·(D2² − D1² − (x_o2 − x_o1)² − (y_o2 − y_o1)²) − (x_o2 − x_o1)·(D3² − D1² − (x_o3 − x_o1)² − (y_o3 − y_o1)²)] / [2·((x_o3 − x_o1)·(y_o2 − y_o1) − (x_o2 − x_o1)·(y_o3 − y_o1))]    (7)

    y_o = y_o1 − Δy    (8)

and Δx and x_o follow symmetrically.
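The linearized solve in formulas (6)-(8) can be sketched in a few lines of Python; the function name is ours, and the landmark coordinates and distances below are the example values used later in this paper:

```python
def trilaterate(landmarks, distances):
    """Solve the linearized three-circle system for the robot position
    (x_o, y_o), given three landmark positions and measured distances."""
    (x1, y1), (x2, y2), (x3, y3) = landmarks
    D1, D2, D3 = distances
    a1, b1 = x2 - x1, y2 - y1
    a2, b2 = x3 - x1, y3 - y1
    c1 = D2**2 - D1**2 - a1**2 - b1**2
    c2 = D3**2 - D1**2 - a2**2 - b2**2
    # Cramer's rule on: 2*a*dx + 2*b*dy = c  (two equations)
    det = 2.0 * (a1 * b2 - a2 * b1)
    dx = (c1 * b2 - c2 * b1) / det
    dy = (a1 * c2 - a2 * c1) / det
    return x1 - dx, y1 - dy

x, y = trilaterate([(225, 530), (284, 385), (115, 356)], [220, 120, 104])
print(round(x, 1), round(y, 1))  # 198.0 317.3
```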
The transformation T from the camera coordinate system to the robot coordinate system (RCS) is composed of a rotation about the z-axis by θ, a rotation about the x-axis by φ, and a translation of 6 along the x-axis, T = T(z, θ)·T(x, φ)·T(x, 6):

    T = | cosθ   −sinθ·cosφ    sinθ·sinφ   6·cosθ |
        | sinθ    cosθ·cosφ   −cosθ·sinφ   6·sinθ |    (9)
        | 0       sinφ          cosφ        0     |
        | 0       0             0           1     |

so that [x', y', z', 1]ᵀ = T·[x'', y'', z'', 1]ᵀ, i.e.:

    x' = x''·cosθ − y''·sinθ·cosφ + z''·sinθ·sinφ + 6·cosθ
    y' = x''·sinθ + y''·cosθ·cosφ − z''·cosθ·sinφ + 6·sinθ    (10)
    z' = y''·sinφ + z''·cosφ

Where: (x'', y'', z'') is the landmark position in the camera coordinate system and (x', y', z') is its position in the RCS.
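A minimal sketch of this transform in plain Python follows; the 15° tilt and the camera-frame point are taken from our reading of the paper's numeric example, so treat them as assumed values:

```python
import math

def camera_to_rcs_matrix(theta, phi, tx=6.0):
    """Homogeneous transform T = T(z,theta) * T(x,phi) * T(x,tx)."""
    ct, st = math.cos(theta), math.sin(theta)
    cp, sp = math.cos(phi), math.sin(phi)
    return [
        [ct, -st * cp,  st * sp, tx * ct],
        [st,  ct * cp, -ct * sp, tx * st],
        [0.0, sp,       cp,      0.0],
        [0.0, 0.0,      0.0,     1.0],
    ]

def apply(T, p):
    """Multiply a 4x4 matrix with a homogeneous point."""
    return [sum(T[i][j] * p[j] for j in range(4)) for i in range(4)]

T = camera_to_rcs_matrix(theta=0.0, phi=math.radians(15))
# Landmark at (17.1, 175.3, -23.1) in the camera frame -> RCS coordinates.
x, y, z, _ = apply(T, [17.1, 175.3, -23.1, 1.0])
print(round(x, 1), round(y, 1), round(z, 1))  # 23.1 175.3 23.1
```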
According to the geometry shown in Fig. 1, the transformation T' from the RCS to the global coordinate system (GCS) is composed of a rotation about the z-axis by α and a translation (x0, y0):

    T' = | cosα   −sinα   0   x0 |
         | sinα    cosα   0   y0 |    (11)
         | 0       0      1   0  |
         | 0       0      0   1  |

Since the motion is planar, the equivalent 2D homogeneous form is:

    T'' = | cosα   −sinα   x0 |
          | sinα    cosα   y0 |    (12)
          | 0       0      1  |

    | x |   | cosα   −sinα   x0 |   | x' |
    | y | = | sinα    cosα   y0 | · | y' |    (13)
    | 1 |   | 0       0      1  |   | 1  |

Where: (x', y') is the landmark position in the RCS, (x, y) is its position in the GCS, and (x0, y0, α) is the robot pose.
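Equation (13) is a standard 2D rigid-body transform; a minimal sketch, with an invented robot pose:

```python
import math

def rcs_to_gcs(x_r, y_r, pose):
    """Map a point (x_r, y_r) in the robot frame to the global frame,
    given the robot pose (x0, y0, alpha)."""
    x0, y0, alpha = pose
    ca, sa = math.cos(alpha), math.sin(alpha)
    return (x_r * ca - y_r * sa + x0, x_r * sa + y_r * ca + y0)

# Invented example: a point at (2, 0) in the robot frame, with the robot
# at (1, 1) rotated by 90 degrees, lands at (1, 3) in the global frame.
print(tuple(round(v, 6) for v in rcs_to_gcs(2.0, 0.0, (1.0, 1.0, math.radians(90)))))
```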
When x1 = x2, y1 = y2 and f(x, y) = F(x, y), T(0, 0) = ∫∫ f²(x, y) dx dy, which is the autocorrelation function of the input character. Since T(0, 0) ≥ T(x, y), T(x, y) reaches its peak at T(0, 0), while sub-peaks appear at the other standard characters. As long as the peak and the sub-peaks are not equal, we can determine and identify the character to be recognized by selecting an appropriate threshold. During modeling and matching, shape matching methods all match the character based on the character block; then, according to their similarity, the recognition results are obtained.
During character recognition using the graph matching principle, binary characters are generally adopted. The basic idea is to establish a standard template T_i for each character; the image to be recognized is denoted as Y, and all images have the size M × N. Matching the unknown image against the templates one by one, the similarity S_i can be derived from the following formula:

    S_i = [Σ_{m=1}^{M} Σ_{n=1}^{N} Y(m, n)·T_i(m, n)] / [Σ_{m=1}^{M} Σ_{n=1}^{N} T_i(m, n)]    (15)

The recognized character is the one whose template yields the maximum similarity:

    k = arg max_i S_i    (16)
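A toy version of this template matcher over 0/1 arrays; the 3×3 "characters" are invented for illustration only:

```python
def similarity(Y, T):
    """S_i = sum over pixels of Y*T, divided by sum of T -- formula (15)."""
    overlap = sum(y * t for row_y, row_t in zip(Y, T) for y, t in zip(row_y, row_t))
    total = sum(t for row in T for t in row)
    return overlap / total

templates = {
    "I": [[0, 1, 0], [0, 1, 0], [0, 1, 0]],
    "L": [[1, 0, 0], [1, 0, 0], [1, 1, 1]],
}
unknown = [[0, 1, 0], [0, 1, 0], [0, 1, 1]]  # a noisy "I"

# Pick the template with maximum similarity.
best = max(templates, key=lambda k: similarity(unknown, templates[k]))
print(best)  # I
```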
Now suppose that ω_L and ω_R are the angular rates of the left and the right wheels, respectively. Then, we can get the following formulas:

    ṗ_x = (r/2)·(ω_L + ω_R)·cosθ
    ṗ_y = (r/2)·(ω_L + ω_R)·sinθ    (17)
    θ̇ = (r/d)·(ω_R − ω_L)
    p_x,i+1 = p_x,i + (r/2)·∫_{t_i}^{t_{i+1}} (ω_L + ω_R)·cosθ dt
    p_y,i+1 = p_y,i + (r/2)·∫_{t_i}^{t_{i+1}} (ω_L + ω_R)·sinθ dt    (18)
    θ_{i+1} = θ_i + (r/d)·∫_{t_i}^{t_{i+1}} (ω_R − ω_L) dt
Where:
ω_L and ω_R are the angular rates of the left and the right wheels, respectively;
d is the distance between the two wheels;
r is the radius of the two wheels of the robot;
Δt = t_{i+1} − t_i is the sample time.
In order to increase the computational efficiency, we suppose that cosθ is a constant value in each interval. So, the pose P can also be calculated by the following equations:
    p_x,i+1 = p_x,i + (r/2)·cosθ_i·∫_{t_i}^{t_{i+1}} (ω_L + ω_R) dt
    p_y,i+1 = p_y,i + (r/2)·sinθ_i·∫_{t_i}^{t_{i+1}} (ω_L + ω_R) dt    (19)
    θ_{i+1} = θ_i + (r/d)·∫_{t_i}^{t_{i+1}} (ω_R − ω_L) dt
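The discrete update (19) can be sketched as a dead-reckoning loop; the wheel radius, track width and speeds are illustrative, not the robot's real parameters:

```python
import math

def odometry_step(pose, w_l, w_r, r, d, dt):
    """One step of update (19): hold cos/sin at theta_i over the interval dt."""
    x, y, theta = pose
    ds = r / 2.0 * (w_l + w_r) * dt  # distance travelled this interval
    x += ds * math.cos(theta)
    y += ds * math.sin(theta)
    theta += r / d * (w_r - w_l) * dt
    return x, y, theta

# Illustrative: 5 cm wheels, 30 cm track, equal wheel speeds -> straight line.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = odometry_step(pose, 2.0, 2.0, r=5.0, d=30.0, dt=0.01)
print(tuple(round(v, 2) for v in pose))  # (10.0, 0.0, 0.0)
```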
    x + l1·sinα = x0
    y + l1·cosα = y0    (20)
    x + l2·sin(nπ/2 + α) + d·cos(nπ/2 + α) = x0
    y + l2·cos(nπ/2 + α) − d·sin(nπ/2 + α) = y0,    n ∈ {0, 1, 2, 3}    (24)

    x + l1·sin(nπ/2 + α) = x0
    y + l1·cos(nπ/2 + α) = y0,    n ∈ {0, 1, 2, 3}    (25)
Fig. 8. The solution of the center coordinates of the food delivering robot

In the formula: l1 = 129.7 + 30 − 20.38/2 = 149.51; α is the deflection angle between the robot and the normal direction of the object.

If it is the right RFID reader that detects the RFID tag, the center coordinates of the robot, (x0, y0), can be denoted as follows:

    x + l2·sinα + d·cosα = x0
    y + l2·cosα − d·sinα = y0    (21)

If it is the left RFID reader that detects the RFID tag, the center coordinates of the robot, (x0, y0), can be denoted as follows:

    x + l2·sinα − d·cosα = x0
    y + l2·cosα + d·sinα = y0    (22)
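The two reader cases differ only in the sign of the d term, which the sketch below makes explicit; the tag position, offsets and angle are invented values:

```python
import math

def robot_center(tag_x, tag_y, alpha, l2, d, reader):
    """Center (x0, y0) of the robot from a detected RFID tag.
    'reader' selects the sign of the lateral offset d."""
    sign = 1.0 if reader == "right" else -1.0
    x0 = tag_x + l2 * math.sin(alpha) + sign * d * math.cos(alpha)
    y0 = tag_y + l2 * math.cos(alpha) - sign * d * math.sin(alpha)
    return x0, y0

# Invented example: tag at (100, 200), robot aligned (alpha = 0),
# longitudinal offset l2 = 150, lateral reader offset d = 20.
print(robot_center(100.0, 200.0, 0.0, 150.0, 20.0, "right"))  # (120.0, 350.0)
print(robot_center(100.0, 200.0, 0.0, 150.0, 20.0, "left"))   # (80.0, 350.0)
```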
    x + l2·sin(nπ/2 + α) − d·cos(nπ/2 + α) = x0
    y + l2·cos(nπ/2 + α) + d·sin(nπ/2 + α) = y0,    n ∈ {0, 1, 2, 3}    (26)
The deflection angle between the robot and the object can be reckoned as:

    α = k·atan((x_m − x_0)/(y_m − y_0))    (27)

    α = atan(|x_m − x_0| / |y_m − y_0|)    (23)
Landmark          x_global (cm)   y_global (cm)   D (cm)
First landmark    225             530             220
Second landmark   284             385             120
Third landmark    115             356             104
Substituting these values into formulas (7) and (8):

    y_o = 530 − [(−110)·(120² − 220² − 59² − 145²) − 59·(104² − 220² − 110² − 174²)] / [2·((−110)·(−145) − 59·(−174))] ≈ 317.3
With θ = 0° and φ = 15° (cos 15° = 0.9659, sin 15° = 0.2588), the transformation matrix becomes:

    T = | 1   0        0         6 |
        | 0   0.9659   −0.2588   0 |
        | 0   0.2588    0.9659   0 |
        | 0   0         0        1 |

Then, the location of the landmark relative to the RCS could be written as:

    [x', y', z', 1]ᵀ = T·[17.1, 175.3, −23.1, 1]ᵀ = [23.1, 175.3, 23, 1]ᵀ
x_global = 373, y_global = 430; angles: 50°, 15°, 0°.