From Openrobotino
Robot
Screenshot of the object recognition and robot triangulation application
Another screenshot of the triangulation application showing the robot's position on the soccer field
The task was to make the robot localize itself on a soccer field using its on-board camera. The field had two
goals on opposite sides, one painted yellow, the other blue. Each of the four corners of the field was marked by
a pole consisting of three cubes painted blue and yellow, alternating. The poles next to the yellow goal
carried blue-yellow-blue and the ones next to the blue goal yellow-blue-yellow. Besides, there were white
lines on the ground marking the field's boundaries. Which of those elements we would use to localize the robot was
pretty much up to us.
Contents
1 Basic Idea
2 Techniques
2.1 Libraries & Utilities
2.2 Implementation
2.2.1 Version 1 (First Implementation)
2.2.2 Version 2 (Improved)
3 Mathematics
4 Code Review
5 Bugs
6 Improvements
6.1 Recent Improvements
6.2 Ideas for further Improvements
7 Links
Basic Idea
Assuming the robot is standing somewhere on the soccer field, it starts turning until it finds a first pole.
Having found a first pole, it measures the time until it finds a second one. Knowing the speed at which it is rotating, you
can easily calculate the angle between the two detected poles. All of this is repeated for a third pole.
Now knowing the two angles between the three poles as seen from the robot's position, and given the geometry of the soccer
field as well as the location of the poles (which we can distinguish by their painting), we can calculate the robot's
position. The details of how to do this are described a little further down this article.
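The timing-based angle measurement described above can be sketched as follows. This is a simplified sketch, not the actual program code: the function name and the timestamp interface are assumptions, and the 15 deg/s constant is the rotation speed mentioned later in this article.

```python
# Sketch of the basic idea: the robot rotates at a constant, known speed
# and records the time at which each pole is detected; the time between
# two detections then translates directly into an angle.

ROTATION_SPEED_DEG_S = 15.0  # constant rotation speed of the robot


def angles_between_poles(detection_times):
    """Given the timestamps (in seconds) of three consecutive pole
    detections during one turn, return the two angles (in degrees)
    between pole 1 and pole 2 and between pole 2 and pole 3."""
    t1, t2, t3 = detection_times
    alpha = (t2 - t1) * ROTATION_SPEED_DEG_S  # angle pole 1 -> pole 2
    beta = (t3 - t2) * ROTATION_SPEED_DEG_S   # angle pole 2 -> pole 3
    return alpha, beta
```

For example, detections at 0 s, 4 s and 10 s at 15 deg/s correspond to angles of 60 and 90 degrees.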
Techniques
Libraries & Utilities
Implementation
Version 2 (Improved)
Thus we have performed a full turn and know that the sum of all measured angles should be 360 degrees.
Now we can use the difference between the real and the expected value to auto-scale the angles.
Besides:
For each new localization, the resulting position is added to a list, and the average of all positions in
the list is shown on the soccer field.
With all these improvements, the accuracy of the robot localization has improved greatly and is now roughly
30 to 40 cm instead of 1 to 1.5 m.
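The two Version 2 improvements can be sketched as follows (function names are mine, not those of the actual code): scaling all angles of one full turn by a common factor so they sum to exactly 360 degrees, and averaging a list of position estimates.

```python
# Sketch of the two Version 2 improvements: angle auto-scaling and
# position averaging.

def autoscale_angles(angles):
    """Scale the raw angles measured during one full turn by a common
    factor so that they sum to exactly 360 degrees."""
    factor = 360.0 / sum(angles)
    return [a * factor for a in angles]


def average_position(positions):
    """Average a list of (x, y) position estimates."""
    n = len(positions)
    return (sum(p[0] for p in positions) / n,
            sum(p[1] for p in positions) / n)
```

Auto-scaling cancels any constant error in the assumed rotation speed, since only the ratios between the measured times matter afterwards.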
Mathematics
Triangles and quads used for triangulation
Triangulation of the robot position
Calculating the robot position is basically done by making use of the sum of angles in triangles and quads as well as
the laws of sines and cosines.
The second drawing on the right shows the one and only quad we refer to throughout the calculation (blue
line) as well as the two main triangles we make use of (painted green and orange). The first drawing is a
detailed drawing with all angles and vertices named exactly as they are named in the program code. In the following,
P1, P2 and P3 denote the three detected poles (P2 being the center one), R the robot's position, w and h the width and
height of the soccer field, and α and β the angles measured by the robot between P1 and P2 and between P2 and P3.
We know w and h, which are the width and the height of the soccer field, i.e. the lengths of the sides P2P3 and P1P2.
We know that the angle between those two sides at P2 is 90 degrees.
Angles α and β are also given, as they are measured by the robot as it constantly turns and recognizes the
poles.
1. Calculate one of the unknown angles in one of the two main triangles
Let φ1 and φ2 be the unknown angles of the blue quad at P1 and P3, and d the distance from the robot R to the center
pole P2. The law of sines for arbitrary triangles leads to
d / sin(φ1) = h / sin(α)   (1)
and
d / sin(φ2) = w / sin(β)   (2).
For convenience we define
k1 = h / sin(α) and k2 = w / sin(β).
With h and w being the height and width of the soccer field and α and β being the two angles between the three
poles measured by the robot, k1 and k2 can be considered given.
Furthermore, given the sum of angles in a quad, we find for the blue quad mentioned above that
α + β + 90° + φ1 + φ2 = 360°,
where α and β again are the angles between the three poles as stated above.
Therefore, we now have a direct relation between φ1 and φ2.
Again, for convenience let
σ = 270° − α − β, so that φ2 = σ − φ1   (3).
2. Solve for the unknown angle
Now isolating d in both equations (1) and (2), then using (3) to replace φ2 in (2), we get
k1 sin(φ1) = k2 sin(σ − φ1) = k2 (sin(σ) cos(φ1) − cos(σ) sin(φ1))
and
we find that
tan(φ1) = k2 sin(σ) / (k1 + k2 cos(σ)).
Now we have φ1, and via (3) also φ2.
3. Calculate the diagonal d, which splits up the quad formed by x and y into two right triangles
For the green triangle containing α and φ1, the law of sines (1) states that
d = k1 sin(φ1).
4. Calculate x and y, which are the robot's offsets from the center pole, i.e. the robot's position
Both of these offsets form a right triangle together with the diagonal d. With θ1 = 180° − α − φ1 being the angle of
the green triangle at P2, measured from the side P2P1, sine and cosine give
x = d sin(θ1)
and
y = d cos(θ1).
Code Review
Calculation of the robot's offset to the second of three poles given the two angles between the three poles:
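A minimal sketch of this calculation, using the notation of the Mathematics section rather than the identifiers of the actual program code (the function name and parameter names are mine): the center pole is taken as the origin, with the neighboring poles at distance h along the +y axis and distance w along the +x axis.

```python
import math

def triangulate(w, h, alpha_deg, beta_deg):
    """Offset (x, y) of the robot from the center pole.

    The center pole is the origin; a second pole sits at distance h
    along the +y axis, a third at distance w along the +x axis.
    alpha_deg is the angle measured between the first two poles,
    beta_deg the angle between the last two, as seen from the robot.
    """
    a = math.radians(alpha_deg)
    b = math.radians(beta_deg)
    k1 = h / math.sin(a)                  # law of sines, first triangle
    k2 = w / math.sin(b)                  # law of sines, second triangle
    sigma = math.radians(270.0) - a - b   # from the quad's angle sum
    # unknown quad angle at the first pole
    phi1 = math.atan2(k2 * math.sin(sigma), k1 + k2 * math.cos(sigma))
    d = k1 * math.sin(phi1)               # distance robot -> center pole
    theta1 = math.pi - a - phi1           # angle at the center pole
    return d * math.sin(theta1), d * math.cos(theta1)
```

For example, with a 6 x 4 pole rectangle and the robot standing at offsets (2, 1) from the center pole, the measured angles come out to roughly 82.9 and 139.4 degrees, and the sketch recovers approximately (2.0, 1.0).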
Bugs
Inaccuracy due to a small logical mistake, which is to be corrected soon ;-)
Unfortunately, the inaccuracy was not related to this logical mistake and therefore was not solved by its
correction. Nevertheless, averaging more than just one position as well as auto-scaling the angles has improved
accuracy a lot.
Every four measurements, there is one measurement which gives a completely wrong position. It seems that this
happens every time the measurement is based on the three most distant poles, but this bug still needs to be
pinpointed exactly. Anyway, there is a small chance that it is related to the bug mentioned above and thus is
automatically solved along with it.
Fortunately, exactly that turned out to be the case: correcting the logical mistake mentioned above completely
solved this problem.
Due to lack of time before a pending term abroad, the code is spaghetti code, and both the user
interface and the thread synchronization are a mess. All of this should be cleaned up some day.
Improvements
Recent Improvements
Perform a full turn before doing the calculation. By summing up all the measured angles, you can
determine how close to 360 degrees you are and scale all angles by a common factor so that they sum up to exactly 360
degrees. Angles are thus much more accurate, as they no longer depend on the actual motor speed mapping.
Calculate robot position as the average position of more than just one measurement
Currently the robot position is calculated by triangulating three poles. As there are not three but four poles on the
field, the robot starts with a different pole for each triangulation. This provides four different combinations of poles for
triangulation. Nevertheless, the current implementation makes no use of the results of triangulations done
before the current one. Averaging the positions from the last four triangulations (one for each combination of
poles) should improve accuracy further.
Ideas for further Improvements
Currently, angles between poles are calculated by multiplying the set rotation speed by the time elapsed
between the currently detected pole and the previously detected one. Querying the robot for the actual motor speed values
and using those to calculate the angle, instead of the set values, might improve accuracy.
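This suggestion could be sketched as follows. The sampling interface is hypothetical (the actual OpenRobotino API is not shown here): assume the measured rotation speed is polled periodically between two pole detections, and the turned angle is obtained by integrating those samples over time.

```python
# Sketch of the suggested improvement: integrate the rotation speed
# actually reported by the motors instead of multiplying the set speed
# by the elapsed time.

def angle_from_samples(samples):
    """samples: list of (timestamp_s, measured_speed_deg_s) pairs taken
    between two pole detections. Returns the turned angle in degrees
    using trapezoidal integration over the samples."""
    angle = 0.0
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        angle += 0.5 * (v0 + v1) * (t1 - t0)
    return angle
```

With a perfectly constant speed this reduces to speed times elapsed time; the gain comes when the real speed deviates from the set value, e.g. while accelerating.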
The camera could use some calibration. Besides, object recognition leaves quite some room for improvement. Pole
recognition could be tightened by checking the width/height ratio of the rectangles making up a pole, to minimize the
chance of other objects (such as humans wearing colored clothing on the field) being detected as a pole. A check for
minimum size is already implemented to get rid of very small objects detected by the camera; a check for maximum size
might bring some improvement as well.
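Such a filter could look like the sketch below. All thresholds are made-up example values, not taken from the actual implementation; a pole of three stacked cubes is roughly three times as tall as it is wide, which motivates the ratio bounds.

```python
# Sketch of the proposed pole plausibility filter: combine the existing
# minimum-size check with a maximum-size check and a width/height ratio
# check. Thresholds are illustrative only.

MIN_AREA = 100                      # existing minimum-size check (pixels^2)
MAX_AREA = 20000                    # proposed maximum-size check
MIN_RATIO, MAX_RATIO = 0.15, 0.5    # width/height bounds for a pole


def plausible_pole(width, height):
    """Return True if a detected rectangle could plausibly be a pole."""
    area = width * height
    if not (MIN_AREA <= area <= MAX_AREA):
        return False
    ratio = width / height
    return MIN_RATIO <= ratio <= MAX_RATIO
```

A wide rectangle such as a person's shirt would fail the ratio check even if its size is in range.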
In addition to triangulating the four possible combinations of poles, one could also make use of the two goals and
maybe the lines on the field to acquire more position estimates. These can be used for averaging as well, but also to
detect and discard extremely inaccurate or wrong values.
Make it faster
CPU load is far from its maximum, which means image processing and object detection should be able to keep up
with a faster rotation speed of the robot. If you're not the cosy type, you might be happier with the robot turning
faster than the current 15 deg/s, especially as the robot can go a lot faster than that.
Links
Personal homepage with information, links and a video of the robot running the localization program [1]
(http://www.juergentreml.de/joomla/index.php?option=com_content&task=view&id=16&
Itemid=37#Screenshots)