Grape Detection By Image Processing
R. Chamelat1, E. Rosso1, A. Choksuriwong2, C. Rosenberger2, H. Laurent2, P. Bro3
1 ENSI de Bourges
I. Introduction
The use of robotic systems for fruit harvesting has been of interest for almost four decades [?]. Many agricultural systems have been described in the literature for the automation of various harvesting processes, such as fruit location, detachment, transfer, obstacle avoidance, maneuvering, etc. [1], [2], [5], [12], [14]. Some good survey papers have covered several aspects of these systems [18], [19]. Of particular interest has been the harvesting of various fruits and vegetables [5], [21], [22], [23]. In this paper, we focus on a system for grape harvesting and transportation. The wine industry [20] is interested in using autonomous robotic systems to realize these tasks for multiple reasons.

1-4244-0136-4/06/$20.00 ©2006 IEEE
Fig. 1. Some Existing Grape Harvesting Machines

First of all, existing machines, such as those shown in Figure 1, harvest grapes by striking the vine. This harvesting process is not suitable for some wines, such as champagne, for chemical reasons (oxidation). Moreover, some deposits are collected along with the grapes. Finally, these machines need at least one operator to harvest.

Second, a vineyard can be harvested by humans. A robotic system is only worthwhile if its picking rate is higher than the manual picking rate. For large vineyards, a large number of workers is needed, so an autonomous system could also reduce the harvesting cost. Another important issue is that quality control using these systems must be superior to that achieved manually by humans.

One important step in this process is first to locate grapes in a vine area by using artificial vision. There are many applications of image processing in agriculture. Many works have addressed the quality control of fruits or vegetables by artificial vision [9], [15]. Such systems increase worker productivity, augment product throughput and improve selection reliability and uniformity. Even for a human, grape detection is not easy, especially when the grapes and the leaves have a similar color. For such an application, the environment makes grape detection difficult. Indeed, the luminance of images can vary widely (sun, shadow, ...). A grape can appear at different scales and can be occluded by a leaf.
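The block-based color analysis suggested above can be sketched as follows: the image is split into small square blocks and each block is summarized by color statistics. This is a minimal illustration, assuming per-channel mean and standard deviation as features; the paper's exact color parameters are not reproduced here, only the 16 × 16 block decomposition it mentions.

```python
import numpy as np

def block_color_features(image, w=16):
    """Split an RGB image into w x w blocks and compute simple
    per-block color statistics (mean and standard deviation of
    each channel). Illustrative features, not the paper's exact
    color parameters."""
    h, wd, _ = image.shape
    features = []
    for y in range(0, h - h % w, w):
        for x in range(0, wd - wd % w, w):
            block = image[y:y + w, x:x + w].reshape(-1, 3)
            features.append(np.concatenate([block.mean(axis=0),
                                            block.std(axis=0)]))
    return np.array(features)

# Example: a synthetic 64x64 RGB image yields (64/16)^2 = 16 blocks,
# each described by 6 color statistics.
img = np.random.randint(0, 256, size=(64, 64, 3)).astype(np.float64)
feats = block_color_features(img)
print(feats.shape)  # (16, 6)
```

Each row of the resulting matrix is one candidate block to be classified as grape or non-grape.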
Fig. 5. Two objects with a similar shape
Fig. 6. Zernike moments values of two objects having a similar shape
Fig. 7. Zernike moments values computed on the color image
Fig. 8. Example of computed margin in the linear case

f(x) = ⟨w, Φ(x)⟩ + b = Σ_{i=1}^{ℓ} αᵢ* yᵢ K(xᵢ, x) + b   (5)
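Equation (5) is the standard SVM decision function: a weighted sum of kernel evaluations between the input and the support vectors, plus a bias. A minimal sketch on toy data (scikit-learn is an assumption here; the paper does not name its SVM implementation) shows that evaluating the sum by hand reproduces the library's decision values:

```python
import numpy as np
from sklearn.svm import SVC

# Toy two-class data standing in for grape / non-grape feature blocks.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

clf = SVC(kernel="rbf", gamma=0.5).fit(X, y)

def decision(x):
    """Evaluate f(x) = sum_i alpha_i* y_i K(x_i, x) + b over the
    fitted support vectors; dual_coef_ already stores alpha_i* y_i."""
    k = np.exp(-0.5 * np.sum((clf.support_vectors_ - x) ** 2, axis=1))
    return clf.dual_coef_[0] @ k + clf.intercept_[0]

x_new = np.array([1.5, 1.5])
print(np.isclose(decision(x_new), clf.decision_function([x_new])[0]))  # True
```

The sign of f(x) gives the predicted class, and its magnitude measures the distance to the separating margin of Figure 8.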
The performance of our algorithm is analyzed with respect to the ratio of examples in the learning set. Hence, for a given ratio, the learning and testing sets have been built by randomly splitting all examples. Then, due to the randomness of this procedure, multiple trials have been performed with different random draws of the learning and testing sets.

2. The classes yᵢ ∈ {−1, 1} of each block, that is to say, whether the block contains some grape or not.
3. Algorithm performance: the efficiency is given according to the number of examples selected from the learning dataset. For a given rate, the learning and the test sets are randomly constructed among all the examples.
4. Number of trials: fixed to 10, in order to compensate for the random drawing.
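The evaluation protocol above (a random learning/test split at a given ratio, with the error averaged over 10 random draws) can be sketched as follows. The majority-class predictor is a hypothetical stand-in for the paper's SVM classifier, used only to keep the sketch self-contained:

```python
import random

def split_and_score(labels, ratio, n_trials=10, seed=0):
    """Average test error over n_trials random learning/test splits
    at the given ratio, mirroring the protocol described above.
    A trivial majority-class predictor replaces the real classifier."""
    rng = random.Random(seed)
    n = len(labels)
    errors = []
    for _ in range(n_trials):
        idx = list(range(n))
        rng.shuffle(idx)                     # new random draw each trial
        cut = max(1, int(ratio * n))
        train, test = idx[:cut], idx[cut:]
        # "Training": pick the majority class of the learning set.
        pred = max(set(labels[i] for i in train),
                   key=lambda c: sum(labels[i] == c for i in train))
        errors.append(sum(labels[i] != pred for i in test) / len(test))
    return sum(errors) / n_trials

labels = [1] * 90 + [-1] * 10  # e.g. 90% grape blocks, 10% background
print(split_and_score(labels, ratio=0.1))
```

Averaging over the trials compensates for the variance introduced by the random split, as item 4 above prescribes.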
Fig. 10. Efficiency of grape detection by taking into account only color information

Figure 11 presents the efficiency of the proposed approach by using all the parameters (color parameters and invariant descriptors). We can see that with only 10% of samples used in the learning database (which corresponds to 2160 samples with W = 16, for example), we have an error for grape recognition of less than 0.5%.

The computation time of the learning step for the blocks of size 16 × 16 pixels for the 17 images is 5 minutes on a Pentium 4 with a 2.8 GHz processor. The recognition step (identification of each block of size 16 × 16 pixels) takes less than one second.

References
[1] B. Allotta, G. Buttazzo, P. Dario and F. A. Quaglia, "Force/torque sensor-based technique for robot harvesting of fruits and vegetables", Proceedings of the IEEE International Workshop on Intelligent Robots and Systems (IROS), Vol. 1, pp. 231-235, 1990.
[2] R. Ceres, J. L. Pons, A. R. Jimenez, J. M. Martin and L. Calderon, "Design and Implementation of An Aided Fruit-Harvesting Robot (Agrobot)", Industrial Robot, Vol. 25, No. 5, pp. 337-346, 1998.
[3] A. Choksuriwong, H. Laurent and B. Emile, "Comparison of invariant descriptors for object recognition", Proceedings of the IEEE International Conference on Image Processing, 2005.
[4] C.-W. Chong, P. Raveendran and R. Mukundan, "A comparative analysis of algorithms for fast computation of Zernike moments", Pattern Recognition, Vol. 36, pp. 731-742, 2003.
[5] Y. Edan, D. Rogozin, T. Flash and G. E. Miles, "Robotic melon harvesting", IEEE Transactions on Robotics and Automation, Vol. 16, pp. 831-835, 2000.
[6] R.-E. Fan, P.-H. Chen and C.-J. Lin, "Working set selection using second order information for training SVM", Journal of Machine Learning Research, Vol. 6, pp. 1889-1918, 2005.
[7] A. K. Jain, R. P. W. Duin and J. Mao, "Statistical Pattern Recognition: A Review", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 1, pp. 4-37, 2000.