Integration of LiDAR and Camera Data for 3D Reconstruction

Tsung-I Chen1, Yu-Xiang Zhang1, Chia-Yen Chen1, Member, IEEE and Chia-Hung Yeh2, Member, IEEE
1 Dept. of Computer Science and Information Engineering, National University of Kaohsiung, Kaohsiung, Taiwan
2 Dept. of Electrical Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan
Fig. 2. Projection of calibrated depth data onto images.

A. Egomotion estimation

We perform egomotion estimation using images obtained from the stereo cameras. The method proposed in [8] is used to determine the path of the 3D reconstruction system as it moves through the environment. Fig. 3 shows the path obtained using the selected egomotion estimation method. To demonstrate the performance of egomotion estimation, we moved the system forward and back along the same straight path; as Fig. 3 shows, the two estimated paths overlap almost perfectly.

Fig. 3. Paths showing egomotion estimated using the method in [8]; the units are in mm.

B. 3D reconstruction

In the reconstruction step, the range and image data at each acquisition point are registered using the correspondences calculated from the calibration process, and the egomotion estimated along the path of reconstruction is used to align the data acquired at different points on the path, so that a 3D model of the environment can be reconstructed over the entire course. We selected two locations for reconstruction: an indoor location, shown in Fig. 4(a), and an outdoor location spanning over 10 m in range, shown in Fig. 4(b).

Fig. 4. 3D reconstruction locations: (a) indoor and (b) outdoor.

IV. EXPERIMENT RESULT

The indoor and outdoor 3D reconstructions are shown in Fig. 5(a) and (b), respectively. As the results show, the 3D reconstructions acquired at different points along the paths are integrated well, producing dense 3D models of the tested environments.

Fig. 5. 3D reconstruction results for the (a) indoor and (b) outdoor locations.

V. CONCLUSIONS

In this work, we integrated a multi-beam LiDAR device and a pair of binocular cameras to achieve egomotion estimation and large-scale 3D reconstruction. The system simultaneously acquires both range data and image data, calibrates the LiDAR device and the cameras to determine their correspondences, and calculates egomotion as the system moves through the environment. The calibrated device parameters and the estimated egomotion are used to register and integrate the range data from the LiDAR device with the image data from the cameras to construct a 3D model of the environment. Both indoor and outdoor experiments have been performed to demonstrate the practicality of the proposed 3D reconstruction system. In future work, we will continue to improve the system's reconstruction accuracy and efficiency.

ACKNOWLEDGMENT

The authors would like to thank the National Science Council of Taiwan for sponsoring this work under grant numbers NSC 102-2221-E-390-021, NSC 103-2218-E-110-006, NSC 102-2221-E-110-032-MY3 and NSC 101-2221-E-110-093-MY2.

REFERENCES

[1] G. Erico, "How Google's Self-Driving Car Works," [online] Available from: http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/how-google-self-driving-car-works [Accessed 18 July 2013].
[2] P. Stone, P. Beeson, T. Merili and R. Madigan, "DARPA Urban Challenge Technical Report," technical report, Austin Robot Technology, 2007.
[3] H. Moravec, "Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover," Ph.D. thesis, Stanford Univ., CA, 1980.
[4] M. Irani, B. Rousso and S. Peleg, "Recovery of Ego-Motion using Image Stabilization," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 21-23, 1994.
[5] D. Nister, O. Naroditsky and J. Bergen, "Visual Odometry," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1, pp. 652-659, 2004.
[6] B. Williams and I. Reid, "On Combining Visual SLAM and Visual Odometry," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3494-3500, 2010.
[7] C.-Y. Chen and H.-J. Chien, "Geometric Calibration of a Multi-layer LiDAR System and Image Sensors using Plane-based Implicit Laser Parameters for Textured 3-D Depth Reconstruction," J. Visual Communication and Image Representation, DOI:10.1016/j.jvcir.2013.08.005, Aug. 2013.
[8] C.-Y. Chen, J.-H. Zhang, T.-I Chen and C.-F. Chen, "3D Egomotion from Stereo Cameras Using Constrained Search Window and Bundle Adjustment," in Proc. IVCNZ '13, Wellington, New Zealand, 2013.
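As an illustration of the alignment step described in Section B, the following is a minimal sketch of registering per-frame LiDAR scans into a common world frame using egomotion poses. It assumes each egomotion estimate is available as a 3x3 rotation R and a translation vector t per acquisition point; the function names and data layout are illustrative, not the authors' implementation.

```python
import numpy as np


def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T


def register_scans(scans, poses):
    """Transform each LiDAR scan (an N_i x 3 array of points in the sensor frame)
    into the world frame using its egomotion pose, and stack the results into
    one accumulated point cloud."""
    world_points = []
    for pts, T in zip(scans, poses):
        homo = np.hstack([pts, np.ones((pts.shape[0], 1))])  # N x 4 homogeneous
        world_points.append((homo @ T.T)[:, :3])             # same as (T @ homo.T).T
    return np.vstack(world_points)


# Demo: two single-point scans taken at two positions along a path.
scans = [np.array([[0.0, 0.0, 0.0]]),
         np.array([[1.0, 0.0, 0.0]])]
poses = [make_pose(np.eye(3), np.zeros(3)),             # first acquisition point
         make_pose(np.eye(3), np.array([0.0, 0.0, 2.0]))]  # moved 2 units forward
cloud = register_scans(scans, poses)
```

With real data the per-scan pose would come from the stereo egomotion estimate of [8], and the accumulated cloud would additionally carry per-point color sampled from the calibrated camera images.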