Abstract
We present a technique for 360 x 360 mosaicing with a very wide field of view fish eye lens. Standard camera calibration is extended for lenses with a field of view bigger than 180°. We demonstrate the calibration on a Nikon FC-E8 fish eye converter, which is an example of a low-cost lens with a 183° field of view. We illustrate the use of this lens on one application, the 360 x 360 mosaic, which provides a 360° field of view in both the vertical and the horizontal direction.
The camera model describes how a 3D scene is transformed into a 2D image. It has to incorporate the orientation of the camera with respect to some scene coordinate system and also the way the light rays in the camera centered coordinate system are projected into the image. The orientation is expressed by extrinsic camera parameters, while the latter relationship is determined by intrinsic parameters of the camera.

Intrinsic parameters can be divided into two groups. The first one includes the parameters of the mapping between the rays and ideal orthogonal square pixels. We will discuss these parameters in the next section. The second group contains the parameters describing the relationship between ideal orthogonal square pixels and the real pixels of image sensors.

Figure 3. A circle in the image plane is distorted due to a different length of the axes. Therefore we observe an ellipse instead of a circle in the image.

Let (u, v) denote coordinates of a point in the image measured in an orthogonal basis as shown in Figure 3. CCD chips often have a different spacing between pixels in the vertical and the horizontal direction. This results in images unequally scaled in the horizontal and vertical direction. This distortion causes circles to appear as ellipses in the image, as shown in Figure 3. Therefore, we introduce a parameter β representing the ratio between the scales of the horizontal and the vertical axis. A matrix expression of the distortion can be written in the following form:

K^{-1} = \begin{pmatrix} 1 & 0 & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{pmatrix} .    (1)
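To make (1) concrete, the following short Python sketch (illustrative only; the image center, the ratio β, and the circle radius are made-up values, and the matrix is simply taken as mapping ideal, centered coordinates to pixel coordinates) shows how the unequal axis scales turn a circle of ideal points into the ellipse of Figure 3.

    import numpy as np

    # Made-up values for illustration only.
    beta = 1.05            # assumed ratio between the horizontal and vertical axis scales
    u0, v0 = 512.0, 384.0  # assumed image center in pixels

    # The matrix from Equation (1).
    K_mat = np.array([[1.0, 0.0,  u0],
                      [0.0, beta, v0],
                      [0.0, 0.0,  1.0]])

    # Ideal points on a circle of radius 100 around the origin (homogeneous coordinates).
    t = np.linspace(0.0, 2.0 * np.pi, 100)
    circle = np.stack([100.0 * np.cos(t), 100.0 * np.sin(t), np.ones_like(t)])

    # Applying the matrix shifts the points to (u0, v0) and stretches the v axis by beta,
    # so the circle is observed as an ellipse, as sketched in Figure 3.
    ellipse = K_mat @ circle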
3 Projection Models

Models of the projection between the light rays and the pixels are discussed in this section. The most commonly used approach is that these models are described by a radially symmetric function that maps the angle θ between the incoming light ray and the optical axis to some distance r from the image center, see Figures 7(a) and 7(b). This function typically has one parameter k. As stated before, the perspective projection, which can be expressed as r = k tan θ, is not suitable for modeling cameras with large FOV, since r grows without bound as θ approaches 90° and rays at or beyond 90° cannot be represented at all. Several other projection models exist [14]:

stereographic projection r = k tan(θ/2),
Figure 5. (a) Camera observing a cylinder with a calibration pattern (b) wrapped around the cylinder. Note that the lines correspond to light rays with an increment in the angle θ set to 5° (the bottom 4 intervals) and 10° (the 5 upper intervals). (c) Image of circles with radii set to a tangent of a constantly incremented angle results in concentric circles with an almost constant increment in radii in the image.
[Plot: model fitting error versus the angle θ for the models r = a tan(θ/b) and r = a tan(θ/b) + c sin(θ/d).]

Figure 7. (a) Camera coordinate system and its relationship to the angles θ and φ. (b) From polar coordinates (r, φ) to orthogonal coordinates (u0, v0).
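To make the mapping of Figure 7 concrete, here is a minimal Python sketch of a ray-to-pixel projection using the two-term model r = a tan(θ/b) + c sin(θ/d) shown in the model-fitting plot above; the parameter values are placeholders, not calibration results from the paper.

    import numpy as np

    # Hypothetical model parameters (placeholders, not calibrated values).
    a, b, c, d = 1.0, 2.0, 0.0, 1.0   # with b = 2 and c = 0 this reduces to a stereographic model
    beta = 1.0                        # axis scale ratio from Equation (1)
    u0, v0 = 512.0, 384.0             # assumed image center

    def project(x, y, z):
        """Project a 3D ray (camera coordinates, z = optical axis) to pixel coordinates."""
        theta = np.arctan2(np.hypot(x, y), z)    # angle between the ray and the optical axis
        phi = np.arctan2(y, x)                   # azimuth around the optical axis
        r = a * np.tan(theta / b) + c * np.sin(theta / d)   # radial projection model
        # Polar (r, phi) to orthogonal coordinates, then shift and scale as in Equation (1).
        u = u0 + r * np.cos(phi)
        v = v0 + beta * r * np.sin(phi)
        return u, v

    print(project(0.0, 1.0, 1.0))   # a ray 45° off the optical axis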
Figure 8. (a) Experimental setup for the half cylinder experiment. (b) One of the images. (c) The calibration target is located 90° left from the camera; note the significant distortion.
where ‖·‖ denotes the Euclidean norm, N is the number of points, ũ are coordinates of points measured in the image, and u are their coordinates reprojected by the camera model. A MATLAB implementation of the Levenberg-Marquardt [15] minimization was employed in order to minimize the objective function (7).

The rotation matrix R and the translation vector T, see (2), both have three degrees of freedom. The image center, the scale ratio of the image axes β, and the four parameters of the mapping between the light rays and pixels (3) give 7 intrinsic parameters. This yields a total of 13 parameters of our model.

When minimizing the objective function (7), we initialize the image center to the center of the circle (ellipse) surrounding the image, see Figure 5. This is possible because the Nikon FC-E8 lens is a so called circular fish eye, where this circle is visible. Assuming that the mapping between the light rays and pixels (3) is radially symmetric, this center of the circle should be approximately in the image center. Parameters of the model were initially set to an ideal stereographic projection, which means that b = 2, c = 0, d = 1, and a was initialized using the ratio between the coordinates of points corresponding to the light rays with the angle θ equal to 0 and 90 degrees. The value of the β parameter was initialized to 1. The initial camera position was set to be in the center of the scene coordinate system with the z axis coincident with the optical axis of the camera.
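A possible sketch of this minimization in Python is given below (the authors used a MATLAB implementation of Levenberg-Marquardt [15]; this is not their code). The 13 parameters are stacked into one vector, the rotation is parameterized by a Rodrigues vector for the sketch, and the reprojection residuals of (7) are handed to a Levenberg-Marquardt solver; scene_points and detected are hypothetical names for the known calibration points and the manually detected image points.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def residuals(params, scene_points, detected):
        """Reprojection residuals for one parameter vector.

        params = [3 rotation (Rodrigues), 3 translation, u0, v0, beta, a, b, c, d];
        scene_points is Nx3 calibration points, detected is Nx2 image measurements.
        """
        rvec, t = params[0:3], params[3:6]
        u0, v0, beta, a, b, c, d = params[6:13]
        R = Rotation.from_rotvec(rvec).as_matrix()
        cam = scene_points @ R.T + t                        # scene -> camera coordinates
        theta = np.arctan2(np.hypot(cam[:, 0], cam[:, 1]), cam[:, 2])
        phi = np.arctan2(cam[:, 1], cam[:, 0])
        r = a * np.tan(theta / b) + c * np.sin(theta / d)   # radial projection model
        u = u0 + r * np.cos(phi)
        v = v0 + beta * r * np.sin(phi)
        return np.concatenate([u - detected[:, 0], v - detected[:, 1]])

    # Initialization as described above: image center at the center of the visible circle,
    # ideal stereographic model (b = 2, c = 0, d = 1), beta = 1, camera at the origin of the
    # scene coordinate system. The numeric values here are placeholders.
    x0 = np.array([0.0, 0.0, 0.0,  0.0, 0.0, 0.0,  512.0, 384.0, 1.0,  1.0, 2.0, 0.0, 1.0])
    # result = least_squares(residuals, x0, args=(scene_points, detected), method="lm")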
6 Experimental Results

We performed two calibration experiments. In the first experiment, the calibration points were located on a cylinder around the optical axis and the camera was looking down into that cylinder, see Figure 5(a). The points had the same depth for the same value of θ. The second experiment employed a 3D calibration object with points located on a half cylinder. The object was realized such that a line of calibration points was rotated on a turntable, as it is depicted in Figure 8. Here, the points with the same angle θ had different depths.

The first experimental setup was also used to determine the projection model, as described in Section 4. A total of 72 points was manually detected. One half of the circles of points was used for the estimation of the parameters while the second half was used to compute the reprojection errors. A similar approach was also used in the second experiment, where the number of calibration points was 285. Again, all points were detected manually.

Figure 9 shows the reprojection of points, computed with parameters estimated during the calibration, compared with their coordinates detected in the image. The lines representing the errors between the respective points are scaled 20 times to make the distances clearly visible. The same error is shown in Figure 10 for all the points. It can be noticed that the error is small compared to the precision of manual detection, where the images of some lines spanned more pixels while others were too far to be imaged as continuous circles, see Figure 5(c). Therefore we performed another experiment, where the calibration points were checkerboard patterns.

Similar graphs illustrate the results from the second experiment. Figure 11 shows the comparison between the reprojected points and their coordinates detected in the image. Again, the lines representing the distance between these two sets of points are scaled 20 times.

Figure 12(a) depicts this reprojection error for each calibration point. Note that the error is bigger for points in the corners of the image, which is natural, since the resolution here is higher and therefore one pixel corresponds to a smaller change in the angle θ.

To verify the randomness of the reprojection error, we performed the following test. Because the points in the image were detected manually, we suppose that the detection error has a normal distribution in both image axes.
Figure 9. Reprojection of points for the cylinder experiment. The distances between the reprojected and the detected points are scaled 20 times.

Figure 11. Reprojection of points for the half cylinder experiment. The distances between the reprojected and the detected points are scaled 20 times.
[Plots: reprojection error for each calibration point (vertical axis: reprojection error, horizontal axis: calibration point number) and a histogram of detection errors (vertical axis: count, horizontal axis: sum of squares of normalized detection errors).]
Therefore, a sum of squares of these errors, normalized to unit variance, should be described by a χ² distribution [17]. Figure 12(b) shows a histogram of detection errors together with a graph of a χ² density. Note that the χ² distribution describes the calibration error distribution well.
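A minimal Python sketch of this randomness check, under the stated assumption of normally distributed detection errors, could look as follows; error_u and error_v are hypothetical names for the per-point detection errors along the two image axes (not reproduced here), and a sum of two squared unit-variance normal errors follows a χ² distribution with two degrees of freedom.

    import numpy as np
    from scipy.stats import chi2

    def chi_square_check(error_u, error_v, bins=20):
        """Compare normalized squared detection errors with a chi-square density."""
        # Normalize each axis to unit variance, then sum the squares per point.
        s = (error_u / error_u.std()) ** 2 + (error_v / error_v.std()) ** 2
        hist, edges = np.histogram(s, bins=bins, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        # Two normalized Gaussian errors give a chi-square density with 2 degrees of freedom.
        return centers, hist, chi2.pdf(centers, df=2)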
Finally we show that we are able to select pixels in the image which correspond to the light rays lying in one plane passing through the camera center. The angle between these rays and the optical axis equals π/2 and, because this situation is circularly symmetric, the corresponding pixels should form a circle centered at the image center (u0, v0), obtained by minimizing (7). The radius of the circle is determined from (3) for θ = π/2 and a, b, c, and d obtained by minimizing (7). Due to the difference in scale of the image axes β, see Equation (1), the pixels form an ellipse, while the image center again corresponds to the center of the ellipse. As noted before, these light rays lie in one plane, which is crucial for the employment of the proposed sensor in a realization of the 360 x 360 mosaic [16]. The selection of the proper pixels (ellipse) assures that the corresponding points in the mosaic pair will be on the same image rows, which simplifies the correspondence search algorithms.
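Using the same hypothetical calibrated parameters as in the earlier sketches, the pixels corresponding to θ = π/2 can be enumerated directly; the β factor from (1) is what turns the circle into the ellipse described above. A possible sketch:

    import numpy as np

    def mosaic_ellipse(a, b, c, d, beta, u0, v0, n=360):
        """Pixels corresponding to rays with theta = pi/2 (one plane through the camera center)."""
        theta = np.pi / 2.0
        r = a * np.tan(theta / b) + c * np.sin(theta / d)   # radius from the projection model
        phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        u = u0 + r * np.cos(phi)           # unit scale along the u axis
        v = v0 + beta * r * np.sin(phi)    # beta-scaled v axis, hence an ellipse rather than a circle
        return np.stack([u, v], axis=1)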
There are two possible approaches for the selection of light rays with a specific angle θ. The one originally proposed in [16] uses mirrors, see Figure 13(a). The camera-mirror rig setup must be performed very precisely to get reliable results. Moreover, focusing on the mirror is not easy, because one has to focus on a virtual scene, not on the mirror, nor on the real scene. Therefore, we propose another approach employing optics with a FOV larger than 180°, depicted in Figure 13(b).

Figure 14 shows the right and the left eye mosaic respectively. Note the significant disparity of objects in the scene. Enlarged parts of the mosaic showing one corresponding point can be found in Figures 15(a) and 15(c) for the right mosaic and Figures 15(b) and 15(d) for the left mosaic. These figures represent the worst case, where the difference
Figure 15. Detail of a corresponding pair of points (a) and (c) in the right mosaic and (b) and (d) in the left mosaic, representing the difference from the ideal case, where the corresponding points lie on the same image row. The upper row is the worst case acquired using a mirror, the bottom row for the Nikon FC-E8 fish eye converter. Note the blurred images and that the points do not lie on the same image row in the case of the mirror, and that the lens provides focused and aligned images.
[12] R. A. Hicks and R. Bajcsy. Catadioptric sensors that approximate wide-angle perspective projections. In IEEE Workshop on Omnidirectional Vision (OMNIVIS'00), Hilton Head, South Carolina, pages 97-103, June 2000.
[13] H. Hua and N. Ahuja. A high-resolution panoramic camera. In A. Jacobs and T. Baldwin, editors, Proceedings of the CVPR'01 conference, Kauai, USA, volume 1, pages 960-967, Dec. 2001.
[14] M. M. Fleck. Perspective projection: the wrong imaging model. Technical Report TR 95-01, Comp. Sci., U. Iowa, 1995.
[15] J. Moré. The Levenberg-Marquardt algorithm: Implementation and theory. In G. A. Watson, editor, Numerical Analysis, Lecture Notes in Mathematics 630, pages 105-116. Springer Verlag, 1977.
[16] S. K. Nayar and A. Karmarkar. 360 x 360 mosaics. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR'00), Hilton Head, South Carolina, volume 2, pages 388-395, June 2000.
[17] A. Papoulis. Probability and Statistics. Prentice-Hall, 1990.
[18] S. Peleg and M. Ben-Ezra. Stereo panorama with a single camera. In IEEE Conference on Computer Vision and Pattern Recognition, pages 395-401, June 1999.
[19] H.-Y. Shum, A. Kalai, and S. M. Seitz. Omnivergent stereo. In Proc. of the International Conference on Computer Vision (ICCV'99), Kerkyra, Greece, volume 1, pages 22-29, September 1999.
[20] D. E. Stevenson and M. M. Fleck. Robot aerobics: Four easy steps to a more flexible calibration. In International Conference on Computer Vision, pages 34-39, 1995.
[21] T. Svoboda, T. Pajdla, and V. Hlavac. Epipolar geometry for panoramic cameras. In H. Burkhardt and B. Neumann, editors, the fifth European Conference on Computer Vision, Freiburg, Germany, pages 218-232, June 1998.
[22] R. Swaminathan and S. Nayar. Non-metric calibration of wide-angle lenses. In DARPA Image Understanding Workshop, pages 1079-1084, 1998.
[23] Y. Xiong and K. Turkowski. Creating image based VR using a self-calibrating fisheye lens. In IEEE Computer Vision and Pattern Recognition (CVPR'97), pages 237-243, 1997.