
Panoramic Mosaicing with a 180° Field of View Lens

Hynek Bakstein Tomas Pajdla

Center for Machine Perception, Dept. of Cybernetics


Faculty of Electrical Eng., Czech Technical University
121 35 Prague, Czech Republic
{bakstein,pajdla}@cmp.felk.cvut.cz

Abstract

We present a technique for 360° × 360° mosaicing with a very wide field of view fish eye lens. Standard camera calibration is extended to lenses with a field of view larger than 180°. We demonstrate the calibration on a Nikon FC-E8 fish eye converter, which is an example of a low-cost lens with a 183° field of view. We illustrate the use of this lens in one application, the 360° × 360° mosaic, which provides a 360° field of view in both the vertical and the horizontal direction.

* This work was supported by the following grants: MSM 212300013, GACR 102/01/0971, MSMT KONTAKT 2001/09.

1. Introduction

[Figure 1. Nikon FC-E8 fish eye converter mounted on a Pulnix digital camera with a standard 12.5mm lens.]

There are many ways to enhance a field of view and obtain an omnidirectional sensor. These approaches include the use of mirrors [2, 6, 5], multicamera devices [13, 1], rotating cameras [19, 18, 7], lenses [4, 23, 18], or a combination of the previous methods [16]. The shape of the mirror determines its field of view, the mapping of the light rays [9, 12], and other features such as the single effective viewpoint [21, 10]. On the other hand, focusing a lens is easier than focusing on a mirror, and the resulting setup may be simpler.

We concentrate on the use of a special lens, the Nikon FC-E8 fish eye converter [8], which provides a FOV of 183°. This lens provides an omnidirectional image by itself, but we use it in a practical realization of a 360° × 360° mosaic [16], where the mosaic is composed by rotating an omnidirectional camera. The resulting mosaic then covers 360° in both the horizontal and the vertical direction. We mounted this lens on a Pulnix digital camera equipped with a standard 12.5mm lens, as depicted in Figure 1. Our experiments also show that such a lens provides better results than mirrors, which were often used to build 360° × 360° mosaics [16], and that the setup of the mosaicing camera is simpler.

For many computer vision tasks, the relationship between the light rays entering the camera and the pixels in the image has to be known. In order to find this relationship, the camera has to be calibrated, and a suitable camera model has to be chosen for this task. It turns out that the pinhole camera model with a planar retina is not sufficient for sensors with a large FOV [14]. An image point (u, v) defines a light ray as a vector which connects the camera center with the image point lying on an image plane at a certain distance from the camera center, see Figure 2(a). This is a straightforward approach; however, it limits the field of view of the camera to less than 180°.

Previous approaches to fish eye calibration used a planar retina and the pinhole model [3, 4, 22, 23]. In [20], a stereographic projection was employed, but the experiments were evaluated on lenses with a FOV smaller than 180°. We introduce a spherical retina, see Figure 2(b), and a method for calibration from a single image of one known 3D calibration target with iterative refinement of the parameters of our camera model with a spherical retina. The light rays emanate from

the camera center and are determined by a radially symmetric mapping between the pixel coordinates (u, v) and the angle between the light ray and the optical axis of the camera, as depicted in Figure 2(b).
The main contribution of this work is the introduction of a proper omnidirectional camera model, i.e. the spherical retina, and the choice of a proper projection function, the radially symmetric mapping between the light rays and pixels, for one particular lens. In contrast to other methods [3, 4, 22, 23], we test our approach on a lens with an a priori unknown projection function. This lens is a cheap off-the-shelf lens and therefore it does not have to precisely follow any of the projection models listed in [14]. Moreover, this lens has a field of view larger than 180° and thus the standard camera model cannot be used.

In the next section, we introduce a camera model with a spherical retina. Then we discuss various models describing the relationship between the light rays and pixels in Section 3. Section 4 is devoted to the determination of this model for the case of the Nikon FC-E8 converter. A summary of the presented method is given in Section 5. Experimental results are presented in Section 6.

[Figure 2. From image coordinates to light rays: (a) a directional and (b) an omnidirectional camera.]

2 Camera Model

The camera model describes how a 3D scene is transformed into a 2D image. It has to incorporate the orientation of the camera with respect to some scene coordinate system and also the way the light rays in the camera centered coordinate system are projected into the image. The orientation is expressed by the extrinsic camera parameters, while the latter relationship is determined by the intrinsic parameters of the camera.

Intrinsic parameters can be divided into two groups. The first one includes the parameters of the mapping between the rays and ideal orthogonal square pixels. We will discuss these parameters in the next section. The second group contains the parameters describing the relationship between ideal orthogonal square pixels and the real pixels of image sensors.

[Figure 3. A circle in the image plane is distorted due to a different length of the axes. Therefore we observe an ellipse instead of a circle in the image.]
Let (u, v) denote the coordinates of a point in the image measured in an orthogonal basis, as shown in Figure 3. CCD chips often have a different spacing between pixels in the vertical and the horizontal direction. This results in images unequally scaled in the horizontal and vertical direction. This distortion causes circles to appear as ellipses in the image, as shown in Figure 3. Therefore, we introduce a parameter β representing the ratio between the scales of the horizontal and the vertical axis. A matrix expression of the distortion can be written in the following form:

    K^{-1} = [ 1   0   u0
               0   β   v0        (1)
               0   0   1  ]

This matrix is a simplified intrinsic calibration matrix of a pinhole camera [11]. The displacement of the center of the image is expressed by the terms u0 and v0; the skewness of the image axes is neglected in our case, because cameras usually have orthogonal pixels.

3 Projection Models

Models of the projection between the light rays and the pixels are discussed in this section. The most commonly used approach is to describe these models by a radially symmetric function that maps the angle θ between the incoming light ray and the optical axis to some distance r from the image center, see Figures 7(a) and 7(b). This function typically has one parameter k. As stated before, the perspective projection, which can be expressed as r = k tan θ, is not suitable for modeling cameras with a large FOV. Several other projection models exist [14]:

  - stereographic projection r = k tan(θ/2),
  - equidistant projection r = k θ,
  - equisolid angle projection r = k sin(θ/2), and
  - sine law projection r = k sin θ.
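
As a quick numerical illustration of how these models diverge (our sketch, not part of the original paper; it assumes Python with numpy), the functions can be evaluated and rescaled to 1 at θ = 50°, the normalization used in Figure 4 below:

    import numpy as np

    # The candidate radially symmetric projections r(theta), with k = 1.
    models = {
        "perspective":   lambda t: np.tan(t),       # r = k tan(theta)
        "stereographic": lambda t: np.tan(t / 2),   # r = k tan(theta/2)
        "equidistant":   lambda t: t,               # r = k theta
        "equisolid":     lambda t: np.sin(t / 2),   # r = k sin(theta/2)
        "sine law":      lambda t: np.sin(t),       # r = k sin(theta)
    }

    theta = np.radians([10.0, 50.0, 89.0, 130.0, 170.0])
    t0 = np.radians(50.0)
    for name, f in models.items():
        # Scale each curve so that r(50 deg) = 1, as in Figure 4; the
        # perspective model visibly blows up as theta approaches 90 deg.
        print(f"{name:13s}", np.round(f(theta) / f(t0), 2))
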
Figure 4 shows graphs of the above projection functions for the angle θ varying from 0 to 180 degrees. The vertical axis of the graph represents the value of a specific model function for the corresponding angle θ. All functions were scaled so that they have a value of 1 at θ = 50°. The figure illustrates the development of the projection function with varying θ. It can be noticed that the perspective projection cannot cope with angles θ near 90°. It can also be noticed that most of the models can be approximated by the equidistant projection for smaller angles θ. However, when the FOV of the lens increases, the models differ significantly. In the next section we describe a procedure for selecting the appropriate model for the Nikon FC-E8 converter.

[Figure 4. Values of the projection functions for the angle θ in the range of 0 to 180 degrees. All functions were scaled so that they have a value of 1 at θ = 50°.]

4 Model Determination

The model describing the mapping between the pixels and the light rays differs from lens to lens. Some lenses are manufactured so that they follow a certain model; for some other lenses this information is unavailable, which is the case of the Nikon FC-E8 converter. Also, the assumption that the light rays emanate from one point does not have to hold for some lenses. This requires additional parameters of the model determining the position of the ray origin. All the above situations can be incorporated into our framework. We demonstrate this procedure on one particular lens.

In order to derive the projection model for the Nikon FC-E8, we have investigated how light rays with a constant increment in the angle θ are imaged on the image plane. We performed the following experiment. The camera was observing a cylinder with circles seen by light rays with a known angle θ, as depicted in Figure 5(a). These circles correspond to an increment in the angle θ set to 5° for the rays imaged to the peripheral parts of the image (θ = 90°...70°) and to 10° for the rays imaged to the central part of the image.

Figure 5(b) shows the grid which, after wrapping around the cylinder, produced the circles. Figure 5(c) shows an image of this cylinder. It can be seen that the circles are imaged to approximate circles and that a constant increment in the angles results in a slowly increasing increment in the radii of the circles in the image. Note that the circles at the border have an angular distance of 5°, while the distance near the center is 10°. The camera has to be positioned so that its optical axis is identical with the rotational axis of the cylinder, and the circle corresponding to θ = 90° must be imaged precisely. In our case we used the assumption of radial symmetry and the known field of view of the lens for manual positioning of the lens with respect to the calibration cylinder. This setup is sufficient for the model determination; however, for a full camera calibration, the parameters determining this positioning (rotation and translation with respect to the scene coordinate system) have to be included in the computation, as described in Section 5.

We fitted all of the models mentioned in the previous section to the detected projections of the light rays into the image. The model fit error was the Euclidean distance between the pixels observed in the image and the pixel coordinates predicted by the model function. The stereographic projection with two parameters, r = a tan(θ/b), provided the best fit, but there was still a systematic error, see Figure 6. Therefore, we extended the model, which resulted in a combination of the stereographic projection with the equisolid angle projection. This improved model is identified by four parameters, see Equation (3), and provides the best fit with no systematic error, as depicted in Figure 6. This holds even for the situation where the parameters were estimated using only one half of the detected points and then used to predict the other half of the points. The prediction error was less than half a pixel in the worst case. An initial fit of the parameters is discussed in the following section.
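
The fitting step can be reproduced along the following lines (our reconstruction rather than the authors' code; the angle-radius measurements below are synthetic stand-ins for the detected circle radii). It fits the two-parameter model r = a tan(θ/b) and the four-parameter model of Equation (3) by least squares and compares the residuals:

    import numpy as np
    from scipy.optimize import curve_fit

    def stereo(theta, a, b):
        # Two-parameter stereographic model: r = a tan(theta/b).
        return a * np.tan(theta / b)

    def stereo_equisolid(theta, a, b, c, d):
        # Extended model of Equation (3): r = a tan(theta/b) + c sin(theta/d).
        return a * np.tan(theta / b) + c * np.sin(theta / d)

    # Hypothetical measurements: ray angles (rad) and detected distances of
    # their images from the image center (pixels).
    theta = np.radians(np.arange(5.0, 91.0, 5.0))
    rng = np.random.default_rng(0)
    r = 260 * np.tan(theta / 2.1) + 15 * np.sin(theta / 1.1)
    r = r + rng.normal(0.0, 0.3, r.size)   # manual detection noise

    p2, _ = curve_fit(stereo, theta, r, p0=[260.0, 2.0])
    p4, _ = curve_fit(stereo_equisolid, theta, r, p0=[260.0, 2.0, 0.0, 1.0])

    print(f"a tan(t/b):              max |err| = {np.abs(r - stereo(theta, *p2)).max():.3f} px")
    print(f"a tan(t/b) + c sin(t/d): max |err| = {np.abs(r - stereo_equisolid(theta, *p4)).max():.3f} px")
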
5 Complete Camera Model

Under the above observations, we can formulate the model of the camera. Provided with a scene point X = (x, y, z)^T, we are able to compute its coordinates X̃ = (x̃, ỹ, z̃)^T in the camera centered coordinate system:

    X̃ = R X + T,   (2)
where R represents a rotation and T stands for a translation. The standard rotation matrix R has three degrees of freedom and T is expressed by the vector T = (t1, t2, t3)^T. Then the angle θ, see Figure 7(a), between the light ray through the point X̃ and the optical axis can be computed. This angle determines the distance r of the pixel from the center of the image:

    r = a tan(θ/b) + c sin(θ/d),   (3)

where a, b, c, and d are the parameters of the projection model.

[Figure 5. (a) Camera observing a cylinder with a calibration pattern (b) wrapped around the cylinder. Note that the lines correspond to light rays with an increment in the angle θ set to 5° (the bottom 4 intervals) and 10° (the 5 upper intervals). (c) An image of circles with radii set to the tangent of a constantly incremented angle results in concentric circles with an almost constant increment in radii in the image.]

[Figure 6. Model fit error for the stereographic and the combined stereographic and equisolid angle projection.]

Together with the angle φ between the light ray reprojected to the xy plane and the x axis of the camera centered coordinate system, the distance r is sufficient to calculate the pixel coordinates u′ = (u′, v′, 1)^T in some orthogonal image coordinate system, see Figure 7(b), as

    u′ = r cos φ,   (4)
    v′ = r sin φ.   (5)

[Figure 7. (a) The camera coordinate system and its relationship to the angles θ and φ. (b) From polar coordinates (r, φ) to orthogonal coordinates (u′, v′).]
In this case the vector u′ does not represent a light ray from the camera center as in a pinhole camera model; instead, it is just a vector augmented by 1 so that we can write an affine transform of the image points compactly by one matrix multiplication (6). Real pixel coordinates u = (u, v, 1)^T can be obtained as

    u = K u′.   (6)
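
Chaining (2)-(6) gives the complete forward projection of a scene point to a pixel. The following sketch is our illustration of that chain (the parameter values are placeholders, not calibrated results); the affine step applies the axis scale β and the image center offset (u0, v0) through one consistent choice of the matrix that plays the role of K in (6):

    import numpy as np

    def project(X, R, T, beta, u0, v0, a, b, c, d):
        # Eq. (2): scene point -> camera centered coordinates.
        x, y, z = R @ X + T
        # Figure 7(a): angle theta between the ray and the optical axis (z);
        # arctan2 keeps theta meaningful beyond 90 deg, i.e. behind the lens.
        theta = np.arctan2(np.hypot(x, y), z)
        phi = np.arctan2(y, x)              # angle of the ray in the xy plane
        # Eq. (3): radial distance of the pixel from the image center.
        r = a * np.tan(theta / b) + c * np.sin(theta / d)
        # Eqs. (4)-(6): polar -> ideal orthogonal coordinates -> real pixels.
        u_ideal = np.array([r * np.cos(phi), r * np.sin(phi), 1.0])
        K = np.array([[1.0, 0.0, u0], [0.0, beta, v0], [0.0, 0.0, 1.0]])
        return K @ u_ideal

    # Placeholder pose and intrinsics (an ideal stereographic projection).
    u = project(np.array([0.4, 0.1, 1.0]), np.eye(3), np.zeros(3),
                beta=1.0, u0=384.0, v0=288.0, a=260.0, b=2.0, c=0.0, d=1.0)
    print(u)   # homogeneous pixel coordinates (u, v, 1)
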

[Figure 8. (a) Experimental setup for the half cylinder experiment. (b) One of the images. (c) The calibration target is located 90° to the left of the camera; note the significant distortion.]

The complete camera model parameters, including the extrinsic and intrinsic parameters, can be recovered from the measured coordinates of calibration points by minimizing

    J(R, T, β, u0, v0, a, b, c, d) = Σ_{i=1}^{N} ||ũ_i − u_i||,   (7)

where ||·|| denotes the Euclidean norm, N is the number of points, ũ_i are the coordinates of points measured in the image, and u_i are their coordinates reprojected by the camera model. A MATLAB implementation of the Levenberg-Marquardt [15] minimization was employed in order to minimize the objective function (7).

The rotation matrix R and the translation vector T, see (2), both have three degrees of freedom. The image center, the scale ratio of the image axes β, and the four parameters of the mapping (3) between the light rays and pixels give 7 intrinsic parameters. This yields a total of 13 parameters of our model.

When minimizing the objective function (7), we initialize the image center to the center of the circle (ellipse) surrounding the image, see Figure 5. This is possible because the Nikon FC-E8 lens is a so-called circular fish eye, where this circle is visible. Assuming that the mapping (3) between the light rays and pixels is radially symmetric, the center of this circle should be approximately in the image center. Parameters of the model were initially set to an ideal stereographic projection, which means that b = 2, c = 0, d = 1, and a was initialized using the ratio between the coordinates of points corresponding to the light rays with the angle θ equal to 0 and 90 degrees. The value of the parameter β was initialized to 1. The initial camera position was set to the center of the scene coordinate system with the z axis coincident with the optical axis of the camera.
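
To make the procedure concrete, here is a condensed sketch of the minimization (ours; the paper used a MATLAB implementation, and the calibration data below are synthetic stand-ins for manually detected points). The 13 parameters (three rotation angles, three translations, β, the image center (u0, v0), and a, b, c, d) are stacked in one vector and refined by scipy's Levenberg-Marquardt solver from the initialization just described:

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def project_all(p, X):
        # Unpack the 13 parameters: rotation, translation, beta, center, a-d.
        rvec, T = p[0:3], p[3:6]
        beta, u0, v0 = p[6], p[7], p[8]
        a, b, c, d = p[9:13]
        Xc = Rotation.from_rotvec(rvec).apply(X) + T            # Eq. (2)
        theta = np.arctan2(np.hypot(Xc[:, 0], Xc[:, 1]), Xc[:, 2])
        phi = np.arctan2(Xc[:, 1], Xc[:, 0])
        r = a * np.tan(theta / b) + c * np.sin(theta / d)       # Eq. (3)
        # Eqs. (4)-(6): ideal coordinates, then real pixels.
        return np.column_stack([r * np.cos(phi) + u0,
                                beta * r * np.sin(phi) + v0])

    def residuals(p, X, uv):
        # Eq. (7): stacked reprojection errors over all calibration points.
        return (project_all(p, X) - uv).ravel()

    # Synthetic calibration points and their "detected" images.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, (72, 3)); X[:, 2] += 2.0
    p_true = np.array([0.02, -0.01, 0.03, 0.10, -0.05, 0.20,
                       1.05, 383.0, 288.0, 260.0, 2.1, 15.0, 1.1])
    uv = project_all(p_true, X) + rng.normal(0.0, 0.2, (72, 2))

    # Initialization from the text: ideal stereographic projection (b = 2,
    # c = 0, d = 1), beta = 1, image center from the visible circular FOV.
    p0 = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
                   1.0, 380.0, 290.0, 250.0, 2.0, 0.0, 1.0])
    fit = least_squares(residuals, p0, args=(X, uv), method="lm")
    print(np.round(fit.x, 3))
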
6 Experimental Results

We performed two calibration experiments. In the first experiment, the calibration points were located on a cylinder around the optical axis and the camera was looking down into that cylinder, see Figure 5(a). The points had the same depth for the same value of θ. The second experiment employed a 3D calibration object with the points located on a half cylinder. The object was realized such that a line of calibration points was rotated on a turntable, as depicted in Figure 8. Here, the points with the same angle θ had different depths.

The first experimental setup was also used to determine the projection model, as described in Section 4. A total of 72 points was manually detected. One half of the circles of points was used for the estimation of the parameters, while the second half was used to compute the reprojection errors. A similar approach was also used in the second experiment, where the number of calibration points was 285. Again, all points were detected manually.

Figure 9 shows the reprojection of points, computed with the parameters estimated during the calibration, compared with their coordinates detected in the image. The lines representing the errors between the respective points are scaled 20 times to make the distances clearly visible. The same error is shown in Figure 10 for all the points. It can be noticed that the error is small compared to the precision of the manual detection, where the images of some lines spanned several pixels while others were too far to be imaged as continuous circles, see Figure 5(c). Therefore we performed another experiment, where the calibration points were checkerboard patterns.

Similar graphs illustrate the results from the second experiment. Figure 11 shows the comparison between the reprojected points and their coordinates detected in the image. Again, the lines representing the distance between these two sets of points are scaled 20 times.

Figure 12(a) depicts this reprojection error for each calibration point. Note that the error is bigger for points in the corners of the image, which is natural, since the resolution here is higher and therefore one pixel corresponds to a smaller change in the angle θ.

[Figure 9. Reprojection of points for the cylinder experiment. The distances between the reprojected and the detected points are scaled 20 times.]

[Figure 11. Reprojection of points for the half cylinder experiment. The distances between the reprojected and the detected points are scaled 20 times.]

[Figure 10. Reprojection error for each point for the cylinder experiment.]

[Figure 12. (a) Reprojection error for each point for the half cylinder experiment. (b) Histogram of the sum of squares of normalized detection errors together with a χ² density marked by the curve.]

To verify the randomness of the reprojection error, we performed the following test. Because the points in the image were detected manually, we suppose that the detection error has a normal distribution in both image axes. Therefore, a sum of squares of these errors, normalized to unit variance, should be described by a χ² distribution [17]. Figure 12(b) shows a histogram of the detection errors together with a graph of the χ² density. Note that the χ² distribution describes the calibration error distribution well.
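
This test can be rerun as follows (a sketch under the stated normality assumption; the error array is a placeholder for the measured detection errors). Each point contributes its squared u and v errors, normalized to unit variance, so the per-point sums should follow a χ² law with two degrees of freedom:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    err = rng.normal(0.0, 0.4, (285, 2))   # stand-in detection errors (px)

    # Per-point sum of squared errors, each axis normalized to unit variance.
    s = ((err / err.std(axis=0)) ** 2).sum(axis=1)

    # Compare the empirical histogram with the chi-square density (df = 2,
    # one degree of freedom per image axis).
    hist, edges = np.histogram(s, bins=15, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    for x, h in zip(centers, hist):
        print(f"{x:5.2f}  empirical {h:.3f}   chi2 pdf {stats.chi2.pdf(x, df=2):.3f}")
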
Finally, we show that we are able to select pixels in the image which correspond to the light rays lying in one plane passing through the camera center. The angle θ between these rays and the optical axis equals π/2 and, because this situation is circularly symmetric, the corresponding pixels should form a circle centered at the image center (u0, v0) obtained by minimizing (7). The radius of the circle is determined from (3) for θ = π/2, with a, b, c, and d obtained by minimizing (7). Due to the difference in the scale of the image axes β, see Equation (1), the pixels in fact form an ellipse, whose center again corresponds to the image center. As noted before, these light rays lie in one plane, which is crucial for the employment of the proposed sensor in a realization of the 360° × 360° mosaic [16]. The selection of the proper pixels (the ellipse) assures that the corresponding points in the mosaic pair will lie on the same image rows, which simplifies the correspondence search algorithms.
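
Once the calibration is available, this selection is straightforward to compute. The sketch below (ours; the parameter values are placeholders rather than calibrated results) evaluates (3) at θ = π/2 and sweeps φ over a full turn, applying the axis scale β and the center offset, so the returned pixels trace the ellipse just described:

    import numpy as np

    def horizon_ellipse(a, b, c, d, beta, u0, v0, n=360):
        # Radius of the theta = pi/2 circle from Equation (3).
        r = a * np.tan((np.pi / 2) / b) + c * np.sin((np.pi / 2) / d)
        phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        # The axis scale beta turns the ideal circle into an image ellipse.
        return np.column_stack([r * np.cos(phi) + u0,
                                beta * r * np.sin(phi) + v0])

    # Placeholder parameters; the real values come from minimizing (7).
    pixels = horizon_ellipse(a=260.0, b=2.1, c=15.0, d=1.1,
                             beta=1.05, u0=383.0, v0=288.0)
    print(pixels[:3])
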
There are two possible approaches to the selection of the light rays with a specific angle θ. The one originally proposed in [16] uses mirrors, see Figure 13(a). The camera-mirror rig setup must be performed very precisely to get reliable results. Moreover, focusing on the mirror is not easy, because one has to focus on a virtual scene, not on the mirror, nor on the real scene. Therefore, we propose another approach employing optics with a FOV larger than 180°, depicted in Figure 13(b).

Figure 14 shows the right and the left eye mosaics, respectively. Note the significant disparity of objects in the scene. Enlarged parts of the mosaics showing one corresponding point can be found in Figures 15(a) and 15(c) for the right mosaic and in Figures 15(b) and 15(d) for the left mosaic. These figures represent the worst case, where the difference from the ideal situation, in which the corresponding points lie on the same image row, was the biggest.

[Figure 13. Two possible realizations of the 360° × 360° mosaic: (a) a telecentric camera and a conical mirror, (b) a central camera with the Nikon fish eye converter.]

Figures 15(a) and 15(b) were acquired using a conical mirror observed by a telecentric lens, while Figures 15(c) and 15(d) are from the mosaic acquired employing the Nikon FC-E8 lens.

[Figure 14. Right (upper) and left (lower) eye mosaics.]

Notice that in the upper row, where the images taken with the use of the mirror are shown, the images are more blurry and the points do not lie on the same image row, and this difference is quite significant, although some other points actually were on the same image row. This is in contrast with the images in the bottom row, which were taken with the Nikon FC-E8 converter, where all corresponding points lie on the same image row and the images are sharper. This condition is satisfied for all pixels with a tolerance smaller than 0.5 pixel.

7 Conclusion

We have proposed a camera model for lenses with a FOV larger than 180°. The model is based on the employment of a spherical retina and a radially symmetric mapping between the incoming light rays and the pixels in the image. We proposed a method for the identification of the mapping function, which led to a combination of two mapping functions. A complete calibration procedure, involving a single image of a 3D calibration target, is then presented. Finally, we demonstrate the theory in two experiments and one application, all using the Nikon FC-E8 fish eye converter. We believe that the ability to correctly describe and calibrate the Nikon FC-E8 fish eye converter opens the way to many new applications of very wide angle lenses.

[Figure 15. Detail of a corresponding pair of points, (a) and (c) in the right mosaic and (b) and (d) in the left mosaic, representing the difference from the ideal case, where the corresponding points lie on the same image row. The upper row is the worst case acquired using the mirror, the bottom row is for the Nikon FC-E8 fish eye converter. Note the blurred images and that the points do not lie on the same image row in the case of the mirror, and that the lens provides focused and aligned images.]

References

[1] P. Baker, C. Fermuller, Y. Aloimonos, and R. Pless. A spherical eye from multiple cameras (makes better models of the world). In A. Jacobs and T. Baldwin, editors, Proceedings of the CVPR'01 conference, Kauai, USA, volume 1, pages 576-583, Dec. 2001.
[2] S. Baker and S. K. Nayar. A theory of single-viewpoint catadioptric image formation. International Journal of Computer Vision, 35(2):175-196, 1999.
[3] A. Basu and S. Licardie. Alternative models for fish-eye lenses. Pattern Recognition Letters, 16(4):433-441, 1995.
[4] S. S. Beauchemin, R. Bajcsy, and G. Givaty. A unified procedure for calibrating intrinsic parameters of fish-eye lenses. In Vision Interface (VI'99), pages 272-279, May 1999.
[5] R. Benosman, E. Deforas, and J. Devars. A new catadioptric sensor for the panoramic vision of mobile robots. In IEEE Workshop on Omnidirectional Vision (OMNIVIS'00), Hilton Head, South Carolina, pages 112-116, June 2000.
[6] A. M. Bruckstein and T. J. Richardson. Omniview cameras with curved surface mirrors. In IEEE Workshop on Omnidirectional Vision (OMNIVIS'00), Hilton Head, South Carolina, pages 79-84, June 2000.
[7] J. Chai, S. B. Kang, and H.-Y. Shum. Rendering with non-uniform approximate concentric mosaics. In Second Workshop on Structure from Multiple Images of Large Scale Environments, SMILE, July 2000.
[8] Nikon Corp. Nikon WWW pages: http://www.nikon.com, 2000.
[9] S. Derrien and K. Konolige. Approximating a single effective viewpoint in panoramic imaging devices. In IEEE Workshop on Omnidirectional Vision (OMNIVIS'00), Hilton Head, South Carolina, pages 85-90, June 2000.
[10] C. Geyer and K. Daniilidis. A unifying theory for central panoramic systems and practical implications. In D. Vernon, editor, European Conference on Computer Vision ECCV 2000, Dublin, Ireland, volume 2, pages 445-462, June-July 2000.
[11] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge, UK, 2000.
[12] R. A. Hicks and R. Bajcsy. Catadioptric sensors that approximate wide-angle perspective projections. In IEEE Workshop on Omnidirectional Vision (OMNIVIS'00), Hilton Head, South Carolina, pages 97-103, June 2000.
[13] H. Hua and N. Ahuja. A high-resolution panoramic camera. In A. Jacobs and T. Baldwin, editors, Proceedings of the CVPR'01 conference, Kauai, USA, volume 1, pages 960-967, Dec. 2001.
[14] M. M. Fleck. Perspective projection: the wrong imaging model. Technical Report TR 95-01, Comp. Sci., U. Iowa, 1995.
[15] J. More. The Levenberg-Marquardt algorithm: Implementation and theory. In G. A. Watson, editor, Numerical Analysis, Lecture Notes in Mathematics 630, pages 105-116. Springer Verlag, 1977.
[16] S. K. Nayar and A. Karmarkar. 360 x 360 mosaics. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR'00), Hilton Head, South Carolina, volume 2, pages 388-395, June 2000.
[17] A. Papoulis. Probability and Statistics. Prentice-Hall, 1990.
[18] S. Peleg and M. Ben-Ezra. Stereo panorama with a single camera. In IEEE Conference on Computer Vision and Pattern Recognition, pages 395-401, June 1999.
[19] H.-Y. Shum, A. Kalai, and S. M. Seitz. Omnivergent stereo. In Proc. of the International Conference on Computer Vision (ICCV'99), Kerkyra, Greece, volume 1, pages 22-29, September 1999.
[20] D. E. Stevenson and M. M. Fleck. Robot aerobics: Four easy steps to a more flexible calibration. In International Conference on Computer Vision, pages 34-39, 1995.
[21] T. Svoboda, T. Pajdla, and V. Hlavac. Epipolar geometry for panoramic cameras. In H. Burkhardt and B. Neumann, editors, the fifth European Conference on Computer Vision, Freiburg, Germany, pages 218-232, June 1998.
[22] R. Swaminathan and S. Nayar. Non-metric calibration of wide-angle lenses. In DARPA Image Understanding Workshop, pages 1079-1084, 1998.
[23] Y. Xiong and K. Turkowski. Creating image based VR using a self-calibrating fisheye lens. In IEEE Computer Vision and Pattern Recognition (CVPR'97), pages 237-243, 1997.

