Pseudostereo-Vision System: A Monocular Stereo-Vision System as a Sensor for Real-Time Robot Applications

Theodore P. Pachidis, Member, IEEE, and John N. Lygouras

Abstract: In this paper, the design and the construction of a


new system for stereo vision using planar mirrors and a single
camera are presented. Equations giving the coordinates of a point
in space are provided. In these equations, refraction phenomena
due to the beam splitter used have been taken into consideration.
Two virtual cameras are created by this pseudostereo-vision system, with exactly the same geometric properties, parameters, and angular field of view as the real camera. Two superimposed stereo
images are simultaneously received as a complex image. This
vision system has no moving parts, its construction is quite simple,
and it is mechanically robust and cheap. It can be used for accurate
measurements in the same way as binocular stereo-vision systems.
It is easy to mount this system on the end effector of a robotic
manipulator or on a mobile robot for real-time applications. Using
fast algorithms for point correspondence and depth calculation on
a simple personal computer, it can be used in high-speed, low-cost,
and high-accuracy applications.
Index Terms: Complex image, correspondence, mirrors, pseudostereo, real-time robotic application, single camera.

I. INTRODUCTION

FOR ITS autonomous movement in real time, a robotic system has to perceive its environment, calculate the position
of a target or a block, and properly move. For this reason,
many types of sensors and apparatus have been proposed. By
using cameras as sensors, it is possible to build vision systems with mainly one or two cameras. A stereo-vision system is
composed of two cameras. For the recovery of a 3-D scene
from a pair of stereo images of the scene, it is required to
establish correspondences. The basic steps of the stereo process
are the following: 1) detection of features in each captured image; 2) matching of the detected features (correspondence), under certain geometric and other constraints, for each pair of stereo images; and 3) depth calculation by means of the disparity values found and the geometric parameters of the vision system. Of the previous three steps, correspondence between
points (second step) is usually the most difficult step, and it is
generally the most time-consuming.
Depth perception via stereo disparity is a passive method that
does not require any special lighting or scanner to acquire the
images. This method may be used to determine the depths of
points in indoor as well as outdoor scenes and the depths of
Manuscript received September 21, 2005; revised June 27, 2007.
The authors are with the Department of Electrical and Computer Engineering, Democritus University of Thrace, 67100 Xanthi, Greece (e-mail:
pated@mail.otenet.gr; ilygour@ee.duth.gr).
Digital Object Identifier 10.1109/TIM.2007.908231

points that are centimeters or kilometers away from the viewer.


The two cameras composing a stereo-vision system often have slightly different focal lengths and zoom levels. These differences cause intensity differences between the corresponding
points in stereo images. Moreover, the optical axes of the
cameras may not lie in the same plane [1].
In the cases of monocular (single-camera) stereo-vision systems, the aforementioned problems are reduced. The corresponding points are usually found on a single line (epipolar
line), the intensity differences of these points are reduced, and
the two virtual cameras, which constitute the stereo-vision system, have the same geometric properties and parameters. Many
single-camera stereo-vision systems have been proposed in the
literature. Teoh and Zhang [2] described a monocular stereo-vision system by means of a Sanyo video camera interfaced to a microcomputer. According to this assembly, which is mostly made of light aluminum, two mirrors are fixed at a 45° angle with respect to the optical axis of the camera. A third mirror, mounted on the shaft of a stepping motor, can rotate by 90° in front
of the camera lens. A stereo pair of images is acquired by two
successive rotations of the third mirror in order to be parallel to
each of the fixed mirrors. Consequently, the result is the same
as using two cameras with parallel optical axes. Note that since
two shots of a scene are required, the camera should only be
used in static scenes.
Nishimoto and Shirai [3] also proposed a monocular stereo-vision system. Instead of a rotating mirror, a glass plate is
placed in front of the camera. As the glass plate is rotated,
the optical axis of the camera slightly shifts, simulating two
cameras with parallel optical axes. The pair of stereo images
obtained has very small disparities (coarse-depth values only),
making the point correspondence easy. This camera system
requires two shots from a scene, and therefore, it should only be
used in static environments. Otherwise, the scene will change
during the time when the images are obtained, and the positions
of the corresponding points will no longer relate to the depths
of points in 3-D. The previous monocular stereo-vision systems
reduce unwanted geometric and intensity differences between
stereo images because of using only one camera. However,
two shots of a scene are required, and in these systems, the
exact rotation of the third mirror [2] or of the glass plate is a
major design issue. They can only be used in static scenes. A considerable number of monocular stereo-vision systems using curved and planar mirrors have been proposed in the literature.
As previously mentioned, systems based on curved mirrors


can capture wide-field-of-view (FOV) images. Nayar [4]


demonstrated a wide-FOV stereo-vision system. This system is
based on a camera and two specular spheres. Later, another one
using two convex mirrors (one placed on top of the other) was
proposed by Southwell et al. [5]. A single-camera system was
also proposed by Goshtasby and Gruver [6]. It obtains images
in a single shot by means of a camera and two planar mirrors
in front of the properly located lens. It corresponds to a stereo-vision system with parallel optical axes. The stereo images are
obtained by transforming the acquired reverse image, and then,
correspondences and depth values are calculated. However, image transformation requires increased computational cost, and
the interreflection between mirrors causes intensity differences
to matching points in stereo images. Inaba et al. [7], as well as Mathieu and Devernay [8], proposed monocular stereo-vision systems using four planar mirrors. In a more recent work, Nene and Nayar [9] presented four stereo-vision systems that use a single camera that is pointed toward
different types of mirrors (planar, hyperboloidal, ellipsoidal,
and paraboloidal). By means of nonplanar mirrors, wide-FOV
images are obtained. However, in their catadioptric systems, a
complex mirror mechanism is necessary. Gluckman and Nayar
[10], [11] demonstrated how, by using mirror reflections of a
scene, stereo images can be captured with a single camera. The
two mirrors used, which are found in an arbitrary configuration,
can be self-calibrated. Gluckman and Nayar [12] presented
the design of a compact panoramic stereo camera, which uses
parabolic mirrors and is capable of producing 360° panoramic
depth maps. Gluckman and Nayar [13] also presented a novel
catadioptric sensor which uses mirrors to produce rectified
stereo images. Lee et al. [14], [15] proposed a practical stereo-camera system that uses only one camera and a biprism placed
in front of the camera. The equivalent of a stereo pair of
images is formed as the left and right halves of a single
charge-coupled-device (CCD) image using a biprism. This
system is more accurate for nearby objects than for far ones.
Their system is simple, but a biprism cannot be easily found.
Peleg et al. [16] presented two stereo-vision systems with one
camera by using a spiral-like mirror or lens. These systems
cannot be used right now in real-time applications. Finally,
Wuerz et al. [17] demonstrated how, by capturing an object and
its mirror reflection, a stereo pair of images can also be obtained. The aforementioned systems use two different images,
and correspondences are found between these images. Some
kinds of mirrors (i.e., curved) or a biprism are difficult to find.
In this paper, the design and the construction of a new system
for stereo vision with a single CCD camera are presented. This
system is called the pseudostereo-vision system (PSVS), and it
is composed of a camera, a beam splitter, and three mirrors. The
PSVS has the following properties.
1) A complex image, in one shot, is acquired by the system,
and using the proper algorithms, the disparities and the
depth of objects can be obtained.
2) Images have all the advantages of single-camera stereo-vision systems.
3) The angular FOV of the apparatus is the same as the
angular FOV of the camera used.

4) Only one complex image is directly processed


(pseudostereo vision).
5) The calibration of the PSVS is quite easy, and it has no
moving parts.
6) It is a relatively low-cost apparatus and has a mechanically robust construction.
7) It can be constructed at any dimension, covering every
type of camera and length of baseline.
In the succeeding sections, the initial idea, the construction details, and examples of the received images
are presented. This paper is organized as follows. In Section II,
the description and details for the apparatus are given. In
Section III, the refraction phenomena due to the beam splitter
are analyzed. In Section IV, the equations of point coordinates, which take refraction phenomena into consideration,
are calculated by means of the PSVS. In Section V, a new
correspondence algorithm is introduced. In Section VI, the
experimental results are given. In Section VII, a comparison
with other systems, as well as the probable applications of the
PSVS, is provided. Finally, in Section VIII, the conclusions of
this paper are presented.
II. SYSTEM DESCRIPTION AND DETAILS
A. System Description
The goal of this work was the construction of a system with
a single camera for the reception of stereo images and with no
moving parts. Moreover, the construction of this system should
be relatively simple for it to be used in robotic systems and be
relatively cheap. The main idea is based on the use of three
mirrors with a 100% reflection of their incident light and a 50%
beam splitter. Refraction phenomena do not appear at the other three mirrors because their first (front) surface is used (first-surface mirrors). In Fig. 1, the PSVS is shown. Mirror (1) is the beam splitter, whereas mirrors (2)–(4) are the first-surface mirrors.
The cost of these mirrors with a supporting base is lower than
the cost of a firewire board camera. In demanding applications,
in particular, where the use of cameras with special features is
needed (i.e., digital, square sensor pixel, low distortion, etc.),
the proposed system is much cheaper than an ordinary stereo-vision system (one camera, one simple frame grabber, and fewer cables). We are investigating the replacement of the first-surface mirrors and the beam splitter with common commercial mirrors. In that case, the PSVS (PSV apparatus plus camera) will cost almost as much as one camera.
An orthogonal coordinate system XYZ is established (Fig. 1), with the optical center of the real camera as the origin, the Z-axis coinciding with the optical axis of the real camera, the X-axis lying on the sheet plane, and the Y-axis perpendicular to the X- and Z-axes. The mirrors of this vision system form an angle of 45° with the optical axis (Z-axis) of the camera, and all of them are mounted perpendicular to the supporting camera plane (plane XZ). In Fig. 1, 2a is the angular FOV of the system.
It is considered that, at the beginning, no refraction phenomena exist due to the beam splitter (1) (i.e., a pellicle beam
splitter). Then, two virtual cameras are created, with their


optical axes being parallel to the optical axis of the real camera. These cameras are symmetrically located with respect to the Z-axis. They have the same geometric properties and parameters (the same as the real camera). Consequently, these cameras constitute an ideal stereo-vision system with two cameras. This vision system, as it is presented, receives one complex image in a single shot. A complex image is defined as an image created from the superposition of the two images received from the left and right views of the system. The baseline b of this stereo system is the distance between the two virtual parallel optical axes (Fig. 1). The beam splitter selected reflects 50% of the incident light, whereas it permits the propagation of the other 50% through its body. As the incident light comes from two different directions, 50% of the light in each direction is lost, whereas the other 50% is driven to the camera lens. If the intensity of each pixel of an image captured from the left view is I_L(i, j), and the intensity of each pixel of an image captured from the right view is I_R(i, j), then the intensity of each pixel of the complex image is given as

$$I_C(i,j) = k \cdot I_L(i,j) + (1-k) \cdot I_R(i,j) \qquad (1)$$

where i and j are the indices of the current row and column, respectively, of a pixel in an image, and k is the parameter (k = 0.5 with the beam splitter used) indicating the decrease in the intensity of the pixels of each view because of the beam splitter. It is obvious that the intensity of pixels in a complex image never exceeds its maximum value (for grayscale images, I_C(i, j) ≤ 255).
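To make (1) concrete, the following minimal sketch (C++; it assumes two same-sized 8-bit grayscale views stored row-major, and is written for this presentation rather than taken from the authors' software) synthesizes a complex image; with k = 0.5, the result never exceeds 255.

```cpp
#include <cstdint>
#include <vector>

// Synthesize a complex image following (1): I_C = k*I_L + (1 - k)*I_R.
// With k = 0.5 (the 50% beam splitter), an 8-bit result cannot overflow.
std::vector<std::uint8_t> complexImage(const std::vector<std::uint8_t>& left,
                                       const std::vector<std::uint8_t>& right,
                                       double k = 0.5) {
    std::vector<std::uint8_t> out(left.size());
    for (std::size_t p = 0; p < left.size(); ++p)
        out[p] = static_cast<std::uint8_t>(k * left[p] + (1.0 - k) * right[p]);
    return out;
}
```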
Fig. 1. PSVS. Some construction details and the virtual cameras created. The established coordinate system is also illustrated.

In Fig. 2(a), a photograph of the prototype model of the PSVS mounted on a PUMA 761 manipulator is shown. Another one, in Fig. 2(b), shows an inner view of the apparatus. A regulated lighting system is fitted on the PSV apparatus to reduce the problems with correct luminosity. The reliable extraction of features from images in vision systems requires well-illuminated scenes, focused images, and satisfactory contrast of the objects in them. As the PSVS is mounted on a robotic manipulator, the light conditions change continuously as the robot moves. The proposed lighting system permits the operation of the PSVS in more steady light conditions, almost independent of the task environment conditions. Moreover, a smaller aperture of the lens is used. Thus, the depth of field is increased, and the captured images are focused over a larger area. If the light conditions are satisfactory, the lighting system is not used. As an example, in Fig. 2(c) and (d), two complex images captured from the same scene are shown. The first one [Fig. 2(c)] is captured in the environment light conditions, whereas the second one [Fig. 2(d)] is captured using, in addition, the proposed lighting system. It is obvious that more details are observed in the second image.

B. Mirror Dimensions

In the PSVS, it is required to calculate the dimensions of the mirrors in order to avoid shading of parts of the complex images, caused by an improper size or location of the mirrors, as well as the appearance of ghost images. From triangle AOC (Fig. 3), the dimension AC is calculated as the sum of the segments AB and BC. From these segments, the intersection of the optical axis with a mirror is determined. Knowing the intersection, a mirror can be correctly mounted on the PSVS base during construction. Thus
$$AC = AB + BC \;\Rightarrow\; AC = \frac{OB\,\tan a\,\cos\theta_1}{\tan(\theta_1 - a)} + OB\,\tan a\,\sin\theta_1 + \frac{OB\,\tan a}{\sin\theta_1 + \cos\theta_1\,\tan a} \qquad (2)$$

where, in the general case, the optical axis forms a random angle θ1 with the mirror plane. OB is the path that a light beam follows along the optical axis from the optical center of the real camera to the mirror. From (2), the partial cases for θ1 = 45° and θ1 = 90° are calculated.
Case Studies:
1) θ1 = 45°. Then

$$AC = AB + BC = \frac{\sqrt{2}\,OB\tan a}{1 - \tan a} + \frac{\sqrt{2}\,OB\tan a}{1 + \tan a} = \sqrt{2}\,OB\tan 2a. \qquad (3)$$


Fig. 2. (a) PSVS mounted on the end effector of the robotic manipulator PUMA 761. (b) Inner view of the apparatus. (c) Complex image captured in the environment light conditions. (d) Complex image captured by means of the proposed lighting system.

Equation (3) permits the calculation of the dimension of each mirror that forms an angle of θ1 = 45° with the optical axis. The other dimension of each mirror is perpendicular to the optical axis, namely, θ1 = 90°.
2) θ1 = 90°. Then

$$AC = AB + BC = OB\tan a + OB\tan a = 2\,OB\tan a. \qquad (4)$$

The segments AB and BC are equal when θ1 = 90°.

Fig. 3. Calculation of mirror dimensions.

Based on the previous calculations, and taking into consideration that the optical axes are perpendicular to the subframes (the left and right views), it is concluded that each virtual optical axis passes through the geometric center of the corresponding subframe captured by the PSVS.

C. Minimum Distance of Common View

The calculation of this distance is possible by means of Fig. 1. The distance of interest is OB. Point B represents the first common point seen by the two virtual cameras when no refraction phenomena are taken into consideration. From the right triangle (O1AB), we get

$$\tan a = \frac{AB}{O_1A} \;\Rightarrow\; O_1A = \frac{AB}{\tan a} \;\Rightarrow\; OB + \frac{b}{2} = \frac{b/2}{\tan a} \;\Rightarrow\; OB = \frac{b}{2\tan a} - \frac{b}{2}. \qquad (5)$$

From (5), it is deduced that the minimum distance of the common view is smaller by b/2 compared with the distance of an ordinary stereo-vision system that consists of two fixed parallel cameras. Thus, the blind zone in front of the camera is smaller than the blind zone of an ordinary stereo-vision system and depends on the angular FOV of the lens and the length of baseline b of the PSVS. In (5), we set a priori O1A = O1H + HA = OB + b/2. To prove that the extra distance O1H is equal to b/2, first, the length of the segment DG is found. The segment DG is given by the second part of (3)

$$DG = \frac{\sqrt{2}\,(OI + ID)\tan a}{1 + \tan a} = \frac{\sqrt{2}\left(OI + \frac{b}{2}\right)\tan a}{1 + \tan a}. \qquad (6)$$


Then, the segment DF is calculated as

$$DF = DE + EF = DG\cos 45^\circ + DG\sin 45^\circ\tan a = \left(OI + \frac{b}{2}\right)\tan a. \qquad (7)$$

Finally, the segment O1H, by means of (7), is derived as

$$O_1H = O_1D - HD = \frac{DF}{\tan a} - OI = \frac{\left(OI + \frac{b}{2}\right)\tan a}{\tan a} - OI \;\Rightarrow\; O_1H = \frac{b}{2}. \qquad (8)$$

Similarly, the extra distance O2J is proved to be equal to b/2.
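For design purposes, (2)–(5) are straightforward to evaluate numerically. The sketch below (an illustrative C++ fragment, not part of the authors' HumanPT software) computes the mirror dimension AC for the general inclination θ1 and the minimum distance of common view; all angles are in radians.

```cpp
#include <cmath>

// Mirror dimension AC from (2). a is the half angular FOV, theta1 the angle
// between the optical axis and the mirror plane, and OB the axial path length
// from the optical center to the mirror.
double mirrorDimensionAC(double OB, double a, double theta1) {
    double AB = OB * std::tan(a)
              * (std::sin(theta1) + std::cos(theta1) / std::tan(theta1 - a));
    double BC = OB * std::tan(a) / (std::sin(theta1) + std::cos(theta1) * std::tan(a));
    return AB + BC;  // for theta1 = pi/4 this reduces to sqrt(2)*OB*tan(2a), as in (3)
}

// Minimum distance of common view from (5): OB = b/(2 tan a) - b/2,
// i.e., smaller by b/2 than for two fixed parallel cameras.
double minCommonViewDistance(double b, double a) {
    return b / (2.0 * std::tan(a)) - b / 2.0;
}
```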


III. REFRACTION PHENOMENA
In this section, the influence of the refraction phenomena due
to mirror (1) of the PSVS is examined. It is desirable that
the left and right views of a scene captured by means of
the PSVS coincide and have exactly the same magnification.
However, the optical axis of the second virtual camera (Fig. 4)
is displaced by m, parallel to the optical axis of the real
camera, due to the refraction phenomena generated by the beam
splitter. Simultaneously, the optical center O2 is shifted by l
along the virtual optical axis. In Fig. 4, the displacement by m
and the shift by l are presented. The incidence angle is θi, and the refraction angle is θr.
In order to accurately calculate the path of a light beam in
two different directions, which is created by these two virtual
cameras, the displacement m and the shift l must be calculated.
Using Snell's law [18], the refraction angle of a light ray from
the optical center O as it propagates through the mirror (1) is the
following:


$$\theta_r = \sin^{-1}\left(\frac{n_{air}}{n_{glass}}\sin\theta_i\right) \qquad (9)$$
where θi is the incidence angle, and n_air and n_glass are the refraction indices. If θ1 is the angle that the optical axis forms with the mirror (1) (Fig. 4), then θi = 90° − θ1. If d is the mirror (1) thickness, the displacement m of the second virtual camera, after some simple trigonometric calculations, is equal to

$$m = \frac{d\,\sin(\theta_i - \theta_r)}{\cos\theta_r} = \frac{d\,\sin(90^\circ - \theta_1 - \theta_r)}{\cos\theta_r} = \frac{d\,\cos(\theta_1 + \theta_r)}{\cos\theta_r}. \qquad (10)$$

The shift l of the optical center O2 is calculated as the difference between the optical paths that a ray follows from the optical center O until the mirror (4), when this ray propagates through the mirror (1) with or without refraction phenomena. The result of the calculations is the following equation:

$$l = \frac{d}{\cos\theta_r}\left(1 - \sin(\theta_1 + \theta_r) + \cos(\theta_1 + \theta_r)\tan\theta_1\right). \qquad (11)$$

Fig. 4. PSVS. Refraction phenomena due to the beam splitter (1).

This indirect method results in an accurate equation for the calculation of l. The shift of the optical center depends on the refraction angle θr, and the incidence angle θi is not necessarily small. In a direct method, the incidence angle θi must be small, and then the segment formed by the real and apparent points (i.e., the real and apparent optical centers) is approximated as a segment perpendicular to the surface of the refractive medium.
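The offsets (9)–(11) are equally simple to compute. A corresponding sketch (again illustrative rather than the authors' code) returns θr, m, and l for a given mirror inclination θ1, glass thickness d, and refraction indices:

```cpp
#include <cmath>

struct BeamSplitterOffsets { double thetaR, m, l; };

// Displacement m and shift l caused by refraction in the beam splitter,
// following (9)-(11). theta1 is the angle between the optical axis and
// mirror (1) in radians; d is the glass thickness; typical indices are
// nAir = 1.0 and nGlass ~ 1.5.
BeamSplitterOffsets refractionOffsets(double theta1, double d,
                                      double nAir, double nGlass) {
    const double kPi = std::acos(-1.0);
    double thetaI = kPi / 2.0 - theta1;                          // incidence angle
    double thetaR = std::asin(nAir / nGlass * std::sin(thetaI)); // Snell's law (9)
    double m = d * std::cos(theta1 + thetaR) / std::cos(thetaR); // (10)
    double l = d / std::cos(thetaR)
             * (1.0 - std::sin(theta1 + thetaR)
                    + std::cos(theta1 + thetaR) * std::tan(theta1)); // (11)
    return {thetaR, m, l};
}
```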
Using the aforementioned results for the displacement m
and the shift l, the construction of the apparatus could be
separated into two partial cases. In the first case, simplicity in
construction and calibration is desired. The horizontal distance
selected between the mirrors (1), (2) and (3), (4) is exactly the
same, which is equal to b/2. Then, the second virtual camera
optical axis, due to the refraction phenomena in the mirror (1),
is displaced along the X-axis by m, and the optical center O2
is shifted along the Z-axis by l. The shift by l means that the
left view of the apparatus, in relation to the right view, is slightly magnified.
In the second case, the horizontal distance between the
mirrors (3) and (4) is adjusted to be b/2 + l during construction
and calibration. Then, the optical center O2 is not shifted along
the Z-axis. The optical centers of the two virtual cameras have
the same coordinates along the Z-axis, and the angular FOV is
equal to 2a. As the displacement m and the shift l depend on

the refraction angle θr, the values of m and l change according to the location of each ray in a light beam. Thus, the rays of a light beam always converge to a point that is the optical center of the real camera O. Consequently, the magnification of the two virtual cameras is exactly the same, and the displacement of the second virtual axis with respect to the optical axis of the real camera is b/2 + m + l. In this case, the construction and calibration procedure requires the careful alignment of mirrors (3) and (4).

IV. COORDINATES OF A RANDOM POINT IN SPACE BY MEANS OF PSVS

As already mentioned (Fig. 1), the proposed pseudostereo-vision system consists of two parallel virtual cameras. In Fig. 5, points P'(xl, yl) and P''(xr, yr) are the projections of a random point P(x, y, z) onto the image planes of the two virtual cameras. In the established coordinate system (Figs. 1 and 5), the Y-axis coordinates increase so that a right-hand rectangular coordinate system is created. To calculate the final equations giving the coordinates of a random point in space, the two partial cases of the PSVS construction must be taken into consideration. In the first case, the optical centers O1 and O2 of the virtual cameras have the following coordinates:

$$O_1(x_{o1}, y_{o1}, z_{o1}) \equiv \left(\frac{b}{2},\ 0,\ -\frac{b}{2}\right) \quad \text{and} \quad O_2(x_{o2}, y_{o2}, z_{o2}) \equiv \left(-\frac{b}{2} - m,\ 0,\ -\frac{b}{2} + l\right).$$

Equations giving the coordinates of a random point in space with respect to the camera coordinate system are calculated when the length of baseline b and the active focal length f of the camera are known.

Fig. 5. Projection of a point onto the image planes of the two virtual cameras.

Using triangle similarities, the equation for depth z is derived as

$$z = \frac{f\,b}{x_l - x_r} - \frac{b}{2} + \frac{f\,m + x_l\,l}{x_l - x_r}. \qquad (12)$$

The coordinate x of a random point in space is equal to

$$x = \frac{x_r}{f}\left(z + \frac{b}{2}\right) + \frac{b}{2} \quad \text{or} \quad x = \frac{x_l}{f}\left(z + \frac{b}{2}\right) - \frac{b}{2} - \frac{x_l\,l}{f} - m. \qquad (13)$$

A relation similar to the relation for x gives the equation for the coordinate y of point P(x, y, z), where the projections onto the image planes of the two virtual cameras are yr and yl

$$y = \frac{y_r}{f}\left(z + \frac{b}{2}\right) \quad \text{or} \quad y = \frac{y_l}{f}\left(z + \frac{b}{2} - l\right). \qquad (14)$$

Solving (14) for yr and yl gives

$$y_r = \frac{y\,f}{z + \frac{b}{2}}, \qquad y_l = \frac{y\,f}{z + \frac{b}{2} - l}. \qquad (15)$$

Then

$$y_l - y_r = \frac{y\,f}{z + \frac{b}{2} - l} - \frac{y\,f}{z + \frac{b}{2}} = y\,f\,\frac{l}{\left(z + \frac{b}{2} - l\right)\left(z + \frac{b}{2}\right)}. \qquad (16)$$

If l is very small [small thickness d of mirror (1)], then the previous difference is near zero. Namely, yl ≅ yr, and a corresponding pixel is found in the same scan line. This vertical disparity decreases as the distance from the PSVS increases.

In the second case, the optical centers O1 and O2 of the virtual cameras have the following coordinates:

$$O_1(x_{o1}, y_{o1}, z_{o1}) \equiv \left(\frac{b}{2},\ 0,\ -\frac{b}{2}\right) \quad \text{and} \quad O_2(x_{o2}, y_{o2}, z_{o2}) \equiv \left(-\frac{b}{2} - m - l,\ 0,\ -\frac{b}{2}\right).$$

Then, the equation giving the depth z is

$$z = \frac{f\,b}{x_l - x_r} - \frac{b}{2} + \frac{f\,(m + l)}{x_l - x_r}. \qquad (17)$$

The coordinate x of a random point in space is equal to

$$x = \frac{x_r}{f}\left(z + \frac{b}{2}\right) + \frac{b}{2} \quad \text{or} \quad x = \frac{x_l}{f}\left(z + \frac{b}{2}\right) - \frac{b}{2} - m - l. \qquad (18)$$


A relation similar to the relation for x gives the equation for the coordinate y of point P(x, y, z), where the projections onto the image planes of the two virtual cameras are yr and yl

$$y = \frac{y_r}{f}\left(z + \frac{b}{2}\right) \quad \text{or} \quad y = \frac{y_l}{f}\left(z + \frac{b}{2}\right). \qquad (19)$$

In this case, the corresponding points on the image planes of a random point in space will have the same vertical coordinates, namely, yr = yl, and the points will be found in the same scan line. If no refraction phenomena appear due to the mirror (1), then (17)–(19) become the ordinary stereo-vision equations [19]. In each of the two previous cases, the left and right views are superimposed, creating a complex image.
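For illustration, a minimal C++ sketch of the triangulation for the second construction case, (17)–(19), follows; for the first case, (12)–(14) apply instead. Image coordinates are assumed to be metric (the same units as f), with the pixel-to-metric conversion done beforehand.

```cpp
struct Point3D { double x, y, z; };

// 3-D coordinates of a point from its projections (xl, yl) and (xr, yr) on the
// two virtual image planes, using (17)-(19). f: active focal length; b: baseline;
// m, l: refraction offsets of the second virtual camera. Since (19) gives the
// same y from either view in this case, the two estimates are averaged.
Point3D triangulate(double xl, double yl, double xr, double yr,
                    double f, double b, double m, double l) {
    double z = f * b / (xl - xr) - b / 2.0 + f * (m + l) / (xl - xr); // (17)
    double x = (xr / f) * (z + b / 2.0) + b / 2.0;                    // (18)
    double y = 0.5 * (yr + yl) / f * (z + b / 2.0);                   // (19)
    return {x, y, z};
}
```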

V. CORRESPONDENCE ALGORITHM
A large number of correspondence algorithms have been
proposed over the last 20 years [20], [21]. These algorithms
are classified as area- and feature-based. Area-based algorithms
create a description for each image pixel location, usually by
formulating a measure for the local intensity profile of the
area surrounding the pixel, and compare this measure with the
candidate target pixels in the other image. They produce dense
disparity maps as they work at the pixel level, and they are more suitable for photometrically invariant imagery. Feature-based approaches rely on the matching of explicit features extracted from the images, such as edges, which correspond to physical scene properties. These presume a degree of geometric invariance. Depending on the feature complexity, feature-based approaches can be low or high level. Low-level methods attempt to match individual edgels (edge points), whereas high-level methods carry out matching between edge segments, curves, or regions.
The proposed correspondence algorithm belongs to the high-level feature-based algorithms and particularly to the algorithms that can find correspondences between curves. It differs from the other proposed algorithms of this class for the following reasons.
1) It is implemented not only for pairs of stereo images but
also for complex images.
2) It exploits the concept of seeds to find edge correspondences. While some other researchers [22] have used
this concept, in the proposed algorithm, it has a different
meaning that will be explained later.
3) The selection and correspondence of one or a few edges
or of a part of an edge, in a semiautomatic procedure, are
possible.
4) The criteria that are used to find the corresponding edges
are applied not only to some features but also to the
whole edge. However, these criteria depend on low-level
features (edgels).
5) The segmented pairs of edges have different color or
grayscale values, and thus, the priority of edges is
determined.
6) In order to obtain a corresponding edge, only some pixels
are found (seeds), and the edge is detected by passive
propagation of seeds.


7) It is a two-stage algorithm. First, it detects the corresponding edges. Second, it finds the corresponding points
for each pair of edges with a desired density of points.
This second stage permits a robot path to be generated
from a pair of curves.
8) While the algorithm is applied to large images (i.e., 512 × 512 pixels), the execution time is variable and
depends on the number of the selected straight or curved
edges and the number of the desired points in each
edge. Thus, in some cases, it can be used in real-time
applications.
To implement the proposed algorithm, a complex image is
initially processed. In the application developed in Visual C++,
a variety of filters and edge-detection methods may be used. In
the final edge image, the desired edges are selected as left-view
edges in a semiautomatic procedure, i.e., by manually coloring
a pixel and then by propagating the pixel to the whole edge.
When all the desired edges are colored, with different color
values, the corresponding edges are detected. In an automatic
procedure, each left-view edge is automatically selected first,
the corresponding edge is detected, and the whole procedure
is repeated until all the pairs of the corresponding edges are
detected. Three criteria are used to select the corresponding
edge:
1) the horizontal upper and lower limits of the initial edge
plus a small permissible deviation measured in pixels;
2) the number of pixels in each initial edge, which is extended by a predefined percentage of the initial number
of pixels;
3) the criterion of the most probable edge.
A number of different criteria have been studied. It was found
that the previous three criteria permit a more reliable detection
of a corresponding edge. To explain these criteria, first, it is
worth noticing the way the edges of the left and right views
are located in a complex image. In such an image, the edges
of an object for the two different views have the same upper
and lower limits. If the object is far away from the PSVS, the
disparities of the different corresponding edge points will have
small differences in value. Contrary to the previous case, if the
object is near the PSVS and, moreover, its end points are along
the Z-axis (optical axis), the difference in disparity values will
be significant. If the object is not symmetrically located to the
Y -axis, the number of pixels of the two different views will also
significantly differ. Consequently, it is made clear why the first
criterion is required. A correct corresponding edge will have
the same upper and lower limits with the initial edge. Thus, the
edges whose limits are different from the limits of the initial
edge are excluded from detection. However, a small deviation is
permitted to these limits. This deviation is measured in pixels.
The second criterion checks for a significant deviation in the
number of pixels per edge. This deviation must be greater than
the number of pixels of the initial edge plus a percentage of
that number. The extra percentage of pixels is determined in the
application before the algorithm is executed. Once the second
criterion is applied, the edges with an excessively large number
of pixels but probably with the same limits, compared with the
initial edge, are rejected. The third criterion is more powerful


than the other two and can be applied by itself if the differences
in pixel disparity values are too small. The constraint used, as it
happens in a large number of correspondence algorithms, is the
epipolar constraint. According to this criterion, we assume that
in order to detect a corresponding edge, it is not necessary to
examine all the pixels of the initial edge. Thus, from the colored
pixels of the initial edge, a few are selected. The selection
is automatically made by means of a prespecified number of
desired pixels (the range of this number is from 2 to 100). The
selected pixels are equally distributed along the whole initial
edge. For each one of these pixels, the corresponding epipolar
line of the complex image is scanned. The candidate edge pixels
are detected and stored in a matrix of counters according to their
distance from the initial edge pixel. The procedure is repeated
for all the selected pixels of the initial edge. Thus, only a few
lines are scanned, and at the end of this procedure, each element
of the matrix contains the population of the detected pixels per
distance.
Then, the third criterion is defined as follows.
Definition: The most probable candidate edge corresponds to the maximum population of pixels at the same distance from the initial edge. At this distance, at least one pixel of the candidate edge is detected.
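A minimal sketch of this voting scheme follows (illustrative only; the image layout, scan direction, and container choices are assumptions, since the paper does not specify them). Each selected pixel of the initial edge votes, along its scan line, for every candidate edge pixel according to its horizontal distance; the distance with the largest population identifies the most probable corresponding edge.

```cpp
#include <vector>
#include <algorithm>

struct Pixel { int row, col; };

// Third criterion, sketched: scan the epipolar (same) line for each selected
// pixel of the initial edge and count candidate edge pixels per distance.
// 'edge' is a binary edge image; 'selected' holds the chosen initial-edge pixels.
int mostProbableDistance(const std::vector<std::vector<bool>>& edge,
                         const std::vector<Pixel>& selected, int maxDist) {
    std::vector<int> votes(maxDist + 1, 0);        // matrix of counters per distance
    for (const Pixel& p : selected) {
        const std::vector<bool>& line = edge[p.row];
        for (int d = 1; d <= maxDist && p.col + d < (int)line.size(); ++d)
            if (line[p.col + d]) ++votes[d];       // candidate pixel at distance d
    }
    // The winning distance; pixels of the candidate edge found there become seeds.
    return (int)(std::max_element(votes.begin(), votes.end()) - votes.begin());
}
```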
If an object is parallel to the XY plane of the camera
system, then its projections onto a complex image plane are
parallel edges, and all the selected pixels are detected in the
corresponding edge. In any other case, the number of the
detected pixels is smaller. At least one pixel is necessary to
create the corresponding edge. Pixels that satisfy the third
criterion and, at the same time, belong to an edge are colored
with the same color as the initial edge. These pixels are called
seeds, and they are propagated as four-neighbor pixels. In this
way, the whole corresponding edge is created. Propagation is
a passive procedure with very low computational cost. When
applying the aforementioned criteria, it is obvious that the
algorithm cannot find correspondences if an edge of the left
view intersects an edge of the right view. However, in the
semiautomatic approach, parts of the corresponding edges can
be found. In a complex image, the intersection of the edges
may occur when the left and right views of the objects overlap.
For this reason, we are investigating (this research has not yet been completed) the separation of a complex image into the pair of stereo images and its reconstruction. We are elaborating on two
different methods. The first one is based on grayscale cameras
and the use of spatial filters in front of the PSVS, whereas
the second one is based on color cameras and the use of color
filters. The second method separates a complex image into the
pair of stereo images at frame rate.
After the corresponding edges have been created (detection
and propagation), the corresponding points in each pair of
edges can be found by mapping the points one by one. In
this second stage, the density of points, i.e., the number of
corresponding points in a pair of edges, is determined by means
of a prespecified number. The range of this number is from
3 to 1000, and it is manually selected before the algorithm is
implemented. The corresponding pairs of points are equally
distributed, and their locations and disparities are stored in a
matrix.

Fig. 6. Correspondence procedure. (a) In the first stage, seven points are
selected for correspondence from the desired edge. Only one corresponding
point was found. (b) In the second stage, 30 corresponding pairs of points were
found. Parallel lines show this mapping.

An example is shown in Fig. 6(a). The color of the initial


desired edge is in grayscale. The other edges are excluded when
the first two criteria are applied. Only one of them has the same
upper and lower limits and a smaller number of pixels than
the predefined one. For the application of the third criterion,
seven points are automatically selected from the initial edge.
After the criterion is applied, only one seed is found. This seed
is propagated, and the corresponding edge is created. Then,
having implemented the second stage of the algorithm, the
corresponding pairs of points are found. This mapping is shown
with the parallel lines in Fig. 6(b).
To sum up, the basic steps of the correspondence algorithm
are the following.
1) Find the edge points of the objects with image processing
(filtering, edge detection, and binary conversion).
2) Select the desired edges from the left side of the complex
binary image and assign a desired color to each edge.
3) Find all edge pixels in the same line for every selected
pixel and store them.
4) From these pixels, select the set that satisfies the criteria
of the correspondence algorithm. These pixels are called
seeds.
5) Propagate the seeds to the whole corresponding edge.
6) Repeat step 5) until the whole corresponding edge is
selected.
7) Store the coordinates of the corresponding pixels, as well as the disparity of each pair of corresponding pixels, in a matrix (a sketch of this second stage is given below).
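A compact sketch of the second stage mentioned in step 7) follows (the equal-spacing index mapping is an assumption about how the "equally distributed" pairs are chosen; both edges are taken as ordered pixel lists produced by the propagation step, and nPoints ≥ 2 is assumed).

```cpp
#include <vector>

struct EdgePixel { int row, col; };
struct PointPair { EdgePixel left, right; int disparity; };

// Map nPoints (3-1000 in the application) equally distributed corresponding
// points between a pair of corresponding edges and store their disparities.
std::vector<PointPair> mapCorrespondingPoints(const std::vector<EdgePixel>& leftEdge,
                                              const std::vector<EdgePixel>& rightEdge,
                                              int nPoints) {
    std::vector<PointPair> pairs;                  // the stored matrix of results
    for (int k = 0; k < nPoints; ++k) {
        const EdgePixel& pl = leftEdge [k * (leftEdge.size()  - 1) / (nPoints - 1)];
        const EdgePixel& pr = rightEdge[k * (rightEdge.size() - 1) / (nPoints - 1)];
        pairs.push_back({pl, pr, pl.col - pr.col});
    }
    return pairs;
}
```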
VI. EXPERIMENTAL RESULTS
A. Examples of Complex Images
Fig. 7 shows the complex images captured by means of the
PSVS. A Pulnix TM-520 camera was used. Fig. 7(a) and (b)
shows the complex images of a scene comprising a few objects.
In these images, the left and right views of the objects do not
overlap. In Fig. 7(c)–(e), instead, the two different views of the
complex images overlap. Using the previous correspondence
algorithm, it is possible to estimate the disparities of the corresponding points and then calculate the 3-D-point coordinates,


Fig. 7. (a) and (b) Complex images of a scene comprising a few objects.
(c) Partial overlapping of the ashtrays is shown. (d) and (e) Complex images
of more complicated scenes are presented.

Fig. 9. Disparity map, computed by means of the complex image of pliers, for
500 points, is illustrated.

Fig. 8. (a) Initial complex image is captured by means of the PSVS. (b) Binary
image is illustrated after the initial processing of the complex image. (c) Left
view of the pliers is selected (with grayscale value). (d) Detection of the right
view is presented. Parallel lines show the mapping of the desired corresponding
pairs of points.

even in cases where complex images are captured from more


complicated scenes by means of the PSVS.
B. Disparities of Selected Points
In this section, an experiment with pliers is initially presented. The complex image of pliers [Fig. 8(a)] is captured by
means of the PSVS, and after the initial processing and implementation of the correspondence algorithm, the disparity map
of the pliers is depicted. For this experiment, the semiautomatic
procedure is implemented, and a part of a robotic software
application has been developed for this reason. This application
is called HumanPT, and a part of it is used for receiving images,
for their initial processing, for implementing the correspondence algorithm, and for calculating the coordinates of 3-D
points. The mirrors of the PSVS are aligned according to the
first case of mirror alignment (Section III).

First, the complex image of Fig. 8(a) is filtered with a median


filter, then it is converted to a binary image [Fig. 8(b)] with a
threshold equal to 92 (T = 92), and finally, the Roberts algorithm for edge detection is implemented. In this experiment, the
median filter is used to remove the noise from the grayscale
complex image, whereas the threshold is selected by means
of the histogram of the complex image. Then, from the edge
image, the left view of the object is selected [Fig. 8(c)], and the
correspondence algorithm is implemented. Thus, the right view
of the pliers is detected [Fig. 8(d)]. For the detection of the right view, only five points of the left view are used. In Fig. 8(d),
after implementing the second stage of the algorithm, 100 of
the corresponding pairs of points are indicated by means of
parallel lines connecting each pair. A disparity map is presented
in Fig. 9. It is computed by means of the PSVS for the edge
image; the number of desired points is determined to be 500.
The second stage of the algorithm is implemented to recalculate
the corresponding pairs of points.
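For illustration, the same preprocessing chain can be sketched with OpenCV (the authors' HumanPT application implements its own filters, so the calls below are a stand-in; the Roberts operator is expressed here as two 2 × 2 kernels):

```cpp
#include <opencv2/opencv.hpp>

// Median filtering, thresholding with T = 92, and Roberts edge detection,
// mirroring the processing applied to the complex image of Fig. 8(a).
cv::Mat preprocessComplexImage(const cv::Mat& complexGray) {
    cv::Mat smooth, binary;
    cv::medianBlur(complexGray, smooth, 3);                    // remove impulsive noise
    cv::threshold(smooth, binary, 92, 255, cv::THRESH_BINARY); // T = 92 from histogram
    cv::Mat kx = (cv::Mat_<float>(2, 2) << 1, 0, 0, -1);       // Roberts cross kernels
    cv::Mat ky = (cv::Mat_<float>(2, 2) << 0, 1, -1, 0);
    cv::Mat gx, gy, edges;
    cv::filter2D(binary, gx, CV_32F, kx);
    cv::filter2D(binary, gy, CV_32F, ky);
    cv::magnitude(gx, gy, edges);                              // gradient magnitude
    edges.convertTo(edges, CV_8U);                             // saturating conversion
    return edges;
}
```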
As a second experiment, the implementation of the correspondence algorithm in a complex image captured from a
complicated scene is presented. In this image, the two different
views overlap, and as already explained, only the segments
of the edges can be used to find the correspondences. The
initial complex image [Fig. 10(a)] is converted to an edge
image [Fig. 10(b)]. First, it is smoothed by means of a mean
value filter, then the Roberts edge-detection algorithm is implemented, and finally, the image is converted to a binary image
with a threshold equal to eight (T = 8). In the resulting image,
two segments of the edges are selected by using two different
grayscale values [Fig. 10(c)].
After implementing the correspondence algorithm, the corresponding segments of the edges are detected, and the desired
corresponding pairs of points are computed. The number of


Fig. 11. Complex images of the same iron plate for different distances. The
two different views are always adjacent.

Fig. 10. (a) Initial complex image of a complicated scene captured by means
of the PSVS. (b) Edge image is illustrated. (c) Two segments of edges are
selected with different grayscale values. (d) Having implemented the correspondence algorithm, the pairs of the corresponding points are indicated by
means of parallel lines.

selected points of the initial edges is five, and the seeds found
are three (for the segment with a grayscale value that is equal
to 185) and one (for the segment with a grayscale value that is
equal to 192). For this experiment, the number of the desired
corresponding pairs of points is specified to be 35 [Fig. 10(d)].
The coordinates of 3-D points can be computed from the corresponding pairs of points by means of (12)–(14).
C. Complex-Image Views Overlapping
In the two previous experiments, the examples of a scene
comprising an object and a complicated scene are presented.
Overlapping of the left and right views of an object in a complex
image takes place when the dimension of the object along the
X-axis of the camera is greater than the length of baseline b,
whereas it does not depend on depth z. To show this experimentally, Fig. 11 is used.
It consists of eight complex images of an iron plate measuring 98 × 68 mm. The long dimension of the plate is parallel to the X-axis of the camera frame, b = 98 mm, and the distance
of the PSVS from the object is increased every 100 mm. As
the long dimension of the plate is equal to b, the two different
views are adjacent at any location of the plate. While the size
of the views is decreased as the distance from the iron plate
is increased, the two views are always adjacent. Thus, the
overlapping of the views does not depend on the distance of
the objects from the PSVS but only on b.
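This can be verified directly from the projection equations (neglecting m and l): for an object spanning [x1, x2] along the X-axis at depth z, the leftmost pixel of the left view lies to the left of the rightmost pixel of the right view, i.e., the views overlap, exactly when

$$\frac{f\left(x_1 + \frac{b}{2}\right)}{z + \frac{b}{2}} < \frac{f\left(x_2 - \frac{b}{2}\right)}{z + \frac{b}{2}} \;\Longleftrightarrow\; x_2 - x_1 > b$$

since the common factor f/(z + b/2) cancels, which is why the condition does not involve the depth.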

Fig. 12. (a) Original pattern. (b) Sample of complex images.

D. Measurement Accuracy
The pattern of Fig. 12(a) was used to verify the accuracy of
the PSVS measurements. The camera was calibrated by means
of the method of Zhang [23] and the corner-detection algorithm
proposed in [24]. Here, the length of the system baseline is
b = 100 mm, and the active focal length is f = 16.276 mm.
Fig. 12(b) shows a sample of the complex images received
by the PSVS. The diameter of each circle is d = 10 mm, and
the distance between the circles is a = 20 mm. The PSVS
is mounted on the end effector of the robotic manipulator
PUMA 761.
One part of the HumanPT is used to achieve communication
between the robot controller and the personal computer by
means of the ALTER communication port, and another part is
used to control the robot. Thus, the manipulator is controlled
by means of the personal computer (server), whereas a second
PC (client) is connected with the first one via Ethernet. The
main robotic application is executed on the second PC. After
implementing the correspondence algorithm, a third part of this
software is used to calculate the values of depth z, the values
of circle diameters, and the distance between the centers of the
first and last circles. The Z-axis of the camera is aligned to the
Z-axis of the coordinate system on the base of the manipulator.
Using the aforementioned structure, the PSVS distance from
the pattern is increased every 50 mm (along the Z-axis) by
means of the robotic manipulator. Sixteen images are captured,
and after their processing, the results of the calculated depth,
the circle diameter, and the distance between the centers of the
first and last circles are presented in Table I.
The distances of the PSVS (of the optical center) from the
pattern are manually measured. For this reason, the position of
the optical center is first estimated. To estimate the position of


TABLE I
DEPTH, CIRCLE DIAMETER, AND DISTANCE BETWEEN THE CENTERS OF THE FIRST AND LAST CIRCLES, MEASURED AND CALCULATED IN MILLIMETERS

the optical center, the hand-eye problem is solved by means of


the method proposed in [26], and two homogeneous matrices
are estimated. The first one is the transformation matrix of the
camera frame with respect to the end effector frame of the
manipulator, and the second one is the transformation matrix
providing the frame that is established on the pattern plane
with respect to the base frame of the robot. The transformation
matrix of the end effector frame with respect to the base
frame is provided in each cycle by the robot controller through
the ALTER communication port. In Fig. 13(a), the curve for
the calculated depth-z error percentages versus the measured
depth z is shown, whereas in Fig. 13(b), the measured and
the calculated distances between the centers of the first and
last circles versus the calculated depth z are depicted. This
error increases as the measured depth increases. The previous results compare favorably with those presented in [2], [14], and [15]. For the experiment, the maximum value of depth
z is reduced to 1255 mm because of the upper limit of the
PUMA 761.

Fig. 13. (a) Calculated depth-z error percentages versus the measured depth z.
(b) Measured and calculated distances versus the calculated depth z.

E. Real-Time Applications of PSVS

Fig. 14. (a) Complex image of the TOB initial pose. (b) Complex image of
the TOB final pose.

Two paradigms that show the use of the PSVS in real-time


applications are presented. In the first paradigm, the PSVS
mounted on the end effector of the robotic manipulator keeps an
object in the field of view. The target object (TOB) is composed
of three red LEDs that form an isosceles triangle. Thus, the
three light sources of the TOB can easily be segmented, and
then, the corresponding centers are computed by means of
the correspondence algorithm in an automatic procedure. From
these centers, the geometric center of the three light sources is
calculated, and a frame is attached to it (TOB frame). A part
of the HumanPT robotic application is executed on the client
computer, and it is used for the endpoint closed-loop (ECL)
pose-based stereo visual servo control of the robot [25]. By
means of the PSVS, this software calculates, in each cycle, the
pose of the TOB frame. The desired point is selected to be a virtual point along the camera Z-axis at a prespecified distance d_des = 550 mm. Thus, by means of the PSVS, the manipulator

is driven to a final location, where the TOB frame is parallel to


the camera frame and where the distance of origins of the two
frames is ddes . The PSVS continues to keep the TOB in this
final pose. To calculate the pose of the TOB with respect to the
base coordinate system of the robot, the hand-eye problem is
solved by means of the method proposed in [26].
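The structure of this control loop can be sketched as follows (a heavily hedged outline: the control law, the gain, and the interface functions below are hypothetical, since the paper does not list them; only the 28-ms ALTER cycle, the desired distance d_des, and the ECL pose-based scheme come from the text).

```cpp
#include <cmath>

struct Pose { double x, y, z, thx, thy, thz; };   // six-component state vector

// Hypothetical hooks, stubbed here: TOB pose measurement through the PSVS
// (segmentation of the LEDs, correspondence, triangulation, hand-eye transform)
// and the velocity command sent through the ALTER port every 28 ms.
Pose measureTobPose() { return {}; }
void sendAlterVelocity(const Pose& v) { (void)v; }

// Drive the manipulator until the TOB frame is parallel to the camera frame
// at distance dDes along the camera Z-axis (a simple proportional law is
// assumed here for illustration).
void servoLoop(double dDes, double gain, double tol) {
    for (;;) {
        Pose p = measureTobPose();
        Pose e{p.x, p.y, p.z - dDes, p.thx, p.thy, p.thz};   // pose error
        if (std::sqrt(e.x * e.x + e.y * e.y + e.z * e.z) < tol) break;
        sendAlterVelocity({-gain * e.x, -gain * e.y, -gain * e.z,
                           -gain * e.thx, -gain * e.thy, -gain * e.thz});
    }
}
```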
In Fig. 14(a), the complex image of the TOB initial pose is shown. This pose is provided as a six-component state vector of the form (x, y, z, θx, θy, θz)^T, and for this paradigm, it has the following values: (25.125, 1050.125, 299.875, 160.005, 0.003, 9.992)^T. The system is set in operation, and the end effector of the manipulator is translated and rotated until the final pose of the TOB is obtained [Fig. 14(b)]. Then, the state vector of the final pose has the following values: (125.000, 1126.500, 121.375, 162.971, 1.293, 0.405)^T. In Fig. 15, the translation along the three axes of the manipulator versus the samples is shown.


Fig. 15. Translation of the end effector of the manipulator along the X-, Y-, and Z-axes from an initial to a final pose.

Fig. 16. (a) Translation velocities of the end effector of the manipulator. (b) Rotation velocities of the end effector of the manipulator.

In Fig. 16(a) and (b), the corresponding translation and rotation velocities versus the samples are shown, respectively. These velocities in the stable state have very small values. The velocity vector is again a six-component vector of the form (Vx, Vy, Vz, ωx, ωy, ωz)^T. Each sample corresponds to an operation cycle. This cycle is 28 ms, the minimum possible loop cycle through the ALTER communication port. In this paradigm, a Pulnix TM-6705 camera was used, which worked at 60 Hz.

In the second paradigm, we exploit the high accuracy of the PSVS measurements at relatively small distances to generate a robot path, using a modification of the well-known method "Teaching by Showing." The object used is the previous TOB. The same software is also used. An operator drives the TOB along the edges of a trapezium-shaped aluminum piece. It measures b1 = 451 mm, b2 = 366 mm, and h = 45 mm. In order to better drive the TOB along the edges of the object, a metal pin is attached to the invisible side of the TOB (bottom side).

The PSVS always keeps the moving object in the field of vision using the same control method as before. Simultaneously, the new poses of the TOB are recorded as six-component vectors. The measured cycle of the servo loop (control plus recording) is 70 ms. In Fig. 17, the recorded robot path is shown with crosses as a 3-D graph. To better display the recorded path, only one cross for every ten cycles is plotted. Then, this path is tracked in real time (loop cycle = 28 ms), and the result is shown in the same figure (Fig. 17) as a continuous line.

Fig. 17. Generated path of a trapezium-shaped aluminum piece with crosses and the tracked path with a continuous line (all units are in millimeters).

VII. COMPARISON WITH OTHER SYSTEMS–APPLICATIONS

To compare the performance of the PSVS with respect to

a standard stereo-vision system on a common basis, two parallel IEEE-1394 (FireWire) cameras were used to compose a stereo-vision system. The baseline length b of this vision system was again 10 cm (as in the PSVS). In each case, the
cameras were calibrated by means of the calibration method
of Zhang. The stereo system was mounted on a square-profile
8 × 8 cm aluminum tube that was 2 m long. Along this tube, a target similar to that of Fig. 12(a), which was mounted on a specially constructed thick aluminum base, could be accurately
moved. A stereo pair of images was acquired every 100 mm
from 500 to 1900 mm. The experiment was repeated using one
camera in the same location with respect to the Z-axis and PSV
apparatus instead of the stereo-vision system. Complex images
were captured again every 100 mm. The images that were


alignment of the PSVS mirrors is possible (when it is necessary)


by means of a simple laser beam. By comparing the results
presented in [2], [14], and [15], it is found that the results in Fig. 18 are more accurate.
The PSVS can be used in applications where robot control
is necessary, providing more accurate measurements than the
other systems. Moreover, it can be used in the following cases:
1) The PSVS, located on a mobile robot, can significantly
improve its navigation. 2) It can be used to accurately measure
distances on moving objects, i.e., on a production line or located
on a moving vehicle to measure the relative distance and speed
of other vehicles or objects. 3) The PSVS concept can be used to measure long distances, exploiting the fact that, with only one stationary camera, mirrors (2) and (4) can be placed far apart, ensuring in this way (through a large baseline length b) the accuracy of the measurements. 4) On the other hand, the PSVS concept permits the construction of a stereo-vision system that can accurately measure distances in a microworld. As an example, we mention the case where a micro-PSVS apparatus can accurately guide a surgical tool of a fully robotized system, where the signal, by means of an optical fiber, can reach an external computer for processing.
VIII. CONCLUSION
In this paper, the design and the construction of a new
apparatus for stereo vision with a single camera are presented.
The PSVS has the following remarkable features, which permit its use in real-time applications.

Fig. 18. Errors measured with respect to the real distance of each vision
system from the target.

acquired by means of the two vision systems were processed


using our correspondence algorithm. In each case, the diameter
of a circular area (10 mm) along the X-axis, the distance between the centers of two circular areas (40 mm apart) along the Y-axis, and the depth z were calculated, and the results of the
errors are shown in Fig. 18.
By comparing the results in Fig. 18, we realize the following:
1) The accuracy of the PSVS is better than the accuracy of
the standard stereo-vision system along the X- and Y -axes.
2) The accuracy of the PSVS is much better than the accuracy
of the standard stereo-vision system along the Z-axis. 3) The
PSVS can measure in smaller distances (smaller blind zone).
Moreover, camera parallelism was a difficult procedure (which is why rectification of a stereo pair of images is usually implemented, increasing the computational cost), and a computer
is always necessary for the alignment of cameras, whereas the

1) It has no moving parts.


2) It uses only one common CCD camera, and this way, the
two virtual cameras of the stereo system have exactly the
same geometric properties and parameters.
3) It receives a complex image, which is composed of a
stereo pair of images, in a single shot.
4) It directly processes a complex image.
5) It can be constructed in any dimension, covering every
type of camera and baseline length.
6) It is a relatively low-cost apparatus with a mechanically robust construction; thus, it can be used in any vision system.
7) By implementing our correspondence algorithm, it is
possible to find point disparities in real time.
The experimental results compare favorably with those of some older systems and of standard stereo-vision systems. By
means of a fast and intelligent algorithm, which separates a
complex image into a pair of stereo images, it might be used
as an ordinary stereo-vision system. The PSVS can be used
in a considerable number of applications. This vision system,
combined with a PUMA 761 robotic manipulator, will be used
in a vision-based arc-welding system.
REFERENCES
[1] R. M. Haralick and L. G. Shapiro, Computer and Robot Vision. Reading,
MA: Addison-Wesley, 1993.
[2] W. Teoh and X. D. Zhang, "An inexpensive stereoscopic vision system for robots," in Proc. Int. Conf. Robot., 1984, pp. 186–189.


[3] Y. Nishimoto and Y. Shirai, "A feature-based stereo model using small disparities," in Proc. IEEE Comput. Soc. Conf. CVPR, 1987, pp. 192–196.
[4] S. Nayar, "Robotic vision system," U.S. Patent 4 893 183, Aug. 1988.
[5] D. Southwell, A. Basu, M. Fiala, and J. Reyda, "Panoramic stereo," in Proc. Int. Conf. Pattern Recog., 1996, pp. 378–382.
[6] A. Goshtasby and W. A. Gruver, "Design of a single-lens stereo camera system," Pattern Recognit., vol. 26, no. 6, pp. 923–936, Jun. 1993.
[7] M. Inaba, T. Hara, and H. Inoue, "A stereo viewer based on a single camera with view-control mechanism," in Proc. Int. Conf. Robots Syst., Jul. 1993, pp. 1857–1864.
[8] H. Mathieu and F. Devernay, "Système de miroirs pour la stéréoscopie," INRIA, Sophia-Antipolis, France, Tech. Rep. 0172, 1995 (in French).
[9] S. Nene and S. Nayar, "Stereo with mirrors," in Proc. ICCV, 1998, pp. 1087–1094.
[10] J. M. Gluckman and S. K. Nayar, "A real-time catadioptric stereo system using planar mirrors," in Proc. IUW, 1998, pp. 309–313.
[11] J. Gluckman and S. Nayar, "Planar catadioptric stereo: Geometry and calibration," in Proc. Conf. Comput. Vis. Pattern Recog., 1999, pp. 1022–1028.
[12] J. M. Gluckman and S. K. Nayar, "Real-time omnidirectional and panoramic stereo," in Proc. IUW, 1998, pp. 299–303.
[13] J. M. Gluckman and S. K. Nayar, "Rectified catadioptric stereo sensors," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., Jun. 2000, pp. 224–236.
[14] D. H. Lee, I. S. Kweon, and R. Cipolla, "A biprism stereo camera system," in Proc. IEEE Comput. Soc. Conf. CVPR, 1999, pp. 82–87.
[15] D. H. Lee and I. S. Kweon, "A novel stereo camera system by a biprism," IEEE Trans. Robot. Autom., vol. 16, no. 5, pp. 528–541, Oct. 2000.
[16] S. Peleg, M. Ben-Ezra, and Y. Pritch, "Omnistereo: Panoramic stereo imaging," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 3, pp. 279–290, Mar. 2001.
[17] A. Wuerz, S. K. Gehrig, and F. J. Stein, "Enhanced stereo vision using free-form surface mirrors," in Proc. Int. Workshop RobVis, 2001, vol. 1998, pp. 91–98.
[18] L. S. Pedrotti and F. L. Pedrotti, Optics and Vision. Englewood Cliffs, NJ: Prentice-Hall, 1998.
[19] T. Pachidis and J. Lygouras, "A pseudo stereo vision system as a sensor for real time path control of a robot," in Proc. IEEE Instrum. Meas. Technol. Conf., Anchorage, AK, 2002, pp. 1589–1594.
[20] U. R. Dhond and J. K. Aggarwal, "Structure from stereo–A review," IEEE Trans. Syst., Man, Cybern., vol. 19, no. 6, pp. 1489–1510, Nov./Dec. 1989.
[21] J. Y. Goulermas and P. Liatsis, "Hybrid symbiotic genetic optimisation for robust edge-based stereo correspondence," Pattern Recognit., vol. 34, no. 12, pp. 2477–2496, Dec. 2001.
[22] G. A. Jones, "Constraint, optimisation and hierarchy: Reviewing stereoscopic correspondence of complex features," Comput. Vis. Image Underst., vol. 65, no. 1, pp. 57–78, 1997.
[23] Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 11, pp. 1330–1334, Nov. 2000.
[24] T. Pachidis, J. Lygouras, and V. Petridis, "A novel corner detection algorithm for camera calibration and calibration facilities," in Proc. 2nd WSEAS Int. Conf. Signal Process. Comput. Geometry Vis., 2002, pp. 6911–6916.

[25] S. Hutchinson, G. Hager, and P. Corke, "A tutorial on visual servo control," IEEE Trans. Robot. Autom., vol. 12, no. 5, pp. 651–670, Oct. 1996.
[26] H. Zhuang, Z. Roth, and R. Sudhakar, "Simultaneous robot/world and tool/flange calibration by solving homogeneous transformation of the form AX = YB," IEEE Trans. Robot. Autom., vol. 10, no. 4, pp. 549–554, Aug. 1994.

Theodore P. Pachidis (S'99–M'04) was born in


Drama, Greece, in August 1962. He received the
B.S. degree in physics and the M.S. degree in electronics from the Aristotle University of Thessaloniki,
Thessaloniki, Greece, in 1985 and 1989, respectively, and the Ph.D. degree in robotics and machine
vision systems from the Department of Electrical
and Computer Engineering, Democritus University
of Thrace, Xanthi, Greece, in 2005.
In 1989, he began working as a teacher of physics and electronics in several schools in Kavala, Greece. From 1996 to 1998, he was a schoolmaster with the Public Professional Training Institute in Kavala. At the same time, through his own business, he designed and constructed a considerable number of electronic devices. Since 2005, he
has been with the Kavala Institute of Technology. He is also currently with the
Department of Electrical and Computer Engineering, Democritus University
of Thrace, Xanthi, Greece. His research interests include electronics, robotics,
machine vision systems, and visual C++ and microcontroller programming.

John N. Lygouras was born in Kozani, Greece, in


May 1955. He received the Diploma and Ph.D. degrees (with honors) in electrical engineering from the
Democritus University of Thrace, Xanthi, Greece, in
1982 and 1990, respectively.
In 1982, he was a Research Assistant with the
Department of Electrical and Computer Engineering,
Democritus University of Thrace, Xanthi, Greece. In
1997, he spent six months with the Department of
Electrical Engineering and Electronics, University of
Liverpool, Liverpool, U.K., as an Honorary Senior
Research Fellow. Since 2000, he has been an Associate Professor with the
Department of Electrical and Computer Engineering, Democritus University of
Thrace. His research interests are in the field of robotic-manipulator trajectory
planning and execution. His interests also include research on analog and digital
electronic system implementation and position control of underwater remotely
operated vehicles.
