
S-118.4250 POSTGRADUATE SEMINAR ON ILLUMINATION ENGINEERING, SPRING 2008


LIGHTING UNIT, DEPARTMENT OF ELECTRONICS, HELSINKI UNIVERSITY OF TECHNOLOGY (TKK)



OPTICAL PERFORMANCE:
CHARACTERIZATION OF A PUPILLOMETRIC CAMERA





Petteri Teikari, petteri.teikari@gmail.com
Emmi Rautkylä, emmi.rautkyl@tkk.fi



ABSTRACT
The objective of this work is to help the reader understand the process of characterizing a camera used in
pupillometric research. In our case, the characterization consists of measuring geometric aberrations,
sharpness (MTF) and noise, and determining the dynamic range. Finally, some ways to compensate for
the observed flaws are presented.
TABLE OF CONTENTS
ABSTRACT .................................................................................................................................. 1
TABLE OF CONTENTS ................................................................................................................ 2
1 INTRODUCTION................................................................................................................. 3
2 OPTICS & IMAGING ........................................................................................................... 4
2.1 Structure of lenses and their optical characteristics ................................................................ 4
2.1.1 Focal length........................................................................................................................... 5
2.1.2 Aperture................................................................................................................................. 6
2.1.3 Image formation ................................................................................................................... 7
2.1.4 Depth of field (DOF) ........................................................................................................... 8
2.1.5 Modulation transfer function (MTF) and contrast................................................ 10
2.2 Noise......................................................................................................................................... 12
2.3 Dynamic range......................................................................................................................... 14
2.4 Optical aberrations .................................................................................................................. 15
2.4.1 Chromatic aberration.......................................................................................................... 15
2.4.2 Geometric aberrations........................................................................................................ 16
2.4.3 Vignetting ............................................................................................................................ 18
2.4.4 Diffraction........................................................................................................................... 19
2.5 Aberration correction.............................................................................................................. 20
3 APPLIED LENS DESIGN .................................................................................................... 22
3.1 Measurement science & Machine vision................................................................................ 22
3.2 Photography............................................................................................................................. 24
4 CHARACTERIZING OPTICAL PERFORMANCE IN PRACTICE.............................................. 25
4.1 Pupillometry & overview of the setup................................................................................... 25
4.2 Methods.................................................................................................................................... 25
4.3 Results of the measurements .................................................................................................. 27
4.3.1 Modulation transfer function and sharpness.................................................................... 27
4.3.2 Geometric aberrations........................................................................................................ 29
4.3.3 Dynamic range .................................................................................................................... 30
4.3.4 Noise.................................................................................................................................... 31
4.4 Image restoration..................................................................................................................... 31
4.4.1 Sharpness............................................................................................................................. 31
4.4.2 Geometric aberrations........................................................................................................ 31
4.4.3 Dynamic range .................................................................................................................... 31
4.4.4 Noise.................................................................................................................................... 32
4.5 Conclusions.............................................................................................................................. 33
5 DISCUSSION..................................................................................................................... 35
APPENDICES ............................................................................................................................ 36
Appendix 1: The Matlab code for an ideal image ............................................................................... 36
Appendix 2: The Matlab code for an image with coma...................................................................... 37
Appendix 3: The Matlab code for averaging monochrome images................................................... 38
Appendix 4: The Matlab code for image quality calculation.............................................................. 39
Appendix 5: The Matlab code for calculating variance of one row in the image............................. 40
6 REFERENCES................................................................................................................... 41
1 INTRODUCTION
Cameras do not 'see' in the same way that human beings do. They are not equivalent to
human optics because their lens design defines how the image is formed. Therefore, if cameras are to
be used for research purposes, it is important to know how to minimize the effect of the lens system
on the research data.
The objective of this work is to help the reader understand the process of characterizing a camera. Under
special examination is a pupillometric camera, that is, a camera used for providing data about the
autonomic nervous system by recording pupil size and dynamics. The experimental part of the
paper gives a practical example of characterizing such a pupillometric camera, which is very sensitive
to aberrations and noise, and discusses possible ways to improve the image quality. That, together with
the discussion, forms the core of the paper and raises questions for experiments to come.
The work is meant for people not very familiar with optics or pupillometry. It takes a simple
approach to optics and imaging in Chapter 2. Chapter 3, in turn, gives more insight into metrology
with a review of the more specific lens designs used in machine vision and measurement science
applications.


2 OPTICS & IMAGING
In this chapter, the basic concepts of optical systems are reviewed in the detail needed for this work.
2.1 STRUCTURE OF LENSES AND THEIR OPTICAL CHARACTERISTICS
A lens or lens system is the optical structure that defines how the image is formed. In practice a
commercially available lens always contains several lens elements (basic types illustrated in Figure 1
[1]), and therefore in this work the word lens refers to the lens system. Lenses can be roughly
categorized into wide-angle and telephoto lenses, where wide-angle lenses have a larger angle of view
(with a smaller focal length) and telephoto lenses have a smaller angle of view (with a larger focal
length). Lenses can either have a fixed focal length (prime lenses), or the focal length can be changed,
for example from wide-angle to telephoto, in which case they are commonly referred to as zoom
lenses. Lenses could also be characterized according to their lens design and the number of elements,
but that kind of characterization is beyond the scope of this work.
Figure 2 demonstrates the typical structure of a lens for commercial digital cameras. The lens is
mounted on the camera using specific bayonet mounts [2], which are poorly intercompatible in
commercial cameras, even though adapters exist to fit different bayonets to a given camera. In
machine vision cameras there are typically three mount types [3]: 1) CS-mount, 2) C-mount, and
3) M12x0.5 (metric threads). The C-mount is most common from mid-range to high-end optics,
whereas the CS-mount is a bit rarer; a C-mount lens can be fitted onto a CS-mount camera using a
proper spacer. The flange (back focal) distance is different for C- and CS-mounts: 17.52 mm from
flange to sensor for the C-mount and 12.52 mm for the CS-mount [4]. C-mounts are often found
on microscopes too. The M12x0.5 mount is found on cheaper cameras.
The zoom ring is used to change the focal length when the focal length is not fixed, which it
however is in many machine vision lenses. The aperture controls the amount of light reaching the
image-forming surface (film or sensor). An aperture is found in practically all lenses; in commercial
lenses it can be adjusted, but again in many machine vision lenses the aperture is fixed. Machine
vision lenses do not commonly have an image stabilizer, which would allow longer exposures with
image sharpness comparable to a shorter exposure without the stabilizer. In low light level situations
an image stabilizer can significantly enhance the image quality. The focus ring or a similar structure
is then used to make the image sharp. In the following sections the basic optical characteristics of
lenses are reviewed.
Figure 2. Illustration of the typical structure of an objective used in SLR cameras. It should be noted
that in most machine vision lenses there is no possibility to adjust the focus, the focal length (zoom
adjustment) or the aperture, as they are all fixed. An image stabilizer is also hardly ever found in
objectives intended for machine vision applications. (Picture: Tuomas Sauliala).
Figure 1. Lenses classified by the curvature of the two optical surfaces [1].
2.1.1 FOCAL LENGTH
The focal length of an optical system is a measure of
how strongly it converges (focuses) or diverges
(diffuses) light. A system with a shorter focal length
has greater optical power than one with a long focal
length as illustrated in Figure 3. For a thin lens in air,
the focal length is the distance from the center of the
lens to the principal foci (or focal points) of the lens.
For a converging lens (for example a convex lens), the
focal length is positive, and is the distance at which a
beam of collimated light will be focused to a single
spot. For a diverging lens (for example a concave lens),
the focal length is negative, and is the distance to the
point from which a collimated beam appears to be
diverging after passing through the lens.
Focal length for an ideal (infinitely thin) lens can be calculated in the following manner [5]:

$$\frac{1}{f} = (n-1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right) \qquad (1)$$

where f = focal length
n = refractive index
R1 = radius of curvature 1 (object side of the lens)
R2 = radius of curvature 2 (imaging side of the lens)

However, real-life lenses are not infinitely thin. For a lens of thickness d in air, with surfaces of
radii of curvature R1 and R2, the effective focal length f is therefore given by [5]:

$$\frac{1}{f} = (n-1)\left[\frac{1}{R_1} - \frac{1}{R_2} + \frac{(n-1)\,d}{n\,R_1 R_2}\right] \qquad (2)$$

where f = focal length
n = refractive index
d = lens thickness
R1 = radius of curvature 1 (object side of the lens)
R2 = radius of curvature 2 (imaging side of the lens)

From Eq. (2) it can be seen that for a convex lens the focal length increases as the thickness
increases. An estimate of the focal length of a lens system comprised of two infinitely thin lenses,
separated by a distance dist, can also be derived from Eq. (2):

$$\frac{1}{f} = \frac{1}{f_1} + \frac{1}{f_2} - \frac{d_{ist}}{f_1 f_2} \qquad (3)$$

where f = focal length
dist = distance between the lenses
f1 = focal length of lens 1
f2 = focal length of lens 2
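
To make Eqs. (2) and (3) concrete, the short Matlab sketch below evaluates both for a hypothetical
biconvex lens and a hypothetical lens pair; all numerical values are illustrative assumptions, not data
from this work.

```matlab
% A minimal numerical check of Eqs. (2) and (3); the lens
% parameters below are illustrative assumptions.
n  = 1.5;             % refractive index of the glass (assumed)
R1 = 0.10;            % radius of curvature, object side [m]
R2 = -0.10;           % radius of curvature, image side [m]
d  = 0.005;           % lens thickness [m]
f_thick = 1 / ((n-1) * (1/R1 - 1/R2 + (n-1)*d/(n*R1*R2)))  % Eq. (2)

f1 = 0.10; f2 = 0.20; dist = 0.05;    % two thin lenses [m]
f_pair = 1 / (1/f1 + 1/f2 - dist/(f1*f2))                  % Eq. (3)
```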
Figure 3. Focal length. The light rays transmitted through an infinitely thin lens meet at point P,
which defines the focal length f of the lens. (Picture: Tuomas Sauliala)
As mentioned earlier, focal length relates to the angle of view of the lens. Due to the popularity of
the 35 mm standard (135 film, full-frame digital cameras), camera-lens combinations are often
described in terms of their 35 mm equivalent focal length, that is, the focal length of a lens that
would have the same angle of view, or field of view, if used on a full-frame 35 mm camera. Use of a
35 mm equivalent focal length is particularly common with digital cameras, which often use sensors
smaller than 35 mm film, and so require correspondingly shorter focal lengths to achieve a given
angle of view, by a factor known as the crop factor. In machine vision cameras the imaging sensor is
often smaller than the 135 film frame; thus, for the same angle of view, machine vision lenses have
shorter focal lengths than their 35 mm counterparts.
Table 1 shows the diagonal, horizontal, and vertical angles of view, in degrees, for lenses producing
rectilinear images, when used with the 36 mm × 24 mm format (that is, 135 film or full-frame 35 mm
digital, using a width of 36 mm and a height of 24 mm) [6]. The same comparison can easily be done
for a machine vision imaging system simply by scaling the focal length by a constant (the crop
factor) once the sensor size is known.
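
The angles in Table 1 follow from the rectilinear relation α = 2·arctan(d/(2f)), with d the frame
dimension in question. The Matlab sketch below reproduces the 50 mm diagonal entry of the table
and illustrates the crop-factor scaling; the small-sensor dimensions are an assumed example (roughly
a 1/2" format), not taken from the text.

```matlab
% Angle of view of a rectilinear lens on a 36 x 24 mm frame,
% and the 35 mm equivalent focal length for a smaller sensor.
f = 50;                        % focal length [mm]
d_ff = hypot(36, 24);          % full-frame diagonal [mm]
aov_diag = 2 * atand(d_ff / (2*f))   % ~46.8 deg, cf. Table 1

d_small = hypot(6.4, 4.8);     % assumed 1/2" machine vision sensor [mm]
crop    = d_ff / d_small;      % crop factor
f_equiv = f * crop             % 35 mm equivalent focal length [mm]
```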

Table 1. Common lens angles of view for 135 film or full-frame 35 mm digital camera [6].

Focal length (mm):  13   15    18    21    24    28    35    50    85    105   135   180   210   300   400   500   600   830   1200
Diagonal (°):       118  111   100   91.7  84.1  75.4  63.4  46.8  28.6  23.3  18.2  13.7  11.8  8.25  6.19  4.96  4.13  2.99  2.07
Vertical (°):       85.4 77.3  67.4  59.5  53.1  46.4  37.8  27.0  16.1  13.0  10.2  7.63  6.54  4.58  3.44  2.75  2.29  1.66  1.15
Horizontal (°):     108  100.4 90.0  81.2  73.7  65.5  54.4  39.6  23.9  19.5  15.2  11.4  9.80  6.87  5.15  4.12  3.44  2.48  1.72
2.1.2 APERTURE
Aperture is the circular opening at the center of a lens that admits light. It is generally specified by
the f-stop (also known as zone or exposure value), which is the focal length divided by the aperture
diameter. It is a dimensionless number that is a quantitative measure of lens speed, an important
concept in imaging [7]. Hence, a large
aperture corresponds to a small f-stop.
A change of one unit in f-stop
corresponds to halving or doubling the
light exposure. The f-number accurately
describes the light-gathering ability of a
lens only for objects an infinite distance
away. In optical design, an alternative is
often needed for systems where the
object is not far from the lens. In these
cases the working f-number is used.
Because the human eye responds to relative luminance differences, noise is often measured in
f-stops. The f-number of the human eye varies from about f/8.3 in a very brightly lit place to about
f/2.1 in the dark [8]. Toxic substances and poisons (like atropine) can significantly reduce this range.
Pharmaceutical products such as eye drops may also cause similar side-effects.
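
As a small illustration of the f-stop series, the Matlab sketch below (assuming a 50 mm lens) shows
that each step in the standard f-number sequence roughly halves the light-gathering area:

```matlab
% The f-number is focal length divided by aperture diameter;
% each one-stop increase halves the light-gathering area.
f = 50;                          % focal length [mm] (assumed)
N = [1.4 2 2.8 4 5.6 8 11 16];   % standard one-stop f-number series
D = f ./ N;                      % aperture diameters [mm]
area = pi * (D/2).^2;            % light-gathering areas [mm^2]
relExposure = area / area(1)     % ~[1 0.5 0.25 ...]: one stop halves the light
```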
Figure 4. Diagram of decreasing apertures, that is,
increasing f-numbers, in one-stop increments; each
aperture has half the light gathering area of the previous
one. The actual size of the aperture will depend on the
focal length of the lens. Increasing the f-number will also
increase the depth of field as later discussed in more
detail, thus aperture does not simply regulate the amount
of light entering the image sensor [7].
2.1.3 IMAGE FORMATION
The lens forms the image on the imaging plane (or focal plane). The front focal point (tip of the y arrow in
Figure 5) of an optical system, by definition, has the property that any ray that passes through it will
emerge from the system parallel to the optical axis. The rear (or back) focal point (tip of the y' arrow in
Figure 5) of the system has the reverse property: rays that enter the system parallel to the optical axis
are focused such that they pass
through the rear focal point. The
front and rear (or back) focal
planes are defined as the planes,
perpendicular to the optic axis,
which pass through the front and
rear focal points. An object an
infinite distance away from the
optical system forms an image at
the rear focal plane. For objects a
finite distance away, the image is
formed at a different location, but
rays that leave the object parallel to
one another cross at the rear focal
plane [9].
The following relation exists between the focal length and the object-image formation [10]:

$$f = \frac{D\,h_2}{h_1 + h_2} \qquad (4)$$

where f = focal length
D = imaging distance
h1 = image height
h2 = object height

Magnification m is the relationship between the physical size of the object and the size of the image
on the sensor or film plane. It should be noted that a magnification of 1:1 is a rather high
magnification in photography and in machine vision; such magnifications are only found in macro
lenses capable of focusing relatively close [11]. In Figure 5 the magnification can be calculated from
the following equation:

$$m = \frac{y'}{y} = \frac{b}{a} \qquad (5)$$

where m = magnification
y' = height of the formed image
y = height of the object

and, when a − f = p and b − f = q:

$$m = \frac{f}{p} = \frac{q}{f}\,, \qquad \text{equivalently} \qquad \frac{1}{f} = \frac{1}{a} + \frac{1}{b} \qquad (6)$$

Figure 5. Image formation on an image plane. (Picture:
Tuomas Sauliala)
where f = focal length
a = distance of the object from the lens
b = distance of the imaging plane from the lens

It can also be seen from Figure 5 that there is a relation between image plane illuminance, focal
length and the distance of the object. If the same object is imaged at a close distance with a wide-angle
focal length, at the same F-number of the lens, as with a telephoto focal length, so that the formed
image y' has the same size, the so-called effective F-number is then larger than the F-number of the
optics (aperture). This reduction of light can be expressed with the following equation:

$$F' = (1 + m)\,F \qquad (7)$$

where F' = effective F-number
F = F-number of the aperture
m = magnification
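
The following Matlab sketch ties Eqs. (5)-(7) together for an assumed object distance and F-number;
close focusing raises the magnification m and hence the effective F-number:

```matlab
% Thin-lens image formation: solve 1/f = 1/a + 1/b for the
% image distance, then magnification and effective F-number.
f = 0.050;               % focal length [m] (assumed)
a = 0.25;                % object distance [m] (assumed)
b = 1 / (1/f - 1/a)      % image distance [m], from Eq. (6)
m = b / a                % magnification, Eq. (5)
F = 2.8;                 % nominal F-number of the aperture
F_eff = (1 + m) * F      % effective F-number, Eq. (7)
```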

2.1.4 DEPTH OF FIELD (DOF)
In optics, particularly as it relates to film and photography, the depth of field (DOF) is the portion
of a scene that appears sharp in the image. Although a lens can precisely focus at only one distance,
the decrease in sharpness is gradual on either side of the focused distance, so that within the DOF,
the unsharpness is imperceptible under normal viewing conditions.
The DOF is determined by the subject
distance (that is, the distance to the plane
that is perfectly in focus), the lens focal
length, and the lens f-number (relative
aperture). Except at close-up distances,
DOF is approximately determined by the
subject magnification and the lens f-
number. For a given f-number, increasing
the magnification, either by moving
closer to the subject or using a lens of
greater focal length, decreases the DOF;
decreasing magnification increases DOF.
For a given subject magnification,
increasing the f-number (decreasing the
aperture diameter) increases the DOF;
decreasing f-number decreases DOF as
illustrated in Figure 6 [12]. Depth of field
has the following relation:


$$\Delta = 2\,F\,\varepsilon \qquad (8)$$

where Δ = depth of the sharp area on the image plane (depth of focus)
ε = diameter of the focus circle (circle of confusion)
F = F-number of the aperture

Figure 6. The role of the aperture (diaphragm) in image formation. When the aperture is reduced
(the f-number increased), some of the light rays are clipped away, shrinking the circle of confusion.
With the aperture position illustrated in the figure, the circle of confusion is b; with the largest
aperture it would be a. (Picture: Tuomas Sauliala)
When focus is set to the hyperfocal distance, the DOF extends from half the hyperfocal distance to
infinity, and is the largest DOF possible for a given f-number. There are two commonly used
definitions of hyperfocal distance, leading to values that differ only slightly. The first definition: the
hyperfocal distance is the closest distance at which a lens can be focused while keeping objects at
infinity acceptably sharp; that is, the focus distance with the maximum depth of field. When the lens
is focused at this distance, all objects at distances from half of the hyperfocal distance out to infinity
will be acceptably sharp. The second definition: the hyperfocal distance is the distance beyond which
all objects are acceptably sharp, for a lens focused at infinity. The distinction between the two
meanings is rarely made, since they are interchangeable and have almost identical values. The value
computed according to the first definition exceeds that from the second by just one focal length [13].
The following relation exists for the hyperfocal distance:

$$D_d = \frac{f^2}{F\,\varepsilon} \qquad (9)$$

where Dd = hyperfocal distance
f = focal length
F = F-number of the aperture
ε = diameter of the focus circle (circle of confusion)

Since F = f / (lens aperture diameter), it can be further derived from Eq. (9):

$$D_d = \frac{f \cdot (\text{lens aperture diameter})}{\varepsilon} \qquad (10)$$

where Dd = hyperfocal distance
ε = diameter of the focus circle (circle of confusion)

Thus a larger lens aperture diameter leads to a larger hyperfocal distance. For example, the old Soviet
lens Helios 40-2 (85 mm, f/1.5-22) has a hyperfocal distance of about 250 m at full aperture (f/1.5) and
48 m at f/8, due to its huge aperture [14].
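
The Helios figures above can be checked against Eq. (9) with the Matlab sketch below; the
circle-of-confusion value is an assumed standard 35 mm figure (about 0.02 mm), not stated in [14]:

```matlab
% Hyperfocal distance, Eq. (9), for the Helios 40-2 example.
f   = 85e-3;                  % focal length [m]
coc = 0.02e-3;                % circle of confusion [m] (assumed)
for F = [1.5 8]
    Dd = f^2 / (F * coc);     % hyperfocal distance [m], Eq. (9)
    fprintf('f/%g: hyperfocal distance = %.0f m\n', F, Dd);
end
% Prints roughly 241 m and 45 m, close to the quoted 250 m and 48 m.
```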
It is also possible to increase the DOF by taking multiple images of the same object with different
focus distances. Focus stacking is a digital image processing technique which combines multiple
images taken at different focus distances to give a resulting image with a greater depth of field than
any of the individual source images. Available programs for multi-shot DOF enhancement include
Syncroscopy AutoMontage, PhotoAcute Studio, the Extended Depth of Field plugin for ImageJ,
Helicon Focus and CombineZM. Getting sufficient depth of field can be particularly challenging in
microscopy and macro photography [12]. Focus stacking can also be used to create topological maps
of structures, as has been done for a fly's eye in Figure 7 using the freely downloadable Extended
Depth of Field plug-in [15].

Figure 7. Image stack of a fly's eye. The whole stack is composed of 32 images of 1280×1024 pixels
at 14 bits [15].
2.1.5 MODULATION TRANSFER FUNCTION (MTF) AND CONTRAST
The optical transfer function (OTF) describes the spatial (angular) variation as a function of spatial
(angular) frequency. When the image is projected onto a flat plane, such as photographic film or a
solid state detector, spatial frequency is the preferred domain, but when the image is referred to the
lens alone, angular frequency is preferred. OTF may be broken down into the magnitude and phase
components as follows: [16]



$$\mathrm{OTF}(\xi,\eta) = \mathrm{MTF}(\xi,\eta)\,\mathrm{PTF}(\xi,\eta) \qquad (11)$$

where

$$\mathrm{MTF}(\xi,\eta) = \bigl|\mathrm{OTF}(\xi,\eta)\bigr|\,, \qquad \mathrm{PTF}(\xi,\eta) = e^{-i\,2\pi\,\lambda(\xi,\eta)}$$

and (ξ, η) are the spatial frequencies in the x- and y-planes, respectively.

The OTF accounts for aberration, which the limiting frequency expression above does not. The
magnitude is known as the Modulation Transfer Function (MTF) and the phase portion is known as
the Phase Transfer Function (PTF). In imaging systems, the phase component is typically not
captured by the sensor. Thus, the important measure with respect to imaging systems is the MTF.
Phase is critically important to adaptive optics and holographic systems. The OTF is the Fourier
transform of the Point Spread Function (PSF). The sharpness of a photographic imaging system, or
of a component of the system (lens, film, image sensor, scanner, enlarging lens, etc.), can thus be
characterized by its MTF, as illustrated in Figure 8 [17].
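
The PSF-OTF-MTF chain can be illustrated with a few lines of Matlab; the one-dimensional
Gaussian PSF below is purely an assumed example, not a model of any lens discussed here:

```matlab
% MTF as the magnitude of the Fourier transform of a PSF.
sigma = 2;                          % PSF width [pixels] (assumed)
x   = -32:31;
psf = exp(-x.^2 / (2*sigma^2));
psf = psf / sum(psf);               % normalize to unit area
mtf = abs(fft(psf));                % MTF = |OTF| (phase discarded)
mtf = mtf / mtf(1);                 % normalize to 1 at zero frequency
freq = (0:31) / 64;                 % spatial frequency [cycles/pixel]
plot(freq, mtf(1:32));
xlabel('Spatial frequency [cycles/pixel]'); ylabel('MTF');
```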
Another related quantity is the Contrast Transfer Function (CTF). MTF describes the response of
an optical system to an image decomposed into sine waves. CTF describes the response of an optical
system to an image decomposed into square waves [16]. Contrast levels from 100% to 2% are
illustrated on Figure 9 for a variable frequency sine pattern. Contrast is moderately attenuated for
MTF = 50% and severely attenuated for MTF = 10%. The 2% pattern is visible only because
viewing conditions are favorable: it is surrounded by neutral gray, it is noiseless (grainless), and the
display contrast for CRTs and most LCD displays is relatively high. It could easily become invisible
under less favorable conditions.
Many photographic / machine vision lenses produce a superior image in the center of the frame
than around the edges as illustrated in Figure 11. When using a lens designed to expose a 35mm film
frame with a smaller-format sensor, only the central "sweet spot" of the image is used; a lens that is
unacceptably soft or dark around the edges when used in 35mm format may produce acceptable
results on a smaller sensor [18]. Typically, when stopping down the aperture (increasing the F-number),
the difference between center and border sharpness becomes less prominent.
The plot in Figure 10 illustrates the response of the virtual target to the combined effects of an
excellent lens (a simulation of the highly-regarded Canon 28-70mm f/2.8L) and film (a simulation of
Fuji Velvia). Both the sine and bar patterns (original and response) are shown. The red curve is the
spatial response of the bar pattern to the film + lens. The blue curve is the combined MTF, i.e., the
spatial frequency response of the film + lens, expressed in percentage of low frequency response,
indicated on the scale on the left. (It goes over 100%.) The thin blue dashed curve is the MTF of the
lens only. The edges in the bar pattern have been broadened, and there are small peaks on either side
of the edges. The shape of the edge is inversely related to the MTF response: the more extended the
MTF response, the sharper (or narrower) the edge. The mid-frequency boost of the MTF response is
related to the small peaks on either side of the edges.








Figure 9. Contrast levels from 100% to 2%
illustrated for a variable frequency sine pattern.
Contrast is moderately attenuated for MTF = 50%
and severely attenuated for MTF = 10%. The 2%
pattern is visible only because viewing conditions
are favorable: it is surrounded by neutral gray, it is
noiseless (grainless), and the display contrast for
CRTs and most LCD displays is relatively high. It
could easily become invisible under less favorable
conditions [17].
Figure 8. Visual explanation of MTF and how it
relates to image quality. The top is a target
composed of bands of increasing spatial
frequency, representing 2 to 200 line pairs per mm
(lp/mm) on the image plane. Below you can see
the cumulative effects of the lens, film, lens+film,
scanner and sharpening algorithm [17].
Figure 11. Illustration of MTF curves for Canon
28-70 mm f/2.8L USM zoom lens. The graphs
show MTF in percent for the three line
frequencies of 10 lp/mm, 20 lp/mm and 40
lp/mm, from the center of the image (shown at
left) all the way to the corner (shown at right). The
top two lines represent 10 lp/mm, the middle two
lines 20 lp/mm and the bottom two lines 40
lp/mm. The solid lines represent sagittal MTF
(lp/mm aligned like the spokes in a wheel). The
broken lines represent tangential MTF (lp/mm
arranged like the rim of a wheel, at right angles to
sagittal lines). On the scale at the bottom 0
represents the center of the image (on axis), 3
represents 3 mm from the center, and 21
represents 21 mm from the center, or the very
corner of a 35 mm film image. Separate graphs
show results at f/8 and at full aperture [17].
Figure 10. The plot on the right illustrates the
response of the virtual target to the combined
effects of an excellent lens (Canon 28-70mm
f/2.8L) and film (a simulation of Velvia) [17].
2.2 NOISE
Digital imaging records visual information via a sensor placed at the focal plane of a camera's
optics to measure the light gathered during an exposure. The sensor is constructed as an array of
pixels, each of which is tasked to gather the light arriving within a small patch of sensor area. The
efficiency with which the sensor and its pixels gather light, and the accuracy to which it determines
the amount gathered by each pixel, are crucial for the quality of the recorded image. The incoming
light is the signal the photographer wishes the camera to transcribe faithfully; inaccuracies in the
recording process constitute noise, and distort the scene being photographed. In order to extract the
best performance from digital imaging, it is helpful to have an understanding of the various
contributions to image noise, how various design choices in digital cameras affect this noise, how
choices in photographic exposure can help mitigate noise, and how to ameliorate the visual effect of
noise post-capture.
Possible noise sources in digital imaging are [19]:
1. Photon shot noise (Figure 12)
2. Read noise (Figure 13)
3. Pattern noise (Figure 14)
4. Thermal noise (Figure 15)
5. Pixel response non-uniformity (Figure 18)
6. Quantization error (Figure 17)

Photon shot noise: Light is made up of discrete bundles of energy called photons; the more intense
the light, the higher the number of photons per second that illuminate the scene. The stream of
photons arriving at a given area of the sensor has an average flux (number per second), and there will
be fluctuations around that average. The statistical laws which govern these fluctuations are called
Poisson statistics and are rather universal, encountered in diverse circumstances. The fluctuation in
photon counts is visible in images as noise -- Poisson noise, also called photon shot noise; an example
is shown in Figure 12. The term "shot noise" arises from an analogy between the discrete photons
that make up a stream of light and the tiny pellets that compose the stream of buckshot fired from a
shotgun [19].
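
A minimal Matlab sketch of this behavior (using poissrnd from the Statistics Toolbox) shows the
Poisson signature directly: the standard deviation tracks the square root of the mean count, so the
SNR grows as the square root of the signal:

```matlab
% Poisson photon statistics: std = sqrt(mean), SNR = sqrt(N).
for N = [100 1000 10000]                % mean photon counts (assumed)
    counts = poissrnd(N, 1, 1e5);       % simulated pixel samples
    fprintf('mean %5d: std = %6.1f (sqrt(N) = %5.1f), SNR = %5.1f\n', ...
            N, std(counts), sqrt(N), mean(counts)/std(counts));
end
```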
Read noise: Photons collected by the sensels
(the photosensitive part of a pixel) stimulate the
emission of electrons, one for each captured
photon. After the exposure, the accumulated
photo-electrons are converted to a voltage in
proportion to their number; this voltage is then amplified by an amount proportional to the ISO gain
set in the camera, and digitized in an analog-to-digital converter (ADC). The digital numbers
representing the photon counts for all the pixels constitute the RAW data for the image (raw units
are sometimes called analog-to-digital units ADU, or data numbers DN). In the real world, the raw
level does not precisely reflect the photon count. Each electronic circuit component in the signal
processing chain -- from sensel readout, to ISO gain, to digitization -- suffers voltage fluctuations
that contribute to a deviation of the raw value from the ideal value proportional to the photon count.
Figure 13. Read noise of a Canon 1D3 at ISO 800.
The histogram of the noise is approximately
Gaussian. The average value of 1024 is due to an
offset Canon applies to raw data [19].
Figure 12. Photon shot noise in an image of the
sky from a Canon 1D3 (in the green channel). In
the histogram at right, the horizontal coordinate
is the raw level (raw units are sometimes called
analog-to-digital units ADU or data numbers
DN), the vertical axis plots the number of pixels
in the sample having that raw level. The photon
noise was isolated by taking the difference of two
successive images; the raw values for any one
pixel then differ only by the fluctuations in the
photon count due to Poisson statistics (apart from
a much smaller contribution from read noise) [19].
The fluctuations in the raw value due to the signal processing electronics constitute the read noise of
the sensor (Figure 13) [19].
Pattern noise: In terms of its spatial variation, read noise is not quite white. Upon closer inspection,
there are one-dimensional patterns in the fluctuations in Figure 14. Because the human eye is adapted
to perceive patterns, this pattern or banding noise can be visually more apparent than white noise,
even if it comprises a smaller contribution to the overall noise. Pattern noise can have both a fixed
component that does not vary from image to image (and can easily be removed with a proper pattern
template), and a variable component that, while not random from pixel to pixel, is not the same from
image to image (and is thus harder to eliminate) [19].
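
A minimal Matlab sketch of the blackframe-template approach described in Figure 14 is given below;
the frames are synthetic stand-ins with an assumed column pattern and unit read noise:

```matlab
% Fixed-pattern-noise removal with a blackframe template:
% average several dark frames and subtract the template.
cols = sin((1:640) / 3);                     % synthetic fixed column pattern
pattern = repmat(cols, 480, 1);
darkFrame = @() pattern + randn(480, 640);   % dark frame = pattern + read noise
template = zeros(480, 640);
for k = 1:16
    template = template + darkFrame() / 16;  % average of 16 blackframes
end
img = 100 + pattern + randn(480, 640);       % an "exposed" frame
imgClean = img - template;                   % fixed pattern largely removed
std(imgClean(:))                             % ~ read noise level only
```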
Thermal noise: Thermal agitation of electrons in a sensel can liberate a few electrons; these thermal
electrons are indistinguishable from the electrons freed by photon (light) absorption, and thus cause a
distortion of the photon count represented by the raw data. Thermal electrons are freed at a relatively
constant rate per unit time, thus thermal noise increases with exposure time as illustrated in Figure 15.
Another thermal contribution to image degradation is amplifier glow (Figure 16), which is caused by
infrared radiation (heat) emitted by the readout amplifier. For exposures of less than a second or so,
read noise is relatively constant and thermal noise constitutes a negligible contribution to overall
image noise [19].
Pixel response non-uniformity (PRNU):
Not all pixels in a sensor have exactly the same
efficiency in capturing and counting photons;
even if there were no read noise, photon noise,
etc, there would still be a variation in the raw
counts from this non-uniformity in pixel
response, or PRNU. PRNU "noise" grows in
proportion to the exposure level -- different
pixels record differing percentages of the
photons incident upon them, and so the
contribution to the standard deviation of raw
values from PRNU rises in direct proportion to
the exposure level. On the other hand, photon
shot noise grows as the square root of exposure;
and read noise is independent of exposure level.
Thus PRNU is most important at the highest
exposure levels. At lower exposure levels, photon
noise is the dominant contribution until one gets into deep shadows where read noise becomes
important [19].
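
These scaling laws can be summarized in a simple additive noise model; the Matlab sketch below uses
assumed coefficients (the 0.6% PRNU echoes Figure 18) and shows read noise dominating the
shadows, shot noise the midtones, and PRNU the highlights:

```matlab
% Illustrative combined noise model (coefficients assumed).
S         = logspace(0, 4, 100);          % signal [electrons]
readNoise = 4;                            % constant [electrons]
shotNoise = sqrt(S);                      % Poisson: sqrt of signal
prnuNoise = 0.006 * S;                    % grows linearly with signal
total = sqrt(readNoise^2 + shotNoise.^2 + prnuNoise.^2);
loglog(S, total);
xlabel('Signal [e-]'); ylabel('Total noise [e-]');
```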
Figure 14. Pattern noise in a 20D at ISO 800.
Fixed pattern noise can be removed. By making a
template from the average of 16 identical
blackframes and subtracting it from the image
most of the fixed pattern noise is removed. The
residual variable component of pattern noise
consists in this example largely of horizontal
banding noise [19].
Figure 15. Thermal noise in 20D blackframes at
ISO 400. The knee in the data at exposure time
15sec is due to the max pixel raw level reaching
4095 (the maximum possible value on this
camera), indicating that the rise in standard
deviation is largely due to a few outliers in the
distribution [19].
Figure 16. Amplifier glow (lower right) in a 612 sec
exposure of a Canon 20D [19].
Quantization error: When the analog voltage
signal from the sensor is digitized into a raw
value, it is rounded to a nearby integer value.
Due to this rounding off, the raw value misstates
the actual signal by a slight amount; the error
introduced by the digitization is called
quantization error, and is sometimes referred to
as quantization noise. In practice, this is a rather
minor contribution to the noise. Figure 17 shows
the result of quantization on the noise histogram.
Averaging the quantization error over a
uniformly distributed set of input values will yield
an average quantization error of about 0.3 of the
quantization step. Thus, quantization error is
negligible in digital imaging provided the noise
exceeds the quantization step [19].
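
The Matlab sketch below mirrors the experiment in Figure 17 with assumed values: when the noise
width is comparable to the quantization step, rounding adds well under 10% to the overall standard
deviation:

```matlab
% Quantization error on a noisy signal (cf. Figure 17).
signal = 1000 + 8*randn(1, 1e5);     % noise of width ~8 levels
q = 8;                               % coarse quantization step
quantized = q * round(signal / q);
std(signal)                          % ~8.0
std(quantized)                       % larger by well under 10%
```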
In practice the following components of an imaging system can influence the abovementioned noise
components: 1) small physical pixel size (small sensor), which does not allow a sufficient number of
photons to hit each pixel, 2) sensor technology and manufacturing, 3) high ISO speed, 4) long
exposure time, 5) digital processing: sampling and quantization, 6) raw conversion: noise reduction
and sharpening. In pupillometry, the infrared radiation used is not detected efficiently by CMOS/CCD
sensors, which have a peak sensitivity around 550 nm corresponding to the sensitivity of the human
eye. Therefore the camera should have as sensitive a sensor as possible; sensitivity is usually given in
technical specifications as the minimum illuminance needed for image formation (e.g. 0.0006 lux @
F1.4 for the Watec WAT DM2S [20], S/N ratio: 52 dB). Thermal noise can be reduced by cooling the
sensor with a Peltier element (see e.g. the modification in [21], commercial Peltier elements [22], and
a more theoretical paper from Hamamatsu on CCD sensors for estimating the effect of cooling on
dark current [23]), but it should be noted that this cooling can introduce transient-type disturbances
[Juha Peltonen, TKK, pers. comm.] to sensitive measurement devices if, for example, simultaneous
electroencephalography (EEG) recording is done.
2.3 DYNAMIC RANGE
Dynamic range (also called exposure range) is the range of brightness over which a camera responds.
It is usually measured in f-stops. Cameras with a large dynamic range are able to capture shadow
detail and highlight detail at the same time. Practical dynamic range is limited by noise, which tends
to be worst in the darkest regions. Dynamic range can be specified as the total range over which noise
remains under a specified level; the lower the level, the higher the image quality [24]. If the imaged
scene is static (not the case with the pupil) it is possible to use several exposure values for multiple
images and increase the dynamic range with a technique called high dynamic range (HDR)
imaging [25,26].
Figure 19 shows a real-world range of luminances, from 10⁸ cd/m² down to 10⁻⁵ cd/m² [27]. This
wide range cannot be captured with a single exposure using modern commercial digital cameras. The
latest commercial digital cameras, such as the Canon EOS 450D [28], use 14-bit analog-to-digital
conversion, but this is not the actual dynamic range of the camera as there is noise in the sensor
output. In digital imaging, a way to maximize the S/N ratio (signal/noise) is to expose to the right of
the histogram while avoiding the saturation of white [29]. The Rose criterion (named after Albert Rose)
Figure 18. Noise due to pixel response non-
uniformity (PRNU) of a Canon 20D at ISO 100, as
a function of raw value. Fluctuations in the
response from pixel to pixel are about 0.6% [19].
Figure 17. The error introduced by quantization of
a noisy signal is rather small. On the left, noise of
width eight levels; on the right, the quantization
step is increased to eight levels, but the width of
the histogram increases by less than 10%.
states that an SNR of at least 5 is needed to distinguish image features with 100% certainty. An SNR
of less than 5 means less than 100% certainty in identifying image details [30]. Dynamic range is not
a very critical parameter in pupillometry, as the main idea is to isolate the dark pupil from its light
surroundings, and in theory only 2 light levels are needed. In practice some additional reserve is
needed, but typically the image quality is degraded more by excessive noise than by lack of dynamic
range.
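
For reference, an engineering dynamic range can be estimated from full-well capacity and read noise,
as in the Matlab sketch below; both values are assumed, not measured from the camera of this work:

```matlab
% Dynamic range from full-well capacity and read noise.
fullWell  = 40000;                            % [electrons] (assumed)
readNoise = 8;                                % [electrons] (assumed)
DR_stops = log2(fullWell / readNoise)         % ~12.3 f-stops
DR_dB    = 20 * log10(fullWell / readNoise)   % ~74 dB
```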
2.4 OPTICAL ABERRATIONS
In an ideal optical system, all rays of light from a point in the object plane converge to the same point
in the image plane, forming a clear image (ideal Matlab representation in Figure 20). However, since
lens systems are never perfect, they may produce distortions in the image called aberrations. There
are many types of aberrations, categorized into 1st order, 3rd order, 5th order, etc. The most common
types of aberrations are the 3rd order aberrations: chromatic, spherical, coma, field curvature,
distortion, and astigmatism. In addition to those, the physical structure of the optics may cause
vignetting or diffraction of light.
2.4.1 CHROMATIC ABERRATION
The velocity of light changes when it passes through different media. Short wavelengths travel more
slowly in glass than long wavelengths, which causes the colors to disperse. The phenomenon is called
chromatic aberration and can be seen especially clearly in a prism.
Chromatic aberration
includes longitudinal and
Figure 19. Typical range of real-world luminances (cd/m²). If an imaging device needs to capture all
the different shades in a scene with sun and starlight at the same time, it would need a dynamic range
of 13 orders of magnitude (10⁸/10⁻⁵), i.e. about 43 f-stops or 130 dB [27].
Figure 20. An ideal image from a point source with no aberration present. The source code for the
image is presented in Appendix 1. The Matlab code used for this and all subsequent figures includes
the constant values H = 1 and W_ijk = 10.2, and r is a function of x and y. The code also sets
r·cos(θ) equal to x.
Figure 21. Chromatic aberration is caused by a lens having a different
refractive index for different wavelengths of light. (Picture: Tuomas
Sauliala)
lateral aberration along the axes. The phenomenon creates "fringes" of color around the image,
because each color in the optical spectrum cannot be focused at a single common point on the
optical axis [31]. Instead, a series of differently colored focal points are arranged behind each other
on the optical axis, as presented in Figure 21. Chromatic aberration can best be seen in the border
areas of the picture, where the light refracts the most. It is a common error of lenses having long
focal lengths and is therefore considered a problem in telescope design. It is also found disturbing in
luminaire optics.
Chromatic aberration can be corrected by combining a positive (convex) low-dispersion lens with
a negative (concave) high-dispersion lens. Many types of glass have also been developed to reduce
chromatic aberration, most notably glasses containing fluorite [32].
2.4.2 GEOMETRIC ABERRATIONS
Geometric aberrations are monochromatic aberrations that, as the name implies, are characterized by
changes in the shape of the imaged object. In this chapter they are briefly reviewed.
2.4.2.1 Spherical Aberration
Refracted light rays from an ideal, infinitely thin spherical lens meet at the focus of the lens. However,
all spherical lenses in real life have thickness, which results in the over-refraction of the light beams
coming from the edge of the lens, as can be seen in Figure 22. These rays miss the focal point, causing
blurring in the image. This type of error is called spherical aberration, and it is a problem of lenses
that have small F-numbers. It is difficult to correct the error caused by spherical aberration. The
correction may partially be done with a group of two spherical lenses; however, the best result is
achieved with aspheric lenses [33].
2.4.2.2 Coma
Coma is an aberration that occurs when the object is not on the optical axis [34]. The light rays enter
the lens at an angle relative to the axis. This causes the system magnification to vary with the pupil
position. The image is distorted to resemble the shape of a comet, as can be seen in Figure 23.
Coma is a common error in telescope lenses that have long focal lengths. It depends on the shape
of the lens and can therefore be corrected
Figure 22. Spherical aberration is caused by the spherical shape of the lens. (Picture: Tuomas Sauliala)
Figure 23. Matlab representation of coma. Coma causes the image to take on a comet shape because
the object or light source is not on the optical axis. The Matlab code for coma is presented in
Appendix 2 as an example of the source code. The coefficient W131 was set to 20.2 in this
calculation.
Figure 24. Matlab representation of astigmatism.
Astigmatism occurs when the lens is unable to
focus horizontal and vertical lines in the same place.
with a set of lenses of different shapes. The effect of coma can also be reduced by restricting the
height of light rays by adjusting the size and location of the aperture [35].
2.4.2.3 Astigmatism
Like coma, astigmatism also occurs off-axis. The farther off axis the object is, the greater the
astigmatism. The lens is unable to focus horizontal (tangential) and vertical (sagittal) lines in the same
plane [36]. Instead of focusing the rays to a point, they meet in two line segments perpendicular to
each other. These are the sagittal and tangential focal lines. The light rays in these two planes are
imaged at different focal distances. This results in an image with either a sharp horizontal or a sharp
vertical line, as seen in Figure 24.
Astigmatism can be corrected by placing the tangential and sagittal lines on top of each other. If
there is a set of lenses, this can be done by adjusting the distance between them. Another option is
to use different radii of curvature in different planes (e.g. a cylindrical lens).
2.4.2.4 Field curvature
Curvature of field occurs when the image produced by a lens is focused on a curved plane, while the
film plane is flat. Figure 25 demonstrates the phenomenon in practice. Figure 26 is a Matlab
representation of field curvature.

Figure 25. Field curvature is the aberration that makes a planar object look curved in the image.
Adapted from [37].

Figure 26. Field curvature as a Matlab representation.

Various patents have been filed to help correct field curvature in optical systems with one or more
lenses. For example, one patent suggests using a non-planar correction surface shaped such that the
focal points of the focusing elements lie closer to a single plane than with a planar correction surface.
Another patent, designed to remove field curvature in a system of two imaging elements, describes a
fiber-optic element that can provide an input object to the second imaging element with a reversed
field curvature, so that a correct output image can be obtained from the second imaging element [38].
In microscopy, one method that has been used to reduce field curvature is to insert a field stop in
order to remove light rays at the edge [39]. This method unfortunately greatly decreases the
light-collecting power of the lens. It can also increase distortion [40].
2.4.2.5 Pincushion and barrel distortion
Distortion, yet another type of aberration, shifts the image position. The images of lines that pass
directly through the origin appear straight, but the images of any surrounding straight lines appear
curved [41]. There are two types of distortion: pincushion and barrel. In pincushion distortion, the
lines at the edge of the image bow inward. In barrel distortion, the edges bow out (Figure 27).

Figure 27. Barrel distortion on the left and pincushion distortion on the right [42].

Pincushion distortion is most common in telephoto lenses (lenses with a field of view of 25° or less
[43]) and barrel distortion in wide-angle lenses (lenses with a field of view of 65° or more [43]). Both
types of distortion often occur in low-cost wide-angle zoom lenses, because the enlargement ratio
changes when moving from the edge towards the center of the lens. Lenses with fixed focal lengths
can be optimized for certain focal lengths, which enables keeping the enlargement ratio constant
throughout the entire picture area.
Distortion is difficult to correct in zoom lenses, and in practice it is possible only with expensive
aspheric lenses. The aberration can, however, be minimized by a symmetrical lens design, which is
orthoscopic (a ray leaves the lens at the same angle at which it entered) [44]. The aperture has no
effect on distortion, and neither has the position of the diaphragm [33]. However, distortion depends
on the focusing distance. Infinity focus and close focus may yield different amounts of distortion
with the same lens [44].
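
Software correction of the kind discussed in Section 2.5 typically inverts a radial model; the Matlab
sketch below visualizes the common first-order model r → r(1 + k1·r²), where k1 < 0 produces barrel
and k1 > 0 pincushion distortion (the coefficient is an assumed example):

```matlab
% First-order radial distortion applied to a square grid.
[x, y] = meshgrid(linspace(-1, 1, 9));
r  = hypot(x, y);                    % normalized radius per grid point
k1 = -0.2;                           % barrel distortion (assumed)
s  = 1 + k1 * r.^2;
xd = x .* s;                         % distorted grid coordinates
yd = y .* s;
plot(xd, yd, 'b', xd', yd', 'b');    % grid lines bow outward (barrel)
axis equal
```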
2.4.3 VIGNETTING
In some cases the center of the picture area is brighter than the edge of the picture area. This
phenomenon is caused by both the physical structure of the optics (vignetting) and the so-called
cosine law. In vignetting, the physical structure of the lens partially restricts the light from arriving at
the surface where the image is formed. The phenomenon is most common in objectives that have
long focal lengths. Vignetting can be reduced by stopping the lens down, in other words by increasing
the F-number of the lens. Vignetting is also reduced in many crop-sensor cameras [45], as the lenses
are typically designed for full-frame / 35 mm cameras (see Figure 28, right [46]). Sometimes
vignetting is, however, applied to an otherwise un-vignetted photograph for an artistic effect [47]. For
its part, the cosine law is a physical fact that cannot be altered (Figure 28, left). According to the
cosine law, the amount of light arriving at the picture area decreases according to the fourth power of
the cosine of the angle of the incoming light beams [33].

2.4.4 DIFFRACTION

As the lens structures and the sensor cells become smaller, a physical phenomenon called diffraction
becomes increasingly important. Diffraction reveals the dualistic nature of light. Traditional geometric
optics treats light as rays which refract and reflect according to optical principles. However, light also
has a wave nature, which can be seen as a ray of light passes through a small hole.
In diffraction, a light beam passing through a small hole gains rings around itself. The first-order
ring is the strongest and its radius obeys the equation [48]:

$$r = 1.22\,\lambda\,F \qquad (12)$$

where r = radius of the ring
λ = wavelength of light
F = F-number

As can be seen in Figure 29, the intensity of diffraction depends mainly on the lens's F-number. By
stopping down the lens, most of the distortions of the lens can be fixed, but on the other hand, this
augments the diffraction phenomenon. Diffraction as such prevents the manufacture of an ideal lens:
there is no way to completely prevent diffraction from occurring [43].
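
Eq. (12) makes it easy to estimate when diffraction starts to limit a sensor; in the Matlab sketch below
the wavelength and pixel pitch are assumed values:

```matlab
% Airy disk radius from Eq. (12) versus an assumed pixel pitch.
lambda = 550e-9;                  % wavelength of light [m] (assumed)
pitch  = 5e-6;                    % sensor pixel pitch [m] (assumed)
for F = [2.8 8 16]
    r = 1.22 * lambda * F;        % Eq. (12), radius of first ring [m]
    fprintf('f/%-4.1f: Airy radius %.2f um = %.1f pixels\n', ...
            F, r * 1e6, r / pitch);
end
```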
Figure 29. The effect of diffraction on the MTF with different F-numbers. The resolution limit in the
picture refers to the Rayleigh limit for the resolution of the picture [48].
Figure 28. (left) The cosine law is based on the geometry of light's behavior: a ray of light arriving at
a steep angle is spread over a larger area than light arriving perpendicularly. (Picture: Tuomas
Sauliala). (right) Example of a vignetting profile (light falloff) at f/2.8 @ 200 mm for the Nikon AF-S
VR Nikkor 70-200mm F2.8G on a 35 mm full-frame sensor. The red rectangle indicates the falloff on
crop (1.5x) sensors. The first band outside the central area indicates 2/3 stop of falloff, and the
remaining bands are at 1/3 stop intervals [45].
2.5 ABERRATION CORRECTION
As described earlier in this chapter, aberrations can often be corrected even before the lenses are
manufactured. That is done by exploiting calibration information to choose the most suitable material
(based on e.g. optical, chemical, mechanical and thermal characteristics, processability and price) and
lens design for the purpose. Another option is to take action in the phase when the lens system is
built and pay attention to the refraction indices of the components. No matter how well the lens or
the lens system is chosen, there may, however, remain errors that can only be corrected afterwards
with specific software. In photography, for example, aberrations such as barrel and pincushion
distortion cannot be avoided when an image is taken with a wide-angle or telephoto objective.
PTLens [49] is software that corrects lens pincushion and barrel distortion, vignetting, chromatic
aberration and perspective. It can be installed either as a Photoshop plug-in or as a standalone
program. Either way, it only needs the profile information of the camera that the images have been
taken with. The user interface is presented in Figure 30. The menus on the left-hand side are for
correcting the purple fringing caused by chromatic aberration and the darkening of the edges caused
by vignetting. The menu on the right-hand side is for choosing the image file. The Distortion field is
for defining the parameters for correcting distortion.

Correcting chromatic aberration is demonstrated in Figure 31. The purple fringing can be seen in the
window frames in the original image on the left. PTLens works well in removing the lateral
aberration; longitudinal aberration cannot be removed.

Figure 31. Example of chromatic aberration correction with PTLens. On the left original image, on
the right the corrected image [49].
Figure 30. (left) Operating system of PTLens for correcting distortion, vignetting, chromatic aberration
and perspective. (right) Magnification of the menus on left hand side [49].

As can be seen in the left-side image in Figure 32, there is barrel distortion caused by the wide-angle
objective. In the enhanced image on the right the distortion is corrected rather well. However, the
image would still need perspective distortion correction, which is also available in PTLens.



Figure 32. Example of barrel distortion correction with PTLens. On the left original image, on the
right the corrected image [49].

Another piece of software that can be used to correct aberrations is DxO Optics Pro. However, it is
compatible only with the optics of rather expensive digital SLR cameras. Another drawback is that it
does not support the raw format for all cameras; JPEG images are the only images taken with cheaper
cameras that the software is able to analyze. Therefore, it is reasonable to say that DxO Optics Pro is
intended for professional photographers. All in all, DxO Optics Pro is a good program, but it is
expensive (around 100 euros) compared to PTLens, which costs around 10 euros. Photoshop's own
repair tools do not reach their level in image restoration, and PTLens can be used as a Photoshop
plug-in.
A good alternative to graphical software is Matlab, a numerical computing environment and
programming language. Using Matlab for image enhancement requires some knowledge of signal
processing, but there are many scripts available that can easily be modified for one's own purposes.
Reference [50] gives an example of Peter Kovesi's code for distortion correction.







3 APPLIED LENS DESIGN
In this chapter, specific lens designs are presented with regard to machine vision and measurement
science applications, as well as photographic lenses. Special designs are typically neglected in physics
and optics textbooks, and this chapter serves as an introduction to some of the special cases.
3.1 MEASUREMENT SCIENCE & MACHINE VISION
A telecentric lens is a compound lens with an unusual geometric property in how it forms images. The defining property of a telecentric system is the location of the entrance pupil or the exit pupil at infinity (Figure 33). This means that the chief rays (oblique rays which pass through the center of the aperture stop) are parallel to the optical axis in front of or behind the system, respectively [51,52]. If the entrance pupil is at infinity, the lens is object-space telecentric; if the exit pupil is at infinity, the lens is image-space telecentric. Such lenses are used with image sensors that do not tolerate a wide range of angles of incidence: for example, a 3-CCD color beamsplitter prism assembly works best with a telecentric lens, and many digital image sensors have a minimum of color crosstalk and shading problems when used with telecentric lenses. If both pupils are at infinity, the lens is double telecentric. Such lenses are used in machine vision systems to achieve dimensional and geometric invariance of images within a range of different distances from the lens (e.g. Figure 35) and across the whole field of view (e.g. Figure 34).

Because their images have constant magnification and geometry, telecentric lenses are used for determining the precise size of objects independently of their position within the FOV, even when the object distance is subject to some degree of unknown variation. These lenses are also commonly used in optical lithography for forming patterns in semiconductor chips. In pupillometry, a telecentric lens would make the measurement data less sensitive to changes in gaze.
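To see concretely why this matters for pupillometry, consider the thin-lens magnification of a conventional lens, m = f/(u − f), which depends on the object distance u. With a 50 mm lens at the 38 cm working distance used later in this work (illustrative figures only), m = 50/330 ≈ 0.152; if the eye moves 10 mm further away, m = 50/340 ≈ 0.147, i.e. an apparent pupil-size change of about 3% without any real change in the pupil. An object-space telecentric lens removes this error by design.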
Telecentric lenses tend to be larger, heavier, and more expensive than normal lenses of similar focal length and f-number. This is partly due to the extra components needed to achieve telecentricity, and partly because the front element of an object-space telecentric lens (or the rear element of an image-space telecentric lens) must be at least as large as the largest object to be photographed (or the largest image to be formed). These lenses can range in cost from hundreds to thousands of euros, depending on quality. Because of their intended applications, telecentric lenses often have higher resolution and transmit more light than normal photographic lenses [53].

Figure 33. Working principle of different types of lenses [51].

Figure 34. Perspective error due to common optics (left image) and absence of perspective error (right image) with a telecentric lens [51].

Figure 35. a) This image shows different dimensions for pins on a PCB. b) This image, which was taken through a telecentric lens, provides accurate information. Courtesy of Edmund Optics [52].
It is also possible to transform commercially available lenses into telecentric ones by adding an extra aperture (Figure 36), as shown by Watanabe and Nayar (1997) [54]. The authors analytically derived the aperture positions for a variety of off-the-shelf lenses and demonstrated that their approach actually works in eliminating magnification variations (Table 2).



As a summary, telecentric lenses should be used in metrology [51]:

- whenever a thick object (thickness > 1/10 of the FOV diagonal) must be measured
- when different measurements must be carried out on different object planes
- when the object-to-lens distance is not exactly known or cannot be predicted
- when holes must be inspected or measured
- when the profile of a piece must be extracted
- when the image brightness must be almost perfectly even
- when defects can be detected using directional illumination and a directional point of view

In practice, however, some compromises have to be made due to the larger size (an extreme example is shown in Figure 37) and higher cost of telecentric lenses compared to traditional lenses. Acquiring a telecentric lens will definitely make the measurement more accurate, but without further experimentation it is hard to estimate how much better the results would be and whether the improvement would be worth the invested money.

Table 2. Magnification variations for four widely used lenses and their telecentric versions [54].

Figure 36. Telecentric optics achieved by adding an aperture to a conventional lens. This simple modification causes the image magnification to be invariant to the position of the sensor plane, i.e., the focus setting [54].

Figure 37. Extreme example of a large telecentric lens capable of a field of view of over 400 mm (diagonal) [51].
3.2 PHOTOGRAPHY
One special design aspect of photographic lenses is bokeh. Bokeh (derived from the Japanese boke-aji, "blur quality") is a photographic term referring to the appearance of out-of-focus areas in an image (Figure 38) produced by a camera lens using a shallow depth of field. Different lenses produce bokeh of different aesthetic quality in out-of-focus backgrounds, which is often used to reduce distractions and emphasize the primary subject [55]. It is worth noting that, bokeh aside, the existing literature on photographic lenses can be used for understanding machine vision optics as well.
The shape of the aperture has a great influence on the subjective quality of bokeh. When a lens is stopped down to something other than its maximum aperture size (minimum f-number), out-of-focus points are blurred into the polygonal shape of the aperture rather than into perfect circles. This is most apparent when a lens produces undesirable, hard-edged bokeh; therefore some lenses have aperture blades with curved edges to make the aperture more closely approximate a circle rather than a polygon. Lens designers can also increase the number of blades to achieve the same effect. Traditional "portrait" lenses, such as the "fast" 85 mm focal length models for 35 mm cameras, often feature almost circular aperture diaphragms. Mirror (catadioptric) lenses, on the other hand, are known for unpleasant bokeh [56]. A comparison between the bokeh patterns of three 50 mm lenses with different price tags can be seen in Figure 39, with the most expensive one producing the most pleasant and circular bokeh [57].
Bokeh can be simulated by convolving the image with a kernel corresponding to the image of an out-of-focus point source taken with a real camera; diffraction may alter the effective shape of the blur. Some graphics editors have a filter to do this, usually called "Lens Blur", though a Gaussian blur is often used to save time or when realistic bokeh is not required.
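As a minimal sketch of this convolution approach (the file name and the 9-pixel kernel radius are arbitrary choices; a realistic kernel would be measured from an actual out-of-focus point source):

% Synthetic bokeh by convolving with a circular (disk) kernel.
% Requires the Image Processing Toolbox; 'portrait.bmp' is hypothetical.
I   = double(imread('portrait.bmp'))/255;
psf = fspecial('disk', 9);            % idealized out-of-focus point image
J   = imfilter(I, psf, 'replicate');  % blur the frame with the kernel
image(255*J), colormap('gray')

A real "Lens Blur" filter would additionally restrict the convolution to the out-of-focus regions using a depth mask, leaving the subject sharp.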

Figure 38. (left) Picture without bokeh, (right) the same picture with synthetic bokeh [55].

Figure 39. Comparison of three different 50 mm Canon prime lenses: (left) Canon EF 50mm f/1.2 L USM [~1200], (center) Canon EF 50mm f/1.4 USM [~350], and (right) Canon EF 50mm f/1.8 II [~100] [57].
4 CHARACTERIZING OPTICAL PERFORMANCE IN PRACTICE

4.1 PUPILLOMETRY & OVERVIEW OF THE SETUP
The device under characterization was a Unibrain Fire-i monochrome board camera (M12x0.5 lens base, with the possibility to use C-mount lenses with an adapter) [58] with Sony's ICX098BL CCD sensor [59] and a Unibrain 50 mm telephoto lens (view angle 6°, f/2.5, part no. 4370) [58]. The camera was set to automatic exposure and placed in the middle of a Goldmann perimeter with a diameter of 60 cm. The targets were attached one by one to a stand (used for holding the subject's chin in visual field measurements) at a horizontal distance of 38 cm from the image plane of the camera. Focusing was done by hand. The Goldmann perimeter was illuminated with blue LEDs (λmax = 468 nm, half bandwidth = 26 nm) at an illuminance of ~40 lux at the target prints. Ideally, infrared radiation should have been used to illuminate the target prints, but the surface of the prints was so reflective in the infrared that the measurement would have been impossible with them.
4.2 METHODS
Three targets were used for characterizing the camera. They are presented in Figure 41, Figure 43 and Figure 45. The SFR quadrants in Figure 41 were used for defining the MTF and thus the sharpness of the imaging system. The grid in Figure 43 acted as the target for the distortion measurements. The target in Figure 45 was a noise target in accordance with the ISO 15739 standard, and it was also used for defining the dynamic range.

The original targets were vector graphics of 8320 x 6400 pixels. For testing purposes, they were printed on standard dull-coated photo paper (10 x 15 cm, Zoomi Kuvakauppa, Helsinki, Finland) with the target size being 2.1 x 1.6 cm. Each target was recorded on video for 10 seconds at a frame rate of 15 frames per second (fps). From each video, 128 frames were averaged into one for further analysis in comparison to a single frame. The averaged frames are presented in Figure 42, Figure 44, and Figure 46. All measurements were analyzed in Matlab using Imatest [60], a software package for measuring key image quality factors.
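Assuming the frame noise is temporally uncorrelated, averaging N frames reduces the RMS noise by a factor of √N, i.e. σ_avg = σ_single/√N; with N = 128 the expected reduction is therefore about 11-fold.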
Figure 40. (left) Goldmann perimeter in use, lit by 5 mm blue LEDs which are able to provide an even luminance for the inner sphere of the perimeter. (right) Rear of the Goldmann perimeter showing the power control box of the LEDs on the left and the actual pupil camera mounted on the back of the sphere (gray box with the red cord).

Figure 41. Target used for measuring MTF. SFR quadrants adapted from Imatest [60].

Figure 42. Recorded image used to assess the sharpness.

Figure 43. Target used for recording distortion. The grid adapted from Imatest [60].

Figure 44. Recorded image used to define the distortion.

Figure 45. Target used for recording noise and dynamic range. The ISO 15739 chart adapted from Imatest [60].

Figure 46. Recorded image used to define the noise and dynamic range.
4.3 RESULTS OF THE MEASUREMENTS

4.3.1 MODULATION TRANSFER FUNCTION AND SHARPNESS
To define the MTF of the pupil camera, a region of interest (ROI) of 33 x 87 pixels was chosen from the averaged frame in Figure 42. The ROI is presented on the right in the screenshot of the Imatest SFR output in Figure 47.


Figure 47. Imatest SFR output for defining the modulation transfer function. The MTF response plots show that the black and white edges in the image are not very sharp.
The screenshot consists of two response plots, one in the spatial domain and one in the frequency domain. They convey similar information in different forms: a narrow edge in the spatial domain corresponds to a broad spectrum in the frequency domain, and vice versa.
As can be seen from the spatial domain plot (upper left), the edge rises from 10% to 90% of its final value in 3.82 pixels. The edge is thus rather broad, meaning that it appears blurry in the image. The spatial frequency response plot (lower left) confirms that the camera does not offer images of good quality when it comes to sharpness. According to the Imatest tutorials [61], the best indicators of image sharpness are the spatial frequencies where the MTF is 50% of its low-frequency value (MTF50) or 50% of its peak value (MTF50P). This is because (1) image contrast is then half its low-frequency or peak value, so detail is still quite visible; (2) the eye is relatively insensitive to detail at spatial frequencies where the MTF is low (10% or less); and (3) the response of virtually all cameras falls off rapidly in the vicinity of MTF50 and MTF50P. Imatest also states that for images to look good, the MTF50 should be over 0.3 cycles/pixel (Cy/Pxl). As can be seen in Figure 47, the MTF50 for the camera under testing is 0.126 Cy/Pxl, so it does not fulfil this criterion. Not even the standardized sharpening, marked with the red, bold, dashed line in the plots, is enough to improve the MTF response to a good level. However, since the MTF is high below the Nyquist frequency and low at and above it, the response should be free of aliasing problems.
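For reference, the core of such an edge-based MTF measurement can be sketched in a few lines of Matlab. This is a simplification of what Imatest does (a full ISO 12233 slanted-edge analysis also projects and bins the pixels along the edge slant to obtain an oversampled edge profile), and the ROI file name is hypothetical.

% Simplified edge-based MTF estimate from a crop containing a
% near-vertical black/white edge ('edge_roi.bmp' is hypothetical).
roi = double(imread('edge_roi.bmp'))/255;
esf = mean(roi, 1);                      % edge spread function (row average)
lsf = diff(esf);                         % line spread function
n   = numel(lsf);
win = 0.5 - 0.5*cos(2*pi*(0:n-1)/(n-1)); % Hann window against ripple
lsf = lsf .* win;
mtf = abs(fft(lsf));
mtf = mtf(1:floor(n/2)) / mtf(1);        % normalize to the DC component
f   = (0:numel(mtf)-1) / n;              % spatial frequency in cycles/pixel
plot(f, mtf), xlabel('Cy/Pxl'), ylabel('MTF')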
It is worth noting that the horizontal and vertical resolutions can differ for CCD sensors and should have been measured separately. Within this work it was, however, not considered relevant to repeat the evaluation for horizontal sharpness. It is also possible that the MTF varies over the lens area, meaning that a value measured at the center of the lens may be better than one measured at the edge. For an all-inclusive analysis, the MTF should be measured more thoroughly.
4.3.2 GEOMETRIC ABERRATIONS
The distortion output is shown in Figure 48. The corrected vertical lines are drawn in deep magenta and the horizontal lines in blue. There is hardly any visible difference between them and the gray lines, meaning that the distortion is very small. The negative value of SMIA TV Distortion, displayed below the image on the left, refers to a small barrel distortion according to the following definition:

SMIA TV Distortion > 0: pincushion distortion
SMIA TV Distortion < 0: barrel distortion

The distortion is calculated according to the following formula:

SMIA TV Distortion = 100 × ((A1 + A2)/2 − B) / B    (13)

where A1, A2 and B refer to the geometry in Figure 49.



Imatest also reports the coefficient k1 of a third-order radial distortion model (r_distorted ≈ r_undistorted × (1 + k1·r²)). The fact that |k1| < 0.01 tells that the distortion is insignificant and does not need to be corrected.

Figure 48. Imatest Distortion output for assessing the distortion. The results show a small amount of barrel distortion in the image, but it is insignificant and difficult to see.

Figure 49. Illustration of SMIA TV Distortion. SMIA = Standard Mobile Imaging Architecture.

4.3.3 DYNAMIC RANGE
Imatest Stepchart [62] was used for analyzing the dynamic range. To perform the analysis, it was fundamental to understand the term f-stop (also known as zone or exposure value (EV)) defined in Chapter 2.1.2. The camera was set to automatic exposure mode (exposure range 1/3400–1/31 s). In Figure 50 the upper left plot shows the density response marked with gray squares, as well as the first and second order fits marked with dashed blue and green lines. The dynamic range is grayed out because the printed target has too small a dynamic range to measure the camera's total dynamic range.
Figure 50. Stepchart produces detailed results about the dynamic range and noise.
Imatest calculates the dynamic range for several maximum noise levels, from RMS noise = 0.1 f-stop (high image quality) to 1 f-stop (relatively low quality). The upper right box contains the dynamic range results, i.e., the total dynamic range and the range for several quality levels, based on luminance noise. The total dynamic range is 6.31 f-stops, and a medium-high quality image can be achieved over a range of 3.81 f-stops out of this total. When a high quality image is required (maximum noise = 0.1 f-stops), the dynamic range is reduced to 2.44 f-stops, indicated by the yellow line in the middle plot. These results mean that the practical dynamic range is limited by noise.
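Since each f-stop represents a doubling of luminance, these figures can also be read as contrast ratios: the total dynamic range of 6.31 f-stops corresponds to a luminance ratio of 2^6.31 ≈ 80:1, while the high-quality range of 2.44 f-stops corresponds to only about 2^2.44 ≈ 5:1.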
4.3.4 NOISE
The Stepchart output in Figure 50 was also used for analyzing the noise. The middle plot shows the RMS noise in f-stops (i.e. noise scaled to the difference in pixel levels between f-stops), which increases as brightness decreases: in general, the darkest levels (large negative values of log exposure) have the highest f-stop noise. In the lower left plot the noise is scaled to the difference in pixel levels between the maximum density level and the patch corresponding to a density of 1.5. For this camera, the pixel difference is 79.34, so noise measured in pixels can be calculated by multiplying the percentage noise by 79.34. The lower plot also contains the single number used to characterize the overall noise performance: the average luminance-channel noise (Y = 1.88%) is fairly high, which corresponds to poor image quality.
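As a worked example of this scaling, the average luminance noise of 1.88% corresponds to 0.0188 × 79.34 ≈ 1.5 pixel levels of RMS noise.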
A more analytical way to characterize the noise in the image is to determine how much the pixel value varies within each tone of the stepchart. For this, the variance of the pixel values was calculated with Matlab. As the code in Appendix 5 shows, the 315 pixel values of row 255 were chosen as the source data; their variance was 0.0069.
The lower right plot shows the noise spectrum. The descending spectrum indicates that neighboring pixels are correlated and that the spectrum and the image are the result of blurring (also called smoothing or low-pass filtering). In general, noise reduction performed by the camera is a likely cause for this kind of non-white noise spectrum. However, no data was available about the camera's internal post-processing.
4.4 IMAGE RESTORATION

4.4.1 SHARPNESS
The test showed that the pupil camera did not record very sharp images, which is mostly due to the moderate quality of the lens. Hence, one option to improve the sharpness would be to change the lens to one that can maintain a good MTF at the required focusing distance. If it is enough to enhance the images instead of improving the camera system, the details can be boosted by setting a threshold for the pixel levels or by sharpening. However, improving the MTF by sharpening the image can easily result in increased noise.
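As a minimal sketch of such sharpening (classic unsharp masking; the file name and the kernel parameter are hypothetical starting values, and the Image Processing Toolbox is assumed):

% Unsharp masking; note that the filter amplifies noise along with edges.
I = double(imread('averaged_frame.bmp'))/255;
h = fspecial('unsharp', 0.5);          % 3x3 unsharp-masking kernel
J = imfilter(I, h, 'replicate');       % sharpened frame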
4.4.2 GEOMETRIC ABERRATIONS
According to the test results there was no need for aberration correction, because the error caused by barrel distortion was insignificantly small. Larger errors could be removed from the images with PTLens or another post-processing program, as described in Chapter 2.5. Another option would be to focus the camera carefully and to avoid cheap wide-angle and telephoto objectives when choosing the optics.
4.4.3 DYNAMIC RANGE
As mentioned before, the results did not give the whole picture of the pupil camera's dynamic range, because the dynamic range of the target was not wide enough (printed media has a maximum
dynamic range of a little over 6 f-stops). Hence, the real range would have been somewhat larger. More light could also have been used to increase the differences between light and dark. For comparison, the dynamic range of a Canon EOS-10D at ISO speed 400 has been measured to be 8.47 f-stops. Comparing the dynamic range of a digital still camera and a video camera recording the pupil for scientific purposes is, however, not very meaningful or even possible. For example, in still imaging it is possible to take several photos with different exposure levels and in that way capture the full dynamic range (a method called high dynamic range imaging (HDRI) in image processing, computer graphics and photography [63]). In videography the exposure cannot be altered between frames, so widening the range has to be done in some other way.
In still imaging it may become important to limit the amount of light entering the camera so that the brightness does not exceed the dynamic range of the camera; otherwise highlight details can burn out and be lost. In pupillary measurements, for one, the light levels are not so high that the camera would need to be prepared for this. And in case the study setup is especially brightly lit, it is still possible to limit the light afterwards, frame by frame in Photoshop CS2 or for the whole sequence in a video editing program.
Another thing related to high illuminances is the ISO speed, that is, how fast the sensor responds to light. In still imaging, lowering the ISO speed is used for maximizing the dynamic range in bright ambient light, without the camera needing to be mounted on a tripod to get a good depth of field [64]. In pupil measurements the camera is in a fixed position in any case, so it is not necessary to have special adjustment mechanisms in the video camera to capture the maximum range of tones between light and dark areas.
In practice, camera manufacturers are faced with a classic tradeoff: contrast versus dynamic range. Most images look best with enhanced contrast, but if contrast is increased, dynamic range suffers; dynamic range can be increased by decreasing contrast, but then images tend to look flat. In pupil size measurements the most essential thing is to have a clear contrast between the black pupil and the grayish iris so that the pupil can be distinguished from its background. That is why dynamic range needs only a little attention in the evaluation of the imaging system.
4.4.4 NOISE
Noise is difficult to quantify because it has a spectral distribution instead of being just a simple number. The most essential single source of noise in this test was most likely the CCD. Because the number of photons detected by the CCD varies, the CCD always produces some random noise in the image. This noise is often called shot noise and it has a white-noise appearance. In addition to the shot noise, the CCD can also cause so-called thermal noise, which emerges from the heating of the cell and increases with exposure time. In a 10-second test the temperature increase was hardly a great source of noise; however, in recordings of several minutes or hours the CCD would require cooling with a fan or a heat sink [65].
As was seen in the test results, the amount of noise was highest at low light levels. The shape of the noise was affected by the noise reduction software built into the camera, which low-pass filters the noise, reducing its high-frequency components. The camera's noise reduction operates with a threshold that prevents portions of the image near contrast boundaries from blurring. This technique works well, but it can obscure low-contrast details and result in the loss of high spatial frequencies. To counteract this, sharpening or unsharp masking is often applied to the image. Sharpening can, however, increase the amount of noise, creating a vicious circle of noise reduction and reintroduction. That is why most of the noise removal is, and should be, done by post-processing the image [66].
There is a wide range of software that can remove noise. Michael Almond has evaluated 22 such programs and found PictureCode Noise Ninja to be the best noise reduction tool currently available [67]. However, he praised in particular the program's ability to retain color saturation, which is not very relevant from the perspective of the black-and-white pupil camera under testing. Applying different noise-removal filters is also fairly easy in Matlab, as explained by Peter Kovesi [68].
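As a minimal sketch, two common filters are shown below (Image Processing Toolbox assumed; the file name and filter sizes are hypothetical starting values): a median filter for impulse-like noise, and an adaptive Wiener filter, which adapts to the local variance and suits Gaussian-like sensor noise.

% Two basic spatial noise filters.
I  = double(imread('pupil_frame.bmp'))/255;
J1 = medfilt2(I, [3 3]);   % 3x3 median filter (impulse noise)
J2 = wiener2(I, [5 5]);    % adaptive Wiener filter (Gaussian-like noise)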
In conclusion, it should be noted that the evaluation of the noise in this test covered only spatial noise; temporal noise was left unanalyzed.
4.5 CONCLUSIONS
The goal of this work was to characterize the optical performance of a pupillometric camera used in research at the Lighting Unit of Helsinki University of Technology (TKK). The approach was similar to that of Iacoviello and Lucchetti (2005) [69], who used a similar checkerboard target image (Figure 51) to characterize the noise and sharpness (blur identification). Additionally, those authors used a synthetic benchmark image (Figure 52) in order to simulate the pupillometric setup as well as possible. This was not done by us due to time constraints and due to the belief that it would have brought only a little extra information about the performance of our setup.
Compared to more commercial cameras available, the Unibrain did not prove to take images of very good quality. Table 1 presents the results of similar image quality measurements conducted previously by one of the authors (Rautkylä, 2006 [70]). As can be seen, a system camera (Canon EOS 300D), a compact camera (Canon PowerShot A510) and even a cell phone camera (Nokia N90) took more than ten times sharper images when the MTF50 value was used as the indicator of sharpness. The dynamic range was only 6 f-stops with the Unibrain, compared to 13, 10 and 11 f-stops for the other cameras. There was also up to over 100 times more noise present in the test image. The only category in which the Unibrain gained better results than the other cameras was the amount of distortion, because the Unibrain used a narrow-angle telephoto lens.

Table 1. Comparison of the Unibrain to more commercial digital cameras [70].

                          Unibrain   Canon EOS 300D   Canon PowerShot A510   Nokia N90
MTF50                     61         1600             1164                   676
Distortion (%)            0.01       3.52             2.67                   3.61
Dynamic range (f-stops)   6          13               10                     11
Noise (variance)          0.0069     0.000060         0.000044               0.000370

However, no comparison data was available for video cameras. It is therefore important to consider that recording video instead of taking still images can also be beneficial to the image quality: video offers more data for analysis and allows time-dependent errors to be filtered out. On the other hand, digital video cameras are not technically as good and as adjustable as digital still cameras.
Figure 51. Identification of the parameters of degradation for a 2 × 2 checkerboard target image shot by the CCD camera: (A) the image; (B) horizontal line: white-to-black blur identification; (C) horizontal line: black-to-white blur identification [69].

Figure 52. Identification of the blur by means of a benchmark image: (A) target image shot by the CCD camera under the same conditions; (B) a line of (A) plotted together with the estimated blurred step [69].
In addition, it should be noted that in the end the quality of the measurement depends on the ability to detect edges in the image, namely the edge between the pupil and the iris. For the pupil detection program to run optimally, the hardware should provide images with intensity gradients that are as noise-free and sharp as possible. This has not always been the case in our previous recordings [71], as can be seen in the center of Figure 53, where the left part of the plot is rather flat (corresponding to the upper portion of the pupil), making edge detection rather difficult. However, that particular problem was due to the improper placement of the infrared lighting, which caused shading of the pupil by the eyelashes.

Figure 53. Intensity profiles (1D) of the eye image along one fixed y-coordinate (green line) and x-coordinate (red line) [71].
5 DISCUSSION
In this work we reviewed the basic optics related to general imaging and machine vision metrology, and concentrated on characterizing the pupillometric camera used in research. The whole pupillometric setup can be thought of as comprising the illumination, the camera (the imaging part) and the software algorithm that extracts the pupil size from the images. In this work the illumination and the algorithm were excluded from the analysis, even though it should be noted that they are as essential as the imaging part for good quality measurements (or even more essential). Those who are interested in the other two aspects of pupillometry can read a paper on automated pupillometry by the other author (Teikari, 2007 [71]).
The results from the characterization were more or less what had been expected: 1) geometric distortions were practically negligible, as could have been expected from a fixed focus lens; 2) the sharpness of the lens was not on par with commercial digital cameras nor with expensive machine vision cameras, but despite this the sharpness can be considered sufficient for our use; and 3) the amount of noise was rather large, which was also to be expected from a cheap (~100 euros) monochrome digital board camera; in practice, noise removal algorithms need to be applied in the pupil detection program, with increased computational cost and reduced sharpness as a result.
In practice the situation could be improved significantly by investing more money in a better camera with a low-noise, more sensitive sensor (approximate cost 500 euros) and in a proper telecentric lens (approximate cost 500 euros, reviewed in Chapter 3.1). The only possible upgrade to the existing hardware would be to add Peltier cooling to reduce the amount of thermal noise in the sensor, which could be attained with an investment of under 100 euros, but that could in turn introduce moisture condensation problems and transient noise to other measurement devices, as discussed in Chapter 2.2.
However, it should be noted that ultimately the best way to improve the image quality needed for pupillometry is to provide proper infrared (IR) illumination that illuminates the pupil from all directions. Our pupil detection program uses the dark-pupil method, where the eye is illuminated indirectly so that the light is only reflected from the iris and not from the back of the pupil, which lies deeper in the eye. In practice, this separation of iris and pupil is by far easiest to achieve with illumination, rather than forcing the pupil detection algorithm to rely on noisy and edgeless images deteriorated by single-source IR lighting. After fixing the illumination, the next investment would be a telecentric lens, which would slightly reduce the variations in the measurement data caused by changes in head position and gaze, which are inevitable even though the subjects are instructed to stay put and keep their gaze fixated; this type of geometric error cannot be corrected with software after the recording. Thirdly, one could invest in a proper camera that has a good sensitivity in the infrared region (~800-900 nm) and in low-light conditions, in order to improve the S/N ratio through lower sensor noise.

APPENDICES

APPENDIX 1: THE MATLAB CODE FOR AN IDEAL IMAGE

% This script simulates the image of a point source formed by a circular
% aperture: a complex pupil function is built pixel by pixel and its
% Fourier transform gives the point-spread image.
N = 200;              % grid size in pixels
R = 4*sqrt(N/pi);     % aperture radius
matrix = zeros(N, N); % preallocate the pupil function
for x = 1:N
    for y = 1:N
        xnew = x - N/2 - 1;   % coordinates centered on the grid
        ynew = y - N/2 - 1;
        r = sqrt(xnew^2 + ynew^2);
        xgraph(x) = xnew;
        ygraph(y) = ynew;
        if r < R
            matrix(x,y) = 1;      % inside the aperture
        elseif r == R
            matrix(x,y) = 0.5;    % edge of the aperture
        else
            matrix(x,y) = 0;      % outside the aperture
        end
        % Weak complex modulation across the aperture
        matrix(x,y) = matrix(x,y)*(1 + 1i*10.2*(r/R)^2*xnew/R);
    end
end
fftmatrix = fftn(matrix);                % Fourier transform of the pupil
newmatrix = abs(fftshift(fftmatrix));    % center the zero frequency
maxim = max(max(newmatrix));
newmatrix = 255*newmatrix/maxim;         % scale to the 8-bit range
image(xgraph, ygraph, newmatrix)
colormap('gray')



APPENDIX 2: THE MATLAB CODE FOR AN IMAGE WITH COMA

% Same simulation as in Appendix 1, but the pupil function is multiplied
% by a complex-exponential phase term, producing a coma-like aberration.
N = 200;
R = 4*sqrt(N/pi);
matrix = zeros(N, N);
for x = 1:N
    for y = 1:N
        xnew = x - N/2 - 1;
        ynew = y - N/2 - 1;
        r = sqrt(xnew^2 + ynew^2);
        xgraph(x) = xnew;
        ygraph(y) = ynew;
        if r < R
            matrix(x,y) = 1;
        elseif r == R
            matrix(x,y) = 0.5;
        else
            matrix(x,y) = 0;
        end
        % Coma-like phase aberration across the aperture
        matrix(x,y) = matrix(x,y)*exp(1i*20.2*(r/R)^2*xnew/R);
    end
end
fftmatrix = fftn(matrix);
newmatrix = abs(fftshift(fftmatrix));
maxim = max(max(newmatrix));
newmatrix = 255*newmatrix/maxim;
image(xgraph, ygraph, newmatrix)
colormap('gray')

APPENDIX 3: THE MATLAB CODE FOR AVERAGING MONOCHROME IMAGES
function noiseRemovalByAveraging % (petteri.teikari@gmail.com), 2007
codePath = cd; % store the path containing the m-file

% Ask the user to choose the image file(s)
[fileNames, pathname] = uigetfile( ...
    { '*.bmp','Bitmap (*.bmp)'; ...
      '*.jpg','JPEG-file (*.jpg)'; ...
      '*.tif','TIFF-file (*.tif)'; ...
      '*.*', 'All Files (*.*)'}, ...
    'Pick a file', ...
    'MultiSelect', 'on');
if ischar(fileNames) % a single selection is returned as a string
    fileNames = {fileNames};
end

% Sum the intensity values of all chosen images for every pixel
for indx = 1:length(fileNames)
    % Change the active path to the folder containing the files
    cd(pathname)
    % Read the current file
    I = imread(fileNames{indx});
    readingData = sprintf('%s%s', 'Reading data from file: ', fileNames{indx});
    disp(readingData)
    % Convert an RGB input image to grayscale
    if size(I, 3) == 3
        I = rgb2gray(I);
    end
    I = double(I);
    if indx == 1
        % Preallocate I_out to improve memory handling
        sizeInputImage = size(I);
        I_out = zeros(sizeInputImage(1,1), sizeInputImage(1,2));
    end
    I_out = I_out + I;
end

% Average the intensity values
I_out = I_out/length(fileNames);

% Calculate the image quality parameters (last frame vs. the average)
cd(codePath); [snr, psnr, imfid, mse] = calculateImageQuality(I, I_out);
cd(pathname)

% Conversion from double to uint8
% NOTE: SOME INFORMATION COULD BE LOST HERE WITH I_out (due to round-off errors)
I_out = uint8(I_out);
I = uint8(I);

% Definition of the output file
fileNameOut = sprintf('%s%d%s', 'ImAveraged_', length(fileNames), '.bmp');

% Write the processed image to the hard drive
imwrite(I_out, fileNameOut, 'bmp');

APPENDIX 4: THE MATLAB CODE FOR IMAGE QUALITY CALCULATION
function [snr, psnr, imfid, mse] = calculateImageQuality(I, I_out)

% Signal quality calculations: SNR, PSNR, Image Fidelity, MSE
% Adapted from:
% http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=8791&objectType=file
% I and I_out are expected to be double-valued matrices of equal size.

md = (I - I_out).^2;     % squared pixel-wise differences
summation = sum(md(:));  % total squared error
sumsq = sum(I(:).^2);    % total signal energy

% Signal to Noise Ratio (SNR) in decibels
snr = 10 * log10(sumsq/summation);

% Peak Signal to Noise Ratio (PSNR) in decibels
psnr = 10 * log10(numel(I) * max(I(:).^2) / summation);

% Image Fidelity (equals 1 for identical images)
imfid = 1 - summation/sumsq;

% Mean Square Error (MSE)
mse = mean(md(:));


APPENDIX 5: THE MATLAB CODE FOR CALCULATING THE VARIANCE OF ONE ROW IN THE IMAGE

% x is the filename of the recorded stepchart image
img = double(imread(x))/255;  % scale the 8-bit data to [0,1]
img_row = img(255, :);        % the pixel values of row 255
variance = var(img_row)


6 REFERENCES

1 Anon. Wikipedia.org. Lens (optics). Available from: http://en.wikipedia.org/wiki/Lens_%28optics%29 [Accessed 08 August 2008]
2 Anon. Wikipedia.org. Lens mount. Available from: http://en.wikipedia.org/wiki/Lens_mount [Accessed 08 August 2008]
3 Anon. ImagingSource. Products: Optics: Lenses. Available from: http://www.theimagingsource.com/en/products/optics/lenses/ [Accessed 08 August 2008]
4 Anon. Edmund Optics. Understanding Video and Imaging Equipment. Available from: http://www.edmundoptics.com/techSupport/DisplayArticle.cfm?articleid=263 [Accessed 11 August 2008]
5 Anon. Wikipedia.org. Focal length. Available from: http://en.wikipedia.org/wiki/Focal_length [Accessed 08 August 2008]
6 Anon. Wikipedia.org. Angle of view. Available from: http://en.wikipedia.org/wiki/Angle_of_view [Accessed 08 August 2008]
7 Anon. Wikipedia.org. F-number. Available from: http://en.wikipedia.org/wiki/F-number [Accessed 08 August 2008]
8 Hecht, E. 1987, Optics, 2nd ed. Addison Wesley. ISBN 0-201-11609-X. Sect. 5.7.1.
9 Anon. Wikipedia.org. Cardinal point (optics). Available from: http://en.wikipedia.org/wiki/Focal_plane#Focal_points_and_planes [Accessed 08 August 2008]
10 Young, H.D. & Freedman, R.A., University Physics, 10th ed. Addison-Wesley, p. 1513.
11 Anon. Wikipedia.org. Macro photography. Available from: http://en.wikipedia.org/wiki/Macro_photography [Accessed 08 August 2008]
12 Anon. Wikipedia.org. Depth of field. Available from: http://en.wikipedia.org/wiki/Depth_of_field [Accessed 09 August 2008]
13 Anon. Wikipedia.org. Hyperfocal distance. Available from: http://en.wikipedia.org/wiki/Hyperfocal_distance [Accessed 09 August 2008]
14 Vilva, V. Canon EOS-350D with the incredible Helios 40-2 1.5/85mm. Available from: http://galactinus.net/vilva/retro/eos350d_helios.html [Accessed 09 August 2008]
15 Anon. EPFL, Biomedical Imaging Group. Extended Depth of Field plugin for ImageJ. Available from: http://bigwww.epfl.ch/demo/edf/demo_3.html [Accessed 09 August 2008]
16 Anon. Wikipedia.org. Optical Transfer Function (OTF). Available from: http://en.wikipedia.org/wiki/Optical_transfer_function [Accessed 09 August 2008]
17 Koren, N. Introduction to resolution and MTF curves. Available from: http://www.normankoren.com/Tutorials/MTF.html [Accessed 03 July 2008]
18 Anon. Photozone Lens Test FAQ. Available from: http://www.photozone.de/Reviews/lens-test-faq [Accessed 09 August 2008]
19 Martinec, E. 2008, Noise, Dynamic Range and Bit Depth in Digital SLRs. Available from: http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/index.html [Accessed 09 August 2008]
20 Anon. Watec WAT-902DM2S Datasheet. Available from: http://www.aegis-elec.com/products/WAT-902DM2sDM2902DM3sDM3.htm [Accessed 11 August 2008]
21 Ryczek, P., Weiller, S., Orion83. Cooling a QuickCam. Available from: http://www.astrocam.org/cooling.htm [Accessed 11 August 2008]
22 Anon. Melcor Thermoelectric cooling solutions. Available from: http://www.melcor.com/index_melcor.html [Accessed 11 August 2008]
23 Anon. Hamamatsu: Characteristics and use of FFT-CCD area image sensor. Available from: http://sales.hamamatsu.com/assets/applications/SSD/fft_ccd_kmpd9002e06.pdf [Accessed 11 August 2008]
24 Bockaert, V. Dynamic range. Digital Photography Review. Available from: http://www.dpreview.com/learn/?/Glossary/Digital_Imaging/dynamic_range_01.htm [Accessed 24 July 2008]
25 Anon. Wikipedia.org. High dynamic range imaging. Available from: http://en.wikipedia.org/wiki/High_dynamic_range_imaging [Accessed 11 August 2008]
26 Debevec, P.E. & Malik, J. 1997, Recovering high dynamic range radiance maps from photographs. Pp. 369-378 of: Proceedings of the 24th annual conference on Computer graphics and interactive techniques. ACM Press/Addison-Wesley Publishing Co. Available from: http://www.debevec.org/Research/HDR/ [Accessed 11 August 2008]
27 Anon. HDRsoft. FAQ - HDR images for Photography. Available from: http://www.hdrsoft.com/resources/dri.html [Accessed 11 August 2008]
28 Anon. DPReview.com. Canon EOS 450D / Digital Rebel XSi: Review. Available from: http://www.dpreview.com/news/0801/08012403canoneos450d.asp [Accessed 11 August 2008]
29 Anon. Luminous Landscape. Maximizing S/N Ratio in Digital Photography. Available from: http://www.luminous-landscape.com/tutorials/expose-right.shtml [Accessed 11 August 2008]
30 Bushberg, J.T., et al. 2006, The Essential Physics of Medical Imaging, 2nd ed. Philadelphia: Lippincott Williams & Wilkins, p. 280.
31 Brandt, H.M. 1968, The Photographic Lens. London: The Focal Press. ISBN-13: 9780240506463. 260 p.
32 Anon. Canon. Lenses: Fluorite, aspherical and UD lenses. Available from: http://cpn.canon-europe.com/content/infobank/lenses/fluorite_aspherical_and_ud_lenses.do [Accessed 14 July 2008]
33 Anon. Canon. EF Lens Work III, 8th ed. Available from: http://www.canoneurope.com/Support/Documents/digital_slr_educational_tools/en/ef_lens_work_iii_en.asp [Accessed 14 July 2008]
34 Lipson, S.G. 1995, Optical Physics, 3rd ed. Cambridge University Press. ISBN 0-5214-3631-1. 497 p.
35 O'Shea, D.C. 1985, Elements of Modern Optical Design. USA: John Wiley & Sons. ISBN: 978-0-471-07796-1. 416 p.
36 Anon. 1995, Photographics Super Course of Photography: Photographic Lenses. Petersen's Photographic, vol. 24, pp. 63-79.
37 Nave, C.R. 2000, Distortion and Curvature of Field. Georgia State University. Available from: http://hyperphysics.phy-astr.gsu.edu/Hbase/geoopt/aber3.html [Accessed 15 July 2008]
38 Anon. Patents no. 5125064 and 7332733. Available from: http://www.freepatentsonline.com [Accessed 15 July 2008]
39 Anon. Lens aberrations: Field Curvature. University of California, Berkeley. Available from: http://microscopy.berkeley.edu/Resources/aberrations/curvature.html [Accessed 15 July 2008]
40 Jenkins, F.A. and White, H.E. 1976, Fundamentals of Optics, 4th ed. New York: McGraw-Hill. ISBN-13: 978-0070323308. 746 p.
41 Price, W.H. 1976, The Photographic Lens. Scientific American, vol. 72, pp. 72-83.
42 Anon. Wikipedia.org. Image distortion. Available from: http://en.wikipedia.org/wiki/Image_distortion [Accessed 11 August 2008]
43 Nakamura, J. and Koyama, T. 2005, Image Sensors and Signal Processing for Digital Still Cameras. Florida: CRC Press. ISBN-13: 978-0849335457. 336 p.
44 van Walree, P. Distortion. Available from: http://www.vanwalree.com/optics/distortion.html [Accessed 15 July 2008]
45 Anon. Cambridge in Colour. Digital Camera Sensor Sizes: How it influences your photography? Available from: http://www.cambridgeincolour.com/tutorials/digital-camera-sensor-size.htm [Accessed 13 August 2008]
46 Anon. DPReview.com. Nikon AF-S VR Nikkor 70-200mm F2.8G Lens Review. Available from: http://www.dpreview.com/lensreviews/nikon_70-200_2p8_vr_n15/page5.asp [Accessed 13 August 2008]
47 van Walree, P. Vignetting. Available from: http://www.vanwalree.com/optics/vignetting.html [Accessed 15 July 2008]
48 Nakamura, J. and Koyama, T. 2005, Image Sensors and Signal Processing for Digital Still Cameras. Florida: CRC Press. ISBN-13: 978-0849335457. 336 p.
49 Niemann, T. PTLens v. 8.7.2. Available from: http://epaperpress.com/ptlens/ [Accessed 11 August 2008]
50 Kovesi, P. MATLAB and Octave Functions for Computer Vision and Image Processing. Available from: http://www.csse.uwa.edu.au/~pk/Research/MatlabFns/ [Accessed 16 July 2008]
51 Anon. Opto-Engineering. Tutorial for telecentric lenses. Available from: http://www.opto-engineering.com/brochure/Telecentric_Lenses.pdf [Accessed 04 July 2008]
52 Titus, J. 2007, Telecentric lenses measure up. Test & Measurement World, 9/1/2007. Available from: http://www.tmworld.com/article/CA6473121.html [Accessed 04 July 2008]
53 Anon. Wikipedia.org. Telecentric lens. Available from: http://en.wikipedia.org/wiki/Telecentric_lens [Accessed 09 August 2008]
54 Watanabe, M. & Nayar, S. 1997, Telecentric optics for focus analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 12, pp. 1360-1365. http://dx.doi.org/10.1109/34.643894
55 Anon. Wikipedia.org. Bokeh. Available from: http://en.wikipedia.org/wiki/Bokeh [Accessed 04 July 2008]
56 Anon. Wikipedia.org. Catadioptric system. Available from: http://en.wikipedia.org/wiki/Catadioptric_lens [Accessed 04 July 2008]
57 Anon. The-Digital-Picture.com. Canon EF 50mm f/1.4 USM Lens Review. Available from: http://www.the-digital-picture.com/reviews/Canon-EF-50mm-f-1.4-USM-Lens-Review.aspx [Accessed 09 August 2008]
58 Anon. Unibrain Fire-i firewire digital board camera. Available from: http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm [Accessed 11 August 2008]
59 Anon. Monochrome CCD Sony ICX-098BQ specification. Available from: http://www.unibrain.com/download/pdfs/Fire-i_Board_Cams/ICX098BL.pdf [Accessed 11 August 2008]
60 Anon. Imatest - Digital Image Quality Testing. Available from: http://www.imatest.com/ [Accessed 11 August 2008]
61 Anon. Imatest - SFR Results: MTF (Sharpness) plot. Available from: http://www.imatest.com/docs/sfr_MTFplot.html [Accessed 13 August 2008]
62 Anon. Imatest - Stepchart. Available from: http://www.imatest.com/docs/tour_q13.html [Accessed 11 August 2008]
63 McHugh, S.T. HDR: High Dynamic Range Photography. Cambridge in Colour. Available from: http://www.cambridgeincolour.com/tutorials/high-dynamic-range.htm [Accessed 24 July 2008]
64 McHugh, S.T. Dynamic Range in Digital Photography. Cambridge in Colour. Available from: http://www.cambridgeincolour.com/tutorials/dynamic-range.htm [Accessed 24 July 2008]
65 McFee, C. Noise sources in a CCD. Mullard Space Science Laboratory. Available from: http://www.mssl.ucl.ac.uk/www_detector/optheory/darkcurrent.html#Dark%20current [Accessed 24 July 2008]
66 Gonzalez, R. & Woods, R. 2002, Digital Image Processing, 2nd ed. Prentice Hall. 793 p. ISBN: 0201180758.
67 Almond, M. Noise Reduction Tool Comparison. Updated 13 February 2005. Available from: http://www.michaelalmond.com/Articles/noise.htm [Accessed 24 July 2008]
68 Kovesi, P. 1999, "Phase Preserving Denoising of Images". The Australian Pattern Recognition Society Conference: DICTA'99, December 1999, Perth WA, pp. 212-217.
69 Iacoviello, D. & Lucchetti, M. 2005, Parametric characterization of the form of the human pupil from blurred noisy images. Computer Methods and Programs in Biomedicine, vol. 77(1), pp. 39-48. http://dx.doi.org/10.1016/j.cmpb.2004.09.001
70 Rautkylä, E., Aaltonen, L. & Partinen, A. 2006, Still-kuvan laatu: Subjektiivisen ja objektiivisen kuvanlaadun korrelaatio digitaalisilla kameroilla [Still image quality: the correlation of subjective and objective image quality in digital cameras]. Helsinki University of Technology. Special assignment in Communications Engineering. 51 p.
71 Teikari, P. 2007, Automated pupillometry. Project work in measurement science and technology. Available from: http://users.tkk.fi/~jteikari/Teikari_AutomatedPupillometry.pdf [Accessed 13 August 2008]
