Yaroslavsky
TABLE OF CONTENTS
Ch. 1. INTRODUCTION
...
5.3 Combined algorithms for computing the DFT and DCT of real-valued signals
5.3.1 Combined algorithms for computing the DFT of signals with real samples
5.3.2 Combined algorithm for computing the DCT via FFT
...
Ch. 7. STATISTICAL METHODS AND ALGORITHMS
...
Chapter 1
INTRODUCTION
The history of science is, to a considerable degree, the history of the invention, development and perfecting of imaging methods and devices. Modern science began with the invention and application of the optical telescope and microscope at the beginning of the 17th century by Galileo Galilei (1564-1642), Antonie van Leeuwenhoek (1632-1723) and Robert Hooke (1635-1703). The next decisive stage was the invention of photography in the first half of the 19th century. Photographic plates were the principal means in the discoveries of X-rays by Wilhelm Conrad Roentgen (1845-1923, Nobel Prize Laureate, 1901) and of radioactivity by Antoine Henri Becquerel (1852-1908, Nobel Prize Laureate, 1903) at the end of the 19th century. These discoveries, in their turn, almost immediately gave birth to new imaging techniques: X-ray imaging and radiography. X-rays were discovered by W. C. Roentgen in experiments with cathode rays. Cathode rays had been discovered in 1859 by Julius Plücker (1801-1868), who used vacuum tubes invented in 1855 by the German inventor Heinrich Geissler (1815-1879). These tubes, as modified by Sir William Crookes (1832-1919), led eventually to the discovery of the electron and finally brought about the development of electronic television and electron microscopy in the 1930s and 1940s. Ernst Ruska (1906-1988), designer of the first electron microscope, was awarded the Nobel Prize in Physics for 1986; he shared the award with G. Binnig and H. Rohrer, who received it for their design of the scanning tunneling microscope. The discovery of the diffraction of X-rays by Max von Laue (1879-1960, Nobel Prize Laureate, 1914) at the beginning of the 20th century marked the advent of a new imaging technique: imaging in a transform domain. Although von Laue's motivation was not to create a new imaging technique but rather to prove the wave nature of X-rays, lauegrams very soon became the main imaging tool in crystallography. Lauegrams are not conventional images for visual observation; using lauegrams, however, one can numerically reconstruct the spatial structure of atoms in crystals. This seed bore fruit in less than half a century: one of the most remarkable scientific achievements of the 20th century, the discovery by J. Watson and F. Crick of the double-helix structure of DNA in 1953 (Nobel Prize, 1962), is based on X-ray crystallography. At about the same time a whole family of new transform-domain imaging methods appeared: holography, synthetic aperture radar, coded aperture imaging, tomography. Two of these inventions were awarded the Nobel Prize: D. Gabor for his invention and development of the holographic method (Nobel Prize in Physics, 1971), and A. M. Cormack and G. N. Hounsfield for the development of computer-assisted tomography (Nobel Prize in Physiology or Medicine, 1979). Dennis Gabor invented holography in 1948 ([1]).
This is what D. Gabor wrote in his Nobel Lecture ([2]) about the development of holography:
"Around 1955 holography went into a long hibernation. The revival came suddenly and explosively in 1963, with the publication of the first successful laser holograms by Emmett N. Leith and Juris Upatnieks of the University of Michigan, Ann Arbor ([3]). Their success was due not only to the laser, but to the long theoretical preparation of Emmett Leith (in the field of side-looking radar), which started in 1955. Another important development in holography [happened] in 1962, just before the holography explosion. The Soviet physicist Yu. N. Denisyuk published an important paper ([4]) in which he combined holography with the ingenious method of photography in natural colors, for which Gabriel Lippmann received the Nobel Prize in 1908."
Dennis Gabor received his Nobel Prize in 1971. The same year, a paper "Digital holography" by T. Huang was published in the Proceedings of the IEEE ([5]). This paper marked the next step in the development of holography, the use of digital computers for reconstructing, generating and simulating wave fields, and reviewed pioneering accomplishments in this field. These accomplishments prompted a burst of research and publications in the early and mid-1970s. At that time, most of the main ideas of digital holography were suggested and tested ([6-16]). Numerous potential applications of digital holography, such as fabricating computer-generated diffractive optical elements and spatial filters for optical information processing, 3-D holographic displays and holographic television, and holographic computer vision, stimulated great enthusiasm among researchers. However, the limited speed and memory capacity of the computers available at that time, and the absence of electronic means and media for sensing and recording optical holograms, hampered the realization of these potentials. In the 1980s digital holography went into a sort of hibernation, similar to what had happened to holography in the 1950s-1960s. With the advent, at the end of the 1990s, of a new generation of high-speed microprocessors, high-resolution electronic optical sensors, liquid crystal displays, and technology for fabricating micro-lens and micro-mirror arrays, digital holography is getting a second wind. Digital holography tasks that required hours and days of computer time in the 1970s can now be solved in almost real time, in tiny fractions of a second. Optical holograms can now be directly sensed by high-resolution photo-electronic sensors and fed into computers in real time, with no need for any wet photo-chemical processing. Micro-lens and micro-mirror arrays promise a breakthrough in the means for recording computer-generated holograms and creating holographic displays. The recent flow of publications in digital holographic metrology and microscopy indicates the revival of digital holography from this hibernation.
The development of optical holography, one of the most remarkable inventions of the 20th century, was driven by a clear understanding of the information nature of optics and holography ([17-19]). This information nature is especially distinctly seen in digital holography: the wave field that in optical, radio-frequency or acoustic holography is recorded in the form of a hologram is, in digital holography, represented by a digital signal that carries the wave field information deprived of its physical casing. With digital holography, and with the incorporation of digital computers into optical information systems, information optics has reached its maturity.
The most substantial advantage of digital computers, as compared with analog electronic and optical information processing devices, is that no hardware modifications are necessary for solving different tasks. With the same hardware, one can build an arbitrary problem solver by simply selecting or designing an appropriate code for the computer. This feature also makes digital computers an ideal vehicle for processing optical signals adaptively, since, with the help of computers, they can adapt rapidly and easily to varying signals, tasks and end-user requirements. In addition, acquiring and processing the quantitative data contained in optical signals, and connecting optical systems to other information systems and networks, is most natural when data are represented and handled in digital form. In the same way as money is the universal equivalent in economics, digital signals are the universal equivalent in information handling. Thanks to this universal nature, the digital signal is an ideal means for integrating different information systems.
It is no coincidence that digital holography appeared at the end of the 1960s, the same period to which digital image processing can be dated back. In a certain sense, just as the invention of the laser by C. Townes, N. Basov and A. Prokhorov in the mid-1950s (Nobel Prize in Physics, 1964) stimulated the development of holography, two events stimulated digital holography and digital image processing: the beginning of industrial production of computers in the 1960s, and the introduction of the Fast Fourier Transform algorithm by J. W. Cooley and J. W. Tukey in 1965 ([21]).
Digital holography and digital image processing are twins. They share a common origin, a common theoretical base, and common methods and algorithms. It is the purpose of the present book to describe these common principles, methods and algorithms for senior-level undergraduate and graduate students, researchers and engineers in optics, photonics, opto-electronics and electronic engineering.
The theoretical base of digital holography and image processing is signal theory. Ch. 2 introduces the basic concepts of signal theory and the mathematical models that are used for describing optical signals and imaging systems. The most important models are integral transforms. All major integral transforms, their properties and interrelations are reviewed: the convolution integral; the Fourier transform and such derivatives of it as the Cosine, Hartley, Hankel and Mellin transforms; the Fresnel transform; the Hilbert transform; the Radon and Abel transforms; wavelet transforms; and sliding window transforms. In the last section of the chapter, stochastic signal transformations and the corresponding statistical models are introduced.
One of the most fundamental problems of digital holography and, more generally, of integrating digital computers and analog optics is that of adequately representing optical signals and transformations in digital computers. Solving this problem requires consistent accounting for the computer-to-optics interface and for computational complexity issues. These problems are treated in Ch. 3 for signals and in Ch. 4 for signal transforms. In Ch. 3, general signal digitization principles and the concepts of signal discretization and element-wise quantization as a practical way to implement digitization are introduced. Signal discretization is treated as signal expansion over a set of discretization basis functions, and classes of shift, scale and combined shift-scale basis functions are introduced to describe traditional signal sampling and the other image discretization techniques implemented in coded aperture imaging, synthetic aperture imaging, and computed and MRI tomography. Naturally, image sampling theory, including 1-D and 2-D sampling theorems and the analysis of sampling artifacts, is treated at the greatest length. The treatment of element-wise quantization is mostly focused on the compander-expander method of optimal non-uniform quantization, which is less covered in the literature than the Max-Lloyd numerical optimization of quantization. A separate section is devoted to the peculiarities of signal quantization in digital holography. The chapter concludes with a review of image compression methods. As this subject is very well covered in the literature, only the basic principles of data compression and a classification and brief review of the methods are offered.
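The compander-expander quantization mentioned above can be sketched in a few lines: compress the signal with a nonlinearity, quantize uniformly, then expand with the inverse nonlinearity. The mu-law compressor below is only an illustrative choice, not the optimal compander derived in the chapter:

```python
import numpy as np

def compander_quantize(x, mu=255.0, levels=256):
    """Non-uniform quantization by companding: compress the signal,
    quantize uniformly, then expand with the inverse nonlinearity.
    The mu-law compressor is an illustrative choice; the optimal
    compander depends on the signal's probability distribution."""
    # Compressor: maps [-1, 1] onto [-1, 1], stretching small amplitudes
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    # Uniform quantizer in the compressed domain
    q = np.round((y + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1
    # Expander: exact inverse of the compressor
    return np.sign(q) * np.expm1(np.abs(q) * np.log1p(mu)) / mu

x = np.linspace(-1.0, 1.0, 1001)
xq = compander_quantize(x)
# Quantization error is much smaller at small amplitudes than at large ones
err_small = np.max(np.abs((x - xq)[np.abs(x) < 0.05]))
err_large = np.max(np.abs((x - xq)[np.abs(x) > 0.95]))
```

The net effect is a quantizer whose step size grows with signal amplitude, which is advantageous when small signal values are more probable or perceptually more important.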
Ch. 4 provides a comprehensive exposition of the discrete representation of signal transforms in computers. In digital holography, computers should be considered an integral part of optical systems. This requires observing a mutual correspondence principle between discrete signal transforms in computers and analog transforms in optical systems, which is formulated in Sect. 4.1. On the basis of this principle, discrete representations of the convolution integral and of the Fourier and Fresnel integral transforms are developed in Sects. 4.2-4.4. The conventional Discrete Fourier and Discrete Fresnel Transforms are represented here in a general form that takes into consideration that image or hologram sampling and reconstruction devices can be placed in optical setups with arbitrary shifts with respect to their optical axes. This requires introducing arbitrary shift parameters into the discrete Fourier and Fresnel transforms. The presence of these parameters gives the transforms modified in this way some new useful features, such as flexibility of image resampling using the Discrete Fourier Transform (DFT), and allows numerous modifications of the DFT and, primarily, the Discrete Cosine Transform (DCT) to be treated in the most natural way as special cases of the DFT for different shift parameters and different types of signal symmetry.
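A DFT with arbitrary shift parameters can be written down directly. The sketch below assumes the common form X_r = sum_k x_k exp(-2*pi*i*(k+u)(r+v)/N) with signal-domain shift u and frequency-domain shift v; normalization conventions vary between texts:

```python
import numpy as np

def shifted_dft(x, u=0.0, v=0.0):
    """DFT with arbitrary shift parameters u (signal domain) and
    v (frequency domain):
        X_r = sum_k x_k exp(-2j*pi*(k + u)*(r + v) / N).
    For u = v = 0 this reduces to the conventional DFT."""
    N = len(x)
    k = np.arange(N).reshape(-1, 1)   # signal-domain index
    r = np.arange(N).reshape(1, -1)   # frequency-domain index
    return x @ np.exp(-2j * np.pi * (k + u) * (r + v) / N)

x = np.random.default_rng(0).standard_normal(8)
```

With half-sample shifts such as u = 1/2 one obtains the kernels from which DCT-type transforms arise as special cases for symmetric signals.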
A critical issue in digital holography and image processing is the computational complexity of the processing: the availability of fast computational algorithms determines the feasibility of their practical application. These issues are treated in Chs. 5 and 6. Ch. 5 describes efficient computational algorithms for image and hologram filtering in the signal domain. Described are separable, recursive, parallel and cascade implementations of digital filters; a new implementation of signal convolution in the DCT domain that is practically free of the boundary effects of the traditional cyclic convolution with the DFT; recursive filtering in a sliding window in the domain of the DCT and other transforms; and combined algorithms of the 1-D and 2-D DFT and DCT for real-valued signals.
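One combined algorithm of the kind mentioned above, computing the DFTs of two real-valued signals with a single complex FFT, can be sketched as follows; this is the standard packing/unpacking trick, and the book's own algorithms may differ in detail:

```python
import numpy as np

def dft_of_two_real_signals(x, y):
    """Compute the DFTs of two real-valued signals with a single
    complex FFT: pack them into the real and imaginary parts of one
    complex signal, then separate the two spectra using the
    conjugate symmetry of DFTs of real signals."""
    z = np.fft.fft(np.asarray(x) + 1j * np.asarray(y))
    # conj(Z[(-r) mod N]): conjugate of the frequency-reversed spectrum
    zr = np.conj(np.roll(z[::-1], 1))
    return (z + zr) / 2, (z - zr) / 2j

rng = np.random.default_rng(1)
a, b = rng.standard_normal(16), rng.standard_normal(16)
A, B = dft_of_two_real_signals(a, b)
```

Since one length-N complex FFT replaces two, the arithmetic cost per real signal is roughly halved.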
Ch. 6 expounds the basic principles of the so-called fast transforms, of which the Fast Fourier Transform is the best known special case. The fast transforms are treated here in the most general matrix form, which allows their design to be formalized and provides very compact and general formulas for the fast algorithms. From the practical point of view, the pruned algorithms and the quantized DFT described in this chapter may be of the most interest to the reader.
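The divide-and-conquer factorization underlying all fast transforms can be illustrated by a recursive radix-2 FFT; this is a textbook sketch, not the general matrix formalism used in the chapter:

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT (length must be a
    power of two). The recursive splitting into even- and odd-indexed
    subsequences is what reduces the cost from O(N^2) to O(N log N)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    even = fft_radix2(x[0::2])   # DFT of even-indexed samples
    odd = fft_radix2(x[1::2])    # DFT of odd-indexed samples
    twiddled = np.exp(-2j * np.pi * np.arange(N // 2) / N) * odd
    return np.concatenate([even + twiddled, even - twiddled])
```

Each recursion level performs O(N) multiplications, and there are log2(N) levels, which is the source of the N*log(N) complexity.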
Many phenomena in holography and, generally, in imaging are treated as random, or stochastic. In addition, one of the tasks of digital holography is the statistical simulation of holographic and imaging processes. Hence the next chapter, Ch. 7, deals with statistical methods and algorithms. It covers essentially all the issues involved, from measuring signal statistical characteristics for different statistical models of signals, to building digital statistical models, to generating arrays of pseudo-random numbers with prescribed statistical characteristics. Practical applications are represented in this chapter by methods for generating correlated phase masks and diffusers and by a demonstration of statistical simulation for studying speckle noise phenomena in imaging systems that use coherent radiation.
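As an illustration of generating pseudo-random arrays with prescribed correlation, a correlated phase mask can be produced by shaping white noise in the Fourier domain; the rectangular low-pass response below is an arbitrary illustrative choice of spectrum:

```python
import numpy as np

def correlated_phase_mask(n, cutoff, seed=0):
    """Pseudo-random phase mask with spatial correlation, produced by
    shaping white noise with a low-pass frequency response in the
    Fourier domain. The rectangular response is an illustrative
    choice; any prescribed power spectrum can be imposed this way."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    f = np.fft.fftfreq(n)
    keep = (np.abs(f[:, None]) <= cutoff) & (np.abs(f[None, :]) <= cutoff)
    smooth = np.fft.ifft2(np.fft.fft2(noise) * keep).real
    smooth /= smooth.std()              # normalize to unit variance
    return np.exp(2j * np.pi * smooth)  # unit-amplitude phase-only mask

mask = correlated_phase_mask(64, cutoff=0.1)
```

The narrower the pass band, the longer the correlation length of the resulting phase pattern, which is the control parameter of interest when simulating diffusers and speckle.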
The primary task of processing images and holograms, after they are digitized and put into a computer, is usually correcting the distortions in imaging and holographic systems that prevent images from being perfect for the end user. We call this task sensor signal perfection. In image processing, the term image restoration is commonly accepted to designate this task. Image reconstruction in holography and tomography may also be regarded as a similar task. Methods for sensor signal perfection, image restoration and image reconstruction are discussed in Ch. 8. The methods assume a canonical imaging system model, described in Sect. 8.1, that consists of a cascade of units performing linear signal transformation, nonlinear point-wise transformation, and stochastic transformation such as the addition of signal-independent noise. As a theoretical benchmark, the design of optimal linear filters is used. The filters correct signal distortions in the linear filtering units of the system model, such as image blur, and suppress additive signal-independent noise in such a way as to minimize the root mean square restoration error measured over either a set of images in an image database, an image ensemble, or even a particular individual image. In the latter case, the filters are adaptive, as their parameters depend on the particular image to which they are applied. In view of implementation issues, the filters are designed and work in the domain of orthogonal transforms that can be computed with fast transform algorithms. The design of such filters is discussed in Sect. 8.2. In Sect. 8.3, this approach is extended to local adaptive filters that work in the transform domain of a window sliding over the image and, in each window position, produce the filtered value of the window's central pixel. Illustrative examples are presented that demonstrate the edge-preserving noise suppression capability of local adaptive filters. In particular, it is shown that such filters can be efficiently used for filtering signal-dependent noise, such as speckle noise. Sect. 8.4 extends this approach further to the design of optimal linear filters for multi-component images such as color, multi-spectral or multi-modality images. In real applications, it very frequently happens that noise randomly replaces signal values, which in this case are completely lost. In such situations, the model of impulse noise is used. Filtering impulse noise, in general, requires nonlinear filtering. Efficient and fast impulse noise filtering algorithms that use a simple linear filter and a point-wise threshold detector of pixels distorted by impulse noise are described in Sect. 8.5. Other filters for filtering impulse noise are described in Ch. 12. All the above-mentioned filtering methods are aimed at correcting signal distortions attributed to the linear filtering and stochastic transformation units of the imaging system model. Sect. 8.6 deals with processing methods for correcting image and hologram gray scale distortions in the point-wise nonlinear transformation unit.
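The impulse-noise scheme of Sect. 8.5, a simple linear filter combined with a point-wise threshold detector, can be sketched as follows; the 3x3 moving average and the fixed threshold are illustrative assumptions, not the chapter's specific design:

```python
import numpy as np

def filter_impulse_noise(img, threshold):
    """Impulse-noise filtering with a simple linear filter and a
    point-wise threshold detector: pixels that deviate strongly from
    their local mean are declared distorted and replaced by that mean.
    The 3x3 moving average and the fixed threshold are illustrative."""
    h, w = img.shape
    padded = np.pad(img, 1, mode='edge')
    # 3x3 moving-average estimate of each pixel from its neighborhood
    smooth = sum(padded[i:i + h, j:j + w]
                 for i in range(3) for j in range(3)) / 9.0
    distorted = np.abs(img - smooth) > threshold  # point-wise detector
    return np.where(distorted, smooth, img)
```

Because only detected pixels are modified, undistorted image detail is left untouched, which is the main advantage of detection-based impulse filtering over unconditional smoothing.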
In a broad sense, image restoration can be treated as applying to the distorted signal a transformation inverse to the one that caused the distortions, provided that the inverse transformation is appropriately modified to allow for the random interferences and other distortions that are always present in the signals. This is how image reconstruction in holography and tomography is usually carried out. The modification of the inverse transformation to allow for random interferences and distortions is implemented as pre-processing of the holograms or, correspondingly, of the image projections before applying the inverse transformation, and as post-processing after the reconstruction transformation. This technology is illustrated in Sect. 8.7.
As a rule, in image processing applications, the end user needs, for solving his particular tasks, to apply to images some additional processing that eases visual image analysis. Such processing is usually called image enhancement. Image enhancement methods are reviewed and illustrated in Sect. 8.8.
In digital holography and digital image processing, real images and holograms are represented in computers as arrays of their samples, obtained with one or another sampling device in a certain setting. Meanwhile, it is very frequently necessary to resample images, that is, to obtain, from the available array of samples, samples located in positions other than those of the available samples. Resampling assumes interpolation between the available data. Many interpolation methods for sampled data are known. Among them, discrete sinc-interpolation, which satisfies the discrete sampling theorem described in Ch. 4, has certain advantages. Ch. 9 focuses on the properties of and efficient computational algorithms for discrete sinc-interpolation, and on its use for image zooming, rotation, differentiation, integration, polar-to-Cartesian coordinate conversion and tomographic reconstruction.
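For integer zoom factors, discrete sinc-interpolation can be sketched as zero-padding of the DFT spectrum, which guarantees that the interpolated signal passes exactly through the original samples; the sketch below assumes real 1-D signals of even length:

```python
import numpy as np

def sinc_zoom(x, factor):
    """Discrete sinc-interpolation of a 1-D signal by an integer zoom
    factor, implemented as zero-padding of the DFT spectrum (sketch
    for real signals of even length N). The interpolated signal
    passes exactly through the original samples."""
    N = len(x)
    M = N * factor
    X = np.fft.fft(x)
    Y = np.zeros(M, dtype=complex)
    Y[:N // 2] = X[:N // 2]                # positive frequencies
    Y[M - N // 2 + 1:] = X[N // 2 + 1:]    # negative frequencies
    # Split the Nyquist component between the two halves so that
    # the interpolated signal stays real-valued
    Y[N // 2] = X[N // 2] / 2
    Y[M - N // 2] = X[N // 2] / 2
    return np.fft.ifft(Y).real * factor    # rescale for the longer IFFT
```

The same zero-padding idea extends separably to 2-D image zooming; the splitting of the Nyquist component is one of the details that distinguish a correct discrete sinc-interpolator from a naive spectrum copy.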
A very important applied problem of image processing is image parameter estimation and, in particular, the localization of objects in images. Methods for solving this problem are discussed in Chs. 10 and 11. In Ch. 10, a basic mathematical model of observation with additive sensor noise is formulated and applied to the design of an optimal device for the localization of a target object with the highest possible accuracy and reliability. Analytical formulas are also obtained that characterize the potential accuracy and reliability of target localization for single- and multi-component images and for non-correlated and correlated sensor noise. Ch. 11 focuses on the problem of reliable localization of a target object in cluttered images, where the localization is hindered by the presence in the images of non-target background objects. The main issue in solving this problem is how to reliably discriminate between target and non-target objects. In the chapter, a localization device is considered that consists of a linear filter and a unit that finds the coordinates of the highest maximum of the signal at the filter output. The linear filter is optimized so as to secure the highest ratio of the filter response to the target object at its location to the standard deviation of the filter response to the background non-target image component. This problem is solved for exactly known target objects and for objects that are known only to within certain parameters such as, for instance, scale and rotation angle, and for spatially homogeneous and inhomogeneous images. The solution, called the optimal adaptive correlator, is extended to the case of localization in color and multi-component images. Some practical implications of this solution and its implementation in nonlinear optical and opto-electronic correlators are also discussed and illustrated.
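The correlator idea can be sketched in its plainest form: cross-correlation computed via the DFT, followed by a search for the highest peak. The adaptive background normalization that Ch. 11 optimizes is omitted here:

```python
import numpy as np

def localize_target(image, target):
    """Locate a known target in an image as the position of the highest
    peak of the cyclic cross-correlation, computed in the DFT domain
    via the convolution theorem. This is the plain matched-filter
    sketch, without adaptive whitening of the background."""
    corr = np.fft.ifft2(np.fft.fft2(image) *
                        np.conj(np.fft.fft2(target, s=image.shape))).real
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(4)
image = 0.01 * rng.standard_normal((32, 32))  # weak sensor noise
target = rng.random((3, 3)) + 0.5
image[10:13, 20:23] += target                  # embed the target
```

In cluttered scenes this plain correlator produces spurious peaks on non-target objects, which is exactly the failure mode that motivates the optimal adaptive correlator of Ch. 11.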
An important place in the arsenal of methods for image denoising, enhancement and segmentation belongs to nonlinear filters. Quite a number of nonlinear image processing filters have been reported in the literature. In Ch. 12, these filters are classified and represented in a unified way in terms of the fundamental notion of a pixel neighborhood and of a standardized set of estimation operations. The chapter also provides and illustrates practical examples of several unconventional filters for image denoising, enhancement, edge detection and segmentation. In conclusion, possible implementations of the filters in neuro-morphic parallel networks are briefly discussed.
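The neighborhood-plus-estimation-operation formulation can be illustrated with the familiar median filter: collect a spatial neighborhood of every pixel and apply a rank-order estimate over it:

```python
import numpy as np

def median_filter(img, radius=1):
    """Rank-order filtering in the neighborhood formulation: for every
    pixel, collect a (2*radius+1)-square spatial neighborhood and
    apply an estimation operation over it -- here, the median."""
    size = 2 * radius + 1
    h, w = img.shape
    padded = np.pad(img, radius, mode='edge')
    # Stack all neighborhood shifts, then estimate across the stack
    stack = np.stack([padded[i:i + h, j:j + w]
                      for i in range(size) for j in range(size)])
    return np.median(stack, axis=0)
```

Replacing the median with another estimation operation (a trimmed mean, a rank-selected value, a mode estimate) over the same neighborhood yields other members of the filter family classified in the chapter.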
The last chapter, Ch. 13, is devoted to a problem specific to digital holography proper: the digital-to-analog conversion problem of encoding computer-generated holograms for recording on physical optical media, and the analysis of the distortions in reconstructed images associated with the encoding methods. As computer-generated holograms and optical elements are still recorded using means that are not directly intended for this purpose, there is no unique encoding method. In the chapter, the simplest and best known methods are described and analyzed.
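As a toy illustration of the encoding problem, a complex spectrum can be turned into a real, non-negative pattern by simulating interference with a tilted plane reference wave; this off-axis scheme is only one simple possibility and not necessarily among the specific methods analyzed in the chapter:

```python
import numpy as np

def encode_offaxis_hologram(obj, carrier=0.25):
    """Encode the complex Fourier spectrum of an object as a real,
    non-negative hologram by simulating interference with a tilted
    plane reference wave (an off-axis scheme chosen purely for
    illustration; carrier is the reference tilt in cycles per sample)."""
    n = obj.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(obj))
    spectrum = spectrum / np.max(np.abs(spectrum))   # normalize amplitude
    y = np.arange(n).reshape(-1, 1)
    reference = np.exp(2j * np.pi * carrier * y)     # tilted plane wave
    hologram = np.abs(spectrum + reference) ** 2     # interference pattern
    return hologram / hologram.max()                 # scale to [0, 1]

rng = np.random.default_rng(5)
holo = encode_offaxis_hologram(rng.random((16, 16)))
```

The carrier frequency shifts the reconstructed image away from the zero diffraction order, at the price of requiring a recording medium with correspondingly higher spatial resolution, which is one of the trade-offs encoding methods must balance.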
In all chapters, the exposition is extensively supported by diagrams and graphical and pictorial illustrations, as the author shares the well-known Chinese saying that a picture is worth more than a thousand words.
REFERENCES