We know that a picture is worth more than a thousand words. An image may be defined
as a two-dimensional function, f(x, y), where x and y are spatial coordinates, and the
amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the
image at that point.
A digital image is composed of a finite number of elements, each of which has a
particular location and value. These elements are called picture elements, image
elements, or pixels. Pixel is the term used most widely to denote the elements of a digital
image. A pixel value typically represents a gray level, color, or height.
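As a minimal sketch of the definition above, a grayscale digital image can be represented as a 2-D array in which each entry is the gray level f(x, y) at that pixel (the array values here are made up for illustration):

```python
import numpy as np

# A digital image as a 2-D function f(x, y): each entry is the gray level
# (intensity) at that pixel. The values below are illustrative only.
f = np.array([
    [ 12,  50,  50,  12],
    [ 50, 200, 200,  50],
    [ 50, 200, 255,  50],
    [ 12,  50,  50,  12],
], dtype=np.uint8)

x, y = 2, 2                  # spatial coordinates of one pixel
intensity = int(f[x, y])     # the gray level of the image at (x, y)
print(f.shape, intensity)    # a 4-by-4 image; f(2, 2) = 255
```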
Processing levels for DIP: there are no clear-cut boundaries between where image
processing ends and fields such as image analysis begin, but there are three types of
computerized processes…
One of the first applications of digital images was in the newspaper industry, when
pictures were first sent by submarine cable between London and New York. The
introduction of the Bartlane cable picture transmission system in the early 1920s reduced
the time required to transport a picture across the Atlantic from more than a week to less
than three hours. Specialized printing equipment coded pictures for cable transmission
and then reconstructed them at the receiving end.
Some of the initial problems in improving the visual quality of these early digital pictures
were related to the selection of printing procedures and the distribution of intensity levels.
At the end of 1921, this printing method was abandoned in favor of a technique based on
photographic reproduction made from tapes perforated at the telegraph receiving
terminal. This improved both tonal quality and resolution.
The early Bartlane systems were capable of coding images in five distinct levels of gray.
This capability was increased to 15 levels in 1929, which was quite typical of the images
that could be obtained with the 15-tone equipment. During this period, the introduction
of a system for developing a film plate via light beams that were modulated by the coded
picture tape improved the reproduction process. At that time, computers were not
involved in the creation of digital images.
Digital images require so much storage and computational power that progress in the
field of digital image processing has been dependent on the development of digital
computers and of supporting technologies that include data storage, display, and
transmission.
The foundations were laid in the 1940s with the introduction by John von Neumann of
two key concepts:
(1) memory to hold a stored program and data, and
(2) conditional branching.
These two ideas are the foundation of a central processing unit (CPU), which is at the
heart of computers today. A series of key advances then led to computers powerful
enough to be used for digital image processing.
In the late 1960s and early 1970s, digital image processing techniques were used in
medical imaging, remote Earth resources observations, and astronomy.
Tomography consists of algorithms that use the sensed data to construct an image.
Motion of the object in a direction perpendicular to the ring of detectors produces a 3-D
rendition of the inside of the object.
MRI uses magnetic fields and radio waves to create the image, whereas computerized
tomography (CT) scans use X-rays.
A great advantage of MRI is the ability to change the contrast of the image with only a
small change in the radio waves, and the ability to change the imaging plane without
moving the patient, with the help of axial and control images.
MRI uses contrast agents made up of iodine (a dark-colored substance used in medicine)
that are absorbed by normal tissues, which makes it easier for doctors to detect tumors.
Examples of Fields that Use Digital Image Processing
Today, there is almost no area of technical endeavor that is not impacted in some way by
digital image processing. The areas of application of digital image processing are so
varied that some form of organization is desirable in attempting to capture the breadth of
this field. The principal energy source for images in use today is the electromagnetic
energy spectrum. Images based on radiation from the EM spectrum are the most familiar,
especially images in the X-ray and visual bands of the spectrum. Electromagnetic waves
can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they
can be thought of as a stream of massless particles, each traveling in a wavelike pattern
and moving at the speed of light. Each massless particle contains a certain amount (or
bundle) of energy. Each bundle of energy is called a photon. If spectral bands are
grouped according to energy per photon, we obtain the spectrum shown in Fig.(a),
ranging from gamma rays (highest energy) at one end to radio waves (lowest energy) at
the other.
[Fig.(a): the electromagnetic spectrum arranged according to energy of one photon
(in electron volts), ranging from about 10^6 eV for gamma rays down to about
10^-9 eV for radio waves.]
• Geological applications use sound in the low end of the sound spectrum (hundreds
of Hz), while imaging in other areas uses ultrasound (millions of Hz). The most
important commercial applications of image processing in geology are in mineral
and oil exploration. For image acquisition over land, one of the main approaches
is to use a large truck and a large flat steel plate. The plate is pressed on the
ground by the truck, and the truck is vibrated through a frequency spectrum up to
100 Hz. The strength and speed of the returning sound waves are determined by
the composition of the Earth below the surface. A computer analyzes these data, and
images are generated from the resulting analysis.
• For marine acquisition, the energy source consists usually of two air guns towed
behind a ship. Returning sound waves are detected by hydrophones placed in
cables that are either towed behind the ship, laid on the bottom of the ocean, or
hung from buoys (vertical cables). The two air guns are alternately pressurized and
then set off. The constant motion of the ship provides a transversal direction
of motion that, together with the returning sound waves, is used to generate a 3-D
map of the composition of the Earth below the bottom of the ocean.
(b) 8-adjacency. Two pixels p and q with values from V are 8-adjacent if q is in the
set N8 (p).
(c) m-adjacency (mixed adjacency). Two pixels p and q with values from V are m-
adjacent if
(i) q is in N4(p),or
(ii) q is in Nd(p) and the set N4(p) ∩ N4(q) has no pixels whose values
are from V.
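The adjacency tests above can be sketched directly in code (assumptions: pixels are (row, col) tuples, the image is a dictionary mapping coordinates to gray values, and N4 and Nd follow the usual 4-neighbour and diagonal-neighbour definitions; all function names are illustrative):

```python
# Sketch of 4-, diagonal-, and m-adjacency between pixels p and q,
# given a set V of admissible gray values.
def n4(p):
    x, y = p
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def nd(p):
    x, y = p
    return {(x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1), (x + 1, y + 1)}

def n8(p):
    return n4(p) | nd(p)

def m_adjacent(p, q, img, V):
    """Mixed adjacency: img is a dict {(row, col): gray value}."""
    if img[p] not in V or img[q] not in V:
        return False
    if q in n4(p):                       # rule (i): 4-adjacent
        return True
    if q in nd(p):                       # rule (ii): diagonal, with the
        common = n4(p) & n4(q)           # shared 4-neighbourhood empty of V
        return not any(c in img and img[c] in V for c in common)
    return False
```

For example, in a tiny image where (0,0), (0,1), and (1,1) all have value 1 and V = {1}, the pixels (0,0) and (1,1) are 8-adjacent but not m-adjacent, because they share the 4-neighbour (0,1) whose value is in V.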
Neighbourhood Operations
The following figure shows the neighborhood blocks for some of the elements in a 6-by-5
matrix with 2-by-3 sliding blocks. The center pixel for each neighborhood is marked with
a dot.
The center pixel is the actual pixel in the input image being processed by the operation. If
the neighborhood has an odd number of rows and columns, the center pixel is actually in
the center of the neighborhood. If one of the dimensions has even length, the center pixel
is just to the left of center or just above center. For example, in a 2-by-2 neighborhood,
the center pixel is the upper left one.
For any m-by-n neighborhood, the center pixel is floor(([m n]+1)/2).
In the 2-by-3 block shown in the figure, the center pixel is (1,2), i.e., the pixel in the
second column of the top row of the neighborhood.
For example, suppose fig. represents an averaging operation. The function might sum the
values of the six neighborhood pixels and then divide by 6. The result is the value of the
output pixel.
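The averaging operation just described can be sketched as follows (assumptions: zero padding at the image borders, and the same 1-based centre-pixel convention floor(([m n]+1)/2) converted to 0-based indexing; the function name is illustrative):

```python
import numpy as np

def neighborhood_average(img, m, n):
    """Average each m-by-n sliding neighborhood of img (zero padding).
    The centre pixel follows floor(([m n]+1)/2), so for even dimensions
    it sits just above or to the left of the geometric centre."""
    cr, cc = (m + 1) // 2 - 1, (n + 1) // 2 - 1   # 0-based centre offsets
    rows, cols = img.shape
    padded = np.zeros((rows + m - 1, cols + n - 1), dtype=float)
    padded[cr:cr + rows, cc:cc + cols] = img      # place image so that
    out = np.zeros_like(img, dtype=float)         # padded[i+cr, j+cc] = img[i, j]
    for i in range(rows):
        for j in range(cols):
            # sum the m*n neighborhood values and divide by their count
            out[i, j] = padded[i:i + m, j:j + n].mean()
    return out
```

On a 6-by-5 image of all ones with 2-by-3 blocks, interior output pixels are the mean of six ones (1.0), while border pixels are pulled down by the zero padding.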
Set Operations
Let A be a set composed of ordered pairs of real numbers. If a = (a1, a2) is an element
of A, then we write
a ∈ A
Similarly, if a is not an element of A, then we write
a ∉ A
Sets are used in image processing by letting the elements of sets be the coordinates of
pixels representing regions (objects) in images. If every element of a set A is also an
element of a set B, then A is said to be a subset of B. Union and intersection operations
join two sets or find the elements common to two sets.
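These set operations on pixel coordinates can be sketched with Python's built-in set type (the two regions below are made-up coordinate sets for illustration):

```python
# Pixel coordinate sets for two illustrative regions (objects).
A = {(0, 0), (0, 1), (1, 1)}     # coordinates of region A
B = {(1, 1), (2, 2)}             # coordinates of region B

a = (0, 1)
print(a in A)          # membership test: a ∈ A
print((3, 3) not in A) # non-membership: (3,3) ∉ A
print(A | B)           # union of the two regions
print(A & B)           # intersection: pixels common to both
print(B <= A)          # subset test: is B a subset of A?
```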
Image Addition
Addition is a discrete version of continuous integration. In astronomical observations, a
process equivalent to this method is to use the integrating capabilities of CCD or similar
sensors for noise reduction by observing the scene over long periods of time. Cooling
the sensor also reduces noise. The net effect is analogous to averaging a set of noisy
digital images.
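The noise-reduction effect of averaging can be sketched numerically (assumptions: zero-mean additive Gaussian noise, a constant synthetic scene, and illustrative parameter values; averaging K frames reduces the noise standard deviation by roughly the square root of K):

```python
import numpy as np

# Average K noisy observations of the same scene to reduce noise.
rng = np.random.default_rng(0)
scene = np.full((64, 64), 100.0)     # ideal, noise-free scene (synthetic)
K = 100
noisy = [scene + rng.normal(0, 20, scene.shape) for _ in range(K)]

avg = sum(noisy) / K                 # discrete version of integration
print(np.std(noisy[0] - scene))      # noise level in a single frame (~20)
print(np.std(avg - scene))           # noise after averaging (~20 / sqrt(K))
```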
Image Subtraction
Image subtraction is used for enhancing the differences between images. Subtracting one
image from the other clearly shows the differences. Consider the example of an area of
medical imaging called mask mode radiography, a commercially successful and highly
beneficial use of image subtraction. Consider image differences of the form:
g(x,y) = f(x,y) - h(x,y)
Here h(x,y), the mask, is an X-ray image of a region of a patient's body, captured by an
intensified TV camera located opposite an X-ray source. The procedure consists of
injecting an X-ray contrast medium into the patient's bloodstream, taking a series of
images (called live images) of the same anatomical region as h(x,y), and subtracting the
mask from the series of incoming live images after injection of the contrast medium. The
net effect of subtracting the mask from each sampled live image is that the areas that
differ between f(x,y) and h(x,y) appear in the output image g(x,y).
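The subtraction g(x,y) = f(x,y) - h(x,y) can be sketched with small illustrative arrays (the values below are made up; real mask and live images would come from the imaging system, and the result is clipped to the valid gray-level range):

```python
import numpy as np

# Mask-mode subtraction: h is the pre-injection mask, f a live image.
h = np.array([[100, 100], [100, 100]], dtype=np.int16)   # mask image
f = np.array([[100, 160], [100, 100]], dtype=np.int16)   # live image

g = np.clip(f - h, 0, 255).astype(np.uint8)   # g(x,y) = f(x,y) - h(x,y)
print(g)   # nonzero only where the two images differ
```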
Hence these arithmetic operations are helpful tools for completing image processing tasks.