• Digital Image.

By Mahesh Digrajkar
• Analog images are physical images that are created when the film
in a camera is exposed to light.
• The chemical coating on the film strip reacts to the light and burns
an image permanently onto the film.
• Digital images are computer files that are created by the
electronic sensor in a digital camera when you take a picture.
• Digital images can also be created by scanning a physical
photograph and saving it to a computer as a file.
• Anatomy Of A Digital Image.
• All information that can be processed by a computer is binary. Any
data in its simplest form can be described as combinations of 1
and 0, e.g. 10110010. Given the correct context, the computer
knows how to interpret this stream of numbers.
• The smallest unit of computer data is a bit (short for “binary digit”). Each bit can be either 1 or 0.
• 1 Byte = 8 Bits. Each byte of data can have 256 (2^8) possible combinations of ones and zeros. Therefore each byte can have any of 256 values.
1 Kilobyte = 1024 Bytes.
1 Megabyte = 1024 KB (approx. 1 million bytes).
1 Gigabyte = 1024 MB (approx. 1 billion bytes; about 5 minutes of DV footage).
1 Terabyte = 1024 GB (approx. 1 trillion bytes).
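
• A quick Python check of the unit arithmetic above (the constant names are illustrative):

```python
BYTE = 8                       # bits per byte
KILOBYTE = 1024                # bytes
MEGABYTE = 1024 * KILOBYTE
GIGABYTE = 1024 * MEGABYTE
TERABYTE = 1024 * GIGABYTE

print(2 ** BYTE)               # 256 possible values per byte
print(MEGABYTE)                # 1048576 bytes, roughly 1 million
print(GIGABYTE)                # 1073741824 bytes, roughly 1 billion
```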

• Pixel.
• The most common type of digital image is the bitmap (or raster) image.
• Each image comprises a number of building blocks known collectively as “pixels” (short for “picture elements”).
• Each pixel is a binary number representing a single square of a solid color. Put enough of them together and you have a picture.
• Pixels are regularly shaped and arranged in a grid-like manner for the sake of efficiency, making them easy to display and simplifying all the “behind-the-scenes” mathematics performed by the computer.
• Digital images have square-shaped pixels, and so do most devices for viewing digital images, such as computer monitors.
• A pixel is the smallest addressable full-color (RGB) element in a
digital image device. The address of a pixel corresponds to its
physical coordinates on a sensor or a screen.
• Pixels are full color samples of an original image. More pixels
provide a more accurate representation of the original image.
• The tonal and color intensity of each pixel is variable.
• A pixel carries tricolor RGB information.
• A photosite, by contrast, can only carry information about one color: it records red, green or blue.
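
• A minimal sketch of this structure, assuming NumPy: a bitmap is just a grid of pixels, each holding an RGB triple, addressed by its row and column coordinates.

```python
import numpy as np

# A 4 x 4 bitmap with 8 bits per channel; values are illustrative.
height, width = 4, 4
image = np.zeros((height, width, 3), dtype=np.uint8)

image[0, 0] = (255, 0, 0)   # top-left pixel: pure red
image[0, 1] = (0, 255, 0)   # its neighbour: pure green

# A pixel's "address" is its coordinates in the grid.
print(image[0, 0])          # -> [255   0   0]
```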
• Analog to Digital
• Analog to digital conversion is carried out by converting
continuously varying analog voltages into a series of numerical
values called samples.
• A sample is a numerical code value that represents a waveform’s
amplitude at a specific moment in time.
• Digital samples are numbers which can be stored in a computer’s
memory and saved to a computer hard drive as data.
• Sampling precision is the maximum number of digits (bits) each
sample can represent. The sampling precision determines how
accurately the amplitudes in the original waveform can be
recreated.
• Sampling is done on the x axis: it converts the infinitely many values along the x axis into a discrete set of digital values.
• But the story of digitizing a signal does not end at sampling; there is another step involved, known as quantization.
• Quantization is the counterpart of sampling. It is done on the y axis: when you quantize an image, you divide its amplitude range into quanta (partitions).
• On the x axis of the signal are the coordinate values, and on the y axis we have the amplitudes. Digitizing the amplitudes is known as quantization.
• That means that when we sample an image, we gather a lot of values, and in quantization, we assign levels to these values.
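
• A small sketch of both steps on a sine wave, assuming NumPy (the sample rate and bit depth are illustrative): sampling picks discrete points on the x axis, quantization rounds each amplitude on the y axis to one of a fixed set of code values.

```python
import numpy as np

fs = 16                           # sampling rate in Hz (illustrative)
bits = 3                          # sampling precision: 2**3 = 8 levels
t = np.arange(fs) / fs            # sample instants over one second
samples = np.sin(2 * np.pi * t)   # sampled amplitudes, in [-1, 1]

# Quantization: map each amplitude to an integer code value 0..7.
levels = 2 ** bits
quantized = np.round((samples + 1) / 2 * (levels - 1)).astype(int)
print(quantized)
```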
• For analog-to-digital conversion to result in a faithful reproduction
of the signal, slices, called samples, of the analog waveform must
be taken frequently. The number of samples per second is called
the sampling rate or sampling frequency.
• Any analog signal consists of components at various frequencies.
The simplest case is the sine wave, in which all the signal energy is
concentrated at one frequency. In practice, analog signals usually
have complex waveforms, with components at many frequencies.
The highest frequency component in an analog signal determines
the bandwidth of that signal. The higher the frequency, the
greater the bandwidth, if all other factors are held constant.
• Suppose the highest frequency component, in hertz, of a given analog signal is fmax. According to the Nyquist theorem, the sampling rate must be at least 2fmax, or twice the highest analog frequency component.
• The sampling in an analog-to-digital converter is actuated by a
pulse generator (clock). If the sampling rate is less than 2fmax,
some of the highest frequency components in the analog input
signal will not be correctly represented in the digitized output.
• When such a digital signal is converted back to analog form by a
digital-to-analog converter, false frequency components appear
that were not in the original analog signal. This undesirable
condition is a form of distortion called aliasing.
• Sampling (clock) frequency
• The (clock) frequency at which the picture signal is sampled is
crucial to the accuracy of analogue to digital conversion. The goal
is to be able, at some later stage, to faithfully reconstruct the
original analogue signal from the digits. Clearly using too high a
frequency is wasteful whereas too low a frequency will result in
aliasing – so generating artifacts.
• Nyquist stated that for a conversion process to be able to re-
create the original analogue signal, the conversion (clock)
frequency must be at least twice the highest input frequency
being sampled.
• Since each of the color difference channels will contain less
information than the Y channel (an effective economy since our
eyes can resolve luminance better than chrominance) their
sampling frequency is set at half that of the Y channel.
• Nyquist Frequency:
• The Nyquist Sampling Theorem explains the relationship between
the sample rate and the frequency of the measured signal. It
states that the sample rate fs must be greater than twice the
highest frequency component of interest in the measured signal.
This frequency is often referred to as the Nyquist frequency, fN.
• Sampling frequency = 2 × Nyquist frequency.
• To understand why the sample rate should be greater than twice
the Nyquist frequency, take a look at a sine wave measured at
different rates.
• In case A, the sine wave of frequency f is sampled at that same
frequency. Those samples are marked on the original signal on the
left and, when constructed on the right, the signal incorrectly
appears as a constant DC voltage.
• In case B, the sample rate is twice the frequency of the signal. It
now appears as a triangle waveform. In this case, f is equal to the
Nyquist frequency, which is the highest frequency component
allowed to avoid aliasing for a given sampling frequency.
• In case C, the sampling rate is 4f/3. The Nyquist frequency here is 2f/3, which f exceeds, so the reconstructed signal appears as an alias at the incorrect, lower frequency f/3.
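
• The three cases can be checked numerically. The helper below (a hypothetical name, not from any library) folds a sine’s frequency into the 0..fs/2 band, which is where a sampled tone is reconstructed:

```python
def alias_frequency(f: float, fs: float) -> float:
    """Frequency at which a sine of frequency f reappears when sampled at fs."""
    folded = f % fs
    return min(folded, fs - folded)

f = 3.0                                   # illustrative signal frequency
print(alias_frequency(f, fs=4 * f / 3))   # case C: -> 1.0, i.e. f/3 (alias)
print(alias_frequency(f, fs=8 * f))       # well above 2f: -> 3.0, preserved
```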
• There are two types of component signals: Red, Green and Blue (RGB), and Y, R-Y, B-Y. It is the latter that is by far the most widely used in digital television and is included in the ITU-R BT.601 (more commonly known by the abbreviations Rec. 601 or BT.601) and Rec. 709 specifications. (International Telecommunication Union – Radiocommunication Sector)
• The R-Y and B-Y color difference signals carry the color
information while Y represents the luminance.
• Cameras, telecine, etc. generally produce RGB signals. These are
easily converted to Y, R-Y, B-Y using a resistive matrix –
established analogue technology.
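
• A minimal sketch of that matrixing in Python, using the well-known Rec. 601 luminance weights (0.299, 0.587, 0.114); the analogue hardware performs the same arithmetic with resistors:

```python
def rgb_to_color_difference(r: float, g: float, b: float):
    """Convert RGB (0..1) to Y, R-Y, B-Y using Rec. 601 luma weights."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    return y, r - y, b - y                  # Y and the two color differences

print(rgb_to_color_difference(1.0, 1.0, 1.0))  # white -> Y = 1, differences ~0
```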
• Signal preparation
• The analogue to digital converter (ADC) only operates correctly if
the signals applied to it are correctly conditioned. There are two
major elements to this. The first involves an amplifier to ensure
the correct voltage and amplitude ranges for the signal are given
to the ADC.
• For the second major element, the signals must be low-pass filtered to prevent information beyond the luminance band limit and the color difference band limit from reaching the respective ADCs. If it did, aliasing artifacts would result and be visible in the picture. For this reason, low-pass (anti-aliasing) filters sharply cut off any frequencies beyond the band limit.
• Sampling and digitization
• The low-pass filtered signals of the correct amplitudes are then
passed to the ADCs where they are sampled and digitized.
Normally two ADCs are used, one for the luminance Y, and the
other for both color difference signals, R-Y and B-Y.
• Within the active picture the ADCs take a sample of the analogue
signals (to create pixels) each time they receive a clock pulse
(generated from the sync signal). For Y the clock frequency is 13.5
MHz and for each color difference channel half that – 6.75 MHz –
making a total sampling rate of 27 MHz.
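
• Those clock frequencies also fix the raw data rate. A sketch of the arithmetic, assuming 10-bit samples (the bit depth is an assumption, not stated above):

```python
Y_RATE = 13.5e6            # Y samples per second
C_RATE = 6.75e6            # samples per second for each color difference
BITS_PER_SAMPLE = 10       # assumption: 10-bit quantization

total_samples = Y_RATE + 2 * C_RATE           # 27 million samples per second
print(total_samples)                          # -> 27000000.0
print(total_samples * BITS_PER_SAMPLE / 1e6)  # -> 270.0 Mbit/s raw
```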
• It is vital that the pattern of sampling is rigidly adhered to,
otherwise onward systems and eventual conversion back to
analogue will not know where each sample fits into the picture –
hence the need for standards!
• Co-sited sampling is used: samples of Y, R-Y and B-Y are taken together on one clock pulse, and on the next, Y only (there are half as many color samples as luminance samples).
• This sampling format is generally referred to as 4:2:2 and is
designed to minimize chrominance/luminance delay – any timing
off-set between the color and luminance information.
• The amplitude of each sample is held and precisely measured in the ADC. Its value is then expressed and output as a binary number, and the analogue to digital conversion is complete. Note that the digitized forms of R-Y and B-Y are referred to as Cr and Cb.
• In addition to increasing the sample rate, aliasing can also be prevented by using an anti-aliasing filter. This is a low-pass filter that attenuates any frequencies in the input signal greater than the Nyquist frequency; it must be introduced before the ADC to restrict the bandwidth of the input signal to meet the sampling criteria. Analog input channels can have both analog and digital filters implemented in hardware to assist with aliasing prevention.
• A low-pass filter is designed to eliminate the effects of moiré.
Moiré is something that shows up in photographs that include
repeating lines such as stripes or fences, or very fine patterns,
especially in clothing. Usually it results in strange, wavy patterns
and color aberrations.
• A low-pass filter blurs the light that reaches the sensor which
helps eliminate moiré and color shifts, although the final image is
slightly softened.
• What is Bit Depth?
• Bit depth refers to the number of digital bits used to store the
grey scale or color information of each pixel as a digital
representation of the analog world.
• The higher the bit depth, the more shades of grey or range of
colors in an image, and the bigger the file size for that image.
• Higher color bit depth ( more gradations of brightness value) and
higher sampling frequency ( more samples per second) will yield
pictures that have better fidelity to the original scene.
• Bit depth is the maximum number of digits (bits) each sample’s
components can represent. It determines how accurately the
amplitudes in the original analog waveform can be recreated.
• An 8 bit sampling resolution (bit depth) means that the
continuous values of the input signal will be quantized to 2 to the
8th power, or 256 code values.
• That means 256 shades of red, green and blue each.
• When the red, green and blue color palettes are multiplied together to define the entire color palette, the result is 16,777,216 shades of color, defined as 8-bit color in the digital realm.
• A Cineon 10-bit log file employs 1,024 code values per channel to digitally encode red, green and blue film densities.
• A 16-bit sampling quantizes to 2 to the 16th power, or 65,536 code values for each R, G, B color, which provides much more accuracy and subtlety of color shading.
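
• The code-value counts quoted above are simple powers of two, easy to verify:

```python
for bits in (8, 10, 16):
    print(bits, "bits ->", 2 ** bits, "code values per channel")
# 8 bits -> 256, 10 bits -> 1024 (Cineon log), 16 bits -> 65536

print(256 ** 3)   # full 8-bit RGB palette -> 16777216 colors
```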
• This is especially important when working with wide-gamut color spaces, where most of the more common colors are located relatively close together, or in digital intermediate work, where a large number of digital transform algorithms are applied consecutively.
• The digital recording is only an approximation of the original
continuous wave form, but it has several advantages over an
analog recording of that waveform.
• No recording noise is introduced except by the sampling process
itself, digital numbers are easily manipulated and transmitted, and
digital copies of the waveform are as good as the original
recording.
• Resolution.
• Color.
• When painting, you can make new colors by mixing different primary colors together. Mixing yellow and blue paint makes green. This process is known as the “subtractive system of color mixing”, because the more color you mix in, the closer you get to black.
• To produce color images on a monitor, different amounts of red,
green and blue light are mixed together. This process is based on
the “ additive system of color mixing”, where the primary colors
are red, green and blue, and the more colors you mix together,
the closer you get to white.
• A color digital image contains a number of channels. Each channel
is a single, monochrome image. Most color digital images have
red, green and blue channels, which are mixed together to form
the full color RGB image.
• Each channel contains a number of possible color values (in the same way that grey scale images do), which determines the possible color range for each channel.
• In a sense, three separate grey scale images are mixed together to
make a color image, so there might be three separate 8-bit-per-
pixel channels in every image.
• In a typical 400 by 400 pixel color image, there will be three
channels ( one each for red, green and blue), having 400 by 400
pixels at 8 bits per pixel. The entire image will therefore have 400
by 400 pixels with an effective overall bit depth of 24 bits per pixel
( or 8 bits per channel).
• The resultant file will be three times bigger than its monochrome
counterpart and have a total of approximately 16.8 million
possible colors available to the image.
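
• The file-size arithmetic for that 400 by 400 example, as a quick check:

```python
width, height = 400, 400
channels = 3                 # one each for red, green and blue
bits_per_channel = 8

size_bytes = width * height * channels * bits_per_channel // 8
print(size_bytes)            # -> 480000 bytes, three times the 160000-byte
                             #    monochrome counterpart
print(256 ** channels)       # -> 16777216 (~16.8 million) possible colors
```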
• In addition, other paradigms combine channels in ways different
from RGB images. For example, CMYK images combine four
channels ( one each for cyan, magenta, yellow and black) in the
same way that the printing press combines ink on paper to form
color images.
• An HLS model uses three channels, one each for hue, luminosity and saturation.
• Other color models use one channel to describe the luminance and two for color content, such as the Lab model or the color models used in high dynamic range (HDR) formats.
• For the most part, different models can be used to produce the
same digital image, but there may be differences in terms of the
color space of the image, meaning that certain models can
produce colors that others can’t reproduce.
• CMYK Image
• Alpha Channels
• It is also possible for an image to have extra, non-image channels. The most common fourth channel is “Alpha”, usually used as an extra “control” channel to define regions of an image that can be used when modifying or combining images.
• For example, a pixel with an alpha value of 0 may mean that the pixel shouldn’t be modified at all; with an alpha value of 255, the pixel would be affected fully; and a value of 127 might mean that the pixel would be affected by about 50%.
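
• A minimal sketch of that behaviour (the blend function is illustrative, not any particular package’s API):

```python
def blend(foreground: int, background: int, alpha: int) -> int:
    """Mix two 8-bit values: alpha 0 keeps the background, 255 the foreground."""
    a = alpha / 255.0
    return round(a * foreground + (1.0 - a) * background)

print(blend(200, 100, 0))    # -> 100 (pixel untouched)
print(blend(200, 100, 255))  # -> 200 (pixel fully replaced)
print(blend(200, 100, 127))  # -> 150 (roughly a 50% mix)
```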
• There may be additional channels to perform specific functions for
color correction, masking and so on.
• Each additional channel will increase the file size- doubling the
number of channels will typically double the file size.
• Alpha channels aren’t generally displayed as part of the image and
won’t affect its appearance.
• Color sampling
• To enable recording to ever-smaller storage devices, different
digital formats discard varying proportions of chroma information
from the signal.
• This affects how much you can stretch contrast before introducing
noise.
• Media with 4:4:4 chroma sampling stores 100% of the chroma information and thus has an impressive amount of latitude for color correction.
• Media encoded with 4:2:2 chroma subsampling (typically high-end HD camcorders) has a fair amount of latitude within which a colorist can adjust contrast by a decent amount before noise becomes a problem.
• The majority of consumer level and DSLR cameras that record to
H.264 based formats encode 4:2:0 chroma subsampled media.
• This discards three-quarters of the chroma data in a manner considered to be perceptually indistinguishable from the original, in an effort to shrink the file size and create media that is more manageable in low-cost workflows.
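
• Those fractions follow from the common J:a:b notation: per 4-pixel-wide, 2-row block there are a chroma samples in the first row and b in the second, against 8 luma samples. A quick sketch:

```python
def chroma_fraction(a: int, b: int) -> float:
    """Chroma samples kept per 4x2 block, relative to the 8 luma samples."""
    return (a + b) / 8.0

for name, (a, b) in {"4:4:4": (4, 4), "4:2:2": (2, 2), "4:2:0": (2, 0)}.items():
    print(name, "keeps", chroma_fraction(a, b) * 100, "% of the chroma")
# 4:4:4 keeps 100%, 4:2:2 keeps 50%, 4:2:0 keeps 25% (discards 3/4)
```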
• While in many cases 4:2:0 subsampled source media is considered
suitable for professional work, the discarded color information
makes it difficult to make significant adjustments to contrast
without introducing noise.
• This chroma sampling can also make various types of special
effects work, such as green-screen compositing, more challenging
to accomplish.
• However, for many types of programs, the advantages in cost and
ease of use far outweigh the disadvantages.
• Compression
• It goes without saying that less compression is better than more.
• Low-cost digital acquisition formats are nearly always 8-bit, while 10- and 12-bit image capture is available on more expensive camcorders and digital cinema cameras.
• The image data that was discarded by compression and chroma
subsampling while recording is lost forever.
• Log Vs Normalized media
• Creating log encoded media lets you preserve the greatest
latitude for adjustments in grading.
• Each camera has a different method of log encoding, customized to take maximum advantage of its sensor. Many are based on the Cineon log curve originally developed by Kodak for scanning the 13 stops of latitude of film to the Cineon and DPX image sequence formats, in an effort to preserve as much detail as possible within the 10 bits per channel of data these formats use.
• Log encoded media should be considered as a sort of “digital
negative”.
• While the initial appearance of log-encoded media is unpleasant, being deliberately low-contrast and desaturated, the recorded image preserves an abundance of image data that can be extracted for maximum flexibility in the grading process.
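
• A purely illustrative toy curve, assuming NumPy, shows why log footage looks flat. It is NOT Cineon, Log C, REDLog Film or S-Log (each has its own published formula); it only mimics the general shape, which lifts shadows and compresses highlights:

```python
import numpy as np

def toy_log_encode(linear, a=64.0):
    """Illustrative log-style transfer curve; not any camera's real formula."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.log1p(a * linear) / np.log1p(a)

x = np.array([0.01, 0.1, 0.5, 1.0])   # linear scene values
print(toy_log_encode(x))              # shadows lifted strongly -> the flat,
                                      # low-contrast look described above
```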
• When debayering raw media, these log standards are available as
a gamma setting of some kind, such as:
• Log C : Media recorded by ARRI Alexa cameras, which is similar to
the standard Cineon Log gamma curve.
• REDLog Film: Media recorded by RED cameras which is designed
to remap the original 12 bit R3D data to the standard Cineon Log
gamma curve.
• S-Log/S-Log2: Sony’s proprietary S-Log settings are very different from the Cineon log gamma curve, owing to their wide dynamic range.
• Digital Image Quality.
• The digital image / information can be transferred across large
distances with no loss of quality.
• As long as the data remains unaltered, it is as good as the original. The same cannot be said for video and film, which are both analog formats and therefore subject to many forms of degradation, the most significant being “generation loss”.
• In the DI pipeline, the aim is to maintain the highest possible
quality throughout, with the material prepared for supervised
sessions with the filmmakers, who then make decisions about
how to affect the images and control the subjective level of
quality.
• Summary.
• Digital media offers many advantages over other types of media,
such as the capability of making perfect duplicates quickly.
However, lots of variables must be considered.
• Generally speaking, as the quality of an image increases, so do the
associated costs.
• While a larger file might contain more pixels, and therefore more
detail, it also becomes less practical to work with, requiring more
storage space and additional processing and transmission time.
• There are no specific standards when working with digital images.
They may be encoded in a variety of different file formats, each
suitable for specific purposes.
• Digital files may have additional options available for tracking and
protecting images, or they can be compressed to reduce storage
requirements.
• Lossless compression can reduce the size of each file without compromising quality; lossy compression can produce much smaller files but may degrade image quality.
• However, digital images can suffer from the lack of sufficient
information to produce colors or details, thus exhibiting a number
of artifacts. Some artifacts may only become noticeable late in
the digital intermediate process(e.g., while performing color
grading).
• Advantages and Limitations of Digital Technology.
• Digital cameras provide real instant photography. Within a second or two of the exposure, you can see the captured image on the built-in LCD screen.
• Images are captured as digital files and stored on removable media cards. Unlike film, the cards are reusable: once the files have been transferred elsewhere, you can erase the images from the card and reuse it. This cuts out all the film and film-processing costs.
• A digital file is data, no different from any other computer file. It can be saved to any computer storage media. The file can also be copied and recopied without any loss of quality. Copies can be kept in more than one picture library, or in other locations, all giving high-quality images.
