
TELEVISION AND VIDEO ENGINEERING

UNIT - I
FUNDAMENTALS OF TELEVISION

CONTENTS
1. Introduction
1.1 Television System And Scanning Principles
1.1.1 Sound Transmission
1.1.2 Picture Transmission

1.2 Scanning Process


1.2.1 Horizontal Scanning
1.2.2 Vertical Scanning

1.3 Video Signals


1.4 Characteristics Of Human Eye
1.5 Brightness Perception And Photometric Qualities
1.5.1 Brightness Perception
1.5.2 Photometric Measurements
1.6 Aspect Ratio And Rectangular Scanning
1.6.1 Aspect Ratio
1.6.2 Rectangular Scanning

1.7 Persistence of Vision And Flicker


1.7.1 Persistence Of Vision
1.7.2 Flicker

1.8 Vertical Resolution


1.9 Kell Factor

1.10 Horizontal Resolution


1.11 Video Bandwidth
1.12 Interlaced Scanning
1.13 Camera Tubes
1.14 Camera Lenses
1.14.1 Lens Focal Length
1.14.2 Angle of View

1.15 Autofocus Systems


1.16 Camera Pick Up Devices
1.17 Image Orthicon
1.17.1 Introduction
1.17.2 Operation
1.17.3 Image Section
1.17.4 Scanning Section
1.17.5 Electron Multiplier

1.18 Vidicon
1.18.1 Introduction
1.18.2 Operation
1.18.3 Charge Image
1.18.4 Storage Action
1.18.5 Signal Current

1.18.6 Light Transfer Characteristics

1.19 Plumbicon
1.19.1 Introduction
1.19.2 Operation
1.19.3 Light Transfer Characteristics

1.20 Silicon Diode Array Vidicon


1.20.1 Introduction

1.20.2 Operation
1.20.3 Scanning And Operation

1.21 CCD Solid State Image Scanners


1.22 Comparison Of Camera Tubes
1.23 Camera Tube Deflection Unit
1.24 Video Processing Of Camera Signals
1.25 Colour Television Signals And System
1.25.1 Colour Tv Signals
1.25.2 Colour Tv System

TECHNICAL TERMS
1. Geometric form : Frame size adopted in TV systems.
2. Raster : Rectangular area of the picture tube screen on CRT scanned by the electron
beam as it is deflected horizontally and vertically.
3. Aspect Ratio : Width of frame / Height of frame
4. Scanning : The process in which the electron beam is made to move left to right and
top to bottom to convert all the pixels into electrical signals at a very fast rate.
5. Horizontal scanning : Process in which the electron beam is made to move left to
right and again right to left on the picture.

6. Progressive scanning : Scanning in which both vertical and horizontal scanning take
place simultaneously so that all lines are traced sequentially, one after another, in a
single pass from top to bottom.
7. Resolving capability : Ability of the eye to see alternate black and white bars
distinctly without any difficulty.
8. Picture resolution : Ability of image reproducing system to resolve the finer details.
9. Pixels : Picture elements; the elementary areas into which picture details may be
broken up.
10. Monochrome picture : Black & white picture.
11. Camera tube : Device used to convert optical information into corresponding
electrical signal.
12. Chroma signal : Color signal.
13. Y signal : Luminance signal
14. Color burst : Sample (8 to 10 cycles) of the color subcarrier.
15. Contrast ratio : Bmax / Bmin. Where Bmax is maximum luminance and Bmin is
minimum luminance.
16. Horizontal resolution : Ability to resolve and reproduce fine picture details and
maximum number of pixels along the horizontal scanning line.
17. Vertical Resolution : The ability to resolve and reproduce finer details of picture in
vertical direction.
18. Kell factor : Resolution factor; the ratio of the effective (perceived) resolution to the
total number of scanning lines.
19. Interlaced error : Errors that occur due to a time difference in starting the scanning of
the second field from the usual 32 µs (half-line) point.
20. Dark current : Current produced when there is no light falling on the camera tube.

UNIT-1

FUNDAMENTALS OF TELEVISION

1.INTRODUCTION
Television means to see from a distance. The desire in man to do so has been there
for ages. In the early years of the twentieth century many scientists experimented with the
idea of using selenium photosensitive cells for converting light from pictures into electrical
signals and transmitting them through wires.
The first demonstration of actual television was given by J.L. Baird in UK and C.F.
Jenkins in USA around 1927 by using the technique of mechanical scanning employing
rotating discs. However, the real breakthrough occurred with the invention of the cathode ray

tube and the success of V.K. Zworykin of the USA in perfecting the first camera tube (the
iconoscope) based on the storage principle. By 1930 electromagnetic scanning of both
camera and picture tubes and other ancillary circuits such as for beam deflection, video
amplification, etc. were developed.
Though television broadcast started in 1935, world political developments and the
second world war slowed down the progress of television. With the end of the war, television
rapidly grew into a popular medium for dissemination of news and mass entertainment.
The three different standards of black and white television have resulted in the
development of three different systems of colour television, respectively compatible with the
three monochrome systems.
The original colour system was that adopted by the USA in 1953 on the
recommendations of its National Television Systems Committee and hence called the NTSC
system. The other two colour systems, PAL and SECAM, are later modifications of the NTSC
system, with minor improvements, to conform to the other two monochrome standards.

1.1 TELEVISION SYSTEM AND SCANNING PRINCIPLES


A television system is a Canadian term for a group of television stations which share
common ownership, branding, and programming, but are not legally considered a full
television network. Systems may be informally referred to as networks by some people, but
are not true networks under current Canadian broadcasting regulations.
Systems are differentiated from networks primarily by their less extensive service area:
while a network will serve most Canadian broadcast markets in some form, a system will
typically serve only a few markets. As well, a system may or may not offer some classes of
programming, such as a national newscast, which are typically provided by a network.
Television systems should not be confused with twinsticks, although some individual
stations might be part of both types of operations simultaneously.

Fig 1.1 Basic Monochrome TV Transmitter


1.1.1 Sound Transmission
The microphone converts the sound associated with the picture being televised into
proportionate electrical signal, which is normally a voltage. This electrical output, regardless
of the complexity of its waveform, is a single valued function of time and so needs a single
channel for its transmission. The audio signal from the microphone after amplification is
frequency modulated, employing the assigned carrier frequency. In FM, the amplitude of the
carrier signal is held constant, whereas its frequency is varied in accordance with amplitude
variations of the modulating signal. As shown in Fig. 1.1 output of the sound FM transmitter
is finally combined with the AM picture transmitter output, through a combining network,
and fed to a common antenna for radiation of energy in the form of electromagnetic waves.
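The FM relationship described above, where the carrier amplitude stays constant while its frequency tracks the amplitude of the modulating signal, can be sketched numerically. The carrier frequency, deviation constant, and sample rate below are illustrative values chosen for the sketch, not broadcast parameters.

```python
import math

def fm_modulate(message, fc, kf, fs):
    """Frequency-modulate a sampled message signal.

    The carrier amplitude stays constant; the instantaneous frequency
    is fc + kf * m(t), following the message amplitude.
    message: list of samples, fc: carrier in Hz,
    kf: frequency deviation in Hz per unit amplitude, fs: sample rate in Hz.
    """
    out = []
    phase = 0.0
    for m in message:
        # integrate the instantaneous frequency to obtain the phase
        phase += 2.0 * math.pi * (fc + kf * m) / fs
        out.append(math.cos(phase))
    return out

# a 1 kHz tone frequency-modulating a 100 kHz carrier (illustrative numbers)
fs = 1_000_000
tone = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(1000)]
modulated = fm_modulate(tone, fc=100_000, kf=5000, fs=fs)
```

Note that the output amplitude never exceeds the carrier amplitude, which is the defining property of FM exploited by the limiter stages in the receiver.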
1.1.2 Picture Transmission
The picture information is optical in character and may be thought of as an
assemblage of a large number of bright and dark areas representing picture details. These
elementary areas into which the picture details may be broken up are known as picture
elements, shown in fig 1.1 which when viewed together, represent the visual information of
the scene.
Thus the problem of picture transmission is fundamentally much more complex,
because at any instant there are almost an infinite number of pieces of information, existing
simultaneously, each representing the level of brightness of the scene to be reproduced. In
other words, the information is a function of two variables: time and space.

1.2 SCANNING PROCESS

Fig: 1.2 Scanning Process


The scanning technique is similar to reading and writing information on a page,
starting at top left and finishing at bottom right. Scanning is done line by line, and two
types of scanning occur simultaneously: horizontal scanning from left to right at a fast rate,
and vertical scanning from top to bottom at a slow rate. Each scanned line thus has a trace
and a retrace period. The retrace of the beam is very fast compared with the forward trace,
and the beam is cut off during the horizontal and vertical flyback intervals so that the
retrace lines are invisible.
1.2.1 Horizontal Scanning
The linear rise of current in the horizontal deflection coils (Fig. 1.3(b)) deflects the
beam across the screen with a continuous, uniform motion for the trace from left to right. At
the peak of the rise, the saw tooth wave reverses direction and decreases rapidly to its initial
value. This fast reversal produces the retrace or fly back. The start of the horizontal trace is at
the left edge of raster. The finish is at the right edge, where the fly back produces retrace back
to the left edge.

Fig 1.3(a) Path of scanning beam in covering picture area (Raster).

Fig 1.3(b) Waveform of current in the horizontal deflection coils producing


linear (constant velocity) scanning in the horizontal direction
1.2.2 Vertical Scanning
The saw tooth current in the vertical deflection coils moves the electron beam from
top to bottom of the raster at a uniform speed while the electron beam is being deflected
horizontally. Thus the beam produces complete horizontal lines one below the other while
moving from top to bottom. The trace part of the saw tooth wave for vertical scanning deflects
the beam to the bottom of the raster. Then the rapid vertical retrace returns the beam to the
top. Note that the maximum amplitude of the vertical sweep current brings the beam to the
bottom of the raster.

Fig 1.3(c) Vertical deflection and deflection current waveform


As shown in Fig. 1.3(c) during vertical retrace the horizontal scanning continues and
several lines get scanned during this period. Because of motion in the scene being televised,
the information or brightness at the top of the target plate or picture tube screen normally
changes by the time the beam returns to the top to recommence the whole process. This
information is picked up during the next scanning cycle and the whole process is repeated 25
times to cause an illusion of continuity.
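The sawtooth deflection current described above can be sketched as a simple function of time. The trace/retrace split used below is an illustrative assumption, not a standard value; real horizontal and vertical sweeps use different trace-to-retrace ratios.

```python
def sawtooth(t, period, trace_fraction=0.82):
    """Normalized deflection (0..1) at time t for a sawtooth sweep.

    The beam moves linearly during the trace, then snaps back
    (retrace/flyback) during the remaining fraction of the period.
    trace_fraction is an assumed value for illustration only.
    """
    x = (t % period) / period
    if x < trace_fraction:
        # linear trace: left to right (or top to bottom)
        return x / trace_fraction
    # fast linear retrace back to the starting edge
    return 1.0 - (x - trace_fraction) / (1.0 - trace_fraction)

# 64 microsecond line period of the 625-line system, sampled every microsecond
period = 64e-6
samples = [sawtooth(n * 1e-6, period) for n in range(64)]
```

Plotting `samples` reproduces the waveform of Fig. 1.3(b): a slow linear rise followed by a rapid fall.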

1.3 VIDEO SIGNALS


The transmitted video signal contains: the camera signal corresponding to the picture
or scene; blanking pulses to make the horizontal and vertical retrace invisible; sync pulses
to synchronize the transmitter and receiver scanning systems; and, in colour transmission,
the colour signal information together with a sample of the colour sub-carrier frequency.

Fig: 1.4 Video signal

1.4 CHARACTERISTICS OF HUMAN EYE


In formulating the requirements of the camera tube, scanning, and reproducing system,
the characteristics of the human eye must be considered. These characteristics are:

Visual acuity/Resolution - resolving fine details in the picture

Persistence of vision
Brightness and colour sensation.

Fig: 1.5 Human eye Optics

1.5 BRIGHTNESS PERCEPTION AND PHOTOMETRIC QUALITIES


1.5.1 Brightness Perception
Brightness - the apparent luminance of a patch in an image.
Lightness - apparent reflectance of a perceived surface.
The perceived light level of a patch relative to other patches in the same image.
Visible light is in range of electromagnetic energy that can be precieved by human eye. The
wavelength range is of about 0.4 to 0.7um

1.5.2 Photometric Measurements


Photometric measurements are quantitative determinations of the values of quantities
characterizing optical radiation, or of such optical properties of materials as transparency
and reflectivity. Photometric measurements can be made with instruments that contain optical
detectors. In the simplest cases in the visible light range, the human eye is used as the
detector in evaluating photometric quantities.
Quantity                                                 | Symbol | Defining equation  | Unit name                                | Unit symbol
---------------------------------------------------------|--------|--------------------|------------------------------------------|------------
Luminous flux                                            | Φv     |                    | lumen                                    | lm
Luminous energy                                          | Qv     | Qv = ∫Φv dt        | lumen-second                             | lm·s
Luminous intensity (of a light source in some direction) | I      | I = dΦv/dΩ         | candela                                  | cd
Luminous efficacy of radiant power                       | K      | K = Φv/Φe          | lumen per watt                           | lm/W
Luminance (at a given point and in a given direction)    | L      | L = dI/(dA cos θ)  | candela per square meter (formerly, nit) | cd/m²
Illuminance (at a point of a surface)                    | E      | E = dΦv/dA         | lux                                      | lx
Luminous pulse                                           |        | ∫I dt              | candela-second                           | cd·s

Table 1. Principal photometric qualities

1.6 ASPECT RATIO AND RECTANGULAR SCANNING


1.6.1 Aspect Ratio
The aspect ratio of an image is the ratio of the width of the image to its height,
expressed as two numbers separated by a colon. That is, for an x:y aspect ratio, no matter how
big or small the image is, if the width is divided into x units of equal length and the height is
measured using this same length unit, the height will be measured to be y units. For example,
consider a group of images, all with an aspect ratio of 16:9. One image is 16 inches wide and
9 inches high. Another image is 16 centimeters wide and 9 centimeters high. A third is
8 yards wide and 4.5 yards high. Rectangular scanning is two-dimensional scanning in which a
slow scan in one direction is superimposed on a rapid scan in the perpendicular direction.
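The aspect-ratio definition above (width divided into x units, height measured as y of the same units) can be expressed as a small exact calculation:

```python
from fractions import Fraction

def height_for_width(width, aspect_w, aspect_h):
    """Exact frame height for a given width and x:y aspect ratio."""
    return width * Fraction(aspect_h, aspect_w)

# the 16:9 examples above, in any length unit
assert height_for_width(16, 16, 9) == 9                # 16 in wide -> 9 in high
assert height_for_width(8, 16, 9) == Fraction(9, 2)    # 8 yd wide -> 4.5 yd high
# the conventional 4:3 television aspect ratio
assert height_for_width(4, 4, 3) == 3
```

Using exact fractions avoids rounding artefacts for ratios such as 4:3 that have no finite decimal form.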
1.6.2 Rectangular Scanning
Rectangular or progressive scanning, as opposed to interlaced, scans the entire picture
line by line every sixtieth of a second (in a 60 Hz system). In other words, captured images
are not split into separate fields as in interlaced scanning. Computer monitors do not need
interlace to show the picture on the screen; they put the lines on one at a time in perfect
order, i.e., 1, 2, 3, 4, 5, 6, 7, etc., so there is virtually no "flickering" effect. As such,
in a surveillance application it can be critical in viewing detail within a moving image,
such as a person running away. However, a high quality monitor is required to get the best
out of this type of scan.

Fig 1.6 Rectangular Scanning


Example: Capturing moving objects
When a camera captures a moving object, the sharpness of the frozen image
will depend on the technology used. Compare these JPEG images, captured by three different
cameras using progressive scan, 4CIF interlaced scan and 2CIF respectively.

1.7 PERSISTENCE OF VISION AND FLICKER


1.7.1 Persistence Of Vision
The image formed on the retina is retained for a short period after the stimulus is
removed; this is due to the brightness sensed by the rods in the retina of the eye through a
photochemical process. The sensation of the eye resulting from a single short flash continues
for about 20 ms. An image of low brightness is retained for a longer period, whereas an image
of high brightness is retained for a shorter period. This continuation of the photochemical
process, i.e., the continuation of the brightness impression in the visual centre of the
brain, is called persistence of vision.
1.7.2 Flicker

Although the rate of 24 pictures per second in motion pictures and that of scanning 25
frames per second in television pictures is enough to cause an illusion of continuity, they are
not rapid enough to allow the brightness of one picture or frame to blend smoothly into the
next through the time when the screen is blanked between successive frames. This results in a
definite flicker of light that is very annoying to the observer when the screen is made
alternately bright and dark. This problem is solved in motion pictures by showing each
picture twice, so that 48 views of the scene are shown per second although there are still the
same 24 picture frames per second. As a result of the increased blanking rate, flicker is
eliminated.

1.8 VERTICAL RESOLUTION


The "vertical resolution" of NTSC TV refers to the total number of lines (rows)
scanned from left to right across the screen - BUT Counted from Top to Bottom, or Vertically.
This number is set by the NTSC TV 'Standard' .This Vertical Resolution number is static - it
doesn't change.

Therefore, the Vertical Resolution is the same for ALL TV's

manufactured to meet a specified Standard.


For comfortable viewing, an angle of about 10° to 15° can be taken as the optimum
visual angle. Hence the best viewing distance for watching television is about 4 to 8 times
the height of the picture. The maximum number of alternate white and dark lines which can be
resolved by the human eye along the vertical direction of a screen of height H gives Nv, the
vertical resolution:

Nv = H / (D × θo)

where
H is the height of the screen,
D is the distance from the eye to the screen, and
θo is the minimum resolving angle of the eye (about one minute of arc), in radians.

Thus for D/H = 6 the visual angle is about 10°, and substituting in the above formula gives
Nv ≈ 600 lines.
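The calculation above can be checked numerically. The one-arc-minute resolving angle used below is the commonly quoted figure for normal vision:

```python
import math

# the eye's minimum resolving angle, about one minute of arc, in radians
ARC_MINUTE = math.radians(1 / 60)

def vertical_lines(d_over_h, resolving_angle=ARC_MINUTE):
    """Maximum alternate dark/white lines the eye can resolve on a
    screen of height H viewed from a distance D = d_over_h * H."""
    # angle subtended at the eye by the screen height
    visual_angle = 2 * math.atan(0.5 / d_over_h)
    return visual_angle / resolving_angle

lines = vertical_lines(6)   # D = 6H as in the text; close to 600 lines
```

The exact result is slightly below 600 because the subtended angle for D = 6H is a little under 10°; the textbook figure rounds this up.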

1.9 KELL FACTOR


The Kell factor, named after RCA engineer Raymond D. Kell, is a parameter used to
limit the bandwidth of a sampled image signal to avoid the appearance of beat-frequency
patterns when displaying the image on discrete display devices; it is usually taken to be
0.7. The factor was first measured in 1934 by Kell and his associates as 0.64 but has
suffered several revisions given that it is based on image perception, hence subjective, and is
not independent of the type of display. It was later revised to 0.85 but can go higher than 0.9,
when fixed pixel scanning (e.g., CCD or CMOS) and fixed pixel displays (e.g., LCD or
plasma) are used, or as low as 0.7 for electron gun scanning.
From a different perspective, the Kell factor defines the effective resolution of a
discrete display device since the full resolution cannot be used without viewing experience
degradation. The actual sampled resolution will depend on the spot size and intensity
distribution. For electron gun scanning systems, the spot usually has a Gaussian intensity
distribution. For CCDs, the distribution is somewhat rectangular, and is also affected by the
sampling grid and inter-pixel spacing.
Kell factor is sometimes incorrectly stated to exist to account for the effects of
interlacing. Interlacing itself does not affect Kell factor, but because interlaced video must be
low-pass filtered (i.e., blurred) in the vertical dimension to avoid spatio-temporal aliasing
(i.e., flickering effects), the Kell factor of interlaced video is said to be about 70% that of
progressive video with the same scan line resolution.
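The effective-resolution role of the Kell factor amounts to a single multiplication; the 585 active-line figure below is the value used later in this unit for the 625-line system:

```python
def effective_vertical_resolution(active_lines, kell=0.7):
    """Perceived vertical resolution after applying the Kell factor."""
    return active_lines * kell

# 625-line system with about 585 active lines
progressive = effective_vertical_resolution(585)            # about 410 lines
# interlaced video is said to achieve ~70% of the progressive Kell factor
interlaced = effective_vertical_resolution(585, 0.7 * 0.7)  # about 287 lines
```

This is why interlaced and progressive systems with the same scan-line count do not deliver the same perceived detail.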
1.10 HORIZONTAL RESOLUTION
The horizontal resolution of television, and other video displays, is dependent upon the
quality of the video signal's source. As an example - the horizontal resolution of VHS tape is
(about) 240 lines; broadcast TV (about) 330 lines, laserdisc (about) 420 lines; and DVD
(about) 480 lines.
To avoid getting entangled too deeply within the inherent complexities of TV
technology, it's sufficient to note that there are a number of variables contributing to the
'stated' horizontal resolution value. Even the measurement methods are not always consistent.
For instance - how the vertical columns (dots/dashes) are counted ... as single black / white
(dark and light) lines, or as "line pairs - (1) black and (1) white line."
A TV's resolution can be reported as the result of counting the total number of picture
elements (pixels) per scan line, across the entire screen-width, multiplied by the total number
of scan lines. However, TV screen-sizes vary, making an equal comparison of different

displays more complex. TV's also differ technically, functionally and in component quality;
this results in additional complications.
An alternative method is to count the number of pixels that fit within a prescribed
circle, having a diameter equal to the screen height. Known as LPH - Lines per Picture
Height - this is the 'correct' method in determining TV resolution.
As this shows, along with other, similar variables, the accuracy of a 'stated' horizontal
resolution for a particular display, may depend on who is doing the 'stating' . However, for the
purpose of this overview of HDTV-Resolution, the primary point regarding horizontal
resolution, is that it is variable. Unlike vertical resolution which is 'fixed,' horizontal
resolution can differ from one TV display to another. It would be realistic to aim at equal
vertical and horizontal resolution, and as such the number of alternate black and white bars
that should be considered is equal to
N = Nv × aspect ratio = 585 × 4/3 = 780
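The relation between the vertical figure and the required number of horizontal picture elements is a direct scaling by the aspect ratio:

```python
from fractions import Fraction

def horizontal_elements(vertical_lines, aspect_w=4, aspect_h=3):
    """Picture elements per line needed for equal horizontal and
    vertical resolution, given the aspect ratio."""
    return vertical_lines * Fraction(aspect_w, aspect_h)

# the 625-line system figure from the text: 585 active lines -> 780 elements
assert horizontal_elements(585) == 780
```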

1.11 VIDEO BANDWIDTH


BWS = 1/2 [K × AR × (VLT)² × FR] × (KH / KV)
Where:
BWS = Total signal bandwidth
K = Kell factor
AR = Aspect ratio (the width of the display divided by the height of the display)
VLT = Total number of vertical scan lines
FR = Frame rate or refresh rate
KH = Ratio of total horizontal pixels to active pixels
KV = Ratio of total vertical lines to active lines
The circuits that process video signals need to have more bandwidth than the actual
bandwidth of the processed signal to minimize the degradation of the signal and the resulting
loss in picture quality. The amount the circuit bandwidth needs to exceed the highest
frequency in the signal is a function of the quality desired. To calculate this, we assume a
single-pole response and use the following equation:

H(f)(dB) = 20 log[1 / √(1 + (BWS / BW−3dB)²)]
Rearranging and solving for the -0.1dB and the -0.5dB attenuation points, we get the
following:
BW−3dB min = BWS(−0.1 dB) × 6.55
BW−3dB min = BWS(−0.5 dB) × 2.86
Where:
BW−3dB = the minimum −3 dB bandwidth required for the circuit
A minimum bandwidth of about six and a half times the highest frequency in the
signal is therefore required; if you can tolerate 0.5 dB attenuation, it needs to be only
about three times. To account
for normal variations in the bandwidth of integrated circuits, it is recommended that the
results from equations 3 and 4 be multiplied by a factor of 1.5. This will ensure that the
attenuation performance is met over worst-case conditions. In equation mode, it is expressed
as follows:
BW−3dB nominal = BW−3dB min × 1.5
In addition to bandwidth, the circuits must slew fast enough to faithfully reproduce
the video signal. The equation for the minimum slew rate is as follows:
SRMIN = 2π × BWS × VPEAK
Substituting VPEAK = 1 V and simplifying,
SRMIN ≈ 6.28 × BWS
A safety factor of two is recommended because some distortion can occur as the frequency of
the signal approaches the slew-rate limit; this can introduce frequency distortion, which
will degrade the picture quality. In equation form:
SRnominal = SRMIN × 2
For NTSC, we calculate a maximum signal bandwidth (BWS) of about 4.2 MHz; this is the
highest frequency in the signal. Now assume that we need less than 0.1 dB attenuation: the
minimum −3 dB bandwidth works out to 4.2 × 6.55 ≈ 27.5 MHz, and multiplying by 1.5 to
account for variations gives 41.3 MHz.
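The bandwidth and flatness equations above can be worked through numerically. The Kell factor, frame rate, and especially the blanking ratios KH and KV below are illustrative NTSC-like values, not standard-mandated constants:

```python
import math

def signal_bandwidth(kell, aspect, total_lines, frame_rate, kh, kv):
    """BWS = 1/2 * K * AR * VLT^2 * FR * (KH / KV)."""
    return 0.5 * kell * aspect * total_lines ** 2 * frame_rate * (kh / kv)

def min_3db_bandwidth(bws, atten_db):
    """Minimum -3 dB bandwidth of a single-pole circuit so that the
    attenuation at frequency bws does not exceed atten_db (a positive dB value)."""
    # solve 20*log10(1/sqrt(1 + (bws/f3)**2)) = -atten_db for f3
    return bws / math.sqrt(10 ** (atten_db / 10) - 1)

# NTSC-like numbers; kh and kv (blanking ratios) are assumed for illustration
bws = signal_bandwidth(0.7, 4 / 3, 525, 29.97, kh=1.21, kv=1.08)  # a few MHz
f3 = min_3db_bandwidth(bws, 0.1)      # about 6.55 x bws
nominal = 1.5 * f3                    # 1.5x margin for IC tolerance
slew_min = 2 * math.pi * bws * 1.0    # V/s, assuming VPEAK = 1 V
```

The 6.55 and 2.86 multipliers quoted in the text fall straight out of `min_3db_bandwidth` for 0.1 dB and 0.5 dB respectively.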

1.12 INTERLACED SCANNING


Interlaced scan-based images use techniques developed for Cathode Ray Tube
(CRT)-based TV monitor displays, made up of 576 visible horizontal lines across a standard
TV screen. Interlacing divides these into odd and even lines and then alternately refreshes
each set, giving 25 complete frames per second from a 50 Hz field rate. The slight delay
between odd and even line refreshes creates some distortion or 'jaggedness', because only
half the lines keep up with the moving image while the other half waits to be refreshed.

Fig: 1.7 Interlaced scanning (vertical retrace lines removed)
The effects of interlacing can be somewhat compensated for by using de-interlacing.
De-interlacing is the process of converting interlaced video into a non-interlaced form, by
eliminating some jaggedness from the video for better viewing. This process is also called
line doubling. Interlaced scanning has served the analog camera, television and VHS video
world very well for many years, and is still the most suitable for certain applications.
However, now that display technology is changing with the advent of Liquid Crystal Display
Thin Film Transistor (TFT)-based monitors, DVDs and digital cameras, an alternative
method of bringing the image to the screen, known as progressive scanning, has been created.
In the 625-line monochrome system, for successful interlaced scanning, the 625 lines
of each frame or picture are divided into two sets of 312.5 lines, and each set is scanned
alternately to cover the entire picture area. To achieve this the horizontal sweep oscillator
is made to work at a frequency of 15625 Hz (312.5 × 50 = 15625) to scan the same number of
lines per frame (15625/25 = 625 lines), but the vertical sweep circuit is run at a frequency
of 50 Hz instead of 25 Hz. Note that since the beam is now deflected from top to bottom in
half the time and the horizontal oscillator is still operating at 15625 Hz, only half the
total lines, i.e., 312.5 (625/2 = 312.5), get scanned during each vertical sweep. Since the
first field ends in a half line and the second field commences at the middle of a line at the
top of the target plate or screen, the beam scans the remaining 312.5 alternate lines during
its downward journey. In all, the beam scans 625 lines (312.5 × 2 = 625) per frame at the
same rate of 15625 lines (312.5 × 50 = 15625) per second. Therefore, with interlaced
scanning the flicker effect is eliminated without increasing the speed of scanning, which in
turn needs no increase in channel bandwidth.
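The arithmetic of the 2:1 interlace above can be captured in a few lines:

```python
def interlace_parameters(total_lines, field_rate):
    """Derive the sweep frequencies for 2:1 interlaced scanning."""
    lines_per_field = total_lines / 2          # each field scans half the lines
    line_freq = lines_per_field * field_rate   # horizontal oscillator frequency
    frame_rate = field_rate / 2                # two fields make one frame
    return lines_per_field, line_freq, frame_rate

# the 625-line, 50 Hz system worked through in the text
lpf, fh, fr = interlace_parameters(625, 50)
assert lpf == 312.5 and fh == 15625 and fr == 25
```

The half-line per field (312.5) is what forces the second field to start mid-line and thereby fall exactly between the lines of the first.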

1.13 CAMERA TUBES


A TV camera tube may be called the eye of a TV system. For such an analogy to be
correct the tube must possess characteristics that are similar to those of its human
counterpart. The more important of these are (i) sensitivity to visible light, (ii) wide
dynamic range with respect to light intensity, and (iii) ability to resolve details while
viewing a multi-element scene. During the development of television, the limiting factor on
the ultimate performance
had always been the optical-electrical conversion device, i.e., the pick-up tube.
Most types developed have suffered to a greater or lesser extent from (i) poor
sensitivity, (ii) poor resolution,(iii) high noise level, (iv) undesirable spectral response, (v)
instability, (vi) poor contrast range and (vii) difficulties of processing.
However, development work during the past fifty years or so, has enabled scientists
and engineers to develop image pick-up tubes, which not only meet the desired requirements
but infact excel the human eye in certain respects. Such sensitive tubes have now been
developed which deliver output even where our eyes see complete darkness.

1.14 CAMERA LENSES


1.14.1 Lens Focal Length
We define focal length as the distance from the optical center of the lens to the focal
plane (target or "chip") of the video camera when the lens is focused at infinity.
We consider any object in the far distance to be at infinity. On a camera lens the symbol ∞
(similar to an "8" on its side) indicates infinity.

Since the lens-to-target distance for most lenses increases when we focus the lens on
anything closer than infinity (see second illustration), we specify infinity as the standard for
focal length measurement.
Focal length is generally measured in millimeters. In the case of lenses with fixed
focal lengths, we can talk about a 10mm lens, a 20mm lens, a 100mm lens, etc. As we will
see, this designation tells a lot about how the lens will reproduce subject matter.

Fig 1.8 Zoom and Prime Lenses


Zoom lenses came into common use in the early 1960s. Before then, TV cameras used
lenses of different focal lengths mounted on a turret on the front of the camera, as shown on

the right. The cameraperson rotated each lens into position and focused it when the camera
was not on the air.
Today, most video cameras use zoom lenses. Unlike the four lenses shown here, each
of which operate at only one focal length, the effective focal length of a zoom lens can be
continuously varied. This typically means that the lens can go from a wide-angle to a
telephoto perspective.
To make this possible, zoom lenses use numerous glass elements, each of which is
precisely ground, polished, and positioned. The space between these elements changes as the
lens is zoomed in and out.
With prime lenses, the focal length of the lens cannot be varied. It might seem that we
would be taking a step backwards to use a prime lens or a lens that operates at only one focal
length. Not necessarily. Some professional videographers and directors of photography,
especially those who have their roots in film, feel prime lenses are more predictable in
their results. Prime lenses also come in more specialized forms, for example super
wide-angle, super telephoto, and super fast (i.e., transmitting more light).
However, for normal work, zoom lenses are much easier and faster to use. The latest
HDTV zoom lenses are extremely sharp, almost as sharp as the best prime lenses.
1.14.2 Angle of View

Fig 1.9 Angle of view of different lenses

Angle of view is directly associated with lens focal length. The longer the focal length
(in millimeters), the narrower the angle of view (in degrees). You can see this relationship by
studying the drawing on the left, which shows angles of view for different prime lenses.
A telephoto lens (or a zoom lens operating at maximum focal length) has a narrow
angle of view. Although there is no exact definition for a "telephoto" designation, we would
consider the angles at the top of the drawing from about 3 to 10 degrees in the telephoto
range.
The bottom of the drawing (from about 45 to 90 degrees) represents the wide-angle
range. The normal angle of view range lies between telephoto and wide angle. With the
camera in the same position, a short focal lens creates a wide view and a long focal length
creates an enlarged image in the camera.
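The focal-length-to-angle relationship described above follows from simple geometry. The 8.8 mm sensor width below is an illustrative value (roughly a 2/3-inch video format):

```python
import math

def angle_of_view(focal_length_mm, sensor_dim_mm):
    """Angle of view in degrees for a lens focused near infinity.

    Half the sensor dimension and the focal length form a right
    triangle at the optical centre; doubling the half-angle gives
    the full angle of view.
    """
    return 2 * math.degrees(math.atan(sensor_dim_mm / (2 * focal_length_mm)))

# assumed 8.8 mm sensor width
wide = angle_of_view(10, 8.8)    # short focal length -> wide angle (~48 degrees)
tele = angle_of_view(100, 8.8)   # long focal length -> narrow angle (~5 degrees)
```

Doubling the focal length roughly halves the angle of view, which is why a zoom lens appears to "magnify" as it is zoomed in.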

1.15 AUTOFOCUS SYSTEMS

Fig:1.10 Auto Focus system

There are two main ways for cameras to focus automatically: contrast detection and
phase detection. The former uses data from the CCD or CMOS sensor and looks at how
sharp the resulting photograph would be. It's simple, but slow, as the camera has to go
through all of the possibilities until it finds one where the subject is clearly contrasted from
the background. The latter uses a tool that works like a rangefinder, which accurately
calculates the correction needed to get the subject in focus. It's fast, but difficult to operate as
the light coming into the lens needs to reach both the phase detector and the sensor (or the
film) at the same time.

This has meant that phase detection has traditionally been

reserved for SLRs, which already have a mirror that sends the image to the viewfinder. At
the same time, a second mirror also sends it down to the phase detector. While focusing is
taking place, the sensor is covered by these mirrors, which rules out video. SLRs that do
shoot video fold their mirrors out of the way and rely on the contrast detection found on
ordinary compacts.
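The contrast-detection method described above can be sketched as a sweep over candidate focus positions, keeping whichever yields the sharpest image. The `capture` callback and the blur model below are hypothetical stand-ins for the camera hardware:

```python
def contrast_metric(pixels):
    """Sharpness score: sum of squared differences between neighbouring
    pixels. A well-focused image has stronger local contrast."""
    return sum((b - a) ** 2 for a, b in zip(pixels, pixels[1:]))

def contrast_detect_focus(capture, positions):
    """Sweep candidate lens positions and keep the sharpest one.
    capture(pos) is a hypothetical camera call returning pixel data."""
    return max(positions, key=lambda pos: contrast_metric(capture(pos)))

# toy camera: the image gets blurrier the farther the lens is from position 5
def fake_capture(pos):
    sharp = [0, 255] * 8                     # high-contrast test pattern
    blur = abs(pos - 5)                      # hypothetical blur model
    return [p / (1 + blur) for p in sharp]

best = contrast_detect_focus(fake_capture, range(10))
assert best == 5
```

The exhaustive sweep is what makes contrast detection slow in practice; real cameras use hill-climbing over this same metric rather than trying every position.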

1.16 CAMERA PICK UP DEVICES


The scene or picture is focused, with the help of a lens system, onto a photosensitive
target inside the pickup tube. The electrical state of each elementary area varies with the
intensity of light falling on it. The electrical response of each element is read off with
the help of an electron beam, producing electrical pulses. The target plate is held at an
electrical potential with respect to the cathode of the pickup tube, and the beam current
varies in accordance with the electrical state of each picture element.
The beam scans the image horizontally by means of the magnetic field set up by the
horizontal deflection coils, and vertically by means of the magnetic field set up by the
vertical deflection coils. The scanning must be done at high speed to follow changing or
moving pictures.

1.17 IMAGE ORTHICON


1.17.1 Introduction

The image orthicon was common in American broadcasting from 1946 until 1968. A
combination of the image dissector and the orthicon technologies, it replaced the iconoscope
and the orthicon, which required a great deal of light to work adequately.
While the iconoscope and the intermediate orthicon used capacitance between a
multitude of small but discrete light sensitive collectors and an isolated signal plate for
reading video information, the image orthicon employed direct charge readings from a
continuous electronically charged collector. The resultant signal was immune to most
extraneous signal "crosstalk" from other parts of the target, and could yield extremely
detailed images. For instance, image orthicon cameras were used for capturing Apollo/Saturn
rockets nearing orbit after the networks had phased them out, as only they could provide
sufficient detail.
An image orthicon camera can take television pictures by candlelight because of its highly efficient light-sensitive area and the electron multiplier at the base of the tube, which acts as a high-efficiency amplifier. It also has a logarithmic light-sensitivity curve similar to that of the human eye. However, it tends to flare in bright light, causing a dark halo to be seen around the object; this anomaly was referred to as "blooming" in the broadcast industry while image orthicon tubes were in use. Image orthicons were used extensively in early colour television cameras, where their increased sensitivity was essential to overcome the cameras' very inefficient optical systems.

Fig:1.11 Image orthicon


1.17.2 Operation

An image orthicon consists of three parts: a photocathode with an image store ("target"), a scanner that reads this image (an electron gun), and a multistage electron multiplier.
In the image store, light falls upon the photocathode, a photosensitive plate at a highly negative potential (approx. -600 V), and is converted into an electron image (a principle borrowed from the image dissector). When the image electrons reach the target, they cause a "splash" of electrons by the effect of secondary emission. On average, each image electron ejects several "splash" electrons, and these excess electrons are soaked up by the positive mesh, effectively removing electrons from the target and leaving on it a positive charge pattern corresponding to the incident light on the photocathode. The result is an image painted in positive charge, with the brightest portions having the largest positive charge.
A sharply focused beam of electrons (a cathode ray) is generated by the electron gun at ground potential and accelerated by the anode around the gun at a high positive voltage (approx. +1500 V). Once it exits the electron gun, the beam's inertia carries it away from the dynode towards the back side of the target.
There the electrons lose speed and are deflected by the horizontal and vertical deflection coils, effectively scanning the target. Because of the axial magnetic field of the focusing coil, this deflection does not follow a straight line; when the electrons reach the target they do so perpendicularly, avoiding any sideways component.

1.17.3 Image Section


The inside of the glass face plate at the front is coated with a silver antimony coating
sensitized with cesium, to serve as photocathode. Light from the scene to be televised is
focused on the photocathode surface by a lens system and the optical image thus formed
results in the release of electrons from each point on the photocathode in proportion to the
incident light intensity. Photocathode surface is semitransparent and the light rays penetrate it

to reach its inner surface, from where electron emission takes place. Since the number of electrons emitted at any point on the photocathode corresponds to the brightness of the optical image at that point, an electron image of the scene gets formed on the target side of the photocoating and extends towards it. Though the conversion efficiency of the photocathode is quite high, it cannot store charge, being a conductor. For this reason, the electron image produced at the photocathode is made to move towards the target plate located a short distance from it. The target plate is made of a very thin sheet of glass and can store the charge received by it. It is maintained at about 400 volts more positive with respect to the photocathode, and the resultant electric field gives the emitted electrons the desired acceleration and motion towards it. The wire-mesh screen has about 300 meshes per cm² with an open area of 50 to 75 per cent, so that the screen wires do not interfere with the electron image. The target plate is very thin, with a thickness close to 0.004 mm.
1.17.4 Scanning Section
The electron gun structure produces a beam of electrons that is accelerated towards the target. As indicated in the figure, positive accelerating potentials of 80 to 330 volts are applied to grid 2, grid 3 and grid 4, the last of which is connected internally to the metalized conductive coating on the inside wall of the tube. The electron beam is focused at the target by the magnetic field of the external focus coil and by the voltage supplied to grid 4. The alignment coil provides a magnetic field that can be varied to adjust the scanning beam's position, if necessary, for correct location. Deflection of the electron beam to scan the entire target plate is accomplished by the magnetic fields of the vertical and horizontal deflecting coils mounted on a yoke external to the tube. These coils are fed from two oscillators, one working at 15625 Hz for horizontal deflection and the other operating at 50 Hz for vertical deflection. The target plate is close to zero potential, and therefore electrons in the scanning beam can be made to stop their forward motion at its surface and then return towards the gun structure.
1.17.5 Electron Multiplier
The returning stream of electrons arrives at the gun close to the aperture from which the electron beam emerged. The aperture is part of a metal disc covering the gun electrode. When the returning electrons strike this disc, which is at a positive potential of about 300 volts with respect to the target, they produce secondary emission. The disc thus serves as the first stage of the electron multiplier. Successive stages of the electron multiplier are arranged symmetrically around and behind the first stage.

Fig 1.12 Electron-multiplier section of the Image Orthicon.

Fig 1.13 Light transfer characteristics of two different Image Orthicons.


Five stages of multiplication are used, details of which are shown in Fig. 1.12. Each multiplier stage provides a gain of approximately 4, and thus a total gain of (4)^5 = 1024, i.e., approximately 1000, is obtained at the electron multiplier. This is known as signal multiplication.
The multiplication so obtained maintains a high signal-to-noise ratio. The secondary electrons are finally collected by the anode, which is connected to the highest supply voltage of +1500 volts in series with a load resistance RL.
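The overall gain quoted above is simply the per-stage gain raised to the number of stages; a one-line check:

```python
stage_gain = 4                     # approximate gain per dynode stage
stages = 5
total_gain = stage_gain ** stages  # overall electron-multiplier gain

print(total_gain)  # 1024, i.e. roughly the 1000x figure quoted in the text
```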

1.18 VIDICON
1.18.1 Introduction
A vidicon tube is a video camera tube design in which the target material is a photoconductor. While the initial photoconductor used was selenium, other targets, including silicon diode arrays, have been used.

Fig:1.14 Vidicon
1.18.2 Operation
The vidicon is a storage-type camera tube in which a charge-density pattern is
formed by the imaged scene radiation on a photoconductive surface which is then scanned by a
beam of low-velocity electrons. The fluctuating voltage coupled out to a video amplifier can be
used to reproduce the scene being imaged. The electrical charge produced by an image will
remain in the face plate until it is scanned or until the charge dissipates. Pyroelectric
photocathodes can be used to produce a vidicon sensitive over a broad portion of the infrared

spectrum.

Fig 1.15 Circuit for output signal from a Vidicon camera tube
1.18.3 Charge Image
The photolayer has a thickness of about 0.0001 cm and behaves like an insulator with a resistance of approximately 20 MΩ when in dark. With light focused on it, the photon energy enables more electrons to go to the conduction band, and this reduces its resistivity. When bright light falls on any area of the photoconductive coating, the resistance across the thickness of that portion gets reduced to about 2 MΩ. Thus, with an image on the target, each point on the gun side of the photolayer assumes a certain potential with respect to the DC supply, depending on its resistance to the signal plate. For example, with a B+ source of 40 V (see Fig. 1.15), an area with high illumination may attain a potential of about +39 V on the beam side. Similarly, dark areas, on account of the high resistance of the photolayer, may rise to only about +35 volts. Thus, a pattern of positive potentials appears on the gun side of the photolayer, producing a charge image that corresponds to the incident optical image.
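The charge image described above is often explained with a leaky-capacitor model: each photolayer element behaves as a small capacitance charging towards B+ through the light-dependent photolayer resistance during one frame interval. The sketch below uses the supply voltage and resistances from the text, but the element capacitance is an assumed, purely illustrative value, not the tube's actual parameter:

```python
import math

B_PLUS = 40.0        # supply voltage in volts (from the text)
FRAME_TIME = 0.04    # 40 ms between successive scans (25 frames/s)
C_ELEMENT = 2e-9     # assumed illustrative element capacitance, farads

def element_potential(r_photolayer):
    """Potential one element reaches after charging for one frame time
    through the photolayer resistance (simple RC model, starting at 0 V)."""
    return B_PLUS * (1 - math.exp(-FRAME_TIME / (r_photolayer * C_ELEMENT)))

v_bright = element_potential(2e6)    # ~2 Mohm under bright light
v_dark = element_potential(20e6)     # ~20 Mohm in the dark

# A brightly lit element charges much closer to B+ than a dark one,
# giving the pattern of positive potentials that forms the charge image.
print(round(v_bright, 1), round(v_dark, 1))
```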
1.18.4 Storage Action
Though light from the scene falls continuously on the target, each element of the
photocoating is scanned at intervals equal to the frame time. This results in storage action and
the net change in resistance, at any point or element on the photoconductive layer, depends on
the time, which elapses between two successive scanning and the intensity of incident light.
Since storage time for all points on the target plate is same, the net change in resistance of all
elementary areas is proportional to light intensity variations in the scene being televised.
1.18.5 Signal Current
As the beam scans the target plate, it encounters different positive potentials on the side of the photolayer that faces the gun. A sufficient number of electrons from the beam is then deposited on the photolayer surface to reduce the potential of each element towards the zero cathode potential. The remaining electrons, not deposited on the target, return towards the gun and are not utilized in the vidicon.
1.18.6 Light Transfer Characteristics
Vidicon output characteristics are shown in Fig. 1.16. Each curve is for a specific value of dark current, which is the output with no light. The dark current is set by adjusting the target voltage. Sensitivity and dark current both increase as the target voltage is increased. Typical output for the vidicon is 0.4 µA for bright light, with a dark current of 0.02 µA.

Fig1.16 Light Transfer Characteristics of Vidicon


The photoconductive layer has a time lag, which can cause smear with a trail
following fast moving objects. The photoconductive lag increases at high target voltages,
where the vidicon has its highest sensitivity.

1.19 PLUMBICON
1.19.1 Introduction
Plumbicon is a registered trademark of Philips for its Lead Oxide (PbO) target
vidicons. Used frequently in broadcast camera applications, these tubes have low output, but
a high signal-to-noise ratio. They had excellent resolution compared to Image Orthicons, but
lacked the artificially sharp edges of IO tubes, which caused some of the viewing audience to
perceive them as softer. CBS Labs invented the first outboard edge enhancement circuits to
sharpen the edges of Plumbicon generated images.

Fig:1.17 Plumbicon

1.19.2 Operation
Compared to Saticons, Plumbicons had much higher resistance to burn-in, and to comet and trailing artifacts from bright lights in the shot. Saticons, though, usually had slightly higher resolution. While broadcast cameras migrated to solid-state charge-coupled devices (CCDs), plumbicon tubes remain a staple imaging device in the medical field. Narragansett Imaging is the only company now making Plumbicons, and it does so from the factories Philips built for that purpose in Rhode Island, USA. While still a part of the Philips empire, the company purchased EEV's (English Electric Valve) lead oxide camera tube business, and gained a monopoly in lead oxide tube production.

Fig 1.18 Output signal Current

1.19.3 Light Transfer Characteristics

Fig1.19 Light Transfer Characteristics

The current output versus target illumination response of a plumbicon is shown in Fig. 1.19. It is a straight line with a higher slope compared to the response curve of a vidicon. The higher current output, i.e., higher sensitivity, is due to the much reduced recombination of photogenerated electrons and holes in the intrinsic layer, which contains very few discontinuities. For target voltages higher than about 20 volts, all the generated carriers are swept quickly across the target without much recombination, and thus the tube operates in a photosaturated mode. The spectral response of the plumbicon is close to that of the human eye except in the red colour region.

1.20 SILICON DIODE ARRAY VIDICON


1.20.1 Introduction
This is another variation of the vidicon, in which the target is prepared from a thin n-type silicon wafer instead of layers deposited on the glass faceplate. The final result is an array of silicon photodiodes for the target plate. Figure 1.20 shows constructional details of such a target. As shown there, one side of the substrate (n-type silicon) is oxidized to form a film of silicon dioxide (SiO2), which is an insulator.
1.20.2 Operation

By photomasking and etching processes, an array of fine openings is made in the oxide layer. These openings are used as a diffusion mask for producing a corresponding number of individual photodiodes.

Fig1.20 Silicon Diode Array Vidicon


Boron, as a dopant, is vaporized through the array of holes, forming islands of p-type silicon on one side of the n-type silicon substrate.
Finally, a very thin layer of gold is deposited on each p-type opening to form contacts for signal output. The other side of the substrate is given an antireflection coating. The resulting p-n photodiodes are about 8 µm in diameter. The silicon target plate thus formed is typically 0.003 cm thick and 1.5 cm square, having an array of 540 × 540 photodiodes. This target plate is mounted in a vidicon type of camera tube.
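From the figures quoted above (a 1.5 cm square plate carrying a 540 × 540 array), the photodiode count and centre-to-centre pitch follow directly:

```python
diodes_per_side = 540
plate_side_cm = 1.5

total_diodes = diodes_per_side ** 2                 # photodiodes on the target
pitch_um = plate_side_cm * 1e4 / diodes_per_side    # centre-to-centre spacing in micrometres

print(total_diodes)        # 291600 individual photodiodes
print(round(pitch_um, 1))  # ~27.8 um pitch, consistent with ~8 um diodes plus spacing
```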
1.20.3 Scanning And Operation
The photodiodes are reverse biased by applying +10 V or so to the n+ layer on the substrate. This side is illuminated by the light focused onto it from the image. The incident light generates electron-hole pairs in the substrate. Under the influence of the applied electric field, holes are swept over to the p side of the depletion region, thus reducing the reverse bias on the diodes. This process continues, producing storage action, until the scanning beam of the electron gun scans the photodiode side of the substrate. The scanning beam deposits electrons on the p side, thus returning the diodes to their original reverse bias. The consequent sudden increase in current across each diode caused by the scanning beam represents the video signal.
The current flows through a load resistance in the battery circuit and develops a video signal proportional to the intensity of light falling on the array of photodiodes. A typical value of peak signal current is 7 µA for bright white light. The vidicon employing such a multidiode

silicon target is less susceptible to damage or burns due to excessive high lights. It also has
low lag time and high sensitivity to visible light which can be extended to the infrared region.
A particular make of such a vidicon has the trade name of Epicon. Such camera tubes have
wide applications in industrial, educational and cctv (closed circuit television) services.

1.21 CCD SOLID STATE IMAGE SCANNERS


The operation of solid state image scanners is based on MOS circuitry. The CCD may be thought of as a shift register formed by a string of very closely spaced MOS capacitors. It can store and transfer analog charge signals (either electrons or holes) introduced either electrically or optically.

Fig:1.21 A three phase n-channel MOS charge coupled device. (a) construction (b)
transfer of electrons between potential wells (c) different phases of clocking voltage
waveform.
The application of small positive potentials to the gate electrodes results in the development of depletion regions just below them.
These are called potential wells. The depth of each well (depletion region) varies with
the magnitude of the applied potential. The gate electrodes operate in groups of three, with
every third electrode connected to a common conductor.
The spots under them serve as light sensitive elements. When any image is focused
onto the silicon chip, electrons are generated within it, but very close to the surface. The
number of electrons depends on the intensity of incident light. Once produced they collect in
the nearby potential wells. As a result the pattern of collected charges represents the optical
image.
Charge Transfer
The charge of one element is transferred along the surface of the silicon chip by applying a more positive voltage to the adjacent electrode or gate, while reducing the voltage on its own electrode.
The minority carriers (electrons in this case), while accumulating in the so-called wells, reduce their depths much the way a fluid fills up a container. Clocking the three gate phases in sequence shifts the charge packets accumulated under the potential wells from one trio of electrodes to the next, moving the image charge step by step towards the output.
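The well-to-well transfer described above can be pictured with a toy shift-register model in Python; the charge values and the one-well-per-clock simplification are illustrative only, not the actual three-phase electrode timing:

```python
def shift_ccd(wells, steps):
    """Toy model of CCD charge transfer: each clock step moves every
    charge packet one well towards the output (index 0) and reads out
    the packet that reaches the end of the register."""
    output = []
    for _ in range(steps):
        output.append(wells[0])      # packet arriving at the output node
        wells = wells[1:] + [0]      # all packets shift one well; an empty well enters
    return output, wells

# Charge pattern collected under the wells (arbitrary units of light)
collected = [5, 0, 9, 2]
read_out, remaining = shift_ccd(collected, 4)
print(read_out)   # charges emerge in order, forming the serial video signal
```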

1.22 COMPARISON OF CAMERA TUBES

1.23 CAMERA TUBE DEFLECTION UNIT

Fig:1.22 Camera tube deflection unit


The camera tube is mounted inside a deflection coil unit which consists of a focusing coil, horizontal and vertical deflection coils, and alignment coils and magnets. The focusing coil surrounds the entire tube, extending from the electron gun to the face plate, and produces an axial field because of the DC current passing through it. The horizontal and vertical deflection coils are pairs of coils, each in the shape of a yoke mounted on the pick-up tube. The horizontal deflection coils produce a vertical field and the vertical deflection coils produce a horizontal field. The strength of the deflecting magnetic field is about one-tenth that of the focusing coil. The required currents have to be supplied by the deflection drive circuits of the camera chain. The alignment coils are a pair of coils, positioned just outside the limiting aperture, that produce a magnetic field at right angles to the tube axis.

1.24 VIDEO PROCESSING OF CAMERA SIGNALS


Digital video comprises a series of orthogonal bitmap digital images displayed in
rapid succession at a constant rate. In the context of video these images are called frames. We
measure the rate at which frames are displayed in frames per second (FPS). Since every
frame is an orthogonal bitmap digital image it comprises a raster of pixels. If it has a width of
W pixels and a height of H pixels we say that the frame size is WxH.
Pixels have only one property, their color. The color of a pixel is represented by a
fixed number of bits. The more bits the more subtle variations of colors can be reproduced.
This is called the color depth (CD) of the video.

Fig1.23 Video Processing of Camera signal


An example video can have a duration (T) of 1 hour (3600 sec), a frame size of 640x480 (WxH) at a colour depth of 24 bits and a frame rate of 25 fps. This example video has the following properties:
1. pixels per frame = 640 x 480 = 307,200
2. bits per frame = 307,200 x 24 = 7,372,800 ≈ 7.37 Mbits
3. bit rate (BR) = 7,372,800 x 25 = 184,320,000 bits/sec ≈ 184.32 Mbits/sec
4. video size = 184.32 Mbits/sec x 3600 sec = 663,552 Mbits = 82,944 Mbytes ≈ 82.9 Gbytes
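The same calculation can be scripted, which makes it easy to see how quickly uncompressed video sizes grow:

```python
# Uncompressed video size for the example figures in the text
width, height = 640, 480
color_depth = 24        # bits per pixel
fps = 25
duration_sec = 3600     # 1 hour

pixels_per_frame = width * height
bits_per_frame = pixels_per_frame * color_depth
bit_rate = bits_per_frame * fps              # bits per second
size_bytes = bit_rate * duration_sec // 8    # total uncompressed size in bytes

print(pixels_per_frame)   # 307200
print(bits_per_frame)     # 7372800
print(bit_rate)           # 184320000 bits/s (~184.32 Mbit/s)
print(size_bytes / 1e9)   # ~82.9 GB for one hour, which is why compression is needed
```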

1.25 COLOUR TELEVISION SIGNALS AND SYSTEM


1.25.1 Colour Tv Signals
The three primary colour signals produced by the three camera tubes represent the proportions of the red, green and blue light components of the pixel being scanned. These signal voltages R, G, B are corrected for the difference in the nonlinearity of response of the camera tubes and the picture tube to provide the gamma-corrected R, G, B signals. These gamma-corrected signals are adjusted to an arbitrary value of 1 V each.
These signals could be directly used for transmission and reproduction of the picture on a three-beam colour picture tube with very fine R, G, B phosphor dots deposited sequentially along the screen.
The colour components are represented by the colour difference signals (R-Y) and (B-Y), forming the chrominance signal, which can be decoded in the colour TV receiver to obtain the R, G, B signals that drive the colour picture tube for colour picture reproduction.

1.25.2 Colour Tv System


A sequential technique for transmitting the colour difference signals alternately and combining them through a delay line, known as SECAM, was developed in France, while in Germany the PAL system was developed, in which phase errors are cancelled by alternating the phase of the colour vector on alternate lines.
In the PAL colour TV system, which was designed as a variant of the NTSC system to eliminate its susceptibility to phase error, the colour difference signals (R-Y) and (B-Y) are directly used to carry the colour information. Before modulation these signals are weighted by factors of 0.877 and 0.493 respectively to produce the V and U chroma signals. Weighting reduces the possibility of overmodulation of the transmitter carrier in the presence of highly saturated colours during very light and very dark parts of the picture.
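A small sketch of how the weighted chroma signals are formed, assuming the standard luminance equation Y = 0.299R + 0.587G + 0.114B together with the weighting factors quoted above (illustrative only; gamma correction and the modulation process itself are omitted):

```python
def pal_chroma(r, g, b):
    """Form luminance and weighted PAL colour-difference signals from
    gamma-corrected R, G, B values in the range 0..1."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # standard luminance equation
    v = 0.877 * (r - y)                     # weighted (R - Y)
    u = 0.493 * (b - y)                     # weighted (B - Y)
    return y, u, v

# Fully saturated red gives a large (R - Y) excursion,
# which the 0.877 weighting factor tames before modulation.
y, u, v = pal_chroma(1.0, 0.0, 0.0)
print(round(y, 3), round(u, 3), round(v, 3))
```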

QUESTION BANK
UNIT 1
PART A (2 Marks)
1. What is Raster?
2. Define aspect ratio and justify the choice of aspect ratio.
3. Draw the simple block diagram of TV transmitter.
4. Draw the simple block diagram of TV receiver.
5. What do you mean by scanning?
6. What do you mean by rectangular scanning?
7. What do you mean by horizontal scanning and vertical scanning?
8. What are the factors that decide the number of scanning lines per frame?
9. Write short notes on image continuity.
10. Write short notes on persistence of vision.
11. What is flicker? How can it be avoided?
12. What is interlaced scanning?
13. Why is the vertical scanning frequency 50 Hz in the Indian TV system?
14. What is the vertical and horizontal scanning frequency in interlaced scanning?
15. What is the scanning periods (Both vertical and horizontal) in interlaced scanning?
16. Give scanning sequence in interlaced scanning.
17. What do you mean by interlaced error?
18. Why must the total number of lines in any TV system be an odd number?
19. How maximum video signal frequency is calculated?

20. Why are there 625 lines in the Indian TV system, and why not 623 or 627?
21. What is the bandwidth needed for sync pulses?
22. Define picture resolution.
23. Define vertical and horizontal resolution.
24. Define kell factor.
25. What is the maximum and minimum video frequency required?
26. What are the characteristics of human eye and what are the factors decided by them?
27. What is visual acuity? What is the factor decided by it?
28. Write short notes on TV Camera.
29. Briefly explain the characteristics of camera tubes.
30. Write short notes on camera lenses.
31. Write short notes on f stop number and focal length of camera lenses.
32. Write short notes on photo-emissive and photoconductive effect.
33. What is dark current and image lag?
34. Write short notes on auto-focus system.
35. Write short notes on electron multiplier.
36. Write short notes on field-mesh image orthicon.
37. Briefly explain leaky capacitor principle of working of vidicon.
38. Give the construction of plumbicon.
39. Give the construction of silicon diode array vidicon.
40. Write short notes on lens turret and zoom lens.
41. Draw the functional block diagram of monochrome TV camera.
42. Briefly explain video processing of camera signal.
43. Give comparison of camera tubes.
44. Define contrast ratio.
45. Explain how the inherent smear effect in a vidicon is overcome in a plumbicon.
46. Define compatibility and reverse compatibility.
47. What are the requirements to meet compatibility?
48. Briefly explain three color theory or what is additive mixing & subtractive mixing?
49. State Grassman's law.
50. What are the tristimulus value of spectral colors?
51. Define Luminance, hue and saturation.
52. Explain how transmission of color difference signals aids to compatibility of color TV
system.

53. Draw color circle showing location and magnitude of primary and complementary
colors.
54. Briefly explain unsuitability of (G-Y) for color signal transmission. Or Why (G-Y)
color difference signal is not used?
55. What is frequency interleaving?
56. What is the bandwidth of color signal transmission?
57. Briefly explain modulation of color difference signal.
58. What is color burst?

PART B (16 Marks)


1. With neat diagram explain the characteristics of human eye.
2. Explain interlaced scanning.
3. Explain the construction and working of image orthicon.
4. Explain the construction and working of vidicon.
5. Explain the construction and working of plumbicon and silicon diode array vidicon.
6. (a) Explain the construction and working of CCD solid state image scanner.(8)
(b) Compare various TV camera pick-up tubes. (8)
7. (a) Camera pick-up tube deflection unit.(8)
(b) video processing of camera signal.(8)
8. (a) Auto-focus system.(8)
(b) Camera lenses.(8)
9. Explain sound and picture transmission in TV system.
10. Briefly explain vertical and horizontal resolution and derive the maximum video
signal frequency required.
11. Explain color TV camera.
12. Explain the generation and transmission of color difference signals.
13. Explain the unsuitability of (G-Y) for color signal transmission.
14. Define Y and color difference signal. Explain formation of these for color bar pattern
of white, yellow, cyan, green, magenta, red, blue and black.
