
Fundamental Concepts of Video

Digital video
A picture is "drawn" on a television or computer display screen by sweeping an electrical signal horizontally across the display one line at a time. One complete set of lines makes a picture; this is called a frame. Once the first complete picture is scanned, another portion of the waveform retraces the beam to the top of the display to start scanning the next frame, or picture. This sequence is repeated at a fast enough rate that the displayed images are perceived to have continuous motion.


Interlaced versus Progressive Scans

Interlaced and progressive scans are two different types of scanning systems. They differ in the technique used to "paint" the picture on the screen. Television signals and compatible displays are typically interlaced; computer signals and compatible displays are typically progressive (non-interlaced). In interlaced scanning, each picture, referred to as a frame, is divided into two separate sub-pictures, referred to as fields. Two fields make up a frame. An interlaced picture is painted on the screen in two passes: first the horizontal lines of the first field are scanned, then the beam retraces to the top of the screen and scans the horizontal lines of the second field in between the first set.
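The field/frame relationship above can be sketched in a few lines of Python. This is a minimal illustration, with each scanline represented as a string; the helper names are invented for this example, not from any standard API.

```python
# Sketch: splitting a progressive frame into two interlaced fields and
# weaving them back together. Field 1 holds the 1st, 3rd, 5th... lines,
# field 2 the 2nd, 4th, 6th... lines.

def split_fields(frame):
    """Split a progressive frame (list of scanlines) into its two fields."""
    return frame[0::2], frame[1::2]

def weave(field1, field2):
    """Interleave the two fields line-by-line to rebuild the full frame."""
    frame = []
    for a, b in zip(field1, field2):
        frame.extend([a, b])
    return frame

frame = [f"line {n}" for n in range(6)]
f1, f2 = split_fields(frame)
assert weave(f1, f2) == frame  # two fields make up one frame
```

On a real interlaced display the two fields are also separated in time, which is why weaving fields of a moving scene produces the familiar "combing" artifacts.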

Interlaced Scans

Progressive Scans

Visual resolution in television systems is specified in terms of a parameter called "TV lines." This parameter is typically used to indicate horizontal resolution; the same technique can be used for vertical resolution. TV lines are determined by viewing a test pattern consisting of alternating black and white lines that are placed closer and closer together. The pair of lines with the closest spacing that can still be distinguished as separate lines determines the resolution.


Types of video signals

Analog video
Analog video is a video signal transferred as an analog signal. It contains the luminance (brightness) and chrominance (color) of the image.

Digital video
Digital video is a type of video that works by using a digital, rather than analog, representation of the video signal.

Component video, Composite video, S-Video

Component video
A video color format that maintains the different video signals in (three) separate channels. Component video provides a sharper image. One way of maintaining signal clarity is to separate the components of a video signal so that they do not interfere with each other. More bandwidth is needed because of redundant data.

RGB analog component video

There is one channel for each color, and each channel includes the B&W image. RGB signals use no compression and impose no real limit on color depth or resolution, but require a large bandwidth to carry the signal. RGB also requires an additional signal for synchronizing the video display. Most modern computers offer this signal via the VGA port.


The various RGB (Red, Green, Blue) analog component video standards differ in how the synchronization signal is carried:

composite sync, where the horizontal and vertical signals are mixed together on a separate wire (the S in RGBS); separate sync, where the horizontal and vertical signals are each on their own wire (the H and V in RGBHV); and sync on green, where a composite sync signal is overlaid on the green wire (SoG or RGsB).

Y'PbPr analog component video

These types of component analog video signals do not use R, G, B components but rather the following components: Luma, a colorless component carrying the B&W image
(the luma is a weighted sum of the RGB colors)

Chroma, a combination of one or more color-carrying components that give only color information. This overcomes the problem of data redundancy present in RGB signals, since only one black-and-white image is carried instead of three. The synchronization signal is either transmitted on a separate channel or on the luma (Y).

Some Confusions
Component video connectors are not unique: the same connectors are used for several different standards. The settings on many DVD players and TVs may need to be set to indicate the type of input/output being used; if set wrong, the image may not be properly displayed.
Progressive scan, for example, is often not enabled by default, even when component video output is selected. Modern game systems (such as the PlayStation, GameCube, Xbox, and Wii) use the same connector pins for both RGB and YPbPr component video, with a software or hardware switch to determine which signal is generated. Hence a common complaint, especially with the PS2, is that the RGB signals look very green, with very dark reds and blues. This happens because the system menu has not been changed from Component to RGB. The same problem occurs when trying to play back DVDs on the PS2 using the RGB output.

Composite video
A video color format that combines all three YUV video signals into one channel. The first video signal to include color, composite video transmits brightness/luma (Y) and colors/chroma (U and V) over one cable. NTSC, PAL and SECAM television sets have composite video inputs. Most new sets also include S-video and component video connections, which provide a sharper image than composite video.

Composite video is also called CVBS (Composite Video Blanking and Sync). It is the format of an analog television (picture only) signal before it is combined with a sound signal and modulated onto an RF carrier.

Composite video can easily be directed to any broadcast channel simply by modulating the proper RF carrier frequency with it.
Analog home video equipment records a signal in (roughly) composite format; these devices then give the user the option of modulating it onto a VHF or UHF frequency.

Some devices (from the 1980s) that connect to a TV naturally output a composite signal. This may then be converted to RF with an external box known as an RF modulator that generates the proper carrier,
often for channel 3 or 4 in North America, or channel 36 in Europe.

The RF modulator is preferably left outside the console so the RF doesn't interfere with the components inside the machine. VCRs and similar devices already have to deal with RF signals in their tuners, so in those the modulator is located inside the box.

The process of modulating RF with the original video signal, and then demodulating the original signal again in the TV, introduces several losses into the signal. RF is also "noisy" because of all of the video and radio signals already being broadcast, so this conversion typically adds noise or interference to the signal as well. For these reasons, it is typically best to use composite connections instead of RF connections if possible. Almost all modern video equipment has at least composite connectors, so this typically isn't a problem; however, older video equipment and some very low-end modern televisions have only RF input (essentially the antenna jack).


S-Video
(Super-Video) A video color format that combines the three YUV video signals into two channels. Brightness/luma (Y) is in one channel, and color/chroma (U and V) are in another. S-Video provides a sharper image than composite video, but is not as good as component video.


Pin 1   GND   Ground (Y)
Pin 2   GND   Ground (C)
Pin 3   Y     Intensity (Luminance)
Pin 4   C     Color (Chrominance)

Color space conversion


National Television System Committee
Horizontal Frequency         15.734 kHz
Vertical Frequency           60 Hz
Color Subcarrier Frequency   3.579545 MHz
Video Bandwidth              4.2 MHz
Sound Carrier                4.5 MHz

Phase Alternating Line
SYSTEM   Line/Field   Horizontal Freq.   Vertical Freq.   Color Subcarrier   Video Bandwidth   Sound Carrier
PAL      625/50       15.625 kHz         50 Hz            4.433618 MHz       5.0 MHz           5.5 MHz
PAL N    625/50       15.625 kHz         50 Hz            3.582056 MHz       4.2 MHz           4.5 MHz
PAL M    525/60       15.750 kHz         60 Hz            3.575611 MHz       4.2 MHz           4.5 MHz

Sequential Couleur Avec Memoire or Sequential Color with Memory

Line/Field   Horizontal Frequency   Vertical Frequency   Video Bandwidth   Sound Carrier
625/50       15.625 kHz             50 Hz                5.0 MHz           5.5 MHz
625/50       15.625 kHz             50 Hz                6.0 MHz           6.5 MHz


SYSTEM   Lines/Fields   Horizontal Frequency   Vertical Frequency   Color Subcarrier   Video Bandwidth   Sound Carrier
NTSC M   525/60         15.734 kHz             60 Hz                3.579545 MHz       4.2 MHz           4.5 MHz
PAL      625/50         15.625 kHz             50 Hz                4.433618 MHz       5.0 MHz           5.5 MHz
PAL N    625/50         15.625 kHz             50 Hz                3.582056 MHz       4.2 MHz           4.5 MHz
PAL M    525/60         15.750 kHz             60 Hz                3.575611 MHz       4.2 MHz           4.5 MHz
SECAM    625/50         15.625 kHz             50 Hz                -                  5.0/6.0 MHz       5.5/6.5 MHz

NTSC is the analog television system in use in the United States, Canada, Japan, South Korea, Taiwan, the Philippines, Mexico, and some other countries, mostly in the Americas. It is named for the National Television System Committee, the U.S. standardization body that adopted it.

The National Television System Committee was established in 1940 by the Federal Communications Commission (FCC) in the United States (US) to resolve the conflicts that had arisen between companies over the introduction of a nationwide analog television system in the U.S. In March 1941 the committee issued a technical standard for black-and-white television. This built upon a 1936 recommendation made by the Radio Manufacturers Association (RMA) that used 441 lines.

In January 1950 the Committee was reconstituted, this time to decide on color television. In March 1953 it unanimously approved what is now called simply the NTSC color television standard, later defined as RS-170A. Color information was added to the black-and-white image by adding a color subcarrier of 4.5 x 455/572 MHz (approximately 3.58 MHz) to the video signal.

The NTSC format is used with the M broadcast television system, which consists of 29.97 interlaced frames of video per second. Each frame consists of 486 visible scanlines out of a total of 525. PAL uses 625 lines, and so has a higher picture resolution. The NTSC system interlaces its scanlines, with visible scanlines 21-263 drawn in the first field and visible scanlines 283-525 drawn in the second field. An NTSC television channel as transmitted occupies a total bandwidth of 6 MHz. A guard band, which does not carry any signals, occupies the lowest 250 kHz of the channel to avoid interference between the video signal of one channel and the audio signals of the next channel down.
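The NTSC numbers quoted above hang together arithmetically, which a quick Python check makes visible. The 4.5 MHz used here is the NTSC sound intercarrier frequency, and 30/1.001 is the exact form of the 29.97 fps color frame rate; both appear in the text above.

```python
# Checking the NTSC parameters quoted in this section.

# Color subcarrier: 4.5 MHz x 455/572, as stated in the text.
sound_carrier_hz = 4.5e6
subcarrier_hz = sound_carrier_hz * 455 / 572
print(f"color subcarrier:     {subcarrier_hz / 1e6:.6f} MHz")

# Horizontal (line) frequency: 525 lines per frame at 30/1.001 fps.
lines_per_frame = 525
frame_rate_hz = 30 / 1.001
line_freq_hz = lines_per_frame * frame_rate_hz
print(f"horizontal frequency: {line_freq_hz / 1e3:.3f} kHz")
```

Both results match the table earlier in this section: approximately 3.579545 MHz for the subcarrier and 15.734 kHz for the horizontal frequency.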

PAL, short for Phase Alternating Line, is a color encoding system used in broadcast television systems in large parts of the world.

SECAM, also written SÉCAM (Séquentiel couleur à mémoire, French for "Sequential Color with Memory"), is an analog color television system first used in France. A team led by Henri de France working at Compagnie Française de Télévision (later bought by Thomson) invented SECAM. It is, historically, the first European color television standard.

Chroma subsampling
Chroma subsampling is the practice of implementing more resolution for the (quantity representative of) luminance than for the (quantity representative of) color. It is used in many video encoding schemes (both analog and digital) and also in JPEG encoding.

Why subsampling works

Because the human eye is less sensitive to color than to luminance, bandwidth can be optimized by storing more luminance detail than color detail. At normal viewing distances, there is no perceptible loss incurred by sampling the color detail at a lower rate.

In video systems, this is achieved through the use of color difference components. The signal is divided into a luma (Y') component and two color difference components (chroma). Luma and Chroma components are formed as a weighted sum of R'G'B' components instead of linear RGB components. As a result, luminance and color detail are not completely independent of one another.
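The idea of keeping luma at full resolution while thinning out the chroma components can be sketched on a single scanline. This is a simplified illustration of 4:2:2-style horizontal subsampling with nearest-neighbour reconstruction; the sample values and function names are invented for the example.

```python
# Sketch: 4:2:2-style horizontal chroma subsampling on one scanline.
# Luma keeps every sample; Cb and Cr keep every other sample and are
# replicated on reconstruction.

def subsample_422(luma, cb, cr):
    """Keep full luma; halve the horizontal chroma resolution."""
    return luma, cb[0::2], cr[0::2]

def reconstruct_422(luma, cb_half, cr_half):
    """Nearest-neighbour chroma upsampling back to full width."""
    cb = [c for c in cb_half for _ in (0, 1)]
    cr = [c for c in cr_half for _ in (0, 1)]
    return luma, cb[:len(luma)], cr[:len(luma)]

# Illustrative 8-bit samples for one 6-pixel scanline.
luma = [16, 50, 90, 120, 200, 235]
cb   = [128, 130, 90, 92, 200, 198]
cr   = [128, 126, 60, 62, 180, 178]

y, cb_s, cr_s = subsample_422(luma, cb, cr)
_, cb_r, cr_r = reconstruct_422(y, cb_s, cr_s)

assert y == luma                                  # luma is untouched
assert cb_r == [128, 128, 90, 90, 200, 200]       # fine chroma detail lost
```

The reconstructed chroma rows show exactly what the eye tolerates: neighbouring pixels share a color value while brightness detail survives intact.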

Sampling systems and ratios

The subsampling scheme is commonly expressed as a three-part ratio (e.g. 4:2:2):
first digit - luma horizontal sampling reference;
second digit - Cb and Cr (chroma) horizontal sampling factor, relative to the first digit;
third digit - same as the second digit, except when zero: zero indicates that Cb and Cr are subsampled 2:1 vertically.

4:4:4 R'G'B' (no subsampling)

"4:4:4" may instead be referring to R'G'B' color space, which implicitly does not have any chroma subsampling at all.

In 4:2:2 chroma subsampling, the two chroma components are sampled at half the sample rate of luma, so horizontal chroma resolution is cut in half. This reduces the bandwidth of a video signal by one-third with little to no visual difference. Many high-end digital video formats and interfaces use this scheme:
Digital Betacam
DVCPRO50 and DVCPRO HD
Digital-S
CCIR 601 / Serial Digital Interface / D1
ProRes 422

In 4:1:1 chroma subsampling, the horizontal color resolution is quartered, and the bandwidth is halved compared to no chroma subsampling. In some professional circles, the 4:1:1 chroma subsampling of the DV format was initially not considered broadcast quality and was held acceptable only for low-end and consumer applications; DV is the best-known format that uses 4:1:1 chroma subsampling.

In 4:2:0 chroma subsampling, Cb and Cr are each subsampled at a factor of 2 both horizontally and vertically. Cb and Cr are effectively centered vertically halfway between image rows. This scheme is found in:
All versions of MPEG
PAL DV and DVCAM
HDV
most common JPEG/JFIF, H.261, and MJPEG implementations
VC-1
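The bandwidth claims in the last three subsections (one-third saved for 4:2:2, half for 4:1:1 and 4:2:0) all follow from counting samples in the J:a:b reference block described above. A small Python sketch of that bookkeeping, using the conventional 2-row, J-column reference block:

```python
# Rough bandwidth bookkeeping for the J:a:b chroma subsampling notation.
# The reference block is 2 rows of J luma samples; `a` chroma samples sit
# in the first row, `b` in the second (b = 0 means 2:1 vertical subsampling).

def bandwidth_fraction(j, a, b):
    """Fraction of the 4:4:4 sample count kept by a J:a:b scheme."""
    luma_samples = 2 * j            # luma is never subsampled
    chroma_samples = 2 * (a + b)    # Cb and Cr each contribute a + b
    full_samples = 3 * 2 * j        # 4:4:4 - three full-rate components
    return (luma_samples + chroma_samples) / full_samples

print(bandwidth_fraction(4, 4, 4))  # 1.0    - no subsampling
print(bandwidth_fraction(4, 2, 2))  # ~0.667 - the "one-third" saving
print(bandwidth_fraction(4, 1, 1))  # 0.5    - bandwidth halved
print(bandwidth_fraction(4, 2, 0))  # 0.5    - same data rate as 4:1:1
```

Note that 4:1:1 and 4:2:0 carry the same total data rate; they differ in whether the lost chroma resolution is horizontal only or split between both axes.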

From RGB to YUV:

Y = 0.299R + 0.587G + 0.114B
U = 0.492 (B - Y)
V = 0.877 (R - Y)

It can also be represented as:

Y = 0.299R + 0.587G + 0.114B
U = -0.147R - 0.289G + 0.436B
V = 0.615R - 0.515G - 0.100B

From YUV to RGB:

R = Y + 1.140V
G = Y - 0.395U - 0.581V
B = Y + 2.032U
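The conversion formulas above translate directly into code. This is a minimal sketch using exactly the rounded coefficients quoted in the text, with RGB values assumed normalised to [0, 1]; because the constants are rounded, a round trip is close to, but not exactly, the identity.

```python
# RGB <-> YUV conversion using the coefficients given in the text.

def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: weighted sum of RGB
    u = 0.492 * (b - y)                     # blue color difference
    v = 0.877 * (r - y)                     # red color difference
    return y, u, v

def yuv_to_rgb(y, u, v):
    r = y + 1.140 * v
    g = y - 0.395 * u - 0.581 * v
    b = y + 2.032 * u
    return r, g, b

# Pure white has full luma and zero chroma.
y, u, v = rgb_to_yuv(1.0, 1.0, 1.0)
assert abs(y - 1.0) < 1e-9 and abs(u) < 1e-9 and abs(v) < 1e-9

# Round trip recovers the original within rounding error.
rgb = (0.5, 0.25, 0.75)
back = yuv_to_rgb(*rgb_to_yuv(*rgb))
assert all(abs(p - q) < 0.01 for p, q in zip(back, rgb))
```

The zero-chroma result for white (and any gray) is the point of the representation: all the "color" channels carry is the deviation from a colorless image, which is what makes the chroma subsampling discussed above possible.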