Joint Photographic Experts Group (JPEG), Moving Picture Experts Group (MPEG), Digital Video Broadcasting (DVB) Standard
The Satellite Standard DVB-S
Introduction
The TV systems of the world employ about 5 MHz of baseband
bandwidth. Satellite transmission using FM requires that this bandwidth be multiplied further to occupy between 27 MHz and 36 MHz. This amount of bandwidth results in a high-quality signal that can be recovered with relatively inexpensive receivers.
However, the real cost of the analog baseband and analog FM comes in
the inefficient use of space segment.
Digital compression plays a very important role in modern video transmission. Its principal benefits are:
- More channels available per satellite, which greatly increases the variety of programming available at a given orbit position, in turn promoting new services like impulse PPV and home education, and making it feasible to expand a programming service through tailoring (e.g., packaging several different feeds of the same material with different advertising or cultural views) and multiplexing (i.e., sending the same channel at several different times);
- The potential of using a common format for satellite DTH, cable TV, and terrestrial broadcasting;
- A base for HDTV in the digital mode, because the number of bits per second of a compressed HDTV signal is less than what was previously required for a broadcast-quality conventional TV signal.
Compression systems that were marketed in the 1980s met a variety of
needs, such as video teleconferencing, PC videophones, distance education, and early introductions of narrowband ISDN. Some examples of these early applications of digital video compression are listed in Table.
A wide range of performance of compression systems results from the
relationship between the data rate (which is proportional to the occupied bandwidth) and the quality of the picture.
When quality can be sacrificed, then data rates below 1 Mbps are
possible. On the other hand, if the intended application is in the field of education or entertainment, then significantly more than 1 Mbps is dictated.
The table gives an indication of the relationship between bit rate and
application in commercial broadcasting. A perfect video reproduction of analog TV standards is achieved with rates of 90 Mbps or greater. Typical viewers usually cannot tell that anything is impaired when the signal is compressed to a rate of 45 Mbps. Below this value, the assessment becomes subjective.
Lossless compression: systems that operate at 45 Mbps or greater and are designed to transfer the signal without permanent reduction of resolution and motion quality; the output of the decoder is identical to the input to the encoder.
Lossy compression: in contrast, operation is below about 10 Mbps. Lossy compression introduces a change in the video information that cannot be recovered at the receiving end.
Compression Technology
From a technical standpoint, video sequences scanned at the rate of
either 30 or 25 frames per second with 525 or 625 lines each, respectively, contain a significant amount of redundancy both within and between frames.
For TV distribution and broadcast applications over satellites, we wish
to use data rates below 10 Mbps in order to save transponder bandwidth and RF power from the satellite and Earth station.
This means that we must employ the lossy mode of compression, which
will alter the quality in objective (numerical) and subjective (human perception) terms. A subjective measure of quality depends on exposing a large number of human subjects (viewers) to the TV display and allowing them to rate its acceptability. The TASO scale shown in the table of Lecture 5 is an excellent example of such a subjective scale for measuring quality.
Compression Technology: Digital Processing
Any analog signal can be digitized through a two-step process: sampling, followed by quantization.
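As an illustration, the two steps can be sketched in a few lines of code. This is a minimal sketch, not part of any standard: the function names and the assumed ±1 signal range are choices made for this example only.

```python
import math

def digitize(signal, sample_period, num_samples, bits):
    """Digitize an analog signal in two steps: sample, then quantize.

    `signal` is a function of time whose values span [-1, 1]
    (an assumption of this sketch)."""
    levels = 2 ** bits
    step = 2.0 / levels                      # quantization step size
    samples = []
    for n in range(num_samples):
        x = signal(n * sample_period)        # step 1: sampling
        q = round((x + 1.0) / step)          # step 2: quantization
        q = min(q, levels - 1)               # clip to the top code
        samples.append(q)
    return samples

# A 1-kHz sine sampled at 8 kHz with 8 bits (256 levels) per sample
codes = digitize(lambda t: math.sin(2 * math.pi * 1000 * t), 1 / 8000, 8, 8)
```

Each output value is an integer code between 0 and 2^M - 1; the mid-scale code (128 here) corresponds to a zero input.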
The number of bits determines the quality of reproduction, which can
be specified in terms of the signal-to-quantization noise ratio (S/Nq). The more bits per sample, the better the reproduction, as evidenced by the equation

S/Nq = 6M + 1.8 dB

where M is the number of bits per sample. Typical values of M are in the range of 6 to 12, with 8 being the most common. At this level, the S/Nq is equal to 49.8 dB. This relation indicates that each additional bit per sample reduces the quantization noise by 6 dB.
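The 6M + 1.8 dB rule of thumb for uniform PCM quantization can be tabulated for the typical word lengths just mentioned. A small sketch (the function name is illustrative):

```python
def snq_db(m_bits):
    """S/Nq in dB for M bits per sample, using the
    6M + 1.8 dB rule of thumb for uniform quantization."""
    return 6 * m_bits + 1.8

for m in range(6, 13):
    print(f"M = {m:2d} bits  ->  S/Nq = {snq_db(m):.1f} dB")
# Each extra bit adds 6 dB; 8 bits per sample gives 49.8 dB.
```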
A further refinement can be made by deploying a companding technique. Companding compresses the quantization scale to emphasize the lower signal levels and deemphasize the higher levels, an allocation that better matches the response of the human viewer or listener.
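As a sketch of the idea, the μ-law characteristic (one common companding law, used in North American telephony) maps a small fraction of full scale onto a much larger fraction of the compressed scale. The function names here are our own; only the μ-law formula itself is standard.

```python
import math

MU = 255  # the North American mu-law constant

def compress(x):
    """Mu-law compressor: expands resolution at low levels, |x| <= 1."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    """Inverse characteristic (expander), applied at the receiver."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# An input at 1% of full scale occupies about 23% of the compressed scale,
# far more than the 1% a purely linear quantizer would give it.
print(compress(0.01))
```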
Subsampling technique: the sampling rate is reduced below the Nyquist rate, and the process can still be lossless, provided that some additional conditions are met.
Essentially all of the practical encoding and compression systems use
subsampling and quantization prior to compression.
The images at the receiving end are smoothed using the mathematical
process called interpolation. This produces a more natural look to the image so that human observers cannot detect the potential impairment from subsampling.
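The smoothing step can be illustrated with a linear interpolator. This is a deliberately simple sketch; practical decoders use more elaborate interpolation filters.

```python
def interpolate(subsampled, factor):
    """Linearly interpolate between received samples to smooth a
    subsampled signal back up to its original rate (simple sketch)."""
    out = []
    for a, b in zip(subsampled, subsampled[1:]):
        for k in range(factor):
            out.append(a + (b - a) * k / factor)   # fill the gaps linearly
    out.append(subsampled[-1])
    return out

print(interpolate([0, 4, 8], 2))  # [0.0, 2.0, 4.0, 6.0, 8]
```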
Compression Technology: The Discrete Cosine Transform (DCT)
Transform coding reduces the number of bits required to represent a digital image.
DCT has proven to be the most popular mathematical procedure and is now part
of the JPEG and MPEG series of standards. The mathematical formulation of the DCT in the forward direction is

F(u,v) = (2/N) C(u) C(v) Σ_{x=0}^{N-1} Σ_{y=0}^{N-1} f(x,y) cos[(2x+1)uπ/(2N)] cos[(2y+1)vπ/(2N)]

where C(k) = 1/√2 for k = 0 and C(k) = 1 otherwise, f(x,y) is the pel value at position (x,y) of the N x N block, and F(u,v) is the transform coefficient at spatial frequency (u,v).
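A direct, unoptimized implementation of the forward 2-D DCT makes the structure of the transform concrete (a sketch; the function and variable names are our own, and real coders use fast factored algorithms rather than this quadruple loop):

```python
import math

def dct2d(block):
    """Forward 2-D DCT of an N x N block, computed straight from
    the textbook formula (no fast algorithm)."""
    n = len(block)
    c = lambda k: 1 / math.sqrt(2) if k == 0 else 1.0
    coeffs = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            coeffs[u][v] = (2 / n) * c(u) * c(v) * s
    return coeffs

# A flat 8x8 block has all of its energy in the DC coefficient F(0,0):
flat = [[100] * 8 for _ in range(8)]
F = dct2d(flat)
print(round(F[0][0]))  # 800; every other coefficient is ~0
```

The concentration of energy into a few low-frequency coefficients is exactly what makes the subsequent bit-rate reduction possible.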
The first step taken by the DCT coder is to represent the block in the form of an
N x N matrix of the pels and then apply the DCT algorithm to convert this into a matrix of coefficients that represent the equivalent spatial frequencies. More bits are removed by limiting the number of quantization steps and by removing some of the obvious redundancy. For example, coefficients that are zero are not transmitted.
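The quantize-and-skip-zeros step can be sketched as follows. This is a simplification: real JPEG/MPEG coders apply per-frequency quantization matrices, zigzag scanning, and entropy coding, none of which are modeled here.

```python
def quantize_and_pack(coeffs, step):
    """Quantize DCT coefficients with a uniform step, then avoid
    transmitting zeros by coding (zero_run, value) pairs."""
    pairs, run = [], 0
    for c in coeffs:
        q = round(c / step)
        if q == 0:
            run += 1                 # zero coefficients are not sent...
        else:
            pairs.append((run, q))   # ...only a count that skips them
            run = 0
    return pairs

print(quantize_and_pack([805, 3, 0, 0, 0, -41, 0, 12], 8))
# [(0, 101), (4, -5), (1, 2)]
```

Eight coefficients collapse into three pairs; coarser steps produce more zeros and therefore fewer transmitted values.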
Compression Technology: Frame-to-Frame Compression
Successive frames of a video sequence are repetitive in nature and contain a high degree of redundancy.
Consider a video segment of the nightly news with a reporter at her chair behind a
desk. During the entire time that she is speaking, the foreground and background never change; in fact, the only noticeable motion is of her head, mouth, and perhaps her upper body and arms. The result is that only the first frame needs to be encoded in its complete form; for the remaining frames, only information about the changes needs to be sent. This is possible because interframe correlation is high and, in fact, two or more consecutive intermediary frames can be predicted through interpolation.
The formal way to state this is that an approximate prediction of a pel can be
made from the previously coded information that has already been transmitted. For greater resolution, the error between the predicted value and the actual one can be sent separately, a technique called differential pulse code modulation (DPCM). DCT and DPCM can be combined to provide a highly compressed but very agreeable picture at the receiving end.
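The DPCM idea fits in a few lines. This sketch uses the simplest possible predictor (the previous reconstructed sample); the function names are illustrative.

```python
def dpcm_encode(samples):
    """DPCM: transmit only the error between each sample and its
    prediction (here, simply the previous reconstructed sample)."""
    errors, prediction = [], 0
    for s in samples:
        e = s - prediction
        errors.append(e)
        prediction = prediction + e   # decoder tracks the same prediction
    return errors

def dpcm_decode(errors):
    out, prediction = [], 0
    for e in errors:
        prediction = prediction + e
        out.append(prediction)
    return out

pels = [100, 102, 101, 101, 140]
assert dpcm_decode(dpcm_encode(pels)) == pels
print(dpcm_encode(pels))  # [100, 2, -1, 0, 39] -- small differences dominate
```

Because most of the transmitted errors are small, they can be coded with far fewer bits than the raw pel values.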
Compression Technology: Motion Compensation
Motion compensation is the technique used to reduce the redundant information
between frames in a sequence. It is based on estimating the motion between video frames, observing that individual elements can be traced by their displacement from point to point over the course of the sequence.
Compression Technology: Hybrid Coding
Two or more coding techniques can be combined to gain more advantage
from compression without sacrificing much in the way of quality.
By 1992, JPEG had become one of the most popular digital image
compression standards for PC applications, particularly hard disk storage of photographs and CD-ROM media. The standard is implemented both in software and as hardware available in the form of plug-in boards.
JPEG is really a set of standards that allow the user to make his or her
own tradeoff of compression versus quality.
The MPEG series of standards supports a wide variety of picture formats with a
very flexible encoding and transmission structure.
It allows the application to use a range of data rates to handle multiple video
channels on the same transmission stream and to allow this multiplexing to be adaptive to the source content.
The frame-to-frame compression approach with the use of intra (I) pictures
(discussed later in this chapter) permits fast forward and reverse play for CD-ROM applications. This impacts the degree of compression, since frames cannot be interpolated when they are used for these features.
MPEG-processed video can be edited by systems that support the standard. Random access is possible using the I pictures. The standard will most probably have a long lifetime, since it can adapt to
improvements in compression algorithms, VLSI technology, motion compensation, and the like.
MPEG-3: Originally designed for HDTV, but abandoned when it was realized that MPEG-2 (with extensions) was sufficient for HDTV (not to be confused with MP3, which is MPEG-1 Audio Layer 3).
MPEG-7: A multimedia content description standard.
MPEG-21: Described by MPEG as a multimedia framework.
MPEG-1 is aimed at nonbroadcast applications like computer CD-ROM. It draws from JPEG in the area of image compression using the DCT and is also intended to provide a broad range of options to fit the
particular application. For example, there are various profiles to support differing picture sizes and frame rates, and it can encode and decode any picture size up to normal TV, with a maximum of 720 pixels per line and 576 lines per picture. The maximum frame rate is 30 (noninterlaced) and the corresponding maximum bit rate is 1.86 Mbps.
Predicted (P) pictures, on the other hand, are predicted from the I pictures
and incorporate motion compensation as well. They are therefore not usable as stand-alone reference points, because other pictures are required in order to decode them properly.
As stated previously, the I pictures are stand-alone DCT images that can
be decompressed and used as a reference.
Like differential PCM, MPEG audio coding transmits only changes and throws away data
that the human ear cannot hear. This information is processed and time division multiplexed with the encoded video to produce a combined bit stream that complies with the standard syntax. This is important because it allows receivers designed and made by different manufacturers to be able to properly interpret the information.
- A multiplexing system to implement a common MPEG-2 transport stream (TS);
- A common service information (SI) system giving details of the programs being broadcast (this is the information for the on-screen program guide);
- A common first-level Reed-Solomon (RS) forward error correction system (this improves reception by providing a low error rate to the decoded data, even in the presence of link fades);
- A common scrambling system;
- A common conditional access interface (to control the operation of the receiver and assure satisfactory operation of the delivery system as a business).
- DVB-S: The satellite DTH system for use in the 11/12-GHz band, configurable to suit a wide range of transponder bandwidths and EIRPs;
- DVB-C: The cable delivery system, compatible with DVB-S and normally to be used with 8-MHz channels (e.g., consistent with the 625-line systems common in Europe, Africa, and Asia);
- DVB-T: The digital terrestrial TV system designed for 7- to 8-MHz channels;
- DVB-SI: The service information system for use by the DVB decoder to configure itself and to help the user navigate the DVB bitstreams;
- DVB-TXT: The DVB fixed-format teletext transport specification;
- DVB-CI: The DVB common interface for use in conditional access and other applications.
Bit rates and bandwidths can be adjusted to match the needs of the
satellite link and transponder bandwidth and can be changed during operation.
The video, audio, and other data are inserted into payload packets of
fixed length according to the MPEG transport stream packet specification. This top-level packet is then converted into the DVB-S structure by inverting the synchronization byte in every eighth packet header (the header is at the front end of the packet). There are exactly 188 bytes in each packet, which includes program-specific information so that a standard MPEG-2 decoder can capture and decode the payload. These data contain picture and sound, along with synchronization data that allow the decoder to recreate the source material.
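The sync-byte inversion can be sketched as follows. The packet contents here are dummies; what is real is the 188-byte packet length, the 0x47 MPEG-2 TS sync byte, and its bitwise inverse 0xB8.

```python
SYNC = 0x47              # MPEG-2 TS sync byte
PACKET_LEN = 188         # bytes per transport packet

def dvb_frame(packets):
    """Mark DVB-S frame boundaries by inverting the sync byte
    (0x47 -> 0xB8) in the header of every eighth packet."""
    out = []
    for n, pkt in enumerate(packets):
        assert len(pkt) == PACKET_LEN and pkt[0] == SYNC
        body = bytearray(pkt)
        if n % 8 == 0:
            body[0] = SYNC ^ 0xFF    # bitwise inversion: 0x47 -> 0xB8
        out.append(bytes(body))
    return out

# 16 dummy packets: sync byte followed by 187 zero bytes each
packets = [bytes([SYNC]) + bytes(187) for _ in range(16)]
framed = dvb_frame(packets)
print(hex(framed[0][0]), hex(framed[1][0]), hex(framed[8][0]))  # 0xb8 0x47 0xb8
```

A receiver that sees the inverted sync byte recurring every eighth packet can lock onto the DVB-S frame structure without any extra overhead bytes.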
The inner FEC is then introduced in the form of a convolutional code. The final step is at the physical layer, where the bits are modulated onto a
carrier using QPSK. The amount of FEC is adjusted to suit the frequency, satellite EIRP and receiving dish size, transmission rate, and rainfall statistics for the service area. The system therefore can be tailored to the specific link environment.
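Putting these layers together, the useful (MPEG transport stream) bit rate of a DVB-S carrier follows from the QPSK symbol rate and the two code rates. A sketch; the 27.5-Msym/s example is our own choice of a commonly used configuration.

```python
def dvb_s_useful_rate(symbol_rate, conv_rate):
    """Useful (MPEG TS) bit rate of a DVB-S carrier: QPSK carries
    2 bits per symbol, reduced by the inner convolutional code rate
    and the outer RS(204,188) code rate."""
    return symbol_rate * 2 * conv_rate * 188 / 204

# 27.5 Msym/s with rate-3/4 inner coding
rate = dvb_s_useful_rate(27.5e6, 3 / 4)
print(f"{rate / 1e6:.2f} Mbps")  # 38.01 Mbps
```

Choosing a stronger inner code (say, rate 1/2 instead of 3/4) trades useful bit rate for link margin, which is exactly the tailoring to the link environment described above.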