
Introduction

Systems and their applications are based on the connection of individual devices. A well-known standard for connecting digital TV devices is the so-called SDI (Serial Digital Interface) format, which was defined more than a decade ago to carry the uncompressed digital component signal. Together with the introduction of video compression formats, new methods for the transport of these compressed video signals were defined, methods based on the introduction of a Serial Data Transport Interface (SDTI). The SDTI signal forms a bridge to the data transport technologies of the computer and telecom industries. The Video Connection Book was not written for development or planning engineers, but for editors, directors and broadcast managers. To provide basic information about connection techniques, their differences and their applications, we have had to simplify technical facts drastically. Engineers should study the relevant standards of the SMPTE and other organizations for more information.

Video Connection

C O N T E N T S

Chapter 1 (Page 6): The TV Legacy and Modern Telecommunication Philosophy / The Principle of Layering in Telecommunication Standards
Chapter 2 (Page 8): Steps from Analog to Digital Video
Chapter 3 (Page 12): The Serial Digital Interface and Compressed Video Signals / Cascading of Compression Codecs
Chapter 4 (Page 15): The Need for a Truly Data Signal
Chapter 5 (Page 18): Differences between the Video-Centric World of SDI and the Data World
Chapter 6 (Page 21): Audio Signals within the SDI Bit-Stream
Chapter 7 (Page 23): From SDI to SDTI / The Serial Data Transport Interface
Chapter 8 (Page 27): The Meaning of Interoperability for Broadcast Facilities
Chapter 9 (Page 29): The Meaning of Network Technology for Broadcast Facilities
Chapter 10 (Page 31): DVCPRO, the DIF Packet and the World of Data Signals
Chapter 11 (Page 33): The Transport of DVCPRO DIF Packets over SDTI
Chapter 12 (Page 38): The Transport of MPEG-2 over SDTI
Chapter 13 (Page 42): The Stream Transfers of Data
Chapter 14 (Page 44): The File Transfers of Data
Chapter 15 (Page 47): Fiber Channel
Chapter 16 (Page 52): The ATM Wide Area Network
Chapter 17 (Page 58): IEEE 1394 / More than just a Network for the Consumer Market
Chapter 18 (Page 63): The Universal File Exchange Format
Chapter 19 (Page 68): Conclusion
References (Page 70)
Index (Page 72)
Contacts (Page 74)


Chapter 1: The TV Legacy and Modern Telecommunication Philosophy / The Principle of Layering in Telecommunication Standards

The big achievement of the color encoding standards, NTSC and PAL, was not only to stay within a given transmitter bandwidth, but also to reduce a three-cable interconnection to a single-cable interconnection. At the time of the switch from black & white to color TV, it therefore became possible to use the same coaxial cables for color as were already being used to carry monochrome signals. This was of huge economic benefit for broadcasters. The legacy NTSC and PAL formats were defined in so-called vertical standards. Today, telecommunications standards are written in a layered structure. The Society of Motion Picture and Television Engineers wisely adopted this principle for structuring its TV standards as well. The reason for this was the convergence of the broadcast and communication industries. An understanding of layers is crucial to an understanding of today's connection techniques. Figure 1 interprets the analog color TV standards as a set of layers. We can differentiate three layers:

Top Layer: The imager (CCD) is the source for the primary color signals, which are needed for the display of imaged information at the receiver. The Top Layer specifies these parameters.

Middle Layer(s): These layers specify the color encoding scheme and the formatting of the color and luminance signals into one single CCVS signal. Other parts of these middle layers can define the rules for transporting the CCVS signal over the modulation system of a transmitter.

Bottom Layer: This layer specifies all physical (electrical & mechanical) parameters (level, cable, impedance, connector) and is called the Physical Layer.

Fig. 1: The legacy color TV standard shown in a modern layered structure. Top Layer: R (red), G (green) and B (blue) signals of the 525/59.94 and 625/50 systems. Middle Layer: NTSC, PAL-M, PAL or SECAM encoding. Bottom Layer: level 1 volt p-p, coaxial cable, impedance 75 ohms, BNC connector.

SUMMARY
Modern standards for telecommunications are written in a layered structure. Applications are defined in the top layers, physical interconnection cables in the bottom layer. Middle layers define rules for transporting the information and for communication between the involved hardware & software devices. To ease the convergence between broadcast technologies and modern IT & telecom technologies, SMPTE adopted the principle of layering for its standardization work.


Chapter 2: Steps from Analog to Digital Video

Over the last few decades, technological progress has changed the way video information is transported and/or stored. Two decades ago, the 1/2 inch recording industry moved from NTSC and PAL composite signals to analog component signals. These changes affected the middle layers, as can be seen in Figure 2. The bottom layer with the 75 ohm coaxial cable stayed the same.

Fig. 2: Layering model for analog component signals (as used by 1/2" analog camcorders). Top Layer: R-G-B imagers (525 and 625). Middle Layer(s): computation of the Y, CR and CB analog component signals. Bottom Layer: three coaxial cables, impedance 75 ohms, BNC connectors.


In the mid-eighties the first digital D1 VTRs were introduced. In order to record TV signals on these digital recorders, the analog component signals were digitized. Another layer, which specified the digitization of the analog component signals, was added to the middle layer. The physical bottom layer changed as well: the digitized signal needed 11 twisted pairs to carry the clock and data bits (Figure 3).
Fig. 3: Layering model for the parallel form of digital component signals (as used by D1 digital VTRs). Top Layer: R-G-B imagers (525 and 625). Middle Layer(s): computation of the Y, CR and CB analog component signals; digitization into a parallel digital signal. Bottom Layer: 11 balanced signal pairs carrying the clock and 10 data bits; twisted-pair cables (11x); receiver impedance 110 ohms; 25-contact D-subminiature connector.


The parallel digital signal was very awkward to handle. It was no longer possible to use the installed coaxial video cables, because this signal required a special cable. Due to the use of 10 wire pairs to carry the 10 bits, the cable became very thick, the connectors were large, and the distances that could be bridged were too short. All these problems were solved with the invention of the Serial Digital Interface (SDI) signal, which carried the individual bits in serial form instead of parallel form (Figure 4).

Fig. 4: Layering model for the Serial Digital Interface (SDI) signal. Top Layer: R-G-B imagers (525 and 625). Middle Layer(s): computation of the Y, CR and CB analog component signals; digitization into a parallel digital signal (10 bits = 10 wire pairs); serialization into one single serial digital signal (1 bit = 1 wire pair). Bottom Layer: coaxial cable, impedance 75 ohms, BNC connector.


The middle layer now consists of three sub-layers. Comparing Figure 4 with Figure 1, we can see that the top and bottom layers remained unchanged; only the middle layer was replaced on the way from analog to digital (Figure 5). This highlights the fact that the SDI signal can be carried over the same coaxial cable as the analog color TV signal.

Fig. 5: Replacement of the middle layer on the way from analog to digital: the NTSC, PAL-M, PAL or SECAM encoding is replaced by the computation of the analog component signals, their digitization into a parallel digital signal, and serialization into one single serial digital signal.

SUMMARY
The breakthrough to digital TV facilities was made possible by using the same Physical Layer of the legacy analog TV standard.


Chapter 3: The Serial Digital Interface and Compressed Video Signals / Cascading of Compression Codecs

Over the last decade, the majority of broadcast facilities have moved from analog to digital. Digital technology has enabled a variety of new production tools, from digital video effects (DVE) to non-linear editing (NLE). Connections within these digital TV studios, as shown in Figure 6, make use of the Serial Digital Interface (SDI) signal, which is standardized as ANSI/SMPTE 259M. The success of the SDI technique was based on the fact that this signal could be carried over the installed cables that were already being used to carry analog NTSC or PAL signals.
Fig. 6: SDI, the backbone of today's digital TV broadcast facilities: an SDI routing system under SDI routing control connects contribution, acquisition, production, post-production, playout and archiving, as well as VTRs and servers.


When SDI was introduced, compression technology was still in the laboratories; therefore, SDI was designed to transport uncompressed video signals only. The Video Compression Book 1) deals with the various new compression formats and their applications. It explains that both DV compression (as used in DVCPRO) and MPEG-2 compression are based on a quantization of the so-called DCT coefficients. This quantization is the main source of the unavoidable quality losses in DCT-based compression codecs (Figure 7).

Fig. 7: Sources of quality losses in DCT-based compression codecs. A codec consists of a compression encoder (raster transformation, DCT, quantization, VLC) and a compression decoder (VLC-1, Q-1, inverse DCT, raster re-transformation) between the SDI input and the SDI output; the quantization stage is the lossy step.
1) The Video Compression Book was issued by Panasonic in 1999 and can be downloaded from Panasonic's web page at http://www.panasonic-broadcast.com (available in English, Spanish and German).

In today's daily production processes it has become unavoidable to interconnect devices that are internally based on a compression format. This could be done via SDI, as shown in Figure 8, but would lead to a cascading of compression codecs.


Fig. 8: Cascading of compression codecs via SDI connections. Each DVCPRO device decodes (VLC-1, Q-1, inverse DCT, raster re-transformation) to SDI at its output, and the next device re-encodes (raster transformation, DCT, quantization, VLC) at its input, so the lossy quantization step is repeated along the chain of devices #1, #2 and #3.

Cascading of compression codecs slightly impacts picture quality, because the quantization of the DCT coefficients is repeated several times. "Slightly" means that the difference may not become visible. However, if you copy a tape via SDI, you don't get a clone of the original. With uncompressed signals, you do! Figure 9 shows the difference in performance of uncompressed and compressed video signals.
Fig. 9: Schematic trajectory of the signal-to-noise ratio (S/N) over the number of codecs cascaded via SDI: uncompressed formats (D1, D5) hold their S/N, while DCT compression loses S/N with each cascaded codec.
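The generational behavior sketched in Fig. 9 can be imitated with a small numerical experiment. The code below is not a real DV or MPEG-2 codec: it models one codec generation as an orthonormal DCT of a single 8-pixel block, quantization of the coefficients (the lossy step), and reconstruction to 8-bit pixel values. The block values and the quantizer step are illustrative assumptions; the point is only that an uncompressed copy is a perfect clone while every compressed generation differs from the original.

```python
import math

def dct(x):
    # Orthonormal 1-D DCT-II of one pixel block.
    N = len(x)
    return [
        (math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
        * sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
        for k in range(N)
    ]

def idct(X):
    # Matching inverse transform (DCT-III).
    N = len(X)
    return [
        sum(
            (math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
            * X[k] * math.cos(math.pi / N * (n + 0.5) * k)
            for k in range(N)
        )
        for n in range(N)
    ]

def codec_generation(pixels, q=12):
    # One encode/decode pass: quantizing the DCT coefficients is the
    # lossy step; the result is rounded back to 8-bit pixel values.
    quantized = [round(c / q) * q for c in dct(pixels)]
    return [min(255, max(0, round(p))) for p in idct(quantized)]

def mse(a, b):
    # Mean squared error between two pixel blocks.
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

original = [16, 50, 90, 140, 180, 200, 210, 215]

# An uncompressed (SDI-style) copy is a perfect clone of the original:
assert mse(original, list(original)) == 0

# A chain of compressed generations is not:
generation = original
for _ in range(5):
    generation = codec_generation(generation)
assert mse(original, generation) > 0
```

Copying the quantized data itself, rather than decoding and re-encoding it, is what avoids this loss, which is exactly the motivation for SDTI in the following chapters.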

SUMMARY
SDI connections represent the backbone of today's digital broadcast facilities. Using these connections for individual devices that internally apply compression technology may slightly impact the picture quality.


Chapter 4: The Need for a Truly Data Signal

To avoid the possible negative impacts on picture quality when connecting compression devices via SDI, a new type of connection method was required. It needed to be based on a truly data signal. But what does "truly data" mean? Is there a difference between a data signal and the digital TV (SDI) signal? To answer this question, it seems appropriate to make a detour into the world of modern telecommunications. All communication between devices requires that the devices agree on the format of the data. The set of rules defining a format is called a protocol. Ethernet is such a protocol for local-area network (LAN) applications. A network is any collection of independent computers that communicate with one another over a shared network medium. LANs are networks usually confined to a geographic area, such as a single building. Ethernet is the most popular physical-layer LAN technology in use today. The heart of the Ethernet system is the Ethernet frame, which is used to deliver data between computers. The frame is a packet of variable size, consisting of a payload and an address label.


Fig. 10: The Ethernet frame, consisting of a payload and an address label: Preamble (8 bytes), Destination address (6 bytes), Source address (6 bytes), Length/Type (2 bytes), payload data (46 - 1500 bytes) and CRC (4 bytes).

Ethernet frames are bit-oriented frames that contain: a Preamble of 64 bits (alternating 1s and 0s) used for synchronization; a Destination address of 48 bits; a Source address of 48 bits; a Length/Type field of 16 bits, which identifies the amount and type of payload data; payload data of 46 to 1500 bytes; and a CRC of 32 bits (Cyclical Redundancy Check), used for error detection. Ethernet describes the physical layer of a network. TCP/IP is the most commonly used set of transport layers above that physical layer. The Transmission Control Protocol (TCP) and the Internet Protocol (IP) provide the basic transport functionality.
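The frame layout above can be made concrete with a small parser. This is an illustrative sketch, not code from any networking stack: the field offsets follow the list above, and the 64-bit preamble is omitted because network hardware strips it before delivering the frame.

```python
import struct

def parse_ethernet_frame(frame: bytes):
    # Split a raw frame (preamble already stripped by the hardware)
    # into its label and payload fields.
    if len(frame) < 18:          # 6 + 6 + 2 header bytes + 4 CRC bytes
        raise ValueError("frame too short")
    destination = frame[0:6]     # 48-bit destination address
    source = frame[6:12]         # 48-bit source address
    (length_type,) = struct.unpack("!H", frame[12:14])  # 16-bit Length/Type
    payload = frame[14:-4]       # 46 to 1500 bytes in a valid frame
    crc = frame[-4:]             # 32-bit Cyclical Redundancy Check
    return destination, source, length_type, payload, crc

# Build a minimal 64-byte frame: broadcast destination, a dummy source
# address, Length/Type 0x0800 (IPv4) and a payload padded to 46 bytes.
payload = b"hello" + b"\x00" * 41
frame = (b"\xff" * 6 + b"\x02\x00\x00\x00\x00\x01"
         + struct.pack("!H", 0x0800) + payload + b"\x00" * 4)
dst, src, ltype, data, crc = parse_ethernet_frame(frame)
```

Note how everything a receiver needs — who sent the frame, who it is for, and what kind of payload it carries — travels inside the frame itself; this is exactly the "label" property the SDI signal lacks.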


TCP/IP breaks your information up into a sequence of smaller datagrams. A datagram is a collection of data that is sent as a single message. Each of these datagrams is sent through the network individually to the other end, where they are re-assembled. However, while datagrams are in transit, the network doesn't know there is any connection between them. It is perfectly possible that datagram 14 will actually arrive before datagram 13. It is also possible that somewhere in the network an error will occur, and some datagram won't get through at all. In that case, it has to be sent again. In the context of this book, a datagram is no different from a packet plus label (IP header). The Transmission Control Protocol is responsible for breaking up the message into datagrams, addressing the host, reassembling the datagrams at the other end, resending anything that gets lost, and putting things back in the right order. The Internet Protocol is responsible for routing individual datagrams. On top of TCP/IP sits the so-called File Transfer Protocol (FTP), which provides the necessary command set to transport data files from one computer to another. We have seen that a data signal is characterized by packaged data. Each package has its label to identify source, destination and type of payload.
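The split-and-reassemble behavior described above can be sketched in a few lines. This is a toy model, not real TCP: real TCP numbers byte offsets and handles acknowledgements and retransmission, but the principle — a sequence number on each datagram lets the receiver restore the original order — is the same.

```python
import random

def to_datagrams(message: bytes, size: int):
    # Break a message into numbered datagrams: (sequence number, payload).
    return [(seq, message[start:start + size])
            for seq, start in enumerate(range(0, len(message), size))]

def reassemble(datagrams):
    # Sort by sequence number to restore the original order, no matter
    # how the network reordered the datagrams in transit.
    return b"".join(payload for _, payload in sorted(datagrams))

message = b"It is perfectly possible that datagram 14 arrives before 13."
datagrams = to_datagrams(message, 8)
random.shuffle(datagrams)        # model reordering inside the network
assert reassemble(datagrams) == message
```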

SUMMARY
The packaging of data described above forms the so-called truly data signal: the combination of a payload of data with a label or header that provides additional information about the payload type, size, source and destination.


Chapter 5: Differences between the Video-Centric World of SDI and the Data World

The SDI signal is only the digital representation of the analog Y, CR and CB video signals. During the analog-to-digital conversion process, the video signals are sampled into 720 luminance pixels and 720 chrominance pixels (360 for CR and 360 for CB) per line. Each pixel is assigned 10 bits. In the sequence shown in Figure 11, the serial SDI signal carries the bits of the samples of one active TV line.

Fig. 11: Bit sequence within the SDI signal for a one-line period: SAV (40 bits), 360 groups of 40 bits each (four 10-bit samples per group, multiplexing the CB, CR and luminance samples), EAV (40 bits), then the horizontal blanking period before the next TV line.


The horizontal synchronization information of the analog video signal is translated into two special markers of 40 bits each. The Start of Active Video (SAV) signal precedes the first group of each line, and the End of Active Video (EAV) follows the last group. These signals do not represent a label, because they provide no information about source, destination, length or type of payload (e.g. the number of TV lines). Additional information embedded in the EAV and SAV signals indicates vertical blanking and the field sequence. The transmitted payload of 14,400 bits per active line can only be interpreted correctly once the receiver has synchronized itself to the incoming SDI signal with the help of the digital horizontal synchronization signals EAV and SAV. The receiver then counts the SAV signals and assigns the correct line number to the payload. In the worst case, the synchronization time may be as long as one field of video, and all payloads received during this time may be lost. In the data world, a label attached to the payload avoids such a loss. The above-described SDI version is based on a sampling frequency of 13.5 MHz for the luminance video signal. There is another version with 18 MHz sampling for luminance; that version carries 19,200 bits as payload per active line. Figure 12 shows the basic structure of the SDI signal for the period of one line.
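The payload figures quoted above follow directly from the sampling structure described in this chapter, which is easy to verify: in 4:2:2 sampling, CB and CR together contribute as many samples per line as luminance, and every sample is a 10-bit word.

```python
def payload_bits_per_line(luma_samples):
    # 4:2:2 sampling: CB and CR together contribute as many samples as
    # luminance; every sample is carried as a 10-bit word.
    chroma_samples = luma_samples // 2 + luma_samples // 2  # CB + CR
    return (luma_samples + chroma_samples) * 10

# 13.5 MHz luminance sampling: 720 luma + 360 CB + 360 CR pixels per line.
assert payload_bits_per_line(720) == 14_400   # the 270 Mb/s SDI version

# 18 MHz luminance sampling scales the sample counts by 18/13.5:
assert 720 * 18 / 13.5 == 960
assert payload_bits_per_line(960) == 19_200   # the 360 Mb/s SDI version
```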


Fig. 12: Basic structure of the SDI signal for one line period: EAV, Ancillary Data, SAV, then the payload (14,400 bits at 270 Mb/s; 19,200 bits at 360 Mb/s).

Packetized communication can be synchronous or asynchronous. Synchronous signals are closely tied to some sort of clock, so each packet begins, for example, precisely at 0.0 ms, then 7.5 ms, then 15.0 ms, and so on. Asynchronous signals are not bound tightly to a clock. Their data packets usually have some kind of unique bit pattern to identify the beginning and end of a packet. Most serial communications and practically all LAN communications are asynchronous. Synchronization between transmitting and receiving devices can be achieved through the transport of timing references within the stream. An additional prerequisite for a synchronous transmission of a TV signal via an asynchronous line is the availability of so-called Quality of Service parameters as explained later. The connection of two devices over SDI provides a transmission that is not only synchronous, but real-time as well. A synchronous data communication can be either real-time or non-real-time.

SUMMARY
The SDI signal is a digital representation of analog video signals, but it is not a data signal. The video data are not packaged and no label identifies the content, its source or its destination.


Chapter 6: Audio Signals within the SDI Bit-Stream

Not all bit space within the SDI signal is occupied by video. Figure 13 shows that the space available between the EAV and the SAV signals, which equals the horizontal blanking period of analog video, can be used to carry digital audio signals.
Fig. 13: Available space within SDI over a period of one TV frame which can be used for embedding audio into the SDI bit-stream: the areas between EAV and SAV alongside the active video of fields 1 and 2 can carry audio and optional video data.


Although the video data are not packaged (as explained above), the packaging of the digital audio signal became mandatory for two reasons: not all lines are available to carry audio, and the audio sampling frequencies lead to a non-uniform distribution of audio samples over the available lines. As an example, some television lines may carry three samples and some four; other values are also possible. The AES 3 digital audio stream contains, in addition to the digital audio samples, AES auxiliary data and some audio data that are related to each audio sample. The audio samples and the audio data are packed (mapped) into a so-called Ancillary Data Packet.
Fig. 14: Ancillary Data Packet containing the audio samples of the AES 3 audio stream: Flag (30 bits), ID (20 bits), Length (10 bits), audio samples (30 bits each) and checksum CS (10 bits).

The ID bits identify the payload as audio samples, and the Length specifies the variable number of audio samples in the packet, as shown in Fig. 14. The checksum (CS) is used to determine the validity of the data packet. The packets are placed in the space indicated in Figure 13. The same packet structure is used to transport the AES auxiliary data or any other type of data; the ID is used to identify the type of data in the payload.
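The packet of Fig. 14 can be sketched in code. This is a deliberately simplified model: the packet is represented as a list of numeric words, the Flag is modeled as three fixed words, the way the 20-bit ID and the 30-bit samples map onto words is an illustrative assumption, the checksum rule is illustrative, and the parity/marker bits that the real SMPTE ancillary data format adds to each word are omitted.

```python
def encode_sample(sample):
    # Spread one 24-bit audio sample over three words of 8 data bits each
    # (an illustrative split; the real 30-bit mapping differs).
    return [(sample >> shift) & 0xFF for shift in (0, 8, 16)]

def make_ancillary_packet(data_id, samples):
    packet = [0x000, 0x3FF, 0x3FF]       # Flag: 30 bits as three fixed words
    packet += [data_id]                  # ID: identifies the payload type
    packet += [len(samples)]             # Length: variable number of samples
    for sample in samples:
        packet += encode_sample(sample)  # each sample occupies three words
    packet += [sum(packet) & 0x3FF]      # CS: checksum over the packet words
    return packet

pkt = make_ancillary_packet(data_id=0x2FF, samples=[0x123456, 0x654321])
```

A receiver would read the ID to learn what kind of payload follows, the Length to know how many samples to expect, and the CS to check that the packet arrived intact — exactly the roles described in the text.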

SUMMARY
While the video part of the SDI signal does not represent a truly data signal, the audio signal is embedded within the SDI signal in the form of packetized data using a labeled payload.


Chapter 7: From SDI to SDTI / The Serial Data Transport Interface

The embedded audio shows that the SDI signal can be used as a type of container for packetized data. You may compare it with a truck carrying a container as shown in Figure 15. The truck has two freight compartments, a small one in the front of the truck (usually used as a cabin for the driver) and the huge container.

Fig. 15: The SDI signal as container for the transport of packetized data: a small compartment (the ancillary data space between EAV and SAV) and the large payload container.


The Serial Data Transport Interface (SDTI) places Header Data into the Ancillary Data space, places User Data into the payload, and adds a CRC to the User Data. In terms of our example, you may say that the freight papers are placed in the small forward compartment, with the freight load placed in the container on the trailer.
Fig. 16: Basic structure of the Serial Data Transport Interface (SDTI) signal: EAV, Header Data in the Ancillary Data space, SAV, then the User Data with appended CRCs in the payload.

Fig. 17: Header Data in the Serial Data Transport Interface (SDTI) signal, packed into an Ancillary Data Packet: Flag (30 bits), ID (20 bits), Length (10 bits), Header Data (460 bits) and CS (10 bits).

The Header Data is packed into the Ancillary Data Packet shown in Figure 17. The packet structure is identical to the structure used for embedding audio into the SDI signal, as explained in chapter 6. The Header Data contain such information as: line number, length of the SDI payload, destination address, source address, and an identification for fixed block size or variable block size within the SDI payload. The Header Data thus contain the information of a label, which belongs to the User Data in the payload section of the container. These label data are placed in the payload of a packet (Figure 17), and this packet of Header Data then gets its own (different) label. The organization of the User Data and their location within the payload are not defined by the SDTI standard; this is done in separate application documents for different types of payloads. The payload is structured in Data Blocks, which can be of fixed block size or variable block size. Each 10-bit word of a data block can carry either 8 bits or 9 bits of User Data. Figure 18A shows the structure for a Data Block of fixed size and Figure 18B the same for a Data Block of variable size.
Video Connection

25

Fig. 18: (A) SDTI payload structure for Data Blocks of fixed size: Header, Type, Data Block. (B) SDTI payload structure for Data Blocks of variable size: Header, Separator, Type, Word Count, Data Block, End Code.

A data block header precedes each data block; its structure is shown in Figure 18. The header contains the following information: Type identifies the type of data stream (for example DVCPRO), and Word Count provides information about the length of a variable-size Data Block. With the variable block size, consecutive data blocks can be of any size. The next data packet can either be placed immediately after the previous one, or on the next line. A Separator and an End Code support correct word synchronization.
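A receiver walking through variable-size Data Blocks uses exactly this header information. In the sketch below the Separator and End Code word values are placeholder assumptions, not the values defined by the standard; the walking logic — read the Type, read the Word Count, consume that many data words, check the End Code — is the point.

```python
SEPARATOR, END_CODE = 0x309, 0x30A   # placeholder word values, not the standard's

def parse_variable_blocks(words):
    # Walk a stream of variable-size SDTI data blocks, each laid out as:
    # Separator, Type, Word Count, <Word Count data words>, End Code.
    blocks = []
    i = 0
    while i < len(words):
        assert words[i] == SEPARATOR, "each block must start with a Separator"
        block_type = words[i + 1]            # type of data stream (e.g. DVCPRO)
        word_count = words[i + 2]            # length of this variable-size block
        data = words[i + 3 : i + 3 + word_count]
        assert words[i + 3 + word_count] == END_CODE
        blocks.append((block_type, data))
        i += word_count + 4                  # jump to the next Separator
    return blocks

# Two consecutive blocks of different sizes in one stream:
stream = [SEPARATOR, 0x01, 3, 10, 20, 30, END_CODE,
          SEPARATOR, 0x02, 2, 40, 50, END_CODE]
blocks = parse_variable_blocks(stream)
```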

SUMMARY
The SDI signal is used as a container for packetized data signals. This SDI container has two compartments. The first one (the former horizontal blanking area) is used to carry a Data Header that is in no way different from the usual label explained in chapter 4. The second, larger compartment (the former area for the active line part of the video signal) is used to carry the actual (User) Data. Specifications for the organization of this payload (e.g. DVCPRO) are defined in separate application documents.


Chapter 8: The Meaning of Interoperability for Broadcast Facilities

The European Broadcasting Union (EBU), the largest user group worldwide, is concerned about the interoperability of equipment from different vendors. In the year 2000, EBU experts analyzed the situation and reported that they had found: different signal types (DV, DV-based and lots of MPEG-2 interpretations); different interfaces which can be used to transfer those signals; different file formats to transfer content as files; different streaming formats to transfer in real-time and faster; and, consequently, too many possibilities to transfer signals. The EBU experts suggested the following steps as a path to a solution: limit the number of different options; agree on common transfer mechanisms and on container and file formats for DV, DV-based and MPEG-2 content; and come to a common understanding of the term interoperability in the TV and IT worlds.


Along this proposed path the EBU experts defined levels of interoperability: Level 0: the ability to exchange data; Level 1: the ability to understand and process these data; and Level 2: reliable and maintained content quality throughout the TV production chain, which requires, for example, that different MPEG-2 encoder and decoder implementations achieve the same quality. The European Broadcasting Union summarized its opinion in the official EBU Statement D89 as follows: The EBU is of the opinion that interoperability is achieved if the essence (audio, video) generated at the source passes through the production process without impairment; metadata passes through the system without error; the components used to build a system can be interconnected by a simple plugging operation; the interconnection of system components is independent of manufacturer; and, when required, real-time transfer or transfer faster than real-time can be achieved.

Well-defined and standardized interconnections between individual devices are one of several necessary means to fulfill these requests and to make devices and their applications interoperable.


Chapter 9: The Meaning of Network Technology for Broadcast Facilities

Networks are the collection of elements that process, manage, transport and store information, enabling the connection and integration of multiple computing, control, monitoring and communication devices within one local facility (Local-Area Network, LAN) or even between locally separated facilities (Wide-Area Network, WAN). The main reasons why all hardware and software devices within a broadcast facility should be networked are: to share resources (storage devices, production equipment, NLE systems, DVE units, archives), and to share content (audio, video, data, metadata) among users. The implementation of network technology is expected to increase productivity. EBU experts reported that they are expecting: the functionality to exchange information in real-time, faster than real-time, but also slower than real-time (faster or slower than the captured event); extended use of file transfer; to give a higher number of users access to information; to apply so-called client/server technology to give more than one user access to the same resource at the same time; to get enhanced connection topologies (not only point-to-point); to transfer different signal types over one common transport medium; and to avoid picture quality degradation just for the purpose of the transmission.

The following characteristics are used to categorize different types of networks: Topology: the geometric arrangement of a network system; common are bus, star and ring topologies. Protocol: the protocol defines a common set of rules and signals that are used by devices on the network to communicate; one of the most popular protocols for LANs is Ethernet (explained in chapter 4). Computers on a network are sometimes called nodes, or processing locations. Every node has its own unique network address. Computers and devices that allocate resources for a network are called servers.

SUMMARY
Video network technology is expected to provide broadcast facilities with the tools to increase the efficiency of their business processes. This will mainly be achieved by giving a higher number of users access to information and resources, and through giving more than one user access to the same resource at the same time.


Chapter 10: DVCPRO, the DIF Packet and the World of Data Signals

It was explained that SDI is only a digital representation of an analog TV signal, while the features of a data signal are the packet and the label. Although digital VTRs (DVCPRO, for example) communicate quite well via the established SDI signal, internally they already apply the technique of packaging and labeling. DVCPRO uses quite a special package, the so-called DIF package (DIF means Digital Interface). The DIF package (sometimes called a DIF block as well) has its origin within the tracks of a DVCPRO recording, as shown in Figure 19.
Fig. 19: Digital Interface (DIF) packets in the magnetic tracks of a DVCPRO recording: a 6.35 mm tape running at 33.8 mm/s, with cue and CTL longitudinal tracks and 12 tracks per frame; each DIF packet consists of an ID (label) plus data.

Each DIF block consists of a 3-byte ID and 77 bytes of data as shown in Figure 20. The ID identifies the type of data in a DIF block and provides a block number.
Fig. 20: Digital Interface (DIF) packet as used in DVCPRO: a DIF-ID (bytes 0 to 2) followed by DIF-Data (bytes 3 to 79).

Each magnetic track (Figure 19) contains 135 DIF blocks of video data, 9 DIF blocks of audio data, 3 DIF blocks of video auxiliary data, and an additional 16 DIF blocks which carry error protection data for video and audio. At the digital interface, the video and audio blocks are mixed into a video & audio section of 144 DIF blocks. A DIF sequence is completed with an additional 6 DIF blocks, which carry, for example, information about video auxiliary data (VAUX) and time code. In 525/60, a video frame is composed of 10 such DIF sequences.
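The block counts above can be cross-checked with a little arithmetic. The interface data rate derived at the end is our own estimate from these figures (80-byte blocks at the 525/60 frame rate of 30000/1001 Hz), of which the roughly 25 Mb/s compressed video stream is the dominant part.

```python
# DIF blocks per magnetic track, as listed above.
VIDEO, AUDIO, VAUX = 135, 9, 3
ERROR_PROTECTION = 16                    # stays on tape, not at the interface
BLOCK_BYTES = 3 + 77                     # 3-byte ID + 77 bytes of data

# At the interface: a 144-block video & audio section plus 6 extra blocks
# (VAUX, time code, ...) form one DIF sequence.
sequence_blocks = (VIDEO + AUDIO) + 6
assert VIDEO + AUDIO == 144 and sequence_blocks == 150

# 525/60: one video frame is composed of 10 DIF sequences.
frame_blocks = 10 * sequence_blocks
frame_bytes = frame_blocks * BLOCK_BYTES
assert frame_blocks == 1500 and frame_bytes == 120_000

# Interface data rate at the 525/60 frame rate of 30000/1001 Hz:
rate_mbps = frame_bytes * 8 * (30_000 / 1001) / 1e6
assert 28.5 < rate_mbps < 29.0           # roughly 28.8 Mb/s
```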

SUMMARY
DVCPRO is well prepared for the data world. It already records data packages called DIF packets. Due to the nature of these DIF packets they can be easily placed into an SDTI container, as explained in chapter 7. They can be picked out of the SDTI container and re-recorded on DVCPRO without any loss of quality.


Chapter 11: The Transport of DVCPRO DIF Packets over SDTI

SDI as explained in chapter 7 can be considered as a container and underlying transport for data. SDTI provided the general rules as to how to load User Data into this container. SDTI does not specify exactly how the DVCPRO DIF packets should be placed within the container. This is done by a specific protocol (SMPTE 321M). The first step is to re-package the DIF packets. Telecommunications people call this Mapping. [They must like this procedure very much, because they use it intensively.] Re-packaging can mean: placing several smaller packets into one larger box, and attaching a new label to the box, thus creating a new larger packet, or distributing the contents of a large packet into a number of smaller packets. This procedure requires that each of the new smaller packets be given a unique number and that an algorithm be added so that the receiver of the smaller packets can re-combine the original contents of the large packet. So-called containers are used to transport data packages. These containers may fit the size of the packages or they may be too small or too large. If they are too small, you have to start a re-packaging procedure, similar to the one described above. If the container can carry more than one package, it can carry different types of packages. The freight papers, i.e. the label, must contain sufficient


information to identify each individual package. To simplify things, you can imagine the container as just a new type of package which can carry several other packages. Let's return to our DIF packet (the DIF block); its structure and size were explained in chapter 10. Figure 21 shows how two DIF blocks are combined into one fixed-size SDTI block.
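The numbered re-packaging described above can be illustrated with a short sketch (the function names are invented; no SMPTE protocol detail is implied):

```python
# Illustrative sketch of the re-packaging ("mapping") idea: the
# contents of a large packet are distributed into numbered smaller
# packets, so the receiver can re-combine the original contents.

def split_packet(payload: bytes, chunk_size: int) -> list:
    """Distribute a large payload into uniquely numbered small packets."""
    return [
        {"seq": i, "data": payload[pos:pos + chunk_size]}
        for i, pos in enumerate(range(0, len(payload), chunk_size))
    ]

def reassemble(packets: list) -> bytes:
    """Re-combine the original contents using the sequence numbers."""
    return b"".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

original = b"one large packet of program data"
small = split_packet(original, 7)
small.reverse()                          # simulate out-of-order arrival
assert reassemble(small) == original     # contents fully restored
```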
Fig. 21: A DIF block (A): DIF-ID (3 bytes) and DIF-Data (77 bytes). An SDTI block of fixed size with 170 words carrying 2 DIF blocks (B): SDTI-Type (1 word), Signal Type (6 words), DIF-ID (3 words), DIF-Data (77 words), DIF-ID (3 words), DIF-Data (77 words) and CRC (4 words). An SDTI container carrying SDTI blocks with DVCPRO DIF packets (C); the payload is 1,440 words at 270 Mb/s or 1,920 words at 360 Mb/s.

A specific ID called SDTI Type provides information about the use of fixed-size SDTI blocks. The SDTI format supports blocks of fixed or variable size as payload in the container. DVCPRO uses a version with a fixed-size block, as shown in Figure 21B. This block contains two original DIF packets, as shown in Figure 21A.


A specific label called Signal Type provides information such as: the specific type of video frame (number of lines, interlaced/progressive, field/frame frequency), the DIF structure format (DVCPRO or DVCPRO 50), and the transmission rate flag (normal, 2 times, 4 times). Figure 21C shows how the SDTI container is filled with SDTI blocks. An SDTI data block of the fixed-block variety (as used by DVCPRO) includes two DIF blocks and associated words. In the 525/60 system, the compressed video data stream within an SDI video frame is composed of 750 SDTI data blocks (1,500 DIF blocks) for the 25 Mb/s compression structure, or 1,500 SDTI data blocks (3,000 DIF blocks) for the 50 Mb/s structure. The figures for the 625/50 system are 20% higher. The data rate of an SDI connection (the physical layer) used for SDTI equals 270 or 360 Mb/s. This bandwidth is completely used when carrying uncompressed video, but only part of it is used when transporting a single compressed video signal as provided by DVCPRO. The space occupied by one DVCPRO signal is called a Channel Unit. Up to four DVCPRO signals can be carried through one 270 Mb/s SDTI pipe, as shown in Figure 22.
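These block counts can be cross-checked with a little arithmetic (a sketch; it assumes the 80-byte DIF block of chapter 10, two DIF blocks per fixed-size SDTI block, and a frame built from 10 DIF sequences of 150 blocks, doubled for DVCPRO 50):

```python
# Cross-check of the SDTI block counts for the 525/60 system.

DIF_BLOCK_BYTES = 80
DIF_PER_SDTI_BLOCK = 2

def blocks_per_frame(rate_mbps: int):
    """(SDTI data blocks, DIF blocks) in one 525/60 video frame."""
    frame_bytes = 10 * 150 * DIF_BLOCK_BYTES   # 25 Mb/s structure
    if rate_mbps == 50:
        frame_bytes *= 2                       # DVCPRO 50
    dif_blocks = frame_bytes // DIF_BLOCK_BYTES
    return dif_blocks // DIF_PER_SDTI_BLOCK, dif_blocks

assert blocks_per_frame(25) == (750, 1_500)
assert blocks_per_frame(50) == (1_500, 3_000)
```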
Fig. 22: The available bandwidth of a 270 Mb/s SDTI pipe can be used to carry up to 4 DVCPRO signals of 36 Mb/s each (one DVCPRO signal versus four DVCPRO signals).


Fig. 23: Series of raster lines in one SDTI frame forming Channel Units occupied by individual DVCPRO signals

A channel unit is a series of SDI raster lines into which SDTI data blocks are mapped. Each channel unit can carry an individual DVCPRO compressed video signal. In the case of 25 Mb/s DVCPRO a channel unit is composed of the SDTI data blocks of one compressed video frame and occupies the space of 94 SDI raster lines, as shown in Figure 23. One SDI video frame can contain up to 4 channel units with the 270 Mb/s interface (Figure 22) or 6 channel units with the 360 Mb/s interface.


DVCPRO 50 uses the space of two adjacent channel units. A 25 Mb/s DVCPRO signal that is to be transported at four times normal speed uses the space of all four channel units available with the 270 Mb/s interface. A 50 Mb/s DVCPRO 50 signal that is to be transported at twice normal speed also uses the space of all four channel units.
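The channel-unit bookkeeping above reduces to a one-line rule, sketched here (the helper name is ours, not from the standard):

```python
# Channel-unit bookkeeping for a 270 Mb/s SDTI pipe (4 channel units).

PIPE_UNITS_270 = 4

def channel_units(rate_mbps: int, speed_factor: int) -> int:
    """Channel units occupied by one DVCPRO transfer."""
    base = 1 if rate_mbps == 25 else 2   # DVCPRO 50 needs two adjacent units
    return base * speed_factor

assert channel_units(25, 1) == 1                 # one DVCPRO signal
assert channel_units(50, 1) == 2                 # DVCPRO 50
assert channel_units(25, 4) == PIPE_UNITS_270    # 25 Mb/s at 4x speed
assert channel_units(50, 2) == PIPE_UNITS_270    # DVCPRO 50 at 2x speed
```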

Fig. 24: DVCPRO 4X Transfer System, in which SDTI 4x connections link a 4X VTR, a DVCPRO server, an SDTI on-air server, a switcher, a DVCPRO non-linear editor and a fast-transfer archive, alongside SDI routers for preview.

SUMMARY
DVCPRO uses only about one quarter of the bandwidth supported by SDTI. This enables up to four different DVCPRO compressed video signals to be carried over one SDTI cable, or a video clip to be transported at four times normal speed from a camcorder VTR into an NLE editing server.


Chapter 12

The Transport of MPEG-2 over SDTI

Fig. 25: MPEG-2 frame types (I, B, P) over time (a); number of bytes per frame for MPEG-2 (b) and DVCPRO (c) types of compression.

The Video Compression Book explained in detail the differences between DV compression and MPEG-2 compression. DV compression as used by DVCPRO results in a fixed number of bytes per frame (Figure 25c), whereas MPEG-2 can result in a variable number of bytes per frame (Figure 25b). These differences are reflected in the procedure that is used to map the compressed video into the SDTI structure. The previous chapter explained that the DVCPRO signal is mapped into SDTI blocks of a fixed, predetermined size. The 25 Mb/s DVCPRO signal always occupies a fixed, predetermined number of 94 lines within the raster lines of the underlying SDI transport. This procedure can't be applied


to an MPEG-2 signal, whose byte count per frame varies as shown in Figure 25. SDTI does not specify how DVCPRO or MPEG-2 data are placed within the container provided by SDTI; this is done by specific protocols. DVCPRO uses a protocol defined in SMPTE 321M. Due to its inherent differences, MPEG-2 uses a completely different protocol, called SDTI-CP, which is defined in SMPTE 326M. CP stands for Content Package. SDTI-CP is a packaging structure for the assembly of: a system item, which carries control information and any metadata related to the picture, audio and auxiliary data items; a picture item; an audio item; and an auxiliary item, which carries ancillary data lines, teletext or other data. Figure 26 shows the basic structure of a Content Package. It is constructed of the four items listed above.
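The four items can be pictured as a simple data structure (a sketch only; the field names are ours, not those of SMPTE 326M):

```python
# A Content Package pictured as a plain data structure.

from dataclasses import dataclass

@dataclass
class ContentPackage:
    system: dict            # control information and related metadata
    picture: bytes          # MPEG-2 coded frame: a *variable* number of bytes
    audio: bytes
    auxiliary: bytes = b""  # ancillary data lines, teletext, other data

# An I frame typically needs far more bytes than a B frame:
i_frame = ContentPackage({"timecode": "01:00:00:00"}, b"\x00" * 35_000, b"\x00" * 6_000)
b_frame = ContentPackage({"timecode": "01:00:00:01"}, b"\x00" * 8_000, b"\x00" * 6_000)
assert len(i_frame.picture) != len(b_frame.picture)
```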

Fig. 26: Basic structure of a Content Package: a system, picture, audio and auxiliary item.


Fig. 27: Series of raster lines in one SDTI frame for a Content Package consisting of system, picture, audio and auxiliary items.

The system, picture, audio and auxiliary items are each formatted as SDTI variable blocks. The data structure of a variable length block was shown in Figure 18 of chapter 7. The data in each SDTI variable block continue through as many lines as necessary.

The filling of the payload column shown in Figure 27 varies over time due to the variable number of bytes per frame that is typical of MPEG-2 compression. This variation is shown in Figure 28 over a 12-frame MPEG-2 GOP sequence.


On the one hand, SDTI-CP offers an extremely flexible means of transferring content over SDTI; on the other, it demands unnecessarily complex receivers/decoders if all possibilities are to be supported. To limit these requirements and allow practical working devices, decoder templates for the encoding of SDTI-CP with MPEG coded picture streams were defined (SMPTE RP 204). This situation is well known from MPEG itself: theoretically it also offers extreme flexibility, which in practice leads to very complex problems. Therefore, the so-called flexibility of MPEG-2 was first restricted by operating ranges (forthcoming SMPTE Recommended Practice 213) and finally by one single specific implementation (forthcoming SMPTE Standard 356M).

Fig. 28: Variable filling of the raster lines of one SDTI frame with a Content Package carrying an MPEG-2 video item shown over a 12-frame GOP sequence.

SUMMARY
Differences between DV and MPEG-2 compression are reflected in the procedure used to map compressed video into the SDTI structure. MPEG-2 video is part of a Content Package, which carries a system, picture, audio and auxiliary data item. MPEG-2 compression leads to a variable number of bytes per frame, which results in variable sizes of the Content Packages. In contrast, DVCPRO signals provide a constant number of bytes per frame. The application of SDTI for MPEG-2 signals is therefore much more complex than for DVCPRO signals.


Chapter 13

The Stream Transfer of Data

A Stream is a collection of data sent over a data channel in a sequential fashion as a continuous flow of data (video, audio, etc.). The bytes are typically sent in small packets, which are reassembled into a contiguous stream. In the context of TV broadcast, streaming of television program material involves the following characteristics:
- it is a continuous process in which the transmitter pushes program material to receivers that may join or leave the stream at any time; the receiver may need some time to synchronize itself to the streamed signal, e.g. in TV applications the receiver needs to wait at least until the start of the next frame,
- there is usually no return path between transmitter and receiver; the transmitter plays out the material without receiving feedback from the receivers, so there is no capability to control data flow or to re-transmit lost or corrupt data,
- critical studio operations such as playout, output monitoring and video editing require streams with extremely low error rates, while less-critical operations such as video browsing can use streams which require lower bandwidth and exhibit higher error rates,
- the mean reception frame-rate is dictated by the sending frame-rate; the transmission frame-rate is not necessarily equal to the material's original presentation frame-rate, thus allowing faster or slower than real-time streaming between suitably configured devices, and
- the transfer runs from one transmitter to one or more receivers.

The received quality of a streamed TV signal is directly related to the Quality of Service (QoS) of the data link. For network performance, QoS characteristics are measured in terms such as bandwidth, bit error rate, jitter and delay (latency), and access set-up time. In streaming there is usually no return path to request a retransmission, so the receiver must make the best of the received data. There is no guaranteed quality, just a so-called best effort. There are obvious similarities between data streaming and the transfer (pushing) of digital video via SDI or of analog PAL/NTSC via analog links.
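The "join at any time" behaviour can be sketched in a few lines (purely illustrative; no real transport is modelled):

```python
# "Join at any time" sketch: an endless transmitter pushes frames
# with no feedback; a late receiver simply waits for the next frame.

def transmitter():
    """Pushes numbered frames forever, never waiting for receivers."""
    n = 0
    while True:
        yield n
        n += 1

def receive(stream, join_at: int, take: int) -> list:
    """Collect `take` frames, starting from the frame after joining."""
    out = []
    for frame in stream:
        if frame < join_at:
            continue          # frames sent before we joined are gone
        out.append(frame)
        if len(out) == take:
            return out

assert receive(transmitter(), join_at=100, take=3) == [100, 101, 102]
```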

SUMMARY
A stream receiver may join or leave a transmitted stream at any time without losing information. Because there is no return path to the transmitter, there is no capability for the re-transmission of lost or corrupt data. The received quality is directly related to the Quality of Service (QoS) of the data link. A stream transfer of data is comparable to the transfer of digital video via SDI or of analog PAL/NTSC over analog links. It therefore fits the TV signal model very well.


Chapter 14

The File Transfer of Data

A File is a collection of data or information that has a name, called the filename. All information stored in a computer must be in a file. There are many different types of files, often defined for a specific application (computer program). In the context of TV broadcasting, file transfer of television program material involves each of the following characteristics:
- there is a return path between the transmitter and the receiver,
- a file is both fixed and limited in length,
- the dominant requirement when moving or copying a file is that what is delivered at the destination is an exact bit-for-bit replica of the original; guaranteed delivery quality is achieved by means of the retransmission of corrupted or lost data packets,
- in contrast to a stream transfer, the receivers may not join or leave the file transfer at any time; if the start or the end of a file is not part of the received data, the whole file will be lost,
- the transmission rate may not have a fixed value and the transmission may even be discontinuous; although the transfer may often be required to take place at high speed, there is no demand that it take place at a steady rate, or be otherwise synchronized to any external event or process.
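The guaranteed-delivery characteristic described above boils down to "retransmit until acknowledged", sketched here (illustrative only; the loss model is a single random draw per attempt, standing in for a lost packet or a lost acknowledgment, not a real protocol):

```python
# Guaranteed delivery by retransmission (sketch): every packet is
# resent until it gets through.

import random

def send_file(packets: list, loss_rate: float, rng: random.Random) -> list:
    """Stop-and-wait transfer over a lossy channel."""
    delivered = []
    for pkt in packets:
        while True:                       # retransmit until acknowledged
            if rng.random() >= loss_rate:
                delivered.append(pkt)
                break
    return delivered

original = [b"chunk-%d" % i for i in range(20)]
received = send_file(original, loss_rate=0.3, rng=random.Random(42))
assert received == original               # exact bit-for-bit replica
```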

An example is the transfer of a data file between disk servers. The conversion from stream to file and vice versa is quite often done automatically in the background. When a certain scene or video clip is to be moved from a DVCPRO VTR to a server, the operator defines the in and out points through the time-code information related to the first and last frames of the selected scene. After the operator or the control system has started the VTR, a video and audio stream leaves the VTR via the SDTI output. At the in-point time, the server starts to record the clip onto its hard disk(s). At this moment the server assigns a filename to the clip, and the disk-internal File Allocation Table (FAT) notes the point on the disk where the data with this filename starts and where the clip ends. Within the hard disk, the FAT links the assigned filename to physical locations (addresses) on the disk. It is now possible to move this video clip via file transfer, as defined above, between disks or servers. It is also possible to stream the payload of this file to a monitoring device or back to another VTR via an SDTI or SDI server output. At this moment the file is stripped of its filename and transferred as a stream to the other devices according to the rules defined in chapter 13. Because the transmission rate for the transfer of a file may not have a fixed value, and the transmission may even be discontinuous, file transfer doesn't suit the TV signal model well. Although file transfer offers some


advantages, the easy use of standard IT links for example, we should not expect it to become a general replacement for stream transfers in broadcast facilities. We should consider file transfer more as a replacement for the manual transport of videotapes, because you can't view or process the content during either method of transport. Both can be done in the background, moving content from one location to another.
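The File Allocation Table mechanism described earlier can be modelled as a toy disk (all names here are invented for illustration):

```python
# Toy model of the FAT idea: the table links a filename to the
# physical block addresses that hold the clip.

class ToyDisk:
    def __init__(self):
        self.blocks = []   # physical storage, addressed by index
        self.fat = {}      # filename -> list of block addresses

    def record(self, filename: str, chunks: list) -> None:
        """Store incoming stream chunks and note their addresses."""
        addresses = []
        for chunk in chunks:
            addresses.append(len(self.blocks))
            self.blocks.append(chunk)
        self.fat[filename] = addresses

    def stream_out(self, filename: str):
        """Strip the filename and play the payload back as a stream."""
        for address in self.fat[filename]:
            yield self.blocks[address]

disk = ToyDisk()
disk.record("clip_0001", [b"frame1", b"frame2", b"frame3"])
assert list(disk.stream_out("clip_0001")) == [b"frame1", b"frame2", b"frame3"]
```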

SUMMARY
There are five major differences between a stream and a file transfer. A file transfer requires a return path between transmitter and receiver; a stream transfer does not. A file transfer does not require a specified connection quality (QoS); a stream transfer does. A file transfer guarantees that the content quality sent is the content quality received; a stream transfer only does its best. A file transfer does not guarantee real-time transfer; that depends on the selected QoS. A stream receiver can join or leave a transmitted stream at any time without losing information; a file receiver cannot. File transfers will continue to be used in addition to stream transfers but will not become a general replacement for streaming.


Chapter 15

Fiber Channel

Fiber Channel is a network technology that has been accepted as a high-performance computer peripheral interconnect and is very well suited for use in broadcast studio applications. For these applications, Fiber Channel is used by video disk recorder vendors and for shared storage attachments. Commonly available Fiber Channel links support payload rates of about 100 Mb/s. A special Fiber Channel Audio/Video (FC-AV) standard has recently been approved. Fiber Channel connections can be based on a variety of transmission protocols, including SCSI, TCP/IP and ATM. Despite the name, Fiber Channel can run over both copper and fiber media; the main tradeoff is that although longer distances can be achieved with fiber, it is more expensive. Speeds of up to 100 Mb/s can run on both copper and fiber; higher rates require fiber media. Fiber Channel defines three topologies, namely Point-to-Point, Fabric and Arbitrated Loop. A Point-to-Point topology, as shown in Figure 29A, is the simplest of the three. It consists of only two Fiber Channel devices connected directly to each other. The transmit fiber of one device connects to the receive fiber of the other device, and vice versa. There is no sharing of media, which allows the devices to enjoy the total bandwidth of the connection.


The Fabric topology, as shown in Figure 29B, is a network topology based on cross-point switches that can connect up to 2^24 (about 16 million) devices. The benefit of this topology is that many devices can communicate at the same time; the media is not shared. The Arbitrated Loop topology, as shown in Figure 29C, has become the most dominant, but it is also the most complex. It's a cost-effective way of connecting up to 127 ports in a single network without the need for a cross-point switch.
Fig. 29: Fiber Channel Point-to-Point topology (A); Fabric topology with a cross-point switch (B); Arbitrated Loop topology (C).



The Fiber Channel data structure consists of Frames, Sequences and AV Containers: Fiber Channel defines a variable length Frame consisting of a label and a packet of up to 2,112 bytes of data. That Frame is just a different name for our well-known packet and forms the basic unit of communication between two devices. It is similar to the structure of an Ethernet frame shown in Figure 10 of chapter 4. A Fiber Channel Sequence is a series of one or more related frames transmitted unidirectionally from one device to another. For each frame transmitted in a Sequence, the sequence count information in the label is increased by one. This provides a means for the recipient to arrange the frames in the order in which they were transmitted and to verify that all expected frames have been received.

Fig. 30: DVCPRO interoperability maintains DVCPRO quality via a Fiber Channel network linking DVCPRO non-linear editing systems and a DVCPRO server, each attached through a Fiber Channel network card.
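The Sequence mechanism, with its per-frame sequence count, can be sketched as follows (the frame fields are illustrative; only the 2,112-byte payload limit is taken from the text):

```python
# Fiber Channel Sequence sketch: payloads of up to 2,112 bytes per
# frame, each frame carrying a sequence count so the recipient can
# re-order frames and verify completeness.

MAX_FRAME_PAYLOAD = 2_112

def to_sequence(data: bytes) -> list:
    """Split data into the frames of one FC Sequence."""
    return [
        {"seq_count": i, "payload": data[pos:pos + MAX_FRAME_PAYLOAD]}
        for i, pos in enumerate(range(0, len(data), MAX_FRAME_PAYLOAD))
    ]

def receive_sequence(frames: list) -> bytes:
    """Re-order by sequence count and check that no frame is missing."""
    frames = sorted(frames, key=lambda f: f["seq_count"])
    for expected, frame in enumerate(frames):
        if frame["seq_count"] != expected:
            raise ValueError("frame %d is missing" % expected)
    return b"".join(f["payload"] for f in frames)

data = bytes(10_000)
shuffled = to_sequence(data)
shuffled.reverse()                        # frames arrive out of order
assert receive_sequence(shuffled) == data
```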


The forthcoming Fiber Channel Audio-Video (FC-AV) standard defines the mapping of various digital video formats to Fiber Channel. The mapping is based on an AV container system; a single AV Container maps exactly into a single Fiber Channel Sequence. Each Container is intended to hold the information of one TV frame. Data transported in a container are segregated into Objects, as shown in Figure 31B. An Object is a collection of data treated as a discrete entity. Object types are, for example, video, audio or ancillary data.
Fig. 31: The Fiber Channel AV-Container with DVCPRO content. Each FC-AV container holds a header and Objects 0 to n; for DVCPRO, Object 0 carries SDTI information, Object 2 carries the DVCPRO data, and the remaining objects are not used. The sub-container is composed of a Stream Header (with information taken from the SDTI Header: Descriptor, Time Stamp and Packet Length) followed by CDS packets, whose content (SDTI-Type, Signal Type, DIF-ID/DIF-Data pairs and CRC) is taken from the SDTI stream.


One of the possible Object types is a Compressed AV Stream delivered via SDTI from a DVCPRO VTR. The format of a Compressed AV Stream is based on an FC-AV sub-container format shown in Figure 31C. The sub-container is composed of Stream Header and Compressed Data Stream (CDS) packets. Figures 31C to 31E show how the content for the CDS packets is taken from the SDTI stream for DVCPRO compression. Information for Object 0 (SDTI information) is taken over from the SDTI Header of the SDTI stream, as explained in chapter 7. In the 525/60 system, one compressed AV Stream Object, which equals the information of one video frame, is composed of 750 CDS packets for 25 Mb/s DVCPRO compression. This number of CDS packets equals the number of SDTI data blocks over the period of an SDI video frame.

SUMMARY
The high performance of a Fiber Channel network makes it ideally suited for connecting video servers and for shared storage attachments. A special Fiber Channel Audio / Video (FC-AV) standard has recently been approved. The Arbitrated Loop structure has become the most dominant Fiber Channel topology.


Chapter 16

The ATM Wide Area Network

ATM, the Asynchronous Transfer Mode, is a network technology based on asynchronous time-division multiplexing that uses small packets of a fixed size. The small, constant size allows ATM equipment to transmit video, audio and computer data over the same network. The connection established by ATM is usually unidirectional. Current ATM implementations support data transfer rates from 25 to 622 Mb/s. The most important and distinctive part of the ATM system is the packet. Since the packet has a specific length, a new name has been given to it: the cell. The reason for the new name was to avoid confusion, but for you, the reader, it is more important to understand that a cell is just another packet. The ATM cell is 53 bytes long, 5 of which are used for header information and 48 for the data payload. The cell payload does not contain any error detection or correction. Each cell contains a destination address and can be multiplexed asynchronously over a link. ATM uses a star topology: all devices are connected to one ATM switch, which directs incoming cells to the right output. ATM creates a fixed channel, or route, between two points whenever data transfer begins. When purchasing ATM service, you can generally choose between four different types of service:
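The fixed 53-byte cell can be illustrated with a short segmentation sketch (the header layout here is invented; a real ATM header carries VPI/VCI and other fields):

```python
# ATM segmentation sketch: every cell is exactly 53 bytes,
# 5 bytes of header and 48 bytes of payload.

CELL_SIZE, HEADER_SIZE, PAYLOAD_SIZE = 53, 5, 48

def to_cells(data: bytes, channel_id: int) -> list:
    """Cut a byte stream into fixed-size cells, padding the last one."""
    cells = []
    for pos in range(0, len(data), PAYLOAD_SIZE):
        payload = data[pos:pos + PAYLOAD_SIZE].ljust(PAYLOAD_SIZE, b"\x00")
        header = channel_id.to_bytes(2, "big") + b"\x00" * (HEADER_SIZE - 2)
        cells.append(header + payload)
    return cells

cells = to_cells(bytes(1_000), channel_id=7)
assert all(len(cell) == CELL_SIZE for cell in cells)
assert len(cells) == 21                   # 1,000 bytes -> 21 cells
```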


- Constant Bit Rate (CBR) specifies a fixed bit rate so that data are sent in a steady stream. This is analogous to a leased line.
- Variable Bit Rate (VBR) provides a specified throughput capacity, but data are not sent evenly. This is a popular choice for voice and video-conferencing data.
- Unspecified Bit Rate (UBR) does not guarantee any throughput levels.
- Available Bit Rate (ABR) provides a guaranteed minimum capacity but allows data to surge at higher capacities when the network is free.

Synchronization between transmitting and receiving devices is achieved through the transport of timing references within the stream. In order to stream content between TV studios over a wide area, ATM is used according to the so-called AAL 1 (ATM Adaptation Layer 1) specifications. Its Quality of Service provides Constant Bit Rate (CBR) and mechanisms for timing recovery. Both of these are required for the synchronous transmission of a TV signal via an asynchronous line. The AAL 1 specification includes a Forward Error Correction (FEC) and byte-interleaving mechanism that is capable of recovering up to four lost cells in a group of 128. This FEC significantly improves the quality of the received stream. A number of suppliers offer ATM network access devices with SDTI interfaces.
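The interleaving part of this mechanism can be sketched generically (illustrative only; the Reed-Solomon correction itself is omitted, and the matrix size is chosen merely to echo the 128-cell group):

```python
# Interleaving sketch: cells are written row by row but sent column
# by column, so a burst of lost cells on the line turns into single
# losses spread over many rows, which an FEC can then correct.

def interleave(cells: list, rows: int) -> list:
    cols = len(cells) // rows
    matrix = [cells[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(sent: list, rows: int) -> list:
    cols = len(sent) // rows
    out = [None] * len(sent)
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = sent[c * rows + r]
    return out

cells = list(range(128))                   # one group of 128 cells
sent = interleave(cells, rows=4)
assert deinterleave(sent, rows=4) == cells

burst = sent[10:14]                        # 4 consecutive cells lost on the line
rows_hit = {cell // 32 for cell in burst}  # row of each lost cell (32 columns)
assert len(rows_hit) == 4                  # every row loses only one cell
```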


Fig.32: Layer structure of a DVCPRO Network


All varieties of DVCPRO compression formats, covering both SDTV and HDTV, can now be efficiently transported over ATM. To make this possible, Panasonic developed a mapping format from DV-based program material to ATM. This mapping format, called the ATM wrapper, allows faster than real-time or multi-program video streaming. The simultaneous transfer of four individual DVCPRO programs, the four-times-faster-than-real-time transfer of a single 25 Mb/s program, and the real-time transfer of DVCPRO HD can all be performed over an ATM link at 155.52 Mb/s. The layered structure of a DVCPRO network system is shown in Figure 32. The upper application layer is formed by DVCPRO. The middle layer is the adaptation layer, which places the compressed AV stream into the FC-AV container, as explained in chapter 15. The new Wrapper Layer provides a generic mechanism for all kinds of video, audio, data and metadata and is therefore called the Common Layer.



Since the above-mentioned AAL1 process is carried out asynchronously, video frame synchronization must be established at this upper Common Layer. This is done by defining a SYNC Stream Block (SSB), which is transmitted in one video frame period. Figure 33 shows that the SSB consists of the SSB Header and one (or more) FC-AV containers (explained in chapter 15). More than one container is used for faster than real-time transmission or multi-program transmission.

The SSB Header contains information such as the SMPTE Universal Label, length information, the number of containers and the number of programs, as well as information related to the content of each container, such as program number and container size.

Fig. 33: Sync Stream Block (SSB) as ATM wrapper for the FC-AV container


Fig. 34: Live streaming and content transfer over an ATM wide-area network: ATM units connect a production studio, affiliate station, relay station and net-station for contribution (including live relay from an HD camera covering news, sport or events), program content transfer, and distribution to the home or office.

Due to the use of a generic ATM wrapper, which is based on the FC-AV container model, TV signals of different compression formats can be transported via an ATM wide area network. Figure 34 shows an application example.



SUMMARY
Panasonic developed a mapping format from DV-based program material to ATM. This mapping format, called the ATM wrapper, allows faster than real-time or multi-program video streaming. A number of suppliers offer ATM network access devices with SDTI interfaces. All varieties of DVCPRO compression formats, covering SDTV and HDTV, can now be transported efficiently over ATM in accordance with its AAL 1 specification.


Chapter 17

IEEE 1394: More than just a Network for the Consumer Market

The IEEE 1394 bus was designed to support a variety of digital audio/video applications. Some of these applications are tailored to the consumer or industrial market. The version used in those environments relies on a specific cable and connector type and is limited to cable lengths of about 4.5 m / 14' 9". Some companies have announced cables that work at lengths of up to 100 m / 109 yd. The physical topology of IEEE 1394 is a tree or daisy-chain network with up to 63 devices. Each device, or node, connected to the 1394 serial bus supports automatic configuration. Each time a 1394 device is added to or removed from the serial bus, the 1394 bus reconfigures itself. This allows for hot plugging of devices and means that 1394 devices can communicate with each other without needing a host system or bus manager. The physical connections between nodes are made with a single cable that carries power and balanced data in each direction. The 1394 serial bus provides for differing requirements by supporting data rates of 100, 200 and 400 megabits per second. In the near future, the 1394 serial bus will support data rates of 800, 1600 and 3200 megabits per second. Unlike most other protocols, IEEE 1394 provides the capability for isochronous as well as asynchronous transmission. Isochronous means a form of data transmission that guarantees a certain minimum data rate, as


required for time-dependent data such as video or audio. Isochronous can be contrasted with asynchronous, which refers to processes in which data streams can be broken by random intervals, and with synchronous processes, in which data streams can be delivered only at specific intervals. Isochronous service is not as rigid as synchronous service, but not as lenient as asynchronous service. To transmit data, a 1394 device first requests control of the physical layer. With asynchronous transport, the addresses of both sender and receiver are transmitted, followed by the actual packet data. Once the receiver accepts the packet, a packet acknowledgment is returned to the original sender. With isochronous transport, the sender requests an isochronous channel with a specific bandwidth. Isochronous channel IDs are transmitted, followed by the packet data. The receiver monitors the incoming data's channel ID and accepts only data with the specified ID. The bus sends a start indicator in the form of a timing gap. This is followed by the time slots for isochronous channels, as shown in Figure 35. Whatever time remains may be used for any pending asynchronous transmission. Since the slots for each of the isochronous channels have been established, the bus can guarantee their bandwidth and thus their successful delivery.
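One such 125-microsecond cycle can be sketched as a simple scheduler (illustrative only; real 1394 arbitration is more involved, and the numbers and names are ours):

```python
# One 125-microsecond 1394 cycle as a toy scheduler: reserved
# isochronous slots come first, asynchronous traffic fills whatever
# time remains.

CYCLE_US = 125

def build_cycle(iso_reservations_us: dict, async_queue_us: list):
    """Return (schedule, time used) for one cycle."""
    schedule, used = [], 0
    for channel, slot in iso_reservations_us.items():
        schedule.append(("iso", channel, slot))   # guaranteed bandwidth
        used += slot
    for slot in async_queue_us:                   # best effort in the rest
        if used + slot > CYCLE_US:
            break
        schedule.append(("async", None, slot))
        used += slot
    return schedule, used

schedule, used = build_cycle({1: 40, 2: 40}, [30, 30])
assert [kind for kind, _, _ in schedule] == ["iso", "iso", "async"]
assert used == 110   # the second async packet must wait for the next cycle
```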
Fig. 35: Isochronous channels within an IEEE 1394 packet frame of 125 microseconds: a start indicator, time slots for isochronous channels 1 and 2, and the remaining time for asynchronous packets.


The IEEE 1394 specification defines a basic mechanism for real-time data transport, but does not establish the protocols needed for specific application requirements such as transporting DVCPRO, DV or MPEG streams. This information is provided by the IEC-61883 protocol. The protocol covers three areas: 1. the Common Isochronous Packet (CIP) format, 2. Connection Management Procedures for making isochronous connections between devices, and 3. a framework for sending commands and controls from one device to another. A Common Isochronous Packet (CIP), as shown in Figure 36, is used to transport AV data. Common refers to the fact that this CIP is used for all kinds of AV data (DV, DVCPRO, MPEG). In the following section, we will refer to the DVCPRO implementation. A DIF block as explained in chapter 10 is the basic unit for all transmissions over IEEE 1394. Each IEEE 1394 isochronous stream packet is composed of six DIF blocks, assembled without regard to DIF sequence boundaries.
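The packet arithmetic is worth a quick check (a sketch assuming the 80-byte DIF block of chapter 10 and the 1,500 DIF blocks per 25 Mb/s 525/60 frame quoted in chapter 11):

```python
# Quick check of the isochronous packet arithmetic.

DIF_BLOCK_BYTES = 80
DIF_PER_PACKET = 6

payload_bytes = DIF_PER_PACKET * DIF_BLOCK_BYTES
assert payload_bytes == 480               # matches the data field in Figure 36
assert payload_bytes // 4 == 120          # quadlets (1 quadlet = 4 bytes)

packets_per_frame = 1_500 // DIF_PER_PACKET
assert packets_per_frame == 250           # isochronous packets per video frame
```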


Fig. 36: Common Isochronous Packet (CIP) for transport over IEEE 1394: the IEEE 1394 header and header CRC, the two-quadlet CIP header, a data field of 120 quadlets (480 bytes) carrying DIF blocks, and a closing CRC.
The IEEE 1394 header contains such information as data length, time-code and channel number. Isochronous packets are not sent to a specific address but are identified by a channel number. The first two quadlets (32 bits each) of the isochronous payload constitute the CIP Header. The key bits are: information regarding the source ID, indicating where the data came from; a description of the data block size (the only information which the receiving side needs in order to reconstruct the original source packet); the compression format; and raster format information.


Although the IEEE 1394 interface is primarily designed for consumer devices, including mini-DV camcorders and VTRs, Panasonic supports 1394 through its DVCPRO professional video format. This allows desktop non-linear editing systems to accept DVCPRO formatted video in the compressed domain and edit the contents. The 1394 interface was originally developed by Apple Computer and named FireWire. i-Link and other brand names are also used, but these names are basically interchangeable with IEEE 1394 for consumer use. However, the DVCPRO implementation of 1394, referred to as the DVCPRO Terminal, differs in three important ways from other implementations:
- the DVCPRO data stream uses locked audio, whereas consumer video data may use unlocked audio as well,
- the part of the CIP Header which identifies the DV data is different for the DVCPRO data stream, and
- the DVCPRO and DV data streams for 60 Hz field rate both follow the 4:1:1 sampling structure, but for 50 Hz field rate DVCPRO uses 4:1:1, whereas consumer DV uses 4:2:0.

SUMMARY
IEEE 1394 devices allow for hot plugging and can communicate with each other without needing a bus manager. IEEE 1394 provides the capability for isochronous transmission, guaranteeing a reserved bandwidth for the data transport. Panasonic supports 1394 through its DVCPRO professional video format. This allows desktop non-linear editing systems to accept DVCPRO formatted video in the compressed domain and edit the contents.


CHAPTER 18

The Universal File Exchange Format

Chapters 13 and 14 dealt with the differences between file and stream transfers. To use the benefits of a file transfer, namely the guaranteed delivery of all bits and bytes, it is mandatory to standardize one File Exchange Format, or at least to document a limited number of them. A File Exchange Format can be moved between devices from different vendors over different types of transport. It is more complex than a file format that is created within a computer to store parts of an incoming video stream on its own attached storage devices. One can say that the file format sits on top of a software pyramid, as shown in Figure 37.
Fig. 37: The software pyramid

file format (SW)
executable program (SW)
operating system (SW)
type of CPU (HW)


The foundation of the software pyramid is determined by the CPU type used. If you change the CPU type, you have to change the operating system as well. The executable programs (the .exe files) sit on top of the operating system. They form the applications and define their own application-dependent file formats. That means that you usually can't replace one of the building blocks below the file format without affecting the file format itself (Figure 38).
Fig. 38: Removal of a lower layer may affect the file format

(The figure repeats the software pyramid: the file format rests on the executable program (SW), the operating system (SW) and the type of CPU (HW); removing the operating-system layer leaves the file format without support.)

Innovation cycles are becoming shorter and shorter. This affects hardware (CPUs), operating systems, application programs and, of course, file formats. If you have ever tried to read a document file you wrote 10 years ago with the latest version of your word-processing program, you'll know why: the latest software simply tells you "I can't open this file!" This is not acceptable in broadcast applications for files containing program content.


On the other hand, computer-based technology has already proven its usefulness in many applications within the professional broadcasting environment. Prominent examples can be found worldwide in the area of server systems for production, post-production, playout and archiving. The common denominator in all these applications is the transport of program data and its storage on non-linear media on the basis of proprietary file formats. The European Broadcasting Union has already expressed a strong requirement to share files between systems from different manufacturers, both within the studio and between different broadcast facilities. The expected operational and economic benefits of adopting exchange in file form can briefly be summarized as follows:
- File exchange avoids the possibility of introducing any picture-quality degradation during the data transport, due to the guaranteed delivery of all bits and bytes (see chapter 14).
- Metadata, audio, video and data can be transferred in one common wrapper.
- Systems can be built using general computer equipment, an economic benefit to overall system costs.
- The transmission rate need not have a fixed value, and the transmission may even be discontinuous; this enables economic file transport in the background through the use of unoccupied bandwidth in available networks. There is no demand for the transfer to take place at a steady rate, or to be otherwise synchronized to any external event or process.


Organizations like the EBU are particularly concerned about program exchange between their members. In the past this took place via an exchange of recorded videotapes, which is why recording standards were so important. In the future, the exchange of programs may take place via file transfer, depending on the cost of wide-area network services. EBU experts have requested that a future common file format should not be tied to any specific hardware (CPU) or operating system. It should not be payload-specific, and it should support the mapping of all major compression schemes into the file body.
Fig. 39: General structure of an AV-file
Preamble: label, allocation table, general metadata
Body: content = essence + metadata (DVCPRO, MPEG, metadata)
Post-amble: EOF

Any File Exchange Format should be well defined and in the public domain. Panasonic fully supports this view. Work is going on in various trade and standards organizations to fulfill these requirements. The general structure of any AV-file, as shown in Figure 39, consists of: a preamble: label, table of file content, and general metadata to describe the information located in the body,


the file body itself, into which payload data such as uncompressed or compressed video, audio, data and additional metadata is mapped, and a post-amble with an end-of-file (EOF) marker. Although file transfer offers some advantages, we should not expect it to replace stream transfer. A receiver may join or leave a stream at any time, which allows the receiver to select only a part of the content delivered via the stream. The file structure shown in Figure 39 does not permit this so-called partial file access. If required, the file header information has to be repeated within the stream at adequate intervals to permit receivers to synchronize themselves onto transfers already in progress. This makes the file structure much more complex. The basic structure explained above is not ideal for archiving programs on tape either. To search for a certain program, it is necessary to recover the metadata first. Therefore, it is preferable to store the metadata on a different, separate storage medium which provides much shorter access times than tape. The metadata then provides a link to the content on tape and vice versa.
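To make the preamble/body/post-amble idea concrete, here is a deliberately toy wrapper; all field names and sizes are invented for this example and belong to no standardized File Exchange Format:

```python
# Toy illustration of the Figure 39 structure: a label and a small
# metadata table in the preamble, the essence payload in the body,
# and an end-of-file marker as the post-amble. Every field name and
# size here is an invented assumption, not a real wrapper format.
import json
import struct

EOF_MARKER = b"EOF!"

def wrap(label: str, metadata: dict, essence: bytes) -> bytes:
    """Build a file image: preamble (label + metadata), body, post-amble."""
    meta = json.dumps(metadata).encode()
    preamble = label.encode().ljust(16, b"\0") + struct.pack(">I", len(meta)) + meta
    return preamble + struct.pack(">I", len(essence)) + essence + EOF_MARKER

def unwrap(blob: bytes):
    """Recover label, metadata and essence; verify the post-amble."""
    label = blob[:16].rstrip(b"\0").decode()
    (mlen,) = struct.unpack(">I", blob[16:20])
    metadata = json.loads(blob[20:20 + mlen])
    pos = 20 + mlen
    (elen,) = struct.unpack(">I", blob[pos:pos + 4])
    essence = blob[pos + 4:pos + 4 + elen]
    assert blob[pos + 4 + elen:] == EOF_MARKER, "missing post-amble"
    return label, metadata, essence

blob = wrap("AVFILE", {"codec": "DVCPRO", "rate": "25 Mb/s"}, b"\x00" * 480)
label, meta, essence = unwrap(blob)
```

Note how a reader must parse the preamble before it can locate the essence: this is exactly why the plain file structure does not support partial access in mid-stream.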

SUMMARY
Work is going on to define a File Exchange Format that is independent of any specific hardware or vendor. We may not see a universal File Exchange Format that covers all applications. Different applications naturally have different requirements. Though file transfer offers some advantages, we should not expect it to replace stream transfer.


Conclusion

Fig. 40: DVCPRO lossless Contribution Network

The use of modern IT infrastructures increases the efficiency of broadcast facilities. A prerequisite for this was the transformation of the digital TV signal of the Serial Digital Interface (SDI) into a truly data signal. The Serial Data Transport Interface (SDTI) laid the foundation for that transformation. Now networking technologies can be applied which allow the sharing and transport of content between individual devices and users. Because DVCPRO already records true data packets, it is well prepared for this data world, as shown in Figure 40. Table 1 summarizes the typical properties of the Network-Interface Types and Network Containers discussed in this book.
(The figure shows DVCPRO content travelling losslessly between Studio A and Studio B: over short distances directly via IEEE 1394, inside a facility via Fiber Channel and SDTI, and over long distances via ATM units connected through microwave, satellite or ATM wide-area-network links.)


Table 1: Network-Interface Types and Network Containers
(The original table compares SDTI, Ethernet, Fiber Channel, ATM and IEEE 1394. For each interface it lists the main application (desktop, LAN, WAN, stream transfer or file transfer) and the properties in that application: uni-directional or bi-directional operation, guaranteed bandwidth, Quality of Service, studio-synchronous, isochronous or asynchronous transmission, and whether the interface can carry DVCPRO.)


References

1) The Video Compression Book, Panasonic, 1999.
2) Report of the EBU/SMPTE Task Force on Harmonized Standards for the Exchange of Television Programme Material as Bitstreams, August 1998.
3) SMPTE 125M-1995, Television, Component Video Signal 4:2:2, Bit-Parallel Digital Interface.
4) SMPTE 259M-1997, Television, 10-Bit 4:2:2 Component and 4fsc Composite Digital Signals, Serial Digital Interface.
5) SMPTE 305M-2000, Television, Serial Data Transport Interface.
6) SMPTE 306M-1998, Television Digital Recording, 6.35-mm Type D-7 Component Format, Video Compression at 25 Mb/s, 525/60 and 625/50.
7) SMPTE 314M-1999, Television, Data Structure for DV-Based Audio, Data and Compressed Video, 25 and 50 Mb/s.
8) SMPTE 321M-1999, Television, Data Stream Format for the Exchange of DV-Based Audio, Data and Compressed Video Over a Serial Data Transport Interface.
9) SMPTE 326M-2000, Television, SDTI Content Package Format (SDTI-CP).
10) SMPTE RP 204-2000, SDTI-CP MPEG Decoder Templates.
11) SMPTE 345M-2001, Television, Mapping of SYNC Stream Block in ATM Common Layer to ATM Adaptation Layer Type 1.


12) SMPTE 354M-2001, Television, ATM Common Layer for Transport of Packetized Audio, Video and Data over Asynchronous Transfer Mode (ATM) using ATM Adaptation Layer Type 1 (AAL 1).
13) Hans Hoffmann, Interfaces, Protocols and Interoperability in TV Production & Archiving, 1st Management Symposium, Cologne, May 2000.
14) EBU Technical Statement D89-2000: Quality and interoperability in a 625/50 digital television production environment using MPEG compression.
15) Steve Steinke, ATM Basics, LAN Magazine/Network Magazine, May 1995.
16) Charles L. Hedrick, Introduction to the Internet Protocols, Rutgers University, 1987. (http://oac3.hsc.uth.tmc.edu/staff/snewton/tcp-tutorial/index.html)
17) Gary Hoffman and Daniel Moore, IEEE 1394: A Ubiquitous Bus, COMPCON '95.
18) University of New Hampshire InterOperability Laboratory, Fiber Channel Tutorial. (http://www.iol.unh.edu/training/fc/fc_tutorial.html)
19) WPI Department of Electrical and Computer Engineering: ATM Specifications. (http://www.ece.wpi.edu/courses/ee535/hwk10cd95/bkh/specs.html)
20) T11/Project 1305-D/1.4, Fiber Channel - Audio Video (FC-AV), Working Draft, September 17, 2000. (ftp://ftp.t11.org/t11/pub/fc/av/00-252v3.pdf)


Index

AAL 1 53
AES 22
ANSI/SMPTE 259M 12
Asynchronous 52, 59
ATM 47
Compression 13
Container 26
Content package 39
CRC 16
D1-VTR 9
Datagram 17
Destination address 16
DIF 31
DV 13
DVCPRO 13
End of Active Video (EAV) 19
Ethernet 15
European Broadcasting Union (EBU) 27
FC-AV 47
File exchange 63
File transfer 29, 44
FTP 17
IEEE 1394 58
Isochronous 59
Length/Type 16
LAN 15
Layer 6
Mapping 33
MPEG-2 13
Network 15
Payload data 16
Physical layer 11
Protocol 15
Quality of Service (QoS) 43
SDTI 23
SDTI-CP 39
Serial Data Transport Interface 23
Serial Digital Interface (SDI) 10
Server 45
SMPTE 3
Society of Motion Picture and Television Engineers 6
Source address 16
Start of Active Video (SAV) 19
Stream transfer 42
Synchronous 59
WAN 29


Panasonic 2001, Matsushita Electric Industrial Co., Ltd., Video Systems Division

Contacts:
USA: Panasonic Broadcast & Digital Systems Company, 3330 Cahuenga Blvd W., Los Angeles, CA 90068, phone +1-323-436-3500, email safarj@panasonic.com
United Kingdom: Panasonic Broadcast Europe Ltd., West Forest Gate, Wellington Road, Wokingham, Berkshire, RG40 2AQ, phone +44-118-902-9200, email enquiries@panasonic-pbe.co.uk
Germany: Panasonic Broadcast Europe GmbH, Hagenauer Str. 43, 65203 Wiesbaden, phone +49-611-18160, email info@panasonic-broadcast.de
Japan: Matsushita Electric Industrial Co., Ltd., Video Systems Division, 2-15 Matsuba-cho, Kadoma, Osaka, 571-8503 Japan, phone +81-6-6905-4675, email notani@vsd.mei.co.jp


Production:
AV MEDIA TECHNOLOGY - Consultants, Ringstrasse 9, 64342 Seeheim-Jugenheim, Germany, www.av-media-technology.de
flenner + fraembs, Agentur für MedienDesign, Gotenstrasse 6/II, 20097 Hamburg, Germany, www.flenner-fraembs.de
