
Why YUV is preferred over RGB

YCbCr is a consumer video format and the way HD is encoded. RGB is the
traditional computer format. Neither is superior to the other; each has its own
strengths and weaknesses. YCbCr is preferred because it is the native format. However,
many displays (almost all DVI inputs) only accept RGB. If your display is HDMI it will
likely accept YCbCr; if not, switch to RGB. Auto should use YCbCr whenever possible.
YCbCr is the digital counterpart to analog YPbPr component video. (YCbCr converted to
analog is YPbPr). Digital RGB converted to analog is usually referred to as VGA.

SD and HD DVDs are encoded in 8-bit YCbCr 4:2:0. During or after decoding it is
upsampled to YCbCr 4:2:2. If RGB output is required, the YCbCr is upsampled again to
4:4:4 (sometimes done in a single step, 4:2:0 -> 4:4:4) and a standard, simple transform
converts it to RGB 4:4:4. If done properly, you'll never notice the difference between the
two. The advantage of YCbCr 4:2:2 is that it can be sent as 10-bit (or 12-bit) video via
HDMI (all versions). RGB 4:4:4 is restricted to 8-bit (except for the new deep color
formats). However, if your display takes 8-bit video and then upsamples to 10-bit or
higher for display, you may only need 8-bit video. RGB is also the only format used with
DVI (with a few exceptions).
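As a rough illustration of that chain, here is a minimal Python sketch (assuming numpy,
full-range 8-bit samples, and simple nearest-neighbour chroma duplication; real decoders
work with studio-range video and proper interpolation filters, so treat this as a sketch,
not a reference decoder):

    import numpy as np

    def upsample_420_to_444(cb, cr):
        # 4:2:0 -> 4:2:2: repeat each chroma row to restore vertical resolution
        cb, cr = cb.repeat(2, axis=0), cr.repeat(2, axis=0)
        # 4:2:2 -> 4:4:4: repeat each chroma column to restore horizontal resolution
        return cb.repeat(2, axis=1), cr.repeat(2, axis=1)

    def ycbcr444_to_rgb(y, cb, cr):
        # Standard full-range BT.601 transform (the "simple transform" above)
        y = y.astype(float)
        cb = cb.astype(float) - 128
        cr = cr.astype(float) - 128
        r = y + 1.402 * cr
        g = y - 0.344136 * cb - 0.714136 * cr
        b = y + 1.772 * cb
        return np.clip(np.dstack([r, g, b]), 0, 255).astype(np.uint8)

    # Example: a 4x4 mid-grey frame; chroma planes are quarter size in 4:2:0
    y = np.full((4, 4), 128, np.uint8)
    cb = cr = np.full((2, 2), 128, np.uint8)
    rgb = ycbcr444_to_rgb(y, *upsample_420_to_444(cb, cr))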

YUV is a color space typically used as part of a color image pipeline. It encodes a color
image or video taking human perception into account, allowing reduced bandwidth for
the chrominance components; this typically lets transmission errors or compression
artifacts be masked more efficiently by human perception than with a "direct"
RGB representation. Other color spaces have similar properties, and the main reason to
implement or investigate properties of Y'UV would be for interfacing with analog or
digital television or photographic equipment that conforms to certain Y'UV standards.

The scope of the terms Y'UV, YUV, YCbCr, YPbPr, etc., is sometimes ambiguous and
overlapping. Historically, the terms YUV and Y'UV were used for a specific analog
encoding of color information in television systems, while YCbCr was used for digital
encoding of color information suited for video and still-image compression and
transmission such as MPEG and JPEG. Today, the term YUV is commonly used in the
computer industry to describe file-formats that are encoded using YCbCr.

The Y'UV model defines a color space in terms of one luma (Y') and two chrominance
(UV) components. The Y'UV color model is used in the PAL and SECAM composite
color video standards. Previous black-and-white systems used only luma (Y')
information. Color information (U and V) was added separately via a sub-carrier so that a
black-and-white receiver would still be able to receive and display a color picture
transmission in the receiver's native black-and-white format.

Y' stands for the luma component (the brightness) and U and V are the chrominance
(color) components; luminance is denoted by Y and luma by Y' – the prime symbols (')
denote gamma compression,[1] with "luminance" meaning perceptual (color science)
brightness, while "luma" is electronic (voltage of display) brightness.
The YPbPr color model used in analog component video and its digital version YCbCr
used in digital video are more or less derived from it, and are sometimes called Y'UV.
(CB/PB and CR/PR are deviations from grey on blue–yellow and red–cyan axes, whereas U
and V are blue–luminance and red–luminance differences.) The Y'IQ color space used in
the analog NTSC television broadcasting system is related to it, although in a more
complex way.

Y'UV was invented when engineers wanted color television in a black-and-white
infrastructure.[2] They needed a signal transmission method that was compatible with
black-and-white (B&W) TV while being able to add color. The luma component already
existed as the black and white signal; they added the UV signal to this as a solution.

The UV representation of chrominance was chosen over straight R and B signals because
U and V are color difference signals. This meant that in a black and white scene the U
and V signals would be zero and only the Y' signal would need to be transmitted. If R and
B had been used, they would have non-zero values even in a B&W scene,
requiring all three data-carrying signals. This was important in the early days of color
television, because holding the U and V signals to zero while connecting the black and
white signal to Y' allowed color TV sets to display B&W TV without the additional
expense and complexity of special B&W circuitry. In addition, black and white receivers
could take the Y' signal and ignore the color signals, making Y'UV backward-compatible
with all existing black-and-white equipment, input and output. It was necessary to assign
a narrower bandwidth to the chrominance channel because there was no additional
bandwidth available. If some of the luminance information arrived via the chrominance
channel (as it would have if RB signals were used instead of differential UV signals),
B&W resolution would have been compromised.[3]
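As a quick check, using the BT.601 weights defined in the next section: for a grey pixel
with R = G = B = g, the luma weights sum to 1, so

    Y' = 0.299g + 0.587g + 0.114g = g
    U  ∝ (B - Y') = g - g = 0
    V  ∝ (R - Y') = g - g = 0

and only Y' carries any information in a B&W scene.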

Conversion to/from RGB


Y'UV signals are typically created from an RGB (red, green and blue) source. Weighted
values of R, G, and B are summed to produce Y', a measure of overall brightness or
luminance. U and V are computed as scaled differences between Y' and the B and R
values.

Defining the following constants:

    W_R = 0.299
    W_B = 0.114
    W_G = 1 - W_R - W_B = 0.587
    U_max = 0.436
    V_max = 0.615

Y'UV is computed from RGB as follows:

    Y' = W_R * R + W_G * G + W_B * B
    U  = U_max * (B - Y') / (1 - W_B)
    V  = V_max * (R - Y') / (1 - W_R)


The resulting ranges of Y', U, and V respectively are [0, 1], [-U_max, U_max], and
[-V_max, V_max].

Inverting the above transformation converts Y'UV to RGB:

    R = Y' + V * (1 - W_R) / V_max
    G = Y' - U * W_B * (1 - W_B) / (U_max * W_G) - V * W_R * (1 - W_R) / (V_max * W_G)
    B = Y' + U * (1 - W_B) / U_max

Equivalently, substituting values for the constants and expressing them as matrices gives:

    | Y' |   |  0.29900  0.58700  0.11400 | | R |
    | U  | = | -0.14713 -0.28886  0.43600 | | G |
    | V  |   |  0.61500 -0.51499 -0.10001 | | B |

    | R |   | 1.00000  0.00000  1.13983 | | Y' |
    | G | = | 1.00000 -0.39465 -0.58060 | | U  |
    | B |   | 1.00000  2.03211  0.00000 | | V  |
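A small numpy sketch of these two matrix transforms (the function names are ours, chosen
for illustration; R, G, B are assumed normalized to [0, 1]):

    import numpy as np

    # BT.601 forward matrix: rows produce Y', U, V from a normalized RGB triple
    RGB_TO_YUV = np.array([[ 0.29900,  0.58700,  0.11400],
                           [-0.14713, -0.28886,  0.43600],
                           [ 0.61500, -0.51499, -0.10001]])
    YUV_TO_RGB = np.linalg.inv(RGB_TO_YUV)  # numerically matches the inverse above

    def rgb_to_yuv(rgb):
        # rgb: array of shape (..., 3) with values in [0, 1]
        return rgb @ RGB_TO_YUV.T

    def yuv_to_rgb(yuv):
        return yuv @ YUV_TO_RGB.T

    # White (1, 1, 1) maps to Y' = 1 with U and V essentially 0, as expected
    print(rgb_to_yuv(np.array([1.0, 1.0, 1.0])))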

99% of computer graphics apps work in RGB color space. Whether you are working in
Photoshop, After Effects, Commotion, Combustion, or Shake (which does not work in YUV,
AFAIK), or in flame, inferno, smoke, or fire, you are working in RGB color space. RGB is
the preferred color model for computer graphics and it isn't going anywhere. You can't
expect these apps (and plug-ins) to be rewritten from the ground up to work in YUV
space, and I doubt you will see this any time in the near future.

Working in an RGB app is not a bad thing. As long as you start with pristine source
material captured with a 10-bit codec, keep your intermediate renders in a lossless RGB
format like Animation or Microcosm, and work in 16-bit when you have serious color issues
to deal with, you will be fine. And as I said, even the big-boy infernos and fires work in
RGB color space with 10-bit YUV i/o. The majority of commercials and broadcast work
is finished on these RGB boxes. I'd much rather have the toolset found in my RGB apps
like flame, Commotion, After Effects, Photoshop, etc. than be limited to the toolsets
found in YUV-based video editing apps like FCP.

When storing video digitally there are two philosophies you can follow: RGB
and YUV. Each has a variation or two that changes how accurate it is, but that's it (i.e.
RGB16, RGB24, RGB32, and then YUV, YUY2, etc.).

RGB stores video rather intuitively. It stores a color value for each of the 3 color
levels, Red, Green and Blue, on a per-pixel basis. The most common RGB on computers these
days is RGB24, which gives 8 bits to each color level (that's what gives us the 0-255
range, as 2 to the 8th power is 256); thus white is 255,255,255 and black is 0,0,0.
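A toy Python illustration of that layout (a hypothetical buffer, just to show the byte
count):

    # RGB24: three bytes per pixel, one per 8-bit channel
    white = bytes([255, 255, 255])
    black = bytes([0, 0, 0])
    scanline = white + black      # a 2-pixel scanline
    print(len(scanline))          # 6 bytes: 24 bits per pixel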

YUV colorspace is a little different. Since the human eye perceives changes in brightness
better than changes in color, why not focus more on brightness than on the actual color
level? In YUV colorspace you have 3 values: luminance, or just luma (which is the
brightness level), abbreviated as Y. U is the blue-difference sample, and V is the
red-difference sample. What does the "difference" mean?

Well, a weighted sum of the Red, Green, and Blue levels gives the brightness of a pixel
(i.e. 0,0,0 is all black and thus the least bright; white, at 255,255,255, is the
brightest). So why not store that brightness more accurately and store the color levels
less accurately, if brightness matters more? That's exactly what YUV colorspace does.

YUV is generally stored at 16 bits per pixel on average: 8 bits go to luma, and each of
the two chroma samples effectively gets 4 bits per pixel. This works by only sampling
chroma half as often as luma, or rather, every 2 pixels share the same color values.
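A minimal sketch of how such a 4:2:2 layout packs, following the byte order used by the
common YUY2 format (the helper name is ours):

    # Every pair of pixels shares one U and one V sample, so 2 pixels
    # fit in 4 bytes: an average of 16 bits per pixel.
    def pack_yuy2(y0, u, y1, v):
        # YUY2 byte order is Y0 U Y1 V
        return bytes([y0, u, y1, v])

    two_grey_pixels = pack_yuy2(128, 128, 128, 128)
    print(len(two_grey_pixels))   # 4 bytes for 2 pixels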

So basically YUV stores more relevant data at a lower accuracy than RGB.

This is important because when you convert between the two colorspaces, either you lose
some data, or assumptions have to be made and data must be guessed at or interpolated.
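A quick experiment showing the loss, assuming the BT.601 matrices from earlier and naive
rounding of every intermediate value to an integer:

    import numpy as np

    M = np.array([[ 0.29900,  0.58700,  0.11400],
                  [-0.14713, -0.28886,  0.43600],
                  [ 0.61500, -0.51499, -0.10001]])
    M_inv = np.linalg.inv(M)

    lossy = 0
    for r in range(0, 256, 17):
        for g in range(0, 256, 17):
            for b in range(0, 256, 17):
                yuv = np.round(M @ [r, g, b])     # quantize YUV to integers
                rgb = np.round(M_inv @ yuv)       # convert back, quantize again
                if not np.array_equal(rgb, [r, g, b]):
                    lossy += 1
    print(lossy, "of", 16 ** 3, "sampled colors fail to round-trip exactly")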

The top image is an original and below it is an image sampled with YUV 4:2:0 sampling;
notice how the colors of the hairline at the top left become muddled because of the chroma
averaging between pixels.

Converting back and forth between colorspaces is bad because you can lose detail, and it
also slows down the process. Thus you want to avoid colorspace conversions as much as
possible.
Premiere, and almost all video editing programs, work in RGB because it's easier to deal
with mathematically. Premiere demands all incoming video be RGB32 - specifically, 24-bit
color with an 8-bit alpha channel - and will convert any YUV footage you give it.

AVISynth itself can work in either colorspace, but YUV is preferred and most (if not all)
AVISynth filters run in YUV colorspace.

TMPGEnc's VFAPI plugins all operate in RGB colorspace because all of its filtering
and processing runs in RGB.

VirtualDub runs in RGB when you use Normal Recompress or Full Processing Mode (in
the Video dropdown menu). All of VirtualDub's internal functions and filters run in RGB
colorspace only. However, Fast Recompress doesn't decode the video to RGB, and
instead just shunts whatever your source is into the compressor you've selected - thus if
your source is YUV it shunts the video data as YUV into the video compressor.

This is important because almost all distribution video codecs run in YUV colorspace.
This includes DivX, XviD, MPEG-1, MPEG-2, MPEG-4, DV, etc. HuffYUV (guess where
the name comes from) runs in YUV natively, but you can indeed compress RGB
video data with it (it will just be bigger). There's an option in the codec controls to
automatically convert incoming RGB video data to YUV in order to save space if you
want to.

Thus using Fast Recompress in VirtualDub (or by the same token, NanDub) is not only
the fastest way to transcode video but also the least costly in terms of colorspace
conversions. The drawback is that you cannot use any of VirtualDub's filters in Fast
Recompress mode - VirtualDub never even touches the incoming video stream. So how
can you do it? Use AVISynth!

By scripting all your filters in AVISynth and operating in YUV colorspace, you can
avoid more costly color conversions. The optimal scenario involves only 2 colorspace
conversions: the MPEG-2 from the DVD is in YUV, it is converted to RGB for Premiere,
converted back to YUV on output via Huffyuv, and from that output it stays in YUV
colorspace all the way through to the video compressor. By doing this you not only save
time but also preserve quality by avoiding colorspace conversions.

If you do need an RGB process after your editing, then export the footage as RGB and
apply the process before doing any YUV filtering. This is easy in AVISynth as you can
keep track of the colorspace used - but more about this later :)
