
CODECS

HOW TO CHOOSE THE RIGHT CODEC FOR EVERY PROJECT


INDEX

01. What a Codec Does
    Lossyness
02. The Codec Journey
03. The Codec You Shoot With
    Cost
    Storage
    Finishing
    Editing Hardware
04. The Codec You Edit With
    Why should I transcode before editing?
    Proxy Edit
    Direct Intermediate
    A Real-World Example
05. The Codec You Color-Correct
    A. Grade the camera files
    B. Consolidate and Transcode
    C. Carry on the Direct Intermediate
06. The Codec You Send to VFX
    Go big or go home
07. The Codec You Export
08. The Codec You Archive

CODECS DON'T NEED TO BE HARD.
NO, REALLY, THEY DON'T.

By the end of this article, you will be able to pick the best codec for each project. My goal is to empower you to make your own informed decisions about codecs, instead of relying on what worked for someone else.

I'm going to walk you through every step in the process of making a video. Click on a heading to jump to that section. I'll cover:

The codec you shoot
The codec you edit
The codec you color-correct
The codec you send to VFX
The codec you export
The codec you archive

At each stage, I'll explain which factors you should be considering as you choose a codec, and I'll give you some examples of the most commonly-used codecs for that stage.

Along the way, we'll cover why low-end and high-end codecs can each slow down your editing, the reasons for a proxy/offline edit, a real-world project walkthrough, some storage-saving strategies, and an explanation of why transcoding cannot improve your image quality.

The benefits of optimizing your codecs can be huge. The right codec will preserve your images at the highest quality, help you work faster, and enable you to take the best advantage of your computer and storage. You'll be able to work faster on a laptop than many can on a high-end tower.

01. WHAT A CODEC DOES

A codec is a method for making video files smaller, usually by carefully throwing away data that we probably don't really need, and codecs are pretty smart about how they do that. A few years ago, I created a video that covers the main compression techniques that many codecs use. It's not required viewing to understand this article, but it certainly won't hurt.
How Codecs Work – Tutorial.


IF YOU’RE SKIPPING THE VIDEO, HERE ARE


SOME VERY BASIC EXPLANATIONS:

Chroma subsampling

Throws away some color data (4:4:4 is no chroma subsampling; 4:2:2 is some chroma subsampling; 4:2:0 is lots of chroma subsampling). Bad if you're doing color-correction. Really bad if you're doing green screen or VFX work.

Macro-Blocking

Finds blocks (of varying size) of similar colors and makes them all the same color. Bad for VFX and color-correction. Almost all codecs use this to some degree, and the amount tends to vary with the bitrate.

Temporal compression

Uses previous frames (and sometimes following frames) to calculate the current frame. Bad for editing.

Bit depth

The number of possible colors. Deeper bit-depth (larger numbers) is good for color-correction and VFX.
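To put rough numbers on those ideas, here is a small sketch of the uncompressed data-rate math (illustrative only – real codecs compress far below these figures, but the ratios between subsampling schemes and bit depths hold):

```python
# Approximate uncompressed data rate from resolution, chroma subsampling,
# and bit depth. Samples per pixel: 4:4:4 keeps full color (3 samples),
# 4:2:2 keeps 2, and 4:2:0 keeps 1.5 (full luma, quarter-resolution chroma).
SAMPLES_PER_PIXEL = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}

def uncompressed_mbps(width, height, fps, subsampling, bit_depth):
    """Uncompressed video bitrate in megabits per second."""
    samples_per_frame = width * height * SAMPLES_PER_PIXEL[subsampling]
    return samples_per_frame * bit_depth * fps / 1_000_000

print(uncompressed_mbps(1920, 1080, 24, "4:2:0", 8))   # ~597 Mb/s
print(uncompressed_mbps(1920, 1080, 24, "4:4:4", 10))  # ~1493 Mb/s, 2.5x more
```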


Codec Comparison Table

I've also pulled together a list of all of the most common codecs used in the post-production world. This list can help you compare different codecs against each other and make the best decision for your project.

There are many different codecs that can be used in the editing process, but the ones I've included are by far the most common. There is a significant advantage to using popular codecs – they are more likely to work on your system, your client's system, your system-in-five-years, etc. And it's easier to find help if something goes wrong.

Click on the image to view the entire table, and think about which codecs might be a good fit for you as you read through the article.

Check out the table


Lossyness
One of the columns in the table is “lossyness,”
which is an important concept with codecs. When
I’m talking about lossyness, I don’t necessarily
mean what your eye sees. I mean the amount of
data that is retained by the codec, only some of
which you can see. The question is: If I had an
uncompressed image, and then I compressed it
with this codec, how similar would the new image
be to the old image? How much information is lost
in the transcode? If the two images are very similar,
then the codec is not very lossy, and if they’re pretty
different, then it’s more lossy.

The lossyness is a combination of the techniques that the particular codec uses and its bitrate. A more lossy codec is not necessarily "bad." In some cases (when viewing online, for instance), it's really not necessary to retain 100% of the original image. Using a more lossy codec can be a really smart move because of how much space it saves.


IF THE IMAGE LOOKS JUST AS GOOD TO MY EYE, THEN WHY SHOULD I CARE IF IT'S TECHNICALLY 'LOSSY'?
You should care because you may want to change the image. If you are doing any sort of color
correction, then you will be changing the image, allowing you to see elements of the image that
weren’t visible (or prominent) when you captured it.

For example, here is an image that was captured raw.


Here is a screengrab of it compressed with H.264, using standard YouTube-recommended settings, and then compressed with DNxHD 350x:

[Comparison images: h.264 vs. DNxHD]

They all look pretty much the same, don’t they? The
visual quality is just about the same, and the H.264
file is a fraction of the size of the DNxHD file. This
is why it’s the recommended setting for YouTube. It
looks just about as good to the eye, and the file is
much easier to upload to the internet.


The trouble with the H.264 version, however, comes when you try to make changes to the image. What if
you wanted to increase the exposure?

[Comparison at increased exposure: h.264 vs. DNxHD]

Now we can see where the highly-compressed image falls apart. Her hair and shirt look terrible in the
h.264 image, and the buildings by the river look all mushy. The DNxHD still looks great, though.

[Detail crops: h.264 vs. DNxHD]


THIS IS WHY YOU REALLY WANT A HIGH-QUALITY CODEC WHEN YOU CAPTURE THE IMAGE – because you will probably want to make changes later on, but you don't know yet what those changes might be. You'll want to tweak the color and contrast, maybe tweak the speed, maybe add some VFX. A highly-compressed file doesn't allow for those changes without breaking down.

This is why it's a good idea to capture your footage in 10-bit even if you may be outputting an 8-bit file in the end – you don't know, when you shoot, which bits you're going to want.

02. THE CODEC JOURNEY

NOW THAT WE’VE GOTTEN SOME


OF THE FOUNDATIONAL IDEAS OUT
OF THE WAY,
it’s time to walk through the different stages that
you’ll encounter in each project.

Every projects starts with a codec that you capture


in the camera, and it ends with a codec that you
export (delivery codec) and hand off to your client
or upload to the web. In the simplest case, you do
all of your editing and color-correction right on
the camera files and then export to your delivery
codec, so you’re only ever using two codecs.

But most of the time it gets a little bit more


complicated. You might transcode to a different
codec for editing, and potentially for color-
correction, and definitely for VFX. But it all starts
with…

03. THE CODEC YOU SHOOT WITH

THIS IS YOUR CAPTURE CODEC
(also called the "camera native codec" or "acquisition codec").

Generally speaking, you should aim for the highest-quality codec that your camera (or your budget) can capture. When I say "highest quality", I mean that you want to capture as much information as possible, so you want less-lossy codecs: less compression, higher bit-depth, and less chroma subsampling. The more information you have when you capture, the more flexibility you will have later, especially in color-correction and VFX (if you're doing that).

Of course, you also have to consider a lot of other, practical factors in this decision; otherwise we would always be shooting 8K raw, right?


Cost

The first consideration is obviously cost. Generally speaking, the more expensive the camera, the higher-quality the codecs available on it. I say generally because there are some "sweet spot" cameras that offer excellent codecs at a reasonable price. Panasonic's GH series (especially in the early days, when the GH2 was hacked) was known for offering better codecs than the other cameras in its price range.

TIP: BETTER CODECS WITH EXTERNAL RECORDERS

One way that people (myself included) have found to capture higher-quality codecs on cheaper cameras is to use an external recorder. These devices (many of which can double as external monitors) take an uncompressed signal from the camera, via HDMI or SDI, and compress it separately. So you end up with two copies of your footage – one copy heavily compressed on the camera, and a second copy lightly compressed on the external recorder. The key thing here is that the camera sends the signal out to the recorder before compressing it.

One important note here is that many cheaper cameras only output 8-bit, and often not in 4:4:4. An external recorder might be able to compress to a 12-bit codec, but if the camera is only sending 8 bits, the recorder can only record 8 bits. Some cheaper cameras may also not output a "clean" HDMI signal that is suitable for recording. We call an output signal "clean" when it's just the pure image with no camera interface overlays.


Storage
The second factor to consider is storage space. High-quality codecs tend
to be higher bit-rate, which means that the files are larger. You need to be
prepared to store and back up all of those files as you’re shooting, and
you may also have to upgrade your memory cards in order to be able to
record the high-bitrate data. If you’re shooting solo, then you may end up
choosing a lower-quality codec because it allows you to change memory
cards less often and focus on the story instead.

Finishing
Another factor to consider is how much color-correction and VFX (collectively referred to as finishing) you plan to do. If you're going to be doing very minimal color-correction and no VFX, then you can probably get away with the lower bit-depth, chroma subsampling, and macro-blocking that come with lower-quality capture codecs.


Editing Hardware
The last factor to consider is your editing machine, because
most capture codecs are not well suited to editing without
a high-performance computer. H.264 and some raw files
require a powerful CPU/GPU to edit smoothly, and very-
high-bitrate codecs may require high-speed hard drives
or data servers. Unless you happen to be shooting an
edit-friendly codec, you may have to transcode your files
to another codec before editing, which can take time. For
most people, transcoding the footage isn’t a huge issue
because it can be done overnight or on a spare computer.
If you’re working on very tight turn-around times, however,
you may choose a codec that will allow you to start editing
immediately after a shoot, even if that means a higher cost
or a sacrifice in image quality. I explain which codecs are
best for editing in the next section.

04. THE CODEC YOU EDIT WITH

Alright, you’ve shot your film, and you’ve got all of your files onto your computer. Now you need to decide
whether you’re going to edit with these files, or whether you want to transcode into another format.

Why should I transcode before editing?

CAN'T I JUST EDIT THE FILES THAT CAME OUT OF THE CAMERA?

Well, it depends. Pretty much all of the major software packages can now edit any codec that your camera creates (unless you're a badass shooting on a brand-new camera with brand-new technology). But while it's almost always possible to edit the codecs that your camera shot, it's not always the best idea.

If you are lucky enough to be shooting on a codec that is great for editing (see the codec chart), then you can skip this step. For many of us, however, the capture codec isn't going to be optimized for editing. There are two main factors you need to consider when choosing your edit codec: compression type and bit rate.


Highly-Compressed codecs can Slow Down Your Editing

Most lower to mid-range cameras record with codecs that use temporal compression, also known as long-
GOP compression. I will give you a simple explanation here, but if you’re interested in learning in more
detail, check out my codecs video, starting at 19:00.

The simple explanation of a long-GOP is that, for each frame, the codec only captures what has changed
between this frame and the previous frame. If the video doesn’t include a lot of motion, then this means that
the new file can be a LOT smaller than the original. The difference between this frame and the last frame is just
a few pixels, so all you need to store is a few pixels. That’s great!

[Diagram – DECODED: the full Frames 1, 2, and 3; STORED IN FILE: Frame 1 in full, then only the pixels that changed in Frames 2 and 3]
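To make that concrete, here is a toy sketch of the idea (pure illustration – real long-GOP codecs use motion vectors and much smarter math than a per-pixel diff):

```python
# Toy long-GOP storage: keep the first frame whole (an "I-frame"), then
# store only the pixels that changed relative to the previous frame.
frame1 = [10, 10, 10, 10, 10, 10]
frame2 = [10, 10, 99, 10, 10, 10]  # one pixel changed

def delta(prev, cur):
    """Store (index, value) pairs for changed pixels only."""
    return [(i, v) for i, (p, v) in enumerate(zip(prev, cur)) if p != v]

stored = delta(frame1, frame2)
print(stored)  # [(2, 99)] - tiny compared to the full frame
# Decoding frame 2 requires frame 1 first, which is why jumping around
# a long-GOP timeline costs so much more than playing it forward.
```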


The issue, however, is that these codecs tend only to work well when played forward. (If you're curious why, take a look at the video.) That's great for viewing on YouTube and your DVD player, but it's not great for editing, because when you're editing you're often jumping around, or playing a clip backward. It takes a lot more processing power to do those things quickly with a long-GOP codec. A high-end computer might have no trouble, but even a mid-range computer will lag and stutter when you skim through the footage quickly or jump around.

Codecs that aren't long-GOP (a.k.a. intra-frame codecs), however, can play backwards just as easily as forwards, and even a mid-range computer can skip around very smoothly. If you've only ever edited clips straight from the camera, you might not realize what you're missing!

The other thing that can cause issues with playback is raw video. Raw video needs to be converted before it can be displayed (sort of like a codec does), and some computers can't decode the raw file fast enough, especially if it's 4K. Ironically, both the low-end cameras and the highest-end cameras produce files that are hard to edit!

High-Bitrate codecs can Slow Down Your Editing

For low to mid-range codecs, you don't have to worry about the bitrates at all. Once you start moving up the ladder, however, high-bitrate codecs can cause issues with editing, especially if you're working on everyday computers.

The reason is that your computer needs to be able to read the data from your hard drive at


a bitrate that is at least as high as your codec's bitrate. It makes sense: if your codec is 50Mb/s (fifty megabits per second), then your computer needs to be able to read that file from your hard drive at 50Mb/s, or else it'll fall behind and stutter. (Note that Mb/s stands for megabits per second, while MB/s stands for megabytes per second. There are eight bits in a byte, so you need to multiply by 8 when converting from MB/s to Mb/s.)

The good news is that hard drives are getting faster every day, so 50Mb/s is never going to cause any problems. But what if you're editing ProRes 422 HQ at 4K, which is 734Mb/s? The average external hard drive is only just barely fast enough to play that back, and some cheaper hard drives won't manage it. And then, what if you're editing a multicam with three cameras? Suddenly you need 3x that data rate: 2,202Mb/s! At that point, you're going to need to invest in some high-performance hard drives or RAIDs.

Here are some rough guidelines for common data storage speeds, though of course there will always be certain models that underperform or overperform:

Standard spinning drive: 100-120 MB/s
Professional spinning drive: 150-200 MB/s
Standard SSD: 400-500 MB/s
Low-end RAID: 200-300 MB/s
High-end RAID: 1000-2000 MB/s


Shooting in log can slow down your editing

Shooting in a log color space is a way of preserving as much of your dynamic range as possible, allowing you to capture a scene that has bright highlights and dark shadows without blowing out the highlights or crushing the blacks. Blown-out highlights are a particularly nasty side-effect of shooting on video instead of film, and so shooting in log can help make your footage feel more cinematic. Now that log profiles are available even on most prosumer cameras, it's an extremely popular way to work.

[Image: Log]

The downside is that the image that comes out of the camera doesn't look so great, so you need to add in a bunch of contrast and saturation in order to get even close to the final image. The most common way to do that is to add a LUT to your footage, which is essentially a simple preset color correction that brings your footage back to a "normal" look.

[Image: Log with LUT]
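If you're curious what a LUT actually is, here's a toy sketch (the S-curve below is invented for illustration; real LUTs are usually 3D tables built by a colorist or camera manufacturer). The key idea is that a LUT is just a precomputed lookup table applied to every pixel:

```python
import math

# Toy 1D LUT: a made-up S-curve that adds contrast to flat-looking log footage.
lut = [round(255 * (0.5 + 0.5 * math.tanh(4 * (v / 255 - 0.5)))) for v in range(256)]

def apply_lut(pixel_values):
    """Applying a LUT is one table lookup per pixel: cheap, but not free
    when it runs on every pixel of every frame during playback."""
    return [lut[v] for v in pixel_values]

print(apply_lut([0, 64, 128, 192, 255]))  # shadows pushed down, highlights up
```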


If you’re shooting in a log color space, then you


need to apply a LUT to your footage in order to
preview it with normal color and contrast. This
means that your editor will need to apply the
appropriate LUT to all of the clips when editing. This
can be annoying to manage, and it can also slow
down the computer a bit, because it needs to first
decode each frame and then apply the LUT before
displaying it. It’s certainly possible to edit the log
footage without any LUT, but it’s not ideal. The color
of two shots may influence how you intercut them.

If you’re going to transcode your files before


editing them, then you can apply the LUT during
the transcode process. That way, the editor
is always working with footage that has good
contrast and color and never has to bother with
LUTs. Note that you should only do this if you are
using a Proxy workflow, not a Direct Intermediate
workflow (described below).


Consider time spent encoding


The main downside of transcoding your footage before editing is simply the time it takes to do the transcode. If you have a lot of footage to go through, and your computer isn't particularly fast, it may take a long time. If you're not in a big hurry, you can let the transcode run overnight, potentially on multiple computers if you have access to them, but that's not always ideal.

When I worked at Khan Academy, our founder would regularly record short video messages to send out to people, sometimes on very tight schedules. I would usually shoot in 4K in a long-GOP log format, and edit on a MacBook Pro. Editing 4K long-GOP with a LUT (to correct for the log footage) on a laptop meant I could play the video back just fine in Premiere Pro, but I couldn't zoom around the timeline as fast as I wanted without stuttering. But that didn't bother me too much, because the edit was extremely simple – just a few cuts, maybe some music, a title, and I was done. Even though my editing speed wasn't ideal, I would have spent more time on the transcode than I would have saved in editing speed, so I just used the original files.

If I were editing a longer piece with the same setup, however, I would transcode to DNxHD or ProRes. Generally, I would do most of the transcoding overnight, often with multiple machines running at the same time.


Proxy Edit
If you’re going to transcode the native camera The proxy workflow is so common that many high-
files before you edit them, then you’ll use an end cameras record a high-end raw file *and* a
“intermediate” codec. It’s called intermediate ProRes or DNxHD proxy file at the same time. After
because it comes between the capture codec and the shoot, the raw files are backed up and put in
the export codec. There are two common ways of storage, while the proxy files are sent off to the
working with intermediate codecs: editors and to the director/producers for dailies.

The first is the “proxy” workflow or “offline edit.” When choosing a proxy codec, you want to go for
This means that you are transcoding your captured one that does not use temporal compression (aka
footage into an intermediate format, editing with inter-frame compression or long-GOP compression),
that format, and then re-linking back to the original and you want to pick one that has a lower bitrate.
camera files before exporting. Because you will use The low bitrate means that the files are much
the camera files to export and not the proxy files, smaller, so you can use fewer/smaller/cheaper hard
you don’t need to worry so much about picking drives, simplifying your workflow. Woot!
a proxy codec with great image quality – lossy
codecs are fine. You can optimize for editing speed While the proxy files are great for editing, you
and storage convenience instead. shouldn’t do more than basic color-correction with


proxy files. If you are going to do all of your color-correction inside of your editing software, then it's best to re-link back to your camera files, because your proxy files may have lower color quality.

The good news is that most editing software today can switch between the camera files and the proxy files in just a couple clicks, so you can even go back and forth if you need to.

We've published detailed guides for proxy workflows in each of the major NLEs:

Final Cut Pro X Proxy Workflows
Premiere Pro Proxy Workflows
Avid Media Composer Proxy Workflows

Some good choices for proxy codecs

By far the most common proxy codecs are DNxHD/DNxHR and ProRes. They have both been around for years, so they're very widely supported. Everyone knows how to handle them. They are both very well suited to a proxy workflow (ProRes even has a preset called "proxy"), and they are nearly interchangeable when used for proxies.

Since DNxHD is made by Avid, and ProRes is made by Apple, it makes sense that DNxHD would work better on Media Composer and ProRes would work better on Final Cut Pro X. That certainly used to be true, but nowadays both codecs work very smoothly on all modern editors (including Premiere Pro). There may be a slight speed increase in using the codec that was designed for the system, but it's very slight.


The only significant difference between the two for a proxy workflow is the fact that you may have trouble creating ProRes on a PC, while DNxHD is very easy to create cross-platform. The only officially-supported way to create ProRes on a PC is with Assimilate Scratch. There are some other unsupported methods for creating ProRes files on a PC, but they're not always reliable. PCs can easily play back and edit ProRes files, but you can't encode new ProRes files on a PC as easily as DNxHD, and so some editors prefer a DNxHD workflow for that reason.

Regardless of which of the two codecs you pick, you also have to pick which flavor you want. This is really going to depend on your storage constraints – it's a tradeoff between image quality and file size. The good news is that you don't need tip-top image quality when you're editing, so you can choose a low-bitrate codec.

Start off with the smallest ProRes or DNx codec in the same resolution as your capture codec. Look at the GB/hr column and multiply it by the number of hours of footage you have. If you have enough storage space, then you're good – use that codec. If you have lots of extra storage space, think about using the next largest flavor.

If you don't have enough storage space, or if you're on an underpowered machine, then take the resolution down a notch. A lot of huge-budget Hollywood films were edited in 480p just a few years ago, so don't sweat it if you need to lower your resolution from 4K down to 720P for the edit.
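That storage check is easy to script. A quick sketch (the GB-per-hour figure is a placeholder – read the real number for your chosen flavor off the codec table):

```python
# Proxy storage estimate: GB-per-hour times hours of footage.
gb_per_hour = 65        # placeholder: look up your flavor in the codec table
hours_of_footage = 40
available_tb = 4.0

needed_tb = gb_per_hour * hours_of_footage / 1000
print(f"Proxies need ~{needed_tb:.1f} TB of the {available_tb} TB available")
if needed_tb > available_tb:
    print("Pick a smaller flavor, or drop the resolution a notch")
```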


Direct Intermediate
The other type of intermediate workflow is something that I'm calling "Direct Intermediate." This means that you transcode your camera files into a codec that is both good for editing and very high-quality (not very lossy). Because the codec is very high quality, almost all of the original information from the camera files has been preserved, and so it's not necessary to re-link back to the camera files – you can just export directly from the intermediate files. There will be some theoretical loss of information when you transcode, but if you pick a good enough intermediate codec, it'll be small enough that you don't need to worry about it.

(Note: I'm calling this process "Direct Intermediate" because there isn't a common name for this workflow. People usually just call this "intermediate," but that can be confusing because proxy workflows are also a kind of intermediate workflow. Some people will also call this an "online" workflow, but that is also confusing, because that term was created to describe a workflow that includes an offline and an online edit, not a workflow that's online from start to finish.)

The key to picking a good Direct Intermediate codec is to make sure that you are preserving all of the information from your capture codec. An intermediate codec will never make your images better (more detailed explanation below), but it can definitely make them worse if you choose the wrong codec. The important thing is to understand the details of your original footage and make sure that your intermediate codec is at least as good as your capture codec in each area. If you capture your footage on a DSLR like a Sony A7Sii at 4K, then you will be recording in a 4:2:0, 8-bit, long-GOP codec at 100Mbps. You want an intermediate
31
back to index

codec that is at least 4:2:0 and 8-bit. Going beyond these values (e.g. to 4:4:4 and 12-bit) won't hurt, but it also won't help at all, so it's probably not worth the extra storage space.

Let's say, for example, that we want to go with a ProRes codec. We have 4 options to choose from that are 4:2:2 and 10-bit:

145Mb/s ProRes 422 Proxy
328Mb/s ProRes 422 LT
471Mb/s ProRes 422
707Mb/s ProRes 422 HQ

You might think that all you need is to match the camera bitrate (100Mbps), but you actually need to greatly exceed the camera bitrate. This is because h.264 is a much more efficient codec than ProRes. Because h.264 uses long-GOP compression, it can pack a lot more information into those 100 megabits than ProRes can. In order for ProRes to match the image quality of h.264, you need a much higher bitrate. I would recommend only using ProRes 422 or ProRes 422 HQ if you're starting with a 100Mbps h.264 codec. ProRes 422 will probably do just fine, but if you have lots of storage space, then going up to ProRes 422 HQ will have a slight edge.

While it's fine to simply match the bit-depth and color sampling when choosing an intermediate, you should always increase the bitrate at least a little. If you're going from long-GOP to a non-long-GOP codec, then you should increase the bitrate a lot.

Side note: If you wanted to go with DNxHD instead of ProRes, you have similar options, except that DNxHD also offers an 8-bit version for the lower-end codecs. Since our footage is 8-bit to start with, that won't hurt us at all.


THE PROXY WORKFLOW SOUNDED PRETTY GOOD. WHY DO THE DIRECT INTERMEDIATE?

Part of the reason why the Direct Intermediate workflow is common is that it used to be a lot harder to use a proxy workflow. Some of the major software providers didn't make it particularly easy to relink back to the original camera files, and so people would choose a Direct Intermediate workflow. Nowadays, however, it's pretty easy to do in any editing package. The main exception is when you have a lot of mixed footage types. If you have multiple frame rates and frame sizes in the same project, switching back and forth from the proxies to the capture codecs can be a headache.

If you are using some third-party tools to help prep and organize your footage before you start cutting, those can also make the relinking process more tricky. One common example might be software that automatically syncs audio tracks or multicam shoots.

Another reason why you might want to use a Direct Intermediate workflow is that you can move right on to the color-correction and VFX ("finishing") process without swapping around any files. Keep reading, and I'll explain more about why that's convenient in the Color-Correction and VFX sections.

One downside, however, is that you can't "bake in" the LUTs for your editor – you're going to need to apply a LUT via a color-correction effect in your editing software. If you were to include the LUT in your transcode for a Direct Intermediate workflow, you would be losing all of the benefits of recording in log in the first place.

The other obvious downside is that you need to store all of these (much larger) files.


An intermediate codec will never make your images better

This is very important, because it is very commonly misunderstood, and there is a lot of misinformation online. Transcoding your footage before you edit will never increase the quality of the output. There are some extra operations that you could do in the transcode process (such as doing an up-res) that could increase the image quality, but a new codec by itself will never increase the quality of your image.

If you choose the right codec, you can avoid hurting your image, but you can never improve it.

That includes going from h.264 to DNxHD or ProRes. That includes going from 8-bit to 10-bit. That includes going from 4:2:0 to 4:4:4.

Here is an illustration that can help you understand this concept: this is a photo of a rose reflected in a water droplet. It's 4 megapixels, and it looks pretty nice on my 27-inch monitor.

[Photo: rose reflected in a water droplet]

Now what if I take a photo of my monitor with a Red Helium 8k camera? This is a beast of a camera. I shot the photo of the rose a few years ago with a cheapo Canon Rebel DSLR, worth about $250


today. The Red Helium setup costs about $50,000, it's 35 megapixels, it's raw, and it has one of the best camera sensors ever produced.

Which will be a better image – the 4 megapixel photo, or the 35 megapixel photo?

[Photo: the monitor re-photographed with the Red Helium]

The Red camera has more megapixels, right? It's raw, and it has all of the Red digital magic, right? But since I'm using my high-resolution camera to take a photo of the photo, not a photo of the rose, my fancy new image will never be better than the first one. I have a file that is technically higher-resolution, but it does not capture any more of my subject (the rose) than the first one did.

This is what you're doing when you're transcoding. You are making a copy of a copy, taking a photo of a photo. If you use a fancy high-resolution camera to take a photo of a photo, you will be able to preserve pretty much all of the information in the original image, but you won't be able to add anything more.

The big caveat is that, if you are doing any processing, any transformation of the image (adding a LUT, for instance), then you definitely do want to transcode into a higher-quality codec, which will retain the new information. But if you're not altering the image, then transcoding will not make your image somehow "better."

I explain this in more detail in this article.


A Real-World Example
Let’s say you’re editing a documentary that So you might decide to use a Proxy workflow
captured 4K footage using a Sony A7sii camera, instead and transcode your files to the ProRes 422
recording in the long-GOP version of XAVC-S. Not Proxy 4K format. Then your footage would only
ideal for editing. If they shot 40 hours of footage take up 2.8TB, just barely more than your captured
for your feature-length documentary, you’d end up footage. You can then easily edit off of a single
with about 2.7TB of camera files, which can fit on hard drive, and your workflow gets a lot simpler.
one hard drive easily (though you’ve made other, (For instructions on how to calculate bitrates and
separate backups, of course!). file sizes, check out this article: The Simple Formula
to Calculate Video Bitrates).
You could convert that to a high-quality, not-very-
lossy codec for a Direct Intermediate workflow,
maybe ProRes 422 HQ in 4K.

The downside is that you would need about 12.7TB


in order to store that all of those ProRes files. You
would have to use an expensive RAID setup in order
to have easy access to all of that footage in one
project, at least $1,000. Peanuts for a big facility, but
a significant investment for a solo editor.
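You can check these numbers yourself with the bitrate-to-storage formula (bitrates from the ProRes list above; real files also carry audio and container overhead, so expect slightly larger results):

```python
# Storage from bitrate: Mb/s -> MB/s (divide by 8), times seconds, to TB.
def terabytes(mbps, hours):
    return mbps / 8 * 3600 * hours / 1_000_000

print(terabytes(707, 40))  # ProRes 422 HQ 4K, 40 hrs -> ~12.7 TB
print(terabytes(145, 40))  # ProRes 422 Proxy 4K, 40 hrs -> ~2.6 TB
```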


Let’s say that you’re working with another editor


who’s on the other side of the country. You might
decide to transcode the footage even further
down to ProRes 422 Proxy HD, which would shrink
your footage down to just 640GB, which becomes
more feasible to send over the Internet if you
have a fast connection. (18hrs to download on an
80Mbps connection)
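That transfer estimate is the same back-of-the-envelope math, with a unit conversion from gigabytes to megabits:

```python
# Transfer time: GB -> megabits (x 8000), divided by link speed, to hours.
def transfer_hours(size_gb, link_mbps):
    return size_gb * 8000 / link_mbps / 3600

print(transfer_hours(640, 80))  # ~17.8 hours on an 80Mbps connection
```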

When the edit is all done, you just re-link your project back to the original camera files and export. Even though you and your remote editor have been working in a pretty lossy codec, the final export bypasses it, so you don't lose any quality.

05. THE CODEC YOU COLOR-CORRECT

Ok, now you’ve got your video edited, and


it’s time for color-correction. Everything we’re
talking about here will apply whether you are
color-correcting inside your editing application,
or whether you are sending your edit to
dedicated color-correction software.

The big question at this point is whether you


want to color-correct straight on the original
camera files, or whether you want to transcode.
If you did a proxy/offline edit, then you definitely
don’t want to color-correct the proxy files,
because they have a lower image quality. In order
to make good decisions about color, you need
the highest quality image that you have available,
because you need to be able to see exactly what
you have to work with.

So we need to work with high-quality images,


and we have a few different options:


A. Grade the camera files

This is certainly a simple option. If you did a proxy edit, you can relink to the camera files for the finishing process and go to town. This will give you maximum image quality, but remember how the camera files can be slow to work with? The camera files may slow down the process a little, but depending on the software you use and the amount of work you need to do, you might decide that the simplicity is worth a little bit of potential slowdown. If you have a short edit without a lot of complexity, then this can be a great and easy workflow.

Let's assume that the color-correction slow-down bothers you, so you need a codec that is easier to work with. You could transcode all of your footage to a high-image-quality codec, link to those files, and then start doing your color-correction. But… that kind of defeats the purpose of a proxy workflow, doesn't it? We used proxies because we didn't want to have to deal with the large files that it would create. Fortunately, there is another option.


B. Consolidate and Transcode

If you used a proxy/offline workflow for the edit but don't want to color-correct the camera files, one good option is to relink to the camera files, consolidate your project, and then transcode to a high-end codec.

When you consolidate a project, your editing software will make a copy of your project along with a copy of the media, but only the particular files that you ended up using in your sequence. So if you shot 7 takes but only used one of them in the edit, it'll only copy that one take. This cuts down on the storage a lot, which comes in handy at this stage. You can also consolidate down even further so that you only keep the specific portions of each take that you actually used in the edit, discarding the rest. In this case, the software will usually include a few seconds before and after each take (called "handles"), in case you want to add a fade or motion tracking.

Now you can take this new consolidated project (after relinking to the originals) and transcode all of these files to a very high-quality, high-bitrate codec, and start color-correcting. This is different from the Direct Intermediate workflow because you are not transcoding all of your footage – just the footage that made it into the final edit, which might be 1/20th or 1/50th the length of the footage that you originally shot. Now it doesn't sound so bad to transcode to a high-bitrate codec, because you don't have to store so much of it. Even at ProRes 4444 4K, a full-length feature film will only be about 2TB – pretty manageable.

Now you can finish your film with top-quality images and speedy processing, on a hard drive that fits in your pocket. Woot!


C. Carry on the Direct Intermediate

The third option is to go with the Direct Intermediate editing workflow, in which case you're good to go. You already transcoded all of your files to a high-quality codec before you started editing, so you can just carry on with those same files for color-correction. That is also convenient because those files are good both for editing and for color-correcting and VFX (see below).

If you are handing off the project to an external colorist or VFX person, then you can either give them all of your high-quality footage (potentially annoying because of the size), or you can use the same consolidation tip that we used above. Handing off the consolidated project can help you move faster and save your colorist's time as well.

In addition to the simplicity of the Direct Intermediate workflow (you use only one set of files), you have one other advantage: going back and forth between editing and color-correcting is simpler.


Imagine you’ve finished your proxy edit – you And now we find another good reason for a
consolidate and transcode, send it off to your Direct Intermediate edit. If you are going to do
colorist, and then decide that you need to make some of your color work and your editing work
some changes to the edit. Now you’ve got go simultaneously, or at least are going to go back
back to the proxies to make the edit and then and forth a couple times, then it can be simpler
re-consolidate and re-send the footage. The to use one codec for both. This is especially
mechanics of that can get pretty messy. In a high- convenient if you are doing your editing and
end post-production workflow, there is usually a finishing in the same software package (or set of
“lock” on the edit so that the finishing processes packages, e.g. Creative Cloud).
can start. This means that (unless bad things
happen) you will try very hard not go back and
make changes to the edit. But hey, bad things
happen, so it’s best to be prepared.

06. THE CODEC YOU SEND TO VFX

If you’re doing any VFX work, then you’re probably


going to need to send files to another program
(potentially another machine, for another artist).
If you’re doing all of your VFX work in your editor
(which is becoming more and more viable for simple
jobs), then you can skip this section. Just use the
same codec as you used for your color correction.

For most of us, however, we need to set up a


“round-trip” process that sends clips from the
editor to the VFX software and then back again
when they’re finished. This happens on a shot- If you’re editing in Premiere Pro and doing mild VFX
by-shot basis, so you’re not sending the entire in After Effects with Dynamic Link, then you can also
sequence to VFX, like you probably did for color skip this section. Dynamic Link automatically does
grading. The question of when in the process you the round-tripping for you. If you’re doing a lot of
send your shots to VFX depends very much on VFX work, you may still want to use the techniques
the particular workflow. Some people will send to in this section, because Dynamic Link can be a little
VFX after the edit is locked and color-correction bit finicky with too many projects. Adobe is always
finished, but time pressure can force you to start working on those bugs, however, and so it’s partly
sending off shots before then. up to personal taste.


Go big or go home
In the VFX process, you tend to use very high-end (high-bitrate) codecs for two main reasons. The first is simply that VFX artists need all the information you can give them in order to do their job well. VFX artists are some of the pickiest people when it comes to codecs, and for good reason. Everyone wants high-quality images, but image issues often pose more of a problem for VFX than they do for editing, color-correction, and final export.

Many tasks in VFX work require very detailed analysis of the image on a pixel-by-pixel level, which most editors never need to do. For instance, if you're doing a green-screen extraction, you want the edge between your character and the greenscreen to be as clean as possible. We've all seen awful greenscreen shots where the edges of the character are all choppy or blurred out.

These problems often arise because of image compression artifacts that are invisible to the naked eye. 4:2:2 or 4:2:0 color subsampling, for instance, has almost no visible impact on the image. The human eye cares mainly about contrast and seldom notices low color resolution, but the greenscreen extraction process relies primarily on color values. If the codec has thrown away a large portion of the color values by using 4:2:0 chroma subsampling, a good color key may be impossible.

The second reason why you want to use high-end codecs is generation loss. In the VFX process, you will probably have to compress your file multiple times. You will compress the file once when you send it to them. And then, if they need to pass the file on between multiple specialists, they may compress that file two or three times


before they send it back. When a file is compressed multiple times, we call that multiple generation loss. If you are using a low-end codec, the image will get progressively worse each time you re-compress it. One of the great things about the really high-quality codecs is that you can compress them a couple times without losing much quality. While it's always better to avoid compressing a video multiple times, if you're using very high-quality codecs, you're usually pretty fine.
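Here's a toy model of generation loss (pure illustration – real codecs are far more sophisticated, but the error-accumulation pattern is the same). Each lossy "encode" rounds pixel values to a grid, and mismatched grids between generations compound the damage:

```python
# Toy generation loss: re-quantize a strip of pixel values repeatedly,
# alternating step sizes the way files bounce between differently-configured
# tools. The rounding error tends to grow with each generation.
original = list(range(0, 256, 7))

def lossy_encode(values, step):
    return [round(v / step) * step for v in values]

signal = original
for generation, step in enumerate([5, 9, 5, 9], start=1):
    signal = lossy_encode(signal, step)
    error = sum(abs(a - b) for a, b in zip(signal, original)) / len(original)
    print(f"generation {generation}: mean error {error:.2f}")
```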

Some high-end VFX workflows will only use lossless compression for this reason. The good news is that your VFX shots are usually only a few seconds per clip, which means your file sizes will be small even with high-end codecs. So go big! If you captured 4:4:4 in the camera, then definitely send 4:4:4 to VFX. Otherwise, I would pick a top-of-the-line 4:2:2 codec (ProRes 422 HQ or DNxHR HQX).

And of course, you should always communicate beforehand with VFX about what codec to send. If you think they're making a bad choice, send them this article.

07. THE CODEC YOU EXPORT

Now you’ve finished the editing, the color,


and the VFX – you’re ready to export. You will
usually do the final export from the software
that you used for color-correction, using the
codec that you used in the color-correction
process.

If your client is in the media business, they


should know what codec they want, so you can
skip the rest of this section!
before streaming it, and you have absolutely
If your client is not a video expert, they may
no control over the settings that they use. This
not know what they want, so you need to make
means that, if you upload a low-quality codec,
some decisions for them. Most of the time,
then we have the scenario where we’re taking a
your client is going to want a video to upload
low-quality photo of a low-quality photo that we
to YouTube and/or other social media sites.
talked about. Bad! Avoid!
You may be tempted to choose a codec that
is good for streaming on the Internet. But you As a general rule, if you want the best quality
would be wrong! The reason why: these sites result, you should upload the best quality source.
do not stream the same file that you upload to They’re going to compress again anyway, so
your viewers – they compress the file *again* giving them more data work with can’t hurt, right?


If you have a fast enough connection, you could upload a ProRes 422. Some people have reported slightly (only slightly) better results when uploading ProRes instead of the recommended h.264. If you are delivering a file to a client for them to upload to YouTube, then I would not give them ProRes, since you don't know what kind of bandwidth they're going to have. Fortunately, these sites tend to publish recommended upload specs (just Google it). I personally will take whatever bitrate they recommend and multiply by about 1.5x to 2x.

Your client may also want a file that they can embed directly into their website (though I would dissuade them, if you can). Generally speaking, you want a very heavily-compressed h.264. If you're curious what a good bitrate is, my reasoning is that, if anyone knows what the sweet-spot bitrate is, it's YouTube. I periodically download a video from YouTube and check its bitrate, and use that as a benchmark.

If the video is not public, they may also want a small file that they can email or link directly to their own clients so that they can download it. In these cases, it may be appropriate to deliver more than two separate files, especially if it's a long video. The file they should upload to YouTube will be too large to email conveniently. In this case, I will usually down-res the file and compress it very heavily. You also have to be realistic and decide whether you think that your client will actually understand the difference between the two files.

If I need to deliver more than one file, I will usually call one of them "HD" in the filename and the other one "small" or "not HD" in the filename. If you try to describe the different codecs to them, I can almost guarantee they'll have forgotten the difference by next week, but they'll probably remember what "HD" and "not HD" mean.
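My pad-the-recommendation rule from above, as a two-line script (the 45Mb/s figure is only an example of a published 4K recommendation – always check the platform's current specs):

```python
recommended_mbps = 45  # example figure: check the platform's current upload specs
low, high = recommended_mbps * 1.5, recommended_mbps * 2
print(f"Export h.264 at roughly {low:.0f}-{high:.0f} Mb/s")
```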

08. THE CODEC YOU ARCHIVE

You’ve delivered the file(s) to your client, so now


you can sit back and relax… almost.

As any working professional in this industry knows,


the day when you deliver the finished product to
your client is often not the last time you touch a
project. Sometimes a client wants to go back and
change something weeks later, or they want a
higher-quality codec, or maybe you want to add
it to your personal reel. In any of these cases, you
may have moved on to a different machine or to
different software, making it a headache to open
up the original project and re-export.

This is where it comes in handy to have a great


archive of the finished project in an extremely high quality, however, it’s always good to do your own
quality codec. If your client requested a very high- export with a codec that is lossless or as close to
quality codec for delivery, then you’re generally lossless as you can afford, given the space it will
set. Just keep a copy of that file, and you’re good. take. I will generally export to a very high-bitrate
If they need a delivery codec that’s not tip-top 4:4:4 codec – either DNxHD/HR or ProRes.


GOT QUESTIONS?

Please feel free to reach out to me if you have questions: david@frame.io.

This is the first in a series of Frame.io ebooks, so make sure you're signed up for the newsletter to hear about the next one!

Many thanks to Larry Jordan, Shane Ross, and Philip Hodgetts for their input on this article!
