
Unit 10 - VFX Craft:

As an ATD at DNEG, my work spans multiple creative departments, which has given me a broad understanding of the responsibilities and work that these departments carry out. In this assignment I will give an overview of each of the departments you would find in a typical VFX house.

Concept Stage:

The concept stage is a crucial stage whose output is used, especially by the modelling team, as reference and guides for creating the assets needed.

Depending on how closely the VFX company is working with the client, this is usually carried out by concept artists hired by the client and passed on to the VFX company. In some cases the VFX house can influence the design as they begin the asset creation stage. This can be due to creative decisions made with the client, technical limitations or time constraints. For Avengers: Infinity War, VFX vendor Framestore worked with Marvel to help create the final look of the Iron Man suit, which was made up of tiny nanobots that could move around and form new parts of the suit on demand. There was a lot of back-and-forth communication between Marvel and Framestore.

Concept work will range from 2D work such as sketches, drawings and digital paintings to physical 3D sculpts/maquettes.

Asset Creation Stage:

Modelling/Build:
Any CG objects that are needed by the show are created by the modelling/build team. They are usually made in accordance with a brief given by the client. In many cases they are modelled after real-world objects, such as helicopters, cars or people. This means that they must be made to a very high standard so that they are indistinguishable from their real-world counterparts.

During the build stage, models are created in a few different versions, which are called LODs (Levels of Detail). Models are made in accordance with the LOD numbering system, with lower-numbered LOD models having less detail in the geometry and higher-numbered LOD models containing more and more detail. Typically the highest LOD is reserved for LIDAR-scanned geometry. You can then use these models for a range of different shots that require the models to be at various distances from the camera. For example, if you have a close-up establishing shot of a building, you would want to use the highest-quality LOD version for it to look as realistic as possible. If there's another shot where that building is far off in the distance, the lower LOD building would better suit the situation, as during rendering there would be less detail for the computer to load in and render.
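
To illustrate the idea, below is a minimal Python sketch of distance-based LOD selection; the function and asset names are hypothetical, not part of any studio tool, and simply follow the numbering convention described above (a higher LOD number meaning more detail).

    def choose_lod(distance, lod_ranges):
        # lod_ranges: list of (max_distance, lod_name) pairs, nearest first.
        for max_distance, lod_name in lod_ranges:
            if distance <= max_distance:
                return lod_name
        return lod_ranges[-1][1]  # beyond every range: lightest model

    # A building asset with three versions ("higher number = more detail"):
    ranges = [(50.0, "buildingA_lod2"),          # close-up shots
              (500.0, "buildingA_lod1"),         # mid-distance shots
              (float("inf"), "buildingA_lod0")]  # far in the distance
    lod = choose_lod(25.0, ranges)  # -> "buildingA_lod2"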

In VFX, modelling artists are usually specialised in one of two specific types of 3D modelling: hard surface and organic.
Organic modelling is the creation of, usually, living creatures or plants. Due to their soft-surfaced, natural look, modelling artists use 3D sculpting programs such as ZBrush or Mudbox to achieve the intended results. These programs allow you to push, pull and deform the base mesh as if it were a piece of clay, giving the artist a great deal of freedom without having to worry about the intricacies of traditional modelling software.

Hard surface modelling is the creation of objects such as buildings, vehicles or props. These types of objects can be created using more traditional and mathematically precise modelling techniques such as extrusion and creating edge loops along the topology. The main program used for this type of modelling is Autodesk Maya, which is the industry standard, with SideFX Houdini used in certain special cases.

These models will be passed down to the shot layout team unless they are intended to be animated, in which case they are picked up by the rigging team.

Texture Artist:
Texture Artists work closely with the build modellers to create the textures for the models they're creating. Textures are maps that define how the surface of a model looks and are based on the UVs from the model, which allow the textures to be "wrapped" around it. Textures can be painted using Adobe Photoshop, but more specialised software such as Foundry Mari or Substance Painter/Designer from Allegorithmic is more commonly used in the VFX industry. These programs allow the artist to paint textures directly onto the model without having to worry about manually lining the textures up with the UV layout.
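
As a rough illustration of how UVs drive this "wrapping", here is a minimal Python sketch (the function name is hypothetical) of a nearest-texel lookup, where a UV coordinate in the 0-1 range is scaled into pixel coordinates on the texture image:

    def sample_texture(texture, u, v):
        # texture: a height x width grid of colours (e.g. a loaded image).
        height, width = len(texture), len(texture[0])
        u = min(max(u, 0.0), 1.0)  # clamp UVs into the valid 0..1 range
        v = min(max(v, 0.0), 1.0)
        x = int(u * (width - 1))
        y = int((1.0 - v) * (height - 1))  # V conventionally starts at the bottom
        return texture[y][x]  # the texel colour mapped onto this surface point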

Examples of some texture map types that artists use to achieve their desired results:

Diffuse Maps:
These maps define the main colours of the model. They ideally contain no lighting effects such as specular highlights and reflections.

Bump/Height Maps:
These are monochrome maps that provide the appearance of displacement perpendicular to the model's surface, based on the greyscale values in the map. These maps don't add any detail to the model's geometry itself and are mainly used during lighting and shading calculations.

Normal Maps:
Normal maps provide extra detail on the model without the cost of adding extra geometry. They use the RGB channels in the image to store X, Y and Z data respectively. Thanks to this, each pixel encodes a 3D vector corresponding to a point on the UV map. Using these vectors, shading for the model can be realistically calculated from the map values.
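
As a brief sketch of how those channels become a vector: with channel values stored in the 0-1 range, each is remapped to -1..1 to recover the normal's components (a common convention for tangent-space normal maps):

    import math

    def decode_normal(r, g, b):
        # Remap each colour channel from 0..1 to the -1..1 vector range.
        x, y, z = r * 2.0 - 1.0, g * 2.0 - 1.0, b * 2.0 - 1.0
        # Normalise to guard against precision loss in the stored image.
        length = math.sqrt(x * x + y * y + z * z)
        return (x / length, y / length, z / length)

    # A "flat" texel (0.5, 0.5, 1.0) decodes to (0, 0, 1): the unperturbed normal.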

Specular Maps:
These define how shiny the model will appear, using the pixel values from black to white to determine the strength. The higher the value, the shinier the model will appear to be.

Displacement Maps:
These work very much like the bump/height maps mentioned above; however, the monochrome values are used to displace the geometry itself, with the black (0) value representing no displacement and lighter values providing increasing displacement. Using displacement maps means that you can work with lighter, less detailed models and apply the heavier detail later using the map. The most common usage of these kinds of maps is for terrain geometry, which is usually very heavy and can slow down your scene drastically.
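
A minimal sketch of that displacement step, assuming each vertex carries a position and a surface normal and that the map has already been sampled at the vertex's UV (the function name is hypothetical):

    def displace_vertex(position, normal, height, scale=1.0):
        # Black (height 0) leaves the vertex untouched; lighter values
        # push it further outward along its surface normal.
        return tuple(p + n * height * scale for p, n in zip(position, normal))

    # e.g. displace_vertex((0, 0, 0), (0, 1, 0), 0.5, scale=2.0) -> (0, 1.0, 0)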

LookDev:
In this stage the LookDev TD takes the models and texture maps for characters, buildings, props and vehicles. The TD loads them into various scene templates that use the same light setups as the sequences these models appear in. This allows the LookDev TDs to create "Looks" for the models which are made specifically to work well in a certain scene. Models are given shaders that are developed by the company. Usually a single "uber" shader is used, which contains a vast array of settings that can be adjusted to change different attributes. These attributes affect the way light within a rendering engine interacts with the model. The main attributes used on these shaders are Diffuse, Specular, Transmission and Emission.
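
Purely as an illustration, the principle can be sketched as a dictionary of those main attributes, with per-look overrides on top of sensible defaults (real studio uber shaders expose far more parameters than this):

    def make_look(**overrides):
        look = {
            "diffuse_colour": (0.18, 0.18, 0.18),  # base surface colour
            "specular_weight": 0.5,     # strength of shiny highlights
            "specular_roughness": 0.3,  # how blurred those highlights are
            "transmission": 0.0,        # light passing through (glass ~1.0)
            "emission": 0.0,            # light emitted by the surface itself
        }
        look.update(overrides)  # one shader, many adjustable attributes
        return look

    glass_look = make_look(transmission=1.0, specular_roughness=0.05)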

Rigging:
Riggers, or Rigging TDs, are responsible for building the control rigs for all objects intended to be used by the animators. Rigging will usually start with the TDs creating a base skeleton that uses per-vertex weights painted onto the mesh to define how much each of the skeleton's "bones" will affect a given area. The bones are very often hidden from the animators, and more user-friendly controls are created instead using either Python or MEL scripting. Painting on the weights is a crucial part of the rigging process and must be done correctly to maintain a consistent volume, as 3D software doesn't understand the concept of matter under the skin. This is called volume preservation and is vital to achieving photorealistic deformations on the characters.
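
One common way those painted weights drive deformation is linear blend skinning, sketched below in simplified form (bone transforms reduced to plain translations to keep the example short):

    def skin_vertex(position, bone_offsets, weights):
        # Each bone's movement contributes in proportion to its painted
        # weight; weights are assumed to sum to 1.0 across the bones.
        x, y, z = position
        for (ox, oy, oz), w in zip(bone_offsets, weights):
            x, y, z = x + ox * w, y + oy * w, z + oz * w
        return (x, y, z)

    # A vertex influenced 70/30 by two bones, one of which moves up by 1:
    # skin_vertex((0, 0, 0), [(0, 1, 0), (0, 0, 0)], [0.7, 0.3]) -> (0, 0.7, 0)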

Rigging TDs are also responsible for adding the underlying muscle systems to the model. Muscle systems on humanoid or animal-like creatures allow for more advanced deformations of the geometry, giving greater realism when the character moves around. Muscle systems also allow for a technique known as "skin-sliding", which is the simulation of the skin moving along the muscles and skeleton of a creature.
The most common software for rigging in VFX is Autodesk Maya, but Houdini by SideFX is becoming more popular thanks to its procedural, node-based workflow.

Shot-Specific Work:

3D Based Work:

Matchmove:
Matchmove is a vital part of the 3D pipeline that allows CG environments and characters to be seamlessly integrated with the live-action scan. This is commonly achieved by tracking 2D points at different depths in the scan. The software performs complex calculations, utilising the 2D points, to construct a 3D camera that follows the same movements as the live-action camera. This method can fail if the original plate scan doesn't contain many high-contrast points that are easy to track. The After Effects plugin Mocha Pro works around this by bypassing the need for tracking points to be included in the scan. It uses a method known as planar tracking, which identifies and tracks surfaces as opposed to individual points. This method has its own weaknesses, as it struggles when there are no flat, 2D-like planes visible in the scan.
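
The 2D point-tracking step itself can be sketched with an off-the-shelf library; the following uses OpenCV's corner detector and Lucas-Kanade optical flow rather than any proprietary studio tool, and a real camera solver would then triangulate these tracks into a 3D camera:

    import cv2

    def track_points(frames):
        # Detect high-contrast corner features in the first frame.
        prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
        points = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                         qualityLevel=0.01, minDistance=10)
        tracks = [points]
        for frame in frames[1:]:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Follow each point from the previous frame into this one.
            points, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, points, None)
            points = points[status.flatten() == 1]  # drop points that were lost
            tracks.append(points)
            prev = gray
        return tracks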

An important task carried out by matchmove is bodytracking. Bodytrack artists animate 3D "digi-double" characters to match the movements of the actor or actress in the live-action scan. This animation is cached out and used later in the pipeline by other departments. A good example use case is a character crashing through a window, where a CG glass-shatter simulation needs to be added by FX: the FX TDs would use the bodytrack of the character to get a perfect simulation of the glass shattering around the character's body.

Layout:
Using the newly gained matchmove data, the Layout team then place 3D models in the scene and plan out the shots. The Layout TDs are part of the previsualisation process; they are responsible for creating the base foundation on which the rest of the downstream CG departments will depend. Because of this they need to have a good understanding of the 3D pipeline, and studios often hire 3D Generalists to fill these kinds of positions.

Animation:
Animators take the rigs created by the rigging team and bring them to life, mainly by using keyframe animation. Although techniques such as motion and performance capture are more commonly used in film now, the resulting animation still needs to be touched up, fixed or sometimes even completely re-animated using keyframes. Animators use a combination of real-world reference and animation theory to create believable performances with the characters they are animating.

Keyframe Animation:
Using the rigs provided by the rigging team, animators assign values to separate channels for translation and/or rotation, then set keys at particular frames to create keyframes. The channel's values between each keyframe along the timeline are interpolated, giving the appearance that that particular section of the rig is moving. They are interpolated either linearly or using Bezier splines. The animators edit and assign values to channels by moving handles on the rigs in the viewport of the 3D software. They can also edit the movement by adjusting curves in the animation/graph editor, which provides a 2D, interactive graphical representation of the current animation movement using curves.
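
As a minimal sketch of the interpolation itself, here is the linear case in Python (Bezier interpolation works the same way in principle, with curve handles shaping the in-between values):

    def evaluate_channel(keyframes, frame):
        # keyframes: (frame, value) pairs sorted by frame number.
        if frame <= keyframes[0][0]:
            return keyframes[0][1]
        for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
            if f0 <= frame <= f1:
                t = (frame - f0) / (f1 - f0)  # 0..1 between the two keys
                return v0 + (v1 - v0) * t
        return keyframes[-1][1]

    # Keys at frame 1 (value 0) and frame 25 (value 10):
    # evaluate_channel([(1, 0.0), (25, 10.0)], 13) -> 5.0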

FX:
Effects (FX) Artists are known as FX TDs (Effects Technical Directors). They are responsible for creating art-driven 3D simulation caches that would be too difficult and time-consuming to create using traditional animation techniques. These can be element-based effects such as fire, smoke and water and their interactions with other 3D objects, or the destruction of props or environments. Other effects that FX TDs might be tasked with creating include blood, magic effects and terrain deformation (dirt, sand, concrete, etc.).

Another type of effects artist is the CFX (Creature Effects) TD. CFX covers FX simulations that are specific to creatures and characters, such as hair (known as groom), fur, cloth, skin and muscles. These sims are performed on top of the character's geometry and add to its overall realism.

The most popular software for FX work right now is Houdini by SideFX, although Autodesk Maya is still used in certain cases. Other packages such as Blender and Cinema 4D also contain FX systems that can give great results.

Lighting:
The main role of a Lighting artist (known in some companies as a Lighting TD) is to light the 3D scene in such a way that it matches the lighting in the original plate scan. There are two methods used to achieve this. The first is to arrange different types of CG lights around the scene to create a light rig; the lights are placed and adjusted so that they replicate the light sources in the original plate. The second technique is to use HDRI maps (High Dynamic Range Images). An HDRI is a spherical map constructed from multiple high-resolution images of the set environment, all taken from the same point at multiple exposure settings. This results in an image that contains all of the light values that will be needed, from the very darkest blacks to the brightest whites possible.
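
The exposure-merging step can be sketched with OpenCV's Debevec merge (studios use dedicated photographic pipelines, but the principle is the same); here exposure_times are the shutter times, in seconds, of each bracketed photo:

    import cv2
    import numpy as np

    def build_hdr(images, exposure_times):
        times = np.array(exposure_times, dtype=np.float32)
        merge = cv2.createMergeDebevec()
        # The floating-point result holds the full range of light values,
        # from the darkest blacks to the brightest highlights.
        return merge.process(images, times)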

Since lighting is at the very end of the 3D pipeline, any unseen issues or problems with the scene assets will become apparent during the lighting process. This means that much of a Lighting TD's job can be spent chasing up scene issues and liaising with the pipeline department to ensure that more complex issues are corrected.

Lighting involves working with the VFX company's chosen renderer, with the work on the light rigs or HDRIs being carried out inside the chosen 3D software (predominantly Autodesk Maya). More recently, most companies have been using software specific to setting up lighting in scenes. The most popular of these is Katana by the Foundry, which can be used with multiple rendering engines, such as RenderMan, Arnold or Redshift. At DNEG the main tool/renderer used for lighting is Clarisse. Clarisse combines the functionality of tools like Katana with a Maya-like interface and a path-tracing renderer built into the software itself.

2D Based Work:

Rotoscoping:
Known simply as roto, rotoscoping is the process of creating alpha mattes from the plate scan photography. While keying a green or blue screen would be the easier and more efficient option, in most cases it isn't possible to film in front of an evenly lit screen. The best way to describe roto is as if the roto artist were cutting out pieces from the original scan. This allows CG elements to be placed behind objects or people in the scene. Roto artists cut these elements out of the scan using 2D spline-based shapes that are animated to follow the object's movement via transform attributes.
It is possible to do rotoscoping in 2D compositing software such as Nuke; however, some companies develop their own proprietary roto software. In doing so they can easily add new features to help speed up the roto process.
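
The core idea can be sketched in a few lines with Pillow: an animated shape is rasterised into a black-and-white matte for each frame (production tools use Bezier splines with per-point animation rather than a simple polygon and translate):

    from PIL import Image, ImageDraw

    def rasterise_matte(shape, translate, size=(1920, 1080)):
        matte = Image.new("L", size, 0)  # black = outside the cut-out
        moved = [(x + translate[0], y + translate[1]) for x, y in shape]
        ImageDraw.Draw(matte).polygon(moved, fill=255)  # white = the object
        return matte

    # One matte per frame, following an object drifting 2 pixels per frame:
    # mattes = [rasterise_matte(shape, (frame * 2, 0)) for frame in range(100)]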

Prep/Paint:
Prep is the "preparation" of a scan so that it is ready for compositing. The paint aspect of prep is the skilled process of reconstructing sections of scans that have been occluded by an unwanted object in the scene. This is done by taking samples of the area from other parts of the scan, on the same or other frames. Common examples of unwanted elements include:
- Dust shadows caught on the camera lens if the plate is shot on physical film
- Camera rigs, boom microphone poles or unlucky crew members accidentally in shot
- Wires from actor or stunt performer harnesses
- Items in the shot that affect story continuity or don't fit with creative decisions from the client.

Compositing:
The role of the compositor is to bring together all of the elements created by the upstream departments. These include CG rendered elements, digital matte paintings, roto mattes, prep elements and, finally, the live-action scans. These are loaded into compositing software, layered on top of each other and edited using various mathematical operations applied to the elements. Compositing artists tweak settings and iterate through many different versions to reach the final image. The aim is to create the illusion that the CG elements were all present during the initial shoot.
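
The most fundamental of these mathematical operations is the premultiplied "over" merge, which layers a foreground element onto a background plate using its alpha matte; a minimal sketch:

    def over(fg_rgb, fg_alpha, bg_rgb):
        # Keep the foreground where it is opaque and let the background
        # show through where the alpha falls off.
        return tuple(f + b * (1.0 - fg_alpha) for f, b in zip(fg_rgb, bg_rgb))

    # A half-transparent red element over a grey plate:
    # over((0.5, 0.0, 0.0), 0.5, (0.2, 0.2, 0.2)) -> (0.6, 0.1, 0.1)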

The most widely used compositing software is Nuke, which is developed by The Foundry. The main advantage of Nuke, and what makes it the industry standard, is its node-based workflow. This is different from other compositing software such as Adobe After Effects, which uses a layer-based workflow. The node-based workflow makes collaboration between artists simpler, as it is much easier to understand an organised Nuke network: you are able to zoom out and see the whole project laid out and organised into different sections with notes, highlighting, etc. This makes understanding very large scenes quicker and easier, saving time and money for the studio in the long run.
