
IB Computer Science 2016 Case Study



1.

2D computer graphics

2D Computer Graphics consist of creating, displaying and manipulating objects in the computer in two
dimensions. Drawing programs and 2D CAD programs allow objects to be drawn on an X-Y scale. Although 3D
images can be drawn in 2D programs, their views are static. They can be scaled larger or smaller, but they cannot
be rotated to different angles as with 3D objects in 3D graphics programs. They also lack the automatic lighting
effects of 3D programs.

2.

3D Graphics

3D Computer Graphics consist of creating, displaying and manipulating objects in the computer in three
dimensions. 3D CAD and 3D graphics programs allow objects to be created on an X-Y-Z scale (width, height,
depth). 3D entities can be rotated and viewed from all angles as well as be scaled larger or smaller. They also
allow lighting to be applied automatically in the rendering stage.

3.

Animation production pipeline

A 3D animation production pipeline consists of a group of people, hardware and software that work in a specific
sequential order to create a 3D animation product. The three main stages of the production pipeline are:
Preproduction - consisting of Storyboard, Animatic and Design
Production - consisting of Layout, R&D, Modeling, Texturing, Rigging/Setup, Animation, VFX, Lighting and
Rendering
Postproduction - consisting of Compositing, 2D VFX/Motion Graphics, Color Correction, Final Output

4.

Anthropomorphism

It is the attribution of human characteristics and qualities to non-human beings, objects or natural phenomena.
Writers use anthropomorphism to help tell their stories. Anthropomorphized animals are ascribed human traits,
emotions and personalities, i.e. they can walk, talk and act like humans.

5.

Avars

The heart of computer animation is the animation variable or 'avar'. When working with a figure, animators will
assign avars to every part of the model that needs to move. For example, assigning an avar to a figure's leg allows
the animator to move it forward or backward by altering the value. A stick figure may require only a few avars,
while characters in fully computer-animated movies may involve hundreds of avars, many used solely to create
realistic facial expressions.
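
As a rough illustration, the sketch below (Python, not from the case study; the rig and avar names are made up) shows avars as named variables attached to the parts of a model that need to move:

```python
# Hypothetical sketch: avars as named animation variables on a character rig.
class Rig:
    """A character rig whose moving parts are driven by animation variables (avars)."""

    def __init__(self, avar_names):
        # Every part that needs to move gets its own avar, initialised to 0.
        self.avars = {name: 0.0 for name in avar_names}

    def set_avar(self, name, value):
        # Changing an avar's value is what moves the part it controls.
        self.avars[name] = value


stick_figure = Rig(["left_leg_swing", "right_leg_swing", "smile_width"])
stick_figure.set_avar("left_leg_swing", 25.0)   # swing the left leg forward
print(stick_figure.avars)
```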

6.

Cel

A cel, short for celluloid, is a transparent sheet on which objects are drawn or painted for traditional, hand-drawn animation. Cel animation was used before the advent of computer animation in the production of
cartoons or animated movies where each frame of the scene was drawn by hand.

7.

Computer-generated imagery (CGI)

Computer-generated imagery (CGI) is the application of the field of 3D computer graphics to special effects. CGI
is used for visual effects because the quality is often higher and effects are more controllable than other more
physically based processes, such as constructing miniatures for effects shots or hiring extras for crowd scenes,
and because it allows the creation of images that would not be feasible using any other technology. It can also
allow a single artist to produce content without the use of actors, expensive set pieces, or props.

8.

Diamond-Square algorithm

The diamond-square algorithm is a method for generating height maps (grids of elevation values above a surface) for
computer graphics.
It is widely used today in animation to create 3D objects such as landscapes.
It builds on the midpoint displacement algorithm: take a square, set its centre from the average of its corners plus a
random offset, then set the midpoints of its sides, giving 4 smaller squares; this is repeated until every pixel is filled.
In diamond-square, instead of averaging only the 2 closest corners, the 4 closest values are averaged to find each midpoint.
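
A minimal Python sketch of the algorithm as described above follows; the grid size of 2**n + 1 and the `roughness` parameter are implementation assumptions, not details from the case study:

```python
import random

def diamond_square(n, roughness=1.0, seed=None):
    """Return a (2**n + 1) x (2**n + 1) grid of heights built with diamond-square."""
    rng = random.Random(seed)
    size = 2 ** n + 1
    h = [[0.0] * size for _ in range(size)]

    # Seed the four corners of the map with random heights.
    for y in (0, size - 1):
        for x in (0, size - 1):
            h[y][x] = rng.uniform(-1.0, 1.0)

    step, scale = size - 1, roughness
    while step > 1:
        half = step // 2
        # Diamond step: the centre of each square becomes the average of its
        # 4 corners plus a random offset.
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (h[y - half][x - half] + h[y - half][x + half] +
                       h[y + half][x - half] + h[y + half][x + half]) / 4.0
                h[y][x] = avg + rng.uniform(-scale, scale)
        # Square step: each edge midpoint becomes the average of its (up to 4)
        # neighbouring points plus a random offset.
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                total, count = 0.0, 0
                for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < size and 0 <= nx < size:
                        total += h[ny][nx]
                        count += 1
                h[y][x] = total / count + rng.uniform(-scale, scale)
        step, scale = half, scale / 2.0   # finer squares, smaller random offsets

    return h

terrain = diamond_square(4, seed=42)     # a 17 x 17 height map
print(len(terrain), "x", len(terrain[0]))
```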

9.

Embarrassingly parallel

Most visualization and analysis algorithms are embarrassingly parallel, meaning that little or no effort is required
to separate the problem into a number of parallel tasks. This is often the case where there exists no dependency
(or communication) between those parallel tasks. Portions of the data set can be processed in any order and
without coordination and communication. A common example of an embarrassingly parallel problem lies within
graphics processing units (GPUs) for the task of 3D projection, where each pixel on the screen may be rendered
independently.
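
The hedged sketch below illustrates the idea in Python: each row of pixels is an independent task handed to a process pool, with no communication between tasks; the `shade` function is a made-up stand-in for a real per-pixel renderer:

```python
from multiprocessing import Pool

WIDTH, HEIGHT = 320, 240

def shade(x, y):
    """Toy per-pixel computation: a smooth gradient (placeholder for real shading)."""
    return (x * 255 // (WIDTH - 1), y * 255 // (HEIGHT - 1), 128)

def render_row(y):
    # A whole scanline is one independent task.
    return [shade(x, y) for x in range(WIDTH)]

if __name__ == "__main__":
    with Pool() as pool:
        # Rows can be computed in any order and in parallel; no task needs
        # results from another, which is what makes this embarrassingly parallel.
        image = pool.map(render_row, range(HEIGHT))
    print(len(image), "rows rendered,", len(image[0]), "pixels per row")
```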

10.

Fractal landscapes

A fractal is something (a function or object) that exhibits self-similarity. For example, a mountain range is a fractal, at
least in appearance. From close up, small features of an individual mountain resemble large features of the mountain
range, even down to the roughness of individual boulders. This principle of self-similarity is used to generate fractal
landscapes. First we generate a coarse, initial random terrain, then we recursively add additional random details that
mimic the structure of the whole, but on increasingly smaller scales.
There are several methods for Fractal Terrain Generation, such as:
- Midpoint Displacement Method
- Diamond-Square algorithm - a variation of the Midpoint Displacement method
- Terrain Generation Using the Fast Fourier Transform
- Multifractal Method
The standard method seems to be the Diamond-Square algorithm, as it produces fairly realistic landscapes with little effort.
The fast Fourier transform allows for flatter terrain generation and the ability to tile. The newest method is the
Multifractal Technique that uses images of real terrain to generate terrain with very accurate features.
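
To illustrate the recursive "add smaller random details" idea, here is a small Python sketch of one-dimensional midpoint displacement, a simplification of the 2D methods listed above; the parameters are illustrative:

```python
import random

def midpoint_displacement(left, right, depth, roughness=1.0, rng=random):
    """Recursively build a 1D terrain profile between two end heights.

    Each level nudges the midpoint by a random offset, then recurses on both
    halves with a smaller offset: detail that mimics the whole, at ever
    smaller scales (the self-similarity described in the entry)."""
    if depth == 0:
        return [left, right]
    mid = (left + right) / 2.0 + rng.uniform(-roughness, roughness)
    left_half = midpoint_displacement(left, mid, depth - 1, roughness / 2.0, rng)
    right_half = midpoint_displacement(mid, right, depth - 1, roughness / 2.0, rng)
    return left_half + right_half[1:]   # drop the duplicated midpoint

profile = midpoint_displacement(0.0, 0.0, depth=6)
print(len(profile), "height samples")   # 2**6 + 1 = 65 points
```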

11.

GPU

A graphics processing unit (GPU) is a computer chip that performs rapid mathematical calculations, primarily for the
purpose of rendering images, animations and video for the computer's screen. GPUs are located on plug-in cards, in a
chipset on the motherboard or in the same chip as the CPU. The more sophisticated the GPU, the higher the
resolution and the faster and smoother the motion in games and movies.
A GPU is able to render images more quickly than a CPU because of its parallel processing architecture, which allows it
to perform multiple calculations at the same time. The resulting performance improvements have made GPUs
popular chips even for other resource-intensive tasks unrelated to graphics.

12.

Hidden surface determination

In 3D Computer graphics, hidden surface determination [also known as hidden surface removal (HSR), occlusion
culling (OC) or visible surface determination (VSD)] is the process in which we decide which surfaces or parts of
surfaces are not visible from a certain viewpoint. We need an algorithm to determine what parts of each object should
get drawn. A hidden surface determination algorithm is a solution to the visibility problem. It was the first major
problem in the field of 3D computer graphics. The process of removing hidden surfaces is called hiding, and the
algorithm that is used is called a hider. Hidden surface determination is necessary to render an image correctly, so that one cannot
look through walls in virtual reality. It also speeds up rendering since objects that are not visible can be removed from
the graphics pipeline.
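
One common way to solve this in practice is a depth (z) buffer; the Python sketch below illustrates that approach rather than any specific algorithm from the case study, and the fragment data is made up:

```python
# For each pixel keep the depth of the nearest surface drawn so far and
# discard any fragment that lies behind it.
WIDTH, HEIGHT = 4, 3
INF = float("inf")

depth_buffer = [[INF] * WIDTH for _ in range(HEIGHT)]        # nearest depth per pixel
frame_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]  # colour per pixel

# (x, y, depth, colour) fragments from rasterising two overlapping surfaces.
fragments = [
    (1, 1, 5.0, (255, 0, 0)),   # red surface, farther away
    (1, 1, 2.0, (0, 0, 255)),   # blue surface, nearer: should win the pixel
    (2, 1, 7.0, (0, 255, 0)),
]

for x, y, depth, colour in fragments:
    if depth < depth_buffer[y][x]:      # nearer than anything drawn so far?
        depth_buffer[y][x] = depth      # remember the new nearest depth
        frame_buffer[y][x] = colour     # and draw the fragment

print(frame_buffer[1][1])   # (0, 0, 255): the hidden red fragment was discarded
```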

13.

Inverse kinematics

Inverse kinematics refers to the use of the kinematics equations of a robot to determine the joint parameters (i.e. the
position and angle of joints in a series of flexible, jointed objects) so as to achieve the desired position of the
end-effector. Humans solve this problem all the time without even thinking about it. If we want a spoon, we just reach
out and grab it without thinking about what the shoulder or the elbow needs to do.
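
As a concrete, simplified example, the Python sketch below solves inverse kinematics analytically for a two-joint planar arm (a shoulder and an elbow) reaching for a target point; the arm lengths and target are illustrative:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Return (shoulder, elbow) angles in radians placing the end-effector at (x, y).

    l1 and l2 are the upper-arm and forearm lengths. Raises ValueError if the
    target is out of reach."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow bend.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    # Shoulder angle: direction to the target minus the offset from the bent elbow.
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Reach for a point ("the spoon") with an arm of lengths 1.0 and 0.8.
s, e = two_link_ik(1.2, 0.6, 1.0, 0.8)
print(round(math.degrees(s), 1), round(math.degrees(e), 1))
```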

14.

Keyframes

Keyframing is the simplest form of animating an object. It is based on the notion that an object has a beginning state
or condition and will be changing over time, in position, form, color, luminosity, or any other property, to some
different final form. Keyframing takes the stance that we only need to show the "key" frames, or conditions, that
describe the transformation of this object, and that all other intermediate positions can be figured out from these. For
example, if you want a title to change from green to blue over time, you would set two keyframes at two different
points in time. The first one would define the text's color as green, and the second keyframe would set the color to
blue. The process of figuring out the frames in between two keyframes is called "in-betweening" or simply "tweening".
The frames, played in succession, yield a simple, though complete, keyframed animation.
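
A minimal Python sketch of the green-to-blue example, assuming simple linear in-betweening between two keyframes at illustrative frame numbers 0 and 60:

```python
keyframes = {0: (0, 255, 0), 60: (0, 0, 255)}   # frame -> (R, G, B)

def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def colour_at(frame, start=0, end=60):
    # Work out how far we are between the two keyframes, then blend each channel.
    t = (frame - start) / (end - start)
    c0, c1 = keyframes[start], keyframes[end]
    return tuple(round(lerp(a, b, t)) for a, b in zip(c0, c1))

for frame in (0, 15, 30, 45, 60):
    print(frame, colour_at(frame))   # the colour slides from green to blue
```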

15.

Matte (process)

In digital matting a foreground element is extracted from a background image by estimating a color and opacity for
the foreground element at each pixel. The opacity value at each pixel is typically called its alpha and the opacity image
taken as a whole is referred to as the alpha matte or key. Fractional opacities (between 0 and 1) are important for
transparency and motion blurring of the foreground element, as well as for partial coverage of a background pixel
around the foreground object's boundary. Matting is used in order to composite the foreground element into a new
scene.
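
A hedged sketch of the compositing step in Python: each output pixel is alpha times the foreground plus (1 - alpha) times the background; the pixel values are made up:

```python
def composite(fg, bg, alpha):
    """'Over' composite of one RGB pixel given its matte value alpha in [0, 1]."""
    return tuple(round(alpha * f + (1.0 - alpha) * b) for f, b in zip(fg, bg))

foreground = (200, 50, 50)    # extracted foreground element (reddish)
background = (20, 120, 220)   # new scene (bluish)

print(composite(foreground, background, 1.0))   # fully opaque: foreground only
print(composite(foreground, background, 0.0))   # fully transparent: background only
print(composite(foreground, background, 0.4))   # partial coverage at a soft edge
```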

16.

Morphing

Morphing refers to an animation technique in which one image is gradually transformed into another. Morphing is
done by coupling image warping with color interpolation. During a morph the source image is gradually distorted and
faded out while the target image is gradually revealed. Earlier images in the sequence are therefore similar to the
source image, later images are similar to the target image, and the middle image of the sequence is an average of the
source image and the target image.
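
The Python sketch below shows only the colour-interpolation half of a morph, a cross-dissolve between (already warped) source and target pixels; a full morph would also warp both images toward shared feature points. The pixel values are illustrative:

```python
def dissolve(src_pixel, dst_pixel, t):
    """Blend one pixel: t=0 gives the source, t=1 the target, 0.5 the average."""
    return tuple(round((1.0 - t) * s + t * d) for s, d in zip(src_pixel, dst_pixel))

source = (255, 200, 0)    # a pixel from the (warped) source image
target = (0, 80, 255)     # the matching pixel in the (warped) target image

# Five frames of the morph sequence: early frames resemble the source,
# the middle frame is the average, late frames resemble the target.
for i in range(5):
    t = i / 4.0
    print(i, dissolve(source, target, t))
```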

17.

Motion capture (Mocap)

Motion capture (Mo-Cap) is a way to digitally record human movements. The recorded motion capture data is mapped
on a digital model in 3D software (e.g. Maya or 3D Studio Max) so the digital character moves like the actor who was
recorded. In other words, it transforms a live performance into a digital performance.
The MoCap technology is used in the entertainment industry for films and games to get more realistic human
movements. A famous example of a movie with lots of motion capture technology is Avatar.

18.

Photorealism

Photorealism in computer graphics usually means creating an image that is indistinguishable from a photograph of a
scene. Photorealism consists in simulating, as exactly as possible, the light propagation from the light sources to the
observer's eye, taking all the reflection, transmission and absorption phenomena into account, so that the image
produces the same visual response as the scene.

19.

Primitives

In computer graphics, a primitive is an image element, such as an arc or a square from which more complicated images
can be constructed. In 3D applications basic geometric shapes and forms such as spheres, cubes, toroids (doughnut
shape), cylinders, pyramids, cones and wedges are considered to be primitives because they are the building blocks for
many other shapes and forms.

20.

Ray-tracing

In computer graphics, ray tracing is a technique for generating an image by tracing the path of light through pixels in
an image plane and simulating the effects of its encounters with virtual objects. It renders three-dimensional graphics
with very complex light interactions such as pictures full of mirrors, transparent surfaces, and shadows. It is based on
the idea that you can model reflection and refraction by recursively following the path that light takes as it bounces
through an environment.
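
A minimal Python sketch of the core step: cast one ray, intersect it with a sphere and shade the hit with simple diffuse lighting. The scene values are made up, and a full ray tracer would also recurse for reflections and refractions:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def intersect_sphere(origin, direction, centre, radius):
    """Return the distance along the (unit-direction) ray to the nearest hit, or None."""
    oc = sub(origin, centre)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c          # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

eye = (0.0, 0.0, 0.0)
sphere_centre, sphere_radius = (0.0, 0.0, -5.0), 1.0
light_dir = normalize((1.0, 1.0, -0.5))           # direction toward the light

ray_dir = normalize((0.1, 0.1, -1.0))             # ray through one pixel
t = intersect_sphere(eye, ray_dir, sphere_centre, sphere_radius)
if t is not None:
    hit = tuple(o + t * d for o, d in zip(eye, ray_dir))
    normal = normalize(sub(hit, sphere_centre))
    brightness = max(0.0, dot(normal, light_dir))  # Lambert's cosine law
    print("hit at", hit, "brightness", round(brightness, 3))
else:
    print("ray missed the sphere")
```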

21.

Render farms

A render farm is a high performance computer system, e.g. a computer cluster, built to render computer-generated
imagery (CGI), typically for film and television visual effects. Each computer is called a node. The computing that takes
place is called parallel computing, because each node can be given a specific and unique task that doesn't require any
other node - in short, the calculations can happen in parallel (at the same time). It is the job of the main computer to
distribute tasks to each render node, and the software application that does this is called render management
software, or a queue manager.

22.

Rendering

Rendering is the process of generating an image from one or more 2D or 3D models (collectively called a scene file) by means of a
software program. The model is a description of three-dimensional objects in a strictly defined language or data
structure.
When an artist is working on a 3D scene, the models he manipulates are actually a mathematical representation of
points and surfaces (more specifically, vertices and polygons) in three-dimensional space. The term rendering refers to
the calculations performed by a 3D software package's render engine to translate the scene from a mathematical
approximation to a finalized 2D image. During the process, the entire scene's spatial, textural, and lighting information
is combined to determine the color value of each pixel in the flattened image. 'Rendering' is also used to describe the
process of calculating effects in a video editing file to produce final video output.

23.

Rendering equation

In computer graphics, the rendering equation is an integral equation in which the equilibrium radiance leaving a point
is given as the sum of emitted plus reflected radiance under a geometric optics approximation. The various realistic
rendering techniques in computer graphics attempt to solve this equation.
The main goal of computer graphics is to calculate the image that could be seen by a camera in a virtual world. This
requires the calculation of the power reaching the camera from a given direction, i.e. through a given pixel, taking into
account the optical properties of the surfaces and the light sources in the virtual world. The spectral properties of this
power are responsible for the color sensation. The power is usually evaluated on a few (at least 3) representative
wavelengths, then the color sensation is determined from these samples.
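
For reference, one standard way of writing the rendering equation (the entry above describes it only in words) is, in LaTeX:

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\,
    (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i
```

Here L_o is the radiance leaving point x in the outgoing direction, L_e the emitted radiance, f_r the surface's reflectance function (BRDF), L_i the incoming radiance from a direction in the hemisphere Omega, and the cosine factor comes from the angle between that incoming direction and the surface normal n.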

24.

Scanline rendering

Scanline rendering is an algorithm for visible surface determination. Scene elements are projected onto a 2D viewing
plane, clipped to the image and rendered by filling in the affected pixels. It works on a row-by-row basis rather than a
polygon-by-polygon or pixel-by-pixel basis. All of the polygons to be rendered are first sorted by the top 'y' coordinate
at which they first appear, then each row or scanline of the image is computed using the intersection of a scanline with
the polygons on the front of the sorted list, while the sorted list is updated to discard no-longer-visible polygons as the
active scan line is advanced down the picture. Scanline rendering is faster than ray tracing but does not produce
results that are as accurate for reflections and refractions.
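
A hedged Python sketch of the scanline idea for a single convex polygon; a real scanline renderer maintains a sorted active-edge list across many polygons, which is omitted here:

```python
def scanline_fill(polygon, height, width):
    """Return the set of (x, y) pixels inside a polygon given as a list of (x, y) vertices."""
    filled = set()
    n = len(polygon)
    for y in range(height):
        xs = []
        for i in range(n):
            (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
            # Does this edge cross the horizontal line through row y?
            if (y0 <= y < y1) or (y1 <= y < y0):
                xs.append(x0 + (y - y0) * (x1 - x0) / (y1 - y0))
        xs.sort()
        # Fill the pixels between successive pairs of crossings on this scanline.
        for left, right in zip(xs[0::2], xs[1::2]):
            for x in range(max(0, round(left)), min(width, round(right) + 1)):
                filled.add((x, y))
    return filled

triangle = [(2, 1), (12, 4), (4, 10)]
pixels = scanline_fill(triangle, height=12, width=16)
print(len(pixels), "pixels filled")
```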

25.

Stop motion (animation)

Stop Motion Animation is a technique used in animation to bring static objects to life on screen. This is done by moving
the object in increments while filming a frame per increment. When all the frames are played in sequence, it creates
the illusion of movement. Clay figures, puppets and miniatures are often used in stop motion animation as they can be
handled and repositioned easily.

26.

Tweening

Tweening or in-betweening is a key process in all types of animation, including computer animation. Tweening is the
process where the content of the frames between the keyframes is created automatically by the animation software so that
the animation glides smoothly from one keyframe to the next. In-betweens help to create the illusion of motion.
Sophisticated animation software enables us to identify specific objects in an image and define how they should move and
change during the tweening process.
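
A small Python sketch of in-betweening an object's position between two keyframes, with a smoothstep ease-in/ease-out so the motion does not start or stop abruptly; frame numbers and coordinates are illustrative:

```python
def ease_in_out(t):
    """Smoothstep easing: 0 at t=0, 1 at t=1, with gentle acceleration and deceleration."""
    return t * t * (3.0 - 2.0 * t)

def tween_position(start_pos, end_pos, frame, start_frame, end_frame):
    # Map the frame number to an eased 0..1 value, then interpolate each coordinate.
    t = ease_in_out((frame - start_frame) / (end_frame - start_frame))
    return tuple(s + (e - s) * t for s, e in zip(start_pos, end_pos))

# Keyframes: the object sits at (0, 0) on frame 0 and at (100, 40) on frame 24.
for frame in range(0, 25, 6):
    x, y = tween_position((0.0, 0.0), (100.0, 40.0), frame, 0, 24)
    print(frame, round(x, 1), round(y, 1))
```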

27.

Uncanny valley

The uncanny valley is a hypothesis in the field of aesthetics which holds that when features look and move almost, but not
exactly, like natural beings, it causes a response of revulsion among some observers. When an attempt at photorealistic
human rendering fails, it ends up somewhere in the depths of the uncanny valley: for example, perfectly detailed
human eyes in 3D animated faces whose movements do not match normal human muscle movements. Hence, the uncanny valley
phenomenon is important for concept artists, production designers, 3D modelers, animators, and render specialists.
