
Czech Technical University in Prague

Faculty of Information Technology


Department of Software Engineering
Bachelor's thesis
2D Game Engine
Viktor Chlumský
Supervisor: Ing. Tomáš Nikodým
16th May 2013
Acknowledgements
I would like to thank Ing. Tomáš Nikodým for his help and for taking the time to supervise my thesis, and Ing. Tomáš Barák for helping me learn 3D graphics programming and for providing useful tips.
Declaration
I hereby declare that the presented thesis is my own work and that I have cited all sources of information in accordance with the Guideline for adhering to ethical principles when elaborating an academic final thesis.
I acknowledge that my thesis is subject to the rights and obligations stipulated by Act No. 121/2000 Coll., the Copyright Act, as amended, in particular that the Czech Technical University in Prague has the right to conclude a license agreement on the utilization of this thesis as a school work under the provisions of Article 60(1) of the Act.
In Prague on 16th May 2013 . . . . . . . . . . . . . . . . . . . . .
Czech Technical University in Prague
Faculty of Information Technology
© 2013 Viktor Chlumský. All rights reserved.
This thesis is a school work as defined by the Copyright Act of the Czech Republic. It has been submitted at Czech Technical University in Prague, Faculty of Information Technology. The thesis is protected by the Copyright Act and its usage without the author's permission is prohibited (with exceptions defined by the Copyright Act).
Citation of this thesis
Chlumský, Viktor. 2D Game Engine. Bachelor's thesis. Czech Technical University in Prague, Faculty of Information Technology, 2013.
Abstract
This thesis details the development of a game engine aimed at two-dimensional games. It also deals with the related problems associated with game programming. The topics include graphics rendering with the OpenGL library, including text rendering and particle-based graphical effects, audio playback, physics simulation, and resource management.
Keywords: game, engine, OpenGL, 2D
Abstrakt
This thesis discusses the development of a game engine specialized in two-dimensional games. It also deals with the related problems of computer game programming. The topics include rendering graphics using the OpenGL library, including text rendering and particle-based graphical effects, sound playback, physics simulation, and resource management.
Keywords: games, engine, OpenGL, 2D
Contents
Introduction
1 Basics of game programming
1.1 Creating a graphical application
1.2 Real-time vector graphics
1.3 Game physics
1.4 Game object management
1.5 Resource management
2 Technologies
2.1 OpenGL
2.2 OpenAL
2.3 Text rendering
2.4 Particle effects
3 Architecture
3.1 Layers
3.2 Categories
3.3 Dependencies
3.4 Platform abstraction
3.5 Game logic
3.6 Design patterns
4 Implementation
4.1 Platform abstraction layer
4.2 Graphics
4.3 Graphical user interface
4.4 Physics
4.5 Sound playback system
4.6 Network
4.7 Resource management
5 Results
5.1 Graphics
5.2 Particle system
5.3 Physics
5.4 Text rendering
5.5 Demo game
Conclusion
Future work
Bibliography
A List of abbreviations
B User manual
B.1 Compilation
B.2 Importing the library
B.3 Usage
C Contents of the enclosed CD
List of Figures
1.1 Frame buffer components.
1.2 Vector graphics rendering pipeline.
1.3 Examples of common blending modes.
1.4 Example content of a depth buffer. The longer the distance from the camera, the lighter the color.
1.5 Example content of a stencil buffer. In this case, glossy surfaces have been marked with white color.
1.6 Continuous collision detection schematic.
2.1 Relation between texture objects, units, and sampler variables.
2.2 Comparison of raster-based text rendering techniques.
2.3 The difference between regular and kerned type. (Courtesy of Wikipedia)
3.1 The engine's architecture.
4.1 The structure of a text mesh (wireframe in red).
4.2 Mapping the signed distance d to opacity (black).
4.3 The basic physical shapes.
4.4 A phantom collision.
4.5 Additional special physical shapes.
5.1 A simple, yet exhaustive graphics test.
5.2 An example particle effect.
5.3 An interface for testing physics-related algorithms.
5.4 Text with various effects rendered by the engine.
5.5 A screenshot of the demo game.
List of Tables
4.1 The list of window events.
4.2 Responsibilities of OpenGL objects.
5.1 Particle system performance test results.
5.2 Physics functions performance test results.
Introduction
At the core of every computer game lies the game engine. Whether it has been created specifically for the game or independently, its purpose is to take care of the technical side of the problem, and offer comprehensive tools to translate a game idea into a working computer program.
Although two-dimensional games are being eclipsed by the ever increasing realism of modern three-dimensional graphics, I believe there is still a place for them. If the capabilities of modern graphics hardware are utilized in 2D games, they can be just as visually stunning as 3D games, although in a different way. This has been proven by titles such as Limbo [19].
Because of that, I decided to develop a game engine aimed at 2D games. An engine for programmers: one that allows the game mechanics to be described in a programming language. My goal is to provide high-level functionality, but allow the user to have complete control over the program, never forcing them to follow a predetermined way to construct the game, or limiting its possible features.
In my thesis, I detail the engine's development and related problems. The first chapter serves as a summary of the problems I faced, and the second one describes the technologies I used. The subsequent chapters deal with the engine's design and implementation, and the results of the effort can be found in the last one.
Unless the context requires otherwise, I will refer to a developer who uses the engine as a user, and to the end-user as a player.
Chapter 1
Basics of game programming
A computer game must be interactive, responding to the player's input, and present an audiovisual representation of its game world. The problems associated with game programming and game engine development are described in this chapter.
1.1 Creating a graphical application
A game is in essence a graphical application. It can be based on either
passive or active rendering. With active rendering, the entire game window
is periodically redrawn several times per second, for the whole duration
of the program. This is not needed for some games, like chess, where most
of the scene does not change often.
This section describes the basic structure of a game application with active
rendering.
1.1.1 Active rendering loop
The backbone of any program based on active graphics rendering is the
rendering loop. Each step of the loop consists of the following actions:
1. Retrieve user input.
2. Update game data.
3. Redraw the scene.
To prevent unnecessary CPU hogging, the program should wait for a short
time before repeating the process. It is possible to add an arbitrary sleep
command at the end of the cycle, but the most elegant solution is using
vertical synchronization.
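Put together, a minimal loop with such an idle wait might look like the following sketch; pollEvents, updateGame, drawScene, and swapBuffers stand in for functionality described later and are not actual engine functions.

// Hypothetical placeholders for the three steps and buffer presentation.
bool pollEvents();
void updateGame();
void drawScene();
void swapBuffers();

// A minimal active rendering loop. With vertical synchronization enabled,
// swapBuffers blocks until the next monitor refresh, which also serves as
// the idle wait.
void runGameLoop() {
    bool running = true;
    while (running) {
        running = pollEvents();  // 1. retrieve user input
        updateGame();            // 2. update game data
        drawScene();             // 3. redraw the scene
        swapBuffers();           // present the back buffer
    }
}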
1.1.2 Vertical synchronization
This feature, provided by the OS, makes it possible to synchronize the
rendering loop with the refresh rate of the monitor. It requires double buffering (see 1.2.3.1). When enabled, attempting to present the back buffer onto the screen will block the thread until the next monitor refresh and return once the rendered scene is displayed to the player. That way, we not only provide downtime for the CPU, but also save computational power spent on drawing redundant frames that would be replaced before ever reaching the monitor, and prevent image tearing.
1.1.3 Update timing
Updating game data is usually dependent on the elapsed time. Since the
time it takes to go through each step of the rendering loop depends on
many factors and cannot be predicted, it has to be measured. The difference
between two frames is only a few milliseconds, and for durations this short,
it is best to use the most precise time measurement available, which is
querying the processor time.
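The engine accesses such a timer through its platform layer; with standard C++, the measurement could be sketched as follows.

#include <chrono>

// Returns the number of seconds elapsed since the previous call,
// measured with a high-resolution monotonic clock.
double elapsedSeconds() {
    static auto last = std::chrono::steady_clock::now();
    auto now = std::chrono::steady_clock::now();
    std::chrono::duration<double> dt = now - last;
    last = now;
    return dt.count();
}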
1.1.4 Update granularity
Even on powerful machines, unusual circumstances may occur, such as a
momentary OS freeze or a deliberate halt invoked by the user, and a single
step of the rendering loop might suddenly take a thousand times longer
than the previous one. The game should be prepared for this scenario. In a
physical simulation, for example, a single 1-second step could produce a very different result than fifty 20-millisecond ones [2].
It may be a good idea to declare an upper bound on the step length, and if it is exceeded, perform multiple shorter steps instead. To increase determinism, it is also possible to choose a fixed step length and always perform a number of steps based on the elapsed time.
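Both ideas can be combined into a fixed-timestep accumulator, sketched below; the step and bound constants and the updateGame function are illustrative, not the engine's.

#include <algorithm>

void updateGame(double dt);  // hypothetical game update

// Advances the simulation in fixed-length steps, clamping extreme
// frame times such as those caused by a momentary OS freeze.
void stepSimulation(double &accumulator, double elapsedSeconds) {
    const double FIXED_STEP = 1.0 / 60.0;  // fixed step length (s)
    const double MAX_ELAPSED = 0.25;       // upper bound per frame (s)
    accumulator += std::min(elapsedSeconds, MAX_ELAPSED);
    while (accumulator >= FIXED_STEP) {
        updateGame(FIXED_STEP);
        accumulator -= FIXED_STEP;
    }
}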
1.1.5 Retrieving user input
The player interacts with the game using peripheral devices such as a
keyboard and a mouse. The game must intercept input from these devices
to be able to convert it to in-game actions. The OS usually provides the
input to the application in the form of event messages. There are many different types of these messages. The basic ones include keyboard, mouse,
and system events. The complete list of messages in the Windows OS can
be found in [16].
1.1.5.1 Keyboard input
A keyboard event is generated whenever a key is pressed or released. When a
key is held down, it starts to generate repeat key press events. This behavior
is designed for text editing and should be disregarded in games.
There are multiple ways to interpret each key on the keyboard. One way
is to consider the character the key represents. Each person might have a different keyboard layout though. If the player's movement in the game is controlled by the W, A, S, and D keys, they might be located in very inconvenient positions on a different keyboard layout. For this reason, it is usually better to identify keys by their physical position.
On a traditional keyboard, there are several special keys that are used to alter the meaning of other keys. On the PC platform, they are Control, Alt, Shift, and the Windows key. The states of these keys are known as modifiers and should be taken into account when interpreting keyboard, and possibly mouse, input. In many games though, they are treated as regular keys.
In most games, the keyboard input must also sometimes be interpreted as
text, for example when interacting with an in-game chat console. This task
is somewhat complicated, because different languages have different text input mechanisms, in which keystrokes often don't directly correspond to characters. Fortunately, the OS takes care of it, and sends a message with the typed character's Unicode value whenever a valid character is entered, in addition to the key press event.
1.1.5.2 Mouse input
There are multiple different mouse events. Pressing the mouse buttons generates events very similar to the keyboard ones. An event is also generated
whenever the mouse cursor moves, providing its new coordinates. Modern
mice are also equipped with a scroll wheel that generates events when
rotated.
1.1.5.3 System events
There is also a number of indirect events that usually originate from a user
action, but result in an OS procedure, which the application should know
about. Those relevant for a game include resizing the window, requesting to
close it, or changing its focus state.
1.2 Real-time vector graphics
Vector graphics, as opposed to raster graphics and ray tracing, is a rendering
method based on polygon rasterization. It is currently by far the most
prevalent method of real-time graphics rendering in both 2D and 3D video
games. Modern GPU hardware is constructed primarily for this rendering
method, so it can be performed very efficiently. The main traits specific to
polygon-based vector graphics are:
Hollow models: since everything is constructed from points, lines, and planar primitive shapes, there is no notion of volume. Therefore, all 3D models are merely hollow surfaces.
Lack of curvature: there are no curved primitives available, so all complex shapes are approximated with straight lines only.
Plastic transformations: unlike raster graphics, meshes can be easily reshaped with affine transformations, along with their raster-based textures.
1.2.1 2D versus 3D rendering
In vector graphics, the difference between 2D and 3D rendering is minimal. Since the perspective projection maintains line straightness, rendering a 3D scene is only a matter of correctly transforming vertices from 3D space into a plane. That can be achieved with an affine transformation. The only remaining problem is depth ordering, which is solved by the depth buffer (see 1.2.6). There are also certain effects specific to 3D graphics, such as lighting, but the overall rasterization process remains the same.
1.2.2 Ane transformations
In vector graphics, the layout of most objects is defined by vertex coordinates. It is often necessary to transform the coordinates, either progressively to simulate movement, or simply to place an object in the game world. There is a very simple and effective way to do this: affine transformations. An affine transformation can be expressed as a transformation matrix. It can be applied to a vertex by multiplying its homogeneous coordinates by the matrix. A very helpful property of the transformations is that they can be easily combined by multiplying their matrices [18]. A transformation can be applied to a whole mesh by applying it to its vertices. The most common basic affine transformations include scaling, translation, and rotation.
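For instance, in 2D homogeneous coordinates, translating a point (x, y) by (t_x, t_y) is a single matrix multiplication:

\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} =
\begin{pmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} =
\begin{pmatrix} x + t_x \\ y + t_y \\ 1 \end{pmatrix}

Composing a rotation R with this translation T then reduces to multiplying every vertex by the single matrix T·R.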
1.2.3 Frame buffer
The rendered pixels are stored in a frame buffer. Its shape and dimensions correspond to the game window. It consists of several components, or attachments, as illustrated in Figure 1.1, which include the color buffer (a), where the actual visible content is stored, the depth buffer (b), and the stencil buffer (c).
Figure 1.1: Frame buffer components.
1.2.3.1 Double buffering
To keep a scene from being displayed in the middle of rendering, it is possible to perform the rendering off-screen with double buffering. That way, there will be two instances of the frame buffer, and while one is displayed on the screen, the other serves as the rendering target. When rendering is complete, they can be swapped.
1.2.4 Rendering process
The process of converting numeric vertex data into colored pixels is detailed
in the diagram in Figure 1.2.
Figure 1.2: Vector graphics rendering pipeline. (Vertex data → vertex transformation (vertex shader) → vertices → primitive assembly → primitives → rasterization → pixels → color computation (fragment shader) → colors.)
Before the resulting pixels are placed into the frame buffer, some additional operations may occur, such as blending, depth testing, or stencil testing [8].
1.2.5 Blending
Normally, the resulting pixel colors would be placed directly into the color buffer, overwriting any previous content. With blending, it is possible to combine the new color with the original one (Figure 1.3). The way this is done can be specified by a blending function.
(a) no blending (b) transparency (c) additive (d) multiplicative
Figure 1.3: Examples of common blending modes.
The most common use of blending is to simulate transparency. Color
values include a special alpha channel that is never directly visible, but can
be used for blending. In that case, the alpha value represents opacity, and
the blending function is defined as

O_C = S_α · S_C + (1 − S_α) · D_C, (1.1)

where S_C and S_α are the new color and alpha value, respectively, D_C is the original color, and O_C is the resulting blended color.
1.2.6 Depth testing
To ensure that objects in a 3D scene appear in the correct order, with the
near ones overlapping the farther ones, there is the depth buffer (Figure 1.4). The depth buffer stores a depth value for each pixel, which signifies the distance of the fragment from the viewer. When depth testing is enabled, pixels are only rendered if they are closer to the viewer than the ones in the frame buffer.
Figure 1.4: Example content of a depth buer. The longer the distance
from the camera, the lighter the color.
1.2.7 Stencil testing
Figure 1.5: Example content of a stencil buer. In this case, glossy surfaces
have been marked with white color.
Stencil testing is another way to control which pixels should be rendered.
An area of the scene can be marked in the stencil buffer (Figure 1.5), and later, a special effect may be rendered that only affects the marked pixels.
1.2.8 Fixed and programmable pipeline
There are two different rendering architectures: the fixed pipeline and the programmable pipeline [8].
The fixed pipeline is the older approach. It allows the user to set a number of common parameters before the process is started, like transformation matrices or light sources. The vertices can only have certain predefined attributes like position and texture coordinates. The programmable pipeline is based on shaders, and allows the user to completely define the manner in which their data is converted into pixels on the screen.
The advantage of the fixed pipeline is compatibility with older hardware, but it is only capable of a limited set of predefined tasks. Only with shaders is it possible to implement many modern graphics effects, and their versatile nature leaves space for imagination and novelty.
1.2.9 Shaders
Shaders are special programs that are executed on the GPU. The basic
types of shaders are the vertex shader and the fragment shader.
1.2.9.1 Vertex shader
The task of the vertex shader is to make sense of the provided vertex
data and compute the final position of the vertex. Typically, the vertex
shader multiplies input coordinates by a transformation matrix. It may also
generate additional outputs, like texture coordinates, that can be used by
the fragment shader.
1.2.9.2 Fragment shader
The fragment, or pixel, shader is invoked on a per-pixel basis, to determine its color. Since vertices don't correspond to pixels, the outputs from the vertex shader are not received directly. They are interpolated from the neighboring vertices, based on the fragment's distance to each.
1.2.9.3 Geometry shader
A relatively new shader type is the geometry shader. Its use is optional, and unlike the vertex shader, it is able to generate new vertices, effectively changing the geometry of the mesh. The geometry shader is invoked immediately after the vertex shader, and for each input vertex, it can produce any number of actual vertices, including zero. It can be used for various tasks, for example dynamic level of detail.
1.2.10 Texturing
Rendering raster images in vector graphics can be done by texturing. Although textures are typically two-dimensional, they can also have one or three dimensions (representing a volume of voxels in that case). A texture represents a bitmap in the video memory that is optimized for sampling. It can be mapped onto polygons, and the raster will be stretched to fit. This is achieved by interpolating texture coordinates between vertices and sampling a color from the texture at each fragment. The way color values are sampled depends on the texture's filtering mode.
1.2.11 Anti-aliasing
The pixels in the frame buffer correspond to a grid of discrete points across the scene. The rest of the visible field is not represented in the final image. Although it is a good approximation, it causes undesirable artifacts that could not occur in the real world, such as pixelated edges. To combat this, we need to perform anti-aliasing.
To reduce artifacts near polygon edges, we can use multisampling, a form of anti-aliasing that improves the result by using more than one discrete point (sample) per pixel during the rasterization phase. For the inside of polygons, there are advanced modes of texture filtering, whose purpose is to make each texture sample better represent the whole area that is projected into a single pixel [12].
1.3 Game physics
The goal of physics simulation in games is to make the game objects behave
more realistically. Conventionally, game physics revolves around collisions.
By not allowing objects to phase through one another, a collision system
gives them their solidity.
1.3.1 Basic kinematics
The simplest way to represent movement of objects is with a velocity vector
v, which denotes how an object's position changes over time. Its position P can then be updated as

P := P + Δt · v, (1.2)

where Δt is the length of the step. Although this only allows linear movement between steps of the simulation, it is a sufficient approximation if the steps are short enough.
The velocity vector can be modified by forces. While a force, such as gravity, influences an object, its acceleration vector a is added to the velocity:

v := v + Δt · a. (1.3)
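A sketch of one simulation step implementing equations (1.2) and (1.3), assuming a simple 2D vector type:

struct Vec2 { float x, y; };

// One step of the simulation: first v := v + Δt·a, then P := P + Δt·v.
void integrate(Vec2 &position, Vec2 &velocity, Vec2 acceleration, float dt) {
    velocity.x += dt * acceleration.x;
    velocity.y += dt * acceleration.y;
    position.x += dt * velocity.x;
    position.y += dt * velocity.y;
}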
1.3.2 Collision detection
There are two basic methods to detect collisions: to predict them before
they happen (a priori), or to detect them after the fact (a posteriori).
1.3.2.1 Discrete method
Detecting collisions once they happen simplifies the problem to merely finding whether the objects intersect. There are several problems with this method though. Although the objects' velocity vectors can be used to return them to a position before the collision occurred, in some cases, it may be too late to avert the consequences. Also, since the intersection tests are performed discretely, we could easily miss collisions that happened in between steps, so a fast-moving object could skip through a thin wall [1]. Finding a collision between a point and a line or a plane would be impossible. A simple intersection test also doesn't provide any information about the collision beyond a boolean value.
1.3.2.2 Continuous method
A better way is to search for future collisions in the objects' trajectories. Of
course, we do not know how their trajectories will change, but using the
velocity vector, we can predict them up until the next step of the simulation.
It is possible to predict collisions analytically, which makes this method
continuous.
In Figure 1.6, we have a moving object (hexagon) about to collide with a stationary one (triangle).

Figure 1.6: Continuous collision detection schematic.

v is the hexagon's velocity vector. C is the point of the hexagon which will collide at C′. We need to find the time of impact Δt, which can be defined by the equation

C′ = C + Δt · v. (1.4)

It is also possible to find the collision normal n, which is the portion of v corresponding to the force that would be theoretically exerted on the hit object. This vector can be very useful in the subsequent collision resolution.
The problem can be generalized to a pair of moving objects by simply using the difference of their velocities.
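As an illustration of the analytical approach (not the engine's actual implementation), the time of impact of two moving circles can be found by solving a quadratic equation in t, using the velocity difference as described above:

#include <cmath>

struct Vec2 { float x, y; };  // same 2D vector type as before

// Returns the earliest time of impact within [0, dt] for two moving
// circles, or a negative value if they do not collide in this step.
float circleImpactTime(Vec2 pA, Vec2 vA, float rA,
                       Vec2 pB, Vec2 vB, float rB, float dt) {
    Vec2 d = { pA.x - pB.x, pA.y - pB.y };  // relative position
    Vec2 v = { vA.x - vB.x, vA.y - vB.y };  // relative velocity
    float r = rA + rB;
    // Solve |d + t·v|² = r², i.e. (v·v)t² + 2(d·v)t + (d·d − r²) = 0.
    float a = v.x * v.x + v.y * v.y;
    float b = 2 * (d.x * v.x + d.y * v.y);
    float c = d.x * d.x + d.y * d.y - r * r;
    float disc = b * b - 4 * a * c;
    if (a == 0 || disc < 0)
        return -1;  // no relative motion, or the paths never meet
    float t = (-b - std::sqrt(disc)) / (2 * a);
    return (t >= 0 && t <= dt) ? t : -1;
}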
1.3.3 Collision resolution
There is no single correct way a collision should be resolved. It all depends
on the nature of the game and the programmer's intentions. Usually, it is best to first place the objects in their states at the time of collision, before
modifying their trajectory.
In the example in Figure 1.6, the possible solutions include:
simply stopping the object,
deflecting the object and making it slide along the edge by changing its velocity to
v := v − n, (1.5)
making the object bounce away with
v := v − 2n. (1.6)
This problem becomes significantly harder when a single object has multiple collisions in one step.
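In code, the three responses listed above are one-liners once the collision normal n is known (with Vec2 as in the earlier sketches):

// Possible reactions to a collision with normal n.
void stop(Vec2 &v)           { v.x = 0; v.y = 0; }
void slide(Vec2 &v, Vec2 n)  { v.x -= n.x; v.y -= n.y; }          // (1.5)
void bounce(Vec2 &v, Vec2 n) { v.x -= 2 * n.x; v.y -= 2 * n.y; }  // (1.6)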
1.4 Game object management
Another problem is efficient storage of the game objects. It would be
possible to put them in a simple list, but more complex structures can aid
in important tasks, such as drawing the scene and allowing the objects to
interact with each other.
1.4.1 Scene graph
In many computer games and engines, objects are organized into a scene graph. It is a hierarchical structure, where objects are arranged by their logical relations, which have to be defined manually [4]. The main advantage is that it allows groups of objects to be manipulated at once and reused. It is primarily useful when drawing the scene, but since objects that form groups tend to be close together, the scene graph may also serve as a bounding volume hierarchy, allowing objects to be searched for by location.
1.4.2 Spatial queries
Searches that involve objects' locations are known as spatial queries and are often needed in a game. For example, when searching for possible collisions, or when drawing a rectangular portion of a 2D game world, objects that lie in a given area must be found quickly. There are many data structures optimized for these and other spatial queries, mostly tree structures, such as k-d trees, quadtrees, or R-trees.
1.4.3 Spatial hashing
Another, much simpler structure optimized for spatial queries is the spatial
hash table, a multidimensional version of the regular hash table. Unlike
the complex tree structures, it doesn't need balancing, and modifying it
is simple and fast [15]. Searching for objects at a given coordinate has an
average constant time complexity, just like the regular hash table. It is very
similar to a simple grid structure, but is not limited by any bounds. In two
dimensions, the hashing function maps any position P to a cell C of size g
in the 2D array of size s in a repeating manner:

C_X = ⌊P_X / g_X⌋ mod s_X,
C_Y = ⌊P_Y / g_Y⌋ mod s_Y. (1.7)
Unlike in the regular hash table, the purpose of the hashing function is not to diffuse similar values across the array, but quite the opposite, because each object needs to be placed into each cell that corresponds to a point in it. The efficiency of spatial hashing relies heavily on setting the right values for g and s, to minimize the number of objects per cell and cells per object.
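A minimal sketch of a 2D spatial hash built around equation (1.7); the SpatialHash layout here is an illustration, not the engine's actual class:

#include <cmath>
#include <vector>

struct SpatialHash {
    int sX, sY;    // array dimensions s
    float gX, gY;  // cell dimensions g
    std::vector<std::vector<int>> cells;  // object IDs stored per cell

    SpatialHash(int sx, int sy, float gx, float gy)
        : sX(sx), sY(sy), gX(gx), gY(gy), cells(sx * sy) { }

    // Equation (1.7): map any position to a cell in a repeating manner.
    // The extra "+ s" keeps the modulo result non-negative in C++.
    std::vector<int> &cellAt(float pX, float pY) {
        int cX = ((int) std::floor(pX / gX) % sX + sX) % sX;
        int cY = ((int) std::floor(pY / gY) % sY + sY) % sY;
        return cells[cY * sX + cX];
    }
};

Inserting an object then means appending its ID to every cell its bounding box covers, and a query gathers the cells overlapping the searched area.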
1.5 Resource management
Video games use various external resources, such as images and sounds, that
are not directly parts of the program. Although it is possible, these are
usually not included in the program's binary file, but in external files or
archives. They have to be loaded into memory at some point before they
can be used. This problem is known as resource management.
There are many benefits to having a dedicated resource manager in a
program. When a resource is needed at any point, we can just request it
from the manager, and it will be delivered, without having to worry about
how it is loaded and where from. Resources will never be unnecessarily
loaded multiple times, and they will be disposed of automatically.
1.5.1 File archives
In order to speed up the loading process, game resources are usually aggregated into large archives. It is faster to read from a single file than a large number of small ones. The archives usually include a table of contents, possibly a checksum, and the raw bytes of the contained resources. There is no need to apply compression to the archive as a whole, since this is taken care of by the formats of the individual files.
1.5.2 String management
In any application, all strings intended to be displayed to the user should
be stored separately in one place, to simplify their editing and localization.
The strings can then be accessed by keys, which should be resolved by the
resource manager.
1.5.3 Disk access
Since accessing multiple files on the disk at a time would only delay finishing any of them, it is better to do it systematically, one file at a time. Disk
reading is a relatively slow operation but does not consume the CPU, which
means that the program can do other things while waiting for the disk.
To solve this, we can maintain a separate disk access thread with a queue,
where jobs can be submitted and results gathered later.
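A sketch of such a disk thread using standard C++ primitives (the engine itself uses its own Thread class, described in chapter 4):

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Processes submitted read jobs one at a time on a dedicated thread.
class DiskThread {
    std::queue<std::function<void()>> jobs;
    std::mutex mtx;
    std::condition_variable cv;
    bool quit = false;
    std::thread worker{[this] { run(); }};

    void run() {
        for (;;) {
            std::unique_lock<std::mutex> lock(mtx);
            cv.wait(lock, [this] { return quit || !jobs.empty(); });
            if (quit)
                return;
            std::function<void()> job = std::move(jobs.front());
            jobs.pop();
            lock.unlock();
            job();  // one file read at a time
        }
    }

public:
    void submit(std::function<void()> job) {
        { std::lock_guard<std::mutex> lock(mtx); jobs.push(std::move(job)); }
        cv.notify_one();
    }
    ~DiskThread() {
        { std::lock_guard<std::mutex> lock(mtx); quit = true; }
        cv.notify_one();
        worker.join();
    }
};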
1.5.4 File decoding
To significantly reduce the required storage space, most resources are stored
in a compressed format. The application must decompress them before they
can be used.
1.5.4.1 Image files
The most popular image formats, including PNG, TGA, and JPEG, can be decoded with the Developer's Image Library (DevIL) [22]. It is a free
cross-platform library with an API heavily based on that of OpenGL. Its
original name is OpenIL.
1.5.4.2 Sound files
The simplest sound file format is raw PCM, usually found in a WAV file. It
has no compression and consists of a sequence of samples. It can be decoded
quite easily. One of the most popular sound formats is MP3. Unfortunately,
due to legal reasons, it is problematic to provide support for its decoding.
There are, however, free alternatives, such as OGG Vorbis, or the lossless
FLAC format. They can be decoded by the LibVorbis [23] and LibFLAC [5]
libraries.
Chapter 2
Technologies
For the implementation of the engine, I used a number of preexisting key
technologies. They are discussed and examined in this chapter.
2.1 OpenGL
In order to take advantage of hardware-accelerated graphics and create a portable application, we need to use technology that is supported by a wide range of consumer GPUs. There are only two libraries supported by most: Direct3D by Microsoft [17], and OpenGL [11]. Since both run on the same hardware, the way they function is fairly similar. A major difference, though, is that while OpenGL is available on most platforms, including mobile ones, Direct3D is proprietary and only available on Microsoft's own platforms. For these and other reasons, I decided to use OpenGL as the core graphics library of the engine. This section describes the basics of working with modern OpenGL. It is based on [10, 12], where further information is available.
2.1.1 Overview
OpenGL, or Open Graphics Library, is an API for vector-based graphics
rendering. Since its implementation is provided by the GPU manufacturer,
it allows the user to take full advantage of hardware-accelerated graphics.
The current version of OpenGL is 4.3. Up until version 2, OpenGL only supported fixed-pipeline rendering. Versions 3 and 4 brought some advanced functionality, like the geometry shader, but they definitely don't represent as important an advancement, and their support is limited to the more recent
and more powerful hardware. I chose OpenGL 2 as the minimum version
required by the engine.
2.1.2 Extensions
Even if a higher OpenGL version is not supported, the desired functionality
might be available via extensions. Extensions are non-standard additions to
OpenGL, and their support varies between different GPU models. Many miscellaneous features start out as extensions, only to be later adopted into the OpenGL standard. Although their support cannot be guaranteed, there are certain extensions that are very common and often present on models that predate the OpenGL version that adopted them.
Some extensions have multiple versions, and many are also duplicated
as core functions, so to maximize compatibility, it is best to make the
program adaptive to any available version.
2.1.3 Architecture
OpenGL works as a state machine. A large portion of the data needed for rendering, including the source of vertex data and the active shader and textures, is stored in global states. In fact, the only values provided directly in the draw command are the type of primitives and the number of vertices to be rendered. Global states are generally undesirable in object-oriented programming, because they are hard to manage [14]. They are also incompatible with multi-threading, since one can never be sure that a global state has not been changed by a different thread.
2.1.4 OpenGL objects
There is a number of different objects in OpenGL. They usually represent GPU resources and are accessed through integer handles. The most important ones are described in this section.
2.1.4.1 Shaders and programs
A shader object is created by compiling its source code at the games runtime.
Shaders for dierent rendering stages are linked together to form a shader
program. Before rendering using the programmable pipeline, a program
must be activated.
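In raw OpenGL, this corresponds to a fixed sequence of calls (error checks through glGetShaderiv and glGetProgramiv are omitted for brevity):

// Compiles two shader stages and links them into a program object.
GLuint buildProgram(const char *vertexSrc, const char *fragmentSrc) {
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vertexSrc, NULL);
    glCompileShader(vs);
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fragmentSrc, NULL);
    glCompileShader(fs);
    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);
    // The stage objects are no longer needed once the program is linked.
    glDeleteShader(vs);
    glDeleteShader(fs);
    return program;  // activated later with glUseProgram(program)
}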
A shader program has two types of input variables: vertex attributes and uniforms. Attribute values are specific to each vertex. They are usually bound to vertex buffer objects, which serve as streams of input data. Uniform variables stay the same throughout the rendering process. Their values are set globally by the user. They are shared among the shaders in a program.
2.1.4.2 Array buffers
An array buffer stores numeric data in video memory, which can be used in multiple ways. Array buffers are usually thought of as different kinds of objects based on how they are used.
The most common way to use an array buffer is for vertex data storage. Then it is referred to as a vertex buffer object, or VBO. Its typical content is a series of vertex and texture coordinates. A VBO can be bound to a shader attribute to supply input data. Values of multiple attributes may be interleaved in the buffer.
Another option is to store a sequence of vertex indices, creating an index buffer object, or IBO. When it is bound during rendering, the values in the buffer determine which vertices should be drawn and in what order. This also allows vertices to be repeated without storing duplicate coordinates.
2.1.4.3 Vertex array objects
Before a mesh can be drawn, each vertex attribute must be correctly bound to either a constant value or a VBO. Some additional values must also be specified when binding, and an index buffer might be used as well. Since a mesh is usually drawn in each frame, the same sequence of commands to set up the shader input must be executed every time. In OpenGL 3, this common scenario is solved by vertex array objects, which store all of the necessary bindings and can be reactivated with a single command.
2.1.4.4 Textures
Textures represent one-, two-, or three-dimensional raster images in the
video memory. OpenGL supports multitexturing, or using multiple textures
at the same time. There are several texture slots, called units, to which
the texture objects are bound. Shaders then access the texture through
uniform sampler variables, which can be set to a texture unit index. This
relation is demonstrated in Figure 2.1.
Figure 2.1: Relation between texture objects, units, and sampler variables.
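In OpenGL calls, the chain in Figure 2.1 amounts to binding a texture object to a unit and pointing the sampler uniform at that unit's index (textureObject, program, and the uniform name are assumed here):

// Bind textureObject to texture unit 2...
glActiveTexture(GL_TEXTURE0 + 2);
glBindTexture(GL_TEXTURE_2D, textureObject);
// ...and make the shader's sampler read from unit 2 (the program must
// be active when the uniform is set).
GLint samplerLoc = glGetUniformLocation(program, "colorMap");
glUniform1i(samplerLoc, 2);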
A texture object may hold pixels in many different formats. Although the 32-bit RGBA format is the most prevalent, it is also possible to store floating-point data in the texture, which is useful for some special effects. In the shader, the sampled values are always returned as a floating-point vector of four components.
2.1.4.5 Frame buffer objects
Frame buffer objects are used for off-screen rendering. They can fully supplement the default frame buffer, but their components (color, depth, stencil buffer) must be created separately. Each component must be attached to either a texture or a render buffer.
The advantage of textures as rendering targets is that they can be directly used in further rendering. Render buffers, however, are faster and offer some additional functionality, such as multisampling.
2.1.5 OpenGL Shading Language
The source code of OpenGL shaders is written in the OpenGL Shading
Language, or GLSL. It is syntactically very similar to the C language.
The primary data types are floating-point scalars, vectors, and matrices. Common operations like vector-matrix multiplication can be done naturally, with mathematical operators. The mathematical functions available in GLSL include those most commonly used in computer graphics and are also compatible with the vector types.
2.2 OpenAL
OpenAL is a cross-platform library whose primary purpose is to play 3D sound [6]. Its API is based on that of OpenGL, which is why it has a similar name. Similarly to OpenGL, a context must first be created before the library can be used. There are only three types of objects in OpenAL: a listener, sources, and buffers.
2.2.1 Listener
Each context has a single listener. It represents the player, who perceives
the sound. The listener has a position and a velocity in space, which are used to simulate 3D sound playback. The listener object also holds other parameters that affect sound properties globally.
2.2.2 Source
Sound sources represent objects in space that produce sound. They too
have a position and a velocity, and several parameters for sound playback.
Each source has a queue of sound buffers to play and can only play them
one at a time.
2.2.3 Buffer
An OpenAL buffer stores the raw waveform of a sound. It can be played by
a source.
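In code, the three object types connect roughly as follows (a device and context are assumed to be open, and pcmData/pcmSize are placeholders for decoded sample data):

// Create a buffer and fill it with raw 16-bit mono PCM data.
ALuint buffer;
alGenBuffers(1, &buffer);
alBufferData(buffer, AL_FORMAT_MONO16, pcmData, pcmSize, 44100);
// Create a source at a position in space and play the buffer on it.
ALuint source;
alGenSources(1, &source);
alSource3f(source, AL_POSITION, 1.0f, 0.0f, 0.0f);
alSourcei(source, AL_BUFFER, buffer);
alSourcePlay(source);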
2.3 Text rendering
In most applications, text rendering is taken care of by the operating system.
This is, however, problematic inside a hardware-accelerated OpenGL context.
The functionality must therefore be implemented by the game engine.
2.3.1 Vector approach
Since the majority of professional fonts are vector-based, it is intuitive to
preserve this format in a vector graphics environment. The only problem
is that fonts contain curves, and those can't be rendered by OpenGL.
They would have to be approximated with additional vertices. A good
approximation might quickly result in hundreds of vertices per character,
which could have an impact on the game's performance. If, however, a balance can be found between quality and performance, or the game's art style calls for a jagged-looking font, this approach is feasible.
2.3.2 Raster approach
An alternative way is to pre-render the required glyphs, in sufficient resolution, into a texture, and then draw them in the form of textured quads. The
obvious problem with this method is the limited resolution of the glyphs.
There are several ways to mitigate this problem.
2.3.2.1 Basic filtering
The source texture may be either monochromatic or grayscale, with anti-aliased glyph edges. By rendering the glyphs in their original form, only with texture filtering, the result will be somewhat pixelated at a higher resolution (Figure 2.2a).
2.3.2.2 Thresholding
To ensure that the text has sharp edges at any resolution, we can reinterpret
the sampled values in the shader. With linear texture filtering, the glyph's color intensity will be smoothly interpolated, and we can treat values larger than a certain threshold as being inside the glyph, and the rest outside. Although this will result in sharp edges, the pixelation might manifest in the glyph's shape (Figure 2.2b).
2.3.2.3 Signed distance field
There is a way to improve this even further, with a technique described in [9]. If the smoothed edges of the glyphs in the source texture are thicker, the sampling interpolation is much more effective in approximating the actual shape of the glyph. This way, the color intensity can be thought of as a signed distance from the glyph's edge, with zero being the threshold. The texture represents a field of these distances, hence signed distance field. With this method, all pixelation is eliminated, and its only downside is that it tends to round sharp corners (Figure 2.2c). If the range of the signed distance values is broad enough, they can be used for special graphical effects, such as a text border or a halo [9].
(a) Basic filtering (b) Thresholding (c) Signed distance field
Figure 2.2: Comparison of raster-based text rendering techniques.
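As an illustration, the thresholding can be performed in a fragment shader such as the following, targeting GLSL for OpenGL 2 (the sampler and varying names and the smoothing width are assumptions); smoothstep softens the 0.5 threshold over a narrow band so the edge stays anti-aliased:

// Fragment shader for glyphs stored in a signed distance field texture,
// kept as a C string so it can be compiled at runtime.
const char *sdfFragmentSrc =
    "uniform sampler2D glyphs;\n"
    "varying vec2 texCoord;\n"
    "void main() {\n"
    "    float d = texture2D(glyphs, texCoord).a;\n"
    "    // 0.5 marks the glyph edge (signed distance zero)\n"
    "    float opacity = smoothstep(0.46875, 0.53125, d);\n"
    "    gl_FragColor = vec4(0.0, 0.0, 0.0, opacity);\n"
    "}\n";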
2.3.3 Kerning
Once we have a way to render individual characters, we need to combine
them into words. Fortunately, this is not very hard for Latin characters, but
one of the difficulties to consider is kerning.
With kerning, certain character pairs may have their spacing adjusted to improve the appearance of the text. These kerning pairs are usually defined by the font.
Figure 2.3: The difference between regular and kerned type. (Courtesy of Wikipedia)
2.4 Particle effects
Particle effects are graphical effects based on a large number of uniform but independent small objects: particles. They are commonly used in games
because they can produce impressive visuals with little effort. Effects that can be simulated with particles include sparks, weather, fire, and smoke.
2.4.1 Data model
Particle effects are based on the fact that each individual particle moves and behaves independently. Therefore, some data must be maintained on a per-particle basis. That may include its position, velocity, or constant parameters that differentiate it from the other particles.
2.4.2 GPU processing
The GPU is best at performing the same operation on a large dataset,
normally the screen's pixels, in parallel [13]. This is exactly the type of task
needed to update a set of particles. Using the GPU also eliminates the need
to transfer the data from the CPU in order to render the particles.
2.4.2.1 Transform feedback
There is a feature in OpenGL 3 that is ideal for this task, called transform
feedback. It allows the output variables of a vertex shader to be redirected into an array buffer, which may then serve as the input of the same shader in the next iteration. The whole update process is then performed in the vertex shader. When transform feedback is active, it is possible to turn off the fragment shader, so that nothing is actually rendered on the screen.
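The core of a particle update pass with transform feedback might look like this; updateShader and the ping-pong buffers are assumed to be created beforehand, with the recorded outputs declared via glTransformFeedbackVaryings before linking:

// Run the update vertex shader over all particles; its outputs are
// captured into destBuffer instead of being rasterized.
glUseProgram(updateShader);
glEnable(GL_RASTERIZER_DISCARD);  // turn off the fragment stage
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, destBuffer);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, particleCount);  // reads the previous buffer
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);
// Next frame, destBuffer and the source buffer swap roles (ping-pong).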
2.4.3 Instanced rendering
Since particles usually have the same shape (which can be slightly modified in the vertex shader), it is possible to draw them quickly with instanced rendering. Instanced rendering allows the same mesh to be drawn many times over with a single draw command.
We also need to supply the particle data, which differs between instances. By setting the vertex attribute divisor, we can specify whether the vertex attribute in question should be changed at every vertex, every instance, or even every N-th instance.
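In OpenGL 3 terms, this comes down to two calls; attribute location 1 and the quad layout are assumptions for the example:

// Attribute 1 (e.g. particle position) advances once per instance
// instead of once per vertex.
glVertexAttribDivisor(1, 1);
// Draw particleCount copies of a 4-vertex quad with one command.
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, particleCount);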
Chapter 3
Architecture
The engine is conceived as a library, upon which the user may build their
product. It is designed not to limit the game developer and allow them to
use only the parts of the engine that fit their needs. This is achieved by the
overall layered architecture. Its components are also divided into categories
across the layers by functionality.
3.1 Layers
The engine's components are organized into layers, with each depending on the underlying ones. The lowest layer consists of independent utilities, and each one that follows builds upon the previous one, bringing more advanced, but also more specific, functionality. If the user finds some of the top layers inviable, they can choose not to use them, and start building their game on top of one of the lower ones. For example, they may decide to only use the engine as middleware for providing an OpenGL window.
3.2 Categories
The components are also divided into the following logical categories, which
are mutually independent:
graphics and GUI
audio
physics
network
An overview of the architecture can be seen in Figure 3.1.
Figure 3.1: The engine's architecture. (Layers: platform abstraction, utilities, low-level classes, basic functions, advanced functions; categories: network, physics, audio, graphics.)
3.3 Dependencies
The engine's graphics is based on OpenGL. It uses the new OpenGL API, which relies on the programmable rendering pipeline, making 2.1 the minimum supported version of OpenGL. For sound playback, the OpenAL library [6] is used. The engine also utilizes several additional libraries for file decoding, including DevIL [22], LibVorbis [23], and LibFLAC [5].
3.4 Platform abstraction
Many functions needed by an interactive graphical application must be mediated by the operating system. These are accessed through the platform abstraction layer. Although, as of now, the engine only supports the Microsoft Windows platform, the platform abstraction layer makes sure the engine can be easily ported to new platforms. All platform-dependent functions are accessed exclusively through this component, and therefore adding support for another platform is equivalent to providing a new implementation of the abstraction layer.
3.5 Game logic
Unlike many commercial engines, mine does not support or enforce any scripting language. An important design point is that all game logic is left to be implemented by the user, allowing them to define the game mechanics in a programming language.
3.6 Design patterns
There are several key design patterns used throughout the engine.
3.6.1 Parent factory
In several components, there is a major parent class, which represents an
environment for other objects. These child objects can only come from
a valid parent, which serves as their factory. An example is the OpenGL
context, which owns other OpenGL objects.
3.6.2 Static factory method
Some objects, especially those representing OpenGL resources, cannot be
copied, and their creation may fail. They should always be held by pointers. To solve these problems, all constructors for these classes are private and replaced by a static factory method that returns a pointer, or a null value if the construction fails.
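In outline, the pattern looks like this; Resource and its acquisition step are generic stand-ins, not actual engine classes:

class Resource {
    Resource() { }                   // constructors are private
    Resource(const Resource &);      // copying is disabled
    static bool acquireGpuHandle();  // hypothetical step that may fail
public:
    // Static factory method: returns a new object, or NULL on failure.
    static Resource *create() {
        if (!acquireGpuHandle())
            return NULL;
        return new Resource();
    }
};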
Chapter 4
Implementation
I gave the engine the codename Artery Engine. It is realized in the C++ programming language, for the Microsoft Windows platform. All of the engine's classes and functions are in the Artery namespace. All its macros have the AR prefix.
In this chapter, I am going to describe the implementation of individual components of the engine, one by one.
4.1 Platform abstraction layer
This component provides access to the functions that are provided by the OS. To separate its API from the platform-dependent implementation, I used the pointer-to-implementation idiom (PIMPL). That means that all classes in the layer contain a pointer to an unspecified type, to which all methods are delegated.
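A condensed sketch of the idiom (WindowImpl here is a hypothetical platform-specific type, not necessarily the engine's layout):

// Public header: no platform-specific types appear here.
class Window {
    class WindowImpl *impl;  // pointer to the hidden implementation
public:
    Window();
    ~Window();
    void setTitle(const char *title);  // delegated to impl
};

// A Windows-specific source file would then define, for example:
// void Window::setTitle(const char *title) { impl->setTitle(title); }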
4.1.1 Window management
The window is the interface between the game and the player. It both presents the visual output of the game and captures the player's input. In the engine, it is represented by the Window class.
Window is an abstract class, which has to be overridden by the user, specifying how to handle the player's input. It may exist in three states:
as a prototype – describing a window that does not exist yet,
hidden – created in the OS, but not visible to the player,
open – visible and interactive.
Generally, it is best to set the window's properties in the prototype state, perform the first draw cycle in the hidden state, and present the finished window to the player afterwards.
4.1.1.1 Window settings
When creating the window in the OS, there are many parameters that must
be set. To simplify the task, I chose four configurations that cover practically
all window modes used by computer games. These modes are:
fixed-size,
resizable,
borderless,
fullscreen.
The only other window parameter besides the mode that is mandatory is
its dimensions. Several other options may be set for the window, including
an icon, title, or mouse cursor. I also added support for multiple monitors.
The available monitors may be queried, and one of them may be set as the
target of a fullscreen window.
4.1.1.2 Event capture
In Windows, all received window messages are passed to a WNDPROC function, which must be specified. Since there could be multiple windows in
the application, but only one WNDPROC function, there has to be a way to
make sure the message reaches the correct Window object. Fortunately, it is
possible to store arbitrary user data with the window handle, so I inserted
the object's pointer.
To specify how window events are handled, it is necessary to override
the Window class's virtual methods. Their prototypes are defined as methods with an empty body, so that it is possible to only redefine the ones of interest.
The list of all possible events can be seen in Table 4.1.
To allow synchronous event handling, the incoming events do not call these
methods immediately. Instead, there is an additional method, digestEvents,
which invokes the awaiting events' methods when called.
Method name Event description
openEvent The window has been opened.
closeEvent The window has been closed.
destroyEvent The window is about to be destroyed.
charTypeEvent A character has been typed.
keyPressEvent A keyboard key has been pressed.
keyReleaseEvent A keyboard key has been released.
mousePressEvent A mouse button has been pressed.
mouseReleaseEvent A mouse button has been released.
mouseScrollEvent The mouse wheel has been scrolled.
mouseMotionEvent The mouse cursor has moved.
repaintEvent Part of the window needs to be repainted.
resizeEvent The window has been resized.
focusEvent The window has gained the players focus.
blurEvent The window has lost the players focus.
hideEvent The window has been minimized.
restoreEvent The window has been restored.
closeRequestEvent The player attempts to close the window.
Table 4.1: The list of window events.
4.1.2 Multi-threading
The OS also provides resources for multi-threading. I translated the low-level Windows API into a user-friendly Thread class and added a number of synchronization mechanisms. The Thread class is an abstract class with a virtual run method, which must be overridden in order to specify the thread's body.
For synchronization, I created classes that represent mutexes and semaphores. I also created a macro that can be used to easily create a critical section. It can be used as a control structure:
AR_CRITICAL(aMutex) {
    someSynchronizedOperation();
}
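One common way to build such a control-structure macro (shown here as an illustration, not necessarily the engine's definition) is a for-loop that holds a scoped lock for exactly one iteration:

// Illustrative definition; Mutex stands for the engine's mutex class.
class ScopedLock {
    Mutex &mutex;
    bool first;
public:
    explicit ScopedLock(Mutex &m) : mutex(m), first(true) { mutex.lock(); }
    ~ScopedLock() { mutex.unlock(); }
    bool once() { bool f = first; first = false; return f; }
};

#define AR_CRITICAL(m) for (ScopedLock _arLock(m); _arLock.once(); )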
4.1.3 Other utilities
I also provided an object-oriented reimplementation of several minor utilities of the OS, including
functions and classes to retrieve, represent, and manipulate date, time, and precise processor time,
pop-up messages, primarily for error reporting,
interaction with the clipboard,
the sleep procedure.
4.2 Graphics
A large portion of the engine takes care of graphics rendering based on the
OpenGL library. The main graphics component is the OpenGL abstraction
layer, upon which the more advanced functions are built.
4.2.1 Linear algebra
For vertex transformations, I created classes that represent vectors and
matrices. They are template classes that come in float and double variants.
Their operators have been overloaded, so that they can be used seamlessly in mathematical operations. A separate Affinity class serves as a static factory for the most common affine transformations, and can perform related operations on vectors and matrices.
4.2.2 OpenGL abstraction layer
With the OpenGL abstraction layer, I attempt to provide a more user-
friendly interface to the OpenGL library at a minimal performance cost.
It is designed to minimize the use of global states and simplify the most
common tasks. OpenGL is a very large library, and my abstraction layer
certainly does not cover all its functions, but I included the ones most
important for modern 2D graphics. All OpenGL constructs are represented by C++ objects with intuitive interfaces.
4.2.2.1 Extension control
The minimum supported version of the library is OpenGL 2. However, the
abstraction layer contains some functionality from the later versions. In
these cases, it automatically tries to load the functions from all possible
OpenGL extensions. If a function is completely unsupported by the GPU,
attempting to create an object that relies on it will fail.
4.2.2.2 Graphics context
The central class of the component is the GraphicsContext. It represents
the OpenGL context of a window. All other OpenGL objects belong to the
context they were created in and can only be used inside it. It serves as a
factory for the other OpenGL objects.
Each graphics context has a frame buffer. Since the target frame buffer may change, I separated it into its own class. To simplify switching rendering modes, I transferred certain settings that are commonly global across the context to be managed by the objects they usually relate to. Table 4.2 shows how I divided the global OpenGL states among the objects. The last column lists unmanaged states, which are configured at the moment they are used.
Context: rendering target, texture bindings
Frame buffer: viewport rectangle, scissor test, depth test, blending test, culling test
Shader: blending mode, stencil test, stencil mode
(immediate): active shader, vertex array, texture unit
Table 4.2: Responsibilities of OpenGL objects.
4.2.2.3 Frame buffer
A frame buffer that can be used as the rendering target is represented by
the abstract FrameBuffer class. There are three implementations of this
class:
ScreenFrameBuffer, the default frame buer of the context,
TextureFrameBuffer, an FBO attached to a texture,
RenderFrameBuffer, an FBO attached to render buer objects.
It is possible to blit pixels between frame buffers, typically into a TextureFrameBuffer, which can be used directly as a texture.
4.2.2.4 Shader program
For OpenGL shaders, there is the Shader class, which represents a complete
shader program, consisting of a vertex shader and a fragment shader. In
favor of simplicity, I abandoned the separate compilation and linkage of
shaders, at the cost of the possibility that some shaders might need to be
compiled multiple times for different programs.
A Shader object is created just from two strings containing the shaders' respective source codes. It is then automatically compiled and linked. As with any object in this component, if its creation fails, a null pointer will be generated instead. In debug mode, the compilation error will also be printed.
If the shader is created successfully, the list of its uniform variables is retrieved, and a list of uniform handle objects is generated. These objects provide access to the uniform variables and cache their values. This method, inspired by [14], prevents unnecessary OpenGL calls when setting the uniform variables. The uniform objects have various types, depending on the variable they represent, and can be retrieved from the Shader object by name. This also serves as type verification.
For vertex attributes, I added an option to define their order before compilation, which allows the same bindings to be used across multiple shaders that have the same input attributes. Alternatively, it is possible to query their assigned positions later, by their names.
It is intended that each shader should be represented by a separate class derived from Shader. Since all OpenGL objects are created exclusively by the GraphicsContext, there is no public constructor available. To allow inheritance, I created a protected move constructor specifically for this case. It accepts a valid Shader object created by the context, takes over its content, and destroys it in the process. Creating a custom shader object should look like this:
MyShader *MyShader::create(GraphicsContext *gc) {
    // Create a regular shader object from GLSL sources.
    Shader *base = gc->createShader(vertexSrc, fragmentSrc);
    // Check that it has been created successfully.
    AR_ASSERT(base);
    // Retrieve uniform objects.
    SingleUniform<Vector4f> *unifColor = NULL;
    // Return the object if successful.
    if (base->getUniform(unifColor, "color"))
        // This constructor passes base to the parent's move constructor.
        return new MyShader(base, unifColor);
    // Otherwise delete the shader and return null.
    delete base;
    return NULL;
}
To render data with a Shader, all that is needed is to pass a vertex array to
the draw function, which takes care of activating the shader if necessary.
4.2.2.5 Feedback shader
A special kind of shader is the FeedbackShader. It uses transform feedback
(see 2.4.2.1) and allows vertex shader outputs to be stored in a VBO. Unlike
the regular Shader, it does not have to contain a fragment shader.
4.2.2.6 Array buffers
The VertexBuffer and IndexBuffer classes represent OpenGL array buffers.
I made loading data into them simple by providing template methods
that automatically deduce the data type. To be used for rendering, they have
to be bound to a vertex array.
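Such a template method might be sketched as follows, assuming the buffer object is currently bound; the signature is an illustration, not the engine's actual API.

// sizeof(T) gives the element size, so callers never have to pass byte counts.
template <typename T>
void setBufferData(GLenum target, const T *data, size_t count, GLenum usage) {
    glBufferData(target, count * sizeof(T), data, usage);
}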
4.2.2.7 Vertex arrays
Although the vertex array object has only been part of OpenGL since version 3, I
decided to make it a key element of the abstraction layer because it promotes
the stateless design I am pursuing. Even if it did not exist at all, I would
probably create a very similar mechanism. Since it does, I incorporated it
into my engine, but to maintain compatibility with OpenGL 2, I also added
an alternative software implementation. When a VertexArray object is
created, the appropriate version is chosen automatically.
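The automatic selection might be sketched like this, assuming the entry points were loaded into function pointers as described in 4.2.2.1; the class names are illustrative.

VertexArray *GraphicsContext::createVertexArray() {
    if (glGenVertexArrays)                  // entry point available on this GPU
        return new HardwareVertexArray();   // real OpenGL 3 vertex array object
    return new SoftwareVertexArray();       // emulated bindings for OpenGL 2
}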
4.2.2.8 Instanced drawing
I provided support for instanced drawing by adding the drawInstanced
method to Shader. I also added the InstancedVertexArray class derived
from VertexArray, in which it is possible to set the divisor for each attribute.
4.2.2.9 Textures
A texture is represented by the abstract Texture class, which has
several descendants, such as 1D, 2D, and 3D textures, or the
TextureFrameBuffer. The pixel data for a texture is passed in the form of a
bitmap object, a class designed specifically for this purpose. A bitmap can
have one to three dimensions and one of a variety of pixel types compatible with
OpenGL. It holds a sequence of raw pixel data in memory.
Texture objects may be bound to texture units via the GraphicsContext,
and those can in turn be bound to uniform variables of a sampler type.
4.2.3 Mesh system
In order to draw a single triangle in OpenGL, one must create a vertex array
object, a vertex buffer, and possibly an index buffer, and correctly bind
them to the vertex attributes. To simplify this, I created a mesh system
that takes care of all these tasks.
The attribute bindings are defined by a vertex type, which serves as a template
argument for the mesh classes. There are several predefined vertex types,
but it is possible to create custom ones. This is an example of a vertex type
definition:
struct MyVertex {
    // Physical data structure - 3D coordinate and normal vector
    struct {
        float x, y, z;
    } coord, normal;

    // GLSL attribute names (defined outside the struct)
    static const char * const attributeList[];

    // Attribute bindings
    static void bindAttributes(VertexArray *va, VertexBuffer *vb) {
        va->bindVertexBuffer(0, vb, 3, 0, 6);
        va->bindVertexBuffer(1, vb, 3, 3, 6);
    }
};

const char * const MyVertex::attributeList[] = {
    "coord",
    "normal"
};
4.2.3.1 Meshes
The Mesh class has a vertex type as its template argument. It aggregates
a VertexArray, a VertexBuffer, and optionally an IndexBuffer, binds
them according to the vertex type, and specifies what primitive type the
vertices should form. To create a mesh, an array of vertices is all that is needed:
MyVertex vertices[] = {
    { { ax, ay, az }, { nx, ny, nz } },
    { { bx, by, bz }, { nx, ny, nz } },
    { { cx, cy, cz }, { nx, ny, nz } }
};

Mesh<MyVertex> *mesh = Mesh<MyVertex>::create(gc, TRIANGLES, AR_ARRAY_STATIC(vertices));
The derived DynamicMesh class has additional methods that allow its
vertices to be altered dynamically.
4.2.3.2 Mesh shaders
The MeshShader is a Shader that can draw meshes of a given vertex type. It
mostly serves as a safeguard to make sure the attributes are bound correctly.
The vertex attribute names in the shader's source code must correspond to
the ones specified by the vertex type.
4.2.4 Post-processing
I also added a simple way to perform shader-based fullscreen post-processing.
The PostProcessBuffer class contains a frame buffer object that stores
the scene's pixels, a shader that transforms them, and a rectangle mesh to
fill the game window. To create a post-processing buffer, one only needs
to supply the source code of the fragment shader. In it, two variables are
provided: the sampler of the scene's pixels and the current position in the
scene. Additional uniform variables can be created, and their objects can
be retrieved through the API.
To allow drawing the scene into the post-processing buffer with multisampling,
the class actually contains two frame buffer objects: a RenderFrameBuffer
and a TextureFrameBuffer. Before the post-processing stage,
the pixels are copied to the latter.
Rendering with post-processing is therefore very simple:
// Create the post-processing buffer.
PostProcessBuffer *post = PostProcessBuffer::create(gc, fragmentSource,
                                                    SCREEN_WIDTH, SCREEN_HEIGHT, 4);

// Rendering function:
void drawScene() {
    gc->pushTarget(post);
    // Draw the scene here.
    gc->popTarget();
    // Apply post-processing.
    post->render();
}
4.2.5 Particle system
For the particle system, I decided to take a more low-level approach, with the
possibility of adding a higher-level layer in the future. This approach allows
the user to define almost any particle effect they come up with, without
worrying about the technical details of the implementation. I divided the
classes of the particle system in such a way that the same set of particles
may be used in more than one way.
The particle system is based on shaders that control the way particles
are updated and rendered. Only parts of those shaders are defined by the
user; the rest takes care of the technical execution. The source codes are
then spliced together to form the resulting shader.
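A simplified sketch of the splicing follows; the engine's actual shader templates are more elaborate, and the fragment names here are illustrative assumptions.

#include <string>

// PARTICLE_HEADER_SRC declares the engine-defined inputs and outputs,
// PARTICLE_MAIN_SRC contains a main() that calls the user's function.
std::string buildProcessorSource(const std::string &userFunction) {
    return std::string(PARTICLE_HEADER_SRC)
         + userFunction
         + std::string(PARTICLE_MAIN_SRC);
}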
There are multiple ways to implement particles in OpenGL, and generally,
the more efficient ones are less widely supported. To take advantage
of modern technologies and maximize compatibility at the same time, I
created two alternative implementations of the particle system out of four
considered possibilities.
4.2.5.1 Frame buffer object implementation
The compatibility implementation is based on rendering the particle data
into a floating-point texture via a frame buffer object. The texture is then
sampled to retrieve the data. Unfortunately, even this version is not
guaranteed to be supported in OpenGL 2, as it requires the extensions for
instanced drawing and frame buffer objects with a floating-point texture
attachment. Sampling is a fairly clumsy and inefficient operation for this
task, but I have not found a better solution with the same compatibility.
4.2.5.2 Transform feedback implementation
A much cleaner implementation is based on transform feedback. It also
relies on instanced drawing with attribute divisors, which implies OpenGL 4
as the minimum version [12]. The particle data is output directly into a
vertex buffer that is passed to the rendering shader along with the vertex
data. The two are differentiated by attribute divisors.
4.2.5.3 Particle class
The central entity of the particle system is the definition of a class of
particles, the ParticleClass class. It defines the data representation of the
particles and serves as a factory for all objects that use the particular class
of particles.
4.2.5.4 Particle set
A set of physical particles is stored in a ParticleSet. It consists of a double
buffer with the particle data, which is updated entirely in video memory,
and a mesh that defines the geometry of each particle. The initial particle
data may also be uploaded into the buffer manually.
4.2.5.5 Particle processor
The ParticleProcessor class represents a shader that updates the particle
data. It is defined by the source code of a GLSL function that receives the
current values of the particle variables and modifies them accordingly.
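A user-supplied update function might, for example, look like this; the particle variable set and the deltaTime uniform are assumptions made for this illustration.

const char *processorSrc =
    "void updateParticle(inout vec2 position, inout vec2 velocity,\n"
    "                    inout float life) {\n"
    "    velocity.y -= 9.81*deltaTime;   // gravity\n"
    "    position += velocity*deltaTime; // integrate the motion\n"
    "    life -= deltaTime;              // age the particle\n"
    "}\n";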
4.2.5.6 Particle renderer
The ParticleRenderer class represents a shader that is used to render the
particles. The vertex shader includes the implementation-specific mechanism
to retrieve the particle data, so the user code is also wrapped into a function
that is incorporated into the resulting source code, but the fragment shader
is used as-is. It is up to the user to forward the particle variables to the
fragment shader.
4.3 Graphical user interface
The next part of the engine offers tools that allow the user to create a
graphical user interface for the game, which usually includes a game menu
and a heads-up display (HUD), where statistics, such as the player's health,
can be displayed.
4.3.1 Indirect calls
I created an indirect call system, inspired by signals and slots in the Qt
library [7], that allows objects that generate or register events to be
dynamically connected to the intended responses, without creating unnecessary
dependencies. This is typically used to connect a GUI element (e.g. a button)
to a function in the game logic.
The system has three main classes: the signal emitter, which contains
a dispatcher, and the slot. The emitter may be used to send a signal to
all slots attached to its dispatcher. The dispatcher manages a list of
attached slots. The main reason it has been separated into a class of its
own is to provide an API that allows one to attach a slot to a signal but
prevents unauthorized emission of the signal. The slot represents the
receiving end, an action that should be performed when the signal is sent.
There are multiple types of slots, representing functions, object methods, etc.
Since a signal is similar to a function call and may have any set of
arguments, all of the classes of this component rely heavily on templates that
specify their argument types.
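A hypothetical usage might look as follows; none of these names are the engine's actual API, they only illustrate the flow from emitter through dispatcher to slot.

// The template arguments list the signal's argument types.
SignalEmitter<int> scoreChanged;                // emitter with its dispatcher
FunctionSlot<int> hudSlot(&updateScoreDisplay); // slot wrapping a free function
hudSlot.attachTo(scoreChanged.getDispatcher()); // connection via the dispatcher
scoreChanged.emit(42);                          // indirectly calls updateScoreDisplay(42)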
4.3.2 Text rendering
For text rendering, I applied the signed distance field method described in
Section 2.3. The central class of this component is the font, which generates
text meshes, which can in turn be rendered with text shaders.
4.3.2.1 Font
The Font class represents a font and can produce text meshes. It holds a
texture with the signed distance field of all available characters, along with
their positions and dimensions, and the list of kerning pairs.
I created a custom file type that stores all data necessary to create a
font object, including the raster signed distance field, and a simple utility
program that converts a vector font to this format using the FreeType
library [21]. This means the conversion does not have to be performed every
time the game is launched.
4.3.2.2 Text mesh
A text mesh can be created by supplying the desired string, along with
several parameters such as text alignment or line height, to the font object.
Each glyph is represented by a quad in the mesh. The correct positions and
texture coordinates are deduced for each vertex, so that the string is typeset
correctly in the font. The vertex coordinates are in font units, meaning that
the value 1 corresponds to the font size.
Figure 4.1: The structure of a text mesh (wireframe in red).
4.3.2.3 Text shader
A text mesh can be rendered with a text shader. The main job of a text
shader is to convert the signed distance field into sharp anti-aliased glyphs.
The following paragraphs detail this procedure.
At each fragment, we need to determine the opacity, depending on the
position in the glyph. The first step is to sample the signed distance field
and transform the value to the interval [-1, 1] to retrieve the signed distance.
Next, we need to convert it to the unit coordinate system as

d = d_{MAX} \cdot m + b, \qquad (4.1)

where m is the sampled distance, d_{MAX} is the maximum distance encoded
in the texture (in font units), and b is an optional bias parameter that can
be used to adjust the glyph's thickness.
Theoretically, the fragments where d is below zero belong to the glyph,
and the rest do not. However, to prevent pixelated edges, we must smooth
them and map the values near zero distance to a gradient transition between
zero and full opacity. We need to choose a threshold t, so that the gradient
spans the interval [-t, t] (Figure 4.2).
Figure 4.2: Mapping the signed distance d to opacity (black).
If t is too small, the result will be pixelated. If it is too large, however,
the text will lose its sharpness. Generally, the edge should be around a
pixel thick, or at least proportional to a pixel, so that edge smoothing is
consistent across all text sizes. Therefore, we need to find the pixel size in
relation to the signed distance d.
For this, there are the derivative functions in GLSL. They return how
much a value changes between consecutive fragments on the X or Y axis.
Although the text may be stretched unevenly in the two dimensions, and
the correct thickness for horizontal and vertical edges may differ, an average
value should suffice. The fragment size w can then be computed from the
coordinate P in font units as

w = \frac{1}{2} \left( \left| \frac{\partial P_X}{\partial x} \right| + \left| \frac{\partial P_Y}{\partial y} \right| \right) \qquad (4.2)
Now we can derive the threshold t as

t = s \cdot w, \qquad (4.3)

where s is a coefficient that adjusts the edge smoothness. Finally, we may
compute the opacity of the fragment as

\alpha = \frac{1}{2} \left( 1 - \mathrm{clamp}\left( \frac{d}{t} \right) \right), \qquad (4.4)

with the clamp(x) function clamping any values outside the interval [-1, 1]
to sgn(x).
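To illustrate, the entire procedure translates into just a few lines of GLSL. The following is a minimal sketch of a fragment shader implementing equations (4.1) to (4.4); all uniform and varying names are assumptions made for this example and do not reflect the engine's actual shaders.

const char *textFragmentSrc =
    "uniform sampler2D sdf;    // the signed distance field texture\n"
    "uniform float dMax;       // maximum encoded distance (font units)\n"
    "uniform float bias;       // thickness adjustment b\n"
    "uniform float smoothness; // edge smoothness coefficient s\n"
    "varying vec2 texCoord;    // position in the distance field texture\n"
    "varying vec2 fontCoord;   // position P in font units\n"
    "void main() {\n"
    "    float m = 2.0*texture2D(sdf, texCoord).r - 1.0; // map to [-1, 1]\n"
    "    float d = dMax*m + bias;                        // equation (4.1)\n"
    "    float w = 0.5*(abs(dFdx(fontCoord.x)) + abs(dFdy(fontCoord.y))); // (4.2)\n"
    "    float t = smoothness*w;                         // equation (4.3)\n"
    "    float alpha = 0.5*(1.0 - clamp(d/t, -1.0, 1.0)); // equation (4.4)\n"
    "    gl_FragColor = vec4(1.0, 1.0, 1.0, alpha);\n"
    "}\n";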
There are several text shaders available in the engine, with different graphical
effects, such as gradients, borders, or shadows.
4.4 Physics
The physics system is optimized for platformer-style games. It is based
on elastic collisions between 2D objects. It does not support rotational
movement, so a realistic simulation cannot be achieved with this system.
However, this simplified model is sufficient for most games where physics
isn't a key gameplay element, and is sometimes even preferred.
4.4.1 Basic shapes
The first problem was to choose the supported shapes for the physical
simulation. After researching various possibilities, I came to the conclusion
that for anything more complex than points, lines, convex polygons, and
circles, an analytical solution would be either problematic or slow. For these
reasons, I chose these four shapes as the building blocks of the engine's
physical simulation.
(a) point (b) line (c) convex polygon (d) circle
Figure 4.3: The basic physical shapes.
This limited choice of shapes brought an unexpected problem, though. To
create more complex structures, we must combine the basic shapes. However,
due to imprecise floating-point calculations, unexpected results were being
produced in the proximity of the seams.
In the example in Figure 4.4, we can see a blue rectangle traveling on
top of terrain that is composed of two line segments, AB and BC. In this
scenario, a collision should be detected between the bottom right corner of
the rectangle and the line segment, diverting the rectangle so that it
may continue moving up the ramp. Instead, due to imprecise calculation,
a collision is found between the right edge of the rectangle and point B of
the line segment, as if the rectangle's base were below B. This phantom
collision prevents the rectangle from moving any further.
Figure 4.4: A phantom collision.
To solve this, I introduced two additional basic shapes, the elementary
building blocks of polygons: vertices and edges. An edge is similar to a
line segment, but it is only one-sided and, most importantly, does not detect
collisions at its endpoints. If a pair of edges forms a concave joint, like at
point B in the previous example, no connecting shape is even needed. Convex
joints, however, do require one, and for those, there is the vertex shape.
Unlike a point, it only detects collisions from the directions it is exposed to.
Constructing complex shapes out of edges, with vertices at convex joints,
prevents the occurrence of phantom collisions.
(a) vertex (b) edge
Figure 4.5: Additional special physical shapes.
Each shape has several basic methods that determine its physics-related
qualities, such as perimeter, area, or center of mass.
4.4.2 Shape interaction
I created methods that compute certain physical relations between any pair
of the basic shapes. They are:
• intersection testing,
• minimum distance computation,
• collision prediction.
A special implementation is made for each of the 21 possible combinations
of the 6 shapes, for each of these functions. They are all computed using
analytical mathematics and optimized for maximum performance. All
functions involving a polygon have linear time complexity with respect to
the number of their sides, and all others have constant time complexity.
For interaction between a pair of convex polygons, I used algorithms based
on the rotating calipers method [20], which have linear time complexity
O(n_A + n_B), where n_A + n_B is the total number of the two polygons' sides.
To allow calling one of these functions on a pair of shapes of unknown
type, I used the double dispatch idiom.
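A minimal sketch of the idiom for two of the six shape types follows; the pairwise functions containing the actual mathematics are assumed to exist elsewhere, and the engine's real class layout may differ.

class Circle;
class Polygon;

bool circleCircle(const Circle &a, const Circle &b);
bool circlePolygon(const Circle &c, const Polygon &p);
// ... remaining pairwise functions ...

class Shape {
public:
    // First dispatch: resolves the dynamic type of the object this is called on.
    virtual bool intersects(const Shape &other) const = 0;
    // Second dispatch: the type of the argument is now statically known.
    virtual bool intersectsWith(const Circle &c) const = 0;
    virtual bool intersectsWith(const Polygon &p) const = 0;
};

class Circle : public Shape {
public:
    bool intersects(const Shape &other) const { return other.intersectsWith(*this); }
    bool intersectsWith(const Circle &c) const { return circleCircle(*this, c); }
    bool intersectsWith(const Polygon &p) const { return circlePolygon(*this, p); }
};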
4.4.3 Objects
A shape with a position and a velocity vector forms a physical object. To predict
a collision between two objects, for example a player and a wall, all we need
to do is:

Collision c = player.predictCollision(wall);

The Collision object c now contains all the necessary information, including
whether and when a collision is going to occur, and its normal vector. The objects
may also define how to resolve collisions, depending on the other object's
type.
4.4.4 Data structures
Before we can start predicting collisions, we must first find the pairs of
objects that are close to each other and could possibly collide. For
this purpose, there is currently only one data structure available in the
engine, the spatial hash table, described in 1.4.3.
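The hashing step itself can be sketched as follows; the cell size and the hash constants are illustrative choices, not the engine's actual values.

#include <cmath>
#include <cstddef>

std::size_t spatialHash(float x, float y, float cellSize, std::size_t buckets) {
    int cx = (int) std::floor(x / cellSize);  // integer cell coordinates
    int cy = (int) std::floor(y / cellSize);
    // Mix the two coordinates with large primes into a single bucket index.
    return ((std::size_t) cx * 73856093u ^ (std::size_t) cy * 19349663u) % buckets;
}

Only objects hashed into the same cell (or, for collision prediction, into nearby cells) need to be tested against each other.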
4.5 Sound playback system
The 3D sound system of the engine is designed to let the user play back
sound files with simple commands, without worrying about technical
details such as buffering. I implemented it using the OpenAL library (see
Section 2.2).
The core of the component is the SoundSystem class. It holds an OpenAL
context and is designed as a singleton. Global parameters such as the listener
properties or the master volume can be controlled through this class, and it also
serves as a factory for other objects belonging to the sound system, similarly
to the GraphicsContext class.
The SoundSystem also maintains a buffering thread. In this thread, the
buffers of all the sounds that are currently being played are periodically
checked and refilled if needed. The period may be set when the class is
initialized. All OpenAL resources, including buffers, are recycled by
the sound system.
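One pass of this check follows the standard OpenAL streaming pattern, sketched below; the SoundStream type and its methods are assumptions for this example.

#include <AL/al.h>

void refillSource(ALuint source, SoundStream *stream) {
    ALint processed = 0;
    alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
    while (processed-- > 0) {
        ALuint buffer;
        alSourceUnqueueBuffers(source, 1, &buffer);   // reclaim a finished buffer
        if (stream->decodeNextChunk()) {              // refill it with fresh PCM data
            alBufferData(buffer, AL_FORMAT_STEREO16, stream->data(),
                         stream->size(), stream->sampleRate());
            alSourceQueueBuffers(source, 1, &buffer); // queue it for playback again
        }
    }
}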
4.5.1 Sound emitter
One of the objects that can be created by the SoundSystem is the
SoundEmitter, an object in the game world capable of producing sound from its
location. It can be compared to a source in OpenAL, but it is capable of
playing more than one sound at a time.
4.5.2 Sound
Another such object is the Sound, representing a sound that can be played,
either directly or by an emitter. It can be created from a Waveform object,
which contains the sound's raw waveform and could be considered analogous
to a bitmap object.
4.5.3 Playback
When a sound is played, it generates a Playback object. It can be used to
control the playback of the sound and to check whether it is still playing. It also
provides an implicit boolean conversion to check that the sound has started
playing successfully.
4.6 Network
The network component of the engine offers a low-level API to transmit data
across the internet, making it possible to create multiplayer games. Since the
way a game's data is synchronized depends on many factors, including the type
of the game, the way game data is stored, the connection architecture, and
many others, I decided not to implement the whole synchronization
process for now. Instead, I created socket classes for the TCP and UDP
protocols, specialized for the most common tasks. They are designed to be used
in the game loop in non-blocking mode. They support both the IPv4 and IPv6
protocols.
4.6.1 TCP sockets
The API for TCP connections is based on the network I/O of the Java
programming language. The available classes are the regular socket and the
server socket. The server socket can listen on a port and accept connections,
yielding socket objects that represent clients. A regular socket can connect
to the server.
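A hypothetical usage sketch mirroring this Java-style API follows; the actual class and method names in the engine may differ.

ServerSocket *server = ServerSocket::create(27015);
while (gameRunning) {
    // Non-blocking: returns null when no client is waiting.
    if (Socket *client = server->accept())
        clients.push_back(client);
    // ... exchange data with the connected clients in the game loop ...
}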
4.6.2 UDP sockets
For communication over the UDP protocol, I created two classes. One is
the regular UDP socket, which may send and receive data to and from any
address. To simulate a steady connection like in the TCP protocol, there is
also the mono socket, which must be bound to a single address and
communicates only with that one.
4.7 Resource management
The engine's resource manager takes care of loading, storing, and disposing
of the game's external resources. Currently, the supported resource types
are strings, textures, fonts, and sounds. Each resource is mapped to a name,
by which it can be accessed.
4.7.1 Resource definition
Before the resources can be loaded, they must be defined, so that the
manager knows what, how, and when to load. For each resource, the
following parameters must be defined:
• the type of the resource,
• its name,
• the source of the resource's data – a file or an entry in an archive,
• parameters specific to the resource type, for example texture filtering.
The resource manager maps each resource to its string name using a trie.
As soon as resources are defined, their representation in the trie is established
and marked as not loaded.
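A hypothetical definition of a texture resource might look like this; the method name and parameters are assumptions for illustration only.

resourceManager->defineTexture(
    "tGround",                  // name used for lookup
    "data/textures/ground.png", // data source (a file in this case)
    TEXTURE_FILTER_LINEAR);     // type-specific parameter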
4.7.2 Resource access
When a resource name is passed to the manager, it returns a resource
access object. This object can be saved to avoid having to look up the resource in
the trie every time. There is a different access class for each resource type.
Most of them offer an implicit conversion to the data type they represent.
This is an example of how a resource can be retrieved and used:

TextureResource ground = resourceManager->getTexture("tGround");
gc->bindTexture(0, ground);
4.7.3 Placeholders
Some resource types may have a different resource set as their placeholder.
If such a resource is requested but not loaded yet, the placeholder resource
is returned instead, and the loading process starts immediately. This allows
for progressive resource loading.
4.7.4 Loading process
The loading process can be started manually, or automatically when a
resource is needed. The process consists of several steps, depending on the
type of the resource. Generally, it starts with loading the contents of a file
into memory. This action takes place in a dedicated disk access thread.
The resource manager then allows the resource object to schedule additional
tasks necessary to convert the data into the desired objects.
4.7.4.1 Asynchronous processing
A task may be submitted to the manager's asynchronous thread pool, where
isolated processes, such as decoding the file data from a compressed format,
may take place. The engine has several simple functions that allow decoding
images, sounds, and fonts with the use of third-party libraries (DevIL [22],
LibVorbis [23], LibFLAC [5]).
4.7.4.2 Threadbound processing
An OpenGL procedure, such as uploading texture data into video
memory, can only be called from the thread in which the OpenGL context is
current [10]. For such tasks, there is the threadbound processing queue. To make
this work, the resource manager must be allowed to resolve these tasks in
the game loop.
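A hypothetical sketch of this cooperation with the game loop follows; the method name is an assumption.

while (running) {
    // Execute pending OpenGL-bound tasks (e.g. texture uploads) on this
    // thread, where the context is current.
    resourceManager->resolveThreadboundTasks();
    updateGame();
    renderFrame();
}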
Chapter 5
Results
During the development of the engine, I tested its individual parts, and in
the end, I created a simple demonstration game to test its viability as a
whole. Since the purpose of a game engine is to provide audiovisual output
for the player, most tests were based on simple observation.
5.1 Graphics
The graphical capabilities of the engine have been tested mainly visually.
The first test of the OpenGL abstraction layer can be seen in Figure 5.1.
Although it may seem simple, it exercises almost every implemented
OpenGL feature, including shaders, textures, blending, the stencil buffer,
instanced drawing, frame buffer objects, and transform feedback.
Figure 5.1: A simple, yet exhaustive graphics test.
5.2 Particle system
Both implementations of the particle system (described in Sections 4.2.5.1
and 4.2.5.2) have been tested on a set of different particle effects. One can
be seen in Figure 5.2.
Figure 5.2: An example particle effect.
I have measured the performance of this particular effect. The frame
rates for various particle counts can be seen in Table 5.1.
                   Frames per second
Particle count     Implementation A    Implementation B
20                 2,421.0             2,372.8
1,024              2,261.1             2,204.5
16,384             1,081.42            1,035.15
65,536             393.04              372.86
262,144            96.877              91.498
1 million          13.647              12.916
Table 5.1: Particle system performance test results.
The measurement has been performed by counting the number of rendered
frames over a longer time period. For the test, I used an Nvidia GTX 460
GPU at a resolution of 1280×720 pixels with 4× MSAA.
Implementation A uses transform feedback, and B is the legacy implementation,
which stores the particles inside a texture. Though I expected A to
be significantly faster, the difference in their performance is surprisingly
minimal.
5.3 Physics
To test the collision detection algorithms, I created an interactive utility
in which the results can be checked visually. If an algorithm is wrong, its
results will almost always differ wildly from the correct values, making the
error immediately visible in the graphical representation. The main weakness
of this test is identifying errors under boundary conditions. The interface of
this utility can be seen in Figure 5.3.
Figure 5.3: An interface for testing physics-related algorithms.
The utility is also capable of benchmarking the time it takes to perform
the operations. When implementing them, I experimented with multiple
versions of each function and used the fastest one. The final results can
be seen in Table 5.2. The values have been measured by repeating each
operation sequentially over a million times for different inputs on an Intel i5
CPU at 3.4 GHz.
The first case is trivial and can be used as a reference. The last case
is a stress test with a pair of polygons with very large numbers of sides. The
remaining polygons have from 5 to 8 sides each.

                             Time per operation [ns]
Case                         Intersection    Distance    Collision
point – point                8.06            17.57       5.85
point – line                 8.16            22.92       22.51
vertex – edge                8.05            19.23       24.12
polygon – point              22.76           90.23       63.85
polygon – circle             129.88          209.13      357.39
polygon – polygon            157.88          242.19      177.39
108-polygon – 133-polygon    1,476.8         3,945.6     2,468.1
Table 5.2: Physics functions performance test results.

As you can see, even a collision test between two extremely large polygons
takes only a few microseconds, which means that it could be performed over
6,000 times per frame at 60 FPS.
5.4 Text rendering
A demonstration of the text rendering shaders can be seen in Figure 5.4. In
this picture, the height of a glyph's raster representation is 64 pixels.
5.5 Demo game
The final part of my work was to test the viability of the engine in practice
by using it to create a demonstration game. The game I created is more of
a technology demo, since it doesn't offer any sophisticated game logic, but it
showcases all the functions vital for a game, along with many miscellaneous
features of the engine. It is a parallax 2D platformer that takes place on an
isolated landmass. It is shown in Figure 5.5.
Figure 5.5: A screenshot of the demo game.
5.5.1 Features
• The game utilizes the engine's automated resource management system.
All its external files are loaded and managed automatically.
• An FPS indicator in the foreground demonstrates the capability of
rendering dynamically changing text meshes and its possible use in a
game's HUD.
• Physical simulation – the player character is able to move smoothly
across an uneven surface and shoot projectiles that bounce off obstacles.
• A 2D lighting effect is used to dynamically illuminate the scene.
• Particle effects simulate the elemental forces – fire and rain.
• A displacement field effect distorts the scene's pixels and showcases
the possibilities of the engine's shader-based post-processing.
• The sound system is represented by an atmospheric rain sound playing
in the background.
Conclusion
I have successfully implemented a 2D game engine and demonstrated its
viability on an example game. The conducted tests have verified that all of
the engine's components function as intended.
Unfortunately, because implementing some low-level components of the
engine, especially the platform abstraction layer and the collision detection
functions, took a very large amount of time, I was not able to focus on
the engine's higher-level parts as much as I had hoped. Despite its limited
capabilities in this regard, it is already clear that the engine significantly
simplifies game development in comparison to using plain OpenGL.
I intend to continue working on the engine in the future, improving it
and expanding its functionality.
Future work
Thanks to the engine's architecture, it is not a problem to add new functions.
In the near future, I would like to implement:
• form elements (widgets) for the GUI, which could be used to construct
complex menus,
• advanced predefined shaders, possibly supporting skeletal animation,
• integration with a more advanced physics library, such as Box2D [3].
In the more distant future, it would perhaps be possible to expand the
engine to three dimensions.
There is also the possibility of porting the engine to different platforms.
Since 2D games tend to be less performance-intensive, they are ideal for
mobile platforms.
Bibliography
[1] Lars Bishop, Dave Eberly, Turner Whitted, Mark Finch, and Michael Shantz. Designing a PC game engine. Computer Graphics and Applications, IEEE, 18(1):46–53, 1998.
[2] David M. Bourg. Physics for Game Developers. O'Reilly Media, Inc., 2002.
[3] Erin Catto. Box2D. http://box2d.org/about/, May 2013.
[4] Thomas Cheah. A practical implementation of a 3D game engine. In Computer Graphics, Imaging and Vision: New Trends, pages 351–358. IEEE, 2005.
[5] Josh Coalson. FLAC – free lossless audio codec. http://flac.sourceforge.net/, May 2013.
[6] Creative Labs. OpenAL. http://connect.creativelabs.com/openal/, May 2013.
[7] Digia. Qt project. http://qt-project.org/, May 2013.
[8] Randima Fernando and Mark J. Kilgard. The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics. Addison-Wesley Longman Publishing Co., Inc., 2003.
[9] Chris Green. Improved alpha-tested magnification for vector textures and special effects. In ACM SIGGRAPH 2007 Courses, pages 9–18. ACM, 2007.
[10] Khronos Group. OpenGL API documentation. http://www.opengl.org/documentation/, May 2013.
[11] Khronos Group. OpenGL overview. http://www.opengl.org/about/, May 2013.
[12] Khronos Group, independent contributors. OpenGL Wiki. http://www.opengl.org/wiki/, May 2013.
[13] Kevin Krewell. What's the difference between a CPU and a GPU? http://blogs.nvidia.com/2009/12/whats-the-difference-between-a-cpu-and-a-gpu/, December 2009.
[14] Eric Lengyel. Game Engine Gems 2. A K Peters Ltd, 2011.
[15] Tristam MacDonald. Spatial hashing. http://www.gamedev.net/page/resources/_/technical/game-programming/spatial-hashing-r2697/, September 2009.
[16] Microsoft Corporation. About messages and message queues. http://msdn.microsoft.com/en-us/library/ms644927/, May 2013.
[17] Microsoft Corporation. Getting started with Direct3D (Windows). http://msdn.microsoft.com/en-us/library/hh769064/, May 2013.
[18] Petr Olšák. Úvod do algebry, zejména lineární. FEL ČVUT v Praze, 2007.
[19] Playdead. LIMBO. http://limbogame.org/, May 2013.
[20] Godfried T. Toussaint. Solving geometric problems with the rotating calipers. In Proc. IEEE MELECON, volume 83, page A10, 1983.
[21] David Turner, Robert Wilhelm, and Werner Lemberg. The FreeType project. http://www.freetype.org/, May 2013.
[22] Denton Woods. DevIL – a full featured cross-platform image library. http://openil.sourceforge.net/about.php, May 2013.
[23] Xiph.Org. Vorbis audio compression. http://xiph.org/vorbis/, May 2013.
Appendix A
List of abbreviations
2D two-dimensional
3D three-dimensional
API application programming interface
CPU central processing unit
FBO frame buffer object
FPS frames per second
GLSL OpenGL Shading Language
GPU graphics processing unit
GUI graphical user interface
HUD heads-up display
I/O input / output
IBO index buffer object
MSAA multisample anti-aliasing
OS operating system
PC personal computer
PCM pulse-code modulation
VBO vertex buffer object
Appendix B
User manual
This manual should help you start creating games with the Artery Engine.
If you want to use it with Microsoft Visual C++ 2012, you may skip to
Section B.2.
B.1 Compilation
If you have Microsoft Visual Studio 2012, open the solution file,
/project/Artery1/Artery1.sln, and build the solution.
Otherwise, you have to set up the project manually. Make sure to link
the following libraries:
• Ws2_32
• opengl32
• OpenAL32
• DevIL
• ogg
• vorbis
• vorbisfile
• flac
Also make sure to add /project/Artery1/include to the include directories.
Then, compile all .cpp files inside /project/Artery1/src, and link
them into a static or dynamic library.
B.2 Importing the library
To import the engine into your application, all you need to do is add
/library/include to the include directories and link artery.lib from
either /library/debug or /library/release to the program.
B.3 Usage
To get started with the Artery Engine, try and examine the example program
/library/example.cpp, which shows how the engine may be used to create a
graphical application. For more information, please refer to the reference
manual found in /doxygen.
Appendix C
Contents of the enclosed CD
thesis.pdf ................................... thesis in PDF format
readme.txt ................................... description of contents
demo ......................................... demo game
doxygen ...................................... Doxygen documentation
    reference.pdf ............................ reference manual
examples ..................................... example programs
latex ........................................ source files of the document
library ...................................... engine compiled as a library
    readme.txt ............................... usage instructions
    example.cpp .............................. an example source code
    debug .................................... debug library
    include .................................. engine's include files
    release .................................. release library
project ...................................... solution in Visual Studio 2012
    readme.txt ............................... build instructions
    Artery1 .................................. engine project
        include .............................. necessary include files
        lib .................................. used libraries
        src .................................. source code
    DemoGame ................................. demo game project
    Particles ................................ example particle effect project
utilities .................................... related utilities
    fontgen .................................. font file converter