Daniel Margreiter
http://www.danielmargreiter.com
Shader Languages
Shaders used to be programmed in assembly, but nowadays the programs are written in different high-level languages. The best-known languages are GLSL (for OpenGL) and HLSL (for DirectX). There is one more interesting language named Cg (C for graphics). Cg was developed by NVIDIA in cooperation with Microsoft. The Cg compiler can target both OpenGL and DirectX, and Cg can be used for Vertex Shaders as well as Pixel Shaders.
Why a pipeline?
A system is divided into different pipeline stages to get more performance out of it. Be aware that a pipeline is only as fast as the slowest link in the chain: stage one cannot accept new work until the stage after it has taken its current output. In general, a pipeline with n stages increases throughput up to n times. I like to compare it with car assembly. Let's say every stage of a car production line takes one minute to get its part of the work done, except one stage, which takes two minutes to mount the car door. Every other stage then has to wait at least an extra minute before it can pass its work forward, so a car comes off the production line every two minutes. OK, I hope everything is clear so far. The next chapter is about the pipeline on our graphics cards.
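The arithmetic above can be sketched in a few lines of Python (the stage times are the hypothetical car-assembly numbers from the text):

```python
# Sketch: throughput of a pipeline is limited by its slowest stage.
# Stage times in minutes; one slow stage mounts the car door.
stage_times = [1, 1, 1, 2, 1, 1, 1, 1, 1, 1]

# Once the pipeline is full, a finished car leaves the line once per
# "clock", and the clock has to be as long as the slowest stage.
cycle_time = max(stage_times)
print(cycle_time)                      # a car comes out every 2 minutes

# Without pipelining, one car would take the sum of all stages:
sequential_time = sum(stage_times)
print(sequential_time)                 # 11 minutes per car

# Speedup over a long production run approaches sequential / cycle:
print(sequential_time / cycle_time)    # 5.5x here, not the ideal 10x
```

This also shows why the ideal n-times speedup is only reached when all stages are equally fast.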
1. Application Stage The Application Stage performs calculations and mostly runs on the CPU. These calculations cause a re-rendering of the objects. Good examples of things running on the CPU are physics, AI, steering behaviours, and collision detection.
2. The Geometry Stage The Geometry Stage is mostly about vertices. It can be divided into several sub-stages as follows: Model & View Transformation -> Vertex Shading -> Projection -> Clipping -> Screen Mapping
Model & View Transformation In this step, model space is converted into world space, and afterwards world space is converted into view space. Model space is the coordinate system of the model itself. Because we need the positions in the 3D world, we transform them with the world matrix. The world coordinates are then transformed into view space, which has its origin at the camera. Everything is calculated with matrices; if you are not familiar with matrices, read the wiki article or wait until I have written one.
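As a rough illustration of these transformations, here is a minimal Python sketch using plain 4x4 translation matrices (a real engine would also include rotation and scaling, and would use a proper math library; all values are illustrative):

```python
# Minimal sketch of the Model & View transformation with 4x4 matrices.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major, list of rows) with a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

# Model space: a vertex of the mesh in its own coordinate system.
vertex_model = [1.0, 0.0, 0.0, 1.0]      # w = 1 for positions

# World matrix: place the model at (10, 0, 5) in the 3D world.
world = translation(10.0, 0.0, 5.0)
vertex_world = mat_vec(world, vertex_model)
print(vertex_world)   # [11.0, 0.0, 5.0, 1.0]

# View matrix: the inverse of the camera's placement. If the camera
# sits at (0, 0, 5), the view matrix translates by (0, 0, -5), so the
# origin of view space ends up at the camera.
view = translation(0.0, 0.0, -5.0)
vertex_view = mat_vec(view, vertex_world)
print(vertex_view)    # [11.0, 0.0, 0.0, 1.0]
```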
Vertex Shading (programmable) Here your self-programmed Vertex Shaders run. A Vertex Shader can affect every single vertex of a mesh. A good example of where a vertex shader is used is cloth animation on characters in games.
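Conceptually, a vertex shader is just a small function that runs once for every vertex of the mesh. A Python sketch of that idea, with a hypothetical wave displacement in the spirit of cloth animation (the function and values are illustrative, not a real shader):

```python
import math

# The same small function runs once per vertex, just like a GPU runs
# the vertex shader for every vertex of the mesh.
def vertex_shader(position, time):
    x, y, z = position
    # displace y by a wave travelling along x (illustrative effect)
    return (x, y + 0.1 * math.sin(x + time), z)

mesh = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
animated = [vertex_shader(p, time=0.0) for p in mesh]
print(animated)
```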
Hull Shading and Domain Shading Hull and Domain Shaders are used for tessellation. Tessellation can be used to create a high-poly mesh out of a low-poly one.
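The low-poly-to-high-poly idea can be illustrated with a simple midpoint subdivision in Python. Real hull/domain shaders work with patches and tessellation factors on the GPU; this only shows the principle:

```python
# Refine a coarse triangle into smaller ones by splitting each edge at
# its midpoint: one triangle becomes four.

def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2 for i in range(3))

def subdivide(tri):
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

triangle = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
level1 = subdivide(triangle)
print(len(level1))                      # 4 triangles from 1

# Each further level multiplies the triangle count by 4 again:
level2 = [t for tri in level1 for t in subdivide(tri)]
print(len(level2))                      # 16
```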
Geometry Shading (programmable) In this stage your self-programmed Geometry Shaders run. Here new primitives can be added to your meshes. As mentioned before, primitives are points, lines or triangles. Good examples for a Geometry Shader are expanding point sprites into billboard quads or extruding shadow volumes.
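A sketch of that idea in Python: one input primitive goes in, several come out. Here a single point is expanded into a quad made of two triangles, as done for point sprites (the function name and sizes are illustrative; a real geometry shader runs on the GPU):

```python
# Emit new primitives from an input primitive: expand one point into a
# screen-aligned quad built from two triangles.

def expand_point_to_quad(p, half_size):
    x, y, z = p
    s = half_size
    # four corners of the quad around the point
    bl = (x - s, y - s, z)
    br = (x + s, y - s, z)
    tl = (x - s, y + s, z)
    tr = (x + s, y + s, z)
    # two triangles covering the quad
    return [(bl, br, tr), (bl, tr, tl)]

triangles = expand_point_to_quad((0.0, 0.0, 0.0), 0.5)
print(len(triangles))   # 1 point in, 2 triangles out
```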
Stream Out (programmable) Stream Out is used to save performance and reduce overhead, because you can write the processed vertices out to a buffer and bypass the following stages. The data then does not go through rasterization or pixel operations at all.
Projection (configurable) There are different projections which can be used. You can choose between a perspective view and a parallel view (e.g. orthographic). In a perspective view, objects which are farther away appear smaller and objects which are near to the camera look bigger. In parallel views the objects are the same size everywhere. All the data from the steps before is mapped into a unit cube. A unit cube is a coordinate space going from -1 to +1 on every axis.
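The core of a perspective projection is the division by w after the matrix multiply, which is what maps points into the unit cube and makes distant objects smaller. A minimal sketch with hypothetical values, using w = z as the simplest possible perspective rather than a full API projection matrix:

```python
# After the projection matrix, a vertex (x, y, z, w) is divided by w.
def perspective_divide(v):
    x, y, z, w = v
    return (x / w, y / w, z / w)

# Two points with the same x offset but different depth; with w = z,
# the farther point lands closer to the centre of the unit cube.
near_point = (1.0, 0.0, 2.0, 2.0)   # depth 2
far_point  = (1.0, 0.0, 8.0, 8.0)   # depth 8

print(perspective_divide(near_point))  # (0.5, 0.0, 1.0)
print(perspective_divide(far_point))   # (0.125, 0.0, 1.0)
```

The same x coordinate shrinks with distance, which is exactly the "farther away appears smaller" effect; in a parallel projection w stays 1 and nothing shrinks.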
Clipping (configurable) In the clipping stage every vertex which lies outside of the unit cube is cut off, and new vertices are created on the boundary of the unit cube. These vertices are then connected to form primitives again.
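Creating new vertices on the boundary is exactly what the classic Sutherland-Hodgman algorithm does. A Python sketch that clips a polygon against just one face of the unit cube (x = +1); a full clipper repeats this for all six faces:

```python
# Clip a 2D polygon against the plane x = x_max: vertices outside are
# dropped, and a new vertex is created wherever an edge crosses the
# boundary (Sutherland-Hodgman style).

def clip_polygon_x_max(polygon, x_max=1.0):
    out = []
    n = len(polygon)
    for i in range(n):
        cur, nxt = polygon[i], polygon[(i + 1) % n]
        cur_in, nxt_in = cur[0] <= x_max, nxt[0] <= x_max
        if cur_in:
            out.append(cur)
        if cur_in != nxt_in:
            # edge crosses the plane: create a new vertex on the boundary
            t = (x_max - cur[0]) / (nxt[0] - cur[0])
            out.append(tuple(c + t * (m - c) for c, m in zip(cur, nxt)))
    return out

# A triangle whose tip pokes out of the cube on the +x side:
tri = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)]
clipped = clip_polygon_x_max(tri)
print(clipped)   # the tip is cut off, two new boundary vertices appear
```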
Screen Mapping (static) Screen mapping maps the picture which we have got out of the previous stages to the output window. In DirectX the origin [0,0] is the upper-left corner of the window; the same holds for XNA, since it is based on DirectX 9. In OpenGL the origin is the bottom-left corner.
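The different conventions amount to a small change in the screen-mapping formula. A sketch that maps unit-cube x/y coordinates in [-1, +1] to window pixels, ignoring viewport offsets and half-pixel details:

```python
def ndc_to_window_topleft(x, y, width, height):
    """Direct3D style: origin [0,0] at the upper-left corner."""
    sx = (x + 1) / 2 * width
    sy = (1 - y) / 2 * height   # +y points up in the cube, rows grow downward
    return (sx, sy)

def ndc_to_window_bottomleft(x, y, width, height):
    """OpenGL style: origin [0,0] at the bottom-left corner."""
    sx = (x + 1) / 2 * width
    sy = (y + 1) / 2 * height
    return (sx, sy)

# The top of the screen (unit-cube y = +1) in a 640x480 window:
print(ndc_to_window_topleft(0.0, 1.0, 640, 480))     # (320.0, 0.0)
print(ndc_to_window_bottomleft(0.0, 1.0, 640, 480))  # (320.0, 480.0)
```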
3. Rasterizer Stage The Rasterizer Stage, too, can be split into several sub-stages as follows: Triangle Setup -> Triangle Traversal -> Pixel Shader -> Merger
Triangle Setup (static) The data which comes from the previous stages is set up per triangle: the constants needed to traverse each triangle surface (e.g. edge equations) are computed here.
Triangle Traversal / Scan Conversion (static) Every pixel whose center is covered by a triangle is assigned to that triangle.
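This coverage test can be sketched with edge functions (signed areas). The simplified rasterizer below keeps every pixel whose center lies inside a counter-clockwise triangle; it ignores the tie-breaking fill rules real GPUs use for pixels exactly on shared edges:

```python
def edge(a, b, p):
    """> 0 if p is to the left of the directed edge a->b (signed area)."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def covered_pixels(tri, width, height):
    a, b, c = tri
    pixels = []
    for py in range(height):
        for px in range(width):
            center = (px + 0.5, py + 0.5)
            # counter-clockwise triangle: inside if all three edges agree
            if (edge(a, b, center) >= 0 and
                edge(b, c, center) >= 0 and
                edge(c, a, center) >= 0):
                pixels.append((px, py))
    return pixels

tri = ((0.0, 0.0), (4.0, 0.0), (0.0, 4.0))   # counter-clockwise
print(covered_pixels(tri, 4, 4))
```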
Pixel Shader (programmable) Here your self-programmed Pixel Shaders run. A Pixel Shader can affect single pixels. Examples of pixel shader effects are Bloom or Depth of Field (DoF).
Merger (configurable) In the last stage before the display you can combine/merge different things. You can merge different buffers (e.g. Depth Buffer, Frame Buffer).
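Two typical merger operations, a depth test against the depth buffer and alpha blending with the frame buffer, can be sketched like this (buffer values and the function name are illustrative):

```python
# Merge one incoming fragment with what is already stored in the buffers.
def merge(src_color, src_alpha, src_depth, dst_color, dst_depth):
    # depth test: keep the fragment only if it is closer than the value
    # already stored in the depth buffer
    if src_depth >= dst_depth:
        return dst_color, dst_depth
    # alpha blending: src * alpha + dst * (1 - alpha)
    blended = tuple(s * src_alpha + d * (1 - src_alpha)
                    for s, d in zip(src_color, dst_color))
    return blended, src_depth

# A half-transparent red fragment in front of a blue background:
color, depth = merge((1.0, 0.0, 0.0), 0.5, 0.3, (0.0, 0.0, 1.0), 1.0)
print(color, depth)   # (0.5, 0.0, 0.5) 0.3
```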
indiedev.de
And here is my closing paragraph for this article. If no stage is programmable or configurable, the pipeline is called a fixed-function pipeline. There are still consoles today with a fixed-function pipeline; the Wii, for example, has one.