
Srinivas Institute of Technology, Mangalore Page 1 of 38

Graphics Programming

A programming-oriented approach is used. A minimal application programmer's interface (API) is enough to program many interesting two- and three-dimensional problems and to become familiar with the basic graphics concepts. 2-D graphics is regarded as a special case of 3-D graphics; hence 2-D code will execute without modification on a 3-D system. A simple but informative problem, the Sierpinski gasket, is used. With the material presented here, 2-D programs that do not require user interaction can be written. The chapter concludes with an example of a 3-D application.

The Sierpinski Gasket Problem

The Sierpinski gasket is an interesting shape with a long history that is of interest in areas such as fractal geometry. It can be defined recursively and randomly; in the limit, however, it has properties that are not at all random. Consider three vertices in the plane. Assume that their locations, as specified in some convenient coordinate system, are (x1, y1), (x2, y2), and (x3, y3). The construction proceeds as follows:

1. Pick an initial point at random inside the triangle.
2. Select one of the three vertices at random.
3. Find the point halfway between the initial point and the randomly selected vertex.
4. Display this new point by putting some sort of marker, such as a small circle, at its location.
5. Replace the initial point with this new point.
6. Return to step 2.

Thus, each time a point is generated, it is displayed on the output device. In the figure, p0 is the initial point, and p1 and p2 are the first two points generated by the algorithm. A possible form for the graphics program might be this:

main()
{
    initialize_the_system();
    for (some_number_of_points)
    {
        pt = generate_a_point();
        display_the_point(pt);
    }
    cleanup();
}
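The halving step at the heart of this construction can be sketched in plain C, independent of any graphics API. The triangle vertices here are an arbitrary choice matching the 500 x 500 square used later:

```c
#include <stdlib.h>

/* Triangle vertices (an arbitrary choice for illustration). */
static const double tx[3] = {0.0, 250.0, 500.0};
static const double ty[3] = {0.0, 500.0, 0.0};

/* One step of the construction: move the current point halfway
 * toward a randomly selected vertex. */
void gasket_step(double *px, double *py)
{
    int j = rand() % 3;          /* pick one of the three vertices */
    *px = (*px + tx[j]) / 2.0;   /* halfway in x */
    *py = (*py + ty[j]) / 2.0;   /* halfway in y */
}
```

Because each new point is a midpoint between a point inside the triangle and one of its vertices, the generated points never leave the triangle.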
Dept Of CSE


Although the final OpenGL program will have a slightly different organization, it will be almost that simple. The program is developed in stages, beginning with generating and displaying points. The basic questions that arise are: How should points in space be represented? Should a 2-D, 3-D, or other representation be used?

The Pen-Plotter Model

Most early graphics systems were 2-D systems. The conceptual model that they used is now referred to as the pen-plotter model, referencing the output device that was available on these systems. A pen plotter produces images by moving a pen held by a gantry, a structure that can move the pen in two orthogonal directions across the paper. The plotter can raise and lower the pen as required to create the desired image. Pen plotters are still in use; they are well suited for drawing large diagrams, such as blueprints. The process of creating an image with a pen plotter is similar to the process of drawing on a pad of paper. The user works on a 2-D surface of some size and moves a pen around on this surface, leaving an image on the paper. Such a graphics system can be described with two drawing functions:

moveto(x, y);   /* moves the pen to the location (x, y) on the paper without leaving a mark */
lineto(x, y);   /* moves the pen to (x, y), drawing a line from the old to the new location */

Example 1 (fragment that draws a square):

moveto(0, 0);
lineto(1, 0);
lineto(1, 1);
lineto(0, 1);
lineto(0, 0);

Example 2 (the same square, with added code to complete the projection of a cube):

moveto(0, 0);
lineto(1, 0);
lineto(1, 1);
lineto(0, 1);
lineto(0, 0);
moveto(0, 1);        /* added code */
lineto(0.5, 1.866);
lineto(1.5, 1.866);
lineto(1.5, 0.866);

lineto(1, 0);
moveto(1, 1);
lineto(1.5, 1.866);

Drawback of the pen-plotter model: it does not extend well to three-dimensional graphics systems. For example, to produce the image of a 3-D object on the 2-D pad of the pen-plotter model, the positions of the 2-D points corresponding to points on the 3-D object must be specified. These two-dimensional points are the projections of points in three-dimensional space. The mathematical process of determining projections is an application of trigonometry. However, an API allows users to work directly in the domain of their problems and to use the computer to carry out the details of the projection process automatically, without any trigonometric calculations within the application program. For 2-D applications, such as the Sierpinski gasket, the discussion therefore starts with a three-dimensional world. Mathematically, a 2-D plane or a simple 2-D curved surface is viewed as a subspace of a three-dimensional space. Hence, statements, both practical and abstract, about the bigger 3-D world will hold for the simpler 2-D one. A point in the plane z = 0 can be represented as p = (x, y, 0) in 3-D, or as p = (x, y) in the 2-D subspace corresponding to the plane. OpenGL, like most 3-D graphics systems, allows either representation, with the underlying internal representation being the same regardless of which form the user chooses. Hence a 3-D point is represented by a triplet p = (x, y, z),

regardless of the coordinate system in which p is represented.

Vertex (rather than point): A vertex is a location in space (in computer graphics, 2-D, 3-D, and 4-D spaces are used). Vertices are used to define the atomic geometric objects that are recognized by the graphics system. The simplest geometric object is a point in space, which is specified by a single vertex. Two vertices define a line segment; three vertices can determine either a triangle or a circle; four vertices determine a quadrilateral; and so on.
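The pen-plotter model described earlier can be sketched in a few lines of C. Instead of driving a plotter, this version records each segment that lineto would draw; everything except the moveto and lineto names is invented for illustration:

```c
#include <stddef.h>

/* A toy pen-plotter: lineto records the segment it would draw. */
typedef struct { double x0, y0, x1, y1; } Segment;

static double pen_x, pen_y;      /* current pen position */
static Segment trace[64];        /* recorded segments */
static size_t n_segments;

void moveto(double x, double y)  /* move without drawing */
{
    pen_x = x;
    pen_y = y;
}

void lineto(double x, double y)  /* draw from the old position to (x, y) */
{
    if (n_segments < 64) {
        trace[n_segments].x0 = pen_x;
        trace[n_segments].y0 = pen_y;
        trace[n_segments].x1 = x;
        trace[n_segments].y1 = y;
        n_segments++;
    }
    pen_x = x;
    pen_y = y;
}
```

Running the square fragment from Example 1 against this sketch produces exactly four segments, ending back at the origin.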


Multiple Forms of Functions: OpenGL has multiple forms for many functions. The variety of forms allows the user to select the one best suited for the problem. The general format of the vertex function is glVertex*, where the * can be interpreted as either two or three characters of the form nt or ntv: n signifies the number of dimensions (2, 3, or 4); t denotes the data type, such as integer (i), float (f), or double (d); and v, if present, indicates that the variables are specified through a pointer to an array rather than through an argument list. Regardless of which form a user chooses, the underlying representation is the same.

Basic OpenGL types, such as GLfloat and GLint, are used rather than the C types float and int. These types are defined in the OpenGL header files, usually in the obvious way, for example:

#define GLfloat float

However, use of the OpenGL types allows additional flexibility for implementations where, for example, floats might be changed to doubles without altering existing application programs. Returning to the vertex function: to work in 2-D with integers, the form

glVertex2i(GLint xi, GLint yi)

is appropriate, and

glVertex3f(GLfloat x, GLfloat y, GLfloat z)

specifies a position in 3-D space using floating-point numbers. If an array is used to store the information for a 3-D vertex,

GLfloat vertex[3];

then glVertex3fv(vertex) can be used. Vertices can define a variety of geometric objects; different numbers of vertices are required depending on the object. Any number of vertices can be grouped using the functions glBegin and glEnd. The argument of glBegin specifies the geometric type that the vertices define. Hence, a line segment can be specified by

glBegin(GL_LINES);
    glVertex2f(x1, y1);
    glVertex2f(x2, y2);
glEnd();

The same data can define a pair of points by using the form

glBegin(GL_POINTS);

    glVertex2f(x1, y1);
    glVertex2f(x2, y2);
glEnd();

Heart of the Sierpinski Gasket Program: Suppose that all points are to be generated within a 500 x 500 square whose lower-left corner is at (0, 0) (a convenient, but easily altered, choice). First, how should geometric data be represented in the program? Since we are working in the plane z = 0, we could use either the form glVertex3f(x, y, 0.0) or glVertex2f(x, y). We could also define a new data type,

typedef GLfloat point2[2];

a two-element array for 2-D points, and use something like

point2 p;
glVertex2fv(p);

Returning to the Sierpinski gasket, we create a function called display that generates 5000 points each time it is called. An array of triangle vertices, triangle[3], is defined in display:

void display(void)
{
    /* an arbitrary triangle; point2 is a two-element array of GLfloat */
    point2 triangle[3] = {{0.0, 0.0}, {250.0, 500.0}, {500.0, 0.0}};
    static point2 p = {75.0, 50.0}; /* or set to any desired initial point */
    int j, k;
    int rand(); /* standard random-number generator */
    for (k = 0; k < 5000; k++) /* generate 5000 points */
    {
        j = rand() % 3; /* pick a random vertex from 0, 1, 2 */
        p[0] = (p[0] + triangle[j][0]) / 2; /* compute new point */
        p[1] = (p[1] + triangle[j][1]) / 2;
        glBegin(GL_POINTS); /* display new point */
            glVertex2fv(p);
        glEnd();
    }
    glFlush();
}

The function rand is a standard random-number generator that produces a new random integer each time it is called. The modulus operator reduces these random integers to the three integers 0, 1, and 2. The call to glFlush ensures that points are rendered to the screen as soon as possible. If

it is omitted, the program works correctly, but in a busy or networked environment there may be a noticeable delay before the points appear. A complete program has not yet been written. The figure below shows the expected output. Issues left open here are:

1. The color of the drawing.
2. The position of the image on the screen.
3. The size of the image.
4. The window (area on the screen) for the image.
5. How much of the infinite pad will appear on the screen?
6. How long will the image remain on the screen?

The basic code that answers these questions and controls the placement and appearance of renderings will not change substantially across programs.

Coordinate Systems: How are the values of x, y, and z in the specification of vertices to be interpreted; that is, in what units are they? Are they in feet, meters, or centimeters? Where is the origin? Originally, graphics systems required the user to specify all information, such as vertex locations, directly in units of the display device. One drawback was that a high-level application had to describe its points in terms of screen locations, in pixels or in centimeters from a corner of the display. Graphics systems now allow users to work in any desired coordinate system. The advent of device-independent graphics freed application programmers from worrying about the details of input and output devices. The user's coordinate system became known as the world coordinate system (also the problem coordinate system, application model, or object coordinate system). Units on the display were first called device coordinates or physical device coordinates. For raster devices, such as most CRT displays, the terms raster coordinates or screen coordinates are used. Raster coordinates are always expressed in some integer type, because the center of any pixel in the frame buffer is located on a fixed grid, or, equivalently, because pixels are inherently discrete and can be addressed using integers.

At some point, the values in world coordinates must be mapped into device coordinates, as shown in the figure below. The graphics system, rather than the user, is responsible for this task, and the mapping is performed automatically as part of the rendering process. To define this mapping, the user needs to specify only a few parameters, such as the area of the world to be seen and the size of the display.

The OpenGL API

The OpenGL API is used to gain control over the appearance of objects on the display, to control the flow of the program, and to interact with the window system (Chapter 3). OpenGL's structure is similar to that of modern APIs such as Java3D and DirectX; hence any effort put into learning OpenGL carries over to other software systems, and OpenGL is easy to learn.

Graphics Functions

A graphics package can be viewed as a black box, a term that engineers use to denote a system whose properties are described only by its inputs and outputs, with nothing known about its internal workings. Hence a graphics system can be viewed as a box whose inputs are function calls from a user program, measurements from input devices such as the mouse and keyboard, and possibly other input such as messages from the operating system; the outputs are primarily the graphics sent to the output devices.
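The world-to-device mapping performed by the system can be sketched as a simple linear map from a world window to a viewport measured in pixels. This is only a sketch; all parameter names are illustrative, and the y-axis is flipped because raster coordinates usually grow downward:

```c
/* Map a world-coordinate point into device (raster) coordinates:
 * world window [wx_min, wx_max] x [wy_min, wy_max] onto a viewport
 * of w x h pixels, rounding to the nearest integer pixel. */
typedef struct { int px, py; } Pixel;

Pixel world_to_device(double x, double y,
                      double wx_min, double wx_max,
                      double wy_min, double wy_max,
                      int w, int h)
{
    Pixel p;
    p.px = (int)((x - wx_min) / (wx_max - wx_min) * (w - 1) + 0.5);
    /* raster y usually grows downward, so flip the axis */
    p.py = (int)((wy_max - y) / (wy_max - wy_min) * (h - 1) + 0.5);
    return p;
}
```

With the Sierpinski square [0, 500] x [0, 500] mapped to a 500 x 500 viewport, the world origin lands at the bottom-left pixel (0, 499).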

For simplicity, the inputs are taken to be function calls and the outputs to be primitives displayed on a CRT. An API is defined through the functions in its library. A good API may contain hundreds of functions, so it is helpful to divide the functions into seven groups by their functionality:

1. The primitive functions define the low-level objects, or atomic entities, that the system can display. Depending on the API, the primitives can include points, line segments, polygons, pixels, text, and various types of curves and surfaces.

2. The attribute functions: if primitives are the "what" of an API (the objects that can be displayed), then attributes are the "how"; that is, attributes govern the way a primitive appears on the display. Attribute functions perform operations ranging from choosing the color, to picking a pattern with which to fill the inside of a polygon, to selecting a typeface for the titles on a graph.

3. The viewing functions specify various views. To create an image, a synthetic camera must be described: its position and orientation in the world must be defined, and the equivalent of a lens must be selected. This process not only fixes the view but also clips out objects that are too close or too far away.

4. The transformation functions carry out transformations of objects, such as rotation, translation, and scaling. Providing such a set is one characteristic of a good API.

5. The input functions deal with the diverse forms of input that characterize modern graphics systems, providing for devices such as keyboards, mice, and data tablets.

6. The control functions communicate with the window system, initialize programs, and deal with any errors that take place during execution. Any real application must handle the complexities of working in a multiprocessing, multi-window, and usually networked environment.

7. The query functions provide information about the API and its implementation that can be useful in an application, e.g., camera parameters or values in the frame buffer. To allow device-independent programs, the implementation of the API takes care of differences between devices, such as how many colors are supported or the size of the display. However, in some applications, properties of the particular implementation need to be known: if the programmer knows in advance that the display device supports only two colors rather than millions, the program may choose to do things differently. A good API provides this information through a set of query functions.

The Graphics Pipeline and State Machine: The entire graphics system can be thought of as a state machine, a black box that contains a finite-state machine. This state machine has inputs that come from the application program. These inputs may change the state of the machine or cause the machine to produce a visible output. From the perspective of the API, graphics functions are of two types:

1. Those that define primitives that flow through the pipeline inside the state machine, and

2. Those that either change the state inside the machine or return state information.

In OpenGL, functions such as glVertex are of the first type, and almost all other functions are of the second type. One important consequence of this view is that in OpenGL most parameters are persistent: their values remain unchanged until explicitly changed through functions that alter the state. For example, once a color is set, it remains the current color until it is changed through a color-altering function. Another consequence is that attributes we may conceptualize as bound to objects, e.g., a red line or a blue circle, are in fact part of the state; a line will be drawn in red only if the current color state calls for drawing in red.

The OpenGL Interface: Most applications access OpenGL directly through functions in three libraries:

1. GL (Graphics Library, called OpenGL on Windows): OpenGL function names begin with the letters gl and are stored in a library usually referred to as GL.

2. Graphics Utility Library (GLU): This library uses only GL functions, but contains code for common objects, such as spheres, that users prefer not to have to write repeatedly. It is available in all OpenGL implementations. Functions in the GLU library begin with glu.

3. GL Utility Toolkit (GLUT): To interface with the window system and to get input from external devices into our programs, one more library is needed. For each major window system there is a system-specific library that provides the glue between the window system and OpenGL: for the X Window System the library is GLX, for Microsoft Windows it is wgl, and for the Macintosh it is agl. Rather than using a different library for each window system, the readily available GLUT (OpenGL Utility Toolkit) library addresses the problem of interfacing with different window systems. GLUT provides the minimum functionality that should be expected in any modern windowing system.
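The persistence of state can be modeled in a few lines of C. This is a toy sketch, not the GL API itself; all names here are invented for illustration:

```c
/* A toy model of OpenGL's state-machine behavior: the current color
 * is state, and every primitive drawn picks it up until the state
 * is explicitly changed. Not the real GL API. */
typedef struct { float r, g, b; } Color;

static Color current_color = {1.0f, 1.0f, 1.0f}; /* state: current color */

void set_color(float r, float g, float b)        /* state-changing call */
{
    current_color.r = r;
    current_color.g = g;
    current_color.b = b;
}

Color draw_point(void)  /* "primitive" call: returns the color it used */
{
    return current_color;
}
```

Two points drawn after one set_color call come out the same color; nothing binds a color to a particular point.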


The organization of the libraries for an X Window System environment is as shown:

Note that various other libraries are called from the OpenGL libraries, but the application program does not need to refer to them directly; it can use only GLUT functions and thus can be recompiled with the GLUT library for other window systems. A similar organization holds for other environments, such as Microsoft Windows. OpenGL makes heavy use of macros to increase code readability and to avoid the use of magic numbers. Thus, strings such as GL_FILL and GL_POINTS are defined in header (.h) files. In most implementations, one of the include lines

#include <GL/glut.h>

or

#include <glut.h>

is sufficient to read in glut.h, gl.h, and glu.h.

Primitives and Attributes

An API should support a small set of primitives that all hardware is expected to support. The OpenGL basic library has a small set of primitives, whereas an additional library, GLU, contains a richer set of objects derived from the basic library. OpenGL supports two classes of primitives:

1. Geometric primitives and
2. Image, or raster, primitives.


A simplified OpenGL pipeline is as shown in the figure: the OpenGL application program feeds a geometric pipeline (transform, project, clip) and a pixel pipeline (pixel operations), both of which write into the frame buffer.

Geometric primitives are specified in the problem domain and include points, line segments, polygons, and so on. These primitives pass through a geometric pipeline, where they are subject to a series of geometric operations. These operations (such as rotation and translation) determine whether a primitive is visible, where on the display it appears if it is visible, and the rasterization of the primitive into pixels in the frame buffer. Raster primitives, such as arrays of pixels, lack geometric properties and cannot be manipulated in space in the same way as geometric primitives; they pass through a separate pipeline on their way to the frame buffer. The basic OpenGL primitives are specified via points in space, or vertices. Thus, the programmer defines objects with sequences of the form

glBegin(type);
    glVertex*(...);
    glVertex*(...);
glEnd();

The value of type specifies how OpenGL interprets the vertices to define geometric objects. Other code and OpenGL function calls can occur between glBegin and glEnd; for example, attributes can be changed or calculations performed for the next vertex between glBegin and glEnd, or between two invocations of glVertex. A major conceptual difference between the basic geometric types is whether or not they have interiors.


Aside from the point type, all the other basic types are defined either in terms of vertices or by finite pieces of lines, called line segments, in contrast to lines, which are infinite in extent. Of course, a single line segment is itself specified by a pair of vertices, but the line segment is of such importance that it can be considered a basic graphical entity. Line segments can be used:

To define approximations to curves.
To connect data values for a graph.
For the edges of closed objects, such as polygons, that have interiors.

To display line segments or points, there are a few choices in OpenGL. The primitives and their type specifications include the following:

Line segments (GL_LINES): The line-segment type causes successive pairs of vertices to be interpreted as the endpoints of individual segments. Because the interpretation is done on a pairwise basis, successive segments usually are disconnected.

Polylines (GL_LINE_STRIP): This type is used if successive vertices (and line segments) are to be connected. Many curves can be approximated by a suitable polyline.

Line loops (GL_LINE_LOOP): If the polyline is to be closed, the final vertex can be located in the same place as the first, or GL_LINE_LOOP can be used, which draws a line segment from the final vertex to the first, thus creating a closed path.

Polygon Basics

Line segments and polylines can model the edges of objects, but closed objects also may have interiors, as shown in the figure below. A polygon is an object with a border that can be described by a line loop, but which also has an interior. Polygons play a special role in computer graphics because they can be displayed rapidly and they can be used to approximate arbitrary or curved surfaces. The performance of graphics systems is often measured by the number of polygons per second that can be rendered. A polygon can be rendered in several ways:

Only its edges are displayed.
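The three line types differ in how many segments n vertices produce, which can be captured in a small helper. The mode strings stand in for the GL constants; this is an illustration of the vertex-to-segment rule, not an OpenGL call:

```c
#include <string.h>

/* How many line segments n vertices produce under each line type. */
int segment_count(const char *mode, int n)
{
    if (n < 2)
        return 0;
    if (strcmp(mode, "GL_LINES") == 0)
        return n / 2;   /* disjoint pairs: (v0,v1), (v2,v3), ... */
    if (strcmp(mode, "GL_LINE_STRIP") == 0)
        return n - 1;   /* chained: v0-v1, v1-v2, ... */
    if (strcmp(mode, "GL_LINE_LOOP") == 0)
        return n;       /* the strip plus a closing segment back to v0 */
    return 0;
}
```

Six vertices thus give three disconnected segments under GL_LINES, five connected ones under GL_LINE_STRIP, and six under GL_LINE_LOOP.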

Its interior is filled with a solid color or a pattern, with the edges either displayed or not.

The outer edges of a polygon can be defined easily by an ordered list of vertices. But if the interior is not well defined, the polygon may be rendered incorrectly or may not be displayed at all. Three properties ensure that a polygon will be displayed correctly: it must be simple, convex, and flat.

Simple: A 2-D polygon is simple if no pair of its edges cross each other. Simple polygons have well-defined interiors. Although the locations of the vertices determine whether or not a polygon is simple, the cost of testing is sufficiently high that most graphics systems require the application program to do any necessary testing. Many APIs promise a consistent fill, but the result varies from one implementation to another and is guaranteed only if the polygon is convex.

Non-simple: If a non-simple polygon is to be displayed, the graphics system must handle it by some means and define an interior for it (Chapter 7).

Convex: An object is convex if all points on the line segment between any two points inside the object, or on its boundary, are inside the object, regardless of the type of the object and its dimension (2-D or 3-D).

Convex objects include triangles, tetrahedra, rectangles, circles, spheres, and parallelepipeds. There are various tests for convexity. However, like simplicity testing, convexity testing is expensive and usually is left to the application program. In 3-D, polygons present a few more difficulties because, unlike all 2-D objects, they are not necessarily flat; that is, all the vertices that define a polygon need not lie in the same plane. One property that most graphics systems exploit is that any three vertices that are not collinear determine both a triangle and the plane in which that triangle lies.
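Since convexity testing is left to the application, a common test for a simple 2-D polygon checks that the cross products of successive edge vectors never change sign. This is a sketch, assuming the vertices are listed in order with no repeated points:

```c
/* Application-side convexity test for a simple 2-D polygon with n
 * vertices given in order: convex iff the z-components of the cross
 * products of successive edge vectors all share one sign (or are 0). */
int is_convex(const double x[], const double y[], int n)
{
    int i, pos = 0, neg = 0;
    for (i = 0; i < n; i++) {
        int j = (i + 1) % n, k = (i + 2) % n;
        /* z-component of (v_j - v_i) x (v_k - v_j) */
        double cross = (x[j] - x[i]) * (y[k] - y[j])
                     - (y[j] - y[i]) * (x[k] - x[j]);
        if (cross > 0) pos++;
        if (cross < 0) neg++;
    }
    return !(pos && neg); /* mixed signs mean a reflex vertex */
}
```

A square passes; a quadrilateral with one vertex pushed inward fails, because the turn direction reverses at the reflex vertex.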

Hence, using only triangles makes it safe to render such objects correctly. Triangles are used almost exclusively, because typical rendering algorithms are guaranteed to be correct only if the vertices form a flat convex polygon. In addition, hardware and software often support a triangle type that is rendered much faster than a polygon with three vertices.

Polygon Types in OpenGL

For objects with interiors, the following types can be specified:

Polygons (GL_POLYGON): Successive vertices define line segments, and a line segment connects the final vertex to the first. The interior is filled according to the state of the relevant attributes. Note that a mathematical polygon has an inside and an outside that are separated by the edge; the edge itself has no width. Consequently, most graphics systems allow filling the polygon with a color or pattern, or drawing lines around the edges, but not both. In OpenGL, the glPolygonMode function can be used to select edges instead of fill (the default). However, to draw a filled polygon together with its edges, it must be drawn twice, once in each mode, or a polygon and a line loop with the same vertices must be drawn.

Triangles and Quadrilaterals (GL_TRIANGLES, GL_QUADS): These objects are special cases of polygons. Successive groups of three and four vertices are interpreted as triangles and quadrilaterals, respectively. Using these types may lead to more efficient rendering than is obtained with polygons.

Strips and Fans (GL_TRIANGLE_STRIP, GL_QUAD_STRIP, GL_TRIANGLE_FAN): These objects are based on groups of triangles or quadrilaterals that share vertices and edges.


In a triangle strip, for example, each additional vertex is combined with the previous two vertices to define a new triangle. For a quad strip, two new vertices are combined with the previous two vertices to define a new quadrilateral. A triangle fan is based on one fixed point: the next two points determine the first triangle, and each subsequent triangle is formed from one new point, the previous point, and the first (fixed) point.
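The sharing rules above can be made concrete by listing which vertex indices form the i-th triangle of a strip and of a fan. This is a sketch; OpenGL additionally alternates the winding order within a strip, which is ignored here:

```c
/* Vertex indices of the i-th triangle in a strip: each new vertex
 * combines with the previous two. */
void strip_triangle(int i, int idx[3])
{
    idx[0] = i;
    idx[1] = i + 1;
    idx[2] = i + 2;
}

/* Vertex indices of the i-th triangle in a fan: the fixed first
 * point, the previous point, and the new point. */
void fan_triangle(int i, int idx[3])
{
    idx[0] = 0;
    idx[1] = i + 1;
    idx[2] = i + 2;
}
```

Both types therefore produce n - 2 triangles from n vertices, versus n / 3 for GL_TRIANGLES, which is why strips and fans render shared geometry more efficiently.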

Approximating a Sphere

Many curved surfaces can be approximated using fans and strips. For example, to approximate a sphere, a set of polygons defined by lines of longitude and latitude can be used, with either quad strips or triangle strips. Consider a unit sphere. It can be described by the following three equations:

x(θ, φ) = sin θ cos φ,
y(θ, φ) = cos θ cos φ,
z(θ, φ) = sin φ.

Circles of constant longitude can be obtained by fixing θ and varying φ; likewise, circles of constant latitude can be obtained by fixing φ and varying θ. Quadrilaterals can be defined by generating points at fixed increments of θ and φ. Degrees must be converted to radians for the standard trigonometric functions. The code for the quadrilaterals corresponding to increments of 20 degrees in θ and 20 degrees in φ is given below (c converts degrees to radians):

c = M_PI/180.0;
for (phi = -80.0; phi <= 80.0; phi += 20.0)
{
    phir = c*phi;
    phir20 = c*(phi + 20);
    glBegin(GL_QUAD_STRIP);
    for (theta = -180.0; theta <= 180.0; theta += 20.0)
    {
        thetar = c*theta;
        x = sin(thetar)*cos(phir);
        y = cos(thetar)*cos(phir);
        z = sin(phir);
        glVertex3d(x, y, z);
        x = sin(thetar)*cos(phir20);
        y = cos(thetar)*cos(phir20);
        z = sin(phir20);
        glVertex3d(x, y, z);
    }
    glEnd();
}

Strips cannot be used at the poles, because all lines of longitude converge there. Instead, two triangle fans, one at each pole, can be used, as follows:

glBegin(GL_TRIANGLE_FAN);
glVertex3d(0.0, 0.0, 1.0);
c80 = c*80.0;
z = sin(c80);
for (theta = -180.0; theta <= 180.0; theta += 20.0)
{
    thetar = c*theta;
    x = sin(thetar)*cos(c80);
    y = cos(thetar)*cos(c80);
    glVertex3d(x, y, z);
}
glEnd();

glBegin(GL_TRIANGLE_FAN);
glVertex3d(0.0, 0.0, -1.0);
z = -sin(c80);
for (theta = -180.0; theta <= 180.0; theta += 20.0)
{
    thetar = c*theta;
    x = sin(thetar)*cos(c80);
    y = cos(thetar)*cos(c80);
    glVertex3d(x, y, z);
}
glEnd();

Text

In computer graphics, text may need to be displayed in a multitude of fashions, with control over type styles, sizes, colors, and other parameters (in contrast with non-graphical applications, where a simple set of characters is always displayed in the same manner). Fonts are families of typefaces of a particular design style, such as Times, Computer Modern, or Helvetica, and graphics applications must provide a choice of them. There are two forms of text:

Stroke Text:

Stroke text is constructed as are other graphic primitives: vertices are used to define line segments or curves that outline each character. Characters defined by closed boundaries can be filled. Advantage: stroke text can be defined to have all the detail of any other object, and, because it is defined in the same way as other graphical objects, it can be manipulated by standard transformations and viewed like any other graphical primitive. It can be transformed (made bigger or rotated) while retaining its detail and appearance. Consequently, a character needs to be defined only once, and transformations can generate it at any desired size and orientation. Defining a full 128- or 256-character stroke font, however, can be complex, and the font can take up significant memory and processing time.

The standard PostScript fonts are defined by polynomial curves, and they illustrate all the advantages and disadvantages of stroke text. The various PostScript fonts can be used for both high- and low-resolution applications. Often, developers mitigate the slow rendering of such stroke characters by putting considerable processing power in the printer; this strategy is related to the client-server concepts of Chapter 3.

Raster Text:

Raster text is simple and fast. Characters are defined as rectangles of bits called bit blocks. Each block defines a single character by the pattern of 0 and 1 bits in the block. A raster character can be placed in the frame buffer rapidly by a bit-block-transfer (bitblt) operation, which moves the block of bits in a single operation (Chapter 9). OpenGL allows the application program to use instructions that directly manipulate the contents of the frame buffer, such as writing or setting pixels. The size of raster characters can be increased only by replicating pixels, a process that gives larger characters a blocky appearance.
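The pixel-replication scaling just described can be sketched for a tiny bitmap stored one byte per pixel (real font blocks are bit-packed; this sketch ignores that detail):

```c
/* Enlarge a w x h bitmap by a factor of 2 purely by replication:
 * each source pixel becomes a 2 x 2 block in dst, which must hold
 * (2*w) * (2*h) bytes. This is why scaled raster text looks blocky. */
void replicate2x(const unsigned char *src, int w, int h, unsigned char *dst)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            unsigned char bit = src[y * w + x];
            int X = 2 * x, Y = 2 * y, W = 2 * w;
            dst[Y * W + X]           = bit;
            dst[Y * W + X + 1]       = bit;
            dst[(Y + 1) * W + X]     = bit;
            dst[(Y + 1) * W + X + 1] = bit;
        }
}
```

No new detail is created: every edge in the enlarged character is still a staircase of 2 x 2 blocks.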

Limitations: Other transformations of raster characters, such as rotation, may not make sense, because the transformation may move the bits defining the character to locations that do not correspond to the locations of pixels in the frame buffer. Raster characters are often stored in read-only memory (ROM) in the hardware, and hence a particular font might be of limited portability. Because stroke and bitmap characters can be created from other primitives, OpenGL does not have a text primitive. However, GLUT provides a few bitmap and stroke character sets that are defined in software and are portable. For example, a bitmap character of 8 x 13 pixels can be produced by

glutBitmapCharacter(GLUT_BITMAP_8_BY_13, c);

where c is the number of the ASCII character to be placed on the display. The character is placed at the present raster position on the

display, a location that is part of the graphics state, is measured in pixels, and can be altered by the various forms of the function glRasterPos*. (Chapter 3 illustrates that both stroke and raster text can be implemented most efficiently through display lists.)
GLUT_BITMAP_8_BY_13 A fixed width font with every character fitting in an 8 by 13 pixel rectangle.
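The idea of a bit-block character can be sketched in plain C without any OpenGL calls. The glyph data below is hypothetical (a made-up "T" shape, not the actual GLUT font data), and the convention that each byte is one scan line with the most-significant bit leftmost is an assumption for illustration:

```c
#include <stdint.h>

/* Hypothetical 8-bit-wide bitmap rows for a character 'T'; each byte is
 * one scan line, most-significant bit leftmost (this layout is an
 * assumption, not the real GLUT font encoding). */
static const uint8_t glyph_T[5] = {
    0xFF, /* ******** */
    0x18, /* ...**... */
    0x18, /* ...**... */
    0x18, /* ...**... */
    0x18  /* ...**... */
};

/* Return the bit (0 or 1) at column x of scan line y. */
int glyph_pixel(const uint8_t *rows, int x, int y)
{
    return (rows[y] >> (7 - x)) & 1;
}
```

A bitblt operation simply copies such rows of bits into the frame buffer at the current raster position, which is why raster text is fast.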

Here is a routine that shows how to render a string of ASCII text with glutBitmapCharacter:

    void output(int x, int y, char *string)
    {
        int len, i;

        glRasterPos2f(x, y);
        len = (int) strlen(string);
        for (i = 0; i < len; i++) {
            glutBitmapCharacter(GLUT_BITMAP_HELVETICA_18, string[i]);
        }
    }

Curved Objects

Primitives are the basic set of objects that can be defined through sets of vertices. With the exception of points, all of them consist of line segments or use line segments to define the boundary of a region that can be filled with a solid color or a pattern. There are two approaches to creating a richer set of objects, such as curved surfaces, beyond these primitives:

Curves and surfaces can be approximated with primitives. E.g. A regular polygon of n sides can approximate a circle, and a regular polyhedron can approximate a sphere. More generally, a curved surface can be approximated by a mesh of convex polygons (a tessellation), generated either at the rendering stage or within the user program.

Curved objects can be defined mathematically, and graphics functions can then be built to implement those objects. Objects such as quadric surfaces and parametric polynomial curves and surfaces are well understood mathematically, and they can be specified through sets of vertices (Chapter 10). E.g. A sphere can be defined by its center and a point on its surface; a cubic polynomial curve can be defined by four points.

Most graphics systems offer aspects of both approaches. In OpenGL, the utility library (GLU) provides a collection of approximations to common curved surfaces, and functions can be written to define other objects. Advanced features of OpenGL can also be used to work with parametric polynomial curves and surfaces.


Attributes

An attribute is any property that determines how a geometric primitive is to be rendered. Each geometric type has its own set of attributes:

1. Point: color, size.
2. Line segments: color, thickness, and type (solid, dashed, or dotted).
3. Filled primitives, such as polygons: these have more attributes, with enough parameters to specify how the fill should be done. E.g. Not filling the polygon and displaying only its edges; filling with a solid color or a pattern; filling with the edges displayed in a color different from that of the interior.
4. Stroke text (if supported): direction of the text string, path followed by successive characters in the string, height and width of the characters, font, and style (bold, italic, underlined).

The type of a primitive differs from its attributes. E.g. A red solid line and a green dashed line are the same geometric type, but each is displayed differently. Attributes may be associated with, or bound to, primitives at various points in the modeling and rendering pipeline, and bindings may not be permanent.

Immediate-mode graphics: In this mode, primitives are not stored in the system but are passed through the system for possible display as soon as they are defined. The present values of the attributes are part of the state of the graphics system: when a primitive is defined, the present attributes for that type are used, and it is displayed immediately. There is no memory of the primitive in the system; only the primitive's image appears on the display, and once erased from the display, it is lost. (Chapter 3 introduces display lists, which make it possible to keep objects in memory so that they can be redisplayed.)


Color

Color in computer graphics is based on the three-color theory, which says: if two colors produce the same tristimulus values, then they are visually indistinguishable. Two colors that match visually are known as a metameric pair; they have the same tristimulus values.

An additive color model is used because it is more appropriate for CRT displays with red, green, and blue primaries. This model considers a color as being formed from three primary colors that are mixed to produce the desired color. Not all colors can be matched this way, but with a particular set of primaries, say standard red, green, and blue, a color close to the desired one can be obtained. The matching process can be represented abstractly as

    C = T1 R + T2 G + T3 B,

where R, G, and B represent the three primaries red, green, and blue, and T1, T2, T3 are the strengths, or intensities, of the three primaries, called the tristimulus values. Thus, relative to a particular set of primaries, the target color can be characterized by the triplet (T1, T2, T3): a continuous function is reduced to three numbers, and for a visual match only a color's tristimulus values need to be matched.

Color gamut: The range of colors that can be produced on a given system with a given set of primaries.

Color cube: In the additive color model, a color is viewed as a point in a color solid, drawn using a coordinate system corresponding to the three primaries. The distance along a coordinate axis represents the amount of the corresponding primary in the color. The maximum value of each primary is normalized to 1, so any color can be represented with this set of primaries as a point in a unit cube.
The vertices of the cube correspond to black (no primaries on); red, green, and blue (one primary fully on); the pairs of primaries, cyan (green and blue fully on), magenta (red and blue fully on), and yellow (red and green fully on); and white (all primaries fully on). The principal diagonal of the cube connects the origin (black) with white. All colors along this line have equal tristimulus values and appear as shades of gray.
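The diagonal property above is easy to state in code: a color is a shade of gray exactly when its three tristimulus values are equal, i.e. when it lies on the black-white diagonal of the cube. A minimal sketch (the function name is ours):

```c
#include <stdbool.h>
#include <math.h>

/* A color (t1, t2, t3) is a shade of gray exactly when its tristimulus
 * values are equal, i.e. it lies on the black-white diagonal of the
 * color cube. A small tolerance absorbs floating-point error. */
bool is_gray(double t1, double t2, double t3)
{
    return fabs(t1 - t2) < 1e-9 && fabs(t2 - t3) < 1e-9;
}
```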

With additive color, primaries add light to an initially black display, yielding the desired color.

Subtractive color model: It suits processes such as commercial printing and painting. The primaries are usually the complementary colors: cyan, magenta, and yellow (CMY). The RGB additive system has a dual in the CMY subtractive system.

Color handling in a graphics system from the programmer's perspective, i.e., through the API: there are two different approaches.

RGB Color Model

In a three-primary, additive-color RGB system, there are conceptually separate frame buffers for the red, green, and blue images. Each pixel has separate red, green, and blue components that correspond to locations in memory. In a typical system, there might be a 1280 x 1024 array of pixels, and each pixel might consist of 24 bits (3 bytes): 1 byte for each of red, green, and blue. Such a frame buffer would have over 3 megabytes (MB) of memory that would have to be redisplayed at video rates.

Programmers can specify any color that can be stored in the frame buffer. With 24 bits per pixel, 2^24 (about 16 M, where M denotes 1024^2) colors are possible. But programmers prefer to specify a color independently of the number of bits in the frame buffer, and the drivers and hardware match that specification as closely as possible to the available display. Programmers specify color components as numbers (using the color cube) between 0.0 and 1.0, where 1.0 denotes the maximum value of the corresponding primary and 0.0 denotes a zero value of that primary.

Graphics function: In OpenGL, to draw in red, the following function call is used:

    glColor3f(1.0, 0.0, 0.0);

The execution of this function sets the present drawing color to red. Because the color is part of the state, drawing continues in red until the color is changed. The "3f" is used in a manner similar to the glVertex function: it conveys that a three-color (RGB) model is used and that the values of the components are given as floats in C.
If an integer or byte type is used to specify a color value, the maximum value of the chosen type corresponds to the primary fully on, and the minimum value corresponds to the primary fully off.
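This mapping from an integer type to the [0, 1] range can be sketched directly; for an unsigned byte the maximum is 255, mirroring the relation between the byte and float forms of the color functions (the helper name is ours):

```c
/* Map an unsigned-byte color component onto [0.0, 1.0]: the type's
 * maximum (255) corresponds to the primary fully on, and 0 to fully
 * off. */
float byte_to_float_component(unsigned char v)
{
    return (float)v / 255.0f;
}
```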

Advantages and disadvantages: The RGB model is used in lighting and shading. It is a more difficult model to support in hardware because of its higher memory requirements, but modern systems support it easily now that memory is cheaper. It is also possible for a system to support true color in software through the API and to have the hardware use approximate techniques to implement a display visually close to an RGB-color display.

Many systems have frame buffers that are limited in depth. Consider a frame buffer with a spatial resolution of 1280 x 1024, but with each pixel only 8 bits deep. The 8 bits can be divided into smaller groups of bits assigned to red, green, and blue. Although this technique is adequate in a few applications, it usually does not give enough flexibility in color assignment, and breaking apart individual bytes into small groups of bits can affect performance.

Four-color (RGBA) system: The fourth color (A) uses what is called the alpha channel, but it is stored in the frame buffer like the RGB values; it can be set with four-dimensional versions of the color functions. (Chapter 9 covers various uses of the alpha channel, such as creating fog effects or combining images.) Here, the alpha value is used only in the initialization of an OpenGL program. OpenGL treats the alpha value as an opacity or transparency value. Transparency and opacity are complements of each other: an opaque object passes no light through it; a transparent object passes all light. An object can range from fully transparent to fully opaque.

Clearing an area of the screen (the drawing window in which output is displayed) is done as the first step in any program and whenever a new frame is to be drawn. The function call

    glClearColor(1.0, 1.0, 1.0, 1.0);

defines a four-component clearing color that is white, because the first three components are set to 1.0, and opaque, because the alpha component is 1.0.
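How alpha acts as opacity can be sketched with the standard "over" compositing rule (this is a simplified sketch of one common blending mode, not OpenGL's full blending machinery, which Chapter 9 covers): the incoming source color is weighted by its alpha, and the color already on the display by 1 - alpha.

```c
/* Composite one color component: alpha = 1 (opaque) keeps only the
 * source; alpha = 0 (transparent) keeps only the destination. */
double blend_over(double alpha, double src, double dst)
{
    return alpha * src + (1.0 - alpha) * dst;
}
```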
The function glClear is then used to make the window on the screen solid white.

Indexed Color Model

Analogy: It is analogous to an artist who paints in oils. The oil painter can produce an almost infinite number of colors by mixing together a limited number of pigments from tubes; the painter has a potentially large color palette. At any one time, however, perhaps because of a limited number of brushes, the painter uses only a few colors from that large palette. Similarly, if a limited number of colors can be chosen from a large selection (palette), the graphics system can produce good-quality images most of the time.


Description: This model is used when pixel depth is limited. Colors are selected by interpreting the limited-depth pixels as indices rather than as color values. These indices correspond to entries (rows) in a table. Suppose that the frame buffer has k bits per pixel; each pixel value, or index, is an integer between 0 and 2^k - 1. Suppose also that colors can be displayed with an accuracy of m bits; that is, it is possible to choose from 2^m reds, 2^m greens, and 2^m blues. Hence, any of 2^(3m) colors can be produced on the display, but the frame buffer can specify only 2^k of them. The specification is handled through a user-defined color-lookup table of size 2^k x 3m. The user program fills the 2^k entries (rows) of the table with the desired colors, using m bits for each of red, green, and blue.

After filling the lookup table, the user can specify a color by its index, which points to the appropriate entry in the color-lookup table. For k = m = 8, a common configuration, the user can choose 256 out of 16 M colors. The 256 entries in the table constitute the user's color palette.
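The k = m = 8 configuration can be sketched as a small C data structure (the names and float representation are ours, for illustration; real hardware tables store m-bit integer components):

```c
/* Sketch of a color-lookup table for k = 8: 2^8 = 256 entries, each
 * holding an RGB triple. A pixel stores only an 8-bit index; the
 * display reads the actual color from the table. */
#define TABLE_SIZE 256

typedef struct { float r, g, b; } rgb;

static rgb lut[TABLE_SIZE];

/* Fill one row of the palette. */
void set_entry(int index, float r, float g, float b)
{
    lut[index] = (rgb){ r, g, b };
}

/* Resolve a pixel's index to its displayed color. */
rgb lookup(int index)
{
    return lut[index];
}
```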

Graphics functions: In color-index mode, the present color is selected by a function such as

    glIndexi(element);

which selects a particular color out of the table. Setting and changing the entries in the color-lookup table involves interacting with the window system (Chapter 3). One difficulty is that, if the window system and underlying hardware support only a limited number of colors, the window system may have only a single color table that must be used for all its windows, or it might have to juggle multiple tables, one for each window on the screen. GLUT allows the entries in a color table to be set for each window through the function

    glutSetColor(int color, GLfloat red, GLfloat green, GLfloat blue)

Advantages and disadvantages: This model is used when pixel depth is limited. It requires less memory for the frame buffer and fewer other hardware components.

However, the cost issue is less important now, and color-index mode presents a few problems. With a standard 8-bit (256-color) frame buffer, only a limited set of colors exists to represent a single image, and when working with dynamic images that must be shaded, more colors are usually needed.

Setting color attributes using the RGB color model (with respect to the example program), three attributes are set:

1. The clear color, set to white by the function call glClearColor(1.0, 1.0, 1.0, 1.0);
2. The rendering color for the points, set to red through the function call glColor3f(1.0, 0.0, 0.0);
3. The size of the rendered points; for points 2 pixels wide, the call glPointSize(2.0); is used.

Viewing

Viewing decisions must be specified in the program. A fundamental concept that emerges from the synthetic-camera model is that the specification of the objects in the scene is completely independent of the specification of the camera. Once both the scene and the camera have been specified, the computer system forms an image by carrying out a sequence of operations in its viewing pipeline. The application program needs to worry only about specifying the parameters for the objects and the camera. There are default viewing conditions in computer image formation, analogous to a camera's default lens setting, but flexibility must be provided to change the view (e.g., to take a picture of an ant or of an elephant); the same is true for a graphics system.

2-D viewing is based on taking a rectangular area of the 2-D world and transferring its contents to the display. The area of the world of which the image is taken is known as the viewing rectangle or clipping rectangle. Objects inside the rectangle are in the image; objects outside are clipped out and are not displayed; objects that straddle the edges of the rectangle are partially visible in the image. The size of the window on the display, and where this window is placed on the display, are independent decisions.
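For a point primitive, the clipping decision described above is a simple bounds check against the clipping rectangle (a sketch; clipping lines and polygons that straddle the edges is more involved and is treated later):

```c
#include <stdbool.h>

/* A point is in the image exactly when it lies inside the clipping
 * rectangle [left, right] x [bottom, top]. */
bool point_visible(double x, double y,
                   double left, double right, double bottom, double top)
{
    return x >= left && x <= right && y >= bottom && y <= top;
}
```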
2-D graphics is a special case of 3-D graphics; hence the viewing rectangle lies in the plane z = 0 within the 3-D viewing volume:


If viewing volume is not specified, OpenGL uses its default, a 2 x 2 x 2 cube, with the origin in the center. In terms of 2-D plane, the bottom-left corner is at (-1.0, -1.0), and the upper-right corner is at (1.0, 1.0).

The Orthographic View

It is a special case of the orthographic projection (Chapter 5). This simple orthographic projection takes a point (x, y, z) and projects it into the point (x, y, 0), as shown below. The 2-D world consists only of the plane z = 0, so the projection has no effect on it; nevertheless, the 3-D technique can be employed to produce the image.

In OpenGL, an orthographic projection with a right-parallelepiped viewing volume is specified via

    void glOrtho(GLdouble left, GLdouble right, GLdouble bottom, GLdouble top, GLdouble near, GLdouble far)

All parameters are distances measured from the camera. In OpenGL, the camera starts off at the origin pointing in the negative z direction, as shown below. The orthographic projection "sees" only those objects in the specified viewing volume.
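In outline, the 2-D part of what glOrtho accomplishes is a linear mapping of the clipping rectangle [left, right] x [bottom, top] onto the default [-1, 1] square of the viewing volume (a sketch of the mapping only; the function name is ours, and the real matrix also handles z):

```c
/* Map a world-coordinate point (x, y) inside the clipping rectangle to
 * normalized device coordinates in [-1, 1] x [-1, 1]. */
void ortho2d_map(double x, double y,
                 double left, double right, double bottom, double top,
                 double *xn, double *yn)
{
    *xn = 2.0 * (x - left) / (right - left) - 1.0;
    *yn = 2.0 * (y - bottom) / (top - bottom) - 1.0;
}
```

E.g., with a 500 x 500 rectangle anchored at the origin, the center (250, 250) maps to (0, 0) and the top edge maps to yn = 1.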

Unlike a real camera, the orthographic projection can include objects behind the camera. Thus, as long as the plane z = 0 is located between near and far, the 2-D plane will intersect the viewing volume. If using a 3-D volume seems strange in a two-dimensional application, the function

    void gluOrtho2D(GLdouble left, GLdouble right, GLdouble bottom, GLdouble top)

in the utility library may make the program more consistent with its 2-D structure. This function is equivalent to glOrtho with near and far set to

-1.0 and 1.0, respectively. (Chapters 4 and 5 discuss moving the camera and creating more complex views.)

Matrix Modes

Pipeline graphics systems have an architecture that depends on multiplying together, or concatenating, a number of transformation matrices to achieve the desired image of a primitive. Like most other OpenGL variables, the values of these matrices are part of the state of the system and remain in effect until changed. The two most important matrices are the model-view and projection matrices. At any time, the state includes values for both of these matrices, which start off set to identity matrices. (Chapter 4 discusses a set of OpenGL functions to manipulate these matrices.) The usual sequence is to modify the initial identity matrix by applying a sequence of transformations. There is only a single set of functions that can be applied to any type of matrix.

Matrix mode: Setting the matrix mode selects the matrix to which the operations apply. It is a variable that is set to one type of matrix and is also part of the state. By default, operations apply to the model-view matrix, so to alter the projection matrix, modes must be switched. The following sequence is common for setting a 2-D viewing rectangle:

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, 500.0, 0.0, 500.0);
    glMatrixMode(GL_MODELVIEW);

This sequence defines a 500 x 500 viewing rectangle with the lower-left corner of the rectangle at the origin of the 2-D system. It then switches the matrix mode back to model-view mode. In complex programs, it is always better to return to a given matrix mode (here, model-view mode) to avoid confusion over the current mode.

Control Functions

The OpenGL Utility Toolkit (GLUT) is a library of functions that provides a simple interface between the graphics program and the window system.
Details specific to the underlying windowing or operating system are inside the implementation rather than part of its API, so application programs written with GLUT run under multiple window systems.

Interaction with the Window System

The term window is used in a number of different ways in the graphics and workstation literature. A window, or screen window, denotes a rectangular area of our display that, by default, is the screen of a raster CRT. It has a height and width, and it displays the contents of the frame buffer. Positions in the

window are measured in window or screen coordinates, where the units are pixels.

In a modern environment, many windows can be displayed on the CRT screen, each with a different purpose, ranging from editing a file to monitoring the system. The term window system refers to the multi-window environment provided by systems such as the X Window System and Microsoft Windows. The window in which the graphics output appears is one of the windows managed by the window system: to the window system, the graphics window is a particular type of window, one in which graphics can be displayed or rendered.

References to positions in this window are relative to one corner of the window, and care must be taken about which corner is the origin. Usually, the lower-left corner is the origin and has window coordinates (0, 0). However, virtually all raster systems display their screens from top to bottom, left to right; from this perspective, the top-left corner would be the origin. OpenGL commands assume that the origin is at the bottom left, but information returned from the windowing system, such as the mouse position, often has its origin at the top left and thus requires conversion from one coordinate system to the other.

Although the screen may have a resolution of, say, 1280 x 1024 pixels, the window used can have any size up to the full screen size; thus, the frame buffer must have a resolution equal to the screen size. Conceptually, using a window of, say, 300 x 400 pixels corresponds to a 300 x 400 frame buffer, even though only part of the real frame buffer is used.

GLUT functions:

1. glutInit(int *argcp, char **argv) initiates an interaction between the windowing system and OpenGL. It must be called before opening a window in the program. The two arguments allow the user to pass command-line arguments, as in the standard C main function, and are usually the same as in main.

2.
glutCreateWindow(char *title) opens an OpenGL window. The title string provides the title displayed at the top of the window. The window thus created has a default size, a position on the screen, and characteristics such as the use of RGB color, but GLUT functions are available to specify these parameters before the window is created. E.g. The code

    glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE);
    glutInitWindowSize(480, 640);
    glutInitWindowPosition(0, 0);

specifies a 480 x 640 window in the top-left corner of the display, with RGB rather than indexed (GLUT_INDEX) color, a depth buffer for hidden-surface removal, and double rather than single (GLUT_SINGLE)

buffering. The defaults, which are all that is needed here, are RGB color, no hidden-surface removal, and single buffering; thus, these options need not be specified explicitly, but specifying them makes the code clearer. Note that the parameters are logically OR-ed together in the argument to glutInitDisplayMode.

Aspect Ratio and Viewports

The aspect ratio of a rectangle is the ratio of the rectangle's width to its height. The independence of the object, viewing, and workstation-window specifications can cause undesirable side effects if the aspect ratio of the viewing rectangle, specified by glOrtho, is not the same as the aspect ratio of the window specified by glutInitWindowSize. The figure illustrates the difference.

(a) Viewing rectangle (b) Display window

If the ratios differ, objects are distorted on the screen. This distortion is a consequence of the default mode of operation, in which the entire clipping rectangle is mapped to the display window. The distortion can be avoided in two ways:

By ensuring that the clipping rectangle and display window have the same aspect ratio.

By using a viewport, which is the more flexible way. A viewport is a rectangular area of the display window. By default, it is the entire window, but it can be set to any smaller size in pixels via the function

    void glViewport(GLint x, GLint y, GLsizei w, GLsizei h)

where (x, y) is the lower-left corner of the viewport (measured relative to the lower-left corner of the window), and w and h give the width and height, respectively. The types are all integers specifying positions and distances in pixels.
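The viewport strategy can be sketched as a small computation (the function name is ours): given the window dimensions and the clipping rectangle's aspect ratio, choose the largest viewport with that same aspect ratio, so mapping the clipping rectangle to the viewport introduces no distortion.

```c
/* Pick the largest vp_w x vp_h viewport inside a win_w x win_h window
 * whose width/height ratio equals clip_aspect. */
void fit_viewport(int win_w, int win_h, double clip_aspect,
                  int *vp_w, int *vp_h)
{
    double win_aspect = (double)win_w / win_h;
    if (win_aspect > clip_aspect) {   /* window too wide: limit width */
        *vp_h = win_h;
        *vp_w = (int)(win_h * clip_aspect);
    } else {                          /* window too tall: limit height */
        *vp_w = win_w;
        *vp_h = (int)(win_w / clip_aspect);
    }
}
```

The resulting values would be passed to glViewport (along with an offset to center the viewport, if desired).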


Primitives are displayed in the viewport, as shown in the figure. For a given window, the height and width of the viewport can be adjusted to match the aspect ratio of the clipping rectangle, thus preventing any object distortion in the image. The viewport is part of the state, so changing the viewport between successive renderings gives the effect of multiple viewports with different images in different parts of the window. (Chapter 3 discusses interactive changes in the size and shape of the window.)

main, display, and myinit Functions

The screen may clear immediately, before the output primitives can be seen. Solutions to this problem include:

Inserting a delay, such as via a standard function like sleep(enoughtime).

Event processing (Chapter 3), say using the GLUT function

    void glutMainLoop(void)

which causes the program to begin an event-processing loop. If there are no events to process, the program sits in a wait state, with the graphics on the screen, until it is terminated through some external means, such as by hitting a "kill" key.

Display callback: A function that sends graphics to the screen. It is specified through the GLUT function

    void glutDisplayFunc(void (*func)(void))

Here, func is the name of the function that will be called whenever the windowing system determines that the OpenGL window needs to be redisplayed. These times include: when the window is first opened (thus, if all graphics are put into this function, as in our non-interactive example, func will be executed once and the gasket will be drawn); when the window is moved from one location on the screen to another; and when a window in front of the OpenGL window is destroyed, making the whole OpenGL window visible.


A main program that works for most non-interactive applications:

    #include <GL/glut.h>

    void main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
        glutInitWindowSize(500, 500);
        glutInitWindowPosition(0, 0);
        glutCreateWindow("simple OpenGL example");
        glutDisplayFunc(display);
        myinit();
        glutMainLoop();
    }

myinit() sets the OpenGL state variables dealing with viewing and attributes; these parameters are preferably set once, independently of the display function. The standard include (.h) file for GLUT is loaded before the function definitions. In most implementations, the compiler directive #include <GL/glut.h> will add in the header files for the GLUT library, the OpenGL library (gl.h), and the OpenGL utility library (glu.h). The macro definitions for standard values, such as GL_LINES and GL_RGB, are in these files.

Program Structure

Every program written hereafter will have the same structure as the gasket program, and the GLUT toolkit is always used. The main function consists of calls to GLUT functions to set up the window(s) and to make sure that the local environment supports the required display properties; main also names the required callbacks and callback functions. Every program must have a display callback, and most will have other callbacks to set up interaction. The myinit function sets the programmer's options, usually through OpenGL functions from the GL and GLU libraries. Although these options could be set in main, it is clearer to keep GLUT functions separate from OpenGL functions. In most programs, most of the graphics output is generated in the display callback.

The Gasket Program

The two functions given below complete the program that generates the Sierpinski gasket.
The myinit function:

    void myinit(void)
    {
        /* attributes */
        glClearColor(1.0, 1.0, 1.0, 0.0); /* white background */
        glColor3f(1.0, 0.0, 0.0);         /* draw in red */

        /* set up viewing */
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluOrtho2D(0.0, 500.0, 0.0, 500.0);
        glMatrixMode(GL_MODELVIEW);
    }

Note: Red points are drawn on a white background. A 2-D coordinate system is set up, so the points are defined within a 500 x 500 square with the origin in the lower-left corner.

The display function:

    void display(void)
    {
        typedef GLfloat point2[2]; /* define a point data type */

        /* triangle */
        point2 vertices[3] = {{0.0, 0.0}, {250.0, 500.0}, {500.0, 0.0}};
        int j, k;
        int rand(); /* standard random-number generator */
        point2 p = {75.0, 50.0}; /* arbitrary point inside triangle */

        /* clear the window */
        glClear(GL_COLOR_BUFFER_BIT);

        /* compute and output 5000 new points */
        for (k = 0; k < 5000; k++) {
            j = rand() % 3; /* pick a vertex at random */

            /* compute point halfway between vertex and old point */
            p[0] = (p[0] + vertices[j][0]) / 2.0;
            p[1] = (p[1] + vertices[j][1]) / 2.0;

            /* plot point */
            glBegin(GL_POINTS);
                glVertex2fv(p);
            glEnd();
        }
        glFlush();
    }

Note: An arbitrary triangle (three vertices) is defined, along with an arbitrary initial point inside it. A loop generates a fixed number (5000) of points. glFlush is an OpenGL function that forces the system to plot the points on the display as soon as possible.

Output:

Polygons and Recursion

If the program is run with more iterations, much of the apparent randomness in the image disappears: no points ever land in the middle region, regardless of how many points are generated.
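The core iteration of the gasket program can be checked without any OpenGL calls (the function name and the bounds test are ours): repeatedly jumping halfway toward a random vertex of a convex region can never leave that region, so every generated point stays inside the triangle's bounding box.

```c
#include <stdlib.h>
#include <stdbool.h>

/* Run the chaos-game iteration from the gasket program and verify that
 * every generated point stays inside the 500 x 500 bounding box. */
bool gasket_points_in_bounds(int iterations)
{
    double v[3][2] = {{0.0, 0.0}, {250.0, 500.0}, {500.0, 0.0}};
    double p[2] = {75.0, 50.0};  /* arbitrary point inside triangle */
    srand(42);                   /* fixed seed for reproducibility */
    for (int k = 0; k < iterations; k++) {
        int j = rand() % 3;      /* pick a vertex at random */
        p[0] = (p[0] + v[j][0]) / 2.0;
        p[1] = (p[1] + v[j][1]) / 2.0;
        if (p[0] < 0.0 || p[0] > 500.0 || p[1] < 0.0 || p[1] > 500.0)
            return false;
    }
    return true;
}
```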

Another method of generating the Sierpinski gasket uses polygons instead of points and does not require a random-number generator. If line segments connecting the midpoints of the sides of the original triangle are drawn, the original triangle is divided into four triangles, the middle one containing no points.

The same observation applies recursively: subdivide each of the three outer triangles into four triangles by connecting the midpoints of their sides, and each middle triangle will contain no points.

Advantage: Using polygons enables filling of solid areas on the display.

Strategy: Start with a single triangle and subdivide it into four smaller triangles by bisecting the sides; then remove the middle triangle from further consideration. Repeat this procedure on the remaining triangles until the triangles being removed are small enough (about the size of one pixel) that the remaining triangles can simply be drawn.

Implementation: This process is implemented through a recursive program.

1. A function that draws a single triangular polygon given three arbitrary vertices:

    void triangle(point2 a, point2 b, point2 c)
    {
        glBegin(GL_TRIANGLES);
            glVertex2fv(a);
            glVertex2fv(b);
            glVertex2fv(c);
        glEnd();
    }

2. Suppose that the vertices of the original triangle are given by the array

    point2 v[3];

Then the midpoints of the sides are given by the array m[3], which can be computed via the code

    for (j = 0; j < 2; j++) m[0][j] = (v[0][j] + v[1][j]) / 2.0;
    for (j = 0; j < 2; j++) m[1][j] = (v[0][j] + v[2][j]) / 2.0;
    for (j = 0; j < 2; j++) m[2][j] = (v[1][j] + v[2][j]) / 2.0;

3. With these six points, the function triangle could be used to draw the three triangles formed by (v[0], m[0], m[1]), (v[2], m[1], m[2]), and (v[1], m[2], m[0]). These triangles are not drawn immediately, however; they must be subdivided further, so the process is made recursive. A recursive function divide_triangle(point2 a, point2 b, point2 c, int k) draws the triangle only if k is zero; otherwise, it subdivides the triangle specified by a, b, and c and decreases k:

    void divide_triangle(point2 a, point2 b, point2 c, int k)
    {
        point2 ab, ac, bc;
        int j;

        if (k > 0) {
            /* compute midpoints of sides */
            for (j = 0; j < 2; j++) ab[j] = (a[j] + b[j]) / 2;
            for (j = 0; j < 2; j++) ac[j] = (a[j] + c[j]) / 2;
            for (j = 0; j < 2; j++) bc[j] = (b[j] + c[j]) / 2;

            /* subdivide all but inner triangle */
            divide_triangle(a, ab, ac, k - 1);
            divide_triangle(c, ac, bc, k - 1);
            divide_triangle(b, bc, ab, k - 1);
        }
        else
            triangle(a, b, c); /* draw triangle at end of recursion */
    }

4. The display function now uses a global value of n, set by the main program, to fix the desired number of subdivisions, and it calls divide_triangle once:

    void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        divide_triangle(v[0], v[1], v[2], n);
        glFlush();
    }

5. The rest of the program is the same as the previous gasket program, except that the value of n is read in.

Output:
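The structure of the recursion is easy to verify without drawing anything: each call either draws one triangle (k = 0) or makes three recursive calls, so k levels of subdivision draw exactly 3^k triangles. A sketch that counts instead of drawing (the function name is ours):

```c
/* Count the triangles that divide_triangle would draw for k levels of
 * subdivision: one at the base case, otherwise three recursive calls. */
long count_triangles(int k)
{
    if (k == 0)
        return 1;                       /* base case: one drawn triangle */
    return 3 * count_triangles(k - 1);  /* three subdivided corners */
}
```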

Three-Dimensional Gasket

2-D graphics is a special case of 3-D graphics. This is illustrated by converting the 2-D Sierpinski gasket program into a program that generates a 3-D gasket. Either of the two approaches used for the 2-D gasket can be used. Both extensions start in a similar manner, replacing the initial triangle with a tetrahedron.

Use of 3-D Points: Because a tetrahedron is convex, the midpoint of a line segment between a vertex and any point inside the tetrahedron is also inside the tetrahedron. Hence, the same procedure as before can be followed. But instead of the three vertices required to define a triangle, four initial vertices are needed to define the tetrahedron. As long as the four vertices are not coplanar, they can be chosen at random without affecting the character of the result.
Changes made in the function display:

Defining a 3-D point data type:

typedef GLfloat point3[3];

Then the vertices of the tetrahedron are initialized, e.g.

/* vertices of an arbitrary tetrahedron */
point3 vertices[4] = {{0.0, 0.0, 0.0}, {250.0, 500.0, 100.0},
                      {500.0, 250.0, 250.0}, {250.0, 100.0, 250.0}};

/* arbitrary initial point */
point3 p = {250.0, 100.0, 250.0};

The function glVertex3fv is used to define points. Here the points are not restricted to a single plane, so it may be difficult to envision the 3-D structure from the 2-D image displayed. To get around this problem, a color function is added that makes the color of each point depend on that point's location, so the resulting image can be understood more easily.

void display()
{
    /* computes and plots a single new point */
    int j;
    j = rand() % 4;  /* pick a vertex at random */
    /* compute point halfway between vertex and old point */
    p[0] = (p[0] + vertices[j][0]) / 2.0;
    p[1] = (p[1] + vertices[j][1]) / 2.0;
    p[2] = (p[2] + vertices[j][2]) / 2.0;
    /* plot point; p now holds the new point, which becomes
       the old point for the next call */
    glBegin(GL_POINTS);
        glColor3f(p[0]/250.0, p[1]/250.0, p[2]/250.0);
        glVertex3fv(p);
    glEnd();
    glFlush();
}

As this program works in 3-D, a 3-D clipping volume is defined (in main.c) by

glOrtho(-500.0, 500.0, -500.0, 500.0, -500.0, 500.0);

Output: The figure shows that if enough points are generated, the resulting image looks like the initial tetrahedron with increasingly smaller tetrahedra removed.

Use of Polygons in Three Dimensions

Following the second approach, the faces of a tetrahedron are the four triangles determined by its four vertices. The subdivision algorithm can then be applied to each of the triangular faces, so the code is almost the same as in 2-D. The triangle routine now uses points in 3-D rather than in 2-D:

void triangle(point3 a, point3 b, point3 c)
{
    glBegin(GL_POLYGON);
        glVertex3fv(a);
        glVertex3fv(b);
        glVertex3fv(c);
    glEnd();
}

The code for divide_triangle does the same, with the loops now running over all three coordinates:

void divide_triangle(point3 a, point3 b, point3 c, int k)
{
    point3 ab, ac, bc;
    int j;
    if (k > 0)
    {
        /* compute midpoints of sides */
        for (j=0; j<3; j++) ab[j] = (a[j]+b[j])/2;
        for (j=0; j<3; j++) ac[j] = (a[j]+c[j])/2;
        for (j=0; j<3; j++) bc[j] = (b[j]+c[j])/2;
        /* subdivide all but inner triangle */
        divide_triangle(a, ab, ac, k-1);
        divide_triangle(c, ac, bc, k-1);
        divide_triangle(b, bc, ab, k-1);
    }
    else
        triangle(a, b, c);  /* draw triangle at end of recursion */
}

A subdivided tetrahedron can then be generated by applying triangle subdivision to each of the four original triangles that form the surface of the tetrahedron:

void tetrahedron(int n)
{
    glColor3f(1.0, 0.0, 0.0);
    divide_triangle(v[0], v[1], v[2], n);
    glColor3f(0.0, 1.0, 0.0);
    divide_triangle(v[3], v[2], v[1], n);
    glColor3f(0.0, 0.0, 1.0);
    divide_triangle(v[0], v[3], v[1], n);
    glColor3f(0.0, 0.0, 0.0);
    divide_triangle(v[0], v[2], v[3], n);
}

where n is the number of subdivision steps.

Note: A different color is assigned to each face to make the output clearer.

Hidden-Surface Removal: The 3-D program explained above draws triangles in the order in which they are specified in the program. This order is determined by the recursion in the program, not by the geometric relationship between the triangles. Each triangle is drawn (filled) in a solid color and is drawn over those triangles that have already been rendered to the display. Hence the output will be confusing. If the 3-D Sierpinski gasket were constructed out of small solid tetrahedra, a viewer would see only those faces of the tetrahedra that are in front of all other faces, so the drawing order above is in contrast to what a viewer should see. The figure shows a simplified version of this hidden-surface problem: from the viewer's position, only triangle A is seen fully, triangle B is blocked from view, and triangle C is only partially visible.

If the positions of the viewer and the triangles are known, the triangles can be drawn in an order that produces the correct image. Algorithms for ordering objects so that they are drawn correctly are called visible-surface algorithms or hidden-surface-removal algorithms, depending on the problem under consideration. (Chapters 4 and 7)

z-buffer algorithm: This is a hidden-surface-removal algorithm that is supported by OpenGL. The algorithm can be turned on (enabled) and off (disabled) easily. In the main program, auxiliary storage is requested for a z (depth) buffer by modifying the initialization of the display mode as follows:

glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);

The algorithm is enabled by the function call

glEnable(GL_DEPTH_TEST);

either in main or in an initialization function such as myinit. Because the algorithm stores information in the depth buffer, this buffer must be cleared whenever the display is to be redrawn. Thus, the clear call in the display function is modified as follows:

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

The display callback becomes

void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    tetrahedron(n);
    glFlush();
}

Output: (for a recursion of five steps)
