
Computer Graphics

UNIT-7
LIGHTING AND SHADING

Gradations or shades of color give 2-D images the appearance of being 3-D. This shading results from the interaction between light and the surfaces in the models.

7.1 Light and Matter

The color of a point seen on an object is determined by multiple interactions among light sources and reflective surfaces. These interactions can be viewed as a recursive process. Consider the simple scene below.

Some light from the source that reaches surface A is reflected.

Some of this reflected light reaches surface B, and some of it is then reflected back to A, where some of it is again reflected back to B, and so on.

This recursive reflection of light between surfaces accounts for subtle shading effects, such as the bleeding of colors between adjacent surfaces.

Mathematically, this recursive process results in an integral equation, the rendering equation, which can be used to find the shading of all surfaces in a scene. This equation cannot be solved in general, even by numerical methods.

There are various approximate approaches, such as radiosity and ray tracing. Although these approaches come close to solving the rendering equation, they work only for particular types of surfaces and they are slow.

Hence the Phong reflection model is used, which provides a compromise between physical correctness and efficient computation.


Light sources:
Light-emitting (or self-luminous) surfaces produce rays of light.
The light rays are traced to see their effect as they interact with reflecting surfaces in the scene. This approach is similar to ray tracing, but considers only single interactions between light sources and surfaces.


Two independent parts in modeling are:
Modeling the light sources in the scene.
Building a reflection model that deals with the interactions between materials
and light.
Overview of the process considered in a model:
Rays of light from a point source are traced as shown.
The viewer sees only the light that leaves the source and reaches the eyes - perhaps through a complex path and multiple interactions with objects in the scene.
If a ray of light enters the eye directly from the source, the color of the source is seen.
If the ray of light hits a surface that is visible to the viewer, the observed color is based on the interaction between the source and the surface material, i.e., the color of the light reflected from the surface toward the eyes is observed.

In terms of computer graphics, the viewer is replaced by the projection plane, as shown below.

Priyanka H V

Page 1


Conceptually, the clipping window in this plane is mapped to the screen; thus, the

projection plane can be considered as ruled into rectangles, each corresponding to a pixel.
The color of the light source and of the surfaces determines the color of one or more

pixels in the frame buffer.


Only those rays that leave the source and reach the viewer's eye, either directly or through

interactions with objects need to be considered.


In the case of computer viewing, these are the rays that reach the center of projection (COP) after passing through the clipping rectangle.


Most rays leaving a source do not contribute to the image and they need not be considered.
There may be both single and multiple interactions between rays and objects.
It is the nature of these interactions that determines whether an object appears red or brown,

light or dark, dull or shiny.


When light strikes a surface, some of it is absorbed, and some of it is reflected.
If the surface is opaque, reflection and absorption account for all the light striking the

surface.
If the surface is translucent, some of the light is transmitted through the material and emerges

to interact with other objects. These interactions depend on wavelength.


An object illuminated by white light appears red because it absorbs most of the incident light

but reflects light in the red range of frequencies.


A shiny object appears so because its surface is smooth. Conversely, a dull object has a rough

surface.
The shading of objects also depends on the orientation of their surfaces, a factor that is

characterized by the normal vector at each point.


These interactions are classified below.

3 types of interactions between light and materials:


1. Specular surfaces:
Appear shiny because most of the light that is reflected is scattered in a narrow

range of angles close to the angle of reflection.


Mirrors are perfectly specular surfaces.
The light from an incoming light ray may be partially absorbed, but all reflected light
emerges at a single angle, obeying the rule that the angle of incidence is equal to the angle of
reflection.

2. Diffuse surfaces:
They are characterized by reflected light being scattered in all directions.
Walls painted with matte or flat paint are diffuse reflectors, as are many natural

materials, such as terrain viewed from an airplane or a satellite.


Perfectly diffuse surfaces scatter light equally in all directions and thus appear the same to all
viewers.

3. Translucent surfaces:
allow some light to penetrate the surface and to emerge from another location on

the object. This process of refraction characterizes glass and water.


Some incident light may also be reflected at the surface.
All three types of surfaces are modeled in the Phong model.

7.2 Light Sources


Light can leave a surface through two fundamental processes:

Self-emission and
Reflection.

A light source is an object that emits light only through internal energy sources.


A light source, such as a light bulb, can also reflect some light that is incident on it from the
surrounding environment.

Simple light models neglect this term.

A self-emission term can be easily simulated.

Consider a light source shown below.

It is an object with a surface. Each point (x, y, z) on the surface can emit light that is characterized by the direction of emission (θ, φ) and the intensity of energy emitted at each wavelength λ.

A general light source can be characterized by a six-variable illumination function I(x, y, z, θ, φ, λ).

The total contribution of the source on the surface can be obtained by integrating over the surface of the source, as shown below.

This integration process accounts for
the emission angles that reach the surface, and
the distance between the source and the surface.

For a distributed light source, such as a light bulb, the evaluation of this integral (using
analytic or numerical methods) is difficult.

It is easier to model the distributed source with polygons, each of which is a simple source,
or with an approximating set of point sources.

Four basic types of color light sources:

Light sources are modeled as having three components - red, green, and blue - and can be described through a three-component intensity or luminance function

I = [Ir, Ig, Ib]^T

each of whose components is the intensity of the independent red, green, and blue components.

The red component of a light source is used for the calculation of the red component of the image.

Because color-light computations involve three similar but independent calculations, a single scalar equation can be used, which may represent any of the three color components.

The following four lighting types are sufficient for rendering most simple scenes.

1. Ambient lighting
Uniform lighting is called ambient light.
In some rooms, such as in certain classrooms or kitchens, the lights have been designed
and positioned to provide uniform illumination throughout the room.
Such illumination is achieved through large sources that have diffusers whose purpose is
to scatter light in all directions.

Such illumination can be simulated, at least in principle, by modeling all the distributed
sources, and then integrating the illumination from these sources at each point on a
reflecting surface.
Making such a model and rendering a scene with it would be a daunting task for a
graphics system, especially one for which real-time performance is desirable.
Alternatively, ambient light can be used for uniform lighting. Thus, ambient illumination
is characterized by an ambient intensity, Ia, that is identical at every point in the scene.
The ambient source has three color components: Ia = [Iar, Iag, Iab]^T.
The scalar Ia is used to denote any one of the red, green, or blue components of Ia.
Although every point in the scene receives the same illumination from Ia, each surface can
reflect this light differently.
2. Point sources
An ideal point source emits light equally in all directions.
A point source located at a point p0 can be characterized by a three-component color matrix:

I(p0) = [Ir(p0), Ig(p0), Ib(p0)]^T

The intensity of illumination received from a point source is proportional to the inverse
square of the distance between the source and the surface.
Hence, at a point p (in figure), the intensity of light received from the point source is

I(p, p0) = (1 / |p - p0|²) I(p0)

I(p0) can be used to denote any of the components of I(p0).


Point sources are easier to use, but they render images that are less close to physical reality.
Scenes rendered with only point sources tend to have high
contrast i.e. objects appear either bright or dark. This high contrast effect can be mitigated
by adding ambient light to a scene.
In the real world, the large size of light sources contributes to softer scenes.
The distance term also contributes to the harsh renderings with point sources. Although
the inverse-square distance term is correct for point sources, in practice it is usually
replaced by a term of the form (a + bd + cd²)⁻¹, where d is the distance between p and p0.
The constants a, b, and c can be chosen to soften the lighting.
If the light source is far from the surfaces in the scene, the intensity of the light from the
source is sufficiently uniform that the distance term is constant over the surfaces.
3. Spotlights
Spotlights are characterized by a narrow range of angles through which light is emitted.
A simple spotlight can be constructed from a point source by limiting the angles at which
light from the source can be seen.
A cone whose apex is at ps, which points in the direction ls, and whose
width is determined by an angle θ, as shown in figure, can be used. If θ =
180°, the spotlight becomes a point source. More realistic spotlights are
characterized by the distribution of light within the cone - usually with
most of the light concentrated in the center of the cone.


Thus, the intensity is a function of the angle φ between the direction of the
source and a vector s to a point on the surface (as long as this angle is less
than θ; figure).
This function is usually defined by cos^e φ, where the exponent e (figure)
determines how rapidly the light intensity drops off. Cosines are
convenient functions for lighting calculations. If both s and ls are unit-length vectors, the cosine can be computed with the dot product cos φ = s · ls, a calculation
that requires only three multiplications and two additions.
4. Distant light
Most shading calculations require the direction from the point on the
surface to the light source. This vector must be computed repeatedly to
calculate the intensity at each point on the surface. This is a significant part
of the shading calculation.
However, if the light source is far from the surface, the vector does not change for
different points on the surface, just as the light from the sun strikes all objects that are in
close proximity to one another at the same angle.
Then a point source of light can be replaced with a source that illuminates objects with
parallel rays of light -a parallel source. In practice, the calculations for distant light
sources are similar to the calculations for parallel projections; they replace the location of
the light source with the direction of the light source.
Hence, in homogeneous coordinates, a point light source at p0 is represented internally as
a 4-D column matrix:

p0 = [x, y, z, 1]^T

In contrast, the distant light source is described by a direction vector whose
representation in homogeneous coordinates is the matrix

p0 = [x, y, z, 0]^T
The graphics system can carry out rendering calculations more efficiently for distant light
sources than for near ones. Of course, a scene rendered with distant light sources looks
different from a scene rendered with near sources. OpenGL allows both types of sources.
7.3 The Phong Reflection Model

It uses the four vectors shown in figure to calculate a color for an arbitrary point p on a surface. If the surface is curved, all four vectors can change from point to point.

Vector n is the normal at p.
Vector v is in the direction from p to the viewer or COP.
Vector l is in the direction of a line from p to an arbitrary point on the source for a distributed light source, or to the point light source.
Vector r is in the direction that a perfectly reflected ray from l would take; r is determined by n and l.
The Phong model supports the three types of material-light interactions:

Ambient, Diffuse, and Specular

Consider a set of point sources. Let each source have separate ambient, diffuse, and specular
components for each of the three primary colors.

Thus, at any point p on the surface, the 3 x 3 illumination matrix for the i-th light source is:

     | Lira  Liga  Liba |
Li = | Lird  Ligd  Libd |
     | Lirs  Ligs  Libs |

The first row contains the ambient intensities for the red, green, and blue terms from source i.

The second row contains the diffuse terms; the third contains the specular terms.

The amount of the incident light that is reflected at the point of interest can be computed.
Thus, for each point, the computed matrix of reflection terms is:

     | Rira  Riga  Riba |
Ri = | Rird  Rigd  Ribd |
     | Rirs  Rigs  Ribs |

The value of Ri depends on the material properties, the orientation of the surface, the
direction of the light source, and the distance between the light source and the viewer.

Then contribution for each color source can be computed by adding the ambient, diffuse, and
specular components.

E.g., the red intensity that is seen at p from source i is

Iir = Rira Lira + Rird Lird + Rirs Lirs = Iira + Iird + Iirs

The total intensity is obtained by adding the contributions of all sources and, possibly, a global
ambient term. Thus, the red term is

Ir = Σi (Iira + Iird + Iirs) + Iar

where Iar is the red component of the global ambient light.

Notation can be simplified by noting that the necessary computations are the same for each
source and for each primary color. They differ only depending on whether the ambient, diffuse,
or specular terms are being considered. Hence, the subscripts i, r, g, and b can be dropped, and

I = Ia + Id + Is = LaRa + LdRd + LsRs

1. Ambient Reflection

The intensity of ambient light Ia is the same at every point on the surface.

Some of this light is absorbed, and some is reflected.

The amount reflected is given by the ambient reflection coefficient, Ra = ka. Because only a
positive fraction of the light is reflected, 0 ≤ ka ≤ 1.

Thus Ia = ka La.

La can be any of the individual light sources, or it can be a global ambient term.

A surface has three ambient coefficients - kar, kag, and kab - and they can differ.

2. Diffuse Reflection

A perfectly diffuse reflector scatters the light that it reflects equally in all directions.

Hence, such a surface appears the same to all viewers.

The amount of light reflected depends both on the material, because some of the incoming
light is absorbed, and on the position of the light source relative to the surface.

Diffuse reflections are characterized by rough surfaces.

If a cross section of a diffuse surface is magnified, it appears as shown below.

Rays of light that hit the surface at only slightly different angles are
reflected back at markedly different angles.

Perfectly diffuse surfaces are so rough that there is no preferred angle
of reflection. Such surfaces, sometimes called Lambertian surfaces, can be modeled

Consider a diffuse planar surface shown below, illuminated by the sun (point source).

The surface is brightest at noon and dimmest at dawn and
dusk because, according to Lambert's law, only the vertical
component of the incoming light is seen.
Consider a small parallel light source striking a plane, as
shown. As the source is lowered in the (artificial) sky, the
same amount of light is spread over a larger area (d/cos θ), and
the surface appears dimmer (Rd is reduced by cos θ).

Returning to the initial point source, diffuse reflections can be characterized mathematically.

Lambert's law states that

Rd ∝ cos θ

where θ is the angle between the normal n at the point of intersection and the direction of the light source l.

If both l and n are unit-length vectors, then cos θ = l · n.

If a reflection coefficient kd, representing the fraction of incoming diffuse light that is
reflected, is added, then the diffuse reflection term is

Id = kd (l · n) Ld

If a distance term is to be incorporated, to account for attenuation as the light travels a
distance d from the source to the surface, the quadratic attenuation term can be used:

Id = (kd / (a + bd + cd²)) (l · n) Ld

If the facet (surface point) is aimed away from the light (θ > 90°), this dot product, and hence
Id, is negative. The model therefore evaluates Id as 0, replacing (l · n) in the
computation with max[(l · n), 0].
For θ near 0°, brightness varies only slightly with angle, because the cosine changes
slowly there. As θ approaches 90°, however, the brightness falls rapidly to 0.
3. Specular Reflection

Images with only ambient and diffuse reflections appear shaded and 3-D, but all the
surfaces look dull, somewhat like chalk, because the highlights reflected from shiny objects are not
considered.


These highlights usually show a color different from the color of the reflected ambient and
diffuse light.

E.g., a red ball viewed under white light has a white highlight, which is the reflection of some
of the light from the source in the direction of the viewer.

A specular surface is smooth. The smoother the surface, the more it resembles a mirror.

The figure shows that as the surface gets smoother, the reflected light is concentrated in a smaller
range of angles, centered about the angle of a perfect reflector - a mirror, or a perfectly
specular surface.

Modeling specular surfaces realistically can be complex, because the pattern by which the
light is scattered is not symmetric, it depends on the wavelength of the incident light, and it
changes with the reflection angle.

Phong proposed an approximate model that can be computed with only a slight increase over
the work done for diffuse surfaces.

The model adds a term for specular reflection. Hence, the surface can be considered as being
rough for the diffuse term and smooth for the specular term.

The amount of light that the viewer sees depends on the angle between r, the direction of a
perfect reflector, and v, the direction of the viewer.

The Phong model uses the equation

Is = ks Ls cos^α φ

The coefficient ks (0 ≤ ks ≤ 1) is the fraction of the incoming specular light that is reflected.

The exponent α is a shininess coefficient.

The figure below shows how, as α is increased, the reflected light is concentrated in a narrower
region centered on the angle of a perfect reflector.

In the limit, as α goes to infinity, the surface becomes a mirror; values of α in the range 100 to 500
correspond to most metallic surfaces, and smaller values (< 100) correspond to materials that
show broad highlights.

The computational advantage of the Phong model is that, if r and v are normalized to unit
length, the dot product can again be used, and the specular term becomes

Is = ks Ls (r · v)^α

By adding a distance term, the full Phong model is written:

I = (1 / (a + bd + cd²)) (kd Ld max(l · n, 0) + ks Ls max((r · v)^α, 0)) + ka La


7.3.1 Use of the Halfway Vector (Blinn Phong / Modified Phong Model)

If the Phong model with specular reflections is used in rendering, the dot product r . v should
be recalculated at every point on the surface.

An approximation can be obtained by using the unit vector halfway between the viewer
vector and the light-source vector:

h = (l + v) / |l + v|

The figure below shows all five vectors.

Here ψ is the angle between n and h, the half-angle. When v lies in the same plane as l, n, and r, it can be
shown that

2ψ = φ

If r · v is replaced with n · h, the calculation of r can be avoided.

7.4 Computation of Vectors

Most of the calculations for rendering a scene involve the determination of the required
vectors and dot products.

For each special case, simplifications are possible.

E.g. if the surface is a flat polygon, the normal is the same at all points on the surface.

If the light source is far from the surface, the light direction is the same at all points.

Normal Vectors

For smooth surfaces, the normal vector to the surface exists at every point and gives the local
orientation of the surface.

Its calculation depends on how the surface is represented mathematically.

Computation of normal for the plane:

A plane can be described by the equation ax + by + cz + d = 0.

In terms of the normal to the plane, n, and a point po on the plane, the same equation is
n . (p - po) = 0,

where p is any point (x, y, z) on the plane.

Comparing the two forms, the vector n, in homogeneous coordinates, is given by n = [a, b, c, 0]^T.

Assume three non-collinear points p0, p1, p2 in this plane (they are sufficient to determine
it uniquely).

The vectors p2 - p0 and p1 - p0 are parallel to the plane, and their cross product can be used to
find the normal:

n = (p2 - p0) x (p1 - p0)


Order of the vectors in the cross product is important because reversing the order changes the
surface from outward pointing to inward pointing, and that can affect the lighting
calculations.

Computation of normal for the unit sphere centered at the origin:

The usual equation for this sphere is the implicit equation

f(x, y, z) = x² + y² + z² - 1 = 0

or, in vector form, f(p) = p · p - 1 = 0.

The normal is given by the gradient vector, which is represented by the column matrix

n = [∂f/∂x, ∂f/∂y, ∂f/∂z]^T = 2p

The sphere could also be represented in parametric form. In this form, the x, y, and z values
of a point on the sphere are represented independently in terms of two parameters u and v:
x = x(u, v),

y = y(u, v),

z = z(u, v)

For a particular surface, there may be multiple parametric representations. One parametric
representation for the sphere is
x(u, v) = cos u sin v,  y(u, v) = cos u cos v,  z(u, v) = sin u

By varying u and v in the ranges -π/2 < u < π/2, -π < v < π, all the points on the
sphere can be obtained.

Then the normal can be obtained from the tangent plane at a point p(u, v) = [x(u, v) y(u, v)
z(u, v)]^T on the surface, as shown.

The tangent plane gives the local orientation of the surface at a point.
The tangent plane can be derived by taking the linear terms of the Taylor
series expansion of the surface at p.

The result is that, at p, lines in the directions of the vectors represented by

∂p/∂u and ∂p/∂v

lie in the tangent plane. Taking their cross product gives the normal; for the sphere,

n = ∂p/∂u x ∂p/∂v = cos u [cos u sin v, cos u cos v, sin u]^T

As only the direction of n is needed, the above expression can be divided by cos u to obtain

the unit normal to the sphere n = p.

In OpenGL, a normal can be associated with a vertex through functions such as

glNormal3f(nx, ny, nz);
glNormal3fv(pointer_to_normal);

Normals are modal (state) variables.

If a normal is defined before a sequence of vertices through glVertex calls, this normal is
associated with all the vertices, and is used for the lighting calculations at all the vertices.


7.4.1 Angle of Reflection

The calculated normal at a point and the direction of the light source can be used to compute
the direction of a perfect reflection.

An ideal mirror is characterized by the following statement: The angle of incidence is equal
to the angle of reflection. These angles are as pictured below.

The angle of incidence is the angle between the normal and the light
source (assumed to be a point source).

The angle of reflection is the angle between the normal and the
direction in which the light is reflected.

In two dimensions, there is only a single angle satisfying the angle condition.

In three dimensions, there are infinitely many angles satisfying the condition.

At a point p on the surface, the incoming light ray, the reflected ray, and the normal at the point must all lie in
the same plane. These two conditions are sufficient to determine r from n and l.

The direction of r is more important than its magnitude.

Assume both l and n have been normalized to unit length, so that

|l| = |n| = 1

It is required that

cos θi = cos θr   (since θi = θr)

Using the dot product, the angle condition is

cos θi = l · n = cos θr = n · r

The coplanar condition implies that r can be written as a linear combination of l and n:

r = αl + βn

Taking the dot product with n,

n · r = α(l · n) + β = l · n

A second condition between α and β comes from the requirement that r also be of unit length;
thus,

1 = r · r = α² + 2αβ(l · n) + β²

Solving these two equations gives

r = 2(l · n)n - l
7.5 Polygonal Shading

There are three ways to shade polygons. They differ in how the three vectors l, n, and v are allowed to vary from point to point on a surface.

1. Flat/Constant shading

For a flat polygon n is constant from point to point on a surface.

For a distant viewer, v is constant over the polygon.

Finally, if the light source is distant, l is constant.

Distant can be interpreted in two ways:

The source is at infinity: then the shading equations and their implementation need to be
suitably adjusted.


E.g., the location of the source must be changed to the direction of the source.
The polygon is small relative to its distance from the source or viewer.

If the three vectors are constant, then the shading calculation needs to be carried out only
once for each polygon, and each point on the polygon is assigned the same shade. This

technique is known as flat shading.


In OpenGL, flat shading is specified through
glShadeModel(GL_FLAT);
If flat shading is in effect, OpenGL uses the normal associated with the first vertex of a single

polygon for the shading calculation.


For primitives such as a triangle strip, OpenGL uses the normal of the third vertex for the

first triangle, the normal of the fourth for the second, and so on.
Similar rules hold for other primitives, such as quadrilateral strips.
Consider a mesh constructed from polygons.
Flat shading will show differences in shading between the polygons in that mesh. E.g.:

If the light sources and viewer are near the polygon, the vectors l and v will be different
for each polygon. However, if the polygonal mesh has been designed to model a smooth
surface, flat shading will almost always be disappointing, because even small differences
in shading between adjacent polygons are visible.


The human visual system has a remarkable sensitivity to small differences in light
intensity, due to a property known as lateral inhibition. Hence, shading with intensities
that increase in sequence will appear to overshoot in brightness on one side of each edge and
undershoot on the other.


Shading with large differences at the edges of polygons may produce stripes along the
edges of the polygon. These stripes are called Mach bands. This can be avoided by

employing smoother shading techniques that do not produce such large differences.
2. Interpolative / Gouraud shading
OpenGL interpolates colors assigned to vertices across a polygon.
OpenGL also interpolates colors for other primitives, such as lines, if smooth shading is selected via
glShadeModel(GL_SMOOTH);

If both smooth shading and lighting are enabled, a normal is assigned to each vertex of the
polygon to be shaded.

The lighting calculation is made at each vertex, determining a vertex color, using the material
properties and the vectors v and l computed for each vertex.


If the light source is distant, and either the viewer is distant or there are no specular

reflections, then interpolative shading shades a polygon in a constant color.


Consider the mesh with polygons. In this case, multiple polygons meet at interior vertices of
the mesh, each of which has its own normal. Hence the normal at the vertex is discontinuous.

The idea of a normal existing at a vertex may therefore complicate the mathematics.
Gouraud realized that the normal at a vertex could be defined in a way that achieves
smoother shading through interpolation.


Consider an interior vertex where four polygons meet (figure). Each has its own normal.

In Gouraud shading, the normal at a vertex is defined to be the


normalized average of the normals of the polygons that share the
vertex.

E.g., if the four polygons have normals n1, n2, n3, and n4, the vertex normal is given by

n = (n1 + n2 + n3 + n4) / |n1 + n2 + n3 + n4|

In OpenGL, Gouraud shading (intensity/color interpolation shading) is achieved just by
setting the vertex normals correctly.


To find the normals that are averaged, a data structure can be used to represent the mesh.
Traversing this data structure can generate the vertices with the averaged normals.

3. Phong shading (normal-vector interpolation shading)

Even the smoothness introduced by Gouraud shading may not prevent the appearance of

Mach bands.
Phong proposed that, instead of interpolating vertex intensities, as in Gouraud shading,

normals across each polygon can be interpolated.


Consider a polygon that shares edges and vertices with other polygons in the mesh, as shown
below.

Vertex normals can be computed by interpolating over the normals of


the polygons that share the vertex.

Then bilinear interpolation can be used to interpolate the normals

over the polygon.

Consider the figure below.

The interpolated normals at vertices A and B can be used to interpolate normals along the
edge between them:

n(α) = (1 - α)nA + αnB

Similar interpolation can be done on all the edges.
The normal at any interior point can be obtained from points on the edges by

n(α, β) = (1 - β)nC + βnD

Once the normal at each point is obtained, an independent shading calculation can be done.
Phong shading produces renderings smoother than those of Gouraud shading, but at a

significantly greater computational cost.


There are various hardware implementations for Gouraud shading, and they require only a

slight increase in the time required for rendering; neither is true, however, for Phong shading.
Consequently, Phong shading is almost always done offline.
7.6 Approximation of a Sphere by Recursive Subdivision
To illustrate the interactions between shading parameters and polygonal
approximations to curved surfaces, here, a sphere is used as curved surface to
illustrate shading calculations and the sphere is approximated using polygons.
Recursive subdivision is introduced which is a powerful technique for generating
approximations to curves and surfaces to any desired level of accuracy.
Step 1

A tetrahedron is used whose facets could be divided initially into triangles.

The regular tetrahedron is composed of four equilateral triangles, determined by four vertices,
say (0, 0, 1), (0, 2√2/3, -1/3), (-√6/3, -√2/3, -1/3), (√6/3, -√2/3, -1/3).

All four vertices lie on the unit sphere, centered at the origin.

To get a first approximation, a wireframe of the tetrahedron is drawn:

Four vertices are defined globally using the point type:


point3 v[4]={{0.0, 0.0, 1.0}, {0.0, 0.942809, -0.333333}, {-0.816497, -0.471405, -0.333333},
{0.816497, -0.471405, -0.333333}};

Triangles can be drawn via the function


void triangle( point3 a, point3 b, point3 c)


{
glBegin(GL_LINE_LOOP);
glVertex3fv(a);
glVertex3fv(b);
glVertex3fv(c);
glEnd( );
}

The tetrahedron can be drawn by


void tetrahedron()
{
triangle(v[0], v[1], v[2]);
triangle(v[3], v[2], v[1]);
triangle(v[0], v[3], v[1]);
triangle(v[0], v[2], v[3]);
}

The order of vertices obeys the right-hand rule, so the code can be converted to draw shaded
polygons with little difficulty.

Step 2

Closer approximation to the sphere can be obtained by subdividing each facet of the
tetrahedron into smaller triangles.

Subdividing into triangles will ensure that all the new facets will be flat.

The bisectors of the sides of the triangle are connected forming four equilateral triangles.

After each facet is subdivided, the four new triangles in each facet will still be in the same
plane as the original triangle.

The new vertices created by bisection can be moved to the unit sphere by normalizing each
bisected vertex, using a simple normalization function such as
void normal(point3 p)
{
double d = 0.0;
int i;
for (i = 0; i < 3; i++) d += p[i]*p[i];
d = sqrt(d);
if (d > 0.0) for (i = 0; i < 3; i++) p[i] /= d;
}

Step 3:

A single triangle, defined by the vertices numbered a, b, and c, can be subdivided by the
following code:
point3 v1, v2, v3;
int j;
for (j=0; j < 3; j++) v1[j]=v[a][j]+v[b][j];
normal(v1);


for (j=0; j<3; j++) v2[j]=v[a][j]+v[c][j];


normal (v2);
for (j=0; j<3; j++) v3[j]=v[c][j]+v[b][j];
normal (v3);
triangle(v[a], v2, v1);
triangle(v[c], v3, v2);
triangle(v[b], v1, v3);
triangle(v1, v2, v3);
Step 4:

This code can be used in the tetrahedron routine to generate 16 triangles rather than 4, but
there must be a provision to repeat the subdivision process n times to generate successively
closer approximations to the sphere.
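Each level of subdivision replaces every triangle with four, so n levels applied to the tetrahedron's 4 facets produce 4·4^n triangles. A small sketch (the function name triangle_count is illustrative) verifies the count:

```c
/* Number of triangles after n levels of subdivision: each level
   replaces every triangle with four, starting from 4 facets. */
long triangle_count(int n)
{
    long count = 4;
    while (n-- > 0)
        count *= 4;
    return count;
}
```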

By calling the subdivision routine recursively, the number of subdivisions can be controlled.

Hence, the tetrahedron routine is made to depend on the depth of recursion by adding an argument
n:
void tetrahedron(int n)
{
divide_triangle(v[0], v[1], v[2], n);
divide_triangle(v[3], v[2], v[1], n);
divide_triangle(v[0], v[3], v[1], n);
divide_triangle(v[0], v[2], v[3], n);
}

Step 5:

The divide_triangle function calls itself to subdivide further if n is greater than zero, but
generates triangles if n has been reduced to zero. Here is the code:
void divide_triangle(point3 a, point3 b, point3 c, int n)
{
point3 v1, v2, v3;
int j;
if (n > 0)
{
for(j=0; j<3; j++) v1[j]=a[j]+b[j];
normal(v1);
for(j=0; j<3; j++) v2[j]=a[j]+c[j];
normal(v2);
for(j=0; j<3; j++) v3[j]=c[j]+b[j];
normal(v3);
divide_triangle(a, v2, v1, n-1);
divide_triangle(c, v3, v2, n-1);
divide_triangle(b, v1, v3, n-1);
divide_triangle(v1, v2, v3, n-1);
}
else triangle(a, b, c);
}

Figure above shows an approximation to the sphere drawn with this code.

Lighting and shading can be added to the sphere approximation after the following section.

7.7 Light Sources in OpenGL

OpenGL supports the four types of light sources and allows at least eight light sources in a
program.

Each must be individually specified and enabled. Although there are many parameters that
must be specified, they are exactly the parameters required by the Phong model.

The OpenGL functions


glLightfv(source, parameter, pointer_to_array);
glLightf(source, parameter, value);

set the required vector and scalar parameters, respectively.

There are four vector parameters that can be set: the position (or direction) of the light
source, and the amount of ambient, diffuse, and specular light associated with the source.

E.g:

To specify the first source, GL_LIGHT0, and locate it at the point (1.0, 2.0, 3.0), its
position can be stored as a point in homogeneous coordinates:
GLfloat light0_pos[ ]={1.0, 2.0, 3.0, 1.0};

With the fourth component set to zero, the point source becomes a distant source with
direction vector
GLfloat light0_dir[ ]={1.0, 2.0, 3.0, 0.0};

To add a white specular component and red ambient and diffuse components to the single
light source:
GLfloat diffuse0[]={1.0, 0.0, 0.0, 1.0};
GLfloat ambient0[]={1.0, 0.0, 0.0, 1.0};
GLfloat specular0[]={1.0, 1.0, 1.0, 1.0};
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_POSITION, light0_pos);
glLightfv(GL_LIGHT0, GL_AMBIENT, ambient0);
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse0);
glLightfv(GL_LIGHT0, GL_SPECULAR, specular0);

Both lighting and the particular source must be enabled.

A global ambient term that is independent of any of the sources can also be added. E.g. to add a
small amount of white light:
GLfloat global_ambient[]={0.1, 0.1, 0.1, 1.0};
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, global_ambient);


The distance terms are based on the distance-attenuation model, f(d) = 1 / (a + bd + cd²),
which contains constant, linear, and quadratic terms. These terms are set individually via
glLightf; E.g.
glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, a);
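The attenuation factor itself is simple to compute. The helper below is an illustrative sketch (the name attenuation is not an OpenGL function); note that OpenGL's defaults are a = 1, b = c = 0, i.e. no attenuation.

```c
/* Distance-attenuation factor f(d) = 1 / (a + b*d + c*d*d).
   With a = 1, b = c = 0 (the OpenGL defaults) the factor is always 1. */
double attenuation(double a, double b, double c, double d)
{
    return 1.0 / (a + b*d + c*d*d);
}
```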

A positional source can be converted to a spotlight by choosing the spotlight direction
(GL_SPOT_DIRECTION), the exponent (GL_SPOT_EXPONENT), and the cutoff angle
(GL_SPOT_CUTOFF).
All three are specified by glLightf and glLightfv.
GL_LIGHT_MODEL_LOCAL_VIEWER and GL_LIGHT_MODEL_TWO_SIDE:

Lighting calculations can be time-consuming.


If the viewer is assumed to be an infinite distance from the scene, then the calculation of
reflections is easier, because the direction to the viewer from any point in the scene is
unchanged.
The default in OpenGL is to make this approximation, because its effect on many scenes is
minimal.
To perform the full lighting calculation using the true position of the viewer, the model can be
changed using:
glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER, GL_TRUE);

A surface has both a front face and a back face.

For polygons, these faces are determined by the order in which the vertices are specified,
using the right-hand rule.

For most objects, since only the front faces are seen, the shading of the back-facing surfaces
is neglected.

So that a properly placed viewer can see a back face, OpenGL must be told to shade both the
front and back faces correctly.

The following function can be used for the purpose:


glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);

Light sources are objects, just like polygons and points.

Light sources are affected by the OpenGL model-view transformation. They can be defined
at the desired position, or defined in a convenient position and moved to the desired
position by the model-view transformation.

The basic rule governing object placement is that vertices are converted to eye
coordinates by the model-view transformation in effect at the time the vertices are
defined.
7.8 Specification of Materials in OpenGL

Material properties in OpenGL match up directly with the supported light sources and with
the Phong reflection model.

Different material properties can also be specified for the front and back faces of a surface.

All the reflection parameters are specified through the functions


glMaterialfv(face, type, pointer_to_array);
glMaterialf(face, type, value);


E.g. To define ambient, diffuse, and specular reflectivity coefficients (ka, kd, ks) for each
primary color:
GLfloat ambient[ ] = {0.2, 0.2, 0.2, 1.0};
GLfloat diffuse[ ] = {1.0, 0.8, 0.0, 1.0};
GLfloat specular[ ]={1.0, 1.0, 1.0, 1.0};
Here, a small amount of white ambient reflectivity, yellow diffuse properties, and white specular
reflections are defined.

To set the material properties for the front and back faces by the calls
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, ambient);
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, diffuse);
glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, specular);

Note that if both the ambient and diffuse coefficients are the same (as is often the case), both can
be specified by using GL_AMBIENT_AND_DIFFUSE for the type parameter. To specify
different front- and back-face properties, GL_FRONT and GL_BACK can be used.

The shininess of a surface - the exponent in the specular-reflection term - is specified by

glMaterialf. E.g.
glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, 100.0);
Material properties are modal: Their values remain the same until changed, and, when

changed, affect only surfaces defined after the change.


Surfaces having an emissive component that characterizes self-luminous sources can also
be defined in OpenGL. This is useful if a light source is to appear in the image. The
emissive term is unaffected by any of the light sources, and it does not affect any other
surfaces. It adds a fixed color to the surface and is specified in a manner similar to other
material properties. E.g.
GLfloat emission[]={0.0, 0.3, 0.3, 1.0};
glMaterialfv(GL_FRONT_AND_BACK, GL_EMISSION, emission);
defines a small amount of blue-green (cyan) emission.

7.9 Shading of the Sphere Model

To shade the approximate spheres with OpenGL's shading model, normals must be assigned.

E.g. To flat shade each triangle, using the three vertices to determine a normal, and then to
assign this normal to the first vertex.

The cross product is used and then the result is normalized.

A cross-product function is:


void cross(point3 a, point3 b, point3 c, point3 d)
{
d[0]=(b[1]-a[1])*(c[2]-a[2])-(b[2]-a[2])*(c[1]-a[1]);
d[1]=(b[2]-a[2])*(c[0]-a[0])-(b[0]-a[0])*(c[2]-a[2]);
d[2]=(b[0]-a[0])*(c[1]-a[1])-(b[1]-a[1])*(c[0]-a[0]);
normal(d);
}

Assuming that light sources have been defined and enabled, triangle routine can be changed to
produce shaded spheres:

void triangle(point3 a, point3 b, point3 c)


{
point3 n;
cross(a, b, c, n);
glBegin(GL_POLYGON);
glNormal3fv(n);
glVertex3fv(a);
glVertex3fv(b);
glVertex3fv(c);
glEnd( );
}

The result of flat shading the spheres is shown in above figure.

As the number of subdivisions is increased, the interior of the spheres appears smooth, but
the edges of the polygons around the outside of the sphere image are still visible. This type of
outline is called a silhouette edge.


Interpolative shading can be easily applied to the sphere models, because the normal at each
point p on the surface is in the direction from the origin to p.

Then the true normal can be assigned to each vertex, and OpenGL will interpolate the shades
at these vertices across each triangle.

Thus, triangle is changed to
void triangle(point3 a, point3 b, point3 c)
{
point3 n;
int i;
glBegin(GL_POLYGON);
for (i=0; i<3; i++) n[i]=a[i];
normal(n);
glNormal3fv(n);
glVertex3fv(a);
for (i=0; i<3; i++) n[i]=b[i];
normal(n);
glNormal3fv(n);
glVertex3fv(b);
for (i=0; i<3; i++) n[i]=c[i];
normal(n);
glNormal3fv(n);
glVertex3fv(c);
glEnd( );
}

The results of this definition of the normals are shown in figure.

7.10 Global Illumination


Limitations imposed by the local lighting model:

If an array of spheres is illuminated by a distant source, the spheres close to the source
prevent some of the light from the source from reaching the other spheres.
In a real scene these spheres are specular and some light is scattered among spheres. But
if local model is used, each sphere is shaded independently and hence all appear the same
to the viewer.
Local model also cannot produce shadows.

These effects can be achieved by more sophisticated (and slower) global rendering
techniques, such as:

Ray tracing: It works well for highly specular surfaces. E.g. the scenes composed of

highly reflective and translucent objects, such as glass balls.


Radiosity: It is best suited for scenes with perfectly diffuse surfaces, such as interiors of
buildings.

These two are complementary methods


Ray Tracing

It is, in many ways, an extension of the local lighting model.

Many light rays leave a source, but only those that enter the lens of the
synthetic camera and pass through the center of projection contribute to the image.

Figure shows a single point source with several of the possible interactions with perfectly
specular surfaces.

Rays can enter the lens of the camera directly from the source, from
interactions with a surface visible to the camera, after multiple
reflections from surfaces, or after transmission through one or more
surfaces.

Most of the rays that leave a source do not enter the lens and do not
contribute to the image. Hence, attempting to follow all rays from a
light source is a time-wasting endeavor.


Ray cast model:

Only rays that contribute to the image are cast; these can be obtained by tracing rays in the
reverse direction, i.e. starting at the center of projection.

The ray tracing is as shown in figure below.

Here an image plane is ruled into pixel-sized areas.

A color must be assigned to every pixel. Hence, at least one ray is cast
through each pixel.

Each cast ray either intersects a surface or a light source, or goes off to
infinity without striking anything.

Pixels corresponding to this latter case can be assigned a background color.

Rays that strike opaque surfaces require a shading calculation at their point of intersection.
If this is done using the Phong model, the image produced is the same as that produced by a
local renderer.

Comparison of pipeline renderer and ray tracer:

Pipeline renderer:
The sequence of steps is: object modeling, projection, and visible-surface determination.
Works on a vertex-by-vertex basis.
The reflection model is applied immediately.

Ray tracer:
The sequence in which the calculations are carried out is different.
Works on a pixel-by-pixel basis.
Prior to the application of the reflection model, it is first checked whether the point of
intersection between the cast ray and the surface is illuminated.

Other features in ray tracing:

Shadow or feeler rays from the point on the surface to each source are computed. If a
shadow ray intersects a surface before it meets the source, the light is blocked from reaching
the point under consideration and this point is in shadow, at least from this source. No

lighting calculation needs to be done for sources that are blocked from a point on the surface.
If all surfaces are opaque and the light scattered from surface to surface is not considered,
an image with shadows added is obtained; this is about all that can be accomplished without
recursive ray tracing. The hidden-surface calculation needed for each point of intersection
between a cast ray and a surface is, however, expensive.


For highly reflective (mirror-like) surfaces, the reflected ray can be followed as it bounces
from surface to surface, until it either goes off to infinity or intersects a source. Such
calculations are usually done recursively, and take into account any absorption of light at
surfaces.
Ray tracing is particularly good at handling surfaces that are both reflecting and transmitting.
If a ray from a source strikes such a surface, the light is partially absorbed, and some of this
absorbed light contributes to the diffuse reflection term. The rest of the incoming light is
divided between a transmitted ray and a reflected ray.


From the perspective of the cast ray, if a light source is visible at the intersection point, then
three tasks need to be performed:
1. Contribution from the light source at the point is computed using the standard reflection

model.
2. A ray is cast in the direction of a perfect reflection.
3. A ray is cast in the direction of the transmitted ray.
These two rays are treated just like the original cast ray; that is, they are intersected with

other surfaces, end at a source, or go off to infinity.


At each surface that these rays intersect, additional reflected and transmitted rays may be
generated. If all surfaces are perfectly diffuse, the rendering equation simplifies and can be
solved by a numerical method called radiosity.

Radiosity:

It breaks up the scene into small flat polygons, or patches, each of which can be assumed to
be perfectly diffuse and is rendered in a constant shade.
There are two steps to find the shades needed:


1. Patches are considered pairwise to determine form factors that describe how the light
energy leaving one patch affects the other.
2. Once the form factors are determined, the rendering equation, which starts as an integral
equation, can be reduced to a set of linear equations for the radiosities (essentially the
brightness) of the patches.
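Step 2 can be sketched as solving the linear system B = E + ρFB (radiosity B, emission E, reflectivity ρ, form factors F) by simple fixed-point iteration. The code below is an illustrative sketch with made-up patch data, not a full radiosity solver; the name solve_radiosity and the two-patch setup are assumptions for the example.

```c
/* Minimal sketch: solve B = E + rho * F * B for NPATCH patches by
   fixed-point (Jacobi-style) iteration, starting from B = E. */
#define NPATCH 2

void solve_radiosity(double E[NPATCH], double rho[NPATCH],
                     double F[NPATCH][NPATCH], double B[NPATCH], int iters)
{
    double Bnew[NPATCH];
    int i, j, k;
    for (i = 0; i < NPATCH; i++) B[i] = E[i];   /* emission only at first */
    for (k = 0; k < iters; k++) {
        for (i = 0; i < NPATCH; i++) {
            double gathered = 0.0;              /* light arriving at patch i */
            for (j = 0; j < NPATCH; j++) gathered += F[i][j] * B[j];
            Bnew[i] = E[i] + rho[i] * gathered;
        }
        for (i = 0; i < NPATCH; i++) B[i] = Bnew[i];
    }
}
```

For two facing patches with F = [[0,1],[1,0]], ρ = 0.5 for both, and only patch 0 emitting, the iteration converges to the exact solution B = (4/3, 2/3) of the two linear equations.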

Radiosity renders interiors that are composed of diffuse reflectors.

Even distributed light sources, if modeled as emissive patches, appear in the rendering.
