
M.E.

CAPD NOTES

Q1. Architecture of interactive computer graphics

Q2. Classification interactive computer graphics


Computer graphics is the art of drawing pictures, lines, charts, etc. using computers with the help of
programming. A computer graphics image is made up of a number of pixels; a pixel is the smallest
graphical unit represented on the computer screen. Basically, there are two types of computer
graphics:
Interactive Computer Graphics: Interactive computer graphics involves two-way communication
between the computer and the user. Here the observer is given some control over the image by providing him
with an input device, for example the video game controller of a ping-pong game. This helps him to
signal his requests to the computer.

The computer on receiving signals from the input device can modify the displayed picture appropriately.
To the user it appears that the picture is changing instantaneously in response to his commands. He can
give a series of commands, each one generating a graphical response from the computer. In this way he
maintains a conversation, or dialogue, with the computer.
Interactive computer graphics affects our lives in a number of indirect ways. For example, it helps to
train the pilots of our airplanes. We can create a flight simulator which may help the pilots to get trained
not in a real aircraft but on the grounds at the control of the flight simulator. The flight simulator is a
mock-up of an aircraft flight deck, containing all the usual controls and surrounded by screens on which
we have the projected computer generated views of the terrain visible on take-off and landing.

Non-Interactive Computer Graphics: Non-interactive computer graphics, otherwise known as passive
computer graphics, is computer graphics in which the user does not have any control over the
image. The image is merely the product of a static stored program and is generated according to the
instructions given in the program, executed linearly. The image is totally under the control of the
program instructions, not the user. Example: screen savers.

Q3. Application display and interactive device


Write about computer devices and monitors

COMPILED BY AVESH 1

Q4. Effects of scan conversion


Effects (side effects) of scan conversion:

Staircase: A common example of aliasing effects is the staircase or jagged appearance we see
when scan-converting a primitive such as a line or a circle.
Unequal Brightness: Another side effect that is less noticeable is the unequal brightness of lines
of different orientation. A slanted line appears dimmer than a horizontal or vertical line, although
all are presented at the same intensity level. The reason is that the pixels on horizontal/vertical
lines are placed one unit apart, whereas those on the diagonal line are approximately 1.414 units
apart. This difference in density produces the perceived difference in brightness.
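The 1.414 figure is simply the distance between neighbouring pixel centres on a 45-degree line; a quick check (Python is used here purely for illustration):

```python
import math

# Adjacent pixels on a horizontal line sit at consecutive integer
# columns, so their centres are exactly 1 unit apart.
horizontal_spacing = math.dist((0, 0), (1, 0))

# Adjacent pixels on a 45-degree diagonal sit at (0,0), (1,1), (2,2), ...
# so their centres are sqrt(2), approximately 1.414, units apart.
diagonal_spacing = math.dist((0, 0), (1, 1))

# Fewer pixels per unit length on the diagonal means less light per
# unit length, hence the dimmer appearance at equal pixel intensity.
print(horizontal_spacing, round(diagonal_spacing, 3))  # 1.0 1.414
```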

Picket Fence Problem: The picket fence problem occurs when an object is not aligned with, or
does not fit into, the pixel grid properly.

Q4. Real-time scan conversion


In this technique the picture is randomly represented in terms of visual attributes like color, shade
and intensity, and geometrical properties like X, Y co-ordinates, slopes and text, which are ordered in
Y. The scan conversion program scans through this information and calculates the intensity of each
pixel on the screen. Consider Fig. 3.20. The figure consists of four lines (AB, BC, AD and CD). Three
scan lines 1, 2 and 3 are also shown in the figure. The active edge list for this figure is shown in
Table.3.4.

Thus for scan line 3, all the edges above the scan line are also included in the active list.

Q5. RUN LENGTH ENCODING


In the run-length encoding scheme the number of pixels of the same intensity and color in a given scan line is
specified. In the simplest case the encoded data will show the intensity and run length. For example,
suppose we have a pixel arrangement in a scan line as shown in Fig. 3.21.

COMPILED BY AVESH 2
M.E. CAPD NOTES

For the scan line of Fig. 3.21, the encoding is shown in Fig. 3.22. For the first four pixels in the scan line
the intensity is zero. Intensity is one for the next pixel, zero for the next two pixels, one for the next
pixel, and so on.
Run length encoding has the advantage of large data compression. Its disadvantage is that since the run
lengths are stored sequentially, addition or deletion of lines is difficult.
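The scheme can be sketched in a few lines; the scan line below mimics the pattern described above (a Python sketch for illustration, with assumed intensity values rather than the exact figure data):

```python
def rle_encode(scanline):
    """Encode a scan line as (intensity, run_length) pairs."""
    runs = []
    for value in scanline:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([value, 1])   # start a new run
    return [tuple(run) for run in runs]

def rle_decode(runs):
    """Expand (intensity, run_length) pairs back into a scan line."""
    scanline = []
    for value, length in runs:
        scanline.extend([value] * length)
    return scanline

# Four pixels of intensity 0, one of 1, two of 0, one of 1:
line = [0, 0, 0, 0, 1, 0, 0, 1]
encoded = rle_encode(line)
print(encoded)                      # [(0, 4), (1, 1), (0, 2), (1, 1)]
assert rle_decode(encoded) == line  # encoding is lossless
```

Note how inserting or deleting a line would shift every subsequent run, which is the sequential-storage disadvantage mentioned above.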

Q6. CELL ENCODING


Run length encoding stores a picture sequentially line by line. In contrast, cell encoding represents
picture information by dividing the display area into cells of suitable sizes. For
example, a display area of 512 × 512 pixels can be divided into 4096 cells of 64 pixels each. In the case of drawings,
combinations of these adjacent cells can be used to construct complete lines.
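The cell arithmetic checks out directly:

```python
# A 512 x 512 display has 262,144 pixels.  Dividing it into cells of
# 64 pixels (8 x 8) gives 4096 cells, as stated above.
display_pixels = 512 * 512
pixels_per_cell = 8 * 8
num_cells = display_pixels // pixels_per_cell
print(num_cells)  # 4096
```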

CHAPTER 2
Q1. Graphics function
Putpixel
Purpose:-The putpixel function draws a pixel on the screen. A pixel is a small dot on the screen.
Syntax:-putpixel(x co-ordinate, y co-ordinate, COLOR);
Example: putpixel(100,100,BLUE);

SetbkColor
Purpose:-Setbkcolor function is used to set background color of the screen.
Syntax:-setbkcolor(COLOR);
Example:-setbkcolor(RED);

Setlinestyle
Purpose:-setlinestyle function is used to set the current line style, width and pattern
Syntax:-setlinestyle(linestyle, pattern, thickness);
Example:-setlinestyle(SOLID_LINE,1,2);

Setcolor
Purpose:-setcolor sets the color of the objects drawn after this setcolor call.
Syntax:-setcolor(COLOR);
Example:-setcolor(RED);

Rectangle:-
Purpose:- The rectangle function is used to draw a rectangle on the screen. x1,y1 are the top-left
co-ordinates of the rectangle and x2,y2 are the bottom-right co-ordinates of the
rectangle.
Syntax: rectangle(x1,y1,x2,y2);

COMPILED BY AVESH 3
M.E. CAPD NOTES

Example: rectangle(100,100,200,200);

Textheight
Purpose:-textheight returns the height of a string in pixels.
Syntax:-textheight(STRING);
Example:-i=textheight("HELLO");

Textwidth
Purpose:-textwidth returns the width of a string in pixels
Syntax:-textwidth(STRING);
Example:-i=textwidth("HELLO");

Getx
Purpose:-getx returns the current position of the x co-ordinate
Syntax:-getx();
Example:-x=getx();

Gety
Purpose:-gety returns the current position of the y co-ordinate
Syntax:-gety();
Example:-y=gety();

Getmaxx
Purpose:-getmaxx returns the maximum x co-ordinate on the screen
Syntax:-getmaxx();
Example:-maxx=getmaxx();

Getmaxy
Purpose:-getmaxy returns the maximum y co-ordinate on the screen
Syntax:-getmaxy();
Example:-maxy=getmaxy();

Line
Purpose:-Line function is used to draw the line on the screen.
Syntax: line(x1,y1,x2,y2);
Example:-line(100,100,200,100);

Closegraph
Purpose:-closegraph function shuts down the graphics system
Syntax:-closegraph();
Example:-closegraph();

Circle
Purpose: Circle function is used to draw the circle on the screen
Syntax: circle(x,y,radius);
Example:circle(100,100,50);

COMPILED BY AVESH 4
M.E. CAPD NOTES

Q2. Open GL interface


Silicon Graphics (SGI) developed the OpenGL application programming interface (API) for the
development of 2D and 3D graphics applications. It is a low-level, vendor-neutral software interface. It is
often referred to as the assembly language of computer graphics.

It provides enormous flexibility and functionality. It is used on a variety of platforms. OpenGL is a low-
level graphics library specification. OpenGL makes available to the programmer a small set of geometric
primitives - points, lines, polygons, images, and bitmaps. OpenGL provides a set of commands that allow
the specification of geometric objects in two or three dimensions, using the provided primitives,
together with commands that control how these objects are rendered into the frame buffer.

The OpenGL API was designed for use with the C and C++ programming languages but there are also
bindings for a number of other programming languages such as Java, Ada, and FORTRAN. OpenGL
provides primitives for modelling in 3D. Its capabilities include viewing and modelling transformation,
viewport transformation, projections (orthographic and perspective), animation, lighting etc.

Q3. Co-ordinate system


Transformations can be carried out either in 2-dimensions or in 3-dimensions. The theory of two-
dimensional transformations is discussed first in this chapter.
This is then extended to three dimensions. When a design package is initiated, the display will have a set
of co-ordinate values. These are called default co-ordinates. A user co-ordinate system is one in which
the designer can specify his own co-ordinates for a specific design application. These screen-independent co-ordinates can
have large or small numeric range, or even negative values, so that the model can be represented in a
natural way. It may, however, happen that the picture is too crowded with several features to be viewed
clearly on the display screen. Therefore, the designer may want to view only a portion of the image,
enclosed in a rectangular region called a window. Different parts of the drawing can thus be selected for
viewing by placing the windows. Portions inside the window can be enlarged, reduced or edited
depending upon the requirements. Figure 3.7 shows the use of windowing to enlarge the picture.
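The windowing described above amounts to a linear mapping from user (window) co-ordinates to screen (viewport) co-ordinates; a minimal sketch, with rectangle conventions assumed rather than taken from any particular package:

```python
def window_to_viewport(x, y, window, viewport):
    """Map a point from window co-ordinates to viewport co-ordinates.

    Both rectangles are given as (xmin, ymin, xmax, ymax).
    """
    wx1, wy1, wx2, wy2 = window
    vx1, vy1, vx2, vy2 = viewport
    sx = (vx2 - vx1) / (wx2 - wx1)   # horizontal scale factor
    sy = (vy2 - vy1) / (wy2 - wy1)   # vertical scale factor
    return (vx1 + (x - wx1) * sx, vy1 + (y - wy1) * sy)

# Enlarging: a 10 x 10 window shown in a 100 x 100 viewport scales by 10.
print(window_to_viewport(5, 5, (0, 0, 10, 10), (0, 0, 100, 100)))  # (50.0, 50.0)
```

Choosing a smaller window with the same viewport enlarges the selected portion, which is the zooming behaviour of Fig. 3.7.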

Q4. Primitive and Attributes


GRAPHIC PRIMITIVES
A drawing is created by an assembly of points, lines, arcs and circles. For example, the drawing shown in Fig 3.1
consists of several entities. In computer graphics also, drawings are created in a similar manner. Each of
these is called an entity. The drawing entities that a user may find in a typical CAD package include:

COMPILED BY AVESH 5
M.E. CAPD NOTES

Point, line, circle, arc, spline, ellipse, polygon, rectangle.

Explain these shapes and also the standard solid primitives

Attributes
Attributes are of two kinds: design attributes (such as geometric shape and size) and manufacturing
attributes (the sequence of processing steps required to make the part). In companies which employ
several design engineers and manufacture a diverse
range of products, such classification and coding has a number of other uses. One of the major benefits
is avoiding the duplication of similar components. This can result in considerable savings in terms of
design cost, processing cost and tooling cost. One prime necessity to realize this is to have a good design
retrieval system. The parts classification and coding is required in a design retrieval system, and in
computer aided process planning the process routing is developed by recognizing the specific attributes
of the part and relating these attributes to the corresponding manufacturing operations.

Part Design Attributes


Basic (External/Internal) shape
Axisymmetric/Prismatic/sheet metal
Length/diameter ratio
Material
Major dimensions
Minor dimensions
Tolerances
Surface finish

Part Manufacturing Attributes


Major process of manufacture
Surface treatments/coatings
Machine tool/processing equipment
Cutting tools
Operation sequence
Production time
Batch quantity
Production rate
Fixtures needed

Chapter 3
Q1. CLIPPING


Clipping is the process of determining the visible portions of a drawing lying within a window. In clipping
each graphic element of the display is examined to determine whether or not it is completely inside the
window, completely outside the window or crosses a window boundary. Portions outside the boundary
are not drawn. If an element of a drawing crosses the boundary, the point of intersection is determined
and only the portions which lie inside are drawn. Readers are advised to refer to books on computer graphics for typical clipping
algorithms like Cohen-Sutherland clipping algorithm. Fig. 3.17 shows an example of clipping.

Q2. Cohen-Sutherland algorithm

1. Given a line segment with endpoints P1 = (x1, y1) and P2 = (x2, y2).


2. Compute the 4-bit codes for each endpoint.
If both codes are 0000 (the bitwise OR of the codes yields 0000), the line lies completely inside the
window: pass the endpoints to the draw routine.
If both codes have a 1 in the same bit position (the bitwise AND of the codes is not 0000), the line
lies completely outside the window. It can be trivially rejected.


3. If a line cannot be trivially accepted or rejected, at least one of the two endpoints must lie
outside the window and the line segment crosses a window edge. This line must be clipped at
the window edge before being passed to the drawing routine.

4. Examine one of the endpoints, say P1. Read P1's 4-bit code in order: Left-to-
Right, Bottom-to-Top.

5. When a set bit (1) is found, compute the intersection I of the corresponding window edge with
the line from P1 to P2. Replace P1 with I and repeat the algorithm.
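The steps above can be put together as follows (an illustrative Python sketch; the window limits and endpoint names are assumptions, and the bit order follows the left, right, bottom, top reading used in step 4):

```python
# Region-code bits: Left, Right, Bottom, Top.
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    """Compute the 4-bit region code of a point (step 2)."""
    code = 0
    if x < xmin: code |= LEFT
    elif x > xmax: code |= RIGHT
    if y < ymin: code |= BOTTOM
    elif y > ymax: code |= TOP
    return code

def clip_line(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Return the clipped segment, or None if it is trivially rejected."""
    while True:
        c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
        c2 = outcode(x2, y2, xmin, ymin, xmax, ymax)
        if c1 | c2 == 0:            # both codes 0000: trivially accept
            return (x1, y1, x2, y2)
        if c1 & c2 != 0:            # common set bit: trivially reject
            return None
        # Pick an outside endpoint and move it to the window edge.
        c = c1 if c1 else c2
        if c & LEFT:
            x, y = xmin, y1 + (y2 - y1) * (xmin - x1) / (x2 - x1)
        elif c & RIGHT:
            x, y = xmax, y1 + (y2 - y1) * (xmax - x1) / (x2 - x1)
        elif c & BOTTOM:
            y, x = ymin, x1 + (x2 - x1) * (ymin - y1) / (y2 - y1)
        else:                       # TOP
            y, x = ymax, x1 + (x2 - x1) * (ymax - y1) / (y2 - y1)
        if c == c1:
            x1, y1 = x, y           # replace P1 with I and repeat
        else:
            x2, y2 = x, y

# Window (0,0)-(10,10): a line from (-5,5) to (15,5) clips to (0,5)-(10,5).
print(clip_line(-5, 5, 15, 5, 0, 0, 10, 10))
```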

Q3. PROJECTIONS
In drawing practice, a 3-dimensional object is represented on a plane paper. Similarly in computer
graphics a 3-dimensional object is viewed on a 2-dimensional display. A projection is a transformation
that performs this conversion. Three types of projections are commonly used in engineering practice:
parallel, perspective and isometric.

PARALLEL (ORTHOGONAL) PROJECTION


This is the simplest of the projection methods. Fig. 3.18 shows the projection of a cube on to a projection
plane. The projectors, which are lines passing through the corners of the object are all parallel to each
other. It is only necessary to project the end points of a line in 3-D and then join these projected points.
This speeds up the transformation process. However a major disadvantage of parallel projection is lack
of depth information.

PERSPECTIVE PROJECTION
The perspective projection enhances the realism of the displayed image by providing the viewer with a sense
of depth. Portions of the object farther away from the viewer are drawn smaller than those in the
foreground. This is more realistic, as it is the way we see an object. In perspective projection the
projectors connect the eye with every point of the object, and therefore all projectors converge at the
eye.


As the display screen is a two-dimensional space, we cannot display three-dimensional objects but only
their projections. Computationally, projection transformations are in general quite expensive. Since the
generation of a perspective view of a given object may require the projection transformation of a
considerable number of points, the projection applied is usually restricted to the central projection and
sometimes to even simpler parallel or orthographic projection in order to keep the execution time for
the generation of a perspective view within reasonable limits. Figure 3.19 explains the central projection
as it is usually applied in computer graphics.
The problem is to determine the projection of an object point, located somewhere in a three-
dimensional space, onto a plane in that space, called the image plane. This projection is called the image
point of the corresponding object point. In a central projection, the center of projection, also called the
viewpoint, is located on one of the axes of the three-dimensional orthogonal co-ordinate system. In
Figure 3.19 the viewpoint is arbitrarily located on the Z-axis. This fact can also be expressed by saying
that the optical axis is aligned with the Z-axis of the co-ordinate system. The image plane is perpendicular
to the optical axis; i.e., in figure 3.19 it is parallel to the xy-plane of the co-ordinate system. This fact
accounts for the simplicity of a central projection.
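With the viewpoint on the Z-axis and the image plane parallel to the xy-plane, the image point follows from similar triangles. A small sketch, assuming the viewpoint at (0, 0, d) and the image plane at z = 0 (the exact placement in Figure 3.19 may differ):

```python
def central_projection(point, d):
    """Project an object point onto the z = 0 image plane.

    The center of projection (viewpoint) is assumed at (0, 0, d) on
    the Z-axis.  Similar triangles give:
        x' = x * d / (d - z),   y' = y * d / (d - z)
    """
    x, y, z = point
    factor = d / (d - z)
    return (x * factor, y * factor)

# A point on the image plane projects to itself; a point farther from
# the viewpoint projects proportionally smaller:
print(central_projection((4.0, 2.0, 0.0), 10.0))    # (4.0, 2.0)
print(central_projection((4.0, 2.0, -10.0), 10.0))  # (2.0, 1.0)
```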

ISOMETRIC PROJECTION
In isometric projection the three orthogonal edges of an object are inclined equally to the projection
plane. Because of the relative ease of projection and the ability to give 3-D perception, isometric
projection is widely used in computer aided design. In computer aided design the co-ordinates of the
drawing are available in their natural co-ordinate system. These are transformed suitably to enable the
viewer to obtain different views or to rotate the object in such a way that all the faces of the object are made visible
continuously.
There are several uses for this technique in product design. Hence good design packages incorporate
several viewing transformation techniques. The viewing parameters depend upon the system graphics
standard followed in developing the graphics package. The algorithms for these viewing transformations
are available in literature.

Chapter 05
Q1. Z-buffer Algorithm
This scheme was developed by Catmull using Sutherland's classification scheme. In this technique, pixels
interior to a polygon are shaded and their depth is evaluated by interpolation from Z-values of the
polygon vertices after the viewing transformation has been applied. For every pixel, (X, Y) values are the
pixel co-ordinates and Z value is the viewing space depth. For each interior polygon point a search is
carried out to determine the minimum Z value. This search is conveniently implemented using a Z-buffer
that holds for a current point (X, Y) the smallest Z value so far encountered. The Z-buffer algorithm has
the advantage of simplicity. It handles scenes of any complexity. There is also no computation required
for depth sort. The storage space required, however is large. This could be reduced by pre-processing,
so that polygons nearest the viewpoint are processed first.

The scan line Z-buffer algorithm is a special case of the Z-buffer algorithm. In this algorithm, for each scan line, the frame buffer is
initialized to the background and the Z-buffer to the minimum Z. The intersection of the scan line with the
two-dimensional projection of each polygon is found. The depth of each pixel on the scan line between
each pair of intersections is determined. If the pixel depth is greater than that in the Z-buffer, then
this line segment is currently visible.

A spanning scan line algorithm, instead of solving the hidden surface removal on a pixel-by-pixel basis
using incremental Z calculation, uses spans along the scan line over which there is no depth conflict.


Consider the three dimensional screen space shown in Fig. 3.26. A scan line algorithm moves a scan line
plane down the Y-axis. This
plane, parallel to the XOZ plane, intersects the objects in the scene and reduces the hidden surface
problem to a 2-D space. The line segments obtained through the intersection are then analyzed to detect
hidden surfaces. This is done by considering the spans which form part of a line segment that is contained
between edge intersections of all active polygons.

Depth Buffer (Z-Buffer) Method


This method was developed by Catmull. It is an image-space approach. The basic idea is to test the Z-
depth of each surface to determine the closest (visible) surface.

In this method each surface is processed separately, one pixel position at a time across the surface. The
depth values for a pixel are compared and the closest surface determines the color to be
displayed in the frame buffer.

It is applied very efficiently on polygon surfaces. Surfaces can be processed in any order. To distinguish
the closer polygons from the farther ones, two buffers, named frame buffer and depth buffer, are used.

The depth buffer is used to store a depth value for each (x, y) position as surfaces are processed (0 ≤ depth ≤ 1).

The frame buffer is used to store the intensity or color value at each position (x, y).

The z-coordinates are usually normalized to the range [0, 1]. The value 0 for the z-coordinate indicates the back
clipping plane and the value 1 indicates the front clipping plane.

Algorithm


Step-1 Set the buffer values


Depthbuffer (x, y) = 0
Framebuffer (x, y) = background color
Step-2 Process each polygon (One at a time)
For each projected (x, y) pixel position of a polygon, calculate depth z.
If Z > depthbuffer (x, y)
Compute surface color,
set depthbuffer (x, y) = z,
framebuffer (x, y) = surfacecolor (x, y)
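The two steps can be sketched directly (illustrative Python; each polygon is represented simply as a pre-computed list of (x, y, z, color) fragments, i.e. scan conversion is assumed already done, and larger z means closer, matching the z > depthbuffer test above):

```python
def zbuffer_render(polygons, width, height, background=0):
    """Resolve visibility with a depth buffer (Steps 1 and 2 above)."""
    # Step 1: initialise the buffers.
    depthbuffer = [[0.0] * width for _ in range(height)]
    framebuffer = [[background] * width for _ in range(height)]
    # Step 2: process each polygon, one fragment at a time.
    for fragments in polygons:
        for x, y, z, color in fragments:
            if z > depthbuffer[y][x]:     # nearer than anything so far
                depthbuffer[y][x] = z
                framebuffer[y][x] = color
    return framebuffer

# Two overlapping one-pixel "polygons": the nearer red one (z = 0.9)
# hides the farther blue one (z = 0.4), regardless of draw order.
near_poly = [(1, 1, 0.9, "red")]
far_poly = [(1, 1, 0.4, "blue")]
image = zbuffer_render([far_poly, near_poly], 3, 3)
print(image[1][1])  # red
```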

Advantages
It is easy to implement.
It reduces the speed problem if implemented in hardware.
It processes one object at a time.
Disadvantages
It requires large memory.
It is a time-consuming process.

Q2. Area-Subdivision Method


The area-subdivision method takes advantage of area coherence by locating those view areas that represent part of a
single surface. The total viewing area is divided into smaller and smaller rectangles until each small area is
the projection of part of a single visible surface or of no surface at all.

Continue this process until the subdivisions are easily analyzed as belonging to a single surface or until
they are reduced to the size of a single pixel. An easy way to do this is to successively divide the area
into four equal parts at each step. There are four possible relationships that a surface can have with a
specified area boundary.

Surrounding surface: One that completely encloses the area.

Overlapping surface: One that is partly inside and partly outside the area.

Inside surface: One that is completely inside the area.

Outside surface: One that is completely outside the area.


The tests for determining surface visibility within an area can be stated in terms of these four
classifications. No further subdivisions of a specified area are needed if one of the following conditions
is true:

All surfaces are outside surfaces with respect to the area.


Only one inside, overlapping or surrounding surface is in the area.
A surrounding surface obscures all other surfaces within the area boundaries.
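The four classifications can be illustrated with axis-aligned extents (a simplified Python sketch; real surfaces are projected polygons, and testing bounding rectangles is an assumption made here for brevity):

```python
def classify(surface, area):
    """Classify a surface extent against an area.

    Both arguments are (xmin, ymin, xmax, ymax) rectangles.
    """
    sx1, sy1, sx2, sy2 = surface
    ax1, ay1, ax2, ay2 = area
    if sx2 <= ax1 or sx1 >= ax2 or sy2 <= ay1 or sy1 >= ay2:
        return "outside"       # no part of the surface is in the area
    if sx1 <= ax1 and sy1 <= ay1 and sx2 >= ax2 and sy2 >= ay2:
        return "surrounding"   # the surface completely encloses the area
    if sx1 >= ax1 and sy1 >= ay1 and sx2 <= ax2 and sy2 <= ay2:
        return "inside"        # the surface is completely inside the area
    return "overlapping"       # partly inside, partly outside

area = (0, 0, 10, 10)
print(classify((-5, -5, 20, 20), area))  # surrounding
print(classify((2, 2, 5, 5), area))      # inside
print(classify((8, 8, 15, 15), area))    # overlapping
print(classify((20, 20, 30, 30), area))  # outside
```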

Q3. SHADING
In simple polygonal mesh models, the surface is represented by constant shading. To introduce more
realistic shading, incremental shading is necessary. Two commonly used incremental shading techniques
are:
Gauraud Shading
Phong Shading

GOURAUD SHADING
Gouraud shading involves bilinear intensity interpolation over a polygon mesh. It is restricted to the diffuse
component of the reflection model. The technique first calculates the intensity at each vertex, assuming
that the light source is at infinity. The intensity of the light reflected over the polygonal surface can then
be obtained by integrating the interpolation process with the scan conversion process. The intensities at
the edges of each scan line are calculated from the vertex intensities, and the intensities along the scan
line from these.
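The interpolation along one scan line can be sketched as follows (illustrative Python; the edge intensities are assumed to have been interpolated from the vertex intensities already):

```python
def gouraud_span(i_left, i_right, num_pixels):
    """Interpolate intensity linearly across one scan-line span."""
    if num_pixels == 1:
        return [i_left]
    step = (i_right - i_left) / (num_pixels - 1)
    # Incremental form: each pixel adds a constant step, which is what
    # makes the method cheap enough to merge with scan conversion.
    return [i_left + k * step for k in range(num_pixels)]

# Edge intensities 0.2 and 1.0 across a 5-pixel span ramp linearly:
span = gouraud_span(0.2, 1.0, 5)
print([round(i, 2) for i in span])  # [0.2, 0.4, 0.6, 0.8, 1.0]
```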

PHONG TECHNIQUE
The Phong model overcomes some of the deficiencies of the Gouraud technique and incorporates specular
reflection. The important features of the Phong model are:
Vertex normals instead of vertex intensities are calculated by averaging normal vectors of the
surface that share the vertex.
Bilinear interpolation is used for incremental interpolation of points interior to polygons.
A separate intensity is evaluated for each pixel from the interpolated normals.

Q4. ANTI ALIASING


Aliasing in computer graphics manifests in several ways. The computer-generated image will have jagged
edges and incorrectly rendered fine detail or texture. Objects smaller than the size of a pixel may be
lost while displaying. Anti-aliasing is the technique adopted to solve these problems. One technique is to
increase the sampling rate by increasing the resolution of the raster. Another method is to calculate the
raster at a higher resolution and display it at a lower resolution by averaging the pixel attributes at the
lower resolution. Three techniques are described briefly below.

SUPER SAMPLING OR POST FILTERING


This method is a three-stage process. The stages are:
The image is sampled at n times the display resolution.
The sampled image is then low-pass filtered.
The filtered image is resampled at the device resolution.
This method goes well with the Z-buffer technique. However, it may cause some blurring. Another
disadvantage of this technique is that it is not a suitable method for dealing with small objects. Since the


memory requirements when used with the Z-buffer technique are large, it is essentially a virtual memory
technique.
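The resampling stage can be sketched with a simple box filter (an illustrative assumption; practical low-pass filters are usually weighted):

```python
def downsample(image, n):
    """Average n x n blocks of a supersampled image (stage 3 above).

    image is a list of rows of intensities; its dimensions are assumed
    to be exact multiples of n.
    """
    height, width = len(image), len(image[0])
    out = []
    for y in range(0, height, n):
        row = []
        for x in range(0, width, n):
            block = [image[y + dy][x + dx]
                     for dy in range(n) for dx in range(n)]
            row.append(sum(block) / (n * n))   # box-filter average
        out.append(row)
    return out

# A 4 x 4 image sampled at twice the display resolution reduces to
# 2 x 2; a partially covered block averages to a softer edge value.
hi_res = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(downsample(hi_res, 2))  # [[1.0, 0.0], [0.25, 0.0]]
```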

PRE-FILTERING OR AREA SAMPLING


This technique involves performing sub-pixel geometry in the continuous image-generation domain and
returns for each pixel an intensity which is computed by using the areas of visible sub-pixel fragments as
weights in an intensity sum. Thus this is an area sampling method. Since this method involves a
considerable amount of computation, several
modifications to this algorithm have been developed.

STOCHASTIC SAMPLING
This method is a two-stage process.
Sample the image using a sampling grid where the position of each sampling point has been
subjected to random perturbation.
Use these sample values with a reconstruction filter to determine the pixel intensities to which the
unperturbed sample positions correspond.
The problem with this method is that it is only easily incorporated where the image synthesis uses an
incoherent sampling method. Since the method splits the objects into micropolygons, it is suitable for
objects consisting of parametric bi-cubic patches.
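The first stage can be sketched as follows (illustrative Python; a uniform jitter within each pixel cell and a seeded generator are assumptions made here):

```python
import random

def jittered_samples(width, height, rng=None):
    """Generate one sample per pixel cell, randomly perturbed (stage 1)."""
    rng = rng or random.Random(0)   # seeded so runs are repeatable
    return [(x + rng.random(), y + rng.random())
            for y in range(height) for x in range(width)]

samples = jittered_samples(4, 4)
# The perturbation trades regular aliasing artefacts for noise, but
# every sample still falls inside its own pixel cell:
assert all(0 <= sx < 4 and 0 <= sy < 4 for sx, sy in samples)
print(len(samples))  # 16
```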

Q5. HIDDEN SURFACE REMOVAL


One of the difficult problems in computer graphics is the removal of hidden surfaces from the images of
solid objects. In Fig. 3.25 (a) an opaque cube is shown in wire frame representation. Edges 15, 48, 37,
14, 12, 23, 58 and 87 are visible, whereas edges 56, 67 and 26 are not visible. Correspondingly, surfaces
1265, 2673 and 5678 are not visible since the object is opaque. The actual representation of the cube
must be as shown in Fig. 3.25 (b).

Chapter 6
Q1. SALIENT FEATURES OF SOLID MODELING
FEATURE-BASED DESIGN
The most fundamental aspect of creating a solid model is the concept of feature-based design. In typical
2-D CAD applications, a designer draws a part by adding basic geometric elements such as lines, arcs,
circles and splines. Then dimensions are added. In solid modeling, a 3-D design is created by starting
with a base feature and then adding other features, one at a time, until an accurate and complete
representation of the part's geometry is achieved.


A feature is a basic building block that describes the design, like a keyway on a shaft. Each feature
indicates how to add material (like a rib) or remove a portion of material (like a cut or a hole). Features
adjust automatically to changes in the design thereby allowing the capture of design intent. This also
saves time when design changes are made. Because features have the ability to intelligently reference
other features, the changes made will propagate through the design, updating the 3-D model in all affected
areas. Figure 6.7 shows a ribbed structure. It consists of features like ribs and holes.

Similarly, if the flanged part shown in Fig. 6.8 (A) is to be created, one approach is to sketch the cross
section as shown in Fig. 6.8 (B) and then revolve it through 360 degrees.

In typical solid modeling software the designer can create a feature in two basic ways. One is to sketch
a section of the shape to be added and then extrude, revolve, or sweep it to create the shape. These are
called sketched features.
Another type of feature is the pick-and-place feature. Here the designer simply performs an engineering
operation such as placing a hole, chamfering or rounding a set of edges, or shelling out the model.

An important component of every feature is its dimensions. Dimensions are the variables that one
changes in order to make the design update automatically. When a dimension is changed the solid
modeling software recalculates the geometry.


Design of a part always begins with a base feature. This is a basic shape, such as a block or a cylinder
that approximates the shape of the part one wants to design. Then by adding familiar design features
like protrusions, cuts, ribs, keyways, rounds, holes, and others the geometry of a part is created.

This process represents true design. Unlike many CAD applications in which designing means drawing a
picture of the part, working with the feature-based solid modeling method is more like sculpting designs
from solid material.
Several such features are available in typical solid modeling software.

Building Assemblies
Designs usually consist of several parts. Solid modelers can put two or more parts together in an
assembly. All the tools a designer needs to build, modify, and verify assemblies are available in solid
modeling software.

Q2. GEOMETRIC MODELING


Computer representation of the geometry of a component using software is called a geometric model.
Geometric modeling is done in three principal ways. They are:
Wire frame modeling
Surface modeling
Solid modeling
These modeling methods have distinct features and applications.
WIRE FRAME MODELING
In wire frame modeling the object is represented by its edges. In the initial stages of CAD, wire frame
models were in 2-D. Subsequently 3-D wire frame modeling software was introduced. The wire frame
model of a box is shown in Fig. 6.2 (a). The object appears as if it is made out of thin wires. Fig. 6.2(b),
6.2(c) and 6.2(d) show three objects which can have the same wire frame model of the box. Thus in the
case of complex parts wire frame models can be confusing. Some clarity can be obtained through hidden
line elimination. Though this type of modeling may not provide unambiguous understanding of the
object, this has been the method traditionally used in the 2-D representation of the object, where
orthographic views like plan, elevation, end view etc are used to describe the object graphically.


A comparison between 2-D and 3-D models is given below:

SURFACE MODELING
In this approach, a component is represented by its surfaces which in turn are represented by their
vertices and edges. For example, eight surfaces are put together to create a box, as shown in Fig. 6.3.
Surface modeling has been very popular in aerospace product design and automotive design. Surface
modeling has been particularly useful in the development of manufacturing codes for automobile panels
and the complex doubly curved shapes of aerospace structures and dies and moulds.

Apart from standard surface types available for surface modeling (box, pyramid, wedge, dome, sphere,
cone, torus, dish and mesh) techniques are available for interactive modeling and editing of curved
surface geometry. Surfaces can be created through an assembly of polygonal meshes or using advanced
curve and surface modeling techniques like B-splines or NURBS (Non-Uniform Rational B-splines).
Standard primitives used in typical surface modeling software are shown in Fig. 6.4. Tabulated surfaces,
ruled surfaces, edge surfaces and revolved surfaces are simple ways in which curved geometry can be created and
edited.


SOLID MODELING
The representation of solid models uses the fundamental idea that a physical object divides the 3-D
Euclidean space into two regions, one exterior and one interior, separated by the boundary of the solid.
Solid models are:
bounded
homogeneously three dimensional
finite

There are six common representations in solid modeling.


Spatial Enumeration: In this simplest form of 3D volumetric raster model, a section of 3D space
is described by a matrix of evenly spaced cubic volume elements called voxels.
Cell Decomposition: This is a hierarchical adaptation of spatial enumeration. 3D space is subdivided
into cells. The cells could be of different sizes. These simple cells are glued together to
describe a solid object.
Boundary Representation: The solid is represented by its boundary which consists of a set of
faces, a set of edges and a set of vertices as well as their topological relations.
Sweep Methods: In this technique a planar shape is moved along a curve. Translational sweep
can be used to create prismatic objects and rotational sweep could be used for axisymmetric
components.
Primitive Instancing: This modeling scheme provides a set of possible object shapes which are
described by a set of parameters. Instances of object shape can be created by varying these
parameters.
Constructive Solid Geometry (CSG): Primitive instances are combined using Boolean set
operations to create complex objects.
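As a hedged illustration of the CSG idea, the sketch below models primitives as signed distance functions (negative inside the solid), so the Boolean set operations reduce to min/max of the distance values. This is one common way to implement CSG point classification, not the scheme of any particular package; all shapes and coordinates are assumed.

```python
import math

# CSG with signed distance functions: a point is inside a solid when its
# distance value is negative. Boolean set operations become min/max.
def sphere(cx, cy, cz, r):
    return lambda x, y, z: math.sqrt((x-cx)**2 + (y-cy)**2 + (z-cz)**2) - r

def union(a, b):       return lambda x, y, z: min(a(x, y, z), b(x, y, z))
def intersect(a, b):   return lambda x, y, z: max(a(x, y, z), b(x, y, z))
def difference(a, b):  return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# A sphere with a smaller sphere subtracted from one side.
solid = difference(sphere(0, 0, 0, 2.0), sphere(2, 0, 0, 1.0))
print(solid(0.0, 0.0, 0.0) < 0)   # True: the centre is inside
print(solid(1.8, 0.0, 0.0) < 0)   # False: carved away by the subtraction
```

The same min/max trick is what lets a CSG tree of primitives classify any query point as inside, outside or on the boundary of a complex object.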

In most of the modeling packages, the approach used for modeling uses any one of the following three
techniques:
Constructive solid geometry (CSG or C-Rep)
Boundary representation (B-Rep)
Hybrid method which is a combination of B-Rep and CSG.

Q6. Geometry and topology


Geometry: the lengths of the lines L1, L2 and L3, the angles between the lines, and the radius R and
center P1 of the half circle.
Topology of the object: L1 shares a vertex (point) with L2 and C1, L2 shares a vertex with L1 and L3, L3
shares a vertex with L2 and C1, and L1 and L3 don't overlap. In one case P1 lies outside the object,
whereas in the other case P1 lies inside the object.
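The distinction can be made concrete in code: topology records only which entities connect to which, while geometry attaches dimensions to the same connectivity. The edge names mirror the profile above, but the vertex names and numeric values below are assumed for illustration.

```python
# Topology: which entities connect to which (no dimensions involved).
# This connectivity stays fixed even if lengths, angles or the radius change.
topology = {
    "L1": ("V1", "V2"),   # edge L1 runs between vertices V1 and V2
    "L2": ("V2", "V3"),
    "L3": ("V3", "V4"),
    "C1": ("V4", "V1"),   # the half circle closes the loop back to V1
}

# Geometry: the actual dimensions attached to the same topology.
# These numbers are purely illustrative.
geometry = {
    "L1": {"length": 40.0},
    "L2": {"length": 25.0},
    "L3": {"length": 40.0},
    "C1": {"radius": 12.5, "center": "P1"},
}

# Two edges are topologically adjacent if they share a vertex.
def adjacent(e1, e2):
    return bool(set(topology[e1]) & set(topology[e2]))

print(adjacent("L1", "L2"))   # True  -- they share V2
print(adjacent("L1", "L3"))   # False -- L1 and L3 don't touch
```

Editing a length in `geometry` leaves `topology` untouched, which is exactly why parametric modelers treat the two separately.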


EXCHANGE OF CAD DATA BETWEEN SOFTWARE PACKAGES


The necessity to translate drawings created in one drafting package to another arises often. For example,
you may have a CAD model created in the PRO/E package and wish to transfer it to I-DEAS or
Unigraphics. It may also be necessary to transfer geometric data from one software package to another;
this situation arises when you want to carry out modeling in one package, say PRO/E, and analysis in
another, say ANSYS. One method to meet this need is to write direct translators from one package to
another. This means that each system developer has to produce its own translators, which necessitates
a large number of them: n packages require n(n - 1) translators, so with three software packages we
require six translators among them. This is shown in Fig. 17.3.
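The translator counts are simple arithmetic and can be checked directly; the sketch below compares direct translation (one translator per ordered pair of packages) with the neutral-file approach (one pre-processor and one post-processor per package).

```python
# Direct translators between n packages need one translator for each
# ordered pair of packages: n * (n - 1). With a neutral file each package
# needs only one pre-processor and one post-processor: 2 * n.
def direct_translators(n):
    return n * (n - 1)

def neutral_file_translators(n):
    return 2 * n

for n in (3, 5, 10):
    print(n, direct_translators(n), neutral_file_translators(n))
# 3 packages:  6 direct vs  6 via neutral file
# 5 packages: 20 direct vs 10 via neutral file
# 10 packages: 90 direct vs 20 via neutral file
```

The break-even point is n = 3; beyond that the neutral-file approach scales linearly while direct translation grows quadratically.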

A solution to this problem of direct translators is to use neutral files. Neutral files have standard
formats, and software packages can have pre-processors to convert drawing data to the neutral file and
post-processors to convert neutral file data back to a drawing file. Figure 17.4 illustrates how CAD data
transfer is accomplished using a neutral file.
Three types of neutral files are discussed in this chapter. They are:
Drawing exchange files (DXF)
IGES files
STEP files
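As a flavor of the simplest of these, the sketch below emits a minimal DXF stream for a single LINE entity using DXF's ASCII layout of alternating group-code and value lines. Real exporters also write HEADER and TABLES sections; this stripped-down form is only illustrative.

```python
# A minimal sketch of the DXF neutral format: pairs of a group-code line
# followed by a value line. Group code 0 starts a section or entity,
# 8 is the layer name, 10/20 and 11/21 carry the start and end
# coordinates of a LINE entity.
def line_to_dxf(x1, y1, x2, y2, layer="0"):
    pairs = [
        (0, "SECTION"), (2, "ENTITIES"),
        (0, "LINE"), (8, layer),
        (10, x1), (20, y1),      # start point
        (11, x2), (21, y2),      # end point
        (0, "ENDSEC"), (0, "EOF"),
    ]
    return "\n".join(f"{code}\n{value}" for code, value in pairs)

print(line_to_dxf(0.0, 0.0, 100.0, 50.0))
```

Because the format is plain text keyed by group codes, a post-processor can parse it with nothing more than a loop that reads lines two at a time.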


OTHER DATA EXCHANGE FORMATS


There are several existing alternative data exchange formats. These include the Standard Product Data
Exchange Format (SDF) of Vought Corporation (available for CADAM, CADDS-5, PATRAN, and PRIME etc.)
Standard Interchange Format (SIF) of Intergraph Corporation (available for Applicon, Autotrol, and
Calma etc.), ICAM Product Data Definition Interface (PDDI), and VDA sculptured surface Interface
(VDAFS), Electronic Design Interchange Format (EDIF), and Transfer and Archiving of Product Definition
Data (TAP). Another alternative to IGES is the neutral format outlined in the ANSI Y14.26M standard. It
must be noted here that some of the features of many of these alternatives are superior to those of IGES.

Q7. FINITE ELEMENT MODELING FROM SOLID MODELS


Optimization of designs requires detailed analysis to determine stresses, deflections, natural
frequencies, mode shapes, temperature distribution, heat flow rates etc. The first step in applying
the finite element technique to carry out these analyses is to create a finite element model. The solid
model geometry can be used directly to create the FE model.
Automatic mesh generation and application of loads and boundary conditions can
also be carried out while creating the finite element model.
It is often necessary to modify design geometry to create an effective finite element
model. Automatic mesh generation can be done either through free meshing or adaptive
meshing. A comprehensive library of finite elements is available in the software. The
elements may be linear or parabolic. Shell, solid, beam, rod, spring, damper, mass and gap
are some of the elements available for modeling. An FE modeling software
provides extensive capabilities to define loading and boundary conditions to correctly
simulate the environment that a part will be subjected to in operation.
Loads can be:
Structural loads
Heat transfer loads
Physical and material properties can be obtained from the material database of the
solid model. Facilities for model checking and model optimization are also available.

Q8. FINITE ELEMENT ANALYSIS


FEA is a convenient tool to analyze simple as well as complex structures. The use of finite element
analysis is not restricted to mechanical engineering systems alone. FEA finds extensive application in
electrical engineering, electronics engineering, micro-electro-mechanical systems (MEMS), biomedical
engineering etc. In manufacturing, FEA is used in the simulation and optimization of manufacturing
processes like casting, machining, plastic molding, forging, metal forming, heat treatment, welding etc.
Structural, dynamic, thermal, magnetic potential and fluid flow problems can be handled with ease and
accuracy using FEA.

FEA was initially developed in 1943 by R. Courant to obtain approximate solution to vibration problems.
Turner et al published in 1956 a paper on Stiffness and Deflection of Complex Structures. This paper
established a broader definition of numerical analysis as a basis of FEA. Initially, finite element analysis
programs were mainly written for mainframe and mini computers. With the advent of powerful PCs,
the finite element analysis could be carried out with the help of several FEA software packages. Finite
element method can be applied to a variety of design problems concerning automobiles, airplanes,
missiles, ships, railway coaches and countless other engineering and consumer products.

The finite element method is a numerical procedure. This method involves modeling the structure using
a finite number of small interconnected elements. Consider the plate shown in Fig. 7.1 (a). Suppose that
it is acted upon by a force P as shown and our interest is to determine the stresses in the plate. The plate
is discretized into 20 elements and 33 nodes as shown in Fig. 7.1 (b). Nodes in this case are the corner
points of each element, which has a square shape. The nodes are numbered 1 to 33. Each element is
formed by four nodes. For example, nodes 1, 9, 10 and 2 form element 1. The elements are numbered
1 to 20.
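The connectivity pattern can be generated programmatically. The sketch below numbers nodes down successive columns, so with 8 nodes per column the first element is formed by nodes 1, 9, 10 and 2, as in the description above; the grid proportions are assumed and do not reproduce the exact 33-node plate of Fig. 7.1.

```python
# Structured quad-mesh connectivity: nodes are numbered down successive
# columns and each element is listed by its four corner nodes.
def quad_mesh(n_cols, n_rows):
    """n_cols columns of n_rows nodes each; returns element tuples."""
    def node(c, r):                 # 1-based node number
        return c * n_rows + r + 1
    elements = []
    for c in range(n_cols - 1):
        for r in range(n_rows - 1):
            elements.append((node(c, r), node(c + 1, r),
                             node(c + 1, r + 1), node(c, r + 1)))
    return elements

elems = quad_mesh(4, 8)             # 4 columns of 8 nodes each (assumed)
print(elems[0])                     # (1, 9, 10, 2)
print(len(elems))                   # 21 elements in this assumed grid
```

A mesh generator in a real FE pre-processor produces the same kind of node and element tables, only for arbitrary geometry rather than a regular grid.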


A displacement function is associated with each element. Every interconnected element is linked to
other neighboring elements through common interfaces (nodes, edges and surfaces). Using the stress-strain
relationship of the material of the part under analysis, the designer can determine the behavior
of a given node. The set of equations describing the behavior of all the nodes results in a set of algebraic
equations, which are expressed in matrix form.

The solution obtained by finite element analysis is approximate. The accuracy of solution may depend
on the type of element used and the number of elements. It is necessary to have a thorough
understanding of the physics of the problem to select the most appropriate element for a given problem.
It is advisable to try a number of solutions by increasing the number of elements until the relative error
in successive solutions is small. In Fig. 7.1 all the elements are shown equal in size. This need not be
strictly adhered to; in fact, automatic mesh generation may result in elements of varying sizes. As long
as the aspect ratio of the elements is within permissible limits, the variation in the size of the elements
does not matter.

It may be necessary to refine the mesh to improve the accuracy of the solution in specific parts of the
component being analyzed. For example, referring to Fig. 7.1, we can state by intuition that the intensity
of stress around point A will be higher. It is therefore advisable to refine the mesh around point A. An
example of a refined mesh is shown in Fig. 7.2. Five more nodes have been added and the number of
elements is now 35. This may yield a better solution for the stress intensity.

GENERAL STEPS INVOLVED IN FINITE ELEMENT ANALYSIS


select the element type and discretize the component
select a displacement function
define stress strain relationship
derive the element stiffness matrix
assemble global stiffness matrix
solve to obtain nodal displacements
solve for element strains and stresses
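The steps above can be sketched end to end for the simplest possible case: an axial bar fixed at one end and loaded by a point load P at the free end, discretized into n two-node linear elements. The material, section and load values below are assumed for illustration only.

```python
# 1-D bar FEA following the listed steps: discretize, element stiffness,
# assemble, apply boundary conditions, solve, recover stresses.
def bar_fea(n=4, E=200e9, A=1e-4, L=1.0, P=1000.0):
    h = L / n                      # element length
    k = E * A / h                  # element stiffness matrix [k -k; -k k]
    # assemble the global stiffness matrix (n + 1 nodes)
    K = [[0.0] * (n + 1) for _ in range(n + 1)]
    for e in range(n):
        K[e][e]         += k
        K[e][e + 1]     -= k
        K[e + 1][e]     -= k
        K[e + 1][e + 1] += k
    F = [0.0] * (n + 1)
    F[n] = P                       # point load at the free end
    # apply the boundary condition u[0] = 0, then solve the reduced
    # (tridiagonal) system by Gaussian elimination
    size = n
    Kr = [row[1:] for row in K[1:]]
    Fr = F[1:]
    for i in range(size - 1):
        f = Kr[i + 1][i] / Kr[i][i]
        for j in range(size):
            Kr[i + 1][j] -= f * Kr[i][j]
        Fr[i + 1] -= f * Fr[i]
    u = [0.0] * size
    for i in reversed(range(size)):
        s = Fr[i] - sum(Kr[i][j] * u[j] for j in range(i + 1, size))
        u[i] = s / Kr[i][i]
    u = [0.0] + u                  # prepend the fixed node
    # element stresses from the strain in each element
    stresses = [E * (u[e + 1] - u[e]) / h for e in range(n)]
    return u, stresses

u, s = bar_fea()
print(u[-1])    # tip displacement: P*L/(E*A) = 5e-5 m for these values
print(s[0])     # uniform axial stress: P/A = 1e7 Pa
```

For this uniform bar the linear elements reproduce the exact solution at the nodes, which is why the result matches the hand calculation P*L/(E*A) regardless of n.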


Q9. KINEMATICS AND ANIMATION


WRITE ABOUT ANIMATION AND MOTION IN SOFTWARE

Q10. FEATURES AND APPLICATION OF COMMERCIAL PACKAGES OF CAE

