This document discusses different algorithms for visible surface determination, which is the problem of determining which surfaces are visible when rendering 3D objects on screen. It describes object-precision and image-precision approaches and algorithms like back-face culling, the painter's algorithm, the z-buffer algorithm, and binary space partitioning trees. It also discusses concepts like coherence that the algorithms can exploit to improve performance.
Graphics Introduction

What is Visible Surface Determination? Given a set of 3D objects and a viewing specification, we wish to determine which lines or surfaces are visible, so that we do not needlessly calculate and draw surfaces which will not ultimately be seen by the viewer (or which might confuse the viewer), and so that we can display only the visible lines or surfaces.

In 3D we must be concerned with whether or not objects are obscured by other objects: most objects are opaque, so they should obscure the things behind them. The problem is also known as visible surface detection or hidden surface elimination; a related problem is hidden line removal.
Simple example: consider the wireframe drawing of the two cubes. Which is correctly drawn, and which is incorrectly drawn?

Approaches. There are two fundamental approaches to the problem: object-space (object-precision) and image-space (image-precision). Although there are major differences in the basic approach taken by the various algorithms, most use sorting and coherence methods to improve performance.
Object Precision Algorithms. Object-precision algorithms do their work on the objects themselves, before the objects are converted to pixels in the frame buffer. They compare objects directly with each other, eliminating entire objects or portions of them that are not visible. The resolution of the display device is irrelevant here, as the calculation is done at the mathematical level of the objects. Pseudocode:
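The object-precision loop can be sketched as a toy Python version. Everything here is an assumption for illustration: objects are axis-aligned rectangles with a single depth `z` (smaller = nearer), and the hypothetical `clip_against` helper uses a deliberately crude rule that discards a part only when another object is nearer and covers it completely. A real implementation would clip polygons against each other geometrically.

```python
def clip_against(part, other):
    """Toy rule (assumption): a part is removed entirely when `other`
    is nearer and its footprint fully covers the part's footprint."""
    if part is None:
        return None
    nearer = other["z"] < part["z"]
    covers = (other["xmin"] <= part["xmin"] and other["xmax"] >= part["xmax"]
              and other["ymin"] <= part["ymin"] and other["ymax"] >= part["ymax"])
    return None if (nearer and covers) else part

def object_precision_visible(objects):
    """Compare every object against every other object, keeping only
    the parts that survive -- display resolution never enters into it."""
    visible = []
    for obj in objects:
        part = obj
        for other in objects:
            if other is not obj:
                part = clip_against(part, other)
        if part is not None:
            visible.append(part)
    return visible
```

Note the O(n^2) object-to-object comparison: the cost depends on the number of objects, not on the number of pixels.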
Object-precision algorithms were first developed for vector graphics systems. Image Precision Algorithms. Image-precision algorithms do their work as the objects are being converted to pixels in the frame buffer: they determine which of the n objects is visible at each pixel in the image. The resolution of the display device matters here, as the work is done pixel by pixel. Pseudocode:
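The image-precision loop can be sketched in the same toy style. The rectangle representation and the helper `depth_at` are assumptions for illustration; a real renderer would compute the depth at each pixel from the polygon's plane equation rather than storing one depth per object.

```python
def depth_at(obj, x, y):
    """Toy depth query: a constant depth where the object's
    axis-aligned footprint covers the pixel, None where it misses."""
    inside = obj["x0"] <= x <= obj["x1"] and obj["y0"] <= y <= obj["y1"]
    return obj["z"] if inside else None

def image_precision_render(objects, width, height):
    """For every pixel, find the nearest object covering it --
    the cost depends on the display resolution."""
    image = [["background"] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            nearest_z = float("inf")
            for obj in objects:
                z = depth_at(obj, x, y)
                if z is not None and z < nearest_z:
                    nearest_z = z
                    image[y][x] = obj["colour"]
    return image
```

Where the two overlap, the object with the smaller depth wins the pixel.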
Image-precision algorithms were first written for raster devices, to take advantage of the relatively small number of pixels for which the visibility calculations had to be performed. Later algorithms often combine object- and image-space calculations, with object-space calculations chosen for accuracy and image-space ones chosen for speed.

Coherence. Visible surface algorithms can take advantage of coherence: the degree to which parts of an environment or its projection exhibit local similarities.
Object coherence: if one object is entirely separate from another, comparisons need to be done only between the two objects, and not between their component faces or edges.
Face coherence: surface properties typically vary smoothly across a face, allowing computations for one part of a face to be modified incrementally to apply to adjacent parts.
Edge coherence: an edge may change visibility only where it crosses behind a visible edge or penetrates a visible face.
Implied edge coherence: if one planar face penetrates another, their line of intersection (the implied edge) can be determined from two points of intersection.
Scan-line coherence: the set of visible object spans determined for one scan line of an image typically differs little from the set on the previous line.
Area coherence: a group of adjacent pixels is often covered by the same visible face.
Span coherence: refers to a face's visibility over a span of adjacent pixels on a scan line.
Depth coherence: adjacent parts of the same surface are typically close in depth, whereas different surfaces at the same screen location are typically separated farther in depth.
Frame coherence: pictures of the same environment at two successive points in time are likely to be quite similar, despite small changes in objects and viewpoint.

Algorithms. Object-precision: back-face culling, painter's algorithm. Image-precision: z-buffer algorithm.

Back-Face Culling. Back-face culling works on solid objects which you are looking at from the outside (i.e. the polygons of the surface completely enclose the object). It is a quick test for fast elimination; it is not always suitable, and is rarely sufficient on its own, but it can reduce the workload considerably.

Which way does a surface point? Vector mathematics defines the concept of a surface's normal vector: an arrow that is perpendicular to that surface. Every planar polygon has a surface normal (actually, two normals). Given that the polygon is part of a solid object, we are interested in the normal that points OUT, rather than the normal that points IN.

Consider two faces of a cube and their normal vectors: vectors N1 and N2 are normals to surfaces 1 and 2 respectively, and vector V points from surface 1 to the viewpoint. Depending on the angle θ between V and each normal, the face may or may not be visible to the viewer. If −90° < θ < 90°, i.e. cos θ > 0 (equivalently V · N > 0), then the polygon is front-facing and can be drawn; otherwise it is culled (we choose to render only front-facing polygons). (Figure: cube faces 1 and 2 with normals N1, N2 and view vector V.) Pseudocode:

Limitations: back-face culling can only be used on solid objects. It works fine for convex polyhedra, but not necessarily for concave polyhedra. Additional example ...
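The front-facing test V · N > 0 can be sketched directly. One assumption for illustration: polygon vertices are listed counter-clockwise when the face is seen from outside, so the cross product of two edges yields the outward normal.

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def is_front_facing(polygon, viewpoint):
    """Back-face test: build the outward normal N from two edges of the
    polygon (counter-clockwise vertex order assumed), build V from the
    face towards the viewer, and draw the face only when V . N > 0."""
    p0, p1, p2 = polygon[:3]
    normal = cross(sub(p1, p0), sub(p2, p0))    # outward normal N
    to_viewer = sub(viewpoint, p0)              # view vector V
    return dot(to_viewer, normal) > 0
```

For a face in the z = 0 plane with outward normal (0, 0, 1), a viewer on the +z side sees it as front-facing, while the same face is culled for a viewer on the −z side.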
We need to ensure that objects are proper solids for culling to work. Back-face culling can be correct in some places, but it is not adequate for objects which have holes, for non-convex objects, or for scenes with multiple objects. (Figure: a teapot drawn with no hidden surface removal; the teapot is not quite a proper solid, and as a result the image is incorrect.)

Painter's Algorithm. Also called the depth-sort algorithm; developed by Newell, Newell and Sancha in 1972. The idea is to go back to front, drawing all objects into the frame buffer, with nearer objects being drawn over top of objects that are further away. Pseudocode:

This algorithm would be very simple if the z-extents of the polygons were guaranteed never to overlap. Unfortunately, that is usually not the case, which means that step two (establishing a correct back-to-front order for overlapping polygons) can be somewhat complex.
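A toy sketch of the back-to-front loop: each "polygon" here is a dict carrying a precomputed pixel list and a single representative depth (both assumptions for illustration; larger z is taken as further away), and the hard part — resolving polygons whose z-extents overlap — is deliberately ignored.

```python
def painters_algorithm(polygons):
    """Depth-sort sketch: sort polygons far-to-near, then draw them in
    that order so nearer polygons overwrite further ones."""
    draw_order = sorted(polygons, key=lambda p: p["z"], reverse=True)
    frame = {}                       # (x, y) -> colour
    for poly in draw_order:
        for pixel in poly["pixels"]:
            frame[pixel] = poly["colour"]   # nearer polygons overwrite
    return frame
```

Where two polygons cover the same pixel, the nearer one is drawn last and wins, which is the whole idea of the algorithm.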
As an illustration of the algorithm, consider drawing these two cubes:
Z-Buffer Algorithm. The z-buffer or depth-buffer algorithm is one of the simplest (and the most widely used) visible surface algorithms. Developed by Catmull in 1974, it is an image-precision algorithm and involves the use of a (surprisingly!) z-buffer: a secondary buffer that stores the depth (or z-value) of each pixel. The depth buffer has the same width and height as the frame buffer, and the algorithm is relatively easy to implement in hardware or software.

All of the elements of the z-buffer are initially set to "very far away". Whenever a pixel's colour is to be changed, the depth of the new colour is compared to the current depth in the z-buffer. If the new colour is closer, the pixel is given the new colour and the z-buffer entry for that pixel is updated as well; otherwise the pixel retains the old colour, and the z-buffer retains its old value. Pseudocode:

A-Buffer Method. The A-buffer is an antialiased, area-averaged, accumulation-buffer method. A drawback of the depth-buffer method is that it can find only one visible surface at each pixel position: it deals with opaque surfaces and cannot accumulate intensity values for more than one surface, as is necessary if transparent surfaces are to be displayed. The A-buffer method extends the depth buffer so that each position in the buffer can reference a linked list of surfaces.

Binary-Space Partitioning Trees. Construct BSP Tree.

Other Algorithms. Scan-line algorithms operate at image precision to create an image one scan line at a time. They extend the scan-conversion algorithm; the difference is that we deal not with just one polygon, but with a set of polygons. Area-subdivision algorithms all follow the divide-and-conquer strategy of spatial partitioning in the projection plane: if it is easy to decide which polygons are visible in an area, they are displayed; otherwise the area is subdivided into smaller areas to which the decision logic is applied recursively.
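Returning to the z-buffer algorithm above, its core update loop can be sketched as follows. This is a toy version: `fragments` stands in for the output of scan conversion (one `(x, y, z, colour)` tuple per covered pixel per polygon), and smaller z is taken to mean nearer.

```python
def zbuffer_render(width, height, fragments):
    """Z-buffer sketch: initialise every depth entry to 'very far
    away', then accept a fragment only when it is nearer than the
    depth currently stored for its pixel."""
    depth = [[float("inf")] * width for _ in range(height)]
    frame = [["background"] * width for _ in range(height)]
    for x, y, z, colour in fragments:
        if z < depth[y][x]:          # new fragment is nearer: keep it
            depth[y][x] = z
            frame[y][x] = colour
    return frame
```

Note that the result is independent of the order in which fragments arrive, which is why the algorithm needs no sorting, unlike the painter's algorithm.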