Progress in Physical Geography 30, 4 (2006) pp. 467-489
© 2006 SAGE Publications DOI: 10.1191/0309133306pp492ra

Causes and consequences of error in digital elevation models*

Peter F. Fisher1** and Nicholas J. Tate2
1 Department of Information Science, City University, Northampton Square, London EC1V 0HB, UK
2 Department of Geography, University of Leicester, University Road, Leicester LE1 7RH, UK

Abstract: All digital data contain error and many are uncertain. Digital models of elevation surfaces consist of files containing large numbers of measurements representing the height of the surface of the earth, and therefore a proportion of those measurements are very likely to be subject to some level of error and uncertainty. The collection and handling of such data and their associated uncertainties has been a subject of considerable research, which has focused largely upon the description of the effects of interpolation and resolution uncertainties, as well as modelling the occurrence of errors. However, digital models of elevation derived from new technologies employing active methods of laser and radar ranging are becoming more widespread, and past research will need to be re-evaluated in the near future to accommodate such new data products. In this paper we review the source and nature of errors in digital models of elevation, and in the derivatives of such models. We examine the correction of errors and assessment of fitness for use, and finally we identify some priorities for future research.

Key words: digital elevation model, digital surface model, error modelling, fitness for use, uncertainty, visualization.

* Part of this paper has been previously published in French as Tate and Fisher (2005) and is included here with agreement of Hermès Publishers.
** Author for correspondence.

I Introduction
The digital record of land surface elevations was one of the first widely available forms of geographical information. Such digital records are often distributed in the form of a digital elevation model or DEM, and their derivatives are frequently employed throughout physical geography for applications ranging from geomorphometry (Pike, 2000) to hydrological modelling (Kenward et al., 2000) and the physiographic correction of digital satellite imagery (Goyal et al., 1998).

DEMs come in a number of forms, but all usually consist of files containing a large number of records (often more than 10^5 records) where each record represents a statement or estimate of the elevation at a point in space. From the outset, it is important to be aware that the DEM is usually the end result of a number of modelling and processing steps, as typified by the flow chart in Figure 1. The progression from
conceptual model through to digital model
requires not only the selection of suitable proce-
dures, but also the application of suitable meas-
urement and statistical processes. Our
understanding of these processes and their
associated errors tell us that there is a very small
probability (effectively no chance) that all digital
records in any DEM are correct. DEMs there-
fore contain endemic error, and this error can
propagate through to products derived from the
DEM, for example to hydrogeomorphic param-
eters such as catchment size and stream net-
work characteristics (Walker and Willgoose,
1999) and surface flow dispersal areas (Endreny
and Wood, 2001). In this respect, DEMs are no
different from any other type of digital geo-
graphical data, all of which contain some error
that can propagate to dependent operations/
products. However, models of elevation are dis-
tinct for four reasons: 1) they were one of the
first forms of digital geographical information
which became available; 2) they are now widely
used; 3) they are closely associated with the
mathematical concepts of surface modelling;
and 4) they represent a tangible, directly
observable phenomenon of which all people
have direct experience: the surface of the earth.
This paper reviews research on the under-
standing, modelling and propagation of error
in DEMs, with the aim of presenting a
comprehensive statement of the issues sur-
rounding these topics. We therefore start by
defining key terms and methods used to
describe error. Next, we examine various
sources of error that can accumulate during the
process of DEM construction. Statistical mod-
els of error, and other approaches that have
been suggested for examining error, are the
focus of the next section, followed by a consid-
eration of the propagation of the error into
derived information. We review some of the
methods suggested for error visualization and
correction and, finally, we examine research to
determine fitness for use of DEM products. In
conclusion, we identify a number of areas which
require further research in the near future.
II Definitions
1 Digital elevation model (DEM)
The term digital elevation model has two gen-
eral meanings. First, it is any set of measure-
ments that record the elevation of the surface
of the earth, such that the spatial proximity of,
and spatial relationships between, those meas-
urements can be determined either implicitly
or explicitly; a simple list of elevations is not a
DEM. On the other hand, a list of triplet meas-
urements of elevation together with Easting
and Northing to give location does constitute a
DEM, because the spatial relationships of the
elevations can be recreated from the location
information. Second, and more specifically, a
DEM is a set of elevation values which are
recorded on a regular grid most commonly in
a square form, less frequently in a triangular or
rectangular form. Since the dimensions of the
grid are known and the number of observa-
tions in each row is known, the implicit spatial
relationships between elevation values can be
determined. The DEM in grid form is the most
widely used data model for the distribution of
digital elevation data by data providers (for
example, successive USGS DEMs), and has
been the format on which the vast majority of
research on error and uncertainty has been
based. In discussion below we therefore use
Figure 1 A flow diagram showing the
process of construction of a DEM through
the intermediary of a contour map
at FUND COOR DE APRFO PESSL NIVE on December 16, 2013 ppg.sagepub.com Downloaded from
Peter F. Fisher and Nicholas J. Tate 469
DEM synonymously with gridded DEM
although much of what we state is equally
applicable to other formats. Digital terrain
model (DTM) is sometimes used instead of
DEM, but is more correctly used to describe a
set of digital records related to terrain not just
elevation (Burrough, 1986).
2 Error and uncertainty
In this paper, we adopt the convention that
considers error to be the objective or formal
problems with measurement/estimation and
other, less tangible issues to be uncertainty
(Hunter and Goodchild, 1993; Gahegan and
Ehlers, 2000).
People can conceive of the surface of the
earth relatively clearly, and we can therefore
say that it is possible to measure the height of
this surface above a defined datum at a set of
points in space. If we believe firmly in this con-
ceptualization, and we are able to revisit each
point and repeat the measurement, we can
assert that any difference encountered
between the two measurements is due to
error in one or, more likely, both measure-
ments. If the two data sets are obtained by dif-
ferent methods, for example one by optical
theodolite and the second by laser levelling,
then there may be a justified belief in one set of
measurements being more accurate (the latter)
and so one (the first) being in relative error.
If, in addition to a set of point measure-
ments (however derived), we have a model of
the form of the surface of the earth, which
can be expressed mathematically, then it is
possible to estimate the elevation at unknown
locations from those at measured locations by
a mathematical process of interpolation (as is
often the case in DEMs and is discussed
below). Differences between the estimated
elevation and the measured elevation are a
matter of the fidelity of the mathematical
model. These are still regarded as errors in the
digital elevation model, however.
Generalizing these two instances, therefore, we can say that the error of a given set of
point measurements of a surface can only
properly be determined by comparison with
another set of known, usually more accurate
measurements. Such data are often termed
reference data and assumed to be error free
(Gens, 1999; Kyriakidis et al., 1999).
Other forms of doubt about the quality of
the digital elevation model constitute aspects
of uncertainty, and are largely related to
representation. For example, if we do not
vary the quality of measurement/estimation,
differences due to data gathered at contrast-
ing resolutions should more correctly be
regarded as an aspect of the uncertainty of the
model representation.
3 A typology of error
Errors in DEMs can clearly occur in both the
elevation or vertical (Z) and planimetric or
horizontal (XY) coordinates, but the focus is
usually on the former because planimetric error
will produce elevation error. Many commercial
data suppliers only report elevation error. Errors
in DEMs are usually (Cooper, 1998; Wise,
2000) categorized into three groups: gross errors
or blunders, systematic errors due to determinis-
tic bias in the data collection or processing, and
random errors.
Gross errors or blunders can be the result of
user error or equipment failure: such errors are
infrequent in commercial DEMs but they do
occur. They are evident with higher frequency
in non-commercial products. Systematic
errors can be defined as 'the result of a deterministic system which if known may be represented by some functional relationship' (Thapa and Bossler, 1992: 836). Conceptual
examples of simple systematic errors are por-
trayed in Figure 2, A and B. Real examples of
systematic errors include the contour line 'ghosts' identified in many DEMs derived from
contour data (Guth, 1999) as terracing, and
the distinctive parallel striping artifacts found in
some USGS 7.5-minute DEMs (Brown and
Bara, 1994; Garbrecht and Starks, 1995)
and the Canadian Terrain Resource Information
Management (TRIM) DEM data product
(Albani and Klinkenberg, 2003).
In contrast, random errors in a DEM
accrue from a great variety of measurement/
at FUND COOR DE APRFO PESSL NIVE on December 16, 2013 ppg.sagepub.com Downloaded from
470 Causes and consequences of error in digital elevation models
operational tasks in producing the DEM.
These may be represented conceptually as
random variations around the true reference
value, but a number of models of the spatial
distribution are possible, and two alternatives
are shown in Figure 2, C and D.
4 Describing error
As with the definition of error, the measure-
ment of error in DEMs is somewhat confused.
The most common descriptor is the root mean
square error (RMSE; Li, 1988; Shearer, 1990;
Desmet, 1997; Hunter and Goodchild, 1997):
RMSE = \sqrt{ \sum (z_{DEM} - z_{Ref})^2 / n }

where z_DEM is the measurement of elevation from the DEM and z_Ref is the higher accuracy measurement of elevation, for a sample of n points. This measure has the property that it is always positive. Some authors use n - 1 as the denominator, acknowledging the similarity between this equation and a standard deviation (Shearer, 1990; Desmet, 1997). Indeed, the RMSE is equal to the standard deviation of the error if the mean error is (or is assumed to be) zero. It should be noted that RMSE is a widely used measure of conformity between a set of estimates and the actual values, and has become a standard measure of map accuracy.

Figure 2 Comparisons of a profile through a DEM and the occurrence of error. (A) The occurrence of error with bias; (B) the occurrence of systematic error; (C) the occurrence of spatially autocorrelated error (the normal situation); (D) the occurrence of random error (no spatial autocorrelation). In each instance the upper diagram shows the ground surface as a thick line and the ground surface with the error as a thin line, and the lower diagram shows the error alone
Source: Modified from Shearer (1990); also published in Tate and Fisher (2005).
RMSE is usually reported as a single aspatial
global statistic per DEM based on comparison
with a limited sample of points. For example,
for the USGS 7.5-minute DEM product the
RMSE calculation requires a minimum of only
28 points (USGS, 1990). On the other hand,
the Ordnance Survey of Great Britain does not
even claim to determine the error for each tile
of the DEM, but asserts a single statement of
the RMSE in the product specification, imply-
ing that it is the same for the whole national
data set, although the report does give a range
of values for different land surface slopes.
In a number of studies, however, the mean
error has not been found to be equal to zero
(Li, 1988; Monckton, 1994; Fisher, 1998), and
so the RMSE is not necessarily a good
description of the statistical distribution of
the error. Therefore other researchers have
suggested the use of a more complete statis-
tical description of errors by reporting the
mean error (ME) and error standard deviation
(S) (Li, 1988; Fisher, 1998):
ME = \sum (z_{DEM} - z_{Ref}) / n

S = \sqrt{ \sum [ (z_{DEM} - z_{Ref}) - ME ]^2 / (n - 1) }

ME can be either negative or positive, and records systematic under- or overestimation of the elevations in the DEM, otherwise known as bias. Figure 2C shows a conceptual model of error in a DEM without bias and Figure 2A shows the same pattern of error with positive bias. Shearer (1990) and Desmet (1997) advocate use of the mean absolute error, replacing z_DEM - z_Ref with its modulus, |z_DEM - z_Ref|.
This is similar to RMSE. Both ME and S are
preferred, however, as these will allow the esti-
mation of bias. S records the dispersion, as
does the RMSE, but if ME is relatively large
then S and RMSE may be very different.
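As a brief illustration, all three global statistics can be computed directly from co-located DEM and reference elevations. The sketch below is a minimal example, assuming NumPy arrays of paired sample values; the function and variable names are ours, not from the studies cited.

```python
import numpy as np

def dem_error_statistics(z_dem, z_ref):
    """Global error statistics for paired DEM and reference elevations.

    z_dem, z_ref : 1-D arrays of co-located elevation values (same length).
    Returns RMSE, mean error (ME, i.e. bias) and error standard deviation (S).
    """
    diff = np.asarray(z_dem, dtype=float) - np.asarray(z_ref, dtype=float)
    n = diff.size
    rmse = np.sqrt(np.sum(diff ** 2) / n)
    me = np.sum(diff) / n                      # bias: systematic under/over-estimation
    s = np.sqrt(np.sum((diff - me) ** 2) / (n - 1))
    return rmse, me, s
```

When ME is zero the RMSE and S coincide (up to the n versus n - 1 denominator); a large ME signals a bias that the RMSE alone would hide.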
None of these descriptive statistics
(RMSE, ME, S) reports more than a global
summary statistic for a data set. Crucially, all
fail to describe the spatial pattern of error, and
in a DEM the error is likely to vary spatially
(Figure 2C, as compared with Figure 2D).
However, in spite of improved understanding
about the types of error within DEMs, there
is still relatively little known about the spatial
structure of that error (Monckton, 1994;
Hunter and Goodchild, 1997; Liu and Jezek,
1999). As a response, there have been a vari-
ety of studies that have attempted to describe
the pattern of DEM errors spatially by means
of both geostatistical variograms (Fisher,
1998; Kyriakidis et al., 1999; Liu and Jezek,
1999; Holmes et al., 2000; Weng, 2002;
Zhang and Goodchild, 2002; see section V)
and Fourier-based analysis (Liu and Jezek,
1999). In a study of the errors produced from
a comparison of low accuracy elevation data
with higher accuracy reference data, Liu and
Jezek (1999) employed both methods to
describe the spatial pattern of error for
a 51.2 km by 51.2 km DEM of the McMurdo
Dry Valley region in the Transantarctic
Mountains of Antarctica. Their analysis
revealed a highly anisotropic and scale-
dependent pattern of error, closely correlated
to the characteristics of the terrain surface. In
contrast, Holmes et al. (2000) observed that
for a low accuracy versus high accuracy comparison
for the Los Olivos quadrangle, Santa Barbara
County, California, the correlation between
error and various terrain indices was poor.
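One simple way to summarize the spatial structure of error is an empirical semivariogram of the point error values. The following sketch is an isotropic, brute-force estimator, not the specific procedure of any of the studies cited; it assumes arrays of sample coordinates and the corresponding DEM-minus-reference errors.

```python
import numpy as np

def empirical_semivariogram(x, y, err, lag_width, n_lags):
    """Isotropic experimental semivariogram of DEM error values.

    x, y      : 1-D arrays of sample coordinates.
    err       : 1-D array of errors (z_DEM - z_Ref) at those samples.
    lag_width : width of each distance bin.
    n_lags    : number of distance bins.
    Returns (lag centres, semivariance per lag).
    """
    x, y, err = (np.asarray(a, dtype=float) for a in (x, y, err))
    dist = np.sqrt((x[:, None] - x[None, :]) ** 2 + (y[:, None] - y[None, :]) ** 2)
    sqdiff = 0.5 * (err[:, None] - err[None, :]) ** 2
    iu = np.triu_indices_from(dist, k=1)          # each pair once, no self-pairs
    dist, sqdiff = dist[iu], sqdiff[iu]
    edges = np.arange(n_lags + 1) * float(lag_width)
    gamma = np.empty(n_lags)
    for i in range(n_lags):
        in_bin = (dist >= edges[i]) & (dist < edges[i + 1])
        gamma[i] = sqdiff[in_bin].mean() if in_bin.any() else np.nan
    centres = edges[:-1] + lag_width / 2.0
    return centres, gamma
```

A semivariance that rises with lag before levelling off indicates spatially autocorrelated error of the kind sketched in Figure 2C; a variogram that sits at the sill from the shortest lags suggests the white-noise pattern of Figure 2D.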
A further important weakness of RMSE has been noted by Hunter and Goodchild (1996: 15), who observed that 'while containing useful information about the final [DEM] product, [it] says nothing about the numerous contributing factors that may have played a role in the overall process giving rise to the error'.
III Sources of error in DEMs
Three main sources of error in DEMs are usu-
ally identied (Shearer, 1990; Li, 1992; Li and
Chen, 1999; Gong et al., 2000):
1. those derived from variations in the 'accuracy, density and distribution' (Li and Chen, 1999: 203) of the measured source data as determined by the specific method
of data generation;
2. those derived from the processing and
interpolation used to derive the DEM
from the source data;
3. those resulting from the characteristics
of the terrain surface being modelled in
relation to the representation of the DEM.
The first two are quite clearly errors. The
third, however, should be considered a matter
of uncertainty. Alternatively, we can think of
1 as data-based, being strictly concerned with
the source data, while 2 and 3 are model-
based being concerned with how well the
resulting DEM approximates the real physio-
graphy (Theobald, 1989; Shortridge, 2001).
1 Method of source data generation
Historically, DEMs encountered by the scien-
tist/academic have most frequently been
sourced by digitizing contour lines and spot
heights from paper maps. Other sources may
have included imagery such as stereo aerial
photographs using various types of pho-
togrammetry, or less frequently point meas-
urements derived directly from land survey.
The first step in the construction of such a
DEM from contours is therefore the creation of
the source map. Source map error is generally
defined as arising from the processes of collec-
tion, recording, generalization, symbolization
and production inherent in the cartographic
process (MacEachren, 1985; Muller, 1987).
Even though the spatial elevation data used to
produce a contour map may have been pro-
duced photogrammetrically, the transformation
to a contour map will introduce inaccuracy,
from both cartographic generalization and map
production (Fryer et al., 1994). It is difficult to
generalize about the errors introduced into con-
tours/spot heights from these sources, but
Fryer et al. (1994) suggest that the photogram-
metric errors alone might reasonably be
expected to be about 2 or 3 per 10,000 units of
flying height. To the error in the contour lines is
added error from the digitizing process.
Digitized contours can be stored in their own
right, but more usually they are interpolated to
produce the gridded DEM (see section III.2).
DEM construction directly from manual and
semi-automated photogrammetry introduces
both random and systematic errors (Petrie,
1990; Shearer, 1990; Fryer et al., 1994).
Random errors may accrue through the lack of
precision in the identication of target points on
the photograph as part of the process of aerial
triangulation, and systematic errors may accrue
from changes in the lm media, instrument
errors and from operator fatigue (Fryer et al.,
1994). Measurement methods include the
manual collection of elevation points along pro-
les using a stereoplotter, or the more auto-
mated collection of elevation points from a
digital stereomodel. In the former, a well-
known example is the Firth Effect (Hunter
and Goodchild, 1995: 534) which occurs when,
collecting data in profiles, elevations are under-
estimated when moving in an upslope direction
and overestimated in a downslope direction
producing a distinct herringbone pattern in the
elevations. Hunter and Goodchild (1995) also
noticed edge discontinuities that they attrib-
uted to interpolation errors from the automated
photogrammetric system used by the USGS.
The recent availability of computer-based
digital photogrammetric systems (DPS), some-
times as a component of image processing soft-
ware, means that bespoke DEM construction
has become more widespread. Such systems
are usually based on some form of hierarchical
stereo image correlation, and are able to pro-
duce gridded DEMs in a fully automatic manner
with optional manual editing. It is difficult to
generalize about the characteristics of such sys-
tems, but Gong et al. (2000) have suggested
that, in the absence of any editing, the fully
automated mode may produce data of low
accuracy relative to traditional photogrammet-
ric methods. Interestingly, Davis et al. (2001)
have modelled the relationship between the
estimated stereo-correlation quality within the
DPS and the RMSE obtained for the resulting
DEM when compared with a higher accuracy
kinematic GPS survey. For 1:40,000 scale
images of an urban area, they were able to pre-
dict DEM RMSE to within 8%.
Recently, there has been an increase in the
use of active airborne sensors for DEM cre-
ation. Such active systems include LiDAR
(light detection and ranging, also known as
laser altimetry; Dubayah et al., 2000) and
InSAR (interferometric synthetic aperture
radar; Goyal et al., 1998). LiDAR uses the
emission and reection of light pulses usually
from an airborne scanner. The quality of ele-
vation information obtained is a function of
the sensor and scanning system, the nature
and quality of the positioning/orientation sys-
tem in the aircraft, aircraft speed/flying
height, and the characteristics of the terrain
surface (Huising and Gomes-Pereira, 1998;
Wehr and Lohr, 1999). For example, when
working over land cover types such as forest,
it can be difficult to determine whether the
light pulse has penetrated to the ground
(Dubayah et al., 2000). The systematic error
for laser altimetry has been estimated to
range from a minimum of 5 cm in flat
paved/barren areas to a maximum of 200 cm in grass and scrubland, and random errors from 10 cm in flat areas to 200 cm in hilly areas have been noted (Huising and Gomes-
Pereira, 1998). Similarly to Laser-LiDAR, the
quality of the elevation information obtained
from InSAR is related to sensor and terrain
surface characteristics (Goyal et al., 1998).
DEMs produced by these active systems
have the potential to be of much higher accu-
racy than some of the traditional methods of
DEM construction discussed above, although
Baltsavias (1999: 90) has noted that there are
many more sources of error in active systems
than in photogrammetry, which make both
the assessment and propagation of error more
complicated.
The construction of DEMs using fully
automated photogrammetry and active sys-
tems of data capture constitute what
Lemmens (1999) has termed a process of
'blind sampling' of terrain. He identifies four
primary terrain surface characteristics which
will influence the quality of DEM data: the
existence of micro-relief, which makes elevation
measurement points spatially unrepresenta-
tive; non-selective spatial coverage of the sen-
sor; the presence of sloping ground altering
signal reflection; and signal attenuation/fallout
due to the varying reflectivity of different land
cover types. In fact, for all these methods the
elevation surface identied by the sensor may
not be the surface of the ground but a com-
posite surface of other features including
buildings and vegetation. Indeed, measure-
ment accuracies as high as 5 cm, together with problems of the penetration of vegetation, mean that it is no longer certain that the height being measured by these systems is that of the surface of the earth.
Irrespective of the method of DEM con-
struction, the error in a DEM can also be
inuenced by the density and distribution of
the measured point source data. For example,
for each of a selection of DEMs constructed
by manual photogrammetry, Östman (1987)
observed that an increased point density
reduced the RMSE.
2 Processing and interpolation
The degree of processing and interpolation
required to derive a regular gridded DEM
from a set of measurements will depend on
the method of measurement and the nature
of the data model. Thus if resources allowed,
data collection by field survey, for example,
could be tailored to the specications of a
gridded DEM by recording height measure-
ments at all grid intersections. Data collection
in photogrammetry and active systems can
also generate a gridded model by direct meas-
urement. If the distribution of the source data
is irregular or not at the desired spacing, then
some degree of processing/interpolation of
values at grid intersections is required and this
can itself be a source of error.
There are a great variety of interpolation
methods available for terrain surface interpo-
lation. Watson (1992) has identified two
classes of interpolation methods for terrain
surfaces: fitted functions and weighted averag-
ing. In contrast, Hutchinson and Gallant
(1999) have identified three classes based on:
triangulation, local surface patches and locally
adaptive gridding. For further details about
these methods of interpolation, and alterna-
tive more generic schemes of classication,
the reader is referred to Lam (1983),
Burrough (1986), Watson (1992), Hutchinson
and Gallant (1999) and Mitas and Mitasova
(1999). Less generic interpolation methods
can be identified for specific types of source
data. For example, when interpolating con-
tour data to a DEM, Robinson (1994) has dis-
tinguished between those generic methods of
interpolation mentioned above, and more
purpose-designed methods which make some
use of the extra information supplied by the
contours; for example, the construction of
lines of steepest descent between contours
(Leberl and Olson, 1982). The crucial point is
that since different methods of interpolation
produce different estimates for height values
at the same point, these methods will also
produce different quantities of error in the
DEM.
A variety of empirical work has looked at
the effects of different methods of interpola-
tion on DEM error (MacEachren and
Davidson, 1987; Desmet, 1997; Yang and
Hodler, 2000; Rees, 2000; Wise, 2000)
usually by means of observing the results of
different interpolators on sample DEMs.
Estimates of error can be obtained by com-
paring interpolated values with a higher accu-
racy reference surface (section II.2; Fisher,
1998; Holmes et al., 2000; Davis et al., 2001),
or with a subset of original points withheld
from the interpolation process (MacEachren
and Davidson, 1987; Desmet, 1997; Yang and
Hodler, 2000). Accuracy description can be
summarized statistically, or more qualitatively
using visual descriptions (Declercq, 1996;
Carrara et al., 1997; Desmet, 1997; Yang and
Hodler, 2000). We consider the role of visual-
ization in error identification more explicitly in
section VI below.
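One common way to estimate interpolation error, as described above, is to withhold a subset of the source points, interpolate from the remainder, and compare the estimates with the withheld values. The sketch below uses a simple inverse-distance-weighted interpolator purely as a stand-in for whichever method is being assessed; the IDW choice and the function names are illustrative assumptions, not taken from the studies cited.

```python
import numpy as np

def idw_interpolate(xy_known, z_known, xy_target, power=2.0):
    """Inverse-distance-weighted estimate of elevation at each target location."""
    xy_known, z_known, xy_target = (np.asarray(a, dtype=float)
                                    for a in (xy_known, z_known, xy_target))
    d = np.sqrt(((xy_target[:, None, :] - xy_known[None, :, :]) ** 2).sum(axis=2))
    d = np.where(d == 0, 1e-12, d)             # avoid division by zero at coincident points
    w = 1.0 / d ** power
    return (w * z_known[None, :]).sum(axis=1) / w.sum(axis=1)

def withheld_point_rmse(xy, z, test_fraction=0.2, seed=0):
    """Withhold a random subset of points, interpolate from the rest, report RMSE."""
    xy, z = np.asarray(xy, dtype=float), np.asarray(z, dtype=float)
    rng = np.random.default_rng(seed)
    test = rng.random(len(z)) < test_fraction
    z_hat = idw_interpolate(xy[~test], z[~test], xy[test])
    return np.sqrt(np.mean((z_hat - z[test]) ** 2))
```

Repeating the procedure with different interpolators on the same withheld points gives a like-for-like comparison of the error each introduces.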
In general, there seems to be no single
interpolation method that is the most accurate
for the interpolation of terrain data.
Geostatistical kriging is attractive from a sta-
tistical standpoint, since it is the best linear
unbiased estimator and the error introduced
by the processes of estimation can be directly
determined (Oliver and Webster, 1990).
However, kriging variance is directly propor-
tional to the distance of an interpolated value
from an input observation. Furthermore, the
success of a given interpolation method appar-
ently depends on the nature of the terrain
surface (smooth or rough) and the distribution
of the measured source data (irregular or reg-
ular). This may result in no clear interpolation
method being preferred (Wise, 2000). When
interpolating an existing DEM to a higher res-
olution DEM, Rees (2000: 17) suggests that
the RMS accuracy (RMSE) of the interpolated
DEM, r, is directly proportional to the stan-
dard deviation of the height difference
between adjacent points in the DEM, σ. The
constant of proportionality is dimensionless
and in the cases studied varied between 0.21
and 0.6 depending on the interpolator used,
the factor by which resolution is increased,
and the fractal dimension. For a variety of
DEMs Rees observed that the RMS accuracy
of the interpolated DEM has very little sensi-
tivity to the choice of interpolation method
between the bilinear, bicubic and kriging
approaches (Rees, 2000: 18). For a DEM rep-
resenting smooth undulating agricultural ter-
rain Desmet (1997) found that although spline
interpolation appeared preferable the extrap-
olation of this conclusion must be done with
care as the study area was extremely smooth
(Desmet, 1997: 579). It is clearly difficult to
make any general conclusions.
The data collected by active systems may
not only require interpolation, but also
considerable processing to obtain the result-
ant DEM. For example, Gens (1999) observes
that the error introduced into InSAR-derived
DEMs depends very much on the details of
the interferometric processing applied. This
includes the processes of registration of the
radar image, formation of the interferogram,
phase unwrapping and reconstruction of the
DEM (de Fazio and Vinelli, 1993, as cited in
Gens, 1999). Processing may also influence
the final form of the data and hence the
degree of error in the DEM. For example, a
reduction in precision by rounding off eleva-
tions to the nearest metre can introduce
sufficient error into the DEM to generate significant error in terrain derivatives (Figure 3;
Carter, 1992; Nelson and Jones, 1995).
3 Terrain representation
Recall from section III.1 that terrain surface
characteristics can directly influence the qual-
ity of elevation measurements, particularly
with the more active systems of data capture.
However, terrain surface characteristics also
interact with the resolution of the model indi-
rectly, reflecting the fact that DEMs consti-
tute a discrete sample of a continuous
variable. In section II.2 we noted that the
Figure 3 The SK40 20 km by 20 km tile of the Ordnance Survey 50 m resolution DEM of Britain. (A) Standard grey scale view; (B) histogram with the diagnostic cyclic peaks indicating over-representation of the contour lines; and (C) slope map showing steep slopes at both the contour positions (very steep slopes, white lines) and integer rounding in low relief areas (moderate slopes, grey lines)
Source: Crown Copyright/Database right 2005. An Ordnance Survey/EDINA supplied service.
issues of resolution do not map comfortably
into error and error alone, but are more an
issue of the wider concept of uncertainty.
Many authors have viewed resolution as a
question of error, however, and for complete-
ness issues relating to resolution are consid-
ered here.
The resolution (or sampling interval) of
the DEM is a function of both the source data
and any interpolation process carried out on
that source data to derive the gridded DEM.
In conceptual terms, we might think of the
choice of resolution of the DEM to be akin to
the discrete sampling of a continuous func-
tion: information will clearly be lost for
those distances that are smaller than the sam-
pling interval, but information will also be
altered at distances up to twice the sampling
interval, known as the Nyquist Critical
Frequency (Press et al., 1989: 386). Both of
these can cause error in the DEM. On this
basis, it might be expected that the larger the
resolution of the DEM, the less well the
model will approximate the underlying
continuous real terrain and the greater the
potential for error (Gao, 1998). However, res-
olution is intimately related to the character-
istics of the terrain surface, since at a given
resolution error can also be increased in the
DEM by increasing the complexity of the ter-
rain surface. Clearly, the accuracy to which
any given terrain is approximated by the DEM
will depend on the match (or mismatch)
between the resolution and the spatial char-
acteristics of the terrain: some landforms will
be approximated well, others less so
(Theobald, 1989).
A variety of empirical work has confirmed
the relationship between resolution and error
in real DEMs. For example, in an analysis of
photogrammetrically derived DEMs created
from the ISPRS DEM evaluation exercise
(Torlegård et al., 1986), Li (1994) observed a
positive relationship between resolution and
error standard deviation (S). In an analysis of
high-resolution DEMs derived from digitized
contours, Gao (1987) observed that errors in
terms of RMSE increased linearly with spatial
resolution, and that the accuracy of represen-
tation was inversely correlated with terrain
complexity. Similar trends have been observed
by Fisher (1998) and Gong et al. (2000).
An attractive strategy for assessing quality
of a DEM is to compare the data with a for-
mal model of that data, which, following
French use, is termed a terrain nominal
(Duckham and Drummond, 2000). For exam-
ple, Duckham and Drummond (2000)
employed a fractal model of physiography as a
formal model for the analysis of river network
characteristics. A similar fractal model was
also used by Polidori et al. (1991) to detect
smoothing due to interpolation in DEMs. In
both cases statements about quality are con-
ditional on the selection of an appropriate for-
mal model, but the appropriateness of such a
model may itself be uncertain (Goodchild and
Tate, 1992).
IV Error models
Modelling the error in continuous variables
can take two possible routes: derivation of
the error analytically, and simulation of the
error stochastically (Shortridge, 2001; Zhang
and Goodchild, 2002). Stochastic simulation
is further subdivided into: unconditioned and
conditioned.
1 Analytical error models
Hunter and Goodchild (1995) utilized a simple
model of error based on the RMSE of the
DEM. For any given pixel, error was assumed
to follow a Gaussian distribution around the
measured elevation value, and the global
RMSE for the DEM was taken to be an esti-
mate of the local error variance around this
value. In this manner, it is relatively simple to
map per-pixel probabilities across a DEM in
relation to a particular given elevation value, such as a contour line. This model is embedded in the
Pclass operation of the Idrisi GIS and has been
used by Eastman et al. (1993) in assessing the
risk of sea-level rise leading to flooding. This
approach is also exploited in the work of
Huss and Pumar (1997) to determine the
probability of the visible area.
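A minimal sketch of this kind of analytical model is given below: it treats the global RMSE as the standard deviation of a Gaussian error at every cell and maps the probability that the true elevation exceeds a threshold (for example a contour or flood level). This reproduces the general idea only; it is our illustration, not the Idrisi Pclass implementation.

```python
import numpy as np
from scipy.stats import norm

def probability_above_threshold(dem, rmse, threshold):
    """Per-cell probability that the true elevation exceeds `threshold`,
    assuming Gaussian error with standard deviation `rmse` about each DEM value."""
    dem = np.asarray(dem, dtype=float)
    # P(true elevation > threshold) = 1 - Phi((threshold - dem) / rmse)
    return norm.sf(threshold, loc=dem, scale=rmse)
```

Mapping the resulting probabilities, rather than a single thresholded surface, is what allows the contour (or inundation) position to be displayed with its attendant uncertainty.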
2 Unconditioned error simulation models
Unconditioned error simulation models rely
on stochastic simulation of realizations of ran-
dom functions (RF). They are informed by
properties of the error distribution but they
do not honour any actual estimates of error.
At their most basic, they comprise an algo-
rithm to select independent and uncorrelated
values drawn from a normal distribution
which can be added to the original DEM
often using a Monte Carlo method of simulat-
ing a number of independent realizations (eg,
Fisher, 1991; 1992). Such models may be
optionally constrained to a description of the
pattern of spatial dependence in the error.
This can be achieved in a variety of ways, that
include using a random cell-swapping algo-
rithm which iterates towards a desired value
of Moran's I (Fisher, 1991), Geary's C (Veregin, 1997) or ρ (rho) (Hunter and
Goodchild, 1997); a spatially autoregressive
process with a target correlation (Zhang and
Goodchild, 2002: 107) or by using a variogram
to characterize estimates of error (Englund,
1993; Fisher, 1998).
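By way of illustration, the sketch below generates unconditioned error realizations by adding Gaussian noise to a DEM. Spatial autocorrelation is imposed here by smoothing white noise with a Gaussian filter and rescaling to the target standard deviation, which is simply one convenient device of ours and not one of the specific algorithms (cell swapping, autoregression, variogram-based simulation) cited above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unconditioned_realizations(dem, error_sd, corr_range_cells, n_realizations, seed=0):
    """Monte Carlo realizations of a DEM with spatially autocorrelated Gaussian error.

    error_sd         : target standard deviation of the error field (e.g. the RMSE).
    corr_range_cells : smoothing length (in cells) controlling spatial autocorrelation;
                       0 gives uncorrelated (white-noise) error.
    """
    rng = np.random.default_rng(seed)
    for _ in range(n_realizations):
        noise = rng.standard_normal(dem.shape)
        if corr_range_cells > 0:
            noise = gaussian_filter(noise, sigma=corr_range_cells)
            noise *= error_sd / noise.std()     # rescale to the target spread after smoothing
        else:
            noise *= error_sd
        yield dem + noise
```

Each realization can then be passed through whatever derivative or model is of interest, and the spread of the outputs summarized cell by cell.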
The problem with unconditioned simula-
tion is that it still makes the assumption that
the pattern of error is uniform over the study
area or a wider region. This is not necessarily
the case, as is demonstrable from studies of
the distribution of actual errors (Monckton,
1994; Fisher, 1998). If error is spatially auto-
correlated, then it is generally larger in one
area and smaller in another; it is not the same
everywhere (compare Figure 2, C and D). To
address this regionalization of the error, the
model needs to be conditioned.
3 Conditioned error models
Conditioned error models directly honour
observations of error at the sample locations.
Such observations, for example, might have
been obtained by comparison between the
DEM and a higher accuracy reference data
set collected from within the area of the
DEM. Geostatistical methods of conditional
simulation are popular (Fisher, 1998;
Kyriakidis et al., 1999; Holmes et al., 2000).
A practical approach to geostatistical condi-
tional simulation (Delhomme, 1979; Zhang
and Goodchild, 2002) is to create an uncondi-
tional realization of a RF with a covariance
which matches the observed sample data.
Then, the values of the unconditional realiza-
tion for the sample locations are kriged. Since
both unconditionally simulated and kriged
estimates are available for the RF realization,
this allows direct estimation of the kriging
error. Since kriging is an exact interpolator,
this error will be zero for locations correspon-
ding to the sample locations, but non-zero
elsewhere. This estimate of the kriging error
can then be added to the kriged surface of the
observed sample data to produce a simple
conditional simulation (Delhomme, 1979:
272).
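In symbols, the conditioning step just described combines the kriged surface of the observations with the kriging error estimated from the unconditional realization (the notation is ours):

z_{c}(x) \;=\; z^{*}_{obs}(x) \;+\; \bigl[\, z_{u}(x) - z^{*}_{u}(x) \,\bigr]

where z*_obs(x) is the surface kriged from the observed sample data, z_u(x) is the unconditional realization, and z*_u(x) is the surface kriged from the values of that realization at the sample locations. Because kriging is exact, the bracketed term is zero at the sample locations, so the conditional simulation z_c honours the observations there and departs from them elsewhere in proportion to the simulated kriging error.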
Individual unconditional realizations can be
accumulated in a Monte Carlo methodology
to estimate error statistics as considered in
section V below.
4 Fuzzy elevation models
Recently, a number of researchers have chal-
lenged the assumption that the concept of the
land surface is well defined. They argue that
the denition of what is measured in a digital
elevation model is vague, and so it is suitable for treatment by vague or fuzzy mathemat-
ics (Santos et al., 2002; Lodwick and Santos,
2003). Fuzzy set theory and fuzzy logic have
been widely researched in Geographical
Information Science (Fisher, 2000), but fuzzy
mathematics has rarely been employed.
Instead of the values stored in a DEM being
regarded as an estimate of the actual eleva-
tion of the land surface at a point, a fuzzy
DEM assumes that any elevation stored is
one of a number of possible elevations.
Furthermore, the possibility distribution of
the elevations reflects, not the error in the
DEM, but the uncertainty in the conceptual-
ization of the surface given that the value
intended to be measured is vague. With the
vertical precision of LiDAR remote sensing
for DEM creation, this is a very real problem:
is the sensor measuring the top of the crop in
the field or the top of the ground and, if it is
bare ground, is it the crest of the plough fur-
row, or the trough? Indeed, comparable ques-
tions, at a different precision, can be asked of
earlier DEMs and even of contour maps: is it
the top of the vegetation (trees) that is meas-
ured or the ground surface? The intention is
clear, but the actuality is in doubt.
V Error propagation
For the study of error distributions in data to
have any meaning, it is important to study
their propagation into subsequent products or
predictions. Digital elevation models are very
widely used in making planning decisions and
in environmental models, and it is the propa-
gation of errors to these types of models that
are of greatest interest. Unfortunately, how-
ever, such propagation is also complicated,
and studies are rare.
Heuvelink et al. (1989) and Heuvelink
(1998) have shown that the Taylor series of
equations can be used to evaluate the propa-
gation of errors into the derivatives when
measurements at a location are compared.
This method can, for example, be employed if
two DEMs of an area are being used to mon-
itor the progressive accumulation of material
in landfill sites where the volume of the fill is
an important derivative and margins of error
in estimation are an important potential cost.
Even simpler propagation formulae are
embedded in the Idrisi GIS for simple overlay
actions (Eastman et al., 1993), based on stan-
dard error propagation formulae (Taylor,
1982). As soon as information that is local to
the area of concern is used, including slope or
aspect calculation, the local dependence of
error must become part of the equation and
the formulae become complex. Therefore,
Heuvelink (1998), and many others, recom-
mend working with Monte Carlo simulation.
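For orientation, a standard first-order (Taylor series) propagation formula of the kind underlying this approach can be written, for a derivative y = g(z_1, ..., z_m) computed from m elevation values (the notation is ours, not Heuvelink's), as:

\sigma_{y}^{2} \;\approx\; \sum_{i=1}^{m}\sum_{j=1}^{m} \frac{\partial g}{\partial z_i}\,\frac{\partial g}{\partial z_j}\,\operatorname{Cov}(z_i, z_j)

When the elevation errors are uncorrelated the covariance terms vanish and the expression collapses to \sum_i (\partial g/\partial z_i)^2 \sigma_i^2; for local operations such as slope or aspect the neighbouring errors are correlated, the cross terms cannot be dropped, and the algebra quickly becomes unwieldy, which is why Monte Carlo simulation is usually preferred.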
1 Slope and aspect derivatives
Slope and aspect are important components
for the determination of hydrological flow
paths (Veregin, 1997) and indices employed by
rainfall-runoff models such as TOPMODEL
(Wolock and Price, 1994; Brasington and
Richards, 1998).
The evaluation of the accuracy of eleva-
tion derivatives has usually been obtained by
direct comparison with measurements from
higher accuracy reference surfaces (Chang
and Tsai, 1991) or the real land surface
(Bolstad and Stowe, 1994). Florinsky (1998)
has argued, however, that such comparisons
are invalid since they imply the existence of
higher accuracy reference surfaces that are
locally smooth and differentiable. Real terrain
possesses the fractal characteristic of non-
differentiability, and therefore zooming in to
larger scales will reveal different surfaces with
different gradients/aspects and different
morphometry (Skidmore, 1989; 1990).
We can identify three components to the
error budget of the derivatives of elevation.
First, error, which occurs in measured or
interpolated elevation values, results in error
in the derivatives. Second, uncertainty can be
introduced owing to the resolution of the
DEM. Various empirical studies have been
undertaken to examine the effects of DEM
resolution on the accuracy of slope, gradient
and aspect, with some variability of observa-
tions. For example, Chang and Tsai (1991)
observed that error in all three is positively
related to DEM resolution, although Gao
(1998) found that gradient is the most sensi-
tive to resolution change. These first two
components collectively constitute what
Florinsky (1998: 49) has termed the accuracy
of the initial data, that is the DEM. The third
component concerns the precision of the cal-
culation method (Florinsky, 1998) where error
is introduced by the specific method of derivative calculation. Making use of Evans's
(1980) polynomial representation, Florinsky
(1998) derived RMSE error expressions for
gradient, aspect, horizontal and vertical pro-
le curvatures obtained from a DEM. He
showed that error in the derivatives is directly
proportional to elevation RMSE and inversely
proportional to DEM resolution. That finer
resolutions introduce larger errors is a reversal
of the more intuitive positive relationship
between DEM resolution and error discussed
above, whereby coarser resolution DEMs
introduce greater error as a result of poorer
approximation of the real terrain surface.
The consequences of DEM error on slope
and aspect have also been examined by
Hunter and Goodchild (1997). They simulated error fields containing different degrees of positive autocorrelation, added them to the DEM, and explored the effect on slope and aspect derived from the DEM; the more strongly autocorrelated the error, the smaller its effect on the derivatives. This result was previously suggested by
Goodchild (1996) on the basis that, if the land-
scape is smooth (showing positive spatial
autocorrelation) and the DEM is smooth, then
the error must also vary smoothly from place
to place. If the error has high positive spatial
autocorrelation, then slopes over short
distances (normally cell-to-cell in a GIS calcu-
lation) will have less error than if the positive
autocorrelation over the same distance is close
to zero (ie, the error is white noise). Many
DEMs are not smooth, however, and show
various systematic errors of their creation
as spatial discontinuities, as discussed in
section III.1.
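A minimal Monte Carlo sketch of this kind of experiment is given below. It reuses the unconditioned_realizations generator sketched in section IV.2 (our illustrative gaussian-filter device, not a published algorithm) and summarizes, cell by cell, the spread of slope gradients computed from each noisy DEM.

```python
import numpy as np

def slope_magnitude(dem, cell_size):
    """Slope (rise over run) from central differences on a gridded DEM."""
    dzdy, dzdx = np.gradient(np.asarray(dem, dtype=float), cell_size)
    return np.sqrt(dzdx ** 2 + dzdy ** 2)

def slope_uncertainty(dem, cell_size, error_sd, corr_range_cells, n_realizations=100):
    """Per-cell standard deviation of slope under simulated DEM error.

    Relies on unconditioned_realizations() from the sketch in section IV.2."""
    slopes = [slope_magnitude(noisy, cell_size)
              for noisy in unconditioned_realizations(dem, error_sd,
                                                      corr_range_cells, n_realizations)]
    return np.std(np.stack(slopes), axis=0)
```

Running this once with strongly autocorrelated error (a large corr_range_cells) and once with white noise (zero) illustrates the effect described above: cell-to-cell slopes are far noisier when the error lacks spatial autocorrelation.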
2 Visibility and other products
In a series of papers, Fisher (1991; 1992; 1993;
1996a; 1996b; 1998) has explored the propa-
gation of DEM error into visibility determina-
tion from a DEM (the viewshed) using Monte
Carlo simulation. He has shown that as spa-
tial autocorrelation in the error is increased,
the area determined as being visible increases
(on the same basis as Goodchild's argument
for slope determination). Furthermore, the
distribution of the probability of being visible
is more polarized, with more locations having
higher probabilities. Fisher (1998) also
reported that if the error model is conditioned
on the distribution of empirically determined
errors in the DEM, then the probabilities may
be lower than in an unconditioned error
model with the same degree of cell-to-cell
autocorrelation. Fisher (1998) argued that the
analyst's confidence in propagating the condi-
tioned error into the visibility problem should
be greater than using unconditioned error
because the model is using as much informa-
tion as is available on the distribution of error.
On the other hand, Ehlschlaeger et al. (1997)
explored the sensitivity of predicted paths to
DEM uncertainty related to the change in
resolution between 0.5 minutes of arc and 30 m.
They used the latter scale data as high-quality
information to parameterize and model the
uncertainty in the former. The same scale
transformation was studied by Kyriakidis et al.
(1999) and Holmes et al. (2000), using
sequential simulation. Holmes et al. (2000)
showed the sensitivity of various simple and
complex derivatives from the DEM, including
hillslope failure. They suggested that working
with the original DEM might seriously under-
estimate the area at risk of such failure.
Several studies have examined the sensi-
tivity of landslide risk estimation to DEM
error. Murillo and Hunter (1997) used Monte
Carlo simulation of DEM error in the US
Pacific Northwest to propagate the error into
a simple model of landslide susceptibility
involving only the DEM and a geological data
set which was treated as correct. Davis and
Keller (1997a; 1997b) went one step further
when they used sequential simulation based
on the variogram for Monte Carlo simulation
of DEM error. The realizations of the error
model were combined in a model with fuzzy
memberships of soil types to model the
boundary uncertainty in the soil database.
A further development is that of sensitivity
and uncertainty analysis advocated by
Crosetto and Tarantola (2001). They pre-
sented a framework where the uncertainty in
inputs and sensitivity in modelling can both be
examined and the influence of each assessed.
Unfortunately, although their example appli-
cation includes a DEM as input, they do not
parameterize the errors in the DEM and
assess its contribution or importance in the
overall uncertainty of the model. The GLUE
model (generalized likelihood uncertainty
estimation), proposed and investigated exten-
sively by Beven (2002), is primarily intended
for analysis of parameter sensitivity, and has
been used in a number of applications where
digital records of elevation are important,
such as hydrological modelling. It has not
been used to examine the model sensitivity to
parameters of DEM error models.
Anile et al. (2003) have considered the
consequences for visibility calculation when
the DEM is treated as fuzzy. They basically
use the fuzziness to predict whether a loca-
tion is in view, could be in view or is not, giv-
ing a three-valued outcome instead of the
usual binary solution (in-view or out-of-
view).
The propagation of DEM error into a num-
ber of environmental and planning models has
been explored. Future research must focus on
judging when the DEM error is critical to an
application and how much uncertainty (from
whatever source) is possible in the other data
before the DEM error is relatively unimpor-
tant in determining the possible variety of
outcomes.
VI Visualization of error
One of the most diagnostic methods for
investigating errors is visualization. The most
common view of a DEM, as a contour map or
as a colour (or grey scale) image (Figure 3A),
is only good for detecting the most extreme
errors, however; values that differ dramati-
cally from the elevations in the vicinity (see
section II.3 regarding gross errors/blunders)
can sometimes be detected by this method,
but even then, it is not the most diagnostic
approach.
Hunter and Goodchild (1995) have sug-
gested visualizing uncertainty around a par-
ticular contour line (see also Kraus, 1994).
They take the RMSE to be a standard deviation from a normal distribution, so that the probability of any elevation being less than or greater than a threshold elevation (the contour value) can be estimated and visualized (Hunter and Goodchild, 1995).
The most diagnostic visualization methods
rely on either summary graphs or mapping
DEM derivatives like slope and shaded relief.
These are effective for recognizing system-
atic errors, as opposed to identifying random
error. For example, ghost contours are a sys-
tematic error in many DEMs interpolated
from contour data. They are caused by over-
representation of elevations equal to the digi-
tized contours (Wood, 1994; Guth, 1999),
and can be detected very simply by the cyclic
arrangement of peaks in a histogram of the
DEM (Figure 3B). They are also detectable as
alignments of steep slopes in the DEM due to
the relatively sudden changes from one con-
tour value to another, and are detectable in
slope maps derived from the DEM (Figure
3C). A similar terracing (alignments of
steeper slopes) can be discernible in areas of
very gentle relief, due to storage of the DEM
as integers. This terracing can also be visible
in slope and shaded relief maps derived from
the DEM (Wood and Fisher, 1993). Other
systematic errors that can be detected in
shaded relief maps are the triangular facets
that result from TIN-based interpolation, and
piecewise reformatting in georeferencing
(Hunter and Goodchild, 1995).
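A simple diagnostic along these lines is to histogram the DEM values and look for regularly spaced spikes at the contour interval. The sketch below flags elevation bins whose counts stand well above those of their neighbours; the bin width and spike factor are arbitrary choices of ours.

```python
import numpy as np

def contour_ghost_peaks(dem, bin_width=1.0, spike_factor=3.0):
    """Return elevation bins whose histogram counts exceed `spike_factor`
    times the mean of their immediate neighbours, a crude indicator of the
    cyclic peaks produced by over-representation of digitized contour values."""
    values = np.asarray(dem, dtype=float).ravel()
    edges = np.arange(values.min(), values.max() + bin_width, bin_width)
    counts, edges = np.histogram(values, bins=edges)
    peaks = []
    for i in range(1, len(counts) - 1):
        neighbours = 0.5 * (counts[i - 1] + counts[i + 1])
        if neighbours > 0 and counts[i] > spike_factor * neighbours:
            peaks.append(edges[i])
    return np.array(peaks)
```

If the flagged bins fall at a regular spacing, that spacing is usually the contour interval of the source map.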
Fisher (1997) and Ehlschlaeger et al. (1997)
used animation to visualize uncertainty. In
both studies, time for the viewer is used as a
metaphor for uncertainty, so the longer an
item is unchanged the more certain it is.
Ehlschlaeger et al. (1997) used serial animation
to show the uncertainty in land inundated
when sea levels rise. The land areas exposed
for the longest period are most likely to be dry
land for a particular amount of sea-level rise in
Boston Harbour (see also Eastman et al.,
1993). Fisher (1997), on the other hand, used
a continuously varying random selection of
grid cells in the elevation model, and changed
the elevation in them according to a stochastic
model of the occurrence of error. The display
was therefore continuously changing in a
process he calls random animation. These ani-
mations have advantages and disadvantages,
but to experienced users both methods are
quite intelligible.
Hunter and Goodchild (1996) have dis-
cussed the need for a general model for visu-
alization and management of uncertainty.
They have advocated the generation of multi-
ple stochastic realizations of equally probable
mappings of all data (including any DEMs)
and, when asked to display a particular theme
for an area, making a random selection of one to display. Crosetto and Tarantola (2001)
employed exactly this approach to uncer-
tainty and sensitivity modelling (especially of
land cover). However, although they used a
DEM in their model, they do not consider the
uncertainty in it.
VII Error correction and fitness for use
The correction of the errors in a DEM is a logi-
cal progression from their identification, detec-
tion, measurement and propagation (Veregin,
1989; Li and Chen, 1999; López, 2002) and is
part of the process of error reduction.
1 Error reduction
Certain types of DEM error can be detected
and corrected relatively easily. Systematic
errors, as depicted in Figure 2, A and B, can often be identified visually and corrected using some variant of appropriate low-pass filtering and
value adjustment (Albani and Klinkenberg,
2003). Remarkably few methods exist for
either the detection of other errors from can-
didate elevation values or their correction.
López (1997; 2000) and Felicísimo (1994) have presented methodologies for the identification of blunders using a variety of statisti-
cal criteria to distinguish locally extreme
values. Felicísimo (1994) suggested that local
deviations from multiples of standard devia-
tions could be used, while López (1997) pro-
posed a method based on PCA transformed
subsets of DEM values. López (2002) subse-
quently employed both methods in an analy-
sis of the DEMs generated as part of the
ISPRS DEM evaluation exercise (Torlegård et al., 1986). Although López (2002) sug-
gested that an expert should assess and cor-
rect each blunder identified, he also proposed
that corrected values could simply be
obtained by linear interpolation from the local
values in the neighbourhood of extreme
points. Using this procedure, he noted that
reductions in RMSE of up to 8% were possible
although the outliers identified comprised less
than 1% of values in each DEM tested.
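The following sketch illustrates the general flavour of such screening: it flags cells that deviate from their local neighbourhood mean by more than a chosen multiple of the local standard deviation and replaces them with that mean. It is a generic local-outlier filter in the spirit of Felicísimo's test, not a reimplementation of either published method, and the window size and threshold are our assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def correct_local_blunders(dem, window=5, k=3.0):
    """Flag and correct locally extreme DEM values.

    A cell is treated as a candidate blunder when it deviates from the mean of a
    `window` x `window` neighbourhood by more than `k` local standard deviations;
    flagged cells are replaced by the neighbourhood mean (which here still
    includes the suspect cell itself, a simplification of a proper screening).
    """
    dem = np.asarray(dem, dtype=float)
    local_mean = uniform_filter(dem, size=window)
    local_sq_mean = uniform_filter(dem ** 2, size=window)
    local_sd = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    deviation = np.abs(dem - local_mean)
    blunders = deviation > k * np.maximum(local_sd, 1e-6)
    corrected = np.where(blunders, local_mean, dem)
    return corrected, blunders
```

In practice, as López (2002) suggests, flagged cells would ideally be inspected by an analyst rather than replaced automatically.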
Other methods for error identication and
correction rely on the introduction of contex-
tual information to determine whether or not
an elevation point contains error. For exam-
ple, the presence of isolated depressions/sinks
in a DEM which make little sense hydrologi-
cally (Jenson and Domingue, 1988) and flat
regions characteristic of rounding errors
(Nelson and Jones, 1995) have respectively
led to the development of methods for the
removal of spurious pits and the smoothing of
DEMs. These are relatively simple and com-
mon error correction procedures, and their
use is often motivated by specic hydrological
uses of the DEM, such as drainage network
derivation (Wise, 2000). Wood (2002), how-
ever, has argued that pits can occur as a logi-
cal consequence of the process of DEM
creation, and are common at the confluence
of two channels. However, if the model is
required for hydrological modelling, the flow through the confluence must be preserved,
and so pit removal is essential.
2 Fitness for use
The determination of whether or not a DEM
is of sufficient quality for a certain application is a more difficult question. While consider-
able progress has been made in describing and
modelling error, comparatively little progress
has been made in determining the minimum
data quality requirements for specific applica-
tions, or the development of methods to
assess what Chrisman (1983) termed fitness
for use (Frank, 1998; Veregin, 1999; de Bruin
et al., 2001). This may be partly explained by a
lack of choice in data supply. Historically, few,
if any, alternative sources of DEM data were
available for specific applications, and where
no alternative data exist the process of assess-
ing fitness for use can be considered unneces-
sary (Agumya and Hunter, 2002). However,
with increasing choice in data supply, this situ-
ation is becoming the exception rather than
the rule, at least in developed nations. Indeed,
Veregin (1999) has identified increased data
supply by the private sector as one of the main
reasons for an increased interest in data qual-
ity issues. In order to discriminate between
the increasing number and variety of DEM
data products of often contrasting quality that
are now available for individual locations,
appropriate strategies must be developed.
Since error will influence the quality of information obtained from any DEM product, the user ideally needs to be in a position to answer the questions identified by Agumya and Hunter (1999b: 42): 'Of the available information [ie, several candidate DEMs] that can be afforded, which is the most suitable?' and 'Is this information [a specific candidate DEM] fit for my application?'. Data quality is fundamental to both these questions, along with other issues of suitability and fitness such as resolution and cost. The second question requires an assessment of the quality of the DEM, along with propagation of that quality into the derivatives required by an application. Data can effectively be considered to be fit for use when the quality of a data set is 'better than the worst acceptable quality required by the application' (Frank, 1998: 7). While this notion is conceptually simple, in practice the worst acceptable quality for a specific DEM application is at best difficult to determine in a robust and verifiable manner and at worst unknown or unknowable. In such a situation, a simple, although cost-ineffective and far from optimal, solution would be to avoid any assessment of fitness for use at all and obtain the highest-quality data available. Such a situation is clearly untenable, because such data may not be fit for use for all conceivable applications, for reasons such as resolution and cost.
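Frank's rule reduces, in its simplest reading, to a threshold comparison, and the two questions of Agumya and Hunter (1999b) can be caricatured as filtering candidates by that comparison and then choosing among the survivors. In the sketch below the quality measure (vertical RMSE), the threshold and the candidate DEMs are all hypothetical; as the following discussion makes clear, a single RMSE figure is rarely a sufficient basis for such a judgement in practice.

def fit_for_use(quality_rmse_m, worst_acceptable_rmse_m):
    """Frank's (1998) rule: a data set is fit for use when its quality is
    better than the worst acceptable quality required by the application.
    Quality is represented here, purely for illustration, by vertical RMSE
    in metres, so 'better' means a smaller value."""
    return quality_rmse_m < worst_acceptable_rmse_m

def cheapest_suitable_dem(candidates, worst_acceptable_rmse_m):
    """From (name, rmse_m, cost) tuples, return the cheapest candidate that
    passes the fitness test, or None if none passes. The candidates and
    threshold used below are hypothetical."""
    suitable = [c for c in candidates if fit_for_use(c[1], worst_acceptable_rmse_m)]
    return min(suitable, key=lambda c: c[2]) if suitable else None

candidates = [("national 10 m DEM", 2.5, 0.0),
              ("photogrammetric DEM", 1.2, 500.0),
              ("lidar-derived DEM", 0.3, 2000.0)]
print(cheapest_suitable_dem(candidates, worst_acceptable_rmse_m=1.0))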
As noted by Agumya and Hunter (1999a), the usual approach to assessing fitness for use is standards-based; for example, the user asserts a threshold of acceptable RMSE for a DEM. In addition to the shortcomings of RMSE noted in section IV above, and more generic problems of standards, such as their static nature and implementation difficulties (Veregin, 1999), the crucial observation, made by Devillers et al. (2002) among others, is that fitness for use can only be assessed relative to an intended use. Therefore, absolute standards-based statements such as the RMSE of a DEM are on their own of little practical use to the data user, who will often not know a limiting value of RMSE for their intended application. Agumya and Hunter (1999b: 35) have observed that the use of such standards-based statements for the assessment of fitness for use is severely limited by the lack of any quantitative estimation of the consequences of error on the decisions made using the data. For example, given the propagation of a specific RMSE of a DEM, what are the consequences of the resulting error in the derivatives on decisions made using the data as part of an application? In an attempt to develop methods to help answer this question, and to provide an alternative to a standards-based statement, Agumya and Hunter have developed a risk-based strategy for determining the fitness for use of digital geographic data, including DEMs (Agumya and Hunter, 1997; 1999a; 1999b; 2002). The key component of this strategy is an appraisal of the consequences of being exposed to risks of error in the data (by using the data to make decisions), set against the degree of risk that is considered to be tolerable. The overall strategy encompasses risk identification, risk analysis, risk exposure, risk appraisal, risk assessment and risk response (Agumya and Hunter, 1999b: 40); the same authors present an example concerning the selection of a DEM for flood extent estimation. This strategy offers an objective procedure to address the consequences of error in decision making, as well as a framework which requires the assessment of error as a matter of course. However, the potential user of a DEM is faced with additional problems, including making the comparison between an overall estimate of potential risk exposure and the acceptable/tolerable risk, which is compounded by the variety of units (eg, lives, injuries, money) in which risk exposure may
be expressed (Agumya and Hunter, 1999b),
and determining just what degree of risk is
tolerable in any situation.
De Bruin et al. (2001) have approached the fitness for use problem by estimating what they term the 'expected value of control' (EVC) within an explicit framework of probabilistic decision analysis. This enables the choice of one DEM for a given location from a selection of candidate DEMs. It is achieved by estimating and comparing the expected loss incurred if each DEM is used for a specific decision-making task, where loss can be expressed in a number of ways including economic loss. The estimate of the error in each DEM forms a key element of the loss. In practice, this error is estimated and then propagated probabilistically into a loss function from which losses are obtained. In this manner, de Bruin et al. (2001) obtained and compared the expected losses for two candidate DEMs of differing origin and resolution that were available for a specific construction task in The Netherlands, where loss could be expressed directly as the monetary costs incurred to correct any error in the volume estimated from each DEM. The expected value of control translates as the ability of the user to minimize these losses/costs by the selection of one DEM rather than another. In the construction DEM example, there was little difference between the final estimates of loss/cost, and both DEMs were deemed equally suitable for the task at hand. As noted by de Bruin et al. (2001: 459), the outcomes of the decision framework need to be assigned values, and although losses/costs in monetary terms were calculated for the example used, it may be impossible to calculate such objective quantities in other contexts.
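The structure of such an expected-loss comparison can be sketched with a small Monte Carlo exercise. The error model (zero-mean Gaussian noise with a DEM-specific standard deviation), the loss function (a hypothetical cost per cubic metre of mis-estimated earthwork volume) and all parameter values below are illustrative assumptions standing in for the probabilistic decision analysis of de Bruin et al. (2001), not a reproduction of their calculation.

import numpy as np

rng = np.random.default_rng(0)

def expected_loss(true_surface, target_level, cell_area_m2,
                  error_sd_m, cost_per_m3, n_sim=1000):
    """Propagate a simple DEM error model into an expected monetary loss.

    Loss is taken, purely for illustration, as the cost of correcting the
    difference between the earthwork volume computed from an error-
    perturbed DEM and the volume computed from the (assumed known) true
    surface, for a cut down to a given target level.
    """
    true_volume = np.maximum(true_surface - target_level, 0).sum() * cell_area_m2
    losses = np.empty(n_sim)
    for k in range(n_sim):
        noisy = true_surface + rng.normal(0.0, error_sd_m, true_surface.shape)
        volume = np.maximum(noisy - target_level, 0).sum() * cell_area_m2
        losses[k] = abs(volume - true_volume) * cost_per_m3
    return losses.mean()

# Two hypothetical candidate DEMs of the same site, differing only in their
# assumed vertical error; the EVC idea is to prefer the candidate with the
# smaller expected loss, provided the difference justifies its extra cost.
surface = rng.uniform(10.0, 12.0, size=(50, 50))
loss_a = expected_loss(surface, target_level=10.5, cell_area_m2=25.0,
                       error_sd_m=1.5, cost_per_m3=4.0)
loss_b = expected_loss(surface, target_level=10.5, cell_area_m2=25.0,
                       error_sd_m=0.3, cost_per_m3=4.0)
print(round(loss_a), round(loss_b))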
VIII Conclusion
From this paper, it can be seen that the focus
of the vast majority of the research on error
(and uncertainty) in DEMs is concerned with
its identification, description, visualization
and modelling. Such work is often only con-
cerned with subsets of steps in the overall
progression from conceptual model to digital
record, as compared with the full process
(Figure 1). Most prevalent are comparisons
between the conceptual model of the land
surface and the contour model, and the con-
ceptual model and the final DEM (Figure 4).
Complete inventories of errors accumulating
from the beginning to the end of this process
are missing from the research reviewed.
Figure 4 Typical studies of DEM error and uncertainty relate to only part of the
conceptualization of the process of production of DEMs; no research seems to have
studied the whole process
A large number of studies have looked at
the process of interpolating a DEM from a
sparse scattering of points or from contour
lines (with intense sampling of points along
the line but none elsewhere). A number of
ingenious approaches to this problem have
been advanced, and knowledge of the sys-
tematic errors introduced by different inter-
polation methods is available. This knowledge
relates to a very specific step in the creation
of a DEM, however, ignoring the errors that
may occur in previous steps in the process
(Figure 1). Furthermore, such studies fre-
quently consider interpolation methods avail-
able to academics, and actually tell us
relatively little about the processes of com-
mercial DEM production even if the methods
are clearly stated by the producers. It might
be argued that this whole corpus of research
is of increasingly little relevance, however,
because the method of DEM creation is
increasingly by active methods described in
section III.1, and measurements may even be
made at a greater density than the grid of the
derived DEM.
Error description, as expressed in stan-
dards of spatial data quality, has always been
based on the RMSE, in spite of the increasing
research literature on error modelling which
has highlighted the statistical shortcomings of
this measure, and demonstrated that much
more can be achieved with a description of
the spatial distribution of error, either as a var-
iogram or by actually including more accurate
values of elevation along with the DEM.
Researchers have used these measures of
error as a basis of modelling and propagating
the error into a number of standard deriva-
tives of DEMs.
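The contrast between the two descriptions can be made concrete: the sketch below computes the single RMSE figure and an experimental semivariogram from the same grid of error values, the latter revealing the spatial structure of error that the former cannot convey. The synthetic error field, the row-wise pairing of cells and the lag spacing are simplifying assumptions for illustration only.

import numpy as np

def rmse(errors):
    """Single-figure summary: root mean square of the elevation errors."""
    errors = np.asarray(errors, dtype=float)
    return float(np.sqrt(np.mean(errors ** 2)))

def semivariogram(error_grid, cell_size_m, max_lag_cells=10):
    """Experimental semivariogram of gridded DEM errors along grid rows.

    For each lag h, gamma(h) is half the mean squared difference between
    error values h cells apart. Restricting pairs to one direction and to
    integer lags keeps the sketch short; a full treatment would pool all
    directions and distances.
    """
    e = np.asarray(error_grid, dtype=float)
    lags, gammas = [], []
    for h in range(1, max_lag_cells + 1):
        diffs = e[:, h:] - e[:, :-h]          # pairs of cells h columns apart
        lags.append(h * cell_size_m)
        gammas.append(0.5 * np.mean(diffs ** 2))
    return np.array(lags), np.array(gammas)

# Hypothetical spatially correlated error field: smoothing white noise gives
# residuals whose semivariogram rises with lag, information the single RMSE
# value cannot express.
rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, (100, 100))
kernel = np.ones(5) / 5.0
correlated = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, noise)
print(rmse(correlated))
print(semivariogram(correlated, cell_size_m=10.0)[1][:3])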
An interesting and novel approach to sur-
face creation has been the treatment of the
elevation values as fuzzy numbers. This
approach, however, has been introduced too
recently to be evaluated in any detail in this
review.
The assessment of DEM fitness for use is
of increasing importance given the wider
choice in DEM data supply that now exists for
many geographical locations. However, rela-
tively little work has been done in this area.
This must change, for it is arguably only when
the link is made between DEM quality and
application-quality requirements that the real
relevance of DEM error is apparent.
In general, the studies reviewed above
have only considered elevation in isolation
from other types of data. However, the error
in the DEM is just one type of uncertainty
that may enter a particular model. Clearly,
uncertainties can accrue from any error and
uncertainty in other data in the model, specif-
ically, uncertainty in the conceptualization of
the model itself, and uncertainty in the algo-
rithm used to implement the model. Among
those authors who have examined uncer-
tainty in spatial data other than in DEM appli-
cations, very few have examined the
consequences of DEM uncertainty in com-
parison with the uncertainty of the other data
layers in the analysis.
Whilst there is an increasing tendency to
collect larger volumes of elevation data with
seemingly ever-improved precision and accu-
racy, we have no evidence that this improve-
ment and the associated costs are
worthwhile. Very little work has been done to
determine the minimum data requirements
for specific applications of DEMs. The central
question in a modelling process suffused with
uncertainty is: are the errors which may be
present in one type of data input to the model
significant in terms of the sensitivity of the
model? In certain situations they may be crit-
ical, but in others they may not. If the DEM is
combined with other data in contexts like
hydrological and diffuse pollution modelling,
for example, the effect of the error may be
diluted, and be unimportant compared to
errors in other data and uncertainty in the
model itself. So far, not only has this question
been unanswered, it has been unaddressed.
Acknowledgements
This paper is the outcome of many years' work
and therefore owes much to many people, too
numerous to name individually. We would like
to thank all those with whom we have discussed DEM error over the years. Nicholas Tate would specifically like to thank the University of Leicester for financial support during a period of study leave/sabbatical while working on this paper, as well as logistical support from Michael Goodchild/NCGIA (UCSB) and Karen Kemp (University of Redlands) during this period. The text also benefited from some useful comments by Claire Jarvis.
References
Agumya, A. and Hunter, G.J. 1997: Determining fitness for use of geographic information. ITC Journal 1997-2, 109–13.
– 1999a: A risk based approach to assessing the fitness for use of spatial data. URISA Journal 11, 33–44.
– 1999b: Assessing fitness for use of geographic information: what risk are we prepared to accept in our decisions? In Lowell, K. and Jaton, A., editors, Spatial accuracy assessment: land information uncertainty in natural resources, Chelsea, MI: Ann Arbor Press, 35–43.
– 2002: Responding to the consequences of uncertainty in geographical data. International Journal of Geographical Information Systems 16, 405–17.
Albani, M. and Klinkenberg, B. 2003: A spatial filter for the removal of striping artifacts in digital elevation models. Photogrammetric Engineering and Remote Sensing 63, 755–66.
Anile, M.A., Furno, P., Gallo, G. and Massolo, A. 2003: A fuzzy approach to visibility maps creation over digital terrains. Fuzzy Sets and Systems 135, 63–80.
Baltsavias, E.P. 1999: A comparison between photogrammetry and laser scanning. ISPRS Journal of Photogrammetry and Remote Sensing 54, 83–94.
Beven, K.J. 2002: Towards an alternative blueprint for a physically based digitally simulated hydrologic response modeling system. Hydrological Processes 16, 189–206.
Bolstad, P.V. and Stowe, T. 1994: An evaluation of DEM accuracy: elevation, slope and aspect. Photogrammetric Engineering and Remote Sensing 60, 1327–32.
Brasington, J. and Richards, K. 1998: Interactions between model predictions, parameters and DTM scales for TOPMODEL. Computers & Geosciences 24, 299–314.
Brown, D.G. and Bara, T.J. 1994: Recognition and reduction of systematic error in elevation and derivative surfaces from 7.5 minute DEMs. Photogrammetric Engineering and Remote Sensing 60, 189–94.
Burrough, P.A. 1986: Geographical Information Systems for land resources assessment. Oxford: Oxford University Press.
Carrara, A., Bitelli, G. and Carla, R. 1997: Comparison of techniques for generating digital terrain models from contour lines. International Journal of Geographical Information Systems 11, 451–73.
Carter, J.R. 1992: The effect of data precision on the calculation of slope and aspect using gridded DEMs. Cartographica 29, 22–34.
Chang, K. and Tsai, B. 1991: The effect of DEM resolution on slope and aspect mapping. Cartography and Geographic Information Systems 18, 69–77.
Chrisman, N.R. 1983: Issues in digital cartographic quality standards: a progress report. In Moellering, H., editor, National Committee for Digital Cartographic Data Standards Report 3, Columbus, OH: NCDCDS, 3–31.
Cooper, M.A.R. 1998: Datums, coordinates and differences. In Lane, S., Richards, K. and Chandler, J., editors, Landform monitoring, modelling and analysis, Chichester: Wiley, 21–36.
Crosetto, M. and Tarantola, S. 2001: Uncertainty and sensitivity analysis: tools for GIS-based model implementation. International Journal of Geographical Information Science 15, 415–37.
Davis, C.H., Jiang, H. and Wang, X. 2001: Modeling and estimation of the spatial variation of elevation error in high resolution DEMs from stereo image processing. IEEE Transactions on Geoscience and Remote Sensing 39, 2483–89.
Davis, T.J. and Keller, C.P. 1997a: Modelling uncertainty in natural resource analysis using fuzzy sets and Monte Carlo simulation: slope stability prediction. International Journal of Geographical Information Systems 11, 409–34.
– 1997b: Modelling and visualizing multiple spatial uncertainties. Computers & Geosciences 23, 397–408.
de Bruin, S., Bregt, A. and van de Ven, M. 2001: Assessing fitness for use: the expected value of spatial data sets. International Journal of Geographical Information Science 15, 457–71.
Declercq, F.A.N. 1996: Interpolation methods for scattered sample data: accuracy, spatial patterns and processing time. Cartography and Geographic Information Systems 23, 128–44.
de Fazio, M. and Vinelli, F. 1993: DEM reconstruction in SAR interferometry – practical experiences with ERS-1 SAR data. Proceedings of IGARSS '93, Tokyo, Japan, 1207–209.
Delhomme, J.P. 1979: Spatial variability in groundwater flow parameters: a geostatistical approach. Water Resources Research 15, 269–80.
Desmet, P.J.J. 1997: Effects of interpolation errors on the analysis of DEMs. Earth Surface Processes and Landforms 22, 563–80.
Devillers, R., Gervais, M., Bédard, Y. and Jeansoulin, R. 2002: Spatial data quality: from metadata to quality indicators and contextual end-user manual. Paper presented at the OEEPE/ISPRS Joint Workshop on Spatial Data Quality Management, 21–22 March 2002, Istanbul.
Dubayah, R., Knox, R., Hofton, M., Blair, J.B. and Drake, J. 2000: Land surface characterization using lidar remote sensing. In Hill, M. and Aspinall, R., editors, Spatial information for land use management, Singapore: International Publishers Direct, 25–38.
Duckham, M. and Drummond, J. 2000: Assessment of error in digital vector data using fractal geometry. International Journal of Geographical Information Systems 14, 67–84.
Eastman, J.R., Kyem, P.A.K., Toledano, J. and Jin, W. 1993: GIS and decision making. Explorations in Geographical Information Systems Technology volume 4. Worcester, MA: UNITAR, Clark Labs.
Ehlschlaeger, C.R., Shortridge, A.M. and Goodchild, M.F. 1997: Visualizing spatial data uncertainty using animation. Computers & Geosciences 23, 387–95.
Endreny, T.A. and Wood, E.F. 2001: Representing elevation uncertainty in runoff modelling and flowpath mapping. Hydrological Processes 15, 2223–36.
Englund, E. 1993: Spatial simulation: environmental applications. In Goodchild, M.F., Parks, B.Q. and Steyaert, L.T., editors, Environmental modeling and GIS, New York: Oxford University Press, 432–37.
Evans, I.S. 1980: An integrated system of terrain analysis and slope mapping. Zeitschrift für Geomorphologie, Suppl. Bd. 36, 274–95.
Felicísimo, A. 1994: Parametric statistical method for error detection in digital elevation models. ISPRS Journal of Photogrammetry and Remote Sensing 49, 29–33.
Fisher, P.F. 1991: First experiments in viewshed uncertainty: the accuracy of the viewshed area. Photogrammetric Engineering and Remote Sensing 57, 1321–27.
– 1992: First experiments in viewshed uncertainty: simulating the fuzzy viewshed. Photogrammetric Engineering and Remote Sensing 58, 345–52.
– 1993: Algorithm and implementation uncertainty in viewshed analysis. International Journal of Geographical Information Systems 7, 331–47.
– 1996a: Propagating effects of database generalization on the viewshed. Transactions in GIS 1, 69–81.
– 1996b: Reconsideration of the viewshed function in terrain modelling. Geographical Systems 3, 33–58.
– 1997: Animation of reliability in computer-generated dot maps and elevation models. Cartography and Geographical Information Systems 23, 196–205.
– 1998: Improved modelling of elevation error with geostatistics. GeoInformatica 2, 215–33.
– 2000: Sorites paradox and vague geographies. Fuzzy Sets and Systems 113, 7–18.
Florinsky, I.V. 1998: Accuracy of local topographic variables derived from digital elevation models. International Journal of Geographical Information Science 12, 47–62.
Frank, A.U. 1998: Metamodels for data quality description. In Jeansoulin, R. and Goodchild, M., editors, Data quality in geographic information: from error to uncertainty, Paris: Hermès, 15–29.
Fryer, J.G., Chandler, J.H. and Cooper, M.A.R. 1994: On the accuracy of heighting from aerial photographs and maps: implications to process modellers. Earth Surface Processes and Landforms 19, 577–83.
Gahegan, M. and Ehlers, M. 2000: A framework for the modeling of uncertainty between remote sensing and geographical information systems. ISPRS Journal of Photogrammetry and Remote Sensing 55, 176–88.
Gao, J. 1997: Resolution and accuracy of terrain representations by grid DEMs at a micro-scale. International Journal of Geographical Information Systems 11, 199–212.
– 1998: Impact of sampling intervals on the reliability of topographic variables mapped from grid DEMs at a micro scale. International Journal of Geographical Information Systems 12, 875–90.
Garbrecht, J. and Starks, P. 1995: Note on the use of USGS Level 1 7.5 minute DEM coverages for landscape drainage analysis. Photogrammetric Engineering and Remote Sensing 61, 519–22.
Gens, R. 1999: Quality assessment of interferometrically derived digital elevation models. International Journal of Applied Earth Observation and Geoinformation 1, 102–108.
Gong, J., Li, Z., Zhu, Q., Sui, H. and Zhou, Y. 2000: Effects of various factors on the accuracy of DEMs: an intensive experimental investigation. Photogrammetric Engineering and Remote Sensing 66, 1113–17.
Goodchild, M.F. 1996: Attribute accuracy. In Guptill, S.C. and Morrison, J., editors, Elements of spatial data quality, Oxford: Pergamon, 59–79.
Goodchild, M.F. and Tate, N.J. 1992: Description of terrain as a fractal surface and application to digital elevation model quality assessment: forum. Photogrammetric Engineering and Remote Sensing 58, 1568–70.
Goyal, S.K., Seyfried, M.S. and O'Neill, P.E. 1998: Effect of digital elevation model resolution on topographic correction of airborne SAR. International Journal of Remote Sensing 19, 3075–96.
Guth, P.L. 1999: Contour line ghosts in USGS level 2 DEMs. Photogrammetric Engineering and Remote Sensing 65, 289–96.
Heuvelink, G.B.M. 1998: Error propagation in environmental modelling with GIS. Research Monographs in GIS Series. London: Taylor and Francis.
Heuvelink, G.B.M., Burrough, P.A. and Stein, A. 1989: Propagation of errors in spatial modeling with
GIS. International Journal of Geographical Information Systems 3, 303–22.
Holmes, K.W., Chadwick, O.A. and Kyriakidis, P.C. 2000: Error in a USGS 30m DEM and its impact on terrain modeling. Journal of Hydrology 233, 154–73.
Huising, E.J. and Gomes-Pereira, L.M. 1998: Errors and accuracy estimates of laser data acquired by various laser scanning systems for topographic applications. ISPRS Journal of Photogrammetry and Remote Sensing 53, 245–61.
Hunter, G.J. and Goodchild, M.F. 1993: Mapping uncertainty in spatial databases, putting theory into practice. Journal of the Urban and Regional Information Systems Association 5, 55–62.
– 1995: Dealing with error in a spatial database: a simple case study. Photogrammetric Engineering and Remote Sensing 61, 529–37.
– 1996: Communicating uncertainty in spatial databases. Transactions in GIS 1, 13–24.
– 1997: Modeling the uncertainty of slope and aspect estimates derived from spatial databases. Geographical Analysis 29, 35–49.
Huss, R.E. and Pumar, M.A. 1997: Effect of database errors on intervisibility estimation. Photogrammetric Engineering and Remote Sensing 63, 415–24.
Hutchinson, M.F. and Gallant, J.C. 1999: Representation of terrain. In Longley, P.A., Goodchild, M.F., Maguire, D.J. and Rhind, D.W., editors, Geographical Information Systems, volume 1: principles and technical issues, New York: Wiley, 105–24.
Jenson, S.K. and Domingue, J.Q. 1988: Extracting topographic structure from digital elevation data for geographic information system analysis. Photogrammetric Engineering and Remote Sensing 54, 1593–600.
Kenward, T., Lettenmaier, D.P., Wood, E.F. and Fielding, E. 2000: Effects of digital elevation model accuracy on hydrologic predictions. Remote Sensing of Environment 74, 432–44.
Kyriakidis, P.C., Shortridge, A.M. and Goodchild, M.F. 1999: Geostatistics for conflation and accuracy assessment of digital elevation models. International Journal of Geographical Information Science 13, 677–707.
Kraus, K. 1994: Visualization of the quality of surfaces and their derivatives. Photogrammetric Engineering and Remote Sensing 60, 457–62.
Lam, N.S.-N. 1983: Spatial interpolation methods: a review. The American Cartographer 10, 129–49.
Leberl, F.W. and Olson, D. 1982: Raster scanning for operational digitizing of graphical data. Photogrammetric Engineering and Remote Sensing 48, 615–27.
Lemmens, M.J.P.M. 1999: Quality description problems of blindly sampled DEMs. In Shi, W., Goodchild, M.F. and Fisher, P.F., editors, Proceedings of the International Symposium on Spatial Data Quality '99, Hong Kong: Hong Kong Polytechnic University, 210–18.
Li, Z. 1988: On the measure of digital terrain model accuracy. Photogrammetric Record 12, 873–77.
– 1992: Variation of the accuracy of digital terrain models with sampling interval. Photogrammetric Record 14, 113–28.
– 1994: A comparative study of the accuracy of digital terrain models based on various data models. ISPRS Journal of Photogrammetry and Remote Sensing 49, 2–11.
Li, Z. and Chen, J. 1999: Assessment of the accuracy of digital terrain models (DTMs): theory and practice. In Shi, W., Goodchild, M.F. and Fisher, P.F., editors, Proceedings of the International Symposium on Spatial Data Quality '99, Hong Kong: Hong Kong Polytechnic University, 202–209.
Liu, H. and Jezek, K.C. 1999: Investigating DEM error patterns by directional variograms and Fourier analysis. Geographical Analysis 31, 249–66.
Lodwick, W.A. and Santos, J. 2003: Constructing consistent fuzzy surfaces from fuzzy data. Fuzzy Sets and Systems 135, 259–77.
López, C. 1997: Locating some types of random errors in digital terrain models. International Journal of Geographical Information Systems 11, 677–98.
– 2000: Improving the elevation accuracy of digital elevation models: a comparison of some error detection procedures. Transactions in GIS 4, 43–64.
– 2002: An experiment on the elevation accuracy improvement of photogrammetrically derived DEM. International Journal of Geographical Information Systems 16, 361–75.
MacEachren, A.M. 1985: The accuracy of thematic maps: implications of choropleth symbolization. Cartographica 22, 38–58.
MacEachren, A.M. and Davidson, J.V. 1987: Sampling and isometric mapping of continuous geographic surfaces. The American Cartographer 14, 299–320.
Mitas, L. and Mitasova, H. 1999: Spatial interpolation. In Longley, P.A., Goodchild, M.F., Maguire, D.J. and Rhind, D.W., editors, Geographical Information Systems, volume 1: principles and technical issues, New York: Wiley, 481–92.
Monckton, C. 1994: An investigation into the spatial structure of error in digital elevation data. In Worboys, M., editor, Innovations in GIS 1, London: Taylor and Francis, 201–11.
Muller, J.-C. 1987: The concept of error in cartography. Cartographica 24, 1–15.
Murillo, M.L. and Hunter, G.J. 1997: Assessing uncertainty due to elevation error in a landslide susceptibility model. Transactions in GIS 2, 289–98.
Nelson, E.J. and Jones, N.L. 1995: Reducing elevation roundoff errors in digital elevation models. Journal of Hydrology 169, 37–49.
Oliver, M.A. and Webster, R. 1990: Kriging: a method of interpolation for geographical information systems. International Journal of Geographical Information Systems 4, 313–32.
Östman, A. 1987: Quality control of photogrammetrically sampled digital elevation models. Photogrammetric Record 12, 333–41.
Petrie, G. 1990: Photogrammetric methods of data acquisition for terrain modelling. In Petrie, G. and Kennie, T.J.M., editors, Terrain modelling in surveying and civil engineering, Caithness: Whittles Publishing, 26–48.
Pike, R.J. 2000: Geomorphometry: diversity in quantitative surface analysis. Progress in Physical Geography 24, 1–20.
Polidori, L., Chorowicz, J. and Guillande, R. 1991: Description of terrain as a fractal surface and application to digital elevation model quality assessment. Photogrammetric Engineering and Remote Sensing 57, 1329–32.
Press, W.H., Flannery, B.P., Teukolsky, S.A. and Vetterling, W.T. 1989: Numerical recipes: the art of scientific computing (FORTRAN version). Cambridge: Cambridge University Press.
Rees, W.G. 2000: The accuracy of digital elevation models interpolated to higher resolutions. International Journal of Remote Sensing 21, 7–20.
Robinson, G.J. 1994: The accuracy of digital elevation models derived from digitized contour data. Photogrammetric Record 14, 805–14.
Santos, J., Lodwick, W.A. and Neumaier, A. 2002: A new approach to incorporate uncertainty in terrain modeling. In Egenhofer, M.J. and Mark, D.M., editors, GIScience 2002: Proceedings of the Second International Geographical Information Science Conference, LNCS 2478, Berlin: Springer, 291–99.
Shearer, J.W. 1990: The accuracy of digital terrain models. In Petrie, G. and Kennie, T.J.M., editors, Terrain modelling in surveying and civil engineering, Caithness: Whittles Publishing, 315–36.
Shortridge, A. 2001: Characterizing uncertainty in digital elevation models. In Hunsaker, C.T., Goodchild, M.F., Friedl, M.F. and Case, T.J., editors, Spatial uncertainty in ecology: implications for remote sensing and GIS applications, New York: Springer, 238–57.
Skidmore, A.K. 1989: A comparison of techniques for calculating gradient and aspect from a gridded digital elevation model. International Journal of Geographical Information Systems 3, 323–34.
– 1990: Terrain position as mapped from a gridded digital elevation model. International Journal of Geographical Information Systems 4, 33–49.
Tate, N.J. and Fisher, P.F. 2005: Les erreurs dans les modèles numériques d'élévation. In Devillers, R. and Jeansoulin, R., editors, Qualité de l'information géographique, Paris: Hermès, 94–112.
Taylor, J.R. 1982: An introduction to error analysis. Oxford: Oxford University Press.
Thapa, K. and Bossler, J. 1992: Accuracy of spatial data used in geographical information systems. Photogrammetric Engineering and Remote Sensing 58, 835–41.
Theobald, D.M. 1989: Accuracy and bias issues in surface representation. In Goodchild, M.F. and Gopal, S., editors, The accuracy of spatial databases, London: Taylor and Francis, 99–106.
Torlegård, K., Östman, A. and Lindgren, R. 1986: A comparative test of photogrammetrically sampled digital elevation models. Photogrammetria 41, 1–16.
US Geological Survey (USGS), 1990: Digital elevation models: data users guide. National Mapping Program Technical Instructions, Data Users Guide 5, Department of the Interior. Reston, VA: US Geological Survey.
Veregin, H. 1989: Error modeling for the map overlay operation. In Goodchild, M.F. and Gopal, S., editors, The accuracy of spatial databases, London: Taylor and Francis, 3–18.
– 1997: The effects of vertical error in digital elevation models on the determination of flow-path direction. Cartography and Geographic Information Science 24, 67–79.
– 1999: Data quality parameters. In Longley, P.A., Goodchild, M.F., Maguire, D.J. and Rhind, D.W., Geographical Information Systems: principles and applications, volume 1, New York: Wiley, 177–89.
Walker, J.P. and Willgoose, G.R. 1999: On the effect of digital elevation model accuracy on hydrology and geomorphology. Water Resources Research 35, 2259–68.
Watson, D.F. 1992: Contouring: a guide to the analysis and display of spatial data. Computer Methods in the Geosciences, volume 10. Oxford: Pergamon Press.
Wehr, A. and Lohr, U. 1999: Airborne laser scanning – an introduction and overview. ISPRS Journal of Photogrammetry and Remote Sensing 54, 68–82.
Weng, Q. 2002: Quantifying uncertainty of digital elevation models derived from topographic maps. In Richardson, D.E. and van Oosterom, O., editors, Advances in spatial data handling: 10th International Symposium on Spatial Data Handling, Berlin: Springer, 403–18.
Wise, S. 2000: Assessing the quality for hydrological applications of digital elevation models derived from contours. Hydrological Processes 14, 1909–29.
Wolock, D.M. and Price, C.V. 1994: Effects of digital elevation model map scale and data resolution on a topography-based watershed model. Water Resources Research 30, 3041–52.
Wood, J. 1994: Visualizing contour interpolation accuracy in digital elevation models. In Hearnshaw, H.M. and Unwin, D.J., editors, Visualization in Geographical Information Systems, Chichester: Wiley, 168–80.
– 2002: Visualizing the structure and scale dependency of landscapes. In Fisher, P. and Unwin, D., editors, Virtual reality in geography, London: Taylor and Francis, 163–74.
Wood, J. and Fisher, P.F. 1993: Assessing interpolation accuracy in elevation models. IEEE Computer Graphics and Applications 13, 48–56.
Yang, X. and Hodler, T. 2000: Visual and statistical comparisons of surface modeling techniques for point-based environmental data. Cartography and Geographic Information Science 27, 165–75.
Zhang, J. and Goodchild, M.F. 2002: Uncertainty in geographical information. Research Monographs in GIS Series. London: Taylor and Francis.