D.2 COMMON REMOTE SENSING METHOD POOL

OVERVIEW

For each method of the pool, the objective, the subdivisions (where applicable) and the sensor type (optical/radar) are listed.

Method: Radiometric Calibration
Objective: The radiance measured by remote sensing systems over a given ground feature is influenced by changes in illumination, atmospheric conditions, viewing geometry, and instrument response characteristics. The need to perform correction for any or all of these influences depends directly upon the particular application at hand.
Subdivisions: Atmospheric Correction (optical); Relative Radiometric Correction (optical); Radar Calibration (radar)

Method: Geocoding
Objective: Remote sensing data are distorted by the earth curvature, relief displacement and the acquisition geometry of the satellites (i.e. variations in altitude, attitude, velocity, panoramic distortion). The intent of geometric correction is to compensate for the distortions introduced by these factors so that the corrected image will have the geometric integrity of a map.
Subdivisions: Geocoding of Optical Data (optical); Geocoding of Radar Data (radar)

Method: Topographic Correction (optical)
Objective: Topography affects not only the geometric properties of an image but also the illumination and the reflection of the scanned area. This effect is caused by the local variations of view and illumination angles due to mountainous terrain. An ideal slope-aspect correction removes all topographically induced illumination variation so that two objects having the same reflectance properties show the same Digital Number despite their different orientation to the sun's position.

Method: Speckle Filtering (radar)
Objective: Because of the coherence of the emitted waves, radar images are characterised by the shimmering phenomenon called speckle. Present solutions usually utilised for speckle filtering in order to improve the radiometric resolution of a SAR product are essentially of two types: (a) the averaging of several samples of the same scene (multi-look processing, low-pass filtering); (b) adaptive filtering, taking into account the local statistics and texture properties of one image.

Method: Coherence Images (radar)
Objective: Contrary to optical data, for which only the amplitude information of the signal is usable, in radar remote sensing it is possible to measure both the amplitude and the phase of the signal reflected by the earth's surface. The phase data, used under certain conditions, allows a new source of information to be generated: the coherence image.

Method: Image Fusion (optical, radar)
Objective: Image fusion in a general sense can be defined as the combination of two or more different images to form a new image by using a certain algorithm. It aims at the integration of all relevant information from a number of single images into one new image.

Method: Index Images (optical)
Objective: Index images provide information on the chemical composition of the target. They are applied in vegetation analysis, mineral exploration, soil type classification, and also to reduce relief-induced illumination effects.

Method: Spectral Signatures (optical)
Objective: Spectral properties of classes change with time and seasons. They are dependent on data collection conditions even after calibration, e.g. due to soil moisture or crop development. Good knowledge of spectral signatures or characteristics of classes is essential to determine e.g. suitable data collection periods or to interpret the results of an unsupervised classification. They can reduce the need for detailed ground information.

Method: Classification (optical, radar)
Objective: Information extraction from remote sensing data on land cover, crop classes etc. is typically performed with supervised or unsupervised classification procedures. Supervised classification requires significant localized ground information, whereas unsupervised classification typically depends on information about spectral properties of classes to interpret clustering results.
Subdivisions: Classification of Optical Data; Classification of Radar Data

Method: Accuracy Assessment
Objective: Because the accuracy of remotely sensed data is critical to any successful mapping project, accuracy assessment is an important tool for anyone who applies remote sensing techniques. The user of land-cover maps needs to know how accurate the product is in order to use the data efficiently. Although a number of methods for accuracy assessment are available, some of them are generally accepted within the remote sensing community and can be seen as standard approaches.

see also Parameterization Pool


PREPROCESSING METHODS FOR REMOTE
SENSING DATA
Radiometric Calibration
ATMOSPHERIC CORRECTION
1 OBJECTIVE
For the generation of mosaics of images taken at different times, or for the study of changes in the
reflectance of ground objects at different times or locations, it is usually necessary to apply a sun
elevation correction and an earth-sun distance correction. Both corrections ignore topographic and
atmospheric effects (LILLESAND & KIEFER 2000).
The objective of the so-called atmospheric correction is to retrieve the surface reflectance (which
characterises the surface properties) from remotely sensed imagery by removing the atmospheric
effects, thus improving the data analysis in many ways (RICHTER 1996):
- the influence of the atmosphere and the solar illumination is removed or at least greatly reduced;
- multitemporal scenes recorded under different atmospheric conditions can better be compared
after atmospheric correction; changes observed will be due to changes on the earth's surface and
not due to different atmospheric conditions;
- results of change detection and classification algorithms can be improved if careful consideration
of the sensor calibration aspects is taken into account (FRASER & KAUFMAN 1985);
- ground reflectance data of different sensors with similar spectral bands (e.g. Landsat TM band 3,
SPOT band 2) can be compared. This is a particular advantage for multitemporal monitoring, since
data of a certain area may not be available from one sensor for a number of orbits due to cloud
coverage; the probability of getting data with low cloud coverage will increase with the number of
sensors;
- ground reflectance data retrieved from satellite imagery can be compared to ground
measurements, thus providing an opportunity to verify the results.
2 THEORY
In the solar spectral region (0.4-2.5 µm) the images of spaceborne sensors mapping land and ocean
surfaces of the earth strongly depend on atmospheric conditions and solar zenith angle. The images
contain information about solar radiance reflected at the ground and scattered by the atmosphere. To
infer the spectral properties (reflectance) of the earth's surface the atmospheric influence has to be
eliminated. The atmosphere modifies the information of the earth's surface in several ways:
- it contributes a signal independent of the earth's surface (path radiance);
- it partly absorbs the ground reflected radiance;
- it scatters the ground reflected radiance into the neighbourhood of the observed pixel. The
scattering is caused by molecules as well as aerosols in the atmosphere. Therefore, dark areas
surrounded by bright areas appear brighter to the remote observer than to the near observer
(adjacency effect).

Thus, the atmospheric influence modifies the spectral information of the earth's surface and also
degrades the spatial resolution of sensors (RICHTER 1996).
In the thermal spectral region, ground temperature is a key parameter in geology, hydrology, and
vegetation science. The retrieval of ground temperature from remotely sensed radiance
measurements generally requires multi-band thermal sensors and some information about the surface
emissivity e. Temperature results can be checked if the scene contains calibration targets, preferably
water surfaces of known temperature.
For sensors with two or more channels in the thermal IR, the split-window technique (ANDING &
KAUTH 1970), or more generally the multi-window technique, is the standard technique to reduce the
atmospheric effects on the surface temperature (e.g. the AVHRR sensors of the NOAA satellite series).
For single-band thermal sensors, the multiband split-window technique for atmospheric correction
cannot be applied. Therefore, an assumption about the ground emissivity has to be made to calculate
the ground brightness temperature. Presently, Landsat TM is the only available high spatial resolution
satellite sensor with a thermal spectral band (RICHTER 1996).
3 METHOD(S)
The total spectral radiance measured by the sensor at any given pixel location can be expressed by

Ltot = (ρ · E · T) / π + Lp      (1)

Ltot = total spectral radiance measured by the sensor
ρ = reflectance of the object
E = irradiance on the object
T = transmission of the atmosphere
Lp = path radiance

All quantities depend on wavelength. Only the first term in equation (1) contains valid information
about ground reflectance. The second term represents the scattered path radiance, which introduces
haze in the imagery and reduces image contrast (LILLESAND & KIEFER 2000).

Sun elevation correction and earth-sun distance correction (ignoring atmospheric effects)
The sun elevation correction accounts for the seasonal position of the sun relative to the earth. Image
data acquired under different solar illumination angles are normalized by calculating pixel brightness
values as if the sun were at the zenith on each date of sensing. The correction is usually applied
by dividing each pixel value in a scene by the sine of the solar elevation angle for the particular time
and location of imaging.
The earth-sun distance correction is applied to normalize for the seasonal changes in the distance
between the earth and the sun. The irradiance from the sun decreases as the square of the earth-sun
distance.
Ignoring atmospheric effects, the combined influence of solar zenith angle and earth-sun distance on
the irradiance incident on the earth's surface can be expressed as

E = E0 · cos(θ0) / d²      (2)

E = normalized solar irradiance
E0 = solar irradiance at mean earth-sun distance
θ0 = sun's angle from the zenith
d = earth-sun distance, in astronomical units

Information on the solar elevation angle and earth-sun distance for a given scene are normally part of
the ancillary data supplied with the digital data (LILLESAND & KIEFER 2000).
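To make the procedure concrete, the following is a minimal numpy sketch (not part of the report) applying equation (2); the function name and the example values are illustrative assumptions, and the real angle and distance come from the scene's ancillary data.

```python
import numpy as np

def normalize_illumination(pixels, sun_elevation_deg, earth_sun_dist_au):
    # Invert equation (2): dividing by cos(theta0) (the sine of the solar
    # elevation angle) removes the sun-position effect, and multiplying
    # by d**2 normalizes to the mean earth-sun distance.
    theta0 = np.deg2rad(90.0 - sun_elevation_deg)  # sun's angle from zenith
    return pixels * earth_sun_dist_au ** 2 / np.cos(theta0)

# Example with made-up radiance values for a scene with a 55 degree
# solar elevation, imaged slightly farther than 1 AU from the sun.
scene = np.array([[50.0, 80.0], [120.0, 30.0]])
corrected = normalize_illumination(scene, sun_elevation_deg=55.0,
                                   earth_sun_dist_au=1.0167)
```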

Atmospheric correction
Atmospheric correction has received considerable attention from researchers in remote sensing, who
have devised a number of solution approaches. Sophisticated approaches are computationally
demanding and have only been validated on a very small scale (FALLAH-ADL et al. 1995).
KONDRATYEV et al. (1992) give a detailed overview of correction algorithms. Basically they can be
grouped as follows (a-c):

a) qualitative approaches that forgo radiative transfer modelling and reduce atmospheric effects
without performing radiative transfer calculations, such as the Tasseled-Cap approach (CRIST &
CICONE 1984), statistical approaches (CASELLES & LOPEZ 1989, HE & JANSA 1990), multispectral
methods using differences of two channels like vegetation indices (PALTRIDGE & MITCHELL 1990),
or haze compensation procedures which are designed to minimize the influence of path radiance
effects. One means of haze compensation in multispectral data is to observe the radiance recorded
over target areas of essentially zero reflectance (e.g. deep clear water in the NIR region). Any signal
observed over such an area represents the path radiance, and this value can be subtracted from all
pixels in that band (LILLESAND & KIEFER 2000); a sketch of this dark-object subtraction is given
after this list;

b) quantitative approximations utilizing simplifications for the radiative transfer calculation.
Atmospheric correction algorithms basically consist of two major steps: First, the optical characteristics
of the atmosphere are estimated either by using special features of the ground surface or by direct
measurements of the atmospheric constituents (KAUFMAN et al. 1994) or by using theoretical models.
Various quantities related to the atmospheric correction can then be computed by the radiative transfer
algorithms given the atmospheric optical properties. Second, the remotely sensed imagery can be
corrected by inversion procedures that derive the surface reflectance (RICHTER 1996).
The simplest approaches assume that detectors and data systems are designed to produce a linear
response to incident spectral radiance, resulting in a linear radiometric response function (FORSTER
1984, KÖPKE 1989, RICHTER 1990). Each spectral band and detector of the sensor has its own
response function, and its characteristics are monitored using onboard calibration lamps (and
temperature references for thermal channels). The absolute spectral radiance output of the calibration
sources is known from prelaunch calibration and is assumed to be stable over the life of the sensor.
Thus, the onboard calibration sources form the basis for constructing the radiometric response
function by relating known radiance values incident on the detectors to the resulting DNs. A linear fit to
the calibration data results in the following relationship (LILLESAND & KIEFER 2000):

DN = digital number value recorded
G = slope of response function (channel gain)
L = spectral radiance measured (over the spectral bandwidth of the channel)
B = intercept of response function (channel offset)



Lmin = the spectral radiance corresponding to a DN response of 0
Lmax = minimum radiance required to generate the maximum DN (here 255), the radiance
at which the channel saturates

Equation no. 4 can be used to convert any DN in a particular band to absolute units of spectral
radiance in that band if Lmax and Lmin are known from the sensor calibration.
More complex approaches are the Kondratyev-Sobolev approximation (KONDRATYEV et al. 1992),
ATCOR (Lowtran7) (RICHTER 1990), and 5S (TANRE et al. 1990).

c) quantitative tabular approaches based on exact radiative transfer calculations (FRASER et al. 1992,
HABA et al. 1979, KAUFMAN & SENDRA 1989, SINGH 1992, TEILLET et al. 1987).
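As a concrete illustration of approaches (a) and (b), the following minimal numpy sketch (not part of the report) chains the DN-to-radiance conversion of equation (4) with the dark-object haze compensation of approach (a); the calibration values and the input array are hypothetical.

```python
import numpy as np

def dn_to_radiance(dn, l_min, l_max, dn_max=255):
    # Linear response function inverted as in equation (4):
    # L = ((Lmax - Lmin) / 255) * DN + Lmin
    return (l_max - l_min) / dn_max * dn + l_min

def dark_object_subtraction(radiance):
    # Haze compensation: take the darkest value in the band (ideally
    # found over a zero-reflectance target such as deep clear water in
    # the NIR) as an estimate of the path radiance Lp and subtract it.
    lp = radiance.min()
    return radiance - lp

# Hypothetical calibration values; real Lmin/Lmax come from the sensor
# calibration supplied with the data.
dn = np.array([[12, 40], [200, 255]], dtype=float)
radiance = dn_to_radiance(dn, l_min=-1.5, l_max=152.0)
surface_signal = dark_object_subtraction(radiance)
```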

TYPE OF SENSORS:
optical
CONTACT
Bettina Möschen
Friedrich-Schiller-University of Jena
Institute for Geography
Dept. of Geoinformatics, Hydrology and Modelling
Loebdergraben 32
07743 Jena
Germany
phone: +49-3641-9488 60
fax: +49-3641-9488 62
e-mail
REFERENCES
ANDING, D. & KAUTH, R. (1970):
Estimation of Sea Surface Temperature from Space. Remote Sensing of the Environment,
no. 1, pp. 217-220.
BUETTNER, K. J. K. & KERN, C. D. (1965):
The determination of infrared emissivities of terrestrial surfaces. J. Geophys. Research, no.
70, pp. 1329-1337.
CASELLES, V. & LOPEZ, G. M. J (1989):
An alternative simple approach to estimate atmospheric correction in multitemporal studies.
Int. J. Rem. Sens., 10, pp. 1127-1134.
CRIST, E. P. & CICONE, R. C. (1984):
A physically-based transformation of Thematic Mapper data - the TM tasseled cap. IEEE
Trans. Geosci. Rem. Sens., GE-22, pp. 256-263.
FALLAH-ADL, H., JAJA, J., LIANG, S., KAUFMAN, Y.J., TOWNSHEND, J. (1995):
Efficient Algorithms for Atmospheric Correction of Remotely Sensed Data. Supercomputing
'95, IEEE Computer Society Press, Dec. 1995.
FORSTER, B. C. (1984):
Derivation of atmospheric correction procedures for Landsat MSS with particular reference
to urban data. Int. J. Rem. Sens., 5, pp. 799-817.
FRASER, R. S. & KAUFMAN, Y. J. (1985):
The relative importance of scattering and absorption in remote sensing. IEEE Transactions
on Geosciences and Remote Sensing, no. 23, pp. 625-633.
FRASER, R. S., FERRARE, R. A., KAUFMAN, Y. J., MARKHAM, G. L., MATTOO, S. (1992):
Algorithms for atmospheric corrections of aircraft and satellite imagery. Appl. Opt., 24, pp.
81-93.
HABA, Y., KAWATA, Y., KUSAKA, T., UENO, S. (1979):
The system of correcting remotely sensed earth imagery for atmospheric effects. Proc. 13th
Int. Symp. Rem. Sens. Environ., Ann Arbor, MI, pp. 1883-1894.
HE, G. & JANSA, J. (1990):
Eine radiometrische Anpassungsmethode für die Mosaikherstellung aus digitalen Bildern.
ZPF, 58, pp. 43-47.
KAUFMAN, Y. J. (1989):
The atmospheric effect on remote sensing and its correction. ASRAR, G. (ED., 1989):
Optical Remote Sensing, technology and application, Chapter 9, Wiley.
KAUFMAN, Y. J. & SENDRA, C. (1989):
Algorithm for automatic atmospheric corrections to visible and near-IR satellite imagery. Int.
J. Rem. Sens., 9, pp. 1357-1381.
KAUFMAN Y. J., GITELSON, A., KARNIELI, A., GANOR, E., FRASER, R. S., NAKAJIMA, T.,
MATTOO, S., HOLBEN, B. N. (1994):
Size distribution and scattering phase function of aerosol particles retrieved from sky
brightness measurements. JGR-Atmospheres, no. 99, pp. 10341-10356.
KONDRATYEV, K. J., KOZODEROV, V. V., SMOTKY, O. L. (1992):
Remote Sensing of the Earth from Space: Atmospheric Correction. Springer, New York.
KÖPKE, P. (1989):
Removal of atmospheric effects from AVHRR albedos. J. Appl. Met., 28, pp. 1342-1348.
LILLESAND T.M. & KIEFER R.W. (2000):
Remote Sensing and Image Interpretation. 4th Edition, John Wiley & Sons, New York, NY,
pp. 477-482.
PALTRIDGE, G. W. & MITCHELL, R. M. (1990):
Atmospheric and viewing angle correction of vegetation indices and grassland fuel moisture
content derived from NOAA/AVHRR. Rem. Sens. Environm., 31, pp. 121-135.
POPP, T. (1993):
Korrektur der atmosphärischen Maskierung zur Bestimmung der spektralen Albedo von
Landoberflächen aus Satellitenmessungen. DLR-FB 93-32, 137 p.
RICHTER, R. (1996):
A Spatially-Adaptive Fast Atmospheric Correction Algorithm. ERDAS IMAGINE - ATCOR2
User Manual (Version 1.0).
RICHTER, R. (1990):
A fast atmospheric correction algorithm applied to Landsat TM images. Int. J. Remote
Sensing, no. 11, pp. 159-166.
TANRE, D., DESCHAMPS, P. Y., DUHAUT, P., HERMAN, M. (1987):
Adjacency effect produced by the atmospheric scattering in Thematic Mapper data. J.
Geophys. Research, no. 92 (D10), pp. 12000-12006.
TANRE, D., DEROO, C., DUHAUT, P., HERMAN, M., MORCETTE, J. J., PERBOS, J., DECHAMPS,
P. Y. (1990):
Technical note : Description of a computer code to simulate the satellite signal in the solar
spectrum: The 5S code. Int. J. Rem. Sens., pp. 659-668.
TEILLET, P. M., O'NEILL, N. T., KALINAUSKAS, A., STURGEON, D., FEDOSEJEVS, G. (1987):
A dynamic regression algorithm for incorporating atmospheric models into image correction
procedures. Proc. IGARSS '87, Ann Arbor, pp. 913-918.
SINGH, S. M. (1992):
Fast atmospheric correction algorithm. Int. J. Rem. Sens., 13, pp. 933-938.
back to overview



Relative Radiometric Correction
1 OBJECTIVE
Relative radiometric correction is a method of correction that applies one image as a reference and
adjusts the radiometric properties of subject images to match the reference (HALL et al. 1991, YUAN
& ELVIDGE 1996). Rectified images appear to have been acquired with the reference image sensor,
under atmospheric and illumination conditions equal to those in the reference scene (HALL et al.
1991).
Relative image to image calibration can be used if either the percentage of total pixels whose DNs
have changed in the image is small relative to the entire image, or if the overall reflectance distribution
and dynamic range remain rather constant except for image-wide, low frequency differences
(CHAVEZ & McKINNON 1994). This kind of normalisation does not require ancillary data on, for
instance, atmospheric temperature, relative humidity and/or aerosol backscatter, the collection of
which is normally very demanding in terms of logistics and personnel time.
2 THEORY
A wide range of algorithms has been developed. A common form for linear radiometric rectification is
u_k = a_k · x_k + b_k, where the derivation of the normalisation coefficients a_k and b_k varies
according to the algorithm selected. u_k is the normalised DN of band k in image X on date 1 and x_k
is the original value of the same band.
In the following method section y_k symbolises the DN of band k in the reference image Y. s_x and
s_y denote the standard deviations of X and Y respectively, s_xy denotes the covariance and s_xx the
variance (see Table 1).
3 METHOD(S)

Haze Correction
Haze correction (LILLESAND & KIEFER 2000, YUAN & ELVIDGE 1996) is a simple method that
assumes that objects with zero reflectance should have the same minimum DN on both reference and
subject images.

Simple Regression Normalisation
Simple Regression Normalisation (COLWELL 1983, YUAN & ELVIDGE 1996) applies the least-
squares regression equation in order to derive normalisation coefficients.

No Change Normalisation
The No Change Normalisation (ELVIDGE et al. 1995, MAS 1999, YUAN & ELVIDGE 1996) is also
based on the least-squares regression equation, but for the derivation of the coefficients it only applies
the 'no change' pixels that are located in a narrow central belt of the scattergram. Compared to the
simple regression method it avoids statistical outliers while still applying a relatively large set
of pixels. However, both methods depend on an accurate geometric correction as they use the
covariance in the calculation of the coefficients.


Table 1: Methods for radiometric normalisation of remote sensing data. Each method defines the
coefficients a_k and b_k of the linear rectification u_k = a_k · x_k + b_k; mean(), min and max denote
band statistics, and the superscripts (dark), (bright), (nc) and (pi) denote the pixel subsets from which
they are computed.

Method                                      | a_k                                                                        | b_k
Dark set - Bright set Normalisation         | (mean y_k(bright) - mean y_k(dark)) / (mean x_k(bright) - mean x_k(dark))  | mean y_k(dark) - a_k · mean x_k(dark)
Haze Correction                             | 1                                                                          | y_k(min) - x_k(min)
Mean - Standard Deviation Normalisation     | s_yk / s_xk                                                                | mean y_k - a_k · mean x_k
Minimum - Maximum Normalisation             | (y_k(max) - y_k(min)) / (x_k(max) - x_k(min))                              | y_k(min) - a_k · x_k(min)
No Change (nc) Normalisation                | s_xy,k(nc) / s_xx,k(nc)                                                    | mean y_k(nc) - a_k · mean x_k(nc)
Pseudo-Invariant (pi) Feature Normalisation | s_y,k(pi) / s_x,k(pi)                                                      | mean y_k(pi) - a_k · mean x_k(pi)
Simple Regression Normalisation             | s_xy,k / s_xx,k                                                            | mean y_k - a_k · mean x_k
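As an illustration of Table 1, the following numpy sketch (not part of the report) derives the Simple Regression and No Change coefficients; the belt criterion |y - x| <= half_width is a simplified stand-in for the scattergram-controlled selection of ELVIDGE et al. (1995), and all names and example values are illustrative.

```python
import numpy as np

def regression_coefficients(x, y):
    # Simple Regression Normalisation (Table 1): a_k = s_xy / s_xx,
    # b_k = mean(y) - a_k * mean(x).
    cov = np.cov(x.ravel(), y.ravel())
    a = cov[0, 1] / cov[0, 0]
    b = y.mean() - a * x.mean()
    return a, b

def no_change_coefficients(x, y, half_width):
    # No Change Normalisation: use only pixels in a narrow central belt
    # of the X-Y scattergram (simplified selection criterion).
    nc = np.abs(y - x) <= half_width
    return regression_coefficients(x[nc], y[nc])

# Synthetic example: a subject band and a co-registered reference band
rng = np.random.default_rng(0)
subject = rng.integers(0, 255, (100, 100)).astype(float)
reference = 0.9 * subject + 5.0 + rng.normal(0.0, 2.0, subject.shape)

a_k, b_k = no_change_coefficients(subject, reference, half_width=10.0)
normalised = a_k * subject + b_k          # u_k = a_k * x_k + b_k
```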



Histogram Matching/Equalisation
Histogram matching/equalisation (CHAVEZ & McKINNON 1994) applies the histograms of the
reference and subject images to identify DN values of pre-selected cumulative percentage points (10,
20, 30 etc.). A histogram-matching transformation is then applied to the subject image so that its
histogram will have the same characteristics as the reference at the selected sampling interval (i.e. the
same cumulative percentages occur at the same DNs). This method assumes that the reference and
subject images have the same dynamic range and DN characteristics at the given sampling intervals.

Minimum-Maximum and Mean-Standard Deviation Normalisation
The minimum-maximum and the mean-standard deviation normalisation (YUAN & ELVIDGE 1996) are
two different methods that both apply statistical parameters, i.e. respectively the minimum and
maximum, and the mean and standard deviation, in order to derive normalisation coefficients.

Dark set - Bright set Normalisation
The Dark set - Bright set Normalisation (HALL et al. 1991, YUAN & ELVIDGE 1996) is similar to the
minimum-maximum as it also relies on the extreme values to derive normalisation coefficients,
however, this method applies the average of a set of, respectively, dark and bright pixels. The sets of
pixels are extracted from the subject and reference images through Kauth-Thomas transformation
(HALL et al. 1991), i.e. pixels are extracted from the extremes in the greenness-brightness histogram
and, consequently, they do not have to be the same pixels from image to image.

Pseudo-Invariant Normalisation
The Pseudo-Invariant Normalisation (YUAN & ELVIDGE 1996) also uses a transformation, i.e. the
NIR-to-R ratio and a NIR threshold, to select the pixels used for the calculation of normalisation
coefficients. These pseudo-invariant objects are normally man-made and are assumed not to have
experienced any significant change in reflectivity from date 1 to date 2.

Normalisation Targets
The method of Normalisation Targets (ECKHARDT et al. 1990, JENSEN et al. 1995) also applies a set
of pixels that are assumed to have been stable over a period. They are defined in both the reference
and the subject images and form the basis for the derivation of regression coefficients. These targets
are chosen according to a set of acceptance criteria (op. cit.) and are assumed to be constant
reflectors. Any changes in their brightness values should, therefore, be attributed to detector
calibration, astronomic, atmospheric, and phase angle differences.

The majority of these methods have been inter-compared and assessed by YUAN & ELVIDGE (1996).
According to their results, methods employing low percentages of the image data, extracted from
atypical cover types, to derive normalisation coefficients (i.e. dark set - bright set normalisation and
pseudo-invariant normalisation) do not perform well because, for the most part, they work well only for
the small image area defined in the normalisation procedure. Rather, the variables employed should
be derived from a larger subset of the image; areas of apparent change, however, have to be excluded.

TYPE OF SENSORS:
optical
CONTACT
Gidske Andersen
Nansen Environmental and Remote Sensing Center
Edv. Griegsvei 3 A
Solheimsviken
N-5037 Bergen
NORWAY
phone: +47 55 29 72 88
fax: +47 55 20 00 50
e-mail

REFERENCES
COLWELL, R.N. (ED.) (1983):
Manual of Remote Sensing. American Society of Photogrammetry.
CHAVEZ, P.S. & MACKINNON, D.J. (1994):
Automatic Detection of Vegetation Changes in the Southwestern United States Using
Remotely Sensed Images. Photogrammetric Engineering and Remote Sensing 60(5): 571-
583.
ELVIDGE, C.D., YUAN, D., WERACKOON, R.D. & LUNETTA, R.S. (1995):
Relative Radiometric Normalization of Landsat Multispectral Scanner (MSS) Data Using an
Automated Scattergram Controlled Regression. Photogrammetric Engineering and Remote
Sensing 61(10): 1255-1260.
HALL, F.G., STREBEL, D.E., NICKESON, J.E. & GOETZ, S.J. (1991):
Radiometric Rectification: Toward a common Radiometric Response Among Multidate,
Multisensor Images. Remote Sensing of Environment (35): 11-27.
JENSEN, J.R., RUTCHEY, K., KOCH, M. & NARUMALANI, S. (1995):
Inland Wetland Change Detection in the Everglades Water Conservation Area 2A Using a
Time Series of Normalized Remotely Sensed Data. Photogrammetric Engineering and
Remote Sensing 61(2): 199-209.
LILLESAND & KIEFER (2000):
Remote Sensing and Image Interpretation. 4th Edition, John Wiley & Sons, Inc.
MAS, J.-F. (1999):
Monitoring land-cover changes: a comparison of change detection techniques. International
Journal of Remote Sensing 20(1): 139-152.
YUAN, D. & ELVIDGE, C.D. (1996):
Comparison of relative radiometric normalization techniques. ISPRS Journal of
Photogrammetry and Remote Sensing 51(3): 117-126.
back to overview
Radar Calibration
1 OBJECTIVE
Radar imaging produces an image in which the value of each pixel is proportional to the amplitude of
backscattered electromagnetic radiation from the illuminated surface. For a point target, the
backscattered power is related to the emitted power. In the case of extended surfaces composed of a
multitude of elementary scatterers, the term backscattering coefficient is used. This coefficient is noted
, with no unit, has a high dynamic and is usually expressed in decibels (dB). The backscattering
coefficient provides information about the surface being observed. It is a function of the frequency, the
polarisation and angle of incidence of the emitted waves as well as of the geometric and physical
properties of the illuminated surface. The backscattering coefficient needs to be calibrated which
renders its measurement comparable in time and space.
2 THEORY
The methods of SAR calibration, of which an overview is given by FREEMAN (1992), depend on the
SAR system characteristics and are therefore always documented by the data provider. In the case
of ERS SAR data, calibration is achieved via the following expression (LAUR et al. 1996):
3 METHOD(S)

σ⁰ = (1/K) · ( (1/N) · Σ(i=1..N) DN_i² ) · sin(θ) / sin(θ_ref)

N = number of pixels within the Area Of Interest (AOI), i.e. the group of pixels corresponding
to the distributed target in the image
DN_i = digital number corresponding to the pixel at location (i,j)
θ = average incidence angle within the distributed target
θ_ref = reference incidence angle, i.e. 23.0 degrees
K = calibration constant (specific to the type of data product and to the processing centre
PAF (Processing and Archiving Facilities))

In addition to this procedure, the number of pixels used for the derivation of the backscattering
coefficient needs to be large enough in order to be statistically valid (FELLAH et al. 1996).
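The following is a minimal numpy sketch (not part of the report) of this σ⁰ derivation for an ERS PRI distributed target; the AOI array and the calibration constant are made up, since the real K is taken from the product annotation.

```python
import numpy as np

def sigma_nought_db(dn_aoi, k_cal, theta_deg, theta_ref_deg=23.0):
    # sigma0 = (1/K) * mean(DN_i**2) * sin(theta) / sin(theta_ref),
    # then converted to decibels.
    mean_power = np.mean(dn_aoi.astype(float) ** 2)
    sigma0 = (mean_power / k_cal
              * np.sin(np.deg2rad(theta_deg))
              / np.sin(np.deg2rad(theta_ref_deg)))
    return 10.0 * np.log10(sigma0)

# Illustration with a synthetic AOI and a hypothetical calibration constant
aoi = np.random.default_rng(1).integers(100, 1000, (50, 50))
print(sigma_nought_db(aoi, k_cal=6.0e5, theta_deg=23.0))
```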
TYPE OF SENSORS:
radar

CONTACT
Dr. Kader Fellah
Université Louis Pasteur Strasbourg
Service Régional de Traitement d'Image et de Télédétection
Parc d'Innovation
Boulevard Sébastien Brant
F - 67400 Illkirch
FRANCE
phone: +33 (0)3 90 24 46 42 (direct) / 46 47 (standard)
fax: +33 (0)3 90 24 46 46
e-mail

REFERENCES
FREEMAN, A. (1992):
SAR Calibration: An Overview. IEEE Transactions on Geoscience and Remote Sensing,
Vol. 30, No. 6, pp. 1107-1121.
LAUR, H., BALLY, P., MEADOWS, P., SANCHEZ, J., SCHATTLER, B. & LOPINTO, E. (1996):
Derivation of the Backscattering coefficient sigma0 in ESA ERS SAR PRI Products. ERS
SAR Calibration, Issue 2, Rev. 2, NES-TN-RS-PM- HL09, ESRIN, ESA.
FELLAH K., BALLY P., BESNUS Y., MEYER C., RAST M., & DE FRAIPONT P. (1996):
Impact of SAR radiometric accuracy in hydrological and agro-environmental applications. A
case study in multi-scale soil moisture retrieval over the Alsace plain. Proceedings of
Retrieval of bio- and geophysical parameters from SAR data for land applications, Toulouse,
17-20 October 1995, IEEE/CNES, pp. 337-346.
back to overview



Geocoding
GEOCODING OF OPTICAL DATA
1 OBJECTIVE
Remote sensing data are distorted by the earth curvature, relief displacement and the acquisition
geometry of the satellites (i.e. variations in altitude, attitude, velocity, panoramic distortion). The intent
of geometric correction is to compensate for the distortions introduced by these factors so that the
corrected image will have the geometric integrity of a map (LILLESAND & KIEFER 2000). Generally,
a distinction is made between absolute rectification and relative registration. For absolute rectification
the image is registered with ground control points from a topographical map. Relative registration
overlays images of the same area which have been acquired at different times or from different
satellites; one image is declared the master, and the other one is registered to its geometry. Geocoded
imagery of the highest accuracy can only be achieved with absolute rectification.
Exact geocoding is very important for the analysis of changes (change detection), and the extraction
of distances and areal balances also requires reliable rectification. Detailed descriptions of geocoding
methods can be found in current remote sensing textbooks (MATHER 1987, LILLESAND & KIEFER
2000, BÄHR & VÖGTLE 1998, RICHARDS & JIA 1998, etc.).
2 THEORY
Geocoding requires the correction of systematic and non-systematic distortions using polynomial
transformations. Systematic distortions are corrected by applying mathematical formulas that model
the distortion sources. An example of such a systematic distortion is the eastward rotation of the earth
during satellite image acquisition. Each scan line therefore covers an area slightly shifted to the west
of the previous one. To correct this skew distortion each scan line is offset slightly to the west, which
gives multispectral satellite imagery its parallelogram shape. Random distortions can result
from movements of the sensor platform (roll, pitch and yaw) and are corrected with recorded satellite
orientation information or ground control points (GCPs).
Three approaches to geometric correction can be distinguished: relative correction, non-parametric
correction, and parametric correction (orthorectification).
3 METHOD(S)

Relative Correction/ Non-Parametric Correction
The correction of non-systematic distortions can be split into the following methodological steps:
- accurate localisation of ground control points
- calculation of the transformation matrix
- transformation into the new coordinate system
- resampling to adjust the pixels to the new coordinate system.
Accurate localisation of the ground control points and calculation of the transformation matrix:
Ground control points are distinctive, temporally stable objects in the image (airports, road crossings,
bridges, etc.) with known coordinates, necessary for the calculation of the transformation coefficients
that transfer the uncorrected image (with image coordinates of column and row numbers) into a
cartographic projection (usually an orthogonal system like UTM, etc.). They have to be selected
carefully, since all
other image pixels will be transformed according to their position. Therefore the GCPs should be
distributed regularly over the image, also considering different altitude levels, to guarantee a reliable
rectification. Row and column values as well as the orthogonal coordinates are then submitted to a
least-squares regression analysis to determine the coefficients for the transformation equations. The
transformations are nth-order polynomial transformations, which recalculate the original image
geometry from the distorted acquisition:

x = f1(X, Y)
y = f2(X, Y)

(x, y) = distorted image coordinates (column, row)
(X, Y) = correct (map) coordinates
f1, f2 = transformation functions
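A minimal numpy sketch (not part of the report) of the least-squares fit of a first-order (affine) polynomial from GCPs; it also computes the rms-error used below to judge the fit. All names are illustrative.

```python
import numpy as np

def fit_first_order(map_xy, img_xy):
    # Least-squares fit of a first-order (affine) transformation
    #   x = a0 + a1*X + a2*Y,   y = b0 + b1*X + b2*Y
    # map_xy: (n, 2) map coordinates (X, Y) of the GCPs
    # img_xy: (n, 2) corresponding image coordinates (x, y)
    X, Y = map_xy[:, 0], map_xy[:, 1]
    design = np.column_stack([np.ones_like(X), X, Y])
    coeff_x, *_ = np.linalg.lstsq(design, img_xy[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(design, img_xy[:, 1], rcond=None)
    return coeff_x, coeff_y

def rms_error(map_xy, img_xy, coeff_x, coeff_y):
    # Root-mean-square residual of the fit in pixels
    X, Y = map_xy[:, 0], map_xy[:, 1]
    design = np.column_stack([np.ones_like(X), X, Y])
    dx = design @ coeff_x - img_xy[:, 0]
    dy = design @ coeff_y - img_xy[:, 1]
    return np.sqrt(np.mean(dx ** 2 + dy ** 2))
```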


Figure 1: Superimposition of the geometrically correct output matrix on the distorted image matrix
(LILLESAND & KIEFER 2000).

Transformation into the new coordinate system and resampling:
Since a cell in the output matrix will not directly overlay a pixel in the input matrix, the pixels have to be
assigned by a resampling procedure. We therefore first define an undistorted output matrix of empty
map cells and then fill each cell with the grey level of the corresponding pixels in the distorted image
(LILLESAND & KIEFER 2000).
Several resampling methods are used: Nearest Neighbour, Bilinear and Bicubic (BERNSTEIN 1978,
MOIK 1980). Fig. 1 shows how the dark pixel will be filled using the different resampling methods.
With Nearest Neighbour resampling, the closest grey value (a) will be used; with Bilinear resampling
the new grey value will be a distance-weighted average of (a) and the three neighbours (b); with
Bicubic resampling it will be a weighted average of the 16 surrounding pixels. The advantage of
Nearest Neighbour resampling is that the original grey values are kept without any spatial averaging.
Features in the output matrix may, however, have a spatial offset of up to one-half pixel, which can
cause a staircase-like, disjointed appearance of linear features in the output image. Bilinear and
Bicubic resampling include the neighbouring pixels in a weighted average, which smooths the image
and alters the original radiometry.
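The three resampling options can be sketched with scipy.ndimage.map_coordinates, assuming the inverse transformation has already been evaluated on every cell of the output grid; this is an illustration, not the report's implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def resample(image, rows, cols, method="nearest"):
    # rows/cols hold, for every cell of the undistorted output matrix,
    # the (generally fractional) position in the distorted input image
    # given by the inverse transformation x = f1(X, Y), y = f2(X, Y).
    order = {"nearest": 0, "bilinear": 1, "bicubic": 3}[method]
    return map_coordinates(image, [rows, cols], order=order, mode="nearest")
```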
ARSGISIP Final Report V ANNEXES D.2 Common Remote Sensing Method Pool

V-129
The accuracy is estimated with the rms-error (root-mean-square error), a measure of the distance
between the original and the transformed coordinate of a point, averaged over the whole image. In
general, subpixel accuracy (an rms-error of less than 1 pixel) is aimed for.

Parametric Correction: Orthorectification
If the remote sensing data covers terrain with high relief, or a mosaic has to be made from several
images, orthorectification is necessary. Orthorectification is the method of removing the effects of
sensor geometry and terrain variation. All objects are shown in a perpendicular parallel projection
(comparable to a topographical map), which means each point looks as if an observer were looking
straight down at it. Orthorectified remote sensing data is therefore often used as background
information for the visualisation of vector information in Geographic Information Systems. It is
essential for remote sensing evaluations in high mountainous terrain, since the resulting relief
displacement can be more than one pixel. The height information is usually provided by a Digital
Elevation Model (DEM). Relief displacement is corrected by taking each pixel of a DEM and finding
the equivalent position in the satellite or aerial image. After interior orientation, three-dimensional
ground control points are entered for exterior orientation. The triangulation is then calculated using a
least-squares block bundle adjustment. The grey value is assigned through resampling of the
neighbouring pixels. The quality of the ortho image depends significantly on the quality of the DEM.
While near-vertical SPOT scenes can be processed with very coarse DEMs, aerial photography at
scales of 1:60 000 and larger needs height information of 1 m vertical accuracy (KRAUS 1994);
oblique SPOT scenes (up to 25 degrees off-nadir) likewise require fairly detailed DEMs.

TYPE OF SENSORS:
optical
CONTACT
Dr. Volker Hochschild
Friedrich-Schiller-University of Jena
Institute for Geography
Dept. of Geoinformatics, Hydrology and Modelling
Loebdergraben 32
07743 Jena
Germany
phone: +49-3641-948 855
fax: +49-3641-948 852
e-mail

REFERENCES
BÄHR, H.P. & VÖGTLE, T. (1998):
Digitale Bildverarbeitung. - Heidelberg.
BERNSTEIN, R. (1978):
Digital Image Processing for Remote Sensing. - New York.
KRAUS, K. (1994):
Photogrammetrie, Band 1, Grundlagen und Standardverfahren. 5th Edition, Ferdinand
Dümmlers Verlag, Bonn.
LILLESAND, T.M. & R.W. KIEFER (2000):
Remote Sensing and Image Interpretation. 4th Edition, John Wiley & Sons, New York.
MATHER, P.M. (1987):
Computer processing of remotely sensed images. - Chichester.
MOIK, J.G. (1980):
Digital Processing of Remotely sensed Images. - NASA SP-431, Washington D.C.
RICHARDS, J.A. & X. JIA (1998):
Remote Sensing Digital Image Analysis. - Berlin.
back to overview
GEOCODING OF RADAR DATA
1 OBJECTIVE
The principle of side-looking SAR is measurement of the electromagnetic signal round trip time for the
determination of slant ranges to objects and the strength of the returned signal. This principle causes
several types of geometrical distortions.
2 THEORY
SAR data geocoding is a very important step for many users because SAR data should be
geometrically correct in order to be compared or integrated with other types of data (satellite images,
maps, etc.). Geocoding an image consists of introducing spatial shifts in the original image in order to
establish a correspondence between the position of points in the final image and their location in a
given cartographic projection (GUINDON & ADAIR 1992).
Radiometric distortions also exist in connection with terrain relief and often cannot be completely
corrected. In addition, resampling of the image can introduce radiometric errors. For these reasons,
the thematic user of the image needs information on what he should expect in terms of interpretability
of geocoded images for a given thematic application. A layover/shadowing mask and a local incidence
angles map are both helpful for many applications.
3 METHOD(S)
Geocoding (map projection) is generally applied to the slant-range SAR image after terrain
correction (orthorectification). Most image processing software packages (ERDAS Imagine, Earth
View, ...) allow the geocoding of SAR images. This is usually done by selecting Control Points (CPs)
on a reference map and on the SAR image and then applying an affine transformation (MEIER et al.
1993).
TYPE OF SENSORS:
radar

CONTACT
Dr. Kader Fellah
Université Louis Pasteur Strasbourg
Service Régional de Traitement d'Image et de Télédétection
Parc d'Innovation
Boulevard Sébastien Brant
F - 67400 Illkirch
FRANCE
phone: +33 3 8865 5200 (5192)
fax: +33 3 8865 5199
e-mail
REFERENCES
MEIER E., FREI U., NUESCH D. (1993):
Precise Terrain Corrected Geocoded Images. Chapter in "SAR Geocoding: Data and
Systems", ed. G. Schreier, 1993, Wichmann, pp. 173-186.
GUINDON, B. & ADAIR, M. (1992):
Analytic Formulation of Spaceborne SAR Image Geocoding and 'Value Added' Product
Generation Procedures Using Digital Elevation Data. Canadian Journal of Remote Sensing,
Vol. 18, pp. 2-12.
back to overview


Topographic Correction
1 OBJECTIVE
Topography does not only affect the geometric properties of an image but will as well have an impact
on the illumination and the reflection of the scanned area. This effect is caused by the local variations
of view and illumination angles due to mountainous terrain. Therefore identical land-cover might be
represented by totally different intensity values depending on its orientation and on the position of the
sun at the time of data acquisition.
Neglecting the atmospheric influence and the adjacency effects we can state that in the visible and
near-infrared bands the direct sun radiation is the only illuminating factor. Most objects, however,
including forest, have non-Lambertian reflectance characteristics, and the effects of topography on
scene radiance cannot be neglected in rugged terrain. Within this chapter we focus on the correction
of slope-aspect effects; the influence of adjacent slopes and of optical thickness is neglected.
2 THEORY
An ideal slope-aspect correction removes all topographically induced illumination variation so that two
objects having the same reflectance properties show the same Digital Number despite their different
orientation to the sun's position. As a visible consequence the three-dimensional relief impression of a
scene gets lost and the image looks flat.
In order to achieve this result several radiometric correction procedures have been developed.
Besides empirical approaches, such as image ratioing, which do not take into account the physical
behaviour of scene elements, early correction methods were based on the Lambertian assumption,
i.e. the satellite images are normalised according to the cosine of the effective illumination angle
(SMITH et al. 1980). However, most objects on the earth's surface show non-Lambertian reflectance
characteristics (MEYER et al. 1993). Therefore the cosine correction had to be extended by
introducing parameters which simulate the non-Lambertian behaviour of the surface (CIVCO 1989,
COLBY 1991). The estimation of these parameters is generally based on a linear regression between
the radiometrically distorted bands and a shaded terrain model. A comparison of four correction
methods, including the non-parametric cosine correction, confirmed a significant improvement in
classification results when the parametric models were applied (MEYER et al. 1993).
3 METHOD(S)

Cosine Correction
The cosine correction is a statistic-empirical method. Such approaches are based on a significant
correlation between a dependent and one or several independent variables. The quality of such a
correction naturally depends on the explanatory power of the regression function.
The cosine correction is often applied in flat terrain to equalise illumination differences due to different
sun positions in multitemporal data sets. It is a strictly trigonometric approach based on physical law,
assuming a Lambertian reflection characteristic of the objects and neglecting the presence of an
atmosphere.
The amount of irradiance reaching an inclined pixel is proportional to the cosine of the incidence angle
i, where i is defined as the angle between the normal on the pixel and the direction of the sun. Only
the fraction cos(i) · Ei of the total incoming irradiance Ei reaches the inclined pixel. The cosine law
only takes into account the sun's position in the form of the sun's zenith angle:

LH = LT · cos(sz) / cos(i)

LH = radiance observed for a horizontal surface;
LT = radiance observed over sloped terrain;
sz = sun's zenith angle;
i = sun's incidence angle in relation to the normal on a pixel.

The cosine correction only models the direct part of irradiance. As weakly illuminated regions receive a
considerable amount of diffuse irradiance, these areas show a disproportional brightening effect when
corrected (the smaller the cos(i), the stronger the overcorrection).
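A minimal numpy sketch (not part of the report) of this correction; the clipping of cos(i) away from zero is an added safeguard of this sketch, since faintly illuminated pixels otherwise blow up, which is exactly the overcorrection described above.

```python
import numpy as np

def cosine_correction(l_t, sun_zenith_deg, cos_i):
    # LH = LT * cos(sz) / cos(i); cos_i is the per-pixel cosine of the
    # sun's incidence angle, derived from a DEM (slope, aspect) and the
    # sun position at acquisition time.
    cos_sz = np.cos(np.deg2rad(sun_zenith_deg))
    return l_t * cos_sz / np.clip(cos_i, 0.05, None)
```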

ARSGISIP Final Report V ANNEXES D.2 Common Remote Sensing Method Pool

V-132
Minnaert Correction
The Belgian astrophysicist M.G.J. Minnaert modified the common cosine correction by adding a
constant k:

LH = LT · (cos(sz) / cos(i))^k

k = Minnaert constant.

The parameter k is considered a measure of the extent to which a surface is Lambertian (in
which case k = 1). The values of k vary between 0 and 1. In areas with cos(i) near 0, k increases
the denominator and counteracts the overcorrection obtained with the common cosine correction. The
parameter k can be determined empirically by linearising the equation logarithmically and estimating
the slope of a linear regression.
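The empirical determination of k can be sketched as follows (not from the report; the reconstructed model form above is assumed):

```python
import numpy as np

def estimate_minnaert_k(l_t, cos_i, sun_zenith_deg):
    # Linearise LH = LT * (cos(sz)/cos(i))**k logarithmically:
    #   log(LT) = log(LH) + k * log(cos(i)/cos(sz))
    # so k is the slope of a regression of log(LT) on log(cos(i)/cos(sz)).
    cos_sz = np.cos(np.deg2rad(sun_zenith_deg))
    valid = (l_t > 0) & (cos_i > 0)
    x = np.log(cos_i[valid] / cos_sz)
    y = np.log(l_t[valid])
    k, _intercept = np.polyfit(x, y, 1)
    return k
```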

C-Correction
Bringing the original data into the form LT = m · cos(i) + b, we can introduce a parameter c, defined
as the quotient of b and m of the regression line. The parameter c enters the cosine law as an
additive term:

c = b / m
LH = LT · (cos(sz) + c) / (cos(i) + c)

c = correction parameter;
m = slope of the regression line;
b = intercept of the regression line;
LH = radiance observed for a horizontal surface;
LT = radiance observed over sloped terrain;
sz = sun's zenith angle;
i = sun's incidence angle in relation to the normal on a pixel;

The effect of c is similar to that of the Minnaert constant. It increases the denominator and weakens
the overcorrection of faintly illuminated pixels.
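A compact numpy sketch (not part of the report) that derives c from the band-vs-cos(i) regression and applies the correction; names are illustrative.

```python
import numpy as np

def c_correction(l_t, cos_i, sun_zenith_deg):
    # Fit LT = m * cos(i) + b by least squares, set c = b / m, then apply
    # LH = LT * (cos(sz) + c) / (cos(i) + c).
    m, b = np.polyfit(cos_i.ravel(), l_t.ravel(), 1)
    c = b / m
    cos_sz = np.cos(np.deg2rad(sun_zenith_deg))
    return l_t * (cos_sz + c) / (cos_i + c)
```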

TYPE OF SENSORS:
optical

CONTACT
Dr. Klaus Steinnocher
Austrian Research Center Seibersdorf
Division of Systems Research
Dept. of Environmental Planning
A - 2444 Seibersdorf
AUSTRIA
phone: +43.2254.780.3876
fax: +43.2254.780.3888
e-mail
REFERENCES
CIVCO, D.L. (1989):
Topographic normalisation of Landsat Thematic Mapper digital imagery. Photogrammetric
Engineering and Remote Sensing, Vol. 55, No. 9, pp. 1303-1309.
COLBY, J.D. (1991):
Topographic normalisation in rugged terrain. Photogrammetric Engineering and Remote
Sensing, Vol. 57, No. 5, pp. 531-537.
MEYER, P.; ITTEN, K.I.; KELLENBERGER, T.; SANDMEIER, S. & SANDMEIER, R. (1993):
Radiometric correction of topographically induced effects on Landsat TM data in an alpine
environment. ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 48, No. 4, pp.
17-28.
SANDMEIER, S. & ITTEN, K.I. (1997):
A Physically-Based Model to Correct Atmospheric and Illumination Effects in Optical
Satellite Data of Rugged Terrain. IEEE Transactions on Geoscience and Remote
Sensing:35(3):708-717.
SMITH, J.A.; TZEU LIE LIN & RANSON, K.J. (1980):
The lambertian assumption and Landsat data. Photogrammetric Engineering and Remote
Sensing, Vol. 46, No. 9, pp. 1183-1189.
back to overview




Speckle Filtering
1 OBJECTIVE
Because of the coherence of the emitted waves, radar images are characterised by the shimmering
phenomenon called speckle. Consequently, for a homogeneous area, the radar backscattering
received by the satellite is highly variable, although this variation is in no way due to characteristics of
the medium itself. It manifests itself in radar images in the form of a salt-and-pepper effect.
Consequently, for amplitude radar images, a single pixel value is not statistically representative of the
radar backscattering from the surface to which it belongs. It is therefore necessary to interpret one
group of pixels relative to another group in order to evaluate the characteristics of the medium
corresponding to a given area of the image.
2 THEORY
In order to be able to estimate the reflectivity of a target, it is often necessary to reduce the variance of
the speckle by using a multi-look technique. This technique involves averaging independent samples
of the image. Most of the data supplied by earth observation satellites are produced in this way with a
certain number of looks. This is the case for the ERS standard data for which the number of looks is
equal to three.
3 METHOD(S)
Present solutions usually utilised for speckle filtering in order to improve the radiometric resolution of a
SAR product are essentially of two types:
- the averaging of several samples of the same scene (multi-look processing, low-pass filtering),
- adaptive filtering, taking into account the local statistics and texture properties of the image.
The advantage of the second technique is that it better preserves the local information and therefore
degrades the geometric resolution of the initial image less (DESNOS & MATTEINI 1993; LOPES et al.
1993). However, many thematic applications need images with a still better compromise between
radiometric and geometric resolution than can be obtained by mono-channel filtering as described
above. Such a purpose can be achieved by multi-temporal filtering when the data set is non-coherent.
In the general case, it is possible to apply methods of optimal or sub-optimal pixel summation which
approach the theoretical number of looks, by achieving a radiometric equalization and by using the
correlation between the channels to minimize a given criterion. Under the hypothesis of channel
homogeneity, LEE et al. (1991) and FELLAH et al. (2000) have proposed a method minimising the
variance. Another method, which takes textured channels into account, has been introduced by
BRUNIQUEL (1996).
The choice of the method is driven by the considered application: maximisation of radiometric
resolution improvement (variance minimisation) or calibration preservation of a single (or several)
date(s) (MSE minimisation).
All these techniques, based on specific processing of multi-temporal SAR images, improve the
thematic value of ERS images and are a prerequisite for SAR image classification.
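As an illustration of the adaptive, local-statistics approach, the following is a minimal sketch of a Lee-type filter (not the specific method of the report); it assumes intensity data and an assumed number of looks (3 for ERS PRI products).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(intensity, window=7, looks=3):
    # The filtered value is a weighted mix of the local mean and the
    # observed pixel; the weight grows where the local variance exceeds
    # what pure speckle would explain (i.e. over texture and edges).
    img = intensity.astype(float)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img ** 2, window)
    var = mean_sq - mean ** 2
    # Squared coefficient of variation of fully developed speckle in an
    # L-look intensity image is about 1/L.
    cu2 = 1.0 / looks
    ci2 = var / np.maximum(mean ** 2, 1e-12)
    weight = np.clip(1.0 - cu2 / np.maximum(ci2, 1e-12), 0.0, 1.0)
    return mean + weight * (img - mean)
```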
TYPE OF SENSORS:
radar

CONTACT
Dr. Kader Fellah
Université Louis Pasteur Strasbourg
Service Régional de Traitement d'Image et de Télédétection
Parc d'Innovation
Boulevard Sébastien Brant
F - 67400 Illkirch
FRANCE
phone: +33 3 8865 5200 (5192)
fax: +33 3 8865 5199
e-mail
REFERENCES
DESNOS, Y-L. & MATTEINI, V. (1993):
Review on Structure Detection and Speckle Filtering on ERS-1 Images. EARSeL Advances
in Remote Sensing, Vol. 2, No. 2, pp. 52-65.
LEE, J.S., GRUNES, M.R. & MANGO, S.A. (1991):
Speckle Reduction in Multipolarization Multifrequency SAR Imagery. IEEE Transactions on
Geoscience and Remote Sensing, Vol. 29, pp. 535-544.
BRUNIQUEL, J. (1996):
Contribution de données multi-temporelles à l'amélioration radiométrique et à l'utilisation
d'images radar à synthèse d'ouverture. Thèse de doctorat de l'Université Paul Sabatier,
Toulouse, No 2245.
LOPES, A., NEZRY, E., TOUZI, R. & LAUR, H. (1993):
Structure detection and statistical adaptive speckle filtering in SAR images. International
Journal of Remote Sensing, Vol. 14, No. 9, pp. 1735-1758.
FELLAH, K., GOMMENGINGER, W., MEYER C., P DE FRAIPONT & ADRAGNA, F. (2000):
Etude Multi-date Radar. Rapport CNES, 109 p.
back to overview




Coherence Images
1 OBJECTIVE
Contrary to optical data for which only the amplitude information from the signal is usable, in remote
sensing radar it is possible to measure the amplitude and the phase of the signal as reflected by the
earth's surface. The phase data used under certain conditions allows a new source of information to
be generated: the coherence image.
2 THEORY
Since phase is a measurement resulting from the satellite-target distance, it does not contain useful
information for thematic application. On the other hand, the phase difference between two radar
acquisitions carried out under similar geometric conditions allows a quantity of information to be
derived which can be used for thematic applications. Specifically, the spatial behaviour of the phase
difference, its variability within a neighbourhood, is the key parameter that is used in coherence
imaging.
The degree of coherence, in general just called coherence (or interferometric correlation) and noted γ,
is a measurement of the spatial variability of the phase difference between two radar acquisitions.
Coherence is a statistical parameter which is not measured directly but estimated over a few samples
of an area. It is information additional to the radar amplitude, describing the 'similarity' of the phase
between two SAR acquisitions:
- High coherence (near 1) indicates very good phase correlation between the two images, which is
a consequence of the target's properties vis-à-vis the radar backscattering mechanisms and of an
absence of changes which could affect the target.
- Low coherence (tending towards 0) indicates that the target is of a non-coherent type. This can
be explained either by the properties of the target vis-à-vis the different radar backscattering
mechanisms, or by changes that have affected the target.
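The estimation over a few samples can be sketched with the standard windowed coherence estimator (not from the report), assuming two co-registered complex (single-look complex) images:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1, s2, window=5):
    # gamma = |<s1 * conj(s2)>| / sqrt(<|s1|^2> * <|s2|^2>), where <.>
    # is the spatial average over the estimation window.
    cross = s1 * np.conj(s2)
    # uniform_filter works on real arrays, so average the real and
    # imaginary parts of the complex cross product separately.
    num = np.hypot(uniform_filter(cross.real, window),
                   uniform_filter(cross.imag, window))
    p1 = uniform_filter(np.abs(s1) ** 2, window)
    p2 = uniform_filter(np.abs(s2) ** 2, window)
    return num / np.sqrt(np.maximum(p1 * p2, 1e-12))
```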
3 METHOD(S)
In order for two images acquired at different times to yield a usable coherence image, certain
acquisition conditions have to be observed:
- Baseline: In order to calculate a coherence image from two radar acquisitions, the ERS satellite
positions have to be practically the same for the two acquisitions. Above a certain baseline length,
called the critical baseline (around 1200 metres for ERS), there is a complete loss of coherence and
it is no longer possible to use the phase information. In general, the optimal baseline lies between 0
and 400 metres for ERS tandem acquisitions.
- Acquisition frequency: Too long a period between successive acquisitions can reduce coherence
because of the temporal variation of the targets' backscattering properties. The time scale varies
as a function of the nature of the target: for a glacier in summer, images acquired at one-day
intervals show very low correlation, whereas in other cases (e.g. arid desert regions) acquisitions
separated by several years can show very high coherence.
TYPE OF SENSORS:
radar
CONTACT
Dr. Kader Fellah
Université Louis Pasteur Strasbourg
Service Régional de Traitement d'Image et de Télédétection
Parc d'Innovation
Boulevard Sébastien Brant
F - 67400 Illkirch
FRANCE
phone: +33 3 8865 5200 (5192)
fax: +33 3 8865 5199
e-mail

REFERENCES
GUARNIERI, A.M. & PRATI, C. (1997):
SAR Interferometry: A 'Quick and Dirty' Coherence Estimator for Data Browsing. IEEE
Transactions on Geoscience and Remote Sensing, vol. 35(3), May 1997, pp. 660-669.
ICHOKU, C., KARNIELI, A., ARKIN, Y., CHOROWICZ, J., FLEURY, T., RUDANT, J.P. (1998):
Exploring the utility potential of SAR interferometric coherence images. International Journal
of Remote Sensing, vol. 19(6), pp. 1147-1160.
WEGMÜLLER, U. & WERNER, C.L. (1997):
Retrieval of Vegetation Parameters with SAR Interferometry. IEEE Transactions on
Geoscience and Remote Sensing, vol. 35(1), January 1997, pp. 18-24.
SERTIT (2000):
Reference manual for Coherence product. Report to ESA-SPOT IMAGE, 156 p.
back to overview
Image Fusion
1 OBJECTIVE
Image fusion in a general sense can be defined as "the combination of two or more different images
to form a new image by using a certain algorithm" (Van GENDEREN & POHL 1994). It aims at the
integration of all relevant information from a number of single images into one new image. From an
information science point of view, image fusion can be divided into three categories depending on the
abstraction level of the images: pixel, feature and decision based fusion. On the pixel level, the fusion
is performed on a per-pixel basis. This category encompasses the most commonly used techniques
(VRABEL 1996). The second level requires the derivation of image features, which are then subject to
the fusion process. Decision based fusion combines either pre-classified data derived separately from
each input image or data from multiple sources in one classification scheme (BENEDIKTSSON &
SWAIN, 1992, SCHISTAD SOLBERG et al. 1994).
2 THEORY
An alternative grouping of image fusion techniques refers to the different temporal and sensor
characteristics of the input imagery. The combination of multitemporal - single sensor images
represents a valuable basis for detecting changes over time (SINGH 1989; WEYDAHL 1993;
KRESSLER & STEINNOCHER 1996). Multisensor image fusion combines the information acquired by
different sensor systems, to benefit from the complementary information inherent in the single image
data. A representative selection of studies on multisensor fusion, comprising a wide range of sensors,
is given by POHL & van GENDEREN (1998). Within this group, a focus can be found on the fusion of
optical and SAR data (HARRIS et al. 1990; SCHISTAD SOLBERG et al. 1994) and of optical image
data with different spectral and spatial resolutions (CHAVEZ et al. 1991; PELLEMANS et al. 1993;
SHETTIGARA 1992; ZHUKOV et al. 1995; GARAGUET-DUPORT et al. 1996; YOCKY 1996; VRABEL
1996; WALD et al. 1997).
The choice of fusion method depends strongly on the intended application of the fused image.
Substitution methods are easy to use and often implemented in standard image processing systems.
They can provide an excellent basis for visualisation products but usually distort the spectral
characteristics of the resulting images significantly. Thus they cannot be recommended for subsequent
numerical processing such as spectral classification.
If the fusion process is seen as a pre-processing step to classification, spectrally stable methods
should be used. The wavelet approach leads to excellent results but is rather complex in use and not
yet generally available. The Adaptive Image Fusion is less sophisticated but easy to use and available
from the author upon request.
In this description we will concentrate on the fusion of multisensor optical image data with different
spatial and spectral resolutions. High resolution data sets of this kind are typically acquired from single
platforms carrying two sensors in the optical domain - one providing panchromatic images with a high
spatial resolution, the other providing multispectral bands (in the visible and near infrared spectrum)
with a lower spatial resolution. Current examples of such platforms are SPOT 3/4, IRS-1C/D, and
Landsat 7, and a number of satellites with similar characteristics have been announced for the near
future (CARLSON & PATEL, 1997).
3 METHOD(S)

Multiresolution Image Fusion
The motivation for merging a panchromatic image with multispectral images lies in the increase of detail
while preserving the multispectral information. The result is an artificial multispectral image stack with
the spatial resolution of the panchromatic image. Common methods to perform this task are arithmetic
merging procedures or component substitution techniques such as the Intensity-Hue-Saturation (IHS)
or the Principal Component Substitution procedures (CARPER et al., 1990, CHAVEZ et al., 1991,
SHETTIGARA, 1992). These techniques are valuable for producing improved image maps for visual
interpretation tasks, as they strongly enhance textural features. On the other hand, they can lead to a
significant distortion of the radiometric properties of the merged images (VRABEL 1996).
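As an illustration of how simple an arithmetic merging procedure can be, the following sketch implements a Brovey-type ratio substitution. It is one generic example of this family, not the specific algorithm of any of the cited studies; the array layout and the epsilon guard are assumptions.

import numpy as np

def brovey_sharpen(ms, pan):
    # ms:  multispectral stack (bands, rows, cols), already resampled to pan geometry.
    # pan: panchromatic image (rows, cols), radiometrically adjusted to the MS range.
    intensity = ms.mean(axis=0)
    # Rescale each band by the ratio of pan to the mean multispectral intensity.
    return ms * (pan / np.maximum(intensity, 1e-6))

As noted above, such substitution enhances textural detail but distorts the spectral characteristics, so it is best reserved for visualisation products.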

PELLEMANS et al. (1993) introduced the radiometric method, where the new multispectral bands are
derived from a linear combination of multispectral and panchromatic radiances. While this method
keeps the radiometry of the spectral information, it is restricted to bands that are spectrally located
within the spectral range of the panchromatic image. An interesting approach has been presented by
ZHUKOV et al. (1995), which is based on the retrieval of spectral signatures, which correspond to
constant grey levels in the panchromatic image. The result reveals sub-pixel variations in the
multispectral bands, which are associated with grey level variations in the panchromatic image. Most
promising are methods that use wavelet transforms for fusion of multiresolution images, as they
preserve the spectral characteristics of the fused image to a high extent (RANCHIN & WALD 1993;
GARGUET-DUPORT et al. 1996; YOCKY 1996; WALD et al. 1997).
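The general substitution idea behind these wavelet methods can be sketched as follows: keep the coarse (approximation) coefficients of the resampled multispectral band and inject the detail coefficients of the panchromatic image. This is a schematic illustration using the PyWavelets package, not a reimplementation of the cited methods; the wavelet choice and decomposition depth are assumptions.

import pywt

def wavelet_fuse(ms_band, pan, wavelet="db2", levels=2):
    # ms_band: one multispectral band resampled to the panchromatic pixel size.
    # pan:     co-registered panchromatic image of the same shape.
    c_ms = pywt.wavedec2(ms_band, wavelet, level=levels)
    c_pan = pywt.wavedec2(pan, wavelet, level=levels)
    # MS approximation (spectral content) + PAN details (spatial structure).
    return pywt.waverec2([c_ms[0]] + c_pan[1:], wavelet)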

A new approach, the Adaptive Image Fusion, uses adaptive filters to extract edge information from the
high resolution image and transfers this information to the multispectral image. It was designed as a
pre-processing tool for subsequent numerical classification. The fusion product is a pre-segmented
multispectral image, with low variance within spectral image objects and sharp borders between the
objects. The inclusion of highly textured areas from the panchromatic image is not supported by the
technique, but the spectral characteristics of the multispectral image are preserved to a high extent
(STEINNOCHER 1999).
TYPE OF SENSORS:
optical
radar
CONTACT
Dr. Klaus Steinnocher
Austrian Research Center Seibersdorf
Division of Systems Research
Dept. of Environmental Planning
A - 2444 Seibersdorf
AUSTRIA
phone: +43.2254.780.3876
fax: +43.2254.780.3888
e-mail

REFERENCES
BENEDIKTSSON J.A. & P. H. SWAIN (1992):
Consensus theoretic classification methods. IEEE Trans. Syst., Man, Cybern., 22(4), pp.
688-704.
CARLSON G.R. & B. PATEL (1997):
A new area dawns for geo-spatial imagery. GIS World, 10(3), pp. 36-40.
CARPER, W.J.; LILLESAND, T.M.; & R. W. KIEFER (1990):
The use of intensity-hue-saturation transformation for merging SPOT panchromatic and
multispectral image data. Photogrammetric Eng. Remote Sensing, 56(4), pp. 459-467.
CHAVEZ, P.S.; SIDES, S.C. & J.A. ANDERSON (1991):
Comparison of three different methods to merge multiresolution and multispectral data:
Landsat TM and SPOT panchromatic. Photogrammetric Eng. Remote Sensing, 57(3), pp.
295-303.
GARGUET-DUPORT, B.; GIREL, J.; CHASSERY J.-M. & G. PAUTOU (1996):
The use of multiresolution analysis and wavelet transform for merging SPOT panchromatic
and multispectral image data. Photogrammetric Eng. Remote Sensing, 62(9), pp. 1057-
1066.
HARRIS, J.R.; MURRAY, R. & T. HIROSE (1990):
IHS Transform for the integration of radar imagery with other remotely sensed data.
Photogrammetric Eng. Remote Sensing, 56(12), pp. 1631-1641.
KRESSLER, F. & K. STEINNOCHER (1996):
Change detection in urban areas using satellite data and spectral mixture analysis. In:
International Archives of Photogrammetry and Remote Sensing, Vol. 31, Part B7, pp. 379-
383.
PELLEMANS, A.H.J.M.; JORDANS, R.W.L. & R. ALLEWIJN (1993):
Merging multispectral and panchromatic SPOT images with respect to the radiometric
properties of the sensor. Photogrammetric Eng. Remote Sensing, 59(1), pp. 81-87.
POHL, C. & J.L. VAN GENDEREN (1998):
Multisensor image fusion in remote sensing: concepts, methods and applications. Int. J.
Remote Sensing, 19(5), pp. 823-854.
RANCHIN, T. & L. WALD (1993):
The wavelet transform for the analysis of remotely sensed images. Int. J. Remote Sensing,
14(3), pp. 615-619.
SCHISTAD SOLBERG, A.H.; JAIN, A.K. & T. TAXT (1994):
Multisource classification of remotely sensed data: fusion of Landsat TM and SAR images.
IEEE Trans. Geosci. Remote Sensing, 32(4), pp. 768-778.
SHETTIGARA, V.K. (1992):
A generalized component substitution technique for spatial enhancement of multispectral
images using a higher resolution data set. Photogrammetric Eng. Remote Sensing, 58(5),
pp. 561-567.
SINGH, A. (1989):
Digital change detection techniques using remotely-sensed data. Int. J. Remote Sensing,
10(6), pp. 989-1003.
STEINNOCHER, K. (1999):
Adaptive fusion of multisource raster data applying filter techniques. Int'l Archives of
Photogrammetry and Remote Sensing, Vol. 32, Part 7-4-3W6, pp. 108-115.
VAN GENDEREN, J.L. & C. POHL (1994):
Image fusion: issues, techniques and applications. Proc. EARSeL Workshop Intelligent
Image Fusion, J. L. Van Genderen & V. Cappellini (Eds.), Strasbourg, France, September.
VRABEL, J. (1996):
Multispectral imagery band sharpening study. Photogrammetric Eng. Remote Sensing,
62(9), pp. 1075-1083.
WALD, L.; RANCHIN, T. & M. MANGOLINI (1997):
Fusion of satellite images of different spatial resolutions: assessing the quality of resulting
images. Photogrammetric Eng. Remote Sensing, 63(6), pp. 691-699.
WEYDAHL, D.J. (1993):
Multitemporal analysis of ERS-1 SAR images over land areas. Proc. IEEE Symp. Geosci.
Remote Sensing (IGARSS'93), Tokyo, Japan, pp. 1459-1461.
YOCKY, D.A. (1996):
Multiresolution wavelet decomposition image merger of Landsat Thematic Mapper and
SPOT panchromatic data. Photogrammetric Eng. Remote Sensing, 62(9), pp. 1067-1074.
ZHUKOV, B.; BERGER, M.; LANZL, F. & H. KAUFMANN (1995):
A new technique for merging multispectral and panchromatic images revealing sub-pixel
spectral variation. Proc. IEEE Symp. Geosci. Remote Sensing (IGARSS'95), Florence,
Italy, pp. 2154-2156.
back to overview




Index Images
1 OBJECTIVE
Ratio images are usually derived from the absorption/reflection spectra and thus often provide
information on the chemical composition of the target. They enhance small differences between various
rock types and vegetation classes that could not be identified in the original color composites. They are
therefore applied in mineral exploration and vegetation analyses (see Parameterization pool, chapter
LAI). Combinations of Landsat TM ratios are used for mineral type detection, for example Red 5/7,
Green 5/4 and Blue 3/1, integrating the ratios for iron oxide (3/1), clay minerals (5/7) and ferrous
minerals (5/4) (TUCKER 1979, SABINS 1987, JENSEN 1996). They are also used in soil type
classification. Indices are further used to reduce relief-induced illumination effects.
2 THEORY
Indices are used to create output images by mathematically combining the DN values of different
bands. They can be generated through arithmetic operations such as a difference:
Band X - Band Y

or as a ratio of band DN values: Band X / Band Y

or, more complex, as a normalized difference: (Band X - Band Y) / (Band X + Band Y)
The output images resulting from ratio operations should generally be created in floating point format
to preserve all numerical precision, since usually A is not much greater than B and the data range
might only go from 1 to 2 or 1 to 3. For cases in which A < B, integer scaling would always result in 0
and all fractional data would be lost. FAUST (1989) provides an approach to handle the entire ratio
range by computing Ratio = atan(A/B), which represents ratios smaller and larger than 1 equally well.
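A minimal sketch of these operations in floating point, including the atan scaling after FAUST (1989); the function names and the small epsilon guards against division by zero are assumptions.

import numpy as np

def atan_ratio(a, b):
    # Ratio image in floating point; atan maps ratios < 1 and > 1
    # into a symmetric, bounded range (FAUST 1989).
    return np.arctan(a.astype(np.float64) / np.maximum(b.astype(np.float64), 1e-6))

def normalized_difference(x, y):
    # e.g. NDVI = (NIR - Red) / (NIR + Red)
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    return (x - y) / np.maximum(x + y, 1e-6)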
3 METHOD(S)

NDVI
Index images such as the NDVI (see Parameterization pool, chapter LAI) can act as a relative
measure to compare images of different acquisition dates. In this way the phenological stages of the
vegetation can be integrated into the classification process.

Multitemporal profile approach
A particularly effective method for multitemporal crop classification is the multitemporal profile approach
(BADHWAR 1985, LO et al. 1986). The approach is based on the typical phenological development of
each crop and the associated spectral response pattern. This requires several acquisitions during the
growing season, especially in spring time to detect the plant water content. The spectral response
pattern is determined by the time (tp) of peak greenness (Gm), the spectral emergence date (t0) and the
width (σ) of the profile between its two inflection points. The inflection points, t1 and t2, are related to
the rates of change in greenness early in the growing season and at the onset of senescence
(LILLESAND & KIEFER 2000). Gm, tp and σ include 95 % of the information of the original data.
They are important because they not only reduce the dimensionality of the original data, but also
provide variables directly relatable to agrophysical parameters (BAUER 1985).
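As an illustration, the profile parameters can be estimated by least-squares fitting of a bell-shaped curve to per-pixel greenness observations over the season. The Gaussian form below is a simplified stand-in for the actual Badhwar profile model, and the acquisition dates and greenness values are invented for the example.

import numpy as np
from scipy.optimize import curve_fit

def greenness_profile(t, gm, tp, sigma):
    # Bell-shaped profile: peak greenness gm at time tp; sigma is the
    # distance from tp to each inflection point (profile width).
    return gm * np.exp(-(t - tp) ** 2 / (2.0 * sigma ** 2))

t = np.array([120.0, 150.0, 180.0, 210.0, 240.0])   # acquisition days of year
g = np.array([0.15, 0.45, 0.80, 0.55, 0.20])        # greenness index values
(gm, tp, sigma), _ = curve_fit(greenness_profile, t, g, p0=(0.8, 180.0, 30.0))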

Temporal image differencing
Temporal image differencing is another approach to analyse areas of land cover change. In this
procedure the DNs from one date are simply subtracted from those of the other. For visualisation
purposes with 8-bit data, 127 should be added to the difference image. Unchanged areas will be 127, a
mid-grey color, areas of negative change will be darker (< 127), and areas of positive change will be
brighter (> 127). An example for temporal albedo differences is provided by ROBINOVE et al. (1981).
They calculated albedo from Landsat MSS digital data and found that decreases in albedo were related
to improved land use patterns (more soil moisture, organic matter and increased vegetation
productivity), and increases in albedo to soil degradation (erosion, low soil moisture, organic matter and
productivity). Thus they were able to identify areas of soil degradation and erosion in cold desert areas
of the southwestern US.

Temporal image ratioing
The same is applicable for temporal image ratioing. Ratios for areas of no change tend towards 1 and
areas of change will have higher or lower ratio values. Again the ratioed data are normally scaled for
display purposes (LILLESAND & KIEFER 2000).
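A minimal sketch of both display scalings for 8-bit data; the clipping and the simple ratio scale factor are assumptions for illustration.

import numpy as np

def difference_display(dn1, dn2):
    # Unchanged pixels map to 127 (mid-grey); negative change darker (< 127),
    # positive change brighter (> 127).
    diff = dn1.astype(np.int16) - dn2.astype(np.int16) + 127
    return np.clip(diff, 0, 255).astype(np.uint8)

def ratio_display(dn1, dn2, scale=127.0):
    # No-change ratios (about 1) map to mid-grey under this simple scaling.
    ratio = dn1.astype(np.float64) / np.maximum(dn2.astype(np.float64), 1.0)
    return np.clip(ratio * scale, 0, 255).astype(np.uint8)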

TYPE OF SENSORS:
optical
CONTACT
Dr. Volker Hochschild
Friedrich-Schiller-University of Jena
Institute for Geography
Dept. of Geoinformatics, Hydrology and Modelling
Loebdergraben 32
07743 Jena
Germany
phone: +49-3641-948 855
fax: +49-3641-948 852
e-mail
REFERENCES
BADHWAR, G.D. (1985):
Classification of Corn and Soybeans Using Multitemporal Thematic Mapper Data. Remote
Sensing of Environment, 16: 175-181.
BAUER, M.E. (1985):
Spectral Inputs to Crop Identification and Condition Assessment. Proceedings of IEEE, 73
(6): 1071-1085.
FAUST, N.L. (1989):
Image Enhancement. in: Kent, A. & James, G.W. (eds.): Encyclopedia of Computer Science
and Technology, New York.
JENSEN, J.R. (1996):
Introductory Digital Image Processing: A Remote Sensing Perspective. Englewood Cliffs.
LILLESAND, T.M. & KIEFER, R.W. (2000):
Remote Sensing and Image Interpretation. New York.
LO, T.H., SCARPACE, F.L. & LILLESAND, T.M. (1986):
Use of Multitemporal Spectral Profiles in Agricultural Land-Cover Classification.
Photogrammetric Engineering and Remote Sensing, 52 (4): 535-544.
ROBINOVE, C.J., CHAVEZ, P.S., GEHRING, D. & HOLMGREN, R. (1981):
Arid land monitoring using Landsat albedo difference images. Remote Sensing of
Environment, 11: 133-156.
SABINS, F.F. (1987):
Remote Sensing Principles and Interpretation. New York.
TUCKER, C.J. (1979):
Red and Photographic Infrared Linear Combinations for Monitoring Vegetation. Remote
Sensing of Environment, 8: 127-150.
back to overview




Spectral Signatures
1 OBJECTIVE
Good knowledge of spectral characteristics of classes is essential to determine e.g. suitable data
collection periods or to interpret results of an unsupervised classification. Since the collection of
ground information to define spectral characteristics for specific classes is fairly expensive, it is of
interest whether spectral information or signatures determined in earlier investigations can be used to reduce
the need for detailed ground information. Even if this is not possible, the collection of spectral data
should improve our understanding of spectral properties and their variation in space and time.
2 THEORY
Spectral properties of classes change with time and seasons. Typical examples are changes due to
crop development, e.g. from sowing through growing, ripening and harvest. They are also dependent
on data collection conditions, even after calibration; for example, the brightness of soils decreases with
increasing soil moisture. Differences in crop spectral signatures can also be due to the underlying soil
type. It is therefore difficult to transfer spectral signatures to new data sets, and normally they cannot
be used directly as numerical values. To determine preferable data collection periods in advance and
achieve reliable classification results, good knowledge of the temporal development of spectral
signatures together with a crop calendar is required. Examples of spectral signatures are given e.g. in
LILLESAND & KIEFER (2000).
To reduce the effect of data collection conditions, spectral data should be presented as reflectance
values, requiring complete radiometric calibration including atmospheric correction. Since a significant
amount of additional information is needed for this task, simplified methods using only absolute
radiometric calibration and solar irradiance are acceptable for practical applications and for studying
spatial changes in spectral properties within a region.
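A sketch of such a simplified conversion from calibrated radiance to top-of-atmosphere reflectance; the linear calibration coefficients (gain, offset), the band solar irradiance ESUN and the Earth-Sun distance are sensor- and date-specific inputs, and no atmospheric correction is applied.

import numpy as np

def toa_reflectance(dn, gain, offset, esun, sun_elev_deg, d_au):
    # Radiance from the absolute calibration: L = gain * DN + offset.
    radiance = gain * dn.astype(np.float64) + offset
    theta_s = np.deg2rad(90.0 - sun_elev_deg)        # solar zenith angle
    # rho = pi * L * d^2 / (ESUN * cos(theta_s))
    return np.pi * radiance * d_au ** 2 / (esun * np.cos(theta_s))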
Tasseled cap transformation and other techniques have been devised to characterize specific
components of the spectral properties, e.g. greenness. Spectral signatures determined with
hyperspectral sensor systems can be compared directly with other signatures and, under suitable
conditions, allow the identification of e.g. specific minerals. This is not possible with the comparatively
large bandwidths of e.g. Landsat Thematic Mapper data.
TYPE OF SENSORS:
optical

CONTACT
Prof. Dr. Friedrich Quiel
Environmental and Natural Resources Information Systems
The Royal Institute of Technology
Brinellvaegen 32
100 44 Stockholm
SWEDEN
phone: +46 8 7907 346
fax: +46 8 7907 346
e-mail
REFERENCES
LILLESAND T.M. & KIEFER R.W. (2000):
Remote Sensing and Image Interpretation. 4th Edition, John Wiley & Sons, New York, NY
back to overview




Classification

CLASSIFICATION OF OPTICAL DATA
1 OBJECTIVE
Classification techniques are widely used mainly for land use mapping as base information for many
different applications. Supervised classification requires significant localized ground information for
training areas, whereas unsupervised classification typically depends on information about spectral
properties of classes to interpret clustering results.
For broad land use classes like forest, water etc., as often used in simple hydrological models, training
areas can be selected in topographic maps or identified directly in remote sensing data. Unsupervised
classification techniques can provide good results without specific ground information. For more
detailed information, e.g. on crops, ground investigations are typically needed, often requiring
substantial resources. Under certain circumstances this could be replaced by unsupervised
classification of stratified data and an interpretation of resulting clusters based on good knowledge of
crop calendars and spectral signatures.
Bands in the middle infrared improve classification accuracy significantly, e.g. for the separation of
built-up areas from agriculture and the differentiation of agricultural crops. For a detailed separation of
agricultural crops, multitemporal data covering critical stages of the crop development cycle are essential.
2 THEORY
Often unsupervised and supervised steps are combined in a classification procedure (SWAIN &
DAVIS 1978). A typical evaluation sequence is:
- Selection of training areas (based on ground information)
- Unsupervised clustering to identify spectral subclasses
- Supervised classification (e.g. Maximum Likelihood algorithm)
- Evaluation of results, including accuracy assessment and postprocessing
Selection of training areas is the critical step for achieving good results. Training areas should be
representative for a class including all variation within a class. Typically a couple of hundred
independent samples would be required. Pixels within a single field are often highly correlated and the
whole field might represent (statistically) a single sample. Therefore at least 10 to 20 fields per class
are needed for a representative statistical description. Only part of the available ground information
should be used as training areas, the remaining part to assess the accuracy of classification results.
Spectral properties may differ significantly within a scene, due to elevation differences, climatic
conditions, dominant species etc. The representativeness of training areas has to be checked.
If quantitative data are of interest, regression between the variable of interest and remote sensing
data can be more appropriate; e.g. chlorophyll content in water is often calculated based on ratios
between bands.
3 METHOD(S)

Unsupervised clustering
Unsupervised clustering techniques assign sample points to clusters, minimizing the variance within
clusters and maximizing the variance between clusters. They typically use Euclidean distances in the
feature space as the measure. They are often iterative, therefore computationally expensive, and often
applied only to subsets of the data. Statistical checks are used to decide if clusters should be combined
and when to stop the iterative procedure.
K-Means
The K-Means algorithm seeds a user-defined number of cluster centers in the feature space and assigns
all samples to the nearest center. When all pixels are assigned, the center of gravity for each cluster is
calculated and the next iteration starts. The procedure stops when there is no significant change in the
location of class mean vectors or when a predetermined number of iterations has been reached.
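A minimal sketch of the procedure on an (n_samples, n_bands) array of pixel vectors; the fixed iteration count stands in for the convergence test on the class mean vectors.

import numpy as np

def k_means(pixels, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    # Seed k cluster centers with randomly chosen samples.
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(np.float64)
    for _ in range(n_iter):
        # Assign every sample to the nearest center (Euclidean distance).
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the center of gravity of its cluster.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers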
Iso-data
The Iso-data algorithm is similar to the K-Means algorithm, but after each iteration it determines whether
clusters should be combined or split, based on statistical measures and user-supplied thresholds.
Histogram based
Histogram based techniques calculate multidimensional histograms, identify peak locations as cluster
centers and assign other locations to an adjacent peak. They do not require iteration and are therefore
fast. They are not very suitable for data with many bands due to the size of the feature space.

Supervised classification
A supervised classification assigns all pixels within the area of interest to one of the predefined (sub-
)classes based on statistical properties and the specific classification algorithm.
Parallelepiped classification
Parallelepiped classifiers assign pixels to classes using boxes in multidimensional feature space with
location determined by mean class vector and size determined by e.g. standard deviation. Similarly
nearest neighbor procedures assign pixels to the nearest class neighbor. These procedures are often
used in the training stage to determine pixels similar to a given training class.
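A sketch of the box test, assuming boxes of mean plus/minus n standard deviations per band; resolving overlapping boxes by first match is one possible convention, not a fixed rule.

import numpy as np

def parallelepiped(pixels, means, stds, n_std=2.0):
    # pixels: (n, bands); means/stds: (classes, bands); -1 = unclassified.
    labels = np.full(len(pixels), -1)
    for c in range(len(means)):
        lo = means[c] - n_std * stds[c]
        hi = means[c] + n_std * stds[c]
        inside = np.all((pixels >= lo) & (pixels <= hi), axis=1)
        labels[inside & (labels == -1)] = c   # first matching box wins
    return labels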
Maximum Likelihood classification
The most widely used Maximum Likelihood algorithm assigns each pixel to the class with the highest
likelihood, based on the mean vectors and covariance matrices of the (sub-)classes. A priori probabilities
can be included. Pixels with a likelihood below a threshold can be assigned to a reject class. Images
of the spatial distribution of the likelihood can be used to identify areas not described properly by
training areas.
The maximum likelihood algorithm requires unimodal, normally distributed classes. To better
approximate this requirement, spectral subclasses must be identified, which are then used in the
supervised classification. Spectral subclasses are caused e.g. by differences in crop development and
agricultural practice, soil type and soil moisture. Non-linear transformations (e.g. logarithmic) can
result in data that correspond better to a normal distribution.
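A compact sketch of the Gaussian maximum likelihood decision rule with optional a priori probabilities and a reject threshold; the class statistics are assumed to come from the training stage, and the threshold is given on the log scale.

import numpy as np

def max_likelihood(pixels, means, covs, priors=None, reject_logp=None):
    # pixels: (n, bands); means: (classes, bands); covs: (classes, bands, bands).
    n_classes = len(means)
    priors = np.full(n_classes, 1.0 / n_classes) if priors is None else priors
    logp = np.empty((len(pixels), n_classes))
    for c in range(n_classes):
        diff = pixels - means[c]
        inv = np.linalg.inv(covs[c])
        maha = np.einsum("ni,ij,nj->n", diff, inv, diff)  # Mahalanobis distances
        logp[:, c] = (np.log(priors[c]) - 0.5 * maha
                      - 0.5 * np.log(np.linalg.det(covs[c])))
    labels = logp.argmax(axis=1)
    if reject_logp is not None:                # optional reject class (-1)
        labels[logp.max(axis=1) < reject_logp] = -1
    return labels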
Sequential Maximum A Posteriori (SMAP) estimation
The Sequential Maximum A Posteriori (SMAP) estimation (BOUMAN, C. & SHAPIRO, M. 1994)
segments multispectral data using Gaussian mixture distribution. The algorithm attempts to improve
accuracy by segmenting the image into regions rather than segmenting each pixel separately. The
algorithm exploits the fact that nearby pixels are likely to have the same class. It segments the image
at various resolutions and uses the coarse scale segmentations to guide the finer scale
segmentations. In addition to reducing the number of misclassifications the SMAP algorithm generally
produces segmentation with larger connected regions of a given class. The amount of smoothing
performed in the segmentation depends on the behavior of the data in the image. If they suggest that
nearby pixels often change classes, then the algorithm will adaptively reduce the amount of
smoothing, ensuring that excessively large regions are not formed. Typically in the training stage
spectral subclasses are identified. Experiences show significant improvement in classification results,
e.g. in agricultural areas. Results are influenced by processing parameter settings.
Texture analysis
Texture is one of the significant parameters recognized by the human visual system for identifying
objects or regions of interest in an image. Image texture can be defined as the variation of gray levels
in the local environment of a pixel. Calculation of texture values can be based on gray-level co-
occurrence matrices and the derivation of texture features (HARALICK et al. 1973). The result will be
texture feature images, whose gray levels represent the local textural characteristics of a satellite
image. The information content of these images will highly depend on the spatial resolution of the
original image, i.e. texture characteristics are changing with scale. Settlements, for example, can be
detected by applying texture analysis to images with a resolution between 5 and 10 m - such as SPOT
PAN or IRS-1C/D PAN (STEINNOCHER 1997).
For high resolution data, methods evaluating texture or the shape and size of objects might provide
better results, e.g. in built-up areas. Statistical texture measures, e.g. Haralick parameters, have been
tested extensively. Per field classification procedures can provide better results than pixel based
algorithms, but require a large number of fields for training.
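A minimal sketch of a single gray-level co-occurrence matrix and one derived Haralick-type feature (contrast); the quantisation to a small number of gray levels and the single displacement vector are simplifying assumptions.

import numpy as np

def glcm_contrast(img, levels=16, dx=1, dy=0):
    # img: 2-D integer image already quantised to `levels` gray levels.
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]   # reference pixels
    b = img[dy:, dx:]                                 # displaced neighbours
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)        # count co-occurrences
    glcm /= glcm.sum()                                # normalise to probabilities
    i, j = np.indices((levels, levels))
    return np.sum(glcm * (i - j) ** 2)                # contrast feature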

Postprocessing
Postprocessing can improve classification results considerably. Using simple filters, isolated pixels
with a different class assignment within a large homogeneous area are reassigned to that class, and
mixed pixels at the border between classes are assigned to the dominant class. More sophisticated
techniques have been developed, considering e.g. the size and spatial arrangement of features (e.g.
VÖGTLE & SCHILLING 1995; QUIEL & XIE 1995).
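A simple-filter example of the kind described above is a majority (mode) filter over the classified map; the window size and tie-breaking are assumptions, and the generic_filter call trades speed for brevity.

import numpy as np
from scipy.ndimage import generic_filter

def majority_filter(class_map, size=3):
    # Reassign each pixel to the most frequent class in its neighbourhood,
    # removing isolated pixels within large homogeneous areas.
    def mode(window):
        values, counts = np.unique(window.astype(int), return_counts=True)
        return values[counts.argmax()]
    return generic_filter(class_map, mode, size=size)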

TYPE OF SENSORS:
optical

CONTACT
Prof. Dr. Friedrich Quiel
Environmental and Natural Resources Information Systems
The Royal Institute of Technology
Brinellvaegen 32
100 44 Stockholm
SWEDEN
phone: +46 8 7907 346
fax: +46 8 7907 346
e-mail

REFERENCES
BOUMAN, C. & SHAPIRO, M. (1994):
A Multiscale Random Field Model for Bayesian Image Segmentation. IEEE Trans. Image
Processing, 3(2), pp. 162-177.
HARALICK, R.M.; SHANMUGAM, K. & I. DINSTEIN (1973):
Textural Features for Image Classification. IEEE Transactions on Systems, Man, and
Cybernetics, SMC-3(6), pp. 610-621.
QUIEL, F. & XIE, X. (1995):
Development of a method for automatic landuse mapping with SPOT data. Final report, 23
p., Environmental and Natural Resources Information Systems Stockholm.
STEINNOCHER, K. (1997):
Texturanalyse zur Detektion von Siedlungsgebieten in hochaufloesenden panchromatischen
Satellitenbilddaten. Proc. AGIT IX, July 2-4, 1997, Salzburg (= Salzburger Geographische
Materialien, Heft 24), pp. 143-152.
SWAIN, P.H. & DAVIS, S.H. (1978):
Remote Sensing: The Quantitative Approach. 396 p., McGraw-Hill.
VÖGTLE, T. & SCHILLING, K.-J. (1995):
Wissensbasierte Extraktion von Siedlungsbereichen in der Satellitenbildanalyse. Zeitschrift
fuer Photogrammetrie und Fernerkundung, 5/95, pp. 199-207.
back to overview

CLASSIFICATION OF RADAR DATA
1 OBJECTIVE
The classification of images can be roughly divided into the classification of pixels on the basis of their
spectral components, and the classification of areas and details on the basis of their morphology and
shape. In a radar image, both the radiometric and the texture information are affected by speckle,
which strongly inhibits the application of automatic processing procedures to extract the desired
information, and in particular automatic classification. A pre-processing step for speckle reduction is
therefore needed before using any kind of classifier on SAR images. The morphology information
embedded in the image, along with local statistics (mean and variance), can be used to attribute an
artificial spectrum to the pixels of a single SAR image; a minimal sketch of this idea follows below.
Also in the case of multitemporal images, a lowering of the noise and an extraction of morphological
features are helpful for reliable classification. After speckle filtering, different supervised and
unsupervised classification methods can be applied. These methods are similar to those used for
optical data; they are described above and additionally by e.g. BUSH & ULABY (1978), REMONDIERE
& LICHTENEGGER (1995), WOODING et al. (1995) and DOBSON et al. (1996).
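A minimal sketch of the artificial-spectrum idea: the local mean (a speckle-reduced radiometric channel) and the local variance (a simple texture measure) are stacked as feature bands that can then be fed into the clustering and classification methods described above; the window size and the choice of exactly these two statistics are assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

def sar_feature_stack(amplitude, win=7):
    # amplitude: 2-D SAR amplitude image (after calibration/geocoding).
    a = amplitude.astype(np.float64)
    mean = uniform_filter(a, win)                 # local mean (low-pass, despeckled)
    variance = np.maximum(uniform_filter(a ** 2, win) - mean ** 2, 0.0)
    return np.stack([mean, variance], axis=0)     # artificial two-band "spectrum"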

TYPE OF SENSORS:
radar

CONTACT
Dr. Kader Fellah
Université Louis Pasteur Strasbourg
Service Régional de Traitement d'Image et de Télédétection
Parc d'Innovation
Boulevard Sébastien Brant
F - 67400 Illkirch
FRANCE
phone: +33 3 8865 5200 (5192)
fax: +33 3 8865 5199
e-mail

REFERENCES
BUSH & ULABY (1978):
An evaluation of radar as a crop classifier. Remote Sensing of Environment, No 7, pp. 15-26.
DOBSON, PIERCE & ULABY (1996):
Knowledge-based land cover classification using ERS1-JERS1 SAR composites. IEEE
Transactions on Geoscience and Remote Sensing, vol 34, No 1, pp. 83-99.
REMONDIERE, S. & LICHTENEGGER, J. (1995):
The use of ERS SAR data for agriculture and land use: an overall assessment. Proceedings
of the Second ERS Applications Workshop, London, UK, 6-8 December 1995, ESA SP-383,
pp. 3-6.
WOODING, ATTEMA, ASCHBACHER, BORGEAUD, CORDEY, DE GROOF, HARMS,
LICHTENEGGER, NIEUWENHUIS, SCHMULLIUS & ZMUDA (1995):
Satellite Radar in Agriculture. Experience with ERS-1. ESA Publications.
back to overview




Accuracy Assessment
1 OBJECTIVE
Because the accuracy of remotely sensed data is critical to any successful mapping project, accuracy
assessment is an important tool for anyone who applies remote sensing techniques. The user of land-
cover maps needs to know how accurate the product is in order to use the data efficiently. Although a
number of methods for accuracy assessment are available, some of them are generally accepted within
the remote sensing community and can be seen as standard approaches. A comprehensive guide to
assessing the accuracy of maps generated from remotely sensed data can be found in CONGALTON
& GREEN (1999).
2 THEORY
During the processing chain of remote sensing data two basic transformations are usually performed:
- registration of the image co-ordinate system into a certain map projection, and
- classification of the spectral pixel values into nominal classes.
Both transformations have an influence on the accuracy of the final product. Positional accuracy can
be determined by the accuracy of the geocoding process (see chapter Geocoding). The
consequences of positional uncertainty should not be forgotten when overlay operations are
performed, such as cross tabulation or multitemporal/multisensoral classification (JANSSEN & van der
WEL, 1994).
3 METHOD(S)

Thematic accuracy
Thematic accuracy refers to the correspondence between the class labels assigned to a pixel and the
true class. A prerequisite for estimating the thematic accuracy is the availability of reference data,
which can be obtained by collecting ground truth data or derived from ancillary information such as
maps or aerial photographs. As reference data are commonly used both for training the classification
process and for testing the results afterwards, splitting the data into a training and a test set is
necessary. Testing the classification result with the training data set leads to in-sample
accuracy that only indicates the spectral homogeneity of the training areas and how separable the
training classes are. For estimating the overall accuracy of the classification for the entire image the
use of an independent test data set is indispensable (out-of-sample accuracy).

Assessing the thematic accuracy is usually based on a sample population of the classification result.
In order to get a representative sample of the entire scene, random sampling should be performed.
However, collection of reference data for a large sample of randomly distributed points is often difficult
and costly. Also simple random sampling might underestimate small but still important areas. To
overcome this problem stratified random sampling can be used where each land cover category is
considered a stratum and thus all categories will be represented in the accuracy assessment
(LILLESAND & KIEFER 2000). Sampling can be based on points, clusters of points or polygons, the
latter being the most common approach. However it should be emphasised that remote sensing data
are point-sampled data in which the points have a certain spatial extension. Thus individual pixels are
the most appropriate sample units if a per-pixel classification is performed (JANSSEN & van der WEL,
1994).

The most common approach to assess the accuracy of a classification result is the application of
confusion matrices, also called error matrices, confusion tables or contingency matrices (SMITS et al.
1999). These matrices are a cross tabulation of the reference data and the corresponding results of an
automated classification. They are square and the number of columns / rows equals the number of
categories of the classification. Different measures and statistics can be derived from the values in a
confusion matrix. For a better understanding of the following paragraphs an example of a confusion
matrix is given in table 1.

The rows in the matrix represent the classified data, while the columns are associated with the
reference data. The major diagonal of the matrix indicates the agreement between the classified and
the reference data. The overall accuracy is calculated by dividing the sum of correctly classified
samples by the total number of samples taken. This value is a measure of the classification as a whole
as it indicates the probability that a reference sample will be correctly classified. The overall accuracy
calculated from table 1 is (25 + 48 + 33 + 45 + 21) divided by 240 equals 72 percent.
Table 1: Confusion matrix (rows: classified data; columns: reference data)

                 Urban   Forest   Meadow   Agricult.   Water   Total   Correct
  Urban             25        2        4           7       0      38      66 %
  Forest             3       48        7           3       1      62      77 %
  Meadow             1        6       33           7       0      47      70 %
  Agricult.          9        3        9          45       1      67      67 %
  Water              1        4        0           0      21      26      81 %
  Total             39       63       53          62      23     240
  Correct         64 %     76 %     62 %        73 %    91 %              72 %



As the overall accuracy does not indicate the distribution of accuracy across the individual classes
further measures can be calculated. The probability that a certain reference sample will be correctly
classified is indicated by the producer's accuracy. It is calculated by dividing the number of correctly
classified samples of a class by the total number of reference samples for that class (column total).
The producer's accuracy for Meadow in table 1 is 33 divided by 53, i.e. 62 percent.

The probability that a sample of the classified image actually represents the class on the ground is
indicated by the user's accuracy. It is calculated by dividing the number of correctly classified samples
of a class by the total number of samples that were classified in that class (row total). The user's
accuracy for Meadow in table 1 is 33 divided by 47, i.e. 70 percent.

Other terms describing class-dependent accuracy are the error of omission and the error of
commission. They are directly related to user's and producer's accuracy (JANSSEN & van der WEL, 1994):
- error of commission (%) = 100 - user's accuracy (%)
- error of omission (%) = 100 - producer's accuracy (%).
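All of the measures discussed so far can be read directly off the confusion matrix. The sketch below reproduces the values of table 1 with numpy; the row/column orientation follows the table (rows classified, columns reference).

import numpy as np

cm = np.array([[25,  2,  4,  7,  0],      # table 1, rows: classified data
               [ 3, 48,  7,  3,  1],      #          columns: reference data
               [ 1,  6, 33,  7,  0],
               [ 9,  3,  9, 45,  1],
               [ 1,  4,  0,  0, 21]])

overall = np.trace(cm) / cm.sum()          # 172 / 240 = 0.72
producers = np.diag(cm) / cm.sum(axis=0)   # e.g. Meadow: 33 / 53 = 0.62
users = np.diag(cm) / cm.sum(axis=1)       # e.g. Meadow: 33 / 47 = 0.70
commission = 1.0 - users                   # error of commission per class
omission = 1.0 - producers                 # error of omission per class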
An additional measure for overall thematic classification accuracy is the Kappa coefficient of
agreement (ROSENFIELD & FITZPATRICK-LINS, 1986). In contrast to the overall accuracy described
above this coefficient utilises all elements from the confusion matrix. It is a measure of the difference
between the actual agreement between reference data and an automated classifier and the chance
agreement between the reference data and a random classifier (LILLESAND & KIEFER 2000). It is
computed as
K = ( N * SUM(i=1..r) xii - SUM(i=1..r) (xi+ * x+i) ) / ( N^2 - SUM(i=1..r) (xi+ * x+i) )   (1)

where
r = number of rows in the confusion matrix
xii = the number of observations in row i and column i (on the major diagonal)
xi+ = total of observations in row i (shown as marginal total to right of the matrix)
x+i = total of observations in column i (shown as marginal total at bottom of the matrix)
N = total number of observations included in matrix (LILLESAND & KIEFER 2000, p.574).

The Kappa coefficient calculated from table 1 is:

((240 x 172) - (38 x 39 + 62 x 63 + 47 x 53 + 67 x 62 + 26 x 23)) divided by ((240 x 240) - (38 x 39 +
62 x 63 + 47 x 53 + 67 x 62 + 26 x 23)) = 0.637.
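Continuing the confusion-matrix sketch above (the same cm array), equation (1) reproduces this value:

n = cm.sum()                                       # N = 240
chance = (cm.sum(axis=1) * cm.sum(axis=0)).sum()   # sum of xi+ * x+i = 12631
kappa = (n * np.trace(cm) - chance) / (n ** 2 - chance)
print(round(kappa, 3))                             # 0.637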
This is an indication that the observed classification is 64 percent better than one resulting from
chance.
CONTACT
Dr. Klaus Steinnocher
Austrian Research Center Seibersdorf
Division of Systems Research
Dept. of Environmental Planning
A - 2444 Seibersdorf
AUSTRIA
phone: +43.2254.780.3876
fax: +43.2254.780.3888
e-mail

REFERENCES
CONGALTON, R.G. & K. GREEN (1999):
Assessing the accuracy of remotely sensed data: principles and practices. Lewis publishers,
Boca Raton, FL, USA.
JANSSEN, L.L.F. & F.J.M. VAN DER WEL (1994):
Accuracy assessment of satellite derived land-cover data. Photogrammetric Engineering
and Remote Sensing, 60(4), pp. 419-426.
LILLESAND, T.M. & R.W. KIEFER (2000):
Remote Sensing and Image Interpretation. 4th Edition, John Wiley & Sons, New York, NY,
pp. 568-575.
ROSENFIELD, G.H. & K. FITZPATRICK-LINS (1986):
A coefficient of agreement as a measure of thematic classification accuracy.
Photogrammetric Engineering and Remote Sensing, 52(2), pp. 223-227.
SMITS, P.C.; DELLEPIANE, G. & R.A. SCHOWENGERDT (1999):
Quality assessment of image classification algorithms for land cover mapping: a review and
a proposal for a cost-based approach. Int. J. Remote Sensing, 20(8), pp. 1461-1486.
back to overview

SPACEBORNE OPTICAL SENSORS
(entry format: sensor | satellite | owner | launch; bands with ground resolution; swath width | repeating orbit | orbit height; n/a = not specified)

ETM (Enhanced Thematic Mapper) | Landsat-7 | NASA | 27 April 1999
  520-900 nm Pan (15 m); 450-515 nm Blue (30 m); 525-609 nm Green (30 m); 630-690 nm Red (30 m); 750-900 nm NIR (30 m); 1550-1750 nm SWIR (30 m); 10400-12500 nm TIR (60 m); 2090-2350 nm SWIR (30 m)
  Swath 185 km | 16 days | 705 km

LISS-III (Linear Imaging Self-Scanning System) | IRS-1C/D and IRS-P6 | ISRO (Indian Space Research Org.) | 29 Sept. 1997 / 2002
  520-590 nm Green (23 m); 620-680 nm Red (23 m); 770-860 nm NIR (23 m); 1550-1700 nm SWIR (70 m)
  Swath 142-148 km | 24 days | 904 km

PAN (Panchromatic Sensor) | IRS-1C/D | ISRO (Indian Space Research Org.) | 29 Sept. 1997
  500-750 nm Panchromatic (5.8 m)
  Swath 70 km | 24 days | 904 km

WiFS (Wide-Field Sensor) | IRS-1C/D | ISRO (Indian Space Research Org.) | 29 Sept. 1997
  620-680 nm Red (188 m); 770-860 nm NIR (188 m)
  Swath 774 km | 24 days | 904 km

LISS-IV (Linear Imaging Self-Scanning System) | IRS-P5, P6 | ISRO (Indian Space Research Org.) | 2001 / 2002
  3 multispectral channels (6 m)
  Swath 25 km | 22 days | n/a

HRVIR (High Resolution Visible Infrared) | SPOT 4 | SPOT Image | 24 March 1998
  510-730 nm Pan (10 m); 430-470 nm Blue (20 m); 610-680 nm Red (20 m); 780-890 nm NIR (20 m); 1580-1750 nm SWIR (20 m)
  Swath 60 km | 26 days | 822 km

HRVIR / HRG (High Resolution Visible Infrared) | SPOT 5 | SPOT Image | 2002
  510-730 nm Pan (2.5 m); 500-590 nm Blue (10-20 m); 610-680 nm Red (10-20 m); 790-890 nm NIR (10-20 m); 1580-1750 nm SWIR (30 m)
  Swath 60-240 km | 35 days | 832 km

IR-MSS (Infrared Multispectral Scanner) | CBERS 1, 2 (Chinese-Brazilian Earth Resources Satellite) | China/Brazil (gov.) | 14 Oct. 1999 / end 2001
  500-1100 nm Pan (80 m); 1550-1750 nm (80 m); 2080-2350 nm (80 m); 10400-12500 nm Thermal IR (160 m)
  Swath 120 km | 26 days | 778 km

HRC (High Resolution CCD Camera) | CBERS 1, 2 | China/Brazil (gov.) | 14 Oct. 1999 / end 2001
  510-730 nm Pan (20 m); 450-520 nm Blue (20 m); 520-590 nm Green (20 m); 630-690 nm Red (20 m); 770-890 nm NIR (20 m)
  Swath 113 km | 26 days | 778 km

WFI (Wide Field Imager) | CBERS 1, 2 | China/Brazil (gov.) | 14 Oct. 1999 / end 2001
  630-690 nm Red (260 m); 770-900 nm NIR (260 m)
  Swath 900 km | 5 days | 778 km

ALI (Advanced Land Imager) | EO-1 | U.S. (gov.) | 21 Nov. 2001
  480-680 nm Pan (10 m); 433-453, 450-510, 525-605, 630-690, 775-805, 845-890, 1200-1300 and 1550-1750 nm (30 m each); 2080-2350 nm (n/a); 400-2500 nm (n/a)
  Swath 37 km (Pan), 185 km | co-flies with Landsat-7, following by 1 minute | 705 km

WIS (Wedge Imaging Spectrometer) | EO-1 | U.S. (gov.) | 15 Nov. 1999
  309 bands (30 m)
  Swath 9.6 km | co-flies with Landsat-7, following by 1 minute | 705 km

GIS (Grating Imaging Spectrometer) | EO-1 | U.S. (gov.) | 15 Nov. 1999
  233 bands (30 m)
  Swath 9.6 km | co-flies with Landsat-7, following by 1 minute | 705 km

n/a | EOS AM-2 (Landsat-8) | NASA | 2004
  Pan/multispectral like the Landsat series (15 m)
  Swath n/a | n/a | n/a

ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) | EOS AM-1 (Earth Observing System) | NASA/NASDA | 23 Nov. 1999
  520-600, 630-690, 760-860 and 760-860 nm backward (15 m each); 1600-1700, 2145-2185, 2185-2225, 2235-2285, 2295-2365 and 2360-2430 nm (30 m each); 8125-8475, 8475-8825, 8925-9275, 10250-10950 and 10950-11650 nm (90 m each)
  Swath 60 km | 4-16 days | 705 km

MODIS (Moderate Resolution Imaging Spectroradiometer) | EOS AM-1 | NASA/NASDA | 23 Nov. 1999
  36 bands VIS, NIR, MIR, TIR (250-1000 m)
  Swath 2300 km | 1-2 days | 705 km

n/a | Rapid Eye (4 satellites) | n/a | 2003-2005
  Panchromatic, +/- 22 deg tilting capability (6.5 m)
  Swath 150 km | n/a | 600 km

AVNIR-2 (Advanced Visible and Near Infrared Radiometer) | ALOS (Advanced Land Observing Satellite) | NASDA | 2002
  520-770 nm Pan (2.5 m); 420-520 nm Blue (10 m); 520-600 nm Green (10 m); 610-690 nm Red (10 m); 760-890 nm NIR (10 m)
  Swath 70 km | n/a | n/a

Carterra | IKONOS 2 | Space Imaging EOSAT | 24 Sept. 1999
  450-900 nm Pan (1 m); 450-520 nm Blue (4 m); 520-600 nm Green (4 m); 630-690 nm Red (4 m); 760-900 nm NIR (4 m)
  Swath 12 km (Pan), 60 km | 11 days | 680 km

n/a | IKONOS | Space Imaging EOSAT | 2004
  n/a (0.5 m)
  Swath n/a | n/a | n/a

Orbview-3 | Orbview-3 | ORBIMAGE | Q3 2001
  450-900 nm Pan (1 and 2 m); 450-520 nm Blue (4 m); 520-600 nm Green (4 m); 630-690 nm Red (4 m); 760-900 nm NIR (4 m)
  Swath 8 km | 3 days | 470 km

Orbview-4 | Orbview-4 | ORBIMAGE | 23 May 2001
  450-900 nm Pan (1 m); 450-520 nm Blue (4 m); 520-600 nm Green (4 m); 630-690 nm Red (4 m); 760-900 nm NIR (4 m); 200-channel hyperspectral (8 m); +/- 45 deg side-to-side pointing capability
  Swath 5-8 km | 3 days | 470 km

QuickBird | QuickBird-2 | Earth Watch Inc. | August 2001
  450-900 nm Pan (0.82 m); 450-520 nm Blue (3.2 m); 520-600 nm Green (3.2 m); 630-690 nm Red (3.2 m); 760-900 nm NIR (3.2 m); +/- 30 deg in-track and +/- 30 deg side-to-side pointing capability
  Swath 22 km | n/a | 600 km

n/a | EROS-A2 (Earth Resources Observation System) | West Ind. Space (Israel/US) | 5 Dec. 2000 / Q3 2001
  500-900 nm Pan (1.8 m)
  Swath 12.5 km | 7 days | 480 km

n/a | EROS-B1-B6 (Earth Resources Observation System) | West Ind. Space (Israel/US) | 2002-2005
  500-900 nm Pan (0.82 m)
  Swath 16 km | 7 days | 600 km

HSI (Hyperspectral Imager) | Lewis | TRW/NASA | 22 August 1997
  400-2500 nm, 384 channels (30 m); 450-750 nm Panchromatic (5 m)
  Swath 7.7-13 km | n/a | 523 km

LEISA (Linear Etalon Spectral Imaging Array) | Lewis | TRW/NASA | 22 August 1997
  1000-2500 nm, 256 channels (300 m)
  Swath 77 km | n/a | 523 km

n/a | Clark | CTA/NASA | 1997
  Panchromatic (3 m)
  Swath n/a | n/a | n/a

n/a | ARIES / LEO | Australia | program temporarily stopped
  400-1100 nm, 32 channels (30 m); 2000-2500 nm, 32 channels (30 m); Panchromatic (10 m)
  Swath 15 km | 7 days | 500 km

MERIS (Medium Resolution Imaging Spectrometer) | ENVISAT | ESA | July 2001
  15 bands with a bandwidth of 10 nm, band centres at 412.5, 442.5, 490, 510, 560, 620, 665, 681.25, 705, 753.75, 760, 775, 865, 890 and 900 nm (300 m)
  Swath 1150 km | 3 days | 780-820 km

AVHRR (Advanced Very High Resolution Radiometer) | NOAA | USA | n/a
  550-680, 725-1100, 3550-3930, 10300-11300 and 11500-12500 nm (1000 m each)
  Swath 2800-4000 km | twice daily | 850 km
back to overview


SPACEBORNE RADAR SENSORS
(entry format: sensor | satellite | owner | launch; bands with ground resolution; swath width | repeating orbit | orbit height; n/a = not specified)

SAR (Synthetic Aperture Radar) | TerraSAR | InfoTerra, DSS | 2004
  X-band (1-3 m); L-band (9 m)
  Swath n/a | 6-7 days | 600 km

SAR (Synthetic Aperture Radar) | ERS-1/2 (European Remote Sensing Satellite) | ESA | 1991 / 1996
  C-band (30 m)
  Swath 100 km | 35 days | 785 km

SAR (Synthetic Aperture Radar) | JERS-1 | NASDA | 1992
  L-band (28 m)
  Swath 80 km | 44 days | 580 km

PALSAR | ALOS (Advanced Land Observing Satellite) | NASDA | 2002
  L-band, 1.3 GHz (10-100 m)
  Swath 70 km | 46 days | 691.65 km

ASAR (Advanced Synthetic Aperture Radar) | Envisat | ESA | July 2001
  C-band, 5.331 GHz (25-150 m)
  Swath 100-400 km | n/a | 780-820 km

SAR (Synthetic Aperture Radar) | Radarsat-1 | Canada (gov.) | 1995
  C-band, 5.3 GHz (9-100 m)
  Swath 75-500 km | 24 days | 800 km

SAR (Synthetic Aperture Radar) | Radarsat-2 | Canada (gov.) | 2001
  C-band, 5.3 GHz; HH, VV, HV, VH; incidence angles 10-60 degrees; 12 beam modes (3-100 m)
  Swath 25-530 km | 24 days | 800 km

SAR (Synthetic Aperture Radar) | Radarsat-3 | Canada (gov.) | 2004
  C-band, 5.3 GHz
  Swath n/a | n/a | n/a

SAR (Synthetic Aperture Radar) | TOPSAT | JPL | program temporarily stopped
  L-band, 23 cm; tandem mission for global topographic mapping (30 m)
  Swath n/a | 1 day | n/a

SAR (Synthetic Aperture Radar) | LightSAR | NASA/JPL | program temporarily stopped
  L-band, 1.2575 GHz; HH, HV, VH, VV; ScanSAR (25 m / 100 m); X-band (9.6 GHz) or C-band (5.3 GHz) (1-3 m)
  Swath 100 km | 8-10 days | 600 km

SIR-C SAR (Spaceborne Imaging Radar-C Synthetic Aperture Radar) | SRTM (Shuttle Radar Topography Mission) | DLR / JPL | Feb. 2000 (9-day mission)
  C-band, 6 cm; HH, HV, VH, VV (50 m)
  Swath 15-90 km | 11 days | 215 km

X-SAR (X-band Synthetic Aperture Radar) | SRTM | DLR / JPL | Feb. 2000 (9-day mission)
  X-band, 3 cm; VV (50 m)
  Swath 45 km | 11 days | 215 km
back to overview
