ENVS720
Practical Handbook
July 2011
Advanced Remote Sensing: Practical Handbook (July 2010)
Groups
This can be moved to below the Ribbon by clicking on the small drop arrow and selecting the option Show the Quick Access Toolbar below the Ribbon. The Quick Access Toolbar contains commonly used commands or tools and can be completely customized for each individual user. Tools are added to or removed from the Quick Access Toolbar by right-clicking on the tool and selecting either Add to or Remove from the Quick Access Toolbar.
By default, a Viewer is opened upon startup. Note that you can have many
Viewers running at the same time: click the Add Views button in the
Window Group under the Home Tab to bring up Viewer #2, Viewer #3, etc.
These viewers will be listed in the Table of Contents Pane and a viewer window
will open in the Map Pane. Pressing other buttons on the Ribbon will start menus
for corresponding modules of Imagine. Browsing through these menus and
dialog boxes is a good way to get some idea of the capabilities of ERDAS
Imagine. Again, note that many modules can be running at the same time, so you
could perform several tasks simultaneously (for example, view an image and
perform data conversion for another dataset).
2. Getting Help
The on-line manuals contain a detailed description of every entry in the menus and dialog boxes of a particular module, as well as some conceptual information.
Besides on-line help, Imagine comes with several published references, most importantly the ERDAS Field Guide and Tour Guides. These texts contain detailed coverage of the principles and procedures of image and geographic information processing employed in Imagine. The manuals are all grouped according to the kind of document they are: for example, the Tour Guides are together and the Field Guides are together.
Short, one-line help is provided for buttons and dialog elements: move the cursor over buttons in the Ribbon and observe the pop-up help. ERDAS IMAGINE 2010 is completely customizable using the Preference Editor, located by clicking on the ERDAS IMAGINE button.
3. Viewer
Let us now make use of the Viewer to display some images. To open an image in the viewer, right-click on the name of the viewer in the Table of Contents, in this case 2D View #1. Open the image wasia_1.mss.
Open the same image again, but this time look at Raster Options. Notice that the color mapping method (Display As) is set to True Color. Make sure that Layers to Colors is set to 3, 2, and 1 for the Red, Green, and Blue color guns, respectively. Before moving ahead we'll briefly cover color mapping methods and the assignment of image bands to color guns.
Color Mapping
1. True Color. True color display is appropriate for most interval or ratio
multi-layer images. Ratio or interval data mean images where digital
numbers (DN) are quantitative and related to each other in a continuous
manner. With True Color mapping pixels with higher DN in a particular
band are displayed with brighter color in the color gun assigned to that
band. Notice that True Color does not necessarily mean that an image is
displayed in a "natural" way, just that the relations between DN are
preserved - high DNs look brighter, low DNs look darker.
2. Pseudo Color. Pseudo color is appropriate for nominal or ordinal images
in one layer. Generally, categorical data have been classified
(meaning that the file pixels have been put into distinct categories).
Usually, a three-layer set of raster layers is opened in true color (there may be more layers in the file). These three layers correspond to the three color guns: red, green, and blue.
The most useful color assignments are those that allow you to interpret the opened image easily.
Layer assignments are expressed in R,G,B order. For example, the assignment
4,2,1 means that layer 4 is assigned to red, layer 2 to green, and layer 1 to blue.
This is the default assignment used when a 4-layer set is opened with the
Viewer.
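The layer-to-gun assignment can be pictured with a few lines of NumPy. This is a conceptual sketch, not how Imagine implements its Viewer; the array layout and function name are assumptions for illustration.

```python
import numpy as np

def make_composite(image, assignment=(4, 2, 1)):
    """Build an RGB display array from a multi-layer image.

    image      -- array of shape (rows, cols, layers)
    assignment -- 1-based (red_layer, green_layer, blue_layer),
                  e.g. the default 4,2,1 for a 4-layer set
    """
    r, g, b = (image[:, :, a - 1] for a in assignment)
    return np.stack([r, g, b], axis=-1)

# tiny 1x1 image with 4 layers holding DNs 10, 20, 30, 40
img = np.array([[[10, 20, 30, 40]]])
rgb = make_composite(img, (4, 2, 1))
# red gun gets layer 4, green gets layer 2, blue gets layer 1
```

With the 4,2,1 assignment the single pixel above displays as (R, G, B) = (40, 20, 10): the red gun shows layer 4's DN, and so on.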
Radiometric Enhancement
Contrast Stretch
2 standard deviations linear stretch.
Histogram Equalization stretch
Spatial Enhancement
Filtering / Convolution
Spectral Enhancement
Indices
Principal Component Analysis (PCA)
Band Ratio
Radiometric Enhancement
Contrast Stretch
Contrast stretch translates actual pixel values, digital numbers (DNs), into display brightness levels on the monitor. The reason for this is that DNs typically do not cover the whole available range of values (0 - 255 for 8-bit images). To translate image values into display levels, a lookup table (LUT) is employed. A LUT is stored with every image.
Pixels with DNs outside that range are clipped to 0 or 255, while those within that range are stretched from 0 to 255. To verify this, select the Panchromatic Tab under the highlighted Raster section. Under the Enhancement Group select General Contrast (click on the icon) to open the Contrast Adjust dialog box. Click on Breakpts to open the Breakpoint Editor.
You now see the stretch applied to the data: the gray histogram is the distribution of the original (unstretched) pixel DNs, the dotted line is the stretch, and the yellow histogram is the distribution of the stretched DNs. The horizontal axis is the original (unstretched) pixel DN, in the range 0 - 255 (for 8-bit images). The vertical axis represents two quantities: the display (stretched) pixel DN, and the pixel count for the original (gray) histogram. As you move the cursor within the graph, three values change: the original pixel DN is indicated along the horizontal axis, and the displayed DN and the pixel count are shown along the vertical axis (lower and upper values, respectively). The stretch line itself defines the relationship between the input (original) and output (displayed) DNs: for any point (x, y) along the line, all pixels with DN = x will be displayed with DN = y. For example, pixels with DN = 70 will be displayed with DN ~ 130.
Notice that the line is at 0 up to approximately 50 and at 255 after about 88; in between it is straight. If you check the band information (Home Tab, Information Group, Layer Info), you'll notice that (50, 88) is approximately (mean - 2 st. dev., mean + 2 st. dev.) - hence the name, 2 standard deviations linear stretch.
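The 2 standard deviations linear stretch can be sketched as a lookup table. This is a simplified stand-in for Imagine's LUT; the exact breakpoints and rounding ERDAS uses may differ.

```python
import numpy as np

def two_stdev_stretch_lut(dns):
    """Build a 256-entry lookup table for a 2 standard deviations
    linear stretch of 8-bit data: DNs below mean - 2*std map to 0,
    DNs above mean + 2*std map to 255, and values in between are
    stretched linearly."""
    lo = dns.mean() - 2 * dns.std()
    hi = dns.mean() + 2 * dns.std()
    x = np.arange(256, dtype=float)
    lut = np.clip((x - lo) / (hi - lo) * 255.0, 0, 255)
    return lut.round().astype(np.uint8)

# displaying the band is then one table lookup per pixel
band = np.array([60, 70, 80], dtype=np.uint8)
lut = two_stdev_stretch_lut(band.astype(float))
stretched = lut[band]
```

For comparison, the No Stretch display corresponds to the identity table `np.arange(256)`, i.e. the line from (0, 0) to (255, 255).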
No stretch
It's possible to open an image without any stretch at all, that is, to see the raw
pixel DNs. Now open the same band in another Viewer, but this time
check the No Stretch option (under Raster Options). Open the Breakpoint
Editor again to see the stretch. Notice that the current stretch is defined by
the line going from (0,0) to (255,255).
Spatial Enhancement
Convolution Filtering
In this section we will use the georeferenced TM image of Atlanta. Open this file
in a viewer. To access the convolution filters:
Under the Raster Tab, Resolution Group, select Spatial then Convolution.
Tmatlanta.img is the input file. For the output file, click on the Browse button and
make sure you save your output file to the correct location. Give the output file an
appropriate name of your choice. Set the Kernel to the correct filter. Leave all
the other values at their default settings.
Apply 3x3 convolution filters in the following order: (1) High Pass, (2) Edge
Enhance, (3) Edge Detect, (4) Horizontal and (5) Vertical.
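What a 3x3 convolution filter does can be sketched by hand. The kernel values below follow common textbook formulations; ERDAS's default kernels may differ slightly.

```python
import numpy as np

# two common 3x3 kernels (illustrative values)
KERNELS = {
    "high_pass": np.array([[-1, -1, -1],
                           [-1,  9, -1],
                           [-1, -1, -1]], dtype=float),
    "edge_detect": np.array([[-1, -1, -1],
                             [-1,  8, -1],
                             [-1, -1, -1]], dtype=float),
}

def convolve3x3(band, kernel):
    """Slide a 3x3 kernel over a single band; each output pixel is the
    weighted sum of its neighborhood (edge pixels left untouched)."""
    out = band.astype(float).copy()
    rows, cols = band.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            window = band[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = np.sum(window * kernel)
    return out

flat = np.full((5, 5), 10.0)   # a perfectly homogeneous area
edges = convolve3x3(flat, KERNELS["edge_detect"])
```

On a homogeneous area the edge-detect kernel sums to zero (no edges), while the high-pass kernel returns the original value; it is at boundaries, such as a land/water edge, that both produce strong responses.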
To best appreciate the effects of these techniques, look at areas that have a variety of lines or edges present (e.g. the edge of a forest, a road, a wetland boundary, a land/water boundary). Open each of these files in its own viewer. To view individual Views separately from the main window, use the Pin Button.
Where else in ERDAS can you access the convolution filtering process, and
how does it differ from accessing the process via the Image Interpreter? (5)
What are the differences between the various filters that you have applied?
(5)
Spectral Enhancement
Make sure the entire Midlands.img is displayed in a viewer and click the right mouse button (rmb) inside the viewer to bring up the Quick View utility menu. In the Quick View menu select Inquire Box. In the menu that appears, change the map coordinates to file coordinates. The white box that Imagine places over the image shows the extent of the area that will be subsetted (you may choose any area). You can move the box without changing its dimensions by placing the cursor inside the white box and, while holding down the left mouse button (lmb), moving the box to another location. You can change the dimensions of the box by holding down the lmb while the cursor arrow is on an edge or side of the box and then dragging the cursor around (the box should follow). You can also change the dimensions of the box by directly entering the row and column values in the menu that appeared with the white box. Once you have positioned your box in the area to be subsetted, select the Raster Tab, Geometry Group, Subset and Chip Button (from the Ribbon), and then select Create Subset Image from the menu. Use the following directions to fill in the appropriate menu choices, and note that we will make an individual subset for each band. Select Midlands.img as the input file. Choose an appropriate output file name and make sure it is set to save in the correct location. Set Coordinate Type to File, then click on the button that says "From Inquire Box". Notice that your inquire box coordinates have been automatically included in the spaces that determine the boundary of the subset. In the lower part of the window find the words Number of Input Layers. In the space to the right you will find the entry "1:8"; this means layers 1 through 8 will be included in the subset (the default is to include all image layers). You want to extract each individual layer as a separate file, so change this entry to read only the desired layer (e.g. if you want to extract only layer 2, type 2 in the entry). Select OK and the subset process will begin. Remember to create a subset for each of the 8 layers as well as one for all 8 layers together.
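Conceptually, the subset operation is just array slicing over rows, columns, and layers. This sketch uses 0-based indices and an assumed (rows, cols, layers) layout, whereas Imagine's file coordinates are its own convention.

```python
import numpy as np

def subset(image, row_range, col_range, layers):
    """Extract a spatial and spectral subset from an image array.

    image     -- (rows, cols, layers) array
    row_range -- (first, last) rows, inclusive, 0-based here
    col_range -- (first, last) columns, inclusive
    layers    -- list of 1-based layer numbers to keep, e.g. [2];
                 the "1:8" default corresponds to list(range(1, 9))
    """
    r0, r1 = row_range
    c0, c1 = col_range
    idx = [l - 1 for l in layers]
    return image[r0:r1 + 1, c0:c1 + 1, idx]

full = np.zeros((100, 120, 8))            # stand-in for an 8-layer image
band2 = subset(full, (10, 49), (20, 79), [2])
```

The result has 40 rows, 60 columns, and a single layer, which is exactly what writing one subset file per band produces.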
Image Indices
Select the option Stretch to Unsigned 8-bit (this saves space). Leave the other variables in their default state. Note the function being used and the bands that are incorporated into this function. Select OK after you have set all the variables. View the results and answer the following question. (Note: you might want to display an infrared color composite for comparison.)
What other vegetation (and other) indices does ERDAS IMAGINE offer, and
what are the differences in their data output and functionality? (5)
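Assuming the index being computed is NDVI-like, the function and the stretch to unsigned 8-bit can be sketched as follows; the zero-denominator handling here is an assumption, not necessarily what ERDAS does.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), ranging over [-1, 1].
    Pixels where NIR + Red = 0 are output as 0 here (an assumption)."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    return np.where(denom == 0, 0.0,
                    (nir - red) / np.where(denom == 0, 1.0, denom))

def to_uint8(index):
    """Stretch a [-1, 1] index into unsigned 8-bit for storage."""
    return np.clip((index + 1) / 2 * 255, 0, 255).astype(np.uint8)

nir = np.array([80.0, 50.0, 0.0])
red = np.array([20.0, 50.0, 0.0])
vals = ndvi(nir, red)   # healthy vegetation -> high positive values
```

Here the first pixel (strong NIR, weak Red) scores 0.6, typical of vegetation, while the second (equal reflectance) scores 0.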
When the processing stops, click OK and then open a new viewer. In that viewer open your freshly created PCA image. Under Raster Options select Pseudo Color and Layer 1. Do this for each of the remaining layers. A further viewer should be opened to display the image using True Color. The open windows now show you the individual PCA "bands" and one composite "band". You will need to study the images and the files containing the eigenvector matrix (*.mtx) and eigenvalues (*.tbl). To view, and understand, the two output files, open each of them in a text editor; ERDAS has a text editor that can be found by clicking on
the ERDAS Button, selecting View and then View Text File. The *.mtx file has a table of six columns and six rows. The columns correspond to the six principal components and the rows correspond to the six bands of TM data. The numbers represent the factor score (eigenvector) that each band contributed to the individual component. If band 4 contributed close to 1.0 to a component, one could assume that that specific component is a good measure of vegetation cover. The *.tbl file gives you the eigenvalues for each of the six principal components. The total of these figures gives you the total variance.
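The relationship between the *.mtx and *.tbl files can be reproduced with a small eigen-decomposition sketch. The data below are random stand-ins for six TM bands, not real imagery.

```python
import numpy as np

def pca_tables(bands):
    """Compute the eigenvector matrix (*.mtx analogue) and eigenvalues
    (*.tbl analogue) for a (pixels, bands) data matrix.

    Columns of the eigenvector matrix correspond to principal
    components, rows to input bands; eigenvalues give the variance
    captured by each component, and their sum equals the total
    variance of the input bands."""
    cov = np.cov(bands, rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eigh(cov)
    order = np.argsort(eigenvalues)[::-1]     # largest variance first
    return eigenvectors[:, order], eigenvalues[order]

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 6))              # stand-in for 6 TM bands
vecs, vals = pca_tables(data)
total_variance = vals.sum()
```

A band whose entry in a component's column is close to 1.0 dominates that component, which is the reasoning used above for band 4 and vegetation.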
Band Ratioing
For this part of the prac you will need to open the Classic Interface for ERDAS
2010. Close ERDAS, then select: Start, All Programs, ERDAS 2010, ERDAS
IMAGINE 2010, ERDAS IMAGINE 2010 CLASSIC INTERFACE.
Band ratioing is a process by which brightness values of pixels in one band are
divided by the brightness values of their corresponding pixels in another band in
order to create a new output image. These ratios may enhance or subdue certain
attributes found in the image, depending on the spectral characteristics in each of
the two bands chosen.
Using the image that you have subsetted, do the following: in the menu list that appears (under Interpreter), find and select the Utilities... option. In the Utilities option window select Operators... In the two empty input windows select your image, and in the empty output window add a filename of your choice. Under the input files, select any bands you wish to use (some combinations may not work due to division-by-zero errors). For instance, if you wanted to do a 4/5 band ratio you would select layer four for input file #1 and layer five for input file #2. Select the Operator to be used, in this case the division symbol. Leave all other fields at their default values. Select OK in the Two Input Operators window. Use the viewer to view the image.
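Band ratioing itself is a per-pixel division. A sketch with a guard for the division-by-zero case mentioned above (outputting 0 for zero-denominator pixels is an assumption, not necessarily ERDAS's rule):

```python
import numpy as np

def band_ratio(numerator, denominator):
    """Divide one band by another, pixel by pixel, guarding against
    division by zero (zero-denominator pixels are output as 0 here)."""
    num = numerator.astype(float)
    den = denominator.astype(float)
    return np.where(den == 0, 0.0,
                    num / np.where(den == 0, 1.0, den))

band4 = np.array([40.0, 10.0, 5.0])
band5 = np.array([20.0,  0.0, 5.0])
ratio_45 = band_ratio(band4, band5)   # the 4/5 ratio from the text
```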
In rectifying the TM image we will follow these six basic steps: (1) display
images, (2) start Geometric Correction Tools, (3) record Ground Control Points
(GCPs), (4) compute a transformation matrix, (5) resample the image, and (6)
verify the rectification process.
Display images
Start two viewers (arrange them using Session/Tile Viewers if desired). In the left
viewer open the TM image to be rectified - the input image - tmAtlanta.img. In
the right viewer open the SPOT image - the reference image - panAtlanta.img.
Fit images to frames by clicking on the Zoom to Data Extent button in the Extent
Window of the Home Tab.
Select 2D View #1 from the Contents Pane. Under the Raster Option (highlighted
above the Menu Tabs) click on Multispectral and then click on Control Points in
the Transform and Orthocorrect Window. The Set Geometric Model dialog box
opens. From Model List select Polynomial and click OK. The Multipoint
Geometric Correction window and GCP Tool Reference Setup dialog box both
open.
The GCP Tool Reference Setup dialog lists all the ways in which reference GCPs
could be obtained in Imagine. Reference GCPs are GCPs for which real-world
coordinates are known. We will be getting reference GCPs from the SPOT
image, so accept the default choice Image Layer (New Viewer) and click OK.
Navigate to the image panAtlanta.img in your home directory and click OK. The
Reference Map Information window just lists projection parameters of the SPOT
image, click OK in it.
When the Polynomial Model Properties dialog box opens, note the Status Bar. It
reads: Model has no solution. This will change after we have recorded the
Ground Control Points. Close this dialog box.
The Multipoint Geometric Correction window contains 2 panes (the left pane is the Input Image; the right pane is the Reference Image). Each pane contains 3 windows. The top left window of the Input Image and the top right window of the Reference Image contain the entire image. There is a box (you can resize this box by clicking on the centre markers of the edges and dragging) that defines the area that is zoomed in on in the main window. The main window also contains a zoom box, and this zoomed-in area is shown in the last window of the pane. Try moving or resizing a zoom box (make sure the pointer tool is selected in the Multipoint Geometric Correction window, otherwise you'll add a GCP) and the corresponding window will pan and rescale. The purpose of these two additional windows is to zoom in on some part of the image in order to position GCPs precisely.
Now we'll add four pairs of GCPs (strictly speaking, we need only 3 for
polynomial transformation of the first order, but 4 is better). A pair consists of an
input GCP (recorded in the X Input and Y Input columns of GCP Tool) and a
corresponding reference GCP (recorded in the X Ref., Y Ref. columns). To
create GCPs look for locations that are (a) present in both images, (b) easily and
accurately identifiable in both images, and (c) evenly spread inside images,
preferably closer to image edges. We must first remove the existing GCPs from the GCP Table. Using the Shift key, highlight all the GCP points under the Point # column. Right-click and select Delete Selection.
As a starting point, use the highway intersection in the upper left part of the images. Drag both zoom boxes to it, and resize the boxes and/or windows so that you can clearly see the detail of the intersection. Pick a convenient location, for example, an overpass. Now switch to the Create GCP tool in the Multipoint Geometric Correction window. Add a GCP in the input image (tmAtlanta.img), then in the reference image (panAtlanta.img).
Add 3 more pairs of GCPs in other parts of the images. It is important to space GCPs as widely as possible on the image; generally it is best to keep them close to edges or corners. This way the polynomial transformation calculated from the GCPs will be valid for the whole image, not just for the portion where GCPs were concentrated. Take care that you don't accidentally record corresponding GCPs into different rows - the pointers ">" show the current row for input and reference GCPs, and these rows are not necessarily the same. To move a GCP, simply drag it to a new location. To delete a GCP, right-click on the corresponding row inside the Point # column and choose Delete Selection.
When you have all 4 pairs of GCPs, take a look at the Control Point Error field in the button bar of the GCP Tool. The error is given in input units, which are pixels in our case. The total error should ideally be less than 1 pixel.
At this point please save the input and reference GCPs to files. Choose File/Save
Input As... and save as Atl_in.gcc in your home directory. Use File/Save
Reference As... to save reference GCPs as Atl_ref.gcc.
Save the table in the GCP Tool window as a report. To do this, first unselect all rows in the table, then right-click on any column header in the table and choose Report... Print the report [and submit the report as part of your assignment].
At this point the transformation matrix is actually already computed. Click on the
Display Model Properties in the Geo Correction Tools to verify that the Status is
now "Model solution is current".
In the Geo Correction Tools bar click on the Display Resample Image Dialog...
button. Set Output File to tmAtlanta_georef.img in your home directory. Click OK
to start resampling.
Verify rectification
Wassia1_mss.img
Wassia2_mss.img
Wassia3_tm.img
This practical session will introduce you to the supervised land cover classification process. Recall that we have already tried to do this through level slicing a single image band. In this practical session, we will use the Box and MDM (Minimum Distance to Mean) classification methods. These methods are called supervised because the user determines the types (and therefore the number) of classes to use in creating the classification. This practical session is designed to help you understand how these classification algorithms work. An actual classification study has a lot more planning and steps involved! You will be collecting training sets in ERDAS IMAGINE to perform a simple 6-category classification.
Images:
mad_spot.img - SPOT image of Madison, WI
B. To create training sets select Signature Editor from the Main Raster Tab,
Classification Group, Supervised.
D. You will perform the following steps for each of the six classes listed
previously:
1. First zoom into an area that could be a training set (a homogeneous
area that represents a class). Then, create a polygon around the
F. Move the > indicator to the signature in the Signature Editor list if it did not
move automatically to the newly added class. Look at the histograms of the
signature for this class for each band by selecting View Histograms within the
Signature Editor window or pressing the histogram icon on the Signature Editor
menu.
G. Change the color of the class to a color that makes sense to you. Do this by
moving the > indicator to the appropriate signature in the Signature Editor list.
Move the cursor to the Color column in the list and click with the right mouse
button. Choose a color from the list that appears, except for black. We will use
black for unclassified pixels later on. You can choose Other to create your own
color.
H. Name the signature with the name of the associated land cover type by
clicking on the name in the Signature Name column and editing it.
I. Note you can delete a signature by highlighting the signature's name under the
Class column (the row will be highlighted in yellow). Then right-click and select
the option to delete the selection.
J. Examine the statistics for each of the classes by moving the > indicator to the
appropriate signature in the Signature Editor list and selecting View Statistics
or pressing the statistics icon on the Signature Editor menu.
K. There are many more functions in the Signature Editor. If you want, experiment
with the various icons and pull-down menus. Remember there is Imagine Help.
L. View the graphs of these signatures by selecting View Mean Plots from the
Signature Editor window. Select View Multiple Signature Mode from the
Signature Mean Plot window. Select View Scale Y-Axis from this window
also. How can this graph help you to decide how unique these signatures
are?
M. After you collect a number of training sets for each class, save the signature
file to your working directory by selecting File Save As. Name your file
"6classes.sig".
N. Save the statistics for your signatures by selecting File Report. Select only
Statistics and select All Signatures, then click OK. An Editor window appears
with a report of the statistics for each of the class signatures you have defined.
From the Editor window select File Save As and save this report in your
working directory as a file called "6classes.txt". Close the Editor window.
C. Under the Supervised Classification Window, save the Output Classified File
as a file called "boxclass.img" in your working directory.
D. Within the Supervised Classification Window and under the Decision Rules
select Parallelepiped for the Non-parametric Rule.
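The Box (parallelepiped) rule chosen above can be sketched as follows. The class limits, class IDs, and the classify-by-order overlap rule are illustrative assumptions; in ERDAS the limits come from your signatures (e.g. mean +/- 2 standard deviations).

```python
import numpy as np

def parallelepiped_classify(pixels, limits, unclassified=0):
    """Box (parallelepiped) classification.

    pixels -- (n, bands) array of DNs
    limits -- dict {class_id: (low, high)} of per-band limit arrays
    A pixel is assigned to the first class whose box contains it in
    every band (a simple classify-by-order overlap rule); pixels
    falling inside no box remain unclassified."""
    labels = np.full(len(pixels), unclassified)
    for class_id, (low, high) in limits.items():
        inside = np.all((pixels >= low) & (pixels <= high), axis=1)
        labels = np.where((labels == unclassified) & inside,
                          class_id, labels)
    return labels

limits = {1: (np.array([0, 0]),   np.array([50, 50])),     # e.g. water
          2: (np.array([60, 60]), np.array([120, 120]))}   # e.g. grass
pixels = np.array([[10, 20], [80, 90], [200, 5]])
labels = parallelepiped_classify(pixels, limits)
```

Note the third pixel falls inside neither box and stays unclassified; those are the black pixels you will look for when flickering the result.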
H. Right click on 2D View #1 in the Contents Pane and select Open Raster
Layer from the Viewer Context Menu. Open the file "boxclass.img". In the
Raster Options portion of the Select Layer to Add Window, deselect the Clear
Display option and then click OK.
I. Select Flicker from the Viewer context menu. You can click on the Manual
Flicker button within the Viewer Flicker window to animate the original and
classified images. Comment on the number of unclassified pixels, if any,
and on errors of commission and omission between any two classes
you choose to compare.
J. Another way to look at the classified image versus the original is to select
Swipe from the Viewer Context Menu. Then you can move the classified
image over a portion of the original image.
F. Select Open Raster Layer from the Viewer Context Menu for View #1 and
open the file "mdmclass.img". In the Raster Options portion of the Select
Layer to Add window, deselect the Clear Display option and then click OK.
G. Use Flicker from the Viewer Context Menu. You can click on the Manual
Flicker button within the Viewer Flicker window to animate the original image,
the box classified image, and the MDM classified image. Comment on the
number of unclassified pixels, if any, and on errors of commission and
omission between any two classes you choose to compare. Compare
and contrast the results from the Box and MDM classifiers.
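For comparison, the MDM rule can be sketched as a nearest-mean assignment; the means and pixels below are made-up values, and in ERDAS the means come from your signatures.

```python
import numpy as np

def mdm_classify(pixels, means):
    """Minimum Distance to Mean: assign each pixel to the class whose
    signature mean is closest in Euclidean distance. Every pixel gets
    a class, so there are no unclassified pixels, unlike the box
    classifier."""
    pixels = np.asarray(pixels, dtype=float)
    # distance matrix of shape (n_pixels, n_classes)
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :],
                           axis=2)
    return dists.argmin(axis=1)

means = np.array([[20.0, 20.0],    # class 0, e.g. water
                  [90.0, 90.0]])   # class 1, e.g. grass
pixels = np.array([[10, 25], [80, 95], [200, 5]])
labels = mdm_classify(pixels, means)
```

The outlier pixel [200, 5], which the box classifier would leave unclassified, is still forced into its nearest class here; that difference is worth noting when you compare the two results.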
NB: Make sure you save the boxclass.img and mdmclass.img files as these
images will be used in Practical 5.
Make sure that you still have a copy of the signature file as well as the text file
you created in Practical 4 (I will refer to them as 6classes.*). You will be using
this signature file to compare the results of the Maximum Likelihood classifier to
the Box and MDM classifiers.
A. As you open the file mad_spot.img (Viewer Context Menu Open Raster
Layer), highlight mad_spot.img in the Select Layer to Add window and
then look under the Raster Options tab. Display the image as a 3,2,1 (RGB)
true color image. Note that for SPOT imagery, band 3 is NIR, band 2 is
Red, and band 1 is Green. Hence, this will not really be a true color image.
E. From the Region Growing Properties window, select the 3 by 3 pixel icon
F. Create a training set for each the six classes that you defined in Practical
session 4 (i.e. Water, Grass, Wetland, Road, Buildings and Pines) by
performing the following steps for each class:
1. Create an AOI polygon around the class by using the Region Grow
H. Highlight all your signatures and then select View Histograms from the
Signature Editor window. In the Histogram Plot Control Panel window, set
Signatures to All Selected Signatures and Bands to All Bands, then click
Plot.
1. Comment on the shape of these histograms.
2. Comment on the separability between the histograms in each
band of any two classes.
I. To improve and add to the number of your training sets, change the Region
Growing Properties (see step E). Note: use spectral class names that help
you know which information class they belong to (such as water1, water2).
J. Save this signature file to your working directory by selecting File Save As
from the Signature Editor window. Navigate to your directory and name your
file "6seeds.sig". Save the statistics for your signatures by selecting File
Report. Select only Statistics and select All Signatures, then click OK. An
Editor window appears with a report of the statistics for each of the class
signatures you have defined. From the Editor window select File Save As
and save this report in your working directory as a file called "6seeds.txt".
Close the Editor window.
K. If you have not done so already, save the files "6seeds.sig" and "6seeds.txt".
You might want to look at the signature file "6seeds.txt" when you write up
your report. The file "6seeds.sig" is a binary file and cannot be viewed as
text.
A. Open your signature file called "6classes.sig" by selecting File Open from
the Signature Editor window.
B. Select Edit Parallelepiped Limits from the Signature Editor window, and
then click on Set From the Limits window. In the Set Parallelepiped Limits
window, select Std. Deviation, set this to 2, and set the Signatures to All,
then click OK and Close the Limits window. Select Classify Supervised
from the Signature Editor window. Navigate to your working directory and
then set the Output file to "boxclass.img". Set the Non-parametric Rule to
Parallelepiped, the Overlap Rule to Classify by Order, and the Unclassified
Rule to Unclassified, then click OK.
D. If you had to do steps B and C, close the 6classes.sig file if you have not
done so already and open your 6seeds.sig file in the Signature Editor
Window. Select Classify Supervised from the Signature Editor window.
Navigate to your working directory and set the Output file to "maxlike.img".
Set the Non-parametric Rule to None and the Parametric Rule to Maximum
Likelihood, and then click OK.
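The Maximum Likelihood rule can be sketched as follows. The means and covariances below are made-up values; ERDAS derives them from your signature statistics, which is why this classifier needs full statistics rather than just means or box limits.

```python
import numpy as np

def max_likelihood_classify(pixels, means, covariances):
    """Maximum Likelihood: assign each pixel to the class with the
    highest Gaussian likelihood, given each signature's mean vector
    and covariance matrix."""
    pixels = np.asarray(pixels, dtype=float)
    scores = []
    for mean, cov in zip(means, covariances):
        inv = np.linalg.inv(cov)
        diff = pixels - mean
        # log-likelihood up to a constant:
        # -log|C| - (x - m)' C^-1 (x - m)
        mahal = np.einsum('ij,jk,ik->i', diff, inv, diff)
        scores.append(-np.log(np.linalg.det(cov)) - mahal)
    return np.argmax(scores, axis=0)

means = [np.array([20.0, 20.0]), np.array([90.0, 90.0])]
covs = [np.eye(2) * 25.0, np.eye(2) * 100.0]  # class 1 is more spread out
pixels = np.array([[22.0, 18.0], [85.0, 95.0]])
labels = max_likelihood_classify(pixels, means, covs)
```

Unlike MDM, the decision here accounts for how spread out each class is, so a pixel moderately far from a high-variance class can still be assigned to it.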
E. If you have more than one spectral class per information class, then you will
need to recode your classifications down to six information classes. Do this
by selecting the Raster Tab, Raster GIS Group, Thematic Menu,
Recode. Fill in the menu; the input file will of course be one of your
classifications with more than one spectral class per information class. Press
the Setup Recode button and recode the spectral classes down to the six
information classes.
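The recode step amounts to a lookup-table substitution on class numbers; the mapping below is hypothetical.

```python
import numpy as np

def recode(classified, mapping):
    """Recode spectral classes to information classes via a lookup
    table, e.g. collapsing water1 and water2 into one water class."""
    lut = np.arange(classified.max() + 1)   # default: keep class as-is
    for old, new in mapping.items():
        lut[old] = new
    return lut[classified]

classes = np.array([[1, 2, 3], [4, 2, 1]])
# hypothetical mapping: spectral classes 1 and 2 are both "water" (1),
# 3 and 4 are both "grass" (2)
merged = recode(classes, {1: 1, 2: 1, 3: 2, 4: 2})
```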
F. Display the results from the Box, MDM, and Maximum Likelihood classifiers
(and Recode) in the 2DView #1 window. Select Utility Flicker from the
Viewer #1 window and animate the results from the three classification
algorithms. Ensure that you are looking at the full extent of the images
by selecting View Fit Image to Window. You may want to "hide" certain
layers so that they do not appear in the animation. You can do this by
selecting View Arrange Layers and then right mouse click on a layer and
selecting Layer Visibility. Repeat this procedure to "un-hide" layers.
G. Close the viewer. Open a new viewer and display the original image
(mad_spot.img) as a 3, 2, 1 True Color image. Then open the results images
of the Box classifier (boxclass.img), the Minimum Distance to Mean classifier
(mdmclass.img), the Maximum Likelihood classifier (maxlike.img), and the
Recode image in new viewers. To do this use Home Tab, Window Group,
Add Views, Create New 2D View. Once all the views are open, use
Equalize Panes to view all the Viewer windows evenly and compare them to
the original.
Bera_mss.img Multi-date (May 19 and July 30, 1981) Landsat MSS imagery
(one RED and one NIR band for each date) of an agricultural area
located in the Berea Creek West Quadrangle of western
Nebraska.
B. Set the Input Raster File to your copy of the "Bera_mss.img" file.
C. Set the Output Cluster Layer to a file called "isodata.img" in your working
directory. This file will contain the classified image output from the ISODATA
algorithm.
E. Click on Initializing Options and note the default parameters used for
initializing the class means.
G. Note the default values listed in the Processing Options section. The
Maximum Iterations and Convergence Threshold parameters will be used to
control when the algorithm should stop. The Maximum Iterations is the
greatest number of times the algorithm will loop through the ISODATA
processing logic. The Convergence Threshold is a percentage of the
classified image that must remain unchanged between two successive
iterations for the algorithm to halt. Briefly describe how the ISODATA
algorithm will classify the input imagery. Be sure to include comments
about where the locations of the initial class means will be set and why
there are two rules to stop the processing.
H. Click OK. Watch the status bar carefully and note how many iterations it
takes to complete the classification. Which of the two criteria (number
of iterations or percent change) was used to stop the algorithm?
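The two stopping rules from step G can be sketched in a simplified ISODATA-style loop. Real ISODATA also splits and merges classes during processing, which is omitted here; the initialization (means spread along the data's min-max range) mirrors one common default but is still an assumption.

```python
import numpy as np

def isodata_like(pixels, n_classes, max_iterations=24,
                 convergence_threshold=0.95):
    """Simplified ISODATA-style clustering: initialize class means,
    assign pixels to the nearest mean, recompute means, and stop when
    either max_iterations is reached or the fraction of pixels whose
    label is unchanged between iterations reaches the convergence
    threshold."""
    pixels = np.asarray(pixels, dtype=float)
    lo, hi = pixels.min(axis=0), pixels.max(axis=0)
    means = np.linspace(lo, hi, n_classes)     # initial class means
    labels = np.full(len(pixels), -1)
    for iteration in range(1, max_iterations + 1):
        d = np.linalg.norm(pixels[:, None, :] - means[None, :, :],
                           axis=2)
        new_labels = d.argmin(axis=1)
        unchanged = (new_labels == labels).mean()
        labels = new_labels
        for k in range(n_classes):
            if np.any(labels == k):
                means[k] = pixels[labels == k].mean(axis=0)
        if unchanged >= convergence_threshold:
            return labels, iteration, "converged"
    return labels, max_iterations, "max iterations"

pts = np.array([[0.0, 0.0], [1.0, 1.0], [10.0, 10.0], [11.0, 11.0]])
labels, n_iter, reason = isodata_like(pts, n_classes=2)
```

On these well-separated points the convergence threshold is reached after two iterations, long before the iteration cap, which illustrates why two stopping rules are needed: one guarantees termination, the other stops early once the result is stable.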
C. Note that you will have to interchange the raster and vector layers by
dragging them up or down in the Table of Contents.
F. From the Raster Attribute Editor window, select File Save and then File
Close.
One way to reduce this fragmentation is to pass a convolution filter (kernel) over
the image.
A. Make a copy of your isodata.img file in your working directory and name it
"isodataflt.img".
B. Select Viewer, and display the classification image in the isodataflt.img file.
C. Go to the Raster highlighted heading > Thematic tab > Enhancement
group > Statistical Filtering menu and select Statistical Filtering.
D. Set the Function to Majority and the Window Size (kernel size) to 3 x 3.
E. Click Apply and note what happens to the image. You can undo this change
by selecting Raster Undo.
F. Answer the following questions:
1. What happened to the class fragments?
2. What happened to the class boundaries?
3. What happens if a larger window size is used?
4. Why didn't we use a mean or median value for the output function
of the filter?
You are employed as a remote sensing consultant for the department of local
government and housing. Your task is to create a land cover dataset for the
Maputaland area in KwaZulu-Natal. For this phase of the project a simple
classification is needed (level 1). You have acquired the following datasets:
1:250 000 land cover dataset created by the CSIR (format: Esri Shapefile, *.shp)
GPS points of the area (captured by an independent consultant who surveyed
the area)
Using the datasets provided, create a land cover map for the Maputaland
area. Your report should include the following:
Supervised Classification
Forest coastal
Grassland
Shrub
Plantation - mature
Plantations - young
Reed swamp
Sand forest
Shrub land
Swamp forest
Woodland
Water - fresh, deep
Water - fresh, shallow
Farmlands commercial
Urban Areas Residential
Urban Areas Commercial
2732 and 2832 topographical maps are available from the map library
Accuracy Assessment
Procedures and methods that were used to obtain your results (supervised,
unsupervised and error assessment)
1. Introduction
Optical properties of soils are related first to their mineral composition, since soils
result from the transformation of weathering products of rocks. A soil reflectance
spectrum is the superimposition of spectra of the soil mineral components. Like
minerals, soils have an increasing reflectance from the visible to the shortwave
infrared, with absorption bands around 1.4 µm and 1.9 µm related to the soil
moisture (Figure 1).
Figure 1. Spectral reflectance curves for Newtonia silt loam at various moisture
contents (after Bowers and Hanks, 1965)
Another factor affecting optical properties of soils is soil moisture. Increasing soil
moisture leads to parallel curves of soil reflectance spectra (Figure 1). This
means that the soil moisture has an equal effect over the entire spectrum and
that the ratio between spectral bands, such as red and near-infrared bands, is
independent of soil moisture. Therefore, it is possible to define the "soil line"
concept, which is the line representing the relationship between red and near-
infrared soil reflectances (Figure 2).
Figure 2. Soil line for a silt-loam soil in Avignon-Montfavet (France) (after Baret,
1986)
The soil line is characteristic of the soil type, and it is used to define some
vegetation indices (see below) that are able to correct plant canopy
reflectances for the effects of soil optical properties. The soil line is fitted by the
least-squares regression method and is expressed as follows:

nirsoil = a * redsoil + b   (1)

where
redsoil = soil reflectance in the red band
nirsoil = soil reflectance in the near-infrared band
a, b = parameters of the soil line estimated by the least-squares regression method
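Equation (1) can be fitted by ordinary least squares; a short sketch follows, in which the bare-soil reflectance values are made up purely for illustration.

```python
import numpy as np

# Hypothetical bare-soil reflectances (percent) in the red and NIR bands.
red_soil = np.array([8.0, 12.0, 18.0, 25.0, 33.0])
nir_soil = np.array([11.0, 16.0, 23.0, 31.0, 40.0])

# Least-squares fit of the soil line nir_soil = a * red_soil + b (equation 1).
a, b = np.polyfit(red_soil, nir_soil, 1)
print(f"soil line: nir = {a:.3f} * red + {b:.3f}")
```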
Other visible bands, such as the green or the blue ones, can be used instead of
the red one. A third factor influencing soil optical properties is the content of
organic matter, which has a lower effect in the bands beyond 1.8 µm. High organic
matter content can induce spectral interference with the characteristic absorption
bands of some minerals containing Mn and Fe. This factor also has an indirect
spectral influence through its effect on soil structure and on water retention
capacity. The last factor that can affect soil optical properties is the soil
roughness, related to its texture.
Optical properties of leaves are essentially the same whatever the species. A
green healthy leaf has typical spectral features that differ among the three main
optical spectral domains (Figure 3).
In the visible bands (400-700 nm), light absorption by leaf pigments dominates
the reflectance spectrum of the leaf and leads to generally lower reflectances
(15% maximum). There are two main absorption bands, in blue (450 nm) and in
red (670nm), due to the absorption of the two main leaf pigments: the chlorophyll
a and b which account for 65% of the total leaf pigments of superior plants.
These strong absorption bands induce a reflectance peak in the yellow-green
(550 nm) band. For this reason, chlorophyll is called the green pigment. Other
leaf pigments also have an important effect in the visible spectrum. For example,
the yellow to orange-red pigment carotene has a strong absorption in the
350-500 nm range and is responsible for the color of some flowers and fruits, as
well as of leaves without chlorophyll. The red and blue pigment xanthophyll
has a strong absorption in the 350-500 nm range and is responsible for the leaf
color in fall. In the near-infrared spectral domain (700-1300 nm), leaf structure
explains the optical properties. Leaf pigments and cellulose are transparent to
near-infrared radiation.
The near-infrared domain has two main spectral regions: (1) between 700 and
1100 nm, where the reflectance is high except in two minor water-related
absorption bands (960 and 1100 nm), and (2) between 1100 and 1300 nm, which
corresponds to the transition between the high near-infrared reflectances and the
water-related absorption bands of the shortwave infrared. The last optical domain
is the shortwave infrared (1300-2500 nm), characterized by light absorption
by the leaf water. Because water strongly absorbs radiation at 1450, 1950 and
2500 nm, these wavelengths cannot be used for reflectance measurements. In
the other shortwave infrared wavelengths, reflectances increase when leaf liquid
water content decreases. For all three main spectral domains, the factors
affecting leaf optical properties are: internal or external structure, age, water
status, mineral stress and health. The effects of these factors are
detailed in Guyot (1989).
At least fifty different vegetation indices exist. The most commonly used
vegetation indices are ratios of single-band or linearly combined reflectances.
Ratioing allows removal of disturbances that affect, in the same way, the
reflectances in each band. Ratio-based vegetation indices can be computed from
radiance values instead of reflectance values if the radiances are measured under
the same irradiance conditions. Most ratio-based vegetation indices use the red
band, which is related to chlorophyll light absorption (Figure 3), and the
near-infrared band, which is related to green vegetation density, because these
two bands contain more than 90% of the information on a plant canopy. Also, in
the red and near-infrared bands the contrast between vegetation and soil is
maximal. The first ratio-based vegetation index was the Reflectance Ratio or
Ratio Vegetation Index (RVI), which is computed as follows (Pearson and
Miller, 1972):
RVI = nir / red   (2)

where
nir = reflectance in the near-infrared band
red = reflectance in the red band
Rouse et al. (1974) improved this index and defined the Normalized Difference
Vegetation Index (NDVI), which is computed as follows:

NDVI = (nir - red) / (nir + red)   (3)

where
nir = reflectance in the near-infrared band
red = reflectance in the red band
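Equations (2) and (3) in code form, with reflectances expressed as fractions; the sample values are illustrative.

```python
def rvi(nir, red):
    """Ratio Vegetation Index (equation 2)."""
    return nir / red

def ndvi(nir, red):
    """Normalized Difference Vegetation Index (equation 3)."""
    return (nir - red) / (nir + red)

# Dense green vegetation: bright in the NIR, dark in the red.
print(rvi(0.45, 0.05))   # high ratio over vegetation (~9)
print(ndvi(0.45, 0.05))  # NDVI close to +1 over vegetation (~0.8)
```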
Pinty and Verstraete (1992) showed that the relative influence of atmospheric
effects is better minimized by non-linearly combining single-band reflectances
into the vegetation index. They proposed the Global Environment Monitoring
Index, GEMI, calculated as follows:

GEMI = eta * (1 - 0.25 * eta) - (red - 0.125) / (1 - red)   (4)

with

eta = [2 * (nir^2 - red^2) + 1.5 * nir + 0.5 * red] / (nir + red + 0.5)
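Equation (4) in code form; the constants follow the commonly published form of GEMI and should be checked against Pinty and Verstraete (1992).

```python
def gemi(nir, red):
    """Global Environment Monitoring Index (equation 4), in its commonly
    published form -- verify the constants against the original paper."""
    eta = (2 * (nir ** 2 - red ** 2) + 1.5 * nir + 0.5 * red) / (nir + red + 0.5)
    return eta * (1 - 0.25 * eta) - (red - 0.125) / (1 - red)

print(gemi(0.45, 0.05))  # high value over dense vegetation
```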
Because RVI and NDVI do not minimize well the effect of soil optical properties on
canopy reflectances, vegetation indices based on the soil line concept are also used.
Richardson and Wiegand (1977) proposed the Perpendicular Vegetation Index
Figure 4. Geometrical relationships between RVI, NDVI, PVI and TSAVI (after
Baret et al., 1989)
The intersection point between the soil line and the perpendicular is (redsoil, nirsoil)
(Figure 4). In this figure, RVI is the slope of the line joining the origin (0,0) to the
vegetation point (redveg, nirveg). It is therefore equal to tan(a), where a is the angle
between this line and the red axis, and NDVI is equal to tan(a - 45). PVI is
computed by:

PVI = sqrt((redsoil - redveg)^2 + (nirsoil - nirveg)^2)   (5)

where
(redsoil, nirsoil) = the point of the soil line closest to the vegetation point (redveg, nirveg)
PVI = (nirveg - a * redveg - b) / sqrt(a^2 + 1)   (6)

In this formula, PVI is explicitly a function of the soil line parameters (a and b),
which are less variable from one soil to another than soil reflectance
measurements.
PVI has been improved by Baret et al. (1989) and Baret and Guyot (1991) into
the Transformed Soil Adjusted Vegetation Index (TSAVI), which is computed as
follows:

TSAVI = a * (nirveg - a * redveg - b) / (redveg + a * nirveg - a * b)   (7)

where
a, b = parameters of the soil line (equation 1)

For a=1 and b=0, TSAVI = NDVI. In Fig. 4, TSAVI is defined as tan(b'), where b' is
the angle between the soil line and the line joining the point (redveg, nirveg) to the
point S of the soil line which has X as abscissa.
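Equations (6) and (7) in code form, using the soil line parameters a and b from equation (1); the formulas are as reconstructed above and should be checked against the cited papers.

```python
def pvi(nir, red, a, b):
    """Perpendicular Vegetation Index (equation 6): signed perpendicular
    distance of the point (red, nir) from the soil line nir = a * red + b."""
    return (nir - a * red - b) / (a ** 2 + 1) ** 0.5

def tsavi(nir, red, a, b):
    """Transformed Soil Adjusted Vegetation Index (equation 7)."""
    return a * (nir - a * red - b) / (red + a * nir - a * b)

# Sanity check from the text: for a = 1 and b = 0, TSAVI reduces to NDVI.
print(tsavi(0.45, 0.05, 1.0, 0.0))  # ~0.8, i.e. the NDVI of these reflectances
```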
All these commonly used vegetation indices have advantages and limits related
to the experimental conditions for which they were defined. The correlations
among them are low, as shown by Baret and Guyot (1991) through
theoretical simulations. Furthermore, the best vegetation index to use depends
on the sensors that are considered, because of the differences in band
characteristics (wavelengths, widths, atmospheric effects) for each sensor. This
means, for example, that NDVI computed from data acquired with the
NOAA-AVHRR sensor is different from NDVI computed from data acquired with
the LANDSAT-TM sensor, because of the difference between the two sensors in
the red and near-infrared bands.
2. References
Baret, F., and G. Guyot, 1991. Potentials and limits of vegetation indices for LAI and
APAR assessment, Remote Sensing of Environment, 35: 161-173.
Baret, F., G. Guyot, and D.J. Major, 1989. TSAVI: a vegetation index which minimizes
soil brightness effects on LAI and APAR estimation, in Proceedings of the 12th
Canadian Symposium on Remote Sensing and IGARSS'89, Vancouver (Canada), 3:
1355-1358.
Bowers, S.A., and R.J. Hanks, 1965. Reflection of radiant energy from soil, Soil
Science, 100: 130-138.
Pearson, R.L., and L.D. Miller, 1972. Remote mapping of standing crop biomass for
estimation of the productivity of the short-grass prairie, Pawnee National Grasslands,
Colorado, in Proceedings of the 8th International Symposium on Remote Sensing of
Environment, ERIM, Ann Arbor, MI, 1357-1381.
Pinty, B., and M. Verstraete, 1992. GEMI: a non-linear index to monitor global vegetation
from satellites, Vegetatio, 101: 15-20.
Richardson, A.J., and C.L. Wiegand, 1977. Distinguishing vegetation from soil
background information, Photogrammetric Engineering and Remote Sensing, 43(2):
1541-1552.
Rouse, J.W., R.H. Haas, J.A. Schell, D.W. Deering, and J.C. Harlan, 1974. Monitoring
the vernal advancement and retrogradation of natural vegetation, Final Report, Type III,
NASA/GSFC, Greenbelt, MD, 371 pp.
Exercise # 1
Let us consider the following reflectances (in percent) measured on bare soil,
during a clear day (series I) and a cloudy day (series II), in the three SPOT-HRV
MLA wavebands (green (500-590 nm), red (620-680 nm) and near-infrared (790-
890 nm)).
In each case, present your results in the form of a correlation matrix. Draw the
scatter plot between the most correlated bands, using data from both series,
and define a possible method to distinguish between the two data series.
Determine on which band the influence of the soil type is least important for
series #2. Use mean reflectance values for a given object, if necessary. In each
case, present your results in a table.
is:
(8)
In each case, do wet and dry soils belong to the same line? Where are they
positioned on each line?
Exercise #2
In each case, present your results in the form of a correlation matrix. In each
case, do not use data acquired on water, because we are looking for
correlations related to vegetated targets.
Try to explain the difference with the results of Question 1.1 of Exercise #1. Draw
the scatter plot between the most correlated bands, using both data series
together.
2.2. Add the data of Exercise #1. Let us consider all these data as typical
spectral signatures of classes (water, vegetation, soil, ...). Which 2D scatterplot
cannot be used to discriminate these classes?
2.3. Calculate the following vegetation indices (RVI, NDVI, GEMI, PVI, and
TSAVI) for each object and each series. For PVI and TSAVI, use adequate soil
line parameters with regard to the series number. Present your results in a table
(one per series). For each series, which index is best to distinguish vegetation
from soil-type objects, vegetation from water-type objects, and water from
soil-type objects? You may use the relative difference between mean values of
the vegetation indices to answer this question.
Introduction
Principal Components Analysis is an image transformation technique that is used for a
variety of purposes in Remote Sensing and GIS, including data compression and
change analysis. In this exercise we will explore the nature of the Principal Components
transformation and its application in data compression. By doing so, we will also be able
to gain an appreciation for the fundamental information content of the different bands
associated with a multi-spectral image.
The spectral bands of a multi-spectral image most commonly do not contain completely
independent information. More likely, there will be some degree of correlation between
bands, indicating that they share elements of information in common. To illustrate this,
consider the example in Figure 1.
Figure 1 depicts the reflectance levels for a set of pixels by plotting their positions in
what is commonly called band space (in this case, for an image with two spectral
bands). Each of the axes represents reflectance in the spectral band indicated. Each
image pixel can thus be plotted in this space by placing its location at the intersection of
its reflectance level on each band. As can be noted, there is a significant amount of
correlation between the bands (i.e., if a pixel has a high reflectance on Band 1, it is likely
also to have a high reflectance on Band 2).
Since the bands in Figure 1 are correlated they do not each carry independent
information. The fact that you have a good chance of being able to predict the
reflectance of a pixel on one band from the reflectance on the other confirms this -- there
is some degree of redundancy in the information they carry. It is this redundancy that
Principal Components seeks to remove when it is used for the purpose of data
compression. By removing redundant data, we are left with the same information, but
with smaller volumes of data. Unfortunately, the process is not so clear cut. Unless they
are perfectly correlated, some independent information will always exist in each band.
Thus removing data after transforming to remove redundancy always implies some loss
of information. However, as we shall see in this exercise, the information we typically
reject in order to achieve data compression is often inconsequential or indeed, even
undesirable.
The weights in the transformations are collectively known as the eigenvectors. For any
given number of original bands, an equal number of transformation equations can be
produced, thus yielding an equivalent number of component images.
Perhaps the easiest way to understand the result of the Principal Components
transformation is to think of the process as a mathematical determination of a new set of
axes in band space such that the resulting component images are uncorrelated with one
another and ordered by the amount of the original variance each one explains.
This is illustrated in Figure 1, in which Component 1 (CI) is oriented along the axis of
largest variation. Component 2 (CII) will be, by definition, perpendicular to Component 1.
Note that it is oriented in the direction of lesser variation. Because there were two input
bands, there are also two output components, and these components describe all of
the information inherent in the original set (indeed, it is possible to reverse the
transformation and thereby reconstruct the original band set).
Given this form of output, we can now see how data compression can be achieved. As a
result of the transformation, redundancy in the data has already been removed (to test
this, see if you feel you can predict the value on Component 2 given its value on
Component 1). However, we still have the same amount of data (we started with two
bands and ended up with two). To achieve data compression we will have to get rid of
one or more of the new component images. In this simple example, consider what would
happen if you were to get rid of Component 2 and keep only Component 1. Since
Component 1 explains the major element of variation, most pixels would be meaningfully
related to each other (i.e., be distinguishable from one another with roughly the same
relative difference). If it is known, for example, that Component 1 contains 90% of the
original information then we will have kept 90% of the information while retaining only
half of the original data.
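The transformation described above can be sketched with numpy. This is a generic unstandardized PCA; the function name and the (n_bands, rows, cols) array layout are illustrative assumptions, not any particular software's routine.

```python
import numpy as np

def principal_components(bands):
    """Unstandardized Principal Components of a band stack.

    bands: array of shape (n_bands, rows, cols).
    Returns the component images (same shape), the eigenvectors, and the
    percent variance explained by each component, in decreasing order.
    """
    n_bands, rows, cols = bands.shape
    X = bands.reshape(n_bands, -1).astype(float)
    X = X - X.mean(axis=1, keepdims=True)      # centre each band

    cov = np.cov(X)                            # band-space covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]          # re-sort by decreasing variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    components = eigvecs.T @ X                 # project pixels onto the new axes
    pct_variance = 100.0 * eigvals / eigvals.sum()
    return components.reshape(n_bands, rows, cols), eigvecs, pct_variance
```

Keeping only the first components and multiplying back by the kept eigenvectors reconstructs an approximation of the original bands, which is the data-compression use described above.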
The procedure just described may seem risky since we achieve data compression at the
cost of some information loss. However, as will be seen in the next example, the
decision about how to balance these two is often not difficult.
The data to be used for this exercise consist of seven bands of Landsat TM data for a
location just to the west of Worcester, Massachusetts in the USA. The images are
intentionally small (just 72 columns by 86 rows) in order to allow the effects on individual
pixels to be seen. The date of the image is September 10, 1987, just at the end of the
summer season. The area is largely covered by deciduous forest, with distinctive stands
of conifers (largely red and white pine) planted in the vicinity of reservoirs to reduce the
input of tannins from deciduous species and thereby enhance the visual quality of the
water supply. A small residential area is located to the north of the image.
Procedure
1. Use the display system of your software to examine the image named H87TM4 as a
grey level image. If you undertake any contrast stretching, use either a linear or a linear
with saturation contrast procedure.
The near-infrared band is often the most informative band from a multi-spectral set. The
water body farthest to the west and that farthest south are reasonably deep reservoirs
while the other lakes are shallow ponds. To the north of the image is a small residential
area. Otherwise the area is predominantly forested. To the north of both reservoirs can
be found distinctive conifer plantations (also a small one to the immediate west of the
southern reservoir). Because of their needle leaf structure these conifers (largely white
and red pine) appear darker on this band than the more prevalent deciduous trees.
2. Now examine all of the other bands in this set using a similar display procedure. The
names for the bands range from H87TM1 for TM Band 1 to H87TM7 for TM band 7. If
your system permits it, display them simultaneously on the screen.
Question 1
Comparing the seven bands to each other, visually estimate the two pairs of images that
appear to be the most alike. Then visually estimate the two pairs of images that appear
to be the most dissimilar. Which images are these? Overall, does it appear that there is
much redundancy? Make a very rough guess about what proportion of the data in these
seven bands is truly unique (see Note 2); don't worry about making an incorrect or
imprecise guess.
3. Now run your software's Principal Components Analysis routine. If you are given a
choice between Standardized and Unstandardized, choose Unstandardized. Indicate
that you wish to create 7 component images and, when required, specify the names of
the input bands: H87TM1 through H87TM7. You may be offered options for scaling of these
output images -- simply choose whatever defaults are offered. Then print out any tabular
results or summary statistics it produces.
Your software should offer several tables of information about the transformation
undertaken. These might include: a table of the percent variance explained by each
component (C1 through C7); the eigenvectors (i.e., the transformation coefficients),
which we will not use; and a table of component loadings. The loadings express the
degree of correlation between each component and each of the original bands. These
will be useful.
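The loadings can be computed directly from the covariance eigenstructure. The sketch below uses the standard relation for covariance-based PCA: the loading is the eigenvector element times the square root of the eigenvalue, divided by the band standard deviation. The function name and array layout are illustrative assumptions.

```python
import numpy as np

def component_loadings(bands):
    """Correlation (loading) of each original band with each component.

    For covariance-based PCA, loading[i, j] equals
    eigvec[i, j] * sqrt(eigval[j]) / std(band i).
    bands: array of shape (n_bands, rows, cols).
    """
    X = bands.reshape(bands.shape[0], -1).astype(float)
    cov = np.cov(X)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]                  # decreasing variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    stds = np.sqrt(np.diag(cov))                       # band standard deviations
    return eigvecs * np.sqrt(np.clip(eigvals, 0.0, None)) / stds[:, None]
```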
Question 2
Using the correlation matrix, which pair of original bands was the most correlated?
Which pair was the least correlated? Compare these to your original guesses in Q1.
Question 3
Using the loadings chart, which of the original bands is most correlated with Component
1? What is the level of correlation?
4. Use your display system to bring the image for Component 1 onto the screen (again
as a grey level image with only a simple linear contrast stretch). Then bring up the image
you identified in Q3 in a similar fashion for comparison. Notice that they look almost
identical.
Question 4
Question 5
Using the loadings chart, which of the original bands is most correlated with Component
2? What is the level of correlation?
5. Use your display system to bring the image for Component 2 onto the screen (again
as a grey level image with only a simple linear contrast stretch). Then bring up the image
you identified in Q5 in a similar fashion for comparison.
Question 6
6. Let's now jump to the other end of the sequence. Use your display system to bring the
image for Component 7 onto the screen (again as a grey level image with only a simple
linear contrast stretch).
Question 7
How would you describe the pattern you see on the screen? Would you describe this as
information? What proportion of variance (i.e., information) does this component
describe?
This is a fascinating image! Notice that there are some elements which appear to have a
somewhat systematic horizontal pattern. This image most probably represents a
combination of system noise and atmospheric interference. Clearly there is little here
that we might wish to save. Thus discarding the band entirely would not only be of little
concern, but would in fact most likely be considered a benefit (since we are discarding
noise). The percentage of variance explained by this component is indicative of
information only in an Information Theoretic sense, where information is equated with
variation. However, this is not meaningful information. Therefore we can discard it
without concern.
7. Now use your display system to bring the images for Components 6, 5, 4 and 3 onto
the screen (again as grey level images with only a simple linear contrast stretch).
Question 8
What is the percent variance explained by each of these images? Moving from the
component which explains the least to that which explains the most, at what point do you
start to see evidence of real geographic features?
Question 9
For purposes of data compression, one wishes to minimize the loss of geographically
meaningful information while maximizing the amount of data reduced. Which
components do you feel should be kept, and which ones should be rejected?
Question 10
Given your choices in Q9, what proportion of the data have you kept (i.e., what
proportion of the original number of bands)? What is the proportion of variation (i.e.,
information) retained?
Observations
During the course of this exercise, we have also observed several other important
points. First, we saw that one band (the near-infrared) carried an enormous amount of
the geographically meaningful information inherent to this data set. This will not always
be the case, but is commonly so in vegetated landscapes (since leaf structure
differences show up in this wavelength region most, and the contrast of vegetation with
water and non-vegetated surfaces is strong). It is for this reason that the near-infrared
channel is often a good one to examine if you are only able to view a single band.
Next in importance was the red wavelength band. Again, this is a result of the fact that
the landscape examined here is highly vegetated. The red band is frequently called a
chlorophyll absorption band since it is in this area of the electromagnetic spectrum that
energy is absorbed most by chlorophyll for the purpose of photosynthesis.
We also noted that Components 1 and 2 carried almost 98% of all the information in the
component set. Since these are most heavily correlated with TM bands 4 and 3
respectively, we can safely assume that in vegetated landscapes, these two bands will
carry most of the geographically meaningful information. You can begin to see why
counterparts of these two wavelength bands are found in SPOT multi-spectral imagery.
Finally, we noted that the PCA procedure was very effective in filtering out the noise in
the image. Since the transformation can be reversed, this suggests that we should be
able to reconstruct the bands without these noise elements by simply forcing the reverse
coefficients for the components concerned to zero before the reverse transformation.
Also, note that it is this tendency to order information elements into meaningful groups
that underlies the use of PCA in such areas as Change and Time Series Analysis.
Credits
This exercise was written by Ron Eastman at Clark University. The data were provided
by EOSAT Corporation.
References
The Principal Components transform is described in many Remote Sensing texts.
However, for an excellent intermediate-level discussion, you may wish to consult:
For a more detailed account illustrating the mathematics of the transformation process,
consult:
For examples of the use of Principal Components for Change and Time Series Analysis,
see:
Eastman, J.R., and Fulk, M., (1993) "Long Sequence Time Series Evaluation using
Standardized Principal Components", Photogrammetric Engineering and Remote
Sensing, 59, 8, 1307-1312.
Note 1.
Technically, this logic describes what is known as Unstandardized Principal Components
(also known as the Karhunen-Loeve or Hotelling Transform). There is a second variant
known as Standardized Principal Components in which the input data are effectively
converted into standard scores (by subtracting the mean and dividing the result by the
standard deviation) before transformation.
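The distinction can be sketched as follows: standardized PCA is simply PCA on standard scores, which amounts to diagonalizing the correlation matrix instead of the covariance matrix. The function name and test data here are illustrative assumptions.

```python
import numpy as np

def pca_eigenvalues(bands, standardized=False):
    """Eigenvalues (variances of the components), in decreasing order.

    Unstandardized PCA diagonalizes the covariance matrix; standardized
    PCA first converts each band to standard scores (subtract the mean,
    divide by the standard deviation), which is equivalent to
    diagonalizing the correlation matrix.
    """
    X = bands.reshape(bands.shape[0], -1).astype(float)
    if standardized:
        # Standard scores: subtract the mean, divide by the standard deviation.
        X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
    return np.sort(np.linalg.eigvalsh(np.cov(X)))[::-1]
```

With bands of very different dynamic ranges, the unstandardized first component is dominated by the high-variance band, while standardization gives each band equal weight.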
Note 2.
If you have trouble with this question, consider a case with four bands in which two
appear almost identical and a third is quite similar, while the fourth is completely
different. The image that looks completely different clearly carries new information and
thus might be given a weight of 1. Only one of the virtually identical pair carries full
information. Therefore give one of them a weight of 1 and the other a weight of 0.1.
Finally, the image which looks fairly similar to the virtually identical pair only carries a
limited amount of new information (say 25%). Therefore give it a weight of 0.25. Adding
the weights one gets 1 + 1 + 0.1 + 0.25 = 2.35. Since there are four bands of data, this
constitutes only 2.35 / 4 = 0.59. Very roughly, then, we might estimate that only 60% of
the data offer unique information.
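The weighting argument above, written out as arithmetic (the weights are the illustrative ones from Note 2):

```python
# Weights for the four hypothetical bands in Note 2: the completely different
# band, one of the near-identical pair, the other of that pair, and the
# fairly-similar band.
weights = [1.0, 1.0, 0.1, 0.25]
unique_fraction = sum(weights) / len(weights)
print(unique_fraction)  # 0.5875, i.e. roughly 60% unique information
```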
Appendix