MIT IAP 2018: Adapted from course EE194A/ENV196R, taught by Dr. Magaly Koch at Tufts University
Outline
• Background
• “Alien Eyes” Exercise
• Satellites and Sensors
• “The World Above Us” Exercise
• Data Download Practice
• Image Preprocessing
• Data Import Practice
Remote sensing is the act of obtaining
information from a distance & includes:
Passive
• Energy emitted by natural sources
• Limited wavelengths (visible, infrared, thermal)
• Free
Active
• Artificially created energy
• Unlimited wavelengths (laser, radar, acoustic)
• Costly
ELECTROMAGNETIC RADIATION
So what?
Quantum physics
RADIATION LAWS:
• Everything emits energy
• Wien’s Displacement Law:
Temperature dictates the
range & distribution of the
wavelengths emitted
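Wien's Displacement Law can be sketched in a few lines. The displacement constant (~2898 µm·K) and the example temperatures are standard physics values, not figures from these slides:

```python
# Wien's displacement law: the peak emission wavelength of a blackbody
# is inversely proportional to its temperature: lambda_max = b / T.

WIEN_CONSTANT_UM_K = 2897.8  # Wien's displacement constant b, in µm·K

def peak_wavelength_um(temperature_k: float) -> float:
    """Return the wavelength (µm) of peak blackbody emission at temperature_k (K)."""
    return WIEN_CONSTANT_UM_K / temperature_k

# The Sun (~5778 K) peaks in the visible; the Earth (~300 K) in the thermal infrared,
# which is why thermal sensors target the ~8-14 µm window.
print(round(peak_wavelength_um(5778), 2))  # ~0.50 µm (visible)
print(round(peak_wavelength_um(300), 1))   # ~9.7 µm (thermal IR)
```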
SCATTERING AND ABSORPTION
ENERGY REDUCTION DUE TO ATMOSPHERE(S)
CONCEPTS OF REMOTE SENSING
As we have seen, a lot happens between the light source and satellite sensor
SENSOR RESOLUTION CHARACTERISTICS
• Spatial resolution: relates to the pixel size; extent relates to the
overall image coverage.
• Spectral resolution: range of wavelengths recorded by
a sensor (bands: number, position and width).
• Radiometric resolution: range of integer values that can be
recorded by a sensor; measured in bits.
• Temporal resolution: the time that elapses between
successive dates of imagery acquisition.
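Radiometric resolution translates directly into a count of distinct brightness levels (digital numbers). A quick illustration, using the well-known quantization of Landsat sensors:

```python
# Radiometric resolution: a sensor quantizing with n bits can record
# 2**n distinct digital numbers (DNs).

def dn_levels(bits: int) -> int:
    """Number of distinct brightness levels for an n-bit sensor."""
    return 2 ** bits

print(dn_levels(8))   # 256 levels  (e.g., Landsat 7 ETM+)
print(dn_levels(12))  # 4096 levels (e.g., Landsat 8 OLI; products are
                      # commonly distributed rescaled to 16-bit integers)
```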
A Sky Full of Satellites
ORBIT PATTERNS
SCANNING SYSTEMS
LANDSAT 8 (LDCM)
Characteristics of LANDSAT 8
Launched: 11 February 2013
Type: Sun-synchronous
Altitude: 705 km
Inclination: 98.2 deg
Orbital period: 99 min
Temporal resolution: 16 days (233 orbits)
Swath width: 185 km
Sensors: Operational Land Imager (OLI)
Thermal Infrared Sensor (TIRS)
Scanner: Pushbroom (along-track)
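As a sanity check, two of the characteristics listed above imply a third: a 16-day repeat cycle covered in 233 orbits works out to the listed ~99-minute orbital period.

```python
# Cross-check of the Landsat 8 orbit figures: repeat cycle / orbits
# per cycle gives the orbital period.

repeat_cycle_days = 16
orbits_per_cycle = 233

period_min = repeat_cycle_days * 24 * 60 / orbits_per_cycle
print(round(period_min, 1))  # ~98.9 minutes, i.e. the ~99 min listed
```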
Take a moment to go through each characteristic of this satellite & sensor
Where do you go to
download satellite imagery?
https://lpdaac.usgs.gov/data_access
(we will look at Earth Explorer)
Our eyes are best suited to understanding red, green, and blue wavelengths,
which we combine to process the image in our brain
Here is an image broken
into its three components:
Red, Green, & Blue
VISUALIZING LIGHT
This is how wavelength measurements get transferred from satellite to human brain
Object → Sensor → Image → Monitor → Eyes → Brain
Natural Color Image: R→R, G→G, B→B
We can also apply colors to wavelengths (max of 3) that our brain can’t normally see
Pseudo Color (Heat): Hot→R, Med→G, Cold→B
Pseudo Color (Multiband): invisible bands mapped to display channels,
e.g., FIR[R]; NIR[G]; G[B]
Pseudo Color (Multisource): bands from different sensors or sources mapped
to display channels (e.g., visible, radar, thermal)
True vs. False Color Images
Which image is more intuitive, and a good way to get the lay of the land?
Which image provides characteristics of the materials our eyes couldn’t normally see?
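The band-to-channel mappings above amount to stacking three 2-D arrays into an RGB cube. A minimal NumPy sketch; the tiny reflectance arrays are invented stand-ins for real satellite bands:

```python
import numpy as np

# A color composite assigns three bands to the R, G, B display channels.
# With real data each band would be a 2-D array read from a file.

red   = np.array([[0.10, 0.12], [0.30, 0.08]])  # red band (toy values)
green = np.array([[0.08, 0.10], [0.25, 0.07]])
blue  = np.array([[0.06, 0.09], [0.20, 0.05]])
nir   = np.array([[0.45, 0.50], [0.15, 0.40]])  # near-infrared band

# Natural (true) color: R->R, G->G, B->B
natural = np.dstack([red, green, blue])

# Standard false color: NIR->R, red->G, green->B
# (vigorous vegetation appears bright red on screen)
false_color = np.dstack([nir, red, green])

print(natural.shape, false_color.shape)  # (2, 2, 3) (2, 2, 3)
```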
How to view satellite imagery in
different software:
Note: I will demo in ENVI, because it’s only on
computers 1 & 2, and then you will practice in ArcGIS.
Tutorials & resources for the software are below:
ENVI: http://www.harrisgeospatial.com/docs/Tutorials.html
ArcGIS: https://www.esri.com/arcgis/imagery-remote-sensing
http://virginiaview.cnre.vt.edu/tutorial/RS_in_ArcGIS_AllChapters.pdf
http://ibis.geog.ubc.ca/courses/geob373/labs/IGETT_Exercises/ArcGIS%20Image%20Analysis%20workflow.pdf
Wrap Up
Who can help you get started
Email support
gishelp@mit.edu
-For any RS or GIS related questions
Website
http://libguides.mit.edu/gis/
-Tutorials
-Access to 100+ ESRI web courses
-Access to our data repository, e.g. GeoWeb
-Upcoming and previous workshop materials
-Access to commercial RS software, ENVI
Advanced Analyses / Additional Information
Note: this will be the subject of a future
Intermediate RS Workshop. To learn more,
contact gishelp@mit.edu for an appointment,
or come to open help hours listed here:
https://libguides.mit.edu/gis.
IMAGE PROCESSING WORKFLOW
SPECTRAL RATIOING
The process of dividing the pixels in one image or band by the
corresponding pixels in a second image or band is known as image or
band ratioing. Ratio images serve to highlight subtle variations in the
spectral responses of various surface covers and to reduce the effect of
topography (illumination differences). By ratioing the data from two
different spectral bands, the resultant image enhances variations in the
slopes of the spectral reflectance curves between two different spectral
ranges that may otherwise be masked by the pixel brightness variations in
each of the bands. Ratios are useful for differentiating between areas of
stressed and non-stressed vegetation, discriminating rock types, etc.
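The pixel-by-pixel division described above is a one-line array operation. A minimal NumPy sketch with invented reflectance values, chosen so that shaded pixels keep the same ratio as sunlit ones:

```python
import numpy as np

# Band ratioing: divide one band by the other pixel-by-pixel. A shadow
# reduces both bands by roughly the same percentage, so the ratio is
# largely insensitive to illumination differences.

# Row 0: vegetation (sunlit, shaded); row 1: soil (sunlit, shaded).
nir = np.array([[0.40, 0.20], [0.44, 0.22]])  # toy NIR reflectance
red = np.array([[0.10, 0.05], [0.30, 0.15]])  # toy red reflectance

# Guard against division by zero with a small floor value.
ratio = nir / np.maximum(red, 1e-6)

print(ratio)
# Sunlit and shaded vegetation both ratio to ~4.0; sunlit and shaded
# soil both to ~1.47, so cover types separate despite the shadowing.
```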
RATIOS AND VEGETATION INDICES
There are two reasons for using band ratioing:
1. Enhances certain aspects of the shape of spectral reflectance
curves of different surface cover types
2. Reduces undesirable effects resulting from variable illumination
(caused by variations in topography)
A shadow causes a decrease in the measured reflectance levels in any
band. However, one can compensate for changing illumination conditions by
using the fact that the percentage of brightness lost by each band is similar
→ suppression of brightness variation with ratio enhancement.
One of the most common spectral ratios used in studies of vegetation status
is the near-infrared to the equivalent red band value for each pixel location.
NDVI = (NIR − R) / (NIR + R)
Normalized Difference Vegetation Index (NDVI) is a ratio product that
exploits the fact that vigorous vegetation reflects strongly in the near
infrared and absorbs radiation in the red waveband.
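The NDVI formula applies per pixel. A minimal NumPy sketch; the reflectance values are invented for illustration (with real Landsat 8 data, NIR is band 5 and red is band 4):

```python
import numpy as np

# NDVI = (NIR - R) / (NIR + R), computed per pixel.

nir = np.array([[0.50, 0.45], [0.10, 0.30]])  # toy NIR reflectance
red = np.array([[0.08, 0.10], [0.09, 0.25]])  # toy red reflectance

ndvi = (nir - red) / (nir + red + 1e-10)  # small epsilon avoids 0/0

print(np.round(ndvi, 2))
# Vigorous vegetation (strong NIR, low red) approaches +1;
# bare soil and water sit near 0 or below.
```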
NORMALIZED DIFFERENCE VEGETATION INDEX
[Figure: spectral reflectance curves of healthy vegetation, dry vegetation,
and soil; reflectivity (%) versus wavelength from 0.4 to 1.1 µm, spanning
the red (R) and near-infrared (IR) regions]
Normalized Difference Vegetation Index: NDVI = (NIR − R) / (NIR + R)
Composite from 3 indices
• Landsat 8 OLI-TIRS
• Phoenix, AZ area
• Date: 2014-09-03
• Local Time 10:57:56
• Path: 36, Row: 37
• False color image
• R: Bare Soil Index (BI)
• G: Soil Adjusted Total Vegetation Index
(SATVI)
• B: Modified Normalized Difference Water
Index (MNDWI)
• Pan-sharpened
IMAGE CLASSIFICATIONS
A human analyst classifies features on an image by using elements of
visual interpretation to identify homogeneous groups of pixels which
represent land cover classes.
Digital image classification uses the spectral information
represented by values in one or more bands, and attempts to classify
each individual pixel based on this spectral information. This type of
classification is termed spectral pattern recognition.
The overall objective of image classification procedures is to
automatically categorize all pixels in an image into land cover classes.
SPECTRAL PATTERN RECOGNITION
TYPES OF CLASSIFICATIONS
Common classification procedures can be broken down into two broad
groups: supervised classification and unsupervised classification.
The supervised classification method includes two steps, namely
classification and identification of surface cover types.
In contrast, the process of clustering or unsupervised classification
does not require the definition of a set of categories in terms of which
the land surface is to be described. Clustering is used to determine the
number (but not initially the identity) of distinct land cover categories
present in the image, and to allocate pixels to these categories.
Identification of the clusters (or categories) in terms of the nature of the
land cover type is a separate stage that follows the clustering
procedure.
In all cases the spectral pattern (or signature) present within the data
for each pixel is used as the numerical basis for categorization =>
different features have different combinations of DNs based on their
spectral reflectance properties.
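The clustering step of unsupervised classification can be sketched with a toy k-means over simulated two-band pixel signatures (all values invented). Identifying what land cover each cluster represents remains the separate, analyst-driven step described above:

```python
import numpy as np

# Unsupervised classification sketch: cluster pixels by their spectral
# signatures with a tiny k-means, then label clusters afterwards.

rng = np.random.default_rng(0)

# Fake 2-band image: 100 "water" pixels and 100 "vegetation" pixels.
water = rng.normal([0.05, 0.02], 0.01, size=(100, 2))
veg   = rng.normal([0.08, 0.50], 0.02, size=(100, 2))
pixels = np.vstack([water, veg])  # (200, 2): each row is a spectral signature

def kmeans(x, k, iters=20):
    """Lloyd's algorithm: assign pixels to nearest center, recompute centers."""
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(pixels, k=2)

# The two clusters recover the two simulated cover types; deciding which
# cluster is "water" and which is "vegetation" is the identification stage.
print(np.bincount(labels))  # two clusters of ~100 pixels each
```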
Unsupervised versus Supervised
Supervised: identify classes → edit/evaluate signatures → classify image →
evaluate classification
Unsupervised: edit/evaluate signatures (from clustering) → classify image →
identify classes → evaluate classification