

A Report
On
Snow Cover Mapping in Indus basin
using Remote Sensing
Submitted By:
Vinay Kumar G

2011A4PS318H

Prepared in partial fulfilment of the
Practice School-I Course No. BITS C221
AT
National Institute of Hydrology, Roorkee
A Practice School-I station of
BIRLA INSTITUTE OF TECHNOLOGY & SCIENCE, PILANI
(JULY, 2013)


BIRLA INSTITUTE OF TECHNOLOGY & SCIENCE
PILANI (RAJASTHAN)
Practice School Division

Station: National Institute of Hydrology
Centre: Roorkee
Duration: From 22/05/13 to 13/07/13
Date of Submission: 13 July 2013
Title of the Project: Snow Cover Mapping in Indus Basin using Remote Sensing
Submitted By: Vinay Kumar G, B.E. Civil
Name of Expert: Dr. D. S. Rathore (Scientist F)
Name of PS Faculty: Dr. Chandra Shekhar
Project Area: Remote Sensing and GIS
Key Words: remote sensing; snow cover; watershed

Abstract:
The title of this project is "Snow Cover Mapping in Indus Basin using Remote Sensing". The aim of the project was to map the snow cover in the Indus basin on various dates and to analyse the data. This report briefly discusses remote sensing and how it works, explains how remotely sensed data are processed, and describes how watershed delineation and snow mapping of basins are done using GIS. It then discusses the snow cover of the Indus basin over a period of time, and concludes by analysing the snow cover of the Indus basin on six separate dates.

__________________                    __________________
Signature of Student                  Signature of PS Faculty

ACKNOWLEDGEMENT
I would like to thank all those who supported me during the Practice School at NIH. I thank Dr. D. S. Rathore and Dr. Tanveer Ahmed, who supervised my training at the National Institute of Hydrology. I would also like to thank Dr. S. K. Jain and Dr. V. C. Goyal, who mentored our training at the National Institute of Hydrology, and our instructor Dr. Chandra Shekhar and co-instructor Siddharth Arora, who invested their time and effort in us. Finally, I thank all my friends who helped me during the training programme.

Vinay Kumar Grandhi
B.E. (Hons.) Civil

TABLE OF CONTENTS

1. Introduction
   1.1 Remote Sensing
   1.2 Energy Interactions with the Atmosphere
   1.3 Data Acquisition
   1.4 Data Analysis
2. Data Sources, Software and Methodology
   2.1 Data Sources
   2.2 Software
   2.3 Methodology
       2.3.1 Watershed Delineation
       2.3.2 Elevation Zones
       2.3.3 Snow Mapping
       2.3.4 Zonal Statistics
3. Study Area
4. Results
   4.1 Watershed Delineation
   4.2 Elevation Zones
   4.3 Snow Mapping
5. Conclusion
References
Bibliography

LIST OF ILLUSTRATIONS
IMAGES
1. Components of remote sensing
2. Differences between Active and Passive remote sensing
3. Electro-magnetic spectrum
4. Energy interactions with atmosphere
5. Types of noise in remotely sensed data
6. Satellite image and its Fourier transformation image
7. Image classification using pixel values (spectral signal)
8. An illustration showing the flow direction evaluation process
9. Raster calculator used to reassign pixel values in flow accumulation raster
10. Model used for snow delineation in ERDAS IMAGINE
11. Indus basin
12. SRTM 250 DEM used for watershed delineation
13. SRTM 250 DEM along with Indus basin watershed
14. FILL raster of Indus basin
15. FLOW DIRECTION raster of Indus basin
16. FLOW ACCUMULATION raster of Indus basin
17. FLOW ACCUMULATION raster along with OUTLET point
18. Indus basin DEM with classified ELEVATION ZONES
19. Snow Cover raster on 09 March 07
20. Snow Cover raster on 18 April 07
21. Snow Cover raster on 04 May 07
22. Snow Cover raster on 05 June 07
23. Snow Cover raster on 07 July 07
24. Snow Cover raster on 08 August 07

TABLES
1. Zonal statistics of Indus basin
2. Zonal statistics of Snow Cover on Indus basin on 09 March 07
3. Zonal statistics of Snow Cover on Indus basin on 18 April 07
4. Zonal statistics of Snow Cover on Indus basin on 04 May 07
5. Zonal statistics of Snow Cover on Indus basin on 05 June 07
6. Zonal statistics of Snow Cover on Indus basin on 07 July 07
7. Zonal statistics of Snow Cover on Indus basin on 08 August 07
8. Combined Zonal statistics of Snow Cover on Indus basin on all dates

GRAPHS
1. Spectral reflectance graph of various features
2. Hypsometric graph of elevation zones of Indus basin
3. Snow Cover Comparison graph of elevation zones of Indus basin on all dates.


1. INTRODUCTION

1.1 Remote sensing


Remote sensing refers to techniques for collecting information about an object and its surroundings from a distance, without physically contacting them. For example, when we look at a location we employ remote sensing; when we hear something we employ remote sensing. Our eyes and ears act as sensors, and we collect data in the form of various energies, which are processed in our brain. Similarly, in remote sensing, scanners are used to acquire information about a particular object in the form of electromagnetic energy, and objects are then recognized from their known properties.
Viewed as a whole, remote sensing comprises five basic components:

Image 1; Source [1]

An energy source (the Sun)
Interaction of this energy with particles in the atmosphere (B)
Subsequent interaction with the ground target (C)
Energy recorded by a sensor as data (D)
Data displayed for interpretation and processing (F)

There are two major types of remote sensing.

Passive Remote Sensing:
Passive sensors detect naturally occurring radiation (reflected or emitted) from the terrain of interest, usually reflected sunlight or heat emitted by the earth.

Active Remote Sensing:
Active sensors detect reflected or backscattered radiation from the terrain of interest; here the energy is emitted artificially. RADAR is one such example: the time lag between emission and return is measured, and features such as height and distance are derived from it.

Image 2; Source [2]

Radiation Principles
Remote sensing can be done using various sensors to obtain information about the terrain of interest, but electromagnetic energy sensors are the most widely used, on both airborne and spaceborne platforms.
Electromagnetic energy is a form of energy of which visible light is only one part. It is classified into radio waves, microwaves, infrared, visible, ultraviolet, x-rays, gamma rays and cosmic rays depending on wavelength. All of these radiations travel at the velocity of light, follow wave theory, and are defined by wave characteristics such as wavelength, frequency and energy.

Image 3; Source [3]

The wavelengths of greatest interest in remote sensing are:

Visible: 0.4-0.7 µm
Near infrared: 0.7-1.3 µm
Mid infrared: 1.3-3 µm
Thermal infrared: 3-14 µm
Microwave: 1 mm - 1 m

1.2 Energy interactions in the atmosphere


Energy interactions in the atmosphere and with the objects of interest are the main reason why remote sensing with electromagnetic radiation is possible. Every object responds differently when electromagnetic radiation is incident on it, and using this we differentiate the various features of a terrain.
Electromagnetic energy incident on an object is absorbed, transmitted and reflected, in proportions that depend on the texture and nature of the terrain.
ABSORPTION occurs when radiation penetrates the body through its surface and the energy is taken up by its molecules. It may be emitted back in another form of energy; such emitted radiation is useful for thermal studies. Every object absorbs energy to some extent.
TRANSMISSION occurs when the radiation passes through the object without being absorbed, that is, without interacting with the body. Transmitted radiation is of least importance for remote sensing.
REFLECTION occurs when radiation is neither absorbed nor transmitted: the radiation is sent back into the atmosphere. Reflection depends on the texture and nature of the object's surface, so reflected radiation is used to differentiate between features on the terrain.
The geometric manner in which a body reflects is also important.
Specular reflectors are flat reflectors that produce mirror-like reflections; near-specular reflectors are not perfect mirrors but approach mirror-like behaviour.
Diffuse reflectors are rough surfaces that reflect uniformly in all directions; near-diffuse reflectors reflect in all directions, but not uniformly.
Image 4; Source [4]

Whenever electromagnetic radiation is incident on an object, some energy is absorbed, some is transmitted and some is reflected, however small each fraction may be:

E_I(λ) = E_R(λ) + E_A(λ) + E_T(λ)

where E_I is the incident energy, E_R the reflected energy, E_A the absorbed energy and E_T the transmitted energy, each a function of wavelength λ.
REFLECTANCE OF RADIATION is the property used to distinguish between features. It is the percentage of incident energy that a surface reflects back, and it is characteristic of an object, although an object may show a different reflectance after a physical or chemical change. Reflectance is not the same as reflection.
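Written out, this definition uses the energy terms of the balance equation above:

$$\rho(\lambda) = \frac{E_R(\lambda)}{E_I(\lambda)} \times 100\%$$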
Measured reflectance is not always the same value, because of energy interactions with the atmosphere. Whatever the kind of radiation, to travel from the source to the terrain of interest and from there to the sensor it must pass through the atmosphere, which always interacts with the radiation and may change it. The effect depends on the path length, signal strength, wavelength and other factors. The effects caused by these interactions are due to:

1) SCATTERING
2) ABSORPTION

SCATTERING is the unpredictable diffusion of radiation by particles in the atmosphere.

Rayleigh scatter is common when radiation interacts with atmospheric molecules much smaller than its wavelength. The effect is inversely proportional to the fourth power of the wavelength. This scatter is dominant at elevations of more than 9 to 10 km; it is the main cause of the blue sky and also causes haze in images. Mie scatter occurs when radiation interacts with particles whose diameters are about equal to the wavelength of the energy beam; it tends to influence longer wavelengths more. Non-selective scatter occurs when the energy beam is incident upon particles with diameters larger than its wavelength.
ABSORPTION results in an effective loss of energy to the atmosphere. It happens because selective bands of energy are absorbed by atmospheric constituents such as gases, water vapour and ozone. Wavelength ranges that are transmitted by the atmosphere are called atmospheric windows, so remote sensing has to be done by sensing only through these windows.

Graph 1; Source: [5]

In the above graph we can observe the characteristic spectral signatures of some features, such as water, soil and vegetation. Although termed signatures, these curves cannot be considered unique, because of atmospheric interactions and spatial and temporal effects. As discussed above, atmospheric interactions affect the signals, which therefore change with the atmosphere and climate; hence a spectral signature is never unique. Temporal effects appear as climatic changes: clouds can affect the signal, and reflectance can differ on a rainy day when the soil is moist. Spatial effects are factors that cause the same type of feature, at a given point in time, to vary between geographical locations, such as climate, soil type and land-use practices at those locations.

1.3 DATA ACQUISITION


Remote sensing can be done with either passive or active sensors. As discussed above, passive remote sensing relies on natural energy sources while active remote sensing uses man-made energy sources. Electromagnetic radiation emitted from the source interacts with the atmosphere and with the features of interest; combined, these factors produce the energy signals from which we extract information. Detection of these signals can be done in two different ways:

Photographically

Electronically

Photographic processes use chemical reactions on the surface of a photographic film. These methods are relatively simple and cheap, and they provide a high degree of spatial detail and geometric integrity. The film acts as both the detecting and the recording medium.
Electronic sensors generate an electronic signal corresponding to the energy variations in the original scene. They have advantages over photographic methods: a broader spectral range of sensitivity, improved calibration potential, and the ability to transmit data electronically. They are, however, not as cheap and simple as photographic methods. Electronic sensors record data on magnetic tapes, which are later converted into photographs as required; here the film acts only as a recording medium.

Analog image digitization:


The data obtained are further analysed digitally. Data acquired photographically are first converted into digital format and then analysed. Analog image digitization can be done by:

Optical mechanical scanning

Linear or area array photodiode (or) charge coupled device (ccd) digitization

Video digitization

Storage of digital data:


Popular formats for storing digital data are

Band sequential (BSQ)

Band interleaved by line (BIL)

Band interleaved by pixel (BIP)

Run-length encoding

1.4 DATA ANALYSIS


The analysis of remotely sensed data is performed using a variety of image processing techniques, including:

Analog (visual) image processing of hard-copy data

Applying digital image processing algorithms to digital data

Analog Image Processing:


Most of the fundamental elements of image interpretation, such as size, shape, shadow, tone or colour, texture, site and association, are used in analog image analysis.

Digital Image Processing:

Digital image processing involves developing and rectifying images with the aid of a computer. Its fundamental operations are image rectification or restoration, image enhancement and image classification. Digital analysis depends mostly on the colour and tone of individual pixels. Digital images are processed by computers on the basis of equations, producing further images, tabular values and the like.
Image rectification or restoration:
Image restoration and rectification techniques correct the distortion, degradation and noise introduced during the imaging process, through both radiometric and geometric corrections. To correct the data, internal and external errors must be detected: internal errors arise from the sensors or technical failures, while external errors, such as atmospheric interactions, distort the signals. These processes are usually called pre-processing of the digital imagery.
Geometric Correction:
Sometimes images are so distorted geometrically that they cannot be used for processing. This may be caused by factors such as the altitude and velocity of the sensor, the curvature of the earth, the earth's rotation (often a cause of panoramic distortion), atmospheric refraction and relief displacement. Geometric corrections rectify these distortions to the extent that the images can be used again.
Systematic distortions can be rectified by developing a mathematical model of the source of distortion and applying the corresponding transformations.
Random distortions are usually corrected by geo-referencing ground control points on the map to their coordinates. An equation is developed by least-squares regression, and the image is then resampled.
Radiometric corrections
Radiometric corrections rectify errors caused by changes in scene illumination, viewing geometry, atmospheric conditions and the like. Viewing-geometry corrections are needed more in airborne than in spaceborne remote sensing.
Noise removal:

Image noise is any unwanted disturbance in image data caused by limitations in the sensing, signal digitization, or data recording process. The potential sources of noise range from periodic drift or malfunction of a detector, to electronic interference between sensor components, to intermittent hiccups in the data transmission and recording sequence. Noise can either degrade or totally mask the true radiometric information content of a digital image.

Image 5; Source [6]

Image Enhancement:
Image enhancement algorithms are applied to an image to increase the interpretability and visual quality of the image data. Enhancement always depends on the requirements of the user; there is no single ideal enhancement. Enhancement techniques are used to make processing easier, to reduce the complexity of the image, and to discard unwanted information.
Image enhancement techniques can be classified as

Point operations: modify the brightness value of each pixel independently.

Local operations: modify the brightness value of a pixel based on its neighbouring pixels.

Both kinds of operation can be applied to any kind of imagery. Enhancement is done after the image restoration process and before the image classification process.
Most commonly used image enhancement techniques are

Contrast manipulation: gray-level thresholding, level slicing and contrast stretching.

Spatial feature manipulation: spatial filtering, Fourier analysis, edge-enhancement.

Multi-image manipulation: multispectral band ratioing and differencing, principal components, canonical components, intensity-hue-saturation (IHS) colour space transformations and de-correlation stretching.


Contrast manipulation:

Gray-level thresholding
It is used to classify an image into two classes:
one for all pixels with gray level greater than a user-defined value, and
one for all pixels with gray level less than that value.
Thresholding is usually used to build binary masks, which are then used to operate on each class of the image separately without affecting the other.
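As a minimal sketch of gray-level thresholding (Python/NumPy used purely as an illustration; the image array and the threshold of 128 are assumptions, not values from the text):

```python
import numpy as np

def threshold_mask(image, threshold):
    """Return a binary mask: 1 where the gray level exceeds the
    user-defined threshold, 0 elsewhere."""
    return (image > threshold).astype(np.uint8)

# Example: a small 8-bit image and an arbitrary threshold of 128.
image = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
mask = threshold_mask(image, 128)
```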

Level slicing
Level slicing is a technique in which the DNs distributed along the x-axis of an image histogram are divided into a series of user-defined intervals, or slices. All the DNs in the same interval are assigned a single DN, and each level can also be shown as a single colour.
Level slicing is used extensively in the display of thermal infrared images to show discrete temperature ranges coded by gray level or colour.

Contrast stretching
Contrast stretching is a technique in which a particular range of gray levels is stretched over the complete gray scale.
Example: consider an image with gray levels varying from 80 to 190. These values are stretched from 80-190 to 0-255, which increases the interpretability of the image and makes all features more distinguishable.
This stretching can be done on any basis, such as linear stretching or stretching based on frequency, and some values can also be omitted.
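The 80-to-190 example above can be written as a short sketch (Python/NumPy assumed as the illustration language; the range values come from the example):

```python
import numpy as np

def linear_stretch(image, in_min=80, in_max=190):
    """Linearly stretch gray levels from [in_min, in_max] to the full
    0-255 range, clipping values that fall outside the input range."""
    scaled = (image.astype(np.float64) - in_min) / (in_max - in_min)
    return np.clip(scaled * 255, 0, 255).astype(np.uint8)
```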
Spatial feature manipulation:

Spatial filtering
Spatial filtering is a local operation. Spatial filters emphasize or de-emphasize various spatial frequencies of an image. Spatial frequency refers to the roughness of the tonal variations in an image: if the gray levels of pixels change abruptly over a small area, the area has rough tonal variation, or high spatial frequency, and vice versa.
Low-pass filters emphasize low-frequency detail and de-emphasize high-frequency detail, while high-pass filters emphasize high-frequency detail and de-emphasize low-frequency detail. Low-pass filters can be used to reduce random noise.
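As a sketch of the low-pass/high-pass distinction (Python with SciPy assumed; the 3x3 window size and the random stand-in image are arbitrary choices):

```python
import numpy as np
from scipy import ndimage

image = np.random.rand(64, 64)  # stand-in for a single-band image

# Low-pass: a 3x3 moving average de-emphasizes high-frequency detail,
# which also suppresses random noise.
low_pass = ndimage.uniform_filter(image, size=3)

# High-pass: subtracting the low-pass result leaves the high-frequency
# detail (edges, fine texture).
high_pass = image - low_pass
```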

Edge enhancement
Edge enhancement delineates the edges of the shapes and details of an image, making them more conspicuous and easier to interpret. Edges may be enhanced using linear or non-linear edge enhancement.
Linear edge enhancement applies a directional first-difference algorithm that approximates the first derivative between two adjacent pixels. Edge smoothness or roughness depends on the size of the operating kernel: the larger the kernel, the smoother the edge.
Non-linear edge enhancements use non-linear combinations of pixels, with algorithms applied using kernels of various sizes. Sobel's and Roberts' edge detectors are examples of non-linear edge-enhancement operators.

Fourier analysis
Fourier analysis is a mathematical technique for separating an image into its various spatial frequency components. Fourier magnitude images are symmetric about the centre, and the intensity at the centre represents the magnitude of the lowest-frequency component.

Image 6; Source [7]

Fourier transforms are mainly used to remove noise from images. They are also used to apply filters: a low-pass or high-pass filter is applied to the Fourier-transform image, which is then converted back into the original image.
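A minimal sketch of Fourier-domain filtering as just described (Python/NumPy assumed; the cut-off radius is an arbitrary parameter):

```python
import numpy as np

def fourier_low_pass(image, keep_radius):
    """Apply a low-pass filter in the Fourier domain: zero out all
    frequency components farther than keep_radius from the centre of
    the shifted spectrum, then transform back to the original image."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.hypot(y - rows / 2, x - cols / 2)
    spectrum[dist > keep_radius] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
```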
Multi image manipulation:

Spectral ratioing
Ratio images are obtained by dividing the DNs of one spectral band by the corresponding values in another spectral band. A great advantage of ratioing is that variations caused by scene illumination are avoided and noise can be reduced. By ratioing the right pair of bands we can obtain valuable information that is difficult to extract from single-band data.
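As a sketch of band ratioing (Python/NumPy assumed; the zero-guard is an implementation detail, not part of the text):

```python
import numpy as np

def band_ratio(band_a, band_b):
    """Divide the DNs of one spectral band by the corresponding DNs of
    another, guarding against division by zero."""
    a = band_a.astype(np.float64)
    b = band_b.astype(np.float64)
    return np.divide(a, b, out=np.zeros_like(a), where=b != 0)
```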

Principal and canonical components


Extensive inter-band correlation is a problem in the analysis of multispectral data: different bands of digital imagery sometimes carry similar data, so it is not necessary to use all of them. Principal and canonical component analyses are techniques that reduce the dimensionality of the digital data. They compress the data contained in n bands into n new bands, generated in such a way that a single band can carry about 90% of the information. The method can be viewed as a transformation of the coordinate axes.

Image classification:
The classification process categorizes all pixels in an image into several classes, features or themes depending on the spectral pattern of each pixel, evaluated numerically. This process is usually applied to multispectral data.
Image classification is of two major types: supervised classification and unsupervised classification.
Supervised classification


With supervised classification, we identify examples of the information classes (i.e. land cover types) of interest in the image; these are called training sites. The image processing software is then used to develop a statistical characterization of the reflectance of each information class. This stage is often called signature analysis, and the characterization may be as simple as the mean or range of reflectance in each band, or as complex as detailed analyses of the means, variances and covariances over all bands. Once a statistical characterization has been achieved for each information class, the image is classified by examining the reflectance of each pixel and deciding which signature it resembles most.

Image 7; Source [8]

Maximum likelihood classification


Maximum likelihood classification is a statistical decision criterion to assist in the classification of overlapping signatures: pixels are assigned to the class of highest probability.
The maximum likelihood classifier is considered to give more "accurate" results than the parallelepiped classifier, though it is much slower due to the extra computation. The word "accurate" is in quotes because this assumes that the classes in the input data have Gaussian distributions and that the signatures were well selected, which is not always a safe assumption.


Minimum distance classification


Minimum distance classification classifies image data using a set of up to 256 possible class-signature segments, as specified by the signature parameter. Each segment stores signature data pertaining to a particular class, and only the mean vector of each class signature segment is used; other data, such as standard deviations and covariance matrices, are ignored (the maximum likelihood classifier, by contrast, uses them).
The result of the classification is a theme map directed to a specified database image channel. A theme map encodes each class with a unique gray level, which is specified when the class signature is created. If the theme map is later transferred to the display, a pseudo-colour table should be loaded so that each class is represented by a different colour.
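A minimal sketch of the nearest-mean decision rule described above (Python/NumPy assumed; the toy pixel values and class means are invented for illustration):

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel (a row of band values) to the class whose mean
    vector is nearest in Euclidean distance; standard deviations and
    covariances are ignored, as described above."""
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :],
                           axis=2)          # shape: (n_pixels, n_classes)
    return np.argmin(dists, axis=1)

# Toy example: five 3-band pixels against two class signatures.
pixels = np.array([[10, 20, 30], [200, 210, 220], [15, 25, 35],
                   [190, 205, 215], [12, 22, 32]], dtype=float)
class_means = np.array([[12, 22, 32], [195, 208, 218]], dtype=float)
labels = minimum_distance_classify(pixels, class_means)  # [0 1 0 1 0]
```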

Parallelepiped classification
The parallelepiped classifier uses the class limits stored in each class signature to determine whether a given pixel falls within the class. The class limits specify the dimensions (in standard deviation units) of each side of a parallelepiped surrounding the mean of the class in feature space.
If the pixel falls inside the parallelepiped, it is assigned to that class. If the pixel falls within more than one class, it is put in the overlap class (code 255); if it does not fall inside any class, it is assigned to the null class (code 0).
The parallelepiped classifier is typically used when speed is required. The drawback is (in many cases) poor accuracy and a large number of pixels classified as ties (overlap, class 255).

Unsupervised classification
Unsupervised classification is a method that examines a large number of unknown pixels and divides them into a number of classes based on natural groupings present in the image values. Unlike supervised classification, unsupervised classification does not require analyst-specified training data. The basic premise is that values within a given cover type should be close together in the measurement space (i.e. have similar gray levels), whereas data in different classes should be comparatively well separated (i.e. have very different gray levels).
The classes that result from unsupervised classification are spectral classes based on natural groupings of the image values. The identity of each spectral class is not initially known; the classified data must be compared with some form of reference data (such as larger-scale imagery, maps, or site visits) to determine the identity and informational value of the spectral classes. Thus, in the supervised approach we define useful information categories and then examine their spectral separability; in the unsupervised approach the computer determines spectrally separable classes, and we then define their informational value.
Unsupervised classification is becoming increasingly popular in agencies involved in long-term GIS database maintenance, because there are now clustering procedures that are extremely fast and require few operational parameters. It is thus becoming possible for GIS analysts with only a general familiarity with remote sensing to undertake classifications that meet typical map accuracy standards. With suitable ground-truth accuracy assessment procedures, this tool can provide a remarkably rapid means of producing quality land cover data on a continuing basis.


2. DATA SOURCES, SOFTWARE AND METHODOLOGY

2.1 Data Sources


SRTM 250:
NASA Shuttle Radar Topography Mission (SRTM) data were acquired by radar on board an 11-day Space Shuttle mission in February 2000. The data are available at 250 m resolution for nearly 80% of the earth's surface, distributed freely in 5° × 5° tiles. The data have a few gaps in high mountains, deserts, water bodies, etc., which are filled through processing in GIS and made available by the Consultative Group on International Agricultural Research - Consortium for Spatial Information (CGIAR-CSI). A derived elevation DEM has also recently been made available at 250 m.

MODIS:
The MOD09A1 product provides an 8-day composite of surface reflectance in bands 1-7 at 500 m resolution from the MODIS sensor. It is a gridded level-3 product in the sinusoidal projection, derived by selecting one observation from the MODIS daily L2G products over each 8-day period; the selection is based on several factors, e.g. maximum observation coverage, low view angle, absence of cloud or cloud shadow, and aerosol loading. The accompanying data are quality assessment, day of observation, solar azimuth, and view and zenith angles. The version-5 product is a validated stage-2 product: it has been validated spatially and temporally through ground truth, and the data are recommended for scientific use. Data are available in 10° × 10° tiles in HDF-EOS format; each file is 2400 × 2400 pixels.


2.2 Software


Arc-GIS:
Esri's ArcGIS is a geographic information system (GIS) for working with maps and geographic
information. It is used for: creating and using maps; compiling geographic data; analyzing
mapped information; sharing and discovering geographic information; using maps and
geographic information in a range of applications; and managing geographic information in a
database. The system provides an infrastructure for making maps and geographic information
available throughout an organization, across a community, and openly on the Web.
ArcGIS desktop consists of several integrated applications including ArcMap, ArcToolbox,
ArcCatalog, and ArcGlobe.

ArcCatalog - used to organize and manage your GIS data. It also allows you to preview
datasets and view and manage metadata.

ArcMap - used to view, edit, and analyse spatial data and create maps.

ArcScene - provides the interface for viewing multiple layers of 3D data, draping 2D data over 3D surfaces, and creating and analysing 3D surfaces.

ArcToolbox - a component of ArcCatalog, ArcMap and ArcScene; it contains tools for geo-processing, data conversion, and defining map projections.

MODIS reprojection tool:


The tool is developed for use with higher-level MODIS raster (gridded) data products, e.g. MOD09A1, available in HDF-EOS format with sinusoidal projection. The software provides mosaicking, subsetting (spatial and spectral), reprojection and format-conversion functionality. Both command-line and GUI interfaces are available, on multiple platforms. Output formats supported are raw binary, HDF-EOS and GeoTIFF. Input data of 8-, 16- and 32-bit signed/unsigned integer and 32-bit float are supported; the output data type is the same as the input. Several projections are supported, including LCC, UTM, Albers equal area and geographic. Available resampling methods are nearest neighbour, bilinear and cubic convolution. When multiple input files are selected, they are mosaicked. Using the band selector, any number of bands may be moved to the output list. In GeoTIFF output format, a separate output file is created for each band. Limited support for datum conversion is also available. Spatial subsetting is done using diagonally opposite corner points. If no output pixel size is specified, the default pixel size is taken. Default fill values are taken from the bands of the MODIS data products.

ERDAS IMAGINE:
ERDAS IMAGINE is the raster-centric software GIS professionals use to extract information from satellite and aerial images. Because it is easy to use and easy to learn, it suits beginners and experts alike, and its vast array of tools, which let users analyse data from almost any source and present it in formats ranging from printed maps to 3D models, makes it a comprehensive toolbox for geographic imaging and image processing.
ERDAS IMAGINE is aimed primarily at geospatial raster data processing. It allows the user to prepare, display and enhance digital images for mapping use in a geographic information system (GIS) or in computer-aided design (CADD) software, and to perform numerous operations on an image in order to answer specific geographical questions.
By manipulating imagery data values and positions, it is possible to see features that would not normally be visible and to locate the geo-positions of features. The level of brightness, or reflectance of light, from the surfaces in the image can help with vegetation analysis, prospecting for minerals and so on. Other uses include linear feature extraction, generation of processing workflows ("spatial models" in ERDAS IMAGINE), import/export of data in a wide variety of formats, ortho-rectification, mosaicking of imagery, and stereo and automatic feature extraction of map data from imagery.

2.3 Methodology
2.3.1 Delineating watershed:
Open a new, blank map document in ArcMap and use the Add Data button to add the digital elevation model (DEM) you will be using to delineate your watersheds.

1. Create a depressionless DEM:

The Fill tool in the Hydrology toolbox is used to remove any imperfections (sinks) in the
digital elevation model. A sink is a cell that does not have a defined drainage value
associated with it. Drainage values indicate the direction that water will flow out of the cell,
and are assigned when creating a flow direction grid for the landscape. The resulting drainage
network depends on finding the 'flow path' of every cell in the grid, so it is important that the
fill step be performed prior to creating a flow direction grid.
Double-click the Fill tool to open its dialog.
The Input surface raster is the DEM grid.
Leave the Z limit blank and click OK to run the tool. Note that this process is CPU intensive,
and may take quite some time depending on the processing power of your workstation.
Once the fill process is complete, a new grid will be added to the data frame. There should be
a difference in the lowest elevation value between the original DEM and the filled DEM.
Remove the original DEM layer from the map (right-click > Remove).

2. Create Flow Direction:


A flow direction grid assigns to each cell a value indicating the direction of flow, that is, the direction in which water will flow from that particular cell. This is an extremely important step in hydrological modelling, as the direction of flow determines the ultimate destination of the water flowing across the surface of the landscape.
Flow direction grids are created using the Flow Direction tool. For every 3x3 cell neighbourhood, the grid processor finds the lowest neighbouring cell from the centre. Each number in the matrix below corresponds to a flow direction: if the centre cell flows due north, its value will be 64; if it flows northeast, its value will be 128, and so on. These numbers have no numeric meaning; they are simply coded directional values indicating the steepest descent based on elevation.

Flow direction matrix; a cell that flows due north is coded 64.


Image 8; Source: [9]

Double-click the Flow Direction tool to open it. The Input surface raster should be set to the
filled DEM. The Output flow direction raster should once again default to your working
directory. Open the Environment Settings using the Environments button and confirm that the
Raster Analysis > Cell Size is set to the same as your filled DEM.
Click OK to run the tool. This process will take some time to complete, and once it has run a
new flow direction raster will be added.
3. Create Flow Accumulation:
The Flow Accumulation tool calculates the flow into each cell by accumulating the cells that
flow into each downslope cell. In other words, each cell's flow accumulation value is
determined by calculating the number of upstream cells that flow into it.
Double-click the Flow Accumulation tool to open it.
The Input flow direction raster should be set to the flow direction grid created in Step 2.
The Output accumulation raster will default to your working directory.
Accept all other defaults, check the Environment Settings to ensure that the Raster Analysis
> Cell Size property is set to the same as your filled DEM, and click OK to run the tool. This
process may take quite some time to complete.
The new flow accumulation raster will be added to your map. Each cell in the grid contains a
value that represents the number of cells upstream from that particular cell. Cells with higher
flow accumulation values should be located in areas of lower elevation, such as in valleys or
drainage channels.


It is very likely that the flow accumulation grid will appear dark and uninformative when first added to the map. This can be fixed by simplifying the raster: use the Raster Calculator tool (Spatial Analyst > Map Algebra) with the Set Null function to reassign the pixel values of the flow accumulation raster. The syntax is SetNull("flow_accumulation_raster" < threshold, 1): every cell below the chosen threshold becomes NoData and every cell at or above it is set to 1, leaving only the high-accumulation flow paths visible.

Image 9
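The same simplification can be scripted. A minimal arcpy sketch (ArcGIS with the Spatial Analyst extension assumed; the raster name and the threshold of 10000 are placeholders):

```python
import arcpy
from arcpy.sa import Raster, SetNull

arcpy.CheckOutExtension("Spatial")

flow_acc = Raster("flow_accumulation")   # placeholder raster name
# Cells below the threshold become NoData; all remaining cells are set
# to 1, leaving only the high-accumulation flow paths visible.
stream_net = SetNull(flow_acc < 10000, 1)
stream_net.save("flow_acc_simplified")
```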

Each cell has an outlet point called a pour point that indicates the location where water
would flow out of the cell. Pour points must be located in cells of high cumulative flow or
the watersheds you delineate in the steps below will be very small.
4. Create outlet pour points:
The placement of pour points is an important step in watershed delineation. A pour point
should exist within an area of high flow accumulation because it is used to calculate the total
contributing water flow to that given point. In many cases you will already have a shape file
containing the locations of your pour points, whether they are sampling sites, hydrometric


stations, or another data source. However, it is also possible to create pour points yourself.
The instructions below include both procedures.
Creating pour points through visual inspection:
Open the ArcCatalog window. Right-click on your working directory and select New > Shape File. Create a new point shape file, give it a descriptive name and apply the appropriate projection information (the coordinate system should be the same as that of the DEM or flow direction grid you will be using). Click OK. The new, empty point layer will be added to your map.
Zoom in to your area of interest so that you are able to see the individual flow accumulation cells. Use the Identify tool to examine the values of the flow accumulation grid. The

chosen pour point cell should be a natural outlet for the streams flowing above it and must be
on the high flow accumulation path. Your choice essentially determines the end of your
catchment; everything upstream from this point will define a single watershed.
To add a pour point, open the Editor Toolbar (Customize > Toolbars > Editor) and choose
Editor > Start Editing.

If necessary, in the Start Editing dialog, highlight the empty pour point layer and click OK.
The Create Features window will open. Highlight the pour point shape file and then move
your cursor onto your map. Add a pour point by clicking in the centre of the high flow
accumulation cell you have chosen as your outlet point. Try to place points in the centre of
the cells. Also remember to place the points 1 or 2 cells away from stream confluences.
If you are defining only one watershed then save your edits, stop the editing session and
move on to Step 5.
If you are creating more than one watershed, add a pour point for each watershed then save
your edits and exit the editing session. Open the attribute table for the layer by right-clicking
the layer name and selecting Open Attribute Table. Click the Table Options icon and select Add Field. Create a field of type Integer, precision 0, and call it UNIQUEID. Start another editing session and enter an ID number for each individual pour point (1, 2, 3, and so on). Stop editing and choose to save your edits. Watersheds are delineated based on unique
identification numbers, so this step ensures that a separate watershed will be delineated for
each individual pour point.

Loading pour points from an existing file:


In many cases you will already have a shape file that indicates the locations of hydrometric
gauging stations or watershed outlet points. If this is the case, add your point file to ArcMap.
Zoom in to each point to determine if it falls on the path of high flow accumulation. As
mentioned, if the pour points are not situated in cells of high flow accumulation then the
resultant watersheds will be very small. The Snap Pour Point tool used in the next step will
attempt to snap the pour points to the closest area of high flow accumulation.
5. Snap pour point:
Select Geoprocessing > Environments and set the Processing Extent and Raster Analysis >
Cell Size properties to the same as your flow accumulation grid (or the wfg layer included
with the Enhanced Flow Direction grid).
The Snap Pour Point tool accomplishes two things; it snaps the pour point(s) created or
loaded in the previous step to the closest area of high flow accumulation, and it converts the
pour points to the raster format needed for input to the Watershed tool.
Double-click the Snap Pour Point tool to open it.
Click Environments > Raster Analysis Settings > Cell Size and ensure that the cell size is set
to the same as your flow accumulation layer. Click OK.
The Input raster or feature pour point data is the pour point layer created in Step 4.
The Pour Point Field is the unique ID field created in Step 4; this is only applicable if you are creating more than one watershed.
The Input accumulation raster is your flow accumulation layer.
The Output raster will default to your working directory.
The Snap distance is the specified distance (in map units) that the tool will use to search
around your pour points for the cell of highest accumulated flow. The snap distance should
be based on the resolution of your data and may require some trial and error to determine the best value. If your pour point is not located on the high-flow path, the tool will move it to the cell within the search radius with the highest accumulated value; if it is already on the high-flow path, the tool may move it to a downstream cell. Take care that the outlet point always stays on the path of high flow accumulation (pixels with value 1, as produced in the raster calculator step). Points that were imported or placed by visual inspection must remain at the same location, so it is suggested that the snap distance be set to 0.
6. Delineating Watershed:
Double-click the Watershed tool to open it (ArcToolbox > Spatial Analyst Tools > Hydrology).
The Input flow direction raster is the flow direction raster created in Step 2, or the enhanced flow direction layer from the OMNR.
The Input raster or feature pour point data is the raster pour point output from the Snap Pour Point tool in Step 5.
The Pour point field can be left as the default, or optionally you may enter the unique ID field created in Step 4.
The Output Raster will default to your working directory.
Click OK to run the tool.
When complete, the new watershed raster(s) will be added to your map.
7. Watershed raster to polygon:
You can convert the watershed raster to a polygon shape file for area calculations or to clip other data sets to the watershed boundary. Use the Raster to Polygon tool (ArcToolbox > Conversion Tools > From Raster).
Double-click the Raster to Polygon tool to open it.
The Input raster is the watershed raster file created in Step 6 above.
The Output polygon features will default to your working directory.
Leave all other defaults and click OK to run the tool. The new polygon shape file will be
added as a layer to your map.
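Steps 1 to 7 can also be scripted instead of run from the dialogs. The following is a minimal arcpy sketch of the same sequence, assuming ArcGIS with the Spatial Analyst extension; the workspace path and file names are placeholders, and the UNIQUEID field follows the steps above:

```python
import arcpy
from arcpy.sa import (Fill, FlowDirection, FlowAccumulation,
                      SnapPourPoint, Watershed)

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\work\indus"   # assumed working directory

filled = Fill("srtm_dem")                        # Step 1: depressionless DEM
flow_dir = FlowDirection(filled)                 # Step 2: flow direction
flow_acc = FlowAccumulation(flow_dir)            # Step 3: flow accumulation

# Step 5: snap pour points to the cell of highest accumulation within
# the snap distance (0 keeps imported points in place, as advised above).
pour_pts = SnapPourPoint("pour_points.shp", flow_acc, 0, "UNIQUEID")

basin = Watershed(flow_dir, pour_pts, "Value")   # Step 6: delineate watershed
basin.save("indus_watershed")

# Step 7: convert the watershed raster to a polygon shape file.
arcpy.RasterToPolygon_conversion(basin, "indus_watershed.shp", "NO_SIMPLIFY")
```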


2.3.2 Elevation Zones:


Any raster DEM can be classified into zones on the basis of elevation. Since the DEM contains the elevation of every pixel, the lowest and highest points of the study area can be found from the properties of the DEM. After finding the highest and lowest elevations of the study area, classify the elevations depending on the requirement: if 1000 m wide zones are considered, the zones are 0-1000, 1001-2000, 2001-3000, and so on.
A DEM can be classified into elevation zones using the Reclassify tool in ArcMap.
1. Add the DEM using add layer option in ArcMap.
2. Find the lowest and highest elevations of the DEM from the properties.
3. Choose the width of the zones into which you want to classify the DEM.
4. Elevation zones can be created using any tool found in the reclass option. (Arc Toolbox>
Spatial analyst> Reclass)
5. Reclass by ASCII file requires the classification data in a specified format (syntax) in any
ASCII editable extension. Then that file is to be provided as input so that classification can
be done.
6. Reclassify tool can be used to create elevation zones.
7. The input raster is the DEM file that you wish to reclassify.
8. The reclass field is the data on whose basis the classification is to be done. Enter the attribute
field that contains the elevation data in this field.
9. After inputting the above data, click on classify next to the reclass table field. Change the
method to defined interval and enter the elevation zone width in the interval field. Press ok.
Now change the first value in the table to zero.
10. Enter the new values that you wish in their respective fields.
11. Enter the location where you wish the reclassified data to be saved.
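As a sketch, the same reclassification can be scripted with arcpy (Spatial Analyst assumed; the 1000 m zone limits follow the example above, and the file names are placeholders):

```python
import arcpy
from arcpy.sa import Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")

# Each [lower, upper, new_value] row maps an elevation range to a zone.
remap = RemapRange([[0, 1000, 1], [1000, 2000, 2], [2000, 3000, 3],
                    [3000, 4000, 4], [4000, 5000, 5], [5000, 6000, 6],
                    [6000, 7000, 7], [7000, 8000, 8], [8000, 9000, 9]])

zones = Reclassify("srtm_dem", "VALUE", remap)
zones.save("elevation_zones")
```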

2.3.3 Snow Mapping:


The data required for snow mapping can be obtained from MODIS, as mentioned above. Using the MODIS reprojection tool, the required bands from the tiles covering the study area can be mosaicked, resampled and projected in one step.


1. Input all the files (.hdf) that comprise the snow data of the study area in the input field.
2. Select the bands 2, 4, 6 which are required for snow mapping and exclude the rest of the
bands.
3. Specify an output location for the files to be produced.
4. MODIS reprojection tool provides some basic projections. Select a projection depending on
the requirement. Enter the data required for reprojecting the data files. Set the Datum
according to your requirement.
5. Set the resampling to nearest neighbour or others depending on the requirements.
6. Now run the program and the files are mosaicked and output is produced for each band
separately.
After bands 2, 4 and 6 of the MODIS data have been mosaicked and reprojected, they are processed in ERDAS IMAGINE to delineate the snow.
1. Start ERDAS IMAGINE.
2. Go to interpreter tab> utilities> layer stack.

3. Input the band 2 file in the input file field and click the Add button. Repeat for the band 4 file and the band 6 file. After adding all three band layers, toggle on the "Ignore zero in stats" option.
4. Specify the output location in output file field and click ok.
5. After the three bands are stacked into a single image, use the modeler to delineate snow.

6. Create a model that delineates snow using modeler> model maker.


7. In the model shown below all the circle blocks contain the functions by which the data files
are processed.
8. The zigzag polygonal blocks indicate either input or output.
9. The model prompts for an input file when run. Input the output file that has been obtained
from stacking bands 2, 4, 6.


10. This data is processed by $n1_PROMPT_USER(2)-$n1_PROMPT_USER(3) and $n1_PROMPT_USER(2); the outputs of these operations are provided as input to n4_memory and n6_memory. This data is then processed by an EITHER-IF function whose output feeds n10_memory, and the data in n10_memory is processed by another EITHER-IF function whose output is written to n8_memory.

Image 10

11. The output obtained is the final delineated snow map.


12. This map is then overlaid with the DEM and the elevation-zone file to compute values such as the snow cover in each zone.
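For reference, snow-delineation models of this kind typically apply a normalized difference snow index (NDSI) test to bands 4 and 6, with a band-2 check to exclude water. Below is a minimal sketch (Python/NumPy; the 0.4 and 0.11 thresholds are the widely published SNOMAP values and are assumptions here, not values read from the model above):

```python
import numpy as np

def ndsi_snow_map(band2, band4, band6):
    """Flag snow pixels from MODIS surface reflectance bands.
    NDSI = (band4 - band6) / (band4 + band6); a pixel is mapped as snow
    when NDSI > 0.4 and the band-2 (NIR) reflectance > 0.11 (assumed
    SNOMAP thresholds)."""
    b4 = band4.astype(np.float64)
    b6 = band6.astype(np.float64)
    denom = b4 + b6
    ndsi = np.divide(b4 - b6, denom, out=np.zeros_like(b4),
                     where=denom != 0)
    return ((ndsi > 0.4) & (band2 > 0.11)).astype(np.uint8)
```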

2.3.4 Zonal Statistics:


Statistics of every elevation zone can be calculated using tools in ArcMap. Zonal statistics can be used to compare zones among themselves or to find the specifics of a zone, such as its median elevation or the area it occupies, which can later be analysed using graphs and other means.
Zonal statistics can be calculated using zonal statistics (Arc toolbox> Spatial Analyst tools>
Zonal)
1. Input the file containing zone data (output file obtained by reclassification) in input raster or
feature zone data.
2. In the zone field input the attribute value on whose basis the zones have been classified or the
field that defines the zones.
3. In the input value raster input the raster file whose values you wish to calculate statistics for.
4. Specify the output location where you wish to save the statistics table.
5. Specify the type of statistics you require in specify statistics field or you can calculate all the
types by selecting all.
These statistics are generated in database (.dbf) format and can be exported to an Excel file if required.
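A minimal arcpy sketch of the same operation (Spatial Analyst assumed; file and field names are placeholders consistent with the steps above):

```python
import arcpy
from arcpy.sa import ZonalStatisticsAsTable

arcpy.CheckOutExtension("Spatial")

# Zones come from the reclassified DEM; statistics are computed over
# the value raster (e.g. the DEM itself, or a snow-cover raster).
ZonalStatisticsAsTable("elevation_zones",   # input raster zone data
                       "VALUE",             # field that defines the zones
                       "srtm_dem",          # input value raster
                       "zone_stats.dbf",    # output statistics table
                       "DATA",              # ignore NoData cells
                       "ALL")               # compute all statistics
```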



3. STUDY AREA
The Indus basin is formed by the Indus and its tributaries. The Indus is a major river in Asia that flows through Pakistan and India, with a course also through western Tibet. Originating at Lake Manasarovar, the river runs through Jammu and Kashmir, Himachal Pradesh and Punjab in India. It flows for over 3180 km and has a total drainage area exceeding 1165000 sq km. The annual flow of the river is estimated to be 207 cubic km. The part of the Indus basin studied in this report lies between 76° E, 36° N and 81° E, 31° N.

Image 11; Source [14]


4. RESULTS
4.1 Watershed of Indus basin:
The DEM covering the Indus basin is taken from SRTM 250. The methodology discussed above is applied and the Indus basin watershed is delineated. The watershed raster is then converted to a polygon file, which is used to extract just the basin from the whole SRTM 250 tile.

Image 12

Image 13


The watershed extends from 76° E, 36° N to 81° E, 31° N and occupies a total area of 178339.1 sq km.

Image 14

This is the DEM of the Indus basin after the Fill tool has been applied to remove sink pixels, which don't have any drainage data.


Image 15

The Flow Direction tool is applied to the DEM after the Fill tool. It determines the direction in which water flows out of each pixel and the direction from which water enters it.


Image 16

The Flow Accumulation tool is applied to the flow direction raster of the Indus basin DEM. The raster calculator is then used to make the flow accumulation raster more discrete by resetting the pixel values to two values only (0 and 1): higher flow accumulation values become 1 and lower values become 0. This helps in visually identifying the pour point of the basin.


Image 17

The green circle on the image indicates the outlet point of the basin, the location to which all the water from the watershed flows. The outlet point always lies on high flow accumulation pixels, and can be thought of as the exit point of the basin or its lowest elevation point. The outlet point is downstream of the whole basin, and the whole basin is upstream of the outlet point.


4.2 Elevation zones:

Image 18

The above image shows the Indus basin classified into elevation zones, each 1000 m wide.

Lowest elevation point: 948 m
Highest elevation point: 8572 m


ZONE   PIXELS    AREA (sq km)   MIN ELEV (m)   MAX ELEV (m)   RANGE    MEAN      STD    MEDIAN
1          321        16.98            948           1000       52    989.77    9.43      993
2        37943      2007.18           1001           2000      999   1616.34  264.62     1652
3       138910      7348.33           2001           3000      999   2578.02  278.48     2610
4       442473     23406.82           3001           4000      999   3584.40  279.50     3619
5      1409476     74561.28           4001           5000      999   4564.68  271.91     4586
6      1277074     67557.21           5001           6000      999   5394.74  256.30     5362
7        63025      3334.02           6001           7000      999   6210.16  212.02     6137
8         2012       106.43           7001           7997      996   7255.86  208.96     7199
9           16         0.84           8022           8572      550   8263.37  183.93     8203

TABLE 1

The above table is an analysis of the elevation zones of the Indus basin obtained using the zonal statistics tool.
The PIXELS column is the total number of pixels in the zone.
The AREA column is the total area occupied by the zone in square kilometres.
The MEAN column is the mean elevation of the zone.
The MEDIAN column is the median elevation of the zone.
The STD column is the standard deviation of elevation in the zone.
It can be seen that zones 5 and 6 occupy most of the area of the Indus basin.

HYPSOMETRIC GRAPH
GRAPH 2

The above graph is a hypsometric graph, with cumulative area on the x-axis and elevation on the y-axis.


4.3 Snow Cover:


SNOW COVER (09 MARCH 07)

Image 19

ZONE   PIXELS    AREA (sq km)   PIXELS WITH SNOW   SNOW COVER (sq km)
1          321       16.98                  0                  0
2        37941     2007.07                  0                  0
3       138907     7348.18               8420             445.41
4       442451    23405.66             250291           13240.39
5      1408982    74535.15             852145           45078.47
6      1275725    67485.85             954103           50472.05
7        62947     3329.89              60385            3194.36
8         2012      106.43               1688              89.29
9           16        0.84                 11               0.58
TOTAL                                 2127043           112520.6
TABLE 2


SNOW COVER (18 APRIL 07)

Image 20

ZONE   PIXELS    AREA (sq km)   PIXELS WITH SNOW   SNOW COVER (sq km)
1          321       16.98                  0                  0
2        37941     2007.07                  0                  0
3       138907     7348.18                388              20.53
4       442451    23405.66              96489            5104.26
5      1408970    74534.51             591694           31300.61
6      1275673    67483.10             632157           33441.11
7        62936     3329.31              59093            3126.02
8         2012      106.43               1967             104.05
9           16        0.84                 11               0.58
TOTAL                                 1381799            73097.17
TABLE 3


SNOW COVER (04 MAY 07)

Image 21

ZONE   PIXELS    AREA (sq km)   PIXELS WITH SNOW   SNOW COVER (sq km)
1          321       16.98                  0                  0
2        37941     2007.07                  0                  0
3       138907     7348.18                 25               1.32
4       442451    23405.66              33073            1749.56
5      1408970    74534.51             425619           22515.25
6      1275673    67483.10             596392           31549.14
7        62936     3329.31              58185            3077.98
8         2012      106.43               1968             104.10
9           16        0.84                 16               0.84
TOTAL                                 1115278            58998.21
TABLE 4


SNOW COVER (05 JUN 07)

Image 22

ZONE   PIXELS    AREA (sq km)   PIXELS WITH SNOW   SNOW COVER (sq km)
1          321       16.98                  0                  0
2        37941     2007.07                  0                  0
3       138907     7348.18                  9               0.47
4       442451    23405.66               5752             304.28
5      1408970    74534.51             286818           15172.67
6      1275673    67483.10             495405           26206.92
7        62936     3329.31              58341            3086.23
8         2012      106.43               1974             104.42
9           16        0.84                 16               0.84
TOTAL                                  848315            44875.86
TABLE 5


SNOW COVER (07 JULY 07)

Image 23

ZONE   PIXELS    AREA (sq km)   PIXELS WITH SNOW   SNOW COVER (sq km)
1          321       16.98                  0                  0
2        37942     2007.13                  0                  0
3       138910     7348.33                 21               1.11
4       442473    23406.82               2339             123.73
5      1409459    74560.38              87520            4629.80
6      1277036    67555.20             307150           16248.24
7        63020     3333.75              50101            2650.34
8         2012      106.43               1889              99.92
9           16        0.84                 16               0.84
TOTAL                                  449036            23754
TABLE 6


SNOW COVER (08 AUGUST 07)

Image 24

ZONE   PIXELS    AREA (sq km)   PIXELS WITH SNOW   SNOW COVER (sq km)
1          321       16.98                  0                  0
2        37942     2007.13                  0                  0
3       138910     7348.33                148               7.82
4       442473    23406.82               1668              88.23
5      1409459    74560.38              38349            2028.66
6      1277036    67555.20             198124           10480.76
7        63020     3333.75              42016            2222.64
8         2012      106.43               1521              80.46
9           16        0.84                 12               0.63
TOTAL                                  281838            14909.23

TABLE 7


Snow mapping of the Indus basin was done for six different dates. The image files obtained after delineating snow using ERDAS IMAGINE were overlaid with the DEM of the Indus basin and the elevation zones, and the zonal statistics of each snow map with respect to the elevation-zone map were calculated and analysed. Care must be taken that all the maps are in the same projection and have the same pixel size (resolution).
The area of snow cover was also calculated zone-wise for every date, and the data were analysed in tabular and graphical format.
Snow cover (sq km) by elevation zone and date:

DATE     ZONE 1  ZONE 2  ZONE 3    ZONE 4    ZONE 5    ZONE 6   ZONE 7  ZONE 8  ZONE 9      TOTAL
09-Mar        0       0  445.41  13240.39  45078.47  50472.05  3194.36   89.29    0.58  112520.60
18-Apr        0       0   20.52   5104.26  31300.61  33441.11  3126.02  104.05    0.58   73097.17
04-May        0       0    1.32   1749.56  22515.25  31549.14  3077.98  104.10    0.84   58998.22
05-Jun        0       0    0.47    304.28  15172.67  26206.92  3086.23  104.42    0.84   44875.86
07-Jul        0       0    1.11    123.73   4629.80  16248.24  2650.34   99.92    0.84   23754.01
08-Aug        0       0    7.82     88.23   2028.66  10480.76  2222.64   80.46    0.63   14909.23

TABLE 8

SNOW COVER COMPARISON

(Graph 3 plots the percentage of each elevation zone covered with snow against the date, with one series per elevation zone; the underlying zone-wise figures are those of Table 8.)
GRAPH 3


15-Jun

5-Jul

25-Jul

14-Aug

3-Sep

5. CONCLUSION
As observed in the graphs and tables discussed above, it is clear that the area of snow cover decreased drastically during the months of May, June and July, which is consistent with these being the summer months. The snow cover area in zone 9 did not follow this trend, because it lies at a high altitude of more than 8000 m above mean sea level and covers only a very small area.


REFERENCES:
1. http://www.ngdir.ir/Data_SD/GeoLab/Pics/GeoLabPic_1223_2.jpg
2. http://www.tankonyvtar.hu/en/tartalom/tamop425/0027_DAI6/images/DAI605.png
3. http://maxstudy.org/Chemistry/AP/2000px-EM_spectrum.svg_.png
4. http://www.csc.noaa.gov/products/gulfmex/img/lightdle.gif
5. http://tutor.nmmu.ac.za/uniGISRegisteredArea/Intake13/Remote%20Sensing%20and%20GI
S/reflect.gif
6. http://bfast.r-forge.r-project.org/seasonalbreak_TreeMort.jpg
7. http://www.sc.chula.ac.th/courseware/2309507/images/ch10_9.jpg
8. http://resources.arcgis.com/en/help/main/10.1/index.html#//00v200000005000000
9. http://www.nrcs.usda.gov/wps/portal/nrcs/detail/nh/technical/?cid=nrcs144p2_015680
10. http://modis.gsfc.nasa.gov/
11. https://lpdaac.usgs.gov/products/modis_products_table/mod09a1
12. Remote Sensing and Image Interpretation by Lillesand and Kiefer.
13. Remote Sensing and GIS Applications by P.S. Roy and R.S. Dwivedi.
14. http://foeme.files.wordpress.com/2012/12/map-of-the-indus-basin-source-us-senatereport.jpg


Bibliography:
1. Remote Sensing and Image Interpretation by Lillesand and Kiefer.
2. Remote Sensing and GIS Applications by P.S. Roy and R.S. Dwivedi.
3. Introductory Digital Image Processing by John R. Jensen.

