
Satellite Imaging Technologies

1. Introduction:
Satellite imagery consists of photographs of Earth or other planets taken by means of artificial satellites. The basic principle of remote sensing is that different objects, depending on their structural, chemical and physical properties, return (reflect or emit) different amounts of the energy incident upon them in different wavelength ranges (commonly referred to as bands) of the electromagnetic spectrum.

Most remote sensing systems utilize the sun's energy, which is the predominant source of energy. This radiation travels through the atmosphere and is selectively scattered and/or absorbed depending upon the composition of the atmosphere and the wavelengths involved. Upon reaching the earth's surface, the radiation interacts with the target objects.

Everything in nature has its own unique pattern of reflected, emitted or absorbed radiation. A sensor records the energy reflected or emitted from the surface. This recorded energy is transmitted to users and processed to form an image, which is then analyzed to extract information about the target.

Finally, the extracted information is applied to assist decision making for solving a particular problem.

There are four types of resolution when discussing satellite imagery in remote sensing: spatial, spectral, temporal, and radiometric. Campbell (2002) defines these as follows:

- Spatial resolution is defined as the pixel size of an image, representing the size of the surface area (e.g. in m²) being measured on the ground; it is determined by the sensor's instantaneous field of view (IFOV).
- Spectral resolution is defined by the wavelength interval size (a discrete segment of the electromagnetic spectrum) and the number of intervals that the sensor is measuring.
- Temporal resolution is defined by the amount of time (e.g. days) that passes between imagery collection periods for a given surface location.
- Radiometric resolution is defined as the ability of an imaging system to record many levels of brightness (contrast, for example).

Geometric resolution refers to the satellite sensor's ability to effectively image a portion of the Earth's surface in a single pixel, and is typically expressed in terms of Ground Sample Distance (GSD). GSD is a term encompassing the overall optical and systemic noise sources and is useful for comparing how well one sensor can "see" an object on the ground within a single pixel.
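To make the radiometric and spatial figures concrete, here is a minimal Python sketch (not part of the original text; the Landsat-like altitude and IFOV numbers are illustrative assumptions): the number of recordable brightness levels grows as 2 raised to the bit depth, and the ground footprint of one nadir pixel is approximately the altitude multiplied by the IFOV.

    def radiometric_levels(bits_per_pixel: int) -> int:
        """Number of distinguishable brightness levels for a given bit depth."""
        return 2 ** bits_per_pixel

    def spatial_resolution_m(altitude_m: float, ifov_rad: float) -> float:
        """Approximate ground footprint (GSD) of one nadir pixel:
        footprint ~= altitude * IFOV for small angles."""
        return altitude_m * ifov_rad

    # Illustrative, Landsat-like numbers: ~705 km altitude and an IFOV of
    # ~42.5 microradians give roughly 30 m pixels; 8 bits give 256 grey levels.
    print(radiometric_levels(8))                        # 256
    print(round(spatial_resolution_m(705e3, 42.5e-6)))  # 30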

2. Motivation:
Satellite images have many applications in meteorology, agriculture, geology, forestry, biodiversity conservation, regional planning, education, intelligence and warfare. Images can be in visible colours and in other spectra. There are also elevation maps, usually made by radar imaging. Interpretation and analysis of satellite imagery is conducted using specialized remote sensing applications. Satellite imagery is also used in seismology and oceanography to deduce changes to land formation, water depth and the sea bed (indicated by colour) caused by earthquakes, volcanoes, and tsunamis. Various applications of satellite imaging are:

3D City and Urban Modelling, Agriculture, Archaeology, Cadastre and Land Records, Coastal Management, Defence Mapping, Engineering & Construction, Environmental Monitoring, Global Warming, Law Enforcement, Forestry, Geospatial Homeland Security, Hurricane Mitigation, Land Cover and Change Detection, Land Development, Motion Pictures, Mining, Natural Hazards, Oil & Gas, Pipeline & Transmission Surveys, Sports & Tourism, and Wildlife & Marine Conservation.

Satellite images are used in two forms: analogue and digital. Analogue images are popularly known as hard copy images.

Digital images are made up of tiny picture elements known as pixels. These images are like matrices divided into rows and columns. It is obvious from their name that digital images have something to do with digits (or numbers). In fact, these images are numerical representations of the features observed by a remote sensing device. This means every pixel must have some number or value to represent itself. These values are called pixel values or digital numbers (DN). The digital number of a pixel depends upon the reflectance of a feature recorded by the sensor.

A pixel does not necessarily hold the reflectance of a single object; it may be an aggregate of many features falling within that pixel. Suppose we are interpreting a satellite image acquired by the IRS-1D LISS III sensor. Its single pixel represents 23.5mx23.5m of ground area; within this area a pixel may contain the composite reflectance of a road, some roadside trees and a building close to it.

A pixel of a satellite image has spatial and spectral properties. The spatial property represents the spatial resolution of a satellite sensor, while the spectral one governs the appearance of the objects in an image. On the basis of spectral properties, satellite images are of two types: panchromatic and multispectral. In panchromatic images pixels appear in different shades of grey, which is why these are often referred to as black & white images. Panchromatic images have a single spectral band covering almost the whole visible range of the electromagnetic spectrum, e.g. CARTOSAT-1 images. The number of grey shades which can be displayed by a pixel depends on the number of bits per pixel: a 1-bit image will have only two grey levels, while an 8-bit image will have 256 grey levels. Multispectral images contain multiple layers of spectral bands and are displayed in a combination of red (R), green (G) and blue (B) colours. Hence these are coloured images, e.g. IRS-P6 LISS IV MX images.
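As a toy illustration of these ideas (the values below are invented, not from any real scene), a panchromatic band is a single matrix of digital numbers, while a multispectral image stacks several such bands and is displayed as a red/green/blue composite:

    import numpy as np

    bits = 8
    grey_levels = 2 ** bits   # 256 shades for an 8-bit image; a 1-bit image has 2

    # Three hypothetical 4x4 spectral bands of 8-bit DNs (rows x columns).
    red = np.random.randint(0, grey_levels, (4, 4), dtype=np.uint8)
    green = np.random.randint(0, grey_levels, (4, 4), dtype=np.uint8)
    blue = np.random.randint(0, grey_levels, (4, 4), dtype=np.uint8)

    # Stacking the bands along a third axis gives a multispectral image;
    # each pixel now carries one DN per spectral band.
    composite = np.dstack([red, green, blue])
    print(grey_levels)        # 256
    print(composite.shape)    # (4, 4, 3): rows, columns, bands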

3. Algorithm and flowchart:


Orthorectification Algorithm

Orthorectified images are geodetically corrected with respect to the acquisition conditions, such as viewing geometry, platform attitude and Earth rotation, and for relief effects (parallax). Such corrected images are perfectly superimposable whatever their acquisition dates and viewing directions. Normally, the geo-location of Level 1B and Level 2 data products is based on the intersection of the viewing direction with the WGS84 earth ellipsoid. The orthorectification algorithm in BEAM corrects the geo-location with respect to a specified elevation model with sufficient accuracy.

The following figure demonstrates the preconditions which are expected by the orthorectification as implemented in BEAM:

Figure 1: Preconditions for orthorectification

The geodetic point P0 is the geo-location as provided by a Level 1B or Level 2 data product. The geodetic point P1 is the actual position of the measurement. The orthorectification is effectively a map projection in which each pixel in the output product clearly identifies P1. A simple map projection tries to find the pixel corresponding to P0 in a given input product; the orthorectification instead tries to find the pixel corresponding to P1. The approximation of the Earth geoid used in the orthorectification algorithm can be provided by MERIS/AATSR tie-point elevations or a supported DEM. Currently, only the GETASSE30 elevation model can be used.
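To give a feel for why this correction matters, here is a deliberately simplified sketch (an assumption-laden illustration, not the BEAM implementation): for a target at elevation h viewed off-nadir, the ellipsoid intersection P0 is displaced from the true position P1 by roughly h times the tangent of the viewing zenith angle, along the viewing azimuth.

    import math

    def terrain_displacement_m(elevation_m: float, view_zenith_deg: float) -> float:
        """Approximate horizontal offset between P0 and P1 caused by relief."""
        return elevation_m * math.tan(math.radians(view_zenith_deg))

    # For example, a 500 m hill viewed 20 degrees off-nadir is mislocated
    # by roughly 180 m if the relief is ignored.
    print(round(terrain_displacement_m(500.0, 20.0)))   # 182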

Prediction/Correction Algorithm

The prediction/correction algorithm ensures that both the direct location model and the inverse location model have the same accuracy. The direct location model computes coordinates in the earth reference system associated with the geo-coded image, while the inverse location model computes coordinates in the input product reference system.

In most cases the direct location model is derived from a viewing model given by analytical functions, providing accurate results, while the inverse location model is only a predictor, for example estimated by a polynomial function. The purpose of the prediction/correction algorithm is to retrieve the pixel (l, p) that matches a given ground coordinate (x, y) through the direct location model: f(l, p) = (x, y). The principle is to sum a series of ever-smaller correction vectors, each one estimated by a go-and-return trip from the MERIS segment to the geocoded image and back. The refinement loop is stopped when (li, pi) is close enough to the first point (l0, p0) according to a predefined tolerance. The following figure illustrates the prediction/correction principle over two iterations.

Figure 2: Prediction/correction principle

The prediction/correction algorithm is given in the flow diagram below:

Figure 3: Flowchart
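A schematic Python sketch of this loop follows (the function names and the toy linear models are assumptions for illustration; the real MERIS direct model is an analytic viewing function, as described in the Geometry Handbook). Here direct(l, p) maps image coordinates to ground coordinates (x, y), and inverse_predict(x, y) is the approximate, e.g. polynomial, inverse. Each pass estimates one correction from a go-and-return trip, matching the series of ever-smaller correction vectors described above.

    def predict_correct(direct, inverse_predict, x, y, tol=1e-9, max_iter=20):
        """Find (l, p) with direct(l, p) ~= (x, y) by prediction/correction."""
        l, p = inverse_predict(x, y)              # initial prediction (l0, p0)
        l0, p0 = l, p
        for _ in range(max_iter):
            xi, yi = direct(l, p)                 # "go": image -> ground
            li, pi = inverse_predict(xi, yi)      # "return": ground -> image
            dl, dp = l0 - li, p0 - pi             # ever-smaller correction vector
            l, p = l + dl, p + dp
            if abs(dl) < tol and abs(dp) < tol:   # (li, pi) close enough to (l0, p0)
                break
        return l, p

    # Demo with a toy affine "viewing model" and a crude inverse predictor
    # that ignores the cross terms; the loop still recovers the exact pixel.
    direct = lambda l, p: (2.0 * l + 0.1 * p, 0.1 * l + 2.0 * p)
    inverse_predict = lambda x, y: (x / 2.0, y / 2.0)
    print(predict_correct(direct, inverse_predict, 10.0, 10.0))  # ~(4.7619, 4.7619)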

4. Implementation (existing systems)

Landsat

Landsat satellite sensors are among the most popular remote sensing systems, and the imagery acquired from them is widely used across the globe. NASA's Landsat satellite program was started in 1972. It was formerly known as the ERTS (Earth Resource Technology Satellite) program. The first satellite in the Landsat series, Landsat-1 (formerly ERTS-1), was launched on July 23, 1972. Since then, five different types of sensors have been included in various combinations in the Landsat missions from Landsat-1 through Landsat-7. These sensors are the Return Beam Vidicon (RBV), the Multispectral Scanner (MSS), the Thematic Mapper (TM), the Enhanced Thematic Mapper (ETM) and the Enhanced Thematic Mapper Plus (ETM+). Landsat ETM (or Landsat 6) was launched in 1993 but could not achieve orbit. Six years later, in 1999, Landsat ETM+ (or Landsat 7) was launched; it is the most recent in the series. Landsat ETM+ contains four bands in the visible-near infrared (VIS-NIR) region with 30mx30m spatial resolution, two bands in the short wave infrared (SWIR) region with the same resolution, one band in the thermal infrared (TIR) region with a spatial resolution of 60mx60m and one panchromatic band with 15mx15m resolution. Its revisit period is 16 days.

SPOT

SPOT (Système Pour l'Observation de la Terre) was developed by the French Centre National d'Études Spatiales with Belgium and Sweden. The first satellite of the SPOT mission, SPOT-1, was launched in 1986. It was followed by SPOT-2 (in 1990), SPOT-3 (in 1993), SPOT-4 (in 1998) and SPOT-5 (in 2002). There are two imaging systems in SPOT-5: HRVIR and Vegetation.

The HRVIR records data in three bands in the VIS-NIR region with 10mx10m spatial resolution, one band in the SWIR region with 20mx20m spatial resolution and one panchromatic band with 5mx5m resolution. The Vegetation instrument is primarily designed for vegetation monitoring and related studies. It acquires images in three bands in the VIS-NIR region and one band in the SWIR region (all with 1000mx1000m spatial resolution).

Advanced Very High Resolution Radiometer (AVHRR)

Several generations of satellites have been flown in the NOAA-AVHRR series; NOAA-15 is the most recent in the series. The AVHRR (Advanced Very High Resolution Radiometer) sensor contains five spectral channels: two in the VIS-NIR region and three in the TIR region. One thermal band, covering the wavelength range 3.55-3.93 µm, is meant for fire detection. The spatial resolution of AVHRR is 1100mx1100m. NOAA-AVHRR mainly serves global vegetation mapping, monitoring of land cover changes and agriculture-related studies, with daily coverage.

Indian Remote Sensing (IRS) Satellites

The Indian Remote Sensing programme began with the launch of IRS-1A in 1988. It was followed by IRS-1B (1991), IRS-1C (1995) and IRS-1D (1997). IRS-1D carries three sensors: LISS III, with three bands of 23.5mx23.5m spatial resolution in the VIS-NIR range and one band in the SWIR region with 70.5mx70.5m resolution; a panchromatic sensor with 5.8mx5.8m resolution; and a Wide Field Sensor (WiFS) with 188mx188m resolution. WiFS is extensively used for vegetation-related studies. ISRO's IRS-P6 (RESOURCESAT-1) is a very advanced remote sensing system. It was launched in 2003. It carries the high resolution LISS IV camera (three spectral bands in the VIS-NIR region) with a spatial resolution of 5.8mx5.8m, which has the capability to provide stereoscopic imagery. The IRS-P6 LISS III camera acquires images in the VIS-NIR (three spectral bands) and SWIR (one spectral band) regions with a spatial resolution of 23.5mx23.5m. The IRS-P6 AWiFS (Advanced Wide Field Sensor) operates in the VIS-NIR (three spectral bands) and SWIR (one spectral band) regions with a spatial resolution of 56mx56m.

5. Timeline chart for systems:

1800: Discovery of Infrared by Sir William Herschel.
1826: First photographic image taken by Joseph Nicéphore Niépce.
1839: Beginning of the practice of photography.
1855: Additive colour theory postulated by James Clerk Maxwell.
1858: First aerial photograph from a balloon, taken by G. F. Tournachon.
1873: Theory of electromagnetic energy developed by J. C. Maxwell.
1903: Airplane invented by the Wright brothers.
1909: Photography from airplanes.
1910s: Aerial photo reconnaissance: World War I.
1920s: Civilian use of aerial photography and photogrammetry.
1934: American Society of Photogrammetry founded.
1935: Radar invented by Robert Watson-Watt.
1939-45: Advances in photo reconnaissance and applications of the non-visible portion of EMR: World War II.
1942: Kodak patents the first false-colour infrared film.
1956: Colwell's research on disease detection with IR photography.
1960: Term "Remote Sensing" coined by Office of Naval Research personnel.
1972: ERTS-1 launched (renamed Landsat-1).
1975: ERTS-2 launched (renamed Landsat-2).
1978: Landsat-3 launched.
1980s: Development of hyperspectral sensors.
1982: Landsat-4 TM & MSS launched.
1984: Landsat-5 TM launched.
1986: SPOT-1 launched.
1995: IRS-1C launched.
1999: Landsat-7 ETM+ launched.
1999: IKONOS launched.
1999: NASA's Terra EOS launched.
2002: ENVISAT launched.
2003: ISRO's RESOURCESAT-1 (IRS-P6) launched.
2005: ISRO's CARTOSAT-1 launched.
2007: ISRO's CARTOSAT-2 launched.

6. Conclusion
Satellites have opened up a new scope of planetary exploration. Whether for finding life on another planet, exploring the resources of our home planet, constantly and closely tracking any changes that take place on Earth, or even for surveillance, satellite imaging is useful for all of these. And it is still developing. Research has expanded to include analysis of hyperspectral data acquired simultaneously in tens to hundreds of narrow channels. New algorithms have been developed both to exploit the spectral information of these sensors and to better deal with the computational demands of these enormous data sets. Satellite imaging is an excellent tool for environmental assessments, mineral mapping, land cover mapping, wildlife habitat monitoring and general land management studies. Actual detection of materials depends on the spectral coverage, spectral resolution and signal-to-noise ratio of the spectrometer, the abundance of the material, and the strength of absorption features for that material in the wavelength region. In remote sensing situations, the surface materials mapped must be exposed at the optical surface, and the diagnostic absorption features must be in regions of the spectrum that are reasonably transparent to the atmosphere.

In the case of space exploration, satellites probe deeper and further into space, capturing images of planets and phenomena never known or seen before, and transmit them back to earth. More and more satellites are put into orbit around the earth to analyze climate, vegetation, infrastructure and other resources. The two types of satellites demand different technologies, and changes in requirements demand changes in technology. As these technologies continue to be developed, new horizons are explored in the satellite imaging field and man continues his space odysseys.

References:

1. Løvholt, F., Bungum, H., Harbitz, C.B., Glimsdal, S., Lindholm, C.D., and Pedersen, G. "Earthquake related tsunami hazard along the western coast of Thailand." Natural Hazards and Earth System Sciences, Vol. 6, No. 6, pp. 979-997, November 30, 2006.

2. Campbell, J. B. 2002. Introduction to Remote Sensing. New York/London: The Guilford Press.

3. http://www.satimagingcorp.com

4. "Envisat MERIS Geometry Handbook" written by Serge RIAZANOFF, Director of VisioTerra and Professor at the University of Marne-la-Valle (France).

5. http://satimagingblog.wordpress.com/
