

DIGITAL IMAGE PROCESSING: INTRODUCTORY NOTES

A.P. PRADEEP KUMAR


Unit 1: Image rectification and restoration, image enhancement, contrast manipulation, multi-image manipulation, image classification, data merging.

There are two approaches to the use of remote sensing: (1) the standard photointerpretation of images, and (2) the use of digital image processing and classification techniques, by which information is extracted from the sensor data sets (images and other data).

Digital image processing involves three stages:
1. Preprocessing: radiometric and geometric corrections (known as Image Restoration and Rectification)
2. Image Enhancement and Image Transformations
3. Image Classification

A variety of methods is available at each stage, such as contrast stretching, band ratioing, band transformation, principal components analysis, edge enhancement, pattern recognition, and unsupervised and supervised classification.

Radiometric corrections include correcting the data for sensor irregularities and unwanted sensor or atmospheric noise, and converting the data so they accurately represent the reflected or emitted radiation measured by the sensor. Geometric corrections include correcting for geometric distortions due to sensor-Earth geometry variations, and conversion of the data to real-world coordinates (e.g. latitude and longitude) on the Earth's surface.

Image enhancement improves the appearance of the imagery to assist in visual interpretation and analysis. Examples of enhancement functions include contrast stretching, to increase the tonal distinction between various features in a scene, and spatial filtering, to enhance (or suppress) specific spatial patterns in an image.

Image transformations are operations similar in concept to those for image enhancement. However, image enhancement operations are normally applied only to a single channel of data at a time, whereas image transformations usually involve combined processing of data from multiple spectral bands. Arithmetic operations (i.e. subtraction, addition, multiplication, division) are performed to combine and transform the original bands into new images which better display or highlight certain features in the scene. Methods like spectral or band ratioing and principal components analysis are some of the techniques used to represent the information in multichannel imagery more efficiently.

Image classification and analysis operations are used to digitally identify and classify pixels in the data. Classification is usually performed on multichannel data sets, and the process assigns each pixel in an image to a particular class or theme based on the statistical characteristics of the pixel brightness values. The two most common methods for image classification are supervised and unsupervised classification.

1. Image Rectification

Remotely sensed images and aerial photographs are widely used because of their ease of acquisition and currency. They play a significant role in numerous GIS applications. Unlike existing maps, raw satellite images and scanned aerial photographs have a local coordinate system: they do not have the right projection, nor any scale or proper orientation. During data acquisition the platform on which the sensor is mounted is in a state of constant motion. Any deviation of the sensor position and orientation from the norm will lead to geometric distortions in the resultant satellite images.

Why should images be geometrically rectified?
1. Image rectification ensures that geometric distortions in the remote sensing imagery are eliminated or reduced to an acceptable level.
2. In order to be analyzed with data from other sources in a GIS, raw satellite images and aerial photographs have to be projected to a common ground reference system, which enables images obtained at different times to be spatially registered with one another.
3. If remotely sensed data are used to detect changes or update existing maps, they must be reprojected to a coordinate system with a known geometry identical to that of the digital maps to be revised.

What are the sources of geometric distortion?
Many factors contribute to geometric distortions of satellite imagery. They are related to the target, the sensor, and the platform.
1. Errors Associated with the Earth (target)
   a. Earth Rotation
   b. Earth Curvature
2. Sensor Distortions
   a. Off-nadir Scale Distortion
   b. Scanning Mirror Inconsistency



3. Errors Associated with the Platform
   a. Position
   b. Orientation
   c. Velocity

1.a. Earth Rotation

The Earth spins at a constant angular velocity of 360° per day, which corresponds to a surface speed of about 463 m/s at the equator. While the sensor acquires successive lines of imagery, the Earth rotates from west to east beneath it. Cross-track scanning, as with Landsat imagery, takes longer to obtain a frame of imagery than the along-track scanning commonly associated with SPOT. After a scan, the Earth has shifted eastward by a certain distance, for instance 9.26 m after a scan duration of 20 milliseconds. When the scanner returns to its former position to begin the next scan, that ground position has moved eastward by 9.26 m, and ground the same distance to the west now occupies the former position, resulting in a gradual westward shift of the ground swath being scanned. As the number of scan lines accumulates, the start position shifts to the left cumulatively from the first line to the last. Although the raw image is recorded as a square, the actual ground covered by the image is skewed toward the left as a consequence of the rotation.

Earth rotation causes a constant displacement in the start position of scan lines only. It does not affect the number of scan lines in an image, nor the number of pixels in a scan line. This kind of distortion is hence systematic and can be completely eliminated during image rectification.
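The shift quoted above is simple arithmetic: the eastward ground speed multiplied by the scan duration, accumulated over the lines of a frame. A minimal Python sketch (the speed and scan duration are the nominal values quoted in the text; the frame size is a hypothetical figure for illustration):

```python
# Eastward ground shift per scan line caused by Earth rotation.
EARTH_SURFACE_SPEED = 463.0   # m/s at the equator (nominal value from the text)
SCAN_DURATION = 0.020         # s per scan line (20 ms)

shift_per_line = EARTH_SURFACE_SPEED * SCAN_DURATION    # = 9.26 m
n_lines = 2400                                          # hypothetical frame size
total_skew = shift_per_line * n_lines                   # cumulative skew

print(f"shift per line: {shift_per_line:.2f} m")
print(f"skew of last line relative to first: {total_skew / 1000:.2f} km")
```

Because the shift per line is constant, rectification software can remove this skew exactly by offsetting each line westward by a known amount.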



1.b. Earth Curvature

The three-dimensional surface of the Earth is not flat, but curved, with varying topographic relief. Recording this surface into satellite images is a process of transforming a 3D surface onto a 2D medium, during which geometric distortions are introduced. The severity of the geometric distortion caused by Earth curvature depends on the scanning angle and the swath width D, both of which are related to the sensor altitude H. The approximation of the curved surface with a flat surface may cause a negligible error if the field of view (FOV) of the sensing system is very small.

Off-nadir Scale Distortion

Scan-direction distortion is caused by the non-uniformity of the object distance, while the image distance (e.g. focal length) remains constant during scanning. This type of distortion is especially evident in airborne thermal infrared and radar imagery, which is acquired at close range to the ground. During a scan, the scene directly beneath the scanner has the shortest object distance. As the scanning mirror rotates away from this nadir position, the object distance becomes increasingly longer. On the other hand, the instantaneous field of view (IFOV) of the scanner remains constant irrespective of the scan angle. Therefore, a larger area on the ground is covered by the same IFOV farther away from the nadir. In the obtained imagery, this enlarged ground area is recorded at the same dimension as the nadir ground because of the fixed focal length, resulting in off-nadir image compression. The farther an object is from the nadir position, the larger the compression. This scan-direction distortion is called tangential scale distortion.

Tangential scale distortion occurs only in the direction perpendicular to the flight direction; along the flight direction there is no such distortion. Consequently, regular shapes of grids, diamonds, and circles on the ground are no longer regular in the acquired image. This kind of distortion is especially severe toward the end of the scan line, where the scan angle is maximal.
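One way to quantify this compression is via the standard ground-resolution relations for a scanning system: at scan angle θ, the ground cell measures roughly Hβ·sec θ along track and Hβ·sec²θ along scan (H = altitude, β = IFOV). These formulas are not given in the text above and are stated here as a common approximation; the Python sketch below shows how the cell grows away from nadir, with hypothetical values for H and β.

```python
import numpy as np

# Ground resolution cell size away from nadir for a scanning system,
# using the common approximations: along-track = H*beta*sec(theta),
# along-scan = H*beta*sec^2(theta). H and beta are hypothetical values.
H = 3000.0        # sensor altitude in metres
beta = 2.5e-3     # instantaneous field of view (IFOV) in radians

for deg in (0, 15, 30, 45):
    theta = np.radians(deg)
    along_track = H * beta / np.cos(theta)          # grows as sec(theta)
    along_scan = H * beta / np.cos(theta) ** 2      # grows as sec^2(theta)
    print(f"scan angle {deg:2d} deg: cell {along_track:5.1f} m x {along_scan:5.1f} m")
```

At 45° the along-scan cell is twice its nadir size, which is why the compression is most severe at the ends of the scan line.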



Scanning Mirror Inconsistency

Two problems related to the scanning mirror emerge during scanning: velocity inconsistency and scan skew. The mirror rotates non-linearly across a scan, like a pendulum: its velocity is maximal at the nadir and gradually decreases to zero at the extremes, always alternating between these two extremes. Consequently, the ground is not swept linearly, causing distortion along the scan direction. This systematic distortion can be eliminated completely through the application of an appropriate mathematical formula.

Scan skew refers to the motion of the spacecraft away from the planned direction along the ground track during the time required to complete a scan. This causes the ground swath scanned to be not normal to the ground track but slightly skewed (cross-scan geometric distortion). It causes random errors that are difficult to deal with.

Errors Associated with the Platform

Two types of parameters determine the status of the sensor in space, position and orientation, each being associated with three components.

Position
Among the three parameters (X easting, Y northing, and Z altitude) defining the position of the platform in space, altitude (Z) is the most critical, as it affects the scale of the obtained imagery. A higher altitude than the nominal one causes a larger ground area to be covered, resulting in an image of smaller scale. Conversely, a lower altitude leads to a smaller ground area being sensed, and hence an image of larger scale. The departure of a spacecraft or aircraft from its nominal altitude is thus translated into a scale variation. The other two coordinates (easting and northing) govern the geographic area to be covered: a change in position causes a slightly different area from the planned one to be sensed, with no change in image scale.

Orientation
Sensor orientation is defined by three parameters: roll (ω), pitch (φ), and yaw (κ). Roll refers to the rotation around the flight direction (X-axis), whose increment points to the right if this rotation occurs clockwise. It results in a change in scale in the direction perpendicular to the flight direction. The scale is either larger or smaller than the nominal one, depending upon the location in relation to the X-axis; the scale along all lines parallel to the flight direction is constant. Pitch is the rotation around the Y-axis, a direction perpendicular to the flight direction. Its effect on scale distortion is identical to that of roll, except that the distortion occurs in the X direction (the flight direction). Yaw is defined as the rotation around the Z-axis (plumb direction). Unlike roll and pitch, yaw exerts no direct impact on the geometry of the obtained image (e.g. no change in scale); instead, a slightly different area from the planned one is covered in a yawed image. Its effect is thus very similar to that of a change in (X, Y) position.



Velocity

In order to obtain imagery of high geometric fidelity, the platform must move at a constant velocity during data acquisition. Any inconsistency in its velocity will lead to image distortion along the flight direction: if the velocity is faster than the norm, a longer ground distance is covered between successive scan lines, so ground features appear compressed along track; if it is slower, they appear stretched. This generalization also applies to across-track scanning. Such an inconsistency in platform velocity is random, and its impact on the acquired imagery cannot be completely eliminated.

Nature of Distortions

All the distortions mentioned above fall into two broad categories in terms of their nature: systematic or random. Systematic errors can be completely eliminated through image rectification, but random errors are non-systematic and unpredictable. Their haphazard behaviour in the imagery means that these errors cannot be completely removed, though it is possible to suppress them to an acceptable level through image rectification. Both Earth rotation and curvature cause systematic distortions that can be completely eradicated, as can the distortion caused by inconsistency in scanning mirror velocity. By comparison, most of the errors related to the orientation and position of the sensor, and the velocity of the platform, are random. To correct these errors, geometric registration of the imagery to a known ground coordinate system must be performed.

Correcting the distortions

The geometric registration process involves identifying the image coordinates (i.e. row, column) of several clearly discernible points, called ground control points (or GCPs), in the distorted image, and matching them to their true positions in ground coordinates (e.g. latitude, longitude). The true ground coordinates are typically measured from a map; this is image-to-map registration. Once several well-distributed GCP pairs have been identified, the coordinate information is processed by the computer to determine the proper transformation equations to apply to the original (row and column) image coordinates to map them into their new ground coordinates.

Good GCPs are features that are easy to identify on the satellite imagery and whose positions can be pinpointed accurately on a map.
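The transformation equations are commonly a low-order polynomial fitted to the GCP pairs by least squares. Below is a minimal Python/NumPy sketch that fits a first-order (affine) transformation; the function name, variable names, and GCP values are illustrative, not from the text.

```python
import numpy as np

def fit_affine(image_xy, ground_xy):
    """Least-squares fit of a first-order (affine) transformation that
    maps image (row, col) coordinates onto ground (E, N) coordinates.
    Needs at least 3 well-distributed GCPs; more improve the fit."""
    image_xy = np.asarray(image_xy, dtype=float)
    ground_xy = np.asarray(ground_xy, dtype=float)
    # Design matrix: [row, col, 1] for each GCP
    A = np.column_stack([image_xy, np.ones(len(image_xy))])
    # Solve A @ coeffs = ground coordinates, one column per ground axis
    coeffs, *_ = np.linalg.lstsq(A, ground_xy, rcond=None)
    return coeffs  # 3x2 matrix of transformation coefficients

# Hypothetical GCP pairs: (row, col) in the image vs (easting, northing)
gcps_image = [(10, 12), (950, 40), (60, 980), (900, 920)]
gcps_ground = [(500100.0, 900950.0), (500140.0, 900010.0),
               (501080.0, 900900.0), (501020.0, 900060.0)]

T = fit_affine(gcps_image, gcps_ground)
row_col_1 = np.array([480.0, 505.0, 1.0])   # any pixel in the raw image
easting, northing = row_col_1 @ T           # its estimated ground position
print(easting, northing)
```

Higher-order polynomials can model more complex distortion, but they require more GCPs and can behave poorly outside the area the GCPs cover.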

Geometric registration may also be performed by registering one (or more) images to another image, instead of to geographic coordinates. This is called image-to-image registration, and it is often done prior to performing various image transformation procedures or for multitemporal image comparison.

To actually correct the original distorted image geometrically, a procedure called resampling is used to determine the digital values to place in the new pixel locations of the corrected output image. The resampling process calculates the new pixel values from the original digital pixel values in the uncorrected image. There are three common methods of resampling: nearest neighbour, bilinear interpolation, and cubic convolution.

Nearest-neighbour resampling uses the digital value of the pixel in the original image which is nearest to the new pixel location in the corrected image. This is the simplest method and does not alter the original values, but it may result in some pixel values being duplicated while others are lost. It also tends to produce a disjointed or blocky image appearance.

Bilinear interpolation resampling takes a weighted average of the four pixels in the original image nearest to the new pixel location. The averaging process alters the original pixel values and creates entirely new digital values in the output image. This may be undesirable if further processing and analysis, such as classification based on spectral response, is to be done.



Cubic convolution resampling calculates a distance-weighted average of a block of sixteen pixels from the original image surrounding the new output pixel location. As with bilinear interpolation, this method results in completely new pixel values. However, these two methods both produce images with a much sharper appearance, avoiding the blocky appearance of the nearest-neighbour method.
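A minimal sketch of the first two methods, assuming the usual inverse-mapping setup: the rectification transform gives, for each output pixel, a fractional (row, col) position back in the uncorrected image (here faked with a small constant shift purely for illustration).

```python
import numpy as np

def nearest_neighbour(src, rows, cols):
    """Sample src at fractional (row, col) positions by taking the
    value of the nearest original pixel (original values preserved)."""
    r = np.clip(np.rint(rows).astype(int), 0, src.shape[0] - 1)
    c = np.clip(np.rint(cols).astype(int), 0, src.shape[1] - 1)
    return src[r, c]

def bilinear(src, rows, cols):
    """Sample src at fractional positions as a weighted average of the
    four surrounding pixels (creates new, non-original values)."""
    r0 = np.clip(np.floor(rows).astype(int), 0, src.shape[0] - 2)
    c0 = np.clip(np.floor(cols).astype(int), 0, src.shape[1] - 2)
    dr, dc = rows - r0, cols - c0
    top = src[r0, c0] * (1 - dc) + src[r0, c0 + 1] * dc
    bottom = src[r0 + 1, c0] * (1 - dc) + src[r0 + 1, c0 + 1] * dc
    return top * (1 - dr) + bottom * dr

src = np.arange(36, dtype=float).reshape(6, 6)   # tiny stand-in image
rr, cc = np.meshgrid(np.arange(5) + 0.4, np.arange(5) + 0.7, indexing="ij")
print(nearest_neighbour(src, rr, cc))   # values copied from nearest pixels
print(bilinear(src, rr, cc))            # smoothly interpolated new values
```

Cubic convolution follows the same pattern but weights a 4x4 neighbourhood with a cubic kernel, which is why it is omitted here for brevity.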

Image Enhancement
Enhancements are used to make visual interpretation and understanding of imagery easier. The advantage of digital imagery is that it allows the manipulation of the digital pixel values in an image. Although radiometric corrections for illumination, atmospheric influences, and sensor characteristics may be done prior to distribution of data to the user, the image may still not be optimized for visual interpretation. Remote sensing devices, particularly those operated from satellite platforms, must be designed to cope with levels of target/background energy typical of all conditions likely to be encountered in routine use. With large variations in spectral response from a diverse range of targets (e.g. forest, deserts, snowfields, water, etc.), no generic radiometric correction could optimally account for and display the optimum brightness range and contrast for all targets. Thus, for each application and each image, a custom adjustment of the range and distribution of brightness values is usually necessary.

Contrast enhancement: In raw imagery, the useful data often populate only a small portion of the available range of digital values (commonly 8 bits, or 256 levels). Contrast enhancement involves changing the original values so that more of the available range is used, thereby increasing the contrast between targets and their backgrounds. The key to understanding contrast enhancements is the concept of an image histogram. A histogram is a graphical representation of the brightness values that comprise an image. The brightness values (i.e. 0-255) are displayed along the x-axis of the graph, and the frequency of occurrence of each of these values in the image is shown on the y-axis.

Histogram stretching

By manipulating the range of digital values in an image, graphically represented by its histogram, we can apply various enhancements to the data. The simplest type of enhancement is a linear contrast stretch (also known as histogram stretching). This involves identifying lower and upper bounds from the histogram (usually the minimum and maximum brightness values in the image) and applying a transformation to stretch this range to fill the full range. In our example, the minimum value (occupied by actual data) in the histogram is 84 and the maximum value is 153. These 70 levels occupy less than one third of the full 256 levels available. A linear stretch uniformly expands this small range to cover the full range of values from 0 to 255. This enhances the contrast in the image, with light-toned areas appearing lighter and dark areas appearing darker, making visual interpretation much easier.
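A minimal NumPy sketch of the linear stretch just described. The 84-153 bounds mirror the example above; the random test array is a stand-in for a real band, and the mapping new = (old - min) / (max - min) * 255 is the standard linear stretch.

```python
import numpy as np

def linear_stretch(band, low=None, high=None):
    """Linearly map the [low, high] brightness range onto [0, 255].
    If bounds are not given, the band's own min and max are used."""
    band = band.astype(float)
    low = band.min() if low is None else low
    high = band.max() if high is None else high
    stretched = (band - low) / (high - low) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

# Example matching the text: data occupy only levels 84-153 of 0-255
raw = np.random.randint(84, 154, size=(100, 100), dtype=np.uint8)
out = linear_stretch(raw)
print(raw.min(), raw.max())   # 84 153 (only 70 of 256 levels used)
print(out.min(), out.max())   # 0 255 (full range now used)
```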


Spatial filtering is another enhancement technique. Spatial filters are designed to highlight or suppress specific features in an image based on their spatial frequency. Spatial frequency is related to the concept of image texture, which refers to the frequency of the variations in tone that appear in an image. "Rough" textured areas of an image, where the changes in tone are abrupt over a small area, have high spatial frequencies, while "smooth" areas with little variation in tone over several pixels have low spatial frequencies. A common filtering procedure involves moving a 'window' of a few pixels in dimension (e.g. 3x3, 5x5, etc.) over each pixel in the image, applying a mathematical calculation using the pixel values under that window, and replacing the central pixel with the new value. The window is moved along in both the row and column dimensions one pixel at a time, and the calculation is repeated until the entire image has been filtered and a "new" image has been generated. By varying the calculation performed and the weightings of the individual pixels in the filter window, filters can be designed to enhance or suppress different types of features.

A low-pass filter is designed to emphasize larger, homogeneous areas of similar tone and reduce the smaller detail in an image. Thus, low-pass filters generally serve to smooth the appearance of an image. Average and median filters, often used for radar imagery, are examples of low-pass filters.
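A minimal sketch of the moving-window procedure described above: a 3x3 mean (low-pass) filter written with explicit loops so the mechanics are visible. Edge pixels are left unchanged here for simplicity; real packages handle image borders in various ways, and the function name is this note's own.

```python
import numpy as np

def mean_filter_3x3(image):
    """Slide a 3x3 window over the image and replace each interior
    pixel with the average of the nine values under the window."""
    img = image.astype(float)
    out = img.copy()                      # border pixels keep original values
    rows, cols = img.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = img[r - 1:r + 2, c - 1:c + 2]
            out[r, c] = window.mean()     # low-pass: smooths local detail
    return out

noisy = np.random.randint(0, 256, size=(64, 64)).astype(float)
smooth = mean_filter_3x3(noisy)
print(noisy.std(), smooth.std())          # smoothing reduces the variance
```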

Figure: (a) original image; (b) 5x5 mean filter result; (c) 9x9 mean filter result.

High-pass filters do the opposite, and serve to sharpen the appearance of fine detail in an image. One implementation of a high-pass filter first applies a low-pass filter to an image and then subtracts the result from the original, leaving behind only the high spatial frequency information. Directional, or edge detection, filters are designed to highlight linear features such as roads or field boundaries. These filters can also be designed to enhance features oriented in specific directions, and are useful in geological applications, e.g. for the detection of lineaments.
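The subtraction-based high-pass filter just described is a one-line extension of the previous sketch (it reuses the noisy array and the mean_filter_3x3 result from that example, which are this note's own illustrative names):

```python
# High-pass = original - low-pass: only the fine, high-frequency
# detail (edges, small features) survives the subtraction.
high_freq = noisy - smooth        # arrays from the low-pass sketch above
sharpened = noisy + high_freq     # adding the detail back sharpens the image
```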



Multi-image Manipulation

Image transformations typically involve the manipulation of multiple bands of data, whether from a single multispectral image or from two or more images of the same area acquired at different times (i.e. multitemporal image data). Either way, image transformations generate new images from two or more sources which highlight particular features or properties of interest better than the original input images.

Basic image transformations apply simple arithmetic operations to the image data. Image subtraction is often used to identify changes that have occurred between images collected on different dates. Typically, two images which have been geometrically registered are used, with the pixel (brightness) values in one image being subtracted from the pixel values in the other. Scaling the resultant image by adding a constant (e.g. 127) to the output values will result in a suitable 'difference' image. In such an image, areas where there has been little or no change between the original images will have resultant brightness values around 127 (mid-grey tones), while areas where significant change has occurred will have values higher or lower than 127 (brighter or darker, depending on the 'direction' of the change in reflectance between the two images). This type of image transform can be useful for mapping changes in urban development around cities and for identifying areas where deforestation is occurring.

Spectral ratioing is one of the most common transforms applied to image data. Image ratioing serves to highlight subtle variations in the spectral responses of various surface covers. By ratioing the data from two different spectral bands, the resultant image enhances variations in the slopes of the spectral reflectance curves between the two spectral ranges that may otherwise be masked by the pixel brightness variations in each of the bands. The following example illustrates the concept of spectral ratioing. Healthy vegetation reflects strongly in the near-infrared portion of the spectrum while absorbing strongly in the visible red. Other surface types, such as soil and water, show near-equal reflectances in both the near-infrared and red portions. Thus, a ratio image of Landsat MSS Band 7 (near-infrared, 0.8 to 1.1 µm) divided by Band 5 (red, 0.6 to 0.7 µm) would result in ratios much greater than 1.0 for vegetation, and ratios around 1.0 for soil and water. The discrimination of vegetation from other surface cover types is thus significantly enhanced. Also, areas of unhealthy or stressed vegetation, which show low near-infrared reflectance, can be identified, as their ratios would be lower than those of healthy green vegetation.
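A minimal sketch of the two arithmetic transforms just described: the 127-offset difference image and the near-infrared/red band ratio. The band arrays are hypothetical placeholders, and the small eps in the denominator is a practical guard against division by zero that the text does not discuss.

```python
import numpy as np

def difference_image(date1, date2, offset=127.0):
    """Subtract two co-registered bands and re-centre on mid-grey (127):
    ~127 means little change, brighter/darker means change occurred."""
    diff = date2.astype(float) - date1.astype(float) + offset
    return np.clip(diff, 0, 255).astype(np.uint8)

def band_ratio(nir, red, eps=1e-6):
    """Ratio of near-infrared to red brightness: >> 1 over healthy
    vegetation, ~1 over soil and water (eps guards against zeros)."""
    return nir.astype(float) / (red.astype(float) + eps)

# Hypothetical 8-bit bands standing in for real imagery
nir = np.array([[200, 60], [180, 55]], dtype=np.uint8)   # MSS band 7
red = np.array([[40, 58], [50, 52]], dtype=np.uint8)     # MSS band 5

print(band_ratio(nir, red))              # vegetation pixels stand out (~4-5)
print(difference_image(red, nir)[0, 0])  # 255 after clipping: strong "change"
```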

Figure: (a) original SPOT Pan image; (b) smoothed SPOT Pan image with a 5x5 smoothing filter; (c) the ratio image between (a) and (b).

A widely used image transformation technique for vegetation analysis is the Normalized Difference Vegetation Index (NDVI), which has been used to monitor vegetation conditions on continental and global scales using the Advanced Very High Resolution Radiometer (AVHRR) sensor on board the NOAA series of satellites.

Principal components analysis: Different bands of multispectral data are often highly correlated and thus contain similar information. For example, Landsat MSS Bands 4 and 5 (green and red, respectively) typically have similar visual appearances, since reflectances for the same surface cover types are almost equal. Image transformation techniques based on complex processing of the statistical characteristics of multiband data sets can be used to reduce this data redundancy and correlation between bands. One such transform is called principal components analysis. The objective of this transformation is to reduce the dimensionality (i.e. the number of bands) of the data, compressing as much of the information in the original bands as possible into fewer bands. The "new" bands that result from this statistical procedure are called components. The process attempts to maximize (statistically) the amount of information (or variance) from the original data captured in the least number of new components. As an example of the use of principal components analysis, a seven-band Thematic Mapper (TM) data set may be transformed such that the first three principal components contain over 90 percent of the information in the original seven bands. Interpretation and analysis of these three bands of data, combining them either visually or digitally, is simpler and more efficient than trying to use all of the original seven bands. Principal components analysis can thus be used either as an enhancement technique to improve visual interpretation or to reduce the number of bands to be used as input to digital classification procedures.
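A minimal sketch of principal components analysis on a band stack, using the eigen-decomposition of the band-to-band covariance matrix (the standard computation); the function name and the tiny synthetic "bands" are placeholders for real multispectral data.

```python
import numpy as np

def principal_components(stack, n_keep):
    """stack: (bands, rows, cols) array. Returns the first n_keep
    principal-component images and the variance fraction each carries."""
    bands, rows, cols = stack.shape
    X = stack.reshape(bands, -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)          # centre each band
    cov = np.cov(X)                             # bands x bands covariance
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]           # largest variance first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    pcs = eigvecs.T[:n_keep] @ X                # project onto the components
    explained = eigvals / eigvals.sum()
    return pcs.reshape(n_keep, rows, cols), explained[:n_keep]

# Three synthetic "bands", two of them strongly correlated:
# PC1 should capture most of the shared variance.
base = np.random.rand(50, 50)
stack = np.stack([base, base * 0.9 + 0.05, np.random.rand(50, 50)])
pcs, frac = principal_components(stack, n_keep=2)
print(frac)   # e.g. the first component carries the bulk of the variance
```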



Image Classification and Analysis
A human analyst attempting to classify features in an image uses the elements of visual interpretation to identify homogeneous groups of pixels which represent various features or land cover classes of interest. Digital image classification uses the spectral information represented by the digital numbers in one or more spectral bands, and attempts to classify each individual pixel based on this spectral information. This type of classification is termed spectral pattern recognition. In either case, the objective is to assign all pixels in the image to particular classes or themes (e.g. water, quarry, ore zones, leachate spread, deciduous forest, rice, coconut palm, etc.). The resulting classified image is comprised of a mosaic of pixels, each of which belongs to a particular theme, and is essentially a thematic "map" of the original image.

When talking about classes, we need to distinguish between information classes and spectral classes. Information classes are those categories of interest that the analyst is actually trying to identify in the imagery, such as different kinds of soils, different forest types or tree species, different geologic units or rock types, etc. Spectral classes are groups of pixels that are uniform (or near-similar) with respect to their brightness values in the different spectral channels of the data. The objective is to match the spectral classes in the data to the information classes of interest. Rarely is there a simple one-to-one match between these two types of classes. Rather, unique spectral classes may appear which do not necessarily correspond to any information class of particular use or interest to the analyst. Alternatively, a broad information class (e.g. forest) may contain a number of spectral subclasses with unique spectral variations. Using the forest example, spectral subclasses may be due to variations in age, species, and density, or perhaps as a result of shadowing or variations in scene illumination. It is the analyst's job to decide on the utility of the different spectral classes and their correspondence to useful information classes.

Supervised classification and unsupervised classification

Common image classification procedures can be broken down into two broad subdivisions based on the method used: supervised classification and unsupervised classification. In a supervised classification, the analyst identifies in the imagery homogeneous, representative samples of the different surface cover types (information classes) of interest. These samples are referred to as training areas. The selection of appropriate training areas is based on the analyst's familiarity with the geographical area and knowledge of the actual surface cover types present in the image. Thus, the analyst is "supervising" the categorization of a set of specific classes. The numerical information in all spectral bands for the pixels comprising these areas is used to "train" the computer to recognize spectrally similar areas for each class. The computer uses a special program, or algorithm, to determine the numerical "signatures" for each training class. Once the computer has determined the signatures for each class, each pixel in the image is compared to these signatures and labelled as the class it most closely "resembles" digitally. Thus, in a supervised classification the information classes are identified first, and these are then used to determine the spectral classes which represent them.
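A minimal sketch of a supervised classifier: each class "signature" is taken as the mean band vector of its training pixels, and every pixel is labelled with the nearest signature. This minimum-distance-to-means rule is one common decision rule among several; the text does not prescribe a specific algorithm, and all names and data below are illustrative.

```python
import numpy as np

def train_signatures(stack, training):
    """stack: (bands, rows, cols). training: {class_name: (row_idx, col_idx)}.
    The 'signature' of each class is simply its mean vector per band."""
    return {name: stack[:, r, c].mean(axis=1) for name, (r, c) in training.items()}

def classify(stack, signatures):
    """Label every pixel with the class whose signature is nearest
    (Euclidean distance in spectral space)."""
    bands, rows, cols = stack.shape
    X = stack.reshape(bands, -1).astype(float)        # bands x pixels
    names = list(signatures)
    means = np.stack([signatures[n] for n in names])  # classes x bands
    dists = np.linalg.norm(X[None, :, :] - means[:, :, None], axis=1)
    return np.array(names)[np.argmin(dists, axis=0)].reshape(rows, cols)

# Hypothetical 2-band scene: a bright block stands in for "forest"
stack = np.random.rand(2, 20, 20) * 0.2
stack[:, 10:, 10:] += 0.7
training = {"water": (np.arange(0, 5), np.arange(0, 5)),
            "forest": (np.arange(15, 20), np.arange(15, 20))}
themes = classify(stack, train_signatures(stack, training))
print(themes[0, 0], themes[19, 19])   # water forest
```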

Unsupervised classification is the reverse of the supervised classification process. Spectral classes are grouped first, based solely on the numerical information in the data, and are then matched by the analyst to information classes (if possible). Programs called clustering algorithms are used to determine the natural (statistical) groupings or structures in the data. Usually, the analyst specifies how many groups or clusters are to be looked for in the data. In addition to specifying the desired number of classes, the analyst may also specify parameters related to the separation distance among the clusters and the variation within each cluster. The final result of this iterative clustering process may include some clusters that the analyst will want to combine subsequently, or clusters that should be broken down further, each of these requiring a further application of the clustering algorithm. Thus, unsupervised classification is not completely without human intervention; however, it does not start with a predetermined set of classes as in a supervised classification.
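A minimal sketch of an iterative clustering algorithm of the kind described: plain k-means on the pixels' spectral vectors. Operational unsupervised classifiers in remote sensing packages (e.g. ISODATA) add merge/split rules on top of this basic assign-and-update loop; the names and data below are illustrative only.

```python
import numpy as np

def kmeans(X, k, n_iter=20, seed=0):
    """X: pixels x bands. Iteratively assign pixels to the nearest of k
    cluster centres, then move each centre to the mean of its pixels."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # distance of every pixel to every centre; label = nearest centre
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):                     # update step
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels, centres

# Pixels of a hypothetical 2-band image, flattened to (pixels, bands)
stack = np.random.rand(2, 30, 30)
X = stack.reshape(2, -1).T
labels, centres = kmeans(X, k=4)
spectral_classes = labels.reshape(30, 30)   # these spectral classes are then
print(centres)                              # matched by the analyst to
                                            # information classes
```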

Data Integration and Analysis

In the early days of analog remote sensing, when the only remote sensing data source was aerial photography, the capability for integrating data from different sources was limited. Today, with most data available in digital format from a wide array of sensors, data integration is a common method used for interpretation and analysis. Data integration fundamentally involves the combining or merging of data from multiple sources in an effort to extract better and/or more information. This may include data that are multitemporal, multiresolution, multisensor, or multi-data-type in nature.

Imagery collected at different times is integrated to identify areas of change. Multitemporal change detection can be achieved through simple methods, such as the image subtraction described earlier, or by more complex approaches such as multiple classification comparisons or classifications using integrated multitemporal data sets.

Multiresolution data merging is useful for a variety of applications. The merging of data of a higher spatial resolution with data of lower resolution can significantly sharpen the spatial detail in an image and enhance the discrimination of features. SPOT data are well suited to this approach, as the 10-metre panchromatic data can easily be merged with the 20-metre multispectral data; the multispectral data serve to retain good spectral resolution, while the panchromatic data provide the improved spatial resolution.

Data from different sensors may also be merged, bringing in the concept of multisensor data fusion. An excellent example of this technique is the combination of multispectral optical data with radar imagery. These two diverse spectral representations of the surface can provide complementary information: the optical data provide detailed spectral information useful for discriminating between surface cover types, while the radar imagery highlights the structural detail in the image.

Applications of multisensor data integration generally require that the data be geometrically registered, either to each other or to a common geographic coordinate system or map base. This also allows other ancillary (supplementary) data sources to be integrated with the remote sensing data. For example, elevation data in Digital Elevation or Digital Terrain Models (DEMs/DTMs) may be combined with remote sensing data for a variety of purposes. DEMs/DTMs may be useful in image classification, as effects due to terrain and slope variability can be corrected, potentially increasing the accuracy of the resultant classification. DEMs/DTMs are also useful for generating three-dimensional perspective views by draping remote sensing imagery over the elevation data, enhancing visualization of the area imaged.

Combining data of different types and from different sources, such as we have described above, is the pinnacle of data integration and analysis. In a digital environment where all the data sources are geometrically registered to a common geographic base, the potential for information extraction is extremely wide. This is the concept behind analysis within a GIS database. Any data source which can be referenced spatially can be used in this type of environment. A DEM/DTM is just one example of this kind of data. Other examples could include digital maps of soil type, land cover classes, forest species, road networks, and many others, depending on the application. The results of a classification of a remote sensing data set, in map format, could also be used in a GIS as another data source to update existing map data. In essence, by analyzing diverse data sets together, it is possible to extract better and more accurate information in a synergistic manner than by using a single data source alone.

