
National Imagery Interpretability Rating Scales (NIIRS):

Overview and Methodology


John M. Irvine
ERIM International, 1101 Wilson Blvd., Suite 1100, Arlington, VA, 22209-2248
Abstract
For over 20 years, the National Imagery Interpretability Rating Scale (NIIRS) has served as a standard to
quantify the interpretability or usefulness of imagery. The need for a NIIRS arose from the inability of simple
physical image quality measures, such as resolution, to adequately predict image interpretability. The NIIRS defines
the levels of image interpretability by the types of tasks an analyst can perform with imagery of a given rating level.
The NIIRS provides a simple, yet powerful, tool for assessing and communicating image quality and sensor system
requirements. While the scale itself is simple, the process of developing the scale is both complex and resource
intensive. Rigorous methods are needed to:

• Develop appropriate image interpretation tasks,
• Relate these tasks to the various levels of image quality, and
• Validate that the scale is usable in practice and has the desirable properties of a rating scale.
This paper presents three different NIIRS corresponding to three types of imagery: Visible, IR, and Radar. The paper
also discusses the methodology used to develop and validate these rating scales.

Key Words:

Image quality, interpretability, NIIRS, scale

Introduction
The National Imagery Interpretability Rating Scale (NIIRS) is a task-based scale for rating imagery acquired
from imaging systems. The NIIRS originated in the Intelligence Community and is the standard used by imagery
analysts, collection managers, imagery scientists, and sensor designers (Leachtenauer [1996], Mayer et al. [1995]).
The initial NIIRS was developed by a government/contractor team in the early 1970s. The team operated under the
auspices of the Imagery Resolution Assessment and Reporting Standards (IRARS) Committee of the US Government.
The need for NIIRS arose from the inability of simple physical image quality measures, such as scale or
resolution, to adequately predict image interpretability. More complex measures, such as modulation transfer function
(MTF) based metrics, were not successful in communicating information to imagery analysts (see Leachtenauer [1996]).
The NIIRS has been revised, updated, and expanded over time. Today, separate scales exist for different types of
imagery as shown in Table 1 (Erdman et al. [1994], Leachtenauer [1996]). The imagery analysis tasks that comprise
the NIIRS have focused mainly on military equipment, although recent studies have focused on scales composed of civil
or environmental image exploitation tasks (Greer and Caylor [1992], Hothem et al. [1996]). A full set of scales appears
at the end of this paper.
The basis for the NIIRS is the concept that imagery analysts should be able to perform more demanding
interpretation tasks with higher quality imagery. The NIIRS consists of 10 graduated levels (0 to 9), with several
interpretation tasks or criteria forming each level. These criteria indicate the level of information that can be
extracted from an image of a given interpretability level. With a NIIRS 2 panchromatic image, for example, analysts
should be able to detect large hangars at an airfield, while on NIIRS 6 imagery they should be able to distinguish
between models of small/medium helicopters.

SPIE Vol. 3128 • 0277-786X/97/$10.00

Downloaded from SPIE Digital Library on 14 Nov 2011 to 128.196.206.199. Terms of Use: http://spiedl.org/terms


Table 1. Summary of NIIRS

Scale           Image Type                               Comment
Visible NIIRS   Visible panchromatic
Civil NIIRS     Visible panchromatic                     Equivalent to Visible NIIRS, but focused on
                                                         non-military tasks
Radar NIIRS     Synthetic Aperture Radar
IR NIIRS        Thermal IR
MS IIRS         Multispectral imagery spanning the       Experimental effort to apply NIIRS
                visible, near IR, and shortwave IR       methodology to MSI

The NIIRS provides a common framework for discussing the interpretability, or information potential, of
imagery. NIIRS, therefore, serves as a standardized indicator of image interpretability for:
• Communicating the relative usefulness of the imagery,
• Documenting requirements for imagery,
• Managing the tasking and collection of imagery,
• Assisting in the design and assessment of future imaging systems, and
• Measuring the performance of sensor systems and imagery exploitation devices.

NIIRS ratings are made by qualified imagery analysts using the NIIRS criteria for the corresponding image
type. Although criteria are defined only at the integer level, most experienced analysts can provide ratings at the
decimal level. NIIRS ratings are made by examining the image and determining which of the NIIRS criteria
could be accomplished on the image if the objects or features defined by the criteria were present. The features
referenced by the criteria need not be present. The analyst determines the highest level on the scale at which the
criteria and all lower rated criteria could be satisfied.
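The rating rule described above, report the highest level whose criteria and all lower-level criteria could be satisfied, can be sketched as follows. This is an illustrative aid, not an official rating tool; in practice the set of satisfiable levels comes from an analyst's judgments about the image.

```python
# Illustrative sketch of the NIIRS rating rule: the rating is the highest
# level L such that the criteria at every level 1..L could be accomplished.
def assign_niirs(satisfiable_levels: set[int], max_level: int = 9) -> int:
    rating = 0
    for level in range(1, max_level + 1):
        if level in satisfiable_levels:
            rating = level
        else:
            break  # a gap at this level caps the rating below it
    return rating

# Example: the analyst judges that criteria through level 6 could be met,
# but not level 7 (levels are hypothetical judgments, not real image data).
print(assign_niirs({1, 2, 3, 4, 5, 6}))  # -> 6
```

Note that a level satisfied above a gap (e.g., level 4 satisfiable but level 3 not) does not raise the rating, matching the "all lower rated criteria" condition.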

Scale Development
The methodology for developing the NIIRS has been applied with multiple types of imagery and offers a
robust approach to developing a scale. The general approach is to use image exploitation tasks to indicate the level of
interpretability for imagery. If more challenging tasks can be performed with a given image, then the image is deemed
to be of higher interpretability. A set of standard image exploitation tasks or "criteria" define the levels of the scale.
The purpose of the scale development methodology is to select "good" criteria to form the scale and to associate these
criteria with the appropriate levels of image interpretability. The process involves five steps (Figure 1):
• Image Scaling Evaluation. Imagery with varying scene content and quality is scaled with respect to image
quality on a Subjective Quality Scale (SQS).
• Development of Candidate Criteria. Candidate criteria are based on image exploitation tasks that are relevant
to the analysts working with this type of imagery.
• Criteria Scaling Evaluation. Exploitation criteria are rated using marker images along the SQS. Criteria
are rated in terms of the image interpretability required to satisfy the exploitation task.
• Construction of the Scale. The actual scale is constructed using the data from the image and criteria
scaling evaluations.
• Scale Validation Evaluation. The scale constructed from the criteria is used to rate imagery to assess the
properties of the final scale.


Image Scaling
The objective of the image scaling evaluation is to obtain ratings of image interpretability for a set of images.
The data collected in this evaluation serve several functions. First, they indicate that image interpretability is
essentially a unidimensional measurement, implying that analysts can rate or sort imagery consistently according to
interpretability. Second, "marker images" for the criteria scaling experiment are identified from the image scaling
data. Finally, the image scaling data provide a numerical rating of interpretability for each image so that subsequent
statistical analyses are possible.

Figure 1. Overview of the NIIRS Development Process


Image scaling procedures are conducted separately for images with different classes of scene content.
Traditionally, these categories were defined by the military orders-of-battle representing the equipment observed in
the image. More recent studies have based this categorization on various classes of natural and cultural features.

A Subjective Quality Scale (SQS), with nominal endpoints at 0 and 100, is physically represented on a light
table with a tape marked in increments of five SQS units. An image is chosen to represent each end of the scale. Image
analysts then sort a set of hardcopy images along this scale with respect to the subjective interpretability of each
image. Each image analyst provides an SQS rating for each image (Figure 2). Ratings above 100 or below 0 are
permitted if the image analyst feels that an image is either better than the 100-point marker image or worse than the
0-point image.

Figure 2. Image Scaling Evaluation (an image to be rated is placed along the SQS between the "Worst" [0] and "Best" [100] marker images)


Using these SQS ratings, each image is evaluated with respect to its subjective image interpretability and to
the consistency exhibited across all image analysts rating the image. Both statistics, the mean SQS rating and the
variability of the ratings of that image across image analysts, are important. Low variability across image
analysts indicates that the interpretability of that image is perceived consistently.
An important purpose of the image scaling procedure is to provide marker images for the criteria scaling
evaluations. Five marker images, representing the 0, 25, 50, 75, and 100 points on the SQS, are needed. Zero and 100
point images are selected as part of the image scaling procedure. The image analysts' ratings either corroborate the
selection of these images or indicate that other images need to be chosen. In selecting the marker images, it is important
not only that the images be representative of their respective points on the SQS, but also that they be consistently

rated across analysts, i.e., have low variability.
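The marker-selection logic can be sketched numerically: for each SQS anchor point, choose the image whose mean rating lies closest to the anchor, penalizing high variability across analysts. All ratings below are invented for illustration, and the additive scoring rule is an assumption of ours, not the published selection procedure.

```python
# Sketch of marker-image selection: closeness to the anchor point plus a
# penalty for rating variability across analysts (hypothetical data).
from statistics import mean, stdev

ratings = {  # image id -> SQS ratings from several analysts (invented)
    "img_a": [2, 5, 3, 4],
    "img_b": [24, 27, 25, 22],
    "img_c": [48, 55, 51, 46],
    "img_d": [73, 77, 74, 76],
    "img_e": [97, 99, 96, 100],
}

def pick_marker(anchor: float) -> str:
    # Score = distance of the mean rating from the anchor + spread penalty.
    return min(ratings,
               key=lambda i: abs(mean(ratings[i]) - anchor) + stdev(ratings[i]))

markers = {a: pick_marker(a) for a in (0, 25, 50, 75, 100)}
print(markers)
```

With these invented ratings, the five images fall out as the 0, 25, 50, 75, and 100 point markers; with real data, a high-variability image near an anchor would lose to a slightly more distant but more consistently rated one.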


Development of Candidate Criteria
Criteria are statements regarding the exploitation task to be achieved. Typically, each criterion consists of
three parts: a recognition level, an object, and a qualifier (Figure 3). The recognition level indicates the type of task,
i.e., detect, identify, or distinguish among the object(s). The object is either a piece of equipment, a structure, or a
natural feature. The qualifier provides examples or other information to clarify the nature of the task.

Example criterion: "Detect medium aircraft (e.g., F-15)", in which "Detect" is the recognition level, "medium aircraft" is the object, and "(e.g., F-15)" is the qualifier.

Figure 3. Elements of a Criterion

These criteria have to satisfy certain guidelines: The criteria must reference features or equipment that are
generally current, recognizable, and familiar to most image analysts. The equipment or features should be common
enough that most image analysts can readily visualize them when they are not present in the imagery. The wording
should be clear, concise, and interpreted the same way by all image analysts. The criteria should be based on
observables rather than requiring image analysts to make deductions from context.
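The three-part anatomy of a criterion (Figure 3) can be captured in a small data structure. The class and field names below are our own illustration, not part of any NIIRS standard.

```python
# Hypothetical representation of the three-part criterion anatomy:
# recognition level, object, and an optional qualifier.
from dataclasses import dataclass

@dataclass
class Criterion:
    recognition_level: str  # e.g., "Detect", "Identify", "Distinguish"
    obj: str                # a piece of equipment, a structure, or a natural feature
    qualifier: str = ""     # examples or other clarifying information

    def text(self) -> str:
        # Assemble the criterion statement in the standard word order.
        q = f" ({self.qualifier})" if self.qualifier else ""
        return f"{self.recognition_level} {self.obj}{q}"

c = Criterion("Detect", "medium aircraft", "e.g., F-15")
print(c.text())  # -> Detect medium aircraft (e.g., F-15)
```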

Criteria Scaling
The primary objective of the criteria scaling is to link the criteria to levels of image interpretability, as
represented by the SQS. Part of the criteria scaling procedure has already been described in the Image Scaling section.
The same SQS marker tape is used, but this time five images representing the 0, 25, 50, 75, and 100 points on the SQS are
appropriately placed on the light table. Rather than scaling images along the SQS, the image analysts now scale
criteria. Each criterion is written on a separate card. Using the marker images as points of reference, the image analysts
place each criterion along the SQS by the level of image interpretability required to just accomplish the exploitation
task (Figure 4).
Construction of the Scale
After the criteria have been rated, all the information necessary to develop a scale is available. The NIIRS is
constructed by mapping the SQS ratings for the criteria to the 10-point NIIRS. The only remaining task is to establish a
transformation from the theoretically infinite SQS to the actual NIIRS levels. The choice of this transformation is, in
principle, arbitrary. Criteria are selected for the final scale on the basis of their image analyst ratings. A criterion is
more likely to be chosen if its mean rating is close to an integer NIIRS level and if it demonstrates low variability (the
image analysts rated it consistently). The number of levels in the NIIRS is determined by the variability of the
criteria ratings. The transformation from SQS to NIIRS, therefore, defines the maximum number of NIIRS levels to be
10 (0 to 9), while still guaranteeing adequate separability of the levels.
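As a rough sketch of this construction step, one might map mean criterion ratings from the 0-100 SQS onto the 0-9 NIIRS with a linear transform and keep only criteria that land near an integer level with low rating variability. The transform and both thresholds below are illustrative assumptions, not the transformation actually chosen by the scale developers.

```python
# Sketch of scale construction: SQS -> NIIRS mapping plus criterion screening.
from statistics import mean, stdev

def sqs_to_niirs(sqs: float) -> float:
    # Assumed linear transform from the 0..100 SQS to the 0..9 NIIRS range.
    return sqs / 100.0 * 9.0

def is_good_criterion(sqs_ratings: list[float],
                      max_offset: float = 0.25, max_sd: float = 8.0) -> bool:
    # A criterion is kept if its mean rating falls near an integer NIIRS
    # level and the analysts rated it consistently (thresholds are invented).
    level = sqs_to_niirs(mean(sqs_ratings))
    near_integer = abs(level - round(level)) <= max_offset
    consistent = stdev(sqs_ratings) <= max_sd
    return near_integer and consistent

print(is_good_criterion([64, 68, 66, 70]))  # mean 67 -> NIIRS 6.03: kept
```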


Figure 4. Criteria Scaling Evaluation (a criterion card is placed along the SQS between the "Worst" [0] and "Best" [100] marker images)

Scale Validation
Once the NIIRS has been developed, the final step is to validate the scale. Certain properties are necessary
for effective use of an image interpretability scale. The validation evaluation provides the information necessary to
verify that the scale possesses the desirable properties. The properties are:

• Linearity: The increments in the scale need to correspond to equal changes in image interpretability.
• Separability: Images that differ in interpretability by a full NIIRS unit should be clearly distinguishable
from each other.
• System Independence: Ratings need to be independent of the sensor platform, so that the NIIRS can be used
to rate imagery of a given type from multiple sensor systems.
• Usability: The scale should be easy for image analysts to use. Moreover, image analysts should be able to
apply the scale consistently.
• Criteria Set Equivalence: A rating for an image of a given interpretability should be equivalent regardless
of the content of the scene.
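Two of these properties lend themselves to simple numerical checks: linearity (mean NIIRS ratings should track a reference interpretability measure linearly) and usability/consistency (low spread across analysts). The sketch below uses invented data and elementary statistics purely to illustrate the kind of analysis involved; it is not the validation protocol itself.

```python
# Illustrative validation checks on hypothetical rating data.
from statistics import mean, stdev

# Each row: a reference interpretability score for one image, and the
# NIIRS ratings assigned to it by several analysts (all values invented).
data = [
    (20.0, [1.8, 2.1, 2.0]),
    (45.0, [4.0, 4.3, 4.1]),
    (70.0, [6.2, 6.0, 6.4]),
    (95.0, [8.5, 8.8, 8.6]),
]

ref = [r for r, _ in data]
avg = [mean(v) for _, v in data]

def pearson(x, y):
    # Pearson correlation, computed directly from its definition.
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

linearity = pearson(ref, avg)                  # near 1.0 suggests linearity
consistency = max(stdev(v) for _, v in data)   # small spread suggests usability
print(round(linearity, 3), round(consistency, 3))
```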

This validation step ensures that the scale has these desired properties. It is the only step in the process that
entails actual use of the NIIRS per se. For this step, image analysts use the new NIIRS to rate a set of imagery. Their
ratings, in turn, are analyzed to determine whether the scale does indeed embody the desired properties. After rating
the imagery, image analysts also respond to a brief questionnaire about the NIIRS. Responses to these questions
indicate analysts' subjective reaction to using the NIIRS, while the actual NIIRS ratings provide a more objective
assessment of the scale. Both sources of information are analyzed to assess the strengths and weaknesses of the final
scale.

Conclusions
The methods presented here, with some minor variations, were used to develop the current set of NIIRS. It has
been shown for the image types covered to date that interpretability can be scaled on a single continuum, even though
there may be multiple underlying physical image quality dimensions. In repeated evaluations, image analysts have
successfully scaled on a single continuum imagery varying in scale, resolution, noise, and contrast, as well as scene. The
variability in scaling multispectral imagery is comparable to that observed for panchromatic imagery.

Although the NIIRS development approach has thus far been applied only to hardcopy imagery in specific
portions of the spectrum, there is no reason why the approach should be limited to such applications. NIIRS ratings can


be made using softcopy imagery, although the characteristics of the image display will affect the ratings (see
Leachtenauer and Salvaggio [1996]).
The growth in the commercial remote sensing arena over the next few years will offer an opportunity to extend
the use of NIIRS beyond its original customer base. The various NIIRS provide valuable standards for assessing and
quantifying the quality of the imagery acquired by the emerging commercial satellite systems. When used with an
image quality equation (IQE), which predicts the NIIRS as a function of physical collection parameters, these scales
yield a method for tasking commercial systems effectively (see Leachtenauer et al. [1997]). Using the NIIRS with an
IQE to manage tasking and collection ensures that user requirements are satisfied, while managing collection resources
efficiently.
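For concreteness, an IQE of the kind referenced here can be sketched after the General Image Quality Equation (GIQE) of Leachtenauer et al. [1997]. The coefficients below follow one published form of the GIQE, with GSD the ground sample distance in inches, RER the relative edge response, H the edge overshoot, and G the noise gain; treat them as indicative and consult the reference for the exact formulation.

```python
# Sketch of a GIQE-style image quality equation: predicted NIIRS rises with
# edge response and SNR and falls as ground sample distance grows.
import math

def giqe_niirs(gsd_inches: float, rer: float, h_overshoot: float,
               noise_gain: float, snr: float) -> float:
    # Coefficients follow one published form of the GIQE (indicative only).
    return (11.81
            + 3.32 * math.log10(rer / gsd_inches)
            - 1.48 * h_overshoot
            - noise_gain / snr)

# Hypothetical system: 12-inch GSD, RER 0.9, overshoot 1.0, gain 10, SNR 50.
print(round(giqe_niirs(12.0, 0.9, 1.0, 10.0, 50.0), 2))
```

Used this way, a collection manager can invert the equation: given a required NIIRS level for a task, solve for the GSD (and hence the sensor geometry) that a commercial system must achieve.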

References
1. Colburn, L.P., J.M. Irvine, J.C. Leachtenauer, W.A. Malila, and N.L. Salvaggio (1996), "The General Image Quality
Equation (GIQE)", Proceedings of the American Society of Photogrammetry and Remote Sensing Annual Meetings,
April 1996.
2. Erdman, C., K. Riehl, L. Mayer, J. Leachtenauer, E. Mohr, J. Irvine, J. Odenweller, R. Simmons, and D. Hothem
(1994), "Quantifying Multispectral Image Interpretability", Proceedings of the International Symposium on Spectral
Sensing Research '94.
3. Greer, J.D. and J. Caylor (1992), "Development of an Environmental Image Interpretability Rating Scale",
SPIE Vol. 1763, Airborne Reconnaissance XVI, 1992, 151-157.
4. Hothem, D., J.M. Irvine, E. Mohr, and K.B. Buckley (1996), "Quantifying Image Interpretability For Civil Users",
Proceedings of the American Society of Photogrammetry and Remote Sensing Annual Meetings, April 1996.
5. Irvine, J.M. and J.C. Leachtenauer (1996), "A Methodology For Developing Image Interpretability Rating Scales",
Proceedings of the American Society of Photogrammetry and Remote Sensing Annual Meetings, April 1996.
6. Leachtenauer, J. (1996), "National Imagery Interpretability Rating Scales: Overview and Product Description",
Proceedings of the American Society of Photogrammetry and Remote Sensing Annual Meetings, April 1996.
7. Leachtenauer, J., Malila, W., Irvine, J., Colburn, L., and Salvaggio, N. (1997), "General Image Quality Equation",
Applied Optics (forthcoming).
8. Leachtenauer, J.C. and N.L. Salvaggio (1996), "NIIRS Prediction: Use of the Briggs Target", Proceedings of the
American Society of Photogrammetry and Remote Sensing Annual Meetings, April 1996.
9. Mayer, L.M., C.D. Erdman, and K. Riehl (1995), "Imagery Interpretability Rating Scales", presented to the Society
for Information Display, May 1995.
10. Mohr, E.J., D. Hothem, J.M. Irvine, and C. Erdman (1996), "The Multispectral Imagery Interpretation Scale
(MS IIRS)", Proceedings of the American Society of Photogrammetry and Remote Sensing Annual Meetings, April 1996.


Visible National Imagery Interpretability Rating Scale - March 1994

Rating Level 1
Detect a medium-sized port facility and/or distinguish between taxiways and runways at a large airfield.

Rating Level 2
Detect large hangars at airfields.
Detect large static radars (e.g., AN/FPS-85, COBRA DANE, PECHORA, HENHOUSE).
Detect military training areas.
Identify an SA-5 site based on road pattern and overall site configuration.
Detect large buildings at a naval facility (e.g., warehouses, construction hall).
Detect large buildings (e.g., hospitals, factories).

Rating Level 3
Identify the wing configuration (e.g., straight, swept, delta) of all large aircraft (e.g., 707, CONCORD, BEAR, BLACKJACK).
Identify radar and guidance areas at a SAM site by the configuration, mounds, and presence of concrete aprons.
Detect a helipad by the configuration and markings.
Detect the presence/absence of support vehicles at a mobile missile base.
Identify a large surface ship in port by type (e.g., cruiser, auxiliary ship, noncombatant/merchant).
Detect trains or strings of standard rolling stock on railroad tracks (not individual cars).

Rating Level 4
Identify all large fighters by type (e.g., FENCER, FOXBAT, F-15, F-14).
Detect the presence of large individual radar antennas (e.g., TALL KING).
Identify, by general type, tracked vehicles, field artillery, large river crossing equipment, wheeled vehicles when in groups.
Detect an open missile silo door.
Determine the shape of the bow (pointed or blunt/rounded) on a medium-sized submarine (e.g., ROMEO, HAN, Type 209, CHARLIE II, ECHO II, VICTOR II/III).
Identify individual tracks, rail pairs, control towers, switching points in rail yards.

Rating Level 5
Distinguish between a MIDAS and a CANDID by the presence of refueling equipment (e.g., pedestal and wing pod).
Identify radar as vehicle-mounted or trailer-mounted.
Identify, by type, deployed tactical SSM systems (e.g., FROG, SS-21, SCUD).
Distinguish between SS-25 mobile missile TEL and Missile Support Vans (MSVs) in a known support base, when not covered by camouflage.
Identify TOP STEER or TOP SAIL air surveillance radar on KIROV-, SOVREMENNY-, KIEV-, SLAVA-, MOSKVA-, KARA-, or KRESTA II-class vessels.
Identify individual rail cars by type (e.g., gondola, flat, box) and/or locomotives by type (e.g., steam, diesel).

Rating Level 6
Distinguish between models of small/medium helicopters (e.g., HELIX A from HELIX B from HELIX C, HIND D from HIND E, HAZE A from HAZE B from HAZE C).
Identify the shape of antennas on EW/GCI/ACQ radars as parabolic, parabolic with clipped corners, or rectangular.
Identify the spare tire on a medium-sized truck.
Distinguish between SA-6, SA-11, and SA-17 missile airframes.
Identify individual launcher covers (8) of vertically launched SA-N-6 on SLAVA-class vessels.
Identify automobiles as sedans or station wagons.

Rating Level 7
Identify fitments and fairings on a fighter-sized aircraft (e.g., FULCRUM, FOXHOUND).
Identify ports, ladders, vents on electronics vans.
Detect the mount for antitank guided missiles (e.g., SAGGER on BMP-1).
Detect details of the silo door hinging mechanism on Type III-F, III-G, and III-H launch silos and Type III-X launch control silos.
Identify the individual tubes of the RBU on KIROV-, KARA-, KRIVAK-class vessels.
Identify individual rail ties.

Rating Level 8
Identify the rivet lines on bomber aircraft.
Detect horn-shaped and W-shaped antennas mounted atop BACKTRAP and BACKNET radars.
Identify a hand-held SAM (e.g., SA-7/14, REDEYE, STINGER).
Identify joints and welds on a TEL or TELAR.
Detect winch cables on deck-mounted cranes.
Identify windshield wipers on a vehicle.

Rating Level 9
Differentiate cross-slot from single slot heads on aircraft skin panel fasteners.
Identify small light-toned ceramic insulators that connect wires of an antenna canopy.
Identify vehicle registration numbers (VRN) on trucks.
Identify screws and bolts on missile components.
Identify braid of ropes (1 to 3 inches in diameter).
Detect individual spikes in railroad ties.

Civil National Imagery Interpretability Rating Scale - October 1995


Rating Level 0
Interpretability of the imagery is precluded by obscuration, degradation, or very poor resolution.

Rating Level 1
Distinguish between major land use classes (e.g., urban, agricultural, forest, water, barren).
Detect a medium-sized port facility.
Distinguish between runways and taxiways at a large airfield.
Identify large area drainage patterns by type (e.g., dendritic, trellis, radial).

Rating Level 2
Identify large (i.e., greater than 160 acre) center-pivot irrigated fields during the growing season.
Detect large buildings (e.g., hospitals, factories).
Identify road patterns, like clover leafs, on major highway systems.
Detect ice-breaker tracks.
Detect the wake from a large (e.g., greater than 300') ship.

Rating Level 3
Detect large area (i.e., larger than 160 acre) contour plowing.
Detect individual houses in residential neighborhoods.
Detect trains or strings of standard rolling stock on railroad tracks (not individual cars).
Identify inland waterways navigable by barges.
Distinguish between natural forest stands and orchards.

Rating Level 4
Identify farm buildings as barns, silos, or residences.
Count unoccupied railroad tracks along right-of-way or in a railroad yard.
Detect basketball court, tennis court, volleyball court in urban areas.
Identify individual tracks, rail pairs, control towers, switching points in rail yards.
Detect jeep trails through grassland.

Rating Level 5
Identify Christmas tree plantations.
Identify individual rail cars by type (e.g., gondola, flat, box) and locomotives by type (e.g., steam, diesel).
Detect open bay doors of vehicle storage buildings.
Identify tents (larger than 2 person) at established recreational camping areas.
Distinguish between stands of coniferous and deciduous trees during leaf-off condition.
Detect large animals (e.g., elephants, rhinoceros, giraffes) in grasslands.

Rating Level 6
Detect narcotics intercropping based on texture.
Distinguish between row (e.g., corn, soybean) crops and small grain (e.g., wheat, oats) crops.
Identify automobiles as sedans or station wagons.
Identify individual telephone/electric poles in residential neighborhoods.
Detect foot trails through barren areas.

Rating Level 7
Identify individual mature cotton plants in a known cotton field.
Identify individual railroad ties.
Detect individual steps on a stairway.
Detect stumps and rocks in forest clearings and meadows.

Rating Level 8
Count individual baby pigs.
Identify a USGS benchmark set in a paved surface.
Identify grill detailing and/or the license plate on a passenger/truck type vehicle.
Identify individual pine seedlings.
Identify individual water lilies on a pond.
Identify windshield wipers on a vehicle.

Rating Level 9
Identify individual grain heads on small grain (e.g., wheat, oats, barley).
Identify individual barbs on a barbed wire fence.
Detect individual spikes in railroad ties.
Identify individual bunches of pine needles.
Identify an ear tag on large game animals (e.g., deer, elk, moose).


Radar National Imagery Interpretability Rating Scale - August 1992


Rating Level 0
Interpretability of the imagery is precluded by obscuration, degradation, or very poor resolution.

Rating Level 1
Detect the presence of aircraft dispersal parking areas.
Detect a large cleared swath in a densely wooded area.
Detect, based on presence of piers and warehouses, a port facility.
Detect lines of transportation (either road or rail), but do not distinguish between.

Rating Level 2
Detect the presence of large (e.g., BLACKJACK, CAMBER, COCK, 707, 747) bombers or transports.
Identify large phased array radars (e.g., HEN HOUSE, DOG HOUSE) by type.
Detect a military installation by building pattern and site configuration.
Detect road pattern, fence, and hardstand configuration at SSM launch sites (missile silos, launch control silos) within a known ICBM complex.
Detect large non-combatant ships (e.g., freighters or tankers) at a known port facility.
Identify athletic stadiums.

Rating Level 3
Detect medium-sized aircraft (e.g., FENCER, FLANKER, CURL, COKE, F-15).
Identify an ORBITA site on the basis of a 12 meter dish antenna normally mounted on a circular building.
Detect vehicle revetments at a ground forces facility.
Detect vehicles/pieces of equipment at a SAM, SSM, or ABM fixed missile site.
Determine the location of the superstructure (e.g., fore, amidships, aft) on a medium-sized freighter.
Identify a medium-sized (approximately six track) railroad classification yard.

Rating Level 4
Distinguish between large rotary-wing and medium fixed-wing aircraft (e.g., HALO helicopter versus CRUSTY transport).
Detect recent cable scars between facilities or command posts.
Detect individual vehicles in a row at a known motor pool.
Distinguish between open and closed sliding roof areas on a single bay garage at a mobile missile base.
Identify square bow shape of ROPUCHA class (LST).
Detect all rail/road bridges.

Rating Level 5
Count all medium helicopters (e.g., HIND, HIP, HAZE, HOUND, PUMA, WASP).
Detect deployed TWIN EAR antenna.
Distinguish between river crossing equipment and medium/heavy armored vehicles by size and shape (e.g., MTU-20 versus T-62 MBT).
Detect missile support equipment at an SS-25 RTP (e.g., TEL, MSV).
Distinguish bow shape and length/width differences of SSNs.
Detect the break between railcars (count railcars).

Rating Level 6
Distinguish between variable and fixed-wing fighter aircraft (e.g., FENCER versus FLANKER).
Distinguish between the BAR LOCK and SIDE NET antennas at a BAR LOCK/SIDE NET acquisition radar site.
Distinguish between small support vehicles (e.g., UAZ-69, UAZ-469) and tanks (e.g., T-72, T-80).
Distinguish between the raised helicopter deck on a KRESTA II (CG) and the helicopter deck with main deck on a KRESTA I (CG).
Identify a vessel by class when singly deployed (e.g., YANKEE I, DELTA I, KRIVAK II FFG).
Detect cargo on a railroad flatcar or in a gondola.

Rating Level 7
Identify small fighter aircraft by type (e.g., FISHBED, FITTER, FLOGGER).
Distinguish between electronics van trailers (without tractor) and van trucks in garrison.
Distinguish, by size and configuration, between a turreted, tracked APC and a medium tank (e.g., BMP-1/2 versus T-64).
Detect a missile on the launcher in an SA-2 launch revetment.
Distinguish between bow mounted missile system on KRIVAK I/II and bow mounted gun turret on KRIVAK III.
Detect road/street lamps in an urban, residential area or military complex.

Rating Level 8
Distinguish the fuselage difference between a HIND and a HIP helicopter.
Distinguish between the FAN SONG E missile control radar and the FAN SONG F based on the number of parabolic dish antennas (three vs. one).
Identify the SA-6 transloader when other SA-6 equipment is present.
Distinguish limber hole shape and configuration differences between DELTA I and YANKEE I (SSBNs).
Identify the dome/vent pattern on rail tank cars.

Rating Level 9
Detect major modifications to large aircraft (e.g., fairings, pods, winglets).
Identify the shape of antennas on EW/GCI/ACQ radars as parabolic, parabolic with clipped corners, or rectangular.
Identify, based on presence or absence of turret, size of gun tube, and chassis configuration, wheeled or tracked APCs by type (e.g., BTR-80, BMP-1/2, MT-LB, M113).
Identify the forward fins on an SA-3 missile.
Identify individual hatch covers of vertically launched SA-N-6 surface-to-air system.
Identify trucks as cab-over-engine or engine-in-front.


Infrared National Imagery Interpretability Rating Scale - April 1996

Rating Level 1
Distinguish between runways and taxiways on the basis of size, configuration or pattern at a large airfield.
Detect a large (e.g., greater than 1 square kilometer) cleared area in dense forest.
Detect large ocean-going vessels (e.g., aircraft carrier, supertanker, KIROV) in open water.
Detect large areas (e.g., greater than 1 square kilometer) of marsh/swamp.

Rating Level 2
Detect large aircraft (e.g., C-141, 707, BEAR, CANDID, CLASSIC).
Detect individual large buildings (e.g., hospitals, factories) in an urban area.
Distinguish between densely wooded, sparsely wooded and open fields.
Identify an SS-25 base by the pattern of buildings and roads.
Distinguish between naval and commercial port facilities based on type and configuration of large functional areas.

Rating Level 3
Distinguish between large (e.g., C-141, 707, BEAR, A300 AIRBUS) and small aircraft (e.g., A-4, FISHBED, L-39).
Identify individual thermally active flues running between the boiler hall and smoke stacks at a thermal power plant.
Detect a large air warning radar site based on the presence of mounds, revetments and security fencing.
Detect a driver training track at a ground forces garrison.
Identify individual functional areas (e.g., launch sites, electronics area, support area, missile handling area) of an SA-5 launch complex.
Distinguish between large (e.g., greater than 200 meter) freighters and tankers.

Rating Level 4
Identify the wing configuration of small fighter aircraft (e.g., FROGFOOT, F-16, FISHBED).
Detect a small (e.g., 50 meter square) electrical transformer yard in an urban area.
Detect large (e.g., greater than 10 meter diameter) environmental domes at an electronics facility.
Detect individual thermally active vehicles in garrison.
Detect thermally active SS-25 MSVs in garrison.
Identify individual closed cargo hold hatches on large merchant ships.

Rating Level 5
Distinguish between single-tail (e.g., FLOGGER, F-16, TORNADO) and twin-tailed (e.g., F-15, FLANKER, FOXBAT) fighters.
Identify outdoor tennis courts.
Identify the metal lattice structure of large (e.g., approximately 75 meter) radio relay towers.
Detect armored vehicles in a revetment.
Detect a deployed TET (transportable electronics tower) at an SA-10 site.
Identify the stack shape (e.g., square, round, oval) on large (e.g., greater than 200 meter) merchant ships.

Rating Level 6
Detect wing-mounted stores (i.e., ASM, bombs) protruding from the wings of large bombers (e.g., B-52, BEAR, Badger).
Identify individual thermally active engine vents atop diesel locomotives.
Distinguish between a FIX FOUR and FIX SIX site based on antenna pattern and spacing.
Distinguish between thermally active tanks and APCs.
Distinguish between a 2-rail and 4-rail SA-3 launcher.
Identify missile tube hatches on submarines.

Rating Level 7
Distinguish between ground attack and interceptor versions of the MIG-23 FLOGGER based on the shape of the nose.
Identify automobiles as sedans or station wagons.
Identify antenna dishes (less than 3 meters in diameter) on a radio relay tower.
Identify the missile transfer crane on an SA-6 transloader.
Distinguish between an SA-2/CSA-1 and a SCUD-B missile transporter when missiles are not loaded.
Detect mooring cleats or bollards on piers.

Rating Level 8
Identify the RAM airscoop on the dorsal spine of FISHBED J/K/L.
Identify limbs (e.g., arms, legs) on an individual.
Identify individual horizontal and vertical ribs on a radar antenna.
Detect closed hatches on a tank turret.
Distinguish between fuel and oxidizer Multi-System Propellant Transporters based on twin or single fitments on the front of the semi-trailer.
Identify individual posts and rails on deck edge life rails.

Rating Level 9
Identify access panels on fighter aircraft.
Identify cargo (e.g., shovels, rakes, ladders) in an open-bed, light-duty truck.
Distinguish between BIRDS EYE and BELL LACE antennas based on the presence or absence of small dipole elements.
Identify turret hatch hinges on armored vehicles.
Identify individual command guidance strip antennas on an SA-2/CSA-1 missile.
Identify individual rungs on bulkhead mounted ladders.

Multispectral Imagery Interpretability Rating Scale - February 1995

Rating Level 1
Distinguish between urban and rural areas.
Identify a large wetland (greater than 100 acres).
Detect meander flood plains (characterized by features such as channel scars, ox bow lakes, meander scrolls).
Delineate coastal shoreline.
Detect major highway and rail bridges over water (e.g., Golden Gate, Chesapeake Bay).
Delineate extent of snow or ice cover.

Rating Level 2
Detect multi-lane highways.
Detect strip mining.
Determine water current direction as indicated by color differences (e.g., tributary entering larger water feature, chlorophyll or sediment patterns).
Detect timber clear-cutting.
Delineate extent of cultivated land.
Identify riverine flood plains.

Rating Level 3
Detect vegetation/soil moisture differences along a linear feature (suggesting the presence of a fence line).
Identify major street patterns in urban areas.
Identify golf courses.
Identify shoreline indications of predominant water currents.
Distinguish among residential, commercial, and industrial areas within an urban area.
Detect reservoir depletion.

Rating Level 4
Detect recently constructed weapon positions (e.g., tank, artillery, self-propelled gun) based on the presence of revetments, berms, and ground scarring in vegetated areas.
Distinguish between two-lane improved and unimproved roads.
Detect indications of natural surface airstrip maintenance or improvements (e.g., runway extension, grading, resurfacing, bush removal, vegetation cutting).
Detect landslide or rockslide large enough to obstruct a single-lane road.
Detect small boats (15-20 feet in length) in open water.
Identify areas suitable for use as light fixed-wing aircraft (e.g., Cessna, Piper Cub, Beechcraft) landing strips.

Rating Level 5
Detect an automobile in a parking lot.
Identify beach terrain suitable for amphibious landing operation.
Detect ditch irrigation of beet fields.
Detect disruptive or deceptive use of paints or coatings on
buildings/structures at a ground forces installation.
Detect raw construction materials in ground forces deployment
areas (e.g., timber, sand, gravel).

Rating Level 6
Detect summer woodland camouflage netting large enough to cover
a tank against a scattered tree background.
Detect a foot trail through tall grass.
Detect navigational channel markers and mooring buoys in water.
Detect livestock in open but fenced areas.
Detect recently installed minefields in ground forces deployment
area based on a regular pattern of disturbed earth or vegetation.
Count individual dwellings in subsistence housing areas
(e.g., squatter settlements, refugee camps).

Rating Level 7
Distinguish between tanks and three-dimensional tank decoys.
Identify individual 55-gallon drums.
Detect small marine mammals (e.g., harbor seals)
on sand/gravel beaches.
Detect underwater pier footings.
Detect foxholes by ring of spoil outlining hole.
Distinguish individual rows of truck crops.
