systems (ES) are such an example, where the compiled knowledge and experience of a human expert are used in lieu of
having the system develop its own experience, duplicating
that of the expert. This is not to say that learning cannot be
part of an ES.
The methods and techniques of AI are well suited for applications not amenable to standard, procedural problem-solving techniques. Examples of such applications are those where the available information is uncertain, sometimes erroneous, and often inconsistent. In such a case, using quantitative algorithmic calculations may not lead to the solution, whereas use of plausible and logical reasoning may. The approach taken by AI generally leads to a nonoptimal, but acceptable, solution reached by using rules of thumb and logical inferencing mechanisms.
For such an approach, the system is represented by a factual description in the form of chunks of meaningful data
(knowledge) related to the system state and by the relationships among those data. An external, domain-independent inferencing mechanism makes it possible to draw new conclusions from existing knowledge resulting in changes and
updates of the knowledge base. The AI discipline concerned
with these issues is called knowledge representation. There
are various paradigms of how to represent human knowledge:
predicate calculus, production rules, frames and scripts, and
semantic networks. The selected representation scheme must
express all necessary information, support efficient execution
of the resulting computer code, and provide a natural scheme
for the user. AI is concerned with qualitative, rather than
quantitative, problem solving. Thus the selected knowledge
representation and the used tools must be able to (a) handle
qualitative knowledge, (b) allow new knowledge to be created
from a set of facts, (c) allow for representation applicable to
not only a specific situation but also to general principles, and
(d) capture complex semantic meaning and allow for metalevel reasoning (reasoning about the knowledge itself, not just
the domain).
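The production-rule paradigm mentioned above can be pictured with a minimal forward-chaining sketch. The rule and fact names below are invented for illustration; this is not any particular ES shell:

```cpp
#include <set>
#include <string>
#include <vector>

// Hypothetical sketch of forward chaining over production rules.
// A rule fires when all of its conditions are present in the fact base,
// adding its conclusion; inference repeats until a fixed point is reached.
struct Rule {
    std::vector<std::string> conditions;
    std::string conclusion;
};

std::set<std::string> forward_chain(std::set<std::string> facts,
                                    const std::vector<Rule>& rules) {
    bool changed = true;
    while (changed) {
        changed = false;
        for (const Rule& r : rules) {
            bool all = true;
            for (const std::string& c : r.conditions)
                if (!facts.count(c)) { all = false; break; }
            if (all && facts.insert(r.conclusion).second)
                changed = true;   // a new conclusion: re-scan the rules
        }
    }
    return facts;
}
```

New conclusions become facts that may in turn trigger further rules, which is the "new knowledge created from a set of facts" requirement in the list above.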
A distributed-intelligence system (DIS) is the concept of a
system operated by a machine and managed by a human. The
human operator is involved in planning, making decisions,
and performing high-level functions, whereas the machine
portion of the system executes most of the system's regular
operational functions, collects and stores data, and handles
routine decision situations with a limited number of options
(4). Such an approach requires further research in the area of man-machine interfaces and the physiopsychological aspects related to stress and anxiety factors.
Decision support systems (DSS) are computer systems that
advise human operators or are automated systems that make
decisions within a well-defined area. The systems are used
where similar decision processes are repeated, but where the
information to decide upon may differ. Some DSS are known
as expert systems (ES): they imitate human expert behavior.
Decision procedures of an expert are analyzed and transformed into rules and subsequently implemented in the system. An ES is a computer program that provides solutions to problems normally requiring a human expert with appropriate domain knowledge and experience. Experts are employed to solve problems requiring planning or decision making. They frequently use rules of thumb: heuristics based on experience, analogies, and intuitive rationale to explain the behavior associated with their area of expertise. Development
is to manage the allocation of NAS resources and limit airborne delays. These objectives are accomplished by implementing TFM initiatives: ground delay program (GDP),
ground stop program (GSP), miles/minutes-in-trail restriction
(MITR), and severe weather avoidance program (SWAP). The
center is staffed by experienced specialists with an extensive
knowledge of ATC procedures and familiar with the impact
of weather conditions and possible TFM initiatives on NAS
performance (10).
There is a wide variety of computer tools based on available aircraft data to support the specialist's operations. Data can be displayed in both character and graphic formats, showing, for instance, all aircraft scheduled in a specific sector within a specific timeframe, or all aircraft scheduled to arrive at a specific airport. The hourly arrival demand for an individual airport may be displayed and printed. Weather conditions are displayed graphically, including areas of limited ceiling and visibility, precipitation, expected thunderstorms, and jet streams. There is easy access to alphanumeric local ground weather reports, altitude profiles, and briefings from radar observation.
The traffic management specialist responds to the weather situation and to requests from the major airports in the cluster. In cases when an airport acceptance rate is anticipated to decline (deteriorating weather conditions, airport configuration change), the flow controller may consider implementation of the GDP for that airport. The program can be implemented for any combination of the en-route centers, from adjacent centers to the entire system. The scope of the program, in terms of duration and affected areas, is based on the current situation and determined as the result of the controller's knowledge and experience. The GDP software recomputes departure times and estimates predicted delays. When the computation predicts acceptable delays in the system, the specialist sends the new schedule to the centers and the airlines for implementation. In the case when an airport is unable to operate or experiences severely reduced capacity with already long delays and surplus traffic, the specialist may order the GSP for the flights destined to the affected airport. Both GDP and GSP affect only the aircraft scheduled for later departure. Any action is coordinated with all interested parties before implementation. The shift supervisor has the final authority on whether or not the proposed plans are implemented.
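The recomputation of departure times under a GDP can be caricatured with a much-simplified sketch. The slot-allocation policy shown here, first-scheduled first-served into slots spaced by the reduced acceptance rate, is an assumption for illustration, not the FAA's actual algorithm:

```cpp
#include <algorithm>
#include <vector>

// Much-simplified ground delay program sketch: arrivals are rationed into
// slots spaced by the reduced acceptance rate, in original schedule order.
// Returns the ground delay (in minutes) assigned to each flight.
std::vector<double> gdp_delays(std::vector<double> scheduled_arrivals_min,
                               double arrivals_per_hour) {
    std::sort(scheduled_arrivals_min.begin(), scheduled_arrivals_min.end());
    double spacing = 60.0 / arrivals_per_hour;    // minutes between slots
    std::vector<double> delays;
    double next_slot = 0.0;
    for (double sched : scheduled_arrivals_min) {
        double slot = std::max(sched, next_slot); // earliest feasible slot
        delays.push_back(slot - sched);           // delay absorbed on the ground
        next_slot = slot + spacing;
    }
    return delays;
}
```

If the predicted delays are acceptable, the revised times would be distributed; otherwise the scope or duration of the program would be revised, which in practice is where the specialist's judgment enters.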
A regional traffic management unit may request MITR in cases of reduced acceptance rate of the arrival sector caused by weather, traffic volume, or staffing problems. The situation is analyzed and coordinated by the area cluster specialist, and the outcome is conveyed to the affected centers. The role of flow control is limited to mediation between two adjacent centers.
Severe weather conditions en route may force a center to request more forceful measures, such as a significant rerouting of traffic. A separate cluster of flow personnel manages the implementation of SWAP rerouting. The position is equipped with an additional workstation with a database of airport configurations under different weather conditions and the preferential routes among all major airports. The main role is to provide coordination for the new routing.
There is a significant amount of domain knowledge involved in TFM activities. For example, some of the airports require implementation of a GDP/GSP for the entire system, whereas others may restrict the program to the adjacent centers.
INTELLIGENT TRAINING
Computer-aided instruction (CAI) is a common use of computers in education and training. CAI tools incorporate well-prepared course materials and lesson plans into routines optimized for each student. However, conventional CAI tools are
limited to either electronic page-turners or drill-and-practice
monitors, severely limiting the overall effectiveness of the
system in a situation where declarative knowledge is sought.
The incorporation of AI techniques into CAI spawned the
creation of intelligent tutoring systems (ITS) capable of modeling the student learning process, drawing conclusions from
student problem-solving behavior, and modifying the sequence in which material is presented to the student (12). An ITS
is intended to help individual students identify their specific weaknesses and rectify them effectively, and to be sensitive to the student's preferred style of learning. The objective of some
researchers is to produce entirely autonomous ITS based on
pedagogical expertise and the principles in the domain
knowledge.
The major blocks of a modern simulation-based ITS are (a)
simulator, (b) domain expert, (c) student model, (d) evaluator,
(e) scenario generator, (f) training manager, and (g) user interface. The simulator represents the real-life system for
which the student is being trained. The domain expert contains the body of knowledge that should be presented and
taught to the student. It is also used for evaluation of student
performance and the overall learning progress. To achieve
these objectives, most systems generate and store all feasible
solutions to the problems in the same context as the student,
so that their respective answers can be compared. The student model contains knowledge about the student's understanding of the material. This knowledge is extremely important in the decision-making process affecting the choice of
subsequent tutoring strategies. The evaluation module is
used to evaluate the student performance based on the situation assessment derived from the simulation status. The scenario generator is used to generate realistic training scenarios appropriate for the student. The core of the system is the
training manager, containing the knowledge about teaching
methods. Based on the current evaluation and on monitoring of the student's performance, the training manager selects the next scenario component from the scenario generator. The module uses a decision-making process based on teacher experience and assessment of past training sessions. Finally, the success of an ITS depends significantly on user-interface quality. All
of the components are coordinated by a communication entity
referred to as a blackboard. Any ITS module can place information on the blackboard, making it available to all other
modules.
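The blackboard coordination described above can be sketched as a shared store that any module may post to or read from. This is a minimal illustration with invented names; real blackboard systems add scheduling, ownership, and change notification:

```cpp
#include <map>
#include <string>

// Minimal blackboard sketch: modules post named items; every module can read.
class Blackboard {
    std::map<std::string, std::string> entries;
public:
    void post(const std::string& key, const std::string& value) {
        entries[key] = value;              // visible to every other module
    }
    bool read(const std::string& key, std::string& value) const {
        auto it = entries.find(key);
        if (it == entries.end()) return false;
        value = it->second;
        return true;
    }
};
```

For example, the evaluator might post an assessment that the training manager later reads when choosing the next scenario.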
An ITS is often in the form of a computer-based problem
solving tutor, a coach, a laboratory instructor, or a consultant.
For the development of an ITS for aerospace (ATC specialists,
pilots, astronauts, and airline dispatchers), the most suitable
tutoring strategy seems to be a combination of coaching and
guided-discovery learning. The student is in full control of the activity for which the tutoring is provided, a simulated version of a real system. Simulation is used because it provides
an attractive motivational context for discovery learning. The
coaching task of the ITS is to foster the learning inherent in
the activity itself by emphasizing existing learning opportunities and by transforming failures into learning experiences.
designer would typically use a rule-based approach to implement this model. Dynamic input is provided by both the pilot and the flight computers and by inertial reference computers (if available), allowing a control model to be exercised. Often, a sophisticated formula or Bayesian loop is used to control the limits of the autopilot system. Engineers are concerned with the pilot's induction of out-of-phase or undamped input to the system, and thus, in some aircraft, such as those produced by Airbus Industrie, the autopilot system will actually retain and enforce control input limits made by the pilot.
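The enforcement of control-input limits can be pictured as a simple envelope clamp plus rate limiter. This is an illustrative sketch only; actual flight-control laws are far more elaborate, and the function names are invented:

```cpp
#include <algorithm>

// Illustrative envelope protection sketch: the commanded control deflection
// is clamped to limits that may themselves depend on the flight condition.
double enforce_limit(double commanded, double lower, double upper) {
    return std::min(std::max(commanded, lower), upper);
}

// Rate limiter: restricts how fast the surface may move between samples,
// damping abrupt (potentially out-of-phase) pilot inputs.
double rate_limit(double previous, double commanded, double max_step) {
    return previous + enforce_limit(commanded - previous, -max_step, max_step);
}
```

A command outside the envelope is silently held at the boundary, which is the "retain and enforce" behavior described above.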
Navigation
Methods of capturing knowledge of both control and navigation activities are varied. Certain things are known about the
aircraft that are derived from the engineering process. Other
systemic effects, such as the role of the pilot, are less certain,
and there is a need to capture expert knowledge. One such
method is the use of models that present reasonable representations of the expert.
Multiple model integration is used to reduce the need to explicitly define the knowledge for all cases and creates specific rules that fire in general conditions (the environment is defined heuristically). This method employs both detailed and general knowledge acquisition and modeling, while yielding high confidence in the rules that fire. Piloting is well suited for such implementations, because the use of procedural knowledge to induce rules can be used to meet the need for specificity, whereas the general environmental conditions
may be described using generalizations. The use of concept
mapping is another method of reducing knowledge acquisition problems in complex situations. Concept mapping allows specialized knowledge in the form of heuristics and problem-solving methods to be explicitly associated by the knowledge users with static facts and general knowledge (19). Specific to concept mapping is the use of two techniques: first, the combining of multiple inputs, whereby the experts collectively generate a summary map of the knowledge required for the particular domain; the second technique is that of indexing, which results in the development of themes and key concepts that emerge from the relationships generated in the summary mapping process.
Communication
When exploring pilot communication activities, a number of different types of communication take place in which ES are employed.
Pilots receive information from the aircraft systems in the
form of displays, and send information to each other and to others on the ground. A remarkably clear pattern of information needs exists during a large percentage of the time pilots are flying. Using this pattern, ES designers have implemented systems that anticipate and provide the information
needed when it is needed. Typical systems include the automated information and crew alerting systems used to monitor
aircraft systems, detect trends and anomalies in the system,
and alert the crew to the problem. These are truly ES, in that
they gather data and, rather than merely responding to it,
they analyze it, consider alternative responses, and then initiate action.
These ES are found on most transport and military aircraft and are developed using engineering data to derive functional limits, which in turn support both rule-based and input-driven algorithms. Other forms of ES, which support pilots by managing information, are used to communicate data from
nents as represented by the frame-based model. Knowledge-based autonomous test engineer (KATE), developed for the
National Aeronautics and Space Administration (NASA) by
Boeing Space Operations (27), is a generic software shell for
performing model-based monitoring, fault detection, diagnosis, and control. The four subsystems are (1) simulation, (2)
monitoring, (3) diagnosis, and (4) control. The system originated in the mid-1980s as a tool to support the operation of the launch processing system. KATE was particularly designed to check sensor operation for the Space Shuttle liquid-oxygen loading system. The system is based on a model of the sensor structure and diagnoses sensor failures. By separating the system structure from the component functions, a more generic tool was designed. During the early 1990s, the system started its operational application monitoring the tanking data. The system was redesigned and implemented in the C programming language, using the popular Motif windowing environment on a UNIX workstation, to serve as a part of the vehicle health management system.
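Model-based sensor monitoring of the kind KATE performs can be caricatured as comparing each sensor reading against the value predicted by a system model and flagging large residuals. This is a generic sketch, not KATE's actual interfaces or algorithms:

```cpp
#include <cmath>
#include <vector>

// Generic model-based fault detection sketch: a sensor whose residual
// (measurement minus model prediction) exceeds its tolerance is suspect.
std::vector<int> suspect_sensors(const std::vector<double>& measured,
                                 const std::vector<double>& predicted,
                                 double tolerance) {
    std::vector<int> suspects;
    for (std::size_t i = 0; i < measured.size(); ++i)
        if (std::fabs(measured[i] - predicted[i]) > tolerance)
            suspects.push_back(static_cast<int>(i));
    return suspects;
}
```

Keeping the model (the predictor) separate from the component descriptions is what makes such a tool generic, as the text notes.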
Yet another facet of ES application is in the area of planning and scheduling. One example of such an application is the automatic cockpit crew scheduling developed by Japan Airlines and NEC (28). The system is designed to prepare monthly schedules for flight crews. The system knowledge is represented in frames and rules. The system's distributed architecture allows it to run inferencing on slave computers, with the master computer serving as a cooperative inference area and the monitor of data integrity. The backtracking technique is used to break a deadlock when a crew assignment cannot be found. Another example is an ES tool to support shift duty assignments for airport staff (29). The rule-based system combines forward-chaining inference and constraint-relaxation techniques. It produces a timetable starting with the initial assignment and continuing through an iterative improvement process. The prototype has been tested in airport operations.
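The backtracking technique used to break assignment deadlocks can be sketched generically. The toy formulation below (each flight needs one qualified crew, and no crew takes two flights) is an illustration, not the Japan Airlines system:

```cpp
#include <vector>

// Toy backtracking assignment: assignment[f] is the crew chosen for flight f.
// qualified[f][c] says whether crew c may fly flight f.  When a dead end is
// reached, the algorithm backtracks and revises an earlier assignment.
bool assign(const std::vector<std::vector<bool>>& qualified,
            std::vector<int>& assignment, std::size_t flight = 0) {
    if (flight == qualified.size()) return true;   // all flights covered
    std::size_t crews = qualified[flight].size();
    for (std::size_t c = 0; c < crews; ++c) {
        bool taken = false;
        for (std::size_t f = 0; f < flight; ++f)
            if (assignment[f] == static_cast<int>(c)) taken = true;
        if (taken || !qualified[flight][c]) continue;
        assignment[flight] = static_cast<int>(c);
        if (assign(qualified, assignment, flight + 1)) return true;
        assignment[flight] = -1;                   // backtrack
    }
    return false;
}
```

If the first (greedy) choice for an early flight starves a later one of its only qualified crew, the recursion unwinds and a different early choice is tried, which is precisely how the deadlock is broken.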
FUTURE TRENDS
ES already play a vital role in the safety and effectiveness of complex systems. Their future in aerospace includes autonomous vehicles in both military and passenger aircraft; cooperating ES, such as those that would provide separation of aircraft in flight; ATC systems that improve the safety and efficiency of airspace use and of airports; and, to some extent, training systems that deliver individualized lessons to students.
The need to capture knowledge regarding the human operator in the aerospace system is clear; however, accurately and effectively describing that knowledge in today's complex systems is becoming less practical using old techniques. The future of ES design will focus on practical knowledge engineering techniques that use the target system as a means of collecting information and creating knowledge about the users. In such a systemic approach, knowledge engineering will evolve to include knowledge about the human, human-systems interfaces, and the systemic effects on the human operator's interpretation of system feedback. The use of such developing technologies as neural networks and adaptive ES will be more prominent than in systems in use today. The adaptive system is capable of both induction and adaptation.
2. A. L. Elias and J. D. Pararas, Potential use of artificial intelligence techniques in air traffic control, Transportation Research Circular, TRB, National Research Council, Washington, DC, AI Workshop Report, 1985, pp. 17–31.
Reading List
9. L. Zadeh, The role of fuzzy logic in the management of uncertainty in expert systems, Fuzzy Sets Syst., 11: 199–227, 1983.
T. I. Oren, Artificial intelligence and simulation, AI Applied to Simulation, 18(1): 3–8, 1986.
A. M. Wildberger, Integrating an expert system component into a simulation, AI Papers, 20(1): 132–135, 1988.
R. H. Michaelsen, D. Michie, and A. Boulanger, The technology of expert systems, BYTE, 10: 303–312, April 1985.
F. Hayes-Roth, P. Klahr, and D. J. Mostow, Knowledge acquisition, knowledge programming, and knowledge refinement, in P. Klahr (ed.), The Rand Corporation, R-2540-NSF, 1980, Reading, MA: Addison-Wesley, 1986, pp. 310–349.
A. Gerstenfeld, Simulation combined with cooperating expert systems: an aid for training, screening, plans and procedures, J. ATC, 30: 33–35, 1988.
D. Spencer, Development environment for an ATC expert system, in Transportation Research Circular, TRB, National Research Council, Washington, DC, AI Workshop Report, 1985, pp. 32–37.
C. A. Shively, AIRPACK: Advisor for intelligent resolution of predicted aircraft conflicts, Transportation Research Circular, TRB, National Research Council, Washington, DC, AI Workshop Report, 1985, pp. 58–64.
A. Gonzalez et al., Simulation based expert system for training air traffic controllers, in M. B. Fishman (ed.), Advances in Artificial Intelligence Research, Greenwich, CT: JAI Press, 1989, pp. 295–308.
R. Steeb et al., Distributed problem solving for air fleet control: framework and implementation, in P. Klahr (ed.), The Rand Corporation, N-2139-ARPA, 1984, Reading, MA: Addison-Wesley, 1986, pp. 391–432.
P. McKinnon, Living with artificial intelligence, J. ATC, 29: 23–25, 1987.
Web Sites
Federal Aviation Administration http://www.faa.gov
MITRE Center for Advanced Aviation System Development http://
www.caasd.org
National Aeronautics and Space Administration Ames Research Center Advanced Air Transportation Technologies http://aatt.arc.nasa.gov
Massachusetts Institute of Technology Lincoln Laboratory http://www.ll.mit.edu
AI resources http://www.cs.reading.ac.uk/people/dwc/ai.html
ANDREW J. KORNECKI
JAMES W. BLANCHARD
Embry-Riddle Aeronautical
University
AEROSPACE SIMULATION
Whenever one process is represented by another, a simulation is in progress. A terminology developed recently by the military includes three categories of simulation: live, constructive, and virtual. In live simulation, actual equipment is operated by live crews. Practicing engine-out procedures in an airplane with good engines or training instrument procedures while flying in good weather are live simulations. So are war game exercises played with aircraft and tanks.
Constructive simulation replaces both equipment and crews by symbols. The classical sand-table exercises, where tokens represented military units, were constructive simulations. The sand table has been computerized. It now approximates the mechanics of vehicles and even the cognitive processes of troops. Computer representations of processes ranging from water management to bacterial growth to hypersonic flow are all constructive simulations.
Virtual simulation employs live players in a simulated environment. There are still other simulations in which inanimate objects, for example, engines, sensors, control systems, or even entire missiles or unmanned aircraft, are operated and tested in a virtual environment.
This article addresses virtual simulation as it applies to the flight crews of aerospace vehicles. Regardless of the purpose of the simulation, the subject is the techniques for creating an effective virtual environment for the human pilot.
Simulators are widely used for training. Complete pilot training in a virtual simulator is not practical. A simulator suitable for this purpose, classified by the Federal Aviation Administration (FAA) as level D, is much more expensive than a trainer aircraft. Level D simulators are produced only for very expensive aircraft and are used for, among other things, transition training of airline pilots to new types of airliners.
On the other hand, supplementary use of simulators in flight training has long proved useful. Training pilots to fly by reference to instruments only has been accomplished since World War II by combining flight time with simulator time. Simulators offer some unique training advantages:
Reduction of Risk.
Reduced Environmental Impact.
Saving of Time. The simulation can be limited to the maneuver being trained. There is no need to perform a preflight check of an aircraft, go through the engine start procedure, taxi to the runway, and fly to the practice area before training can begin. No time is wasted on returning, landing, and taxiing back after the flight. The simulator can be reset to repeat a maneuver. For instance, when training landing approaches, the simulator can be reset after each approach, putting it in a position to start another approach. In live training the airplane must be flown around to the initial position, which may take anywhere from 3 min to 15 min.
Control of Weather. No time is lost due to bad weather. Yet adverse weather conditions can be conjured on demand.
Training Analysis. The simulator can be frozen for a discussion between trainee and instructor, then continue to fly from that position.
Repeatability. Flight histories can be recorded and replayed.
Beyond individual and crew training, the military uses virtual simulation for collective training. Entire units are
trained while both sides of a battle are simulated. Collective
training is accomplished by a technology known as distributed interactive simulation (DIS), which involves communications between large numbers of virtual simulators located at
separate sites. Each simulator includes in the virtual environment it creates the vehicles represented by other simulators.
The ultimate goal is a virtual battlefield on which live, virtual, and constructive simulations can interact. The advantages of DIS (exploited already in the Gulf War of 1991) include:
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
(Figure: block diagram of a virtual simulator, showing the pilot, control inputs, control loader, loader controller, math model, image generator, instrument generator, display system, washout filter, and motion platform.)
integrating the equations of motion of many aerospace vehicles in real time. It is easier to program the full equations
than to linearize them.
The flavor of a typical mathematical model in a virtual simulator may best be conveyed by an overview of the equations governing a rigid vehicle. A rigid body is a six-degree-of-freedom system. The variables of state are

  x_e  position of the CG in the earth cartesian system  (3 components)
  v_e  velocity of the CG in the earth cartesian system  (3 components)
  q    orientation expressed as a unit quaternion        (4 components)
  ω_b  angular velocity in the body coordinate system    (3 components)

The equations of motion are

  dx_e/dt = v_e
  m dv_e/dt = F
  dq/dt = (1/2) q ω_b
  J dω_b/dt + ω_b × (J ω_b) = M

where m is the mass of the vehicle, J is the moment of inertia (a 3 × 3 matrix), and F and M are the force and the moment applied to the vehicle.
Orientation can be expressed by specifying the heading,
pitch attitude, and bank. These three angles, a variation on
the ones introduced by Euler to study the spinning top, are
called Euler angles. This is the preferred formalism for human consumption. However, Euler angles are unsuitable for
virtual simulation because they develop singularities at (and
lose accuracy near) the orientations of facing straight up or
down.
The preferred way of expressing orientations internally in a computer is as unit quaternions. Quaternions are four-component entities, which may be viewed as the sum of a number and a vector. Quaternions obey the normal algebraic rules of addition and multiplication, with the product of two vectors being given by

  U V = -U · V + U × V

Under these rules, quaternions form a ring. All nonzero quaternions are invertible.
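As a sketch of the arithmetic involved, a quaternion product can be implemented directly from the rule above, treating a quaternion as a scalar part w plus a vector part (x, y, z). The type and function names here are illustrative, not from any particular simulator:

```cpp
#include <cmath>

// Minimal quaternion type: w is the scalar part, (x, y, z) the vector part.
struct Quat { double w, x, y, z; };

// Product of two quaternions.  For pure vectors (w == 0) this reduces to
// U V = -U.V + UxV, the rule quoted in the text.
Quat qmul(const Quat& a, const Quat& b) {
    return {
        a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,   // scalar: product minus dot
        a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,   // vector: scaled parts plus cross
        a.w*b.y + a.y*b.w + a.z*b.x - a.x*b.z,
        a.w*b.z + a.z*b.w + a.x*b.y - a.y*b.x
    };
}

// Unit quaternion for a rotation by angle (radians) about the unit axis
// (ex, ey, ez): q = cos(angle/2) + e sin(angle/2), as in the text below.
Quat axis_angle(double ex, double ey, double ez, double angle) {
    double s = std::sin(0.5 * angle);
    return { std::cos(0.5 * angle), ex * s, ey * s, ez * s };
}
```

For instance, multiplying the pure unit vectors i and j under this rule yields k, exactly as in Hamilton's algebra.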
A well-known theorem due to Euler states that any two orientations can be bridged by a single rotation. Let the rotation from the reference orientation to the current orientation be characterized by the axis unit vector e and the angle θ. Then the current orientation may be represented by the unit quaternion

  q = cos(θ/2) + e sin(θ/2)

This representation has no singularities and maintains uniform accuracy over the entire (curved and compact) three-dimensional space of orientations. However, the constraint |q| = 1 must be enforced against truncation errors.
double t, dt;
vector Xe, Ve, Ae, Omegb, Mb;
matrix J, Jin;        /* inertia and its inverse */
quaternion q;

void step(void)
{
    Airloads();                               /* computes Ae and Mb */
    t += dt;
    Ve += Ae*dt;
    Xe += Ve*dt;
    Omegb += Jin*(Mb - (Omegb^(J*Omegb)))*dt;
    q += (q*Omegb)*(0.5*dt);  q = q/abs(q);   /* renormalize q */
}
The symbol ^ denotes the vector product. Arithmetic operations are overloaded for the user-defined types. Thus * denotes the product of numbers; of a number by a vector, a matrix, or a quaternion; of a matrix by a vector; of two matrices; or of two quaternions. The compiler determines the correct operation based on context. For the product q*Omegb (a quaternion by a vector), the compiler converts the vector to a quaternion and employs quaternion multiplication. The overloaded operations of addition and multiplication of vectors, matrices, and quaternions are defined in appropriate header files (1).
The procedure Airloads() computes the earth acceleration Ae and the body moment Mb. Aerodynamic computations are usually based on tables of coefficients and on the local flow field. Often, steady-state aerodynamics for the instantaneous state is used even in transient conditions (adiabatic assumption). Computational fluid dynamics (CFD) is, at this writing, incapable of real-time performance.
Methods of integration more accurate than Euler's are often employed. The powerful Runge-Kutta methods are not suitable when control inputs are sampled only once per step. However, the Adams-Bashforth methods, which infer trends from previous steps, have been used to advantage.
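A second-order Adams-Bashforth step, which extrapolates from the derivative remembered from the previous frame, can be sketched for a scalar state (an illustrative fragment; a real simulator applies it to the full state vector):

```cpp
// Two-step Adams-Bashforth sketch: advance state x using the current
// derivative f_now and the one remembered from the previous frame:
//   x(t + dt) = x(t) + dt*(1.5*f_now - 0.5*f_prev)
// Only one derivative evaluation per frame is needed, which suits
// simulators where control inputs are sampled once per step.
double ab2_step(double x, double f_now, double f_prev, double dt) {
    return x + dt * (1.5 * f_now - 0.5 * f_prev);
}
```

With a constant derivative the step reduces to the plain Euler step, while a changing derivative is linearly extrapolated across the interval.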
In many cases, describing the vehicle as a rigid body is not adequate. Examples include helicopters, where flapping and flexing of rotor blades is important, and large aircraft and
space structures, where structural modes interact with the
control dynamics. In these cases, additional state variables
and additional equations of motion are brought into play. The
engine and other systems require modeling, too.
TIMING ISSUES
The computation cycle, including the sampling of control inputs, the supporting calculation of forces and moments, the
integration over a time interval Δt, and the output to the instrument, visual, motion, and tactile cueing systems, is called a simulation frame. All the computations for the frame must be accomplished within the time period Δt.
Timing may be accomplished by clock-generated interrupts at an interval of Δt. The interrupt starts the frame. Once the frame is complete, computation is suspended until the next interrupt. This method ensures precise timing but, inevitably, wastes some capacity. Another approach is to run the frames continuously and adjust Δt to agree with real time. This ensures the smallest possible Δt while maintaining real time on the average, although individual frames may vary slightly.
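The second scheme can be sketched as letting each frame integrate over the measured duration of the previous one. This is an illustrative simplification; real schedulers smooth the estimate over several frames:

```cpp
#include <vector>

// Continuous framing sketch: frame i integrates over the measured duration
// of frame i-1, so simulated time tracks real time on the average even
// though individual frame intervals vary.
std::vector<double> frame_intervals(const std::vector<double>& measured_durations,
                                    double nominal_dt) {
    std::vector<double> dts;
    double prev = nominal_dt;       // the first frame uses the nominal interval
    for (double d : measured_durations) {
        dts.push_back(prev);
        prev = d;                   // next frame integrates over this duration
    }
    return dts;
}
```

A frame that runs long is thus followed by a correspondingly longer integration interval, keeping the simulation from drifting relative to the wall clock.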
The time step used in integrating dynamic equations must not be excessive, in the interest of accuracy. Models of flexible and articulated vehicles place an additional burden on the host computer, due not only to the additional degrees of freedom but, more significantly, to the higher frequencies that come into play. The rule of thumb is that the frame rate must be at least ten times the typical frequency of the system being modeled. Frame rates for modeling rigid vehicles are typically between 30 and 60 frames per second (fps). However, for helicopter rotors, frame rates as high as 120 fps are common.
The frame rates of different subsystems of a simulator
need not be the same. Even when the dynamic computation
requires 120 fps, the visual display may be adequate at 60 fps
or even 30 fps, while the motion system and control loader may run at significantly higher frame rates, sometimes as high as 5000 fps. The frame rates of subsystems must be commensurate when precise interrupt-driven synchronization is implemented.
Another timing issue involves the interval between control
input and observable feedback. The key concepts here are
(2,3):
Latency: the excess delay of simulator response over flight vehicle response
Transport delay: the delay between control input and simulator response, including computation time but excluding any modeled delay
The transport delay is easier to determine, because it does not require access to the flight vehicle. If the math model is perfect and reproduces the delay inherent in the vehicle exactly, then the transport delay is equal to the latency.
The principle of physical equivalence requires zero latency.
It is impossible to have the transport delay at zero, because
computations do take time. Some compensation is achieved
by not modeling the propagation time of control signals in
control rods, wires, and hydraulic lines (at the speed of sound
in the particular medium). Still, control responses in virtual
simulators are typically delayed.
The pilot expects feedback to control inputs. If this feedback is delayed, the pilot may be induced to increase the input. A delay in any cue will tend to exaggerate the control
inputs. In the context of harmonic inputs and disturbances,
the delay is translated into a phase lag and it limits the frequency of disturbances that can be controlled.
The FAA accepts a latency of 150 ms for airplane simulators (2) and 100 ms for helicopter simulators (3) for level D certification. Practical experience indicates that simulators
subject to this amount of delay are effective. The helicopter
value, 100 ms, is representative of the state of the art at this
writing. Current simulators of high performance military aircraft also keep the transport delay to less than 100 ms.
Apart from the amount of the delay, there is the issue of
the relative delay of different cues. The relative timing of visual, aural, and motion cues is important. Cues received out
of order may cause simulator sickness, a condition in which an experienced pilot becomes nauseated in the simulator.
COCKPIT DISPLAYS
Flight and engine instruments are the cueing devices that are
easiest to implement in a virtual simulator. The Link trainers
of WW II fame used analog computers to drive needles in electrically actuated replicas of airspeed indicators, altimeters,
tachometers, and other airplane instruments. The devices
were used to teach control of an airplane by sole reference
to instruments, which made visual displays unnecessary. The
Link devices had rudimentary motion capability of questionable fidelity. The task was to train a pilot deprived of visual cues to disregard motion cues and react to instrument readings only. A number of postwar fixed-base devices whose general architecture was the same as that of the Link device accomplished the same end. They were useful in teaching instrument flight and maintaining instrument proficiency
and were accepted by the FAA for training and currency
credits.
With the advent of microprocessor technology, even low
end simulators became digital. Computer graphics made it
possible to use graphical images of instruments in place of
hard replicas. The first FAA-accepted device to exploit this
capability was the Minisimulator IIC, which came on the
market in 1981. The IIC used most of its computational
throughput to create the crude two-dimensional graphical
representation of the instruments. But graphics techniques
soon improved, and graphically displayed cockpit instruments
became commonplace in actual cockpits as well as in simulators.
In addition to instruments, many modern cockpits include
other displays. Some, like moving maps and horizontal situation displays, are two-dimensional. Others, such as low-light-level TV (LLTV) and forward-looking infrared (FLIR), offer a view of the three-dimensional outside scene. The three-dimensional graphic displays are computed by the same methods as the visual displays discussed in the next section.
IMAGE GENERATION
Creating a visual display of the outside scene is by far the
most computationally demanding task in a virtual simulator.
Early image generators (IG) used analog methods. A television camera would fly under computer control over a miniature scene or an aerial photograph. Early digital image generators offered night scenes with only discrete points of light
visible. The technology soon advanced to dusk and eventually
to daylight scenes.
Data about the three-dimensional environment in which
the flight takes place is kept in a database. Terrain and other
objects are described as wireframes delimited by polygons.
Each polygon is endowed with color and/or texture. There
have been efforts to create an open database format; at this
writing, the formats in use are mostly proprietary.
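As a rough illustration of the kind of record such a database holds, a polygon carries vertex references plus surface attributes; the field names below are hypothetical, since, as noted, the actual formats are mostly proprietary:

```python
from dataclasses import dataclass, field

@dataclass
class Polygon:
    """One face of a wireframe object: vertex indices plus surface attributes."""
    vertices: list                     # indices into the parent object's point table
    color: tuple = (0.5, 0.5, 0.5)     # RGB fill color, used when no texture applies
    texture: str = ""                  # texture map name, empty if untextured

@dataclass
class TerrainObject:
    """A database object described as a wireframe delimited by polygons."""
    points: list                               # (x, y, z) wireframe vertices
    faces: list = field(default_factory=list)  # Polygon records delimiting the object

# A single textured ground tile, 100 m on a side:
tile = TerrainObject(
    points=[(0, 0, 0), (100, 0, 0), (100, 100, 0), (0, 100, 0)],
    faces=[Polygon(vertices=[0, 1, 2, 3], texture="grass")],
)
```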
AEROSPACE SIMULATION
Figure 2. The three-dimensional scene is transformed into a two-dimensional graphic on the image plane by projecting along rays that meet at the eyepoint.
move relative to each other. Should the pilot's eye deviate from the nominal eyepoint, the perspective would become distorted. During forward flight this would create the impression of a spurious sideways component of motion.
Stereopsis. When the pilot's two eyes observe the same
image from slightly different vantage points, the two retinal impressions differ. This difference is the raw material for stereopsis, which determines apparent distance.
The distance so determined is that of the image rather
than of the objects it represents. The stereopsis cue
might conflict with other cues, for example, perspective cues and cues based on the size of familiar objects.
These effects are most pronounced with a small, nearby display, such as a monitor screen. They flag the image as a small, flat picture. A human being can transcend this detail when appreciating art. To some extent, one can transcend it during training of specific tasks. Screen displays as close as
one meter have been used successfully and accepted well
by experienced pilots. However, to attempt physical equivalence, one must do better. This is where the display system
comes in.
A screen projection is a significant improvement over a
monitor screen. The image may be projected either from the
front of the screen or, with a suitable screen, from the rear.
Back projection has the advantage that the projector is out of
the way of the pilot and cab structure. It is possible to place
the projector so as to avoid distortion and the need for distortion correction.
A larger image placed, typically, three meters away is easier to perceive as real. The accommodation is only 0.3 diopter from infinity. Parallax with nearby objects, such as the cockpit structure and instruments, is approximately correct.
Infinity optics is a more effective solution. The image is optically placed infinitely far away. Accommodation is exactly
[Figures: a collimated display built from a video monitor, a semireflective plate, and a spherical mirror; and (Figure 5) a projection system comprising a projector, spherical screen, spherical mirror, motion platform, and actuators.]
Figure 5 shows an elegant solution: an infinity optics system that can serve several crewmembers and provide them with a correct, wide-angle outside view regardless of their position in the cockpit. The picture is back-projected by a number of projectors (only one is shown) onto a spherical screen. The simulator crew views this display through a large concave spherical mirror. The screen and mirror are concentric, with their radii matched to put the screen at the focal surface of the mirror as viewed from the cab. The mirror creates a virtual image located out at infinity that can be seen from anywhere in the cab.
Neither the projected image nor the one viewed through infinity optics offers correct stereopsis, parallax, or accommodation for objects that are not far away. This is significant for
operations where nearby objects play a role, including aerial
refueling, spacecraft docking, and maneuvering helicopters
near terrain and objects.
Stereopsis can be achieved by offering separate images for
the two eyes. When this is done, the stereo cue is expected to
overpower the accommodation cue and the parallax cue with
which it is not consistent.
Three-dimensional images that are inherently correct in
stereopsis, accommodation, and parallax for any viewer and
for multiple viewers at the same time can be produced by holography. But holography requires creation of an interference
pattern with resolution of the order of the wavelength of visible light (on the order of 10⁻⁶ m). This capability is not yet available in real time.
Separate images for the two eyes (or for that matter, for
two crew members) can be offered with projection systems
and infinity optics systems by use of polarized light or of electronically timed shutters. In the former case, two separate
images are projected on the screen using mutually orthogonal
polarization. The pilot views the display through polarizing
lenses, so that each eye sees only one image. In the latter
case, the two images alternate. The pilot views the display
through electronically timed liquid crystal shutters. These
block each eye when the image intended for the other is projected.
Head (or helmet)-mounted displays (HMD) offer separate
collimator-like display systems for the two eyes. The HMD
requires head tracking to determine the instantaneous orientation of the eyepoint. Head movement can sweep a narrow field of view over a much wider field of regard. These systems typically induce the pilot to substitute head movement for eye movement, and the natural ability to notice moving objects in one's peripheral vision cannot be exercised.
The quality of HMD depends on the precision of head
tracking and its latency. The display requires a fast update
rate to keep up with fast image changes due to abrupt head
movement. HMDs typically require individual fitting. The size and weight of an HMD is a burden on civilian pilots. Even military pilots, used to flying with a helmet, often object. Besides, the HMD precludes the use of operational helmets and viewing devices in the simulator.
The eyepoints used for the HMD are generic. They represent the eye positions of a typical pilot. Static adjustment to the pilot's seat position, torso height, and eye separation is
feasible. Dynamic adjustment to body and head movement is
not in the current systems.
For use with an HMD, the database models the inside of
the cab as a black silhouette. The HMD reflects its images on beam-splitters that allow the pilot to see through into the cab. Even so, there is a potential problem when two crew members sit side by side. The silhouette of the other crew member's head cannot be predicted perfectly and will not register accurately. Bright outside scenery may show through the edges of the other crew member's helmet.
Brightness is an issue for all simulator displays. One must
assess the brightness available at the source and how much
of it reaches the observer's eye through the display system optics. These estimates are too involved to be presented here. The bottom line is that there is no difficulty in creating what
an observer will accept as a daylight scene. The brightness of
this scene is far below actual daylight. Pilots do not use their
sunglasses in simulators. Simulator cabs are darkened during operation, unlike aircraft cockpits in daytime. By the same
token, problems of observing certain dimly lit displays in sunlight do not arise in the simulator.
It was not possible to describe in this section all the types
of display systems in current use. Some of the ones not covered are calligraphic displays, multi-resolution displays, and
area of interest displays.
MOTION CUES
Motion cues, by denition, are those cues that result from the
pilot being moved bodily. Awareness of motion through sight
where g is the local acceleration of gravity and a is the acceleration of the cab relative to an inertial system. Rotation relative to an inertial frame is also measurable.
It is these parameters, namely, the three components of
specic force and the three components of angular velocity,
that serve as motion cues for the human body, as for any other physical system. The inner ear contains organs (otoliths and semicircular canals) specifically adapted to sense these parameters. The motion parameters are felt also by other parts of the body, for example, the sinking sensation in the pit of the stomach when an elevator starts its descent.
So long as the six motion parameters are reproduced correctly, there is no need to investigate the mechanism of human perception. Any and all mechanisms respond as they do in flight.
It takes a six-degree-of-freedom motion system to create
the six motion cues. With the simulator cab on a motion platform, the pilot can sense rotational rates around the three
body axes (yaw, pitch, and roll) and linear acceleration forward, sideways, and up (surge, sway, and heave). The six parameters vary from one point in the moving cab to another.
However, with a rigid cab representing a rigid vehicle, if the
parameters are correct at one point, they are correct at every point.
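The point-to-point variation follows from standard rigid-body kinematics: all points share the angular velocity, while the acceleration at a point offset r from the reference point is a_ref + α×r + ω×(ω×r). A small sketch of that relation (pure Python, with illustrative numbers):

```python
def cross(u, v):
    """Cross product of two 3-vectors given as (x, y, z) tuples."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def accel_at_point(a_ref, omega, alpha, r):
    """Acceleration at a point offset r from a rigid body's reference point:
    a_P = a_ref + alpha x r + omega x (omega x r)."""
    ar = cross(alpha, r)                 # tangential term from angular acceleration
    cc = cross(omega, cross(omega, r))   # centripetal term from angular rate
    return tuple(a + b + c for a, b, c in zip(a_ref, ar, cc))

# Pure yaw at 0.5 rad/s: a point 2 m ahead of the reference point feels a
# centripetal acceleration of 0.5**2 * 2 = 0.5 m/s^2 toward the yaw axis:
a_head = accel_at_point((0, 0, 0), (0, 0, 0.5), (0, 0, 0), (2, 0, 0))
# a_head is (-0.5, 0.0, 0.0)
```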
When the replication of the motion parameters is only approximately correct, the errors vary from point to point in the
simulator cab. It is then necessary to select a sensing point
where the errors are minimized. The choice of a sensing point is influenced by the theory of perception. For example, if it is the inner ear which processes the motion cues, then the sensing point should coincide with the pilot's head.
The fact that uniform motion is intrinsically undetectable
allows a pilot to have the same sensations in a stationary
simulator as in a fast-moving airplane. However, acceleration and rotation are sensed. It is impossible to replicate the acceleration of the flight vehicle exactly while keeping the motion platform in the confines of a room. For instance, during the takeoff run, an airplane accelerates from rest to flying speed. In the process, it might roll over a few thousand feet of runway. Should the motion platform be subject to a surge acceleration equal to the airplane's, it, too, would translate a few thousand feet and out of the confines of the building that houses the simulator.
The above discussion demonstrates that a confined motion platform, of necessity, violates the principle of physical equivalence under some circumstances. One attempts to replicate the motion cues approximately and, to the extent possible, to deviate from the true motion parameters only to a degree that is undetectable by a human subject.
In the case of the takeoff roll, the specific force, in body coordinates, is inclined to the rear and is slightly larger than
equal to the full-scale motion and received better subjective evaluations from the pilots. Even the VMS was not capable of altogether good full-scale motion. The motion system of the VMS has been upgraded in the wake of the Ref. 6 results.
When consistent motion and visual cues are available, the motion cues should be sensed by the pilot earlier. An acceleration step of magnitude a results in a displacement at²/2. This displacement is not sensed visually until it has grown to the visual detection threshold x, which takes a time delay

Δt = √(2x/a)
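A small sketch of this relation; the threshold and acceleration values are assumed for illustration, not given in the source:

```python
import math

def visual_detection_delay(x_threshold_m: float, accel_ms2: float) -> float:
    """Time for the displacement x = a*t**2/2 of an acceleration step to
    reach the visual detection threshold: delta_t = sqrt(2*x/a)."""
    return math.sqrt(2.0 * x_threshold_m / accel_ms2)

# Assumed numbers: a 1 cm visual threshold under a 0.5 m/s^2 step
# is not seen for sqrt(2 * 0.01 / 0.5) = 0.2 s:
dt = visual_detection_delay(0.01, 0.5)   # 0.2 s
```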
CONTROL LOADING
In the early days of aviation (and to this day in light aircraft),
pilot controls were coupled mechanically to aerodynamic control surfaces. Pilots relied on the control feel, dominated by
aerodynamic forces, as a major cue. The function of the control loader is to reproduce this cue in a flight simulator.
In the meantime, aircraft have evolved. Hydraulically actuated controls have become the norm. Electronic controls are
the trend of the future. These irreversible control systems do
not feed aerodynamic forces back to the pilot. Artificial feel systems (usually springs) are used to provide the pilot with a semblance of the expected feel. Increased reliance on instrument readings makes up for the deficiency.
Control loaders are fairly expensive. A high-quality loader
may cost more than a light airplane. This creates a paradoxical situation: a control loader can be economically justified only in those cases in which the most important cues that it can provide are suppressed. A very sophisticated piece of equipment simulates a generic system of two masses, with springs, dampers, and linkage. This is traditionally approximated by a near-linear model. Nevertheless, control loaders are important in special situations, for instance, a hydraulic failure giving rise to significant control forces.
The techniques of control loading are similar to the ones employed in motion systems. The high-end control loaders are
hydraulic, with electric systems starting to catch up. Through
the 1980s, control loaders were controlled by analog computers. In the 1990s, digital controllers caught up, some of them
using frame rates as high as 5000 fps.
NETWORKING OF SIMULATORS
Long-haul networking came into its own in the 1990s. Air
combat simulators with dual cockpits engaging one another
have been in existence since the 1960s. By the 1980s, several
simulation facilities had connected their simulators by a local
area network. The concept was taken a step further by the
Defense Advanced Research Projects Agency (DARPA). In the
SIMNET project (7), large-scale networking, including remotely located facilities, was carried out successfully.
The SIMNET project used low-fidelity simulators with crude visual displays. Active controls and instruments were limited to the ones normally used or monitored during combat. Everything else was eliminated or represented by static props and pictures. The purpose was to recreate the feel, the pressures, and the confusion of a battlefield. In a test conducted in 1989, about 400 players participated, including
tank crews and helicopter crews at separate army installations.
SIMNET achieved its networking in two stages. Local networking tied simulators within one facility together by use of
Ethernet. The long-haul link between different facilities used
commercial 56 kbaud lines. The local and long-haul protocols
were different.
Like the local networking that preceded it, SIMNET addressed a set of matching simulators specifically designed to interact. By 1989, there were also isolated demonstrations of long-haul communications between existing high-fidelity simulators that were separately and independently designed and
owned. In 1979, an F-15 simulator located at Williams Air
Force Base engaged an F-4 simulator at Luke Air Force Base.
Both bases are in Arizona, and the distance between them is
80 km. The network link used four telephone lines.
In 1989 a long-haul link between an AH-64 Apache simulator located in Mesa, Arizona and a Bell 222 simulator located
in Fort Worth, Texas was demonstrated. The Arizona simulator was in the facility of the McDonnell Douglas Helicopter
Company. The Texas device was in the plant of Bell Helicopter Textron. The distance between the two facilities is 1350
km. The link employed a 2400 baud modem over a standard
telephone line.
These experiments showed that long-haul networking of
dissimilar simulators was practical. But a communications
protocol was missing. Rather than reinvent the interface by
mutual arrangement between each pair of facilities, an industry standard for interfacing simulators was needed. By conforming to the standard, a simulation facility could ensure
compatibility with every other facility that conformed.
An open industry standard for networking of simulators was first addressed at a conference held in Orlando, Florida,
in August 1989 (8). The conference adopted the local SIMNET
protocol as the starting point for the new standard. The term
coined for the new protocol was distributed interactive simulation (DIS). Work on DIS continued in biannual meetings in
Orlando. In 1993, the DIS protocol was formalized as IEEE
Standard 1278-1993 (9). Work on upgrades continues.
The number of players involved in SIMNET was large
enough to enforce some of the mandatory rules of large-scale networking: the participating simulators must be independent. Each must be able to join the game or withdraw without
interfering with the operation of the others. The failure of any
single simulator must not disrupt the game.
But the SIMNET protocol also involved design decisions
tailored to the low processing power of the SIMNET devices.
Some of these design details were not desirable in general.
The lessons of the long-haul SIMNET protocol were lost and
had to be relearned.
The technical challenges of long-haul networking are
mostly two: bandwidth and transmission delays. These issues
exist in local networking, but long distances between networked simulators render both issues more critical.
When a large number of simulators interact, current state information about each vehicle must be broadcast for the benefit of all. Broadcasting all this information at the rate at which it is created, typically 40 to 60 times a second, creates prohibitively large information flows. Methods for reducing the required bandwidth were needed.
One method, introduced in SIMNET, is called dead reckoning. This term, borrowed from navigation, refers to the extrapolation of a vehicle's motion based on its previously
known state. The SIMNET dead reckoning scheme has each
simulator withhold its broadcasts so long as its state information can be reproduced with acceptable accuracy by extrapolation. The originating simulator (the sender) determines
whether this is the case by simulating the extrapolation process of the remote simulator (the receiver). For each simulation frame, the result of the extrapolation is compared to the
state of the vehicle computed for that frame. No broadcasts
are made until the difference exceeds a preselected threshold.
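The sender-side logic can be sketched as follows; the one-dimensional trajectory, first-order extrapolator, and threshold are illustrative assumptions, not the SIMNET specification:

```python
def dr_extrapolate(state, dt):
    """First-order dead reckoning: advance position along the last broadcast
    velocity (state = (position, velocity), one-dimensional for brevity)."""
    pos, vel = state
    return pos + vel * dt, vel

def sender_updates(trajectory, dt, threshold):
    """Broadcast a state only when the receiver's extrapolation of the last
    broadcast would err by more than the threshold."""
    broadcasts = [(0, trajectory[0])]          # the initial state is always sent
    last_sent_time, last_sent = broadcasts[0]
    for frame, true_state in enumerate(trajectory[1:], start=1):
        est_pos, _ = dr_extrapolate(last_sent, (frame - last_sent_time) * dt)
        if abs(est_pos - true_state[0]) > threshold:
            broadcasts.append((frame, true_state))
            last_sent_time, last_sent = frame, true_state
    return broadcasts

# Vehicle accelerating at 2 m/s^2 from rest, 10 Hz frames, 0.95 m threshold:
dt = 0.1
traj = [(0.5 * 2 * (i * dt) ** 2, 2 * i * dt) for i in range(51)]
msgs = sender_updates(traj, dt, threshold=0.95)
# Only 6 of the 51 frames are broadcast: frames 0, 10, 20, 30, 40, 50
```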
Other methods for relieving the bandwidth bottleneck include (a) bundling of packets at each node and (b) long-haul
transmission of changed information only.
The second technical issue is delay. Remote information is
outdated information. A delay corresponding to the speed of
light is a hard minimum imposed by the laws of nature. It amounts to 3.33 µs/km. Over global distances of several thousand kilometers, the delay is comparable to a simulation frame. The delay in actual communications lines is roughly double the above. With a satellite link, the round trip to geostationary altitude imposes a delay of 200 ms, and the mechanics of the equipment on the satellite increases this to half a second or more. Further delays are caused by the processing of packets by servers at network nodes.
An aircraft traveling at 400 knots covers 1 m in about 5 ms. A rotorcraft flying at, say, 100 knots takes 20 ms to cover 1 m. Position discrepancies due to communications delays are visible in close formation flying. Hit-or-miss decisions for projectiles are affected.
Delays in communications channels are not predictable
and not repeated precisely. A constant delay will make the
remotely simulated vehicle appear to lag behind, whereas a
variable delay will make it appear to jump around. To compensate for the delay, remote data must be extrapolated to
the current time over the delay period Δt.
Initially, there was the misconception that, so long as
sender and receiver used the same dead reckoning scheme,
the receiver error would never exceed the threshold imposed
by the sender. The fallacy of this view was soon exposed (10).
The sender withholds its broadcasts until after the threshold has been exceeded. At that time, the sender broadcasts an update. But the update does not reach the receiver until Δt later. All this time, the receiver's error continues to grow.
Even when the update arrives, the receiver is not at liberty
to exploit it. Immediate reversion to the more recent data
would cause a visible jump in the image. This would make
the image jitter and betray that it is the image of a remotely
simulated entity. The receiver must implement smoothing.
Depending on the particular smoothing algorithm, the receiver will maintain the state error longer or even continue to
grow it for a while after the update is received.
This way, the receiver's error always exceeds the sender's threshold, and, in long-haul networking, by a very significant margin (11). Dead reckoning, which, for the sender, is a bandwidth-saving device, becomes a mandatory accuracy-maintenance procedure for the receiver. Needless to say, dead reckoning by the sender increases the delay, and so does any bandwidth-saving scheme that requires processing at the nodes.
The receiver must extrapolate the state in each packet
over the delay that the packet experienced. To make this possible, it is necessary to include a timestamp with the variables
of state in each data packet. The stamp is the time for which
the variables are valid as opposed to the time at which they
were computed or transmitted. The receiver subtracts the
timestamp from the time at which the variables are to be displayed and extrapolates over the difference. The error in the
dead reckoned state depends on the accuracy of the timestamp as well as on the extrapolation algorithm (10).
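In code, the receiver-side correction amounts to extrapolating over the packet's age; the packet layout below is a stand-in for illustration, not the actual DIS format, and the extrapolation is first-order and one-dimensional for brevity:

```python
def extrapolate_packet(packet, display_time):
    """Advance a remote state to the local display time using its timestamp.
    packet = (timestamp_s, position_m, velocity_ms): the timestamp is the time
    for which the variables are valid, not when they were sent."""
    t_valid, pos, vel = packet
    age = display_time - t_valid       # how stale the data is at display time
    return pos + vel * age

# A packet valid at t = 10.00 s, reporting 500 m at 100 m/s, displayed at
# t = 10.12 s (120 ms of accumulated delay):
shown = extrapolate_packet((10.00, 500.0, 100.0), 10.12)   # about 512 m
```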
The DIS protocol has specified a timestamp since the 1990 draft. Two versions of a timestamp were recognized: an absolute timestamp produced by a clock synchronized to coordinated universal time (UTC) and a relative timestamp produced by
a free running local clock. The relative timestamp can be used
to correct for the jumping around effect of variable delay, but
not for the lagging behind that the delay itself causes.
To produce an absolute timestamp, clocks at remotely located simulation facilities must be synchronized to within a
[Table: communication protocol requirements for normal versus simulation traffic]

Normal requirements | Simulation requirements
Acknowledgments: required | Acknowledgments: useless
Transmit queue protocol: deliver packets in order queued | Transmit queue protocol: deliver most recent packet and discard others
Receive queue protocol: process packets in order received | Receive queue protocol: process most recent packet and discard others
On a missed packet: halt transmission, ask for retransmission (required) | Impossible; discard
On a missed acknowledgment: halt process, ask for retransmission (required) | Impossible; forget
AMNON KATZ
University of Alabama
Figure 1. Types of air traffic operations controlled at ARTCC Centers in 1996 (total: 40.7 million operations): air carrier 54%, air taxi 20%, general aviation 16%, military 10%.
AIR TRAFFIC
In today's world, air travel is a primary mode of transportation. During 1996, nearly 575 million passengers boarded
scheduled air carrier flights in the United States. Over the
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
City/Airport | 1995 Enplanements | 1995 Operations
1. Chicago O'Hare | 31,255,738 | 892,330
2. Atlanta Hartsfield | 27,350,320 | 747,105
3. Dallas-Fort Worth | 26,612,579 | 873,510
4. Los Angeles | 25,851,031 | 716,293
5. San Francisco | 16,700,975 | 436,907
6. Miami | 16,242,081 | 576,609
7. Denver | 14,818,822 | 487,225
8. New York JFK | 14,782,367 | 345,263
9. Detroit Metropolitan | 13,810,517 | 498,887
10. Phoenix Sky Harbor | 13,472,480 | 522,634
traffic controllers instruct pilots when to change their direction, speed, or altitude to avoid storms or to maintain traffic
separation. Not all aircraft follow the airway system. Depending on traffic load, weather, and aircraft equipment, it is
possible for the controller to clear the aircraft on a direct
route. Of the 20 ARTCCs in the continental United States,
the five busiest in 1995 were Chicago, Cleveland, Atlanta,
Washington, and Indianapolis (3).
The FAA's Air Traffic Control System Command Center
(ATCSCC) is responsible for managing traffic flow across the
United States. The ATCSCC is located in Herndon, Virginia.
The command center oversees the entire ATC system and provides flow information to the other ATC components. If an
area is expecting delays due to weather or airport construction, the command center issues instructions to reduce traffic
congestion by slowing or holding other traffic arriving at the
trouble area.
The FAA operates numerous navigation aids (NAVAID) to
assist aircraft operations. En route navigation primarily uses
the VORTAC or VOR/DME system. A VOR/DME system consists of a network of VOR/DME radio navigation stations on
the ground that provide bearing and distance information. An
aircraft must have the proper radio equipment to receive the
signals from these systems. Civilian traffic obtains bearings
from the VOR (very high frequency, or VHF, omnidirectional
range) component and distance from the DME (distance measuring equipment). Military traffic uses the TAC or TACAN (tactical air navigation) signal. The VOR/DME system
is the NAVAID that defines the airways.
Instrument approaches to an airport runway require electronic guidance signals generated by transmitters located
near the runway. Precision approaches use the Instrument
Landing System (ILS). The ILS provides horizontal (localizer)
and vertical guidance (glideslope). A Category I ILS approach
typically allows an aircraft to descend to 200 feet AGL without seeing the runway environment. Continued descent requires that the runway environment be in view. Each airport
runway with a precision approach typically requires dedicated ILS equipment installed and certified for that runway.
Nonprecision approaches are commonly defined using VOR/
DMEs, nondirectional beacons (NDB), and localizers. A nonprecision approach does not provide glide slope guidance and,
therefore, limits the minimum altitude allowed without visual
contact with the runway.
AIRSPACE CAPACITY
The number of aircraft operations, both civilian and military,
continues to grow, which strains the capacity of the airspace
system. Over the period 1980 to 1992, traffic in the United
States grew at an average annual rate that was 0.4 percentage point faster than the increase in capacity (3). By 2005,
the number of air carrier passengers is expected to grow from
550 million (1995) to 800 million. During the same period, the
number of air carrier domestic departures is expected to grow
from 7.6 million to 8.9 million. Today's restricted airspace system will not be able to accommodate the rapid growth in aviation (3).
Delay in air carrier operations is one method of measuring
system capacity. From 1991 to 1995, the number of air carrier
operations increased more than 18% while the number of air
Figure 2. The average delay per flight phase (in minutes) during an air carrier's scheduled revenue flight: gate-hold, taxi-out, airborne, taxi-in.
Figure 3. The number of delayed air carrier flights (in thousands) for the period 1991 to 1995, with the reasons for delay: weather, terminal volume, closed runways/taxiways, NAS equipment, and other.
City/Airport | 1995 Operations | 2010 Operations | % Growth
1. Chicago O'Hare | 892,330 | 1,168,000 | 30.9
2. Dallas-Fort Worth | 873,510 | 1,221,000 | 39.8
3. Atlanta Hartsfield | 747,105 | 1,056,000 | 41.3
4. Los Angeles | 716,293 | 987,000 | 37.8
5. Miami | 576,609 | 930,000 | 61.3
6. Phoenix Sky Harbor | 522,634 | 736,000 | 40.8
7. St. Louis Lambert | 516,021 | 645,000 | 25.0
8. Las Vegas McCarran | 508,077 | 682,000 | 34.2
9. Oakland Metropolitan | 502,952 | 573,000 | 13.9
10. Detroit Metropolitan | 498,887 | 675,000 | 35.3
Total | 26,407,065 | 33,706,000 | 27.6
City/Airport | 1995 Enplanements | 2010 Enplanements | % Growth
Chicago O'Hare | 31,255,738 | 50,133,000 | 60.4
Atlanta Hartsfield | 27,350,320 | 46,416,000 | 69.7
Dallas-Fort Worth | 26,612,579 | 46,553,000 | 74.9
Los Angeles | 25,851,031 | 45,189,000 | 74.8
San Francisco | 16,700,975 | 28,791,000 | 72.4
Miami | 16,242,081 | 34,932,000 | 115.1
Denver | 14,818,822 | 22,751,000 | 53.5
New York JFK | 14,782,367 | 21,139,000 | 43.0
Detroit Metropolitan | 13,810,517 | 24,220,000 | 75.4
Phoenix Sky Harbor | 13,472,480 | 25,408,000 | 88.6
Total | 543,439,185 | 919,145,000 | 69.1
At the end of 1995, U.S. air carriers had firm orders placed
for 604 new aircraft and options on an additional 799 aircraft.
The price tag for the firm orders was $35.5 billion. The firm
orders were distributed among aircraft from Airbus Industries, Boeing Commercial Aircraft Company, McDonnell Douglas Aircraft Company, and the Canadian Regional Jet.
The most popular aircraft on order was the Boeing 737, with
218 firm orders and 260 options.
FREE FLIGHT
In April 1995, the FAA asked RTCA, Inc., an independent
aviation advisory group, to develop a plan for air traffic management called Free Flight (6). Free Flight hopes to extend
airspace capacity by providing traffic flow management to aircraft during their en route phase. By October 1995, RTCA
had defined Free Flight and outlined a plan for its implementation (7).
The Free Flight system requires changes in the current
method of air traffic control. Today, controllers provide positive control to aircraft in controlled airspace. Free Flight will
allow air carrier crews and dispatchers to choose a route of
flight that is optimum in terms of time and economy. Economic savings will be beneficial both to the air carriers and
to the passengers. Collaboration between flight crews and air
traffic managers will be encouraged to provide flight planning
that is beneficial to the aircraft and to the NAS. User flexibility may be reduced to avoid poor weather along the route, to avoid special-use airspace, or to ensure safety as aircraft enter a high-density traffic area such as an airport. The new system will offer the user fewer delays from congestion and
greater flexibility in route determination (3).
Flights transitioning the airspace in Free Flight will have
two zones surrounding the aircraft. A protected and an alert
zone are used to provide safety for the flight. The size and
shape of the zones depend on the size and speed of the aircraft. The goal is that the protected (or inner) zones of two
AIR TRAFFIC
sufficient accuracy for an aircraft to make a Category I instrument approach (200 ft ceiling/1800 ft visibility) (9).
The LAAS is dedicated to a single airport or airports in the
same area. A GPS base station is located at or near the airport. The differential correction signal is broadcast to all aircraft within a 30-mile region using an RF datalink. The LAAS
is more accurate than the WAAS since the aircraft are in
closer proximity to the base station providing the corrections.
The LAAS DGPS can be used for Category II approaches (100
ft ceiling/1200 ft visibility) and Category III approaches (0 ft
ceiling). Category III has three subcategories (A, B, C) with
visibility minimums of 700 ft, 150 ft, and 0 ft, respectively
(10).
The LAAS DGPS is also useful for ground navigation. Accurate positioning on the airport surface increases the pilot's
situational awareness during taxi operations. It also provides
ATC with an accurate knowledge of all ground traffic.
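The approach category minimums quoted above lend themselves to a simple lookup. The sketch below is illustrative only: the (ceiling, visibility) pairs come from the text, while the table layout, function name, and weather check are assumptions.

```python
# Approach category minimums from the text: (ceiling ft, visibility ft).
# The dictionary layout and helper function are illustrative assumptions.
APPROACH_MINIMUMS = {
    "CAT I":    (200, 1800),
    "CAT II":   (100, 1200),
    "CAT IIIA": (0, 700),
    "CAT IIIB": (0, 150),
    "CAT IIIC": (0, 0),
}

def approach_allowed(category, ceiling_ft, visibility_ft):
    """True if the reported weather is at or above the category minimums."""
    min_ceiling, min_visibility = APPROACH_MINIMUMS[category]
    return ceiling_ft >= min_ceiling and visibility_ft >= min_visibility
```

For example, a reported 100 ft ceiling and 1200 ft visibility satisfies the Category II minimums but not Category I.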
Automatic Dependent Surveillance-Broadcast. An ATC radar
screen displays aircraft position using the airport surveillance
radar (ASR) and the secondary surveillance radar (SSR). The
ASR transmits a radar signal that reflects from the aircraft
skin. The SSR interrogates an aircraft's transponder, which returns the aircraft's transponder code and its altitude. Aircraft equipped with a newer Mode-S transponder can return
additional data, such as heading and velocity.
The proposed NAS architecture phases out the SSR system. It will be replaced with Automatic Dependent Surveillance-Broadcast (ADS-B). Approximately twice per second, the aircraft's on-board ADS-B broadcasts the aircraft position
(latitude/longitude/altitude) and status information using the
Mode-S transponder. The ADS-B periodically broadcasts the
flight identification and the aircraft's ICAO (International
Civil Aviation Organization) address. For air carriers, the
flight identification is the flight number (for example, NW132)
that passengers, pilots, and controllers use to identify a particular flight. The ICAO address is a unique number that is
assigned to an aircraft when it is manufactured.
ADS-B provides controllers with the accurate aircraft identification and position needed to implement Free Flight. ADS-B also provides information to the ground controller during
airport surface operations. Positive identification and accurate position (using LAAS DGPS) during taxi-in and taxi-out
operations are especially important for safe and timely operations in low-visibility conditions (11).
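The broadcast just described amounts to a small set of fields transmitted about twice per second. The sketch below is hypothetical: only the field list and the roughly 2 Hz rate come from the text; the class name, the toy text encoding, and the field widths are invented for illustration and are not any real ADS-B message format.

```python
from dataclasses import dataclass

@dataclass
class AdsbReport:
    icao_address: int   # unique airframe address assigned at manufacture
    flight_id: str      # e.g. "NW132", used by passengers, pilots, controllers
    latitude: float     # degrees
    longitude: float    # degrees
    altitude_ft: int

def encode_report(r: AdsbReport) -> bytes:
    """Pack a report for the Mode-S datalink (toy fixed-width text encoding)."""
    body = (f"{r.icao_address:06X}|{r.flight_id:<8}|"
            f"{r.latitude:+09.4f}|{r.longitude:+010.4f}|{r.altitude_ft:06d}")
    return body.encode("ascii")

# The on-board unit would then transmit encode_report(...) roughly
# twice per second for the duration of the flight.
```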
Traffic Alert and Collision Avoidance System (TCAS). TCAS is
an airborne surveillance system that monitors nearby aircraft
and detects impending collisions. The position and altitude of
nearby traffic are shown on a cockpit display. TCAS transmits
transponder interrogation signals similar to those of the ground-based SSR system. Aircraft receiving the signal respond with a
normal transponder reply that includes altitude. The TCAS
can determine the bearing to the aircraft using a multielement antenna.
TCAS protects a safety zone around the aircraft. A track is
started for every traffic target detected by TCAS. The collision
avoidance logic calculates the time to a possible conflict with
each of the traffic targets. If the time to a collision or near-miss counts down to 45 s, a traffic advisory is generated informing the pilot of the situation. If the time gets to 25 s, a
resolution advisory is issued. A resolution advisory commands an evasive vertical maneuver, directing the pilot to climb or descend to resolve the conflict.
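The advisory timing above can be sketched as follows. The constant-closure-rate model and the function names are assumptions made for illustration; the 45 s and 25 s thresholds come from the text.

```python
TA_THRESHOLD_S = 45.0   # traffic advisory threshold (from the text)
RA_THRESHOLD_S = 25.0   # resolution advisory threshold (from the text)

def time_to_conflict(range_nm, closure_rate_kt):
    """Seconds until zero range, assuming a constant closure rate in knots."""
    if closure_rate_kt <= 0:            # diverging traffic never conflicts
        return float("inf")
    return range_nm / closure_rate_kt * 3600.0

def advisory(range_nm, closure_rate_kt):
    tau = time_to_conflict(range_nm, closure_rate_kt)
    if tau <= RA_THRESHOLD_S:
        return "RESOLUTION ADVISORY"
    if tau <= TA_THRESHOLD_S:
        return "TRAFFIC ADVISORY"
    return "CLEAR"
```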
Figure 6. The number of new runways and runway extensions being planned for US airports, by year (1996 through 2010 and beyond).
operations using land and hold short operations on intersecting runways (3).
Simultaneous approaches can be performed on runways
that are not parallel provided that VFR conditions exist. VFR
conditions require a minimum ceiling of 1000 ft and minimum
visibility of 3 miles. The VFR requirement decreases runway
capacity in IFR (Instrument Flight Rules) conditions and
causes weather-related delays. Simultaneous instrument approaches to converging runways are being studied. A minimum ceiling of 650 ft is required. The largest safety issue is
the occurrence of a missed approach (go-around) by both aircraft. An increase in system capacity of 30 arrivals per hour
is expected (3).
Reduced Separation Standards. A large factor in airport capacity is separation distance between two aircraft. The main
factor in aircraft separation is generation of wake vortexes.
Wake vortexes are like horizontal tornadoes created from an
aircraft wing as it generates lift. Wake vortex separation
standards are based on the class of the leading and trailing
aircraft. Small aircraft must keep a 4 nautical mile (nm) separation when trailing behind large aircraft. If the lead aircraft
is a Boeing 757, then a small aircraft must trail by 5 nm.
Large aircraft only need to trail other large aircraft by 3 nm.
The FAA and NASA are studying methods of reducing the
wake vortex separation standards to increase capacity. Any
reduction in the spacing standards must ensure that safety is
preserved (3).
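The separation standards above amount to a lookup keyed by the leading and trailing aircraft. The sketch below covers only the pairings stated in the text; a real standard covers every weight-class combination.

```python
# Separation distances (nm) from the text, keyed by (leading, trailing).
WAKE_SEPARATION_NM = {
    ("large", "small"): 4.0,   # small trailing a large aircraft
    ("B757", "small"): 5.0,    # small trailing a Boeing 757
    ("large", "large"): 3.0,   # large trailing another large aircraft
}

def required_separation_nm(leading, trailing):
    try:
        return WAKE_SEPARATION_NM[(leading, trailing)]
    except KeyError:
        raise ValueError(f"pairing not covered by this sketch: {leading}/{trailing}")
```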
EMERGING TECHNOLOGIES
Several new technologies are being developed that are not
specifically defined in the NAS. One technology that will increase system capacity is the roll-out and turn-off (ROTO)
system. The ROTO system reduces runway occupancy time
for arrivals by providing guidance cues to high-speed exits.
The ROTO system with a heads-up display gives steering and
braking cues to the pilot. The pilot is able to adjust braking
and engine reversers to maintain a high roll-out speed while
reaching the exit speed at the appropriate time. In low visibility, ROTO outlines the exit and displays a turn indicator.
Present ROTO development uses steering cues to exit the
runway; future systems could provide automatic steering capability (11).
BIBLIOGRAPHY
1. The Airline Handbook, Air Transport Association, 1995.
2. Air Traffic, FAA Administrator's Fact Book, April 30, 1997, http://
www.tc.faa.gov//ZDV/FAA/administrator/airtraffic.html
3. 1996 Airport Capacity Enhancement Plan, Federal Aviation Administration, Department of Transportation. (http://www.bts.
gov/NTL/data/96_ace.pdf)
4. North American Traffic Forecasts 1980–2010: Executive Summary,
International Air Transport Association (IATA), 1994 edition.
(http://www.atag.org/NATF/Index.html)
5. Growth in Air Traffic To Continue: ICAO Releases Long-Term Forecasts, press release, International Civil Aviation Organization,
Montreal, Canada, March 1997.
6. FAA and Aviation Community to Implement Free Flight, press
release, FAA News, Washington, DC, March 15, 1996.
JAMES M. RANKIN
Ohio University
Figure 1. Oakland and Los Angeles Air Route Traffic Control Center airspace. [The chart shows jet routes (J-routes) and VORTAC navigation fixes within the Oakland (ZOA) and Los Angeles (ZLA) center boundaries.]
in the shape of an upside-down wedding cake. At higher altitudes, the ARTCCs take on the responsibility for providing
the ATM services to the aircraft. The process is reversed as
the aircraft nears the destination airport.
The main types of equipment used in ATM are the radars,
displays, computers, and communications equipment. Radars
provide information regarding the positions of the aircraft
within the airspace. This information is processed in conjunction with the flight plans to predict future locations of the
aircraft. The display of this information is used by the air
traffic controllers in the facilities to determine if the established rules and procedures would be violated in the near future.
[Figure 2 plots millions of operations per fiscal year (1960–2000): aircraft handled, instrument operations, flight services, and airport operations.]
overwhelmed the system at some airports. Flow control measures such as ground holding and airborne holding were put into practice to match the traffic rate with the airport acceptance rate.
The traffic growth starting from the middle of the fourth
phase of the ATM development to the present is shown in Fig.
2. The graphs in the figure are based on the data provided in
the FAA Air Traffic Activity report (4), FAA Aviation Forecasts publication (5), and the FAA Administrators Fact Book
(6). It should be noted that the number of airport operations
is representative of usage by all aircraft operators including
general aviation while the aircraft handled is representative
of higher-altitude traffic reported by the ARTCCs. Several interesting trends can be observed from the graphs: traffic
growth subsequent to the Airline Deregulation Act of 1978,
traffic decline after the PATCO strike in 1981, and the eventual recovery after approximately 3 years. All the graphs except the one for flight service usage show an increasing trend.
The decreasing trend in the flight service usage since 1979 is
due to (a) improved cockpit equipage, with part of the service
being provided by the airline operations centers (AOCs), and
(b) consolidation of the FAA flight service facilities.
Figure 2. Air traffic activity historical data (operations per fiscal year).
termed terminal Doppler weather radar (TDWR) has been developed to provide windshear data within the terminal areas.
This system will be integrated with the low-level windshear
alert system (LLWAS) to enhance the weather prediction accuracy (12,13). LLWAS uses direct anemometer measurements. Plans have been made to field automated surface
weather observing systems at small and medium-sized airports. This system, known as the automated weather observing system (AWOS), is designed to provide data to the
national observation network. Traditionally, vertical wind
profiling data consisting of windspeed, temperature, pressure,
and humidity aloft have been measured by launching balloon
systems from widely separated locations. In the future, vertical
wind profiling will be done using a microwave Doppler system. An important resource for aviation weather is the wind
and temperature data observed by thousands of aircraft for
navigation and performance monitoring. Some airlines already have their flights provide wind and temperature data
periodically via ACARS downlink. As datalink technologies
mature, it will be possible to collect the airborne observation
data in large databases to augment the data collected by the
ground-based observation systems. Access to airborne observation data will enable identification of turbulence regions
which are usually much smaller than what can be predicted
using the ground-based systems (12). Finally, improved
weather observations will also be available from weather satellite systems using radar and radiometer measurements of
winds, temperature, humidity, and precipitation.
In addition to the enhancements in the weather sensor systems, the computational and information processing algorithms are also expected to improve. Computational algorithms will make short-term forecasts (nowcasts) possible
within 10 min of thunderstorm formation by detecting temperature and moisture boundaries in the observation data.
Currently available weather systems generate large amounts of data that the aviation user must sort through to obtain the needed facts; these will be replaced by rule-based
weather information systems (12). These systems will provide
precise weather messages in contrast with the often lengthy
and ambiguous weather briefings provided by the presently
available systems.
Decision Support Systems
As progress is made toward a more cooperative and flexible
air traffic environment, the biggest challenge for ATM is to
improve or at least retain the current levels of safety. Currently, safety is defined in terms of separation requirements.
Lateral separation is maintained largely by constraining the
traffic to fly on fixed airways. Vertical separation is achieved
by constraining the aircraft to fly at assigned altitudes. Longitudinal separation is maintained by ensuring that the aircraft
on the same airway are separated by a physical distance as a
function of the relative speed of the aircraft, their location
with respect to the surveillance radar, and their weight class.
The path constraints make the traffic movement predictable,
which in turn makes it possible to identify separation violations that are likely to occur in the future. In a flexible air
traffic environment with few constraints on traffic movement,
decision support systems will be needed for achieving the
same or better levels of predictability. These systems will predict the future positions of the aircraft and check whether the required separations would be violated.
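A position-prediction check of this kind can be sketched as straight-line extrapolation plus a pairwise distance test. Everything below is an illustrative assumption: the 5 nm separation figure, the time step, and the data layout are not drawn from the text.

```python
from itertools import combinations

def predict(pos, vel, t):
    """Extrapolate a 2-D position (nm) along a constant velocity (nm/s)."""
    return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

def conflicts(aircraft, horizon_s, step_s=10.0, sep_nm=5.0):
    """aircraft maps name -> (position, velocity); returns conflicting pairs."""
    found = set()
    t = 0.0
    while t <= horizon_s:
        for a, b in combinations(aircraft, 2):
            pa = predict(*aircraft[a], t)
            pb = predict(*aircraft[b], t)
            if ((pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2) ** 0.5 < sep_nm:
                found.add((a, b))
        t += step_s
    return found
```

Two head-on aircraft 60 nm apart closing at a combined 0.2 nm/s would be flagged well inside a 600 s horizon, while widely offset parallel traffic would not.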
GLOBAL ATM
Although ATM has been discussed in terms of the domestic
air traffic operations within the United States, it is recognized
that civil aviation is an international activity. There are 183
International Civil Aviation Organization (ICAO) member nations that are interested in the development of airborne systems, ground systems, standards, and procedures for enabling
seamless operations worldwide. For achieving this goal, the
ICAO develops standards which are collectively known as the
International Standards and Recommended Practices (1). Except for a few minor differences, the ATM system in the
United States conforms to the ICAO standards.
The airspace in Europe is shared by several nations, and
the 36 member states of the European Civil Aviation Conference
BIBLIOGRAPHY
1. M. S. Nolan, Fundamentals of Air Traffic Control, Belmont, CA:
Wadsworth, 1994.
2. S. Kahne and I. Frolow, Air traffic management: Evolution with
technology, IEEE Control Syst. Magazine, 16 (4): 12–21, August
1996.
3. G. A. Gilbert, Historical development of the air traffic control system, IEEE Trans. Commun., 21: 364–375, 1973.
4. N. Trembley, FAA Air Traffic Activity, Washington, DC: Federal
Aviation Administration, US Department of Transportation,
1994.
5. Office of Aviation Policy and Plans, FAA Aviation Forecasts
Fiscal Year 1992–2003, Washington, DC: Federal Aviation Administration, US Department of Transportation, 1992.
6. Office of Business Information and Consultation, Administrator's
Fact Book, Washington, DC: Federal Aviation Administration, US
Department of Transportation, 1996.
7. T. S. Perry, In search of the future of air traffic control, IEEE
Spectrum, 34 (8): 19–35, August 1997.
8. Final Report of the RTCA Task Force 3: Free Flight Implementation, RTCA, Inc., Washington, DC, October 26, 1995.
9. B. W. Parkinson and J. J. Spilker, Jr. (eds.), Global Positioning
System: Theory and Applications, Vols. I and II, Washington, DC:
American Institute of Aeronautics and Astronautics, 1996.
B. SRIDHAR
G. B. CHATTERJI
NASA Ames Research Center
AIRCRAFT COMPUTERS
The aircraft industry and the computer industry are relative newcomers in two centuries of technical innovation. It is only natural that these powerful industries have merged to provide continuous improvements in
capabilities and services for aircraft customers. Landau (1) defines an aircraft as "any structure or machine designed to travel through the air." He then defines a computer as "a person who computes" or "a device used for computing." From these definitions, an aircraft computer is a device used on (or in association with) any air-traveling machine or structure to make computations. Computers can be found in every aspect of
the aircraft industry. On the aircraft, there are computers for flight control and display, computers monitoring and regulating flight functions, computers recording and processing flight activities, computers providing
passenger entertainment, and computers providing communication and navigation. Equally important are the
ground-based computers at airports, maintenance depots, and air traffic control stations that provide services
for all aspects of flight.
Figure 1 shows a typical aircraft central computer (CC) used in modern fighter aircraft. This particular
computer is also referred to as a fire-control computer (FCC), because it directs the delivery of weapons in
conjunction with the aircraft's sensor systems.
Aircraft Analog Computers. Early aircraft computers were used to take continuous streams of inputs
to provide flight assistance. Examples of aircraft analog inputs are fuel gauge readings, throttle settings, and
altitude indicators. Landau (1) defines an analog computer as "a computer for processing data represented by a continuous physical variable, such as electric current." Analog computers monitor these inputs and implement
a predetermined service when some set of inputs calls for a flight control adjustment. For example, when fuel
levels are below a certain point, the analog computer would read a low fuel level in the aircraft's main fuel tanks and would initiate pumping of fuel from the reserve tanks or balancing of fuel between the wing tanks.
Some of the first applications of analog computers to aircraft applications were for automatic pilot applications,
where these analog machines took flight control inputs to hold altitude and course. The analog computers use
operational amplifiers to build the functionality of summers, adders, subtracters, and integrators on the electric
signals.
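As an illustration of one of these building blocks, the toy model below integrates a sampled input the way an ideal inverting op-amp integrator would. The component values and step size are arbitrary assumptions.

```python
def integrate(v_in, dt, r=10e3, c=1e-6):
    """Simulate an ideal inverting op-amp integrator: dVout/dt = -Vin/(RC)."""
    v_out, out = 0.0, []
    for v in v_in:
        v_out += -(v / (r * c)) * dt
        out.append(v_out)
    return out

# With R = 10 kilohms and C = 1 microfarad, a constant 1 V input ramps
# the output down at 1/(RC) = 100 V/s.
```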
Aircraft Digital Computers. As the technologies used to build digital computers evolved, digital computers became smaller, lighter, and less power-hungry, and produced less heat. This made them increasingly
acceptable for aircraft applications. Digital computers are synonymous with stored-program computers. A
stored-program computer has the flexibility of being able to accomplish multiple different tasks simply by
changing the stored program. Analog computers are hard-wired to perform one and only one function. An analog computer's data, as defined earlier, are continuous physical variables. Analog computers may be able to recognize and process numerous physical variables, but each variable has its unique characteristics that must
be handled during processing by the analog computer. The range of output values for the analog computer is bounded to a given voltage range; if outputs exceed that range, they saturate. Digital computers are not constrained
by physical variables. All the inputs and outputs of the digital computer are in a digital representation. The
processing logic and algorithms performed by the computer work in a single representation of the cumulative
data. It is not uncommon to see aircraft applications that have analog-to-digital and digital-to-analog signal
converters. This is more efficient than having the conversions done within the computers. Analog signals to
the digital computer are converted to digital format, where they are quickly processed digitally, and returned
to the analog device through a digital-to-analog converter as an analog output for that device to act upon.
These digital computers are smaller, more powerful, and easier to integrate into multiple areas of aircraft
applications.
Landau (1) defines a digital computer as "a computer for processing data represented by discrete, localized physical signals, such as the presence or absence of an electric current." These signals are represented as a
series of bits with word lengths of 16, 32, and 64 bits. See microcomputers for further discussion.
Wakerly (2) shows number systems and codes used to process binary digits in digital computers. Some
important number systems used in digital computers are binary, octal, and hexadecimal numbers. He also
shows conversion between these and base-10 numbers, as well as simple mathematical operations such as
addition, subtraction, division, and multiplication. The American Standard Code for Information Interchange
(ASCII) of the American National Standard Institute is also presented, which is Standard No. X3.4-1968
for numerals, symbols, characters, and control codes used in automatic data-processing machines, including
computers.
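The number systems and codes described above can be exercised directly with Python built-ins; the values are arbitrary examples.

```python
n = 0b10101100             # binary literal, decimal 172
assert n == 172
assert oct(n) == "0o254"   # octal representation
assert hex(n) == "0xac"    # hexadecimal representation
assert int("254", 8) == 172 and int("AC", 16) == 172

# ASCII maps characters to numeric codes; 'A' is 65 (0x41).
assert ord("A") == 65 and chr(0x41) == "A"
```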
Microcomputers. Improvements in the size, speed, and cost of computer technology continually yield new consumer computer products. Many of these products were unavailable to the average
computing. Landau (1) defines microcomputers as very small, relatively inexpensive computers whose central
processing unit is a microprocessor. A microprocessor (also called MPU or central processing unit [CPU]) communicates with other devices in the system through wires (or fiber optics) called lines. Each device has a unique
address, represented in binary format, that the MPU recognizes. The number of lines is also the address size in
bits. Early MPU machines had 8-bit addresses. Machines of 1970–1980 typically had 16-bit addresses; modern
MPU machines have 256 bits.
Common terminology for an MPU includes random-access memory (RAM), read-only memory (ROM), input/output (I/O), clock, and interrupts. RAM is volatile storage. It holds both data and instructions for the MPU. ROM may hold both instructions and data. The key point of ROM is that it is nonvolatile. Typically, in an MPU, there is no operational difference between RAM and ROM other than volatility. Input/output is how data are moved to and from the microcomputer. Output may be from the MPU, ROM, or RAM. Input may be from
the MPU or the RAM. The clock of an MPU synchronizes the execution of the MPU instructions. Interrupts
are inputs to the MPU that cause it to (temporarily) suspend one activity in order to perform a more important
activity.
An important family of MPUs that greatly improved the performance of aircraft computers is the Motorola
M6800 family of microcomputers. This family offered a series of improvements in memory size, clock speeds,
functionality, and overall computer performance.
Personal Computers. Landau (1) defines personal computers as electronic machines that can be owned
and operated by individuals for home and business applications such as word processing, games, finance, and
electronic communications. Hamacher et al. (3) explain that rapidly advancing very large-scale integrated
circuit (VLSI) technology has resulted in dramatic reductions in the cost of computer hardware. The greatest
impact has been in the area of small computing machines, where it has led to an expanding market for personal
computers.
The idea of a personally owned computer is fairly new. The computational power available in hand-held
toys today was only available through large, costly computers in the late 1950s and early 1960s. Vendors such as
Atari, Commodore, and Compaq made simple computer games household items. Performance improvements
in memory, throughput, and processing power by companies such as IBM, Intel, and Apple made facilities
such as spreadsheets for home budgets, automated tax programs, word processing, and three-dimensional
virtual games common household items. The introduction of Microsoft's Disk Operating System (DOS) and
Windows has also added to the acceptance of the personal computers through access to software applications.
Computer technology delivers improvements continually, often multiple times a year. The durability and portability of these computers are beginning to allow them to replace specialized aircraft computers that
had strict weight, size, power, and functionality requirements.
Avionics
In the early years of aircraft flight, technological innovation was directed at improving flight performance
through rapid design improvements in aircraft propulsion and airframes. Secondary development energies
went to areas such as navigation, communication, munitions delivery, and target detection. The secondary
functionality of aircraft evolved into the field of avionics. Avionics now provides greater overall performance
and accounts for a greater share of aircraft life-cycle costs than either propulsion or airframe components.
Landau (1) defines avionics [avi(ation) + (electr)onics] as "the branch of electronics dealing with the development and use of electronic equipment in aviation and astronautics." The field of avionics has evolved
rapidly as electronics has improved all aspects of aircraft flight. New advances in these disciplines require
avionics to control flight stability, which was traditionally the pilots role.
Aircraft Antennas. An important aspect of avionics is receiving and transmitting electromagnetic signals. Antennas are devices for transmitting and receiving radio frequency (RF) energy from other aircraft,
space applications, or ground applications. Perry and Geppert (4) illustrate the aircraft electromagnetic spectrum, influenced by the placement and usage of numerous antennas on a commercial aircraft. Golden (5)
illustrates simple antenna characteristics of dipole, horn, cavity-backed spiral, parabola, parabolic cylinder,
and Cassegrain antennas.
Radiation pattern characteristics include elevation and azimuth. The typical antenna specifications are
polarization, beam width, gain, bandwidth, and frequency limit.
Computers are becoming increasingly important for the new generation of antennas, which include phased
array antennas and smart-skin antennas. For phased array antennas, computers are needed to configure the
array elements to provide direction and range requirements between the radar pulses. Smart-skin antennas
comprise the entire aircraft's exterior fuselage surface and wings. Computers are used to configure the portion
of the aircraft surface needed for some sensor function. The computer also handles sensor function prioritization
and deinterleaving of conflicting transmissions.
Aircraft Sensors. Sensors (the eyes and ears of aircraft) are electronic devices for measuring external
and internal environmental conditions. Sensors on aircraft include devices for sending and receiving RF energy.
These types of sensors include radar, radio, and warning receivers. Another group of sensors are the infrared
(IR) sensors, which include lasers and heat-sensitive sensors. Sensors are also used to measure direct analog
inputs; altimeters and airspeed indicators are examples. Many of the sensors used on aircraft have their own
built-in computers for serving their own functional requirements such as data preprocessing, filtering, and
analysis. Sensors can also be part of a computer interface suite that provides key aircraft computers with the
direct environmental inputs they need to function.
Aircraft Radar. Radar (radio detection and ranging) is a sensor that transmits RF energy to detect
air and ground objects and determines parameters such as the range, velocity, and direction of these objects.
The radar serves as the aircraft's primary sensor. Several services are provided by modern aircraft radar. These
include tracking, mapping, scanning, and identification. Golden (5) states that radar is tasked either to detect
the presence of a target or to determine its location. Depending on the function emphasized, a radar system
might be classified as a search or a tracking radar.
Stimson (6) describes the decibel (named after Alexander Graham Bell) as one of the most widely used
terms in the design and description of radar systems. The decibel (dB) is a logarithmic unit originally devised
to express power ratios, but it is also used to express a variety of other ratios. The power ratio in dB is 10 log10(P2/P1), where P2 and P1 are the power levels being compared. Expressed in terms of voltage, the gain is 10 log10(V2/V1)^2 = 20 log10(V2/V1) dB, provided the input voltage V1 and output voltage V2 are across equal resistances.
Stimson (6) also explains the concept of the pulse repetition frequency (PRF), which is the rate at which a radar system's pulses are transmitted: the number of pulses per second. The interpulse period T of a radar
is given by T = 1/PRF. For a PRF of 100 Hz, the interpulse period would be 0.01 s.
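Both relations can be checked numerically; the helper names below are illustrative.

```python
import math

def power_db(p2, p1):
    """Power ratio in decibels: 10 log10(P2/P1)."""
    return 10 * math.log10(p2 / p1)

def voltage_db(v2, v1):
    """Across equal resistances P is proportional to V^2, so 20 log10(V2/V1)."""
    return 20 * math.log10(v2 / v1)

def interpulse_period(prf_hz):
    """Interpulse period T = 1/PRF."""
    return 1.0 / prf_hz

assert power_db(100, 1) == 20.0        # a 100:1 power ratio is 20 dB
assert voltage_db(10, 1) == 20.0       # a 10:1 voltage ratio is also 20 dB
assert interpulse_period(100) == 0.01  # the 100 Hz example in the text
```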
The Doppler effect, as described by Stimson (6), is a shift in the frequency of a radiated wave, reflected
or received by an object in motion. By sensing Doppler frequencies, radar not only can measure range rates,
but can also separate target echoes from clutter, or can produce high-resolution ground maps. Computers are
required by an aircraft radar to make numerous and timely calculations with the received radar data, and to configure the radar to meet the aircrew's needs.
Aircraft Data Fusion. Data fusion is a method for integrating data from multiple sources in order to
give a comprehensive solution to a problem (multiple input, single output). For aircraft computers, data fusion
specifically deals with integrating data from multiple sensors such as radar and infrared sensors. For example,
in ground mapping, radar gives good surface parameters, while the infrared sensor provides the height and size
of items in the surface area being investigated. The aircraft computer takes the best inputs from each sensor,
provides a common reference frame to integrate these inputs, and returns a more comprehensive solution than
either single sensor could have given.
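One common way to realize such a combination (though not necessarily the method any particular aircraft computer uses) is an inverse-variance weighted average; the sensor variances below are made-up numbers.

```python
def fuse(x_radar, var_radar, x_ir, var_ir):
    """Combine two estimates of one quantity, weighting by inverse variance."""
    w_r = 1.0 / var_radar
    w_i = 1.0 / var_ir
    x = (w_r * x_radar + w_i * x_ir) / (w_r + w_i)
    var = 1.0 / (w_r + w_i)   # the fused variance is smaller than either input
    return x, var
```

Equally trusted sensors simply average their estimates; a noisier sensor is weighted down, so the fused answer leans toward the better measurement.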
Aircraft Navigation. Navigation is the science of determining present location, desired location, obstacles between these locations, and best courses to take to reach these locations. An interesting pioneer of aircraft
navigation was James Harold Doolittle (1896–1993). Best known for his carrier-based bomber raid on Tokyo in World War II, General Doolittle received his master's and doctor of science degrees in aeronautics
from the Massachusetts Institute of Technology, where he developed instrument blind flying in 1929. He made
navigation history by taking off, flying a set course, and landing without seeing the ground. For a modern aircraft, with continuous changes in altitude, airspeed, and course, navigation is a challenge. Aircraft computers
help meet this challenge by processing the multiple inputs and suggesting aircrew actions to maintain course,
avoid collision and weather, conserve fuel, and suggest alternative flight solutions.
An important development in aircraft navigation is the Kalman filter. Welch and Bishop (7) state that
in 1960, R.E. Kalman published his famous paper describing a recursive solution to the discrete-data linear
filtering problem. Since that time, due in large part to advances in digital computing, the Kalman filter has
been the subject of extensive research and application, particularly in the area of autonomous or assisted
navigation. The Kalman filter is a set of mathematical equations that provides an efficient computational
(recursive) implementation of the least-squares method. The filter is very powerful in several aspects: it
supports estimation of past, present, and even future states, and it can do so even when the precise nature of
the modeled system is unknown.
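A minimal one-dimensional version makes the recursive predict/update structure concrete. The noise parameters and the constant-state model below are arbitrary assumptions, not part of the cited description.

```python
def kalman_1d(measurements, q=1e-4, r=0.1, x0=0.0, p0=1.0):
    """Recursively estimate a slowly varying scalar from noisy measurements."""
    x, p, estimates = x0, p0, []
    for z in measurements:
        p = p + q               # predict: uncertainty grows between updates
        k = p / (p + r)         # Kalman gain balances prediction vs measurement
        x = x + k * (z - x)     # update the estimate toward the measurement
        p = (1 - k) * p         # update (shrink) the uncertainty
        estimates.append(x)
    return estimates

# Each step is a least-squares blend of the prediction and the new
# measurement, so the estimate tightens as data accumulate.
```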
The Global Positioning System (GPS) is a satellite reference system that uses multiple satellite inputs to
determine location. Many modern systems, including aircraft, are equipped with GPS receivers, which allow the
system access to the network of GPS satellites and the GPS services. Depending on the quality and privileges
of the GPS receiver, the system can have an instantaneous input of its current location, course, and speed
within centimeters of accuracy. GPS receivers, another type of aircraft computer, can also be programmed to
inform aircrews of services related to their flight plan.
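The underlying geometry of satellite positioning can be illustrated with a simplified two-dimensional sketch (real GPS solves in three dimensions with four or more pseudoranges and also estimates receiver clock error). The reference positions and ranges below are invented for the example.

```python
def trilaterate(p1, p2, p3, r1, r2, r3):
    """Locate a receiver from ranges to three known reference points.

    Subtracting the first range equation from the other two gives two
    linear equations a*x + b*y = c, solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Receiver actually at (3, 4); ranges computed from that position.
pos = trilaterate((0, 0), (10, 0), (0, 10), 5.0, 65 ** 0.5, 45 ** 0.5)
```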
Before the GPS receiver, the inertial navigation system (INS) was the primary navigation system on
aircraft. Fink and Christiansen (8) describe inertial navigation as the most widely used self-contained technology. In the case of an aircraft, the INS is contained within the aircraft and is not dependent on outside
inputs. Accelerometers constantly sense the vehicle's movements and convert them, by double integration,
into distance traveled. To reduce errors caused by vehicle attitude, the accelerometers are mounted on a
gyroscopically controlled stable platform.
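The double integration just mentioned can be sketched numerically. The following toy example integrates a stream of acceleration samples twice, by the trapezoidal rule, to recover velocity and distance traveled; real INS hardware performs this continuously and must also compensate for platform attitude and sensor drift.

```python
def double_integrate(accels, dt):
    """Return (velocity, distance) from acceleration samples in m/s^2."""
    v = 0.0        # velocity, m/s
    d = 0.0        # distance traveled, m
    prev_a = accels[0]
    for a in accels[1:]:
        v_new = v + 0.5 * (prev_a + a) * dt   # first integration: a -> v
        d += 0.5 * (v + v_new) * dt           # second integration: v -> d
        v, prev_a = v_new, a
    return v, d

# Constant 2 m/s^2 sensed for 10 s (101 samples at 0.1 s spacing):
# exact result is v = 20 m/s and d = 100 m.
v, d = double_integrate([2.0] * 101, 0.1)
```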
Aircraft Communications. Communication technologies on aircraft are predominantly radio communication. This technology allows aircrews to communicate with ground controllers and other aircraft. Aircraft
computers help establish, secure, and amplify these important communication channels.
Aircraft Displays. Displays are visual monitors in aircraft that present desired data to aircrews and
passengers. Adam and Gibson (9) illustrate F-15E displays used in the Gulf War. These illustrations show
heads-up displays (HUDs), vertical situation displays, radar warning receivers, and low-altitude navigation
and targeting system (Lantirn) displays typical of modern fighter aircraft. Sweet (10) illustrates the displays of
a Boeing 777, showing the digital bus interface to the flight-deck panels and an optical-fiber data distribution
interface that meets industry standards.
Aircraft Instrumentation. Instrumentation of an aircraft means installing data collection and analysis
equipment to collect information about the aircraft's performance. Instrumentation equipment includes various
recorders for collecting real-time flight parameters such as position and airspeed. Instruments also capture
flight control inputs, environmental parameters, and any anomalies encountered in flight test or in routine
flight. Onboard recorders, however, have limited storage capacity. One method of overcoming this limitation is to link flight instruments to ground recording systems,
which are not limited in their data recording capacities. A key issue here is the bandwidth between the aircraft
being tested and its ground (recording) station. This bandwidth is limited and places important limitations on
what can be recorded. This type of data link is also limited to the range of the link, limiting the aircraft's range
and altitude during this type of flight test. Aircraft computers are used both in processing the data as they are
being collected on the aircraft and in analyzing the data after they have been collected.
Aircraft Embedded Information Systems. Embedded information system is the latest terminology
for an embedded computer system. The software of the embedded computer system is now referred to as embedded information. The purpose of the aircraft embedded information system is to process flight inputs (such
as sensor and flight control) into usable flight information for further flight-system or aircrew utilization. The
embedded information system is a good example of the merging of two camps of computer science applications.
The first, and larger, camp is the management of information systems (MIS). The MIS dealt primarily with
large volumes of information, with primary applications in business and banking. The timing requirements of
processing these large information records are measured in minutes or hours. The second camp is the real-time
embedded computer camp, which was concerned with processing a much smaller set of data, but in a very timely
fashion. The real-time camp's timing requirements are measured in microseconds. These camps are now merging, because
their requirements are converging. MIS increasingly needs real-time performance, while real-time systems are
required to handle increased data-processing workloads. The embedded information system addresses both
needs.
Aircraft and the Year 2000. The year 2000 (Y2K) has been a major concern for the aircraft computer
industry. Many of the embedded computers on aircraft and aircraft support functions are vulnerable to Y2K
faults, because of their age. The basic problem with these computers has been that a year is represented by
its low-order two digits. Instead of the year having four digits, these computers saved processing power by
using the last two digits of the calendar year. For example, 1999 is represented as 99. This is not a problem
until you reach the year 2000, represented as 00. Even with this representation, problems are limited to those
algorithms sensitive to calendar dates. An obvious problem arises when an algorithm divides by the calendar date,
which in the year 00 becomes division by 0. Division by 0 is an illegal computer operation, causing problems such as infinite loops,
execution termination, and system failure. The most commonly mentioned issue is the subtraction of dates to
determine time durations and to compare dates. The problem is not that the computer programs fail in a
very obvious way (e.g., a divide-by-zero check) but, rather, that the program computes an incorrect result without
any warning or indication of error. Lefkon and Payne (11) discuss Y2K and how to make embedded computers
compliant.
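One common compliance repair, often called windowing, can be sketched as follows: two-digit years below an assumed pivot are interpreted as 20xx and the rest as 19xx, so date subtraction across the century boundary yields the correct duration. The pivot value below is an assumption chosen for illustration.

```python
PIVOT = 50  # assumed cutover: 00-49 -> 2000s, 50-99 -> 1900s

def expand_year(yy):
    """Map a two-digit year to a four-digit year using a fixed window."""
    return 2000 + yy if yy < PIVOT else 1900 + yy

def years_between(yy_start, yy_end):
    """Duration computed on expanded years. The naive two-digit
    subtraction fails across the century boundary."""
    return expand_year(yy_end) - expand_year(yy_start)

naive = 0 - 99                  # broken legacy arithmetic for 1999 -> 2000
fixed = years_between(99, 0)    # windowed arithmetic gives 1 year
```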
Aircraft Application Program Interfaces. An application programming interface (API) is conventionally defined as an interface used by one program to make use of the services of another program. The human
interface to a system is usually referred to as the user interface or, less commonly, the human–computer
interface. Application programs are software written to solve specific problems. For example, the embedded
computer software that paints the artificial horizon on a heads-up display is an application program. A switch
that turns the artificial horizon on or off is an API. Gal-Oz and Isaacs (12) discuss APIs and how to relieve
bottlenecks of software debugging.
Aircraft Control. Landau (1) defines a control as an instrument or apparatus used to regulate a
mechanism, or a device used to adjust or control a system. There are two concepts of control. One is the act
of control. The other is the type of device used to enact control. An example of an act of control is when a pilot
initiates changes to throttle and stick settings to alter flight path. The devices of control, in this case, are the
throttle and stick.
Control can be active or passive. Active control is force-sensitive. Passive control is displacement-sensitive.
Mechanical control is the use of mechanical devices, such as levers or cams, to regulate a system. The
earliest form of mechanical flight control was wires or cables, used to activate ailerons and stabilizers through
pilot stick and foot pedal movements. Today, hydraulic control, the use of fluids for activation, is usual. Aircraft
control surfaces are connected to stick and foot pedals through hydraulic lines. Pistons in the control surfaces are
pushed or pulled by associated similar pistons in the stick or foot pedal. The control surfaces move accordingly.
Electronic control is the use of electronic devices, such as motors or relays, to regulate a system. A
motor is turned on by a switch, and quickly changes control surfaces by pulling or pushing a lever on the
surface. Automatic control is a system-initiated response to a known set
of environmental conditions. Automatic control was used for early versions of automatic pilot systems, which
tied flight-control feedback systems to altitude and direction indicators. The pilot sets his desired course and
altitude, which are maintained through the flight control's automatic feedback system.
To understand the need for computers in these control techniques, it is important to note the progression
of the complexity of the techniques. The earliest techniques connected the pilot directly to his control surfaces.
As aircraft functionality increased, the pilot's workload also increased, requiring that he (or his aircrew) be
free to perform other duties. Additionally, flight characteristics became more complex, requiring more frequent
and instantaneous control adjustments. The use of computers helped offset and balance the increased workload
in aircraft. The application of computers to flight control provides a means for processing and responding to
multiple complex flight control requirements.
Aircraft Computer Hardware. For aircraft computers, hardware includes the processors, buses, and
peripheral devices inputting to and outputting from the computers. Landau (1) defines hardware as apparatus
used for controlling spacecraft; the mechanical, magnetic, and electronic design, structure, and devices of a
computer; and the electronic or mechanical equipment that uses cassettes, disks, etc. The computers used
on an aircraft are called processors. The processor takes inputs from peripheral devices and provides specific
computational services for the aircraft.
There are many types and functions of processors on aircraft. The most obvious processor is the central
computer, also called the mission computer. The central computer provides direct control and display to the
aircrew. The federated architecture (discussed in more detail later) is based on the central computer directing
the scheduling and tasking of all the aircraft subsystems. Other noteworthy computers are the data-processing
and signal-processing computers of the radar subsystem and the computer of the inertial navigation system.
Processors are in almost every component of the aircraft. Through the use of an embedded processor, isolated
components can perform independent functions as well as self-diagnostics.
Distributed processors offer improved aircraft performance and, in some cases, redundant processing
capability. Parallel processors are two or more processors configured to increase processing power by sharing
tasks. The workload of the shared processing activity is distributed amongst the pooled processors to decrease
the time it takes to form solutions. Usually, one of the processors acts as the lead processor, or master, while
the other processor(s) act as slave(s). The master processor schedules the tasking and integrates the final
results. On aircraft, this is particularly useful in that processors are distributed throughout the aircraft. Some
of these computers can be configured to be parallel processors, offering improved performance and redundancy.
Aircraft system redundancy is important, because it allows distributed parallel processors to be reconfigured
when there is a system failure. Reconfigurable computers are processors that can be reprogrammed to perform
different functions and activities. Before computers, it was very difficult to modify systems to adapt to their
changing requirements. A reconfigurable computer can be dynamically reprogrammed to handle a critical
situation, and then returned to its original configuration.
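The master/slave tasking pattern described above can be sketched as follows, with threads standing in for the aircraft's physically distributed processors; the workload (a sum of squares) is invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """Slave: process one slice of the shared workload."""
    return sum(x * x for x in chunk)

def master_sum_of_squares(data, workers=4):
    """Master: schedule the tasking and integrate the final results."""
    chunks = [data[i::workers] for i in range(workers)]  # distribute work
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))        # integrate results

total = master_sum_of_squares(list(range(1000)))
```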
Aircraft Buses. Buses are links between computers (processors), sensors, and related subsystems, for
transferring data inputs and outputs. Fink and Christiansen (8) describe two primary buses as data buses
and address buses. To complete its function, a microprocessor unit (MPU) must access memory and peripheral
devices. This is accomplished by placing data on a bus, either an address bus or a data bus, depending upon the
function of the operation. The standard 16-bit microprocessor requires a 16-line parallel bus for each function.
An alternative is to multiplex the address or data bus to reduce the number of pin connections. Common buses
in aircraft are the Military Standard 1553 Bus (Mil-Std-1553) and the General-Purpose Interface Bus (GPIB),
which is the IEEE Standard 488 Bus.
Aircraft Software. Landau (1) defines software as the programs, routines, etc. for a computer. The
advent of software has provided great flexibility and adaptability to almost every aspect of life. This is especially
true in all areas of aerospace sciences, where flight control, flight safety, in-flight entertainment, navigation,
and communications are continuously being improved by software upgrades.
Operational Flight Programs. An operational flight program (OFP) is the software of an aircraft embedded
computer system. An OFP is associated with an aircraft's primary flight processors, including the central
computer, vertical and multiple display processors, data processors, signal processors, and warning receivers.
Many OFPs in use today require dedicated software integrated support environments to upgrade and maintain
them as the mission requirements of their parent aircraft are modified. The software integrated support
environment [also called avionics integrated support environment (AISE), centralized software support activity
(CSSA), and software integration laboratory (SIL)] not only allows an OFP to be updated and maintained, but
also provides capabilities to perform unit testing, subsystem testing, and some of the integrated system testing.
Assembly Language. Assembly language is a machine (processor) language that represents inputs
and outputs as digital data and that enables the machine to perform operations with those data. For a good
understanding of the Motorola 6800 Assembler Language, refer to Bishop (13). According to Seidman and Flores
(14), the lowest-level (closest to the machine) language available to most computers is assembly language. When
one writes a program in assembly code, alphanumeric characters are used instead of binary code. A special
program called an assembler (provided with the machine) is designed to take the assembly statements and
convert them to machine code. Assembly language is unique among programming languages in its one-to-one
correspondence between the machine code statements produced by the assembler and the original assembly
statements. In general, each line of assembly code assembles into one machine statement.
Higher-Order Languages. Higher-order languages (HOLs) are computer languages that facilitate human language structures to perform machine-level functions. Seidman and Flores (14) discuss the level of
discourse of a programming language as its distance from the underlying properties of the machine on which it
is implemented. A low-level language is close to the machine, and hence provides access to its facilities almost
directly; a high-level language is far from the machine, and hence insulated from the machines peculiarities. A
language may provide both high-level and low-level constructs. Weakly typed languages are usually high-level,
but often provide some way of calling low-level subroutines. Strongly typed languages are always high-level,
and they provide means for defining entities that more closely match the real-world objects being modeled.
Fortran is a low-level language that can be made to function as high-level by use of subroutines designed for the
application. APL, Snobol, and SETL (a set-theoretic language) are high-level languages with fundamental data
types that pervade their language. Pascal, Cobol, C, and PL/I are all relatively low-level languages, in which
the correspondence between a program and the computations it causes to be executed is fairly obvious. Ada is
an interesting example of a language with both low-level and high-level properties. Ada provides quite explicit
mechanisms for specifying the layout of data structures in storage, for accessing particular machine locations,
and even for communicating with machine interrupt routines, thus facilitating low-level requirements. Ada's
strong typing qualities, however, also qualify it as a high-level language.
High-level languages have far more expressive power than low-level languages, and the modes of expression are well integrated into the language. One can write quite short programs that accomplish very complex
operations. Gonzalez (15) developed an Ada Programmer's Handbook that presents the terminology of the
HOL Ada and examples of its use. He also highlights some common programmer errors and examples
of those errors. Sodhi (16) discusses the advantages of using Ada. Important discussions of software life-cycle
engineering and maintenance are presented, along with the concept of configuration management.
The package concept is one of the most important developments to be found in modern programming
languages, such as Ada, Modula-2, Turbo Pascal, C++, and Eiffel. The designers of the different languages
have not agreed on what terms to use for this concept: package, module, unit, and class are commonly used. But
it is generally agreed that the package (as in Ada) is the essential programming tool to be used for going beyond
the programming of very simple class exercises to what is generally called software engineering, or building
production systems. Packages and package like mechanisms are important tools used in software engineering
to produce production systems. Feldman (17) illustrates the use of Ada packages to solve problems.
Databases. Databases are essential adjuncts to computer programming. Databases allow aircraft computer applications to carry pertinent information (such as flight plans or navigation waypoints) into
their missions, rather than generating it en route. Databases also allow the aircrew to collect performance
information about the aircraft's various subsystems, providing a capability to adjust the aircraft in flight and
avoid system failures.
Elmasri and Navathe (18) define a database as a collection of related data. Data are described as known
facts that can be recorded and have implicit meaning. (A simple example consists of the names, telephone
numbers, and addresses of an indexed address book.) A database management system (DBMS) is a collection
of programs that enable users to create and maintain a database. The DBMS is hence a general-purpose
software system that facilitates the processes of defining, constructing, and manipulating databases for various
applications.
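The define–construct–manipulate cycle of a DBMS can be illustrated with Python's built-in SQLite engine; the waypoint table and its values are invented for the example.

```python
import sqlite3

# Define and construct the database (held in memory for the example).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE waypoints (name TEXT, lat REAL, lon REAL)")
con.executemany("INSERT INTO waypoints VALUES (?, ?, ?)",
                [("ALPHA", 37.62, -122.38), ("BRAVO", 38.51, -121.49)])

# Manipulate: query the flight-plan waypoints north of 38 degrees latitude.
rows = con.execute(
    "SELECT name FROM waypoints WHERE lat > 38 ORDER BY name").fetchall()
```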
Verification and Validation. A significant portion of an aircraft computer's life-cycle cost is system and
software testing, performed in various combinations of unit-level, subsystem-level, integrated-system-level,
developmental, and operational testing. These types of tests occur frequently throughout the life of an aircraft
system because there are frequent upgrades and modifications to the aircraft and its various subsystems. It
is possible to isolate acceptance testing to particular subsystems when minor changes are made, but this is
the exception. Usually, any change made to a subsystem affects multiple other parts of the system. As aircraft
become increasingly dependent on computers (which add complexity by the nature of their interdependencies),
and as their subsystems become increasingly integrated, the impact of change also increases drastically.
Cook (19) shows that a promising technology to help understand the impact of aircraft computer change is
the Advanced Avionics Verification and Validation (AAV&V) program developed by the Air Force Research
Laboratory.
Sommerville (20) develops the concepts of program verification and validation. Verification involves checking that the program conforms to its specification. Validation involves checking that the program as implemented meets the expectations of the user.
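A toy unit-level verification check in this spirit might look as follows; the airspeed-conversion function and its specification are invented for illustration.

```python
def knots_to_mps(knots):
    """Spec: 1 knot = one nautical mile (1852 m) per hour."""
    return knots * 1852.0 / 3600.0

# Verification: check that the program conforms to its specification.
assert knots_to_mps(0) == 0.0
assert abs(knots_to_mps(1) - 0.514444) < 1e-5
```

Validation, by contrast, cannot be reduced to assertions against a written specification: it asks whether the delivered behavior meets the user's actual expectations.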
Figure 2 shows an aircraft avionics support bench, which includes real components from the aircraft
such as the FCC line replaceable unit (LRU) sitting on top of the pictured equipment. Additional equipment
includes the buses, cooling, and power connection interfaces, along with monitoring and displays. On these
types of benches, it is common to emulate system and subsystem responses with testing computers such as the
single-board computers illustrated.
Figure 3 shows another verification and validation asset called the workstation-based support environment. This environment allows an integrated view of the aircrafts performance by providing simulations of the
aircrafts controls and displays on computer workstations. The simulation is interfaced with stick and throttle
controls, vertical situation displays, and touch-screen avionics switch panels.
Object-Oriented Technology. Object-oriented (OO) technology is one of the most popular computer
topics of the 1990s. OO languages such as C++ and Ada 95 offer tremendous opportunities to capture complex
representations of data and then save these representations in reusable objects. Instead of using several
variables and interactions to describe some item or event, this same item or event is described as an object.
The object contains its variables, control-flow representations, and data-flow representations. The object is a
separable program unit, which can be reused, reengineered, and archived as a program unit. The power of this
type of programming is that when large libraries of OO programming units are created, they can be called
upon to greatly reduce the workload of computer software programming. Gabel (21) says that object-oriented
technology lets an object (a software entity consisting of the data for an action and the associated action) be
reused in different parts of the application, much as an engineered hardware product can use a standard type of
resistor or microprocessor. Elmasri and Navathe (18) describe an object-oriented database as an approach with
the flexibility to handle complex requirements without being limited by the data types and query languages
available in traditional database systems.
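The idea of an object as a separable, reusable unit bundling data with its associated actions can be sketched as follows; the Altimeter class and its values are invented for illustration.

```python
class Altimeter:
    """A reusable object: state (pressure setting, reading) plus behavior."""

    def __init__(self, setting_inhg=29.92):
        self.setting_inhg = setting_inhg   # barometric pressure setting
        self.altitude_ft = 0.0             # current indicated altitude

    def calibrate(self, setting_inhg):
        self.setting_inhg = setting_inhg

    def update(self, altitude_ft):
        self.altitude_ft = altitude_ft

    def report(self):
        return f"{self.altitude_ft:.0f} ft @ {self.setting_inhg:.2f} inHg"

alt = Altimeter()          # the object carries its own variables and actions
alt.calibrate(30.10)
alt.update(12500)
```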
Open System Architecture. Open system architecture is a design methodology that keeps options for
updating systems open by providing liberal interfacing standards. Ralston and Reilly (22) state that open
architectures pertain primarily to personal computers. An open architecture is one that allows the installation
of additional logic cards in the computer chassis beyond those used with the most primitive configuration of the
system. The cards are inserted into slots in the computer's motherboard, the main logic board that holds its
CPU and memory chips. A computer vendor who adopts such a design knows that, since the characteristics of
the motherboard will be public knowledge, other vendors who wish to do so can design and market customized
logic cards. Open system architectures are increasingly important in modern aircraft applications, because of
the constant need to upgrade these systems and utilize the latest technical innovations. It is extremely difficult
to predict interconnection and growth requirements for next-generation aircraft. An open architecture avoids
the need for such predictions.
ClientServer Systems. A clientserver system is one in which one computer provides services to
another computer on a network. Ralston and Reilly (22) describe the file-server approach as an example
of client-server interaction. Clients executing on the local machine forward all file requests (e.g., open, close,
read, write, and seek) to the remote file server. The server accepts a client's requests, performs its associated
operation, and returns a response to the client. Indeed, if the client software is structured transparently, the
client need not even be aware that files being accessed physically reside on machines located elsewhere on the
network. Clientserver systems are being applied on modern aircraft, where highly distributed resources and
their aircrew and passenger services are networked to application computers.
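The request–response pattern described above can be sketched with loopback sockets standing in for the aircraft network; the file server here is a minimal stub, invented for illustration, that merely acknowledges the request.

```python
import socket
import threading

def file_server(listener):
    """Accept one client, perform the requested operation, respond."""
    conn, _ = listener.accept()
    with conn:
        request = conn.recv(1024).decode()        # e.g. "READ flightplan"
        conn.sendall(("OK " + request).encode())  # acknowledge the request

listener = socket.socket()
listener.bind(("127.0.0.1", 0))                   # loopback; OS picks a port
listener.listen(1)
threading.Thread(target=file_server, args=(listener,), daemon=True).start()

# Client: forward a file request to the remote server and await the reply.
client = socket.create_connection(listener.getsockname())
client.sendall(b"READ flightplan")
reply = client.recv(1024).decode()
client.close()
listener.close()
```

A transparently structured client would hide this exchange entirely, so the caller need not know where the file physically resides.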
Subsystems. The major subsystems of an aircraft are its airframe, power plant, avionics, landing gear,
and controls. Landau (1) defines a subsystem as any system that is part of a larger system. Many of the
subsystems on an aircraft have one or more processors associated with them. It is a complex task to isolate
and test the assorted subsystems.
Another layer of testing below subsystem testing is unit testing. A unit of a subsystem performs a function
for it. For example, in the radar subsystem, the units include its signal processor and its data processor. In order
to test a system adequately, each of its lowest-level items (units) must be tested. As the units affect and depend
upon each other, another layer of testing addresses that layer of dependencies. In the same fashion, subsystem
testing is performed and integrated with associated subsystems. It is important to test not only at the unit and
the subsystem level, but at the system and operational level. The system level is where the subsystems are
brought together to offer the system functionality. System integration is the process of connecting subsystem
components into greater levels of system functionality until the complete system is realized. The operational
level of testing is where the subsystem is exercised in its actual use.
Line Replaceable Units. LRUs are subsystems or subsystem components that are self-contained in
durable boxes containing interface connections for data, control, and power. Many LRUs also contain built-in
test (BIT) capabilities that notify air and maintenance crews when there is a failure. A powerful feature of LRUs
is that functionality can be compartmentalized. When a failure is detected, the LRU can easily be pulled and
replaced, restoring the aircraft to service within moments of detection.
Graceful Degradation. All systems must have plans to address partial or catastrophic failure. System
failure in flight controls is often catastrophic, while system failure in avionics can be recovered from. For
this reason, most flight-critical systems have built-in redundant capabilities (sometimes multiple layers of
redundancy), which are automatically activated when the main system or subsystem fails. Degraded system
behavior occurs when the main system fails and backup systems are activated. The critical nature of system
failure requires immediate activation of backup systems and recognition by all related subsystems of the new
state of operation. Graceful degradation is the capability of aircraft computers to continue operating after
incurring system failure. Graceful degradation is less than optimal performance, and may activate several
layers of decreasing performance before the system fails. The value of graceful degradation is that the aircrew
has time to respond to the system failure before there is a catastrophic failure.
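Layered fallback of the kind described above can be sketched as follows: each navigation source is tried in order of decreasing capability, so a failure yields degraded service rather than none. The sources and the simulated failure are invented for the example.

```python
def gps_fix():
    raise RuntimeError("GPS receiver failed")   # simulated main-system failure

def ins_fix():
    return ("INS", "dead-reckoned position")    # first backup layer

def radio_fix():
    return ("VOR/DME", "coarse radio fix")      # last-resort layer

def position_fix():
    """Try each navigation layer; degrade instead of failing outright."""
    for source in (gps_fix, ins_fix, radio_fix):
        try:
            return source()
        except RuntimeError:
            continue                            # activate the next layer
    raise RuntimeError("total navigation failure")

mode, fix = position_fix()
```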
Aerospace
Computer technologies have helped provide a continuum of improvements in aircraft performance that has
allowed the airspace where aircraft operate to increase in range and altitude. Landau (1) defines aerospace
as the earth's atmosphere and the space outside it, considered as one continuous field. Because of its rapidly
increasing domain of air and space travel, the United States Air Force is beginning to refer to itself as the
United States Aerospace Force. Modern airspace vehicles are becoming increasingly dependent on information
gleaned from ground stations, satellites, other airspace vehicles, and onboard sensors to perform their mission.
These vehicles use signals across the electromagnetic spectrum. Antennas can be found in multiple locations on
wings, the fuselage, tails, and draglines. If antennas are located too close together, their signals can interfere
with each other; this is called crossed frequency transmission. This interference reduces the efficiency of
each affected antenna. Placement of multiple antennas requires minimizing the effects of crossed frequency
transmissions. Techniques for this include antenna placement, filtering, and timing. This presents another
challenge for aircraft computers to sort and process these multiple signals. Perry and Geppert (4) show how
the aircraft electromagnetic spectrum is becoming busy, and thus, dangerous for aerospace communications.
Legacy Systems. Legacy systems are fielded aircraft, or aircraft that are in active use. Probably the
only nonlegacy aircraft are experimental or prototype versions. Legacy aircraft are often associated with aging
issues, more commonly known as parts obsolescence. A growing problem in these systems is the obsolescence
of entire components, including the many computers used on them. Aircraft, like many other systems, are
designed with expected lifetimes of 10 to 15 years. Because of the high replacement costs, lifetimes are often
doubled and tripled by rebuilding and updating the aircraft. To reduce costs, as many of the original
aircraft components as possible are kept. Problems arise when these components are no longer produced or stockpiled.
Sometimes subsystems and their interfaces have to be completely redesigned and produced at great cost
in order to keep an aircraft in service. System architectures and standard interfaces are constantly being
modified to address these issues. Aircraft evolve during their lifetimes to a more open architecture. This open
architecture, in turn, allows the aircraft components to be more easily replaced, thus making further evolution
less expensive.
Unmanned Air Vehicles. Unmanned air vehicles (UAVs) are aircraft that are flown without aircrews.
Their use is becoming increasingly popular for military applications. Many of the new capabilities of UAVs
come from the improved computers. These computers allow the vehicles to have increased levels of autonomy
and to perform missions that once required piloted aircraft. Some of these missions include reconnaissance
and surveillance. These same types of missions are finding increasing commercial importance. UAVs offer
tremendous advantages in life-cycle cost reductions because of their small size, ease of operation, and ability
to be adapted to missions.
Man–Machine Systems
An aircraft is an example of a man–machine system. Other examples are automobiles and boats. These
machines have the common attribute of being driven by a human. Landau (1) defines man–machine systems as
sets of manually performed and machine-performed functions, operated in conjunction to perform an operation.
The aircraft computer is constantly changing the role of the human in the aircraft machine. The earliest aircraft
required the constant attention of the pilot. Improved flight control devices allowed the pilot freedom for leisure
or for other tasks. Modern aircraft computers have continued the trend of making the aircraft more the machine,
and less the man, of the man–machine system.
Human Factors of Aircraft Computers. Human factors is the science of optimal conditions for human comfort and health in the human environment. The human factors of aircraft computers include the
positioning of the controls and displays associated with the aircrew's workloads. Aircraft computers also provide monitoring
and adjustment of the aircraft human environment, including temperature, oxygen level, and cabin pressure.
Man–Machine Interface. The man–machine interface is the place where man's interactions with the
aircraft coordinate with the machine functionality of the aircraft. An example of a man–machine interface is the
API, which is where a person provides inputs to and receives outputs from computers. These types of interfaces
include keyboards (with standard ASCII character representation), mouse pads, dials, switches, and many
varieties of monitors. A significant interface in aircraft comprises their associated controls and displays, which
provide access to the flight controls, the sensor suite, the environmental conditions, and the aircraft diagnostics
through the aircrafts central computer. Control sticks, buttons, switches, and displays are designed based on
human standards and requirements such as seat height, lighting, accessibility, and ease of use.
Voice-Activated Systems. Voice-activated systems are interfaces to aircraft controls that recognize and
respond to aircrews' verbal instructions. A voice-activated input provides multiple input possibilities beyond
the limited capabilities of hands and feet. Voice-activated systems have specified sets of word commands and
are trained to recognize a specific operator's voice.
AIRCRAFT COMPUTERS
Aircraft Computer Visual Verification. Visual verification is the process of physically verifying
(through sight) the correct aircraft response to environmental stimuli. This visual verification is often a testing
requirement. It is usually done through the acceptance test procedure (ATP) and visual inspections of displays
through a checklist of system and subsystem inputs. Until recently, visual verification has been a requirement
for pilots, who have desired the capability to see every possibility that their aircraft might encounter. This
requirement is becoming increasingly difficult to implement because of the growing complexity and workload
of the aircraft's computers and their associated controls and displays. In the late 1980s to early 1990s, it
took about 2 weeks to visually verify the avionics suite of an advanced fighter system. This can no longer
be accomplished at all with current verification and validation techniques; several months would be required
to achieve some level of confidence that today's modern fighters are flight-safe.
Air Traffic Control. Air traffic control is the profession of monitoring and controlling aircraft traffic
through an interconnected ground-based communication and radar system. Perry (23) describes the present
capabilities and problems in air traffic control. He also discusses the future requirements for this very necessary
public service. Air traffic controllers view sophisticated displays, which track multiple aircraft variables such
as position, altitude, velocity, and heading. Air traffic control computers review these variables and give the
controllers continuous knowledge of the status of each aircraft. These computers continuously update and
display the aircraft in the ground-based radar range. When potential emergency situations, such as collision,
arise, the computer highlights the involved aircraft on the displays, with plenty of lead time for the controller
to correct each aircraft's position.
or recommend flight control adjustments. Additional feedback may come from global positioning, from ground-based navigation systems through radio inputs, and from other aircraft. The computer is able to integrate
these inputs into the onboard flight control inputs, and provide improved recommendations for stable flight.
Real-Time Systems
The computers on aircraft are required to perform their functions within short times. Flight control systems
must make fine adjustments quickly, in order to maintain stable flight. Sensor suites must detect and analyze
potential threats before it is too late. Cabin pressure and oxygen must be regulated as altitude changes. All
these activities, plus many others on aircraft, must happen in real time.
Nielsen (25) defines a real-time system as a controlled (by software or firmware) system that performs
all of its process functions within specified time constraints. A real-time system usually includes a set of
independent hardware devices that operate at widely differing speeds. These devices must be controlled so
that the system as a whole is not dependent upon the speed of the slowest device. Hatley and Pirbhai (26)
describe timing as one of the most critical aspects of modern real-time systems. Often, the systems response
must occur within milliseconds of a given input event, and every second it must respond to many such events
in many different ways.
Flight-Critical Systems. Flight-critical systems are those functions of an aircraft that must be performed without error in order to maintain life and flight. The aircraft flight controls, engines, landing gear, and
cabin environment are examples of flight-critical systems. Failures in any of these systems can have catastrophic results. Flight-critical systems are held to tight levels of performance expectations and often have
redundant backups in case of failure.
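As an illustrative sketch only (not any particular aircraft's implementation), one common redundancy pattern is triplex mid-value voting, which masks a single failed channel; the function names and tolerance below are hypothetical:

```python
def triplex_vote(a, b, c):
    # Mid-value selection: the middle of three redundant readings is
    # always bracketed by two good channels, so a single failed
    # channel cannot drive the selected value.
    return sorted((a, b, c))[1]

def miscompares(a, b, c, tol):
    # Flag any channel whose reading differs from the voted value by
    # more than the allowed tolerance (a candidate for fault isolation).
    voted = triplex_vote(a, b, c)
    return [i for i, x in enumerate((a, b, c)) if abs(x - voted) > tol]
```

For example, with readings (1.0, 100.0, 1.2) the voter selects 1.2 and flags channel 1 as the miscomparing channel.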
Federated Systems. Federated systems are loosely coupled distributed systems frequently used in
aircraft system architectures to tie multiple processors in multiple subsystems together. The loose coupling
allows the multiple subsystems to operate somewhat autonomously, but have the advantage of the shared
resources of the other subsystems. A typical aircraft federated system might include its central computer, its
INS, its radar system, and its air-vehicle management system. The INS provides the radar with the aircraft's
present position, which is reported to the pilot through displays put forth by the central computer. The pilot
adjusts his course through the air-vehicle management system, which is updated by the INS, and the cycle is
repeated. These subsystems perform their individual functionality while providing services to each other.
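The loose coupling described above can be sketched as a minimal publish/subscribe exchange; the bus class and topic names here are hypothetical stand-ins for the actual shared interfaces:

```python
class SharedBus:
    # Loose coupling: subsystems know topic names, not each other.
    def __init__(self):
        self.handlers = {}

    def subscribe(self, topic, handler):
        self.handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, value):
        for handler in self.handlers.get(topic, []):
            handler(value)

bus = SharedBus()
radar_picture = []
display = []

# The radar and the central computer's displays both consume the INS
# present-position output without being wired to the INS directly.
bus.subscribe("present_position", radar_picture.append)
bus.subscribe("present_position", display.append)
bus.publish("present_position", (47.6, -122.3))
```

Each subsystem can continue operating on its last-received data if another subsystem drops off the bus, which is the autonomy benefit of the federated approach.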
Cyclic Executive. A cyclic executive on an aircraft computer provides a means to schedule and prioritize
all the functions of the computer. The executive routine assigns the functions and operations to be performed by
the computer. These assignments are given a specific amount of clock time to be performed. If the assignment
does not complete its task in its allocated time, it is held in a wait state until its next clock period. From the
beginning of the clock period to its end is one clock cycle. High-priority functions are assigned faster clock cycles,
while low-priority functions are assigned slower cycles. For example, the high-priority executive function might
be assigned a speed of 100 cycles per second, while some lower-priority function might have 5 cycles per second
to complete its tasks. Sometimes the latter might take several clock cycles to perform a task. An additional
feature of cyclic executives is that they are equipped with interrupts, which allow higher-priority systems to
break into the executive assignments for system-level assigned tasking.
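A minimal sketch of the frame-table idea behind a cyclic executive follows; the task names and rates are illustrative, not taken from any real executive:

```python
def build_schedule(tasks, frames):
    """For each minor frame, list the tasks due to run.

    tasks: (name, period_in_frames) pairs. A period of 1 runs every
    frame (highest rate); a period of 20 gets one slot per 20 frames.
    """
    return [[name for name, period in tasks if frame % period == 0]
            for frame in range(frames)]

tasks = [("flight_control", 1), ("nav_update", 5), ("fuel_totals", 20)]
schedule = build_schedule(tasks, 20)
```

Over the 20 minor frames, flight_control runs in every frame, nav_update in 4 of them, and fuel_totals in 1, mirroring the high-rate versus low-rate assignment described above.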
Several types of scheduling methodologies provide performance improvements in cyclic executives. One of the more prominent is rate monotonic analysis (RMA), which determines the time requirement
for each function and the spare time slots, and then makes time assignments.
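At the core of RMA is a schedulability test. The classic Liu and Layland utilization bound is a sufficient condition, sketched here:

```python
def rma_schedulable(tasks):
    """Sufficient rate-monotonic test: total utilization sum(C/T) must
    not exceed n*(2**(1/n) - 1) for n tasks with execution time C and
    period T (shorter period implies higher priority)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)
```

For example, two tasks using 10 ms every 40 ms and 10 ms every 80 ms (utilization 0.375) pass the two-task bound of about 0.828. Task sets that fail the bound are not necessarily unschedulable, but they require exact response-time analysis rather than this quick test.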
BIBLIOGRAPHY
1. S. Landou, Webster Illustrated Contemporary Dictionary, Encyclopedic Edition, Chicago: J. G. Ferguson, 1992.
2. J. F. Wakerly, Digital Design Principles and Practices, Englewood Cliffs, NJ: Prentice-Hall, 1985, pp. 1–48, 531–538.
3. V. C. Hamacher, Z. G. Vranesic, and S. G. Zaky, Computer Organization, 2nd ed., New York: McGraw-Hill, 1984.
4. T. Perry and L. Geppert, Do portable electronics endanger flight?, IEEE Spectrum, 33 (9): 26–33, 1996.
5. A. Golden, Radar Electronic Warfare, Washington: AIAA Education Series, 1987.
6. G. W. Stimson, Introduction to Airborne Radar, El Segundo, CA: Hughes Aircraft, 1983, pp. 107, 151–231.
7. G. Welch and G. Bishop, An introduction to the Kalman filter, Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, http://www.cs.unc.edu/welch/media/pdf/kalman.pdf, 1997.
8. D. Fink and D. Christiansen, Electronics Engineers' Handbook, 3rd ed., New York: McGraw-Hill, 1989.
9. J. Adam and T. Gibson, Warfare in the information age, IEEE Spectrum, 28 (9): 26–42.
10. W. Sweet, The glass cockpit, IEEE Spectrum, 32 (9): 30–38, 1995.
11. D. Lefkon and B. Payne, Making embedded systems year 2000 compliant, IEEE Spectrum, 35 (6): 74–79, 1998.
12. S. Gal-Oz and M. Isaacs, Automate the bottleneck in embedded system design, IEEE Spectrum, 35 (8): 62–67, 1998.
13. R. Bishop, Basic Microprocessors and the 6800, Hasbrouck Heights, NJ: Hayden, 1979.
14. A. Seidman and I. Flores, The Handbook of Computers and Computing, New York: Van Nostrand Reinhold, 1984, pp. 327–502.
15. D. W. Gonzalez, Ada Programmer's Handbook, Redwood City, CA: Benjamin/Cummings, 1991.
16. J. Sodhi, Managing Ada Projects, Blue Ridge Summit, PA: TAB Books, 1990.
17. M. B. Feldman and E. B. Koffman, Ada Problem Solving and Program Design, Reading, MA: Addison-Wesley, 1992.
18. R. Elmasri and S. B. Navathe, Fundamentals of Database Design, 2nd ed., Redwood City, CA: Benjamin/Cummings, 1994.
19. R. Cook, The advanced avionics verification and validation II final report, Air Force Research Laboratory Technical Report ASC-99-2078, Wright-Patterson AFB.
20. I. Sommerville, Software Engineering, 3rd ed., Reading, MA: Addison-Wesley, 1989.
21. D. Gabel, Software engineering, IEEE Spectrum, 31 (1): 38–41, 1994.
22. A. Ralston and E. Reilly, Encyclopedia of Computer Science, New York: Van Nostrand Reinhold, 1993.
23. T. Perry, In search of the future of air traffic control, IEEE Spectrum, 34 (8): 18–35, 1997.
24. J. J. D'Azzo and C. H. Houpis, Linear Control System Analysis and Design, 2nd ed., New York: McGraw-Hill, 1981, pp. 143–146.
25. K. Nielsen, Ada in Distributed Real-Time Systems, New York: Intertext, 1990.
26. D. J. Hatley and I. A. Pirbhai, Strategies for Real-Time System Specification, New York: Dorset House, 1988.
READING LIST
G. Buttazo, Hard Real-Time Computing Systems, Norwell, MA: Kluwer, 1997.
R. Comerford, PCs and workstations, IEEE Spectrum, 30 (1): 26–29, 1993.
D. Dooling, Aerospace and military, IEEE Spectrum, 35 (1): 90–94, 1998.
J. Juliussen and D. Dooling, Small computers, aerospace & military, IEEE Spectrum, 32 (1): 44–47, 76–79, 1995.
K. Kavi, Real-Time Systems, Abstractions, Languages, and Design Methodologies, Los Alamitos, CA: IEEE Computer Society Press, 1992.
P. Laplante, Real-Time Systems Design and Analysis, an Engineer's Handbook, Piscataway, NJ: IEEE Press, 1997.
M. S. Roden, Analog and Digital Communication Systems, 2nd ed., Englewood Cliffs, NJ: Prentice-Hall, 1985.
H. Taub, Digital Circuits and Microprocessors, New York: McGraw-Hill, 1982.
C. Weitzman, Distributed Micro/Minicomputer, Englewood Cliffs, NJ: Prentice-Hall, 1980.
CHARLES P. SATTERTHWAITE
Air Force Research Laboratory Embedded Information
System Engineering Branch (AFRL IFTA)
tion of every system or installed component is not necessary when the remaining operative equipment provides
an acceptable level of safety. This was recognized in the
mid-1950s. Consequently, regulatory agencies granted
permission to operate with certain items of equipment
inoperative, the intent being to permit revenue operations to a location where repairs or replacements could
be made. This action permits economic aircraft utilization as well as offering a reliable flight schedule to the
flying public without compromising flight safety. Contemporary practice demands that consideration be
given to deferability in the design as a conscious activity when defining system architecture and functionality.
It should be noted that even with a MEL, no-go conditions will not be totally eliminated.
3. A third strategy assures that no-go conditions can be
minimized. It involves both a design and a maintenance
management technique. This design approach embraces
the incorporation of features beyond those required for certification. The predominant strategy for
this is the same as that used to avoid safety-related
failures: the inclusion of redundancy, fault tolerance, and fail-safe, fail-passive features beyond those
required to certify the design. This is not without its
price. It increases the number of failure possibilities. It
adds more items that can fail. It results in equipment
that is more complex and integrated, which makes fault
isolation more difficult. It adds to the cost of the aircraft. But this approach, judiciously applied, greatly reduces the consequences of any single failure. Excess features in the design put initial failures of a system into
the economic rather than the safety-related failure category.
AIR CARRIER MAINTENANCE REQUIREMENTS
Maintenance requirements are dictated by numerous factors:
regulatory provisions, type of equipment, fleet size, route
structure, and flying schedules. The type of equipment establishes maintenance frequency cycles. The size of the fleet determines the quantitative maintenance workload. Route structure and flight schedules influence the location and number
of stations which must possess the capability of performing
the work.
Regulatory Provisions
The definition of maintenance requirements, addressing
safety-related failure for an aircraft, begins during design and
certification. The Federal Aviation Regulations (FARs) are
published in the Code of Federal Regulations (CFR). 14
CFR 25.1529 requires the preparation of instructions for continued airworthiness. These instructions must include,
among other things, the following:
. . . Scheduling information (scheduled maintenance) for each
part of the airplane and its engines, auxiliary power units, propellers, accessories, instruments, and equipment that provides the
recommended periods at which they should be cleaned, inspected,
adjusted, tested, and lubricated, and the degree of inspection, the
applicable wear tolerances, and work recommended at these periods. The recommended overhaul periods and necessary references
SCHEDULED MAINTENANCE
Scheduled maintenance (sometimes referred to as routine or
recurrent maintenance) includes: (1) the mandatory tasks defined by the FAA Maintenance Review Board (MRB) Report,
(2) the accomplishment of recurring airworthiness directives
(ADs), and (3) discretionary (economic) checks, inspections or
modifications. The FAA issues ADs when an unsafe condition
has been found to exist in particular aircraft, engine, propellers, or appliances installed on aircraft, and that condition is
likely to exist or develop in other aircraft, engines, propellers,
or appliances of the same type design. Once an AD is issued,
no person may operate an aircraft to which the AD applies
except in accordance with the requirements of that AD.
Discretionary maintenance tasks are those not required by
the MRB report. They include for example:
Repair of items not related to airworthiness, that is, economic failures
Modifications to cabin interiors such as installing passenger entertainment or refurbishing seats
Exterior painting or refurbishment
Manufacturers' service bulletins not related to an airworthiness directive
Packaging Scheduled Maintenance
Scheduled maintenance requirements are grouped into work
packages known as blocks. The principle of blocks is to accomplish all of the mandatory tasks in small packages. This
allows greater utilization of the aircraft since the aircraft is
removed from service for short periods rather than for a single extended overhaul period. The principle is shown in Fig.
1. Regardless of the manner in which the tasks are packaged,
all of the required tasks defined by the MRB will be accomplished when all of the defined blocks have been accomplished. The complete package of defined blocks is sometimes
referred to as a complete overhaul cycle.
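As a toy illustration of the packaging idea, a task can be assigned to the largest check interval that still fits within the task's required interval; the check intervals below are assumed typical values, and the function is hypothetical:

```python
# Assumed typical check intervals in flight hours.
CHECKS = [("A", 80), ("B", 400), ("C", 1600), ("D", 16000)]

def assign_block(task_interval):
    """Place a task in the highest check whose interval does not
    exceed the task's required interval, so the task never runs
    overdue."""
    block = "A"                      # default: most frequent check
    for name, hours in CHECKS:
        if hours <= task_interval:
            block = name
    return block
```

A task due every 500 flight hours lands in the B check (performed about every 400 hours) rather than the C check (1,600 hours), which would let it run overdue.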
Blocks have numerous names within the maintenance
community. The exact nomenclature, composition and number of blocks varies between airlines. The following typical
groupings illustrate the concept.
Daily Check. This exists under several common names:
post-flight, maintenance pre-flight, service check, or overnight, to
name a few. It is the lowest scheduled check. It is a cursory
inspection of the aircraft to look for obvious damage and deterioration. It checks for general condition and security and
reviews the aircraft log for discrepancies and corrective action. The accomplishment of the daily check requires little
specific equipment, tools, or facilities.
It is a basic requirement that the aircraft remain airworthy. Usually this check will be accomplished every 24–60
hours of accumulated flight time. Examples of daily check
items include:
A Check. This is the next higher level of scheduled maintenance. It is normally accomplished at a designated mainte-
[Figure 1. The block principle of scheduled maintenance. Each higher check includes the lower checks: preflight at roughly every 10 flight hours, the A check at about 80, the B check at about 400, the C check at about 1,600, and the D check at about 16,000 flight hours. Typical frequency and out-of-service time per check: A check, about twice a month, one shift; B check, about three times a year, 1 to 1.5 shifts; C check, about once a year, 10 to 12 shifts; D check, about every 8 years, 15 to 18 shifts. Remarks: (1) the higher check always includes the lower check; (2) block maintenance addresses inspections of the airframe and installed systems; (3) individual component maintenance is not included; (4) repair or replacement arising from inspections is not included; (5) A check: quick-opening doors, servicing, detailed walkaround; (6) B check: cowls, access panels, and compartment doors opened; lubrication, filter changes, operational checks; (7) C check: major access panels and fairings removed; system tests, corrosion control, lubrication; (8) D check: major structural inspections, NDT work, internal structure.]
[Figure 2. Example of check segmentation. When the B and C checks become too large, each check is divided into parts (e.g., B/3, 2B/3, 3B/3 and C/3, 2C/3, 3C/3) and the resultant segments are allocated and appended to the A and B checks.]
and systems. Each check includes the requirements of traditional lower check work items and portions of C and D checks
at the required task intervals.
Phased checks may occur at 200 to 800 flight-hour intervals, depending upon the work packaging plan and other airline operating variables.
Changing Scheduled Maintenance Frequencies
Individual airlines, when first placing a given aircraft model
into service, use the aircraft MRB document for defining
maintenance tasks and intervals. However, as experience is
gained on the equipment, and advanced techniques are developed for flight and maintenance operations, the FAA allows
for escalation of task intervals.
Actuarial techniques, using condition monitoring data, are
employed by the airlines to petition the FAA for a change in
specified intervals.
UNSCHEDULED MAINTENANCE
Unscheduled maintenance (nonroutine, nonrecurrent) is ad
hoc. It is maintenance performed to restore an item to airworthiness by correction of a known or suspected malfunction and/
or defect. The resolution of aircraft malfunctions and/or defects is not always straightforward and often requires troubleshooting. Figure 3 shows a typical process that an airline
might follow to troubleshoot an aircraft problem.
Examples of unscheduled maintenance include:
Resolution of aircraft log discrepancies (both pilot-generated and those discovered by the mechanic)
Special inspections initiated by the airline engineering
group
Special inspections, repairs, or replacements arising
from airworthiness directives (ADs)
Structural repairs arising from damage incurred during
operations
The nature of unscheduled maintenance dictates that it may
be performed anywhere within the maintenance environment,
that is, during scheduled maintenance or on the flight line
while the aircraft is assigned to scheduled revenue service.
THE MAINTENANCE ENVIRONMENT
For clarity the maintenance environment is divided into three
distinct categories of activity. However, in day-to-day operations this separation is blurred. Work normally accomplished
while the aircraft is removed from the revenue schedule may
occasionally be accomplished while the aircraft is flying the
schedule.
Line Maintenance
Line maintenance is that maintenance activity performed
while the aircraft is committed to the revenue schedule. It
may be subdivided into gate or turnaround maintenance.
Gate Maintenance. This maintenance is performed prior to
the aircraft's departure. It is incidental to flight operations.
The flight line (gate) environment is the most demanding. It
[Figure 3. Typical airline troubleshooting process. The mechanic meets the airplane, checks the logbooks, and evaluates the problem; if advance information is available, the problem is researched on the ground beforehand. The mechanic then attempts to verify or duplicate the problem. If the problem cannot be duplicated, a report is entered in the maintenance history or handled per airline policy. If a simple fix is apparent, the mechanic obtains parts, equipment, and necessary materials, performs the corrective action, verifies the correction, and signs off the logbook. If no simple fix is apparent, the mechanic performs an initial assessment: if there is time to work the problem at the gate and a fix can be found with detailed troubleshooting, the corrective action is taken; otherwise, if the problem can be deferred per the MEL, maintenance is deferred per the MEL.]
[Figure 4. Basic elements of the Boeing 737-700 air data inertial reference system: AOA sensors, pitot probes, the TAT probe, and static ports feed air data modules (ADMs), which supply the air data inertial reference units (ADIRUs) and, in turn, the user systems, standby instruments, and cabin pressurization. Note: FO, first officer; ADM, air data module; TAT, total air temperature; alt, alternate; AOA, angle of attack; stby, standby; inst, instrument; press, pressurization.]
electronic engine controller, are often located with the equipment they control (e.g., on the engines). Figure 5 shows the
equipment racks and their locations on a Boeing 777. Figure
6 shows the system controllers that are located on the E1 and
E2 racks, located in the main equipment center, on a Boeing
777.
Aircraft System Communication
Aircraft systems use a variety of means for communication.
Early designs relied almost entirely on analog signals. More
recent designs make extensive use of digital data buses of increasing sophistication. Many of the digital data buses used
on jet transport aircraft are specified in documents developed
by Aeronautical Radio Incorporated (ARINC). ARINC is a corporation initiated and supported by airlines to provide technical services to the airlines. ARINC has developed a wide variety of standards and guidance documents for aircraft systems.
Two principal standards for communication between aircraft
systems are:
ARINC 429. Until the mid-1970s, communication between
aircraft systems was almost entirely accomplished using analog signals, for which a separate wire, or pair of wires, was
[Figure 5. Boeing 777 equipment rack locations, showing the racks (E1/E2/E3, E5, E7, E10, E11, E12, E15, E16, E17) in the main and forward equipment centers.]
[Figure 6. System controllers located on the E1 and E2 racks in the Boeing 777 main equipment center. The racks carry, among others, the Airplane Information Management System (AIMS) cabinet, primary flight computers (PFC), autopilot flight director computers (AFDC), actuator control electronics (ACE), the standby attitude air data reference unit (SAARU), air supply and cabin pressure controllers (ASCPC), cabin temperature controllers (CTC), window heat control units, generator control units (GCU), transformer rectifier units (TRU), the bus power control unit, warning electronics units (WEU), flap/slat electronics units (FSEU), proximity sensor electronics units (PSEU), engine data interface units (EDIU), radio and navigation equipment (VHF, VOR, ILS, DME, ATC transponder, TCAS, ADF), and cabin, passenger address, and entertainment equipment.]
The digital data buses have become much more reliable than
the analog wires that they replaced. However, when problems
do occur in systems that use digital data buses, troubleshooting requires more sophisticated tools than the voltmeters that were sufficient for most analog systems. Fortunately, aircraft design has evolved over the years
to include these more sophisticated tools.
Maintenance Design Evolution
Just as aircraft system design has evolved, electronic support
of maintenance has evolved over the years, based on the need
and available technology. Since jet transport aircraft can be
in service for over 30 years, there are systems in service in
each of the categories identified next. As a result, mechanics
need to be able to support equipment encompassing a wide
range of maintenance capabilities.
Manual Detection and Isolation. Early aircraft systems were
relatively simple and, most importantly, were relatively isolated.
[Figure: comparison of ARINC 429 and ARINC 629. Connectivity: ARINC 429 uses point-to-point links from a transmitting LRU to its receiving LRUs, whereas ARINC 629 uses a multidrop bus shared by the LRUs. Transmission rate: ARINC 629 operates at 2 Mbit/s. Medium: voltage-mode versus current-mode coupling. Format: the ARINC 429 word is a single 32-bit word containing parity (bit 32), the SSM, 19 data bits, the SDI, and an 8-bit label; ARINC 629 transmits label and data words, each with sync and parity.]
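The ARINC 429 word layout can be illustrated with a small pack-and-check sketch; field positions follow the 32-bit layout (label in bits 1–8, SDI in 9–10, data in 11–29, SSM in 30–31, odd parity in bit 32), transmission-order details such as the label's reversed bit order are omitted, and the helper names are hypothetical:

```python
def pack_word(label, sdi, data, ssm):
    # Assemble the 32-bit word: label (8 bits), SDI (2 bits),
    # data (19 bits), SSM (2 bits); bit 32 is then set if needed
    # so that the whole word has odd parity.
    word = ((label & 0xFF)
            | ((sdi & 0x3) << 8)
            | ((data & 0x7FFFF) << 10)
            | ((ssm & 0x3) << 29))
    if bin(word).count("1") % 2 == 0:
        word |= 1 << 31
    return word

def parity_ok(word):
    # ARINC 429 uses odd parity over the full 32-bit word.
    return bin(word).count("1") % 2 == 1
```

A receiver can discard any word failing `parity_ok` and route the rest by the label field (`word & 0xFF`).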
Analog Built-In Test Equipment. In time, aircraft design engineers realized that the output of the fault detection monitors
could be made available to support mechanic troubleshooting.
With these, the concept of "fault balls" was born, and was
incorporated on some systems as early as the 1940s. Fault
balls are indications, normally on the front of an LRU (i.e.,
system controller), that a fault has been detected. They were
originally mechanical, but later were replaced with small
light-emitting diodes (LEDs). In many cases, the LRU front
panel contained a test switch to command the LRU to test
itself, in a manner similar to how ground support equipment
could test the LRU. This capability became known as built-in
test equipment (BITE). A typical LRU with front panel BITE
[Figure: a typical LRU front panel with BITE (a Collins TTR-920 TCAS computer), showing PASS/FAIL indicators, a TEST switch, and fault indicators for the transponder (XPNDR), upper and lower antennas, radio altimeter (RAD ALT), heading (HDNG), R/A, and T/A.]
the front panel of the LRU. The digital logic could produce
codes that could better isolate the cause of the fault. The digital display, as shown in Fig. 9, offered the capability to display many different codes or even text to identify each type
of fault that was detected. Some of the later LRUs had the
capability to initiate ground tests and display the results in
codes or text. The codes often pointed to some description in
a manual that could be used to isolate and correct a fault.
Many systems on the Boeing 757/767, Airbus A300/310,
McDonnell Douglas DC-10, and Lockheed L-1011 employ
this approach.
Common Fault Display System (ARINC 604). As the number
of systems grew, use of separate front panel displays to maintain the systems became less effective, particularly since each
LRU often used a different technique to display its fault data.
In addition, some of the systems had become increasingly integrated with each other. Digital data buses, such as ARINC
429, began to be used during this time period. Autopilot systems, as they were among the first to use these digital data
buses and depend on sensor data provided by many other systems, have been a driving force in definition of more sophisticated maintenance systems. The more sophisticated monitoring was necessary to meet the integrity and certification
requirements of its automatic landing function. For example,
the Boeing 767 Maintenance Control and Display Panel
(MCDP) brought together the maintenance functions of many
related systems (i.e., flight control computers, flight management computers, and thrust management computers). As the
next step, ARINC 604 defined, in 1986, a central fault display
system (CFDS) which brings to one display the maintenance
indications for potentially all of the systems on the aircraft.
This approach enabled more consistent access to maintenance
data across systems, a larger display than each of the systems
could contain individually, and saved the cost of implementing front panel displays on many of the associated system
controllers. In this approach, the CFDS is used to select the
system for which maintenance data is desired, and then it
sends the maintenance text from that system to the display.
This approach was used on some of the systems on later Boeing 737s, and most systems on the Airbus A320/330/340, and
Figure 10. Airbus A320 CFDS menus showing aircraft systems displaying information on the
multi-purpose control and display unit (MCDU), which is located in the flight deck.
[Figure: central maintenance computer system architecture. Member systems (each a system LRU with its subordinate LRUs) report to the central maintenance computer, which holds onboard maintenance data, airplane condition monitoring, and other data and functions, and connects to control and display, a printer, the electronic library system, and an airplane-to-ground-station data link.]
Onboard Maintenance System Functions. An onboard maintenance system provides the following primary functions:
Detect and Isolate Faults. When equipment fails, the mechanic needs help in determining what has failed. Systems
contain monitors to determine whether and where failures
have occurred.
Generate Maintenance Messages. A maintenance message is
the data (identification number and text) displayed to the mechanic identifying what has failed, and what action should be
taken to correct the fault. A maintenance message identifies
a specific procedure in a fault isolation manual. The objective
is that only one maintenance message is produced when a
single fault exists. Note: Multiple maintenance messages
(which could be produced by several LRUs monitoring faults
and simultaneously detecting one) tend to confuse the mechanic.
Correlate Maintenance Messages to Flight Deck Effects. Flight
deck effects (FDEs) are messages to the flight crew identifying loss of function and actions that may need to be taken
during the flight due to an aircraft malfunction. The FDEs
are not intended to identify how to correct the fault. The flight
crew will report FDEs that have occurred, and will expect the
mechanic to disposition (i.e., correct or defer) them. The maintenance system relates which maintenance message identifies
the fault that caused the flight deck effect.
Store, Display and Report Messages and Related Flight Deck
Effects. The maintenance message and related flight deck effects are stored in CMCS memory, displayed to the mechanic
and/or transmitted electronically to ground stations. Transmission to ground stations prior to aircraft arrival allows
ground mechanics to be prepared to fix or properly disposition
the reported faults.
Each system must detect fault conditions to prevent the system from using failed components. Systems contain monitors
sufficient to detect faults as necessary to meet safety requirements and other economic objectives. Figure 13 illustrates the
fault detection and processing concept used on the Boeing
777. When a member system detects a fault, it:
1. Reports to the flight crew display system that the condition should be annunciated (if necessary) to the level
necessary to identify the specific required flight crew
awareness/actions, and/or aircraft dispatch limitations.
This indication is known as a flight deck effect (FDE).
Flight deck effects are normally displayed as a part of
the basic flight crew display system. They provide information at the level that will best support flight crew
determination of their response to this condition. In
general, this means that a function is lost or degraded.
For example, a pilot need not know which component
caused a function to be lost, as his actions only change
based on which function has been lost.
2. Reports this fault to the CMCS (to the level necessary
to indicate to the mechanic what needs to be done to
correct the fault; sometimes this may require additional monitors to provide better isolation than those
used to identify that a fault has occurred). This indication is known as a fault report.
3. The flight crew display system reports to the CMCS
that the flight deck effect is being displayed. Based on one
or more received fault reports, the CMCS generates a
message for the maintenance crew, and correlates it
with the flight deck effect. This message is known as a
maintenance message. The maintenance message contains an identification number, which points to a procedure in fault isolation manuals, and text to indicate
what has been detected, and, optionally, the LRUs that
could contain the fault. In a federated BITE system
(where there is no CMCS consolidation function, e.g.,
Figure 13. Boeing 777 CMCS fault detection and processing concept.
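The report-and-correlate flow in steps 1 to 3 can be sketched in code. This is an illustrative model only; the data structures, the fault identifiers, and the rule of matching a fault to the FDEs it can cause are assumptions, not the actual Boeing 777 CMCS design.

```python
# Illustrative sketch of fault-report-to-FDE correlation. All names,
# identifiers, and the matching rule are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class FaultReport:
    source_lru: str                 # member system that detected the fault
    fault_id: str                   # keys a fault isolation manual procedure
    suspect_lrus: list = field(default_factory=list)

@dataclass
class MaintenanceMessage:
    message_id: str
    text: str
    suspect_lrus: list
    correlated_fdes: list           # flight deck effects this fault explains

def correlate(fault_reports, displayed_fdes, fde_map):
    """fde_map: fault_id -> set of FDE names that the fault can cause."""
    messages = []
    for report in fault_reports:
        causes = fde_map.get(report.fault_id, set())
        messages.append(MaintenanceMessage(
            message_id=report.fault_id,
            text=f"{report.source_lru}: fault {report.fault_id} detected",
            suspect_lrus=list(report.suspect_lrus),
            correlated_fdes=[f for f in displayed_fdes if f in causes],
        ))
    return messages
```

A mechanic reading the resulting message list sees, for each fault, the suspected LRUs and the flight deck effects that the fault explains.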
[Figure: example CMCS report display, listing maintenance messages with fault codes (e.g., NO LAND, fault code 221 033 00) and their correlated EICAS status and advisory flight deck effects.]
Wrap-around monitors, which detect whether output circuitry may be faulty, by feedback and interrogation of the output signal.
Activity monitors, which detect if input signals are being received (normally used for monitoring signals received on data buses).
Out-of-range monitors, which detect if an input signal is within the expected range of values for that signal.
Comparison monitors, which detect whether two or more signals agree. A comparison between two inputs often cannot be used to determine which input is incorrect; a third input can be used to determine which input is incorrect.
Command response monitors, which detect if components such as actuators, valves, and relays are properly following the command signal.
Care must be taken in the use of monitor results for maintenance. If their characteristics are not clearly identified, the resulting maintenance indications may confuse the mechanic or cause unnecessary maintenance actions (such as the removal of equipment).
Figure 15. Airplane turnaround flowchart.
[Figure: fault propagation example in which a pitot probe fault propagates through the air data module ("pressure is invalid") and the air data inertial reference unit ("airspeed is invalid") to the autopilot computer, producing air data fault reports and the "no autoland" flight deck effect.]
As a result, the CMCS cannot practically produce the perfect answer (the single faulty LRU) in all cases. It can point
the mechanic to a small group of LRUs in almost all cases.
Even in this case, if it is reliable in doing this, it is still a very
necessary and effective tool to aid the mechanic in correcting
aircraft problems.
Central Maintenance Computer System Fault Storage. Once
maintenance messages and correlated FDEs are identified,
they may be stored for later retrieval by maintenance personnel. This is particularly critical where the fault is intermittent or can only be detected in certain conditions, since in
these cases the monitors may not be detecting the fault by
the time that the aircraft returns to the ground. This storage
of maintenance messages and correlated FDEs is called fault
history. In order to be effective, the system must be designed
so that maintenance messages are stored in fault history only
for fault conditions. In particular, ground maintenance activity often induces perceived fault conditions which are detected
by the various system monitors. For example, an LRU is expected to transmit on a data bus when the aircraft is flying;
if it stops transmitting in flight, a real fault condition exists,
and a maintenance message should be recorded. During
maintenance, if a circuit breaker for this LRU is opened,
other LRUs will report that this given LRU is no longer transmitting. This is not a real fault in the LRU, and thus, maintenance messages should not be recorded. Therefore, maintenance messages for these conditions are normally not stored
in fault history when the aircraft may be undergoing maintenance. The CMCS uses flight phases to determine when a
message should be stored. Flight phases identify specific regions of the aircraft flight cycle (including engine start, taxi
out, takeoff, climb, cruise, descent, approach, roll-out, taxi in,
engine shutdown, and maintenance).
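The flight-phase gating described above can be sketched as follows. The phase names come from the list in the text; treating only the maintenance phase as non-storing is a simplifying assumption, since a real CMCS applies more detailed per-message rules.

```python
# Sketch: store a maintenance message in fault history only when the
# current flight phase makes the fault credible. Gating only on the
# maintenance phase is a simplifying assumption.
FLIGHT_PHASES = [
    "engine start", "taxi out", "takeoff", "climb", "cruise", "descent",
    "approach", "roll-out", "taxi in", "engine shutdown", "maintenance",
]

STORE_PHASES = set(FLIGHT_PHASES) - {"maintenance"}

fault_history = []

def record(message: str, phase: str) -> None:
    """Append to fault history unless the aircraft may be under maintenance."""
    if phase in STORE_PHASES:
        fault_history.append((phase, message))

record("LRU X stopped transmitting", "cruise")       # real fault: stored
record("LRU X stopped transmitting", "maintenance")  # breaker likely open: not stored
```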
Ground Tests
Ground tests are designed to allow the mechanic to verify
proper installation and operation of all or part of the system.
They are initiated based on user request. Ground tests are
often used to verify whether a fault has been corrected. For
some faults, ground tests are designed to re-create conditions
under which a fault can be detected, and then determine if
the fault exists. One very important issue regarding use of
these tests is to make sure that they are not run at an inappropriate time. For example, a flight control system should
not run a maintenance test while the pilot is flying the aircraft, as hazardous conditions could result. The applicable
systems contain safeguards to prevent such inappropriate
ground test operation.
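A minimal sketch of such a safeguard follows; the specific interlock conditions (on ground, engines shut down, parking brake set) are illustrative assumptions, not a certified inhibit design.

```python
# Sketch of a ground-test interlock: the test is refused unless the
# aircraft state is safe. The chosen conditions are illustrative only.
def ground_test_permitted(on_ground: bool, engines_running: bool,
                          parking_brake_set: bool) -> bool:
    return on_ground and not engines_running and parking_brake_set

def run_ground_test(name: str, on_ground: bool, engines_running: bool,
                    parking_brake_set: bool) -> str:
    if not ground_test_permitted(on_ground, engines_running, parking_brake_set):
        return f"{name}: TEST INHIBITED"
    # ... re-create the fault detection conditions and report the result ...
    return f"{name}: TEST STARTED"

# A flight control test requested in flight must be inhibited:
status = run_ground_test("Flight control surface test", False, True, False)
```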
Data Load/Configuration Identification
Data load is used to load new software or data into an LRU.
Much of the functionality of modern systems is incorporated
into software. As changes to this functionality are desired,
either to correct problems or add new features, software updates are required. Data loading provides the means to efficiently install the new software onto the aircraft. Data loading shares one common issue with ground tests. Each system
must provide safeguards to make sure that software can only
be loaded when it is safe to do so. Otherwise, loading of software into a flight control system while the aircraft is in flight,
for example, could have hazardous consequences. Another important issue with data loading is that the airline must make
sure that the resulting software configuration is legal for
flight. To support this determination, the system must provide a configuration identification function, in which it can
request and display software and hardware configuration for
any of the applicable systems. This tool can also be used by
the airlines to track what LRUs are installed on each aircraft.
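The legality check can be sketched as a comparison of the reported configuration against an airline-approved list; the part numbers and data layout here are invented for illustration.

```python
# Sketch: verify that every LRU reports an approved software part number.
# All part numbers below are invented for illustration.
def configuration_legal(reported: dict, approved: dict) -> bool:
    """reported: LRU -> loaded part number; approved: LRU -> allowed set."""
    return all(lru in approved and part in approved[lru]
               for lru, part in reported.items())

approved = {
    "FMC": {"FMC-SW-007", "FMC-SW-008"},   # two approved FMC loads
    "EEC": {"EEC-SW-003"},
}

legal = configuration_legal({"FMC": "FMC-SW-008", "EEC": "EEC-SW-003"}, approved)
stale = configuration_legal({"FMC": "FMC-SW-001", "EEC": "EEC-SW-003"}, approved)
```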
44
Transmitter fault
Radio
altimeter
Autopilot
Warning
system
Flight
management
No input from
radio altimeter
No input from
radio altimeter
No input from
radio altimeter
Receiver fault
Autopilot
Maintenance message:
Radio altimeter has no output.
Radio
altimeter
Warning
system
Flight
management
No input from
radio altimeter
Maintenance message:
Radio altimeter has no output.
Reporting
Reporting consists of the capability to transmit the results of
the various CMCS functions to output devices such as a
printer, a disk drive, or to ground stations via an aircraft to
ground data link. The latter is gaining increasing use, as airlines realize the benefits of knowing what faults have occurred on an aircraft prior to the aircraft arrival. With this
information, they can be prepared for any maintenance action
that may be required when the aircraft lands. This reporting
also consolidates information in the hands of maintenance
planning personnel so that they can plan for maintenance activities during overnight or longer maintenance periods.
The CMCS can be programmed to transmit fault information automatically in a variety of ways as desired by the airlines. Reports can be transmitted as faults are detected, or a
summary of the faults detected during the flight can be transmitted toward the end of a flight. In addition to this, ground
stations can request transmission of fault information or system configuration information at any time. The latter is useful.
[Figure: CMCS maintenance menu pages, showing line maintenance functions (existing flight deck effects, present leg faults, existing faults, ground tests, reports) and extended maintenance functions (fault history, input monitoring, system configuration, data load, engine balancing, maintenance planning, shop faults, maintenance enable/disable, central maintenance computer switch control, and other special functions).]
propagate more widely between systems. As a result, mechanics are more dependent on systems such as the CMCS to help
them determine how to correct a given problem. Devices such
as the CMCS will need to grow in complexity to allow accurate identification of the faulty components, and the effects of
those faults. Use of aircraft system models in CMCS design
is expected to increase in order to support this growing complexity.
Greater Use of Downlinked Information
With the limited amount of time a typical commercial aircraft
may have between flights, advance (prior to arrival) information of faults that have occurred can facilitate more timely
disposition of these conditions. If the condition is deferrable,
this advance information can give maintenance personnel
time to consider the options and decide on a course of action.
If the condition is to be fixed before the next flight, the information can allow maintenance personnel to be prepared with
replacement equipment when the aircraft lands. Transmission of this data can also aid in planning future maintenance
activity; faults reported in these transmissions can more
readily be scheduled for repair when the equipment, time,
and personnel are available. Airlines are making increasing
use of this capability as more aircraft support it.
Greater Use of Prognostics
The airplane condition monitoring system provides capabilities to identify trends in performance, in part to determine if
and when equipment may benefit from maintenance.
authoring, interchange, delivery, and use of digital data produced by aircraft, engine, and component manufacturers.
ATA Specification 2100 will replace ATA Specification 100,
when all support documents have transitioned to digital format. ATA Specification 2100 is not limited to particular functional areas for aircraft as the ATA Specification Number 100
is, although further development of functional requirements
may be added during ATA Specification 2100's lifetime.
Air Transport Association Chapter-Section-Subject Numbering
System. Whether in paper or digital form, a standard numbering system is used throughout most jet transport technical
documentation. It follows ATA Specification Number 100,
which specifies that all technical data be organized by this numbering
system. The numbering system specified in ATA Specification
100 is known as the ATA chapter-section-subject numbering
system. The numbering system consists of three elements.
The first element assigns an ATA chapter number to each
aircraft system. For example, ATA Chapter 28 is for the fuel
system, ATA Chapter 34 is for navigation systems, and so on.
The second element assigns an ATA section number for each
subsystem. For example, a subsystem for the fuel system
might be Indicating, and has a section number 30 assigned.
Therefore, any document referencing a fuel indicating system
component would start with the ATA Chapter section number
28-30. The third element is a unique number assigned by the
aircraft manufacturer for a specific component. For example,
a fuel system temperature sensor, which is used to provide a
temperature indication in the flight deck, might have an ATA
subject (sometimes referred to as unit) number 06 assigned. All references to this component in the technical manuals would use the number 28-30-06 (or portions of this number).
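The numbering convention can be captured in a couple of helper functions, using the fuel temperature sensor example (chapter 28, section 30, subject 06) from the text.

```python
# Compose and parse ATA chapter-section-subject numbers.
def ata_number(chapter: int, section: int, subject: int) -> str:
    return f"{chapter:02d}-{section:02d}-{subject:02d}"

def parse_ata(number: str) -> tuple:
    chapter, section, subject = (int(part) for part in number.split("-"))
    return chapter, section, subject

ref = ata_number(28, 30, 6)     # fuel system, indicating subsystem, sensor 06
assert ref == "28-30-06"
assert parse_ata(ref) == (28, 30, 6)
```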
A list of the ATA chapter-section-subject numbering system contained in ATA Specification Number 100 is as follows:
ATA Chapter 5: Time limits/maintenance checks.
Manufacturers recommended time limits, maintenance
checks, and inspections.
ATA Chapter 6: Dimensions and areas. The area, dimensions, stations, and physical locations of the major
structural members of the aircraft. Also includes zone
locations.
ATA Chapter 7: Lifting and shoring. Charts showing
lifting and jacking points for maintenance, overhaul and
repair. Standard jacking procedures and lifting and
shoring for abnormal conditions.
ATA Chapter 8: Leveling and weighing.
ATA Chapter 9: Towing and taxiing.
ATA Chapter 10: Parking and mooring.
ATA Chapter 11: Required placards. The location and
pictorial illustrations of placards, stencils, and markings.
ATA Chapter 12: Servicing. Replenishment of all aircraft
system reservoirs (fluid and gaseous), oil changes, lubrication, and toilet draining and flushing. Filter types and
locations. Also cold weather maintenance and exterior
cleaning.
ATA Chapter 20: Standard practices (airframe).
[Figure: maintenance documentation used for unscheduled maintenance (structural damage, flight faults, ground faults, service problems) and planned through-stop, turnaround, and daily checks: maintenance planning data document, structural repair manual, fault reporting manual, fault isolation manual, BITE manual, dispatch deviation guide, maintenance tips, task cards and indexes, and airplane maintenance manual, with supporting data from the system schematics manual, wiring diagram manual, illustrated parts catalog, and standard wiring practices manual.]
Each airline typically defines its maintenance training requirements in the airline's overall maintenance program.
This maintenance program is reviewed and approved by the
government regulatory agency.
Training for Mechanics and Technicians. The initial training
for mechanics to get their certification and ratings is referred
to as ab initio training (meaning "from the beginning"). Ab initio training is offered by specialized aviation schools, by colleges and universities that have aviation programs, or even
by some of the airlines. Many of these schools, in addition to
preparing the mechanic for his certification and rating, offer
various levels of degrees, ranging from diplomas of completion
to Bachelor's and Master's degrees in Aviation Maintenance
and other aviation specialties. In the United States these
training schools are covered under FAR Part 147, Aviation
Maintenance Technician Schools. It prescribes the requirements for issuing aviation maintenance technician school certificates and associated ratings and the general operating
rules for the holders of those certificates and ratings. The following ratings are issued under FAR Part 147: (1) Airframe,
(2) Powerplant, and (3) Airframe and Powerplant.
The number of courses and the length of time it takes to
get a mechanic's certificate and rating varies from country to
country. In the United States, to complete all of the required
courses and to fulfill the practical experience requirement
takes approximately 2 years. Once course work is complete,
the mechanic must pass written, oral, and practical examinations before being issued a certificate and associated rating
for the particular area they studied. In the United States it
is either an Airframe, Powerplant, or combined Airframe and
Powerplant (A&P) rating. The regulations for certification of
mechanics are covered in FAR Part 65, Certification: Airmen
Other Than Flight Crewmembers. It prescribes the requirements for issuing the following certificates and associated ratings and the general operating rules for the holders of those
certificates and ratings: (1) Air traffic control tower operators,
(2) Aircraft dispatchers, (3) Mechanics, (4) Repairmen, and (5)
Parachute riggers. A proposed new FAR, Part 66, specifies
new rules for aviation maintenance personnel.
Aviation Associations and Councils. Many aviation associations and councils have been formed by the airlines, manufacturers, and aviation specialty schools to provide guidelines to colleges and universities for aviation maintenance training and accreditation. Several key associations and councils involved in aviation maintenance training are:
Aviation Technician Education Council (ATEC). This organization is made up of FAA-approved Aviation Maintenance Technician schools (FAR Part 147 schools), the industry (airlines, manufacturers, etc.), and governmental agencies. It was founded in 1961 to further the standing of FAA-approved schools with education and industry, and to promote mutually beneficial relations with all industry and governmental agencies. This organization is very active in FAR Part 147 regulations and the rewrite of FAR Parts 65 and 66.
Council on Aviation Accreditation (CAA). The CAA is an independent council that sets standards for all aviation programs taught in colleges and universities in America. It is responsible for hearing and ruling on accreditation applications by these institutions and for reviewing the quality of these programs every five years. Its members include the faculty of aviation institutions and industry members such as aircraft and engine manufacturers and airlines.
ATA Specification 104 Maintenance Training Subcommittee. This subcommittee of the ATA developed ATA Specification 104, which contains the guidelines for aircraft maintenance training that most airlines and aircraft and engine manufacturers follow (see the next section).
Air Transport Association Specification 104 Guidelines for Aircraft Maintenance Training. ATA Specification 104, Guidelines for Aircraft Maintenance Training, was developed by the Maintenance Training Subcommittee, which was made up of representatives from the airlines and airframe/engine manufacturers. Its purpose is to provide a better understanding of the training requirements of the various job function/skill mixes resident in airline maintenance operations. By following these guidelines, training program development and packaging are more precisely oriented to the skill/job of the students. This enhances the students' acceptance of the training and increases retention of need-to-know data. Users of ATA Specification 104 include airline training departments, manufacturer training departments, computer based training (see CBT later in this article) vendors, and regulatory agencies.
ATA Specification 104 specifies five levels of target students, their entry level requirements, and the objectives that a particular level of training is intended to achieve.
The MTS lessons typically focus on the maintenance performed on the flight line between flights, either during a turnaround or overnight/base maintenance. This concept, named
line-oriented scenarios, focuses on the material recently covered in the classroom and CBT. The students put the knowledge gained in the classroom and the skill gained in CBT to work
by performing real maintenance tasks in the MTS.
ELECTRONIC PERFORMANCE SUPPORT TOOLS
Because of the vast quantity and array of technical documentation that is necessary to perform maintenance on jet
transports, digitizing the data and making it accessible from
a computer became necessary. Beginning in the early 1990s,
aviation manufacturers began digitizing their maintenance
documents, thus making them accessible from a variety of devices such as PCs, laptops, or other specially built devices designed specifically for aircraft maintenance support. Because
these devices aid in the performance of maintenance tasks,
they became known as electronic performance support tools.
Each maintenance electronic performance support tool is
essentially an electronic library of maintenance documents. It
consists of a digital database of technical data or technical
documents that are accessed via a retrieval software program.
Typically the tool is nothing more than a CD-ROM containing
the technical documents already described, loaded into a laptop computer. As electronic support tools evolved, many specially built devices were designed specifically for aircraft
maintenance. Figure 20 shows a Boeing 777 portable maintenance access terminal.
Besides the variability of types, electronic performance
support tools also vary in what they do or can perform. Often
they contain not just the technical documents that are used
for a reference when performing a maintenance task, but also
additional features such as training programs and case-based reasoning.
JACK HESSBURG
RICHARD REUTER
WILLIAM AHL
Boeing Commercial Airplane Group
Benefits
Electronic performance support tools offer many more benefits than just portability and relief from the use of paper and
microfilm documents. Because they consist of digital data,
they are easily updated and can even be on-line. This eliminates the expense of paper revisions and the labor to revise
maintenance documentation. As electronic performance support tools have evolved, they also include many user-friendly
features that paper/microfilm cannot offer, such as indexing
systems for ease of access and fast retrieval of information, or
hyperlinking, which allows quick and direct movement from
document to document.
Future Considerations
As technology has advanced, so have the types of electronic
performance support tools. From nothing more than software
on a CD loaded on a laptop in the mid-1990s, electronic performance support tools are expected to evolve into small wearable computers viewed through dedicated goggles or safety
glasses. Devices such as a handheld computer
with a touch-sensitive liquid crystal display (LCD) and a low-frequency transceiver are expected to be on-line to the airline's computer system. They eventually will be on-line to the
aircraft manufacturer and therefore always up-to-date. Peripheral devices such as barcode readers could be connected
to these devices to record a multitude of information, such as
the user's name, the tail number of the aircraft being
worked on, the serial number of the parts removed, and the
maintenance task followed.
BIBLIOGRAPHY
1. ARINC Specification 429-12, Mark 33 Digital Information Transfer System (DITS), Annapolis, MD: Aeronautical Radio Inc.
AIRCRAFT NAVIGATION
Historically, pilots flew paths defined by VOR (VHF Omnidirectional Radio Range) radials or by nondirectional beacon signals using a basic display of sensor data. Such paths are restricted to paths directly to or from a
navigation station. Modern aircraft use computer-based
equipment, designated RNAV (Area Navigation) equipment,
to navigate without such restrictions. The desired path can
then be direct to any geographic location. The RNAV equipment calculates the aircraft position and synthesizes a display of data as if the navigation station were located at the
destination. However, much airspace is still made available
to the minimally equipped pilot by defining the paths in terms
of the basic navigation stations.
Aircraft navigation requires the definition of the intended
flight path, the aircraft position estimation function, and the
steering function. A commonly understood definition of the
intended flight path is necessary to allow an orderly flow of
traffic with proper separation. The position estimation function and the steering function are necessary to keep the aircraft on the intended flight path.
Navigation accuracy is a measure of the ability of the pilot
or equipment to maintain the true aircraft position near the
intended flight path. Generally, navigation accuracy focuses
mostly on crosstrack error, although in some cases the
alongtrack error can be significant. Figure 1 shows three components of lateral navigation accuracy.
Standardized flight paths are provided by government
agencies to control and separate aircraft in the airspace. Path
definition error is the error in defining the intended path.
This error may include the effects of data resolution, magnetic variation, location survey, and so on.
Position estimation error is the difference between the position estimate and the true position of the aircraft. This component is primarily dependent upon the quality of the navigation sensors used to form the position estimate.
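For a straight path segment, the crosstrack and alongtrack components of an error can be computed by projecting the position onto the path. The sketch below assumes a local flat-earth east/north frame in meters, which is adequate over short distances.

```python
# Sketch: decompose position error into alongtrack and crosstrack
# components relative to a straight intended path. Coordinates are
# (east, north) in meters in a local flat-earth frame (an assumption).
import math

def track_errors(path_start, path_end, position):
    """Return (alongtrack, crosstrack); crosstrack positive right of path."""
    ex, ey = path_end[0] - path_start[0], path_end[1] - path_start[1]
    length = math.hypot(ex, ey)
    ux, uy = ex / length, ey / length              # unit vector along path
    px, py = position[0] - path_start[0], position[1] - path_start[1]
    along = px * ux + py * uy                      # distance along the path
    cross = px * uy - py * ux                      # offset right of the path
    return along, cross

# 300 m along an eastbound path, 40 m right (south) of it:
along, cross = track_errors((0.0, 0.0), (1000.0, 0.0), (300.0, -40.0))
```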
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
[Figures: intended path with path definition error and true position of aircraft; 95% accuracy limit (X NM) and 99.999% integrity limit (2X NM).]
AIRWAYS
Published airways provide defined paths for much of en route
airspace. Generally, airways are defined by great-circle segments terminated by VOR stations. In remote areas, nondirectional beacons (NDBs) are used in the airway structure.
Figure 3 shows an aeronautical chart of airways.
For purposes of transitioning from one airway to another, the intersections of airways are often defined by
named fixes. Navigation equipment can store the network
of airways and intersections for use by the pilot in defining
the path. This allows the pilot to enter the intended flight
path in terms of the airway identifiers. Airborne equipment
generally does not store directional or other conditional airway restrictions.
For airways defined by VOR stations, the pilot is expected
to navigate using the VOR at the closest end of the segment
unless a changeover point (COP) is defined on the airway. The
defined changeover point may not be at the midpoint of the
airway segment to account for radio interference or other
unique characteristics of the situation.
Some airways are designated as RNAV airways and are
available only to aircraft operating with RNAV equipment.
Such airways do not have the restriction that a receivable
VOR or NDB be used to define the great-circle path. It is expected that the RNAV equipment uses available navigation
stations or GPS to compute the aircraft position. Because conventional non-RNAV airways are defined by VOR or NDB stations, traffic becomes concentrated near those stations. RNAV
airways offer a significant advantage by allowing the airspace
planner the ability to spread the aircraft traffic over a greater
area without the installation and support of additional navigation stations.
TERMINAL AREA PROCEDURES
To provide a fixed structure to the departure and arrival of
aircraft at an airport, published procedures are provided by
the authorities. Such procedures are known as standard instrument departures (SIDs) and standard arrival routes
(STARs). Figure 4 is an example of an SID chart. Generally,
the instructions provided in SIDs and STARs are intended to
be flown by the pilot without the aid of RNAV equipment. In
order to incorporate the procedures into the RNAV equipment, the instructions must be reduced to a set of instructions
that can be executed by the equipment. A subsequent section
describes this process in more detail.
Standard approach procedures are issued by the authorities to assist pilots in safe and standardized landing operations. The generation of the approach procedures accounts for
obstacles, local traffic flow, and noise abatement. Historically,
the approach procedures are designed so that RNAV equipment is not required. That is, the pilot can execute the approach using basic sensors (VOR, DME, ADF) until landing
visually. For operations in reduced visibility situations, there
are Category II and III instrument landing system (ILS) approaches that require automatic landing equipment. In addition, there are RNAV and global positioning system (GPS) approaches that require RNAV equipment. Modern RNAV
equipment is capable of storing the defined approach path
and assisting the pilot in flying all approaches. Figure 5 is an
example of an approach chart.
NAVIGATION SENSOR SYSTEMS
RNAV equipment receives information from one or more sensor systems and forms an estimate of the aircraft position. If
more than one sensor type is available, the position estima-
tion algorithm will account for the quality differences and automatically use the data to generate a best estimate of position. Complementary filters or Kalman filters are commonly
used to smooth and blend the sensor data. The common sensors used for position estimation are GPS, DME, LORAN,
VOR, and IRS. The data from each of the sensor types have
unique characteristics of accuracy, integrity, and availability.
In addition, each of the sensor types requires unique support
functions.
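As a simple stand-in for the complementary or Kalman filtering mentioned above, the sketch below blends sensor position estimates by inverse-variance weighting, so that more accurate sensors dominate the solution. The frame and accuracy figures are assumptions for illustration.

```python
# Sketch: inverse-variance blend of position estimates from several
# sensors. Positions are (east, north) meters from a common reference;
# sigma is each sensor's 1-sigma accuracy in meters.
def blend(estimates):
    """estimates: list of ((east, north), sigma). Returns blended (east, north)."""
    weights = [1.0 / (sigma * sigma) for _, sigma in estimates]
    total = sum(weights)
    east = sum(w * pos[0] for (pos, _), w in zip(estimates, weights)) / total
    north = sum(w * pos[1] for (pos, _), w in zip(estimates, weights)) / total
    return east, north

# A 20 m GPS fix dominates a 400 m VOR fix:
east, north = blend([((100.0, 50.0), 20.0), ((500.0, 250.0), 400.0)])
```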
Sensor Accuracy
The accuracy characteristic of a sensor can be expressed as
the 95th percentile of normal performance. For any specific
sensor, the wide variation in conditions in which it can be
used makes it difficult to generalize the accuracy with specific
numbers. The following data represent the accuracy under
reasonable conditions.
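Expressed as code, the 95th-percentile accuracy of a sensor can be estimated from a sample of measured-versus-true positions; the nearest-rank percentile definition used here is one common choice.

```python
# Sketch: 95th-percentile radial position error of a sensor, computed
# from samples with the nearest-rank percentile definition.
import math

def radial_errors(measured, truth):
    return sorted(math.hypot(mx - tx, my - ty)
                  for (mx, my), (tx, ty) in zip(measured, truth))

def percentile95(errors):
    rank = math.ceil(0.95 * len(errors))   # smallest value covering 95%
    return errors[rank - 1]

errors = radial_errors(
    [(3.0, 4.0), (0.0, 1.0), (6.0, 8.0), (1.0, 0.0)],  # measured fixes
    [(0.0, 0.0)] * 4,                                   # true position
)
accuracy = percentile95(errors)   # errors are [1, 1, 5, 10] -> 10.0
```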
One way to provide integrity is with redundant measurements. By comparison of the redundant measurements, an error in one of
the measurements can be detected and in some cases removed
from consideration.
GPS has a function known as receiver autonomous integrity monitoring (RAIM), which provides integrity. This function can be used when sufficient signals of satellites are available. This is usually the case when the GPS receiver is
receiving signals from five or more satellites. The status of
RAIM is provided to the RNAV equipment and is important
in approach operations using the GPS sensor.
For RNAV systems that use VOR and DME signals, if
there are not redundant signals available, the position solution is vulnerable to the effects of radio signal multipath and
to the navigation database integrity. The DME signal
multipath problem occurs in situations where the local terrain supports the reflection of the radio signal to or from the
DME station. The navigation database integrity is difficult to
ensure, especially for DMEs that are associated with military
TACANs. Military TACANs are sometimes moved, and the
information does not get included in the navigation database
in a timely fashion.
NAVIGATION COORDINATE REFERENCE
The WGS-84 ellipsoid has become the standard for aeronautical navigation. This reference can be viewed as a surface of
revolution defined by a specified ellipse rotated about the
earth polar axis. The semimajor axis of the ellipse lies in the
equatorial plane and has a length of 6378137.000 m. The
semiminor axis is coincident with the earth polar axis and
has a length of 6356752.314 m. Paths between two fixes on
the WGS-84 spheroid are defined as the minimum distance
path along the surface, known as the geodesic path between
the two points. In general, the geodesic path does not lie on a
plane but has a geometric characteristic of torsion. However,
for reasonable distances, there is no significant error by approximating the path as a portion of a great circle of the appropriate radius.
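Using the axis lengths given above, the great-circle approximation can be sketched with the haversine formula on a sphere of the mean WGS-84 radius; the choice of mean radius is a simplification.

```python
# Sketch: great-circle distance approximating the WGS-84 geodesic,
# using a sphere whose radius is the mean of the semimajor and
# semiminor axes quoted in the text (a simplification).
import math

WGS84_A = 6378137.000        # semimajor axis, meters
WGS84_B = 6356752.314        # semiminor axis, meters
R_MEAN = (WGS84_A + WGS84_B) / 2.0

def great_circle_m(lat1, lon1, lat2, lon2):
    """Haversine distance in meters; latitudes/longitudes in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2.0 * R_MEAN * math.asin(math.sqrt(a))

# One minute of latitude is close to one nautical mile (1852 m):
d = great_circle_m(45.0, -93.0, 45.0 + 1.0 / 60.0, -93.0)
```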
Most of the fixes defined in the world were specified in a
reference system other than WGS-84. An effort is under way
to mathematically convert the data from the original survey
coordinate system to that of the WGS-84 coordinate system.
At the same time, when possible, the survey of the location is
being improved.
COURSE OF THE GREAT CIRCLE PATH
The basic path for airways is a direct path between two fixes,
which may be a VOR station, an NDB station, or simply a
geographical location. In terminal area procedures the most
common path is defined by an inbound course to a fix. The
RNAV equipment approximates such paths as segments of a
great circle. Considering the case of a path defined as a radial
of a VOR, the actual true course depends upon the alignment
of the VOR transmitter antenna with respect to true north.
The angular difference between the zero degree radial of the
VOR and true north is called the VOR declination. When the
VOR station is installed, the 0° VOR radial is aligned with
magnetic north, so the VOR declination is the same as the
magnetic variation at the station at the time of installation.
AIRCRAFT NAVIGATION
[Figure: true course C_T measured from true north and magnetic course C_M measured from magnetic north.]
[Figure: a terminal area procedure with missed approach path, constructed from ARINC-424 leg types (AF, CA, CD, CI, CR, CF, DF, FA, FC, FD, FM, HA, HF, HM, IF, PI, TF, RF, VA, VD, VI, VM, VR).]
Using the ARINC-424 leg types, most terminal area procedures can be encoded in such a way that the RNAV equipment can generally fly the procedure in a fashion that is similar to the pilot navigation. However, there are significant
limitations to this concept.
First, the concept assumes that the RNAV equipment has
sufficient sensor data to accomplish the proper steering and
leg terminations. Lower-end RNAV systems designed for
smaller aircraft often do not have sensors providing heading
or barometric altitude. Without a heading sensor, the system
cannot fly the heading legs properly. Substituting track legs
for heading legs is not always satisfactory. In the same way,
legs that are terminated by an altitude (CA, FA, VA, and HA)
require that the RNAV system have access to barometric altitude data. Using geometric altitude determined from GPS
data instead introduces several errors: geometric altitude ignores the nonstandard pressure gradient of the
atmosphere, it ignores the undulations of mean sea level,
and the GPS sensor is accurate in the vertical axis only to
about 150 m, which is less accurate than a barometric altimeter.
A second limitation to the concept of using the ARINC-424
leg types has to do with the diversity of instructions that may
[Figure: RNAV procedure structure showing runway transitions, a common route, en route transitions, approach transitions, and the final approach path.]
BIBLIOGRAPHY
1. RTCA Inc., Minimum Aviation System Performance Standards for
Required Navigation Performance RNP-RNAV, DO-236, current
ed., Washington, DC: RTCA Inc.
GERALD E. BENDIXEN
Rockwell Collins, Inc.
ATTITUDE CONTROL
Attitude control is the field of engineering science that deals
with the control of the rotational motion of a rigid body about
a reference point (typically the center of mass). Attitude control systems are commonly used in controlling the orientation
of spacecraft or aircraft. As a spacecraft orbits the Earth, it
may have to move in space in such a way that its antenna
always points to a ground station for communication or its
on-board telescope keeps pointing to a distant star. A fighter
aircraft may be required to turn very fast and maneuver aggressively to shoot down enemy airplanes or to avoid an incoming missile. A civilian airplane may need to keep a con-
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
The attitude motion of a rigid body about its center of mass is governed by the angular momentum law

dH/dt = M   (1)

where M is the applied external moment and H is the angular momentum

H = ∫_B r × v dm   (2)

and the integration extends over the entire body B. In Eq. (2)
the vector v = ṙ denotes the inertial velocity of the mass element dm (see Fig. 1).
Figure 1. Inertial and body-fixed reference frames. The body-fixed
reference frame is located at the center of mass cm. The vector r
denotes the location of the mass element dm in the inertial frame
and the vector ρ denotes the location of the mass in the body frame.
Writing r = R + ρ, where R is the position of the center of mass and ρ the position of dm in the body-fixed frame,

ρ̇ = ω × ρ   (3)

where ω is the angular velocity of the moving frame. The velocity of the mass element dm is thus given by

v = Ṙ + ω × ρ   (4)

Substituting into Eq. (2) yields

H = (∫_B ρ dm) × Ṙ + ∫_B ρ × (ω × ρ) dm   (5)

The first integral in the previous expression vanishes, because the origin is at the center of mass:

∫_B ρ dm = 0   (6)

Therefore

H = ∫_B ρ × (ω × ρ) dm   (7)

Resolving the position of the mass element in the body frame as

ρ = x i + y j + z k   (8)

the angular momentum can be written compactly as

H = J ω   (9)

where J is the inertia matrix

J = [  Jx   −Jxy  −Jxz
      −Jxy   Jy   −Jyz
      −Jxz  −Jyz   Jz  ]   (10)

with

Jx = ∫_B (y² + z²) dm,   Jxy = ∫_B xy dm
Jy = ∫_B (x² + z²) dm,   Jxz = ∫_B xz dm   (11)
Jz = ∫_B (x² + y²) dm,   Jyz = ∫_B yz dm

Applying Eq. (1) in the rotating body frame, where dH/dt = Ḣ|_B + ω × H, gives the equations of motion

J ω̇ + ω × J ω = M   (12)
The inertia matrix J, also called the inertia tensor, is symmetric and positive definite. One can therefore choose a reference
frame such that the matrix J is diagonal. This particular
choice of body-fixed axes is called the axes of principal moments of inertia. The directions of these axes are exactly those
determined by the eigenvectors of the matrix J.
The components of Eq. (12) resolved along the principal
axes are given by
Jx ω̇x = (Jy − Jz) ωy ωz + Mx
Jy ω̇y = (Jz − Jx) ωz ωx + My   (13)
Jz ω̇z = (Jx − Jy) ωx ωy + Mz

where Jx, Jy, Jz are the three principal moments of inertia (the
eigenvalues of the matrix J), ωx, ωy, ωz are the components of
the angular velocity vector along the principal axes, as in Eq.
(8), and Mx, My, Mz are the components of the applied moment
along the same set of axes, i.e., M = Mx i + My j + Mz k.
Equation (12) or Eq. (13) is the starting point for most attitude control problems.
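A minimal numerical sketch of Eq. (13) follows; the principal moments of inertia, initial rates, and fixed-step RK4 integrator are illustrative choices, not part of the original text.

```python
import numpy as np

def euler_rates(w, J, M):
    """Angular acceleration from Euler's equations (13);
    J = (Jx, Jy, Jz) are principal moments of inertia."""
    Jx, Jy, Jz = J
    wx, wy, wz = w
    return np.array([
        ((Jy - Jz) * wy * wz + M[0]) / Jx,
        ((Jz - Jx) * wz * wx + M[1]) / Jy,
        ((Jx - Jy) * wx * wy + M[2]) / Jz,
    ])

def integrate(w0, J, M_func, dt, steps):
    """Fixed-step RK4 propagation of the body angular velocity."""
    w = np.asarray(w0, dtype=float)
    for k in range(steps):
        t = k * dt
        k1 = euler_rates(w, J, M_func(t))
        k2 = euler_rates(w + 0.5 * dt * k1, J, M_func(t + 0.5 * dt))
        k3 = euler_rates(w + 0.5 * dt * k2, J, M_func(t + 0.5 * dt))
        k4 = euler_rates(w + dt * k3, J, M_func(t + dt))
        w = w + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return w
```

With M = 0 the magnitude of the angular momentum H = Jω must stay constant, which provides a convenient check on the integration.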
Kinematics of the Attitude Motion
The solution of Eq. (13) provides the instantaneous angular
velocity of the body about its center of mass. It does not capture the instantaneous orientation of the body with respect
to, say, the inertial reference frame. In particular, integration
of the angular velocity vector does not, in general, give any
useful information about the orientation of the body. The orientation of the body is completely determined if we know the
orientation of the body-fixed frame with respect to the inertial
reference frame, used in deriving Eq. (13). The rotation matrix
R between the body-fixed and the inertial reference frames is
used to completely describe the body orientation. The rotation
matrix is a 3 × 3 matrix having as columns the components
of the unit vectors of the inertial frame expressed in terms of
the unit vectors of the body-fixed frame.
In other words, if i, j, k denote the unit vectors of the body
frame and I, J, K denote the unit vectors of the inertial frame,
a vector V having coordinates (Vx, Vy, Vz) and (VX, VY, VZ) with
respect to the body and the inertial frame, respectively, satisfies

[Vx, Vy, Vz]ᵀ = R [VX, VY, VZ]ᵀ   (14)

In particular, the components of the angular momentum in the two frames are related by

H_B = R H_I   (15)

The rotation matrix obeys the differential equation

dR/dt = S(ω) R   (16)

where S(ω) is the skew-symmetric matrix (S = −Sᵀ)

S(ω) = [   0    ωz  −ωy
         −ωz     0   ωx
          ωy   −ωx    0 ]   (17)

Since R is orthogonal,

R Rᵀ = Rᵀ R = I   (18)

Any rotation can be composed from the three elementary rotations

Rx(φ) = [ 1     0       0
          0   cos φ   sin φ
          0  −sin φ   cos φ ]

Ry(θ) = [ cos θ  0  −sin θ
            0    1     0
          sin θ  0   cos θ ]   (20)

Rz(ψ) = [  cos ψ  sin ψ  0
          −sin ψ  cos ψ  0
             0      0    1 ]
We can use Eq. (16) to find the orientation of the body at any
instant of time if the corresponding angular velocity vector
of the body is known. In particular, the matrix differential
equation in Eq. (16) can be integrated from the known initial
attitude of the body to propagate the attitude for all future
times. This process will require the integration of the nine
linear but time-varying differential equations for the elements of the matrix R in order to obtain R(t) at each time t.
Careful examination of the matrix R, however, reveals that
the nine elements of this matrix are not independent from
each other, since the matrix R must necessarily satisfy the
constraints in Eq. (18). An alternative approach to solving Eq.
(16) is to parameterize the matrix R in terms of some other
variables and then use the differential equations of these
variables in order to propagate the attitude history.
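The direct propagation of R described above can be sketched as follows; the constant angular velocity is an arbitrary illustrative value, and the skew-symmetric matrix follows the sign convention of Eqs. (16) and (17).

```python
import numpy as np

def S(w):
    """Skew-symmetric matrix of Eq. (17), so that dR/dt = S(w) R."""
    wx, wy, wz = w
    return np.array([[0.0,  wz, -wy],
                     [-wz, 0.0,  wx],
                     [ wy, -wx, 0.0]])

def propagate_R(R0, w, dt, steps):
    """Integrate Eq. (16) with RK4 for a constant body-frame rate w.
    Note that RK4 preserves the orthogonality constraint (18) only
    approximately; the drift is small for small steps."""
    R = np.asarray(R0, dtype=float)
    for _ in range(steps):
        k1 = S(w) @ R
        k2 = S(w) @ (R + 0.5 * dt * k1)
        k3 = S(w) @ (R + 0.5 * dt * k2)
        k4 = S(w) @ (R + dt * k3)
        R = R + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return R
```

Propagating nine coupled elements while monitoring R Rᵀ − I makes the redundancy of this parameterization, discussed above, concrete.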
Euler Angles. The minimum number of parameters that can
be used to parameterize all nine elements of R is three. [Notice that Eq. (18) imposes six independent constraints among
the elements of R.] The Euler angles are the most commonly
used three-dimensional parameterization of the rotation matrix R. They have the advantage that they are amenable to
physical interpretation and can be easily visualized.
Using the Euler angles we can describe the final orientation of the body-axis frame by three successive elementary
rotations.

[Figure 2: the intermediate frames produced by the successive elementary rotations defining the Euler angles.]

For the 3-2-1 (yaw, pitch, roll) sequence the rotation matrix is composed as

R = Rx(φ) Ry(θ) Rz(ψ)   (21)

and thus

R = [ cos θ cos ψ                          cos θ sin ψ                          −sin θ
      sin φ sin θ cos ψ − cos φ sin ψ      sin φ sin θ sin ψ + cos φ cos ψ      sin φ cos θ
      cos φ sin θ cos ψ + sin φ sin ψ      cos φ sin θ sin ψ − sin φ cos ψ      cos φ cos θ ]   (22)

The components of the angular velocity vector in the body frame are given in terms of the rates of these Euler angles by

ωx = φ̇ − ψ̇ sin θ
ωy = ψ̇ cos θ sin φ + θ̇ cos φ   (23)
ωz = ψ̇ cos θ cos φ − θ̇ sin φ

Solving for the Euler angle rates yields the kinematic differential equations

φ̇ = ωx + (ωy sin φ + ωz cos φ) tan θ
θ̇ = ωy cos φ − ωz sin φ   (24)
ψ̇ = (ωy sin φ + ωz cos φ) / cos θ

which become singular at θ = ±90°.

Euler Parameters. The Euler parameters (quaternions) provide a four-dimensional parameterization of the rotation matrix that avoids this singularity. In terms of the unit vector e = (e1, e2, e3) along the Euler axis of rotation and the rotation angle Φ about it, they are defined by

q0 = cos(Φ/2),   qi = ei sin(Φ/2),   i = 1, 2, 3   (25)

and satisfy the constraint

q0² + q1² + q2² + q3² = 1   (26)

The quantity

q = q0 + q1 i + q2 j + q3 k   (27)

is a quaternion of unit norm. The rotation matrix in terms of the Euler parameters is

R(q0, q1, q2, q3) =
[ q0² + q1² − q2² − q3²    2(q1 q2 + q0 q3)         2(q1 q3 − q0 q2)
  2(q1 q2 − q0 q3)         q0² − q1² + q2² − q3²    2(q2 q3 + q0 q1)
  2(q1 q3 + q0 q2)         2(q2 q3 − q0 q1)         q0² − q1² − q2² + q3² ]   (28)

and the corresponding kinematic equations are

[ q̇0 ]       [  0   −ωx  −ωy  −ωz ] [ q0 ]
[ q̇1 ]  = ½  [ ωx    0    ωz  −ωy ] [ q1 ]   (29)
[ q̇2 ]       [ ωy  −ωz    0    ωx ] [ q2 ]
[ q̇3 ]       [ ωz   ωy  −ωx    0  ] [ q3 ]
These equations are linear and, unlike the corresponding kinematic equations in
terms of the Euler angles, involve no trigonometric functions. Integration of these equations to
obtain attitude information can thus be performed very fast
on a computer. In addition, the attitude description in terms
of q0, q1, q2, q3 is global and nonsingular. For these reasons the
Euler parameters have increasingly gained popularity in
many attitude-control applications.
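A sketch of attitude propagation using Eq. (29) follows; the constant body rate is an illustrative value, and the renormalization step simply enforces the unit-norm constraint (26) against integration drift.

```python
import numpy as np

def quat_rates(q, w):
    """Quaternion kinematics of Eq. (29): qdot = 0.5 * Omega(w) * q."""
    wx, wy, wz = w
    Omega = np.array([
        [0.0, -wx, -wy, -wz],
        [wx,  0.0,  wz, -wy],
        [wy, -wz,  0.0,  wx],
        [wz,  wy, -wx,  0.0],
    ])
    return 0.5 * Omega @ q

def propagate(q0, w, dt, steps):
    """RK4 propagation under a constant body rate w, renormalizing each step."""
    q = np.asarray(q0, dtype=float)
    for _ in range(steps):
        k1 = quat_rates(q, w)
        k2 = quat_rates(q + 0.5 * dt * k1, w)
        k3 = quat_rates(q + 0.5 * dt * k2, w)
        k4 = quat_rates(q + dt * k3, w)
        q = q + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        q /= np.linalg.norm(q)   # enforce the unit-norm constraint (26)
    return q
```

A pure rotation about the body x axis at rate ω for time t should yield q = (cos(ωt/2), sin(ωt/2), 0, 0), consistent with Eq. (25).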
The main disadvantage when using the Euler parameters
is that they are difficult to visualize. The orientation needs to
be transformed to an Euler angle sequence if it is to be
meaningful, for example, to a pilot or an engineer. The Euler angles (φ, θ, ψ) in terms of the Euler parameters can
be computed, for example, from

sin θ = −2(q1 q3 − q0 q2)

tan φ = 2(q2 q3 + q0 q1) / (q0² − q1² − q2² + q3²)   (30)

tan ψ = 2(q1 q2 + q0 q3) / (q0² + q1² − q2² − q3²)
Torque-Free Motion. In the absence of external moments (M = 0), Eq. (13) reduces to

Jx ω̇x = (Jy − Jz) ωy ωz
Jy ω̇y = (Jz − Jx) ωz ωx   (31)
Jz ω̇z = (Jx − Jy) ωx ωy
Assuming a nonsymmetric body (Jx ≠ Jy ≠ Jz), equilibrium (or
steady-state) solutions correspond to permanent rotations
with constant angular velocity about each of the three axes.
For the sake of discussion, let us assume that Jx > Jy > Jz.
Recall that in the absence of any external torques the angular momentum vector H remains constant in inertial space.
Since the body rotates, H does not appear constant for an
observer sitting in the body-fixed frame. Nevertheless, the
magnitude of H is constant. This is evident from Eqs. (15)
and (18). Thus,
H² = H · H = Jx² ωx² + Jy² ωy² + Jz² ωz²   (32)

Likewise, twice the rotational kinetic energy

2T = Jx ωx² + Jy ωy² + Jz ωz²   (33)

is also constant. We can use these two expressions to determine the behavior of the angular velocity vector in the
body-fixed frame.
By dividing Eqs. (32) and (33) by their left-hand sides, we
obtain

ωx²/(H/Jx)² + ωy²/(H/Jy)² + ωz²/(H/Jz)² = 1   (34)

ωx²/(2T/Jx) + ωy²/(2T/Jy) + ωz²/(2T/Jz) = 1   (35)

Each equation defines an ellipsoid in the space of angular velocities, and the tip of the angular velocity vector must lie on the intersection of the two (Fig. 3).
Figure 3. The closed curves on the angular momentum ellipsoid denote the path of the tip of the angular velocity vector. Rotations about
the x and z axes are stable, whereas rotations about the y axis are
unstable. Here y is the intermediate moment-of-inertia axis.
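The stability properties summarized in Fig. 3 can be verified numerically by integrating Eq. (31); the inertia values below are illustrative and ordered Jx > Jy > Jz, so y is the intermediate axis.

```python
import numpy as np

def torque_free_step(w, J, dt):
    """One RK4 step of the torque-free Euler equations (31)."""
    Jx, Jy, Jz = J
    def f(w):
        return np.array([
            (Jy - Jz) / Jx * w[1] * w[2],
            (Jz - Jx) / Jy * w[2] * w[0],
            (Jx - Jy) / Jz * w[0] * w[1],
        ])
    k1 = f(w)
    k2 = f(w + 0.5 * dt * k1)
    k3 = f(w + 0.5 * dt * k2)
    k4 = f(w + dt * k3)
    return w + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def max_deviation(w0, J, dt=0.001, steps=20000):
    """Largest angle between w(t) and the initial spin axis."""
    w = np.array(w0, dtype=float)
    axis = w / np.linalg.norm(w)
    worst = 0.0
    for _ in range(steps):
        w = torque_free_step(w, J, dt)
        c = np.dot(w, axis) / np.linalg.norm(w)
        worst = max(worst, np.arccos(np.clip(c, -1.0, 1.0)))
    return worst
```

A slightly perturbed spin about the major (x) axis stays near it, while the same perturbation about the intermediate (y) axis leads to a tumble, as the closed polhode curves of Fig. 3 predict.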
Attitude Stabilization. For an inertially stabilized spacecraft, the attitude control system must keep the angular velocity vector with respect to the inertial frame at zero. For small angular deviations and small angular rates, we can use the Euler angles to
describe the orientation of the body frame with respect to the
inertial frame. Since the angles and their rates are small, we
can linearize Eqs. (13) and (24) to obtain

Jx ω̇x = Mx,   Jy ω̇y = My,   Jz ω̇z = Mz   (40)

φ̇ = ωx,   θ̇ = ωy,   ψ̇ = ωz   (41)
The attitude motions about the three body axes are thus decoupled, and the control system can independently control the motion about each individual axis. A control law of the form

Mx = −k1 φ − k2 φ̇,   k1 > 0, k2 > 0   (42)

stabilizes the roll motion. When no rate measurement is available, a lead compensator can be used instead; its transfer function is

Mx(s)/φ(s) = −k (s + a)/(s + b),   k > 0   (44)

where a and b are positive numbers with b > a. Analogous control laws

My = −k1 θ − k2 θ̇,   k1 > 0, k2 > 0   (45)

stabilize the motion about the remaining axes.
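The single-axis loop formed by Eqs. (40) and (42) can be sketched as a simple simulation; the inertia and gain values below are illustrative only.

```python
def simulate_pd(J=10.0, k1=4.0, k2=12.0, phi0=0.2, dt=0.001, t_end=30.0):
    """Single-axis linearized roll dynamics J*phi_ddot = Mx under the
    PD law of Eq. (42), Mx = -k1*phi - k2*phi_dot.
    Returns the final roll angle (semi-implicit Euler integration)."""
    phi, phid = phi0, 0.0
    for _ in range(int(t_end / dt)):
        Mx = -k1 * phi - k2 * phid   # Eq. (42)
        phidd = Mx / J               # Eq. (40), single axis
        phid += dt * phidd
        phi += dt * phid
    return phi
```

The closed loop is a damped second-order system with natural frequency sqrt(k1/J) and damping ratio k2/(2*sqrt(k1*J)), so the roll angle decays to zero for any positive gains.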
A control law of the form

M = J ω̇d − k J (ω − ωd),   k > 0   (47)

makes the angular velocity ω track a desired angular velocity ωd. If, for instance, ωd = ωd k̂, the previous control law will generate a pure rotation of the body about its z axis with angular
velocity ωd. A special case of this situation occurs when the
final spin axis of the spacecraft is also required to point along
a specified direction in the inertial frame (i.e., for a spin-stabilized vehicle). The linear control given by Coppola and McClamroch (10)

Mx = −(Jy − Jz) ωz (ωy + ωd) − Jx ωd ωy − k1 ωx − k2 φ
My = −(Jz − Jx) ωd (ωz − ωd) + Jy ωd ωx − k3 ωy − k4 θ   (48)
Mz = −k5 (ωz − ωd)

for some positive scalars ki, will keep the body z axis aligned
with the inertial Z axis (assuming that ωx, ωy, φ, θ are small),
whereas the control law

Mx = −k1 (sin φ cos θ)/(1 + cos φ cos θ) − k2 ωx
My = −k1 (sin θ)/(1 + cos φ cos θ) − k3 ωy   (49)
Mz = −k3 (ωz − ωd)

for some positive ki, can be used to bring the spin axis (assumed to be the body z axis) along the inertial Z axis from
almost every (not necessarily small) initial state (11).

Spacecraft in Orbit. Another important special case of the
previous control laws is the stabilization of a spacecraft in a
circular orbit of radius Rc, such that its z axis points always
towards the Earth. The orbital angular velocity is

Ω = √(μ/Rc³)   (50)

where μ is the gravitational parameter of the Earth. Letting φ, θ, ψ denote small Euler angles of the spacecraft with respect to the local-vertical orbital frame, so that

φ̇ = Ω ψ + ωx,   θ̇ = Ω + ωy,   ψ̇ = −Ω φ + ωz

the equations of motion can be written as (8,10)

Jx ω̇x = (Jy − Jz)(Ω ωz − 3Ω² φ) + Mx
Jy ω̇y = −3Ω² (Jx − Jz) θ + My   (51)
Jz ω̇z = (Jy − Jx) Ω ωx + Mz

The control law

Mx = 4Ω² (Jz − Jy) φ − k1 φ − k2 φ̇ − Ω (Jx + Jz − Jy) ψ̇
My = 3Ω² (Jx − Jz) θ − k3 θ − k4 θ̇   (52)
Mz = Ω² (Jy − Jx) ψ − k5 ψ − k6 ψ̇ + Ω (Jx + Jz − Jy) φ̇

for some positive numbers ki, can be used to make the spacecraft rotate about its y axis such that its z axis points always
toward the Earth.

Optimal Reorientation Maneuvers. Because of limited on-board resources (e.g., power consumption or propellant), a
spacecraft control system may be required to achieve the control objectives in the presence of certain constraints. For instance, it is clearly desirable to design control algorithms that
minimize the fuel consumption during a particular maneuver
(assuming gas jets are used as attitude actuators). Another
example is the reorientation of an optical telescope or antenna
in minimum time.

For small-angle reorientation maneuvers about individual
principal axes, the linear equations in Eqs. (40) and (41) can
be used. Linear quadratic methods provide optimal controls
for a quadratic penalty on the error and the control input.
These methods have been discussed elsewhere (12,13).

Referring to Eq. (13), Windeknecht (14) showed that the
control law that minimizes the quantity

J = ‖H(tf)‖² + ∫₀^tf ‖M(t)‖² dt   (54)

is given by

M = −H/(tf − t + 1)   (55)

Similarly, the control law that minimizes

J = ∫₀^tf ‖H‖² dt + ∫₀^tf ‖M‖² dt   (56)

is given by

M = −λ(t) H   (57)

where

λ(t) = tanh(tf − t)   (58)
For minimum-fuel maneuvers, the relevant performance index is

J = ∫₀^tf ‖M(t)‖ dt   (60)

and the optimal control has the form

M = −M̄ H/‖H‖   (61)

(where M̄ denotes the maximum available value of the torque). For instance, assuming that the maximum available torque about the pitch axis is M̄y, the control
law that will bring the motion about the y body axis to rest
in minimum time switches from −M̄y to +M̄y (or vice versa)
according to whether the initial state (θ, θ̇) is above or below
the switching curve in Fig. 4. The switching occurs when θ
and θ̇ satisfy the switching condition

θ̇² = (2M̄y/Jy) |θ|   (62)
Figure 4. Bang-bang minimum time control of a single-axis attitude
maneuver. If the initial orientation and velocity of the body are below
the switching curve, the control logic will switch from the maximum
to the minimum possible torque. The opposite is true if the initial
condition is above the switching curve.
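The switching logic of Eq. (62) can be sketched as a simulation of the double-integrator pitch dynamics; the torque limit, inertia, and initial state below are illustrative values.

```python
def minimum_time_response(theta0=1.0, thetad0=0.0, Jy=1.0, Mmax=0.5,
                          dt=1e-3, t_end=5.0):
    """Bang-bang pitch maneuver: Jy*theta_ddot = My with My = +/-Mmax,
    switched on the curve of Eq. (62). Returns the final state."""
    theta, thetad = theta0, thetad0
    for _ in range(int(t_end / dt)):
        # s > 0: state above the switching curve -> apply -Mmax, else +Mmax
        s = theta + Jy * thetad * abs(thetad) / (2.0 * Mmax)
        My = -Mmax if s > 0 else Mmax
        thetad += dt * My / Jy
        theta += dt * thetad
    return theta, thetad
```

Starting from rest at theta0 = 1 rad with Mmax/Jy = 0.5 rad/s², the minimum time to the origin is 2*sqrt(theta0*Jy/Mmax) ≈ 2.83 s, after which the discrete-time controller chatters in a small neighborhood of the origin.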
Attitude Dynamics of an Aircraft

[Figure 5: the main control surfaces of an airplane (ailerons, elevator, rudder), the body, stability, and wind x axes, and the rate/moment pairs (p, L), (q, M), and (r, N) about the body x, y, and z axes.]

For aircraft, the components of the angular velocity vector in the body frame are traditionally denoted by the roll, pitch, and yaw rates p, q, and r, and the corresponding aerodynamic moments by L, M, and N. In addition to the body axes, two other reference frames are commonly used. The stability axes are obtained from the body axes by a rotation about the body y axis through the angle of attack α,

[x]   [ cos α  0  sin α ] [x]
[y] = [   0    1    0   ] [y]    (63)
[z]S  [−sin α  0  cos α ] [z]B

and the wind axes are obtained from the stability axes by a rotation through the sideslip angle β,

[x]   [ cos β  sin β  0 ] [x]
[y] = [−sin β  cos β  0 ] [y]    (64)
[z]W  [   0      0    1 ] [z]S

Combining the two rotations,

[x]   [ cos α cos β   sin β   sin α cos β ] [x]
[y] = [−cos α sin β   cos β  −sin α sin β ] [y]    (65)
[z]W  [   −sin α        0        cos α    ] [z]B

The body, wind, and stability axes for positive α and β are
shown in Fig. 5. From Fig. 5 we have immediately that the
angle of attack α and the sideslip angle β satisfy the following
expressions

tan α = w/u,   sin β = v/V_T   (66)

where u, v, w are the body-frame components of the airspeed vector and V_T is its magnitude.

The aerodynamic rolling, pitching, and yawing moments are written in terms of nondimensional coefficients as

L = q̄ S b C_l,   M = q̄ S c̄ C_m,   N = q̄ S b C_n   (67)

where S is the wing reference area, b the wing span, c̄ the mean aerodynamic chord, and

q̄ = ½ ρ V_T²   (68)

the dynamic pressure. The moment coefficients are expanded in terms of the motion variables and the control surface deflections as

L = q̄ S b [ C_lβ β + (b/2V_T)(C_lp p + C_lr r) + C_lδa δa + C_lδr δr ]
M = q̄ S c̄ [ C_m0 + C_mα α + (c̄/2V_T)(C_mq q + C_mα̇ α̇) + C_mδe δe ]   (69), (70)
N = q̄ S b [ C_nβ β + (b/2V_T)(C_np p + C_nr r) + C_nδa δa + C_nδr δr ]

where δe, δa, and δr denote the elevator, aileron, and rudder deflections, respectively.

Assuming that the xz plane is a plane of symmetry of the aircraft (so that Jxy = Jyz = 0), the moment equations take the form

L = Jx ṗ − Jxz ṙ + qr(Jz − Jy) − Jxz pq
M = Jy q̇ + rp(Jx − Jz) + Jxz(p² − r²)   (71)
N = Jz ṙ − Jxz ṗ + pq(Jy − Jx) + Jxz qr
Because of the product of inertia Jxz, Eq. (71) must be solved for the angular accelerations:

ṗ = { Jz L + Jxz N − [Jz(Jz − Jy) + Jxz²] qr + Jxz(Jx − Jy + Jz) pq } / Γ
q̇ = { M − (Jx − Jz) rp − Jxz(p² − r²) } / Jy   (72)
ṙ = { Jxz L + Jx N + [Jx(Jx − Jy) + Jxz²] pq − Jxz(Jx − Jy + Jz) qr } / Γ

where Γ = Jx Jz − Jxz². Once the moments L, M, and N are
known, the angular velocity can be computed by integrating
Eq. (72).
Euler Angles
The orientation of an airplane is given by the three Euler
angles φ, θ, and ψ from Eq. (22), also referred to as roll, pitch,
and yaw, respectively. The kinematic equations of the airplane's rotational motion are thus given by Eq. (24), repeated
below for convenience in terms of p, q, and r:

φ̇ = p + (q sin φ + r cos φ) tan θ
θ̇ = q cos φ − r sin φ   (73)
ψ̇ = (q sin φ + r cos φ) / cos θ
Equations (72) and (73) can be integrated to completely describe the attitude evolution of the aircraft. It should be
pointed out, however, that the aerodynamic forces and moments depend on the altitude and speed of the airplane. The
rotational equations are thus coupled with the translational
(flight path) equations of motion. A complete, six-degree-of-freedom system that includes the translational equations is
required to accurately describe the current position and velocity of the airplane. The complete nonlinear equations can be
decomposed into the longitudinal equations, which describe
the motion in the xz plane, and the lateral equations, which
describe the motion outside the xz plane. The longitudinal
part of the airplane's motion includes, in addition to α and q,
the forward and vertical velocity of the center of mass. The
lateral equations, in addition to β, φ, p, and r, will include the
side velocity of the center of mass. A more complete discussion of the airplane's full set of equations of motion can
be found in, for example, Ref. 26.
Aircraft Actuators
Control of an airplane is achieved by providing an incremental lift force on one or more of the airplane's surfaces. Because
these control surfaces are located at a distance from the center of mass, the incremental lift force generates a moment
about the airplane's center of mass. The magnitude of the moment is proportional to the force and to the distance of the control surface from the center of mass.
The main control actuators used for changing an airplane's
attitude motion are the elevators, the rudder, and the ailerons. Additional configurations may include canards (small
surfaces located ahead of the main wing) or thrust vectoring
devices (for military aircraft). Figure 5 shows the main control surfaces of an airplane.
Elevators. Elevators are relatively small surfaces located
close to the tail of the airplane. Deflecting the elevators produces moments about the pitch axis of the airplane. Elevators
are thus, primarily, pitch-control devices. The transfer function between the elevator deflection δe and the pitch angle θ
is given by

θ(s)/δe(s) = Kθ (s² + 2ζθ ωθ s + ωθ²) / [(s² + 2ζph ωph s + ωph²)(s² + 2ζsp ωsp s + ωsp²)]   (74)

where ζph, ωph and ζsp, ωsp are the damping ratio and natural
frequency of the phugoid and short-period modes, respectively.
Rudders. The rudder is a hinged flap that is part of the
vertical surface located at the tail of the airplane. It is primarily a yaw-control device and is the main directional control device of the airplane. In addition to directional control,
the rudder is used to compensate for unwanted directional
yaw deflections caused by the ailerons when an airplane is
banked to execute a turning maneuver.
Ailerons. Ailerons differ from the previous two control devices, because they incorporate two lifting surfaces. Ailerons
are located at the tips of the main wings of the airplane. Roll
control is achieved by the differential deflection of the ailerons. They modify the lift distribution of the wings (increase
it in one wing and decrease it in the other) so that a moment
is created about the x axis.
Spoilers. Roll moment is also produced by deflecting a wing
spoiler. Wing spoilers are small surfaces located on the upper
wing surface and cause flow separation when deflected. Flow
separation in turn causes a reduction in lift. If only one
spoiler is used at a time, the lift differential between the two
wings will cause a rolling moment. In some aircraft roll control is also produced by tail surfaces moving differentially.
Roll. The rolling (lateral) motion is not, in general, decoupled from the yawing (directional) motion. The transfer functions from δa and δr to φ and β are coupled. The transfer function from aileron deflection to roll angle is given by

φ(s)/δa(s) = Kφ (s² + 2ζφ ωφ s + ωφ²) / [(s + 1/Ts)(s + 1/Tr)(s² + 2ζD ωD s + ωD²)]   (75)

and the transfer function from rudder deflection to roll angle has the same denominator,

φ(s)/δr(s) = Kr (s² + 2ζr ωr s + ωr²) / [(s + 1/Ts)(s + 1/Tr)(s² + 2ζD ωD s + ωD²)]   (76)

where Ts and Tr are the time constants of the spiral and roll subsidence modes and ζD, ωD are the damping ratio and natural frequency of the dutch roll mode. The transfer function from δr to β is more difficult to approximate. Often, the dutch roll approximation found in
McLean (25)

β(s)/δr(s) = Kβ / (s² + 2ζD ωD s + ωD²)   (78)

is good enough.
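The dutch-roll approximation of Eq. (78) is an ordinary damped second-order system, as a quick time-domain simulation shows; the gain, damping ratio, and natural frequency below are illustrative, not from the source.

```python
def dutch_roll_step(K=1.0, zeta=0.2, wn=2.0, dt=1e-3, t_end=30.0):
    """Unit-step response of Eq. (78):
    beta_ddot + 2*zeta*wn*beta_dot + wn^2*beta = K.
    Returns the settled sideslip angle (semi-implicit Euler)."""
    b, bd = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        bdd = K - 2.0 * zeta * wn * bd - wn * wn * b
        bd += dt * bdd
        b += dt * bd
    return b
```

The steady-state value is K/wn², and the lightly damped oscillation at frequency wn is the dutch roll mode that yaw-rate SAS (discussed below) is designed to damp.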
The short period, the roll, and the dutch-roll modes are the
main principal modes associated with the rotational motion
of the aircraft and are much faster than the phugoid and spiral modes, which are primarily associated with changes of the
flight-path (translational motion). The slow phugoid and spiral modes can be controlled adequately by the pilot. Control
systems are required, in general, for controlling or modifying
the rotational modes. In addition, the maneuverability of the
aircraft is primarily determined by the rotational modes.
Stability Augmentation and Aircraft Attitude-Control Systems
An automatic flight control system (AFCS) typically performs
three main tasks: (1) it modifies any unsatisfactory behavior of
the aircraft's natural flying characteristics, (2) it provides relief
from the pilot's workload during normal cruising conditions
or maneuvering, and (3) it performs several specific functions,
such as automatic landing. In addition, an AFCS may perform several secondary operations, such as engine and aircraft component monitoring, flight-path generation, terrain following, and collision avoidance. Here we briefly outline the
fundamental operations of only the first two tasks.
Control systems that are used to increase the damping or
stiffness of the aircraft motion so as to provide artificial stability for an airplane with undesirable flying characteristics
are called stability augmentation systems (SAS). Typical uses
of SAS are in increasing the damping ratio of the short period
motion in pitch (pitch rate SAS), providing damping in the
roll subsidence mode (roll rate SAS), modifying the dutch roll
mode (yaw rate SAS), and increasing the maneuverability of
the aircraft by reducing static stability margins (relaxed
static stability SAS).
The SAS typically uses gyroscopes as sensors to measure
the body-axes angular rates, processes them on-board using a
flight-control computer, and generates the appropriate signals
to the servomechanisms that drive the aerodynamic control
surfaces.
In addition to stability augmentation systems, which are
used to modify the characteristics of the natural modes of the
airplane, attitude-control systems (ACS) are used to perform
more complex tasks. In contrast to the SAS, they use signals
from many sensors and control several of the aircraft's surfaces simultaneously. As a result, attitude control systems are
multivariable control systems and therefore more complex in
their operation than SAS. Common ACS for a typical aircraft
are pitch ACS, roll angle ACS, coordinated-turn control systems, wing levellers, and sideslip suppression systems. A
more in-depth discussion of ACS can be found in McLean (25)
and Stevens and Lewis (23).
The aircraft dynamics change considerably with the flight
conditions, such as speed and altitude. The control design process involves linearization of the nonlinear equations of motion about steady state (trim) conditions. Steady-state aircraft
flight is defined as a condition where all motion (state) variables are constant or zero. That is, linear and angular velocity
are constant (or zero) and all accelerations are zero. Examples
of steady-state flight conditions involving the rotational degrees of freedom include: (1) steady turning flight (φ̇ = θ̇ = 0,
ψ̇ = constant), (2) steady pull-up (φ̇ = ψ̇ = 0, θ̇ = constant), and (3) steady roll
(θ̇ = ψ̇ = 0, φ̇ = constant).
A control system designed for a certain steady-state condition may perform very poorly at another condition or even
lead to instability. A control system must therefore be
adapted during the flight to accommodate the wide variations
in aircraft dynamics occurring over the flight envelope. Typically, several controllers are designed for different conditions
and then gain-scheduled during the flight. Gain scheduling
amounts to switching between the different controllers or adjusting their parameters (i.e., gains) as the airplane's flight
conditions change. Dynamic pressure is commonly used to
schedule the controllers because it captures changes of both
altitude and speed. Other parameters, such as angle of attack
are used as well. Care must be taken when switching controllers during gain scheduling to avoid unacceptable transients.
Extensive simulations are required to ensure that the gain-scheduled control system performs satisfactorily. The U.S.
government periodically releases a series of publications (e.g.,
Ref. 27) with guidelines and specifications for acceptable performance of flight-control systems.
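Gain scheduling on dynamic pressure can be sketched as a simple interpolation between controller design points; the schedule values and gain pairs below are hypothetical placeholders, not design data from the source.

```python
import bisect

# Hypothetical schedule: PD gains (k1, k2) designed at several
# dynamic-pressure trim points (in Pa). Values are illustrative only.
SCHEDULE = [
    (5_000.0,  (2.0, 0.9)),
    (15_000.0, (1.4, 0.6)),
    (30_000.0, (0.9, 0.4)),
]

def scheduled_gains(qbar):
    """Linearly interpolate (k1, k2) between the design points,
    clamping outside the scheduled range."""
    pts = [p for p, _ in SCHEDULE]
    if qbar <= pts[0]:
        return SCHEDULE[0][1]
    if qbar >= pts[-1]:
        return SCHEDULE[-1][1]
    i = bisect.bisect_right(pts, qbar)
    (q0, (a1, a2)), (q1, (b1, b2)) = SCHEDULE[i - 1], SCHEDULE[i]
    w = (qbar - q0) / (q1 - q0)
    return (a1 + w * (b1 - a1), a2 + w * (b2 - a2))
```

Smooth interpolation of this kind is one way to avoid the switching transients mentioned above; blending controller outputs during a transition is another.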
The a (or x) direction is the approach direction (i.e., this is the direction along which the gripper typically approaches an object). The s (or y) direction is the sliding direction (i.e., the direction along which the fingers of the gripper
slide to close or open). The n (or z) direction is normal to the
plane defined by the a and s directions. The (a, s, n) frame
attached to a gripper is shown in Figure 6.
The roll, pitch, and yaw angles completely describe the orientation of the end effector. They are given by

φ = f1(θ1, . . ., θn)
θ = f2(θ1, . . ., θn)   (79)
ψ = f3(θ1, . . ., θn)

where the functions f1, f2, and f3 are determined by the specific geometry of the manipulator and θ1, . . ., θn are the joint angles. Differentiating the previous equation with respect to time, one obtains

φ̇ = (∂f1/∂θ1) θ̇1 + · · · + (∂f1/∂θn) θ̇n
θ̇ = (∂f2/∂θ1) θ̇1 + · · · + (∂f2/∂θn) θ̇n   (80)
ψ̇ = (∂f3/∂θ1) θ̇1 + · · · + (∂f3/∂θn) θ̇n

or, compactly,

[φ̇, θ̇, ψ̇]ᵀ = J(θ) θ̇   (81)

where J(θ) is a 3 × n matrix and θ = (θ1, . . ., θn). The matrix J(θ) is often called the kinematics Jacobian.

The torques generated at the joints will specify a commanded time history for θi(t) and θ̇i(t). Equations (79) and
(81) can be used to find the corresponding angular position
and velocity of the end effector. This is the so-called forward
kinematics problem.

As an example, consider the general equation of a robotic
manipulator (29)

M(θ) θ̈ + C(θ, θ̇) θ̇ + K(θ) = τ   (82)

where M(θ) is the inertia matrix of the manipulator, C(θ, θ̇) θ̇ contains the Coriolis and centrifugal torques, K(θ) the gravity torques, and τ the vector of applied joint torques. The computed-torque control law

τ = M(θ) v + C(θ, θ̇) θ̇ + K(θ)   (85)

with

v = θ̈d − 2λ(θ̇ − θ̇d) − λ²(θ − θd),   λ > 0   (83), (84)

forces the joint angles θ(t) to track a desired reference trajectory θd(t).

Figure 6. Typical robotic manipulator consisting only of revolute
joints. The attitude of the gripper is given by the orientation of the
(a, s, n) body frame. The geometry of the manipulator determines the
orientation of this frame with respect to the joint angles θ1, θ2,
and θ3.
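The forward-kinematics relations of Eqs. (79)-(81) can be sketched with a hypothetical orientation map f1, f2, f3 and a finite-difference Jacobian; the geometry below is invented for illustration and does not correspond to any particular manipulator.

```python
import numpy as np

def orientation(theta):
    """Hypothetical f1..f3 for a 3-joint arm: roll, pitch, yaw of the
    gripper. (Illustrative geometry only; a real arm's f_i follow from
    its link geometry.)"""
    t1, t2, t3 = theta
    return np.array([t1 + 0.5 * np.sin(t2), t2 + t3, np.cos(t1) * t3])

def jacobian(f, theta, h=1e-6):
    """Numerical 3 x n kinematics Jacobian of Eq. (81), computed by
    central differences on the orientation map f."""
    theta = np.asarray(theta, dtype=float)
    n = theta.size
    J = np.zeros((3, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(theta + e) - f(theta - e)) / (2 * h)
    return J
```

Given joint rates, the end-effector angular rates then follow from Eq. (81) as J(theta) @ theta_dot.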
Stabilizing control laws for spacecraft with fewer than three independent control torques have also been proposed, both for the angular-velocity equations (e.g., 31) and for the complete velocity/orientation equations (e.g., 32,33).
Controlling flexible spacecraft also presents great challenges. Control laws using on-off thrusters, for example, may
excite the flexible modes of lightweight space structures, such
as trusses or antennas. Modern control theory based on state-space models has been used to control these systems with
great success. An in-depth discussion on the effect of flexibility on spacecraft reorientation maneuvers can be found in the
literature (12,34,35).
Research into fail-safe control systems for aircraft has also
been an active area. The main emphasis has been placed on
the design of reconfigurable flight-control systems and, more
specifically, attitude-control systems. The idea is to construct
intelligent control systems with high levels of autonomy that
can reprogram themselves in case of an unexpected failure, so
as to fly and land the airplane safely. The use of multivariable
modern control theory (23) along with the use of redundant
sensors and actuators and smart materials promise to change
the current method of designing and implementing control
systems for aircraft.
Traditionally, the airplane control surfaces are connected
directly to the cockpit through mechanical and hydraulic connections. A pilot command corresponds to a proportional surface deflection. In many recent military and civilian aircraft,
the commands from the pilot are sent electronically to the
control computer instead. The computer generates the appropriate control deflection signals based on its preprogrammed
control law. This method is called fly-by-wire, since the pilot
does not have direct command of the control surfaces. The
on-board control computer is responsible for interpreting and
executing the pilot commands. Redundant computers or
backup mechanical connections are used to guard against possible computer failures. The term fly-by-light is also used
when the pilot and control commands are sent using fiberoptic connections.
BIBLIOGRAPHY

1. J. R. Wertz, Spacecraft Attitude Determination and Control, Dordrecht: D. Reidel, 1980.
2. W. Wiesel, Spaceflight Dynamics, New York: McGraw-Hill, 1989.
3. T. R. Kane, P. W. Likins, and P. A. Levinson, Spacecraft Dynamics, New York: McGraw-Hill, 1983.
4. M. D. Shuster, A survey of attitude representations, J. Astronaut. Sci., 41 (4): 439-517, 1993.
5. J. Stuelpnagel, On the parameterization of the three-dimensional rotation group, SIAM Rev., 6 (4): 422-430, 1964.
6. D. T. Greenwood, Principles of Dynamics, Englewood Cliffs, NJ: Prentice-Hall, 1988.
7. E. T. Whittaker, Analytical Dynamics of Particles and Rigid Bodies, New York: Dover, 1944.
8. A. E. Bryson, Control of Spacecraft and Aircraft, Princeton, NJ: Princeton University Press, 1994.
9. R. E. Mortensen, A globally stable linear attitude regulator, Int. J. Contr., 8 (3): 297-302, 1968.
10. V. Coppola and H. N. McClamroch, Spacecraft attitude control, in W. S. Levine (ed.), The Control Handbook, Boca Raton, FL: CRC Press, 1996.
11. P. Tsiotras and J. M. Longuski, Spin-axis stabilization of symmetric spacecraft with two control torques, Syst. Contr. Lett., 23 (6): 395-402, 1994.
12. J. L. Junkins and J. Turner, Optimal Spacecraft Rotational Maneuvers, New York: Elsevier, 1986.
13. A. E. Bryson and Y.-C. Ho, Applied Optimal Control: Optimization, Estimation, and Control, Washington, DC: Hemisphere, 1975.
14. T. G. Windeknecht, Optimal stabilization of rigid body attitude, J. Math. Anal. Appl., 6 (2): 325-335, 1963.
15. K. S. P. Kumar, On the optimum stabilization of a satellite, IEEE Trans. Aerosp. Electron. Syst., 1 (2): 82-83, 1965.
16. M. Athans, P. L. Falb, and R. T. Lacoss, Time-, fuel-, and energy-optimal control of nonlinear norm-invariant systems, IRE Trans. Automat. Contr., 8: 196-202, 1963.
17. J. L. Junkins, C. K. Carrington, and C. E. Williams, Time-optimal magnetic attitude maneuvers, J. Guid. Contr. Dynam., 4 (4): 363-368, 1981.
18. J. R. Etter, A solution of the time-optimal Euler rotation problem, in Proceedings of the AIAA Guidance, Navigation, and Control Conference, Vol. 2, Washington, DC: AIAA, 1989, pp. 1441-1449.
19. E. B. Lee and L. Markus, Foundations of Optimal Control Theory, Malabar, FL: Krieger, 1986.
20. K. D. Bilimoria and B. Wie, Time-optimal reorientation of a rigid axisymmetric spacecraft, in Proceedings of the AIAA Guidance, Navigation, and Control Conference, Washington, DC: AIAA, 1991, Paper 91-2644-CP.
21. S. L. Scrivener and R. C. Thomson, Survey of time-optimal attitude maneuvers, J. Guid. Contr. Dynam., 17 (2): 225-233, 1994.
22. C. R. Nelson, Flight Stability and Automatic Control, New York: McGraw-Hill, 1989.
23. B. L. Stevens and F. L. Lewis, Aircraft Control and Simulation, New York: Wiley, 1992.
24. M. Pachter and C. H. Houpis, Flight control of piloted aircraft, in W. S. Levine (ed.), The Control Handbook, Boca Raton, FL: CRC Press, 1996.
25. D. McLean, Automatic Flight Control Systems, New York: Prentice Hall, 1990.
26. B. Etkin, Dynamics of Flight: Stability and Control, New York: Wiley, 1982.
27. U.S. Air Force, MIL-STD-1797A: Flying Qualities of Piloted Aircraft, Washington, DC: Government Printing Office, 1991.
28. M. W. Spong and M. Vidyasagar, Robot Dynamics and Control, New York: Wiley, 1989.
29. J. J. E. Slotine and W. Li, Applied Nonlinear Control, Englewood Cliffs, NJ: Prentice Hall, 1991.
30. P. E. Crouch, Spacecraft attitude control and stabilization: applications of geometric control theory to rigid body models, IEEE Trans. Automat. Contr., 29 (4): 321-331, 1984.
31. D. Aeyels, Stabilization by smooth feedback of the angular velocity of a rigid body, Syst. Contr. Lett., 6 (1): 59-63, 1985.
32. H. Krishnan, M. Reyhanoglu, and H. McClamroch, Attitude stabilization of a rigid spacecraft using two control torques: A nonlinear control approach based on the spacecraft attitude dynamics, Automatica, 30 (6): 1023-1027, 1994.
33. P. Tsiotras, M. Corless, and M. Longuski, A novel approach for the attitude control of an axisymmetric spacecraft subject to two control torques, Automatica, 31 (8): 1099-1112, 1995.
34. D. C. Hyland, J. L. Junkins, and R. W. Longman, Active control technology for large space structures, J. Guid. Contr. Dynam., 16 (5): 801-821, 1993.
35. S. A. Singh, Robust nonlinear attitude control of flexible spacecraft, IEEE Trans. Aerosp. Electron. Syst., 23 (2): 380-387, 1987.
60
AUTHORING SYSTEMS
Reading List
T. R. Kane and D. A. Levinson, Theory and Applications. New York:
McGraw-Hill, 1985. The basic equations for rigid-body dynamics.
Special issue on attitude representations, J. Astronaut. Sci., 41 (4):
1993. An exhaustive presentation of different attitude representations.
M. L. Curtis, Matrix Groups. New York: Springer-Verlag, 1979. A
mathematical treatment of attitude motion, along with connections with group theory and Lie algebraic concepts.
F. P. J. Rimrott, Introductory Attitude Dynamics. New York: SpringerVerlag, 1989. Complete treatment of the dynamics of spacecraft
with momentum wheels.
P. C. Hughes, Spacecraft Attitude Dynamics. New York: Wiley, 1986.
Classic reference. Complete analysis of stability problems for single and dual-spin spacecraft.
D. L. Mingori, Effects of energy dissipation on the attitude stability
of dual-spin satellites, AIAA J. 7: 2027, 1969. More on the dynamics of dual spin.
R. J. Kinsey, D. L. Mingori, and R. H. Rand, Nonlinear control of
dual-spin spacecraft during despin through precession phase lock,
J. Guid., Contr., Dynam., 19 (1): 6067, 1996.
J. T. Wen and K. Kreutz-Delgado, The attitude control problem, IEEE
Trans. Auto. Contr. 36 (10): 11481162, 1991. Theoretical analysis
of attitude control.
D. McRuer, I. Ashkenas, and D. Graham, Aircraft Dynamics and
Automatic Control. Princeton, NJ: Princeton University Press,
1973.
J. Roskam, Flight Dynamics of Rigid and Elastic Airplanes. Kansas:
University of Kansas Press, 1972.
Special issue on aircraft flight control, Int. J. Contr., 59 (1): 1994.
Recent advances in aircraft control.
R. M. Murray, Z. Li, and S. S. Sastry, A Mathematical Introduction
to Robotic Manipulation. Boca Raton, FL: CRC Press, 1994. Mathematical treatment of attitude dynamics, rotation matrices.
T. I. Fossen, Guidance and Control of Ocean Vehicles. New York: Wiley, 1994. Attitude-control applications to marine vehicles.
PANAGIOTIS TSIOTRAS
University of Virginia
ELECTRONIC WARFARE
Electronic warfare (EW) is the systems discipline that exploits an adversary's use of the electromagnetic spectrum to
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright # 1999 John Wiley & Sons, Inc.
[Figure: electronic warfare shown in relation to radar, sonar, communications, and weapons-guidance systems.]
tion, and EA and EP can use some of the same sensing and
CM equipment for distinct operational objectives.
This article includes a description of the EW time line and
the various phases of conflict. Also provided is a summary
description of the signal environment in which EW systems
operate. Those interested in more detailed descriptions of the
EM communications, radar, and navigation technology
against whose signals EW systems operate are referred to the
appropriate sections of this encyclopedia. A discussion of EW
functional areas (ES, EP, and EA) provides a functional framework for supporting EW technologies.
The EW time-line stage in a specific engagement depends on the deployment of forces and the perceived imminence of hostile engagement. Note that the technologies used
in the various stages of the engagement are dynamic, and EW
systems and weapon systems technologies evolve to overcome
susceptibilities. The boundaries and definitions of EW timeline stages are redefined with each new advance in weapon
and EW technology.
Electronic Support
Electronic support provides operational intelligence that is related to radiated signals in the battle group or theater envi-
ronment. Surveillance includes monitoring of both combatants and commercial transports. Control of contraband and
critical materials is an EW surveillance mission that provides
critical intelligence data to the area commander. Surveillance
of noncooperative combatant forces provides deployment intelligence in the area of observation. Early threat warning is extracted from surveillance data by recognizing hostile force weapons-related transmissions.
Within the lethal range of hostile force weapons, battle
space surveillance updates are required rapidly. Deployment
and operational modes of hostile forces are monitored closely
to determine imminence of hostile activity. In some environments, potentially hostile forces remain within weapons lethal range and a high level of vigilance is necessary to maintain security.
Air Defense
Air defense is used to maintain control of the battle group
airspace and defend against threat aircraft and missiles. Battle group surveillance, implemented by the combination of
EW, infrared/electro-optic (IR/EO), and radar sensors, provides environmental data required for air defense. Electronic
combat techniques and weapons are used to counter an airborne threat.
Air defense is an extensive, complex, electronic combat interaction between hostile forces. EW assets are a key tool of
the battle force commander and of the individual elements
within the command. These assets provide information for developing tactical intelligence in all phases of the engagement.
The outcome of the air battle is by no means established by
the quantity of EW assets possessed by each of the opposing
forces, but depends greatly on how the EW assets are used
in conjunction with other sensor systems, weapons, and air
defense tactics.
Aircraft, ships, and/or battlefield installations participate in
air defense. Own force aircraft operating at altitude can engage a threat force at long line-of-sight ranges. Aircraft, together with ship and battlefield installations, provide coordinated air defense as the hostile force approaches own force
locations. The EW objective in the early air defense or outer
air battle is to prevent threat force detection and location of
own force. Electronic combat actions that prevent or delay
own force detection provide a distinct advantage by allowing
additional time to develop tactics to counter the threat force.
In addition, the threat force battle time line and interplatform coordination are perturbed. Fragmentation or dissolution of the hostile force attack can occur if own force electronic
combat is effective in the outer battle.
As the hostile force overcomes the outer battle electronic
attack and approaches the own force within weapons range,
air defense assumes the role of denying targeting information
to the hostile sensors. The EW objective at this stage of the
engagement is to prevent hostile force weapons launch by denying targeting data to their sensors. Electronic combat surveillance, warning, and countermeasure assets are used for
countertargeting. Surveillance sensors assess hostile force deployment and provide information about the adversarial tactics being used. Warning sensors indicate the status of threat
sensors as they attempt to acquire targeting data for weapons
systems handoff. Countermeasure assets, including jamming,
spoofing, and decoying, continue to provide a virtual environment.

Countertargeting

Countertargeting (CTAR) is a subset of radar electronic countermeasures (ECM) used in electronic attack. CTAR provides
specially modulated radio-frequency (RF) signal transmissions to counter hostile force long-range surveillance or targeting radar. The transmission modulation can be amplitude-modulated (AM) or frequency-modulated (FM) noise, or combinations of these, and it can be pulsed or continuous-wave. CTAR transmission is used both to disrupt and interfere with the threat radar operation, thereby preventing it
from correctly locating and identifying own force target(s).
Countertargeting success criteria include mission completion prior to threat force interdiction or weapon launch. Realistically, the results of a CTAR electronic attack against a
hostile force are probabilistic, in that some opposing forces at
some time during the battle time line succeed in launching
missiles. CTAR can delay and reduce the coordination of hostile missile firings and, consequently, reduce the number of
missiles fired and the attrition of personnel, ships, and aircraft.
Terminal Defense
Terminal defense against electronically controlled missiles
and guns is the final phase of the EW battle time line. Weapons are launched in the terminal phase of hostile force engagement, and EP and EA capability is brought to bear on the
weapons and their electromagnetic (EM) guidance and control
signals. Onboard jamming and false-target radiation that is
effectively used for countertargeting is less effective for terminal defense. Jamming or false-target radiation makes the target platform vulnerable to missiles with home-on-jam capability. Home on jam is an electronic counter countermeasure
that exploits the target countermeasures radiation to steer
the missile to the target. Consequently, off board countermeasures, or decoys, are used to lure the missile away from the
high-value target.
THE ELECTRONIC WARFARE ENVIRONMENT
Threat Systems
Electronic warfare interacts with an adversary's EM systems
for signal exploitation and potentially for electronic attack.
Threat systems of EW interest include radar, communications, and weapons control. Some of the threat systems exploited by EW are briefly described in the following.
Communications. Communications systems provide information exchange for command and control to coordinate between surveillance sites and between combat units. Communications networks range from basic field radio networks to
long-distance, wide-area systems and point-to-point, high-data-rate installations. Communications systems cover the
spectrum from very low frequency (5 Hz) to the frequencies of
visible light, and they can be either free-space transmissions
or confined to a transmission line. Free-space transmission
links may be line of sight or cover longer distances by reflecting from the ionosphere, atmospheric layers, or troposcatter, or by relaying via satellite.
Command and control communication links, using HF, direct microwave, and satellite relay, disseminate voice and digital data transmissions to land forces, air forces, and ships.
Land combat units use ultrahigh frequency (UHF) (300 MHz
to 3 GHz), very high frequency (VHF) (30 MHz to 300 MHz),
land lines, and cellular phones over shorter distances mainly
for voice transmissions. Surveillance activities and weapons
sites may exchange data via voice or digital data link over a
transmission path appropriate for the link span. Such links
are used to transmit surveillance radar reports to an operations center or directly to a SAM battery. Communication-link data rates depend on link bandwidth, modulation technique, and signal-to-noise ratio. Individual transmission-link
throughput rates are in the range of hundreds of megabytes
per second. Computer technology has enabled increased communication-link capacity for handling and processing data.
The high data rates attainable permit transmission from airborne observers and between precision weapons and launch
platforms.
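The stated dependence of link data rate on bandwidth, modulation, and signal-to-noise ratio is bounded by the Shannon capacity. A minimal sketch, with purely illustrative numbers:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Upper bound on error-free data rate over a noisy channel."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Illustrative: a 10 MHz link at 20 dB signal-to-noise ratio
snr = 10 ** (20 / 10)                       # 20 dB -> linear ratio of 100
capacity = shannon_capacity_bps(10e6, snr)
print(f"capacity bound ~ {capacity / 1e6:.1f} Mb/s")
```

Real link throughput falls below this bound by a margin set by the modulation and coding actually used.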
Communications in hostile environments are transmitted
via protected cable between fixed sites, thus providing protection from physical damage, security from intercept, and immunity from jamming. Mobile communications require free-space transmissions that are susceptible to intercept and jamming. Communications counter-countermeasures, complex
modulation, encryption, and spatial radiation constraints are
used to mitigate the effects of EA. The use of modulation techniques increases privacy, reduces interference, improves reception, and reduces the probability of detection. Spread-spectrum communication systems that use four categories of
signal modulation (direct sequence-modulated, frequency-hopping, intrapulse FM [chirp], and time-hopping) provide
some level of signal protection from detection, demodulation,
and interference. However, this is at the expense of increased bandwidth.
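Of the four categories listed, direct-sequence spreading is the easiest to illustrate: each data bit is multiplied by a faster chip sequence, expanding bandwidth and lowering detectability. A toy sketch (the 7-chip code is hypothetical; real systems use long pseudonoise sequences):

```python
import numpy as np

# Hypothetical 7-chip spreading code (+/-1 values).
CODE = np.array([1, -1, 1, 1, -1, -1, 1])

def spread(bits):
    """Multiply each data bit (+/-1) by the chip sequence (7x bandwidth)."""
    return np.concatenate([b * CODE for b in bits])

def despread(chips):
    """Correlate successive chip groups against the code to recover bits."""
    n = len(CODE)
    return [int(np.sign(chips[i:i + n] @ CODE)) for i in range(0, len(chips), n)]

bits = [1, -1, 1]
tx = spread(bits)
assert despread(tx) == bits   # a receiver holding the code recovers the data
```

A receiver without the code sees only a noise-like chip stream, which is the source of the detection and demodulation protection described above.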
Passive Weapons Sensors. Electro-optical and infrared (EO/
IR) systems sense spectral energy that is radiated by an object or reflected from an object from a source such as the sun,
moon, or stars. The electro-optical spectral regions are categorized according to atmospheric propagative characteristics or
Radar Function        Frequency Range        PRF Range
GCI                   30 MHz to 3.0 GHz      100 pps to 500 pps
IC                    3.0 GHz to 10.0 GHz    1000 pps to 3000 pps
Surveillance          30 MHz to 3.0 GHz      100 pps to 500 pps
TA                    3.0 GHz to 8.0 GHz     1000 pps to 2000 pps
TT, AA                6.0 GHz to 10.0 GHz    2000 pps to 4000 pps
Space Surveillance    30 MHz to 1.0 GHz
[Figure 3: seeker scan concepts. Nonimaging reticles: quadrant fixed reticle, spin-scan reticle, conscan, and nutation. Pseudoimaging: rosette and transverse-line scan. Full imaging: scanned linear detector arrays and 2-D arrays.]
a four-element square array. Tracking is achieved by balancing the signal on all four detectors. In spin scan, a spinning
reticle provides phase and amplitude information with respect to a fixed reference. With conscan, the target image is
nutated by using a scanning mirror or optical wedge imaged
onto a fixed reticle or pattern of detectors. The nutated target image generates a modulated frequency proportional to
the angular and radial offset from the center. In the transverse-line scan approach, a rotating or reciprocating mirror
at a depressed elevation angle generates a scan line transverse to the missile axis, and the forward motion of the missile creates the orthogonal axis of the search pattern. With
the rosette scan, a petal pattern is scanned over a small instantaneous field of view (IFOV) by two counterrotating optical elements.
Rosette-scan tracking is accomplished by balancing the signal output from all petals with the target present in the central apex of the rosette. The small IFOV of the transverse-line scan and rosette scan provides high spatial resolution and the ability to resolve multiple sources within the scanned field of view. Focal-plane arrays, scanning linear arrays, or two-dimensional arrays of detectors in the image plane provide high-resolution pictures of the target space. Many image-processing algorithms are available to classify targets and establish track points. Figure 3 illustrates the basic features of
common seekers.
Passive electro-optic sensors are desirable targeting and
weapons guidance systems because they radiate no energy to
warn the target of an impending attack. These sensor systems
are vulnerable to decoys with thermal signatures similar to
true targets and to high-intensity sources that can saturate
the electro-optic sensor detector or cause physical damage.
ELECTRONIC WARFARE FUNCTIONAL AREAS
Threat systems use the EM spectrum extensively. This section discusses functional aspects of EW. The relationships
that govern their application in EW systems are described in the
following section. These functional areas are electronic support (ES), electronic protection (EP), and electronic attack
(EA). Electronic attack uses countertargeting (CTAR), jamming, false-target generation, and decoys to defeat the threat
sensors. Electronic protection uses electronic support and
electronic attack for own-platform self-protection.
Electronic Support
Electronic support provides surveillance and warning information to the EW system. ES is a passive, nonradiating, EW
system function that provides a fast accurate assessment of
the EM radiating environment. ES is the aspect of EW that
involves techniques to search for, intercept, locate, record,
and analyze radiated energy for exploitation in support of military operations. Electronic support provides EW information
for use in EA and EP and in tactical planning. ES directly
provides threat identification/detection and early warning. It
also provides data for electronic countermeasures (ECM),
electronic counter-countermeasures (ECCM), threat avoidance, target acquisition, and homing.
Electronic support provides timely EM environment information for the EW system. The spatial and spectral environment over which ES operates may span a hemispherical spatial segment and a spectrum of tens of gigahertz. In tactical
EW systems, signals in the environment are analyzed and reports of environment activity are provided on the order of a
second after threat signal reception.
Electronic Attack
As an EW function, EA provides an overt active response capability against enemy combat systems with the intent of degrading, deceiving, neutralizing, or otherwise rendering them
ineffective or inoperative. EA responds to threat systems to
protect multiple platform or battle group units. EA includes
measures and countermeasures directed against electronic
and electro-optical systems by using the electromagnetic spectrum (radio, microwave, infrared, visual, and ultraviolet fre-
quencies). EA technical functions include radio and radar signal jamming, false target generation, and the use of decoys
for threat system confusion and distraction.
Electronic attack is reactive to environment threats. To
function effectively, therefore, the EA system requires threat
information from the environment, including threat classification, bearing and, if possible, range. These functions are
performed by the ES system or by other surveillance systems
such as radar or infrared search and track (IRST). Effective
EA response selection requires knowledge of the threat class
and operating mode. Threat signal data are derived from
measuring signal parameters (frequency, scan type, scan
rates, pulse-repetition frequency, or continuous-wave radiation characteristics). Absence of radiation may indicate that
the threat uses a passive RF or an electro-optical sensor. The
detected threat electronic parameters are compared to an extensive emitter database. The EW database, derived from intelligence sources, is used to identify the threat and correlate
the threat and operating mode with effective EA techniques.
Operational threat exploitation is often impeded by intelligence gaps and/or threat use of parameters reserved for
wartime.
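The comparison of measured signal parameters against an emitter database can be sketched as a tolerance match; the emitter names, parameter values, and tolerances below are entirely hypothetical:

```python
# Hypothetical emitter library entries: nominal frequency (GHz) and PRF (pps).
EMITTER_DB = {
    "SEARCH-A": {"freq_ghz": 1.3, "prf_pps": 300},
    "TRACK-B": {"freq_ghz": 9.4, "prf_pps": 3000},
}

def classify(freq_ghz, prf_pps, freq_tol=0.2, prf_tol=200):
    """Return emitter names whose stored parameters match the measured
    values within tolerance; an empty list reflects an intelligence gap
    or a reserved wartime mode."""
    return [
        name for name, p in EMITTER_DB.items()
        if abs(p["freq_ghz"] - freq_ghz) <= freq_tol
        and abs(p["prf_pps"] - prf_pps) <= prf_tol
    ]

print(classify(9.3, 2900))   # ['TRACK-B']
print(classify(5.0, 1000))   # [] -- no library match
```

Fielded systems add many more descriptors (scan type, pulse width, intrapulse modulation) and rank candidate matches rather than accepting the first hit.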
Nondestructive Electronic Attack. Nondestructive EA produces electromagnetic signals at a predetermined radio, infrared, visual, or ultraviolet frequency with characteristics that
temporarily interfere with the threats receiving system, that
is, power level, frequency, and polarization. EA degrades or
overcomes threat system operation by overpowering the target signal at the threat sensor. Dazzling is laser or high-power lamp EO/IR jamming. Dazzling saturates the detectors
or focal-plane arrays of electro-optical (infrared, visual, ultraviolet) guided missiles and target-tracking systems. Deceptive
EA presents a confusing signal to the threat sensor that degrades its performance to the point where it is no longer effective. Power levels used for deception are less than those required for jamming because deception does not require threat
sensor saturation.
Destructive Electronic Attack. Destructive EA physically
damages or destroys the threat electronic system. Specially
designed missiles such as the HARM missile, shown being
released from an A-6 aircraft in Fig. 4, are equipped with radar-homing seekers that attack the threat radar antenna and
nearby electronic equipment within the blast radius of the
missile warhead. More recently, similar seekers have been
fitted to loitering remotely piloted vehicles for a similar purpose. Advances in high-power microwave and laser technology
have made directed energy more practical. At very high power
levels, microwave energy destroys the components in a missile seeker or threat radar, rendering them inoperative. Highpower lasers also physically damage both RF and electro-optical threat systems.
Electronic Protection
Electronic protection provides EW protection for the host platform. Key environment surveillance and threat-warning information is provided by the ES system function (as it is for
EA). EP responds to threats in the environment with information for evasive action and with the countermeasure responses described previously. EP is primarily directed against
Figure 4. HARM missile (shown after separation from an EA-6B aircraft) is an EW weapon for physically destroying the source of hostile radiation.
the terminal threat targeted on the host platform, and preferred EP techniques use decoys that are less susceptible to
the home-on-jam weapon mode.
ELECTRONIC WARFARE TECHNICAL AREAS
Technical areas that support the ES, EA, and EP functional
EW systems areas are discussed in this section. All aspects of
EW are addressed by modeling and simulation because this
is the most practical means for functional evaluation. System
architectural analyses address the formulation of efficient EW
system configurations to provide the operational functions required within the constraints of available equipment, techniques, and technology. Technical areas that address ES primarily are signal detection, measurement, and processing
issues that deal with environment surveillance and warning.
Technical areas associated with EA and EP include CTAR
jamming and false-target generation, EO/IR CM, and decoys.
Also included in these technical area discussions are the technology challenges facing EW technologies for future capability.
Modeling and Simulation for Electronic Warfare
Electronic warfare uses modeling and simulation extensively
in three areas of investigation: research into new hardware;
threat domination/exploitation; and tactics development. The
effectiveness of an EW architecture or equipment suite is assessed by using a computer model and parametric studies run
against the model. Estimates of a threat systems capabilities
are incorporated into the model as environment sources because acquiring foreign hardware and measuring its performance is difficult. Environment signal models stimulate the
EW system model. The EA effectiveness modeled against the
threat is measured, and tactics are developed to further reduce threat system efficiency.
Modeling and simulation (M&S) combine detailed antiship
missile models with ship models, antiair missile models with
aircraft models, electromagnetic propagation models, and
chaff RF decoy models. (Chaff RF decoys are described later).
Chaff effectiveness evaluation considers the spatial relationship between the missile seeker and the ship while accounting for radar clutter and multipath returns. Signals at the
missile are processed through the seeker receiver and missile
guidance and tracking logic. A chaff cloud(s) injected into the
simulation provides a false radar target signal at the missile
seeker. By varying the amount of chaff and/or the chaff round
spatial relationship with respect to both the defended ship
and the threat missile, chaff effectiveness and tactics can be
evaluated. However, the accuracy of the M&S results depends
on the accuracy of the models used. An accurate missile sensor and control model is necessary to determine the effects of
the complex signal returns from the target ship and the chaff
on the missile controls and resultant flight path. In a simulated engagement, detailed missile functions are required to
provide an accurate assessment of chaff effectiveness. These
functions include monopulse antenna processing, range and
angle tracking, missile guidance, and aerodynamics. Multiple
threat seeker modes, such as acquisition, reacquisition, track,
home-on-jam (HOJ), and simulated coherent combinations of
signal segments are also required in the model.
Target ship, aircraft, and chaff radar cross section (RCS)
must be accurately modeled. Typically, a multireflector target
simulation is used to represent the RCS signature. Ideally, a
model of thousands of scatterers would provide greater accuracy. However, careful selection of several hundred scatterers
is adequate.
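A multireflector signature of this kind can be sketched as a coherent sum of point-scatterer returns; the scatterer placement, amplitudes, and radar frequency below are illustrative:

```python
import numpy as np

def composite_return(ranges_m, amplitudes, freq_hz=9.4e9):
    """Coherent sum of point-scatterer echoes at a single radar frequency;
    each scatterer's phase is set by its two-way path length."""
    wavelength = 3.0e8 / freq_hz
    phases = -4.0 * np.pi * np.asarray(ranges_m) / wavelength
    return np.sum(np.asarray(amplitudes) * np.exp(1j * phases))

rng = np.random.default_rng(0)
ranges = 5000.0 + rng.uniform(0.0, 150.0, 300)   # 300 scatterers along a 150 m hull
amps = rng.uniform(0.1, 1.0, 300)
echo = composite_return(ranges, amps)
# The composite amplitude fluctuates strongly as aspect and range change,
# which is why several hundred well-chosen scatterers capture the behavior.
```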
The accuracy of the missile and target interaction depends
on the propagative environment model including multipath.
Typically, a ray-tracing algorithm models the propagation of
RF energy. Useful models rely on a stochastic representation
of clutter as a function of wind speed, grazing angle, frequency, polarization, and ducting. Modeling of an ocean environment can be extended to include reflection from wave segments. Models are verified by using measured field test data.
Electronic Warfare System Architectures. The EW system architecture ties system functional elements into an efficient
configuration optimized to the operational mission. Figure 5
shows a typical EW system architecture. The system performs
signal acquisition and parameter measurement, direction
finding, countermeasure generation, and decoy deployment.
The system central processing unit (CPU) provides sensor
and countermeasure coordination and EW system interface
with other onboard systems.
Fusing the measurements of EW sensors and processors is
a complex technological challenge. This information includes
radar, communications, EO/IR, direction finding, and signal
analysis. Data fusion within the EW system requires algorithmic development and significant enhancement in computational throughput. The EW system includes antenna(s), receiver(s), and processor(s) elements that provide data on
signals in the environment. System sensors detect and measure threat signal characteristics. Multiple sensor subsystems
measure the characteristics of the signal. For example, a signal-acquisition sensor detects the presence of a signal and measures
the envelope characteristics (frequency, time of arrival, and
signal duration). Another sensor that may include multiple
antennas and receivers provides signal bearing-angle data.
Separate subsystem sensors measure intrapulse signal modulation and/or received polarization.
[Figure 5: typical EW system architecture. A DF antenna, wideband converter, channelizer, synthesizer, tuners, phase quantizer, and receiver feed the CPU; the CPU drives a techniques generator, transmitter, countermeasure receive and transmit antennas (CM RCVR ANT, CM XMIT ANT), encoder, and decoys, with interfaces to communications, navigation, and display systems.]
A countermeasures receiver may use an independent electromagnetic environment interface. The countermeasures receiver accepts signals from the environment and provides
them to the techniques generator. Target signals designated
by CPU algorithms are selected for countermeasure generation as are the countermeasure modulation techniques to be
applied. The resulting jamming signals are amplified to the
desired power levels and radiated into the environment.
Decoys are part of the EW system architecture. This subsystem is controlled by the CPU based on sensor inputs. Decoys provide the important function of separating the countermeasure signal source from the host platform. In this
operational mode, decoys provide alternative highly visible
targets to divert a weapon from its intended target. Also required are the means, such as the coordination of jamming
with the use of decoys, to neutralize the HOJ weapons threat.
Surveillance and Warning
Electronic support surveillance and warning perform the
functions of noncooperative intercept and exploitation of radiated energy in the EM environment. Surveillance and warning detection relationships are those associated with communications systems. Additional signal detection constraints
result because the signal's spatial location and its characteristics may not be known. Signal unknowns require tradeoffs of
detection sensitivity and environment search. Once detected
and measured, environment signals require sophisticated signal processing for signal sorting, formation, and characterization before they can be correlated with signal intelligence
libraries for classification. Some fundamental tradeoff relationships for detection and warning are discussed below.
Threat Signal Detection. Threat signal detection occurs as
the electronic support system is illuminated above the system
sensitivity level with signals that satisfy the single-pulse detection criteria. Detection is performed as the ES system
scans the environment. Detection metrics include incident radiation sensitivity, detection probability, false detection probability, corruption probability, simultaneous detection, and
throughput rate.
Aircraft are often used to carry electronic warfare battlefield surveillance equipment. The operating altitude of sur-
veillance aircraft provides a long line-of-sight range to the horizon. The range to the electromagnetic horizon accounting
for nominal atmospheric refraction is given by

R = (3h/2)^(1/2)    (1)

where R is the horizon range in nautical miles and h is the antenna height in feet.
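Assuming the customary units for this approximation (h in feet, R in nautical miles, nominal 4/3-earth refraction), Eq. (1) evaluates directly:

```python
import math

def horizon_range_nmi(height_ft: float) -> float:
    """EM horizon with nominal atmospheric refraction: R = sqrt(3h/2),
    R in nautical miles, h in feet (assumed units)."""
    return math.sqrt(1.5 * height_ft)

for h_ft in (100, 10_000, 30_000):
    print(f"h = {h_ft:>6} ft -> horizon ~ {horizon_range_nmi(h_ft):5.0f} nmi")
```

This is why surveillance aircraft operating at altitude see hundreds of nautical miles to the horizon.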
R_MAX = [P_t G_t G_r λ^2 / ((4π)^2 (S/N)_MIN k T B_n L)]^(1/2)    (5)

where P_t is the transmitted power, G_t and G_r are the transmit and receive antenna gains, λ is the wavelength, (S/N)_MIN is the minimum detectable signal-to-noise ratio, kTB_n is the receiver thermal noise power, and L represents system losses.
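As a numeric illustration, a one-way (intercept) range relation of this family can be evaluated directly. The form below assumes the standard one-way link equation, and every parameter value is illustrative rather than representative of any particular system:

```python
import math

BOLTZMANN = 1.380649e-23   # J/K

def intercept_range_m(pt_w, gt, gr, wavelength_m, snr_min, bn_hz, loss, t_k=290.0):
    """One-way maximum intercept range:
    R = sqrt(Pt*Gt*Gr*lam^2 / ((4*pi)^2 * SNRmin * k*T*Bn * L))."""
    num = pt_w * gt * gr * wavelength_m ** 2
    den = (4.0 * math.pi) ** 2 * snr_min * BOLTZMANN * t_k * bn_hz * loss
    return math.sqrt(num / den)

# Illustrative: 100 kW emitter, 30 dB transmit antenna, 0 dB ES antenna,
# 3 cm wavelength, 13 dB required SNR, 10 MHz noise bandwidth, 6 dB losses.
r = intercept_range_m(1e5, 1e3, 1.0, 0.03, 10**1.3, 10e6, 10**0.6)
print(f"intercept range bound ~ {r / 1e3:.0f} km")
```

The one-way sensitivity bound typically exceeds the horizon range of Eq. (1) by a wide margin, so line of sight, not receiver sensitivity, usually limits intercept range.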
[Figure 6: probability of detection (0.001 to 0.999) versus required signal level, with curves for false-alarm probabilities from 10^-1 to 10^-16.]
P_OL = Σ (N = 1 to ∞) [(T_D R)^N / N!] e^(-T_D R) = 1 - e^(-T_D R)    (6)

where R is the pulse arrival rate and T_D is the receiver processing (dead) time.
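A Poisson pulse-arrival model of this kind gives a simple closed form for the probability that at least one other pulse lands in a processing interval; the rates below are illustrative:

```python
import math

def prob_overlap(pulse_rate_pps: float, dead_time_s: float) -> float:
    """Poisson arrivals at rate R: probability that one or more pulses
    arrive during a receiver processing interval T_D is 1 - exp(-R*T_D)."""
    return 1.0 - math.exp(-pulse_rate_pps * dead_time_s)

# Illustrative dense environment: 500,000 pulses/s, 1 us processing interval
print(f"P(overlap) = {prob_overlap(5e5, 1e-6):.2f}")   # 0.39
```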
Figure 7. Emitter location geometry supporting Eq. (7), with observer track and signal measurement angles indicated.
subtended by the maximum difference in observation bearings with respect to the target, which determines the location measurement error. The range R from the observer to the target is given by

R = L sin(θ)/sin(Δθ)    (8)

where L is the observer baseline, θ is the bearing measured from the observer track, and Δθ is the angle subtended at the target.
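Bearing-only ranging of this kind is the law of sines applied to the observer track; a self-contained sketch with illustrative geometry:

```python
import math

def range_from_bearings(baseline_m, bearing1_deg, bearing2_deg):
    """Range to a fixed emitter from the second observation point, given
    bearings (measured from the observer's track direction) taken at two
    points a known baseline apart. Law of sines: R = L*sin(b1)/sin(b2-b1)."""
    b1 = math.radians(bearing1_deg)
    delta = math.radians(bearing2_deg - bearing1_deg)   # angle at the target
    return baseline_m * math.sin(b1) / math.sin(delta)

# Observer flies a 10 km baseline; bearing shifts from 30 deg to 40 deg.
print(f"range ~ {range_from_bearings(10e3, 30, 40) / 1e3:.1f} km")   # ~28.8 km
```

Small bearing shifts make sin(Δθ) small, so short baselines or distant emitters yield large range errors, consistent with the error behavior noted above.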
Figure 8. The MMIC receiver, a combination of monolithic microwave, analog, and digital circuits, performs signal selection and conversion to a convenient intermediate frequency.
Wideband Interconnections. Electronic warfare sensors require broad access to the electromagnetic environment to provide quick response to hostile electromagnetic activity. For
convenience and efficiency, central stowage of signal processing functional elements is important. To assure signal
visibility, environment apertures, antennas, and EO/IR sensors must occupy locations on the periphery of the aircraft,
ship, or land vehicle. Wideband interconnects transmit electromagnetic environment data from the EW system apertures
to processing subsystems.
With the current RF bandwidth of the electronic warfare
environment expanding through tens of gigahertz, just finding a medium that supports that level of frequency coverage
is a challenge. At light frequencies, however, a 100 GHz spectrum spans less than a third of 1% of light frequency. In addition, low-loss-transmission optical fibers provide a nearly
lossless means to transfer wide spectra across a platform. Indeed, wideband interconnect technology is developing the use
of fiber optics.
Usable optical fiber bandwidth is limited by dispersion.
Conventional fiber exhibits dispersion of 20 ps/km/nm of
bandwidth. A typical signal operating within a 10 MHz bandwidth would exhibit dispersion of less than 0.1. Clearly,
bandwidth limitations are elsewhere in the link.
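The dispersion figure quoted above converts to a delay spread as dispersion x link length x optical linewidth; the link length and linewidth below are illustrative:

```python
def dispersion_delay_ps(d_ps_per_km_nm: float, length_km: float,
                        linewidth_nm: float) -> float:
    """Chromatic-dispersion delay spread across the signal's optical linewidth."""
    return d_ps_per_km_nm * length_km * linewidth_nm

# 20 ps/(km*nm) conventional fiber, 0.1 km shipboard run, 0.1 nm linewidth
spread_ps = dispersion_delay_ps(20.0, 0.1, 0.1)
print(f"delay spread ~ {spread_ps:.2f} ps")   # 0.20 ps -- negligible at RF
```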
Detectors have also been developed to provide bandwidths
on the order of tens of gigahertz. High RF operating frequency
detection is performed by using small-geometry detectors that
exhibit maximum power limitations. Limitation in maximum
power levels applied to the detector restricts the output signal
intensity range. Recent developments in distributed detector
elements are extending detector power-handling capabilities.
Dynamic range is a significant fiber-optic link metric because the EW sensor system must process low-power signals
on the horizon in an environment with high-power local
transmissions. Modulator and detector attenuation reductions are technological issues being addressed to enhance the
dynamic range performance of fiber-optic links.
Countertargeting
Countertargeting (CTAR) is the technical area that provides
the means for protecting the host platform or force from
Figure 9. The acousto-optic Bragg regime signal transform processing principle (a laser illuminates a Bragg cell in which the signal input at frequencies f1 and f2 drives a transducer, forming an acoustically induced diffraction grating; light is deflected through a frequency-dependent angle onto an optical sensor while undeflected light passes through) used for signal-frequency analysis, sensitivity enhancement, and direction-finding functions.
ELECTRONIC WARFARE
Figure 10. CTAR functional diagram showing the sequence used in engaging a surveillance or targeting radar signal: after serial port and generator initialization, dwells are sent and checked for data; after five repetitions, the PRI/pulse-width set is stepped and generator 2 advances through frequency/sensitivity power settings until 3.5 GHz is reached.
grammed pattern. False-target deception techniques are generated to emulate true target returns. The threat-radar operator, in response to deception, may conclude that all detected
targets are genuine and simply select false targets for weapons engagement, or, if deception is suspected, time and computational resources must be used to identify the true target
prior to engagement. In automated weapons systems, the EA
subsystem may create so many false targets that the radar
computer becomes overloaded. Because Doppler radar and
missile seekers process large numbers of ambiguous radar returns to fix the true target, they are particularly vulnerable to
coherent false-target techniques. An effective CTAR approach
combines jamming and deception. Jamming creates a radial
strobe that obscures the true target, whereas the deceptive
CTAR provides false targets that project through the jamming strobe.
Figure 13. PPI radar scope without and with jamming, showing the
effects of CTAR jamming on the threat radar display.
Rb² = (J/S) PR σ BJ / (4π PJ BR)    (9)

where Rb is the burn-through range, J/S is the ratio of jammer-to-signal power required to jam the victim radar, PR is the effective radiated power of the radar, PJ is the effective radiated power of the jammer, σ is the radar cross section of the target, BJ is the jamming signal bandwidth, and BR is the processing bandwidth of the radar receiver. This equation models the case with the jammer located on the radar target platform.
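Equation (9) can be exercised numerically. The sketch below uses illustrative parameter values; the powers, bandwidths, and cross section are assumptions, not values from the text.

```python
import math

# Burn-through range per Eq. (9):
#   Rb^2 = (J/S) * PR * sigma * BJ / (4 * pi * PJ * BR)

def burnthrough_range_m(js_req, p_radar_w, sigma_m2, b_jam_hz,
                        p_jam_w, b_radar_hz):
    """Range (m) inside which the target return overcomes the jammer."""
    return math.sqrt(js_req * p_radar_w * sigma_m2 * b_jam_hz /
                     (4.0 * math.pi * p_jam_w * b_radar_hz))

# Assumed values: 1 MW radar ERP, 10 m^2 target, 100 W jammer ERP,
# 20 MHz jammer bandwidth, 1 MHz radar processing bandwidth, J/S = 4:
rb = burnthrough_range_m(4.0, 1e6, 10.0, 20e6, 100.0, 1e6)
print(round(rb))  # roughly 800 m for these assumed numbers
```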
Jammer-to-Signal-Power Relationships. The J/S power ratio
at the threat radar is a concept central to predicting EA effectiveness. To degrade the threat radar, an interfering jammer
power J of sufficient strength is required to overcome the target-reflected signal at the radar S. For effective EM noise
jamming, the J/S required is 0 dB to 6 dB minimum, depending on the noise modulations used and the detailed characteristics of the threat. The minimum J/S ratio required for
effective CTAR deception techniques varies from 0 dB for
false targets, to 0 dB to 6 dB for range deception, to 10 dB to
25 dB for angle-tracking deception, and to 20 dB to 40 dB for
monopulse deception. Equations (10) to (12) are based on two
typical EA tactical situations. Self-protection CTAR [Eq. (10)]
addresses the case with the target in the threat radar main
beam. Support CTAR [Eq. (11)] addresses the case of the target in the threat main radar beam but with the EA jamming
emanating from a separate platform and radiating into an
arbitrary bearing of the threat radar antenna pattern. In both
cases, the radar is assumed monostatic (i.e., the radar receiver and transmitter are collocated).
J/S for self-protection EP CTAR:

J/S = 4π Pj Gj Br R² / (Pr Gr σ g² Bj)    (10)

where Pj is the jammer power output; Gj is the gain of the jammer antenna in the direction of the radar; Br is the radar receiver noise bandwidth; R is the radar-to-jammer range; Pr is the radar power output; Gr is the gain of the radar antenna in the target direction; σ is the target radar cross section; g² is the propagation one-way power gain (square of the ratio of field strength to free-space field strength due to direct and reflected ray combination), 0 ≤ g² ≤ 4 (interferometer lobing); and Bj is the jammer noise bandwidth.
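Equation (10) is straightforward to transcribe. The values below are assumed example numbers, chosen only to show a representative self-protection geometry.

```python
import math

# Self-protection J/S per Eq. (10); all parameter values are assumed.
def js_self_protection(p_j, g_j, b_r, r_m, p_r, g_r, sigma, g2, b_j):
    """Jammer-to-signal power ratio at the threat radar receiver."""
    return (4.0 * math.pi * p_j * g_j * b_r * r_m**2 /
            (p_r * g_r * sigma * g2 * b_j))

# 100 W jammer with 10 dB antenna gain against a 1 MW radar with 30 dB
# antenna gain; 10 m^2 target at 10 km; matched bandwidths; free-space
# propagation (g2 = 1):
js = js_self_protection(100.0, 10.0, 1e6, 10e3, 1e6, 1000.0, 10.0, 1.0, 1e6)
print(round(10.0 * math.log10(js), 1))  # -> 21.0 (dB)
```

At 21 dB this assumed jammer comfortably exceeds the 0 dB to 6 dB noise-jamming threshold quoted above, though not the 20 dB to 40 dB needed for monopulse deception.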
J/S for support EA:

J/S = 4π Pj Gj Grj Rt⁴ Br / (Pr Gr² σ Rj² Bj)    (11)

where Grj is the gain of the radar antenna in the jammer direction, Rt is the radar-to-target range, Rj is the radar-to-jammer range, and the remaining terms are as in Eq. (10). The signal power received at a monostatic radar is

S = Pr Gr² σ λ² g⁴ / ((4π)³ R⁴)    (12)

where λ is the radar wavelength.
Equation (12) defines the signal at the receiver of a monostatic radar. Note that the power received at the radar is directly proportional to the target radar cross section and inversely proportional to the fourth power of the range R (R is
the separation between the target and radar). Therefore, as
the radar cross section is reduced, the signal at the radar is
correspondingly reduced. If the cross section is sufficiently reduced, the target becomes indistinguishable from the radar
noise and background clutter. Low observable platforms, such
as the B-2 and F-117 aircraft, provide sufficiently low radar
cross section to make radar detection difficult. The implication of radar cross-section reduction technology for CTAR is twofold: first, with a sufficiently low radar cross section, EP may not be necessary; and second, if the cross section merely lowers the signal power at the radar, then a lower-power, low-cost CTAR transmitter becomes sufficient to provide the J/S necessary to achieve the desired level of survivability.
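Because Eq. (12) makes the received signal proportional to σ/R⁴, the range at which a given signal level is reached scales as the fourth root of the radar cross section. A short sketch makes the low-observable implication concrete.

```python
# Eq. (12) gives S proportional to sigma / R^4, so the detection range
# at which a fixed signal threshold is reached scales as sigma**(1/4).

def detection_range_factor(rcs_reduction):
    """Factor by which detection range shrinks when RCS is divided by
    rcs_reduction, everything else held constant."""
    return rcs_reduction ** -0.25

# A 100x (20 dB) RCS reduction only cuts detection range to ~32%:
print(round(detection_range_factor(100.0), 3))  # -> 0.316
```

The weak fourth-root dependence is why signature reduction is most effective in combination with CTAR rather than as a substitute for it.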
Countermeasure Technology. Countermeasure technology
addresses the evolving threat in addition to the need for economic force protection. Significant advances in radar, communications, EO/IR weapons sensors, and weapons control present heightened challenges to maintaining effective EA
capability.
Radar Countermeasures Technology. Countertargeting
equipment for use against advanced synthetic aperture radar
(SAR) or inverse synthetic aperture (ISAR) surveillance and
targeting radar requires wide instantaneous bandwidths and
high processing speeds. Furthermore, because these radars use coherent processing, effective CTAR requires coherent radar signal storage and reproduction. Digital RF memory (DRFM) technology
is being developed to convert the analog radar RF signals into
a digital format for convenient storage. As required, the radar
signal is retrieved from storage and converted to RF for use in
countermeasure waveform generation. Technology limitations
and costs constrain currently available DRFM designs, each
optimized for a specific application.
Radio-frequency-tapped delay lines provide precise timing
between portions of the CTAR waveform. Analog RF-tapped
delay lines use surface acoustic wave (SAW) and acoustic
charge-transport technology. Research is underway to create
digital tapped-delay lines. Noise modulation is commonly applied to CTAR signals, and high-quality tunable noise sources
are required. The output EA stage is the transmitter/antenna
combination that generates and radiates the CTAR signal.
Antennas for EA applications, once considered a dedicated
asset, are currently envisioned as multifunction phased-array
antennas with elements fed by solid-state amplifiers.
Radio-frequency isolation between the countermeasures
transmitter and the receiver is a common problem of countermeasures-equipped platforms. The countermeasure signal appears at the receiver antenna. When the transmitter and receiver are insufficiently isolated, the countermeasure signal
interferes with lower level threat signal reception from the
environment. Interference demands careful attention to antenna design, isolation, and platform siting.
Radar Countermeasure Signal Source Technology. Electronic
attack transmitters require signal sources that can be rapidly
switched in azimuth, elevation, frequency, and polarization to
generate multiple high-power beams with low sidelobes over
large multioctave bandwidths. CTAR requirements for eco-
ety of air-to-surface, air-to-air, and surface-to-air EO/IR missile weapons. These missiles can inflict severe damage to the
smaller craft used for littoral warfare.
Electro-optic system target detection range depends on detector sensitivity and resolution. A target image is defined by
contrast with the background. Sensitivity determines
whether the contrast is discernible. Resolution depends on
the spatial environment angle illuminating the detector,
which is a function of detector surface area and focusing optics. The distance at which target features are resolvable determines the maximum operating range of the system.
The target signature detectability is not determined by the
absolute temperature of the object but rather by the contrast
between the target and background within a given spectral
band. Environment backgrounds range from the cold, uniform
background of space to thermally cluttered land areas. Solar
interaction with the target and background reflection and
heating further degrade the background contrast with the
target. Typical target contrasts range from about 1 kW/sr (kilowatt per steradian) in the 2 μm to 3 μm atmospheric window for an aircraft engine to tens of kilowatts per steradian for ships in the 8 μm to 12 μm window. Target aspect, especially the location of hot spots, greatly influences the signature.
Electro-Optic/Infrared Countermeasures. Electro-optic/infrared countermeasures are constrained by the spectral atmospheric propagation characteristics, as is the threat (Fig. 14). The contrast of the target to the background within the weapon sensor's spectral passband, the type of seeker spatial localization processing, and the available practical radiation sources are also prime considerations.
The missile fly-out and CM sequence of events occurs in several seconds. As part of an integrated electronic warfare suite, the EO/IR EA system is designed to engage a large number of missiles launched in a coordinated attack. Figure 15 shows the engagement time line.
Figure 14. EO/IR atmospheric transmission spectral segments and laser and laser harmonics countermeasures source spectral regions over the 0.2 μm to 20 μm band, showing primary lasers (Nd, Er, Ho, Tm, Hf, CO2, gas, excimer, and diode sources) and lasers plus frequency conversion (Nd + SHG, FHG, BBO OPO, BBO OPO + SHG, PPLN OPO, and ZnGeP2 OPO).
Figure 15. Missile attack time line showing the launch, acquisition, and homing phases of the missile as well as the CM attack on missile sensors and control circuits: handoffs to and from EO/IR EA, RF and IR decoy deployment, antimissile missile launch, and CIWS engagement, plotted as time from ASM impact versus ASM range from the ship, with the minimum range for intercept and the range by which EO/IR EA information must be passed indicated.
The small beam divergence of lasers can result in high-radiance, low-power sources that provide the J/S power ratios needed for effective EA. Two laser sources, primary lasers and nonlinearly shifted lasers, are available for CM applications. Lasers shifted by nonlinear conversion include harmonic generation and tunable optical parametric oscillators
(OPOs). Primary lasers do not produce spectral lines in all of
the potential threat passbands of interest and are susceptible
to notch-filter counter-countermeasure techniques. Although
harmonic generating EA techniques provide additional wavelengths, they are also subject to counter CM. Promising
sources for IR/EO CM are tunable OPOs pumped by diode-pumped, solid-state lasers. Two nonlinear materials currently
demonstrating the highest potential are periodically poled
lithium niobate (PPLN) and zinc germanium phosphide
(ZnGeP2). Figure 14 shows the primary lasers of interest and
the wavelength coverage possible with PPLN and ZnGeP2
OPOs.
Although noncoherent sources provide wide angular protection, high-resolution detection is necessary to point and
track the threat system and effectively use laser power.
Timely threat detection and warning ES is essential to the
success of all nonpreemptive EA.
Electro-Optic/Infrared Countermeasure Technology. Key
EO/IR EA technologies required to counter threat performance improvements include higher throughput data processing using more capable algorithms, laser beam steering,
and decoy launcher design. Needed processing improvements
include faster signal processing, more efficient image processing, and false alarm reduction. High-performance, high-speed beam steering, preferably nonmechanical, is required
to reduce response time in multiple threat environments. Improved decoy launchers to position decoys quickly and accurately within the scenario are also needed.
Low observability technologies are being developed to decrease or mask the IR/EO signatures of targets. Target signature reduction increases the effectiveness of conventional
countermeasure responses by reducing the jamming power required to counter the missile system effectively. Low observability enables applying new technologies to IR/EO countermeasures by reducing the size, weight, and power
requirements of decoy and laser CM sources. For example,
diode laser and diode-pumped nonlinear optical sources can
be integrated with unmanned aerial vehicles to produce new
classes of CM devices and tactics. Large-area spectrally selective sources and obscurants provide advanced capability
against spatially and spectrally discriminating threats. Primary laser and laser-pumped nonlinear sources are important evolving technologies. Launchers and vehicles that provide rapid and precise CM placement with realistic kinematic
performance are areas of increasing importance.
Decoy Countermeasures
Decoys are EW devices, usually expendable, deployed from
the platforms to be protected. Decoys generate a jamming response to the threat or false targets. In either case, the decoy
lures the threat away from the intended target toward the
decoy. A jamming decoy generates a cover signal that masks
the target signal. Thereby the threat sensor signal fidelity is
degraded, making detection and tracking of the intended target more difficult. A jamming signal may also activate the
antijam home-on-jam mode of the weapon system. As false
targets, the decoys generate credible target signatures to provide weapon system seduction or distraction. Decoys create
confusion that causes weapons to attack false targets.
Decoys may be either passive or active. A passive decoy
generates a countermeasure response without the direct, active amplification of the threat signal. Principal examples of
passive decoys are chaff and corner reflectors in the RF spectrum and flares in the EO/IR spectrum.
Decoy Operational Employment. Decoys provide EA capability across the entire EW battle time line. Decoys are used
primarily for EP missile defense and self-protection missile
defense but also for countersurveillance and countertargeting applications.
Jamming is used in conjunction with decoys to obscure the
target signal at the threat radar during decoy deployment.
As decoys are deployed, jamming ceases and the threat radar
acquires the decoy as a target or transfers radar tracking
from the target to the decoy. Threat radar acquisition of the
decoy as a target is probable because decoys present prominent signatures.
Decoys used for missile defense perform either seduction,
distraction, or preferential acquisition functions. A single decoy type may perform multiple functions, depending on deployment geometry with respect to the launch aircraft or ship
and the stage of electronic combat.
Decoys are used in a seduction role as a terminal defense
countermeasure against missile weapons systems. A seduction decoy transfers the lock of the missile guidance radar or
EO/IR sensor from the defending platform onto itself. The decoy that generates a false-target signature is initially placed
in the same threat tracking gate, missile sensor range, and/
or angle segment as the defending target and is subsequently
separated from the launching platform. The decoy signature
captures the missile guidance sensor, and the target lock is
transferred from the ship or aircraft to the decoy. Typically,
the decoy is separated in both range and angle from the defending target to assure target-to-missile physical separation
Figure 16. ALE-129 RF chaff round with the bundle of reflector elements partially deployed from the canister.
Frequently, persistent seduction decoys perform a distraction function after separating sufficiently from the defended
platform. This residual distraction further minimizes the
number of distraction decoys required in an engagement.
An EA preferential acquisition decoy provides a signature
to the missile seeker such that during acquisition the missile
seeker senses the real target only in combination with the
decoy signature. In the end game, the decoy signature in the
missile field of view biases the aim point of the missile tracker
away from the intended target.
The preferential acquisition concept requires decoys positioned close to the defending platform. Decoys can be towed
behind the target aircraft or tethered to the defending ship.
The AN/ALE-50 (Fig. 22) is a towed decoy used for air defense
preferential acquisition, and the EAGER decoy (Fig. 23) is being developed for ship defense preferential acquisition.
Chaff Decoys. A chaff decoy is composed of multiple (tens of thousands to millions of) electrically conductive dipole filament elements deployed in the air to reflect and scatter radar signal radiation and create a false-target radar response. Figure 24 shows a typical deployed chaff decoy. The chaff decoy frequency response is determined by the length of the dipole elements, and the chaff radar cross-sectional (RCS) magnitude is given by

RCS(m²) = 0.018 c² N / f²    (13)

where N is the number of dipole elements, c is the speed of light, and f is the frequency in hertz.

Figure 20. NATO Sea Gnat MK-216 distraction decoy deployed from a rocket launcher.

Figure 24. Deployed chaff round shown as a burst of reflector elements against a sky background.
parent target signature. Figure 26 shows a multifaceted triangular corner reflector that provides wide angular coverage. The apparent RCS normal to a triangular corner reflector is given by

RCS(m²) = 4π L⁴ f² / (3c²)    (14)

where L is the edge length of the reflector.
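Equations (13) and (14) are simple to evaluate. The dipole count, corner edge length, and threat frequency below are assumed example values, not figures from the text.

```python
import math

C = 3.0e8  # speed of light, m/s

def chaff_rcs_m2(n_dipoles, freq_hz):
    """Chaff cloud apparent RCS per Eq. (13)."""
    return 0.018 * C**2 * n_dipoles / freq_hz**2

def corner_rcs_m2(edge_m, freq_hz):
    """Peak RCS normal to a triangular corner reflector per Eq. (14)."""
    return 4.0 * math.pi * edge_m**4 * freq_hz**2 / (3.0 * C**2)

f = 10e9  # assumed 10 GHz threat radar
print(round(chaff_rcs_m2(1_000_000, f), 1))  # -> 16.2 (m^2), 10^6 dipoles
print(round(corner_rcs_m2(0.3, f), 1))       # -> 37.7 (m^2), 0.3 m edge
```

Both assumed devices present tens of square meters of apparent cross section, comfortably larger than typical aircraft signatures, which is what makes them credible false targets.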
Figure 25. Radar PPI display showing target reflections from multiple chaff decoys.

plied to the signal before retransmission to enhance effectiveness. The apparent radar cross section of an active RF decoy is given by

RCS(m²) = 4π R² Pd Gd / (Pr Gr)    (15)

where Pd Gd is the effective radiated power of the decoy, Pr Gr is the effective radiated power of the radar, and R is the radar-to-decoy range. In terms of decoy gain, the apparent RCS is

RCS(m²) = Gt c² / (4π f²)    (16)

where Gt is the combined electronic and antenna gains (receive and transmit) of the decoy, c is the speed of light (3 × 10⁸ m/s), and f is the frequency in hertz.
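Equation (16) can be checked with assumed numbers; the 60 dB combined gain and 10 GHz frequency below are illustrative, not values from the text.

```python
import math

C = 3.0e8  # speed of light, m/s

def active_decoy_rcs_m2(gain_total, freq_hz):
    """Apparent RCS of an active repeater decoy per Eq. (16)."""
    return gain_total * C**2 / (4.0 * math.pi * freq_hz**2)

g_t = 1.0e6  # assumed 60 dB combined receive/transmit gain
print(round(active_decoy_rcs_m2(g_t, 10e9), 1))  # -> 71.6 (m^2)
```

Note that, unlike the passive chaff and corner reflectors, the repeater's apparent cross section depends on electronic gain rather than physical size, which is why a small towed body can mimic a ship- or aircraft-sized return.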
Decoy Effectiveness. A distraction decoy is deployed at an
extended range from the defending platform and provides an
alternate target for seeker lock-on. Distraction decoys require
deployment before seeker lock-on to engage the radar in its
acquisition process. Usually more than one distraction decoy
is used to defend a platform. An estimate of the effectiveness
of the distraction decoy is given by
Ps = 1 - 1/(N + 1)    (17)

where N is the number of distraction decoys deployed.
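Equation (17) implies that a handful of credible decoys already yields a high survival probability; a minimal sketch:

```python
# Eq. (17): with N credible distraction decoys plus the real target, a
# seeker acquiring at random locks onto the target with probability
# 1/(N + 1), so the platform survives with Ps = 1 - 1/(N + 1).

def survival_probability(n_decoys):
    return 1.0 - 1.0 / (n_decoys + 1)

for n in (1, 3, 7):
    print(n, survival_probability(n))  # 0.5, 0.75, 0.875
```

The diminishing returns are clear: going from one decoy to three raises Ps from 0.5 to 0.75, but each further decoy buys less, which is why the text emphasizes decoy credibility as much as decoy count.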
future systems include broad bandwidth microwave and millimeter-wave components (e.g., antennas and amplifiers).
Microwave and millimeter-wave output power sources are
required with high power, efficiency, and duty cycle to support the projected threat environments. The future RF threat
environment is expected to be densely populated with long-pulse radar. Higher decoy radiated power at higher duty cycles will be needed to prevent decoy saturation as the number
of simultaneous threat signals in the environment increases.
Ultra-high-speed countermeasure frequency set-on circuitry is necessary to cue the jammer frequency rapidly. Signals with rapid frequency hopping and frequency chirping require rapid activation for effective countermeasures. Spatially
large and efficient spectrally matched IR materials and radiating structures are needed to counter multispectral, imaging
IR seekers. Safe, nontoxic, highly opaque, broad-spectrum IR
and electro-optical obscuration materials are required to
mask targets and confuse image-processing seekers. Efficient,
primary power sources capable of high peak power and dense
energy storage are needed to provide the increasing demand
for electrical power used in decoy systems.
Reading List
J. S. Accetta and D. L. Shumaker (eds.), The Infrared and Electro-Optical Systems Handbook; D. H. Pollock (ed.), Vol. 7, Countermeasure Systems, Ann Arbor, MI: Infrared Information Analysis Center, and Washington, D.C.: SPIE Optical Engineering Press, 1993.
B. Blake, Jane's Radar and Electronic Warfare Systems, Surrey, U.K.: Jane's Information Group, 1993.
J. A. Boyd et al., Electronic Countermeasures, Los Altos, CA: Peninsula Publishing, 1978.
E. J. Chrzanowski, Active Radar Electronic Countermeasures, Norwood, MA: Artech House, 1990.
N. C. Currie, Techniques of Radar Reflectivity Measurement, Dedham, MA: Artech House, 1984.
R. D. Hudson, Jr., Infrared Systems Engineering, New York: Wiley-Interscience, 1969.
W. L. McPherson, Reference Data for Radio Engineers, New York: Howard W. Sams, 1977.
R. J. Schlesinger, Principles of Electronic Warfare, Los Altos, CA: Peninsula Publishing, 1961.
M. I. Skolnik, Radar Handbook, New York: McGraw-Hill, 1970.
L. B. Van Brunt, Applied ECM, Vol. 1, Dunn Loring, VA: EW Engineering, 1978.
W. L. Wolfe and G. J. Zissis (eds.), The Infrared Handbook, revised ed., Ann Arbor, MI: Environmental Research Institute of Michigan, 1985.
ANTHONY E. SPEZIO
ALAN N. DUCKWORTH
FRANCIS J. KLEMM
STANLEY A. MOROZ
JAMES M. TALLEY
Naval Research Laboratory
Figure 1. The AN/PVS-5 goggle provides a good image with moonlight illumination. In use, it covers the entire upper portion of the
face.
gyro horizon without looking inside at the cockpit instruments. Figure 3 illustrates symbology superimposed on ANVIS imagery. The HUD allows the pilot to keep heads up and
eyes out, because the pilot need not focus his eyes and attention inside the cockpit to view important instrument information.
The primary problem with using ANVIS on helicopters is
lack of compatibility with the cockpit instrument lighting.
Modern image intensifiers amplify ambient light 2000 to 3000
times; cockpit lights can blind the goggles due to reflected
glare off the canopy or off other objects in the cockpit. The
problem is corrected by adding a spectral filter to ANVIS
which rejects blue-green light, and only blue-green instru-
used by the copilot/gunner to locate and engage targets. However, the TADS thermal imager has three fields of view with
the wide field of view identical to the PNVS field of view. The
copilot/gunner can use the TADS image in a pilotage mode in
exactly the same way that the pilot uses the PNVS. A helmet
tracker senses the copilot's head motion and moves the TADS
to align the line of sight of the thermal imager. The copilot
views the image via a helmet-mounted display.
Heads-up instrument symbology is an integral part of the
PNVS and TADS systems on the Apache helicopter. Both pilot and copilot can view important flight and status information superimposed on the thermal imagery. With symbology
superimposed on his night vision imagery, the pilot does not
have to focus his eyes inside the cockpit to determine critical
information such as altitude, heading, or caution status.
Combinations of Thermal Imagers and Image Intensifiers
Both pilot and copilot use ANVIS to fly. The panel-displayed HNVS imagery is used to cross-reference and verify
the information provided by the ANVIS. The aviators use
HNVS as a backup, and as a cross reference for terrain avoidance, target location, check point verification, and during low
illumination or poor visibility conditions where ANVIS vision
is degraded.
The newest Army helicopter, currently in development, is
the RAH-66 Comanche; Comanche is a reconnaissance and
light attack helicopter. The Comanche Night Vision Pilotage
System will integrate an advanced, high-resolution thermal imager, an I² camera, and flight symbology into a single package. The pilotage sensors will be mounted on the nose of the aircraft in a manner similar to Apache; however, the nose turret will include both thermal and I² sensors. The pilot will wear a binocular helmet display rather than the monocular display worn by Apache aviators. The field of view of the NVPS with the new helmet-mounted display will be 30° vertical by 52° horizontal.
of fiberoptic bundles with the core etched away. The plate has
millions of channels (holes) with photoemissive material on
the inside of the channels. Each face of the MCP is metalized,
and a high voltage is applied across the plate. As electrons
strike the inside of the MCP channels, secondary electrons
are emitted. Multiple secondary electrons are emitted for each
cathode electron. The secondary electrons are accelerated by
the voltage along the channel, the secondary electrons strike
the channel wall and cause more electrons to be emitted, and
the electron multiplication process is repeated.
The amplified electrons from the MCP are accelerated to
the phosphor, where a brighter version of the cathode image
is formed. The fiberoptic twist erects this image. The eyepiece
magnifies the image for presentation to the eye. ANVIS provides a scene to eye light gain of about 3000. In the absence of
fog or obscurants, ANVIS performs well under clear starlight
illumination. Generally, ANVIS provides good imagery with
naked-eye visibility exceeding 200 m to 300 m and minimum
light levels of 7 × 10⁻⁵ footcandles (2).
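The channel-multiplication description above amounts to a secondary-emission yield compounded over the collisions along a channel. The yield and collision count below are assumed illustrative values, not ANVIS design data.

```python
# Microchannel-plate gain sketch: each wall collision multiplies the
# electron count by a secondary-emission yield, so n collisions give a
# gain of yield**n. Yield and collision count are assumed values.

def mcp_gain(secondary_yield, n_collisions):
    return secondary_yield ** n_collisions

# Two secondary electrons per strike over 12 strikes along the channel
# give a gain in the thousands, the order of the ANVIS scene-to-eye
# light gain quoted above:
print(mcp_gain(2.0, 12))  # -> 4096.0
```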
Thermal Imagers
[Figure: image intensifier elements: objective lens, cathode, microchannel plate (MCP), phosphor, fiberoptic twist, and eyepiece.]
Thermal imagers like the Apache helicopter PNVS detect radiation in the 8 μm to 12 μm spectral band. This band is
chosen because the atmosphere has a window where the
transmission of thermal energy is good. Everything near room
temperature radiates at these wavelengths. The emissivity of
natural objects is generally above 70%; most human-made objects are also highly emissive. It should be noted, however,
that thermal sensors derive their images from small variations in temperature and emissivity within the scene. Typically, the thermal scene is very low contrast even under good
thermal viewing conditions. Scene thermal contrast is affected by the amount of solar heating during the day. Thermal contrast is decreased by the presence of clouds. Thermal
contrast can be poor at night, particularly after extended periods of clouds or precipitation.
In current thermal imagers like the PNVS, a linear array
of infrared detectors is used. Figure 7 illustrates the theory
[Figure 7: scanned thermal imager elements: scan mirror, afocal optics, imaging lens, detector array, incoming thermal energy, and electronic reformat and display.]
[Tables: pilot ratings of the goggle configurations, tallied as good/adequate/inadequate counts for each configuration.]
[Table: goggle simulator configurations, listing limiting resolution (normal eyesight and 0.9, 0.6, and 0.5 cy/mrad) and ocular overlap (50%, 75%, and 100%).]
[Table 4: field-of-view and limiting-resolution combinations evaluated: 40° devices at 0.9, 0.4, and 0.5 (edge)/1.1 (center) cy/mrad; 60° devices at 0.6, 0.3, and 0.2 (edge)/0.9 (center) cy/mrad.]
lution at the center was also evaluated. Table 4 gives the combinations evaluated in the second test which was flown during
February and March, 1988. Four subject pilots participated; each subject flew four trials of each task.
During this test, goggle configuration did not affect altitude and airspeed performance. Once the task was defined in
the baseline flight, execution did not vary significantly in
terms of the airspeed or altitude which was maintained. The
highest workload and lowest confidence ratings were given to
the 60°, 0.3 cy/mrad goggle simulators. In this test, the pilots
consistently selected the higher resolution and smaller field
of view devices over the larger field of view but lower resolution devices.
If resolution at the edge of a 60° device was substantially poorer than resolution at the center, two of the pilots consistently rated the 40° field-of-view goggles higher even when the 60° goggles had equivalent or better resolution in the central portion of the field of view. The other pilots rated these 40° and 60° devices as equal.
After test completion, the pilots were asked to explain this preference. The response was that, with the 60° goggles, they would see an object and then lose it. This characteristic of the goggles was particularly bothersome during the 360° hover turn out of ground effect but also affected performance
during lateral flight, NOE, and contour flight. It is likely that
ocular tracking is important in the performance of all these
tasks and that poor resolution at the edge of the field of view
would therefore lead to adverse pilot reaction. However, ocular tracking was not measured during the test.
During 1994, a flight test was conducted to test the hypothesis that using an 18 ocular overlap in a 52 total FOV
might result in abnormal eye and head movement patterns
(12). A fully overlapped design was also flown for comparison.
The flight test further determined if the difference would impact pilot performance of the prescribed flight tasks. Flight
tasks included NOE, contour, out of ground effect hover, and
lateral flight.
On the basis of the eye tracking data collected during the
flight, the partial overlap does constrain the eye at the center
of the FOV and significantly reduces the amount of time that
the eye uses the outer portion of the total FOV. Averaged
across all pilots and tasks, the percentage of eye fixations that
occur outside the central 18° when using partial overlap was reduced by 60% (p = 0.0170) as compared to the full overlap (full: 24%, partial: 9%). There was no difference between tasks (p = 0.2836).
Looking at horizontal eye movement, the mean rms amplitude across the five subjects for the partial overlap was only
70% of the rms for the full overlap. This 30% reduction was
significant (p = 0.0136). No statistically significant difference
head motion. The pilot will see a blurred image for the same
reason that a photograph will be blurred if the exposure time
is too long for the motion being captured.
Two pilots flew an AH-1 Cobra from the front seat using
helmets and helmet-mounted displays from the Apache helicopter with a small video camera mounted on the helmet. The
camera FOV was 30° vertical by 40° horizontal and provided
unity magnification through the helmet display. The test
camera had a limiting resolution of about 0.5 cy/mrad and
electronic gating to control the dwell time for each video field.
Selectable exposure times ranged from 1/60 s (one field) to
under a millisecond. The pilot's visor was down and taped so
that he flew solely by sensor imagery. The pilots performed
hover, lateral flight, NOE, and contour tasks. The flight experiment was performed in January, 1989, at Fort A.P. Hill,
Virginia.
Image blur at 1/60 s exposure time was unacceptable. Blur
was present with either aircraft or head motion, and the blur
interfered with task accomplishment. With an exposure time
of 1/120 s, image blur was noticeable with head motion but
no conclusion was reached regarding impact on performance.
No image blurring was noted at 1/240 s exposure time.
Visual acuity is not degraded for ocular tracking rates up to about 30° per second, and ocular tracking is probably important during pilotage. The exposure time for each snapshot taken by a video camera should be short enough that images crossing the sensor FOV at up to 30° per second are not
blurred. Note that acceptable exposure time depends on sensor resolution; exposure time should shorten as sensor limiting resolution improves.
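The blur argument can be quantified: the angular smear equals the angular rate times the exposure time, and it should be compared with one resolution element. The sketch below uses the 30°/s tracking rate and the 0.5 cy/mrad test-camera resolution quoted above; the comparison itself is an illustrative rule of thumb.

```python
import math

# Motion blur = angular rate x exposure time, compared with one
# resolution element (one cycle at the limiting resolution).

def blur_mrad(rate_deg_s, exposure_s):
    """Angular smear (mrad) accumulated during one exposure."""
    return math.radians(rate_deg_s) * 1000.0 * exposure_s

element_mrad = 1.0 / 0.5  # one cycle at 0.5 cy/mrad = 2 mrad

for label, exp in (("1/60", 1 / 60), ("1/120", 1 / 120), ("1/240", 1 / 240)):
    print(label, round(blur_mrad(30.0, exp), 2))
# 1/60 s smears ~8.73 mrad (several 2 mrad resolution elements:
# unusable), while 1/240 s smears ~2.18 mrad (about one element),
# consistent with the flight results reported above.
```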
Impact of Image Processing Delays. In advanced helicopter
pilotage systems, digital processing will be used to enhance
imagery and add symbology. Digital processing adds a delay
between when the image is captured by the sensor and when
it is seen by the observer. This kind of delay is not present in
currently fielded systems; the impact of this delay on flight
performance is unknown. A flight test was conducted to qualitatively assess the performance impact of delaying pilotage
video (14).
Two aviators participated in the test and alternated as
subject and safety pilot. The subject pilots wore Apache helmets and viewed a helmet-mounted camera through the
Apache helmet-mounted display. The camera and display provided a 30° vertical by 40° horizontal, unity magnification image to the subject pilot. During the test, a cloth was draped
over the subject's visor so that all visual cues came from the
helmet display. A video digitizer provided a variable delay
between camera and display. All flights were in daylight and
good weather.
The project pilot established baselines for several aggressive flight maneuvers using normal day, unaided vision. The
maneuvers included rapid sidestep, pop-up, longitudinal acceleration and deceleration, rapid slalom, nap-of-the-earth,
and contour flight. After practicing unaided and with the sensor hardware set for zero delay, the subject pilots repeated
the maneuvers with the video delay increased after each iteration of the task set. Test results are based on subject and
safety pilot assessments of flight performance.
On the basis of the qualitative assessment of these two
pilots, there appears to be no performance impact from a 33
ms image processing delay.
Delays of 100 ms or more impaired the subject pilots' ability to make stable, aggressive maneuvers. All hover tasks
were more difficult; sometimes a stable hover could not be
achieved. Alternate strategies were developed for NOE and
contour to compensate for the image processing delay. The
subjects experienced the feeling that the aircraft motion was
ahead of the visual scene.
On the basis of this limited flight test, processing delays of
up to 33 ms cannot be sensed by the pilot and appear to have
no impact on flight performance. However, with an image processing delay of 100 ms, the pilot senses that aircraft movement is ahead of the displayed image. During these flights,
and without prior training with delayed imagery, the 100 ms
delay led to significant flight control problems.
EVALUATION
Current night pilotage sensors like the ANVIS image-intensified goggle and the PNVS thermal imager provide a significant capability to fly helicopters at very low altitudes in order
to hide behind hills, trees, and other terrain objects; this capability enhances the survivability of tactical helicopters on
the modern battlefield. The availability of heads-up aircraft
status symbology, that is, symbology superimposed on the
night vision imagery, is a critical feature of these pilotage systems. Further, aviators report that their ability to perform
night missions is greatly enhanced when both image-intensified and thermal imagers are available on the helicopter.
Flight experiments and the results of user surveys provide
guidelines for design improvements. NOE and contour flight
can be accomplished with reasonable workload using a pilotage system with a 40° FOV and 0.6 cycles per milliradian limiting resolution; this resolution provides the pilot 20/60 visual
acuity. Improving either FOV or resolution beyond these values will lessen pilot workload and lead to increased confidence. However, since the ability to resolve scene detail is
important for terrain flight, night sensors should have sufficient sensitivity to provide 0.6 cycles per milliradian resolution under low thermal contrast or low scene illumination
conditions. In advanced systems, this minimum level of image
quality should not be traded for increased field of view.
BIBLIOGRAPHY
1. Anonymous, Lighting, Aircraft, Interior, Night Vision Imaging
System (NVIS) Compatible, MIL-L-85762A, 1988.
2. D. Newman, ANVIS/PNVS Comparison Flight Test, Fort Belvoir:
U.S. Army Night Vision and Electro-Optics Laboratory, 1982.
3. C. Nash, AH-64 Pilotage in Poor Weather, Fort Belvoir: U.S. Army
Center for Night Vision and Electro-Optics, NV-12, 1987.
4. R. Vollmerhausen, C. Nash, and J. Gillespie, Performance of AH-64 Pilotage Sensors during Reforger 87, Fort Belvoir: U.S. Army Center for Night Vision and Electro-Optics, NV-1-30, 1988.
5. T. Bui and J. Gillespie, Night Pilotage Sensor Field Assessment,
Fort Belvoir: U.S. Army Center for Night Vision and Electro-Optics, NV-91-4, 1990.
6. G. Youst, J. Gillespie, and S. Adams, Desert Storm's Night Vision and Electro-Optical Equipment Suitability Survey, Fort Belvoir: U.S. Army Night Vision and Electro-Optics Directorate, AMSEL-NV-0099, 1992.
RICHARD H. VOLLMERHAUSEN
U.S. Army Communications and
Electronics Command
MISSILE CONTROL
A missile control system consists of those components that
control the missile airframe in such a way as to automatically
provide an accurate, fast, and stable response to guidance
commands throughout the flight envelope while rejecting uncertainties due to changing parameters, unmodeled dynamics,
and outside disturbances. In other words, a missile control
system performs the same functions as a human pilot in a
piloted aircraft; hence, the name autopilot is used to represent the pilotlike functions of a missile control system. Missile
control and missile guidance are closely tied, and for the purposes of explanation, a somewhat artificial distinction between the two roles is now made. It must be remembered,
however, that for a guided missile the boundary between
guidance and control is far from sharp. This is due to the
common equipment and the basic functional and operational
interactions that the two systems share. The purpose of a
missile guidance system is to determine the trajectory, relative to a reference frame, that the missile should follow. The
control system regulates the dynamic motion of the missile;
that is, the orientation of its velocity vector. In general terms,
the purpose of a guidance system is to detect a target, estimate missile-target relative motion, and pass appropriate instructions to the control system in an attempt to drive the
missile toward interception. The control system regulates the
motion of the missile so that the maneuvers produced by the
guidance system are followed, thereby making the missile hit
or come as close as required to the target. The autopilot is the
point at which the aerodynamics and dynamics of the airframe (or body of the missile) interact with the guidance system. Instructions received from the guidance system are
translated into appropriate instructions for action by the control devices (e.g., aerodynamic control surfaces, thrust vectoring, or lateral thrusters) that regulate the missile's flightpath. A block diagram describing these missile control system operations is depicted in Fig. 1, where the function of each component is further explained in the following.
[Figure 1. Block diagram of a missile control system: guidance command → controller → actuator → aerodynamic control surface → missile dynamics, with sensor feedback to the controller.]
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
COMPONENTS OF MISSILE CONTROL SYSTEMS
Sensor Units
Sensor units measure some aspect of the missile's motion. Gyroscopes and accelerometers are the two primary sensor units used in any missile control system. They provide information on the rotational and translational motion of a missile, respectively.
1. Gyroscope. A gyroscope is a mechanical device containing an accurately balanced rotor whose spin axis passes through the center of gravity. When the rotor rotates at high speed, it acquires gyroscopic rigidity, resisting any force that tends to displace it from its plane of rotation. The tendency of a gyroscope to maintain its spin direction in inertial space allows us to measure, with respect to that direction, the angular motion of the missile on which the gyroscope is mounted. Some recent gyroscopes, such as fiber-optic gyroscopes and ring-laser gyroscopes, do not use a spinning rotor; they measure the body rate by means of the Sagnac effect. Fiber-optic gyroscopes in particular offer high performance at reasonable cost.
2. Accelerometer. The basic principle of operation of an accelerometer is the measurement of the inertial reaction force of a mass to an acceleration. The inertial reaction force displaces the mass, which is suspended in an elastic mounting system within the missile, and the acceleration of the missile can be read from the displacement of the suspended mass. Velocity and position information can be obtained by integrating the accelerometer signal. One must avoid placing the accelerometer near an antinode of the principal bending mode of the missile; otherwise, the vibration picked up at this point may result in destruction of the missile.
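The integration of the accelerometer signal into velocity and position can be sketched numerically. The following Python fragment uses trapezoidal integration; the constant-acceleration sample data and the 100 Hz sample rate are illustrative assumptions, not values from the article.

```python
# Sketch: recovering velocity and position from sampled accelerometer
# readings by trapezoidal numerical integration. Sample data and the
# 100 Hz rate are illustrative assumptions.

def integrate(samples, dt, initial=0.0):
    """Trapezoidal integration of a uniformly sampled signal."""
    out = [initial]
    for a0, a1 in zip(samples, samples[1:]):
        out.append(out[-1] + 0.5 * (a0 + a1) * dt)
    return out

dt = 0.01                       # 100 Hz sampling (assumed)
accel = [2.0] * 101             # constant 2 m/s^2 for 1 s
velocity = integrate(accel, dt)         # ends near 2 m/s
position = integrate(velocity, dt)      # ends near 1 m (x = a t^2 / 2)

print(round(velocity[-1], 3), round(position[-1], 3))  # → 2.0 1.0
```

In a real missile, this dead-reckoning drifts with sensor bias, which is why accelerometer data are normally blended with other navigation sources.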
3. Altimeter. The altimeter, which is an instrument used
to measure altitude, is another sensor unit frequently
employed in cruise missile systems. There are two common types of altimeters. A pressure altimeter, which is
simply a mechanical aneroid barometer, gives an approximate altitude from which a more accurate value
can be calculated; on the other hand, radio altimeters
give absolute altitude directly. In radio altimeters, a
303
transmitter radiates a frequency-modulated wave toward the earth, and the reflected signal is received on
a separate antenna and combined with the signal taken
directly from the transmitter. The frequency difference
between the transmitted and the reflected signals indicates the height of the missile. Radio altimeters can be
used to maintain a missile automatically at a preset altitude.
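The height-from-frequency-difference relation described above can be sketched for a linearly swept (FM) radio altimeter. The sweep bandwidth, sweep time, and beat frequency below are illustrative assumptions chosen only to show the arithmetic.

```python
# Sketch: height from the beat frequency of an FM radio altimeter.
# For a linear sweep of bandwidth B over time T, the round-trip delay
# 2h/c shifts the received sweep, so the beat frequency is
# f_b = (B/T) * (2h/c), giving h = c * f_b * T / (2 * B).
# All numeric values below are illustrative assumptions.

C = 3.0e8        # speed of light, m/s

def height_from_beat(f_beat_hz, bandwidth_hz, sweep_time_s):
    slope = bandwidth_hz / sweep_time_s      # sweep rate, Hz per second
    round_trip = f_beat_hz / slope           # delay 2h/c, seconds
    return C * round_trip / 2.0              # height, meters

# Example: 100 MHz sweep in 1 ms; a 40 kHz beat corresponds to 60 m.
h = height_from_beat(40e3, 100e6, 1e-3)
print(round(h, 6))  # → 60.0
```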
Controller Units
Controller units can be regarded as the brain of a missile: they tell the missile how to deflect its control surfaces or how to alter its thrust direction. The controller takes the form of preprogrammed logic and/or numerical operations installed in the on-board computer of a missile. There are two inputs to the controller units. One comes from the sensor units, which provide information about the actual motions of the missile; the other comes from the guidance system, which provides information about the commanded motions. The commanded and actual motions are compared and manipulated in the controller units via a series of logic and/or numerical operations, producing a decision that, when fed into the actuator units, drives the actual motions of the missile to match the commanded motions as closely as possible. The series of operations involved in the controller unit is called the control law. The most widely used control laws include amplification, integration, and differentiation of the error signal between the commanded motions and the actual motions.
1. Amplification. Amplification of the error signal improves the robustness of the missile control system against uncertainties present in the missile dynamics.
2. Integration. Integration of the error signal drives the steady-state difference between the commanded motions and the actual motions toward zero.
3. Differentiation. Differentiation of the error signal provides the trend of error propagation and decreases the time required for the actual motions to track the commanded motions.
With the increasing computation power of on-board computers, more advanced control laws can be implemented in
the missile control loop to improve the agility of a missile.
This point is addressed in more detail later.
Actuator Units
Actuator units are energy transformation devices. They receive commands from the controller units and convert them into enough power to operate the control surfaces that direct the missile. The heading of a missile is changed by the action of actuators, which exert forces on control surfaces or on exhaust vanes. Altering missile heading by deflecting control surfaces is called aerodynamic control, whereas altering missile heading by deflecting exhaust vanes or by changing the jet direction is called thrust vector control.
A control surface is not effective until the airflow across the
surface has attained sufficient speed to develop a force. When
missile speed is not high enough during the beginning of
launch, the aerodynamic control is not effective, and its role
is taken over by thrust vector control. The following two sections are dedicated to missile aerodynamic control and missile
thrust vector control.
MISSILE AERODYNAMIC CONTROL
To control a missile accurately via aerodynamic forces, two
general types of control surfaces (i.e., primary and secondary
controls) are used. Primary control surfaces include ailerons,
elevators, rudders, and canards; secondary control surfaces
include tabs, spoilers, and slots. An understanding of missile
aerodynamics is needed before a discussion of how these two
groups of control surfaces work.
Missile Aerodynamics
Missile aerodynamics, like other flight vehicle aerodynamics,
is basically an application of Bernoulli's theorem, which says that if the velocity of air over a surface is increased, the pressure exerted by the air on the surface must decrease, thus
keeping the total energy constant. The top surface of a missile
wing section has a greater curvature than the lower surface.
The difference in curvature of the upper and lower surfaces
builds up the lift force. Air flowing over the top surface of the
wing must reach the trailing edge of the wing in the same
time as the air flowing under the wing. To do this, air passing
over the top surface must move at a greater velocity than air
passing below the wing because of the greater distance the
air must travel via the top surface. The increased velocity
means a corresponding decrease of pressure on the surface
according to Bernoulli's theorem. Therefore, a pressure
differential is created between the upper and lower surface of
the wing, forcing the wing upward and giving it lift. Besides
the wing, any other lifting surfaces and control surfaces of a
missile exhibit exactly the same function.
The three-dimensional motion of a missile can be described
in the body-axis coordinate system as shown in Fig. 2. The
longitudinal line through the center of the fuselage is called
the roll axis (x axis), the line that is perpendicular to the x
axis and parallel to the wings is called the pitch axis (y axis),
and the vertical line is considered as the yaw axis (z axis).
The origin of the body-axis coordinate system (x, y, z) locates
at the center of gravity. The three-dimensional missile motion
can be resolved into two planar motions: pitch plane motion
and yaw plane motion, where pitch plane is normal to the
pitch axis, and yaw plane is normal to the yaw axis. The
angle, measured in the pitch plane, between the projected missile velocity and the roll axis is called the angle of attack (AOA), denoted by α. The angle, measured in the yaw plane, between the projected missile velocity and the roll axis is called the angle of sideslip, denoted by β. The resultant force
on the wing or body can also be resolved into two components:
the component in the pitch plane is called normal force, and
the component in the yaw plane is called side force. The normal force can be further resolved into two components: the component perpendicular to the projected missile velocity (in the pitch plane) is called lift, and the component along the projected missile velocity is called drag. In many tactical missiles (e.g., short-range air-to-air missiles), no separate lift-producing wing is provided; the missile instead holds a suitable AOA in flight, and the lift force is produced by the control fins or stability fins. Some fundamental control-related missile aerodynamics are surveyed in the following list. Readers who are interested in advanced missile aerodynamics can refer to Refs. 3 and 4 for details.

Figure 2. Schematic demonstration of the nomenclature used in missile dynamics. The locations of the primary control surfaces (rudder, elevator, aileron, and canard) and the secondary control surfaces (tabs) are shown. The definitions of the roll, pitch, and yaw motions are also shown.
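The definitions of the angle of attack and the angle of sideslip can be written directly in terms of the body-axis velocity components. The sketch below follows the usual convention (U forward, V to the right, W down); the velocity values are illustrative assumptions.

```python
import math

# Sketch: angle of attack (alpha, pitch plane) and angle of sideslip
# (beta, yaw plane) from body-axis velocity components (U, V, W),
# following the definitions in the text. Velocity values are assumed.

def alpha_beta(u, v, w):
    alpha = math.atan2(w, u)                   # AOA, measured in pitch plane
    beta = math.atan2(v, math.hypot(u, w))     # sideslip, measured in yaw plane
    return alpha, beta

u, v, w = 300.0, 0.0, 30.0          # m/s; mostly forward flight (assumed)
alpha, beta = alpha_beta(u, v, w)
print(round(math.degrees(alpha), 2), round(math.degrees(beta), 2))  # → 5.71 0.0
```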
1. Lift Force. Lift force is the force by which aerodynamic
control surfaces can change the attitude of a missile.
Lift force depends on the contour of a wing, AOA, air
density, area of the wing, and the square of the airspeed. The common equation for lift is given as

L = CL ρ A V^2 / 2    (1)

where L is the lift; CL is the lift coefficient, which depends on the wing contour and the AOA; ρ is the air density; A is the area of the wing; and V is the airspeed.
The lift coefficient CL is determined by wind-tunnel
tests and is plotted versus AOA as a characteristic
curve for the particular airfoil. As the AOA increases,
the lift coefficient increases linearly to a certain maximum value, which is the point where the air no longer
flows evenly over the wing surface but tends to break
away. This breaking away is called the stalling angle.
After the stalling angle is reached, the lifting force is
rapidly lost, as is the airspeed. For a fixed AOA, the lift varies with air density, wing area, and the square of the airspeed.
2. Drag Force. Drag is the air resistance opposing the motion of the missile. Analogously to the lift, the drag is given as

D = CD ρ A V^2 / 2    (2)
where CD is the coefficient of drag obtained from characteristic curves of airfoils via wind-tunnel tests. For a
small AOA, CD changes very little with the AOA. As the
AOA increases, CD increases. The drag coefficient is
usually quite small when compared with the lift coefficient. There are three sources of air drag. The skin friction of air on the wing is called profile drag; the air resistance of the parts of a missile that do not contribute to lift is called parasite drag; and the part of the airfoil drag that results from the generation of lift is called induced drag. CL,
CD, and other aerodynamic coefficients can be evaluated
from empirical techniques, computational fluid dynamics (CFD) modeling, or by the processing of wind tunnel
test data. It should be noted that various degrees of uncertainty are associated with each of these methods,
with wind tunnel measurements usually being accepted
as the most accurate.
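Equations (1) and (2) are straightforward to evaluate once the coefficients are known. The sketch below uses illustrative values for the coefficients, wing area, and flight condition; in practice CL and CD come from the wind-tunnel characteristic curves mentioned above.

```python
# Sketch: lift and drag from Eqs. (1) and (2). Coefficient values, wing
# area, and airspeed are illustrative assumptions; real CL and CD come
# from wind-tunnel characteristic curves versus AOA.

def lift(cl, rho, area, v):
    return cl * rho * area * v**2 / 2.0    # Eq. (1)

def drag(cd, rho, area, v):
    return cd * rho * area * v**2 / 2.0    # Eq. (2)

rho = 1.225      # sea-level air density, kg/m^3
area = 0.5       # lifting surface area, m^2 (assumed)
v = 200.0        # airspeed, m/s (assumed)

L = lift(0.8, rho, area, v)
D = drag(0.05, rho, area, v)
print(round(L, 1), round(D, 1), round(L / D, 1))  # → 9800.0 612.5 16.0
```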
3. Wingtip Vortex. The asymmetric wingtip vortex, which has a remarkable destabilizing effect on roll-yaw motion at high AOA, is always a challenge in missile control system design. As air flows about a wing, the pressure of
[Figure: methods of thrust vector control — jet control by deflection charges that deflect the jet stream, jet vanes, and a gimbaled engine that changes the direction of thrust.]
A wing-control configuration consists of a relatively large all-moving wing located close to the center of gravity of the missile and a set of tail or stabilizing surfaces at the aft end of the missile. This all-moving wing serves as the aforementioned variable-incidence control surface. This type of control is used mostly in air-to-air missiles because of its extremely fast response characteristics. If the right and left moving wings are controlled by separate servos, they can be used as both ailerons and elevators; the word elevon, as mentioned earlier, is applied to such a dual-purpose control surface. There are two main advantages in using the wing-control configuration:
Air Inlet Consideration. Instantaneous lift can be developed as a result of wing deflection via a pivoted mechanism with little increase of missile AOA. This low value
of AOA is advantageous particularly from the standpoints of inlet design for air-breathing power-plant and
guidance-seeker design. For example, if the propulsion
system is a ramjet, the air inlet is likely to choke if the body AOA is large, say 15° or more. The use of wing control can greatly reduce the chance of inlet choke and maintain the engine efficiency by keeping the body AOA
over the aerodynamic lifting surface; consequently, rolling moments are induced on the airframe. Hence, roll
stabilization or control is a critical issue for cruciform
missiles.
2. Monowing. The monowing arrangement is generally used on cruise-type missiles (i.e., missiles designed to cruise over a relatively long range, like crewed aircraft). This type of design is generally lighter and has less
This type of design is generally lighter and has less
drag than the cruciform configuration. The wing area
and span are, however, somewhat larger. Although the
monowing missile must bank to orient its lift vector in
the desired direction during maneuvering flights, the
response time may be sufficiently fast and acceptable
from a guidance-accuracy standpoint. The induced-roll
problem for the monowing configuration is substantially
less severe than that associated with the cruciform configuration. A separate set of lateral control surfaces,
such as flaps, spoilers, and wing-tip ailerons, is generally used in a monowing design. This stems from the
fact that the canard or tail surfaces that are usually
employed for pitch control on monowing design are generally inadequate for lateral control.
3. Triform. This type of wing arrangement, which employs three wings of equal area spaced 120° apart, is
seldom used because no noticeable advantage can be realized. Results of a brief preliminary analysis indicate
that the total wing area of the triform is equal to that
used on a cruciform arrangement and that consequently
no noticeable change in drag may be realized. In addition, little or no weight saving will be realized, even
though one less arrangement or fitting is required because the total load remains the same.
MISSILE CONTROL STRATEGY
Because the missile control system (autopilot) is commanded
by the missile guidance system, the autopilot command structure is dependent on guidance requirements for various mission phases.
Separation (Launch) Phase. A body rate command system is typically used during launch because of its robustness to the uncertain aerodynamics.
Agile Turn. During an agile turn, directional control of
the missile's velocity vector relative to the missile body
is desired. This amounts to commanding AOA or sideslip,
and regulating roll to zero.
Midcourse and Terminal Phases. An acceleration command autopilot is commonly employed in these two
phases.
End of Homing Phase. At the end of terminal homing,
the missile attitude may be commanded to improve the
lethality of the warhead.
Among these four autopilot structures, separation, midcourse,
and endgame autopilots are in general well understood and
have been implemented in production missiles. Autopilot designs for agile turns are significantly less well understood.
Reference 8 gives a detailed discussion of the challenges involved in agile turns, and several solution techniques are provided there.
The existing missile control strategies for these mission phases fall into two major categories: skid-to-turn
(STT) and bank-to-turn (BTT). It is interesting to note that the progress in control strategy for crewed
aircraft is from BTT to direct sideslip control (i.e., STT),
whereas the progress in missile control strategy is from STT
to BTT. The applications and limitations of STT and BTT will
be introduced in the following sections.
Skid-to-Turn Strategy
In STT the missile roll angle may be either held constant or
uncontrolled; in either case, the magnitude and orientation of
the body acceleration vector is achieved by permitting the
missile to develop both an AOA and a sideslip angle. The
presence of the sideslip imparts a skidding motion to the
missile; hence the name skid-to-turn. The STT missile autopilot receives the guidance command interpreted in terms of
the Cartesian system. In the Cartesian system, the missile-guidance system produces two signals, a left-right signal and an up-down signal, which are transmitted to the missile-control system by a wire or radio link to the rudder servos and elevator servos, respectively. If a cruciform missile adopts the STT control strategy, the two servo channels can be made identical
because of the identical pitch and yaw characteristics of a cruciform missile as mentioned earlier. Hence, in STT missiles,
both pitch control and yaw control are called lateral control,
which is different from the definition of aircraft control.
The other control loop of the STT missile is roll control,
which is used to stabilize the missile roll position. For a perfect performance of the STT missile, it is assumed that the
missile will remain in the same roll orientation as at launch
during the whole flight. In this ideal case, up-down signals, if sent to the elevator servos, should then result in a vertical maneuver only; and left-right signals, if sent to the rudder servos, should result in a horizontal maneuver only. However,
a missile, except for a monowing missile, is not designed like
an airplane and there is no tendency to remain in the same
roll orientation. In fact, it will tend to roll for many reasons
such as accidental rigging errors, asymmetrical aerodynamic
loadings, and atmospheric disturbances. Two methods ensure
that left-right commands are performed by the rudder servos and up-down commands are performed by the elevators. The first
method applies a quick roll servo (with bandwidth larger than
that of lateral servos) to stabilize the roll dynamics and to
recover the missile to the original roll orientation. The second
method allows the missile to roll freely but installs a roll gyro
and resolver in the missile to ensure that the commands are
mixed in the correct proportions to the elevators and rudders.
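The second method's command mixing can be sketched as a rotation through the measured roll angle: the resolver apportions the up-down and left-right commands between the elevators and rudders. The sign convention and the 30° roll angle below are assumptions for illustration.

```python
import math

# Sketch of the second method: a free-rolling STT missile uses the roll
# angle measured by a roll gyro to resolve the up-down and left-right
# guidance commands into elevator and rudder commands. The sign
# convention and the 30-degree roll angle are illustrative assumptions.

def resolve_commands(up_down, left_right, roll_angle_rad):
    c, s = math.cos(roll_angle_rad), math.sin(roll_angle_rad)
    elevator = c * up_down + s * left_right    # pitch-channel share
    rudder = -s * up_down + c * left_right     # yaw-channel share
    return elevator, rudder

phi = math.radians(30.0)
elev, rud = resolve_commands(up_down=1.0, left_right=0.0, roll_angle_rad=phi)
print(round(elev, 3), round(rud, 3))  # → 0.866 -0.5
```

At zero roll angle the mixing reduces to the ideal case in the text: up-down commands go entirely to the elevators and left-right commands to the rudders.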
However, roll stabilization (the first method) is generally preferred for the following reasons:
There are many occasions when roll position control is
necessary, for example, to ensure that the warhead or
altimeter always points downward.
If the missile is free to roll, high roll rates may cause
cross-coupling between the pitch and yaw channels and
tend to unstabilize the system.
An STT missile with properly controlled roll motion may
provide the following advantages:
The same degree of vertical and horizontal maneuverability can be achieved.
With STT control it is possible to resolve three-dimensional target and missile motion into two independent
planar motions and to consider the pitch and yaw channels as an independent two-dimensional problem. Hence,
both guidance law and control system design can be done
via two-dimensional analysis. This simplification makes
it possible to apply classic control theory, which treats single-input single-output (SISO) systems, to the missile autopilot design.
Bank-to-Turn Strategy
The concept of BTT stems from the motion of crewed aircraft,
which use ailerons to bank (roll) to the left or right. During a
left or right turn, a small amount of rudder is also applied in
an attempt to make the air flow directly along the longitudinal axis of the aircraft. Hence, in BTT motion, there is no
sideslip and no net side force. From a passenger's point of view, this method of maneuvering is the most comfortable because the total force experienced is always symmetrically
through the seat. When BTT concept is applied to missile control, the missile is rolled first so that the plane of maximum
aerodynamic normal force is oriented to the desired direction
and the magnitude of the normal force is then controlled by
adjusting the pitch attitude (AOA). If we consider the guidance command for an STT missile as being expressed in the Cartesian coordinates (x, y), where x is the right-left command and y is the up-down command, then the guidance command for a BTT missile can be considered as being expressed in the polar coordinates (r, θ), where θ is the angle to roll and r is the distance to be steered in the pitch plane. Therefore, the BTT strategy is sometimes called polar control or twist-and-steer control.
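The Cartesian-to-polar conversion described above can be sketched directly. The sign convention (θ measured from the vertical toward the right) is an assumption for illustration, as are the command values.

```python
import math

# Sketch: converting a Cartesian (x, y) guidance command into the polar
# (r, theta) form used by a BTT autopilot — theta is the angle to roll,
# r the magnitude to steer in the pitch plane. The convention that theta
# is measured from the vertical is an illustrative assumption.

def cartesian_to_polar(x_right, y_up):
    r = math.hypot(x_right, y_up)        # magnitude to pull in pitch plane
    theta = math.atan2(x_right, y_up)    # roll angle from the vertical
    return r, theta

r, theta = cartesian_to_polar(x_right=3.0, y_up=4.0)
print(round(r, 3), round(math.degrees(theta), 2))  # → 5.0 36.87
```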
Although BTT control has been used in crewed aircraft for a long time, interest in BTT missile control began only in the late 1970s. The principal motivation for developing the BTT missile autopilot stems from the successful application of ramjet propulsion technology to missile systems. Several ramjet missiles were developed in the late 1970s, including the ramjet interlab air-to-air technology (RIAAT program, Hughes), the advanced common intercept missile demonstration (ACIMD program, Naval Weapons Center), and the advanced strategic air-launched multi-mission missile (ASALM program, McDonnell Douglas and Martin-Marietta). These BTT programs are thoroughly surveyed in Ref. 9. All these ramjet missile programs require the autopilot to prevent missile maneuvers from shading the inlet (i.e., the AOA needs to be small and positive) and to limit sideslip in order to increase engine efficiency and thereby maximize range. The conventional STT strategy cannot satisfy these limitations on α and β. The applicability of the ramjet missile requires investigation in the following areas:
1. Monowing Configuration. Ramjet missiles have two inlets external to the main body and there is room for
only one pair of wings (i.e., monowing).
2. Variable-Incidence Wing Control. Because the inlets
could accept only a small AOA as a result of interference from the body, the use of variable-incidence wing
control, which can provide instantaneous lift without increasing the AOA of the body, is very suitable for ramjet
engines.
The equations of motion of a missile treated as a rigid body follow from Newton's second law: the time derivatives of the linear momentum and the angular momentum equal the applied force (X, Y, Z) and moment (L, M, N), respectively:

d/dt [mU, mV, mW]^T = [X, Y, Z]^T,    d/dt [Hx, Hy, Hz]^T = [L, M, N]^T    (3)

where (U, V, W) are the body-axis velocity components and (Hx, Hy, Hz) are the components of the angular momentum. Expressed in the rotating body-axis frame with angular rates (P, Q, R), and with a prime denoting the time derivative, the translational equations become

m(U′ + QW − RV) = X
m(V′ + RU − PW) = Y    (4a)
m(W′ + PV − QU) = Z

and the rotational equations, written with the full inertia matrix, become

| Ixx −Ixy −Ixz | | P′ |   |  0 −R  Q | | Ixx −Ixy −Ixz | | P |   | L |
| −Ixy Iyy −Iyz | | Q′ | + |  R  0 −P | | −Ixy Iyy −Iyz | | Q | = | M |    (4b)
| −Ixz −Iyz Izz | | R′ |   | −Q  P  0 | | −Ixz −Iyz Izz | | R |   | N |

For a missile with a monowing configuration, the xz plane is a plane of symmetry. Consequently, Iyz = Ixy = 0 from the definition of the products of inertia. Hence, Eqs. (4) may be simplified as follows:

m(U′ + QW − RV) = X    (5a)
m(V′ + RU − PW) = Y    (5b)
m(W′ + PV − QU) = Z    (5c)
Ixx P′ + QR(Izz − Iyy) − Ixz(R′ + PQ) = L    (5d)
Iyy Q′ + PR(Ixx − Izz) + Ixz(P^2 − R^2) = M    (5e)
Izz R′ + PQ(Iyy − Ixx) + Ixz(QR − P′) = N    (5f)

For a cruciform missile, the xy plane is also a plane of symmetry, so that Ixz = 0 as well, and Eqs. (5) reduce to

m(U′ + QW − RV) = X    (6a)
m(V′ + RU − PW) = Y    (6b)
m(W′ + PV − QU) = Z    (6c)
Ixx P′ = L    (6d)
Iyy Q′ + PR(Ixx − Izz) = M    (6e)
Izz R′ + PQ(Iyy − Ixx) = N    (6f)

Linearizing Eqs. (6) about a trim condition with constant forward speed U0 and neglecting products of the small perturbation rates yields three decoupled sets of equations:

1. Pitch dynamics:
m(W′ − QU0) = Z,    Iyy Q′ = M    (7a)

2. Yaw dynamics:
m(V′ + RU0) = Y,    Izz R′ = N    (7b)

3. Roll dynamics:
Ixx P′ = L    (7c)
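The rotational equations for the cruciform case, Eqs. (6d)-(6f), can be integrated numerically. The sketch below uses a simple forward-Euler step; the inertia values, applied moments, and step size are illustrative assumptions.

```python
# Sketch: forward-Euler integration of the rotational dynamics of a
# cruciform missile, Eqs. (6d)-(6f). Inertias, applied moments, and the
# step size are illustrative assumptions.

def rotational_step(p, q, r, L, M, N, ixx, iyy, izz, dt):
    p_dot = L / ixx                                  # from Eq. (6d)
    q_dot = (M - p * r * (ixx - izz)) / iyy          # from Eq. (6e)
    r_dot = (N - p * q * (iyy - ixx)) / izz          # from Eq. (6f)
    return p + p_dot * dt, q + q_dot * dt, r + r_dot * dt

p = q = r = 0.0
ixx, iyy, izz = 1.0, 50.0, 50.0      # kg*m^2, slender body (assumed)
for _ in range(100):                 # 1 s under a constant pitch moment
    p, q, r = rotational_step(p, q, r, L=0.0, M=10.0, N=0.0,
                              ixx=ixx, iyy=iyy, izz=izz, dt=0.01)
print(round(p, 4), round(q, 4), round(r, 4))  # → 0.0 0.2 0.0
```

With zero roll and yaw moments the gyroscopic coupling terms vanish, and the pitch rate grows linearly at M/Iyy, as Eq. (6e) predicts.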
Over a short flight interval, the aerodynamic force and moment components may be linearized about the trim condition by retaining the first-order terms of a Taylor expansion in the perturbation variables. For the side force, for example,

Y(V, R, δr) = Y(V0, R0, δr0) + (∂Y/∂v)v + (∂Y/∂r)r + (∂Y/∂δr)δr = Y0 + yv v + yr r + yδr δr    (8)

where the lower-case quantities are perturbations from trim and yv, yr, and yδr are aerodynamic derivatives normalized by the mass (the moment derivatives are similarly normalized by the appropriate moment of inertia). Substituting such expansions into Eqs. (7) gives the linearized dynamics of the three channels:

1. Pitch dynamics:

[w′]   [ zw   U0 + zq ] [w]   [ zδe ]
[q′] = [ mw      mq   ] [q] + [ mδe ] δe    (9)

2. Yaw dynamics:

[v′]   [ yv   yr − U0 ] [v]   [ yδr ]
[r′] = [ nv      nr   ] [r] + [ nδr ] δr    (10)

3. Roll dynamics:

p′ = lp p + lδa δa    (11)

where δe, δr, and δa denote the elevator, rudder, and aileron deflections, respectively. The characteristic equation of the open-loop yaw dynamics in Eq. (10), with the small derivative yr neglected, is

s^2 − (yv + nr)s + (yv nr + U0 nv) = 0    (12)
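The open-loop stability implied by Eq. (10) can be checked directly from the aerodynamic derivatives. The sketch below applies the Routh-Hurwitz conditions for a quadratic; the derivative values are illustrative assumptions for a statically stable airframe.

```python
# Sketch: open-loop stability check of the linearized yaw dynamics,
# Eq. (10), with state matrix A = [[yv, yr - U0], [nv, nr]]. The motion
# is stable when both roots of the characteristic equation
# s^2 - (yv + nr)s + (yv*nr - nv*(yr - U0)) = 0 lie in the left half
# plane. The derivative values below are illustrative assumptions.

def yaw_is_stable(yv, yr, nv, nr, u0):
    trace = yv + nr
    det = yv * nr - nv * (yr - u0)
    # Routh-Hurwitz for a quadratic: stable iff trace < 0 and det > 0
    return trace < 0.0 and det > 0.0

# Assumed derivatives for a statically stable airframe at U0 = 300 m/s
print(yaw_is_stable(yv=-2.0, yr=0.0, nv=0.05, nr=-1.5, u0=300.0))  # → True
```

Reversing the sign of nv (a statically unstable airframe) makes the determinant negative and the check fails, which is exactly the situation in which the autopilot must stabilize the closed loop.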
Classic Control Design
Figure 4 depicts the block diagram of a lateral autopilot performing side force control, where a rate gyro measuring yaw rate and an accelerometer measuring side acceleration are used as feedback sensors. The missile's aerodynamic transfer functions in Fig. 4 are obtained from Eq. (10). The controller is of the proportional-plus-integral (PI) form. The problem of autopilot design is to properly design the seven parameters KP, KI, Ka, Kg, ωs, ζs, and Ks such that the actual missile side force y follows the commanded side force yd as quickly as possible. Among the seven parameters, the two controller gains KP and KI can be further tuned to satisfy different flight conditions. The remaining five parameters have fixed values and cannot be tuned on line. The selection of the seven parameters is aided by such tools as root locus, Bode, Nyquist, or Nichols plots, which enable visualization of how the system dynamics are being modified. The performance specifications of the side force response may be given in the frequency domain (e.g., bandwidth and gain/phase margins) or in the time domain (e.g., overshoot, damping ratio, rise time, and settling time).
The classic control design process for a missile autopilot can be summarized in the following steps. Detailed procedures
and practical design examples can be found in Refs. 5 and
11. How aerodynamic derivatives affect the missile autopilot
design is discussed in Ref. 12. A useful review of classically
designed autopilot controllers may be found in Ref. 13, where
the relative merits of proportional and PI autopilot controllers
are discussed and the novel cubic autopilot design is introduced.
1. Based on the system requirements analysis, the designer selects a flight control system time constant, a damping ratio, and an open-loop crossover frequency
that will meet the system requirements for homing accuracy and stability.
2. The autopilot gains are calculated. The gains such as
KP and KI in Fig. 4 are obtained in a variety of linearized flight conditions and must be scheduled by appropriate algorithms to account for the changing environment.
3. A model of the flight control system is developed. Initially the flexible body dynamics are neglected and the
rigid body stability is analyzed to determine if adequate
phase and gain margins have been achieved. If not, the
response characteristics are modified and the design is
iterated.
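Step 2's gain scheduling is, in practice, a table lookup with interpolation between the linearized design points. A minimal sketch (the Mach breakpoints and gain values below are invented for illustration):

```python
import bisect

# Sketch of autopilot gain scheduling: controller gains designed at discrete
# linearized flight conditions are interpolated against a scheduling variable.
# Breakpoints and gain values are illustrative, not taken from the article.
mach_pts = [0.8, 1.2, 2.0, 3.0]
kp_pts   = [2.1, 1.6, 1.1, 0.8]    # proportional gains designed per condition
ki_pts   = [9.0, 7.5, 5.0, 3.5]    # integral gains designed per condition

def schedule(mach, pts, vals):
    """Piecewise-linear interpolation with end-point clamping."""
    if mach <= pts[0]:
        return vals[0]
    if mach >= pts[-1]:
        return vals[-1]
    i = bisect.bisect_right(pts, mach) - 1
    w = (mach - pts[i]) / (pts[i + 1] - pts[i])
    return vals[i] + w * (vals[i + 1] - vals[i])

kp = schedule(1.6, mach_pts, kp_pts)
ki = schedule(1.6, mach_pts, ki_pts)
print(kp, ki)
```

Clamping at the table ends keeps the gains at their nearest designed values when the flight condition leaves the design envelope, a common conservative choice.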
It can be seen that the characteristics of the open-loop responses in Eqs. (12) and (14) are determined by the related aerodynamic derivatives. For example, to ensure that the open-loop yawing motion (i.e., without control) is stable, we must have yv + nr < 0 together with a positive yaw natural frequency squared,

ωn² = yv nr + U0 nv > 0
(14)

If the open-loop motion is unstable or is near the margin of instability, then an autopilot must be installed to form a closed-loop system that integrates missile dynamics, sensor units, controller units, actuator units, and follow-up units into a complete missile control system, as described at the beginning of this article.
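This condition can be checked numerically. The sketch below assumes the standard linearized yaw model v̇ = yv v − U0 r + yζ ζ, ṙ = nv v + nr r + nζ ζ (the small yr term neglected); the derivative values are illustrative, not from the article:

```python
import numpy as np

# Open-loop yaw stability check from the aerodynamic derivatives: with state
# x = [v, r], stability requires trace(A) = yv + nr < 0 and
# det(A) = yv*nr + U0*nv > 0. Derivative values are illustrative.
yv, nr, nv, U0 = -1.5, -0.8, 0.4, 300.0

A = np.array([[yv, -U0],
              [nv,  nr]])
eigs = np.linalg.eigvals(A)
stable = (yv + nr < 0) and (yv * nr + U0 * nv > 0)
print(stable, np.all(eigs.real < 0))   # both tests agree
```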
Figure 4. An autopilot structure performing side force command tracking. Both missile and rudder servos are modeled as second-order dynamics; the gyro and accelerometer are modeled as constant gains; and the controller is of proportional-plus-integral form with tuning gains KP and KI.
Ref. 16. The technique has been applied to the control of the
extended medium-range air-to-air missile in Ref. 17.
LQR Autopilot Design. LQR control theory is a well-established control system design technique (18). The LQR control
gains are all obtained simultaneously from the minimization
of a suitable performance index (usually the integral of a quadratic cost function). The design is synthesized in the time
domain as opposed to the complex frequency domain. Reference 14 demonstrates the effectiveness of LQR design techniques for the missile flight control problem, describing the
application of various LQR formulations to the design of single-plane lateral acceleration autopilot controllers. Reference
19 further considers the advantages obtainable by combining
classical PI and modern LQR methodologies for a multivariable airframe model with high frequency structural modes.
Robust Autopilot Design. Robust control methods provide
the means to design multivariable autopilots that satisfy performance specifications and simultaneously guarantee stability when the missile deviates from its nominal flight condition or is subject to exogenous disturbances. Several investigations have been undertaken specifically to research missile autopilot robustness. Early work was directed toward specific configurations and problems (20), with more recent work using
the robust control system synthesis techniques of quantitative feedback theory (QFT) (21), H∞ control (22), μ-synthesis (23), normalized coprime factor loop-shaping H∞ control (24),
and linear matrix inequality (LMI) self-scheduling control
(25). Research has also been carried out on a number of related ways of assessing the robustness of missile autopilot
controller design (26). A good literature survey on robust autopilot design can be found in Ref. 15. The robust control design is formulated to minimize the following effects:
Parameter Variation. Aerodynamic derivatives, moment
of inertia, and the center of gravity may have significant
variations over the entire missile flight envelope.
Coupling Dynamics. The residual error caused by inexact cancellation in decoupling pitch and roll-yaw dynamics for BTT missiles needs to be addressed.
Unmodeled Dynamics. Most missile autopilot designs consider missile rigid-body dynamics only, and the missile
flexible modes are regarded as unmodeled dynamics. Robust control design allows the unmodeled dynamics to be
taken into account to avoid structural vibration or instability.
Sensor Noises. The autopilot needs to attenuate the effects
caused by sensor noises, calibration errors, drifts, and
parasitic dynamics.
Tracking Error. A successful missile interception depends on the ability of the autopilot to track the guidance
commands. The uncertainties and noises in the seeker
output and in the prediction of target maneuvers may
affect the autopilot tracking performance.
Nonlinear Autopilot Design. Nonlinear control techniques
used in missile autopilot design include feedback linearization (27), variable structure control (VSC) with a sliding mode (28), and nonlinear H∞ control (29). The motivation for nonlinear autopilot design comes from three common kinds of missile nonlinearities: dynamic couplings, nonlinear aerodynamics, and actuator limitations.
Dynamic Couplings. Missile dynamics are coupled kinematically and inertially. The kinematic coupling terms
can be isolated by casting the missile dynamic equations
in the stability axes, whereas the inertial couplings, such
as the roll-yaw coupling into pitch, can be accommodated by the feedback linearization approach because the
extent of coupling is measurable.
Nonlinear Aerodynamics. Nonlinear aerodynamics are
the result of the nonlinear and uncertain characteristics
of the stability coefficients and control coefficients. A
nonlinear control scheduling, as a function of Mach number, AOA, dynamic pressure, and so on, can be designed
to remove control uncertainties caused by nonlinear
aerodynamics and to approximately equalize the control
effectiveness.
Actuator Limitations. The missile control surfaces have limits on the amount of deflection and the deflection rate. To avoid saturating the control surfaces, a command-limiting mechanism designed by dynamic inversion analysis needs to be implemented. Nonlinear
dynamic inversion analysis also leads to an early understanding of design limitations, fundamental feedback
paths, and a candidate feedback control structure. References 30 and 31 discuss some techniques used in nonlinear autopilot design.
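The dynamic-inversion idea underlying several of these points can be illustrated on a hypothetical scalar channel; the nonlinearity, gain, and command below are invented for the sketch:

```python
# Feedback linearization on an invented scalar channel q' = f(q) + g*u:
# choosing u = (v - f(q))/g cancels the nonlinearity f and leaves the linear
# dynamics q' = v, which a simple linear law v = -k*(q - q_cmd) stabilizes.
f = lambda q: -0.7 * q - 0.3 * q * abs(q)   # illustrative nonlinear aerodynamics
g, k, q_cmd, dt = 2.0, 4.0, 1.0, 0.001

q = 0.0
for _ in range(10000):                      # 10 s of simulated flight (Euler)
    v = -k * (q - q_cmd)                    # desired linear dynamics
    u = (v - f(q)) / g                      # inverting (linearizing) control
    q += (f(q) + g * u) * dt
print(q)                                    # settles near q_cmd
```

The inversion is only as good as the model of f and g, which is why the text emphasizes that the extent of coupling must be measurable.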
Adaptive Autopilot Design. Adaptive control systems attempt to adjust on-line to accommodate unknown or changing
system dynamics as well as unknown exogenous system disturbances. There are two general classes of adaptive control
laws: direct and indirect. A relatively simple indirect adaptive
control solution for the autopilot design challenge is gain
scheduled adaptation (32), where the autopilot is designed offline for a number of operating conditions and the required
gains are prestored against related flight conditions. In contrast, direct adaptive controls such as the self-tuning regulator (33) and model reference adaptive control (34) update the
autopilot gains directly on the basis of the history of system
inputs and tracking errors.
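A minimal sketch of the direct-adaptation idea, using the simplest textbook MIT-rule update on a hypothetical first-order plant whose control gain is unknown (all numbers are invented; the plant pole is assumed to match the reference model so that a single feedforward gain suffices):

```python
# Direct model-reference adaptation (MIT rule). The controller adapts the
# feedforward gain theta toward its ideal value bm/b = 2 using only the
# tracking error against the reference model; all numbers are illustrative.
a, b = -4.0, 2.0          # plant y' = a*y + b*u, with b unknown to the controller
am, bm = -4.0, 4.0        # reference model ym' = am*ym + bm*r
gamma, dt = 0.5, 0.001    # adaptation rate, integration step

y = ym = 0.0
theta = 0.1
for step in range(60000):              # 60 s with a square-wave reference
    r = 1.0 if (step // 10000) % 2 == 0 else -1.0
    u = theta * r
    e = y - ym                          # tracking error vs. reference model
    theta -= gamma * e * r * dt         # MIT-rule gradient update
    y  += (a * y + b * u) * dt
    ym += (am * ym + bm * r) * dt
print(theta)   # approaches bm/b = 2
```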
Intelligent Autopilot Design. The missile autopilot design task requires tuning parameters to achieve desirable performance.
By augmenting a neural network in the tuning process, the
parameter adjustment process can be standardized. This can
be done as follows. First, build the desired flying qualities into
the performance model. The autopilot structure is prefixed
with the parameters undetermined. Then by comparing the
actual system performance with the desired flying qualities,
the neural network is trained to learn the rules of tuning.
Accordingly, the autopilot parameters can be updated to meet
the requirements. Application of neural network techniques to missile autopilot design and to future-generation flight control systems was investigated in Refs. 35 and 36.
BIBLIOGRAPHY
1. C. T. Myers, Guided Missiles: Operations, Design and Theory. New York: McGraw-Hill, 1958.
2. B. D. Richard, Fundamentals of Advanced Missiles. New York:
Wiley, 1958.
3. M. R. Mendenhall, Tactical Missile Aerodynamics: Prediction
Methodology. Washington DC: Amer. Inst. Aeronautics and Astronautics, 1992.
4. J. N. Nielsen, Missile Aerodynamics. New York: McGraw-Hill,
1960.
5. P. Garnell, Guided Weapon Control Systems, 2nd ed., Oxford: Pergamon, 1980.
6. W. A. Kevin and B. J. David, Agile missile dynamics and control.
Proc. AIAA Guidance Navigation Control Conf., San Diego, CA,
July 1996.
7. S. S. Chin, Missile Configuration Design. New York: McGraw-Hill, 1961.
8. A. Arrow, Status and concerns for bank-to-turn control of tactical missiles. AIAA J. Guidance, Control, Dynamics, 8 (2): 267-274, 1985.
9. F. W. Riedel, Bank-to-Turn Control Technology Survey for Homing Missiles, NASA CR-3325, 1980.
10. D. E. Williams, B. Friedland, and A. N. Madiwale, Modern control theory for design of autopilots for bank-to-turn missiles, AIAA J. Guidance, Control, Dynamics, 10 (4): 378-386, 1987.
11. J. H. Blakelock, Automatic Control of Aircraft and Missiles. New
York: Wiley, 1991.
12. F. W. Nesline and M. L. Nesline, How autopilot requirements constrain the aerodynamic design of homing missiles. Proc. Amer. Control Conf., 1984, pp. 716-730.
13. M. P. Horton, Autopilots for tactical missiles: an overview. Proc. Inst. Mechanical Eng., Part I, J. Syst. Control Eng., 209 (2): 127-139, 1995.
14. C. F. Lin, Advanced Control System Design. Englewood Cliffs, NJ:
Prentice-Hall, 1991.
15. H. Buschek, Robust autopilot design for future missile systems, Proc. AIAA Guidance, Navigation, and Control Conference, New Orleans, 1997, pp. 1672-1681.
16. B. A. White, Eigenstructure assignment for aerospace applications, in A. J. Chipperfield and P. J. Flemming (eds.), IEE Control
Engineering Series, No. 48, London: Peregrinus, 1993, pp. 179-204.
17. K. Sobel and J. R. Cloutier, Eigenstructure assignment for the extended medium range missile, AIAA J. Guidance, Control, Dynamics, 13 (2): 529-531, 1992.
18. R. E. Kalman, Contributions to the theory of optimal control, Boletin de la Sociedad Matematica Mexicana, 5: 102-119, 1960.
19. F. W. Nesline, B. H. Wells, and P. Zarchan, A combined optimal/classical approach to robust missile autopilot design, AIAA J. Guidance, Control, Dynamics, 4 (3): 316-322, 1981.
20. F. W. Nesline and P. Zarchan, Why modern controllers can go unstable in practice, AIAA J. Guidance, Control, Dynamics, 7 (4): 495-500, 1984.
21. D. G. Benshabat and Y. Chait, Application of quantitative feedback theory to a class of missiles, AIAA J. Guidance, Control, Dynamics, 16 (1): 47-52, 1993.
22. M. J. Ruth, A classic perspective on application of H∞ control theory to a flexible missile airframe, Proc. AIAA Guidance, Navigation Control Conf., Boston, MA, 1989, pp. 1073-1078.
23. R. T. Reichart, Robust autopilot design using μ-synthesis, Proc. Amer. Control Conf., San Diego, CA, 1990, pp. 2368-2373.
24. S. R. Baguley and B. H. White, A study of H∞ robust control for missile autopilot design, Royal Military College of Science, Tech. Rep., Shrivenham, UK.
25. P. Apkarian, J. M. Biannic, and P. Gahinet, Self-scheduled H∞ control of missile via linear matrix inequalities, AIAA J. Guidance, Control, Dynamics, 18 (3): 532-538, 1995.
26. K. A. Wise, Comparison of six robustness tests evaluating missile autopilot robustness to uncertain aerodynamics, AIAA J. Guidance, Control, Dynamics, 15 (4): 861-870, 1992.
27. H. J. Gratt and W. L. McCowan, Feedback linearization autopilot design for the advanced kinetic energy missile boost phase, AIAA J. Guidance, Control, Dynamics, 18 (5): 945-950, 1995.
28. R. D. Weil and K. A. Wise, Blended aero & reaction jet missile autopilot design using VSS techniques, Proc. 30th IEEE Conf. Decision Control, Brighton, UK, 1991, pp. 2828-2829.
29. K. A. Wise and J. L. Sedwick, Nonlinear H∞ optimal control for agile missiles, AIAA J. Guidance, Control, Dynamics, 19 (1): 157-165, 1996.
30. P. K. Menon and M. Yousefpor, Design of nonlinear autopilots for high angle of attack missiles, Proc. AIAA Guidance, Navigation, Control Conf., San Diego, CA, 1996.
31. K. A. Wise and J. L. Sedwick, Nonlinear H∞ optimal control for agile missiles, AIAA-95-3317, Proc. AIAA Guidance, Navigation, Control Conf., Baltimore, 1995, pp. 1295-1307.
32. W. J. Rugh, Analytical framework for gain scheduling, Proc. Amer. Control Conf., San Diego, CA, 1990, pp. 1688-1694.
33. C. F. Price and W. D. Koenigsberg, Adaptive control and guidance for tactical missiles, Reading, MA: Analytical Sci. Corporation.
34. N. D. Porter, Further investigations into an adaptive autopilot control system for a tail controlled missile based on a variation of the model reference technique, Royal Aircraft Establishment, Tech. Memo. DW8, Farnborough, UK.
35. M. B. McFarland and A. J. Calise, Neural-adaptive nonlinear autopilot design for an agile anti-air missile, Proc. AIAA Guidance, Navigation, Control Conf., San Diego, CA, 1996.
36. M. L. Steinberg and R. D. DiGirolamo, Applying neural network technology to future generation military flight control systems, Int. Joint Conf. Neural Netw., 1991, pp. 898-903.
CIANN-DONG YANG
CHI-CHING YANG
HSIN-YUAN CHEN
National Cheng Kung University
MISSILE GUIDANCE
Missile guidance addresses the problem of steering, or
guiding, a missile to a target on the basis of a priori known
target coordinate information and/or real-time target measurements obtained from onboard and/or external sensors.
Lark-Guided Missile
V-1 Buzz Bomb. Powered by a pulse-jet engine, generating 2670 N (600 pounds) of thrust, the V-1 reached
a speed of 322 km per hour (200 miles per hour) and
had a range of about 241 km (150 miles). Weighing 21,138 N (4750 pounds) with an 8900 N (2000
pound) high-explosive warhead, the V-1 was launched
from a long ramp with the aid of a hydrogen peroxide/potassium permanganate-propelled booster motor. A gyroscope, magnetic compass, and a barometric
altimeter were used to correct deviations in altitude
and direction. Despite its 0.8045 km (0.5 mile) accuracy, the V-1 proved very useful as a terror weapon
against large cities. Near impact, the control surfaces
would lock and spoilers would be deployed from the
tail to induce a steep dive. At this point, the pulsejet usually ceased functioning. The eerie silence that
followed warned people below of the impending impact. The V-1 was launched by the thousands against
London and the Belgian port of Antwerp during 1944,
1945. Well over 10,000 V-1s were launched against
Great Britain, in all kinds of weather, by day and
night. Although Royal Air Force pilots had some success in shooting down V-1s, the V-1s proved effective
as terror weapons.
V-2 Rocket. The V-2, which was developed at the
secret Peenemunde
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright 2007 John Wiley & Sons, Inc.
deployed in 1979 and 1988, respectively. Both accommodate nuclear MIRVs and are deployed in Ohio-class (Trident) submarines, each carrying 24 missiles with eight 100
kiloton warheads per missile. Trident II missiles weigh
roughly 65 tons and are about 44 feet long and 7 feet
wide. For comparison's sake, it is worth noting that the bomb dropped on Hiroshima on August 6, 1945 (designated Little Boy) was an 8,900 lb, 10 feet long, 2.33 feet diameter, 13-16 kiloton uranium-235-based gun-type fission weapon. Similarly, the bomb dropped on Nagasaki three days later (designated Fat Man) was a 10,800 lb, 10.67 feet long, 5 feet diameter, 21 kiloton plutonium-239-based implosion-type fission weapon.
Nuclear Non-Proliferation: SALT, ABM, and MAD
The first major Nuclear Non-Proliferation Treaty (NNPT)
opened for signature on July 1, 1968. In addition to addressing which nations could rightfully possess nuclear weapons and relevant nuclear proliferation issues, it addressed disarmament and stockpile reduction as well as
the peaceful use of nuclear technology (i.e., energy generation). The treaty is revisited periodically by participating
states. Because of the large number of Soviet nuclear warheads during the Cold War, some in the United States felt
that U.S. ICBM fields were threatened. On March 14, 1969, President Nixon announced his decision to deploy a missile defense system (called Safeguard) to protect U.S. ICBM fields from attack by Soviet missiles. This decision initiated
intense strategic arms negotiations between the United
States and the Soviet Union. The Strategic Arms Limitation Talks (SALT), between the United States and the Soviet Union, led to a 1971 agreement fixing the number of
ICBMs that could be deployed by the two nations. The Antiballistic Missile (ABM) Treaty, signed by the U.S. and the Soviet Union on May 26, 1972, was designed to implement the doctrine of mutually assured destruction (MAD). MAD
was intended to discourage the launching of a first strike by
the certainty of being destroyed by retaliation. The treaty
prohibits/limits deployment of certain sea-, air-, and space-based missiles and sensors. A key motivation behind these
arrangements was to perpetuate the existing balance of
power and avoid the economic chaos that would result from
a full-scale arms race. In 1976, in view of technical limitations imposed by the ABM treaty, the U.S. Congress ordered the closing of Safeguard only four months after it became operational. In 2001, the ABM treaty came under attack
in the U.S. Congress as the United States and Russia (former Soviet Union) discussed how to differentiate between
theater and strategic missile defenses.
BMD and SDI
In 1983, President Reagan initiated the Ballistic Missile
Defense (BMD) program under the Strategic Defense Initiative (SDI). SDI would focus on space-based defense research. Because SDI deployment would contravene the
ABM treaty, many critics felt SDI, with its potential offensive use, would escalate the arms race. In 1984, the Strategic Defense Initiative Organization (SDIO) was formed. In
1987, Judge Abraham D. Sofaer, State Department Legal
Advisor, concluded that the ABM treaty did not preclude
lished the Joint Program Office (JPO) for the National Missile Defense (NMD). On June 24, 1997, the first NMD flight test was successfully completed. During this test an Exoatmospheric Kill Vehicle (EKV) sensor was used to identify and track objects in space. In 2007, Lockheed Martin is expected to begin flight testing of a THAAD system at the Pacific Missile Range (Kauai, Hawaii). To appreciate the
formidable problems associated with developing a THAAD
system, it is necessary to understand the issues associated
with the design of missile guidance systems. These issues
will be addressed in subsequent sections.
MISSILE GUIDANCE, NAVIGATION, AND CONTROL
SUBSYSTEMS
We begin our technical discussion by describing the subsystems that make up a missile system. In addition to a
warhead, a missile contains several key supporting subsystems. These subsystems may include 1) a target-sensing
system, 2) a missile-navigation system, 3) a guidance system, 4) an autopilot or control system, and 5) the physical
missile (including airframe and actuation subsystem); see
Fig. 1.
Target-Sensing System
The target-sensing system provides target information to
the missile guidance system, e.g. relative position, velocity,
line-of-sight angle, and rate. Target-sensing systems may
be based on several sensors, e.g., radar, laser, heat, acoustic,
or optical sensors. Optical sensors, for example, may be as
simple as a camera for a weapon systems officer (WSO)
to visualize the target from a remote location. They may
be a sophisticated imaging system (see below). For some
applications, target coordinates are known a priori (e.g., via
satellite or other intelligence) and a target sensor becomes
irrelevant.
Navigation System
A navigation system provides information to the missile guidance system about the missile position in space relative to some inertial frame of reference, e.g., a flat-Earth constant-gravity model for short-range flights and a rotating-Earth variable-gravity model for long-range flights. To do so, it may use information obtained from a
variety of sensors, which may include simple sensors such
as accelerometers or a radar altimeter. It may include more
sophisticated sensors such as a global positioning system
(GPS) receiver or an optical terrain sensor that relies on
Strategic Missiles
Strategic missiles are used primarily against strategic
targets, that is, resources that permit an enemy to conduct large-scale military operations (e.g., battle management/command, control, and communication centers; industrial/weapons manufacturing centers; and so on). Such
targets are usually located far behind the battle line.
As such, strategic missiles are typically designed for
long-range missions. Although such missiles are usually
launched from naval vessels or from missile silos situated below ground, they are sometimes launched from aircraft (e.g., strategic bombers). Because such missiles are
intended to eliminate the most significant military targets, they typically carry nuclear warheads rather than conventional warheads. Strategic missiles typically operate at orbital speeds (about 5 miles per second), outside the
atmosphere, and over intercontinental distances. They use
rockets/thrusters/fuel and require very precise instrumentation for critical mid-course guidance. GPS has made such
systems very accurate.
Tactical Missiles
Tactical missiles are used primarily against tactical targets, that is, resources that permit an enemy to conduct
small-scale military operations (for example, a ship, an airfield, and a munitions bunker). Such targets are usually located near the battle line. As such, tactical missiles are typically designed for short- or medium-range
missions. Such missiles have generally carried conventional explosive warheads, the size of which depends on
the designated target. Tactical missiles sometimes carry
nuclear warheads in an effort to deter the use of tactical nuclear/chemical/biological weapons and to engage the most
hardened targets (e.g., enemy nuclear strategic missile silos). Tactical missiles typically operate at lower speeds (<1 mile per second), inside the atmosphere, and over short-to-medium distances (e.g., 150 miles). They typically use aerodynamic control surfaces (discussed below) and require adequate instrumentation for mid-course and terminal guidance. A target sensor (e.g., radar seeker) permits such missiles to engage mobile and highly maneuverable targets.
Exoatmospheric Missiles
Exoatmospheric missiles fly their missions mostly outside the Earth's atmosphere. Such missiles are used against long-range strategic targets. Because they fly outside the atmosphere, thrusters are required to change direction.
Such thrusters use onboard fuel. To maximize warhead
size, and because missile weight grows exponentially with
fuel weight, it is important that guidance and control systems for long-range missiles (e.g., strategic and exoatmospheric) provide for minimum fuel consumption.
Endoatmospheric Missiles
Endoatmospheric missiles fly their missions inside the Earth's atmosphere. Such missiles are used against strategic and tactical targets. In contrast to exoatmospheric missiles, endoatmospheric missiles may use movable control surfaces such as fins (called aerodynamic control surfaces),
anti-armored vehicle AGM-65 Maverick. Other ASM systems include the Advanced Medium-Range Air-to-Air Missile (AIM-120 AMRAAM) and the airborne laser (ABL) system being developed by several defense contractors. The
ABL system has been considered for boost-phase intercepts
during which the launched missile has the largest thermal
signature and is traveling at its slowest speed.
Air-to-Air Missiles (AAMs)
AAMs are launched from aircraft against aircraft, ballistic
missiles, and most recently against tactical missiles. Such
missiles are typically light, highly maneuverable, tactical
weapons. AAMs are generally smaller, lighter, and faster
than ASMs because ASMs are typically directed at hardened, less-mobile, targets. Some SAMs and ASMs are used
as AAMs and vice versa. Examples of AAMs are the AIM-7 Sparrow, AIM-9 Sidewinder, AIM-54 Phoenix, and the AIM-120A AMRAAM.
Guidance Methods: Fixed Targets with Known Fixed
Positions
A missile may be guided toward a target having a known fixed position using a variety of guidance methods and/or navigational aids, e.g., inertial, terrain, stellar, and satellite guidance and navigation.
Inertially Guided Missiles. Inertially guided missiles use missile spatial navigation information relative to some inertial frame of reference to guide a missile to its designated target. For short-range missions, one may use a flat-Earth constant-gravity inertial frame of reference. This approach is not appropriate for long-range missions, approaching intercontinental distances, for which the Earth may not be treated as flat. For such missions, the sun or stars provide an inertial frame of reference. One can also use an Earth-centered variable-gravity frame. Position information is typically obtained by integrating acceleration information obtained from accelerometers or by pattern-matching algorithms exploiting imaging systems. Because accelerometers are sensitive to gravity, they must be mounted in a fixed position with respect to gravity. Typically, accelerometers are mounted on platforms that are stabilized by gyroscopes or star-tracking telescopes. Terrain and stellar navigation systems are examples of imaging systems. Satellite-navigated missiles use satellites for navigation. Some satellite-guided missiles use the Navstar GPS, a constellation of orbiting navigation satellites, to navigate and guide the missile to its target. GPS has increased accuracy (reduced miss distance) significantly.
Guidance Methods: Mobile Targets with Unknown
Positions
If the target position is not known a priori, the aforementioned methods and aids may be used in part, but
other real-time target acquisition, tracking, navigation,
and guidance mechanisms are required. The most com-
Cruise Missiles
Cruise missiles are typically SSMs that use inertial and
terrain-following navigation/guidance systems while cruising toward the target. When near the target, endgame guidance is accomplished by homing in on 1) target emitted/reflected energy, 2) a target feature, by exploiting a forward-looking imaging system and an onboard stored image, or 3) a more detailed terrain contour, using a more accurate downward-looking sensor. Cruise missiles offer the ability to destroy heavily defended targets without risking air crew. Because they are small, they are difficult to detect on radar, particularly when they hug the
terrain. Examples of cruise missiles are the AGM-86, Tomahawk (9), and Harpoon. The Tomahawk uses TERCOM guidance during the cruise phase. For terminal guidance,
a conventionally armed Tomahawk uses an electro-optical
Digital Scene-Matching Area Correlator (DSMAC) guidance system that compares measured images with stored
images. This technique is often referred to as an offset navigation or guidance technique. At no time during the terminal scene-matching process does the missile look at the
target. Its sensor always looks down. DSMAC makes Tomahawk one of the most accurate weapon systems in service
around the world.
Skid-to-Turn and Bank-to-Turn Missiles
Skid-to-turn (STT) missiles, like speed boats, skid to turn.
Bank-to-turn (BTT) missiles, like airplanes, bank to turn
(5, 10-16). BTT airframe designs offer higher maneuverability than conventional STT designs by use of an asymmetrical shape and/or the addition of a wing. BTT missile autopilots are more difficult to design than STT autopilots because of cross-coupling issues. STT missiles achieve velocity vector control by permitting the missile to develop angle-of-attack and side-slip angles (5). The presence of side-slip imparts a skidding motion to the missile. BTT
missiles ideally should have no side-slip. To achieve the
desired orientation, a BTT missile is rolled (banked) so
that the plane of maximum aerodynamic normal force is
oriented to the desired direction. The magnitude of the
force is controlled by adjusting the attitude (i.e., angle-ofattack) in that plane. BTT missile control is made more difcult by the high roll rates required for high performance
(i.e., short response time) (4). STT missiles typically require pitch-yaw acceleration guidance commands, whereas
BTT missiles require pitch-roll acceleration commands. An
overview of tactical missile control design issues and approaches is provided in Reference 17.
GUIDANCE ALGORITHMS
In practice, many guidance algorithms are used (4, 8, 18-20). The purpose of a guidance algorithm is to update the missile guidance commands that will be issued to the autopilot. This update is performed on the basis of missile and target information. The goal of any guidance algorithm is to steer the missile toward the target so that an intercept results within an allotted time period (that is, before the fuel runs out or the target is out of range). The most common algorithms are characterized by the following terms: proportional navigation, augmented proportional navigation, and optimal (8, 20). To simplify the mathematical details of the exposition to follow, suppose that the
missile-target engagement is restricted to the pitch plane
λ̇(t) = [Vt sin(λ(t) − γt(t)) + Vm sin(λ(t) − γm(t))] / R(t)
(1)

where R(t) is the missile-target range, λ(t) the line-of-sight angle, and γt(t), γm(t) the target and missile flight-path angles.
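This planar line-of-sight geometry can be simulated directly. The sketch below integrates the engagement kinematics and applies the proportional navigation law ac = N Vc λ̇ discussed below; all initial conditions and gains are invented for illustration:

```python
import math

# Planar missile-target engagement under proportional navigation (PNG),
# ac = N*Vc*lambda_dot. All numbers are illustrative, not from the article.
N, dt = 4.0, 0.001
xm, ym, Vm, gm = 0.0, 0.0, 300.0, 0.0           # missile position, speed, heading
xt, yt, Vt, gt = 2000.0, 500.0, 100.0, math.pi  # target flies in -x direction

miss = float("inf")
for _ in range(20000):                           # up to 20 s of flight
    Rx, Ry = xt - xm, yt - ym
    R = math.hypot(Rx, Ry)
    miss = min(miss, R)
    if R < 1.0:
        break
    vx = Vt * math.cos(gt) - Vm * math.cos(gm)   # relative velocity
    vy = Vt * math.sin(gt) - Vm * math.sin(gm)
    lam_dot = (Rx * vy - Ry * vx) / (R * R)      # line-of-sight rate
    Vc = -(Rx * vx + Ry * vy) / R                # closing velocity
    gm += (N * Vc * lam_dot / Vm) * dt           # heading rate = ac / Vm
    xm += Vm * math.cos(gm) * dt; ym += Vm * math.sin(gm) * dt
    xt += Vt * math.cos(gt) * dt; yt += Vt * math.sin(gt) * dt

print("miss distance:", miss)
```

With no autopilot lag or acceleration limit in the sketch, the commanded turn drives the line-of-sight rate toward zero and the miss distance to nearly nothing.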
that the Stinger is an example of a fire-and-forget supersonic SAM that uses PNG with passive IR/UV homing.
1. PNG Performance: Non-maneuvering Target,
Heading Error. First, consider the impact of a heading error on PNG missile acceleration requirements
when the target moves at a constant speed in a straight
line. Under the simplifying assumptions given above,
the resulting commanded acceleration is as follows:
Vm HEN
t
acPNG (t) =
1
tf
tf
N2
(4)
This expression shows that PNG immediately begins removing any heading error and continues doing so throughout the engagement. The acceleration
requirement decreases monotonically from acPNGmax =
Vm HEN
acPNG (0) =
to zero as the ight progresses. A
tf
larger N results in a larger initial missile acceleration
requirement, but a lesser endgame missile acceleration
requirement. The larger the N, the faster the heading
error is removed.
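The behavior of Eq. (4) is easy to verify numerically (the speed, heading error, and flight time below are illustrative):

```python
# Numerical check of the PNG heading-error acceleration requirement, Eq. (4):
# ac(t) = (Vm*HE*N/tf) * (1 - t/tf)**(N - 2). Values are illustrative.
Vm, HE, tf = 900.0, 0.1, 10.0   # speed (m/s), heading error (rad), flight time (s)

def ac_png(t, N):
    return (Vm * HE * N / tf) * (1.0 - t / tf) ** (N - 2)

for N in (3, 4, 5):
    print(N, ac_png(0.0, N), ac_png(0.5 * tf, N), ac_png(tf, N))
# The initial requirement Vm*HE*N/tf grows with N, but decays to zero faster.
```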
2. PNG Performance: Target Undergoing Constant
Acceleration. Now, consider the impact of a constant
target acceleration at on PNG missile acceleration requirements. Under the simplifying assumptions given
above, the resulting commanded acceleration is as follows:
acPNG(t) = [N/(N − 2)][1 − (1 − t/tf)^(N−2)] at
(5)
acPNG(t) = N ZEMPLOS(t)/tgo²
(6)
ÿ + [N/(tf − t)] ẏ + [N/(tf − t)²] y = at,   y(tf) = 0
(7), (8)
ω ≜ am/Vm = g Q Sref CL/(W Vm)
(9)

where the lift coefficient CL is referenced to Sref and includes the body lift contribution through the area ratio Splan/Sref.
From this, it follows that ω decreases with increasing missile altitude and with decreasing missile speed Vm.
am = FG[λ̇ − R A am],   i.e.,   am = FG λ̇/(1 + FG R A)
(11)

where FG denotes the guidance-control-seeker dynamics, R the radome boresight-error slope, and A the body-attitude response per unit achieved acceleration. The associated parasitic loop transfer is

L = FG R A
(12)

From the above, it follows that we require the guidance-control-seeker bandwidth ω̄g to satisfy

ω << ω̄g < Vm/(|R| N Vc)
(13), (14)
When ω is small (e.g., at high altitudes or low speeds), designers make the guidance-control-seeker bandwidth small but sufficiently large to accommodate missile maneuverability (i.e., satisfy the lower inequality). In such a case, radome effects are small and the guidance loop remains stable, yielding zero miss distance after a sufficiently long flight. One can, typically, improve homing performance by increasing ω̄g and N. If they are increased too much, radome effects become significant, miss distance can be high, and guidance loop instability can set in.
When ω is large (e.g., at low altitudes or high speeds), designers would still like to make the guidance-control-seeker bandwidth sufficiently large to accommodate missile maneuverability (i.e., satisfy the lower inequality). This generally can be accomplished provided that radome effects are not too significant. Radome effects will be significant if Vm is too small, (|R|, N, Vc) are too large, or ω is too small (i.e., too high an altitude and/or too low a missile speed Vm).
Given the above, it therefore follows that designers are
generally forced to trade off homing performance (bandwidth) for stability robustness properties. Missiles using
thrust vectoring (e.g., exoatmospheric missiles) experience
similar performance-stability robustness trade-offs.
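As a rough numerical sketch of this trade-off, the snippet below evaluates a bandwidth window of the form ᾱ < ω < ᾱ·Vm/(|R|·N·Vc), consistent with the qualitative discussion above; all numbers are illustrative assumptions, not values from the article:

```python
# Sketch: an illustrative radome-slope bandwidth window.
# All values are assumptions chosen only to show the direction of the trade-off.
Vm, Vc = 3000.0, 4000.0      # missile and closing speeds (ft/s)
N = 4.0                      # navigation ratio
R = 0.02                     # radome slope magnitude |R|
alpha_bar = 2.0              # airframe turning-rate bandwidth (rad/s)

lower = alpha_bar
upper = alpha_bar * Vm / (abs(R) * N * Vc)
print(f"choose omega in ({lower:.1f}, {upper:.2f}) rad/s")
# A usable window exists only when |R|*N*Vc/Vm < 1; raising |R|, N, or Vc,
# or lowering Vm, shrinks the window and forces the performance/robustness
# trade-off described in the text.
```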
Augmented Proportional Guidance (APNG)
Advanced guidance laws reduce acceleration requirements and miss distance but require more information (e.g., time-to-go and missile-target range) (19). In an attempt to take into account a constant target acceleration maneuver a_t, guidance engineers developed augmented proportional navigation guidance (APNG). For APNG, the commanded acceleration is given by
$$ a_{c_{APNG}}(t) = N V_c \dot\lambda(t) + \frac{1}{2} N a_t = a_{c_{PNG}}(t) + \frac{1}{2} N a_t \qquad (15) $$

Equivalently, $a_{c_{APNG}} = N\,ZEM/t_{go}^2$, where $ZEM = y + \dot{y}\,t_{go} + \tfrac{1}{2} a_t t_{go}^2$ is the associated zero-effort miss distance. Equation (15) shows that APNG is essentially PNG with an extra term to account for the maneuvering target. For this guidance law,
$$ a_{c_{APNG}}(t) = \frac{N}{2}\left(1 - \frac{t}{t_f}\right)^{N-2} a_t \qquad (16) $$
In contrast with PNG, this expression shows that the resulting APNG acceleration requirements decrease with
time rather than increase. From the expression, it follows that increasing N increases the initial acceleration
requirement but also reduces the time required for the
acceleration requirements to decrease to negligible levels. For N = 4, the maximum acceleration requirement for APNG, $a_{c_{APNG}\max} = \tfrac{1}{2}Na_t$, is equal to that for PNG, $a_{c_{PNG}\max} = [N/(N-2)]a_t$. For N = 5, APNG requires a larger maximum acceleration but less acceleration than PNG for $t \ge 0.2632\,t_f$. As a result, APNG is more fuel efficient for exoatmospheric applications than PNG. Finally, it should be noted that APNG minimizes $\int_0^{t_f} a_c^2(\tau)\,d\tau$ subject to zero miss distance, linear dynamics, and constant target acceleration (8).
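The PNG/APNG comparison can be checked numerically. The sketch below assumes the standard closed-form profiles against a constant target acceleration, with normalized a_t = 1 and t_f = 1:

```python
# Sketch comparing PNG and APNG acceleration requirements against a constant
# target acceleration at; at = 1 and tf = 1 are normalized for illustration.

def a_png(t, N, at=1.0, tf=1.0):
    """PNG requirement: (N/(N-2)) * (1 - (1 - t/tf)**(N-2)) * at."""
    return (N / (N - 2)) * (1.0 - (1.0 - t / tf) ** (N - 2)) * at

def a_apng(t, N, at=1.0, tf=1.0):
    """APNG requirement: 0.5 * N * (1 - t/tf)**(N-2) * at."""
    return 0.5 * N * (1.0 - t / tf) ** (N - 2) * at

N = 5
print(a_png(1.0, N))    # PNG peaks at t = tf:  N/(N-2) * at
print(a_apng(0.0, N))   # APNG peaks at t = 0:  N/2 * at
# The two profiles cross near t = 0.2632*tf; APNG demands less thereafter.
print(abs(a_png(0.2632, N) - a_apng(0.2632, N)) < 1e-3)
```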
PNG Command Guidance Implementation
To implement PNG in a command guidance setting (i.e., no seeker), a differentiating filter must be used to estimate the LOS rate. As a result, command guidance is more susceptible to noise than homing guidance. This issue is exacerbated as the engagement takes place farther from the tracking station: noise increases and guidance degrades. In Reference 25, the authors address command-guided SAMs by spreading the acceleration requirements over t_go. The method requires estimates for target position, velocity, acceleration, and t_go, but takes into account the nonlinear engagement geometry.
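A minimal sketch of such a differentiating filter is shown below: a first-order "dirty derivative" estimates the LOS rate from noisy LOS-angle samples. The sample time, filter time constant, true LOS rate, and noise level are illustrative assumptions:

```python
# Sketch: estimating LOS rate from noisy LOS-angle measurements with a
# first-order differentiating filter, s/(tau*s + 1), discretized with a
# backward difference. tau, dt, the true rate, and the noise level are
# all illustrative assumptions.
import random

random.seed(1)
dt, tau = 0.01, 0.1
true_rate = 0.02                     # rad/s, constant LOS rate for the test
lam_prev, rate_est = 0.0, 0.0
estimates = []
for k in range(1, 2001):
    lam = true_rate * k * dt + random.gauss(0.0, 1e-4)   # noisy LOS angle
    alpha = tau / (tau + dt)
    rate_est = alpha * rate_est + (1 - alpha) * (lam - lam_prev) / dt
    lam_prev = lam
    estimates.append(rate_est)

print(sum(estimates[-500:]) / 500)   # settles near the true 0.02 rad/s
```

The filter trades lag for noise rejection: a larger tau smooths the differentiated noise further but slows the response to genuine LOS-rate changes.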
Advanced Guidance Algorithms
Classic PNG and APNG were initially based on intuition. Modern or advanced guidance algorithms exploit optimal control theory, i.e., optimizing a performance measure subject to dynamic constraints. Even simple optimal control formulations of a missile-target engagement (e.g., quadratic acceleration measures) lead to a nonlinear two-point boundary value problem requiring creative solution techniques, e.g., approximate solutions to the associated Hamilton-Jacobi-Bellman equation, a formidable nonlinear partial differential equation (23). Such a formulation remains somewhat intractable given today's computing power, even for command guidance implementations that can exploit powerful remotely situated computers. As a result, researchers have sought alternative approaches to design advanced (near-optimal) guidance laws. In Reference 20, the authors present a PNG-like control law that optimizes square-integral acceleration subject to zero miss distance in the presence of a one-pole guidance-control-seeker system.
Even for advanced guidance algorithms (e.g., optimal guidance methods), the effects of guidance and control system parasitics must be carefully evaluated to ensure nominal performance and robustness (20). Advanced (optimal)
Variants of PNG
Within Reference 20, the authors compare PNG, APNG,
and optimal guidance (OG). The zero miss distance (stability) properties of PPNG are discussed within Reference
24. A nonlinear PPNG formulation for maneuvering targets is provided in Reference 27. Closed form expressions
for PPNG are presented in Reference 28. A more complex
version of PNG that is quasi-optimal for large maneuvers (but requires tgo estimates) is discussed in Reference
29. Two-dimensional miss distance analysis is conducted
in Reference 21 for a guidance law that combines PNG and
pursuit guidance. Within Reference 30, the authors extend
PNG by using an outer LOS rate loop to control the terminal geometry of the engagement (e.g., approach angle).
Generalized PNG, in which acceleration commands are issued normal to the LOS with a bias angle, is addressed
in Reference 31. Three-dimensional (3D) generalized PNG
is addressed within Reference 32 using a spherical coordinate system fixed to the missile to better accommodate
the spherical nature of seeker measurements. Analytical
solutions are presented without linearization. Generalized
guidance schemes are presented in Reference 33, which result in missile acceleration commands rotating the missile
perpendicular to a chosen (generalized) direction. When
this direction is appropriately selected, standard laws result. Time-energy performance criteria are also examined.
Capturability issues for variants of PNG are addressed in
Reference 34 and the references therein. Within Reference
35, the authors present a 2-D framework that shows that
many developed guidance laws are special cases of a general law. The 3-D case, using polar coordinates, is considered in Reference 36.
To develop useful estimation techniques, much attention has been placed on modeling the target. Initially, researchers used simple uncorrelated target acceleration models. This, however, yielded misleading results, which led to the use of simple dynamical models: point mass and more complex. Both Cartesian and spherical coordinate (47) formulations have been investigated, the latter better reflecting the radial nature of an engagement. Single and multiple-model EKFs have been used (48) to address the fact that no single model captures the dynamics that may arise. Low-observability LOS measurements make the problem particularly challenging (48). Target observability is explored in Reference 49 under PNG and noise-free angle-only measurements in 2-D. A method for obtaining the estimates required for APNG (e.g., y, ẏ, a_t, t_go) is presented in Reference 50. As no single (tractable) model and statistics can accurately capture the large set of possible maneuvers by today's modern tactical fighters, adaptive filtering techniques have been employed. Such filters attempt to adjust the filter bandwidth to reflect the target maneuver. Some researchers have used classic Neyman-Pearson hypothesis testing to detect bias in the innovations and appropriately reinitialize the filter. Threshold levels must be judiciously selected to avoid false detections that result in switching to an inappropriate estimator.
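The innovation-based detection idea can be sketched as follows. A scalar constant-velocity Kalman filter tracks target position, and a sliding-window mean of normalized innovations is tested against a threshold. The noise levels, window length, threshold, and maneuver profile are all illustrative assumptions, not values from the literature cited above:

```python
# Sketch: flagging a target maneuver from a bias in Kalman-filter
# innovations. All parameter values are illustrative assumptions.
import random

def detect(seed=0, dt=0.1, q=0.5, r=1.0, threshold=2.0):
    random.seed(seed)
    x = [0.0, 0.0]                         # estimated [position, velocity]
    P = [[10.0, 0.0], [0.0, 10.0]]
    pos, vel, acc = 0.0, 5.0, 0.0
    window = []
    for k in range(400):
        if k == 200:
            acc = 30.0                     # target begins a hard maneuver
        vel += acc * dt
        pos += vel * dt
        z = pos + random.gauss(0.0, r ** 0.5)
        # Predict with a constant-velocity model (process noise q per state).
        x = [x[0] + x[1] * dt, x[1]]
        P = [[P[0][0] + 2*dt*P[0][1] + dt*dt*P[1][1] + q,
              P[0][1] + dt*P[1][1]],
             [P[0][1] + dt*P[1][1], P[1][1] + q]]
        # Update with the position measurement; nu is the innovation.
        S = P[0][0] + r
        nu = z - x[0]
        K = [P[0][0] / S, P[0][1] / S]
        x = [x[0] + K[0] * nu, x[1] + K[1] * nu]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[0][1] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        # Sliding-window mean of normalized innovations as the test statistic.
        window = (window + [nu / S ** 0.5])[-20:]
        if k > 50 and abs(sum(window) / len(window)) > threshold:
            return k                       # maneuver flagged at this step
    return None

flagged = detect()
print("maneuver flagged at step", flagged)
```

The threshold sets the false-alarm/detection-delay trade mentioned in the text: lowering it flags maneuvers sooner but risks switching to an inappropriate estimator on noise alone.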
Long-Range Exoatmospheric Missions: Weight
Considerations
For long-range exoatmospheric missions approaching intercontinental ranges, orbital speeds are required (e.g., 20,000 ft/s, about 13,600 miles/hour or 4 miles/second). To study such interceptors, two new concepts are essential. Fuel-specific impulse, denoted I_sp, is defined as the ratio of thrust to the time rate of change of total missile weight. It corresponds to the time required to generate a weight-equivalent amount of thrust. Fuel-efficient missiles have higher fuel-specific impulses. Typical tactical missile fuel-specific impulses lie in the range of 200 to 300 seconds. Fuel-mass fraction, denoted mf, is defined as the ratio of propellant weight W_prop to total weight W_T = W_prop + W_structure + W_payload. SAMs, for example, have a larger fuel-mass fraction than AAMs because SAMs must travel through the denser air at lower altitudes. For fuel-specific impulses less than 300 seconds, large fuel-mass fractions (approaching 0.9) are required for exoatmospheric applications. A consequence is that it takes considerable total booster weight to propel even small payloads to near-orbital speeds. More precisely, it can be shown (8) that the weight of the propellant required for a single-stage booster to impart a speed change ΔV to a payload weighing W_payload is given by

$$ W_{prop} = W_{payload}\,\frac{mf\left[\exp\!\left(\frac{\Delta V}{g\,I_{sp}}\right) - 1\right]}{1 - (1 - mf)\exp\!\left(\frac{\Delta V}{g\,I_{sp}}\right)} \qquad (17) $$
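Equation (17) can be exercised numerically. The sketch below uses illustrative values (a 100 lb payload, I_sp = 250 s) and also shows that a single stage becomes infeasible when the fuel-mass fraction is too small for the desired ΔV, which is why staging is used:

```python
# Sketch: single-stage propellant weight from Eq. (17). The payload weight,
# delta-V, and Isp values below are illustrative assumptions.
import math

def propellant_weight(W_payload, dV, Isp, mf, g=32.2):
    """Eq. (17): propellant weight (lb) for a speed change dV (ft/s)."""
    x = math.exp(dV / (g * Isp))
    denom = 1.0 - (1.0 - mf) * x
    if denom <= 0.0:
        raise ValueError("single stage infeasible for this delta-V")
    return W_payload * mf * (x - 1.0) / denom

# mf = 0.9 cannot reach 20,000 ft/s in one stage with Isp = 250 s
# (the maximum is g*Isp*ln(1/(1-mf)) ~ 18,500 ft/s) ...
try:
    propellant_weight(100.0, 20000.0, 250.0, 0.90)
except ValueError as e:
    print(e)
# ... while mf = 0.95 can, at the cost of thousands of pounds of propellant
# for a 100 lb payload:
print(round(propellant_weight(100.0, 20000.0, 250.0, 0.95)))
```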
For large ΔV, the required propellant weight is many times that of the payload W_payload. Staging can be used to reduce total booster weight for a given fuel-specific impulse I_sp and (approximate) fuel-mass fraction mf. Efficient propellant expenditure for exoatmospheric intercepts is addressed within Reference 51. Three-dimensional midcourse guidance for SAMs intercepting nonmaneuvering high-altitude ballistic targets is addressed within Reference 52. Neural networks are used to approximate (store) optimal vertical guidance commands and estimate t_go. Feedback linearization (39) is used for lateral guidance commands.
Acceleration Limitations
Endoatmospheric missile acceleration is limited by altitude, speed, structural, stall AOA, and drag constraints: stall AOA at high altitudes and structural limitations at low altitudes (see Eq. 9). Exoatmospheric interceptor acceleration is limited by thrust-to-weight ratios and flight time, the latter because, when the fuel is exhausted, exoatmospheric missiles cannot maneuver. For the flying cylinder considered earlier, the lateral acceleration A in gees is given by

$$ \frac{A}{g} = \frac{Q\,S_{ref}\,C_L}{W} = \frac{0.5\,\rho V_m^2 S_{ref}}{W}\left[2\alpha + 1.5\,\frac{S_{plan}}{S_{ref}}\,\alpha^2\right] $$

(8). For L = 20 ft, D = 1 ft, W = 1000 lbs, V_m = 3000 ft/s, and α = 20 deg, at an altitude of 25,000 ft, the resulting acceleration is A ≈ 20g.
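This number can be reproduced directly from the expression above; the air density at 25,000 ft is an assumed standard-atmosphere value:

```python
# Sketch: lateral acceleration of the flying cylinder in gees,
# A/g = Q*Sref*[2*alpha + 1.5*(Splan/Sref)*alpha**2]/W,  Q = 0.5*rho*Vm^2.
# rho at 25,000 ft is an assumed standard-atmosphere value.
import math

L_ft, D_ft, W = 20.0, 1.0, 1000.0
Vm, alpha = 3000.0, math.radians(20.0)
rho = 1.0663e-3                       # slug/ft^3 at 25,000 ft (assumption)

Sref = math.pi * D_ft**2 / 4.0        # reference (cross-section) area
Splan = L_ft * D_ft                   # planform area
Q = 0.5 * rho * Vm**2                 # dynamic pressure
A_g = Q * Sref * (2*alpha + 1.5*(Splan/Sref)*alpha**2) / W
print(round(A_g, 1))                  # ≈ 20 g
```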
THAAD Systems
Recent research efforts have focused on the development of THAAD systems. Calculations show that high-altitude ballistic intercepts are best made head-on so that there is little target deceleration perpendicular to the LOS (8), because such deceleration appears as a target maneuver to the interceptor. EKF methods have been suggested for estimating target ballistic coefficients and state information to be used in OG laws. Estimating ballistic coefficients

$$ \beta \;\stackrel{\mathrm{def}}{=}\; \frac{W}{S_{ref}\,C_{D_0}} $$

(where C_{D_0} is the zero-lift drag coefficient) is particularly difficult at high altitudes where there is little drag, $a_{drag} = \rho g V_m^2/(2\beta)$. Also, the high closing velocity of a ballistic target engagement significantly decreases the maximum permitted guidance system bandwidth for radome slope stability. Noise issues can also significantly exacerbate the ballistic intercept problem.
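The altitude dependence can be sketched directly: for an assumed ballistic coefficient, the drag deceleration a_drag = ρgV_m²/(2β) is orders of magnitude smaller at high altitude (the densities below are approximate standard-atmosphere values, assumed for illustration):

```python
# Sketch: drag deceleration a_drag = rho*g*Vm^2/(2*beta) for a ballistic
# target. beta and the densities are illustrative assumed values; little
# drag at high altitude makes beta hard to estimate there.
g, Vm, beta = 32.2, 20000.0, 1000.0     # ft/s^2, ft/s, lb/ft^2 (assumed)
rho = {150000: 3.0e-6, 50000: 3.6e-4}   # slug/ft^3, approximate values

a_g = {}
for alt, r in sorted(rho.items(), reverse=True):
    a_g[alt] = r * g * Vm**2 / (2.0 * beta) / 32.2   # deceleration in gees
    print(f"{alt:6d} ft: a_drag = {a_g[alt]:.2f} g")
```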
FUTURE DEVELOPMENTS
Future developments will focus on theater-class ballistic missiles, guided projectiles, miniature kill vehicles, space-based sensors for missile defense, and boost-phase interceptors.
The Age of Air-Breathing Hypersonic Flight
During the Gulf Wars, it often took considerable time to get a missile on a critical target (e.g., Iraqi leadership), which gave further impetus for a prompt global strike (PGS) capability, one that permits accurate strikes across thousands of miles in minutes. Many have suggested the
ACKNOWLEDGMENTS
This research has been supported, in part, by a 1998 White House Presidential Excellence Award from President Clinton, the National Science Foundation (NSF), NASA, the Western Alliance to Expand Student Opportunities (WAESO), the Center for Research on Education in Science, Mathematics, Engineering and Technology (CRESMET), and a Boeing A. D. Welliver Faculty Fellowship. For additional information, please contact aar@asu.edu or visit http://www.eas.asu.edu/~aar/research/mosart/mosart.html.
BIBLIOGRAPHY
1. Neufeld, M. J., The Rocket and the Reich; Harvard University Press: Cambridge, MA, 1995.
2. Fossier, M. W., The Development of Radar Homing Missiles. J. Guidance Contr. Dynamics, 1984, 7, pp 641–651.
3. Haeussermann, W., Developments in the Field of Automatic Guidance and Control of Rockets. J. Guidance Contr. Dynamics, 1981, 4, pp 225–239.
4. Blakelock, J. H., Automatic Control of Aircraft and Missiles; McGraw-Hill: New York, 1992.
5. Williams, D. E., Friedland, B., Madiwale, A. N., Modern Control Theory for Design of Autopilots for Bank-to-Turn Missiles. J. Guidance Contr., 1987, 10, pp 378–386.
6. Lin, C. F., Modern Navigation, Guidance, and Control Processing; Prentice-Hall: Englewood Cliffs, NJ, 1991.
7. Zhou, K., Doyle, J. C., Essentials of Robust Control; Prentice-Hall: Upper Saddle River, NJ, 1998.
8. Zarchan, P., Tactical and Strategic Missile Guidance; AIAA Inc.: New York, 1990.
9. Macknight, N., Tomahawk Cruise Missile; Motorbooks International: New York, 1995.
10. Arrow, A., An Analysis of Aerodynamic Requirements for Coordinated Bank-to-Turn Missiles. NASA CR 3544, 1982.
11. Feeley, J. J., Wallis, M. E., Bank-to-Turn Missile/Target Simulation on a Desk Top Computer; The Society for Computer Simulation International; 1989, pp 79–84.
12. Riedel, F. W., Bank-to-Turn Control Technology Survey for Homing Missiles. NASA CR 3325, 1980.
13. Kovach, M. J., Stevens, T. R., Arrow, A., A Bank-to-Turn Autopilot Design for an Advanced Air-to-Air Interceptor; Proc. of the AIAA GNC Conference; Monterey, CA, August 1987.
14. Rodriguez, A. A., Cloutier, J. R., Performance Enhancement for a Missile in the Presence of Saturating Actuators. AIAA J. Guidance Contr. Dynamics, 1996, 19, pp 38–46.
15. Rodriguez, A. A., Yang, Y., Performance Enhancement for Unstable Bank-to-Turn (BTT) Missiles with Saturating Actuators. Int. J. Control, 1996, 63, pp 641–678.
16. Rodriguez, A. A., Monne, M., Evaluation of Missile Guidance and Control Systems on a Personal Computer. J. Soc. Comput. Simul., 1997, 68, pp 363–376.
17. Ridgely, D. B., McFarland, M. B., Tailoring Theory to Practice in Tactical Missile Control. IEEE Contr. Syst. Mag., 1999, 19, pp 49–55.
18. Cloutier, J. R., Evers, J. H., Feeley, J. J., Assessment of Air-to-Air Missile Guidance and Control Technology. IEEE Contr. Syst. Mag., 1989, pp 27–34.
19. Riggs, T. L., Vergez, P. L., Advanced Air-to-Air Missile Guidance Using Optimal Control and Estimation. AFATL-TR-81-56, Air Force Armament Laboratory, Eglin AFB, FL.
20. Nesline, F. W., Zarchan, P., A New Look at Classical versus Modern Homing Guidance. J. Guidance Contr., 1981, 4, pp 78–85.
21. Jark, J., Kabamba, P. T., Miss Distance Analysis in a New Guidance Law; Proc. of the American Control Conference; June 1999, pp 2945–2949.
22. Waldmann, J., Line-of-Sight Rate Estimation and Linearizing Control of an Imaging Seeker in a Tactical Missile Guided by Proportional Navigation. IEEE Trans. Contr. Syst. Technol., 2002, 10, pp 556–567.
23. Bryson, A. E., Ho, Y. C., Applied Optimal Control: Optimization, Estimation, and Control; HPC: New York, 1975.
24. Shukla, U. S., Mahapatra, P. R., The Proportional Navigation Dilemma: Pure or True? IEEE Trans. Aerosp. Electron. Syst., 1990, 26, pp 382–392.
25. Ghose, D., Dam, B., Prasad, U. R., A Spreader Acceleration Guidance Scheme for Command Guided Surface-to-Air Missiles; Proc. IEEE National Aerospace and Electronics Conference; 1989, pp 202–208.
26. Oh, J. H., Solving a Nonlinear Output Regulation Problem: Zero Miss Distance of Pure PNG. IEEE Trans. Automat. Contr., 2002, 47, pp 169–173.
27. Yang, C. D., Yang, C. C., Optimal Pure Proportional Navigation for Maneuvering Targets. IEEE Trans. Aerosp. Electron. Syst., 1997, 33, pp 949–957.
28. Becker, K., Closed-Form Solution of Pure Proportional Navigation. IEEE Trans. Aerosp. Electron. Syst., 1990, 26, pp 526–533.
29. Axelband, E., Hardy, F., Quasi-Optimum Proportional Navigation. IEEE Trans. Automat. Contr., 1970, 15, pp 620–626.
30. White, B. A., Zbikowski, R., Tsourdos, A., Aim Point Guidance: An Extension of Proportional Navigation to the Control of Terminal Guidance; Proc. American Control Conference; June 2003, pp 384–389.
31. Yuan, P. J., Hsu, S. C., Solutions of Generalized Proportional Navigation with Maneuvering and Nonmaneuvering Targets. IEEE Trans. Aerosp. Electron. Syst., 1995, 31, pp 469–474.
32. Yang, C. D., Yang, C. C., Analytical Solution of Generalized 3D Proportional Navigation; Proc. 34th IEEE Conference on Decision and Control; December 1995, pp 3974–3979.
33. Yang, C. D., Hsiao, F. B., Yeh, F. B., Generalized Guidance Law for Homing Missiles. IEEE Trans. Aerosp. Electron. Syst., 1989, 25, pp 197–212.
34. Chakravarthy, A., Ghose, D., Capturability of Realistic Generalized True Proportional Navigation. IEEE Trans. Aerosp. Electron. Syst., 1996, 32, pp 407–418.
35. Yang, C. D., Yang, C. C., A Unified Approach to Proportional Navigation. IEEE Trans. Aerosp. Electron. Syst., 1997, 33, pp 557–567.
36. Tyan, F., A Unified Approach to Missile Guidance Laws: A 3D Extension; Proc. American Control Conference; 2002, pp 1711–1716.
37. Aggarwal, R. K., Optimal Missile Guidance for Weaving Targets; Proc. 35th IEEE Conference on Decision and Control; 1996, 3, pp 2775–2779.
38. Zarchan, P., Tracking and Intercepting Spiraling Ballistic Missiles; IEEE Position Location and Navigation Symposium; March 2000, pp 277–284.
39. Shima, T., Golan, O. M., Bounded Differential Games Guidance Law for a Dual Controlled Missile; Proc. American Control Conference; June 2003, pp 390–395.
40. Khalil, H., Nonlinear Systems, 2nd ed.; Prentice Hall: Englewood Cliffs, NJ, 1996.
41. Vincent, T. L., Morgan, R. W., Guidance against Maneuvering Targets Using Lyapunov Optimizing Feedback Control; Proc. American Control Conference; May 2002, pp 215–220.
42. Zouan, Z., Yunan, H., Wenjin, G., Lyapunov Stability Based Three-Dimensional Guidance for Missiles Against Maneuvering Targets; Proc. 4th World Congress on Intelligent Control and Automation; June 2002, pp 2836–2840.
43. Manchester, I. R., Savkin, A. V., Circular Navigation Guidance Law for Precision Missile/Target Engagements; Proc. 41st IEEE Conference on Decision and Control; December 2002, pp 1287–1292.
44. Balakrishnan, S. N., Stansbery, D. T., Evers, J. H., Cloutier, J. R., Analytical Guidance Laws and Integrated Guidance/Autopilot for Homing Missiles; Second IEEE Conference on Control Applications; September 1993, pp 27–32.
45. Shamma, J. S., Cloutier, J. R., Existence of SDRE Stabilizing Feedback. IEEE Trans. Automat. Contr., 2003, 48, pp 513–517.
46. Tahk, M. J., Ryoo, C. K., Cho, H., Recursive Time-to-Go Estimation for Homing Guidance Missiles. IEEE Trans. Aerosp. Electron. Syst., 2002, 38, pp 13–24.
47. D'Souza, C. N., McClure, M. A., Cloutier, J. R., Spherical Target State Estimators; Proc. American Control Conference; June 29–July 1, 1994, pp 1675–1679.
48. Rago, C., Mehra, R. K., Robust Adaptive Target State Estimation for Missile Guidance Using the Interacting Multiple Model Kalman Filter; IEEE 2000 Position Location and Navigation Symposium; March 2000, pp 355–362.
49. Tahk, M. J., Ryu, H., Song, E. J., Observability Characteristics of Angle-Only Measurement Under Proportional Navigation; Proc. 34th SICE Annual Conference, International Session Papers; July 1995, pp 1509–1514.
50. Williams, D. E., Friedland, B., Target Maneuver Detection and Estimation; Proc. 27th IEEE Conference on Decision and Control; December 1988, pp 851–855.
51. Brainin, S., McGhee, R., Optimal Biased Proportional Navigation. IEEE Trans. Automat. Contr., 1968, 13, pp 440–442.
52. Song, E. J., Tahk, M. J., Three-Dimensional Midcourse Guidance Using Neural Networks for Interception of Ballistic Targets. IEEE Trans. Aerosp. Electron. Syst., 2002, 38, pp 404–414.
ARMANDO A. RODRIGUEZ
Arizona State University
A case is made here for applying the methodology directly to the data set at hand, which provides a direct
least-squares solution based on the available data without making any assumptions of the underlying statistics
(3). In this review we will introduce the computationally efficient and numerically robust direct-data-domain
methodology for both radio direction finding and adaptive processing. Another ramification of this approach is
that it is quite straightforward to allow for mutual coupling between the sensors collecting the data. This issue
is addressed later on.
where d_n is the location of the nth antenna element, θ_i is the direction of arrival of the ith signal measured from the end-fire direction as shown in Fig. 1, and A_i is its complex amplitude. P is the total number of signals incident on the array and needs to be determined. For a uniformly spaced array d_n = nd, where d is the interelement spacing (as per Fig. 1). Here we have used a single snapshot, i.e., the phasors V_n are measured across the entire array at a single time instant. It is further stipulated that all P signals are narrowband and that the wavelength of transmission is λ. So the goal here is to estimate the 2P unknowns A_i and θ_i from the measured voltages V_n. As long as there are at least 2P antenna elements, the problem can be solved by fitting a sum of complex exponentials to the voltages V_n. This is computed through the matrix-pencil approach (4,5,6), which is very robust when applied to noisy data. Of course, in a real situation there is noise in the data, and hence we need more than 2P antenna elements. The conventionally used ESPRIT methodology (7,8) requires the formation of a covariance matrix, which is computationally more intensive than the matrix-pencil technique. From a statistical point of view, both methods have similar variances for their estimates in the presence of noise. Note, however, that additional processing is required in approaches such as ROOT-MUSIC, where the actual directions of arrival are obtained by factoring a high-order polynomial before the strengths of the various signals can be estimated.
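A minimal single-snapshot sketch of the matrix-pencil idea is given below. The geometry, arrival angles, and amplitudes are synthetic assumptions, and noise-free data are used so that 12 elements comfortably exceed 2P = 4:

```python
# Sketch: single-snapshot direction finding by fitting complex exponentials
# v_n = sum_i A_i * exp(j*2*pi*(d/lambda)*n*cos(theta_i)) with a basic
# matrix-pencil estimator. All geometry and signal values are assumptions.
import numpy as np

d_over_lam = 0.5                       # element spacing in wavelengths
thetas = np.deg2rad([60.0, 110.0])     # true arrival angles (assumed)
amps = np.array([1.0, 0.5 + 0.3j])     # complex amplitudes (assumed)
n = np.arange(12)                      # 12 elements > 2P = 4
z_true = np.exp(1j * 2*np.pi * d_over_lam * np.cos(thetas))
v = (amps[None, :] * z_true[None, :] ** n[:, None]).sum(axis=1)

# Matrix pencil: stack shifted windows of the snapshot; the dominant
# eigenvalues of pinv(Y0) @ Y1 recover the signal poles z_i.
Lp = 5                                 # pencil parameter, P <= Lp <= N - P
Y = np.array([v[i:i + Lp + 1] for i in range(len(v) - Lp)])
Y0, Y1 = Y[:, :-1], Y[:, 1:]
z_est = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
z_est = z_est[np.argsort(-np.abs(z_est))][:2]   # keep the two signal poles
theta_est = np.rad2deg(np.arccos(np.angle(z_est) / (2*np.pi*d_over_lam)))
print(np.sort(theta_est))              # recovers the two arrival angles
```

With noisy data the same pencil is formed from more than 2P elements and rank-truncated before the eigenvalue step, which is the robustness property cited above.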
Case B:
When the antenna elements are spaced nonuniformly, the above approach based on a single snapshot is clearly not applicable, and the processing must be done in the time domain. In that case one uses the model where d_n is the location of the nth antenna element, f_0 is the frequency of transmission, and φ_i is the phase associated with the ith incident field. Therefore, A_i is considered to be real. It is important to note in this scenario that if there are coherent multipaths (i.e., the signal and an undesired multipath component of the signal are in phase), a nonuniformly spaced array cannot separate them from a single snapshot without additional processing; temporal information is also necessary (9). The various components can be extracted using MUSIC (7) and many of its derivatives; ESPRIT (7) may also be used for a certain class of array geometries.
between the look-direction constraint and the true direction of arrival of the desired signal. Correction for this uncertainty is accomplished in the least-squares procedures by establishing look-direction constraints at multiple angles of the adaptive receiver pattern within the transmitter main-beam extent. The multiple constraints are established by using a uniformly weighted array pattern for the same-size array as the adaptive array under consideration. Multiple points of constraint in the received adaptive beam pattern to be formed are chosen from the nonadapted array pattern, and a row corresponding to each constraint is implemented in the matrix equations presented below.
Here the Z_i represent the various constraints along specific look directions θ_i of the receiver beam pattern and are defined by Z_i = exp(j2πd cos θ_i/λ). Z_0 corresponds to the SOI. Here the X_i are the actual voltages measured at the ith antenna element due to the SOI, jammers, clutter, and thermal noise, the W_i are the adaptive weights, and the C_i are prefixed numerical values of the constraints imposed on the adapted beam to be formed. Let L be the number of look-direction constraints, and M + 1 be the number of weights to be calculated. Then M − L + 1 is the number of jammers that can be nulled.
The first L + 1 equations in Eq. (3) define the main-beam constraints of the adapted receiver pattern. The remaining equations use data from the N + 1 elements, and each entry computes the difference between neighboring elements, thereby canceling the SOI, so that these rows contain only undesired signal components. The number of equations must equal the number of weights, and therefore M = L + N − M. This leads to the relationship N = 2M − L between the number of weights, number of constraints, and number of elements. Using the forward–backward data from a single snapshot, the maximum number of weights (degrees of freedom) that can be achieved for a direct-data-domain approach is approximately N/1.5 + 1, as opposed to N + 1 for the conventionally used statistical method. So there is a slight loss in degrees of freedom. However, we gain the ability to deal with a highly nonstationary environment, where the signal environment may change even from snapshot to snapshot, and thereby allow for blinking jammers.
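The construction described above can be sketched for the simplest case of a single look-direction constraint. The snippet builds one constraint row plus difference rows that cancel the SOI's linear phase progression, then solves for the weights. The element count, angles, and jammer amplitudes are synthetic assumptions, and noise-free data are used for clarity (so the matrix is rank deficient and a minimum-norm solve is used); the article's actual Eq. (3) additionally uses forward–backward data and multiple constraints:

```python
# Sketch: single-snapshot direct-data-domain least-squares beamforming with
# one look-direction constraint. All signal values are synthetic assumptions.
import numpy as np

d_over_lam = 0.5
k = 2 * np.pi * d_over_lam
n = np.arange(13)                        # N + 1 = 13 elements, N = 12
M = 6                                    # M + 1 = 7 adaptive weights

steer = lambda th: np.exp(1j * k * np.cos(np.deg2rad(th)) * n)
th0 = 45.0                               # known look direction of the SOI
S = 1.0 + 0.0j                           # true SOI amplitude (to be recovered)
X = S * steer(th0) + 100.0 * steer(75.0) + 50.0 * steer(120.0)  # + 2 jammers

Z0 = np.exp(1j * k * np.cos(np.deg2rad(th0)))
C = M + 1.0                              # gain constraint along the SOI

A = np.zeros((M + 1, M + 1), dtype=complex)
b = np.zeros(M + 1, dtype=complex)
A[0] = Z0 ** np.arange(M + 1)            # look-direction constraint row
b[0] = C
for r in range(1, M + 1):
    seg = X[r - 1: r + M + 1]            # X_i - Z0**-1 * X_{i+1} cancels the
    A[r] = seg[:-1] - seg[1:] / Z0       # SOI's linear phase progression
W = np.linalg.lstsq(A, b, rcond=None)[0] # min-norm solve (A is rank deficient
                                         # here because the data are noise-free)
S_hat = (W @ X[: M + 1]) / C             # recovered SOI amplitude
print(abs(S_hat))                        # recovers |S| = 1.0 despite jammers
```

No covariance matrix is formed: the weights come from this single snapshot, which is what permits operation in the highly nonstationary (blinking-jammer) environment described above.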
In a phased array, the angle extent of the received beam is established by the main beam of the transmitted wave (usually between the 3 dB points of the transmitted field pattern). Target returns within the angle extent must be coherently processed, but with the appropriate steering vector. In that case, the excitation function Y [right-hand side of Eq. (3)] would have several nonzero elements, depending on the number of constraints used for the main beam. This is called a multiple-constraint receive beam pattern, as opposed to constraining it at a single point based on the assumed direction of arrival of the signal of interest. The advantage of dealing with multiple constraints as opposed to a single constraint in the main beam is illustrated next.
Consider a 21-element array with N = 20 and M = 11. The beam is considered to be pointed broadside (θ = 90°), and target returns can be expected over the main beam out to the 3 dB points (±5°). For the broadside-pointed array, consider a target located in the main beam at θ = 94° instead of θ = 90°. The target signal-to-noise ratio at each element is 20 dB, and we assume no jammers or clutter present. Figure 2 shows the main-beam region of the antenna pattern after adaptation. Since the target is not at the look-direction constraint point (i.e., θ = 90°), the adaptive process considers it an interfering source and attempts to null it. Because the target is relatively near the look-direction constraint, the process is not able to form a perfect null. Figure 3 shows the complex array gain along the target direction for 10 random samples of the noise. The marked point represents the nonadapted array gain in the target direction. Note that the gain in the target direction is reduced in each case. In addition, there is a wide variation in the array gain from one random sample to the other. Now, if one were to process the returns from different pulses in a pulse burst that was to be coherently integrated, this variation in the received signal would have a significant influence on that integration. We now illustrate how to overcome it.
We establish multiple constraints on the receive pattern as shown in Fig. 4 at 85°, 87.5°, 90°, 92.5°, and 95°. So the received signal will not be nulled if it is located anywhere within the 10° beamwidth. For this particular case, the excitation vector Y would be of the form Y^T = [13, 7.72 + j8.32, 7.72 − j8.32, 0.816 + j7.149, 0.816 − j7.149, 0, 0, 0, 0, 0, 0, 0, 0]. The corresponding receive beam pattern with the five constraints is shown in Fig. 4. We now consider the same example as before. However, as seen from Fig. 5 (using the same data from 10 random samples of noise), there is no reduction of the array gain along the direction of the target, and for all ten runs the array gain vectors are very nearly aligned. The five-constraint approach permits effective radar processing across the main beam's extent with no loss of gain in the target direction. The adaptive process has been prevented from nulling the target.
In summary, the main-beam constraint allows the look-direction constraint to be established over a finite beamwidth while maintaining the ability to adaptively null jammers in the side-lobe region. Although the main-beam gain can become degraded if the signal becomes very strong, this does not appear to be a serious limitation for practical radar processing.
known direction θ_0 and some interference sources (called J_i) arriving from unknown directions. In the absence of mutual coupling, each individual source presents a linear phase progression across the face of the array. Therefore, the voltage at the ith element due to the incident fields is
where u_m = cos θ_m, S is the complex intensity of the signal incident from direction θ_0, J_m is the intensity of the mth interference source arriving from direction θ_m, and n_i is the additive noise at each element. Let Z = exp(jkΔx u_0) represent the phase progression of the signal between one element and the next. Hence, the term V_i − Z^{−1}V_{i+1} has no signal component. This is illustrated through the last K equations of Eq. (3), where K = (N_e + 1)/2.
The last K − 1 rows of the matrix contain only interference and noise terms. Setting the product of these terms with the weights to zero nulls the interference in a least-squares sense. The equation represented by the first few rows constrains the gain of the array along the direction of the signal. It can be shown that if M + 1 ≥ K, the signal can be recovered and
Fig. 4. Uniformly weighted array pattern with the location of the five constraints.
It is important to point out that there may be signal cancellation if the actual direction of arrival of the signal of interest is slightly different from the assumed direction of arrival. However, this can be avoided by replacing the single look-direction constraint in the first row of the matrix with an a priori 3 dB constraint on the receive beamwidth of the adaptive pattern as the optimization process progresses. This prevents signal cancellation when there is uncertainty in the direction of arrival.
Let us consider a signal corrupted by three jammers incident on the array. To focus on the effects of mutual coupling, it is first assumed that there is no mutual coupling between the antenna elements and that the voltages at the ports of the array are given by Eq. (4). These voltages are then passed to the signal recovery subroutine to find the adaptive weights using Eq. (3), and the signal is estimated using Eq. (5). Next, we consider a realistic antenna array as shown in Fig. 6, where each wire antenna element is centrally loaded with an impedance.
The details of the chosen array are presented in Table 1 and illustrated in Fig. 5. The receiving algorithm tries to maintain the gain of the array in the direction θ_0 = 45° while automatically placing nulls in the interference directions. All signals and jammers arrive from the elevation angle θ = 90°. The baseline signal and jammer intensities and directions of arrival θ_i are given in Table 2. In all simulations the jammer intensities, the directions of arrival of the jammers, and the signal intensity are used only to find the voltages input to the receiving algorithm. The receiving algorithm itself uses only the direction of arrival of the signal; that is, only the look direction is considered to be known.
The signal is kept constant at 1.0 V/m as given in Table 2. The intensity of the first jammer, arriving from θ = 75°, is varied from 1.0 V/m (0 dB with respect to the signal) to 1000.0 V/m (60 dB) in steps of 5 V/m. If the jammers are properly nulled, we expect the reconstructed signal to have no residual jammer component. Therefore, as the jammer strength is increased, we expect the reconstructed signal to remain constant.
Figure 7(a) presents the results for the magnitude, and Fig. 7(b) for the phase, of the adapted signal using the receiving algorithm when mutual coupling is absent and the antenna array is considered to be an ideal one as shown in Fig. 1. The magnitude of the reconstructed signal is indistinguishable from the expected value of 1.0 V/m. This figure demonstrates that, in the absence of mutual coupling, the receiving algorithm is highly accurate and can null even a strong jammer.
Figures 8(a) and 8(b) show the results for the magnitude and phase, respectively, of the received signal when using the measured voltages that are affected by mutual coupling. Here, the array consists of seven wires. The magnitude of the reconstructed signal varies approximately linearly with the intensity of the jammer. This is because the strong jamming is not nulled, and the residual jammer component completely overwhelms the signal.
The reason the signal cannot be recovered when mutual coupling is taken into account can be understood
visually by comparing the adapted beam patterns in the ideal case of no mutual coupling with the case where
mutual coupling is present. In Fig. 9(a) we see the beam pattern for the ideal case. The pattern clearly displays
the three deep nulls at the directions of the interference. The high side lobes are in the region where there
is no interference. Because of the deep nulls, the strong interference can be completely nulled and the signal
recovered correctly. Figure 9(b) shows the beam pattern when the mutual coupling is taken into account. As is
clear, the gain of the antenna along the signal direction is considerably reduced. The pattern nulls are shallow
and are displaced from the desired locations. The shallow nulls result in inadequate nulling of the interference;
hence the signal cannot be recovered.
The receiving antenna is next assumed to be a linear array of Ne elements, as illustrated in Fig. 6.
The elements are parallel, thin, equispaced dipoles. Each element of the array is identically point loaded at
the center. The dipoles are z-directed, of length L and radius a, and are placed along the x axis, separated by
a distance Δx. The array lies in the XZ plane.
Fig. 8. Signal recovery for a realistic array in the presence of mutual coupling.
We begin by analyzing the response of the antenna array to an incident field Einc . Since the array is
composed of thin wires, the following simplifying assumptions are valid (15,16): (1) The current flows only
in the direction of the wire axes (here the z direction). (2) The current and charge densities on the wire are
approximated by filaments of current and charge on the wire axes (which lie in the y = 0 plane). (3) Surface
boundary conditions can be applied to the relevant axial component of the wire axes.
The integral equation that characterizes the current on the wires and describes the behavior of the array
is (15,16)
We solve this equation using the method of moments to obtain the MOM impedance matrix. The basis
functions used are piecewise sinusoids as described in Ref. 15 and shown in Fig. 10. P (chosen odd) basis
functions are used per element. Using these basis functions and a Galerkin formulation, Eq. (6) is reduced to
Fig. 9. Antenna beam pattern, (a) for an idealized array without taking into account mutual coupling in the analysis, and
(b) for a realistic array.
where I is the MOM current vector containing the coefficients of the expansion of the current in the sinusoidal
basis, V is the MOM voltage vector representing the inner product of the weighting functions and the incident
field, and Z and Y are the MOM impedance and admittance matrices, respectively. Both matrices are of order
N × N, where N = Ne P is the total number of unknowns used in the MOM formulation.
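Solving this reduced Galerkin matrix equation for the current coefficients is a single linear solve. A minimal numpy sketch (the 2 × 2 values are purely illustrative, not a real antenna):

```python
import numpy as np

def solve_mom_currents(Z, V):
    """Solve the Galerkin MOM matrix equation Z I = V for the vector I
    of current-expansion coefficients (Z: N x N complex impedance
    matrix, V: N-element complex voltage vector)."""
    # Z (and hence Y = inv(Z)) is independent of the incident field,
    # so for many incident fields one would factor Z once and reuse it.
    return np.linalg.solve(Z, V)

# Toy 2 x 2 system with illustrative values (not a real antenna).
Z = np.array([[50.0 + 10.0j, 5.0 - 2.0j],
              [5.0 - 2.0j, 50.0 + 10.0j]])
V = np.array([1.0 + 0.0j, 0.5 + 0.0j])
I = solve_mom_currents(Z, V)
```

Because the impedance and admittance matrices can be evaluated a priori, in practice one would factor Z once (e.g., an LU decomposition) and reuse the factorization for every incident field.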
Assuming that the incident field is linearly polarized and arrives from direction (θ, φ), it can be written
in the functional form as
where k = k(x̂ cos φ sin θ + ŷ sin φ sin θ + ẑ cos θ) is the wave vector associated with the direction of arrival of
the incident signal. Using P (odd) basis functions on each antenna, the current on the structure can be written
as
where fp,n(z) is the pth basis function on the nth element, whose functional form is given by
where Δz = L/(P + 1) and zp,n = z0,n + pΔz. Here z0,n is the z-coordinate of the bottom of the nth antenna, as shown
in Fig. 10. Substituting Eq. (9) in Eq. (6) and using testing functions fq,m(z), the entries of [V] are given by
where xm is the x-coordinate of the axis of the m-th antenna. For the impedance matrix [Z] the elements are
given by
Fig. 10. Basis functions assumed in the electromagnetic analysis using the MOM analysis.
with
For the case m = n, i.e., when both subsections i and l are on the same antenna element, the term (xm − xn) is
set to a, the radius of the wire (15). An analytic expression for the entries of the MOM impedance matrix is
derived in Ref. 15.
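The piecewise-sinusoidal basis described above can be evaluated numerically. A sketch, assuming the standard form sin(k(Δz − |z − zc|))/sin(kΔz) on a support of two subsections (all parameter names here are illustrative):

```python
import numpy as np

def pwise_sinusoid(z, z_center, dz, k):
    """Piecewise-sinusoidal basis function centered at z_center with
    half-support dz and wavenumber k (assumed standard form of the
    f_{p,n} in the text; zero outside |z - z_center| <= dz)."""
    u = np.abs(z - z_center)
    return np.where(u <= dz, np.sin(k * (dz - u)) / np.sin(k * dz), 0.0)

# Example: P = 3 basis functions on a half-wavelength dipole.
wavelength = 1.0
k = 2 * np.pi / wavelength
L = 0.5 * wavelength
P = 3
dz = L / (P + 1)                       # subsection length (Delta z)
z0 = -L / 2                            # bottom of the element
centers = [z0 + p * dz for p in range(1, P + 1)]
z = np.linspace(z0, z0 + L, 201)
vals = [pwise_sinusoid(z, c, dz, k) for c in centers]
```

Each function peaks at 1 at its own center and vanishes at the centers of its neighbors, so adjacent functions overlap over exactly one subsection, which is what produces the banded structure of the impedance matrix.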
Because of the choice of a piecewise sinusoidal basis and the choice of an odd number of basis functions per
antenna element, only one basis function is nonzero at the port. This is illustrated in Fig. 10, where the basis
function centered on the load ZL is the only one contributing to the current at the port. Therefore, the measured voltage at the port
of the nth antenna is given by
i.e., the measured voltage at a port of the array is directly proportional to the coefficient of the basis function
corresponding to the port. The MOM analysis results in a matrix equation that relates the coefficients of
the current expansion to the MOM voltages through the admittance matrix. Since the MOM impedance and
admittance matrices are independent of the incident fields, they can be evaluated a priori. The measured
voltages at the ports of the antenna are related to the current coefficients by Eq. (10). Using this equation and
Eq. (7), the Ne-dimensional vector of measured voltages can be written as
where ZL is the Ne × Ne diagonal matrix with the load impedances as its entries, Yport is the matrix formed from the
rows of Y that correspond to the ports of the array, and [V] is the MOM voltage vector of order N, i.e., the number
of unknowns in the MOM analysis. Thus Yport is a rectangular matrix of order Ne × N with N > Ne. Since
Yport has more columns than rows, Eq. (11) represents an underdetermined system of
equations. Our goal is to estimate some part of V given Vmeas. Therefore, we need a method to collapse the Ne
× N matrix Yport to an Ne × Ne matrix.
The proposed method is most easily understood when illustrated with an example. If P unknowns are
used per wire element, N = Ne P. Consider the case with Ne = 2 and P = 3. Then N = 6, and basis function
2 corresponds to the port on the first element, while basis function 5 corresponds to the port on the second
element. In this case, Eq. (11) can be written as
If the signal and all the jammers are incident from approximately the same elevation θ, the entries in V
are not all independent of each other. From Eq. (7), if weighting functions i and i + 1 belong to the same array
element,
where V′ is the vector of length Ne whose entries are the MOM voltages that correspond to the ports, and B is
the Ne × Ne matrix that relates the measured voltages to V′.
Equation (18) is a relation between the measured voltages and the MOM voltages that correspond to the
ports of the array. In a practical application, the measured voltages are the given quantities and are affected
by mutual coupling. The MOM voltages on the right-hand side of Eq. (18) are the voltages that are directly
related to the incident fields and so are free from the effects of mutual coupling. Both vectors are of order N e ,
the number of ports. Therefore, this equation can easily be solved for the MOM voltages corresponding to the
ports of the antenna. Furthermore, if the elevation angle of interest (θ) is fixed, the matrix B can be evaluated
a priori. Hence the computational cost of eliminating the mutual coupling is limited to the solution of a small
matrix equation.
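This compensation step can be sketched schematically in code. The construction assumes the fixed-elevation ratios relating each MOM voltage entry to its element's port entry are known (the names `ratios`, `port_idx`, and the toy dimensions are illustrative, not from the article):

```python
import numpy as np

def compensate_mutual_coupling(V_meas, ZL_diag, Y_port, ratios, port_idx):
    """Collapse the rectangular Ne x N system to a square Ne x Ne matrix
    B and solve B V' = V_meas for the port entries of the MOM voltage
    vector (a schematic form of the method described in the text).

    ratios[i] : assumed known ratio V[i] / V'[element of i] implied by
                Eq. (7) at fixed elevation (hypothetical name)
    port_idx[i] : element index (0..Ne-1) to which unknown i belongs
    """
    Ne, N = Y_port.shape
    T = np.zeros((N, Ne), dtype=complex)      # expansion matrix: V = T @ V'
    T[np.arange(N), port_idx] = ratios
    B = np.diag(ZL_diag) @ Y_port @ T         # square Ne x Ne matrix
    return np.linalg.solve(B, V_meas)

# Synthetic consistency check with Ne = 2 elements, P = 3 (N = 6).
rng = np.random.default_rng(0)
Ne, N = 2, 6
Y_port = rng.standard_normal((Ne, N)) + 1j * rng.standard_normal((Ne, N))
ZL_diag = np.full(Ne, 50.0 + 0j)
port_idx = np.array([0, 0, 0, 1, 1, 1])
ratios = rng.standard_normal(N) + 1j * rng.standard_normal(N)
Vp_true = np.array([1.0 + 1.0j, 2.0 - 0.5j])
T = np.zeros((N, Ne), dtype=complex)
T[np.arange(N), port_idx] = ratios
V_meas = np.diag(ZL_diag) @ Y_port @ (T @ Vp_true)
Vp = compensate_mutual_coupling(V_meas, ZL_diag, Y_port, ratios, port_idx)
```

Since B depends only on the array geometry and the fixed elevation, it can be formed and factored once, so the per-snapshot cost is indeed just one small linear solve.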
The open-circuit voltages are the voltages that would be measured at the ports of the array if the ports
were open-circuited. In Ref. 17 the authors assume that these voltages are free of the effects of mutual coupling.
However, the open-circuit voltage at a particular element is the voltage measured in the presence of the other
open-circuited elements. Therefore the effect of mutual coupling has been reduced but not eliminated. Mutual
coupling can be assumed to have been eliminated only when there is nothing impeding the path of the incident
fields, not even the array itself.
We proceed with the same example presented earlier, where the intensity of the incident signal is held
constant at 1.0 V/m. The intensity of the first jammer is varied from 1.0 V/m to 1000 V/m (60 dB above the
signal) in steps of 5 V/m. For each value of the jammer intensity, the MOM voltage vector is calculated and the
measured voltages are calculated. In the first scenario the measured voltages are used to find the open-circuit
voltages. The open-circuit voltages are passed to the direct-data-domain algorithm of Ref. 18. In the second
scenario Eq. (17) is used to find the voltage vector V′. These voltages are used to recover the signal and null the
jammers using the same algorithm. If the jammers are properly nulled, the reconstructed signal magnitude
should remain constant as a function of jammer strength.
Figure 11 presents the results when the open-circuit voltages are used to recover the signal. As can be
seen, the recovered signal shows a near-linear relation to jammer strength. This indicates that the jammer
has not been adequately nulled and the residual jammer strength has overwhelmed the signal.
The results of compensating for the mutual coupling using the technique presented in this paper are
shown in Fig. 12(a) for the magnitude and 12(b) for the phase. The magnitude of the reconstructed signal
varies between 0.996 V/m and 1.004 V/m, that is, the error in the signal recovery is very small. This figure
shows that the strong jammer has been effectively nulled and the signal can be reconstructed.
The reason that the use of the open-circuit voltages is inadequate to compensate for the mutual coupling,
while the technique presented here is adequate, is illustrated using the adapted beam patterns for the two
cases. The adapted beam pattern associated with using the open-circuit voltages is shown in Fig. 13(a). The
nulls are placed in the correct locations. However, they are shallow, resulting in inadequate nulling of the
interference.
The beam pattern associated with compensating for the mutual coupling using the technique presented
in this paper is shown in Fig. 13(b). The nulls are deep and placed in the correct directions. This demonstrates
that the mutual coupling has been suppressed enough to null even a strong jammer.
Figures 11 and 13 allow us to conclude that using the open-circuit voltages does reduce the effect of mutual
coupling somewhat. However, the reduction is inadequate to suppress strong interference. This is because the
open-circuit voltage at an array element is the voltage in the presence of the other open-circuited elements.
The direct-data-domain technique along with the MOM presented proves to be far superior in compensating for
mutual coupling. This is because by using multiple basis functions per antenna element, the mutual-coupling
information has been represented accurately.
Effect of Noise
To illustrate the effect of thermal noise on the adaptive signal corrupted by three jammers as given in Table 3,
we consider an array of z-directed dipoles, each centrally terminated by a 50 Ω resistance. Seven unknowns
per wire are used in the MOM analysis, leading to a total of 91 unknowns. The signal-to-noise ratio was
set at 13 dB. Note that jammer 1 is a strong jammer (66 dB with respect to the signal). For each of the 13
antenna channels, a complex Gaussian random variable is added to the measured voltages due to the signal
and jammers. This set of voltages, affected by noise, is passed to the signal recovery routine described earlier.
The computational procedure is repeated 500 times with different noise samples. These 500 samples are used
to evaluate the mean and the variance of the parameter of interest. The output signal to interference plus noise
ratio (SINR) in decibels is defined as
The results of the above simulation are presented in Table 4. When the measured voltages are used
directly to recover the signal, then, mainly due to the high bias in the estimate of the signal, the output SINR
is only 6.35526 dB. The high bias can be directly attributed to the inadequate nulling of the strong jammer.
Fig. 12. Signal recovery in a realistic array after taking mutual coupling into account: (a) magnitude, (b) phase.
However, when the mutual coupling is eliminated using the technique presented in this paper, the jammers
are completely nulled, yielding accurate estimates of the signal. The total interference power is suppressed to
nearly 20 dB below the signal.
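The Monte Carlo procedure described above can be sketched as follows. Since the displayed SINR definition did not survive reproduction, an assumed conventional form is used here: signal power over the mean-square estimation error across the noise trials.

```python
import numpy as np

def output_sinr_db(s_true, s_estimates):
    """Output SINR in dB from Monte Carlo signal estimates: signal power
    over mean-square estimation error.  This is an assumed form; the
    article's displayed definition is not reproduced here."""
    err = np.asarray(s_estimates) - s_true
    return 10 * np.log10(np.abs(s_true) ** 2 / np.mean(np.abs(err) ** 2))

# 500 trials: if the jammers are perfectly nulled, only thermal noise
# remains in the estimate (noise level here is illustrative).
rng = np.random.default_rng(1)
s_true = 1.0 + 0j
noise = rng.normal(scale=0.1, size=500) + 1j * rng.normal(scale=0.1, size=500)
estimates = s_true + noise
sinr = output_sinr_db(s_true, estimates)
```

With a complex noise standard deviation of 0.1 per component, the expected output SINR is 10 log10(1/0.02) ≈ 17 dB, which the 500-trial average approaches.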
The examples presented here illustrate how one can effectively deal with the effects of mutual coupling
between the sensors. Using the MOM with multiple basis functions per element allows us to reduce the mutual
coupling to an extent where it becomes inconsequential. Hence, the effects of mutual coupling in the analysis
have not been eliminated but rather taken into account.
Fig. 13. Antenna beam pattern (a) using open-circuit voltages and (b) after allowing for the presence of mutual coupling.
Epilogue
For the deployment of any realistic phased arrays, the electromagnetic nature of the array must be taken
into account. We have shown that the mutual coupling between the elements of the array causes adaptive
algorithms to fail. This problem is associated with both covariance-matrix approaches (as stated earlier in Ref.
17) and the direct-data-domain approach (investigated here).
To properly characterize the antenna, the MOM is used. The use of multiple basis functions per element
in a practical manner is a major advance and provides a pragmatic approach to the design of phased-array
antennas. Recognizing that the MOM voltage vector is free from mutual coupling eliminates the mutual
coupling from consideration. By using a relationship between the entries of the MOM voltage vector, a square-matrix equation is developed between the given measured voltages and the relevant entries of the MOM voltage
vector. It is shown that this method works very well in the presence of strong interfering sources.
Through a successful coupling of the electromagnetic analysis with the signal-processing algorithms used
in radio direction finding and adaptive antennas, it is possible to make wide use of realistic phased-array
antennas.
BIBLIOGRAPHY
1. T. K. Sarkar et al., A pragmatic approach to adaptive antennas, IEEE Antennas Propag. Mag., 42 (2): 39-55, 2000.
2. Y. Hua and T. K. Sarkar, A note on the Cramer-Rao bound for 2-D direction finding based on 2-D array, IEEE Trans. Signal
Process., 39: 1215-1218, 1991.
3. S. Choi, D. Shim, and T. K. Sarkar, A comparison of tracking-beam arrays and switching-beam arrays operating in a CDMA
mobile communication channel, IEEE Antennas Propag. Mag., 41 (6): 10-22, 1999.
4. Y. Hua and T. K. Sarkar, Matrix pencil method for estimating parameters of exponentially damped/undamped sinusoids in
noise, IEEE Trans. Acoust. Speech Signal Process., 38: 814-824, 1990.
5. T. K. Sarkar and O. Pereira, Using the matrix pencil method to estimate the parameters of a sum of complex exponentials,
IEEE Antennas Propag. Mag., 37 (1): 48-55, 1995.
6. F. Del Rio and T. K. Sarkar, Comparison between the matrix pencil method and the Fourier transform technique for high
resolution spectral estimation, Digital Signal Process. Rev. J., 6 (2): 108-125, 1996.
7. P. Stoica and R. Moses, Introduction to Spectral Analysis, Englewood Cliffs, NJ: Prentice-Hall, 1997.
8. A. Medouri et al., Estimating one- and two-dimensional direction of arrival in an incoherent/coherent source environment, IEICE Trans. Commun., E80-B (11): 1728-1740, 1997.
9. T. K. Sarkar, S. Nagaraja, and M. C. Wicks, A deterministic direct data domain approach to signal estimation utilizing
uniform 2D arrays, Digital Signal Process. Rev. J., 8 (2): 114-125, 1998.
10. R. A. Monzingo and T. W. Miller, Introduction to Adaptive Arrays, New York: Wiley, 1980.
11. T. K. Sarkar and N. Sangruji, An adaptive nulling system for a narrow-band signal with a look-direction constraint utilizing
the conjugate gradient method, IEEE Trans. Antennas Propag., 37: 940-944, 1989.
12. T. K. Sarkar et al., A deterministic least squares approach to adaptive antennas, Digital Signal Process. Rev. J., 6 (3):
185-194, 1996.
13. S. Park and T. K. Sarkar, Prevention of signal cancellation in adaptive nulling problem, Digital Signal Process. Rev. J., 8
(2): 95-102, 1998.
14. S. Park and T. K. Sarkar, A deterministic eigenvalue approach to space-time adaptive processing, Proc. IEEE Antennas and
Propagation Soc. Int. Symp., 1996, pp. 1168-1171.
15. T. K. Sarkar, B. J. Strait, and D. C. Kuo, Special programs for analysis of radiation by wire antennas, Technical Report
AFCRL-TR-73-0399, Syracuse University, June 1973.
16. A. R. Djordjevic et al., Analysis of Wire Antennas and Scatterers: Software and User's Manual, Norwood, MA: Artech
House, 1995.
17. I. J. Gupta and A. A. Ksienski, Effect of mutual coupling on the performance of adaptive arrays, IEEE Trans. Antennas
Propag., 31: 785-791, 1983.
18. R. S. Adve and T. K. Sarkar, Estimation of the effects of mutual coupling in an adaptive nulling system with a look direction
constraint, IEEE Trans. Antennas Propag., 48: 2000.
19. T. K. Sarkar and B. J. Strait, Optimization methods for arbitrarily oriented arrays of antennas in any environment, Radio
Sci., 11 (12): 959-967, 1976.
TAPAN K. SARKAR
RAVIRAJ ADVE
University of Toronto
MAGDALENA SALAZAR PALMA
Polytechnic University of Madrid
contributions to the scattered fields come from regions of the rough surface that are in the vicinity of the (stationary phase) specular points on the rough surface. For this reason, for example, the single scatter physical
optics approach cannot be used to correctly predict the cross-polarized fields scattered in the plane of incidence.
The physical optics approach also fails to correctly predict the polarization (vertical and horizontal)
dependence of the backscatter cross sections when the surface slopes are very small even when the large radius
of curvature criterion is satisfied. Thus, for perfectly conducting surfaces with very large radii of curvature,
the Kirchhoff approximations for the surface current Js (Js = 2n × Hi, where n is the unit vector normal to the
rough surface and Hi is the magnetic field of the incident electromagnetic waves) are correctly used to predict
the physical optics co-polarized fields provided that the specular points contribute significantly to the scattered
fields. If, however, the mean-square slopes are very small, such that for backscatter at oblique incidence no
specular points exist, the physical optics solutions fail no matter how large the radii of curvature of the rough
surface.
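For reference, the Kirchhoff surface-current approximation quoted above is a simple cross product; a minimal numerical sketch (the patch geometry is illustrative):

```python
import numpy as np

def kirchhoff_current(n_hat, H_inc):
    """Physical-optics (Kirchhoff) surface current on a perfectly
    conducting surface: Js = 2 n x Hi on the illuminated side."""
    return 2.0 * np.cross(n_hat, H_inc)

# Flat illuminated patch with normal along z and incident H along y.
n_hat = np.array([0.0, 0.0, 1.0])
H_inc = np.array([0.0, 1.0, 0.0])
Js = kirchhoff_current(n_hat, H_inc)   # 2 (z x y) = [-2, 0, 0]
```

On a rough surface, n varies from point to point, which is exactly why the approximation degrades once the local radius of curvature or the population of specular points fails the stated criteria.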
Because of these limitations that are inherent in the two most familiar scattering theories, researchers
have attempted to develop more rigorous scattering theories that can bridge the broad range of scattering
problems not covered by either the perturbation theory or the physical optics approach. When the small slope
and height criteria as well as the large radii of curvature criteria and conditions for deep phase modulation
and specular point scattering are satisfied, the perturbation solutions and the physical optics solutions are
in agreement with each other. When neither the perturbation nor the familiar physical optics solutions are
individually applicable to the random rough surfaces considered (as in the case of microwave backscatter
from the sea surface and the enhanced backscatter observed in controlled laboratory experiments), both the
perturbation solutions and the physical optics solutions fail (6).
This has provided the motivation to develop several versions of hybrid-perturbed physical optics approaches that combine the salient features of both of these theories (2,5,7,8,9). It has also been shown that
the enhanced backscatter that has been observed from very rough surfaces is due to multiple scattering
(10,11,12,13). One problem with these hybrid solutions based on a two-scale surface model is that the results
critically depend upon wavenumber kd where spectral splitting is assumed to occur (8). In general, using these
hybrid approaches, one cannot choose kd such that the large-scale surface hl and the small-scale surface hs (that
rides on the large scale surface) simultaneously satisfy the physical optics and the perturbation restrictions,
respectively. It has also been shown that even when a hybrid solution is used to approximately determine the
copolarized cross section through a suitable choice of kd , it cannot be used to determine the cross-polarized
cross section (14,15).
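The two-scale decomposition at the splitting wavenumber kd can be illustrated with a simple FFT filter (a sketch; the surface profile and the value of kd are synthetic):

```python
import numpy as np

def split_two_scale(h, dx, kd):
    """Split a sampled surface profile h(x) into a large-scale part hl
    (spectral content |k| < kd) and a small-scale part hs (|k| >= kd)
    by FFT filtering, mimicking the two-scale (hl + hs) surface model."""
    H = np.fft.fft(h)
    k = 2 * np.pi * np.fft.fftfreq(len(h), d=dx)
    low = np.abs(k) < kd
    hl = np.fft.ifft(np.where(low, H, 0.0)).real
    hs = np.fft.ifft(np.where(low, 0.0, H)).real
    return hl, hs

# Synthetic profile: a slow undulation plus a fine ripple.
x = np.linspace(0.0, 10.0, 512, endpoint=False)
h = np.cos(2 * np.pi * 0.2 * x) + 0.1 * np.cos(2 * np.pi * 3.0 * x)
hl, hs = split_two_scale(h, x[1] - x[0], kd=2 * np.pi * 1.0)
```

The decomposition is exact (hl + hs = h) for any kd, which makes the point in the text concrete: the physics does not select kd, so the hybrid result can depend on an essentially arbitrary filtering choice.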
The full-wave solutions are not restricted to electromagnetic scattering by layered media with irregular
interfaces. Scattering due to inhomogeneities in the complex electrical permittivities and magnetic permeabilities in each layer can also be accounted for in the analysis.
The full-wave solutions can also be used to determine the coupling between the radiation fields, the lateral
waves, and the guided (surface) waves of the layered structures. They can be used to determine the scattered
near fields as well as far fields. Both large-scale and small-scale (including subwavelength) fluctuations of the
rough surface and medium parameters are accounted for in the analysis.
Schelkunoff's Generalized Telegraphists' Equations for Bounded Irregular Waveguides and the
Use of Local Mode Expansions
Generalized telegraphists' equations, which are based on the use of complete expansions of the electromagnetic
waves (into vertically and horizontally polarized radiation fields, lateral waves, and surface waves) as well as
on the imposition of exact boundary conditions at the rough interfaces of irregular stratified media, have been
derived (16,17,18) for electromagnetic fields scattered by irregular stratified media with rough interfaces. The
analytical procedures used to derive these generalized telegraphists' equations are similar to those advanced
by Schelkunoff (19) to solve problems of mode coupling in irregular waveguides with finite cross sections and
impedance boundary conditions. Since the field expansions do not converge uniformly on the irregular boundaries, Schelkunoff (19,20) employed precise mathematical procedures to avoid term-by-term differentiation of
the field expansions (infinite sets of TE and TM modes for cylindrical waveguides).
The method used to convert Maxwell's equations into sets of generalized telegraphists' equations for the
reflected and transmitted wave amplitudes in irregular layered structures is shown schematically in Fig. 1.
The intrinsic properties (duality, reciprocity, realizability, and invariance to coordinate transformations) are also
listed in Fig. 1. For open structures consisting of half-spaces (such as the irregular-layered media considered in
this work), the complete field expansions are associated with integrals along two branch cuts (the radiation and
the lateral wave terms) and residues at pole singularities (waves guided by the stratified structure) (16,17,18).
Schelkunoff's method has also been used to solve problems of mode coupling in a wide class of irregular
waveguides such as waveguide tapers and waveguide bends, as well as in waveguides with nonperfectly
conducting surfaces that are characterized by impedance boundary conditions (19,20). In all these bounded
waveguide systems, the field expansions are expressed in terms of infinite, discrete sets of propagating and
evanescent waveguide modes associated with the characteristic equations for cylindrical waveguides with ideal,
perfectly conducting boundaries. In waveguides of arbitrarily varying cross sections with finitely conducting
boundaries, the modes of the ideal cylindrical waveguides, while complete, do not individually satisfy the correct
boundary conditions, and the mode expansions do not uniformly converge on the irregular boundaries. To keep
his analysis rigorous, Schelkunoff (19,20) employed rather tedious, but necessary, mathematical procedures
in imposing exact boundary conditions. Thus, for example, orders of integration and differentiation are not
interchanged in order to account for the nonuniform convergence of the field expansions and the fact that
the range of the cross-section variables (limits of the corresponding integrals) is not constant. The coupling
between the waveguide modes is due to the nonideal boundary conditions.
In an attempt to reduce the number of significant coupled, spurious modes that need to be accounted for
in multimode waveguides with irregular cross sections, new generalized telegraphists' equations were derived
based on field expansions in terms of a complete set of waveguide modes that individually satisfy the boundary
conditions locally. Thus, for example, in waveguides with abrupt or gradual tapers, waveguide modes in uniform
tapers (with constant flare angles) were used in the local field expansions (21,22,23). In waveguide bends with
arbitrarily varying curvatures, the fields were expressed in terms of local annular waveguide modes (24,25)
and in waveguides with varying impedance boundaries, modes that locally satisfy the impedance boundary
conditions were used (26,27). The modal equations for the local waveguide modes were usually more difficult
to solve than those for ideal cylindrical waveguides. However, the generalized telegraphists' equations can be
solved numerically (using the Runge-Kutta method) more readily when the local modal expansions are used,
since coupling into the spurious local modes is bunched more tightly around the incident mode. This is because
the local modes individually satisfy the local boundary conditions in the waveguide.
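A minimal sketch of such a Runge-Kutta integration, for a hypothetical two-mode coupled-amplitude system standing in for the generalized telegraphists' equations (the propagation constants and coupling coefficient below are invented for illustration, not taken from the article):

```python
import numpy as np

def rk4_step(f, z, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dz = f(z, y)."""
    k1 = f(z, y)
    k2 = f(z + h / 2, y + h / 2 * k1)
    k3 = f(z + h / 2, y + h / 2 * k2)
    k4 = f(z + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Hypothetical two-mode system with invented propagation constants
# beta1, beta2 and coupling kappa, written in a power-conserving form:
# da/dz = -i beta1 a + kappa b,  db/dz = -i beta2 b - kappa a.
beta1, beta2, kappa = 2.0, 2.5, 0.1

def coupled_modes(z, y):
    a, b = y
    return np.array([-1j * beta1 * a + kappa * b,
                     -1j * beta2 * b - kappa * a])

y = np.array([1.0 + 0j, 0.0 + 0j])   # launch mode 1 only
z, h = 0.0, 0.01
for _ in range(1000):                # march the amplitudes to z = 10
    y = rk4_step(coupled_modes, z, y, h)
    z += h
```

Because the coupling matrix here is skew-Hermitian, the total modal power |a|² + |b|² is conserved along z, which gives a convenient sanity check on the step size. In the full problem the state vector would contain all retained local-mode amplitudes, and the coupling coefficients would vary with z.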
These analytical and numerical results were validated experimentally in a series of controlled laboratory
experiments used to synthesize waveguide transition sections (28). These controlled laboratory studies were
first conceived by Wait (29,30,31) to study VLF radio wave propagation in scaled laboratory models of the earth-ionosphere waveguide. In these models, the effective ionosphere boundary was simulated by an absorbing foam
material with a specified complex dielectric coefficient and thickness (manufactured by Emerson & Cuming) (32,
33). The earth's curvature was also simulated in these laboratory models using a nondissipative inhomogeneous
dielectric material to load the interior of the straight model waveguide (34,36). This experimental procedure
to simulate curvature was carried out in the scaled model at microwave frequencies (scaling factor 10^6).
It is analogous to the mathematical earth-flattening technique developed by Kerr (37). The dominant mode
in the (simulated) curved model waveguide had the same characteristics as the earth-detached mode in the
earth-ionosphere waveguide. They can be expressed in terms of Airy integral functions (instead of sinusoidal
functions in empty, rectangular waveguides).
Generalized Telegraphists' Equations for Irregular Stratified Media with One or Two Half-Spaces
Following the extensive analytical, numerical, and experimental work on electromagnetic wave propagation in
bounded irregular waveguide structures, propagation in irregular stratified structures with one or two infinite
half-spaces was analyzed using the full-wave method. Approximate impedance boundary conditions (38,39)
were replaced by exact boundary conditions at the rough interface between two media characterized by different
complex permittivities and permeabilities (16,18). Furthermore, scattering due to laterally inhomogeneous
permittivities and permeabilities in each layer of the irregular stratified media is also accounted for in the
analysis.
The initial impetus for this work was the complex and intriguing sloping beach problem considered by
Wait and Schlak (40) in which the sea was modeled (two-dimensionally) as a small-angle wedge region adjacent
to horizontal dry land. Exact modal expansions of the fields in the four wedge-shaped regions (sea water, wet
land under the sea, dry land, and free space) involve Kontorowich-Lebedev transforms (41). The relationships
between the Fourier, Watson, and Kontorowich-Lebedev transforms have been obtained through the use of
a generalized Bessel transform (42). The analytical solution based on the Kontorowich-Lebedev transforms
involves integration over the order of the Bessel functions. Schlak and Wait (40) employed a geometric optics
approach, which gives exact results for parallel stratified media. However, these results were shown by them
to be nonreciprocal even for the small wedge angles they considered. King and Husting (43), who conducted a
series of controlled experiments on laboratory models, showed that the results were more accurate when the
direction of propagation was toward the apex of the wedge (rather than away from it).
Maxwell's equations for the transverse (y, z) components (denoted by subscript T) of the electric E and
magnetic H fields can be expressed as follows:
and
The electric and (dual) magnetic current densities are J (A/m²) and M (V/m²). The exact boundary conditions imposed at each of the interfaces of the irregular layered structure are the continuity of the tangential
components of the electric and magnetic fields
The full-wave, complete expansions for the vertically (V) and horizontally (H) polarized electric and magnetic
fields are given in terms of the transverse basis functions
in which the y-dependent scalar basis functions P for the vertically (P = V) and horizontally polarized waves
associated with the radiation fields, the lateral waves, and the surface waves of the layered structure are (18)
and
In the above equations, R and T are associated with the Fresnel reflection and transmission coefficients,
vr is the y component of the wave vector kr(u, vr, w) in medium r, vq−1,q = vq−1 − vq, and the z-dependent scalar
function is
The wave impedances and admittances for the vertically and horizontally polarized waves are
The transverse components of the electric and magnetic fields are expressed completely as follows:
and
in which the symbol v denotes summation (integration) over the complete wave vector spectrum consisting of
the radiation term and lateral waves (associated with branch cut integrals) and the waveguide modes (or bound
surface waves) of the layered structure (associated with the residues at the poles of the reflection coefficients).
In Eqs. (17) and (18) the scalar field transforms for the vertically (P = V) and horizontally (P = H) polarized
electric and magnetic fields are
and
and
in which c(w, z) = (1/2π) exp(iwz) and use has been made of the biorthogonal relationships
In Eq. (25) the Kronecker delta δP,Q implies that the vertically (P, Q = V) and horizontally (P, Q = H) polarized
basis functions are orthogonal. Furthermore, the Dirac delta function δ(w − w′) appearing in Eq. (25) is a result
of the Fourier transform completeness and orthogonality relationships:
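These are the standard Fourier-transform identities (restated here for reference, since the displayed equations did not survive reproduction):

```latex
\int_{-\infty}^{\infty} e^{iwz}\, e^{-iw'z}\, dz = 2\pi\,\delta(w - w'),
\qquad
\int_{-\infty}^{\infty} e^{iw(z - z')}\, dw = 2\pi\,\delta(z - z').
```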
The corresponding completeness and orthogonality relationships satisfied by the scalar basis functions P
are
and
in which ZV and YH are the wave impedances and admittances for the vertically and horizontally polarized
waves, respectively. Furthermore, the symbol δ(v, v′) in Eqs. (25) and (29) is the product of the Kronecker
delta δq,r and the Dirac delta function δ(v − v′) for the radiation and lateral wave terms, or the Kronecker delta
δv,v′ for the bound guided (surface) waves of the layered structure. Thus the radiation fields, the lateral waves,
and the guided waves of the full-wave spectrum are mutually orthogonal (16,17). The radiation fields and the
lateral waves are associated with branch cut integrals in the complex wave number plane [with branch points
at k = k0 (uppermost medium) and k = km (lowermost medium)]. The guided waves of the layered structure are
associated with the residues at the poles of the composite reflection coefficient seen from above or below the
layered structure.
In this work, it is convenient to express the vertically and horizontally polarized scalar field transforms
[Eqs. (19) and (20)] in terms of the vertically and horizontally polarized forward wave amplitude aP and
backward wave amplitude bP as follows:
Upon substituting the complete field transforms into the transverse components of Maxwell's equations
[Eqs. (1)–(4)], making use of the biorthogonal relationships [Eq. (28)], and imposing the exact boundary conditions at each interface of the irregular layered structure [Eq. (5)], the following generalized telegraphists'
equations are derived (see Fig. 1):
where AP and BP are associated with the source terms J and M in Eqs. (1)–(4). Furthermore, S^{BA}_{PQ} and S^{AB}_{PQ}
are transmission scattering coefficients, while S^{AA}_{PQ} and S^{BB}_{PQ} are reflection scattering coefficients. These
scattering coefficients vanish when the layered medium is horizontally stratified with homogeneous medium
in each layer. In this case, the forward and backward wave amplitudes for the vertically and horizontally
polarized waves are decoupled and analytical closed form solutions are readily obtained. However, if the rough
surface height or the complex permittivities and permeabilities are functions of x and z, the wave amplitudes
are coupled. In the general case, the basis functions do not individually satisfy the irregular boundary
conditions, and the complete field expansions do not converge uniformly at the boundaries. Thus, on following
precise mathematical procedures (16,17,18,19,20), the orders of integration (summation) and differentiation
cannot be interchanged. As a result, the rigorous derivations of the generalized telegraphists' equations [Eqs.
(32) and (33)] are rather tedious.
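Although the equations themselves are not reproduced here, the generic structure of generalized telegraphists' equations of this type can be sketched as follows. This is a schematic form only (the actual propagation coefficients βP, coupling integrals, and normalizations are those of the cited derivations), with P, Q = V, H:

```latex
\begin{aligned}
\frac{\partial a_P}{\partial x} + i\beta_P\, a_P
  &= \sum_{Q}\int \left[\, S^{AA}_{PQ}\, a_Q + S^{AB}_{PQ}\, b_Q \,\right] dv' + A_P ,\\
\frac{\partial b_P}{\partial x} - i\beta_P\, b_P
  &= \sum_{Q}\int \left[\, S^{BA}_{PQ}\, a_Q + S^{BB}_{PQ}\, b_Q \,\right] dv' + B_P .
\end{aligned}
```

The forward and backward amplitudes propagate independently when all the coupling coefficients vanish (the horizontally stratified case described in the text), and are coupled through the integrals when the interfaces are rough.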
The intrinsic properties of the full-wave solutions are (see Fig. 1) duality, reciprocity, realizability, and
invariance to coordinate transformation. All the above properties follow directly from Maxwell's equations
[Eqs. (1)–(4)]; they are not a result of any additional constraints imposed on the results. A two-dimensional
scalarized version of this problem has also been analyzed (44,45,46,47). When the lowermost and/or uppermost
half-space is perfectly conducting or a good conducting medium, the two boundary conditions [Eq. (5)] at the
lowermost (and/or uppermost) interface can be replaced by a single surface impedance boundary condition:
In Eq. (34) the unit vector n is normal to the interface and points into the conducting half-space. For an
isotropic conducting half-space, the surface impedance Zs is a scalar. In general, the surface impedance can be
represented by a dyad. The impedance boundary condition has been used to simplify the analysis of irregular
layered structures (48,49,50,51). Contributions from integrals associated with one (or two) branch cuts are
eliminated when impedance boundary conditions are used.
The generalized telegraphists' equations have also been derived for irregular multilayered cylindrical
structures (52,53,54,55) and irregular spherical structures (56,57,58).
For the irregular cylindrical and spherical layered structures, the complete expansions of the fields in
terms of cylindrical/spherical harmonics are related to the Watson transformations (59). When the innermost
regions of the cylindrical/spherical structures are highly conducting and impedance boundary conditions are
used, the contribution from the continuous portion of the wave spectrum can be ignored and the solutions are
expressed in terms of discrete waveguide modes (42). A set of generalized telegraphists' equations similar to
Eqs. (32) and (33) has been derived for the irregular cylindrical/spherical structures (52,53,54,55,56,57,58).
However, it should be noted that for the spherical case, the wave admittances/impedances and propagation
coefficients for the forward and backward propagating wave amplitudes are not the same. For the spherical/cylindrical cases, solutions of the modal (characteristic) equations are far more complicated, and numerical
techniques have been developed to trace the loci of the complex roots of the characteristic equations (60).
These procedures have been used to solve problems of electromagnetic wave propagation in naturally
occurring or man-made perturbed models of the earth-ionosphere waveguide. Experiments in controlled laboratory models (based on the pioneering work by Wait) have been conducted to validate the analytical results
(29,31,34).
Iterative Solutions to the Generalized Telegraphists' Equations and Their Relationships to the
Small Perturbation Solution and the Physical/Geometrical Solutions to Rough Surface
Scattering
Iterative analytical procedures, as well as numerical techniques, are used to solve the generalized telegraphists'
equations [Eqs. (32) and (33)] for the forward and backward wave amplitudes scattered by two-dimensionally
rough surfaces (see Fig. 3). An overview of the results is shown schematically in Fig. 4. The analytical procedures are dealt with first in this section. To obtain the single-scatter approximations for the wave amplitudes,
the expressions for the primary fields impressed upon the rough interface due to the sources are first derived
from Eqs. (32) and (33) upon neglecting all the coupling terms manifested by the scattering coefficients S^{BA}_{PQ}.
When the sources are in the far field, the primary, incident fields impressed upon the rough surface are vertically and horizontally polarized plane waves propagating in the direction of the (free-space) wave vector
ki0 = ki0x ax + ki0y ay + ki0z az = k0 ni, where ni is a unit vector and k0 = ω(μ0ε0)^{1/2} is the free-space
wavenumber. Thus, the primary electric fields are
where R^P_0 is the P = V, H polarized Fresnel reflection coefficient for waves incident from medium 0 (free
space) upon medium 1 (see Fig. 3), and aP is the unit vector in the plane of incidence (P = V) or perpendicular
to it (P = H). The primary fields are proportional to the local basis function given by Eq. (11). The
corresponding vertically or horizontally polarized field transforms and wave amplitudes are obtained using
Eqs. (19), (20), and (31). In view of the biorthogonality relationships [Eq. (25)], the primary wave amplitudes are
proportional to the delta functions corresponding to the polarization (Q = V, H) and direction ki0 (ui0, vi0, wi0)
of the incident waves. When these expressions for the primary wave amplitudes are substituted for aQ and
bQ on the right-hand side of Eqs. (32) and (33) (with the source terms AP and BP suppressed) the (iterative)
differential equations for single scattered wave amplitudes are obtained. The solutions for these single scatter
wave amplitudes are substituted into the expressions for the field transforms [Eqs. (17) and (18)] to obtain
the single scattered fields. Since both vertically and horizontally polarized incident waves are considered and
both like- and cross-polarized scattered waves result from two-dimensionally rough surfaces, the results for
the diffuse scattered fields are presented here in matrix notation.
In Eq. (36)
where E^P_s and E^P_i are the vertically (P = V) and horizontally (P = H) polarized components of the scattered
fields and of the incident waves (at the origin), respectively.
The 2 × 2 scattering matrix S is given by
Fig. 3. Relationships between the incident and scatter wave normals ni and nf, respectively, local tangent planes (r − rs) · np = 0,
and planes parallel to the stationary phase planes (r − rs) · ns = 0 for rough surface scattering.
and
where θ0 is the elevation angle (measured from the y axis) and φ is the azimuth angle (measured from the x axis).
Furthermore, Ci0 = cos θi0, C0 = cos θ0, Si0 = sin θi0, S0 = sin θ0. The corresponding quantities associated with
medium 1 are denoted by the subscript 1, and θ1 is related to θ0 by Snell's law. The vector v is
and rs and r are position vectors from the origin to points on the rough surfaces and to the observation point,
respectively:
Furthermore,
In Eq. (36) the integrations are over the rough-surface (transverse) variables xs and zs as well as the wave
vector variables ky and kz. The first term Gf contains the exponent exp(ivy h) while the second term Gfd does
not. On integrating the second term with respect to xs and zs, the delta functions are obtained:
Thus the second term Gfd can be readily shown to be the specularly reflected wave from a flat surface at y = 0,
since RVV and RHH reduce to the Fresnel reflection coefficients for the vertically and horizontally polarized
waves, and RVH → 0, RHV → 0 for the specular case k → ks = ki + 2k0 cos θi0 ay, v → 2k0 cos θi0 ay.
Note that the results in Eq. (36) are in complete agreement with the earlier work, in which it is assumed
that the vector n normal to the rough surface is restricted to the xy plane (hz = 0) (25). This is because the
restriction does not constrain the unit vector n to lie in the scatter plane (normal to k × ki).
In the recent work by Collin (61), the author uses a different full-wave approach to the problem of
scattering of plane waves from perfectly conducting surfaces: he uses a pair of odd and even scalar basis
functions for the Dirichlet and Neumann boundary conditions. These basis functions and the corresponding
reciprocal basis functions (chosen to be their complex conjugates) are explicit functions of y and implicit
functions of x and z [through the expression for h(x, z), the rough surface height]. The resulting source-free wave
equation is further (Fourier) transformed in x and z to obtain an equation with a dyadic operator for the vector
field transform (and equivalent, slope-dependent sources that account for scattering) rather than generalized
telegraphists' equations for the scalar wave amplitudes. Upon inverting the dyadic operator, evaluating the
residue at k0, and integrating by parts, Collin's results are also shown to be in complete agreement with the
full-wave results for the perfectly conducting case (|εr| → ∞, μr = 1). Collin referred to the results for the diffuse
scattered fields (25) as the original full-wave solutions (see Fig. 4).
The above first-order iterative solutions for the single scattered fields [Eq. (36)] are restricted to rough
surfaces with small mean-square slopes, σs² < 0.1 (3). This is because the scattering coefficients S^{αβ}_{PQ} (α, β = A,
B) appearing in the generalized telegraphists' equations [Eqs. (32) and (33)] are explicitly dependent on the
slopes of the rough surface. Alternatively, in Collin's work, the equivalent source terms are slope-dependent.
However, unlike the small perturbation solution, the full-wave solutions are not restricted to rough surfaces
with small mean-square heights. Furthermore, the full-wave solutions [Eq. (36)] can be used to evaluate the
near fields, the far fields, and the fields in the intermediate region. Thus, this work can be applied to probing
subwavelength structures, an area that has attracted much interest in near-field optics. In addition, the first-order
scattering results can be extended to multiple scattering. In particular, the full-wave approach has been
used to account for the double scatter that is associated with observed backscatter enhancement (12).
When the observation point is at a very large distance from the rough surfaces (k1r ≫ k0L ≫ 1 and
k0r ≫ k0l ≫ 1), the integration with respect to the scatter wave vector variables (k0y, k0z) can be performed
analytically using the stationary phase method. Thus, if the observation point is in the direction
the diffuse far fields scattered from the rough surface are
The expression for S(kf, ki) in Eq. (52) is the same as the expression for S(k, ki) in Eq. (36) except that the
scatter wave vector k is replaced by kf, where kf0 = k0 nf [Eq. (51)] and kf1, the wave vector for y < h(xs, zs), is
related to kf0 through Snell's law. Furthermore,
and
In Eq. (54), vy = k0(Ci0 + Cf0) = k0(cos θi0 + cos θf0). When the integrations with respect to xs and zs are
performed, the term GfD is shown to be the flat-surface quasi-specular (zero-order) scattered field, which is
proportional to (4Ll/vxLvzl) sin vxL sin vzl. The expression for the quasi-specular scatter term GfD is the same
as the expression for the total field Gf except that rs in Gf is replaced by rt in GfD [Eq. (52)]. Thus, for h(xs,
zs) = 0, they are identical and Gfs = 0.
It is readily shown that the full-wave solution [Eq. (52)] reduces to the small-height/small-slope perturbation solution of Rice provided that it is assumed that k0h ≪ 1. Thus, on retaining the first two terms of the
Taylor series expansion of exp(ivy h), it follows that
In this small-height/small-slope limit, the full-wave solution is indistinguishable from the small perturbation solution for the far fields scattered by slightly rough surfaces (see Fig. 3). These limiting forms of the
full-wave solutions are, however, no longer invariant to coordinate transformations, since h(x, z) does not appear
in the exponent. Furthermore, it is shown that they are valid only if the heights and slopes are of the same
order of smallness.
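The small-height truncation invoked above can be checked numerically. The following sketch (with illustrative values of vy and h, not taken from the text) verifies that the two-term Taylor expansion of exp(ivy h) is accurate to second order in vy h:

```python
import cmath

# Two-term Taylor check: for vy*h << 1, exp(i*vy*h) ≈ 1 + i*vy*h.
# vy and h are illustrative values, not taken from the article.
vy, h = 2.0, 0.05            # vy*h = 0.1, well below unity
exact = cmath.exp(1j * vy * h)
taylor = 1 + 1j * vy * h
err = abs(exact - taylor)    # truncation error of the expansion
bound = (vy * h) ** 2 / 2    # standard remainder bound for exp(ix)
print(err, bound)
```

The truncation error stays below the standard second-order remainder bound, which is why the limiting form is valid only when the heights (and slopes) are of the same order of smallness.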
Turning now to the high-frequency limit, it is assumed that the radii of curvature of the rough surfaces
are very large compared to the wavelength. The unit vector np normal to these large-scale patches of the rough surface
is assumed to have arbitrary orientation. Thus the planes of incidence and scatter with respect to the reference
coordinate system (normal to ni × ay and nf × ay, respectively) are not the same as the local planes of incidence
and scatter with respect to the local coordinate system (normal to ni × np and nf × np, respectively). Furthermore,
the sines and cosines of the angles of incidence and scatter appearing in the scattering coefficients [Eqs. (39)–(42)]
are not the same as the sines and cosines of the local angles of incidence and scatter. In order to account
for the arbitrary slope of the large-scale surface, the surface element scattering matrix S(kf, ki) in Eq. (36) is
replaced by
In Eq. (56) the matrix operator Ti decomposes the waves that are vertically and horizontally polarized with
respect to the reference plane of incidence (normal to ni × ay) into vertically and horizontally polarized waves
with respect to the local plane of incidence (normal to ni × n). Similarly, the matrix operator Tf decomposes the
waves that are vertically and horizontally polarized with respect to the local plane of scatter (normal to nf × n)
back into vertically and horizontally polarized waves with respect to the reference plane of scatter (normal to
nf × ay). Thus, if the angles between the reference and local planes of incidence and between the reference
and local planes of scatter are denoted with subscripts i and f, respectively, then
where
Furthermore, the cosines of the local angles of incidence and scatter appearing in Sn(kf, ki) [Eq. (56)] are
given by
while Sin0 and Sfn0 are the sines of the local angles of incidence and scatter. The corresponding quantities
associated with medium 1 are denoted by the subscript 1. The local angles of incidence and scatter in medium
1 are related to the local angles of incidence and scatter in medium 0 through Snell's law. Implicit in Eq. (56) are
the self-shadow functions U(nf · np) and U(ni · np) (where U is the unit step function), since the local angles
of incidence and scatter are less than 90°. Furthermore, cos(φf − φi) and sin(φf − φi) appearing in Eq. (38) are
replaced by (62)
The above changes represented by Eq. (56) constitute the transformation into the large-scale (patch) coordinate
system (see Fig. 5). It is readily shown that at high frequencies, the major contributions come from the vicinity
of the stationary-phase, specular points on the rough surface where np is along the bisector between nf and
ni (see Fig. 3). Pursuant to the transformation Eq. (56), it can be shown that at these stationary-phase
points, RVV and RHH reduce to the familiar Fresnel reflection coefficients, while the cross-polarized terms RVH
and RHV vanish at the specular points. Thus in these limits, the full-wave solution [Eq. (52)] reduces to the
physical optics solution for the diffuse scattered fields (4). If, in addition, Eq. (52) is evaluated analytically
using stationary-phase approximations, the full-wave solution reduces to the geometric optics solution (see
Fig. 4). However, in order to account for multiple scatter at the same rough surface, it is necessary to return to
the original form [Eq. (36)] even at high frequencies (13).
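The stationary-phase (specular-point) condition can be illustrated numerically. In the sketch below the specular surface normal is taken along nf − ni (one common sign convention for the bisector condition; the article's own convention for ni may differ), and the local angles of scatter and incidence are verified to be equal:

```python
import math

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Illustrative incident and scatter wave normals (arbitrary choices):
ni = unit((0.3, -0.8, 0.2))     # incident direction (downward-going, -y)
nf = unit((-0.4, 0.7, 0.5))     # scatter direction (upward-going, +y)

# Specular (stationary-phase) surface normal taken along nf - ni:
ns = unit(tuple(f - i for f, i in zip(nf, ni)))

# At the specular point the local scatter and incidence angles are equal:
print(dot(ns, nf), -dot(ns, ni))
```

The equality holds identically for any pair of unit vectors, which is why the Fresnel coefficients take their familiar specular values at these points.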
Full-Wave Solutions for the Radar Cross Sections for Multiple-Scale Rough Surfaces
The normalized bistatic radar cross sections σ^{PQ} for two-dimensionally rough surfaces are dependent on the
polarizations of the scattered (first superscript P = V, H) and incident (second superscript Q = V, H) waves.
The cross section is defined as the following dimensionless quantity that depends on the incident and scatter wave-vector
directions:
In Eq. (64) the area Ay is the radar footprint, and r is the distance from the rough surface to the far-field observation
point. When the rough-surface statistical characteristics are homogeneous, though not necessarily isotropic,
the (ensemble average) full-wave radar cross section based on the original (denoted by subscript 0) full-wave
analysis [Eq. (52)] is expressed as follows:
where S^{PQ}(nf, ni) is the surface element scattering coefficient for incident waves in the direction ni and
polarization Q = V (vertical), H (horizontal), and scattered waves in the direction nf and polarization P = V,
H. It should be noted that the scattering coefficients S^{PQ}(nf, ni) are not functions of slope. In Eq. (65), Q(nf,
ni) is expressed in terms of the surface height joint characteristic function χ2 and characteristic function χ as
follows:
where k0 is the free-space wavenumber and rdt is the projection of rS1 − rS2 (where rS1 and rS2 are position
vectors to two points on the rough surface) on the mean plane (y = 0) of the rough surface y = h(x, z) (see Fig. 3):
and drdt = dxd dzd. The vector v is given by Eq. (44). For homogeneous isotropic surfaces with Gaussian joint
surface height probability density functions
and Q(nf, ni) [Eq. (66)] can be expressed as follows for L, l ≫ lc (the autocorrelation length):
and the integrals in Eq. (66) can be expressed in closed form in terms of the rough-surface height spectral
density function [the Fourier transform of the surface height autocorrelation function ⟨h h′⟩ = ⟨h²⟩R(rd)].
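As an illustration of the Fourier-transform pair mentioned above, the following sketch (one-dimensional, with an assumed 1/2π transform convention and illustrative values of ⟨h²⟩ and lc) compares a numerically computed spectral density of a Gaussian autocorrelation with its closed form:

```python
import numpy as np

# Gaussian height autocorrelation <h h'> = <h^2> R(r), R(r) = exp(-r^2/lc^2),
# and its 1-D spectral density under an assumed 1/(2*pi) transform convention.
# h2 (mean-square height) and lc (correlation length) are illustrative values.
h2, lc = 0.5, 2.0
r = np.linspace(-40.0, 40.0, 8001)
dr = r[1] - r[0]
R = h2 * np.exp(-r**2 / lc**2)

k = 1.3
W_num = np.sum(R * np.cos(k * r)) * dr / (2.0 * np.pi)   # numerical transform
W_ana = h2 * lc / (2.0 * np.sqrt(np.pi)) * np.exp(-k**2 * lc**2 / 4.0)
print(W_num, W_ana)
```

A Gaussian autocorrelation thus yields a Gaussian spectral density, which is what makes the closed-form evaluation of the integrals in Eq. (66) possible.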
However, the solution [Eq. (66)] based on the original full-wave solution is not restricted to surfaces with
small mean-square heights. Since it is based on the first-order single-scatter iterative solution, it is nevertheless
restricted to surfaces with small mean-square slopes (σs² < 0.1).
When the slopes of the rough surface are not small and the scales of roughness are very large compared to
the wavelength, solutions based on the transformation [Eq. (56)] can be used. Thus, the diffuse scatter cross section
is expressed as follows:
The (statistical) mean scattering cross section for random rough surfaces is obtained by averaging Eq. (74) over
the surface heights and slopes at points rs1 and rs2 . The coherent component of Eq. (75) is defined as
The above expression for the radar cross sections for two-dimensional random rough surfaces involves integrals
over the random rough-surface heights and slopes and the surface variables xs1 , xs2 , zs1 , zs2 . This expression
can be simplified significantly if the radii of curvature of the large-scale (patch) surface are assumed to be
very large compared with the free-space wavelength. In this case, the slope at point 2 may be approximated
by the value of the slope at point 1 (hx2 ≈ hx1, hz2 ≈ hz1). If, in addition, the rough surfaces are assumed to be
statistically homogeneous, the cross section is expressed as follows:
in which the analytical expressions for the conditional joint characteristic functions are
where P2(nf, ni | ns) is Sancer's (63) shadow function and ns is the value of np at the specular points.
For random rough surfaces characterized by a four-dimensional Gaussian surface height/slope coherence
matrix, we obtain
and
In Eq. (85), C is the Gaussian surface height autocorrelation function and lcx, lcz are correlation lengths in the
x and z directions, respectively.
When the surface is isotropic (lcx = lcz = lc and σx² = σz² = σs²), Eqs. (81), (83), and (84) reduce to
where
and
For the assumed isotropic surface with Gaussian statistics, the four-dimensional integral [Eq. (28) with
Eqs. (86) and (88)] can be expressed as a three-dimensional integral using a Bessel function identity (4).
The resulting full-wave incoherent diffuse scatter cross section that accounts for surface height/slope
correlations is expressed as
where
Furthermore,
and
In Eq. (93),
and
In Eq. (90), p(hx , hz ) is the probability density function for the slopes (assumed here to be Gaussian).
It is shown that the above results, in which the correlations between the surface heights and slopes have
been accounted for, reduce to the small perturbation results when the heights and slopes are of the
same order of smallness, and to the physical/geometrical results in the high-frequency limit (64). These
full-wave results have also been compared with numerical and experimental results for one-dimensionally (3)
and two-dimensionally (64) rough surfaces.
When the rough surface consists of multiple scales of roughness, as in the case of sea surfaces, two-scale
models have been introduced to obtain the scatter cross sections. Thus, the surface is assumed to consist of a
small-scale surface that is modulated by the slopes of the large-scale surface, and the cross section is expressed
as a sum of the cross sections for the large- and small-scale surfaces. However, Brown (8) has shown that
the hybrid perturbation/physical-optics results critically depend upon the choice of the spatial wavenumber kd
that separates the large-scale surface from the small-scale surface. To apply this hybrid perturbation/physical-optics
approach, the Rayleigh rough-surface parameter, 4k0²⟨hs²⟩, must be chosen to be very small compared
to unity. This places a very strict restriction on the choice of kd. As a result, scattering from the remaining
surface consisting of the larger-scale spectral components with kl < kd may not be adequately analyzed using
physical optics (see Fig. 4).
The above Rayleigh rough-surface parameter does not place any restriction on the choice of kd when the
full-wave analysis is used. Furthermore, it is shown (65) that the full-wave solution for these multiple-scale
rough surfaces is expressed as a weighted sum of two cross sections:
where σ^{PQ}_l is the cross section associated with the surface consisting of the larger-scale spectral components
(kl < kd), while σ^{PQ}_s is the cross section associated with the surface consisting of the smaller-scale spectral
components (ks > kd). Scattering by the small-scale surface is modulated by the slopes of the large-scale surface,
while scattering by the large-scale surface is diminished by a coefficient (less than unity) that is equal to the
squared magnitude of the small-scale characteristic function [Eq. (69)].
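A minimal numerical sketch of this weighting, assuming Gaussian small-scale heights so that the characteristic function is χs(vy) = exp(−vy²⟨hs²⟩/2) (the wavelength, incidence angle, and rms height below are illustrative, not taken from the text):

```python
import math

# Weighting of the large-scale cross section by the squared magnitude of the
# small-scale characteristic function. For Gaussian small-scale heights,
# chi_s(vy) = exp(-vy**2 * hs2 / 2). All numerical values are illustrative.
k0 = 2.0 * math.pi / 0.03          # free-space wavenumber, 3 cm wavelength (assumed)
theta_i = math.radians(30.0)       # angle of incidence (assumed)
vy = 2.0 * k0 * math.cos(theta_i)  # backscatter value of vy
hs2 = 0.001 ** 2                   # small-scale mean-square height (1 mm rms)
chi_s = math.exp(-vy**2 * hs2 / 2.0)
weight = chi_s ** 2                # multiplies the large-scale cross section
print(weight)
```

The weight always lies between 0 and 1 and, unlike the hybrid two-scale approach, the full-wave result does not force this exponent to be vanishingly small through the choice of kd.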
Thus, using the full-wave approach, extensive use is made here of the full-wave scattering cross-section
modulation for arbitrarily oriented composite rough surfaces. The incoherent diffuse radar cross section
of the composite (multiple-scale) rough surface is obtained by regarding the composite rough surface as an
ensemble of individual patches (several correlation lengths in the lateral dimension) of arbitrary orientation
(see Fig. 5). The cross section per unit area of the composite rough surface is obtained by averaging the cross
sections of the individual arbitrarily oriented pixels. It is shown that the (unified full-wave) cross section of the
composite rough surface is relatively stationary over a broad range of patch sizes. In this broad range of
patch sizes, the norm of the relative error is minimum.
where ax′, ay′, and az′ are the unit vectors in the fixed (reference) coordinate system associated with the mean
plane y = h0 = 0, and hx = ∂h/∂x, hz = ∂h/∂z. The unit vectors ax and az are tangent to the mean plane of the
patch. The tilt angles are measured in and perpendicular to the fixed plane of incidence (the x′, y′
plane).
The cosines of the angles of incidence and scatter in the patch coordinate system can be expressed in
terms of the cosines of the angles of incidence and scatter in the fixed reference coordinate system (primed
quantities; see Fig. 5) as follows:
and
The surface element scattering coefficient for the tilted pixel is expressed as follows (66):
in which S^{PQ}_p, the elements of the 2 × 2 scattering matrix Sp, are obtained from S^{PQ} on replacing the angles θi0
and θf0 by the corresponding angles in the pixel coordinate system. Furthermore, cos(φf − φi) and sin(φf − φi) are replaced by the cosine and
the sine of the angle between the planes of scatter and incidence with respect to the pixel coordinate
system (see Fig. 5) (62). The matrices Tfp and Tip relate the vertically and horizontally polarized waves in the
reference coordinate system to the vertically and horizontally polarized waves in the local (patch) coordinate
system (66). Thus
where
and
The tilt angles can be expressed in terms of the derivatives of h(x, z) as follows:
The radar cross section (per unit area) for the tilted patch can be expressed as follows:
and
where
in which
Thus, in Eq. (108) both |D^{PQ}_p|² and Qp are functions of the slopes hx and hz of the tilted-patch mean
plane (see Fig. 5). For a deterministic composite rough surface, the slopes (which modulate the orientation of the
patch) are known. The radar scatter cross section for this composite surface is given by summing the fields of
the individual patches. However, if the composite surface height is random, the tilted-pixel cross section (per
unit area) [Eq. (108)] for the rough surface is also a random function of the pixel orientation. Thus, in order to
determine the cross section per unit area of the composite random rough surface, it is necessary to evaluate
the statistical average of σ^{PQ}_p. The cross section of the composite random rough surface is given by
where ⟨ ⟩ denotes the statistical average (over the slope probability density function p(hx, hz) of the tilted patch).
The mean-square slope of the tilted patch is given in terms of the surface height spectral density function
[Eq. (65)]. In Eq. (114), the upper limit kp is the wavenumber associated with the patch of lateral dimension
Lp = 2π/kp.
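The dependence of the patch mean-square slope on the spectral cutoff can be sketched numerically. The isotropic Gaussian spectrum below uses an assumed normalization such that the integral of W(k) over the wavenumber plane equals ⟨h²⟩; with that convention the total mean-square slope tends to 4⟨h²⟩/lc² as the cutoff kp grows (all values are illustrative):

```python
import numpy as np

# Isotropic Gaussian surface-height spectrum with an assumed normalization
# such that the integral of W over the k-plane equals <h^2>:
#   W(k) = (h2 * lc**2 / (4*pi)) * exp(-k**2 * lc**2 / 4).
# h2, lc, and the cutoff kp are illustrative values.
h2, lc, kp = 0.5, 2.0, 50.0
k = np.linspace(0.0, kp, 200001)
dk = k[1] - k[0]
W = h2 * lc**2 / (4.0 * np.pi) * np.exp(-k**2 * lc**2 / 4.0)

height_var = np.sum(W * 2.0 * np.pi * k) * dk          # ∫ W d^2k, ≈ h2
slope_var = np.sum(k**2 * W * 2.0 * np.pi * k) * dk    # ∫ k^2 W d^2k, ≈ 4*h2/lc**2
print(height_var, slope_var)
```

Truncating the upper limit at a patch wavenumber kp smaller than the spectral roll-off would correspondingly reduce the mean-square slope assigned to the patch.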
In the expression for Qp(nf, ni) [Eq. (109)], the surface height autocorrelation function for the rough
surface associated with the patch is given in terms of the Fourier transform of the surface height spectral
density function as follows:
where it is assumed that the surface is homogeneous and isotropic, with k = (kx² + kz²)^{1/2}.
Illustrative examples of the results obtained for the scatter cross section using the above procedures have
been published (66).
For purposes of comparison, the generalized telegraphists' equations [Eqs. (32) and (33)] have also been
solved numerically for one-dimensionally rough surfaces (67). The procedures used are outlined here. On
extending the range of the wave vector variable u from −∞ to ∞, Eqs. (32) and (33) are combined into one
coupled integrodifferential equation for the forward and backward scattered wave amplitudes a(x, u) and a(x,
−u), respectively.
On extracting the rapidly changing part exp(iux), the wave amplitudes are expressed as
The total wave amplitude is the sum of the source-dependent primary wave amplitude A^P_p and the diffusely
scattered wave amplitude A^P_s due to the surface roughness:
The primary wave amplitude is obtained from Eqs. (32) and (33) on ignoring the coupling terms S^{αβ}_{PQ}. The
resulting integrodifferential equation for the diffusely scattered term A^P_s is converted into an integral equation
with mixed boundary conditions. This expression is integrated by parts to remove the singularity in the
scattering coefficient. The resulting integral equation is solved numerically using the standard method of
moments. Finally, the field transforms [Eqs. (17) and (18)] are used to obtain the results for the electromagnetic
fields from the wave amplitudes. For the far fields, these expressions can be integrated analytically (over the
wavenumber variable) using stationary-phase techniques. These results show that for surfaces with small to
moderate slopes the preceding analytical results are valid (67).
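The numerical procedure outlined above (conversion to an integral equation and solution by the method of moments) can be sketched with a toy one-dimensional Fredholm equation of the second kind; the kernel and excitation below are placeholders, not the article's scattering coefficients:

```python
import numpy as np

# Toy Fredholm integral equation of the second kind,
#   a(u) - ∫ K(u, u') a(u') du' = p(u),
# discretized with pulse basis functions and point matching (method of moments).
# The kernel K and excitation p are placeholders, not the article's coefficients.
N = 200
u = np.linspace(-1.0, 1.0, N)
du = u[1] - u[0]
K = 0.1 * np.exp(-(u[:, None] - u[None, :]) ** 2)   # smooth toy kernel
p = np.exp(-u**2)                                   # toy primary excitation
A = np.eye(N) - K * du                              # moment matrix
a = np.linalg.solve(A, p)                           # wave-amplitude samples
residual = np.max(np.abs(a - (K @ a) * du - p))     # check the discrete equation
print(residual)
```

The same discretize-and-solve pattern applies once the singular part of the actual scattering coefficient has been removed by the integration by parts described above.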
When the observation points are near the surface, it is necessary to account for coupling between the
radiation fields, the lateral waves, and the surface waves associated with rough surface scattering (68,69). When
the rough surface is assumed to be perfectly conducting, the contribution from the branch cut integral associated
with the lateral waves vanishes and there are no residue contributions (associated with surface waves) from
the singularities of the reflection coefficients. When the approximate impedance boundary condition is used,
the lateral wave contribution is eliminated.
The full-wave method can also be used to determine the fields scattered upon transmission across rough
surfaces (70). When scattering from more than one rough interface in an irregular stratified medium is considered,
it becomes necessary, in general, to account for scattering upon both reflection and transmission across rough
surfaces. This topic is reviewed in the next section.
Fig. 6. Two previously investigated models with only one rough interface.
The irregular layered structure with two rough interfaces considered here is illustrated in Fig. 7. The upper interface y = h01s(xs, zs) between medium 0 and medium 1 is
where the mean value of h01s is ⟨h01s⟩ = h01. The lower interface y = h12s(xs, zs) between medium 1 and
medium 2 is
The unit vectors normal to the large-scale rough interfaces between media 0 and 1 and between media 1
and 2 are
Using the full-wave approach (71), the diffuse first-order scattered fields can be expressed as the sum
where E^{PQ}_{SU}(r) is associated with scattering from the upper interface, and E^{PQ}_{SD}(r) is associated with scattering
from the lower interface. For exp(iωt) time-harmonic plane-wave excitations, the incident electric field of magnitude
E^P_{i0} is
where F^{PQ}_{mnU} (m, n = 0, 1) and F^{PQ}_{11D} are scattering coefficients associated with the upper and lower interfaces.
The integration is over the rough surface variables xs and zs as well as the wave number variables v0 and w of
the scattered wave vector k0. The superscripts of E^{PQ}_s denote P (P = H, V) polarized scattered fields due to Q
(Q = H, V) polarized incident fields. The fields expressed by Eqs. (126) and (127) are evaluated at an observation
point above the upper interface, y > h01s:
The position vectors to points on the upper and lower rough interfaces are
In Eqs. (131)–(134) the complex sines and cosines of the incident and scatter angles in media 1 and 2 are
related by Snell's law:
Equations (126) and (127) contain the expressions for the Fresnel reflection (R^P) and transmission (T^P)
coefficients for vertically and horizontally polarized waves, the wave impedance, and the refractive index n (76).
The physical interpretations of Eqs. (126) and (127) are illustrated in Figs. 8 to 12 (71,72,75,76). Equation
(126) represents scattering due to the upper rough interface, and Eq. (127) represents scattering due to the
lower rough interface. The first term on the right-hand side of Eq. (126) associated with the scattering coefficient
F PQ 00U accounts for scattering upon reflection from above the rough upper interface (see Fig. 8). The second
term in Eq. (126) associated with F PQ 01U accounts for waves that undergo multiple reflections in medium 1
and are scattered upon transmission back to 0 (see Fig. 9). The third term in Eq. (126) associated with F PQ 10U
accounts for scattering upon transmission from medium 0 to 1 followed by multiple reflections in medium 1
before wave transmission back to medium 0 (see in Fig. 10). The fourth term in Eq. (126) associated with
F PQ 11U accounts for multiple reflections in medium 1 before scattering upon reflection in medium 1 from below
the upper interface, followed by multiple reflections in medium 1 before transmission back to medium 0 (see
Fig. 11). The single term in Eq. (127) associated with the scattering coefficient F PQ 11D accounts for multiple
reflections in medium 1 before scattering upon reflection in medium 1 from above the lower interface, followed
by multiple reflections in medium 1 before transmission back to medium 0 (see Fig. 12). It is shown that for
uniform layered structures, the full-wave solutions sum up to the classical solutions (71–75).
The diffuse scattered fields are evaluated at a point in the far-field region above the upper interface. The
stationary phase method is used to evaluate the integrals over the scatter wave vector variables v0 and w in
Eqs. (126) and (127). Thus, the scattered far fields at rf (the position vector from origin to the receiver) are
Fig. 9. Scattering upon transmission (across upper interface) from medium 1 to medium 0.
Fig. 10. Scattering upon transmission (across upper interface) from medium 0 to medium 1.
expressed as follows:
Fig. 11. Scattering upon reflection (in medium 1) below the upper interface.
Fig. 12. Scattering upon reflection (in medium 1) above the lower interface.
and
where the position vectors to the mean upper and lower surfaces are
and
and the terms associated with multiple bounces in the coating material are
The geometric series expansions appearing in Eqs. (142) and (143) are used whenever H_D(x_s, z_s) is not
constant, in order to perform the necessary integrations by parts that explicitly involve the derivative of the rough
surface heights (75).
The normalization coefficients are
For parallel stratified structures (no roughness), the full-wave solutions reduce to the exact, classical
solution. The solutions for the like- and cross-polarized diffuse scattered fields presented here can be applied
to scattering from irregular layered media with arbitrarily varying rough interfaces such that the thickness of
the intermediate layer is also arbitrary when random rough surfaces are considered. The rough surface height
probability density functions are characterized by a family of gamma distributions rather than the standard
Gaussian probability density functions, to ensure that H_D(x_s, z_s) ≥ 0 (78).
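The role of the gamma family can be illustrated with a small sketch: matching a desired mean and variance while guaranteeing non-negative thickness samples. The numbers below are illustrative, not taken from Ref. 78:

```python
import random

# Drawing layer-thickness samples H_D >= 0 from a gamma family rather
# than a Gaussian, so the coating thickness can never be negative.
random.seed(0)
mean, var = 2.0, 0.5             # illustrative target mean and variance
shape = mean ** 2 / var          # gamma shape k
scale = var / mean               # gamma scale theta (mean = k * theta)
h_d = [random.gammavariate(shape, scale) for _ in range(100_000)]

# Sample statistics match the targets, and every sample is positive.
m = sum(h_d) / len(h_d)
v = sum((x - m) ** 2 for x in h_d) / len(h_d)
```

A Gaussian with the same mean and variance would assign nonzero probability to negative thicknesses, which is unphysical for a coating layer.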
The polarimetric solutions can be applied to remote sensing of dielectric coating materials on rough
surfaces. In particular, it is possible to determine the optimal polarizations of the transmitter and receiver such
that the presence of clutter from the rough interfaces can be suppressed, in order to facilitate the detection of
buried mines, for example.
Acknowledgments
The manuscript was prepared by Ronda Vietz and Dr. Dana Poulain in the Center for Electro-Optics.
BIBLIOGRAPHY
1. S. O. Rice, Reflection of electromagnetic waves from a slightly rough surface, Commun. Pure Appl. Math., 4: 351–378, 1951.
2. G. R. Valenzuela, Scattering of electromagnetic waves from a tilted slightly rough surface, Radio Sci., 3 (11): 1051–1066, 1968.
3. E. Bahar, B. S. Lee, Full wave solutions for rough surface bistatic radar cross sections: Comparison with small perturbation, physical optics, numerical, and experimental results, Radio Sci., 29 (2): 407–429, 1994.
4. P. Beckmann, A. Spizzichino, The Scattering of Electromagnetic Waves from Rough Surfaces, New York: Macmillan, 1963.
5. D. E. Barrick, W. H. Peake, A review of scattering from surfaces with different roughness scales, Radio Sci., 3 (8): 865–868, 1968.
6. J. T. Johnson et al., Backscatter enhancement of electromagnetic waves from two dimensional perfectly conducting random rough surfaces: A comparison of Monte Carlo simulations with experimental data, IEEE Trans. Antennas Propag., 44 (5): 748–756, 1996.
7. J. W. Wright, A new model for sea clutter, IEEE Trans. Antennas Propag., AP-16 (2): 217–223, 1968.
8. G. S. Brown, Backscattering from a Gaussian-distributed perfectly conducting rough surface, IEEE Trans. Antennas Propag., AP-28: 943–946, 1978.
9. G. L. Tyler, Wavelength dependence in radio wave scattering and specular-point theory, Radio Sci., 11 (2): 83–91, 1976.
10. E. R. Mendez, K. A. O'Donnell, Observation of depolarization and backscattering enhancement in light scattering from Gaussian random surfaces, Opt. Commun., 61 (2): 91–95, 1987.
11. A. A. Maradudin, E. R. Mendez, Enhanced backscatter of light from weakly rough random metal surfaces, Appl. Opt., 32 (19): 3335–3343, 1993.
12. E. Bahar, M. El-Shenawee, Vertically and horizontally polarized diffuse multiple scatter cross sections of one dimensional random rough surfaces that exhibit enhanced backscatter – full wave solutions, J. Opt. Soc. Amer. A, 11 (8): 2271–2285, 1994.
13. E. Bahar, M. El-Shenawee, Enhanced backscatter from one-dimensional random rough surfaces: Stationary-phase approximations to full wave solutions, J. Opt. Soc. Amer., 12 (1): 151–161, 1995.
14. J. C. Daley, W. T. Davis, N. R. Mills, Radar sea return in high sea states, Nav. Res. Lab. Rep., 7142: 1970.
15. E. Bahar, M. A. Fitzwater, Like and cross polarized scattering cross sections for random rough surfaces – Theory and experiment, J. Opt. Soc. Amer., Spec. Issue Wave Propag. Scattering Random Media, 2 (12): 2295–2303, 1985.
16. E. Bahar, Depolarization of electromagnetic waves excited by distribution of electric and magnetic sources in inhomogeneous multilayered structures of arbitrarily varying thickness – Generalized field transforms, J. Math. Phys., 14 (11): 1502–1509, 1973.
17. E. Bahar, Depolarization of electromagnetic waves excited by distribution of electric and magnetic sources in inhomogeneous multilayered structures of arbitrarily varying thickness – Full wave solutions, J. Math. Phys., 14 (11): 1510–1515, 1973.
18. E. Bahar, Depolarization in nonuniform multilayered structures – Full wave solutions, J. Math. Phys., 15 (2): 202–208, 1974.
19. S. A. Schelkunoff, Generalized telegraphist's equations for waveguides, Bell Syst. Tech. J., 31: 784–801, 1952.
20. S. A. Schelkunoff, Conversion of Maxwell's equations into generalized telegraphist's equations, Bell Syst. Tech. J., 34: 995–1045, 1955.
21. E. Bahar, Propagation of radio waves in a model nonuniform terrestrial waveguide, Proc. Inst. Electr. Eng., 113 (11): 1741–1750, 1966.
22. E. Bahar, Generalized scattering matrix equations for waveguide structures of varying surface impedance boundaries, Radio Sci., 2 (3): 287–297, 1967.
23. E. Bahar, Wave propagation in nonuniform waveguides with large flare angles and near cutoff, IEEE Trans. Microw. Theory Tech., MTT-16 (8): 503–510, 1968.
24. E. Bahar, Fields in waveguide bends expressed in terms of coupled local annular waveguide modes, IEEE Trans. Microw. Theory Tech., MTT-17 (4): 210–217, 1969.
25. E. Bahar, G. Govindarajan, Rectangular and annular modal analyses of multimode waveguide bends, IEEE Trans. Microw. Theory Tech., MTT-21 (15): 819–824, 1973.
26. S. W. Maley, E. Bahar, Effects of wall perturbations in multimode waveguides, J. Res. Natl. Bur. Stand., 68D (1): 35–42, 1964.
27. E. Bahar, Computations of mode scattering coefficients due to ionospheric perturbation and comparison with VLF radio measurements, Proc. Inst. Electr. Eng., 117 (4): 735–738, 1970.
28. E. Bahar, G. Crain, Synthesis of multimode waveguide transition sections, Proc. Inst. Electr. Eng., 115 (10): 1395–1397, 1968.
29. E. Bahar, J. R. Wait, Propagation in a model terrestrial waveguide of nonuniform height, theory and experiment, J. Res. Natl. Bur. Stand., 69D (11): 1445–1463, 1965.
30. E. Bahar, Propagation of VLF radio waves in a model earth ionosphere waveguide of arbitrary height and finite surface impedance boundary: Theory and experiment, Radio Sci., 1 (8): 925–938, 1966.
31. E. Bahar, J. R. Wait, Microwave model techniques to study VLF radio propagation in the earth ionosphere waveguide, in J. Fox (ed.), Quasi-Optics, New York: Interscience, 1964, pp. 447–464.
32. E. Bahar, Propagation in a microwave model waveguide of variable surface impedance: Theory and experiment, IEEE Trans. Microw. Theory Tech., MTT-14 (11): 572–578, 1966.
33. E. Bahar, Analysis of mode conversion in waveguide transition section with surface impedance boundaries applied to VLF radio propagation, IEEE Trans. Antennas Propag., AP-16 (6): 673–678, 1968.
34. J. R. Wait, E. Bahar, Simulation of curvature in a straight model waveguide, Electron. Lett., 2 (10): 358, 1966.
35. E. Bahar, Scattering of VLF radio waves in the curved earth ionosphere waveguide, Radio Sci., 3 (2): 145–154, 1968.
36. E. Bahar, Inhomogeneous dielectric filling in a straight model waveguide to simulate curvature of waveguide boundaries, Proc. Inst. Electr. Eng., 116 (1): 84–86, 1969.
37. D. E. Kerr, Propagation of Short Radio Waves, MIT Radiat. Lab. Ser. 13, New York: McGraw-Hill, 1951.
38. E. Bahar, Radio wave propagation over a rough, variable impedance, boundary, Part I. Full wave analysis, IEEE Trans. Antennas Propag., AP-20 (3): 354–362, 1972.
39. E. Bahar, Radio wave propagation over a rough, variable impedance, boundary, Part II. Full wave analysis, IEEE Trans. Antennas Propag., AP-20 (3): 362–368, 1972.
40. G. A. Schlak, J. R. Wait, Electromagnetic wave propagation over a nonparallel stratified conducting medium, Can. J. Phys., 45: 3697–3720, 1967.
41. M. J. Kontorowich, N. M. Lebedev, Kontorowich–Lebedev transforms, Academy of Science USSR, J. Phys., 1: 229–241, 1939.
42. E. Bahar, Generalized Bessel transform and its relationship to the Fourier, Watson and Kontorowich–Lebedev transforms, J. Math. Phys., 12 (2): 179–185, 1971.
43. R. J. King, C. H. Husting, Microwave surface impedance measurements of a dielectric wedge on a perfect conductor, Can. J. Phys., 49: 820–830, 1971.
44. E. Bahar, Generalized Fourier transform for stratified media, Can. J. Phys., 50 (24): 3123–3131, 1972.
45. E. Bahar, Radio wave propagation in stratified media with nonuniform boundaries and varying electromagnetic parameters – Full wave analysis, Can. J. Phys., 50 (24): 3132–3142, 1972.
46. E. Bahar, Electromagnetic wave propagation in inhomogeneous multilayered structures of arbitrary thickness – Generalized field transforms, J. Math. Phys., 14 (8): 1024–1029, 1973.
47. E. Bahar, Electromagnetic wave propagation in inhomogeneous multilayered structures of arbitrary thickness – Full wave solutions, J. Math. Phys., 14 (8): 1030–1036, 1973.
48. E. Bahar, Generalized WKB method with applications to problems of propagation in nonhomogeneous media, J. Math. Phys., 8 (9): 1735–1746, 1967.
49. E. Bahar, Propagation of radio waves over a nonuniform layered medium, Radio Sci., 5 (7): 1069–1076, 1970.
50. E. Bahar, Radiation from layered structures of variable thickness, Radio Sci., 6 (12): 1109–1116, 1971.
51. E. Bahar, Radiation by a line source over nonuniform stratified earth (with G. Govindarajan), J. Geophys. Res., 78 (2): 393–406, 1973.
52. E. Bahar, Radio wave propagation in nonuniform multilayered cylindrical structures – Generalized field transforms, J. Math. Phys., 15 (11): 1977–1981, 1974.
53. E. Bahar, Radio wave propagation in nonuniform multilayered cylindrical structures – Full wave solutions, J. Math. Phys., 15 (11): 1982–1986, 1974.
54. E. Bahar, Field transforms for multilayered cylindrical and spherical structures of finite conductivity, Can. J. Phys., 53 (11): 1078–1087, 1975.
55. E. Bahar, Propagation in irregular multilayered cylindrical structures of finite conductivity – Full wave solutions, Can. J. Phys., 53 (11): 1088–1096, 1975.
56. E. Bahar, Electromagnetic waves in irregular multilayered spheroidal structures of finite conductivity – Full wave solutions, Radio Sci., 11 (2): 137–147, 1976.
57. E. Bahar, Computations of the transmission and reflection scattering coefficients in an irregular spheroidal model of the earth-ionosphere waveguide, Radio Sci., 15 (5): 987–1000, 1980.
58. E. Bahar, Radio waves in an irregular spheroidal model of the earth ionosphere waveguide (with M. A. Fitzwater), IEEE Trans. Antennas Propag., AP-28 (4): 591–592, 1980.
59. J. R. Wait, Waves in Stratified Media, New York: Macmillan, 1962.
60. E. Bahar, M. A. Fitzwater, Numerical technique to trace the loci of the complex roots of characteristic equations in mathematical physics, SIAM J. Sci. Stat. Comput., 2 (4): 389–403, 1981.
61. R. E. Collin, Electromagnetic scattering from perfectly conducting rough surfaces (a new full wave method), IEEE Trans. Antennas Propag., AP-40 (12): 1416–1477, 1992.
62. E. Bahar, Full wave solutions for the depolarization of the scattered radiation fields by rough surfaces of arbitrary slope, IEEE Trans. Antennas Propag., AP-29 (3): 443–454, 1981.
63. M. L. Sancer, Shadow corrected electromagnetic scattering from randomly rough surface, IEEE Trans. Antennas Propag., AP-17: 577–585, 1969.
64. E. Bahar, B. S. Lee, Radar scatter cross sections for two dimensional random rough surfaces – Full wave solutions and comparisons with experiments, Waves Random Media, 6: 1–23, 1996.
65. E. Bahar, Scattering cross sections for composite random surfaces – Full wave analysis, Radio Sci., 16 (6): 1327–1335, 1981.
66. E. Bahar, Y. Zhang, A new unified full wave approach to evaluate the scatter cross sections of composite random rough surfaces, IEEE Trans. Geosci. Remote Sens., 34 (4): 973–980, 1996.
67. E. Bahar, Y. Zhang, Numerical solutions for the scattered fields from rough surfaces using the full wave generalized telegraphist's equations, Int. J. Numer. Model., 10: 83–99, 1997.
68. E. Bahar, Excitation of lateral waves and the scattered radiation fields by rough surfaces of arbitrary slope, Radio Sci., 15 (6): 1095–1104, 1980.
69. E. Bahar, Excitation of surface waves and the scattered radiation fields by rough surfaces of arbitrary slope, IEEE Trans. Microw. Theory Tech., MTT-28 (9): 999–1006, 1980.
70. E. Bahar, B. S. Lee, Transmission scatter cross sections across two-dimensional random rough surfaces – Full wave solutions and comparison with numerical results, Waves Random Media, 6: 25–48, 1996.
71. E. Bahar, Physical interpretation of the full wave solutions for the electromagnetic fields scattered from irregular stratified media, Radio Sci., 23 (5): 749–759, 1988.
72. S. M. Haugland, Scattering of electromagnetic waves from coated rough surfaces – full wave approach, Thesis, University of Nebraska–Lincoln, 1991.
73. E. Bahar, S. M. Haugland, A. H. Carrieri, Full wave solutions for Mueller matrix elements used to remotely sense irregular stratified structures, Proc. IGARSS 91 Remote Sens.: Global Monit. Earth Manage., Espoo, Finland, Vol. 1, 1991, pp. 1479–1482.
74. S. M. Haugland, E. Bahar, A. H. Carrieri, Identification of contaminant coatings over rough surfaces using polarized IR scattering, Appl. Opt., 31 (19): 3847–3852, 1992.
75. E. Bahar, M. Fitzwater, Full wave physical models of nonspecular scattering in irregular stratified media, IEEE Trans. Antennas Propag., AP-37 (12): 1609–1616, 1989.
76. R. D. Kubik, E. Bahar, Electromagnetic fields scattered from irregular layered media, J. Opt. Soc. Amer. A, 13 (10): 2050–2059, 1993.
77. E. Bahar, Full wave co-polarized nonspecular transmission and reflection scattering matrix elements for rough surfaces, J. Opt. Soc. Amer. A, 5: 1873–1882, 1988.
78. R. D. Kubik, E. Bahar, Radar polarimetry applied to scattering from irregular layered media, J. Opt. Soc. Amer. A, 15: 2060–2071, 1996.
EZEKIEL BAHAR
University of Nebraska-Lincoln
RADAR ALTIMETRY
ALTIMETRY, RADAR
Satellite-based radar altimetry over the world's oceans is the main theme of this article. Rather than measure the unknown clearance of the radar above potentially hazardous topography (which is one rationale for an aircraft radar altimeter, for example), satellite-based altimeters are designed to measure the height of the ocean's surface relative to an objective reference such as the Earth's mean ellipsoid. Such sea surface height measurements have become essential for a wide variety of applications in oceanography, geodesy, geophysics, and climatology [1]. A satellite-based altimeter circles the Earth in about 90 minutes, generating surface height measurements along its nadir track. These measurements accumulate, providing unique synoptic data that have revolutionized our knowledge and understanding of both global and local phenomena, from El Niño to bathymetry. A satellite-based radar altimeter also provides measurements of significant wave height and wind speed along its nadir track.
Although one might view these altimeters as relatively simple instruments, their phenomenal measurement accuracy and precision require elegant microwave implementation and innovative signal processing. This article provides an overview of the applications that drive these requirements and a description of the resulting state-of-the-art design concepts.
A nadir-viewing altimeter in a repeat-track orbit is constrained by a fundamental trade-off between temporal coverage (revisit period D days) and spatial coverage (track separation at the equator W kilometers): DW = constant for a given inclination and altitude. If more than one altimeter is under consideration, either as independent assets or as a pre-planned constellation, then the space/time trade-space is enlarged, and more measurement objectives may be satisfied. The limitations imposed by this constraint have motivated multi-beam or wide-swath altimeter concepts, although all such architectures imply a compromise on height measurement accuracy. The leading example of this genre is reviewed at the end of this article.
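The DW = constant trade-off can be sketched numerically; the reference design point below (9.9-day repeat, ~315 km equatorial spacing) is TOPEX-like and used purely for illustration:

```python
# Coverage trade-off D * W = constant for a fixed altitude and
# inclination. The constant is set by one known design point.
def track_spacing_km(revisit_days, d0=9.9, w0=315.0):
    """Equatorial track spacing implied by a chosen revisit period,
    scaled from the (d0, w0) reference design."""
    return d0 * w0 / revisit_days
```

Doubling the revisit period halves the equatorial gap between tracks, and vice versa; no single orbit can improve both at once, which is the motivation for multi-beam and wide-swath concepts.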
The sea surface height (SSH) measurement objectives of space-based altimeters can be grouped into three broad categories: large-scale dynamic sea surface topography, mesoscale oceanic features, and the cryosphere (near-polar sea ice and continental ice sheets). Satellite altimeters dedicated to determining the ocean's large-scale dynamic surface topography are characterized by absolute sea surface height measurement accuracy on the order of centimeters along tracks of more than 1000 km, and orbits that retrace their surface tracks every 10 to 20 days. In contrast, mesoscale missions focus on sea surface height signals of less than 300 km in length. This application requires measurement precision sufficient to sustain relative height measurements and, for geodetic data, relatively dense track-to-track spacing. Geosat is the leading example of this category, for both geodetic (non-repeat) and mesoscale (exact-repeat) orbits. Observation of sea ice and polar ice sheets requires that the altimeter have robust range and spatial resolution, accuracy, and precision in response to the non-zero average surface slope in both the along-track and cross-track directions of the continental glaciers. Suitable orbits must have near-polar inclination and multi-year relative accuracy. CryoSat is reviewed as the first example of this class of radar altimeter mission.
Radar altimeters must provide accurate and precise SSH measurements from a spacecraft whose roll and pitch attitudes are not known exactly. These requirements can be satisfied by the pulse-limited altimeter paradigm, which is characterized by (1) large time-bandwidth pulse modulation, (2) antenna directivity that illuminates a surface area larger than the spatially resolved footprint, and (3) extensive noncoherent (post-detection) waveform averaging. The design of the TOPEX altimeter is described as an example. Footprint resolution and measurement precision can be improved by combining coherent and increased incoherent processing, exemplified by the delay-Doppler altimeter, which borrows applicable techniques from synthetic aperture radar (SAR). The article closes with an overview of future developments and advanced mission concepts.
RADAR ALTIMETER SATELLITES
All satellite radar altimeters to date (Table 1) are incoherent pulse-limited instruments, as described in a later passage. Since 1973 height measurement accuracy has improved, due primarily to dedicated effort and increasing skill applied to the estimation and correction of systematic errors. Performance has also benefited from improved on-board hardware and algorithms, and improved orbit determination. The Jason-1 altimeter represents the state of the art in absolute sea surface height measurement accuracy (as of the year 2006). On-line access to descriptions of most of these radar altimeter missions may be found at [2].
Orbits
An altimeter's SSH accuracy on large scales depends to first order on how well the height of the altimeter itself can be determined. Given the state of the art in satellite tracking systems, the dominant error in satellite (radial) position determination is uncertainty in knowledge of the gravity field (often expressed in terms of geoid height) [3]. At lower orbit altitudes, the higher-frequency components of the gravity field are enhanced. The impact can be significant. For example, gravity variations of about 400 km wavelength are 100 times larger at an altitude of 500 km than they are at 1000 km. In general, the accuracy of precision orbit determination is better for higher altitudes. Atmospheric drag is approximately ten times larger at 800 km than at 1200 km [4]. For example, over one orbit at 1200 km altitude, drag imposes a 1-cm decay on the orbit radius. At 800 km altitude, the effect is ten times larger, resulting in a 10-cm decay per orbit. Atmospheric drag increases significantly during periods of higher solar flare activity, the peaks of which occur approximately every eleven years.
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright 2007 John Wiley & Sons, Inc.
Altimetry, Radar
Table 1. Summary of Satellite Radar Altimeters
Altimeter | Agency | Year | Orbit repeat (days) | Inclination (deg) | Altitude (km) | Equatorial spacing (km) | Band | Propagation measurements | Accuracy
Skylab (3) | NASA | 1973 | No | 48 | 435 | – | Ku | None | 50 m
GEOS-3 | NASA | 1975 | No | 115 | 845 | – | Ku | None | 50 cm
Seasat | NASA | 1978 | 17, 3 | 108 | 800 | 160, 800 | Ku | H2O | 20 cm
Geosat | USN | 1985 | 93, 17.05 | 108 | 800 | 4, 160 | Ku | None | 10 cm
ERS-1 | ESA | 1991 | 3, 35, 176 | 98.5 | 785 | 20, 800 | Ku | H2O | 7 cm
TOPEX | NASA | 1992 | 9.916 | 66 | 1336 | 315 | C, Ku | H2O, e | 2 cm
Poseidon | CNES | 1992 | 9.916 | 66 | 1336 | 315 | Ku | H2O | 5 cm
ERS-2 | ESA | 1995 | 35 | 98.5 | 781 | 80 | Ku | H2O | 7 cm
GFO | USN | 1998 | 17.05 | 108 | 800 | 160 | Ku | H2O | 5 cm
RA-2 | ESA | 2002 | 35 | 98.5 | 800 | 80 | S, Ku | H2O, e | 7 cm
Jason-1 | CNES | 2001 | 9.916 | 66 | 1336 | 315 | C, Ku | H2O, e | 1.5 cm
Jason-2 | CNES | (2008) | 9.916 | 66 | 1336 | 315 | C, Ku | H2O, e | (1.5 cm)
CryoSat | ESA | (2009) | 369 | 92 | 720 | – | Ku | None | (5 cm)
was maneuvered into a tandem phasing so that the measurements of the two altimeters could be cross-calibrated. The follow-on mission Jason-2 will be identical to Jason-1, and may also include an experimental wide-swath ocean altimeter (outlined in the closing sections of this article).
If the altimeter is not the primary payload, then the resulting mission and orbit are likely to be determined by other requirements, which may compromise altimetry. The European Space Agency's satellite altimeters (Selenia Spazio) on ERS-1 and ERS-2, as well as the advanced radar altimeter RA-2 [9] (Alenia Spazio) on ESA's Envisat, are of second priority with respect to the other instruments on their respective spacecraft. Their sun-synchronous orbits are less than optimum for precision altimetry. The orbit of ERS-1 was adjusted during its mission to a long repeat period (176 days). That long repeat period generated a relatively dense surface sampling grid useful for estimating sea ice cover, geodesy, and bathymetry, but is less than optimum for most other applications.
flow rate; the resulting slope signals are indicative of large-scale oceanic circulation patterns.
The altimeter's measurements
Whereas the objective is determination of the distance between the radar and the sea surface, the altimeter actually measures round-trip delay tT. The altimeter's relative height h is derived from the measured time delay by h = c tT/2, where c is the speed of light. At the accuracy required of an oceanographic altimeter, this deceptively simple proportionality must take into account the small but significant retardation of the radar's microwaves as they propagate through the atmosphere and the ionosphere.
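The delay-to-height conversion, with the propagation retardations folded in as additive range corrections, can be sketched as follows (a simplification; the function and parameter names are illustrative, not from any mission's processing chain):

```python
C = 299_792_458.0  # speed of light (m/s)

def altimeter_height(t_round_trip, dry_m=0.0, wet_m=0.0, iono_m=0.0):
    """Relative height h = c * t / 2, minus dry/wet troposphere and
    ionosphere path-delay corrections expressed in metres."""
    return C * t_round_trip / 2.0 - (dry_m + wet_m + iono_m)

# A round trip of about 8.9 ms corresponds to a TOPEX-like 1336 km orbit.
h = altimeter_height(2 * 1_336_000.0 / C)
```

Each centimeter of uncorrected path delay maps directly into a centimeter of height error, which is why the correction terms matter at oceanographic accuracy.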
In addition to sea surface height, the satellite radar altimeter's waveform supports two other oceanographic measurements: significant wave height (SWH) and surface wind speed (WS). Over a quasi-flat sea, a pulse-limited altimeter's idealized mean waveform is a step function whose rise time is equal to the compressed pulse length, and whose position on the time-delay axis is determined by the altimeter's height (Fig. 2). If the sea surface is modulated by gravity waves, the altimetric depth of the surface increases, which reduces the slope of the waveform's leading edge. Hence, SWH is proportional to the waveform rise time. If the sea surface is under stress from wind, the resulting fine-scale roughness decreases the power of the pulse reflected back to the altimeter. Hence, WS is inversely related to mean waveform power. In practice, the inflections of the idealized flat-surface response function waveform are softened by the pulse weighting, and the waveform plateau is attenuated over time by the weighting of
[Equation (1), expressing the measured sea surface height in terms of the orbit radial height, the altimeter height, and correction terms, is not reproduced here.] The last three terms on its right-hand side entail corrections to be derived from electromagnetic (EM) reflection and propagation phenomena. Orbit radial height (hO) is determined through extensive instrumentation and analysis, with a net uncertainty. The magnitude of the un-
Because the first-order ionospheric range error scales as 1/f², height measurements h1 and h2 made at two frequencies f1 and f2 can be combined to remove it:

h = [f1²/(f1² − f2²)] h1 − [f2²/(f1² − f2²)] h2

The radius of the pulse-limited footprint is

r_P = (c τ h / R_γ)^(1/2)   (2)
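The dual-frequency combination can be checked numerically; in this minimal sketch the frequencies are TOPEX-like (Ku and C band), while the true height and error scale are made-up illustrative numbers:

```python
# Dual-frequency combination that cancels a 1/f^2 ionospheric range
# error: h = f1^2/(f1^2 - f2^2)*h1 - f2^2/(f1^2 - f2^2)*h2.
def iono_free_height(h1, h2, f1, f2):
    d = f1 ** 2 - f2 ** 2
    return (f1 ** 2 * h1 - f2 ** 2 * h2) / d

f_ku, f_c = 13.6e9, 5.3e9       # TOPEX-like Ku- and C-band frequencies
h0, e = 1000.0, 5e19            # illustrative true height and error scale
h1 = h0 + e / f_ku ** 2         # biased measurement at f1
h2 = h0 + e / f_c ** 2          # biased measurement at f2
```

Substituting the biased measurements into the combination cancels the e/f² terms exactly, recovering h0 to first order, which is why dual-band altimeters (the "e" entries in Table 1) can correct the ionosphere from their own data.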
where R_γ = (R_E + h)/R_E is a consequence of the spherical observation geometry. For typical satellite radar altimeters, the pulse-limited footprint over a quasi-flat surface is on the order of two kilometers in diameter. The pulse-limited area is

A_P = π r_P² = π c τ h / R_γ   (3)

Flat surface response

As the pulse continues to impinge and spread over the surface, the resulting pulse-limited annuli all have areas equal to that of the initial pulse-limited footprint. Hence, the received power tends to maintain the level corresponding to the peak of the initial response. The pulse-limited areas expand in response to increasing large-scale surface roughness, which in the oceanographic context is expressed as significant wave height SWH. The idealized flat-surface response is a function of the normalized delay u = (t − 2h/c)/τ:

χ(t) = 0,  u ≤ 0
χ(t) = u,  0 < u ≤ 1
χ(t) = 1,  1 < u   (6)

Radiometric response

The classical single-pulse radar equation that describes the post-processing peak power P is

P = P_T G²(θ) λ² σ C_R / (4π)³ h⁴   (4)

where σ is the effective radar cross section, P_T is the transmitted power, and G(θ) is the one-way power gain of the antenna. For a pulse-limited altimeter over a quasi-flat surface, the peak power becomes

P = P_T G² λ² C_R c τ σ⁰ / (4π)³ h³ R_γ   (5)

The power described by Eq. (5) is proportional to compressed pulse length τ and to the inverse cube of height h.
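Equation (3) can be exercised with TOPEX-like numbers to confirm the "order of two kilometers" figure quoted above (a sketch; the parameter values are illustrative):

```python
import math

# Pulse-limited footprint from A_P = pi * c * tau * h / R_gamma with
# R_gamma = (R_E + h) / R_E. A 3.125 ns compressed pulse (320 MHz
# bandwidth) at 1336 km altitude gives a roughly 2 km diameter.
R_E = 6_371_000.0               # mean Earth radius (m)
C = 299_792_458.0               # speed of light (m/s)

def pulse_limited_diameter(tau, h):
    r_gamma = (R_E + h) / R_E
    return 2.0 * math.sqrt(C * tau * h / r_gamma)

d = pulse_limited_diameter(1.0 / 320e6, 1_336_000.0)   # about 2.0e3 m
```

Note the square-root dependence: halving the compressed pulse length shrinks the footprint diameter only by a factor of √2.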
DERAMP ON RECEIVE
A satellite-based radar altimeter needs to measure distance accurately, but only for an essentially planar surface oriented orthogonally to the radar's line of sight. Conservative design suggests that all radar resources should be concentrated near the reflection from that surface. Hence, ocean-viewing altimeters have a small range window that tracks the delay and strength of the surface reflection. The ocean's surface has a significant wave height of less than 20 m or so, and its radar backscatter coefficient spans 3 dB to 20 dB, to cite parameters used in the testing of the TOPEX altimeter. In practice, range gate delay and backscatter tracking are met with two servo-regulator feedback loops (Fig. 4). The first loop is a second-order height tracker consisting of range position (alpha tracker) and range rate (beta tracker). The second loop is the receiver gain control (AGC). Altimeter height measurement is given by the setting of the range delay coarse and fine values, corrected by the remaining height error measured from the waveform's position in the tracker. Surface wind speed and significant wave height are derived from the AGC values and the waveform's shape, respectively.
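The height loop can be caricatured in a few lines. This is a generic alpha-beta (position/rate) tracker, not the TOPEX flight algorithm; the gains and the 50 ms update interval are illustrative:

```python
# Generic second-order tracker: predict the gate position from the
# current rate estimate, then correct position (alpha branch) and rate
# (beta branch) from the error measured against the new waveform.
def alpha_beta_track(measurements, dt=0.05, alpha=0.6, beta=0.2):
    pos, rate = measurements[0], 0.0
    for z in measurements[1:]:
        pred = pos + rate * dt           # predict next gate position
        err = z - pred                   # error from the waveform position
        pos = pred + alpha * err         # range-position correction
        rate = rate + (beta / dt) * err  # range-rate correction
    return pos, rate
```

Fed a noiseless constant-rate delay history, the loop locks on with zero steady-state lag; that is the point of making the tracker second order rather than position-only.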
The precision of an individual height measurement is determined by range resolution. If a simple short pulse were transmitted, then the height resolution would equal the pulse length. The principal disadvantage of a short pulse is that it contains little energy. The inherent resolution of a pulse is inversely proportional to its bandwidth. Most radar altimeters use some form of modulation on the transmitted signal to maintain a large bandwidth within a longer pulse, thus increasing the transmitted energy at no loss of resolution. A well-established modulation technique used in many airborne radar altimeters is frequency-modulated continuous wave (FM-CW), from which height is proportional to the frequency difference between the transmitted and received signals. An alternative approach is pulse compression, whereby a relatively long large time-bandwidth pulse is transmitted and then processed (compressed) after reception to a simple short pulse of unity time-bandwidth.
Satellite-based radar altimeters use a different and specialized form of modulation and demodulation. The relatively distant and narrow range window typical of an ocean-viewing satellite radar altimeter is ideal for the full-deramp (stretch) technique [17], which was first applied to altimetry by MacArthur [18] in the Seasat altimeter. The defining feature of the full-deramp technique is that the transmitted pulse length is longer than the depth of the
Figure 4. The functional diagram of a modern satellite altimeter is centered on the waveform tracker, whose outputs are: (1) translated into science data to be returned via telemetry, and (2) transformed into closed-loop timing and gain controls for the radar.
range window.
The full-deramp (dechirp) technique employs a transmitted chirp (linear FM signal) of duration T_P, bandwidth B_P, chirp rate k_P, and center frequency f0. For a pulse initiated at t = 0, the transmitted frequency is f0 + k_P t, t ≤ T_P, as shown in Fig. 5. The bandwidth is B_P = k_P T_P, and the associated time-bandwidth product is k_P T_P². Pulse bandwidths and time-bandwidth products for satellite radar altimeters are large, on the order of 300 MHz and 30,000, respectively. The compressed pulse length is given by the inverse bandwidth of the transmitted pulse, or alternatively, by the original pulse length T_P divided by the time-bandwidth product. Thus, a full-deramp altimeter's height resolution is

Δτ = 1/(k_P T_P) seconds, or c/(2 k_P T_P) meters   (7)
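A quick check of Eq. (7) with the order-of-magnitude numbers quoted above (the 320 MHz bandwidth matches the text; the 100 µs pulse length is assumed for illustration):

```python
# Eq. (7): the compressed pulse length is 1/(k_P * T_P) seconds,
# i.e. c/(2 * k_P * T_P) metres, where k_P * T_P is the bandwidth.
C = 299_792_458.0

def deramp_resolution_m(bandwidth_hz):
    return C / (2.0 * bandwidth_hz)

res = deramp_resolution_m(320e6)     # about 0.47 m of range resolution
tb_product = 320e6 * 100e-6          # time-bandwidth product ~ 30,000
```

These values are consistent with the "on the order of 300 MHz and 30,000" figures in the text.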
cies are retained, to produce the set of deramped data signals shown in the figure.
The key to many characteristics unique to a radar altimeter lies in this deramp domain. The deramped signal from the mth individual scatterer at time delay t_m is a CW segment of length T_P and frequency

f_m = 2 k_P (t_m − t_C),  t_m ≤ T_R   (8)
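Equation (8) maps delay offsets into constant tones; a short sketch (the chirp rate is illustrative, and the factor of 2 follows the equation as printed, since the exact factor depends on the chirp-rate convention):

```python
# Deramped tone for a scatterer at delay t_m relative to the range
# window centre t_c, per Eq. (8) as printed: f_m = 2 * k_p * (t_m - t_c).
def deramp_frequency(t_m, t_c, k_p):
    return 2.0 * k_p * (t_m - t_c)

k_p = 320e6 / 200e-6                    # 1.6e12 Hz/s chirp rate (assumed)
f = deramp_frequency(10e-9, 0.0, k_p)   # 10 ns offset -> 32 kHz tone
```

This is the essence of deramp-on-receive: a nanosecond-scale delay measurement is converted into a kilohertz-scale frequency measurement, which is far easier to digitize and resolve.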
Tracking
The TOPEX Ku-band channel averages 228 pulses over a so-called track interval of about 50 ms to produce the smoothed waveforms delivered to the tracker at a 20 Hz rate. For each waveform, the range window (Fig. 2) is partitioned into 128 sample positions or bins, each of size equal to the radar's range resolution. Groups of bins are organized into tracking gates of various sizes, whose outputs are used to calculate the parameters that control the altimeter's feedback loops and to provide the first-order science data from the instrument [20]. The tracking algorithm, based on an Intel 80186 microprocessor, iterates at the waveform input rate of 20 Hz. Each tracking gate is normalized so that its gain is inversely proportional to its width, which is the number of samples that it spans. The range width of each gate is a power of two times the intrinsic range resolution of the altimeter.
The noise gate estimates the mean noise level from samples 5 through 8, which occur well before the waveform begins to respond to surface reections. The mid-point of the
waveforms leading edge is tracked to keep it centered between samples 32 and 33. The AGC gate spans samples 17
through 48, which are centered on bin 32.5, the so-called
track point.
The output of the AGC gate is fed back to control the
altimeters gain loop. TOPEX is required to measure waveform power (proportional to sigma-0) with an accuracy of
1 dB and a precision of 0.25 dB. In response to the
waveform levels observed in the AGC gate, the receiver attenuator is adjusted in 1 dB steps. To meet the accuracy
and precision requirements, from pulse to pulse the attenuator setting is dithered between neighboring steps. This
has the effect of interpolating the mean AGC setting to
an effective accuracy of less than 0.1 dB when averaged
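The dithering idea can be sketched as follows: stepping a 1 dB attenuator between neighboring settings from pulse to pulse interpolates the mean AGC value to a fraction of a step. The duty-cycle scheme and pulse count here are illustrative assumptions.

```python
def dithered_mean(target_db, n_pulses=1000):
    """Alternate between the floor and ceiling 1 dB settings so that the
    pulse-averaged attenuation approaches target_db."""
    lo = float(int(target_db))       # lower 1 dB step
    hi = lo + 1.0
    frac = target_db - lo            # fraction of pulses spent at hi
    settings = []
    acc = 0.0
    for _ in range(n_pulses):
        acc += frac                  # simple first-order dither pattern
        if acc >= 1.0:
            settings.append(hi)
            acc -= 1.0
        else:
            settings.append(lo)
    return sum(settings) / len(settings)

mean_db = dithered_mean(23.3)
err = abs(mean_db - 23.3)            # far smaller than the 1 dB step size
```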
Figure 7. The ocean's bottom topography causes subtle variations in the local gravity field, which are expressed as small tilts in the ocean's surface. These are observable by satellite altimetry.
DELAY-DOPPLER
The delay-Doppler technique leads to better measurement
precision, a smaller effective footprint at nadir, and increased tolerance of along-track surface gradients typical
of continental ice sheets. The central innovation in the
delay-Doppler concept [24, 25] is that it combines the benefits of coherent and incoherent signal processing, rather
than relying exclusively on incoherent averaging as is the
case for all conventional satellite radar altimeters. The coherent processing stages, patterned after well-established
methods developed for synthetic aperture imaging radar
(SAR), allow much more of the instrument's radiated power
to be converted into height measurement data. One consequence of delay-Doppler signal processing is that less
transmitted power is required than with a conventional altimeter. The delay-Doppler technique also enjoys the benefits of the pulse-limited range measurement geometry.
The coherent processing transforms groups of data into
the Doppler frequency domain, where delay corrections are
applied, analogous to SAR range curvature correction [26].
Doppler processing determines the size and location of the
along-track footprint, which is (1) smaller than the pulselimited diameter, (2) a constant of the system, and (3) relatively immune to surface topographic variations. Waveforms are incoherently summed corresponding to each surface position as the altimeter progresses along track. One
direct result is that each height measurement from a delay-Doppler altimeter has more incoherent averaging than is
possible from a conventional radar altimeter.
The delay-Doppler technique exploits coherence between pulses, in contrast to the pulse-to-pulse incoherence
that is the norm for conventional pulse-limited altimeters.
h(fD) = αR (λ² h / (8V²)) fD²    (11)
where V is the velocity of the spacecraft along its orbit. Recall that the deramped data in the range direction appear
as constant (CW) frequencies. Each range delay increment
translates into an equivalent CW frequency shift. These
unwanted frequency shifts may be nullied by multiplying
the data eld by equal and opposite CW signals prior to the
range IFFT, analogous to the fine tracking frequency shift
of a conventional radar altimeter. The result is evident in
Fig. 9, which compares the flat surface response waveform
(as it would appear in the delay-Doppler domain) before
and after delay compensation.
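Equation (11), as reconstructed above, can be evaluated for assumed, roughly TOPEX-like orbit values; the wavelength, height, velocity, and the form of the orbital factor αR below are all assumptions for illustration.

```python
LAM = 0.022                  # assumed Ku-band wavelength, m
H = 800e3                    # assumed orbit height, m
V = 7.45e3                   # assumed orbital velocity, m/s
ALPHA_R = 1.0 + H / 6371e3   # orbital factor (assumed form)

def delay_curvature(f_d):
    """Extra delay, expressed in meters of height, of the f_d Doppler bin
    relative to the zero-Doppler bin, following Eq. (11)."""
    return ALPHA_R * (LAM ** 2 * H) / (8.0 * V ** 2) * f_d ** 2

# The correction grows quadratically with Doppler frequency: the bin at
# twice the Doppler offset needs four times the correction.
h1 = delay_curvature(2000.0)   # 2 kHz Doppler bin
h2 = delay_curvature(4000.0)   # 4 kHz Doppler bin
```

The quadratic growth is why the correction is applied per Doppler bin before the range IFFT, exactly as SAR range curvature correction is applied per azimuth frequency.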
Implementation
The delay-Doppler altimeter introduces additional along-track processing steps (Fig. 10) after the range deramp and
before the range IFFT. The net effect of the extra processing
is to transform the signal space from one to two dimensions.
A Fourier transform is applied to these data in the along-track dimension, implemented in real time on-board as a
set of parallel FFTs that span the range window width.
Signals in the resulting two-dimensional deramp/Doppler
domain are phase shifted to eliminate the unwanted range.
The delay correction phase functions are

Φ(fD, t) = exp{+j 2π kP (2/c) h(fD) t}    (12)
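The processing order described above (deramped pulses, along-track FFT, delay-correction phase, range IFFT) can be sketched end to end. The tiny array sizes, the brute-force DFT helper, and the use of h(fD) = 0 (so the phase factor reduces to unity) are illustrative simplifications, not the flight implementation.

```python
import cmath

def dft(x, inverse=False):
    """Brute-force DFT/IDFT, adequate for this tiny illustrative example."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * i * k / n)
               for k in range(n)) for i in range(n)]
    return [v / n for v in out] if inverse else out

def delay_phase(k_p, h_fd, t, c=2.998e8):
    """Phase factor of the delay-correction function reconstructed in
    Eq. (12), which nullifies the unwanted CW shift of each Doppler bin."""
    return cmath.exp(1j * 2 * cmath.pi * k_p * (2.0 / c) * h_fd * t)

# One range line per pulse; the along-track FFT runs across pulses:
pulses = [[complex(p + r, 0) for r in range(4)] for p in range(4)]
doppler = [dft([pulses[p][r] for p in range(4)]) for r in range(4)]
# Apply the delay correction per Doppler bin (identity here, h_fd = 0):
corrected = [[doppler[r][d] * delay_phase(3.1e12, 0.0, r * 1e-9)
              for d in range(4)] for r in range(4)]
# Range IFFT within one Doppler bin yields that bin's waveform:
waveform0 = dft([corrected[r][0] for r in range(4)], inverse=True)
```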
Frequency in the time delay direction is proportional to (minimum) delay relative to the range track point, and frequency in the along-track direction is proportional to the
scatterers along-track position relative to the zero-Doppler
position.
The remaining data processing is carried out in parallel,
consisting of a range IFFT at each Doppler frequency bin,
detection, and assignment of the height estimates to their
respective along-track positions. The process is repeated
over subsequent blocks of data, from which many looks are
accumulated at each along-track position. As the altimeter passes over each scatterer, the corresponding height
estimates move in sequence from the highest Doppler filter to each lower frequency filter, until the scatterer is out of sight. Thus, the final waveform at each along-track position is the average (incoherent sum, normalized) of estimates from all Doppler filters. If the Doppler filters are
designed to span the along-track antenna beamwidth, then
all data along-track contribute to the height estimates.
The resulting coverage is shown in Fig. 11, which contrasts the scanning beam of a conventional altimeter with
Ideally, the along-track zero-Doppler position is equivalent to the geometric sub-satellite point, nadir. The along-track location of the zero-Doppler plane is independent of
satellite attitude, and also is independent of terrain slope.
Thus, the height measurements at all Doppler frequencies
can be located along-track with respect to zero Doppler.
In practice, the zero Doppler bin location may not coincide
with nadir. A vertical spacecraft velocity component adds
a Doppler shift to the signals. Vertical velocity and its implied Doppler error can be estimated. Offsetting Doppler
shifts can be applied in response to a spacecraft vertical
velocity component to assure registration of the Doppler
bins with their corresponding along-track positions defined
with respect to nadir.
Unfocused condition

The foregoing is predicated on a simple isometry between Doppler frequency and along-track spatial position. This equivalence is valid for an along-track resolution that is comparable to or larger than the first Fresnel zone. In synthetic aperture radar parlance, this zone is known as the unfocused SAR resolution. Using the classic quarter-wavelength criterion, the radius a0 of the first Fresnel zone is

a0 = √(λh/2)

which for a Ku-band altimeter leads to an along-track (unfocused) dimension of 180 m from an altitude of 800 km (or about 230 m from an altitude of 1334 km). As these quantities are less than the nominal delay-Doppler along-track cell size of 250 m, the processing task is trivial: no focusing is required. Focus operations would be required if the Fresnel radius were larger than the along-track cell dimension. If a smaller cell size is desired, such as for altimetry over land, or a very high satellite altitude or longer radar wavelength were chosen, then the along-track processor would have to incorporate phase matching to focus the data.
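The quoted unfocused dimensions can be checked numerically. An assumed Ku-band wavelength of 2.2 cm is used; the figures come out within a few percent of the rounded values in the text.

```python
import math

LAM = 0.022   # assumed Ku-band wavelength, m

def unfocused_dimension(h):
    """Along-track unfocused dimension 2*a0, with a0 = sqrt(lambda*h/2)."""
    return 2.0 * math.sqrt(LAM * h / 2.0)

d_800 = unfocused_dimension(800e3)    # roughly the 180 m quoted
d_1334 = unfocused_dimension(1334e3)  # roughly the 230 m quoted
```

Both results fall below the nominal 250 m delay-Doppler cell size, which is why no focusing is needed in the satellite case.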
Incoherent Averaging
There are two stages in a delay-Doppler altimeter at which
incoherent averaging takes place: within each Doppler bin,
and across neighboring bins. Detected returns from many
pulses are averaged together to build the multi-look waveform within each bin. For a typical satellite altimeter, these
waveforms would accumulate within each 250-m bin at
about a 26 Hz rate. Subsequent averaging (incoherent integration) over adjacent waveforms typically extends over
0.1 s (or 1.0 s), during which time the antenna illumination
pattern progresses in the along-track direction by an appreciable distance, approximately 0.6 km (or 6 km). Alert:
the relative location of each delay-Doppler-derived height estimate is synchronized to coincide with the forward motion of the instrument, thus eliminating along-track elongation of the footprint as is the case for a conventional altimeter. The result is that a delay-Doppler altimeter generates significantly more incoherent averaging than a conventional altimeter, and at less compromise in along-track footprint size (Fig. 12).

Figure 12. DDA processing increases the number of independent samples of the surface return, which reduces the intrinsic noise, thus improving the sea surface height measurement precision.
One immediate benefit is better measurement precision.
Consider the case of height precision in the context of
geodetic requirements. Figure 13 shows a plot of height
precision versus SWH for a delay-Doppler altimeter and a
conventional radar altimeter (RA). The plot shows that the
DDA meets the height precision requirement of 1 cm at 3
m SWH, a result that is consistent with previous analyses
[27]. The figure also shows that the DDA is about half as
sensitive as an RA to increasing SWH. This is important for
geodetic applications, as measurement precision degraded
by larger significant wave heights is a major source of noise
in Geosat surface slope estimates [6].
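The precision benefit follows from a standard property of incoherent averaging: the relative noise of an average of N independent speckle samples falls as 1/sqrt(N). The look counts below are illustrative assumptions, not the actual TOPEX or DDA budgets.

```python
import math

def relative_noise(n_looks):
    """Relative (speckle) noise after averaging n_looks independent samples."""
    return 1.0 / math.sqrt(n_looks)

conventional = relative_noise(100)   # assumed conventional look count
dda = relative_noise(400)            # assumed larger DDA look count
improvement = conventional / dda     # four times the looks halves the noise
```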
Flat surface response

The customary concept of flat surface response applies only to the delay time dimension for a delay-Doppler altimeter. This means that the inherent delay/elevation ambiguity characteristic of pulse-limited altimeters is reduced from two spatial dimensions to only one dimension. The cross-track ambiguity that remains is suggested in Fig. 8, which shows that at any given Doppler frequency, there are two possible sources for reflections having a given (relative) time delay. These arise from either side of the minimum delay locus, which nominally is the sub-satellite track. Of course, the point of first reflection (at zero relative delay time) may be to one side of the sub-satellite track, as would be true in general when there is a non-zero cross-track terrain slope. The cross-track ambiguity and the delay/elevation ambiguity both may be at least partially resolved through application of other means such as the monopulse phase sensing technique.

Within a given Doppler bin, the flat-surface response as a function of time delay t is

fD(t) = 0,                                               (1/τ)[t − 2h/c] ≤ 0
fD(t) = √((1/τ)[t − 2h/c]),                              0 < (1/τ)[t − 2h/c] ≤ 1
fD(t) = √((1/τ)[t − 2h/c]) − √((1/τ)[t − 2h/c] − 1),     1 < (1/τ)[t − 2h/c]    (13)

where τ is the compressed pulse length (Eq. 7). The curve of Eq. (13) represents the (average) strength of the altimeter's response to illumination of a quasi-flat surface as a function of time delay, just as in the conventional case. Note that beyond the leading edge the response to a flat surface has much less relative power for the delay-Doppler altimeter than for the conventional radar altimeter described by Eq. (6). The cross-track (time-delay) width of fD(t) is approximately equal to τ.

Radiometric response
The delay-Doppler altimeter can take advantage of reflections from the entire length of the antenna illumination
pattern in the along-track direction to estimate the height
of each resolved patch of sub-satellite terrain. This implies that substantially more integration is possible than
in a pulse-limited altimeter. Under the assumption that
the dominant scattering mechanism is non-specular, the
integration gain is linear in power. It follows that the total
power arising from each resolved cell is larger for the delay-Doppler altimeter than for a conventional pulse-limited altimeter, even though the post-processing footprint size is
smaller.
Height estimation for each resolved scattering cell benefits from integration as long as that cell is illuminated by the antenna pattern. For each scattering cell, the equivalent along-orbit integration is governed by the length of the antenna footprint, expanded by the orbital factor αR. The along-orbit integration may be interpreted in terms of an equivalent area

AD = 2h √(λ c τ αR)    (14)

The post-processing power of the delay-Doppler flat-surface response function is

PD = PT G² λ² √(c τ αR) σ0 / (2 (4π)³ h^(5/2))    (15)
to measure such cross-track surface slopes. The phase-monopulse technique uses this principle to estimate the
angle of arrival of reections from a tilted surface collected
through two antennas separated in the cross-track direction of the altimeter (Fig. 13). In a radar altimeter that uses phase-monopulse [28], a scatterer at cross-track distance y away from nadir precipitates a path-length difference, observable through the cross-channel differential phase. The cross-track phase-monopulse technique can
measure the presence of small (mean) cross-track surface
slopes. Once measured, the slope data can be applied to recover accurate estimates of the height h of (gently) sloping
surfaces. The cross-track phase-monopulse technique complements the delay-Doppler technique, which is an along-track enhancement.
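The angle-of-arrival relation behind phase monopulse can be sketched with the 14.5 cm D2P baseline quoted below; the wavelength value and the standard two-antenna interferometric phase relation are assumptions for illustration.

```python
import math

LAM = 0.0214          # assumed wavelength near 14 GHz, m
BASELINE = 0.145      # cross-track antenna separation, m (from the text)

def differential_phase(theta):
    """Cross-channel phase (rad) for arrival angle theta off nadir, from
    the path-length difference BASELINE * sin(theta)."""
    return 2.0 * math.pi * BASELINE * math.sin(theta) / LAM

def arrival_angle(phase):
    """Invert the relation (valid while |phase| remains unambiguous)."""
    return math.asin(phase * LAM / (2.0 * math.pi * BASELINE))

theta_in = math.radians(0.2)          # a gentle cross-track tilt
theta_out = arrival_angle(differential_phase(theta_in))
```

For such small angles the differential phase stays well inside one cycle, so the inversion is unambiguous and the cross-track slope can be recovered directly.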
D2P Airborne Testbed
The first embodiment of the delay-Doppler altimeter combined with a phase-monopulse cross-track receiver is the D2P radar developed at the Johns Hopkins University Applied Physics Laboratory [29]. The D2P is a coherent airborne radar altimeter that operates from 13.72 to 14.08
GHz (Ku-band). The system transmits a linear FM chirp
signal at 5 Watts peak power, with pulse lengths ranging
from 0.384 to 3.072 microseconds. The system uses two receiver channels and a pair of antenna arrays, separated by
a 14.5 cm baseline, to provide for angle measurements in
the cross-track direction. The system provides real-time display of the delay-Doppler spectrum and cross-track phase
of a burst sequence (typically 16 consecutive pulses). The
D2P system typically is installed into a P-3 research aircraft. Recent campaigns include flights to Greenland, Svalbard, Antarctica, and over sea ice.
FUTURE DIRECTIONS
CryoSat
CryoSat [30] is the first satellite of the European Space Agency's Living Planet Programme to be realized in the
framework of the Earth Explorer Opportunity Missions.
The mission concept was selected in 1999, with launch originally anticipated in 2004. Unfortunately, the launch (October 2005) failed. A rebuild, CryoSat-2, was approved by ESA, now scheduled for launch in 2009. The CryoSat orbit will have a high inclination (92°) and a long repeat period
(369 days, with a 30-day sub-cycle), designed to provide
dense interlocking coverage over the polar regions. Its aim
is to study possible climate variability and trends by determining the variations in thickness of the Earth's continental ice sheets and marine sea ice cover.
The CryoSat altimeter will be the first of its kind: the SAR/Interferometric Radar ALtimeter (SIRAL), whose advanced modes are patterned after the D2P altimeter [31], and whose flight hardware has extensive Poseidon heritage. Unlike previous radar altimeter missions, CryoSat
will downlink all altimetric data. These data will support
three modes: conventional, interferometric, and synthetic
aperture. The conventional (pulse-limited) mode will be
used for open ocean (for calibration and sea surface height
AltiKa
WSOA
The wide-swath ocean altimeter (WSOA) [32] has been promoted by the Jet Propulsion Laboratory as a means to
overcome the dominant time/space coverage dilemma that
confronts ocean altimetry. The standard altimeter measurement geometry is strictly nadir-viewing: only one subsatellite height prole is gathered during each pass of the
spacecraft. Whereas nadir heights can be very accurate,
the surface heights of all regions between nadir tracks remain unobserved, and hence unknown. Many applications
would prefer a substantially wider swath of simultaneous
height measurements.
Several altimeters have been proposed over the years
that would scan the surface below with a set of altimetric
beams arrayed orthogonally to the sub-satellite path. The
goal is reasonable: to generate a wide swath of height measurements, rather than the single sub-satellite line of data
points typically available. However, there are problems
with this general approach. The dominant difculty is that
the measurement is based on triangulation, rather than
the much more robust (minimum) range measurement of
nadir altimetry. Off-nadir triangulation is extremely sensitive to the satellite's roll angle error. Height accuracy within a beam-limited paradigm, at an off-nadir measurement angle θ, depends to first order on h tan θ sec² θ,
which increases rapidly from zero as the off-nadir angle
is increased. In contrast, a pulse-limited nadir altimeter's
height measurement accuracy is not degraded in response
to small attitude errors at the spacecraft. The height accuracy requirements typical of oceanographic applications, a few cm, cannot be met by a single-pass multi-beam or wide-swath system given the state of the art in controlling or determining spacecraft (roll) attitude.
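The sensitivity argument above can be made numeric. The first-order factor h tan θ sec² θ is taken from the text; the altitude, beam angle, and the 1 arc-second roll knowledge error below are illustrative assumptions.

```python
import math

def height_error(h, theta, roll_error):
    """First-order height error (m) of an off-nadir triangulation
    measurement: h * tan(theta) * sec(theta)**2 per radian of roll error."""
    return h * math.tan(theta) / math.cos(theta) ** 2 * roll_error

H = 1334e3                        # assumed Jason-class altitude, m
theta = math.radians(2.0)         # assumed off-nadir beam angle
roll = math.radians(1.0 / 3600)   # a 1 arc-second roll knowledge error

err_m = height_error(H, theta, roll)    # tens of centimeters, not a few cm
nadir_err = height_error(H, 0.0, roll)  # pulse-limited nadir case: zero
```

Even arc-second-level roll knowledge leaves a height error two orders of magnitude beyond the few-cm oceanographic requirement, while the nadir (θ = 0) measurement is unaffected to first order.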
The WSOA concept promises to overcome this roadblock by combining swaths from ascending and descending
passes. The accurate nadir heights from one pass will be
applied to remove systematic cross-track height errors in
the intersecting swath.
Dual-use altimetry
To date, the two themes of dynamic mesoscale ocean topography and geodesy have remained disjoint. Geodesy requires a non-repeating orbit, whereas traditional oceanographic altimetry, including mesoscale observations, relies
on exact-repeat orbits. Recent investigations suggest that
the two objectives could be satised by one altimeter in
a non-repeating orbit, if adequate near-simultaneous ancillary data were available from a more conventional mission such as Jason. The feasibility of dual-use altimetry is
a work in progress [35]. If verified, such a mission could
BIBLIOGRAPHY
1. L.-L. Fu and A. Cazanave, Satellite Altimetry and the Earth
Sciences, Academic Press, 2001, 463 pages.
2. URL/AVISO, http://www.aviso.oceanobs.com/, July 2003.
3. V. L. Pisacane,Satellite techniques for determining the geopotential of sea surface elevations, Journal of Geophysical Research, vol.91, pp. 23652371, 1986.
4. M. E. Parke, R. H. Stewart, D. L. Farless, and D. E. Cartwright,
On the choice of orbits for an altimetric satellite to study
ocean circulation and tides, Journal of Geophysical Research,
vol.92, pp. 1169311707, 1987.
5. R. K. Raney, On orbit selection for ocean altimetry, IEEE
Transactions Geoscience and Remote Sensing, (to appear),
2003.
6. URL/Geodesy, http://www.ngdc.noaa.gov/mgg/bathymetry/
predicted/explore.HTML, (accessed July 2003).
7. D. T. Sandwell and W. H. F. Smith, Marine gravity anomaly
from Geosat and ERS-1 satellite altimetry, J. Geophys. Res.,
vol.102, pp. 1003910054, 1997.
8. Special Sections: Geosat Science and Altimeter Technology,
in Johns Hopkins APL Technical Digest, Vol10, No. 4, 1989.
9. URL/Jason,
http://www-aviso.cls.fr/html/missions/jason/
welcome uk.html, (accessed July 2003).
10. URL/RA-2,
http://envisat.esa.int/instruments/tourindex/ra2/, (accessed July 2003).
11. P. C. Marth, J. R. Jensen, C. C. Kilgus, et al., Prelaunch performance of the NASA altimeter for the TOPEX/Poseidon Project, IEEE Transactions on Geoscience and Remote Sensing, vol. 31, pp. 315-332, 1993.
12. A. R. Zieger, D. W. Hancock, G. S. Hayne, and C. L. Purdy, NASA radar altimeter for the TOPEX/Poseidon project, Proceedings of the IEEE, vol. 79, pp. 810-826, 1991.
13. S. J. Keihm, M. A. Janssen, and C. S. Ruf, TOPEX/Poseidon microwave radiometer (TMR) III: Wet tropospheric range correction and pre-launch error budget, IEEE Transactions on Geoscience and Remote Sensing, vol. 33, pp. 147-161, 1995.
14. S. Musman, A. Drew, and B. Douglas, Ionospheric effects on Geosat altimeter observations, J. of Geophysical Research, vol. 95, pp. 2965-2967, 1990.
15. D. B. Chelton, J. C. Ries, B. J. Haines, et al., Satellite Altimetry, in Satellite Altimetry and Earth Sciences, International Geophysics Series, L.-L. Fu and A. Cazenave, Eds. San Diego: Academic Press, 2001, pp. 1-122.
16. R. K. Moore and C. S. Williams, Jr., Radar return at near-vertical incidence, Proceedings of the IRE, vol. 45, pp. 228-238, 1957.
17. G. S. Brown, The average impulse response of a rough surface and its applications, IEEE Antennas and Propagation, vol. 25, pp. 67-74, 1977.
18. W. J. Caputi, Stretch: a time-transformation technique, IEEE Transactions on Aerospace and Electronic Systems, vol. AES-7, pp. 269-278, 1971.
19. J. L. MacArthur, C. C. Kilgus, C. A. Twigg, and P. V. K. Brown, Evolution of the satellite radar altimeter, Johns Hopkins APL Technical Digest, vol. 10, pp. 405-413, 1989.
20. E. J. Walsh, Pulse-to-pulse correlation in satellite radar altimetry, Radio Science, vol. 17, pp. 786-800, 1982.
21. D. B. Chelton, E. J. Walsh, and J. L. MacArthur, Pulse compression and sea-level tracking in satellite altimetry, Journal of Atmospheric and Oceanic Technology, vol. 6, pp. 407-438, 1989.
22. D. T. Sandwell and W. H. F. Smith, Bathymetric Estimation, in Satellite Altimetry and Earth Sciences, L.-L. Fu and A. Cazenave, Eds. New York: Academic Press, 2001, pp. 441-457.
23. M. M. Yale, D. T. Sandwell, and W. H. F. Smith, Comparison of along-track resolution of stacked Geosat, ERS-1 and TOPEX satellite altimeters, J. Geophys. Res., vol. 100, pp. 15117-15127, 1995.
24. W. H. F. Smith and D. T. Sandwell, Global seafloor topography from satellite altimetry and ship depth soundings, Science, vol. 277, pp. 1956-1961, 1997.
25. R. K. Raney, The delay Doppler radar altimeter, IEEE Transactions on Geoscience and Remote Sensing, vol. 36, pp. 1578-1588, 1998.
26. R. K. Raney, Delay compensated Doppler radar altimeter, United States Patent 5,736,957, 1998.
27. R. K. Raney, Radar fundamentals: technical perspective, in Principles and Applications of Imaging Radar, Manual of Remote Sensing, F. Henderson and A. Lewis, Eds., 3rd ed. New York: Wiley Interscience, 1998, pp. 9-130.
28. J. R. Jensen and R. K. Raney, Delay Doppler radar altimeter: Better measurement precision, in Proceedings IEEE Geoscience and Remote Sensing Symposium IGARSS'98, Seattle, WA, 1998, pp. 2011-2013.
29. J. R. Jensen, Angle measurement with a phase monopulse radar altimeter, IEEE Transactions on Antennas and Propagation, vol. 47, pp. 715-724, 1999.
30. URL/D2P, http://fermi.jhuapl.edu/d2p, Johns Hopkins University Applied Physics Laboratory, (accessed July 2003).
31. URL/CryoSat, http://www.esa.int/export/esaLP/cryosat.html, European Space Agency, (accessed July 2003).
32. R. K. Raney and J. R. Jensen, An Airborne CryoSat Prototype: The D2P Radar Altimeter, in Proceedings of the
International Geoscience and Remote Sensing Symposium
IGARSS02. Toronto: IEEE, 2002.
33. URL/WSOA, http://ibib.grdl.noaa.gov/SAT/pubs/Jason2 paper.doc, (accessed July 2003).
34. R. K. Raney and D. L. Porter, WITTEX: An innovative threesatellite radar altimeter concept, IEEE Transactions on Geoscience and Remote Sensing, vol. 39, pp. 23872391, 2001.
35. URL/JHUAPL, http://fermi.jhuapl.edu/, (accessed July
2003).
36. W. H. F. Smith and R. Scharoo, ftp://falcon.grdl.noaa.gov/pub/walter/combi anim.gif, NOAA, (accessed July 2003).
R. KEITH RANEY
Johns Hopkins University,
Laurel, MD
RADAR APPLICATIONS
Radar (radio detection and ranging) systems attempt to infer information about remotely located objects from
reflections of deliberately generated electromagnetic waves at radio frequencies. Typically, the information
sought is detection of the presence of target objects in the midst of clutter, recognition (classification) of targets,
and estimation of target parameters such as range (distance from the radar antenna), bearing (azimuth and
elevation), orientation, velocity, acceleration, or backscattering cross section (reflectivity) distribution.
Early radar systems could only scan the environment, to detect an aircraft when it appeared in their
beam and to measure its range and bearing. The range resolution cell was determined by the length of the
unmodulated transmitted pulse and was much larger than the aircraft. Thus, it was reasonable to model the
aircraft as a point target and the system interference as white Gaussian thermal noise. Subsequently, the
detection problem was reduced to that of detecting a point target in white Gaussian noise. Modern radar
systems, however, are expected to perform the much more sophisticated tasks just stated for multiple targets
simultaneously, at the finest possible target resolution and with the highest possible accuracy. Additionally, the
domain of utilization of radar techniques has expanded beyond the traditional aircraft detection and ranges
to applications such as estimation of the parameter (range, velocity, acceleration, and backscattering cross
section) distribution of spread targets, aerial imaging, and ground or foliage penetrating radar imaging. To
achieve their expanded tasks, modern radar systems combine high-quality hardware with sophisticated signal
design and processing algorithm development and implementation based on statistical descriptions of both the
target characteristics and the clutter distributions.
Applications of modern radar can be found in the military, the civilian, and the scientific regime. Military
applications include search and surveillance of enemy targets; navigation, control, and guidance of weapons;
battlefield surveillance; and antiaircraft fire control. Among civilian applications, prominent are those in air,
water, and land transportation, including aircraft navigation; collision avoidance with both other aircraft and
terrain obstacles; detection and avoidance of weather disturbances and clean-air turbulence; altimetry; air
traffic control; shore-based ship navigation; collision avoidance for ships and small boats; harbor and waterway
traffic control; collision avoidance of land vehicles; tracking of vehicles; and traffic law enforcement; as well as
space applications in detection and tracking of satellites and control of rendezvous and docking of space vehicles.
Finally, scientific applications include remote sensing of the Earth's environment from aircraft and satellites
for planetary observation; weather radar for study and monitoring of precipitation, clouds, and major weather
disturbances; ground mapping; ground-penetrating radar for detection of buried objects; foliage-penetrating
radar for detection of hidden targets; and high-resolution imaging of objects and terrain via synthetic aperture
imaging radars.
where f0 is the carrier frequency at which the radar operates and u(t) is a pulse of duration T and bandwidth B smaller than the carrier frequency. The pulse illuminates a point target and gets reflected back towards the antenna. Let τ be the round-trip delay between the time at which the rising edge of the pulse leaves the radar antenna, gets reflected by the target, and is received back at the antenna. Since the target is moving, this delay will be a function τ(t) of time. Ignoring amplitude attenuation and constant phase shifts due to reflection, the received pulse will be
The information about the target motion is contained in the round-trip delay τ(t) as a function of time and the distortion it causes on the received pulse sR(t). The delay τ(t) depends on the target position at the instant of reflection, which for a signal received at time t, occurs at time t − τ(t)/2. Thus:
where R(t) is the target range, that is, the distance from the radar antenna to the target, as a function of time.
If the target moves slowly enough for the delay τ(t) to be approximately constant within the duration T
of the illuminating pulse, then the target can be regarded as stationary. However, it is often the case that the
target moves very fast in comparison with the pulse duration and different instants of the pulse are differently
delayed. In the general case, the relation among the target motion, the round-trip delay, and the received
pulse is too complicated to be tractable. Several simplifications can lead, however, to tractable mathematical
relations. Assume that the delay τ(t) is a smooth enough time function to be expandable into a Taylor series around the time instant t0 = τ(t = 0) ≡ τ0 at which the leading pulse edge is received back at the receiver:

where τ0(k) = (dkτ/dtk)(t0) is the kth derivative of the round-trip delay evaluated at the time instant t0.
Define now the target parameters of interest:
and use Eqs. (3) and (4) to relate them to the target range and its derivatives. Algebraic manipulation gives the
target parameters as the following functions of the range R0 = R(t0/2) and its derivatives R0(k) = (dk/dtk)R(t0/2)
at time t0 /2:
The approximations in Eqs. (6)-(8) are valid simplifications, for the practical cases of R0(1) ≪ c, of the exact expressions on p. 59 in Ref. 1.
From Eq. (8), it is seen that even the simplified expression for the target hyperacceleration γ0 is still
a complicated function of target range derivatives. Higher-order terms in Eq. (4) have coefficients that are
nonmanageable functions of target range derivatives. Fortunately, practical radar systems need only deal
with targets moving sufficiently smoothly for only the delay and Doppler coefficients and, occasionally, the
acceleration coefficient (and, rather rarely, the hyperacceleration coefficient) to be significant in the expansion
in Eq. (4). Additionally, only the delay term t0 is significant in the complex envelope u[t − D(t)], while the
higher-order terms affect only the phase in the exponential in Eq. (2). That is, the pulse received by the radar
from a single target illuminated with the pulse of Eq. (1) is
where t0, v0, α0, and γ0 are the target delay, Doppler (velocity), acceleration, and hyperacceleration parameters.
Matched-Filter Response to Received Pulse. The received pulse is processed through a bank of
filters, each matched to a different set of values of the target parameters. The filter matched to the set of
parameter values (τ, v, α, γ) has impulse response
Assume now that the input to this matched filter is the received pulse in Eq. (9), multiplied by an unknown complex-valued amplitude Ae^jθ and corrupted by additive noise, that is, Ae^jθ sR(t) + n(t). The output of the filter
will be
where T0 is the time interval during which the radar is in receive mode (e.g., the time interval between two
successive pulse transmissions). Equation (11) consists of two terms: a term due to noise and a signal term
containing the ambiguity function
Clearly, the magnitude of the signal term at any time t is maximized if the filter parameters are selected equal to
the target parameters, that is, if τ = t0, v = v0, α = α0, and γ = γ0. Target detection can be performed by monitoring
the matched-filter outputs at each instant t and examining whether they exceed a preset threshold or not. If
the threshold is exceeded, then target detection is declared. If a target is thus detected, its parameters are
subsequently estimated as the parameters of the matched filter that produces maximum output. A simplification
to this target detection or estimation rule can be achieved by noticing that the matched-filter delay is not
significant in that it only corresponds to a shift in the time instant of occurrence of the maximum of the matched-filter output. Indeed, a change in the round-trip delay t0 only changes the time at which the maximum occurs.
Thus, only a bank of matched filters needs to be used, in which the delay is fixed to zero and the target range
is estimated from the time instant of occurrence of the maximum of the matched-filter output. If, however, the
other target parameters are significant, an entire bank of filters needs to be used, with each filter matched to
different target parameter values. In summary, the criterion for declaring target detection is
and the set of values (t, v, α, γ) that provides the maximum constitutes the target parameter estimates. The
threshold is set so as to keep the probability of a false alarm below a specified maximum tolerance. Since the
complex amplitude Ae^jθ is unknown and varying, constant false alarm rate (CFAR) techniques need to be utilized
to set the threshold adaptively.
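The detection rule described above can be sketched with a tiny bank of filters, each matched to a candidate Doppler shift, with the delay recovered from the time of the peak output. The signal model, candidate grid, and fixed threshold below are illustrative assumptions (a real system would set the threshold via CFAR).

```python
import cmath

def matched_output(rx, replica):
    """Cross-correlation magnitude of rx against a candidate replica,
    evaluated at every admissible delay."""
    n, m = len(rx), len(replica)
    return [abs(sum(rx[t + k] * replica[k].conjugate() for k in range(m)))
            for t in range(n - m + 1)]

def make_pulse(doppler, m=32):
    """Complex exponential pulse; doppler in cycles per sample."""
    return [cmath.exp(2j * cmath.pi * doppler * k) for k in range(m)]

# Received signal: one pulse with Doppler 0.05 cycles/sample at delay 10.
true_delay, true_doppler = 10, 0.05
rx = [0j] * 64
for k, s in enumerate(make_pulse(true_doppler)):
    rx[true_delay + k] += 0.8 * s

# Bank of filters over candidate Dopplers; take the global peak.
candidates = [0.0, 0.025, 0.05, 0.075]
best = max(((max(out), out.index(max(out)), fd)
            for fd in candidates
            for out in [matched_output(rx, make_pulse(fd))]))
peak, est_delay, est_doppler = best
detected = peak > 10.0    # fixed threshold; CFAR would adapt this
```

The filter matched to the true Doppler produces the largest peak, and the peak position recovers the delay, mirroring the estimation rule in the text.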
Distributed-Target Measurements. The theory of single-target measurements needs to be modified
and extended if the radar is to operate at a resolution that is sufficiently high for the spread of one or more
target parameters to exceed the corresponding resolution bin. Examples of such targets include the terrain,
vegetation foliage, extended manmade objects such as buildings with more than one smooth surface, or even
aircraft when the resolution bin is significantly smaller than its typical dimensions. In these cases, the pulse
received at the radar antenna can be considered as the superposition of a small or large or even infinite number
of reflections from individual scattering centers on the target.
Serious difficulties in extending the theory of single- to multiple- (distributed-) target measurements
arise if the scattering centers of the target are not stationary during illumination with the radar pulse or
new ones emerge or several disappear due to target motion. Additionally, the target may be dispersive, that
is, its significant scattering centers may vary with frequency, making the target behavior rather complex. For
the theory of distributed-target measurements to remain tractable, the assumption needs to be made that the
target is represented by a possibly infinite, yet fixed, set of scattering centers. Additionally, no dispersion can
be allowed, that is, the scattering centers need to be frequency independent.
With these assumptions in mind, consider a target consisting of N scattering centers illuminated with
the pulse of Eq. (1). The reflected pulse measured at the radar antenna will be
producing the signal part at the output of the matched filter of Eq. (10):
Under the assumptions of stationarity of the target scattering centers during illumination with the radar pulse
and their independence, cross terms in the signal part of the magnitude squared of the matched-filter output
will be relatively small. Thus
Considering the limit of N densely packed scattering centers, the magnitude squared of the matched-filter
output becomes
In Eq. (17), the squared magnitude of A is the target backscattering cross-section distribution as a function of the delay,
Doppler (velocity), acceleration, and hyperacceleration parameters. Clearly, if the magnitude squared of the
ambiguity function consists of a single central spike with very narrow width, that is, if
then
In other words, the matched-filter response represents (and measures) the target backscattering cross section
for the particular values of delay, Doppler, acceleration, and hyperacceleration coefficients to which the filter
is matched. Consequently, a bank of matched filters, each adjusted to a different delay, Doppler, acceleration,
and hyperacceleration parameters, yields the entire target cross-section distribution. A simplification can be
obtained by considering only matched filters corresponding to zero delay in the bank and utilizing the
entire matched-filter output for parameter distribution estimation.
RADAR APPLICATIONS
Monopulse Tracking Radar. The sequential-lobing and conical-scan tracking radars require a train of
echo pulses in order to extract the angular error signal. This echo train must contain no amplitude-modulation
components other than the modulation produced by the scanning; otherwise the tracking accuracy will be
degraded. On the other hand, pulse-to-pulse amplitude modulations have no effect on tracking accuracy if the
angular measurement is based on a single pulse rather than on several. If more than one antenna beam is
used simultaneously, it is possible to extract angular error information from a single pulse from the relative
phase or the relative amplitude of the echo signal received in each beam. Tracking radars that derive angular
error information from a single pulse are known as simultaneous lobing or monopulse radars. An example of a
simultaneous lobing technique is the amplitude-comparison monopulse, in which the echoes received from two
offset antenna beams are combined so that both the sum and the difference signals are obtained simultaneously.
The sum signal provides range information, while the difference signal provides angular error information in
one angular direction.
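The sum-and-difference idea can be sketched numerically. The Gaussian beam shape, squint angle, and beamwidth below are illustrative modelling assumptions; the point is only that the difference-over-sum ratio is an odd, nearly linear function of the angular error near boresight:

```python
import math

def monopulse_ratio(theta, squint=1.0, beamwidth=3.0):
    """Amplitude-comparison monopulse with two beams squinted ±squint
    degrees off boresight; returns the difference/sum error signal."""
    def gain(angle):  # Gaussian approximation of a one-way beam pattern
        return math.exp(-2.776 * (angle / beamwidth) ** 2)
    a = gain(theta - squint)  # echo amplitude in the beam squinted toward +squint
    b = gain(theta + squint)  # echo amplitude in the beam squinted toward -squint
    return (a - b) / (a + b)

print(round(monopulse_ratio(0.0), 6))  # → 0.0 (on boresight)
print(monopulse_ratio(0.5) > 0 and monopulse_ratio(-0.5) < 0)  # → True
```

Both signals come from the same echo pulse, which is why pulse-to-pulse amplitude fluctuations cancel in the ratio.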
Track-While-Scan Radar. A search radar can obtain the track of a target by marking the coordinates
of the target from scan to scan. Such a radar is called track-while-scan radar and either requires a human
monitor to mark the target path manually or uses a digital computer to perform automatic detection and
tracking. The automatic detection is achieved by quantization of the range into intervals equal to the range
resolution. At each range bin, the detector integrates the number of pulses expected to be returned from a
target as the antenna scans past and compares them with a threshold to indicate the presence or absence of a
target. When a new detection is received, an attempt is first made to associate it with an existing track. When
the detection is declared independent of existing tracks, the radar attempts to make a smooth estimate of the
target's present position and velocity, as well as a predicted position and velocity. One method to achieve this
is by using either the so-called α-β tracker or a Kalman filter that utilizes a dynamic model for the trajectory
of a maneuvering target and the disturbance or uncertainty of the trajectory.
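A minimal α-β tracker of the kind mentioned above can be sketched in a few lines; the gains α = 0.5 and β = 0.3 and the unit scan interval are illustrative choices, not values from the text:

```python
def alpha_beta_track(measurements, dt=1.0, alpha=0.5, beta=0.3):
    """alpha-beta tracker: predict position and velocity one scan
    ahead, then correct the prediction with fixed fractions of the
    innovation (measured minus predicted position)."""
    x, v = measurements[0], 0.0
    history = []
    for z in measurements[1:]:
        x_pred = x + v * dt          # predicted position
        resid = z - x_pred           # innovation
        x = x_pred + alpha * resid   # smoothed position
        v = v + (beta / dt) * resid  # smoothed velocity
        history.append((x, v))
    return history

# Target moving 2 range units per scan: the velocity estimate
# converges toward 2 and the position follows the measurements.
scans = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0]
x_est, v_est = alpha_beta_track(scans)[-1]
print(round(x_est, 3), round(v_est, 3))
```

A Kalman filter generalizes this by computing the gains from the assumed trajectory and disturbance models rather than fixing them in advance.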
Navigation Radar. Navigation radar is used to provide the necessary data for piloting an aircraft from
one position to another without any need for navigation information transmitted to the aircraft from a ground
station. A self-contained aircraft navigation system utilizes a continuous-wave Doppler radar to measure the
drift angle and true speed of the aircraft relative to the Earth. The drift angle is the angle between the centerline
(heading) of the aircraft and the horizontal direction (ground track). A navigation radar requires at least three
non-coplanar beams to measure the vector velocity, that is, the speed and its direction, of the aircraft. Such
a radar measures the vector velocity relative to the frame of reference of the antenna assembly. This vector
velocity can be converted to a horizontal reference on the ground by determining the direction of the vertical
and the aircraft heading by some auxiliary means. Usually, the radar uses four beams initially symmetrically
disposed about the aircraft axis, with two facing forward and two facing rearward. If the aircraft vector velocity
is not in the direction of the aircraft heading, the two forward-facing beams will not read the same Doppler
frequency. This Doppler difference can be fed into a servomechanism that will align the axes of the antennas
with the ground track of the aircraft. The angular displacement of the antennas from the aircraft heading is
the drift angle, and the magnitude of the Doppler frequency is a measure of the speed along the ground track.
The use of the two rearward beams is similar, but improves the accuracy considerably by reducing the errors
caused by vertical motion of the aircraft and pitching movements of the antennas.
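The forward-beam geometry described above can be sketched as follows. The beam azimuth, depression angle, wavelength, and speed are illustrative assumptions; the point is that equal Doppler in the two forward beams indicates zero drift:

```python
import math

def beam_doppler(speed, drift_deg, beam_az_deg,
                 depression_deg=70.0, wavelength=0.03):
    """Doppler shift in a beam at azimuth beam_az_deg from the aircraft
    heading, for a ground-track velocity offset from the heading by the
    drift angle drift_deg."""
    az = math.radians(beam_az_deg - drift_deg)
    dep = math.radians(depression_deg)
    v_los = speed * math.cos(az) * math.cos(dep)  # line-of-sight speed
    return 2.0 * v_los / wavelength

# Zero drift: the two forward beams (±30° azimuth) read equal Doppler.
balanced = beam_doppler(100.0, 0.0, -30.0) - beam_doppler(100.0, 0.0, 30.0)
# A 5° drift unbalances them; the beam nearer the ground track reads more.
print(abs(balanced) < 1e-6,
      beam_doppler(100.0, 5.0, 30.0) > beam_doppler(100.0, 5.0, -30.0))  # → True True
```

The servomechanism described in the text effectively nulls this Doppler difference by rotating the antenna assembly until the beams straddle the ground track.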
High-Resolution Imaging Radar
A radar image is a visual representation of the spatial microwave reflectivity distribution of a target illuminated
by the electromagnetic radiation emitted by the radar. Equivalently, a radar image represents a collection of
reflection coefficients assigned to an array partitioning the target space. Thus, a radar image is generated by
the same physical mechanism that generates an optical image observed by a human observer, in which the
optical reflectivity distribution is reconstructed. In humans, however, the aperture size of the imaging system
is on the order of 10,000 wavelengths, orders of magnitude (in wavelengths) greater than the aperture size
of the corresponding radar imaging systems. Since the resolution of an imaging system, that is, its ability to
represent distinctly two closely spaced elements, is inversely related to its aperture size, radar imaging systems
would appear primitive when compared to their optical counterparts. Whereas a single optical image is usually
sufficient for target recognition, several radar images of the same target, corresponding to various viewing
angles, are usually required. However, the usefulness of radar imaging systems is not undermined by their
lower-resolution capabilities. Advantages of radar imaging systems over their optical counterparts include
their day-or-night capability, since they supply their own illumination, and their all-weather capability, since
radio waves propagate through clouds and rain with only limited attenuation. Additionally, larger aperture
sizes (and, thus, higher resolution) can be synthesized from the given physical aperture using techniques such
as those described later in this article.
Direct Imaging Radar. Direct imaging radar systematically scans a three-dimensional volume in angle
and range with short pulses emitted from a pencil-beam antenna and range gating and displays the intensity of
the received signals as a function of the spatial coordinates interrogated. The spatial resolution is established
by the angular (beam width) and range (pulse duration) resolution of the sensor without subsequent processing.
If range gating is not used, then range is not resolved and the radar image is a two-dimensional projection of
the reflectivity distribution along the radar line-of-sight. Direct imaging is the simplest form of radar imaging,
requiring minimal data processing and allowing the target to be stationary. However, it requires very large
aperture and subnanosecond pulses for a high degree of spatial resolution, while, due to beam widening, its
cross-range resolution degrades as the range increases.
Synthetic Imaging Radar. Synthetic imaging radar attempts to overcome the limitations of direct
imaging radar and create fine spatial resolution by synthetic means in which results from many observations
of the target at different frequencies and illumination angles are coherently combined. The term synthetic
here refers to the synthesis of resolution commensurate with short-pulse, large-aperture illumination from a
number of elemental measurements made with longer pulses and a smaller aperture.
Range Processing Radar. The first task of imaging radar involves discrimination on the basis of range.
High resolution in the determination of range is achieved when the transmitted pulse duration T is narrowed
down and the corresponding system bandwidth B is increased, so that the time-bandwidth product (TB) is
constant. Maximum sensitivity is accomplished when the time-bandwidth product is set to unity, that is,
TB = 1. Thus, the required range resolution can be achieved when target reflections are measured over a
band of frequencies. Any radar waveform that supports an extended bandwidth can be used, the specific type
of waveform only determining the necessary implementation of the receiver for coherently processing the
wide-band signal.
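The bandwidth-resolution tradeoff above is commonly summarized by ΔR = c/(2B); with TB = 1, a simple pulse of duration T = 1/B gives the same result as any wide-band waveform of bandwidth B after coherent processing. A quick numerical check, with the bandwidth values chosen purely for illustration:

```python
C = 3.0e8  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Range resolution supported by a waveform of bandwidth B."""
    return C / (2.0 * bandwidth_hz)

print(range_resolution(150e6))  # → 1.0 (metres, from 150 MHz)
print(range_resolution(500e6))  # → 0.3 (metres, from 500 MHz)
```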
In contrast to direct imaging methods, in which all the spectral components of the signal must be present
simultaneously, synthetic imaging methods require that the spectral components be present sequentially. In
the simplest implementation of high range resolution by synthetic means, several narrow-band measurements
are made at discrete frequency increments. Such radars are called stepped-frequency systems and can be
either continuous wave, at each frequency emitting an unmodulated sinusoid, or pulsed, amplitude-modulating
each frequency sinusoid. Stepped-frequency continuous-wave systems are susceptible to aliased responses and
transmitter coupling, shortcomings alleviated by pulsing the transmitter and time-gating the receiver as in a
pulsed, stepped-frequency system. Although individual narrow-band responses have insignificant resolution
potential, the coherent combination of the responses provides the resolution allowed by the total bandwidth
spanned. Alternatively, high range resolution can be accomplished using swept-frequency (linear FM) systems
and corresponding wide-band receivers. Range resolution in swept-frequency systems is achieved by measuring
the difference in instantaneous frequency between the instant of emission of the radar pulse by the transmitter
and the instant of its reception back at the receiver.
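A minimal stepped-frequency sketch in Python: the coherent combination of the narrow-band phasor measurements is simply an inverse FFT over the frequency steps. The carrier, step size, step count, and target range below are illustrative assumptions:

```python
import numpy as np

# N narrow-band measurements at frequency steps df, taken from a point
# target at range R, are round-trip phase samples exp(-j*4*pi*f*R/c);
# an inverse FFT over frequency collapses them into a range profile.
c = 3e8
N, df = 64, 5e6                 # 64 steps of 5 MHz → 320 MHz spanned
f = 10e9 + np.arange(N) * df    # stepped carrier frequencies
bin_size = c / (2 * N * df)     # range-bin size from the total bandwidth
R = 20 * bin_size               # true target range (= 9.375 m)
echo = np.exp(-1j * 4 * np.pi * f * R / c)   # received phasor samples
profile = np.abs(np.fft.ifft(echo))

print(np.argmax(profile) * bin_size)  # → 9.375 (the target range)
```

The unambiguous range window is c/(2 df), so the frequency step must be chosen small enough that all targets of interest fall inside it.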
Synthetic Aperture Processing Radar. High resolution in the cross-range direction can be obtained by
scanning a focused beam across the object. If the aperture that forms the scanning beam is focused at the
target plane, the minimum lateral extent of the focused spot is approximately λR/D,
where λ is the wavelength, R is the observation distance, and D is the aperture dimension. Resolution of two
adjacent object points on a plane perpendicular to the line-of-sight of the radar is possible if their distance
is greater than the spot dimension. Thus, for a fixed wavelength and observation distance, the resolution is
increased by increasing the aperture size. High-resolution direct imaging radars would, therefore, need to have
physically large aperture.
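Plugging representative numbers into the spot-size relation λR/D (the values here are illustrative, not taken from the text) shows why real apertures fall short:

```python
def spot_size(wavelength, obs_range, aperture):
    """Diffraction-limited focused-spot size: lambda * R / D."""
    return wavelength * obs_range / aperture

# A 3 m X-band (3 cm wavelength) antenna observing at 10 km resolves
# only about 100 m in cross range.
print(spot_size(0.03, 10e3, 3.0))  # → 100.0
```

Halving the spot size at fixed wavelength and range requires doubling the aperture, which quickly becomes impractical for a physical antenna and motivates the synthetic approach that follows.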
Synthetic imaging radars synthesize equivalent large aperture for high-resolution cross-range imaging
by sequentially stepping a sensor through small incremental distances and storing samples of the amplitudes
of the corresponding received signals. The stored signals are coherently summed to produce signals equivalent
to those that would be received by the corresponding large physical aperture. In effect, synthetic aperture
radars (SARs) coherently process signals scattered from the same target for various viewing angles by utilizing
relative motion between the sensor and the target. Depending on the type of relative motion between sensor
and target, synthetic aperture radars can be linear, spotlight, or inverse.
Linear Synthetic Aperture Radar. In linear SAR, also called stripmap SAR, the radar sensor is moved
along a linear path and images stationary targets in its line-of-sight. Linear SAR is widely used for mapping
terrain features and ground-based objects from airborne platforms.
Spotlight Synthetic Aperture Radar. Spotlight SAR involves observing a target with the radar antenna
fixed on it while the viewing angle is changed.
Inverse Synthetic Aperture Radar. Inverse SAR involves a stationary radar viewing targets rotating
about an axis perpendicular to the line-of-sight.
Doppler Processing Radar. Spatial resolution in cross range, that is, along an axis perpendicular to the
radar line-of-sight, can be obtained if a target rotates relative to the radar sensor and the target reflections are
Doppler processed. This is possible since the Doppler frequency shift in waves reflected by a rotating target is
proportional to the lateral offset of the reflector along an axis normal to the axis of rotation and the line-of-sight.
Indeed, if d and R0 (R0 ≫ d) are the distances of a reflecting point and the radar sensor, respectively, from the
center of a target rotating at an angular velocity Ω, then the distance of the reflecting point from the radar
sensor at time t is approximately
According to Eq. (6), the Doppler coefficient at time t in the received wave will be
From Eq. (22), it is clear that the Doppler coefficient for every reflecting point in a target rotating with angular
velocity Ω is a harmonic function of time, the amplitude of which is proportional to the instantaneous lateral
distance of the reflecting point from the center of rotation. Doppler processing of the received signal for cross-range resolution can be done on-line by either a bank of contiguous filters or by first sampling it and then
analyzing it with Fourier transform processors of sufficiently high speed. Off-line processing, on the other
hand, can be performed by recording the received signal for later processing. In either case, the signal is
usually frequency translated to retain only its complex envelope.
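The proportionality between Doppler and lateral offset can be made concrete: for a scatterer at lateral distance d from the rotation center, the range rate peaks at Ωd, so the peak Doppler shift is 2Ωd/λ. The rotation rate and wavelength below are illustrative values:

```python
def max_doppler(omega, d, wavelength):
    """Peak Doppler shift of a scatterer at lateral distance d from the
    center of a target rotating at angular velocity omega (rad/s)."""
    return 2.0 * omega * d / wavelength

# Doubling the lateral offset doubles the Doppler shift, so a bank of
# Doppler filters sorts scatterers by cross-range position.
lam = 0.03
print(max_doppler(0.1, 1.0, lam) * 2 == max_doppler(0.1, 2.0, lam))  # → True
```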
Holographic Processing Radar. Optical holography records the spatial distribution of the intensity of
the interference of light waves diffracted by an object and a reference beam in a hologram. This overcomes
the difficulty associated with lack of optical phase sensitive storage media. Later, the hologram can be used to
reconstruct the light waves associated with the original object by illumination with the reference beam used
in the recording step. A holographic reconstruction allows a viewer the perception of a virtual image of the
original object.
Microwave holography follows recording and reconstruction procedures analogous to optical holography. In
microwave holography, the field amplitude scattered from an object coherently illuminated from a transmitter
is mapped over a prescribed recording aperture by a coherent detector that is scanned over the aperture. The
detected bipolar signal, representing the complex envelope of the time-varying field, is added to a bias level
sufficient to make the resultant always positive. The resulting signal is used to produce a film transparency with
an amplitude transmittance function that is real and positive. The area probed by the detector represents the
hologram aperture, the reference signal for the coherent detector represents the reference beam, and the signal
scattered from the object is the object beam. A variation of the conventional procedure described previously,
known as scanned holography, scans the transmitter and the receiver independently and offers
some advantages in resolution.
during this period, with subsequent distortion of the reflectivity and velocity radar images, the returned
estimates are considered highly valuable.
Practically, more significant than the reflectivity and velocity distributions are estimates of the rainfall
rate and wind velocity. Doppler radar, however, measures the range velocity of hydrometeors rather than air,
and often this differs significantly from the range component of wind. Nevertheless, since hydrometeors quickly
respond to wind forces, their terminal velocities introduce only negligible bias into estimates of the range
component of the wind.
Turbulence Measurement. The mean velocity and spectrum width measured by Doppler radar are
weighted averages of point velocities. Therefore, they are sufficient to depict motion on scales larger than
the resolution cell, but cannot infer the details of the flow inside the cell. Nevertheless, Doppler radar offers
the possibility of measurement and study of turbulence on scales smaller than the resolution cell if a firm
connection between the statistical-physical properties of the atmosphere and Doppler-derived measurements
is established.
Clean Air Observation. A radar designed to identify and track precipitating storms can also detect
echoes from scatterers in fair weather. In such cases, the distribution of spatial reflectivity in clean air can be
associated with meteorological phenomena such as turbulent layers, waves, and fronts, flying birds and insects,
or atmospheric pollutants. Clean-air echoes not related to any visible scatterers have been conclusively proven
to emanate from refractive-index irregularities.
Waves reflected by sharp, quasipermanent changes in the dielectric permittivity of the atmosphere form
the coherent component in the echo received by the radar. Coherent echoes exist if the scattering medium does
not time-modulate the amplitude or phase of the transmitted radar pulses, even though spatial variations
may exist. Coherent echoes appear as peaked and narrow components in the Doppler spectrum. On the other
hand, incoherent components are contained in the echo signal if time-varying (turbulent) scatter is present.
Incoherent echoes manifest themselves as broad components in the Doppler spectrum.
and space-to-space applications or short-range atmospheric applications, in which the propagation loss penalty
does not outweigh fine resolution.
Ladar Information Processing. A ladar measures a target's range, position, velocity, and motion
by modulating its laser beam, detecting the reflected return, and processing the return signal to derive the
desired information. Methods have been developed for amplitude, frequency, and phase modulation and for
modulation by polarization. Laser radiation can be modulated both by direct action on coherent signals inside
the laser during their generation (internal modulation) and through action on the radiated light outside the
laser (external modulation). A number of electro-optical, acousto-optical, and mechanical beam modulation
devices are available with different inherent modulation rates, yielding amplitude or frequency modulation of
the transmitted beam.
Solid-state lasers cannot provide the necessary spectral purity to utilize phase processing of ladar signals.
Gas lasers, such as helium-neon and carbon dioxide, however, have high spectral purity and can be modulated
in amplitude or frequency with bandwidths of up to 500 MHz (yielding a resolution of approximately 30 cm)
with relatively low drive powers. Ladar signal-processing techniques are similar to those used in microwave
radar. In fact, the same circuits for signal-envelope processing may be employed in many cases. The use of
ladar allows the exploitation of highly precise and unique methods for angle estimation and tracking.
targets sought are large relative to the average wavelength and the soil inhomogeneities. In this case, imaging
would play a (secondary) role in reducing the number of false alarms of the detection procedure. If small targets,
such as mines or weapons, are of interest, they would be hard to distinguish from clutter and the role of imaging
would be enhanced. Thus, it is difficult or perhaps pointless to develop a single radar system for the detection
of both large or deep and small or shallow targets. The wide-frequency-range requirement imposes stringent
requirements on both the electronics and the size of the relevant antennae and has contributed to
the delay in development of this significant radar application. However, ground- or foliage-penetrating radar
technologies are presently an area of significant research investigation.
Current Trends
Besides research in ground- and foliage-penetrating radar technologies, significant research is also conducted
in the development of so-called space-time adaptive processing (STAP) algorithms. STAP refers to multidimensional adaptive filtering algorithms that simultaneously combine the signals from the elements of an array
antenna and the multiple pulses of a coherent radar waveform. STAP can improve the detection of low-velocity
targets obscured by mainlobe clutter, detection of targets masked by sidelobe clutter, and detection in combined
clutter and jamming environments.
Significant research is also conducted into the use of signal processing tools other than the traditional
Fourier-transform-based ones for target detection and recognition. Such tools are, for example, based on the
theories of so-called wavelet-induced multiresolution analyses (WIMA) of signals. A WIMA allows for the
decomposition and simultaneous representation of a signal in time and scale and, therefore, is capable of
processing signals at different scales. WIMA-based radar target detection and recognition is being actively
researched.
BIBLIOGRAPHY
1. A. W. Rihaczek, Principles of High Resolution Radar, Norwood, MA: Artech House, 1996.
READING LIST
C. G. Bachman, Laser Radar Systems and Techniques, Dedham, MA: Artech House, 1979.
L. J. Battan, Radar Observation of the Atmosphere, Chicago: University of Chicago Press, 1973.
P. Bello, Joint estimation of delay, Doppler, and Doppler rate, IRE Trans. Inf. Theory, IT-6: 330–341, June 1960.
W. G. Carrara, R. S. Goodman, R. M. Majewski, Spotlight Synthetic Aperture Radar: Signal Processing Algorithms, Norwood,
MA: Artech House, 1995.
I. Cindrich et al. (eds.), Aerial Surveillance Sensing Including Obscured and Underground Object Detection, Bellingham,
WA: SPIE, The Society for Optical Engineering, 1994, Vol. 2217.
N. K. Del Grande, I. Cindrich, P. B. Johnson (eds.), Underground and Obscured Object Imaging and Detection, Bellingham,
WA: SPIE, The Society for Optical Engineering, 1993, Vol. 1942.
A. J. Devaney et al., Automatic target detection and recognition: a wavelet approach, Final Report on ARPA Grant F49620-93-1-0490, 1995.
R. J. Doviak, D. S. Zrnic, Doppler Radar and Weather Observations, San Diego, CA: Academic Press, 1992.
A. K. Fung, Microwave Scattering and Emission Models and Their Applications, Norwood, MA: Artech House, 1994.
E. J. Kelly, The radar measurement of range, velocity, and acceleration, IRE Trans. Mil. Electron., MIL-5: 51–57, 1961.
E. J. Kelly, R. P. Wishner, Matched-filter theory for high-velocity, accelerating targets, IEEE Trans. Mil. Electron., 56–69,
January 1965.
R. Meneghini, T. Kozu, Spaceborne Weather Radar, Norwood, MA: Artech House, 1990.
D. L. Mensa, High Resolution Radar Cross-Section Imaging, Norwood, MA: Artech House, 1991.
GEORGE A. TSIHRINTZIS
Northeastern University
The maximum output SNR, the most frequently used criterion for radar detection, is defined as the ratio of
the maximum instantaneous output signal power to the output noise power. The input SNR is a major limiting
factor for radar detection performance.
For a fixed input SNR, a linear time-invariant filter whose frequency response function maximizes the
output SNR is called a matched filter. Matched filtering transforms the raw radar data into a form that is
suitable for (1) generating the optimal decision for detection; (2) estimating the target parameters with a
minimal rms error; or (3) obtaining the maximum resolving power for a group of targets. The characteristics
of matched filters can be described by either a frequency-domain transfer function or a time-domain impulse
response function, each being related to the other by the Fourier transform. In the frequency domain, the
matched-filter transfer function H(ω) is the complex conjugate of the spectrum of the signal. Thus, in general
terms

H(ω) = kS*(ω)e^(−jωT)     (2)

where S(ω) is the spectrum of the input signal s(t) and T is a delay constant required to make the filter
physically realizable. The normalizing factor k and the delay constant are generally ignored in formulating the
underlying significant relationship. This simplification yields

H(ω) = S*(ω)     (3)
Equation (3) reveals that the bandwidth of the receiver must be the same as that of the signal. This is
understandable, because if the bandwidth of the receiver is wide compared with that occupied by the signal
energy, extraneous noise may be introduced into the excess bandwidth, which lowers the output signal-to-noise
ratio. On the other hand, if the receiver bandwidth is narrower than the signal bandwidth, the noise energy is
reduced along with part of the signal energy. The result is again a lowered SNR. When the receiver bandwidth
is identical to the signal bandwidth as in the case of the matched filter, the output SNR is maximized. The
conjugate in Eqs. (2) and (3) allows the phases of S(ω) and H(ω) to cancel each other out, and leaves the output
signal spectrum a linear phase, e^(−jωT), which results in a peak at the time instant T in the output.
The corresponding time-domain relationship between the signal to be detected and the matched filter
is obtained from the inverse Fourier transform of H(ω). This leads to the result that the impulse response
of a matched filter is a replica of the time inverse of the known signal function. Thus, if h(t) represents the
matched-filter impulse response, the relationship equivalent to Eq. (2) is given by

h(t) = ks(T − t)     (5)
Figure 1 illustrates the relationship given by Eqs. (3) and (5), where s(t) is a pulsed linear frequency-modulated (LFM) signal with the form
The phase of H(ω) is the negative of that of S(ω), while h(t) is the time reversal of s(t).
Figure 2(a) shows a received signal, which is the signal s(t) of Eq. (6) corrupted by Gaussian noise at −6 dB;
that is, the input SNR is −6 dB. It is difficult to detect the existence of the signal s(t) from this figure. However,
after the received signal is processed by the matched filter, the detector output peak in Fig. 2(b) clearly indicates
the existence of the signal.
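The Fig. 2 experiment can be reproduced in spirit with a short simulation; the sample rate, sweep, and noise level below are illustrative, not the parameters behind the figure. A noisy LFM pulse is passed through h(t) = s(T − t), and the output peaks at the end of the pulse:

```python
import numpy as np

rng = np.random.default_rng(0)

fs, T = 1000.0, 0.5                              # sample rate (Hz), pulse length (s)
t = np.arange(0, T, 1.0 / fs)
s = np.cos(2 * np.pi * (50 * t + 100 * t ** 2))  # LFM sweep, 50 → 150 Hz
r = s + 1.5 * rng.standard_normal(s.size)        # pulse buried in noise

h = s[::-1]                # matched filter: time-reversed replica of s(t)
y = np.convolve(r, h)      # matched-filter output

# The raw data r shows no obvious signal, but the filter output has a
# sharp peak near sample T*fs - 1 = 499, the end of the pulse.
print(int(np.argmax(np.abs(y))))
```

The peak stands well above the noise-only output because matched filtering trades time resolution across the pulse for an SNR gain proportional to the pulse length.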
The output from the matched filter, as shown in Fig. 3, is the convolution between the received signal and
the matched-filter impulse response, that is,

y(t) = ∫ r(τ)h(t − τ) dτ = ∫ r(τ)s(T − t + τ) dτ     (7)
Fig. 1. (a) Signal s(t) and (b) matched-filter h(t) relations. Phase in units of degrees. The phase of H(ω) is the negative
of that of S(ω), while h(t) is the time reversal of s(t).
Fig. 2. (a) Signal corrupted by noise and (b) the matched-filter output. The peak in the matched-filter output indicates
the existence of the signal.
Sampling y(t) at t = T yields the maximum output signal value, that is,

y(t)max = y(T) = ∫ r(τ)s(τ) dτ = Es + ∫ n(τ)s(τ) dτ     (8)
Fig. 4. Block diagram of a cross-correlation receiver, which is another implementation of the matched filter.
where Es represents the signal energy. It can be easily verified that the expectation of y(t)max is Es , because the
second term in Eq. (8) represents the noise whose mean is zero. This can be easily seen from Fig. 2(b) in which
the maximum signal energy occurs at t = T = 2, and the maximum value is close to the expectation of Es = 0.18
in this experiment. A detailed analysis of the matched filter will be given in the section entitled Analysis of a
Matched Filter.
Equation (7) describes the output of the matched filter as the cross-correlation between the received
signal and a replica of the transmitted signal. This implies that the matched filter can be replaced by a
cross-correlation receiver that performs the same mathematical operation, as shown in Fig. 4. The received signal is
multiplied by a delayed replica of the transmitted signal s(t t1 ), and the product is passed through a low-pass
filter. The cross-correlation tests for the presence of a target at only one time: t1 . Targets at other time delays,
or ranges, may be found by varying t1 . However, this requires a longer search time. The search time can be
reduced by adding parallel channels, each containing a delay line corresponding to a particular value of t1 , as
well as a multiplier and a low-pass filter.
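A single channel of the Fig. 4 receiver can be sketched as follows, with a plain summation standing in for the low-pass filter; the code and delay values are illustrative:

```python
def correlate_at_delay(received, replica, t1):
    """Multiply the received samples by the replica delayed by t1
    samples and integrate: one channel of a cross-correlation receiver."""
    out = 0.0
    for n, x in enumerate(replica):
        k = n + t1
        if 0 <= k < len(received):
            out += received[k] * x
    return out

# A replica embedded at delay 3 produces the largest output in the
# channel tuned to t1 = 3; parallel channels cover the other delays.
replica = [1.0, -1.0, 1.0, 1.0, -1.0]
received = [0.0] * 3 + replica + [0.0] * 3
outputs = [correlate_at_delay(received, replica, t1) for t1 in range(7)]
print(outputs.index(max(outputs)))  # → 3
```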
Since the cross-correlation and the matched filter are equivalent mathematically, the choice of which one
to use in a particular radar application is determined by the practicality of implementation. The matched filter,
or an approximation, has been generally preferred in the vast majority of applications.
Decision Criteria for Radar Signal Detection
The statistical detection problem consists of examining the received radar waveform r(t) in a resolution
cell to determine which of the following two hypotheses is true. The first hypothesis H1 asserts that a target is
present, and the received signal contains the target signature and noise. The second hypothesis H0 states that
the target is absent, and only noise is present in the received signal. The problem can be compactly stated as
The conditional probability density function completely describes the received signal statistically in both
cases:
For reasons of simplicity, r is assumed to be a single sampled point of the received radar signal. The extension
from a single sampled point to multiple sampled points is straightforward. The likelihood ratio is defined as

Λ(r) = p(r|H1)/p(r|H0)

The likelihood ratio Λ(r) is also called the likelihood statistic. It is a random variable since it is a function
of the random variable r. The maximum likelihood (ML) decision criterion, which chooses the hypothesis that
most likely causes the observed signal, is

Λ(r) ≷ 1

This expression means that H1 is selected if Λ(r) is greater than 1; otherwise H0 is selected. It can be seen that
the ML criterion is a very simple decision criterion.
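For a concrete special case, assumed here purely for illustration (a known signal level A observed in zero-mean Gaussian noise), the likelihood ratio and the ML rule reduce to a few lines:

```python
import math

def likelihood_ratio(r, A=1.0, sigma=1.0):
    """Lambda(r) = p(r|H1)/p(r|H0) for a known signal level A in
    zero-mean Gaussian noise of standard deviation sigma."""
    p1 = math.exp(-((r - A) ** 2) / (2.0 * sigma ** 2))
    p0 = math.exp(-(r ** 2) / (2.0 * sigma ** 2))
    return p1 / p0

def ml_decide(r):
    """Maximum-likelihood rule: choose H1 when Lambda(r) > 1."""
    return 1 if likelihood_ratio(r) > 1.0 else 0

# For this symmetric case the ML boundary sits at r = A/2 = 0.5.
print(ml_decide(0.9), ml_decide(0.1))  # → 1 0
```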
To describe the detection performance better, the probabilities of detection and false alarm are used in
radar detection. The probability of detection refers to the probability of asserting the presence of a target when
the target is indeed present:

Pd = ∫_{R0}^{∞} p(r|H1) dr
where R0 is the decision boundary. The proper value of the boundary R0 depends upon the criterion of decision.
The probability of false alarm is the probability of asserting the presence of a target when the target is actually
absent:

Pfa = ∫_{R0}^{∞} p(r|H0) dr
A sketch of the two density functions is shown in Fig. 5, where Pd and Pfa are, respectively, shown by the
vertically and the horizontally hatched areas. If the observed value r is large, we would be confident in picking
H1. If r is small, we would pick H0, as shown in Fig. 5.
Obviously, a decision rule should be selected to maximize Pd while restricting Pfa. The simplest rule in
this class, which is extensively used in radar detection, is the Neyman-Pearson criterion. This criterion specifies
a decision boundary that maximizes the probability of detection (Pd) while maintaining a fixed probability of
false alarm Pfa.
Fig. 5. Probability of false alarm Pfa and probability of detection Pd, which are functions of the threshold R0.
The detection problem under the Neyman-Pearson criterion can be formulated as follows:
The optimum decision region can be found by using the calculus of extrema and forming the objective function
The integration interval in Eq. (17) is related to choosing the hypothesis H 1 , as illustrated in Fig. 5. It is clear
that J and hence Pd are maximized by choosing the hypothesis H 1 when
and η is determined by the required false-alarm probability α. In radar detection, the choice of α is based upon
operational considerations, that is, the need to keep the false-alarm rate within acceptable bounds (e.g., a few
false alarms per second). A typical value of α for radar detection is 10⁻⁶.
Other popular criteria are the Bayes criterion and the minimum error probability (MEP) criterion. The
Bayes criterion minimizes the average cost of the decision. The symbols C00, C01, C10, and C11 represent
the costs for a correct miss (no target is declared when no target is present), a false dismissal (no target is
declared when a target is present), a false alarm, and a correct detection, respectively. Also denote the a
priori probabilities P(H0) and P(H1) by P0 and P1, respectively. The Bayes rule makes the likelihood ratio test
where η = [P0(C10 − C00)]/[P1(C01 − C11)]. If we select the cost of an error to be 1 and the cost of a correct decision
to be 0, then C01 = C10 = 1 and C00 = C11 = 0. In this case, minimizing the average cost is equivalent to minimizing
the probability of error. Therefore, the MEP rule is the same as the test of Eq. (21), but with η = P0/P1. If the a
priori probabilities are equal, that is, P0 = P1, the MEP rule coincides with the ML rule with η = 1.
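The three criteria differ only in the threshold η applied to the same likelihood ratio. A minimal sketch, with illustrative costs and priors:

```python
# Sketch: likelihood-ratio thresholds for the criteria discussed above.
# The cost values and a priori probabilities are illustrative assumptions.
def bayes_threshold(p0, p1, c00, c01, c10, c11):
    """eta = [P0 (C10 - C00)] / [P1 (C01 - C11)] (Bayes criterion)."""
    return (p0 * (c10 - c00)) / (p1 * (c01 - c11))

# MEP: unit error costs, zero correct costs -> eta = P0 / P1
eta_mep = bayes_threshold(2/3, 1/3, 0.0, 1.0, 1.0, 0.0)   # -> 2.0
# ML: equal priors under MEP costing -> eta = 1
eta_ml = bayes_threshold(0.5, 0.5, 0.0, 1.0, 1.0, 0.0)    # -> 1.0
```
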
Implementation of Decision Criteria
Let us suppose that the observed signal r has the following Gaussian conditional probability
density functions,
where μ denotes the mean of the received-signal value and σ² represents the noise variance. The likelihood
ratio test is therefore
In Eq. (25) it is seen that the likelihood ratio test, in which Λ(r) of Eq. (24) is compared with a threshold η, is
transformed into a comparison of the observable r with the threshold in Eq. (25), which is a function of η. As an
example, supposing P(H0) and P(H1) are known, with P(H0)/P(H1) = 2, the decision rule is to choose H1 if
Since the a priori probability of H0 is twice that of H1, the MEP rule requires a larger value of the threshold R0 for the
selection of H1 than the ML rule, in which this information is not used. The MEP scheme therefore yields a better
decision rule in this case.
For a Neyman-Pearson criterion, suppose a value of Pfa = 10⁻⁴ can be tolerated. The threshold is
determined from
to be R0 = 3.72. So H1 is chosen if
A typical illustration of the three decision thresholds of this example is given in Fig. 6.
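The Neyman-Pearson threshold of this example can be reproduced numerically; a minimal sketch, assuming unit noise variance (σ = 1) so that Pfa = Q(R0), with Q the Gaussian tail function:

```python
# Sketch: Neyman-Pearson threshold for a single Gaussian sample, assuming
# unit noise variance.  Pfa = Q(R0/sigma)  =>  R0 = sigma * Q^{-1}(Pfa).
from scipy.stats import norm

def np_threshold(pfa, sigma=1.0):
    return sigma * norm.isf(pfa)   # isf is the inverse survival function Q^{-1}

print(round(np_threshold(1e-4), 2))   # -> 3.72, matching the example
```
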
An important observation is that these criteria employ the likelihood ratio test. In other words, the test is
performed by simply processing the received data to yield the likelihood ratio and then comparing it with the
threshold, which depends upon the criterion used. Thus, in practical situations where the a priori probabilities
and the cost may vary, only the threshold changes, and the computation of the likelihood ratio is not affected.
As observed previously, in radar detection it is very hard to define the Bayes cost Cij ; moreover, it is
also practically impossible to define or evaluate the a priori probabilities P0 and P1 , that is, the probabilities
that, in a given resolution interval, a target is present or absent. These are the main reasons why the Bayes
and minimum error probability criteria cannot be used in radar detection. In contrast, for the same reason,
the NeymanPearson criterion is particularly well suited to radar detection, owing to its concept of the Pfa
threshold fixed a priori, while Pd is maximized.
Receiver Operating Characteristic
The performance of a likelihood ratio test is conveniently summarized by the receiver operating characteristic (ROC),
the plot of Pd versus Pfa. The probability of detection can be rewritten in terms of the likelihood statistic as
where η is the threshold of the likelihood ratio, just as R0 is the threshold of the observed signal, and pΛ(λ|H1) in
Eq. (30) is the conditional probability density function of the variable Λ. Similarly, Pfa of Eq. (14) is rewritten
as
Since Λ is a ratio of two nonnegative quantities, it takes on values from 0 to ∞. When the threshold η is 0,
the hypothesis H1 is always chosen and thus Pfa = Pd = 1. When the threshold is ∞, the hypothesis H0 is always
chosen and thus Pfa = Pd = 0. These cases are clearly depicted in Fig. 7.
Of course, ROC curves may be drawn for any hypothesis test involving a threshold, but they
have particularly useful properties for the likelihood ratio test. One is the fact that the slope of the ROC at a
particular point on the curve equals the threshold value η of the likelihood ratio at that point. Taking the derivatives of
Eqs. (30) and (31) with respect to η, we have
and
Also,
Combining Eqs. (32), (33), and (35), the slope of the ROC curve is obtained as
In the Neyman-Pearson criterion, the slope of the ROC curve at a particular point represents the likelihood
ratio threshold η that achieves the Pd and Pfa of that point. In the Bayes criterion, the threshold η is determined by
the a priori probabilities and the costs; consequently, Pd and Pfa are determined by the point of the ROC at which
the tangent has a slope of η.
Since the ROC curves are always concave downward, it is possible to determine an optimum
value (the knee) of Pfa, such that a small decrease of Pfa below this value causes a rapid decrease of Pd, while any
increase beyond it has very little effect (the saturation zone, where the rate of change is nearly 0).
Finally, we note that the most important part of the ROC curve is the upper left-hand (northwest) corner.
This is the so-called high-performance corner, where a high-detection probability occurs with a low false-alarm
probability. This part of the plot could be stretched out by the use of appropriate (such as logarithmic) scales.
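For the single-sample Gaussian problem above, the ROC has a closed form, Pd = Q(Q⁻¹(Pfa) − μ/σ). A minimal sketch, with μ = 2 and σ = 1 as assumed illustrative values:

```python
# Sketch: ROC for the single-sample Gaussian problem (assumed mu = 2, sigma = 1).
# Pd = Q(Q^{-1}(Pfa) - mu/sigma), where Q is the Gaussian tail function.
import numpy as np
from scipy.stats import norm

def roc_pd(pfa, mu=2.0, sigma=1.0):
    return norm.sf(norm.isf(pfa) - mu / sigma)

pfa = np.logspace(-6, -0.01, 200)   # sweep the false-alarm probability
pd = roc_pd(pfa)                    # monotone, concave, always above Pd = Pfa
```

The curve rises monotonically and stays above the diagonal Pd = Pfa, consistent with the concavity property discussed above.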
The Matched Filter
Consider a signal s(t) with the spectrum S(f) and finite energy
For the input signal s(t) and the filter with transfer function H(f ), the instantaneous power of the output signal
y(t) is
For white noise with a two-sided noise power spectral density N 0 /2, the output power spectral density is
|H(f )|2 N 0 /2. Therefore, the noise power at the filter output is
Using Eqs. (39) and (40) in Eq. (37) leads to the following:
where T denotes the time at which the maximum value of |y(t)|2 occurs.
Using Schwarz's inequality
we obtain
yielding the requirement on the matched filter. From the discussion above it is evident that the maximum signal-to-noise ratio can be expressed as
Equation (45) indicates that the detection capability of a particular signal depends only on its energy content,
and not on the time structure of the signal. However, it is necessary to process the signal through a matched
filter to obtain this condition in practice. We note that Es/N0 is defined as the input SNR, and it is clear from
Eq. (45) that the maximum output SNR for the matched filter is twice the input SNR if the noise is
white.
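The Schwarz-inequality bound of Eq. (45) can be checked numerically on a frequency grid; a sketch, with an arbitrary assumed Gaussian-shaped spectrum used only for illustration:

```python
# Sketch: numerical check of Eq. (45), SNR_max = 2 Es / N0.  We evaluate the
# output SNR for (a) the matched filter H(f) = S*(f) exp(-j 2 pi f T) and
# (b) an arbitrary filter; the signal spectrum S(f) is an assumed example.
import numpy as np

rng = np.random.default_rng(0)
f = np.linspace(-10, 10, 2001)
df = f[1] - f[0]
T, N0 = 1.0, 2.0
S = np.exp(-f**2) * np.exp(-1j * 0.3 * f)        # assumed signal spectrum
Es = np.sum(np.abs(S)**2) * df                   # signal energy

def out_snr(H):
    num = np.abs(np.sum(S * H * np.exp(1j * 2 * np.pi * f * T)) * df)**2
    den = (N0 / 2) * np.sum(np.abs(H)**2) * df
    return num / den

H_matched = np.conj(S) * np.exp(-1j * 2 * np.pi * f * T)
H_random = rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size)

snr_matched = out_snr(H_matched)     # attains 2 Es / N0 (Schwarz equality)
snr_random = out_snr(H_random)       # strictly below the bound
```
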
In the general case, when the noise is nonwhite (colored noise), the derivation of the matched filter can be
carried out in a similar way. If the power spectral density of the nonwhite noise is N(f), then Eq. (40) is written
as
Therefore, multiplying and dividing the integrand of the numerator of Eq. (41) by √N(f) and applying Schwarz's
inequality yields Eq. (46):
The conjugate is not needed in the denominator because N(f) is always real (and nonnegative). If the noise
is white, that is, if N(f) is a constant over the band of H(f), then Eq. (48) is the same as Eq. (44) for
white noise. The matched filter for nonwhite noise can be interpreted as the cascade of two filters. The first
one, whose transfer function is 1/√N(f), is the whitening filter. This filter makes the noise spectrum flat
(white). The second one is matched to the signal filtered by the whitening filter, that is, to the whitened signal
with the spectrum S(f)/√N(f).
We note that it is not necessary that the noise be Gaussian for Eq. (45) to hold, but only that its power
spectral density be flat over the frequency band of interest. To summarize, the matched filter maximizes the
output SNR over all probability densities, provided the power spectral density (PSD) is a constant. In the event
that the noise PSD is nonwhite (colored noise), the matched impulse response corresponds to the modified
signal spectrum S*(f)e^(−j2πfT)/N(f).
The total probability of error for a radar receiver consists of the false-alarm probability Pfa and the
false-dismissal probability Pfd. A false dismissal declares no target when a target is present, that is,
For equal a priori probabilities P(H0) = P(H1) = 1/2, the total probability of error is
Supposing p(r|Hi), i = 0, 1, is the Gaussian distribution given by Eqs. (22) and (23), Pe can be expressed as
where
is the complementary error function. Recalling that μ is the expectation of the matched-filter output at time T
under the H1 hypothesis
Fig. 8. Minimum error probability Pe |min versus maximum output signal-to-noise ratio SNR|max .
and
The application of Eqs. (55) and (57) to Eq. (53) leads to the following minimum probability of error:
It is clear from Eq. (58) that Pe|min decreases monotonically as SNR|max increases, because erfc(x) is a monotonically
decreasing function. In other words, a lower probability of error requires a higher output SNR, and hence a higher input
SNR. Figure 8 shows the relationship between Pe|min and SNR|max. This curve would be shifted to the left by
3 dB if it were plotted against the input SNR, since SNR|max is twice the input SNR. For example, a 10⁻⁵
error probability corresponds to an output SNR of 18.6 dB, so 15.6 dB of input SNR is required. Therefore, it
is justifiable to use the signal-to-noise ratio criterion in radar detection.
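The quoted numbers can be reproduced with the equal-prior Gaussian relation Pe|min = Q(√(SNR|max)/2); this closed form is an assumed reading of Eq. (58) (the equation itself is not shown inline), chosen because it reproduces the 18.6 dB figure above:

```python
# Sketch: error probability versus output SNR for the equal-prior Gaussian
# problem, using the assumed form Pe_min = Q(sqrt(SNR_max)/2), where Q is
# the Gaussian tail function.  This reproduces the numbers quoted in the text.
import numpy as np
from scipy.stats import norm

def snr_max_db_for_pe(pe):
    """Output SNR (dB) needed for a given minimum error probability."""
    snr_max = (2.0 * norm.isf(pe))**2     # invert Pe = Q(sqrt(SNR)/2)
    return 10.0 * np.log10(snr_max)

out_db = snr_max_db_for_pe(1e-5)          # ~18.6 dB output SNR
in_db = out_db - 10.0 * np.log10(2.0)     # ~15.6 dB input SNR (3 dB less)
```
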
Noncoherent Detections
A received radar signal is a bandpass random process because it is modulated on a carrier. Radar detection is
classified as coherent or noncoherent depending upon whether the carrier phase is available at the receiver.
Specifically, the matched filter and the cross-correlation discussed previously are coherent because
they require knowledge of the carrier phase. The envelope and all the other nonlinear detections are
noncoherent, because they ignore the phase information in the received signal. To understand the nonlinear
detections, we first introduce the representations of bandpass signals and bandpass processes.
Representation of Band-Pass Signals. The concept of a band-pass signal is a generalization of
the concept of monochromatic signals. A bandpass signal is a signal x(t) whose spectrum X(f ) is nonzero for
frequencies in a usually small neighborhood of some high frequency f 0 , that is,
where the frequency f 0 is referred to as the central frequency (carrier frequency) of the bandpass signal. A
radar signal that is modulated on a carrier is a bandpass signal. It is assumed that the band-pass signal is
real-valued. Figure 9(a) illustrates the spectrum of a bandpass signal x(t). A real-valued bandpass signal x(t)
can be represented as the real part of a complex signal x+ (t), called the preenvelope or analytic signal of x(t),
where
and
is the Hilbert transform of x(t). The spectrum of the preenvelope signal is readily found from the Fourier
transform of Eq. (60) to be
The spectrum of the preenvelope signal is obtained by deleting the negative frequencies from X(f ) and multiplying the positive frequencies in X(f ) by two, as illustrated in Fig. 9(b).
The spectrum of the complex envelope x̃(t) is obtained by shifting X+(f) to the left by f0, that is,
and
Fig. 9. (a) Amplitude spectrum of a band-pass signal x(t). (b) Amplitude spectrum of the preenvelope x+(t). (c) Amplitude
spectrum of the complex envelope x̃(t). The spectrum of x+(t) is twice the positive spectrum of x(t), and the spectrum of x̃(t) is a
low-pass version of that of x+(t).
It is clear that x̃(t) is a low-pass signal, meaning that its frequency components are located around
zero frequency; x̃(t) is the low-pass representation of the bandpass signal x(t). In general, x̃(t) is a complex signal
having xc(t) and xs(t) as its real and imaginary parts:
where the low-pass signals xc(t) and xs(t) are called the in-phase and quadrature components
of the bandpass signal x(t). Notice that x(t) is the real part of x+(t). Using Eq. (65), we obtain
Fig. 10. (a) Band-pass description and (b) complex envelope description of a system. The complex envelope description
simplifies the analysis of a bandpass signal.
This is the canonical representation of a bandpass signal in terms of the in-phase component xc(t) and quadrature component xs(t) of its complex envelope.
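The preenvelope and complex envelope above can be computed directly with the Hilbert transform; a minimal sketch, with all signal parameters assumed for illustration:

```python
# Sketch: preenvelope and complex envelope via the analytic signal
# (scipy.signal.hilbert); the carrier and amplitude values are assumptions.
import numpy as np
from scipy.signal import hilbert

fs, f0 = 4000.0, 400.0
t = np.arange(0, 1.0, 1.0 / fs)
a = 2.0 + np.cos(2 * np.pi * 4.0 * t)          # slowly varying amplitude
x = a * np.cos(2 * np.pi * f0 * t)             # band-pass signal

x_plus = hilbert(x)                            # preenvelope x+(t)
x_tilde = x_plus * np.exp(-1j * 2 * np.pi * f0 * t)   # complex envelope
xc, xs = x_tilde.real, x_tilde.imag            # in-phase and quadrature parts
# Here the envelope |x_tilde| recovers a(t), and xs stays near zero because
# the assumed signal has no quadrature component.
```
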
The complex envelope can be employed to find the outputs of bandpass systems driven by bandpass
signals. Accordingly, in analogy with the complex envelope representation of a band-pass signal, we may develop
the complex low-pass representation of the bandpass system by retaining the positive-frequency half of the
transfer function H(f) and shifting it to the left by f0. Let H̃(f) denote the transfer function of the complex low-pass
system so defined. The analysis of the bandpass system with transfer function H(f) driven by the bandpass
signal with spectrum X(f), as depicted in Fig. 10(a), is then replaced by an equivalent but simpler analysis of a
complex low-pass system with transfer function H̃(f) driven by a complex low-pass input with spectrum X̃(f),
as shown in Fig. 10(b). The complex low-pass output ỹ(t) is obtained from the inverse Fourier transform of Ỹ(f).
Having determined ỹ(t), we may find the desired band-pass output y(t) simply by using the relation
The bandpass-to-low-pass transformation also holds for bandpass random processes. X(t) is a bandpass
process if its power spectral density satisfies SX(f) = 0 for |f − f0| ≥ W. X(t) can be represented by its in-phase component
Xc(t) and quadrature component Xs(t) in the same way that a bandpass signal is. Specifically,
where Xc(t) and Xs(t) are two low-pass processes representing the real and imaginary parts of the complex envelope.
Envelope Detection and Square-Law Detection. The matched filter is the optimal detection for an
exactly known signal (i.e., phase, amplitude, and Doppler frequency are known) in a background of white noise.
However, both the matched filter and the cross-correlation need to generate a synchronous reference, which
is difficult to realize. In a typical radar application, the range between the target and the radar represents a
very large number of transmitted-signal wavelengths. This makes specifying the phase of the return signal
extremely difficult, and we usually assume that this phase is a random variable
uniformly distributed over 2π rad. The matched-filter detection is often used to set a standard of
performance, as it represents the optimal detection when all signal parameters are exactly known.
The synchronization problem in the matched-filter detection is obviated in a practical system by employing
an envelope detection. The envelope V(t) of a bandpass signal x(t) is given by
where
The procedure for obtaining the envelope is shown in Fig. 11: the in-phase and quadrature components
are extracted, and the envelope is derived from them. Specifically, the multiplication of x(t) by
2 cos(2πf0t) in the in-phase channel yields
The mixing operation produces two images besides the expected low-pass component. The product 2x(t)
cos(2πf0t) is passed through an ideal low-pass filter (the integrator in Fig. 11), which rejects the images
and leaves xc(t). Similar operations in the quadrature channel produce xs(t). The square root of the sum of the
squares of the quadrature components yields the envelope V(t).
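The Fig. 11 procedure can be sketched in a few lines; the sampling rate, carrier, phase, and modulating envelope are all assumed values, and an FFT-based ideal low-pass filter stands in for the integrator:

```python
# Sketch of the Fig. 11 procedure (all parameters assumed): quadrature
# demodulation of x(t) = a(t) cos(2 pi f0 t + phi), followed by an ideal
# low-pass filter implemented in the frequency domain.
import numpy as np

fs, f0, phi = 4000.0, 400.0, 0.7
t = np.arange(0, 1.0, 1.0 / fs)
a = 1.0 + 0.5 * np.cos(2 * np.pi * 5.0 * t)      # slowly varying envelope
x = a * np.cos(2 * np.pi * f0 * t + phi)

def ideal_lowpass(v, cutoff_hz):
    V = np.fft.rfft(v)
    f = np.fft.rfftfreq(v.size, 1.0 / fs)
    V[f > cutoff_hz] = 0.0                       # reject the 2*f0 images
    return np.fft.irfft(V, n=v.size)

xc = ideal_lowpass(2 * x * np.cos(2 * np.pi * f0 * t), 100.0)   # in-phase
xs = ideal_lowpass(-2 * x * np.sin(2 * np.pi * f0 * t), 100.0)  # quadrature
envelope = np.sqrt(xc**2 + xs**2)                # recovers a(t)
```
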
Removing the square-root operation from Fig. 11 yields the square-law detection. A detailed performance
analysis of these detections is given in the section entitled Performance Analysis of Coherent and Noncoherent
Detections.
The envelope can be extracted alternatively by passing the band-pass signal x(t) through a rectifier and a
low-pass filter, as illustrated in Fig. 12. Such a description can sometimes simplify the analysis and is easier to
implement physically, because the various rectifiers are readily realized with diodes and transistors.
The output of the full-wave linear rectifier is proportional to the magnitude of its input, while the output of the
full-wave square-law (quadratic) rectifier is proportional to the squared magnitude of its input. The half-wave
rectifier, of course, passes only the positive portion of its input. Figure 13 shows these transfer characteristics.
Referring to Fig. 12, we may write
where LF indicates the low-frequency portion. Also, considering the full-wave quadratic rectifier in place of the
full-wave linear rectifier, we may write
The band-pass signal x(t) in Eqs. (74) and (75) has the following form:
Since the envelope V(t) is slowly varying compared to the carrier frequency f0, the first term in Eq. (77) is
concentrated around zero frequency. The fact that this term is the square of the envelope means that its
bandwidth will be somewhat greater than that of V(t). The second term in Eq. (77) is concentrated around
2f0, with a bandwidth that depends on both the squared envelope V²(t) and the phase modulation φ(t). In most
cases of interest, the bandwidth of the total modulation is small enough compared to f0 that the low-pass
filter following the rectifier easily separates out the low-frequency portion of Eq. (77).
Fig. 13. Various rectifier characteristics: (a) full-wave linear rectifier, (b) half-wave linear rectifier, (c) full-wave square-law (quadratic) rectifier, and (d) half-wave square-law (quadratic) rectifier.
The low-pass filter removes all of the terms in the curly brackets except the first. Thus, if the
bandwidth of V(t) is not too large, a very good approximation of the envelope is obtained. A similar
analysis can be carried out to show that the half-wave linear and the half-wave quadratic rectifiers also extract the
envelope V(t).
We note that envelope detection is referred to as linear detection because of its transfer characteristic:
the output is proportional to the input when the input is positive, as illustrated in Fig. 13(a).
The operation of envelope detection is, however, highly nonlinear, and as a result the output
consists of a dc term proportional to the envelope plus an infinite number of harmonics of the input at 2f0, 4f0,
etc. It is for this reason that the envelope-detection output must be passed through a low-pass filter to eliminate
the unwanted harmonics. Similar comments apply to square-law detection.
Justification of the Noncoherent Detections. The justification of the envelope and the square-law
detections by the likelihood ratio criterion is given in this subsection. The radar detection process may include
down-conversion of the carrier frequency to a more manageable intermediate frequency (IF). This step, however,
is irrelevant to the results we are going to obtain and is therefore omitted.
Consider the signal to be a carrier pulse of the form
for 0 ≤ t ≤ T, where n(t) is a Gaussian white-noise process with two-sided spectral density N0/2.
The detection problem described by Eq. (79) consists of examining the received waveform r(t) and determining whether it consists of a signal plus noise or noise alone. The optimal detection, as previously described,
forms the likelihood ratio which is compared against a threshold.
The sampling bandwidth B, which is the reciprocal of the sampling interval, must be sufficiently large to
pass essentially all of the signal energy, which will be the case if B ≫ 1/T. In this case, by the sampling
theorem, the number of samples is k = 2BT. Given these conditions, the likelihood
ratio can be written as
where the noise variance σ² is (N0/2)(2B) = N0B. Recall from the sampling theorem (2,3) that for any two
band-limited functions u(t) and v(t) we can write
Because the signal is of finite duration, the approximation in passing from the discrete to the continuous
representation improves as B is allowed to become very large.
In the noncoherent case, the phase θ in Eq. (83) is unknown. Since no auxiliary information about θ is available, it
is reasonable to assume θ to be uniformly distributed over 2π rad. The average likelihood ratio is then
where
and I0 is the modified Bessel function of order zero. The likelihood ratio test is
where the signal energy is Es = A²T/2. Thus the natural logarithm of the modified Bessel function I0 is the
optimum noncoherent detection characteristic. For a large SNR, the likelihood ratio test can be approximated
as
The implementation of the likelihood ratio test of Eq. (89) is shown in Fig. 11, where the in-phase and
quadrature channels generate xc and xs of Eq. (86), respectively. Summing the squares of xc and
xs yields the square-law detection of Eq. (90); taking the square root in addition yields the envelope detection
of Eq. (89).
For the noncoherent detection, the in-phase and the quadrature components from Eq. (86) are
For white Gaussian noise with two-sided spectral density N 0 /2, we have the following quantities:
under the H1 and H0 hypotheses, respectively. The averaged joint probability density function of p1(Xc, Xs | θ) is
For the envelope V = √(Xc² + Xs²), with Xc = V cos θ and Xs = V sin θ, the threshold R0 is therefore determined from
the false-alarm probability Pfa, and the result can be put into a more convenient dimensionless form by a change
of variable. For the square-law detection, the threshold R0 can likewise be determined from the false-alarm
probability Pfa.
Changing the variable x = Z/(N0T/4) in Eq. (108) yields a more convenient dimensionless form
Hence
Equation (112) indicates that the quadrature noise component ns(t) sin(2πf0t) has been rejected from the output
by the low-pass filter. The first term in Eq. (112) is the signal component, with output signal power A²T²/4, while the
second term represents the in-phase noise component, with output noise power E[nc²(t)]T²/4. Using Ref. 2,
The input signal power is A²/2 and the input noise power is E[n²(t)] = σ². The input SNR is therefore
and we have the following relationship between the input and output SNRs for coherent detection:
It is clear that coherent detection gives a 3 dB improvement in SNR. The reason for this improvement is that
the multiplier and low-pass filter in Eq. (110) eliminate the quadrature noise component ns(t) sin(2πf0t).
On the other hand, in the noncoherent case both the in-phase and the quadrature noise components
come into play. To analyze the square-law detection easily, we use the equivalent scheme of Fig. 12 with the
square-law rectifier characteristic y = x², as shown in Fig. 13(c), replacing the linear rectifier. The output Z
of a square-law detection for the following received signal:
This output can be regarded as composed of three terms. The first term, A²T/2, is the desired output signal
component, with output signal power (A²T/2)². The second term, ATnc(t), represents the carrier-noise component,
with associated output noise power A²T²σ². The third term, ½[nc²(t) + ns²(t)], is the self-noise component.
The associated noise power is
where E[n⁴(t)] = 3{E[n²(t)]}² = 3σ⁴ has been used in the last step. With these results, we can write the output
SNR as
Fig. 15. Output signal-to-noise ratios for the matched filter and the square-law detection.
If the input SNR is much larger than 1, the output SNR is approximately equal to ½SNRin; the square-law
detection thus causes a 3 dB reduction in signal-to-noise ratio. For input signal-to-noise ratios much
less than 1, 2SNRin is negligible compared with 1, and Eq. (120) shows that SNRout is then equal to SNR²in. In
this case, the square-law detection causes a very serious degradation of the signal-to-noise ratio.
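Both limits follow from a single closed form; the expression below is an assumed reading of Eq. (120) (the equation itself is not shown inline), chosen because it reproduces both asymptotes quoted above:

```python
# Sketch: square-law detection SNR transfer.  The closed form
#   SNR_out = SNR_in**2 / (1 + 2*SNR_in)
# is an assumed reading of Eq. (120); it reproduces both limits in the text
# (SNR_out -> SNR_in/2 for large input SNR, SNR_out -> SNR_in**2 for small).
def square_law_snr_out(snr_in):
    return snr_in**2 / (1.0 + 2.0 * snr_in)

big, small = 1e3, 1e-3
ratio_big = square_law_snr_out(big) / (big / 2.0)       # ~1: the 3 dB loss
ratio_small = square_law_snr_out(small) / small**2      # ~1: small-signal suppression
```
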
The relationship between SNRout and SNRin for the matched filter and the square-law detection is shown
in Fig. 15. It is clear from both Fig. 14 and Fig. 15 that the noncoherent detection is inferior to the coherent
detection for low input signal-to-noise ratios and approximates the coherent detection for high input signal-to-noise ratios, in both the detection probability and the output signal-to-noise ratio.
We note that in military communication systems, a large time-bandwidth product signal is referred to as
a spread-spectrum signal (2). It provides resistance to jamming and has a low interception probability because
the signal is transmitted at low power. Among the various spread-spectrum signals, the direct-sequence spread
spectrum (DSSS) signal, in which the transmitted signal is modulated by a pseudorandom sequence, is used in
code-division multiple access (CDMA) communications (6). The frequency-hopped spread spectrum (FHSS) is
another widely used spread-spectrum signal in modern communication systems (2,7).
Wigner-Ville Distribution and Ambiguity Function of an LFM Signal. For an LFM signal the
analytic form is
which has an initial frequency f0 that increases at a frequency rate m. Since an LFM signal is a nonstationary
signal, the best way to describe it is through distribution functions such as the Wigner-Ville distribution (WVD)
and the ambiguity function (AF). The WVD of a signal s(t) is defined as
The WVD is the Fourier transform (with respect to the delay τ) of the signal's correlation function. It relates the
time and the instantaneous frequency of a signal. Substituting the LFM signal of Eq. (121) into this definition
yields (15)
where fi(t) is given in Eq. (122) and the sinc function is defined as
Figure 16 shows the WVD of an LFM signal with f0 = 20, m = 12, and T = 2. It is seen from the WVD that
the instantaneous frequency increases linearly with time in accordance with Eq. (122), whereas this
relationship is not observable from the spectrum of the signal, which is also shown at the top of Fig. 16.
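The linear instantaneous-frequency law fi(t) = f0 + mt can be verified directly from the phase of the analytic LFM signal; a sketch using the same parameters as the example (f0 = 20, m = 12, T = 2), with the sampling rate assumed:

```python
# Sketch: instantaneous frequency of an LFM (chirp) signal, estimated as the
# derivative of the unwrapped phase of its analytic form.  Parameters match
# the example above (f0 = 20, m = 12, T = 2); the sampling rate is assumed.
import numpy as np

f0, m, T, fs = 20.0, 12.0, 2.0, 1000.0
t = np.arange(0.0, T, 1.0 / fs)
s = np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * m * t**2))   # analytic LFM

phase = np.unwrap(np.angle(s))
f_inst = np.diff(phase) / (2 * np.pi) * fs               # ~ f0 + m*t

# At t = 1 s the instantaneous frequency is close to f0 + m*1 = 32 Hz,
# and a straight-line fit recovers the frequency rate m.
slope = np.polyfit(t[:-1], f_inst, 1)[0]
```
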
The ambiguity function (AF) is defined as
where ν and τ denote the frequency shift and the delay, respectively. The AF is the Fourier transform (with respect to
the time t) of the signal's correlation function, and it relates the delay and the Doppler frequency (or frequency
shift). Note that the AF and the WVD form a two-dimensional (2-D) Fourier pair, that is,
where F and F⁻¹ denote the Fourier operator and its inverse, respectively. Applying Parseval's theorem
∫u(t)v*(t) dt = ∫U(f)V*(f) df to Eq. (125), we obtain
The AF can therefore be regarded as the matched-filter output for a delay τ and frequency shift ν.
The AF has proven to be an important tool in analyzing and constructing radar signals by relating range
and velocity resolutions. By constructing signals having a particular ambiguity function, desired performance
Fig. 17. Ambiguity function of the LFM signal. The AF is symmetric about the origin, and its greatest value appears
above the origin.
characteristics can be achieved. For example, the magnitude of the AF of the LFM signal in Eq. (121) is
which is shown in Fig. 17 for the same LFM signal as in Fig. 16. The AF is symmetric about the origin τ = ν = 0,
and its greatest value appears above the origin. The time delay τ is related to the range, and the frequency
shift ν is related to the Doppler shift. Thus the AF describes the range-Doppler ambiguity of the transmitted signal.
An ideal radar signal is one whose AF is a thumbtack function, because it leaves the least ambiguity in
resolving the range and Doppler shift.
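The AF of an LFM signal can also be evaluated numerically from its definition; a sketch with assumed rate and duration, using one common sign convention (the ridge then lies along ν = ±mτ depending on the convention chosen):

```python
# Sketch: numerically computed ambiguity function of a complex LFM signal,
#   chi(tau, nu) = integral u(t) conj(u(t - tau)) exp(j 2 pi nu t) dt
# (one common sign convention; rate, duration, and sampling are assumed).
import numpy as np

m, T, fs = 10.0, 1.0, 400.0
t = np.arange(0.0, T, 1.0 / fs)
u = np.exp(1j * np.pi * m * t**2)                 # unit-amplitude LFM

def af(tau, nu):
    shifted = np.interp(t - tau, t, u.real, left=0, right=0) \
        + 1j * np.interp(t - tau, t, u.imag, left=0, right=0)
    return np.abs(np.sum(u * np.conj(shifted)
                         * np.exp(1j * 2 * np.pi * nu * t)) / fs)

peak = af(0.0, 0.0)                               # = signal energy = T
ridge = max(af(0.2, m * 0.2), af(0.2, -m * 0.2))  # on the LFM ridge
off = af(0.2, 0.0)                                # off the ridge: much smaller
```

The large on-ridge value at nonzero delay is exactly the range-Doppler coupling of the LFM signal discussed above.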
Detection of Multiple LFM Signals. The matched-filter detection is the optimal detection if all of the
signal information (phase, initial frequency, and frequency rate) is available. However, these parameters are
difficult to specify, because accurate values of the range, velocity, and acceleration of a target are not
available. Noncoherent detection is thus preferred. Next we consider the noncoherent detection
of multiple LFM signals in a noise background.
For multiple LFM signal detection, it is often the case that the frequency rate is the only parameter of
interest in practice (8). In other words, the frequency rates distinguish different LFM signals. Such a scenario
occurs in the radar detection of a small, fast-moving missile launched from a relatively slow-moving aircraft.
In many applications, multiple LFM signals can be detected by locating maxima in the frequency rate.
AF of Multiple LFM Signals. The input signal to be analyzed is modeled as a linear sum of two (the model may be
extended to more than two) LFM signals with frequency rates m0 and m1, as given by
Here fi represents the carrier (or initial) frequency, which is proportional to the velocity of the target, and
the frequency rate mi is proportional to the acceleration. The AF defined by Eq. (125) becomes
where
The last two terms of Eq. (131) are interference terms generated by the two LFM components in the signal r(t),
due to the nonlinearity of the AF. Using the following identity
Figure 18(a) shows the AF [Eq. (133)] of a signal composed of two LFM signals which may represent two
targets with different velocities and accelerations. Although there is cross-term interference, we can identify
Fig. 18. The ambiguity functions of a bicomponent LFM signal (a) without noise and (b) with additive white Gaussian
noise (SNR = −6 dB).
the two straight lines representing the bicomponent signal in Fig. 18(a). However, the two LFM signals are
not obvious if they are corrupted by noise. Figure 18(b) is identical to Fig. 18(a) except that the two signals are
corrupted by Gaussian white noise with SNR = −6 dB.
Detecting Multiple LFM Signals Using the Radon-Ambiguity Transform. Recall that the Radon transform
(9), commonly used for the reconstruction of images in computer tomography, is defined by
for −∞ < s < ∞ and −π/2 < θ < π/2, where the angle θ specifies the direction of integration and the parameter s
represents the shifted location of the origin. Equation (134) represents the sum of the values of f(x, y)
along the line that makes an angle θ with the x axis and is located at a distance s from the origin. The Radon-Wigner
transform (9,10) is a special case in which f(x, y) in Eq. (134) is taken to be the WVD of a multicomponent LFM
Fig. 19. (a) The WVD of a bicomponent signal. (b) The AF of the bicomponent signal.
signal. The WVD of a bicomponent signal is drawn graphically in Fig. 19(a). The Radon-Wigner transform of
Fig. 19(a) produces two maxima in the resulting (s, θ) plane. Figure 19(b) is the AF of the same signal as in Fig.
19(a). The AF is the 2-D Fourier transform of the WVD; thus they share the same angles θ0 and θ1, as shown
in the WVD. However, the initial frequencies shown in Fig. 19(a) have disappeared in Fig. 19(b), since they
have been mapped into the phase of the AF. This also explains why the AFs of the two chirps pass through the
origin of the τ-ν plane. Thus, by applying the Radon transform to the phase-free (magnitude) ambiguity function, the
detection of multicomponent signals can be reduced from the 2-D search problem of the Radon-Wigner transform to a
1-D search problem. The advantage of the ambiguity function over the WVD has been shown in kernel
design for time-frequency analysis (11), and this work can be extended to the detection of multicomponent
signals (12).
Since all directions of interest pass through the origin of the ambiguity plane, the Radon transform with
parameter s set to 0 is applied to the phase-free ambiguity function of Eq. (133). We essentially compute the
line integral along a straight line whose direction is specified by the angle θ(m) in the ambiguity plane.
Therefore the detection statistic can be formed by the so-called Radon-ambiguity transform (12) as
Since the infinite integrals in Eq. (135) usually diverge, it is necessary to first remove the constant term
from the integrand. Specifically, for m = mi (i = 0, 1) and assuming m0 − m1 > 0, we have from Eq. (133)
with
Removing the constant from Eq. (136) and substituting it into Eq. (135) yields
For m ≠ m0 and m ≠ m1 (i.e., am ≠ 0), it is clear that Λ(m) is finite. By Eqs. (137) and (138), we have Λ(m) → ∞ as
m → m0 or m → m1. Therefore, by calculating Λ(m) and comparing it to a preset threshold, the multicomponent
signals can be detected.
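A rough numerical sketch of this detection scheme follows; the signal parameters, the search grid, and the discrete approximation of the ambiguity function are illustrative choices rather than values from the text. The idea is simply to sum the ambiguity-function magnitude along candidate lines through the origin and look for peaks over the chirp-rate parameter m:

```python
import numpy as np

def lfm(N, m):
    """Unit-amplitude LFM chirp exp(j*pi*m*(n - N/2)^2) with chirp rate m."""
    n = np.arange(N)
    return np.exp(1j * np.pi * m * (n - N / 2) ** 2)

def radon_ambiguity(x, m_grid):
    """Sum |AF| along lines through the origin of the ambiguity plane.

    Row l of `afmag` is the FFT over t of x(t+l)*conj(x(t-l)), i.e., a
    constant-lag slice of the ambiguity function (lag tau = 2l samples).
    For a chirp of rate m, that product is a tone at frequency 2*m*l.
    """
    N = len(x)
    L = N // 4
    afmag = np.empty((L, N))
    for l in range(1, L + 1):
        prod = x[2 * l:] * np.conj(x[:N - 2 * l])   # x(t+l) * conj(x(t-l))
        afmag[l - 1] = np.abs(np.fft.fft(prod, N))
    lam = np.zeros(len(m_grid))
    for j, m in enumerate(m_grid):
        for l in range(1, L + 1):
            k = int(round(2 * m * l * N)) % N       # Doppler bin on the line
            lam[j] += afmag[l - 1, k]
    return lam

m0, m1 = 0.001, 0.003                  # chirp rates (illustrative)
x = lfm(256, m0) + lfm(256, m1)        # bicomponent LFM signal
m_grid = np.linspace(0.0, 0.005, 251)
lam = radon_ambiguity(x, m_grid)       # peaks near m = m0 and m = m1
```

The statistic peaks at the two chirp rates, while the cross-terms only produce a smaller bump between them, as in Fig. 20.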
Finite-Length Signal. Now we consider a bicomponent finite-length signal as given by
with the assumption that A0 = A1 and m0 > m1 for simplicity. The modulus of the ambiguity function
of r(t) along the line ν = mτ can be calculated by making use of the following integral,
to yield
Fig. 20. The Λ(m) of two equal-amplitude LFM signals with T = 40, m0 = 200/T², m1 = 100/T². Solid line: Λ(m). Dashed
line: auto terms only. Dotted line: cross-terms only. The two peaks indicate the existence of the equal-amplitude LFM
signals.
where
while am in Eq. (141) is defined by Eq. (137). For |τ| ≤ T, the first two terms in Eq. (141) represent the auto
terms of the signal, while the rest express the cross-terms. Figure 20 shows the integral of Eq. (141) over τ,
that is, Λ(m), for two LFM signals with equal amplitudes. Also shown in Fig. 20 are the integrals of the auto
terms and cross-terms of Eq. (141).
Output Signal-to-Noise Ratio Analysis. The output SNR of the statistic in Eq. (135) can be analyzed
by making use of the following quantities (12):
to find
It is seen from Eq. (142) that there is a 3 dB loss in SNR between the input and the output when the input
SNR is high, and the output SNR degrades severely when the input SNR is low, illustrating a typical nonlinear
detection characteristic.
Conclusion
We have presented the techniques of radar signal detection, as well as the related performance analyses. The
following conclusions can be drawn.
Among various detection criteria, the Neyman–Pearson criterion is particularly well suited to radar detection, owing to its concept of an a priori fixed Pfa and a maximized Pd .
Coherent detection, in the form of a matched filter or a cross-correlation, is the optimal detection for an
exactly known signal (i.e., phase, amplitude, and Doppler frequency are known) in a background of white
noise.
In a typical radar application, the range between the target and the radar represents a very large number
of transmitted signal wavelengths. This makes specifying the phase of the return signal extremely difficult,
and a noncoherent detection has to be used.
Noncoherent detection is inferior to coherent detection for low input signal-to-noise ratios and
approximates coherent detection for high input signal-to-noise ratios.
There is an inherent conflict between long-range detection and high-range-resolution capability for the
unity time-bandwidth signal. Large time-bandwidth signals such as an LFM signal do not have such a
conflict.
Large time-bandwidth signals can be described by the ambiguity function or the WignerVille distribution.
The Radon-ambiguity transform can be used to detect multiple LFM signals.
BIBLIOGRAPHY
1. C. E. Cook, M. Bernfeld, Radar Signals: An Introduction to Theory and Application, New York: Academic Press, 1967.
2. J. G. Proakis, M. Salehi, Communication Systems Engineering, Englewood Cliffs, NJ: Prentice-Hall, 1994.
3. J. Minkoff, Signals, Noise, and Active Sensors, New York: Wiley, 1992.
4. J. Brown, E. V. D. Glazier, Signal Analysis, New York: Reinhold, 1964.
5.
6.
7.
8.
9.
10.
11.
12.
READING LIST
M. Barkat, Signal Detection and Estimation, Norwood, MA: Artech House, 1991.
B. Bouachache, Time-frequency signal analysis, in S. Haykin (ed.), Advances in Spectral Estimation and Array Processing,
Englewood Cliffs, NJ: Prentice-Hall, 1991, Vol. 1, Chap. 9, pp. 418–517.
J. V. DiFranco, W. L. Rubin, Radar Detection, Englewood Cliffs, NJ: Prentice-Hall, 1968.
J. L. Eaves, E. K. Reedy (eds.), Principles of Modern Radar, New York: Van Nostrand-Reinhold, 1987.
G. Galati (ed.), Advanced Radar Techniques and Systems, Stevenage, UK: Peregrinus, 1993.
H. V. Poor, An Introduction to Signal Detection and Estimation, New York: Springer, 1988.
D. C. Schleher, MTI and Pulsed Doppler Radar, Norwood, MA: Artech House, 1991.
H. Urkowitz, Signal Theory and Random Processes, Norwood, MA: Artech House, 1983.
MINSHENG WANG
Texas Instruments Incorporated
ANDREW K. CHAN
Texas A & M University
feature map (SOM) has been used as a feature extractor (5) for radar target recognition. In combination
with Kohonen's learning vector quantizer (LVQ) for supervised classification, SOM has been applied to the
recognition of ground vehicles from MMW HRR radar signatures (5). Perlovsky et al. (6) suggested the
model-based neural network to include a priori information in an ANN. This approach can reduce the search space
of the neural network by adding a priori information to the adaptability of an ANN. The fuzzy neural
network approach (7) has also been suggested to classify targets that may belong to more than one class. Advances
in neural network approaches can further improve the performance of target recognition algorithms. There are
many approaches to incorporating information from many different sources for radar target recognition. By
fusing the information from more than one sensor, the accuracy of radar target recognition may be improved.
Two approaches in utilizing information from multiple sensors are discussed in this article. IFSAR provides
elevation information, in addition to two-dimensional radar images, by processing interference between radar
returns received by two different antennas. By processing IFSAR images and fusing them with SAR or visual images,
the accuracy of the target recognition can be improved substantially. The approach of combining IFSAR and
visual images using an image registration approach is discussed in this article. There are statistical approaches
in data fusion, and Bayesian data fusion approaches are used in radar target recognition (8). In this approach,
features from polarimetric SAR images are fused to improve the recognition accuracy.
Radar target recognition is a complex problem, and no single algorithm outperforms all others across
different types and modes of radar. In this article, different approaches for radar target
recognition are discussed in terms of radar types and approaches.
One approach to achieve this is to train a classifier with features extracted from scaled signals of x(t). For
example, different features at m different scales are extracted from scaled signals y1 (t), y2 (t), . . ., ym (t); then
the classifier is trained with these multiscale features. If the number of scales included in the training is large
enough, the classifier will classify signals having large-scale changes.
However, there are at least two potential problems with this approach if the signal is a discrete signal {x(i),
i = 1, . . ., N}. First, the original signal is defined only at discrete points, and the signal at the finer scale is not
defined at certain points. Second, feature extraction is performed multiple times with a single training sample,
and the computational complexity increases linearly as the number of scales increases. These difficulties can
be solved by the hierarchical modeling approach. The hierarchical modeling approach presented in this section
extracts multiscale features without adding much computational complexity.
A discrete signal can be scaled to a coarser scale or a finer scale by decimation filtering or interpolation,
respectively. We will first consider the decimation filtering of a signal and its effect on the statistical model,
and then we will consider the scaling to a finer scale as a modeling process. A decimation filter is defined as a
local averaging (finite impulse response [FIR] filtering) followed by a down-sampling process, as shown in Fig.
1. If the down-sampling rate is m, the decimation-filtered signal represents the signal at the scale reduced by
the factor of m. Let H be an FIR filter of length r and let ↓m denote the down-sampling operator of factor m.
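The decimation filter just described, FIR smoothing followed by down-sampling, can be sketched as follows; the averaging filter and the factor below are illustrative choices:

```python
import numpy as np

def decimation_filter(x, h, m):
    """Local FIR averaging with filter h followed by down-sampling by m."""
    smoothed = np.convolve(x, h, mode="valid")  # FIR filtering (H)
    return smoothed[::m]                        # down-sampling operator

x = np.arange(10, dtype=float)
y2 = decimation_filter(x, np.ones(2) / 2, 2)    # scale reduced by a factor of 2
```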
Suppose that a signal at a coarser scale ym (i) is obtained by decimation-filtering of the original signal x(i).
where {w(i)} is a zero-mean white noise sequence with variance σw², and the aj's and bj's are real coefficients.
Equation (5) can be rewritten as
where
and λ is the unit delay operator, and we assume that the roots of Ap (λ) and Bq (λ) lie inside the unit circle for
stability and invertibility of the model.
To find features at coarser scale, the model at a coarser scale should be considered. The following theorem
summarizes the results on the modeling of a decimation-filtered ARMA process.
Theorem 1. The decimation-filtered process {ym (i)} defined in Eq. (4) follows an ARMA(p, q′) model, where the order of
the AR polynomial is p, the order of the MA polynomial is q′ = ⌊(p(m − 1) + r + q − 1)/m⌋, and the model parameters
can be obtained from the model parameters of x(i).
where
The AR parameters are estimated by solving the above Yule–Walker equations. By using the estimated
AR parameters, the MA component of x(i) can be obtained by filtering the AR component from x(i).
The power spectral density of the ARMA process x(t) is estimated from the correlations of xma (t) and the AR parameters estimated by the Yule–Walker equations. The ELS power spectrum estimation algorithm is summarized
as follows.
Step 3: Compute the sample correlation of MA component xma (i) that is obtained by removing the AR
component.
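The Yule–Walker step can be illustrated with a minimal sketch; a pure-AR case is used here for brevity, and the model order, coefficient, and sample size are illustrative:

```python
import numpy as np

def yule_walker_ar(x, p):
    """Solve the Yule-Walker equations for AR(p) coefficients."""
    x = x - x.mean()
    n = len(x)
    # biased sample autocorrelations r(0), ..., r(p)
    r = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:p + 1])

rng = np.random.default_rng(0)
w = rng.standard_normal(20000)
x = np.empty_like(w)                 # simulate AR(1): x(i) = 0.7 x(i-1) + w(i)
x[0] = w[0]
for i in range(1, len(w)):
    x[i] = 0.7 * x[i - 1] + w[i]
a = yule_walker_ar(x, 1)             # estimate close to 0.7
```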
For each training sample x(i), the models at the other scales (both coarser and finer scales) are obtained
by the hierarchical modeling approach presented in the previous section. The model at a coarser scale is
obtained using Theorem 1. The AR polynomial is obtained by Eq. (10), and the correlation of the signal at
the coarse scale is obtained with a proper choice of smoothing filter H, such as a Gaussian filter. Thus, the
spectral density of the signal at a coarser scale is obtained by the ELS algorithm. The model at a finer scale is
obtained by the approach explained in step 2. The AR polynomial of the signal at a finer scale is obtained under
no-hidden-periodicity assumption. The correlation function at a finer scale is obtained by disaggregation (9),
and the ARMA spectrum at a finer scale is obtained by the ELS estimation algorithm. The multiscale feature
extraction algorithm is summarized as follows.
Multiscale Spectral Feature Extraction Algorithm:
Step 1: Each radar return is normalized to zero mean and unit variance by
where m̄ and σ² are the sample mean and sample variance of x(i). M K-dimensional features from M scales
(including coarser and finer scales) are obtained from the normalized radar returns by the following
procedure.
Step 2: For each training sample, the AR parameters and correlations are estimated by the ELS algorithm.
For k = 0, 1, . . ., K − 1, the power spectrum is estimated at ω = k/K. The logarithm of the power spectral
density forms a K-dimensional feature vector.
Step 3: At each coarser scale, a feature vector is obtained by estimating the power spectrum using the
ELS method, with model parameters obtained by the hierarchical modeling approach. The logarithm of
the power spectral density forms a K-dimensional feature vector at a coarser scale. Feature vectors at
multiple scales are obtained by repeating this step at coarser scales.
Step 4: At each finer scale, a feature vector is obtained by estimating the power spectrum using the ELS
method, with model parameters obtained by the hierarchical modeling approach. This is repeated for
other finer scales, and multiple K-dimensional feature vectors are obtained from the logarithm of the
power spectral density.
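The per-scale feature computation can be illustrated with a simplified stand-in: the sketch below substitutes a plain periodogram for the ELS ARMA spectral estimate, and the value of K and the test signal are illustrative:

```python
import numpy as np

def logpsd_feature(x, K):
    """K-dimensional log power-spectral-density feature of one radar return."""
    x = (x - x.mean()) / x.std()                  # Step 1: normalization
    psd = np.abs(np.fft.fft(x, 2 * K)[:K]) ** 2 / len(x)
    return np.log(psd + 1e-12)                    # log PSD at K frequencies

rng = np.random.default_rng(1)
f = logpsd_feature(rng.standard_normal(128), K=32)
```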
Classification is done by a minimum distance classifier with multiple prototypes. In this approach, each
training sample generates M prototypes corresponding to M scales. Therefore, if there are N training signals
for each class, then NM prototypes will be available for each class. Let us assume that there are Nk prototypes
zk1 , . . ., zkNk in class k ∈ {1, . . ., K}. For a test pattern x, the distance to class k is defined by
where the intersample distance d(x, z) is the Euclidean distance between x and z. The distance Dk is the
smallest of the distances between x and each of the prototypes of class k. The test pattern x is classified by
the minimum distance decision rule: x is classified into class k if Dk < Di for all i ≠ k.
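This minimum distance rule with multiple prototypes per class can be sketched directly; the class labels and prototype vectors below are illustrative toy values:

```python
import numpy as np

def classify(x, prototypes):
    """prototypes: dict mapping class label -> list of prototype vectors.
    D_k = min over the class's prototypes z of ||x - z||; pick the smallest D_k."""
    D = {k: min(np.linalg.norm(x - z) for z in zs)
         for k, zs in prototypes.items()}
    return min(D, key=D.get)

protos = {
    "tank":  [np.array([0.0, 0.0]), np.array([1.0, 0.0])],
    "truck": [np.array([5.0, 5.0]), np.array([6.0, 5.0])],
}
label = classify(np.array([0.9, 0.2]), protos)
```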
In Ref. 9, the hierarchical model-based features are tested with NCTI data. Figure 2 shows a typical
NCTI radar signature and estimated power spectral density. In Ref. 9, a classification accuracy of about 95% is
reported with 5000 MMW HRR radar signatures.
Fig. 2. An HRR radar signature from the NCTI database and its power spectrum estimated by hierarchical modeling.
for selecting, developing, clustering, and compressing features into a useful set, and they provide automatic
knowledge acquisition and integration techniques for target recognition systems.
Feed-forward neural networks have been used as pattern classifiers for target recognition (10). Let
xi be the multidimensional feature vector extracted from a radar image, and let S be the index set of the target
patterns.
where T is the threshold and Θ(x) is the Heaviside step function. Roth (10) showed that detection of target
patterns out of a set of P patterns can be handled by the preceding feed-forward neural network.
Neural networks have also been used as feature extractors for target recognition. Kohonen's self-organizing map (SOM) and LVQ have been used in a two-stage target recognition approach (5). SOM is
based on unsupervised competitive learning, in which only one output node, or one per local group of nodes,
gives the active response to the current input signal at a time. It clusters input vectors into preselected C
classes by adapting the connection weights of the nodes in the network and is used as a feature extractor in Ref. 5. At
each iteration of the SOM algorithm, the best matching node c is selected by
where x is the current input vector, and {m1 , . . ., mC } is the set of nodes (cluster centers). Then each node mi
located in the neighborhood of the node c is adapted by the learning rule:
where the gain αi (t) can be a simple monotonically decreasing function of time or a Gaussian gain function
defined by
The learning rate α(t) and the kernel width σ(t) are monotonically decreasing functions, and their exact forms
are not critical. LVQ is used as a supervised classifier of the features extracted by SOM in Ref. 5. In Ref. 5,
an accuracy of more than 94% is reported in a target recognition experiment with MMW data containing five types
of ground vehicles.
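One adaptation step of the SOM learning rule above can be sketched for a 1-D arrangement of nodes with a Gaussian gain; the constant learning rate and kernel width below are illustrative stand-ins for the decreasing schedules described:

```python
import numpy as np

def som_step(x, nodes, lr=0.5, width=1.0):
    """One SOM update: find the best matching node c, pull neighbors toward x."""
    c = int(np.argmin(np.linalg.norm(nodes - x, axis=1)))      # best match
    for i in range(len(nodes)):
        gain = lr * np.exp(-((i - c) ** 2) / (2 * width ** 2))  # Gaussian gain
        nodes[i] += gain * (x - nodes[i])                       # learning rule
    return c

nodes = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])  # cluster centers
x = np.array([2.0, 1.0])                                # current input vector
before = np.linalg.norm(nodes[2] - x)
c = som_step(x, nodes)
after = np.linalg.norm(nodes[2] - x)                    # winner moved toward x
```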
Recently, the model-based neural network (MBNN) was introduced (6) to combine a priori knowledge
of models of data with adaptivity to changing data properties. The learning and adaptation of the MBNN
is done by iterative estimation of association weights and model parameters. Different statistical models for
different physical processes, background clutter, outlier, target pixels, and so on, are also introduced in Ref. 6.
This approach has the potential to improve target recognition performances by allowing inclusion of a priori
information in addition to the adaptability of a neural network.
Fuzzy neural networks are also used in radar target recognition. Fuzzy ARTMAP and ART-EMAP neural
networks have been suggested (7) for radar target recognition. Fuzzy neural networks allow us to make soft decisions
in classifying a target, and each input vector can belong to more than one class. The fuzzy association between
the input vector and the classified target can improve the performance and reduce the complexity of the adaptation.
from IFSAR data: The data are noisy and the spatial resolution is much inferior to that of visual data. The
spatial resolution is further degraded by the noise removal step. Figure 3 shows a height map produced by a
real IFSAR. A typical IFSAR elevation image is noisy and needs to be filtered before it can be reliably used.
Also, there are regions with no data that result either from the fact that the original scene was not on a
rectangular grid or from radar geometry effects, which cause some points not to be mapped. Interpolation and
nonlinear filtering techniques are used to filter the elevation data.
Positioning of IFSAR and visual data allows for the fusion of cues from both sensors for target recognition.
It is needed to overcome various difficulties resulting from the limitations of each sensor. For example, building
detection requires the extraction and grouping of features such as lines, corners, and building tops to form
buildings (12). The features extracted from visual data usually contain many unwanted spurious edges, lines,
and so on that do not correspond to buildings. The grouping stage requires complex and computationally
intensive operations. Further, the height of a building is typically estimated by extracting shadows and sun
angle when available and is not reliable when the shadows are cast on adjacent buildings. Another drawback
of methods based exclusively on visual data lies in their sensitivity to imaging conditions.
IFSAR elevation data can be used in conjunction with visual data to overcome the aforementioned difficulties. Current IFSAR technology provides sufficient elevation resolution to discriminate building regions
from surrounding clutter. These building regions are not well defined from a visual image when the buildings
have the same intensity level as their surrounding background. Similarly, a building having different colors
may be wrongly segmented into several buildings. IFSAR data are not affected by color variations in buildings
and therefore are better for building detection.
Figure 4 shows a visual image and edges detected by the Canny operator for the area shown in Fig. 3. The
top part of Fig. 4 shows a building with two different roof colors and roof structures on many buildings. Many
spurious edges not corresponding to the building appear in the edge map shown on the bottom right of Fig. 4.
Using the IFSAR elevation map shown in Fig. 3, buildings and ground regions are labeled using a two-class
classifier. The IFSAR and visual images are registered. Figure 5 shows the result of registration of a visual
image and the segmented elevation image. Features corresponding to roads, parked cars, trees, and so on are
suppressed from the visual images using the segmented buildings derived from the IFSAR image.
The locations and the directions of edges in the segmented image are estimated and are used to locate
edges of buildings in the visual image. In the visual image, an edge pixel corresponding to each edge pixel in
the registered height image is searched in the direction perpendicular to the estimated direction in the height
image.
Fig. 5. Buildings segmented from the IFSAR image overlaid on the visual image.
If an edge is found within a small neighborhood, the edge pixel is accepted as a valid edge of a building.
If such a pixel is not found in the neighborhood, the edge is not accepted. Figure 6 shows the refined edges
obtained by searching in the neighborhoods of height edges. Most of the building edges in the height image are
found, while the unwanted edges are removed.
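This validation step can be sketched as a neighborhood search; for simplicity the sketch searches a full square neighborhood rather than only the perpendicular direction described, and the pixel coordinates below are illustrative:

```python
def validate_edges(height_edges, visual_edges, radius=2):
    """Keep height-image edge pixels that have a visual edge pixel nearby."""
    visual = set(visual_edges)
    valid = []
    for (r, c) in height_edges:
        if any((r + dr, c + dc) in visual
               for dr in range(-radius, radius + 1)
               for dc in range(-radius, radius + 1)):
            valid.append((r, c))          # accepted as a valid building edge
    return valid

height_edges = [(10, 10), (50, 50)]
visual_edges = [(10, 11), (30, 30)]       # (50, 50) has no nearby visual edge
kept = validate_edges(height_edges, visual_edges)
```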
pattern-matching classifier. The basic structure of the SDF and MACE filter is characterized in the frequency
domain by
where H denotes the DFT of the spatial matched filter. The matrix X is composed of a set of target training
vectors obtained by taking the DFT of the target training images. The vector U represents a set of constraints
imposed on the values of the correlation peaks obtained when the training vectors are run through the spatial
matched filter. The matrix A represents a positive definite weighting matrix. A is an identity matrix for SDF
Fig. 7. Block diagram of a typical baseline target recognition system. [Adapted from Novak et al. (13).]
where N is the number of training images and p is the dimension of the training vectors.
In the QDCC, the DFT of the spatial matched filter is expressed by
where m1 and m2 are means of the DFTs of the training images for classes 1 and 2, respectively. S is a diagonal
matrix defined by
where M 1 and M 2 are matrices with elements of m1 and m2 placed on the main diagonal, and X i and Y i are ith
training vectors from classes 1 and 2, respectively.
In the shift-invariant 2-D pattern-matching classifier, the correlation scores are calculated by
where T is the DFT of the dB-normalized test image and Ri is the ith reference template.
Novak et al. (2) performed extensive experiments with high-resolution (1 ft × 1 ft) fully polarimetric SAR data.
In a four-class classification experiment using four types of spatial matched filter classifiers, it is reported
that all targets were correctly classified (2).
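The shift-invariant correlation score can be sketched with the 2-D DFT; the dB normalization of the test image is omitted, and the images below are illustrative random arrays:

```python
import numpy as np

def corr_score(test_img, template):
    """Peak circular cross-correlation of a test image with a reference template."""
    C = np.fft.fft2(test_img) * np.conj(np.fft.fft2(template))
    return np.abs(np.fft.ifft2(C)).max()       # maximum over all 2-D shifts

rng = np.random.default_rng(2)
template = rng.standard_normal((16, 16))
shifted = np.roll(template, (3, 5), axis=(0, 1))     # same target, displaced
score_same = corr_score(shifted, template)           # equals sum(template**2)
score_rand = corr_score(rng.standard_normal((16, 16)), template)
```

Taking the maximum of the correlation surface is what makes the score insensitive to the target's position in the image.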
Fig. 8. A typical data fusion approach for target recognition. [Adapted from Hauter et al. (8).]
detectors are independently designed. The target recognition using multiple sensors is formulated as a two-stage decision problem in Ref. 8. A typical radar target recognition approach using data fusion is illustrated in
Fig. 8. After the prescreening, single-source classifications are performed first; then the fusion of decisions is
performed.
The data fusion problem is treated as an m-hypothesis problem with individual source decisions being
the observations. The decision rule for m-hypothesis is written as
Since the prior probability and the distribution of features cannot be estimated accurately, a heuristic function
is used (8). It is a direct extension of the Bayesian approach of Chair and Varshney (15), and the function gi (·) is
generalized to include the full threshold range:
where P0 and P1 are prior probabilities; Ω1 and Ω0 are the sets of all i such that gi (u) ≥ Ti and gi (u) < Ti ,
respectively, with Ti being the individual source threshold for partitioning decision regions; and the probabilities Pf i and Pd i are the false alarm rates and probabilities of detection of each local sensor. The probabilities
Pf i and Pd i are defined by the cumulative distribution functions (CDF) for each decision statistic. In practice,
the CDFs are quantized and estimated from training on the individual sensors' classifier error probabilities.
In a distributed scenario, the weighting can be computed at each sensor and transmitted to the fusion center,
where it is summed and compared to the decision threshold. In Ref. 8, the data fusion approach is applied
to multiple polarimetric channels of a SAR image, and substantially improved classification performance is
reported.
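A sketch of this weighted-sum fusion of binary local decisions, in the Chair–Varshney style, follows; the per-sensor operating points and equal priors below are illustrative:

```python
import numpy as np

def fusion_statistic(u, pf, pd, p1=0.5, p0=0.5):
    """Sum the log-likelihood weights of binary local decisions u_i.
    The result is compared against a decision threshold at the fusion center;
    here the log-prior ratio serves as the constant offset."""
    s = np.log(p1 / p0)
    for ui, pfi, pdi in zip(u, pf, pd):
        # decision "1" is weighted by log(Pd/Pf), decision "0" by log((1-Pd)/(1-Pf))
        s += np.log(pdi / pfi) if ui == 1 else np.log((1 - pdi) / (1 - pfi))
    return s

pf = [0.05, 0.10, 0.01]          # per-sensor false-alarm rates
pd = [0.90, 0.80, 0.95]          # per-sensor detection probabilities
s_hit = fusion_statistic([1, 1, 1], pf, pd)    # all sensors declare a target
s_miss = fusion_statistic([0, 0, 0], pf, pd)   # no sensor declares a target
```

Reliable sensors (high Pd, low Pf) automatically receive larger weights in the sum.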
Summary
In radar target recognition, different types of radar are employed for different applications. In this article,
radar target recognition approaches for different radar systems are discussed.
BIBLIOGRAPHY
1. A. W. Rihaczek, S. J. Hershkowitz, Radar Resolution and Complex-Image Analysis, Norwood, MA: Artech House, 1996.
2. L. M. Novak, Radar target identification using spatial matched filters, Pattern Recognition, 27 (4): 607–617, 1994.
3. J. D. Wald, D. B. Krig, T. DePersia, ATR: Problems and possibilities for the IU community, Proc. ARPA Image Understanding Workshop, January 1992, San Diego, CA, pp. 255–264.
4. M. W. Roth, Survey of neural network technology for automatic target recognition, IEEE Trans. Neural Netw., 1: 28–43, 1990.
5. A neural clustering approach for high resolution radar target classification, Pattern Recognition, 27 (4): 503–513, 1994.
6. L. I. Perlovsky et al., Model-based neural network for target detection in SAR images, IEEE Trans. Image Process., 6: 203–216, 1997.
7. M. A. Rubin, Application of fuzzy ARTMAP and ART-EMAP to automatic target recognition using radar range profiles, Neural Netw., 8: 1109–1116, 1995.
8. A. Hauter, K. C. Chang, S. Karp, Polarimetric fusion for synthetic aperture radar target classification, Pattern Recognition, 30 (5): 769–775, 1997.
9. K. B. Eom, R. Chellappa, Non-cooperative target classification using hierarchical modeling of high range resolution radar signatures, IEEE Trans. Signal Process., 45: 2318–2327, 1997.
10. M. W. Roth, Neural networks for extraction of weak targets in high clutter environments, IEEE Trans. Syst. Man Cybern., 19: 1210–1217, 1989.
11. H. A. Zebker, R. M. Goldstein, Topographic mapping from interferometric synthetic aperture radar observations, J. Geophys. Research, 91: 4993–4999, 1986.
12. R. Chellappa et al., On the positioning of multisensor imagery for exploitation and target recognition, Proc. IEEE, 85: 120–138, 1997.
13. L. M. Novak, A comparison of 1-D and 2-D algorithms for radar target classification, Proc. IEEE Int. Conf. Syst. Eng., August 1991, pp. 6–12.
14. R. R. Tenney, N. R. Sandell, Detection with distributed sensors, IEEE Trans. Aerosp. Electron. Syst., 17: 501–510, 1981.
15. Z. Chair, P. K. Varshney, Optimal data fusion in multiple sensor detection systems, IEEE Trans. Aerosp. Electron. Syst., 22: 98–101, 1986.
READING LIST
J. S. Baras, S. I. Wolk, Model-based automatic target recognition from high-range-resolution radar returns, SPIE Proc., 2234: 57–66, 1994.
M. Basseville, A. Benveniste, A. S. Willsky, Multiscale autoregressive processes, Part I & II, IEEE Trans. Signal Process., 40: 1915–1954, 1992.
B. Bhanu, Automatic target recognition: State of the art survey, IEEE Trans. Aerosp. Electron. Syst., 22: 364–379, 1986.
V. Cantoni et al., Recognizing 2D objects by a multi-resolution approach, Proc. Int. Conf. Pattern Recognition, Vol. 3, Jerusalem, Israel, October 1994, pp. 310–316.
N. C. Currie, R. D. Hayes, R. N. Trebits, Millimeter-Wave Radar Clutter, Norwood, MA: Artech House, 1992.
I. Daubechies, The wavelet transform, time-frequency localization and signal analysis, IEEE Trans. Inf. Theory, 36: 961–1005, 1990.
D. M. Dunn, W. H. Williams, T. L. DeChaine, Aggregate versus subaggregate models in local area forecasting, J. American Statistical Assoc., 71: 68–71, 1976.
J. Geweke, Temporal aggregation in the multiple regression model, Econometrica, 46: 643–662, 1978.
C. W. J. Granger, M. J. Morris, Time series modeling and interpretation, J. Royal Statistical Soc., A-139: 246–257, 1976.
S. Kingsley, S. Quegan, Understanding Radar Systems, New York: McGraw-Hill, 1992.
D. C. McKee et al., Model-based automatic target recognition using hierarchical foveal machine vision, SPIE Proc., 2755: 70–79, 1996.
R. A. Mitchell, R. Dewall, Overview of high range resolution radar target identification, Proc. Automatic Target Recognition Working Group Conf., Monterey, CA, November 1994.
F. A. Pino, P. A. Morettin, R. P. Mentz, Modelling and forecasting linear combinations of time series, Int. Statistical Rev., 55: 295–313, 1987.
O. Rioul, A discrete-time multiresolution theory, IEEE Trans. Acoust. Speech Signal Process., 41: 2591–2606, 1993.
M. I. Skolnik, Introduction to Radar Systems, New York: McGraw-Hill, 1980.
N. S. Subotic et al., Multiresolution detection of coherent radar targets, IEEE Trans. Image Process., 6: 21–35, 1997.
L. G. Telser, Discrete samples and moving sums in stationary stochastic processes, J. American Statistical Assoc., 62: 484–499, 1967.
D. R. Wehner, High Resolution Radar, Norwood, MA: Artech House, 1987.
W. Wei, The effect of temporal aggregation of parameter estimation in distributed lag model, J. Econometrics, 8: 237–246, 1978.
W. W. S. Wei, D. O. Stram, Disaggregation of time series models, J. Royal Statistical Soc., B-52: 453–467, 1990.
M. A. Wincek, G. C. Reinsel, An exact likelihood estimation procedure for regression ARMA time series models with possibly non-consecutive data, J. Royal Statistical Soc., B-48: 303–313, 1986.
KIE B. EOM
George Washington University
RADAR TRACKING
Radar tracking is the ability to determine the position and velocity vector of a target at any particular instant
in time, to predict its position in the future, and to distinguish the desired target from other targets and clutter.
For a typical radar, the direction from the radar antenna (or antennas) to the target is generally determined
in the polar coordinates of range (distance), azimuth (horizontal) angle, and possibly vertical angle. For a
sophisticated coherent radar, tracking targets in Doppler frequency space may also be required. Thus radar
tracking can be one dimensional (range, angle, or Doppler), two dimensional (range and azimuth angle), three
dimensional (range, azimuth angle, and elevation angle), or four dimensional (range, azimuth angle, elevation
angle, and Doppler). For some systems, radar information is converted to Cartesian coordinates, and the
tracking functions are performed in coordinates such as latitude, longitude, and height.
Target tracking is necessary for a number of reasons. In order to direct a weapon such as a missile or
a projectile to a target, the range, future range, and angles from the radar to the target must be determined
by the radar. By knowing the position of the target relative to that of the missile, the guidance computer can
direct the missile to the target. Aircraft controllers must know an aircraft's location relative to other aircraft
in the vicinity, and by tracking the positions of all the aircraft in their assigned sectors, they can control the
spacing of the aircraft to ensure flight safety.
the amplitude of the target echoes at the receiver. The output of the track ball can provide readout of the
target range and azimuth angle or provide the required range and angle information to weapons systems for
targeting purposes. Although this was a satisfactory technique for tracking slow-moving targets such as ships,
it is certainly a tedious process.
To aid in the tracking of ships and aircraft, a rate-aided device was added to some systems. With rate-aided tracking, the operator needed to make only fine adjustments to account for changes of the target range
and angle rates with respect to the radar. With this configuration, the radar operators were better able to track
faster-moving objects such as aircraft. Still, this tracking function required the constant attention of the radar
operator.
Automated target tracking evolved as a necessary tool to allow the radar operator to perform the tracking
function efficiently. After range and angle trackers are locked onto the target, the tracker then senses any
error between the current target position and that predicted by the tracker and automatically and continuously adjusts the tracker functions either on a pulse-to-pulse or scan-to-scan basis. As a result, automatic
radar tracking can maintain target track more accurately than a human operator and can better follow fast
maneuvering targets.
Tracking Basics
For automatic target tracking, a sequential procedure must be used to acquire the target and initiate track.
The three steps are target detection, target acquisition, and target track.
Target Detection. In order for the received echo signal from the target to be detected by the radar, the
received signal strength in that particular range cell must be stronger than the residual noise in the radar and
other interfering signals in that range cell. For a target separated from clutter, the primary interfering source
is receiver noise. Although it is desired to declare a target's presence with high probability, it is also necessary
to keep the probability of false alarm (declaring a target detection when no target is present) as low as possible.
The two values are tied closely together: for a given signal-to-noise ratio (SNR), lowering the detection threshold to increase the probability of detection also increases the probability of false alarm. Depending upon
the target detection criteria, a SNR of 8 to 15 dB is generally required to keep the probability of detection
reasonably high, while keeping the probability of false alarm at or below 10⁻⁶. Probability of detection vs.
false alarm curves are available in Blake (1) and a number of other sources.
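The threshold trade described above can be illustrated numerically. The following sketch uses illustrative, assumed values throughout: a square-law detector, unit-variance Gaussian noise in each of I and Q, and a steady (nonfluctuating) target. The threshold is set analytically from the desired false-alarm probability, and the detection probability is estimated by Monte Carlo:

```python
import math
import random

def detection_threshold(pfa):
    """Square-law detector power threshold for unit-variance complex Gaussian
    noise: P(|noise|^2 > T) = exp(-T/2), so T = -2*ln(Pfa)."""
    return -2.0 * math.log(pfa)

def prob_detection(snr_db, pfa, trials=20000, seed=1):
    """Monte Carlo estimate of Pd for a steady (nonfluctuating) target."""
    rng = random.Random(seed)
    t = detection_threshold(pfa)
    amp = math.sqrt(2.0 * 10.0 ** (snr_db / 10.0))  # SNR = amp**2 / 2
    hits = 0
    for _ in range(trials):
        i = amp + rng.gauss(0.0, 1.0)   # in-phase sample: signal plus noise
        q = rng.gauss(0.0, 1.0)         # quadrature sample: noise only
        if i * i + q * q > t:
            hits += 1
    return hits / trials

# At 13 dB SNR with Pfa = 1e-6, the detection probability comes out high,
# consistent with the 8 dB to 15 dB range cited above.
pd = prob_detection(13.0, 1e-6)
```

Lowering `pfa` raises the threshold `t`, which in turn lowers `pd` for the same SNR, which is the coupling described in the text.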
In many cases, the single-pulse SNR may be below the threshold, but the SNR can be improved by
integrating a number of pulses. For coherent operation, the SNR improvement is directly proportional to n,
the number of pulses coherently integrated. For noncoherent operation, the SNR improvement for small n is usually near n^0.8 in practical radar systems where n < 20. Most real targets are composed of complex reflecting
surfaces; the scattering contributions of these separate reflecting surfaces add and subtract vectorially, determining the overall radar cross section (RCS) of the target. The fluctuations in RCS caused by these surfaces will
affect the probability of detection and false alarm. Swerling (2) has derived the probability of detection and
false alarm curves for both slowly varying and rapidly varying target RCS fluctuations. For these cases, the
required SNR can be obtained from this set of curves.
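The integration gains quoted above amount to simple decibel arithmetic. The sketch below compares the coherent improvement (proportional to n) with the small-n noncoherent approximation (near n^0.8) given in the text:

```python
import math

def coherent_gain_db(n):
    """Coherent integration: SNR improvement is directly proportional to n."""
    return 10.0 * math.log10(n)

def noncoherent_gain_db(n):
    """Noncoherent integration for small n: improvement near n**0.8
    (the practical approximation cited in the text for n < 20)."""
    return 10.0 * math.log10(n ** 0.8)

# Integrating 10 pulses buys 10 dB coherently but only about 8 dB noncoherently.
g_coh = coherent_gain_db(10)
g_noncoh = noncoherent_gain_db(10)
```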
Target Acquisition. Target acquisition for tracking can be done either manually or automatically. For
manual target acquisition, the operator needs to point the radar antenna (or an angle cursor) on the azimuth
angle to the target and designate the desired target range. Alternately, the operator could use a light pen if
available to designate the target azimuth angle and range to the tracker. When the particular target is within
the acquisition limits of the tracker, the acquisition process can be initiated to lock the tracker onto the target
range and azimuth.
For automatic target acquisition, the tracker must have either a designated philosophy for selecting the
target for track acquisition, or the tracker must have sufficient capability to track all the targets satisfying
the track initiation criteria. For example, for a radar altimeter, the track would be initiated on the closest radar
returns to the radar. For a scanning surveillance radar, the tracker would need to have sufficient capability to
track all the targets satisfying the track criteria.
Range Track
When the target has been acquired by the tracker, the tracker must determine not only the range and angular
positions but also the velocity vector of the target, and it must determine the velocity components in range and
angle in order to maintain track on the target. This is especially important in order to maintain track during
conditions of track fade or during momentary passage of other targets or clutter returns. Target trackers differ
in complexity and include: (1) dedicated single target trackers, (2) track-while-scan target trackers, and (3)
multiple target trackers. For scan-to-scan and multiple target trackers, association algorithms are required to
keep track of the targets, especially during crossing target events.
Dedicated Range Trackers. Dedicated target trackers generally use radars with antennas that spotlight the desired target with the antenna beam and keep the antenna beam spotlighted on the target during the
entire tracking process. This type of tracker is generally used with weapon systems that require continuously
updated position information on the target. This is a relatively simple type of tracker and will be used to
explain the principles of tracking. The tracking process will be described as composed of the following functions: range tracking, angle tracking, and Doppler tracking.
For a radar system, the range from the radar to a target is precisely determined from the time delay
between the transmission of the radar signal and the receipt of the radar echo from the target arriving back at
the radar's receive antenna. The range (R) from the radar antenna to the target is then given by

R = cτ/2

where
c = speed of light (2.997 × 10^8 m/s),
τ = delay time between transmit and receive target echo.
Because radar signals travel at the speed of light, the range to the target is approximately 150 m for each
microsecond of time delay between the time the radar signal is transmitted and when the return echo signal
reflected from the target arrives at the receiver.
Range Tracking with an FMCW Radar. The simplest type of radar used for range tracking is the frequency-modulated continuous-wave (commonly referred to as FMCW) radar. One of the prime advantages of using an FMCW radar is that, for a given signal-to-noise ratio, the average transmit power is much less than the peak power required for a pulse-type radar. The transmit signal frequency is generally swept linearly
over a period of time, such as shown in Fig. 2. This signal is transmitted toward the target and returns with a
time delay (τ). By comparing the frequency of the received signal with that currently being generated by the transmitter, the time delay (and hence the range) can be determined from the equation

τ = Δf / (df/dt)

where
Δf = difference frequency between the received signal and the transmit signal,
df/dt = rate of change in frequency versus time for the transmit signal.
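As a rough numerical sketch of the relation above (the sweep rate and beat frequency are assumed purely for illustration):

```python
C = 2.997e8  # speed of light, m/s (value used in the text)

def fmcw_range(delta_f_hz, sweep_rate_hz_per_s):
    """Range from the FMCW beat frequency: tau = delta_f / (df/dt), R = c*tau/2."""
    tau = delta_f_hz / sweep_rate_hz_per_s  # round-trip delay, s
    return C * tau / 2.0

# Illustrative sweep: 100 MHz in 1 ms gives df/dt = 1e11 Hz/s.
# A 1 MHz beat frequency then corresponds to tau = 10 us, about 1.5 km.
r = fmcw_range(1.0e6, 1.0e11)
```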
The circuitry for an FMCW ranging system is rather simple, as shown in Fig. 3. The transmitter is coupled to the antenna through a circulator, for example, to isolate the transmit signal from the receiver input. The transmit signal is reflected by the target, received by the antenna, and then mixed with the current transmit signal.
The mixed signal is then amplified, filtered to remove the radio-frequency (RF) transmitter and receive signal
components, and coupled to a frequency discriminator circuit. The frequency discriminator provides an output
voltage that is proportional to the input frequency. Thus the output signal is proportional to the range to the
target.
For a moving target, the frequency of the return from the target is affected not only by the range to the target but also by the target velocity with respect to the radar. In order to separate frequency change effects
resulting from range from those resulting from target velocity, an up/down ramp waveform, such as that shown
in Fig. 4 can be used. The frequency change caused by velocity essentially moves the entire receive frequency
up or down, and by averaging the frequency difference between the up frequency and down frequency portions
of the waveform, both the range and the target velocity can be determined from the following equations. The range is determined from the average of the frequency difference magnitudes (Δf_up and Δf_down) during the positive frequency ramp and the negative frequency ramp, thus

R = (c/2) · [(Δf_up + Δf_down)/2] / (df/dt)
The velocity of the target relative to that of the radar is a function of the frequency difference between the positive ramp portion and the negative ramp portion. The velocity of the target in the direction toward the radar (positive Doppler frequency) is then

v = (λ/2) · (Δf_down − Δf_up)/2

where λ is the radar wavelength. The difference frequency Δf depends on the sweep slope and the round-trip delay and hence is proportional to the target range. Extreme linearity and slope calibration of the frequency sweep are required for accurate range determination. For example, a 1% error in the linearity or slope produces an equivalent error in the range determination.
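A minimal sketch of the up/down-ramp computation, assuming beat-frequency magnitudes measured on each ramp; the sweep rate and X-band wavelength are assumed only for illustration:

```python
C = 2.997e8  # speed of light, m/s

def range_and_velocity(f_up, f_down, sweep_rate, wavelength):
    """Separate range and Doppler with a triangular FMCW sweep.

    f_up, f_down: beat-frequency magnitudes (Hz) on the up and down ramps.
    The range term is common to both ramps; the Doppler shift adds on one
    ramp and subtracts on the other, so averaging and differencing split them.
    """
    f_range = 0.5 * (f_up + f_down)      # average -> range component
    f_doppler = 0.5 * (f_down - f_up)    # half-difference -> Doppler component
    rng = C * (f_range / sweep_rate) / 2.0
    vel = f_doppler * wavelength / 2.0   # positive toward the radar
    return rng, vel

# Illustrative values: df/dt = 1e11 Hz/s, wavelength 3 cm, beats of
# 0.99 MHz (up) and 1.01 MHz (down) from a closing target.
r, v = range_and_velocity(0.99e6, 1.01e6, 1.0e11, 0.03)
```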
Range Tracking with Pulsed Radar. For pulsed radar, the target range is measured from the time delay (τ) between the transmit pulse and the received echo from the target. Figure 5 shows the basic configuration of a
range tracker used with pulsed radar. The heart of a range tracker is the time discriminator that enables the
tracker to determine the time difference between the range reference (estimated delay time) and the actual
range of the target return. The range error (Δr) is normally bipolar and proportional to the range (or time)
difference between the estimated range and measured range. The range error is then input to a range and
velocity estimator (and possibly acceleration estimator) circuit. The function of the range error output is to drive the estimated range to the measured range. In most cases, an initial range (and possibly range rate) in the
general vicinity of the target range must be input to the range, velocity estimator circuit in order to enable it
to acquire the target.
There are three basic classes of range trackers, which will be designated as analog, digital, and computer-based range trackers. The most common analog-type tracker circuit uses early and late gates, such as
those shown in Fig. 6. The detected target video is input to both early and late gates. During the early gate
time, the portion of the video signal existing during that time period is fed through to an integrator circuit,
which integrates the signal energy during that time period. The late gate likewise feeds the video signal during
the late gate time period to a second integrator. The outputs of the two integrators are compared in a difference
circuit. If there is more video energy in one of the integrators, an error signal proportional to the difference is
generated. The polarity of the error signal depends upon which integrator output is greater. The error voltage
then is provided to the range servo loop circuit, which generates voltages proportional to the estimated range,
velocity, and possibly acceleration. The range voltage (estimated range) drives the timing generator, which
generates the early and late gate times dependent upon the range voltage. If more video energy is in the late
gate time, the error voltage causes the range voltage to increase so that the partition between the early and
late gates moves out in range and becomes aligned on the centroid of the video pulse. In order for the range
tracker to initially acquire track, the early and late gates must be positioned so that a significant portion of
target video energy appears in the early or late gate times. An operator can accomplish this by observing a
radar display and setting the initial range into the track circuit.
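The early-late gate behavior can be sketched in a simple simulation. This is an idealized, noiseless model (rectangular video pulse, assumed loop gain); the late-minus-early error signal pulls the gate partition onto the pulse centroid, as described above:

```python
def pulse_video(t, pulse_center, pulse_width):
    """Idealized rectangular target video pulse (unit amplitude, noiseless)."""
    return 1.0 if abs(t - pulse_center) <= pulse_width / 2.0 else 0.0

def split_gate_error(estimate, pulse_center, pulse_width, gate_width, steps=200):
    """Late-minus-early integrated video energy about the current estimate;
    a positive error means the pulse centroid lies later than the estimate."""
    dt = gate_width / steps
    early = sum(pulse_video(estimate - gate_width + k * dt, pulse_center, pulse_width)
                for k in range(steps)) * dt
    late = sum(pulse_video(estimate + k * dt, pulse_center, pulse_width)
               for k in range(steps)) * dt
    return late - early

def track_range(initial, pulse_center, pulse_width=1.0, gate_width=1.0,
                gain=0.5, iterations=50):
    """Closed-loop update: the error signal drives the estimate to the centroid."""
    est = initial
    for _ in range(iterations):
        est += gain * split_gate_error(est, pulse_center, pulse_width, gate_width)
    return est

# Starting 0.4 time units late of the true centroid at 10.0, the loop converges.
final = track_range(10.4, 10.0)
```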
The range tracking accuracy of the range tracker is dependent upon the signal to noise power ratio (SNR)
of the signal compared to the noise in the early and late gate time periods. According to Barton (3), the standard
deviation of the range error (σr1) on a single-pulse basis is given by

where
B = receiver frequency bandwidth,
τ0 = pulse width.
Normally, the servo loop integrates a number of pulses to provide smoothing of the range voltage, which reduces the effects of noise jitter upon the range determination. For noncoherent operation, the range error is effectively reduced by a factor of √n, where n = fr t0 is the number of pulses integrated. The resulting range error is then given by

σr = σr1 / √(fr t0)

where
fr = pulse repetition frequency,
t0 = observation time.
Digital range trackers can be implemented using a number of techniques. In most cases, the range
information (estimated range) is stored in a digital counter and is updated (up or down counted) depending
upon the actual range compared to the estimated range. An early-late gate discriminator, such as that shown for the analog range tracker, can be used, and the error voltage then drives the up/down count. A simpler method
for accomplishing the discrimination function is shown in Fig. 7. For this case, a range window is positioned
about the radar video, and the video voltage is sampled at equal increments across the pulse. It should be noted
that the digital discriminator of Fig. 7 requires that the signal be passed through an approximately matched
filter prior to sampling, if the SNR is to be optimized. The split-gate tracker performs the matched filter
function by averaging over the gates, and hence can be preceded by an IF amplifier with wider bandwidth. The
digital circuit then drives the center of the range window to the centroid of the target video signal by equalizing
the voltages in the early and late sample times. Three samples are required, as a minimum, for this type of
discriminator: an early sample, a late sample, and an on-target sample. When the tracker is centered on the
target, the on-target sample voltage is maximized, thus indicating that a true target is being tracked, rather
than noise. Again, the range window must initially be set to the approximate target range or caused to slew
automatically until a target is detected.
The analog and digital trackers described earlier are primarily intended for range tracking a single
target, and in most cases the radar antenna is boresighted on the target either manually or by using an angle
tracker. Tracking circuits, either analog or digital, can be designed to track targets using continuously scanning
antennas. For this case, the target returns are received by the radar only during the time when the antenna beam scans by the target, and the tracker must use prediction algorithms to estimate the position of the target
on the next scan. If multiple targets are to be tracked, then individual analog or digital tracking circuits must
be used for each target tracked. In most cases where multiple targets are to be tracked, especially in scanning-type radars, the tracking functions are performed in a computer using specialized tracking algorithms. Because
track-while-scan tracking normally involves angle tracking as well as range tracking, the discussion on multiple
target computer tracking will be deferred.
Angle Tracking
Angle tracking can differ depending upon the application. For dedicated target-tracking radars, the antenna
is kept boresighted on the target by the angle-tracking circuits and the antenna servo. With a continuously
scanning antenna, the centroid of the target returns is measured each time the radar scans by the target, and an estimator is used to predict the position of the target on the next scan. For multifunction or phased array radars,
the target track is updated each time the antenna is scanned to the target location. Because track-while-scan
and multitarget trackers normally require range (and possibly Doppler) tracking, the angle tracking described in this section is limited to single-target, boresighted angle-tracking systems. The most common types of on-boresight trackers use conical scan, sequential lobe, or monopulse angle-sensing techniques.
Conical Scan Angle Trackers. Conical scan is the simplest angle-sensing technique in that only a
single receiver channel is required. As shown in Fig. 8, the antenna beam is squinted off the antenna rotational
axis. The squinted antenna beam is rotated about the antenna boresight by either rotating the antenna or
nutating an offset feed. If the target is located on the antenna boresight, the target video signal maintains a
constant amplitude as the antenna rotates. However, if the target moves off boresight, the target video signal
will have a sinusoidal amplitude variation given by

E(t) = E0[1 + Ks θ cos(ωs t − φ)]

where
E0 = average magnitude of received signal,
θ = angular distance of target from boresight,
Ks = antenna error slope,
ωs = antenna rotator scan frequency,
φ = phase angle of the return modulation relative to the scan rotation.
In order to determine the transverse (azimuth) and elevation angle error components, the equation can be rewritten in the form

E(t) = E0[1 + Ks(θe cos ωs t + θt sin ωs t)]

where
θt = transverse (azimuth) angle error component,
θe = elevation angle error component.
By using the preceding equation, the angle resolver can determine the azimuth and elevation error
components of the target direction from boresight. These angle error components are then coupled into the
azimuth and elevation inputs of the antenna servo positioner, which then drives the antenna boresight onto the
target direction. Although this is conceptually the simplest angle-sensing technique, it is susceptible to tracking
errors produced by amplitude fluctuations of the target. Also, for military applications, because the conical
scan modulation can be detected, modulation jammers can drive the antenna off the target.
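The resolver's demodulation can be sketched numerically, assuming the standard conical-scan envelope model E(t) = E0[1 + Ks θ cos(ωs t − φ)]. Correlating the envelope with the cosine and sine of the scan phase separates the two orthogonal error components (the parameter values below are illustrative):

```python
import math

def conical_scan_errors(e0, ks, theta, phi, samples=360):
    """Recover angle-error components from conical-scan amplitude modulation.

    Models the envelope E = e0 * (1 + ks*theta*cos(scan_phase - phi)) and
    correlates it with cos and sin of the scan phase over one rotation.
    Returns (transverse/azimuth component, elevation component)."""
    el_sum = az_sum = 0.0
    for k in range(samples):
        scan_phase = 2.0 * math.pi * k / samples
        e = e0 * (1.0 + ks * theta * math.cos(scan_phase - phi))
        el_sum += e * math.cos(scan_phase)   # in-phase (cos) correlation
        az_sum += e * math.sin(scan_phase)   # quadrature (sin) correlation
    # The mean of cos^2 (or sin^2) over one cycle is 1/2, hence the factor 2.
    el_err = 2.0 * el_sum / (samples * e0 * ks)
    az_err = 2.0 * az_sum / (samples * e0 * ks)
    return az_err, el_err

# Target 2 mrad off boresight, modulation phase 30 degrees from scan reference.
az, el = conical_scan_errors(1.0, 1.5, 0.002, math.radians(30.0))
```

The recovered components are θ sin φ and θ cos φ, exactly the decomposition used by the angle resolver described above.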
Sequential Lobe. Sequential lobe angle sensing is similar to that of conical scan, except that the
beam is switched electronically between beam positions. For dual-axis (azimuth and elevation) angle sensing,
generally four beam positions are used (up, down, left, right). By comparing the amplitude of the received signals
in the upper and lower beams, and knowing the shape of the antenna beams, the elevation angle of
the target from the antenna boresight can be determined. A similar technique can be used for azimuth angle
sensing. The technique can either use a single receiver channel or separate receivers for azimuth and elevation
angle sensing. The advantage of the technique is that the switching of the beams can be accomplished on a
pulse-to-pulse basis, thus making it less vulnerable to target radar cross-section fluctuations. It, however,
still has a vulnerability to modulation-type jammers.
Lobe on receive only (LORO) is a variation of sequential lobe sensing. With this technique, the transmitter
either uses a separate transmit horn on boresight or transmits simultaneously through all four horns. The
sequential lobing is accomplished only on receive, through sequential sampling of the signals in the four horns.
The advantage of LORO is that modulation jammers cannot detect the sequential modulation pattern of the
receivers.
Monopulse. Monopulse sensing provides the ability to determine the angle of arrival in a single pulse
by simultaneously processing the signals in multiple receive beams. Figure 9 shows an example of a four-horn
monopulse configuration for dual-plane (elevation and azimuth) angle sensing. The four-horn configuration
shown in Fig. 9 is useful for a description of the basic process, but practical radars built since the 1960s
have used more complex feed systems to optimize the sum gain, difference error slopes, and sidelobes of all
channels. Amplitude-type monopulse uses simultaneous antenna beams squinted at angles off the elevation
and azimuth boresights. The relative amplitude of receive signals determines the angular distance of the target
off the boresight. Another type of monopulse, referred to as phase-sensing monopulse, uses separate receive
apertures spaced a short distance apart, but with the beams pointed parallel with the antenna boresight.
For this type of monopulse sensing, the phase difference between the receive signals determines the angular
distance of the target from boresight. The monopulse feed, such as shown in Fig. 9, is normally used either to
illuminate a paraboloidal reflector directly or to illuminate a subreflector for a Cassegrain-type antenna. The monopulse feed is normally attached directly to a sum and difference (Σ and Δ) comparator. The comparator combines the received signals in the four beams to form a Σ signal, a ΔAZ signal, and a ΔEL signal. According to Rhodes (4), amplitude sensing and phase sensing are equivalent and can be converted to Σ and Δ sensing. Within the 3 dB beamwidth of the Σ pattern of the monopulse antenna, the function Δ/Σ is approximately linear. The
target azimuth angle off the azimuth boresight and the elevation angle off the elevation boresight can be
determined from
where
K = antenna slope (a function of the squint angle of the beams),
φAZ = phase angle of the ΔAZ signal relative to the Σ signal,
φEL = phase angle of the ΔEL signal relative to the Σ signal.
For a pointlike target (target extent less than the antenna beamwidth), the phase angle between the Δ and Σ signals is normally either 0° or 180°, depending upon which side of the boresight the target is located.
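The normalized angle estimate can be sketched as follows, assuming complex Σ and Δ channel voltages and an illustrative error slope. The phase-sensitive product preserves the 0°/180° phase as the sign of the off-boresight direction:

```python
def monopulse_angle(delta, sigma, k_slope):
    """Off-boresight angle (in beamwidths here) from the monopulse ratio.

    delta, sigma: complex channel voltages. The phase-sensitive detector
    output is proportional to |delta| times the cosine of the phase angle
    between delta and sigma; normalizing by the sigma power makes the
    result the signed real part of delta/sigma."""
    ratio = (delta * sigma.conjugate()).real / abs(sigma) ** 2
    return ratio / k_slope

# With an assumed slope of 1.6 per beamwidth, a delta/sigma magnitude of 0.16
# at 0-degree relative phase indicates 0.1 beamwidth off boresight; the same
# magnitude at 180 degrees indicates the opposite side of boresight.
right = monopulse_angle(0.16 + 0j, 1.0 + 0j, 1.6)
left = monopulse_angle(-0.16 + 0j, 1.0 + 0j, 1.6)
```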
A typical configuration for a three-channel monopulse receiver is shown in Fig. 10. The Σ and Δ signals, after down-conversion to IF, are amplified in gain-controlled amplifiers. The Δ IF outputs from the gain-controlled amplifiers are input to amplitude-sensitive phase detectors, along with the Σ IF outputs. The
phase-sensitive amplitude detector provides a video output signal proportional to the amplitude of the Δ signal and the cosine of the phase angle between the Δ signal and the Σ signal. In order to maintain a constant number of volts per degree at the outputs of the phase-sensitive amplitude detectors, the gain of the receivers must be maintained to provide a constant output signal level at the range of the target. To do this, the Σ signal is detected to provide a video signal to a range tracker circuit, which then locks the range onto the target. The Σ output video signal is then sampled at the range of the target, and this sample is used to form the gain control voltage in the three receiver channels. This signal normalization maintains the desired number of output volts per degree from the Δ channel receivers. Close tolerances on gain and phase track of the three gain-controlled amplifiers are required to preserve the integrity of the angle error calibration.
Figure 11 shows an example of a two-channel monopulse receiver. The Σ, ΔAZ, and ΔEL microwave signals out of the monopulse comparator are switched in an RF commutator so that on receive pulse 1, the Σ + ΔAZ and
Σ − ΔAZ signals are in receiver channels 1 and 2, respectively. On the second receive pulse, Σ + ΔEL and Σ − ΔEL are coupled into receive channels 1 and 2. On the third receive pulse, the polarities are switched, so that the Σ − ΔAZ and Σ + ΔAZ signals are input to channels 1 and 2, and the Σ − ΔEL and Σ + ΔEL signals are in channels 1 and 2 on the fourth pulse. The receive signals in channels 1 and 2 are down-converted
from RF to intermediate frequency (IF), amplified in gain-controlled amplifiers and subsequently converted to
video. The decommutator circuit then uses the difference in the video outputs to form the AZ error and the
EL error signals. The error signals are then coupled into the antenna servo to maintain the antenna boresight
on the tracked target.
In order for the angle circuit to maintain a constant number of volts per degree for the angle error output,
the gain of the receivers must be maintained to provide a constant (on the average) output signal level at the
target range. In order to accomplish this, the sum of the video receive signals is provided to a range tracker
circuit, which then determines the range to the tracked target. The output video signal is then sampled at the
range of the target, and this is then used to form the gain control voltage to both the receiver channels, in order
to maintain relatively constant target output levels in the receivers.
The advantage of the two-channel receiver is that it eliminates the need for a third receiver channel,
and that it eliminates any zero drift in the phase-sensitive amplitude detectors. This is at the expense of 3 dB
less efficiency (as compared to the three-channel configuration), and potential sensitivity to target amplitude
fluctuation if the two channel gains are not identical. A further disadvantage is that the angle error is determined only on alternate pulses, and any noise or differential losses in the switching process will tend to degrade the
accuracy and precision of track. Thus, some sacrifice in tracking precision will be suffered in comparison to a
full three-channel monopulse angle tracker.
Angle Error Sources. The accurate determination of the angle to the target is influenced by a number
of factors including radar-dependent errors, target-dependent errors, and propagation effects. Radar-dependent
errors include the effects of thermal noise, antenna misalignment and cross coupling errors, and radar instrumentation error sources. The angular errors resulting from thermal noise can be quantified and are primarily
dependent upon the signal-to-noise ratio. For a conical scan radar, the variance in the angle determination is
given by
where
Ks = conical scan angle error slope,
θc = antenna 3 dB beamwidth,
SNR = signal-to-noise power ratio,
fr = pulse repetition frequency,
βn = servo bandwidth,
B = receiver bandwidth,
τ = pulse width.
For a monopulse angle tracker, the variance is given by
where
Km = monopulse error slope,
θm = antenna 3 dB beamwidth.
Glint is one of the most significant target-dependent angle error sources for complex targets such as
aircraft and ships. Complex targets consist of multiple scatterers separated in angle and range. Rather small
variations in target aspect angle can change the phase relationship of the separate scatterers, resulting in
large variations of the amplitude and indicated angle to target. Depending upon the extent of the scatterers
and their phase relationships, the indicated angle to the target can actually be outside the physical dimensions
of the target. In order to understand the phenomena, the slope of the phase front resulting from two isolated
point targets is given by Dunn and Howard (5) as
where
a = amplitude of the weaker scatterer relative to the stronger scatterer,
L = lateral distance separating the two scatterers,
φ = relative phase of the two scatterers,
θ = angle between the perpendicular bisector of the scatterers and the direction to the radar.
If θ is set equal to zero, then
The preceding equation has been plotted in Fig. 12 for a = 0.9. As can be seen from the plot, when the relative
phase angle between the two scatterers approaches 180°, the indicated angular position is outside the directions
to the two scatterers.
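The wander outside the target can be checked numerically using the standard two-scatterer glint expression: with θ = 0, the indicated offset from the scatterer midpoint (positive toward the stronger scatterer) reduces to (L/2)(1 − a²)/(1 + a² + 2a cos φ). This is a sketch under that assumed form:

```python
import math

def glint_indicated_offset(a, separation, phi_rad):
    """Indicated cross-range position of a two-scatterer target, measured from
    the midpoint of the scatterers toward the stronger one (theta = 0 case).

    a          : amplitude of the weaker scatterer relative to the stronger (0..1)
    separation : lateral distance L between the two scatterers
    phi_rad    : relative phase between the scatterers, radians
    """
    num = 1.0 - a * a
    den = 1.0 + a * a + 2.0 * a * math.cos(phi_rad)
    return (separation / 2.0) * num / den

# For a = 0.9 (the value plotted in Fig. 12): the indication sits near the
# field centroid when the scatterers are in phase, but swings to 9.5 target
# lengths off the midpoint as the relative phase reaches 180 degrees.
in_phase = glint_indicated_offset(0.9, 1.0, 0.0)
near_out = glint_indicated_offset(0.9, 1.0, math.pi)
```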
Propagation effects such as multipath and ducting can also affect the angular indication, especially for
elevation angle sensing. Multipath is a severe problem for low angle tracking of targets, where the multipath
return from the terrain is in the main beam (or possibly even the sidelobes) of the antenna. Multipath contributions can be both from specular and diffuse reflections from the terrain, and their contributions are a function
of the surface roughness. For specular reflections, the return signals can be expressed as
where
At = free-space target amplitude at the antenna,
Ar = free-space multipath (target image) amplitude at the antenna,
f(θt) = antenna voltage gain in the direction of the target,
f(θr) = antenna voltage gain in the direction of the multipath return,
ρ0 = magnitude of the reflection coefficient,
φ = relative phase angle between the direct and the multipath return.
In a sense, the angular errors caused by multipath effects are similar to those associated with the two-point scatterer situation.
Doppler Tracking
Tracking targets in a clutter background is one of the major problems for radar trackers. Fortunately, terrain clutter generally has a narrow spectral extent. If the target is moving, the Doppler frequency of its return is
normally outside that of the terrain clutter. Doppler filtering can then be used to reject clutter, while keeping
the target returns. The simplest type of Doppler filtering is obtained by using moving target indication (MTI)
processing. More advanced Doppler processing enables the determination of the Doppler frequency (and hence
the radial velocity) of the target. MTI or Doppler filtering must be applied to both sum and difference channels
of a monopulse tracker, and in conical scan or lobing radar must be able to cancel the modulation induced on
the clutter by scanning.
MTI Processing. For a ground-based radar system, MTI processing provides the capability to reject clutter by filtering out returns whose Doppler content is near zero frequency (and its aliases at multiples of the pulse repetition frequency, PRF). This is accomplished by comparing the phase and amplitude of the target returns on successive pulse
intervals. Coherent radar operation is normally used for MTI processing; however, coherent-on-receive MTI processing can be used with noncoherent radars to provide most of the benefits achieved with coherent radar
processing. In MTI processing, if the phase and amplitude of the returns stay constant over two, three, or more
pulse intervals, then the returns are assumed to be associated with clutter and are rejected. The phase (and possibly the amplitude) of moving target returns changes on a pulse-to-pulse basis, so these returns are not rejected by the MTI filtering. MTI-filtered target returns can then be tracked by range and angle tracking circuits.
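A two-pulse canceller, the simplest MTI filter, can be sketched on synthetic coherent returns; the PRF and Doppler values below are assumed purely for illustration:

```python
import cmath
import math

def two_pulse_canceller(returns):
    """Subtract each coherent return from the previous one. Returns whose
    phase and amplitude are constant pulse to pulse (clutter) cancel;
    moving targets, whose phase advances each PRI, survive."""
    return [returns[k] - returns[k - 1] for k in range(1, len(returns))]

# Assumed illustrative parameters: 1 kHz PRF, 200 Hz target Doppler.
prf = 1000.0
pri = 1.0 / prf
doppler = 200.0

# Stationary clutter: a constant phasor every pulse.
clutter = [5.0 * cmath.exp(1j * 0.3)] * 4
# Mover: phase advances by 2*pi*fd*PRI on each successive pulse.
mover = [cmath.exp(2j * math.pi * doppler * pri * k) for k in range(4)]

clutter_residue = two_pulse_canceller(clutter)
mover_residue = two_pulse_canceller(mover)
```

The clutter residue is identically zero while the mover's residue has magnitude 2|sin(π fd/PRF)|, which is also why Dopplers at multiples of the PRF (blind speeds) would cancel like clutter.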
Doppler Filtering. Full Doppler tracking requires coherent radar operation and can improve the tracking ability of the radar by using narrow filter bandwidths, thus increasing the sensitivity of the radar. The Doppler
filtering can also enable the determination of the actual Doppler frequency of the radar returns, thus providing
an exact determination of the target radial velocity. In addition, for airborne radars, the Doppler frequencies of
the clutter returns are a function of the aircraft velocity and the aspect angles to the clutter patch. Thus MTI
processing cannot be used for clutter rejection with airborne radars.
Continuous wave (CW) radar provides the ability to track a moving target while rejecting clutter. Normally,
separate transmit and receive antennas are used for CW tracking radars. An example of a Doppler phase lock
loop, a simplified version of that shown by Morris (6), is given in Fig. 13. The Σ signal input is mixed down to the center frequency (f2) of the narrow band-pass filter. The input to the Σ signal mixer is derived from the output of the phase-locked oscillator (PLO), which is mixed with the IF local oscillator (IF LO) frequency to provide the IF signal necessary to mix the Σ signal down to frequency f2. Any increase or decrease in the Doppler frequency (fD) will cause the PLO output frequency to change in order to maintain the input to the band-pass filter at frequency f2. The ΔAZ and ΔEL signals are also mixed down to frequency f2, narrowband filtered, and used to derive the AZ error and EL error signals.
For a high PRF pulsed Doppler radar, a narrow pass-band filter is normally used to limit the receive spectrum to f0 ± PRF/2. This has the effect of converting the pulsed signal to CW, at which time the CW
Doppler tracking configuration described earlier can be used for the Doppler and angle tracking. If range
tracking of the signal is also required, the signal must first be sampled at the range of the target prior to
narrow band filtering. A minimum of two adjacent range cell samplers, each followed by a narrow-band Doppler filter, are required to accomplish range tracking. In this case, the range samplers act as early and late gate
samplers, and by comparing the output Doppler amplitudes, the range tracker can keep the received pulses
centered between the two range samplers. Acquisition with a pulsed Doppler tracker can be a complicated
process. In order for the Doppler tracker to acquire the target, both the range and Doppler frequency must be
established in order to provide the Doppler output signals required for tracking. Thus, unless the range and the Doppler frequency are known (and normally they are not), a search process in both range and Doppler frequency
must be initiated to find the target and initiate track. Other configurations exist for Doppler filter trackers.
Barton (3) describes a technique using narrow-band filters offset above and below a center frequency. By
comparing the amplitudes out of the high- and low-frequency narrowband filters, an estimate of the Doppler
frequency can be obtained on a single pulse basis.
Digital Doppler Processing. With the advent of high-speed digital processors, the Doppler frequencies
can be computed directly. For this type of processing, the receive signals are normally converted to I and Q digital format using high-speed analog-to-digital converters. The range-sampled I and Q signals can be
stored for a selected number of pulse repetition intervals (PRI), and input to fast Fourier transform (FFT)
computational routines. The FFT computes the detected amplitude versus Doppler frequency for each sampled
range. Tracking algorithms can then use the detected targets out of the FFT processor to establish the range
track, and subsequently angle and Doppler track.
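The Doppler measurement step can be sketched on synthetic slow-time I/Q samples from a single range cell. A plain DFT is used here for clarity (a radix-2 FFT would be used in practice); the PRF, pulse count, and target Doppler are assumed for illustration:

```python
import cmath
import math

def dft(samples):
    """Plain discrete Fourier transform of slow-time I/Q samples from one
    range cell (an FFT computes the same result more efficiently)."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

# Assumed illustrative parameters: 1 kHz PRF, 32 pulses, 250 Hz target Doppler.
prf, n_pulses, doppler = 1000.0, 32, 250.0
iq = [cmath.exp(2j * math.pi * doppler * k / prf) for k in range(n_pulses)]

spectrum = [abs(x) for x in dft(iq)]
peak_bin = max(range(n_pulses), key=lambda k: spectrum[k])
measured_doppler = peak_bin * prf / n_pulses  # bin spacing = PRF / n_pulses
```

The peak bin gives the target's Doppler frequency, and hence radial velocity, to within one bin width of PRF/n; integrating more pulses narrows the bins and improves the velocity resolution.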
Radar Ambiguities
Generally radars are classified as low, medium, or high PRF radars. For low PRF radars, all the target (and
clutter) returns are received prior to transmission of the next radar pulse. With high PRF radars, the Doppler
frequencies of all the target (and clutter) signals are less than that of the radar PRF. Low PRF radars, which
are unambiguous in range, are generally ambiguous in Doppler, whereas high PRF radars are normally highly
ambiguous in range. Medium PRF radars can be ambiguous in both range and Doppler.
Range Ambiguities and Eclipsing. Figure 14 shows receive signals over several PRI. The returns
from target 1 occur within the same PRI as the transmit pulse that initiated the target returns, and so target
1 range is unambiguous. The returns from target 2 occur at the same times when other transmit pulses are
being generated. Because receiver returns are normally disabled during the transmit pulse times to prevent
receiver saturation (and possibly burn-out), target 2 returns are eclipsed, and not detected in the receiver. The
returns from target 3 are from a range exceeding the unambiguous range, so that the returns in the current
PRI are associated with pulses transmitted several pulses earlier. Thus, from the radar display, the returns
from target 3 appear to be from a much closer range.
Range eclipsing occurs quite frequently in high PRF radars because of the relatively high transmit time
duty factors. Even on medium PRF radars eclipsing must be avoided for reliable target detection. Eclipsing can
be avoided by changing the PRF when the radar determines that the tracked target range is approaching an
eclipse situation. An alternate solution is to switch among two or more PRIs, so that the target will be visible
in the PRIs in which it is not eclipsed.
Clutter returns with delay times exceeding the PRI (second-time around returns) can cause serious
problems to an MTI radar. This is because many MTI radars employ pulse-to-pulse stagger to avoid blind
ranges. With PRF-staggered MTI radar, second time around clutter returns are not cancelled because the
apparent range changes from pulse to pulse. In general, range ambiguities need to be resolved, especially for
medium and high PRF radars. Even a relatively low PRF radar, such as the AN/MPS-36 instrumentation
tracking radar with its 320 Hz PRF (unambiguous range of 253 nm), can track transponder-augmented missiles
at ranges of many thousands of miles. A number of methods are available for resolving
range ambiguities. One method is to use a form of PRF stagger in which the transmission time is varied on a
pulse-to-pulse basis. The only receive pulses that align on a pulse-to-pulse basis are those corresponding to the
destagger associated with that specific number of PRI. Another method is to apply intrapulse coding on the
transmit pulse in which the coding is changed on a pulse-to-pulse basis. On receipt, receive signals can then
be associated with the particular transmit pulse responsible for the target returns.
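The unambiguous-range figure quoted above follows directly from the PRF: all returns must arrive before the next pulse is transmitted, so Ru = c/(2 · PRF). A minimal check, using the AN/MPS-36 PRF from the text:

```python
# Unambiguous range for a given PRF: Ru = c / (2 * PRF).
C_M_PER_S = 299.7925e6      # speed of light, m/s
M_PER_NM = 1852.0           # meters per nautical mile

def unambiguous_range_nm(prf_hz):
    """Maximum unambiguous range, in nautical miles."""
    return C_M_PER_S / (2.0 * prf_hz) / M_PER_NM

print(round(unambiguous_range_nm(320.0)))   # 253, matching the text
```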
Doppler Ambiguities and Blind Speeds. Figure 15 illustrates receive signals (in frequency space)
for a pulsed coherent radar. The spectral content of the clutter returns is centered about the PRF frequency
lines denoted by fo ± nPRF. Target 1 has a Doppler frequency that is less than the PRF, and so its Doppler can
be determined unambiguously. Target 2 Doppler frequency is at a multiple of the PRF, and because the clutter
returns are normally much higher than those of the target, it is highly unlikely that the target will be detected.
In fact, most coherent radars intentionally reject frequencies around the fo ± nPRF frequencies, specifically to
reject clutter. Target speeds associated with Doppler frequencies of fo ± nPRF are referred to as blind speeds.
Target 3 Doppler frequency exceeds that of the PRF so that the actual Doppler frequency cannot be determined
from the receive spectrum.
Blind speeds can be avoided by several methods. Many coherent MTI radars avoid blind speeds by varying
the PRF on a pulse-to-pulse basis. By appropriately selecting a number of different PRFs, and switching PRFs
on a pulse-to-pulse basis, blind speeds can be avoided over a large range of target velocities. However, as noted
previously, second-time-around clutter returns will pose a problem to this type of processing. Most Doppler
radars require a constant PRF during the coherent processing interval (CPI). Doppler radars can avoid blind
speeds by switching to a different PRF when the radar notes that the target is approaching a blind speed. Alternately, the radar
could transmit groups of pulses at different PRFs so that at most only one group would be at the blind speed.
Resolving Doppler ambiguities can be accomplished by several techniques. If the radar is tracking the
target range, the range rate determination is generally accurate enough to determine at which PRF
multiple the target Doppler is located. If two or more groups of Doppler PRFs are used, the ambiguity can often
be resolved from the measured Doppler frequencies resulting from the multiple PRFs.
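The blind-speed and folding behavior can be made concrete with a short sketch. The wavelength and PRF below are assumed illustrative values: Doppler frequencies at multiples of the PRF fall on the clutter rejection lines fo ± n·PRF, so the corresponding radial speeds v = n·λ·PRF/2 are blind, and any true Doppler above the PRF aliases into the observable window.

```python
# Blind speeds and Doppler folding (illustrative values).
WAVELENGTH = 0.03     # m, X band (~10 GHz) -- assumed
PRF = 1000.0          # Hz -- assumed

def blind_speed(n):
    """Radial speed of the n-th blind line: n * wavelength * PRF / 2."""
    return n * WAVELENGTH * PRF / 2.0        # m/s

def apparent_doppler(fd_hz):
    """Doppler as observed, aliased into a [0, PRF) window."""
    return fd_hz % PRF

print(blind_speed(1))             # first blind speed, 15 m/s here
print(apparent_doppler(2300.0))   # folds to 300 Hz -- true Doppler ambiguous
```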
Multiple Target Tracking. In many cases there is a need to track multiple targets simultaneously. Continuously scanning surveillance radars, such as the FAA's ASR-9 airport surveillance radar, must track all
the targets (airplanes) within their coverage regime. This tracking must be performed on a scan-to-scan basis,
and thus this type of tracking is commonly referred to as track-while-scan processing. Phased array radars are
also multiple target trackers, because they normally interleave switched beam locations to track a number of
targets, with scanning for new targets, as well as performing other possible functions. With these radars, the
targets (or aircraft) are only viewed for a number of pulses on a scan-to-scan (or look-to-look) basis.
Most modern scan-to-scan (or look-to-look) radars use computers for multiple target tracking. Because
the aircraft positions typically change on a look-to-look basis, tracking algorithms must be derived to predict
the estimated target positions on the next scan, based upon previous scans. The accuracy of these predicted
positions is limited by the maneuver capabilities of the targets being tracked, so that the predicted positions
are only estimates of their actual positions. Association algorithms must then be used to determine (1) whether a
detected target is associated with an established track, (2) which established target track the target should be
associated with, and (3) whether a new track should be established if no track association is made.
Figure 16 shows a typical flow diagram for a multiple target tracker. The raw target position information,
such as range and azimuth angle (and possibly elevation angle or height), is derived in the radar. Most multiple
target tracker association algorithms prefer to track in rectilinear coordinates (RN, RE, RV) rather than polar
coordinates, so the conversion must be made from polar coordinates. If North is assumed to be at zero
degrees, then

RN = R cos E cos A
RE = R cos E sin A
RV = R sin E

where
R = range,
A = azimuth angle,
E = elevation angle.

The current target position (RC) is then RC = (RN, RE, RV).
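The conversion can be sketched as a small helper. This is the standard spherical-to-Cartesian geometry with North at zero degrees azimuth; the range and angles used in the example are made up:

```python
import math

# Standard polar-to-rectilinear conversion for a track-while-scan tracker.
def polar_to_rect(r, az_deg, el_deg):
    az, el = math.radians(az_deg), math.radians(el_deg)
    rn = r * math.cos(el) * math.cos(az)   # North component
    re = r * math.cos(el) * math.sin(az)   # East component
    rv = r * math.sin(el)                  # vertical component
    return rn, re, rv

rn, re, rv = polar_to_rect(1000.0, 90.0, 0.0)
print(round(rn), round(re), round(rv))     # 0 1000 0 -- due East
```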
After the coordinate transformations are performed on the incoming radar target data, present target detections
are then compared in association algorithms to determine if the target data are associated with established
target tracks. For the FAA airport surveillance radars, the radar target detections are also associated with the
secondary (beacon) radar target reports. The beacon returns also include aircraft identification and reported
aircraft height. The combined associations are then used to update the target tracks and predict the aircraft
locations on the next scan using the track prediction and smoothing algorithms. If the target detection is
not associated with any of the present target tracks, then the position information is considered for the
establishment of a new target track. Generally, the target must be associated on m out of n
of the previous scans in order to establish a new track. After the m out of n association
is made, a new track is established, and the past position information on those target detections is used to
predict the target location on the next scan. Target tracks are generally dropped after a certain number of
successive target track associations are missed. The information on established target tracks is then routed to
radar displays and possibly to weapons systems.
Smoothing and Prediction Algorithms. Radar measurements of target positions and velocities are
often imprecise as a result of a number of factors, such as signal-to-noise ratio, target RCS fluctuations,
multipath, and clutter contamination. Various algorithms can be used for the track smoothing and prediction
to mitigate the effects of scan-to-scan position and velocity measurement errors, and thus to improve the
accuracy of tracking. Kalman filters (7) are probably the best-known smoothing and prediction algorithms.
Alpha-beta (α, β) or alpha-beta-gamma (α, β, γ) trackers are a subset of the Kalman filters and are the simplest, because they use
precomputed fixed gains. The α, β, γ equations applied to position and velocity smoothing are

RSC = RPC + α(RC − RPC)
ṘSC = ṘPC + (β/T)(RC − RPC)
R̈SC = R̈PC + (2γ/T²)(RC − RPC)
RP(C+1) = RSC + T ṘSC + (T²/2) R̈SC

where
T = time between measurements (sampling period),
RC = measured position,
RSC = smoothed estimate of current position,
RPC = predicted position at the time of the measurement,
ṘPC = predicted velocity at the time of the measurement,
R̈SC = smoothed estimate of current acceleration,
RP(C+1) = predicted position T s later.
The precomputed fixed gains α, β, γ can vary between zero and one, with values toward one giving
the greatest emphasis to the current measurements, whereas values toward zero provide the greatest
smoothing. Benedict and Bordner (8) analyzed the gains for an α-β tracker for the track-while-scan application and
determined the optimal selection for this application as β = α²/(2 − α).
The performance of the α, β, γ trackers is limited by the selection of the fixed gains, which may not be optimal
for all situations. Bar-Shalom and Li (9) discuss the use of Bayesian data association techniques, as well as
multiple model estimators, for providing superior performance for multitarget tracking.
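A minimal one-dimensional α-β tracker illustrates the update cycle. The scan interval and measurement values below are made up; the gains follow the Benedict-Bordner relation β = α²/(2 − α) cited in reference 8:

```python
# One-dimensional alpha-beta tracker sketch with precomputed fixed gains.
T = 1.0                                # time between scans, s (assumed)
alpha = 0.5
beta = alpha ** 2 / (2.0 - alpha)      # Benedict-Bordner relation

def ab_update(x_meas, x_pred, v_pred):
    resid = x_meas - x_pred                    # innovation
    x_smooth = x_pred + alpha * resid          # smoothed position
    v_smooth = v_pred + (beta / T) * resid     # smoothed velocity
    return x_smooth + T * v_smooth, v_smooth   # predict the next scan

x_pred, v_pred = 0.0, 10.0             # initial predictions (assumed)
for x_meas in (10.5, 20.3, 30.1):      # target moving ~10 units per scan
    x_pred, v_pred = ab_update(x_meas, x_pred, v_pred)

print(round(x_pred, 2), round(v_pred, 2))   # 42.68 12.16
```

The residual is weighted into both the position and velocity estimates, and the smoothed state is extrapolated one scan ahead for the next association.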
BIBLIOGRAPHY

1. L. V. Blake, Prediction of radar range, in M. I. Skolnik (ed.), Radar Handbook, New York: McGraw-Hill, 1990, chap. 2.
2. P. Swerling, Probability of detection for fluctuating targets, IRE Trans., IT-6: 269–308, 1960.
3. D. K. Barton, Radar System Analysis, Englewood Cliffs, NJ: Prentice-Hall, 1964.
4. D. R. Rhodes, Introduction to Monopulse, New York: McGraw-Hill, 1959, p. 41; reprint, Norwood, MA: Artech House, 1980.
5. J. H. Dunn, D. D. Howard, Radar target amplitude, angle, and Doppler scintillation from analysis of the echo signal propagating through space, IEEE Trans. Microw. Theory Tech., MTT-16: 715–728, 1968.
6. G. V. Morris, Doppler frequency tracking, in J. L. Eaves and E. K. Reedy (eds.), Principles of Modern Radar, New York: Van Nostrand Reinhold, 1987, chap. 19.
7. R. E. Kalman, New results in linear filtering and prediction theory, ASME Trans., 83D: 95–108, 1961.
8. T. R. Benedict, G. W. Bordner, Synthesis of an optimal set of track-while-scan smoothing equations, IRE Trans. Autom. Control, AC-7: 27–32, 1962.
9. Y. Bar-Shalom, X.-R. Li, Multitarget-Multisensor Tracking: Principles and Techniques, Storrs, CT: Yaakov Bar-Shalom, 1995.
JOSEPH A. BRUDER
Air Force Research Laboratory
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright 2007 John Wiley & Sons, Inc.
Figure 1. Radio wave received at a direction finding site. Contours of constant amplitude propagate radially from the transmitting
antenna, and the angle-of-arrival is characterized by azimuth and elevation.
where r = (rx, ry, rz) is the spatial coordinate, B is the signal amplitude, fo is the frequency of the wave, ko = 2πfo v/|v|² is the
wavenumber vector, which is a function of the scalar frequency,
v is the vector velocity of propagation (typically assumed to be the speed of light in free space), and φ is a random starting phase which is uniformly distributed over [0, 2π]. The AOA information is contained in the ko · r phase
term and is given as
niques are generally referred to as superresolution methods. Finally, current trends in DF research are surveyed
and performance benefits are assessed.

APPLIED DIRECTION FINDING TECHNOLOGY

A radio direction finding system performs both time and
spatial sampling of the field distribution and processes
the samples to estimate AOA. A DF system acquires spatial samples through a combination of individual antenna
placements and/or antenna patterns. The local description
of any spatial field distribution may be estimated either
from a set of spatially separated samples or from a set of
spatial derivatives at a single point in space.
A local description of a field at an arbitrary point r in
a Cartesian coordinate system may be developed by considering a monochromatic plane wave propagating in free
space as
If it is assumed that there are M antennas in the array, then a simultaneous sampling of the output at each
antenna may be expressed as a column vector X(t, r) = [x1 ,
. . . , xM ]T , where T denotes the transpose operation. The
vector X is known as an array snapshot. If the AOA term
in Eq. (1) is separated, then the array snapshot for a single
incident signal may be characterized as
Figure 3. Similarity of radio navigation and DF radio-location technology. (a) Shipboard navigation technique using five radio beacons for position fixing. (b) Radio transmitter location technique
with five direction finding sites.
Figure 5. Contrasting single and multiple plane wavefields incident on DF arrays. (a) Single plane
wave incident on a circular array of antennas. Amplitude is everywhere constant, and contours of
constant phase are parallel straight lines. (b) Multiple plane waves incident on an orthogonal baseline interferometer array. Contours of constant amplitude and phase are distorted. (Contour plot
provided by D. N. Travers.)
the following paragraphs. Under either scenario, the basic procedure for performing the calibration is to record
the appropriate measurement from the installed DF system while exposing it to a controlled (calibration) incident
plane wave under every appropriate combination of signal parameters (i.e., azimuth, elevation, polarization, and/or
frequency).
Calibration for AOA Error Correction. For a low degree
of residual site interaction with the DF antennas, moderate pattern distortion and moderate DF error exist. In
this case, calibration for AOA error correction is effective.
One conventional approach for reducing AOA errors in DF
performance is to start with a carefully controlled site in
which the DF array is removed as far as practical from
Figure 6. Moderate and severe site interaction AOA calibration curves. (a) Moderate site interaction with AOA errors which are correctable. (b) Severe site interaction with reentrant regions in
which AOA error correction is ambiguous.
measuring antenna responses and storing this information in an array manifold consisting of antenna patterns
versus frequency, AOA, and polarization. For ship and aircraft platforms, the installed array steering vectors may
be measured by repeatedly turning the platform in circles
to expose the antennas to all possible AOAs from a far-field calibration station transmitting a wide range of signal
frequencies. As an alternative, array manifolds have been
obtained by performing calibration-like measurements of
antenna responses from scale-model arrays installed on
highly detailed miniature models of the platform.
Iterative Search DF Techniques. The essential process of
any iterative search DF technique is to select the AOA of
the calibrated array steering vector which best agrees with
the unknown measured array response. The primary difference between DF iterative search schemes is the criterion
for obtaining best agreement. A commonly used procedure
for iteratively comparing the observed array response to the
calibrated array response vectors is a beam steering process which acts as an equivalent bank of matched filters. In
this case, best agreement is defined in a least mean squared
sense.
A digital beamformer may be viewed as a matched filter that processes the observed antenna response to produce a single (scalar) response that is maximum when the
preferred direction of the beamformer best agrees with
the AOA of the signal. Beam steering DF iteratively processes an observed response vector through a progression
of matched filters (viz., array steering vectors), each of
which represents a different AOA. Under the constraint
of a normalized input vector (i.e., unit-norm), the output
level of the filter is maximum when excited by a vector
that matches the filter parameters. Stated another way,
maximum output is obtained from the filter whose steering vector AOA best agrees with the bearing of the observed
signal. In this manner, the process scans (or steers) a simulated beam over all possible AOAs, searching for the steering direction that maximizes the beamformer response.
The beam-steered iterative process may be characterized mathematically by considering an array manifold of L
steering vectors for a particular frequency. The array manifold is the set of array steering vectors Ai for i = 1, ..., L. This
implies that the DF system was calibrated at L bearings in
the interval [0, 360] degrees. If X denotes the observed array response vector for a signal of interest, then the output
yi of each matched filter is given as

yi = |Ai^H X|

where H denotes the Hermitian or conjugate transpose operation. The best AOA estimate is obtained when yi is a
maximum. A typical plot of y is shown in Fig. 7. In this
illustration, L = 25, and the AOA estimate is 195°, corresponding to the maximum value associated with the array
steering vector at y14. This process results in an AOA estimate that is best in a least squared sense.
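The matched-filter scan can be sketched directly. A uniform linear array with half-wavelength spacing is assumed here purely for illustration (it is not the geometry of Fig. 7), and the snapshot is noise-free:

```python
import numpy as np

# Beam-steering search sketch: correlate the observed snapshot X
# against calibrated steering vectors and pick the bearing whose
# matched filter output |A_i^H X| is largest.
M = 8                                   # antennas in the array (assumed)
bearings_deg = np.arange(181)           # calibrated bearings, 0..180 deg

def steer(theta_deg):
    """Steering vector of a half-wavelength-spaced linear array."""
    m = np.arange(M)
    return np.exp(1j * np.pi * m * np.cos(np.deg2rad(theta_deg)))

X = steer(65.0)                         # noise-free snapshot, AOA = 65 deg
X = X / np.linalg.norm(X)               # unit-norm constraint

y = np.array([np.abs(steer(b).conj() @ X) for b in bearings_deg])
print(bearings_deg[np.argmax(y)])       # 65
```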
Three areas of particular importance that impact the
performance of DF systems based on iterative
search techniques are (1) antenna array design, (2) adequacy of calibration, and (3) polarization effects. Antenna
array design critically influences the pattern of the steerable beam. The characteristics of the steerable beam are
determined by the geometry of the array, the intrinsic element patterns, and the number of antennas in the array.
Of primary concern in the design of the steerable beam pattern are the beamwidth of the main lobe and the relative level
of the side lobes.
Beam steering DF system performance is also controlled
by the accuracy and completeness of the calibration process
where H is the Hermitian operation and E{·} denotes statistical expectation. Each matrix element R(ri, rj) is the
averaged product of the output of the antenna located at
point ri times the conjugated output of the antenna located
at point rj.
Subspace-Based Superresolution. A large volume of work
has been presented over the past two decades on superresolution techniques that are based on an eigendecomposition of the spatial covariance matrix. Many modern algorithms have their origin in the early work of Pisarenko
(14), which was revived and expanded by Schmidt (15).
Schmidt's MUltiple SIgnal Classification (MUSIC) algorithm is the most widely cited superresolution technique in
the present-day literature. It has been the springboard for
a seemingly endless flow of methods that are variations of
the original approach. In this treatise, the original MUSIC
algorithm is considered; however, for an extensive survey of
MUSIC-related techniques, the reader is referred to Krim
and Viberg (16).
The initial step in the MUSIC algorithm is to solve the
following eigen equation

R E = λ E

where R is the M × M spatial covariance matrix, λ is an arbitrary eigenvalue, and E is an arbitrary eigenvector. This
formulation implicitly assumes that the noise background
is uncorrelated white Gaussian noise. The M eigenvalues
may be ordered such that λM ≥ λM−1 ≥ ... ≥ λ1. The corresponding eigenvectors are arranged to form the matrix RE.
A threshold value is determined such that the eigenvalues greater than the threshold are assumed to be associated with eigenvectors residing in the signal subspace.
Likewise, eigenvalues that are smaller than the threshold
produce eigenvectors which are assumed to be in the noise
subspace. If the spatial covariance matrix were M × M and
there were d signals in the wavefield, then the resulting
eigenvector matrix would be partitioned such that the first
d columns are vectors spanning the signal subspace, and
the rightmost p = (M − d) columns are vectors spanning the
noise subspace. If the matrix partition corresponding to
noise were denoted Rp, then the MUSIC spectrum would
be given by

P = 1/(A^H Rp Rp^H A)     (9)
using Eq. (9) for each array steering vector A, and the AOA
is given by the array steering vector that maximizes P.
If multiple signals were present in the incident wavefield,
then the MUSIC spectrum would exhibit multiple peaks.
This is illustrated in the next section.
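The whole procedure can be sketched numerically. The array here is an assumed half-wavelength linear array with made-up bearings and noise level, not the configuration of the examples that follow: the spatial covariance is eigendecomposed, the small-eigenvalue (noise-subspace) eigenvectors are collected, and the MUSIC spectrum is scanned over bearing.

```python
import numpy as np

# MUSIC sketch: peaks of P = 1/|En^H A|^2 mark the signal bearings.
rng = np.random.default_rng(0)
M, d, snaps = 8, 2, 200                     # antennas, signals, snapshots
true_aoas = [40.0, 110.0]                   # degrees (assumed)

def steer(theta_deg):
    m = np.arange(M)
    return np.exp(1j * np.pi * m * np.cos(np.deg2rad(theta_deg)))

A = np.stack([steer(a) for a in true_aoas], axis=1)          # M x d
S = rng.standard_normal((d, snaps)) + 1j * rng.standard_normal((d, snaps))
N = 0.01 * (rng.standard_normal((M, snaps))
            + 1j * rng.standard_normal((M, snaps)))
X = A @ S + N                               # array snapshots
R = X @ X.conj().T / snaps                  # spatial covariance estimate

w, E = np.linalg.eigh(R)                    # eigenvalues ascending
En = E[:, : M - d]                          # noise-subspace eigenvectors

grid = np.arange(180)                       # bearing scan, 1 deg steps
P = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(g)) ** 2
              for g in grid])
# the two sharp peaks of P fall at the true bearings, 40 and 110 deg
print(grid[np.argmax(P)])
```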
Multisignal DF Example. The MUSIC algorithm is capable of simultaneous DF on multiple incident signals. To illustrate the capability, a linear antenna array geometry is
considered with two interfering signals incident on the array (17). The antennas were deployed in a nine-element
minimum redundancy array configuration, and this provided an equivalent 30-element filled array measurement.
The plot of Fig. 8 shows the results obtained for one signal
on array endfire at 180° and a second signal at 61° off array
boresight at 151°. The signal at 151° also produced a peak
at 209° due to the inherent bearing ambiguity present in
a linear array. The effect of the endfire grating lobe of the
antenna array is evidenced by the relatively broad peak
about 180°. The signal arriving at a bearing removed from
the endfire condition produced the more robust peaks observed at 151° and 209°. Although the MUSIC algorithm
was able to provide DF results for both signals, these data
clearly indicated that angular resolution becomes poorer
for signals arriving from directions near array endfire.
A situation in which the MUSIC algorithm was unable
to correctly resolve the two signals is shown in Fig. 9. In
this case, the AOA separation of the two signals was 4°,
and the AOAs were near array endfire, at 160° and 164°, respectively. The image solutions were evident at 196° and
200° azimuth. Because the AOAs were close together and
near array endfire, the MUSIC algorithm was not able
to resolve the two signals. Two peaks were evident in the
MUSIC spectrum; however, neither one indicated an AOA
correctly associated with an arriving signal. Improved AOA
Superresolution Implementation Issues. One of the primary difficulties encountered in the implementation of the
superresolution techniques is the requirement for a precise
characterization of the antenna array response (viz., the array
manifold). In general, the array manifold must be known
for all frequencies, polarizations, and AOAs. In practice,
an array deployed on highly conducting soil and on a site
free of interacting structures may be characterized mathematically using ideal antenna responses (17). However, for
antenna arrays deployed on shipboard, airborne, satellite,
or other cluttered sites, a mathematical characterization
is generally not possible, and the array manifold must be
determined by calibration using a transmitter at known
locations. Due to the highly robust nature of the superresolution techniques, the array calibration must be done in
increments of AOA, frequency, polarization, etc., which will
be immune to significant interpolation error. These issues
were discussed in the previous section relating to array
calibration.
Another source of difficulty is detecting the number of
signals in the wavefield. One rule of thumb is based on the
relative magnitudes of the eigenvalues. Larger eigenvalues are associated with signals, and smaller eigenvalues
are associated with noise. This process works reasonably
well in high SNR situations, but it is unreliable for low
SNR. It has been shown that underestimation of the number of signals results in poor AOA performance (18), and
for this reason, system designers generally try to overestimate the number of signals; however, overestimation may
be a problem in low SNR situations due to the fact that the
eigen-based techniques tend to produce extraneous peaks
corresponding to the number of estimated signals. A num-
TRENDS IN DF RESEARCH
Two primary areas of research in the science and technology of radio direction finding are (1) efforts to improve DF
system performance in the presence of reradiating structures and (2) investigations to improve the performance
of the superresolution wavefield decomposition techniques.
The discussion in this section focuses on a representative
subset of the many important research efforts going on.
from the fact that the spatial covariance matrix is not statistically stationary in the wide sense. That is, the terms in
the matrix depend upon antenna location as well as spatial
separation. Spatial smoothing is a process whereby the array is partitioned into smaller subarrays, and the resulting
covariance matrices from the subarrays are averaged. The
resulting averaged covariance matrix effectively decorrelates the coherent signals.
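The rank-restoring effect of spatial smoothing can be demonstrated in a few lines. The half-wavelength linear array, bearings, and subarray size below are assumed for illustration: two fully coherent signals collapse the full-array covariance to rank 1, while averaging the overlapping subarray covariances restores the rank needed by subspace methods.

```python
import numpy as np

# Spatial smoothing sketch: average covariances of overlapping subarrays.
M, sub = 8, 5                         # full array and subarray sizes
K = M - sub + 1                       # number of overlapping subarrays

def steer(theta_deg, m):
    n = np.arange(m)
    return np.exp(1j * np.pi * n * np.cos(np.deg2rad(theta_deg)))

v = steer(50.0, M) + steer(120.0, M)  # coherent sum of two signals
R = np.outer(v, v.conj())             # noise-free covariance, rank 1

R_sm = sum(R[i:i + sub, i:i + sub] for i in range(K)) / K

print(np.linalg.matrix_rank(R))       # 1 -- coherent signals collapse
print(np.linalg.matrix_rank(R_sm))    # 2 -- smoothing decorrelates them
```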
A technique has recently been proposed by Li et al.
(33) to estimate AOA for coherent signal components without the use of spatial smoothing and eigendecomposition.
The authors have developed a new method for 2-D spatial-spectrum estimation of coherent signals using a rectangular planar array, and the method works in the presence of
unknown noise environments. The authors claim that the
performance of the proposed technique is similar to that of
spatial smoothing in the presence of spatially white noise,
and it provides improved performance in spatially colored
noise environments.
In another approach, Delis and Papadopoulos (34)
propose an enhanced forward/backward spatial filtering
method that provides improved performance over spatial
smoothing techniques. The authors contend that their enhanced spatial filtering approach requires the same number of antenna elements as the spatial smoothing methods,
and it achieves improved performance.
An improved spatial smoothing technique has been proposed by Du and Kirlin (35). Two problems with the spatial smoothing method are that it reduces the effective aperture
of the array and that it does not take into account the cross
correlation between the subarrays. The authors propose
an averaging technique which utilizes the correlations between the subarrays to produce a more statistically stable
estimate of the averaged covariance matrix. The authors
suggest that the technique provides improved performance
when the subarrays are small compared to the size of the
overall array.
If the transmitter is moving, then a concept known as
temporal smoothing can be used to decorrelate coherent
signals (38). Gu and Gunawan (39) showed that, for simulated VHF and UHF signals from a moving transmitter,
temporal smoothing can resolve more closely spaced coherent signals than spatial smoothing. They showed temporal
smoothing requires M+1 antennas to estimate the bearings
of M coherent signals, whereas spatial smoothing requires
3M/2 antennas.
OPERATIONAL ISSUES

All direction finding operations proceed on the axiomatic
assumption that the bearing measured is confirmed on the
signal of interest. The advance of DF technology has produced DF systems with the potential for excellent operational performance. Experience shows this potential may
never be realized in practice unless equal consideration
is given to explicit confirmation that the AOA reported is
on the signal of interest. Traditionally, DF confirmation
has been the responsibility of the DF operator, while DF
system engineers have concentrated on bearing accuracy,
sensitivity, and response time. The growing speed and com-
BIBLIOGRAPHY

1. P. J. D. Gething, Radio Direction Finding and Superresolution, London: Peregrinus, 1991.
2. H. H. Jenkins, Small-Aperture Direction-Finding, Norwood, MA: Artech House, 1991.
3. L. F. McNamara, The Ionosphere: Communications, Surveillance, and Direction Finding, Malabar, FL: Krieger, 1991.
4. D. N. Travers (ed.), Abstracts on Radio Direction Finding, 2nd ed., San Antonio, TX: Southwest Research Institute, 1996.
5. R. E. Franks, Direction-finding antennas, in Y. T. Lo and S. W. Lee (eds.), Antenna Handbook: Theory, Applications, and Design, New York: Van Nostrand Reinhold, 25.4–25.9, 1988.
6. D. N. Travers, Characteristics of electrically small spaced loop antennas, IEEE Trans. Antennas Propag., 13: 639–641, 1965.
7. J. E. Hipp, Experimental comparisons of sky wave DF algorithms using a small circular array of loop antennas, 4th Int. Conf. HF Radio Syst. Techniques, pp. 215–220, 1988.
8. H. D. Kennedy, W. Wharton, Direction-finding antennas and systems, in H. Jasik (ed.), Antenna Engineering Handbook, New York: McGraw-Hill, 39.16–39.18, 1961.
9. J. E. Hipp, Adaptive Doppler DF system, U.S. Patent No. 5,321,410, 1994.
10. R. M. Wundt, Wullenweber arrays, Signal Processing Arrays: Proc. 12th AGARD Symp., Dusseldorf, (16), 128–152, 1966.
11. D. E. N. Davies, Circular arrays, in A. W. Rudge, K. Milne, A. D. Olver, and P. Knight (eds.), The Handbook of Antenna Design, London: Peregrinus, pp. 999–1003, 1986.
12. W. M. Sherrill, D. N. Travers, P. E. Martin, Phase linear interferometer system and method, U.S. Patent No. 4,387,376, 1983.
13. R. L. Johnson, Q. R. Black, A. G. Sonsteby, HF multipath passive single site radio location, IEEE Trans. Aerosp. Electron. Syst., 30: 462–470, 1994.
14. V. F. Pisarenko, The retrieval of harmonics from a covariance function, Geophys. J. Roy. Astron. Soc., 33: 347–366, 1973.
15. R. O. Schmidt, Multiple emitter location and signal parameter estimation, IEEE Trans. Antennas Propag., AP-34: 276–280, 1986.
16. H. Krim, M. Viberg, Two decades of array signal processing research, IEEE Signal Process. Mag., 13 (4): 67–94, 1996.
17. R. L. Johnson, An experimental investigation of three eigen DF techniques, IEEE Trans. Aerosp. Electron. Syst., 28: 852–860, 1992.
RICHARD L. JOHNSON
JACKIE E. HIPP
WILLIAM M. SHERRILL
Southwest Research Institute,
6220 Culebra Road, San
Antonio, TX, 78238
RADIO NAVIGATION
A key function of navigation is the estimation of the current position of a vessel. The reception of radio signals from transmitters whose locations are known is a common means of implementing the position estimation function. Several different
schemes have been developed. They can be classified according to the means of determining a position from radio signals. Figure 1 provides the geometric relationships for the different schemes.
A theta–theta system determines the position by the vessel's bearing with respect to two transmitters. This scheme is
not common in aviation due to its low accuracy when compared with other available systems.
Rho–theta systems use radio signals to determine distance
and bearing with respect to the transmitter. This scheme has
been in common use in aviation for many years. Most of the
airspace can be flown using this basic means of navigation.
Errors in bearing measurement will result in position errors
that depend on the distance from the station.
Rhorho systems are based on distance measuring equipment (DME) that determines the position of an aircraft using
two or more distance values. When only two distance values
are available, there is potentially an ambiguity of position.
This ambiguity is usually resolved by using the last computed
position to determine the most reasonable position. The position accuracy of the rhorho solution is dependent upon the
accuracy of the measured distance and the bearing angles to
the stations. If the aircraft is close to the line through two
stations, the error in the position solution using only those
two stations becomes large.
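The two-distance fix and its ambiguity can be sketched numerically. The flat-plane geometry, station coordinates, and units below are illustrative assumptions; the nearest-to-last-position rule follows the text.

```python
import math

def rho_rho_fix(s1, s2, rho1, rho2):
    """Intersect the two range circles about stations s1 and s2.

    Returns the two candidate positions (the rho-rho ambiguity)."""
    (x1, y1), (x2, y2) = s1, s2
    d = math.hypot(x2 - x1, y2 - y1)
    if d > rho1 + rho2 or d < abs(rho1 - rho2):
        raise ValueError("range circles do not intersect")
    a = (rho1**2 - rho2**2 + d**2) / (2 * d)  # distance from s1 to chord midpoint
    h = math.sqrt(max(rho1**2 - a**2, 0.0))   # half the chord length
    mx = x1 + a * (x2 - x1) / d
    my = y1 + a * (y2 - y1) / d
    ux, uy = -(y2 - y1) / d, (x2 - x1) / d    # unit normal to the baseline
    return (mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)

def resolve_ambiguity(candidates, last_position):
    # Pick the candidate nearest the last computed position, as the text describes.
    return min(candidates, key=lambda p: math.hypot(p[0] - last_position[0],
                                                    p[1] - last_position[1]))

fixes = rho_rho_fix((0.0, 0.0), (100.0, 0.0), 60.0, 60.0)
pos = resolve_ambiguity(fixes, last_position=(50.0, 30.0))
```

Note how the two candidates straddle the line through the stations; a vessel near that line makes the two circles nearly tangent and the solution geometry poor, as the text observes.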
Hyperbolic systems measure the time delay of signals simultaneously transmitted from three or more stations.
Figure 1. Geometric relationships for the theta-theta, rho-theta, rho-rho, and hyperbolic (lines of position, LOP) radio navigation schemes.

[Figure: DME interrogation and reply pulse pairs exchanged between the aircraft and the DME station.]
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright # 1999 John Wiley & Sons, Inc.
DME Channels            VHF Frequency Assignment
1 to 16                 Unpaired
17 to 56                Even channels paired with ILS; odd channels with VOR
57 to 59, 70 to 126     VOR
60 to 69                Unpaired
Distance in nm = (time duration in microseconds - 50 µs) / (12.359 µs/nm)
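A minimal sketch of this timing-to-distance conversion; the 50 µs term is the transponder's fixed reply delay, and times are in microseconds.

```python
REPLY_DELAY_US = 50.0       # fixed ground transponder delay, microseconds
US_PER_NM = 12.359          # round-trip propagation time per nautical mile

def dme_distance_nm(round_trip_us: float) -> float:
    """Slant range in nautical miles from interrogation-to-reply time."""
    return (round_trip_us - REPLY_DELAY_US) / US_PER_NM

# Example: a 173.59 microsecond round trip corresponds to 10 nm.
print(round(dme_distance_nm(173.59), 3))   # 10.0
```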
The airborne DME unit has memory that handles the situation when the reception of the DME reply is momentarily interrupted. The equipment uses the memory to remain in the
track mode and provide distance data during the reply interruption. The memory allows the DME to provide distance information for up to 10 s after loss of reception.
It is common for airborne DME units to handle up to five
DME ground facilities simultaneously by multiplexing the receiver-transmitter circuits. This allows the equipment to simultaneously provide distance information for up to five
DME navaids.
The airborne DME transmits and receives on one of 252
channels. There are 126 X and 126 Y channels. The transmit
and receive frequencies of any one channel are separated by
63 MHz. In the first 63 X channels, the ground-to-air frequency is 63 MHz below the air-to-ground frequency. For X
channels 64 through 126, the ground-to-air frequency is 63
MHz above the air-to-ground frequency. For Y channels the
situation is reversed. The ground-to-air frequency of the first
63 Y channels is 63 MHz above the air-to-ground frequency.
For channels 64Y to 126Y, the ground-to-air frequency is 63 MHz
below the air-to-ground frequency. The 252 ground-to-air frequencies are each whole MHz frequencies from 962 MHz to
1213 MHz. The air-to-ground frequencies are each whole MHz
frequencies from 1025 MHz to 1150 MHz.
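The channel rules above can be captured in a small function. The air-to-ground mapping of 1024 + N MHz for channel N is an inference from the stated 1025 to 1150 MHz range, not given explicitly in the text.

```python
def dme_frequencies(channel: int, kind: str):
    """Return (air_to_ground_MHz, ground_to_air_MHz) for DME channel 1..126.

    X channels 1-63 reply 63 MHz below the interrogation, 64-126 reply above;
    for Y channels the situation is reversed, per the text."""
    assert 1 <= channel <= 126 and kind in ("X", "Y")
    air_to_ground = 1024 + channel          # assumed mapping: 1025..1150 MHz
    low_block = channel <= 63
    if kind == "X":
        reply = air_to_ground - 63 if low_block else air_to_ground + 63
    else:
        reply = air_to_ground + 63 if low_block else air_to_ground - 63
    return air_to_ground, reply
```

With this rule, channel 1X replies at 962 MHz and channel 126X at 1213 MHz, reproducing the stated 962 to 1213 MHz ground-to-air span.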
The spacing between the two pulses of the pulse pair differs between the airborne (air-to-ground) and ground (ground-to-air) transmissions and between X and Y channels, as shown below.

              Air-to-ground    Ground-to-air
X channel     12 µs            12 µs
Y channel     36 µs            30 µs
The VOR signal consists of a VHF RF carrier (108 to 112 MHz on even-tenth MHz channels, or 112 to 117.95 MHz), a 9960 Hz subcarrier frequency modulated at 30 Hz (the reference signal), and a variable 30 Hz AM signal.
Most DME channels are paired with a VHF frequency allocated to VOR or ILS. That is, for each VOR or ILS frequency
there is an assigned DME channel for use when DME equipment is part of the navaid facility. The X channels are paired
with VHF frequencies in 100 kHz increments (108.00, 108.10,
108.20, etc.). The Y channels are paired with VHF frequencies
in 100 kHz increments but offset by 50 kHz (108.05, 108.15,
108.25, etc.). The table shown earlier summarizes the DME channel pairing with VHF frequencies.
[Figure: phase relationship between the VOR reference and variable signals for positions north (0°), east (90°), south (180°), and west (270°) of the station. The frequency-modulated 9960 Hz subcarrier swings between 9480 Hz and 10440 Hz.]
[Figure: DME antenna siting at the runway. The glideslope and DME antennas are offset (DME bias) so that the indicated DME distance reads zero at the runway touchdown point.]

Figure 7. Localizer antenna lobe patterns. The 90 Hz and 150 Hz lobes straddle the localizer beam centerline along the runway, and a back course extends in the opposite direction.

[Figure: glideslope antenna lobes (90 Hz above and 150 Hz below the glideslope centerline), with the outer, middle, and inner marker beacons located along the approach path; the marker modulation tones shown include 400 Hz and 1300 Hz.]
and the range to each satellite and hence estimate the receiver position.
To deny the accuracy of GPS to unfriendly forces, the satellite signals are intentionally degraded using a concept
known as selective availability (SA). This technique degrades the L1 signal characteristics to the extent that navigation accuracy is about 100 m (95%).
MICROWAVE LANDING SYSTEM (MLS)
Microwave landing systems consist of azimuth and elevation microwave transmitters, a conventional DME transponder, and the airborne receivers. The azimuth transmitter provides coverage 40° to each side of the centerline. The elevation transmitter provides coverage up to 15° of elevation.
Microwave landing transmitters operate on one of 200 assigned frequencies between 5.031 GHz and 5.0907 GHz. The
azimuth transmitter provides a narrow beam signal that
sweeps the azimuth coverage area (±40°) at a rapid rate. By measuring the time between successive receptions of the scanning-beam signal, the receiver can determine the azimuth angle from the centerline. A preamble microwave signal is transmitted from
a broad beam antenna to indicate the beginning of the azimuth sweep. Various information is digitally encoded in the
preamble signal. The elevation function is provided in the
same manner as the azimuth function. High sweep rates provide about 40 samples per second for azimuth and elevation.
GERALD E. BENDIXEN
Rockwell Collins, Inc.
SEARCH RADAR
Search radar is used widely to provide electronic surveillance
of the environment to detect objects that would otherwise be
invisible to the unaided observer. These systems usually function without operator interaction to provide information rates
commensurate with high-speed decision making by the user.
That user might, for example, be an automated weapon
launch system, an air traffic controller, or a traffic policeman.
OVERVIEW
This section provides a summary of the function, applications,
elements, and design challenges of modern search radar.
Function
The single function of the radar is to provide volume surveillance over all or portions of a sphere centered at the radar
antenna. This is accomplished by radiating high-energy microwave pulses into the volume and detecting these pulses as
they are reflected from objects of interest (targets). An antenna focuses the radiation to create a narrow beam of energy. This selectivity and the short duration of the pulses
allow the radar to measure target location in distance (range)
and in one or two angular dimensions (bearing and elevation).
The antenna may be rotated mechanically in angle or the
beam may be steered electronically in one or two dimensions.
A receiver amplifies the target-reflected pulses and removes the microwave carrier frequency through a heterodyning process. The received pulses are applied to a signal processor for target detection and location measurement. In
modern designs, the signals are converted to digital format
with subsequent processing carried out digitally.
Target detection is performed by comparing the magnitudes of received signals to a preset threshold. Signals exceeding this threshold are declared targets and their parameters are passed on to a location measurement process.
Occasionally, internal receiver noise will exceed threshold.
These occurrences are termed false alarms. Adaptive thresholds are employed to maintain a constant average false alarm
rate (CFAR).
Target range is determined by noting the time of detection
relative to the time of transmission. The translation from
time to range is predicated on the fact that the pulses travel
at the speed of light. Angular measurement is obtained by
noting the position of the antenna at the time of detection.
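The time-to-range translation is direct: the pulse travels at the speed of light, out and back, so the measured delay is halved.

```python
C = 3.0e8  # propagation speed (speed of light), m/s

def echo_range_m(delay_s: float) -> float:
    """Target range from the transmit-to-detection delay."""
    return C * delay_s / 2.0   # divide by two for the out-and-back path

print(round(echo_range_m(100e-6)))   # a 100 microsecond delay -> 15000 m
```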
In simpler applications, target information is applied to an
electronic display. This display provides the operator with a
picture of the environment that is updated on each antenna
scan. Usually, a plan view is presented in x and y coordinates.
Automatic designs rely on a general-purpose digital computer to interpret target detection information. This computer
provides scan-to-scan correlation, trajectory extrapolation,
and, in military applications, threat assessment. In applications supporting weapon engagement, a kill assessment function may be provided.
In earth-based applications, there are a number of extraneous signals that may cause interference. Among these are unwanted clutter returns from the local terrain, sea surface, or
weather. In military applications, an adversary may radiate
noise in an attempt to hide returns from its own vehicles.
These interference signals are referred to as electronic countermeasures (ECM) or jamming. More sophisticated techniques are in use that attempt to confuse the radar by generating false targets. Finally, a large number of radar systems
are in use today. Each has its own peculiar function and set
of parameters, but there may be several systems colocated in
the same general area. Direct reception of radiation from another radar may be interpreted as a target return. A function
of the well-designed radar is minimization of clutter, ECM,
and friendly interference effects.
Applications
There are a number of ways to categorize search radars. They
may be developed for the military, be produced commercially,
or be sponsored by governmental agencies. The platforms may
be surface-based at sea or on land, airborne in aircraft or missiles, or space-based. A radar is either two-dimensional (range
and bearing) or three-dimensional (range, bearing, and elevation). It also may be characterized, loosely, as long, medium, or
short range. Finally, microwave transmission frequency is an
important attribute. Radars are in use having carrier frequencies from 1 GHz, or lower, to 20 GHz, or higher.
Examples of search radar applications include general surveillance, weapon support, aircraft and ship navigation, air
traffic control, harbor traffic control, early warning of attack,
weather alerting and monitoring, obstruction alerting, vehicular speed measurement, and satellite location monitoring.
Elements
In general, a search radar is composed of several distinct electronic elements having specific functions. These include a frequency synthesizer, a timing generator, a power amplifier, an
antenna, an antenna platform, a microwave receiver, an intermediate frequency receiver, a signal processor, and a control computer. An operator display-and-control panel may be
included. Figure 1 is a block diagram showing the interactions among these elements.
The frequency synthesizer provides the basic radar signals
necessary for carrier transmission and for local oscillator production. Each frequency conversion used in the radar receivers requires one local oscillator signal. An additional signal
may be provided to serve as the basic system clock, which sets
the timing intervals for pulse transmission, range gating, and
digital sampling. Coherency is an absolute requirement in
modern designs due to the need to reject clutter by Doppler
processing. Therefore, all signals must be phase related. Often, coherency is controlled by starting with crystal-based oscillators operating at low frequency. Microwave signals are
produced using frequency multiplier chains, offset modulators, and frequency dividers.
A timing generator provides timing commands for the other
radar elements. These commands include transmission pulse
width and repetition frequency, range gate width, and digital
converter sampling intervals. This element also provides
start-and-stop events for the various algorithms involved in signal processing.
Figure 1. Generalized block diagram of a search radar showing interfaces between elements.
Most modern search radars employ the generic elements shown in this diagram.
Doppler processing separates returns from clutter and higher velocity targets. Advantage of this phenomenon can be taken only if the transmission and local oscillator signals are coherent. That is, their frequency and phase must
remain nearly constant over time. Therefore, close attention
must be given to short-term phase and frequency stability. In
addition, the high population density of radar in operation
forces a tight spectral allocation to each system. Legally, the
designer cannot allow his frequency to drift outside the prescribed boundaries. This problem may be resolved by utilizing
oscillators based on crystals whose precise molecular structure supports only a narrow vibration
spectrum. Power amplifiers exhibiting low phase noise characteristics may also be required.
Very-short-term instability is referred to as frequency
modulation (FM) noise. This noise is usually broadband and
is radiated in conjunction with the transmission carrier. Clutter-reflected returns of this noise can cause interference with
the relatively low-strength returns from targets even though
targets and clutter are separated by a Doppler shift. In typical applications, this noise must be maintained at least 120 dB below the carrier level when measured in a bandwidth of
1 Hz. Control of this noise may be achieved using automatic
phase and frequency control loops within the frequency synthesizer.
A coherent radar is dependent upon the stability of the
pulse repetition frequency. Deviations in this frequency,
termed pulse jitter, can cause intermodulation products or
clutter to spill over into the target spectrum and cause interference. Careful attention to the stability of the system clock
can minimize this problem.
Target detectability is a direct function of transmitted
power level. Once a power amplifier device, modulator, and
power supply have been selected, this parameter is set. However, care must be taken that its value does not deteriorate
over time.
In longer range applications, the transmission power is
high enough to destroy receiver circuitry if allowed to enter.
It is mandatory that provisions be made to prevent this occurrence. Common practice is to use high power circulator devices whose phase characteristics cause transmission cancellation at the receiver input. In reality, these devices provide only limited isolation, and additional receiver protection may be required.
Dynamic range is a measure of the radar's ability to process large signal returns simultaneously with very low signal
returns without degradation. Generally, clutter returns are
the largest signals present at the radar input. If a clutter echo
arrives in time conjunction with a target echo and if the
clutter level causes receiver saturation, then small signal suppression occurs and target detectability is degraded. Similarly, clutter may exceed the dynamic range of the analog-to-digital converter. In this case, totally erroneous data may be
produced. The careful designer will choose analog components
having a high saturation level and will choose digital converters having a large number of bits in their output word length.
Typically, word lengths of 14 to 16 bits are required to withstand worst-case clutter inputs.
Even when clutter returns are processed linearly through
the radar receiver and digital converter, they must be rejected
within the signal processor to avoid interference with target
returns. Classical analog filtering is not effective against clutter, because the receiver must pass the entire pulse spectrum
of the target, typically several megahertz, while the Doppler
offset is only on the order of a few kilohertz. Common practice
is to employ digital clutter cancelers also known as moving
target detectors (MTD). In its simplest form, the MTD algorithm provides subtraction of the current pulse sample from
that received one or more repetition intervals earlier. Because
clutter has a near-zero Doppler shift, it is canceled almost
totally, whereas the Doppler shifted target return is passed
with little attenuation. The result is a high-pass digital filter.
Feed-forward and feedback multiplication factors may be
used to tailor the frequency response to specific target requirements.
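The simplest MTD form described above, subtraction of the sample received one repetition interval earlier, can be sketched directly. Complex (I/Q) samples are assumed.

```python
def two_pulse_canceller(pulses):
    """Two-pulse clutter canceller.

    `pulses` is a list of per-pulse range profiles (lists of complex samples).
    Each output profile is the current pulse minus the previous one, range
    gate by range gate: near-zero-Doppler clutter cancels, Doppler-shifted
    target returns pass."""
    out = []
    for prev, cur in zip(pulses, pulses[1:]):
        out.append([c - p for p, c in zip(prev, cur)])
    return out

# Stationary clutter: identical returns on successive pulses cancel to zero.
clutter = [[5 + 0j, 5 + 0j], [5 + 0j, 5 + 0j]]
print(two_pulse_canceller(clutter))   # [[0j, 0j]]
```

Cascading such stages, or adding the feed-forward and feedback factors the text mentions, shapes the resulting high-pass response.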
Clutter may be received from the terrain, from the surface
of the sea and from rainfall. Terrain reflections are usually
the most intense, but, because their Doppler shift is very low,
they are canceled easily. Sea clutter becomes a problem only
under heavy-sea conditions. Rain clutter presents the greatest challenge, because it can be highly intense and that intensity increases with the fourth power of radar frequency. This
problem is compounded by the Doppler shifts from wind-driven
rain drops. Because wind velocity generally increases with altitude, the cancellation of higher altitude rainfall clutter becomes problematic.
Microwave transmissions tend to resonate with molecules
present in the atmosphere. This resonance causes attenuation
with an attendant degradation to target detectability. Oxygen
and water vapor are the primary contributors to this attenuation, although smoke, haze, and smog can also be factors.
Even though there exist windows of decreased attenuation,
the effect is more pronounced at higher radar frequencies.
Rainfall also can cause severe attenuation. Over frequency,
attenuation rates vary from a few tenths of a decibel per kilometer of transmission path to more than 1 dB per kilometer.
Microwave components contribute some degree of loss with
attendant transmission strength reduction and target signal
attenuation. Included are waveguide runs, waveguide joints,
circulators, phase shifters, limiters, and filters. Without careful design, the summation of these losses can exceed 10 dB.
There are a number of variable losses associated with the
antenna beam and signal processing. A target may appear at
an angle off the antenna elevation boresight. During the target dwell, the antenna gain will vary. Pulse returns may be
sampled at other than peak response times. Noise level estimates may not be exact. The matched filter approximation
734
SEARCH RADAR
used will not yield the theoretical maximum SNR. These effects are always probabilistic but must be considered in detection calculations.
In military applications, ECM must be considered. This is
employed in an attempt to hide the echo from incoming aircraft or missiles or to decoy and confuse the search radar. The
simplest form is barrage noise jamming in which high noise
levels are radiated across the entire frequency band allocated
to the radar. These systems require no knowledge of the exact
transmission frequency of the victim radar. In those cases
where the adversary is able to measure the transmission frequency, spot jamming may be effective. Here, the noise jamming is restricted to the instantaneous spectrum occupied by
the radar transmission and may have much higher power
density than that of barrage jamming. More advanced techniques are available in which the ECM system reradiates the
transmission with variable delay or modulation in an attempt
to create false targets. The effectiveness of ECM depends
upon whether or not the jamming source can be located in
angular coincidence with protected targets. If not, its effect
may be minimized by careful control of antenna sidelobes.
Worldwide, there exist a large number of radars of various
types operating in a variety of frequency bands. It is likely
that a given radar will be required to operate in an environment containing several other radars operating in proximity.
In this case, reception of direct transmission and target reflected echoes can cause interference and false target production. When the various radars are of the same type, based on
a common design, the designer may minimize interference using careful frequency channelization and repetition frequency
selection. When disparate types are involved, techniques exist
for editing out pulse returns at repetition frequencies other
than that currently used by the victim radar. In all cases,
careful antenna sidelobe control is mandatory.
An emerging technology has been deployed that attempts
to render aircraft, missiles, and ships invisible to radar. This
stealth technology utilizes radar absorbent material (RAM)
and geometric vehicle design to greatly reduce radar reflectivity. In fact, stealth techniques do not result in total invisibility but radar return levels from these targets are reduced appreciably. Unfortunately, from a design standpoint, little can
be done to counter these threats except to increase transmitter power, frequency stability, antenna aperture, and/or receiver sensitivity. However, the stealth technology is not totally effective over an infinite bandwidth. Therefore, it may
turn out that the next generation radar designs must be
based on either very low or very high radar frequencies.
These decisions will have far-reaching implications in terms
of antenna beamwidth, Doppler resolution, and repetition frequency selection. Stealth is, probably, the most serious concern for future military radar development.
Target detection decisions are based on signals exceeding
some preset threshold. Ideally, that threshold is computed by
multiplying an average measurement of the ambient receiver
noise level by some constant. Thus, the threshold will vary as
the noise level varies due to temperature changes and local
clutter conditions. Since false alarms on noise are probabilistic events, this method ensures a constant average false
alarm rate (CFAR). The problem is in determining the ambient noise level. This may be accomplished by averaging a
number of samples taken from range cells next to the target
cell under investigation. Use of a larger number of cells reduces the variance of the noise estimate.
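A minimal cell-averaging sketch of this scheme follows. The reference-cell count and threshold multiplier are illustrative; practical designs add guard cells and derive the multiplier from the elected false alarm probability.

```python
def ca_cfar(power, n_ref=4, multiplier=5.0):
    """Cell-averaging CFAR: flag cells exceeding `multiplier` times the
    average of up to `n_ref` range cells on each side."""
    detections = []
    for i in range(len(power)):
        left = power[max(0, i - n_ref):i]
        right = power[i + 1:i + 1 + n_ref]
        ref = left + right
        noise = sum(ref) / len(ref) if ref else float("inf")
        detections.append(power[i] > multiplier * noise)
    return detections

samples = [1.0, 1.2, 0.9, 1.1, 30.0, 1.0, 0.8, 1.1]   # strong target in cell 4
print([i for i, hit in enumerate(ca_cfar(samples)) if hit])   # [4]
```

Because the threshold floats with the local average, the false alarm rate stays constant as the noise or clutter level drifts, which is the CFAR property described above.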
The energy-to-noise ratio (ENR) for a single target dwell takes the form

ENR = Pt τt Gt σt Ar Np L / [(4π)² R⁴ k T0 F]

where Pt is peak transmitted power, τt is pulse width, Gt is transmit antenna gain, σt is target cross section, Ar is receive aperture area, L is system loss, k T0 is thermal noise density, F is noise figure, and the number of pulses in the dwell is

Np = Td / Tr

with Td the dwell time and Tr the pulse repetition interval.

For volume search, the radar is better characterized by its average power Pa and the volume search time Tv = Nb Ts, where Nb is the number of elevation beam positions and Ts is the scan time per position. Insertion of these relations into the original range equation yields the following form, which highlights the critical radar parameters:

ENR = Pa σt Ar L Tv / [8π² θe k T0 F R⁴]

where θe is the elevation coverage angle. For a target closing at velocity Vt that must be detected at range Rd, the range at the start of the search frame is Rd + Vt Tv, so that

ENR = Pa σt Ar L Tv / [8π² θe k T0 F (Rd + Vt Tv)⁴]

Then, maximized ENR at initial range is given by

ENR = k Pa σt Ar L / (θe k T0 F Vt Rd³)

where k here collects the numerical constants of the optimization.

Power-Aperture Product

The designer has control over some parameters but not others. Target cross section and elevation coverage are fixed by the operational requirement.
As an example, consider a 10 GHz radar with a 1 m² receiving aperture. The wavelength is

λ = 3.0 × 10⁸ / (10 × 10⁹) = 0.030 m

and the transmit gain is

Gt = 4π Ar / λ² = 4π / (0.030)² = 13,963 (41.4 dB)

With average power Pa = 10⁵ mW, repetition interval Tr = 10 × 10⁻⁶ s, and pulse width τt = 200 × 10⁻⁹ s, the peak power is

Pt = Pa Tr / τt = (10⁵ × 10 × 10⁻⁶) / (200 × 10⁻⁹) = 5 × 10⁶ mW

or 5 kW (67 dBmW).

The optimized volume search time is

Tv = Rd / (3 Vt) = 10,000 / (3 × 1100) = 3 s

for detection range Rd = 10,000 m and target velocity Vt = 1100 m/s. With Nb = 4 elevation beam positions covering θe = 45°, the elevation beamwidth is

θe / Nb = 45 / 4 = 11.25 deg (0.196 rad)

the scan time per elevation position is

Ts = Tv / Nb = 3 / 4 = 0.75 s (80 RPM)

and the rotation rate is

ω = 2π / Ts = 2π / 0.75 = 8.38 rad/s

The azimuth beamwidth follows from the gain:

θa = 4π / (Gt × 0.196) = 4π / (13,963 × 0.196) = 0.00459 rad (0.26 deg)

The dwell time is then

Td = θa / ω = 0.00459 / 8.38 = 0.000548 s (0.548 ms)

and the number of pulses per dwell is

Np = Td / Tr = 0.000548 / (10 × 10⁻⁶) = 55 (17.4 dB)

The initial range is

Ri = (4/3) Rd = (4/3) × 10,000 = 13,333 m

Collecting the parameters of the maximized range equation in decibel form gives the power-aperture product:

Parameter    dB
ENR          13.0
θe           1.0
k T0         174.0
F            3.0
Vt           30.4
Rd³          120.0
k            28.7
σt           20.0
L            10.0
Pa Ar        50.1
Table 2. Example of Verification of Energy-to-Noise Ratio

Parameter    Contribution (dB)
Pt           +67.0
τt           -67.0
Gt           +41.4
σt           -20.0
Ar           0.0
Np           +17.4
L            -10.0
(4π)²        -22.0
Ri⁴          -165.0
k T0         +174.0
F            -3.0
ENR (sum)    +12.8

Denominator terms carry negative signs; the k T0 = -174 dBmW/Hz noise density contributes +174 dB.
Doppler Shift

The Doppler shift of a target with radial velocity Vt at carrier frequency Fr is

ΔF = 2 Vt Fr / c

At Fr = 1 GHz, target velocities of 100 m/s to 1000 m/s give

Fmin = (2 × 100 × 10⁹) / (3.0 × 10⁸) = 667 Hz

Fmax = (2 × 1000 × 10⁹) / (3.0 × 10⁸) = 6667 Hz

(6667 - 667) / 7334 = 0.82

At a 35 GHz carrier, the same velocities give

Fmin = (2 × 100 × 35 × 10⁹) / (3.0 × 10⁸) = 23,333 Hz

Fmax = (2 × 1000 × 35 × 10⁹) / (3.0 × 10⁸) = 233,333 Hz

The transmitted and received signals may be written

ft1 = cos(ω1 t + φ1)

fr1 = cos[ω1 (t - τ) + φ1]
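The stated Doppler values can be reproduced directly from the relation above.

```python
C = 3.0e8  # propagation speed, m/s

def doppler_hz(v_mps: float, carrier_hz: float) -> float:
    """Two-way Doppler shift: 2 * Vt * Fr / c."""
    return 2.0 * v_mps * carrier_hz / C

print(round(doppler_hz(100.0, 1e9)))     # 667
print(round(doppler_hz(1000.0, 1e9)))    # 6667
print(round(doppler_hz(100.0, 35e9)))    # 23333
```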
[Figure: coherent frequency synthesizer block diagram. A reference oscillator f0 feeds multiplier chains (×N1 through ×N5) and mixers that generate the transmit frequency ft and the local oscillator frequencies fl1 and fl2; successive mixing in the receiver converts the antenna signal down to video.]
The power amplifier can be hazardous to personnel owing to the high voltages involved and the high radiation levels required. Failure rates are often high due to the
large component stress induced by generated heat within internal circuitry. Finally, periodic maintenance is complicated
by the weight of components and the fact that most components must be submerged in heat-dissipating liquids. All of
these disadvantages are balanced only by the single gain factor offered by this element.
In making system level trade-offs, the careful designer is
advised to select the largest antenna possible to reduce the
requirements on the power amplifier. With the possible exception of exotic phased arrays, antennas are, generally, less expensive, more reliable, less hazardous, and more maintainable than their power amplifier counterparts. In addition, the
larger antenna offers narrower beamwidths and increased angular accuracy.
Power output notwithstanding, radar frequency bandwidth
may be an important design consideration. The first requirement is the ability to pass, with high fidelity, a narrow pulse
spectrum. Pulse widths on the order of 100 ns having bandwidths of 10 MHz are not uncommon. The klystron vacuum
tube provides an instantaneous bandwidth capable of supporting most pulse applications. However, when rapidly
tuned frequency agility is required, amplifier bandwidth becomes very critical. In applications designed to thwart narrowband spot jamming, agility bandwidths on the order of
1000 MHz may be required. Tuning intervals of only a few
milliseconds may be dictated. These parameters are supported easily by the frequency synthesizer, but, for example,
a mechanically tuned klystron will not meet the tuning speed
desired. In that case, wideband devices such as the traveling
wave tube (TWT) must be used.
A potentially serious problem arises when broadband
power amplifiers are utilized. This is radiation of broadband
noise. The amplifier not only provides gain to the transmission signal but to thermal noise as well. When this energy
impinges on another friendly radar in the vicinity, the noise can
cause sensitivity degradation in the victim radar. Selectivity
in the victim radar can be used to reject the signal spectrum
of the interference but will not reject the broadband noise.
Table 3 depicts a typical scenario.
In the table, Φ0 is the inherent noise power density at the power amplifier input, Ga is the amplifier gain, F is its noise figure, Gt is transmit antenna gain, R is a typical separation between interferer and victim, Ar is the victim antenna capture area, Gsl is a typical victim sidelobe rejection, and Pr is the resultant noise density at the victim receiver. Suppose that the victim receiver noise figure is 3 dB. Then, the received noise density of -151 dBmW/Hz stands 20 dB above the victim's own -171 dBmW/Hz thermal noise density, a serious sensitivity degradation.

Table 3. Computation of Received Noise in a Typical Interference Scenario

Parameter    Value    Units
Φ0           -174     dBmW/Hz
Ga           +60      dB
F            +20      dB
Gt           +40      dB
1/(4π)       -11      dB
1/R²         -66      dBm⁻²
Ar           0        dBm²
Gsl          -20      dB
Pr           -151     dBmW/Hz
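The Table 3 budget is a straight decibel sum, with gains positive and the spreading and sidelobe-rejection terms negative (the sign convention used here is inferred from the scenario, not printed in the source).

```python
# Received broadband-noise density at the victim radar, in dB terms.
terms_db = {
    "inherent noise density": -174.0,   # dBmW/Hz at the amplifier input
    "amplifier gain Ga": 60.0,
    "amplifier noise figure F": 20.0,
    "transmit antenna gain Gt": 40.0,
    "1/(4*pi)": -11.0,
    "1/R**2 spreading": -66.0,          # R = 2 km separation
    "victim aperture Ar": 0.0,          # 1 m**2
    "sidelobe rejection Gsl": -20.0,
}
pr = sum(terms_db.values())
print(pr)   # -151.0 dBmW/Hz at the victim receiver
```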
[Figure 3. Antenna power gain (dB) versus off-axis angle (deg) for the uniform and cosine aperture current distributions.]
Antenna
Once the basic antenna area has been selected based on
power-aperture product requirements, the designer is free to
choose the horizontal and vertical dimensions. Longer dimensions produce narrower beamwidths in that plane. Thus, the
antenna might be long horizontally and short vertically. This
would produce a fan beam narrow in azimuth and broad in
elevation. However, certain applications might require a rotated fan beam with narrow elevation and wide azimuth
beamwidths. In some cases, a square or circular aperture
might be selected to give comparable beamwidths in both
planes. In all cases, the antenna gain should be constant to
satisfy range equation requirements.
The antenna radiation pattern is, by reciprocity, identical
to the reception pattern. The far-field electrical field intensity
in a given plane, analogous to voltage in an electronic circuit,
is determined by taking the Fourier transform of the current
distribution across the antenna face in that dimension. Thus,
E(θ) = ∫ from -a/2 to +a/2 of A(z) exp[j 2π (z/λ) sin θ] dz

where θ is the angle off boresight, a is the linear dimension of the antenna, A(z) is the current distribution, and λ is the wavelength. When the current distribution is constant, or uniform, the gain is maximized and the normalized pattern is given by

E(θ) = sin[π (a/λ) sin θ] / [π (a/λ) sin θ] = sin ψ / ψ
where ψ = π (a/λ) sin θ. For a cosine-tapered current distribution, the aperture is not illuminated fully and the main beam gain is reduced to 0.81 (-0.9 dB) relative to the uniformly illuminated case. This pattern is shown in Fig. 3 also.
The uniform distribution yields a first sidelobe level 13 dB below the mainlobe gain, while the cosine distribution gives a first sidelobe 23 dB down. The respective beamwidths are
1.53 and 2.07.
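The quoted uniform-distribution sidelobe level can be checked numerically from E = sin ψ / ψ by scanning just past the first null at ψ = π.

```python
import math

def pattern_db(psi: float) -> float:
    """Normalized sin(psi)/psi pattern level in dB."""
    e = 1.0 if psi == 0.0 else math.sin(psi) / psi
    return 20.0 * math.log10(abs(e) + 1e-300)   # guard against log(0) at nulls

# Search between the first null (psi = pi) and the second (psi = 2*pi).
first_sidelobe = max(pattern_db(math.pi * (1.0 + k / 1000.0))
                     for k in range(1, 1000))
print(round(first_sidelobe, 1))   # -13.3
```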
Skolnik (2) provides a definitive treatment of antenna design.
Microwave Receiver
The most important parameter associated with the microwave receiver is the noise figure. That parameter sets the basic sensitivity of the radar. The noise figure of any electronic
device is defined as the ratio of input SNR to output SNR. It
is a measure of the contribution of that device to overall system noise level.
If the first component in the receiver chain is the mixer
used to convert to first IF, then the loss in that mixer dictates
the noise figure. This loss could be as high as 10 dB even for
a well-designed mixer. If this noise figure is unacceptably
high, it may be reduced by incorporating a low noise radar
frequency amplifier ahead of the first mixer. The noise figure
of the combined amplifier-mixer is given by

F0 = F1 + (F2 - 1) / G1

where F1 and G1 are the noise factor and gain of the amplifier and F2 is the noise factor of the following mixer stage. With F1 = 2 (3 dB), G1 = 100 (20 dB), and F2 = 10 (10 dB),

F0 = 2 + (10 - 1) / 100 = 2.09 (3.2 dB)
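This two-stage cascade (the Friis noise formula) is easy to evaluate; the example values here assume a 3 dB, 20 dB gain low-noise amplifier ahead of a 10 dB noise-figure mixer stage.

```python
import math

def cascade_noise_factor(f1: float, g1: float, f2: float) -> float:
    """Noise factor of two cascaded stages (linear, not dB)."""
    return f1 + (f2 - 1.0) / g1

f0 = cascade_noise_factor(2.0, 100.0, 10.0)
print(round(f0, 2), round(10 * math.log10(f0), 1))   # 2.09 3.2
```

The high LNA gain makes the mixer's contribution nearly negligible, which is why the combined figure sits close to the LNA's own 3 dB.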
Figure 4. Circuit schematic of a circulator used for receiver protection. The circulator utilizes path length differences (λ/4 and λ/2 sections between the transmitter, antenna, and receiver ports) to cancel the transmission at the receiver port.
A corresponding budget for the interfering radar's transmitted signal received directly through the victim's sidelobes sums to the received interference power Pi:

Parameter    Value    Units
Pt           +70      dBmW
Gt           +40      dB
1/(4π)       -11      dB
1/R²         -86      dBm⁻²
Ar           0        dBm²
Gsl          -20      dB
Ls           -70      dB
Pi           -77      dBmW
742
SEARCH RADAR
c
s + c
where c is a radian corner frequency defining the filter bandwidth. The magnitude squared response of this filter is
H( f ) =
f2
f c2
+ f c2
1
2
f2
fc
f c2
df =
2
+ fc
2
[1 exp(2 f c )]2
( f c /2)
Range ambiguity may be resolved by transmitting at two repetition intervals, T1 and T2. The true delay T satisfies

T = τ1 + N1 T1

T = τ2 + N2 T2

where τ1 and τ2 are the measured ambiguous delays and N1 and N2 are unknown integers. Usually, determination of the two integers, N1 and N2, is accomplished by iteration or trial and error. Either integer may be used to compute true range.

Because the combined pattern of two frequencies will repeat at some long range, the resolution procedure, in reality, only resolves the ambiguity to within some other ambiguity. The pattern repeats, in general, at the product of the two repetition intervals when those intervals are expressed as integral multiples of the sampling interval, unless there are common factors involved. As an example, assume that T1 = 29.0 microseconds (µs) and that T2 = 37.0 µs. Furthermore, assume a sampling interval of Ts = 0.2 µs. Then T1 = 145 Ts and T2 = 185 Ts. The product is P = 145 × 185 × 0.2 = 5365 µs. However, the common factor of five reduces the pattern repetition to 1073 µs (161 km). A better choice might be T1 = 29.8 µs and T2 = 36.2 µs. Then, the product is P = 149 × 181 × 0.2 = 5393.8 µs (809 km) and, since the integers are prime, that is the maximum resolved ambiguity.
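The trial-and-error search for N1 and N2 can be sketched directly. Times are in microseconds, and the matching tolerance plays the role of the sampling interval.

```python
def resolve_range_time(tau1, T1, tau2, T2, tol=0.1, n_max=200):
    """Find the true delay t = tau1 + n1*T1 that also satisfies
    t = tau2 + n2*T2 to within `tol`, by iterating over n1."""
    for n1 in range(n_max):
        t = tau1 + n1 * T1
        n2 = round((t - tau2) / T2)
        if n2 >= 0 and abs(t - (tau2 + n2 * T2)) <= tol:
            return t
    return None   # no consistent pair found within n_max intervals

# A target at 500 us true delay observed with T1 = 29.8 us and T2 = 36.2 us:
T1, T2 = 29.8, 36.2
true_t = 500.0
tau1, tau2 = true_t % T1, true_t % T2
print(round(resolve_range_time(tau1, T1, tau2, T2), 1))   # 500.0
```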
Dynamic Range
The ratio of maximum to minimum signal levels that may be
processed linearly without concern for harmonic production
or small signal suppression is termed the dynamic range of
the system. This parameter is set either by the characteristics
of the microwave and IF amplifiers or those of the analog-to-digital converter (ADC).
For low-level input signals, electronic amplifiers are linear
and the output level is proportional to the input level. However, as the input level increases, the output begins to approach a constant independent of input level. This condition
is termed saturation. When a high-level clutter signal and a
low-level target signal are processed simultaneously through
a saturated amplifier, the target signal is suppressed. Although the receiver noise is suppressed also, the FM noise
carried by clutter is not affected. The overall result is a decrease in target SNR.
The ADC usually determines the system dynamic range.
The theoretical noise level due to quantization in the ADC is 1/√12 of one least significant bit (LSB). This is equivalent to a power level of -11 dB relative to the LSB. However, practical ADC devices exhibit noise levels well above theoretical. Typically, a noise level of -1 dB relative to the LSB can be expected. The maximum signal that may be processed linearly is dependent upon the number of bits available at the ADC output. Consider a 12-bit device. Because one bit must be assigned to the sign of the output, the maximum peak signal is 20 log(2¹¹) = 66 dB relative to the LSB. The corresponding rms level is 63 dB. Thus, the apparent dynamic range is 64 dB.
The ADC noise will add to the input receiver noise and
reduce sensitivity. This reduction may be minimized by setting the receiver noise somewhat higher than that of the
ADC. For example, a receiver noise level 6 dB above ADC
noise yields a 1 dB degradation. Now, the dynamic range is
reduced to 58 dB.
Compounding the problem are fluctuations in receiver gain
over ambient temperature. Should that gain drift downward
by 6 dB, the sensitivity degradation would be 3 dB. This unwanted circumstance is avoided by elevating the nominal receiver noise level an additional 6 dB. If the gain should increase by 6 dB and if the IF limit level were set exactly at the
maximum ADC level, then the higher level would overload
the converter. This is avoided by setting the limit level 6 dB
below that of the ADC. The overall result of these adjustments is a 12 dB decrease in dynamic range to a value of
46 dB.
Additional dynamic range may be obtained by increasing
the number of bits available from the ADC. Each added bit
represents an increase of 6 dB in the dynamic range. However, adding bits reduces processing speed. Currently, devices
are available that output 14 bits at rates of 10 MHz. In the
near future, it is expected that 16 bits or more may be
achieved at rates up to 30 MHz or higher.
It is possible to monitor receiver noise level and, using
feedback, to control the receiver gain. This would enable the
receiver noise and IF limit level to be maintained constant
relative to the ADC parameters. The result would be a 12 dB
increase in dynamic range. Use of this technique with a 14
bit ADC enables a dynamic range of 70 dB.
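The dynamic range budget developed above can be collected into a small sketch. The −1 dB ADC noise level and the three 6 dB margins are the values assumed in the text; the function names are illustrative.

```python
import math

def adc_dynamic_range_db(bits, gain_feedback=False):
    """Dynamic range budget for an ADC-limited receiver (sketch).

    Assumes: one sign bit, practical ADC noise about -1 dB relative to
    the LSB, a 6 dB receiver-noise margin above the ADC noise, and
    (without gain feedback) 6 dB each for gain drift and IF limit margin."""
    peak_db = 20 * math.log10(2 ** (bits - 1))  # maximum peak signal
    rms_db = peak_db - 3                        # rms of a full-scale sinusoid
    apparent = rms_db + 1                       # ADC noise sits at -1 dB re LSB
    budget = apparent - 6                       # receiver noise 6 dB above ADC
    if not gain_feedback:
        budget -= 6 + 6                         # drift and limit margins
    return budget

# adc_dynamic_range_db(12) -> about 46 dB
# adc_dynamic_range_db(14, gain_feedback=True) -> about 70 dB
```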
Target Detection
A primary function of the search radar is to provide detection
of targets. Usually, this is achieved by comparing received
signals to a preset threshold. For this function, the operative
measure of performance is probability of detection. This probability depends upon input SNR and allowable false alarm
rates.
Detection theory is couched in Rician statistics. This theory treats the probability that the magnitude of a sinusoidal signal embedded in Gaussian noise will exceed a given threshold. [See Schwartz (4) for a detailed derivation.]
The correct threshold is determined by electing an allowable probability of false alarm, Pfa. That probability is given by

Pfa = exp(−Vt²/2σ²)

where Vt is the threshold voltage and σ is the root mean square (rms) noise voltage. It is usual to set this threshold based on a measured value of the average noise magnitude. That average is

m1 = σ√(π/2)

The threshold is set at a constant, k, times the measured average. The proper selection of the constant is

k = [(4/π) ln(1/Pfa)]^(1/2)
SEARCH RADAR
The probability of detection is then

Pd = ∫ from Vt to ∞ of (v/σ²) exp[−(v² + 2s²σ²)/2σ²] I0(√2 sv/σ) dv

Here, s² is the target SNR and I0 is the modified Bessel function of the first kind and zero order. The integral cannot be evaluated in closed form. However, it is solved easily by numerical integration on a digital computer. This evaluation is aided by the following power series expansion of the modified Bessel function:

I0(z) = Σ from n = 0 to ∞ of z^(2n)/[2^(2n)(n!)²]
[Figure 5. Detection performance as a function of signal-to-noise ratio: probability of detection (0.0 to 1.0) versus SNR (6 dB to 15 dB), for two false alarm rates (u = 1.0E−5 and u = 1.0E−6). Probability of detection increases as SNR increases but decreases as false alarm rate is decreased.]
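The numerical evaluation described above can be sketched directly: the Bessel function is summed from its power series (with a stable term recurrence) and the Rician integral is evaluated by the midpoint rule. The integration limits and step count are illustrative assumptions, not values from the text.

```python
import math

def bessel_i0(z):
    """I0(z) via its power series, summed with a term recurrence so that
    large arguments neither overflow nor truncate early."""
    term, total, n = 1.0, 1.0, 0
    while term > 1e-12 * total:
        n += 1
        term *= (z * z / 4.0) / (n * n)   # t(n+1) = t(n) * (z^2/4) / (n+1)^2
        total += term
    return total

def detection_probability(snr_db, pfa, steps=4000):
    """Midpoint-rule integration of the Rician detection integral (sketch).
    Voltages are expressed in units of the rms noise, sigma = 1."""
    s2 = 10.0 ** (snr_db / 10.0)             # target SNR, linear
    a = math.sqrt(2.0 * s2)                  # signal amplitude / sigma
    vt = math.sqrt(-2.0 * math.log(pfa))     # threshold from Pfa = exp(-vt^2/2)
    upper = vt + a + 10.0                    # practical upper limit
    dv = (upper - vt) / steps
    pd = 0.0
    for i in range(steps):
        v = vt + (i + 0.5) * dv
        pd += v * math.exp(-(v * v + a * a) / 2.0) * bessel_i0(a * v) * dv
    return pd
```

At vanishing SNR the result collapses to the false alarm probability itself, which is a convenient sanity check on the quadrature.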
In split-gate range estimation, the range estimate is the magnitude-weighted centroid of the two gate samples, (M1r1 + M2r2)/(M1 + M2). The rms range error is
σr = kr τr/√S

where kr is a constant depending upon implementation parameters, τr is the pulse width measured in units of range, and S is the SNR. At higher SNR, the error may be reduced
to a fraction of the pulse width. Note that in systems using
pulse integration, split gating may be used only if sums of
pulse magnitudes are obtained or true coherent integration is
employed. It is of no use with binary integration.
In simple applications, target bearing is determined by
noting the angular orientation of the antenna at the time of
detection. Thus, the accuracy can be no better than the azimuth beamwidth of the antenna. When improved accuracy
is required, there are a number of beam splitting techniques
available. A very sophisticated approach takes advantage of
the known shape of the antenna pattern to form a curve fit of
the received data. Solution of the resulting set of equations
yields a very exact measure of azimuth. In all beam splitting
techniques, the rms error will be given by
σa = ka θa/√S

where ka is a constant related to antenna parameters and θa is the azimuth beamwidth. Higher values of SNR yield errors
of a fraction of beamwidth. Skolnik (6) derived theoretical
boundaries for measurement accuracy.
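The pattern-fitting approach to beam splitting can be sketched with three-point parabolic interpolation: a Gaussian beam shape is an exact parabola in decibels, so the fitted vertex recovers the azimuth. The scan step, beamwidth, and test center are illustrative assumptions.

```python
def parabolic_beam_split(angles_deg, amps_db):
    """Beam splitting by three-point parabolic interpolation around the
    strongest sample. Assumes a uniform scan step and a peak that falls
    on an interior sample."""
    i = max(range(1, len(amps_db) - 1), key=lambda k: amps_db[k])
    y0, y1, y2 = amps_db[i - 1], amps_db[i], amps_db[i + 1]
    step = angles_deg[1] - angles_deg[0]
    # vertex offset of the parabola through the three points, in samples
    offset = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return angles_deg[i] + offset * step

# Hypothetical scan: a 3 deg (two-way, in dB) beam centred on 1.7 deg
angles = [0.5 * k for k in range(8)]
amps = [-12.0 * ((a - 1.7) / 3.0) ** 2 for a in angles]
# parabolic_beam_split(angles, amps) -> 1.7
```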
Except for some advanced designs where the antenna is
scanned past the target in elevation as well as azimuth, elevation beam splitting is not possible. Therefore, the error remains the elevation beamwidth and this value may be quite
large. This error can have a profound impact on scan-to-scan
correlation algorithms.
In some applications, it is required that a measure of target cross section be obtained. This parameter might be used,
for example, to distinguish bird and insect returns from aircraft. This measurement depends upon a priori knowledge of
all the parameters comprising the radar range equation.
Since these parameters may fluctuate, especially over time, the accuracy of this measurement probably is limited to approximately ±3 dB.
Target Acquisition
In automatic systems, target acquisition is a requirement.
Through this process, a target file is maintained for each detected target. This file contains a record of target range and
bearing and, possibly, elevation, velocity, and cross section.
Each file is updated after each antenna scan with the latest
measurement data.
File updates are accomplished by performing scan-to-scan
correlation. This process is effected by associating the latest
detection data with earlier data using the concept of correlation windows. For each target, a window in space is established into which it may be expected that the next measured
parameters from that target may fall. If a given set of parameters falls outside all established windows, then a new target
is declared and a new file initiated. Correlation windows are,
generally, rather large after initial detection but are allowed
to decrease in size as the target history matures.
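The window mechanism described above can be sketched as follows. The initial window sizes, the shrink factor, and the floor values are illustrative assumptions; a real tracker would derive them from measurement statistics.

```python
class TargetFile:
    """One target history record, as maintained per detected target."""
    def __init__(self, rng, brg):
        self.rng, self.brg = rng, brg        # latest range (m), bearing (deg)
        self.window = (2000.0, 5.0)          # initial (range, bearing) window
        self.hits = 1

    def contains(self, rng, brg):
        wr, wb = self.window
        return abs(rng - self.rng) <= wr and abs(brg - self.brg) <= wb

    def update(self, rng, brg):
        self.rng, self.brg = rng, brg
        self.hits += 1
        wr, wb = self.window                 # shrink as the history matures
        self.window = (max(wr * 0.5, 250.0), max(wb * 0.5, 0.6))

def correlate(files, detections):
    """Associate one scan's detections with existing files; detections
    falling outside all windows start new files."""
    for rng, brg in detections:
        for f in files:
            if f.contains(rng, brg):
                f.update(rng, brg)
                break
        else:
            files.append(TargetFile(rng, brg))   # new target declared
    return files
```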
An example of the use of windows may be helpful. Assume
a two-dimensional radar having a single elevation beam that
[Figure 6. Single-stage moving target indicator using both feed-forward and feedback; the block diagram shows input fi, output f0, multiplier coefficients A1 and B1, and a one-pulse delay (1/z). Frequency response may be shaped by varying the multiplier coefficients.]
[Figure 7. Frequency response (dB) of a single-stage moving target indicator, with and without feedback, versus frequency relative to PRF (0.0 to 1.0). Note the widening of the response when feedback is used.]

Electronic Countermeasures
In military applications, the use of electronic countermeasures (ECM) by the enemy is practically a given.
A prevalent form of ECM is broadband noise jamming. If the
enemy can muster sufficient power levels and place this noise
sufficiently close to the victim radar, then it can always defeat
any radar. In those cases, the radar designer can do little to
prevent degradation directly. However, there are scenarios in
which jammer identification and/or frequency agility may be
used to mitigate this degradation.
The total noise power received from a jammer is given by

Pr = Pj Gj Br Ar Gsl/(4π Rj²)
length. This is not a total panacea. Selection of PRF is a delicate process involving questions of blind ranges, blind velocities, and range ambiguity resolution. In a given application,
there may not be a sufficient band of usable PRFs to serve
multiple radars. Moreover, diversity may even compound the
problem. If an interference pulse is sufficiently strong, it may
be detectable without integration. This pulse will migrate
over many range cells and produce multiple detections. This
could result in a swarm of false targets that might overcome
the correlation capability of the system computer.
Neither of the techniques discussed above is applicable to
the class involving radars of different types. In this case, the
only viable defense appears to be asynchronous pulse detection (APD). In the APD technique, successive pulses in a
given range cell are compared. If the later pulse is much
larger than the earlier pulse, it is edited from the data stream
and replaced by either the earlier sample or a random number. Of course, this approach works only when two different
PRFs are involved. Asynchronous pulse detection can be very
effective in minimizing RFI. However, depending upon the
thresholding scheme employed, it can cause degradation in
the detectability of real targets.
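The APD editing rule can be sketched in a few lines. The editing ratio is an illustrative assumption; the text does not state a value, and the variant that substitutes a random number is omitted here.

```python
def apd_edit(earlier, later, ratio=10.0):
    """Asynchronous pulse detection (sketch): for each range cell, if the
    later pulse greatly exceeds the earlier one, assume asynchronous
    interference and replace it with the earlier sample."""
    return [e if l > ratio * e else l for e, l in zip(earlier, later)]

# A strong asynchronous interference spike in cell 2 is edited out:
# apd_edit([3.0, 2.8, 3.1, 2.9], [3.2, 2.7, 95.0, 3.0])
#   -> [3.2, 2.7, 3.1, 3.0]
```

As the text notes, too aggressive a ratio would also edit genuine target fluctuations, degrading detectability.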
ADVANCED DESIGN
In this section, topics are discussed that represent potential performance improvements to future designs. None of these, however, is a new innovation. The concepts have been known
and understood for many years. As technology evolves,
though, these techniques become more and more cost effective
and attractive to the designer of advanced systems.
Phased Array Antennas
Phased array antennas are planar arrays of waveguide slots.
Variable phase shifters are used to drive a group of slots and,
thus, to effect electronic steering of the antenna beam. This
technique can eliminate the necessity for bulky mechanical
devices such as motors and gimbaled platforms.
The basic theory of phased arrays is described best by considering the simplest case, which is a two-element array. The
radiated fields from two adjacent sources combine in space to
form a radiation pattern. When a phase shift is applied to one
element, the directivity of that pattern may be altered. In this
simple case, it may be shown that the relative gain of the
array is
G = [2 + 2 cos(π sin θ)]/4
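The two-element relative gain can be evaluated numerically. The generalization to arbitrary element spacing and an explicit steering phase term is an assumption added for illustration; with the phase term chosen to cancel the path difference, the peak moves to the steered angle.

```python
import math

def two_element_gain(theta_deg, spacing_wavelengths=0.5, phase_shift_rad=0.0):
    """Relative gain of a two-element array of isotropic radiators.
    theta is measured from broadside; phase_shift_rad is the phase
    applied to one element (sketch, not the text's exact notation)."""
    psi = (2.0 * math.pi * spacing_wavelengths
           * math.sin(math.radians(theta_deg)) + phase_shift_rad)
    return (2.0 + 2.0 * math.cos(psi)) / 4.0

# Broadside peak: two_element_gain(0.0) -> 1.0
# Half-wave spacing null at endfire: two_element_gain(90.0) -> 0.0
# Phase shift of -pi/2 steers the peak to 30 degrees
```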
The pattern of a larger array may be shown to be the product of the pattern from each element and the array
factor, which is determined by the element spacing. The optimum spacing is one-half wavelength. At wider spacing, the
pattern begins to develop unwanted sidelobes called grating
lobes. These can be as large as the main lobe and may cause
confusion or interference. In addition, coupling between elements can alter the actual antenna pattern. Phased array design is a complex process.
When the array beam is steered off-axis, the beam will
broaden and the gain will decrease. In general, this effect is
in proportion to the cosine of the steered angle. For example,
a steering angle of 45° may result in a loss of 1.5 dB relative
to on-axis gain.
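The cosine scan-loss rule quoted above is easy to verify; at 45° the loss comes out to about 1.5 dB.

```python
import math

def scan_loss_db(steer_deg):
    """Off-axis gain loss (dB) for a gain proportional to the cosine of
    the steered angle, per the rule quoted in the text."""
    return -10.0 * math.log10(math.cos(math.radians(steer_deg)))

# scan_loss_db(45) -> about 1.5 dB; scan_loss_db(0) -> 0 dB
```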
When the element spacing is one-half wavelength, the
number of elements and required phase shifters can become
quite large. For example, consider a design at X band where
one wavelength is 0.03 m. An antenna 1 m on a side would
require 4356 phase shifters to enable steering in both planes.
The sheer cost and weight of this system might be prohibitive.
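The element-count arithmetic can be reproduced directly from the half-wavelength spacing rule; the truncation to a whole number of elements per side is an implementation assumption.

```python
def phase_shifter_count(side_m, wavelength_m, both_planes=True):
    """Phase shifter count for a square aperture with half-wavelength
    element spacing; one shifter per element for two-plane steering,
    one per row for single-plane steering."""
    per_side = int(side_m / (wavelength_m / 2.0))
    return per_side * per_side if both_planes else per_side

# X band, 1 m aperture, 0.03 m wavelength:
# phase_shifter_count(1.0, 0.03) -> 4356 (two-plane steering)
# phase_shifter_count(1.0, 0.03, both_planes=False) -> 66 (elevation only)
```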
A good compromise design is one in which the antenna
array is rotated mechanically in azimuth while being steered
electronically in elevation. The example given previously
would then require only approximately 66 phase shifters to
provide elevation-only steering. This approach also allows for
raster scanning. Rather than holding the elevation position
constant over a full 360° rotation, the beam could be directed
to visit several elevation positions during one scan. This not
only reduces the time required to illuminate a given volume
but, since the beam traverses the target in both dimensions,
beam splitting in elevation and azimuth could be implemented.
An ultimate phased array design is the conformal array.
Here, the array is designed as an integral part of an existing
geometry. That geometry might be the fuselage or wing of an
aircraft or the hull of a ship. The ideal conformal array would
be a sphere or hemisphere. This design could eliminate off-axis steering loss, because the beam would always be perpendicular to the array surface.
Another advantage of phased arrays is their capability for
instant target verification. An initial detection could be followed by freezing the beam in the direction of the target. Then, a longer dwell could be chosen to both reduce
measurement error and increase confidence level. The time
savings relative to scan-to-scan verification could be significant.
A final application of phased arrays is in platform motion
compensation. When the radar is carried by an aircraft or
ship, it is desirable that the beam position be maintained relative to earth coordinates independent of platform motion.
This is implemented easily using beam steering and its use
eliminates the necessity for complex motor-gimbal apparatus.
F = Σ from n = 0 to N − 1 of A[cos(2πf nTr) + j sin(2πf nTr)][cos(2πfc nTr) − j sin(2πfc nTr)]

where A is the peak amplitude of the input signal, f is the frequency of the input signal, fc is the center frequency of the filter, and Tr is the pulse repetition interval. N is the number of pulses integrated. Note that the input and coefficient sequences are expressed as complex quantities. The real part of the input is taken from the in-phase video channel, whereas the imaginary part is taken from the quadrature. The output is also complex valued.

The real, or in-phase, output may be shown to be

Fi = Σ from n = 0 to N − 1 of A cos[2π(f − fc)nTr]

and the quadrature output is

Fq = Σ from n = 0 to N − 1 of A sin[2π(f − fc)nTr]

When the input signal frequency coincides with the filter center frequency, the outputs are

Fi = NA

and

Fq = 0

The square of the output magnitude, which is output power, is, then,

M² = Fi² + Fq² = N²A²

Active Arrays

The next generation of search radar may well use the concept of the active array. This is a natural extension of the phased array. In that design, each array element is provided with its own transmitter and receiver module. This solid-state module contains a low-power transmission amplifier, receiver protection, a low-noise RF preamplifier, filtering, and frequency conversion. It also contains a digitally controlled phase shifter for beam control. On transmission, all modules are driven by
For a noise input with in-phase samples xn and quadrature samples yn, each of variance σ², the in-phase output is

Fi = Σ from n = 0 to N − 1 of [xn cos(2πfc nTr) + yn sin(2πfc nTr)]

The variance of this output is

V(Fi) = Σ from n = 0 to N − 1 of σ²[cos²(2πfc nTr) + sin²(2πfc nTr)] = Nσ²

and, similarly,

V(Fq) = Nσ²

Then, the output SNR is

SNR0 = N²A²/(2Nσ²) = N(A²/2σ²) = N · SNRi

[Figure 8. Normalized DFT filter response (dB, 0 to −30) versus frequency normalization factor k (0.0 to 1.0).]
where SNRi is the input, per-pulse SNR. Thus, the DFT provides an SNR gain equal to the length of integration. For example, if that length were 100, the gain would be a very impressive 20 dB. No other known technique yields a higher SNR gain than the DFT.
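The N-fold gain at the filter center frequency can be demonstrated numerically with a single DFT bin. The pulse parameters below are illustrative assumptions chosen so the target Doppler matches the filter exactly.

```python
import cmath
import math

def dft_bin(samples, fc, tr):
    """Single DFT filter output: multiply the complex pulse samples by
    the coefficient sequence exp(-j 2 pi fc n Tr) and sum over N pulses."""
    return sum(x * cmath.exp(-2j * math.pi * fc * n * tr)
               for n, x in enumerate(samples))

# Illustrative values: 100 pulses of unit amplitude at Doppler fc
N, A, fc, tr = 100, 1.0, 1000.0, 1e-3
pulses = [A * cmath.exp(2j * math.pi * fc * n * tr) for n in range(N)]
# abs(dft_bin(pulses, fc, tr)) -> N*A = 100, i.e. 20 dB of SNR gain
```

Evaluating the same bin at an off-center frequency midway between filter responses shows the coherent sum collapsing toward zero.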
The normalized frequency response of the DFT filter may be shown to be

Gnorm = sin(Nπδ)/[N sin(πδ)]

where

δ = f/fr
f is frequency relative to filter center frequency and fr is the PRF. This response is plotted in Fig. 8 for the case N = 10. In this plot, a frequency normalization factor, k, is used. When k = 0, f = 0, and when k = 1, f = fr.
It will be noted that the response exhibits sidelobes. The
first sidelobe is 13 dB below the main lobe response. Also,
note that the response repeats at integral multiples of the
PRF.
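The quoted sidelobe level follows directly from the response formula; evaluating it near δ = 1.5/N for N = 10 gives roughly −13 dB. The sidelobe location is an approximation, not a value from the text.

```python
import math

def gnorm_db(delta, n):
    """Normalized DFT filter response in dB at delta = f/fr, for an
    N-pulse filter; valid for delta away from exact multiples of 1."""
    num = math.sin(n * math.pi * delta)
    den = n * math.sin(math.pi * delta)
    return 20.0 * math.log10(abs(num / den))

# First sidelobe for N = 10 lies near delta = 1.5/N:
# gnorm_db(0.15, 10) -> about -13 dB
```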
If the relatively high sidelobes are not sufficient to provide desired clutter attenuation, then window functions may be applied to suppress these sidelobes. A window function is a real-valued, time-varying sequence applied to the input data for all filters. With a window, the DFT response is given by

F = Σ from n = 0 to N − 1 of wn xn [cos(2πfc nTr) − j sin(2πfc nTr)]

where wn is the window sequence and xn is the complex input. The bandwidth of each filter is

B = fr/N = 1/Td

where Td is the total dwell time, NTr.
DUSTIN J. WILSON
Hughes Missile Systems Company
(retired)