
Why it is important to understand propagation

The radio spectrum has a large range of applications in many operational environments. The frequency
bands cover many orders of magnitude in frequency. Radio systems require transmitting sources and
receivers to provide wireless communications links. Understanding the radio channel between the
transmitter and the receiver is critical in designing any radio system.

Service providers, for example broadcasters, private radio users such as taxi firms, mobile phone network
operators, the MOD and a host of others, all want to know how to get radio coverage for their particular
application. When planning a service they need to know where to put their masts, how many masts will be
needed, which antennas should be used, how much transmitter power will be required and how reliable
their radio links will be.

Regulators who manage the radio spectrum are interested in making the best use of this limited and
valuable resource. They need to regulate so that as many users as possible can share the radio spectrum, and so
are interested in predicting interference between users: where do signals go beyond where they are
intended to? Also, what is this or that bit of spectrum worth and how much benefit can the taxpayer gain
from selling the rights?

It is frequently forgotten how important propagation is to the overall performance of a communications
network. Radiowave propagation studies allow us to estimate and evaluate the radio channel and so design
systems that work as well as possible. Some people, especially the regulators, may prefer propagation to be
simple, but in practice propagation is fairly complicated, so we will start with the fundamentals…

What is free space?

Free space in this context means space with nothing at all in it. It does not exist in the known universe, but
interstellar space is a good approximation. We start at this level as there is nothing there to make the maths
even more complicated than it already is.

The important features of free space:

• Uniform everywhere
• Contains no electrical charge
• Carries no current
• Infinite extent in all dimensions
OK - so now we have to deal with Maxwell's equations, or at least just mention them. You can skip this bit
if you are one of those people with a pathological fear of algebra; you will miss out a little, but that is just
the way it is.

Maxwell's equations

Electromagnetic theory predicts that radio waves propagate in free space; they are a solution to
Maxwell's equations.

James Clerk Maxwell (1831-1879) was an interesting character. His first paper to the Edinburgh Royal
Society, "On the Description of Oval Curves, and those having a plurality of Foci", was written when he was
only 14 and had to be read out for him because he was too young. It was based on work he had done using
twine, pins and a pencil. Besides his famous work on electromagnetic theory, he was a leading contributor
to the kinetic theory of gases and to the theory of colour vision. He correctly discovered how we perceive
colour and took the first colour photograph, an image of a tartan ribbon, in 1861, using three coloured
filters, red, green and blue, to capture and later project three copies of the image.

In free space Maxwell's equations can be written as:

∇⋅E = ρ/ε0
∇⋅H = 0
∇×E = -μ0 ∂H/∂t
∇×H = J + ε0 ∂E/∂t

Where:

E = Electric vector field
H = Magnetic vector field
ρ = charge enclosed = 0 in free space
J = current density = 0 in free space

These are really not that bad. Firstly, the E and H fields:

An electric field E represents the direction a charge will move.

A magnetic field H represents the direction a magnet would align.

Also Note
You can not create nor destroy charge
There are no magnetic monopoles

The Divergence operator: ∇⋅E, ∇⋅H

In physical terms, the divergence of a three dimensional vector field is the extent to which the vector field
flow behaves like a source or a sink at a given point.

∇⋅E = ρ/ε0 tells us that lines of electric flux "diverge" away from a region containing electrical charge, in
proportion to the charge it contains. Electric field lines do not form closed loops; they begin and end on
charge.

∇⋅H = 0 tells us that lines of magnetic field never diverge from anything, and so must form closed loops
(because there is no "magnetic charge").

Now for the Curl operator: ∇×E, ∇×H

This is the tendency of the vector field to loop or rotate in space. The curl shows a vector field's rate of
rotation: the direction of the axis of rotation and the magnitude of the rotation.

∇×E = -μ0 ∂H/∂t tells us that electric field lines which form closed loops encircle (curl) a changing magnetic field.

∇×H = J + ε0 ∂E/∂t tells us that magnetic field lines H form loops which encircle both the conduction current
density J and the "displacement current density" ε0 dE/dt generated by time-varying electric fields. Current
density J flowing out of a region ("diverging") must result in a decrease of charge within the region. In free
space there is no charge and so J = 0.

Maxwell's equations show that a looping E field will give rise to a changing H field, and a looping H
field will give rise to a changing E field. This is very important: an electromagnetic wave is a solution to
Maxwell's equations. Fortunately, we can get a long way without solving Maxwell's equations ourselves.

The plane wave equation:

A diagram of a plane wave is shown below:

The solution to Maxwell's equations for a plane wave is:

E = E0 cos (φ)x

and

H = H0 cos (φ)y

where x and y are the unit vectors in their respective directions - not quite the correct notation but this is
HTML.

We are not going to do the maths for this in any detail, but if we substitute the above functions of E and H
into Maxwell's equations we can show that they work. E and H are orthogonal, as shown by the unit
directional vectors, and also have a sinusoidal variation in amplitude. By convention the polarisation is
defined by the direction of the E field. The wavelength is the distance travelled in one cycle of E and H.
The flow of energy in such waves was described by John Henry Poynting in 1884. To put it exceptionally
crudely, a looping electric field causes a looping magnetic field which causes a looping electric field etc.,
and effectively the electromagnetic wave propagates like a perpetual game of leapfrog.

John Henry Poynting (1852–1914) was a physicist and a professor of physics at Mason
Science College, which is now part of the University of Birmingham.
We can derive some relationships from the plane wave solution that will be useful later. The field strength
can be expressed as:

E = E0 cos (ωt - kz)x

H = H0 cos (ωt - kz)y

Where ω = 2πf (the angular frequency)

k = the wave number, the rate of change of phase with distance:

k = 2π/λ

The wave number k is frequently found in EM theory; it describes the variation with distance along the
propagation axis, whereas ωt is the variation with time. The ratio of E to H is the impedance of free space
(by Ohm's law):

Z0 = E0/H0 = √(μ0/ε0) ≈ 377 Ohms

The impedance of free space, which is equal to 377 Ohms, is an important figure as it tells us the
relationship between the E and H field strengths.

The Poynting vector

The Poynting vector, usually written as S, is the direction in which energy travels in an EM wave. We will
not go into the vector calculus, but it is given by taking the cross product of the vector field E and the
complex conjugate of the vector field H.

S = E × H*

This represents a power flow along the z axis. The average in watts per square metre is given by:

Sav = ½E0H0z Watts/m2


The phase velocity is the rate along the z axis at which a point of constant phase moves:

v = ω/k = 1/√(μ0ε0) = c

which is the speed of light, approximately 3 × 10⁸ m/s in free space,

and the wavelength is the velocity divided by the frequency: λ = c/f.

The Exponential Notation

It is also possible to express E and H in exponential notation:

E = Re{E0 e^(j(ωt - kz))} x

Where Re{} means take the real part. This comes from the equivalence e^(jx) = cos(x) + j sin(x).

All this is very useful because in this notation differentiation is easy: the differential of e^(jx) with respect
to x is je^(jx). It basically makes the maths of the plane wave equation easier because of the relative ease of
taking the differentials. From the point of view in question, i.e. Maxwell's equations, differentiation with
respect to time becomes multiplication by jω, and differentiation with respect to z becomes multiplication
by -jk. We have effectively got rid of the tedium of finding the differentials, which makes a big difference.

Losses in materials

All materials that are not free space are lossy to some extent or another. The amplitudes of the E and H
fields decay exponentially with distance along the direction of propagation:
We can write this mathematically as:

where Lz represents the loss with distance. The rate of decay here is important – it is exponential, so a
wave propagating through a Lossy medium can be described as having a specific attenuation in terms of
decibels per metre.

E.g. a length of coaxial cable or waveguide will have a loss specified in dB per metre. A one metre length
might be specified to lose 1 dB, which is a loss of roughly 21% of the power. A 2 metre length will not lose
twice that (42%); it will lose 2 dB, which is roughly 37%. Obvious, but important.
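As a quick sanity check on those percentages, here is a minimal Python sketch (the helper name is my own) converting a loss in dB to the fraction of power lost:

```python
def fraction_of_power_lost(loss_db):
    """Fraction of the input power lost for a given loss in dB."""
    return 1.0 - 10 ** (-loss_db / 10.0)

for loss_db in (1.0, 2.0, 3.0, 10.0):
    print(f"{loss_db:4.1f} dB -> {fraction_of_power_lost(loss_db) * 100:4.1f}% of the power lost")
# 1 dB loses ~21%, 2 dB ~37%: the dB values add, the percentages do not.
```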

Refractive Index

We are used to learning about refractive index in optics. Remember the wave number k = 2π/λ = ω/v.

We can see this depends on the material because the wavelength λ depends on the speed of light in the
material. We could actually write

k = nko where n is the refractive index and ko is the value of k in free space. The refractive index is the
ratio of the speed of light in free space to the speed of light in the medium, i.e. n = c/v.

Using the exponential notation:

So...
This notation is useful as we can often forget about the harmonic oscillations (assume they carry on
minding their own business, confident that they will still be there when we need them) while concentrating
our algebraic skills on what happens with distance.

Now on to some applications.

Free Space Loss

We all now know that:

• A radio wave launched from a point in any given direction will propagate outwards from that
point at the speed of light
• The energy will travel in a straight line, as there is nothing to prevent it doing so
• It will do this forever.

So how can we talk about a loss?

Actually, this last "forever" statement is not quite true; the energy is carried by photons that do eventually
decay, but as the half life of a photon is of the order of 6.5 billion years, we don't need to worry about it.

We can’t really talk about free space loss, but we do anyway…

What it means is the ratio of the received power to the transmitted power;

this is not really a loss at all, energy is conserved, it is just that usually not all of it is captured at the receiver.

We can easily predict the free space loss from the well known equation:

Free Space Loss = 32.45 + 20log(d) + 20log(f) dB (where d is in km and f is in MHz)

- it is important to understand where this comes from. Imagine a light bulb in free space: light spreads out
more or less equally in all directions. The wavefront expands as a sphere and the energy flux radiates
outwards at the speed of light.

The Power Flux Density (PFD), the power per unit area, is the power from the bulb divided by the area of
this sphere. The area of a sphere is 4πr², so:

Ppfd = Pt / 4πr² W/m²

Field strength ∝ √P (from Ohm's law: P = V²/R, Erms² = P Z0)

This demonstrates that in free space the field strength is inversely proportional to range, and hence the power,
which is proportional to the square of the field strength, is inversely proportional to the square of the range.
(This is a simplistic explanation and possibly stating the obvious, but it has been included just in case
the inverse square law is not obvious to everyone.)

An Example

Voyager 1 is a space probe which is now 15 billion km from Earth (USA billions). How strong are its
transmissions at the Earth?

The transmitter power is about 13 watts at 8415MHz. The 3.7m antenna has a gain of 48 dB which makes
this an effective power in our direction of 800kW.
This is covered soon, but the effect of a large dish is to focus the energy to fill only a small segment of the
sphere around the object. This forms a cone; the area of the face of the cone still expands with the square of
the distance.

The power density at Earth, from the previous equations, is about 2.8 × 10⁻²² watts per square metre.

Voyager is received by the large 70m dish at Goldstone. A 70m dish has an area of about 3800 m², so the total
power it receives over that area is ~1 × 10⁻¹⁸ W – about an attowatt. One attowatt is not very much (-180
dBW). We will come back to this later when we cover link budgets.
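As a rough check of that arithmetic, here is a small Python sketch using the figures quoted above (800 kW EIRP, 15 billion km and a ~3800 m² dish); the variable names are mine:

```python
import math

eirp_w = 800e3            # ~13 W transmitter + 48 dB antenna gain
distance_m = 15e9 * 1e3   # 15 billion km in metres
dish_area_m2 = 3800.0     # approximate collecting area of the 70 m dish

# Power flux density: the radiated power spread over a sphere of radius d
pfd = eirp_w / (4 * math.pi * distance_m ** 2)
received_w = pfd * dish_area_m2

print(f"PFD at Earth   : {pfd:.2e} W/m^2")                       # ~2.8e-22 W/m^2
print(f"Received power : {received_w:.2e} W")                    # ~1e-18 W, about an attowatt
print(f"Received power : {10 * math.log10(received_w):.0f} dBW") # ~ -180 dBW
```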

Voyager 1 is nearly 30 years old; it was launched on 5th September 1977 from Cape Canaveral on a
Titan-Centaur rocket. While the rocket used a lot of fuel, the overall consumption is now up to about
30,000 mpg and getting higher. It is now the most distant man-made object, at 100 AU from the Sun and
heading away at 3.6 AU per year (1 AU is the radius of the Earth's orbit). It is so far away that light takes
nearly 14 hours to make the trip.

The influence of the sun at this range is 10000 times smaller – so there is no hope at all for solar power.
Instead, the spacecraft gets its power from a radioactive source, which decays producing heat which is in
turn converted into electricity. This source now produces 290W compared to 470W at launch. There are
also liquid propellants for station keeping; the spacecraft has 28 kg left, which should last at least to around
2020. It will reach the Heliopause in 2015, effectively leaving the solar system.

Goldstone Observatory is located in the Mojave Desert, California, USA. It was set up in the late 1950s to
communicate with the Pioneer space missions.
The observatory is part of NASA's Deep Space Network. There is a 26 metre antenna which was built to
support the Apollo missions in the 1960s. There are five 34m dishes, four of which are high efficiency
using beam waveguides and one very large 70m dish which is used to communicate with distant missions
like Voyager. It has a communications range of about 16 billion km. For Voyager, the normal data rate for
telemetry is 160bits/second with a maximum data rate of 1.4kb/s.

Deriving the FSL equation

This is sometimes referred to as the Friis formula. We start with the equation for received power flux
density:

Ppfd = Pt Gt / 4πd²

To find the power received, we multiply by the effective area of the antenna:

Pr = Ppfd Ae

The effective area Ae of an antenna is related to its gain:

Ae = Gr λ² / 4π

Substituting and rearranging:

Pr = Pt Gt Gr (λ / 4πd)²

Usually we refer to the path loss between isotropic antennas, so Gt = Gr = 1.

Remembering λ = c/f, the free space loss, the ratio Pt/Pr expressed in dB, is:

FSL = 20 log(4πdf/c) dB

Which, after putting in the constants and correcting for the useful units of MHz, dB and km, leads us to the standard result:

FSL = 32.4 + 20 log(f) + 20 log(d)
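A minimal Python sketch of this result (the function name is my own); the second print recomputes the same figure from first principles as a check:

```python
import math

def free_space_loss_db(freq_mhz, dist_km):
    """Free space loss between isotropic antennas; f in MHz, d in km."""
    return 32.45 + 20 * math.log10(freq_mhz) + 20 * math.log10(dist_km)

print(free_space_loss_db(2400, 10))   # a 10 km link at 2.4 GHz: ~120 dB

c = 299792458.0
d_m, f_hz = 10e3, 2.4e9
print(20 * math.log10(4 * math.pi * d_m * f_hz / c))  # same result from 20log(4*pi*d*f/c)
```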

Propagation in the Atmosphere

This section is mainly concerned with the effects of the Troposphere, the lowest region of the atmosphere
that extends upwards to about 10-20km. We are interested in the effects of the air and the weather on radio
wave propagation.
Structure of the Atmosphere

The figure below shows the layers of the atmosphere.

Moving upwards from the ground the layers which are differentiated by the variation of temperature with
height are:

Troposphere: From the ground continuing up to between 7 km at the poles and 17 km at the equator, with
some variation due to weather. It is the thinnest but most dense layer, with 72% of the total mass of the
atmosphere below 10,000 m. The troposphere is well mixed due to solar heating at the surface.
This heating warms air masses near the ground, which then rise as thermals. On average, temperature
decreases with height.

Stratosphere: This extends from the top of the troposphere (7–17 km) up to around 50 km. In the
stratosphere temperature increases with height.

Mesosphere: This extends from about 50 km to around 80-85 km. Temperature decreases with height.

Thermosphere: This extends from 80–85 km to in excess of 600 km. The temperature increases with
height. The Thermosphere is the boundary of the atmosphere; beyond the Thermosphere is the Exosphere,
which extends into space.

The boundaries between the layers are the tropopause, the stratopause, the mesopause and the thermopause.

Gaseous Attenuation

A major difference in propagation through the atmosphere vs. free space is that there is air present. Air is
made up of:

o Nitrogen (N2) 78%
o Oxygen (O2) 21%
o Argon (Ar) 0.9%
o Carbon dioxide (CO2) ~0.04% (varies with location, increasing…)
o Neon, Helium, Krypton 0.0001%
o Water vapour (H2O), which varies in concentration from 0-2%

With trace quantities of: Methane (CH4), Sulphur dioxide (SO2), Ozone (O3), Nitrogen oxide (NO) and
Nitrogen dioxide (NO2). There are other gases too, as well as particulates and pollution.

Gas molecules interact with the electromagnetic field. This may cause energy loss, e.g. H2O molecules are
asymmetric and will try to align with the electric field.

There are other interactions too: magnetic field interactions, molecular oscillations etc. Here is an example of a
resonance line, which shows permittivity versus frequency.

The amount of loss depends on the resonant frequency - "absorption line" of the Gas molecules in question,
the concentration of that Gas in the atmosphere and the length of the path. The most significant gases up to
300GHz are Water Vapour and Oxygen. Atmospheric pressure has an effect as it broadens the resonance
lines through the collisions between molecules. The specific attenuation of Oxygen and water vapour at sea
level is shown in the diagram below.
The lines that are most significant up to 300GHz are those of water vapour at 22.3, 183.3 and 323.8 GHz
and those of Oxygen where there is a series of lines between 57 and 63 GHz with another line at 118.74
GHz.

Predicting gaseous attenuation requires a model that allows us to represent the specific attenuation
mathematically. The specific attenuation can be calculated by summing the effects of all the significant
resonance lines; this requires a computer programme. ITU-R Recommendation P.676 contains several
detailed models, and there is a simple model too, a curve fit, which we will look at next.

Rough & Ready Model for Sea Level Gaseous Attenuation

For water the model which is valid to 350 GHz is:

Where ρ is the water vapour concentration in g/m3 and f is the frequency in GHz.

For Oxygen it is a bit more complicated, with two models, one for below 57 GHz and one for above 63 GHz.
For 57-63 GHz, an averaged value of 14.9 dB/km is used.
Where: f is the frequency in GHz.

The total combined attenuation is then found by adding together the specific attenuation figures above.
Given the specific attenuation in dB/km, you can then calculate the total loss along a path by multiplying
by the path length. A couple of examples:

Not at Sea Level?

This simple model relates to links at sea level. For higher altitudes, to a first approximation, simply scale by
the reduction in atmospheric density compared to sea level. For example, there is practically no water
vapour above the clouds, so water vapour attenuation does not affect links between planes and satellites.
There is a utility elsewhere on this site which implements a much more complex model.

That assumes pressure, temperature and humidity do not change, which is a reasonable assumption for
short paths over land but not for slant paths to satellites. For slant paths, there are two ways of approaching
the correct result. The most complex is to simulate the atmosphere as a series of layers, work out the
geometry for how long the path is in each layer, calculate the specific attenuation for that layer and finally
sum all the losses up in an integration.

The second way is to use the scale height approximation. This models the whole atmosphere as a single
layer with one set of parameters for the pressure, temperature and water vapour density, taken to be those at
sea level. The height of this simulated layer is called the scale height. First the attenuation vertically up to
the scale height is calculated; this is called the zenith attenuation. It is not all that simple, as the scale height
depends on frequency. These equations should work up to 57 GHz within about 10% error:

As the path is a slant path, the path length through the layer is longer than the vertical path; it is
estimated by trigonometry based on the elevation angle, which should be within the range 5-90 degrees. Of
course, it is usually best to work all this out using a computer or a spreadsheet.

A really good model for atmospheric gas attenuation is available from the ITU-R in ITU-R P.676-6
"Attenuation by Atmospheric Gases". I have a software implementation of this on my software page.

Here are some useful figures for a 100 km terrestrial path at sea level with 1013 mb, 15°C and a water vapour
concentration of 7.5 g/m³.

If you are calculating the gas loss for a path in a duct, assume a median water vapour concentration of 3 g/m³.

Beyond the Horizon

You might imagine waves travel along straight lines forever, or until they hit something. For a transmitter
on the ground, power radiated above the horizon will go into space; horizontally beamed signals will travel
to the horizon and then be absorbed; signals below the horizontal will be absorbed or scattered into space.

Rule of thumb – distance to the radio horizon d (km) versus transmitter height h (m): d = 4.12√h
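A one-line Python sketch of that rule of thumb (the function name is my own):

```python
import math

def radio_horizon_km(antenna_height_m):
    """Rule-of-thumb distance to the radio horizon for a 4/3 Earth, height in metres."""
    return 4.12 * math.sqrt(antenna_height_m)

for h in (10, 30, 100, 300):
    print(f"h = {h:3d} m -> horizon ~ {radio_horizon_km(h):.0f} km")
# Two terminals are (roughly) within radio line of sight if the path length
# is less than the sum of their individual horizon distances.
```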

We know signals do propagate beyond the horizon and the major mechanisms are:

• Refraction - bending of signals towards ground


• Scattering - from eddies in the air, from rain , from reflecting surfaces and objects
• Diffraction - from terrain, buildings and vegetation

Atmospheric Refraction

To understand refraction, which is the atmospheric bending of the radio path away from a straight line, we
need to remember Snell's law.

Willebrord Snel van Royen (1580–1626) was a Dutch astronomer and mathematician and is most famous
for his law of refraction, now known as Snell's law. In 1617 he reported on an experiment to measure the
distance between Alkmaar and Bergen op Zoom, which are separated by one degree of latitude, with the aim of
determining the radius of the Earth. He measured one degree to be equal to 107.4 km, which was only about 3 km
out. He also developed a new method for calculating π. He discovered his law of refraction in 1621.

Refractive index vs Height

As we move to higher altitudes we have lower pressures and lower temperatures. As a result the refractive
index falls with height.
Radio waves get “bent” downwards and are able to propagate beyond the geometric horizon, which extends
range.

To find out how much, we need to know how the refractive index of air varies with height. This requires the
introduction of a new unit, for reasons that will become obvious.

N - Units

The refractive index of air is very close to 1. Typically n = 1.0003 at sea level and this is most tedious -
there are lots of decimals that must be used because the detail is important, so we define a new unit, the N
unit

N = (n - 1) x 1 000 000

N is typically 310 at sea level in the UK. The value of N can be calculated from:

N = 77.6 P/T + 3.73 × 10⁵ e/T²

Where:

P = atmospheric pressure, ~1000 mb
T = temperature, ~300 K
e = water vapour partial pressure, ~40 mb

The dry (first) term depends only on pressure and temperature; the wet (second) term also depends on the water vapour
concentration. The temperature, pressure and water vapour pressure all vary with time and space.
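A short Python sketch of that calculation, using the approximation given above (the function name and the sample values are mine):

```python
def refractivity_n_units(p_hpa, temp_k, e_hpa):
    """Radio refractivity N = 77.6 P/T + 3.73e5 e/T^2 (pressures in hPa/mb, T in kelvin)."""
    dry_term = 77.6 * p_hpa / temp_k
    wet_term = 3.73e5 * e_hpa / temp_k ** 2
    return dry_term + wet_term

# Typical temperate sea-level conditions
print(refractivity_n_units(p_hpa=1013.0, temp_k=288.0, e_hpa=10.0))  # ~318 N units
```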

Pressure falls exponentially with height; the scale height, where it drops to 1/e of the sea level value, is
around 8 km. (This e is not the water vapour pressure; it is the constant e from natural logarithms, 2.718.)
Scale heights are used frequently in describing functions that decrease exponentially.

Temperature falls by 1oC/100m in the first few km above sea level.

Water vapour partial pressure is much more complex; it is strongly governed by the weather and is limited
to the saturated vapour pressure. Because the saturated pressure falls with temperature, as moist air rises
and cools the water vapour condenses out as clouds. The saturated water vapour pressure is around 40 mbar
at 300 K (a warm day) and 6 mbar at 273 K (freezing). The zero degree isotherm is typically at a few km in
altitude, near the cloud base. Practically, we can say the amount of water vapour above 2-3 km is negligible.

The result is that the refractive index falls exponentially with height in a “standard” atmosphere. The scale
height is ~7.4km and in the first 1000m we can approximate this as a straight line with a slope ~ -40 N/km.

Representing an exponential function as a straight line is cheating, but it is a good enough approximation
up to 1000m or so. Beware of this when planning systems on top of mountains.

Super-refraction
If N falls with height faster than 157 N units per km (i.e. dN/dh is more negative than -157 N/km), signals
will be refracted by more than the curvature of the Earth and can be trapped. We call this super-refraction.

N typically falls by 40 units per km of height, which we call the lapse rate of N.

The rate of change of the ray angle with height is dθ/dh ≈ dn/dh = dN/dh × 10⁻⁶, which we find from Snell's
law by applying the small angle approximation sin(θ) ≈ tan(θ) ≈ θ.

The radius of the Earth is ~6371 km. To follow the Earth's curvature, dθ/dh needs to match the curvature of
the Earth, which is found to be -1.57 × 10⁻⁴ radians/km if you do the maths; to be trapped it must exceed it.
Remember N units are a million × (refractive index - 1), so dN/dh = -157 N units/km is required for a radio
wave to just follow the Earth.

The equivalent Earth radius

Many models are simpler if we can treat radio waves as if they were travelling along straight lines in a
standard atmosphere (dN/dh = -40 N/km). We can achieve this by pretending the Earth has a larger radius,
which we call the equivalent Earth radius Re.

Typically Re = 4/3 R in the UK and we define the "k factor" k such that Re = k R.
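The relationship between the lapse rate and the k factor follows from the -157 N/km figure above; a small Python sketch (names mine):

```python
EARTH_RADIUS_KM = 6371.0

def k_factor(dn_dh):
    """Effective Earth radius factor for a refractivity gradient dN/dh in N units per km."""
    return 157.0 / (157.0 + dn_dh)

for gradient in (-40.0, 0.0, -100.0):
    k = k_factor(gradient)
    print(f"dN/dh = {gradient:6.1f} N/km -> k = {k:5.2f}, Re = {k * EARTH_RADIUS_KM:7.0f} km")
# dN/dh = -40 gives k close to 4/3; as dN/dh approaches -157 the effective radius tends to infinity.
```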

Having done this we can then look at paths by drawing straight lines rather than curves across a terrain
profile. The ability to draw straight lines is, in practice, very important: it simplifies the propagation prediction
software used in link planning (nobody does it by hand any more). The image shows a path profile where
line of sight is blocked. The red ovals show the Fresnel ellipsoids, in this case the first. These will be
covered when we come on to study diffraction, but the practical point is that for a link to be line of sight,
no terrain should enter the Fresnel ellipsoid.

Example of a path profile
(note the red curves represent Fresnel zones, to be covered later)

Ducting and Inversions

Non-standard atmospheres can lead to anomalous propagation. Pressure tends to be quickly restored to
equilibrium, so the most important factors are variations in the water vapour concentration and temperature.
Ducts tend to form when either the temperature is increasing or the water vapour concentration is decreasing
unusually rapidly with height. For example:

Ducts can occur either at ground level or elevated, and depending on the terminal height the signal may or
may not couple into the duct. To couple into and remain in a duct the angle of incidence must be small,
typically less than 1°.

Duct depth and “Roughness” are also important. If the duct depth is small compared to the wavelength,
energy will not be trapped. If the roughness is large compared to the wavelength, energy will be scattered
out of the duct. Surface ducts have the ground as a boundary and energy will be lost to the terrain,
vegetation etc.
The significance of elevated ducts is that they can allow signals to propagate for very long distances over
the horizon. It is possible for intermediate terminals to be below the elevated duct and not able to couple
into it – resulting in non-monotonic path loss with range.

A good example of the temperature inversion occurred on 7th November 2006. Strong inversions like this
are unusual in the UK.

The refractivity profiles (http://weather.uwyo.edu/upperair/sounding.html) showed a widespread sharp
decrease in N with height, which gave rise to strong super-refraction. This caused some interesting anomalous
propagation effects and long range interference to services.

What causes conditions like this?


Causes of Ducting

Briefly, the weather alters temperature, pressure and humidity: regions of air are moved about, mixed up,
elevated and depressed by cyclones and anticyclones, heated by the sun and cooled by radiation at night.

Evaporation Ducts

There is usually a region extending a few metres above the surface of the sea where the water vapour pressure is
high due to evaporation. This also occurs over large bodies of water, for example the Great Lakes. The
thickness of the duct varies with the temperature of the location: typically 5 m in the North Sea, 10-15 m in the
Mediterranean and often much more over warm seas such as the Caribbean and the Gulf. Naturally, these ducts
have a significant effect on shipping and have been extensively researched. They are the reason that VHF/UHF
propagation over the sea can extend to great distances, causing all sorts of international frequency co-ordination
problems.

Temperature Inversions

Usually, temperature falls with height by about 1K per 100m. On clear nights the ground cools quickly and
this can result in a temperature inversion, where the air temperature rises with height.

Solar radiation heats up the ground, and radiation from the land raises the air temperature near the ground;
this warm air rises. On clear nights the ground cools very quickly, also cooling the air close to it. This results
in cool air close to the ground with warm air above it soon after sunset: a temperature inversion.

If it is dry, the temperature term is dominant and super refraction and ducting can occur. This is particularly
common in desert regions.

If there is significant water vapour, the relative humidity can quickly rise to 100% and the vapour condenses out
as fog. This condensation reduces the water vapour density near the ground, leading to cold dry air near the
ground and warmer, moister air above, which results in sub-refraction. This can lead to multipath on otherwise
apparently perfectly good line of sight links.

Subsidence

This is a mechanism that can lead to elevated ducts and is associated with high pressure weather systems -
anticyclones. Descending cold air forced downwards by the anticyclone heats up as it is compressed and
becomes warmer than the air nearer the ground, leading to an elevated temperature inversion. (Atmospheric
pressure always increases closer to the ground unless someone has let off a bomb above you.) This all
happens around 1-2 km above the ground, far too high to cause ducting except for very highly elevated
stations, as the coupling angle into the duct is too great for a ground-based station. As the anticyclone
evolves, the air at the edges subsides and this brings the inversion layer closer to the ground. A similar
descending effect happens at night. In general, the inversion layer is lowest close to the edge of the
anticyclone and highest in the middle. Anticyclones and the resulting inversions often exist over large
continents for long periods.

Advection
This is the movement of air masses, typically occurring in early evenings in the summer with air from a
warm land surface advecting over the cooler sea. This warm air mixes with the cooler air, which is
relatively moist through being close to the surface of the sea. This extends the height of the
evaporation duct and leads to high humidity gradients and a temperature inversion, forming a surface duct within
the first few hundred metres above the sea. These ducts do not persist over land and are a coastal effect. Typically in
the UK they are associated with warm anticyclonic weather over the continent of Europe and advection out
over the North Sea. They tend to be weaker than subsidence ducts but do occur relatively often over the
North Sea and can persist for many days. For example, it is relatively common for UHF signals to
propagate well beyond line of sight from the East coast of England across the North Sea to the Low
Countries.

The picture below shows what the ITU-R consider to be the global incidence of ducting. It replaces an
earlier model that only used latitude. This really does still need to be tested some more, as it may be more of
a reflection of Matlab plotting routines for sparse data than actual reality. Use with care.

The original model was very crude:

As good UK citizens we are most concerned with the UK probability of ducting. Evaporation ducts
happen all the time, and a widespread duct frequently forms over the sea, e.g. North Sea - UK - Low
Countries. Surface ducts occur for around 6% of the time; they tend to be up to 300 m in height and cover
~100 km. This is a fairly low incidence: surface ducts occur for around 50% of the time in the Gulf, so they
are not really anomalous there.

Elevated ducts exist for around 7% of the time; they occur up to 3 km in altitude and cover ~100 km. Again this
is a low incidence, as elevated ducts happen for 40% of the time in the Gulf.
Diffraction

Diffraction is the "bending" of wavefronts around obstacles. Diffraction occurs with all propagating waves,
including sound waves, waves on water, waves in materials and electromagnetic waves. Diffraction always
occurs, but its effects are generally only noticeable where the wavelength is similar to the size of the
diffracting object, e.g. a signal passing through a window.

Diffraction is a large subject with some fairly difficult mathematics - we will try to limit the maths. What
happens when an EM wave encounters a barrier?

Signals diffract around the barrier


The Huygens construction

Christiaan Huygens (1629–1695) was a Dutch mathematician, astronomer and physicist. He came up with a
theory that light was a wave. His rule is:

"Each point on a wavefront acts as a source of secondary wavelets. The combination of these secondary
wavelets produces the new wavefront in the direction of propagation".

Diffraction over a perfectly absorbing knife edge can be understood using the Huygens construction.
The Cornu spiral is developed from Huygens by summing the amplitude and phase of each wavelet: to find
the field at B from the wavefront A-A', sum the wavelets -2, -1, 0, 1, 2, accounting for the vector nature of
the field.

Now move a knife edge upwards; it cuts out the lower wavelets, effectively cutting out the lower part of
the spiral. The effect of this is initially an oscillation and then, as the direct path is cut off, a signal loss.

Estimating the diffraction loss

Consider this geometry of a Knife edge gradually cutting off the wavefront. Also h << d1, d2 and λ<<
d1,d2

Define the diffraction parameter v:

v = h √( 2(d1 + d2) / (λ d1 d2) )

Be careful with the sign: v is negative when the edge is below the direct path.

The Cornu spiral is given to a good approximation by plotting the Fresnel cosine and sine integrals of v,
C(v) and S(v), which are the integrals from 0 to v of cos(πt²/2) and sin(πt²/2) respectively.

The E field can be expressed by summing the wavelets from the top of the knife edge to infinity, which is
beyond this tutorial, but the result can be written as:

Which we could work out, but fortunately there is a very good approximation (the one used in ITU-R P.526):

J(v) = 6.9 + 20 log( √((v - 0.1)² + 1) + v - 0.1 ) dB

taken to be zero for v < -0.7.

Here it is plotted:

Note: For a grazing knife edge, h = 0 so v = 0, giving a 6 dB loss. For first Fresnel zone clearance (v < -1.4)
there is no loss.
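Both v and the J(v) approximation are easy to code; a Python sketch (the function names are mine):

```python
import math

def knife_edge_v(h_m, d1_m, d2_m, wavelength_m):
    """Diffraction parameter v; h is positive when the edge is above the direct path."""
    return h_m * math.sqrt(2 * (d1_m + d2_m) / (wavelength_m * d1_m * d2_m))

def knife_edge_loss_db(v):
    """Approximate single knife-edge loss J(v) in dB, taken as zero for v < -0.7."""
    if v < -0.7:
        return 0.0
    return 6.9 + 20 * math.log10(math.sqrt((v - 0.1) ** 2 + 1) + v - 0.1)

# Grazing edge in the middle of a 10 km path at 1 GHz (wavelength 0.3 m)
print(knife_edge_loss_db(knife_edge_v(0.0, 5e3, 5e3, 0.3)))   # ~6 dB
# Same edge 20 m above the direct path
print(knife_edge_loss_db(knife_edge_v(20.0, 5e3, 5e3, 0.3)))  # ~14 dB
```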

Fresnel zones

I have mentioned these several times, so I had better define them. From diffraction theory, it is clear that the
effect of the knife edge begins before the direct path is cut. Some clearance is needed and the amount is
expressed in terms of Fresnel zones. The first Fresnel zone is the locus of points where the additional path
length compared with the shortest path does not exceed λ/2.
Further Fresnel zones are defined by additional λ/2 path length increments.

Line of Sight?

We generally assume line of sight clearance if more than 0.6 of the 1st Fresnel zone radius is clear of obstructions.
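The first Fresnel zone radius at a point along the path follows from the λ/2 definition; a Python sketch of the standard expression and the 0.6 clearance rule of thumb:

```python
import math

def fresnel_zone_radius_m(d1_m, d2_m, wavelength_m, zone=1):
    """Radius of the nth Fresnel zone at a point d1 from one terminal and d2 from the other."""
    return math.sqrt(zone * wavelength_m * d1_m * d2_m / (d1_m + d2_m))

# Mid-point of a 20 km link at 2 GHz (wavelength 0.15 m)
r1 = fresnel_zone_radius_m(10e3, 10e3, 0.15)
print(f"First Fresnel zone radius : {r1:.1f} m")      # ~27 m
print(f"0.6 F1 clearance needed   : {0.6 * r1:.1f} m")
```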

Calculating Knife edge losses over the Earth

It is necessary to account for the curvature of the earth and for any slope in the path to calculate how far a
knife edge impinges on a path. We do this by defining everything relative to a reference plane.

As long as the path length is much greater than the height of the obstacle, which it usually is, we can make
an approximation for the height of the edge above the reference plane. It is this height that is used in the
diffraction calculation, so the diffraction parameter v is given by the same expression as before with h
measured above the reference plane, which can then be used to calculate the path loss from the equation for J(v).

Multiple Edges

What should we do if there is more than one obstruction? There are several models commonly used,
including those by Bullington, Epstein-Peterson, Deygout, etc. They are all approximations and have
potential for errors, especially with closely spaced edges. They differ in how they calculate the geometry,
and hence the parameter v, in how many edges they take into account, and in how they add up the losses.

Bullington

The Bullington method is quite simple: it is based on constructing an equivalent single knife edge at the
intersection of the Tx and Rx "horizons" and calculating the loss based on that. It is easy to do but is prone to
underestimate the loss as it can ignore important intermediate edges.

Epstein-Peterson

This sums up the loss for each single edge in turn, using the height above the line joining the neighbouring
obstacles (the dotted line in the figure) as the effective height of each edge. It is potentially a better method but
causes large errors on paths with closely spaced edges.
Deygout (the principal edge method)

This method is more involved; it splits the path into segments. Firstly we need to find the edge with the largest
value of the parameter v, ignoring all other edges. This is called the "principal edge" and its v parameter is
saved.

Now, working from the principal edge P, we treat the path as if there were a new path between the Tx and the
principal edge, create a new reference plane, and calculate v for the intermediate edge, if there is one, based on
its height above this reference plane.

This edge will have a lower value of v and becomes the principal edge for the sub-path from Tx to P. The
process is recursive for multiple intermediate edges and can be repeated until all edges are considered. The
method ignores any edges with 1st Fresnel zone clearance. The same process is used along the path from P
to the receiver.

At the end of the procedure we will have a J(v) loss for each edge considered; the method simply adds these
up. So for 3 edges:
L = J(vp) + J(vtp) + J(vpr)

Generally, modified Deygout methods are used with fudge factors and scaling factors to further improve
the accuracy compared to measurements.
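As an illustration of the principal-edge recursion, here is a simplified Python sketch. It works on a flat path profile, ignores the Earth-curvature reference plane and the Fresnel-clearance test, and the data structure and names are my own, so treat it as a sketch of the idea rather than a full Deygout implementation:

```python
import math

def knife_edge_v(h, d1, d2, wavelength):
    """Diffraction parameter for an edge h above the line joining two points."""
    return h * math.sqrt(2 * (d1 + d2) / (wavelength * d1 * d2))

def j_loss_db(v):
    """Single knife-edge approximation, zero below v = -0.7."""
    if v < -0.7:
        return 0.0
    return 6.9 + 20 * math.log10(math.sqrt((v - 0.1) ** 2 + 1) + v - 0.1)

def deygout_loss_db(profile, wavelength, start=0, end=None):
    """Loss of the principal edge plus, recursively, the sub-paths either side of it.

    profile is a list of (distance_m, height_m) points; the first and last are the terminals.
    """
    end = len(profile) - 1 if end is None else end
    if end - start < 2:
        return 0.0  # no intermediate edges on this sub-path
    (d_a, h_a), (d_b, h_b) = profile[start], profile[end]
    best_v, best_idx = None, None
    for i in range(start + 1, end):
        d, h = profile[i]
        # Height of the edge above the straight line joining the sub-path terminals
        h_line = h_a + (h_b - h_a) * (d - d_a) / (d_b - d_a)
        v = knife_edge_v(h - h_line, d - d_a, d_b - d, wavelength)
        if best_v is None or v > best_v:
            best_v, best_idx = v, i
    # Principal edge loss plus the losses of the Tx-side and Rx-side sub-paths
    return (j_loss_db(best_v)
            + deygout_loss_db(profile, wavelength, start, best_idx)
            + deygout_loss_db(profile, wavelength, best_idx, end))

# Terminals at 30 m and 25 m with two hills on a 30 km path, 600 MHz (wavelength 0.5 m)
profile = [(0.0, 30.0), (10e3, 70.0), (20e3, 60.0), (30e3, 25.0)]
print(f"Deygout diffraction loss ~ {deygout_loss_db(profile, 0.5):.1f} dB")
```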

Real Terrain

Normal terrain, hills etc. do not really look like knife edges and are often better represented by cylinders,
which have a higher loss.

Fortunately, we can approximate the additional loss as L = J(v) + T, where T is an additional loss that
accounts for diffraction at the tangents to the cylinder.

Most real-world obstructions are not like knife edges, and it is only possible to solve the equations for
idealised cases. Solutions for many objects, including reflection effects, loss from trees etc., rapidly
become impractical, and in many cases we do not really know enough detail about the exact nature
of the terrain anyway. For example, mobile systems would need re-analysing every 0.1 wavelengths, and for
3G systems that would require a terrain map with points every 1.5 cm. To overcome this and make a best
guess, path loss prediction models are used; we will come on to these later.

Reflections from surfaces and objects

The effects of reflections on the radio channel are very important in propagation studies. We will start off
by looking at the simplest case of a reflection from a smooth flat surface of infinite extent. This is one of
those cases that does not really exist, but as long as the surface is large compared to the size of the Fresnel
zone, it will behave in the same way. Exactly what we mean by "smooth" we will come onto later.

Reflection from a plane

The phase and amplitude of the reflected wave are found from the reflection coefficient ρ. The value of ρ is
different when the E plane or the H plane is parallel to the reflecting plane. The reason the expressions are
different is that the surface has different properties for E and H fields, one governed by the permittivity,
the other by the permeability.

Here ε = permittivity and σ = conductivity. Generally ρ is complex, and both the amplitude and phase change on
reflection. Frequently substitutions are made for the relative permittivity and relative permeability.

We define the relative permittivity εr = ε/ε0, and for conductivity we use the parameter x = σ/ωε0.

Some Typical Values:

Surface          Conductivity (S/m)   Relative Dielectric Constant
Dry Ground       0.001                4-7
Average Ground   0.005                15
Wet Ground       0.02                 25-30
Sea Water        5                    81
Fresh Water      0.01                 81

The equations for ρ now look like this:

Note that as θ → 0, sin(θ) → 0 and ρ → -1.

This is best illustrated by a picture which shows the magnitude of the reflection coefficient. Note that the
Brewster angle only occurs with vertical polarisation. This is significant and finds practical use in Polaroid
sunglasses, which will, for example, cut down reflections from the road when driving into the sun.

We have already noted that for very small grazing angles the reflection coefficient ρ is -1; it is -1 because there
is a 180° phase change on reflection. This is important as on a terrestrial path we can often get two signals
arriving at around the same amplitude: the direct wave and one via a ground reflection. Depending on the
relative phase shift and the relative path lengths, the two signals can add together constructively or subtract
from each other destructively, as in the example. We call this phenomenon multipath.

For the signals to cancel, the extra path length of the reflected path needs to be a complete wavelength. The
reflection gives a 180° phase change; for destructive interference we need a total of 180°, which we can get
by adding a further 360°, that is, by making the reflected path one wavelength longer. The path length
difference depends, for example, on the receiver height.
Doing the algebra and assuming d >> h, the path length difference is approximately 2 htx hrx / d.

The field strength can be calculated by adding the direct and reflected rays, accounting for the phase difference
Δφ.

The power is proportional to the square of the field strength, and for grazing incidence ρ = -1, so:

P ∝ sin²( 2π htx hrx / (λ d) )

So, for example, with d = 1 km, htx = 10 m, hrx = 1-10 m and λ = 30 cm we find the signal strength lobing with
height, with a sine squared response.
If the antennas are fixed you might think this is unimportant as long as you avoid putting an antenna in a null,
but it does matter. On long links, the change in the refractive index of the air with the weather may
move the nulls about vertically. There can also be multipath within the atmosphere itself via refraction,
leading to multipath cancellation; it is frequently the most important fading mechanism for microwave
links below 10 GHz. An example is shown below, measured on a 60 km 1.5 GHz link.

Usually we do not think of the ground itself moving up and down relative to the antennas, but this does happen,
the classic example being a path over an estuary where the tide changes the level of the reflecting surface.
People receiving terrestrial TV over an estuary tend to need two antennas vertically spaced by around a metre,
so that only one will be in a signal null at any given time.

The relative path length phase difference also depends on the wavelength and hence the frequency, so you
may find nulls in the spectrum. This is a particular problem for broadband systems as where there is a
reflection, there is likely to be a null somewhere in the spectrum.
Rough and smooth surfaces

A rough surface can be thought of as one that gives diffuse reflections, like those you might see from light
reflected off a brick wall, and a smooth surface as one that gives specular reflections, like looking in a mirror.

There is no hard transition from rough to smooth, it is a gradual process but that is not good enough for
radio engineers, we need some form of criterion and the one we will use was developed by Rayleigh. It is
based on the path length difference between "rays" reflecting off the surface. The term "ray" is a little
misleading, we are considering a wavefront and it is really the amount of distortion of that wavefront that
matters.

Taking ray A and ray B above, the surface is considered smooth if the path length difference between them is no
more than a small fraction of a wavelength; specifically, a surface is considered smooth if the phase difference is
less than 90° or π/2 radians. In our example above the path length difference between the two rays is 2d sin(θ)
and the phase difference is this length multiplied by 2π/λ. Mathematically the phase shift is:

Δφ = 4π d sin(θ) / λ

And if we define rough as a phase shift Δφ > π/2, a surface is rough if:

d > λ / (8 sin(θ))

Note that this depends on the angle of incidence, so to a wave arriving at a shallow incidence, a surface will
appear smoother than head on.

Typically, roughness is described by the standard deviation σ of the surface around its mean level.

A constant C is derived assessing the roughness in terms of σ:

C = 4π σ sin(θ) / λ

The Rayleigh criterion is that if C < 0.1 then we have a smooth surface. If C > 10 then the reflection is so
diffuse it can usually be neglected.
For example, at an angle of 30° at 1.6 GHz, a surface stops being smooth once σ exceeds about 3 mm. At 5° it
would be 1.8 cm.
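A Python sketch of the Rayleigh check, which reproduces the 3 mm and 1.8 cm figures above (names mine):

```python
import math

C_LIGHT = 3e8  # m/s

def rayleigh_c(sigma_m, grazing_angle_deg, freq_hz):
    """Roughness parameter C = 4*pi*sigma*sin(theta)/lambda."""
    wavelength = C_LIGHT / freq_hz
    return 4 * math.pi * sigma_m * math.sin(math.radians(grazing_angle_deg)) / wavelength

def smooth_sigma_limit_m(grazing_angle_deg, freq_hz, c_limit=0.1):
    """Largest surface standard deviation that still counts as smooth (C < c_limit)."""
    wavelength = C_LIGHT / freq_hz
    return c_limit * wavelength / (4 * math.pi * math.sin(math.radians(grazing_angle_deg)))

print(smooth_sigma_limit_m(30.0, 1.6e9))  # ~0.003 m, i.e. ~3 mm
print(smooth_sigma_limit_m(5.0, 1.6e9))   # ~0.017 m, i.e. ~1.8 cm
print(rayleigh_c(0.01, 30.0, 1.6e9))      # sigma = 1 cm at 30 degrees: C ~ 0.34, not smooth
```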

This criterion is also applied to reflectors used in antennas, for example a parabolic dish. The maximum
useful frequency of the dish is usually taken as that where the deviation of the surface from an ideal
parabola is just equal to 0.1 wavelengths.

There is no use making a dish antenna better than it needs to be; indeed, making a reflector from a mesh
rather than a solid has several practical advantages in terms of weight and wind loading. At 10 GHz, 0.1
wavelengths is 3 mm, so modern satellite TV dishes which are perforated by 2 mm holes still work almost as
well as solid dishes.

Dichroic reflectors

When is a reflector not a reflector? Take a piece of metal and drill holes in it, so it is mostly holes.
Depending on the size of the holes compared to the wavelength, this is either a good reflector or most of the
energy will go straight through. What use is that? It is effectively a filter, and a highly efficient one to use in
the feed of a radio astronomy dish; it can be used for splitting the beam of a beam waveguide into
frequency dependent paths. Here are some in situ in the base of one of the 34m DSN antennas at Goldstone.
Fresnel Zones again

The principle of Fresnel clearance also applies to reflection. Remember the first Fresnel zone encloses all points
where the additional path length is less than λ/2. As a rule of thumb, a reflecting object needs to be larger
than 0.6 of the 1st Fresnel zone or there will be scattering loss. This becomes especially important at longer
wavelengths, as objects behave more like scatterers than reflectors.
Tropospheric Scattering

Air is not uniform; there are eddies, thermals, turbulence etc. where the air has slightly different pressure
and hence a different refractive index. The eddies have outer scales of ~100 m and inner scales of ~1 mm.
Energy that is fed into a turbulent system goes primarily into the larger eddies and from these, smaller
eddies are shed. This process continues until the scale of the turbulence is small enough for viscous action
to become important and dissipation as heat to occur.

The variations have a spectrum, the Kolmogorov spectrum, with a slope proportional to k⁻¹¹/³.

The effect of these irregularities is for the wavefront to be scattered and defocused.
It is a very slight effect and energy is scattered through very small angles, but over long paths this leads to
troposcatter propagation: signals may be scattered to a receiver beyond the horizon. This mechanism is the
dominant mode for long range VHF/UHF propagation.

The Common volume formed by the intersection of the antenna patterns is important. The common volume
needs to be in the troposphere, so there is a limit to the propagation range.

A typical value for the loss by this mode for a 250 km, 150 MHz path with 20 dBi antennas is ~140 dB (see
later) - THIS INCLUDES ANTENNA GAIN. The line of sight loss would be ~80 dB including antenna
gain, so troposcatter is around 60 dB below line of sight in this case, but very few terrestrial paths this long are
line of sight.
The median loss is given by:

L = M + 30log(f) + 10log(d) + 30log(θ) + LN + LC - Gt - Gr

This is an empirical model, where M is typically 19-40 dB depending on climate, f is the frequency in MHz and
d is the path length in km.

θ is the scatter angle (milliradians) - note how the loss increases dramatically with θ. LN accounts for the
height of the common volume, LC is the aperture-to-medium coupling loss and Gt, Gr are the gains of the
antennas.

To calculate the scatter angle θ in milliradians we apply the formula:

θ = θe + θt + θr

Where θt and θr are the transmitter and receiver horizon angles and

θe = 1000 d / Re
Re = effective earth radius ~ 4/3 × 6370 km.

The value of M varies between 19 dB and 40 dB depending on the climate. In the UK the usual values are
M = 33 dB for overland paths and M = 26 dB for paths over the sea.

LN accounts for the transmission loss variation with the height of the common volume (there is less air
higher up).

LN = 20 log(5 + γH) + 4.34 γh

Where H = 10⁻³ θd/4, h = 10⁻⁶ θ² Re/8 and γ is a climatological parameter, ~0.27 km⁻¹ in the UK.

LC is the aperture-to-medium coupling loss, taking account of the common volume variation with antenna
gain:

LC = 0.07 e^(0.055(Gt + Gr))

It is possible to calculate the troposcatter loss for other percentage-of-time values using a correction factor:

L(p) = L(50) - Y(p)

Where Y(p) = C(p) Y(90)

p      50    90    99     99.9    99.99
C(p)   0     1     1.82   2.41    2.9

Y(90) again depends on climate and location:

Y(90) = -2.2 - (8.81 - 2.3 × 10⁻⁴ f) e^(-0.137h) over land

Y(90) = -9.5 - 3 e^(-0.137h) over sea


Going the other way, to lower time percentages down to around 20%, the distribution is symmetrical so:

L(p) = L(50) - {L(100-p) - L(50)} when 20 < p <50

Examples

Frequency 144 MHz, antenna gains 16 dBi, path length 250 km. Horizon angles in both cases = 0
milliradians - i.e. looking horizontally.

θ = θe = 1000 × 250 / (1.333 × 6370) = 29.4 milliradians

M = 32, H = 1.84, h = 0.92, LN = 15.9, LC = 0.41

so L(50) = 149 dB
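A Python sketch of that calculation, which reproduces the example above (the constants are the ones given in the text; the function name and defaults are mine):

```python
import math

def troposcatter_median_loss_db(freq_mhz, dist_km, gain_tx_dbi, gain_rx_dbi,
                                theta_t_mrad=0.0, theta_r_mrad=0.0,
                                m_db=32.0, gamma_per_km=0.27, k_factor=4.0 / 3.0):
    """Median troposcatter loss L(50) using the empirical formula above."""
    re_km = k_factor * 6370.0
    theta = 1000.0 * dist_km / re_km + theta_t_mrad + theta_r_mrad  # scatter angle, mrad
    h_cap = 1e-3 * theta * dist_km / 4.0
    h_low = 1e-6 * theta ** 2 * re_km / 8.0
    l_n = 20 * math.log10(5 + gamma_per_km * h_cap) + 4.34 * gamma_per_km * h_low
    l_c = 0.07 * math.exp(0.055 * (gain_tx_dbi + gain_rx_dbi))
    return (m_db + 30 * math.log10(freq_mhz) + 10 * math.log10(dist_km)
            + 30 * math.log10(theta) + l_n + l_c - gain_tx_dbi - gain_rx_dbi)

print(troposcatter_median_loss_db(144.0, 250.0, 16.0, 16.0))  # ~149 dB
```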

Another example predicting troposcatter path loss for systems operating in the WiMAX bands from 2.4 to
42GHz is shown below. This is based on the ITU-R P452 model, which should strictly only be used for
interference assessment. Excess loss here is the extra loss compared to a line of sight path.

Another example, total loss for a 100km troposcatter path, here 145MHz 10 dBi antennas, otherwise 60cm
dish, 70% efficient.
Scintillation

Scintillation is a rapid variation in the signal amplitude. One of the causes is tropospheric irregularities and
the amount of scintillation depends on the wavelength compared to the eddy size. This effect is present all
the time, (twinkle twinkle little star) and although tropospheric scintillation is generally not significant for
terrestrial systems below 40GHz it is important for low elevation satellite links where there is a handy
formula for predicting it:

Where G(r) is an aperture averaging factor that depends on the radius of the antenna and the wavelength, ε
is the elevation angle and f = frequency in GHz. G(r) can be calculated from:

Where:

D = antenna diameter
η = antenna efficiency
h = height of turbulence, 1000m
Re = effective earth radius 8500km

Here is an example of applying the formula:


Molecular scattering

Molecular scattering is an important mechanism too, and becomes more so at higher frequencies, from
1000 GHz up. Remember that the energy in a radiowave is quantised into discrete packets of energy called
photons (wave/particle duality etc.). The energy of a photon is related to the frequency of oscillation, e = hf,
where h is Planck's constant. When considering the scattering of high frequency radiowaves, it is often more
convenient to think in terms of photons.

Rayleigh Scattering: When photons encounter particles much smaller than their wavelength, such as gas
molecules, scattering occurs. The most common mode is elastic scattering, where energy is not transferred
from the photon to the molecule. This type of scattering is called Rayleigh scattering. This scattering
increases with the fourth power of the frequency, which incidentally is why the sky is blue.
Raman scattering: It is also possible for photons to interact with gas molecules in an inelastic manner so
that energy is transferred between the photon and the molecule. This is called Raman scattering and this is
of particular importance to optical communications systems.

At higher photon energies, incident photons can excite vibrational modes in polarisable molecules. This is an
energy transfer process with the resultant emission of a scattered photon of lower energy (i.e. lower
frequency/longer wavelength), leaving the molecule in a higher energy vibrational mode. Only certain
vibrational mode energies are allowed and, by inference, only discrete frequency/wavelength differences
can occur. The spectrum of the resultant scattered photons forms a set of spectral lines at discrete offsets from
the original frequency/wavelength, called "Stokes lines". It is also possible for a molecule to give up some
of its energy to an incident photon and thereby increase the photon energy. Again, this forms a discrete set
of lines, the "anti-Stokes lines".

Clutter and Vegetation

"Clutter" means things spoiling the view that are not part of the terrain (including vegetation). Vegetation
mainly refers to trees and large bushes that also get in the way of the radio path. Clutter causes loss as it is
usually made of “lossy” materials. Buildings are often thought to be completely opaque, this is incorrect,
signals do penetrate buildings especially if there are many windows in an open plan design or if the walls
are made of wood. Tree branches and leaves cause loss and scatter EM waves passing through them,
resulting in a strong multipath component which varies with movement of the leaves and branches in the
wind. Trees often have more foliage in the summer than the winter and this leads to seasonal variability in
their effect on radiowaves.

There is no such thing as a typical tree but here are some examples of the loss in dB/m experienced while
moving into a woodland:
This is a little misleading as it depends very much on the particular woodland in question. Looking at it
more analytically, higher frequency waves propagate only a little way into the vegetation, but diffract around it
both horizontally and vertically, and reflect from the ground.

A travelling wave also propagates over the top of the vegetation by radiative transfer (RET). This is a forward
scattering mode through the canopy, a bit like rain scatter. The scattering function depends strongly on the
number of leaves, the leaf shape and how much water each leaf contains.
The result is an initial rapid signal drop with loss that starts to tail off as you go further into the wood.
Beyond a certain depth into the wood, the loss does not increase much.

The maths is difficult; below is an example of one of the RET theory equations, which is why we will not
go into this in any detail at this stage.

While the scattering loss would be difficult to work out by hand, it is relatively easy to implement as a
computer programme. The input parameters required depend on the tree type. A good set of these along
with a full explanation of the model is available from the latest version of the ITU-R recommendation
P.833.

Dynamic effects

As trees move in the wind, the signal received through vegetation, which contains many multipath
components, varies rapidly with time. Measurements of the standard deviation of signal level against
wind speed at 38 GHz were made as part of the EU 5th Framework Embrace project and are now
incorporated in ITU-R Recommendation P.1410.
Variability

Most phenomena change over time and space, and propagation is no different. Propagation in the
troposphere is often strongly influenced by the weather, and time variability is significant. For example, a
rain attenuation event will only happen if it is raining, which is only the case for about 5% of the time in
England. The resulting rain attenuation will vary according to both the rainfall rate variations within the
rainstorm and the movement of the rainstorm relative to the radio path.

Here is an example of a rain fade on a 42GHz link measured by Telenor as part of the EU 5th Framework
Embrace project. It demonstrates how variations in attenuation occur over time scales of seconds to
minutes.
Other meteorological parameters vary as well; for example, temperature and humidity vary relatively slowly
with time.

Even though the relative humidity was changing fairly slowly, this event still led to rapid changes in signal
strength, as it caused significant ducting with long range enhancements and rapid multipath fading on a
long range terrestrial microwave link.

Probability definitions

To describe the variability of a propagation path we can say things like:

“there is a 50% chance it will be less than this”


“it will only be this high for 10 minutes per year”
“20% of households will get a signal”
“the rate of change will only exceed X in 10% of cases”

But to plan we need some basic statistical concepts to formally describe probabilities; these will now be introduced. Some common terms that are frequently used in propagation studies are:

Annual statistics - these are the statistics of an effect measured over an “average” year
Worst Month - these are like the annual statistics but expressed for the worst month of an average year
Exceedance - the probability that some metric will be exceeded e.g. fading will exceed X dB for P% of the
year
Non Exceedance - the probability that some metric will be NOT be exceeded e.g. fading will not exceed X
dB for P% of the year

The Mean - The average level of a variable; for signal levels this is often quoted as the root mean square (r.m.s.) value
The Standard Deviation - A measure of how far the samples in a set of measurements deviate from their mean value

Median (Sometimes 50% value) - If you arrange all the samples of a measurement in ascending or
descending order, the median is the one in the middle

Lower/upper Decile - Arrange all your measurement samples in ascending order, the bottom 10% are the
lower decile, the top 10% the upper decile

Probability Density Functions

These are very common in statistics and especially common in propagation studies. A Probability Density Function (PDF) describes the probability of a variable taking some defined value, e.g. a signal level.

To make a PDF we first arrange the data samples into a set of value “bins” (e.g. values between 0 to 1, 1 to 2, 2 to 3, etc.) and then, taking the whole data set, count how many samples fall into each bin. For example, say we wanted to make a PDF of the sin²(x) function for x from 0 to 10 radians.

The sin²(x) function.


We can do this by taking samples, e.g. 100 regularly spaced samples of x between 0 and 10, and placing the values of sin²(x) into 10 bins, 0-0.1, 0.1-0.2 ... 0.9-1, as below.

The histogram (left) and the resulting Probability Density Function (right).

This result is the PDF of sin²(x). Frequently the probability axis will be shown as a percentage and, as we are often interested in rare values, it will probably also use a log scale.

Frequently we also see the Cumulative Distribution Function (CDF), which is simply the cumulative sum across the PDF. You can think of a CDF as an integral of a PDF.
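
As a small sketch of that recipe (the 100 samples, the 0 to 10 radian range and the 10 bins are the values from the example above; the CDF line simply takes the running sum across the PDF):

```python
import numpy as np

# 100 regularly spaced samples of sin^2(x) for x between 0 and 10 radians
x = np.linspace(0.0, 10.0, 100)
samples = np.sin(x) ** 2

# Sort the values into 10 bins: 0-0.1, 0.1-0.2, ..., 0.9-1.0
counts, edges = np.histogram(samples, bins=10, range=(0.0, 1.0))

pdf = counts / counts.sum()   # fraction of samples falling in each bin
cdf = np.cumsum(pdf)          # running (cumulative) sum across the PDF

for lo, hi, p, c in zip(edges[:-1], edges[1:], pdf, cdf):
    print(f"{lo:.1f}-{hi:.1f}: PDF {p:4.2f}  CDF {c:4.2f}")
```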

The Probability Density Function (left) and the Cumulative Distribution Function (right).


Here are some examples of real PDFs and CDFs.

A Cumulative Distribution Function

The graph is the CDF of the two line of sight signals from the plot on the left, illustrating the multipath fading - note the log probability scale is along the X-axis with the signal level on the Y-axis. Plotting it this way around is just as valid.

A Probability Density Function

The graph is real measured data from a 70km line of sight path and a 200km trans-horizon path. The increased spread in the amplitude range of the trans-horizon data is evident.

Common Distributions

In propagation modelling, the real variability of the signal is modelled as one of the "standard" distributions so that it can be handled analytically. Typical standard distributions that are used are:

Normal (Gaussian) – Many measurements conform to the Normal distribution, for example white noise
Log Normal – useful for rain fade durations
Rayleigh – This is used in modelling urban mobile signal strengths and is a good model where there is no line of sight
Rician – This is useful for modelling rural mobile signal strength where there is line of sight plus multipath

The Normal Distribution

This is also frequently called a Gaussian distribution.


One useful feature to remember about the Normal distribution is that 95% of the samples will lie within 2 standard deviations of the mean and 99.7% will lie within 3 standard deviations. So if you are told a model has a standard deviation of error of 8 dB, 99.7% of samples will be within ±24 dB of the mean. A standard deviation of error of 8 dB is not unusual for a mobile radio planning model.
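
A quick numerical check of those coverage figures, as a sketch (the 8 dB standard deviation is just the planning-model example above):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 8.0                                   # 8 dB model error standard deviation
errors = rng.normal(0.0, sigma, 1_000_000)    # simulated prediction errors

within_2sd = np.mean(np.abs(errors) <= 2 * sigma)
within_3sd = np.mean(np.abs(errors) <= 3 * sigma)
print(f"within ±2 sigma (±{2 * sigma:.0f} dB): {within_2sd:.3f}")    # ~0.954
print(f"within ±3 sigma (±{3 * sigma:.0f} dB): {within_3sd:.4f}")    # ~0.997
```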

The Log-Normal Distribution

A variable x is log-normally distributed if ln(x) is normally distributed. Log normal distributions have been
used to model shadowing in mobile systems.
The Rayleigh Distribution

This is typical of non line of sight mobile signal levels where the power comes from many scattered paths.

The Rician Distribution

This is typical of mobile systems where there is a line of sight component as well as several strong
multipath components.

Where I0 is the modified Bessel function of the first kind, order zero. Note – if v = 0, this reduces to the Rayleigh distribution.

We can re-write the equation for a Rician distribution by introducing a parameter k

k is called the “Rice Factor” and is the ratio of the power in the constant part due to the line of sight
component to that in the random part due to the non-line of sight components.
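
A minimal sketch of how Rician envelope samples can be generated from the Rice factor k (the construction from a constant line of sight term plus complex Gaussian scatter is the standard one; the function name and power values here are illustrative). Setting k = 0 removes the line of sight term and gives the Rayleigh case.

```python
import numpy as np

def rician_envelope(k, n_samples=100_000, scatter_power=1.0, seed=0):
    """Draw Rician-distributed envelope samples.

    k is the Rice factor: the ratio of the power in the constant (line
    of sight) component to the power in the random (scattered)
    component. k = 0 gives the Rayleigh case with no line of sight.
    """
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(scatter_power / 2.0)        # per-quadrature standard deviation
    v = np.sqrt(k * scatter_power)              # line of sight amplitude
    i = v + rng.normal(0.0, sigma, n_samples)   # in-phase component
    q = rng.normal(0.0, sigma, n_samples)       # quadrature component
    return np.abs(i + 1j * q)

for k in (0.0, 1.0, 10.0):
    r = rician_envelope(k)
    # mean power should come out close to scatter_power * (1 + k)
    print(f"k = {k:4.1f}: mean envelope {r.mean():.3f}, mean power {np.mean(r ** 2):.3f}")
```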

Finding Data on distributions

Propagation is greatly influenced by terrain and the weather and weather data has been collected over many
years. Statistics are available from the ITU SG3 for Rainfall rates, Refractivity gradients, Clouds, Wind
speed, Solar activity, etc. etc. Terrain maps available from USGS for free or from OSGB if you are rich
enough to pay the annual fees and can put up with the onerous usage restrictions.
The best free terrain data can be obtained from the Shuttle Radar Topography Mission (SRTM) web page.
The SRTM was a joint project between the National Geospatial-Intelligence Agency (NGA) and the
National Aeronautics and Space Administration (NASA).

The data was collected over a 1 arc second grid for all land areas between 60° north and 56° south latitude.
For various reasons, mostly to do with “defense” global data is only available at 3 arc second resolution –
but this is available for free. This is very different to OSGB data which, although better resolution is
expensive and subject to strict licensing conditions.

USGS ETOPO2 Data

Unlike terrain and clutter data, there is no need to have data for climatic parameters on a fine grid of points.
Point statistics of parameters at spacing of a few km are sufficient, which is fortunate as that is all that is
currently available.

Link Budgets
In the following sections of the tutorial we will change emphasis a bit and start to look in particular at the
effects of radiowave propagation on channels. We will cover link budgets, noise, wide band effects,
modeling and measurement techniques.

Link Budgets

This is all about finding the signal level received from the signal level transmitted. A link budget is a
formal way of calculating the expected received signal to noise ratio. This is something designers generally
want to know to make design decisions like what antenna gain and how much transmitter power is needed.
This affects the hardware cost and is important in satisfying the license conditions etc. Knowing how to properly make a link budget is a very important skill for a communications system design engineer. Some people make a lot of money out of being able to do it well. It is at the basis of antennas and propagation studies, as the overall loss between the terminals depends only on the propagation loss and the antenna gains.

Link budgets usually start with the transmitter power and sum all the gains and losses in the system
accounting for the propagation losses to find the received power. Then the noise level at the receiver is
estimated so we can take the ratio of the signal power to the noise power and work out the performance of
the link. This procedure is shown for the generic system below:

The 3 steps are

1. find the signal power at the receiver by subtracting the path loss from the transmitted power,
remembering to account for antenna gains and feeder losses.
2. find the noise power from the antenna and add to this any noise generated within the system
3. Calculate the ratio of signal power to noise power

What has not been included above, in order to avoid confusion, is interference. Interference can often be treated like additional noise, but the effect of interference depends very much on the modulation scheme being used. With digital systems, interference can be treated as noise, but beware of pulse type interference, which may have a low average power but can completely disrupt services like DTT and DAB by causing bursts of unrecoverable errors that prevent the highly compressed content from being decoded.

EIRP

The transmitter parameters are often further simplified using the concept of EIRP. This is useful as it
allows us to treat systems with very different antenna characteristics similarly.
In radio systems, the Equivalent Isotropically Radiated Power (EIRP) is the amount of power that would
have to be radiated by an isotropic antenna to produce the equivalent power density observed from the
actual antenna in a specified direction. The EIRP is still a function of direction, we are not assuming power
is radiated isotropically. Usually EIRP is quoted for bore sight, defined as the axis of maximum radiation.
Occasionally we need to refer to the off axis EIRP which may be in the direction of another system that is
suffering interference.

The EIRP is usually quoted in decibels relative to a reference power, e.g. 1 watt (0 dBW) or 1 milliwatt (0 dBm). The EIRP is a useful quantity for comparing systems as it is system independent; that is, we do not need to know anything else about the transmitter in order to calculate the radiated field strength.

Path Losses

We now need to consider the link parameters - the path loss, which we have already met: it is the sum of all the losses between transmitter and receiver that are not to do with the antennas or feeders.

Path loss = Free space loss + Gas loss + Additional path loss

Signal power at receiver

We now have enough information to calculate the signal power at the receiver:

Received power = EIRP - Path Loss + Receiver antenna gain

E.g. Handheld radio at 448 MHz, EIRP ~ 0.5 Watt = -3 dBW, antenna gain 0 dB (isotropic); for a 1km line of sight path, the loss = 85 dB and the received power = -88 dBW (that is a strong signal). It is easy, but we have assumed the receiver is linear. With high received signal powers, from -40 dBW upwards, this becomes less likely: many strong signals at a receiver may cause undesirable intermodulation products to be generated which will degrade the performance.
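
As a quick check of that example in code (a sketch using only the values given above; gas and additional path losses are taken as zero, as in the text):

```python
import math

f_mhz, d_km = 448.0, 1.0
eirp_dbw = -3.0          # ~0.5 W transmitter with a 0 dBi antenna
rx_gain_dbi = 0.0

# Free space loss with d in km and f in MHz
fsl_db = 32.4 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)
rx_dbw = eirp_dbw - fsl_db + rx_gain_dbi
print(f"free space loss {fsl_db:.1f} dB, received power {rx_dbw:.1f} dBW")
# free space loss 85.4 dB, received power -88.4 dBW
```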

Noise power at the receiver

We are half way to finishing our link budget with the signal power at the receiver. We need to know the
noise power to find the signal to noise ratio. Noise comes from several sources, there is natural noise from
the environment, noise generated within the receiver itself and man made noise. Everything with a
temperature will generate noise - Boltzmann’s law says the noise power per unit bandwidth = kT where k is
Boltzmann’s constant and T is the absolute temperature in Kelvin.

Important features of this type of noise are that it is additive - if you have two noise sources you get the sum of the noise power from each - and that it has a flat spectrum, so if you increase the bandwidth you increase the noise power in proportion.

Boltzmann’s constant is often expressed in units of dBW per Hz per Kelvin, that is, the noise power in dB Watts per Hz of bandwidth for each Kelvin of temperature. Its value is -228.6 dBW/Hz/K. For example, an antenna looking at the ground might have a noise temperature of 290K. The noise power received in a 1MHz bandwidth at a noise temperature of 290K is:

Noise power = -228.6 + 10log(10^6) + 10log(290) = -228.6 + 60 + 24.6 = -144 dBW
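
The same sum in code, as a sketch (290 K and 1 MHz are just the example values above):

```python
import math

k_db = -228.6            # Boltzmann's constant in dBW/Hz/K
t_kelvin = 290.0         # antenna looking at the ground
b_hz = 1e6               # 1 MHz bandwidth

noise_dbw = k_db + 10 * math.log10(b_hz) + 10 * math.log10(t_kelvin)
print(f"noise power = {noise_dbw:.1f} dBW")   # about -144 dBW
```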

Other external noise sources not part of the system include

• The atmosphere including the ionosphere


• The Earth ~ 290K
• The Sun (it is very hot!)
• Galactic sources (Crab Nebula, etc)
• Cosmic background of ~ 2K
• Man made noise (ranging from negligible to very high)

Some typical values are shown in the figure:


There is some evidence that the man made noise levels are increasing in some environments. This is a hot
topic as Ultra-Wideband systems intend to operate below the noise floor of conventional systems and there
is some disagreement between the UWB and conventional camps over what that level is.

Another issue is power line transmission (PLT) technology, which uses mains wiring to send data in the vain hope that the wires will not radiate. They do radiate of course, and the aim is to keep the additional noise below the current noise floor – this dispute is quite heated because prototype PLT systems have been demonstrated by the BBC to be severely damaging to broadcast reception.

Noise at the antenna

The antenna picks up noise from the sources listed above according to its radiation pattern; it also generates noise through its own temperature and losses, and picks up noise from the Earth at 290K in its sidelobes. To estimate the noise picked up by an antenna, a quick method is to take the antenna efficiency as an indication of how the pickup is split between the main beam and the sidelobes. So, if the efficiency is 60%, the external noise in the direction the antenna is pointing accounts for 60% of the pickup and the sidelobes account for the remaining 40%. It is further assumed that half of the sidelobes are looking towards the sky and half towards the ground. For example, with a 60% efficient dish as might be used for satellite TV reception at 12GHz, the sky noise temperature may be ~15K and the ground noise temperature 200K. The total antenna noise is estimated as:

Antenna noise temperature = 0.6 x 15K + 0.4 x (15K + 200K)/2 = 52K

This is nearly 4 times what it would be without the sidelobes, so dish efficiency can sometimes matter even more for noise than it does for received signal power.
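
The same quick estimate in a few lines (the 60% efficiency and the 15 K / 200 K sky and ground temperatures are the example values above; the half-sky, half-ground split for the sidelobes is the assumption stated in the text):

```python
efficiency = 0.6                 # fraction of the pickup in the main beam
t_sky, t_ground = 15.0, 200.0    # Kelvin, example 12 GHz satellite TV values

# Main beam sees the sky; the remaining 40% (sidelobes) is assumed to be
# split half towards the sky and half towards the ground.
t_antenna = efficiency * t_sky + (1 - efficiency) * (t_sky + t_ground) / 2.0
print(f"antenna noise temperature ~ {t_antenna:.0f} K")   # ~52 K
```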

Noise generated in the receiver

Noise is generated by all inline devices: passive devices such as attenuators, waveguides, cables and filters, and active devices such as amplifiers and mixers. Each can be represented reasonably accurately as an additional noise source (a resistor) at the input to the system:

Passive Devices

If a passive device has loss it will add noise to the system proportional to its temperature (Assumed 290K
unless known) and the loss.

E.g. for a gain of 0.9 (10% power loss), TDevice = (1 − 0.9) x 290 = 29K referred to the device output (about 32K referred to the input).

For a feeder loss of 1 dB the noise temperature increase works out as 75K.
Nothing adds zero noise unless it is at a temperature of absolute zero, and feeders always have loss. The loss at microwave frequencies is higher, so feeder lengths need to be minimised to obtain a low overall system noise temperature. Radio astronomy stations, whose performance would be totally devastated by a 75K feeder temperature, dispense with the feeder altogether. They use beam waveguides where the only loss is in the reflecting surfaces – which is low because of their size, plus they can easily be cooled.

Active Devices

All active devices generate noise internally, the reasons are complex, but it can be modeled as an effective
noise temperature, e.g. for an amplifier:

kTrx Brx = amplifier noise power

Typical noise temperatures for real amplifiers are in the range 10K - 1000K. The Noise Factor is a measure of how much noise is added by an active device. When the receiver input is matched by a load resistor at the standard temperature T0 = 290K, the noise power at the input is kT0B:

Noise Factor F = (Noise Out / Noise In) – referenced to input!

Which, if expressed in dB, is FdB = 10log(F); we call this the noise figure.

In terms of noise power, after substituting for k and T0 (10log(kT0) = -228.6 + 24.6 = -204 dBW/Hz):

N0 = FdB - 204 + 10log(B) dBW

Alternatively, we can consider an effective noise temperature Te for the device, defined through:

N0 = kT0B + kTeB = k(T0 + Te)B

so that Te = (F - 1)T0.
Cascaded Sources

When a device is described as having a noise temperature or a noise figure, this is always relative to the
INPUT of the device. When summing noise contributions we need to be careful about gain and loss, an
amplifier will amplify input noise as well as the signal and a lossy device will attenuate input noise as well
as adding noise due to its own noise temperature.

Ptotal = G1G2G3...Gn kT0B + G1G2G3...Gn kT1B + G2G3...Gn kT2B + G3...Gn kT3B + …

(input noise x total gain) + (1st stage noise x total gain) + (2nd stage noise x total gain excluding the 1st stage) + (3rd stage noise x total gain excluding the first two stages) + ...

This tells us something useful – if we have enough gain in our front end low noise amplifier, the noise
figure of the rest of the receiver is of secondary importance. There is a trade off though as too much gain is
bad for performance. The problem is that putting gain at the front end of the receiver, before the filtering, reduces the system's immunity to strong out of band signals. Too much overall gain will also add to the inter-modulation distortion generated from in-band signals and thereby reduce the dynamic range of the receiver.

Example

Find the overall noise figure of a receiver with a 10 dB noise figure preceded by an amplifier with a noise figure of 0.5 dB and a gain of 20 dB.

To solve this we refer everything to the amplifier input: the added noise temperature is the noise temperature of the amplifier plus the noise temperature of the original receiver divided by the amplifier gain.

(Rearrange the equation Ptotal = G1kT0B + G1kT1B + kT2B and divide through by G1.)

T0 = 290K
T1 (0.5 dB NF): noise temperature = (10^(0.5/10) - 1) x 290 = 35K
T2 (10 dB NF): noise temperature = (10^(10/10) - 1) x 290 = 2610K
G1 (20 dB gain) = x100
Tadded = 35 + 2610/100 = 61K, equivalent to a system noise figure of 0.8 dB
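
The same cascade calculation written out as a sketch (the cascade formula referred to the LNA input, with small helper functions converting between noise figure and noise temperature):

```python
import math

def nf_to_temp(nf_db, t0=290.0):
    """Noise figure in dB -> equivalent noise temperature in Kelvin."""
    return (10 ** (nf_db / 10.0) - 1.0) * t0

def temp_to_nf(te, t0=290.0):
    """Equivalent noise temperature in Kelvin -> noise figure in dB."""
    return 10 * math.log10(1.0 + te / t0)

lna_nf_db, lna_gain_db = 0.5, 20.0
rx_nf_db = 10.0

t1 = nf_to_temp(lna_nf_db)              # ~35 K
t2 = nf_to_temp(rx_nf_db)               # ~2610 K
g1 = 10 ** (lna_gain_db / 10.0)         # x100

t_added = t1 + t2 / g1                  # receiver noise referred to the LNA input
print(f"added noise temperature {t_added:.0f} K, "
      f"system noise figure {temp_to_nf(t_added):.1f} dB")   # ~61 K, ~0.8 dB
```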

We have assumed a 60% antenna efficiency, which is quite a challenge for a mass market product. When satellite TV first became popular the original LNBs had noise temperatures of about 250K, so the total noise temperature was 300K and the extra noise from the antenna really didn’t make that much difference. Nowadays LNBs are apparently available with noise temperatures of around 50K, giving a total noise temperature of 100K; now the antenna is responsible for half of the noise power. The LNB improvement equates to an improvement of 4.7 dB in SNR. That is more than enough to permit the dish size to be reduced to 45cm. An even smaller dish could be used if it were not for the congestion in the geostationary orbit. Instead, we stay at 45cm and the improvements in LNBs have allowed an increased data rate.
Why worry about noise figure?

Say we have a receiver with a noise figure of 10 dB and 20MHz bandwidth. What is the equivalent noise
power?

The receiver noise floor is -121 dBW. Say we are looking for a TV satellite with a good antenna, so the antenna noise temperature is low, maybe 50K; if we work out the input noise power we find it is about -139 dBW.

So our receiver noise, at -121 dBW, is about 18 dB higher than the input noise an ideal noiseless receiver would see. An input signal of -120 dBW from a transponder would have an SNR of only about 1 dB; if our system were noiseless this could have been 19 dB.

To fix this the manufacturer fits a 0.5 dB NF LNA which, as we know, gives 61K of additional noise; added to the antenna noise this gives a total of 111K.

The signal would now have an SNR of 15 dB! That is the difference between a good watchable picture and none at all. If we did not have the low noise amplifier the satellite operator would need to compensate with 14 dB more power, which would be economic suicide.
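
A sketch reproducing those numbers (the -120 dBW transponder signal, 50 K antenna temperature, 10 dB noise figure and 61 K LNA contribution are the figures used above; the rest is kTB arithmetic):

```python
import math

K = 1.38e-23             # Boltzmann's constant, W/Hz/K
b_hz = 20e6              # 20 MHz bandwidth
t_antenna = 50.0         # good dish pointing at the sky
sig_dbw = -120.0         # example transponder signal level

def noise_dbw(temp_k):
    """Noise power in dBW for a given noise temperature in the bandwidth above."""
    return 10 * math.log10(K * temp_k * b_hz)

# Bare receiver: 10 dB noise figure -> Te ~ 2610 K
t_rx = (10 ** (10 / 10.0) - 1.0) * 290.0
print(f"receiver noise floor {noise_dbw(t_rx):.0f} dBW")                  # ~ -121 dBW
print(f"SNR without LNA {sig_dbw - noise_dbw(t_antenna + t_rx):.0f} dB")  # ~ 1 dB

# With the 0.5 dB NF, 20 dB gain LNA in front: ~61 K added, referred to its input
t_with_lna = t_antenna + 61.0
print(f"SNR with LNA {sig_dbw - noise_dbw(t_with_lna):.0f} dB")           # ~ 15 dB
```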

In perspective

Remember to keep noise temperatures in perspective; there is usually no need for one part of a system to have a vastly better noise performance than another. Low noise receivers are important in space and satellite systems but much less important for terrestrial systems in noisy environments.

In a typical business area at UHF the noise temperature seen by the antenna will be 1000K or more. This is of the same order as Te for our example receiver, so improving our 10 dB NF receiver is not going to make such a large difference to the signal to noise ratio. Adding a 20 dB gain amplifier in front would actually make the receiver worse, as it would degrade the strong signal handling – there are lots of strong signals around in industrial areas.
Things are different for SETI, trying to look for ET at π times the hydrogen line (~4.5 GHz) from a quiet location with a good antenna and an input noise of ~10K. There even the receiver noise temperature of 61K would be considered poor, and cryogenic cooling and very low noise systems are needed, as they are for radio astronomy.

Summary

Do link budgets in dB as it is easier. The steps are:


Find the Signal (dBW)

1. Work out the transmitter EIRP


2. Work out the path loss
3. Add the receiver antenna gain
4. Subtract the feeder losses

Find the Noise (dBW)

1. Find the natural noise


2. Add the noise from the antenna
3. Add the noise from the feeders etc.
4. Add the noise from the receiver

Find the SNR (dB) = Signal (dBW) – Noise (dBW)

Some Examples
Voyager - again. Last time we calculated that Goldstone received a signal level of -180 dBW from Voyager. Building on our previous example, what data rate might one expect to be able to receive from a space probe?

Goldstone uses the best LNAs available and the system noise temperature is around 30K. The antenna is designed for very low sidelobe noise and it tends not to be used at low elevation angles. Goldstone uses cryogenic cooling on the LNAs; much of the noise power comes from the waveguide loss of 0.2 dB, equivalent to a noise temperature of 14K. The rest comes from the atmospheric loss and the antenna noise temperature, both of which depend on the elevation angle. This equates to a noise power spectral density of:
N0 = -228.6 + 10log(30) = -213.8 dBW/Hz

The S/N (strictly, the carrier to noise density ratio C/N0) is the difference:

C/N0 = -180 – (-213.8) = 33.8 dBHz

Shannon's theory says that in the limit we get no errors with a bit energy to noise density ratio (Eb/N0) of -1.6 dB. Some modern exotic coding schemes get to within a dB or so of this, but remember Voyager was designed in the 1970s. The coding scheme deployed needs an Eb/N0 of 2.5 dB, which gives a maximum data rate of 33.8 – 2.5 = 31.3 dBHz, equivalent to about 1.35 kb/s.

(note: we have ignored several degradations, for example Goldstone’s antenna efficiency, in this link
budget to avoid clouding the point)
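
The same arithmetic as a sketch (the numbers are those given above, with the same degradations ignored):

```python
import math

signal_dbw = -180.0       # received at Goldstone, from the earlier example
t_sys = 30.0              # system noise temperature in Kelvin

n0_dbw_per_hz = -228.6 + 10 * math.log10(t_sys)   # ~ -213.8 dBW/Hz
cn0_dbhz = signal_dbw - n0_dbw_per_hz             # ~ 33.8 dBHz

ebn0_required_db = 2.5                            # 1970s-era coding scheme
rate_dbhz = cn0_dbhz - ebn0_required_db           # ~ 31.3 dBHz
rate_bps = 10 ** (rate_dbhz / 10.0)
print(f"C/N0 {cn0_dbhz:.1f} dBHz -> maximum data rate ~ {rate_bps / 1000:.2f} kb/s")
```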

A PMR system

Back to our PMR448 - What is the maximum line of sight range?

Radio specification:

Handheld radio 448 MHz, TX ~ 0.5 Watt


Antenna gain, 0 dB (Isotropic)
Noise figure = 6 dB, Bandwidth = 25kHz
Minimum usable SNR 12 dB

Find the noise floor:

Assume the antenna is seeing half ground (290K), half sky (30K) = 160K average

Noise figure 6 dB gives (4 -1) x 290 = 870K noise temp

Total noise temperature = 870 + 160 = 1030 K


Noise power = kTB = 1.38 x 10^-23 x 1030 x 25,000 = 3.55 x 10^-16 W = -155 dBW

Transmitted Power (EIRP):

Our EIRP is the transmitter power + the antenna gain

EIRP = -3 dBW power + 0 dBi antenna gain = -3 dBW

Maximum path loss capability:

This is simply the difference between the EIRP and the minimum required signal power, so we first need to find the minimum required signal power:

We need 12 dB SNR to communicate so the signal power must be at least 12 dB above the noise floor

Required power = –155+12 = -143 dBW


Path loss capability = (-3) - (-143) = 140 dB

Path loss with distance:

At this frequency gases can be ignored, so the line of sight path loss is given by

Path Loss = 32.4 + 20log(d) + 20log(448) = 85.4 + 20log(d) dB

This should equal 140 dB, so

20log(d) = 54.6

and

d = 10^(54.6/20) = 537km

The reason that PMR 448 handhelds don't work over 537km is that there are not usually any 537km line of sight paths. We have, of course, completely ignored interference here. Interference is frequently the major limiting factor in mobile communications systems, which is why it is important to understand the propagation characteristics of unwanted co-channel signals as well as wanted ones.
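
The whole chain above can be checked in a few lines; this sketch uses the same assumptions (isotropic antennas, 160 K antenna temperature, free space loss only). Carrying the unrounded values through gives a range nearer 500 km than 537 km, but the conclusion is unchanged.

```python
import math

K = 1.38e-23                          # Boltzmann's constant, W/Hz/K
f_mhz, bw_hz = 448.0, 25e3
eirp_dbw = -3.0                       # 0.5 W with a 0 dBi antenna
nf_db, min_snr_db = 6.0, 12.0
t_antenna = 160.0                     # half ground (290 K), half sky (30 K)

t_rx = (10 ** (nf_db / 10.0) - 1.0) * 290.0                     # ~870 K
noise_dbw = 10 * math.log10(K * (t_antenna + t_rx) * bw_hz)     # ~ -155 dBW

max_loss_db = eirp_dbw - (noise_dbw + min_snr_db)               # ~140 dB capability

# Invert: Path Loss = 32.4 + 20log(d_km) + 20log(f_MHz)
d_km = 10 ** ((max_loss_db - 32.4 - 20 * math.log10(f_mhz)) / 20.0)
print(f"noise floor {noise_dbw:.1f} dBW, max path loss {max_loss_db:.1f} dB, "
      f"line of sight range ~ {d_km:.0f} km")
```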

Signals and Noise

An exact copy of the transmitted signal is not what arrives at the receiver. Noise is added to the signal; the amplitude and phase of the signal vary with time and location; there is a time delay, which may be variable; and the signal shape is distorted. So far we have treated noise as a purely random signal which is added at the receiver input - we call this Additive White Gaussian Noise (AWGN).

I and Q representation

Typically, we represent signals by their complex baseband equivalents - the I and Q representation - where I and Q are orthogonal and combined as a complex number:

n(t) = xn(t) + jyn(t)

where xn and yn are random variables of zero mean. The mean noise power is related to the noise variance. We will almost always treat noise as additive and Gaussian.

Showing

We know we can represent a random variable by its Probability Density Function p(x). The probability that the value lies between two values a and b is:

P(a < x < b) = ∫ p(x) dx  (integrated from a to b)

All PDFs by definition have a total integral of 1.

So the probability that the value is less than a is:

P(a) = ∫ p(x) dx  (integrated from -∞ to a)

We call P the cumulative distribution, and so the area between a and b is the probability below b minus the probability below a:

P(a < x < b) = P(b) - P(a)

Differentiating gives p(x) = dP(x)/dx.

The Expectation of some function f(x) of a random variable x is:

E{f(x)} = ∫ f(x) p(x) dx  (integrated over all x)

Expectation is a statistical term referring to the expected (average) value of the function. Here we are interested in the Expectation (mean) of x, which we get if we set f(x) = x. This is often called the first moment of x in statistics.

The variance is given by:

σ² = E{(x - E{x})²} = E{x²} - (E{x})²

To find the power of the complex signal n(t) we multiply by its complex conjugate:

N(t) = n(t) n*(t) = xn²(t) + yn²(t)

The mean power is the expectation of N(t):

Expectation is a statistical term, according to Wikipedia:

In probability theory the expected value (or mathematical expectation) of a random variable is the sum of
the probability of each possible outcome of the experiment multiplied by its payoff ("value"). Thus, it
represents the average amount one "expects" as the outcome of the random trial when identical odds are
repeated many times. Note that the value itself may not be expected in the general sense; it may be unlikely
or even impossible. For example, the expected value from the roll of an ordinary six-sided die is 3.5, which
is not one of the possible outcomes.

Remember that xn and yn oscillate about zero, which means their mean values are zero, so

E{xn²} = σx² and E{yn²} = σy²

It is assumed xn and yn have the same variance σn², so

Mean noise power = E{N(t)} = σx² + σy² = 2σn²

Fading Channels

The signal amplitude at the receiver varies over time; we generally call this signal fading. We split this into slow fading and fast fading. Everything is relative, but slow fading generally means variations in amplitude that change slowly with time, e.g. slowly compared to the transmission frame length. Often engineers think of slow fading as fading that the system might have time to react to in some way, for example using an AGC system. Fast fading is signal variation that is considered too rapid for the system to follow.

Rain fading is an example of slow fading – with time variability measured in seconds and minutes. Mobile
operators tend to consider shadowing by buildings as slow fading, periods of seconds while passing
buildings.

Fast fading generally means variations in the signal amplitude that change rapidly with time, E.g. times of
the order of a packet, or even a symbol.

Fast fading typically varies about a mean value and often fast fading is superimposed on slow fading.
Multipath can cause fast fading in mobile systems. Tropospheric scintillation is another example of fast
fading, though it is really a form of multipath too.

Models for fading channels

Firstly, consider the channel to be narrow band, that is, the fading is flat across the channel bandwidth with no frequency selective fading. We can model flat fading channels as below: often we combine two path loss models, one for slow variations, the slow fading A(t), and another for fast variations, the fast fading α(t).

These models work well for channels that are not frequency selective - some narrow band fixed links for example - but they are not sufficient where there is frequency selective fading. We call such channels wideband channels, and “wideband” could mean 1GHz or 100Hz: it depends on the channel, not on the bandwidth compared to the centre frequency.
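
As a sketch of that two-part model (the choice of distributions here is an assumption for illustration, not something specified above): take the slow fading A(t) as log-normal shadowing, the fast fading α(t) as a Rayleigh envelope, and multiply the two on the amplitude.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Slow fading A(t): log-normal shadowing with, say, a 6 dB standard deviation
shadow_db = rng.normal(0.0, 6.0, n)
A = 10 ** (shadow_db / 20.0)

# Fast fading alpha(t): Rayleigh envelope of unit mean power (no line of sight)
alpha = np.abs(rng.normal(0.0, np.sqrt(0.5), n) + 1j * rng.normal(0.0, np.sqrt(0.5), n))

envelope_db = 20 * np.log10(A * alpha)     # fade relative to the nominal mean path loss
print(f"median {np.median(envelope_db):.1f} dB, "
      f"1% deepest fades below {np.percentile(envelope_db, 1):.1f} dB")
```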

Wideband Channels

Ideally, we expect a channel to have a perfectly flat and linear frequency response. We frequently want to know the channel impulse response, as we are interested in transmitting data.
Real channels show distortion, and the time domain representation is frequently used: the received pulse is always distorted because of the frequency and delay response of the channel.

Multipath components are frequently present in mobile systems and if two components are of similar
amplitude, they will interfere constructively and destructively as we have already seen in our ground
reflection example:

If there is no energy in the channel at some frequency, the receiver cannot work there - there are ways to overcome this, such as the OFDM systems used in Digital Terrestrial Television (DTT). Apart from this filtering effect, pulses are also spread out over time. We represent this with the channel impulse response:

Typically, the receiver cannot distinguish the resulting mess unless it is specifically designed to (Rake, MIMO).
We define the number of components and the delay parameters by applying a threshold, which is often set so that 90% of the energy is contained. This is the “Delay Window”.

The number of components is the number within the delay window; the Delay Spread is a measure of the spread of the multipath delays within the delay window.

Importance of multipath delays

Data is traditionally transmitted in the time domain as a sequence of symbols. Each symbol has a set duration; for example, at 1 Mbaud each symbol lasts 1 µs.

Delay spread causes the error rate to stop decreasing with increased SNR. There is an error floor effect where increasing the power will not improve the error rate.
All digital systems have an error floor - but generally it helps if it is at a low error rate, like 10^-12 or better. By using coding techniques it is possible to compensate for high channel error rates, though possibly not for a situation as bad as in the top line above.

The tapped delay line model

This is a popular model for multipath simulation work. We can model multipath as a series of amplitude weighted, delayed copies of the input signal.

Note that the taps are considered uncorrelated. The delays τ1, τ2 … τn are not fixed, but vary in accordance with the delay spread.

We define the impulse response h(t,τ) - the delay spread function

( * represents Convolution)
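
A minimal sketch of a tapped delay line with fixed taps (in a real fading simulation the gains and delays would be time-varying and generally complex, as noted above; the tap values here are invented for illustration):

```python
import numpy as np

def tapped_delay_line(signal, tap_gains, tap_delays_samples):
    """Apply a tapped delay line channel to a sampled signal.

    Each tap is an amplitude-weighted, delayed copy of the input and the
    output is their sum - equivalent to convolving the input with the
    channel impulse response built from the taps.
    """
    h = np.zeros(max(tap_delays_samples) + 1)       # impulse response
    for gain, delay in zip(tap_gains, tap_delays_samples):
        h[delay] += gain
    return np.convolve(signal, h)

# Example: a direct path plus two weaker echoes 2 and 5 samples later
tx = np.array([1.0, -1.0, 1.0, 1.0, -1.0])          # a few symbols
rx = tapped_delay_line(tx, [1.0, 0.5, 0.2], [0, 2, 5])
print(np.round(rx, 2))
```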
Defining Delay Spread

Here are a set of definitions for some common parameters used in modelling multipath.

The received signal, P(t), is made up from N taps, P1(t) to PN(t)

Excess delay is the delay of any tap relative to the first


Total delay is the delay difference between first and last tap
Total power is the sum of all the tap powers

Mean delay - the average delay of the taps, weighted by their power

The rms delay spread - the power-weighted standard deviation of the tap delays about the mean delay

The rms delay spread is often quoted for various environments. Modern demodulators are able to de-spread
and make use of the multipath components in the incoming signal and therefore gain a significant increase
in signal energy. RAKE receivers are an example of this technology.
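
Those definitions translate directly into a few lines of code. The sketch below computes the total power, mean delay and rms delay spread from a set of tap powers and delays (the tap values are invented for illustration):

```python
import numpy as np

def delay_parameters(tap_powers, tap_delays_us):
    """Total power, power-weighted mean delay and rms delay spread of a set of taps.

    Powers are linear (e.g. watts), delays in microseconds. The rms delay
    spread is the power-weighted standard deviation of the delays about
    the mean delay.
    """
    p = np.asarray(tap_powers, dtype=float)
    t = np.asarray(tap_delays_us, dtype=float)
    total_power = p.sum()
    mean_delay = np.sum(p * t) / total_power
    rms_spread = np.sqrt(np.sum(p * (t - mean_delay) ** 2) / total_power)
    return total_power, mean_delay, rms_spread

# Illustrative urban-style profile: strong first arrival plus weaker echoes
total, mean_d, rms_d = delay_parameters([1.0, 0.4, 0.2, 0.1], [0.0, 0.5, 1.2, 2.5])
print(f"total power {total:.2f}, mean delay {mean_d:.2f} us, rms delay spread {rms_d:.2f} us")
```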

Delay spread Examples


Here are some typical values at 1-2 GHz for mobile systems. Inter-symbol interference occurs when the
delay spread is similar to the symbol duration. For reference, one GSM symbol is 3.7µs long, so we worry
about it most in Urban and Hilly areas.

Environment              rms delay spread (µs)
Indoor                   0.01 - 0.05
Mobile satellite         0.04 - 0.05
Open area                < 0.2
Suburban macrocell       < 1
Urban macrocell          1 - 3
Hilly area macrocell     3 - 10

Frequency Domain

Signals are dispersed not only in time but also in frequency; otherwise the delays would be constant. It is just as valid to work in the frequency domain, which leads to the phenomenon of Doppler spread. This requires the transmitter, the receiver or the scattering object to be moving - the term “mobile radio” gives us a clue here…. It can also be caused by the medium moving or evolving, the ionosphere for example.

There is another version of the transfer function: a filter with a time variant frequency response, called the Time Variant Transfer (TVT) function. This is actually the Fourier Transform of the delay spread function h(t,τ) with respect to the delay variable τ.

Bello Functions

The relationships between the family of channel functions are named after a chap called Bello, who defined them in 1963.
The Fourier transform and inverse Fourier transform are beyond the scope of this course; however, it is useful to know that this is a model that allows us to move between the time domain and the frequency domain and that it can easily be implemented in DSP hardware.

Measurements

Measurements are needed for developing propagation models, and models are needed for design and planning. Developing models from Maxwell’s equations is often not practical: we don’t always have enough environmental data for a completely deterministic solution. The type of measurement required depends on the service. Long term statistics measured over many years are needed to assess fading distributions on microwave links; here we are interested in events that only last for a few minutes per year - 0.01% of a year is only 53 minutes - and it takes many years to get a reliable measurement. A large number of short term measurements are needed to test a mobile coverage prediction model; here we are more interested in probabilities of call failure of a few percent, but for a very large number of locations.

Beacon measurements are typically used in satcoms and on terrestrial links when a long term measurement is needed. A reliable test signal (usually CW) is sent, and this is received and recorded by a beacon receiver made up from specialised hardware designed to give accurate long term results. A large dynamic range is often needed and the system must be very stable over a long time, which means it will probably be expensive. There are many years of measurements available in ITU-R databases etc.

Here is an example of a long term measurement, in this case the beacons on the Olympus satellite, which was at an elevation of around 30°.
The monthly statistics for 30 GHz below show that there is considerable monthly variability. This tells us we will need many months of measurement to get a good idea of the “average”. Note the similar curve shapes - this implies the “worst month” can be found by scaling from the average value.

Drive-round measurements are commonly used for mobile systems. A transmitter is set up in a “typical” location to represent a base station and a measurement vehicle drives a set route. GPS etc. is used for positioning, and databases and aerial photos are used for analysis.
Google Earth now has some very useful mapping and visualisation tools built in. The image above was produced by submitting a track file of latitude, longitude and signal strength to the software. It is then possible to zoom in and move around the track and note the effects of the street layout on the received signal level.

What to measure

Measuring power alone might not be sufficient; there could also be Doppler spreading and delay spread, e.g. from rain scatter. To measure these we often use channel sounders. There are many types - put “channel sounder” into Google and you will get hundreds of hits; it is a big business. One method is to transmit a known pseudo random sequence and, at the receiver, use a sliding correlator to resolve the individual multipath components. A very simple sounder is shown diagrammatically below:

In this type of sounder the resolution and ambiguity depend on the length and rate of the PN sequence.
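
A toy version of that idea, as a sketch rather than a real sounder design: correlate the received signal against the known ±1 pseudo-random sequence and the multipath components show up as peaks in the correlation at their respective delays. The channel taps and noise level here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# A known pseudo-random probing sequence of +/-1 chips (one period)
pn = rng.choice([-1.0, 1.0], size=511)

# A toy multipath channel: a direct path plus two delayed, weaker echoes
h = np.zeros(40)
h[0], h[7], h[23] = 1.0, 0.5, 0.3

rx = np.convolve(pn, h) + rng.normal(0.0, 0.2, pn.size + h.size - 1)

# Correlate the received signal against the known sequence; peaks in the
# correlation sit at the delays (in chips) of the multipath components.
corr = np.correlate(rx, pn, mode="full") / pn.size
lags = np.arange(corr.size) - (pn.size - 1)
strongest = lags[np.argsort(np.abs(corr))[-3:]]
print("estimated multipath delays (chips):", sorted(map(int, strongest)))   # expect [0, 7, 23]
```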

Channel Sensing

More advanced communications systems sense the channel to make optimum use of it. Most modern cell
phones do this. This can be an active negotiation between transmitter and receiver, or it can be only at the
receiver, the latter is necessary for broadcast. A sophisticated scheme is used in the new standard for digital
HF broadcasting, DRM. HF links via ionospheric reflection can suffer severe delay and Doppler spreading.

The DREAM software package is an open source DRM decoder based on DSP. It is shown here receiving RTL at ~6 MHz at a data rate of 21kb/s in a 10kHz channel, mode B. DRM mode B in 10kHz uses 206 COFDM carriers, and here they are modulated with 64QAM. The dialogue is telling us we have an SNR of 21.6 dB and a delay spread of 3.59ms. A symbol in this transmission mode lasts 27ms, so this is not a problem - the signal decodes properly.

The software can measure the channel impulse response; here the decoder has used known patterns in the received signal to measure the channel impulse response - the DRM standard includes special symbols to aid channel estimation.

The plots are a few seconds apart and four components are detected. Note the delay spacing is ~2ms (600km), probably an F-layer reflection 300km up. This is the resulting DRM superframe of about 1.2 seconds at the decision stage:
There are many samples of each of the 206 carriers superimposed in this constellation - but you can make out the FAC codes and SDC codes which are designed to aid data recovery and identify the transmission. The FACs are 3 dedicated pilot carriers transmitted at offsets of 750 Hz, 2250 Hz and 3000 Hz that always use QPSK modulation so they may be more robustly decoded. Their power is usually double that of the other carriers to help frequency acquisition. They contain 64 bits of data plus 8 bits of parity that tell the decoder the transmission mode and what the multiplex contains.

The SDC is a service descriptor and it is only sent at the beginning of each transmission frame. It contains
useful information like the station name and programme schedules.
