
Introduction to communication systems:

Communication is a way of conveying information. Technology changes, but communication
lasts; the availability of communication technologies has had a great impact on human lives.
When we communicate, we are sharing information. This sharing can be local or remote.
Remote communication takes place over a distance; the term telecommunication, which includes
telephony and television, means communication at a distance. Telecommunications can
therefore be grouped broadly into two categories: voice and data.
Three basic components comprise a full communication channel:
1. The Sender: a transmitter encodes the message in a language that can be understood by the
receiver.
2. The Receiver: decodes the message.
3. The Medium: air, copper wires, or optical fiber. These carry the message from the
sender to the receiver.

Fig: Communication system
Telecommunication systems have now made it possible to communicate with virtually anyone at
any time. Early telegraph and telephone systems used copper wire to carry signals over the earth's
surface and across oceans, and high-frequency (HF) radio, also commonly called shortwave radio,
made intercontinental telephone links possible.
There are now different types of communication systems, including the following:

Telephone System

Cellular Systems


Packet Data Systems

Satellite Systems

Microwave Systems

Fiber Optic Systems

Every communication system has its own frequency range, capacity, applications and
implementation cost.
On the basis of the transmission medium, there are two types of communication system:

Wired communication system

Wireless communication system
Modulation in Electronic Communication
Low frequency signals, like the audio messages, cannot propagate efficiently by themselves over
a long distance. They need some carriers to carry them over several kilometres. The cables can
guide the signals. For longer distances, wireless communication has proved to be very useful.
Audio signals are generated by low-frequency mechanical vibrations. A transducer, in this case
a microphone, converts the sound wave into an electrical variation with time. These low
frequency signals cannot propagate over long distances in space as they get badly attenuated. In
electromagnetic broadcast, the low audio frequency electrical signals (20 Hz to 20 kHz) are,
therefore, carried by a much higher frequency radio wave (in the range of hundreds of kHz). This
process is called modulation. Radio or television broadcasts are examples of modulated
transmission.


Modulation is needed to:
1. Reduce noise and interference.
2. Allow channel assignment.
3. Allow multiplexing, i.e. transmission of several messages over a single
channel.
4. Overcome equipment limitations.
High radio frequency (r.f.) signals are more efficient in carrying electrical signals over a very
long distance through free space because of their minimal attenuation at the r.f. range. The signal
power is launched most efficiently into space or picked up if the geometrical size of the antenna
and the wavelength of the electrical signal are comparable. In the audio frequency signal, the
wavelength is several kilometres long. No practical antenna of such enormous size can be
fabricated. The size of the antenna becomes practicable in the r.f. range. Low frequency audio
messages are, therefore, carried by the high frequency radio waves through space. Because the
carrier signal wave length is comparable to the antenna size, it is possible to satisfactorily launch
the radio signal into space.
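The antenna-size argument can be checked with a quick wavelength calculation, lambda = c/f; the frequencies below are illustrative examples, not values from the text:

```python
# Wavelength lambda = c / f, with c the speed of light in free space
C = 3e8  # m/s (approximate)

def wavelength_m(freq_hz):
    """Free-space wavelength in metres for a given frequency in Hz."""
    return C / freq_hz

# an audio frequency: the wavelength is tens of kilometres
print(wavelength_m(10_000))   # 30000.0 m, i.e. 30 km
# an r.f. carrier: the wavelength shrinks to metres
print(wavelength_m(100e6))    # 3.0 m
```

This makes concrete why an audio-frequency antenna would be impractically large while an r.f. antenna is not.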
If audio signals arising from different sources are made to propagate over the same medium, they
interfere badly as they occupy the same frequency spectrum. When made to ride on carrier
frequencies of different magnitudes, the original signals get translated in the frequency domain
by an amount dictated by the carrier frequencies. If the carrier frequencies are sufficiently apart,
then the audio signals also get separated and do not interfere. Modulation also helps in allocating
separate channels to message signals that overlap in time and frequency, so that after modulation
they coexist without interference.
A carrier signal S_C(t) = A_C cos(w_C t + p_C) has three independent parameters describing its
behaviour. These are its amplitude A_C, phase p_C, and frequency w_C. Any of these three
parameters can be varied directly with the message signal m(t). The message therefore produces
a variation in the carrier amplitude or it affects the instantaneous frequency or phase of the
carrier sinusoid.

When the amplitude of the carrier varies according to the message it is called amplitude
modulation. Similarly, when the message changes the phase or frequency of the carrier, the result
is phase or frequency modulation. Both phase and frequency variations affect the instantaneous
angle a(t) = (w_C t + p_C) of the sinusoidal/co-sinusoidal carrier. Thus, modulation of the phase or
the frequency of a carrier signal may also be jointly called angle modulation. In the case of
amplitude modulation, it is the amplitude A_C(t) of the carrier that becomes a function of time.
The amplitude is made to vary directly with the message m(t). An amplitude modulator effectively
multiplies the carrier signal S_C(t) by the message signal m(t).
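A minimal numerical sketch of amplitude modulation; the function name and parameter values are illustrative, with the envelope (carrier_amp + m(t)) riding on the carrier:

```python
import math

def am_modulate(message, carrier_amp, carrier_freq, sample_rate):
    """Amplitude modulation sketch: the instantaneous amplitude
    (carrier_amp + m) varies directly with the message samples."""
    out = []
    for n, m in enumerate(message):
        t = n / sample_rate
        out.append((carrier_amp + m) * math.cos(2 * math.pi * carrier_freq * t))
    return out

# 1 kHz tone message on a 100 kHz carrier, sampled at 1 MHz (illustrative)
fs = 1_000_000
msg = [0.5 * math.sin(2 * math.pi * 1000 * n / fs) for n in range(1000)]
am = am_modulate(msg, carrier_amp=1.0, carrier_freq=100_000, sample_rate=fs)
```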


Figure : Amplitude modulation and angle modulation of the carrier by the message.
In frequency modulation, the instantaneous frequency w(t) is a function of time, which varies by
an amount dw around the un-modulated frequency w_C. This deviation is proportional to the
instantaneous value of the modulating signal m(t). So the instantaneous frequency of the carrier
becomes w(t) = w_C + dw(t) in the case of frequency modulation. The deviation in the frequency
dw(t) is equal to the product k_f m(t). The multiplicative constant k_f is called the frequency modulation
index and is responsible for the extent of modulation produced in the instantaneous carrier
frequency.
Similarly, for phase modulation, the phase is expressed as p(t) = p_C + dp(t); hence it becomes a
function of time. The phase changes directly with the modulating message signal so that the
phase deviation can be represented as dp(t) = k_p m(t). Here, the symbol k_p is the phase
modulation index.
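The frequency-modulation relation, with instantaneous frequency equal to the carrier frequency plus a deviation proportional to m(t), can be sketched by accumulating phase sample by sample; the function name and example values are illustrative:

```python
import math

def fm_modulate(message, carrier_amp, carrier_freq, k_f, sample_rate):
    """Frequency modulation sketch: instantaneous frequency is
    carrier_freq + k_f * m(t); phase is the running sum (discrete
    integral) of the instantaneous frequency."""
    phase = 0.0
    out = []
    for m in message:
        inst_freq = carrier_freq + k_f * m   # deviation proportional to m(t)
        phase += 2 * math.pi * inst_freq / sample_rate
        out.append(carrier_amp * math.cos(phase))
    return out

# with a zero message, the output is just the carrier
fm = fm_modulate([0.0] * 10, 1.0, 100.0, 50.0, 8000)
```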

Modulation adds certain advantages to the communication process. It makes possible the
construction of a practical and an efficient antenna. It differentiates those message signals that
have components overlapping both in time and frequency. This is achieved by making the
message signals of lower frequency to ride on carriers of different frequencies of much higher
magnitude. This is much like digging out channels and directing the water streams through non-
overlapping courses. This restricts the flow of separate messages into separate channels.
There are other processes where the electrical signals are carried directly from the source to the
destination by physical cables. This method of directing the original signal, called the base-band,
directly through the medium is called base-band transmission. An example may be the old
telegraph where the coded message in the form of dots and dashes were carried over electric
cables from one point to another.
Digital modulation methods
In digital modulation, an analog carrier signal is modulated by a discrete signal. Digital
modulation methods can be considered as digital-to-analog conversion, and the corresponding
demodulation or detection as analog-to-digital conversion. The changes in the carrier signal are
chosen from a finite number of M alternative symbols (the modulation alphabet).

A simple example: A telephone line is designed for transferring audible sounds, for example
tones, and not digital bits (zeros and ones). Computers may however communicate over a
telephone line by means of modems, which are representing the digital bits by tones, called
symbols. If there are four alternative symbols (corresponding to a musical instrument that can
generate four different tones, one at a time), the first symbol may represent the bit sequence 00,
the second 01, the third 10 and the fourth 11. If the modem plays a melody consisting of 1000

tones per second, the symbol rate is 1000 symbols/second, or baud. Since each tone (i.e.,
symbol) represents a message consisting of two digital bits in this example, the bit rate is twice
the symbol rate, i.e. 2000 bits per second. This is similar to the technique used by dialup modems
as opposed to DSL modems.
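The symbol-rate arithmetic in the modem example above can be written out directly (a sketch; the function name is illustrative):

```python
import math

def bit_rate(num_symbols, baud):
    """Data rate for an M-ary alphabet: log2(M) bits per symbol
    times the symbol (baud) rate."""
    bits_per_symbol = int(math.log2(num_symbols))
    return bits_per_symbol * baud

# the four-tone modem above: 1000 symbols/s, 2 bits per symbol
print(bit_rate(4, 1000))    # 2000 bits per second
```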
According to one definition of digital signal, the modulated signal is a digital signal, and
according to another definition, the modulation is a form of digital-to-analog conversion. Most
textbooks would consider digital modulation schemes as a form of digital transmission,
synonymous to data transmission; very few would consider it as analog transmission.
Fundamental digital modulation methods
The most fundamental digital modulation techniques are based on keying:
- PSK (phase-shift keying): a finite number of phases are used.
- FSK (frequency-shift keying): a finite number of frequencies are used.
- ASK (amplitude-shift keying): a finite number of amplitudes are used.
- QAM (quadrature amplitude modulation): a finite number of at least two phases and at
least two amplitudes are used.
In QAM, an in-phase signal (or I, with one example being a cosine waveform) and a quadrature-
phase signal (or Q, with an example being a sine wave) are amplitude modulated with a finite
number of amplitudes, and then summed. It can be seen as a two-channel system, each channel
using ASK. The resulting signal is equivalent to a combination of PSK and ASK.
In all of the above methods, each of these phases, frequencies or amplitudes is assigned a
unique pattern of binary bits. Usually, each phase, frequency or amplitude encodes an equal
number of bits. This number of bits comprises the symbol that is represented by the particular
phase, frequency or amplitude.
If the alphabet consists of M = 2^N alternative symbols, each symbol represents a message
consisting of N bits. If the symbol rate (also known as the baud rate) is f_s symbols/second (or
baud), the data rate is N x f_s bit/second.

For example, with an alphabet consisting of 16 alternative symbols, each symbol represents 4
bits. Thus, the data rate is four times the baud rate.
In the case of PSK, ASK or QAM, where the carrier frequency of the modulated signal is
constant, the modulation alphabet is often conveniently represented on a constellation diagram,
showing the amplitude of the I signal at the x-axis, and the amplitude of the Q signal at the y-
axis, for each symbol.
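A minimal sketch of a 16-QAM modulation alphabet as a set of constellation points; the amplitude levels {-3, -1, 1, 3} and the symbol-to-point ordering are illustrative choices, not a standard mapping:

```python
from itertools import product

def qam16_constellation():
    """16-QAM sketch: 4 in-phase amplitudes x 4 quadrature amplitudes
    give 16 (I, Q) constellation points, one per 4-bit symbol."""
    levels = (-3, -1, 1, 3)
    points = list(product(levels, levels))      # all (I, Q) pairs
    return {symbol: point for symbol, point in enumerate(points)}

const = qam16_constellation()
```

Plotting the I amplitude on the x-axis and the Q amplitude on the y-axis for each symbol gives the familiar 4x4 constellation diagram.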

Demodulation
Demodulation is the process of recovering the signal intelligence from a modulated carrier wave.
This process, also called detection, is the reverse process of modulation. The wireless signal
consists of a radio frequency (high frequency) carrier wave modulated by an audio frequency
(low frequency) signal. The diaphragm of a telephone receiver or a loudspeaker cannot vibrate
at such a high frequency. Moreover, this frequency is beyond the audible range of the human
ear. So it is necessary to separate the audio frequencies from the radio-frequency carrier wave.
Important characteristics of a detector
(i) Linearity
(ii) Sensitivity
(iii) Signal handling capacity
Linearity is determined by how accurately the output of the detector follows the input signal. If
the variation in the amplitude of output is proportional to the input amplitude, the detector is said
to be linear. The sensitivity is a measure of how much the input signal is delivered as useful
output. The signal handling is a measure of the signal amplitudes that a detector can accept
without distortion.

There are several ways of demodulation depending on how parameters of the base-band signal
are transmitted in the carrier signal, such as amplitude, frequency or phase. For example, for a
signal modulated with a linear modulation, like AM (Amplitude Modulation), we can use a
synchronous detector. On the other hand, for a signal modulated with an angular modulation, we
must use an FM (Frequency Modulation) demodulator or a PM (Phase Modulation) demodulator.
Different kinds of circuits perform these functions.
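As an illustration of AM detection, a crude envelope detector can be sketched in software: rectify the signal, then low-pass filter it. Here a moving average stands in for the RC low-pass filter of a hardware detector, and all names and the window length are illustrative:

```python
def envelope_detect(am_samples, window):
    """Crude AM envelope detector sketch: take |s| (rectification),
    then smooth with a moving average of `window` samples."""
    rect = [abs(s) for s in am_samples]
    out = []
    for n in range(len(rect)):
        chunk = rect[max(0, n - window + 1): n + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```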
Many techniques, such as carrier recovery, clock recovery, bit slip, frame synchronization, rake
receivers, pulse compression, Received Signal Strength Indication, and error detection and
correction, are performed only by demodulators, although any specific demodulator may
perform only some or none of these techniques.
Radio Frequency spectrum
Radio spectrum refers to the part of the electromagnetic spectrum corresponding to radio
frequencies, that is, frequencies lower than around 300 GHz (or, equivalently, wavelengths
longer than about 1 mm). Different parts of the radio spectrum are used for different radio
transmission technologies and applications. Radio spectrum is typically government regulated in
developed countries and, in some cases, is sold or licensed to operators of private radio
transmission systems (for example, cellular telephone operators or broadcast television stations).
Ranges of allocated frequencies are often referred to by their provisioned use (for example,
cellular spectrum or television spectrum). A band is a small section of the spectrum of radio
communication frequencies, in which channels are usually used or set aside for the same
purpose. Above 300 GHz, the absorption of electromagnetic radiation by Earth's atmosphere is
so great that the atmosphere is effectively opaque, until it becomes transparent again in the near-
infrared and optical window frequency ranges. To prevent interference and allow for efficient
use of the radio spectrum, similar services are allocated in bands. For example, broadcasting,
mobile radio, or navigation devices, will be allocated in non-overlapping ranges of frequencies.
Each of these bands has a basic band plan which dictates how it is to be used and shared, to
avoid interference and to set protocols for the compatibility of transmitters and receivers. As a
matter of convention, bands are divided at wavelengths of 10^n metres, or frequencies of
3 x 10^n hertz. For example, 30 MHz or 10 m divides shortwave (lower and longer) from VHF
(shorter and higher). These are the parts of the radio spectrum, and not its frequency allocation.



ITU
The ITU radio bands are designations defined in the ITU Radio Regulations, which state that
"the radio spectrum shall be subdivided into nine frequency bands, which shall be designated by
progressive whole numbers" in accordance with the following table.
Modes of communication
The term Transmission Mode defines the direction of the flow of information between two
communication devices i.e. it tells the direction of signal flow between the two devices. There
are three ways or modes of data transmission: Simplex, Half duplex (HDX), Full duplex (FDX)

Simplex: In communication networks, communication can take place in one direction only.
Devices connected to such a circuit are either send-only or receive-only devices. There is no
mechanism for information to be transmitted back to the sender. Communication is
unidirectional. TV broadcasting is an example. Simplex transmission generally involves
dedicated circuits. Simplex circuits are analogous to escalators, doorbells, fire alarms and
security systems.
Examples of simplex mode:
1. Communication between a computer and a keyboard involves simplex transmission.
A television broadcast is another example of simplex transmission.
2. A loudspeaker system: an announcer speaks into a microphone and his/her voice is
sent through an amplifier and then to all the speakers.
3. Many fire alarm systems work the same way.


Half Duplex: A half duplex system can transmit data in both directions, but only in one
direction at a time; that is, half duplex mode supports two-way traffic but in only one
direction at a time. The interactive transmission of data within a time-sharing system may be
best suited to half-duplex lines. Both connected devices can transmit and receive, but not
simultaneously. When one device is sending, the other can only receive, and vice versa. Data is
transmitted in one direction at a time, for example in a walkie-talkie. This is generally used for
relatively low-speed transmission, usually involving two-wire, analog circuits. Due to the
switching of communication direction, data transmission in this mode requires more time and
processing than under full duplex mode. Examples of half duplex applications include line
printers, polling of buffers, and modem communications (many modems can also support full duplex).


Example of half duplex mode:
A walkie-talkie operates in half duplex mode. It can only send or receive a transmission at any
given time. It cannot do both at the same time. As shown in fig. computer A sends information
to computer B. At the end of transmission, computer B sends information to computer A.
Computer A cannot send any information to computer B, while computer B is transmitting
data.
Full Duplex: A full duplex system can transmit data simultaneously in both directions on the
transmission path. The full-duplex method is used to transmit data over a serial communication
link. Two wires are needed to send data over a serial communication link. In full-duplex
transmission, the channel capacity is shared by both communicating devices at all times. Both
connected devices can transmit and receive at the same time, so it represents a truly bi-
directional system. The link may contain two separate transmission paths, one for sending and
another for receiving.



Example of Full duplex mode:
Telephone networks operate in full duplex mode: when two persons talk on a telephone line,
both can listen and speak simultaneously.

Media of transmission
Transmission media are the pathways that carry information from sender to receiver. We use
different types of cables or waves to transmit data. Data is normally transmitted through
electrical or electromagnetic signals. An electrical signal is in the form of current. An
electromagnetic signal is a series of electromagnetic energy pulses at various frequencies. These
signals can be transmitted through copper wires, optical fibers, the atmosphere, water and vacuum.
Different media have different properties such as bandwidth, delay, cost and ease of installation
and maintenance. Transmission media are also called communication channels.
Types of Transmission Media
Transmission media is broadly classified into two groups.
1. Wired or Guided Media or Bound Transmission Media
2. Wireless or Unguided Media or Unbound Transmission Media



Wired or Guided Media or Bound Transmission Media: Bound transmission media are the
cables that are tangible or have physical existence and are limited by the physical geography.
Popular bound transmission media in use are twisted pair cable, co-axial cable and fiber optical
cable. Each of them has its own characteristics like transmission speed, effect of noise, physical
appearance, cost etc. Copper cables allow the propagation of electric signals (i.e., electric
voltage or current pulses), whereas optical fiber cables allow the propagation of light pulses.
Wireless media include free space, the ionosphere, etc.
Wireless or Unguided Media or Unbound Transmission Media: Unbound transmission media
are ways of transmitting data without using any cables. These media are not bounded by
physical geography. This type of transmission is called wireless communication. Nowadays
wireless communication is becoming popular, and wireless LANs are being installed in offices
and college campuses. Microwave, radio wave and infrared are some of the popular unbound
transmission media. Wireless media allow the propagation of electromagnetic waves. The
transmission/reception of electromagnetic waves requires the use of a wireless link (also called
a radio link, because radio broadcasting was one of the first commercial wireless
communication systems in use), such as terrestrial microwave links, satellite links, etc.

The data transmission capabilities of various media vary depending upon several factors.
These factors are:
1. Bandwidth. It refers to the data carrying capacity of a channel or medium. Higher bandwidth
communication channels support higher data rates.

2. Radiation. It refers to the leakage of signal from the medium due to undesirable electrical
characteristics of the medium.
3. Noise Absorption. It refers to the susceptibility of the media to external electrical noise that
can cause distortion of data signal.
4. Attenuation. It refers to the loss of energy as the signal propagates outwards. The amount of
energy lost depends on frequency. Radiation and the physical characteristics of the medium
contribute to attenuation.
Other types of channels include storage media (e.g., hard disks, cd, dvd, etc.) and underwater
acoustic channels (which allow the propagation of sound waves). Communication channels vary
in capacity (i.e., the amount of information per time unit that can be carried in a reliable
fashion), attenuation (i.e., reducing of transmitted signal's strength; attenuation increases with
the channel length), distortion (i.e., alteration of the signal's variation pattern, on which the
information is impressed), noise (i.e., random unwanted signals that corrupt the signal shape), etc.
Comparison of analog and digital communication
1. Analog signals are signals with continuous values. Analog signals are continuous in both
time and value. Analog signals are used in many systems, although the use of analog
signals has declined with the advent of cheap digital signals. All natural signals are
Analog in nature.
Digital signals are discrete in time and value. Digital signals are signals that are
represented by binary numbers, "1" or "0". The 1 and 0 values can correspond to
different discrete voltage values, and any signal that doesn't quite fit into the scheme just
gets rounded off. Digital signals are sampled, quantized & encoded version of
continuous time signals which they represent. In addition, some techniques also make
the signal undergo encryption to make the system more tolerant to the channel.
2. Analog systems are less tolerant to noise, make good use of bandwidth, and are easy to
manipulate mathematically. However, analog signals require hardware receivers and
transmitters that are designed to perfectly fit the particular transmission. If you are

working on a new system, and you decide to change your analog signal, you need to
completely change your transmitters and receivers.
Digital signals are more tolerant to noise, but digital signals can be completely corrupted
in the presence of excess noise. In digital signals, noise could cause a 1 to be interpreted
as a 0 and vice versa, which makes the received data different than the original data.
Imagine that the army transmitted a position coordinate to a missile digitally, and a single
bit was received in error. This single bit error could cause a missile to miss its target by
miles. Luckily, there are systems in place to prevent this sort of scenario, such as
checksums and CRCs, which tell the receiver when a bit has been corrupted and ask the
transmitter to resend the data. The primary benefit of digital signals is that they can be
handled by simple, standardized receivers and transmitters, and the signal can be then
dealt with in software (which is comparatively cheap to change).
3. Efficiency: The Source Coding Theorem allows quantification of just how complex a
given message source is and allows us to exploit that complexity by source coding
(compression). In analog communication, the only parameters of interest are message
bandwidth and amplitude. We cannot exploit signal structure to achieve a more efficient
communication system.
4. Performance: Because of the Noisy Channel Coding Theorem, we have a specific
criterion by which to formulate error-correcting codes that can bring us as close to error-
free transmission as we might want. Even though we may send information by way of a
noisy channel, digital schemes are capable of error-free transmission while analog ones
cannot overcome channel disturbances; see this problem for a comparison.
5. Flexibility: Digital communication systems can transmit real-valued discrete-time
signals, which could be analog ones obtained by analog-to-digital conversion, and
symbolic-valued ones (computer data, for example). Any signal that can be transmitted
by analog means can be sent by digital means, with the only issue being the number of
bits used in A/D conversion (how accurately do we need to represent signal amplitude).
Images can be sent by analog means (commercial television), but better communication
performance occurs when we use digital systems (HDTV). In addition to digital
communication's ability to transmit a wider variety of signals than analog systems,

point-to-point digital systems can be organized into global (and beyond as well) systems
that provide efficient and flexible information transmission. Computer networks,
explored in the next section, are what we call such systems today. Even analog-based
networks, such as the telephone system, employ modern computer networking ideas
rather than the purely analog systems of the past.
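The sample-quantize-encode step mentioned above (point 1) can be sketched as a uniform quantizer; this is an illustrative sketch with made-up names, not a standard codec:

```python
def quantize(sample, num_bits, full_scale=1.0):
    """Uniform quantizer sketch: map an analog sample in
    [-full_scale, +full_scale] to the nearest of 2**num_bits levels
    and return the integer code, ready for binary encoding."""
    levels = 2 ** num_bits
    step = 2 * full_scale / levels
    # clamp out-of-range samples, then map to a level index
    clipped = max(-full_scale, min(full_scale - step, sample))
    return round((clipped + full_scale) / step)
```

Using more bits in the A/D conversion represents the signal amplitude more accurately, which is exactly the trade-off discussed in point 5 above.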
History perspective
The basic components of a communication technology are:
- Transmitter of information signal
- Carrier of signal
- Receiver of signal
Considering different types of communication, the broad types can be cited as radio,
television, satellite and optical communication.
Radio Communication
In this respect the first stepping stone was radio communication. Guglielmo Marconi
(1874-1937) is known as the father of wireless communication. He was fascinated by the idea of
using radio waves to communicate. He discovered a way to transmit and receive radio waves
and perfected it during the 1890s. In December 1901, Marconi proved to the world that it was
possible to send messages across continents. To communicate by radio, we need a radio
transmitter and a radio receiver. Sound waves (speech/music) are converted into electrical
transmitter and a radio receiver. Sound waves (Speech/Music) are converted into electrical
signals in the microphone. These signals are combined with a radio wave with a particular
frequency called a carrier wave into the transmitter. By this process called modulation, the
resulting signal is passed to the transmitting aerial and is transmitted through free space as
electromagnetic radio waves. The electromagnetic radio wave is picked up by the antenna of a
radio receiver. The carrier wave is tuned and information signal is separated by a detector in the
radio receiver by the process called demodulation. The loudspeaker of the radio converts the
electrical signals back to sound.
The Satellite
Many satellites were developed and launched in the late 1950s and in the 1960s, for
observation. In July 1962, NASA launched the first public telecommunications satellite, 'Telstar',

and broadcast the first live television pictures across the Atlantic. Satellite communication
differs uniquely from other modes of communication owing to its ability to link all the users
on the earth's surface simultaneously. In 1945, Dr. Arthur Clarke wrote in 'Wireless World'
that satellites placed in geostationary orbit of the earth could provide worldwide communication.

Fig: Satellite communication (transmitting station, receiving station, uplink and downlink)

As is known, a satellite in geostationary orbit of the earth (at an altitude of 35,786 km) rotates
at the same angular velocity as the earth and thus appears to be stationary with respect to a
location on the earth, and this is a big advantage for communication links. Clarke's foresight
turned into reality twenty
years later when the first satellite was launched in 1965. By placing a satellite in a high
geostationary orbit, the satellite remains above a particular point on the Equator. Three
geostationary satellites, one above the Pacific, one above the Atlantic and one above the Indian
Ocean, would be enough to relay signals to and from any point on the surface of the Earth. In
1965, the satellite 'Early Bird' was launched into geostationary orbit and became the first
commercial satellite to provide a constant link between Europe and America.
Initiatives have been taken for large number of satellites for providing high capacity local
access for multimedia anywhere in the world. Satellites form an essential part of
telecommunication system worldwide carrying large amounts of data and telephone traffic in
addition to television signals. Gateway stations are collecting data from different routes and
interfacing with the existing terrestrial infrastructure. Long-distance calls or TV programs travel
by radio or along cables to the satellite ground station. The signals are amplified and beamed
from a parabolic antenna into space. The satellite receives the signal, amplifies it again and
sends it back to the appropriate ground station. The signals are again amplified and sent by
wireless radio waves or as electrical signals along cables. With a 'dish' connected to their TV set,
people can pick up signals directly from a satellite and receive more channels; this is called
satellite television.




Limitations and advantages of communication systems
Advantages:
1. Signals are easy to generate.
2. It provides an easy way of communication.
Disadvantages:
1. The signal is very difficult to transmit as it is.
2. The devices used are expensive.
3. There are many noise interruptions.
4. Accuracy is lower.
5. Transmission and reception are not very easy.
Trade-offs
1. Bandwidth efficiency
2. Power efficiency
3. Performance
4. System complexity
5. Cost









Noise figure and noise figure calculation

Noise figure (NF) and noise factor (F) are measures of degradation of the signal-to-noise ratio
(SNR), caused by components in a radio frequency (RF) signal chain. It is a number by which
the performance of a radio receiver can be specified. The noise factor is defined as the ratio of
the output noise power of a device to the portion thereof attributable to thermal noise in the
input termination at standard noise temperature T0 (usually 290 K). The noise factor is thus the
ratio of actual output noise to that which would remain if the device itself did not introduce
noise, or the ratio of input SNR to output SNR. The noise figure is simply the noise factor
expressed in decibels (dB): it is the difference in decibels between the noise output of the
actual receiver and the noise output of an ideal receiver with the same overall gain and
bandwidth, when the receivers are connected to matched sources at the standard noise
temperature T0 (usually 290 K). The noise power from a simple load is equal to kTB, where k
is Boltzmann's constant, T is the absolute temperature of the load (for example a resistor), and
B is the measurement bandwidth. The noise factor F of a system is defined as:

F = SNRin / SNRout

where SNRin and SNRout are the input and output signal-to-noise ratios, respectively,
expressed as power ratios. The noise figure NF is the noise factor given in dB:

NF = SNRin,dB - SNRout,dB = 10 log10 F

where SNRin,dB and SNRout,dB are in decibels (dB). These formulae are only valid when the
input termination is at standard noise temperature T0, although in practice small differences in
temperature do not significantly affect the values. The noise factor of a device is related to its
noise temperature Te by:

F = 1 + Te / T0

Attenuators have a noise factor F equal to their attenuation ratio L when their physical
temperature equals T0. More generally, for an attenuator at a physical temperature T, the noise
temperature is Te = (L - 1) T, giving a noise factor of:

F = 1 + (L - 1) T / T0

If several devices are cascaded, the total noise factor can be found with Friis' formula:

F = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1 G2) + ... + (Fn - 1)/(G1 G2 ... G(n-1))

where Fn is the noise factor for the n-th device and Gn is the power gain (linear, not in dB) of
the n-th device. In a well designed receive chain, only the noise factor of the first amplifier
should be significant.
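As a quick numerical sketch (not part of the original notes; the function names are my own),
the ratio and decibel definitions above can be checked in Python:

```python
import math

def noise_factor(snr_in, snr_out):
    # F = SNR_in / SNR_out, both as linear power ratios
    return snr_in / snr_out

def noise_figure_db(f):
    # NF = 10 log10(F)
    return 10 * math.log10(f)

# Example: a device degrades a 40 dB input SNR to 37 dB at its output.
snr_in = 10 ** (40 / 10)    # linear power ratio
snr_out = 10 ** (37 / 10)
f = noise_factor(snr_in, snr_out)
print(round(noise_figure_db(f), 2))  # 3.0
```

A 3 dB drop in SNR through the device corresponds, as expected, to a 3 dB noise figure.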
Noise calculation
It has been stated that noise is an unwanted signal that accompanies a wanted signal and, as
discussed, the most common form is random (non-deterministic) thermal noise. The essence of
noise calculations and measurements is to determine the signal-power-to-noise-power ratio,
i.e. the (S/N) ratio or the (S/N) expressed in dB.
Let S = signal power (mW) and N = noise power (mW). Then:

(S/N) as a ratio = S / N

(S/N) in dB = 10 log10 (S / N)

Also recall that:

S (dBm) = 10 log10 (S (mW) / 1 mW) and N (dBm) = 10 log10 (N (mW) / 1 mW)

so that:

(S/N) dB = 10 log10 S - 10 log10 N = S (dBm) - N (dBm)
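As an illustrative check of the dBm identity above (a sketch I have added; not in the original
notes):

```python
import math

def to_dbm(p_mw):
    # Convert a power in milliwatts to dBm (relative to 1 mW)
    return 10 * math.log10(p_mw / 1.0)

s_mw, n_mw = 2.0, 0.002            # signal 2 mW, noise 2 microwatts
snr_db_direct = 10 * math.log10(s_mw / n_mw)
snr_db_from_dbm = to_dbm(s_mw) - to_dbm(n_mw)
print(round(snr_db_direct, 3), round(snr_db_from_dbm, 3))  # 30.0 30.0
```

Computing the ratio directly and subtracting the two dBm values give the same 30 dB answer.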
Powers are usually measured in dBm (or dBW) in communications systems. The equation

(S/N) dB = S (dBm) - N (dBm)

is often the most useful. The (S/N) at various stages in a communication system gives an
indication of system quality and performance, in terms of error rate in digital data
communication systems and fidelity in the case of analogue communication systems.
(Obviously, the larger the (S/N), the better the system will be.) The noise which accompanies
the signal is usually considered to be additive (in terms of powers) and is often described as
Additive White Gaussian Noise (AWGN). Noise and signals may also be multiplicative, and in
some systems, at some levels of (S/N), this may be more significant than AWGN. In order to
evaluate noise, various mathematical models and techniques have to be used, particularly
concepts from statistics and probability theory, the major starting point being that random
noise is assumed to have a Gaussian or Normal distribution. We may relate the concept of
white noise with a Gaussian distribution as follows:
The Gaussian distribution graph shows the probability of noise voltage vs. voltage, i.e. the
most probable noise voltage is 0 volts (zero mean), with a small probability of very large
positive or negative noise voltages. White noise has uniform noise power from DC to very high
frequencies. Although not strictly consistent, we may relate these two characteristics of
thermal noise as follows:
The probability of the amplitude of noise at any frequency, or in any band of frequencies (e.g.
1 Hz, 10 Hz, 100 kHz, etc.), is a Gaussian distribution. Noise may be quantified in terms of the
noise power spectral density, p0 watts per Hz, from which the noise power N may be expressed
as:

N = p0 Bn watts

where Bn is the equivalent noise bandwidth. The equation assumes p0 is constant across the
band (i.e. white noise). Bn is not the 3 dB bandwidth; it is the bandwidth which, when
multiplied by p0, gives the actual output noise power N. This is illustrated further below.
Ideal low pass filter: bandwidth B Hz = Bn, so N = p0 Bn watts.
Practical LPF: the 3 dB bandwidth is shown, but noise does not suddenly cease at B3dB;
therefore Bn > B3dB, where Bn depends on the actual filter, and again N = p0 Bn watts.
In general the equivalent noise bandwidth is > B3dB. Alternatively, noise may be quantified in
terms of the mean square noise voltage V^2 (an overbar denoting the time average), which is
effectively a power. From this a root mean square (RMS) value for the noise voltage may be
determined:

RMS = sqrt(mean square V^2)

In order to ease analysis, models based on the above quantities are used. For example, if we
imagine noise in a very narrow bandwidth df centred on a frequency f0, the noise approaches a
sine wave with frequency centred in df. Since an RMS noise voltage can be determined, a peak
value of the noise may be invented, since for a sine wave:

RMS = Peak / sqrt(2)

The peak value is entirely fictitious, since in theory noise with a Gaussian distribution could
have a peak value of plus or minus infinity volts. Hence we may relate: mean square, then
RMS = sqrt(mean square), then peak noise voltage = sqrt(2) x RMS (invented for convenience).
Noise Temperature
Noise temperature is one way of expressing the level of available noise power introduced by a
component or source. The power spectral density of the noise is expressed in terms of the
temperature (in kelvins) that would produce that level of Johnson-Nyquist noise:

P = k T B

where:
P is the power (in watts)
B is the total bandwidth (Hz) over which that noise power is measured
k is the Boltzmann constant (1.381 x 10^-23 J/K, joules per kelvin)
T is the noise temperature (K)

The noise of a system or network can be defined in three different but related ways: noise
factor (Fn), noise figure (NF) and equivalent noise temperature (Te); these properties are
definable as a simple ratio, decibel ratio or temperature, respectively.
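The relation P = kTB can be evaluated directly. The following sketch (added for illustration)
reproduces the familiar thermal noise floor of about -174 dBm in a 1 Hz bandwidth at
T0 = 290 K:

```python
import math

K_BOLTZMANN = 1.381e-23  # J/K

def thermal_noise_power(t_kelvin, bandwidth_hz):
    # P = k T B, in watts
    return K_BOLTZMANN * t_kelvin * bandwidth_hz

# Thermal noise floor at T0 = 290 K in a 1 Hz bandwidth:
p = thermal_noise_power(290, 1.0)
p_dbm = 10 * math.log10(p / 1e-3)
print(round(p_dbm, 1))  # -174.0
```

Widening the bandwidth by a factor of ten raises the noise floor by 10 dB, as the linear
dependence on B implies.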
For components such as resistors, the noise factor is the ratio of the noise produced by a real
resistor to the simple thermal noise of an ideal resistor. The noise factor of a system is the ratio
of the output noise power (Pno) to the input noise power (Pni):

Fn = Pno / Pni

To make comparisons easier, the noise factor is always measured at the standard temperature
(To) of 290 K (standardized room temperature). The input noise power Pni is defined as the
product of the source noise at standard temperature (To) and the amplifier gain (G):

Pni = G k B To
It is also possible to define the noise factor Fn in terms of the output and input S/N ratios:

Fn = Sni / Sno

which is also:

Fn = Pno / (k To B G)

where

Sni is the input signal-to-noise ratio
Sno is the output signal-to-noise ratio
Pno is the output noise power
k is Boltzmann's constant (1.38 x 10^-23 J/K)
To is 290 K
B is the network bandwidth in hertz (Hz)
G is the amplifier gain
The noise factor can be evaluated in a model that considers the amplifier ideal, so that it
amplifies only, through gain G, the noise produced by the input noise source:

Fn = (G k To B + N) / (G k To B)

or

Fn = 1 + N / (G k To B)

where

N is the noise added by the network or amplifier
The noise figure is a frequently used measure of an amplifier's goodness, or its departure from
the ideal; it is thus a figure of merit. The noise figure is the noise factor converted to decibel
notation:

NF = 10 log10 Fn

where

NF is the noise figure in decibels (dB)
Fn is the noise factor
log10 refers to the system of base-10 logarithms
The noise temperature is a means for specifying noise in terms of an equivalent temperature.
Evaluating the equation P = k T B above shows that the noise power is directly proportional to
the temperature in kelvins and that the noise power collapses to zero at absolute zero (0 K).
Note that the equivalent noise temperature Te is not the physical temperature of the amplifier,
but rather a theoretical construct: an equivalent temperature that would produce that amount
of noise power. The noise temperature is related to the noise factor by:

Te = (Fn - 1) To

and to the noise figure by:

Te = (10^(NF/10) - 1) To

Now that we have the noise temperature Te, we can also define the noise factor and noise
figure in terms of noise temperature:

Fn = 1 + Te / To

and

NF = 10 log10 (1 + Te / To)
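These noise temperature relations can be checked with a short sketch (the function names are
my own, added for illustration):

```python
T0 = 290.0  # standard noise temperature, kelvin

def te_from_noise_factor(fn):
    # Te = (Fn - 1) * To
    return (fn - 1) * T0

def noise_factor_from_te(te):
    # Fn = 1 + Te / To
    return 1 + te / T0

# A 3 dB noise figure corresponds to Fn of about 2 and Te of about 290 K:
fn = 10 ** (3.0 / 10)            # noise factor, about 1.995
te = te_from_noise_factor(fn)
print(round(te, 1))              # 288.6
```

The round trip through `noise_factor_from_te` recovers the original noise factor, confirming
the two relations are inverses of each other.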
The total noise in any amplifier or network is the sum of internally and externally generated
noise. In terms of noise temperature:

Pn(total) = G k B (To + Te)

where Pn(total) is the total noise power.
Band pass noise model
Noise in communication systems is produced from a filtering operation on the wideband noise
that appears at the receiver input. Most communication occurs in a fairly limited bandwidth
around some carrier frequency. Since the input noise in a communication system is well
modeled as additive white Gaussian noise, it makes sense to limit the effects of this noise by
filtering around this carrier frequency.
Noise figure for cascaded stages
Consider a system with a cascade of two gain stages having gains G1 and G2 and noise figures
F1 and F2 respectively.

Fig: Cascaded gain stages with noise power output

The noise power at the output of the first stage is the sum of:
a) the noise power Ni at the input terminal of the first gain stage, which is amplified by
gain G1;
b) the noise power N1 generated by the first gain stage itself.
Summing them up, the noise seen at the output of the first gain stage is:

No1 = G1 Ni + N1

The sum of powers is possible because it is assumed that the input noise and the noise added
by the stage are uncorrelated, i.e. E[(x + y)^2] = E[x^2] + E[y^2] when x and y are
uncorrelated.
Similarly, the noise power seen at the output of the second stage is the sum of:
a) the noise power No1 at the input terminal of the second gain stage, which is amplified by
gain G2;
b) the noise power N2 generated by the second gain stage itself.
Adding them up, the noise seen at the output of the second gain stage is:

No2 = G2 No1 + N2 = G1 G2 Ni + G2 N1 + N2

Summarizing, the equivalent noise figure of the two cascaded stages is:

F = F1 + (F2 - 1) / G1

and the equivalent gain is:

G = G1 G2

Extending this to a cascade of n stages, the equivalent noise figure is:

F = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1 G2) + ... + (Fn - 1)/(G1 G2 ... G(n-1))
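The cascade formula can be written as a small function. The following is an illustrative sketch
(all quantities linear, not in dB); the example stage values are hypothetical:

```python
def friis_noise_factor(stages):
    # Equivalent noise factor of cascaded stages.
    # stages: list of (noise_factor, gain) tuples, both as linear ratios.
    # F_total = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
    f_total = 0.0
    gain_product = 1.0
    for i, (f, g) in enumerate(stages):
        if i == 0:
            f_total = f
        else:
            f_total += (f - 1) / gain_product
        gain_product *= g
    return f_total

# LNA (F = 1.26, G = 100) followed by a much noisier mixer (F = 10, G = 10):
print(round(friis_noise_factor([(1.26, 100), (10.0, 10)]), 3))  # 1.35
```

The noisy second stage adds only 0.09 to the total noise factor because its contribution is
divided by the first stage's gain, illustrating why the first amplifier dominates a well designed
receive chain.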
Signal in presence of noise
Suppose you have a transmission line carrying a digital signal, and the line is picking up
random noise such that a fraction of the bits in the original signal are randomly flipped before
they reach the receiver. What techniques are available to recover such a corrupted signal?
Recovering or enhancing a signal, or improving a signal-to-noise ratio (SNR), simply means
reducing the noise accompanying a signal. There are two basic ways of doing this:
1. Bandwidth reduction, where the noise is reduced by reducing the system noise bandwidth
(Bn). This approach works well if the frequency spectra of the noise and signal do not overlap
significantly, so that reducing the noise bandwidth does not affect the signal. With random
white noise, the output noise power is proportional to Bn. With non-white noise, other
relationships will apply.
2. Averaging or integrating techniques, where successive samples of the signal are
synchronized and added together. The signal will grow as the number (n) of added samples;
with random white noise, the noise will only grow as sqrt(n), so the signal-to-noise ratio
improves as sqrt(n). This is only the case if the signal characteristics are stationary for the
duration of the extraction process.
Sometimes it is useful to combine both techniques. In either case the signal and noise spectra
must be considered, and improving a signal-to-noise ratio must be done at the expense of the
response time or measurement time (T); with random white noise interference, the output
signal-to-noise ratio is proportional to sqrt(T). The bandwidth reduction technique is best
looked at from a frequency-domain point of view; signal averaging and correlation techniques
lend themselves to time-domain analysis.
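The sqrt(n) improvement from averaging can be demonstrated with a toy Monte Carlo sketch
(the parameters and function name are my own; this is not from the original notes):

```python
import math
import random

random.seed(1)

def averaged_snr(n_samples, n_avg, signal=1.0, noise_sigma=1.0):
    # Average n_avg synchronized noisy copies of a constant signal and
    # estimate the resulting amplitude signal-to-noise ratio.
    acc = [0.0] * n_samples
    for _ in range(n_avg):
        for i in range(n_samples):
            acc[i] += signal + random.gauss(0.0, noise_sigma)
    avg = [a / n_avg for a in acc]
    mean = sum(avg) / n_samples
    var = sum((x - mean) ** 2 for x in avg) / n_samples
    return mean / math.sqrt(var)

# With a single sample the amplitude SNR is about 1; averaging 100 copies
# improves it by roughly sqrt(100) = 10:
print(round(averaged_snr(2000, 1)))    # roughly 1
print(round(averaged_snr(2000, 100)))  # roughly 10
```

The simulated signal is stationary for the whole run, which is exactly the condition the text
states for averaging to work.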
Noise in communication system
The term noise refers to unwanted electrical signals that are always present in electrical
systems. The presence of noise superimposed on a signal tends to obscure or mask the signal;
it limits the receiver's ability to make correct symbol decisions, and thereby limits the rate of
information transmission. Noise arises from a variety of sources, both man-made and natural.
Man-made noise includes such sources as spark-plug ignition noise, switching transients, and
other radiating electromagnetic signals. Natural noise includes such elements as the
atmosphere, the sun, and other galactic sources. Good engineering design can eliminate much
of the noise or its undesirable effect through filtering, shielding, the choice of modulation, and
the selection of an optimum receiver site. For example, sensitive radio astronomy receivers are
typically located at remote desert locations, far from man-made noise sources. However, there
is one natural source of noise, called thermal or Johnson noise, which cannot be eliminated.
Thermal noise is caused by the thermal motion of electrons in all dissipative components:
resistors, wires, and so on. The same electrons that are responsible for electrical conduction
are also responsible for thermal noise. Thermal noise can be described as a zero-mean
Gaussian random process. A Gaussian process n(t) is a random function whose value n at any
arbitrary time t is statistically characterized by the Gaussian probability density function:

p(n) = (1 / (sigma sqrt(2 pi))) exp(-n^2 / (2 sigma^2))

where sigma^2 is the variance of n.
The normalized or standardized Gaussian density function of a zero-mean process is obtained
by assuming that sigma = 1. We will often represent a random signal as the sum of a Gaussian
noise random variable and a dc signal, that is:

z = a + n

where z is the random signal, a is the dc component, and n is the Gaussian noise random
variable. The pdf p(z) is then expressed as:

p(z) = (1 / (sigma sqrt(2 pi))) exp(-(z - a)^2 / (2 sigma^2))
The Gaussian distribution is often used as the system noise model because of a theorem called
the central limit theorem, which states that, under very general conditions, the probability
distribution of the sum of j statistically independent random variables approaches the
Gaussian distribution as j tends to infinity, no matter what the individual distribution
functions may be. Therefore, even though individual noise mechanisms might have other than
Gaussian distributions, the aggregate of many such mechanisms will tend toward the Gaussian
distribution.
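The central limit theorem can be illustrated numerically. In this sketch (added for illustration;
the choice of uniform variables is arbitrary) the fraction of normalized sums falling within one
standard deviation approaches the Gaussian value of about 0.683 as j grows:

```python
import random

random.seed(0)

def frac_within_one_sigma(j, n_trials=20000):
    # Normalized sum of j independent uniform(-1, 1) variables.
    # The variance of one uniform(-1, 1) variable is 1/3, so the sum
    # has standard deviation sqrt(j/3).
    scale = (j / 3.0) ** 0.5
    hits = 0
    for _ in range(n_trials):
        s = sum(random.uniform(-1, 1) for _ in range(j)) / scale
        if abs(s) <= 1.0:
            hits += 1
    return hits / n_trials

# A Gaussian puts about 68.3% of its mass within one standard deviation;
# the normalized sum approaches this as j grows:
for j in (1, 2, 12):
    print(j, round(frac_within_one_sigma(j), 3))
```

Even though each summand here is uniform (far from Gaussian), a sum of only twelve of them
is already very close to the Gaussian figure.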

Pre-emphasis and De-emphasis
Emphasis is a system process designed to alter, within a band of frequencies, the magnitude of
some (usually higher) frequencies with respect to the magnitude of other (usually lower)
frequencies, in order to improve the overall signal-to-noise ratio by minimizing the adverse
effects of such phenomena as attenuation differences or saturation of recording media in
subsequent parts of the system. Special time constants dictate the frequency response curve,
from which one can calculate the cutoff frequency. In telecommunications, emphasis is the
intentional alteration of the amplitude-vs-frequency characteristics of the signal to reduce the
adverse effects of noise in a communication system. The whole system of pre-emphasis and
de-emphasis is called emphasis. The high-frequency signal components are emphasized to
produce a more equal modulation index across the transmitted frequency spectrum, and
therefore a better signal-to-noise ratio for the entire frequency range. Emphasis is commonly
used in LP records and FM broadcasting. In processing electronic audio signals, pre-emphasis
refers to a system process designed to increase, within a frequency band, the magnitude of
some (usually higher) frequencies with respect to the magnitude of other (usually lower)
frequencies, in order to improve the overall signal-to-noise ratio by minimizing the adverse
effects of such phenomena as attenuation distortion or saturation of recording media in
subsequent parts of the system. The mirror operation is called de-emphasis, and the system as
a whole is called emphasis.
Pre-emphasis is achieved with a pre-emphasis network which is essentially a calibrated filter.
The frequency response is decided by special time constants. The cutoff frequency can be
calculated from that value. Pre-emphasis is commonly used in telecommunications, digital
audio recording, record cutting, in FM broadcasting transmissions, and in displaying the
spectrograms of speech signals. One example of this is the RIAA equalization curve on 33 rpm
and 45 rpm vinyl records. Another is the Dolby noise-reduction system as used with magnetic
tape. In high speed digital transmission, pre-emphasis is used to improve signal quality at the
output of a data transmission. In transmitting signals at high data rates, the transmission medium
may introduce distortions, so pre-emphasis is used to distort the transmitted signal to correct for
this distortion. When done properly this produces a received signal which more closely
resembles the original or desired signal, allowing the use of higher frequencies or producing
fewer bit errors. Pre-emphasis is employed in frequency modulation or phase modulation
transmitters to equalize the modulating signal drive power in terms of deviation ratio. The
receiver demodulation process includes a reciprocal network, called a de-emphasis network, to
restore the original signal power distribution.
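A first-order emphasis network can be sketched as follows; this illustrative fragment assumes
the 75 microsecond time constant used in US FM broadcasting (Europe uses 50 microseconds),
and the function name is my own:

```python
import math

def emphasis_gain_db(f_hz, tau=75e-6, pre=True):
    # First-order pre-emphasis magnitude |H(f)| = sqrt(1 + (2*pi*f*tau)^2);
    # de-emphasis is the reciprocal network.
    mag = math.sqrt(1 + (2 * math.pi * f_hz * tau) ** 2)
    return 20 * math.log10(mag if pre else 1 / mag)

# Cutoff frequency fc = 1 / (2*pi*tau), about 2.1 kHz for tau = 75 us:
fc = 1 / (2 * math.pi * 75e-6)
print(round(fc))  # 2122

# Pre-emphasis at the transmitter and de-emphasis at the receiver cancel,
# restoring a flat overall response:
total = emphasis_gain_db(10000) + emphasis_gain_db(10000, pre=False)
print(abs(total) < 1e-9)  # True
```

This shows the two facts used in the text: the time constant fixes the cutoff frequency, and the
receiver's reciprocal network restores the original signal power distribution.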
De-emphasis
In telecommunication, de-emphasis is the complement of pre-emphasis in the anti-noise
system called emphasis: the receiver attenuates the same (usually higher) frequencies that
were boosted before transmission, restoring the original amplitude-vs-frequency characteristic
while reducing the high-frequency noise added in between. Special time constants dictate the
frequency response curve, from which one can calculate the cutoff frequency. Pre-emphasis is commonly
used in audio digital recording, record cutting and FM radio transmission. In serial data
transmission, de-emphasis has a different meaning, which is to reduce the level of all bits except
the first one after a transition. That causes the high frequency content due to the transition to be
emphasized compared to the low-frequency content, which is de-emphasized. This is a form of
transmitter equalization; it compensates for losses over the channel, which are larger at higher
frequencies. Well-known serial data standards such as PCI Express, SATA and SAS require
transmitted signals to use de-emphasis.
Noise quieting effect
Noise reduction is the process of removing noise from a signal. All recording devices, both
analogue and digital, have traits which make them susceptible to noise. Noise can be random or
white noise with no coherence, or coherent noise introduced by the device's mechanism or
processing algorithms. In electronic recording devices, a major form of noise is hiss caused by
random electrons that, heavily influenced by heat, stray from their designated path. These stray
electrons influence the voltage of the output signal and thus create detectable noise. In the case
of photographic film and magnetic tape, noise (both visible and audible) is introduced due to the
grain structure of the medium. In photographic film, the size of the grains in the film determines
the film's sensitivity, more sensitive film having larger sized grains. In magnetic tape, the larger
the grains of the magnetic particles (usually ferric oxide or magnetite), the more prone the
medium is to noise. To compensate for this, larger areas of film or magnetic tape may be used to
lower the noise to an acceptable level. FM reception specifications including the FM capture
ratio and its associated capture effect, along with the FM quieting figures and facilities
including squelch are of great importance to users of FM systems. Frequency modulation, FM is
widely used in radio communications and broadcasting, particularly on frequencies above 30
MHz. It offers many advantages, particularly in mobile radio applications where its resistance to
fading and interference is a great advantage. It is also widely used for broadcasting on VHF
frequencies, where it is able to provide a medium for high quality audio transmissions. In order
to be able to receive FM, a receiver must be sensitive to the frequency variations of the
incoming signals, which may be wide or narrow band. However, the set is made insensitive to the
amplitude variations. This is achieved by having a high gain IF amplifier. Here the signals are

amplified to such a degree that the amplifier runs into limiting. In this way any amplitude
variations are removed and this improves the signal to noise ratio after the point when the signal
limits in the IF stages. However the high levels of gain associated with the limiting process
mean that when no signal is present, very high levels of noise appear at the output of the FM
demodulator. To overcome the problem of the high noise levels when no signal is present a
circuit known as "squelch" is normally used. This detects when no signal is present and cuts the
audio, thereby removing the noise under these conditions. The squelch level is normally preset
in domestic radios, but there is often a level adjustment for PMR or handheld transceivers, or
for scanners and professional receivers.
One of the advantages of FM is its resilience to noise. This is one of the main reasons why it is
used for high quality audio broadcasts. However when no signal is present, a high noise level is
present at the output of the receiver. If a low level FM signal is introduced and its level slowly
increased it will be found that the noise level reduces. From this the quieting level can be
deduced. It is the reduction in noise level expressed in decibels when a signal of a given
strength is introduced to the input of the set. Typically a broadcast tuner should give a quieting
level of 30 dB for an input level of around a microvolt.
Capture effect
Another effect that is often associated with FM is called the capture effect. This can be
demonstrated when two signals are present on the same frequency. When this occurs it is found
that only the stronger signal will be heard at the output. This can be compared to AM, where a
mixture of the two signals is heard, along with a heterodyne if there is a frequency difference. A
capture ratio is often defined in receiver specifications. It is the ratio between the wanted and
unwanted signal to give a certain reduction in level of the unwanted signal at the output. In
telecommunications, the capture effect, or FM capture effect, is a phenomenon associated
with FM reception in which only the stronger of two signals at, or near, the same frequency
will be demodulated.
The capture effect is defined as the complete suppression of the weaker signal at the receiver
limiter (if it has one) where the weaker signal is not amplified, but attenuated. When both

signals are nearly equal in strength, or are fading independently, the receiver may switch from
one to the other and exhibit picket fencing. The capture effect can occur at the signal limiter, or
in the demodulation stage, for circuits that do not require a signal limiter.
Some
types of radio receiver circuits have a stronger capture effect than others. The measurement of
how well a receiver can reject a second signal on the same frequency is called the capture ratio
for a specific receiver. It is measured as the lowest ratio of the power of two signals that will
result in the suppression of the smaller signal. Amplitude modulation, or AM radio,
transmission is not subject to this effect. This is one reason that the aviation industry, and
others, have chosen to use AM for communications rather than FM, allowing multiple signals to
be broadcast on the same channel. Similar phenomena to the capture effect are described in AM
when offset carriers of different strengths are present in the pass band of a receiver. For
example, the aviation glideslope vertical guidance clearance beam is sometimes described as a
"capture effect" system, even though it operates using AM signals. Normally a reduction of the
unwanted signal of 30 dB is used. To give an example of this, the capture ratio may be 2 dB for
a typical tuner to give a reduction of 30 dB in the unwanted signal. In other words if the wanted
signal is only 2 dB stronger than the unwanted one, the audio level of the unwanted one will be
suppressed by 30 dB. In FM demodulation the receiver tracks the modulated frequency shift of
the desired carrier while discriminating against any other signal since it can only follow the
deviation of one signal at a time. In AM modulation the receiver tracks the signal strength of the
AM signal as the basis for demodulation. This allows any other signal to be tracked as just
another change in amplitude. So it is possible for an AM receiver to demodulate several carriers
at the same time, resulting in an audio mix. If the signals are close but not exactly on the same
frequency the mix will not only include the audio from both carriers but depending on the
carrier separation a whistle might be heard as well representing the difference in the carrier
frequencies. This mix can also occur when an AM carrier is received on a channel that is
adjacent to the desired channel. The resulting overlap forms the high-pitched whistle (about 10
kHz) that can often be heard behind an AM station at night when other carriers from
adjacent channels are traveling long distances due to atmospheric bounce.
Since AM assumes short term changes in the amplitude to be information, any electrical
impulse will be picked up and demodulated along with the desired carrier. Hence lightning

causes crashing noises when picked up by an AM radio near a storm. FM radios suppress short
term changes in amplitude and are therefore much less prone to noise during storms and during
reception of electrical noise impulses. For digital modulation schemes it has been shown that for
properly implemented on-off keying/amplitude-shift keying systems, co-channel rejection can
be better than for frequency-shift keying systems.
Noise in modulation systems
Noise Analysis for AM and FM
The following assumptions are made:
1. Channel model: the channel is distortionless, with additive white Gaussian noise (AWGN).
2. Receiver model: the filter is an ideal band-pass filter, followed by an ideal demodulator.