Electronic Devices & Circuits
The diode is a two-terminal, passive but non-linear device. Figure 1 shows the diode.
Similarly, the forward voltage drop, which is about 0.5 to 0.8 V, is of little concern. For
these reasons we treat the diode as a good approximation of an ideal one-way conductor.
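The ideal one-way-conductor approximation can be made concrete with a short sketch. The source voltage, 1 kΩ series resistor and 0.6 V drop below are illustrative assumptions, not values from the text:

```python
def ideal_diode_current(v, r_series):
    """Ideal one-way conductor: conducts freely when forward-biased,
    blocks completely when reverse-biased."""
    return v / r_series if v > 0 else 0.0

def constant_drop_current(v, r_series, v_f=0.6):
    """Slightly better model: subtract a fixed ~0.6 V forward drop."""
    return (v - v_f) / r_series if v > v_f else 0.0

# With a 10 V source and a 1 kohm series resistor, the two models differ
# by only about 6 %, which is why the ideal model is usually good enough.
i_ideal = ideal_diode_current(10.0, 1e3)    # 0.010 A
i_drop = constant_drop_current(10.0, 1e3)   # 0.0094 A
```
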
Chapter 2
Even though the scale is in tens of volts in the negative region, there is a point where the
application of too negative a voltage will result in a sharp change in the characteristics, as
shown in Fig. 2.1.
Fig 2.1
Zener Region
The current increases at a very rapid rate in a direction opposite to that of the positive
voltage region. The reverse-bias potential that results in this dramatic change in characteristics
is called the Zener potential and is given the symbol VZ. As the voltage across the diode
increases in the reverse-bias region, the velocity of the minority carriers responsible for the
reverse saturation current IS will also increase. Eventually, their velocity and associated kinetic
energy (WK = (1/2)mv²) will be sufficient to release additional carriers through collisions with
otherwise stable atomic structures. That is, an ionization process will result whereby valence
electrons absorb sufficient energy to leave the parent atom. These additional carriers can then
aid the ionization process to the point where a high avalanche current is established and
the avalanche breakdown region is determined.
The avalanche region (VZ) can be brought closer to the vertical axis by increasing the
doping levels in the p- and n-type materials. However, as VZ decreases to very low levels, such
as −5 V, another mechanism, called Zener breakdown, will contribute to the sharp change in
the characteristic. It occurs because there is a strong electric field in the region of the junction
that can disrupt the bonding forces within the atom and "generate" carriers. Although the
Zener breakdown mechanism is a significant contributor only at lower levels of VZ, this sharp
change in the characteristic at any level is called the Zener region.
Chapter 3
LIGHT-EMITTING DIODES
4.1 Rectification
A rectifier changes ac (alternating current) to dc (direct current). This is the most
important application of diodes. Diodes are sometimes called rectifiers.
The basic circuit is shown in Fig. 4.1.
The full-wave bridge diodes prevent current from flowing back out of the capacitor. The capacitor is an
energy storage element. The energy stored in a capacitor is E = (1/2)CU². For C in F (farads)
and U in V (volts), E comes out in J (joules), where 1 J = 1 W·s. The capacitor value is chosen
so that R_load × C >> 1/f, where f is the ripple frequency. For a power-line sine wave it is
2 × 50 Hz = 100 Hz. This ensures small ripple by making the time constant for discharge
much longer than the time between recharges (the capacitor charges very quickly, while
discharging very slowly).
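As a rough check of this design rule, the peak-to-peak ripple can be estimated as ΔV ≈ I/(f·C), since the capacitor discharges at roughly constant current between recharge peaks. The component values below are assumptions for illustration:

```python
def ripple_voltage(i_load, f_ripple, c):
    """Estimate peak-to-peak ripple: the capacitor loses I*T = I/f of
    charge between peaks, so dV = I / (f * C)."""
    return i_load / (f_ripple * c)

f = 100.0        # ripple frequency for full-wave rectified 50 Hz mains
c = 1000e-6      # 1000 uF reservoir capacitor (assumed)
r_load = 100.0   # assumed load resistance
i_load = 10.0 / r_load   # ~0.1 A at an assumed 10 V output

# Design rule from the text: discharge time constant >> ripple period
assert r_load * c > 1.0 / f   # 0.1 s vs 0.01 s

dv = ripple_voltage(i_load, f, c)   # 1.0 V peak-to-peak
```

A larger capacitor or a lighter load stretches the time constant further and shrinks the ripple proportionally.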
We should remember the forward voltage drop of the diode: this circuit gives no output
for input signals smaller than the forward voltage drop, say 0.5 V pp (peak to peak). If
this is a problem, there are various tricks that help to combat this limitation. For instance:
1. use Schottky diodes with a smaller forward voltage drop (approximately 0.2 V),
2. use a circuit-level solution, i.e., modify the circuit structure to compensate for the drop,
3. use matched-pair compensation with transistors or FETs.
4.5 Limiter
The circuit in Fig. 4.10 limits the output swing to one diode drop, roughly 0.6 V.
This might seem very small, but if the next device is an amplifier with large voltage
amplification, its input always has to be near zero volts; otherwise the output saturates.
For instance, suppose we have an op amp with a gain of 1000. The amplifier operates with a
supply voltage of ±15 V. Sometimes it can be ±12 V or ±18 V or something in between. It will
never give an output voltage larger than the supply voltage, i.e., ±15 V. This means that an input
signal of ±15 mV (±15 V/1000) or larger will saturate the output. This particular amplifier
gives an output proportional to the input (proportionality factor 1000) only for input
signals in the interval (−15 mV, +15 mV).
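The saturation behaviour described above amounts to clipping the ideal output at the supply rails, which can be sketched as:

```python
def amplifier_output(v_in, gain=1000.0, v_supply=15.0):
    """Ideal high-gain amplifier whose output is clipped at the supply rails."""
    return max(-v_supply, min(v_supply, gain * v_in))

amplifier_output(0.005)    # 5 mV in  -> 5 V out (linear region)
amplifier_output(0.020)    # 20 mV in -> clipped at +15 V (saturated)
amplifier_output(-0.020)   # clipped at -15 V
```

Any input outside the (−15 mV, +15 mV) window simply pins the output to a rail, which is exactly what the diode limiter prevents.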
This diode limiter is often used as input protection for high-gain amplifiers.
BJT TRANSISTOR
A transistor is a semiconductor active device which has the property of transfer of resistance.
It is used to amplify and switch electronic signals. A BJT is a three-terminal, three-region,
two-junction bipolar device. It consists of two back-to-back P-N junctions. Figure 5.1
below shows the two types of transistors, based on the polarity of the majority and minority
charge carriers in the semiconductor materials.
(a) EMITTER
It is on the left-hand side of the transistor. It is a heavily doped region because its
main function is to emit majority charge carriers (either electrons or holes) into the
base.
(b) BASE
It is the middle region of the transistor. It is very thin (on the order of 10^-6 m) compared to either the
emitter or collector, and is very lightly doped.
(c) COLLECTOR
It is on the right-hand side of the transistor. It is moderately doped, and its main function
is to collect majority charge carriers through the base.
To identify a given device as a transistor, the basic guideposts include:
(1) Conventional current flows along the arrow, whereas electrons flow against it.
(4) IE = IB + IC
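Guidepost (4) is easy to apply numerically. The base current and current gain β below are assumed example values:

```python
def transistor_currents(i_b, beta):
    """Given base current IB and current gain beta = IC/IB,
    return (IC, IE) using guidepost (4): IE = IB + IC."""
    i_c = beta * i_b
    return i_c, i_b + i_c

i_c, i_e = transistor_currents(20e-6, 100)
# IC = 2.0 mA, IE = 2.02 mA: the emitter current is only slightly
# larger than the collector current, since IB is small.
```
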
(a) PNP Transistor: N-type material sandwiched between two layers of P-type material;
it is called a PNP transistor.
(b) NPN Transistor: P-type material sandwiched between two layers of N-type material;
it is called an NPN transistor.
The symbols employed for PNP and NPN transistors are shown. The arrowhead is always at
the emitter (not at the collector) and in each case its direction indicates the conventional
direction of current flow. For a PNP transistor, the arrowhead points from emitter to base,
meaning that the emitter is positive with respect to the base (and also with respect to the collector). For
an NPN transistor, it points from base to emitter, meaning that the base is positive with respect to the emitter.
The emitter, base and collector are provided with terminals which are labelled as E, B and C.
The two junctions are the emitter-base junction and the collector-base junction.
It may be noted, in passing, that transistors are made by growing, alloying or diffusing
processes.
For proper working of a transistor, it is essential to apply voltages of correct polarity across
its two junctions. It is worthwhile to remember that for normal operation the emitter-base
junction is forward-biased while the collector-base junction is reverse-biased.
Figure 5.2 shows properly biased NPN and PNP transistors. In the figure,
two batteries respectively provide the dc emitter supply voltage VEE and collector supply
voltage VCC for properly biasing the two junctions of the transistor. In fig (a), the positive terminal
of VEE is connected to the P-type emitter in order to repel or push holes into the base. The
negative terminal of VCC is connected to the collector so that it may attract or pull holes
through the base. Similar considerations apply to the NPN transistor of fig (b).
Basically, there are three types of circuit connections (called configurations) for operating a
transistor.
The term ‘common’ is used to denote the electrode that is common to the input and output
circuits. Because the common electrode is generally grounded, these modes of operation are
frequently referred to as grounded-base, grounded-emitter and grounded-collector
configurations.
Since the transistor is a three-terminal device, one of its terminals has to be common to the input
and output circuits.
6.1 CB CONFIGURATION
In this configuration, the emitter current IE is the input current and the collector current IC is the
output current. The input signal is applied between the emitter and base, whereas the output is
taken from between the collector and base. The common-base configuration is shown in
Figure 6.1.
6.2 CC CONFIGURATION
In this case, the input signal is applied between base and collector and the output signal is taken
from the emitter-collector circuit. Here IB is the input current and IE is the output current. The
common-collector configuration is shown in Figure 6.2.
6.3 CE CONFIGURATION
In this configuration, the input signal is applied between the base and emitter, and the output is
taken from the collector-emitter circuit. Here IB is the input current and IC is the output current, and
IE = IB + IC
The transistor working as a switch is shown in Figure 6.4. When a transistor is
used as a switch it must be either fully OFF or fully ON. In the fully ON state the voltage VCE
across the transistor is almost zero and the transistor is said to be saturated because it cannot
pass any more collector current IC. The output device switched by the transistor is usually
called the 'load'.
In the fully ON state: power = IC × VCE, but VCE ≈ 0, so the power dissipated in the transistor is very small.
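The dissipation argument can be checked directly with power = IC × VCE. The current and voltage values below are assumed for illustration:

```python
def transistor_power(i_c, v_ce):
    """Power dissipated in the transistor itself: P = IC * VCE."""
    return i_c * v_ce

p_on = transistor_power(0.5, 0.2)     # saturated: VCE ~ 0.2 V -> 0.1 W
p_off = transistor_power(0.0, 12.0)   # cut off: IC ~ 0 -> 0.0 W
p_mid = transistor_power(0.25, 6.0)   # linear region -> 1.5 W
# A good switch spends its time in the ON or OFF state and avoids the
# linear region, where dissipation in the transistor is largest.
```
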
The field effect transistor is a three terminal unipolar solid-state device in which
current is controlled by an electric field as is done in vacuum tubes.
(1) SOURCE: it is the terminal through which majority carriers enter the bar. Since
carriers come from it, it is called the source.
(2) DRAIN: It is the terminal through which majority carriers leave the bar. The drain-to-
source voltage VDS drives the drain current ID.
(3) GATE: These are two internally-connected heavily-doped regions which form two P-
N junctions. The gate-source voltage VGS reverse-biases the gates.
(4) CHANNEL: It is the space between two gates through which majority carriers pass
from source to drain when VDS is applied.
It must be kept in mind that gate arrow always points to N-type material.
The channel can be of n-type or p-type, and the device is accordingly called an nMOS or a pMOS. The
MOSFET is used in digital CMOS logic, which uses P- and N-channel MOSFETs as
building blocks.
As the name indicates, the MOSFET operates only in the enhancement mode and has no
depletion mode. It works with large values of VGS only. It differs in construction from the
DE MOSFET in that structurally there exists no channel between the drain and source.
Hence, it does not conduct when VGS=0. That is why it is called normally OFF MOSFET.
This MOSFET is so called because it can be operated in both the depletion mode and the
enhancement mode by changing the polarity of VGS. When a negative gate-to-source
voltage is applied, the N-channel DE MOSFET operates in the depletion mode. However,
with a positive gate voltage, it operates in the enhancement mode. Since a channel exists
between drain and source, ID flows even when VGS = 0. That is why the DE MOSFET is known
as a normally ON MOSFET.
Figure 7.3 shows different symbolic representations of N-channel and P-channel FETs.
FETs can be used in almost every application in which bipolar transistors can be used.
However, they have certain applications which are exclusive to them:
(1) As input amplifiers in oscilloscopes, electronic voltmeters and other measuring and
testing equipment because their higher rin reduces loading effect to the minimum.
(2) In logic circuits, where the FET is kept OFF with zero input and is turned ON
with very little input power.
(5) Large-scale integration (LSI) and computer memories because of very small size.
Chapter 8
Fig. 8.1 Half-wave rectified output
Now the average value of this waveform is not zero, since there is no negative half. Hence a
rectifier circuit converts an A.C. signal with zero average value into a unidirectional waveform
with non-zero average value.
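The non-zero average is easy to verify numerically: for a sine wave of peak Vp, the average of the half-wave rectified output works out to Vp/π, about 0.318·Vp. A quick sketch:

```python
import math

def halfwave_average(v_peak, n=100000):
    """Numerically average one full period of a half-wave rectified sine."""
    total = 0.0
    for k in range(n):
        v = v_peak * math.sin(2 * math.pi * k / n)
        total += max(v, 0.0)   # the rectifier passes only the positive half
    return total / n

v_avg = halfwave_average(10.0)   # ~3.18 V for a 10 V peak input,
                                 # matching the analytic result Vp / pi
```
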
The A.C. input is normally the A.C. mains supply. Since this voltage is 230 V, and such a
high voltage cannot be applied to the semiconductor diode, a step-down transformer should be
used. If a large D.C. voltage is required, vacuum tubes should be used. The output voltage is taken
across the load resistor RL. Since the peak value of the A.C. signal is much larger than the cut-in
voltage Vγ, we neglect Vγ in the analysis.
The purpose of a rectifier circuit is to convert A.C. to D.C. But the simple circuit
shown before will not achieve this. Rectifier converts A.C. to unidirectional flow and not
D.C. So filters are used to get pure D.C. Filters convert unidirectional flow into D.C.
Bridge rectifiers are available in a package with all the 4 diodes incorporated in one
unit. It will have two terminals for A.C. Input and two terminals for DC output. Selenium
rectifiers are also available as a package.
Basic definitions
Ideally an amplifier should reproduce the input signal, with a change in magnitude and with or
without a change in phase. But some of the shortcomings of the amplifier circuit are:
1. Change in the value of the gain due to variation in supply voltage, temperature or
component values.
2. Distortion of the waveform due to non-linearities in the operating characteristics of the amplifying
device.
3. The amplifier may introduce noise (undesired signals).
The above drawbacks can be minimized if we introduce feedback.
Based on series/shunt and voltage/current sampling, the four different types of feedback are:
1. Voltage-series feedback.
2. Voltage-shunt feedback.
3. Current-series feedback.
4. Current-shunt feedback.
For an amplifier, the additional power due to amplification is derived from the DC
bias supply. So an amplifier effectively converts DC to AC. But it needs an AC input: without
an AC input, there is no AC output. In oscillator circuits, DC power is also converted to AC,
but there is no AC input signal. So the difference between an amplifier and an oscillator is that in
amplifier circuits the DC-to-AC power conversion is controlled by the AC input signal, while
in oscillators it is not. There are two types of oscillator circuits:
1. Harmonic oscillators.
2. Relaxation oscillators.
Harmonic oscillators produce sine waves. Relaxation oscillators produce sawtooth and
square waves, etc. Oscillator circuits employ both active and passive devices. Active devices
convert the DC power to AC. Passive components determine the frequency of oscillation:
f = [2π√(LC)]⁻¹
FOR COLPITTS OSCILLATOR
f = [2π√(LCt)]⁻¹
Ct = C1C2/(C1 + C2)
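The Colpitts formulas are straightforward to evaluate. The L, C1 and C2 values below are assumed example components, not from the text:

```python
import math

def series_capacitance(c1, c2):
    """Effective tank capacitance of the Colpitts divider: Ct = C1*C2/(C1+C2)."""
    return c1 * c2 / (c1 + c2)

def colpitts_frequency(l, c1, c2):
    """Colpitts oscillation frequency f = [2*pi*sqrt(L*Ct)]^-1."""
    return 1.0 / (2 * math.pi * math.sqrt(l * series_capacitance(c1, c2)))

# Assumed values: L = 10 uH, C1 = C2 = 200 pF, so Ct = 100 pF
f = colpitts_frequency(10e-6, 200e-12, 200e-12)   # ~5.03 MHz
```
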
MICROWAVE ENGINEERING
The short wavelengths involved in turn mean that the propagation time for electrical
effects from one point in a circuit to another point is comparable with the period of the
oscillating currents and charges in the system. As a result, conventional low-frequency circuit
analysis based on Kirchhoff's laws and voltage-current concepts no longer suffices for an
adequate description of the electrical phenomena taking place. It is necessary instead to carry
out the analysis in terms of a description of the electric and magnetic fields associated with
the device. In essence, it might be said, microwave engineering is applied electromagnetic
fields engineering. For this reason the successful engineer in this area must have a good
working knowledge of electromagnetic field theory.
Table 1.2 represents modern labelling of frequency bands according to IEEE
standards.
The great interest in microwave frequencies arises for a variety of reasons. Basic
among these is the ever-increasing need for more radio-frequency-spectrum space and the
rather unique uses to which microwave frequencies can be applied. When it is noted that the
frequency range 10^9 to 10^12 Hz contains a thousand sections like the frequency spectrum from
0 to 10^9 Hz, the reason for this great interest becomes apparent.
In more recent years microwave frequencies have also come into widespread use in
communication links, generally referred to as microwave links. Since the propagation of
microwaves is effectively along line-of-sight paths, these links employ high towers with
reflector or lens-type antennas as repeater stations spaced along the communication path.
Such links are a familiar sight to the motorist traveling across the country because of their
frequent use by highway authorities, utility companies, and television networks. A further
interesting means of communication by microwaves is the use of satellites as microwave
relay stations. The first of these, the Telstar, launched in July 1962, provided the first
transmission of live television programs from the United States to Europe.
At the present time most communication systems are shifting to the use of digital
transmission, i.e., analogue signals are digitized before transmission. Microwave digital
communication system development is progressing rapidly. In the early systems simple
modulation schemes were used and resulted in inefficient use of the available frequency
spectrum. The development of 64-state quadrature amplitude modulation (64-QAM) has
made it possible to transmit 2,016 voice channels within a single 30-MHz RF channel. This is
competitive with FM analog modulation schemes for voice.
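The efficiency claim can be sanity-checked with some arithmetic, assuming standard 64 kb/s PCM voice channels (an assumption, since the text does not state the per-channel rate):

```python
import math

bits_per_symbol = math.log2(64)          # 64-QAM carries 6 bits per symbol
channels = 2016
voice_rate = 64e3                        # assumed 64 kb/s PCM per voice channel
rf_bandwidth = 30e6                      # the 30 MHz RF channel from the text

total_rate = channels * voice_rate       # ~129 Mb/s
efficiency = total_rate / rf_bandwidth   # ~4.3 bits/s per Hz of RF bandwidth
```

A spectral efficiency above 4 (b/s)/Hz is only reachable with multi-bit-per-symbol schemes such as 64-QAM, which is the point the paragraph is making.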
Even though such uses of microwaves are of great importance, the applications of
microwaves and microwave technology extend much further, into a variety of areas of basic
and applied research, and including a number of diverse practical devices, such as microwave
ovens that can cook a small roast in just a few minutes.
Waveguides periodically loaded with shunt susceptance elements support slow waves
having velocities much less than the velocity of light, and are used in linear accelerators.
These produce high-energy beams of charged particles for use in atomic and nuclear
research. The slow-traveling electromagnetic waves interact very efficiently with charged-
particle beams having the same velocity, and thereby give up energy to the beam.
Sensitive microwave receivers are used in radio astronomy to detect and study the
electromagnetic radiation from the sun and a number of radio stars that emit radiation in this
band. Such receivers are also used to detect the noise radiated from plasmas (an
approximately neutral collection of electrons and ions, e.g., a gas discharge). The information
obtained enables scientists to analyze and predict the various mechanisms responsible for
plasma radiation. Microwave radiometers are also used to map atmospheric temperature
profiles, moisture conditions in soils and crops, and for other remote-sensing applications as
well.
Chapter 2
For a large variety of waveguides of practical interest it turns out that all the boundary
conditions can be satisfied by fields that do not have all components present. Specifically, for
transmission lines, the solution of interest is a transverse electromagnetic wave with
transverse components only, that is, Ez = Hz = 0, whereas for waveguides, solutions with Ez =
0 or Hz = 0 are possible. Because of the widespread occurrence of such field solutions, the
following classification of solutions is of particular interest.
1. Transverse electromagnetic (TEM) waves. For TEM waves, Ez = Hz = 0. The electric
field may be found from the transverse gradient of a scalar function Φ(x, y), which is a
function of the transverse coordinates only and is a solution of the two-dimensional
Laplace equation.
2. Transverse electric (TE), or H, modes. These solutions have Ez = 0 but Hz ≠ 0. All the
field components may be derived from the axial component Hz of the magnetic field.
3. Transverse magnetic (TM), or E, modes. These solutions have Hz = 0 but Ez ≠ 0. The
field components may be derived from Ez.
In some cases it will be found that a TE or TM mode by itself will not satisfy all the
boundary conditions. However, in such cases linear combinations of TE and TM modes may
be used, since such linear combinations always provide a complete and general solution.
Although other possible types of wave solutions may be constructed, the above three types
are the most useful in practice and by far the most commonly used ones.
Hollow-pipe waveguides do not support a TEM wave. In hollow-pipe waveguides the waves
are of the TE and TM variety. The waveguide with a rectangular cross section is the most
widely used one. It is available in sizes for use at frequencies from 320 MHz up to 333 GHz.
The WR-2300 waveguide for use at 320 MHz has internal dimensions of 58.42 cm by 29.21 cm
(23.0 in by 11.5 in) and is a very large duct. By contrast, the WR-3 waveguide for use at 333 GHz has internal
dimensions of only about 0.86 mm by 0.43 mm.
Modes of Propagation
The rectangular waveguide with a cross section as illustrated in Fig. 2.1 is an example of
a wave guiding device that will not support a TEM wave. Consequently, it turns out that
unique voltage and current waves do not exist, and the analysis of the waveguide properties
has to be carried out as a field problem rather than as a distributed-parameter-circuit problem.
In a hollow cylindrical waveguide a transverse electric field can exist only if a time-
varying axial magnetic field is present. Similarly, a transverse magnetic field can exist only if
either an axial displacement current or an axial conduction current is present, as Maxwell's
equations show. Since a TEM wave does not have any axial field components and there is no
center conductor on which a conduction current can exist, a TEM wave cannot be propagated
in a cylindrical waveguide.
The types of waves that can be supported (propagated) in a hollow empty waveguide are
the TE and TM modes. The essential properties of all hollow cylindrical waveguides are the
For a rectangular waveguide with a width a equal to twice the height b, the maximum
bandwidth of operation over which only the dominant TE10 mode propagates is a 2:1 band.
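The 2:1 figure follows from the TE(m,n) cutoff formula fc = (c/2)·sqrt((m/a)² + (n/b)²): with a ≈ 2b, the next mode above TE10 cuts off at exactly twice the TE10 cutoff. A sketch using standard WR-90 X-band dimensions as an assumed example:

```python
import math

C0 = 3.0e8   # speed of light in vacuum, m/s

def te_cutoff(a, b, m, n):
    """Cutoff frequency of the TE(m,n) mode in an a x b rectangular guide."""
    return (C0 / 2.0) * math.sqrt((m / a) ** 2 + (n / b) ** 2)

# WR-90 X-band guide, a roughly 2b (dimensions in meters)
a, b = 0.02286, 0.01016
f10 = te_cutoff(a, b, 1, 0)   # ~6.56 GHz, dominant-mode cutoff
f20 = te_cutoff(a, b, 2, 0)   # ~13.1 GHz, exactly 2 * f10
# Only the TE10 mode propagates between f10 and f20: a 2:1 band.
```
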
For some system applications it is necessary to have a waveguide that operates with only a
single mode of propagation over much larger bandwidths. A transmission line supporting
only a TEM mode can fulfil this requirement but must then have cross-sectional dimensions
that are small relative to the shortest wavelength of interest. A coaxial transmission line will
support higher-order TE and TM modes in addition to the TEM mode. Thus, to avoid
excitation of a higher-order mode of propagation, the outer radius must be kept small relative
to the wavelength. The small cross section implies a relatively large attenuation; so some
other form of waveguide is needed.
Chapter 3
WAVEGUIDE COMPONENTS-1
3.1 ATTENUATORS
Perhaps the most satisfactory precision attenuator developed is the rotary attenuator,
which we now examine in some detail. The main components of this instrument are two
rectangular-to-circular waveguide tapered transitions, together with an intermediate section of
circular waveguide that is free to rotate. A thin tapered resistive card is placed at the output
end of each transition section and oriented parallel to the broad walls of the rectangular
guide. A similar resistive card is located in the intermediate circular-guide section. The
incoming TE10 mode in the rectangular guide is transformed into the TE11 mode in the circular
guide with negligible reflection by means of the tapered transition. The polarization of the
TE11 mode is such that the electric field is perpendicular to the thin resistive card in the
transition section. As such, this resistive card has a negligible effect on the TE11 mode. Since
the resistive card in the centre section can be rotated, its orientation relative to the electric
field of the incoming TE11 mode can be varied, so that the amount by which this mode is
attenuated is adjustable.
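The attractive property of the rotary attenuator is that its attenuation follows a purely geometric law: resolving the field onto the rotated card attenuates the amplitude by cos²θ, giving A = -40·log10(cos θ) dB, independent of frequency. A sketch (the angles are arbitrary examples):

```python
import math

def rotary_attenuation_db(theta_deg):
    """Rotary attenuator law: the field component parallel to the rotated
    card is absorbed, leaving an amplitude factor cos(theta)^2, so the
    attenuation is -20*log10(cos^2 theta) = -40*log10(cos theta) dB."""
    return -40.0 * math.log10(math.cos(math.radians(theta_deg)))

rotary_attenuation_db(0.0)    # 0 dB: card perpendicular to the E field
rotary_attenuation_db(45.0)   # ~6.02 dB
rotary_attenuation_db(60.0)   # ~12.04 dB
```

Because the law depends only on the rotation angle, the instrument can be calibrated mechanically rather than electrically.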
Phase shifters are used to change the transmission phase angle (phase of S21) of a network.
Ideal phase shifters provide low insertion loss, and equal amplitude (or loss) in all phase
states. While the loss of a phase shifter is often overcome using an amplifier stage, the less
loss, the less power that is needed to overcome it. Most phase shifters are reciprocal
networks, meaning that they work effectively on signals passing in either direction. Phase
shifters can be controlled electrically, magnetically or mechanically. Most of the phase
shifters described here are passive reciprocal networks; we will concentrate
mainly on those that are electrically controlled.
While the applications of microwave phase shifters are numerous, perhaps the most important
application is within a phased array antenna system (a.k.a. electrically steerable array, or
ESA), in which the phases of a large number of radiating elements are controlled to force the
electromagnetic wave to add up at a particular angle to the array. The total phase variation of
a phase shifter need only be 360 degrees to control an ESA of moderate bandwidth. Networks
that stretch phase more than 360 degrees are often called line stretchers, and are constructed
similar to the switched line phase shifters to be described below.
The convention followed for phase shifters is that the shortest phase length is the reference or
"off" state, and the longest path or phase length is the "on" state. Thus a 90 degree phase
shifter actually provides minus ninety degrees of phase shift in its "on" state.
Phased arrays
3.4 DIRECTIONAL COUPLERS
A directional coupler is a four-port microwave junction with the properties discussed below.
With reference to Fig. 3.1, which is a schematic illustration of a directional coupler, the ideal
directional coupler has the property that a wave incident in port 1 couples power into ports 2
and 3 but not into port 4. Similarly, power incident in port 4 couples into ports 2 and 3 but
not into port 1. Thus ports 1 and 4 are uncoupled. For waves incident in port 2 or 3, the
power is coupled into ports 1 and 4 only, so that ports 2 and 3 are also uncoupled. In
addition, all four ports are matched. That is, if three ports are terminated in matched loads,
the fourth port appears terminated in a matched load, and an incident wave in this port suffers
no reflection.
Directional couplers are widely used in impedance bridges for microwave
measurements and for power monitoring. For example, if a radar transmitter is connected to
port 1, the antenna to port 2, a microwave crystal detector to port 3, and a matched load to
port 4, the power received in port 3 is proportional to the power flowing from the transmitter
to the antenna.
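Two figures of merit follow directly from the port powers in such a setup: the coupling factor (port 1 to port 3) and the directivity (how well the nominally isolated port is rejected). The power levels below are assumed examples:

```python
import math

def coupling_db(p_in, p_coupled):
    """Coupling factor: how far below the input the coupled-port power is."""
    return 10.0 * math.log10(p_in / p_coupled)

def directivity_db(p_coupled, p_isolated):
    """Directivity: rejection of the unwanted direction at the isolated port."""
    return 10.0 * math.log10(p_coupled / p_isolated)

# Assumed: 1 W into port 1, 10 mW at the coupled port, and 10 uW leaking
# into the nominally isolated port
c = coupling_db(1.0, 0.01)       # 20 dB coupler
d = directivity_db(0.01, 1e-5)   # 30 dB directivity
```
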
Chapter 4
The development of ferrite materials suitable for use at microwave frequencies has
resulted in a large number of microwave devices. A number of them have nonreciprocal
electrical properties; i.e., the transmission coefficient through the device is not the same for
different directions of propagation.
5.2 Gyrator
5.3ISOLATOR
The isolator is similar to the gyrator in construction except that it employs a 45° twist
section and 45° of Faraday rotation. In addition, thin resistive cards are inserted in the input
and output guides.
The devices utilizing ferrites for their operation represent only a small number of the
large variety of devices that have been developed. In addition to the above, there are other
forms of isolators, both reciprocal and nonreciprocal phase shifters, electronically controlled
(by varying the current in the electromagnet that supplies the static biasing field) phase
shifters and modulators, electronic switches and power limiters, etc. The nonlinear property
of ferrites for high signal levels has also been used in harmonic generators, frequency mixers,
and parametric amplifiers. A discussion of these devices, together with design considerations,
performance data, and references to the original literature, is contained in the book by Lax and
Button, listed in the references at the end of this chapter. The recent article by Rodriquez
gives a good survey of the present status of ferrite devices.
Both types of tubes utilize an electron beam on which space-charge waves and cyclotron
waves can be excited. The space-charge waves are primarily longitudinal oscillations of the
electrons and interact with the electromagnetic fields in cavities and slow-wave circuits to
give amplification.
Klystron amplifiers have the advantage (over the magnetron) of coherently amplifying
a reference signal so its output may be precisely controlled in
amplitude, frequency and phase. Many klystrons have a waveguide for coupling microwave
energy into and out of the device, although it is also quite common for lower power and
lower frequency klystrons to use coaxial couplings instead. In some cases a coupling probe is
used to couple the microwave energy from a klystron into a separate external waveguide.
The name klystron comes from the stem form κλυσ- (klys) of a Greek verb referring to
the action of waves breaking against a shore, and the ending of the word electron.
Working
Klystrons amplify RF signals by converting the kinetic energy in a DC electron beam
into radio frequency power. A beam of electrons is produced by a thermionic cathode (a
heated pellet of low work function material), and accelerated by high-voltage electrodes
(typically in the tens of kilovolts). This beam is then passed through an input cavity, where the
RF input signal velocity-modulates the electron beam.
Figure 6.2 shows the reflex klystron. In the reflex klystron (also known as a 'Sutton' klystron
after its inventor), the electron beam passes through a single resonant cavity. The electrons
are fired into one end of the tube by an electron gun. After passing through the resonant
cavity they are reflected by a negatively charged reflector electrode for another pass through
the cavity, where they are then collected. The electron beam is velocity modulated when it
first passes through the cavity. The formation of electron bunches takes place in the drift
space between the reflector and the cavity. The voltage on the reflector must be adjusted so
that the bunching is at a maximum as the electron beam re-enters the resonant cavity, thus
ensuring a maximum of energy is transferred from the electron beam to the RF oscillations in
the cavity. The supply voltage should always be switched on before the input is provided to the reflex
klystron, as the reflex klystron will not function at all if the supply is
provided after the input. The reflector voltage may be varied slightly from the optimum
value, which results in some loss of output power, but also in a variation in frequency. This
effect is used to good advantage for automatic frequency control in receivers, and
in frequency modulation for transmitters. The level of modulation applied for transmission is
small enough that the power output essentially remains constant. At regions far from the
optimum voltage, no oscillations are obtained at all. This tube is called a reflex klystron
because the reflector returns the electron beam back through the cavity it has already traversed.
Modern semiconductor technology has effectively replaced the reflex klystron in most
applications.
In addition to the main types of microwave tubes already discussed, there are a variety of
others as well. In one form of travelling-wave tube, the resistance-wall amplifier, the helix is
replaced by a circular guide lined with a resistive material. The resistive lining enables a slow
wave to propagate in the guide, a wave that is highly attenuated in the absence of a beam. If
an electron beam is present, amplification takes place with a growth constant large enough
to offset the attenuation due to the resistive lining. Thus a net overall amplification is
obtained.
In another form of travelling-wave tube, the double-stream amplifier, two parallel
electron beams are used. In this tube one of the beams provides the slow-wave structure, or
circuit, for the other beam.
It is also possible to amplify the space-charge waves directly by passing the beam
through a succession of accelerating and decelerating regions. This type of tube is called a
velocity-jump amplifier because the beam velocity v0 is periodically changed, or jumped, to
new values.
For both the O-type and M-type travelling-wave tubes, it is possible to adjust the beam
velocity so that it is equal to the phase velocity of any one of the spatial harmonics making up
Chapter 7
In some materials (III-V compounds such as GaAs and InP), after the electric field in
the material reaches a threshold level, the mobility of electrons decreases as the electric field
is increased, thereby producing negative resistance. A two-terminal device made from such a
material can produce microwave oscillations, the frequency of which is primarily determined
by the characteristics of the specimen of the material and not by any external circuit. The
Gunn effect was discovered by J. B. Gunn of IBM in 1963.
The Gunn diode is a so-called transferred electron device. Electrons are transferred
from one valley in the conduction band to another valley. In order to understand the nature of
the transferred electron effect exhibited by Gunn diodes, it is necessary to consider the
electron drift velocity versus electric field (or current versus voltage) relationship for GaAs
(see Figure 7.1). Below the threshold field, Eth, of approximately 0.32 V/µm (3.2 kV/cm), the device acts
as a passive resistance. However, above Eth the electron velocity (current) decreases as the
field (voltage) increases, producing a region of negative differential mobility, NDM
(or negative differential resistance, NDR). This is the essential feature that leads to current instabilities and Gunn
oscillations in an active device, and is due to the special conduction-band structure of direct
band-gap semiconductors such as GaAs.
In the lower Γ valley, electrons exhibit a small effective mass and very high
mobility. In the satellite L valley, electrons exhibit a large effective mass and very low
mobility.
Above the high field, EH, most electrons reside in the L valley and the device
behaves as a passive resistance (of greater magnitude) once again. In a practical Gunn diode,
electrons are accelerated from the cathode by the prevailing electric field. When they have
acquired sufficient energy, they begin to scatter into the low mobility satellite valley and
slow down.
The question of exactly how the NDR phenomenon in GaAs results in Gunn
oscillations can now be answered with the aid of Figure 7.4. A sample of uniformly doped n-
type GaAs of length L is biased with a constant voltage source V0.
The electric field is therefore constant, and its magnitude is given by E0 = V0/L.
From the bottom graph in Figure 7.4 it is clear that the electrons flow from cathode to anode
with a constant velocity.
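As a quick sanity check on the bias condition described above, the average field E0 = V0/L can be compared against the GaAs threshold field. A minimal Python sketch; the bias voltage and sample length below are illustrative assumptions, not values from the text:

```python
# Average field across a uniformly doped GaAs sample: E0 = V0 / L.
# The threshold field for GaAs is roughly 0.32 V/um (3.2 kV/cm).

E_TH = 0.32e6  # threshold field for GaAs, V/m

def average_field(v_bias, length_m):
    """Return the average electric field (V/m) in a sample of the given length."""
    return v_bias / length_m

def in_ndm_region(v_bias, length_m):
    """True if the bias puts the sample above threshold, where negative
    differential mobility (and hence Gunn oscillation) is possible."""
    return average_field(v_bias, length_m) > E_TH

# A 10 um device biased at 5 V gives E0 = 0.5 MV/m, above threshold;
# at 1 V the same device stays below threshold and acts as a resistor.
print(in_ndm_region(5.0, 10e-6))   # True
print(in_ndm_region(1.0, 10e-6))   # False
```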
7.4 Applications
Gunn diodes are reliable, relatively easy to install, and their low output power levels fall well
below the safety exposure limits. They are ideally suited for use in low-noise sources such as:
Vehicle ABS
Automatic identification
Presence/absence indicators
Movement sensors
Distance measurements
The main avalanche transit-time devices are:
1. IMPATT Diode
2. TRAPATT Diode
3. BARITT Diode
They operate at frequencies between about 3 and 100 GHz or more. A main advantage is
their high power capability. These diodes are used in a variety of applications from low
power radar systems to alarms. A major drawback of using IMPATT diodes is the high level
of phase noise they generate. This results from the statistical nature of the avalanche process.
Nevertheless, these diodes make excellent microwave generators for many applications.
The original proposal for a microwave device of the IMPATT type was made by Read.
The Read diode consists of two regions: (i) the avalanche region (a
region with relatively high doping and high field) in which avalanche multiplication occurs,
and (ii) the drift region (a region with essentially intrinsic doping and constant field) through which
the generated holes drift towards the contact. A similar device can be built with the
complementary configuration, in which electrons generated by the avalanche multiplication drift through
the intrinsic region.
An IMPATT diode generally is mounted in a microwave package. The diode is mounted with
its high–field region close to a copper heat sink so that the heat generated at the diode
junction can be readily dissipated. Similar microwave packages are used to house other
microwave devices.
Impact ionization
If a free electron with sufficient energy strikes a silicon atom, it can break the covalent
bond of silicon and liberate an electron from the covalent bond. If the electron liberated gains
energy by being in an electric field and liberates other electrons from other covalent bonds
then this process can cascade very quickly into a chain reaction, producing a large number of
electrons and a large current flow. This phenomenon is called impact avalanche.
Consider a dc bias VB, just short of that required to cause breakdown, applied to the diode.
Let an AC voltage of sufficiently large magnitude be superimposed on the dc bias, such that
during the positive cycle of the AC voltage, the diode is driven deep into the avalanche
breakdown. At t=0, the AC voltage is zero, and only a small pre-breakdown current flows
through the diode. As t increases, the voltage goes above the breakdown voltage and
secondary electron-hole pairs are produced by impact ionization. As long as the field in the
avalanche region is maintained above the breakdown field, the electron-hole concentration
grows exponentially with t. Similarly this concentration decays exponentially with time when
the field is reduced below breakdown voltage during the negative swing of the AC voltage.
The holes generated in the avalanche region disappear in the p+ region and are collected by
the cathode. The electrons are injected into the i – zone where they drift toward the n+ region.
Then, the field in the avalanche region reaches its maximum value and the population of the
electron-hole pairs starts building up. At this time, the ionization coefficients have their
maximum values. The generated electron concentration does not follow the electric field
instantaneously because it also depends on the number of electron-hole pairs already present
in the avalanche region. Hence, the electron concentration at this point will have a small
value. Even after the field has passed its maximum value, the electron-hole concentration
continues to grow because the secondary carrier generation rate still remains above its
average value. For this reason, the electron concentration in the avalanche region attains its
maximum value when the field has dropped to its average value. Thus, it is clear that the
avalanche region introduces a 90° phase shift between the AC signal and the electron
concentration in this region.
With a further increase in t, the AC voltage becomes negative, and the field in the avalanche
region drops below its critical value. The electrons in the avalanche region are then injected
into the drift zone which induces a current in the external circuit which has a phase opposite
to that of the AC voltage. The AC field, therefore, absorbs energy from the drifting electrons
as they are decelerated by the decreasing field. It is clear that an ideal phase shift between the
diode current and the AC signal is achieved if the thickness of the drift zone is such that the
bunch of electrons is collected at the n+ anode at the moment the AC voltage goes to zero.
This condition is achieved by making the length of the drift region equal to the distance the
electrons drift during half a period of the AC signal.
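The transit-time condition described above can be turned into a rough design number: the drift region must be crossed in half an RF period. A sketch, assuming a saturated drift velocity of about 1×10^5 m/s (a typical textbook value for silicon):

```python
# The electron bunch should cross the drift region in half an RF period,
# so that it is collected just as the AC voltage returns to zero.

V_SAT = 1.0e5  # assumed saturated drift velocity, m/s (~1e7 cm/s for Si)

def drift_length(freq_hz, v_sat=V_SAT):
    """Drift-region length (m) giving a half-period transit at freq_hz."""
    return v_sat / (2.0 * freq_hz)

# A 10 GHz IMPATT needs a drift region of about 5 micrometres.
print(drift_length(10e9) * 1e6)  # -> 5.0 (um)
```

Note how the drift length scales inversely with frequency, which is why millimetre-wave IMPATTs use very thin drift regions.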
Applications of the IMPATT diode include:
1. MICROWAVE GENERATOR
2. MODULATED OUTPUT OSCILLATOR
3. RECEIVER LOCAL OSCILLATOR
4. PARAMETRIC AMPLIFIER (paramp)
5. INTRUSION ALARM NETWORK
6. FM TELECOMMUNICATION TRANSMITTERS
7. CW DOPPLER RADAR TRANSMITTER.
The graph in Figure 8.3 below shows the working of the TRAPATT diode:
DE – Plasma extraction
EF – Residual extraction
FG – Charging of diode
APPLICATIONS
Chapter 9
When microwave engineers talk about a "fifty-ohm system", what does that mean? A
common misconception is that if you placed an ohmmeter across the ground and centre conductor of
a fifty-ohm coax cable, you would always read 50 ohms. This is not the case. Here's what
we're talking about: transmission lines have two important properties that depend on their
geometry: their inductance per unit length and their capacitance per unit length. The
"characteristic impedance" of a line is calculated from the ratio of these two:
Z0 = sqrt(L'/C')
Let's start with coax cable. The inductance per unit length is mainly attributed to the
diameter of the centre conductor. Decrease this diameter (keeping everything else the same)
and you will increase the inductance. As the equation above shows, this also raises the
characteristic impedance. Filling the cable with a material of higher relative dielectric
constant raises the unit capacitance and lowers the line impedance.
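Both effects can be checked numerically for a coaxial line, whose per-unit-length inductance and capacitance have simple closed forms. A sketch; the dimensions below are illustrative, not a specific cable:

```python
import math

MU0 = 4e-7 * math.pi   # permeability of free space, H/m
EPS0 = 8.854e-12       # permittivity of free space, F/m

def coax_z0(d_inner, d_outer, eps_r):
    """Characteristic impedance of a coaxial line, Z0 = sqrt(L'/C').

    L' = (mu0 / 2*pi) * ln(D/d) and C' = 2*pi*eps0*eps_r / ln(D/d),
    which collapses to Z0 ~= (59.95 / sqrt(eps_r)) * ln(D/d) ohms.
    """
    ratio = math.log(d_outer / d_inner)
    l_per_m = MU0 / (2 * math.pi) * ratio          # H/m
    c_per_m = 2 * math.pi * EPS0 * eps_r / ratio   # F/m
    return math.sqrt(l_per_m / c_per_m)

# PTFE-filled coax (eps_r ~ 2.1) with a diameter ratio of ~3.35 comes
# out very close to 50 ohms; shrinking the centre conductor (a larger
# D/d) raises L' and hence Z0, exactly as described above.
print(round(coax_z0(1.0, 3.35, 2.1), 1))  # -> 50.0
```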
Another example: microstrip. Here unit capacitance and inductance are inextricably linked
together; widening a microstrip line decreases its inductance while it increases its
capacitance. Hence, wide lines are always lower in impedance than narrow lines for a given
substrate height. As with coax, the dielectric constant of the substrate has a big effect on
capacitance; using a higher dielectric substrate will yield a lower impedance line, all other
things being equal. So it is important not to mix up your Rogers Duroid materials; once your
circuit is etched it is pretty hard to judge the dielectric constant from color and texture alone!
9.1 Impedance matching
Impedance matching of source and load is important to get maximum power transfer. If you
have a 75 ohm load, you don't want to drive it with a 50 ohm source, because it is inefficient.
Simple impedance transformation can be done using quarter-wave transformers.
"Dielectric constant" is another way to say "relative permittivity". Check out our separate
page on permittivity for more info on this subject. Although some people use the phrase
"relative dielectric constant", this is incorrect, akin to saying "deja vu again".
The capacitance of a parallel-plate capacitor is:
C = (ε0 × εR × A)/D
where ε0 is the permittivity of free space, εR is the relative permittivity
(the dielectric constant) of the material between the plates, A is the area of the parallel plates,
and D is the distance they are separated. Technically, for this expression to be 100% accurate,
the material surrounding the plates must be of the same relative dielectric constant εR, but
this induces only a small error in the calculation under most circumstances. ε0 is equal to
8.854 × 10^-12 farads per meter (you should commit this to memory). Most often it is the
dielectric constant εR that is most important in microwaves.
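A quick numeric check of the parallel-plate formula; the plate area and gap are arbitrary example values:

```python
EPS0 = 8.854e-12  # permittivity of free space, F/m

def parallel_plate_c(eps_r, area_m2, gap_m):
    """Capacitance C = eps0 * eps_r * A / D of a parallel-plate capacitor."""
    return EPS0 * eps_r * area_m2 / gap_m

# 1 cm^2 plates, 1 mm apart, filled with a dielectric of eps_r = 4:
c = parallel_plate_c(4.0, 1e-4, 1e-3)
print(round(c * 1e12, 2))  # -> 3.54 (pF)
```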
For electromagnetic radiation, the permittivity of the medium that the wave is propagating in
is equal to εR ε0. In a vacuum or in dry air, εR is equal to unity, and the signal travels at the
speed of light. All electromagnetic energy, from the 60 hertz power that your electric company
sells you, to signals that the latest Mars satellite returns to earth, travels really, really fast. In a
vacuum, the speed of light, denoted "c" in textbooks, is 2.998 × 10^10 centimetres/second,
or 2.998 × 10^8 meters per second, or about 186,000 miles per second, which
puts the moon about 1.5 seconds away by radio.
The dielectric constant of a material can be used to quantify how much a material "slows" an
electromagnetic signal. The velocity of the signal within any transmission line that is 100%
filled with a material of dielectric constant εR is computed by:
v = c/sqrt(εR)
So if your stripline or coax transmission line is fabricated on a material with dielectric
constant 2.2, the velocity of propagation is only 67% of the speed of light in free space.
Similarly, because wavelength is proportional to velocity, the length of a quarter-wave
transformer is also 67% of what it would be in free space. Thus one of the tricks of reducing
the size of microwave components is revealed; by using materials of higher dielectric
constant, distributed structures can be made smaller. One of the advantages of using GaAs for
A very good rule of thumb is that electromagnetic radiation in free space travels about
one foot in one nanosecond; a more exact value is 0.983571 feet per nanosecond. This slows
to about 8 inches per nanosecond for coax cables filled with PTFE (almost all coax cables are
filled with PTFE, or a combination of PTFE and air.) For more information please see our
discussion of group delay.
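Both the 67% figure and the 8-inch rule of thumb fall straight out of v = c/sqrt(εR). A short check:

```python
import math

C0 = 2.998e8  # speed of light in vacuum, m/s

def phase_velocity(eps_r):
    """Signal velocity in a line 100% filled with dielectric eps_r:
    v = c / sqrt(eps_r)."""
    return C0 / math.sqrt(eps_r)

# eps_r = 2.2 stripline: velocity is ~67% of c, as stated above.
print(round(phase_velocity(2.2) / C0, 2))   # -> 0.67

# PTFE-filled coax (eps_r ~ 2.1): the one-foot-per-nanosecond rule of
# thumb slows to roughly 8 inches per nanosecond.
inches_per_ns = phase_velocity(2.1) * 1e-9 / 0.0254
print(round(inches_per_ns, 1))              # -> 8.1
```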
By designing really tiny parts, you can often consider them lumped elements, even at
microwave frequencies. You must keep the critical dimensions (such as length and width of a
thin-film resistor) small compared to an electrical quarter wavelength. For example, if you
are designing a 50 ohm microstrip load resistor at X-band, on an alumina substrate (dielectric
constant 9.8), a quarter wavelength is approximately 120 mils. You'd better keep both the
length and width of the resistor to less than 40 mils, or else you will have to spend some time
with an EDA simulation tool such as Agilent ADS or Eagleware Genesis evaluating the
performance. Where else but microwave engineering can you make a project out of designing
a stupid fifty-ohm resistor?!
At low frequencies, the metal that connects components together is treated as an ideal
connection, with no loss, no characteristic impedance, and no transmission phase angle.
When interconnects become an appreciable fraction of the signal wavelength, these
interconnections themselves must be treated as distributed elements or transmission lines. An
extreme example of the need to consider the distributed properties of transmission lines is
when we are dealing with a quarter-wavelength. At this electrical length (90 degrees), an
open circuit is transformed to a short circuit, and a short-circuit is transformed to an open
circuit! Think about this: a short-circuited 90 degree "stub" hanging in shunt off of a
transmission line will be invisible to signals propagating down the transmission line, while an
open circuited 90 degree stub shunting a transmission line will cause a short circuit and the
propagating signal will get hosed! A whole lot of microwave engineering exploits this
concept, so you'd better understand it.
One "classic" distributed element is the quarter-wave transformer (we've written an entire
chapter on this and other quarter wave tricks! The quarter wave transformer is used to shift
the impedance of a circuit by the following simple formula:
Z2=sqrt(Z0ZL)
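For example, matching the 75 ohm load mentioned earlier to a 50 ohm system:

```python
import math

def quarter_wave_transformer(z0, z_load):
    """Impedance of a quarter-wave section that matches z_load to z0:
    Z_T = sqrt(Z0 * ZL)."""
    return math.sqrt(z0 * z_load)

# A 75 ohm load in a 50 ohm system needs a ~61.2 ohm quarter-wave section.
print(round(quarter_wave_transformer(50.0, 75.0), 1))  # -> 61.2
```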
VSWR stands for voltage standing wave ratio. It is a measure of how well a network is
matched to its intended characteristic impedance (Z0), which is almost always 50 ohms in
microwave engineering. Return loss is just another way to express the same thing. Both are
used in microwave engineering, that's just to keep you on your toes.
VSWR dates back to the days when a "standing wave meter" was an important piece of lab
equipment. Long before you could buy a network analyzer for measuring how well a part is
impedance matched, the standing wave meter was used by engineers to evaluate the same
problem. A small probe was inserted into a waveguide, the output of which was rectified,
producing a current or voltage proportional to the electric field within the waveguide. The
engineer would pull the probe longitudinally along the waveguide, in search of local maxima
and minima readings. These are due to the standing wave within the transmission line. The
ratio of the maximum to the minimum voltage recorded was known as the voltage standing
wave ratio (VSWR). To this day VSWR is often used to quantify how well a part is
impedance matched. Always expressed as a ratio to unity, a VSWR of 1.0:1 indicates
perfection (there is no standing wave). A VSWR of 2:1 means the maxima are twice the
voltage of the minima. A high VSWR such as 10:1 usually indicates you have a problem,
such as a near open or near short circuit.
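VSWR, reflection coefficient and return loss are tied together by |Γ| = (VSWR − 1)/(VSWR + 1) and RL = −20·log10|Γ|, so any one can be converted to the others:

```python
import math

def reflection_coefficient(vswr):
    """Magnitude of the reflection coefficient for a given VSWR."""
    return (vswr - 1.0) / (vswr + 1.0)

def return_loss_db(vswr):
    """Return loss in dB for a given VSWR (infinite for a perfect match)."""
    gamma = reflection_coefficient(vswr)
    return math.inf if gamma == 0 else -20.0 * math.log10(gamma)

# A 2:1 VSWR means |gamma| = 1/3, i.e. about 9.5 dB return loss;
# a 1.0:1 VSWR reflects nothing at all.
print(round(return_loss_db(2.0), 2))  # -> 9.54
print(return_loss_db(1.0))            # -> inf
```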
1.1 Introduction
Before looking at what a digital communication system is, let us first consider a general
communication system. A communication system is shown in Figure 1.1 below.
A digital communication system has several distinguishing features when compared with an
analogue communication system. Both analogue (such as voice signal) and digital signals
(such as data generated by computers) can be communicated over a digital transmission
system. When the signal is analogue in nature, an equivalent discrete-time-discrete-amplitude
representation is possible after the initial processing of sampling and quantization. So, both a
digital signal and a quantized analogue signal are of similar type, i.e. discrete-time-discrete-
amplitude signals.
A key feature of a digital communication system is that a sense of ‘information’, with an
appropriate unit of measure, is associated with such signals. This visualization, credited to
Claude E. Shannon, leads to several interesting schematic descriptions of a digital
communication system. For example, consider Fig. 1.1, which shows the signal source at the
transmission end as an equivalent ‘Information Source’ and the receiving user as an
‘Information sink’. The overall purpose of the digital communication system is ‘to collect
information from the source and carry out necessary electronic signal processing such that the
information can be delivered to the end user (information sink) with acceptable quality’. One
may take note of the compromising phrase ‘acceptable quality’ and wonder why a digital
transmission system should not deliver exactly the same information to the sink as accepted
from the source. A broad and general answer to such query at this point is: well, it depends on
the designer’s understanding of the ‘channel’ (Fig. 1.1) and how the designer can translate his
knowledge to design the electronic signal processing algorithms / techniques in the ’Encoder’
Fig 1.2
Block Diagram of Digital Communication
Pulse code modulation (PCM) is a method used to digitally represent sampled analogue signals; it was invented
by Alec Reeves in 1937. It is the standard form for digital audio in computers and
various Blu-ray, Compact Disc and DVD formats, as well as other uses such as
digital telephone systems. A PCM stream is a digital representation of an analog signal, in
which the magnitude of the analogue signal is sampled regularly at uniform intervals, with
each sample being quantized to the nearest value within a range of digital steps.
PCM streams have two basic properties that determine their fidelity to the original analog
signal: the sampling rate, which is the number of times per second that samples are taken; and
the bit depth, which determines the number of possible digital values that each sample can
take.
2.2 Modulation
In the diagram, fig 2.2 a sine wave (red curve) is sampled and quantized for pulse
code modulation. The sine wave is sampled at regular intervals, shown as ticks on the x-axis.
For each sample, one of the available values (ticks on the y-axis) is chosen by some
algorithm. This produces a fully discrete representation of the input signal (shaded area) that
can be easily encoded as digital data for storage or manipulation. For the sine wave example
at right, we can verify that the quantized values at the sampling moments are 7, 9, 11, 12, 13,
14, 14, 15, 15, 15, 14, etc. Encoding these values as binary numbers would result in the
following set of nibbles: 0111, 1001, 1011, 1100, 1101, 1110, 1110, 1111, 1111, 1111, 1110,
etc. These digital values could then be further processed or analyzed by a purpose-
specific digital signal processor or general purpose DSP. Several Pulse Code Modulation
streams could also be multiplexed into a larger aggregate data stream, generally for
transmission of multiple streams over a single physical link. One technique is called time-
division multiplexing, or TDM, and is widely used, notably in the modern public telephone
system. Another technique is called Frequency-division multiplexing, where the signal is
assigned a frequency in a spectrum, and transmitted along with other signals inside that
spectrum. Currently, TDM is much more widely used than FDM because of its natural
compatibility with digital communication, and generally lower bandwidth requirements.
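The sample-and-quantize step described above is easy to sketch. The sampling rate, bit depth and starting phase below are assumptions, so the resulting codes will not exactly match the values read off the figure:

```python
import math

def quantize_4bit(x):
    """Map x in [-1, 1] to the nearest of 16 levels (0..15)."""
    level = round((x + 1.0) / 2.0 * 15)
    return max(0, min(15, level))

def pcm_encode(num_samples, freq_hz, sample_rate_hz):
    """Sample a unit sine wave at uniform intervals and return 4-bit PCM codes."""
    return [quantize_4bit(math.sin(2 * math.pi * freq_hz * n / sample_rate_hz))
            for n in range(num_samples)]

codes = pcm_encode(8, 1.0, 16.0)          # half a cycle at 16 samples/s
nibbles = [format(c, "04b") for c in codes]
print(codes)     # quantized levels, 0..15
print(nibbles)   # the same values written as 4-bit binary nibbles
```

Each sample becomes one nibble, just as in the figure's 0111, 1001, 1011, ... sequence.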
There are many ways to implement a real device that performs this task. In real systems, such
a device is commonly implemented as a single integrated circuit that lacks only the clock necessary
for sampling, and is generally referred to as an ADC (analogue-to-digital converter). These
devices will produce on their output a binary representation of the input whenever they are
triggered by a clock signal, which would then be read by a processor of some sort.
To produce output from the sampled data, the procedure of modulation is applied in
reverse. After each sampling period has passed, the next value is read and a signal is shifted
to the new value. As a result of these transitions, the signal will have a significant amount of
high-frequency energy. To smooth out the signal and remove these
undesirable aliasing frequencies, the signal would be passed through analogue filters that
suppress energy outside the expected frequency range (that is, greater than the Nyquist
frequency fs / 2). Some systems use digital filtering to remove some of the aliasing,
converting the signal from digital to analogue at a higher sample rate such that the analogue
filter required for anti-aliasing is much simpler. In some systems, no explicit filtering is done
at all; as it's impossible for any system to reproduce a signal with infinite bandwidth, inherent
losses in the system compensate for the artefacts — or the system simply does not require
much precision. The sampling theorem suggests that practical PCM devices, provided a
sampling frequency that is sufficiently greater than that of the input signal, can operate
without introducing significant distortions within their designed frequency bands.
The electronics involved in producing an accurate analogue signal from the discrete data are
similar to those used for generating the digital signal. These devices are DACs (digital-to-analogue converters).
2.3 Limitations
Choosing a discrete value near the analogue signal for each sample leads
to quantization error, which swings between -q/2 and q/2. In the ideal case (with a fully
linear ADC) it is uniformly distributed over this interval, with zero mean and variance of q²/12.
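The q²/12 figure can be verified empirically by quantizing a random signal and measuring the error statistics:

```python
import random

def quantize(x, q):
    """Round x to the nearest multiple of the step size q."""
    return q * round(x / q)

random.seed(0)
q = 0.1
samples = [random.uniform(-1.0, 1.0) for _ in range(200_000)]
errors = [x - quantize(x, q) for x in samples]

mean = sum(errors) / len(errors)
variance = sum((e - mean) ** 2 for e in errors) / len(errors)

# The measured error variance should sit very close to q^2 / 12.
print(round(variance / (q * q / 12.0), 2))  # close to 1.0
```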
Between samples no measurement of the signal is made; the sampling
theorem guarantees non-ambiguous representation and recovery of the signal only if it has
no energy at frequency fs/2 or higher (one half the sampling frequency, known as
the Nyquist frequency); higher frequencies will generally not be correctly represented or
recovered.
As samples are dependent on time, an accurate clock is required for accurate reproduction. If
either the encoding or decoding clock is not stable, its frequency drift will directly affect the
output quality of the device. A slight difference between the encoding and decoding clock
frequencies is not generally a major concern; a small constant error is not noticeable. Clock
error does become a major issue if the clock is not stable, however. A drifting clock, even
with a relatively small error, will cause very obvious distortions in audio and video signals,
for example.
Extra information: PCM data from a master with a clock frequency that cannot be influenced
requires an exact clock at the decoding side to ensure that all the data is used in a continuous
stream without buffer underrun or buffer overflow. Any frequency difference will be audible
at the output since the number of samples per time interval cannot be correct. The data speed
in a compact disk can be steered by means of a servo that controls the rotation speed of the
disk; here the output clock is the master clock. For all "external master" systems like DAB
the output stream must be decoded with a regenerated and exact synchronous clock. When
the wanted output sample rate differs from the incoming data stream clock then a sample rate
converter must be inserted in the chain to convert the samples to the new clock domain.
In conventional PCM, the analogue signal may be processed (e.g., by amplitude compression)
before being digitized. Once the signal is digitized, the PCM signal is usually subjected to
further processing (e.g., digital data compression).
Some forms of PCM combine signal processing with coding. Older versions of these systems
applied the processing in the analogue domain as part of the A/D process; newer
implementations do so in the digital domain. These simple techniques have been largely
rendered obsolete by modern transform-based audio compression techniques.
DPCM encodes the PCM values as differences between the current and the predicted
value. An algorithm predicts the next sample based on the previous samples, and the
encoder stores only the difference between this prediction and the actual value. If the
prediction is reasonable, fewer bits can be used to represent the same information. For
audio, this type of encoding reduces the number of bits required per sample by about 25%
compared to PCM.
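A minimal DPCM sketch with the simplest possible predictor, the previous sample; real codecs also quantize the differences, which this toy version omits:

```python
def dpcm_encode(samples):
    """Encode samples as differences from the previous sample
    (predictor: next value = last value)."""
    prev = 0
    diffs = []
    for s in samples:
        diffs.append(s - prev)
        prev = s
    return diffs

def dpcm_decode(diffs):
    """Reconstruct the original samples by accumulating the differences."""
    prev = 0
    out = []
    for d in diffs:
        prev += d
        out.append(prev)
    return out

pcm = [100, 102, 105, 104, 104, 101]
diffs = dpcm_encode(pcm)
print(diffs)                      # [100, 2, 3, -1, 0, -3]
assert dpcm_decode(diffs) == pcm  # lossless round trip
```

After the first sample, the differences are small numbers that need far fewer bits than the raw values, which is where the saving comes from.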
Adaptive DPCM (ADPCM) is a variant of DPCM that varies the size of the
quantization step, to allow further reduction of the required bandwidth for a given signal-
to-noise ratio.
Delta modulation is a form of DPCM which uses one bit per sample.
In telephony, a standard audio signal for a single phone call is encoded as 8,000 analogue
samples per second, of 8 bits each, giving a 64 Kbit/s digital signal known as DS0. The
default signal compression encoding on a DS0 is either μ-law (mu-law) PCM (North America
and Japan) or A-law PCM (Europe and most of the rest of the world). These are logarithmic
compression systems where a 12 or 13-bit linear PCM sample number is mapped into an 8-bit
value. This system is described by international standard G.711. An alternative proposal for
a floating point representation, with 5-bit mantissa and 3-bit radix, was abandoned.
Where circuit costs are high and loss of voice quality is acceptable, it sometimes makes sense
to compress the voice signal even further. An ADPCM algorithm is used to map a series of 8-
bit µ-law or A-law PCM samples into a series of 4-bit ADPCM samples. In this way, the
capacity of the line is doubled. The technique is detailed in the G.726 standard.
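The logarithmic companding idea behind μ-law can be sketched with the continuous μ-law curve (μ = 255). Note that G.711 actually standardizes a piecewise-linear segmented approximation of this curve, which the sketch below does not implement:

```python
import math

MU = 255.0  # mu-law parameter used in North America and Japan

def mu_law_compress(x):
    """Continuous mu-law compressor for x in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Inverse of mu_law_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# Small signals get most of the code range: an input of only 0.01 of
# full scale already maps to ~0.23 of the compressed range, which is
# why 8 companded bits give roughly the speech quality of 12-13
# linear bits.
y = mu_law_compress(0.01)
print(round(y, 2))                 # -> 0.23
print(round(mu_law_expand(y), 4))  # round trip recovers 0.01
```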
Chapter 3
DIGITAL MODULATION TECHNIQUES I
There are three major classes of digital modulation techniques used for transmission
of digitally represented data:
Amplitude-shift keying (ASK)
Frequency-shift keying (FSK)
Phase-shift keying (PSK)
All convey data by changing some aspect of a base signal, the carrier wave (usually
a sinusoid), in response to a data signal. In the case of PSK, the phase is changed to represent
the data signal. There are two fundamental ways of utilizing the phase of a signal in this way:
by viewing the phase itself as conveying the information, in which case the demodulator must
have a reference signal to compare the received signal's phase against; or by viewing the
change in the phase as conveying the information (differential schemes).
A convenient way to represent PSK schemes is on a constellation diagram. This shows the
points in the Argand plane where, in this context, the real and imaginary axes are termed the
in-phase and quadrature axes respectively due to their 90° separation. Such a representation
on perpendicular axes lends itself to straightforward implementation. The amplitude of each
point along the in-phase axis is used to modulate a cosine (or sine) wave and the amplitude
along the quadrature axis to modulate a sine (or cosine) wave.
In PSK, the constellation points chosen are usually positioned with uniform angular spacing
around a circle. This gives maximum phase-separation between adjacent points and thus the
best immunity to corruption. They are positioned on a circle so that they can all be
transmitted with the same energy. In this way, the moduli of the complex numbers they
represent will be the same and thus so will the amplitudes needed for the cosine and sine
waves. Two common examples are "binary phase-shift keying" (BPSK) which uses two
phases, and "quadrature phase-shift keying" (QPSK) which uses four phases, although any
number of phases may be used. Since the data to be conveyed are usually binary, the PSK
scheme is usually designed with the number of constellation points being a power of 2.
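The uniform angular spacing and equal-energy properties described above are easy to verify by generating the constellation points directly:

```python
import cmath
import math

def psk_constellation(m):
    """Constellation points for M-PSK: M points uniformly spaced around
    the unit circle, so every symbol is transmitted with equal energy."""
    return [cmath.exp(2j * math.pi * k / m) for k in range(m)]

qpsk = psk_constellation(4)

# All four QPSK points have unit magnitude (equal transmit energy)...
print([round(abs(p), 6) for p in qpsk])

# ...and adjacent points are separated by 90 degrees.
angles = [math.degrees(cmath.phase(p)) % 360 for p in qpsk]
print(sorted(round(a, 1) for a in angles))  # [0.0, 90.0, 180.0, 270.0]
```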
3.1 Amplitude-shift keying (ASK) is a form of modulation that represents digital data as
variations in the amplitude of a carrier wave.
The amplitude of an analogue carrier signal varies in accordance with the bit stream
(modulating signal), keeping frequency and phase constant. The level of amplitude can be
used to represent binary logic 0s and 1s.
Like AM, ASK is also linear and sensitive to atmospheric noise, distortions, propagation
conditions on different routes in PSTN, etc. Both ASK modulation and demodulation
processes are relatively inexpensive. The ASK technique is also commonly used to
transmit digital data over optical fibre. For LED transmitters, binary 1 is represented by a
short pulse of light and binary 0 by the absence of light. Laser transmitters normally have a
fixed "bias" current that causes the device to emit a low light level. This low level represents
binary 0, while a higher-amplitude light wave represents binary 1.
3.2 Phase-shift keying (PSK) is a digital modulation scheme that conveys data by changing,
or modulating, the phase of a reference signal (the carrier wave).
Any digital modulation scheme uses a finite number of distinct signals to represent digital
data. PSK uses a finite number of phases; each assigned a unique pattern of binary digits.
Usually, each phase encodes an equal number of bits. Each pattern of bits forms
the symbol that is represented by the particular phase. The demodulator, which is designed
specifically for the symbol-set used by the modulator, determines the phase of the received
signal and maps it back to the symbol it represents, thus recovering the original data. This
requires the receiver to be able to compare the phase of the received signal to a reference
signal — such a system is termed coherent (and referred to as CPSK).
Alternatively, instead of using the bit patterns to set the phase of the wave, it can instead be
used to change it by a specified amount. The demodulator then determines the changes in the
phase of the received signal rather than the phase itself. Since this scheme depends on the
difference between successive phases, it is termed differential phase-shift keying (DPSK).
DPSK can be significantly simpler to implement than ordinary PSK since there is no need for
the demodulator to have a copy of the reference signal to determine the exact phase of the
received signal (it is a non-coherent scheme). In exchange, it produces more erroneous
demodulations. The exact requirements of the particular scenario under consideration
determine which scheme is used.
BPSK (also sometimes called PRK, Phase Reversal Keying, or 2PSK) is the simplest form of
phase shift keying (PSK). It uses two phases which are separated by 180° and so can also be
termed 2-PSK. It does not particularly matter exactly where the constellation points are
positioned, and in this figure they are shown on the real axis, at 0° and 180°. This modulation
is the most robust of all the PSKs since it takes the highest level of noise or distortion to
make the demodulator reach an incorrect decision. It is, however, only able to modulate at 1
bit/symbol (as seen in the figure) and so is unsuitable for high data-rate applications when
bandwidth is limited.
3.5 QPSK
The mathematical analysis shows that QPSK can be used either to double the data rate
compared with a BPSK system while maintaining the same bandwidth of the signal, or
to maintain the data rate of BPSK but halve the bandwidth needed. In this latter case, the
BER of QPSK is exactly the same as the BER of BPSK - and believing otherwise is a
common confusion when considering or describing QPSK.
Given that radio communication channels are allocated by agencies such as the Federal
Communication Commission giving a prescribed (maximum) bandwidth, the advantage of
QPSK over BPSK becomes evident: QPSK transmits twice the data rate in a given bandwidth
than BPSK does - at the same BER. The engineering penalty that is paid is that QPSK
transmitters and receivers are more complicated than the ones for BPSK. However, with
modern electronics technology, the penalty in cost is very moderate.
As with BPSK, there are phase ambiguity problems at the receiving end, and differentially
encoded QPSK is often used in practice.
The two binary states, logic 0 (low) and 1 (high), are each represented by an analogue
waveform. Logic 0 is represented by a wave at a specific frequency, and logic 1 is
represented by a wave at a different frequency.
With binary FSK, the centre or carrier frequency is shifted by the binary input data. Thus the
input and output rates of change are equal, and therefore the bit rate and baud rate are
equal.
The frequency of the carrier is changed as a function of the modulating signal (data),
which is being transmitted. Amplitude remains unchanged. Two fixed-amplitude
carriers are used, one for a binary zero, the other for a binary one.
Note that when the edge of a new logic level enters the transmitter, the frequency of the
output changes accordingly.
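The FSK generation described above can be sketched as follows; the two frequencies, bit rate, and sample rate are arbitrary illustrative values, and keeping the phase continuous across bit edges is one common design choice (it avoids amplitude discontinuities in the output):

```python
import numpy as np

def bfsk_waveform(bits, f0=1000.0, f1=2000.0, bit_rate=500.0, fs=48_000.0, amp=1.0):
    """Binary FSK: f0 carries logic 0, f1 carries logic 1; amplitude is constant.

    Phase accumulates continuously, so when a new logic level enters the
    modulator only the output frequency changes, not the amplitude."""
    samples_per_bit = int(fs / bit_rate)
    phase = 0.0
    out = []
    for b in bits:
        f = f1 if b else f0
        for _ in range(samples_per_bit):
            out.append(amp * np.sin(phase))
            phase += 2 * np.pi * f / fs
    return np.array(out)
```

Plotting `bfsk_waveform([0, 1, 0])` shows the carrier alternating between the two fixed-amplitude tones, one bit interval at a time.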
Chapter 4
DELTA MODULATION
To achieve high signal-to-noise ratio, delta modulation must use oversampling techniques,
that is, the analog signal is sampled at a rate several times higher than the Nyquist rate.
Derived forms of delta modulation are continuously variable slope delta modulation, delta-
sigma modulation, and differential pulse-code modulation (DPCM); DPCM is the superset of
DM. The block diagram of delta modulation is given below in Fig. 4.1.
4.1 Principle
Rather than quantizing the absolute value of the input analogue waveform, delta
modulation quantizes the difference between the current and the previous step, as shown in
the block diagram in Fig. 4.1.
Adaptive delta modulation (ADM) or continuously variable slope delta modulation (CVSD)
is a modification of DM in which the step size is not fixed. Rather, when several consecutive
bits have the same direction value, the encoder and decoder assume that slope overload is
occurring, and the step size becomes progressively larger. Otherwise, the step size becomes
gradually smaller over time. ADM reduces slope error, at the expense of increasing
quantizing error. This error can be reduced by using a low pass filter.
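The principle above can be sketched in a few lines of Python (a fixed-step encoder only; the adaptive step-size logic of ADM is omitted). The encoder transmits one bit per sample: the sign of the difference between the input and the staircase approximation.

```python
import numpy as np

def dm_encode(x, step=0.1):
    """Fixed-step delta modulation: quantize the sign of the difference
    between each input sample and the running staircase approximation."""
    approx = 0.0
    bits, staircase = [], []
    for sample in x:
        bit = 1 if sample > approx else 0
        approx += step if bit else -step
        bits.append(bit)
        staircase.append(approx)
    return bits, np.array(staircase)

t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * 2 * t)        # oversampled analogue input
bits, staircase = dm_encode(x, step=0.08)
print("max tracking error:", np.max(np.abs(x - staircase)))
```

With this oversampling ratio the per-sample slope of the input stays below the step size, so the staircase tracks the signal without slope overload; a smaller step or a faster input would show the overload that ADM is designed to fix.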
Chapter 5
INFORMATION THEORY
Data compression (source coding): There are two formulations for the compression
problem:
1. lossless data compression: the data must be reconstructed exactly;
2. lossy data compression: allocates the bits needed to reconstruct the data within a
specified fidelity level, measured by a distortion function.
This division of coding theory into compression and transmission is justified by the
information transmission theorems, or source–channel separation theorems that justify the
use of bits as the universal currency for information in many contexts. However, these
theorems only hold in the situation where one transmitting user wishes to communicate to
one receiving user. In scenarios with more than one transmitter (the multiple-access channel),
more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel),
or more general networks, compression followed by transmission may no longer be
optimal. Network information theory refers to these multi-agent communication models.
Any process that generates successive messages can be considered a source of information.
A memoryless source is one in which each message is an independent identically-distributed
random variable, whereas the properties of ergodicity and stationarity impose more general
constraints. All such sources are stochastic. These terms are well studied in their own right
outside information theory.
5.3 Channel Capacity
Consider the communications process over a discrete channel. A simple model of the
process is shown below in Fig. 5.1.
Here X represents the space of messages transmitted, and Y the space of messages received
during a unit time over our channel. Let p(y | x) be the conditional probability distribution
function of Y given X. We will consider p(y | x) to be an inherent fixed property of our
communications channel (representing the nature of the noise of our channel). Then the joint
distribution of X and Y is completely determined by our channel and by our choice of f(x), the
marginal distribution of messages we choose to send over the channel. Under these
constraints, we would like to maximize the rate of information, or the signal, we can
communicate over the channel. The appropriate measure for this is the mutual information,
and this maximum mutual information is called the channel capacity and is given by
C = max I(X; Y),
where the maximum is taken over all possible choices of the marginal distribution f(x).
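As a concrete illustration of maximising mutual information over input distributions, the following sketch brute-forces the capacity of a binary symmetric channel with crossover probability p and checks it against the known closed form C = 1 - H(p). The BSC is my example channel; the text does not specify one.

```python
import numpy as np

def mutual_information(px, p):
    """I(X;Y) in bits for a binary symmetric channel with crossover
    probability p and input distribution (px, 1 - px)."""
    # joint distribution P(x, y)
    P = np.array([[px * (1 - p), px * p],
                  [(1 - px) * p, (1 - px) * (1 - p)]])
    py = P.sum(axis=0)                      # marginal of Y
    mi = 0.0
    for x in range(2):
        for y in range(2):
            if P[x, y] > 0:
                mi += P[x, y] * np.log2(P[x, y] / (P[x].sum() * py[y]))
    return mi

p = 0.1
# maximize over a grid of input distributions; the optimum is px = 0.5
capacity = max(mutual_information(px, p) for px in np.linspace(0.001, 0.999, 999))
h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)   # binary entropy H(p)
print(capacity, 1 - h)
```

The numerical maximum agrees with the analytic capacity, and the maximizing input distribution is the uniform one, as symmetry suggests.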
Chapter 6
SHANNON – HARTLEY THEOREM
The Shannon–Hartley theorem states that the capacity of a channel of bandwidth B in the
presence of additive white Gaussian noise is
C = B log2(1 + S/N)
where
C is the channel capacity in bits per second;
B is the bandwidth of the channel in hertz;
S is the average received signal power over the bandwidth, in watts;
N is the average noise power over the bandwidth, in watts; and
S/N is the signal-to-noise ratio.
In 1927, Nyquist determined that the number of independent pulses that could be put through
a telegraph channel per unit time is limited to twice the bandwidth of the channel. In symbols,
fp ≤ 2B
where fp is the pulse frequency (in pulses per second) and B is the bandwidth (in hertz). The
quantity 2B later came to be called the Nyquist rate, and transmitting at the limiting pulse
rate of 2B pulses per second as signalling at the Nyquist rate. Nyquist published his results in
1928 as part of his paper "Certain topics in Telegraph Transmission Theory."
Shannon's theorem shows how to compute a channel capacity from a statistical description of
a channel, and establishes that given a noisy channel with capacity C and information
transmitted at a line rate R, then if R < C
there exists a coding technique which allows the probability of error at the receiver to be
made arbitrarily small. This means that theoretically, it is possible to transmit information
nearly without error up to nearly a limit of C bits per second.
If R > C, the probability of error at the receiver increases without bound as the rate is
increased. So no
useful information can be transmitted beyond the channel capacity. The theorem does not
address the rare situation in which rate and capacity are equal.
The Shannon–Hartley theorem establishes what that channel capacity is for a finite-
bandwidth continuous-time channel subject to Gaussian noise. It connects Hartley's result
with Shannon's channel capacity theorem in a form that is equivalent to specifying the M in
Hartley's line rate formula in terms of a signal-to-noise ratio, but achieving reliability through
error-correction coding rather than through reliably distinguishable pulse levels.
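The resulting capacity expression C = B log2(1 + S/N) is easy to evaluate numerically; the 3 kHz bandwidth and 30 dB SNR figures below are illustrative values of my own, not from the text.

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# e.g. a 3 kHz telephone-grade channel at 30 dB SNR
snr = 10 ** (30 / 10)              # 30 dB -> linear ratio of 1000
c = shannon_capacity(3000, snr)
print(f"capacity = {c:.0f} bit/s")
```

Doubling the bandwidth doubles the capacity, but doubling the signal power only adds about one bit per second per hertz once the SNR is already high, reflecting the logarithm in the formula.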
If there were such a thing as an infinite-bandwidth, noise-free analog channel, one could
transmit unlimited amounts of error-free data over it per unit of time. Real channels,
however, are subject to limitations imposed by both finite bandwidth and nonzero noise.
So how do bandwidth and noise affect the rate at which information can be transmitted over
an analogue channel?
Surprisingly, bandwidth limitations alone do not impose a cap on maximum information rate.
This is because it is still possible for the signal to take on an indefinitely large number of
different voltage levels on each symbol pulse, with each slightly different level being
assigned a different meaning or bit sequence. If we combine both noise and bandwidth
limitations, however, we do find there is a limit to the amount of information that can be
transferred by a signal of a bounded power, even when clever multi-level encoding
techniques are used.
In the channel considered by the Shannon-Hartley theorem, noise and signal are combined by
addition. That is, the receiver measures a signal that is equal to the sum of the signal
encoding the desired information and a continuous random variable that represents the noise.
This addition creates uncertainty as to the original signal's value. If the receiver has some
information about the random process that generates the noise, one can in principle recover
the information in the original signal by considering all possible states of the noise process. In
the case of the Shannon-Hartley theorem, the noise is assumed to be generated by a Gaussian
process with a known variance. Since the variance of a Gaussian process is equivalent to its
power, it is conventional to call this variance the noise power.
Such a channel is called the Additive White Gaussian Noise channel, because Gaussian noise
is added to the signal; "white" means equal amounts of noise at all frequencies within the
channel bandwidth. Such noise can arise both from random sources of energy and also from
coding and measurement error at the sender and receiver respectively. Since sums of
independent Gaussian random variables are themselves Gaussian random variables, this
conveniently simplifies analysis, if one assumes that such error sources are also Gaussian
and independent.
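A quick numeric check of the additive-noise model described above: in a simulated AWGN channel the noise variance measured at the receiver matches the noise power the channel was built with (the antipodal signal and the numbers are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 100_000
signal = rng.choice([-1.0, 1.0], size=n)          # unit-power antipodal signal
noise_power = 0.25                                 # variance of the Gaussian noise
noise = rng.normal(0.0, np.sqrt(noise_power), size=n)
received = signal + noise                          # AWGN: noise adds to the signal

measured = np.var(received - signal)               # receiver-side noise variance
print(measured)
```

Since the variance of a Gaussian process equals its power, the measured variance is exactly the "noise power" N that enters the Shannon-Hartley formula.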
Chapter 7
Linear Block Codes
Linear codes are used in forward error correction and are applied in methods for transmitting
symbols (e.g., bits) on a communications channel so that, if errors occur in the
communication, some errors can be detected by the recipient of a message block. The "codes"
in a linear block code are blocks of symbols which are encoded using more symbols than the
original value to be sent. A linear code of length n transmits blocks containing n symbols. For
example, the "(7,4)" Hamming code is a linear binary code which represents 4-bit values each
using 7-bit values. In this way, the recipient can detect errors as severe as 2 bits per
block. [2]
As there are 16 distinct 4-bit values expressed in binary, the size of the (7,4) Hamming
code is sixteen.
A linear code of length n and rank k is a linear subspace C with dimension k of the vector
space Fq^n, where Fq is the finite field with q elements. Such a code with parameter q is
called a q-ary code (e.g., when q = 5, the code is a 5-ary code). If q = 2 or q = 3, the code
is described as a binary code, or a ternary code respectively.
Because the linear code is a linear subspace C of Fq^n (and therefore a codeword is a
vector in this linear subspace), any codeword can be represented as a linear combination of
a set of basis vectors g1, ..., gk, so that any codeword c can be written as
c = m1 g1 + m2 g2 + ... + mk gk.
7.2 Hamming codes
As the first class of linear codes developed for error correction, the famous Hamming
codes have been widely used in digital communication systems. For any positive integer r ≥ 2,
there exists a [2^r - 1, 2^r - r - 1, 3]2 Hamming code. Since d = 3, a Hamming code can
correct any 1-bit error.
Example: The linear block code with the following generator matrix and parity check matrix
is a [7,4,3]2 Hamming code.
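Since the generator and parity-check matrices of the example were lost in extraction, here is one standard-form pair for a [7,4,3]2 Hamming code (a representative choice, not necessarily the exact matrices the original showed), together with encoding and syndrome computation:

```python
import numpy as np

# Systematic form: G = [I | P], H = [P^T | I]; G @ H^T = 0 (mod 2)
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(msg):
    """Map 4 message bits to a 7-bit codeword."""
    return (np.array(msg) @ G) % 2

def syndrome(word):
    """All-zero syndrome if and only if the word is a valid codeword."""
    return (H @ np.array(word)) % 2

c = encode([1, 0, 1, 1])
print(c, syndrome(c))
```

Because the seven columns of H are exactly the seven nonzero 3-bit vectors, the syndrome of a single-bit error directly names the flipped position, which is what makes 1-bit correction possible.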
7.3 Hadamard codes
Hadamard code is a [2^r, r, 2^(r-1)]2 linear code and is capable of correcting many errors.
Hadamard code can be constructed column by column: the ith column consists of the bits of
the binary representation of the integer i, as shown in the following example. Hadamard code
has minimum distance 2^(r-1) and therefore can correct 2^(r-2) - 1 errors.
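The column-by-column construction can be sketched for r = 3 (my choice of r; the original's worked example was lost in extraction). Column i of the generator matrix holds the binary digits of i, and every pair of distinct codewords indeed differs in 2^(r-1) = 4 positions:

```python
import numpy as np
from itertools import product

def hadamard_generator(r):
    """r x 2^r generator matrix: column i holds the binary digits of i."""
    n = 2 ** r
    return np.array([[(i >> b) & 1 for i in range(n)] for b in range(r)])

r = 3
G = hadamard_generator(r)
codewords = [tuple(np.dot(m, G) % 2) for m in product([0, 1], repeat=r)]

# pairwise Hamming distances between distinct codewords
dists = {sum(a != b for a, b in zip(u, v))
         for u in codewords for v in codewords if u != v}
print(sorted(dists))
```

Every nonzero codeword is a balanced linear functional on the r-bit inputs, so all pairwise distances collapse to the single value 2^(r-1), which is the code's minimum distance.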
Hadamard code is a special case of Reed–Muller code. If we take the first column (the all-
zero column) out of the generator matrix, we get the [7,3,4]2 simplex code, which is the dual
code of the Hamming code. Let H be the parity check matrix of C; then the code generated
by H is called the dual code of C.
Block codes are tied to the sphere packing problem, which has received some attention over
the years. In two dimensions, it is easy to visualize. Take a bunch of pennies flat on the table
and push them together. The result is a hexagon pattern like a bee's nest. But block codes rely
on more dimensions which cannot easily be visualized. The powerful Golay code used in
deep space communications uses 24 dimensions. If used as a binary code (which it usually is)
the dimensions refer to the length of the code word as defined above.
The theory of coding uses the N-dimensional sphere model. For example, how many pennies
can be packed into a circle on a table top, or in 3 dimensions, how many marbles can be
packed into a globe. Other considerations enter the choice of a code. For example, hexagon
packing into the constraint of a rectangular box will leave empty space at the corners. As the
dimensions get larger, the percentage of empty space grows smaller. But at certain
dimensions, the packing uses all the space, and these codes are the so-called perfect codes.
Another code property is the number of neighbours a single code word may have. Again, let's
use pennies as an example. First we pack the pennies in a rectangular grid. Each penny will
have 4 near neighbours (and 4 at the corners which are farther away). In a hexagon, each
penny will have 6 near neighbours. When we increase the dimensions, the number of near
neighbours increases very rapidly. The result is the number of ways for noise to make the
receiver choose a neighbour (hence an error) grows as well. This is a fundamental limitation
of block codes, and indeed all codes. It may be harder to cause an error to a single neighbour,
but the number of neighbours can be large enough so the total error probability actually
suffers.
Properties of linear block codes are used in many applications. For example, the syndrome-
coset uniqueness property of linear block codes is used in trellis shaping, one of the best-
known shaping codes. The same property is used in sensor networks for distributed source
coding.
Chapter 8
CONVOLUTIONAL CODES
8.1 Definition
Each m-bit information symbol (each m-bit string) to be encoded is transformed into
an n-bit symbol, where m/n is the code rate (n ≥ m), and the transformation is a function
of the last k information symbols, where k is the constraint length of the code.
To convolutionally encode data, start with k memory registers, each holding 1 input bit.
Unless otherwise specified, all memory registers start with a value of 0. The encoder
has n modulo-2 adders (a modulo 2 adder can be implemented with a single Boolean XOR
gate, where the logic is: 0+0 = 0, 0+1 = 1, 1+0 = 1, 1+1 = 0), and n generator polynomials —
one for each adder (see figure below). An input bit m1 is fed into the leftmost register. Using
the generator polynomials and the existing values in the remaining registers, the encoder
outputs n bits. Now bit shift all register values to the right (m1 moves to m0, m0 moves to m-1)
and wait for the next input bit. If there are no remaining input bits, the encoder continues
output until all registers have returned to the zero state.
The figure below is a rate 1/3 (m/n) encoder with constraint length (k) of 3. Generator
polynomials are G1 = (1, 1, 1), G2 = (0,1,1), and G3 = (1,0,1). Therefore, output bits are
calculated (modulo 2) as follows:
n1 = m1 + m0 + m-1
n2 = m0 + m-1
n3 = m1 + m-1.
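The rate-1/3 encoder with G1, G2, G3 above can be sketched directly; the register naming follows the text, and the zero-feeding at the end implements the "continue output until all registers have returned to the zero state" step:

```python
def conv_encode(bits, generators=((1, 1, 1), (0, 1, 1), (1, 0, 1))):
    """Rate-1/3 convolutional encoder with constraint length 3.

    The window (m1, m0, m-1) holds the current input bit and the two
    register values; each generator polynomial selects which of the three
    positions feed its modulo-2 adder, matching n1, n2, n3 above."""
    state = [0, 0]                     # (m0, m-1), both start at zero
    out = []
    for m1 in bits:
        window = [m1] + state          # (m1, m0, m-1)
        for g in generators:
            out.append(sum(b & gi for b, gi in zip(window, g)) % 2)
        state = [m1, state[0]]         # shift register values to the right
    # flush: feed zeros until the registers return to the all-zero state
    for _ in range(len(state)):
        window = [0] + state
        for g in generators:
            out.append(sum(b & gi for b, gi in zip(window, g)) % 2)
        state = [0, state[0]]
    return out

print(conv_encode([1]))
```

Feeding a single 1 produces the encoder's impulse response: read off every third output bit and you recover G1 = (1,1,1), G2 = (0,1,1) and G3 = (1,0,1), which is exactly why the taps are called generator polynomials.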
Several algorithms exist for decoding convolutional codes. For relatively small values
of k, the Viterbi algorithm is universally used as it provides maximum
likelihood performance and is highly parallelizable. Viterbi decoders are thus easy to
implement in VLSI hardware and in software on CPUs with SIMD instruction sets.
Longer constraint length codes are more practically decoded with any of several sequential
decoding algorithms, of which the Fano algorithm is the best known. Unlike Viterbi
decoding, sequential decoding is not maximum likelihood but its complexity increases only
slightly with constraint length, allowing the use of strong, long-constraint-length codes. Such
codes were used in the Pioneer program of the early 1970s to Jupiter and Saturn, but gave
way to shorter, Viterbi-decoded codes, usually concatenated with large Reed-Solomon error
correction codes that steepen the overall bit-error-rate curve and produce extremely low
residual undetected error rates.
Both Viterbi and sequential decoding algorithms return hard-decisions: the bits that form the
most likely code word. An approximate confidence measure can be added to each bit by use
of the Soft output Viterbi algorithm. Maximum a posteriori (MAP) soft-decisions for each bit
can be obtained by use of the BCJR algorithm.
An especially popular Viterbi-decoded convolutional code, used at least since the Voyager
program, has a constraint length k of 7 and a rate r of 1/2.
Longer constraint lengths produce more powerful codes, but the complexity of the
Viterbi algorithm increases exponentially with constraint lengths, limiting these more
powerful codes to deep space missions where the extra performance is easily worth the
increased decoder complexity.
REFERENCES
BOOKS
MICROWAVE ENGINEERING
2. Foundations for Microwave Engineering – R.E. Collin, IEEE Press, John Wiley, 2nd
Edition, 2002
DIGITAL COMMUNICATIONS
1. Modern Digital and Analog Communication Systems – B. P. Lathi, Oxford reprint, 3rd edition, 2004
URL
1. http://ieee-elearning.org/