
EE603 Class Notes 10/10/2013 John Stensby

Chapter 9: Commonly Used Models: Narrow-Band Gaussian Noise and Shot Noise

Narrow-band, wide-sense-stationary (WSS) Gaussian noise η(t) is used often as a noise model in communication systems. For example, η(t) might be the noise component in the output of a radio receiver intermediate frequency (IF) filter/amplifier. In these applications, sample functions of η(t) are expressed as

η(t) = η_c(t) cos(ω_c t) − η_s(t) sin(ω_c t),   (9-1)

where ω_c is termed the center frequency (for example, ω_c could be the actual center frequency of the above-mentioned IF filter). The quantities η_c(t) and η_s(t) are termed the quadrature components (sometimes, η_c(t) is known as the in-phase component and η_s(t) is termed the quadrature component), and they are assumed to be real-valued.


Narrow-band noise η(t) can be represented in terms of its envelope R(t) and phase φ(t). This representation is given as

η(t) = R(t) cos(ω_c t + φ(t)),   (9-2)

where

R(t) = √( η_c²(t) + η_s²(t) )
   (9-3)
φ(t) = tan⁻¹( η_s(t) / η_c(t) ).

Normally, it is assumed that R(t) ≥ 0 and −π < φ(t) ≤ π for all time.


Note the initial assumptions placed on η(t). The assumptions of Gaussian and WSS behavior are easily understood. The narrow-band attribute of η(t) means that η_c(t), η_s(t), R(t) and φ(t) are low-pass processes; these low-pass processes vary slowly compared to cos(ω_c t); they

Updates at http://www.ece.uah.edu/courses/ee385/ 9-1



Fig. 9-1: Example spectrum S_η(ω) (watts/Hz versus ω in rad/sec, with peaks at ±ω_c) of narrow-band noise.
are on a vastly different time scale from cos(ω_c t). Many periods of cos(ω_c t) occur before there is notable change in η_c(t), η_s(t), R(t) or φ(t).
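The algebraic equivalence of the quadrature form (9-1) and the envelope/phase form (9-2)-(9-3) is easy to verify numerically. The sketch below uses deterministic, slowly varying components for illustration; the sample rate, center frequency, and component waveforms are assumptions, not values from the notes.

```python
import numpy as np

fs, fc = 8000.0, 1000.0          # sample rate and center frequency (illustrative)
wc = 2 * np.pi * fc
t = np.arange(0, 0.1, 1 / fs)

# Slowly varying quadrature components (deterministic here, for illustration).
eta_c = 1.0 + 0.2 * np.cos(2 * np.pi * 10 * t)
eta_s = 0.5 * np.sin(2 * np.pi * 15 * t)

# Quadrature form (9-1).
eta = eta_c * np.cos(wc * t) - eta_s * np.sin(wc * t)

# Envelope and phase per (9-3); arctan2 keeps phi in (-pi, pi].
R = np.hypot(eta_c, eta_s)
phi = np.arctan2(eta_s, eta_c)

# Envelope/phase form (9-2) reproduces eta(t).
eta2 = R * np.cos(wc * t + phi)
```

Since R cos φ = η_c and R sin φ = η_s by construction, the two forms agree to machine precision.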
A second interpretation can be given for the term narrow-band. This is accomplished in terms of the power spectrum of η(t), denoted as S_η(ω). By the Wiener-Khinchine theorem, S_η(ω) is the Fourier transform of R_η(τ), the autocorrelation function for WSS η(t). Since η(t) is real valued, the spectral density S_η(ω) satisfies

S_η(ω) ≥ 0
   (9-4)
S_η(−ω) = S_η(ω).

Figure 9-1 depicts an example spectrum of a narrow-band process. The narrow-band attribute means that S_η(ω) is zero except for a narrow band of frequencies around ±ω_c; process η(t) has a bandwidth (however it might be defined) that is small compared to the center frequency ω_c.
Power spectrum S_η(ω) may, or may not, have ±ω_c as axes of local symmetry. If ω_c is an axis of local symmetry, then

S_η(ω_c + Δω) = S_η(ω_c − Δω)   (9-5)

for 0 < Δω < ω_c, and the process is said to be a symmetrical band-pass process (Fig. 9-1 depicts a symmetrical band-pass process). It must be emphasized that the symmetry stated by the second of (9-4) is always true (i.e., the power spectrum is even); however, the symmetry stated by (9-5)


may, or may not, be true. As will be shown in what follows, the analysis of narrow-band noise is
simplified if (9-5) is true.
To avoid confusion when reviewing the engineering literature on narrow-band noise, the
reader should remember that different authors use slightly different definitions for the cross-
correlation of jointly-stationary, real-valued random processes x(t) and y(t). As used here, the
cross-correlation of x and y is defined as R_xy(τ) ≡ E[x(t+τ)y(t)]. However, when defining R_xy, some authors shift (by τ) the time variable of the function y instead of the function x.
Fortunately, this possible discrepancy is accounted for easily when comparing the work of
different authors.
η(t) has Zero Mean

The mean of η(t) must be zero. This conclusion follows directly from

E[η(t)] = E[η_c(t)] cos(ω_c t) − E[η_s(t)] sin(ω_c t).   (9-6)

The WSS assumption means that E[η(t)] must be time invariant (constant). Inspection of (9-6) leads to the conclusion that E[η_c] = E[η_s] = 0 so that E[η] = 0.
Quadrature Components In Terms of η and η̂

Let the Hilbert transform of WSS noise η(t) be denoted in the usual way by the use of a circumflex; that is, η̂(t) denotes the Hilbert transform of η(t) (see Appendix 9A for a discussion of the Hilbert transform). The Hilbert transform is a linear, time-invariant filtering operation applied to η(t); hence, from the results developed in Chapter 7, η̂(t) is WSS.

In what follows, some simple properties are needed of the cross correlation of η(t) and η̂(t). Recall that η̂(t) is the output of a linear, time-invariant system that is driven by η(t). Also recall that techniques are given in Chapter 7 for expressing the cross correlation between a system input and output. Using this approach, it can be shown easily that


R_η̂η(τ) ≡ E[η̂(t+τ) η(t)] = R̂_η(τ)

R_η̂(τ) ≡ E[η̂(t+τ) η̂(t)] = R_η(τ)
   (9-7)
R̂_η(0) = R_η̂η(0) = 0

R̂_η(−τ) = −R̂_η(τ).

Equation (9-1) can be used to express η̂(t). Writing H{·} for the Hilbert transform, the Hilbert transform of the noise signal can be expressed as

η̂(t) = H{ η_c(t) cos(ω_c t) − η_s(t) sin(ω_c t) } = η_c(t) H{cos(ω_c t)} − η_s(t) H{sin(ω_c t)}
   (9-8)
     = η_c(t) sin(ω_c t) + η_s(t) cos(ω_c t).

This result follows from the fact that ω_c is much higher than any frequency component in η_c or η_s so that the Hilbert transform is only applied to the high-frequency sinusoidal functions (see Appendix 9A).

The quadrature components can be expressed in terms of η and η̂. This can be done by solving (9-1) and (9-8) for

η_c(t) = η(t) cos(ω_c t) + η̂(t) sin(ω_c t)
   (9-9)
η_s(t) = η̂(t) cos(ω_c t) − η(t) sin(ω_c t).

These equations express the quadrature components as linear combinations of the jointly Gaussian processes η and η̂. Hence, the components η_c and η_s are Gaussian. In what follows, Equation (9-9) will be used to calculate the autocorrelation and crosscorrelation functions of the quadrature components. It will be shown that the quadrature components are WSS and that η_c and η_s are jointly WSS. Furthermore, WSS process η(t) is a symmetrical band-pass process if, and only if, η_c and η_s are


uncorrelated for all time shifts.
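The recovery formulas (9-9) can be exercised numerically: build η(t) from known low-pass components, form η̂(t) with an FFT-based Hilbert transform, and recover η_c and η_s. A sketch, assuming scipy is available (scipy.signal.hilbert returns the analytic signal η + jη̂); all numerical values are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

fs, fc = 8000.0, 1000.0
wc = 2 * np.pi * fc
t = np.arange(0, 1.0, 1 / fs)

# Slowly varying test components (single low-frequency tones, chosen so the
# record is exactly periodic and the FFT-based Hilbert transform is exact).
eta_c = np.cos(2 * np.pi * 5 * t)
eta_s = 0.5 * np.sin(2 * np.pi * 7 * t)
eta = eta_c * np.cos(wc * t) - eta_s * np.sin(wc * t)

eta_hat = np.imag(hilbert(eta))   # Hilbert transform of eta, per (9-8)

# Equation (9-9): recover the quadrature components from eta and eta_hat.
rec_c = eta * np.cos(wc * t) + eta_hat * np.sin(wc * t)
rec_s = eta_hat * np.cos(wc * t) - eta * np.sin(wc * t)
```

Because ω_c is far above the component bandwidths, the recovered rec_c and rec_s match η_c and η_s essentially to machine precision.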


Relationships Between Autocorrelation Functions R_η, R_c and R_s

It is easy to compute, in terms of R_η, the autocorrelation of the quadrature components. Use (9-9) and compute the autocorrelation

R_c(τ) ≡ E[η_c(t) η_c(t+τ)]

  = E[η(t)η(t+τ)] cos(ω_c t) cos(ω_c(t+τ)) + E[η̂(t)η(t+τ)] sin(ω_c t) cos(ω_c(t+τ))   (9-10)

  + E[η(t)η̂(t+τ)] cos(ω_c t) sin(ω_c(t+τ)) + E[η̂(t)η̂(t+τ)] sin(ω_c t) sin(ω_c(t+τ)).

This last result can be simplified by using (9-7) to obtain

R_c(τ) = R_η(τ)[cos(ω_c t) cos(ω_c(t+τ)) + sin(ω_c t) sin(ω_c(t+τ))]

       + R̂_η(τ)[cos(ω_c t) sin(ω_c(t+τ)) − sin(ω_c t) cos(ω_c(t+τ))],

a result that can be expressed as

R_c(τ) = R_η(τ) cos(ω_c τ) + R̂_η(τ) sin(ω_c τ).   (9-11)

The same procedure can be used to compute an identical result for R_s; this leads to the conclusion that

R_c(τ) = R_s(τ)   (9-12)

for all τ. Note that η_c and η_s are zero mean, Gaussian, and wide-sense stationary.
Equations (9-11) and (9-12) allow the computation of R_c and R_s given only R_η. However, given only R_c and/or R_s, we do not have enough information to compute R_η

(what is missing?).
A somewhat non-intuitive result can be obtained from (9-11) and (9-12). Set τ = 0 in the last two equations to conclude that

R_η(0) = R_c(0) = R_s(0),   (9-13)

an observation that leads to

E[η²(t)] = E[η_c²(t)] = E[η_s²(t)]
   (9-14)
Avg Pwr in η(t) = Avg Pwr in η_c(t) = Avg Pwr in η_s(t).
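The power equality (9-14) is easy to check by simulation. A minimal sketch, assuming a moving-average scheme (my choice, not from the notes) for generating approximately unit-variance low-pass Gaussian quadrature components:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, fc = 8000.0, 1000.0
wc = 2 * np.pi * fc
t = np.arange(0, 4.0, 1 / fs)

def lowpass_noise(n, width=40):
    """White Gaussian noise smoothed by a moving average; the kernel is
    scaled so the output variance stays approximately 1."""
    x = rng.standard_normal(n + width - 1)
    k = np.ones(width) / np.sqrt(width)
    return np.convolve(x, k, mode="valid")

eta_c = lowpass_noise(t.size)
eta_s = lowpass_noise(t.size)
eta = eta_c * np.cos(wc * t) - eta_s * np.sin(wc * t)

# Time-average power estimates of R_eta(0), R_c(0), R_s(0).
p_eta, p_c, p_s = np.mean(eta**2), np.mean(eta_c**2), np.mean(eta_s**2)
```

All three estimates cluster around the common variance (here approximately 1), illustrating (9-13)-(9-14).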

The frequency domain counterpart of (9-11) relates the spectrums S_η, S_c and S_s. Take the Fourier transform of (9-11) to obtain

S_c(ω) = S_s(ω) = ½[ S_η(ω − ω_c) + S_η(ω + ω_c) ]

                + ½[ sgn(ω + ω_c) S_η(ω + ω_c) − sgn(ω − ω_c) S_η(ω − ω_c) ]   (9-15)

                = ½[1 − sgn(ω − ω_c)] S_η(ω − ω_c) + ½[1 + sgn(ω + ω_c)] S_η(ω + ω_c).

Since η_c and η_s are low-pass processes, Equation (9-15) can be simplified to produce

S_c(ω) = S_s(ω) = S_η(ω − ω_c) + S_η(ω + ω_c),  −ω_c ≤ ω ≤ ω_c
   (9-16)
                = 0,  otherwise,

a relationship that is easier to grasp and remember than is (9-11).


Equation (9-16) provides an easy method for obtaining S_c and/or S_s given only S_η. First, make two copies of S_η. Shift the first copy to the left by ω_c, and shift the second copy to the right by ω_c. Add together both shifted copies, and truncate the sum to the interval −ω_c ≤ ω ≤ ω_c to get S_c. This shift and add procedure for creating S_c is illustrated by Fig. 9-2. Given only S_η, it is always possible to determine S_c (which is equal to S_s) in this manner. The converse is not true; given only S_c, it is not always possible to create S_η (Why? Think

Fig. 9-2: Creation of S_c from shifting and adding copies of S_η.


about the fact that S_c(ω) must be even, but S_η may not satisfy (9-5); what is missing?).
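The shift-and-add recipe of (9-16) is easy to carry out on a frequency grid. The sketch below uses a made-up even spectrum S_η(ω) that is deliberately not locally symmetric about ω_c (all shapes and numbers are illustrative assumptions); the resulting S_c nevertheless comes out even in ω, as it must.

```python
import numpy as np

wc = 100.0                               # center frequency (rad/s), illustrative
w = np.linspace(-300.0, 300.0, 6001)     # symmetric frequency grid
dw = w[1] - w[0]

def S_eta(w):
    # Even in w (per (9-4)) but asymmetric about +/- wc, so (9-5) fails.
    a = np.abs(w)
    return np.exp(-((a - wc) / 10.0) ** 2) * (1.0 + 0.4 * np.tanh((a - wc) / 10.0))

# Equation (9-16): shift left by wc, shift right by wc, add, truncate.
S_c = np.where(np.abs(w) <= wc, S_eta(w - wc) + S_eta(w + wc), 0.0)

total_c = S_c.sum() * dw                 # power in S_c
total_eta = S_eta(w).sum() * dw          # power in S_eta
```

S_c is even even though S_η is not symmetric about ω_c, and the total power is preserved; the fold hides where each piece of S_c came from, which is why S_η cannot, in general, be rebuilt from S_c alone.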
Crosscorrelation R_cs

It is easy to compute the cross-correlation of the quadrature components. From (9-9) it follows that

R_cs(τ) ≡ E[η_c(t+τ) η_s(t)]

  = E[η(t+τ)η̂(t)] cos(ω_c(t+τ)) cos(ω_c t) − E[η(t+τ)η(t)] cos(ω_c(t+τ)) sin(ω_c t)   (9-17)

  + E[η̂(t+τ)η̂(t)] sin(ω_c(t+τ)) cos(ω_c t) − E[η̂(t+τ)η(t)] sin(ω_c(t+τ)) sin(ω_c t).

By using (9-7), Equation (9-17) can be simplified to obtain

R_cs(τ) = R_η(τ)[sin(ω_c(t+τ)) cos(ω_c t) − cos(ω_c(t+τ)) sin(ω_c t)]

        − R̂_η(τ)[cos(ω_c(t+τ)) cos(ω_c t) + sin(ω_c(t+τ)) sin(ω_c t)],

a result that can be written as

R_cs(τ) = R_η(τ) sin(ω_c τ) − R̂_η(τ) cos(ω_c τ).   (9-18)

The cross-correlation of the quadrature components is an odd function of τ. This follows directly from inspection of (9-18) and the fact that an even function has an odd Hilbert transform. Finally, the fact that this cross-correlation is odd implies that R_cs(0) = 0; taken at the same time, the samples of η_c and η_s are uncorrelated and, being jointly Gaussian, independent. However, as discussed below, the quadrature components η_c(t₁) and η_s(t₂) may be correlated for t₁ ≠ t₂.
The autocorrelation R_η of the narrow-band noise can be expressed in terms of the autocorrelation and cross-correlation of the quadrature components η_c and η_s. This important result follows from using (9-11) and (9-18) to write


[ R_cs(τ) ]   [ sin(ω_c τ)   −cos(ω_c τ) ] [ R_η(τ) ]
[ R_c(τ)  ] = [ cos(ω_c τ)    sin(ω_c τ) ] [ R̂_η(τ) ].

Invert this to obtain

[ R_η(τ) ]   [  sin(ω_c τ)   cos(ω_c τ) ] [ R_cs(τ) ]
[ R̂_η(τ) ] = [ −cos(ω_c τ)   sin(ω_c τ) ] [ R_c(τ)  ],   (9-19)

which yields the desired formula

R_η(τ) = R_c(τ) cos(ω_c τ) + R_cs(τ) sin(ω_c τ).   (9-20)

Equation (9-20) shows that both R_c(τ) and R_cs(τ) are necessary to compute R_η. Comparison of (9-16) with the Fourier transform of (9-20) reveals an unsymmetrical aspect in the relationship between S_η, S_c and S_s. In all cases, both S_c and S_s can be obtained by simple translations of S_η, as is shown by (9-16). However, in general, S_η cannot be expressed in terms of a similar, simple translation of S_c (or S_s), a conclusion reached by inspection of the Fourier transform of (9-20). But, as shown next, there is an important special case where R_cs(τ) is identically zero for all τ, and S_η can be expressed as simple translations of S_c.
Symmetrical Bandpass Processes

Narrow-band process η(t) is said to be a symmetrical band-pass process if

S_η(ω_c + Δω) = S_η(ω_c − Δω)   (9-21)

for 0 < Δω < ω_c. Such a bandpass process has its center frequency ω_c as an axis of local


symmetry. In nature, symmetry usually leads to simplifications, and this is true of Gaussian narrow-band noise. In what follows, we show that the local symmetry stated by (9-21) is equivalent to the condition R_cs(τ) = 0 for all τ (not just at τ = 0).
The desired result follows from inspecting the Fourier transform of (9-18); this transform is the cross spectrum of the quadrature components, and it vanishes when the narrow-band process has spectral symmetry as defined by (9-21). To compute this cross spectrum, first note the Fourier transform pairs

R_η(τ) ↔ S_η(ω)
   (9-22)
R̂_η(τ) ↔ −j sgn(ω) S_η(ω),

where

sgn(ω) =  1 for ω > 0
   (9-23)
        = −1 for ω < 0

is the commonly used sign function. Now, use Equation (9-22) and the Frequency Shifting Theorem to obtain the Fourier transform pairs

R_η(τ) sin(ω_c τ) ↔ (1/2j)[ S_η(ω − ω_c) − S_η(ω + ω_c) ]
   (9-24)
R̂_η(τ) cos(ω_c τ) ↔ (1/2j)[ sgn(ω − ω_c) S_η(ω − ω_c) + sgn(ω + ω_c) S_η(ω + ω_c) ].

Finally, use this last equation and (9-18) to compute the cross spectrum


Figure 9-3: Sketches of a) S_η(ω); b) S_η(ω − ω_c) weighted by 1 − sgn(ω − ω_c); and c) S_η(ω + ω_c) weighted by 1 + sgn(ω + ω_c), with lower and upper sidebands labeled L and U. Symmetrical bandpass processes have η_c(t₁) and η_s(t₂) uncorrelated for all t₁ and t₂.

S_cs(ω) = F[ R_cs(τ) ]
   (9-25)
        = (1/2j){ S_η(ω − ω_c)[1 − sgn(ω − ω_c)] − S_η(ω + ω_c)[1 + sgn(ω + ω_c)] }.

Figure 9-3 depicts example plots useful for visualizing important properties of (9-25). From parts b) and c) of this plot, note that the products on the right-hand side of (9-25) are low-pass functions of ω. Then it is easily seen that

S_cs(ω) = 0,  ω < −ω_c

        = j[ S_η(ω + ω_c) − S_η(ω − ω_c) ],  −ω_c ≤ ω ≤ ω_c   (9-26)

        = 0,  ω > ω_c.


Note that S_cs(ω) ≡ 0 is equivalent to the narrow-band process satisfying the symmetry condition (9-21). On Fig. 9-3, the symmetry condition (9-21) implies that the spectral components labeled with U can be obtained from those labeled with L by a simple folding operation. And, when the spectrums are subtracted as required by (9-26), the portion labeled U (alternatively, L) on Fig. 9-3c will cancel the portion labeled L (alternatively, U) on Fig. 9-3b. Finally, since the cross spectrum is the Fourier transform of the cross-correlation, S_cs(ω) ≡ 0 is both necessary and sufficient for η_c(t₁) and η_s(t₂) to be uncorrelated for all t₁ and t₂ (not just t₁ = t₂).
System analysis is simplified greatly if the noise encountered has a symmetrical spectrum. Under these conditions, the quadrature components are uncorrelated, and (9-20) simplifies to

R_η(τ) = R_c(τ) cos(ω_c τ).   (9-27)

Also, the spectrum S_η of the noise is obtained easily by scaling and translating S_c = F[R_c] as shown by

S_η(ω) = ½[ S_c(ω − ω_c) + S_c(ω + ω_c) ].   (9-28)

This result follows directly by taking the Fourier transform of (9-27). Hence, when the process is symmetrical, it is possible to express S_η in terms of simple translations of S_c (see the comment after (9-20)). Finally, for a symmetrical bandpass process, Equation (9-16) simplifies to

S_c(ω) = S_s(ω) = 2 S_η(ω + ω_c),  −ω_c ≤ ω ≤ ω_c
   (9-29)
                = 0,  otherwise.


Figure 9-4: A simple series RLC band-pass filter driven by white Gaussian noise (WGN) with S(ω) = N₀/2 watts/Hz; the output is taken across R.
Example 9-1: Figure 9-4 depicts a simple RLC bandpass filter that is driven by white Gaussian noise with a double sided spectral density of N₀/2 watts/Hz. The spectral density of the output is given by

S_η(ω) = (N₀/2) |H_bp(jω)|² = (N₀/2) | 2α₀(jω) / [ (α₀ + jω)² + ω_c² ] |²,   (9-30)

where α₀ = R/2L, ω_c = (ω_n² − α₀²)^½ and ω_n = 1/(LC)^½. In this result, frequency can be normalized, and (9-30) can be written as

S_η(Ω) = (N₀/2) | 2ᾱ₀(jΩ) / [ (ᾱ₀ + jΩ)² + 1 ] |²,   (9-31)

where ᾱ₀ ≡ α₀/ω_c and Ω ≡ ω/ω_c. Figure 9-5 illustrates a plot of the output spectrum for ᾱ₀ = .5; note that the output process is not symmetrical. Figure 9-6 depicts the spectrum for ᾱ₀ = .1 (a much sharper filter than the ᾱ₀ = .5 case). As the circuit Q becomes large (i.e., ᾱ₀ becomes small), the filter approximates a symmetrical filter, and the output process approximates a symmetrical bandpass process.
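The normalized spectrum (9-31) can be evaluated directly. The sketch below (with N₀ = 2 assumed for convenience, so N₀/2 = 1) confirms the asymmetry about Ω = 1 for ᾱ₀ = .5 and the sharp peak near Ω = 1 for ᾱ₀ = .1:

```python
import numpy as np

def S_out(omega, a0, N0=2.0):
    """Equation (9-31): normalized RLC output spectrum.
    omega is w/wc and a0 is alpha0/wc (both dimensionless)."""
    H = 2.0 * a0 * (1j * omega) / ((a0 + 1j * omega) ** 2 + 1.0)
    return (N0 / 2.0) * np.abs(H) ** 2

omega = np.linspace(0.0, 3.0, 3001)
S_broad = S_out(omega, 0.5)    # low-Q case, visibly asymmetric about omega = 1
S_sharp = S_out(omega, 0.1)    # high-Q case, sharply peaked near omega = 1
peak = omega[np.argmax(S_sharp)]
```

For small ᾱ₀ the peak sits very near Ω = 1 (i.e., ω = ω_c), consistent with the high-Q limit discussed above.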
Envelope and Phase of Narrow-Band Noise

Zero-mean quadrature components η_c(t) and η_s(t) are jointly Gaussian, and they have the same variance σ² = R_η(0) = R_c(0) = R_s(0). Also, taken at the same time t, they are


Figure 9-5: Output spectrum S_η(Ω) for ᾱ₀ = .5. Figure 9-6: Output spectrum S_η(Ω) for ᾱ₀ = .1 (Ω in normalized radians/second).

independent. Hence, taken at the same time, processes η_c(t) and η_s(t) are described by the joint density

f(η_c, η_s) = (1/2πσ²) exp[ −(η_c² + η_s²)/2σ² ].   (9-32)

We are guilty of a common abuse of notation. Here, symbols η_c and η_s are used to denote random processes, and sometimes they are used as algebraic variables, as in (9-32). However, the intended use of η_c and η_s should always be clear from context.

The narrow-band noise signal can be represented as

η(t) = η_c(t) cos(ω_c t) − η_s(t) sin(ω_c t)
   (9-33)
     = ρ₁(t) cos(ω_c t + θ₁(t)),

where


ρ₁(t) = √( η_c²(t) + η_s²(t) )
   (9-34)
θ₁(t) = tan⁻¹( η_s(t) / η_c(t) ),  −π < θ₁ ≤ π,

are the envelope and phase, respectively. Note that (9-34) describes a transformation of η_c(t) and η_s(t). The inverse is given by

η_c = ρ₁ cos(θ₁)
   (9-35)
η_s = ρ₁ sin(θ₁).

The joint density of ρ₁ and θ₁ can be found by using standard techniques. Since (9-35) is the inverse of (9-34), we can write

f(ρ₁, θ₁) = |∂(η_c, η_s)/∂(ρ₁, θ₁)| f(η_c, η_s),  evaluated at η_c = ρ₁ cos θ₁, η_s = ρ₁ sin θ₁,   (9-36)

where the Jacobian is

∂(η_c, η_s)/∂(ρ₁, θ₁) = det [ cos θ₁   −ρ₁ sin θ₁ ; sin θ₁   ρ₁ cos θ₁ ] = ρ₁

(again, the notation is abusive). Finally, substitute (9-32) into (9-36) to obtain

f(ρ₁, θ₁) = (ρ₁/2πσ²) exp[ −(ρ₁²/2σ²)(sin² θ₁ + cos² θ₁) ]
   (9-37)
          = (ρ₁/2πσ²) exp[ −ρ₁²/2σ² ],

for ρ₁ ≥ 0 and −π < θ₁ ≤ π. Finally, note that (9-37) can be represented as


Fig. 9-7: A hypothetical sample function of narrow-band Gaussian noise. The envelope is
Rayleigh and the phase is uniform.

f(ρ₁, θ₁) = f(ρ₁) f(θ₁),   (9-38)

where

f(ρ₁) = (ρ₁/σ²) exp[ −ρ₁²/2σ² ] U(ρ₁)   (9-39)

describes a Rayleigh distributed envelope, and

f(θ₁) = 1/2π,  −π < θ₁ ≤ π,   (9-40)

describes a uniformly distributed phase. Finally, note that the envelope and phase are independent. Figure 9-7 depicts a hypothetical sample function of narrow-band Gaussian noise.
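The Rayleigh/uniform result (9-38)-(9-40) can be spot-checked by drawing independent Gaussian quadrature samples and comparing envelope moments against the Rayleigh values E[ρ₁] = σ√(π/2) and E[ρ₁²] = 2σ². A sketch; the sample size, seed, and σ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, n = 2.0, 200_000

# Independent zero-mean Gaussian quadrature samples with variance sigma^2.
eta_c = sigma * rng.standard_normal(n)
eta_s = sigma * rng.standard_normal(n)

rho = np.hypot(eta_c, eta_s)      # envelope samples: Rayleigh, per (9-39)
theta = np.arctan2(eta_s, eta_c)  # phase samples: uniform on (-pi, pi], per (9-40)

mean_rho = rho.mean()             # should be near sigma*sqrt(pi/2)
mean_sq = np.mean(rho**2)         # should be near 2*sigma^2
```

The phase sample mean should be near zero and the phase variance near π²/3, the moments of a uniform density on (−π, π].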
Envelope and Phase of a Sinusoidal Signal Plus Noise - the Rice Density Function

Many communication problems involve deterministic signals embedded in random noise. The simplest such combination of signal and noise is that of a constant frequency sinusoid added to narrow-band Gaussian noise. In the 1940s, Stephen Rice analyzed this combination and published his results in the paper Statistical Properties of a Sine-Wave Plus Random Noise, Bell System Technical Journal, 27, pp. 109-157, January 1948. His work is outlined in this section.


Consider the sinusoid

s(t) = A₀ cos(ω_c t + θ₀) = A₀ cos θ₀ cos(ω_c t) − A₀ sin θ₀ sin(ω_c t),   (9-41)

where A₀, ω_c, and θ₀ are known constants. To signal s(t) we add noise η(t) given by (9-1), a zero-mean WSS band-pass process with power σ² = E[η²] = E[η_c²] = E[η_s²]. This sum of signal and noise can be written as

s(t) + η(t) = [A₀ cos θ₀ + η_c(t)] cos(ω_c t) − [A₀ sin θ₀ + η_s(t)] sin(ω_c t)
   (9-42)
            = ρ₂(t) cos[ω_c t + θ₂(t)],

where

ρ₂(t) = √( [A₀ cos θ₀ + η_c(t)]² + [A₀ sin θ₀ + η_s(t)]² )
   (9-43)
θ₂(t) = tan⁻¹( [A₀ sin θ₀ + η_s(t)] / [A₀ cos θ₀ + η_c(t)] ),  −π < θ₂ ≤ π,

are the envelope and phase, respectively, of the signal+noise process. Note that the quantity (A₀/√2)²/σ² is the signal-to-noise ratio, a ratio of powers.

Equation (9-43) represents a transformation from the components η_c and η_s into the envelope ρ₂ and phase θ₂. The inverse of this transformation is given by

η_c(t) = ρ₂(t) cos θ₂(t) − A₀ cos θ₀
   (9-44)
η_s(t) = ρ₂(t) sin θ₂(t) − A₀ sin θ₀.


Note that constants A₀ cos θ₀ and A₀ sin θ₀ only influence the means of ρ₂ cos θ₂ and ρ₂ sin θ₂ (remember that both η_c and η_s have zero mean!). In the remainder of this section, we describe the statistical properties of envelope ρ₂ and phase θ₂.

At the same time t, processes η_c(t) and η_s(t) are statistically independent (however, for τ ≠ 0, η_c(t) and η_s(t+τ) may be dependent). Hence, for η_c(t) and η_s(t) we can write the joint density

f(η_c, η_s) = exp[ −(η_c² + η_s²)/2σ² ] / 2πσ²   (9-45)

(we choose to abuse notation for our convenience: η_c and η_s are used to denote both random processes and, as in (9-45), algebraic variables).

The joint density f(ρ₂, θ₂) can be found by transforming (9-45). To accomplish this, the Jacobian

∂(η_c, η_s)/∂(ρ₂, θ₂) = det [ cos θ₂   −ρ₂ sin θ₂ ; sin θ₂   ρ₂ cos θ₂ ] = ρ₂   (9-46)

can be used to write the joint density

f(ρ₂, θ₂) = |∂(η_c, η_s)/∂(ρ₂, θ₂)| f(η_c, η_s),  evaluated at η_c = ρ₂ cos θ₂ − A₀ cos θ₀, η_s = ρ₂ sin θ₂ − A₀ sin θ₀
   (9-47)
f(ρ₂, θ₂) = (ρ₂/2πσ²) exp{ −(1/2σ²)[ ρ₂² − 2A₀ρ₂ cos(θ₂ − θ₀) + A₀² ] } U(ρ₂).

Now, the marginal density f(ρ₂) can be found by integrating out the θ₂ variable to obtain


f(ρ₂) = ∫_{−π}^{π} f(ρ₂, θ₂) dθ₂
   (9-48)
      = (ρ₂/σ²) exp{ −(1/2σ²)[ρ₂² + A₀²] } U(ρ₂) · (1/2π) ∫_{−π}^{π} exp{ (A₀ρ₂/σ²) cos(θ₂ − θ₀) } dθ₂.

This result can be written by using the tabulated function

I₀(β) = (1/2π) ∫₀^{2π} exp{ β cos(θ) } dθ,   (9-49)

the modified Bessel function of order zero. Now, use definition (9-49) in (9-48) to write

f(ρ₂) = (ρ₂/σ²) I₀( A₀ρ₂/σ² ) exp{ −(1/2σ²)[ρ₂² + A₀²] } U(ρ₂),   (9-50)

a result known as the Rice probability density. As expected, θ₀ does not enter into f(ρ₂).
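Equation (9-50) is simple to evaluate with numpy's modified Bessel function np.i0. The sketch below checks that the density integrates to one and that it reduces to the Rayleigh density (9-39) when A₀ = 0; the grid and parameter values are arbitrary:

```python
import numpy as np

def rice_pdf(rho, A0, sigma):
    """Rice density (9-50); valid for rho >= 0."""
    return (rho / sigma**2) * np.i0(A0 * rho / sigma**2) \
        * np.exp(-(rho**2 + A0**2) / (2.0 * sigma**2))

rho = np.linspace(0.0, 20.0, 20001)
drho = rho[1] - rho[0]
sigma = 1.0

# Riemann-sum check of unit area for several signal amplitudes.
areas = [np.sum(rice_pdf(rho, A0, sigma)) * drho for A0 in (0.0, 1.0, 4.0)]

# With A0 = 0, I0(0) = 1 and (9-50) collapses to the Rayleigh density (9-39).
rayleigh = rho * np.exp(-rho**2 / 2.0)
```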
Equation (9-50) is an important result. It is the density function that statistically describes the envelope ρ₂ at time t; for various values of A₀/σ, the function f(ρ₂) is plotted on Figure 9-8 (the quantity (A₀/√2)²/σ² is the signal-to-noise ratio). For A₀/σ = 0, the case of no sinusoid, only noise, the density is Rayleigh. For large A₀/σ the density becomes Gaussian. To observe this asymptotic behavior, note that for large β the approximation

I₀(β) ≈ e^β / √(2πβ),  β >> 1,   (9-51)

becomes valid. Hence, for large ρ₂A₀/σ², Equation (9-50) can be approximated by

f(ρ₂) ≈ √( ρ₂ / 2πA₀σ² ) exp{ −(1/2σ²)[ρ₂ − A₀]² } U(ρ₂).   (9-52)


Figure 9-8: Rice density function f(ρ₂) for sinusoid plus noise, plotted for several values of A₀/σ (A₀/σ = 0, 1, 2, 3, 4). Note that f is approximately Rayleigh for small, positive A₀/σ; density f is approximately Gaussian for large A₀/σ.

For A₀ >> σ, this function has a very sharp peak at ρ₂ = A₀, and it falls off rapidly from its peak value. Under these conditions, the approximation

f(ρ₂) ≈ (1/√(2πσ²)) exp{ −(1/2σ²)[ρ₂ − A₀]² }   (9-53)

holds for values of ρ₂ near A₀ (i.e., ρ₂ ≈ A₀) where f(ρ₂) is significant. Hence, for large A₀/σ, envelope ρ₂ is approximately Gaussian distributed.

The marginal density f(θ₂) can be found by integrating ρ₂ out of (9-47). Before integrating, complete the square in ρ₂, and express (9-47) as

f(ρ₂, θ₂) = (ρ₂/2πσ²) exp{ −(1/2σ²)[ρ₂ − A₀ cos(θ₂ − θ₀)]² } exp{ −(A₀²/2σ²) sin²(θ₂ − θ₀) } U(ρ₂).   (9-54)


Now, integrate ρ₂ out of (9-54) to obtain

f(θ₂) = ∫₀^∞ f(ρ₂, θ₂) dρ₂
   (9-55)
      = exp{ −(A₀²/2σ²) sin²(θ₂ − θ₀) } ∫₀^∞ (ρ₂/2πσ²) exp{ −(1/2σ²)[ρ₂ − A₀ cos(θ₂ − θ₀)]² } dρ₂.

On the right-hand-side of (9-55), the integral can be expressed as the two integrals

∫₀^∞ (ρ₂/2πσ²) exp{ −(1/2σ²)[ρ₂ − A₀ cos(θ₂ − θ₀)]² } dρ₂

  = ∫₀^∞ ( [ρ₂ − A₀ cos(θ₂ − θ₀)]/2πσ² ) exp{ −(1/2σ²)[ρ₂ − A₀ cos(θ₂ − θ₀)]² } dρ₂   (9-56)

  + ( A₀ cos(θ₂ − θ₀)/2πσ² ) ∫₀^∞ exp{ −(1/2σ²)[ρ₂ − A₀ cos(θ₂ − θ₀)]² } dρ₂.

After the change of variable λ = (1/2σ²)[ρ₂ − A₀ cos(θ₂ − θ₀)]², the first integral on the right-hand-side of (9-56) can be expressed as

∫₀^∞ ( [ρ₂ − A₀ cos(θ₂ − θ₀)]/2πσ² ) exp{ −(1/2σ²)[ρ₂ − A₀ cos(θ₂ − θ₀)]² } dρ₂

  = (1/2π) ∫_{A₀² cos²(θ₂ − θ₀)/2σ²}^{∞} exp[−λ] dλ   (9-57)

  = (1/2π) exp{ −A₀² cos²(θ₂ − θ₀)/2σ² }.


After the change of variable γ = [ρ₂ − A₀ cos(θ₂ − θ₀)]/σ, the second integral on the right-hand-side of (9-56) can be expressed as

( A₀ cos(θ₂ − θ₀)/2πσ² ) ∫₀^∞ exp{ −(1/2σ²)[ρ₂ − A₀ cos(θ₂ − θ₀)]² } dρ₂

  = ( A₀ cos(θ₂ − θ₀)/√(2π) σ ) · (1/√(2π)) ∫_{−(A₀/σ) cos(θ₂ − θ₀)}^{∞} exp{ −γ²/2 } dγ   (9-58)

  = ( A₀ cos(θ₂ − θ₀)/√(2π) σ ) F( (A₀/σ) cos[θ₂ − θ₀] ),

where

F(x) = (1/√(2π)) ∫_{−∞}^{x} exp( −λ²/2 ) dλ

is the distribution function for a zero-mean, unit-variance Gaussian random variable (the identity 1 − F(−x) = F(x) was used to obtain (9-58)).
Finally, we are in a position to write f(θ₂), the density function for the instantaneous phase. This density can be written by using (9-57) and (9-58) in (9-55) to write

f(θ₂) = (1/2π) exp{ −A₀²/2σ² }
   (9-59)
      + ( A₀ cos(θ₂ − θ₀)/√(2π) σ ) exp{ −(A₀²/2σ²) sin²(θ₂ − θ₀) } F( (A₀/σ) cos[θ₂ − θ₀] ),

the density function for the phase of a sinusoid embedded in narrow-band noise. For various values of SNR and for θ₀ = 0, density f(θ₂) is plotted on Fig. 9-9. For a SNR of zero (i.e., A₀ = 0), the phase is uniform. As SNR A₀²/2σ² increases, the density becomes more sharply peaked (in general, the density will peak at θ₂ = θ₀, the phase of the sinusoid). As SNR A₀²/2σ² approaches infinity, the density of the phase approaches a delta function at θ₂ = θ₀.

Figure 9-9: Density function f(θ₂) for the phase of signal plus noise A₀cos(ω_c t + θ₀) + {η_c(t)cos(ω_c t) − η_s(t)sin(ω_c t)}, plotted for A₀/σ = 0, 1, 2, and 4, for the case θ₀ = 0.
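The phase density (9-59) can be checked numerically: it should integrate to one over (−π, π], reduce to 1/2π for A₀ = 0, and peak at θ₂ = θ₀. A sketch, assuming scipy is available (the Gaussian distribution function F is built from scipy.special.erf):

```python
import numpy as np
from scipy.special import erf

def F(x):
    # Zero-mean, unit-variance Gaussian distribution function.
    return 0.5 * (1.0 + erf(x / np.sqrt(2.0)))

def phase_pdf(theta, A0, sigma, theta0=0.0):
    """Phase density (9-59) for a sinusoid plus narrow-band noise."""
    d = theta - theta0
    r = A0 / sigma
    return (np.exp(-0.5 * r**2) / (2.0 * np.pi)
            + (r * np.cos(d) / np.sqrt(2.0 * np.pi))
            * np.exp(-0.5 * r**2 * np.sin(d) ** 2) * F(r * np.cos(d)))

theta = np.linspace(-np.pi, np.pi, 20001)
dth = theta[1] - theta[0]
areas = [np.sum(phase_pdf(theta, A0, 1.0)) * dth for A0 in (0.0, 1.0, 4.0)]
p4 = phase_pdf(theta, 4.0, 1.0)   # sharply peaked high-SNR case
```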
Shot Noise

Shot noise results from filtering a large number of independent and randomly-occurring-in-time impulses. For example, in a temperature-limited vacuum diode, independent electrons reach the anode at independent times to produce a shot noise process in the diode output circuit. A similar phenomenon occurs in diffusion-limited pn junctions. To understand shot noise, you must first understand Poisson point processes and Poisson impulses.

Recall the definition and properties of the Poisson point process that was discussed in Chapters 2 and 7 (also, see Appendix 9-B). The Poisson points occur at times t_i with an average density of λ_d points per unit length. In an interval of length τ, the number of points is distributed with a Poisson density with parameter λ_d τ.

Use this Poisson process to form a sequence of Poisson impulses, a sequence of impulses located at the Poisson points and expressed as


Figure 9-10: Differentiate the Poisson process x(t) to get Poisson impulses z(t).

z(t) = Σ_i δ(t − t_i),   (9-60)

where the t_i are the Poisson points. Note that z(t) is a generalized random process; like the delta function, it can only be characterized by its behavior under an integral sign. When z(t) is integrated, the result is the Poisson random process

x(t) = ∫₀^t z(λ) dλ = n(0, t),  t > 0
                    = 0,  t = 0   (9-61)
                    = −n(t, 0),  t < 0,

where n(t₁, t₂) is the number of Poisson points in the interval (t₁, t₂]. Likewise, by passing the Poisson process x(t) through a generalized differentiator (as illustrated by Fig. 9-10), it is possible to obtain z(t).

The mean of z(t) is simply the derivative of the mean value of x(t). Since E[x(t)] = λ_d t, we can write

η_z ≡ E[z(t)] = (d/dt) E[x(t)] = λ_d.   (9-62)

This formal result needs a physical interpretation. One possible interpretation is to view η_z as


η_z = lim_{t→∞} (1/t) ∫_{−t/2}^{t/2} z(λ) dλ = lim_{t→∞} (1/t) [ λ_d t + random fluctuation with increasing t ] = λ_d.   (9-63)

For large t, the integral in (9-63) fluctuates around mean λ_d t with a variance of λ_d t (both the mean and variance of the number of Poisson points in (−t/2, t/2] is λ_d t). But, the integral is multiplied by 1/t; the product has a mean of λ_d and a variance like λ_d/t. Hence, as t becomes large, the random temporal fluctuations become insignificant compared to λ_d, the infinite-time-interval average η_z.
Important correlations involving z(t) can be calculated easily. Because R_x(t₁, t₂) = λ_d² t₁t₂ + λ_d min(t₁, t₂) (see Chapter 7), we obtain

R_xz(t₁, t₂) = (∂/∂t₂) R_x(t₁, t₂) = λ_d² t₁ + λ_d U(t₁ − t₂)
   (9-64)
R_z(t₁, t₂) = (∂/∂t₁) R_xz(t₁, t₂) = λ_d² + λ_d δ(t₁ − t₂).

Note that z(t) is wide-sense stationary but x(t) is not. The Fourier transform of R_z(τ) yields

S_z(ω) = λ_d + 2π λ_d² δ(ω),   (9-65)

the power spectrum of the Poisson impulse process.


Let h(t) be a real-valued function of time and define

s(t) = Σ_i h(t − t_i),   (9-66)

a sum known as shot noise. The basic idea here is illustrated by Fig. 9-11. A sequence of impulses described by (9-60) (i.e., process z(t)) is input to a system with impulse response h(t) to form the output shot noise process s(t). The idea is simple: process s(t) is the output of a system activated by a sequence of


Figure 9-11: Converting Poisson impulses z(t) into shot noise s(t) = h(t)*z(t).

impulses (that model electrons arriving at an anode, for example) that occur at the random Poisson points t_i.
The elementary properties of shot noise s(t) are determined easily. Using the method discussed in Chapter 7, we obtain the mean

η_s ≡ E[s(t)] = E[z(t) * h(t)] = E[z(t)] * h(t) = λ_d ∫₀^∞ h(t) dt = λ_d H(0).   (9-67)

Shot noise s(t) has the power spectrum

S_s(ω) = |H(ω)|² S_z(ω) = 2π λ_d² H²(0) δ(ω) + λ_d |H(ω)|² = 2π η_s² δ(ω) + λ_d |H(ω)|².   (9-68)

Finally, the autocorrelation is

R_s(τ) = F⁻¹[ S_s(ω) ] = λ_d² H²(0) + (λ_d/2π) ∫_{−∞}^{∞} |H(ω)|² e^{jωτ} dω = λ_d² H²(0) + λ_d ρ(τ),   (9-69)

where

ρ(τ) ≡ (1/2π) ∫_{−∞}^{∞} |H(ω)|² e^{jωτ} dω = ∫_{−∞}^{∞} h(t) h(t + τ) dt.   (9-70)

From (9-67) and (9-69), shot noise has a mean and variance of


η_s = λ_d H(0)
   (9-71)
σ_s² = [ λ_d² H²(0) + λ_d ρ(0) ] − [ λ_d H(0) ]² = λ_d ρ(0) = (λ_d/2π) ∫_{−∞}^{∞} |H(ω)|² dω,

respectively (Equation (9-71) is known as Campbell's Theorem).


Example: Let h(t) = e^{−αt} U(t) so that H(ω) = 1/(α + jω), ρ(τ) = e^{−α|τ|}/2α and

η_s = E[s(t)] = λ_d/α

R_s(τ) = λ_d²/α² + (λ_d/2α) e^{−α|τ|}
   (9-72)
σ_s² = λ_d/2α

S_s(ω) = 2π (λ_d/α)² δ(ω) + λ_d/(α² + ω²).

First-Order Density Function for Shot Noise

In general, the first-order density function f_s(x) that describes shot noise s(t) cannot be calculated easily. Before tackling the difficult general case, we first consider a simpler special case where it is assumed that h(t) is of finite duration T. That is, we assume initially that

h(t) = 0 for t < 0 and t > T.   (9-73)

Because of (9-73), shot noise s at time t depends only on the Poisson impulses in the interval (t − T, t]. To see this, note that

s(t) = ∫_{−∞}^{∞} h(t − λ) Σ_i δ(λ − t_i) dλ = Σ_{t−T < t_i ≤ t} h(t − t_i),   (9-74)

so that only the impulses in the moving window (t − T, t] influence the output at time t. Let random variable n_T denote the number of Poisson impulses during (t − T, t]. From Chapter 1, we


know that

P[n_T = k] = e^{−λ_d T} (λ_d T)^k / k!.   (9-75)

Now, the Law of Total Probability (see Ch. 1 and Ch. 2 of these notes) can be applied to write the first-order density function of the shot noise process s(t) as

f_s(x) = Σ_{k=0}^{∞} f_s(x | n_T = k) P[n_T = k] = Σ_{k=0}^{∞} f_s(x | n_T = k) e^{−λ_d T} (λ_d T)^k / k!   (9-76)

(note that f_s(x) is independent of absolute time t). We must find f_s(x | n_T = k), the density of shot noise s(t) conditioned on there being exactly k Poisson impulses in the interval (t − T, t].

When n_T = k = 0, there are no Poisson points in (t − T, t], and we have

g₀(x) ≡ f_s(x | n_T = 0) = δ(x)   (9-77)

since the output is zero.


For each fixed value of k ≥ 1 that is used on the right-hand-side of (9-76), conditional density f_s(x | n_T = k) describes the filter output due to an input of exactly k impulses on (t − T, t]. That is, we have conditioned on there being exactly k impulses in (t − T, t]. As a result of the conditioning, the k impulse locations can be modeled as k independent, identically distributed (iid) random variables (all locations t_i, 1 ≤ i ≤ k, are uniform on the interval).

For the case k = 1, at any fixed time t, f_s(x | n_T = 1) is actually equal to the density g₁(x) of the random variable

x₁(t) ≡ h(t − t₁),   (9-78)


where random variable t_1 is uniformly distributed on (t - T, t] so that t - t_1 is uniform on [0, T).

That is, g_1(x) \equiv f_s(x \mid n_T = 1) describes the result obtained by transforming a uniform
density (used to describe t - t_1) through the transformation h(t - t_1).
Convince yourself that density g_1(x) = f_s(x \mid n_T = 1) does not depend on time. Note that,
for any given time t, random variable t_1 is uniform on (t - T, t], so that t - t_1 is uniform on [0, T).
Hence, the quantity x_1(t) = h(t - t_1) is assigned values in the set \{h(\tau) : 0 \le \tau < T\}, the assignment
not depending on t. Hence, density g_1(x) \equiv f_s(x \mid n_T = 1) does not depend on t.
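The claim that g_1 does not depend on t can be explored numerically. In the sketch below, the pulse h(\tau) = e^{-\tau/\tau_0} on [0, T) and all parameter values are assumptions chosen for illustration (not pulses from these notes); samples of x_1 = h(t - t_1) are generated by transforming a uniform random variable, and the sample mean is compared with the exact mean (\tau_0/T)(1 - e^{-T/\tau_0}).

```python
import numpy as np

# Monte Carlo view of g1(x) = f_s(x | n_T = 1), the density of a single filtered
# impulse h(t - t1) with t1 uniform on (t - T, t].  The exponential pulse shape
# h(tau) = exp(-tau/tau0) on [0, T) is an assumed example, not from the notes.
rng = np.random.default_rng(0)
T, tau0 = 1.0, 0.25
u = rng.uniform(0.0, T, size=200_000)   # u plays the role of t - t1, uniform on [0, T)
y = np.exp(-u / tau0)                   # samples of x1 = h(t - t1)

# Exact mean of h(U) for this pulse: (tau0/T)(1 - exp(-T/tau0))
mean_exact = (tau0 / T) * (1.0 - np.exp(-T / tau0))
print(y.mean(), mean_exact)             # the two should agree closely
```

The construction never references the absolute time t, which is the numerical face of the time-independence argument above.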
The density f_s(x \mid n_T = 2) can be found in a similar manner. Let t_1 and t_2 denote
independent random variables, each of which is uniformly distributed on (t - T, t], and define

x_2(t) = h(t - t_1) + h(t - t_2) .    (9-79)

At fixed time t, the random variable x_2(t) is described by the density g_2(x) \equiv f_s(x \mid n_T = 2) = g_1 * g_1
(i.e., the convolution of g_1 with itself), since h(t - t_1) and h(t - t_2) are independent and identically
distributed with density g_1.
The general case f_s(x \mid n_T = k) is similar. Let t_i, 1 \le i \le k, denote independent random
variables, each uniform on (t - T, t]. At fixed time t, the density that describes

x_k(t) = h(t - t_1) + h(t - t_2) + \cdots + h(t - t_k)    (9-80)

is

g_k(x) \equiv f_s(x \mid n_T = k) = \underbrace{g_1(x) * g_1(x) * \cdots * g_1(x)}_{k-1 \text{ convolutions}} ,    (9-81)

the density g_1 convolved with itself k - 1 times.


The desired density can be expressed in terms of results given above. Simply substitute


(9-81) into (9-76) and obtain


f_s(x) = e^{-\lambda_d T} \sum_{k=0}^{\infty} \frac{(\lambda_d T)^k}{k!} \, g_k(x) .    (9-82)

Convergence is fast, and (9-82) is useful for computing the density f_s when \lambda_d T is small (the case
for low-density shot noise), say on the order of 1, so that, on the average, there are only a few
Poisson impulses in the interval (t - T, t]. For the case of low-density shot noise, (9-82) cannot
be approximated by a Gaussian density.
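Series (9-82) lends itself to direct numerical evaluation. The sketch below is an assumption-laden illustration: it uses an exponential pulse on [0, T) with \lambda_d T = 2, for which g_1 is known analytically, and accumulates the k \ge 1 terms by repeated discrete convolution. The k = 0 term is the point mass e^{-\lambda_d T}\delta(x), so the continuous part should integrate to 1 - e^{-\lambda_d T}.

```python
import numpy as np
from math import exp, factorial

# Numerical evaluation of (9-82) for low-density shot noise.  The pulse
# h(tau) = exp(-tau/tau0) on [0, T) and all parameter values are assumptions;
# for this pulse, g1(y) = (tau0/T)/y on (exp(-T/tau0), 1].
lam_d, T, tau0 = 2.0, 1.0, 0.25         # lam_d * T = 2 impulses on average
dx = 1e-3
x = np.arange(dx / 2, 6.0, dx)          # amplitude grid (continuous part only)

g1 = np.where((x > exp(-T / tau0)) & (x <= 1.0), (tau0 / T) / x, 0.0)
g1 /= g1.sum() * dx                     # guard against discretization error at the edges

# Accumulate the k >= 1 terms of (9-82); k = 0 contributes e^{-lam_d T} delta(x).
fs = np.zeros_like(x)
gk = g1.copy()
for k in range(1, 12):
    fs += exp(-lam_d * T) * (lam_d * T) ** k / factorial(k) * gk
    gk = np.convolve(gk, g1)[: len(x)] * dx   # g_{k+1} = g_k * g1

mass = fs.sum() * dx                    # should equal P[n_T >= 1] = 1 - e^{-lam_d T}
print(mass, 1 - exp(-lam_d * T))
```

Truncating the series at k = 11 is harmless here since P[n_T \ge 12] is negligible for \lambda_d T = 2.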
f_s(x) For An Infinite-Duration h(t)
The first-order density function f_s(x) is much more difficult to calculate for the general
case where h(t) is of infinite duration (but with finite time constants). In what follows, assume
that infinite-duration h(t) is significant only over 0 \le t < 5\tau_{max}, where \tau_{max} is the filter's dominant
time constant. We show that shot noise is approximately Gaussian distributed when \lambda_d is large
compared to the reciprocal of the time interval over which h(t) is significant (so that, on the average, many
Poisson impulses are filtered to form s(t)).
To establish this fact, consider first a finite-duration interval (-T/2, T/2), and let random
variable n_T, described by (9-75), denote the number of Poisson impulses that are contained in the
interval. Denote the n_T Poisson impulse locations in (-T/2, T/2) as t_k, 1 \le k \le n_T. Use
these n_T impulse locations to define the time-limited shot noise

s_T(t) = \sum_{k=1}^{n_T} h(t - t_k), \quad -\infty < t < \infty .    (9-83)

Shot noise s(t) is the limit of sT(t) as T approaches infinity.
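Equation (9-83) is easy to realize numerically. In the sketch below, the exponential pulse h(\tau) = e^{-\tau/\tau_0}U(\tau) and all parameter values are assumptions for illustration: draw n_T, scatter the impulse locations uniformly on the interval, and superpose the filter responses to produce one sample path of s_T(t).

```python
import numpy as np

# One sample path of time-limited shot noise s_T(t) per (9-83).  The exponential
# pulse h and the parameter values are assumptions chosen for illustration.
rng = np.random.default_rng(3)
lam_d, T, tau0 = 4.0, 10.0, 0.5

nT = rng.poisson(lam_d * T)                 # number of impulses in (-T/2, T/2]
tk = rng.uniform(-T / 2, T / 2, size=nT)    # impulse locations, iid uniform

t = np.linspace(-T / 2, T / 2, 2001)
sT = np.zeros_like(t)
for ti in tk:
    sT += np.where(t >= ti, np.exp(-(t - ti) / tau0), 0.0)   # add h(t - ti)

print(nT, sT.max())
```

With \lambda_d \tau_0 = 2, only a few pulses overlap at any instant, so the path is visibly spiky rather than Gaussian-looking; increasing \lambda_d smooths it out, foreshadowing the limit derived below.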


In our analysis of shot noise s(t), we first consider the characteristic function

\Phi_s(\omega) = E\!\left[ e^{j\omega s} \right] = \lim_{T \to \infty} E\!\left[ e^{j\omega s_T} \right] .    (9-84)

For finite T, the characteristic function E\!\left[ e^{j\omega s_T} \right] depends on t (which is suppressed in the notation).


However, shot noise is wide-sense stationary, so \Phi_s(\omega) should be free of t. Now, write the
characteristic function of s_T as

E\!\left[ e^{j\omega s_T} \right] = \sum_{k=0}^{\infty} E\!\left[ e^{j\omega s_T} \mid n_T = k \right] P[n_T = k] ,    (9-85)

where P[n_T = k] is given by (9-75). In the conditional expectation used on the right-hand-side
of (9-85), output s_T results from filtering exactly k impulses (this is different from the not-
conditioned-on-anything s_T that appears on the left-hand-side of (9-85)). We model the impulse
locations as k independent, identically distributed (iid; they are uniform on (-T/2, T/2)) random
variables. As a result, the terms h(t - t_i) in s_T(t) are independent, so that

E\!\left[ e^{j\omega s_T} \mid n_T = k \right] = \left( E\!\left[ e^{j\omega s_T} \mid n_T = 1 \right] \right)^{k} ,    (9-86)

where

E\!\left[ e^{j\omega s_T} \mid n_T = 1 \right] = \frac{1}{T} \int_{-T/2}^{T/2} e^{j\omega h(t - x)} \, dx ,    (9-87)

since each ti is uniformly distributed on (-T/2, T/2). Finally, by using (9-84) through (9-87), we
can write



\Phi_s(\omega) = \lim_{T \to \infty} E\!\left[ e^{j\omega s_T} \right] = \lim_{T \to \infty} \sum_{k=0}^{\infty} E\!\left[ e^{j\omega s_T} \mid n_T = k \right] P[n_T = k]

= \lim_{T \to \infty} \sum_{k=0}^{\infty} \left[ \frac{1}{T} \int_{-T/2}^{T/2} e^{j\omega h(t-x)} dx \right]^{k} e^{-\lambda_d T} \frac{(\lambda_d T)^k}{k!}    (9-88)

= \lim_{T \to \infty} e^{-\lambda_d T} \sum_{k=0}^{\infty} \frac{1}{k!} \left[ \lambda_d \int_{-T/2}^{T/2} e^{j\omega h(t-x)} dx \right]^{k}

For finite T, the quantity \int_{-T/2}^{T/2} e^{j\omega h(t-x)} dx depends on t. However, the quantity
\int_{-\infty}^{\infty} \left[ e^{j\omega h(t-x)} - 1 \right] dx does not depend on t. Recalling the Taylor series of the exponential function, we can write
(9-88) as

\Phi_s(\omega) = \lim_{T \to \infty} \exp\{-\lambda_d T\} \exp\left\{ \lambda_d \int_{-T/2}^{T/2} e^{j\omega h(t-x)} dx \right\} = \exp\left\{ \lambda_d \int_{-\infty}^{\infty} \left[ e^{j\omega h(t-x)} - 1 \right] dx \right\} ,    (9-89)

a general formula for the characteristic function of the shot noise process.
In general, Equation (9-89) is impossible to evaluate in closed form. However, this
formula can be used to show that shot noise is approximately Gaussian distributed when \lambda_d is
large compared to the reciprocal of the time constants in h(t) (i.e., when many pulses overlap within the
time duration where h(t) is significant).
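Although (9-89) rarely yields a closed form, it can be spot-checked numerically. In the sketch below, the one-sided exponential pulse h(x) = e^{-x/\tau_0}U(x) and all parameter values are assumptions; the empirical characteristic function of simulated shot noise is compared with a direct numerical evaluation of (9-89) at one frequency.

```python
import numpy as np

# Numerical check of (9-89): Phi_s(w) = exp( lam_d * Int[exp(j w h(x)) - 1] dx ).
# The one-sided exponential pulse h(x) = exp(-x/tau0) U(x) and the parameter
# values below are assumptions chosen for illustration.
rng = np.random.default_rng(1)
lam_d, tau0 = 5.0, 0.5
L = 12.0 * tau0                    # window long enough that h is negligible beyond it

# Monte Carlo: realizations of s(0) = sum_k h(-t_k), impulses Poisson on (-L, 0]
n_runs = 50_000
s = np.empty(n_runs)
for i in range(n_runs):
    n = rng.poisson(lam_d * L)
    ages = rng.uniform(0.0, L, size=n)      # time since each impulse arrived
    s[i] = np.exp(-ages / tau0).sum()

w = 1.5
phi_mc = np.mean(np.exp(1j * w * s))        # empirical characteristic function

# Direct evaluation of (9-89) by a Riemann sum
x = np.linspace(0.0, L, 20_001)
dx = x[1] - x[0]
integrand = np.exp(1j * w * np.exp(-x / tau0)) - 1.0
phi_formula = np.exp(lam_d * integrand.sum() * dx)
print(abs(phi_mc - phi_formula))            # small: the two estimates agree
```

Both the real and imaginary parts agree to within Monte Carlo error, which supports the formula without requiring a closed-form integral.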
This task will be made simpler if we center and normalize s(t). Define

\hat{s}(t) = \frac{s(t) - \lambda_d H(0)}{\sqrt{\lambda_d}} ,    (9-90)

so that


E[\hat{s}] = 0 ,
R_{\hat{s}}(\tau) = \int_{-\infty}^{\infty} h(t) \, h(t + \tau) \, dt    (9-91)

(see (9-67) and (9-69)). The characteristic functions of s and \hat{s} are related by

\Phi_{\hat{s}}(\omega) = E\!\left[ e^{j\omega \hat{s}} \right] = E\!\left[ \exp\left\{ j\omega \frac{s - \lambda_d H(0)}{\sqrt{\lambda_d}} \right\} \right] = \exp\left\{ -j\omega \sqrt{\lambda_d}\, H(0) \right\} \Phi_s\!\left( \omega / \sqrt{\lambda_d} \right) .    (9-92)


Use (9-89) and H(0) = \int_{-\infty}^{\infty} h(t - x) \, dx on the right-hand-side of (9-92) to write

\Phi_{\hat{s}}(\omega) = \exp\left\{ \lambda_d \int_{-\infty}^{\infty} \left[ \exp\left\{ \frac{j\omega}{\sqrt{\lambda_d}} h(t-x) \right\} - 1 - \frac{j\omega}{\sqrt{\lambda_d}} h(t-x) \right] dx \right\} .    (9-93)

Now, in the integrand of (9-93), expand the exponential in a power series, and cancel out the
zero and first-order terms to obtain

\Phi_{\hat{s}}(\omega) = \exp\left\{ \lambda_d \sum_{k=2}^{\infty} \frac{(j\omega)^k}{k!} \frac{\int h^k(t-x)\, dx}{\lambda_d^{k/2}} \right\} = \exp\left\{ \lambda_d \sum_{k=2}^{\infty} \frac{(j\omega)^k}{k!} \frac{\int h^k(x)\, dx}{\lambda_d^{k/2}} \right\}

(9-94)

= \exp\left\{ -\tfrac{1}{2} \omega^2 \int h^2(x)\, dx \right\} \exp\left\{ \lambda_d \sum_{k=3}^{\infty} \frac{(j\omega)^k}{k!} \frac{\int h^k(x)\, dx}{\lambda_d^{k/2}} \right\} .

As \lambda_d approaches infinity (i.e., becomes large compared to the reciprocal of the time constants in low-pass filter
h(t)), Equation (9-94) approaches the characteristic function of a Gaussian process.
To see this, let \tau_{max} be the dominant (i.e., largest) time constant in low-pass filter h(t).
There exists a constant M such that (note that h is causal)


\left| h(x) \right| \le M \, e^{-x/\tau_{max}} \, U(x)    (9-95)

(\tau_{max} serves as the filter's effective memory duration). Consequently, we can bound


\left| \int h^k(x)\, dx \right| \le \int \left| h(x) \right|^k dx \le M^k \int_0^{\infty} e^{-kx/\tau_{max}} dx = M^k \, \frac{\tau_{max}}{k} .    (9-96)

Consider the second exponent on the right-hand-side of (9-94). Using (9-96), it is easily
seen that

\left| \lambda_d \sum_{k=3}^{\infty} \frac{(j\omega)^k}{k!} \frac{\int h^k(x)\, dx}{\lambda_d^{k/2}} \right| \le \sum_{k=3}^{\infty} \frac{\left( |\omega| M \right)^k}{k \, k!} \, \frac{\tau_{max}}{\lambda_d^{(k-2)/2}} .    (9-97)

Now, as \lambda_d \to \infty (so that \tau_{max}/\sqrt{\lambda_d} \to 0) in (9-97), each right-hand-side term approaches zero,
so that

\lambda_d \sum_{k=3}^{\infty} \frac{(j\omega)^k}{k!} \frac{\int h^k(x)\, dx}{\lambda_d^{k/2}} \to 0 .    (9-98)

Use this last result with Equation (9-94). As \tau_{max}/\sqrt{\lambda_d} \to 0 with increasing \lambda_d, we have

\Phi_{\hat{s}}(\omega) \to \exp\left\{ \frac{(j\omega)^2}{2} \int h^2(x)\, dx \right\} = \exp\left\{ -\tfrac{1}{2} \sigma_{\hat{s}}^2 \omega^2 \right\} ,    (9-99)

where

\sigma_{\hat{s}}^2 = R_{\hat{s}}(0)    (9-100)


is the variance of standardized shot noise \hat{s}(t) (see (9-91)). Note that Equation (9-99) is the
characteristic function of a zero-mean Gaussian random variable with variance (9-100). Hence,
shot noise is approximately Gaussian distributed when \lambda_d is large compared to the reciprocal of the dominant
time constant in low-pass filter h(t) (so that, on the average, a large number of Poisson impulses
are filtered to form s(t)).
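The Gaussian limit can be seen directly in simulation. In the sketch below (again with an assumed exponential pulse h(t) = e^{-t/\tau_0}U(t) and illustrative parameters), samples of the standardized process (9-90) are generated for large \lambda_d; the sample mean, variance, and skewness are then compared with the Gaussian predictions E[\hat{s}] = 0, \sigma_{\hat{s}}^2 = \int h^2 = \tau_0/2, and zero skew.

```python
import numpy as np

# Monte Carlo check of the Gaussian limit: for the assumed pulse
# h(t) = exp(-t/tau0) U(t), standardized shot noise (9-90) should look Gaussian
# when lam_d * tau0 >> 1 (many overlapping pulses).
rng = np.random.default_rng(2)
tau0 = 1.0
L = 15.0 * tau0                     # window long enough that h is negligible beyond it
lam_d = 200.0                       # high pulse density: lam_d * tau0 = 200
H0 = tau0                           # H(0) = integral of h = tau0

n_runs = 10_000
z = np.empty(n_runs)
for i in range(n_runs):
    n = rng.poisson(lam_d * L)
    ages = rng.uniform(0.0, L, size=n)
    s = np.exp(-ages / tau0).sum()          # one sample of shot noise s(t)
    z[i] = (s - lam_d * H0) / np.sqrt(lam_d)   # standardized per (9-90)

var_exact = tau0 / 2.0              # R_shat(0) = integral of h^2 = tau0/2
skew = np.mean(z**3) / np.std(z)**3 # near zero for a Gaussian
print(z.mean(), z.var(), skew)
```

For this pulse the exact skewness of \hat{s} is \tau_0/(3\sqrt{\lambda_d} \, \sigma_{\hat{s}}^3), which shrinks as \lambda_d grows, consistent with the limit (9-99).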


Example: Temperature-Limited Vacuum Diode
In classical communications system theory, a temperature-limited vacuum diode is the
quintessential example of a shot noise generator. The phenomenon was first predicted and
analyzed theoretically by Schottky in his 1918 paper: Theory of Shot Effect, Ann. Phys., Vol 57,
Dec. 1918, pp. 541-568. In fact, over the years, noise generators (used for testing/aligning
communication receivers, low-noise preamplifiers, etc.) based on vacuum diodes (e.g., the Sylvania
5722 special-purpose noise-generator diode) have been offered on a commercial basis.
Vacuum-tube noise-generating diodes are operated in a temperature-limited, or saturated,
mode. Essentially, all of the available electrons are collected by the plate (few return to the
cathode), so increasing plate voltage does not increase plate current (i.e., the tube is
saturated). The only way to increase plate current significantly is to increase filament/cathode
temperature. Under this condition, space-charge effects between electrons are minimal, and
individual electrons are, more or less, independent of each other.
The basic circuit is illustrated by Figure 9-12. In a random manner, electrons are emitted
by the cathode, and they flow a distance d to the plate to form current i(t). If emitted at t = 0, an
independent electron contributes a current h(t), and the aggregate plate current is given by

i(t) = \sum_k h(t - t_k) ,    (9-101)

where tk are the Poisson-distributed independent times at which electrons are emitted by the
cathode (see Equation (9-66)). In what follows, we approximate h(t).


Figure 9-12: Temperature-limited vacuum diode used as a shot noise generator (filament supply V_f, plate supply V_p, load R_L, cathode-to-plate spacing d, plate current i(t)).
As discussed above, space-charge effects are negligible and the electrons are
independent. Since there is no space charge between the cathode and plate, the potential
distribution V in this region satisfies Laplace's equation

\frac{\partial^2 V}{\partial x^2} = 0 .    (9-102)

The potential must satisfy the boundary conditions V(0) = 0 and V(d) = Vp. Hence, simple
integration yields

V = \frac{V_p}{d} \, x , \quad 0 \le x \le d .    (9-103)

As an electron flows from the cathode to the plate, its velocity and energy increase. At
point x between the cathode and plate, the energy increase is given by

E_n(x) = e V(x) = e \, \frac{V_p}{d} \, x ,    (9-104)

where e is the basic electronic charge.


Power is the rate at which energy changes. Hence, the instantaneous power flowing from
the battery into the tube is


\frac{dE_n}{dt} = \frac{dE_n}{dx} \frac{dx}{dt} = e \, \frac{V_p}{d} \frac{dx}{dt} = V_p \, h ,    (9-105)

where h(t) is the current due to the flow of a single electron (note that d^{-1} dx/dt has units of sec^{-1}, so
that (e/d) \, dx/dt has units of charge/sec, or current). Equation (9-105) can be solved for current to
obtain
obtain

h = \frac{e}{d} \frac{dx}{dt} = \frac{e}{d} \, v_x ,    (9-106)

where vx is the instantaneous velocity of the electron.


Electron velocity can be found by applying Newton's laws. The force on an electron is
just e(V_p/d), the product of electronic charge and electric field strength. Since force is equal to
the product of electron mass m and acceleration a_x, we have

a_x = \frac{e}{m} \frac{V_p}{d} .    (9-107)

As it is emitted by the cathode, an electron has an initial velocity that is Maxwellian distributed.
However, to simplify this example, we will assume that the initial velocity is zero. With this
assumption, electron velocity follows by integrating (9-107) to obtain

v_x = \frac{e}{m} \frac{V_p}{d} \, t .    (9-108)

Over the transit time t_T, the average velocity is

\bar{v}_x = \frac{1}{t_T} \int_0^{t_T} v_x \, dt = \frac{e}{2m} \frac{V_p}{d} \, t_T = \frac{d}{t_T} .    (9-109)


Figure 9-13: Current due to a single electron emitted by the cathode at t = 0 (a ramp peaking at 2e/t_T at t = t_T).

Finally, combine these last two equations to obtain

v_x = \frac{2d}{t_T^2} \, t , \quad 0 \le t \le t_T .    (9-110)

With the aid of this last relationship, we can determine current as a function of time.
Simply combine (9-106) and (9-110) to obtain

h(t) = \frac{2e}{t_T^2} \, t , \quad 0 \le t \le t_T ,    (9-111)

the current pulse generated by a single electron as it travels from the cathode to the plate. This
current pulse is depicted by Figure 9-13.
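As a quick sanity check on (9-111), the ramp pulse must carry total charge \int h \, dt = e, since exactly one electron crosses the tube per pulse. The sketch below verifies this numerically; the transit time t_T = 3 \times 10^{-10} s matches the Sylvania 5722 figure quoted later in the text.

```python
import numpy as np

# Check on (9-111): the ramp pulse h(t) = (2e/tT^2) t, 0 <= t <= tT, must carry
# total charge integral{h dt} = e, since one electron crosses the tube.
e = 1.602e-19                       # electronic charge, coulombs
tT = 3e-10                          # transit time, seconds (Sylvania 5722 value)
t = np.linspace(0.0, tT, 100_001)
h = (2.0 * e / tT**2) * t
charge = ((h[:-1] + h[1:]) / 2.0 * np.diff(t)).sum()   # trapezoidal integral
peak = h[-1]                        # peak current 2e/tT, reached at t = tT
print(charge, peak)
```

The peak single-electron current 2e/t_T is on the order of a nanoampere, which is why an enormous number of overlapping pulses is needed to produce a measurable plate current.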
The bandwidth of shot noise s(t) is of interest. For example, we may use the noise
generator to make relative measurements on a communication receiver, and we may require the
noise spectrum to be flat (or "white") over the receiver bandwidth (the noise spectrum
amplitude is not important since we are making relative measurements). To assess flatness,
we can compute and examine the power spectrum of standardized \hat{s}(t) described by (9-90). As
given by (9-91), the autocorrelation of \hat{s}(t) is



R_{\hat{s}}(\tau) = \left( \frac{2e}{t_T^2} \right)^2 \int_0^{t_T - \tau} t (t + \tau) \, dt = \frac{4 e^2}{3 t_T} \left( 1 - \frac{\tau}{t_T} \right)^2 \left( 1 + \frac{\tau}{2 t_T} \right) , \quad 0 \le \tau \le t_T

R_{\hat{s}}(-\tau) = R_{\hat{s}}(\tau) , \quad -t_T \le \tau \le 0    (9-112)

R_{\hat{s}}(\tau) = 0 , \quad \text{otherwise}

The power spectrum of \hat{s}(t) is the Fourier transform of (9-112), a result given by

S_{\hat{s}}(\omega) = 2 \int_0^{\infty} R_{\hat{s}}(\tau) \cos(\omega \tau) \, d\tau = \frac{4 e^2}{(\omega t_T)^4} \left[ (\omega t_T)^2 + 2 \left( 1 - \cos \omega t_T - \omega t_T \sin \omega t_T \right) \right] .    (9-113)

Plots of the autocorrelation and relative power spectrum (plotted in dB relative to peak power at
\omega = 0) are given by Figures 9-14 and 9-15, respectively.
To within 3 dB, the power spectrum is flat from DC to a little over \omega = \pi/t_T. For the
Sylvania 5722 noise-generator diode, the cathode-to-plate spacing is 0.0375 inches and the transit
time is about 3 \times 10^{-10} seconds. For this diode, the 3 dB cutoff would be about 1/(2t_T) \approx 1600 MHz.
In practical application, where electrode/circuit stray capacitance/inductance limits frequency
range, the Sylvania 5722 has been used in commercial noise generators operating at over
400 MHz.

Figure 9-14: Autocorrelation function of normalized shot noise process (peak 4e^2/(3t_T) at \tau = 0, support [-t_T, t_T]). Figure 9-15: Relative power spectrum 10\log\{S(\omega)/S(0)\} of normalized shot noise process, in dB versus \omega (rad/sec).
